Fr. 132.00

Natural Language Processing and Chinese Computing - 14th National CCF Conference, NLPCC 2025, Urumqi, China, August 7-9, 2025, Proceedings, Part I

English · Paperback / Softback

Will be released on 15.11.2025

Description


The four-volume set LNAI 16102–16105 constitutes the refereed proceedings of the 14th CCF National Conference on Natural Language Processing and Chinese Computing, NLPCC 2025, held in Urumqi, China, during August 7–9, 2025.
The 152 full papers and 26 evaluation workshop papers included in these proceedings were carefully reviewed and selected from 505 submissions. They are organized in the following topical sections:
Part I: Information Extraction and Knowledge Graph & Large Language Models and Agents.
Part II: Multimodality and Explainability & NLP Applications / Text Mining.
Part III: IR / Dialogue Systems / Question Answering; Machine Translation and Multilinguality & Sentiment Analysis / Argumentation Mining / Social Media.
Part IV: Machine Learning for NLP; Fundamentals of NLP; Summarization and Generation; Others & Evaluation Workshop.

List of contents

.- Information Extraction and Knowledge Graph.
.- Progressive Training of Transformer for Knowledge Graph Completion Tasks.
.- Document-level Event Coreference Resolution on Trigger Augmentation and Contrastive Learning.
.- Dynamic Chain-of-thought for Low-Resource Event Extraction.
.- On Sentence-level Non-adversarial Robustness of Chinese Named Entity Recognition with Large Language Models.
.- Spatial Relation Classification on Supervised In-Context Learning.
.- HGNN2KAN: Distilling hypergraph neural networks into KAN for efficient inference.
.- Adapting Task-General ORE Systems for Extracting Open Relations between Fictional Characters in Chinese Novels.
.- DRLF: Denoiser-Reinforcement Learning Framework for Entity Completion.
.- Fashion-related Attribute Value Extraction with Visual Prompting.
.- Discovering Latent Relationship for Temporal Knowledge Graph Reasoning.
.- Logical Rule-Constrained Large Language Models for Document-Level Relation Extraction.
.- An Adaptive Semantic-Aware Fusion Method for Multimodal Entity Linking.
.- Retrieve, Interaction, Fusion: a Simple Approach in Ancient Chinese Named Entity Recognition.
.- Reasoning-Guided Prompt Learning with Historical Knowledge Injection for Ancient Chinese Relation Extraction.
.- MMD-TKGR: Multi-Agent Multi-Round Debate for Temporal Knowledge Graph Reasoning.
.- AutoPRE: Discovering Concept Prerequisites with LLM Agents.
.- Weakly-Supervised Generative Framework for Product Attribute Identification in Live-Streaming E-Commerce.
.- Exploring Representation-Efficient Transfer Learning Approaches for Speech Recognition and Translation Using Pre-trained Speech Models.
.- A Neighborhood Aggregation-based Knowledge Graph Reasoning Approach in Operations and Maintenance.
.- CARE: Contextual Augmentation with Retrieval Enhancement for Relation Extraction in Large Language Models.
.- RHDG: Retrieval-augmented Heuristics-driven Demonstration Generation for Document-Level Event Argument Extraction.
.- Large Language Models and Agents.
.- Beyond One-Size-Fits-All: Adaptive Fine-Tuning for LLMs Based on Data Inherent Heterogeneity.
.- From Chain to Loop: Improving Reasoning Capability in Small Language Models via Loop-of-Thought.
.- TaxBen: Benchmarking the Chinese Tax Knowledge of Large Language Models.
.- Propagandistic Meme Detection via Large Language Model Distillation.
.- Multi-Candidate Speculative Decoding.
.- Debate-Driven Legal Reasoning: Disambiguating Confusing Charges through Multi-Agent Debate.
.- A Human-Centered AI Agent Framework with Large Language Models for Academic Research Tasks.
.- ReGA: Reasoning and Grounding Decoupled GUI Navigation Agents.
.- PSYCHE: Practical Synthetic Math Data Evolution.
.- MultiJustice: A Chinese Dataset for Multi-Party, Multi-Charge Legal Prediction.
.- Reward-Guided Many-Shot Jailbreaking.
.- Self-Prompt Tuning: Enable Autonomous Role-Playing in LLMs.
.- RASR: A Multi-Perspective RAG-based Strategy for Semantic Textual Similarity.
.- H2HTALK: Evaluating Large Language Models as Emotional Companion.
.- EvoP: Robust LLM Inference via Evolutionary Pruning.
.- Large Language Model based Multi-Agent Learning for Mixed Cooperative-Competitive Environments.
.- EduMate: LLM-Powered Detection of Student Learning Emotions and Efficacy in Semi-Structured Counseling.
.- MAD-HD: Multi-Agent Debate-Driven Ungrounded Hallucination Detection.
.- TIANWEN: A Comprehensive Benchmark for Evaluating LLMs in Chinese Classical Poetry Understanding and Reasoning.
.- RKE-Coder: A LLMs-based Code Generation Framework with Algorithmic and Code Knowledge Integration.
.- See Better, Say Better: Vision-Augmented Decoding for Mitigating Hallucinations in Large Vision-Language Models.
.- Exploring Large Language Models for Grammar Error Explanation and Correction in Indonesian as a Low-Reso

