
LogicAD: Explainable Anomaly Detection via VLM-based Text Feature Extraction
Er Jin, Qihui Feng, Yongli Mou, Gerhard Lakemeyer, Stefan Decker, Oliver Simons and Johannes Stegmaier, "LogicAD: Explainable Anomaly Detection via VLM-based Text Feature Extraction," Proceedings of the AAAI Conference on Artificial Intelligence, 39, 4 (Apr. 2025), pp. 4129-4137, doi: 10.1609/aaai.v39i4.32433.
RadLink: Linking Clinical Entities from Radiology Reports
Yongli Mou, Hanbin Chen, Gwendolyn Isabella Lode, Daniel Truhn, Sulayman Sowe and Stefan Decker, "RadLink: Linking Clinical Entities from Radiology Reports," 2024 2nd International Conference on Foundation and Large Language Models (FLLM), Dubai, United Arab Emirates, 2024, pp. 443-449, doi: 10.1109/FLLM63129.2024.10852450.
The Design and Implementation of APLOS: An Automated PoLicy DOcument Summarisation System
Sulayman Sowe, Tobias Kiel, Alexander Neumann, Yongli Mou, Vassilios Peristeras and Stefan Decker, "The Design and Implementation of APLOS: An Automated PoLicy DOcument Summarisation System," 2024 2nd International Conference on Foundation and Large Language Models (FLLM), Dubai, United Arab Emirates, 2024, pp. 345-356, doi: 10.1109/FLLM63129.2024.10852442.
Understanding Open Source Large Language Models: An Exploratory Study
Sulayman Sowe, Yongli Mou, Du Cheng, Lingxiao Kong, Alexander Tobias Neumann and Stefan Decker, "Understanding Open Source Large Language Models: An Exploratory Study," 2024 2nd International Conference on Foundation and Large Language Models (FLLM), Dubai, United Arab Emirates, 2024, pp. 132-140, doi: 10.1109/FLLM63129.2024.10852438.
Knowledge-Enhanced Domain-Specific Continual Learning and Prompt-Tuning of Large Language Models for Ontology Learning
Yixin Peng, Yongli Mou, Bozhen Zhu, Sulayman Sowe and Stefan Decker, "RWTH-DBIS at LLMs4OL 2024 Tasks A and B: Knowledge-Enhanced Domain-Specific Continual Learning and Prompt-Tuning of Large Language Models for Ontology Learning," Open Conf Proc, vol. 4, pp. 49-63, Oct. 2024.
Leveraging LLMs Few-shot Learning to Improve Instruction-driven Knowledge Graph Construction
Yongli Mou, "Leveraging LLMs Few-shot Learning to Improve Instruction-driven Knowledge Graph Construction," VLDB Workshop, Jan. 2025. [Online]. Available: https://vldb.org/workshops/2024/proceedings/LLM+KG/LLM+KG-8.pdf
Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models
Marianna Nezhurina, Lucia Cipolina-Kun, Mehdi Cherti and Jenia Jitsev, "Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models," arXiv (Cornell University), Jun. 2024, doi: 10.48550/arXiv.2406.02061.
Resolving Discrepancies in Compute-Optimal Scaling of Language Models
Tomer Porian, Mitchell Wortsman, Jenia Jitsev, Ludwig Schmidt and Yair Carmon, "Resolving Discrepancies in Compute-Optimal Scaling of Language Models," arXiv (Cornell University), Jun. 2024, doi: 10.48550/arXiv.2406.19146.
Florenz: Scaling Laws for Systematic Generalization in Vision-Language Models
Julian Spravil, Sebastian Houben and Sven Behnke, "Florenz: Scaling Laws for Systematic Generalization in Vision-Language Models," arXiv.org, Mar. 12, 2025. https://arxiv.org/abs/2503.09443
Project Alexandria: Towards Freeing Scientific Knowledge from Copyright Burdens via LLMs
Christoph Schuhmann and Jenia Jitsev, "Project Alexandria: Towards Freeing Scientific Knowledge from Copyright Burdens via LLMs," arXiv.org, Feb. 26, 2025. https://arxiv.org/abs/2502.19413
Get in Touch with Us
Do you have questions, or are you interested in a collaboration?
Send us an email and we will be happy to advise you.
Follow Us on LinkedIn
Don't want to miss any WestAI updates, news, and events, or want to share them with your network faster?
Follow us on LinkedIn!