WestAI

Understanding Open Source Large Language Models: An Exploratory Study

Prompted by the increasing dominance of proprietary Large Language Models (LLMs), such as OpenAI’s GPT-4 and Google’s Gemini, concerns about data privacy, accessibility, and bias have led to growing advocacy for open-source LLMs (OSLLMs). This study investigates Open Source Large...

Knowledge-Enhanced Domain-Specific Continual Learning and Prompt-Tuning of Large Language Models for Ontology Learning

The increasing capabilities of Large Language Models (LLMs) have opened new opportunities for enhancing Ontology Learning (OL), a process crucial for structuring domain knowledge in a machine-readable format. This paper reports on the participation of the RWTH-DBIS...

Leveraging LLMs Few-shot Learning to Improve Instruction-driven Knowledge Graph Construction

Large Language Models (LLMs) continue to demonstrate immense capabilities in many tasks relating to natural language understanding and generation. These capabilities, to a large extent, are possible because of the effective data management strategies that go into...

Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models

Large Language Models (LLMs) are often described as instances of foundation models that possess strong generalization obeying scaling laws, and therefore transfer robustly across various conditions in a few- or zero-shot manner. Such claims rely on standardized...

Resolving Discrepancies in Compute-Optimal Scaling of Language Models

Kaplan et al. and Hoffmann et al. developed influential scaling laws for the optimal model size as a function of the compute budget, but these laws yield substantially different predictions. We explain the discrepancy by reproducing the Kaplan scaling law on two...

Florenz: Scaling Laws for Systematic Generalization in Vision-Language Models

Cross-lingual transfer enables vision-language models (VLMs) to perform vision tasks in various languages with training data only in one language. Current approaches rely on large pre-trained multilingual language models. However, they face the curse of...
© WestAI 2025