Prompted by the increasing dominance of proprietary Large Language Models (LLMs), such as OpenAI's GPT-4 and Google's Gemini, concerns about data privacy, accessibility, and bias have led to growing advocacy for Open Source Large Language Models (OSLLMs). This study investigates Open Source Large...
The increasing capabilities of Large Language Models (LLMs) have opened new opportunities for enhancing Ontology Learning (OL), a process crucial for structuring domain knowledge in a machine-readable format. This paper reports on the participation of the RWTH-DBIS...
Large Language Models (LLMs) continue to demonstrate impressive capabilities in many tasks related to natural language understanding and generation. These capabilities are, to a large extent, made possible by the effective data management strategies that go into...
Large Language Models (LLMs) are often described as instances of foundation models that exhibit strong generalization, obey scaling laws, and therefore transfer robustly across varied conditions in a few- or zero-shot manner. Such claims rely on standardized...
Kaplan et al. and Hoffmann et al. developed influential scaling laws for the optimal model size as a function of the compute budget, but these laws yield substantially different predictions. We explain the discrepancy by reproducing the Kaplan scaling law on two...
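The discrepancy mentioned above can be made concrete. Both papers fit a power law of the same form relating compute budget to compute-optimal model size, but with markedly different exponents; the values below are the commonly cited fitted figures from the two papers, not numbers taken from this abstract:

```latex
% Both laws posit a power-law relation between the compute budget C
% and the compute-optimal model size N_opt:
%
%   N_opt(C) \propto C^{a}
%
% Kaplan et al. (2020) fit a ~ 0.73, implying model size should grow
% much faster than training data as compute increases.
% Hoffmann et al. (2022, "Chinchilla") fit a ~ 0.50, implying model
% size and training tokens should scale in roughly equal proportion.
\[
  N_{\mathrm{opt}}^{\text{Kaplan}}(C) \propto C^{0.73},
  \qquad
  N_{\mathrm{opt}}^{\text{Hoffmann}}(C) \propto C^{0.50}
\]
```

Because the exponents differ so much, the two laws prescribe very different model sizes at large compute budgets, which is the gap the cited work sets out to explain.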
Cross-lingual transfer enables vision-language models (VLMs) to perform vision tasks in various languages with training data only in one language. Current approaches rely on large pre-trained multilingual language models. However, they face the curse of...