Chronicling Germany: An Annotated Historical Newspaper Dataset
Christian Schultze, Niklas Kerkfeld, Kara Kuebart, Princilia Weber, Moritz Wolter, and Felix Selgert, “Chronicling Germany: An Annotated Historical Newspaper Dataset,” arXiv, Jan. 2024. https://arxiv.org/abs/2401.16845
How Much Temporal Long-Term Context is Needed for Action Segmentation?
Emad Bahrami, Gianpiero Francesca, and Juergen Gall, “How Much Temporal Long-Term Context is Needed for Action Segmentation?,” arXiv, Aug. 2023. https://arxiv.org/abs/2308.11358
Gated Temporal Diffusion for Stochastic Long-Term Dense Anticipation
Olga Zatsarynna, Emad Bahrami, Yazan A. Farha, Gianpiero Francesca, and Juergen Gall, “Gated Temporal Diffusion for Stochastic Long-Term Dense Anticipation,” arXiv, Jul. 2024, doi: 10.48550/arxiv.2407.11954.
HyenaPixel: Global Image Context with Convolutions
Julian Spravil, Sebastian Houben, and Sven Behnke, “HyenaPixel: Global Image Context with Convolutions,” arXiv, Feb. 2024, doi: 10.48550/arxiv.2402.19305.
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
Jenia Jitsev et al., “OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models,” arXiv, Aug. 2023, doi: 10.48550/arxiv.2308.01390.
Improving Zero-Shot Text Matching for Financial Auditing with Large Language Models
Lars Hillebrand, Armin Berger, Tobias Deußer, Tim Dilmaghani, Mohamed Khaled, Bernd Kliem, Rüdiger Loitz, Maren Pielka, David Leonhard, Christian Bauckhage, and Rafet Sifa, “Improving Zero-Shot Text Matching for Financial Auditing with Large Language Models,” ACM Digital Library, Aug. 2023, doi: 10.1145/3573128.3609344.
Language models scale reliably with over-training and on downstream tasks
Marianna Nezhurina and Jenia Jitsev, “Language models scale reliably with over-training and on downstream tasks,” arXiv, Mar. 2024, doi: 10.48550/arxiv.2403.08540.
Measurability of quality characteristics identified in latent spaces of Generative AI Models
Robert H. Schmitt, Dominik Wolfschläger, Jan-Henrik Woltersmann, and Lennart Stohrer, “Measurability of quality characteristics identified in latent spaces of Generative AI Models,” CIRP Annals, Jan. 2024, doi: 10.1016/j.cirp.2024.04.073.
Generating Prototypes for Contradiction Detection Using Large Language Models and Linguistic Rules
Maren Pielka, Svetlana Schmidt, and Rafet Sifa, “Generating Prototypes for Contradiction Detection Using Large Language Models and Linguistic Rules,” IEEE Xplore, Dec. 2023, doi: 10.1109/bigdata59044.2023.10386499.
Improving Natural Language Inference in Arabic Using Transformer Models and Linguistically Informed Pre-Training
Maren Pielka, Jörn Hees, Bouthaina Soulef Abdou, Rafet Sifa, and Mohammad Majd Saad Al Deen, “Improving Natural Language Inference in Arabic Using Transformer Models and Linguistically Informed Pre-Training,” IEEE Xplore, Dec. 2023. https://ieeexplore.ieee.org/document/10371891
Get in Touch with Us
Do you have questions or are you interested in a collaboration?
Send us an email, and we will be happy to advise you.
Follow Us on LinkedIn
Don't want to miss any WestAI updates, news, and events, or want to share them with your network more quickly?
Follow us on LinkedIn!