
Language models and search

Modified: Aug. 29, 2023 · Streaming

Large Language Models (LLMs) are revolutionizing how machines understand and generate human-like text. They can be used for text generation, translation, classification, summarization, and question answering, among other tasks. Search is one of the fields most strongly impacted by this new technology.

This event gives you an overview of how search typically works, and which innovative approaches arise from the use of large language models in search.

Tekna invited two great speakers, James Briggs (Pinecone) and Jo Kristian Bergum (Vespa.ai), who helped us understand how information retrieval can be implemented with LLMs, in which scenarios LLMs can help you, and when they fail miserably.

You will gain a better understanding of the latest developments within LLMs and search, and of how they can benefit your particular use case.

The program:

  • Welcome and introduction,
    Marco Bertani-Økland, chair of Tekna Big Data

  • Making Retrieval Work for LLMs,
    James Briggs
    An exploration of the different ways in which we can implement information retrieval for use with Large Language Models.

  • Neural Search Using Language Models,
    Jo Kristian Bergum
    We have witnessed a surge of interest in retrieval methods and systems that enhance the output of generative large language models (LLMs) such as ChatGPT or LLaMA 2. This retrieval augmentation aims to bridge knowledge gaps and establish connections between pre-trained generative language models and private data beyond their training corpus. In such settings, retrieval quality sets the upper bound on the quality of the generated answer. So, how do you optimize a system for retrieval quality?

    A different class of neural language models (non-generative), trained with different objectives, has proven effective for retrieval. These neural retrieval methods, known as semantic search, have been around longer than high-quality generative LLMs, so we have a better understanding of how to evaluate them and of what works and what doesn't.

    This presentation explores two model architectures for implementing semantic search. It highlights the benefits of neural search but also presents scenarios where neural models fail miserably compared to more straightforward methods, particularly in new domains or languages.
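The retrieval-augmentation setup described in the second talk can be sketched in a few lines. This is a minimal illustration, not code from the talk: a real system would replace the toy keyword scorer with a vector database (such as Pinecone or Vespa) and send the assembled prompt to an LLM, and all function names here are invented for the example.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Retrieval is toy word-overlap scoring so the example runs
# self-contained; in practice this step is a vector-database query.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (toy scoring)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Splice the retrieved passages into the prompt sent to the LLM."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

corpus = [
    "Vespa is a platform for serving large-scale search applications.",
    "Pinecone is a managed vector database for similarity search.",
    "Bread is baked from flour, water, and yeast.",
]
query = "What is a vector database?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

Note how the retrieval step caps answer quality: if the relevant passage is not retrieved, no prompt engineering can recover it, which is the point made in the abstract above.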
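Semantic search, as discussed above, typically embeds queries and documents into a shared vector space and ranks by similarity. The sketch below uses a term-frequency vector as a stand-in for a neural embedding (an assumption made to keep the example self-contained and runnable); the structure shown corresponds to the bi-encoder style, where documents are embedded independently of the query, though whether the talk contrasts exactly bi-encoders and cross-encoders is an assumption.

```python
# Toy bi-encoder-style semantic search: queries and documents are
# embedded independently, then compared with cosine similarity.
# A real system would call a trained neural encoder in embed().
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: a term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "neural search with language models",
    "classic keyword search with bm25",
    "cooking pasta at home",
]
doc_vecs = [embed(d) for d in docs]  # computed offline in practice

query_vec = embed("language model search")
ranked = sorted(
    zip(docs, doc_vecs),
    key=lambda dv: cosine(query_vec, dv[1]),
    reverse=True,
)
print(ranked[0][0])
```

Because document vectors are precomputed, query time reduces to a nearest-neighbor lookup; this lexical stand-in also hints at the failure mode mentioned in the abstract, since text with no vocabulary overlap scores zero where a well-trained neural encoder might still find it relevant.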

Tekna Big Data

Tekna Big Data is the network for Tekna members interested in data science, artificial intelligence, IoT, and related areas. Join us
