Large language models have taken the artificial intelligence community by storm. Their recent impact has contributed to advances across a wide range of industries, including healthcare, finance, education, and entertainment. Well-known models such as GPT, DALL-E, and BERT perform extraordinary tasks and make life easier. While DALL-E 2 can create images from a simple textual description, GPT-3 can write an excellent essay, complete code, summarize long passages of text, answer questions like a human, and generate content from just a brief natural-language prompt. These models are driving a paradigm shift that is rapidly advancing AI and machine learning.
Recently, a team of researchers introduced LMQL, an open-source programming language and platform for language model interaction. LMQL, which stands for Language Model Query Language, enhances the capabilities of large language models (LLMs) by combining prompts, constraints, and scripting. A Python-based, SQL-like declarative language, LMQL extends static text prompting with control flow, constraint-guided decoding, and tool augmentation. With this kind of scripting, LMQL condenses multi-part prompting flows into a very small piece of code, as sketched below.
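As a rough illustration, a minimal LMQL query in the declarative style used by the project combines a decoder clause, a prompt with a hole variable, a model, and constraints. The model identifier and the exact constraint functions below are illustrative and may differ across LMQL versions:

```
argmax
    "Q: What is the capital of France?\n"
    "A:[ANSWER]"
from
    "openai/text-davinci-003"
where
    STOPS_AT(ANSWER, ".") and len(TOKENS(ANSWER)) < 25
```

Here the `where` clause bounds the length of the generated answer and tells the decoder to stop at the first period, so the model fills in only the `[ANSWER]` hole under those conditions.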
The researchers use LMQL to enable Language Model Programming (LMP), which generalizes language model prompting from pure text prompts to a combination of text prompts and scripting. LMQL leverages the constraints and control flow of an LMP prompt to generate an efficient inference procedure. These high-level, declarative constraints are translated into token-level masks by means of evaluation semantics that are enforced strictly at generation time.
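A hedged sketch of how scripting and constraints combine in a single LMQL program (the loop-over-holes pattern follows the list-generation examples in the LMQL paper; the model name is illustrative): ordinary Python control flow interleaves with the prompt, and the stopping and length constraints are what the runtime compiles into token masks during decoding.

```
argmax
    "A list of things not to forget when packing for a trip:\n"
    for i in range(3):
        "-[ITEM]"
from
    "openai/text-ada-001"
where
    STOPS_AT(ITEM, "\n") and len(TOKENS(ITEM)) < 10
```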
The team introduced LMQL to avoid the high cost of re-querying and validating generated text. This helps LMQL produce text that is as close as possible to the desired output on the first attempt, without the need for subsequent iterations. In addition, LMQL constraints allow users to guide the text generation process according to their specifications, for example by ensuring that the generated text follows certain grammatical or syntactic rules or that certain words or phrases are avoided.
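For instance, a constraint can restrict a hole variable to a fixed set of allowed values, so the model can only ever produce one of them. This is a minimal sketch; the variable names and model identifier are illustrative:

```
argmax
    "Sentiment of the review 'The movie was fantastic':[SENTIMENT]"
from
    "openai/text-davinci-003"
where
    SENTIMENT in ["positive", "neutral", "negative"]
```

Because the constraint is enforced during decoding rather than checked afterwards, an invalid completion never has to be generated, validated, and retried.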
The researchers note that LMQL can capture a wide range of state-of-the-art prompting methods, such as interactive flows, that are difficult to implement with existing APIs. Their evaluation shows that LMQL retains or improves accuracy on many downstream tasks while significantly reducing computation or, for pay-as-you-go APIs, cost, with savings of 13-85%.
LMQL allows users to express a wide range of common and advanced prompting techniques in a simple, concise way. It integrates with Hugging Face's Transformers, the OpenAI API, and LangChain. Developer resources are available at lmql.ai, and a browser-based Playground IDE is available for experimentation.
To summarize, LMQL looks like a promising development: the evaluation demonstrates that it is a powerful tool for improving the efficiency and accuracy of language model programming, and it can make it easier for users to achieve the desired results with fewer resources.
Tanya Malhotra is a final-year student at the University of Petroleum and Energy Studies, Dehradun, pursuing a BTech in Computer Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a data science enthusiast with strong analytical and critical thinking skills, along with a keen interest in acquiring new skills, leading teams, and managing work in an organized manner.