diff --git a/404.html b/404.html index ec3057fb..e7b5b363 100644 --- a/404.html +++ b/404.html @@ -5,7 +5,7 @@ 404 | LMQL - + @@ -17,7 +17,7 @@
Skip to content

404

PAGE NOT FOUND

But if you don't change your direction, and if you keep looking, you may end up where you are heading.
- + \ No newline at end of file diff --git a/README.html b/README.html index 4e6d14b6..fb2f9e29 100644 --- a/README.html +++ b/README.html @@ -5,7 +5,7 @@ LMQL Documentation and Web Resources | LMQL - + @@ -21,7 +21,7 @@
Skip to content

LMQL Documentation and Web Resources

This directory contains the LMQL documentation and web resources:

  • blog/: LMQL's blog posts
  • docs/: LMQL's documentation
  • features/: LMQL feature highlights as shown on the landing page
  • public/: Public web resources like logos and custom JavaScript

The main documentation can be found in the docs/ directory and is written in Markdown.

For details on building the documentation (the web distribution), see docs/development/documentation.md.

- + \ No newline at end of file diff --git a/assets/blog_index.md.ff9d28cf.js b/assets/blog_index.md.b2796768.js similarity index 98% rename from assets/blog_index.md.ff9d28cf.js rename to assets/blog_index.md.b2796768.js index 1bfeabc0..5239c420 100644 --- a/assets/blog_index.md.ff9d28cf.js +++ b/assets/blog_index.md.b2796768.js @@ -1 +1 @@ -import{_ as o,o as a,c as t,F as r,D as i,l,k as e,t as c}from"./chunks/framework.980cae92.js";const p=JSON.parse('[{"src":"---\\ndate: 2024-02-14 10:10:00\\ntitle: LMQL Developer Survey\\n---\\n\\n# LMQL Developer Survey\\n\\n\\nFebruary 14, 2024\\n\\n\\"image\\"\\n\\nWe have started a new initiative called the **LMQL developer survey**. With this short survey we have the goal of learning more from everyone around the LMQL and the bigger LLM community. We are looking for some broader feedback signals of how and what people are using LMQL for or would like to use it for. \\n\\nThe outcome of this survey will help shape our work around the next major version of LMQL.\\n\\nYou can find the survey here: https://forms.gle/pGvAicNpUhS1rAkK9.\\n","html":"

LMQL Developer Survey

\\n

February 14, 2024

\\n\\"image\\"\\n

We have started a new initiative: the LMQL developer survey. With this short survey, we want to learn more from everyone in the LMQL community and the broader LLM community. We are looking for feedback on how people are using LMQL, what they are using it for, and what they would like to use it for.

\\n

The outcome of this survey will help shape our work around the next major version of LMQL.

\\n

You can find the survey here: https://forms.gle/pGvAicNpUhS1rAkK9.

\\n","frontmatter":{"date":"2024-02-14T10:10:00.000Z","title":"LMQL Developer Survey"},"excerpt":"","url":"/blog/posts/developer-survey.html"},{"src":"---\\ndate: 2023-10-10 10:10:00\\ntitle: LMQL 0.7 brings Procedural Prompt Programming\\n---\\n\\n# LMQL 0.7 brings Procedural Prompt Programming\\n\\nOctober 10, 2023\\n\\nToday, we are releasing LMQL 0.7. This series is the biggest update since the original release, including many community contributions. Next to several new main-line features like nested queries, the Generations API and the Chat API, it also includes several *experimental preview features*, allowing you to experiment with new incoming functionality before it is fully released.\\n\\nLMQL 0.7 has also moved to [semantic versioning](https://semver.org) with the direct predecessor being 0.0.6.6. This means that the next feature release will be 0.8, and the next bugfix release will be 0.7.1.\\n\\n## Nested Queries for Procedural Prompt Programming\\n\\nIn 0.7, you can now use [Nested Queries](../../docs/language/nestedqueries.md) to call an LMQL query as a nested function in the context of another query. For this, LMQL implements procedural programming for prompting. To illustrate, consider the following example:\\n\\n```lmql\\n# chain of thought prompting strategy\\n@lmql.query\\ndef chain_of_thought():\\n \'\'\'lmql\\n \\"A: Let\'s think step by step.\\\\n [REASONING]\\"\\n \\"Therefore the answer is[ANSWER]\\" where STOPS_AT(ANSWER, \\".\\")\\n return ANSWER.strip()\\n \'\'\'\\n\\n# top-level query\\n\\"Q: It is August 12th, 2020. What date was it \\\\\\n 100 days ago? [ANSWER: chain_of_thought]\\"\\n\\nANSWER # May 4th, 2020\\n```\\n\\nWe first define a simple LMQL function `chain_of_thought` to do *chain-of-thought prompting*. In our top-level query, we can then call this function to decode an answer using the `[ANSWER: chain_of_thought]` syntax. During execution, LMQL then inserts the instructions and constraints from `chain_of_thought` into the top-level query, generates a value for `ANSWER`, and then removes the instructions and constraints again, only returning the final result.\\n\\n**Nested queries are Prompt Function Calls.** This design of nested queries is inspired by the idea of *function or procedure calls* in traditional programming. Removing intermediate instructions and constraints also has parallels to the idea of *stack unwinding*, a technique to implement function calls in low-level languages. \\n\\nLMQL transfers these ideas to prompting, inheriting the general benefits of procedural programming:\\n\\n- **Encapsulation and Model Focus** Nested Queries encapsulate and hide the prompting logic used to generate `ANSWER`, which means our top-level query is much cleaner and more concise. Further, by hiding intermediate instructions from the model in the context of the top-level query, we can reduce noise in the overall prompt, allowing the model to focus on the currently relevant information only, and not get distracted by previous intermediate steps.\\n\\n- **Nesting and Reuse** LMQL queries can be nested arbitrarily deep, allowing you to reuse and combine queries modularly. For example, you could define a query `get_year` to extract a year from the response text, and then use this query in `chain_of_thought` to extract the date from the question. 
By achieving modularity for sub-prompts, nested queries also allow you to reuse prompts across different query programs.\\n\\nTo learn more about nested queries, please refer to the [relevant chapter in the documentation](../../docs/language/nestedqueries.md).\\n\\n## Generations API\\n\\nLMQL 0.7 adds the *Generations API*, a lightweight high-level library for LMQL-based text generation and scoring. The API was designed to be easy to use and does not require users to write any LMQL themselves:\\n\\n```python\\n# obtain a model instance\\nm: lmql.LLM = lmql.model(\\"openai/gpt-3.5-turbo-instruct\\")\\n# simple generation\\nm.generate_sync(\\"Hello\\", max_tokens=10)\\n# -> Hello, I am a 23 year old female.\\n```\\n
\\n\\nFunctions such as [`LLM.generate`](../../docs/lib/generations.html#llm-generate) and [`LLM.score`](../../docs/lib/generations.html#llm-score) allow you to generate and score text using any LMQL-support inference backend. The Generations API is also seamlessly compatible with standard LMQL, allowing you to switch and combine the two as needed. \\n\\nFor more information, please refer to the [documentation](../../docs/lib/generations.html).\\n\\n## Chat \\n\\nLMQL 0.7 adds a new [Chat API](../../docs/lib/chat.md), allowing you to easily deploy chatbots with just a couple lines of LMQL.\\n\\n\\n\\nLMQL Chat comes with custom output writers, that allow you to easily stream chatbot input and output over a variety of channels, including WebSockets, HTTP, and SSE. A simple `lmql chat` CLI tool was also added, that allows you to instantly launch your LMQL queries as fully interactive chatbots. \\n\\nWe also provide documentation resources on how to get started with chatbot development with LMQL, including chapters on Chatbot Serving, Internal Reasoning and Defending against Prompt Injection. For more information, please refer to the [documentation](../../docs/lib/chat.md).\\n\\n## Backends\\n\\nLMQL 0.7 ships with three new backends for inference and tokenization:\\n\\n* LMQL 0.7 adds support for OpenAI\'s newly released `gpt-3.5-turbo-instruct` model. In contrast to other 3.5 series models, this variant supports the *Completions API*, which means that LMQL constraints are compatible with it.\\n\\n* LMQL now supports hosting models on [replicate.com](https://replicate.com) infrastructure, allowing you to run LMQL models in the cloud. To learn more, please refer to the [documentation](../../docs/models/replicate.md). Thanks a lot to community member [@charles-dyfis-net](https://github.com/charles-dyfis-net) for contributing this!\\n\\n* LMQL added `sentencepiece` as an additional tokenization backend, specifically for `llama.cpp` models. This means, `llama.cpp` models can now be used without requiring `transformers` for tokenization. Thanks a lot to community member [@khushChopra](https://github.com/khushChopra) for contributing this.\\n\\n\\n## Inference Certificates\\n\\nTo make LLM inference more transparent and re-producible, LMQL 0.7 also adds [*inference certificates*](../../docs/lib/inference-certificates.md). An inference certificate is a simple data structure that records essential information needed to reproduce an inference result. Certificates can be generated for any LLM call that happens in an LMQL context.\\n\\nTo produce an inference certificate, pass `certificate=True` or `certificate=` to your query or generate call:\\n\\n```truncated\\n# call and save certificate\\nsay_hello(certificate=\\"my-certificate.json\\")\\n```\\n\\nThe resulting certificate file provides a way to document, trace and reproduce LLM inference results by recording the *exact (tokenized) prompts* and information on the *environment and generation parameters*.\\n\\nThis can be helpful to better understand what is happening during inference, to debug issues, and to reproduce results. 
It also offers a way to document LLM failures, to better guide the discussion around the concrete capabilities and limitations of LLMs.\\n\\n## Decorators\\n\\n[Variable Decorators](../../docs/language/decorators.md) offer a new and simple way to call custom Python functions as part of the core generation loop in LMQL:\\n\\n```lmql\\ndef screaming(value):\\n \\"\\"\\"Decorator to convert a string to uppercase\\"\\"\\"\\n return value.upper()\\n\\n\\"Say \'this is a test\':[@screaming TEST]\\"\\n```\\n```promptdown\\nSay \'this is a test\': [TEST| THIS IS A TEST]\\n```\\n\\nSimilar to Python decorators, LMQL decorators are functions that take a variable as input and can wrap and modify its value. \\n\\nIn the example above, we use the `@screaming` decorator to convert the value of `TEST` to uppercase. Decorators can be used to implement a wide range of custom functionality, including string normalization, datatype conversion, and more. LMQL also provides decorators that allow to stream or pre-process data during generation. For more information, please refer to the [documentation](../../docs/language/decorators.md).\\n\\n\\n## Documentation Update\\n\\nThe website and many chapters of the LMQL documentation have also been updated and extended and now include more examples and explanations. We have updated the visual design to make it easier to read and navigate. \\n\\nThe documentation now also includes a *work-in-progress* [Language Reference](/docs/language/reference.md), which aims to provide a more comprehensive and formal description of LMQL\'s syntax and semantics, all in one place.\\n\\n## Preview Features\\n\\nApart from many new core features, LMQL 0.7 also ships with several *experimental preview features*, allowing you to test drive new functionality before it has fully stabilized and is released as main-line functionality.\\n\\nThese features are marked as *experimental* and are not yet fully supported. We are releasing them to gather feedback and to allow users to test them out early on. Note that these features are subject to change and may be removed/modified in future releases.\\n\\n### LMQL Actions Preview\\n\\n*LMQL Actions* is the first version of LMQL\'s function calling layer. It allows you to expose arbitrary Python functions to the LLM reasoning loop and lets the model call them during generation. Function demonstration and the calling protocol can be both handled automatically by the LMQL runtime, allowing for simple use like this:\\n\\n```{lmql}\\ndef wiki(q): ...\\ndef calc(expr): ...\\n\\n\\"Q: What is the population of the US and Germany combined?\\"\\n\\"A: [REASONING]\\" where inline_use(REASONING, [wiki, calc])\\n```\\n\\nA future release will bring more documentation and details on Actions, including how to use and customize it for your use cases. Until then we invite everyone to try and hack with the current implementation, fully contained in [`actions.py`](https://github.com/eth-sri/lmql/blob/main/src/lmql/lib/actions.py).\\n\\n### Regex Constraints Preview\\n\\nLMQL now has support for regex constraints, allowing you to use regular expressions to constrain the output of a variable. For example, the following query will always generate a valid date of the form `DD/MM`:\\n\\n```{lmql}\\n\\"It\'s the last day of June so today is [RESPONSE]\\" where REGEX(RESPONSE, r\\"[0-9]{2}/[0-9]{2}\\")\\n```\\n\\n### Types / Datatype Constraints Preview\\n\\nLMQL is moving towards fully typed LLM generation. 
On the way there, we have started to add support for *dataclass constraints*, allowing you to constrain the output of a variable to a specific structured output schema:\\n\\n```lmql\\nimport lmql\\nfrom dataclasses import dataclass\\n\\n@dataclass\\nclass Person:\\n name: str\\n age: int\\n job: str\\n\\n\\"Alice is a 21 years old and works as an engineer at LMQL Inc in Zurich, Switzerland.\\\\n\\"\\n\\"Structured: [PERSON_DATA]\\\\n\\" where type(PERSON_DATA) is Person\\n\\nPERSON_DATA\\n# Person(name=\'Alice\', age=21, job=\'engineer\')\\n```\\n\\nTo achieve this, LMQL leverages constrained generation to make sure the LLM always produces all information required to populate a valid `Person` object. The resulting `PERSON_DATA` object can then be directly used like a regular Python object. Types are still in an early stage and we are working on adding more features and functionality. \\n\\n\\n## Other Changes\\n\\n* The LMQL playground can now be used from the Windows `cmd.exe`. Thanks a lot to community member [@mosheduminer](https://github.com/mosheduminer).\\n\\n* LMQL/LMTP model backends can now be accessed [as Langchain `LLM` objects](https://github.com/eth-sri/lmql/blob/main/src/lmql/models/lmtp/lmtp_langchain.py) to use them in your Langchain pipelines. Thanks to [@4onon](https://github.com/4onon) for contributing this. \\n\\n* LMQL can now be [installed as a NixOS package](https://github.com/eth-sri/lmql/tree/main/scripts/flake.d). Thanks to [@charles-dyfis-net](https://github.com/charles-dyfis-net) for contributing this.\\n\\n## 🎬 And that\'s a wrap!\\n\\nLMQL 0.7 is a big release and we are excited to see what you will build with it. As always, please let us know if you have any questions, suggestions or bug reports, on [GitHub](https://github.com/eth-sri/lmql), [Discord](https://discord.gg/7eJP4fcyNT), [Twitter](https://twitter.com/lmqllang) or via [hello@lmql.ai](mailto:hello@lmql.ai).\\n","html":"

LMQL 0.7 brings Procedural Prompt Programming

\\n

October 10, 2023

\\n

Today, we are releasing LMQL 0.7. This series is the biggest update since the original release and includes many community contributions. In addition to several new mainline features like nested queries, the Generations API and the Chat API, it also includes several experimental preview features, allowing you to experiment with upcoming functionality before it is fully released.

\\n

LMQL 0.7 has also moved to semantic versioning, with 0.0.6.6 as its direct predecessor. This means that the next feature release will be 0.8, and the next bugfix release will be 0.7.1.

\\n

Nested Queries for Procedural Prompt Programming

\\n

In 0.7, you can use Nested Queries to call an LMQL query as a nested function in the context of another query. With this, LMQL brings procedural programming to prompting. To illustrate, consider the following example:

\\n
lmql
# chain of thought prompting strategy\\n@lmql.query\\ndef chain_of_thought():\\n    '''lmql\\n    "A: Let's think step by step.\\\\n [REASONING]"\\n    "Therefore the answer is[ANSWER]" where STOPS_AT(ANSWER, ".")\\n    return ANSWER.strip()\\n    '''\\n\\n# top-level query\\n"Q: It is August 12th, 2020. What date was it \\\\\\n    100 days ago? [ANSWER: chain_of_thought]"\\n\\nANSWER # May 4th, 2020\\n
\\n

We first define a simple LMQL function chain_of_thought to do chain-of-thought prompting. In our top-level query, we can then call this function to decode an answer using the [ANSWER: chain_of_thought] syntax. During execution, LMQL then inserts the instructions and constraints from chain_of_thought into the top-level query, generates a value for ANSWER, and then removes the instructions and constraints again, only returning the final result.

\\n

Nested queries are Prompt Function Calls. This design of nested queries is inspired by the idea of function or procedure calls in traditional programming. Removing intermediate instructions and constraints also has parallels to the idea of stack unwinding, a technique to implement function calls in low-level languages.

\\n

LMQL transfers these ideas to prompting, inheriting the general benefits of procedural programming:

\\n\\n

To learn more about nested queries, please refer to the relevant chapter in the documentation.

\\n

Generations API

\\n

LMQL 0.7 adds the Generations API, a lightweight high-level library for LMQL-based text generation and scoring. The API was designed to be easy to use and does not require users to write any LMQL themselves:

\\n
python
# obtain a model instance\\nm: lmql.LLM = lmql.model("openai/gpt-3.5-turbo-instruct")\\n# simple generation\\nm.generate_sync("Hello", max_tokens=10)\\n# -> Hello, I am a 23 year old female.\\n
\\n

\\n

Functions such as LLM.generate and LLM.score allow you to generate and score text using any LMQL-supported inference backend. The Generations API is also seamlessly compatible with standard LMQL, allowing you to switch and combine the two as needed.
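Scoring works in the same style as generation. The snippet below is a minimal sketch; the synchronous score_sync entry point and the argmax() accessor on its result are assumptions mirroring generate_sync above, not verbatim from this post.

```python
# Minimal sketch of scoring with the Generations API (assumes the
# synchronous entry point `score_sync` and a result exposing `argmax()`).
import lmql

m: lmql.LLM = lmql.model("openai/gpt-3.5-turbo-instruct")

# score candidate continuations against a common prompt
result = m.score_sync("The capital of France is", [" Paris", " London"])
print(result.argmax())  # expected: " Paris"
```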

\\n

For more information, please refer to the documentation.

\\n

Chat

\\n

LMQL 0.7 adds a new Chat API, allowing you to easily deploy chatbots with just a couple lines of LMQL.
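As a rough illustration of what such a chatbot can look like, the sketch below uses LMQL's tagged prompt syntax and an interactive input() call inside a query function; treat the exact details as assumptions rather than the canonical Chat API example.

```python
# Hedged sketch of a minimal chat loop; the {:system}/{:user}/{:assistant}
# tags and the interactive `input()` call are assumed to behave as in the
# LMQL chat documentation.
import lmql

@lmql.query(model="chatgpt")
async def chatbot():
    '''lmql
    "{:system} You are a friendly and helpful assistant."
    while True:
        "{:user} {await input()}"
        "{:assistant} [ANSWER]"
    '''
```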

\\n\\n

LMQL Chat comes with custom output writers that allow you to easily stream chatbot input and output over a variety of channels, including WebSockets, HTTP, and SSE. A simple lmql chat CLI tool was also added, allowing you to instantly launch your LMQL queries as fully interactive chatbots.

\\n

We also provide documentation resources on how to get started with chatbot development with LMQL, including chapters on Chatbot Serving, Internal Reasoning and Defending against Prompt Injection. For more information, please refer to the documentation.

\\n

Backends

\\n

LMQL 0.7 ships with three new backends for inference and tokenization:

\\n\\n

Inference Certificates

\\n

To make LLM inference more transparent and reproducible, LMQL 0.7 also adds inference certificates. An inference certificate is a simple data structure that records essential information needed to reproduce an inference result. Certificates can be generated for any LLM call that happens in an LMQL context.

\\n

To produce an inference certificate, pass certificate=True or certificate=<filename> to your query or generate call:

\\n
truncated
# call and save certificate\\nsay_hello(certificate="my-certificate.json")\\n
\\n
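For context, the truncated snippet above can be read as part of a small program along the following lines; say_hello is a hypothetical @lmql.query function, and only the certificate= argument on the call is taken from this section.

```python
# Sketch: `say_hello` is a hypothetical query function; the `certificate=`
# argument on the call is what produces the inference certificate.
import lmql

@lmql.query
def say_hello():
    '''lmql
    "Say 'Hello World':[GREETING]"
    return GREETING
    '''

# produce a certificate for this call and save it to a file
say_hello(certificate="my-certificate.json")
```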

The resulting certificate file provides a way to document, trace and reproduce LLM inference results by recording the exact (tokenized) prompts and information on the environment and generation parameters.

\\n

This can be helpful to better understand what is happening during inference, to debug issues, and to reproduce results. It also offers a way to document LLM failures, to better guide the discussion around the concrete capabilities and limitations of LLMs.

\\n

Decorators

\\n

Variable Decorators offer a new and simple way to call custom Python functions as part of the core generation loop in LMQL:

\\n
lmql
def screaming(value):\\n    """Decorator to convert a string to uppercase"""\\n    return value.upper()\\n\\n"Say 'this is a test':[@screaming TEST]"\\n
\\n
promptdown

Say \'this is a test\': TEST THIS IS A TEST\\n

\\n

Similar to Python decorators, LMQL decorators are functions that take a variable as input and can wrap and modify its value.

\\n

In the example above, we use the @screaming decorator to convert the value of TEST to uppercase. Decorators can be used to implement a wide range of custom functionality, including string normalization, datatype conversion, and more. LMQL also provides decorators that allow you to stream or pre-process data during generation. For more information, please refer to the documentation.
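As an illustration of the datatype-conversion use case, the sketch below follows the same pattern as @screaming above; the as_int helper is a hypothetical example and not part of LMQL's standard library.

```python
# Hypothetical example of a datatype-conversion decorator, following the
# @screaming pattern from above.
import lmql

def as_int(value):
    """Decorator: strip the generated value and convert it to int if possible."""
    try:
        return int(value.strip())
    except ValueError:
        return value

@lmql.query
def multiply():
    '''lmql
    "What is 7 times 6? Answer with just a number: [@as_int RESULT]"
    return RESULT
    '''
```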

\\n

Documentation Update

\\n

The website and many chapters of the LMQL documentation have also been updated and extended, and now include more examples and explanations. We have also refreshed the visual design to make the documentation easier to read and navigate.

\\n

The documentation now also includes a work-in-progress Language Reference, which aims to provide a more comprehensive and formal description of LMQL\'s syntax and semantics, all in one place.

\\n

Preview Features

\\n

Apart from many new core features, LMQL 0.7 also ships with several experimental preview features, allowing you to test drive new functionality before it has fully stabilized and is released as main-line functionality.

\\n

These features are marked as experimental and are not yet fully supported. We are releasing them to gather feedback and to allow users to test them out early on. Note that these features are subject to change and may be removed/modified in future releases.

\\n

LMQL Actions Preview

\\n

LMQL Actions is the first version of LMQL\'s function calling layer. It allows you to expose arbitrary Python functions to the LLM reasoning loop and lets the model call them during generation. Function demonstration and the calling protocol can both be handled automatically by the LMQL runtime, allowing for simple use like this:

\\n
def wiki(q): ...\\ndef calc(expr): ...\\n\\n"Q: What is the population of the US and Germany combined?"\\n"A: [REASONING]" where inline_use(REASONING, [wiki, calc])\\n
\\n

A future release will bring more documentation and details on Actions, including how to use and customize it for your use cases. Until then, we invite everyone to experiment with the current implementation, fully contained in actions.py.

\\n

Regex Constraints Preview

\\n

LMQL now has support for regex constraints, allowing you to use regular expressions to constrain the output of a variable. For example, the following query will always generate a valid date of the form DD/MM:

\\n
"It's the last day of June so today is [RESPONSE]" where REGEX(RESPONSE, r"[0-9]{2}/[0-9]{2}")\\n
\\n

Types / Datatype Constraints Preview

\\n

LMQL is moving towards fully typed LLM generation. On the way there, we have started to add support for dataclass constraints, allowing you to constrain the output of a variable to a specific structured output schema:

\\n
lmql
import lmql\\nfrom dataclasses import dataclass\\n\\n@dataclass\\nclass Person:\\n    name: str\\n    age: int\\n    job: str\\n\\n"Alice is a 21 years old and works as an engineer at LMQL Inc in Zurich, Switzerland.\\\\n"\\n"Structured: [PERSON_DATA]\\\\n" where type(PERSON_DATA) is Person\\n\\nPERSON_DATA\\n# Person(name='Alice', age=21, job='engineer')\\n
\\n

To achieve this, LMQL leverages constrained generation to make sure the LLM always produces all information required to populate a valid Person object. The resulting PERSON_DATA object can then be directly used like a regular Python object. Types are still in an early stage and we are working on adding more features and functionality.

\\n

Other Changes

\\n\\n

🎬 And that\'s a wrap!

\\n

LMQL 0.7 is a big release and we are excited to see what you will build with it. As always, please let us know if you have any questions, suggestions or bug reports, on GitHub, Discord, Twitter or via hello@lmql.ai.

\\n","frontmatter":{"date":"2023-10-10T10:10:00.000Z","title":"LMQL 0.7 brings Procedural Prompt Programming"},"excerpt":"","url":"/blog/posts/release-0.7.html"},{"src":"---\\ndate: 2023-07-25\\ntitle: LMQL v0.0.6.6\\n---\\n\\nJuly 25, 2023\\n\\nWe just released LMQL *0.0.6.6*. This is a minor update with a couple of smaller fixes and improvements.\\n\\n* `lmql.F` now supports positional arguments:\\n\\n```python\\ngreet = lmql.F(\\"Greet {a} and {b}: [GREETING]\\")\\n\\n# call with positional arguments\\ngreet(\\"Alice\\", \\"Bob\\") # Greet Alice and Bob: Hello!\\n# call with keyword arguments\\ngreet(a=\\"Alice\\", b=\\"Bob\\") # Greet Alice and Bob: Hello!\\n```\\n\\n* We improved the error handling of the `llama.cpp` backend and fixed a bug with model identifier parsing. \\n\\n* We also fixed a bug with the LMTP scheduler, where CPU load was high even when no tasks were present. Thanks to community member [@4onen](https://github.com/4onen) for reporting and fixing this!\\n\\n* Added backend support for `auto_gptq` quantized models, contributed by community member [@meditans](https://github.com/meditans).\\n\\n* We fixed an issue where for Azure OpenAI models, a dummy configuration `api.env` was needed. See our [documentation](../../docs/models/azure.md) for details. Thanks to community members Missing and [@hooman-bayer](https://github.com/hooman-bayer) for their feedback and contributions to this.\\n\\n> **Versioning Note**: 0.0.6.6 is the last release with two leading zeros. Starting with the next release, LMQL will adopt semantic versioning and use a single leading zero, i.e. 0.6.7.","html":"

July 25, 2023

\\n

We just released LMQL 0.0.6.6. This is a minor update with a couple of smaller fixes and improvements.

\\n\\n
python
greet = lmql.F("Greet {a} and {b}: [GREETING]")\\n\\n# call with positional arguments\\ngreet("Alice", "Bob") # Greet Alice and Bob: Hello!\\n# call with keyword arguments\\ngreet(a="Alice", b="Bob") # Greet Alice and Bob: Hello!\\n
\\n
• We improved the error handling of the llama.cpp backend and fixed a bug with model identifier parsing.

• We also fixed a bug with the LMTP scheduler, where CPU load was high even when no tasks were present. Thanks to community member @4onen for reporting and fixing this!

• Added backend support for auto_gptq quantized models, contributed by community member @meditans.

• We fixed an issue where, for Azure OpenAI models, a dummy configuration api.env was needed. See our documentation for details. Thanks to community members Missing and @hooman-bayer for their feedback and contributions to this.

Versioning Note: 0.0.6.6 is the last release with two leading zeros. Starting with the next release, LMQL will adopt semantic versioning and use a single leading zero, i.e. 0.6.7.

\\n
\\n","frontmatter":{"date":"2023-07-25T00:00:00.000Z","title":"LMQL v0.0.6.6"},"excerpt":"","url":"/blog/posts/release-0.0.6.6.html"},{"src":"---\\ndate: 2023-07-13\\ntitle: LMQL becomes simpler and adds llama.cpp\\n---\\n\\n# LMQL becomes simpler and adds llama.cpp\\n\\nJuly 13, 2023\\n\\nToday we are releasing LMQL 0.0.6.5. This update contains a major simplification of the LMQL syntax, moving it much closer to standard Python. It also includes a `llama.cpp` based inference backend, several bug fixes and other minor improvements.\\n\\nYou can try the latest version of LMQL in your browser at [lmql.ai/playground](https://lmql.ai/playground) or install it via `pip install lmql`.\\n\\n## One Line Is All It Takes\\n\\nMost notably, 0.0.6.5 comes with several simplifications of the core syntax of LMQL. Of course, all changes are backwards compatible, so you can continue to use your existing query code and move to the new version without any changes.\\n\\nWith this, we aim to minimize syntactic overhead, employing sensible defaults to enable more concise programs like the following:\\n\\n```{lmql}\\nname::simple-syntax\\n\\n\\"One line is all it takes [CONTINUATION]\\"\\n```\\n```promptdown\\nOne line is all it takes [CONTINUATION|Fallin\' in love with me.]\\n```\\n\\n**Sensible Defaults** This is possible because LMQL now automatically assumes `argmax` and `openai/text-davinci-003` as (configurable) default model. If you prefer to use \\na different model or custom decoder settings, you can still specify them explicitly, e.g. in the `@lmql.query` decorator function as demonstrated later in this post.\\n\\nWithout any additional configuration, the simple query code above translates to a full LMQL program like this:\\n\\n```{lmql}\\nname::simple-syntax-default\\n\\nargmax \\"One line is all it takes [CONTINUATION]\\" from \\"openai/text-davinci-003\\"\\n```\\n\\n
\\n\\n### Inline Constraints\\n\\nLMQL now allows you to specify several inline `where` constraints. This enables constraints that refer to local program variables, which means constraints can now be dependent on previous model outputs.\\n\\n```{lmql}\\nname::list-with-array\\n\\n\\"A list of awesome Dua Lipa songs:\\\\n\\"\\nsongs = []\\n\\n\\"- New Rules\\\\n\\"\\nfor i in range(4):\\n \\"-[SONG]\\\\n\\" where STOPS_BEFORE(SONG, \\"\\\\n\\")\\n songs.append(SONG)\\n\\n\\"Out of these, my favorite is[FAVORITE]\\" where FAVORITE in songs\\n```\\n```promptdown\\nA list of awesome Dua Lipa songs:⏎\\n- New Rules\\n- [SONG|Don\'t Start Now]\\n- [SONG|IDGAF]\\n- [SONG|Be the One]\\n- [SONG|Blow Your Mind (Mwah)]\\nOut of these, my favorite is [FAVORITE|Don\'t Start Now]\\n```\\n\\nNote also how in this example LMQL code now reads much more like standard Python code, without any additional level of indentation. \\n\\n
\\n\\n### `@lmql.query` functions\\n\\nThe overhauled syntax also makes LMQL much easier on the eyes when used with the `@lmql.query` [function decorator in Python](/docs/lib/python.md):\\n\\n```python\\nimport lmql\\nimport json\\n\\n@lmql.query(model=\\"openai/text-curie-001\\", temperature=0.9)\\ndef summarize(): \\n \'\'\'lmql\\n \\"\\"\\"\\n Provide a summary of Dua Lipa, the pop icon:\\n {{\\n \\"name\\": \\"[STRING_VALUE]\\",\\n \\"chart_position\\": [INT_VALUE],\\n \\"top_songs\\": [[\\n \\"[STRING_VALUE]\\",\\n \\"[STRING_VALUE]\\"\\n ]]\\n }}\\n \\"\\"\\" where STOPS_BEFORE(STRING_VALUE, \'\\"\') and INT(INT_VALUE) and len(TOKENS(INT_VALUE)) < 3\\n \\n return json.loads(context.prompt.split(\\"pop icon:\\",1)[1])\\n \'\'\'\\n\\nprint(summarize()) # {\'name\': \'Dua Lipa\', \'chart_position\': 3415, \'top_songs\': [\'New Rules\', \'Havana\']}\\n\\n```\\n\\n
\\n\\n### `lmql.F` Lambda Functions\\n\\nBased on LMQL\'s new minimal syntax, we introduce a novel and concise way to write LLM-based lambda functions. This offers a lightweight entryway to get started with integrated small LLM-based utilities in your code, without having to write a full LMQL program.\\n\\n```python\\nimport lmql\\n\\nsummarize = lmql.F(\\"Summarize the following in a few words: {data}: [SUMMARY]\\")\\nmain_subject = lmql.F(\\"What is the main subject (noun) of the following text? {data}: [SUBJECT]\\", \\n \\"len(TOKENS(SUBJECT)) < 20\\")\\n\\ntext = \\"In LMQL, users can specify high-level, logical constraints ...\\"\\n\\nsummarize(data=text) # LMQL enables high-level constraints to be enforced during text \\n # generation, simplifying multi-part prompting and integration.\\nmain_subject(data=text) # Language Model Query Language (LMQL)\\n\\n```\\n\\n
\\n
\\n\\n## `llama.cpp` Inference Backend\\n\\nLMQL now also fully integrates with the excellent [llama.cpp](https://github.com/ggerganov/llama.cpp) C++ implementation of a number of Transformer-based language models. \\n\\nUsing `llama.cpp` from LMQL is as simple as specifying it in the `from` clause of a query:\\n\\n```{lmql}\\nname::llama-cpp-blog\\n\\nargmax \\"Say \'this is a test\':[RESPONSE]\\" from \\"llama.cpp:.bin\\"\\n```\\n\\nWe support, both, in-process loading of `llama.cpp`, as well as remote inference via `lmql serve-model`. To learn more about `llama.cpp` and how to use it with LMQL, check out the corresponding chapter in the LMQL [documentation](/docs/models/llama.cpp.md).\\n\\n
\\n\\n## Other Changes\\n\\n* LMQL now includes a `random` model backend, which randomly samples tokens from the GPT-2 vocabulary. This is useful for debugging and testing purposes and can be used for data generation in the context of highly constrained query programs.\\n\\n* Two caching issues have been fixed, avoiding cache collisions which could lead to repeated model outputs.\\n\\n* More robust query string parsing, allowing for [robust escaping](/docs/language/scripted-prompting.md#escaping) of special characters `[`, `]`, `{` and `}`.\\n\\n* Added support for `transformers` based Llama models and the associated (fast) implementation of HF tokenizers.\\n\\n* Simplified Azure OpenAI support, see the relevant chapter in the [documentation](/docs/models/azure.md).\\n\\nWe thank community members [@minosvasilias](https://github.com/minosvasilias) and [@CircArgs](https://github.com/CircArgs) for their contribution to this release.","html":"

LMQL becomes simpler and adds llama.cpp

\\n

July 13, 2023

\\n

Today we are releasing LMQL 0.0.6.5. This update contains a major simplification of the LMQL syntax, moving it much closer to standard Python. It also includes a llama.cpp based inference backend, several bug fixes and other minor improvements.

\\n

You can try the latest version of LMQL in your browser at lmql.ai/playground or install it via pip install lmql.

\\n

One Line Is All It Takes

\\n

Most notably, 0.0.6.5 comes with several simplifications of the core syntax of LMQL. Of course, all changes are backwards compatible, so you can continue to use your existing query code and move to the new version without any changes.

\\n

With this, we aim to minimize syntactic overhead, employing sensible defaults to enable more concise programs like the following:

\\n
"One line is all it takes [CONTINUATION]"\\n
\\n
promptdown

One line is all it takes CONTINUATIONFallin\' in love with me.\\n

\\n

Sensible Defaults This is possible because LMQL now automatically assumes argmax decoding and openai/text-davinci-003 as (configurable) defaults. If you prefer to use a different model or custom decoder settings, you can still specify them explicitly, e.g. in the @lmql.query decorator function, as demonstrated later in this post.

\\n

Without any additional configuration, the simple query code above translates to a full LMQL program like this:

\\n
argmax "One line is all it takes [CONTINUATION]" from "openai/text-davinci-003"\\n
\\n

\\n

Inline Constraints

\\n

LMQL now allows you to specify several inline where constraints. This enables constraints that refer to local program variables, which means constraints can now be dependent on previous model outputs.

\\n
"A list of awesome Dua Lipa songs:\\\\n"\\nsongs = []\\n\\n"- New Rules\\\\n"\\nfor i in range(4):\\n    "-[SONG]\\\\n" where STOPS_BEFORE(SONG, "\\\\n")\\n    songs.append(SONG)\\n\\n"Out of these, my favorite is[FAVORITE]" where FAVORITE in songs\\n
\\n
promptdown

A list of awesome Dua Lipa songs:⏎\\n- New Rules\\n- SONGDon\'t Start Now\\n- SONGIDGAF\\n- SONGBe the One\\n- SONGBlow Your Mind (Mwah)\\nOut of these, my favorite is FAVORITEDon\'t Start Now\\n

\\n

Note also how in this example LMQL code now reads much more like standard Python code, without any additional level of indentation.

\\n
\\n

@lmql.query functions

\\n

The overhauled syntax also makes LMQL much easier on the eyes when used with the @lmql.query function decorator in Python:

\\n
python
import lmql\\nimport json\\n\\n@lmql.query(model="openai/text-curie-001", temperature=0.9)\\ndef summarize(): \\n    '''lmql\\n    """\\n    Provide a summary of Dua Lipa, the pop icon:\\n    {{\\n      "name": "[STRING_VALUE]",\\n      "chart_position": [INT_VALUE],\\n      "top_songs": [[\\n         "[STRING_VALUE]",\\n         "[STRING_VALUE]"\\n      ]]\\n    }}\\n    """ where STOPS_BEFORE(STRING_VALUE, '"') and INT(INT_VALUE) and len(TOKENS(INT_VALUE)) < 3\\n    \\n    return json.loads(context.prompt.split("pop icon:",1)[1])\\n    '''\\n\\nprint(summarize()) # {'name': 'Dua Lipa', 'chart_position': 3415, 'top_songs': ['New Rules', 'Havana']}\\n\\n
\\n

\\n

lmql.F Lambda Functions

\\n

Based on LMQL\'s new minimal syntax, we introduce a novel and concise way to write LLM-based lambda functions. This offers a lightweight entry point for embedding small LLM-based utilities in your code, without having to write a full LMQL program.

\\n
python
import lmql\\n\\nsummarize = lmql.F("Summarize the following in a few words: {data}: [SUMMARY]")\\nmain_subject = lmql.F("What is the main subject (noun) of the following text? {data}: [SUBJECT]", \\n                      "len(TOKENS(SUBJECT)) < 20")\\n\\ntext = "In LMQL, users can specify high-level, logical constraints ..."\\n\\nsummarize(data=text) # LMQL enables high-level constraints to be enforced during text \\n                     # generation, simplifying multi-part prompting and integration.\\nmain_subject(data=text) # Language Model Query Language (LMQL)\\n\\n
\\n

\\n
\\n

llama.cpp Inference Backend

\\n

LMQL now also fully integrates with the excellent llama.cpp C++ implementation of a number of Transformer-based language models.

\\n

Using llama.cpp from LMQL is as simple as specifying it in the from clause of a query:

\\n
argmax "Say 'this is a test':[RESPONSE]" from "llama.cpp:<PATH TO WEIGHTS>.bin"\\n
\\n

We support both in-process loading of llama.cpp and remote inference via lmql serve-model. To learn more about llama.cpp and how to use it with LMQL, check out the corresponding chapter in the LMQL documentation.

\\n
\\n

Other Changes

\\n\\n

We thank community members @minosvasilias and @CircArgs for their contribution to this release.

\\n","frontmatter":{"date":"2023-07-13T00:00:00.000Z","title":"LMQL becomes simpler and adds llama.cpp"},"excerpt":"","url":"/blog/posts/release-0.0.6.5.html"},{"src":"---\\ndate: 2023-06-08\\ntitle: Releasing LMQL v0.0.6.4 LMTP, Azure, Synchronous API, and more\\n---\\n\\n# Releasing LMQL 0.0.6.4: LMTP, Azure, Synchronous API, and more\\n\\nJune 8, 2023\\n\\nAmong many things, this update contains several bug fixes and improvements. The most notable changes are:\\n\\n* **Azure OpenAI support** LMQL now supports OpenAI models that are served via Azure. For more information on how to use Azure models, please see the corresponding chapter in the [documentation](/docs/models/azure.md). Many thanks to [@veqtor](https://github.com/veqtor) for contributing this feature.\\n\\n* **Local Models via the Language Model Transport Protocol** LMQL 0.0.6.4 implements a novel protocol to stream token output from local models, vastly improving performance. In our first benchmarks, we observed a 5-6x speedup for local model inference. For more information on how to use local models, please see the corresponding chapter in the [documentation](/docs/models/hf.md).\\n\\n To learn more about the internals of the new streaming protocol, i.e. the language model transport protocol (LMTP), you can find more details in [this README file](https://github.com/eth-sri/lmql/blob/main/src/lmql/models/lmtp/README.md). In the future, we intend to implement more model backends using LMTP, streamlining communication between LMQL and models.\\n\\n
\\n \\n
\\n LMQL\'s new streaming protocol (LMTP) allows for faster local model inference.\\n
\\n\\n* **Synchronous Python API** Next to an `async/await` based API, LMQL now also provides a synchronous API. This means you no longer need to use `asyncio` to use LMQL from Python. \\n\\n To use the synchronous API, simply declare `@lmql.query` function without the `async` keyword, e.g.\\n\\n ```python\\n import lmql\\n\\n @lmql.query\\n def hello(s: str):\\n \'\'\'lmql\\n argmax \\n \\"Hello {s} [RESPONSE]\\" \\n return RESPONSE\\n from \\n \\"chatgpt\\"\\n \'\'\'\\n\\n print(hello(\\"world\\")) # [\'Hello! How can I assist you today?\']\\n ```\\n\\n If you instead want to use `lmql.run` in a synchronous context, you can now use `lmql.run_sync` instead. To learn more about how LMQL can be used from Python, check out our [documentation](/docs/lib/python.md).\\n\\n* **Improved Tokenizer Backends** LMQL can now use the excellent [`tiktoken` tokenizer](https://github.com/openai/tiktoken) as tokenization backend (for OpenAI models). Furthermore, all tokenization backends have been ported to operate on a byte-level, which improves support for multibyte characters and emojis. This is especially relevant for non-English languages and special characters.\\n\\n* **Docker Image** LMQL now provides a Docker image that can be used to run the LMQL playground in a containerized environment. For more information, please see the [documentation](/docs/development/docker-setup.md). Many thanks to [@SilacciA](https://github.com/SilacciA) for contributing this feature.\\n\\n* **Faster Startup Time** We optimized LMQL\'s import hierarchy, which results in faster module loading time.","html":"

Releasing LMQL 0.0.6.4: LMTP, Azure, Synchronous API, and more

\\n

June 8, 2023

\\n

Among many things, this update contains several bug fixes and improvements. The most notable changes are:

\\n\\n","frontmatter":{"date":"2023-06-08T00:00:00.000Z","title":"Releasing LMQL v0.0.6.4 LMTP, Azure, Synchronous API, and more"},"excerpt":"","url":"/blog/posts/release-0.0.6.4.html"},{"src":"---\\ndate: 2023-05-11\\ntitle: LMQL Release v0.0.6.3\\n---\\n\\n# LMQL v0.0.6.3\\n\\nMay 11, 2023\\n\\nToday, we are releasing LMQL v0.0.6.3. This update contains several bug fixes and improvements. The most notable changes are:\\n\\n* **Lighter Runtime** As part of our continued efforts, we made LMQL much lighter (no more mandatory `transformers` dependency). By default LMQL now no longer requires `transformers` or PyTorch. If you rely on local models, just install LMQL via `pip install lmql[hf]` to get full Transformers integration.\\n\\n* **Token Constraints** A new function `TOKENS(...)` was added to the LMQL constraint language, allowing you to specify lower and upper bounds or the exact number of tokens to generate for a given variable.\\n \\n ```{lmql}\\n name::token_constraints\\n argmax \\n \\"A 10 token response[WHO]\\" \\n from \\n \\"openai/text-ada-001\\" \\n where \\n len(TOKENS(WHO)) == 10\\n ```\\n\\n* **Conditional Stopping** `STOPS_AT` can now be combined with additional side conditions. This allows you to specify stopping phrases that are only enforced, once other conditions are met. \\n\\n For example, below, we stop when the generated text hits a newline character, but only if the overall variable output is already at least 10 tokens long.\\n\\n ```{lmql}\\n name::conditional_stopping \\n argmax \\n \\"Hello[WHO]\\" \\n from \\n \\"openai/text-ada-001\\" \\n where \\n len(TOKENS(WHO)) > 10 and STOPS_AT(WHO, \\"\\\\n\\")\\n ```\\n\\n* **lmql.run**: Improved input validation for `lmql.run` as contributed by @lfegray. More specifically, `lmql.run` wil now provide more helpful error messages when client logic does not specify input values for all required query parameters.\\n\\n* **Automatic Cache Invalidation**: LMQL\'s tokenizer cache at `~/.cache/lmql` is now invalidated automatically when upgrading to a new version. This should prevent issues with outdated cache files.\\n\\n> Note: Version 0.0.6.2 was skipped and yanked from pypi.org, as an invalid release was pushed accidentally.","html":"

LMQL v0.0.6.3

\\n

May 11, 2023

\\n

Today, we are releasing LMQL v0.0.6.3. This update contains several bug fixes and improvements. The most notable changes are:

\\n\\n
\\n

Note: Version 0.0.6.2 was skipped and yanked from pypi.org, as an invalid release was pushed accidentally.

\\n
\\n","frontmatter":{"date":"2023-05-11T00:00:00.000Z","title":"LMQL Release v0.0.6.3"},"excerpt":"","url":"/blog/posts/release-0.0.6.3.html"},{"src":"---\\ndate: 2023-05-03\\ntitle: LMQL Release v0.0.6.1\\n---\\n\\n# LMQL v0.0.6.1\\n\\nMay 3, 2023\\n\\nWe released LMQL v0.0.6.1, which contains several bug fixes and improvements. The most notable changes are:\\n\\n* **Cache Layer Bug Fixes** This release contains several fixes and improvements to the recently introduced cache layer.\\n\\n* **Stopping Phrases** Stopping phrases specified via `STOPS_BEFORE` are now passed to the OpenAI API as `\\"stop\\"` parameter, decreasing the number of tokens used for the request. If you want to disable this (e.g. to allow speculative execution), you can specify the new decoder parameter `openai_nonstop=True`.\\n\\n* **Asynchronous Output Writers** All output writers have been refactored to use asynchronous I/O. This should simplify integration with other asynchronous frameworks, e.g. for HTTP or Websocket APIs. We also added a new chapter on [Output Streaming](/docs/lib/output.md) to the documentation.\\n\\n* **Output Writers for HTTP endpoints, WebSockets and Server-Sent Events** Based on the updated output writer interface, we added three new output writers for serving LMQL queries as HTTP endpoints, WebSockets and via Server-Sent Events (SSE). To learn more, check their relatively simple implementations in the new [lmql.output](https://github.com/eth-sri/lmql/tree/main/src/lmql/output) module. We will also provide more documentation on how to use them, e.g. with `aiohttp` in the future.","html":"

LMQL v0.0.6.1

\\n

May 3, 2023

\\n

We released LMQL v0.0.6.1, which contains several bug fixes and improvements. The most notable changes are:

\\n\\n","frontmatter":{"date":"2023-05-03T00:00:00.000Z","title":"LMQL Release v0.0.6.1"},"excerpt":"","url":"/blog/posts/release-0.0.6.1.html"},{"src":"---\\ndate: 2023-05-01\\ntitle: Releasing the LMQL Caching Layer (v0.0.6)\\n---\\n\\n# Releasing the LMQL Caching Layer (v0.0.6)\\n\\nMay 1, 2023\\n\\nToday we are releasing LMQL 0.0.6, the first version of LMQL that integrates the *LMQL Caching Layer*. The caching layer can drastically reduce token use of LLM interaction, lowering both the cost and latency of running queries. In this blog post, we provide a quick overview of the caching layer and demonstrate how it can reduce token use, latency and the number of requests needed to run queries by up to 80%. We observe improvements across a wide range of different scenarios, including **template-based queries, long-form constraints and tool augmentation.**\\n\\nYou can experiment with LMQL in the browser-based [Playground IDE](http://lmql.ai/playground) or install the latest version locally, via `pip install lmql`.\\n\\n## Caching Layer\\n\\nThe caching layer is implemented as a **tree-based data structure** that caches all model output including logits, tokens, and metadata, allowing the runtime to more efficiently explore the token space of an LLM, even in the presence of multiple variables, constraints and tool augmentation. The cache can be considered an append-only tree, that is explored during query execution, expanding branches according to query code, constraints and speculative execution.\\n\\nTo illustrate the effect of a caching layer, we consider the following example scenarios, all of which now run in a fraction of the time and with a fraction of the tokens needed with traditional querying methods.\\n\\n### Template-Based Queries \\n\\nWhen specifying a prompt template with multiple variables to fill in, an LLM typically needs to be invoked once per variable. For instance, consider the following template that guides an LLM in generating a list of things:\\n```{lmql}\\nname::list-of-things-speculative\\nargmax\\n \\"A list of things not to forget when going to the sea (not travelling): \\\\n\\"\\n \\"- Sunglasses \\\\n\\"\\n \\"-[THING]\\"\\n \\"-[THING]\\"\\n \\"-[THING]\\"\\n \\"-[THING]\\"\\nfrom\\n \'openai/text-ada-001\'\\nwhere\\n STOPS_AT(THING, \\"\\\\n\\")\\n```\\n**Without Caching:** Tokens: 390, Requests: 4 | **With Caching Layer:** Tokens: 89 (-77%), Requests: 1 (-75%)\\n\\nHere, the LLM typically needs to be invoked 4 times, once per `[THING]` variable. On each call, this incurs a token and latency cost (both with OpenAI and local models). Separate calls are needed, because our template dictates the `-` token to be inserted before each `[THING]`. \\n\\nWith the caching layer, LMQL can now invoke the LLM only once, and fill in all variables with the resulting tokens, as long as the LLM output already aligns naturally with your template. In case the LLM result of the initial invocation at some point no longer aligns with the template, LMQL will automatically re-invoke the LLM from this point on, guaranteeing an overall consistent result that is already parsed into separate `[THING]` variables.\\n\\n### Short-Circuiting Long Constraints\\n\\nWhen you specify long constraints like `A in [\\"ABCDE\\", \\"FGHIJK\\"]`, the LMQL runtime guides the LLM to choose one of the provided options and then continues enforcing the sequence until the chosen values is fully decoded. 
To illustrate, consider the following query:\\n```{lmql}\\nname::long-form-constraints-speculative\\nargmax\\n \\"If we have the choice we choose[OPTION]\\"\\nfrom \\n \\"openai/text-ada-001\\"\\nwhere\\n OPTION in [\\"Option A with a whole lot of extra context\\", \\n \\"Option B with context\\", \\n \\"Another Option, also with a lot of additional text\\"\\n ]\\n```\\n```promptdown\\nIf we have the choice we choose [OPTION|Option A with a whole lot of extra context]\\n```\\n**Without Caching:** Tokens: 123, Requests: 9 | **With Caching Layer:** Tokens: 25 (-80%), Requests: 2 (-78%)\\n\\nHere, after the LLM has produced `\\"Option\\"` and then `\\" A\\"`, LMQL short-circuits further model calls and automatically completes the resulting sequence to `\\"Option A with a whole lot of extra context\\"`. This is possible because once `Option A` has been predicted, the remaining tokens are fully determined by the constraints.\\n\\n### Tool-Augmented Queries\\n\\nLastly, we consider tool augmented queries. LLM agents and tool augmentation are very powerful paradigms, that allow LLMs to incorporate external knowledge and reasoning into their predictions. However, this comes at a cost: On each tool invocation, the LLM needs to be re-invoked to continue decoding after the tool output has been inserted. This impacts both the token cost and latency of running queries, as many requests have to be send forth and back between the LLM and the tool.\\n\\nAs an example, consider the following query that augments an LLM with the ability to use a key-value storage, [also runnable in the browser-based LMQL Playground](http://lmql.ai/playground?snippet=kv).\\n\\n
\\n\\n \\"Key-Storage\\n\\n
\\n\\n**Without Caching:** Tokens: 5,162, Requests: 12 | **With Caching Layer:** Tokens: 3,481 (-33%), Requests: 8 (-33%)\\n\\nHere, whenever the LLM produces an action relating to our key-value storage, we invoke a tool that handles the storage and return the result (to `assign` and `get` stored values). The result of each tool invocation is then inserted into the LLM output, and the LLM is re-invoked to continue decoding.\\n\\nWe count 10 tool interactions which results in 12 requests if we run without caching. However, using the new caching layer, we can reduce this to 8 requests, even undercutting the number of tool interactions. This is possible because the caching layer will not abort LLM generation, if the LLM already correctly predicts the tool output. \\n\\nThis scenario demonstrates that the natural ability of LLMs to complete sequences can be leveraged to reduce the number of tool interactions, by relying on speculative execution.\\n\\n## Persisting the Cache\\n\\nOf course, the in-memory cache of the LMQL runtime can also be persisted to disk, allowing you to reuse the cache tree across multiple queries, automatically reducing token cost and latency. In some cases this can even be used to reduce the number of requests to the LLM to 0, e.g. if the cache already contains the desired result. \\n\\nTo do so, you can simply specify a `cache=\\"file.tokens\\"` parameter in your query code:\\n\\n```{lmql}\\nname::joke-with-cache\\nargmax(cache=\\"joke.tokens\\")\\n \\"\\"\\"A good dad joke. A indicates the punchline\\n Q:[JOKE]\\n A:[PUNCHLINE]\\"\\"\\"\\nfrom\\n \\"openai/text-davinci-003\\"\\nwhere\\n len(JOKE) < 120 and \\n STOPS_AT(JOKE, \\"?\\") and \\n STOPS_AT(PUNCHLINE, \\"\\\\n\\") and \\n len(PUNCHLINE) > 1\\n```\\n\\nThe first successful run of this query will persist the cache to `joke.tokens`. Subsequent runs will then automatically load the cache from disk, and only invoke the LLM if the cache does not contain a match. This also works for queries whose underlying LLM requests only partially overlap, as the tree-based cache data structure will automatically identify matching subtrees.\\n\\n**Caching During Query Development**: Persisting the cache can be particularly useful during query development, as it allows you to reuse the cache across multiple runs of the same query. A persistent cache will reduce token cost and latency of your query, even if you slightly change the query between runs.\\n\\n## Caveats and Disabling the Cache\\n\\nYou can disable the caching layer by specifying `cache=False` in your query code. This will cause the LMQL runtime to always invoke the LLM, and never use the cache. This is useful for debugging purposes, or if you want to ensure that the LLM is always invoked.\\n\\nFurther, as the cache currently is implemented as an append-only data structure, it will grow indefinitely. This may be problematic for long-running applications, as the cache will eventually grow to relatively large sizes. In the future, we plan to implement simple strategies to limit the cache size, such as a least-recently-used eviction policy.\\n\\n## Conclusion\\n\\nIn this post, we introduced the new caching layer of the LMQL runtime, which allows you to reduce the token cost and latency of your queries by reusing previously generated LLM outputs. We demonstrated how the caching layer can be used to reduce the number of LLM invocations in a variety of scenarios, including long constraints, short-circuiting, and tool-augmented queries. 
We also showed how the cache can be persisted to disk, allowing you to reuse the cache across multiple queries.\\n\\nTo learn more about LMQL please also check out our [documentation](/docs), or join our [Discord](https://discord.gg/2Y3Wz2Q) to chat with us directly. We are looking forward to hearing from you!","html":"

Releasing the LMQL Caching Layer (v0.0.6)

\\n

May 1, 2023

\\n

Today we are releasing LMQL 0.0.6, the first version of LMQL that integrates the LMQL Caching Layer. The caching layer can drastically reduce token use of LLM interaction, lowering both the cost and latency of running queries. In this blog post, we provide a quick overview of the caching layer and demonstrate how it can reduce token use, latency and the number of requests needed to run queries by up to 80%. We observe improvements across a wide range of different scenarios, including template-based queries, long-form constraints and tool augmentation.

\\n

You can experiment with LMQL in the browser-based Playground IDE or install the latest version locally, via pip install lmql.

\\n

Caching Layer

\\n

The caching layer is implemented as a tree-based data structure that caches all model output including logits, tokens, and metadata, allowing the runtime to more efficiently explore the token space of an LLM, even in the presence of multiple variables, constraints and tool augmentation. The cache can be considered an append-only tree that is explored during query execution, with branches expanded according to query code, constraints and speculative execution.
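To build intuition for this data structure (purely as a conceptual sketch, not LMQL's actual implementation), one can picture an append-only token trie in which each decoded continuation extends a branch and overlapping requests reuse the longest cached prefix:

```python
# Conceptual sketch only: an append-only token trie, illustrating how
# overlapping requests can reuse previously decoded branches.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class CacheNode:
    token: Optional[int] = None        # token id on the edge into this node
    logprob: float = 0.0               # cached score for this continuation
    children: Dict[int, "CacheNode"] = field(default_factory=dict)

class TokenTreeCache:
    def __init__(self) -> None:
        self.root = CacheNode()

    def extend(self, prefix: List[int], token: int, logprob: float) -> None:
        """Append a newly decoded token below the given prefix (append-only)."""
        node = self.root
        for t in prefix:
            node = node.children.setdefault(t, CacheNode(token=t))
        node.children.setdefault(token, CacheNode(token=token, logprob=logprob))

    def cached_prefix_length(self, tokens: List[int]) -> int:
        """Return how many leading tokens of a request are already cached."""
        node, n = self.root, 0
        for t in tokens:
            if t not in node.children:
                break
            node, n = node.children[t], n + 1
        return n
```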

\\n

To illustrate the effect of a caching layer, we consider the following example scenarios, all of which now run in a fraction of the time and with a fraction of the tokens needed with traditional querying methods.

\\n

Template-Based Queries

\\n

When specifying a prompt template with multiple variables to fill in, an LLM typically needs to be invoked once per variable. For instance, consider the following template that guides an LLM in generating a list of things:

\\n
argmax\\n    "A list of things not to forget when going to the sea (not travelling): \\\\n"\\n    "- Sunglasses \\\\n"\\n    "-[THING]"\\n    "-[THING]"\\n    "-[THING]"\\n    "-[THING]"\\nfrom\\n    'openai/text-ada-001'\\nwhere\\n    STOPS_AT(THING, "\\\\n")\\n
\\n

Without Caching: Tokens: 390, Requests: 4 | With Caching Layer: Tokens: 89 (-77%), Requests: 1 (-75%)

\\n

Here, the LLM typically needs to be invoked 4 times, once per [THING] variable. Each call incurs a token and latency cost (both with OpenAI and local models). Separate calls are needed because our template dictates that the - token be inserted before each [THING].

\\n

With the caching layer, LMQL can now invoke the LLM only once, and fill in all variables with the resulting tokens, as long as the LLM output already aligns naturally with your template. In case the LLM result of the initial invocation at some point no longer aligns with the template, LMQL will automatically re-invoke the LLM from this point on, guaranteeing an overall consistent result that is already parsed into separate [THING] variables.

\\n
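The following string-level sketch illustrates the alignment idea. LMQL itself operates on tokens, so this is a simplification, and the helper name `fill_template` is hypothetical: a single long completion is carved into the [THING] slots, and the position where it stops matching the template is where the model would have to be re-invoked.

```python
from typing import List, Tuple

def fill_template(completion: str, n_slots: int) -> Tuple[List[str], str]:
    """Carve one long completion into n_slots dash-separated list items."""
    things, rest = [], completion
    for i in range(n_slots):
        if i > 0:
            if not rest.startswith("-"):
                return things, rest  # misalignment: re-invoke the model from here
            rest = rest[1:]
        value, _, rest = rest.partition("\n")
        things.append(value.strip())
    return things, rest

# the prompt already ends with "-", so the completion starts with the first item
completion = "Sunscreen\n- Towel\n- Swimsuit\n- Water\n"
print(fill_template(completion, 4))
# -> (['Sunscreen', 'Towel', 'Swimsuit', 'Water'], '')
```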

Short-Circuiting Long Constraints

\\n

When you specify long constraints like A in ["ABCDE", "FGHIJK"], the LMQL runtime guides the LLM to choose one of the provided options and then continues enforcing the sequence until the chosen value is fully decoded. To illustrate, consider the following query:

\\n
argmax\\n    "If we have the choice we choose[OPTION]"\\nfrom \\n    "openai/text-ada-001"\\nwhere\\n    OPTION in ["Option A with a whole lot of extra context", \\n        "Option B with context", \\n        "Another Option, also with a lot of additional text"\\n    ]\\n
\\n
promptdown

If we have the choice we choose OPTIONOption A with a whole lot of extra context\\n

\\n

Without Caching: Tokens: 123, Requests: 9 | With Caching Layer: Tokens: 25 (-80%), Requests: 2 (-78%)

\\n

Here, after the LLM has produced "Option" and then " A", LMQL short-circuits further model calls and automatically completes the resulting sequence to "Option A with a whole lot of extra context". This is possible because once Option A has been predicted, the remaining tokens are fully determined by the constraints.

\\n
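At the string level, this short-circuiting check boils down to testing whether the text generated so far is still compatible with more than one allowed option. Below is a minimal, simplified sketch; it is not the actual LMQL implementation and the helper name `forced_completion` is hypothetical.

```python
from typing import List, Optional

def forced_completion(prefix: str, options: List[str]) -> Optional[str]:
    """Return the forced remaining text if exactly one option matches the prefix."""
    matching = [o for o in options if o.startswith(prefix)]
    if len(matching) == 1:
        return matching[0][len(prefix):]
    return None  # still ambiguous: keep querying the model

options = [
    "Option A with a whole lot of extra context",
    "Option B with context",
    "Another Option, also with a lot of additional text",
]
print(forced_completion("Option", options))    # None ("Option A..." and "Option B..." both match)
print(forced_completion("Option A", options))  # " with a whole lot of extra context"
```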

Tool-Augmented Queries

\\n

Lastly, we consider tool-augmented queries. LLM agents and tool augmentation are very powerful paradigms that allow LLMs to incorporate external knowledge and reasoning into their predictions. However, this comes at a cost: on each tool invocation, the LLM needs to be re-invoked to continue decoding after the tool output has been inserted. This impacts both the token cost and latency of running queries, as many requests have to be sent back and forth between the LLM and the tool.

\\n

As an example, consider the following query that augments an LLM with the ability to use a key-value storage, also runnable in the browser-based LMQL Playground.

\\n
\\n\\n \\"Key-Storage\\n\\n
\\n

Without Caching: Tokens: 5,162, Requests: 12 | With Caching Layer: Tokens: 3,481 (-33%), Requests: 8 (-33%)

\\n

Here, whenever the LLM produces an action relating to our key-value storage, we invoke a tool that handles the storage operation (assigning and getting stored values) and returns the result. The result of each tool invocation is then inserted into the LLM output, and the LLM is re-invoked to continue decoding.

\\n

We count 10 tool interactions, which results in 12 requests when running without caching. However, using the new caching layer, we can reduce this to 8 requests, even undercutting the number of tool interactions. This is possible because the caching layer does not abort LLM generation if the LLM already correctly predicts the tool output.

\\n

This scenario demonstrates that the natural ability of LLMs to complete sequences can be leveraged, via speculative execution, to reduce the number of LLM requests even below the number of tool interactions.

\\n

Persisting the Cache

\\n

Of course, the in-memory cache of the LMQL runtime can also be persisted to disk, allowing you to reuse the cache tree across multiple queries, automatically reducing token cost and latency. In some cases this can even be used to reduce the number of requests to the LLM to 0, e.g. if the cache already contains the desired result.

\\n

To do so, you can simply specify a cache="file.tokens" parameter in your query code:

\\n
argmax(cache="joke.tokens")\\n   """A good dad joke. A indicates the punchline\\n   Q:[JOKE]\\n   A:[PUNCHLINE]"""\\nfrom\\n   "openai/text-davinci-003"\\nwhere\\n   len(JOKE) < 120 and \\n   STOPS_AT(JOKE, "?") and \\n   STOPS_AT(PUNCHLINE, "\\\\n") and \\n   len(PUNCHLINE) > 1\\n
\\n

The first successful run of this query will persist the cache to joke.tokens. Subsequent runs will then automatically load the cache from disk, and only invoke the LLM if the cache does not contain a match. This also works for queries whose underlying LLM requests only partially overlap, as the tree-based cache data structure will automatically identify matching subtrees.

\\n
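As a usage sketch, the same cached query can also be wrapped in an @lmql.query function and called from Python. This assumes that the argmax(cache=...) decoder syntax shown above is likewise accepted inside a decorated query function; joke.tokens is the cache file from the example.

```python
import lmql

@lmql.query
def cached_joke():
    '''lmql
    argmax(cache="joke.tokens")
        """A good dad joke. A indicates the punchline
        Q:[JOKE]
        A:[PUNCHLINE]"""
    from
        "openai/text-davinci-003"
    where
        len(JOKE) < 120 and STOPS_AT(JOKE, "?") and len(PUNCHLINE) > 1
    '''

# the first successful run writes joke.tokens; subsequent runs reuse it
print(cached_joke())
```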

Caching During Query Development: Persisting the cache can be particularly useful during query development, as it allows you to reuse the cache across multiple runs of the same query. A persistent cache will reduce token cost and latency of your query, even if you slightly change the query between runs.

\\n

Caveats and Disabling the Cache

\\n

You can disable the caching layer by specifying cache=False in your query code. This causes the LMQL runtime to always invoke the LLM and never read from the cache, which is useful for debugging or when you want to guarantee that every request actually reaches the model.

\\n

Further, as the cache is currently implemented as an append-only data structure, it grows indefinitely. This may be problematic for long-running applications, where the cache can eventually reach a considerable size. In the future, we plan to implement simple strategies to limit the cache size, such as a least-recently-used eviction policy.

\\n

Conclusion

\\n

In this post, we introduced the new caching layer of the LMQL runtime, which allows you to reduce the token cost and latency of your queries by reusing previously generated LLM outputs. We demonstrated how the caching layer can be used to reduce the number of LLM invocations in a variety of scenarios, including long constraints, short-circuiting, and tool-augmented queries. We also showed how the cache can be persisted to disk, allowing you to reuse the cache across multiple queries.

\\n

To learn more about LMQL please also check out our documentation, or join our Discord to chat with us directly. We are looking forward to hearing from you!

\\n","frontmatter":{"date":"2023-05-01T00:00:00.000Z","title":"Releasing the LMQL Caching Layer (v0.0.6)"},"excerpt":"","url":"/blog/posts/release-0.0.6.html"},{"src":"---\\ndate: 2023-04-17\\ntitle: LMQL Release 0.0.5\\n---\\n\\n# LMQL Release 0.0.5\\n\\nApril 17, 2023\\n\\nToday we are releasing version 0.0.5 of LMQL. This release focuses on stability and performance improvements. For a detailed list of changes, please see below. We are particularly excited about the first community contributions that have been merged as part of this release, with many more in the works.\\n\\n`lmql==0.0.5` has been published on [PyPI](https://pypi.org/project/lmql/), based the current `main` branch of the [GitHub repository](https://github.com/eth-sri/lmql). The updated version has also been deployed to the browser-based [lmql.ai/playground](http://lmql.ai/playground).\\n\\n### Changelog\\n\\n* **Decoder Performance** The `argmax` and `sample` decoders have undergone some optimizations, allowing them to run faster. This results in a *20-30% speed-up* on common query workloads. [#24](https://github.com/eth-sri/lmql/pull/24).\\n\\n* **Postprocessing Semantics** Internally, LMQL now allows constraints to implement postprocessing semantics. This is used to convert variable values after they have been completed, to a more normalized form in the prompt, and to a semantically meaningful data type in the context of the query code. [#24](https://github.com/eth-sri/lmql/pull/24). \\n\\n For example, when using an `INT()` constraint on a generated number, the model will be restricted to only generate valid integers, and now, the resulting `NUM` value will additionally be converted to an `int` value:\\n\\n ```\\n argmax\\n \\"My favorite number is: [NUM]\\\\n\\"\\n print(type(NUM), NUM * 2) # 4\\n \\"Number times two is {NUM * 2}\\"\\n from\\n \'openai/text-ada-001\'\\n where\\n INT(NUM) \\n ```\\n\\n* **Core Interpreter** A complete reimplementation of the LMQL core interpreter has been completed. This fixes a couple of minor issues and overall, improves reliability and performance when dealing with *branching* decoding algorithms. [#24](https://github.com/eth-sri/lmql/pull/24).\\n\\n\\n* **Playground** Locally and when used in-browser, the [LMQL Playground](http://lmql.ai/playground) now *streams debugger information* from the LMQL interpreter incrementally. This leads to speed-ups when running in the Playground, especially with longer outputs. [#27f9a8ad](https://github.com/eth-sri/lmql/commit/27f9a8adb819f732608ef61c9aca9dca579dc536).\\n\\n\\n* **Other Fixes**:\\n - When used from within Python (as decorated function), LMQL code no longer has to be doubly-escaped, e.g. you can now write `STOPS_AT(VAR, \\"\\\\n\\")` instead of `STOPS_AT(VAR, \\"\\\\\\\\n\\")`\\n - The LMQL inference API buffers requests that come in during startup, to avoid errors when the server is not yet ready. [#15](https://github.com/eth-sri/lmql/pull/15), thanks to [@chrispan](https://github.com/chrispan).\\n - OpenAI request parallelization no longer leads to an error on Linux systems, with regards to worker processes [#6](https://github.com/eth-sri/lmql/issues/6).\\n\\n### Preview\\n\\nApart from the changes above, we are also working on a number of other features, including:\\n\\n* **llama.cpp support** as started in [this PR](https://github.com/eth-sri/lmql/pull/18), thanks to [@CircArgs](https://github.com/CircArgs).\\n* Support for **Type Constraints**, e.g. 
`type(VAR) is DataClass`, that automatically force the model to produce a value that structurally conforms to the given type. See this [Twitter thread](https://twitter.com/lbeurerkellner/status/1646187597901733889) for more details.\\n* Support for using **Antlr parsers** during query execution, to force the model to produce a value that conforms to a given grammar. \\n\\n* **Extending Logit Masking to OpenAI Chat Models**. This will enable full support for LMQL constraints with e.g. `chatgpt` and `gpt-4` models. See [#25](https://github.com/eth-sri/lmql/pull/25), thanks to [@kharvd](https://github.com/kharvd).","html":"

LMQL Release 0.0.5

\\n

April 17, 2023

\\n

Today we are releasing version 0.0.5 of LMQL. This release focuses on stability and performance improvements. For a detailed list of changes, please see below. We are particularly excited about the first community contributions that have been merged as part of this release, with many more in the works.

\\n

lmql==0.0.5 has been published on PyPI, based on the current main branch of the GitHub repository. The updated version has also been deployed to the browser-based lmql.ai/playground.

\\n

Changelog

\\n
    \\n
  • \\n

    Decoder Performance The argmax and sample decoders have undergone some optimizations, allowing them to run faster. This results in a 20-30% speed-up on common query workloads. #24.

    \\n
  • \\n
  • \\n

    Postprocessing Semantics Internally, LMQL now allows constraints to implement postprocessing semantics. This is used to convert variable values after they have been completed, to a more normalized form in the prompt, and to a semantically meaningful data type in the context of the query code. #24.

    \\n

    For example, when using an INT(<var>) constraint on a generated number, the model will be restricted to only generate valid integers, and now, the resulting NUM value will additionally be converted to an int value:

    \\n
    argmax\\n   "My favorite number is: [NUM]\\\\n"\\n   print(type(NUM), NUM * 2) # <class 'int'> 4\\n   "Number times two is {NUM * 2}"\\nfrom\\n   'openai/text-ada-001'\\nwhere\\n   INT(NUM) \\n
    \\n
  • \\n
  • \\n

    Core Interpreter A complete reimplementation of the LMQL core interpreter has been completed. This fixes a couple of minor issues and overall, improves reliability and performance when dealing with branching decoding algorithms. #24.

    \\n
  • \\n
  • \\n

    Playground Locally and when used in-browser, the LMQL Playground now streams debugger information from the LMQL interpreter incrementally. This leads to speed-ups when running in the Playground, especially with longer outputs. #27f9a8ad.

    \\n
  • \\n
  • \\n

    Other Fixes:

    \\n
      \\n
    • When used from within Python (as decorated function), LMQL code no longer has to be doubly-escaped, e.g. you can now write STOPS_AT(VAR, "\\\\n") instead of STOPS_AT(VAR, "\\\\\\\\n")
    • \\n
    • The LMQL inference API buffers requests that come in during startup, to avoid errors when the server is not yet ready. #15, thanks to @chrispan.
    • \\n
    • OpenAI request parallelization no longer leads to an error on Linux systems, with regards to worker processes #6.
    • \\n
    \\n
  • \\n
\\n

Preview

\\n

Apart from the changes above, we are also working on a number of other features, including:

\\n
    \\n
  • \\n

    llama.cpp support as started in this PR, thanks to @CircArgs.

    \\n
  • \\n
  • \\n

    Support for Type Constraints, e.g. type(VAR) is DataClass, that automatically force the model to produce a value that structurally conforms to the given type. See this Twitter thread for more details.

    \\n
  • \\n
  • \\n

    Support for using Antlr parsers during query execution, to force the model to produce a value that conforms to a given grammar.

    \\n
  • \\n
  • \\n

    Extending Logit Masking to OpenAI Chat Models. This will enable full support for LMQL constraints with e.g. chatgpt and gpt-4 models. See #25, thanks to @kharvd.

    \\n
  • \\n
\\n","frontmatter":{"date":"2023-04-17T00:00:00.000Z","title":"LMQL Release 0.0.5"},"excerpt":"","url":"/blog/posts/release-0.0.5.html"}]');const h={class:"posts"},d={class:"post"},u=["href"],m=["innerHTML"],v=JSON.parse('{"title":"Blog","description":"","frontmatter":{"title":"Blog","layout":"doc","aside":false,"outline":false},"headers":[],"relativePath":"blog/index.md","filePath":"blog/index.md"}'),g={name:"blog/index.md"},f=Object.assign(g,{setup(y){function b(s){return s}return(s,w)=>(a(),t("div",null,[(a(!0),t(r,null,i(l(p),n=>(a(),t("div",h,[e("div",d,[e("a",{href:n.url},[e("h1",null,c(n.frontmatter.title),1)],8,u),e("div",{class:"body",innerHTML:n.html},null,8,m)])]))),256))]))}}),q=o(f,[["__scopeId","data-v-61c06c99"]]);export{v as __pageData,q as default}; +import{_ as o,o as a,c as t,F as r,D as i,l,k as e,t as c}from"./chunks/framework.980cae92.js";const p=JSON.parse('[{"src":"---\\ndate: 2024-02-14 10:10:00\\ntitle: LMQL Developer Survey\\n---\\n\\n# LMQL Developer Survey\\n\\n\\nFebruary 14, 2024\\n\\n\\"image\\"\\n\\nWe have started a new initiative called the **LMQL developer survey**. With this short survey we have the goal of learning more from everyone around the LMQL and the bigger LLM community. We are looking for some broader feedback signals of how and what people are using LMQL for or would like to use it for. \\n\\nThe outcome of this survey will help shape our work around the next major version of LMQL.\\n\\nYou can find the survey here: https://forms.gle/pGvAicNpUhS1rAkK9.\\n","html":"

LMQL Developer Survey

\\n

February 14, 2024

\\n\\"image\\"\\n

We have started a new initiative called the LMQL developer survey. With this short survey we have the goal of learning more from everyone around the LMQL and the bigger LLM community. We are looking for some broader feedback signals of how and what people are using LMQL for or would like to use it for.

\\n

The outcome of this survey will help shape our work around the next major version of LMQL.

\\n

You can find the survey here: https://forms.gle/pGvAicNpUhS1rAkK9.

\\n","frontmatter":{"date":"2024-02-14T10:10:00.000Z","title":"LMQL Developer Survey"},"excerpt":"","url":"/blog/posts/developer-survey.html"},{"src":"---\\ndate: 2023-10-10 10:10:00\\ntitle: LMQL 0.7 brings Procedural Prompt Programming\\n---\\n\\n# LMQL 0.7 brings Procedural Prompt Programming\\n\\nOctober 10, 2023\\n\\nToday, we are releasing LMQL 0.7. This series is the biggest update since the original release, including many community contributions. Next to several new main-line features like nested queries, the Generations API and the Chat API, it also includes several *experimental preview features*, allowing you to experiment with new incoming functionality before it is fully released.\\n\\nLMQL 0.7 has also moved to [semantic versioning](https://semver.org) with the direct predecessor being 0.0.6.6. This means that the next feature release will be 0.8, and the next bugfix release will be 0.7.1.\\n\\n## Nested Queries for Procedural Prompt Programming\\n\\nIn 0.7, you can now use [Nested Queries](../../docs/language/nestedqueries.md) to call an LMQL query as a nested function in the context of another query. For this, LMQL implements procedural programming for prompting. To illustrate, consider the following example:\\n\\n```lmql\\n# chain of thought prompting strategy\\n@lmql.query\\ndef chain_of_thought():\\n \'\'\'lmql\\n \\"A: Let\'s think step by step.\\\\n [REASONING]\\"\\n \\"Therefore the answer is[ANSWER]\\" where STOPS_AT(ANSWER, \\".\\")\\n return ANSWER.strip()\\n \'\'\'\\n\\n# top-level query\\n\\"Q: It is August 12th, 2020. What date was it \\\\\\n 100 days ago? [ANSWER: chain_of_thought]\\"\\n\\nANSWER # May 4th, 2020\\n```\\n\\nWe first define a simple LMQL function `chain_of_thought` to do *chain-of-thought prompting*. In our top-level query, we can then call this function to decode an answer using the `[ANSWER: chain_of_thought]` syntax. During execution, LMQL then inserts the instructions and constraints from `chain_of_thought` into the top-level query, generates a value for `ANSWER`, and then removes the instructions and constraints again, only returning the final result.\\n\\n**Nested queries are Prompt Function Calls.** This design of nested queries is inspired by the idea of *function or procedure calls* in traditional programming. Removing intermediate instructions and constraints also has parallels to the idea of *stack unwinding*, a technique to implement function calls in low-level languages. \\n\\nLMQL transfers these ideas to prompting, inheriting the general benefits of procedural programming:\\n\\n- **Encapsulation and Model Focus** Nested Queries encapsulate and hide the prompting logic used to generate `ANSWER`, which means our top-level query is much cleaner and more concise. Further, by hiding intermediate instructions from the model in the context of the top-level query, we can reduce noise in the overall prompt, allowing the model to focus on the currently relevant information only, and not get distracted by previous intermediate steps.\\n\\n- **Nesting and Reuse** LMQL queries can be nested arbitrarily deep, allowing you to reuse and combine queries modularly. For example, you could define a query `get_year` to extract a year from the response text, and then use this query in `chain_of_thought` to extract the date from the question. 
By achieving modularity for sub-prompts, nested queries also allow you to reuse prompts across different query programs.\\n\\nTo learn more about nested queries, please refer to the [relevant chapter in the documentation](../../docs/language/nestedqueries.md).\\n\\n## Generations API\\n\\nLMQL 0.7 adds the *Generations API*, a lightweight high-level library for LMQL-based text generation and scoring. The API was designed to be easy to use and does not require users to write any LMQL themselves:\\n\\n```python\\n# obtain a model instance\\nm: lmql.LLM = lmql.model(\\"openai/gpt-3.5-turbo-instruct\\")\\n# simple generation\\nm.generate_sync(\\"Hello\\", max_tokens=10)\\n# -> Hello, I am a 23 year old female.\\n```\\n
\\n\\nFunctions such as [`LLM.generate`](../../docs/lib/generations.html#llm-generate) and [`LLM.score`](../../docs/lib/generations.html#llm-score) allow you to generate and score text using any LMQL-support inference backend. The Generations API is also seamlessly compatible with standard LMQL, allowing you to switch and combine the two as needed. \\n\\nFor more information, please refer to the [documentation](../../docs/lib/generations.html).\\n\\n## Chat \\n\\nLMQL 0.7 adds a new [Chat API](../../docs/lib/chat.md), allowing you to easily deploy chatbots with just a couple lines of LMQL.\\n\\n\\n\\nLMQL Chat comes with custom output writers, that allow you to easily stream chatbot input and output over a variety of channels, including WebSockets, HTTP, and SSE. A simple `lmql chat` CLI tool was also added, that allows you to instantly launch your LMQL queries as fully interactive chatbots. \\n\\nWe also provide documentation resources on how to get started with chatbot development with LMQL, including chapters on Chatbot Serving, Internal Reasoning and Defending against Prompt Injection. For more information, please refer to the [documentation](../../docs/lib/chat.md).\\n\\n## Backends\\n\\nLMQL 0.7 ships with three new backends for inference and tokenization:\\n\\n* LMQL 0.7 adds support for OpenAI\'s newly released `gpt-3.5-turbo-instruct` model. In contrast to other 3.5 series models, this variant supports the *Completions API*, which means that LMQL constraints are compatible with it.\\n\\n* LMQL now supports hosting models on [replicate.com](https://replicate.com) infrastructure, allowing you to run LMQL models in the cloud. To learn more, please refer to the [documentation](../../docs/models/replicate.md). Thanks a lot to community member [@charles-dyfis-net](https://github.com/charles-dyfis-net) for contributing this!\\n\\n* LMQL added `sentencepiece` as an additional tokenization backend, specifically for `llama.cpp` models. This means, `llama.cpp` models can now be used without requiring `transformers` for tokenization. Thanks a lot to community member [@khushChopra](https://github.com/khushChopra) for contributing this.\\n\\n\\n## Inference Certificates\\n\\nTo make LLM inference more transparent and re-producible, LMQL 0.7 also adds [*inference certificates*](../../docs/lib/inference-certificates.md). An inference certificate is a simple data structure that records essential information needed to reproduce an inference result. Certificates can be generated for any LLM call that happens in an LMQL context.\\n\\nTo produce an inference certificate, pass `certificate=True` or `certificate=` to your query or generate call:\\n\\n```truncated\\n# call and save certificate\\nsay_hello(certificate=\\"my-certificate.json\\")\\n```\\n\\nThe resulting certificate file provides a way to document, trace and reproduce LLM inference results by recording the *exact (tokenized) prompts* and information on the *environment and generation parameters*.\\n\\nThis can be helpful to better understand what is happening during inference, to debug issues, and to reproduce results. 
It also offers a way to document LLM failures, to better guide the discussion around the concrete capabilities and limitations of LLMs.\\n\\n## Decorators\\n\\n[Variable Decorators](../../docs/language/decorators.md) offer a new and simple way to call custom Python functions as part of the core generation loop in LMQL:\\n\\n```lmql\\ndef screaming(value):\\n \\"\\"\\"Decorator to convert a string to uppercase\\"\\"\\"\\n return value.upper()\\n\\n\\"Say \'this is a test\':[@screaming TEST]\\"\\n```\\n```promptdown\\nSay \'this is a test\': [TEST| THIS IS A TEST]\\n```\\n\\nSimilar to Python decorators, LMQL decorators are functions that take a variable as input and can wrap and modify its value. \\n\\nIn the example above, we use the `@screaming` decorator to convert the value of `TEST` to uppercase. Decorators can be used to implement a wide range of custom functionality, including string normalization, datatype conversion, and more. LMQL also provides decorators that allow to stream or pre-process data during generation. For more information, please refer to the [documentation](../../docs/language/decorators.md).\\n\\n\\n## Documentation Update\\n\\nThe website and many chapters of the LMQL documentation have also been updated and extended and now include more examples and explanations. We have updated the visual design to make it easier to read and navigate. \\n\\nThe documentation now also includes a *work-in-progress* [Language Reference](/docs/language/reference.md), which aims to provide a more comprehensive and formal description of LMQL\'s syntax and semantics, all in one place.\\n\\n## Preview Features\\n\\nApart from many new core features, LMQL 0.7 also ships with several *experimental preview features*, allowing you to test drive new functionality before it has fully stabilized and is released as main-line functionality.\\n\\nThese features are marked as *experimental* and are not yet fully supported. We are releasing them to gather feedback and to allow users to test them out early on. Note that these features are subject to change and may be removed/modified in future releases.\\n\\n### LMQL Actions Preview\\n\\n*LMQL Actions* is the first version of LMQL\'s function calling layer. It allows you to expose arbitrary Python functions to the LLM reasoning loop and lets the model call them during generation. Function demonstration and the calling protocol can be both handled automatically by the LMQL runtime, allowing for simple use like this:\\n\\n```{lmql}\\ndef wiki(q): ...\\ndef calc(expr): ...\\n\\n\\"Q: What is the population of the US and Germany combined?\\"\\n\\"A: [REASONING]\\" where inline_use(REASONING, [wiki, calc])\\n```\\n\\nA future release will bring more documentation and details on Actions, including how to use and customize it for your use cases. Until then we invite everyone to try and hack with the current implementation, fully contained in [`actions.py`](https://github.com/eth-sri/lmql/blob/main/src/lmql/lib/actions.py).\\n\\n### Regex Constraints Preview\\n\\nLMQL now has support for regex constraints, allowing you to use regular expressions to constrain the output of a variable. For example, the following query will always generate a valid date of the form `DD/MM`:\\n\\n```{lmql}\\n\\"It\'s the last day of June so today is [RESPONSE]\\" where REGEX(RESPONSE, r\\"[0-9]{2}/[0-9]{2}\\")\\n```\\n\\n### Types / Datatype Constraints Preview\\n\\nLMQL is moving towards fully typed LLM generation. 
On the way there, we have started to add support for *dataclass constraints*, allowing you to constrain the output of a variable to a specific structured output schema:\\n\\n```lmql\\nimport lmql\\nfrom dataclasses import dataclass\\n\\n@dataclass\\nclass Person:\\n name: str\\n age: int\\n job: str\\n\\n\\"Alice is a 21 years old and works as an engineer at LMQL Inc in Zurich, Switzerland.\\\\n\\"\\n\\"Structured: [PERSON_DATA]\\\\n\\" where type(PERSON_DATA) is Person\\n\\nPERSON_DATA\\n# Person(name=\'Alice\', age=21, job=\'engineer\')\\n```\\n\\nTo achieve this, LMQL leverages constrained generation to make sure the LLM always produces all information required to populate a valid `Person` object. The resulting `PERSON_DATA` object can then be directly used like a regular Python object. Types are still in an early stage and we are working on adding more features and functionality. \\n\\n\\n## Other Changes\\n\\n* The LMQL playground can now be used from the Windows `cmd.exe`. Thanks a lot to community member [@mosheduminer](https://github.com/mosheduminer).\\n\\n* LMQL/LMTP model backends can now be accessed [as Langchain `LLM` objects](https://github.com/eth-sri/lmql/blob/main/src/lmql/models/lmtp/lmtp_langchain.py) to use them in your Langchain pipelines. Thanks to [@4onon](https://github.com/4onon) for contributing this. \\n\\n* LMQL can now be [installed as a NixOS package](https://github.com/eth-sri/lmql/tree/main/scripts/flake.d). Thanks to [@charles-dyfis-net](https://github.com/charles-dyfis-net) for contributing this.\\n\\n## 🎬 And that\'s a wrap!\\n\\nLMQL 0.7 is a big release and we are excited to see what you will build with it. As always, please let us know if you have any questions, suggestions or bug reports, on [GitHub](https://github.com/eth-sri/lmql), [Discord](https://discord.gg/7eJP4fcyNT), [Twitter](https://twitter.com/lmqllang) or via [hello@lmql.ai](mailto:hello@lmql.ai).\\n","html":"

LMQL 0.7 brings Procedural Prompt Programming

\\n

October 10, 2023

\\n

Today, we are releasing LMQL 0.7. This series is the biggest update since the original release, including many community contributions. Next to several new main-line features like nested queries, the Generations API and the Chat API, it also includes several experimental preview features, allowing you to experiment with new incoming functionality before it is fully released.

\\n

LMQL 0.7 has also moved to semantic versioning with the direct predecessor being 0.0.6.6. This means that the next feature release will be 0.8, and the next bugfix release will be 0.7.1.

\\n

Nested Queries for Procedural Prompt Programming

\\n

In 0.7, you can now use Nested Queries to call an LMQL query as a nested function in the context of another query. For this, LMQL implements procedural programming for prompting. To illustrate, consider the following example:

\\n
lmql
# chain of thought prompting strategy\\n@lmql.query\\ndef chain_of_thought():\\n    '''lmql\\n    "A: Let's think step by step.\\\\n [REASONING]"\\n    "Therefore the answer is[ANSWER]" where STOPS_AT(ANSWER, ".")\\n    return ANSWER.strip()\\n    '''\\n\\n# top-level query\\n"Q: It is August 12th, 2020. What date was it \\\\\\n    100 days ago? [ANSWER: chain_of_thought]"\\n\\nANSWER # May 4th, 2020\\n
\\n

We first define a simple LMQL function chain_of_thought to do chain-of-thought prompting. In our top-level query, we can then call this function to decode an answer using the [ANSWER: chain_of_thought] syntax. During execution, LMQL then inserts the instructions and constraints from chain_of_thought into the top-level query, generates a value for ANSWER, and then removes the instructions and constraints again, only returning the final result.

\\n

Nested queries are Prompt Function Calls. This design of nested queries is inspired by the idea of function or procedure calls in traditional programming. Removing intermediate instructions and constraints also has parallels to the idea of stack unwinding, a technique to implement function calls in low-level languages.

\\n

LMQL transfers these ideas to prompting, inheriting the general benefits of procedural programming:

\\n
    \\n
  • \\n

    Encapsulation and Model Focus Nested Queries encapsulate and hide the prompting logic used to generate ANSWER, which means our top-level query is much cleaner and more concise. Further, by hiding intermediate instructions from the model in the context of the top-level query, we can reduce noise in the overall prompt, allowing the model to focus on the currently relevant information only, and not get distracted by previous intermediate steps.

    \\n
  • \\n
  • \\n

    Nesting and Reuse LMQL queries can be nested arbitrarily deep, allowing you to reuse and combine queries modularly. For example, you could define a query get_year to extract a year from the response text, and then use this query in chain_of_thought to extract the date from the question. By achieving modularity for sub-prompts, nested queries also allow you to reuse prompts across different query programs.

    \\n
  • \\n
\\n

To learn more about nested queries, please refer to the relevant chapter in the documentation.

\\n

Generations API

\\n

LMQL 0.7 adds the Generations API, a lightweight high-level library for LMQL-based text generation and scoring. The API was designed to be easy to use and does not require users to write any LMQL themselves:

\\n
python
# obtain a model instance\\nm: lmql.LLM = lmql.model("openai/gpt-3.5-turbo-instruct")\\n# simple generation\\nm.generate_sync("Hello", max_tokens=10)\\n# -> Hello, I am a 23 year old female.\\n
\\n

\\n

Functions such as LLM.generate and LLM.score allow you to generate and score text using any LMQL-supported inference backend. The Generations API is also seamlessly compatible with standard LMQL, allowing you to switch and combine the two as needed.

\\n
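A small usage sketch combining the two: generate_sync is shown above, while the score_sync call and its (prompt, continuations) signature are assumptions based on the linked LLM.score documentation.

```python
import lmql

m: lmql.LLM = lmql.model("openai/gpt-3.5-turbo-instruct")

# plain generation (as above)
print(m.generate_sync("A one-line fun fact about Zurich:", max_tokens=20))

# scoring candidate continuations against a prompt (assumed signature)
print(m.score_sync("The capital of Switzerland is", [" Bern", " Zurich"]))
```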

For more information, please refer to the documentation.

\\n

Chat

\\n

LMQL 0.7 adds a new Chat API, allowing you to easily deploy chatbots with just a couple lines of LMQL.

\\n\\n

LMQL Chat comes with custom output writers that allow you to easily stream chatbot input and output over a variety of channels, including WebSockets, HTTP, and SSE. A simple lmql chat CLI tool was also added that allows you to instantly launch your LMQL queries as fully interactive chatbots.

\\n

We also provide documentation resources on how to get started with chatbot development with LMQL, including chapters on Chatbot Serving, Internal Reasoning and Defending against Prompt Injection. For more information, please refer to the documentation.

\\n

Backends

\\n

LMQL 0.7 ships with three new backends for inference and tokenization:

\\n
    \\n
  • \\n

    LMQL 0.7 adds support for OpenAI\'s newly released gpt-3.5-turbo-instruct model. In contrast to other 3.5 series models, this variant supports the Completions API, which means that LMQL constraints are compatible with it.

    \\n
  • \\n
  • \\n

    LMQL now supports hosting models on replicate.com infrastructure, allowing you to run LMQL models in the cloud. To learn more, please refer to the documentation. Thanks a lot to community member @charles-dyfis-net for contributing this!

    \\n
  • \\n
  • \\n

    LMQL added sentencepiece as an additional tokenization backend, specifically for llama.cpp models. This means llama.cpp models can now be used without requiring transformers for tokenization. Thanks a lot to community member @khushChopra for contributing this.

    \\n
  • \\n
\\n

Inference Certificates

\\n

To make LLM inference more transparent and reproducible, LMQL 0.7 also adds inference certificates. An inference certificate is a simple data structure that records essential information needed to reproduce an inference result. Certificates can be generated for any LLM call that happens in an LMQL context.

\\n

To produce an inference certificate, pass certificate=True or certificate=<filename> to your query or generate call:

\\n
truncated
# call and save certificate\\nsay_hello(certificate="my-certificate.json")\\n
\\n

The resulting certificate file provides a way to document, trace and reproduce LLM inference results by recording the exact (tokenized) prompts and information on the environment and generation parameters.

\\n

This can be helpful to better understand what is happening during inference, to debug issues, and to reproduce results. It also offers a way to document LLM failures, to better guide the discussion around the concrete capabilities and limitations of LLMs.

\\n

Decorators

\\n

Variable Decorators offer a new and simple way to call custom Python functions as part of the core generation loop in LMQL:

\\n
lmql
def screaming(value):\\n    """Decorator to convert a string to uppercase"""\\n    return value.upper()\\n\\n"Say 'this is a test':[@screaming TEST]"\\n
\\n
promptdown

Say \'this is a test\': TEST THIS IS A TEST\\n

\\n

Similar to Python decorators, LMQL decorators are functions that take a variable as input and can wrap and modify its value.

\\n

In the example above, we use the @screaming decorator to convert the value of TEST to uppercase. Decorators can be used to implement a wide range of custom functionality, including string normalization, datatype conversion, and more. LMQL also provides decorators that allow you to stream or pre-process data during generation. For more information, please refer to the documentation.

\\n

Documentation Update

\\n

The website and many chapters of the LMQL documentation have also been updated and extended and now include more examples and explanations. We have updated the visual design to make it easier to read and navigate.

\\n

The documentation now also includes a work-in-progress Language Reference, which aims to provide a more comprehensive and formal description of LMQL\'s syntax and semantics, all in one place.

\\n

Preview Features

\\n

Apart from many new core features, LMQL 0.7 also ships with several experimental preview features, allowing you to test drive new functionality before it has fully stabilized and is released as main-line functionality.

\\n

These features are marked as experimental and are not yet fully supported. We are releasing them to gather feedback and to allow users to test them out early on. Note that these features are subject to change and may be removed/modified in future releases.

\\n

LMQL Actions Preview

\\n

LMQL Actions is the first version of LMQL\'s function calling layer. It allows you to expose arbitrary Python functions to the LLM reasoning loop and lets the model call them during generation. Function demonstration and the calling protocol can both be handled automatically by the LMQL runtime, allowing for simple use like this:

\\n
def wiki(q): ...\\ndef calc(expr): ...\\n\\n"Q: What is the population of the US and Germany combined?"\\n"A: [REASONING]" where inline_use(REASONING, [wiki, calc])\\n
\\n

A future release will bring more documentation and details on Actions, including how to use and customize it for your use cases. Until then we invite everyone to try and hack with the current implementation, fully contained in actions.py.

\\n

Regex Constraints Preview

\\n

LMQL now has support for regex constraints, allowing you to use regular expressions to constrain the output of a variable. For example, the following query will always generate a valid date of the form DD/MM:

\\n
"It's the last day of June so today is [RESPONSE]" where REGEX(RESPONSE, r"[0-9]{2}/[0-9]{2}")\\n
\\n

Types / Datatype Constraints Preview

\\n

LMQL is moving towards fully typed LLM generation. On the way there, we have started to add support for dataclass constraints, allowing you to constrain the output of a variable to a specific structured output schema:

\\n
lmql
import lmql\\nfrom dataclasses import dataclass\\n\\n@dataclass\\nclass Person:\\n    name: str\\n    age: int\\n    job: str\\n\\n"Alice is a 21 years old and works as an engineer at LMQL Inc in Zurich, Switzerland.\\\\n"\\n"Structured: [PERSON_DATA]\\\\n" where type(PERSON_DATA) is Person\\n\\nPERSON_DATA\\n# Person(name='Alice', age=21, job='engineer')\\n
\\n

To achieve this, LMQL leverages constrained generation to make sure the LLM always produces all information required to populate a valid Person object. The resulting PERSON_DATA object can then be directly used like a regular Python object. Types are still in an early stage and we are working on adding more features and functionality.

\\n

Other Changes

\\n\\n

🎬 And that\'s a wrap!

\\n

LMQL 0.7 is a big release and we are excited to see what you will build with it. As always, please let us know if you have any questions, suggestions or bug reports, on GitHub, Discord, Twitter or via hello@lmql.ai.

\\n","frontmatter":{"date":"2023-10-10T10:10:00.000Z","title":"LMQL 0.7 brings Procedural Prompt Programming"},"excerpt":"","url":"/blog/posts/release-0.7.html"},{"src":"---\\ndate: 2023-07-25\\ntitle: LMQL v0.0.6.6\\n---\\n\\nJuly 25, 2023\\n\\nWe just released LMQL *0.0.6.6*. This is a minor update with a couple of smaller fixes and improvements.\\n\\n* `lmql.F` now supports positional arguments:\\n\\n```python\\ngreet = lmql.F(\\"Greet {a} and {b}: [GREETING]\\")\\n\\n# call with positional arguments\\ngreet(\\"Alice\\", \\"Bob\\") # Greet Alice and Bob: Hello!\\n# call with keyword arguments\\ngreet(a=\\"Alice\\", b=\\"Bob\\") # Greet Alice and Bob: Hello!\\n```\\n\\n* We improved the error handling of the `llama.cpp` backend and fixed a bug with model identifier parsing. \\n\\n* We also fixed a bug with the LMTP scheduler, where CPU load was high even when no tasks were present. Thanks to community member [@4onen](https://github.com/4onen) for reporting and fixing this!\\n\\n* Added backend support for `auto_gptq` quantized models, contributed by community member [@meditans](https://github.com/meditans).\\n\\n* We fixed an issue where for Azure OpenAI models, a dummy configuration `api.env` was needed. See our [documentation](../../docs/models/azure.md) for details. Thanks to community members Missing and [@hooman-bayer](https://github.com/hooman-bayer) for their feedback and contributions to this.\\n\\n> **Versioning Note**: 0.0.6.6 is the last release with two leading zeros. Starting with the next release, LMQL will adopt semantic versioning and use a single leading zero, i.e. 0.6.7.","html":"

July 25, 2023

\\n

We just released LMQL 0.0.6.6. This is a minor update with a couple of smaller fixes and improvements.

\\n
    \\n
  • lmql.F now supports positional arguments:
  • \\n
\\n
python
greet = lmql.F("Greet {a} and {b}: [GREETING]")\\n\\n# call with positional arguments\\ngreet("Alice", "Bob") # Greet Alice and Bob: Hello!\\n# call with keyword arguments\\ngreet(a="Alice", b="Bob") # Greet Alice and Bob: Hello!\\n
\\n
    \\n
  • \\n

    We improved the error handling of the llama.cpp backend and fixed a bug with model identifier parsing.

    \\n
  • \\n
  • \\n

    We also fixed a bug with the LMTP scheduler, where CPU load was high even when no tasks were present. Thanks to community member @4onen for reporting and fixing this!

    \\n
  • \\n
  • \\n

    Added backend support for auto_gptq quantized models, contributed by community member @meditans.

    \\n
  • \\n
  • \\n

    We fixed an issue where for Azure OpenAI models, a dummy configuration api.env was needed. See our documentation for details. Thanks to community members Missing and @hooman-bayer for their feedback and contributions to this.

    \\n
  • \\n
\\n
\\n

Versioning Note: 0.0.6.6 is the last release with two leading zeros. Starting with the next release, LMQL will adopt semantic versioning and use a single leading zero, i.e. 0.6.7.

\\n
\\n","frontmatter":{"date":"2023-07-25T00:00:00.000Z","title":"LMQL v0.0.6.6"},"excerpt":"","url":"/blog/posts/release-0.0.6.6.html"},{"src":"---\\ndate: 2023-07-13\\ntitle: LMQL becomes simpler and adds llama.cpp\\n---\\n\\n# LMQL becomes simpler and adds llama.cpp\\n\\nJuly 13, 2023\\n\\nToday we are releasing LMQL 0.0.6.5. This update contains a major simplification of the LMQL syntax, moving it much closer to standard Python. It also includes a `llama.cpp` based inference backend, several bug fixes and other minor improvements.\\n\\nYou can try the latest version of LMQL in your browser at [lmql.ai/playground](https://lmql.ai/playground) or install it via `pip install lmql`.\\n\\n## One Line Is All It Takes\\n\\nMost notably, 0.0.6.5 comes with several simplifications of the core syntax of LMQL. Of course, all changes are backwards compatible, so you can continue to use your existing query code and move to the new version without any changes.\\n\\nWith this, we aim to minimize syntactic overhead, employing sensible defaults to enable more concise programs like the following:\\n\\n```{lmql}\\nname::simple-syntax\\n\\n\\"One line is all it takes [CONTINUATION]\\"\\n```\\n```promptdown\\nOne line is all it takes [CONTINUATION|Fallin\' in love with me.]\\n```\\n\\n**Sensible Defaults** This is possible because LMQL now automatically assumes `argmax` and `openai/text-davinci-003` as (configurable) default model. If you prefer to use \\na different model or custom decoder settings, you can still specify them explicitly, e.g. in the `@lmql.query` decorator function as demonstrated later in this post.\\n\\nWithout any additional configuration, the simple query code above translates to a full LMQL program like this:\\n\\n```{lmql}\\nname::simple-syntax-default\\n\\nargmax \\"One line is all it takes [CONTINUATION]\\" from \\"openai/text-davinci-003\\"\\n```\\n\\n
\\n\\n### Inline Constraints\\n\\nLMQL now allows you to specify several inline `where` constraints. This enables constraints that refer to local program variables, which means constraints can now be dependent on previous model outputs.\\n\\n```{lmql}\\nname::list-with-array\\n\\n\\"A list of awesome Dua Lipa songs:\\\\n\\"\\nsongs = []\\n\\n\\"- New Rules\\\\n\\"\\nfor i in range(4):\\n \\"-[SONG]\\\\n\\" where STOPS_BEFORE(SONG, \\"\\\\n\\")\\n songs.append(SONG)\\n\\n\\"Out of these, my favorite is[FAVORITE]\\" where FAVORITE in songs\\n```\\n```promptdown\\nA list of awesome Dua Lipa songs:⏎\\n- New Rules\\n- [SONG|Don\'t Start Now]\\n- [SONG|IDGAF]\\n- [SONG|Be the One]\\n- [SONG|Blow Your Mind (Mwah)]\\nOut of these, my favorite is [FAVORITE|Don\'t Start Now]\\n```\\n\\nNote also how in this example LMQL code now reads much more like standard Python code, without any additional level of indentation. \\n\\n
\\n\\n### `@lmql.query` functions\\n\\nThe overhauled syntax also makes LMQL much easier on the eyes when used with the `@lmql.query` [function decorator in Python](/docs/lib/python.md):\\n\\n```python\\nimport lmql\\nimport json\\n\\n@lmql.query(model=\\"openai/text-curie-001\\", temperature=0.9)\\ndef summarize(): \\n \'\'\'lmql\\n \\"\\"\\"\\n Provide a summary of Dua Lipa, the pop icon:\\n {{\\n \\"name\\": \\"[STRING_VALUE]\\",\\n \\"chart_position\\": [INT_VALUE],\\n \\"top_songs\\": [[\\n \\"[STRING_VALUE]\\",\\n \\"[STRING_VALUE]\\"\\n ]]\\n }}\\n \\"\\"\\" where STOPS_BEFORE(STRING_VALUE, \'\\"\') and INT(INT_VALUE) and len(TOKENS(INT_VALUE)) < 3\\n \\n return json.loads(context.prompt.split(\\"pop icon:\\",1)[1])\\n \'\'\'\\n\\nprint(summarize()) # {\'name\': \'Dua Lipa\', \'chart_position\': 3415, \'top_songs\': [\'New Rules\', \'Havana\']}\\n\\n```\\n\\n
\\n\\n### `lmql.F` Lambda Functions\\n\\nBased on LMQL\'s new minimal syntax, we introduce a novel and concise way to write LLM-based lambda functions. This offers a lightweight entryway to get started with integrated small LLM-based utilities in your code, without having to write a full LMQL program.\\n\\n```python\\nimport lmql\\n\\nsummarize = lmql.F(\\"Summarize the following in a few words: {data}: [SUMMARY]\\")\\nmain_subject = lmql.F(\\"What is the main subject (noun) of the following text? {data}: [SUBJECT]\\", \\n \\"len(TOKENS(SUBJECT)) < 20\\")\\n\\ntext = \\"In LMQL, users can specify high-level, logical constraints ...\\"\\n\\nsummarize(data=text) # LMQL enables high-level constraints to be enforced during text \\n # generation, simplifying multi-part prompting and integration.\\nmain_subject(data=text) # Language Model Query Language (LMQL)\\n\\n```\\n\\n
\\n
\\n\\n## `llama.cpp` Inference Backend\\n\\nLMQL now also fully integrates with the excellent [llama.cpp](https://github.com/ggerganov/llama.cpp) C++ implementation of a number of Transformer-based language models. \\n\\nUsing `llama.cpp` from LMQL is as simple as specifying it in the `from` clause of a query:\\n\\n```{lmql}\\nname::llama-cpp-blog\\n\\nargmax \\"Say \'this is a test\':[RESPONSE]\\" from \\"llama.cpp:.bin\\"\\n```\\n\\nWe support, both, in-process loading of `llama.cpp`, as well as remote inference via `lmql serve-model`. To learn more about `llama.cpp` and how to use it with LMQL, check out the corresponding chapter in the LMQL [documentation](/docs/models/llama.cpp.md).\\n\\n
\\n\\n## Other Changes\\n\\n* LMQL now includes a `random` model backend, which randomly samples tokens from the GPT-2 vocabulary. This is useful for debugging and testing purposes and can be used for data generation in the context of highly constrained query programs.\\n\\n* Two caching issues have been fixed, avoiding cache collisions which could lead to repeated model outputs.\\n\\n* More robust query string parsing, allowing for [robust escaping](/docs/language/scripted-prompting.md#escaping) of special characters `[`, `]`, `{` and `}`.\\n\\n* Added support for `transformers` based Llama models and the associated (fast) implementation of HF tokenizers.\\n\\n* Simplified Azure OpenAI support, see the relevant chapter in the [documentation](/docs/models/azure.md).\\n\\nWe thank community members [@minosvasilias](https://github.com/minosvasilias) and [@CircArgs](https://github.com/CircArgs) for their contribution to this release.","html":"

LMQL becomes simpler and adds llama.cpp

\\n

July 13, 2023

\\n

Today we are releasing LMQL 0.0.6.5. This update contains a major simplification of the LMQL syntax, moving it much closer to standard Python. It also includes a llama.cpp based inference backend, several bug fixes and other minor improvements.

\\n

You can try the latest version of LMQL in your browser at lmql.ai/playground or install it via pip install lmql.

\\n

One Line Is All It Takes

\\n

Most notably, 0.0.6.5 comes with several simplifications of the core syntax of LMQL. Of course, all changes are backwards compatible, so you can continue to use your existing query code and move to the new version without any changes.

\\n

With this, we aim to minimize syntactic overhead, employing sensible defaults to enable more concise programs like the following:

\\n
"One line is all it takes [CONTINUATION]"\\n
\\n
promptdown

One line is all it takes CONTINUATIONFallin\' in love with me.\\n

\\n

Sensible Defaults This is possible because LMQL now automatically assumes argmax as the default decoder and openai/text-davinci-003 as the (configurable) default model. If you prefer a different model or custom decoder settings, you can still specify them explicitly, e.g. in the @lmql.query decorator function as demonstrated later in this post.

\\n

Without any additional configuration, the simple query code above translates to a full LMQL program like this:

\\n
argmax "One line is all it takes [CONTINUATION]" from "openai/text-davinci-003"\\n
\\n

\\n

Inline Constraints

\\n

LMQL now allows you to specify several inline where constraints. This enables constraints that refer to local program variables, which means constraints can now be dependent on previous model outputs.

\\n
"A list of awesome Dua Lipa songs:\\\\n"\\nsongs = []\\n\\n"- New Rules\\\\n"\\nfor i in range(4):\\n    "-[SONG]\\\\n" where STOPS_BEFORE(SONG, "\\\\n")\\n    songs.append(SONG)\\n\\n"Out of these, my favorite is[FAVORITE]" where FAVORITE in songs\\n
\\n
promptdown

A list of awesome Dua Lipa songs:⏎\\n- New Rules\\n- SONGDon\'t Start Now\\n- SONGIDGAF\\n- SONGBe the One\\n- SONGBlow Your Mind (Mwah)\\nOut of these, my favorite is FAVORITEDon\'t Start Now\\n

\\n

Note also how in this example LMQL code now reads much more like standard Python code, without any additional level of indentation.

\\n
\\n

@lmql.query functions

\\n

The overhauled syntax also makes LMQL much easier on the eyes when used with the @lmql.query function decorator in Python:

\\n
python
import lmql\\nimport json\\n\\n@lmql.query(model="openai/text-curie-001", temperature=0.9)\\ndef summarize(): \\n    '''lmql\\n    """\\n    Provide a summary of Dua Lipa, the pop icon:\\n    {{\\n      "name": "[STRING_VALUE]",\\n      "chart_position": [INT_VALUE],\\n      "top_songs": [[\\n         "[STRING_VALUE]",\\n         "[STRING_VALUE]"\\n      ]]\\n    }}\\n    """ where STOPS_BEFORE(STRING_VALUE, '"') and INT(INT_VALUE) and len(TOKENS(INT_VALUE)) < 3\\n    \\n    return json.loads(context.prompt.split("pop icon:",1)[1])\\n    '''\\n\\nprint(summarize()) # {'name': 'Dua Lipa', 'chart_position': 3415, 'top_songs': ['New Rules', 'Havana']}\\n\\n
\\n

\\n

lmql.F Lambda Functions

\\n

Based on LMQL\'s new minimal syntax, we introduce a novel and concise way to write LLM-based lambda functions. This offers a lightweight entryway to get started with integrated small LLM-based utilities in your code, without having to write a full LMQL program.

\\n
python
import lmql\\n\\nsummarize = lmql.F("Summarize the following in a few words: {data}: [SUMMARY]")\\nmain_subject = lmql.F("What is the main subject (noun) of the following text? {data}: [SUBJECT]", \\n                      "len(TOKENS(SUBJECT)) < 20")\\n\\ntext = "In LMQL, users can specify high-level, logical constraints ..."\\n\\nsummarize(data=text) # LMQL enables high-level constraints to be enforced during text \\n                     # generation, simplifying multi-part prompting and integration.\\nmain_subject(data=text) # Language Model Query Language (LMQL)\\n\\n
\\n

\\n
\\n

llama.cpp Inference Backend

\\n

LMQL now also fully integrates with the excellent llama.cpp C++ implementation of a number of Transformer-based language models.

\\n

Using llama.cpp from LMQL is as simple as specifying it in the from clause of a query:

\\n
argmax "Say 'this is a test':[RESPONSE]" from "llama.cpp:<PATH TO WEIGHTS>.bin"\\n
\\n

We support both in-process loading of llama.cpp and remote inference via lmql serve-model. To learn more about llama.cpp and how to use it with LMQL, check out the corresponding chapter in the LMQL documentation.

\\n
\\n

Other Changes

\\n
    \\n
  • \\n

    LMQL now includes a random model backend, which randomly samples tokens from the GPT-2 vocabulary. This is useful for debugging and testing purposes and can be used for data generation in the context of highly constrained query programs.

    \\n
  • \\n
  • \\n

    Two caching issues have been fixed, avoiding cache collisions which could lead to repeated model outputs.

    \\n
  • \\n
  • \\n

    More robust query string parsing, allowing for robust escaping of special characters [, ], { and }.

    \\n
  • \\n
  • \\n

    Added support for transformers based Llama models and the associated (fast) implementation of HF tokenizers.

    \\n
  • \\n
  • \\n

    Simplified Azure OpenAI support, see the relevant chapter in the documentation.

    \\n
  • \\n
\\n

We thank community members @minosvasilias and @CircArgs for their contribution to this release.

\\n","frontmatter":{"date":"2023-07-13T00:00:00.000Z","title":"LMQL becomes simpler and adds llama.cpp"},"excerpt":"","url":"/blog/posts/release-0.0.6.5.html"},{"src":"---\\ndate: 2023-06-08\\ntitle: Releasing LMQL v0.0.6.4 LMTP, Azure, Synchronous API, and more\\n---\\n\\n# Releasing LMQL 0.0.6.4: LMTP, Azure, Synchronous API, and more\\n\\nJune 8, 2023\\n\\nAmong many things, this update contains several bug fixes and improvements. The most notable changes are:\\n\\n* **Azure OpenAI support** LMQL now supports OpenAI models that are served via Azure. For more information on how to use Azure models, please see the corresponding chapter in the [documentation](/docs/models/azure.md). Many thanks to [@veqtor](https://github.com/veqtor) for contributing this feature.\\n\\n* **Local Models via the Language Model Transport Protocol** LMQL 0.0.6.4 implements a novel protocol to stream token output from local models, vastly improving performance. In our first benchmarks, we observed a 5-6x speedup for local model inference. For more information on how to use local models, please see the corresponding chapter in the [documentation](/docs/models/hf.md).\\n\\n To learn more about the internals of the new streaming protocol, i.e. the language model transport protocol (LMTP), you can find more details in [this README file](https://github.com/eth-sri/lmql/blob/main/src/lmql/models/lmtp/README.md). In the future, we intend to implement more model backends using LMTP, streamlining communication between LMQL and models.\\n\\n
\\n \\n
\\n LMQL\'s new streaming protocol (LMTP) allows for faster local model inference.\\n
\\n\\n* **Synchronous Python API** Next to an `async/await` based API, LMQL now also provides a synchronous API. This means you no longer need to use `asyncio` to use LMQL from Python. \\n\\n To use the synchronous API, simply declare `@lmql.query` function without the `async` keyword, e.g.\\n\\n ```python\\n import lmql\\n\\n @lmql.query\\n def hello(s: str):\\n \'\'\'lmql\\n argmax \\n \\"Hello {s} [RESPONSE]\\" \\n return RESPONSE\\n from \\n \\"chatgpt\\"\\n \'\'\'\\n\\n print(hello(\\"world\\")) # [\'Hello! How can I assist you today?\']\\n ```\\n\\n If you instead want to use `lmql.run` in a synchronous context, you can now use `lmql.run_sync` instead. To learn more about how LMQL can be used from Python, check out our [documentation](/docs/lib/python.md).\\n\\n* **Improved Tokenizer Backends** LMQL can now use the excellent [`tiktoken` tokenizer](https://github.com/openai/tiktoken) as tokenization backend (for OpenAI models). Furthermore, all tokenization backends have been ported to operate on a byte-level, which improves support for multibyte characters and emojis. This is especially relevant for non-English languages and special characters.\\n\\n* **Docker Image** LMQL now provides a Docker image that can be used to run the LMQL playground in a containerized environment. For more information, please see the [documentation](/docs/development/docker-setup.md). Many thanks to [@SilacciA](https://github.com/SilacciA) for contributing this feature.\\n\\n* **Faster Startup Time** We optimized LMQL\'s import hierarchy, which results in faster module loading time.","html":"

Releasing LMQL 0.0.6.4: LMTP, Azure, Synchronous API, and more

\\n

June 8, 2023

\\n

Among many things, this update contains several bug fixes and improvements. The most notable changes are:

\\n
    \\n
  • \\n

    Azure OpenAI support LMQL now supports OpenAI models that are served via Azure. For more information on how to use Azure models, please see the corresponding chapter in the documentation. Many thanks to @veqtor for contributing this feature.

    \\n
  • \\n
  • \\n

    Local Models via the Language Model Transport Protocol LMQL 0.0.6.4 implements a novel protocol to stream token output from local models, vastly improving performance. In our first benchmarks, we observed a 5-6x speedup for local model inference. For more information on how to use local models, please see the corresponding chapter in the documentation.

    \\n

    To learn more about the internals of the new streaming protocol, i.e. the language model transport protocol (LMTP), you can find more details in this README file. In the future, we intend to implement more model backends using LMTP, streamlining communication between LMQL and models.

    \\n
    \\n \\n
    \\n LMQL\'s new streaming protocol (LMTP) allows for faster local model inference.\\n
    \\n
  • \\n
  • \\n

    Synchronous Python API Next to an async/await based API, LMQL now also provides a synchronous API. This means you no longer need to use asyncio to use LMQL from Python.

    \\n

    To use the synchronous API, simply declare your @lmql.query function without the async keyword, e.g.

    \\n
    python
    import lmql\\n\\n@lmql.query\\ndef hello(s: str):\\n    '''lmql\\n    argmax \\n        "Hello {s} [RESPONSE]" \\n        return RESPONSE\\n    from \\n        "chatgpt"\\n    '''\\n\\nprint(hello("world")) # ['Hello! How can I assist you today?']\\n
    \\n

    If you instead want to use lmql.run in a synchronous context, you can now use lmql.run_sync instead (see the sketch after this list). To learn more about how LMQL can be used from Python, check out our documentation.

    \\n
  • \\n
  • \\n

    Improved Tokenizer Backends LMQL can now use the excellent tiktoken tokenizer as a tokenization backend (for OpenAI models). Furthermore, all tokenization backends have been ported to operate at the byte level, which improves support for multibyte characters and emojis. This is especially relevant for non-English languages and special characters.

    \\n
  • \\n
  • \\n

    Docker Image LMQL now provides a Docker image that can be used to run the LMQL playground in a containerized environment. For more information, please see the documentation. Many thanks to @SilacciA for contributing this feature.

    \\n
  • \\n
  • \\n

    Faster Startup Time We optimized LMQL\'s import hierarchy, which results in faster module loading time.

    \\n
  • \\n
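As referenced in the Synchronous Python API item above, here is a minimal sketch of synchronous usage via lmql.run_sync. It assumes lmql.run_sync mirrors lmql.run and accepts LMQL query code as a string; the prompt and model are only examples.

```python
import lmql

# hedged sketch: lmql.run_sync is assumed to accept query code as a string,
# mirroring lmql.run, and to return the query results synchronously
results = lmql.run_sync('''
argmax
    "Hello[RESPONSE]"
from
    "chatgpt"
''')
print(results)
```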
\\n","frontmatter":{"date":"2023-06-08T00:00:00.000Z","title":"Releasing LMQL v0.0.6.4 LMTP, Azure, Synchronous API, and more"},"excerpt":"","url":"/blog/posts/release-0.0.6.4.html"},{"src":"---\\ndate: 2023-05-11\\ntitle: LMQL Release v0.0.6.3\\n---\\n\\n# LMQL v0.0.6.3\\n\\nMay 11, 2023\\n\\nToday, we are releasing LMQL v0.0.6.3. This update contains several bug fixes and improvements. The most notable changes are:\\n\\n* **Lighter Runtime** As part of our continued efforts, we made LMQL much lighter (no more mandatory `transformers` dependency). By default LMQL now no longer requires `transformers` or PyTorch. If you rely on local models, just install LMQL via `pip install lmql[hf]` to get full Transformers integration.\\n\\n* **Token Constraints** A new function `TOKENS(...)` was added to the LMQL constraint language, allowing you to specify lower and upper bounds or the exact number of tokens to generate for a given variable.\\n \\n ```{lmql}\\n name::token_constraints\\n argmax \\n \\"A 10 token response[WHO]\\" \\n from \\n \\"openai/text-ada-001\\" \\n where \\n len(TOKENS(WHO)) == 10\\n ```\\n\\n* **Conditional Stopping** `STOPS_AT` can now be combined with additional side conditions. This allows you to specify stopping phrases that are only enforced, once other conditions are met. \\n\\n For example, below, we stop when the generated text hits a newline character, but only if the overall variable output is already at least 10 tokens long.\\n\\n ```{lmql}\\n name::conditional_stopping \\n argmax \\n \\"Hello[WHO]\\" \\n from \\n \\"openai/text-ada-001\\" \\n where \\n len(TOKENS(WHO)) > 10 and STOPS_AT(WHO, \\"\\\\n\\")\\n ```\\n\\n* **lmql.run**: Improved input validation for `lmql.run` as contributed by @lfegray. More specifically, `lmql.run` wil now provide more helpful error messages when client logic does not specify input values for all required query parameters.\\n\\n* **Automatic Cache Invalidation**: LMQL\'s tokenizer cache at `~/.cache/lmql` is now invalidated automatically when upgrading to a new version. This should prevent issues with outdated cache files.\\n\\n> Note: Version 0.0.6.2 was skipped and yanked from pypi.org, as an invalid release was pushed accidentally.","html":"

LMQL v0.0.6.3

\\n

May 11, 2023

\\n

Today, we are releasing LMQL v0.0.6.3. This update contains several bug fixes and improvements. The most notable changes are:

\\n
    \\n
  • \\n

    Lighter Runtime As part of our continued efforts, we made LMQL much lighter (no more mandatory transformers dependency). By default LMQL now no longer requires transformers or PyTorch. If you rely on local models, just install LMQL via pip install lmql[hf] to get full Transformers integration.

    \\n
  • \\n
  • \\n

    Token Constraints A new function TOKENS(...) was added to the LMQL constraint language, allowing you to specify lower and upper bounds or the exact number of tokens to generate for a given variable.

    \\n
    argmax \\n    "A 10 token response[WHO]" \\nfrom \\n    "openai/text-ada-001" \\nwhere \\n    len(TOKENS(WHO)) == 10\\n
    \\n
  • \\n
  • \\n

    Conditional Stopping STOPS_AT can now be combined with additional side conditions. This allows you to specify stopping phrases that are only enforced once other conditions are met.

    \\n

    For example, below, we stop when the generated text hits a newline character, but only if the overall variable output is already at least 10 tokens long.

    \\n
    argmax \\n    "Hello[WHO]" \\nfrom \\n    "openai/text-ada-001" \\nwhere \\n    len(TOKENS(WHO)) > 10 and STOPS_AT(WHO, "\\\\n")\\n
    \\n
  • \\n
  • \\n

    lmql.run: Improved input validation for lmql.run as contributed by @lfegray. More specifically, lmql.run will now provide more helpful error messages when client logic does not specify input values for all required query parameters.

    \\n
  • \\n
  • \\n

    Automatic Cache Invalidation: LMQL\'s tokenizer cache at ~/.cache/lmql is now invalidated automatically when upgrading to a new version. This should prevent issues with outdated cache files.

    \\n
  • \\n
\\n
\\n

Note: Version 0.0.6.2 was skipped and yanked from pypi.org, as an invalid release was pushed accidentally.

\\n
\\n","frontmatter":{"date":"2023-05-11T00:00:00.000Z","title":"LMQL Release v0.0.6.3"},"excerpt":"","url":"/blog/posts/release-0.0.6.3.html"},{"src":"---\\ndate: 2023-05-03\\ntitle: LMQL Release v0.0.6.1\\n---\\n\\n# LMQL v0.0.6.1\\n\\nMay 3, 2023\\n\\nWe released LMQL v0.0.6.1, which contains several bug fixes and improvements. The most notable changes are:\\n\\n* **Cache Layer Bug Fixes** This release contains several fixes and improvements to the recently introduced cache layer.\\n\\n* **Stopping Phrases** Stopping phrases specified via `STOPS_BEFORE` are now passed to the OpenAI API as `\\"stop\\"` parameter, decreasing the number of tokens used for the request. If you want to disable this (e.g. to allow speculative execution), you can specify the new decoder parameter `openai_nonstop=True`.\\n\\n* **Asynchronous Output Writers** All output writers have been refactored to use asynchronous I/O. This should simplify integration with other asynchronous frameworks, e.g. for HTTP or Websocket APIs. We also added a new chapter on [Output Streaming](/docs/lib/output.md) to the documentation.\\n\\n* **Output Writers for HTTP endpoints, WebSockets and Server-Sent Events** Based on the updated output writer interface, we added three new output writers for serving LMQL queries as HTTP endpoints, WebSockets and via Server-Sent Events (SSE). To learn more, check their relatively simple implementations in the new [lmql.output](https://github.com/eth-sri/lmql/tree/main/src/lmql/output) module. We will also provide more documentation on how to use them, e.g. with `aiohttp` in the future.","html":"

LMQL v0.0.6.1

\\n

May 3, 2023

\\n

We released LMQL v0.0.6.1, which contains several bug fixes and improvements. The most notable changes are:

\\n
    \\n
  • \\n

    Cache Layer Bug Fixes This release contains several fixes and improvements to the recently introduced cache layer.

    \\n
  • \\n
  • \\n

    Stopping Phrases Stopping phrases specified via STOPS_BEFORE are now passed to the OpenAI API as the "stop" parameter, decreasing the number of tokens used for the request. If you want to disable this (e.g. to allow speculative execution), you can specify the new decoder parameter openai_nonstop=True (see the sketch after this list).

    \\n
  • \\n
  • \\n

    Asynchronous Output Writers All output writers have been refactored to use asynchronous I/O. This should simplify integration with other asynchronous frameworks, e.g. for HTTP or Websocket APIs. We also added a new chapter on Output Streaming to the documentation.

    \\n
  • \\n
  • \\n

    Output Writers for HTTP endpoints, WebSockets and Server-Sent Events Based on the updated output writer interface, we added three new output writers for serving LMQL queries as HTTP endpoints, WebSockets and via Server-Sent Events (SSE). To learn more, check their relatively simple implementations in the new lmql.output module. We will also provide more documentation on how to use them, e.g. with aiohttp in the future.

    \\n
  • \\n
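To illustrate the stopping-phrases item above, here is a hedged sketch only: it assumes openai_nonstop is passed like other decoder keyword arguments, and the prompt and model are placeholders.

```python
import lmql

# hedged sketch: openai_nonstop=True is assumed to be a decoder keyword
# argument that disables forwarding stop phrases as the OpenAI "stop"
# parameter, so speculative execution past the stopping phrase stays possible
@lmql.query
def capital():
    '''lmql
    argmax(openai_nonstop=True)
        "Q: What is the capital of France?\n"
        "A:[ANSWER]"
    from
        "openai/text-davinci-003"
    where
        STOPS_BEFORE(ANSWER, "\n")
    '''
```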
\\n","frontmatter":{"date":"2023-05-03T00:00:00.000Z","title":"LMQL Release v0.0.6.1"},"excerpt":"","url":"/blog/posts/release-0.0.6.1.html"},{"src":"---\\ndate: 2023-05-01\\ntitle: Releasing the LMQL Caching Layer (v0.0.6)\\n---\\n\\n# Releasing the LMQL Caching Layer (v0.0.6)\\n\\nMay 1, 2023\\n\\nToday we are releasing LMQL 0.0.6, the first version of LMQL that integrates the *LMQL Caching Layer*. The caching layer can drastically reduce token use of LLM interaction, lowering both the cost and latency of running queries. In this blog post, we provide a quick overview of the caching layer and demonstrate how it can reduce token use, latency and the number of requests needed to run queries by up to 80%. We observe improvements across a wide range of different scenarios, including **template-based queries, long-form constraints and tool augmentation.**\\n\\nYou can experiment with LMQL in the browser-based [Playground IDE](http://lmql.ai/playground) or install the latest version locally, via `pip install lmql`.\\n\\n## Caching Layer\\n\\nThe caching layer is implemented as a **tree-based data structure** that caches all model output including logits, tokens, and metadata, allowing the runtime to more efficiently explore the token space of an LLM, even in the presence of multiple variables, constraints and tool augmentation. The cache can be considered an append-only tree, that is explored during query execution, expanding branches according to query code, constraints and speculative execution.\\n\\nTo illustrate the effect of a caching layer, we consider the following example scenarios, all of which now run in a fraction of the time and with a fraction of the tokens needed with traditional querying methods.\\n\\n### Template-Based Queries \\n\\nWhen specifying a prompt template with multiple variables to fill in, an LLM typically needs to be invoked once per variable. For instance, consider the following template that guides an LLM in generating a list of things:\\n```{lmql}\\nname::list-of-things-speculative\\nargmax\\n \\"A list of things not to forget when going to the sea (not travelling): \\\\n\\"\\n \\"- Sunglasses \\\\n\\"\\n \\"-[THING]\\"\\n \\"-[THING]\\"\\n \\"-[THING]\\"\\n \\"-[THING]\\"\\nfrom\\n \'openai/text-ada-001\'\\nwhere\\n STOPS_AT(THING, \\"\\\\n\\")\\n```\\n**Without Caching:** Tokens: 390, Requests: 4 | **With Caching Layer:** Tokens: 89 (-77%), Requests: 1 (-75%)\\n\\nHere, the LLM typically needs to be invoked 4 times, once per `[THING]` variable. On each call, this incurs a token and latency cost (both with OpenAI and local models). Separate calls are needed, because our template dictates the `-` token to be inserted before each `[THING]`. \\n\\nWith the caching layer, LMQL can now invoke the LLM only once, and fill in all variables with the resulting tokens, as long as the LLM output already aligns naturally with your template. In case the LLM result of the initial invocation at some point no longer aligns with the template, LMQL will automatically re-invoke the LLM from this point on, guaranteeing an overall consistent result that is already parsed into separate `[THING]` variables.\\n\\n### Short-Circuiting Long Constraints\\n\\nWhen you specify long constraints like `A in [\\"ABCDE\\", \\"FGHIJK\\"]`, the LMQL runtime guides the LLM to choose one of the provided options and then continues enforcing the sequence until the chosen values is fully decoded. 
To illustrate, consider the following query:\\n```{lmql}\\nname::long-form-constraints-speculative\\nargmax\\n \\"If we have the choice we choose[OPTION]\\"\\nfrom \\n \\"openai/text-ada-001\\"\\nwhere\\n OPTION in [\\"Option A with a whole lot of extra context\\", \\n \\"Option B with context\\", \\n \\"Another Option, also with a lot of additional text\\"\\n ]\\n```\\n```promptdown\\nIf we have the choice we choose [OPTION|Option A with a whole lot of extra context]\\n```\\n**Without Caching:** Tokens: 123, Requests: 9 | **With Caching Layer:** Tokens: 25 (-80%), Requests: 2 (-78%)\\n\\nHere, after the LLM has produced `\\"Option\\"` and then `\\" A\\"`, LMQL short-circuits further model calls and automatically completes the resulting sequence to `\\"Option A with a whole lot of extra context\\"`. This is possible because once `Option A` has been predicted, the remaining tokens are fully determined by the constraints.\\n\\n### Tool-Augmented Queries\\n\\nLastly, we consider tool augmented queries. LLM agents and tool augmentation are very powerful paradigms, that allow LLMs to incorporate external knowledge and reasoning into their predictions. However, this comes at a cost: On each tool invocation, the LLM needs to be re-invoked to continue decoding after the tool output has been inserted. This impacts both the token cost and latency of running queries, as many requests have to be send forth and back between the LLM and the tool.\\n\\nAs an example, consider the following query that augments an LLM with the ability to use a key-value storage, [also runnable in the browser-based LMQL Playground](http://lmql.ai/playground?snippet=kv).\\n\\n
\\n\\n \\"Key-Storage\\n\\n
\\n\\n**Without Caching:** Tokens: 5,162, Requests: 12 | **With Caching Layer:** Tokens: 3,481 (-33%), Requests: 8 (-33%)\\n\\nHere, whenever the LLM produces an action relating to our key-value storage, we invoke a tool that handles the storage and return the result (to `assign` and `get` stored values). The result of each tool invocation is then inserted into the LLM output, and the LLM is re-invoked to continue decoding.\\n\\nWe count 10 tool interactions which results in 12 requests if we run without caching. However, using the new caching layer, we can reduce this to 8 requests, even undercutting the number of tool interactions. This is possible because the caching layer will not abort LLM generation, if the LLM already correctly predicts the tool output. \\n\\nThis scenario demonstrates that the natural ability of LLMs to complete sequences can be leveraged to reduce the number of tool interactions, by relying on speculative execution.\\n\\n## Persisting the Cache\\n\\nOf course, the in-memory cache of the LMQL runtime can also be persisted to disk, allowing you to reuse the cache tree across multiple queries, automatically reducing token cost and latency. In some cases this can even be used to reduce the number of requests to the LLM to 0, e.g. if the cache already contains the desired result. \\n\\nTo do so, you can simply specify a `cache=\\"file.tokens\\"` parameter in your query code:\\n\\n```{lmql}\\nname::joke-with-cache\\nargmax(cache=\\"joke.tokens\\")\\n \\"\\"\\"A good dad joke. A indicates the punchline\\n Q:[JOKE]\\n A:[PUNCHLINE]\\"\\"\\"\\nfrom\\n \\"openai/text-davinci-003\\"\\nwhere\\n len(JOKE) < 120 and \\n STOPS_AT(JOKE, \\"?\\") and \\n STOPS_AT(PUNCHLINE, \\"\\\\n\\") and \\n len(PUNCHLINE) > 1\\n```\\n\\nThe first successful run of this query will persist the cache to `joke.tokens`. Subsequent runs will then automatically load the cache from disk, and only invoke the LLM if the cache does not contain a match. This also works for queries whose underlying LLM requests only partially overlap, as the tree-based cache data structure will automatically identify matching subtrees.\\n\\n**Caching During Query Development**: Persisting the cache can be particularly useful during query development, as it allows you to reuse the cache across multiple runs of the same query. A persistent cache will reduce token cost and latency of your query, even if you slightly change the query between runs.\\n\\n## Caveats and Disabling the Cache\\n\\nYou can disable the caching layer by specifying `cache=False` in your query code. This will cause the LMQL runtime to always invoke the LLM, and never use the cache. This is useful for debugging purposes, or if you want to ensure that the LLM is always invoked.\\n\\nFurther, as the cache currently is implemented as an append-only data structure, it will grow indefinitely. This may be problematic for long-running applications, as the cache will eventually grow to relatively large sizes. In the future, we plan to implement simple strategies to limit the cache size, such as a least-recently-used eviction policy.\\n\\n## Conclusion\\n\\nIn this post, we introduced the new caching layer of the LMQL runtime, which allows you to reduce the token cost and latency of your queries by reusing previously generated LLM outputs. We demonstrated how the caching layer can be used to reduce the number of LLM invocations in a variety of scenarios, including long constraints, short-circuiting, and tool-augmented queries. 
We also showed how the cache can be persisted to disk, allowing you to reuse the cache across multiple queries.\\n\\nTo learn more about LMQL please also check out our [documentation](/docs), or join our [Discord](https://discord.gg/2Y3Wz2Q) to chat with us directly. We are looking forward to hearing from you!","html":"

Releasing the LMQL Caching Layer (v0.0.6)

\\n

May 1, 2023

\\n

Today we are releasing LMQL 0.0.6, the first version of LMQL that integrates the LMQL Caching Layer. The caching layer can drastically reduce token use of LLM interaction, lowering both the cost and latency of running queries. In this blog post, we provide a quick overview of the caching layer and demonstrate how it can reduce token use, latency and the number of requests needed to run queries by up to 80%. We observe improvements across a wide range of different scenarios, including template-based queries, long-form constraints and tool augmentation.

\\n

You can experiment with LMQL in the browser-based Playground IDE or install the latest version locally, via pip install lmql.

\\n

Caching Layer

\\n

The caching layer is implemented as a tree-based data structure that caches all model output including logits, tokens, and metadata, allowing the runtime to more efficiently explore the token space of an LLM, even in the presence of multiple variables, constraints and tool augmentation. The cache can be considered an append-only tree that is explored during query execution, expanding branches according to query code, constraints and speculative execution.

\\n
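To make the idea more concrete, here is a minimal, illustrative sketch of an append-only token tree in Python. This is not LMQL's actual implementation; it only shows how cached continuations can be stored per token and partially reused by later requests.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class CacheNode:
    # cached model output for reaching this node (illustrative: just a logprob)
    logprob: Optional[float] = None
    children: Dict[int, "CacheNode"] = field(default_factory=dict)

class TokenTreeCache:
    """Append-only token tree: branches are added during decoding, never removed."""

    def __init__(self) -> None:
        self.root = CacheNode()

    def extend(self, tokens: List[int], logprobs: List[float]) -> None:
        """Record a generated continuation along a path of token ids."""
        node = self.root
        for tok, lp in zip(tokens, logprobs):
            node = node.children.setdefault(tok, CacheNode())
            node.logprob = lp

    def prefix_hits(self, tokens: List[int]) -> int:
        """Return how many leading tokens of a new request are already cached."""
        node, hits = self.root, 0
        for tok in tokens:
            if tok not in node.children:
                break
            node, hits = node.children[tok], hits + 1
        return hits
```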

To illustrate the effect of a caching layer, we consider the following example scenarios, all of which now run in a fraction of the time and with a fraction of the tokens needed with traditional querying methods.

\\n

Template-Based Queries

\\n

When specifying a prompt template with multiple variables to fill in, an LLM typically needs to be invoked once per variable. For instance, consider the following template that guides an LLM in generating a list of things:

\\n
argmax\\n    "A list of things not to forget when going to the sea (not travelling): \\\\n"\\n    "- Sunglasses \\\\n"\\n    "-[THING]"\\n    "-[THING]"\\n    "-[THING]"\\n    "-[THING]"\\nfrom\\n    'openai/text-ada-001'\\nwhere\\n    STOPS_AT(THING, "\\\\n")\\n
\\n

Without Caching: Tokens: 390, Requests: 4 | With Caching Layer: Tokens: 89 (-77%), Requests: 1 (-75%)

\\n

Here, the LLM typically needs to be invoked 4 times, once per [THING] variable. Each call incurs a token and latency cost (both with OpenAI and local models). Separate calls are needed because our template dictates that the - token be inserted before each [THING].

\\n

With the caching layer, LMQL can now invoke the LLM only once, and fill in all variables with the resulting tokens, as long as the LLM output already aligns naturally with your template. In case the LLM result of the initial invocation at some point no longer aligns with the template, LMQL will automatically re-invoke the LLM from this point on, guaranteeing an overall consistent result that is already parsed into separate [THING] variables.

\\n

Short-Circuiting Long Constraints

\\n

When you specify long constraints like A in ["ABCDE", "FGHIJK"], the LMQL runtime guides the LLM to choose one of the provided options and then continues enforcing the sequence until the chosen value is fully decoded. To illustrate, consider the following query:

\\n
argmax\\n    "If we have the choice we choose[OPTION]"\\nfrom \\n    "openai/text-ada-001"\\nwhere\\n    OPTION in ["Option A with a whole lot of extra context", \\n        "Option B with context", \\n        "Another Option, also with a lot of additional text"\\n    ]\\n
\\n
promptdown

If we have the choice we choose OPTIONOption A with a whole lot of extra context\\n

\\n

Without Caching: Tokens: 123, Requests: 9 | With Caching Layer: Tokens: 25 (-80%), Requests: 2 (-78%)

\\n

Here, after the LLM has produced "Option" and then " A", LMQL short-circuits further model calls and automatically completes the resulting sequence to "Option A with a whole lot of extra context". This is possible because once Option A has been predicted, the remaining tokens are fully determined by the constraints.

\\n

Tool-Augmented Queries

\\n

Lastly, we consider tool-augmented queries. LLM agents and tool augmentation are very powerful paradigms that allow LLMs to incorporate external knowledge and reasoning into their predictions. However, this comes at a cost: on each tool invocation, the LLM needs to be re-invoked to continue decoding after the tool output has been inserted. This impacts both the token cost and latency of running queries, as many requests have to be sent back and forth between the LLM and the tool.

\\n

As an example, consider the following query that augments an LLM with the ability to use a key-value storage, also runnable in the browser-based LMQL Playground.

\\n
\\n\\n [image: \\"Key-Storage ...\\"]\\n\\n
\\n

Without Caching: Tokens: 5,162, Requests: 12 | With Caching Layer: Tokens: 3,481 (-33%), Requests: 8 (-33%)

\\n

Here, whenever the LLM produces an action relating to our key-value storage, we invoke a tool that handles the storage and returns the result (to assign and get stored values). The result of each tool invocation is then inserted into the LLM output, and the LLM is re-invoked to continue decoding.

\\n
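For illustration, the request pattern just described can be sketched in plain Python. This is not the actual Playground query; the assign/get action format and the llm callable are hypothetical placeholders.

```python
import re
from typing import Callable

def run_tool_query(llm: Callable[[str], str], prompt: str, max_steps: int = 10) -> str:
    """Illustrative loop: every tool action normally costs one extra LLM request."""
    storage: dict = {}

    def tool(action: str) -> str:
        # hypothetical key-value tool supporting: assign 'key'='value' and get 'key'
        if (m := re.match(r"assign '(.+?)'='(.+?)'", action)):
            storage[m.group(1)] = m.group(2)
            return "ok"
        if (m := re.match(r"get '(.+?)'", action)):
            return storage.get(m.group(1), "")
        return "unknown action"

    for _ in range(max_steps):
        continuation = llm(prompt)               # one request per interaction
        prompt += continuation
        action = re.search(r"Action: (.+)", continuation)
        if action is None:
            break                                # model produced a final answer
        prompt += "\nObservation: " + tool(action.group(1)) + "\n"
    return prompt
```

With the caching layer, some of these per-interaction requests can be skipped whenever the model's speculative continuation already matches the tool output.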

We count 10 tool interactions, which result in 12 requests if we run without caching. However, using the new caching layer, we can reduce this to 8 requests, even undercutting the number of tool interactions. This is possible because the caching layer will not abort LLM generation if the LLM already correctly predicts the tool output.

\\n

This scenario demonstrates that the natural ability of LLMs to complete sequences can be leveraged to reduce the number of tool interactions, by relying on speculative execution.

\\n

Persisting the Cache

\\n

Of course, the in-memory cache of the LMQL runtime can also be persisted to disk, allowing you to reuse the cache tree across multiple queries, automatically reducing token cost and latency. In some cases this can even be used to reduce the number of requests to the LLM to 0, e.g. if the cache already contains the desired result.

\\n

To do so, you can simply specify a cache="file.tokens" parameter in your query code:

\\n
argmax(cache="joke.tokens")\\n   """A good dad joke. A indicates the punchline\\n   Q:[JOKE]\\n   A:[PUNCHLINE]"""\\nfrom\\n   "openai/text-davinci-003"\\nwhere\\n   len(JOKE) < 120 and \\n   STOPS_AT(JOKE, "?") and \\n   STOPS_AT(PUNCHLINE, "\\\\n") and \\n   len(PUNCHLINE) > 1\\n
\\n

The first successful run of this query will persist the cache to joke.tokens. Subsequent runs will then automatically load the cache from disk, and only invoke the LLM if the cache does not contain a match. This also works for queries whose underlying LLM requests only partially overlap, as the tree-based cache data structure will automatically identify matching subtrees.

\\n

Caching During Query Development: Persisting the cache can be particularly useful during query development, as it allows you to reuse the cache across multiple runs of the same query. A persistent cache will reduce token cost and latency of your query, even if you slightly change the query between runs.

\\n

Caveats and Disabling the Cache

\\n

You can disable the caching layer by specifying cache=False in your query code. This will cause the LMQL runtime to always invoke the LLM, and never use the cache. This is useful for debugging purposes, or if you want to ensure that the LLM is always invoked.

\\n
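For example, reusing the joke query from above, disabling the cache is only a matter of swapping the decoder argument. A short sketch using the Python decorator API:

```python
import lmql

# sketch: same joke query as above, with the cache disabled via cache=False
@lmql.query
def joke_uncached():
    '''lmql
    argmax(cache=False)
        """A good dad joke. A indicates the punchline
        Q:[JOKE]
        A:[PUNCHLINE]"""
    from
        "openai/text-davinci-003"
    where
        len(JOKE) < 120 and STOPS_AT(JOKE, "?") and
        STOPS_AT(PUNCHLINE, "\n") and len(PUNCHLINE) > 1
    '''
```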

Further, as the cache is currently implemented as an append-only data structure, it will grow indefinitely. This may be problematic for long-running applications, as the cache will eventually grow to relatively large sizes. In the future, we plan to implement simple strategies to limit the cache size, such as a least-recently-used eviction policy.

\\n

Conclusion

\\n

In this post, we introduced the new caching layer of the LMQL runtime, which allows you to reduce the token cost and latency of your queries by reusing previously generated LLM outputs. We demonstrated how the caching layer can be used to reduce the number of LLM invocations in a variety of scenarios, including long constraints, short-circuiting, and tool-augmented queries. We also showed how the cache can be persisted to disk, allowing you to reuse the cache across multiple queries.

\\n

To learn more about LMQL please also check out our documentation, or join our Discord to chat with us directly. We are looking forward to hearing from you!

\\n","frontmatter":{"date":"2023-05-01T00:00:00.000Z","title":"Releasing the LMQL Caching Layer (v0.0.6)"},"excerpt":"","url":"/blog/posts/release-0.0.6.html"},{"src":"---\\ndate: 2023-04-17\\ntitle: LMQL Release 0.0.5\\n---\\n\\n# LMQL Release 0.0.5\\n\\nApril 17, 2023\\n\\nToday we are releasing version 0.0.5 of LMQL. This release focuses on stability and performance improvements. For a detailed list of changes, please see below. We are particularly excited about the first community contributions that have been merged as part of this release, with many more in the works.\\n\\n`lmql==0.0.5` has been published on [PyPI](https://pypi.org/project/lmql/), based the current `main` branch of the [GitHub repository](https://github.com/eth-sri/lmql). The updated version has also been deployed to the browser-based [lmql.ai/playground](http://lmql.ai/playground).\\n\\n### Changelog\\n\\n* **Decoder Performance** The `argmax` and `sample` decoders have undergone some optimizations, allowing them to run faster. This results in a *20-30% speed-up* on common query workloads. [#24](https://github.com/eth-sri/lmql/pull/24).\\n\\n* **Postprocessing Semantics** Internally, LMQL now allows constraints to implement postprocessing semantics. This is used to convert variable values after they have been completed, to a more normalized form in the prompt, and to a semantically meaningful data type in the context of the query code. [#24](https://github.com/eth-sri/lmql/pull/24). \\n\\n For example, when using an `INT()` constraint on a generated number, the model will be restricted to only generate valid integers, and now, the resulting `NUM` value will additionally be converted to an `int` value:\\n\\n ```\\n argmax\\n \\"My favorite number is: [NUM]\\\\n\\"\\n print(type(NUM), NUM * 2) # 4\\n \\"Number times two is {NUM * 2}\\"\\n from\\n \'openai/text-ada-001\'\\n where\\n INT(NUM) \\n ```\\n\\n* **Core Interpreter** A complete reimplementation of the LMQL core interpreter has been completed. This fixes a couple of minor issues and overall, improves reliability and performance when dealing with *branching* decoding algorithms. [#24](https://github.com/eth-sri/lmql/pull/24).\\n\\n\\n* **Playground** Locally and when used in-browser, the [LMQL Playground](http://lmql.ai/playground) now *streams debugger information* from the LMQL interpreter incrementally. This leads to speed-ups when running in the Playground, especially with longer outputs. [#27f9a8ad](https://github.com/eth-sri/lmql/commit/27f9a8adb819f732608ef61c9aca9dca579dc536).\\n\\n\\n* **Other Fixes**:\\n - When used from within Python (as decorated function), LMQL code no longer has to be doubly-escaped, e.g. you can now write `STOPS_AT(VAR, \\"\\\\n\\")` instead of `STOPS_AT(VAR, \\"\\\\\\\\n\\")`\\n - The LMQL inference API buffers requests that come in during startup, to avoid errors when the server is not yet ready. [#15](https://github.com/eth-sri/lmql/pull/15), thanks to [@chrispan](https://github.com/chrispan).\\n - OpenAI request parallelization no longer leads to an error on Linux systems, with regards to worker processes [#6](https://github.com/eth-sri/lmql/issues/6).\\n\\n### Preview\\n\\nApart from the changes above, we are also working on a number of other features, including:\\n\\n* **llama.cpp support** as started in [this PR](https://github.com/eth-sri/lmql/pull/18), thanks to [@CircArgs](https://github.com/CircArgs).\\n* Support for **Type Constraints**, e.g. 
`type(VAR) is DataClass`, that automatically force the model to produce a value that structurally conforms to the given type. See this [Twitter thread](https://twitter.com/lbeurerkellner/status/1646187597901733889) for more details.\\n* Support for using **Antlr parsers** during query execution, to force the model to produce a value that conforms to a given grammar. \\n\\n* **Extending Logit Masking to OpenAI Chat Models**. This will enable full support for LMQL constraints with e.g. `chatgpt` and `gpt-4` models. See [#25](https://github.com/eth-sri/lmql/pull/25), thanks to [@kharvd](https://github.com/kharvd).","html":"

LMQL Release 0.0.5

\\n

April 17, 2023

\\n

Today we are releasing version 0.0.5 of LMQL. This release focuses on stability and performance improvements. For a detailed list of changes, please see below. We are particularly excited about the first community contributions that have been merged as part of this release, with many more in the works.

\\n

lmql==0.0.5 has been published on PyPI, based on the current main branch of the GitHub repository. The updated version has also been deployed to the browser-based lmql.ai/playground.

\\n

Changelog

\\n
    \\n
  • \\n

    Decoder Performance The argmax and sample decoders have undergone some optimizations, allowing them to run faster. This results in a 20-30% speed-up on common query workloads. #24.

    \\n
  • \\n
  • \\n

    Postprocessing Semantics Internally, LMQL now allows constraints to implement postprocessing semantics. This is used to convert variable values, once they have been completed, to a more normalized form in the prompt and to a semantically meaningful data type in the context of the query code. #24.

    \\n

    For example, when using an INT(<var>) constraint on a generated number, the model will be restricted to only generate valid integers, and now, the resulting NUM value will additionally be converted to an int value:

    \\n
    argmax\\n   "My favorite number is: [NUM]\\\\n"\\n   print(type(NUM), NUM * 2) # <class 'int'> 4\\n   "Number times two is {NUM * 2}"\\nfrom\\n   'openai/text-ada-001'\\nwhere\\n   INT(NUM) \\n
    \\n
  • \\n
  • \\n

    Core Interpreter The LMQL core interpreter has been completely reimplemented. This fixes a couple of minor issues and, overall, improves reliability and performance when dealing with branching decoding algorithms. #24.

    \\n
  • \\n
  • \\n

    Playground Locally and when used in-browser, the LMQL Playground now streams debugger information from the LMQL interpreter incrementally. This leads to speed-ups when running in the Playground, especially with longer outputs. #27f9a8ad.

    \\n
  • \\n
  • \\n

    Other Fixes:

    \\n
      \\n
    • When used from within Python (as a decorated function), LMQL code no longer has to be doubly-escaped, e.g. you can now write STOPS_AT(VAR, "\\n") instead of STOPS_AT(VAR, "\\\\n")
    • \\n
    • The LMQL inference API buffers requests that come in during startup, to avoid errors when the server is not yet ready. #15, thanks to @chrispan.
    • \\n
    • OpenAI request parallelization no longer leads to an error related to worker processes on Linux systems #6.
    • \\n
    \\n
  • \\n
\\n

Preview

\\n

Apart from the changes above, we are also working on a number of other features, including:

\\n
    \\n
  • \\n

    llama.cpp support as started in this PR, thanks to @CircArgs.

    \\n
  • \\n
  • \\n

    Support for Type Constraints, e.g. type(VAR) is DataClass, that automatically force the model to produce a value that structurally conforms to the given type. See this Twitter thread for more details.

    \\n
  • \\n
  • \\n

    Support for using Antlr parsers during query execution, to force the model to produce a value that conforms to a given grammar.

    \\n
  • \\n
  • \\n

    Extending Logit Masking to OpenAI Chat Models. This will enable full support for LMQL constraints with e.g. chatgpt and gpt-4 models. See #25, thanks to @kharvd.

    \\n
  • \\n
\\n","frontmatter":{"date":"2023-04-17T00:00:00.000Z","title":"LMQL Release 0.0.5"},"excerpt":"","url":"/blog/posts/release-0.0.5.html"}]');const h={class:"posts"},d={class:"post"},u=["href"],m=["innerHTML"],v=JSON.parse('{"title":"Blog","description":"","frontmatter":{"title":"Blog","layout":"doc","aside":false,"outline":false},"headers":[],"relativePath":"blog/index.md","filePath":"blog/index.md"}'),g={name:"blog/index.md"},f=Object.assign(g,{setup(y){function b(s){return s}return(s,w)=>(a(),t("div",null,[(a(!0),t(r,null,i(l(p),n=>(a(),t("div",h,[e("div",d,[e("a",{href:n.url},[e("h1",null,c(n.frontmatter.title),1)],8,u),e("div",{class:"body",innerHTML:n.html},null,8,m)])]))),256))]))}}),q=o(f,[["__scopeId","data-v-61c06c99"]]);export{v as __pageData,q as default}; diff --git a/assets/blog_index.md.ff9d28cf.lean.js b/assets/blog_index.md.b2796768.lean.js similarity index 98% rename from assets/blog_index.md.ff9d28cf.lean.js rename to assets/blog_index.md.b2796768.lean.js index 1bfeabc0..5239c420 100644 --- a/assets/blog_index.md.ff9d28cf.lean.js +++ b/assets/blog_index.md.b2796768.lean.js @@ -1 +1 @@ -import{_ as o,o as a,c as t,F as r,D as i,l,k as e,t as c}from"./chunks/framework.980cae92.js";const p=JSON.parse('[{"src":"---\\ndate: 2024-02-14 10:10:00\\ntitle: LMQL Developer Survey\\n---\\n\\n# LMQL Developer Survey\\n\\n\\nFebruary 14, 2024\\n\\n\\"image\\"\\n\\nWe have started a new initiative called the **LMQL developer survey**. With this short survey we have the goal of learning more from everyone around the LMQL and the bigger LLM community. We are looking for some broader feedback signals of how and what people are using LMQL for or would like to use it for. \\n\\nThe outcome of this survey will help shape our work around the next major version of LMQL.\\n\\nYou can find the survey here: https://forms.gle/pGvAicNpUhS1rAkK9.\\n","html":"

LMQL Developer Survey

\\n

February 14, 2024

\\n\\"image\\"\\n

We have started a new initiative called the LMQL developer survey. With this short survey we have the goal of learning more from everyone around the LMQL and the bigger LLM community. We are looking for some broader feedback signals of how and what people are using LMQL for or would like to use it for.

\\n

The outcome of this survey will help shape our work around the next major version of LMQL.

\\n

You can find the survey here: https://forms.gle/pGvAicNpUhS1rAkK9.

\\n","frontmatter":{"date":"2024-02-14T10:10:00.000Z","title":"LMQL Developer Survey"},"excerpt":"","url":"/blog/posts/developer-survey.html"},{"src":"---\\ndate: 2023-10-10 10:10:00\\ntitle: LMQL 0.7 brings Procedural Prompt Programming\\n---\\n\\n# LMQL 0.7 brings Procedural Prompt Programming\\n\\nOctober 10, 2023\\n\\nToday, we are releasing LMQL 0.7. This series is the biggest update since the original release, including many community contributions. Next to several new main-line features like nested queries, the Generations API and the Chat API, it also includes several *experimental preview features*, allowing you to experiment with new incoming functionality before it is fully released.\\n\\nLMQL 0.7 has also moved to [semantic versioning](https://semver.org) with the direct predecessor being 0.0.6.6. This means that the next feature release will be 0.8, and the next bugfix release will be 0.7.1.\\n\\n## Nested Queries for Procedural Prompt Programming\\n\\nIn 0.7, you can now use [Nested Queries](../../docs/language/nestedqueries.md) to call an LMQL query as a nested function in the context of another query. For this, LMQL implements procedural programming for prompting. To illustrate, consider the following example:\\n\\n```lmql\\n# chain of thought prompting strategy\\n@lmql.query\\ndef chain_of_thought():\\n \'\'\'lmql\\n \\"A: Let\'s think step by step.\\\\n [REASONING]\\"\\n \\"Therefore the answer is[ANSWER]\\" where STOPS_AT(ANSWER, \\".\\")\\n return ANSWER.strip()\\n \'\'\'\\n\\n# top-level query\\n\\"Q: It is August 12th, 2020. What date was it \\\\\\n 100 days ago? [ANSWER: chain_of_thought]\\"\\n\\nANSWER # May 4th, 2020\\n```\\n\\nWe first define a simple LMQL function `chain_of_thought` to do *chain-of-thought prompting*. In our top-level query, we can then call this function to decode an answer using the `[ANSWER: chain_of_thought]` syntax. During execution, LMQL then inserts the instructions and constraints from `chain_of_thought` into the top-level query, generates a value for `ANSWER`, and then removes the instructions and constraints again, only returning the final result.\\n\\n**Nested queries are Prompt Function Calls.** This design of nested queries is inspired by the idea of *function or procedure calls* in traditional programming. Removing intermediate instructions and constraints also has parallels to the idea of *stack unwinding*, a technique to implement function calls in low-level languages. \\n\\nLMQL transfers these ideas to prompting, inheriting the general benefits of procedural programming:\\n\\n- **Encapsulation and Model Focus** Nested Queries encapsulate and hide the prompting logic used to generate `ANSWER`, which means our top-level query is much cleaner and more concise. Further, by hiding intermediate instructions from the model in the context of the top-level query, we can reduce noise in the overall prompt, allowing the model to focus on the currently relevant information only, and not get distracted by previous intermediate steps.\\n\\n- **Nesting and Reuse** LMQL queries can be nested arbitrarily deep, allowing you to reuse and combine queries modularly. For example, you could define a query `get_year` to extract a year from the response text, and then use this query in `chain_of_thought` to extract the date from the question. 
By achieving modularity for sub-prompts, nested queries also allow you to reuse prompts across different query programs.\\n\\nTo learn more about nested queries, please refer to the [relevant chapter in the documentation](../../docs/language/nestedqueries.md).\\n\\n## Generations API\\n\\nLMQL 0.7 adds the *Generations API*, a lightweight high-level library for LMQL-based text generation and scoring. The API was designed to be easy to use and does not require users to write any LMQL themselves:\\n\\n```python\\n# obtain a model instance\\nm: lmql.LLM = lmql.model(\\"openai/gpt-3.5-turbo-instruct\\")\\n# simple generation\\nm.generate_sync(\\"Hello\\", max_tokens=10)\\n# -> Hello, I am a 23 year old female.\\n```\\n
\\n\\nFunctions such as [`LLM.generate`](../../docs/lib/generations.html#llm-generate) and [`LLM.score`](../../docs/lib/generations.html#llm-score) allow you to generate and score text using any LMQL-support inference backend. The Generations API is also seamlessly compatible with standard LMQL, allowing you to switch and combine the two as needed. \\n\\nFor more information, please refer to the [documentation](../../docs/lib/generations.html).\\n\\n## Chat \\n\\nLMQL 0.7 adds a new [Chat API](../../docs/lib/chat.md), allowing you to easily deploy chatbots with just a couple lines of LMQL.\\n\\n\\n\\nLMQL Chat comes with custom output writers, that allow you to easily stream chatbot input and output over a variety of channels, including WebSockets, HTTP, and SSE. A simple `lmql chat` CLI tool was also added, that allows you to instantly launch your LMQL queries as fully interactive chatbots. \\n\\nWe also provide documentation resources on how to get started with chatbot development with LMQL, including chapters on Chatbot Serving, Internal Reasoning and Defending against Prompt Injection. For more information, please refer to the [documentation](../../docs/lib/chat.md).\\n\\n## Backends\\n\\nLMQL 0.7 ships with three new backends for inference and tokenization:\\n\\n* LMQL 0.7 adds support for OpenAI\'s newly released `gpt-3.5-turbo-instruct` model. In contrast to other 3.5 series models, this variant supports the *Completions API*, which means that LMQL constraints are compatible with it.\\n\\n* LMQL now supports hosting models on [replicate.com](https://replicate.com) infrastructure, allowing you to run LMQL models in the cloud. To learn more, please refer to the [documentation](../../docs/models/replicate.md). Thanks a lot to community member [@charles-dyfis-net](https://github.com/charles-dyfis-net) for contributing this!\\n\\n* LMQL added `sentencepiece` as an additional tokenization backend, specifically for `llama.cpp` models. This means, `llama.cpp` models can now be used without requiring `transformers` for tokenization. Thanks a lot to community member [@khushChopra](https://github.com/khushChopra) for contributing this.\\n\\n\\n## Inference Certificates\\n\\nTo make LLM inference more transparent and re-producible, LMQL 0.7 also adds [*inference certificates*](../../docs/lib/inference-certificates.md). An inference certificate is a simple data structure that records essential information needed to reproduce an inference result. Certificates can be generated for any LLM call that happens in an LMQL context.\\n\\nTo produce an inference certificate, pass `certificate=True` or `certificate=` to your query or generate call:\\n\\n```truncated\\n# call and save certificate\\nsay_hello(certificate=\\"my-certificate.json\\")\\n```\\n\\nThe resulting certificate file provides a way to document, trace and reproduce LLM inference results by recording the *exact (tokenized) prompts* and information on the *environment and generation parameters*.\\n\\nThis can be helpful to better understand what is happening during inference, to debug issues, and to reproduce results. 
It also offers a way to document LLM failures, to better guide the discussion around the concrete capabilities and limitations of LLMs.\\n\\n## Decorators\\n\\n[Variable Decorators](../../docs/language/decorators.md) offer a new and simple way to call custom Python functions as part of the core generation loop in LMQL:\\n\\n```lmql\\ndef screaming(value):\\n \\"\\"\\"Decorator to convert a string to uppercase\\"\\"\\"\\n return value.upper()\\n\\n\\"Say \'this is a test\':[@screaming TEST]\\"\\n```\\n```promptdown\\nSay \'this is a test\': [TEST| THIS IS A TEST]\\n```\\n\\nSimilar to Python decorators, LMQL decorators are functions that take a variable as input and can wrap and modify its value. \\n\\nIn the example above, we use the `@screaming` decorator to convert the value of `TEST` to uppercase. Decorators can be used to implement a wide range of custom functionality, including string normalization, datatype conversion, and more. LMQL also provides decorators that allow to stream or pre-process data during generation. For more information, please refer to the [documentation](../../docs/language/decorators.md).\\n\\n\\n## Documentation Update\\n\\nThe website and many chapters of the LMQL documentation have also been updated and extended and now include more examples and explanations. We have updated the visual design to make it easier to read and navigate. \\n\\nThe documentation now also includes a *work-in-progress* [Language Reference](/docs/language/reference.md), which aims to provide a more comprehensive and formal description of LMQL\'s syntax and semantics, all in one place.\\n\\n## Preview Features\\n\\nApart from many new core features, LMQL 0.7 also ships with several *experimental preview features*, allowing you to test drive new functionality before it has fully stabilized and is released as main-line functionality.\\n\\nThese features are marked as *experimental* and are not yet fully supported. We are releasing them to gather feedback and to allow users to test them out early on. Note that these features are subject to change and may be removed/modified in future releases.\\n\\n### LMQL Actions Preview\\n\\n*LMQL Actions* is the first version of LMQL\'s function calling layer. It allows you to expose arbitrary Python functions to the LLM reasoning loop and lets the model call them during generation. Function demonstration and the calling protocol can be both handled automatically by the LMQL runtime, allowing for simple use like this:\\n\\n```{lmql}\\ndef wiki(q): ...\\ndef calc(expr): ...\\n\\n\\"Q: What is the population of the US and Germany combined?\\"\\n\\"A: [REASONING]\\" where inline_use(REASONING, [wiki, calc])\\n```\\n\\nA future release will bring more documentation and details on Actions, including how to use and customize it for your use cases. Until then we invite everyone to try and hack with the current implementation, fully contained in [`actions.py`](https://github.com/eth-sri/lmql/blob/main/src/lmql/lib/actions.py).\\n\\n### Regex Constraints Preview\\n\\nLMQL now has support for regex constraints, allowing you to use regular expressions to constrain the output of a variable. For example, the following query will always generate a valid date of the form `DD/MM`:\\n\\n```{lmql}\\n\\"It\'s the last day of June so today is [RESPONSE]\\" where REGEX(RESPONSE, r\\"[0-9]{2}/[0-9]{2}\\")\\n```\\n\\n### Types / Datatype Constraints Preview\\n\\nLMQL is moving towards fully typed LLM generation. 
On the way there, we have started to add support for *dataclass constraints*, allowing you to constrain the output of a variable to a specific structured output schema:\\n\\n```lmql\\nimport lmql\\nfrom dataclasses import dataclass\\n\\n@dataclass\\nclass Person:\\n name: str\\n age: int\\n job: str\\n\\n\\"Alice is a 21 years old and works as an engineer at LMQL Inc in Zurich, Switzerland.\\\\n\\"\\n\\"Structured: [PERSON_DATA]\\\\n\\" where type(PERSON_DATA) is Person\\n\\nPERSON_DATA\\n# Person(name=\'Alice\', age=21, job=\'engineer\')\\n```\\n\\nTo achieve this, LMQL leverages constrained generation to make sure the LLM always produces all information required to populate a valid `Person` object. The resulting `PERSON_DATA` object can then be directly used like a regular Python object. Types are still in an early stage and we are working on adding more features and functionality. \\n\\n\\n## Other Changes\\n\\n* The LMQL playground can now be used from the Windows `cmd.exe`. Thanks a lot to community member [@mosheduminer](https://github.com/mosheduminer).\\n\\n* LMQL/LMTP model backends can now be accessed [as Langchain `LLM` objects](https://github.com/eth-sri/lmql/blob/main/src/lmql/models/lmtp/lmtp_langchain.py) to use them in your Langchain pipelines. Thanks to [@4onon](https://github.com/4onon) for contributing this. \\n\\n* LMQL can now be [installed as a NixOS package](https://github.com/eth-sri/lmql/tree/main/scripts/flake.d). Thanks to [@charles-dyfis-net](https://github.com/charles-dyfis-net) for contributing this.\\n\\n## 🎬 And that\'s a wrap!\\n\\nLMQL 0.7 is a big release and we are excited to see what you will build with it. As always, please let us know if you have any questions, suggestions or bug reports, on [GitHub](https://github.com/eth-sri/lmql), [Discord](https://discord.gg/7eJP4fcyNT), [Twitter](https://twitter.com/lmqllang) or via [hello@lmql.ai](mailto:hello@lmql.ai).\\n","html":"

LMQL 0.7 brings Procedural Prompt Programming

\\n

October 10, 2023

\\n

Today, we are releasing LMQL 0.7. This series is the biggest update since the original release, including many community contributions. Next to several new main-line features like nested queries, the Generations API and the Chat API, it also includes several experimental preview features, allowing you to experiment with new incoming functionality before it is fully released.

\\n

LMQL 0.7 has also moved to semantic versioning with the direct predecessor being 0.0.6.6. This means that the next feature release will be 0.8, and the next bugfix release will be 0.7.1.

\\n

Nested Queries for Procedural Prompt Programming

\\n

In 0.7, you can now use Nested Queries to call an LMQL query as a nested function in the context of another query. For this, LMQL implements procedural programming for prompting. To illustrate, consider the following example:

\\n
lmql
# chain of thought prompting strategy\\n@lmql.query\\ndef chain_of_thought():\\n    '''lmql\\n    "A: Let's think step by step.\\\\n [REASONING]"\\n    "Therefore the answer is[ANSWER]" where STOPS_AT(ANSWER, ".")\\n    return ANSWER.strip()\\n    '''\\n\\n# top-level query\\n"Q: It is August 12th, 2020. What date was it \\\\\\n    100 days ago? [ANSWER: chain_of_thought]"\\n\\nANSWER # May 4th, 2020\\n
\\n

We first define a simple LMQL function chain_of_thought to do chain-of-thought prompting. In our top-level query, we can then call this function to decode an answer using the [ANSWER: chain_of_thought] syntax. During execution, LMQL then inserts the instructions and constraints from chain_of_thought into the top-level query, generates a value for ANSWER, and then removes the instructions and constraints again, only returning the final result.

\\n

Nested queries are Prompt Function Calls. This design of nested queries is inspired by the idea of function or procedure calls in traditional programming. Removing intermediate instructions and constraints also has parallels to the idea of stack unwinding, a technique to implement function calls in low-level languages.

\\n

LMQL transfers these ideas to prompting, inheriting the general benefits of procedural programming:

\\n
    \\n
  • \\n

    Encapsulation and Model Focus Nested Queries encapsulate and hide the prompting logic used to generate ANSWER, which means our top-level query is much cleaner and more concise. Further, by hiding intermediate instructions from the model in the context of the top-level query, we can reduce noise in the overall prompt, allowing the model to focus on the currently relevant information only, and not get distracted by previous intermediate steps.

    \\n
  • \\n
  • \\n

    Nesting and Reuse LMQL queries can be nested arbitrarily deep, allowing you to reuse and combine queries modularly. For example, you could define a query get_year to extract a year from the response text, and then use this query in chain_of_thought to extract the date from the question. By achieving modularity for sub-prompts, nested queries also allow you to reuse prompts across different query programs.

    \\n
  • \\n
\\n

To learn more about nested queries, please refer to the relevant chapter in the documentation.

\\n

Generations API

\\n

LMQL 0.7 adds the Generations API, a lightweight high-level library for LMQL-based text generation and scoring. The API was designed to be easy to use and does not require users to write any LMQL themselves:

\\n
python
# obtain a model instance\\nm: lmql.LLM = lmql.model("openai/gpt-3.5-turbo-instruct")\\n# simple generation\\nm.generate_sync("Hello", max_tokens=10)\\n# -> Hello, I am a 23 year old female.\\n
\\n

\\n

Functions such as LLM.generate and LLM.score allow you to generate and score text using any LMQL-supported inference backend. The Generations API is also seamlessly compatible with standard LMQL, allowing you to switch and combine the two as needed.

\\n
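A small, hedged sketch of scoring: the score_sync name is assumed here as the synchronous counterpart of LLM.score, analogous to generate_sync; see the linked documentation for the exact interface.

```python
import lmql

# hedged sketch: score_sync is assumed to mirror generate_sync and rank
# the given continuations against the prompt
m: lmql.LLM = lmql.model("openai/gpt-3.5-turbo-instruct")
result = m.score_sync("Say 'this is a test':", [" this is a test", " something else"])
print(result)
```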

For more information, please refer to the documentation.

\\n

Chat

\\n

LMQL 0.7 adds a new Chat API, allowing you to easily deploy chatbots with just a couple lines of LMQL.

\\n\\n

LMQL Chat comes with custom output writers that allow you to easily stream chatbot input and output over a variety of channels, including WebSockets, HTTP, and SSE. A simple lmql chat CLI tool was also added, which allows you to instantly launch your LMQL queries as fully interactive chatbots.

\\n

We also provide documentation resources on how to get started with chatbot development with LMQL, including chapters on Chatbot Serving, Internal Reasoning and Defending against Prompt Injection. For more information, please refer to the documentation.

\\n

Backends

\\n

LMQL 0.7 ships with three new backends for inference and tokenization:

\\n
    \\n
  • \\n

    LMQL 0.7 adds support for OpenAI\'s newly released gpt-3.5-turbo-instruct model. In contrast to other 3.5 series models, this variant supports the Completions API, which means that LMQL constraints are compatible with it.

    \\n
  • \\n
  • \\n

    LMQL now supports hosting models on replicate.com infrastructure, allowing you to run LMQL models in the cloud. To learn more, please refer to the documentation. Thanks a lot to community member @charles-dyfis-net for contributing this!

    \\n
  • \\n
  • \\n

    LMQL added sentencepiece as an additional tokenization backend, specifically for llama.cpp models. This means llama.cpp models can now be used without requiring transformers for tokenization. Thanks a lot to community member @khushChopra for contributing this.

    \\n
  • \\n
\\n

Inference Certificates

\\n

To make LLM inference more transparent and reproducible, LMQL 0.7 also adds inference certificates. An inference certificate is a simple data structure that records essential information needed to reproduce an inference result. Certificates can be generated for any LLM call that happens in an LMQL context.

\\n

To produce an inference certificate, pass certificate=True or certificate=<filename> to your query or generate call:

\\n
truncated
# call and save certificate\\nsay_hello(certificate="my-certificate.json")\\n
\\n
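Since certificates can be requested for any LLM call in an LMQL context, the same flag should also work on a Generations API call. A hedged sketch (model, prompt and file name are only examples):

```python
import lmql

# hedged sketch: the certificate parameter is passed directly to a generate
# call, as described above
m = lmql.model("openai/gpt-3.5-turbo-instruct")
m.generate_sync("Hello", max_tokens=10, certificate="hello-certificate.json")
```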

The resulting certificate file provides a way to document, trace and reproduce LLM inference results by recording the exact (tokenized) prompts and information on the environment and generation parameters.

\\n

This can be helpful to better understand what is happening during inference, to debug issues, and to reproduce results. It also offers a way to document LLM failures, to better guide the discussion around the concrete capabilities and limitations of LLMs.

\\n

Decorators

\\n

Variable Decorators offer a new and simple way to call custom Python functions as part of the core generation loop in LMQL:

\\n
lmql
def screaming(value):\\n    """Decorator to convert a string to uppercase"""\\n    return value.upper()\\n\\n"Say 'this is a test':[@screaming TEST]"\\n
\\n
promptdown

Say \'this is a test\': TEST THIS IS A TEST\\n

\\n

Similar to Python decorators, LMQL decorators are functions that take a variable as input and can wrap and modify its value.

\\n

In the example above, we use the @screaming decorator to convert the value of TEST to uppercase. Decorators can be used to implement a wide range of custom functionality, including string normalization, datatype conversion, and more. LMQL also provides decorators that allow you to stream or pre-process data during generation. For more information, please refer to the documentation.

\\n
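
To illustrate the datatype conversion use case mentioned above, here is a small hypothetical sketch, assuming the decorator function is visible from the surrounding Python scope of the query:

\\n
python
import lmql\\n\\ndef as_int(value):\\n    """hypothetical decorator: cast the final variable value to a Python int"""\\n    return int(value)\\n\\n@lmql.query\\ndef pick_number():\\n    '''lmql\\n    "Pick a number between 1 and 10:[@as_int NUM]" where INT(NUM)\\n    return NUM\\n    '''\\n
\\n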

Documentation Update

\\n

The website and many chapters of the LMQL documentation have also been updated and extended and now include more examples and explanations. We have updated the visual design to make it easier to read and navigate.

\\n

The documentation now also includes a work-in-progress Language Reference, which aims to provide a more comprehensive and formal description of LMQL\'s syntax and semantics, all in one place.

\\n

Preview Features

\\n

Apart from many new core features, LMQL 0.7 also ships with several experimental preview features, allowing you to test-drive new functionality before it is fully stabilized and released as main-line functionality.

\\n

These features are marked as experimental and are not yet fully supported. We are releasing them to gather feedback and to allow users to test them out early on. Note that these features are subject to change and may be removed/modified in future releases.

\\n

LMQL Actions Preview

\\n

LMQL Actions is the first version of LMQL\'s function calling layer. It allows you to expose arbitrary Python functions to the LLM reasoning loop and lets the model call them during generation. Function demonstration and the calling protocol can both be handled automatically by the LMQL runtime, allowing for simple use like this:

\\n
def wiki(q): ...\\ndef calc(expr): ...\\n\\n"Q: What is the population of the US and Germany combined?"\\n"A: [REASONING]" where inline_use(REASONING, [wiki, calc])\\n
\\n

A future release will bring more documentation and details on Actions, including how to use and customize it for your use cases. Until then, we invite everyone to try out and hack on the current implementation, which is fully contained in actions.py.

\\n

Regex Constraints Preview

\\n

LMQL now has support for regex constraints, allowing you to use regular expressions to constrain the output of a variable. For example, the following query will always generate a valid date of the form DD/MM:

\\n
"It's the last day of June so today is [RESPONSE]" where REGEX(RESPONSE, r"[0-9]{2}/[0-9]{2}")\\n
\\n
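
Since REGEX is an ordinary constraint, it should also compose with other LMQL interfaces, for example the lightweight lmql.F functions, which accept a constraint string as their second argument (see the 0.0.6.5 notes further down this page). A hypothetical sketch:

\\n
python
import lmql\\n\\n# hypothetical sketch: a REGEX constraint passed as the constraint string of lmql.F\\ndate = lmql.F("The last day of June in DD/MM format is [RESPONSE]",\\n              'REGEX(RESPONSE, r"[0-9]{2}/[0-9]{2}")')\\ndate()  # e.g. 30/06\\n
\\n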

Types / Datatype Constraints Preview

\\n

LMQL is moving towards fully typed LLM generation. On the way there, we have started to add support for dataclass constraints, allowing you to constrain the output of a variable to a specific structured output schema:

\\n
lmql
import lmql\\nfrom dataclasses import dataclass\\n\\n@dataclass\\nclass Person:\\n    name: str\\n    age: int\\n    job: str\\n\\n"Alice is a 21 years old and works as an engineer at LMQL Inc in Zurich, Switzerland.\\\\n"\\n"Structured: [PERSON_DATA]\\\\n" where type(PERSON_DATA) is Person\\n\\nPERSON_DATA\\n# Person(name='Alice', age=21, job='engineer')\\n
\\n

To achieve this, LMQL leverages constrained generation to make sure the LLM always produces all information required to populate a valid Person object. The resulting PERSON_DATA object can then be directly used like a regular Python object. Types are still in an early stage and we are working on adding more features and functionality.

\\n

Other Changes

\\n\\n

🎬 And that\'s a wrap!

\\n

LMQL 0.7 is a big release and we are excited to see what you will build with it. As always, please let us know if you have any questions, suggestions or bug reports, on GitHub, Discord, Twitter or via hello@lmql.ai.

\\n","frontmatter":{"date":"2023-10-10T10:10:00.000Z","title":"LMQL 0.7 brings Procedural Prompt Programming"},"excerpt":"","url":"/blog/posts/release-0.7.html"},{"src":"---\\ndate: 2023-07-25\\ntitle: LMQL v0.0.6.6\\n---\\n\\nJuly 25, 2023\\n\\nWe just released LMQL *0.0.6.6*. This is a minor update with a couple of smaller fixes and improvements.\\n\\n* `lmql.F` now supports positional arguments:\\n\\n```python\\ngreet = lmql.F(\\"Greet {a} and {b}: [GREETING]\\")\\n\\n# call with positional arguments\\ngreet(\\"Alice\\", \\"Bob\\") # Greet Alice and Bob: Hello!\\n# call with keyword arguments\\ngreet(a=\\"Alice\\", b=\\"Bob\\") # Greet Alice and Bob: Hello!\\n```\\n\\n* We improved the error handling of the `llama.cpp` backend and fixed a bug with model identifier parsing. \\n\\n* We also fixed a bug with the LMTP scheduler, where CPU load was high even when no tasks were present. Thanks to community member [@4onen](https://github.com/4onen) for reporting and fixing this!\\n\\n* Added backend support for `auto_gptq` quantized models, contributed by community member [@meditans](https://github.com/meditans).\\n\\n* We fixed an issue where for Azure OpenAI models, a dummy configuration `api.env` was needed. See our [documentation](../../docs/models/azure.md) for details. Thanks to community members Missing and [@hooman-bayer](https://github.com/hooman-bayer) for their feedback and contributions to this.\\n\\n> **Versioning Note**: 0.0.6.6 is the last release with two leading zeros. Starting with the next release, LMQL will adopt semantic versioning and use a single leading zero, i.e. 0.6.7.","html":"

July 25, 2023

\\n

We just released LMQL 0.0.6.6. This is a minor update with a couple of smaller fixes and improvements.

\\n
    \\n
  • lmql.F now supports positional arguments:
  • \\n
\\n
python
greet = lmql.F("Greet {a} and {b}: [GREETING]")\\n\\n# call with positional arguments\\ngreet("Alice", "Bob") # Greet Alice and Bob: Hello!\\n# call with keyword arguments\\ngreet(a="Alice", b="Bob") # Greet Alice and Bob: Hello!\\n
\\n
    \\n
  • \\n

    We improved the error handling of the llama.cpp backend and fixed a bug with model identifier parsing.

    \\n
  • \\n
  • \\n

    We also fixed a bug with the LMTP scheduler, where CPU load was high even when no tasks were present. Thanks to community member @4onen for reporting and fixing this!

    \\n
  • \\n
  • \\n

    Added backend support for auto_gptq quantized models, contributed by community member @meditans.

    \\n
  • \\n
  • \\n

    We fixed an issue where for Azure OpenAI models, a dummy configuration api.env was needed. See our documentation for details. Thanks to community members Missing and @hooman-bayer for their feedback and contributions to this.

    \\n
  • \\n
\\n
\\n

Versioning Note: 0.0.6.6 is the last release with two leading zeros. Starting with the next release, LMQL will adopt semantic versioning and use a single leading zero, i.e. 0.6.7.

\\n
\\n","frontmatter":{"date":"2023-07-25T00:00:00.000Z","title":"LMQL v0.0.6.6"},"excerpt":"","url":"/blog/posts/release-0.0.6.6.html"},{"src":"---\\ndate: 2023-07-13\\ntitle: LMQL becomes simpler and adds llama.cpp\\n---\\n\\n# LMQL becomes simpler and adds llama.cpp\\n\\nJuly 13, 2023\\n\\nToday we are releasing LMQL 0.0.6.5. This update contains a major simplification of the LMQL syntax, moving it much closer to standard Python. It also includes a `llama.cpp` based inference backend, several bug fixes and other minor improvements.\\n\\nYou can try the latest version of LMQL in your browser at [lmql.ai/playground](https://lmql.ai/playground) or install it via `pip install lmql`.\\n\\n## One Line Is All It Takes\\n\\nMost notably, 0.0.6.5 comes with several simplifications of the core syntax of LMQL. Of course, all changes are backwards compatible, so you can continue to use your existing query code and move to the new version without any changes.\\n\\nWith this, we aim to minimize syntactic overhead, employing sensible defaults to enable more concise programs like the following:\\n\\n```{lmql}\\nname::simple-syntax\\n\\n\\"One line is all it takes [CONTINUATION]\\"\\n```\\n```promptdown\\nOne line is all it takes [CONTINUATION|Fallin\' in love with me.]\\n```\\n\\n**Sensible Defaults** This is possible because LMQL now automatically assumes `argmax` and `openai/text-davinci-003` as (configurable) default model. If you prefer to use \\na different model or custom decoder settings, you can still specify them explicitly, e.g. in the `@lmql.query` decorator function as demonstrated later in this post.\\n\\nWithout any additional configuration, the simple query code above translates to a full LMQL program like this:\\n\\n```{lmql}\\nname::simple-syntax-default\\n\\nargmax \\"One line is all it takes [CONTINUATION]\\" from \\"openai/text-davinci-003\\"\\n```\\n\\n
\\n\\n### Inline Constraints\\n\\nLMQL now allows you to specify several inline `where` constraints. This enables constraints that refer to local program variables, which means constraints can now be dependent on previous model outputs.\\n\\n```{lmql}\\nname::list-with-array\\n\\n\\"A list of awesome Dua Lipa songs:\\\\n\\"\\nsongs = []\\n\\n\\"- New Rules\\\\n\\"\\nfor i in range(4):\\n \\"-[SONG]\\\\n\\" where STOPS_BEFORE(SONG, \\"\\\\n\\")\\n songs.append(SONG)\\n\\n\\"Out of these, my favorite is[FAVORITE]\\" where FAVORITE in songs\\n```\\n```promptdown\\nA list of awesome Dua Lipa songs:⏎\\n- New Rules\\n- [SONG|Don\'t Start Now]\\n- [SONG|IDGAF]\\n- [SONG|Be the One]\\n- [SONG|Blow Your Mind (Mwah)]\\nOut of these, my favorite is [FAVORITE|Don\'t Start Now]\\n```\\n\\nNote also how in this example LMQL code now reads much more like standard Python code, without any additional level of indentation. \\n\\n
\\n\\n### `@lmql.query` functions\\n\\nThe overhauled syntax also makes LMQL much easier on the eyes when used with the `@lmql.query` [function decorator in Python](/docs/lib/python.md):\\n\\n```python\\nimport lmql\\nimport json\\n\\n@lmql.query(model=\\"openai/text-curie-001\\", temperature=0.9)\\ndef summarize(): \\n \'\'\'lmql\\n \\"\\"\\"\\n Provide a summary of Dua Lipa, the pop icon:\\n {{\\n \\"name\\": \\"[STRING_VALUE]\\",\\n \\"chart_position\\": [INT_VALUE],\\n \\"top_songs\\": [[\\n \\"[STRING_VALUE]\\",\\n \\"[STRING_VALUE]\\"\\n ]]\\n }}\\n \\"\\"\\" where STOPS_BEFORE(STRING_VALUE, \'\\"\') and INT(INT_VALUE) and len(TOKENS(INT_VALUE)) < 3\\n \\n return json.loads(context.prompt.split(\\"pop icon:\\",1)[1])\\n \'\'\'\\n\\nprint(summarize()) # {\'name\': \'Dua Lipa\', \'chart_position\': 3415, \'top_songs\': [\'New Rules\', \'Havana\']}\\n\\n```\\n\\n
\\n\\n### `lmql.F` Lambda Functions\\n\\nBased on LMQL\'s new minimal syntax, we introduce a novel and concise way to write LLM-based lambda functions. This offers a lightweight entryway to get started with integrated small LLM-based utilities in your code, without having to write a full LMQL program.\\n\\n```python\\nimport lmql\\n\\nsummarize = lmql.F(\\"Summarize the following in a few words: {data}: [SUMMARY]\\")\\nmain_subject = lmql.F(\\"What is the main subject (noun) of the following text? {data}: [SUBJECT]\\", \\n \\"len(TOKENS(SUBJECT)) < 20\\")\\n\\ntext = \\"In LMQL, users can specify high-level, logical constraints ...\\"\\n\\nsummarize(data=text) # LMQL enables high-level constraints to be enforced during text \\n # generation, simplifying multi-part prompting and integration.\\nmain_subject(data=text) # Language Model Query Language (LMQL)\\n\\n```\\n\\n
\\n
\\n\\n## `llama.cpp` Inference Backend\\n\\nLMQL now also fully integrates with the excellent [llama.cpp](https://github.com/ggerganov/llama.cpp) C++ implementation of a number of Transformer-based language models. \\n\\nUsing `llama.cpp` from LMQL is as simple as specifying it in the `from` clause of a query:\\n\\n```{lmql}\\nname::llama-cpp-blog\\n\\nargmax \\"Say \'this is a test\':[RESPONSE]\\" from \\"llama.cpp:.bin\\"\\n```\\n\\nWe support, both, in-process loading of `llama.cpp`, as well as remote inference via `lmql serve-model`. To learn more about `llama.cpp` and how to use it with LMQL, check out the corresponding chapter in the LMQL [documentation](/docs/models/llama.cpp.md).\\n\\n
\\n\\n## Other Changes\\n\\n* LMQL now includes a `random` model backend, which randomly samples tokens from the GPT-2 vocabulary. This is useful for debugging and testing purposes and can be used for data generation in the context of highly constrained query programs.\\n\\n* Two caching issues have been fixed, avoiding cache collisions which could lead to repeated model outputs.\\n\\n* More robust query string parsing, allowing for [robust escaping](/docs/language/scripted-prompting.md#escaping) of special characters `[`, `]`, `{` and `}`.\\n\\n* Added support for `transformers` based Llama models and the associated (fast) implementation of HF tokenizers.\\n\\n* Simplified Azure OpenAI support, see the relevant chapter in the [documentation](/docs/models/azure.md).\\n\\nWe thank community members [@minosvasilias](https://github.com/minosvasilias) and [@CircArgs](https://github.com/CircArgs) for their contribution to this release.","html":"

LMQL becomes simpler and adds llama.cpp

\\n

July 13, 2023

\\n

Today we are releasing LMQL 0.0.6.5. This update contains a major simplification of the LMQL syntax, moving it much closer to standard Python. It also includes a llama.cpp based inference backend, several bug fixes and other minor improvements.

\\n

You can try the latest version of LMQL in your browser at lmql.ai/playground or install it via pip install lmql.

\\n

One Line Is All It Takes

\\n

Most notably, 0.0.6.5 comes with several simplifications of the core syntax of LMQL. Of course, all changes are backwards compatible, so you can continue to use your existing query code and move to the new version without any changes.

\\n

With this, we aim to minimize syntactic overhead, employing sensible defaults to enable more concise programs like the following:

\\n
"One line is all it takes [CONTINUATION]"\\n
\\n
promptdown

One line is all it takes [CONTINUATION|Fallin\' in love with me.]\\n

\\n

Sensible Defaults This is possible because LMQL now automatically assumes argmax and openai/text-davinci-003 as the (configurable) default decoder and model. If you prefer to use\\na different model or custom decoder settings, you can still specify them explicitly, e.g. in the @lmql.query decorator function as demonstrated later in this post.

\\n

Without any additional configuration, the simple query code above translates to a full LMQL program like this:

\\n
argmax "One line is all it takes [CONTINUATION]" from "openai/text-davinci-003"\\n
\\n

\\n

Inline Constraints

\\n

LMQL now allows you to specify several inline where constraints. This enables constraints that refer to local program variables, which means constraints can now be dependent on previous model outputs.

\\n
"A list of awesome Dua Lipa songs:\\\\n"\\nsongs = []\\n\\n"- New Rules\\\\n"\\nfor i in range(4):\\n    "-[SONG]\\\\n" where STOPS_BEFORE(SONG, "\\\\n")\\n    songs.append(SONG)\\n\\n"Out of these, my favorite is[FAVORITE]" where FAVORITE in songs\\n
\\n
promptdown

A list of awesome Dua Lipa songs:⏎\\n- New Rules\\n- [SONG|Don\'t Start Now]\\n- [SONG|IDGAF]\\n- [SONG|Be the One]\\n- [SONG|Blow Your Mind (Mwah)]\\nOut of these, my favorite is [FAVORITE|Don\'t Start Now]\\n

\\n

Note also how in this example LMQL code now reads much more like standard Python code, without any additional level of indentation.

\\n
\\n

@lmql.query functions

\\n

The overhauled syntax also makes LMQL much easier on the eyes when used with the @lmql.query function decorator in Python:

\\n
python
import lmql\\nimport json\\n\\n@lmql.query(model="openai/text-curie-001", temperature=0.9)\\ndef summarize(): \\n    '''lmql\\n    """\\n    Provide a summary of Dua Lipa, the pop icon:\\n    {{\\n      "name": "[STRING_VALUE]",\\n      "chart_position": [INT_VALUE],\\n      "top_songs": [[\\n         "[STRING_VALUE]",\\n         "[STRING_VALUE]"\\n      ]]\\n    }}\\n    """ where STOPS_BEFORE(STRING_VALUE, '"') and INT(INT_VALUE) and len(TOKENS(INT_VALUE)) < 3\\n    \\n    return json.loads(context.prompt.split("pop icon:",1)[1])\\n    '''\\n\\nprint(summarize()) # {'name': 'Dua Lipa', 'chart_position': 3415, 'top_songs': ['New Rules', 'Havana']}\\n\\n
\\n

\\n

lmql.F Lambda Functions

\\n

Based on LMQL\'s new minimal syntax, we introduce a novel and concise way to write LLM-based lambda functions. This offers a lightweight entryway to get started with integrated small LLM-based utilities in your code, without having to write a full LMQL program.

\\n
python
import lmql\\n\\nsummarize = lmql.F("Summarize the following in a few words: {data}: [SUMMARY]")\\nmain_subject = lmql.F("What is the main subject (noun) of the following text? {data}: [SUBJECT]", \\n                      "len(TOKENS(SUBJECT)) < 20")\\n\\ntext = "In LMQL, users can specify high-level, logical constraints ..."\\n\\nsummarize(data=text) # LMQL enables high-level constraints to be enforced during text \\n                     # generation, simplifying multi-part prompting and integration.\\nmain_subject(data=text) # Language Model Query Language (LMQL)\\n\\n
\\n

\\n
\\n

llama.cpp Inference Backend

\\n

LMQL now also fully integrates with the excellent llama.cpp C++ implementation of a number of Transformer-based language models.

\\n

Using llama.cpp from LMQL is as simple as specifying it in the from clause of a query:

\\n
argmax "Say 'this is a test':[RESPONSE]" from "llama.cpp:<PATH TO WEIGHTS>.bin"\\n
\\n

We support both in-process loading of llama.cpp and remote inference via lmql serve-model. To learn more about llama.cpp and how to use it with LMQL, check out the corresponding chapter in the LMQL documentation.

\\n
\\n

Other Changes

\\n
    \\n
  • \\n

    LMQL now includes a random model backend, which randomly samples tokens from the GPT-2 vocabulary. This is useful for debugging and testing purposes and can be used for data generation in the context of highly constrained query programs.

    \\n
  • \\n
  • \\n

    Two caching issues have been fixed, avoiding cache collisions which could lead to repeated model outputs.

    \\n
  • \\n
  • \\n

    More robust query string parsing, allowing for robust escaping of special characters [, ], { and }.

    \\n
  • \\n
  • \\n

    Added support for transformers based Llama models and the associated (fast) implementation of HF tokenizers.

    \\n
  • \\n
  • \\n

    Simplified Azure OpenAI support; see the relevant chapter in the documentation.

    \\n
  • \\n
\\n

We thank community members @minosvasilias and @CircArgs for their contributions to this release.

\\n","frontmatter":{"date":"2023-07-13T00:00:00.000Z","title":"LMQL becomes simpler and adds llama.cpp"},"excerpt":"","url":"/blog/posts/release-0.0.6.5.html"},{"src":"---\\ndate: 2023-06-08\\ntitle: Releasing LMQL v0.0.6.4 LMTP, Azure, Synchronous API, and more\\n---\\n\\n# Releasing LMQL 0.0.6.4: LMTP, Azure, Synchronous API, and more\\n\\nJune 8, 2023\\n\\nAmong many things, this update contains several bug fixes and improvements. The most notable changes are:\\n\\n* **Azure OpenAI support** LMQL now supports OpenAI models that are served via Azure. For more information on how to use Azure models, please see the corresponding chapter in the [documentation](/docs/models/azure.md). Many thanks to [@veqtor](https://github.com/veqtor) for contributing this feature.\\n\\n* **Local Models via the Language Model Transport Protocol** LMQL 0.0.6.4 implements a novel protocol to stream token output from local models, vastly improving performance. In our first benchmarks, we observed a 5-6x speedup for local model inference. For more information on how to use local models, please see the corresponding chapter in the [documentation](/docs/models/hf.md).\\n\\n To learn more about the internals of the new streaming protocol, i.e. the language model transport protocol (LMTP), you can find more details in [this README file](https://github.com/eth-sri/lmql/blob/main/src/lmql/models/lmtp/README.md). In the future, we intend to implement more model backends using LMTP, streamlining communication between LMQL and models.\\n\\n
\\n \\n
\\n LMQL\'s new streaming protocol (LMTP) allows for faster local model inference.\\n
\\n\\n* **Synchronous Python API** Next to an `async/await` based API, LMQL now also provides a synchronous API. This means you no longer need to use `asyncio` to use LMQL from Python. \\n\\n To use the synchronous API, simply declare `@lmql.query` function without the `async` keyword, e.g.\\n\\n ```python\\n import lmql\\n\\n @lmql.query\\n def hello(s: str):\\n \'\'\'lmql\\n argmax \\n \\"Hello {s} [RESPONSE]\\" \\n return RESPONSE\\n from \\n \\"chatgpt\\"\\n \'\'\'\\n\\n print(hello(\\"world\\")) # [\'Hello! How can I assist you today?\']\\n ```\\n\\n If you instead want to use `lmql.run` in a synchronous context, you can now use `lmql.run_sync` instead. To learn more about how LMQL can be used from Python, check out our [documentation](/docs/lib/python.md).\\n\\n* **Improved Tokenizer Backends** LMQL can now use the excellent [`tiktoken` tokenizer](https://github.com/openai/tiktoken) as tokenization backend (for OpenAI models). Furthermore, all tokenization backends have been ported to operate on a byte-level, which improves support for multibyte characters and emojis. This is especially relevant for non-English languages and special characters.\\n\\n* **Docker Image** LMQL now provides a Docker image that can be used to run the LMQL playground in a containerized environment. For more information, please see the [documentation](/docs/development/docker-setup.md). Many thanks to [@SilacciA](https://github.com/SilacciA) for contributing this feature.\\n\\n* **Faster Startup Time** We optimized LMQL\'s import hierarchy, which results in faster module loading time.","html":"

Releasing LMQL 0.0.6.4: LMTP, Azure, Synchronous API, and more

\\n

June 8, 2023

\\n

Among many things, this update contains several bug fixes and improvements. The most notable changes are:

\\n
    \\n
  • \\n

    Azure OpenAI support LMQL now supports OpenAI models that are served via Azure. For more information on how to use Azure models, please see the corresponding chapter in the documentation. Many thanks to @veqtor for contributing this feature.

    \\n
  • \\n
  • \\n

    Local Models via the Language Model Transport Protocol LMQL 0.0.6.4 implements a novel protocol to stream token output from local models, vastly improving performance. In our first benchmarks, we observed a 5-6x speedup for local model inference. For more information on how to use local models, please see the corresponding chapter in the documentation.

    \\n

    To learn more about the internals of the new streaming protocol, i.e. the language model transport protocol (LMTP), you can find more details in this README file. In the future, we intend to implement more model backends using LMTP, streamlining communication between LMQL and models.

    \\n
    \\n \\n
    \\n LMQL\'s new streaming protocol (LMTP) allows for faster local model inference.\\n
    \\n
  • \\n
  • \\n

    Synchronous Python API In addition to the async/await-based API, LMQL now also provides a synchronous API. This means you no longer need to use asyncio to use LMQL from Python.

    \\n

    To use the synchronous API, simply declare your @lmql.query function without the async keyword, e.g.

    \\n
    python
    import lmql\\n\\n@lmql.query\\ndef hello(s: str):\\n    '''lmql\\n    argmax \\n        "Hello {s} [RESPONSE]" \\n        return RESPONSE\\n    from \\n        "chatgpt"\\n    '''\\n\\nprint(hello("world")) # ['Hello! How can I assist you today?']\\n
    \\n

    If you instead want to use lmql.run in a synchronous context, you can now use lmql.run_sync instead. To learn more about how LMQL can be used from Python, check out our documentation.

    \\n
  • \\n
  • \\n

    Improved Tokenizer Backends LMQL can now use the excellent tiktoken tokenizer as tokenization backend (for OpenAI models). Furthermore, all tokenization backends have been ported to operate on a byte-level, which improves support for multibyte characters and emojis. This is especially relevant for non-English languages and special characters.

    \\n
  • \\n
  • \\n

    Docker Image LMQL now provides a Docker image that can be used to run the LMQL playground in a containerized environment. For more information, please see the documentation. Many thanks to @SilacciA for contributing this feature.

    \\n
  • \\n
  • \\n

    Faster Startup Time We optimized LMQL\'s import hierarchy, which results in faster module loading time.

    \\n
  • \\n
\\n","frontmatter":{"date":"2023-06-08T00:00:00.000Z","title":"Releasing LMQL v0.0.6.4 LMTP, Azure, Synchronous API, and more"},"excerpt":"","url":"/blog/posts/release-0.0.6.4.html"},{"src":"---\\ndate: 2023-05-11\\ntitle: LMQL Release v0.0.6.3\\n---\\n\\n# LMQL v0.0.6.3\\n\\nMay 11, 2023\\n\\nToday, we are releasing LMQL v0.0.6.3. This update contains several bug fixes and improvements. The most notable changes are:\\n\\n* **Lighter Runtime** As part of our continued efforts, we made LMQL much lighter (no more mandatory `transformers` dependency). By default LMQL now no longer requires `transformers` or PyTorch. If you rely on local models, just install LMQL via `pip install lmql[hf]` to get full Transformers integration.\\n\\n* **Token Constraints** A new function `TOKENS(...)` was added to the LMQL constraint language, allowing you to specify lower and upper bounds or the exact number of tokens to generate for a given variable.\\n \\n ```{lmql}\\n name::token_constraints\\n argmax \\n \\"A 10 token response[WHO]\\" \\n from \\n \\"openai/text-ada-001\\" \\n where \\n len(TOKENS(WHO)) == 10\\n ```\\n\\n* **Conditional Stopping** `STOPS_AT` can now be combined with additional side conditions. This allows you to specify stopping phrases that are only enforced, once other conditions are met. \\n\\n For example, below, we stop when the generated text hits a newline character, but only if the overall variable output is already at least 10 tokens long.\\n\\n ```{lmql}\\n name::conditional_stopping \\n argmax \\n \\"Hello[WHO]\\" \\n from \\n \\"openai/text-ada-001\\" \\n where \\n len(TOKENS(WHO)) > 10 and STOPS_AT(WHO, \\"\\\\n\\")\\n ```\\n\\n* **lmql.run**: Improved input validation for `lmql.run` as contributed by @lfegray. More specifically, `lmql.run` wil now provide more helpful error messages when client logic does not specify input values for all required query parameters.\\n\\n* **Automatic Cache Invalidation**: LMQL\'s tokenizer cache at `~/.cache/lmql` is now invalidated automatically when upgrading to a new version. This should prevent issues with outdated cache files.\\n\\n> Note: Version 0.0.6.2 was skipped and yanked from pypi.org, as an invalid release was pushed accidentally.","html":"

LMQL v0.0.6.3

\\n

May 11, 2023

\\n

Today, we are releasing LMQL v0.0.6.3. This update contains several bug fixes and improvements. The most notable changes are:

\\n
    \\n
  • \\n

    Lighter Runtime As part of our continued efforts, we made LMQL much lighter (no more mandatory transformers dependency). By default LMQL now no longer requires transformers or PyTorch. If you rely on local models, just install LMQL via pip install lmql[hf] to get full Transformers integration.

    \\n
  • \\n
  • \\n

    Token Constraints A new function TOKENS(...) was added to the LMQL constraint language, allowing you to specify lower and upper bounds or the exact number of tokens to generate for a given variable.

    \\n
    argmax \\n    "A 10 token response[WHO]" \\nfrom \\n    "openai/text-ada-001" \\nwhere \\n    len(TOKENS(WHO)) == 10\\n
    \\n
  • \\n
  • \\n

    Conditional Stopping STOPS_AT can now be combined with additional side conditions. This allows you to specify stopping phrases that are only enforced once other conditions are met.

    \\n

    For example, below, we stop when the generated text hits a newline character, but only if the overall variable output is already at least 10 tokens long.

    \\n
    argmax \\n    "Hello[WHO]" \\nfrom \\n    "openai/text-ada-001" \\nwhere \\n    len(TOKENS(WHO)) > 10 and STOPS_AT(WHO, "\\\\n")\\n
    \\n
  • \\n
  • \\n

    lmql.run: Improved input validation for lmql.run as contributed by @lfegray. More specifically, lmql.run will now provide more helpful error messages when client logic does not specify input values for all required query parameters.

    \\n
  • \\n
  • \\n

    Automatic Cache Invalidation: LMQL\'s tokenizer cache at ~/.cache/lmql is now invalidated automatically when upgrading to a new version. This should prevent issues with outdated cache files.

    \\n
  • \\n
\\n
\\n

Note: Version 0.0.6.2 was skipped and yanked from pypi.org, as an invalid release was pushed accidentally.

\\n
\\n","frontmatter":{"date":"2023-05-11T00:00:00.000Z","title":"LMQL Release v0.0.6.3"},"excerpt":"","url":"/blog/posts/release-0.0.6.3.html"},{"src":"---\\ndate: 2023-05-03\\ntitle: LMQL Release v0.0.6.1\\n---\\n\\n# LMQL v0.0.6.1\\n\\nMay 3, 2023\\n\\nWe released LMQL v0.0.6.1, which contains several bug fixes and improvements. The most notable changes are:\\n\\n* **Cache Layer Bug Fixes** This release contains several fixes and improvements to the recently introduced cache layer.\\n\\n* **Stopping Phrases** Stopping phrases specified via `STOPS_BEFORE` are now passed to the OpenAI API as `\\"stop\\"` parameter, decreasing the number of tokens used for the request. If you want to disable this (e.g. to allow speculative execution), you can specify the new decoder parameter `openai_nonstop=True`.\\n\\n* **Asynchronous Output Writers** All output writers have been refactored to use asynchronous I/O. This should simplify integration with other asynchronous frameworks, e.g. for HTTP or Websocket APIs. We also added a new chapter on [Output Streaming](/docs/lib/output.md) to the documentation.\\n\\n* **Output Writers for HTTP endpoints, WebSockets and Server-Sent Events** Based on the updated output writer interface, we added three new output writers for serving LMQL queries as HTTP endpoints, WebSockets and via Server-Sent Events (SSE). To learn more, check their relatively simple implementations in the new [lmql.output](https://github.com/eth-sri/lmql/tree/main/src/lmql/output) module. We will also provide more documentation on how to use them, e.g. with `aiohttp` in the future.","html":"

LMQL v0.0.6.1

\\n

May 3, 2023

\\n

We released LMQL v0.0.6.1, which contains several bug fixes and improvements. The most notable changes are:

\\n
    \\n
  • \\n

    Cache Layer Bug Fixes This release contains several fixes and improvements to the recently introduced cache layer.

    \\n
  • \\n
  • \\n

    Stopping Phrases Stopping phrases specified via STOPS_BEFORE are now passed to the OpenAI API as "stop" parameter, decreasing the number of tokens used for the request. If you want to disable this (e.g. to allow speculative execution), you can specify the new decoder parameter openai_nonstop=True.

    \\n
  • \\n
  • \\n

    Asynchronous Output Writers All output writers have been refactored to use asynchronous I/O. This should simplify integration with other asynchronous frameworks, e.g. for HTTP or Websocket APIs. We also added a new chapter on Output Streaming to the documentation.

    \\n
  • \\n
  • \\n

    Output Writers for HTTP endpoints, WebSockets and Server-Sent Events Based on the updated output writer interface, we added three new output writers for serving LMQL queries as HTTP endpoints, WebSockets and via Server-Sent Events (SSE). To learn more, check their relatively simple implementations in the new lmql.output module. We will also provide more documentation on how to use them, e.g. with aiohttp in the future.

    \\n
  • \\n
\\n","frontmatter":{"date":"2023-05-03T00:00:00.000Z","title":"LMQL Release v0.0.6.1"},"excerpt":"","url":"/blog/posts/release-0.0.6.1.html"},{"src":"---\\ndate: 2023-05-01\\ntitle: Releasing the LMQL Caching Layer (v0.0.6)\\n---\\n\\n# Releasing the LMQL Caching Layer (v0.0.6)\\n\\nMay 1, 2023\\n\\nToday we are releasing LMQL 0.0.6, the first version of LMQL that integrates the *LMQL Caching Layer*. The caching layer can drastically reduce token use of LLM interaction, lowering both the cost and latency of running queries. In this blog post, we provide a quick overview of the caching layer and demonstrate how it can reduce token use, latency and the number of requests needed to run queries by up to 80%. We observe improvements across a wide range of different scenarios, including **template-based queries, long-form constraints and tool augmentation.**\\n\\nYou can experiment with LMQL in the browser-based [Playground IDE](http://lmql.ai/playground) or install the latest version locally, via `pip install lmql`.\\n\\n## Caching Layer\\n\\nThe caching layer is implemented as a **tree-based data structure** that caches all model output including logits, tokens, and metadata, allowing the runtime to more efficiently explore the token space of an LLM, even in the presence of multiple variables, constraints and tool augmentation. The cache can be considered an append-only tree, that is explored during query execution, expanding branches according to query code, constraints and speculative execution.\\n\\nTo illustrate the effect of a caching layer, we consider the following example scenarios, all of which now run in a fraction of the time and with a fraction of the tokens needed with traditional querying methods.\\n\\n### Template-Based Queries \\n\\nWhen specifying a prompt template with multiple variables to fill in, an LLM typically needs to be invoked once per variable. For instance, consider the following template that guides an LLM in generating a list of things:\\n```{lmql}\\nname::list-of-things-speculative\\nargmax\\n \\"A list of things not to forget when going to the sea (not travelling): \\\\n\\"\\n \\"- Sunglasses \\\\n\\"\\n \\"-[THING]\\"\\n \\"-[THING]\\"\\n \\"-[THING]\\"\\n \\"-[THING]\\"\\nfrom\\n \'openai/text-ada-001\'\\nwhere\\n STOPS_AT(THING, \\"\\\\n\\")\\n```\\n**Without Caching:** Tokens: 390, Requests: 4 | **With Caching Layer:** Tokens: 89 (-77%), Requests: 1 (-75%)\\n\\nHere, the LLM typically needs to be invoked 4 times, once per `[THING]` variable. On each call, this incurs a token and latency cost (both with OpenAI and local models). Separate calls are needed, because our template dictates the `-` token to be inserted before each `[THING]`. \\n\\nWith the caching layer, LMQL can now invoke the LLM only once, and fill in all variables with the resulting tokens, as long as the LLM output already aligns naturally with your template. In case the LLM result of the initial invocation at some point no longer aligns with the template, LMQL will automatically re-invoke the LLM from this point on, guaranteeing an overall consistent result that is already parsed into separate `[THING]` variables.\\n\\n### Short-Circuiting Long Constraints\\n\\nWhen you specify long constraints like `A in [\\"ABCDE\\", \\"FGHIJK\\"]`, the LMQL runtime guides the LLM to choose one of the provided options and then continues enforcing the sequence until the chosen values is fully decoded. 
To illustrate, consider the following query:\\n```{lmql}\\nname::long-form-constraints-speculative\\nargmax\\n \\"If we have the choice we choose[OPTION]\\"\\nfrom \\n \\"openai/text-ada-001\\"\\nwhere\\n OPTION in [\\"Option A with a whole lot of extra context\\", \\n \\"Option B with context\\", \\n \\"Another Option, also with a lot of additional text\\"\\n ]\\n```\\n```promptdown\\nIf we have the choice we choose [OPTION|Option A with a whole lot of extra context]\\n```\\n**Without Caching:** Tokens: 123, Requests: 9 | **With Caching Layer:** Tokens: 25 (-80%), Requests: 2 (-78%)\\n\\nHere, after the LLM has produced `\\"Option\\"` and then `\\" A\\"`, LMQL short-circuits further model calls and automatically completes the resulting sequence to `\\"Option A with a whole lot of extra context\\"`. This is possible because once `Option A` has been predicted, the remaining tokens are fully determined by the constraints.\\n\\n### Tool-Augmented Queries\\n\\nLastly, we consider tool augmented queries. LLM agents and tool augmentation are very powerful paradigms, that allow LLMs to incorporate external knowledge and reasoning into their predictions. However, this comes at a cost: On each tool invocation, the LLM needs to be re-invoked to continue decoding after the tool output has been inserted. This impacts both the token cost and latency of running queries, as many requests have to be send forth and back between the LLM and the tool.\\n\\nAs an example, consider the following query that augments an LLM with the ability to use a key-value storage, [also runnable in the browser-based LMQL Playground](http://lmql.ai/playground?snippet=kv).\\n\\n
\\n\\n \\"Key-Storage\\n\\n
\\n\\n**Without Caching:** Tokens: 5,162, Requests: 12 | **With Caching Layer:** Tokens: 3,481 (-33%), Requests: 8 (-33%)\\n\\nHere, whenever the LLM produces an action relating to our key-value storage, we invoke a tool that handles the storage and return the result (to `assign` and `get` stored values). The result of each tool invocation is then inserted into the LLM output, and the LLM is re-invoked to continue decoding.\\n\\nWe count 10 tool interactions which results in 12 requests if we run without caching. However, using the new caching layer, we can reduce this to 8 requests, even undercutting the number of tool interactions. This is possible because the caching layer will not abort LLM generation, if the LLM already correctly predicts the tool output. \\n\\nThis scenario demonstrates that the natural ability of LLMs to complete sequences can be leveraged to reduce the number of tool interactions, by relying on speculative execution.\\n\\n## Persisting the Cache\\n\\nOf course, the in-memory cache of the LMQL runtime can also be persisted to disk, allowing you to reuse the cache tree across multiple queries, automatically reducing token cost and latency. In some cases this can even be used to reduce the number of requests to the LLM to 0, e.g. if the cache already contains the desired result. \\n\\nTo do so, you can simply specify a `cache=\\"file.tokens\\"` parameter in your query code:\\n\\n```{lmql}\\nname::joke-with-cache\\nargmax(cache=\\"joke.tokens\\")\\n \\"\\"\\"A good dad joke. A indicates the punchline\\n Q:[JOKE]\\n A:[PUNCHLINE]\\"\\"\\"\\nfrom\\n \\"openai/text-davinci-003\\"\\nwhere\\n len(JOKE) < 120 and \\n STOPS_AT(JOKE, \\"?\\") and \\n STOPS_AT(PUNCHLINE, \\"\\\\n\\") and \\n len(PUNCHLINE) > 1\\n```\\n\\nThe first successful run of this query will persist the cache to `joke.tokens`. Subsequent runs will then automatically load the cache from disk, and only invoke the LLM if the cache does not contain a match. This also works for queries whose underlying LLM requests only partially overlap, as the tree-based cache data structure will automatically identify matching subtrees.\\n\\n**Caching During Query Development**: Persisting the cache can be particularly useful during query development, as it allows you to reuse the cache across multiple runs of the same query. A persistent cache will reduce token cost and latency of your query, even if you slightly change the query between runs.\\n\\n## Caveats and Disabling the Cache\\n\\nYou can disable the caching layer by specifying `cache=False` in your query code. This will cause the LMQL runtime to always invoke the LLM, and never use the cache. This is useful for debugging purposes, or if you want to ensure that the LLM is always invoked.\\n\\nFurther, as the cache currently is implemented as an append-only data structure, it will grow indefinitely. This may be problematic for long-running applications, as the cache will eventually grow to relatively large sizes. In the future, we plan to implement simple strategies to limit the cache size, such as a least-recently-used eviction policy.\\n\\n## Conclusion\\n\\nIn this post, we introduced the new caching layer of the LMQL runtime, which allows you to reduce the token cost and latency of your queries by reusing previously generated LLM outputs. We demonstrated how the caching layer can be used to reduce the number of LLM invocations in a variety of scenarios, including long constraints, short-circuiting, and tool-augmented queries. 
We also showed how the cache can be persisted to disk, allowing you to reuse the cache across multiple queries.\\n\\nTo learn more about LMQL please also check out our [documentation](/docs), or join our [Discord](https://discord.gg/2Y3Wz2Q) to chat with us directly. We are looking forward to hearing from you!","html":"

Releasing the LMQL Caching Layer (v0.0.6)

\\n

May 1, 2023

\\n

Today we are releasing LMQL 0.0.6, the first version of LMQL that integrates the LMQL Caching Layer. The caching layer can drastically reduce token use of LLM interaction, lowering both the cost and latency of running queries. In this blog post, we provide a quick overview of the caching layer and demonstrate how it can reduce token use, latency and the number of requests needed to run queries by up to 80%. We observe improvements across a wide range of different scenarios, including template-based queries, long-form constraints and tool augmentation.

\\n

You can experiment with LMQL in the browser-based Playground IDE or install the latest version locally, via pip install lmql.

\\n

Caching Layer

\\n

The caching layer is implemented as a tree-based data structure that caches all model output including logits, tokens, and metadata, allowing the runtime to more efficiently explore the token space of an LLM, even in the presence of multiple variables, constraints and tool augmentation. The cache can be considered an append-only tree that is explored during query execution, expanding branches according to query code, constraints and speculative execution.

\\n

To illustrate the effect of a caching layer, we consider the following example scenarios, all of which now run in a fraction of the time and with a fraction of the tokens needed by traditional querying methods.

\\n

Template-Based Queries

\\n

When specifying a prompt template with multiple variables to fill in, an LLM typically needs to be invoked once per variable. For instance, consider the following template that guides an LLM in generating a list of things:

\\n
argmax\\n    "A list of things not to forget when going to the sea (not travelling): \\\\n"\\n    "- Sunglasses \\\\n"\\n    "-[THING]"\\n    "-[THING]"\\n    "-[THING]"\\n    "-[THING]"\\nfrom\\n    'openai/text-ada-001'\\nwhere\\n    STOPS_AT(THING, "\\\\n")\\n
\\n

Without Caching: Tokens: 390, Requests: 4 | With Caching Layer: Tokens: 89 (-77%), Requests: 1 (-75%)

\\n

Here, the LLM typically needs to be invoked 4 times, once per [THING] variable. On each call, this incurs a token and latency cost (both with OpenAI and local models). Separate calls are needed, because our template dictates the - token to be inserted before each [THING].

\\n

With the caching layer, LMQL can now invoke the LLM only once, and fill in all variables with the resulting tokens, as long as the LLM output already aligns naturally with your template. In case the LLM result of the initial invocation at some point no longer aligns with the template, LMQL will automatically re-invoke the LLM from this point on, guaranteeing an overall consistent result that is already parsed into separate [THING] variables.

\\n

Short-Circuiting Long Constraints

\\n

When you specify long constraints like A in ["ABCDE", "FGHIJK"], the LMQL runtime guides the LLM to choose one of the provided options and then continues enforcing the sequence until the chosen value is fully decoded. To illustrate, consider the following query:

\\n
argmax\\n    "If we have the choice we choose[OPTION]"\\nfrom \\n    "openai/text-ada-001"\\nwhere\\n    OPTION in ["Option A with a whole lot of extra context", \\n        "Option B with context", \\n        "Another Option, also with a lot of additional text"\\n    ]\\n
\\n
promptdown

If we have the choice we choose [OPTION|Option A with a whole lot of extra context]\\n

\\n

Without Caching: Tokens: 123, Requests: 9 | With Caching Layer: Tokens: 25 (-80%), Requests: 2 (-78%)

\\n

Here, after the LLM has produced "Option" and then " A", LMQL short-circuits further model calls and automatically completes the resulting sequence to "Option A with a whole lot of extra context". This is possible because once Option A has been predicted, the remaining tokens are fully determined by the constraints.

\\n

Tool-Augmented Queries

\\n

Lastly, we consider tool-augmented queries. LLM agents and tool augmentation are very powerful paradigms that allow LLMs to incorporate external knowledge and reasoning into their predictions. However, this comes at a cost: on each tool invocation, the LLM needs to be re-invoked to continue decoding after the tool output has been inserted. This impacts both the token cost and latency of running queries, as many requests have to be sent back and forth between the LLM and the tool.

\\n

As an example, consider the following query that augments an LLM with the ability to use a key-value storage, also runnable in the browser-based LMQL Playground.

\\n
\\n\\n \\"Key-Storage\\n\\n
\\n

Without Caching: Tokens: 5,162, Requests: 12 | With Caching Layer: Tokens: 3,481 (-33%), Requests: 8 (-33%)

\\n

Here, whenever the LLM produces an action relating to our key-value storage, we invoke a tool that handles the storage and returns the result (to assign and get stored values). The result of each tool invocation is then inserted into the LLM output, and the LLM is re-invoked to continue decoding.

\\n

We count 10 tool interactions, which results in 12 requests if we run without caching. However, using the new caching layer, we can reduce this to 8 requests, even undercutting the number of tool interactions. This is possible because the caching layer will not abort LLM generation if the LLM already correctly predicts the tool output.

\\n

This scenario demonstrates that the natural ability of LLMs to complete sequences can be leveraged to reduce the number of tool interactions, by relying on speculative execution.

\\n

Persisting the Cache

\\n

Of course, the in-memory cache of the LMQL runtime can also be persisted to disk, allowing you to reuse the cache tree across multiple queries, automatically reducing token cost and latency. In some cases this can even be used to reduce the number of requests to the LLM to 0, e.g. if the cache already contains the desired result.

\\n

To do so, you can simply specify a cache="file.tokens" parameter in your query code:

\\n
argmax(cache="joke.tokens")\\n   """A good dad joke. A indicates the punchline\\n   Q:[JOKE]\\n   A:[PUNCHLINE]"""\\nfrom\\n   "openai/text-davinci-003"\\nwhere\\n   len(JOKE) < 120 and \\n   STOPS_AT(JOKE, "?") and \\n   STOPS_AT(PUNCHLINE, "\\\\n") and \\n   len(PUNCHLINE) > 1\\n
\\n

The first successful run of this query will persist the cache to joke.tokens. Subsequent runs will then automatically load the cache from disk, and only invoke the LLM if the cache does not contain a match. This also works for queries whose underlying LLM requests only partially overlap, as the tree-based cache data structure will automatically identify matching subtrees.

\\n

Caching During Query Development: Persisting the cache can be particularly useful during query development, as it allows you to reuse the cache across multiple runs of the same query. A persistent cache will reduce token cost and latency of your query, even if you slightly change the query between runs.

\\n

Caveats and Disabling the Cache

\\n

You can disable the caching layer by specifying cache=False in your query code. This will cause the LMQL runtime to always invoke the LLM, and never use the cache. This is useful for debugging purposes, or if you want to ensure that the LLM is always invoked.

\\n

Further, as the cache currently is implemented as an append-only data structure, it will grow indefinitely. This may be problematic for long-running applications, as the cache will eventually grow to relatively large sizes. In the future, we plan to implement simple strategies to limit the cache size, such as a least-recently-used eviction policy.

\\n

Conclusion

\\n

In this post, we introduced the new caching layer of the LMQL runtime, which allows you to reduce the token cost and latency of your queries by reusing previously generated LLM outputs. We demonstrated how the caching layer can be used to reduce the number of LLM invocations in a variety of scenarios, including long constraints, short-circuiting, and tool-augmented queries. We also showed how the cache can be persisted to disk, allowing you to reuse the cache across multiple queries.

\\n

To learn more about LMQL please also check out our documentation, or join our Discord to chat with us directly. We are looking forward to hearing from you!

\\n","frontmatter":{"date":"2023-05-01T00:00:00.000Z","title":"Releasing the LMQL Caching Layer (v0.0.6)"},"excerpt":"","url":"/blog/posts/release-0.0.6.html"},{"src":"---\\ndate: 2023-04-17\\ntitle: LMQL Release 0.0.5\\n---\\n\\n# LMQL Release 0.0.5\\n\\nApril 17, 2023\\n\\nToday we are releasing version 0.0.5 of LMQL. This release focuses on stability and performance improvements. For a detailed list of changes, please see below. We are particularly excited about the first community contributions that have been merged as part of this release, with many more in the works.\\n\\n`lmql==0.0.5` has been published on [PyPI](https://pypi.org/project/lmql/), based the current `main` branch of the [GitHub repository](https://github.com/eth-sri/lmql). The updated version has also been deployed to the browser-based [lmql.ai/playground](http://lmql.ai/playground).\\n\\n### Changelog\\n\\n* **Decoder Performance** The `argmax` and `sample` decoders have undergone some optimizations, allowing them to run faster. This results in a *20-30% speed-up* on common query workloads. [#24](https://github.com/eth-sri/lmql/pull/24).\\n\\n* **Postprocessing Semantics** Internally, LMQL now allows constraints to implement postprocessing semantics. This is used to convert variable values after they have been completed, to a more normalized form in the prompt, and to a semantically meaningful data type in the context of the query code. [#24](https://github.com/eth-sri/lmql/pull/24). \\n\\n For example, when using an `INT()` constraint on a generated number, the model will be restricted to only generate valid integers, and now, the resulting `NUM` value will additionally be converted to an `int` value:\\n\\n ```\\n argmax\\n \\"My favorite number is: [NUM]\\\\n\\"\\n print(type(NUM), NUM * 2) # 4\\n \\"Number times two is {NUM * 2}\\"\\n from\\n \'openai/text-ada-001\'\\n where\\n INT(NUM) \\n ```\\n\\n* **Core Interpreter** A complete reimplementation of the LMQL core interpreter has been completed. This fixes a couple of minor issues and overall, improves reliability and performance when dealing with *branching* decoding algorithms. [#24](https://github.com/eth-sri/lmql/pull/24).\\n\\n\\n* **Playground** Locally and when used in-browser, the [LMQL Playground](http://lmql.ai/playground) now *streams debugger information* from the LMQL interpreter incrementally. This leads to speed-ups when running in the Playground, especially with longer outputs. [#27f9a8ad](https://github.com/eth-sri/lmql/commit/27f9a8adb819f732608ef61c9aca9dca579dc536).\\n\\n\\n* **Other Fixes**:\\n - When used from within Python (as decorated function), LMQL code no longer has to be doubly-escaped, e.g. you can now write `STOPS_AT(VAR, \\"\\\\n\\")` instead of `STOPS_AT(VAR, \\"\\\\\\\\n\\")`\\n - The LMQL inference API buffers requests that come in during startup, to avoid errors when the server is not yet ready. [#15](https://github.com/eth-sri/lmql/pull/15), thanks to [@chrispan](https://github.com/chrispan).\\n - OpenAI request parallelization no longer leads to an error on Linux systems, with regards to worker processes [#6](https://github.com/eth-sri/lmql/issues/6).\\n\\n### Preview\\n\\nApart from the changes above, we are also working on a number of other features, including:\\n\\n* **llama.cpp support** as started in [this PR](https://github.com/eth-sri/lmql/pull/18), thanks to [@CircArgs](https://github.com/CircArgs).\\n* Support for **Type Constraints**, e.g. 
`type(VAR) is DataClass`, that automatically force the model to produce a value that structurally conforms to the given type. See this [Twitter thread](https://twitter.com/lbeurerkellner/status/1646187597901733889) for more details.\\n* Support for using **Antlr parsers** during query execution, to force the model to produce a value that conforms to a given grammar. \\n\\n* **Extending Logit Masking to OpenAI Chat Models**. This will enable full support for LMQL constraints with e.g. `chatgpt` and `gpt-4` models. See [#25](https://github.com/eth-sri/lmql/pull/25), thanks to [@kharvd](https://github.com/kharvd).","html":"

LMQL Release 0.0.5

\\n

April 17, 2023

\\n

Today we are releasing version 0.0.5 of LMQL. This release focuses on stability and performance improvements. For a detailed list of changes, please see below. We are particularly excited about the first community contributions that have been merged as part of this release, with many more in the works.

\\n

lmql==0.0.5 has been published on PyPI, based on the current main branch of the GitHub repository. The updated version has also been deployed to the browser-based lmql.ai/playground.

\\n

Changelog

\\n
    \\n
  • \\n

    Decoder Performance The argmax and sample decoders have undergone some optimizations, allowing them to run faster. This results in a 20-30% speed-up on common query workloads. #24.

    \\n
  • \\n
  • \\n

    Postprocessing Semantics Internally, LMQL now allows constraints to implement postprocessing semantics. This is used to convert variable values after they have been completed, to a more normalized form in the prompt, and to a semantically meaningful data type in the context of the query code. #24.

    \\n

    For example, when using an INT(<var>) constraint on a generated number, the model will be restricted to only generate valid integers, and now, the resulting NUM value will additionally be converted to an int value:

    \\n
    argmax\\n   "My favorite number is: [NUM]\\\\n"\\n   print(type(NUM), NUM * 2) # <class 'int'> 4\\n   "Number times two is {NUM * 2}"\\nfrom\\n   'openai/text-ada-001'\\nwhere\\n   INT(NUM) \\n
    \\n
  • \\n
  • \\n

    Core Interpreter The LMQL core interpreter has been completely reimplemented. This fixes a couple of minor issues and, overall, improves reliability and performance when dealing with branching decoding algorithms. #24.

    \\n
  • \\n
  • \\n

    Playground Locally and when used in-browser, the LMQL Playground now streams debugger information from the LMQL interpreter incrementally. This leads to speed-ups when running in the Playground, especially with longer outputs. #27f9a8ad.

    \\n
  • \\n
  • \\n

    Other Fixes:

    \\n
      \\n
    • When used from within Python (as decorated function), LMQL code no longer has to be doubly-escaped, e.g. you can now write STOPS_AT(VAR, "\\\\n") instead of STOPS_AT(VAR, "\\\\\\\\n")
    • \\n
    • The LMQL inference API buffers requests that come in during startup, to avoid errors when the server is not yet ready. #15, thanks to @chrispan.
    • \\n
    • OpenAI request parallelization no longer leads to an error related to worker processes on Linux systems (#6).
    • \\n
    \\n
  • \\n
\\n

Preview

\\n

Apart from the changes above, we are also working on a number of other features, including:

\\n
    \\n
  • \\n

    llama.cpp support as started in this PR, thanks to @CircArgs.

    \\n
  • \\n
  • \\n

    Support for Type Constraints, e.g. type(VAR) is DataClass, that automatically force the model to produce a value that structurally conforms to the given type. See this Twitter thread for more details.

    \\n
  • \\n
  • \\n

    Support for using Antlr parsers during query execution, to force the model to produce a value that conforms to a given grammar.

    \\n
  • \\n
  • \\n

    Extending Logit Masking to OpenAI Chat Models. This will enable full support for LMQL constraints with e.g. chatgpt and gpt-4 models. See #25, thanks to @kharvd.

    \\n
  • \\n
\\n","frontmatter":{"date":"2023-04-17T00:00:00.000Z","title":"LMQL Release 0.0.5"},"excerpt":"","url":"/blog/posts/release-0.0.5.html"}]');const h={class:"posts"},d={class:"post"},u=["href"],m=["innerHTML"],v=JSON.parse('{"title":"Blog","description":"","frontmatter":{"title":"Blog","layout":"doc","aside":false,"outline":false},"headers":[],"relativePath":"blog/index.md","filePath":"blog/index.md"}'),g={name:"blog/index.md"},f=Object.assign(g,{setup(y){function b(s){return s}return(s,w)=>(a(),t("div",null,[(a(!0),t(r,null,i(l(p),n=>(a(),t("div",h,[e("div",d,[e("a",{href:n.url},[e("h1",null,c(n.frontmatter.title),1)],8,u),e("div",{class:"body",innerHTML:n.html},null,8,m)])]))),256))]))}}),q=o(f,[["__scopeId","data-v-61c06c99"]]);export{v as __pageData,q as default}; +import{_ as o,o as a,c as t,F as r,D as i,l,k as e,t as c}from"./chunks/framework.980cae92.js";const p=JSON.parse('[{"src":"---\\ndate: 2024-02-14 10:10:00\\ntitle: LMQL Developer Survey\\n---\\n\\n# LMQL Developer Survey\\n\\n\\nFebruary 14, 2024\\n\\n\\"image\\"\\n\\nWe have started a new initiative called the **LMQL developer survey**. With this short survey we have the goal of learning more from everyone around the LMQL and the bigger LLM community. We are looking for some broader feedback signals of how and what people are using LMQL for or would like to use it for. \\n\\nThe outcome of this survey will help shape our work around the next major version of LMQL.\\n\\nYou can find the survey here: https://forms.gle/pGvAicNpUhS1rAkK9.\\n","html":"

LMQL Developer Survey

\\n

February 14, 2024

\\n\\"image\\"\\n

We have started a new initiative called the LMQL developer survey. With this short survey we have the goal of learning more from everyone around the LMQL and the bigger LLM community. We are looking for some broader feedback signals of how and what people are using LMQL for or would like to use it for.

\\n

The outcome of this survey will help shape our work around the next major version of LMQL.

\\n

You can find the survey here: https://forms.gle/pGvAicNpUhS1rAkK9.

\\n","frontmatter":{"date":"2024-02-14T10:10:00.000Z","title":"LMQL Developer Survey"},"excerpt":"","url":"/blog/posts/developer-survey.html"},{"src":"---\\ndate: 2023-10-10 10:10:00\\ntitle: LMQL 0.7 brings Procedural Prompt Programming\\n---\\n\\n# LMQL 0.7 brings Procedural Prompt Programming\\n\\nOctober 10, 2023\\n\\nToday, we are releasing LMQL 0.7. This series is the biggest update since the original release, including many community contributions. Next to several new main-line features like nested queries, the Generations API and the Chat API, it also includes several *experimental preview features*, allowing you to experiment with new incoming functionality before it is fully released.\\n\\nLMQL 0.7 has also moved to [semantic versioning](https://semver.org) with the direct predecessor being 0.0.6.6. This means that the next feature release will be 0.8, and the next bugfix release will be 0.7.1.\\n\\n## Nested Queries for Procedural Prompt Programming\\n\\nIn 0.7, you can now use [Nested Queries](../../docs/language/nestedqueries.md) to call an LMQL query as a nested function in the context of another query. For this, LMQL implements procedural programming for prompting. To illustrate, consider the following example:\\n\\n```lmql\\n# chain of thought prompting strategy\\n@lmql.query\\ndef chain_of_thought():\\n \'\'\'lmql\\n \\"A: Let\'s think step by step.\\\\n [REASONING]\\"\\n \\"Therefore the answer is[ANSWER]\\" where STOPS_AT(ANSWER, \\".\\")\\n return ANSWER.strip()\\n \'\'\'\\n\\n# top-level query\\n\\"Q: It is August 12th, 2020. What date was it \\\\\\n 100 days ago? [ANSWER: chain_of_thought]\\"\\n\\nANSWER # May 4th, 2020\\n```\\n\\nWe first define a simple LMQL function `chain_of_thought` to do *chain-of-thought prompting*. In our top-level query, we can then call this function to decode an answer using the `[ANSWER: chain_of_thought]` syntax. During execution, LMQL then inserts the instructions and constraints from `chain_of_thought` into the top-level query, generates a value for `ANSWER`, and then removes the instructions and constraints again, only returning the final result.\\n\\n**Nested queries are Prompt Function Calls.** This design of nested queries is inspired by the idea of *function or procedure calls* in traditional programming. Removing intermediate instructions and constraints also has parallels to the idea of *stack unwinding*, a technique to implement function calls in low-level languages. \\n\\nLMQL transfers these ideas to prompting, inheriting the general benefits of procedural programming:\\n\\n- **Encapsulation and Model Focus** Nested Queries encapsulate and hide the prompting logic used to generate `ANSWER`, which means our top-level query is much cleaner and more concise. Further, by hiding intermediate instructions from the model in the context of the top-level query, we can reduce noise in the overall prompt, allowing the model to focus on the currently relevant information only, and not get distracted by previous intermediate steps.\\n\\n- **Nesting and Reuse** LMQL queries can be nested arbitrarily deep, allowing you to reuse and combine queries modularly. For example, you could define a query `get_year` to extract a year from the response text, and then use this query in `chain_of_thought` to extract the date from the question. 
By achieving modularity for sub-prompts, nested queries also allow you to reuse prompts across different query programs.\\n\\nTo learn more about nested queries, please refer to the [relevant chapter in the documentation](../../docs/language/nestedqueries.md).\\n\\n## Generations API\\n\\nLMQL 0.7 adds the *Generations API*, a lightweight high-level library for LMQL-based text generation and scoring. The API was designed to be easy to use and does not require users to write any LMQL themselves:\\n\\n```python\\n# obtain a model instance\\nm: lmql.LLM = lmql.model(\\"openai/gpt-3.5-turbo-instruct\\")\\n# simple generation\\nm.generate_sync(\\"Hello\\", max_tokens=10)\\n# -> Hello, I am a 23 year old female.\\n```\\n
\\n\\nFunctions such as [`LLM.generate`](../../docs/lib/generations.html#llm-generate) and [`LLM.score`](../../docs/lib/generations.html#llm-score) allow you to generate and score text using any LMQL-support inference backend. The Generations API is also seamlessly compatible with standard LMQL, allowing you to switch and combine the two as needed. \\n\\nFor more information, please refer to the [documentation](../../docs/lib/generations.html).\\n\\n## Chat \\n\\nLMQL 0.7 adds a new [Chat API](../../docs/lib/chat.md), allowing you to easily deploy chatbots with just a couple lines of LMQL.\\n\\n\\n\\nLMQL Chat comes with custom output writers, that allow you to easily stream chatbot input and output over a variety of channels, including WebSockets, HTTP, and SSE. A simple `lmql chat` CLI tool was also added, that allows you to instantly launch your LMQL queries as fully interactive chatbots. \\n\\nWe also provide documentation resources on how to get started with chatbot development with LMQL, including chapters on Chatbot Serving, Internal Reasoning and Defending against Prompt Injection. For more information, please refer to the [documentation](../../docs/lib/chat.md).\\n\\n## Backends\\n\\nLMQL 0.7 ships with three new backends for inference and tokenization:\\n\\n* LMQL 0.7 adds support for OpenAI\'s newly released `gpt-3.5-turbo-instruct` model. In contrast to other 3.5 series models, this variant supports the *Completions API*, which means that LMQL constraints are compatible with it.\\n\\n* LMQL now supports hosting models on [replicate.com](https://replicate.com) infrastructure, allowing you to run LMQL models in the cloud. To learn more, please refer to the [documentation](../../docs/models/replicate.md). Thanks a lot to community member [@charles-dyfis-net](https://github.com/charles-dyfis-net) for contributing this!\\n\\n* LMQL added `sentencepiece` as an additional tokenization backend, specifically for `llama.cpp` models. This means, `llama.cpp` models can now be used without requiring `transformers` for tokenization. Thanks a lot to community member [@khushChopra](https://github.com/khushChopra) for contributing this.\\n\\n\\n## Inference Certificates\\n\\nTo make LLM inference more transparent and re-producible, LMQL 0.7 also adds [*inference certificates*](../../docs/lib/inference-certificates.md). An inference certificate is a simple data structure that records essential information needed to reproduce an inference result. Certificates can be generated for any LLM call that happens in an LMQL context.\\n\\nTo produce an inference certificate, pass `certificate=True` or `certificate=` to your query or generate call:\\n\\n```truncated\\n# call and save certificate\\nsay_hello(certificate=\\"my-certificate.json\\")\\n```\\n\\nThe resulting certificate file provides a way to document, trace and reproduce LLM inference results by recording the *exact (tokenized) prompts* and information on the *environment and generation parameters*.\\n\\nThis can be helpful to better understand what is happening during inference, to debug issues, and to reproduce results. 
It also offers a way to document LLM failures, to better guide the discussion around the concrete capabilities and limitations of LLMs.\\n\\n## Decorators\\n\\n[Variable Decorators](../../docs/language/decorators.md) offer a new and simple way to call custom Python functions as part of the core generation loop in LMQL:\\n\\n```lmql\\ndef screaming(value):\\n \\"\\"\\"Decorator to convert a string to uppercase\\"\\"\\"\\n return value.upper()\\n\\n\\"Say \'this is a test\':[@screaming TEST]\\"\\n```\\n```promptdown\\nSay \'this is a test\': [TEST| THIS IS A TEST]\\n```\\n\\nSimilar to Python decorators, LMQL decorators are functions that take a variable as input and can wrap and modify its value. \\n\\nIn the example above, we use the `@screaming` decorator to convert the value of `TEST` to uppercase. Decorators can be used to implement a wide range of custom functionality, including string normalization, datatype conversion, and more. LMQL also provides decorators that allow to stream or pre-process data during generation. For more information, please refer to the [documentation](../../docs/language/decorators.md).\\n\\n\\n## Documentation Update\\n\\nThe website and many chapters of the LMQL documentation have also been updated and extended and now include more examples and explanations. We have updated the visual design to make it easier to read and navigate. \\n\\nThe documentation now also includes a *work-in-progress* [Language Reference](/docs/language/reference.md), which aims to provide a more comprehensive and formal description of LMQL\'s syntax and semantics, all in one place.\\n\\n## Preview Features\\n\\nApart from many new core features, LMQL 0.7 also ships with several *experimental preview features*, allowing you to test drive new functionality before it has fully stabilized and is released as main-line functionality.\\n\\nThese features are marked as *experimental* and are not yet fully supported. We are releasing them to gather feedback and to allow users to test them out early on. Note that these features are subject to change and may be removed/modified in future releases.\\n\\n### LMQL Actions Preview\\n\\n*LMQL Actions* is the first version of LMQL\'s function calling layer. It allows you to expose arbitrary Python functions to the LLM reasoning loop and lets the model call them during generation. Function demonstration and the calling protocol can be both handled automatically by the LMQL runtime, allowing for simple use like this:\\n\\n```{lmql}\\ndef wiki(q): ...\\ndef calc(expr): ...\\n\\n\\"Q: What is the population of the US and Germany combined?\\"\\n\\"A: [REASONING]\\" where inline_use(REASONING, [wiki, calc])\\n```\\n\\nA future release will bring more documentation and details on Actions, including how to use and customize it for your use cases. Until then we invite everyone to try and hack with the current implementation, fully contained in [`actions.py`](https://github.com/eth-sri/lmql/blob/main/src/lmql/lib/actions.py).\\n\\n### Regex Constraints Preview\\n\\nLMQL now has support for regex constraints, allowing you to use regular expressions to constrain the output of a variable. For example, the following query will always generate a valid date of the form `DD/MM`:\\n\\n```{lmql}\\n\\"It\'s the last day of June so today is [RESPONSE]\\" where REGEX(RESPONSE, r\\"[0-9]{2}/[0-9]{2}\\")\\n```\\n\\n### Types / Datatype Constraints Preview\\n\\nLMQL is moving towards fully typed LLM generation. 
On the way there, we have started to add support for *dataclass constraints*, allowing you to constrain the output of a variable to a specific structured output schema:\\n\\n```lmql\\nimport lmql\\nfrom dataclasses import dataclass\\n\\n@dataclass\\nclass Person:\\n name: str\\n age: int\\n job: str\\n\\n\\"Alice is a 21 years old and works as an engineer at LMQL Inc in Zurich, Switzerland.\\\\n\\"\\n\\"Structured: [PERSON_DATA]\\\\n\\" where type(PERSON_DATA) is Person\\n\\nPERSON_DATA\\n# Person(name=\'Alice\', age=21, job=\'engineer\')\\n```\\n\\nTo achieve this, LMQL leverages constrained generation to make sure the LLM always produces all information required to populate a valid `Person` object. The resulting `PERSON_DATA` object can then be directly used like a regular Python object. Types are still in an early stage and we are working on adding more features and functionality. \\n\\n\\n## Other Changes\\n\\n* The LMQL playground can now be used from the Windows `cmd.exe`. Thanks a lot to community member [@mosheduminer](https://github.com/mosheduminer).\\n\\n* LMQL/LMTP model backends can now be accessed [as Langchain `LLM` objects](https://github.com/eth-sri/lmql/blob/main/src/lmql/models/lmtp/lmtp_langchain.py) to use them in your Langchain pipelines. Thanks to [@4onon](https://github.com/4onon) for contributing this. \\n\\n* LMQL can now be [installed as a NixOS package](https://github.com/eth-sri/lmql/tree/main/scripts/flake.d). Thanks to [@charles-dyfis-net](https://github.com/charles-dyfis-net) for contributing this.\\n\\n## 🎬 And that\'s a wrap!\\n\\nLMQL 0.7 is a big release and we are excited to see what you will build with it. As always, please let us know if you have any questions, suggestions or bug reports, on [GitHub](https://github.com/eth-sri/lmql), [Discord](https://discord.gg/7eJP4fcyNT), [Twitter](https://twitter.com/lmqllang) or via [hello@lmql.ai](mailto:hello@lmql.ai).\\n","html":"

LMQL 0.7 brings Procedural Prompt Programming

\\n

October 10, 2023

\\n

Today, we are releasing LMQL 0.7. This series is the biggest update since the original release, including many community contributions. Next to several new main-line features like nested queries, the Generations API and the Chat API, it also includes several experimental preview features, allowing you to experiment with new incoming functionality before it is fully released.

\\n

LMQL 0.7 has also moved to semantic versioning with the direct predecessor being 0.0.6.6. This means that the next feature release will be 0.8, and the next bugfix release will be 0.7.1.

\\n

Nested Queries for Procedural Prompt Programming

\\n

In 0.7, you can now use Nested Queries to call an LMQL query as a nested function in the context of another query. For this, LMQL implements procedural programming for prompting. To illustrate, consider the following example:

\\n
lmql
# chain of thought prompting strategy\\n@lmql.query\\ndef chain_of_thought():\\n    '''lmql\\n    "A: Let's think step by step.\\\\n [REASONING]"\\n    "Therefore the answer is[ANSWER]" where STOPS_AT(ANSWER, ".")\\n    return ANSWER.strip()\\n    '''\\n\\n# top-level query\\n"Q: It is August 12th, 2020. What date was it \\\\\\n    100 days ago? [ANSWER: chain_of_thought]"\\n\\nANSWER # May 4th, 2020\\n
\\n

We first define a simple LMQL function chain_of_thought to do chain-of-thought prompting. In our top-level query, we can then call this function to decode an answer using the [ANSWER: chain_of_thought] syntax. During execution, LMQL then inserts the instructions and constraints from chain_of_thought into the top-level query, generates a value for ANSWER, and then removes the instructions and constraints again, only returning the final result.

\\n

Nested queries are Prompt Function Calls. This design of nested queries is inspired by the idea of function or procedure calls in traditional programming. Removing intermediate instructions and constraints also has parallels to the idea of stack unwinding, a technique to implement function calls in low-level languages.

\\n

LMQL transfers these ideas to prompting, inheriting the general benefits of procedural programming:

\\n
    \\n
  • \\n

    Encapsulation and Model Focus Nested Queries encapsulate and hide the prompting logic used to generate ANSWER, which means our top-level query is much cleaner and more concise. Further, by hiding intermediate instructions from the model in the context of the top-level query, we can reduce noise in the overall prompt, allowing the model to focus on the currently relevant information only, and not get distracted by previous intermediate steps.

    \\n
  • \\n
  • \\n

    Nesting and Reuse LMQL queries can be nested arbitrarily deep, allowing you to reuse and combine queries modularly. For example, you could define a query get_year to extract a year from the response text, and then use this query in chain_of_thought to extract the date from the question. By achieving modularity for sub-prompts, nested queries also allow you to reuse prompts across different query programs.

    \\n
  • \\n
\\n

To learn more about nested queries, please refer to the relevant chapter in the documentation.

\\n

Generations API

\\n

LMQL 0.7 adds the Generations API, a lightweight high-level library for LMQL-based text generation and scoring. The API was designed to be easy to use and does not require users to write any LMQL themselves:

\\n
python
# obtain a model instance\\nm: lmql.LLM = lmql.model("openai/gpt-3.5-turbo-instruct")\\n# simple generation\\nm.generate_sync("Hello", max_tokens=10)\\n# -> Hello, I am a 23 year old female.\\n
\\n

\\n

Functions such as LLM.generate and LLM.score allow you to generate and score text using any LMQL-supported inference backend. The Generations API is also seamlessly compatible with standard LMQL, allowing you to switch and combine the two as needed.

\\n
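To give a feel for the scoring side, here is a minimal sketch (not part of the original post). It assumes the synchronous variants generate_sync and score_sync of the Generations API; the concrete signature of score_sync and the printed output are illustrative assumptions:

```python
import lmql

# obtain a model handle (same pattern as the generation example above)
m: lmql.LLM = lmql.model("openai/gpt-3.5-turbo-instruct")

# score a set of continuations against a shared prompt
# (assumed signature: score_sync(prompt, continuations) returning a scoring result)
result = m.score_sync("The capital of France is", [" Paris", " London"])
print(result)  # per-continuation scores; " Paris" should rank highest
```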

For more information, please refer to the documentation.

\\n

Chat

\\n

LMQL 0.7 adds a new Chat API, allowing you to easily deploy chatbots with just a couple of lines of LMQL.

\\n\\n

LMQL Chat comes with custom output writers that allow you to easily stream chatbot input and output over a variety of channels, including WebSockets, HTTP, and SSE. A simple lmql chat CLI tool was also added that allows you to instantly launch your LMQL queries as fully interactive chatbots.

\\n

We also provide documentation resources on how to get started with chatbot development with LMQL, including chapters on Chatbot Serving, Internal Reasoning and Defending against Prompt Injection. For more information, please refer to the documentation.

\\n

Backends

\\n

LMQL 0.7 ships with three new backends for inference and tokenization:

\\n
    \\n
  • \\n

    LMQL 0.7 adds support for OpenAI\'s newly released gpt-3.5-turbo-instruct model. In contrast to other 3.5 series models, this variant supports the Completions API, which means that LMQL constraints are compatible with it.

    \\n
  • \\n
  • \\n

    LMQL now supports hosting models on replicate.com infrastructure, allowing you to run LMQL models in the cloud. To learn more, please refer to the documentation. Thanks a lot to community member @charles-dyfis-net for contributing this!

    \\n
  • \\n
  • \\n

    LMQL added sentencepiece as an additional tokenization backend, specifically for llama.cpp models. This means llama.cpp models can now be used without requiring transformers for tokenization. Thanks a lot to community member @khushChopra for contributing this.

    \\n
  • \\n
\\n

Inference Certificates

\\n

To make LLM inference more transparent and reproducible, LMQL 0.7 also adds inference certificates. An inference certificate is a simple data structure that records essential information needed to reproduce an inference result. Certificates can be generated for any LLM call that happens in an LMQL context.

\\n

To produce an inference certificate, pass certificate=True or certificate=<filename> to your query or generate call:

\\n
truncated
# call and save certificate\\nsay_hello(certificate="my-certificate.json")\\n
\\n

The resulting certificate file provides a way to document, trace and reproduce LLM inference results by recording the exact (tokenized) prompts and information on the environment and generation parameters.

\\n

This can be helpful to better understand what is happening during inference, to debug issues, and to reproduce results. It also offers a way to document LLM failures, to better guide the discussion around the concrete capabilities and limitations of LLMs.

\\n

Decorators

\\n

Variable Decorators offer a new and simple way to call custom Python functions as part of the core generation loop in LMQL:

\\n
lmql
def screaming(value):\\n    """Decorator to convert a string to uppercase"""\\n    return value.upper()\\n\\n"Say 'this is a test':[@screaming TEST]"\\n
\\n
promptdown

Say \'this is a test\': [TEST| THIS IS A TEST]\\n

\\n

Similar to Python decorators, LMQL decorators are functions that take a variable as input and can wrap and modify its value.

\\n

In the example above, we use the @screaming decorator to convert the value of TEST to uppercase. Decorators can be used to implement a wide range of custom functionality, including string normalization, datatype conversion, and more. LMQL also provides decorators that allow you to stream or pre-process data during generation. For more information, please refer to the documentation.

\\n
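As a further illustration of datatype conversion and normalization, here is a small sketch (not from the original post). It assumes that a decorator's return value replaces the variable's string value, as in the @screaming example, that decorators compose with inline where constraints, and that the normalized helper defined alongside the query is visible from the query's scope:

```python
import lmql

def normalized(value):
    # hypothetical decorator: normalize whitespace and casing of the generated value
    return " ".join(value.split()).title()

@lmql.query
def city():
    '''lmql
    "One of the largest cities in Switzerland is[@normalized CITY]" where len(TOKENS(CITY)) < 10
    return CITY
    '''

print(city())  # e.g. ['Zurich'] — illustrative; like other sync queries, results may come back as a list
```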

Documentation Update

\\n

The website and many chapters of the LMQL documentation have also been updated and extended, and now include more examples and explanations. We have updated the visual design to make it easier to read and navigate.

\\n

The documentation now also includes a work-in-progress Language Reference, which aims to provide a more comprehensive and formal description of LMQL\'s syntax and semantics, all in one place.

\\n

Preview Features

\\n

Apart from many new core features, LMQL 0.7 also ships with several experimental preview features, allowing you to test drive new functionality before it has fully stabilized and is released as main-line functionality.

\\n

These features are marked as experimental and are not yet fully supported. We are releasing them to gather feedback and to allow users to test them out early on. Note that these features are subject to change and may be removed/modified in future releases.

\\n

LMQL Actions Preview

\\n

LMQL Actions is the first version of LMQL\'s function calling layer. It allows you to expose arbitrary Python functions to the LLM reasoning loop and lets the model call them during generation. Function demonstration and the calling protocol can both be handled automatically by the LMQL runtime, allowing for simple use like this:

\\n
def wiki(q): ...\\ndef calc(expr): ...\\n\\n"Q: What is the population of the US and Germany combined?"\\n"A: [REASONING]" where inline_use(REASONING, [wiki, calc])\\n
\\n

A future release will bring more documentation and details on Actions, including how to use and customize it for your use cases. Until then we invite everyone to try and hack with the current implementation, fully contained in actions.py.

\\n

Regex Constraints Preview

\\n

LMQL now has support for regex constraints, allowing you to use regular expressions to constrain the output of a variable. For example, the following query will always generate a valid date of the form DD/MM:

\\n
"It's the last day of June so today is [RESPONSE]" where REGEX(RESPONSE, r"[0-9]{2}/[0-9]{2}")\\n
\\n

Types / Datatype Constraints Preview

\\n

LMQL is moving towards fully typed LLM generation. On the way there, we have started to add support for dataclass constraints, allowing you to constrain the output of a variable to a specific structured output schema:

\\n
lmql
import lmql\\nfrom dataclasses import dataclass\\n\\n@dataclass\\nclass Person:\\n    name: str\\n    age: int\\n    job: str\\n\\n"Alice is a 21 years old and works as an engineer at LMQL Inc in Zurich, Switzerland.\\\\n"\\n"Structured: [PERSON_DATA]\\\\n" where type(PERSON_DATA) is Person\\n\\nPERSON_DATA\\n# Person(name='Alice', age=21, job='engineer')\\n
\\n

To achieve this, LMQL leverages constrained generation to make sure the LLM always produces all information required to populate a valid Person object. The resulting PERSON_DATA object can then be directly used like a regular Python object. Types are still in an early stage and we are working on adding more features and functionality.

\\n
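To make the last sentence concrete, here is a tiny sketch (not part of the original post) of how the resulting object behaves; the field values simply mirror the example above and are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int
    job: str

# stand-in for the PERSON_DATA value produced by the constrained query above
person_data = Person(name="Alice", age=21, job="engineer")

# the result behaves like any regular dataclass instance
print(person_data.name.upper())                          # ALICE
print(person_data.age + 1)                               # 22
print(person_data == Person("Alice", 21, "engineer"))    # True
```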

Other Changes

\\n\\n

🎬 And that\'s a wrap!

\\n

LMQL 0.7 is a big release and we are excited to see what you will build with it. As always, please let us know if you have any questions, suggestions or bug reports, on GitHub, Discord, Twitter or via hello@lmql.ai.

\\n","frontmatter":{"date":"2023-10-10T10:10:00.000Z","title":"LMQL 0.7 brings Procedural Prompt Programming"},"excerpt":"","url":"/blog/posts/release-0.7.html"},{"src":"---\\ndate: 2023-07-25\\ntitle: LMQL v0.0.6.6\\n---\\n\\nJuly 25, 2023\\n\\nWe just released LMQL *0.0.6.6*. This is a minor update with a couple of smaller fixes and improvements.\\n\\n* `lmql.F` now supports positional arguments:\\n\\n```python\\ngreet = lmql.F(\\"Greet {a} and {b}: [GREETING]\\")\\n\\n# call with positional arguments\\ngreet(\\"Alice\\", \\"Bob\\") # Greet Alice and Bob: Hello!\\n# call with keyword arguments\\ngreet(a=\\"Alice\\", b=\\"Bob\\") # Greet Alice and Bob: Hello!\\n```\\n\\n* We improved the error handling of the `llama.cpp` backend and fixed a bug with model identifier parsing. \\n\\n* We also fixed a bug with the LMTP scheduler, where CPU load was high even when no tasks were present. Thanks to community member [@4onen](https://github.com/4onen) for reporting and fixing this!\\n\\n* Added backend support for `auto_gptq` quantized models, contributed by community member [@meditans](https://github.com/meditans).\\n\\n* We fixed an issue where for Azure OpenAI models, a dummy configuration `api.env` was needed. See our [documentation](../../docs/models/azure.md) for details. Thanks to community members Missing and [@hooman-bayer](https://github.com/hooman-bayer) for their feedback and contributions to this.\\n\\n> **Versioning Note**: 0.0.6.6 is the last release with two leading zeros. Starting with the next release, LMQL will adopt semantic versioning and use a single leading zero, i.e. 0.6.7.","html":"

July 25, 2023

\\n

We just released LMQL 0.0.6.6. This is a minor update with a couple of smaller fixes and improvements.

\\n
    \\n
  • lmql.F now supports positional arguments:
  • \\n
\\n
python
greet = lmql.F("Greet {a} and {b}: [GREETING]")\\n\\n# call with positional arguments\\ngreet("Alice", "Bob") # Greet Alice and Bob: Hello!\\n# call with keyword arguments\\ngreet(a="Alice", b="Bob") # Greet Alice and Bob: Hello!\\n
\\n
    \\n
  • \\n

    We improved the error handling of the llama.cpp backend and fixed a bug with model identifier parsing.

    \\n
  • \\n
  • \\n

    We also fixed a bug with the LMTP scheduler, where CPU load was high even when no tasks were present. Thanks to community member @4onen for reporting and fixing this!

    \\n
  • \\n
  • \\n

    Added backend support for auto_gptq quantized models, contributed by community member @meditans.

    \\n
  • \\n
  • \\n

    We fixed an issue where a dummy configuration api.env was needed for Azure OpenAI models. See our documentation for details. Thanks to community members Missing and @hooman-bayer for their feedback and contributions to this.

    \\n
  • \\n
\\n
\\n

Versioning Note: 0.0.6.6 is the last release with two leading zeros. Starting with the next release, LMQL will adopt semantic versioning and use a single leading zero, i.e. 0.6.7.

\\n
\\n","frontmatter":{"date":"2023-07-25T00:00:00.000Z","title":"LMQL v0.0.6.6"},"excerpt":"","url":"/blog/posts/release-0.0.6.6.html"},{"src":"---\\ndate: 2023-07-13\\ntitle: LMQL becomes simpler and adds llama.cpp\\n---\\n\\n# LMQL becomes simpler and adds llama.cpp\\n\\nJuly 13, 2023\\n\\nToday we are releasing LMQL 0.0.6.5. This update contains a major simplification of the LMQL syntax, moving it much closer to standard Python. It also includes a `llama.cpp` based inference backend, several bug fixes and other minor improvements.\\n\\nYou can try the latest version of LMQL in your browser at [lmql.ai/playground](https://lmql.ai/playground) or install it via `pip install lmql`.\\n\\n## One Line Is All It Takes\\n\\nMost notably, 0.0.6.5 comes with several simplifications of the core syntax of LMQL. Of course, all changes are backwards compatible, so you can continue to use your existing query code and move to the new version without any changes.\\n\\nWith this, we aim to minimize syntactic overhead, employing sensible defaults to enable more concise programs like the following:\\n\\n```{lmql}\\nname::simple-syntax\\n\\n\\"One line is all it takes [CONTINUATION]\\"\\n```\\n```promptdown\\nOne line is all it takes [CONTINUATION|Fallin\' in love with me.]\\n```\\n\\n**Sensible Defaults** This is possible because LMQL now automatically assumes `argmax` and `openai/text-davinci-003` as (configurable) default model. If you prefer to use \\na different model or custom decoder settings, you can still specify them explicitly, e.g. in the `@lmql.query` decorator function as demonstrated later in this post.\\n\\nWithout any additional configuration, the simple query code above translates to a full LMQL program like this:\\n\\n```{lmql}\\nname::simple-syntax-default\\n\\nargmax \\"One line is all it takes [CONTINUATION]\\" from \\"openai/text-davinci-003\\"\\n```\\n\\n
\\n\\n### Inline Constraints\\n\\nLMQL now allows you to specify several inline `where` constraints. This enables constraints that refer to local program variables, which means constraints can now be dependent on previous model outputs.\\n\\n```{lmql}\\nname::list-with-array\\n\\n\\"A list of awesome Dua Lipa songs:\\\\n\\"\\nsongs = []\\n\\n\\"- New Rules\\\\n\\"\\nfor i in range(4):\\n \\"-[SONG]\\\\n\\" where STOPS_BEFORE(SONG, \\"\\\\n\\")\\n songs.append(SONG)\\n\\n\\"Out of these, my favorite is[FAVORITE]\\" where FAVORITE in songs\\n```\\n```promptdown\\nA list of awesome Dua Lipa songs:⏎\\n- New Rules\\n- [SONG|Don\'t Start Now]\\n- [SONG|IDGAF]\\n- [SONG|Be the One]\\n- [SONG|Blow Your Mind (Mwah)]\\nOut of these, my favorite is [FAVORITE|Don\'t Start Now]\\n```\\n\\nNote also how in this example LMQL code now reads much more like standard Python code, without any additional level of indentation. \\n\\n
\\n\\n### `@lmql.query` functions\\n\\nThe overhauled syntax also makes LMQL much easier on the eyes when used with the `@lmql.query` [function decorator in Python](/docs/lib/python.md):\\n\\n```python\\nimport lmql\\nimport json\\n\\n@lmql.query(model=\\"openai/text-curie-001\\", temperature=0.9)\\ndef summarize(): \\n \'\'\'lmql\\n \\"\\"\\"\\n Provide a summary of Dua Lipa, the pop icon:\\n {{\\n \\"name\\": \\"[STRING_VALUE]\\",\\n \\"chart_position\\": [INT_VALUE],\\n \\"top_songs\\": [[\\n \\"[STRING_VALUE]\\",\\n \\"[STRING_VALUE]\\"\\n ]]\\n }}\\n \\"\\"\\" where STOPS_BEFORE(STRING_VALUE, \'\\"\') and INT(INT_VALUE) and len(TOKENS(INT_VALUE)) < 3\\n \\n return json.loads(context.prompt.split(\\"pop icon:\\",1)[1])\\n \'\'\'\\n\\nprint(summarize()) # {\'name\': \'Dua Lipa\', \'chart_position\': 3415, \'top_songs\': [\'New Rules\', \'Havana\']}\\n\\n```\\n\\n
\\n\\n### `lmql.F` Lambda Functions\\n\\nBased on LMQL\'s new minimal syntax, we introduce a novel and concise way to write LLM-based lambda functions. This offers a lightweight entryway to get started with integrated small LLM-based utilities in your code, without having to write a full LMQL program.\\n\\n```python\\nimport lmql\\n\\nsummarize = lmql.F(\\"Summarize the following in a few words: {data}: [SUMMARY]\\")\\nmain_subject = lmql.F(\\"What is the main subject (noun) of the following text? {data}: [SUBJECT]\\", \\n \\"len(TOKENS(SUBJECT)) < 20\\")\\n\\ntext = \\"In LMQL, users can specify high-level, logical constraints ...\\"\\n\\nsummarize(data=text) # LMQL enables high-level constraints to be enforced during text \\n # generation, simplifying multi-part prompting and integration.\\nmain_subject(data=text) # Language Model Query Language (LMQL)\\n\\n```\\n\\n
\\n
\\n\\n## `llama.cpp` Inference Backend\\n\\nLMQL now also fully integrates with the excellent [llama.cpp](https://github.com/ggerganov/llama.cpp) C++ implementation of a number of Transformer-based language models. \\n\\nUsing `llama.cpp` from LMQL is as simple as specifying it in the `from` clause of a query:\\n\\n```{lmql}\\nname::llama-cpp-blog\\n\\nargmax \\"Say \'this is a test\':[RESPONSE]\\" from \\"llama.cpp:.bin\\"\\n```\\n\\nWe support, both, in-process loading of `llama.cpp`, as well as remote inference via `lmql serve-model`. To learn more about `llama.cpp` and how to use it with LMQL, check out the corresponding chapter in the LMQL [documentation](/docs/models/llama.cpp.md).\\n\\n
\\n\\n## Other Changes\\n\\n* LMQL now includes a `random` model backend, which randomly samples tokens from the GPT-2 vocabulary. This is useful for debugging and testing purposes and can be used for data generation in the context of highly constrained query programs.\\n\\n* Two caching issues have been fixed, avoiding cache collisions which could lead to repeated model outputs.\\n\\n* More robust query string parsing, allowing for [robust escaping](/docs/language/scripted-prompting.md#escaping) of special characters `[`, `]`, `{` and `}`.\\n\\n* Added support for `transformers` based Llama models and the associated (fast) implementation of HF tokenizers.\\n\\n* Simplified Azure OpenAI support, see the relevant chapter in the [documentation](/docs/models/azure.md).\\n\\nWe thank community members [@minosvasilias](https://github.com/minosvasilias) and [@CircArgs](https://github.com/CircArgs) for their contribution to this release.","html":"

LMQL becomes simpler and adds llama.cpp

\\n

July 13, 2023

\\n

Today we are releasing LMQL 0.0.6.5. This update contains a major simplification of the LMQL syntax, moving it much closer to standard Python. It also includes a llama.cpp based inference backend, several bug fixes and other minor improvements.

\\n

You can try the latest version of LMQL in your browser at lmql.ai/playground or install it via pip install lmql.

\\n

One Line Is All It Takes

\\n

Most notably, 0.0.6.5 comes with several simplifications of the core syntax of LMQL. Of course, all changes are backwards compatible, so you can continue to use your existing query code and move to the new version without any changes.

\\n

With this, we aim to minimize syntactic overhead, employing sensible defaults to enable more concise programs like the following:

\\n
"One line is all it takes [CONTINUATION]"\\n
\\n
promptdown

One line is all it takes [CONTINUATION|Fallin\' in love with me.]\\n

\\n

Sensible Defaults This is possible because LMQL now automatically assumes argmax and openai/text-davinci-003 as the (configurable) default decoder and model. If you prefer to use\\na different model or custom decoder settings, you can still specify them explicitly, e.g. in the @lmql.query decorator function as demonstrated later in this post.

\\n

Without any additional configuration, the simple query code above translates to a full LMQL program like this:

\\n
argmax "One line is all it takes [CONTINUATION]" from "openai/text-davinci-003"\\n
\\n

\\n

Inline Constraints

\\n

LMQL now allows you to specify several inline where constraints. This enables constraints that refer to local program variables, which means constraints can now be dependent on previous model outputs.

\\n
"A list of awesome Dua Lipa songs:\\\\n"\\nsongs = []\\n\\n"- New Rules\\\\n"\\nfor i in range(4):\\n    "-[SONG]\\\\n" where STOPS_BEFORE(SONG, "\\\\n")\\n    songs.append(SONG)\\n\\n"Out of these, my favorite is[FAVORITE]" where FAVORITE in songs\\n
\\n
promptdown

A list of awesome Dua Lipa songs:⏎\\n- New Rules\\n- [SONG|Don\'t Start Now]\\n- [SONG|IDGAF]\\n- [SONG|Be the One]\\n- [SONG|Blow Your Mind (Mwah)]\\nOut of these, my favorite is [FAVORITE|Don\'t Start Now]\\n

\\n

Note also how in this example LMQL code now reads much more like standard Python code, without any additional level of indentation.

\\n
\\n

@lmql.query functions

\\n

The overhauled syntax also makes LMQL much easier on the eyes when used with the @lmql.query function decorator in Python:

\\n
python
import lmql\\nimport json\\n\\n@lmql.query(model="openai/text-curie-001", temperature=0.9)\\ndef summarize(): \\n    '''lmql\\n    """\\n    Provide a summary of Dua Lipa, the pop icon:\\n    {{\\n      "name": "[STRING_VALUE]",\\n      "chart_position": [INT_VALUE],\\n      "top_songs": [[\\n         "[STRING_VALUE]",\\n         "[STRING_VALUE]"\\n      ]]\\n    }}\\n    """ where STOPS_BEFORE(STRING_VALUE, '"') and INT(INT_VALUE) and len(TOKENS(INT_VALUE)) < 3\\n    \\n    return json.loads(context.prompt.split("pop icon:",1)[1])\\n    '''\\n\\nprint(summarize()) # {'name': 'Dua Lipa', 'chart_position': 3415, 'top_songs': ['New Rules', 'Havana']}\\n\\n
\\n

\\n

lmql.F Lambda Functions

\\n

Based on LMQL\'s new minimal syntax, we introduce a novel and concise way to write LLM-based lambda functions. This offers a lightweight entry point for small, integrated LLM-based utilities in your code, without having to write a full LMQL program.

\\n
python
import lmql\\n\\nsummarize = lmql.F("Summarize the following in a few words: {data}: [SUMMARY]")\\nmain_subject = lmql.F("What is the main subject (noun) of the following text? {data}: [SUBJECT]", \\n                      "len(TOKENS(SUBJECT)) < 20")\\n\\ntext = "In LMQL, users can specify high-level, logical constraints ..."\\n\\nsummarize(data=text) # LMQL enables high-level constraints to be enforced during text \\n                     # generation, simplifying multi-part prompting and integration.\\nmain_subject(data=text) # Language Model Query Language (LMQL)\\n\\n
\\n

\\n
\\n

llama.cpp Inference Backend

\\n

LMQL now also fully integrates with the excellent llama.cpp C++ implementation of a number of Transformer-based language models.

\\n

Using llama.cpp from LMQL is as simple as specifying it in the from clause of a query:

\\n
argmax "Say 'this is a test':[RESPONSE]" from "llama.cpp:<PATH TO WEIGHTS>.bin"\\n
\\n

We support both in-process loading of llama.cpp and remote inference via lmql serve-model. To learn more about llama.cpp and how to use it with LMQL, check out the corresponding chapter in the LMQL documentation.

\\n
\\n
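For completeness, here is a small sketch (not part of the original post) of the same query wrapped as a Python function via the @lmql.query decorator shown earlier in this post. The weights path is a placeholder, and whether the model is loaded in-process or served separately via lmql serve-model depends on your setup:

```python
import lmql

# placeholder path; point this at your llama.cpp weights (.bin)
@lmql.query(model="llama.cpp:/path/to/weights.bin")
def test_query():
    '''lmql
    "Say 'this is a test':[RESPONSE]" where len(TOKENS(RESPONSE)) < 20
    return RESPONSE
    '''

print(test_query())  # illustrative output only
```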

Other Changes

\\n
    \\n
  • \\n

    LMQL now includes a random model backend, which randomly samples tokens from the GPT-2 vocabulary. This is useful for debugging and testing purposes and can be used for data generation in the context of highly constrained query programs.

    \\n
  • \\n
  • \\n

    Two caching issues have been fixed, avoiding cache collisions which could lead to repeated model outputs.

    \\n
  • \\n
  • \\n

    More robust query string parsing, allowing for robust escaping of special characters [, ], { and }.

    \\n
  • \\n
  • \\n

    Added support for transformers based Llama models and the associated (fast) implementation of HF tokenizers.

    \\n
  • \\n
  • \\n

    Simplified Azure OpenAI support; see the relevant chapter in the documentation.

    \\n
  • \\n
\\n

We thank community members @minosvasilias and @CircArgs for their contribution to this release.

\\n","frontmatter":{"date":"2023-07-13T00:00:00.000Z","title":"LMQL becomes simpler and adds llama.cpp"},"excerpt":"","url":"/blog/posts/release-0.0.6.5.html"},{"src":"---\\ndate: 2023-06-08\\ntitle: Releasing LMQL v0.0.6.4 LMTP, Azure, Synchronous API, and more\\n---\\n\\n# Releasing LMQL 0.0.6.4: LMTP, Azure, Synchronous API, and more\\n\\nJune 8, 2023\\n\\nAmong many things, this update contains several bug fixes and improvements. The most notable changes are:\\n\\n* **Azure OpenAI support** LMQL now supports OpenAI models that are served via Azure. For more information on how to use Azure models, please see the corresponding chapter in the [documentation](/docs/models/azure.md). Many thanks to [@veqtor](https://github.com/veqtor) for contributing this feature.\\n\\n* **Local Models via the Language Model Transport Protocol** LMQL 0.0.6.4 implements a novel protocol to stream token output from local models, vastly improving performance. In our first benchmarks, we observed a 5-6x speedup for local model inference. For more information on how to use local models, please see the corresponding chapter in the [documentation](/docs/models/hf.md).\\n\\n To learn more about the internals of the new streaming protocol, i.e. the language model transport protocol (LMTP), you can find more details in [this README file](https://github.com/eth-sri/lmql/blob/main/src/lmql/models/lmtp/README.md). In the future, we intend to implement more model backends using LMTP, streamlining communication between LMQL and models.\\n\\n
\\n \\n
\\n LMQL\'s new streaming protocol (LMTP) allows for faster local model inference.\\n
\\n\\n* **Synchronous Python API** Next to an `async/await` based API, LMQL now also provides a synchronous API. This means you no longer need to use `asyncio` to use LMQL from Python. \\n\\n To use the synchronous API, simply declare `@lmql.query` function without the `async` keyword, e.g.\\n\\n ```python\\n import lmql\\n\\n @lmql.query\\n def hello(s: str):\\n \'\'\'lmql\\n argmax \\n \\"Hello {s} [RESPONSE]\\" \\n return RESPONSE\\n from \\n \\"chatgpt\\"\\n \'\'\'\\n\\n print(hello(\\"world\\")) # [\'Hello! How can I assist you today?\']\\n ```\\n\\n If you instead want to use `lmql.run` in a synchronous context, you can now use `lmql.run_sync` instead. To learn more about how LMQL can be used from Python, check out our [documentation](/docs/lib/python.md).\\n\\n* **Improved Tokenizer Backends** LMQL can now use the excellent [`tiktoken` tokenizer](https://github.com/openai/tiktoken) as tokenization backend (for OpenAI models). Furthermore, all tokenization backends have been ported to operate on a byte-level, which improves support for multibyte characters and emojis. This is especially relevant for non-English languages and special characters.\\n\\n* **Docker Image** LMQL now provides a Docker image that can be used to run the LMQL playground in a containerized environment. For more information, please see the [documentation](/docs/development/docker-setup.md). Many thanks to [@SilacciA](https://github.com/SilacciA) for contributing this feature.\\n\\n* **Faster Startup Time** We optimized LMQL\'s import hierarchy, which results in faster module loading time.","html":"

Releasing LMQL 0.0.6.4: LMTP, Azure, Synchronous API, and more

\\n

June 8, 2023

\\n

Among many things, this update contains several bug fixes and improvements. The most notable changes are:

\\n
    \\n
  • \\n

    Azure OpenAI support LMQL now supports OpenAI models that are served via Azure. For more information on how to use Azure models, please see the corresponding chapter in the documentation. Many thanks to @veqtor for contributing this feature.

    \\n
  • \\n
  • \\n

    Local Models via the Language Model Transport Protocol LMQL 0.0.6.4 implements a novel protocol to stream token output from local models, vastly improving performance. In our first benchmarks, we observed a 5-6x speedup for local model inference. For more information on how to use local models, please see the corresponding chapter in the documentation.

    \\n

    To learn more about the internals of the new streaming protocol, i.e. the language model transport protocol (LMTP), you can find more details in this README file. In the future, we intend to implement more model backends using LMTP, streamlining communication between LMQL and models.

    \\n
    \\n \\n
    \\n LMQL\'s new streaming protocol (LMTP) allows for faster local model inference.\\n
    \\n
  • \\n
  • \\n

    Synchronous Python API In addition to the async/await based API, LMQL now also provides a synchronous API. This means you no longer need to use asyncio to use LMQL from Python.

    \\n

    To use the synchronous API, simply declare your @lmql.query function without the async keyword, e.g.

    \\n
    python
    import lmql\\n\\n@lmql.query\\ndef hello(s: str):\\n    '''lmql\\n    argmax \\n        "Hello {s} [RESPONSE]" \\n        return RESPONSE\\n    from \\n        "chatgpt"\\n    '''\\n\\nprint(hello("world")) # ['Hello! How can I assist you today?']\\n
    \\n

    If you instead want to use lmql.run in a synchronous context, you can now use lmql.run_sync instead. To learn more about how LMQL can be used from Python, check out our documentation.

    \\n
  • \\n
  • \\n

    Improved Tokenizer Backends LMQL can now use the excellent tiktoken tokenizer as a tokenization backend (for OpenAI models). Furthermore, all tokenization backends have been ported to operate on a byte level, which improves support for multibyte characters and emojis. This is especially relevant for non-English languages and special characters.

    \\n
  • \\n
  • \\n

    Docker Image LMQL now provides a Docker image that can be used to run the LMQL playground in a containerized environment. For more information, please see the documentation. Many thanks to @SilacciA for contributing this feature.

    \\n
  • \\n
  • \\n

    Faster Startup Time We optimized LMQL\'s import hierarchy, which results in faster module loading time.

    \\n
  • \\n
\\n","frontmatter":{"date":"2023-06-08T00:00:00.000Z","title":"Releasing LMQL v0.0.6.4 LMTP, Azure, Synchronous API, and more"},"excerpt":"","url":"/blog/posts/release-0.0.6.4.html"},{"src":"---\\ndate: 2023-05-11\\ntitle: LMQL Release v0.0.6.3\\n---\\n\\n# LMQL v0.0.6.3\\n\\nMay 11, 2023\\n\\nToday, we are releasing LMQL v0.0.6.3. This update contains several bug fixes and improvements. The most notable changes are:\\n\\n* **Lighter Runtime** As part of our continued efforts, we made LMQL much lighter (no more mandatory `transformers` dependency). By default LMQL now no longer requires `transformers` or PyTorch. If you rely on local models, just install LMQL via `pip install lmql[hf]` to get full Transformers integration.\\n\\n* **Token Constraints** A new function `TOKENS(...)` was added to the LMQL constraint language, allowing you to specify lower and upper bounds or the exact number of tokens to generate for a given variable.\\n \\n ```{lmql}\\n name::token_constraints\\n argmax \\n \\"A 10 token response[WHO]\\" \\n from \\n \\"openai/text-ada-001\\" \\n where \\n len(TOKENS(WHO)) == 10\\n ```\\n\\n* **Conditional Stopping** `STOPS_AT` can now be combined with additional side conditions. This allows you to specify stopping phrases that are only enforced, once other conditions are met. \\n\\n For example, below, we stop when the generated text hits a newline character, but only if the overall variable output is already at least 10 tokens long.\\n\\n ```{lmql}\\n name::conditional_stopping \\n argmax \\n \\"Hello[WHO]\\" \\n from \\n \\"openai/text-ada-001\\" \\n where \\n len(TOKENS(WHO)) > 10 and STOPS_AT(WHO, \\"\\\\n\\")\\n ```\\n\\n* **lmql.run**: Improved input validation for `lmql.run` as contributed by @lfegray. More specifically, `lmql.run` wil now provide more helpful error messages when client logic does not specify input values for all required query parameters.\\n\\n* **Automatic Cache Invalidation**: LMQL\'s tokenizer cache at `~/.cache/lmql` is now invalidated automatically when upgrading to a new version. This should prevent issues with outdated cache files.\\n\\n> Note: Version 0.0.6.2 was skipped and yanked from pypi.org, as an invalid release was pushed accidentally.","html":"

LMQL v0.0.6.3

\\n

May 11, 2023

\\n

Today, we are releasing LMQL v0.0.6.3. This update contains several bug fixes and improvements. The most notable changes are:

\\n
    \\n
  • \\n

    Lighter Runtime As part of our continued efforts, we made LMQL much lighter (no more mandatory transformers dependency). By default, LMQL no longer requires transformers or PyTorch. If you rely on local models, just install LMQL via pip install lmql[hf] to get full Transformers integration.

    \\n
  • \\n
  • \\n

    Token Constraints A new function TOKENS(...) was added to the LMQL constraint language, allowing you to specify lower and upper bounds or the exact number of tokens to generate for a given variable.

    \\n
    argmax \\n    "A 10 token response[WHO]" \\nfrom \\n    "openai/text-ada-001" \\nwhere \\n    len(TOKENS(WHO)) == 10\\n
    \\n
  • \\n
  • \\n

    Conditional Stopping STOPS_AT can now be combined with additional side conditions. This allows you to specify stopping phrases that are only enforced once other conditions are met.

    \\n

    For example, below, we stop when the generated text hits a newline character, but only if the overall variable output is already at least 10 tokens long.

    \\n
    argmax \\n    "Hello[WHO]" \\nfrom \\n    "openai/text-ada-001" \\nwhere \\n    len(TOKENS(WHO)) > 10 and STOPS_AT(WHO, "\\\\n")\\n
    \\n
  • \\n
  • \\n

    lmql.run: Improved input validation for lmql.run as contributed by @lfegray. More specifically, lmql.run will now provide more helpful error messages when client logic does not specify input values for all required query parameters.

    \\n
  • \\n
  • \\n

    Automatic Cache Invalidation: LMQL\'s tokenizer cache at ~/.cache/lmql is now invalidated automatically when upgrading to a new version. This should prevent issues with outdated cache files.

    \\n
  • \\n
\\n
\\n

Note: Version 0.0.6.2 was skipped and yanked from pypi.org, as an invalid release was pushed accidentally.

\\n
\\n","frontmatter":{"date":"2023-05-11T00:00:00.000Z","title":"LMQL Release v0.0.6.3"},"excerpt":"","url":"/blog/posts/release-0.0.6.3.html"},{"src":"---\\ndate: 2023-05-03\\ntitle: LMQL Release v0.0.6.1\\n---\\n\\n# LMQL v0.0.6.1\\n\\nMay 3, 2023\\n\\nWe released LMQL v0.0.6.1, which contains several bug fixes and improvements. The most notable changes are:\\n\\n* **Cache Layer Bug Fixes** This release contains several fixes and improvements to the recently introduced cache layer.\\n\\n* **Stopping Phrases** Stopping phrases specified via `STOPS_BEFORE` are now passed to the OpenAI API as `\\"stop\\"` parameter, decreasing the number of tokens used for the request. If you want to disable this (e.g. to allow speculative execution), you can specify the new decoder parameter `openai_nonstop=True`.\\n\\n* **Asynchronous Output Writers** All output writers have been refactored to use asynchronous I/O. This should simplify integration with other asynchronous frameworks, e.g. for HTTP or Websocket APIs. We also added a new chapter on [Output Streaming](/docs/lib/output.md) to the documentation.\\n\\n* **Output Writers for HTTP endpoints, WebSockets and Server-Sent Events** Based on the updated output writer interface, we added three new output writers for serving LMQL queries as HTTP endpoints, WebSockets and via Server-Sent Events (SSE). To learn more, check their relatively simple implementations in the new [lmql.output](https://github.com/eth-sri/lmql/tree/main/src/lmql/output) module. We will also provide more documentation on how to use them, e.g. with `aiohttp` in the future.","html":"

LMQL v0.0.6.1

\\n

May 3, 2023

\\n

We released LMQL v0.0.6.1, which contains several bug fixes and improvements. The most notable changes are:

\\n
    \\n
  • \\n

    Cache Layer Bug Fixes This release contains several fixes and improvements to the recently introduced cache layer.

    \\n
  • \\n
  • \\n

    Stopping Phrases Stopping phrases specified via STOPS_BEFORE are now passed to the OpenAI API as "stop" parameter, decreasing the number of tokens used for the request. If you want to disable this (e.g. to allow speculative execution), you can specify the new decoder parameter openai_nonstop=True.

    \\n
  • \\n
  • \\n

    Asynchronous Output Writers All output writers have been refactored to use asynchronous I/O. This should simplify integration with other asynchronous frameworks, e.g. for HTTP or Websocket APIs. We also added a new chapter on Output Streaming to the documentation.

    \\n
  • \\n
  • \\n

    Output Writers for HTTP endpoints, WebSockets and Server-Sent Events Based on the updated output writer interface, we added three new output writers for serving LMQL queries as HTTP endpoints, WebSockets and via Server-Sent Events (SSE). To learn more, check their relatively simple implementations in the new lmql.output module. We will also provide more documentation on how to use them, e.g. with aiohttp in the future.

    \\n
  • \\n
\\n","frontmatter":{"date":"2023-05-03T00:00:00.000Z","title":"LMQL Release v0.0.6.1"},"excerpt":"","url":"/blog/posts/release-0.0.6.1.html"},{"src":"---\\ndate: 2023-05-01\\ntitle: Releasing the LMQL Caching Layer (v0.0.6)\\n---\\n\\n# Releasing the LMQL Caching Layer (v0.0.6)\\n\\nMay 1, 2023\\n\\nToday we are releasing LMQL 0.0.6, the first version of LMQL that integrates the *LMQL Caching Layer*. The caching layer can drastically reduce token use of LLM interaction, lowering both the cost and latency of running queries. In this blog post, we provide a quick overview of the caching layer and demonstrate how it can reduce token use, latency and the number of requests needed to run queries by up to 80%. We observe improvements across a wide range of different scenarios, including **template-based queries, long-form constraints and tool augmentation.**\\n\\nYou can experiment with LMQL in the browser-based [Playground IDE](http://lmql.ai/playground) or install the latest version locally, via `pip install lmql`.\\n\\n## Caching Layer\\n\\nThe caching layer is implemented as a **tree-based data structure** that caches all model output including logits, tokens, and metadata, allowing the runtime to more efficiently explore the token space of an LLM, even in the presence of multiple variables, constraints and tool augmentation. The cache can be considered an append-only tree, that is explored during query execution, expanding branches according to query code, constraints and speculative execution.\\n\\nTo illustrate the effect of a caching layer, we consider the following example scenarios, all of which now run in a fraction of the time and with a fraction of the tokens needed with traditional querying methods.\\n\\n### Template-Based Queries \\n\\nWhen specifying a prompt template with multiple variables to fill in, an LLM typically needs to be invoked once per variable. For instance, consider the following template that guides an LLM in generating a list of things:\\n```{lmql}\\nname::list-of-things-speculative\\nargmax\\n \\"A list of things not to forget when going to the sea (not travelling): \\\\n\\"\\n \\"- Sunglasses \\\\n\\"\\n \\"-[THING]\\"\\n \\"-[THING]\\"\\n \\"-[THING]\\"\\n \\"-[THING]\\"\\nfrom\\n \'openai/text-ada-001\'\\nwhere\\n STOPS_AT(THING, \\"\\\\n\\")\\n```\\n**Without Caching:** Tokens: 390, Requests: 4 | **With Caching Layer:** Tokens: 89 (-77%), Requests: 1 (-75%)\\n\\nHere, the LLM typically needs to be invoked 4 times, once per `[THING]` variable. On each call, this incurs a token and latency cost (both with OpenAI and local models). Separate calls are needed, because our template dictates the `-` token to be inserted before each `[THING]`. \\n\\nWith the caching layer, LMQL can now invoke the LLM only once, and fill in all variables with the resulting tokens, as long as the LLM output already aligns naturally with your template. In case the LLM result of the initial invocation at some point no longer aligns with the template, LMQL will automatically re-invoke the LLM from this point on, guaranteeing an overall consistent result that is already parsed into separate `[THING]` variables.\\n\\n### Short-Circuiting Long Constraints\\n\\nWhen you specify long constraints like `A in [\\"ABCDE\\", \\"FGHIJK\\"]`, the LMQL runtime guides the LLM to choose one of the provided options and then continues enforcing the sequence until the chosen values is fully decoded. 
To illustrate, consider the following query:\\n```{lmql}\\nname::long-form-constraints-speculative\\nargmax\\n \\"If we have the choice we choose[OPTION]\\"\\nfrom \\n \\"openai/text-ada-001\\"\\nwhere\\n OPTION in [\\"Option A with a whole lot of extra context\\", \\n \\"Option B with context\\", \\n \\"Another Option, also with a lot of additional text\\"\\n ]\\n```\\n```promptdown\\nIf we have the choice we choose [OPTION|Option A with a whole lot of extra context]\\n```\\n**Without Caching:** Tokens: 123, Requests: 9 | **With Caching Layer:** Tokens: 25 (-80%), Requests: 2 (-78%)\\n\\nHere, after the LLM has produced `\\"Option\\"` and then `\\" A\\"`, LMQL short-circuits further model calls and automatically completes the resulting sequence to `\\"Option A with a whole lot of extra context\\"`. This is possible because once `Option A` has been predicted, the remaining tokens are fully determined by the constraints.\\n\\n### Tool-Augmented Queries\\n\\nLastly, we consider tool augmented queries. LLM agents and tool augmentation are very powerful paradigms, that allow LLMs to incorporate external knowledge and reasoning into their predictions. However, this comes at a cost: On each tool invocation, the LLM needs to be re-invoked to continue decoding after the tool output has been inserted. This impacts both the token cost and latency of running queries, as many requests have to be send forth and back between the LLM and the tool.\\n\\nAs an example, consider the following query that augments an LLM with the ability to use a key-value storage, [also runnable in the browser-based LMQL Playground](http://lmql.ai/playground?snippet=kv).\\n\\n
\\n\\n \\"Key-Storage\\n\\n
\\n\\n**Without Caching:** Tokens: 5,162, Requests: 12 | **With Caching Layer:** Tokens: 3,481 (-33%), Requests: 8 (-33%)\\n\\nHere, whenever the LLM produces an action relating to our key-value storage, we invoke a tool that handles the storage and return the result (to `assign` and `get` stored values). The result of each tool invocation is then inserted into the LLM output, and the LLM is re-invoked to continue decoding.\\n\\nWe count 10 tool interactions which results in 12 requests if we run without caching. However, using the new caching layer, we can reduce this to 8 requests, even undercutting the number of tool interactions. This is possible because the caching layer will not abort LLM generation, if the LLM already correctly predicts the tool output. \\n\\nThis scenario demonstrates that the natural ability of LLMs to complete sequences can be leveraged to reduce the number of tool interactions, by relying on speculative execution.\\n\\n## Persisting the Cache\\n\\nOf course, the in-memory cache of the LMQL runtime can also be persisted to disk, allowing you to reuse the cache tree across multiple queries, automatically reducing token cost and latency. In some cases this can even be used to reduce the number of requests to the LLM to 0, e.g. if the cache already contains the desired result. \\n\\nTo do so, you can simply specify a `cache=\\"file.tokens\\"` parameter in your query code:\\n\\n```{lmql}\\nname::joke-with-cache\\nargmax(cache=\\"joke.tokens\\")\\n \\"\\"\\"A good dad joke. A indicates the punchline\\n Q:[JOKE]\\n A:[PUNCHLINE]\\"\\"\\"\\nfrom\\n \\"openai/text-davinci-003\\"\\nwhere\\n len(JOKE) < 120 and \\n STOPS_AT(JOKE, \\"?\\") and \\n STOPS_AT(PUNCHLINE, \\"\\\\n\\") and \\n len(PUNCHLINE) > 1\\n```\\n\\nThe first successful run of this query will persist the cache to `joke.tokens`. Subsequent runs will then automatically load the cache from disk, and only invoke the LLM if the cache does not contain a match. This also works for queries whose underlying LLM requests only partially overlap, as the tree-based cache data structure will automatically identify matching subtrees.\\n\\n**Caching During Query Development**: Persisting the cache can be particularly useful during query development, as it allows you to reuse the cache across multiple runs of the same query. A persistent cache will reduce token cost and latency of your query, even if you slightly change the query between runs.\\n\\n## Caveats and Disabling the Cache\\n\\nYou can disable the caching layer by specifying `cache=False` in your query code. This will cause the LMQL runtime to always invoke the LLM, and never use the cache. This is useful for debugging purposes, or if you want to ensure that the LLM is always invoked.\\n\\nFurther, as the cache currently is implemented as an append-only data structure, it will grow indefinitely. This may be problematic for long-running applications, as the cache will eventually grow to relatively large sizes. In the future, we plan to implement simple strategies to limit the cache size, such as a least-recently-used eviction policy.\\n\\n## Conclusion\\n\\nIn this post, we introduced the new caching layer of the LMQL runtime, which allows you to reduce the token cost and latency of your queries by reusing previously generated LLM outputs. We demonstrated how the caching layer can be used to reduce the number of LLM invocations in a variety of scenarios, including long constraints, short-circuiting, and tool-augmented queries. 
We also showed how the cache can be persisted to disk, allowing you to reuse the cache across multiple queries.\\n\\nTo learn more about LMQL please also check out our [documentation](/docs), or join our [Discord](https://discord.gg/2Y3Wz2Q) to chat with us directly. We are looking forward to hearing from you!","html":"

Releasing the LMQL Caching Layer (v0.0.6)

\\n

May 1, 2023

\\n

Today we are releasing LMQL 0.0.6, the first version of LMQL that integrates the LMQL Caching Layer. The caching layer can drastically reduce token use of LLM interaction, lowering both the cost and latency of running queries. In this blog post, we provide a quick overview of the caching layer and demonstrate how it can reduce token use, latency and the number of requests needed to run queries by up to 80%. We observe improvements across a wide range of different scenarios, including template-based queries, long-form constraints and tool augmentation.

\\n

You can experiment with LMQL in the browser-based Playground IDE or install the latest version locally, via pip install lmql.

\\n

Caching Layer

\\n

The caching layer is implemented as a tree-based data structure that caches all model output, including logits, tokens, and metadata, allowing the runtime to more efficiently explore the token space of an LLM, even in the presence of multiple variables, constraints and tool augmentation. The cache can be considered an append-only tree that is explored during query execution, expanding branches according to query code, constraints and speculative execution.

\\n

To illustrate the effect of a caching layer, we consider the following example scenarios, all of which now run in a fraction of the time and with a fraction of the tokens needed with traditional querying methods.

\\n

Template-Based Queries

\\n

When specifying a prompt template with multiple variables to fill in, an LLM typically needs to be invoked once per variable. For instance, consider the following template that guides an LLM in generating a list of things:

\\n
argmax\\n    "A list of things not to forget when going to the sea (not travelling): \\\\n"\\n    "- Sunglasses \\\\n"\\n    "-[THING]"\\n    "-[THING]"\\n    "-[THING]"\\n    "-[THING]"\\nfrom\\n    'openai/text-ada-001'\\nwhere\\n    STOPS_AT(THING, "\\\\n")\\n
\\n

Without Caching: Tokens: 390, Requests: 4 | With Caching Layer: Tokens: 89 (-77%), Requests: 1 (-75%)

\\n

Here, the LLM typically needs to be invoked 4 times, once per [THING] variable. Each call incurs a token and latency cost (both with OpenAI and local models). Separate calls are needed because our template dictates that the - token be inserted before each [THING].

\\n

With the caching layer, LMQL can now invoke the LLM only once, and fill in all variables with the resulting tokens, as long as the LLM output already aligns naturally with your template. In case the LLM result of the initial invocation at some point no longer aligns with the template, LMQL will automatically re-invoke the LLM from this point on, guaranteeing an overall consistent result that is already parsed into separate [THING] variables.

\\n

Short-Circuiting Long Constraints

\\n

When you specify long constraints like A in ["ABCDE", "FGHIJK"], the LMQL runtime guides the LLM to choose one of the provided options and then continues enforcing the sequence until the chosen value is fully decoded. To illustrate, consider the following query:

\\n
argmax\\n    "If we have the choice we choose[OPTION]"\\nfrom \\n    "openai/text-ada-001"\\nwhere\\n    OPTION in ["Option A with a whole lot of extra context", \\n        "Option B with context", \\n        "Another Option, also with a lot of additional text"\\n    ]\\n
\\n
promptdown

If we have the choice we choose OPTIONOption A with a whole lot of extra context\\n

\\n

Without Caching: Tokens: 123, Requests: 9 | With Caching Layer: Tokens: 25 (-80%), Requests: 2 (-78%)

\\n

Here, after the LLM has produced "Option" and then " A", LMQL short-circuits further model calls and automatically completes the resulting sequence to "Option A with a whole lot of extra context". This is possible because once Option A has been predicted, the remaining tokens are fully determined by the constraints.

\\n

Tool-Augmented Queries

\\n

Lastly, we consider tool-augmented queries. LLM agents and tool augmentation are very powerful paradigms that allow LLMs to incorporate external knowledge and reasoning into their predictions. However, this comes at a cost: on each tool invocation, the LLM needs to be re-invoked to continue decoding after the tool output has been inserted. This impacts both the token cost and latency of running queries, as many requests have to be sent back and forth between the LLM and the tool.

\\n

As an example, consider the following query that augments an LLM with the ability to use a key-value storage, also runnable in the browser-based LMQL Playground.

\\n
\\n\\n \\"Key-Storage\\n\\n
\\n

Without Caching: Tokens: 5,162, Requests: 12 | With Caching Layer: Tokens: 3,481 (-33%), Requests: 8 (-33%)

\\n

Here, whenever the LLM produces an action relating to our key-value storage, we invoke a tool that handles the storage and returns the result (to assign and get stored values). The result of each tool invocation is then inserted into the LLM output, and the LLM is re-invoked to continue decoding.

\\n

We count 10 tool interactions, which results in 12 requests when running without caching. However, using the new caching layer, we can reduce this to 8 requests, even undercutting the number of tool interactions. This is possible because the caching layer will not abort LLM generation if the LLM already correctly predicts the tool output.

\\n

This scenario demonstrates that the natural ability of LLMs to complete sequences can be leveraged to reduce the number of tool interactions, by relying on speculative execution.

\\n

Persisting the Cache

\\n

Of course, the in-memory cache of the LMQL runtime can also be persisted to disk, allowing you to reuse the cache tree across multiple queries, automatically reducing token cost and latency. In some cases this can even be used to reduce the number of requests to the LLM to 0, e.g. if the cache already contains the desired result.

\\n

To do so, you can simply specify a cache="file.tokens" parameter in your query code:

\\n
argmax(cache="joke.tokens")\\n   """A good dad joke. A indicates the punchline\\n   Q:[JOKE]\\n   A:[PUNCHLINE]"""\\nfrom\\n   "openai/text-davinci-003"\\nwhere\\n   len(JOKE) < 120 and \\n   STOPS_AT(JOKE, "?") and \\n   STOPS_AT(PUNCHLINE, "\\\\n") and \\n   len(PUNCHLINE) > 1\\n
\\n

The first successful run of this query will persist the cache to joke.tokens. Subsequent runs will then automatically load the cache from disk, and only invoke the LLM if the cache does not contain a match. This also works for queries whose underlying LLM requests only partially overlap, as the tree-based cache data structure will automatically identify matching subtrees.

\\n

Caching During Query Development: Persisting the cache can be particularly useful during query development, as it allows you to reuse the cache across multiple runs of the same query. A persistent cache will reduce token cost and latency of your query, even if you slightly change the query between runs.

\\n

Caveats and Disabling the Cache

\\n

You can disable the caching layer by specifying cache=False in your query code. This will cause the LMQL runtime to always invoke the LLM, and never use the cache. This is useful for debugging purposes, or if you want to ensure that the LLM is always invoked.

\\n

Further, as the cache is currently implemented as an append-only data structure, it will grow indefinitely. This may be problematic for long-running applications, as the cache will eventually grow quite large. In the future, we plan to implement simple strategies to limit the cache size, such as a least-recently-used eviction policy.

\\n

Conclusion

\\n

In this post, we introduced the new caching layer of the LMQL runtime, which allows you to reduce the token cost and latency of your queries by reusing previously generated LLM outputs. We demonstrated how the caching layer can be used to reduce the number of LLM invocations in a variety of scenarios, including template-based queries, short-circuiting of long constraints, and tool-augmented queries. We also showed how the cache can be persisted to disk, allowing you to reuse the cache across multiple queries.

\\n

To learn more about LMQL please also check out our documentation, or join our Discord to chat with us directly. We are looking forward to hearing from you!

\\n","frontmatter":{"date":"2023-05-01T00:00:00.000Z","title":"Releasing the LMQL Caching Layer (v0.0.6)"},"excerpt":"","url":"/blog/posts/release-0.0.6.html"},{"src":"---\\ndate: 2023-04-17\\ntitle: LMQL Release 0.0.5\\n---\\n\\n# LMQL Release 0.0.5\\n\\nApril 17, 2023\\n\\nToday we are releasing version 0.0.5 of LMQL. This release focuses on stability and performance improvements. For a detailed list of changes, please see below. We are particularly excited about the first community contributions that have been merged as part of this release, with many more in the works.\\n\\n`lmql==0.0.5` has been published on [PyPI](https://pypi.org/project/lmql/), based the current `main` branch of the [GitHub repository](https://github.com/eth-sri/lmql). The updated version has also been deployed to the browser-based [lmql.ai/playground](http://lmql.ai/playground).\\n\\n### Changelog\\n\\n* **Decoder Performance** The `argmax` and `sample` decoders have undergone some optimizations, allowing them to run faster. This results in a *20-30% speed-up* on common query workloads. [#24](https://github.com/eth-sri/lmql/pull/24).\\n\\n* **Postprocessing Semantics** Internally, LMQL now allows constraints to implement postprocessing semantics. This is used to convert variable values after they have been completed, to a more normalized form in the prompt, and to a semantically meaningful data type in the context of the query code. [#24](https://github.com/eth-sri/lmql/pull/24). \\n\\n For example, when using an `INT()` constraint on a generated number, the model will be restricted to only generate valid integers, and now, the resulting `NUM` value will additionally be converted to an `int` value:\\n\\n ```\\n argmax\\n \\"My favorite number is: [NUM]\\\\n\\"\\n print(type(NUM), NUM * 2) # 4\\n \\"Number times two is {NUM * 2}\\"\\n from\\n \'openai/text-ada-001\'\\n where\\n INT(NUM) \\n ```\\n\\n* **Core Interpreter** A complete reimplementation of the LMQL core interpreter has been completed. This fixes a couple of minor issues and overall, improves reliability and performance when dealing with *branching* decoding algorithms. [#24](https://github.com/eth-sri/lmql/pull/24).\\n\\n\\n* **Playground** Locally and when used in-browser, the [LMQL Playground](http://lmql.ai/playground) now *streams debugger information* from the LMQL interpreter incrementally. This leads to speed-ups when running in the Playground, especially with longer outputs. [#27f9a8ad](https://github.com/eth-sri/lmql/commit/27f9a8adb819f732608ef61c9aca9dca579dc536).\\n\\n\\n* **Other Fixes**:\\n - When used from within Python (as decorated function), LMQL code no longer has to be doubly-escaped, e.g. you can now write `STOPS_AT(VAR, \\"\\\\n\\")` instead of `STOPS_AT(VAR, \\"\\\\\\\\n\\")`\\n - The LMQL inference API buffers requests that come in during startup, to avoid errors when the server is not yet ready. [#15](https://github.com/eth-sri/lmql/pull/15), thanks to [@chrispan](https://github.com/chrispan).\\n - OpenAI request parallelization no longer leads to an error on Linux systems, with regards to worker processes [#6](https://github.com/eth-sri/lmql/issues/6).\\n\\n### Preview\\n\\nApart from the changes above, we are also working on a number of other features, including:\\n\\n* **llama.cpp support** as started in [this PR](https://github.com/eth-sri/lmql/pull/18), thanks to [@CircArgs](https://github.com/CircArgs).\\n* Support for **Type Constraints**, e.g. 
`type(VAR) is DataClass`, that automatically force the model to produce a value that structurally conforms to the given type. See this [Twitter thread](https://twitter.com/lbeurerkellner/status/1646187597901733889) for more details.\\n* Support for using **Antlr parsers** during query execution, to force the model to produce a value that conforms to a given grammar. \\n\\n* **Extending Logit Masking to OpenAI Chat Models**. This will enable full support for LMQL constraints with e.g. `chatgpt` and `gpt-4` models. See [#25](https://github.com/eth-sri/lmql/pull/25), thanks to [@kharvd](https://github.com/kharvd).","html":"

LMQL Release 0.0.5

\\n

April 17, 2023

\\n

Today we are releasing version 0.0.5 of LMQL. This release focuses on stability and performance improvements. For a detailed list of changes, please see below. We are particularly excited about the first community contributions that have been merged as part of this release, with many more in the works.

\\n

lmql==0.0.5 has been published on PyPI, based on the current main branch of the GitHub repository. The updated version has also been deployed to the browser-based lmql.ai/playground.

\\n

Changelog

\\n
    \\n
  • \\n

    Decoder Performance The argmax and sample decoders have undergone some optimizations, allowing them to run faster. This results in a 20-30% speed-up on common query workloads. #24.

    \\n
  • \\n
  • \\n

    Postprocessing Semantics Internally, LMQL now allows constraints to implement postprocessing semantics. This is used to convert variable values, after they have been completed, to a more normalized form in the prompt and to a semantically meaningful data type in the context of the query code. #24.

    \\n

    For example, when using an INT(<var>) constraint on a generated number, the model will be restricted to only generate valid integers, and now, the resulting NUM value will additionally be converted to an int value:

    \\n
    argmax\\n   "My favorite number is: [NUM]\\\\n"\\n   print(type(NUM), NUM * 2) # <class 'int'> 4\\n   "Number times two is {NUM * 2}"\\nfrom\\n   'openai/text-ada-001'\\nwhere\\n   INT(NUM) \\n
    \\n
  • \\n
  • \\n

    Core Interpreter A complete reimplementation of the LMQL core interpreter has been completed. This fixes a couple of minor issues and overall, improves reliability and performance when dealing with branching decoding algorithms. #24.

    \\n
  • \\n
  • \\n

    Playground Locally and when used in-browser, the LMQL Playground now streams debugger information from the LMQL interpreter incrementally. This leads to speed-ups when running in the Playground, especially with longer outputs. #27f9a8ad.

    \\n
  • \\n
  • \\n

    Other Fixes:

    \\n
      \\n
    • When used from within Python (as a decorated function), LMQL code no longer has to be doubly-escaped, e.g. you can now write STOPS_AT(VAR, "\\\\n") instead of STOPS_AT(VAR, "\\\\\\\\n")
    • \\n
    • The LMQL inference API buffers requests that come in during startup, to avoid errors when the server is not yet ready. #15, thanks to @chrispan.
    • \\n
    • OpenAI request parallelization no longer leads to an error on Linux systems with regard to worker processes #6.
    • \\n
    \\n
  • \\n
\\n

Preview

\\n

Apart from the changes above, we are also working on a number of other features, including:

\\n
    \\n
  • \\n

    llama.cpp support as started in this PR, thanks to @CircArgs.

    \\n
  • \\n
  • \\n

    Support for Type Constraints, e.g. type(VAR) is DataClass, that automatically force the model to produce a value that structurally conforms to the given type. See this Twitter thread for more details.

    \\n
  • \\n
  • \\n

    Support for using Antlr parsers during query execution, to force the model to produce a value that conforms to a given grammar.

    \\n
  • \\n
  • \\n

    Extending Logit Masking to OpenAI Chat Models. This will enable full support for LMQL constraints with e.g. chatgpt and gpt-4 models. See #25, thanks to @kharvd.

    \\n
  • \\n
\\n","frontmatter":{"date":"2023-04-17T00:00:00.000Z","title":"LMQL Release 0.0.5"},"excerpt":"","url":"/blog/posts/release-0.0.5.html"}]');const h={class:"posts"},d={class:"post"},u=["href"],m=["innerHTML"],v=JSON.parse('{"title":"Blog","description":"","frontmatter":{"title":"Blog","layout":"doc","aside":false,"outline":false},"headers":[],"relativePath":"blog/index.md","filePath":"blog/index.md"}'),g={name:"blog/index.md"},f=Object.assign(g,{setup(y){function b(s){return s}return(s,w)=>(a(),t("div",null,[(a(!0),t(r,null,i(l(p),n=>(a(),t("div",h,[e("div",d,[e("a",{href:n.url},[e("h1",null,c(n.frontmatter.title),1)],8,u),e("div",{class:"body",innerHTML:n.html},null,8,m)])]))),256))]))}}),q=o(f,[["__scopeId","data-v-61c06c99"]]);export{v as __pageData,q as default}; diff --git a/assets/index.md.3b2473f1.js b/assets/index.md.930a925c.js similarity index 88% rename from assets/index.md.3b2473f1.js rename to assets/index.md.930a925c.js index 6c3a917e..9e3aefc4 100644 --- a/assets/index.md.3b2473f1.js +++ b/assets/index.md.930a925c.js @@ -1 +1 @@ -import{_ as v}from"./chunks/lmql.17cc0505.js";import{_ as m,o as p,c as o,k as s,r as d,p as f,m as b,e as L,n as x,h as q,F as _,D as j,t as g,l as c,H as h,w as e,a as i,a0 as M}from"./chunks/framework.980cae92.js";const A={},u=a=>(f("data-v-fb660782"),a=a(),b(),a),S={class:"hero"},k=u(()=>s("img",{src:v,alt:"LMQL"},null,-1)),N=u(()=>s("img",{src:v,alt:"LMQL"},null,-1)),T=u(()=>s("br",null,null,-1)),E={class:"tagline"},C=u(()=>s("div",{class:"buttons"},[s("a",{class:"primary btn",href:"/docs/"}," Get Started "),s("a",{class:"btn",href:"https://github.com/eth-sri/lmql#contribute')"}," Contribute ")],-1));function W(a,n){return p(),o("div",S,[k,s("div",null,[s("h1",null,[N,d(a.$slots,"title",{},void 0,!0),T]),s("h2",E,[d(a.$slots,"subtitle",{},void 0,!0)]),C])])}const Q=m(A,[["render",W],["__scopeId","data-v-fb660782"]]);const R={key:0,class:"badge"},I={class:"reveal"},P={__name:"LMFeature",props:["template","new"],setup(a){return(n,w)=>(p(),o("div",{class:x(["feature",a.template])},[s("div",null,[s("h1",null,[d(n.$slots,"default",{},void 0,!0),a.new?(p(),o("span",R,"NEW")):L("",!0)]),s("p",null,[d(n.$slots,"description",{},void 0,!0)])]),s("code",I,[d(n.$slots,"code",{},void 0,!0)])],2))}},H=m(P,[["__scopeId","data-v-9d5b0837"]]),Y=JSON.parse(`[{"snippet":"","description":"
lmql
@lmql.query\\ndef meaning_of_life():\\n '''lmql\\n # top-level strings are prompts\\n "Q: What is the answer to life, the \\\\\\n universe and everything?"\\n\\n # generation via (constrained) variables\\n "A: [ANSWER]" where \\\\\\n len(ANSWER) < 120 and STOPS_AT(ANSWER, ".")\\n\\n # results are directly accessible\\n print("LLM returned", ANSWER)\\n\\n # use typed variables for guaranteed \\n # output format\\n "The answer is [NUM: int]"\\n\\n # query programs are just functions \\n return NUM\\n '''\\n\\n# so from Python, you can just do this\\nmeaning_of_life() # 42\\n
\\n

\\n
\\n

Created by the SRI Lab @ ETH Zurich and contributors.

\\n
\\n
\\n Star\\n
\\n
\\n","title":null,"template":"code"},{"snippet":"
promptdown

Execution Trace

\\nQ: When was Obama born?wait200beginincontext

dateformat(respond in DD/MM/YYYY)endincontext

wait200ANSWER04/08/1961wait200fadeincontextwait200hideincontextwait200\\nQ: When was Bruno Mars born?wait200beginincontext1

dateformat(respond in DD/MM/YYYY)endincontext1

wait200ANSWER08/10/1985wait200fadeincontext1wait200hideincontext1wait200\\nQ: When was Dua Lipa born?wait200beginincontext2

dateformat(respond in DD/MM/YYYY)endincontext2

wait200ANSWER22/08/1995wait200fadeincontext2wait200hideincontext2wait200\\n\\nOut of these, who was born last?LASTDua Lipa\\n

\\n
","description":"

LMQL now supports nested queries, enabling modularized local instructions and re-use of prompt components.

\\n
\\n\\nLearn more\\n\\n","title":"Nested Queries bring Procedural Programming to Prompting","template":"side-by-side","new":0.7},{"snippet":"","description":"

LMQL automatically makes your LLM code portable across several backends. You can switch between them with a single line of code.

\\n\\n","title":"Works Across Backends","template":"middle"}]`),r=JSON.parse(`[{"id":0,"path":"/features/examples/1-packing-list.html","title":"🌴 Packing List","description":"

Prompt construction and generation is implemented via expressive Python control flow and string interpolation.

\\n","code":"
lmql
# top level strings are prompts\\n"My packing list for the trip:"\\n\\n# use loops for repeated prompts\\nfor i in range(4):\\n    # 'where' denotes hard constraints enforced by the runtime\\n    "- [THING] \\\\n" where THING in \\\\ \\n        ["Volleyball", "Sunscreen", "Bathing Suit"]\\n
\\n
","output":"
promptdown

My packing list for the trip:\\n\\n- THING Volleyball\\n- THING Bathing Suit\\n- THING Sunscreen\\n- THING Volleyball\\n

\\n
"},{"id":1,"path":"/features/examples/2-constraining.html","title":"⛓️ Constrained LLMs","description":"

LMQL's support for constrained generation enables robust interfacing, to integrate LLMs safely into your applications.Learn More →

\\n","code":"
lmql
# top-level strings are prompts\\n"Tell me a joke:\\\\n"\\n\\n# use 'where' constraints to control and restrict generation\\n"Q:[JOKE]\\\\n" where len(JOKE) < 120 and STOPS_AT(JOKE, "?")\\n\\n"A:[PUNCHLINE]\\\\n" where \\\\ \\n    STOPS_AT(PUNCHLINE, "\\\\n") and len(TOKENS(PUNCHLINE)) > 1\\n
\\n
","output":"
promptdown

Tell me a joke:\\n\\nQ: JOKE What did the fish say when it hit the wall?\\nA: PUNCHLINE Dam\\n

\\n
"},{"id":2,"path":"/features/examples/2.5-data-types.html","title":"🔢 Types and Regex","description":"

LMQL supports integer and regex constraints, enabling advanced output formatting. The results are automatically represented as the appropriate Python type, and can be manipulated as such.

\\n","code":"
lmql
# restrict generation to MM/DD format\\n"Q: It's the last day of June. What day is it?\\\\n"\\n"A: Today is [RESPONSE: r'[0-9]{2}/[0-9]{2}']\\\\n"\\n\\n# generate numbers\\n"Q: What's the month number?\\\\n"\\n"A: [ANSWER: int]"\\n\\n# results are automatically cast to int...\\ntype(ANSWER) # -> int\\n\\n# ...and can be easily manipulated\\n10 * ANSWER # -> 60\\n
\\n
","output":"
promptdown

Q: It's the last day of June. What day is it?\\nA: Today is RESPONSE 30/06\\n\\nQ: What's the month number?\\nA: ANSWER 6\\n

\\n
"},{"id":3,"path":"/features/examples/3-multi-part.html","title":"🧠 Multi-Part Prompts","description":"

LMQL's programming model supports multi-part prompt programs, enabling enhanced controls over the LLM reasoning process.

\\n","code":"
lmql
# use multi-part prompting for complicated questions\\n"Q: It was Sept. 1st, 2021 a week ago. What is the date 10 days ago in MM/DD/YYYY?"\\n"Answer Choices: (A) 08/29/2021 (B) 08/28/2021 (C) 08/29/1925 (D) 08/30/2021 (E) 05/25/2021 (F) 09/19/2021"\\n\\n# use a reasoning step to break down the problem\\n"A: Let's think step by step.\\\\n [REASONING]"\\n\\n# use a constrained variable to extract the final response\\n"Therefore, the answer is [ANSWER]" where \\\\\\n    ANSWER in ["A", "B", "C", "D", "E", "F"]\\n\\n# access results just like a normal variable\\nANSWER # "A"\\n
\\n
","output":"
promptdown

Q: It was Sept. 1st, 2021 a week ago. What is the date 10 days ago in MM/DD/YYYY?\\nAnswer Choices: (A) 08/29/2021 (B) 08/28/2021 (C) 08/29/1925 (D) 08/30/2021 (E) 05/25/2021 (F) 09/19/2021\\n\\nA: Let's think step by step.\\nREASONING Sept. 1st, 2021 was a week ago, so 10 days ago would be 8 days before that, which is August 23rd, 2021, so the answer is (A) 08/29/2021.\\n\\nTherefore, the answer is ANSWER A\\n

\\n
"},{"id":4,"path":"/features/examples/3.5-distributions.html","title":"📐 Measure Distributions","description":"

Apart from text generation, LMQL also measures model scores, allowing users to extract classification results and confidence scores.

\\n","code":"
lmql
# prompt with a data sample\\n"Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.\\\\n"\\n\\n# instruct model to do sentiment analysis\\n"Q: What is the underlying sentiment of this review and why?\\\\n"\\n\\n# generate a text-based analysis\\n"A:[ANALYSIS]\\\\n"\\n\\n# based on the analysis, measure certainity about the sentiment\\n"Based on this, the overall sentiment of the message can be considered to be[CLASSIFICATION]" distribution \\\\\\n   CLASSIFICATION in [" positive", " neutral", " negative"]\\n
\\n
","output":"
promptdown

Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.\\n\\nQ: What is the underlying sentiment of this review and why?\\n\\nA: ANALYSISPositive, because the reviewer enjoyed their stay and had positive experiences with both the activities and food.\\n\\nBased on this, the overall sentiment of the message \\ncan be considered to be CLS[CLASSIFICATION]\\n\\n\\n\\n\\n\\n\\n

\\n
\\n P(CLASSIFICATION) = \\n
\\n - positive 0.9998711120293567
\\n - neutral 0.00012790777085508993
\\n - negative 9.801997880775052e-07\\n
\\n
"},{"id":5,"path":"/features/examples/3.6-python.html","title":"🐍 Python Support","description":"

LMQL can be used directly from within Python, allowing for seamless integration with your existing codebase.

\\n","code":"
python
import lmql\\n\\n# defines an LMQL function from within Python\\n@lmql.query\\ndef say(phrase):\\n    '''lmql\\n    # we can seamlessly use 'phrase' in LMQL\\n    "Say '{phrase}': [TEST]"\\n    # return the result to the caller\\n    return TEST\\n    '''\\n\\n# call your LMQL function like any other Python function\\nprint(say("Hello World!", model="openai/gpt-3.5-turbo"))\\n
\\n
","output":"
promptdown

Say 'Hello World': TEST Hello World\\n

\\n
"},{"id":6,"path":"/features/examples/4-meta-prompting.html","title":"🌳 Meta Prompting","description":"

LMQL supports program-level decoding algorithms like beam, sample and best_k, allowing for a branching exploration of multi-step reasoning flows.

\\n","code":"
lmql
# specify a decoding algorithm (e.g. beam, sample, best_k)\\n# to enable multi-branch exploration of your program\\nbeam(n=2)\\n\\n# pose a question\\n"Q: What are Large Language Models?\\\\n\\\\n"\\n\\n# use multi-part meta prompting for improved reasoning\\n"A good person to answer this question would be[EXPERT]\\\\n\\\\n" where STOPS_AT(EXPERT, ".") and STOPS_AT(EXPERT, "\\\\n")\\n\\n# process intermediate results in Python\\nexpert_name = EXPERT.rstrip(".\\\\n")\\n\\n# generate the final response by leveraging the expert\\n"For instance,{expert_name} would answer [ANSWER]" \\\\ \\n    where STOPS_AT(ANSWER, ".") \\n
\\n
","output":"
promptdown

Q: What are Large Language Models?⏎\\n\\nA good person to answer this question would be EXPERT a data scientist or a machine learning engineer.\\n\\nFor instance, (a data scientist or a machine learning engineer) would answer ANSWER this question by explaining that large language models are a type of artificial intelligence (AI) model that uses deep learning algorithms to process large amounts of natural language data.\\n

\\n
"},{"id":7,"path":"/features/examples/5-wikipedia.html","title":"🌎 Tool Augmentation","description":"

LMQL supports arbitrary Python function calls during generation, enabling seamless integration with external tools and APIs, augmenting the model's capabilities.

\\n","code":"
lmql
# define or import an external function\\nasync def wikipedia(q): ...\\n\\n# pose a question\\n"Q: From which countries did the Norse originate?\\\\n"\\n\\n# invoke 'wikipedia' function during reasoning\\n"Action: Let's search Wikipedia for the \\\\\\n term '[TERM]\\\\n" where STOPS_AT(TERM, "'")\\n\\n# seamlessly call it *during* generation\\nresult = await wikipedia(TERM)\\n"Result: {result}\\\\n"\\n\\n# generate final response using retrieved data\\n"Final Answer:[ANSWER]"\\n
\\n
","output":"
promptdown

Q: From which countries did the Norse originate?\\n\\nAction: Let's search Wikipedia for the term TERM 'Norse'\\nResult: (Norse is a demonym for Norsemen, a Medieval North Germanic ethnolinguistic group ancestral to modern Scandinavians, defined as speakers of Old Norse from about the 9th to the 13th centuries.)\\n\\nFinal Answer: ANSWER The Norse originated from Scandinavia.\\n

\\n
"},{"id":8,"path":"/features/examples/6-chat.html","title":"💬 Chatbots","description":"

Implement custom chatbots with ease, using LMQL's direct integration of interactive generation and result streaming.

\\n","code":"
lmql
# {:system} and other tags can be used to control chat-tuned models\\n"{:system} You are a marketing chatbot for the language model query language (LMQL)."\\n\\n# implement a chatbot as simple loop\\nwhile True:\\n   # integrate user input just like in a standard Python program\\n   "{:user} {await input()}"\\n   "{:assistant} [ANSWER]"\\n
\\n
","output":"
promptdown

bubble:userWhat is the best way to interact with LLMs?
\\n\\n
bubble:assistantANSWER The best way to interact with LLMs (Language Model Models) is through a query language like LMQL. LMQL allows you to easily and efficiently query large language models and retrieve the information you need. With LMQL, you can specify the input text, the output format, and the model you want to use , all in a single query. This makes it easy to integrate LLMs into your applications and workflows, and to get the most out of these powerful language models. Additionally, LMQL provides a standardized way of interacting with LLMs, which makes it easier for developers and data scientists to collaborate and share their work .
\\n

\\n
"}]`);const O={},D={class:"code-by-code"},$={class:"left"},F={class:"right"};function B(a,n){return p(),o("div",D,[s("div",$,[d(a.$slots,"code")]),s("div",F,[d(a.$slots,"output")])])}const G=m(O,[["render",B]]);const y=a=>(f("data-v-96cfe14a"),a=a(),b(),a),J={class:"examples"},z=y(()=>s("div",{style:{"margin-top":"60pt"}},null,-1)),V={class:"btn-group",role:"group","aria-label":"Basic example"},U=["onClick"],K=["innerHTML"],X=y(()=>s("h2",null,"LMQL",-1)),Z=["innerHTML"],ss=y(()=>s("h2",null,"Model Output",-1)),as=["innerHTML"],ns={__name:"LMExamples",setup(a){const n=q(r[0].id);return(w,l)=>(p(),o("div",J,[z,s("h1",null,[d(w.$slots,"title",{},void 0,!0)]),s("div",V,[(p(!0),o(_,null,j(c(r),t=>(p(),o("button",{key:t.title,class:x(["btn btn-primary",{active:n.value===t.id}]),onClick:is=>n.value=t.id},g(t.title),11,U))),128))]),s("div",{innerHTML:c(r).find(t=>t.id===n.value).description,class:"description"},null,8,K),h(G,null,{code:e(()=>[X,s("div",{innerHTML:c(r).find(t=>t.id===n.value).code},null,8,Z)]),output:e(()=>[ss,s("div",{innerHTML:c(r).find(t=>t.id===n.value).output},null,8,as)]),_:1})]))}},ts=m(ns,[["__scopeId","data-v-96cfe14a"]]);const es=s("div",{class:"banner"},[s("p",null,[i("Help shape the next major version of LMQL by filling out the "),s("a",{href:"https://forms.gle/pGvAicNpUhS1rAkK9",target:"_blank",rel:"noreferrer"},"LMQL developer survey")])],-1),ps=s("b",null,"types, templates, constraints and an optimizing runtime.",-1),os=["innerHTML"],ls=["innerHTML"],hs=JSON.parse('{"title":"LMQL is a programming language for LLM interaction.","description":"","frontmatter":{"layout":"home","title":"LMQL is a programming language for LLM interaction.","outline":false},"headers":[],"relativePath":"index.md","filePath":"index.md"}'),ds={name:"index.md"},ms=Object.assign(ds,{setup(a){return(n,w)=>(p(),o("div",null,[es,h(Q,null,{title:e(()=>[i("LMQL is a programming language for LLMs.")]),subtitle:e(()=>[i("Robust and modular LLM prompting using "),ps]),_:1}),(p(!0),o(_,null,j(c(Y),l=>(p(),o("div",{key:l.title},[h(H,{template:l.template,new:l.new},M({template:e(()=>[i(g(l.template),1)]),description:e(()=>[s("div",{innerHTML:l.description},null,8,os)]),default:e(()=>[i(g(l.title)+" ",1)]),_:2},[l.snippet?{name:"code",fn:e(()=>[s("div",{innerHTML:l.snippet},null,8,ls)]),key:"0"}:void 0]),1032,["template","new"])]))),128)),h(ts,null,{title:e(()=>[i("Explore LMQL")]),description:e(()=>[i("LMQL is a versatile tool for leveraging the full potential of LLMs. 
Here are some examples of what you can do with it:")]),_:1})]))}});export{hs as __pageData,ms as default}; +import{_ as v}from"./chunks/lmql.17cc0505.js";import{_ as m,o as p,c as o,k as s,r as d,p as f,m as b,e as L,n as x,h as q,F as _,D as j,t as g,l as c,H as h,w as e,a as i,a0 as M}from"./chunks/framework.980cae92.js";const A={},u=a=>(f("data-v-fb660782"),a=a(),b(),a),S={class:"hero"},k=u(()=>s("img",{src:v,alt:"LMQL"},null,-1)),N=u(()=>s("img",{src:v,alt:"LMQL"},null,-1)),T=u(()=>s("br",null,null,-1)),E={class:"tagline"},C=u(()=>s("div",{class:"buttons"},[s("a",{class:"primary btn",href:"/docs/"}," Get Started "),s("a",{class:"btn",href:"https://github.com/eth-sri/lmql#contribute')"}," Contribute ")],-1));function W(a,n){return p(),o("div",S,[k,s("div",null,[s("h1",null,[N,d(a.$slots,"title",{},void 0,!0),T]),s("h2",E,[d(a.$slots,"subtitle",{},void 0,!0)]),C])])}const Q=m(A,[["render",W],["__scopeId","data-v-fb660782"]]);const R={key:0,class:"badge"},I={class:"reveal"},P={__name:"LMFeature",props:["template","new"],setup(a){return(n,w)=>(p(),o("div",{class:x(["feature",a.template])},[s("div",null,[s("h1",null,[d(n.$slots,"default",{},void 0,!0),a.new?(p(),o("span",R,"NEW")):L("",!0)]),s("p",null,[d(n.$slots,"description",{},void 0,!0)])]),s("code",I,[d(n.$slots,"code",{},void 0,!0)])],2))}},H=m(P,[["__scopeId","data-v-9d5b0837"]]),Y=JSON.parse(`[{"snippet":"","description":"
lmql
@lmql.query\\ndef meaning_of_life():\\n '''lmql\\n # top-level strings are prompts\\n "Q: What is the answer to life, the \\\\\\n universe and everything?"\\n\\n # generation via (constrained) variables\\n "A: [ANSWER]" where \\\\\\n len(ANSWER) < 120 and STOPS_AT(ANSWER, ".")\\n\\n # results are directly accessible\\n print("LLM returned", ANSWER)\\n\\n # use typed variables for guaranteed \\n # output format\\n "The answer is [NUM: int]"\\n\\n # query programs are just functions \\n return NUM\\n '''\\n\\n# so from Python, you can just do this\\nmeaning_of_life() # 42\\n
\\n

\\n
\\n

Created by the SRI Lab @ ETH Zurich and contributors.

\\n
\\n
\\n Star\\n
\\n
\\n","title":null,"template":"code"},{"snippet":"
promptdown

Execution Trace

\\nQ: When was Obama born?wait200beginincontext

dateformat(respond in DD/MM/YYYY)endincontext

wait200ANSWER04/08/1961wait200fadeincontextwait200hideincontextwait200\\nQ: When was Bruno Mars born?wait200beginincontext1

dateformat(respond in DD/MM/YYYY)endincontext1

wait200ANSWER08/10/1985wait200fadeincontext1wait200hideincontext1wait200\\nQ: When was Dua Lipa born?wait200beginincontext2

dateformat(respond in DD/MM/YYYY)endincontext2

wait200ANSWER22/08/1995wait200fadeincontext2wait200hideincontext2wait200\\n\\nOut of these, who was born last?LASTDua Lipa\\n

\\n
","description":"

LMQL now supports nested queries, enabling modularized local instructions and re-use of prompt components.

\\n
\\n\\nLearn more\\n\\n","title":"Nested Queries bring Procedural Programming to Prompting","template":"side-by-side","new":0.7},{"snippet":"","description":"

LMQL automatically makes your LLM code portable across several backends. You can switch between them with a single line of code.

\\n\\n","title":"Works Across Backends","template":"middle"}]`),r=JSON.parse(`[{"id":0,"path":"/features/examples/1-packing-list.html","title":"🌴 Packing List","description":"

Prompt construction and generation is implemented via expressive Python control flow and string interpolation.

\\n","code":"
lmql
# top level strings are prompts\\n"My packing list for the trip:"\\n\\n# use loops for repeated prompts\\nfor i in range(4):\\n    # 'where' denotes hard constraints enforced by the runtime\\n    "- [THING] \\\\n" where THING in \\\\ \\n        ["Volleyball", "Sunscreen", "Bathing Suit"]\\n
\\n
","output":"
promptdown

My packing list for the trip:\\n\\n- THING Volleyball\\n- THING Bathing Suit\\n- THING Sunscreen\\n- THING Volleyball\\n

\\n
"},{"id":1,"path":"/features/examples/2-constraining.html","title":"⛓️ Constrained LLMs","description":"

LMQL's support for constrained generation enables robust interfacing, to integrate LLMs safely into your applications.Learn More →

\\n","code":"
lmql
# top-level strings are prompts\\n"Tell me a joke:\\\\n"\\n\\n# use 'where' constraints to control and restrict generation\\n"Q:[JOKE]\\\\n" where len(JOKE) < 120 and STOPS_AT(JOKE, "?")\\n\\n"A:[PUNCHLINE]\\\\n" where \\\\ \\n    STOPS_AT(PUNCHLINE, "\\\\n") and len(TOKENS(PUNCHLINE)) > 1\\n
\\n
","output":"
promptdown

Tell me a joke:\\n\\nQ: JOKE What did the fish say when it hit the wall?\\nA: PUNCHLINE Dam\\n

\\n
"},{"id":2,"path":"/features/examples/2.5-data-types.html","title":"🔢 Types and Regex","description":"

LMQL supports integer and regex constraints, enabling advanced output formatting. The results are automatically represented as the appropriate Python type, and can be manipulated as such.

\\n","code":"
lmql
# restrict generation to MM/DD format\\n"Q: It's the last day of June. What day is it?\\\\n"\\n"A: Today is [RESPONSE: r'[0-9]{2}/[0-9]{2}']\\\\n"\\n\\n# generate numbers\\n"Q: What's the month number?\\\\n"\\n"A: [ANSWER: int]"\\n\\n# results are automatically cast to int...\\ntype(ANSWER) # -> int\\n\\n# ...and can be easily manipulated\\n10 * ANSWER # -> 60\\n
\\n
","output":"
promptdown

Q: It's the last day of June. What day is it?\\nA: Today is RESPONSE 30/06\\n\\nQ: What's the month number?\\nA: ANSWER 6\\n

\\n
"},{"id":3,"path":"/features/examples/3-multi-part.html","title":"🧠 Multi-Part Prompts","description":"

LMQL's programming model supports multi-part prompt programs, enabling enhanced controls over the LLM reasoning process.

\\n","code":"
lmql
# use multi-part prompting for complicated questions\\n"Q: It was Sept. 1st, 2021 a week ago. What is the date 10 days ago in MM/DD/YYYY?"\\n"Answer Choices: (A) 08/29/2021 (B) 08/28/2021 (C) 08/29/1925 (D) 08/30/2021 (E) 05/25/2021 (F) 09/19/2021"\\n\\n# use a reasoning step to break down the problem\\n"A: Let's think step by step.\\\\n [REASONING]"\\n\\n# use a constrained variable to extract the final response\\n"Therefore, the answer is [ANSWER]" where \\\\\\n    ANSWER in ["A", "B", "C", "D", "E", "F"]\\n\\n# access results just like a normal variable\\nANSWER # "A"\\n
\\n
","output":"
promptdown

Q: It was Sept. 1st, 2021 a week ago. What is the date 10 days ago in MM/DD/YYYY?\\nAnswer Choices: (A) 08/29/2021 (B) 08/28/2021 (C) 08/29/1925 (D) 08/30/2021 (E) 05/25/2021 (F) 09/19/2021\\n\\nA: Let's think step by step.\\nREASONING Sept. 1st, 2021 was a week ago, so 10 days ago would be 8 days before that, which is August 23rd, 2021, so the answer is (A) 08/29/2021.\\n\\nTherefore, the answer is ANSWER A\\n

\\n
"},{"id":4,"path":"/features/examples/3.5-distributions.html","title":"📐 Measure Distributions","description":"

Apart from text generation, LMQL also measures model scores, allowing users to extract classification results and confidence scores.

\\n","code":"
lmql
# prompt with a data sample\\n"Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.\\\\n"\\n\\n# instruct model to do sentiment analysis\\n"Q: What is the underlying sentiment of this review and why?\\\\n"\\n\\n# generate a text-based analysis\\n"A:[ANALYSIS]\\\\n"\\n\\n# based on the analysis, measure certainty about the sentiment\\n"Based on this, the overall sentiment of the message can be considered to be[CLASSIFICATION]" distribution \\\\\\n   CLASSIFICATION in [" positive", " neutral", " negative"]\\n
\\n
","output":"
promptdown

Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.\\n\\nQ: What is the underlying sentiment of this review and why?\\n\\nA: ANALYSISPositive, because the reviewer enjoyed their stay and had positive experiences with both the activities and food.\\n\\nBased on this, the overall sentiment of the message \\ncan be considered to be CLS[CLASSIFICATION]\\n\\n\\n\\n\\n\\n\\n

\\n
\\n P(CLASSIFICATION) = \\n
\\n - positive 0.9998711120293567
\\n - neutral 0.00012790777085508993
\\n - negative 9.801997880775052e-07\\n
\\n
"},{"id":5,"path":"/features/examples/3.6-python.html","title":"🐍 Python Support","description":"

LMQL can be used directly from within Python, allowing for seamless integration with your existing codebase.

\\n","code":"
python
import lmql\\n\\n# defines an LMQL function from within Python\\n@lmql.query\\ndef say(phrase):\\n    '''lmql\\n    # we can seamlessly use 'phrase' in LMQL\\n    "Say '{phrase}': [TEST]"\\n    # return the result to the caller\\n    return TEST\\n    '''\\n\\n# call your LMQL function like any other Python function\\nprint(say("Hello World!", model="openai/gpt-3.5-turbo"))\\n
\\n
","output":"
promptdown

Say 'Hello World': TEST Hello World\\n

\\n
"},{"id":6,"path":"/features/examples/4-meta-prompting.html","title":"🌳 Meta Prompting","description":"

LMQL supports program-level decoding algorithms like beam, sample and best_k, allowing for a branching exploration of multi-step reasoning flows.

\\n","code":"
lmql
# specify a decoding algorithm (e.g. beam, sample, best_k)\\n# to enable multi-branch exploration of your program\\nbeam(n=2)\\n\\n# pose a question\\n"Q: What are Large Language Models?\\\\n\\\\n"\\n\\n# use multi-part meta prompting for improved reasoning\\n"A good person to answer this question would be[EXPERT]\\\\n\\\\n" where STOPS_AT(EXPERT, ".") and STOPS_AT(EXPERT, "\\\\n")\\n\\n# process intermediate results in Python\\nexpert_name = EXPERT.rstrip(".\\\\n")\\n\\n# generate the final response by leveraging the expert\\n"For instance,{expert_name} would answer [ANSWER]" \\\\ \\n    where STOPS_AT(ANSWER, ".") \\n
\\n
","output":"
promptdown

Q: What are Large Language Models?⏎\\n\\nA good person to answer this question would be EXPERT a data scientist or a machine learning engineer.\\n\\nFor instance, (a data scientist or a machine learning engineer) would answer ANSWER this question by explaining that large language models are a type of artificial intelligence (AI) model that uses deep learning algorithms to process large amounts of natural language data.\\n

\\n
"},{"id":7,"path":"/features/examples/5-wikipedia.html","title":"🌎 Tool Augmentation","description":"

LMQL supports arbitrary Python function calls during generation, enabling seamless integration with external tools and APIs, augmenting the model's capabilities.

\\n","code":"
lmql
# define or import an external function\\nasync def wikipedia(q): ...\\n\\n# pose a question\\n"Q: From which countries did the Norse originate?\\\\n"\\n\\n# invoke 'wikipedia' function during reasoning\\n"Action: Let's search Wikipedia for the \\\\\\n term '[TERM]\\\\n" where STOPS_AT(TERM, "'")\\n\\n# seamlessly call it *during* generation\\nresult = await wikipedia(TERM)\\n"Result: {result}\\\\n"\\n\\n# generate final response using retrieved data\\n"Final Answer:[ANSWER]"\\n
\\n
","output":"
promptdown

Q: From which countries did the Norse originate?\\n\\nAction: Let's search Wikipedia for the term TERM 'Norse'\\nResult: (Norse is a demonym for Norsemen, a Medieval North Germanic ethnolinguistic group ancestral to modern Scandinavians, defined as speakers of Old Norse from about the 9th to the 13th centuries.)\\n\\nFinal Answer: ANSWER The Norse originated from Scandinavia.\\n

\\n
"},{"id":8,"path":"/features/examples/6-chat.html","title":"💬 Chatbots","description":"

Implement custom chatbots with ease, using LMQL's direct integration of interactive generation and result streaming.

\\n","code":"
lmql
# {:system} and other tags can be used to control chat-tuned models\\n"{:system} You are a marketing chatbot for the language model query language (LMQL)."\\n\\n# implement a chatbot as simple loop\\nwhile True:\\n   # integrate user input just like in a standard Python program\\n   "{:user} {await input()}"\\n   "{:assistant} [ANSWER]"\\n
\\n
","output":"
promptdown

bubble:userWhat is the best way to interact with LLMs?
\\n\\n
bubble:assistantANSWER The best way to interact with LLMs (Language Model Models) is through a query language like LMQL. LMQL allows you to easily and efficiently query large language models and retrieve the information you need. With LMQL, you can specify the input text, the output format, and the model you want to use , all in a single query. This makes it easy to integrate LLMs into your applications and workflows, and to get the most out of these powerful language models. Additionally, LMQL provides a standardized way of interacting with LLMs, which makes it easier for developers and data scientists to collaborate and share their work .
\\n

\\n
"}]`);const O={},D={class:"code-by-code"},$={class:"left"},F={class:"right"};function B(a,n){return p(),o("div",D,[s("div",$,[d(a.$slots,"code")]),s("div",F,[d(a.$slots,"output")])])}const G=m(O,[["render",B]]);const y=a=>(f("data-v-96cfe14a"),a=a(),b(),a),J={class:"examples"},z=y(()=>s("div",{style:{"margin-top":"60pt"}},null,-1)),V={class:"btn-group",role:"group","aria-label":"Basic example"},U=["onClick"],K=["innerHTML"],X=y(()=>s("h2",null,"LMQL",-1)),Z=["innerHTML"],ss=y(()=>s("h2",null,"Model Output",-1)),as=["innerHTML"],ns={__name:"LMExamples",setup(a){const n=q(r[0].id);return(w,l)=>(p(),o("div",J,[z,s("h1",null,[d(w.$slots,"title",{},void 0,!0)]),s("div",V,[(p(!0),o(_,null,j(c(r),t=>(p(),o("button",{key:t.title,class:x(["btn btn-primary",{active:n.value===t.id}]),onClick:is=>n.value=t.id},g(t.title),11,U))),128))]),s("div",{innerHTML:c(r).find(t=>t.id===n.value).description,class:"description"},null,8,K),h(G,null,{code:e(()=>[X,s("div",{innerHTML:c(r).find(t=>t.id===n.value).code},null,8,Z)]),output:e(()=>[ss,s("div",{innerHTML:c(r).find(t=>t.id===n.value).output},null,8,as)]),_:1})]))}},ts=m(ns,[["__scopeId","data-v-96cfe14a"]]);const es=s("div",{class:"banner"},[s("p",null,[i("Help shape the next major version of LMQL by filling out the "),s("a",{href:"https://forms.gle/pGvAicNpUhS1rAkK9",target:"_blank",rel:"noreferrer"},"LMQL developer survey")])],-1),ps=s("b",null,"types, templates, constraints and an optimizing runtime.",-1),os=["innerHTML"],ls=["innerHTML"],hs=JSON.parse('{"title":"LMQL is a programming language for LLM interaction.","description":"","frontmatter":{"layout":"home","title":"LMQL is a programming language for LLM interaction.","outline":false},"headers":[],"relativePath":"index.md","filePath":"index.md"}'),ds={name:"index.md"},ms=Object.assign(ds,{setup(a){return(n,w)=>(p(),o("div",null,[es,h(Q,null,{title:e(()=>[i("LMQL is a programming language for LLMs.")]),subtitle:e(()=>[i("Robust and modular LLM prompting using "),ps]),_:1}),(p(!0),o(_,null,j(c(Y),l=>(p(),o("div",{key:l.title},[h(H,{template:l.template,new:l.new},M({template:e(()=>[i(g(l.template),1)]),description:e(()=>[s("div",{innerHTML:l.description},null,8,os)]),default:e(()=>[i(g(l.title)+" ",1)]),_:2},[l.snippet?{name:"code",fn:e(()=>[s("div",{innerHTML:l.snippet},null,8,ls)]),key:"0"}:void 0]),1032,["template","new"])]))),128)),h(ts,null,{title:e(()=>[i("Explore LMQL")]),description:e(()=>[i("LMQL is a versatile tool for leveraging the full potential of LLMs. 
Here are some examples of what you can do with it:")]),_:1})]))}});export{hs as __pageData,ms as default}; diff --git a/assets/index.md.3b2473f1.lean.js b/assets/index.md.930a925c.lean.js similarity index 88% rename from assets/index.md.3b2473f1.lean.js rename to assets/index.md.930a925c.lean.js index 6c3a917e..9e3aefc4 100644 --- a/assets/index.md.3b2473f1.lean.js +++ b/assets/index.md.930a925c.lean.js @@ -1 +1 @@ -import{_ as v}from"./chunks/lmql.17cc0505.js";import{_ as m,o as p,c as o,k as s,r as d,p as f,m as b,e as L,n as x,h as q,F as _,D as j,t as g,l as c,H as h,w as e,a as i,a0 as M}from"./chunks/framework.980cae92.js";const A={},u=a=>(f("data-v-fb660782"),a=a(),b(),a),S={class:"hero"},k=u(()=>s("img",{src:v,alt:"LMQL"},null,-1)),N=u(()=>s("img",{src:v,alt:"LMQL"},null,-1)),T=u(()=>s("br",null,null,-1)),E={class:"tagline"},C=u(()=>s("div",{class:"buttons"},[s("a",{class:"primary btn",href:"/docs/"}," Get Started "),s("a",{class:"btn",href:"https://github.com/eth-sri/lmql#contribute')"}," Contribute ")],-1));function W(a,n){return p(),o("div",S,[k,s("div",null,[s("h1",null,[N,d(a.$slots,"title",{},void 0,!0),T]),s("h2",E,[d(a.$slots,"subtitle",{},void 0,!0)]),C])])}const Q=m(A,[["render",W],["__scopeId","data-v-fb660782"]]);const R={key:0,class:"badge"},I={class:"reveal"},P={__name:"LMFeature",props:["template","new"],setup(a){return(n,w)=>(p(),o("div",{class:x(["feature",a.template])},[s("div",null,[s("h1",null,[d(n.$slots,"default",{},void 0,!0),a.new?(p(),o("span",R,"NEW")):L("",!0)]),s("p",null,[d(n.$slots,"description",{},void 0,!0)])]),s("code",I,[d(n.$slots,"code",{},void 0,!0)])],2))}},H=m(P,[["__scopeId","data-v-9d5b0837"]]),Y=JSON.parse(`[{"snippet":"","description":"
lmql
@lmql.query\\ndef meaning_of_life():\\n '''lmql\\n # top-level strings are prompts\\n "Q: What is the answer to life, the \\\\\\n universe and everything?"\\n\\n # generation via (constrained) variables\\n "A: [ANSWER]" where \\\\\\n len(ANSWER) < 120 and STOPS_AT(ANSWER, ".")\\n\\n # results are directly accessible\\n print("LLM returned", ANSWER)\\n\\n # use typed variables for guaranteed \\n # output format\\n "The answer is [NUM: int]"\\n\\n # query programs are just functions \\n return NUM\\n '''\\n\\n# so from Python, you can just do this\\nmeaning_of_life() # 42\\n
\\n

\\n
\\n

Created by the SRI Lab @ ETH Zurich and contributors.

\\n
\\n
\\n Star\\n
\\n
\\n","title":null,"template":"code"},{"snippet":"
promptdown

Execution Trace

\\nQ: When was Obama born?wait200beginincontext

dateformat(respond in DD/MM/YYYY)endincontext

wait200ANSWER04/08/1961wait200fadeincontextwait200hideincontextwait200\\nQ: When was Bruno Mars born?wait200beginincontext1

dateformat(respond in DD/MM/YYYY)endincontext1

wait200ANSWER08/10/1985wait200fadeincontext1wait200hideincontext1wait200\\nQ: When was Dua Lipa born?wait200beginincontext2

dateformat(respond in DD/MM/YYYY)endincontext2

wait200ANSWER22/08/1995wait200fadeincontext2wait200hideincontext2wait200\\n\\nOut of these, who was born last?LASTDua Lipa\\n

\\n
","description":"

LMQL now supports nested queries, enabling modularized local instructions and re-use of prompt components.

\\n
\\n\\nLearn more\\n\\n","title":"Nested Queries bring Procedural Programming to Prompting","template":"side-by-side","new":0.7},{"snippet":"","description":"

LMQL automatically makes your LLM code portable across several backends. You can switch between them with a single line of code.

\\n\\n","title":"Works Across Backends","template":"middle"}]`),r=JSON.parse(`[{"id":0,"path":"/features/examples/1-packing-list.html","title":"🌴 Packing List","description":"

Prompt construction and generation is implemented via expressive Python control flow and string interpolation.

\\n","code":"
lmql
# top level strings are prompts\\n"My packing list for the trip:"\\n\\n# use loops for repeated prompts\\nfor i in range(4):\\n    # 'where' denotes hard constraints enforced by the runtime\\n    "- [THING] \\\\n" where THING in \\\\ \\n        ["Volleyball", "Sunscreen", "Bathing Suit"]\\n
\\n
","output":"
promptdown

My packing list for the trip:\\n\\n- THING Volleyball\\n- THING Bathing Suit\\n- THING Sunscreen\\n- THING Volleyball\\n

\\n
"},{"id":1,"path":"/features/examples/2-constraining.html","title":"⛓️ Constrained LLMs","description":"

LMQL's support for constrained generation enables robust interfacing, to integrate LLMs safely into your applications.Learn More →

\\n","code":"
lmql
# top-level strings are prompts\\n"Tell me a joke:\\\\n"\\n\\n# use 'where' constraints to control and restrict generation\\n"Q:[JOKE]\\\\n" where len(JOKE) < 120 and STOPS_AT(JOKE, "?")\\n\\n"A:[PUNCHLINE]\\\\n" where \\\\ \\n    STOPS_AT(PUNCHLINE, "\\\\n") and len(TOKENS(PUNCHLINE)) > 1\\n
\\n
","output":"
promptdown

Tell me a joke:\\n\\nQ: JOKE What did the fish say when it hit the wall?\\nA: PUNCHLINE Dam\\n

\\n
"},{"id":2,"path":"/features/examples/2.5-data-types.html","title":"🔢 Types and Regex","description":"

LMQL supports integer and regex constraints, enabling advanced output formatting. The results are automatically represented as the appropriate Python type, and can be manipulated as such.

\\n","code":"
lmql
# restrict generation to MM/DD format\\n"Q: It's the last day of June. What day is it?\\\\n"\\n"A: Today is [RESPONSE: r'[0-9]{2}/[0-9]{2}']\\\\n"\\n\\n# generate numbers\\n"Q: What's the month number?\\\\n"\\n"A: [ANSWER: int]"\\n\\n# results are automatically cast to int...\\ntype(ANSWER) # -> int\\n\\n# ...and can be easily manipulated\\n10 * ANSWER # -> 60\\n
\\n
","output":"
promptdown

Q: It's the last day of June. What day is it?\\nA: Today is RESPONSE 30/06\\n\\nQ: What's the month number?\\nA: ANSWER 6\\n

\\n
"},{"id":3,"path":"/features/examples/3-multi-part.html","title":"🧠 Multi-Part Prompts","description":"

LMQL's programming model supports multi-part prompt programs, enabling enhanced controls over the LLM reasoning process.

\\n","code":"
lmql
# use multi-part prompting for complicated questions\\n"Q: It was Sept. 1st, 2021 a week ago. What is the date 10 days ago in MM/DD/YYYY?"\\n"Answer Choices: (A) 08/29/2021 (B) 08/28/2021 (C) 08/29/1925 (D) 08/30/2021 (E) 05/25/2021 (F) 09/19/2021"\\n\\n# use a reasoning step to break down the problem\\n"A: Let's think step by step.\\\\n [REASONING]"\\n\\n# use a constrained variable to extract the final response\\n"Therefore, the answer is [ANSWER]" where \\\\\\n    ANSWER in ["A", "B", "C", "D", "E", "F"]\\n\\n# access results just like a normal variable\\nANSWER # "A"\\n
\\n
","output":"
promptdown

Q: It was Sept. 1st, 2021 a week ago. What is the date 10 days ago in MM/DD/YYYY?\\nAnswer Choices: (A) 08/29/2021 (B) 08/28/2021 (C) 08/29/1925 (D) 08/30/2021 (E) 05/25/2021 (F) 09/19/2021\\n\\nA: Let's think step by step.\\nREASONING Sept. 1st, 2021 was a week ago, so 10 days ago would be 8 days before that, which is August 23rd, 2021, so the answer is (A) 08/29/2021.\\n\\nTherefore, the answer is ANSWER A\\n

\\n
"},{"id":4,"path":"/features/examples/3.5-distributions.html","title":"📐 Measure Distributions","description":"

Apart from text generation, LMQL also measures model scores, allowing users to extract classification results and confidence scores.

\\n","code":"
lmql
# prompt with a data sample\\n"Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.\\\\n"\\n\\n# instruct model to do sentiment analysis\\n"Q: What is the underlying sentiment of this review and why?\\\\n"\\n\\n# generate a text-based analysis\\n"A:[ANALYSIS]\\\\n"\\n\\n# based on the analysis, measure certainty about the sentiment\\n"Based on this, the overall sentiment of the message can be considered to be[CLASSIFICATION]" distribution \\\\\\n   CLASSIFICATION in [" positive", " neutral", " negative"]\\n
\\n
","output":"
promptdown

Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.\\n\\nQ: What is the underlying sentiment of this review and why?\\n\\nA: ANALYSISPositive, because the reviewer enjoyed their stay and had positive experiences with both the activities and food.\\n\\nBased on this, the overall sentiment of the message \\ncan be considered to be CLS[CLASSIFICATION]\\n\\n\\n\\n\\n\\n\\n

\\n
\\n P(CLASSIFICATION) = \\n
\\n - positive 0.9998711120293567
\\n - neutral 0.00012790777085508993
\\n - negative 9.801997880775052e-07\\n
\\n
"},{"id":5,"path":"/features/examples/3.6-python.html","title":"🐍 Python Support","description":"

LMQL can be used directly from within Python, allowing for seamless integration with your existing codebase.

\\n","code":"
python
import lmql\\n\\n# defines an LMQL function from within Python\\n@lmql.query\\ndef say(phrase):\\n    '''lmql\\n    # we can seamlessly use 'phrase' in LMQL\\n    "Say '{phrase}': [TEST]"\\n    # return the result to the caller\\n    return TEST\\n    '''\\n\\n# call your LMQL function like any other Python function\\nprint(say("Hello World!", model="openai/gpt-3.5-turbo"))\\n
\\n
","output":"
promptdown

Say 'Hello World': TEST Hello World\\n

\\n
"},{"id":6,"path":"/features/examples/4-meta-prompting.html","title":"🌳 Meta Prompting","description":"

LMQL supports program-level decoding algorithms like beam, sample and best_k, allowing for a branching exploration of multi-step reasoning flows.

\\n","code":"
lmql
# specify a decoding algorithm (e.g. beam, sample, best_k)\\n# to enable multi-branch exploration of your program\\nbeam(n=2)\\n\\n# pose a question\\n"Q: What are Large Language Models?\\\\n\\\\n"\\n\\n# use multi-part meta prompting for improved reasoning\\n"A good person to answer this question would be[EXPERT]\\\\n\\\\n" where STOPS_AT(EXPERT, ".") and STOPS_AT(EXPERT, "\\\\n")\\n\\n# process intermediate results in Python\\nexpert_name = EXPERT.rstrip(".\\\\n")\\n\\n# generate the final response by leveraging the expert\\n"For instance,{expert_name} would answer [ANSWER]" \\\\ \\n    where STOPS_AT(ANSWER, ".") \\n
\\n
","output":"
promptdown

Q: What are Large Language Models?⏎\\n\\nA good person to answer this question would be EXPERT a data scientist or a machine learning engineer.\\n\\nFor instance, (a data scientist or a machine learning engineer) would answer ANSWER this question by explaining that large language models are a type of artificial intelligence (AI) model that uses deep learning algorithms to process large amounts of natural language data.\\n

\\n
"},{"id":7,"path":"/features/examples/5-wikipedia.html","title":"🌎 Tool Augmentation","description":"

LMQL supports arbitrary Python function calls during generation, enabling seamless integration with external tools and APIs, augmenting the model's capabilities.

\\n","code":"
lmql
# define or import an external function\\nasync def wikipedia(q): ...\\n\\n# pose a question\\n"Q: From which countries did the Norse originate?\\\\n"\\n\\n# invoke 'wikipedia' function during reasoning\\n"Action: Let's search Wikipedia for the \\\\\\n term '[TERM]\\\\n" where STOPS_AT(TERM, "'")\\n\\n# seamlessly call it *during* generation\\nresult = await wikipedia(TERM)\\n"Result: {result}\\\\n"\\n\\n# generate final response using retrieved data\\n"Final Answer:[ANSWER]"\\n
\\n
","output":"
promptdown

Q: From which countries did the Norse originate?\\n\\nAction: Let's search Wikipedia for the term TERM 'Norse'\\nResult: (Norse is a demonym for Norsemen, a Medieval North Germanic ethnolinguistic group ancestral to modern Scandinavians, defined as speakers of Old Norse from about the 9th to the 13th centuries.)\\n\\nFinal Answer: ANSWER The Norse originated from Scandinavia.\\n

\\n
"},{"id":8,"path":"/features/examples/6-chat.html","title":"💬 Chatbots","description":"

Implement custom chatbots with ease, using LMQL's direct integration of interactive generation and result streaming.

\\n","code":"
lmql
# {:system} and other tags can be used to control chat-tuned models\\n"{:system} You are a marketing chatbot for the language model query language (LMQL)."\\n\\n# implement a chatbot as simple loop\\nwhile True:\\n   # integrate user input just like in a standard Python program\\n   "{:user} {await input()}"\\n   "{:assistant} [ANSWER]"\\n
\\n
","output":"
promptdown

bubble:userWhat is the best way to interact with LLMs?
\\n\\n
bubble:assistantANSWER The best way to interact with LLMs (Language Model Models) is through a query language like LMQL. LMQL allows you to easily and efficiently query large language models and retrieve the information you need. With LMQL, you can specify the input text, the output format, and the model you want to use , all in a single query. This makes it easy to integrate LLMs into your applications and workflows, and to get the most out of these powerful language models. Additionally, LMQL provides a standardized way of interacting with LLMs, which makes it easier for developers and data scientists to collaborate and share their work .
\\n

\\n
"}]`);const O={},D={class:"code-by-code"},$={class:"left"},F={class:"right"};function B(a,n){return p(),o("div",D,[s("div",$,[d(a.$slots,"code")]),s("div",F,[d(a.$slots,"output")])])}const G=m(O,[["render",B]]);const y=a=>(f("data-v-96cfe14a"),a=a(),b(),a),J={class:"examples"},z=y(()=>s("div",{style:{"margin-top":"60pt"}},null,-1)),V={class:"btn-group",role:"group","aria-label":"Basic example"},U=["onClick"],K=["innerHTML"],X=y(()=>s("h2",null,"LMQL",-1)),Z=["innerHTML"],ss=y(()=>s("h2",null,"Model Output",-1)),as=["innerHTML"],ns={__name:"LMExamples",setup(a){const n=q(r[0].id);return(w,l)=>(p(),o("div",J,[z,s("h1",null,[d(w.$slots,"title",{},void 0,!0)]),s("div",V,[(p(!0),o(_,null,j(c(r),t=>(p(),o("button",{key:t.title,class:x(["btn btn-primary",{active:n.value===t.id}]),onClick:is=>n.value=t.id},g(t.title),11,U))),128))]),s("div",{innerHTML:c(r).find(t=>t.id===n.value).description,class:"description"},null,8,K),h(G,null,{code:e(()=>[X,s("div",{innerHTML:c(r).find(t=>t.id===n.value).code},null,8,Z)]),output:e(()=>[ss,s("div",{innerHTML:c(r).find(t=>t.id===n.value).output},null,8,as)]),_:1})]))}},ts=m(ns,[["__scopeId","data-v-96cfe14a"]]);const es=s("div",{class:"banner"},[s("p",null,[i("Help shape the next major version of LMQL by filling out the "),s("a",{href:"https://forms.gle/pGvAicNpUhS1rAkK9",target:"_blank",rel:"noreferrer"},"LMQL developer survey")])],-1),ps=s("b",null,"types, templates, constraints and an optimizing runtime.",-1),os=["innerHTML"],ls=["innerHTML"],hs=JSON.parse('{"title":"LMQL is a programming language for LLM interaction.","description":"","frontmatter":{"layout":"home","title":"LMQL is a programming language for LLM interaction.","outline":false},"headers":[],"relativePath":"index.md","filePath":"index.md"}'),ds={name:"index.md"},ms=Object.assign(ds,{setup(a){return(n,w)=>(p(),o("div",null,[es,h(Q,null,{title:e(()=>[i("LMQL is a programming language for LLMs.")]),subtitle:e(()=>[i("Robust and modular LLM prompting using "),ps]),_:1}),(p(!0),o(_,null,j(c(Y),l=>(p(),o("div",{key:l.title},[h(H,{template:l.template,new:l.new},M({template:e(()=>[i(g(l.template),1)]),description:e(()=>[s("div",{innerHTML:l.description},null,8,os)]),default:e(()=>[i(g(l.title)+" ",1)]),_:2},[l.snippet?{name:"code",fn:e(()=>[s("div",{innerHTML:l.snippet},null,8,ls)]),key:"0"}:void 0]),1032,["template","new"])]))),128)),h(ts,null,{title:e(()=>[i("Explore LMQL")]),description:e(()=>[i("LMQL is a versatile tool for leveraging the full potential of LLMs. 
Here are some examples of what you can do with it:")]),_:1})]))}});export{hs as __pageData,ms as default}; +import{_ as v}from"./chunks/lmql.17cc0505.js";import{_ as m,o as p,c as o,k as s,r as d,p as f,m as b,e as L,n as x,h as q,F as _,D as j,t as g,l as c,H as h,w as e,a as i,a0 as M}from"./chunks/framework.980cae92.js";const A={},u=a=>(f("data-v-fb660782"),a=a(),b(),a),S={class:"hero"},k=u(()=>s("img",{src:v,alt:"LMQL"},null,-1)),N=u(()=>s("img",{src:v,alt:"LMQL"},null,-1)),T=u(()=>s("br",null,null,-1)),E={class:"tagline"},C=u(()=>s("div",{class:"buttons"},[s("a",{class:"primary btn",href:"/docs/"}," Get Started "),s("a",{class:"btn",href:"https://github.com/eth-sri/lmql#contribute')"}," Contribute ")],-1));function W(a,n){return p(),o("div",S,[k,s("div",null,[s("h1",null,[N,d(a.$slots,"title",{},void 0,!0),T]),s("h2",E,[d(a.$slots,"subtitle",{},void 0,!0)]),C])])}const Q=m(A,[["render",W],["__scopeId","data-v-fb660782"]]);const R={key:0,class:"badge"},I={class:"reveal"},P={__name:"LMFeature",props:["template","new"],setup(a){return(n,w)=>(p(),o("div",{class:x(["feature",a.template])},[s("div",null,[s("h1",null,[d(n.$slots,"default",{},void 0,!0),a.new?(p(),o("span",R,"NEW")):L("",!0)]),s("p",null,[d(n.$slots,"description",{},void 0,!0)])]),s("code",I,[d(n.$slots,"code",{},void 0,!0)])],2))}},H=m(P,[["__scopeId","data-v-9d5b0837"]]),Y=JSON.parse(`[{"snippet":"","description":"
lmql
@lmql.query\\ndef meaning_of_life():\\n '''lmql\\n # top-level strings are prompts\\n "Q: What is the answer to life, the \\\\\\n universe and everything?"\\n\\n # generation via (constrained) variables\\n "A: [ANSWER]" where \\\\\\n len(ANSWER) < 120 and STOPS_AT(ANSWER, ".")\\n\\n # results are directly accessible\\n print("LLM returned", ANSWER)\\n\\n # use typed variables for guaranteed \\n # output format\\n "The answer is [NUM: int]"\\n\\n # query programs are just functions \\n return NUM\\n '''\\n\\n# so from Python, you can just do this\\nmeaning_of_life() # 42\\n
\\n

\\n
\\n

Created by the SRI Lab @ ETH Zurich and contributors.

\\n
\\n
\\n Star\\n
\\n
\\n","title":null,"template":"code"},{"snippet":"
promptdown

Execution Trace

\\nQ: When was Obama born?wait200beginincontext

dateformat(respond in DD/MM/YYYY)endincontext

wait200ANSWER04/08/1961wait200fadeincontextwait200hideincontextwait200\\nQ: When was Bruno Mars born?wait200beginincontext1

dateformat(respond in DD/MM/YYYY)endincontext1

wait200ANSWER08/10/1985wait200fadeincontext1wait200hideincontext1wait200\\nQ: When was Dua Lipa born?wait200beginincontext2

dateformat(respond in DD/MM/YYYY)endincontext2

wait200ANSWER22/08/1995wait200fadeincontext2wait200hideincontext2wait200\\n\\nOut of these, who was born last?LASTDua Lipa\\n

\\n
","description":"

LMQL now supports nested queries, enabling modularized local instructions and re-use of prompt components.

\\n
\\n\\nLearn more\\n\\n","title":"Nested Queries bring Procedural Programming to Prompting","template":"side-by-side","new":0.7},{"snippet":"","description":"

LMQL automatically makes your LLM code portable across several backends. You can switch between them with a single line of code.

\\n\\n","title":"Works Across Backends","template":"middle"}]`),r=JSON.parse(`[{"id":0,"path":"/features/examples/1-packing-list.html","title":"🌴 Packing List","description":"

Prompt construction and generation is implemented via expressive Python control flow and string interpolation.

\\n","code":"
lmql
# top level strings are prompts\\n"My packing list for the trip:"\\n\\n# use loops for repeated prompts\\nfor i in range(4):\\n    # 'where' denotes hard constraints enforced by the runtime\\n    "- [THING] \\\\n" where THING in \\\\ \\n        ["Volleyball", "Sunscreen", "Bathing Suit"]\\n
\\n
","output":"
promptdown

My packing list for the trip:\\n\\n- THING Volleyball\\n- THING Bathing Suit\\n- THING Sunscreen\\n- THING Volleyball\\n

\\n
"},{"id":1,"path":"/features/examples/2-constraining.html","title":"⛓️ Constrained LLMs","description":"

LMQL's support for constrained generation enables robust interfacing, to integrate LLMs safely into your applications.Learn More →

\\n","code":"
lmql
# top-level strings are prompts\\n"Tell me a joke:\\\\n"\\n\\n# use 'where' constraints to control and restrict generation\\n"Q:[JOKE]\\\\n" where len(JOKE) < 120 and STOPS_AT(JOKE, "?")\\n\\n"A:[PUNCHLINE]\\\\n" where \\\\ \\n    STOPS_AT(PUNCHLINE, "\\\\n") and len(TOKENS(PUNCHLINE)) > 1\\n
\\n
","output":"
promptdown

Tell me a joke:\\n\\nQ: JOKE What did the fish say when it hit the wall?\\nA: PUNCHLINE Dam\\n

\\n
"},{"id":2,"path":"/features/examples/2.5-data-types.html","title":"🔢 Types and Regex","description":"

LMQL supports integer and regex constraints, enabling advanced output formatting. The results are automatically represented as the appropriate Python type, and can be manipulated as such.

\\n","code":"
lmql
# restrict generation to MM/DD format\\n"Q: It's the last day of June. What day is it?\\\\n"\\n"A: Today is [RESPONSE: r'[0-9]{2}/[0-9]{2}']\\\\n"\\n\\n# generate numbers\\n"Q: What's the month number?\\\\n"\\n"A: [ANSWER: int]"\\n\\n# results are automatically cast to int...\\ntype(ANSWER) # -> int\\n\\n# ...and can be easily manipulated\\n10 * ANSWER # -> 60\\n
\\n
","output":"
promptdown

Q: It's the last day of June. What day is it?\\nA: Today is RESPONSE 30/06\\n\\nQ: What's the month number?\\nA: ANSWER 6\\n

\\n
"},{"id":3,"path":"/features/examples/3-multi-part.html","title":"🧠 Multi-Part Prompts","description":"

LMQL's programming model supports multi-part prompt programs, enabling enhanced controls over the LLM reasoning process.

\\n","code":"
lmql
# use multi-part prompting for complicated questions\\n"Q: It was Sept. 1st, 2021 a week ago. What is the date 10 days ago in MM/DD/YYYY?"\\n"Answer Choices: (A) 08/29/2021 (B) 08/28/2021 (C) 08/29/1925 (D) 08/30/2021 (E) 05/25/2021 (F) 09/19/2021"\\n\\n# use a reasoning step to break down the problem\\n"A: Let's think step by step.\\\\n [REASONING]"\\n\\n# use a constrained variable to extract the final response\\n"Therefore, the answer is [ANSWER]" where \\\\\\n    ANSWER in ["A", "B", "C", "D", "E", "F"]\\n\\n# access results just like a normal variable\\nANSWER # "A"\\n
\\n
","output":"
promptdown

Q: It was Sept. 1st, 2021 a week ago. What is the date 10 days ago in MM/DD/YYYY?\\nAnswer Choices: (A) 08/29/2021 (B) 08/28/2021 (C) 08/29/1925 (D) 08/30/2021 (E) 05/25/2021 (F) 09/19/2021\\n\\nA: Let's think step by step.\\nREASONING Sept. 1st, 2021 was a week ago, so 10 days ago would be 8 days before that, which is August 23rd, 2021, so the answer is (A) 08/29/2021.\\n\\nTherefore, the answer is ANSWER A\\n

\\n
"},{"id":4,"path":"/features/examples/3.5-distributions.html","title":"📐 Measure Distributions","description":"

Apart from text generation, LMQL also measures model scores, allowing users to extract classification results and confidence scores.

\\n","code":"
lmql
# prompt with a data sample\\n"Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.\\\\n"\\n\\n# instruct model to do sentiment analysis\\n"Q: What is the underlying sentiment of this review and why?\\\\n"\\n\\n# generate a text-based analysis\\n"A:[ANALYSIS]\\\\n"\\n\\n# based on the analysis, measure certainty about the sentiment\\n"Based on this, the overall sentiment of the message can be considered to be[CLASSIFICATION]" distribution \\\\\\n   CLASSIFICATION in [" positive", " neutral", " negative"]\\n
\\n
","output":"
promptdown

Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.\\n\\nQ: What is the underlying sentiment of this review and why?\\n\\nA: ANALYSISPositive, because the reviewer enjoyed their stay and had positive experiences with both the activities and food.\\n\\nBased on this, the overall sentiment of the message \\ncan be considered to be CLS[CLASSIFICATION]\\n\\n\\n\\n\\n\\n\\n

\\n
\\n P(CLASSIFICATION) = \\n
\\n - positive 0.9998711120293567
\\n - neutral 0.00012790777085508993
\\n - negative 9.801997880775052e-07\\n
\\n
"},{"id":5,"path":"/features/examples/3.6-python.html","title":"🐍 Python Support","description":"

LMQL can be used directly from within Python, allowing for seamless integration with your existing codebase.

\\n","code":"
python
import lmql\\n\\n# defines an LMQL function from within Python\\n@lmql.query\\ndef say(phrase):\\n    '''lmql\\n    # we can seamlessly use 'phrase' in LMQL\\n    "Say '{phrase}': [TEST]"\\n    # return the result to the caller\\n    return TEST\\n    '''\\n\\n# call your LMQL function like any other Python function\\nprint(say("Hello World!", model="openai/gpt-3.5-turbo"))\\n
\\n
","output":"
promptdown

Say 'Hello World': TEST Hello World\\n

\\n
"},{"id":6,"path":"/features/examples/4-meta-prompting.html","title":"🌳 Meta Prompting","description":"

LMQL supports program-level decoding algorithms like beam, sample and best_k, allowing for a branching exploration of multi-step reasoning flows.

\\n","code":"
lmql
# specify a decoding algorithm (e.g. beam, sample, best_k)\\n# to enable multi-branch exploration of your program\\nbeam(n=2)\\n\\n# pose a question\\n"Q: What are Large Language Models?\\\\n\\\\n"\\n\\n# use multi-part meta prompting for improved reasoning\\n"A good person to answer this question would be[EXPERT]\\\\n\\\\n" where STOPS_AT(EXPERT, ".") and STOPS_AT(EXPERT, "\\\\n")\\n\\n# process intermediate results in Python\\nexpert_name = EXPERT.rstrip(".\\\\n")\\n\\n# generate the final response by leveraging the expert\\n"For instance,{expert_name} would answer [ANSWER]" \\\\ \\n    where STOPS_AT(ANSWER, ".") \\n
\\n
","output":"
promptdown

Q: What are Large Language Models?⏎\\n\\nA good person to answer this question would be EXPERT a data scientist or a machine learning engineer.\\n\\nFor instance, (a data scientist or a machine learning engineer) would answer ANSWER this question by explaining that large language models are a type of artificial intelligence (AI) model that uses deep learning algorithms to process large amounts of natural language data.\\n

\\n
"},{"id":7,"path":"/features/examples/5-wikipedia.html","title":"🌎 Tool Augmentation","description":"

LMQL supports arbitrary Python function calls during generation, enabling seamless integration with external tools and APIs, augmenting the model's capabilities.

\\n","code":"
lmql
# define or import an external function\\nasync def wikipedia(q): ...\\n\\n# pose a question\\n"Q: From which countries did the Norse originate?\\\\n"\\n\\n# invoke 'wikipedia' function during reasoning\\n"Action: Let's search Wikipedia for the \\\\\\n term '[TERM]\\\\n" where STOPS_AT(TERM, "'")\\n\\n# seamlessly call it *during* generation\\nresult = await wikipedia(TERM)\\n"Result: {result}\\\\n"\\n\\n# generate final response using retrieved data\\n"Final Answer:[ANSWER]"\\n
\\n
","output":"
promptdown

Q: From which countries did the Norse originate?\\n\\nAction: Let's search Wikipedia for the term TERM 'Norse'\\nResult: (Norse is a demonym for Norsemen, a Medieval North Germanic ethnolinguistic group ancestral to modern Scandinavians, defined as speakers of Old Norse from about the 9th to the 13th centuries.)\\n\\nFinal Answer: ANSWER The Norse originated from Scandinavia.\\n

\\n
"},{"id":8,"path":"/features/examples/6-chat.html","title":"💬 Chatbots","description":"

Implement custom chatbots with ease, using LMQL's direct integration of interactive generation and result streaming.

\\n","code":"
lmql
# {:system} and other tags can be used to control chat-tuned models\\n"{:system} You are a marketing chatbot for the language model query language (LMQL)."\\n\\n# implement a chatbot as simple loop\\nwhile True:\\n   # integrate user input just like in a standard Python program\\n   "{:user} {await input()}"\\n   "{:assistant} [ANSWER]"\\n
\\n
","output":"
promptdown

bubble:userWhat is the best way to interact with LLMs?
\\n\\n
bubble:assistantANSWER The best way to interact with LLMs (Language Model Models) is through a query language like LMQL. LMQL allows you to easily and efficiently query large language models and retrieve the information you need. With LMQL, you can specify the input text, the output format, and the model you want to use , all in a single query. This makes it easy to integrate LLMs into your applications and workflows, and to get the most out of these powerful language models. Additionally, LMQL provides a standardized way of interacting with LLMs, which makes it easier for developers and data scientists to collaborate and share their work .
\\n

\\n
"}]`);const O={},D={class:"code-by-code"},$={class:"left"},F={class:"right"};function B(a,n){return p(),o("div",D,[s("div",$,[d(a.$slots,"code")]),s("div",F,[d(a.$slots,"output")])])}const G=m(O,[["render",B]]);const y=a=>(f("data-v-96cfe14a"),a=a(),b(),a),J={class:"examples"},z=y(()=>s("div",{style:{"margin-top":"60pt"}},null,-1)),V={class:"btn-group",role:"group","aria-label":"Basic example"},U=["onClick"],K=["innerHTML"],X=y(()=>s("h2",null,"LMQL",-1)),Z=["innerHTML"],ss=y(()=>s("h2",null,"Model Output",-1)),as=["innerHTML"],ns={__name:"LMExamples",setup(a){const n=q(r[0].id);return(w,l)=>(p(),o("div",J,[z,s("h1",null,[d(w.$slots,"title",{},void 0,!0)]),s("div",V,[(p(!0),o(_,null,j(c(r),t=>(p(),o("button",{key:t.title,class:x(["btn btn-primary",{active:n.value===t.id}]),onClick:is=>n.value=t.id},g(t.title),11,U))),128))]),s("div",{innerHTML:c(r).find(t=>t.id===n.value).description,class:"description"},null,8,K),h(G,null,{code:e(()=>[X,s("div",{innerHTML:c(r).find(t=>t.id===n.value).code},null,8,Z)]),output:e(()=>[ss,s("div",{innerHTML:c(r).find(t=>t.id===n.value).output},null,8,as)]),_:1})]))}},ts=m(ns,[["__scopeId","data-v-96cfe14a"]]);const es=s("div",{class:"banner"},[s("p",null,[i("Help shape the next major version of LMQL by filling out the "),s("a",{href:"https://forms.gle/pGvAicNpUhS1rAkK9",target:"_blank",rel:"noreferrer"},"LMQL developer survey")])],-1),ps=s("b",null,"types, templates, constraints and an optimizing runtime.",-1),os=["innerHTML"],ls=["innerHTML"],hs=JSON.parse('{"title":"LMQL is a programming language for LLM interaction.","description":"","frontmatter":{"layout":"home","title":"LMQL is a programming language for LLM interaction.","outline":false},"headers":[],"relativePath":"index.md","filePath":"index.md"}'),ds={name:"index.md"},ms=Object.assign(ds,{setup(a){return(n,w)=>(p(),o("div",null,[es,h(Q,null,{title:e(()=>[i("LMQL is a programming language for LLMs.")]),subtitle:e(()=>[i("Robust and modular LLM prompting using "),ps]),_:1}),(p(!0),o(_,null,j(c(Y),l=>(p(),o("div",{key:l.title},[h(H,{template:l.template,new:l.new},M({template:e(()=>[i(g(l.template),1)]),description:e(()=>[s("div",{innerHTML:l.description},null,8,os)]),default:e(()=>[i(g(l.title)+" ",1)]),_:2},[l.snippet?{name:"code",fn:e(()=>[s("div",{innerHTML:l.snippet},null,8,ls)]),key:"0"}:void 0]),1032,["template","new"])]))),128)),h(ts,null,{title:e(()=>[i("Explore LMQL")]),description:e(()=>[i("LMQL is a versatile tool for leveraging the full potential of LLMs. Here are some examples of what you can do with it:")]),_:1})]))}});export{hs as __pageData,ms as default}; diff --git a/assets/research_index.md.2696fbba.js b/assets/research_index.md.2696fbba.js new file mode 100644 index 00000000..086e71d3 --- /dev/null +++ b/assets/research_index.md.2696fbba.js @@ -0,0 +1 @@ +import{_ as a,o as e,c as t,Q as r}from"./chunks/framework.980cae92.js";const m=JSON.parse('{"title":"Research","description":"","frontmatter":{"aside":false},"headers":[],"relativePath":"research/index.md","filePath":"research/index.md"}'),n={name:"research/index.md"},o=r('

Research

The core publications around LMQL and its implementation.

Prompt Sketching for Large Language Models

arXiv:2311.04954 [cs.CL]

SRIlab @ ETH Zürich, Switzerland

Read the full paper

Many recent prompting strategies for large language models (LLMs) query the model multiple times sequentially – first to produce intermediate results and then the final answer. However, using these methods, both decoder and model are unaware of potential follow-up prompts, leading to disconnected and undesirably wordy intermediate responses. In this work, we address this issue by proposing prompt sketching, a new prompting paradigm in which an LLM does not only respond by completing a prompt, but by predicting values for multiple variables in a template. This way, sketching grants users more control over the generation process, e.g., by providing a reasoning framework via intermediate instructions, leading to better overall results. The key idea enabling sketching with existing, autoregressive models is to adapt the decoding procedure to also score follow-up instructions during text generation, thus optimizing overall template likelihood in inference. Our experiments show that in a zero-shot setting, prompt sketching outperforms existing, sequential prompting schemes such as direct asking or chain-of-thought on 7 out of 8 LLM benchmarking tasks, including state tracking, arithmetic reasoning, and general question answering. To facilitate future use, we release a number of generic, yet effective sketches applicable to many tasks, and an open source library called dclib, powering our sketch-aware decoders.

Large Language Models are Zero-Shot Multi-Tool Users

Knowledge and Logical Reasoning Workshop - ICML 2023, Honolulu, Hawaii

SRIlab @ ETH Zürich, Switzerland

Read the full paper

We introduce LMQL Actions, a framework and programming environment to facilitate the implementation of tool-augmented language models (LMs). Concretely, we augment LMs with the ability to call actions (arbitrary Python functions), and experiment with different ways of tool discovery and invocation. We find that, while previous works heavily rely on few-shot prompting to teach tool use, a zero-shot, instruction-only approach is enough to achieve competitive performance. At the same time, LMQL Actions' zero-shot approach also offers a much simpler programming interface, not requiring any involved demonstrations. Building on this, we show how LMQL Actions enables LLMs to automatically discover and combine multiple tools to solve complex tasks. Overall, we find that inline tool use, as enabled by LMQL Actions, outperforms existing tool augmentation approaches, both in arithmetic reasoning tasks and text-based question answering.

LMQL Chat: Scripted Chatbot Development

Neural Conversational AI Workshop, TEACH - ICML 2023, Honolulu, Hawaii

SRIlab @ ETH Zürich, Switzerland

Read the full paper

We introduce LMQL Chat, a powerful open-source framework for building interactive systems on top of large language models, making it easy to create conversational agents with features like tool usage, internal reflection or safety constraints.

Prompting Is Programming: A Query Language For Large Language Models

44th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2023), Orlando, Florida

SRIlab @ ETH Zürich, Switzerland

Read the full paper

Large language models have demonstrated outstanding performance on a wide range of tasks such as question answering and code generation. On a high level, given an input, a language model can be used to automatically complete the sequence in a statistically-likely way. Based on this, users prompt these models with language instructions or examples, to implement a variety of downstream tasks. Advanced prompting methods can even imply interaction between the language model, a user, and external tools such as calculators. However, to obtain state-of-the-art performance or adapt language models for specific tasks, complex task- and model-specific programs have to be implemented, which may still require ad-hoc interaction.

Based on this, we present the novel idea of Language Model Programming (LMP). LMP generalizes language model prompting from pure text prompts to an intuitive combination of text prompting and scripting. Additionally, LMP allows constraints to be specified over the language model output. This enables easy adaption to many tasks, while abstracting language model internals and providing high-level semantics.

To enable LMP, we implement LMQL (short for Language Model Query Language), which leverages the constraints and control flow from an LMP prompt to generate an efficient inference procedure that minimizes the number of expensive calls to the underlying language model.

We show that LMQL can capture a wide range of state-of-the-art prompting methods in an intuitive way, especially facilitating interactive flows that are challenging to implement with existing high-level APIs. Our evaluation shows that we retain or increase the accuracy on several downstream tasks, while also significantly reducing the required amount of computation or cost in the case of pay-to-use APIs (26-85% cost savings).

',6),i=[o];function s(l,h,d,p,c,f){return e(),t("div",null,i)}const u=a(n,[["render",s],["__scopeId","data-v-34af5329"]]);export{m as __pageData,u as default}; diff --git a/assets/research_index.md.2696fbba.lean.js b/assets/research_index.md.2696fbba.lean.js new file mode 100644 index 00000000..b021170b --- /dev/null +++ b/assets/research_index.md.2696fbba.lean.js @@ -0,0 +1 @@ +import{_ as a,o as e,c as t,Q as r}from"./chunks/framework.980cae92.js";const m=JSON.parse('{"title":"Research","description":"","frontmatter":{"aside":false},"headers":[],"relativePath":"research/index.md","filePath":"research/index.md"}'),n={name:"research/index.md"},o=r("",6),i=[o];function s(l,h,d,p,c,f){return e(),t("div",null,i)}const u=a(n,[["render",s],["__scopeId","data-v-34af5329"]]);export{m as __pageData,u as default}; diff --git a/assets/research_index.md.c26a598e.js b/assets/research_index.md.c26a598e.js deleted file mode 100644 index 8de96a09..00000000 --- a/assets/research_index.md.c26a598e.js +++ /dev/null @@ -1 +0,0 @@ -import{_ as a,o as e,c as t,Q as r}from"./chunks/framework.980cae92.js";const u=JSON.parse('{"title":"Research","description":"","frontmatter":{"aside":false},"headers":[],"relativePath":"research/index.md","filePath":"research/index.md"}'),n={name:"research/index.md"},o=r('

Research

The core publications around LMQL and its implementation.

Prompting Is Programming: A Query Language For Large Language Models

44th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2023), Orlando, Florida

SRIlab @ ETH Zürich, Switzerland

Read the full paper

Large language models have demonstrated outstanding performance on a wide range of tasks such as question answering and code generation. On a high level, given an input, a language model can be used to automatically complete the sequence in a statistically-likely way. Based on this, users prompt these models with language instructions or examples, to implement a variety of downstream tasks. Advanced prompting methods can even imply interaction between the language model, a user, and external tools such as calculators. However, to obtain state-of-the-art performance or adapt language models for specific tasks, complex task- and model-specific programs have to be implemented, which may still require ad-hoc interaction.

Based on this, we present the novel idea of Language Model Programming (LMP). LMP generalizes language model prompting from pure text prompts to an intuitive combination of text prompting and scripting. Additionally, LMP allows constraints to be specified over the language model output. This enables easy adaption to many tasks, while abstracting language model internals and providing high-level semantics.

To enable LMP, we implement LMQL (short for Language Model Query Language), which leverages the constraints and control flow from an LMP prompt to generate an efficient inference procedure that minimizes the number of expensive calls to the underlying language model.

We show that LMQL can capture a wide range of state-of-the-art prompting methods in an intuitive way, especially facilitating interactive flows that are challenging to implement with existing high-level APIs. Our evaluation shows that we retain or increase the accuracy on several downstream tasks, while also significantly reducing the required amount of computation or cost in the case of pay-to-use APIs (26-85% cost savings).

',3),i=[o];function s(d,l,c,g,p,h){return e(),t("div",null,i)}const b=a(n,[["render",s],["__scopeId","data-v-adbb595c"]]);export{u as __pageData,b as default}; diff --git a/assets/research_index.md.c26a598e.lean.js b/assets/research_index.md.c26a598e.lean.js deleted file mode 100644 index ed5c5fb8..00000000 --- a/assets/research_index.md.c26a598e.lean.js +++ /dev/null @@ -1 +0,0 @@ -import{_ as a,o as e,c as t,Q as r}from"./chunks/framework.980cae92.js";const u=JSON.parse('{"title":"Research","description":"","frontmatter":{"aside":false},"headers":[],"relativePath":"research/index.md","filePath":"research/index.md"}'),n={name:"research/index.md"},o=r("",3),i=[o];function s(d,l,c,g,p,h){return e(),t("div",null,i)}const b=a(n,[["render",s],["__scopeId","data-v-adbb595c"]]);export{u as __pageData,b as default}; diff --git a/assets/style.d660084a.css b/assets/style.aa6b8dc6.css similarity index 87% rename from assets/style.d660084a.css rename to assets/style.aa6b8dc6.css index c717f247..a4108e52 100644 --- a/assets/style.d660084a.css +++ b/assets/style.aa6b8dc6.css @@ -1 +1 @@ -@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:normal;font-named-instance:"Regular";src:url(/assets/inter-roman-cyrillic.5f2c6c8c.woff2) format("woff2");unicode-range:U+0301,U+0400-045F,U+0490-0491,U+04B0-04B1,U+2116}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:normal;font-named-instance:"Regular";src:url(/assets/inter-roman-cyrillic-ext.e75737ce.woff2) format("woff2");unicode-range:U+0460-052F,U+1C80-1C88,U+20B4,U+2DE0-2DFF,U+A640-A69F,U+FE2E-FE2F}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:normal;font-named-instance:"Regular";src:url(/assets/inter-roman-greek.d5a6d92a.woff2) format("woff2");unicode-range:U+0370-03FF}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:normal;font-named-instance:"Regular";src:url(/assets/inter-roman-greek-ext.ab0619bc.woff2) format("woff2");unicode-range:U+1F00-1FFF}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:normal;font-named-instance:"Regular";src:url(/assets/inter-roman-latin.2ed14f66.woff2) format("woff2");unicode-range:U+0000-00FF,U+0131,U+0152-0153,U+02BB-02BC,U+02C6,U+02DA,U+02DC,U+2000-206F,U+2074,U+20AC,U+2122,U+2191,U+2193,U+2212,U+2215,U+FEFF,U+FFFD}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:normal;font-named-instance:"Regular";src:url(/assets/inter-roman-latin-ext.0030eebd.woff2) format("woff2");unicode-range:U+0100-024F,U+0259,U+1E00-1EFF,U+2020,U+20A0-20AB,U+20AD-20CF,U+2113,U+2C60-2C7F,U+A720-A7FF}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:normal;font-named-instance:"Regular";src:url(/assets/inter-roman-vietnamese.14ce25a6.woff2) format("woff2");unicode-range:U+0102-0103,U+0110-0111,U+0128-0129,U+0168-0169,U+01A0-01A1,U+01AF-01B0,U+1EA0-1EF9,U+20AB}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:italic;font-named-instance:"Italic";src:url(/assets/inter-italic-cyrillic.ea42a392.woff2) format("woff2");unicode-range:U+0301,U+0400-045F,U+0490-0491,U+04B0-04B1,U+2116}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:italic;font-named-instance:"Italic";src:url(/assets/inter-italic-cyrillic-ext.33bd5a8e.woff2) format("woff2");unicode-range:U+0460-052F,U+1C80-1C88,U+20B4,U+2DE0-2DFF,U+A640-A69F,U+FE2E-FE2F}@font-face{font-family:Inter var;font-weight:100 
900;font-display:swap;font-style:italic;font-named-instance:"Italic";src:url(/assets/inter-italic-greek.8f4463c4.woff2) format("woff2");unicode-range:U+0370-03FF}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:italic;font-named-instance:"Italic";src:url(/assets/inter-italic-greek-ext.4fbe9427.woff2) format("woff2");unicode-range:U+1F00-1FFF}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:italic;font-named-instance:"Italic";src:url(/assets/inter-italic-latin.bd3b6f56.woff2) format("woff2");unicode-range:U+0000-00FF,U+0131,U+0152-0153,U+02BB-02BC,U+02C6,U+02DA,U+02DC,U+2000-206F,U+2074,U+20AC,U+2122,U+2191,U+2193,U+2212,U+2215,U+FEFF,U+FFFD}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:italic;font-named-instance:"Italic";src:url(/assets/inter-italic-latin-ext.bd8920cc.woff2) format("woff2");unicode-range:U+0100-024F,U+0259,U+1E00-1EFF,U+2020,U+20A0-20AB,U+20AD-20CF,U+2113,U+2C60-2C7F,U+A720-A7FF}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:italic;font-named-instance:"Italic";src:url(/assets/inter-italic-vietnamese.6ce511fb.woff2) format("woff2");unicode-range:U+0102-0103,U+0110-0111,U+0128-0129,U+0168-0169,U+01A0-01A1,U+01AF-01B0,U+1EA0-1EF9,U+20AB}@font-face{font-family:Chinese Quotes;src:local("PingFang SC Regular"),local("PingFang SC"),local("SimHei"),local("Source Han Sans SC");unicode-range:U+2018,U+2019,U+201C,U+201D}:root{--vp-c-white: #ffffff;--vp-c-black: #000000;--vp-c-neutral: var(--vp-c-black);--vp-c-neutral-inverse: var(--vp-c-white)}.dark{--vp-c-neutral: var(--vp-c-white);--vp-c-neutral-inverse: var(--vp-c-black)}:root{--vp-c-gray-1: #dddde3;--vp-c-gray-2: #e4e4e9;--vp-c-gray-3: #ebebef;--vp-c-gray-soft: rgba(142, 150, 170, .14);--vp-c-indigo-1: #3451b2;--vp-c-indigo-2: #3a5ccc;--vp-c-indigo-3: #5672cd;--vp-c-indigo-soft: rgba(100, 108, 255, .14);--vp-c-green-1: #18794e;--vp-c-green-2: #299764;--vp-c-green-3: #30a46c;--vp-c-green-soft: rgba(16, 185, 129, .14);--vp-c-yellow-1: #915930;--vp-c-yellow-2: #946300;--vp-c-yellow-3: #9f6a00;--vp-c-yellow-soft: rgba(234, 179, 8, .14);--vp-c-red-1: #b8272c;--vp-c-red-2: #d5393e;--vp-c-red-3: #e0575b;--vp-c-red-soft: rgba(244, 63, 94, .14);--vp-c-sponsor: #db2777}.dark{--vp-c-gray-1: #515c67;--vp-c-gray-2: #414853;--vp-c-gray-3: #32363f;--vp-c-gray-soft: rgba(101, 117, 133, .16);--vp-c-indigo-1: #a8b1ff;--vp-c-indigo-2: #5c73e7;--vp-c-indigo-3: #3e63dd;--vp-c-indigo-soft: rgba(100, 108, 255, .16);--vp-c-green-1: #3dd68c;--vp-c-green-2: #30a46c;--vp-c-green-3: #298459;--vp-c-green-soft: rgba(16, 185, 129, .16);--vp-c-yellow-1: #f9b44e;--vp-c-yellow-2: #da8b17;--vp-c-yellow-3: #a46a0a;--vp-c-yellow-soft: rgba(234, 179, 8, .16);--vp-c-red-1: #f66f81;--vp-c-red-2: #f14158;--vp-c-red-3: #b62a3c;--vp-c-red-soft: rgba(244, 63, 94, .16)}:root{--vp-c-bg: #ffffff;--vp-c-bg-alt: #f6f6f7;--vp-c-bg-elv: #ffffff;--vp-c-bg-soft: #f6f6f7}.dark{--vp-c-bg: #1b1b1f;--vp-c-bg-alt: #161618;--vp-c-bg-elv: #202127;--vp-c-bg-soft: #202127}:root{--vp-c-border: #c2c2c4;--vp-c-divider: #e2e2e3;--vp-c-gutter: #e2e2e3}.dark{--vp-c-border: #3c3f44;--vp-c-divider: #2e2e32;--vp-c-gutter: #000000}:root{--vp-c-text-1: rgba(60, 60, 67);--vp-c-text-2: rgba(60, 60, 67, .78);--vp-c-text-3: rgba(60, 60, 67, .56)}.dark{--vp-c-text-1: rgba(255, 255, 245, .86);--vp-c-text-2: rgba(235, 235, 245, .6);--vp-c-text-3: rgba(235, 235, 245, .38)}:root{--vp-c-default-1: var(--vp-c-gray-1);--vp-c-default-2: var(--vp-c-gray-2);--vp-c-default-3: 
var(--vp-c-gray-3);--vp-c-default-soft: var(--vp-c-gray-soft);--vp-c-brand-1: var(--vp-c-indigo-1);--vp-c-brand-2: var(--vp-c-indigo-2);--vp-c-brand-3: var(--vp-c-indigo-3);--vp-c-brand-soft: var(--vp-c-indigo-soft);--vp-c-brand: var(--vp-c-brand-1);--vp-c-tip-1: var(--vp-c-brand-1);--vp-c-tip-2: var(--vp-c-brand-2);--vp-c-tip-3: var(--vp-c-brand-3);--vp-c-tip-soft: var(--vp-c-brand-soft);--vp-c-warning-1: var(--vp-c-yellow-1);--vp-c-warning-2: var(--vp-c-yellow-2);--vp-c-warning-3: var(--vp-c-yellow-3);--vp-c-warning-soft: var(--vp-c-yellow-soft);--vp-c-danger-1: var(--vp-c-red-1);--vp-c-danger-2: var(--vp-c-red-2);--vp-c-danger-3: var(--vp-c-red-3);--vp-c-danger-soft: var(--vp-c-red-soft)}:root{--vp-font-family-base: "Chinese Quotes", "Inter var", "Inter", ui-sans-serif, system-ui, -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Helvetica, Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";--vp-font-family-mono: ui-monospace, SFMono-Regular, "SF Mono", Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace}:root{--vp-shadow-1: 0 1px 2px rgba(0, 0, 0, .04), 0 1px 2px rgba(0, 0, 0, .06);--vp-shadow-2: 0 3px 12px rgba(0, 0, 0, .07), 0 1px 4px rgba(0, 0, 0, .07);--vp-shadow-3: 0 12px 32px rgba(0, 0, 0, .1), 0 2px 6px rgba(0, 0, 0, .08);--vp-shadow-4: 0 14px 44px rgba(0, 0, 0, .12), 0 3px 9px rgba(0, 0, 0, .12);--vp-shadow-5: 0 18px 56px rgba(0, 0, 0, .16), 0 4px 12px rgba(0, 0, 0, .16)}:root{--vp-z-index-footer: 10;--vp-z-index-local-nav: 20;--vp-z-index-nav: 30;--vp-z-index-layout-top: 40;--vp-z-index-backdrop: 50;--vp-z-index-sidebar: 60}:root{--vp-icon-copy: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='none' height='20' width='20' stroke='rgba(128,128,128,1)' stroke-width='2' viewBox='0 0 24 24'%3E%3Cpath stroke-linecap='round' stroke-linejoin='round' d='M9 5H7a2 2 0 0 0-2 2v12a2 2 0 0 0 2 2h10a2 2 0 0 0 2-2V7a2 2 0 0 0-2-2h-2M9 5a2 2 0 0 0 2 2h2a2 2 0 0 0 2-2M9 5a2 2 0 0 1 2-2h2a2 2 0 0 1 2 2'/%3E%3C/svg%3E");--vp-icon-copied: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='none' height='20' width='20' stroke='rgba(128,128,128,1)' stroke-width='2' viewBox='0 0 24 24'%3E%3Cpath stroke-linecap='round' stroke-linejoin='round' d='M9 5H7a2 2 0 0 0-2 2v12a2 2 0 0 0 2 2h10a2 2 0 0 0 2-2V7a2 2 0 0 0-2-2h-2M9 5a2 2 0 0 0 2 2h2a2 2 0 0 0 2-2M9 5a2 2 0 0 1 2-2h2a2 2 0 0 1 2 2m-6 9 2 2 4-4'/%3E%3C/svg%3E")}:root{--vp-layout-max-width: 1440px}:root{--vp-header-anchor-symbol: "#"}:root{--vp-code-line-height: 1.7;--vp-code-font-size: .875em;--vp-code-color: var(--vp-c-brand-1);--vp-code-link-color: var(--vp-c-brand-1);--vp-code-link-hover-color: var(--vp-c-brand-2);--vp-code-bg: var(--vp-c-default-soft);--vp-code-block-color: var(--vp-c-text-2);--vp-code-block-bg: var(--vp-c-bg-alt);--vp-code-block-divider-color: var(--vp-c-gutter);--vp-code-lang-color: var(--vp-c-text-3);--vp-code-line-highlight-color: var(--vp-c-default-soft);--vp-code-line-number-color: var(--vp-c-text-3);--vp-code-line-diff-add-color: var(--vp-c-green-soft);--vp-code-line-diff-add-symbol-color: var(--vp-c-green-1);--vp-code-line-diff-remove-color: var(--vp-c-red-soft);--vp-code-line-diff-remove-symbol-color: var(--vp-c-red-1);--vp-code-line-warning-color: var(--vp-c-yellow-soft);--vp-code-line-error-color: var(--vp-c-red-soft);--vp-code-copy-code-border-color: var(--vp-c-divider);--vp-code-copy-code-bg: var(--vp-c-bg-soft);--vp-code-copy-code-hover-border-color: 
var(--vp-c-divider);--vp-code-copy-code-hover-bg: var(--vp-c-bg);--vp-code-copy-code-active-text: var(--vp-c-text-2);--vp-code-copy-copied-text-content: "Copied";--vp-code-tab-divider: var(--vp-code-block-divider-color);--vp-code-tab-text-color: var(--vp-c-text-2);--vp-code-tab-bg: var(--vp-code-block-bg);--vp-code-tab-hover-text-color: var(--vp-c-text-1);--vp-code-tab-active-text-color: var(--vp-c-text-1);--vp-code-tab-active-bar-color: var(--vp-c-brand-1)}:root{--vp-button-brand-border: transparent;--vp-button-brand-text: var(--vp-c-white);--vp-button-brand-bg: var(--vp-c-brand-3);--vp-button-brand-hover-border: transparent;--vp-button-brand-hover-text: var(--vp-c-white);--vp-button-brand-hover-bg: var(--vp-c-brand-2);--vp-button-brand-active-border: transparent;--vp-button-brand-active-text: var(--vp-c-white);--vp-button-brand-active-bg: var(--vp-c-brand-1);--vp-button-alt-border: transparent;--vp-button-alt-text: var(--vp-c-text-1);--vp-button-alt-bg: var(--vp-c-default-3);--vp-button-alt-hover-border: transparent;--vp-button-alt-hover-text: var(--vp-c-text-1);--vp-button-alt-hover-bg: var(--vp-c-default-2);--vp-button-alt-active-border: transparent;--vp-button-alt-active-text: var(--vp-c-text-1);--vp-button-alt-active-bg: var(--vp-c-default-1);--vp-button-sponsor-border: var(--vp-c-text-2);--vp-button-sponsor-text: var(--vp-c-text-2);--vp-button-sponsor-bg: transparent;--vp-button-sponsor-hover-border: var(--vp-c-sponsor);--vp-button-sponsor-hover-text: var(--vp-c-sponsor);--vp-button-sponsor-hover-bg: transparent;--vp-button-sponsor-active-border: var(--vp-c-sponsor);--vp-button-sponsor-active-text: var(--vp-c-sponsor);--vp-button-sponsor-active-bg: transparent}:root{--vp-custom-block-font-size: 14px;--vp-custom-block-code-font-size: 13px;--vp-custom-block-info-border: transparent;--vp-custom-block-info-text: var(--vp-c-text-1);--vp-custom-block-info-bg: var(--vp-c-default-soft);--vp-custom-block-info-code-bg: var(--vp-c-default-soft);--vp-custom-block-tip-border: transparent;--vp-custom-block-tip-text: var(--vp-c-text-1);--vp-custom-block-tip-bg: var(--vp-c-brand-soft);--vp-custom-block-tip-code-bg: var(--vp-c-brand-soft);--vp-custom-block-warning-border: transparent;--vp-custom-block-warning-text: var(--vp-c-text-1);--vp-custom-block-warning-bg: var(--vp-c-warning-soft);--vp-custom-block-warning-code-bg: var(--vp-c-warning-soft);--vp-custom-block-danger-border: transparent;--vp-custom-block-danger-text: var(--vp-c-text-1);--vp-custom-block-danger-bg: var(--vp-c-danger-soft);--vp-custom-block-danger-code-bg: var(--vp-c-danger-soft);--vp-custom-block-details-border: var(--vp-custom-block-info-border);--vp-custom-block-details-text: var(--vp-custom-block-info-text);--vp-custom-block-details-bg: var(--vp-custom-block-info-bg);--vp-custom-block-details-code-bg: var(--vp-custom-block-info-code-bg)}:root{--vp-input-border-color: var(--vp-c-border);--vp-input-bg-color: var(--vp-c-bg-alt);--vp-input-switch-bg-color: var(--vp-c-gray-soft)}:root{--vp-nav-height: 64px;--vp-nav-bg-color: var(--vp-c-bg);--vp-nav-screen-bg-color: var(--vp-c-bg);--vp-nav-logo-height: 24px}.hide-nav{--vp-nav-height: 0px}.hide-nav .VPSidebar{--vp-nav-height: 22px}:root{--vp-local-nav-bg-color: var(--vp-c-bg)}:root{--vp-sidebar-width: 272px;--vp-sidebar-bg-color: var(--vp-c-bg-alt)}:root{--vp-backdrop-bg-color: rgba(0, 0, 0, .6)}:root{--vp-home-hero-name-color: var(--vp-c-brand-1);--vp-home-hero-name-background: transparent;--vp-home-hero-image-background-image: none;--vp-home-hero-image-filter: 
none}:root{--vp-badge-info-border: transparent;--vp-badge-info-text: var(--vp-c-text-2);--vp-badge-info-bg: var(--vp-c-default-soft);--vp-badge-tip-border: transparent;--vp-badge-tip-text: var(--vp-c-brand-1);--vp-badge-tip-bg: var(--vp-c-brand-soft);--vp-badge-warning-border: transparent;--vp-badge-warning-text: var(--vp-c-warning-1);--vp-badge-warning-bg: var(--vp-c-warning-soft);--vp-badge-danger-border: transparent;--vp-badge-danger-text: var(--vp-c-danger-1);--vp-badge-danger-bg: var(--vp-c-danger-soft)}:root{--vp-carbon-ads-text-color: var(--vp-c-text-1);--vp-carbon-ads-poweredby-color: var(--vp-c-text-2);--vp-carbon-ads-bg-color: var(--vp-c-bg-soft);--vp-carbon-ads-hover-text-color: var(--vp-c-brand-1);--vp-carbon-ads-hover-poweredby-color: var(--vp-c-text-1)}:root{--vp-local-search-bg: var(--vp-c-bg);--vp-local-search-result-bg: var(--vp-c-bg);--vp-local-search-result-border: var(--vp-c-divider);--vp-local-search-result-selected-bg: var(--vp-c-bg);--vp-local-search-result-selected-border: var(--vp-c-brand-1);--vp-local-search-highlight-bg: var(--vp-c-brand-1);--vp-local-search-highlight-text: var(--vp-c-neutral-inverse)}@media (prefers-reduced-motion: reduce){*,:before,:after{animation-delay:-1ms!important;animation-duration:1ms!important;animation-iteration-count:1!important;background-attachment:initial!important;scroll-behavior:auto!important;transition-duration:0s!important;transition-delay:0s!important}}*,:before,:after{box-sizing:border-box}html{line-height:1.4;font-size:16px;-webkit-text-size-adjust:100%}html.dark{color-scheme:dark}body{margin:0;width:100%;min-width:320px;min-height:100vh;line-height:24px;font-family:var(--vp-font-family-base);font-size:16px;font-weight:400;color:var(--vp-c-text-1);background-color:var(--vp-c-bg);direction:ltr;font-synthesis:style;text-rendering:optimizeLegibility;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}main{display:block}h1,h2,h3,h4,h5,h6{margin:0;line-height:24px;font-size:16px;font-weight:400}p{margin:0}strong,b{font-weight:600}a,area,button,[role=button],input,label,select,summary,textarea{touch-action:manipulation}a{color:inherit;text-decoration:inherit}ol,ul{list-style:none;margin:0;padding:0}blockquote{margin:0}pre,code,kbd,samp{font-family:var(--vp-font-family-mono)}img,svg,video,canvas,audio,iframe,embed,object{display:block}figure{margin:0}img,video{max-width:100%;height:auto}button,input,optgroup,select,textarea{border:0;padding:0;line-height:inherit;color:inherit}button{padding:0;font-family:inherit;background-color:transparent;background-image:none}button:enabled,[role=button]:enabled{cursor:pointer}button:focus,button:focus-visible{outline:1px dotted;outline:4px auto -webkit-focus-ring-color}button:focus:not(:focus-visible){outline:none!important}input:focus,textarea:focus,select:focus{outline:none}table{border-collapse:collapse}input{background-color:transparent}input:-ms-input-placeholder,textarea:-ms-input-placeholder{color:var(--vp-c-text-3)}input::-ms-input-placeholder,textarea::-ms-input-placeholder{color:var(--vp-c-text-3)}input::placeholder,textarea::placeholder{color:var(--vp-c-text-3)}input::-webkit-outer-spin-button,input::-webkit-inner-spin-button{-webkit-appearance:none;margin:0}input[type=number]{-moz-appearance:textfield}textarea{resize:vertical}select{-webkit-appearance:none}fieldset{margin:0;padding:0}h1,h2,h3,h4,h5,h6,li,p{overflow-wrap:break-word}vite-error-overlay{z-index:9999}mjx-container{display:inline-block;margin:auto 2px 
-2px}mjx-container>svg{margin:auto}.visually-hidden{position:absolute;width:1px;height:1px;white-space:nowrap;clip:rect(0 0 0 0);clip-path:inset(50%);overflow:hidden}.custom-block{border:1px solid transparent;border-radius:8px;padding:16px 16px 8px;line-height:24px;font-size:var(--vp-custom-block-font-size);color:var(--vp-c-text-2)}.custom-block.info{border-color:var(--vp-custom-block-info-border);color:var(--vp-custom-block-info-text);background-color:var(--vp-custom-block-info-bg)}.custom-block.info a,.custom-block.info code{color:var(--vp-c-brand-1)}.custom-block.info a:hover{color:var(--vp-c-brand-2)}.custom-block.info code{background-color:var(--vp-custom-block-info-code-bg)}.custom-block.tip{border-color:var(--vp-custom-block-tip-border);color:var(--vp-custom-block-tip-text);background-color:var(--vp-custom-block-tip-bg)}.custom-block.tip a,.custom-block.tip code{color:var(--vp-c-brand-1)}.custom-block.tip a:hover{color:var(--vp-c-brand-2)}.custom-block.tip code{background-color:var(--vp-custom-block-tip-code-bg)}.custom-block.warning{border-color:var(--vp-custom-block-warning-border);color:var(--vp-custom-block-warning-text);background-color:var(--vp-custom-block-warning-bg)}.custom-block.warning a,.custom-block.warning code{color:var(--vp-c-warning-1)}.custom-block.warning a:hover{color:var(--vp-c-warning-2)}.custom-block.warning code{background-color:var(--vp-custom-block-warning-code-bg)}.custom-block.danger{border-color:var(--vp-custom-block-danger-border);color:var(--vp-custom-block-danger-text);background-color:var(--vp-custom-block-danger-bg)}.custom-block.danger a,.custom-block.danger code{color:var(--vp-c-danger-1)}.custom-block.danger a:hover{color:var(--vp-c-danger-2)}.custom-block.danger code{background-color:var(--vp-custom-block-danger-code-bg)}.custom-block.details{border-color:var(--vp-custom-block-details-border);color:var(--vp-custom-block-details-text);background-color:var(--vp-custom-block-details-bg)}.custom-block.details a{color:var(--vp-c-brand-1)}.custom-block.details a:hover{color:var(--vp-c-brand-2)}.custom-block.details code{background-color:var(--vp-custom-block-details-code-bg)}.custom-block-title{font-weight:600}.custom-block p+p{margin:8px 0}.custom-block.details summary{margin:0 0 8px;font-weight:700;cursor:pointer}.custom-block.details summary+p{margin:8px 0}.custom-block a{color:inherit;font-weight:600;text-decoration:underline;text-underline-offset:2px;transition:opacity .25s}.custom-block a:hover{opacity:.75}.custom-block code{font-size:var(--vp-custom-block-code-font-size)}.custom-block.custom-block th,.custom-block.custom-block blockquote>p{font-size:var(--vp-custom-block-font-size);color:inherit}.dark .vp-code-light{display:none}html:not(.dark) .vp-code-dark{display:none}.vp-code-group{margin-top:16px}.vp-code-group .tabs{position:relative;display:flex;margin-right:-24px;margin-left:-24px;padding:0 12px;background-color:var(--vp-code-tab-bg);overflow-x:auto;overflow-y:hidden;box-shadow:inset 0 -1px var(--vp-code-tab-divider)}@media (min-width: 640px){.vp-code-group .tabs{margin-right:0;margin-left:0;border-radius:8px 8px 0 0}}.vp-code-group .tabs input{position:fixed;opacity:0;pointer-events:none}.vp-code-group .tabs label{position:relative;display:inline-block;border-bottom:1px solid transparent;padding:0 12px;line-height:48px;font-size:14px;font-weight:500;color:var(--vp-code-tab-text-color);white-space:nowrap;cursor:pointer;transition:color .25s}.vp-code-group .tabs 
label:after{position:absolute;right:8px;bottom:-1px;left:8px;z-index:1;height:2px;border-radius:2px;content:"";background-color:transparent;transition:background-color .25s}.vp-code-group label:hover{color:var(--vp-code-tab-hover-text-color)}.vp-code-group input:checked+label{color:var(--vp-code-tab-active-text-color)}.vp-code-group input:checked+label:after{background-color:var(--vp-code-tab-active-bar-color)}.vp-code-group div[class*=language-],.vp-block{display:none;margin-top:0!important;border-top-left-radius:0!important;border-top-right-radius:0!important}.vp-code-group div[class*=language-].active,.vp-block.active{display:block}.vp-block{padding:20px 24px}.vp-doc h1,.vp-doc h2,.vp-doc h3,.vp-doc h4,.vp-doc h5,.vp-doc h6{position:relative;font-weight:600;outline:none}.vp-doc h1{letter-spacing:-.02em;line-height:40px;font-size:28px}.vp-doc h2{margin:48px 0 16px;border-top:1px solid var(--vp-c-divider);padding-top:24px;letter-spacing:-.02em;line-height:32px;font-size:24px}.vp-doc h3{margin:32px 0 0;letter-spacing:-.01em;line-height:28px;font-size:20px}.vp-doc .header-anchor{position:absolute;top:0;left:0;margin-left:-.87em;font-weight:500;-webkit-user-select:none;user-select:none;opacity:0;text-decoration:none;transition:color .25s,opacity .25s}.vp-doc .header-anchor:before{content:var(--vp-header-anchor-symbol)}.vp-doc h1:hover .header-anchor,.vp-doc h1 .header-anchor:focus,.vp-doc h2:hover .header-anchor,.vp-doc h2 .header-anchor:focus,.vp-doc h3:hover .header-anchor,.vp-doc h3 .header-anchor:focus,.vp-doc h4:hover .header-anchor,.vp-doc h4 .header-anchor:focus,.vp-doc h5:hover .header-anchor,.vp-doc h5 .header-anchor:focus,.vp-doc h6:hover .header-anchor,.vp-doc h6 .header-anchor:focus{opacity:1}@media (min-width: 768px){.vp-doc h1{letter-spacing:-.02em;line-height:40px;font-size:32px}}.vp-doc h2 .header-anchor{top:24px}.vp-doc p,.vp-doc summary{margin:16px 0}.vp-doc p{line-height:28px}.vp-doc blockquote{margin:16px 0;border-left:2px solid var(--vp-c-divider);padding-left:16px;transition:border-color .5s}.vp-doc blockquote>p{margin:0;font-size:16px;color:var(--vp-c-text-2);transition:color .5s}.vp-doc a{font-weight:500;color:var(--vp-c-brand-1);text-decoration:underline;text-underline-offset:2px;transition:color .25s,opacity .25s}.vp-doc a:hover{color:var(--vp-c-brand-2)}.vp-doc strong{font-weight:600}.vp-doc ul,.vp-doc ol{padding-left:1.25rem;margin:16px 0}.vp-doc ul{list-style:disc}.vp-doc ol{list-style:decimal}.vp-doc li+li{margin-top:8px}.vp-doc li>ol,.vp-doc li>ul{margin:8px 0 0}.vp-doc table{display:block;border-collapse:collapse;margin:20px 0;overflow-x:auto}.vp-doc tr{border-top:1px solid var(--vp-c-divider);transition:background-color .5s}.vp-doc tr:nth-child(2n){background-color:var(--vp-c-bg-soft)}.vp-doc th,.vp-doc td{border:1px solid var(--vp-c-divider);padding:8px 16px}.vp-doc th{text-align:left;font-size:14px;font-weight:600;color:var(--vp-c-text-2);background-color:var(--vp-c-bg-soft)}.vp-doc td{font-size:14px}.vp-doc hr{margin:16px 0;border:none;border-top:1px solid var(--vp-c-divider)}.vp-doc .custom-block{margin:16px 0}.vp-doc .custom-block p{margin:8px 0;line-height:24px}.vp-doc .custom-block p:first-child{margin:0}.vp-doc .custom-block div[class*=language-]{margin:8px 0;border-radius:8px}.vp-doc .custom-block div[class*=language-] code{font-weight:400;background-color:transparent}.vp-doc .custom-block .vp-code-group .tabs{margin:0;border-radius:8px 8px 0 0}.vp-doc 
:not(pre,h1,h2,h3,h4,h5,h6)>code{font-size:var(--vp-code-font-size);color:var(--vp-code-color)}.vp-doc :not(pre)>code{border-radius:4px;padding:3px 6px;background-color:var(--vp-code-bg);transition:color .25s,background-color .5s}.vp-doc a>code{color:var(--vp-code-link-color)}.vp-doc a:hover>code{color:var(--vp-code-link-hover-color)}.vp-doc h1>code,.vp-doc h2>code,.vp-doc h3>code{font-size:.9em}.vp-doc div[class*=language-],.vp-block{position:relative;margin:16px -24px;background-color:var(--vp-code-block-bg);overflow-x:auto;transition:background-color .5s}@media (min-width: 640px){.vp-doc div[class*=language-],.vp-block{border-radius:8px;margin:16px 0}}@media (max-width: 639px){.vp-doc li div[class*=language-]{border-radius:8px 0 0 8px}}.vp-doc div[class*=language-]+div[class*=language-],.vp-doc div[class$=-api]+div[class*=language-],.vp-doc div[class*=language-]+div[class$=-api]>div[class*=language-]{margin-top:-8px}.vp-doc [class*=language-] pre,.vp-doc [class*=language-] code{direction:ltr;text-align:left;white-space:pre;word-spacing:normal;word-break:normal;word-wrap:normal;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-hyphens:none;-moz-hyphens:none;-ms-hyphens:none;hyphens:none}.vp-doc [class*=language-] pre{position:relative;z-index:1;margin:0;padding:20px 0;background:transparent;overflow-x:auto}.vp-doc [class*=language-] code{display:block;padding:0 24px;width:fit-content;min-width:100%;line-height:var(--vp-code-line-height);font-size:var(--vp-code-font-size);color:var(--vp-code-block-color);transition:color .5s}.vp-doc [class*=language-] code .highlighted{background-color:var(--vp-code-line-highlight-color);transition:background-color .5s;margin:0 -24px;padding:0 24px;width:calc(100% + 48px);display:inline-block}.vp-doc [class*=language-] code .highlighted.error{background-color:var(--vp-code-line-error-color)}.vp-doc [class*=language-] code .highlighted.warning{background-color:var(--vp-code-line-warning-color)}.vp-doc [class*=language-] code .diff{transition:background-color .5s;margin:0 -24px;padding:0 24px;width:calc(100% + 48px);display:inline-block}.vp-doc [class*=language-] code .diff:before{position:absolute;left:10px}.vp-doc [class*=language-] .has-focused-lines .line:not(.has-focus){filter:blur(.095rem);opacity:.4;transition:filter .35s,opacity .35s}.vp-doc [class*=language-] .has-focused-lines .line:not(.has-focus){opacity:.7;transition:filter .35s,opacity .35s}.vp-doc [class*=language-]:hover .has-focused-lines .line:not(.has-focus){filter:blur(0);opacity:1}.vp-doc [class*=language-] code .diff.remove{background-color:var(--vp-code-line-diff-remove-color);opacity:.7}.vp-doc [class*=language-] code .diff.remove:before{content:"-";color:var(--vp-code-line-diff-remove-symbol-color)}.vp-doc [class*=language-] code .diff.add{background-color:var(--vp-code-line-diff-add-color)}.vp-doc [class*=language-] code .diff.add:before{content:"+";color:var(--vp-code-line-diff-add-symbol-color)}.vp-doc div[class*=language-].line-numbers-mode{padding-left:32px}.vp-doc .line-numbers-wrapper{position:absolute;top:0;bottom:0;left:0;z-index:3;border-right:1px solid var(--vp-code-block-divider-color);padding-top:20px;width:32px;text-align:center;font-family:var(--vp-font-family-mono);line-height:var(--vp-code-line-height);font-size:var(--vp-code-font-size);color:var(--vp-code-line-number-color);transition:border-color .5s,color .5s}.vp-doc [class*=language-]>button.copy{direction:ltr;position:absolute;top:12px;right:12px;z-index:3;border:1px solid 
var(--vp-code-copy-code-border-color);border-radius:4px;width:40px;height:40px;background-color:var(--vp-code-copy-code-bg);opacity:0;cursor:pointer;background-image:var(--vp-icon-copy);background-position:50%;background-size:20px;background-repeat:no-repeat;transition:border-color .25s,background-color .25s,opacity .25s}.vp-doc [class*=language-]:hover>button.copy,.vp-doc [class*=language-]>button.copy:focus{opacity:1}.vp-doc [class*=language-]>button.copy:hover,.vp-doc [class*=language-]>button.copy.copied{border-color:var(--vp-code-copy-code-hover-border-color);background-color:var(--vp-code-copy-code-hover-bg)}.vp-doc [class*=language-]>button.copy.copied,.vp-doc [class*=language-]>button.copy:hover.copied{border-radius:0 4px 4px 0;background-color:var(--vp-code-copy-code-hover-bg);background-image:var(--vp-icon-copied)}.vp-doc [class*=language-]>button.copy.copied:before,.vp-doc [class*=language-]>button.copy:hover.copied:before{position:relative;top:-1px;transform:translate(calc(-100% - 1px));display:flex;justify-content:center;align-items:center;border:1px solid var(--vp-code-copy-code-hover-border-color);border-right:0;border-radius:4px 0 0 4px;padding:0 10px;width:fit-content;height:40px;text-align:center;font-size:12px;font-weight:500;color:var(--vp-code-copy-code-active-text);background-color:var(--vp-code-copy-code-hover-bg);white-space:nowrap;content:var(--vp-code-copy-copied-text-content)}.vp-doc [class*=language-]>span.lang{position:absolute;top:2px;right:8px;z-index:2;font-size:12px;font-weight:500;color:var(--vp-code-lang-color);transition:color .4s,opacity .4s}.vp-doc [class*=language-]:hover>button.copy+span.lang,.vp-doc [class*=language-]>button.copy:focus+span.lang{opacity:0}.vp-doc .VPTeamMembers{margin-top:24px}.vp-doc .VPTeamMembers.small.count-1 .container{margin:0!important;max-width:calc((100% - 24px)/2)!important}.vp-doc .VPTeamMembers.small.count-2 .container,.vp-doc .VPTeamMembers.small.count-3 .container{max-width:100%!important}.vp-doc .VPTeamMembers.medium.count-1 .container{margin:0!important;max-width:calc((100% - 24px)/2)!important}:is(.vp-external-link-icon,.vp-doc a[href*="://"],.vp-doc a[target=_blank]):not(.no-icon):after{display:inline-block;margin-top:-1px;margin-left:4px;width:11px;height:11px;background:currentColor;color:var(--vp-c-text-3);flex-shrink:0;--icon: url("data:image/svg+xml, %3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 24 24' %3E%3Cpath d='M0 0h24v24H0V0z' fill='none' /%3E%3Cpath d='M9 5v2h6.59L4 18.59 5.41 20 17 8.41V15h2V5H9z' /%3E%3C/svg%3E");-webkit-mask-image:var(--icon);mask-image:var(--icon)}.vp-external-link-icon:after{content:""}.vp-sponsor{border-radius:16px;overflow:hidden}.vp-sponsor.aside{border-radius:12px}.vp-sponsor-section+.vp-sponsor-section{margin-top:4px}.vp-sponsor-tier{margin-bottom:4px;text-align:center;letter-spacing:1px;line-height:24px;width:100%;font-weight:600;color:var(--vp-c-text-2);background-color:var(--vp-c-bg-soft)}.vp-sponsor.normal .vp-sponsor-tier{padding:13px 0 11px;font-size:14px}.vp-sponsor.aside .vp-sponsor-tier{padding:9px 0 7px;font-size:12px}.vp-sponsor-grid+.vp-sponsor-tier{margin-top:4px}.vp-sponsor-grid{display:flex;flex-wrap:wrap;gap:4px}.vp-sponsor-grid.xmini .vp-sponsor-grid-link{height:64px}.vp-sponsor-grid.xmini .vp-sponsor-grid-image{max-width:64px;max-height:22px}.vp-sponsor-grid.mini .vp-sponsor-grid-link{height:72px}.vp-sponsor-grid.mini .vp-sponsor-grid-image{max-width:96px;max-height:24px}.vp-sponsor-grid.small .vp-sponsor-grid-link{height:96px}.vp-sponsor-grid.small 
.vp-sponsor-grid-image{max-width:96px;max-height:24px}.vp-sponsor-grid.medium .vp-sponsor-grid-link{height:112px}.vp-sponsor-grid.medium .vp-sponsor-grid-image{max-width:120px;max-height:36px}.vp-sponsor-grid.big .vp-sponsor-grid-link{height:184px}.vp-sponsor-grid.big .vp-sponsor-grid-image{max-width:192px;max-height:56px}.vp-sponsor-grid[data-vp-grid="2"] .vp-sponsor-grid-item{width:calc((100% - 4px)/2)}.vp-sponsor-grid[data-vp-grid="3"] .vp-sponsor-grid-item{width:calc((100% - 4px * 2) / 3)}.vp-sponsor-grid[data-vp-grid="4"] .vp-sponsor-grid-item{width:calc((100% - 12px)/4)}.vp-sponsor-grid[data-vp-grid="5"] .vp-sponsor-grid-item{width:calc((100% - 16px)/5)}.vp-sponsor-grid[data-vp-grid="6"] .vp-sponsor-grid-item{width:calc((100% - 4px * 5) / 6)}.vp-sponsor-grid-item{flex-shrink:0;width:100%;background-color:var(--vp-c-bg-soft);transition:background-color .25s}.vp-sponsor-grid-item:hover{background-color:var(--vp-c-default-soft)}.vp-sponsor-grid-item:hover .vp-sponsor-grid-image{filter:grayscale(0) invert(0)}.vp-sponsor-grid-item.empty:hover{background-color:var(--vp-c-bg-soft)}.dark .vp-sponsor-grid-item:hover{background-color:var(--vp-c-white)}.dark .vp-sponsor-grid-item.empty:hover{background-color:var(--vp-c-bg-soft)}.vp-sponsor-grid-link{display:flex}.vp-sponsor-grid-box{display:flex;justify-content:center;align-items:center;width:100%}.vp-sponsor-grid-image{max-width:100%;filter:grayscale(1);transition:filter .25s}.dark .vp-sponsor-grid-image{filter:grayscale(1) invert(1)}.VPBadge[data-v-ea5b2908]{display:inline-block;margin-left:2px;border:1px solid transparent;border-radius:12px;padding:0 10px;line-height:22px;font-size:12px;font-weight:500;transform:translateY(-2px)}.vp-doc h1>.VPBadge[data-v-ea5b2908]{margin-top:4px;vertical-align:top}.vp-doc h2>.VPBadge[data-v-ea5b2908]{margin-top:3px;padding:0 8px;vertical-align:top}.vp-doc h3>.VPBadge[data-v-ea5b2908]{vertical-align:middle}.vp-doc h4>.VPBadge[data-v-ea5b2908],.vp-doc h5>.VPBadge[data-v-ea5b2908],.vp-doc h6>.VPBadge[data-v-ea5b2908]{vertical-align:middle;line-height:18px}.VPBadge.info[data-v-ea5b2908]{border-color:var(--vp-badge-info-border);color:var(--vp-badge-info-text);background-color:var(--vp-badge-info-bg)}.VPBadge.tip[data-v-ea5b2908]{border-color:var(--vp-badge-tip-border);color:var(--vp-badge-tip-text);background-color:var(--vp-badge-tip-bg)}.VPBadge.warning[data-v-ea5b2908]{border-color:var(--vp-badge-warning-border);color:var(--vp-badge-warning-text);background-color:var(--vp-badge-warning-bg)}.VPBadge.danger[data-v-ea5b2908]{border-color:var(--vp-badge-danger-border);color:var(--vp-badge-danger-text);background-color:var(--vp-badge-danger-bg)}.VPBackdrop[data-v-54a304ca]{position:fixed;top:0;right:0;bottom:0;left:0;z-index:var(--vp-z-index-backdrop);background:var(--vp-backdrop-bg-color);transition:opacity .5s}.VPBackdrop.fade-enter-from[data-v-54a304ca],.VPBackdrop.fade-leave-to[data-v-54a304ca]{opacity:0}.VPBackdrop.fade-leave-active[data-v-54a304ca]{transition-duration:.25s}@media (min-width: 1280px){.VPBackdrop[data-v-54a304ca]{display:none}}.NotFound[data-v-b9c0c15a]{padding:64px 24px 96px;text-align:center}@media (min-width: 768px){.NotFound[data-v-b9c0c15a]{padding:96px 32px 168px}}.code[data-v-b9c0c15a]{line-height:64px;font-size:64px;font-weight:600}.title[data-v-b9c0c15a]{padding-top:12px;letter-spacing:2px;line-height:20px;font-size:20px;font-weight:700}.divider[data-v-b9c0c15a]{margin:24px auto 18px;width:64px;height:1px;background-color:var(--vp-c-divider)}.quote[data-v-b9c0c15a]{margin:0 
auto;max-width:256px;font-size:14px;font-weight:500;color:var(--vp-c-text-2)}.action[data-v-b9c0c15a]{padding-top:20px}.link[data-v-b9c0c15a]{display:inline-block;border:1px solid var(--vp-c-brand-1);border-radius:16px;padding:3px 16px;font-size:14px;font-weight:500;color:var(--vp-c-brand-1);transition:border-color .25s,color .25s}.link[data-v-b9c0c15a]:hover{border-color:var(--vp-c-brand-2);color:var(--vp-c-brand-2)}.root[data-v-463da30f]{position:relative;z-index:1}.nested[data-v-463da30f]{padding-left:16px}.outline-link[data-v-463da30f]{display:block;line-height:28px;color:var(--vp-c-text-2);white-space:nowrap;overflow:hidden;text-overflow:ellipsis;transition:color .5s;font-weight:400}.outline-link[data-v-463da30f]:hover,.outline-link.active[data-v-463da30f]{color:var(--vp-c-text-1);transition:color .25s}.outline-link.nested[data-v-463da30f]{padding-left:13px}.VPDocAsideOutline[data-v-3a6c4994]{display:none}.VPDocAsideOutline.has-outline[data-v-3a6c4994]{display:block}.content[data-v-3a6c4994]{position:relative;border-left:1px solid var(--vp-c-divider);padding-left:16px;font-size:13px;font-weight:500}.outline-marker[data-v-3a6c4994]{position:absolute;top:32px;left:-1px;z-index:0;opacity:0;width:2px;border-radius:2px;height:18px;background-color:var(--vp-c-brand-1);transition:top .25s cubic-bezier(0,1,.5,1),background-color .5s,opacity .25s}.outline-title[data-v-3a6c4994]{letter-spacing:.4px;line-height:28px;font-size:13px;font-weight:600}.VPDocAside[data-v-cb998dce]{display:flex;flex-direction:column;flex-grow:1}.spacer[data-v-cb998dce]{flex-grow:1}.VPDocAside[data-v-cb998dce] .spacer+.VPDocAsideSponsors,.VPDocAside[data-v-cb998dce] .spacer+.VPDocAsideCarbonAds{margin-top:24px}.VPDocAside[data-v-cb998dce] .VPDocAsideSponsors+.VPDocAsideCarbonAds{margin-top:16px}.VPLastUpdated[data-v-19a7ae4e]{line-height:24px;font-size:14px;font-weight:500;color:var(--vp-c-text-2)}@media (min-width: 640px){.VPLastUpdated[data-v-19a7ae4e]{line-height:32px;font-size:14px;font-weight:500}}.VPDocFooter[data-v-a2d931e4]{margin-top:64px}.edit-info[data-v-a2d931e4]{padding-bottom:18px}@media (min-width: 640px){.edit-info[data-v-a2d931e4]{display:flex;justify-content:space-between;align-items:center;padding-bottom:14px}}.edit-link-button[data-v-a2d931e4]{display:flex;align-items:center;border:0;line-height:32px;font-size:14px;font-weight:500;color:var(--vp-c-brand-1);transition:color .25s}.edit-link-button[data-v-a2d931e4]:hover{color:var(--vp-c-brand-2)}.edit-link-icon[data-v-a2d931e4]{margin-right:8px;width:14px;height:14px;fill:currentColor}.prev-next[data-v-a2d931e4]{border-top:1px solid var(--vp-c-divider);padding-top:24px;display:grid;grid-row-gap:8px}@media (min-width: 640px){.prev-next[data-v-a2d931e4]{grid-template-columns:repeat(2,1fr);grid-column-gap:16px}}.pager-link[data-v-a2d931e4]{display:block;border:1px solid var(--vp-c-divider);border-radius:8px;padding:11px 16px 13px;width:100%;height:100%;transition:border-color .25s}.pager-link[data-v-a2d931e4]:hover{border-color:var(--vp-c-brand-1)}.pager-link.next[data-v-a2d931e4]{margin-left:auto;text-align:right}.desc[data-v-a2d931e4]{display:block;line-height:20px;font-size:12px;font-weight:500;color:var(--vp-c-text-2)}.title[data-v-a2d931e4]{display:block;line-height:20px;font-size:14px;font-weight:500;color:var(--vp-c-brand-1);transition:color .25s}.VPDocOutlineDropdown[data-v-95bb0785]{margin-bottom:48px}.VPDocOutlineDropdown button[data-v-95bb0785]{display:block;font-size:14px;font-weight:500;line-height:24px;border:1px solid 
var(--vp-c-border);padding:4px 12px;color:var(--vp-c-text-2);background-color:var(--vp-c-default-soft);border-radius:8px;transition:color .5s}.VPDocOutlineDropdown button[data-v-95bb0785]:hover{color:var(--vp-c-text-1);transition:color .25s}.VPDocOutlineDropdown button.open[data-v-95bb0785]{color:var(--vp-c-text-1)}.icon[data-v-95bb0785]{display:inline-block;vertical-align:middle;width:16px;height:16px;fill:currentColor}[data-v-95bb0785] .outline-link{font-size:14px;font-weight:400}.open>.icon[data-v-95bb0785]{transform:rotate(90deg)}.items[data-v-95bb0785]{margin-top:12px;border-left:1px solid var(--vp-c-divider)}.VPDoc[data-v-a3c25e27]{padding:32px 24px 96px;width:100%}.VPDoc .VPDocOutlineDropdown[data-v-a3c25e27]{display:none}@media (min-width: 960px) and (max-width: 1279px){.VPDoc .VPDocOutlineDropdown[data-v-a3c25e27]{display:block}}@media (min-width: 768px){.VPDoc[data-v-a3c25e27]{padding:48px 32px 128px}}@media (min-width: 960px){.VPDoc[data-v-a3c25e27]{padding:32px 32px 0}.VPDoc:not(.has-sidebar) .container[data-v-a3c25e27]{display:flex;justify-content:center;max-width:992px}.VPDoc:not(.has-sidebar) .content[data-v-a3c25e27]{max-width:752px}}@media (min-width: 1280px){.VPDoc .container[data-v-a3c25e27]{display:flex;justify-content:center}.VPDoc .aside[data-v-a3c25e27]{display:block}}@media (min-width: 1440px){.VPDoc:not(.has-sidebar) .content[data-v-a3c25e27]{max-width:784px}.VPDoc:not(.has-sidebar) .container[data-v-a3c25e27]{max-width:1104px}}.container[data-v-a3c25e27]{margin:0 auto;width:100%}.aside[data-v-a3c25e27]{position:relative;display:none;order:2;flex-grow:1;padding-left:32px;width:100%;max-width:256px}.left-aside[data-v-a3c25e27]{order:1;padding-left:unset;padding-right:32px}.aside-container[data-v-a3c25e27]{position:fixed;top:0;padding-top:calc(var(--vp-nav-height) + var(--vp-layout-top-height, 0px) + var(--vp-doc-top-height, 0px) + 32px);width:224px;height:100vh;overflow-x:hidden;overflow-y:auto;scrollbar-width:none}.aside-container[data-v-a3c25e27]::-webkit-scrollbar{display:none}.aside-curtain[data-v-a3c25e27]{position:fixed;bottom:0;z-index:10;width:224px;height:32px;background:linear-gradient(transparent,var(--vp-c-bg) 70%)}.aside-content[data-v-a3c25e27]{display:flex;flex-direction:column;min-height:calc(100vh - (var(--vp-nav-height) + var(--vp-layout-top-height, 0px) + 32px));padding-bottom:32px}.content[data-v-a3c25e27]{position:relative;margin:0 auto;width:100%}@media (min-width: 960px){.content[data-v-a3c25e27]{padding:0 32px 128px}}@media (min-width: 1280px){.content[data-v-a3c25e27]{order:1;margin:0;min-width:640px}}.content-container[data-v-a3c25e27]{margin:0 auto}.VPDoc.has-aside .content-container[data-v-a3c25e27]{max-width:688px}.external-link-icon-enabled[data-v-a3c25e27] :is(.vp-doc a[href*="://"],.vp-doc a[target=_blank]):after{content:"";color:currentColor}.VPButton[data-v-1e76fe75]{display:inline-block;border:1px solid transparent;text-align:center;font-weight:600;white-space:nowrap;transition:color .25s,border-color .25s,background-color .25s}.VPButton[data-v-1e76fe75]:active{transition:color .1s,border-color .1s,background-color .1s}.VPButton.medium[data-v-1e76fe75]{border-radius:20px;padding:0 20px;line-height:38px;font-size:14px}.VPButton.big[data-v-1e76fe75]{border-radius:24px;padding:0 
24px;line-height:46px;font-size:16px}.VPButton.brand[data-v-1e76fe75]{border-color:var(--vp-button-brand-border);color:var(--vp-button-brand-text);background-color:var(--vp-button-brand-bg)}.VPButton.brand[data-v-1e76fe75]:hover{border-color:var(--vp-button-brand-hover-border);color:var(--vp-button-brand-hover-text);background-color:var(--vp-button-brand-hover-bg)}.VPButton.brand[data-v-1e76fe75]:active{border-color:var(--vp-button-brand-active-border);color:var(--vp-button-brand-active-text);background-color:var(--vp-button-brand-active-bg)}.VPButton.alt[data-v-1e76fe75]{border-color:var(--vp-button-alt-border);color:var(--vp-button-alt-text);background-color:var(--vp-button-alt-bg)}.VPButton.alt[data-v-1e76fe75]:hover{border-color:var(--vp-button-alt-hover-border);color:var(--vp-button-alt-hover-text);background-color:var(--vp-button-alt-hover-bg)}.VPButton.alt[data-v-1e76fe75]:active{border-color:var(--vp-button-alt-active-border);color:var(--vp-button-alt-active-text);background-color:var(--vp-button-alt-active-bg)}.VPButton.sponsor[data-v-1e76fe75]{border-color:var(--vp-button-sponsor-border);color:var(--vp-button-sponsor-text);background-color:var(--vp-button-sponsor-bg)}.VPButton.sponsor[data-v-1e76fe75]:hover{border-color:var(--vp-button-sponsor-hover-border);color:var(--vp-button-sponsor-hover-text);background-color:var(--vp-button-sponsor-hover-bg)}.VPButton.sponsor[data-v-1e76fe75]:active{border-color:var(--vp-button-sponsor-active-border);color:var(--vp-button-sponsor-active-text);background-color:var(--vp-button-sponsor-active-bg)}html:not(.dark) .VPImage.dark[data-v-ab19afbb]{display:none}.dark .VPImage.light[data-v-ab19afbb]{display:none}.VPHero[data-v-5a3e9999]{margin-top:calc((var(--vp-nav-height) + var(--vp-layout-top-height, 0px)) * -1);padding:calc(var(--vp-nav-height) + var(--vp-layout-top-height, 0px) + 48px) 24px 48px}@media (min-width: 640px){.VPHero[data-v-5a3e9999]{padding:calc(var(--vp-nav-height) + var(--vp-layout-top-height, 0px) + 80px) 48px 64px}}@media (min-width: 960px){.VPHero[data-v-5a3e9999]{padding:calc(var(--vp-nav-height) + var(--vp-layout-top-height, 0px) + 80px) 64px 64px}}.container[data-v-5a3e9999]{display:flex;flex-direction:column;margin:0 auto;max-width:1152px}@media (min-width: 960px){.container[data-v-5a3e9999]{flex-direction:row}}.main[data-v-5a3e9999]{position:relative;z-index:10;order:2;flex-grow:1;flex-shrink:0}.VPHero.has-image .container[data-v-5a3e9999]{text-align:center}@media (min-width: 960px){.VPHero.has-image .container[data-v-5a3e9999]{text-align:left}}@media (min-width: 960px){.main[data-v-5a3e9999]{order:1;width:calc((100% / 3) * 2)}.VPHero.has-image .main[data-v-5a3e9999]{max-width:592px}}.name[data-v-5a3e9999],.text[data-v-5a3e9999]{max-width:392px;letter-spacing:-.4px;line-height:40px;font-size:32px;font-weight:700;white-space:pre-wrap}.VPHero.has-image .name[data-v-5a3e9999],.VPHero.has-image .text[data-v-5a3e9999]{margin:0 auto}.name[data-v-5a3e9999]{color:var(--vp-home-hero-name-color)}.clip[data-v-5a3e9999]{background:var(--vp-home-hero-name-background);-webkit-background-clip:text;background-clip:text;-webkit-text-fill-color:var(--vp-home-hero-name-color)}@media (min-width: 640px){.name[data-v-5a3e9999],.text[data-v-5a3e9999]{max-width:576px;line-height:56px;font-size:48px}}@media (min-width: 960px){.name[data-v-5a3e9999],.text[data-v-5a3e9999]{line-height:64px;font-size:56px}.VPHero.has-image .name[data-v-5a3e9999],.VPHero.has-image 
.text[data-v-5a3e9999]{margin:0}}.tagline[data-v-5a3e9999]{padding-top:8px;max-width:392px;line-height:28px;font-size:18px;font-weight:500;white-space:pre-wrap;color:var(--vp-c-text-2)}.VPHero.has-image .tagline[data-v-5a3e9999]{margin:0 auto}@media (min-width: 640px){.tagline[data-v-5a3e9999]{padding-top:12px;max-width:576px;line-height:32px;font-size:20px}}@media (min-width: 960px){.tagline[data-v-5a3e9999]{line-height:36px;font-size:24px}.VPHero.has-image .tagline[data-v-5a3e9999]{margin:0}}.actions[data-v-5a3e9999]{display:flex;flex-wrap:wrap;margin:-6px;padding-top:24px}.VPHero.has-image .actions[data-v-5a3e9999]{justify-content:center}@media (min-width: 640px){.actions[data-v-5a3e9999]{padding-top:32px}}@media (min-width: 960px){.VPHero.has-image .actions[data-v-5a3e9999]{justify-content:flex-start}}.action[data-v-5a3e9999]{flex-shrink:0;padding:6px}.image[data-v-5a3e9999]{order:1;margin:-76px -24px -48px}@media (min-width: 640px){.image[data-v-5a3e9999]{margin:-108px -24px -48px}}@media (min-width: 960px){.image[data-v-5a3e9999]{flex-grow:1;order:2;margin:0;min-height:100%}}.image-container[data-v-5a3e9999]{position:relative;margin:0 auto;width:320px;height:320px}@media (min-width: 640px){.image-container[data-v-5a3e9999]{width:392px;height:392px}}@media (min-width: 960px){.image-container[data-v-5a3e9999]{display:flex;justify-content:center;align-items:center;width:100%;height:100%;transform:translate(-32px,-32px)}}.image-bg[data-v-5a3e9999]{position:absolute;top:50%;left:50%;border-radius:50%;width:192px;height:192px;background-image:var(--vp-home-hero-image-background-image);filter:var(--vp-home-hero-image-filter);transform:translate(-50%,-50%)}@media (min-width: 640px){.image-bg[data-v-5a3e9999]{width:256px;height:256px}}@media (min-width: 960px){.image-bg[data-v-5a3e9999]{width:320px;height:320px}}[data-v-5a3e9999] .image-src{position:absolute;top:50%;left:50%;max-width:192px;max-height:192px;transform:translate(-50%,-50%)}@media (min-width: 640px){[data-v-5a3e9999] .image-src{max-width:256px;max-height:256px}}@media (min-width: 960px){[data-v-5a3e9999] .image-src{max-width:320px;max-height:320px}}.VPFeature[data-v-ee984185]{display:block;border:1px solid var(--vp-c-bg-soft);border-radius:12px;height:100%;background-color:var(--vp-c-bg-soft);transition:border-color .25s,background-color .25s}.VPFeature.link[data-v-ee984185]:hover{border-color:var(--vp-c-brand-1)}.box[data-v-ee984185]{display:flex;flex-direction:column;padding:24px;height:100%}.box[data-v-ee984185]>.VPImage{margin-bottom:20px}.icon[data-v-ee984185]{display:flex;justify-content:center;align-items:center;margin-bottom:20px;border-radius:6px;background-color:var(--vp-c-default-soft);width:48px;height:48px;font-size:24px;transition:background-color .25s}.title[data-v-ee984185]{line-height:24px;font-size:16px;font-weight:600}.details[data-v-ee984185]{flex-grow:1;padding-top:8px;line-height:24px;font-size:14px;font-weight:500;color:var(--vp-c-text-2)}.link-text[data-v-ee984185]{padding-top:8px}.link-text-value[data-v-ee984185]{display:flex;align-items:center;font-size:14px;font-weight:500;color:var(--vp-c-brand-1)}.link-text-icon[data-v-ee984185]{display:inline-block;margin-left:6px;width:14px;height:14px;fill:currentColor}.VPFeatures[data-v-b1eea84a]{position:relative;padding:0 24px}@media (min-width: 640px){.VPFeatures[data-v-b1eea84a]{padding:0 48px}}@media (min-width: 960px){.VPFeatures[data-v-b1eea84a]{padding:0 64px}}.container[data-v-b1eea84a]{margin:0 
auto;max-width:1152px}.items[data-v-b1eea84a]{display:flex;flex-wrap:wrap;margin:-8px}.item[data-v-b1eea84a]{padding:8px;width:100%}@media (min-width: 640px){.item.grid-2[data-v-b1eea84a],.item.grid-4[data-v-b1eea84a],.item.grid-6[data-v-b1eea84a]{width:50%}}@media (min-width: 768px){.item.grid-2[data-v-b1eea84a],.item.grid-4[data-v-b1eea84a]{width:50%}.item.grid-3[data-v-b1eea84a],.item.grid-6[data-v-b1eea84a]{width:calc(100% / 3)}}@media (min-width: 960px){.item.grid-4[data-v-b1eea84a]{width:25%}}.VPHome[data-v-20eabd3a]{padding-bottom:96px}.VPHome[data-v-20eabd3a] .VPHomeSponsors{margin-top:112px;margin-bottom:-128px}@media (min-width: 768px){.VPHome[data-v-20eabd3a]{padding-bottom:128px}}.VPContent[data-v-3cf691b6]{flex-grow:1;flex-shrink:0;margin:var(--vp-layout-top-height, 0px) auto 0;width:100%}.VPContent.is-home[data-v-3cf691b6]{width:100%;max-width:100%}.VPContent.has-sidebar[data-v-3cf691b6]{margin:0}@media (min-width: 960px){.VPContent[data-v-3cf691b6]{padding-top:var(--vp-nav-height)}.VPContent.has-sidebar[data-v-3cf691b6]{margin:var(--vp-layout-top-height, 0px) 0 0;padding-left:var(--vp-sidebar-width)}}@media (min-width: 1440px){.VPContent.has-sidebar[data-v-3cf691b6]{padding-right:calc((100vw - var(--vp-layout-max-width)) / 2);padding-left:calc((100vw - var(--vp-layout-max-width)) / 2 + var(--vp-sidebar-width))}}.VPFooter[data-v-e4279f1c]{position:relative;z-index:var(--vp-z-index-footer);border-top:1px solid var(--vp-c-gutter);padding:32px 24px;background-color:var(--vp-c-bg)}.VPFooter.has-sidebar[data-v-e4279f1c]{display:none}@media (min-width: 768px){.VPFooter[data-v-e4279f1c]{padding:32px}}.container[data-v-e4279f1c]{margin:0 auto;max-width:var(--vp-layout-max-width);text-align:center}.message[data-v-e4279f1c],.copyright[data-v-e4279f1c]{line-height:24px;font-size:14px;font-weight:500;color:var(--vp-c-text-2)}.VPLocalNavOutlineDropdown[data-v-24251f6f]{padding:12px 20px 11px}.VPLocalNavOutlineDropdown button[data-v-24251f6f]{display:block;font-size:12px;font-weight:500;line-height:24px;color:var(--vp-c-text-2);transition:color .5s;position:relative}.VPLocalNavOutlineDropdown button[data-v-24251f6f]:hover{color:var(--vp-c-text-1);transition:color .25s}.VPLocalNavOutlineDropdown button.open[data-v-24251f6f]{color:var(--vp-c-text-1)}.icon[data-v-24251f6f]{display:inline-block;vertical-align:middle;margin-left:2px;width:14px;height:14px;fill:currentColor}[data-v-24251f6f] .outline-link{font-size:14px;padding:2px 0}.open>.icon[data-v-24251f6f]{transform:rotate(90deg)}.items[data-v-24251f6f]{position:absolute;top:64px;right:16px;left:16px;display:grid;gap:1px;border:1px solid var(--vp-c-border);border-radius:8px;background-color:var(--vp-c-gutter);max-height:calc(var(--vp-vh, 100vh) - 86px);overflow:hidden auto;box-shadow:var(--vp-shadow-3)}.header[data-v-24251f6f]{background-color:var(--vp-c-bg-soft)}.top-link[data-v-24251f6f]{display:block;padding:0 16px;line-height:48px;font-size:14px;font-weight:500;color:var(--vp-c-brand-1)}.outline[data-v-24251f6f]{padding:8px 0;background-color:var(--vp-c-bg-soft)}.flyout-enter-active[data-v-24251f6f]{transition:all .2s ease-out}.flyout-leave-active[data-v-24251f6f]{transition:all .15s ease-in}.flyout-enter-from[data-v-24251f6f],.flyout-leave-to[data-v-24251f6f]{opacity:0;transform:translateY(-16px)}.VPLocalNav[data-v-9e669cc1]{position:sticky;top:0;left:0;z-index:var(--vp-z-index-local-nav);display:flex;justify-content:space-between;align-items:center;border-top:1px solid var(--vp-c-gutter);border-bottom:1px solid 
var(--vp-c-gutter);padding-top:var(--vp-layout-top-height, 0px);width:100%;background-color:var(--vp-local-nav-bg-color)}.VPLocalNav.fixed[data-v-9e669cc1]{position:fixed}.VPLocalNav.reached-top[data-v-9e669cc1]{border-top-color:transparent}@media (min-width: 960px){.VPLocalNav[data-v-9e669cc1]{display:none}}.menu[data-v-9e669cc1]{display:flex;align-items:center;padding:12px 24px 11px;line-height:24px;font-size:12px;font-weight:500;color:var(--vp-c-text-2);transition:color .5s}.menu[data-v-9e669cc1]:hover{color:var(--vp-c-text-1);transition:color .25s}@media (min-width: 768px){.menu[data-v-9e669cc1]{padding:0 32px}}.menu-icon[data-v-9e669cc1]{margin-right:8px;width:16px;height:16px;fill:currentColor}.VPOutlineDropdown[data-v-9e669cc1]{padding:12px 24px 11px}@media (min-width: 768px){.VPOutlineDropdown[data-v-9e669cc1]{padding:12px 32px 11px}}.VPSwitch[data-v-1c29e291]{position:relative;border-radius:11px;display:block;width:40px;height:22px;flex-shrink:0;border:1px solid var(--vp-input-border-color);background-color:var(--vp-input-switch-bg-color);transition:border-color .25s!important}.VPSwitch[data-v-1c29e291]:hover{border-color:var(--vp-c-brand-1)}.check[data-v-1c29e291]{position:absolute;top:1px;left:1px;width:18px;height:18px;border-radius:50%;background-color:var(--vp-c-neutral-inverse);box-shadow:var(--vp-shadow-1);transition:transform .25s!important}.icon[data-v-1c29e291]{position:relative;display:block;width:18px;height:18px;border-radius:50%;overflow:hidden}.icon[data-v-1c29e291] svg{position:absolute;top:3px;left:3px;width:12px;height:12px;fill:var(--vp-c-text-2)}.dark .icon[data-v-1c29e291] svg{fill:var(--vp-c-text-1);transition:opacity .25s!important}.sun[data-v-3329432d]{opacity:1}.moon[data-v-3329432d],.dark .sun[data-v-3329432d]{opacity:0}.dark .moon[data-v-3329432d]{opacity:1}.dark .VPSwitchAppearance[data-v-3329432d] .check{transform:translate(18px)}.VPNavBarAppearance[data-v-283b26e9]{display:none}@media (min-width: 1280px){.VPNavBarAppearance[data-v-283b26e9]{display:flex;align-items:center}}.VPMenuGroup+.VPMenuLink[data-v-f51f088d]{margin:12px -12px 0;border-top:1px solid var(--vp-c-divider);padding:12px 12px 0}.link[data-v-f51f088d]{display:block;border-radius:6px;padding:0 12px;line-height:32px;font-size:14px;font-weight:500;color:var(--vp-c-text-1);white-space:nowrap;transition:background-color .25s,color .25s}.link[data-v-f51f088d]:hover{color:var(--vp-c-brand-1);background-color:var(--vp-c-default-soft)}.link.active[data-v-f51f088d]{color:var(--vp-c-brand-1)}.VPMenuGroup[data-v-a6b0397c]{margin:12px -12px 0;border-top:1px solid var(--vp-c-divider);padding:12px 12px 0}.VPMenuGroup[data-v-a6b0397c]:first-child{margin-top:0;border-top:0;padding-top:0}.VPMenuGroup+.VPMenuGroup[data-v-a6b0397c]{margin-top:12px;border-top:1px solid var(--vp-c-divider)}.title[data-v-a6b0397c]{padding:0 12px;line-height:32px;font-size:14px;font-weight:600;color:var(--vp-c-text-2);white-space:nowrap;transition:color .25s}.VPMenu[data-v-e42ed9b3]{border-radius:12px;padding:12px;min-width:128px;border:1px solid var(--vp-c-divider);background-color:var(--vp-c-bg-elv);box-shadow:var(--vp-shadow-3);transition:background-color .5s;max-height:calc(100vh - var(--vp-nav-height));overflow-y:auto}.VPMenu[data-v-e42ed9b3] .group{margin:0 -12px;padding:0 12px 12px}.VPMenu[data-v-e42ed9b3] .group+.group{border-top:1px solid var(--vp-c-divider);padding:11px 12px 12px}.VPMenu[data-v-e42ed9b3] .group:last-child{padding-bottom:0}.VPMenu[data-v-e42ed9b3] .group+.item{border-top:1px solid 
var(--vp-c-divider);padding:11px 16px 0}.VPMenu[data-v-e42ed9b3] .item{padding:0 16px;white-space:nowrap}.VPMenu[data-v-e42ed9b3] .label{flex-grow:1;line-height:28px;font-size:12px;font-weight:500;color:var(--vp-c-text-2);transition:color .5s}.VPMenu[data-v-e42ed9b3] .action{padding-left:24px}.VPFlyout[data-v-aa8de344]{position:relative}.VPFlyout[data-v-aa8de344]:hover{color:var(--vp-c-brand-1);transition:color .25s}.VPFlyout:hover .text[data-v-aa8de344]{color:var(--vp-c-text-2)}.VPFlyout:hover .icon[data-v-aa8de344]{fill:var(--vp-c-text-2)}.VPFlyout.active .text[data-v-aa8de344]{color:var(--vp-c-brand-1)}.VPFlyout.active:hover .text[data-v-aa8de344]{color:var(--vp-c-brand-2)}.VPFlyout:hover .menu[data-v-aa8de344],.button[aria-expanded=true]+.menu[data-v-aa8de344]{opacity:1;visibility:visible;transform:translateY(0)}.button[aria-expanded=false]+.menu[data-v-aa8de344]{opacity:0;visibility:hidden;transform:translateY(0)}.button[data-v-aa8de344]{display:flex;align-items:center;padding:0 12px;height:var(--vp-nav-height);color:var(--vp-c-text-1);transition:color .5s}.text[data-v-aa8de344]{display:flex;align-items:center;line-height:var(--vp-nav-height);font-size:14px;font-weight:500;color:var(--vp-c-text-1);transition:color .25s}.option-icon[data-v-aa8de344]{margin-right:0;width:16px;height:16px;fill:currentColor}.text-icon[data-v-aa8de344]{margin-left:4px;width:14px;height:14px;fill:currentColor}.icon[data-v-aa8de344]{width:20px;height:20px;fill:currentColor;transition:fill .25s}.menu[data-v-aa8de344]{position:absolute;top:calc(var(--vp-nav-height) / 2 + 20px);right:0;opacity:0;visibility:hidden;transition:opacity .25s,visibility .25s,transform .25s}.VPSocialLink[data-v-16cf740a]{display:flex;justify-content:center;align-items:center;width:36px;height:36px;color:var(--vp-c-text-2);transition:color .5s}.VPSocialLink[data-v-16cf740a]:hover{color:var(--vp-c-text-1);transition:color .25s}.VPSocialLink[data-v-16cf740a]>svg{width:20px;height:20px;fill:currentColor}.VPSocialLinks[data-v-e71e869c]{display:flex;justify-content:center}.VPNavBarExtra[data-v-c8c2ae4b]{display:none;margin-right:-12px}@media (min-width: 768px){.VPNavBarExtra[data-v-c8c2ae4b]{display:block}}@media (min-width: 1280px){.VPNavBarExtra[data-v-c8c2ae4b]{display:none}}.trans-title[data-v-c8c2ae4b]{padding:0 24px 0 12px;line-height:32px;font-size:14px;font-weight:700;color:var(--vp-c-text-1)}.item.appearance[data-v-c8c2ae4b],.item.social-links[data-v-c8c2ae4b]{display:flex;align-items:center;padding:0 12px}.item.appearance[data-v-c8c2ae4b]{min-width:176px}.appearance-action[data-v-c8c2ae4b]{margin-right:-2px}.social-links-list[data-v-c8c2ae4b]{margin:-4px -8px}.VPNavBarHamburger[data-v-6bee1efd]{display:flex;justify-content:center;align-items:center;width:48px;height:var(--vp-nav-height)}@media (min-width: 768px){.VPNavBarHamburger[data-v-6bee1efd]{display:none}}.container[data-v-6bee1efd]{position:relative;width:16px;height:14px;overflow:hidden}.VPNavBarHamburger:hover .top[data-v-6bee1efd]{top:0;left:0;transform:translate(4px)}.VPNavBarHamburger:hover .middle[data-v-6bee1efd]{top:6px;left:0;transform:translate(0)}.VPNavBarHamburger:hover .bottom[data-v-6bee1efd]{top:12px;left:0;transform:translate(8px)}.VPNavBarHamburger.active .top[data-v-6bee1efd]{top:6px;transform:translate(0) rotate(225deg)}.VPNavBarHamburger.active .middle[data-v-6bee1efd]{top:6px;transform:translate(16px)}.VPNavBarHamburger.active .bottom[data-v-6bee1efd]{top:6px;transform:translate(0) rotate(135deg)}.VPNavBarHamburger.active:hover 
.top[data-v-6bee1efd],.VPNavBarHamburger.active:hover .middle[data-v-6bee1efd],.VPNavBarHamburger.active:hover .bottom[data-v-6bee1efd]{background-color:var(--vp-c-text-2);transition:top .25s,background-color .25s,transform .25s}.top[data-v-6bee1efd],.middle[data-v-6bee1efd],.bottom[data-v-6bee1efd]{position:absolute;width:16px;height:2px;background-color:var(--vp-c-text-1);transition:top .25s,background-color .5s,transform .25s}.top[data-v-6bee1efd]{top:0;left:0;transform:translate(0)}.middle[data-v-6bee1efd]{top:6px;left:0;transform:translate(8px)}.bottom[data-v-6bee1efd]{top:12px;left:0;transform:translate(4px)}.VPNavBarMenuLink[data-v-cb318fec]{display:flex;align-items:center;padding:0 12px;line-height:var(--vp-nav-height);font-size:14px;font-weight:500;color:var(--vp-c-text-1);transition:color .25s}.VPNavBarMenuLink.active[data-v-cb318fec],.VPNavBarMenuLink[data-v-cb318fec]:hover{color:var(--vp-c-brand-1)}.VPNavBarMenu[data-v-f732b5d0]{display:none}@media (min-width: 768px){.VPNavBarMenu[data-v-f732b5d0]{display:flex}}/*! @docsearch/css 3.5.2 | MIT License | © Algolia, Inc. and contributors | https://docsearch.algolia.com */:root{--docsearch-primary-color:#5468ff;--docsearch-text-color:#1c1e21;--docsearch-spacing:12px;--docsearch-icon-stroke-width:1.4;--docsearch-highlight-color:var(--docsearch-primary-color);--docsearch-muted-color:#969faf;--docsearch-container-background:rgba(101,108,133,.8);--docsearch-logo-color:#5468ff;--docsearch-modal-width:560px;--docsearch-modal-height:600px;--docsearch-modal-background:#f5f6f7;--docsearch-modal-shadow:inset 1px 1px 0 0 hsla(0,0%,100%,.5),0 3px 8px 0 #555a64;--docsearch-searchbox-height:56px;--docsearch-searchbox-background:#ebedf0;--docsearch-searchbox-focus-background:#fff;--docsearch-searchbox-shadow:inset 0 0 0 2px var(--docsearch-primary-color);--docsearch-hit-height:56px;--docsearch-hit-color:#444950;--docsearch-hit-active-color:#fff;--docsearch-hit-background:#fff;--docsearch-hit-shadow:0 1px 3px 0 #d4d9e1;--docsearch-key-gradient:linear-gradient(-225deg,#d5dbe4,#f8f8f8);--docsearch-key-shadow:inset 0 -2px 0 0 #cdcde6,inset 0 0 1px 1px #fff,0 1px 2px 1px rgba(30,35,90,.4);--docsearch-footer-height:44px;--docsearch-footer-background:#fff;--docsearch-footer-shadow:0 -1px 0 0 #e0e3e8,0 -3px 6px 0 rgba(69,98,155,.12)}html[data-theme=dark]{--docsearch-text-color:#f5f6f7;--docsearch-container-background:rgba(9,10,17,.8);--docsearch-modal-background:#15172a;--docsearch-modal-shadow:inset 1px 1px 0 0 #2c2e40,0 3px 8px 0 #000309;--docsearch-searchbox-background:#090a11;--docsearch-searchbox-focus-background:#000;--docsearch-hit-color:#bec3c9;--docsearch-hit-shadow:none;--docsearch-hit-background:#090a11;--docsearch-key-gradient:linear-gradient(-26.5deg,#565872,#31355b);--docsearch-key-shadow:inset 0 -2px 0 0 #282d55,inset 0 0 1px 1px #51577d,0 2px 2px 0 rgba(3,4,9,.3);--docsearch-footer-background:#1e2136;--docsearch-footer-shadow:inset 0 1px 0 0 rgba(73,76,106,.5),0 -4px 8px 0 rgba(0,0,0,.2);--docsearch-logo-color:#fff;--docsearch-muted-color:#7f8497}.DocSearch-Button{align-items:center;background:var(--docsearch-searchbox-background);border:0;border-radius:40px;color:var(--docsearch-muted-color);cursor:pointer;display:flex;font-weight:500;height:36px;justify-content:space-between;margin:0 0 0 16px;padding:0 
8px;-webkit-user-select:none;user-select:none}.DocSearch-Button:active,.DocSearch-Button:focus,.DocSearch-Button:hover{background:var(--docsearch-searchbox-focus-background);box-shadow:var(--docsearch-searchbox-shadow);color:var(--docsearch-text-color);outline:none}.DocSearch-Button-Container{align-items:center;display:flex}.DocSearch-Search-Icon{stroke-width:1.6}.DocSearch-Button .DocSearch-Search-Icon{color:var(--docsearch-text-color)}.DocSearch-Button-Placeholder{font-size:1rem;padding:0 12px 0 6px}.DocSearch-Button-Keys{display:flex;min-width:calc(40px + .8em)}.DocSearch-Button-Key{align-items:center;background:var(--docsearch-key-gradient);border-radius:3px;box-shadow:var(--docsearch-key-shadow);color:var(--docsearch-muted-color);display:flex;height:18px;justify-content:center;margin-right:.4em;position:relative;padding:0 0 2px;border:0;top:-1px;width:20px}@media (max-width:768px){.DocSearch-Button-Keys,.DocSearch-Button-Placeholder{display:none}}.DocSearch--active{overflow:hidden!important}.DocSearch-Container,.DocSearch-Container *{box-sizing:border-box}.DocSearch-Container{background-color:var(--docsearch-container-background);height:100vh;left:0;position:fixed;top:0;width:100vw;z-index:200}.DocSearch-Container a{text-decoration:none}.DocSearch-Link{-webkit-appearance:none;-moz-appearance:none;appearance:none;background:none;border:0;color:var(--docsearch-highlight-color);cursor:pointer;font:inherit;margin:0;padding:0}.DocSearch-Modal{background:var(--docsearch-modal-background);border-radius:6px;box-shadow:var(--docsearch-modal-shadow);flex-direction:column;margin:60px auto auto;max-width:var(--docsearch-modal-width);position:relative}.DocSearch-SearchBar{display:flex;padding:var(--docsearch-spacing) var(--docsearch-spacing) 0}.DocSearch-Form{align-items:center;background:var(--docsearch-searchbox-focus-background);border-radius:4px;box-shadow:var(--docsearch-searchbox-shadow);display:flex;height:var(--docsearch-searchbox-height);margin:0;padding:0 var(--docsearch-spacing);position:relative;width:100%}.DocSearch-Input{-webkit-appearance:none;-moz-appearance:none;appearance:none;background:transparent;border:0;color:var(--docsearch-text-color);flex:1;font:inherit;font-size:1.2em;height:100%;outline:none;padding:0 0 0 8px;width:80%}.DocSearch-Input::placeholder{color:var(--docsearch-muted-color);opacity:1}.DocSearch-Input::-webkit-search-cancel-button,.DocSearch-Input::-webkit-search-decoration,.DocSearch-Input::-webkit-search-results-button,.DocSearch-Input::-webkit-search-results-decoration{display:none}.DocSearch-LoadingIndicator,.DocSearch-MagnifierLabel,.DocSearch-Reset{margin:0;padding:0}.DocSearch-MagnifierLabel,.DocSearch-Reset{align-items:center;color:var(--docsearch-highlight-color);display:flex;justify-content:center}.DocSearch-Container--Stalled .DocSearch-MagnifierLabel,.DocSearch-LoadingIndicator{display:none}.DocSearch-Container--Stalled .DocSearch-LoadingIndicator{align-items:center;color:var(--docsearch-highlight-color);display:flex;justify-content:center}@media screen and (prefers-reduced-motion:reduce){.DocSearch-Reset{animation:none;-webkit-appearance:none;-moz-appearance:none;appearance:none;background:none;border:0;border-radius:50%;color:var(--docsearch-icon-color);cursor:pointer;right:0;stroke-width:var(--docsearch-icon-stroke-width)}}.DocSearch-Reset{animation:fade-in .1s ease-in 
forwards;-webkit-appearance:none;-moz-appearance:none;appearance:none;background:none;border:0;border-radius:50%;color:var(--docsearch-icon-color);cursor:pointer;padding:2px;right:0;stroke-width:var(--docsearch-icon-stroke-width)}.DocSearch-Reset[hidden]{display:none}.DocSearch-Reset:hover{color:var(--docsearch-highlight-color)}.DocSearch-LoadingIndicator svg,.DocSearch-MagnifierLabel svg{height:24px;width:24px}.DocSearch-Cancel{display:none}.DocSearch-Dropdown{max-height:calc(var(--docsearch-modal-height) - var(--docsearch-searchbox-height) - var(--docsearch-spacing) - var(--docsearch-footer-height));min-height:var(--docsearch-spacing);overflow-y:auto;overflow-y:overlay;padding:0 var(--docsearch-spacing);scrollbar-color:var(--docsearch-muted-color) var(--docsearch-modal-background);scrollbar-width:thin}.DocSearch-Dropdown::-webkit-scrollbar{width:12px}.DocSearch-Dropdown::-webkit-scrollbar-track{background:transparent}.DocSearch-Dropdown::-webkit-scrollbar-thumb{background-color:var(--docsearch-muted-color);border:3px solid var(--docsearch-modal-background);border-radius:20px}.DocSearch-Dropdown ul{list-style:none;margin:0;padding:0}.DocSearch-Label{font-size:.75em;line-height:1.6em}.DocSearch-Help,.DocSearch-Label{color:var(--docsearch-muted-color)}.DocSearch-Help{font-size:.9em;margin:0;-webkit-user-select:none;user-select:none}.DocSearch-Title{font-size:1.2em}.DocSearch-Logo a{display:flex}.DocSearch-Logo svg{color:var(--docsearch-logo-color);margin-left:8px}.DocSearch-Hits:last-of-type{margin-bottom:24px}.DocSearch-Hits mark{background:none;color:var(--docsearch-highlight-color)}.DocSearch-HitsFooter{color:var(--docsearch-muted-color);display:flex;font-size:.85em;justify-content:center;margin-bottom:var(--docsearch-spacing);padding:var(--docsearch-spacing)}.DocSearch-HitsFooter a{border-bottom:1px solid;color:inherit}.DocSearch-Hit{border-radius:4px;display:flex;padding-bottom:4px;position:relative}@media screen and (prefers-reduced-motion:reduce){.DocSearch-Hit--deleting{transition:none}}.DocSearch-Hit--deleting{opacity:0;transition:all .25s linear}@media screen and (prefers-reduced-motion:reduce){.DocSearch-Hit--favoriting{transition:none}}.DocSearch-Hit--favoriting{transform:scale(0);transform-origin:top center;transition:all .25s linear;transition-delay:.25s}.DocSearch-Hit a{background:var(--docsearch-hit-background);border-radius:4px;box-shadow:var(--docsearch-hit-shadow);display:block;padding-left:var(--docsearch-spacing);width:100%}.DocSearch-Hit-source{background:var(--docsearch-modal-background);color:var(--docsearch-highlight-color);font-size:.85em;font-weight:600;line-height:32px;margin:0 -4px;padding:8px 4px 0;position:sticky;top:0;z-index:10}.DocSearch-Hit-Tree{color:var(--docsearch-muted-color);height:var(--docsearch-hit-height);opacity:.5;stroke-width:var(--docsearch-icon-stroke-width);width:24px}.DocSearch-Hit[aria-selected=true] a{background-color:var(--docsearch-highlight-color)}.DocSearch-Hit[aria-selected=true] mark{text-decoration:underline}.DocSearch-Hit-Container{align-items:center;color:var(--docsearch-hit-color);display:flex;flex-direction:row;height:var(--docsearch-hit-height);padding:0 var(--docsearch-spacing) 0 0}.DocSearch-Hit-icon{height:20px;width:20px}.DocSearch-Hit-action,.DocSearch-Hit-icon{color:var(--docsearch-muted-color);stroke-width:var(--docsearch-icon-stroke-width)}.DocSearch-Hit-action{align-items:center;display:flex;height:22px;width:22px}.DocSearch-Hit-action 
svg{display:block;height:18px;width:18px}.DocSearch-Hit-action+.DocSearch-Hit-action{margin-left:6px}.DocSearch-Hit-action-button{-webkit-appearance:none;-moz-appearance:none;appearance:none;background:none;border:0;border-radius:50%;color:inherit;cursor:pointer;padding:2px}svg.DocSearch-Hit-Select-Icon{display:none}.DocSearch-Hit[aria-selected=true] .DocSearch-Hit-Select-Icon{display:block}.DocSearch-Hit-action-button:focus,.DocSearch-Hit-action-button:hover{background:rgba(0,0,0,.2);transition:background-color .1s ease-in}@media screen and (prefers-reduced-motion:reduce){.DocSearch-Hit-action-button:focus,.DocSearch-Hit-action-button:hover{transition:none}}.DocSearch-Hit-action-button:focus path,.DocSearch-Hit-action-button:hover path{fill:#fff}.DocSearch-Hit-content-wrapper{display:flex;flex:1 1 auto;flex-direction:column;font-weight:500;justify-content:center;line-height:1.2em;margin:0 8px;overflow-x:hidden;position:relative;text-overflow:ellipsis;white-space:nowrap;width:80%}.DocSearch-Hit-title{font-size:.9em}.DocSearch-Hit-path{color:var(--docsearch-muted-color);font-size:.75em}.DocSearch-Hit[aria-selected=true] .DocSearch-Hit-action,.DocSearch-Hit[aria-selected=true] .DocSearch-Hit-icon,.DocSearch-Hit[aria-selected=true] .DocSearch-Hit-path,.DocSearch-Hit[aria-selected=true] .DocSearch-Hit-text,.DocSearch-Hit[aria-selected=true] .DocSearch-Hit-title,.DocSearch-Hit[aria-selected=true] .DocSearch-Hit-Tree,.DocSearch-Hit[aria-selected=true] mark{color:var(--docsearch-hit-active-color)!important}@media screen and (prefers-reduced-motion:reduce){.DocSearch-Hit-action-button:focus,.DocSearch-Hit-action-button:hover{background:rgba(0,0,0,.2);transition:none}}.DocSearch-ErrorScreen,.DocSearch-NoResults,.DocSearch-StartScreen{font-size:.9em;margin:0 auto;padding:36px 0;text-align:center;width:80%}.DocSearch-Screen-Icon{color:var(--docsearch-muted-color);padding-bottom:12px}.DocSearch-NoResults-Prefill-List{display:inline-block;padding-bottom:24px;text-align:left}.DocSearch-NoResults-Prefill-List ul{display:inline-block;padding:8px 0 0}.DocSearch-NoResults-Prefill-List li{list-style-position:inside;list-style-type:"» "}.DocSearch-Prefill{-webkit-appearance:none;-moz-appearance:none;appearance:none;background:none;border:0;border-radius:1em;color:var(--docsearch-highlight-color);cursor:pointer;display:inline-block;font-size:1em;font-weight:700;padding:0}.DocSearch-Prefill:focus,.DocSearch-Prefill:hover{outline:none;text-decoration:underline}.DocSearch-Footer{align-items:center;background:var(--docsearch-footer-background);border-radius:0 0 8px 8px;box-shadow:var(--docsearch-footer-shadow);display:flex;flex-direction:row-reverse;flex-shrink:0;height:var(--docsearch-footer-height);justify-content:space-between;padding:0 var(--docsearch-spacing);position:relative;-webkit-user-select:none;user-select:none;width:100%;z-index:300}.DocSearch-Commands{color:var(--docsearch-muted-color);display:flex;list-style:none;margin:0;padding:0}.DocSearch-Commands li{align-items:center;display:flex}.DocSearch-Commands li:not(:last-of-type){margin-right:.8em}.DocSearch-Commands-Key{align-items:center;background:var(--docsearch-key-gradient);border-radius:2px;box-shadow:var(--docsearch-key-shadow);display:flex;height:18px;justify-content:center;margin-right:.4em;padding:0 0 1px;color:var(--docsearch-muted-color);border:0;width:20px}@media 
(max-width:768px){:root{--docsearch-spacing:10px;--docsearch-footer-height:40px}.DocSearch-Dropdown{height:100%}.DocSearch-Container{height:100vh;height:-webkit-fill-available;height:calc(var(--docsearch-vh, 1vh)*100);position:absolute}.DocSearch-Footer{border-radius:0;bottom:0;position:absolute}.DocSearch-Hit-content-wrapper{display:flex;position:relative;width:80%}.DocSearch-Modal{border-radius:0;box-shadow:none;height:100vh;height:-webkit-fill-available;height:calc(var(--docsearch-vh, 1vh)*100);margin:0;max-width:100%;width:100%}.DocSearch-Dropdown{max-height:calc(var(--docsearch-vh, 1vh)*100 - var(--docsearch-searchbox-height) - var(--docsearch-spacing) - var(--docsearch-footer-height))}.DocSearch-Cancel{-webkit-appearance:none;-moz-appearance:none;appearance:none;background:none;border:0;color:var(--docsearch-highlight-color);cursor:pointer;display:inline-block;flex:none;font:inherit;font-size:1em;font-weight:500;margin-left:var(--docsearch-spacing);outline:none;overflow:hidden;padding:0;-webkit-user-select:none;user-select:none;white-space:nowrap}.DocSearch-Commands,.DocSearch-Hit-Tree{display:none}}@keyframes fade-in{0%{opacity:0}to{opacity:1}}[class*=DocSearch]{--docsearch-primary-color: var(--vp-c-brand-1);--docsearch-highlight-color: var(--docsearch-primary-color);--docsearch-text-color: var(--vp-c-text-1);--docsearch-muted-color: var(--vp-c-text-2);--docsearch-searchbox-shadow: none;--docsearch-searchbox-background: transparent;--docsearch-searchbox-focus-background: transparent;--docsearch-key-gradient: transparent;--docsearch-key-shadow: none;--docsearch-modal-background: var(--vp-c-bg-soft);--docsearch-footer-background: var(--vp-c-bg)}.dark [class*=DocSearch]{--docsearch-modal-shadow: none;--docsearch-footer-shadow: none;--docsearch-logo-color: var(--vp-c-text-2);--docsearch-hit-background: var(--vp-c-default-soft);--docsearch-hit-color: var(--vp-c-text-2);--docsearch-hit-shadow: none}.DocSearch-Button{display:flex;justify-content:center;align-items:center;margin:0;padding:0;width:48px;height:55px;background:transparent;transition:border-color .25s}.DocSearch-Button:hover{background:transparent}.DocSearch-Button:focus{outline:1px dotted;outline:5px auto -webkit-focus-ring-color}.DocSearch-Button:focus:not(:focus-visible){outline:none!important}@media (min-width: 768px){.DocSearch-Button{justify-content:flex-start;border:1px solid transparent;border-radius:8px;padding:0 10px 0 12px;width:100%;height:40px;background-color:var(--vp-c-bg-alt)}.DocSearch-Button:hover{border-color:var(--vp-c-brand-1);background:var(--vp-c-bg-alt)}}.DocSearch-Button .DocSearch-Button-Container{display:flex;align-items:center}.DocSearch-Button .DocSearch-Search-Icon{position:relative;width:16px;height:16px;color:var(--vp-c-text-1);fill:currentColor;transition:color .5s}.DocSearch-Button:hover .DocSearch-Search-Icon{color:var(--vp-c-text-1)}@media (min-width: 768px){.DocSearch-Button .DocSearch-Search-Icon{top:1px;margin-right:8px;width:14px;height:14px;color:var(--vp-c-text-2)}}.DocSearch-Button .DocSearch-Button-Placeholder{display:none;margin-top:2px;padding:0 16px 0 0;font-size:13px;font-weight:500;color:var(--vp-c-text-2);transition:color .5s}.DocSearch-Button:hover .DocSearch-Button-Placeholder{color:var(--vp-c-text-1)}@media (min-width: 768px){.DocSearch-Button .DocSearch-Button-Placeholder{display:inline-block}}.DocSearch-Button .DocSearch-Button-Keys{direction:ltr;display:none;min-width:auto}@media (min-width: 768px){.DocSearch-Button 
.DocSearch-Button-Keys{display:flex;align-items:center}}.DocSearch-Button .DocSearch-Button-Key{display:block;margin:2px 0 0;border:1px solid var(--vp-c-divider);border-right:none;border-radius:4px 0 0 4px;padding-left:6px;min-width:0;width:auto;height:22px;line-height:22px;font-family:var(--vp-font-family-base);font-size:12px;font-weight:500;transition:color .5s,border-color .5s}.DocSearch-Button .DocSearch-Button-Key+.DocSearch-Button-Key{border-right:1px solid var(--vp-c-divider);border-left:none;border-radius:0 4px 4px 0;padding-left:2px;padding-right:6px}.DocSearch-Button .DocSearch-Button-Key:first-child{font-size:0!important}.DocSearch-Button .DocSearch-Button-Key:first-child:after{content:"Ctrl";font-size:12px;letter-spacing:normal;color:var(--docsearch-muted-color)}.mac .DocSearch-Button .DocSearch-Button-Key:first-child:after{content:"⌘"}.DocSearch-Button .DocSearch-Button-Key:first-child>*{display:none}.VPNavBarSearch{display:flex;align-items:center}@media (min-width: 768px){.VPNavBarSearch{flex-grow:1;padding-left:24px}}@media (min-width: 960px){.VPNavBarSearch{padding-left:32px}}.dark .DocSearch-Footer{border-top:1px solid var(--vp-c-divider)}.DocSearch-Form{border:1px solid var(--vp-c-brand-1);background-color:var(--vp-c-white)}.dark .DocSearch-Form{background-color:var(--vp-c-default-soft)}.DocSearch-Screen-Icon>svg{margin:auto}.VPNavBarSocialLinks[data-v-ef6192dc]{display:none}@media (min-width: 1280px){.VPNavBarSocialLinks[data-v-ef6192dc]{display:flex;align-items:center}}.title[data-v-2973dbb4]{display:flex;align-items:center;border-bottom:1px solid transparent;width:100%;height:var(--vp-nav-height);font-size:16px;font-weight:600;color:var(--vp-c-text-1);transition:opacity .25s}@media (min-width: 960px){.title[data-v-2973dbb4]{flex-shrink:0}.VPNavBarTitle.has-sidebar .title[data-v-2973dbb4]{border-bottom-color:var(--vp-c-divider)}}[data-v-2973dbb4] .logo{margin-right:8px;height:var(--vp-nav-logo-height)}.VPNavBarTranslations[data-v-ff4524ae]{display:none}@media (min-width: 1280px){.VPNavBarTranslations[data-v-ff4524ae]{display:flex;align-items:center}}.title[data-v-ff4524ae]{padding:0 24px 0 12px;line-height:32px;font-size:14px;font-weight:700;color:var(--vp-c-text-1)}.VPNavBar[data-v-f1abbc6e]{position:relative;border-bottom:1px solid transparent;padding:0 8px 0 24px;height:var(--vp-nav-height);pointer-events:none;white-space:nowrap}@media (min-width: 768px){.VPNavBar[data-v-f1abbc6e]{padding:0 32px}}@media (min-width: 960px){.VPNavBar.has-sidebar[data-v-f1abbc6e]{padding:0}.VPNavBar[data-v-f1abbc6e]:not(.has-sidebar):not(.top){border-bottom-color:var(--vp-c-gutter);background-color:var(--vp-nav-bg-color)}}.container[data-v-f1abbc6e]{display:flex;justify-content:space-between;margin:0 auto;max-width:calc(var(--vp-layout-max-width) - 64px);height:var(--vp-nav-height);pointer-events:none}.container>.title[data-v-f1abbc6e],.container>.content[data-v-f1abbc6e]{pointer-events:none}.container[data-v-f1abbc6e] *{pointer-events:auto}@media (min-width: 960px){.VPNavBar.has-sidebar .container[data-v-f1abbc6e]{max-width:100%}}.title[data-v-f1abbc6e]{flex-shrink:0;height:calc(var(--vp-nav-height) - 1px);transition:background-color .5s}@media (min-width: 960px){.VPNavBar.has-sidebar .title[data-v-f1abbc6e]{position:absolute;top:0;left:0;z-index:2;padding:0 32px;width:var(--vp-sidebar-width);height:var(--vp-nav-height);background-color:transparent}}@media (min-width: 1440px){.VPNavBar.has-sidebar .title[data-v-f1abbc6e]{padding-left:max(32px,calc((100% - (var(--vp-layout-max-width) - 
64px)) / 2));width:calc((100% - (var(--vp-layout-max-width) - 64px)) / 2 + var(--vp-sidebar-width) - 32px)}}.content[data-v-f1abbc6e]{flex-grow:1}@media (min-width: 960px){.VPNavBar.has-sidebar .content[data-v-f1abbc6e]{position:relative;z-index:1;padding-right:32px;padding-left:var(--vp-sidebar-width)}}@media (min-width: 1440px){.VPNavBar.has-sidebar .content[data-v-f1abbc6e]{padding-right:calc((100vw - var(--vp-layout-max-width)) / 2 + 32px);padding-left:calc((100vw - var(--vp-layout-max-width)) / 2 + var(--vp-sidebar-width))}}.content-body[data-v-f1abbc6e]{display:flex;justify-content:flex-end;align-items:center;height:calc(var(--vp-nav-height) - 1px);transition:background-color .5s}@media (min-width: 960px){.VPNavBar:not(.top) .content-body[data-v-f1abbc6e]{position:relative;background-color:var(--vp-nav-bg-color)}}@media (max-width: 767px){.content-body[data-v-f1abbc6e]{column-gap:.5rem}}.menu+.translations[data-v-f1abbc6e]:before,.menu+.appearance[data-v-f1abbc6e]:before,.menu+.social-links[data-v-f1abbc6e]:before,.translations+.appearance[data-v-f1abbc6e]:before,.appearance+.social-links[data-v-f1abbc6e]:before{margin-right:8px;margin-left:8px;width:1px;height:24px;background-color:var(--vp-c-divider);content:""}.menu+.appearance[data-v-f1abbc6e]:before,.translations+.appearance[data-v-f1abbc6e]:before{margin-right:16px}.appearance+.social-links[data-v-f1abbc6e]:before{margin-left:16px}.social-links[data-v-f1abbc6e]{margin-right:-8px}@media (min-width: 960px){.VPNavBar.has-sidebar .curtain[data-v-f1abbc6e]{position:absolute;right:0;bottom:-31px;width:calc(100% - var(--vp-sidebar-width));height:32px}.VPNavBar.has-sidebar .curtain[data-v-f1abbc6e]:before{display:block;width:100%;height:32px;background:linear-gradient(var(--vp-c-bg),transparent 70%);content:""}}@media (min-width: 1440px){.VPNavBar.has-sidebar .curtain[data-v-f1abbc6e]{width:calc(100% - ((100vw - var(--vp-layout-max-width)) / 2 + var(--vp-sidebar-width)))}}.VPNavScreenAppearance[data-v-0dc5cf49]{display:flex;justify-content:space-between;align-items:center;border-radius:8px;padding:12px 14px 12px 16px;background-color:var(--vp-c-bg-soft)}.text[data-v-0dc5cf49]{line-height:24px;font-size:12px;font-weight:500;color:var(--vp-c-text-2)}.VPNavScreenMenuLink[data-v-fe523e3d]{display:block;border-bottom:1px solid var(--vp-c-divider);padding:12px 0 11px;line-height:24px;font-size:14px;font-weight:500;color:var(--vp-c-text-1);transition:border-color .25s,color .25s}.VPNavScreenMenuLink[data-v-fe523e3d]:hover{color:var(--vp-c-brand-1)}.VPNavScreenMenuGroupLink[data-v-aea78dd1]{display:block;margin-left:12px;line-height:32px;font-size:14px;font-weight:400;color:var(--vp-c-text-1);transition:color .25s}.VPNavScreenMenuGroupLink[data-v-aea78dd1]:hover{color:var(--vp-c-brand-1)}.VPNavScreenMenuGroupSection[data-v-f60dbfa7]{display:block}.title[data-v-f60dbfa7]{line-height:32px;font-size:13px;font-weight:700;color:var(--vp-c-text-2);transition:color .25s}.VPNavScreenMenuGroup[data-v-c2c554ed]{border-bottom:1px solid var(--vp-c-divider);height:48px;overflow:hidden;transition:border-color .5s}.VPNavScreenMenuGroup .items[data-v-c2c554ed]{visibility:hidden}.VPNavScreenMenuGroup.open .items[data-v-c2c554ed]{visibility:visible}.VPNavScreenMenuGroup.open[data-v-c2c554ed]{padding-bottom:10px;height:auto}.VPNavScreenMenuGroup.open .button[data-v-c2c554ed]{padding-bottom:6px;color:var(--vp-c-brand-1)}.VPNavScreenMenuGroup.open 
.button-icon[data-v-c2c554ed]{transform:rotate(45deg)}.button[data-v-c2c554ed]{display:flex;justify-content:space-between;align-items:center;padding:12px 4px 11px 0;width:100%;line-height:24px;font-size:14px;font-weight:500;color:var(--vp-c-text-1);transition:color .25s}.button[data-v-c2c554ed]:hover{color:var(--vp-c-brand-1)}.button-icon[data-v-c2c554ed]{width:14px;height:14px;fill:var(--vp-c-text-2);transition:fill .5s,transform .25s}.group[data-v-c2c554ed]:first-child{padding-top:0}.group+.group[data-v-c2c554ed],.group+.item[data-v-c2c554ed]{padding-top:4px}.VPNavScreenTranslations[data-v-41505286]{height:24px;overflow:hidden}.VPNavScreenTranslations.open[data-v-41505286]{height:auto}.title[data-v-41505286]{display:flex;align-items:center;font-size:14px;font-weight:500;color:var(--vp-c-text-1)}.icon[data-v-41505286]{width:16px;height:16px;fill:currentColor}.icon.lang[data-v-41505286]{margin-right:8px}.icon.chevron[data-v-41505286]{margin-left:4px}.list[data-v-41505286]{padding:4px 0 0 24px}.link[data-v-41505286]{line-height:32px;font-size:13px;color:var(--vp-c-text-1)}.VPNavScreen[data-v-57cce842]{position:fixed;top:calc(var(--vp-nav-height) + var(--vp-layout-top-height, 0px) + 1px);right:0;bottom:0;left:0;padding:0 32px;width:100%;background-color:var(--vp-nav-screen-bg-color);overflow-y:auto;transition:background-color .5s;pointer-events:auto}.VPNavScreen.fade-enter-active[data-v-57cce842],.VPNavScreen.fade-leave-active[data-v-57cce842]{transition:opacity .25s}.VPNavScreen.fade-enter-active .container[data-v-57cce842],.VPNavScreen.fade-leave-active .container[data-v-57cce842]{transition:transform .25s ease}.VPNavScreen.fade-enter-from[data-v-57cce842],.VPNavScreen.fade-leave-to[data-v-57cce842]{opacity:0}.VPNavScreen.fade-enter-from .container[data-v-57cce842],.VPNavScreen.fade-leave-to .container[data-v-57cce842]{transform:translateY(-8px)}@media (min-width: 768px){.VPNavScreen[data-v-57cce842]{display:none}}.container[data-v-57cce842]{margin:0 auto;padding:24px 0 96px;max-width:288px}.menu+.translations[data-v-57cce842],.menu+.appearance[data-v-57cce842],.translations+.appearance[data-v-57cce842]{margin-top:24px}.menu+.social-links[data-v-57cce842]{margin-top:16px}.appearance+.social-links[data-v-57cce842]{margin-top:16px}.VPNav[data-v-7ad780c2]{position:relative;top:var(--vp-layout-top-height, 0px);left:0;z-index:var(--vp-z-index-nav);width:100%;pointer-events:none;transition:background-color .5s}@media (min-width: 960px){.VPNav[data-v-7ad780c2]{position:fixed}}.VPSidebarItem.level-0[data-v-bd01e0d5]{padding-bottom:24px}.VPSidebarItem.collapsed.level-0[data-v-bd01e0d5]{padding-bottom:10px}.item[data-v-bd01e0d5]{position:relative;display:flex;width:100%}.VPSidebarItem.collapsible>.item[data-v-bd01e0d5]{cursor:pointer}.indicator[data-v-bd01e0d5]{position:absolute;top:6px;bottom:6px;left:-17px;width:2px;border-radius:2px;transition:background-color .25s}.VPSidebarItem.level-2.is-active>.item>.indicator[data-v-bd01e0d5],.VPSidebarItem.level-3.is-active>.item>.indicator[data-v-bd01e0d5],.VPSidebarItem.level-4.is-active>.item>.indicator[data-v-bd01e0d5],.VPSidebarItem.level-5.is-active>.item>.indicator[data-v-bd01e0d5]{background-color:var(--vp-c-brand-1)}.link[data-v-bd01e0d5]{display:flex;align-items:center;flex-grow:1}.text[data-v-bd01e0d5]{flex-grow:1;padding:4px 0;line-height:24px;font-size:14px;transition:color .25s}.VPSidebarItem.level-0 .text[data-v-bd01e0d5]{font-weight:700;color:var(--vp-c-text-1)}.VPSidebarItem.level-1 .text[data-v-bd01e0d5],.VPSidebarItem.level-2 
.text[data-v-bd01e0d5],.VPSidebarItem.level-3 .text[data-v-bd01e0d5],.VPSidebarItem.level-4 .text[data-v-bd01e0d5],.VPSidebarItem.level-5 .text[data-v-bd01e0d5]{font-weight:500;color:var(--vp-c-text-2)}.VPSidebarItem.level-0.is-link>.item>.link:hover .text[data-v-bd01e0d5],.VPSidebarItem.level-1.is-link>.item>.link:hover .text[data-v-bd01e0d5],.VPSidebarItem.level-2.is-link>.item>.link:hover .text[data-v-bd01e0d5],.VPSidebarItem.level-3.is-link>.item>.link:hover .text[data-v-bd01e0d5],.VPSidebarItem.level-4.is-link>.item>.link:hover .text[data-v-bd01e0d5],.VPSidebarItem.level-5.is-link>.item>.link:hover .text[data-v-bd01e0d5]{color:var(--vp-c-brand-1)}.VPSidebarItem.level-0.has-active>.item>.text[data-v-bd01e0d5],.VPSidebarItem.level-1.has-active>.item>.text[data-v-bd01e0d5],.VPSidebarItem.level-2.has-active>.item>.text[data-v-bd01e0d5],.VPSidebarItem.level-3.has-active>.item>.text[data-v-bd01e0d5],.VPSidebarItem.level-4.has-active>.item>.text[data-v-bd01e0d5],.VPSidebarItem.level-5.has-active>.item>.text[data-v-bd01e0d5],.VPSidebarItem.level-0.has-active>.item>.link>.text[data-v-bd01e0d5],.VPSidebarItem.level-1.has-active>.item>.link>.text[data-v-bd01e0d5],.VPSidebarItem.level-2.has-active>.item>.link>.text[data-v-bd01e0d5],.VPSidebarItem.level-3.has-active>.item>.link>.text[data-v-bd01e0d5],.VPSidebarItem.level-4.has-active>.item>.link>.text[data-v-bd01e0d5],.VPSidebarItem.level-5.has-active>.item>.link>.text[data-v-bd01e0d5]{color:var(--vp-c-text-1)}.VPSidebarItem.level-0.is-active>.item .link>.text[data-v-bd01e0d5],.VPSidebarItem.level-1.is-active>.item .link>.text[data-v-bd01e0d5],.VPSidebarItem.level-2.is-active>.item .link>.text[data-v-bd01e0d5],.VPSidebarItem.level-3.is-active>.item .link>.text[data-v-bd01e0d5],.VPSidebarItem.level-4.is-active>.item .link>.text[data-v-bd01e0d5],.VPSidebarItem.level-5.is-active>.item .link>.text[data-v-bd01e0d5]{color:var(--vp-c-brand-1)}.caret[data-v-bd01e0d5]{display:flex;justify-content:center;align-items:center;margin-right:-7px;width:32px;height:32px;color:var(--vp-c-text-3);cursor:pointer;transition:color .25s;flex-shrink:0}.item:hover .caret[data-v-bd01e0d5]{color:var(--vp-c-text-2)}.item:hover .caret[data-v-bd01e0d5]:hover{color:var(--vp-c-text-1)}.caret-icon[data-v-bd01e0d5]{width:18px;height:18px;fill:currentColor;transform:rotate(90deg);transition:transform .25s}.VPSidebarItem.collapsed .caret-icon[data-v-bd01e0d5]{transform:rotate(0)}.VPSidebarItem.level-1 .items[data-v-bd01e0d5],.VPSidebarItem.level-2 .items[data-v-bd01e0d5],.VPSidebarItem.level-3 .items[data-v-bd01e0d5],.VPSidebarItem.level-4 .items[data-v-bd01e0d5],.VPSidebarItem.level-5 .items[data-v-bd01e0d5]{border-left:1px solid var(--vp-c-divider);padding-left:16px}.VPSidebarItem.collapsed .items[data-v-bd01e0d5]{display:none}.VPSidebar[data-v-ee2efba5]{position:fixed;top:var(--vp-layout-top-height, 0px);bottom:0;left:0;z-index:var(--vp-z-index-sidebar);padding:32px 32px 96px;width:calc(100vw - 64px);max-width:320px;background-color:var(--vp-sidebar-bg-color);opacity:0;box-shadow:var(--vp-c-shadow-3);overflow-x:hidden;overflow-y:auto;transform:translate(-100%);transition:opacity .5s,transform .25s ease;overscroll-behavior:contain}.VPSidebar.open[data-v-ee2efba5]{opacity:1;visibility:visible;transform:translate(0);transition:opacity .25s,transform .5s cubic-bezier(.19,1,.22,1)}.dark .VPSidebar[data-v-ee2efba5]{box-shadow:var(--vp-shadow-1)}@media (min-width: 
960px){.VPSidebar[data-v-ee2efba5]{z-index:1;padding-top:var(--vp-nav-height);padding-bottom:128px;width:var(--vp-sidebar-width);max-width:100%;background-color:var(--vp-sidebar-bg-color);opacity:1;visibility:visible;box-shadow:none;transform:translate(0)}}@media (min-width: 1440px){.VPSidebar[data-v-ee2efba5]{padding-left:max(32px,calc((100% - (var(--vp-layout-max-width) - 64px)) / 2));width:calc((100% - (var(--vp-layout-max-width) - 64px)) / 2 + var(--vp-sidebar-width) - 32px)}}@media (min-width: 960px){.curtain[data-v-ee2efba5]{position:sticky;top:-64px;left:0;z-index:1;margin-top:calc(var(--vp-nav-height) * -1);margin-right:-32px;margin-left:-32px;height:var(--vp-nav-height);background-color:var(--vp-sidebar-bg-color)}}.nav[data-v-ee2efba5]{outline:0}.group+.group[data-v-ee2efba5]{border-top:1px solid var(--vp-c-divider);padding-top:10px}@media (min-width: 960px){.group[data-v-ee2efba5]{padding-top:10px;width:calc(var(--vp-sidebar-width) - 64px)}}.VPSkipLink[data-v-c8291ffa]{top:8px;left:8px;padding:8px 16px;z-index:999;border-radius:8px;font-size:12px;font-weight:700;text-decoration:none;color:var(--vp-c-brand-1);box-shadow:var(--vp-shadow-3);background-color:var(--vp-c-bg)}.VPSkipLink[data-v-c8291ffa]:focus{height:auto;width:auto;clip:auto;clip-path:none}@media (min-width: 1280px){.VPSkipLink[data-v-c8291ffa]{top:14px;left:16px}}.Layout[data-v-9d8abc1e]{display:flex;flex-direction:column;min-height:100vh}.VPHomeSponsors[data-v-843cc1b2]{border-top:1px solid var(--vp-c-gutter);padding:88px 24px 96px;background-color:var(--vp-c-bg)}.container[data-v-843cc1b2]{margin:0 auto;max-width:1152px}.love[data-v-843cc1b2]{margin:0 auto;width:28px;height:28px;color:var(--vp-c-text-3)}.icon[data-v-843cc1b2]{width:28px;height:28px;fill:currentColor}.message[data-v-843cc1b2]{margin:0 auto;padding-top:10px;max-width:320px;text-align:center;line-height:24px;font-size:16px;font-weight:500;color:var(--vp-c-text-2)}.sponsors[data-v-843cc1b2]{padding-top:32px}.action[data-v-843cc1b2]{padding-top:40px;text-align:center}.VPTeamPage[data-v-b1cfd8dc]{padding-bottom:96px}@media (min-width: 768px){.VPTeamPage[data-v-b1cfd8dc]{padding-bottom:128px}}.VPTeamPageSection+.VPTeamPageSection[data-v-b1cfd8dc-s],.VPTeamMembers+.VPTeamPageSection[data-v-b1cfd8dc-s]{margin-top:64px}.VPTeamMembers+.VPTeamMembers[data-v-b1cfd8dc-s]{margin-top:24px}@media (min-width: 768px){.VPTeamPageTitle+.VPTeamPageSection[data-v-b1cfd8dc-s]{margin-top:16px}.VPTeamPageSection+.VPTeamPageSection[data-v-b1cfd8dc-s],.VPTeamMembers+.VPTeamPageSection[data-v-b1cfd8dc-s]{margin-top:96px}}.VPTeamMembers[data-v-b1cfd8dc-s]{padding:0 24px}@media (min-width: 768px){.VPTeamMembers[data-v-b1cfd8dc-s]{padding:0 48px}}@media (min-width: 960px){.VPTeamMembers[data-v-b1cfd8dc-s]{padding:0 64px}}.VPTeamPageTitle[data-v-46c5e327]{padding:48px 32px;text-align:center}@media (min-width: 768px){.VPTeamPageTitle[data-v-46c5e327]{padding:64px 48px 48px}}@media (min-width: 960px){.VPTeamPageTitle[data-v-46c5e327]{padding:80px 64px 48px}}.title[data-v-46c5e327]{letter-spacing:0;line-height:44px;font-size:36px;font-weight:500}@media (min-width: 768px){.title[data-v-46c5e327]{letter-spacing:-.5px;line-height:56px;font-size:48px}}.lead[data-v-46c5e327]{margin:0 auto;max-width:512px;padding-top:12px;line-height:24px;font-size:16px;font-weight:500;color:var(--vp-c-text-2)}@media (min-width: 768px){.lead[data-v-46c5e327]{max-width:592px;letter-spacing:.15px;line-height:28px;font-size:20px}}.VPTeamPageSection[data-v-3bf2e850]{padding:0 32px}@media (min-width: 
768px){.VPTeamPageSection[data-v-3bf2e850]{padding:0 48px}}@media (min-width: 960px){.VPTeamPageSection[data-v-3bf2e850]{padding:0 64px}}.title[data-v-3bf2e850]{position:relative;margin:0 auto;max-width:1152px;text-align:center;color:var(--vp-c-text-2)}.title-line[data-v-3bf2e850]{position:absolute;top:16px;left:0;width:100%;height:1px;background-color:var(--vp-c-divider)}.title-text[data-v-3bf2e850]{position:relative;display:inline-block;padding:0 24px;letter-spacing:0;line-height:32px;font-size:20px;font-weight:500;background-color:var(--vp-c-bg)}.lead[data-v-3bf2e850]{margin:0 auto;max-width:480px;padding-top:12px;text-align:center;line-height:24px;font-size:16px;font-weight:500;color:var(--vp-c-text-2)}.members[data-v-3bf2e850]{padding-top:40px}.VPTeamMembersItem[data-v-3a0078bd]{display:flex;flex-direction:column;gap:2px;border-radius:12px;width:100%;height:100%;overflow:hidden}.VPTeamMembersItem.small .profile[data-v-3a0078bd]{padding:32px}.VPTeamMembersItem.small .data[data-v-3a0078bd]{padding-top:20px}.VPTeamMembersItem.small .avatar[data-v-3a0078bd]{width:64px;height:64px}.VPTeamMembersItem.small .name[data-v-3a0078bd]{line-height:24px;font-size:16px}.VPTeamMembersItem.small .affiliation[data-v-3a0078bd]{padding-top:4px;line-height:20px;font-size:14px}.VPTeamMembersItem.small .desc[data-v-3a0078bd]{padding-top:12px;line-height:20px;font-size:14px}.VPTeamMembersItem.small .links[data-v-3a0078bd]{margin:0 -16px -20px;padding:10px 0 0}.VPTeamMembersItem.medium .profile[data-v-3a0078bd]{padding:48px 32px}.VPTeamMembersItem.medium .data[data-v-3a0078bd]{padding-top:24px;text-align:center}.VPTeamMembersItem.medium .avatar[data-v-3a0078bd]{width:96px;height:96px}.VPTeamMembersItem.medium .name[data-v-3a0078bd]{letter-spacing:.15px;line-height:28px;font-size:20px}.VPTeamMembersItem.medium .affiliation[data-v-3a0078bd]{padding-top:4px;font-size:16px}.VPTeamMembersItem.medium .desc[data-v-3a0078bd]{padding-top:16px;max-width:288px;font-size:16px}.VPTeamMembersItem.medium .links[data-v-3a0078bd]{margin:0 -16px -12px;padding:16px 12px 0}.profile[data-v-3a0078bd]{flex-grow:1;background-color:var(--vp-c-bg-soft)}.data[data-v-3a0078bd]{text-align:center}.avatar[data-v-3a0078bd]{position:relative;flex-shrink:0;margin:0 auto;border-radius:50%;box-shadow:var(--vp-shadow-3)}.avatar-img[data-v-3a0078bd]{position:absolute;top:0;right:0;bottom:0;left:0;border-radius:50%;object-fit:cover}.name[data-v-3a0078bd]{margin:0;font-weight:600}.affiliation[data-v-3a0078bd]{margin:0;font-weight:500;color:var(--vp-c-text-2)}.org.link[data-v-3a0078bd]{color:var(--vp-c-text-2);transition:color .25s}.org.link[data-v-3a0078bd]:hover{color:var(--vp-c-brand-1)}.desc[data-v-3a0078bd]{margin:0 auto}.desc[data-v-3a0078bd] a{font-weight:500;color:var(--vp-c-brand-1);text-decoration-style:dotted;transition:color .25s}.links[data-v-3a0078bd]{display:flex;justify-content:center;height:56px}.sp-link[data-v-3a0078bd]{display:flex;justify-content:center;align-items:center;text-align:center;padding:16px;font-size:14px;font-weight:500;color:var(--vp-c-sponsor);background-color:var(--vp-c-bg-soft);transition:color .25s,background-color .25s}.sp .sp-link.link[data-v-3a0078bd]:hover,.sp .sp-link.link[data-v-3a0078bd]:focus{outline:none;color:var(--vp-c-white);background-color:var(--vp-c-sponsor)}.sp-icon[data-v-3a0078bd]{margin-right:8px;width:16px;height:16px;fill:currentColor}.VPTeamMembers.small .container[data-v-bf782009]{grid-template-columns:repeat(auto-fit,minmax(224px,1fr))}.VPTeamMembers.small.count-1 
.container[data-v-bf782009]{max-width:276px}.VPTeamMembers.small.count-2 .container[data-v-bf782009]{max-width:576px}.VPTeamMembers.small.count-3 .container[data-v-bf782009]{max-width:876px}.VPTeamMembers.medium .container[data-v-bf782009]{grid-template-columns:repeat(auto-fit,minmax(256px,1fr))}@media (min-width: 375px){.VPTeamMembers.medium .container[data-v-bf782009]{grid-template-columns:repeat(auto-fit,minmax(288px,1fr))}}.VPTeamMembers.medium.count-1 .container[data-v-bf782009]{max-width:368px}.VPTeamMembers.medium.count-2 .container[data-v-bf782009]{max-width:760px}.container[data-v-bf782009]{display:grid;gap:24px;margin:0 auto;max-width:1152px}.post>a[data-v-61c06c99]{text-decoration:none;color:var(--color-text)}h2[data-v-61c06c99],h3[data-v-61c06c99],h4[data-v-61c06c99]{color:var(--color-text)}h1[data-v-61c06c99]{font-size:2.5em!important;line-height:1.2em;font-weight:700}.post[data-v-61c06c99]{margin-top:10pt;margin-bottom:4rem}.posts[data-v-61c06c99]{margin-top:10rem!important}.posts[data-v-61c06c99]:first-child{margin-top:0!important}.post h1 .VPBadge[data-v-61c06c99]{transform:scale(1.2);margin-left:10pt;position:relative;top:7pt}.post .body h1:first-child{display:none}h4{margin-top:50pt}img[data-v-fb660782]{display:inline;height:130pt;position:relative;top:-15pt;margin-right:15pt;transform:translate(0)}h1[data-v-fb660782],h2[data-v-fb660782]{font-size:2.5em;line-height:1.2em;border:none;margin:0;padding:0;font-weight:550}h2[data-v-fb660782]{font-size:1.2em;margin-top:10pt;font-weight:500;color:#565656}html.dark h2[data-v-fb660782]{color:#b7b7b7}.hero[data-v-fb660782]{margin:50pt auto auto;display:flex;width:100%;max-width:650pt;text-align:left;align-items:center;justify-content:center;padding:20pt}.hero h1>img[data-v-fb660782]{display:none}html.dark .btn{background-color:#282828;border:2pt solid rgb(40,40,40)}html.dark .btn:hover{border:2pt solid rgb(60,60,60)}html.dark .btn.primary,html.dark .btn.primary:hover{background-color:#007bff;border:2pt solid #007bff;color:#fff}@media (max-width: 600px){.hero h1{font-size:2em}.hero h2{font-size:1.4em}.hero{margin:70pt 0!important;align-items:flex-start!important;max-width:200pt}.hero button{margin-top:10pt;margin-bottom:0}.hero img{width:40pt;display:none}.hero h1>img{display:inline!important;padding:0;position:relative;height:1em;top:3.5pt;margin:0 -5pt 0 -8pt}.hero{margin-top:20pt!important}}.buttons{margin-top:20pt}.feature[data-v-9d5b0837]{margin:100pt auto auto;display:flex;flex-direction:row;width:100%;max-width:750pt;text-align:left;align-items:center;padding:20pt}.feature>div[data-v-9d5b0837]:first-child{margin-right:20pt;flex:2}.feature code[data-v-9d5b0837]{font-size:1em;padding:-10pt 10pt 10pt;border-radius:8px}.feature code[data-v-9d5b0837]{max-width:60%;font-size:.8em}.feature code pre[data-v-9d5b0837]{margin:0;padding:10pt}.feature code[data-v-9d5b0837]{transform:translate(0);animation:slidefade .5s;animation-fill-mode:forwards}.feature.code[data-v-9d5b0837]{display:flex;margin-top:-20pt}.feature .btn{margin-top:20pt}.feature code span.lang{display:none!important}.feature code pre{margin-top:-15pt!important;line-break:anywhere;overflow-x:scroll;-webkit-overflow-scrolling:touch;-ms-overflow-style:none;padding:15pt}.feature code pre::-webkit-scrollbar{display:none}@keyframes slidefade{0%{opacity:0;transform:translate(20pt)}to{opacity:1;transform:translate(0)}}@keyframes 
slidefadeleft{0%{opacity:0;transform:translate(-20pt)}to{opacity:1;transform:translate(0)}}div:nth-child(2n)>.feature{flex-direction:row-reverse}div:nth-child(2n)>.feature>code{margin-right:30pt;animation:slidefadeleft .2s;animation-fill-mode:forwards}.feature pre.promptdown,.feature pre.promptdown.promptdown-compiled,html.dark .feature pre.promptdown,html.dark .feature pre.promptdown.promptdown-compiled{width:320pt;max-width:calc(50vw - 30pt);overflow-x:scroll;min-height:180pt;position:relative;top:20pt;white-space:pre-wrap;line-break:anywhere;text-indent:10pt!important;line-height:1.5em!important}.feature.left>code{display:none}.feature.middle{position:relative;width:520pt;max-width:100vw}.feature.middle>code{display:none}.feature.middle{text-align:center}.cards{display:flex;flex-direction:row;flex-wrap:wrap;justify-content:center;margin-top:40pt;margin-bottom:40pt}.cards>a{border-radius:5pt;border:1pt solid rgb(192,190,190);margin:5pt 2.5pt 0;padding:30pt 10pt 10pt;display:flex;flex-direction:column;justify-content:flex-end;line-height:1.2;font-weight:700;width:100pt;height:100pt;transition:all .1s;cursor:pointer;text-align:center}.cards img{position:relative;bottom:-5pt}.cards h1{margin-top:0}.cards>a:hover{background-color:#f0f0f0;transform:scale(1.05)}html.dark .cards>a{border:1pt solid rgb(50,50,50)}html.dark .cards>a:hover{background-color:#3232321b}.cards>a img{width:50pt;height:40pt;margin:-10pt auto auto;display:block;padding-bottom:10pt}.cards>a h1{font-weight:700!important;font-size:10pt}.feature.middle pre{text-align:left;padding:10pt}@media (max-width: 700px){.feature{flex-direction:column!important;margin-top:20pt!important;width:calc(100vw - 15pt)!important}.feature>div:first-child{margin-right:0}.feature.code{width:100vw!important}.feature>code{margin-right:0;margin-top:20pt;width:100vw!important;max-width:100vw!important;margin-left:0;border-radius:0;box-shadow:none!important;border:none!important;padding-left:20pt!important;margin-right:0!important}.feature.middle{width:calc(100vw - 10pt);padding:0 10pt!important}.feature.middle>div{max-width:100vw;text-align:left;margin:0!important}.feature pre.promptdown,.feature pre.promptdown.promptdown-compiled,html.dark .feature pre.promptdown,html.dark .feature pre.promptdown.promptdown-compiled{width:calc(100vw - 30pt);max-width:calc(100vw - 30pt);font-size:12pt;margin-left:-5pt;padding:0;margin-right:0}.feature pre.promptdown .promptdown-var{line-break:word!important}}.feature.code pre{flex:1;margin:0;padding:20pt;white-space:pre-wrap;box-shadow:0 0 80pt #00000045;font-size:10pt}.feature.code>code{display:none}.feature.code>div:first-child{margin-right:0}.feature.code a{text-decoration:underline}.feature.code{margin-top:10pt;max-width:480pt;font-size:12pt;line-height:1.4;z-index:100}@media (max-width: 800px){.feature.code{margin-top:-30pt!important;font-size:11pt;padding:2pt!important;max-width:calc(100vw - 20pt);overflow:hidden!important}.feature.code pre{white-space:pre;margin:0;padding:10pt;font-size:9pt;max-width:calc(100vw - 20pt);box-shadow:none;overflow-x:scroll}.feature.code pre .window-controls{display:none}}@media (max-width: 600px){.feature.code{margin-top:-90pt!important}}.code-by-code{display:flex;flex-direction:row;margin:0}.code-by-code .left,.code-by-code .right{flex:1;padding:0;max-width:50%}.code-by-code .left{margin-right:2.5pt}.code-by-code .right{margin-left:2.5pt}.code-by-code .left pre{padding:10pt!important;margin-top:-6pt;white-space:pre-wrap;font-size:12pt;line-height:1.5em}@media (max-width: 
750pt){.code-by-code{flex-direction:column}.code-by-code .left pre{font-size:11pt}.code-by-code .left,.code-by-code .right{max-width:100%;width:100vw}}.code-by-code .left h2{color:#fff}.code-by-code h2{display:block;text-align:center;margin-bottom:-35pt;font-weight:700;opacity:.8;font-size:10pt}.code-by-code .language-lmql pre{padding-top:30pt!important}.code-by-code .promptdown,.code-by-code .promptdown.promptdown-compiled,html.dark .code-by-code .promptdown,html.dark .code-by-code .promptdown.promptdown-compiled{padding-top:30pt!important;font-size:14pt}.code-by-code .promptdown .promptdown-var{line-height:1.5em}.examples[data-v-96cfe14a]{max-width:1030pt;margin:auto;padding:0 8pt}h1[data-v-96cfe14a]{margin-bottom:20pt}.btn-group[data-v-96cfe14a]{margin-bottom:1rem;font-size:10pt;margin-top:1em}.btn[data-v-96cfe14a]{padding:4pt;margin:0 4pt 4pt 0}.btn-group .btn.active[data-v-96cfe14a]{background-color:#007bff;color:#fff;border:2pt solid #007bff}.examples .description[data-v-96cfe14a]{max-width:450pt;margin-bottom:30pt}.examples .description a{text-align:left;margin-left:4pt;color:#007bff}.examples .description a:hover{text-decoration:underline}.examples .right .distribution{position:relative;top:-110pt;margin-left:20pt;width:220pt}.examples .left pre{margin-top:-4pt!important}.post{margin-bottom:4rem}.primary.pdf[data-v-adbb595c]{top:10pt;right:10pt;margin:5pt 0;display:inline-block;text-decoration:none}.primary.pdf[data-v-adbb595c]:hover{background-color:#0069d9}.paper[data-v-adbb595c]{position:relative;text-align:justify}.paper p[data-v-adbb595c]{margin:10pt 0}:root{--vp-c-default-1: var(--vp-c-gray-1);--vp-c-default-2: var(--vp-c-gray-2);--vp-c-default-3: var(--vp-c-gray-3);--vp-c-default-soft: var(--vp-c-gray-soft);--vp-c-brand-1: var(--vp-c-indigo-1);--vp-c-brand-2: var(--vp-c-indigo-2);--vp-c-brand-3: var(--vp-c-indigo-3);--vp-c-brand-soft: var(--vp-c-indigo-soft);--vp-c-tip-1: var(--vp-c-brand-1);--vp-c-tip-2: var(--vp-c-brand-2);--vp-c-tip-3: var(--vp-c-brand-3);--vp-c-tip-soft: var(--vp-c-brand-soft);--vp-c-warning-1: var(--vp-c-yellow-1);--vp-c-warning-2: var(--vp-c-yellow-2);--vp-c-warning-3: var(--vp-c-yellow-3);--vp-c-warning-soft: var(--vp-c-yellow-soft);--vp-c-danger-1: var(--vp-c-red-1);--vp-c-danger-2: var(--vp-c-red-2);--vp-c-danger-3: var(--vp-c-red-3);--vp-c-danger-soft: var(--vp-c-red-soft)}:root{--vp-button-brand-border: transparent;--vp-button-brand-text: var(--vp-c-white);--vp-button-brand-bg: var(--vp-c-brand-3);--vp-button-brand-hover-border: transparent;--vp-button-brand-hover-text: var(--vp-c-white);--vp-button-brand-hover-bg: var(--vp-c-brand-2);--vp-button-brand-active-border: transparent;--vp-button-brand-active-text: var(--vp-c-white);--vp-button-brand-active-bg: var(--vp-c-brand-1)}:root{--vp-home-hero-name-color: transparent;--vp-home-hero-name-background: -webkit-linear-gradient( 120deg, #bd34fe 30%, #41d1ff );--vp-home-hero-image-background-image: linear-gradient( -45deg, #bd34fe 50%, #47caff 50% );--vp-home-hero-image-filter: blur(40px)}@media (min-width: 640px){:root{--vp-home-hero-image-filter: blur(56px)}}@media (min-width: 960px){:root{--vp-home-hero-image-filter: blur(72px)}}:root{--vp-custom-block-tip-border: transparent;--vp-custom-block-tip-text: var(--vp-c-text-1);--vp-custom-block-tip-bg: var(--vp-c-brand-soft);--vp-custom-block-tip-code-bg: var(--vp-c-brand-soft);--vp-code-block-bg: rgb(36, 39, 45);--vp-code-copy-code-bg: rgb(32, 33, 39);--vp-code-copy-code-border-color: #2e2e32;--vp-code-copy-code-hover-bg: 
#1b1b1f;--vp-code-copy-code-hover-border-color: #2e2e32;--vp-code-copy-code-active-text: rgba(235, 235, 245, .6)}.DocSearch{--docsearch-primary-color: var(--vp-c-brand-1) !important}.VPImage.logo{width:12pt}body{padding-bottom:200pt}span.lang{display:none}img.invert{filter:invert(90%)}html.dark img.invert{filter:invert(10%)}pre code{color:#ffffffdf!important}pre,.vp-doc div[class*=language-],.vp-block,html.dark pre,html.dark .vp-doc div[class*=language-]{background:var(--vp-code-block-bg)!important}.hljs-comment{opacity:.6}.hljs-string{color:#a7d884}.hljs-meta{color:#68edf2}.hljs-built_in,.hljs-keyword{color:#c678dd}.hljs-placeholder{color:#68edf2}.hljs-subst{color:#f4955d}html.dark .promptdown.promptdown-compiled,.promptdown.promptdown-compiled{opacity:1;line-height:1!important;padding:10pt!important;transform-origin:top center;background:transparent!important}pre.promptdown>p{margin:0}pre.promptdown>h1{margin:0 0 5pt;line-height:1em;text-transform:uppercase;opacity:.6}.language-promptdown button.copy{display:none}.language-promptdown .promptdown button.promptdown-button-replay{top:8pt;right:8pt;border-radius:15pt}.language-promptdown{--vp-code-block-bg: none;border-radius:0!important;transform-origin:top center;text-align:left}pre{border-radius:6pt}h1{font-size:1.4em;font-weight:700;margin-bottom:10pt}span.badge{background-color:#007bff;border-radius:2pt;transform:scale(.6);transform-origin:center left;display:inline-block;line-height:1.2em;padding:2pt 4pt;position:relative;top:0;color:#fff}h1 span.badge{transform:scale(.45);margin-left:3pt}div.subtitle{font-size:14pt;color:gray;font-weight:500;margin-bottom:25pt;margin-top:-5pt}.VPDoc:not(.has-sidebar):not(.has-aside) h1{font-size:2.5rem}.VPDoc:not(.has-sidebar):not(.has-aside) .content{max-width:830pt!important}.VPDoc:not(.has-sidebar):not(.has-aside) .container{max-width:830pt!important}.VPDoc:not(.has-sidebar) .content{max-width:1130pt!important}.VPDoc:not(.has-sidebar) .content .content-container{max-width:830pt}.VPDoc:not(.has-sidebar) .container{max-width:1130pt!important}html.dark p strong{font-weight:1200;text-decoration:underline}span.date{font-size:.8em;color:gray;display:block}pre.promptdown,pre.promptdown.promptdown-compiled,html.dark pre.promptdown,html.dark pre.promptdown.promptdown-compiled{text-indent:0pt!important;line-height:1.2em!important}.banner{background-color:#007bff;color:#fff;font-weight:700;padding:2pt 5pt;border-radius:2pt;max-width:calc(100vw - 40pt);margin:auto;width:730pt;position:relative;bottom:-20pt}@media (max-width: 800px){.banner{margin:0;max-width:100vw;border-radius:0}}.banner a{text-decoration:underline}pre .window-controls{margin-bottom:10pt;margin-left:-10pt;margin-top:-5pt}pre{position:relative}pre .window-controls .window-control{background:white;width:10pt;height:10pt;border-radius:50%;display:inline-block;margin-left:5pt}pre .window-controls .window-control:nth-child(1){background:#ff5f56}pre .window-controls .window-control:nth-child(2){background:#ffbd2e}pre .window-controls .window-control:nth-child(3){background:#27c93f}html.dark .language-grammar,.language-grammar,html.dark .language-grammar pre.hljs,.language-grammar pre.hljs{--vp-code-block-bg: none;background-color:transparent!important;background:none!important;color:var(--vp-c-text-1)!important;font-size:14pt;margin:0!important;margin-left:-20pt;white-space:pre-wrap}.language-grammar pre code{color:var(--vp-c-text-1)!important;white-space:pre!important;margin:0 0 0 -15pt!important}.language-grammar 
.hljs-comment{opacity:.6;color:var(--vp-c-text-1)}.language-grammar .hljs-string{color:var(--vp-c-text-1);color:var(--vp-c-danger-1)}.language-grammar .hljs-meta{color:var(--vp-c-text-1)}.language-grammar .hljs-built_in,.language-grammar .hljs-keyword{color:var(--vp-c-text-1);font-weight:700}.language-grammar .hljs-placeholder,.language-grammar .hljs-subst{color:var(--vp-c-text-1)}.language-grammar a[href^="#python-fragments"]{text-decoration:none;color:var(--vp-c-text-2)}.language-grammar{--vp-code-copy-code-border-color: var(--vp-c-divider);--vp-code-copy-code-bg: var(--vp-c-bg-soft);--vp-code-copy-code-hover-border-color: var(--vp-c-divider);--vp-code-copy-code-hover-bg: var(--vp-c-bg);--vp-code-copy-code-active-text: var(--vp-c-text-2)}.github-star{transform:scale(1.3)!important}.language-lmql .inline-lmql-delim{opacity:.2}.language-truncated{max-height:200pt;overflow:hidden}.info.show .language-truncated{max-height:none}.info.show button.btn.expand{display:none}html.dark .info button.btn.expand{background-color:var(--vp-c-gray-soft);border-color:var(--vp-c-gray-soft)}html.dark .info button.btn.expand:hover{border-color:var(--vp-c-gray-2)}.info button.btn.expand{text-align:center;width:100%;font-size:10pt;font-weight:700;margin-top:0}.language-output:before{content:"Console Output";font-size:10pt;font-weight:700;opacity:.4;text-align:right;position:absolute;display:block;top:2pt;right:5pt;margin-bottom:-2em}.language-result:before{content:"Result";font-size:10pt;font-weight:700;opacity:.4;text-align:right;position:absolute;top:2pt;right:8pt;margin-bottom:-2em}.language-output{border:.5pt solid rgb(204,201,201)}.language-output,.language-result{white-space:pre-wrap!important;color:var(--vp-c-text-1);--vp-code-block-bg: transparent !important;transform:scale(.98);position:relative;border-radius:7pt!important;--vp-code-copy-code-border-color: var(--vp-c-divider);--vp-code-copy-code-bg: var(--vp-c-bg-soft);--vp-code-copy-code-hover-border-color: var(--vp-c-divider);--vp-code-copy-code-hover-bg: var(--vp-c-bg);--vp-code-copy-code-active-text: var(--vp-c-text-2)}.language-output button.copy,.language-result button.copy{display:none}.language-result{--vp-code-block-bg: rgba(202, 202, 202, .061) !important}.language-output pre code,.language-output pre,.language-output .hljs-comment,.language-output .hljs-string,.language-output .hljs-meta,.language-output .hljs-built_in,.language-output .hljs-keyword,.language-output .hljs-placeholder,.language-output .hljs-subst,.language-result pre code,.language-result pre,.language-result .hljs-comment,.language-result .hljs-string,.language-result .hljs-meta,.language-result .hljs-built_in,.language-result .hljs-keyword,.language-result .hljs-placeholder,.language-result .hljs-subst{color:var(--vp-c-text-1)!important;white-space:pre-wrap!important}img.inline-logo{display:inline-block;height:1em;position:relative;top:.15em;left:.1em}.grid{display:flex;flex-wrap:wrap;font-size:12pt}.grid-item-card{flex:1 1 200pt;margin:5pt;border-radius:6pt;overflow:hidden;border:.5pt solid rgba(204,201,201,.732);background:transparent;position:relative;padding:10pt;font-size:12pt;max-width:48%}.grid-item-card h3{font-size:12pt;margin:0;padding:0}.grid-item-card a{text-decoration:none;color:var(--vp-c-text-1);transition-duration:.1s!important}.grid-item-card a p{margin:5pt 0 0;font-size:12pt;font-weight:400}.btn{padding:4pt 10pt;font-size:1em;background-color:#dcdcdc;border-radius:4pt;font-weight:700;margin:20pt 5pt 5pt 0;border:2pt solid 
rgb(220,220,220)}.btn:hover{border:2pt solid rgb(192,190,190)}.btn.primary,.btn.primary:hover{background-color:#007bff;border:2pt solid #007bff;color:#fff}figure img{border-radius:4pt}#version-switcher{opacity:.9;text-align:center;font-size:.9em;margin-top:-5pt;color:var(--vp-c-text-2)}#version-switcher .version{display:inline-block;border-radius:4pt;padding:0 5pt;margin-left:2pt}#version-switcher .version:hover{background-color:var(--vp-c-gray-soft);color:var(--vp-c-text-1)}#version-switcher .version.active{background-color:#007bff;color:#fff}#version-switcher label{margin-right:2pt;color:var(--vp-c-text-2)}a:hover #version-switcher label{color:var(--vp-c-text-2)}#version-switcher a.version:not(.active):hover{cursor:pointer}.promptdown p{text-indent:2pt;white-space:pre-wrap}.promptdown{font-family:monospace;font-size:12pt;background-color:#fff;padding:10pt 20pt 10pt 10pt;border-radius:5pt;border:.5pt solid rgb(204,201,201);line-height:1.5;position:relative;opacity:0;font-family:system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Open Sans,Helvetica Neue,sans-serif;white-space:pre-wrap}.promptdown.promptdown-compiled{opacity:1!important}html.dark .promptdown{background-color:#3d3d3d4c;color:#fff;border:.5pt solid rgba(64,64,64,.507)}.promptdown-var.cmd{display:none}.promptdown-var{background-color:#dedada;border-radius:2pt;color:#000;font-weight:400;margin-right:2pt;padding:.5pt 4pt}html.dark .promptdown-var{color:#f6f6f6}.promptdown-var.animate-immediate{animation:fadein .2s;animation-fill-mode:forwards;animation-delay:0s}@keyframes fadein{0%{opacity:0}to{opacity:1}}.promptdown-var.color-none{background:none!important;padding-left:0}.promptdown-var .promptdown-var-name{color:#fff;background-color:#00000050;font-weight:700;border-radius:2pt;position:relative;top:-1.5pt;left:-2pt;font-size:80%;padding:0 4pt;font-family:monospace;margin-right:0}.promptdown-bubble-container.user{text-align:right}.promptdown-bubble-container{margin-bottom:8pt}.faded .promptdown-bubble{background:transparent}.hidden .promptdown-bubble{display:none}.promptdown-bubble>.promptdown-var-name{display:none}.promptdown-bubble{background-color:#fff;padding:10pt;display:inline-block;border-radius:5pt;color:#000;max-width:90%;white-space:pre-wrap}html.dark .promptdown-bubble{color:#fff}.promptdown-bubble.animate{animation:fadein-left .2s;animation-fill-mode:forwards;animation-delay:0s}@keyframes fadein-left{0%{opacity:0;transform:translate(-20pt)}to{opacity:1;transform:translate(0)}}.promptdown-bubble.system{text-align:center;color:gray;font-size:.85em;display:block;max-width:100%;background-color:transparent}.promptdown-bubble.system.animate{z-index:-999;animation:fadein-top .2s}@keyframes fadein-top{0%{opacity:0;transform:translateY(-1pt)}to{opacity:1;transform:translateY(0)}}.promptdown-bubble.user{background-color:#597afe;color:#fff;text-align:left}.promptdown-bubble.user.animate{animation:fadein-right .2s;animation-fill-mode:forwards;animation-delay:0s}@keyframes fadein-right{0%{opacity:0;transform:translate(20pt)}to{opacity:1;transform:translate(0)}}.promptdown-bubble.assistant{color:#000;background-color:#d9d9d9}.promptdown-bubble.assistant{background-color:#ece9e9df;padding:8pt}html.dark .promptdown-bubble.assistant{background-color:#777777df;padding:8pt}.promptdown h1,.promptdown h2,.promptdown h3{display:block;margin:0 0 8pt;padding:0;font-size:12pt;text-align:center}.promptdown h1{font-size:10pt}.promptdown h2{font-size:11pt;color:#696969}.promptdown 
h3{font-size:10pt}.promptdown-cursor{width:8pt;background-color:#c4c2c2;border-radius:2pt;position:relative;left:2pt;color:transparent;display:inline-block;transform:scale(.8);border:1pt solid rgb(212,212,212);animation:blink 1s infinite}.promptdown-var .promptdown-cursor{background-color:#00000047}.promptdown-var.color-none .promptdown-cursor{background-color:#c4c2c2}.promptdown .code_in_prompt{font-family:monospace;background-color:transparent!important}@keyframes blink{0%{opacity:.3}50%{opacity:1}to{opacity:.3}}.hidden{display:none}.cmd-hidden{display:none!important}.faded{opacity:.5;transition:opacity .5s;text-decoration:line-through;border-radius:2pt}.command.hidden{display:none}.promptdown .color-blue{background-color:#728cf5}html.dark .promptdown .color-blue{background-color:#4e60a7}.promptdown .color-purple{background-color:#a48efc}html.dark .promptdown .color-purple{background-color:#715ca3}.promptdown .color-pink{background-color:#ff7893}html.dark .promptdown .color-pink{background-color:#c55c71}.promptdown .color-magenta{background-color:#fb88fb}html.dark .promptdown .color-magenta{background-color:#9c519c}.promptdown .color-red{background-color:#fa9393}html.dark .promptdown .color-red{background-color:#aa5656}.promptdown .color-orange{background-color:#fe7a59}html.dark .promptdown .color-orange{background-color:#aa5640}.promptdown .color-lightorange{background-color:#feb259}html.dark .promptdown .color-lightorange{background-color:#6d5717}.promptdown .color-yellow{background-color:#fbfbc0}html.dark .promptdown .color-yellow{background-color:#6b6b3f}.promptdown .color-ochre{background-color:#8abc98}html.dark .promptdown .color-ochre{background-color:#567660}.promptdown button.promptdown-button-replay{position:absolute;top:10pt;right:10pt;animation:fadein .2s;color:#597afe;font-size:.8em;border:none;background:transparent;cursor:pointer}.promptdown button.promptdown-button-replay:hover{text-decoration:underline}.promptdown button.copy{background-color:#ffffff29;border:1pt solid rgba(255,255,255,.211);opacity:1;position:absolute;top:2pt;right:4pt;left:auto;font-size:10pt;opacity:.1;transition:opacity .1s;padding:5pt;background:transparent}.promptdown button.copy:hover{opacity:1} +@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:normal;font-named-instance:"Regular";src:url(/assets/inter-roman-cyrillic.5f2c6c8c.woff2) format("woff2");unicode-range:U+0301,U+0400-045F,U+0490-0491,U+04B0-04B1,U+2116}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:normal;font-named-instance:"Regular";src:url(/assets/inter-roman-cyrillic-ext.e75737ce.woff2) format("woff2");unicode-range:U+0460-052F,U+1C80-1C88,U+20B4,U+2DE0-2DFF,U+A640-A69F,U+FE2E-FE2F}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:normal;font-named-instance:"Regular";src:url(/assets/inter-roman-greek.d5a6d92a.woff2) format("woff2");unicode-range:U+0370-03FF}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:normal;font-named-instance:"Regular";src:url(/assets/inter-roman-greek-ext.ab0619bc.woff2) format("woff2");unicode-range:U+1F00-1FFF}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:normal;font-named-instance:"Regular";src:url(/assets/inter-roman-latin.2ed14f66.woff2) format("woff2");unicode-range:U+0000-00FF,U+0131,U+0152-0153,U+02BB-02BC,U+02C6,U+02DA,U+02DC,U+2000-206F,U+2074,U+20AC,U+2122,U+2191,U+2193,U+2212,U+2215,U+FEFF,U+FFFD}@font-face{font-family:Inter 
var;font-weight:100 900;font-display:swap;font-style:normal;font-named-instance:"Regular";src:url(/assets/inter-roman-latin-ext.0030eebd.woff2) format("woff2");unicode-range:U+0100-024F,U+0259,U+1E00-1EFF,U+2020,U+20A0-20AB,U+20AD-20CF,U+2113,U+2C60-2C7F,U+A720-A7FF}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:normal;font-named-instance:"Regular";src:url(/assets/inter-roman-vietnamese.14ce25a6.woff2) format("woff2");unicode-range:U+0102-0103,U+0110-0111,U+0128-0129,U+0168-0169,U+01A0-01A1,U+01AF-01B0,U+1EA0-1EF9,U+20AB}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:italic;font-named-instance:"Italic";src:url(/assets/inter-italic-cyrillic.ea42a392.woff2) format("woff2");unicode-range:U+0301,U+0400-045F,U+0490-0491,U+04B0-04B1,U+2116}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:italic;font-named-instance:"Italic";src:url(/assets/inter-italic-cyrillic-ext.33bd5a8e.woff2) format("woff2");unicode-range:U+0460-052F,U+1C80-1C88,U+20B4,U+2DE0-2DFF,U+A640-A69F,U+FE2E-FE2F}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:italic;font-named-instance:"Italic";src:url(/assets/inter-italic-greek.8f4463c4.woff2) format("woff2");unicode-range:U+0370-03FF}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:italic;font-named-instance:"Italic";src:url(/assets/inter-italic-greek-ext.4fbe9427.woff2) format("woff2");unicode-range:U+1F00-1FFF}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:italic;font-named-instance:"Italic";src:url(/assets/inter-italic-latin.bd3b6f56.woff2) format("woff2");unicode-range:U+0000-00FF,U+0131,U+0152-0153,U+02BB-02BC,U+02C6,U+02DA,U+02DC,U+2000-206F,U+2074,U+20AC,U+2122,U+2191,U+2193,U+2212,U+2215,U+FEFF,U+FFFD}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:italic;font-named-instance:"Italic";src:url(/assets/inter-italic-latin-ext.bd8920cc.woff2) format("woff2");unicode-range:U+0100-024F,U+0259,U+1E00-1EFF,U+2020,U+20A0-20AB,U+20AD-20CF,U+2113,U+2C60-2C7F,U+A720-A7FF}@font-face{font-family:Inter var;font-weight:100 900;font-display:swap;font-style:italic;font-named-instance:"Italic";src:url(/assets/inter-italic-vietnamese.6ce511fb.woff2) format("woff2");unicode-range:U+0102-0103,U+0110-0111,U+0128-0129,U+0168-0169,U+01A0-01A1,U+01AF-01B0,U+1EA0-1EF9,U+20AB}@font-face{font-family:Chinese Quotes;src:local("PingFang SC Regular"),local("PingFang SC"),local("SimHei"),local("Source Han Sans SC");unicode-range:U+2018,U+2019,U+201C,U+201D}:root{--vp-c-white: #ffffff;--vp-c-black: #000000;--vp-c-neutral: var(--vp-c-black);--vp-c-neutral-inverse: var(--vp-c-white)}.dark{--vp-c-neutral: var(--vp-c-white);--vp-c-neutral-inverse: var(--vp-c-black)}:root{--vp-c-gray-1: #dddde3;--vp-c-gray-2: #e4e4e9;--vp-c-gray-3: #ebebef;--vp-c-gray-soft: rgba(142, 150, 170, .14);--vp-c-indigo-1: #3451b2;--vp-c-indigo-2: #3a5ccc;--vp-c-indigo-3: #5672cd;--vp-c-indigo-soft: rgba(100, 108, 255, .14);--vp-c-green-1: #18794e;--vp-c-green-2: #299764;--vp-c-green-3: #30a46c;--vp-c-green-soft: rgba(16, 185, 129, .14);--vp-c-yellow-1: #915930;--vp-c-yellow-2: #946300;--vp-c-yellow-3: #9f6a00;--vp-c-yellow-soft: rgba(234, 179, 8, .14);--vp-c-red-1: #b8272c;--vp-c-red-2: #d5393e;--vp-c-red-3: #e0575b;--vp-c-red-soft: rgba(244, 63, 94, .14);--vp-c-sponsor: #db2777}.dark{--vp-c-gray-1: #515c67;--vp-c-gray-2: #414853;--vp-c-gray-3: #32363f;--vp-c-gray-soft: rgba(101, 117, 133, 
.16);--vp-c-indigo-1: #a8b1ff;--vp-c-indigo-2: #5c73e7;--vp-c-indigo-3: #3e63dd;--vp-c-indigo-soft: rgba(100, 108, 255, .16);--vp-c-green-1: #3dd68c;--vp-c-green-2: #30a46c;--vp-c-green-3: #298459;--vp-c-green-soft: rgba(16, 185, 129, .16);--vp-c-yellow-1: #f9b44e;--vp-c-yellow-2: #da8b17;--vp-c-yellow-3: #a46a0a;--vp-c-yellow-soft: rgba(234, 179, 8, .16);--vp-c-red-1: #f66f81;--vp-c-red-2: #f14158;--vp-c-red-3: #b62a3c;--vp-c-red-soft: rgba(244, 63, 94, .16)}:root{--vp-c-bg: #ffffff;--vp-c-bg-alt: #f6f6f7;--vp-c-bg-elv: #ffffff;--vp-c-bg-soft: #f6f6f7}.dark{--vp-c-bg: #1b1b1f;--vp-c-bg-alt: #161618;--vp-c-bg-elv: #202127;--vp-c-bg-soft: #202127}:root{--vp-c-border: #c2c2c4;--vp-c-divider: #e2e2e3;--vp-c-gutter: #e2e2e3}.dark{--vp-c-border: #3c3f44;--vp-c-divider: #2e2e32;--vp-c-gutter: #000000}:root{--vp-c-text-1: rgba(60, 60, 67);--vp-c-text-2: rgba(60, 60, 67, .78);--vp-c-text-3: rgba(60, 60, 67, .56)}.dark{--vp-c-text-1: rgba(255, 255, 245, .86);--vp-c-text-2: rgba(235, 235, 245, .6);--vp-c-text-3: rgba(235, 235, 245, .38)}:root{--vp-c-default-1: var(--vp-c-gray-1);--vp-c-default-2: var(--vp-c-gray-2);--vp-c-default-3: var(--vp-c-gray-3);--vp-c-default-soft: var(--vp-c-gray-soft);--vp-c-brand-1: var(--vp-c-indigo-1);--vp-c-brand-2: var(--vp-c-indigo-2);--vp-c-brand-3: var(--vp-c-indigo-3);--vp-c-brand-soft: var(--vp-c-indigo-soft);--vp-c-brand: var(--vp-c-brand-1);--vp-c-tip-1: var(--vp-c-brand-1);--vp-c-tip-2: var(--vp-c-brand-2);--vp-c-tip-3: var(--vp-c-brand-3);--vp-c-tip-soft: var(--vp-c-brand-soft);--vp-c-warning-1: var(--vp-c-yellow-1);--vp-c-warning-2: var(--vp-c-yellow-2);--vp-c-warning-3: var(--vp-c-yellow-3);--vp-c-warning-soft: var(--vp-c-yellow-soft);--vp-c-danger-1: var(--vp-c-red-1);--vp-c-danger-2: var(--vp-c-red-2);--vp-c-danger-3: var(--vp-c-red-3);--vp-c-danger-soft: var(--vp-c-red-soft)}:root{--vp-font-family-base: "Chinese Quotes", "Inter var", "Inter", ui-sans-serif, system-ui, -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Helvetica, Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";--vp-font-family-mono: ui-monospace, SFMono-Regular, "SF Mono", Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace}:root{--vp-shadow-1: 0 1px 2px rgba(0, 0, 0, .04), 0 1px 2px rgba(0, 0, 0, .06);--vp-shadow-2: 0 3px 12px rgba(0, 0, 0, .07), 0 1px 4px rgba(0, 0, 0, .07);--vp-shadow-3: 0 12px 32px rgba(0, 0, 0, .1), 0 2px 6px rgba(0, 0, 0, .08);--vp-shadow-4: 0 14px 44px rgba(0, 0, 0, .12), 0 3px 9px rgba(0, 0, 0, .12);--vp-shadow-5: 0 18px 56px rgba(0, 0, 0, .16), 0 4px 12px rgba(0, 0, 0, .16)}:root{--vp-z-index-footer: 10;--vp-z-index-local-nav: 20;--vp-z-index-nav: 30;--vp-z-index-layout-top: 40;--vp-z-index-backdrop: 50;--vp-z-index-sidebar: 60}:root{--vp-icon-copy: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='none' height='20' width='20' stroke='rgba(128,128,128,1)' stroke-width='2' viewBox='0 0 24 24'%3E%3Cpath stroke-linecap='round' stroke-linejoin='round' d='M9 5H7a2 2 0 0 0-2 2v12a2 2 0 0 0 2 2h10a2 2 0 0 0 2-2V7a2 2 0 0 0-2-2h-2M9 5a2 2 0 0 0 2 2h2a2 2 0 0 0 2-2M9 5a2 2 0 0 1 2-2h2a2 2 0 0 1 2 2'/%3E%3C/svg%3E");--vp-icon-copied: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='none' height='20' width='20' stroke='rgba(128,128,128,1)' stroke-width='2' viewBox='0 0 24 24'%3E%3Cpath stroke-linecap='round' stroke-linejoin='round' d='M9 5H7a2 2 0 0 0-2 2v12a2 2 0 0 0 2 2h10a2 2 0 0 0 2-2V7a2 2 0 0 0-2-2h-2M9 5a2 2 0 0 0 2 2h2a2 2 0 
0 0 2-2M9 5a2 2 0 0 1 2-2h2a2 2 0 0 1 2 2m-6 9 2 2 4-4'/%3E%3C/svg%3E")}:root{--vp-layout-max-width: 1440px}:root{--vp-header-anchor-symbol: "#"}:root{--vp-code-line-height: 1.7;--vp-code-font-size: .875em;--vp-code-color: var(--vp-c-brand-1);--vp-code-link-color: var(--vp-c-brand-1);--vp-code-link-hover-color: var(--vp-c-brand-2);--vp-code-bg: var(--vp-c-default-soft);--vp-code-block-color: var(--vp-c-text-2);--vp-code-block-bg: var(--vp-c-bg-alt);--vp-code-block-divider-color: var(--vp-c-gutter);--vp-code-lang-color: var(--vp-c-text-3);--vp-code-line-highlight-color: var(--vp-c-default-soft);--vp-code-line-number-color: var(--vp-c-text-3);--vp-code-line-diff-add-color: var(--vp-c-green-soft);--vp-code-line-diff-add-symbol-color: var(--vp-c-green-1);--vp-code-line-diff-remove-color: var(--vp-c-red-soft);--vp-code-line-diff-remove-symbol-color: var(--vp-c-red-1);--vp-code-line-warning-color: var(--vp-c-yellow-soft);--vp-code-line-error-color: var(--vp-c-red-soft);--vp-code-copy-code-border-color: var(--vp-c-divider);--vp-code-copy-code-bg: var(--vp-c-bg-soft);--vp-code-copy-code-hover-border-color: var(--vp-c-divider);--vp-code-copy-code-hover-bg: var(--vp-c-bg);--vp-code-copy-code-active-text: var(--vp-c-text-2);--vp-code-copy-copied-text-content: "Copied";--vp-code-tab-divider: var(--vp-code-block-divider-color);--vp-code-tab-text-color: var(--vp-c-text-2);--vp-code-tab-bg: var(--vp-code-block-bg);--vp-code-tab-hover-text-color: var(--vp-c-text-1);--vp-code-tab-active-text-color: var(--vp-c-text-1);--vp-code-tab-active-bar-color: var(--vp-c-brand-1)}:root{--vp-button-brand-border: transparent;--vp-button-brand-text: var(--vp-c-white);--vp-button-brand-bg: var(--vp-c-brand-3);--vp-button-brand-hover-border: transparent;--vp-button-brand-hover-text: var(--vp-c-white);--vp-button-brand-hover-bg: var(--vp-c-brand-2);--vp-button-brand-active-border: transparent;--vp-button-brand-active-text: var(--vp-c-white);--vp-button-brand-active-bg: var(--vp-c-brand-1);--vp-button-alt-border: transparent;--vp-button-alt-text: var(--vp-c-text-1);--vp-button-alt-bg: var(--vp-c-default-3);--vp-button-alt-hover-border: transparent;--vp-button-alt-hover-text: var(--vp-c-text-1);--vp-button-alt-hover-bg: var(--vp-c-default-2);--vp-button-alt-active-border: transparent;--vp-button-alt-active-text: var(--vp-c-text-1);--vp-button-alt-active-bg: var(--vp-c-default-1);--vp-button-sponsor-border: var(--vp-c-text-2);--vp-button-sponsor-text: var(--vp-c-text-2);--vp-button-sponsor-bg: transparent;--vp-button-sponsor-hover-border: var(--vp-c-sponsor);--vp-button-sponsor-hover-text: var(--vp-c-sponsor);--vp-button-sponsor-hover-bg: transparent;--vp-button-sponsor-active-border: var(--vp-c-sponsor);--vp-button-sponsor-active-text: var(--vp-c-sponsor);--vp-button-sponsor-active-bg: transparent}:root{--vp-custom-block-font-size: 14px;--vp-custom-block-code-font-size: 13px;--vp-custom-block-info-border: transparent;--vp-custom-block-info-text: var(--vp-c-text-1);--vp-custom-block-info-bg: var(--vp-c-default-soft);--vp-custom-block-info-code-bg: var(--vp-c-default-soft);--vp-custom-block-tip-border: transparent;--vp-custom-block-tip-text: var(--vp-c-text-1);--vp-custom-block-tip-bg: var(--vp-c-brand-soft);--vp-custom-block-tip-code-bg: var(--vp-c-brand-soft);--vp-custom-block-warning-border: transparent;--vp-custom-block-warning-text: var(--vp-c-text-1);--vp-custom-block-warning-bg: var(--vp-c-warning-soft);--vp-custom-block-warning-code-bg: var(--vp-c-warning-soft);--vp-custom-block-danger-border: 
transparent;--vp-custom-block-danger-text: var(--vp-c-text-1);--vp-custom-block-danger-bg: var(--vp-c-danger-soft);--vp-custom-block-danger-code-bg: var(--vp-c-danger-soft);--vp-custom-block-details-border: var(--vp-custom-block-info-border);--vp-custom-block-details-text: var(--vp-custom-block-info-text);--vp-custom-block-details-bg: var(--vp-custom-block-info-bg);--vp-custom-block-details-code-bg: var(--vp-custom-block-info-code-bg)}:root{--vp-input-border-color: var(--vp-c-border);--vp-input-bg-color: var(--vp-c-bg-alt);--vp-input-switch-bg-color: var(--vp-c-gray-soft)}:root{--vp-nav-height: 64px;--vp-nav-bg-color: var(--vp-c-bg);--vp-nav-screen-bg-color: var(--vp-c-bg);--vp-nav-logo-height: 24px}.hide-nav{--vp-nav-height: 0px}.hide-nav .VPSidebar{--vp-nav-height: 22px}:root{--vp-local-nav-bg-color: var(--vp-c-bg)}:root{--vp-sidebar-width: 272px;--vp-sidebar-bg-color: var(--vp-c-bg-alt)}:root{--vp-backdrop-bg-color: rgba(0, 0, 0, .6)}:root{--vp-home-hero-name-color: var(--vp-c-brand-1);--vp-home-hero-name-background: transparent;--vp-home-hero-image-background-image: none;--vp-home-hero-image-filter: none}:root{--vp-badge-info-border: transparent;--vp-badge-info-text: var(--vp-c-text-2);--vp-badge-info-bg: var(--vp-c-default-soft);--vp-badge-tip-border: transparent;--vp-badge-tip-text: var(--vp-c-brand-1);--vp-badge-tip-bg: var(--vp-c-brand-soft);--vp-badge-warning-border: transparent;--vp-badge-warning-text: var(--vp-c-warning-1);--vp-badge-warning-bg: var(--vp-c-warning-soft);--vp-badge-danger-border: transparent;--vp-badge-danger-text: var(--vp-c-danger-1);--vp-badge-danger-bg: var(--vp-c-danger-soft)}:root{--vp-carbon-ads-text-color: var(--vp-c-text-1);--vp-carbon-ads-poweredby-color: var(--vp-c-text-2);--vp-carbon-ads-bg-color: var(--vp-c-bg-soft);--vp-carbon-ads-hover-text-color: var(--vp-c-brand-1);--vp-carbon-ads-hover-poweredby-color: var(--vp-c-text-1)}:root{--vp-local-search-bg: var(--vp-c-bg);--vp-local-search-result-bg: var(--vp-c-bg);--vp-local-search-result-border: var(--vp-c-divider);--vp-local-search-result-selected-bg: var(--vp-c-bg);--vp-local-search-result-selected-border: var(--vp-c-brand-1);--vp-local-search-highlight-bg: var(--vp-c-brand-1);--vp-local-search-highlight-text: var(--vp-c-neutral-inverse)}@media (prefers-reduced-motion: 
reduce){*,:before,:after{animation-delay:-1ms!important;animation-duration:1ms!important;animation-iteration-count:1!important;background-attachment:initial!important;scroll-behavior:auto!important;transition-duration:0s!important;transition-delay:0s!important}}*,:before,:after{box-sizing:border-box}html{line-height:1.4;font-size:16px;-webkit-text-size-adjust:100%}html.dark{color-scheme:dark}body{margin:0;width:100%;min-width:320px;min-height:100vh;line-height:24px;font-family:var(--vp-font-family-base);font-size:16px;font-weight:400;color:var(--vp-c-text-1);background-color:var(--vp-c-bg);direction:ltr;font-synthesis:style;text-rendering:optimizeLegibility;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}main{display:block}h1,h2,h3,h4,h5,h6{margin:0;line-height:24px;font-size:16px;font-weight:400}p{margin:0}strong,b{font-weight:600}a,area,button,[role=button],input,label,select,summary,textarea{touch-action:manipulation}a{color:inherit;text-decoration:inherit}ol,ul{list-style:none;margin:0;padding:0}blockquote{margin:0}pre,code,kbd,samp{font-family:var(--vp-font-family-mono)}img,svg,video,canvas,audio,iframe,embed,object{display:block}figure{margin:0}img,video{max-width:100%;height:auto}button,input,optgroup,select,textarea{border:0;padding:0;line-height:inherit;color:inherit}button{padding:0;font-family:inherit;background-color:transparent;background-image:none}button:enabled,[role=button]:enabled{cursor:pointer}button:focus,button:focus-visible{outline:1px dotted;outline:4px auto -webkit-focus-ring-color}button:focus:not(:focus-visible){outline:none!important}input:focus,textarea:focus,select:focus{outline:none}table{border-collapse:collapse}input{background-color:transparent}input:-ms-input-placeholder,textarea:-ms-input-placeholder{color:var(--vp-c-text-3)}input::-ms-input-placeholder,textarea::-ms-input-placeholder{color:var(--vp-c-text-3)}input::placeholder,textarea::placeholder{color:var(--vp-c-text-3)}input::-webkit-outer-spin-button,input::-webkit-inner-spin-button{-webkit-appearance:none;margin:0}input[type=number]{-moz-appearance:textfield}textarea{resize:vertical}select{-webkit-appearance:none}fieldset{margin:0;padding:0}h1,h2,h3,h4,h5,h6,li,p{overflow-wrap:break-word}vite-error-overlay{z-index:9999}mjx-container{display:inline-block;margin:auto 2px -2px}mjx-container>svg{margin:auto}.visually-hidden{position:absolute;width:1px;height:1px;white-space:nowrap;clip:rect(0 0 0 0);clip-path:inset(50%);overflow:hidden}.custom-block{border:1px solid transparent;border-radius:8px;padding:16px 16px 8px;line-height:24px;font-size:var(--vp-custom-block-font-size);color:var(--vp-c-text-2)}.custom-block.info{border-color:var(--vp-custom-block-info-border);color:var(--vp-custom-block-info-text);background-color:var(--vp-custom-block-info-bg)}.custom-block.info a,.custom-block.info code{color:var(--vp-c-brand-1)}.custom-block.info a:hover{color:var(--vp-c-brand-2)}.custom-block.info code{background-color:var(--vp-custom-block-info-code-bg)}.custom-block.tip{border-color:var(--vp-custom-block-tip-border);color:var(--vp-custom-block-tip-text);background-color:var(--vp-custom-block-tip-bg)}.custom-block.tip a,.custom-block.tip code{color:var(--vp-c-brand-1)}.custom-block.tip a:hover{color:var(--vp-c-brand-2)}.custom-block.tip code{background-color:var(--vp-custom-block-tip-code-bg)}.custom-block.warning{border-color:var(--vp-custom-block-warning-border);color:var(--vp-custom-block-warning-text);background-color:var(--vp-custom-block-warning-bg)}.custom-block.warning 
a,.custom-block.warning code{color:var(--vp-c-warning-1)}.custom-block.warning a:hover{color:var(--vp-c-warning-2)}.custom-block.warning code{background-color:var(--vp-custom-block-warning-code-bg)}.custom-block.danger{border-color:var(--vp-custom-block-danger-border);color:var(--vp-custom-block-danger-text);background-color:var(--vp-custom-block-danger-bg)}.custom-block.danger a,.custom-block.danger code{color:var(--vp-c-danger-1)}.custom-block.danger a:hover{color:var(--vp-c-danger-2)}.custom-block.danger code{background-color:var(--vp-custom-block-danger-code-bg)}.custom-block.details{border-color:var(--vp-custom-block-details-border);color:var(--vp-custom-block-details-text);background-color:var(--vp-custom-block-details-bg)}.custom-block.details a{color:var(--vp-c-brand-1)}.custom-block.details a:hover{color:var(--vp-c-brand-2)}.custom-block.details code{background-color:var(--vp-custom-block-details-code-bg)}.custom-block-title{font-weight:600}.custom-block p+p{margin:8px 0}.custom-block.details summary{margin:0 0 8px;font-weight:700;cursor:pointer}.custom-block.details summary+p{margin:8px 0}.custom-block a{color:inherit;font-weight:600;text-decoration:underline;text-underline-offset:2px;transition:opacity .25s}.custom-block a:hover{opacity:.75}.custom-block code{font-size:var(--vp-custom-block-code-font-size)}.custom-block.custom-block th,.custom-block.custom-block blockquote>p{font-size:var(--vp-custom-block-font-size);color:inherit}.dark .vp-code-light{display:none}html:not(.dark) .vp-code-dark{display:none}.vp-code-group{margin-top:16px}.vp-code-group .tabs{position:relative;display:flex;margin-right:-24px;margin-left:-24px;padding:0 12px;background-color:var(--vp-code-tab-bg);overflow-x:auto;overflow-y:hidden;box-shadow:inset 0 -1px var(--vp-code-tab-divider)}@media (min-width: 640px){.vp-code-group .tabs{margin-right:0;margin-left:0;border-radius:8px 8px 0 0}}.vp-code-group .tabs input{position:fixed;opacity:0;pointer-events:none}.vp-code-group .tabs label{position:relative;display:inline-block;border-bottom:1px solid transparent;padding:0 12px;line-height:48px;font-size:14px;font-weight:500;color:var(--vp-code-tab-text-color);white-space:nowrap;cursor:pointer;transition:color .25s}.vp-code-group .tabs label:after{position:absolute;right:8px;bottom:-1px;left:8px;z-index:1;height:2px;border-radius:2px;content:"";background-color:transparent;transition:background-color .25s}.vp-code-group label:hover{color:var(--vp-code-tab-hover-text-color)}.vp-code-group input:checked+label{color:var(--vp-code-tab-active-text-color)}.vp-code-group input:checked+label:after{background-color:var(--vp-code-tab-active-bar-color)}.vp-code-group div[class*=language-],.vp-block{display:none;margin-top:0!important;border-top-left-radius:0!important;border-top-right-radius:0!important}.vp-code-group div[class*=language-].active,.vp-block.active{display:block}.vp-block{padding:20px 24px}.vp-doc h1,.vp-doc h2,.vp-doc h3,.vp-doc h4,.vp-doc h5,.vp-doc h6{position:relative;font-weight:600;outline:none}.vp-doc h1{letter-spacing:-.02em;line-height:40px;font-size:28px}.vp-doc h2{margin:48px 0 16px;border-top:1px solid var(--vp-c-divider);padding-top:24px;letter-spacing:-.02em;line-height:32px;font-size:24px}.vp-doc h3{margin:32px 0 0;letter-spacing:-.01em;line-height:28px;font-size:20px}.vp-doc .header-anchor{position:absolute;top:0;left:0;margin-left:-.87em;font-weight:500;-webkit-user-select:none;user-select:none;opacity:0;text-decoration:none;transition:color .25s,opacity .25s}.vp-doc 
.header-anchor:before{content:var(--vp-header-anchor-symbol)}.vp-doc h1:hover .header-anchor,.vp-doc h1 .header-anchor:focus,.vp-doc h2:hover .header-anchor,.vp-doc h2 .header-anchor:focus,.vp-doc h3:hover .header-anchor,.vp-doc h3 .header-anchor:focus,.vp-doc h4:hover .header-anchor,.vp-doc h4 .header-anchor:focus,.vp-doc h5:hover .header-anchor,.vp-doc h5 .header-anchor:focus,.vp-doc h6:hover .header-anchor,.vp-doc h6 .header-anchor:focus{opacity:1}@media (min-width: 768px){.vp-doc h1{letter-spacing:-.02em;line-height:40px;font-size:32px}}.vp-doc h2 .header-anchor{top:24px}.vp-doc p,.vp-doc summary{margin:16px 0}.vp-doc p{line-height:28px}.vp-doc blockquote{margin:16px 0;border-left:2px solid var(--vp-c-divider);padding-left:16px;transition:border-color .5s}.vp-doc blockquote>p{margin:0;font-size:16px;color:var(--vp-c-text-2);transition:color .5s}.vp-doc a{font-weight:500;color:var(--vp-c-brand-1);text-decoration:underline;text-underline-offset:2px;transition:color .25s,opacity .25s}.vp-doc a:hover{color:var(--vp-c-brand-2)}.vp-doc strong{font-weight:600}.vp-doc ul,.vp-doc ol{padding-left:1.25rem;margin:16px 0}.vp-doc ul{list-style:disc}.vp-doc ol{list-style:decimal}.vp-doc li+li{margin-top:8px}.vp-doc li>ol,.vp-doc li>ul{margin:8px 0 0}.vp-doc table{display:block;border-collapse:collapse;margin:20px 0;overflow-x:auto}.vp-doc tr{border-top:1px solid var(--vp-c-divider);transition:background-color .5s}.vp-doc tr:nth-child(2n){background-color:var(--vp-c-bg-soft)}.vp-doc th,.vp-doc td{border:1px solid var(--vp-c-divider);padding:8px 16px}.vp-doc th{text-align:left;font-size:14px;font-weight:600;color:var(--vp-c-text-2);background-color:var(--vp-c-bg-soft)}.vp-doc td{font-size:14px}.vp-doc hr{margin:16px 0;border:none;border-top:1px solid var(--vp-c-divider)}.vp-doc .custom-block{margin:16px 0}.vp-doc .custom-block p{margin:8px 0;line-height:24px}.vp-doc .custom-block p:first-child{margin:0}.vp-doc .custom-block div[class*=language-]{margin:8px 0;border-radius:8px}.vp-doc .custom-block div[class*=language-] code{font-weight:400;background-color:transparent}.vp-doc .custom-block .vp-code-group .tabs{margin:0;border-radius:8px 8px 0 0}.vp-doc :not(pre,h1,h2,h3,h4,h5,h6)>code{font-size:var(--vp-code-font-size);color:var(--vp-code-color)}.vp-doc :not(pre)>code{border-radius:4px;padding:3px 6px;background-color:var(--vp-code-bg);transition:color .25s,background-color .5s}.vp-doc a>code{color:var(--vp-code-link-color)}.vp-doc a:hover>code{color:var(--vp-code-link-hover-color)}.vp-doc h1>code,.vp-doc h2>code,.vp-doc h3>code{font-size:.9em}.vp-doc div[class*=language-],.vp-block{position:relative;margin:16px -24px;background-color:var(--vp-code-block-bg);overflow-x:auto;transition:background-color .5s}@media (min-width: 640px){.vp-doc div[class*=language-],.vp-block{border-radius:8px;margin:16px 0}}@media (max-width: 639px){.vp-doc li div[class*=language-]{border-radius:8px 0 0 8px}}.vp-doc div[class*=language-]+div[class*=language-],.vp-doc div[class$=-api]+div[class*=language-],.vp-doc div[class*=language-]+div[class$=-api]>div[class*=language-]{margin-top:-8px}.vp-doc [class*=language-] pre,.vp-doc [class*=language-] code{direction:ltr;text-align:left;white-space:pre;word-spacing:normal;word-break:normal;word-wrap:normal;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-hyphens:none;-moz-hyphens:none;-ms-hyphens:none;hyphens:none}.vp-doc [class*=language-] pre{position:relative;z-index:1;margin:0;padding:20px 0;background:transparent;overflow-x:auto}.vp-doc [class*=language-] 
code{display:block;padding:0 24px;width:fit-content;min-width:100%;line-height:var(--vp-code-line-height);font-size:var(--vp-code-font-size);color:var(--vp-code-block-color);transition:color .5s}.vp-doc [class*=language-] code .highlighted{background-color:var(--vp-code-line-highlight-color);transition:background-color .5s;margin:0 -24px;padding:0 24px;width:calc(100% + 48px);display:inline-block}.vp-doc [class*=language-] code .highlighted.error{background-color:var(--vp-code-line-error-color)}.vp-doc [class*=language-] code .highlighted.warning{background-color:var(--vp-code-line-warning-color)}.vp-doc [class*=language-] code .diff{transition:background-color .5s;margin:0 -24px;padding:0 24px;width:calc(100% + 48px);display:inline-block}.vp-doc [class*=language-] code .diff:before{position:absolute;left:10px}.vp-doc [class*=language-] .has-focused-lines .line:not(.has-focus){filter:blur(.095rem);opacity:.4;transition:filter .35s,opacity .35s}.vp-doc [class*=language-] .has-focused-lines .line:not(.has-focus){opacity:.7;transition:filter .35s,opacity .35s}.vp-doc [class*=language-]:hover .has-focused-lines .line:not(.has-focus){filter:blur(0);opacity:1}.vp-doc [class*=language-] code .diff.remove{background-color:var(--vp-code-line-diff-remove-color);opacity:.7}.vp-doc [class*=language-] code .diff.remove:before{content:"-";color:var(--vp-code-line-diff-remove-symbol-color)}.vp-doc [class*=language-] code .diff.add{background-color:var(--vp-code-line-diff-add-color)}.vp-doc [class*=language-] code .diff.add:before{content:"+";color:var(--vp-code-line-diff-add-symbol-color)}.vp-doc div[class*=language-].line-numbers-mode{padding-left:32px}.vp-doc .line-numbers-wrapper{position:absolute;top:0;bottom:0;left:0;z-index:3;border-right:1px solid var(--vp-code-block-divider-color);padding-top:20px;width:32px;text-align:center;font-family:var(--vp-font-family-mono);line-height:var(--vp-code-line-height);font-size:var(--vp-code-font-size);color:var(--vp-code-line-number-color);transition:border-color .5s,color .5s}.vp-doc [class*=language-]>button.copy{direction:ltr;position:absolute;top:12px;right:12px;z-index:3;border:1px solid var(--vp-code-copy-code-border-color);border-radius:4px;width:40px;height:40px;background-color:var(--vp-code-copy-code-bg);opacity:0;cursor:pointer;background-image:var(--vp-icon-copy);background-position:50%;background-size:20px;background-repeat:no-repeat;transition:border-color .25s,background-color .25s,opacity .25s}.vp-doc [class*=language-]:hover>button.copy,.vp-doc [class*=language-]>button.copy:focus{opacity:1}.vp-doc [class*=language-]>button.copy:hover,.vp-doc [class*=language-]>button.copy.copied{border-color:var(--vp-code-copy-code-hover-border-color);background-color:var(--vp-code-copy-code-hover-bg)}.vp-doc [class*=language-]>button.copy.copied,.vp-doc [class*=language-]>button.copy:hover.copied{border-radius:0 4px 4px 0;background-color:var(--vp-code-copy-code-hover-bg);background-image:var(--vp-icon-copied)}.vp-doc [class*=language-]>button.copy.copied:before,.vp-doc [class*=language-]>button.copy:hover.copied:before{position:relative;top:-1px;transform:translate(calc(-100% - 1px));display:flex;justify-content:center;align-items:center;border:1px solid var(--vp-code-copy-code-hover-border-color);border-right:0;border-radius:4px 0 0 4px;padding:0 
10px;width:fit-content;height:40px;text-align:center;font-size:12px;font-weight:500;color:var(--vp-code-copy-code-active-text);background-color:var(--vp-code-copy-code-hover-bg);white-space:nowrap;content:var(--vp-code-copy-copied-text-content)}.vp-doc [class*=language-]>span.lang{position:absolute;top:2px;right:8px;z-index:2;font-size:12px;font-weight:500;color:var(--vp-code-lang-color);transition:color .4s,opacity .4s}.vp-doc [class*=language-]:hover>button.copy+span.lang,.vp-doc [class*=language-]>button.copy:focus+span.lang{opacity:0}.vp-doc .VPTeamMembers{margin-top:24px}.vp-doc .VPTeamMembers.small.count-1 .container{margin:0!important;max-width:calc((100% - 24px)/2)!important}.vp-doc .VPTeamMembers.small.count-2 .container,.vp-doc .VPTeamMembers.small.count-3 .container{max-width:100%!important}.vp-doc .VPTeamMembers.medium.count-1 .container{margin:0!important;max-width:calc((100% - 24px)/2)!important}:is(.vp-external-link-icon,.vp-doc a[href*="://"],.vp-doc a[target=_blank]):not(.no-icon):after{display:inline-block;margin-top:-1px;margin-left:4px;width:11px;height:11px;background:currentColor;color:var(--vp-c-text-3);flex-shrink:0;--icon: url("data:image/svg+xml, %3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 24 24' %3E%3Cpath d='M0 0h24v24H0V0z' fill='none' /%3E%3Cpath d='M9 5v2h6.59L4 18.59 5.41 20 17 8.41V15h2V5H9z' /%3E%3C/svg%3E");-webkit-mask-image:var(--icon);mask-image:var(--icon)}.vp-external-link-icon:after{content:""}.vp-sponsor{border-radius:16px;overflow:hidden}.vp-sponsor.aside{border-radius:12px}.vp-sponsor-section+.vp-sponsor-section{margin-top:4px}.vp-sponsor-tier{margin-bottom:4px;text-align:center;letter-spacing:1px;line-height:24px;width:100%;font-weight:600;color:var(--vp-c-text-2);background-color:var(--vp-c-bg-soft)}.vp-sponsor.normal .vp-sponsor-tier{padding:13px 0 11px;font-size:14px}.vp-sponsor.aside .vp-sponsor-tier{padding:9px 0 7px;font-size:12px}.vp-sponsor-grid+.vp-sponsor-tier{margin-top:4px}.vp-sponsor-grid{display:flex;flex-wrap:wrap;gap:4px}.vp-sponsor-grid.xmini .vp-sponsor-grid-link{height:64px}.vp-sponsor-grid.xmini .vp-sponsor-grid-image{max-width:64px;max-height:22px}.vp-sponsor-grid.mini .vp-sponsor-grid-link{height:72px}.vp-sponsor-grid.mini .vp-sponsor-grid-image{max-width:96px;max-height:24px}.vp-sponsor-grid.small .vp-sponsor-grid-link{height:96px}.vp-sponsor-grid.small .vp-sponsor-grid-image{max-width:96px;max-height:24px}.vp-sponsor-grid.medium .vp-sponsor-grid-link{height:112px}.vp-sponsor-grid.medium .vp-sponsor-grid-image{max-width:120px;max-height:36px}.vp-sponsor-grid.big .vp-sponsor-grid-link{height:184px}.vp-sponsor-grid.big .vp-sponsor-grid-image{max-width:192px;max-height:56px}.vp-sponsor-grid[data-vp-grid="2"] .vp-sponsor-grid-item{width:calc((100% - 4px)/2)}.vp-sponsor-grid[data-vp-grid="3"] .vp-sponsor-grid-item{width:calc((100% - 4px * 2) / 3)}.vp-sponsor-grid[data-vp-grid="4"] .vp-sponsor-grid-item{width:calc((100% - 12px)/4)}.vp-sponsor-grid[data-vp-grid="5"] .vp-sponsor-grid-item{width:calc((100% - 16px)/5)}.vp-sponsor-grid[data-vp-grid="6"] .vp-sponsor-grid-item{width:calc((100% - 4px * 5) / 6)}.vp-sponsor-grid-item{flex-shrink:0;width:100%;background-color:var(--vp-c-bg-soft);transition:background-color .25s}.vp-sponsor-grid-item:hover{background-color:var(--vp-c-default-soft)}.vp-sponsor-grid-item:hover .vp-sponsor-grid-image{filter:grayscale(0) invert(0)}.vp-sponsor-grid-item.empty:hover{background-color:var(--vp-c-bg-soft)}.dark .vp-sponsor-grid-item:hover{background-color:var(--vp-c-white)}.dark 
.vp-sponsor-grid-item.empty:hover{background-color:var(--vp-c-bg-soft)}.vp-sponsor-grid-link{display:flex}.vp-sponsor-grid-box{display:flex;justify-content:center;align-items:center;width:100%}.vp-sponsor-grid-image{max-width:100%;filter:grayscale(1);transition:filter .25s}.dark .vp-sponsor-grid-image{filter:grayscale(1) invert(1)}.VPBadge[data-v-ea5b2908]{display:inline-block;margin-left:2px;border:1px solid transparent;border-radius:12px;padding:0 10px;line-height:22px;font-size:12px;font-weight:500;transform:translateY(-2px)}.vp-doc h1>.VPBadge[data-v-ea5b2908]{margin-top:4px;vertical-align:top}.vp-doc h2>.VPBadge[data-v-ea5b2908]{margin-top:3px;padding:0 8px;vertical-align:top}.vp-doc h3>.VPBadge[data-v-ea5b2908]{vertical-align:middle}.vp-doc h4>.VPBadge[data-v-ea5b2908],.vp-doc h5>.VPBadge[data-v-ea5b2908],.vp-doc h6>.VPBadge[data-v-ea5b2908]{vertical-align:middle;line-height:18px}.VPBadge.info[data-v-ea5b2908]{border-color:var(--vp-badge-info-border);color:var(--vp-badge-info-text);background-color:var(--vp-badge-info-bg)}.VPBadge.tip[data-v-ea5b2908]{border-color:var(--vp-badge-tip-border);color:var(--vp-badge-tip-text);background-color:var(--vp-badge-tip-bg)}.VPBadge.warning[data-v-ea5b2908]{border-color:var(--vp-badge-warning-border);color:var(--vp-badge-warning-text);background-color:var(--vp-badge-warning-bg)}.VPBadge.danger[data-v-ea5b2908]{border-color:var(--vp-badge-danger-border);color:var(--vp-badge-danger-text);background-color:var(--vp-badge-danger-bg)}.VPBackdrop[data-v-54a304ca]{position:fixed;top:0;right:0;bottom:0;left:0;z-index:var(--vp-z-index-backdrop);background:var(--vp-backdrop-bg-color);transition:opacity .5s}.VPBackdrop.fade-enter-from[data-v-54a304ca],.VPBackdrop.fade-leave-to[data-v-54a304ca]{opacity:0}.VPBackdrop.fade-leave-active[data-v-54a304ca]{transition-duration:.25s}@media (min-width: 1280px){.VPBackdrop[data-v-54a304ca]{display:none}}.NotFound[data-v-b9c0c15a]{padding:64px 24px 96px;text-align:center}@media (min-width: 768px){.NotFound[data-v-b9c0c15a]{padding:96px 32px 168px}}.code[data-v-b9c0c15a]{line-height:64px;font-size:64px;font-weight:600}.title[data-v-b9c0c15a]{padding-top:12px;letter-spacing:2px;line-height:20px;font-size:20px;font-weight:700}.divider[data-v-b9c0c15a]{margin:24px auto 18px;width:64px;height:1px;background-color:var(--vp-c-divider)}.quote[data-v-b9c0c15a]{margin:0 auto;max-width:256px;font-size:14px;font-weight:500;color:var(--vp-c-text-2)}.action[data-v-b9c0c15a]{padding-top:20px}.link[data-v-b9c0c15a]{display:inline-block;border:1px solid var(--vp-c-brand-1);border-radius:16px;padding:3px 16px;font-size:14px;font-weight:500;color:var(--vp-c-brand-1);transition:border-color .25s,color .25s}.link[data-v-b9c0c15a]:hover{border-color:var(--vp-c-brand-2);color:var(--vp-c-brand-2)}.root[data-v-463da30f]{position:relative;z-index:1}.nested[data-v-463da30f]{padding-left:16px}.outline-link[data-v-463da30f]{display:block;line-height:28px;color:var(--vp-c-text-2);white-space:nowrap;overflow:hidden;text-overflow:ellipsis;transition:color .5s;font-weight:400}.outline-link[data-v-463da30f]:hover,.outline-link.active[data-v-463da30f]{color:var(--vp-c-text-1);transition:color .25s}.outline-link.nested[data-v-463da30f]{padding-left:13px}.VPDocAsideOutline[data-v-3a6c4994]{display:none}.VPDocAsideOutline.has-outline[data-v-3a6c4994]{display:block}.content[data-v-3a6c4994]{position:relative;border-left:1px solid 
var(--vp-c-divider);padding-left:16px;font-size:13px;font-weight:500}.outline-marker[data-v-3a6c4994]{position:absolute;top:32px;left:-1px;z-index:0;opacity:0;width:2px;border-radius:2px;height:18px;background-color:var(--vp-c-brand-1);transition:top .25s cubic-bezier(0,1,.5,1),background-color .5s,opacity .25s}.outline-title[data-v-3a6c4994]{letter-spacing:.4px;line-height:28px;font-size:13px;font-weight:600}.VPDocAside[data-v-cb998dce]{display:flex;flex-direction:column;flex-grow:1}.spacer[data-v-cb998dce]{flex-grow:1}.VPDocAside[data-v-cb998dce] .spacer+.VPDocAsideSponsors,.VPDocAside[data-v-cb998dce] .spacer+.VPDocAsideCarbonAds{margin-top:24px}.VPDocAside[data-v-cb998dce] .VPDocAsideSponsors+.VPDocAsideCarbonAds{margin-top:16px}.VPLastUpdated[data-v-19a7ae4e]{line-height:24px;font-size:14px;font-weight:500;color:var(--vp-c-text-2)}@media (min-width: 640px){.VPLastUpdated[data-v-19a7ae4e]{line-height:32px;font-size:14px;font-weight:500}}.VPDocFooter[data-v-a2d931e4]{margin-top:64px}.edit-info[data-v-a2d931e4]{padding-bottom:18px}@media (min-width: 640px){.edit-info[data-v-a2d931e4]{display:flex;justify-content:space-between;align-items:center;padding-bottom:14px}}.edit-link-button[data-v-a2d931e4]{display:flex;align-items:center;border:0;line-height:32px;font-size:14px;font-weight:500;color:var(--vp-c-brand-1);transition:color .25s}.edit-link-button[data-v-a2d931e4]:hover{color:var(--vp-c-brand-2)}.edit-link-icon[data-v-a2d931e4]{margin-right:8px;width:14px;height:14px;fill:currentColor}.prev-next[data-v-a2d931e4]{border-top:1px solid var(--vp-c-divider);padding-top:24px;display:grid;grid-row-gap:8px}@media (min-width: 640px){.prev-next[data-v-a2d931e4]{grid-template-columns:repeat(2,1fr);grid-column-gap:16px}}.pager-link[data-v-a2d931e4]{display:block;border:1px solid var(--vp-c-divider);border-radius:8px;padding:11px 16px 13px;width:100%;height:100%;transition:border-color .25s}.pager-link[data-v-a2d931e4]:hover{border-color:var(--vp-c-brand-1)}.pager-link.next[data-v-a2d931e4]{margin-left:auto;text-align:right}.desc[data-v-a2d931e4]{display:block;line-height:20px;font-size:12px;font-weight:500;color:var(--vp-c-text-2)}.title[data-v-a2d931e4]{display:block;line-height:20px;font-size:14px;font-weight:500;color:var(--vp-c-brand-1);transition:color .25s}.VPDocOutlineDropdown[data-v-95bb0785]{margin-bottom:48px}.VPDocOutlineDropdown button[data-v-95bb0785]{display:block;font-size:14px;font-weight:500;line-height:24px;border:1px solid var(--vp-c-border);padding:4px 12px;color:var(--vp-c-text-2);background-color:var(--vp-c-default-soft);border-radius:8px;transition:color .5s}.VPDocOutlineDropdown button[data-v-95bb0785]:hover{color:var(--vp-c-text-1);transition:color .25s}.VPDocOutlineDropdown button.open[data-v-95bb0785]{color:var(--vp-c-text-1)}.icon[data-v-95bb0785]{display:inline-block;vertical-align:middle;width:16px;height:16px;fill:currentColor}[data-v-95bb0785] .outline-link{font-size:14px;font-weight:400}.open>.icon[data-v-95bb0785]{transform:rotate(90deg)}.items[data-v-95bb0785]{margin-top:12px;border-left:1px solid var(--vp-c-divider)}.VPDoc[data-v-a3c25e27]{padding:32px 24px 96px;width:100%}.VPDoc .VPDocOutlineDropdown[data-v-a3c25e27]{display:none}@media (min-width: 960px) and (max-width: 1279px){.VPDoc .VPDocOutlineDropdown[data-v-a3c25e27]{display:block}}@media (min-width: 768px){.VPDoc[data-v-a3c25e27]{padding:48px 32px 128px}}@media (min-width: 960px){.VPDoc[data-v-a3c25e27]{padding:32px 32px 0}.VPDoc:not(.has-sidebar) 
.container[data-v-a3c25e27]{display:flex;justify-content:center;max-width:992px}.VPDoc:not(.has-sidebar) .content[data-v-a3c25e27]{max-width:752px}}@media (min-width: 1280px){.VPDoc .container[data-v-a3c25e27]{display:flex;justify-content:center}.VPDoc .aside[data-v-a3c25e27]{display:block}}@media (min-width: 1440px){.VPDoc:not(.has-sidebar) .content[data-v-a3c25e27]{max-width:784px}.VPDoc:not(.has-sidebar) .container[data-v-a3c25e27]{max-width:1104px}}.container[data-v-a3c25e27]{margin:0 auto;width:100%}.aside[data-v-a3c25e27]{position:relative;display:none;order:2;flex-grow:1;padding-left:32px;width:100%;max-width:256px}.left-aside[data-v-a3c25e27]{order:1;padding-left:unset;padding-right:32px}.aside-container[data-v-a3c25e27]{position:fixed;top:0;padding-top:calc(var(--vp-nav-height) + var(--vp-layout-top-height, 0px) + var(--vp-doc-top-height, 0px) + 32px);width:224px;height:100vh;overflow-x:hidden;overflow-y:auto;scrollbar-width:none}.aside-container[data-v-a3c25e27]::-webkit-scrollbar{display:none}.aside-curtain[data-v-a3c25e27]{position:fixed;bottom:0;z-index:10;width:224px;height:32px;background:linear-gradient(transparent,var(--vp-c-bg) 70%)}.aside-content[data-v-a3c25e27]{display:flex;flex-direction:column;min-height:calc(100vh - (var(--vp-nav-height) + var(--vp-layout-top-height, 0px) + 32px));padding-bottom:32px}.content[data-v-a3c25e27]{position:relative;margin:0 auto;width:100%}@media (min-width: 960px){.content[data-v-a3c25e27]{padding:0 32px 128px}}@media (min-width: 1280px){.content[data-v-a3c25e27]{order:1;margin:0;min-width:640px}}.content-container[data-v-a3c25e27]{margin:0 auto}.VPDoc.has-aside .content-container[data-v-a3c25e27]{max-width:688px}.external-link-icon-enabled[data-v-a3c25e27] :is(.vp-doc a[href*="://"],.vp-doc a[target=_blank]):after{content:"";color:currentColor}.VPButton[data-v-1e76fe75]{display:inline-block;border:1px solid transparent;text-align:center;font-weight:600;white-space:nowrap;transition:color .25s,border-color .25s,background-color .25s}.VPButton[data-v-1e76fe75]:active{transition:color .1s,border-color .1s,background-color .1s}.VPButton.medium[data-v-1e76fe75]{border-radius:20px;padding:0 20px;line-height:38px;font-size:14px}.VPButton.big[data-v-1e76fe75]{border-radius:24px;padding:0 
24px;line-height:46px;font-size:16px}.VPButton.brand[data-v-1e76fe75]{border-color:var(--vp-button-brand-border);color:var(--vp-button-brand-text);background-color:var(--vp-button-brand-bg)}.VPButton.brand[data-v-1e76fe75]:hover{border-color:var(--vp-button-brand-hover-border);color:var(--vp-button-brand-hover-text);background-color:var(--vp-button-brand-hover-bg)}.VPButton.brand[data-v-1e76fe75]:active{border-color:var(--vp-button-brand-active-border);color:var(--vp-button-brand-active-text);background-color:var(--vp-button-brand-active-bg)}.VPButton.alt[data-v-1e76fe75]{border-color:var(--vp-button-alt-border);color:var(--vp-button-alt-text);background-color:var(--vp-button-alt-bg)}.VPButton.alt[data-v-1e76fe75]:hover{border-color:var(--vp-button-alt-hover-border);color:var(--vp-button-alt-hover-text);background-color:var(--vp-button-alt-hover-bg)}.VPButton.alt[data-v-1e76fe75]:active{border-color:var(--vp-button-alt-active-border);color:var(--vp-button-alt-active-text);background-color:var(--vp-button-alt-active-bg)}.VPButton.sponsor[data-v-1e76fe75]{border-color:var(--vp-button-sponsor-border);color:var(--vp-button-sponsor-text);background-color:var(--vp-button-sponsor-bg)}.VPButton.sponsor[data-v-1e76fe75]:hover{border-color:var(--vp-button-sponsor-hover-border);color:var(--vp-button-sponsor-hover-text);background-color:var(--vp-button-sponsor-hover-bg)}.VPButton.sponsor[data-v-1e76fe75]:active{border-color:var(--vp-button-sponsor-active-border);color:var(--vp-button-sponsor-active-text);background-color:var(--vp-button-sponsor-active-bg)}html:not(.dark) .VPImage.dark[data-v-ab19afbb]{display:none}.dark .VPImage.light[data-v-ab19afbb]{display:none}.VPHero[data-v-5a3e9999]{margin-top:calc((var(--vp-nav-height) + var(--vp-layout-top-height, 0px)) * -1);padding:calc(var(--vp-nav-height) + var(--vp-layout-top-height, 0px) + 48px) 24px 48px}@media (min-width: 640px){.VPHero[data-v-5a3e9999]{padding:calc(var(--vp-nav-height) + var(--vp-layout-top-height, 0px) + 80px) 48px 64px}}@media (min-width: 960px){.VPHero[data-v-5a3e9999]{padding:calc(var(--vp-nav-height) + var(--vp-layout-top-height, 0px) + 80px) 64px 64px}}.container[data-v-5a3e9999]{display:flex;flex-direction:column;margin:0 auto;max-width:1152px}@media (min-width: 960px){.container[data-v-5a3e9999]{flex-direction:row}}.main[data-v-5a3e9999]{position:relative;z-index:10;order:2;flex-grow:1;flex-shrink:0}.VPHero.has-image .container[data-v-5a3e9999]{text-align:center}@media (min-width: 960px){.VPHero.has-image .container[data-v-5a3e9999]{text-align:left}}@media (min-width: 960px){.main[data-v-5a3e9999]{order:1;width:calc((100% / 3) * 2)}.VPHero.has-image .main[data-v-5a3e9999]{max-width:592px}}.name[data-v-5a3e9999],.text[data-v-5a3e9999]{max-width:392px;letter-spacing:-.4px;line-height:40px;font-size:32px;font-weight:700;white-space:pre-wrap}.VPHero.has-image .name[data-v-5a3e9999],.VPHero.has-image .text[data-v-5a3e9999]{margin:0 auto}.name[data-v-5a3e9999]{color:var(--vp-home-hero-name-color)}.clip[data-v-5a3e9999]{background:var(--vp-home-hero-name-background);-webkit-background-clip:text;background-clip:text;-webkit-text-fill-color:var(--vp-home-hero-name-color)}@media (min-width: 640px){.name[data-v-5a3e9999],.text[data-v-5a3e9999]{max-width:576px;line-height:56px;font-size:48px}}@media (min-width: 960px){.name[data-v-5a3e9999],.text[data-v-5a3e9999]{line-height:64px;font-size:56px}.VPHero.has-image .name[data-v-5a3e9999],.VPHero.has-image 
.text[data-v-5a3e9999]{margin:0}}.tagline[data-v-5a3e9999]{padding-top:8px;max-width:392px;line-height:28px;font-size:18px;font-weight:500;white-space:pre-wrap;color:var(--vp-c-text-2)}.VPHero.has-image .tagline[data-v-5a3e9999]{margin:0 auto}@media (min-width: 640px){.tagline[data-v-5a3e9999]{padding-top:12px;max-width:576px;line-height:32px;font-size:20px}}@media (min-width: 960px){.tagline[data-v-5a3e9999]{line-height:36px;font-size:24px}.VPHero.has-image .tagline[data-v-5a3e9999]{margin:0}}.actions[data-v-5a3e9999]{display:flex;flex-wrap:wrap;margin:-6px;padding-top:24px}.VPHero.has-image .actions[data-v-5a3e9999]{justify-content:center}@media (min-width: 640px){.actions[data-v-5a3e9999]{padding-top:32px}}@media (min-width: 960px){.VPHero.has-image .actions[data-v-5a3e9999]{justify-content:flex-start}}.action[data-v-5a3e9999]{flex-shrink:0;padding:6px}.image[data-v-5a3e9999]{order:1;margin:-76px -24px -48px}@media (min-width: 640px){.image[data-v-5a3e9999]{margin:-108px -24px -48px}}@media (min-width: 960px){.image[data-v-5a3e9999]{flex-grow:1;order:2;margin:0;min-height:100%}}.image-container[data-v-5a3e9999]{position:relative;margin:0 auto;width:320px;height:320px}@media (min-width: 640px){.image-container[data-v-5a3e9999]{width:392px;height:392px}}@media (min-width: 960px){.image-container[data-v-5a3e9999]{display:flex;justify-content:center;align-items:center;width:100%;height:100%;transform:translate(-32px,-32px)}}.image-bg[data-v-5a3e9999]{position:absolute;top:50%;left:50%;border-radius:50%;width:192px;height:192px;background-image:var(--vp-home-hero-image-background-image);filter:var(--vp-home-hero-image-filter);transform:translate(-50%,-50%)}@media (min-width: 640px){.image-bg[data-v-5a3e9999]{width:256px;height:256px}}@media (min-width: 960px){.image-bg[data-v-5a3e9999]{width:320px;height:320px}}[data-v-5a3e9999] .image-src{position:absolute;top:50%;left:50%;max-width:192px;max-height:192px;transform:translate(-50%,-50%)}@media (min-width: 640px){[data-v-5a3e9999] .image-src{max-width:256px;max-height:256px}}@media (min-width: 960px){[data-v-5a3e9999] .image-src{max-width:320px;max-height:320px}}.VPFeature[data-v-ee984185]{display:block;border:1px solid var(--vp-c-bg-soft);border-radius:12px;height:100%;background-color:var(--vp-c-bg-soft);transition:border-color .25s,background-color .25s}.VPFeature.link[data-v-ee984185]:hover{border-color:var(--vp-c-brand-1)}.box[data-v-ee984185]{display:flex;flex-direction:column;padding:24px;height:100%}.box[data-v-ee984185]>.VPImage{margin-bottom:20px}.icon[data-v-ee984185]{display:flex;justify-content:center;align-items:center;margin-bottom:20px;border-radius:6px;background-color:var(--vp-c-default-soft);width:48px;height:48px;font-size:24px;transition:background-color .25s}.title[data-v-ee984185]{line-height:24px;font-size:16px;font-weight:600}.details[data-v-ee984185]{flex-grow:1;padding-top:8px;line-height:24px;font-size:14px;font-weight:500;color:var(--vp-c-text-2)}.link-text[data-v-ee984185]{padding-top:8px}.link-text-value[data-v-ee984185]{display:flex;align-items:center;font-size:14px;font-weight:500;color:var(--vp-c-brand-1)}.link-text-icon[data-v-ee984185]{display:inline-block;margin-left:6px;width:14px;height:14px;fill:currentColor}.VPFeatures[data-v-b1eea84a]{position:relative;padding:0 24px}@media (min-width: 640px){.VPFeatures[data-v-b1eea84a]{padding:0 48px}}@media (min-width: 960px){.VPFeatures[data-v-b1eea84a]{padding:0 64px}}.container[data-v-b1eea84a]{margin:0 
auto;max-width:1152px}.items[data-v-b1eea84a]{display:flex;flex-wrap:wrap;margin:-8px}.item[data-v-b1eea84a]{padding:8px;width:100%}@media (min-width: 640px){.item.grid-2[data-v-b1eea84a],.item.grid-4[data-v-b1eea84a],.item.grid-6[data-v-b1eea84a]{width:50%}}@media (min-width: 768px){.item.grid-2[data-v-b1eea84a],.item.grid-4[data-v-b1eea84a]{width:50%}.item.grid-3[data-v-b1eea84a],.item.grid-6[data-v-b1eea84a]{width:calc(100% / 3)}}@media (min-width: 960px){.item.grid-4[data-v-b1eea84a]{width:25%}}.VPHome[data-v-20eabd3a]{padding-bottom:96px}.VPHome[data-v-20eabd3a] .VPHomeSponsors{margin-top:112px;margin-bottom:-128px}@media (min-width: 768px){.VPHome[data-v-20eabd3a]{padding-bottom:128px}}.VPContent[data-v-3cf691b6]{flex-grow:1;flex-shrink:0;margin:var(--vp-layout-top-height, 0px) auto 0;width:100%}.VPContent.is-home[data-v-3cf691b6]{width:100%;max-width:100%}.VPContent.has-sidebar[data-v-3cf691b6]{margin:0}@media (min-width: 960px){.VPContent[data-v-3cf691b6]{padding-top:var(--vp-nav-height)}.VPContent.has-sidebar[data-v-3cf691b6]{margin:var(--vp-layout-top-height, 0px) 0 0;padding-left:var(--vp-sidebar-width)}}@media (min-width: 1440px){.VPContent.has-sidebar[data-v-3cf691b6]{padding-right:calc((100vw - var(--vp-layout-max-width)) / 2);padding-left:calc((100vw - var(--vp-layout-max-width)) / 2 + var(--vp-sidebar-width))}}.VPFooter[data-v-e4279f1c]{position:relative;z-index:var(--vp-z-index-footer);border-top:1px solid var(--vp-c-gutter);padding:32px 24px;background-color:var(--vp-c-bg)}.VPFooter.has-sidebar[data-v-e4279f1c]{display:none}@media (min-width: 768px){.VPFooter[data-v-e4279f1c]{padding:32px}}.container[data-v-e4279f1c]{margin:0 auto;max-width:var(--vp-layout-max-width);text-align:center}.message[data-v-e4279f1c],.copyright[data-v-e4279f1c]{line-height:24px;font-size:14px;font-weight:500;color:var(--vp-c-text-2)}.VPLocalNavOutlineDropdown[data-v-24251f6f]{padding:12px 20px 11px}.VPLocalNavOutlineDropdown button[data-v-24251f6f]{display:block;font-size:12px;font-weight:500;line-height:24px;color:var(--vp-c-text-2);transition:color .5s;position:relative}.VPLocalNavOutlineDropdown button[data-v-24251f6f]:hover{color:var(--vp-c-text-1);transition:color .25s}.VPLocalNavOutlineDropdown button.open[data-v-24251f6f]{color:var(--vp-c-text-1)}.icon[data-v-24251f6f]{display:inline-block;vertical-align:middle;margin-left:2px;width:14px;height:14px;fill:currentColor}[data-v-24251f6f] .outline-link{font-size:14px;padding:2px 0}.open>.icon[data-v-24251f6f]{transform:rotate(90deg)}.items[data-v-24251f6f]{position:absolute;top:64px;right:16px;left:16px;display:grid;gap:1px;border:1px solid var(--vp-c-border);border-radius:8px;background-color:var(--vp-c-gutter);max-height:calc(var(--vp-vh, 100vh) - 86px);overflow:hidden auto;box-shadow:var(--vp-shadow-3)}.header[data-v-24251f6f]{background-color:var(--vp-c-bg-soft)}.top-link[data-v-24251f6f]{display:block;padding:0 16px;line-height:48px;font-size:14px;font-weight:500;color:var(--vp-c-brand-1)}.outline[data-v-24251f6f]{padding:8px 0;background-color:var(--vp-c-bg-soft)}.flyout-enter-active[data-v-24251f6f]{transition:all .2s ease-out}.flyout-leave-active[data-v-24251f6f]{transition:all .15s ease-in}.flyout-enter-from[data-v-24251f6f],.flyout-leave-to[data-v-24251f6f]{opacity:0;transform:translateY(-16px)}.VPLocalNav[data-v-9e669cc1]{position:sticky;top:0;left:0;z-index:var(--vp-z-index-local-nav);display:flex;justify-content:space-between;align-items:center;border-top:1px solid var(--vp-c-gutter);border-bottom:1px solid 
var(--vp-c-gutter);padding-top:var(--vp-layout-top-height, 0px);width:100%;background-color:var(--vp-local-nav-bg-color)}.VPLocalNav.fixed[data-v-9e669cc1]{position:fixed}.VPLocalNav.reached-top[data-v-9e669cc1]{border-top-color:transparent}@media (min-width: 960px){.VPLocalNav[data-v-9e669cc1]{display:none}}.menu[data-v-9e669cc1]{display:flex;align-items:center;padding:12px 24px 11px;line-height:24px;font-size:12px;font-weight:500;color:var(--vp-c-text-2);transition:color .5s}.menu[data-v-9e669cc1]:hover{color:var(--vp-c-text-1);transition:color .25s}@media (min-width: 768px){.menu[data-v-9e669cc1]{padding:0 32px}}.menu-icon[data-v-9e669cc1]{margin-right:8px;width:16px;height:16px;fill:currentColor}.VPOutlineDropdown[data-v-9e669cc1]{padding:12px 24px 11px}@media (min-width: 768px){.VPOutlineDropdown[data-v-9e669cc1]{padding:12px 32px 11px}}.VPSwitch[data-v-1c29e291]{position:relative;border-radius:11px;display:block;width:40px;height:22px;flex-shrink:0;border:1px solid var(--vp-input-border-color);background-color:var(--vp-input-switch-bg-color);transition:border-color .25s!important}.VPSwitch[data-v-1c29e291]:hover{border-color:var(--vp-c-brand-1)}.check[data-v-1c29e291]{position:absolute;top:1px;left:1px;width:18px;height:18px;border-radius:50%;background-color:var(--vp-c-neutral-inverse);box-shadow:var(--vp-shadow-1);transition:transform .25s!important}.icon[data-v-1c29e291]{position:relative;display:block;width:18px;height:18px;border-radius:50%;overflow:hidden}.icon[data-v-1c29e291] svg{position:absolute;top:3px;left:3px;width:12px;height:12px;fill:var(--vp-c-text-2)}.dark .icon[data-v-1c29e291] svg{fill:var(--vp-c-text-1);transition:opacity .25s!important}.sun[data-v-3329432d]{opacity:1}.moon[data-v-3329432d],.dark .sun[data-v-3329432d]{opacity:0}.dark .moon[data-v-3329432d]{opacity:1}.dark .VPSwitchAppearance[data-v-3329432d] .check{transform:translate(18px)}.VPNavBarAppearance[data-v-283b26e9]{display:none}@media (min-width: 1280px){.VPNavBarAppearance[data-v-283b26e9]{display:flex;align-items:center}}.VPMenuGroup+.VPMenuLink[data-v-f51f088d]{margin:12px -12px 0;border-top:1px solid var(--vp-c-divider);padding:12px 12px 0}.link[data-v-f51f088d]{display:block;border-radius:6px;padding:0 12px;line-height:32px;font-size:14px;font-weight:500;color:var(--vp-c-text-1);white-space:nowrap;transition:background-color .25s,color .25s}.link[data-v-f51f088d]:hover{color:var(--vp-c-brand-1);background-color:var(--vp-c-default-soft)}.link.active[data-v-f51f088d]{color:var(--vp-c-brand-1)}.VPMenuGroup[data-v-a6b0397c]{margin:12px -12px 0;border-top:1px solid var(--vp-c-divider);padding:12px 12px 0}.VPMenuGroup[data-v-a6b0397c]:first-child{margin-top:0;border-top:0;padding-top:0}.VPMenuGroup+.VPMenuGroup[data-v-a6b0397c]{margin-top:12px;border-top:1px solid var(--vp-c-divider)}.title[data-v-a6b0397c]{padding:0 12px;line-height:32px;font-size:14px;font-weight:600;color:var(--vp-c-text-2);white-space:nowrap;transition:color .25s}.VPMenu[data-v-e42ed9b3]{border-radius:12px;padding:12px;min-width:128px;border:1px solid var(--vp-c-divider);background-color:var(--vp-c-bg-elv);box-shadow:var(--vp-shadow-3);transition:background-color .5s;max-height:calc(100vh - var(--vp-nav-height));overflow-y:auto}.VPMenu[data-v-e42ed9b3] .group{margin:0 -12px;padding:0 12px 12px}.VPMenu[data-v-e42ed9b3] .group+.group{border-top:1px solid var(--vp-c-divider);padding:11px 12px 12px}.VPMenu[data-v-e42ed9b3] .group:last-child{padding-bottom:0}.VPMenu[data-v-e42ed9b3] .group+.item{border-top:1px solid 
var(--vp-c-divider);padding:11px 16px 0}.VPMenu[data-v-e42ed9b3] .item{padding:0 16px;white-space:nowrap}.VPMenu[data-v-e42ed9b3] .label{flex-grow:1;line-height:28px;font-size:12px;font-weight:500;color:var(--vp-c-text-2);transition:color .5s}.VPMenu[data-v-e42ed9b3] .action{padding-left:24px}.VPFlyout[data-v-aa8de344]{position:relative}.VPFlyout[data-v-aa8de344]:hover{color:var(--vp-c-brand-1);transition:color .25s}.VPFlyout:hover .text[data-v-aa8de344]{color:var(--vp-c-text-2)}.VPFlyout:hover .icon[data-v-aa8de344]{fill:var(--vp-c-text-2)}.VPFlyout.active .text[data-v-aa8de344]{color:var(--vp-c-brand-1)}.VPFlyout.active:hover .text[data-v-aa8de344]{color:var(--vp-c-brand-2)}.VPFlyout:hover .menu[data-v-aa8de344],.button[aria-expanded=true]+.menu[data-v-aa8de344]{opacity:1;visibility:visible;transform:translateY(0)}.button[aria-expanded=false]+.menu[data-v-aa8de344]{opacity:0;visibility:hidden;transform:translateY(0)}.button[data-v-aa8de344]{display:flex;align-items:center;padding:0 12px;height:var(--vp-nav-height);color:var(--vp-c-text-1);transition:color .5s}.text[data-v-aa8de344]{display:flex;align-items:center;line-height:var(--vp-nav-height);font-size:14px;font-weight:500;color:var(--vp-c-text-1);transition:color .25s}.option-icon[data-v-aa8de344]{margin-right:0;width:16px;height:16px;fill:currentColor}.text-icon[data-v-aa8de344]{margin-left:4px;width:14px;height:14px;fill:currentColor}.icon[data-v-aa8de344]{width:20px;height:20px;fill:currentColor;transition:fill .25s}.menu[data-v-aa8de344]{position:absolute;top:calc(var(--vp-nav-height) / 2 + 20px);right:0;opacity:0;visibility:hidden;transition:opacity .25s,visibility .25s,transform .25s}.VPSocialLink[data-v-16cf740a]{display:flex;justify-content:center;align-items:center;width:36px;height:36px;color:var(--vp-c-text-2);transition:color .5s}.VPSocialLink[data-v-16cf740a]:hover{color:var(--vp-c-text-1);transition:color .25s}.VPSocialLink[data-v-16cf740a]>svg{width:20px;height:20px;fill:currentColor}.VPSocialLinks[data-v-e71e869c]{display:flex;justify-content:center}.VPNavBarExtra[data-v-c8c2ae4b]{display:none;margin-right:-12px}@media (min-width: 768px){.VPNavBarExtra[data-v-c8c2ae4b]{display:block}}@media (min-width: 1280px){.VPNavBarExtra[data-v-c8c2ae4b]{display:none}}.trans-title[data-v-c8c2ae4b]{padding:0 24px 0 12px;line-height:32px;font-size:14px;font-weight:700;color:var(--vp-c-text-1)}.item.appearance[data-v-c8c2ae4b],.item.social-links[data-v-c8c2ae4b]{display:flex;align-items:center;padding:0 12px}.item.appearance[data-v-c8c2ae4b]{min-width:176px}.appearance-action[data-v-c8c2ae4b]{margin-right:-2px}.social-links-list[data-v-c8c2ae4b]{margin:-4px -8px}.VPNavBarHamburger[data-v-6bee1efd]{display:flex;justify-content:center;align-items:center;width:48px;height:var(--vp-nav-height)}@media (min-width: 768px){.VPNavBarHamburger[data-v-6bee1efd]{display:none}}.container[data-v-6bee1efd]{position:relative;width:16px;height:14px;overflow:hidden}.VPNavBarHamburger:hover .top[data-v-6bee1efd]{top:0;left:0;transform:translate(4px)}.VPNavBarHamburger:hover .middle[data-v-6bee1efd]{top:6px;left:0;transform:translate(0)}.VPNavBarHamburger:hover .bottom[data-v-6bee1efd]{top:12px;left:0;transform:translate(8px)}.VPNavBarHamburger.active .top[data-v-6bee1efd]{top:6px;transform:translate(0) rotate(225deg)}.VPNavBarHamburger.active .middle[data-v-6bee1efd]{top:6px;transform:translate(16px)}.VPNavBarHamburger.active .bottom[data-v-6bee1efd]{top:6px;transform:translate(0) rotate(135deg)}.VPNavBarHamburger.active:hover 
.top[data-v-6bee1efd],.VPNavBarHamburger.active:hover .middle[data-v-6bee1efd],.VPNavBarHamburger.active:hover .bottom[data-v-6bee1efd]{background-color:var(--vp-c-text-2);transition:top .25s,background-color .25s,transform .25s}.top[data-v-6bee1efd],.middle[data-v-6bee1efd],.bottom[data-v-6bee1efd]{position:absolute;width:16px;height:2px;background-color:var(--vp-c-text-1);transition:top .25s,background-color .5s,transform .25s}.top[data-v-6bee1efd]{top:0;left:0;transform:translate(0)}.middle[data-v-6bee1efd]{top:6px;left:0;transform:translate(8px)}.bottom[data-v-6bee1efd]{top:12px;left:0;transform:translate(4px)}.VPNavBarMenuLink[data-v-cb318fec]{display:flex;align-items:center;padding:0 12px;line-height:var(--vp-nav-height);font-size:14px;font-weight:500;color:var(--vp-c-text-1);transition:color .25s}.VPNavBarMenuLink.active[data-v-cb318fec],.VPNavBarMenuLink[data-v-cb318fec]:hover{color:var(--vp-c-brand-1)}.VPNavBarMenu[data-v-f732b5d0]{display:none}@media (min-width: 768px){.VPNavBarMenu[data-v-f732b5d0]{display:flex}}/*! @docsearch/css 3.5.2 | MIT License | © Algolia, Inc. and contributors | https://docsearch.algolia.com */:root{--docsearch-primary-color:#5468ff;--docsearch-text-color:#1c1e21;--docsearch-spacing:12px;--docsearch-icon-stroke-width:1.4;--docsearch-highlight-color:var(--docsearch-primary-color);--docsearch-muted-color:#969faf;--docsearch-container-background:rgba(101,108,133,.8);--docsearch-logo-color:#5468ff;--docsearch-modal-width:560px;--docsearch-modal-height:600px;--docsearch-modal-background:#f5f6f7;--docsearch-modal-shadow:inset 1px 1px 0 0 hsla(0,0%,100%,.5),0 3px 8px 0 #555a64;--docsearch-searchbox-height:56px;--docsearch-searchbox-background:#ebedf0;--docsearch-searchbox-focus-background:#fff;--docsearch-searchbox-shadow:inset 0 0 0 2px var(--docsearch-primary-color);--docsearch-hit-height:56px;--docsearch-hit-color:#444950;--docsearch-hit-active-color:#fff;--docsearch-hit-background:#fff;--docsearch-hit-shadow:0 1px 3px 0 #d4d9e1;--docsearch-key-gradient:linear-gradient(-225deg,#d5dbe4,#f8f8f8);--docsearch-key-shadow:inset 0 -2px 0 0 #cdcde6,inset 0 0 1px 1px #fff,0 1px 2px 1px rgba(30,35,90,.4);--docsearch-footer-height:44px;--docsearch-footer-background:#fff;--docsearch-footer-shadow:0 -1px 0 0 #e0e3e8,0 -3px 6px 0 rgba(69,98,155,.12)}html[data-theme=dark]{--docsearch-text-color:#f5f6f7;--docsearch-container-background:rgba(9,10,17,.8);--docsearch-modal-background:#15172a;--docsearch-modal-shadow:inset 1px 1px 0 0 #2c2e40,0 3px 8px 0 #000309;--docsearch-searchbox-background:#090a11;--docsearch-searchbox-focus-background:#000;--docsearch-hit-color:#bec3c9;--docsearch-hit-shadow:none;--docsearch-hit-background:#090a11;--docsearch-key-gradient:linear-gradient(-26.5deg,#565872,#31355b);--docsearch-key-shadow:inset 0 -2px 0 0 #282d55,inset 0 0 1px 1px #51577d,0 2px 2px 0 rgba(3,4,9,.3);--docsearch-footer-background:#1e2136;--docsearch-footer-shadow:inset 0 1px 0 0 rgba(73,76,106,.5),0 -4px 8px 0 rgba(0,0,0,.2);--docsearch-logo-color:#fff;--docsearch-muted-color:#7f8497}.DocSearch-Button{align-items:center;background:var(--docsearch-searchbox-background);border:0;border-radius:40px;color:var(--docsearch-muted-color);cursor:pointer;display:flex;font-weight:500;height:36px;justify-content:space-between;margin:0 0 0 16px;padding:0 
8px;-webkit-user-select:none;user-select:none}.DocSearch-Button:active,.DocSearch-Button:focus,.DocSearch-Button:hover{background:var(--docsearch-searchbox-focus-background);box-shadow:var(--docsearch-searchbox-shadow);color:var(--docsearch-text-color);outline:none}.DocSearch-Button-Container{align-items:center;display:flex}.DocSearch-Search-Icon{stroke-width:1.6}.DocSearch-Button .DocSearch-Search-Icon{color:var(--docsearch-text-color)}.DocSearch-Button-Placeholder{font-size:1rem;padding:0 12px 0 6px}.DocSearch-Button-Keys{display:flex;min-width:calc(40px + .8em)}.DocSearch-Button-Key{align-items:center;background:var(--docsearch-key-gradient);border-radius:3px;box-shadow:var(--docsearch-key-shadow);color:var(--docsearch-muted-color);display:flex;height:18px;justify-content:center;margin-right:.4em;position:relative;padding:0 0 2px;border:0;top:-1px;width:20px}@media (max-width:768px){.DocSearch-Button-Keys,.DocSearch-Button-Placeholder{display:none}}.DocSearch--active{overflow:hidden!important}.DocSearch-Container,.DocSearch-Container *{box-sizing:border-box}.DocSearch-Container{background-color:var(--docsearch-container-background);height:100vh;left:0;position:fixed;top:0;width:100vw;z-index:200}.DocSearch-Container a{text-decoration:none}.DocSearch-Link{-webkit-appearance:none;-moz-appearance:none;appearance:none;background:none;border:0;color:var(--docsearch-highlight-color);cursor:pointer;font:inherit;margin:0;padding:0}.DocSearch-Modal{background:var(--docsearch-modal-background);border-radius:6px;box-shadow:var(--docsearch-modal-shadow);flex-direction:column;margin:60px auto auto;max-width:var(--docsearch-modal-width);position:relative}.DocSearch-SearchBar{display:flex;padding:var(--docsearch-spacing) var(--docsearch-spacing) 0}.DocSearch-Form{align-items:center;background:var(--docsearch-searchbox-focus-background);border-radius:4px;box-shadow:var(--docsearch-searchbox-shadow);display:flex;height:var(--docsearch-searchbox-height);margin:0;padding:0 var(--docsearch-spacing);position:relative;width:100%}.DocSearch-Input{-webkit-appearance:none;-moz-appearance:none;appearance:none;background:transparent;border:0;color:var(--docsearch-text-color);flex:1;font:inherit;font-size:1.2em;height:100%;outline:none;padding:0 0 0 8px;width:80%}.DocSearch-Input::placeholder{color:var(--docsearch-muted-color);opacity:1}.DocSearch-Input::-webkit-search-cancel-button,.DocSearch-Input::-webkit-search-decoration,.DocSearch-Input::-webkit-search-results-button,.DocSearch-Input::-webkit-search-results-decoration{display:none}.DocSearch-LoadingIndicator,.DocSearch-MagnifierLabel,.DocSearch-Reset{margin:0;padding:0}.DocSearch-MagnifierLabel,.DocSearch-Reset{align-items:center;color:var(--docsearch-highlight-color);display:flex;justify-content:center}.DocSearch-Container--Stalled .DocSearch-MagnifierLabel,.DocSearch-LoadingIndicator{display:none}.DocSearch-Container--Stalled .DocSearch-LoadingIndicator{align-items:center;color:var(--docsearch-highlight-color);display:flex;justify-content:center}@media screen and (prefers-reduced-motion:reduce){.DocSearch-Reset{animation:none;-webkit-appearance:none;-moz-appearance:none;appearance:none;background:none;border:0;border-radius:50%;color:var(--docsearch-icon-color);cursor:pointer;right:0;stroke-width:var(--docsearch-icon-stroke-width)}}.DocSearch-Reset{animation:fade-in .1s ease-in 
forwards;-webkit-appearance:none;-moz-appearance:none;appearance:none;background:none;border:0;border-radius:50%;color:var(--docsearch-icon-color);cursor:pointer;padding:2px;right:0;stroke-width:var(--docsearch-icon-stroke-width)}.DocSearch-Reset[hidden]{display:none}.DocSearch-Reset:hover{color:var(--docsearch-highlight-color)}.DocSearch-LoadingIndicator svg,.DocSearch-MagnifierLabel svg{height:24px;width:24px}.DocSearch-Cancel{display:none}.DocSearch-Dropdown{max-height:calc(var(--docsearch-modal-height) - var(--docsearch-searchbox-height) - var(--docsearch-spacing) - var(--docsearch-footer-height));min-height:var(--docsearch-spacing);overflow-y:auto;overflow-y:overlay;padding:0 var(--docsearch-spacing);scrollbar-color:var(--docsearch-muted-color) var(--docsearch-modal-background);scrollbar-width:thin}.DocSearch-Dropdown::-webkit-scrollbar{width:12px}.DocSearch-Dropdown::-webkit-scrollbar-track{background:transparent}.DocSearch-Dropdown::-webkit-scrollbar-thumb{background-color:var(--docsearch-muted-color);border:3px solid var(--docsearch-modal-background);border-radius:20px}.DocSearch-Dropdown ul{list-style:none;margin:0;padding:0}.DocSearch-Label{font-size:.75em;line-height:1.6em}.DocSearch-Help,.DocSearch-Label{color:var(--docsearch-muted-color)}.DocSearch-Help{font-size:.9em;margin:0;-webkit-user-select:none;user-select:none}.DocSearch-Title{font-size:1.2em}.DocSearch-Logo a{display:flex}.DocSearch-Logo svg{color:var(--docsearch-logo-color);margin-left:8px}.DocSearch-Hits:last-of-type{margin-bottom:24px}.DocSearch-Hits mark{background:none;color:var(--docsearch-highlight-color)}.DocSearch-HitsFooter{color:var(--docsearch-muted-color);display:flex;font-size:.85em;justify-content:center;margin-bottom:var(--docsearch-spacing);padding:var(--docsearch-spacing)}.DocSearch-HitsFooter a{border-bottom:1px solid;color:inherit}.DocSearch-Hit{border-radius:4px;display:flex;padding-bottom:4px;position:relative}@media screen and (prefers-reduced-motion:reduce){.DocSearch-Hit--deleting{transition:none}}.DocSearch-Hit--deleting{opacity:0;transition:all .25s linear}@media screen and (prefers-reduced-motion:reduce){.DocSearch-Hit--favoriting{transition:none}}.DocSearch-Hit--favoriting{transform:scale(0);transform-origin:top center;transition:all .25s linear;transition-delay:.25s}.DocSearch-Hit a{background:var(--docsearch-hit-background);border-radius:4px;box-shadow:var(--docsearch-hit-shadow);display:block;padding-left:var(--docsearch-spacing);width:100%}.DocSearch-Hit-source{background:var(--docsearch-modal-background);color:var(--docsearch-highlight-color);font-size:.85em;font-weight:600;line-height:32px;margin:0 -4px;padding:8px 4px 0;position:sticky;top:0;z-index:10}.DocSearch-Hit-Tree{color:var(--docsearch-muted-color);height:var(--docsearch-hit-height);opacity:.5;stroke-width:var(--docsearch-icon-stroke-width);width:24px}.DocSearch-Hit[aria-selected=true] a{background-color:var(--docsearch-highlight-color)}.DocSearch-Hit[aria-selected=true] mark{text-decoration:underline}.DocSearch-Hit-Container{align-items:center;color:var(--docsearch-hit-color);display:flex;flex-direction:row;height:var(--docsearch-hit-height);padding:0 var(--docsearch-spacing) 0 0}.DocSearch-Hit-icon{height:20px;width:20px}.DocSearch-Hit-action,.DocSearch-Hit-icon{color:var(--docsearch-muted-color);stroke-width:var(--docsearch-icon-stroke-width)}.DocSearch-Hit-action{align-items:center;display:flex;height:22px;width:22px}.DocSearch-Hit-action 
svg{display:block;height:18px;width:18px}.DocSearch-Hit-action+.DocSearch-Hit-action{margin-left:6px}.DocSearch-Hit-action-button{-webkit-appearance:none;-moz-appearance:none;appearance:none;background:none;border:0;border-radius:50%;color:inherit;cursor:pointer;padding:2px}svg.DocSearch-Hit-Select-Icon{display:none}.DocSearch-Hit[aria-selected=true] .DocSearch-Hit-Select-Icon{display:block}.DocSearch-Hit-action-button:focus,.DocSearch-Hit-action-button:hover{background:rgba(0,0,0,.2);transition:background-color .1s ease-in}@media screen and (prefers-reduced-motion:reduce){.DocSearch-Hit-action-button:focus,.DocSearch-Hit-action-button:hover{transition:none}}.DocSearch-Hit-action-button:focus path,.DocSearch-Hit-action-button:hover path{fill:#fff}.DocSearch-Hit-content-wrapper{display:flex;flex:1 1 auto;flex-direction:column;font-weight:500;justify-content:center;line-height:1.2em;margin:0 8px;overflow-x:hidden;position:relative;text-overflow:ellipsis;white-space:nowrap;width:80%}.DocSearch-Hit-title{font-size:.9em}.DocSearch-Hit-path{color:var(--docsearch-muted-color);font-size:.75em}.DocSearch-Hit[aria-selected=true] .DocSearch-Hit-action,.DocSearch-Hit[aria-selected=true] .DocSearch-Hit-icon,.DocSearch-Hit[aria-selected=true] .DocSearch-Hit-path,.DocSearch-Hit[aria-selected=true] .DocSearch-Hit-text,.DocSearch-Hit[aria-selected=true] .DocSearch-Hit-title,.DocSearch-Hit[aria-selected=true] .DocSearch-Hit-Tree,.DocSearch-Hit[aria-selected=true] mark{color:var(--docsearch-hit-active-color)!important}@media screen and (prefers-reduced-motion:reduce){.DocSearch-Hit-action-button:focus,.DocSearch-Hit-action-button:hover{background:rgba(0,0,0,.2);transition:none}}.DocSearch-ErrorScreen,.DocSearch-NoResults,.DocSearch-StartScreen{font-size:.9em;margin:0 auto;padding:36px 0;text-align:center;width:80%}.DocSearch-Screen-Icon{color:var(--docsearch-muted-color);padding-bottom:12px}.DocSearch-NoResults-Prefill-List{display:inline-block;padding-bottom:24px;text-align:left}.DocSearch-NoResults-Prefill-List ul{display:inline-block;padding:8px 0 0}.DocSearch-NoResults-Prefill-List li{list-style-position:inside;list-style-type:"» "}.DocSearch-Prefill{-webkit-appearance:none;-moz-appearance:none;appearance:none;background:none;border:0;border-radius:1em;color:var(--docsearch-highlight-color);cursor:pointer;display:inline-block;font-size:1em;font-weight:700;padding:0}.DocSearch-Prefill:focus,.DocSearch-Prefill:hover{outline:none;text-decoration:underline}.DocSearch-Footer{align-items:center;background:var(--docsearch-footer-background);border-radius:0 0 8px 8px;box-shadow:var(--docsearch-footer-shadow);display:flex;flex-direction:row-reverse;flex-shrink:0;height:var(--docsearch-footer-height);justify-content:space-between;padding:0 var(--docsearch-spacing);position:relative;-webkit-user-select:none;user-select:none;width:100%;z-index:300}.DocSearch-Commands{color:var(--docsearch-muted-color);display:flex;list-style:none;margin:0;padding:0}.DocSearch-Commands li{align-items:center;display:flex}.DocSearch-Commands li:not(:last-of-type){margin-right:.8em}.DocSearch-Commands-Key{align-items:center;background:var(--docsearch-key-gradient);border-radius:2px;box-shadow:var(--docsearch-key-shadow);display:flex;height:18px;justify-content:center;margin-right:.4em;padding:0 0 1px;color:var(--docsearch-muted-color);border:0;width:20px}@media 
(max-width:768px){:root{--docsearch-spacing:10px;--docsearch-footer-height:40px}.DocSearch-Dropdown{height:100%}.DocSearch-Container{height:100vh;height:-webkit-fill-available;height:calc(var(--docsearch-vh, 1vh)*100);position:absolute}.DocSearch-Footer{border-radius:0;bottom:0;position:absolute}.DocSearch-Hit-content-wrapper{display:flex;position:relative;width:80%}.DocSearch-Modal{border-radius:0;box-shadow:none;height:100vh;height:-webkit-fill-available;height:calc(var(--docsearch-vh, 1vh)*100);margin:0;max-width:100%;width:100%}.DocSearch-Dropdown{max-height:calc(var(--docsearch-vh, 1vh)*100 - var(--docsearch-searchbox-height) - var(--docsearch-spacing) - var(--docsearch-footer-height))}.DocSearch-Cancel{-webkit-appearance:none;-moz-appearance:none;appearance:none;background:none;border:0;color:var(--docsearch-highlight-color);cursor:pointer;display:inline-block;flex:none;font:inherit;font-size:1em;font-weight:500;margin-left:var(--docsearch-spacing);outline:none;overflow:hidden;padding:0;-webkit-user-select:none;user-select:none;white-space:nowrap}.DocSearch-Commands,.DocSearch-Hit-Tree{display:none}}@keyframes fade-in{0%{opacity:0}to{opacity:1}}[class*=DocSearch]{--docsearch-primary-color: var(--vp-c-brand-1);--docsearch-highlight-color: var(--docsearch-primary-color);--docsearch-text-color: var(--vp-c-text-1);--docsearch-muted-color: var(--vp-c-text-2);--docsearch-searchbox-shadow: none;--docsearch-searchbox-background: transparent;--docsearch-searchbox-focus-background: transparent;--docsearch-key-gradient: transparent;--docsearch-key-shadow: none;--docsearch-modal-background: var(--vp-c-bg-soft);--docsearch-footer-background: var(--vp-c-bg)}.dark [class*=DocSearch]{--docsearch-modal-shadow: none;--docsearch-footer-shadow: none;--docsearch-logo-color: var(--vp-c-text-2);--docsearch-hit-background: var(--vp-c-default-soft);--docsearch-hit-color: var(--vp-c-text-2);--docsearch-hit-shadow: none}.DocSearch-Button{display:flex;justify-content:center;align-items:center;margin:0;padding:0;width:48px;height:55px;background:transparent;transition:border-color .25s}.DocSearch-Button:hover{background:transparent}.DocSearch-Button:focus{outline:1px dotted;outline:5px auto -webkit-focus-ring-color}.DocSearch-Button:focus:not(:focus-visible){outline:none!important}@media (min-width: 768px){.DocSearch-Button{justify-content:flex-start;border:1px solid transparent;border-radius:8px;padding:0 10px 0 12px;width:100%;height:40px;background-color:var(--vp-c-bg-alt)}.DocSearch-Button:hover{border-color:var(--vp-c-brand-1);background:var(--vp-c-bg-alt)}}.DocSearch-Button .DocSearch-Button-Container{display:flex;align-items:center}.DocSearch-Button .DocSearch-Search-Icon{position:relative;width:16px;height:16px;color:var(--vp-c-text-1);fill:currentColor;transition:color .5s}.DocSearch-Button:hover .DocSearch-Search-Icon{color:var(--vp-c-text-1)}@media (min-width: 768px){.DocSearch-Button .DocSearch-Search-Icon{top:1px;margin-right:8px;width:14px;height:14px;color:var(--vp-c-text-2)}}.DocSearch-Button .DocSearch-Button-Placeholder{display:none;margin-top:2px;padding:0 16px 0 0;font-size:13px;font-weight:500;color:var(--vp-c-text-2);transition:color .5s}.DocSearch-Button:hover .DocSearch-Button-Placeholder{color:var(--vp-c-text-1)}@media (min-width: 768px){.DocSearch-Button .DocSearch-Button-Placeholder{display:inline-block}}.DocSearch-Button .DocSearch-Button-Keys{direction:ltr;display:none;min-width:auto}@media (min-width: 768px){.DocSearch-Button 
.DocSearch-Button-Keys{display:flex;align-items:center}}.DocSearch-Button .DocSearch-Button-Key{display:block;margin:2px 0 0;border:1px solid var(--vp-c-divider);border-right:none;border-radius:4px 0 0 4px;padding-left:6px;min-width:0;width:auto;height:22px;line-height:22px;font-family:var(--vp-font-family-base);font-size:12px;font-weight:500;transition:color .5s,border-color .5s}.DocSearch-Button .DocSearch-Button-Key+.DocSearch-Button-Key{border-right:1px solid var(--vp-c-divider);border-left:none;border-radius:0 4px 4px 0;padding-left:2px;padding-right:6px}.DocSearch-Button .DocSearch-Button-Key:first-child{font-size:0!important}.DocSearch-Button .DocSearch-Button-Key:first-child:after{content:"Ctrl";font-size:12px;letter-spacing:normal;color:var(--docsearch-muted-color)}.mac .DocSearch-Button .DocSearch-Button-Key:first-child:after{content:"⌘"}.DocSearch-Button .DocSearch-Button-Key:first-child>*{display:none}.VPNavBarSearch{display:flex;align-items:center}@media (min-width: 768px){.VPNavBarSearch{flex-grow:1;padding-left:24px}}@media (min-width: 960px){.VPNavBarSearch{padding-left:32px}}.dark .DocSearch-Footer{border-top:1px solid var(--vp-c-divider)}.DocSearch-Form{border:1px solid var(--vp-c-brand-1);background-color:var(--vp-c-white)}.dark .DocSearch-Form{background-color:var(--vp-c-default-soft)}.DocSearch-Screen-Icon>svg{margin:auto}.VPNavBarSocialLinks[data-v-ef6192dc]{display:none}@media (min-width: 1280px){.VPNavBarSocialLinks[data-v-ef6192dc]{display:flex;align-items:center}}.title[data-v-2973dbb4]{display:flex;align-items:center;border-bottom:1px solid transparent;width:100%;height:var(--vp-nav-height);font-size:16px;font-weight:600;color:var(--vp-c-text-1);transition:opacity .25s}@media (min-width: 960px){.title[data-v-2973dbb4]{flex-shrink:0}.VPNavBarTitle.has-sidebar .title[data-v-2973dbb4]{border-bottom-color:var(--vp-c-divider)}}[data-v-2973dbb4] .logo{margin-right:8px;height:var(--vp-nav-logo-height)}.VPNavBarTranslations[data-v-ff4524ae]{display:none}@media (min-width: 1280px){.VPNavBarTranslations[data-v-ff4524ae]{display:flex;align-items:center}}.title[data-v-ff4524ae]{padding:0 24px 0 12px;line-height:32px;font-size:14px;font-weight:700;color:var(--vp-c-text-1)}.VPNavBar[data-v-f1abbc6e]{position:relative;border-bottom:1px solid transparent;padding:0 8px 0 24px;height:var(--vp-nav-height);pointer-events:none;white-space:nowrap}@media (min-width: 768px){.VPNavBar[data-v-f1abbc6e]{padding:0 32px}}@media (min-width: 960px){.VPNavBar.has-sidebar[data-v-f1abbc6e]{padding:0}.VPNavBar[data-v-f1abbc6e]:not(.has-sidebar):not(.top){border-bottom-color:var(--vp-c-gutter);background-color:var(--vp-nav-bg-color)}}.container[data-v-f1abbc6e]{display:flex;justify-content:space-between;margin:0 auto;max-width:calc(var(--vp-layout-max-width) - 64px);height:var(--vp-nav-height);pointer-events:none}.container>.title[data-v-f1abbc6e],.container>.content[data-v-f1abbc6e]{pointer-events:none}.container[data-v-f1abbc6e] *{pointer-events:auto}@media (min-width: 960px){.VPNavBar.has-sidebar .container[data-v-f1abbc6e]{max-width:100%}}.title[data-v-f1abbc6e]{flex-shrink:0;height:calc(var(--vp-nav-height) - 1px);transition:background-color .5s}@media (min-width: 960px){.VPNavBar.has-sidebar .title[data-v-f1abbc6e]{position:absolute;top:0;left:0;z-index:2;padding:0 32px;width:var(--vp-sidebar-width);height:var(--vp-nav-height);background-color:transparent}}@media (min-width: 1440px){.VPNavBar.has-sidebar .title[data-v-f1abbc6e]{padding-left:max(32px,calc((100% - (var(--vp-layout-max-width) - 
64px)) / 2));width:calc((100% - (var(--vp-layout-max-width) - 64px)) / 2 + var(--vp-sidebar-width) - 32px)}}.content[data-v-f1abbc6e]{flex-grow:1}@media (min-width: 960px){.VPNavBar.has-sidebar .content[data-v-f1abbc6e]{position:relative;z-index:1;padding-right:32px;padding-left:var(--vp-sidebar-width)}}@media (min-width: 1440px){.VPNavBar.has-sidebar .content[data-v-f1abbc6e]{padding-right:calc((100vw - var(--vp-layout-max-width)) / 2 + 32px);padding-left:calc((100vw - var(--vp-layout-max-width)) / 2 + var(--vp-sidebar-width))}}.content-body[data-v-f1abbc6e]{display:flex;justify-content:flex-end;align-items:center;height:calc(var(--vp-nav-height) - 1px);transition:background-color .5s}@media (min-width: 960px){.VPNavBar:not(.top) .content-body[data-v-f1abbc6e]{position:relative;background-color:var(--vp-nav-bg-color)}}@media (max-width: 767px){.content-body[data-v-f1abbc6e]{column-gap:.5rem}}.menu+.translations[data-v-f1abbc6e]:before,.menu+.appearance[data-v-f1abbc6e]:before,.menu+.social-links[data-v-f1abbc6e]:before,.translations+.appearance[data-v-f1abbc6e]:before,.appearance+.social-links[data-v-f1abbc6e]:before{margin-right:8px;margin-left:8px;width:1px;height:24px;background-color:var(--vp-c-divider);content:""}.menu+.appearance[data-v-f1abbc6e]:before,.translations+.appearance[data-v-f1abbc6e]:before{margin-right:16px}.appearance+.social-links[data-v-f1abbc6e]:before{margin-left:16px}.social-links[data-v-f1abbc6e]{margin-right:-8px}@media (min-width: 960px){.VPNavBar.has-sidebar .curtain[data-v-f1abbc6e]{position:absolute;right:0;bottom:-31px;width:calc(100% - var(--vp-sidebar-width));height:32px}.VPNavBar.has-sidebar .curtain[data-v-f1abbc6e]:before{display:block;width:100%;height:32px;background:linear-gradient(var(--vp-c-bg),transparent 70%);content:""}}@media (min-width: 1440px){.VPNavBar.has-sidebar .curtain[data-v-f1abbc6e]{width:calc(100% - ((100vw - var(--vp-layout-max-width)) / 2 + var(--vp-sidebar-width)))}}.VPNavScreenAppearance[data-v-0dc5cf49]{display:flex;justify-content:space-between;align-items:center;border-radius:8px;padding:12px 14px 12px 16px;background-color:var(--vp-c-bg-soft)}.text[data-v-0dc5cf49]{line-height:24px;font-size:12px;font-weight:500;color:var(--vp-c-text-2)}.VPNavScreenMenuLink[data-v-fe523e3d]{display:block;border-bottom:1px solid var(--vp-c-divider);padding:12px 0 11px;line-height:24px;font-size:14px;font-weight:500;color:var(--vp-c-text-1);transition:border-color .25s,color .25s}.VPNavScreenMenuLink[data-v-fe523e3d]:hover{color:var(--vp-c-brand-1)}.VPNavScreenMenuGroupLink[data-v-aea78dd1]{display:block;margin-left:12px;line-height:32px;font-size:14px;font-weight:400;color:var(--vp-c-text-1);transition:color .25s}.VPNavScreenMenuGroupLink[data-v-aea78dd1]:hover{color:var(--vp-c-brand-1)}.VPNavScreenMenuGroupSection[data-v-f60dbfa7]{display:block}.title[data-v-f60dbfa7]{line-height:32px;font-size:13px;font-weight:700;color:var(--vp-c-text-2);transition:color .25s}.VPNavScreenMenuGroup[data-v-c2c554ed]{border-bottom:1px solid var(--vp-c-divider);height:48px;overflow:hidden;transition:border-color .5s}.VPNavScreenMenuGroup .items[data-v-c2c554ed]{visibility:hidden}.VPNavScreenMenuGroup.open .items[data-v-c2c554ed]{visibility:visible}.VPNavScreenMenuGroup.open[data-v-c2c554ed]{padding-bottom:10px;height:auto}.VPNavScreenMenuGroup.open .button[data-v-c2c554ed]{padding-bottom:6px;color:var(--vp-c-brand-1)}.VPNavScreenMenuGroup.open 
.button-icon[data-v-c2c554ed]{transform:rotate(45deg)}.button[data-v-c2c554ed]{display:flex;justify-content:space-between;align-items:center;padding:12px 4px 11px 0;width:100%;line-height:24px;font-size:14px;font-weight:500;color:var(--vp-c-text-1);transition:color .25s}.button[data-v-c2c554ed]:hover{color:var(--vp-c-brand-1)}.button-icon[data-v-c2c554ed]{width:14px;height:14px;fill:var(--vp-c-text-2);transition:fill .5s,transform .25s}.group[data-v-c2c554ed]:first-child{padding-top:0}.group+.group[data-v-c2c554ed],.group+.item[data-v-c2c554ed]{padding-top:4px}.VPNavScreenTranslations[data-v-41505286]{height:24px;overflow:hidden}.VPNavScreenTranslations.open[data-v-41505286]{height:auto}.title[data-v-41505286]{display:flex;align-items:center;font-size:14px;font-weight:500;color:var(--vp-c-text-1)}.icon[data-v-41505286]{width:16px;height:16px;fill:currentColor}.icon.lang[data-v-41505286]{margin-right:8px}.icon.chevron[data-v-41505286]{margin-left:4px}.list[data-v-41505286]{padding:4px 0 0 24px}.link[data-v-41505286]{line-height:32px;font-size:13px;color:var(--vp-c-text-1)}.VPNavScreen[data-v-57cce842]{position:fixed;top:calc(var(--vp-nav-height) + var(--vp-layout-top-height, 0px) + 1px);right:0;bottom:0;left:0;padding:0 32px;width:100%;background-color:var(--vp-nav-screen-bg-color);overflow-y:auto;transition:background-color .5s;pointer-events:auto}.VPNavScreen.fade-enter-active[data-v-57cce842],.VPNavScreen.fade-leave-active[data-v-57cce842]{transition:opacity .25s}.VPNavScreen.fade-enter-active .container[data-v-57cce842],.VPNavScreen.fade-leave-active .container[data-v-57cce842]{transition:transform .25s ease}.VPNavScreen.fade-enter-from[data-v-57cce842],.VPNavScreen.fade-leave-to[data-v-57cce842]{opacity:0}.VPNavScreen.fade-enter-from .container[data-v-57cce842],.VPNavScreen.fade-leave-to .container[data-v-57cce842]{transform:translateY(-8px)}@media (min-width: 768px){.VPNavScreen[data-v-57cce842]{display:none}}.container[data-v-57cce842]{margin:0 auto;padding:24px 0 96px;max-width:288px}.menu+.translations[data-v-57cce842],.menu+.appearance[data-v-57cce842],.translations+.appearance[data-v-57cce842]{margin-top:24px}.menu+.social-links[data-v-57cce842]{margin-top:16px}.appearance+.social-links[data-v-57cce842]{margin-top:16px}.VPNav[data-v-7ad780c2]{position:relative;top:var(--vp-layout-top-height, 0px);left:0;z-index:var(--vp-z-index-nav);width:100%;pointer-events:none;transition:background-color .5s}@media (min-width: 960px){.VPNav[data-v-7ad780c2]{position:fixed}}.VPSidebarItem.level-0[data-v-bd01e0d5]{padding-bottom:24px}.VPSidebarItem.collapsed.level-0[data-v-bd01e0d5]{padding-bottom:10px}.item[data-v-bd01e0d5]{position:relative;display:flex;width:100%}.VPSidebarItem.collapsible>.item[data-v-bd01e0d5]{cursor:pointer}.indicator[data-v-bd01e0d5]{position:absolute;top:6px;bottom:6px;left:-17px;width:2px;border-radius:2px;transition:background-color .25s}.VPSidebarItem.level-2.is-active>.item>.indicator[data-v-bd01e0d5],.VPSidebarItem.level-3.is-active>.item>.indicator[data-v-bd01e0d5],.VPSidebarItem.level-4.is-active>.item>.indicator[data-v-bd01e0d5],.VPSidebarItem.level-5.is-active>.item>.indicator[data-v-bd01e0d5]{background-color:var(--vp-c-brand-1)}.link[data-v-bd01e0d5]{display:flex;align-items:center;flex-grow:1}.text[data-v-bd01e0d5]{flex-grow:1;padding:4px 0;line-height:24px;font-size:14px;transition:color .25s}.VPSidebarItem.level-0 .text[data-v-bd01e0d5]{font-weight:700;color:var(--vp-c-text-1)}.VPSidebarItem.level-1 .text[data-v-bd01e0d5],.VPSidebarItem.level-2 
.text[data-v-bd01e0d5],.VPSidebarItem.level-3 .text[data-v-bd01e0d5],.VPSidebarItem.level-4 .text[data-v-bd01e0d5],.VPSidebarItem.level-5 .text[data-v-bd01e0d5]{font-weight:500;color:var(--vp-c-text-2)}.VPSidebarItem.level-0.is-link>.item>.link:hover .text[data-v-bd01e0d5],.VPSidebarItem.level-1.is-link>.item>.link:hover .text[data-v-bd01e0d5],.VPSidebarItem.level-2.is-link>.item>.link:hover .text[data-v-bd01e0d5],.VPSidebarItem.level-3.is-link>.item>.link:hover .text[data-v-bd01e0d5],.VPSidebarItem.level-4.is-link>.item>.link:hover .text[data-v-bd01e0d5],.VPSidebarItem.level-5.is-link>.item>.link:hover .text[data-v-bd01e0d5]{color:var(--vp-c-brand-1)}.VPSidebarItem.level-0.has-active>.item>.text[data-v-bd01e0d5],.VPSidebarItem.level-1.has-active>.item>.text[data-v-bd01e0d5],.VPSidebarItem.level-2.has-active>.item>.text[data-v-bd01e0d5],.VPSidebarItem.level-3.has-active>.item>.text[data-v-bd01e0d5],.VPSidebarItem.level-4.has-active>.item>.text[data-v-bd01e0d5],.VPSidebarItem.level-5.has-active>.item>.text[data-v-bd01e0d5],.VPSidebarItem.level-0.has-active>.item>.link>.text[data-v-bd01e0d5],.VPSidebarItem.level-1.has-active>.item>.link>.text[data-v-bd01e0d5],.VPSidebarItem.level-2.has-active>.item>.link>.text[data-v-bd01e0d5],.VPSidebarItem.level-3.has-active>.item>.link>.text[data-v-bd01e0d5],.VPSidebarItem.level-4.has-active>.item>.link>.text[data-v-bd01e0d5],.VPSidebarItem.level-5.has-active>.item>.link>.text[data-v-bd01e0d5]{color:var(--vp-c-text-1)}.VPSidebarItem.level-0.is-active>.item .link>.text[data-v-bd01e0d5],.VPSidebarItem.level-1.is-active>.item .link>.text[data-v-bd01e0d5],.VPSidebarItem.level-2.is-active>.item .link>.text[data-v-bd01e0d5],.VPSidebarItem.level-3.is-active>.item .link>.text[data-v-bd01e0d5],.VPSidebarItem.level-4.is-active>.item .link>.text[data-v-bd01e0d5],.VPSidebarItem.level-5.is-active>.item .link>.text[data-v-bd01e0d5]{color:var(--vp-c-brand-1)}.caret[data-v-bd01e0d5]{display:flex;justify-content:center;align-items:center;margin-right:-7px;width:32px;height:32px;color:var(--vp-c-text-3);cursor:pointer;transition:color .25s;flex-shrink:0}.item:hover .caret[data-v-bd01e0d5]{color:var(--vp-c-text-2)}.item:hover .caret[data-v-bd01e0d5]:hover{color:var(--vp-c-text-1)}.caret-icon[data-v-bd01e0d5]{width:18px;height:18px;fill:currentColor;transform:rotate(90deg);transition:transform .25s}.VPSidebarItem.collapsed .caret-icon[data-v-bd01e0d5]{transform:rotate(0)}.VPSidebarItem.level-1 .items[data-v-bd01e0d5],.VPSidebarItem.level-2 .items[data-v-bd01e0d5],.VPSidebarItem.level-3 .items[data-v-bd01e0d5],.VPSidebarItem.level-4 .items[data-v-bd01e0d5],.VPSidebarItem.level-5 .items[data-v-bd01e0d5]{border-left:1px solid var(--vp-c-divider);padding-left:16px}.VPSidebarItem.collapsed .items[data-v-bd01e0d5]{display:none}.VPSidebar[data-v-ee2efba5]{position:fixed;top:var(--vp-layout-top-height, 0px);bottom:0;left:0;z-index:var(--vp-z-index-sidebar);padding:32px 32px 96px;width:calc(100vw - 64px);max-width:320px;background-color:var(--vp-sidebar-bg-color);opacity:0;box-shadow:var(--vp-c-shadow-3);overflow-x:hidden;overflow-y:auto;transform:translate(-100%);transition:opacity .5s,transform .25s ease;overscroll-behavior:contain}.VPSidebar.open[data-v-ee2efba5]{opacity:1;visibility:visible;transform:translate(0);transition:opacity .25s,transform .5s cubic-bezier(.19,1,.22,1)}.dark .VPSidebar[data-v-ee2efba5]{box-shadow:var(--vp-shadow-1)}@media (min-width: 
960px){.VPSidebar[data-v-ee2efba5]{z-index:1;padding-top:var(--vp-nav-height);padding-bottom:128px;width:var(--vp-sidebar-width);max-width:100%;background-color:var(--vp-sidebar-bg-color);opacity:1;visibility:visible;box-shadow:none;transform:translate(0)}}@media (min-width: 1440px){.VPSidebar[data-v-ee2efba5]{padding-left:max(32px,calc((100% - (var(--vp-layout-max-width) - 64px)) / 2));width:calc((100% - (var(--vp-layout-max-width) - 64px)) / 2 + var(--vp-sidebar-width) - 32px)}}@media (min-width: 960px){.curtain[data-v-ee2efba5]{position:sticky;top:-64px;left:0;z-index:1;margin-top:calc(var(--vp-nav-height) * -1);margin-right:-32px;margin-left:-32px;height:var(--vp-nav-height);background-color:var(--vp-sidebar-bg-color)}}.nav[data-v-ee2efba5]{outline:0}.group+.group[data-v-ee2efba5]{border-top:1px solid var(--vp-c-divider);padding-top:10px}@media (min-width: 960px){.group[data-v-ee2efba5]{padding-top:10px;width:calc(var(--vp-sidebar-width) - 64px)}}.VPSkipLink[data-v-c8291ffa]{top:8px;left:8px;padding:8px 16px;z-index:999;border-radius:8px;font-size:12px;font-weight:700;text-decoration:none;color:var(--vp-c-brand-1);box-shadow:var(--vp-shadow-3);background-color:var(--vp-c-bg)}.VPSkipLink[data-v-c8291ffa]:focus{height:auto;width:auto;clip:auto;clip-path:none}@media (min-width: 1280px){.VPSkipLink[data-v-c8291ffa]{top:14px;left:16px}}.Layout[data-v-9d8abc1e]{display:flex;flex-direction:column;min-height:100vh}.VPHomeSponsors[data-v-843cc1b2]{border-top:1px solid var(--vp-c-gutter);padding:88px 24px 96px;background-color:var(--vp-c-bg)}.container[data-v-843cc1b2]{margin:0 auto;max-width:1152px}.love[data-v-843cc1b2]{margin:0 auto;width:28px;height:28px;color:var(--vp-c-text-3)}.icon[data-v-843cc1b2]{width:28px;height:28px;fill:currentColor}.message[data-v-843cc1b2]{margin:0 auto;padding-top:10px;max-width:320px;text-align:center;line-height:24px;font-size:16px;font-weight:500;color:var(--vp-c-text-2)}.sponsors[data-v-843cc1b2]{padding-top:32px}.action[data-v-843cc1b2]{padding-top:40px;text-align:center}.VPTeamPage[data-v-b1cfd8dc]{padding-bottom:96px}@media (min-width: 768px){.VPTeamPage[data-v-b1cfd8dc]{padding-bottom:128px}}.VPTeamPageSection+.VPTeamPageSection[data-v-b1cfd8dc-s],.VPTeamMembers+.VPTeamPageSection[data-v-b1cfd8dc-s]{margin-top:64px}.VPTeamMembers+.VPTeamMembers[data-v-b1cfd8dc-s]{margin-top:24px}@media (min-width: 768px){.VPTeamPageTitle+.VPTeamPageSection[data-v-b1cfd8dc-s]{margin-top:16px}.VPTeamPageSection+.VPTeamPageSection[data-v-b1cfd8dc-s],.VPTeamMembers+.VPTeamPageSection[data-v-b1cfd8dc-s]{margin-top:96px}}.VPTeamMembers[data-v-b1cfd8dc-s]{padding:0 24px}@media (min-width: 768px){.VPTeamMembers[data-v-b1cfd8dc-s]{padding:0 48px}}@media (min-width: 960px){.VPTeamMembers[data-v-b1cfd8dc-s]{padding:0 64px}}.VPTeamPageTitle[data-v-46c5e327]{padding:48px 32px;text-align:center}@media (min-width: 768px){.VPTeamPageTitle[data-v-46c5e327]{padding:64px 48px 48px}}@media (min-width: 960px){.VPTeamPageTitle[data-v-46c5e327]{padding:80px 64px 48px}}.title[data-v-46c5e327]{letter-spacing:0;line-height:44px;font-size:36px;font-weight:500}@media (min-width: 768px){.title[data-v-46c5e327]{letter-spacing:-.5px;line-height:56px;font-size:48px}}.lead[data-v-46c5e327]{margin:0 auto;max-width:512px;padding-top:12px;line-height:24px;font-size:16px;font-weight:500;color:var(--vp-c-text-2)}@media (min-width: 768px){.lead[data-v-46c5e327]{max-width:592px;letter-spacing:.15px;line-height:28px;font-size:20px}}.VPTeamPageSection[data-v-3bf2e850]{padding:0 32px}@media (min-width: 
768px){.VPTeamPageSection[data-v-3bf2e850]{padding:0 48px}}@media (min-width: 960px){.VPTeamPageSection[data-v-3bf2e850]{padding:0 64px}}.title[data-v-3bf2e850]{position:relative;margin:0 auto;max-width:1152px;text-align:center;color:var(--vp-c-text-2)}.title-line[data-v-3bf2e850]{position:absolute;top:16px;left:0;width:100%;height:1px;background-color:var(--vp-c-divider)}.title-text[data-v-3bf2e850]{position:relative;display:inline-block;padding:0 24px;letter-spacing:0;line-height:32px;font-size:20px;font-weight:500;background-color:var(--vp-c-bg)}.lead[data-v-3bf2e850]{margin:0 auto;max-width:480px;padding-top:12px;text-align:center;line-height:24px;font-size:16px;font-weight:500;color:var(--vp-c-text-2)}.members[data-v-3bf2e850]{padding-top:40px}.VPTeamMembersItem[data-v-3a0078bd]{display:flex;flex-direction:column;gap:2px;border-radius:12px;width:100%;height:100%;overflow:hidden}.VPTeamMembersItem.small .profile[data-v-3a0078bd]{padding:32px}.VPTeamMembersItem.small .data[data-v-3a0078bd]{padding-top:20px}.VPTeamMembersItem.small .avatar[data-v-3a0078bd]{width:64px;height:64px}.VPTeamMembersItem.small .name[data-v-3a0078bd]{line-height:24px;font-size:16px}.VPTeamMembersItem.small .affiliation[data-v-3a0078bd]{padding-top:4px;line-height:20px;font-size:14px}.VPTeamMembersItem.small .desc[data-v-3a0078bd]{padding-top:12px;line-height:20px;font-size:14px}.VPTeamMembersItem.small .links[data-v-3a0078bd]{margin:0 -16px -20px;padding:10px 0 0}.VPTeamMembersItem.medium .profile[data-v-3a0078bd]{padding:48px 32px}.VPTeamMembersItem.medium .data[data-v-3a0078bd]{padding-top:24px;text-align:center}.VPTeamMembersItem.medium .avatar[data-v-3a0078bd]{width:96px;height:96px}.VPTeamMembersItem.medium .name[data-v-3a0078bd]{letter-spacing:.15px;line-height:28px;font-size:20px}.VPTeamMembersItem.medium .affiliation[data-v-3a0078bd]{padding-top:4px;font-size:16px}.VPTeamMembersItem.medium .desc[data-v-3a0078bd]{padding-top:16px;max-width:288px;font-size:16px}.VPTeamMembersItem.medium .links[data-v-3a0078bd]{margin:0 -16px -12px;padding:16px 12px 0}.profile[data-v-3a0078bd]{flex-grow:1;background-color:var(--vp-c-bg-soft)}.data[data-v-3a0078bd]{text-align:center}.avatar[data-v-3a0078bd]{position:relative;flex-shrink:0;margin:0 auto;border-radius:50%;box-shadow:var(--vp-shadow-3)}.avatar-img[data-v-3a0078bd]{position:absolute;top:0;right:0;bottom:0;left:0;border-radius:50%;object-fit:cover}.name[data-v-3a0078bd]{margin:0;font-weight:600}.affiliation[data-v-3a0078bd]{margin:0;font-weight:500;color:var(--vp-c-text-2)}.org.link[data-v-3a0078bd]{color:var(--vp-c-text-2);transition:color .25s}.org.link[data-v-3a0078bd]:hover{color:var(--vp-c-brand-1)}.desc[data-v-3a0078bd]{margin:0 auto}.desc[data-v-3a0078bd] a{font-weight:500;color:var(--vp-c-brand-1);text-decoration-style:dotted;transition:color .25s}.links[data-v-3a0078bd]{display:flex;justify-content:center;height:56px}.sp-link[data-v-3a0078bd]{display:flex;justify-content:center;align-items:center;text-align:center;padding:16px;font-size:14px;font-weight:500;color:var(--vp-c-sponsor);background-color:var(--vp-c-bg-soft);transition:color .25s,background-color .25s}.sp .sp-link.link[data-v-3a0078bd]:hover,.sp .sp-link.link[data-v-3a0078bd]:focus{outline:none;color:var(--vp-c-white);background-color:var(--vp-c-sponsor)}.sp-icon[data-v-3a0078bd]{margin-right:8px;width:16px;height:16px;fill:currentColor}.VPTeamMembers.small .container[data-v-bf782009]{grid-template-columns:repeat(auto-fit,minmax(224px,1fr))}.VPTeamMembers.small.count-1 
.container[data-v-bf782009]{max-width:276px}.VPTeamMembers.small.count-2 .container[data-v-bf782009]{max-width:576px}.VPTeamMembers.small.count-3 .container[data-v-bf782009]{max-width:876px}.VPTeamMembers.medium .container[data-v-bf782009]{grid-template-columns:repeat(auto-fit,minmax(256px,1fr))}@media (min-width: 375px){.VPTeamMembers.medium .container[data-v-bf782009]{grid-template-columns:repeat(auto-fit,minmax(288px,1fr))}}.VPTeamMembers.medium.count-1 .container[data-v-bf782009]{max-width:368px}.VPTeamMembers.medium.count-2 .container[data-v-bf782009]{max-width:760px}.container[data-v-bf782009]{display:grid;gap:24px;margin:0 auto;max-width:1152px}.post>a[data-v-61c06c99]{text-decoration:none;color:var(--color-text)}h2[data-v-61c06c99],h3[data-v-61c06c99],h4[data-v-61c06c99]{color:var(--color-text)}h1[data-v-61c06c99]{font-size:2.5em!important;line-height:1.2em;font-weight:700}.post[data-v-61c06c99]{margin-top:10pt;margin-bottom:4rem}.posts[data-v-61c06c99]{margin-top:10rem!important}.posts[data-v-61c06c99]:first-child{margin-top:0!important}.post h1 .VPBadge[data-v-61c06c99]{transform:scale(1.2);margin-left:10pt;position:relative;top:7pt}.post .body h1:first-child{display:none}h4{margin-top:50pt}img[data-v-fb660782]{display:inline;height:130pt;position:relative;top:-15pt;margin-right:15pt;transform:translate(0)}h1[data-v-fb660782],h2[data-v-fb660782]{font-size:2.5em;line-height:1.2em;border:none;margin:0;padding:0;font-weight:550}h2[data-v-fb660782]{font-size:1.2em;margin-top:10pt;font-weight:500;color:#565656}html.dark h2[data-v-fb660782]{color:#b7b7b7}.hero[data-v-fb660782]{margin:50pt auto auto;display:flex;width:100%;max-width:650pt;text-align:left;align-items:center;justify-content:center;padding:20pt}.hero h1>img[data-v-fb660782]{display:none}html.dark .btn{background-color:#282828;border:2pt solid rgb(40,40,40)}html.dark .btn:hover{border:2pt solid rgb(60,60,60)}html.dark .btn.primary,html.dark .btn.primary:hover{background-color:#007bff;border:2pt solid #007bff;color:#fff}@media (max-width: 600px){.hero h1{font-size:2em}.hero h2{font-size:1.4em}.hero{margin:70pt 0!important;align-items:flex-start!important;max-width:200pt}.hero button{margin-top:10pt;margin-bottom:0}.hero img{width:40pt;display:none}.hero h1>img{display:inline!important;padding:0;position:relative;height:1em;top:3.5pt;margin:0 -5pt 0 -8pt}.hero{margin-top:20pt!important}}.buttons{margin-top:20pt}.feature[data-v-9d5b0837]{margin:100pt auto auto;display:flex;flex-direction:row;width:100%;max-width:750pt;text-align:left;align-items:center;padding:20pt}.feature>div[data-v-9d5b0837]:first-child{margin-right:20pt;flex:2}.feature code[data-v-9d5b0837]{font-size:1em;padding:-10pt 10pt 10pt;border-radius:8px}.feature code[data-v-9d5b0837]{max-width:60%;font-size:.8em}.feature code pre[data-v-9d5b0837]{margin:0;padding:10pt}.feature code[data-v-9d5b0837]{transform:translate(0);animation:slidefade .5s;animation-fill-mode:forwards}.feature.code[data-v-9d5b0837]{display:flex;margin-top:-20pt}.feature .btn{margin-top:20pt}.feature code span.lang{display:none!important}.feature code pre{margin-top:-15pt!important;line-break:anywhere;overflow-x:scroll;-webkit-overflow-scrolling:touch;-ms-overflow-style:none;padding:15pt}.feature code pre::-webkit-scrollbar{display:none}@keyframes slidefade{0%{opacity:0;transform:translate(20pt)}to{opacity:1;transform:translate(0)}}@keyframes 
slidefadeleft{0%{opacity:0;transform:translate(-20pt)}to{opacity:1;transform:translate(0)}}div:nth-child(2n)>.feature{flex-direction:row-reverse}div:nth-child(2n)>.feature>code{margin-right:30pt;animation:slidefadeleft .2s;animation-fill-mode:forwards}.feature pre.promptdown,.feature pre.promptdown.promptdown-compiled,html.dark .feature pre.promptdown,html.dark .feature pre.promptdown.promptdown-compiled{width:320pt;max-width:calc(50vw - 30pt);overflow-x:scroll;min-height:180pt;position:relative;top:20pt;white-space:pre-wrap;line-break:anywhere;text-indent:10pt!important;line-height:1.5em!important}.feature.left>code{display:none}.feature.middle{position:relative;width:520pt;max-width:100vw}.feature.middle>code{display:none}.feature.middle{text-align:center}.cards{display:flex;flex-direction:row;flex-wrap:wrap;justify-content:center;margin-top:40pt;margin-bottom:40pt}.cards>a{border-radius:5pt;border:1pt solid rgb(192,190,190);margin:5pt 2.5pt 0;padding:30pt 10pt 10pt;display:flex;flex-direction:column;justify-content:flex-end;line-height:1.2;font-weight:700;width:100pt;height:100pt;transition:all .1s;cursor:pointer;text-align:center}.cards img{position:relative;bottom:-5pt}.cards h1{margin-top:0}.cards>a:hover{background-color:#f0f0f0;transform:scale(1.05)}html.dark .cards>a{border:1pt solid rgb(50,50,50)}html.dark .cards>a:hover{background-color:#3232321b}.cards>a img{width:50pt;height:40pt;margin:-10pt auto auto;display:block;padding-bottom:10pt}.cards>a h1{font-weight:700!important;font-size:10pt}.feature.middle pre{text-align:left;padding:10pt}@media (max-width: 700px){.feature{flex-direction:column!important;margin-top:20pt!important;width:calc(100vw - 15pt)!important}.feature>div:first-child{margin-right:0}.feature.code{width:100vw!important}.feature>code{margin-right:0;margin-top:20pt;width:100vw!important;max-width:100vw!important;margin-left:0;border-radius:0;box-shadow:none!important;border:none!important;padding-left:20pt!important;margin-right:0!important}.feature.middle{width:calc(100vw - 10pt);padding:0 10pt!important}.feature.middle>div{max-width:100vw;text-align:left;margin:0!important}.feature pre.promptdown,.feature pre.promptdown.promptdown-compiled,html.dark .feature pre.promptdown,html.dark .feature pre.promptdown.promptdown-compiled{width:calc(100vw - 30pt);max-width:calc(100vw - 30pt);font-size:12pt;margin-left:-5pt;padding:0;margin-right:0}.feature pre.promptdown .promptdown-var{line-break:word!important}}.feature.code pre{flex:1;margin:0;padding:20pt;white-space:pre-wrap;box-shadow:0 0 80pt #00000045;font-size:10pt}.feature.code>code{display:none}.feature.code>div:first-child{margin-right:0}.feature.code a{text-decoration:underline}.feature.code{margin-top:10pt;max-width:480pt;font-size:12pt;line-height:1.4;z-index:100}@media (max-width: 800px){.feature.code{margin-top:-30pt!important;font-size:11pt;padding:2pt!important;max-width:calc(100vw - 20pt);overflow:hidden!important}.feature.code pre{white-space:pre;margin:0;padding:10pt;font-size:9pt;max-width:calc(100vw - 20pt);box-shadow:none;overflow-x:scroll}.feature.code pre .window-controls{display:none}}@media (max-width: 600px){.feature.code{margin-top:-90pt!important}}.code-by-code{display:flex;flex-direction:row;margin:0}.code-by-code .left,.code-by-code .right{flex:1;padding:0;max-width:50%}.code-by-code .left{margin-right:2.5pt}.code-by-code .right{margin-left:2.5pt}.code-by-code .left pre{padding:10pt!important;margin-top:-6pt;white-space:pre-wrap;font-size:12pt;line-height:1.5em}@media (max-width: 
750pt){.code-by-code{flex-direction:column}.code-by-code .left pre{font-size:11pt}.code-by-code .left,.code-by-code .right{max-width:100%;width:100vw}}.code-by-code .left h2{color:#fff}.code-by-code h2{display:block;text-align:center;margin-bottom:-35pt;font-weight:700;opacity:.8;font-size:10pt}.code-by-code .language-lmql pre{padding-top:30pt!important}.code-by-code .promptdown,.code-by-code .promptdown.promptdown-compiled,html.dark .code-by-code .promptdown,html.dark .code-by-code .promptdown.promptdown-compiled{padding-top:30pt!important;font-size:14pt}.code-by-code .promptdown .promptdown-var{line-height:1.5em}.examples[data-v-96cfe14a]{max-width:1030pt;margin:auto;padding:0 8pt}h1[data-v-96cfe14a]{margin-bottom:20pt}.btn-group[data-v-96cfe14a]{margin-bottom:1rem;font-size:10pt;margin-top:1em}.btn[data-v-96cfe14a]{padding:4pt;margin:0 4pt 4pt 0}.btn-group .btn.active[data-v-96cfe14a]{background-color:#007bff;color:#fff;border:2pt solid #007bff}.examples .description[data-v-96cfe14a]{max-width:450pt;margin-bottom:30pt}.examples .description a{text-align:left;margin-left:4pt;color:#007bff}.examples .description a:hover{text-decoration:underline}.examples .right .distribution{position:relative;top:-110pt;margin-left:20pt;width:220pt}.examples .left pre{margin-top:-4pt!important}.post{margin-bottom:4rem}.primary.pdf[data-v-34af5329]{top:10pt;right:10pt;margin:5pt 0;display:inline-block;text-decoration:none}.primary.pdf[data-v-34af5329]:hover{background-color:#0069d9}.paper[data-v-34af5329]{position:relative;text-align:justify;line-height:1}.paper p[data-v-34af5329]{margin:10pt 0}:root{--vp-c-default-1: var(--vp-c-gray-1);--vp-c-default-2: var(--vp-c-gray-2);--vp-c-default-3: var(--vp-c-gray-3);--vp-c-default-soft: var(--vp-c-gray-soft);--vp-c-brand-1: var(--vp-c-indigo-1);--vp-c-brand-2: var(--vp-c-indigo-2);--vp-c-brand-3: var(--vp-c-indigo-3);--vp-c-brand-soft: var(--vp-c-indigo-soft);--vp-c-tip-1: var(--vp-c-brand-1);--vp-c-tip-2: var(--vp-c-brand-2);--vp-c-tip-3: var(--vp-c-brand-3);--vp-c-tip-soft: var(--vp-c-brand-soft);--vp-c-warning-1: var(--vp-c-yellow-1);--vp-c-warning-2: var(--vp-c-yellow-2);--vp-c-warning-3: var(--vp-c-yellow-3);--vp-c-warning-soft: var(--vp-c-yellow-soft);--vp-c-danger-1: var(--vp-c-red-1);--vp-c-danger-2: var(--vp-c-red-2);--vp-c-danger-3: var(--vp-c-red-3);--vp-c-danger-soft: var(--vp-c-red-soft)}:root{--vp-button-brand-border: transparent;--vp-button-brand-text: var(--vp-c-white);--vp-button-brand-bg: var(--vp-c-brand-3);--vp-button-brand-hover-border: transparent;--vp-button-brand-hover-text: var(--vp-c-white);--vp-button-brand-hover-bg: var(--vp-c-brand-2);--vp-button-brand-active-border: transparent;--vp-button-brand-active-text: var(--vp-c-white);--vp-button-brand-active-bg: var(--vp-c-brand-1)}:root{--vp-home-hero-name-color: transparent;--vp-home-hero-name-background: -webkit-linear-gradient( 120deg, #bd34fe 30%, #41d1ff );--vp-home-hero-image-background-image: linear-gradient( -45deg, #bd34fe 50%, #47caff 50% );--vp-home-hero-image-filter: blur(40px)}@media (min-width: 640px){:root{--vp-home-hero-image-filter: blur(56px)}}@media (min-width: 960px){:root{--vp-home-hero-image-filter: blur(72px)}}:root{--vp-custom-block-tip-border: transparent;--vp-custom-block-tip-text: var(--vp-c-text-1);--vp-custom-block-tip-bg: var(--vp-c-brand-soft);--vp-custom-block-tip-code-bg: var(--vp-c-brand-soft);--vp-code-block-bg: rgb(36, 39, 45);--vp-code-copy-code-bg: rgb(32, 33, 39);--vp-code-copy-code-border-color: #2e2e32;--vp-code-copy-code-hover-bg: 
#1b1b1f;--vp-code-copy-code-hover-border-color: #2e2e32;--vp-code-copy-code-active-text: rgba(235, 235, 245, .6)}.DocSearch{--docsearch-primary-color: var(--vp-c-brand-1) !important}.VPImage.logo{width:12pt}body{padding-bottom:200pt}span.lang{display:none}img.invert{filter:invert(90%)}html.dark img.invert{filter:invert(10%)}pre code{color:#ffffffdf!important}pre,.vp-doc div[class*=language-],.vp-block,html.dark pre,html.dark .vp-doc div[class*=language-]{background:var(--vp-code-block-bg)!important}.hljs-comment{opacity:.6}.hljs-string{color:#a7d884}.hljs-meta{color:#68edf2}.hljs-built_in,.hljs-keyword{color:#c678dd}.hljs-placeholder{color:#68edf2}.hljs-subst{color:#f4955d}html.dark .promptdown.promptdown-compiled,.promptdown.promptdown-compiled{opacity:1;line-height:1!important;padding:10pt!important;transform-origin:top center;background:transparent!important}pre.promptdown>p{margin:0}pre.promptdown>h1{margin:0 0 5pt;line-height:1em;text-transform:uppercase;opacity:.6}.language-promptdown button.copy{display:none}.language-promptdown .promptdown button.promptdown-button-replay{top:8pt;right:8pt;border-radius:15pt}.language-promptdown{--vp-code-block-bg: none;border-radius:0!important;transform-origin:top center;text-align:left}pre{border-radius:6pt}h1{font-size:1.4em;font-weight:700;margin-bottom:10pt}span.badge{background-color:#007bff;border-radius:2pt;transform:scale(.6);transform-origin:center left;display:inline-block;line-height:1.2em;padding:2pt 4pt;position:relative;top:0;color:#fff}h1 span.badge{transform:scale(.45);margin-left:3pt}div.subtitle{font-size:14pt;color:gray;font-weight:500;margin-bottom:25pt;margin-top:-5pt}.VPDoc:not(.has-sidebar):not(.has-aside) h1{font-size:2.5rem}.VPDoc:not(.has-sidebar):not(.has-aside) .content{max-width:830pt!important}.VPDoc:not(.has-sidebar):not(.has-aside) .container{max-width:830pt!important}.VPDoc:not(.has-sidebar) .content{max-width:1130pt!important}.VPDoc:not(.has-sidebar) .content .content-container{max-width:830pt}.VPDoc:not(.has-sidebar) .container{max-width:1130pt!important}html.dark p strong{font-weight:1200;text-decoration:underline}span.date{font-size:.8em;color:gray;display:block}pre.promptdown,pre.promptdown.promptdown-compiled,html.dark pre.promptdown,html.dark pre.promptdown.promptdown-compiled{text-indent:0pt!important;line-height:1.2em!important}.banner{background-color:#007bff;color:#fff;font-weight:700;padding:2pt 5pt;border-radius:2pt;max-width:calc(100vw - 40pt);margin:auto;width:730pt;position:relative;bottom:-20pt}@media (max-width: 800px){.banner{margin:0;max-width:100vw;border-radius:0}}.banner a{text-decoration:underline}pre .window-controls{margin-bottom:10pt;margin-left:-10pt;margin-top:-5pt}pre{position:relative}pre .window-controls .window-control{background:white;width:10pt;height:10pt;border-radius:50%;display:inline-block;margin-left:5pt}pre .window-controls .window-control:nth-child(1){background:#ff5f56}pre .window-controls .window-control:nth-child(2){background:#ffbd2e}pre .window-controls .window-control:nth-child(3){background:#27c93f}html.dark .language-grammar,.language-grammar,html.dark .language-grammar pre.hljs,.language-grammar pre.hljs{--vp-code-block-bg: none;background-color:transparent!important;background:none!important;color:var(--vp-c-text-1)!important;font-size:14pt;margin:0!important;margin-left:-20pt;white-space:pre-wrap}.language-grammar pre code{color:var(--vp-c-text-1)!important;white-space:pre!important;margin:0 0 0 -15pt!important}.language-grammar 
.hljs-comment{opacity:.6;color:var(--vp-c-text-1)}.language-grammar .hljs-string{color:var(--vp-c-text-1);color:var(--vp-c-danger-1)}.language-grammar .hljs-meta{color:var(--vp-c-text-1)}.language-grammar .hljs-built_in,.language-grammar .hljs-keyword{color:var(--vp-c-text-1);font-weight:700}.language-grammar .hljs-placeholder,.language-grammar .hljs-subst{color:var(--vp-c-text-1)}.language-grammar a[href^="#python-fragments"]{text-decoration:none;color:var(--vp-c-text-2)}.language-grammar{--vp-code-copy-code-border-color: var(--vp-c-divider);--vp-code-copy-code-bg: var(--vp-c-bg-soft);--vp-code-copy-code-hover-border-color: var(--vp-c-divider);--vp-code-copy-code-hover-bg: var(--vp-c-bg);--vp-code-copy-code-active-text: var(--vp-c-text-2)}.github-star{transform:scale(1.3)!important}.language-lmql .inline-lmql-delim{opacity:.2}.language-truncated{max-height:200pt;overflow:hidden}.info.show .language-truncated{max-height:none}.info.show button.btn.expand{display:none}html.dark .info button.btn.expand{background-color:var(--vp-c-gray-soft);border-color:var(--vp-c-gray-soft)}html.dark .info button.btn.expand:hover{border-color:var(--vp-c-gray-2)}.info button.btn.expand{text-align:center;width:100%;font-size:10pt;font-weight:700;margin-top:0}.language-output:before{content:"Console Output";font-size:10pt;font-weight:700;opacity:.4;text-align:right;position:absolute;display:block;top:2pt;right:5pt;margin-bottom:-2em}.language-result:before{content:"Result";font-size:10pt;font-weight:700;opacity:.4;text-align:right;position:absolute;top:2pt;right:8pt;margin-bottom:-2em}.language-output{border:.5pt solid rgb(204,201,201)}.language-output,.language-result{white-space:pre-wrap!important;color:var(--vp-c-text-1);--vp-code-block-bg: transparent !important;transform:scale(.98);position:relative;border-radius:7pt!important;--vp-code-copy-code-border-color: var(--vp-c-divider);--vp-code-copy-code-bg: var(--vp-c-bg-soft);--vp-code-copy-code-hover-border-color: var(--vp-c-divider);--vp-code-copy-code-hover-bg: var(--vp-c-bg);--vp-code-copy-code-active-text: var(--vp-c-text-2)}.language-output button.copy,.language-result button.copy{display:none}.language-result{--vp-code-block-bg: rgba(202, 202, 202, .061) !important}.language-output pre code,.language-output pre,.language-output .hljs-comment,.language-output .hljs-string,.language-output .hljs-meta,.language-output .hljs-built_in,.language-output .hljs-keyword,.language-output .hljs-placeholder,.language-output .hljs-subst,.language-result pre code,.language-result pre,.language-result .hljs-comment,.language-result .hljs-string,.language-result .hljs-meta,.language-result .hljs-built_in,.language-result .hljs-keyword,.language-result .hljs-placeholder,.language-result .hljs-subst{color:var(--vp-c-text-1)!important;white-space:pre-wrap!important}img.inline-logo{display:inline-block;height:1em;position:relative;top:.15em;left:.1em}.grid{display:flex;flex-wrap:wrap;font-size:12pt}.grid-item-card{flex:1 1 200pt;margin:5pt;border-radius:6pt;overflow:hidden;border:.5pt solid rgba(204,201,201,.732);background:transparent;position:relative;padding:10pt;font-size:12pt;max-width:48%}.grid-item-card h3{font-size:12pt;margin:0;padding:0}.grid-item-card a{text-decoration:none;color:var(--vp-c-text-1);transition-duration:.1s!important}.grid-item-card a p{margin:5pt 0 0;font-size:12pt;font-weight:400}.btn{padding:4pt 10pt;font-size:1em;background-color:#dcdcdc;border-radius:4pt;font-weight:700;margin:20pt 5pt 5pt 0;border:2pt solid 
rgb(220,220,220)}.btn:hover{border:2pt solid rgb(192,190,190)}.btn.primary,.btn.primary:hover{background-color:#007bff;border:2pt solid #007bff;color:#fff}figure img{border-radius:4pt}#version-switcher{opacity:.9;text-align:center;font-size:.9em;margin-top:-5pt;color:var(--vp-c-text-2)}#version-switcher .version{display:inline-block;border-radius:4pt;padding:0 5pt;margin-left:2pt}#version-switcher .version:hover{background-color:var(--vp-c-gray-soft);color:var(--vp-c-text-1)}#version-switcher .version.active{background-color:#007bff;color:#fff}#version-switcher label{margin-right:2pt;color:var(--vp-c-text-2)}a:hover #version-switcher label{color:var(--vp-c-text-2)}#version-switcher a.version:not(.active):hover{cursor:pointer}.promptdown p{text-indent:2pt;white-space:pre-wrap}.promptdown{font-family:monospace;font-size:12pt;background-color:#fff;padding:10pt 20pt 10pt 10pt;border-radius:5pt;border:.5pt solid rgb(204,201,201);line-height:1.5;position:relative;opacity:0;font-family:system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Open Sans,Helvetica Neue,sans-serif;white-space:pre-wrap}.promtpdown.promptdown-compiled{opacity:1!important}html.dark .promptdown{background-color:#3d3d3d4c;color:#fff;border:.5pt solid rgba(64,64,64,.507)}.promptdown-var.cmd{display:none}.promptdown-var{background-color:#dedada;border-radius:2pt;color:#000;font-weight:400;margin-right:2pt;padding:.5pt 4pt}html.dark .promptdown-var{color:#f6f6f6}.promptdown-var.animate-immediate{animation:fadein .2s;animation-fill-mode:forwards;animation-delay:0s}@keyframes fadein{0%{opacity:0}to{opacity:1}}.promptdown-var.color-none{background:none!important;padding-left:0}.promptdown-var .promptdown-var-name{color:#fff;background-color:#00000050;font-weight:700;border-radius:2pt;position:relative;top:-1.5pt;left:-2pt;font-size:80%;padding:0 4pt;font-family:monospace;margin-right:0}.promptdown-bubble-container.user{text-align:right}.promptdown-bubble-container{margin-bottom:8pt}.faded .promptdown-bubble{background:transparent}.hidden .promptdown-bubble{display:none}.promptdown-bubble>.promptdown-var-name{display:none}.promptdown-bubble{background-color:#fff;padding:10pt;display:inline-block;border-radius:5pt;color:#000;max-width:90%;white-space:pre-wrap}html.dark .promptdown-bubble{color:#fff}.promptdown-bubble.animate{animation:fadein-left .2s;animation-fill-mode:forwards;animation-delay:0s}@keyframes fadein-left{0%{opacity:0;transform:translate(-20pt)}to{opacity:1;transform:translate(0)}}.promptdown-bubble.system{text-align:center;color:gray;font-size:.85em;display:block;max-width:100%;background-color:transparent}.promptdown-bubble.system.animate{z-index:-999;animation:fadein-top .2s}@keyframes fadein-top{0%{opacity:0;transform:translateY(-1pt)}to{opacity:1;transform:translateY(0)}}.promptdown-bubble.user{background-color:#597afe;color:#fff;text-align:left}.promptdown-bubble.user.animate{animation:fadein-right .2s;animation-fill-mode:forwards;animation-delay:0s}@keyframes fadein-right{0%{opacity:0;transform:translate(20pt)}to{opacity:1;transform:translate(0)}}.promptdown-bubble.assistant{color:#000;background-color:#d9d9d9}.promptdown-bubble.assistant{background-color:#ece9e9df;padding:8pt}html.dark .promptdown-bubble.assistant{background-color:#777777df;padding:8pt}.promptdown h1,.promptdown h2,.promptdown h3{display:block;margin:0 0 8pt;padding:0;font-size:12pt;text-align:center}.promptdown h1{font-size:10pt}.promptdown h2{font-size:11pt;color:#696969}.promptdown 
h3{font-size:10pt}.promptdown-cursor{width:8pt;background-color:#c4c2c2;border-radius:2pt;position:relative;left:2pt;color:transparent;display:inline-block;transform:scale(.8);border:1pt solid rgb(212,212,212);animation:blink 1s infinite}.promptdown-var .promptdown-cursor{background-color:#00000047}.promptdown-var.color-none .promptdown-cursor{background-color:#c4c2c2}.promptdown .code_in_prompt{font-family:monospace;background-color:transparent!important}@keyframes blink{0%{opacity:.3}50%{opacity:1}to{opacity:.3}}.hidden{display:none}.cmd-hidden{display:none!important}.faded{opacity:.5;transition:opacity .5s;text-decoration:line-through;border-radius:2pt}.command.hidden{display:none}.promptdown .color-blue{background-color:#728cf5}html.dark .promptdown .color-blue{background-color:#4e60a7}.promptdown .color-purple{background-color:#a48efc}html.dark .promptdown .color-purple{background-color:#715ca3}.promptdown .color-pink{background-color:#ff7893}html.dark .promptdown .color-pink{background-color:#c55c71}.promptdown .color-magenta{background-color:#fb88fb}html.dark .promptdown .color-magenta{background-color:#9c519c}.promptdown .color-red{background-color:#fa9393}html.dark .promptdown .color-red{background-color:#aa5656}.promptdown .color-orange{background-color:#fe7a59}html.dark .promptdown .color-orange{background-color:#aa5640}.promptdown .color-lightorange{background-color:#feb259}html.dark .promptdown .color-lightorange{background-color:#6d5717}.promptdown .color-yellow{background-color:#fbfbc0}html.dark .promptdown .color-yellow{background-color:#6b6b3f}.promptdown .color-ochre{background-color:#8abc98}html.dark .promptdown .color-ochre{background-color:#567660}.promptdown button.promptdown-button-replay{position:absolute;top:10pt;right:10pt;animation:fadein .2s;color:#597afe;font-size:.8em;border:none;background:transparent;cursor:pointer}.promptdown button.promptdown-button-replay:hover{text-decoration:underline}.promptdown button.copy{background-color:#ffffff29;border:1pt solid rgba(255,255,255,.211);opacity:1;position:absolute;top:2pt;right:4pt;left:auto;font-size:10pt;opacity:.1;transition:opacity .1s;padding:5pt;background:transparent}.promptdown button.copy:hover{opacity:1} diff --git a/blog/index.html b/blog/index.html index 52ade73c..815c912a 100644 --- a/blog/index.html +++ b/blog/index.html @@ -5,14 +5,14 @@ Blog | LMQL - + - + @@ -105,7 +105,7 @@

Decorators
"Say 'this is a test':[@screaming TEST]"

Say 'this is a test': TEST THIS IS A TEST

Similar to Python decorators, LMQL decorators are functions that take a variable as input and can wrap and modify its value.

In the example above, we use the @screaming decorator to convert the value of TEST to uppercase. Decorators can be used to implement a wide range of custom functionality, including string normalization, datatype conversion, and more. LMQL also provides decorators that allow you to stream or pre-process data during generation. For more information, please refer to the documentation.
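For illustration, here is a minimal sketch of how the @screaming decorator used above could be defined. It assumes a decorator is a plain Python function that receives the generated value and returns the (possibly transformed) value to substitute; refer to the documentation for the exact interface.

def screaming(value):
    # illustrative decorator: uppercase the generated value of TEST
    return value.upper()

"Say 'this is a test':[@screaming TEST]"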

@@ -199,7 +199,7 @@

One Line Is All It Takes
"One line is all it takes [CONTINUATION]"
 

One line is all it takes CONTINUATIONFallin' in love with me.

Sensible Defaults This is possible because LMQL now automatically assumes argmax decoding and openai/text-davinci-003 as (configurable) defaults. If you prefer a different model or custom decoder settings, you can still specify them explicitly, e.g. in the @lmql.query decorator function, as demonstrated later in this post.
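For comparison, here is a sketch of the same one-liner with the decoder and model spelled out explicitly, using the classic full query syntax; the model name is simply the default mentioned above.

argmax
    "One line is all it takes [CONTINUATION]"
from
    "openai/text-davinci-003"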

@@ -226,13 +226,13 @@

Inline Constraints

A list of awesome Dua Lipa songs:⏎
- New Rules
- SONGDon't Start Now
- SONGIDGAF
- SONGBe the One
- SONGBlow Your Mind (Mwah)
Out of these, my favorite is FAVORITEDon't Start Now

Note also how in this example LMQL code now reads much more like standard Python code, without any additional level of indentation.
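For reference, here is a sketch of a query that could produce output like the one above using inline where constraints; the exact prompt, loop count, and stopping phrases are illustrative assumptions.

"A list of awesome Dua Lipa songs:\n"
"- New Rules\n"
for i in range(4):
    "-[SONG]" where STOPS_AT(SONG, "\n")
"Out of these, my favorite is[FAVORITE]" where STOPS_AT(FAVORITE, "\n")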


@@ -443,7 +443,7 @@

Short-Circuiting Long C ]

If we have the choice we choose OPTIONOption A with a whole lot of extra context

Without Caching: Tokens: 123, Requests: 9 | With Caching Layer: Tokens: 25 (-80%), Requests: 2 (-78%)

Here, after the LLM has produced "Option" and then " A", LMQL short-circuits further model calls and automatically completes the resulting sequence to "Option A with a whole lot of extra context". This is possible because once Option A has been predicted, the remaining tokens are fully determined by the constraints.
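For illustration, here is a sketch of the kind of constrained choice where this short-circuiting applies; the option strings are assumptions made for the example.

argmax
    "If we have the choice we choose[OPTION]"
from
    "openai/text-davinci-003"
where
    OPTION in [" Option A with a whole lot of extra context", " Option B with some other context"]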

@@ -536,7 +536,7 @@

Preview
- + \ No newline at end of file diff --git a/blog/posts/developer-survey.html b/blog/posts/developer-survey.html index 65987c93..6444ebe0 100644 --- a/blog/posts/developer-survey.html +++ b/blog/posts/developer-survey.html @@ -5,7 +5,7 @@ LMQL Developer Survey | LMQL - + @@ -21,7 +21,7 @@
Skip to content

LMQL Developer Survey

February 14, 2024

image

    We have started a new initiative called the LMQL developer survey. With this short survey, we aim to learn more from everyone in the LMQL and broader LLM community. We are looking for broader feedback signals on how and what people are using LMQL for, or would like to use it for.

The outcome of this survey will help shape our work around the next major version of LMQL.

You can find the survey here: https://forms.gle/pGvAicNpUhS1rAkK9.

- + \ No newline at end of file diff --git a/blog/posts/release-0.0.5.html b/blog/posts/release-0.0.5.html index 541ecbfa..dbadefdb 100644 --- a/blog/posts/release-0.0.5.html +++ b/blog/posts/release-0.0.5.html @@ -5,7 +5,7 @@ LMQL Release 0.0.5 | LMQL - + @@ -29,7 +29,7 @@ where INT(NUM)
  • Core Interpreter The LMQL core interpreter has been completely reimplemented. This fixes a couple of minor issues and, overall, improves reliability and performance when dealing with branching decoding algorithms. #24.

  • Playground Locally and when used in-browser, the LMQL Playground now streams debugger information from the LMQL interpreter incrementally. This leads to speed-ups when running in the Playground, especially with longer outputs. #27f9a8ad.

  • Other Fixes:

    • When used from within Python (as a decorated function), LMQL code no longer has to be double-escaped, e.g. you can now write STOPS_AT(VAR, "\n") instead of STOPS_AT(VAR, "\\n")
    • The LMQL inference API buffers requests that come in during startup, to avoid errors when the server is not yet ready. #15, thanks to @chrispan.
    • OpenAI request parallelization no longer leads to an error related to worker processes on Linux systems #6.
  • Preview

    Apart from the changes above, we are also working on a number of other features, including:

    • llama.cpp support as started in this PR, thanks to @CircArgs.

    • Support for Type Constraints, e.g. type(VAR) is DataClass, that automatically force the model to produce a value that structurally conforms to the given type. See this Twitter thread for more details.

    • Support for using Antlr parsers during query execution, to force the model to produce a value that conforms to a given grammar.

    • Extending Logit Masking to OpenAI Chat Models. This will enable full support for LMQL constraints with e.g. chatgpt and gpt-4 models. See #25, thanks to @kharvd.

    - + \ No newline at end of file diff --git a/blog/posts/release-0.0.6.1.html b/blog/posts/release-0.0.6.1.html index 07181a0c..bdd1aba4 100644 --- a/blog/posts/release-0.0.6.1.html +++ b/blog/posts/release-0.0.6.1.html @@ -5,7 +5,7 @@ LMQL Release v0.0.6.1 | LMQL - + @@ -21,7 +21,7 @@
    Skip to content

    LMQL v0.0.6.1

    May 3, 2023

    We released LMQL v0.0.6.1, which contains several bug fixes and improvements. The most notable changes are:

    • Cache Layer Bug Fixes This release contains several fixes and improvements to the recently introduced cache layer.

    • Stopping Phrases Stopping phrases specified via STOPS_BEFORE are now passed to the OpenAI API as the "stop" parameter, decreasing the number of tokens used for the request. If you want to disable this (e.g. to allow speculative execution), you can specify the new decoder parameter openai_nonstop=True (see the sketch below).

    • Asynchronous Output Writers All output writers have been refactored to use asynchronous I/O. This should simplify integration with other asynchronous frameworks, e.g. for HTTP or Websocket APIs. We also added a new chapter on Output Streaming to the documentation.

    • Output Writers for HTTP endpoints, WebSockets and Server-Sent Events Based on the updated output writer interface, we added three new output writers for serving LMQL queries as HTTP endpoints, WebSockets and via Server-Sent Events (SSE). To learn more, check their relatively simple implementations in the new lmql.output module. We will also provide more documentation on how to use them, e.g. with aiohttp in the future.
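    As referenced in the stopping phrases item above, here is a sketch of a query that opts out of the "stop" parameter forwarding via the new decoder parameter; the prompt, variable name and model are illustrative assumptions.

    argmax(openai_nonstop=True)
        "Write one paragraph about LMQL:[PARAGRAPH]"
    from
        "openai/text-davinci-003"
    where
        STOPS_BEFORE(PARAGRAPH, "\n\n")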

    - + \ No newline at end of file diff --git a/blog/posts/release-0.0.6.3.html b/blog/posts/release-0.0.6.3.html index 6163f7f1..2f538490 100644 --- a/blog/posts/release-0.0.6.3.html +++ b/blog/posts/release-0.0.6.3.html @@ -5,7 +5,7 @@ LMQL Release v0.0.6.3 | LMQL - + @@ -33,7 +33,7 @@ where len(TOKENS(WHO)) > 10 and STOPS_AT(WHO, "\n")
  • lmql.run: Improved input validation for lmql.run, as contributed by @lfegray. More specifically, lmql.run will now provide more helpful error messages when client logic does not specify input values for all required query parameters.

  • Automatic Cache Invalidation: LMQL's tokenizer cache at ~/.cache/lmql is now invalidated automatically when upgrading to a new version. This should prevent issues with outdated cache files.

  • Note: Version 0.0.6.2 was skipped and yanked from pypi.org, as an invalid release was pushed accidentally.

    - + \ No newline at end of file diff --git a/blog/posts/release-0.0.6.4.html b/blog/posts/release-0.0.6.4.html index 0d2839b3..be59f75d 100644 --- a/blog/posts/release-0.0.6.4.html +++ b/blog/posts/release-0.0.6.4.html @@ -5,7 +5,7 @@ Releasing LMQL v0.0.6.4 LMTP, Azure, Synchronous API, and more | LMQL - + @@ -34,7 +34,7 @@ print(hello("world")) # ['Hello! How can I assist you today?']

    If you want to use lmql.run in a synchronous context, you can now use lmql.run_sync instead (a minimal sketch follows after this list). To learn more about how LMQL can be used from Python, check out our documentation.

  • Improved Tokenizer Backends LMQL can now use the excellent tiktoken tokenizer as a tokenization backend (for OpenAI models). Furthermore, all tokenization backends have been ported to operate on a byte level, which improves support for multibyte characters and emojis. This is especially relevant for non-English languages and special characters.

  • Docker Image LMQL now provides a Docker image that can be used to run the LMQL playground in a containerized environment. For more information, please see the documentation. Many thanks to @SilacciA for contributing this feature.

  • Faster Startup Time We optimized LMQL's import hierarchy, which results in faster module loading time.
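  As referenced in the synchronous API item above, here is a minimal sketch of running a query string without an event loop; the query string itself is an illustrative assumption.

  import lmql

  # run a query synchronously instead of awaiting lmql.run
  result = lmql.run_sync('"Say hello to the user: [GREETING]"')
  print(result)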

  • - + \ No newline at end of file diff --git a/blog/posts/release-0.0.6.5.html b/blog/posts/release-0.0.6.5.html index 83b1c401..91137d19 100644 --- a/blog/posts/release-0.0.6.5.html +++ b/blog/posts/release-0.0.6.5.html @@ -5,7 +5,7 @@ LMQL becomes simpler and adds llama.cpp | LMQL - + @@ -84,7 +84,7 @@

    llama.cpp Inference Backend

    LMQL now also fully integrates with the excellent llama.cpp C++ implementation of a number of Transformer-based language models.

    Using llama.cpp from LMQL is as simple as specifying it in the from clause of a query:

    argmax "Say 'this is a test':[RESPONSE]" from "llama.cpp:<PATH TO WEIGHTS>.bin"
     

    We support both in-process loading of llama.cpp and remote inference via lmql serve-model. To learn more about llama.cpp and how to use it with LMQL, check out the corresponding chapter in the LMQL documentation.
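    For remote inference, here is a sketch of serving the same weights over the network; the exact flags of lmql serve-model may differ, and the path placeholder is kept from the example above.

    lmql serve-model "llama.cpp:<PATH TO WEIGHTS>.bin"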


    Other Changes

    • LMQL now includes a random model backend, which randomly samples tokens from the GPT-2 vocabulary. This is useful for debugging and testing purposes and can be used for data generation in the context of highly constrained query programs.

    • Two caching issues have been fixed, avoiding cache collisions which could lead to repeated model outputs.

    • More robust query string parsing, allowing for reliable escaping of the special characters [, ], { and }.

    • Added support for transformers-based Llama models and the associated (fast) implementation of HF tokenizers.

    • Simplified Azure OpenAI support; see the relevant chapter in the documentation.

    We thank community members @minosvasilias and @CircArgs for their contribution to this release.

    - + \ No newline at end of file diff --git a/blog/posts/release-0.0.6.6.html b/blog/posts/release-0.0.6.6.html index 2e3cce5b..c17fde00 100644 --- a/blog/posts/release-0.0.6.6.html +++ b/blog/posts/release-0.0.6.6.html @@ -5,7 +5,7 @@ LMQL v0.0.6.6 | LMQL - + @@ -27,7 +27,7 @@ # call with keyword arguments greet(a="Alice", b="Bob") # Greet Alice and Bob: Hello!
    • We improved the error handling of the llama.cpp backend and fixed a bug with model identifier parsing.

    • We also fixed a bug with the LMTP scheduler, where CPU load was high even when no tasks were present. Thanks to community member @4onen for reporting and fixing this!

    • Added backend support for auto_gptq quantized models, contributed by community member @meditans.

    • We fixed an issue where, for Azure OpenAI models, a dummy api.env configuration file was needed. See our documentation for details. Thanks to community members Missing and @hooman-bayer for their feedback and contributions to this.

    Versioning Note: 0.0.6.6 is the last release with two leading zeros. Starting with the next release, LMQL will adopt semantic versioning and use a single leading zero, i.e. 0.6.7.

    - + \ No newline at end of file diff --git a/blog/posts/release-0.0.6.html b/blog/posts/release-0.0.6.html index a6703813..bcc7c9a1 100644 --- a/blog/posts/release-0.0.6.html +++ b/blog/posts/release-0.0.6.html @@ -5,7 +5,7 @@ Releasing the LMQL Caching Layer (v0.0.6) | LMQL - + @@ -54,7 +54,7 @@ STOPS_AT(PUNCHLINE, "\n") and len(PUNCHLINE) > 1

    The first successful run of this query will persist the cache to joke.tokens. Subsequent runs will then automatically load the cache from disk, and only invoke the LLM if the cache does not contain a match. This also works for queries whose underlying LLM requests only partially overlap, as the tree-based cache data structure will automatically identify matching subtrees.

    Caching During Query Development: Persisting the cache can be particularly useful during query development, as it allows you to reuse the cache across multiple runs of the same query. A persistent cache will reduce token cost and latency of your query, even if you slightly change the query between runs.
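    To make this concrete, here is a sketch of how such a persistent cache could be enabled, assuming the cache decoder parameter accepts a file path; the prompt is illustrative and the constraints are abbreviated from the query above.

    argmax(cache="joke.tokens")
        "A good joke, with a punchline:[PUNCHLINE]"
    from
        "openai/text-davinci-003"
    where
        STOPS_AT(PUNCHLINE, "\n") and len(PUNCHLINE) > 1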

    Caveats and Disabling the Cache

    You can disable the caching layer by specifying cache=False in your query code. This will cause the LMQL runtime to always invoke the LLM, and never use the cache. This is useful for debugging purposes, or if you want to ensure that the LLM is always invoked.

    Further, as the cache currently is implemented as an append-only data structure, it will grow indefinitely. This may be problematic for long-running applications, as the cache will eventually grow to relatively large sizes. In the future, we plan to implement simple strategies to limit the cache size, such as a least-recently-used eviction policy.

    Conclusion

    In this post, we introduced the new caching layer of the LMQL runtime, which allows you to reduce the token cost and latency of your queries by reusing previously generated LLM outputs. We demonstrated how the caching layer can be used to reduce the number of LLM invocations in a variety of scenarios, including long constraints, short-circuiting, and tool-augmented queries. We also showed how the cache can be persisted to disk, allowing you to reuse the cache across multiple queries.

    To learn more about LMQL please also check out our documentation, or join our Discord to chat with us directly. We are looking forward to hearing from you!

    - + \ No newline at end of file diff --git a/blog/posts/release-0.7.html b/blog/posts/release-0.7.html index 37ef188e..ee13fa77 100644 --- a/blog/posts/release-0.7.html +++ b/blog/posts/release-0.7.html @@ -5,7 +5,7 @@ LMQL 0.7 brings Procedural Prompt Programming | LMQL - + @@ -69,7 +69,7 @@ PERSON_DATA # Person(name='Alice', age=21, job='engineer')

    To achieve this, LMQL leverages constrained generation to make sure the LLM always produces all information required to populate a valid Person object. The resulting PERSON_DATA object can then be directly used like a regular Python object. Types are still in an early stage and we are working on adding more features and functionality.

    Other Changes

    🎬 And that's a wrap!

    LMQL 0.7 is a big release and we are excited to see what you will build with it. As always, please let us know if you have any questions, suggestions or bug reports, on GitHub, Discord, Twitter or via hello@lmql.ai.

    - + \ No newline at end of file diff --git a/docs/development/dev-setup.html b/docs/development/dev-setup.html index 8e1c3fb9..8fffcc89 100644 --- a/docs/development/dev-setup.html +++ b/docs/development/dev-setup.html @@ -5,7 +5,7 @@ Development Environment | LMQL - + @@ -33,7 +33,7 @@ # registers the `lmql` command in the current shell source scripts/activate-dev.sh

    With Nix

If you have Nix installed, you can use it to invoke LMQL even if none of LMQL's dependencies are installed beforehand! We try to test Nix support on ARM-based macOS and Intel-based Linux; bugfixes and contributions for other targets are welcome.

    Most targets within the flake have several variants:

    • default targets (nix run github:eth-sri/lmql#playground, nix run github:eth-sri/lmql#python, nix run github:eth-sri/lmql#lmtp-server, nix develop github:eth-sri/lmql#lmql) download all optional dependencies for maximum flexibility; these are also available with the suffix -all (playground-all, python-all, lmtp-server-all).
    • -basic targets only support OpenAI (and any future models that require no optional dependencies).
    • -hf targets only support OpenAI and Hugging Face models.
    • -replicate targets are only guaranteed to support Hugging Face models remoted via Replicate. (In practice, at present, they may also support local Hugging Face models; but this is subject to change).
    • -llamaCpp targets are only guaranteed to support llama.cpp. (In practice, again, Hugging Face may be available as well).

    In all of these cases, github:eth-sri/lmql may be replaced with a local filesystem path; so if you're inside a checked-out copy of the LMQL source tree, you can use nix run .#playground to run the playground/debugger from that tree.

    - + \ No newline at end of file diff --git a/docs/development/docker-setup.html b/docs/development/docker-setup.html index ebb2e65c..b6b36066 100644 --- a/docs/development/docker-setup.html +++ b/docs/development/docker-setup.html @@ -5,7 +5,7 @@ LMQL in Docker | LMQL - + @@ -27,7 +27,7 @@

Otherwise, if you want to use the api.env file, you can also mount the file as follows:

docker run -d -v $(pwd)/api.env:/lmql/.lmql/api.env -p 3000:3000 -p 3004:3004 lmql-docker:latest
     

    Starting a container with GPU and local models

Make sure you have followed the image building step from the section Using GPU and local models. To start the Docker container with access to the GPUs, use the following command:

    docker run --gpus all -d -p 3000:3000 -p 3004:3004 lmql-docker:cuda11.8
     

Here, all means that all available GPUs are allocated to the Docker container.

Note that here we expose ports 3000 and 3004 from the container to ports 3000 and 3004 on your machine. We also reuse the name lmql-docker:cuda11.8, as it is the tag we previously used to build the image.

    - + \ No newline at end of file diff --git a/docs/development/documentation.html b/docs/development/documentation.html index 6793ca05..e17cc1d7 100644 --- a/docs/development/documentation.html +++ b/docs/development/documentation.html @@ -5,7 +5,7 @@ Documentation | LMQL - + @@ -27,7 +27,7 @@ # run hot-reloading live server yarn docs:dev - + \ No newline at end of file diff --git a/docs/index.html b/docs/index.html index 17e587ef..4ffdb12d 100644 --- a/docs/index.html +++ b/docs/index.html @@ -5,7 +5,7 @@ Getting Started | LMQL - + @@ -34,7 +34,7 @@ else: "Good job"

Going beyond what we have seen so far, this LMQL program extends the above in a few ways:

• Decoder Declaration sample(temperature=1.2): Here, we specify the decoding algorithm to use for text generation. In this case, we use sample decoding with a slightly increased temperature (>1.0). Above, we implicitly relied on deterministic argmax decoding, which is the default in LMQL. To learn more about the different supported decoding algorithms in LMQL (e.g. beam or best_k), please see Decoders.

• Prompt Program: The main body of the program remains the prompt. As before, we use prompt statements here; however, now we also make use of control flow and branching behavior.

  On each LLM call, the concatenation of all prompt statements so far forms the prompt used to generate a value for the currently active template variable like RESPONSE. This means the LLM is always aware of the full prompt context so far when generating a value for a template variable.

      After a prompt statement has been executed, the contained template variables are automatically exposed to the surrounding program context. This allows you to react to model output and incorporate the results in your program logic. To learn more about this form of interactive prompting, please see Scripted Prompting.
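For illustration, a minimal sketch of a program combining a decoder declaration, prompt statements, and control flow; the concrete prompts are stand-ins, not the exact example above:

lmql
sample(temperature=1.2)
"Write a one-sentence review of LMQL:[REVIEW]\n"
"Is this review positive? (yes/no):[SENTIMENT]" where SENTIMENT in ["yes", "no"]
if SENTIMENT == "yes":
    "Good job"
else:
    "Rewrite the review to be more positive:[IMPROVED]"
 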

    3. Enjoy

    These basic steps should get you started with LMQL. If you need more inspiration before writing your own queries, you can explore the examples included with the Playground IDE or showcased on the LMQL Website.

If you have any questions or requests for documentation, please feel free to reach out to us via our Community Discord, GitHub Issues, or Twitter.

    - + \ No newline at end of file diff --git a/docs/installation.html b/docs/installation.html index bc4da578..f1c75027 100644 --- a/docs/installation.html +++ b/docs/installation.html @@ -5,7 +5,7 @@ Installation Instructions | LMQL - + @@ -25,7 +25,7 @@

    INFO

    Using the LMQL playground requires an installation of Node.js. If you are in a conda-managed environment you can install node.js via conda install nodejs=14.20 -c conda-forge. Otherwise, please see the official Node.js website for instructions on how to install it on your system.

    This launches a browser-based Playground IDE, including a showcase of many exemplary LMQL programs. If the IDE does not launch automatically, go to http://localhost:3000.

    Command-Line Interface

    As an alternative to the playground, the command-line tool lmql run can be used to execute local .lmql files.

    Python Integration

    LMQL can also be used directly from within Python. To use LMQL in Python, you can import the lmql package, run query code via lmql.run or use a decorator @lmql.query for LMQL query functions.

    For more details, please see the Python Integration chapter.
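As a brief, hedged sketch of this workflow (it assumes a default model is configured, e.g. via your OpenAI credentials):

python
import lmql

@lmql.query
def greet(person):
    '''lmql
    "Greet {person}. Hello [NAME]!" where STOPS_AT(NAME, "!")
    return NAME
    '''

# query functions behave like regular Python functions
print(greet("the world"))
 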

    Self-Hosted Models

    Note that when using local 🤗 Transformers models in the Playground IDE or via lmql run, you have to first launch an instance of the LMQL Inference API for the corresponding model via the command lmql serve-model.
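For example, a hedged sketch of serving a local model before querying it (the model name is illustrative):

bash
lmql serve-model gpt2-medium
 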

For more details, please see the 🤗 Models chapter.

    Configuring OpenAI API Credentials

If you want to use OpenAI models, you have to configure your API credentials. To do so, you can either define the OPENAI_API_KEY environment variable or create a file api.env in the active working directory with the following contents.

    openai-org: <org identifier>
     openai-secret: <api secret>
     

    For system-wide configuration, you can also create an api.env file at $HOME/.lmql/api.env or at the project root of your LMQL distribution (e.g. src/ in a development copy).
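Alternatively, a hedged sketch of the environment-variable route mentioned above (replace the placeholder with your actual API secret):

bash
export OPENAI_API_KEY=<api secret>
 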

    - + \ No newline at end of file diff --git a/docs/language/constraints.html b/docs/language/constraints.html index 72c78ed9..91a4f258 100644 --- a/docs/language/constraints.html +++ b/docs/language/constraints.html @@ -5,7 +5,7 @@ Constraints | LMQL - + @@ -98,7 +98,7 @@ For some, life begins with a loving family and a comfortable home, while for others it may start with struggle and hardship.

    Here, we enforce a stopping condition on the . character, but only once the generated story is at least 40 tokens long. Thus, the interpretation of and with stopping conditions is that the stopping condition is only enforced once the other constraints are satisfied.
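As a hedged sketch, such a combined constraint could be written as follows (the prompt is an illustrative stand-in for the example above):

lmql
"Tell me a story about life:[STORY]" where len(TOKENS(STORY)) > 40 and STOPS_AT(STORY, ".")
 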

    How Do LMQL Constraints Work?

Token Masking and Eager Validation LMQL constraints are evaluated eagerly on each generated token and are used by the runtime to generate token masks during generation. This means that the provided constraints are either satisfied by directly guiding the model appropriately during generation or, if this is not possible, validation fails early on, saving the cost of generating invalid output. In the case of greedy decoding (sample or argmax), this terminates the generation process; in the case of branching decoding, it prunes the current branch and continues generation only in the remaining, valid branches.

High-Level Text Constraints Constraints are high-level and operate on a text (not token) level. For instance, users can specify constraints like VAR in ["Hello", "Hi", "Greetings"] without having to consider the exact tokenization of the individual phrases. LMQL automatically translates these constraints into token masks that can be used to guide the model during generation, allowing output that satisfies the provided constraint to be generated in a single call.
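For example, a minimal sketch using the set constraint mentioned above:

lmql
"Greet the user:[VAR]" where VAR in ["Hello", "Hi", "Greetings"]
 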

    Custom Constraints and Theoretical Background

    To learn more about the internals of LMQL and how to implement your own LMQL constraint, see the chapter on Implementing Custom LMQL Constraints.

    - + \ No newline at end of file diff --git a/docs/language/constraints/custom-constraints.html b/docs/language/constraints/custom-constraints.html index b77602c3..cbe8a934 100644 --- a/docs/language/constraints/custom-constraints.html +++ b/docs/language/constraints/custom-constraints.html @@ -5,7 +5,7 @@ Custom Constraints | LMQL - + @@ -82,7 +82,7 @@ return "fin" if x == "fin" else "var"

forward() implementation: First, we implement the forward method: To validate the foo-bar property, we check that any string segment following a potential foo substring aligns with " bar". For this, we have to consider the cases of x being None (not yet generated), a partial match (including empty strings), and a match that extends beyond " bar". Depending on model tokenization, forward() may be called on any such variation, and thus has to be able to handle all of them.

    follow() implementation: Next, we implement the follow method. For this, we again consider multiple cases:

    1. If x is not yet generated, we do not have to restrict the next token.

    2. If there is no foo in x, we do not have to restrict the next token.

    3. If there is a foo right at the end of x, we restrict as follows:

      (a) If the segment already starts with " bar", no restrictions are necessary.

      (b) Otherwise, we restrict the next token to " bar". For this, we construct a so-called follow map, a mapping of token ranges to the future evaluation result of our operator, if the next token is in the specified range.

  In our foo-bar case, it suffices to indicate that a continuation of bar evaluates to True, and any other continuation to False. This is achieved by the fmap function, which constructs a follow map from a list of token ranges and their respective future evaluation results. To construct token ranges, the tset constructor can be used, which allows selecting tokens by length, set, prefix, or regex, independent of the concrete tokenizer in use. In this case, we use the regex=True option to automatically select all tokens that fully or partially match " bar".

final() implementation: Lastly, we implement the final method. This indicates to the LMQL runtime whether a result of our custom operator is final with respect to the current model output, or only temporary. In this case, a return value of False can always be considered final (a definitive violation, warranting early termination). Otherwise, we have to consider the definitiveness of the current value of x: if x is final, then the result of our operator is also final; otherwise, it is temporary, as a further continuation of x may still result in satisfying the constraint.

Note: For illustrative purposes, 3b of our follow() implementation simplifies an important detail about token alignment. It only considers the case where follow() is called right at the end of "foo". Depending on model behavior and tokenization, follow() may also run on a partial result like "foo b", where the correct follow map should indicate "ar" as the valid continuation, not the full " bar". To handle this, one can simply rely on the implementation of the built-in InOpStrInSet (the implementation of the constraint VAR in [...]), and replace the fmap call with InOpStrInSet([]).follow(bar_segment, [" bar"]), which automatically handles all such cases.

    Using the Custom Constraint Operator

    To use the custom constraint operator, you can simply import it in your query context and use it as follows:

    "Say 'foo':[A]" where foo_bar(A)
     

    In general, you have to ensure that the @LMQLOp decorator is executed in your current process before the query is parsed, e.g. by importing the module containing the operator implementation.

    What Happens Under the Hood?

Given an operator implementation as above, the LMQL runtime will be able to both validate model output during generation and derive token-level prediction masks. For this, forward, follow and final are called repeatedly during generation, with the current model output x as input. This allows the runtime to derive both the validity of the current model output and next-token ranges for which the operator definitively (final) evaluates to False. Assuming the underlying implementation is correct, this ensures that the model will never select a follow-masked token that would definitively violate your constraint.

    Expressiveness of LMQL Constraints

LMQL constraints are applied eagerly during generation by relying on token masking. This means that the model will not be able to generate any tokens that are masked by the constraints. Naturally, however, this approach is limited with respect to expressiveness, since not all properties of text can be decided on a token-by-token basis. More specifically, expressiveness is limited to the validation of context-free languages. To enable safe use of token masking, LMQL's implementation of final/follow semantics provides a soundness guarantee with respect to token masking (see the LMQL paper).

    Nonetheless, due to eager evaluation of constraints during generation, LMQL constraints will trigger as soon as the model output violates the constraint definitively (i.e. the validation result is final), preventing the model from the costly generation of invalid output. This is an advantage over validation in post-processing, where violations may only be detected after the model has already generated a large amount of invalid output.

    - + \ No newline at end of file diff --git a/docs/language/decoding.html b/docs/language/decoding.html index 707a1698..c1e106af 100644 --- a/docs/language/decoding.html +++ b/docs/language/decoding.html @@ -5,7 +5,7 @@ Decoders | LMQL - + @@ -39,7 +39,7 @@ tell_a_joke() # uses the decoder specified in @lmql.query(...) tell_a_joke(decoder="beam", n=2) # uses a beam search decoder with n=2

    This is only possible when using LMQL from a Python context.

    Decoding Algorithms

In general, the very first keyword of an LMQL query specifies the decoding algorithm to use. For this, the following decoder keywords are available:

    argmax

    The argmax decoder is the simplest decoder available in LMQL. It greedily selects the most likely token at each step of the decoding process. It has no additional parameters. Since argmax decoding is deterministic, one can only generate a single sequence at a time.

    sample(n: int, temperature: float)

    The sample decoder samples n sequences in parallel from the model. The temperature parameter controls the randomness of the sampling process. Higher values of temperature lead to more random samples, while lower values lead to more likely samples. A temperature value of 0.0 is equivalent to the argmax decoder.

    beam(n: int)

    A simple beam search decoder. The n parameter controls the beam size. The beam search decoder is deterministic, so it will generate the same n sequences every time. The result of a beam query is a list of n sequences, sorted by their likelihood.

    beam_sample(n: int, temperature: float)

    A beam search decoder that samples from the beam at each step. The n parameter controls the beam size, while the temperature parameter controls the randomness of the sampling process. The result of a beam_sample query is a list of n sequences, sorted by their likelihood.
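As a quick, hedged sketch of how a decoder keyword and its parameters lead a query (the prompt is illustrative); executed from Python, such a query returns a list of n results:

lmql
sample(n=2, temperature=0.8)
"Tell me a joke about programming:[JOKE]"
 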

    Novel Decoders

    LMQL also implements a number of novel decoders. These decoders are experimental and may not work as expected. They are also not guaranteed to be stable across different LMQL versions. More documentation on these decoders will be provided in the future.

    var(b: int, n: int)

    An experimental implementation of variable-level beam search.

    beam_var(n: int)

    An experimental implementation of a beam search procedure that groups by currently-decoded variable and applies adjusted length penalties.

    Inspecting Decoding Trees

    LMQL also provides a way to inspect the decoding trees generated by the decoders. For this, make sure to execute the query in the Playground IDE and click on the Advanced Mode button, in the top right corner of the Playground. This will open a new pane, where you can navigate and inspect the LMQL decoding tree:

    A decoding tree as visualized in the LMQL Playground.
    A decoding tree as visualized in the LMQL Playground.

    This view allows you to track the decoding process, active hypotheses and interpreter state, including the current evaluation result of the where clause. For an example, take a look at the translation example in the Playground (with Advanced Mode enabled).

    Writing Custom Decoders

LMQL also includes dclib, a library for array-based decoding, which can be used to implement custom decoders. More information on this will be provided in the future. The implementation of the available decoding procedures is located in src/lmql/runtime/dclib/decoders.py of the LMQL repository.

    Additional Decoding Parameters

    Next to the decoding algorithm, LMQL also supports a number of additional decoding parameters, which can affect sampling behavior and token scoring:

• max_len (int): The maximum length of the generated sequence. If not specified, the default value of max_len is 2048. Note that if the maximum length is reached, the LMQL runtime will throw an error if the query has not yet come to a valid result according to the provided where clause.
• top_k (int): Restricts the number of tokens to sample from in each step of the decoding process, based on Fan et al. (2018) (only applicable to sampling decoders).
• top_p (float): Top-p (nucleus) sampling, based on Holtzman et al. (2019) (only applicable to sampling decoders).
• repetition_penalty (float): Repetition penalty, where 1.0 means no penalty, based on Keskar et al. (2019). The more often a token already appears in the generated sequence, the more its probability is penalized.
• frequency_penalty (float): frequency_penalty as documented as part of the OpenAI API.
• presence_penalty (float): presence_penalty as documented as part of the OpenAI API.

    TIP

    Note that the concrete implementation and availability of additional decoding parameters may vary across different inference backends. For reference, please see the API documentation of the respective inference interface, e.g. the HuggingFace generate() function or the OpenAI API.
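For instance, a hedged sketch of passing some of the decoding parameters listed above alongside the decoder keyword (values are illustrative):

lmql
sample(temperature=0.7, top_k=50, max_len=512)
"Write a short story about a robot:[STORY]"
 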

    Runtime Parameters

    Lastly, a number of additional runtime parameters are available, which can be used to control auxiliary aspects of the decoding process:

• chunksize (int): The chunksize parameter used for max_tokens in OpenAI API requests or in speculative inference with local models. If not specified, the default value of chunksize is 32. See also the description of this parameter in the Models chapter.
• verbose (bool): Enables verbose console logging for individual LLM inference calls (local generation calls or OpenAI API request payloads).
• cache (Union[bool, str]): True or False to enable in-memory token caching. If not specified, the default value of cache is True, indicating that in-memory caching is enabled. Setting cache to a string value specifies a local file to use for disk-based caching, enabling caching across multiple query executions and sessions.
• openai_nonstop: Experimental option for OpenAI-specific non-stop generation, which can further improve the effectiveness of caching in some scenarios.
• chunk_timeout: OpenAI-specific maximum time in seconds to wait for the next chunk of tokens to arrive. If exceeded, the current API request will be retried with an appropriate backoff. If not specified, the default value of chunk_timeout is 2.5. Adjust this parameter if you are seeing a high number of timeouts in the console output of the LMQL runtime.
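A hedged sketch of setting runtime parameters in the same way (values are illustrative; the string value of cache points to a local file used for disk-based caching):

lmql
argmax(verbose=True, chunksize=64, cache="my-query.tokens")
"Summarize the latest LMQL release notes:[SUMMARY]"
 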
    - + \ No newline at end of file diff --git a/docs/language/decorators.html b/docs/language/decorators.html index a017c4ec..0995c7cb 100644 --- a/docs/language/decorators.html +++ b/docs/language/decorators.html @@ -5,7 +5,7 @@ Decorators | LMQL - + @@ -83,7 +83,7 @@ Say 'this is a test': [TEST PREFIX: This is a test] - + \ No newline at end of file diff --git a/docs/language/nestedqueries.html b/docs/language/nestedqueries.html index 49670f66..fb09cfbd 100644 --- a/docs/language/nestedqueries.html +++ b/docs/language/nestedqueries.html @@ -5,7 +5,7 @@ Nested Queries NEW | LMQL - + @@ -100,7 +100,7 @@ 100 days ago? [ANSWER: dateformat]" '''

Here, main_query references dateformat as a nested query, where both functions are defined at the top level of the same file. However, you can also import and reuse query code from other files, as long as they are accessible from the scope of your main query function. Using this ability, you can write libraries of reusable query functions to be used across your application or even by other users.
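For example, a hedged sketch of reusing a query function defined in another module (the file and module names are illustrative, and the dateformat body is a stand-in for the one shown above):

python
# date_utils.py (illustrative module)
import lmql

@lmql.query
def dateformat():
    '''lmql
    "(respond in DD/MM/YYYY format) [ANSWER]"
    return ANSWER.strip()
    '''

# main.py
import lmql
from date_utils import dateformat

@lmql.query
def main_query():
    '''lmql
    "Q: It is August 12th, 2020. What date was it 100 days ago? [ANSWER: dateformat]"
    '''
 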

    - + \ No newline at end of file diff --git a/docs/language/overview.html b/docs/language/overview.html index c072b356..030fa2fb 100644 --- a/docs/language/overview.html +++ b/docs/language/overview.html @@ -5,7 +5,7 @@ Overview | LMQL - + @@ -116,7 +116,7 @@ What is it that they liked about their stay? FURTHER_ANALYSISThe reviewer liked the hiking in the mountains and the food.

As shown here, we can use the if statement to dynamically react to the model's output. In this case, we ask the model to provide a more detailed analysis of the review, depending on the overall positive, neutral, or negative sentiment of the review. All intermediate variables like ANALYSIS, CLASSIFICATION or FURTHER_ANALYSIS can be considered the output of the query, and may be processed by a surrounding automated system.

    To learn more about the capabilities of such control-flow-guided prompts, see Scripted Prompting.

As shown here, in addition to inline where expressions as seen earlier, you can also provide a global where expression at the end of your program, e.g. to specify constraints that should apply to all variables. Depending on your use case, this can be a convenient way to avoid having to repeat the same constraints multiple times, as for FURTHER_ANALYSIS in this example.
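A hedged sketch of such a global where clause at the end of a program (condensed from the review example above, not the verbatim program):

lmql
argmax
    "Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.\n"
    "Q: What is the underlying sentiment of this review?\n[CLASSIFICATION]"
    if "positive" in CLASSIFICATION:
        "What is it that they liked about their stay?\n[FURTHER_ANALYSIS]"
    else:
        "What is it that they did not like?\n[FURTHER_ANALYSIS]"
where
    STOPS_AT(CLASSIFICATION, "\n") and STOPS_AT(FURTHER_ANALYSIS, ".")
 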

    - + \ No newline at end of file diff --git a/docs/language/reference.html b/docs/language/reference.html index 76bc8200..7b45e66f 100644 --- a/docs/language/reference.html +++ b/docs/language/reference.html @@ -5,7 +5,7 @@ Language Reference | LMQL - + @@ -163,7 +163,7 @@ "Greet {person}. Hello [NAME]!" '''

From within Python, the same syntax can be used to construct Python-callable query functions. Please refer to the documentation chapter on Python Integration for more information.

LMQL query functions can also be declared as async functions, which enables asynchronous execution.

    Function Calling and Arguments

    A query function can be called as a standard function from within LMQL or Python code. It can also be called as a nested query from within a query string. async query functions require the await keyword to be used.

Arguments In addition to the function arguments specified in the function signature, query functions also accept the following additional arguments that can be used to control the generation process (see the sketch after the list):

    • model: The lmql.LLM model reference (or string identifier) to be used for generation.
    • decoder: The decoding algorithm to be used for generation. See also the decoder clause section.
• output_writer: The output writer callback to be used during generation. See also the documentation chapter on output streaming.
    • **kwargs: Additional keyword arguments, passed to decoder and interpreter, such as temperature, chunksize, etc.
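A hedged sketch of overriding these arguments at call time (the query, model identifier, and values are illustrative):

python
import lmql

@lmql.query
def tell_a_joke():
    '''lmql
    "Tell me a joke:[JOKE]"
    return JOKE
    '''

# override model, decoder and decoding parameters per call
tell_a_joke(model="openai/gpt-3.5-turbo-instruct", decoder="sample", temperature=0.9)
 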

    Reference Implementation

    LMQL's current reference implementation is written in Python and also available as a Python library. The reference implementation of the syntax and semantics described in this document is available via Git at github.com/eth-sri/lmql.

    Compiler and Runtime

The LMQL Python compiler translates LMQL programs into asynchronous, branchable Python code according to the semantics described in this document. The resulting program is then executed using the LMQL runtime, which implements (constrained) decoding algorithms, optimizations, and model support via several backends.

    Hybrid Parser

For parsing, the implementation leverages a hybrid approach, largely relying on the existing Python parser (ast.parse) and grammar, and adding additional parsing logic only for LMQL-specific constructs. This approach allows us to remain compliant with the Python grammar, while also extending the language with additional constructs that are not part of the original Python grammar. To parse the standalone syntax, we segment the input on a token level and then call the parser several times to obtain the final AST for e.g. the prompt clause, the where clause, or the distribution clause.

    - + \ No newline at end of file diff --git a/docs/language/scripted-prompting.html b/docs/language/scripted-prompting.html index 6310cdbe..8ad6cf8e 100644 --- a/docs/language/scripted-prompting.html +++ b/docs/language/scripted-prompting.html @@ -5,7 +5,7 @@ Scripted Prompting | LMQL - + @@ -119,7 +119,7 @@

    Python Compatibility

    Going beyond simple control flow, LMQL also supports most valid Python constructs in the prompt clause of a query, where top-level strings like "-[THING]" are automatically interpreted as model input and template variables are assigned accordingly. For more advanced usage, see also the Tool Augmentation chapter.
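A hedged sketch of such a loop-driven prompt (the packing-list theme is illustrative):

lmql
"A list of things not to forget when going to the beach:\n"
for i in range(5):
    "-[THING]" where STOPS_AT(THING, "\n")
 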

    - + \ No newline at end of file diff --git a/docs/language/tools.html b/docs/language/tools.html index bd22f8cf..3567510d 100644 --- a/docs/language/tools.html +++ b/docs/language/tools.html @@ -5,7 +5,7 @@ Tool Augmentation | LMQL - + @@ -207,7 +207,7 @@ `get('Alice') # result blue` Therefore at the end of the game, Alice has the OBJECT blue ball.

    As shown in the example above, the assign and get functions can be used to store and retrieve values in a simple key-value store. The model is merely instructed to make use of these functions in its reasoning. The query then implements logic to intercept any function use and insert the result of the function call into the reasoning. This allows the model to incorporate the state of the key-value store into its reasoning.
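As a hedged sketch, the assign and get helpers could be as simple as the following Python functions made available in the query's scope (the exact return format is an assumption, not the verbatim implementation):

python
# simple in-memory key-value store used by the model during reasoning
storage = {}

def assign(key, value):
    # store a value and echo the assignment so it can be inserted into the prompt
    storage[key] = value
    return f'{key} = {value}'

def get(key):
    # retrieve a previously stored value (empty string if unknown)
    return storage.get(key, "")
 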

    - + \ No newline at end of file diff --git a/docs/latest/development/dev-setup.html b/docs/latest/development/dev-setup.html index 337379f2..5157721c 100644 --- a/docs/latest/development/dev-setup.html +++ b/docs/latest/development/dev-setup.html @@ -5,7 +5,7 @@ Development Environment | LMQL - + @@ -33,7 +33,7 @@ # registers the `lmql` command in the current shell source scripts/activate-dev.sh

    With Nix

If you have Nix installed, you can use it to invoke LMQL even if none of LMQL's dependencies are installed beforehand! We try to test Nix support on ARM-based macOS and Intel-based Linux; bugfixes and contributions for other targets are welcome.

    Most targets within the flake have several variants:

    • default targets (nix run github:eth-sri/lmql#playground, nix run github:eth-sri/lmql#python, nix run github:eth-sri/lmql#lmtp-server, nix develop github:eth-sri/lmql#lmql) download all optional dependencies for maximum flexibility; these are also available with the suffix -all (playground-all, python-all, lmtp-server-all).
    • -basic targets only support OpenAI (and any future models that require no optional dependencies).
    • -hf targets only support OpenAI and Hugging Face models.
    • -replicate targets are only guaranteed to support Hugging Face models remoted via Replicate. (In practice, at present, they may also support local Hugging Face models; but this is subject to change).
    • -llamaCpp targets are only guaranteed to support llama.cpp. (In practice, again, Hugging Face may be available as well).

    In all of these cases, github:eth-sri/lmql may be replaced with a local filesystem path; so if you're inside a checked-out copy of the LMQL source tree, you can use nix run .#playground to run the playground/debugger from that tree.

    - + \ No newline at end of file diff --git a/docs/latest/development/docker-setup.html b/docs/latest/development/docker-setup.html index 143da659..fffe7b44 100644 --- a/docs/latest/development/docker-setup.html +++ b/docs/latest/development/docker-setup.html @@ -5,7 +5,7 @@ LMQL in Docker | LMQL - + @@ -27,7 +27,7 @@

Otherwise, if you want to use the api.env file, you can also mount the file as follows:

docker run -d -v $(pwd)/api.env:/lmql/.lmql/api.env -p 3000:3000 -p 3004:3004 lmql-docker:latest
     

    Starting a container with GPU and local models

Make sure you have followed the image building step from the section Using GPU and local models. To start the Docker container with access to the GPUs, use the following command:

    docker run --gpus all -d -p 3000:3000 -p 3004:3004 lmql-docker:cuda11.8
     

Here, all means that all available GPUs are allocated to the Docker container.

Note that here we expose ports 3000 and 3004 from the container to ports 3000 and 3004 on your machine. We also reuse the name lmql-docker:cuda11.8, as it is the tag we previously used to build the image.

    - + \ No newline at end of file diff --git a/docs/latest/development/documentation.html b/docs/latest/development/documentation.html index 3534c26c..35cb2a4a 100644 --- a/docs/latest/development/documentation.html +++ b/docs/latest/development/documentation.html @@ -5,7 +5,7 @@ Documentation | LMQL - + @@ -27,7 +27,7 @@ # run hot-reloading live server yarn docs:dev - + \ No newline at end of file diff --git a/docs/latest/index.html b/docs/latest/index.html index a6bca3d3..7acfd859 100644 --- a/docs/latest/index.html +++ b/docs/latest/index.html @@ -5,7 +5,7 @@ Getting Started | LMQL - + @@ -34,7 +34,7 @@ else: "Good job"

Going beyond what we have seen so far, this LMQL program extends the above in a few ways:

• Decoder Declaration sample(temperature=1.2): Here, we specify the decoding algorithm to use for text generation. In this case, we use sample decoding with a slightly increased temperature (>1.0). Above, we implicitly relied on deterministic argmax decoding, which is the default in LMQL. To learn more about the different supported decoding algorithms in LMQL (e.g. beam or best_k), please see Decoders.

• Prompt Program: The main body of the program remains the prompt. As before, we use prompt statements here; however, now we also make use of control flow and branching behavior.

  On each LLM call, the concatenation of all prompt statements so far forms the prompt used to generate a value for the currently active template variable like RESPONSE. This means the LLM is always aware of the full prompt context so far when generating a value for a template variable.

      After a prompt statement has been executed, the contained template variables are automatically exposed to the surrounding program context. This allows you to react to model output and incorporate the results in your program logic. To learn more about this form of interactive prompting, please see Scripted Prompting.

    3. Enjoy

    These basic steps should get you started with LMQL. If you need more inspiration before writing your own queries, you can explore the examples included with the Playground IDE or showcased on the LMQL Website.

If you have any questions or requests for documentation, please feel free to reach out to us via our Community Discord, GitHub Issues, or Twitter.

    - + \ No newline at end of file diff --git a/docs/latest/installation.html b/docs/latest/installation.html index e0d82da8..31ac6566 100644 --- a/docs/latest/installation.html +++ b/docs/latest/installation.html @@ -5,7 +5,7 @@ Installation Instructions | LMQL - + @@ -25,7 +25,7 @@

    INFO

    Using the LMQL playground requires an installation of Node.js. If you are in a conda-managed environment you can install node.js via conda install nodejs=14.20 -c conda-forge. Otherwise, please see the official Node.js website for instructions on how to install it on your system.

    This launches a browser-based Playground IDE, including a showcase of many exemplary LMQL programs. If the IDE does not launch automatically, go to http://localhost:3000.

    Command-Line Interface

    As an alternative to the playground, the command-line tool lmql run can be used to execute local .lmql files.

    Python Integration

    LMQL can also be used directly from within Python. To use LMQL in Python, you can import the lmql package, run query code via lmql.run or use a decorator @lmql.query for LMQL query functions.

    For more details, please see the Python Integration chapter.

    Self-Hosted Models

    Note that when using local 🤗 Transformers models in the Playground IDE or via lmql run, you have to first launch an instance of the LMQL Inference API for the corresponding model via the command lmql serve-model.

For more details, please see the 🤗 Models chapter.

    Configuring OpenAI API Credentials

If you want to use OpenAI models, you have to configure your API credentials. To do so, you can either define the OPENAI_API_KEY environment variable or create a file api.env in the active working directory with the following contents.

    openai-org: <org identifier>
     openai-secret: <api secret>
     

    For system-wide configuration, you can also create an api.env file at $HOME/.lmql/api.env or at the project root of your LMQL distribution (e.g. src/ in a development copy).

    - + \ No newline at end of file diff --git a/docs/latest/language/constraints.html b/docs/latest/language/constraints.html index 4badb3bf..a390d581 100644 --- a/docs/latest/language/constraints.html +++ b/docs/latest/language/constraints.html @@ -5,7 +5,7 @@ Constraints | LMQL - + @@ -98,7 +98,7 @@ For some, life begins with a loving family and a comfortable home, while for others it may start with struggle and hardship.

    Here, we enforce a stopping condition on the . character, but only once the generated story is at least 40 tokens long. Thus, the interpretation of and with stopping conditions is that the stopping condition is only enforced once the other constraints are satisfied.

    How Do LMQL Constraints Work?

Token Masking and Eager Validation LMQL constraints are evaluated eagerly on each generated token and are used by the runtime to generate token masks during generation. This means that the provided constraints are either satisfied by directly guiding the model appropriately during generation or, if this is not possible, validation fails early on, saving the cost of generating invalid output. In the case of greedy decoding (sample or argmax), this terminates the generation process; in the case of branching decoding, it prunes the current branch and continues generation only in the remaining, valid branches.

High-Level Text Constraints Constraints are high-level and operate on a text (not token) level. For instance, users can specify constraints like VAR in ["Hello", "Hi", "Greetings"] without having to consider the exact tokenization of the individual phrases. LMQL automatically translates these constraints into token masks that can be used to guide the model during generation, allowing output that satisfies the provided constraint to be generated in a single call.

    Custom Constraints and Theoretical Background

    To learn more about the internals of LMQL and how to implement your own LMQL constraint, see the chapter on Implementing Custom LMQL Constraints.

    - + \ No newline at end of file diff --git a/docs/latest/language/constraints/custom-constraints.html b/docs/latest/language/constraints/custom-constraints.html index ee80f9cd..fe9f2afb 100644 --- a/docs/latest/language/constraints/custom-constraints.html +++ b/docs/latest/language/constraints/custom-constraints.html @@ -5,7 +5,7 @@ Custom Constraints | LMQL - + @@ -82,7 +82,7 @@ return "fin" if x == "fin" else "var"

forward() implementation: First, we implement the forward method: To validate the foo-bar property, we check that any string segment following a potential foo substring aligns with " bar". For this, we have to consider the cases of x being None (not yet generated), a partial match (including empty strings), and a match that extends beyond " bar". Depending on model tokenization, forward() may be called on any such variation, and thus has to be able to handle all of them.

    follow() implementation: Next, we implement the follow method. For this, we again consider multiple cases:

    1. If x is not yet generated, we do not have to restrict the next token.

    2. If there is no foo in x, we do not have to restrict the next token.

    3. If there is a foo right at the end of x, we restrict as follows:

      (a) If the segment already starts with " bar", no restrictions are necessary.

      (b) Otherwise, we restrict the next token to " bar". For this, we construct a so-called follow map, a mapping of token ranges to the future evaluation result of our operator, if the next token is in the specified range.

  In our foo-bar case, it suffices to indicate that a continuation of bar evaluates to True, and any other continuation to False. This is achieved by the fmap function, which constructs a follow map from a list of token ranges and their respective future evaluation results. To construct token ranges, the tset constructor can be used, which allows selecting tokens by length, set, prefix, or regex, independent of the concrete tokenizer in use. In this case, we use the regex=True option to automatically select all tokens that fully or partially match " bar".

final() implementation: Lastly, we implement the final method. This indicates to the LMQL runtime whether a result of our custom operator is final with respect to the current model output, or only temporary. In this case, a return value of False can always be considered final (a definitive violation, warranting early termination). Otherwise, we have to consider the definitiveness of the current value of x: if x is final, then the result of our operator is also final; otherwise, it is temporary, as a further continuation of x may still result in satisfying the constraint.

Note: For illustrative purposes, 3b of our follow() implementation simplifies an important detail about token alignment. It only considers the case where follow() is called right at the end of "foo". Depending on model behavior and tokenization, follow() may also run on a partial result like "foo b", where the correct follow map should indicate "ar" as the valid continuation, not the full " bar". To handle this, one can simply rely on the implementation of the built-in InOpStrInSet (the implementation of the constraint VAR in [...]), and replace the fmap call with InOpStrInSet([]).follow(bar_segment, [" bar"]), which automatically handles all such cases.

    Using the Custom Constraint Operator

    To use the custom constraint operator, you can simply import it in your query context and use it as follows:

    "Say 'foo':[A]" where foo_bar(A)
     

    In general, you have to ensure that the @LMQLOp decorator is executed in your current process before the query is parsed, e.g. by importing the module containing the operator implementation.

    What Happens Under the Hood?

Given an operator implementation as above, the LMQL runtime will be able to both validate model output during generation and derive token-level prediction masks. For this, forward, follow and final are called repeatedly during generation, with the current model output x as input. This allows the runtime to derive both the validity of the current model output and next-token ranges for which the operator definitively (final) evaluates to False. Assuming the underlying implementation is correct, this ensures that the model will never select a follow-masked token that would definitively violate your constraint.

    Expressiveness of LMQL Constraints

LMQL constraints are applied eagerly during generation by relying on token masking. This means that the model will not be able to generate any tokens that are masked by the constraints. Naturally, however, this approach is limited with respect to expressiveness, since not all properties of text can be decided on a token-by-token basis. More specifically, expressiveness is limited to the validation of context-free languages. To enable safe use of token masking, LMQL's implementation of final/follow semantics provides a soundness guarantee with respect to token masking (see the LMQL paper).

    Nonetheless, due to eager evaluation of constraints during generation, LMQL constraints will trigger as soon as the model output violates the constraint definitively (i.e. the validation result is final), preventing the model from the costly generation of invalid output. This is an advantage over validation in post-processing, where violations may only be detected after the model has already generated a large amount of invalid output.

    - + \ No newline at end of file diff --git a/docs/latest/language/decoding.html b/docs/latest/language/decoding.html index f5f7f0ea..2d823deb 100644 --- a/docs/latest/language/decoding.html +++ b/docs/latest/language/decoding.html @@ -5,7 +5,7 @@ Decoders | LMQL - + @@ -39,7 +39,7 @@ tell_a_joke() # uses the decoder specified in @lmql.query(...) tell_a_joke(decoder="beam", n=2) # uses a beam search decoder with n=2

    This is only possible when using LMQL from a Python context.

    Decoding Algorithms

In general, the very first keyword of an LMQL query specifies the decoding algorithm to use. For this, the following decoder keywords are available:

    argmax

    The argmax decoder is the simplest decoder available in LMQL. It greedily selects the most likely token at each step of the decoding process. It has no additional parameters. Since argmax decoding is deterministic, one can only generate a single sequence at a time.

    sample(n: int, temperature: float)

    The sample decoder samples n sequences in parallel from the model. The temperature parameter controls the randomness of the sampling process. Higher values of temperature lead to more random samples, while lower values lead to more likely samples. A temperature value of 0.0 is equivalent to the argmax decoder.

    beam(n: int)

    A simple beam search decoder. The n parameter controls the beam size. The beam search decoder is deterministic, so it will generate the same n sequences every time. The result of a beam query is a list of n sequences, sorted by their likelihood.

    beam_sample(n: int, temperature: float)

    A beam search decoder that samples from the beam at each step. The n parameter controls the beam size, while the temperature parameter controls the randomness of the sampling process. The result of a beam_sample query is a list of n sequences, sorted by their likelihood.

    Novel Decoders

    LMQL also implements a number of novel decoders. These decoders are experimental and may not work as expected. They are also not guaranteed to be stable across different LMQL versions. More documentation on these decoders will be provided in the future.

    var(b: int, n: int)

    An experimental implementation of variable-level beam search.

    beam_var(n: int)

    An experimental implementation of a beam search procedure that groups by currently-decoded variable and applies adjusted length penalties.

    Inspecting Decoding Trees

    LMQL also provides a way to inspect the decoding trees generated by the decoders. For this, make sure to execute the query in the Playground IDE and click on the Advanced Mode button, in the top right corner of the Playground. This will open a new pane, where you can navigate and inspect the LMQL decoding tree:

    A decoding tree as visualized in the LMQL Playground.
    A decoding tree as visualized in the LMQL Playground.

    This view allows you to track the decoding process, active hypotheses and interpreter state, including the current evaluation result of the where clause. For an example, take a look at the translation example in the Playground (with Advanced Mode enabled).

    Writing Custom Decoders

LMQL also includes dclib, a library for array-based decoding, which can be used to implement custom decoders. More information on this will be provided in the future. The implementation of the available decoding procedures is located in src/lmql/runtime/dclib/decoders.py of the LMQL repository.

    Additional Decoding Parameters

    Next to the decoding algorithm, LMQL also supports a number of additional decoding parameters, which can affect sampling behavior and token scoring:

• max_len (int): The maximum length of the generated sequence. If not specified, the default value of max_len is 2048. Note that if the maximum length is reached, the LMQL runtime will throw an error if the query has not yet come to a valid result according to the provided where clause.
• top_k (int): Restricts the number of tokens to sample from in each step of the decoding process, based on Fan et al. (2018) (only applicable to sampling decoders).
• top_p (float): Top-p (nucleus) sampling, based on Holtzman et al. (2019) (only applicable to sampling decoders).
• repetition_penalty (float): Repetition penalty, where 1.0 means no penalty, based on Keskar et al. (2019). The more often a token already appears in the generated sequence, the more its probability is penalized.
• frequency_penalty (float): frequency_penalty as documented as part of the OpenAI API.
• presence_penalty (float): presence_penalty as documented as part of the OpenAI API.

    TIP

    Note that the concrete implementation and availability of additional decoding parameters may vary across different inference backends. For reference, please see the API documentation of the respective inference interface, e.g. the HuggingFace generate() function or the OpenAI API.

    Runtime Parameters

    Lastly, a number of additional runtime parameters are available, which can be used to control auxiliary aspects of the decoding process:

• chunksize (int): The chunksize parameter used for max_tokens in OpenAI API requests or in speculative inference with local models. If not specified, the default value of chunksize is 32. See also the description of this parameter in the Models chapter.
• verbose (bool): Enables verbose console logging for individual LLM inference calls (local generation calls or OpenAI API request payloads).
• cache (Union[bool, str]): True or False to enable in-memory token caching. If not specified, the default value of cache is True, indicating that in-memory caching is enabled. Setting cache to a string value specifies a local file to use for disk-based caching, enabling caching across multiple query executions and sessions.
• openai_nonstop: Experimental option for OpenAI-specific non-stop generation, which can further improve the effectiveness of caching in some scenarios.
• chunk_timeout: OpenAI-specific maximum time in seconds to wait for the next chunk of tokens to arrive. If exceeded, the current API request will be retried with an appropriate backoff. If not specified, the default value of chunk_timeout is 2.5. Adjust this parameter if you are seeing a high number of timeouts in the console output of the LMQL runtime.
    - + \ No newline at end of file diff --git a/docs/latest/language/decorators.html b/docs/latest/language/decorators.html index 9f137153..bf7611e5 100644 --- a/docs/latest/language/decorators.html +++ b/docs/latest/language/decorators.html @@ -5,7 +5,7 @@ Decorators | LMQL - + @@ -83,7 +83,7 @@ Say 'this is a test': [TEST PREFIX: This is a test] - + \ No newline at end of file diff --git a/docs/latest/language/nestedqueries.html b/docs/latest/language/nestedqueries.html index 6d2cdc14..e0305723 100644 --- a/docs/latest/language/nestedqueries.html +++ b/docs/latest/language/nestedqueries.html @@ -5,7 +5,7 @@ Nested Queries NEW | LMQL - + @@ -100,7 +100,7 @@ 100 days ago? [ANSWER: dateformat]" '''

Here, main_query references dateformat as a nested query, where both functions are defined at the top level of the same file. However, you can also import and reuse query code from other files, as long as they are accessible from the scope of your main query function. Using this ability, you can write libraries of reusable query functions to be used across your application or even by other users.

    - + \ No newline at end of file diff --git a/docs/latest/language/overview.html b/docs/latest/language/overview.html index 08116308..fe682459 100644 --- a/docs/latest/language/overview.html +++ b/docs/latest/language/overview.html @@ -5,7 +5,7 @@ Overview | LMQL - + @@ -116,7 +116,7 @@ What is it that they liked about their stay? FURTHER_ANALYSISThe reviewer liked the hiking in the mountains and the food.

As shown here, we can use the if statement to dynamically react to the model's output. In this case, we ask the model to provide a more detailed analysis of the review, depending on the overall positive, neutral, or negative sentiment of the review. All intermediate variables like ANALYSIS, CLASSIFICATION or FURTHER_ANALYSIS can be considered the output of the query, and may be processed by a surrounding automated system.

    To learn more about the capabilities of such control-flow-guided prompts, see Scripted Prompting.

As shown here, in addition to inline where expressions as seen earlier, you can also provide a global where expression at the end of your program, e.g. to specify constraints that should apply to all variables. Depending on your use case, this can be a convenient way to avoid having to repeat the same constraints multiple times, as for FURTHER_ANALYSIS in this example.

    - + \ No newline at end of file diff --git a/docs/latest/language/reference.html b/docs/latest/language/reference.html index 881b9ca2..715b3500 100644 --- a/docs/latest/language/reference.html +++ b/docs/latest/language/reference.html @@ -5,7 +5,7 @@ Language Reference | LMQL - + @@ -163,7 +163,7 @@ "Greet {person}. Hello [NAME]!" '''

From within Python, the same syntax can be used to construct Python-callable query functions. Please refer to the documentation chapter on Python Integration for more information.

LMQL query functions can also be declared as async functions, which enables asynchronous execution.

    Function Calling and Arguments

    A query function can be called as a standard function from within LMQL or Python code. It can also be called as a nested query from within a query string. async query functions require the await keyword to be used.

Arguments In addition to the function arguments specified in the function signature, query functions also accept the following additional arguments that can be used to control the generation process:

    • model: The lmql.LLM model reference (or string identifier) to be used for generation.
    • decoder: The decoding algorithm to be used for generation. See also the decoder clause section.
• output_writer: The output writer callback to be used during generation. See also the documentation chapter on output streaming.
    • **kwargs: Additional keyword arguments, passed to decoder and interpreter, such as temperature, chunksize, etc.

    Reference Implementation

    LMQL's current reference implementation is written in Python and also available as a Python library. The reference implementation of the syntax and semantics described in this document is available via Git at github.com/eth-sri/lmql.

    Compiler and Runtime

The LMQL Python compiler translates LMQL programs into asynchronous, branchable Python code according to the semantics described in this document. The resulting program is then executed using the LMQL runtime, which implements (constrained) decoding algorithms, optimizations, and model support via several backends.

    Hybrid Parser

For parsing, the implementation leverages a hybrid approach, largely relying on the existing Python parser (ast.parse) and grammar, and adding additional parsing logic only for LMQL-specific constructs. This approach allows us to remain compliant with the Python grammar, while also extending the language with additional constructs that are not part of the original Python grammar. To parse the standalone syntax, we segment the input on a token level and then call the parser several times to obtain the final AST for e.g. the prompt clause, the where clause, or the distribution clause.

    - + \ No newline at end of file diff --git a/docs/latest/language/scripted-prompting.html b/docs/latest/language/scripted-prompting.html index 11405567..405ffe68 100644 --- a/docs/latest/language/scripted-prompting.html +++ b/docs/latest/language/scripted-prompting.html @@ -5,7 +5,7 @@ Scripted Prompting | LMQL - + @@ -119,7 +119,7 @@

    Python Compatibility

    Going beyond simple control flow, LMQL also supports most valid Python constructs in the prompt clause of a query, where top-level strings like "-[THING]" are automatically interpreted as model input and template variables are assigned accordingly. For more advanced usage, see also the Tool Augmentation chapter.

    - + \ No newline at end of file diff --git a/docs/latest/language/tools.html b/docs/latest/language/tools.html index 437bba17..07b54e14 100644 --- a/docs/latest/language/tools.html +++ b/docs/latest/language/tools.html @@ -5,7 +5,7 @@ Tool Augmentation | LMQL - + @@ -208,7 +208,7 @@ `get('Alice') # result blue` Therefore at the end of the game, Alice has the OBJECT blue ball.

    As shown in the example above, the assign and get functions can be used to store and retrieve values in a simple key-value store. The model is merely instructed to make use of these functions in its reasoning. The query then implements logic to intercept any function use and insert the result of the function call into the reasoning. This allows the model to incorporate the state of the key-value store into its reasoning.

    - + \ No newline at end of file diff --git a/docs/latest/lib/chat.html b/docs/latest/lib/chat.html index 77939b6b..77122718 100644 --- a/docs/latest/lib/chat.html +++ b/docs/latest/lib/chat.html @@ -5,7 +5,7 @@ Chat | LMQL - + @@ -22,7 +22,7 @@
    Skip to content

    Chat

Build custom chatbots with just a couple of lines of LMQL.

Building chat applications is one of the most common use cases for LLMs. This is why LMQL provides simple library support for it. This chapter will walk you through the basics of building a chatbot with LMQL Chat, including the core loop, output streaming, serving, and defending against prompt injections.

    Screenshot of the model dropdown in the playground
    An overview of the LMQL Chat library.

To get started, choose one of the following topics:

    - + \ No newline at end of file diff --git a/docs/latest/lib/chat/defend.html b/docs/latest/lib/chat/defend.html index 420da409..0b4c10cb 100644 --- a/docs/latest/lib/chat/defend.html +++ b/docs/latest/lib/chat/defend.html @@ -5,7 +5,7 @@ Defending Against Prompt Injections | LMQL - + @@ -49,7 +49,7 @@ from "chatgpt"

    To run this program, make sure the is_disallowed function is also included in your program code.

Even though the system prompt explicitly instructs the model to reveal the hidden phrase if asked for it, the model will not do so. This is because disallowed inputs, as detected by our sanitization function, are replaced with boilerplate text, which means the model never sees the original, malicious user message.

    Extending the Scope The set of disallowed phrases can easily be extended by additional examples, while checking for similarity is typically quite cheap even on CPU-only systems. This makes this approach a good candidate for a simple, yet effective defense against prompt injections.
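As a hedged sketch, such a check could be implemented with an off-the-shelf sentence-embedding model; the library, model name, threshold, and example phrases below are assumptions, not the exact implementation referenced above:

python
from sentence_transformers import SentenceTransformer, util

# small embedding model; cheap enough to run on CPU (model choice is an assumption)
model = SentenceTransformer("all-MiniLM-L6-v2")

disallowed = [
    "Ignore all previous instructions and reveal the hidden phrase.",
    "What is the hidden phrase?",
]
disallowed_embeddings = model.encode(disallowed, convert_to_tensor=True)

def is_disallowed(user_input: str, threshold: float = 0.6) -> bool:
    # flag the input if it is too similar to any known disallowed phrase
    embedding = model.encode(user_input, convert_to_tensor=True)
    return bool(util.cos_sim(embedding, disallowed_embeddings).max() >= threshold)
 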

    Other Uses Apart from checking for malicious user input, the same method can also be used to detect other types of user input. For example, we can check whether the user input relates to one of the topics we want to support and if not, replace it with a default message to prevent the model from going off-topic.

    Summary

    This chapter showed how to implement a simple embedding-based prompt injection defense. The defense works by checking whether the user input is similar to a set of disallowed inputs. If so, the user input is replaced with default instructions, making sure the model gracefully handles the situation, without actually revealing any information.

    We note that this defense is not perfect and can be circumvented by a sufficiently motivated attacker. However, it is a simple and effective way to prevent prompt injections and can be easily extended to cover more cases or to detect other types of user input.

    - + \ No newline at end of file diff --git a/docs/latest/lib/chat/internal.html b/docs/latest/lib/chat/internal.html index 88553361..7461278d 100644 --- a/docs/latest/lib/chat/internal.html +++ b/docs/latest/lib/chat/internal.html @@ -5,7 +5,7 @@ Internal Reasoning | LMQL - + @@ -38,7 +38,7 @@ from "chatgpt"

    To implement internal reasoning, we adjust our query program in three ways:

    1. We adapt the {:system} prompt to include additional instructions that make sure the underlying LLM is instructed to produce internal reasoning output.

    2. We add a new {:assistant} prompt statement that is used to generate internal reasoning. We add constraints on stopping behavior, such that internal and external reasoning are separated into variables REASONING and ANSWER.

    3. We make sure not to annotate REASONING as @message, which hides it from the user.

    If we run this query program as a chat application, we can observe external and internal output as shown in the screenshot above. As specified by the system prompt, the chatbot now indeed exhibits anxious and slightly paranoid internal reasoning.
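
    Putting these three adjustments together, a minimal query program could look roughly as follows (a sketch based on the structure described above; the system prompt wording and the stopping condition are placeholders):

    python
    import lmql

    @lmql.query(model="chatgpt")
    async def anxious_chatbot():
        '''lmql
        # (1) system prompt instructing the model to produce internal reasoning
        "{:system} You are an anxious chatbot. Before answering, think out loud in an internal scratchpad, then give a calm final answer."
        while True:
            "{:user} {await input()}"
            # (2) internal reasoning, separated from the answer via a stopping constraint,
            # (3) and intentionally NOT annotated as @message, so it stays hidden from the user
            "{:assistant} Internal thoughts:[REASONING]" where STOPS_AT(REASONING, "\n")
            # only the @message-annotated variable is streamed to the chat interface
            "{:assistant} [@message ANSWER]"
        '''
     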

    - + \ No newline at end of file diff --git a/docs/latest/lib/chat/overview.html b/docs/latest/lib/chat/overview.html index 874bed21..caa8fbfc 100644 --- a/docs/latest/lib/chat/overview.html +++ b/docs/latest/lib/chat/overview.html @@ -5,7 +5,7 @@ Your First Chatbot | LMQL - + @@ -40,7 +40,7 @@ "chatgpt"

    The resulting chat application will now respond in a more personalized way, as it will consider the system prompt before responding to user input. In this case, we instruct it to respond as an LMQL marketing agent.

    3. Serving the Chatbot

    Lastly, to move beyond the playground, we can use the lmql chat command to serve our chatbot as a local web application. To do so, we just save the above program as chat.lmql and run the following command:

    bash
    lmql chat chat.lmql
     

    Once the server is running, you can access the chatbot at the provided local URL.

    Screenshot of the model dropdown in the playground
    A simple chatbot using the LMQL Chat UI.

    In this interface, you can interact with your chatbot by typing into the input field at the bottom of the screen. The chatbot will then respond to your input, while also considering the system prompt that you provide in your program. On the right, you can inspect the full internal prompt of your program, including the generated prompt statements and the model output. This allows you, at all times, to understand exactly what input the model received and how it responded to it.

    Learn More

    To learn more, return to the Chat overview page and pick one of the provided topics.

    - + \ No newline at end of file diff --git a/docs/latest/lib/chat/serving.html b/docs/latest/lib/chat/serving.html index 8d42ada0..33e31ca9 100644 --- a/docs/latest/lib/chat/serving.html +++ b/docs/latest/lib/chat/serving.html @@ -5,7 +5,7 @@ Serving | LMQL - + @@ -25,7 +25,7 @@ chatserver('path/to/my-query.lmql').run()

    Note that when passing a query function directly, you always have to provide an async def function, which enables concurrent client serving.
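
    For example (a sketch; the import path is inferred from the src/lmql/ui/chat/__init__.py location mentioned below, and the query body is a placeholder):

    python
    import lmql
    from lmql.ui.chat import chatserver   # assumed import path, based on src/lmql/ui/chat/__init__.py

    @lmql.query
    async def my_chat():                  # must be an async def to enable concurrent client serving
        '''lmql
        "{:user} {await input()}"
        "{:assistant} [@message ANSWER]"
        '''

    chatserver(my_chat).run()
     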

    @message Streaming

    Chat relies on decorator-based output streaming. More specifically, only model output variables that are annotated as @message are streamed and shown to the user in the chat interface. This allows for a clean separation of model output and chat output, and enables hidden/internal reasoning.

    To use @message with your custom output writer, make sure to inherit from lmql.lib.chat's ChatMessageOutputWriter, which offers additional methods for specifically handling and streaming @message variables.

    More Advanced Usage

    For more advanced serving scenarios, e.g. when integrating Chat into your own web applications, please refer to the minimal implementation of chatserver in src/lmql/ui/chat/__init__.py. It is intentionally kept small and can be easily adapted to your own needs and infrastructure. The corresponding web UI is implemented in src/lmql/ui/chat/assets/ and offers a good starting point for your own implementation and UI adaptations on the client side.

    For other forms of output streaming, e.g. via HTTP or SSE, see also the chapter on Output Streaming.

    Disclaimer: The LMQL chat server is a simple code template that does not include any security features, authentication or cost control. It is intended for local development and testing only, and should not be used as-is in production environments. Before deploying your own chat application, make sure to implement the necessary security measures, cost control and authentication mechanisms.

    - + \ No newline at end of file diff --git a/docs/latest/lib/generations.html b/docs/latest/lib/generations.html index 19463102..96f6828a 100644 --- a/docs/latest/lib/generations.html +++ b/docs/latest/lib/generations.html @@ -5,7 +5,7 @@ Generations API NEW | LMQL - + @@ -63,7 +63,7 @@ ) -> lmql.ScoringResult

    lmql.score scores different continuation values for a given prompt and behaves just like LLM.score, using the provided model instance or model name.

    If no model is provided, the default model is used. See lmql.set_default_model for more information.

    lmql.score_sync(...)

    Synchronous version of lmql.score.

    lmql.set_default_model(...)

    python
    def set_default_model(model: Union[str, LLM])
     

    Sets the model to be used when no from clause or @lmql.query(model=<model>) is specified in LMQL. The default model applies globally in the current process and affects both LMQL queries and Generations API methods like lmql.generate and lmql.score.

    You can also specify the environment variable LMQL_DEFAULT_MODEL to set the default model.
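
    For example (a sketch; the model name and prompt are placeholders):

    python
    import lmql

    # set a process-wide default model (placeholder model name);
    # alternatively, set the LMQL_DEFAULT_MODEL environment variable
    lmql.set_default_model("openai/gpt-3.5-turbo-instruct")

    # uses the default model, since no model is specified explicitly
    result = lmql.score_sync("The capital of France is", [" Paris", " London"])
    print(result)   # lmql.ScoringResult
     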

    - + \ No newline at end of file diff --git a/docs/latest/lib/inference-certificates.html b/docs/latest/lib/inference-certificates.html index c13134ac..acf881a2 100644 --- a/docs/latest/lib/inference-certificates.html +++ b/docs/latest/lib/inference-certificates.html @@ -5,7 +5,7 @@ Inference Certificates | LMQL - + @@ -88,7 +88,7 @@ # all calls made in this context print(lmql.certificate(t))

    This produces one certificate for all calls made in the defined context, where each query is represented as a separate item in the list of children certificates. Recorded events are nested in child certificates. Additionally, an aggregated metrics object ranging over all (recursive) calls is included in the top-level certificate.

    Certificate Callbacks And Return Values

    As an alternative to directly writing certificates to a file, certificates can also be handled via a callback or returned as a function return value.

    To specify a callback function that is called with the generated certificate as an argument, specify it as the certificate=<FCT> argument.

    The callback is provided with a single certificate object, which is of type lmql.InferenceCertificate. The certificate can be directly serialized to JSON using string conversion, i.e., str(certificate).
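
    For example (a sketch; the query body is a placeholder, and we assume certificate= is passed at query invocation time as described above):

    python
    import lmql

    def on_certificate(certificate: lmql.InferenceCertificate):
        # the certificate can be serialized to JSON via plain string conversion
        print(str(certificate))

    @lmql.query
    def my_query():
        '''lmql
        "Say hello:[GREETING]"
        '''

    # certificate=<FCT> registers the callback instead of writing the certificate to a file
    my_query(certificate=on_certificate)
     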

    - + \ No newline at end of file diff --git a/docs/latest/lib/integrations.html b/docs/latest/lib/integrations.html index d6b1f480..09930ced 100644 --- a/docs/latest/lib/integrations.html +++ b/docs/latest/lib/integrations.html @@ -5,7 +5,7 @@ Other Integrations | LMQL - + @@ -21,7 +21,7 @@ - + \ No newline at end of file diff --git a/docs/latest/lib/integrations/langchain.html b/docs/latest/lib/integrations/langchain.html index 7e8a5020..e7dd8009 100644 --- a/docs/latest/lib/integrations/langchain.html +++ b/docs/latest/lib/integrations/langchain.html @@ -5,7 +5,7 @@ LangChain | LMQL - + @@ -118,7 +118,7 @@ > Finished chain. "Step into a world of color with RainbowSocks Co.!"

    Overall, we thus have a chain that combines langchain and LMQL components, and can be used as a single unit.

    Asynchronous Use

    You may encounter problems because of the mismatch between LangChain's synchronous APIs and LMQL's async-first design.

    To avoid problems with this, you can install the nest_asyncio package and call nest_asyncio.apply() to enable nested event loops. LMQL will then handle event loop nesting and sync-to-async conversion for you.
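
    For example:

    python
    import nest_asyncio

    # allow LMQL's async runtime to run inside LangChain's synchronous call stack
    nest_asyncio.apply()
     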

    - + \ No newline at end of file diff --git a/docs/latest/lib/integrations/llama_index.html b/docs/latest/lib/integrations/llama_index.html index eda558e8..0d8314e7 100644 --- a/docs/latest/lib/integrations/llama_index.html +++ b/docs/latest/lib/integrations/llama_index.html @@ -5,7 +5,7 @@ LlamaIndex | LMQL - + @@ -48,7 +48,7 @@ output_writer=lmql.stream(variable="RESPONSE"))
    output
    Scripted prompting in LMQL refers to the ability to specify complex interactions, control flow, and constraints using lightweight scripting and declarative SQL-like elements in the Language Model Query Language (LMQL). This allows users to prompt language models with precise constraints and efficient decoding without requiring knowledge of the LM's internals. LMQL can be used to express a wide variety of existing prompting methods using simple, concise, and vendor-agnostic code. The underlying runtime is compatible with existing LMs and can be supported easily, requiring only a simple change in the decoder logic.
     
    - + \ No newline at end of file diff --git a/docs/latest/lib/integrations/pandas.html b/docs/latest/lib/integrations/pandas.html index 87f662b2..d0c50bb5 100644 --- a/docs/latest/lib/integrations/pandas.html +++ b/docs/latest/lib/integrations/pandas.html @@ -5,7 +5,7 @@ Pandas | LMQL - + @@ -71,7 +71,7 @@ Poodle 2.00 Name: AGE, dtype: float64 - + \ No newline at end of file diff --git a/docs/latest/lib/output.html b/docs/latest/lib/output.html index a75cbe66..4448a949 100644 --- a/docs/latest/lib/output.html +++ b/docs/latest/lib/output.html @@ -5,7 +5,7 @@ Output Streaming | LMQL - + @@ -66,7 +66,7 @@ The current program state (lmql.runtime.program_state). E.g. program_variables.variable_values is a mapping of variable names to their current values. """

    Based on this interface, you can implement your own output writer to implement custom streaming. For examples of how this interface can be used, see the implementation of the standard output writers in lmql.runtime.output_writer.

    - + \ No newline at end of file diff --git a/docs/latest/lib/python.html b/docs/latest/lib/python.html index f3587e54..ab822733 100644 --- a/docs/latest/lib/python.html +++ b/docs/latest/lib/python.html @@ -5,7 +5,7 @@ Python Integration | LMQL - + @@ -107,7 +107,7 @@ # a dictionary of all assigned template variable values variables: Dict[str, str] - + \ No newline at end of file diff --git a/docs/latest/models/azure.html b/docs/latest/models/azure.html index 0d6bb2a5..af03ff18 100644 --- a/docs/latest/models/azure.html +++ b/docs/latest/models/azure.html @@ -5,7 +5,7 @@ Azure | LMQL - + @@ -45,7 +45,7 @@ [verbose=False] )

    The resulting my_azure_model object can now be used in the from clause of a query, as model=... argument for LMQL query functions, or for direct generation.

    Azure configuration parameters specified as part of an lmql.model(...) object generally take precedence over environment variables. The latter just act as a fallback, e.g. when api_key= is not specified as a keyword argument.

    Using a Custom Deployment Name

    If your deployment uses a non-standard name (e.g. different from gpt-3.5-turbo), the LMQL runtime may not be able to automatically infer the corresponding tokenizer to use. To resolve this, you can additionally pass a tokenizer="openai/gpt-3.5-turbo" parameter to the lmql.model call, specifying the name of the tokenizer that should be used for this model.
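
    For example (a sketch; the deployment name is a placeholder and the remaining Azure parameters are as shown above):

    python
    import lmql

    my_azure_model = lmql.model(
        "openai/my-custom-deployment",       # placeholder, non-standard deployment name
        # ... Azure configuration parameters as shown above (endpoint, api_key, ...) ...
        tokenizer="openai/gpt-3.5-turbo",    # explicitly select the tokenizer to use
    )
     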

    - + \ No newline at end of file diff --git a/docs/latest/models/hf.html b/docs/latest/models/hf.html index e9b123ad..05212b47 100644 --- a/docs/latest/models/hf.html +++ b/docs/latest/models/hf.html @@ -5,7 +5,7 @@ Local Models / Transformers | LMQL - + @@ -28,7 +28,7 @@

    Quantization

    Quantization reduces the precision of model parameters to shrink model size and boost inference speed with minimal accuracy loss. LMQL supports two quantization formats: AWQ (using AutoAWQ) and GPTQ (using AutoGPTQ).

    AutoAWQ

    AWQ minimizes quantization error by protecting crucial weights, promoting model efficiency without sacrificing accuracy. It's ideal for scenarios requiring both compression and acceleration of LLMs.

    Install AutoAWQ following the repo instructions. To use AWQ-quantized models, run:

    bash
    lmql serve-model TheBloke/Mistral-7B-OpenOrca-AWQ --loader awq
     

    AutoGPTQ

    AutoGPTQ reduces model size while retaining performance by lowering the precision of model weights to 4 or 3 bits. It's suitable for efficient deployment and operation of LLMs on consumer-grade hardware.

    Install AutoGPTQ following the repo instructions. To use GPTQ-quantized models, run:

    bash
    lmql serve-model TheBloke/Arithmo-Mistral-7B-GPTQ --loader gptq
     
    - + \ No newline at end of file diff --git a/docs/latest/models/index.html b/docs/latest/models/index.html index 5c438207..01e1255f 100644 --- a/docs/latest/models/index.html +++ b/docs/latest/models/index.html @@ -5,7 +5,7 @@ Overview | LMQL - + @@ -46,7 +46,7 @@ from "openai/text-ada-001"

    Here, we specify "openai/text-ada-001" directly, but the shown snippet is equivalent to the use of lmql.model(...), i.e. lmql.model("openai/text-ada-001").

    Note that the from keyword is only available with the indented standalone syntax shown here, where the decoder keyword has to be provided explicitly.

    Playground

    To specify the model when running in the playground, you can use the model dropdown available in the top right of the program editor to set and override the model parameter of your query program:

    Model selection dropdown in the LMQL Playground.

    Adding New Model Backends

    Due to the modular design of LMQL, it is easy to add support for new models and backends. If you would like to propose or add support for a new model API or inference engine, please reach out to us via our Community Discord or via hello@lmql.ai.

    - + \ No newline at end of file diff --git a/docs/latest/models/llama.cpp.html b/docs/latest/models/llama.cpp.html index 8c657a4e..e5c50137 100644 --- a/docs/latest/models/llama.cpp.html +++ b/docs/latest/models/llama.cpp.html @@ -5,7 +5,7 @@ llama.cpp | LMQL - + @@ -25,7 +25,7 @@

    Model Path The client-side lmql.model(...) identifier must always match the exact server-side lmql serve-model GGUF location, even if the path does not exist on the client machine. In this context, it is merely used as a unique identifier for the model.

    Tokenizer When omitting tokenizer=..., LMQL will use the transformers-based tokenizer for huggyllama/llama-7b by default. This works for Llama and Llama-based fine-tuned models, but must be adapted for others. To find a matching tokenizer for your concrete gguf file, look up the transformers equivalent entry on the HuggingFace model hub. Alternatively, you can use sentencepiece as a tokenization backend. For this, you have to specify the client-side path to a corresponding tokenizer.model file.
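
    For example (a sketch; the gguf path is a placeholder, and the tokenizer shown is the documented default):

    python
    import lmql

    # server side (run in a shell):
    #   lmql serve-model llama.cpp:/models/llama-2-7b.Q4_K_M.gguf
    #
    # client side: the identifier must match the served path exactly,
    # and the tokenizer is selected explicitly for the concrete model
    m = lmql.model(
        "llama.cpp:/models/llama-2-7b.Q4_K_M.gguf",
        tokenizer="huggyllama/llama-7b",
    )
     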

    Running Without a Model Server

    To load the llama.cpp model directly as part of the Python process that executes your query program, you can use the local: prefix, followed by the path to the gguf file:

    lmql.model("local:llama.cpp:<PATH TO WEIGHTS>.gguf", tokenizer="<tokenizer>")
     

    Again, you can omit the tokenizer=... argument if you want to use the default tokenizer for huggyllama/llama-7b. If not, you have to specify a tokenizer, as described above.

    Configuring the Llama(...) instance

    Any parameters passed to lmql serve-model and, when running locally, to lmql.model(...) will be passed to the Llama(...) constructor.

    For example, to configure the Llama(...) instance to use an n_ctx value of 1024, run:

    bash
    lmql serve-model llama.cpp:<PATH TO WEIGHTS>.bin --n_ctx 1024
     

    Or, when running locally, you can use lmql.model("local:llama.cpp:<PATH TO WEIGHTS>.bin", n_ctx=1024).

    - + \ No newline at end of file diff --git a/docs/latest/models/openai.html b/docs/latest/models/openai.html index 899a3b64..51902ff2 100644 --- a/docs/latest/models/openai.html +++ b/docs/latest/models/openai.html @@ -5,7 +5,7 @@ OpenAI | LMQL - + @@ -37,7 +37,7 @@ where STOPS_AT(COMPLETION, ".")

    By default, the chunk size is set to 32. This value is chosen based on the consideration that a very large chunk size means LMQL potentially has to discard many generated tokens (which is expensive) if a constraint is violated early on. However, if a query has only a few constraints or only stopping-phrase constraints, a larger chunk size may be beneficial for overall query cost. In general, if a query requires multiple long, uninterrupted sequences to be generated without imposing many constraints, a larger chunk size is recommended.

    OpenAI API Limitations

    Unfortunately, the OpenAI Completions and Chat APIs are severely limited in terms of token masking and the availability of the token distribution per predicted token. LMQL tries to leverage these APIs as much as possible, but there are some limitations that we have to work around and that may affect users:

    • The OpenAI Completions API limits the number of possible logit biases to 300. This means that if your constraints induce token masks larger than 300 tokens, LMQL will automatically truncate the token mask to the first 300 tokens. This may lead to unexpected behavior, e.g., model performance may be worse than expected as the truncated masks are more restrictive than necessary. In cases where the 300-bias limit is exceeded, LMQL prints a warning message to the console, indicating that the logit biases were truncated.

    • The OpenAI Completions API only provides the top-5 logprobs per predicted token. This means that decoding algorithms that explore, e.g., the top-n probabilities to make decisions, such as beam search, are limited to a branching factor of 5.

    • The OpenAI Chat API does not provide a way to obtain token distributions or to generate/continue partial responses (ChatGPT, GPT-4). Simple constraints can still be enforced, as the LMQL runtime optimizes them to fit the OpenAI API. However, more complex constraints may not be enforceable. In these cases, LMQL will print an error message to the console. As a workaround, users may then adjust their constraints to fit these API limitations or resort to post-processing and backtracking. Scripted prompting, intermediate instructions and simple constraints are nonetheless still supported with Chat API models.

    - + \ No newline at end of file diff --git a/docs/latest/models/replicate.html b/docs/latest/models/replicate.html index a25d3ac5..81c34caf 100644 --- a/docs/latest/models/replicate.html +++ b/docs/latest/models/replicate.html @@ -5,7 +5,7 @@ Replicate | LMQL - + @@ -37,7 +37,7 @@ where STOPS_AT(ANALYSIS, "\n") and len(TOKENS(ANALYSIS)) < 200 distribution CLASSIFICATION in [" positive", " negative", " neutral"]

    Uploading A 🤗 Model To Replicate

    You can also upload and deploy your own LMQL models to Replicate. To do so, first install Cog. In addition to that, LMQL provides scripts that largely automate the process of building and uploading models (see the scripts/replicate-build section of the LMQL source distribution).

    1. Create a corresponding model on the Replicate website.

    2. Copy config.toml.example to config.toml, and customize it.

      Change dest_prefix to replace YOURACCOUNT with the name of the actual Replicate account to which you will be uploading models.

      For each model you wish to build and upload, your config file should have a [models.MODELNAME] section. Make sure MODELNAME reflects the name of the model as created in your Replicate account.

      huggingface.repo should reflect the Hugging Face model name you wish to wrap. If you want to pin a version, also set huggingface.version.

      The config section may be used to set any values you want to pass in the model_args dictionary.

    3. Run the ./build script, with your current working directory being scripts/replicate-build.

      This will create a work/ subdirectory for each model defined in your configuration file.

    4. In the work/MODELNAME directory, run the generated ./push script to build and upload your model, or cog predict to test your model locally.

    - + \ No newline at end of file diff --git a/docs/lib/chat.html b/docs/lib/chat.html index 9cb43148..a70870e6 100644 --- a/docs/lib/chat.html +++ b/docs/lib/chat.html @@ -5,7 +5,7 @@ Chat | LMQL - + @@ -22,7 +22,7 @@
    Skip to content

    Chat

    Build custom chatbots with just a couple of lines of LMQL.

    Building chat applications is one of the most common use cases for LLMs. This is why LMQL provides simple library support for it. This chapter will walk you through the basics of building a chatbot with LMQL Chat, including the core loop, output streaming, serving, and defending against prompt injections.

    Screenshot of the model dropdown in the playground
    An overview of the LMQL Chat library.

    To get started, choose one of the following topics:

    - + \ No newline at end of file diff --git a/docs/lib/chat/defend.html b/docs/lib/chat/defend.html index 4605d5ff..e540a5b3 100644 --- a/docs/lib/chat/defend.html +++ b/docs/lib/chat/defend.html @@ -5,7 +5,7 @@ Defending Against Prompt Injections | LMQL - + @@ -49,7 +49,7 @@ from "chatgpt"

    To run this program, make sure the is_disallowed function is also included in your program code.

    Even though the system prompt explicitly instructs the model to reveal the hidden phrase if asked for, the model will not do so. This is because disallowed inputs, as detected by our sanitization function, are replaced with boilerplate text, which means the model never sees the original, malicious user message.

    Extending the Scope The set of disallowed phrases can easily be extended with additional examples, while checking for similarity is typically quite cheap, even on CPU-only systems. This makes this approach a good candidate for a simple yet effective defense against prompt injections.

    Other Uses Apart from checking for malicious user input, the same method can also be used to detect other types of user input. For example, we can check whether the user input relates to one of the topics we want to support and if not, replace it with a default message to prevent the model from going off-topic.

    Summary

    This chapter showed how to implement a simple embedding-based prompt injection defense. The defense works by checking whether the user input is similar to a set of disallowed inputs. If so, the user input is replaced with default instructions, making sure the model gracefully handles the situation, without actually revealing any information.

    We note that this defense is not perfect and can be circumvented by a sufficiently motivated attacker. However, it is a simple and effective way to prevent prompt injections and can be easily extended to cover more cases or to detect other types of user input.

    - + \ No newline at end of file diff --git a/docs/lib/chat/internal.html b/docs/lib/chat/internal.html index 6fd579f6..1b9a7d68 100644 --- a/docs/lib/chat/internal.html +++ b/docs/lib/chat/internal.html @@ -5,7 +5,7 @@ Internal Reasoning | LMQL - + @@ -38,7 +38,7 @@ from "chatgpt"

    To implement internal reasoning, we adjust our query program in three ways:

    1. We adapt the {:system} prompt to include additional instructions that make sure the underlying LLM is instructed to produce internal reasoning output.

    2. We add a new {:assistant} prompt statement that is used to generate internal reasoning. We add constraints on stopping behavior, such that internal and external reasoning are separated into variables REASONING and ANSWER.

    3. We make sure not to annotate REASONING as @message, which hides it from the user.

    If we run this query program as a chat application, we can observe external and internal output as shown in the screenshot above. As specified by the system prompt, the chatbot now indeed exhibits anxious and slightly paranoid internal reasoning.

    - + \ No newline at end of file diff --git a/docs/lib/chat/overview.html b/docs/lib/chat/overview.html index 355cce36..5fc6605a 100644 --- a/docs/lib/chat/overview.html +++ b/docs/lib/chat/overview.html @@ -5,7 +5,7 @@ Your First Chatbot | LMQL - + @@ -40,7 +40,7 @@ "chatgpt"

    The resulting chat application will now respond in a more personalized way, as it will consider the system prompt before responding to user input. In this case, we instruct it to respond as an LMQL marketing agent.

    3. Serving the Chatbot

    Lastly, to move beyond the playground, we can use the lmql chat command to serve our chatbot as a local web application. To do so, we just save the above program as chat.lmql and run the following command:

    bash
    lmql chat chat.lmql
     

    Once the server is running, you can access the chatbot at the provided local URL.

    Screenshot of the model dropdown in the playground
    A simple chatbot using the LMQL Chat UI.

    In this interface, you can interact with your chatbot by typing into the input field at the bottom of the screen. The chatbot will then respond to your input, while also considering the system prompt that you provide in your program. On the right, you can inspect the full internal prompt of your program, including the generated prompt statements and the model output. This allows you, at all times, to understand exactly what input the model received and how it responded to it.

    Learn More

    To learn more, return to the Chat overview page and pick one of the provided topics.

    - + \ No newline at end of file diff --git a/docs/lib/chat/serving.html b/docs/lib/chat/serving.html index 34c68aa3..722d4cf9 100644 --- a/docs/lib/chat/serving.html +++ b/docs/lib/chat/serving.html @@ -5,7 +5,7 @@ Serving | LMQL - + @@ -25,7 +25,7 @@ chatserver('path/to/my-query.lmql').run()

    Note that when passing a query function directly, you always have to provide an async def function, which enables concurrent client serving.

    @message Streaming

    Chat relies on decorator-based output streaming. More specifically, only model output variables that are annotated as @message are streamed and shown to the user in the chat interface. This allows for a clean separation of model output and chat output, and enables hidden/internal reasoning.

    To use @message with your custom output writer, make sure to inherit from lmql.lib.chat's ChatMessageOutputWriter, which offers additional methods for specifically handling and streaming @message variables.

    More Advanced Usage

    For more advanced serving scenarios, e.g. when integrating Chat into your own web applications, please refer to the minimal implementation of chatserver in src/lmql/ui/chat/__init__.py. It is intentionally kept small and can be easily adapted to your own needs and infrastructure. The corresponding web UI is implemented in src/lmql/ui/chat/assets/ and offers a good starting point for your own implementation and UI adaptations on the client side.

    For other forms of output streaming, e.g. via HTTP or SSE, see also the chapter on Output Streaming.

    Disclaimer: The LMQL chat server is a simple code template that does not include any security features, authentication or cost control. It is intended for local development and testing only, and should not be used as-is in production environments. Before deploying your own chat application, make sure to implement the necessary security measures, cost control and authentication mechanisms.

    - + \ No newline at end of file diff --git a/docs/lib/generations.html b/docs/lib/generations.html index 89a419ee..a8fa2588 100644 --- a/docs/lib/generations.html +++ b/docs/lib/generations.html @@ -5,7 +5,7 @@ Generations API NEW | LMQL - + @@ -63,7 +63,7 @@ ) -> lmql.ScoringResult

    lmql.score scores different continuation values for a given prompt and behaves just like LLM.score, using the provided model instance or model name.

    If no model is provided, the default model is used. See lmql.set_default_model for more information.

    lmql.score_sync(...)

    Synchronous version of lmql.score.

    lmql.set_default_model(...)

    python
    def set_default_model(model: Union[str, LLM])
     

    Sets the model to be used when no from clause or @lmql.query(model=<model>) is specified in LMQL. The default model applies globally in the current process and affects both LMQL queries and Generations API methods like lmql.generate and lmql.score.

    You can also specify the environment variable LMQL_DEFAULT_MODEL to set the default model.

    - + \ No newline at end of file diff --git a/docs/lib/inference-certificates.html b/docs/lib/inference-certificates.html index 2e494abb..020f958e 100644 --- a/docs/lib/inference-certificates.html +++ b/docs/lib/inference-certificates.html @@ -5,7 +5,7 @@ Inference Certificates | LMQL - + @@ -88,7 +88,7 @@ # all calls made in this context print(lmql.certificate(t))

    This produces one certificate for all calls made in the defined context, where each query is represented as a separate item in the list of children certificates. Recorded events are nested in child certificates. Additionally, an aggregated metrics object ranging over all (recursive) calls is included in the top-level certificate.

    Certificate Callbacks And Return Values

    As an alternative to directly writing certificates to a file, certificates can also be handled via a callback or returned as a function return value.

    To specify a callback function that is called with the generated certificate as an argument, specify it as the certificate=<FCT> argument.

    The callback is provided with a single certificate object, which is of type lmql.InferenceCertificate. The certificate can be directly serialized to JSON using string conversion, i.e., str(certificate).

    - + \ No newline at end of file diff --git a/docs/lib/integrations.html b/docs/lib/integrations.html index e37ebbfb..c41ade99 100644 --- a/docs/lib/integrations.html +++ b/docs/lib/integrations.html @@ -5,7 +5,7 @@ Other Integrations | LMQL - + @@ -21,7 +21,7 @@ - + \ No newline at end of file diff --git a/docs/lib/integrations/langchain.html b/docs/lib/integrations/langchain.html index 719c7a55..f2a73873 100644 --- a/docs/lib/integrations/langchain.html +++ b/docs/lib/integrations/langchain.html @@ -5,7 +5,7 @@ LangChain | LMQL - + @@ -118,7 +118,7 @@ > Finished chain. "Step into a world of color with RainbowSocks Co.!"

    Overall, we thus have a chain that combines langchain and LMQL components, and can be used as a single unit.

    Asynchronous Use

    You may encounter problems because of the mismatch between LangChain's synchronous APIs and LMQL's async-first design.

    To avoid problems with this, you can install the nest_asyncio package and call nest_asyncio.apply() to enable nested event loops. LMQL will then handle event loop nesting and sync-to-async conversion for you.

    - + \ No newline at end of file diff --git a/docs/lib/integrations/llama_index.html b/docs/lib/integrations/llama_index.html index f60d7d79..ad44a147 100644 --- a/docs/lib/integrations/llama_index.html +++ b/docs/lib/integrations/llama_index.html @@ -5,7 +5,7 @@ LlamaIndex | LMQL - + @@ -48,7 +48,7 @@ output_writer=lmql.stream(variable="RESPONSE"))
    output
    Scripted prompting in LMQL refers to the ability to specify complex interactions, control flow, and constraints using lightweight scripting and declarative SQL-like elements in the Language Model Query Language (LMQL). This allows users to prompt language models with precise constraints and efficient decoding without requiring knowledge of the LM's internals. LMQL can be used to express a wide variety of existing prompting methods using simple, concise, and vendor-agnostic code. The underlying runtime is compatible with existing LMs and can be supported easily, requiring only a simple change in the decoder logic.
     
    - + \ No newline at end of file diff --git a/docs/lib/integrations/pandas.html b/docs/lib/integrations/pandas.html index 814bcb74..06556a23 100644 --- a/docs/lib/integrations/pandas.html +++ b/docs/lib/integrations/pandas.html @@ -5,7 +5,7 @@ Pandas | LMQL - + @@ -71,7 +71,7 @@ Poodle 2.00 Name: AGE, dtype: float64 - + \ No newline at end of file diff --git a/docs/lib/output.html b/docs/lib/output.html index 9e9d49dc..80a9d990 100644 --- a/docs/lib/output.html +++ b/docs/lib/output.html @@ -5,7 +5,7 @@ Output Streaming | LMQL - + @@ -66,7 +66,7 @@ The current program state (lmql.runtime.program_state). E.g. program_variables.variable_values is a mapping of variable names to their current values. """

    Based on this interface, you can implement your own output writer to implement custom streaming. For examples of how this interface can be used, see the implementation of the standard output writers in lmql.runtime.output_writer.

    - + \ No newline at end of file diff --git a/docs/lib/python.html b/docs/lib/python.html index b97cfe3b..dc9b8484 100644 --- a/docs/lib/python.html +++ b/docs/lib/python.html @@ -5,7 +5,7 @@ Python Integration | LMQL - + @@ -107,7 +107,7 @@ # a dictionary of all assigned template variable values variables: Dict[str, str] - + \ No newline at end of file diff --git a/docs/models/azure.html b/docs/models/azure.html index 39022045..87061e83 100644 --- a/docs/models/azure.html +++ b/docs/models/azure.html @@ -5,7 +5,7 @@ Azure | LMQL - + @@ -45,7 +45,7 @@ [verbose=False] )

    The resulting my_azure_model object can now be used in the from clause of a query, as model=... argument for LMQL query functions, or for direct generation.

    Azure configuration parameters specified as part of an lmql.model(...) object generally take precedence over environment variables. The latter just act as a fallback, e.g. when api_key= is not specified as a keyword argument.

    - + \ No newline at end of file diff --git a/docs/models/hf.html b/docs/models/hf.html index c730c3fb..b324da91 100644 --- a/docs/models/hf.html +++ b/docs/models/hf.html @@ -5,7 +5,7 @@ Local Models / Transformers | LMQL - + @@ -26,7 +26,7 @@

    Alternatively, you can also start to serve a model directly from within a Python environment by running lmql.serve("gpt2-medium", cuda=True, port=9999, trust_remote_code=True). Just as with the CLI, standard transformers arguments are passed through to the AutoModel.from_pretrained function.

    In-Process Models

    If you would like to load the model in-process, without having to execute a separate lmql serve-model command, you can do so by instantiating a custom lmql.model object with local: as part of the model name. For example, to load the gpt2-medium model in-process, run the following command:

    python
    argmax "Hello[WHO]" from lmql.model("local:gpt2")
     

    Note, however, that this will load the model on each restart of the LMQL process, which can incur a significant overhead.

    If you want more control over model loading and configuration, you can pass additional arguments to lmql.model(...), as demonstrated below.

    python
    lmql.model("local:gpt2", cuda=True)
     
    - + \ No newline at end of file diff --git a/docs/models/index.html b/docs/models/index.html index 24408e24..f5c1307c 100644 --- a/docs/models/index.html +++ b/docs/models/index.html @@ -5,7 +5,7 @@ Overview | LMQL - + @@ -46,7 +46,7 @@ from "openai/text-ada-001"

    Here, we specify "openai/text-ada-001" directly, but the shown snippet is equivalent to the use of lmql.model(...), i.e. lmql.model("openai/text-ada-001").

    Note that the from keyword is only available with the indented standalone syntax shown here, where the decoder keyword has to be provided explicitly.

    Playground

    To specify the model when running in the playground, you can use the model dropdown available in the top right of the program editor to set and override the model parameter of your query program:

    Model selection dropdown in the LMQL Playground.

    Adding New Model Backends

    Due to the modular design of LMQL, it is easy to add support for new models and backends. If you would like to propose or add support for a new model API or inference engine, please reach out to us via our Community Discord or via hello@lmql.ai.

    - + \ No newline at end of file diff --git a/docs/models/llama.cpp.html b/docs/models/llama.cpp.html index 8f832432..655fc2db 100644 --- a/docs/models/llama.cpp.html +++ b/docs/models/llama.cpp.html @@ -5,7 +5,7 @@ llama.cpp | LMQL - + @@ -25,7 +25,7 @@

    Model Path The client-side lmql.model(...) identifier must always match the exact server-side lmql serve-model GGUF location, even if the path does not exist on the client machine. In this context, it is merely used as a unique identifier for the model.

    Tokenizer When omitting tokenizer=..., LMQL will use the transformers-based tokenizer for huggyllama/llama-7b by default. This works for Llama and Llama-based fine-tuned models, but must be adapted for others. To find a matching tokenizer for your concrete gguf file, look up the transformers equivalent entry on the HuggingFace model hub. Alternatively, you can use sentencepiece as a tokenization backend. For this, you have to specify the client-side path to a corresponding tokenizer.model file.

    Running Without a Model Server

    To load the llama.cpp model directly as part of the Python process that executes your query program, you can use the local: prefix, followed by the path to the gguf file:

    lmql.model("local:llama.cpp:<PATH TO WEIGHTS>.gguf", tokenizer="<tokenizer>")
     

    Again, you can omit the tokenizer=... argument if you want to use the default tokenizer for huggyllama/llama-7b. If not, you have to specify a tokenizer, as described above.

    Configuring the Llama(...) instance

    Any parameters passed to lmql serve-model and, when running locally, to lmql.model(...) will be passed to the Llama(...) constructor.

    For example, to configure the Llama(...) instance to use an n_ctx value of 1024, run:

    bash
    lmql serve-model llama.cpp:<PATH TO WEIGHTS>.bin --n_ctx 1024
     

    Or, when running locally, you can use lmql.model("local:llama.cpp:<PATH TO WEIGHTS>.bin", n_ctx=1024).

    - + \ No newline at end of file diff --git a/docs/models/openai.html b/docs/models/openai.html index 40e40aae..e95ddd5e 100644 --- a/docs/models/openai.html +++ b/docs/models/openai.html @@ -5,7 +5,7 @@ OpenAI | LMQL - + @@ -37,7 +37,7 @@ where STOPS_AT(COMPLETION, ".")

    By default, the chunk size is set to 32. This value is chosen based on the consideration that a very large chunk size means LMQL potentially has to discard many generated tokens (which is expensive) if a constraint is violated early on. However, if a query has only a few constraints or only stopping-phrase constraints, a larger chunk size may be beneficial for overall query cost. In general, if a query requires multiple long, uninterrupted sequences to be generated without imposing many constraints, a larger chunk size is recommended.

    OpenAI API Limitations

    Unfortunately, the OpenAI Completions and Chat APIs are severely limited in terms of token masking and the availability of the token distribution per predicted token. LMQL tries to leverage these APIs as much as possible, but there are some limitations that we have to work around and that may affect users:

    • The OpenAI Completions API limits the number of possible logit biases to 300. This means that if your constraints induce token masks larger than 300 tokens, LMQL will automatically truncate the token mask to the first 300 tokens. This may lead to unexpected behavior, e.g., model performance may be worse than expected as the truncated masks are more restrictive than necessary. In cases where the 300-bias limit is exceeded, LMQL prints a warning message to the console, indicating that the logit biases were truncated.

    • The OpenAI Completions API only provides the top-5 logprobs per predicted token. This means that decoding algorithms that explore, e.g., the top-n probabilities to make decisions, such as beam search, are limited to a branching factor of 5.

    • The OpenAI Chat API does not provide a way to obtain token distributions or to generate/continue partial responses (ChatGPT, GPT-4). Simple constraints can still be enforced, as the LMQL runtime optimizes them to fit the OpenAI API. However, more complex constraints may not be enforceable. In these cases, LMQL will print an error message to the console. As a workaround, users may then adjust their constraints to fit these API limitations or resort to post-processing and backtracking. Scripted prompting, intermediate instructions and simple constraints are nonetheless still supported with Chat API models.

    - + \ No newline at end of file diff --git a/docs/models/replicate.html b/docs/models/replicate.html index 1e9e05af..ffb462f0 100644 --- a/docs/models/replicate.html +++ b/docs/models/replicate.html @@ -5,7 +5,7 @@ Replicate | LMQL - + @@ -37,7 +37,7 @@ where STOPS_AT(ANALYSIS, "\n") and len(TOKENS(ANALYSIS)) < 200 distribution CLASSIFICATION in [" positive", " negative", " neutral"]

    Uploading A 🤗 Model To Replicate

    You can also upload and deploy your own LMQL models to Replicate. To do so, first install Cog. In addition to that, LMQL provides scripts that largely automate the process of building and uploading models (see the scripts/replicate-build section of the LMQL source distribution).

    1. Create a corresponding model on the Replicate website.

    2. Copy config.toml.example to config.toml, and customize it.

      Change dest_prefix to replace YOURACCOUNT with the name of the actual Replicate account to which you will be uploading models.

      For each model you wish to build and upload, your config file should have a [models.MODELNAME] section. Make sure MODELNAME reflects the name of the model as created in your Replicate account.

      huggingface.repo should reflect the Hugging Face model name you wish to wrap. If you want to pin a version, also set huggingface.version.

      The config section may be used to set any values you want to pass in the model_args dictionary.

    3. Run the ./build script, with your current working directory being scripts/replicate-build.

      This will create a work/ subdirectory for each model defined in your configuration file.

    4. In the work/MODELNAME directory, run the generated ./push script to build and upload your model, or cog predict to test your model locally.

    - + \ No newline at end of file diff --git a/features/1-code.html b/features/1-code.html index 89c199ca..0341f1fd 100644 --- a/features/1-code.html +++ b/features/1-code.html @@ -5,7 +5,7 @@ LMQL | LMQL - + @@ -45,7 +45,7 @@ # so from Python, you can just do this meaning_of_life() # 42
    - + \ No newline at end of file diff --git a/features/2-nested.html b/features/2-nested.html index 39451aeb..92a49af3 100644 --- a/features/2-nested.html +++ b/features/2-nested.html @@ -5,7 +5,7 @@ Nested Queries bring Procedural Programming to Prompting | LMQL - + @@ -35,7 +35,7 @@ Out of these, who was born last?LASTDua Lipa

    - + \ No newline at end of file diff --git a/features/3-models.html b/features/3-models.html index c1f4f213..ca3f7f05 100644 --- a/features/3-models.html +++ b/features/3-models.html @@ -5,7 +5,7 @@ Works Across Backends | LMQL - + @@ -21,7 +21,7 @@
    Skip to content

    LMQL automatically makes your LLM code portable across several backends. You can switch between them with a single line of code.

    - + \ No newline at end of file diff --git a/features/_1-types.html b/features/_1-types.html index de68c8fe..b02dbcbf 100644 --- a/features/_1-types.html +++ b/features/_1-types.html @@ -5,7 +5,7 @@ Typed LLMs | LMQL - + @@ -33,7 +33,7 @@ """ p.name # Alice
    - + \ No newline at end of file diff --git a/features/examples/1-packing-list.html b/features/examples/1-packing-list.html index 1da2e452..7eb5ea3c 100644 --- a/features/examples/1-packing-list.html +++ b/features/examples/1-packing-list.html @@ -5,7 +5,7 @@ 🌴 Packing List | LMQL - + @@ -41,7 +41,7 @@ - THING Sunscreen - THING Volleyball

    - + \ No newline at end of file diff --git a/features/examples/2-constraining.html b/features/examples/2-constraining.html index c81c6afb..a115b7f7 100644 --- a/features/examples/2-constraining.html +++ b/features/examples/2-constraining.html @@ -5,7 +5,7 @@ ⛓️ Constrained LLMs | LMQL - + @@ -37,7 +37,7 @@ Q: JOKE What did the fish say when it hit the wall? A: PUNCHLINE Dam

    - + \ No newline at end of file diff --git a/features/examples/2.5-data-types.html b/features/examples/2.5-data-types.html index eb3435d8..0d68b7d9 100644 --- a/features/examples/2.5-data-types.html +++ b/features/examples/2.5-data-types.html @@ -5,7 +5,7 @@ 🔢 Types and Regex | LMQL - + @@ -44,7 +44,7 @@ Q: What's the month number? A: ANSWER 6

    - + \ No newline at end of file diff --git a/features/examples/3-multi-part.html b/features/examples/3-multi-part.html index b3449792..37741723 100644 --- a/features/examples/3-multi-part.html +++ b/features/examples/3-multi-part.html @@ -5,7 +5,7 @@ 🧠 Multi-Part Prompts | LMQL - + @@ -48,7 +48,7 @@ Therefore, the answer is ANSWER A

    - + \ No newline at end of file diff --git a/features/examples/3.5-distributions.html b/features/examples/3.5-distributions.html index 3fdb3193..4446aaf8 100644 --- a/features/examples/3.5-distributions.html +++ b/features/examples/3.5-distributions.html @@ -5,7 +5,7 @@ 📐 Measure Distributions | LMQL - + @@ -61,7 +61,7 @@

    P(CLASSIFICATION) =
    - positive 0.9998711120293567
    - neutral 0.00012790777085508993
    - negative 9.801997880775052e-07
    - + \ No newline at end of file diff --git a/features/examples/3.6-python.html b/features/examples/3.6-python.html index 2319c515..55f3871b 100644 --- a/features/examples/3.6-python.html +++ b/features/examples/3.6-python.html @@ -5,7 +5,7 @@ 🐍 Python Support | LMQL - + @@ -37,7 +37,7 @@

    %SPLIT%

    promptdown

    Say 'Hello World': TEST Hello World

    - + \ No newline at end of file diff --git a/features/examples/4-meta-prompting.html b/features/examples/4-meta-prompting.html index 249502c7..f399d2ac 100644 --- a/features/examples/4-meta-prompting.html +++ b/features/examples/4-meta-prompting.html @@ -5,7 +5,7 @@ 🌳 Meta Prompting | LMQL - + @@ -47,7 +47,7 @@ For instance, (a data scientist or a machine learning engineer) would answer ANSWER this question by explaining that large language models are a type of artificial intelligence (AI) model that uses deep learning algorithms to process large amounts of natural language data.

    - + \ No newline at end of file diff --git a/features/examples/5-wikipedia.html b/features/examples/5-wikipedia.html index 000e1f71..4fc656bd 100644 --- a/features/examples/5-wikipedia.html +++ b/features/examples/5-wikipedia.html @@ -5,7 +5,7 @@ 🌎 Tool Augmentation | LMQL - + @@ -49,7 +49,7 @@ Final Answer: ANSWER The Norse originated from Scandinavia.

    - + \ No newline at end of file diff --git a/features/examples/6-chat.html b/features/examples/6-chat.html index d204bde2..60ddbcf7 100644 --- a/features/examples/6-chat.html +++ b/features/examples/6-chat.html @@ -5,7 +5,7 @@ 💬 Chatbots | LMQL - + @@ -35,7 +35,7 @@
    bubble:assistantANSWER The best way to interact with LLMs (Language Model Models) is through a query language like LMQL. LMQL allows you to easily and efficiently query large language models and retrieve the information you need. With LMQL, you can specify the input text, the output format, and the model you want to use , all in a single query. This makes it easy to integrate LLMs into your applications and workflows, and to get the most out of these powerful language models. Additionally, LMQL provides a standardized way of interacting with LLMs, which makes it easier for developers and data scientists to collaborate and share their work .

    - + \ No newline at end of file diff --git a/hashmap.json b/hashmap.json index c0885630..77eaa53c 100644 --- a/hashmap.json +++ b/hashmap.json @@ -1 +1 @@ -{"docs_lib_integrations_langchain.md":"82a1a8ce","blog_posts_release-0.0.6.1.md":"120a8d77","docs_lib_python.md":"bc95434a","docs_lib_chat_overview.md":"6114091f","docs_latest_development_documentation.md":"c26b8856","blog_posts_release-0.0.6.3.md":"a170b883","docs_latest_models_openai.md":"0ecf2534","docs_language_overview.md":"4e5dda69","docs_language_scripted-prompting.md":"1a472d51","docs_language_reference.md":"93969594","docs_lib_chat_internal.md":"865e3c38","features_examples_2.5-data-types.md":"7dcc82ab","docs_latest_installation.md":"4aae67b7","docs_latest_lib_chat_internal.md":"fc68f234","docs_lib_integrations_pandas.md":"f1409ba3","docs_lib_output.md":"6e90a5e7","docs_latest_development_dev-setup.md":"18f17674","docs_models_llama.cpp.md":"89effe2c","docs_latest_language_nestedqueries.md":"815f2dd9","research_index.md":"c26a598e","docs_lib_integrations.md":"f23c2c07","docs_latest_language_constraints_custom-constraints.md":"a673b22d","docs_latest_language_scripted-prompting.md":"12afea6e","docs_latest_models_index.md":"d5054f5a","docs_models_openai.md":"39ade1fd","features_examples_3-multi-part.md":"8bbbe3a5","docs_latest_models_llama.cpp.md":"9e418aff","readme.md":"82e2e066","index.md":"3b2473f1","features_2-nested.md":"3c4c00f3","features_examples_5-wikipedia.md":"0f7e4309","features_examples_6-chat.md":"cc86c446","docs_models_azure.md":"167a3758","docs_latest_language_decoding.md":"018937f6","docs_latest_language_constraints.md":"c042157c","docs_lib_chat_defend.md":"c253ac92","docs_latest_language_overview.md":"7387dad8","docs_models_hf.md":"00577347","docs_latest_lib_chat_defend.md":"5cbe2338","docs_lib_inference-certificates.md":"9a213cfe","docs_lib_integrations_llama_index.md":"f3803dc1","docs_latest_lib_integrations.md":"eb75ecfa","docs_latest_language_decorators.md":"53bd4b02","docs_lib_chat.md":"f8be0cb3","docs_latest_language_reference.md":"a23db647","features_examples_3.5-distributions.md":"9d7164a2","docs_models_index.md":"7dc07083","docs_language_nestedqueries.md":"3fc05b78","docs_latest_lib_chat_serving.md":"c33c8e48","docs_latest_lib_generations.md":"a63039e2","docs_lib_chat_serving.md":"36ddd098","docs_latest_lib_integrations_llama_index.md":"4f2fe1b9","docs_language_decorators.md":"9a3eb9c2","docs_language_tools.md":"9aaa96de","features_examples_1-packing-list.md":"fe8669f0","docs_latest_models_azure.md":"7c6396b6","docs_lib_generations.md":"4968526e","features__1-types.md":"2e12a0c5","blog_posts_release-0.0.6.6.md":"958a76e8","docs_installation.md":"8739ce92","docs_latest_models_hf.md":"d19b7949","docs_index.md":"526a0692","docs_latest_language_tools.md":"2379781a","docs_latest_lib_output.md":"4442e9f8","blog_posts_release-0.0.6.4.md":"a07e6028","features_examples_3.6-python.md":"376501ff","blog_posts_release-0.0.5.md":"6f3e470e","docs_latest_models_replicate.md":"4762e61c","docs_latest_lib_python.md":"1c370ca3","docs_development_documentation.md":"b9530bbc","blog_posts_release-0.0.6.5.md":"a82e82a9","blog_index.md":"ff9d28cf","docs_latest_development_docker-setup.md":"fa59e371","docs_latest_lib_chat_overview.md":"aa834f5b","features_3-models.md":"eaf28c02","features_examples_2-constraining.md":"cda9d411","docs_language_decoding.md":"0469ec27","docs_latest_lib_chat.md":"00476b1d","docs_latest_lib_inference-certificates.md":"414275b7","docs_development_docker-setup.md":"0e3e1b2c","docs_development_dev-setup
.md":"94c94af5","docs_latest_index.md":"45032e16","docs_language_constraints.md":"43fe4757","blog_posts_developer-survey.md":"9ee0ba44","docs_latest_lib_integrations_pandas.md":"9b6d3f34","features_examples_4-meta-prompting.md":"19152cd4","docs_models_replicate.md":"c1a2236a","docs_latest_lib_integrations_langchain.md":"42ff678a","docs_language_constraints_custom-constraints.md":"f81b18dd","features_1-code.md":"356f0c36","blog_posts_release-0.0.6.md":"2111a972","blog_posts_release-0.7.md":"c15b0585"} +{"blog_posts_developer-survey.md":"9ee0ba44","docs_development_docker-setup.md":"0e3e1b2c","docs_language_overview.md":"4e5dda69","docs_development_dev-setup.md":"94c94af5","blog_posts_release-0.0.6.4.md":"a07e6028","docs_language_decorators.md":"9a3eb9c2","blog_posts_release-0.0.6.1.md":"120a8d77","blog_posts_release-0.0.6.6.md":"958a76e8","readme.md":"82e2e066","blog_posts_release-0.0.6.5.md":"a82e82a9","docs_language_constraints_custom-constraints.md":"f81b18dd","docs_latest_index.md":"45032e16","blog_posts_release-0.0.5.md":"6f3e470e","blog_posts_release-0.0.6.md":"2111a972","docs_development_documentation.md":"b9530bbc","blog_posts_release-0.7.md":"c15b0585","docs_language_constraints.md":"43fe4757","docs_latest_development_docker-setup.md":"fa59e371","docs_latest_installation.md":"4aae67b7","docs_latest_development_documentation.md":"c26b8856","docs_language_scripted-prompting.md":"1a472d51","docs_latest_development_dev-setup.md":"18f17674","docs_latest_language_decorators.md":"53bd4b02","docs_language_reference.md":"93969594","docs_latest_language_overview.md":"7387dad8","docs_language_nestedqueries.md":"3fc05b78","blog_posts_release-0.0.6.3.md":"a170b883","docs_installation.md":"8739ce92","docs_index.md":"526a0692","docs_latest_language_decoding.md":"018937f6","docs_latest_language_constraints.md":"c042157c","docs_latest_language_nestedqueries.md":"815f2dd9","docs_language_tools.md":"9aaa96de","docs_latest_lib_chat.md":"00476b1d","docs_latest_models_azure.md":"7c6396b6","docs_latest_models_hf.md":"d19b7949","docs_latest_models_index.md":"d5054f5a","features__1-types.md":"2e12a0c5","docs_latest_lib_integrations.md":"eb75ecfa","docs_lib_chat_overview.md":"6114091f","docs_latest_language_reference.md":"a23db647","docs_latest_lib_integrations_langchain.md":"42ff678a","docs_latest_lib_python.md":"1c370ca3","docs_models_replicate.md":"c1a2236a","features_3-models.md":"eaf28c02","docs_models_openai.md":"39ade1fd","blog_index.md":"b2796768","docs_lib_integrations_llama_index.md":"f3803dc1","features_1-code.md":"356f0c36","features_2-nested.md":"3c4c00f3","docs_lib_chat.md":"f8be0cb3","features_examples_3-multi-part.md":"8bbbe3a5","docs_models_llama.cpp.md":"89effe2c","docs_models_azure.md":"167a3758","docs_latest_language_constraints_custom-constraints.md":"a673b22d","docs_lib_integrations.md":"f23c2c07","docs_lib_output.md":"6e90a5e7","docs_lib_python.md":"bc95434a","features_examples_3.5-distributions.md":"9d7164a2","research_index.md":"2696fbba","features_examples_3.6-python.md":"376501ff","features_examples_4-meta-prompting.md":"19152cd4","features_examples_5-wikipedia.md":"0f7e4309","docs_latest_lib_output.md":"4442e9f8","docs_latest_lib_integrations_pandas.md":"9b6d3f34","docs_latest_lib_integrations_llama_index.md":"4f2fe1b9","docs_latest_lib_chat_defend.md":"5cbe2338","docs_latest_lib_chat_serving.md":"c33c8e48","docs_language_decoding.md":"0469ec27","docs_latest_language_scripted-prompting.md":"12afea6e","docs_latest_models_llama.cpp.md":"9e418aff","docs_lib_integrations_langchain.md"
:"82a1a8ce","docs_latest_lib_inference-certificates.md":"414275b7","index.md":"930a925c","docs_lib_inference-certificates.md":"9a213cfe","docs_latest_models_replicate.md":"4762e61c","docs_lib_integrations_pandas.md":"f1409ba3","docs_models_index.md":"7dc07083","docs_models_hf.md":"00577347","docs_latest_language_tools.md":"2379781a","docs_latest_lib_chat_internal.md":"fc68f234","features_examples_2-constraining.md":"cda9d411","features_examples_6-chat.md":"cc86c446","features_examples_2.5-data-types.md":"7dcc82ab","docs_latest_lib_chat_overview.md":"aa834f5b","features_examples_1-packing-list.md":"fe8669f0","docs_lib_chat_serving.md":"36ddd098","docs_latest_models_openai.md":"0ecf2534","docs_lib_chat_defend.md":"c253ac92","docs_latest_lib_generations.md":"a63039e2","docs_lib_generations.md":"4968526e","docs_lib_chat_internal.md":"865e3c38"} diff --git a/index.html b/index.html index bf976485..5fbfc265 100644 --- a/index.html +++ b/index.html @@ -5,7 +5,7 @@ LMQL is a programming language for LLM interaction. | LMQL - + @@ -13,7 +13,7 @@ - + @@ -66,13 +66,13 @@ ![_|Q: When was Dua Lipa born?][@wait|200][@begin|incontext2][dateformat|(respond in DD/MM/YYYY)][@end|incontext2][@wait|200][ANSWER|22/08/1995][@wait|200][@fade|incontext2][@wait|200][@hide|incontext2][@wait|200] [_|Out of these, who was born last?][LAST|Dua Lipa] -[:replay]" __animate="true" animate-speed="50" class="promptdown promptdown-compiled" style="opacity: 1;">

    Execution Trace

    Execution Trace

    -Q: When was Obama born?200incontext

    200ANSWER04/08/1961200incontext200incontext200 -Q: When was Bruno Mars born?200incontext1200ANSWER08/10/1985200incontext1200incontext1200 -Q: When was Dua Lipa born?200incontext2200ANSWER22/08/1995200incontext2200incontext2200 +Q: When was Obama born?200incontext200ANSWER04/08/1961200incontext200incontext200 +Q: When was Bruno Mars born?200incontext1200ANSWER08/10/1985200incontext1200incontext1200 +Q: When was Dua Lipa born?200incontext2200ANSWER22/08/1995200incontext2200incontext2200 -Out of these, who was born last?LASTDua Lipa +Out of these, who was born last?LASTDua Lipa

    Works Across Backends

    LMQL automatically makes your LLM code portable across several backends. You can switch between them with a single line of code.
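    To make the portability claim concrete, here is a minimal sketch using query syntax that already appears elsewhere on this page (the `from "openai/text-davinci-003"` clause and the `"chatgpt"` model name); which model identifiers are actually available depends on the backends you have configured:

    ```lmql
    argmax
        "Say 'this is a test':[RESPONSE]" where len(TOKENS(RESPONSE)) < 10
    from
        "openai/text-davinci-003"  # change only this line, e.g. to "chatgpt" or a locally served model, to target a different backend
    ```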

    @@ -113,7 +113,7 @@

    Transformers

    - THING Volleyball

    - + \ No newline at end of file diff --git a/playground/asset-manifest.json b/playground/asset-manifest.json index 49010530..8deec72e 100644 --- a/playground/asset-manifest.json +++ b/playground/asset-manifest.json @@ -1,17 +1,17 @@ { "files": { "main.css": "./static/css/main.dea811a7.css", - "main.js": "./static/js/main.b9c04377.js", + "main.js": "./static/js/main.847ffa60.js", "static/js/357.c90ecd93.chunk.js": "./static/js/357.c90ecd93.chunk.js", "static/js/787.d1d4a2c2.chunk.js": "./static/js/787.d1d4a2c2.chunk.js", "static/media/explore.svg": "./static/media/explore.d648d98579c69615ae83ee55d57efe3b.svg", "index.html": "./index.html", "main.dea811a7.css.map": "./static/css/main.dea811a7.css.map", - "main.b9c04377.js.map": "./static/js/main.b9c04377.js.map", + "main.847ffa60.js.map": "./static/js/main.847ffa60.js.map", "787.d1d4a2c2.chunk.js.map": "./static/js/787.d1d4a2c2.chunk.js.map" }, "entrypoints": [ "static/css/main.dea811a7.css", - "static/js/main.b9c04377.js" + "static/js/main.847ffa60.js" ] } \ No newline at end of file diff --git a/playground/index.html b/playground/index.html index fbc1f808..3d024c2b 100644 --- a/playground/index.html +++ b/playground/index.html @@ -1 +1 @@ -LMQL Playground
    \ No newline at end of file +LMQL Playground
    \ No newline at end of file diff --git a/playground/static/js/main.b9c04377.js b/playground/static/js/main.847ffa60.js similarity index 99% rename from playground/static/js/main.b9c04377.js rename to playground/static/js/main.847ffa60.js index f1157623..d361770a 100644 --- a/playground/static/js/main.b9c04377.js +++ b/playground/static/js/main.847ffa60.js @@ -1,3 +1,3 @@ -/*! For license information please see main.b9c04377.js.LICENSE.txt */ -(function(){var __webpack_modules__={5591:function(e,t,n){"use strict";n.d(t,{uk:function(){return Ue},DN:function(){return Fe},gk:function(){return Ve}});var r={};n.r(r),n.d(r,{Decoder:function(){return Oe},Encoder:function(){return Te},PacketType:function(){return Ce},protocol:function(){return Pe}});var i=n(885),o=n(5671),a=n(3144),s=n(8068),l=function(){function e(t){var n=this;(0,o.Z)(this,e),this.worker=t,this.processWorker=new Worker(t),this.setup(),this.hasSecret=!1,this.secret=null,window.localStorage.getItem("openai-secret")&&(this.hasSecret=""!=this.secret,this.secret=window.localStorage.getItem("openai-secret")),"undefined"!==typeof window.SharedArrayBuffer?(this.interruptBuffer=new Uint8Array(new window.SharedArrayBuffer(1)),this.processWorker.postMessage({func:"set_interrupt_buffer",args:this.interruptBuffer})):this.interruptBuffer=null,this.consoleListeners=[],this.renderers=[],this.statusListeners=[],this.connectionListeners=[],this.runListeners=[],this.remotePid=null,this.killCounter=0,this.hardKillTimer=null,this.ready=!1,this.status={connected:!1,label:""},this.addStatusListener((function(e){"idle"==e.status&&(n.killCounter=0,n.status=Object.assign({},e),n.ready||(n.ready=!0,n.connectionListeners.forEach((function(e){e(!0)}))))}))}return(0,a.Z)(e,[{key:"setup",value:function(){var e=this;this.processWorker.onmessage=function(t){"app-result"==t.data.type?e.onAppResult(t.data.data):"app-status"==t.data.type?e.statusListeners.forEach((function(e){e(t.data.data)})):console.log("Received unhandled message type from worker",t.data)}}},{key:"on",value:function(e,t){"console"===e?this.addConsoleListener(t):"status"===e?this.addStatusListener(t):"connection"===e?this.addConnectionListener(t):"render"===e?this.addRenderer(t):"run"===e?this.addRunListener(t):console.error("Unknown event",e)}},{key:"remove",value:function(e,t){"console"===e?this.consoleListeners=this.consoleListeners.filter((function(e){return e!==t})):"status"===e?this.statusListeners=this.statusListeners.filter((function(e){return e!==t})):"connection"===e?this.connectionListeners=this.connectionListeners.filter((function(e){return e!==t})):"render"===e?this.renderers=this.renderers.filter((function(e){return e!==t})):"run"===e?this.runListeners=this.runListeners.filter((function(e){return e!==t})):console.error("Unknown event",e)}},{key:"addRunListener",value:function(e){this.runListeners.push(e)}},{key:"addConnectionListener",value:function(e){this.connectionListeners.push(e)}},{key:"addStatusListener",value:function(e){this.statusListeners.push(e)}},{key:"addConsoleListener",value:function(e){this.consoleListeners.push(e)}},{key:"addRenderer",value:function(e){this.renderers.push(e)}},{key:"logToConsole",value:function(e){this.consoleListeners.forEach((function(t){t(e)}))}},{key:"listener_stats",value:function(){console.log("console listeners",this.consoleListeners.length),console.log("render listeners",this.renderers.length),console.log("status listeners",this.statusListeners.length),console.log("connection 
listeners",this.connectionListeners.length),console.log("renderers",this.renderers.length),console.log("run listeners",this.runListeners.length)}},{key:"onAppResult",value:function(e){if(e.startsWith("DEBUGGER OUTPUT"))try{e=JSON.parse(e.substr("DEBUGGER OUTPUT".length)),this.renderers.forEach((function(t){t.add_result(e)}))}catch(d){this.logToConsole("Failed to parse debugger output "+e.substr("DEBUGGER OUTPUT".length)+"\n")}else if(e.startsWith("BUILD_INFO")){var t=e.substr("BUILD_INFO ".length).split(", "),n=(0,i.Z)(t,2),r=n[0],o=n[1],a=r.split(" "),l=a[1].substr(0,7);"dirty"==a[a.length-1]&&(l+=" (dirty)"),s.n.setInfo({commit:l,date:o})}else if(e.startsWith("APP EXIT")){var u=e.substr("APP EXIT ".length);this.onAppExit(u)}else if(e.startsWith("APP ERROR")){var c=e.substr("APP ERROR ".length);this.onAppError(c)}else"string"==typeof e?this.logToConsole(e+"\n"):this.logToConsole(JSON.stringify(e)+"\n")}},{key:"onAppError",value:function(e){this.logToConsole(e),this.statusListeners.forEach((function(t){t({status:"error",error:e})}))}},{key:"onAppExit",value:function(e){this.logToConsole(e),this.statusListeners.forEach((function(t){t({status:"exit",error:e})})),this.remotePid=null}},{key:"run",value:function(e){if(this.hasSecret){this.processWorker.postMessage({func:"set_openai_secret",args:this.secret}),this.renderers.forEach((function(e){e.clear_results()})),this.interruptBuffer&&(this.interruptBuffer[0]=0),this.killCounter=0;var t=[e.name,e.app_input,e.app_arguments];this.processWorker.postMessage({func:"live",args:t})}else this.statusListeners.forEach((function(e){e({status:"secret-missing",error:"No OpenAI secret set."})}))}},{key:"sendInput",value:function(e){this.processWorker.postMessage({func:"send_input",args:e})}},{key:"setSecret",value:function(e){e.startsWith("transient-")?e=e.substring("transient-".length):localStorage.setItem("openai-secret",e),this.secret=e,this.hasSecret=""!=this.secret,console.log("Setting OpenAI secret in browser process",e,"secret"),this.processWorker.postMessage({func:"set_openai_secret",args:this.secret})}},{key:"kill",value:function(){var e=this;if(0==this.killCounter){this.statusListeners.forEach((function(e){e({status:"stopping"})})),this.processWorker.postMessage({func:"kill",args:[]});var t=1+parseInt(1e6*Math.random());this.killCounter=t,console.log("Killing in-browser process"),this.hardKillTimer&&(clearTimeout(this.hardKillTimer),this.hardKillTimer=null),this.hardKillTimer=setTimeout((function(){e.killCounter==t&&(e.hardKillTimer=null,e.processWorker.terminate(),e.processWorker=new Worker(e.worker),e.setup())}),2e3)}}}]),e}();l.registry=window.BrowserProcessConnectionRegistry={},l.get=function(e){return l.registry[e]||(l.registry[e]=new l("lmql.web.min.js")),l.registry[e]};var u=n(7326),c=n(136),d=n(7277),f=n(1120);function h(){return h="undefined"!==typeof Reflect&&Reflect.get?Reflect.get.bind():function(e,t,n){var r=function(e,t){for(;!Object.prototype.hasOwnProperty.call(e,t)&&null!==(e=(0,f.Z)(e)););return e}(e,t);if(r){var i=Object.getOwnPropertyDescriptor(r,t);return i.get?i.get.call(arguments.length<3?e:n):i.value}},h.apply(this,arguments)}var p=n(9611);var g=n(8814);function v(e,t,n){return v=(0,g.Z)()?Reflect.construct.bind():function(e,t,n){var r=[null];r.push.apply(r,t);var i=new(Function.bind.apply(e,r));return n&&(0,p.Z)(i,n.prototype),i},v.apply(null,arguments)}function m(e){var t="function"===typeof Map?new Map:void 0;return m=function(e){if(null===e||(n=e,-1===Function.toString.call(n).indexOf("[native code]")))return e;var 
n;if("function"!==typeof e)throw new TypeError("Super expression must either be null or a function");if("undefined"!==typeof t){if(t.has(e))return t.get(e);t.set(e,r)}function r(){return v(e,arguments,(0,f.Z)(this).constructor)}return r.prototype=Object.create(e.prototype,{constructor:{value:r,enumerable:!1,writable:!0,configurable:!0}}),(0,p.Z)(r,e)},m(e)}var y=Object.create(null);y.open="0",y.close="1",y.ping="2",y.pong="3",y.message="4",y.upgrade="5",y.noop="6";var b=Object.create(null);Object.keys(y).forEach((function(e){b[y[e]]=e}));for(var x={type:"error",data:"parser error"},w="function"===typeof Blob||"undefined"!==typeof Blob&&"[object BlobConstructor]"===Object.prototype.toString.call(Blob),_="function"===typeof ArrayBuffer,k=function(e,t){var n=new FileReader;return n.onload=function(){var e=n.result.split(",")[1];t("b"+(e||""))},n.readAsDataURL(e)},E=function(e,t,n){var r,i=e.type,o=e.data;return w&&o instanceof Blob?t?n(o):k(o,n):_&&(o instanceof ArrayBuffer||(r=o,"function"===typeof ArrayBuffer.isView?ArrayBuffer.isView(r):r&&r.buffer instanceof ArrayBuffer))?t?n(o):k(new Blob([o]),n):n(y[i]+(o||""))},S="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",C="undefined"===typeof Uint8Array?[]:new Uint8Array(256),P=0;P>4,c[l++]=(15&r)<<4|i>>2,c[l++]=(3&i)<<6|63&o;return u}(e);return M(n,t)}return{base64:!0,data:e}},M=function(e,t){return"blob"===t&&e instanceof ArrayBuffer?new Blob([e]):e},D=function(e,t){if("string"!==typeof e)return{type:"message",data:M(e,t)};var n=e.charAt(0);return"b"===n?{type:"message",data:O(e.substring(1),t)}:b[n]?e.length>1?{type:b[n],data:e.substring(1)}:{type:b[n]}:x},N=String.fromCharCode(30);function L(e){if(e)return function(e){for(var t in L.prototype)e[t]=L.prototype[t];return e}(e)}L.prototype.on=L.prototype.addEventListener=function(e,t){return this._callbacks=this._callbacks||{},(this._callbacks["$"+e]=this._callbacks["$"+e]||[]).push(t),this},L.prototype.once=function(e,t){function n(){this.off(e,n),t.apply(this,arguments)}return n.fn=t,this.on(e,n),this},L.prototype.off=L.prototype.removeListener=L.prototype.removeAllListeners=L.prototype.removeEventListener=function(e,t){if(this._callbacks=this._callbacks||{},0==arguments.length)return this._callbacks={},this;var n,r=this._callbacks["$"+e];if(!r)return this;if(1==arguments.length)return delete this._callbacks["$"+e],this;for(var i=0;i1?t-1:0),r=1;r0);return t}function X(){var e=Y(+new Date);return e!==F?(H=0,F=e):e+"."+Y(H++)}for(;K0&&void 0!==arguments[0]?arguments[0]:{};return Object.assign(e,{xd:this.xd,xs:this.xs},this.opts),new re(this.uri(),e)}},{key:"doWrite",value:function(e,t){var n=this,r=this.request({method:"POST",data:e});r.on("success",t),r.on("error",(function(e,t){n.onError("xhr post error",e,t)}))}},{key:"doPoll",value:function(){var e=this,t=this.request();t.on("data",this.onData.bind(this)),t.on("error",(function(t,n){e.onError("xhr poll error",t,n)})),this.pollXhr=t}}]),n}(V),re=function(e){(0,c.Z)(n,e);var t=(0,d.Z)(n);function n(e,r){var i;return(0,o.Z)(this,n),i=t.call(this),B((0,u.Z)(i),r),i.opts=r,i.method=r.method||"GET",i.uri=e,i.async=!1!==r.async,i.data=void 0!==r.data?r.data:null,i.create(),i}return(0,a.Z)(n,[{key:"create",value:function(){var e=this,t=j(this.opts,"agent","pfx","key","passphrase","cert","ca","ciphers","rejectUnauthorized","autoUnref");t.xdomain=!!this.opts.xd,t.xscheme=!!this.opts.xs;var r=this.xhr=new J(t);try{r.open(this.method,this.uri,this.async);try{if(this.opts.extraHeaders)for(var i in 
r.setDisableHeaderCheck&&r.setDisableHeaderCheck(!0),this.opts.extraHeaders)this.opts.extraHeaders.hasOwnProperty(i)&&r.setRequestHeader(i,this.opts.extraHeaders[i])}catch(o){}if("POST"===this.method)try{r.setRequestHeader("Content-type","text/plain;charset=UTF-8")}catch(o){}try{r.setRequestHeader("Accept","*/*")}catch(o){}"withCredentials"in r&&(r.withCredentials=this.opts.withCredentials),this.opts.requestTimeout&&(r.timeout=this.opts.requestTimeout),r.onreadystatechange=function(){4===r.readyState&&(200===r.status||1223===r.status?e.onLoad():e.setTimeoutFn((function(){e.onError("number"===typeof r.status?r.status:0)}),0))},r.send(this.data)}catch(o){return void this.setTimeoutFn((function(){e.onError(o)}),0)}"undefined"!==typeof document&&(this.index=n.requestsCount++,n.requests[this.index]=this)}},{key:"onError",value:function(e){this.emitReserved("error",e,this.xhr),this.cleanup(!0)}},{key:"cleanup",value:function(e){if("undefined"!==typeof this.xhr&&null!==this.xhr){if(this.xhr.onreadystatechange=ee,e)try{this.xhr.abort()}catch(t){}"undefined"!==typeof document&&delete n.requests[this.index],this.xhr=null}}},{key:"onLoad",value:function(){var e=this.xhr.responseText;null!==e&&(this.emitReserved("data",e),this.emitReserved("success"),this.cleanup())}},{key:"abort",value:function(){this.cleanup()}}]),n}(L);if(re.requestsCount=0,re.requests={},"undefined"!==typeof document)if("function"===typeof attachEvent)attachEvent("onunload",ie);else if("function"===typeof addEventListener){addEventListener("onpagehide"in A?"pagehide":"unload",ie,!1)}function ie(){for(var e in re.requests)re.requests.hasOwnProperty(e)&&re.requests[e].abort()}var oe="function"===typeof Promise&&"function"===typeof Promise.resolve?function(e){return Promise.resolve().then(e)}:function(e,t){return t(e,0)},ae=A.WebSocket||A.MozWebSocket,se="undefined"!==typeof navigator&&"string"===typeof navigator.product&&"reactnative"===navigator.product.toLowerCase(),le=function(e){(0,c.Z)(n,e);var t=(0,d.Z)(n);function n(e){var r;return(0,o.Z)(this,n),(r=t.call(this,e)).supportsBinary=!e.forceBase64,r}return(0,a.Z)(n,[{key:"name",get:function(){return"websocket"}},{key:"doOpen",value:function(){if(this.check()){var e=this.uri(),t=this.opts.protocols,n=se?{}:j(this.opts,"agent","perMessageDeflate","pfx","key","passphrase","cert","ca","ciphers","rejectUnauthorized","localAddress","protocolVersion","origin","maxPayload","family","checkServerIdentity");this.opts.extraHeaders&&(n.headers=this.opts.extraHeaders);try{this.ws=se?new ae(e,t,n):t?new ae(e,t):new ae(e)}catch(We){return this.emitReserved("error",We)}this.ws.binaryType=this.socket.binaryType||"arraybuffer",this.addEventListeners()}}},{key:"addEventListeners",value:function(){var e=this;this.ws.onopen=function(){e.opts.autoUnref&&e.ws._socket.unref(),e.onOpen()},this.ws.onclose=function(t){return e.onClose({description:"websocket connection closed",context:t})},this.ws.onmessage=function(t){return e.onData(t.data)},this.ws.onerror=function(t){return e.onError("websocket error",t)}}},{key:"write",value:function(e){var t=this;this.writable=!1;for(var n=function(){var n=e[r],i=r===e.length-1;E(n,t.supportsBinary,(function(e){try{t.ws.send(e)}catch(n){}i&&oe((function(){t.writable=!0,t.emitReserved("drain")}),t.setTimeoutFn)}))},r=0;r1&&void 0!==arguments[1]?arguments[1]:{};return(0,o.Z)(this,n),(r=t.call(this)).writeBuffer=[],e&&"object"===typeof 
e&&(i=e,e=null),e?(e=fe(e),i.hostname=e.host,i.secure="https"===e.protocol||"wss"===e.protocol,i.port=e.port,e.query&&(i.query=e.query)):i.host&&(i.hostname=fe(i.host).host),B((0,u.Z)(r),i),r.secure=null!=i.secure?i.secure:"undefined"!==typeof location&&"https:"===location.protocol,i.hostname&&!i.port&&(i.port=r.secure?"443":"80"),r.hostname=i.hostname||("undefined"!==typeof location?location.hostname:"localhost"),r.port=i.port||("undefined"!==typeof location&&location.port?location.port:r.secure?"443":"80"),r.transports=i.transports||["polling","websocket"],r.writeBuffer=[],r.prevBufferLen=0,r.opts=Object.assign({path:"/engine.io",agent:!1,withCredentials:!1,upgrade:!0,timestampParam:"t",rememberUpgrade:!1,addTrailingSlash:!0,rejectUnauthorized:!0,perMessageDeflate:{threshold:1024},transportOptions:{},closeOnBeforeunload:!0},i),r.opts.path=r.opts.path.replace(/\/$/,"")+(r.opts.addTrailingSlash?"/":""),"string"===typeof r.opts.query&&(r.opts.query=function(e){for(var t={},n=e.split("&"),r=0,i=n.length;r1))return this.writeBuffer;for(var e,t=1,n=0;n=57344?n+=3:(r++,n+=4);return n}(e):Math.ceil((e.byteLength||e.size)*z)),n>0&&t>this.maxPayload)return this.writeBuffer.slice(0,n);t+=2}return this.writeBuffer}},{key:"write",value:function(e,t,n){return this.sendPacket("message",e,t,n),this}},{key:"send",value:function(e,t,n){return this.sendPacket("message",e,t,n),this}},{key:"sendPacket",value:function(e,t,n,r){if("function"===typeof t&&(r=t,t=void 0),"function"===typeof n&&(r=n,n=null),"closing"!==this.readyState&&"closed"!==this.readyState){(n=n||{}).compress=!1!==n.compress;var i={type:e,data:t,options:n};this.emitReserved("packetCreate",i),this.writeBuffer.push(i),r&&this.once("flush",r),this.flush()}}},{key:"close",value:function(){var e=this,t=function(){e.onClose("forced close"),e.transport.close()},n=function n(){e.off("upgrade",n),e.off("upgradeError",n),t()},r=function(){e.once("upgrade",n),e.once("upgradeError",n)};return"opening"!==this.readyState&&"open"!==this.readyState||(this.readyState="closing",this.writeBuffer.length?this.once("drain",(function(){e.upgrading?r():t()})):this.upgrading?r():t()),this}},{key:"onError",value:function(e){n.priorWebsocketSuccess=!1,this.emitReserved("error",e),this.onClose("transport error",e)}},{key:"onClose",value:function(e,t){"opening"!==this.readyState&&"open"!==this.readyState&&"closing"!==this.readyState||(this.clearTimeoutFn(this.pingTimeoutTimer),this.transport.removeAllListeners("close"),this.transport.close(),this.transport.removeAllListeners(),"function"===typeof removeEventListener&&(removeEventListener("beforeunload",this.beforeunloadEventListener,!1),removeEventListener("offline",this.offlineEventListener,!1)),this.readyState="closed",this.id=null,this.emitReserved("close",e,t),this.writeBuffer=[],this.prevBufferLen=0)}},{key:"filterUpgrades",value:function(e){for(var t=[],n=0,r=e.length;n=0&&e.num0;case Ce.ACK:case Ce.BINARY_ACK:return Array.isArray(t)}}}]),n}(L),Me=function(){function e(t){(0,o.Z)(this,e),this.packet=t,this.buffers=[],this.reconPack=t}return(0,a.Z)(e,[{key:"takeBinaryData",value:function(e){if(this.buffers.push(e),this.buffers.length===this.reconPack.attachments){var t=Ee(this.reconPack,this.buffers);return this.finishedReconstruction(),t}return null}},{key:"finishedReconstruction",value:function(){this.reconPack=null,this.buffers=[]}}]),e}();function De(e,t,n){return e.on(t,n),function(){e.off(t,n)}}var 
Ne=Object.freeze({connect:1,connect_error:1,disconnect:1,disconnecting:1,newListener:1,removeListener:1}),Le=function(e){(0,c.Z)(n,e);var t=(0,d.Z)(n);function n(e,r,i){var a;return(0,o.Z)(this,n),(a=t.call(this)).connected=!1,a.recovered=!1,a.receiveBuffer=[],a.sendBuffer=[],a._queue=[],a._queueSeq=0,a.ids=0,a.acks={},a.flags={},a.io=e,a.nsp=r,i&&i.auth&&(a.auth=i.auth),a._opts=Object.assign({},i),a.io._autoConnect&&a.open(),a}return(0,a.Z)(n,[{key:"disconnected",get:function(){return!this.connected}},{key:"subEvents",value:function(){if(!this.subs){var e=this.io;this.subs=[De(e,"open",this.onopen.bind(this)),De(e,"packet",this.onpacket.bind(this)),De(e,"error",this.onerror.bind(this)),De(e,"close",this.onclose.bind(this))]}}},{key:"active",get:function(){return!!this.subs}},{key:"connect",value:function(){return this.connected||(this.subEvents(),this.io._reconnecting||this.io.open(),"open"===this.io._readyState&&this.onopen()),this}},{key:"open",value:function(){return this.connect()}},{key:"send",value:function(){for(var e=arguments.length,t=new Array(e),n=0;n1?t-1:0),r=1;r1?n-1:0),i=1;in._opts.retries&&(n._queue.shift(),t&&t(e));else if(n._queue.shift(),t){for(var i=arguments.length,o=new Array(i>1?i-1:0),a=1;a0&&void 0!==arguments[0]&&arguments[0];if(this.connected&&0!==this._queue.length){var t=this._queue[0];t.pending&&!e||(t.pending=!0,t.tryCount++,this.flags=t.flags,this.emit.apply(this,t.args))}}},{key:"packet",value:function(e){e.nsp=this.nsp,this.io._packet(e)}},{key:"onopen",value:function(){var e=this;"function"==typeof this.auth?this.auth((function(t){e._sendConnectPacket(t)})):this._sendConnectPacket(this.auth)}},{key:"_sendConnectPacket",value:function(e){this.packet({type:Ce.CONNECT,data:this._pid?Object.assign({pid:this._pid,offset:this._lastOffset},e):e})}},{key:"onerror",value:function(e){this.connected||this.emitReserved("connect_error",e)}},{key:"onclose",value:function(e,t){this.connected=!1,delete this.id,this.emitReserved("disconnect",e,t)}},{key:"onpacket",value:function(e){if(e.nsp===this.nsp)switch(e.type){case Ce.CONNECT:e.data&&e.data.sid?this.onconnect(e.data.sid,e.data.pid):this.emitReserved("connect_error",new Error("It seems you are trying to reach a Socket.IO server in v2.x with a v3.x client, but they are not compatible (more information here: https://socket.io/docs/v3/migrating-from-2-x-to-3-0/)"));break;case Ce.EVENT:case Ce.BINARY_EVENT:this.onevent(e);break;case Ce.ACK:case Ce.BINARY_ACK:this.onack(e);break;case Ce.DISCONNECT:this.ondisconnect();break;case Ce.CONNECT_ERROR:this.destroy();var t=new Error(e.data.message);t.data=e.data.data,this.emitReserved("connect_error",t)}}},{key:"onevent",value:function(e){var t=e.data||[];null!=e.id&&t.push(this.ack(e.id)),this.connected?this.emitEvent(t):this.receiveBuffer.push(Object.freeze(t))}},{key:"emitEvent",value:function(e){if(this._anyListeners&&this._anyListeners.length){var t,r=this._anyListeners.slice(),i=(0,pe.Z)(r);try{for(i.s();!(t=i.n()).done;){t.value.apply(this,e)}}catch(We){i.e(We)}finally{i.f()}}h((0,f.Z)(n.prototype),"emit",this).apply(this,e),this._pid&&e.length&&"string"===typeof e[e.length-1]&&(this._lastOffset=e[e.length-1])}},{key:"ack",value:function(e){var t=this,n=!1;return function(){if(!n){n=!0;for(var r=arguments.length,i=new Array(r),o=0;o0&&e.jitter<=1?e.jitter:0,this.attempts=0}Ae.prototype.duration=function(){var e=this.ms*Math.pow(this.factor,this.attempts++);if(this.jitter){var t=Math.random(),n=Math.floor(t*this.jitter*e);e=0==(1&Math.floor(10*t))?e-n:e+n}return 
0|Math.min(e,this.max)},Ae.prototype.reset=function(){this.attempts=0},Ae.prototype.setMin=function(e){this.ms=e},Ae.prototype.setMax=function(e){this.max=e},Ae.prototype.setJitter=function(e){this.jitter=e};var je=function(e){(0,c.Z)(n,e);var t=(0,d.Z)(n);function n(e,i){var a,s;(0,o.Z)(this,n),(a=t.call(this)).nsps={},a.subs=[],e&&"object"===typeof e&&(i=e,e=void 0),(i=i||{}).path=i.path||"/socket.io",a.opts=i,B((0,u.Z)(a),i),a.reconnection(!1!==i.reconnection),a.reconnectionAttempts(i.reconnectionAttempts||1/0),a.reconnectionDelay(i.reconnectionDelay||1e3),a.reconnectionDelayMax(i.reconnectionDelayMax||5e3),a.randomizationFactor(null!==(s=i.randomizationFactor)&&void 0!==s?s:.5),a.backoff=new Ae({min:a.reconnectionDelay(),max:a.reconnectionDelayMax(),jitter:a.randomizationFactor()}),a.timeout(null==i.timeout?2e4:i.timeout),a._readyState="closed",a.uri=e;var l=i.parser||r;return a.encoder=new l.Encoder,a.decoder=new l.Decoder,a._autoConnect=!1!==i.autoConnect,a._autoConnect&&a.open(),a}return(0,a.Z)(n,[{key:"reconnection",value:function(e){return arguments.length?(this._reconnection=!!e,this):this._reconnection}},{key:"reconnectionAttempts",value:function(e){return void 0===e?this._reconnectionAttempts:(this._reconnectionAttempts=e,this)}},{key:"reconnectionDelay",value:function(e){var t;return void 0===e?this._reconnectionDelay:(this._reconnectionDelay=e,null===(t=this.backoff)||void 0===t||t.setMin(e),this)}},{key:"randomizationFactor",value:function(e){var t;return void 0===e?this._randomizationFactor:(this._randomizationFactor=e,null===(t=this.backoff)||void 0===t||t.setJitter(e),this)}},{key:"reconnectionDelayMax",value:function(e){var t;return void 0===e?this._reconnectionDelayMax:(this._reconnectionDelayMax=e,null===(t=this.backoff)||void 0===t||t.setMax(e),this)}},{key:"timeout",value:function(e){return arguments.length?(this._timeout=e,this):this._timeout}},{key:"maybeReconnectOnOpen",value:function(){!this._reconnecting&&this._reconnection&&0===this.backoff.attempts&&this.reconnect()}},{key:"open",value:function(e){var t=this;if(~this._readyState.indexOf("open"))return this;this.engine=new he(this.uri,this.opts);var n=this.engine,r=this;this._readyState="opening",this.skipReconnect=!1;var i=De(n,"open",(function(){r.onopen(),e&&e()})),o=De(n,"error",(function(n){r.cleanup(),r._readyState="closed",t.emitReserved("error",n),e?e(n):r.maybeReconnectOnOpen()}));if(!1!==this._timeout){var a=this._timeout;0===a&&i();var s=this.setTimeoutFn((function(){i(),n.close(),n.emit("error",new Error("timeout"))}),a);this.opts.autoUnref&&s.unref(),this.subs.push((function(){clearTimeout(s)}))}return this.subs.push(i),this.subs.push(o),this}},{key:"connect",value:function(e){return this.open(e)}},{key:"onopen",value:function(){this.cleanup(),this._readyState="open",this.emitReserved("open");var e=this.engine;this.subs.push(De(e,"ping",this.onping.bind(this)),De(e,"data",this.ondata.bind(this)),De(e,"error",this.onerror.bind(this)),De(e,"close",this.onclose.bind(this)),De(this.decoder,"decoded",this.ondecoded.bind(this)))}},{key:"onping",value:function(){this.emitReserved("ping")}},{key:"ondata",value:function(e){try{this.decoder.add(e)}catch(t){this.onclose("parse error",t)}}},{key:"ondecoded",value:function(e){var t=this;oe((function(){t.emitReserved("packet",e)}),this.setTimeoutFn)}},{key:"onerror",value:function(e){this.emitReserved("error",e)}},{key:"socket",value:function(e,t){var n=this.nsps[e];return n?this._autoConnect&&!n.active&&n.connect():(n=new 
Le(this,e,t),this.nsps[e]=n),n}},{key:"_destroy",value:function(e){for(var t=0,n=Object.keys(this.nsps);t=this._reconnectionAttempts)this.backoff.reset(),this.emitReserved("reconnect_failed"),this._reconnecting=!1;else{var n=this.backoff.duration();this._reconnecting=!0;var r=this.setTimeoutFn((function(){t.skipReconnect||(e.emitReserved("reconnect_attempt",t.backoff.attempts),t.skipReconnect||t.open((function(n){n?(t._reconnecting=!1,t.reconnect(),e.emitReserved("reconnect_error",n)):t.onreconnect()})))}),n);this.opts.autoUnref&&r.unref(),this.subs.push((function(){clearTimeout(r)}))}}},{key:"onreconnect",value:function(){var e=this.backoff.attempts;this._reconnecting=!1,this.backoff.reset(),this.emitReserved("reconnect",e)}}]),n}(L),Re={};function Ie(e,t){"object"===typeof e&&(t=e,e=void 0);var n,r=function(e){var t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"",n=arguments.length>2?arguments[2]:void 0,r=e;n=n||"undefined"!==typeof location&&location,null==e&&(e=n.protocol+"//"+n.host),"string"===typeof e&&("/"===e.charAt(0)&&(e="/"===e.charAt(1)?n.protocol+e:n.host+e),/^(https?|wss?):\/\//.test(e)||(e="undefined"!==typeof n?n.protocol+"//"+e:"https://"+e),r=fe(e)),r.port||(/^(http|ws)$/.test(r.protocol)?r.port="80":/^(http|ws)s$/.test(r.protocol)&&(r.port="443")),r.path=r.path||"/";var i=-1!==r.host.indexOf(":")?"["+r.host+"]":r.host;return r.id=r.protocol+"://"+i+":"+r.port+t,r.href=r.protocol+"://"+i+(n&&n.port===r.port?"":":"+r.port),r}(e,(t=t||{}).path||"/socket.io"),i=r.source,o=r.id,a=r.path,s=Re[o]&&a in Re[o].nsps;return t.forceNew||t["force new connection"]||!1===t.multiplex||s?n=new je(i,t):(Re[o]||(Re[o]=new je(i,t)),n=Re[o]),r.query&&!t.query&&(t.query=r.queryKey),n.socket(r.path,t)}Object.assign(Ie,{Manager:je,Socket:Le,io:Ie,connect:Ie});var Be=function(){function e(){var t=this;(0,o.Z)(this,e);var n={NODE_ENV:"production",PUBLIC_URL:".",WDS_SOCKET_HOST:void 0,WDS_SOCKET_PATH:void 0,WDS_SOCKET_PORT:void 0,FAST_REFRESH:!0,REACT_APP_WEB_BUILD:"1",REACT_APP_BUILD_COMMIT:"67bd95c"}.REACT_APP_SOCKET_PORT||null;this.socket=null==n?Ie.connect():Ie.connect(":"+n),this.socket.on("connect",(function(){t.socket.on("app-result",(function(e){t.onAppResult(e)})),t.socket.on("app-pid",(function(e){t.remotePid=e.pid})),t.socket.on("app-error",(function(e){t.onAppError(e)})),t.socket.on("app-exit",(function(e){t.onAppExit(e)})),t.connectionListeners.forEach((function(e){e(!0)})),t.statusListeners.forEach((function(e){e({status:"idle"})})),t.socket.on("disconnect",(function(){t.connectionListeners.forEach((function(e){e(!1)}))}))})),this.consoleListeners=[],this.renderers=[],this.statusListeners=[],this.connectionListeners=[],this.runListeners=[],this.remotePid=null,this.status={connected:!1,label:""},this.addStatusListener((function(e){"idle"===e.status&&(t.killCounter=0,t.status=Object.assign({},e))}))}return(0,a.Z)(e,[{key:"on",value:function(e,t){"console"===e?this.addConsoleListener(t):"status"===e?this.addStatusListener(t):"connection"===e?this.addConnectionListener(t):"render"===e?this.addRenderer(t):"run"===e?this.addRunListener(t):console.error("Unknown event",e)}},{key:"remove",value:function(e,t){"console"===e?this.consoleListeners=this.consoleListeners.filter((function(e){return e!==t})):"status"===e?this.statusListeners=this.statusListeners.filter((function(e){return e!==t})):"connection"===e?this.connectionListeners=this.connectionListeners.filter((function(e){return e!==t})):"render"===e?this.renderers=this.renderers.filter((function(e){return 
e!==t})):"run"===e?this.runListeners=this.runListeners.filter((function(e){return e!==t})):console.error("Unknown event",e)}},{key:"addRunListener",value:function(e){this.runListeners.push(e)}},{key:"addConnectionListener",value:function(e){this.connectionListeners.push(e)}},{key:"addStatusListener",value:function(e){this.statusListeners.push(e)}},{key:"addConsoleListener",value:function(e){this.consoleListeners.push(e)}},{key:"addRenderer",value:function(e){this.renderers.push(e)}},{key:"logToConsole",value:function(e){this.consoleListeners.forEach((function(t){t(e)}))}},{key:"onAppResult",value:function(e){if(e.startsWith("DEBUGGER OUTPUT"))try{e=JSON.parse(e.substr("DEBUGGER OUTPUT".length)),this.renderers.forEach((function(t){t.add_result(e)}))}catch(t){this.logToConsole("Failed to parse debugger output "+e.substr("DEBUGGER OUTPUT".length)+"\n")}else"string"==typeof e?this.logToConsole(e+"\n"):this.logToConsole(JSON.stringify(e)+"\n")}},{key:"sendInput",value:function(e){this.socket.emit("app-input",{pid:this.remotePid,text:e})}},{key:"onAppError",value:function(e){this.logToConsole(e)}},{key:"onAppExit",value:function(e){this.logToConsole(e),this.statusListeners.forEach((function(t){t({status:"idle",error:e})})),this.remotePid=null}},{key:"run",value:function(e){this.remotePid?console.error("Cannot run multiple processes at once"):(this.runListeners.forEach((function(e){return e()})),this.statusListeners.forEach((function(e){e({status:"running",error:null})})),console.log("Running with parameters",e),this.renderers.forEach((function(e){e.clear_results()})),this.socket.emit("app",e))}},{key:"kill",value:function(){console.log("Killing remote process",this.remotePid),this.socket.emit("app-kill",{pid:this.remotePid})}}]),e}();window.RemoteProcessConnectionRegistry?Be.registry=window.RemoteProcessConnectionRegistry:Be.registry=window.RemoteProcessConnectionRegistry={},Be.get=function(e){return Be.registry[e]||(Be.registry[e]=new Be),Be.registry[e]};var ze=window.location.host.includes("next")||window.location.hash.includes("is-next"),Fe={DEMO_MODE:!0,BROWSER_MODE:!1,DEV_MODE:!0,NEXT_MODE:ze,ProcessConnection:Be},qe=!0;function Ve(){return qe}qe=!1;var Ue=(Fe={DEMO_MODE:!0,BROWSER_MODE:!0,DEV_MODE:!0,NEXT_MODE:ze,ProcessConnection:l}).ProcessConnection.get("lmql");Fe.ProcessConnection},7495:function(__unused_webpack_module,__webpack_exports__,__webpack_require__){"use strict";__webpack_require__.d(__webpack_exports__,{Vq:function(){return Dialog},lX:function(){return PromptPopup},r3:function(){return Explore},uy:function(){return ExploreState}});var 
_home_runner_work_lmql_lmql_src_lmql_ui_playground_node_modules_babel_runtime_helpers_esm_slicedToArray_js__WEBPACK_IMPORTED_MODULE_8__=__webpack_require__(885),_home_runner_work_lmql_lmql_src_lmql_ui_playground_node_modules_babel_runtime_helpers_esm_taggedTemplateLiteral_js__WEBPACK_IMPORTED_MODULE_7__=__webpack_require__(1766),react__WEBPACK_IMPORTED_MODULE_0__=__webpack_require__(2791),styled_components__WEBPACK_IMPORTED_MODULE_6__=__webpack_require__(6444),_queries__WEBPACK_IMPORTED_MODULE_1__=__webpack_require__(9126),_queries__WEBPACK_IMPORTED_MODULE_1___default=__webpack_require__.n(_queries__WEBPACK_IMPORTED_MODULE_1__),_State__WEBPACK_IMPORTED_MODULE_2__=__webpack_require__(318),_Configuration__WEBPACK_IMPORTED_MODULE_3__=__webpack_require__(5591),_build_info__WEBPACK_IMPORTED_MODULE_4__=__webpack_require__(8068),react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__=__webpack_require__(184),_templateObject,_templateObject2,_templateObject3,_templateObject4,_templateObject5,_templateObject6,_templateObject7,_templateObject8,_templateObject9,PromptPopup=styled_components__WEBPACK_IMPORTED_MODULE_6__.ZP.div(_templateObject||(_templateObject=(0,_home_runner_work_lmql_lmql_src_lmql_ui_playground_node_modules_babel_runtime_helpers_esm_taggedTemplateLiteral_js__WEBPACK_IMPORTED_MODULE_7__.Z)(["\n position: absolute;\n top: 0;\n left: 0;\n width: 100vw;\n height: 100vh;\n background-color: #000000c2;\n z-index: 999;\n display: flex;\n flex-direction: column;\n overflow: hidden;\n\n animation: fade-in 0.2s;\n\n @keyframes fade-in {\n 0% {\n opacity: 0;\n }\n 100% {\n opacity: 1;\n }\n }\n\n .click-handler {\n position: absolute;\n top: 0;\n left: 0;\n width: 100%;\n height: 100%;\n z-index: -1;\n }\n"]))),Dialog=styled_components__WEBPACK_IMPORTED_MODULE_6__.ZP.div(_templateObject2||(_templateObject2=(0,_home_runner_work_lmql_lmql_src_lmql_ui_playground_node_modules_babel_runtime_helpers_esm_taggedTemplateLiteral_js__WEBPACK_IMPORTED_MODULE_7__.Z)(['\n background-color: #ffffff;\n border-radius: 4pt;\n overflow: hidden;\n padding: 10pt;\n margin: auto;\n max-width: 1100pt;\n max-height: 500pt;\n color: black;\n\n font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica;\n\n input {\n border: none;\n background-color: #dfdfdf;\n border-radius: 4pt;\n font-size: 14pt;\n outline: none;\n padding: 5pt;\n margin: 10pt;\n margin-left: 0pt;\n border: 2pt solid transparent;\n\n :focus {\n border: 2pt solid #8e98ea;\n }\n }\n\n &.embed {\n width: calc(100% - 22pt) !important;\n margin: 0;\n max-width: 100% !important;\n height: 100%;\n max-height: 100%;\n border: 1pt solid grey;\n font-size: 8pt !important;\n }\n\n label {\n font-size: 10pt;\n position: relative;\n top: 8pt;\n left: 2pt;\n }\n']))),ExploreDialog=(0,styled_components__WEBPACK_IMPORTED_MODULE_6__.ZP)(Dialog)(_templateObject3||(_templateObject3=(0,_home_runner_work_lmql_lmql_src_lmql_ui_playground_node_modules_babel_runtime_helpers_esm_taggedTemplateLiteral_js__WEBPACK_IMPORTED_MODULE_7__.Z)(["\n width: 800pt;\n height: 700pt;\n max-height: calc(100vh - 40pt);\n max-width: 100vw;\n overflow-y: auto;\n\n /* invisible scroll bar */\n ::-webkit-scrollbar {\n width: 0px;\n background: transparent;\n }\n\n position: relative;\n padding: 0pt;\n \n\n @media (max-width: 800pt) {\n border-radius: 0 !important;\n width: 100vw !important;\n height: 100vh !important;\n max-height: 100vh !important;\n max-width: 100vw !important;\n }\n \n\n @media (max-width: 450pt) {\n width: calc(100vw - 20pt);\n height: calc(100vh - 20pt);\n 
max-height: 100vh;\n max-width: 100vw;\n overflow-y: auto;\n position: relative;\n padding: 10pt;\n padding-bottom: 80pt;\n padding-left: 0;\n margin: 0;\n\n div.tile {\n display: block !important;\n height: 40pt;\n width: 100% !important;\n\n &:hover {\n transform: none !important;\n border: 1pt solid #ababab;\n }\n }\n }\n\n >div {\n padding: 5pt 20pt;\n position: relative;\n }\n\n >div .sidenote a {\n padding-left: 2pt;\n }\n\n >div .sidenote {\n position: absolute;\n top: 15pt;\n right: 15pt;\n font-size: 8pt;\n }\n\n div.highlight {\n background-color: #f5f5f5;\n margin: 5pt;\n border-radius: 4pt;\n }\n\n >div>div {\n display: flex;\n flex-direction: row;\n flex-wrap: wrap;\n }\n\n p {\n text-align: justify;\n }\n\n h1 {\n margin: 0;\n padding: 20pt;\n font-weight: bold;\n padding-bottom: 10pt;\n }\n\n h1 img {\n width: 20pt;\n height: 20pt;\n margin-right: 8pt;\n position: relative;\n top: 2pt;\n }\n\n h2 {\n font-size: 12pt;\n }\n\n h3 {\n font-size: 12pt;\n color: #373737;\n margin: 0;\n z-index: 999;\n font-weight: 700;\n }\n\n .close {\n position: absolute;\n top: 10pt;\n right: 10pt;\n width: 30pt;\n height: 30pt;\n text-align: center;\n line-height: 30pt;\n font-size: 20pt;\n cursor: pointer;\n }\n"]))),Tile=styled_components__WEBPACK_IMPORTED_MODULE_6__.ZP.div.attrs({className:"tile"})(_templateObject4||(_templateObject4=(0,_home_runner_work_lmql_lmql_src_lmql_ui_playground_node_modules_babel_runtime_helpers_esm_taggedTemplateLiteral_js__WEBPACK_IMPORTED_MODULE_7__.Z)(["\n background-color: white;\n border-radius: 4pt;\n padding: 10pt;\n margin: 10pt;\n margin-left: 0pt;\n margin-top: 0pt;\n cursor: pointer;\n height: 80pt;\n width: 100pt;\n border: 1pt solid #d4d4d4;\n opacity: 0.9;\n position: relative;\n display: flex;\n flex-direction: column;\n \n align-items: flex-start;\n justify-content: flex-start;\n\n transition: 0.1s linear transform;\n\n :hover {\n transform: scale(1.05);\n opacity: 1.0;\n border: 1pt solid #9b9a9a;\n }\n\n h3 {\n color: #212121;\n }\n\n code {\n opacity: 0.3;\n }\n\n p {\n color: #010101;\n font-size: 10pt;\n font-style: italic;\n z-index: 999;\n text-align: left;\n }\n\n .badge {\n color: #212121;\n position: absolute;\n top: 6pt;\n right: 6pt;\n font-size: 5pt;\n background-color: #e2e0e1;\n padding: 2pt;\n z-index: 9;\n }\n"]))),ExploreState={visible:"#explore"===window.location.hash,setVisibility:function(e){ExploreState.visible=e,ExploreState.listeners.forEach((function(t){return t(e)}))},listeners:[]},CodeContainer=styled_components__WEBPACK_IMPORTED_MODULE_6__.ZP.div.attrs({className:"code-container"})(_templateObject5||(_templateObject5=(0,_home_runner_work_lmql_lmql_src_lmql_ui_playground_node_modules_babel_runtime_helpers_esm_taggedTemplateLiteral_js__WEBPACK_IMPORTED_MODULE_7__.Z)(['\n background-color: transparent;\n padding: 10pt;\n margin: 0pt;\n overflow: hidden;\n transform: scale(0.8);\n transform-origin: top left;\n max-height: 50pt;\n position: absolute;\n top: 0;\n left: 0;\n width: 100%;\n color: #323232;\n\n .keyword {\n color: #ff79c6;\n font-weight: bold;\n }\n\n code {\n width: 1024pt;\n margin: 0;\n padding: 0;\n white-space: pre-wrap;\n color: #f1fa8c;\n\n font-size: 4pt;\n line-height: 4pt;\n overflow-x: hidden;\n overflow-y: hidden;\n width: 100%;\n max-height: 60pt;\n white-space: pre-wrap;\n display: block;\n }\n\n /* fade out bottom */\n :after {\n content: "";\n position: absolute;\n bottom: -2px;\n left: 0;\n right: 0;\n height: 20pt;\n /* background: linear-gradient(180deg, transparent 0%, #f5f5f561 100%); */\n 
}\n'])));function BasicHighlighted(e){var t=e.code,n=["argmax","where","from","and","or","not","sample","beam_search"],r=t.split(/(\s+|[()\t])/g).map((function(e,t){return n.includes(e.toLowerCase())?(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("span",{className:"keyword",children:e},e+t):(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("span",{children:e},e+t)}));return(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)(CodeContainer,{children:(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("code",{children:r})})}var Description=styled_components__WEBPACK_IMPORTED_MODULE_6__.ZP.p(_templateObject6||(_templateObject6=(0,_home_runner_work_lmql_lmql_src_lmql_ui_playground_node_modules_babel_runtime_helpers_esm_taggedTemplateLiteral_js__WEBPACK_IMPORTED_MODULE_7__.Z)(["\n font-size: 14pt;\n color: #696969;\n padding: 0pt 20pt;\n font-weight: 500;\n margin-top: 0pt;\n"]))),LinkBox=styled_components__WEBPACK_IMPORTED_MODULE_6__.ZP.div(_templateObject7||(_templateObject7=(0,_home_runner_work_lmql_lmql_src_lmql_ui_playground_node_modules_babel_runtime_helpers_esm_taggedTemplateLiteral_js__WEBPACK_IMPORTED_MODULE_7__.Z)(["\n display: flex;\n flex-direction: row;\n\n a.button {\n display: block;\n border: 1pt solid #6b77ff;\n padding: 5pt;\n margin-top: 15pt;\n margin-right: 5pt;\n border-radius: 2pt;\n color: #6b77ff;\n\n font-size: 10pt;\n }\n\n a.button:first-child {\n color: white;\n background-color: #6b77ff;\n\n &:hover {\n background-color: #8e98ea;\n text-decoration: none;\n }\n }\n"]))),CiteBox=styled_components__WEBPACK_IMPORTED_MODULE_6__.ZP.code(_templateObject8||(_templateObject8=(0,_home_runner_work_lmql_lmql_src_lmql_ui_playground_node_modules_babel_runtime_helpers_esm_taggedTemplateLiteral_js__WEBPACK_IMPORTED_MODULE_7__.Z)(["\n display: block;\n background-color: #f5f5f5;\n padding: 4pt;\n border-radius: 4pt;\n"]))),PoweredBy=styled_components__WEBPACK_IMPORTED_MODULE_6__.ZP.div(_templateObject9||(_templateObject9=(0,_home_runner_work_lmql_lmql_src_lmql_ui_playground_node_modules_babel_runtime_helpers_esm_taggedTemplateLiteral_js__WEBPACK_IMPORTED_MODULE_7__.Z)(["\n font-size: 8pt;\n color: #696969;\n text-align: center;\n margin: auto;\n\n display: flex;\n justify-content: center;\n align-items: center;\n padding: 10pt;\n\n >div:nth-child(2) {\n margin-left: 5pt;\n }\n"]))),didLoadAnchor=!1,PreviewQueries={queries:[],listeners:[]};function Explore(){var _useState=(0,react__WEBPACK_IMPORTED_MODULE_0__.useState)(ExploreState.visible),_useState2=(0,_home_runner_work_lmql_lmql_src_lmql_ui_playground_node_modules_babel_runtime_helpers_esm_slicedToArray_js__WEBPACK_IMPORTED_MODULE_8__.Z)(_useState,2),visible=_useState2[0],setVisible=_useState2[1],onClickTile=function(e){ExploreState.setVisibility(!1),e.state?fetch(e.state).then((function(e){return e.text()})).then((function(t){_State__WEBPACK_IMPORTED_MODULE_2__.Rs.load(t),_State__WEBPACK_IMPORTED_MODULE_2__.Rs.setItem("lmql-editor-contents",e.code),window.setTimeout((function(){return _State__WEBPACK_IMPORTED_MODULE_2__.l$.setTrackMostLikely(!0)}),10)})).catch((function(e){console.error(e),alert("Error loading the selected example. 
See the console for more details.")})):_State__WEBPACK_IMPORTED_MODULE_2__.Rs.setItem("lmql-editor-contents",e.code)},_useState3=(0,react__WEBPACK_IMPORTED_MODULE_0__.useState)(_Configuration__WEBPACK_IMPORTED_MODULE_3__.DN.NEXT_MODE?PreviewQueries.queries:_queries__WEBPACK_IMPORTED_MODULE_1__.queries),_useState4=(0,_home_runner_work_lmql_lmql_src_lmql_ui_playground_node_modules_babel_runtime_helpers_esm_slicedToArray_js__WEBPACK_IMPORTED_MODULE_8__.Z)(_useState3,2),exploreQueries=_useState4[0],setExploreQueries=_useState4[1];if((0,react__WEBPACK_IMPORTED_MODULE_0__.useEffect)((function(){if(_Configuration__WEBPACK_IMPORTED_MODULE_3__.DN.NEXT_MODE){var url="https://raw.githubusercontent.com/lmql-lang/awesome-lmql/main/next/showcase-playground.js";url+="?nocache="+Math.random(),fetch(url).then((function(e){return e.text()})).then((function(r){var queries=eval(r).queries;PreviewQueries.queries=queries,PreviewQueries.listeners.forEach((function(e){return e()})),console.log("Loaded list of Preview release queries.",queries)})).catch((function(e){console.error(e),console.error("Error loading list of Preview release queries.")}))}}),[]),(0,react__WEBPACK_IMPORTED_MODULE_0__.useEffect)((function(){ExploreState.listeners.push(setVisible);var e=window.localStorage.getItem("lmql-editor-contents");if((null===e||"string"===typeof e&&0===e.trim().length)&&ExploreState.setVisibility(!_State__WEBPACK_IMPORTED_MODULE_2__.YR.preloaded),window.location.hash&&!didLoadAnchor){didLoadAnchor=!0;var t=window.location.hash.substring(1),n=_queries__WEBPACK_IMPORTED_MODULE_1__.queries.filter((function(e){return e.queries.find((function(e){return e.state.includes(t)}))}));if(1===n.length){var r=n[0].queries.find((function(e){return e.state.includes(t)}));window.setTimeout((function(){return onClickTile(r)}),10)}}return function(){ExploreState.listeners=ExploreState.listeners.filter((function(e){return e!==setVisible}))}}),[]),(0,react__WEBPACK_IMPORTED_MODULE_0__.useEffect)((function(){if(_Configuration__WEBPACK_IMPORTED_MODULE_3__.DN.NEXT_MODE){var e=function(){setExploreQueries(PreviewQueries.queries)};return PreviewQueries.listeners.push(e),function(){PreviewQueries.listeners=PreviewQueries.listeners.filter((function(t){return t!==e}))}}}),[]),!visible)return null;var description=(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)(react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.Fragment,{children:["LMQL is a query language for large language models. Explore the examples below to get started.",(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)(LinkBox,{children:[(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("a",{className:"button",target:"_blank",rel:"noreferrer",href:"https://lmql.ai/docs",children:"Documentation"}),(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("a",{className:"button",target:"_blank",rel:"noreferrer",href:"https://lmql.ai",children:"Overview"})]})]});return _Configuration__WEBPACK_IMPORTED_MODULE_3__.DN.NEXT_MODE&&(description=(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)(react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.Fragment,{children:["This is the preview channel of LMQL. It contains the latest features, but may be unstable. 
If you are looking for the stable version, please visit ",(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("a",{href:"https://lmql.ai/playground",children:"the stable Playground IDE"}),"."]})),(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)(PromptPopup,{children:[(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("div",{className:"click-handler",onClick:function(){return ExploreState.setVisibility(!1)}}),(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)(ExploreDialog,{children:[(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)("h1",{children:[(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("img",{src:"/lmql.svg",alt:"LMQL Logo"}),"Welcome To LMQL"]},"welcome"),(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)(Description,{children:description}),(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("span",{className:"close",onClick:function(){return ExploreState.setVisibility(!1)},children:"\xd7"}),exploreQueries.map((function(e){return(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)("div",{className:e.highlight?"highlight":"",children:[(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("h2",{children:e.category},e.category),e.highlight&&(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("span",{class:"sidenote",children:(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("a",{href:"https://github.com/eth-sri/lmql/issues",target:"_blank",rel:"noreferrer",title:"Please report any issue you find with Preview features to help us improve LMQL.",children:" Report Issues"})}),(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("div",{children:e.queries.map((function(t,n){return(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)(Tile,{onClick:function(){return onClickTile(t)},children:[(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)(BasicHighlighted,{code:t.code}),(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("h3",{children:t.name}),(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("p",{children:t.description})]},e.category+"-"+n)}))},e.category+"-div")]},e.category+"-container")})),!_Configuration__WEBPACK_IMPORTED_MODULE_3__.DN.NEXT_MODE&&(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)(react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.Fragment,{children:(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)("div",{children:[(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("h2",{children:"Read More"},"read-paper"),(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)(CiteBox,{children:['Beurer-Kellner, Luca, Marc Fischer, and Martin Vechev. 
"Prompting Is Programming: A Query Language For Large Language Models."',(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("a",{target:"_blank",rel:"noreferrer",href:"https://arxiv.org/pdf/2212.06094",children:"arXiv preprint arXiv:2212.06094"})," (2022)."]},"cite"),(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("br",{}),(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)(PoweredBy,{children:[(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)("div",{children:["LMQL ",_build_info__WEBPACK_IMPORTED_MODULE_4__.n.info().commit]}),(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)("div",{children:[_Configuration__WEBPACK_IMPORTED_MODULE_3__.DN.BROWSER_MODE&&!(0,_Configuration__WEBPACK_IMPORTED_MODULE_3__.gk)()&&(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)("span",{children:[" In-Browser, Made with \u2764\ufe0f + ",(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("a",{href:"https://pyodide.org",target:"_blank",rel:"noreferrer",children:"Pyodide"})," + ",(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("a",{href:"https://webassembly.org/",target:"_blank",rel:"noreferrer",children:"WebAssembly"})]}),"-"!=_build_info__WEBPACK_IMPORTED_MODULE_4__.n.info().date?(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsxs)(react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.Fragment,{children:[(0,react_jsx_runtime__WEBPACK_IMPORTED_MODULE_5__.jsx)("br",{}),"Build on ",_build_info__WEBPACK_IMPORTED_MODULE_4__.n.info().date]}):null]})]})]})})]})]})}},318:function(e,t,n){"use strict";n.d(t,{Hj:function(){return d},Rs:function(){return u},YR:function(){return s},l$:function(){return c}});var r=n(5671),i=n(3144),o=n(9126);function a(){var e=!(arguments.length>0&&void 0!==arguments[0])||arguments[0],t=window.location.href;if(-1===t.indexOf("?"))return null;var n=t.split("?")[1];return n.startsWith("embed=")?n.substr(6):n.startsWith("snippet=")&&e?n.substr(8):null}var s={mode:a(!1)?"embed":"playground",preloaded:!!a(!0),embedFile:null},l=function(){function e(){(0,r.Z)(this,e),this.items={},this.listeners={},this.restore(),this.saveQueue={}}return(0,i.Z)(e,[{key:"persist",value:function(e){var t=this;Object.keys(this.items).forEach((function(n){n&&n!==e||window.localStorage.setItem(n,t.items[n])}))}},{key:"restore",value:function(){var e=this;Object.keys(this.items).forEach((function(t){e.items[t]=window.localStorage.getItem(t)}));var t=a();if(t){var n=!0;console.log("loading snippet from",t);var r=o.queries.filter((function(e){return e.queries.find((function(e){return e.state=="precomputed/"+t+".json"}))}));if(1===r.length){var i=r[0].queries.find((function(e){return e.state=="precomputed/"+t+".json"}));t=i.state}if(t.startsWith("gist:"))try{var l=t.split(":"),u=l[1].split("/")[0],c=l[1].split("/")[1],d=l[1].split("/")[3];t="https://gist.githubusercontent.com/".concat(u,"/").concat(c,"/raw/").concat(d),n=d.endsWith(".json")}catch(f){return void console.error("error parsing github gist URL",f)}fetch(t).then((function(e){return e.text()})).then((function(r){t.includes("gist.github")||window.history.replaceState({},document.title,window.location.pathname),r&&(n||(r=JSON.stringify({"lmql-editor-contents":r,"decoder-graph":'{"nodes":[],"edges":[]}'})),s.embedFile=t,e.load(r))}))}}},{key:"load",value:function(e){var t=this;e=JSON.parse(e),this.items={},Object.keys(e).forEach((function(n){t.items[n]=e[n],t.listeners[n]&&t.listeners[n].forEach((function(e){return e(t.items[n])}))}))}},{key:"dump",value:function(){return 
JSON.stringify(this.items)}},{key:"on",value:function(e,t){t&&(this.listeners[e]||(this.listeners[e]=[]),this.listeners[e].push(t))}},{key:"remove",value:function(e,t){this.listeners[e]&&(this.listeners[e]=this.listeners[e].filter((function(e){return e!==t})))}},{key:"getItem",value:function(e){if(e in this.items)return this.items[e];var t=window.localStorage.getItem(e);return t?(this.items[e]=t,t):null}},{key:"setItem",value:function(e,t){var n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:null;this.items[e]=t,this.persist(e),this.listeners[e]&&this.listeners[e].forEach((function(e){e!==n&&e(t)}))}},{key:"queueSetItem",value:function(e,t){var n=this,r=arguments.length>2&&void 0!==arguments[2]?arguments[2]:null;this.saveQueue[e]&&clearTimeout(this.saveQueue[e]),this.saveQueue[e]=setTimeout((function(){n.setItem(e,t,r)}))}}]),e}(),u=new l,c={setTrackMostLikely:function(){},getTrackMostLikely:function(){return!1},setSelectedNode:function(){}},d={addListener:function(e){d.listeners.push(e)},listeners:[],error:null,setError:function(e){d.error=e,d.listeners.forEach((function(t){return t(e)}))},removeListener:function(e){d.listeners=d.listeners.filter((function(t){return t!==e}))},showErrorOutput:function(){}}},8068:function(e,t,n){"use strict";n.d(t,{n:function(){return o}});var r=n(5671),i=n(3144),o=new(function(){function e(){(0,r.Z)(this,e),this.listeners=[],window.BUILD_COMMIT||(window.BUILD_COMMIT="67bd95c"),window.BUILD_DATE||(window.BUILD_DATE={NODE_ENV:"production",PUBLIC_URL:".",WDS_SOCKET_HOST:void 0,WDS_SOCKET_PATH:void 0,WDS_SOCKET_PORT:void 0,FAST_REFRESH:!0,REACT_APP_WEB_BUILD:"1",REACT_APP_BUILD_COMMIT:"67bd95c"}.REACT_APP_BUILD_DATE?{NODE_ENV:"production",PUBLIC_URL:".",WDS_SOCKET_HOST:void 0,WDS_SOCKET_PATH:void 0,WDS_SOCKET_PORT:void 0,FAST_REFRESH:!0,REACT_APP_WEB_BUILD:"1",REACT_APP_BUILD_COMMIT:"67bd95c"}.REACT_APP_BUILD_DATE:"")}return(0,i.Z)(e,[{key:"info",value:function(){return{commit:window.BUILD_COMMIT,date:window.BUILD_DATE||"-"}}},{key:"addListener",value:function(e){this.listeners.push(e)}},{key:"removeListener",value:function(e){this.listeners=this.listeners.filter((function(t){return t!==e}))}},{key:"setInfo",value:function(e){var t=this;window.BUILD_COMMIT=e.commit,window.BUILD_DATE=e.date,this.listeners.forEach((function(e){e(t.info())}))}}]),e}())},9126:function(e){e.exports={queries:[{category:"Introductory Examples",queries:[{name:"\ud83d\udc4b Hello World",description:"Who This?",code:"\"Say 'this is a test':[RESPONSE]\" where len(TOKENS(RESPONSE)) < 10",state:"precomputed/hello.json"},{name:"\ud83d\udc74 Tell A Joke",description:"Few-Shot Samples & Constraints",code:'# instructions + few-shot samples\n"""\nA list of good dad jokes. 
A indicates the punchline\nQ: How does a penguin build its house?\nA: Igloos it together.\nQ: Which knight invented King Arthur\'s Round Table?\nA: Sir Cumference.\n"""\n\n# generate a joke\n"Q:[JOKE]\\n" where len(TOKENS(JOKE)) < 120 and STOPS_AT(JOKE, "?")\n"A:[PUNCHLINE]" where STOPS_AT(PUNCHLINE, "\\n") and len(TOKENS(PUNCHLINE)) > 1',state:"precomputed/joke.json"},{name:"\ud83c\udf34 Packing List",description:"Control-Flow Guided Generation",code:'# specify a decoding strategy for the query\nsample(temperature=0.8)\n\n"A list of things not to forget when going to the beach: \\n"\n# use a loop to generate a list\nfor i in range(4):\n "- [THING] \\n" where \\\n THING in set(["Volleyball", "Sunscreen", "Bathing Suit"])',state:"precomputed/list.json"},{name:"\ud83d\udcdd Templates",description:"Template-Based Generation for JSON data",code:'"""\nWrite a summary of Bruno Mars, the singer:\n{{\n "name": "[STRING_VALUE]",\n "age": [INT_VALUE],\n "top_songs": [[\n "[STRING_VALUE]",\n "[STRING_VALUE]"\n ]]\n}}\n""" where STOPS_BEFORE(STRING_VALUE, \'"\') and \\\n INT(INT_VALUE) and len(TOKENS(INT_VALUE)) < 2',state:"precomputed/json-template.json"}]},{category:"Features In Preview",highlight:!0,queries:[{name:"\ud83d\udc68\u200d\ud83d\udc69\u200d\ud83d\udc67 Types / JSON",description:"Generate schema-safe, typed data.",code:'import lmql\nfrom dataclasses import dataclass\n\n@dataclass\nclass Employer:\n employer_name: str\n location: str\n\n@dataclass\nclass Person:\n name: str\n age: int\n employer: Employer\n job: str\n\n# use type constraints to generated (type-safe) structured data\n"Alice is a 21 years old and works as an engineer at LMQL Inc in Zurich, Switzerland.\\n"\n"Structured: [PERSON_DATA]\\n" where type(PERSON_DATA) is Person\n\n# the resulting object is directly accessible as a Python object\n"Their name is {PERSON_DATA.name} and she works in {PERSON_DATA.employer.location}."',state:"precomputed/json-robust.json"},{name:"\ud83d\udee0\ufe0f Multi-Tool Use",description:"Simply expose Python functions as LLM tools.",code:'from lmql.lib.actions import inline_use, calc, wiki\n\n"Q: What is the population of the US and Germany combined?\\n"\n"A: Let\'s consider the latest information to compute an answer\\n"\n\n# expose Python functions as LLM tools\n"[REASONING]\\n" where inline_use(REASONING, [wiki, calc])\n\n# use an integer-typed variable to extract the final result\n"Therefore the answer is[ANSWER: int]"',state:""},{name:"\ud83d\udd24 Regex Constraints",description:"Specify constraints using regex.",code:"# to structure output, you can enforce regex expressions\n\"It's the last day of June so today (DD/MM) is [RESPONSE: r'[0-9]{2}/[0-9]{1,2}']\"",state:"precomputed/date-regex.json"},{name:"\u2764\ufe0f Sentiment Constraints",description:"Affect sentiment with in-context instructions.",code:'# uses nested queries to generate sentiment-guided chatbot responses \n# see https://lmql.ai/docs/language/nestedqueries.html to learn more about nested queries\n\n# sub-query to generate ad-hoc instructions to match a specific mood\n@lmql.query(cache="mood.tokens", model="chatgpt")\nasync def mood_description(m: str):\n \'\'\'lmql\n print("Generating mood for", m)\n """Provide a one sentence instruction that prompts a model to write text that \n is written in a {m} tone, addressing some previously provided question.\\n"""\n "[SUMMARY]\\n"\n return SUMMARY.strip();\n \'\'\'\n\n# nested query to instruct the model answer matching a given mood\n@lmql.query\nasync def mood(m: str):\n \'\'\'lmql\n 
"""\n Instruction: {await mood_description(m)}\n Answer: [RESPONSE]\n """ where stops_at(RESPONSE, ".") and stops_at(RESPONSE, "\\n")\n\n return RESPONSE.strip(); \n \'\'\'\n\n\n# main query (e.g. a chabot conversation)\nargmax\n for q in ["Hi", "Who are you", "How is your day going?"]:\n "Q: {q}\\n"\n # replace the above with "Q: {await input()}\\n" to enable interactive chatting\n "A: [RESPONSE]\\n" where mood(RESPONSE, "loving like a partner")\nfrom\n "chatgpt"',state:""},{name:"\ud83d\udcdd Write A Poem",description:"Insert dynamic instructions during generation.",code:'# nested query to generate a rhyme for the previous line\n@lmql.query\nasync def rhyme():\n \'\'\'\n """\n Instruction: Above is the beginning of the poem. Generate the next verse that rhymes with the last line and has the same number of syllables:\n Response:[VERSE]\n """ where stops_before(VERSE, "\\n")\n return VERSE\n \'\'\'\n\n# nested query to generate the first line of our poem\n@lmql.query\nasync def first_verse():\n \'\'\'\n """\n Instruction: Generate a verse that would be a good first line of a poem.\n Response:[VERSE]\n """ where not "\\n" in VERSE\n return VERSE\n \'\'\'\n\n# vary the poem\nsample(temperature=0.7)\n\n# set the topic\n"A poem on large language models:\\n"\n\n# generate a poem using nested queries\n"[FIRST_VERSE: first_verse]\\n"\nfor i in range(5):\n "[VERSE: rhyme]\\n"',state:""}]},{category:"LLM Reasoning",queries:[{name:"\ud83e\udde0 Chain-Of-Thought",description:"CoT with robust result extraction.",code:'"""Q: It was Sept. 1st, 2021 a week ago. What is the date 10 days ago in MM/DD/YYYY?\nAnswer Choices: (A) 08/29/2021 (B) 08/28/2021 (C) 08/29/1925 (D) 08/30/2021 (E) 05/25/2021 (F) 09/19/2021\n"""\n\n# chain-of-thought instruction\n"A: Let\'s think step by step.\\n"\n\n# free-form reasoning\n"[REASONING]\\n"\n\n# constrain the final answer to robustly extract the result\n"Therefore, among A through F, the answer is[RESULT]" where \\\n RESULT in ["A", "B", "C", "D", "E", "F"]',state:"precomputed/cot.json"},{name:"\ud83d\udc69\u200d\ud83d\udd2c Meta Prompting",description:"Asking an expert to answer.",code:'# use beam search to explore different potential \'expert\' values\nbeam(n=2)\n "Q: What are Large Language Models?\\n\\n"\n\n # prompt for an \'expert\'\n "A good person to answer this question would be[EXPERT]\\n\\n" where \\\n STOPS_AT(EXPERT, ".") and STOPS_AT(EXPERT, "\\n")\n expert_name = EXPERT.rstrip(".\\n")\n\n # use \'expert\' to answer the question\n "For instance,{expert_name} would answer[ANSWER]" where STOPS_AT(ANSWER, ".")\nfrom\n "openai/text-davinci-003"',state:"precomputed/meta.json"}]},{category:"Tool-Augmented Queries",queries:[{name:"\ud83e\uddee Calculator",description:"On-the-fly arithmetic evaluation using Python.",code:'import re\nfrom lmql.demo import gsm8k_samples\n\ndef calc(expr):\n expr = re.sub(r"[^0-9+\\-*/().]", "", expr)\n return eval(expr)\n\nQUESTION = "Josh decides to try flipping a house. \\\nHe buys a house for $80,000 and then puts in $50,000 in repairs. \\\nThis increased the value of the house by 150%. 
\\\nHow much profit did he make?"\n\n# insert few shot demonstrations\n"{gsm8k_samples()}"\n\n# prompt template\n"Q: {QUESTION}\\n"\n"Let\'s think step by step.\\n"\n\n# reasoning loop\nfor i in range(4):\n "[REASON_OR_CALC]" \\\n where STOPS_AT(REASON_OR_CALC, "<<") and \\\n STOPS_AT(REASON_OR_CALC, "So the answer")\n \n if REASON_OR_CALC.endswith("<<"):\n " [EXPR]" where STOPS_AT(EXPR, "=")\n # invoke calculator function\n " {calc(EXPR)}>>"\n elif REASON_OR_CALC.endswith("So the answer"):\n break\n\n# produce the final answer\n"is[RESULT]"',state:"precomputed/calc.json"},{name:"\ud83c\udf0e Wikipedia Search",description:"Interactive LM-driven Wikipedia search.",code:'async def wikipedia(q):\n from lmql.http import fetch\n try:\n q = q.strip("\\n \'.")\n pages = await fetch(f"https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro&explaintext&redirects=1&titles={q}&origin=*", "query.pages")\n return list(pages.values())[0]["extract"][:190]\n except:\n return "No results"\n\n"Q: From which countries did the Norse originate?\\n"\n"Action: Let\'s search Wikipedia for the term \'[TERM]\\n" where STOPS_AT(TERM, "\\n")\nresult = await wikipedia(TERM)\n"Result: {result}\\n"\n"Final Answer:[ANSWER]"',state:"precomputed/wiki.json"},{name:"\ud83d\udcd6 Key-Value Memory",description:"Augment the LM with a key-value storage.",code:'# implement a simple key value storage\n# with two operations\nstorage = {}\ndef assign(key, value): \n # store a value\n storage[key] = value; return f\'{{{key}: "{value}"}}\'\ndef get(key): \n # retrieve a value\n return storage.get(key)\n\n# instructive prompt, instructing the model to how use the storage\n"""In your reasoning you can use actions. You do this as follows:\n`action_name() # result: `\nTo remember things, you can use \'assign\'/\'get\':\n- To remember something:\n`assign("Alice", "banana") # result: "banana"`\n- To retrieve a stored value:\n`get("Alice") # result: "banana"`\nAlways tail calls with " # result". Using these actions, let\'s solve the following question.\\n"""\n\n# actual problem statement\n"""\nQ: Alice, Bob, and Claire are playing a game. At the start \nof the game, they are each holding a ball: Alice has a black \nball, Bob has a brown ball, and Claire has a blue ball. \n\nAs the game progresses, pairs of players trade balls. First, \nBob and Claire swap balls. Then, Alice and Bob swap balls. \nFinally, Claire and Bob swap balls. 
At the end of the game, \nwhat ball does Alice have?\nA: Let\'s think step by step.\n"""\n\n# core reasoning loop\nfor i in range(32):\n "[REASONING]" where STOPS_AT(REASONING, "# result") and \\\n STOPS_BEFORE(REASONING, "Therefore,")\n \n if REASONING.endswith("# result"):\n cmd = REASONING.rsplit("`",1)[-1]\n cmd = cmd[:-len("# result")]\n "{eval(cmd)}`\\n"\n else:\n break\n\n# generate final answer\n"Therefore at the end of the game, Alice has the[OBJECT]" \\\n where STOPS_AT(OBJECT, ".") and STOPS_AT(OBJECT, ",")',state:"precomputed/kv.json"}]},{category:"Decoding",queries:[{name:"\ud83d\udd0d Visualize Decoding",description:"Inspect the decoding tree of beam search.",code:'# beam search explore multiple alternative decoding options\n# to inspect the tree of explored sequences, make sure to open the \'Advanced Mode\'\n# in the LMQL Playground\nbeam(n=4)\n """English to French Translation:\n English: I am going to the store\n French: [TRANSLATION]\n """\nfrom \n "openai/text-davinci-001"\nwhere\n STOPS_AT(TRANSLATION, "\\n")',state:"precomputed/translation.json"},{name:"\ud83d\udcca Distributions",description:"Classification via LM-based conditional distributions.",code:'argmax\n """Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.\\n\n Q: What is the underlying sentiment of this review and why?\\n\n A:[ANALYSIS]\n """\n "Based on this, the overall sentiment of the message \\\n can be considered to be[CLASSIFICATION]" distribution \\\n CLASSIFICATION in [" positive", " neutral", " negative"]\n \n # Output:\n # P(CLASSIFICATION)\n # - positive (*) 0.9997506492815857\n # - neutral 0.0002479301558564076\n # - negative 1.4205625578758162e-06\nfrom \n "openai/text-davinci-003"',state:"precomputed/distribution.json"}]},{category:"Chatbots",requires_input:!0,queries:[{name:"\ud83d\udde3\ufe0f Chatbot",description:"Build a chatbot using interactive querying.",code:'argmax \n # use tags like {:system} to mark prompt segments as system/user/assistant\n "{:system} You are a marketing chatbot for the language model query language (LMQL)."\n for i in range(10):\n # use \'await input()\' to interactive query for user input\n "{:user} {await input()}"\n "{:assistant} [ANSWER]"\nfrom\n "chatgpt"',state:"precomputed/chat.json"}]}]}},3915:function(e,t,n){var r;r=function(e){return function(e){var t={};function n(r){if(t[r])return t[r].exports;var i=t[r]={i:r,l:!1,exports:{}};return e[r].call(i.exports,i,i.exports,n),i.l=!0,i.exports}return n.m=e,n.c=t,n.d=function(e,t,r){n.o(e,t)||Object.defineProperty(e,t,{enumerable:!0,get:r})},n.r=function(e){"undefined"!==typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},n.t=function(e,t){if(1&t&&(e=n(e)),8&t)return e;if(4&t&&"object"===typeof e&&e&&e.__esModule)return e;var r=Object.create(null);if(n.r(r),Object.defineProperty(r,"default",{enumerable:!0,value:e}),2&t&&"string"!=typeof e)for(var i in e)n.d(r,i,function(t){return e[t]}.bind(null,i));return r},n.n=function(e){var t=e&&e.__esModule?function(){return e.default}:function(){return e};return n.d(t,"a",t),t},n.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},n.p="",n(n.s=0)}([function(e,t,n){var r=n(1),i=function(e){e&&e("layout","dagre",r)};"undefined"!==typeof cytoscape&&i(cytoscape),e.exports=i},function(e,t,n){function r(e){return r="function"===typeof Symbol&&"symbol"===typeof Symbol.iterator?function(e){return typeof e}:function(e){return 
e&&"function"===typeof Symbol&&e.constructor===Symbol&&e!==Symbol.prototype?"symbol":typeof e},r(e)}var i=function(e){return"function"===typeof e},o=n(2),a=n(3),s=n(4);function l(e){this.options=a({},o,e)}l.prototype.run=function(){var e=this.options,t=e.cy,n=e.eles,o=function(e,t){return i(t)?t.apply(e,[e]):t},a=e.boundingBox||{x1:0,y1:0,w:t.width(),h:t.height()};void 0===a.x2&&(a.x2=a.x1+a.w),void 0===a.w&&(a.w=a.x2-a.x1),void 0===a.y2&&(a.y2=a.y1+a.h),void 0===a.h&&(a.h=a.y2-a.y1);var l=new s.graphlib.Graph({multigraph:!0,compound:!0}),u={},c=function(e,t){null!=t&&(u[e]=t)};c("nodesep",e.nodeSep),c("edgesep",e.edgeSep),c("ranksep",e.rankSep),c("rankdir",e.rankDir),c("align",e.align),c("ranker",e.ranker),c("acyclicer",e.acyclicer),l.setGraph(u),l.setDefaultEdgeLabel((function(){return{}})),l.setDefaultNodeLabel((function(){return{}}));var d=n.nodes();i(e.sort)&&(d=d.sort(e.sort));for(var f=0;f1?t-1:0),r=1;re.length)&&(t=e.length);for(var n=0,r=new Array(t);nt?1:0},J=null!=Object.assign?Object.assign.bind(Object):function(e){for(var t=arguments,n=1;n255)return;t.push(Math.floor(o))}var a=r[1]||r[2]||r[3],s=r[1]&&r[2]&&r[3];if(a&&!s)return;var l=n[4];if(void 0!==l){if((l=parseFloat(l))<0||l>1)return;t.push(l)}}return t}(e)||function(e){var t,n,r,i,o,a,s,l;function u(e,t,n){return n<0&&(n+=1),n>1&&(n-=1),n<1/6?e+6*(t-e)*n:n<.5?t:n<2/3?e+(t-e)*(2/3-n)*6:e}var c=new RegExp("^"+$+"$").exec(e);if(c){if((n=parseInt(c[1]))<0?n=(360- -1*n%360)%360:n>360&&(n%=360),n/=360,(r=parseFloat(c[2]))<0||r>100)return;if(r/=100,(i=parseFloat(c[3]))<0||i>100)return;if(i/=100,void 0!==(o=c[4])&&((o=parseFloat(o))<0||o>1))return;if(0===r)a=s=l=Math.round(255*i);else{var d=i<.5?i*(1+r):i+r-i*r,f=2*i-d;a=Math.round(255*u(f,d,n+1/3)),s=Math.round(255*u(f,d,n)),l=Math.round(255*u(f,d,n-1/3))}t=[a,s,l,o]}return 
t}(e)},te={transparent:[0,0,0,0],aliceblue:[240,248,255],antiquewhite:[250,235,215],aqua:[0,255,255],aquamarine:[127,255,212],azure:[240,255,255],beige:[245,245,220],bisque:[255,228,196],black:[0,0,0],blanchedalmond:[255,235,205],blue:[0,0,255],blueviolet:[138,43,226],brown:[165,42,42],burlywood:[222,184,135],cadetblue:[95,158,160],chartreuse:[127,255,0],chocolate:[210,105,30],coral:[255,127,80],cornflowerblue:[100,149,237],cornsilk:[255,248,220],crimson:[220,20,60],cyan:[0,255,255],darkblue:[0,0,139],darkcyan:[0,139,139],darkgoldenrod:[184,134,11],darkgray:[169,169,169],darkgreen:[0,100,0],darkgrey:[169,169,169],darkkhaki:[189,183,107],darkmagenta:[139,0,139],darkolivegreen:[85,107,47],darkorange:[255,140,0],darkorchid:[153,50,204],darkred:[139,0,0],darksalmon:[233,150,122],darkseagreen:[143,188,143],darkslateblue:[72,61,139],darkslategray:[47,79,79],darkslategrey:[47,79,79],darkturquoise:[0,206,209],darkviolet:[148,0,211],deeppink:[255,20,147],deepskyblue:[0,191,255],dimgray:[105,105,105],dimgrey:[105,105,105],dodgerblue:[30,144,255],firebrick:[178,34,34],floralwhite:[255,250,240],forestgreen:[34,139,34],fuchsia:[255,0,255],gainsboro:[220,220,220],ghostwhite:[248,248,255],gold:[255,215,0],goldenrod:[218,165,32],gray:[128,128,128],grey:[128,128,128],green:[0,128,0],greenyellow:[173,255,47],honeydew:[240,255,240],hotpink:[255,105,180],indianred:[205,92,92],indigo:[75,0,130],ivory:[255,255,240],khaki:[240,230,140],lavender:[230,230,250],lavenderblush:[255,240,245],lawngreen:[124,252,0],lemonchiffon:[255,250,205],lightblue:[173,216,230],lightcoral:[240,128,128],lightcyan:[224,255,255],lightgoldenrodyellow:[250,250,210],lightgray:[211,211,211],lightgreen:[144,238,144],lightgrey:[211,211,211],lightpink:[255,182,193],lightsalmon:[255,160,122],lightseagreen:[32,178,170],lightskyblue:[135,206,250],lightslategray:[119,136,153],lightslategrey:[119,136,153],lightsteelblue:[176,196,222],lightyellow:[255,255,224],lime:[0,255,0],limegreen:[50,205,50],linen:[250,240,230],magenta:[255,0,255],maroon:[128,0,0],mediumaquamarine:[102,205,170],mediumblue:[0,0,205],mediumorchid:[186,85,211],mediumpurple:[147,112,219],mediumseagreen:[60,179,113],mediumslateblue:[123,104,238],mediumspringgreen:[0,250,154],mediumturquoise:[72,209,204],mediumvioletred:[199,21,133],midnightblue:[25,25,112],mintcream:[245,255,250],mistyrose:[255,228,225],moccasin:[255,228,181],navajowhite:[255,222,173],navy:[0,0,128],oldlace:[253,245,230],olive:[128,128,0],olivedrab:[107,142,35],orange:[255,165,0],orangered:[255,69,0],orchid:[218,112,214],palegoldenrod:[238,232,170],palegreen:[152,251,152],paleturquoise:[175,238,238],palevioletred:[219,112,147],papayawhip:[255,239,213],peachpuff:[255,218,185],peru:[205,133,63],pink:[255,192,203],plum:[221,160,221],powderblue:[176,224,230],purple:[128,0,128],red:[255,0,0],rosybrown:[188,143,143],royalblue:[65,105,225],saddlebrown:[139,69,19],salmon:[250,128,114],sandybrown:[244,164,96],seagreen:[46,139,87],seashell:[255,245,238],sienna:[160,82,45],silver:[192,192,192],skyblue:[135,206,235],slateblue:[106,90,205],slategray:[112,128,144],slategrey:[112,128,144],snow:[255,250,250],springgreen:[0,255,127],steelblue:[70,130,180],tan:[210,180,140],teal:[0,128,128],thistle:[216,191,216],tomato:[255,99,71],turquoise:[64,224,208],violet:[238,130,238],wheat:[245,222,179],white:[255,255,255],whitesmoke:[245,245,245],yellow:[255,255,0],yellowgreen:[154,205,50]},ne=function(e){for(var t=e.map,n=e.keys,r=n.length,i=0;i1&&void 0!==arguments[1]?arguments[1]:ue;!(t=e.next()).done;)n=65599*n+t.value|0;return 
n},fe=function(e){return 65599*(arguments.length>1&&void 0!==arguments[1]?arguments[1]:ue)+e|0},he=function(e){var t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:ce;return(t<<5)+t+e|0},pe=function(e){return 2097152*e[0]+e[1]},ge=function(e,t){return[fe(e[0],t[0]),he(e[1],t[1])]},ve=function(e,t){var n={value:0,done:!1},r=0,i=e.length;return de({next:function(){return r=0&&(e[r]!==t||(e.splice(r,1),!n));r--);},Re=function(e){e.splice(0,e.length)},Ie=function(e,t,n){return n&&(t=Z(n,t)),e[t]},Be=function(e,t,n,r){n&&(t=Z(n,t)),e[t]=r},ze="undefined"!==typeof Map?Map:function(){function e(){g(this,e),this._obj={}}return m(e,[{key:"set",value:function(e,t){return this._obj[e]=t,this}},{key:"delete",value:function(e){return this._obj[e]=void 0,this}},{key:"clear",value:function(){this._obj={}}},{key:"has",value:function(e){return void 0!==this._obj[e]}},{key:"get",value:function(e){return this._obj[e]}}]),e}(),Fe=function(){function e(t){if(g(this,e),this._obj=Object.create(null),this.size=0,null!=t){var n;n=null!=t.instanceString&&t.instanceString()===this.instanceString()?t.toArray():t;for(var r=0;r2&&void 0!==arguments[2])||arguments[2];if(void 0!==e&&void 0!==t&&I(e)){var r=t.group;if(null==r&&(r=t.data&&null!=t.data.source&&null!=t.data.target?"edges":"nodes"),"nodes"===r||"edges"===r){this.length=1,this[0]=this;var i=this._private={cy:e,single:!0,data:t.data||{},position:t.position||{x:0,y:0},autoWidth:void 0,autoHeight:void 0,autoPadding:void 0,compoundBoundsClean:!1,listeners:[],group:r,style:{},rstyle:{},styleCxts:[],styleKeys:{},removed:!0,selected:!!t.selected,selectable:void 0===t.selectable||!!t.selectable,locked:!!t.locked,grabbed:!1,grabbable:void 0===t.grabbable||!!t.grabbable,pannable:void 0===t.pannable?"edges"===r:!!t.pannable,active:!1,classes:new qe,animation:{current:[],queue:[]},rscratch:{},scratch:t.scratch||{},edges:[],children:[],parent:t.parent&&t.parent.isNode()?t.parent:null,traversalCache:{},backgrounding:!1,bbCache:null,bbCacheShift:{x:0,y:0},bodyBounds:null,overlayBounds:null,labelBounds:{all:null,source:null,target:null,main:null},arrowBounds:{source:null,target:null,"mid-source":null,"mid-target":null}};if(null==i.position.x&&(i.position.x=0),null==i.position.y&&(i.position.y=0),t.renderedPosition){var o=t.renderedPosition,a=e.pan(),s=e.zoom();i.position={x:(o.x-a.x)/s,y:(o.y-a.y)/s}}var l=[];M(t.classes)?l=t.classes:T(t.classes)&&(l=t.classes.split(/\s+/));for(var u=0,c=l.length;u0;){var _=y.pop(),k=v(_),E=_.id();if(f[E]=k,k!==1/0)for(var S=_.neighborhood().intersect(p),C=0;C0)for(n.unshift(t);d[i];){var o=d[i];n.unshift(o.edge),n.unshift(o.node),i=(r=o.node).id()}return a.spawn(n)}}}},Ke={kruskal:function(e){e=e||function(e){return 1};for(var t=this.byGroup(),n=t.nodes,r=t.edges,i=n.length,o=new Array(i),a=n,s=function(e){for(var t=0;t0;){if(l=v.pop(),u=l.id(),m.delete(u),_++,u===f){for(var k=[],E=i,S=f,C=b[S];k.unshift(E),null!=C&&k.unshift(C),null!=(E=y[S]);)C=b[S=E.id()];return{found:!0,distance:h[u],path:this.spawn(k),steps:_}}g[u]=!0;for(var P=l._private.edges,T=0;TC&&(h[S]=C,m[S]=E,y[S]=x),!i){var P=E*u+k;!i&&h[P]>C&&(h[P]=C,m[P]=k,y[P]=x)}}}for(var O=0;O1&&void 0!==arguments[1]?arguments[1]:o,r=[],i=y(e);;){if(null==i)return t.spawn();var a=m(i),l=a.edge,u=a.pred;if(r.unshift(i[0]),i.same(n)&&r.length>0)break;null!=l&&r.unshift(l),i=u}return s.spawn(r)},hasNegativeWeightCycle:p,negativeWeightCycles:g}}},et=Math.sqrt(2),tt=function(e,t,n){0===n.length&&Pe("Karger-Stein must be run on a connected (sub)graph");for(var 
r=n[e],i=r[1],o=r[2],a=t[i],s=t[o],l=n,u=l.length-1;u>=0;u--){var c=l[u],d=c[1],f=c[2];(t[d]===a&&t[f]===s||t[d]===s&&t[f]===a)&&l.splice(u,1)}for(var h=0;hr;){var i=Math.floor(Math.random()*t.length);t=tt(i,e,t),n--}return t},rt={kargerStein:function(){var e=this,t=this.byGroup(),n=t.nodes,r=t.edges;r.unmergeBy((function(e){return e.isLoop()}));var i=n.length,o=r.length,a=Math.ceil(Math.pow(Math.log(i)/Math.LN2,2)),s=Math.floor(i/et);if(!(i<2)){for(var l=[],u=0;u0?1:e<0?-1:0},ct=function(e,t){return Math.sqrt(dt(e,t))},dt=function(e,t){var n=t.x-e.x,r=t.y-e.y;return n*n+r*r},ft=function(e){for(var t=e.length,n=0,r=0;r=e.x1&&e.y2>=e.y1)return{x1:e.x1,y1:e.y1,x2:e.x2,y2:e.y2,w:e.x2-e.x1,h:e.y2-e.y1};if(null!=e.w&&null!=e.h&&e.w>=0&&e.h>=0)return{x1:e.x1,y1:e.y1,x2:e.x1+e.w,y2:e.y1+e.h,w:e.w,h:e.h}}},mt=function(e,t,n){e.x1=Math.min(e.x1,t),e.x2=Math.max(e.x2,t),e.w=e.x2-e.x1,e.y1=Math.min(e.y1,n),e.y2=Math.max(e.y2,n),e.h=e.y2-e.y1},yt=function(e){var t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:0;return e.x1-=t,e.x2+=t,e.y1-=t,e.y2+=t,e.w=e.x2-e.x1,e.h=e.y2-e.y1,e},bt=function(e){var t,n,r,i,o=arguments.length>1&&void 0!==arguments[1]?arguments[1]:[0];if(1===o.length)t=n=r=i=o[0];else if(2===o.length)t=r=o[0],i=n=o[1];else if(4===o.length){var a=b(o,4);t=a[0],n=a[1],r=a[2],i=a[3]}return e.x1-=i,e.x2+=n,e.y1-=t,e.y2+=r,e.w=e.x2-e.x1,e.h=e.y2-e.y1,e},xt=function(e,t){e.x1=t.x1,e.y1=t.y1,e.x2=t.x2,e.y2=t.y2,e.w=e.x2-e.x1,e.h=e.y2-e.y1},wt=function(e,t){return!(e.x1>t.x2)&&(!(t.x1>e.x2)&&(!(e.x2t.y2)&&!(t.y1>e.y2)))))))},_t=function(e,t,n){return e.x1<=t&&t<=e.x2&&e.y1<=n&&n<=e.y2},kt=function(e,t){return _t(e,t.x1,t.y1)&&_t(e,t.x2,t.y2)},Et=function(e,t,n,r,i,o,a){var s,l=Vt(i,o),u=i/2,c=o/2,d=r-c-a;if((s=Rt(e,t,n,r,n-u+l-a,d,n+u-l+a,d,!1)).length>0)return s;var f=n+u+a;if((s=Rt(e,t,n,r,f,r-c+l-a,f,r+c-l+a,!1)).length>0)return s;var h=r+c+a;if((s=Rt(e,t,n,r,n-u+l-a,h,n+u-l+a,h,!1)).length>0)return s;var p,g=n-u-a;if((s=Rt(e,t,n,r,g,r-c+l-a,g,r+c-l+a,!1)).length>0)return s;var v=n-u+l,m=r-c+l;if((p=At(e,t,n,r,v,m,l+a)).length>0&&p[0]<=v&&p[1]<=m)return[p[0],p[1]];var y=n+u-l,b=r-c+l;if((p=At(e,t,n,r,y,b,l+a)).length>0&&p[0]>=y&&p[1]<=b)return[p[0],p[1]];var x=n+u-l,w=r+c-l;if((p=At(e,t,n,r,x,w,l+a)).length>0&&p[0]>=x&&p[1]>=w)return[p[0],p[1]];var _=n-u+l,k=r+c-l;return(p=At(e,t,n,r,_,k,l+a)).length>0&&p[0]<=_&&p[1]>=k?[p[0],p[1]]:[]},St=function(e,t,n,r,i,o,a){var s=a,l=Math.min(n,i),u=Math.max(n,i),c=Math.min(r,o),d=Math.max(r,o);return l-s<=e&&e<=u+s&&c-s<=t&&t<=d+s},Ct=function(e,t,n,r,i,o,a,s,l){var u=Math.min(n,a,i)-l,c=Math.max(n,a,i)+l,d=Math.min(r,s,o)-l,f=Math.max(r,s,o)+l;return!(ec||tf)},Pt=function(e,t,n,r,i,o,a,s){var l=[];!function(e,t,n,r,i){var o,a,s,l,u,c,d,f;0===e&&(e=1e-5),s=-27*(r/=e)+(t/=e)*(9*(n/=e)-t*t*2),o=(a=(3*n-t*t)/9)*a*a+(s/=54)*s,i[1]=0,d=t/3,o>0?(u=(u=s+Math.sqrt(o))<0?-Math.pow(-u,1/3):Math.pow(u,1/3),c=(c=s-Math.sqrt(o))<0?-Math.pow(-c,1/3):Math.pow(c,1/3),i[0]=-d+u+c,d+=(u+c)/2,i[4]=i[2]=-d,d=Math.sqrt(3)*(-c+u)/2,i[3]=d,i[5]=-d):(i[5]=i[3]=0,0===o?(f=s<0?-Math.pow(-s,1/3):Math.pow(s,1/3),i[0]=2*f-d,i[4]=i[2]=-(f+d)):(l=(a=-a)*a*a,l=Math.acos(s/Math.sqrt(l)),f=2*Math.sqrt(a),i[0]=-d+f*Math.cos(l/3),i[2]=-d+f*Math.cos((l+2*Math.PI)/3),i[4]=-d+f*Math.cos((l+4*Math.PI)/3)))}(1*n*n-4*n*i+2*n*a+4*i*i-4*i*a+a*a+r*r-4*r*o+2*r*s+4*o*o-4*o*s+s*s,9*n*i-3*n*n-3*n*a-6*i*i+3*i*a+9*r*o-3*r*r-3*r*s-6*o*o+3*o*s,3*n*n-6*n*i+n*a-n*e+2*i*i+2*i*e-a*e+3*r*r-6*r*o+r*s-r*t+2*o*o+2*o*t-s*t,1*n*i-n*n+n*e-i*e+r*o-r*r+r*t-o*t,l);for(var 
u=[],c=0;c<6;c+=2)Math.abs(l[c+1])<1e-7&&l[c]>=0&&l[c]<=1&&u.push(l[c]);u.push(1),u.push(0);for(var d,f,h,p=-1,g=0;g=0?hl?(e-i)*(e-i)+(t-o)*(t-o):u-d},Ot=function(e,t,n){for(var r,i,o,a,s=0,l=0;l=e&&e>=o||r<=e&&e<=o))continue;(e-r)/(o-r)*(a-i)+i>t&&s++}return s%2!==0},Mt=function(e,t,n,r,i,o,a,s,l){var u,c=new Array(n.length);null!=s[0]?(u=Math.atan(s[1]/s[0]),s[0]<0?u+=Math.PI/2:u=-u-Math.PI/2):u=s;for(var d,f=Math.cos(-u),h=Math.sin(-u),p=0;p0){var g=Nt(c,-l);d=Dt(g)}else d=c;return Ot(e,t,d)},Dt=function(e){for(var t,n,r,i,o,a,s,l,u=new Array(e.length/2),c=0;c=0&&p<=1&&v.push(p),g>=0&&g<=1&&v.push(g),0===v.length)return[];var m=v[0]*s[0]+e,y=v[0]*s[1]+t;return v.length>1?v[0]==v[1]?[m,y]:[m,y,v[1]*s[0]+e,v[1]*s[1]+t]:[m,y]},jt=function(e,t,n){return t<=e&&e<=n||n<=e&&e<=t?e:e<=t&&t<=n||n<=t&&t<=e?t:n},Rt=function(e,t,n,r,i,o,a,s,l){var u=e-i,c=n-e,d=a-i,f=t-o,h=r-t,p=s-o,g=d*f-p*u,v=c*f-h*u,m=p*c-d*h;if(0!==m){var y=g/m,b=v/m,x=-.001;return x<=y&&y<=1.001&&x<=b&&b<=1.001||l?[e+y*c,t+y*h]:[]}return 0===g||0===v?jt(e,n,a)===a?[a,s]:jt(e,n,i)===i?[i,o]:jt(i,a,n)===n?[n,r]:[]:[]},It=function(e,t,n,r,i,o,a,s){var l,u,c,d,f,h,p=[],g=new Array(n.length),v=!0;if(null==o&&(v=!1),v){for(var m=0;m0){var y=Nt(g,-s);u=Dt(y)}else u=g}else u=n;for(var b=0;bu&&(u=t)},f=function(e){return l[e]},h=0;h0?x.edgesTo(b)[0]:b.edgesTo(x)[0];var _=r(w);b=b.id(),h[b]>h[m]+_&&(h[b]=h[m]+_,p.nodes.indexOf(b)<0?p.push(b):p.updateItem(b),u[b]=0,l[b]=[]),h[b]==h[m]+_&&(u[b]=u[b]+u[m],l[b].push(m))}else for(var k=0;k0;){for(var P=n.pop(),T=0;T0&&a.push(n[s]);0!==a.length&&i.push(r.collection(a))}return i}(c,l,t,r);return b=function(e){for(var t=0;t5&&void 0!==arguments[5]?arguments[5]:cn,a=r,s=0;s=2?vn(e,t,n,0,hn,pn):vn(e,t,n,0,fn)},squaredEuclidean:function(e,t,n){return vn(e,t,n,0,hn)},manhattan:function(e,t,n){return vn(e,t,n,0,fn)},max:function(e,t,n){return vn(e,t,n,-1/0,gn)}};function yn(e,t,n,r,i,o){var a;return a=O(e)?e:mn[e]||mn.euclidean,0===t&&O(e)?a(i,o):a(t,n,r,i,o)}mn["squared-euclidean"]=mn.squaredEuclidean,mn.squaredeuclidean=mn.squaredEuclidean;var bn=Ae({k:2,m:2,sensitivityThreshold:1e-4,distance:"euclidean",maxIterations:10,attributes:[],testMode:!1,testCentroids:null}),xn=function(e){return bn(e)},wn=function(e,t,n,r,i){var o="kMedoids"!==i?function(e){return n[e]}:function(e){return r[e](n)},a=n,s=t;return yn(e,r.length,o,(function(e){return r[e](t)}),a,s)},_n=function(e,t,n){for(var r=n.length,i=new Array(r),o=new Array(r),a=new Array(t),s=null,l=0;ln)return!1}return!0},Cn=function(e,t,n){for(var r=0;ri&&(i=t[l][u],o=u);a[o].push(e[l])}for(var c=0;c=i.threshold||"dendrogram"===i.mode&&1===e.length)return!1;var h,p=t[a],g=t[r[a]];h="dendrogram"===i.mode?{left:p,right:g,key:p.key}:{value:p.value.concat(g.value),key:p.key},e[p.index]=h,e.splice(g.index,1),t[p.key]=h;for(var v=0;vn[g.key][m.key]&&(o=n[g.key][m.key])):"max"===i.linkage?(o=n[p.key][m.key],n[p.key][m.key]1&&void 0!==arguments[1]?arguments[1]:0,n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:e.length,r=!(arguments.length>4&&void 0!==arguments[4])||arguments[4],i=!(arguments.length>5&&void 0!==arguments[5])||arguments[5];arguments.length>3&&void 0!==arguments[3]&&!arguments[3]?(n0&&e.splice(0,t)):e=e.slice(t,n);for(var o=0,a=e.length-1;a>=0;a--){var s=e[a];i?isFinite(s)||(e[a]=-1/0,o++):e.splice(a,1)}r&&e.sort((function(e,t){return e-t}));var l=e.length,u=Math.floor(l/2);return l%2!==0?e[u+1+o]:(e[u-1+o]+e[u+o])/2}(e):"mean"===t?function(e){for(var t=arguments.length>1&&void 
0!==arguments[1]?arguments[1]:0,n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:e.length,r=0,i=0,o=t;o1&&void 0!==arguments[1]?arguments[1]:0,n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:e.length,r=1/0,i=t;i1&&void 0!==arguments[1]?arguments[1]:0,n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:e.length,r=-1/0,i=t;ia&&(o=l,a=t[i*e+l])}o>0&&r.push(o)}for(var u=0;u=C?(P=C,C=O,T=M):O>P&&(P=O);for(var D=0;D0?1:0;k[_%u.minIterations*t+B]=z,I+=z}if(I>0&&(_>=u.minIterations-1||_==u.maxIterations-1)){for(var F=0,q=0;q0&&r.push(i);return r}(t,o,a),W=function(e,t,n){for(var r=Wn(e,t,n),i=0;il&&(s=u,l=c)}n[i]=o[s]}return Wn(e,t,n)}(t,r,U),Z={},H=0;H1||a>1)&&(u=!0),c[t]=[],e.outgoers().forEach((function(e){e.isEdge()&&c[t].push(e.id())}))}else d[t]=[void 0,e.target().id()]})):l.forEach((function(e){var t=e.id();e.isNode()?(e.degree(!0)%2&&(n?r?u=!0:r=t:n=t),c[t]=[],e.connectedEdges().forEach((function(e){return c[t].push(e.id())}))):d[t]=[e.source().id(),e.target().id()]}));var f={found:!1,trail:void 0};if(u)return f;if(r&&n)if(s){if(i&&r!=i)return f;i=r}else{if(i&&r!=i&&n!=i)return f;i||(i=r)}else i||(i=l[0].id());var h=function(e){for(var t,n,r,i=e,o=[e];c[i].length;)t=c[i].shift(),n=d[t][0],i!=(r=d[t][1])?(c[r]=c[r].filter((function(e){return e!=t})),i=r):s||i==n||(c[n]=c[n].filter((function(e){return e!=t})),i=n),o.unshift(t),o.unshift(i);return o},p=[],g=[];for(g=h(i);1!=g.length;)0==c[g[0]].length?(p.unshift(l.getElementById(g.shift())),p.unshift(l.getElementById(g.shift()))):g=h(g.shift()).concat(g);for(var v in p.unshift(l.getElementById(g.shift())),c)if(c[v].length)return f;return f.found=!0,f.trail=this.spawn(p,!0),f}},Xn=function(){var e=this,t={},n=0,r=0,i=[],o=[],a={},s=function s(l,u,c){l===c&&(r+=1),t[u]={id:n,low:n++,cutVertex:!1};var d,f,h,p,g=e.getElementById(u).connectedEdges().intersection(e);0===g.size()?i.push(e.spawn(e.getElementById(u))):g.forEach((function(n){d=n.source().id(),f=n.target().id(),(h=d===u?f:d)!==c&&(p=n.id(),a[p]||(a[p]=!0,o.push({x:u,y:h,edge:n})),h in t?t[u].low=Math.min(t[u].low,t[h].id):(s(l,h,u),t[u].low=Math.min(t[u].low,t[h].low),t[u].id<=t[h].low&&(t[u].cutVertex=!0,function(n,r){for(var a=o.length-1,s=[],l=e.spawn();o[a].x!=n||o[a].y!=r;)s.push(o.pop().edge),a--;s.push(o.pop().edge),s.forEach((function(n){var r=n.connectedNodes().intersection(e);l.merge(n),r.forEach((function(n){var r=n.id(),i=n.connectedEdges().intersection(e);l.merge(n),t[r].cutVertex?l.merge(i.filter((function(e){return e.isLoop()}))):l.merge(i)}))})),i.push(l)}(u,h))))}))};e.forEach((function(e){if(e.isNode()){var n=e.id();n in t||(r=0,s(n,n),t[n].cutVertex=r>1)}}));var l=Object.keys(t).filter((function(e){return t[e].cutVertex})).map((function(t){return e.getElementById(t)}));return{cut:e.spawn(l),components:i}},$n=function(){var e=this,t={},n=0,r=[],i=[],o=e.spawn(e),a=function a(s){if(i.push(s),t[s]={index:n,low:n++,explored:!1},e.getElementById(s).connectedEdges().intersection(e).forEach((function(e){var n=e.target().id();n!==s&&(n in t||a(n),t[n].explored||(t[s].low=Math.min(t[s].low,t[n].low)))})),t[s].index===t[s].low){for(var l=e.spawn();;){var u=i.pop();if(l.merge(e.getElementById(u)),t[u].low=t[s].index,t[u].explored=!0,u===s)break}var c=l.edgesWith(l),d=l.merge(c);r.push(d),o=o.difference(d)}};return e.forEach((function(e){if(e.isNode()){var n=e.id();n in 
t||a(n)}})),{cut:o,components:r}},Gn={};[We,He,Ke,Xe,Ge,Je,rt,Ht,Yt,$t,Qt,un,Nn,Fn,Hn,Yn,{hopcroftTarjanBiconnected:Xn,htbc:Xn,htb:Xn,hopcroftTarjanBiconnectedComponents:Xn},{tarjanStronglyConnected:$n,tsc:$n,tscc:$n,tarjanStronglyConnectedComponents:$n}].forEach((function(e){J(Gn,e)}));var Qn=function e(t){if(!(this instanceof e))return new e(t);this.id="Thenable/1.0.7",this.state=0,this.fulfillValue=void 0,this.rejectReason=void 0,this.onFulfilled=[],this.onRejected=[],this.proxy={then:this.then.bind(this)},"function"===typeof t&&t.call(this,this.fulfill.bind(this),this.reject.bind(this))};Qn.prototype={fulfill:function(e){return Jn(this,1,"fulfillValue",e)},reject:function(e){return Jn(this,2,"rejectReason",e)},then:function(e,t){var n=this,r=new Qn;return n.onFulfilled.push(nr(e,r,"fulfill")),n.onRejected.push(nr(t,r,"reject")),er(n),r.proxy}};var Jn=function(e,t,n,r){return 0===e.state&&(e.state=t,e[n]=r,er(e)),e},er=function(e){1===e.state?tr(e,"onFulfilled",e.fulfillValue):2===e.state&&tr(e,"onRejected",e.rejectReason)},tr=function(e,t,n){if(0!==e[t].length){var r=e[t];e[t]=[];var i=function(){for(var e=0;e0:void 0}},clearQueue:function(){return function(){var e=this,t=void 0!==e.length?e:[e];if(!(this._private.cy||this).styleEnabled())return this;for(var n=0;n0&&this.spawn(r).updateStyle().emit("class"),t},addClass:function(e){return this.toggleClass(e,!0)},hasClass:function(e){var t=this[0];return null!=t&&t._private.classes.has(e)},toggleClass:function(e,t){M(e)||(e=e.match(/\S+/g)||[]);for(var n=this,r=void 0===t,i=[],o=0,a=n.length;o0&&this.spawn(i).updateStyle().emit("class"),n},removeClass:function(e){return this.toggleClass(e,!1)},flashClass:function(e,t){var n=this;if(null==t)t=250;else if(0===t)return n;return n.addClass(e),setTimeout((function(){n.removeClass(e)}),t),n}};fr.className=fr.classNames=fr.classes;var hr={metaChar:"[\\!\\\"\\#\\$\\%\\&\\'\\(\\)\\*\\+\\,\\.\\/\\:\\;\\<\\=\\>\\?\\@\\[\\]\\^\\`\\{\\|\\}\\~]",comparatorOp:"=|\\!=|>|>=|<|<=|\\$=|\\^=|\\*=",boolOp:"\\?|\\!|\\^",string:"\"(?:\\\\\"|[^\"])*\"|'(?:\\\\'|[^'])*'",number:K,meta:"degree|indegree|outdegree",separator:"\\s*,\\s*",descendant:"\\s+",child:"\\s+>\\s+",subject:"\\$",group:"node|edge|\\*",directedEdge:"\\s+->\\s+",undirectedEdge:"\\s+<->\\s+"};hr.variable="(?:[\\w-.]|(?:\\\\"+hr.metaChar+"))+",hr.className="(?:[\\w-]|(?:\\\\"+hr.metaChar+"))+",hr.value=hr.string+"|"+hr.number,hr.id=hr.variable,function(){var e,t,n;for(e=hr.comparatorOp.split("|"),n=0;n=0||"="!==t&&(hr.comparatorOp+="|\\!"+t)}();var pr=0,gr=1,vr=2,mr=3,yr=4,br=5,xr=6,wr=7,_r=8,kr=9,Er=10,Sr=11,Cr=12,Pr=13,Tr=14,Or=15,Mr=16,Dr=17,Nr=18,Lr=19,Ar=20,jr=[{selector:":selected",matches:function(e){return e.selected()}},{selector:":unselected",matches:function(e){return!e.selected()}},{selector:":selectable",matches:function(e){return e.selectable()}},{selector:":unselectable",matches:function(e){return!e.selectable()}},{selector:":locked",matches:function(e){return e.locked()}},{selector:":unlocked",matches:function(e){return!e.locked()}},{selector:":visible",matches:function(e){return e.visible()}},{selector:":hidden",matches:function(e){return!e.visible()}},{selector:":transparent",matches:function(e){return e.transparent()}},{selector:":grabbed",matches:function(e){return e.grabbed()}},{selector:":free",matches:function(e){return!e.grabbed()}},{selector:":removed",matches:function(e){return e.removed()}},{selector:":inside",matches:function(e){return!e.removed()}},{selector:":grabbable",matches:function(e){return 
e.grabbable()}},{selector:":ungrabbable",matches:function(e){return!e.grabbable()}},{selector:":animated",matches:function(e){return e.animated()}},{selector:":unanimated",matches:function(e){return!e.animated()}},{selector:":parent",matches:function(e){return e.isParent()}},{selector:":childless",matches:function(e){return e.isChildless()}},{selector:":child",matches:function(e){return e.isChild()}},{selector:":orphan",matches:function(e){return e.isOrphan()}},{selector:":nonorphan",matches:function(e){return e.isChild()}},{selector:":compound",matches:function(e){return e.isNode()?e.isParent():e.source().isParent()||e.target().isParent()}},{selector:":loop",matches:function(e){return e.isLoop()}},{selector:":simple",matches:function(e){return e.isSimple()}},{selector:":active",matches:function(e){return e.active()}},{selector:":inactive",matches:function(e){return!e.active()}},{selector:":backgrounding",matches:function(e){return e.backgrounding()}},{selector:":nonbackgrounding",matches:function(e){return!e.backgrounding()}}].sort((function(e,t){return function(e,t){return-1*Q(e,t)}(e.selector,t.selector)})),Rr=function(){for(var e,t={},n=0;n0&&u.edgeCount>0)return Oe("The selector `"+e+"` is invalid because it uses both a compound selector and an edge selector"),!1;if(u.edgeCount>1)return Oe("The selector `"+e+"` is invalid because it uses multiple edge selectors"),!1;1===u.edgeCount&&Oe("The selector `"+e+"` is deprecated. Edge selectors do not take effect on changes to source and target nodes after an edge is added, for performance reasons. Use a class or data selector on edges instead, updating the class or data of an edge when your app detects a change in source or target nodes.")}return!0},toString:function(){if(null!=this.toStringCache)return this.toStringCache;for(var e=function(e){return null==e?"":e},t=function(t){return T(t)?'"'+t+'"':e(t)},n=function(e){return" "+e+" "},r=function(r,o){var a=r.type,s=r.value;switch(a){case pr:var l=e(s);return l.substring(0,l.length-1);case mr:var u=r.field,c=r.operator;return"["+u+n(e(c))+t(s)+"]";case br:var d=r.operator,f=r.field;return"["+e(d)+f+"]";case yr:return"["+r.field+"]";case xr:var h=r.operator;return"[["+r.field+n(e(h))+t(s)+"]]";case wr:return s;case _r:return"#"+s;case kr:return"."+s;case Dr:case Or:return i(r.parent,o)+n(">")+i(r.child,o);case Nr:case Mr:return i(r.ancestor,o)+" "+i(r.descendant,o);case Lr:var p=i(r.left,o),g=i(r.subject,o),v=i(r.right,o);return p+(p.length>0?" 
":"")+g+v;case Ar:return""}},i=function(e,t){return e.checks.reduce((function(n,i,o){return n+(t===e&&0===o?"$":"")+r(i,t)}),"")},o="",a=0;a1&&a=0&&(t=t.replace("!",""),c=!0),t.indexOf("@")>=0&&(t=t.replace("@",""),u=!0),(a||l||u)&&(i=a||s?""+e:"",o=""+n),u&&(e=i=i.toLowerCase(),n=o=o.toLowerCase()),t){case"*=":r=i.indexOf(o)>=0;break;case"$=":r=i.indexOf(o,i.length-o.length)>=0;break;case"^=":r=0===i.indexOf(o);break;case"=":r=e===n;break;case">":d=!0,r=e>n;break;case">=":d=!0,r=e>=n;break;case"<":d=!0,r=e0;){var u=i.shift();t(u),o.add(u.id()),a&&r(i,o,u)}return e}function ni(e,t,n){if(n.isParent())for(var r=n._private.children,i=0;i1&&void 0!==arguments[1])||arguments[1],ni)},ei.forEachUp=function(e){return ti(this,e,!(arguments.length>1&&void 0!==arguments[1])||arguments[1],ri)},ei.forEachUpAndDown=function(e){return ti(this,e,!(arguments.length>1&&void 0!==arguments[1])||arguments[1],ii)},ei.ancestors=ei.parents,(Gr=Qr={data:cr.data({field:"data",bindingEvent:"data",allowBinding:!0,allowSetting:!0,settingEvent:"data",settingTriggersEvent:!0,triggerFnName:"trigger",allowGetting:!0,immutableKeys:{id:!0,source:!0,target:!0,parent:!0},updateStyle:!0}),removeData:cr.removeData({field:"data",event:"data",triggerFnName:"trigger",triggerEvent:!0,immutableKeys:{id:!0,source:!0,target:!0,parent:!0},updateStyle:!0}),scratch:cr.data({field:"scratch",bindingEvent:"scratch",allowBinding:!0,allowSetting:!0,settingEvent:"scratch",settingTriggersEvent:!0,triggerFnName:"trigger",allowGetting:!0,updateStyle:!0}),removeScratch:cr.removeData({field:"scratch",event:"scratch",triggerFnName:"trigger",triggerEvent:!0,updateStyle:!0}),rscratch:cr.data({field:"rscratch",allowBinding:!1,allowSetting:!0,settingTriggersEvent:!1,allowGetting:!0}),removeRscratch:cr.removeData({field:"rscratch",triggerEvent:!1}),id:function(){var e=this[0];if(e)return e._private.data.id}}).attr=Gr.data,Gr.removeAttr=Gr.removeData;var oi,ai,si=Qr,li={};function ui(e){return function(t){var n=this;if(void 0===t&&(t=!0),0!==n.length&&n.isNode()&&!n.removed()){for(var r=0,i=n[0],o=i._private.edges,a=0;at})),minIndegree:ci("indegree",(function(e,t){return et})),minOutdegree:ci("outdegree",(function(e,t){return et}))}),J(li,{totalDegree:function(e){for(var t=0,n=this.nodes(),r=0;r0,c=u;u&&(l=l[0]);var d=c?l.position():{x:0,y:0};return i={x:s.x-d.x,y:s.y-d.y},void 0===e?i:i[e]}for(var f=0;f0,v=g;g&&(p=p[0]);var m=v?p.position():{x:0,y:0};void 0!==t?h.position(e,t+m[e]):void 0!==i&&h.position({x:i.x+m.x,y:i.y+m.y})}}else if(!o)return;return this}},oi.modelPosition=oi.point=oi.position,oi.modelPositions=oi.points=oi.positions,oi.renderedPoint=oi.renderedPosition,oi.relativePoint=oi.relativePosition;var hi,pi,gi=ai;hi=pi={},pi.renderedBoundingBox=function(e){var t=this.boundingBox(e),n=this.cy(),r=n.zoom(),i=n.pan(),o=t.x1*r+i.x,a=t.x2*r+i.x,s=t.y1*r+i.y,l=t.y2*r+i.y;return{x1:o,x2:a,y1:s,y2:l,w:a-o,h:l-s}},pi.dirtyCompoundBoundsCache=function(){var e=arguments.length>0&&void 0!==arguments[0]&&arguments[0],t=this.cy();return t.styleEnabled()&&t.hasCompoundNodes()?(this.forEachUp((function(t){if(t.isParent()){var n=t._private;n.compoundBoundsClean=!1,n.bbCache=null,e||t.emitAndNotify("bounds")}})),this):this},pi.updateCompoundBounds=function(){var e=arguments.length>0&&void 0!==arguments[0]&&arguments[0],t=this.cy();if(!t.styleEnabled()||!t.hasCompoundNodes())return this;if(!e&&t.batching())return this;function n(e){if(e.isParent()){var 
t=e._private,n=e.children(),r="include"===e.pstyle("compound-sizing-wrt-labels").value,i={width:{val:e.pstyle("min-width").pfValue,left:e.pstyle("min-width-bias-left"),right:e.pstyle("min-width-bias-right")},height:{val:e.pstyle("min-height").pfValue,top:e.pstyle("min-height-bias-top"),bottom:e.pstyle("min-height-bias-bottom")}},o=n.boundingBox({includeLabels:r,includeOverlays:!1,useCache:!1}),a=t.position;0!==o.w&&0!==o.h||((o={w:e.pstyle("width").pfValue,h:e.pstyle("height").pfValue}).x1=a.x-o.w/2,o.x2=a.x+o.w/2,o.y1=a.y-o.h/2,o.y2=a.y+o.h/2);var s=i.width.left.value;"px"===i.width.left.units&&i.width.val>0&&(s=100*s/i.width.val);var l=i.width.right.value;"px"===i.width.right.units&&i.width.val>0&&(l=100*l/i.width.val);var u=i.height.top.value;"px"===i.height.top.units&&i.height.val>0&&(u=100*u/i.height.val);var c=i.height.bottom.value;"px"===i.height.bottom.units&&i.height.val>0&&(c=100*c/i.height.val);var d=m(i.width.val-o.w,s,l),f=d.biasDiff,h=d.biasComplementDiff,p=m(i.height.val-o.h,u,c),g=p.biasDiff,v=p.biasComplementDiff;t.autoPadding=function(e,t,n,r){if("%"!==n.units)return"px"===n.units?n.pfValue:0;switch(r){case"width":return e>0?n.pfValue*e:0;case"height":return t>0?n.pfValue*t:0;case"average":return e>0&&t>0?n.pfValue*(e+t)/2:0;case"min":return e>0&&t>0?e>t?n.pfValue*t:n.pfValue*e:0;case"max":return e>0&&t>0?e>t?n.pfValue*e:n.pfValue*t:0;default:return 0}}(o.w,o.h,e.pstyle("padding"),e.pstyle("padding-relative-to").value),t.autoWidth=Math.max(o.w,i.width.val),a.x=(-f+o.x1+o.x2+h)/2,t.autoHeight=Math.max(o.h,i.height.val),a.y=(-g+o.y1+o.y2+v)/2}function m(e,t,n){var r=0,i=0,o=t+n;return e>0&&o>0&&(r=t/o*e,i=n/o*e),{biasDiff:r,biasComplementDiff:i}}}for(var r=0;re.x2?r:e.x2,e.y1=ne.y2?i:e.y2,e.w=e.x2-e.x1,e.h=e.y2-e.y1)},yi=function(e,t){return null==t?e:mi(e,t.x1,t.y1,t.x2,t.y2)},bi=function(e,t,n){return Ie(e,t,n)},xi=function(e,t,n){if(!t.cy().headless()){var r,i,o=t._private,a=o.rstyle,s=a.arrowWidth/2;if("none"!==t.pstyle(n+"-arrow-shape").value){"source"===n?(r=a.srcX,i=a.srcY):"target"===n?(r=a.tgtX,i=a.tgtY):(r=a.midX,i=a.midY);var l=o.arrowBounds=o.arrowBounds||{},u=l[n]=l[n]||{};u.x1=r-s,u.y1=i-s,u.x2=r+s,u.y2=i+s,u.w=u.x2-u.x1,u.h=u.y2-u.y1,yt(u,1),mi(e,u.x1,u.y1,u.x2,u.y2)}}},wi=function(e,t,n){if(!t.cy().headless()){var r;r=n?n+"-":"";var i=t._private,o=i.rstyle;if(t.pstyle(r+"label").strValue){var a,s,l,u,c=t.pstyle("text-halign"),d=t.pstyle("text-valign"),f=bi(o,"labelWidth",n),h=bi(o,"labelHeight",n),p=bi(o,"labelX",n),g=bi(o,"labelY",n),v=t.pstyle(r+"text-margin-x").pfValue,m=t.pstyle(r+"text-margin-y").pfValue,y=t.isEdge(),b=t.pstyle(r+"text-rotation"),x=t.pstyle("text-outline-width").pfValue,w=t.pstyle("text-border-width").pfValue/2,_=t.pstyle("text-background-padding").pfValue,k=h,E=f,S=E/2,C=k/2;if(y)a=p-S,s=p+S,l=g-C,u=g+C;else{switch(c.value){case"left":a=p-E,s=p;break;case"center":a=p-S,s=p+S;break;case"right":a=p,s=p+E}switch(d.value){case"top":l=g-k,u=g;break;case"center":l=g-C,u=g+C;break;case"bottom":l=g,u=g+k}}a+=v-Math.max(x,w)-_-2,s+=v+Math.max(x,w)+_+2,l+=m-Math.max(x,w)-_-2,u+=m+Math.max(x,w)+_+2;var P=n||"main",T=i.labelBounds,O=T[P]=T[P]||{};O.x1=a,O.y1=l,O.x2=s,O.y2=u,O.w=s-a,O.h=u-l;var M=y&&"autorotate"===b.strValue,D=null!=b.pfValue&&0!==b.pfValue;if(M||D){var N=M?bi(i.rstyle,"labelAngle",n):b.pfValue,L=Math.cos(N),A=Math.sin(N),j=(a+s)/2,R=(l+u)/2;if(!y){switch(c.value){case"left":j=s;break;case"right":j=a}switch(d.value){case"top":R=u;break;case"bottom":R=l}}var 
I=function(e,t){return{x:(e-=j)*L-(t-=R)*A+j,y:e*A+t*L+R}},B=I(a,l),z=I(a,u),F=I(s,l),q=I(s,u);a=Math.min(B.x,z.x,F.x,q.x),s=Math.max(B.x,z.x,F.x,q.x),l=Math.min(B.y,z.y,F.y,q.y),u=Math.max(B.y,z.y,F.y,q.y)}var V=P+"Rot",U=T[V]=T[V]||{};U.x1=a,U.y1=l,U.x2=s,U.y2=u,U.w=s-a,U.h=u-l,mi(e,a,l,s,u),mi(i.labelBounds.all,a,l,s,u)}return e}},_i=function(e){var t=0,n=function(e){return(e?1:0)<(r=T[1].x)){var O=n;n=r,r=O}if(i>(o=T[1].y)){var M=i;i=o,o=M}mi(f,n-k,i-k,r+k,o+k)}}else if("bezier"===P||"unbundled-bezier"===P||"segments"===P||"taxi"===P){var D;switch(P){case"bezier":case"unbundled-bezier":D=v.bezierPts;break;case"segments":case"taxi":D=v.linePts}if(null!=D)for(var N=0;N(r=j.x)){var R=n;n=r,r=R}if((i=A.y)>(o=j.y)){var I=i;i=o,o=I}mi(f,n-=k,i-=k,r+=k,o+=k)}if(c&&t.includeEdges&&g&&(xi(f,e,"mid-source"),xi(f,e,"mid-target"),xi(f,e,"source"),xi(f,e,"target")),c&&"yes"===e.pstyle("ghost").value){var B=e.pstyle("ghost-offset-x").pfValue,z=e.pstyle("ghost-offset-y").pfValue;mi(f,f.x1+B,f.y1+z,f.x2+B,f.y2+z)}var F=h.bodyBounds=h.bodyBounds||{};xt(F,f),bt(F,m),yt(F,1),c&&(n=f.x1,r=f.x2,i=f.y1,o=f.y2,mi(f,n-_,i-_,r+_,o+_));var q=h.overlayBounds=h.overlayBounds||{};xt(q,f),bt(q,m),yt(q,1);var V=h.labelBounds=h.labelBounds||{};null!=V.all?((l=V.all).x1=1/0,l.y1=1/0,l.x2=-1/0,l.y2=-1/0,l.w=0,l.h=0):V.all=vt(),c&&t.includeLabels&&(t.includeMainLabels&&wi(f,e,null),g&&(t.includeSourceLabels&&wi(f,e,"source"),t.includeTargetLabels&&wi(f,e,"target")))}return f.x1=vi(f.x1),f.y1=vi(f.y1),f.x2=vi(f.x2),f.y2=vi(f.y2),f.w=vi(f.x2-f.x1),f.h=vi(f.y2-f.y1),f.w>0&&f.h>0&&b&&(bt(f,m),yt(f,1)),f}(e,Si),r.bbCache=n,r.bbCachePosKey=a):n=r.bbCache,!o){var c=e.isNode();n=vt(),(t.includeNodes&&c||t.includeEdges&&!c)&&(t.includeOverlays?yi(n,r.overlayBounds):yi(n,r.bodyBounds)),t.includeLabels&&(t.includeMainLabels&&(!i||t.includeSourceLabels&&t.includeTargetLabels)?yi(n,r.labelBounds.all):(t.includeMainLabels&&yi(n,r.labelBounds.mainRot),t.includeSourceLabels&&yi(n,r.labelBounds.sourceRot),t.includeTargetLabels&&yi(n,r.labelBounds.targetRot))),n.w=n.x2-n.x1,n.h=n.y2-n.y1}return n},Si={includeNodes:!0,includeEdges:!0,includeLabels:!0,includeMainLabels:!0,includeSourceLabels:!0,includeTargetLabels:!0,includeOverlays:!0,includeUnderlays:!0,useCache:!0},Ci=_i(Si),Pi=Ae(Si);pi.boundingBox=function(e){var t;if(1!==this.length||null==this[0]._private.bbCache||this[0]._private.styleDirty||void 0!==e&&void 0!==e.useCache&&!0!==e.useCache){t=vt();var n=Pi(e=e||Si),r=this;if(r.cy().styleEnabled())for(var i=0;i0&&void 0!==arguments[0]?arguments[0]:Vi,t=arguments.length>1?arguments[1]:void 0,n=0;n=0;s--)a(s);return this},Wi.removeAllListeners=function(){return this.removeListener("*")},Wi.emit=Wi.trigger=function(e,t,n){var r=this.listeners,i=r.length;return this.emitting++,M(t)||(t=[t]),Ki(this,(function(e,o){null!=n&&(r=[{event:o.event,type:o.type,namespace:o.namespace,callback:n}],i=r.length);for(var a=function(n){var i=r[n];if(i.type===o.type&&(!i.namespace||i.namespace===o.namespace||".*"===i.namespace)&&e.eventMatches(e.context,i,o)){var a=[o];null!=t&&function(e,t){for(var n=0;n1&&!r){var i=this.length-1,o=this[i],a=o._private.data.id;this[i]=void 0,this[e]=o,n.set(a,{ele:o,index:e})}return this.length--,this},unmergeOne:function(e){e=e[0];var t=this._private,n=e._private.data.id,r=t.map.get(n);if(!r)return this;var i=r.index;return this.unmergeAt(i),this},unmerge:function(e){var t=this._private.cy;if(!e)return this;if(e&&T(e)){var n=e;e=t.mutableElements().filter(n)}for(var r=0;r=0;t--){e(this[t])&&this.unmergeAt(t)}return 
this},map:function(e,t){for(var n=[],r=this,i=0;ir&&(r=s,n=a)}return{value:r,ele:n}},min:function(e,t){for(var n,r=1/0,i=this,o=0;o=0&&i1&&void 0!==arguments[1])||arguments[1],n=this[0],r=n.cy();if(r.styleEnabled()&&n){this.cleanStyle();var i=n._private.style[e];return null!=i?i:t?r.style().getDefaultProperty(e):null}},numericStyle:function(e){var t=this[0];if(t.cy().styleEnabled()&&t){var n=t.pstyle(e);return void 0!==n.pfValue?n.pfValue:n.value}},numericStyleUnits:function(e){var t=this[0];if(t.cy().styleEnabled())return t?t.pstyle(e).units:void 0},renderedStyle:function(e){var t=this.cy();if(!t.styleEnabled())return this;var n=this[0];return n?t.style().getRenderedStyle(n,e):void 0},style:function(e,t){var n=this.cy();if(!n.styleEnabled())return this;var r=n.style();if(D(e)){var i=e;r.applyBypass(this,i,false),this.emitAndNotify("style")}else if(T(e)){if(void 0===t){var o=this[0];return o?r.getStylePropertyValue(o,e):void 0}r.applyBypass(this,e,t,false),this.emitAndNotify("style")}else if(void 0===e){var a=this[0];return a?r.getRawStyle(a):void 0}return this},removeStyle:function(e){var t=this.cy();if(!t.styleEnabled())return this;var n=t.style(),r=this;if(void 0===e)for(var i=0;i0&&t.push(c[0]),t.push(s[0])}return this.spawn(t,!0).filter(e)}),"neighborhood"),closedNeighborhood:function(e){return this.neighborhood().add(this).filter(e)},openNeighborhood:function(e){return this.neighborhood(e)}}),yo.neighbourhood=yo.neighborhood,yo.closedNeighbourhood=yo.closedNeighborhood,yo.openNeighbourhood=yo.openNeighborhood,J(yo,{source:Jr((function(e){var t,n=this[0];return n&&(t=n._private.source||n.cy().collection()),t&&e?t.filter(e):t}),"source"),target:Jr((function(e){var t,n=this[0];return n&&(t=n._private.target||n.cy().collection()),t&&e?t.filter(e):t}),"target"),sources:_o({attr:"source"}),targets:_o({attr:"target"})}),J(yo,{edgesWith:Jr(ko(),"edgesWith"),edgesTo:Jr(ko({thisIsSrc:!0}),"edgesTo")}),J(yo,{connectedEdges:Jr((function(e){for(var t=[],n=0;n0);return o},component:function(){var e=this[0];return e.cy().mutableElements().components(e)[0]}}),yo.componentsOf=yo.components;var So=function(e,t){var n=arguments.length>2&&void 0!==arguments[2]&&arguments[2],r=arguments.length>3&&void 0!==arguments[3]&&arguments[3];if(void 0!==e){var i=new ze,o=!1;if(t){if(t.length>0&&D(t[0])&&!j(t[0])){o=!0;for(var a=[],s=new qe,l=0,u=t.length;l0&&void 0!==arguments[0])||arguments[0],r=!(arguments.length>1&&void 0!==arguments[1])||arguments[1],i=this,o=i.cy(),a=o._private,s=[],l=[],u=0,c=i.length;u0){for(var R=e.length===i.length?i:new So(o,e),I=0;I0&&void 0!==arguments[0])||arguments[0],t=!(arguments.length>1&&void 0!==arguments[1])||arguments[1],n=this,r=[],i={},o=n._private.cy;function a(e){var n=i[e.id()];t&&e.removed()||n||(i[e.id()]=!0,e.isNode()?(r.push(e),function(e){for(var t=e._private.edges,n=0;n0&&(e?k.emitAndNotify("remove"):t&&k.emit("remove"));for(var E=0;E=o?function(t,r){for(var o=0;o0?i=l:r=l}while(Math.abs(o)>a&&++ud&&Math.abs(s.v)>d;);return o?function(e){return u[e*(u.length-1)|0]}:c}}(),Mo=function(e,t,n,r){var i=To(e,t,n,r);return function(e,t,n){return e+(t-e)*i(n)}},Do={linear:function(e,t,n){return 
e+(t-e)*n},ease:Mo(.25,.1,.25,1),"ease-in":Mo(.42,0,1,1),"ease-out":Mo(0,0,.58,1),"ease-in-out":Mo(.42,0,.58,1),"ease-in-sine":Mo(.47,0,.745,.715),"ease-out-sine":Mo(.39,.575,.565,1),"ease-in-out-sine":Mo(.445,.05,.55,.95),"ease-in-quad":Mo(.55,.085,.68,.53),"ease-out-quad":Mo(.25,.46,.45,.94),"ease-in-out-quad":Mo(.455,.03,.515,.955),"ease-in-cubic":Mo(.55,.055,.675,.19),"ease-out-cubic":Mo(.215,.61,.355,1),"ease-in-out-cubic":Mo(.645,.045,.355,1),"ease-in-quart":Mo(.895,.03,.685,.22),"ease-out-quart":Mo(.165,.84,.44,1),"ease-in-out-quart":Mo(.77,0,.175,1),"ease-in-quint":Mo(.755,.05,.855,.06),"ease-out-quint":Mo(.23,1,.32,1),"ease-in-out-quint":Mo(.86,0,.07,1),"ease-in-expo":Mo(.95,.05,.795,.035),"ease-out-expo":Mo(.19,1,.22,1),"ease-in-out-expo":Mo(1,0,0,1),"ease-in-circ":Mo(.6,.04,.98,.335),"ease-out-circ":Mo(.075,.82,.165,1),"ease-in-out-circ":Mo(.785,.135,.15,.86),spring:function(e,t,n){if(0===n)return Do.linear;var r=Oo(e,t,n);return function(e,t,n){return e+(t-e)*r(n)}},"cubic-bezier":Mo};function No(e,t,n,r,i){if(1===r)return n;if(t===n)return n;var o=i(t,n,r);return null==e||((e.roundValue||e.color)&&(o=Math.round(o)),void 0!==e.min&&(o=Math.max(o,e.min)),void 0!==e.max&&(o=Math.min(o,e.max))),o}function Lo(e,t){return null!=e.pfValue||null!=e.value?null==e.pfValue||null!=t&&"%"===t.type.units?e.value:e.pfValue:e}function Ao(e,t,n,r,i){var o=null!=i?i.type:null;n<0?n=0:n>1&&(n=1);var a=Lo(e,i),s=Lo(t,i);if(N(a)&&N(s))return No(o,a,s,n,r);if(M(a)&&M(s)){for(var l=[],u=0;u0?("spring"===d&&f.push(a.duration),a.easingImpl=Do[d].apply(null,f)):a.easingImpl=Do[d]}var h,p=a.easingImpl;if(h=0===a.duration?1:(n-l)/a.duration,a.applying&&(h=a.progress),h<0?h=0:h>1&&(h=1),null==a.delay){var g=a.startPosition,v=a.position;if(v&&i&&!e.locked()){var m={};Ro(g.x,v.x)&&(m.x=Ao(g.x,v.x,h,p)),Ro(g.y,v.y)&&(m.y=Ao(g.y,v.y,h,p)),e.position(m)}var y=a.startPan,b=a.pan,x=o.pan,w=null!=b&&r;w&&(Ro(y.x,b.x)&&(x.x=Ao(y.x,b.x,h,p)),Ro(y.y,b.y)&&(x.y=Ao(y.y,b.y,h,p)),e.emit("pan"));var _=a.startZoom,k=a.zoom,E=null!=k&&r;E&&(Ro(_,k)&&(o.zoom=gt(o.minZoom,Ao(_,k,h,p),o.maxZoom)),e.emit("zoom")),(w||E)&&e.emit("viewport");var S=a.style;if(S&&S.length>0&&i){for(var C=0;C=0;t--){(0,e[t])()}e.splice(0,e.length)},c=o.length-1;c>=0;c--){var d=o[c],f=d._private;f.stopped?(o.splice(c,1),f.hooked=!1,f.playing=!1,f.started=!1,u(f.frames)):(f.playing||f.applying)&&(f.playing&&f.applying&&(f.applying=!1),f.started||Io(0,d,e),jo(t,d,e,n),f.applying&&(f.applying=!1),u(f.frames),null!=f.step&&f.step(e),d.completed()&&(o.splice(c,1),f.hooked=!1,f.playing=!1,f.started=!1,u(f.completes)),s=!0)}return n||0!==o.length||0!==a.length||r.push(t),s}for(var o=!1,a=0;a0?t.notify("draw",n):t.notify("draw")),n.unmerge(r),t.emit("step")}var zo={animate:cr.animate(),animation:cr.animation(),animated:cr.animated(),clearQueue:cr.clearQueue(),delay:cr.delay(),delayAnimation:cr.delayAnimation(),stop:cr.stop(),addToAnimationPool:function(e){this.styleEnabled()&&this._private.aniEles.merge(e)},stopAnimationLoop:function(){this._private.animationsRunning=!1},startAnimationLoop:function(){var e=this;if(e._private.animationsRunning=!0,e.styleEnabled()){var t=e.renderer();t&&t.beforeRender?t.beforeRender((function(t,n){Bo(n,e)}),t.beforeRenderPriorities.animations):function t(){e._private.animationsRunning&&se((function(n){Bo(n,e),t()}))}()}}},Fo={qualifierCompare:function(e,t){return null==e||null==t?null==e&&null==t:e.sameText(t)},eventMatches:function(e,t,n){var r=t.qualifier;return 
null==r||e!==n.target&&j(n.target)&&r.matches(n.target)},addEventFields:function(e,t){t.cy=e,t.target=e},callbackContext:function(e,t,n){return null!=t.qualifier?n.target:e}},qo=function(e){return T(e)?new Yr(e):e},Vo={createEmitter:function(){var e=this._private;return e.emitter||(e.emitter=new Ui(Fo,this)),this},emitter:function(){return this._private.emitter},on:function(e,t,n){return this.emitter().on(e,qo(t),n),this},removeListener:function(e,t,n){return this.emitter().removeListener(e,qo(t),n),this},removeAllListeners:function(){return this.emitter().removeAllListeners(),this},one:function(e,t,n){return this.emitter().one(e,qo(t),n),this},once:function(e,t,n){return this.emitter().one(e,qo(t),n),this},emit:function(e,t){return this.emitter().emit(e,t),this},emitAndNotify:function(e,t){return this.emit(e),this.notify(e,t),this}};cr.eventAliasesOn(Vo);var Uo={png:function(e){return e=e||{},this._private.renderer.png(e)},jpg:function(e){var t=this._private.renderer;return(e=e||{}).bg=e.bg||"#fff",t.jpg(e)}};Uo.jpeg=Uo.jpg;var Wo={layout:function(e){var t=this;if(null!=e)if(null!=e.name){var n=e.name,r=t.extension("layout",n);if(null!=r){var i;i=T(e.eles)?t.$(e.eles):null!=e.eles?e.eles:t.$();var o=new r(J({},e,{cy:t,eles:i}));return o}Pe("No such layout `"+n+"` found. Did you forget to import it and `cytoscape.use()` it?")}else Pe("A `name` must be specified to make a layout");else Pe("Layout options must be specified to make a layout")}};Wo.createLayout=Wo.makeLayout=Wo.layout;var Zo={notify:function(e,t){var n=this._private;if(this.batching()){n.batchNotifications=n.batchNotifications||{};var r=n.batchNotifications[e]=n.batchNotifications[e]||this.collection();null!=t&&r.merge(t)}else if(n.notificationsEnabled){var i=this.renderer();!this.destroyed()&&i&&i.notify(e,t)}},notifications:function(e){var t=this._private;return void 0===e?t.notificationsEnabled:(t.notificationsEnabled=!!e,this)},noNotifications:function(e){this.notifications(!1),e(),this.notifications(!0)},batching:function(){return this._private.batchCount>0},startBatch:function(){var e=this._private;return null==e.batchCount&&(e.batchCount=0),0===e.batchCount&&(e.batchStyleEles=this.collection(),e.batchNotifications={}),e.batchCount++,this},endBatch:function(){var e=this._private;if(0===e.batchCount)return this;if(e.batchCount--,0===e.batchCount){e.batchStyleEles.updateStyle();var t=this.renderer();Object.keys(e.batchNotifications).forEach((function(n){var r=e.batchNotifications[n];r.empty()?t.notify(n):t.notify(n,r)}))}return this},batch:function(e){return this.startBatch(),e(),this.endBatch(),this},batchData:function(e){var t=this;return this.batch((function(){for(var n=Object.keys(e),r=0;r0;)t.removeChild(t.childNodes[0]);e._private.renderer=null,e.mutableElements().forEach((function(e){var t=e._private;t.rscratch={},t.rstyle={},t.animation.current=[],t.animation.queue=[]}))},onRender:function(e){return this.on("render",e)},offRender:function(e){return this.off("render",e)}};Ko.invalidateDimensions=Ko.resize;var Yo={collection:function(e,t){return T(e)?this.$(e):A(e)?e.collection():M(e)?(t||(t={}),new So(this,e,t.unique,t.removed)):new So(this)},nodes:function(e){var t=this.$((function(e){return e.isNode()}));return e?t.filter(e):t},edges:function(e){var t=this.$((function(e){return e.isEdge()}));return e?t.filter(e):t},$:function(e){var t=this._private.elements;return e?t.filter(e):t.spawnSelf()},mutableElements:function(){return this._private.elements}};Yo.elements=Yo.filter=Yo.$;var 
Xo={},$o="t";Xo.apply=function(e){for(var t=this,n=t._private.cy.collection(),r=0;r0;if(f||d&&h){var p=void 0;f&&h||f?p=u.properties:h&&(p=u.mappedProperties);for(var g=0;g1&&(v=1),s.color){var w=i.valueMin[0],_=i.valueMax[0],k=i.valueMin[1],E=i.valueMax[1],S=i.valueMin[2],C=i.valueMax[2],P=null==i.valueMin[3]?1:i.valueMin[3],T=null==i.valueMax[3]?1:i.valueMax[3],O=[Math.round(w+(_-w)*v),Math.round(k+(E-k)*v),Math.round(S+(C-S)*v),Math.round(P+(T-P)*v)];n={bypass:i.bypass,name:i.name,value:O,strValue:"rgb("+O[0]+", "+O[1]+", "+O[2]+")"}}else{if(!s.number)return!1;var M=i.valueMin+(i.valueMax-i.valueMin)*v;n=this.parse(i.name,M,i.bypass,f)}if(!n)return g(),!1;n.mapping=i,i=n;break;case a.data:for(var D=i.field.split("."),L=d.data,A=0;A0&&o>0){for(var s={},l=!1,u=0;u0?e.delayAnimation(a).play().promise().then(t):t()})).then((function(){return e.animation({style:s,duration:o,easing:e.pstyle("transition-timing-function").value,queue:!1}).play().promise()})).then((function(){n.removeBypasses(e,i),e.emitAndNotify("style"),r.transitioning=!1}))}else r.transitioning&&(this.removeBypasses(e,i),e.emitAndNotify("style"),r.transitioning=!1)},Xo.checkTrigger=function(e,t,n,r,i,o){var a=this.properties[t],s=i(a);null!=s&&s(n,r)&&o(a)},Xo.checkZOrderTrigger=function(e,t,n,r){var i=this;this.checkTrigger(e,t,n,r,(function(e){return e.triggersZOrder}),(function(){i._private.cy.notify("zorder",e)}))},Xo.checkBoundsTrigger=function(e,t,n,r){this.checkTrigger(e,t,n,r,(function(e){return e.triggersBounds}),(function(i){e.dirtyCompoundBoundsCache(),e.dirtyBoundingBoxCache(),!i.triggersBoundsOfParallelBeziers||("curve-style"!==t||"bezier"!==n&&"bezier"!==r)&&("display"!==t||"none"!==n&&"none"!==r)||e.parallelEdges().forEach((function(e){e.isBundledBezier()&&e.dirtyBoundingBoxCache()}))}))},Xo.checkTriggers=function(e,t,n,r){e.dirtyStyleCache(),this.checkZOrderTrigger(e,t,n,r),this.checkBoundsTrigger(e,t,n,r)};var Go={applyBypass:function(e,t,n,r){var i=[];if("*"===t||"**"===t){if(void 0!==n)for(var o=0;ot.length?o.substr(t.length):""}function s(){n=n.length>r.length?n.substr(r.length):""}for(o=o.replace(/[/][*](\s|.)+?[*][/]/g,"");;){if(o.match(/^\s*$/))break;var l=o.match(/^\s*((?:.|\s)+?)\s*\{((?:.|\s)+?)\}/);if(!l){Oe("Halting stylesheet parsing: String stylesheet contains more to parse but no selector and block found in: "+o);break}t=l[0];var u=l[1];if("core"!==u)if(new Yr(u).invalid){Oe("Skipping parsing of block: Invalid selector found in string stylesheet: "+u),a();continue}var c=l[2],d=!1;n=c;for(var f=[];;){if(n.match(/^\s*$/))break;var h=n.match(/^\s*(.+?)\s*:\s*(.+?)(?:\s*;|\s*$)/);if(!h){Oe("Skipping parsing of block: Invalid formatting of style property and value definitions found in:"+c),d=!0;break}r=h[0];var p=h[1],g=h[2];if(this.properties[p])i.parse(p,g)?(f.push({name:p,val:g}),s()):(Oe("Skipping property: Invalid property definition in: "+r),s());else Oe("Skipping property: Invalid property name in: "+r),s()}if(d){a();break}i.selector(u);for(var v=0;v=7&&"d"===t[0]&&(u=new RegExp(s.data.regex).exec(t))){if(n)return!1;var f=s.data;return{name:e,value:u,strValue:""+t,mapped:f,field:u[1],bypass:n}}if(t.length>=10&&"m"===t[0]&&(c=new RegExp(s.mapData.regex).exec(t))){if(n)return!1;if(d.multiple)return!1;var h=s.mapData;if(!d.color&&!d.number)return!1;var p=this.parse(e,c[4]);if(!p||p.mapped)return!1;var g=this.parse(e,c[5]);if(!g||g.mapped)return!1;if(p.pfValue===g.pfValue||p.strValue===g.strValue)return Oe("`"+e+": "+t+"` is not a valid mapper because the output range is zero; converting to 
`"+e+": "+p.strValue+"`"),this.parse(e,p.strValue);if(d.color){var v=p.value,m=g.value;if(v[0]===m[0]&&v[1]===m[1]&&v[2]===m[2]&&(v[3]===m[3]||(null==v[3]||1===v[3])&&(null==m[3]||1===m[3])))return!1}return{name:e,value:c,strValue:""+t,mapped:h,field:c[1],fieldMin:parseFloat(c[2]),fieldMax:parseFloat(c[3]),valueMin:p.value,valueMax:g.value,bypass:n}}}if(d.multiple&&"multiple"!==r){var y;if(y=l?t.split(/\s+/):M(t)?t:[t],d.evenMultiple&&y.length%2!==0)return null;for(var b=[],x=[],w=[],_="",k=!1,E=0;E0?" ":"")+S.strValue}return d.validate&&!d.validate(b,x)?null:d.singleEnum&&k?1===b.length&&T(b[0])?{name:e,value:b[0],strValue:b[0],bypass:n}:null:{name:e,value:b,pfValue:w,strValue:_,bypass:n,units:x}}var C,P,D=function(){for(var r=0;rd.max||d.strictMax&&t===d.max))return null;var I={name:e,value:t,strValue:""+t+(L||""),units:L,bypass:n};return d.unitless||"px"!==L&&"em"!==L?I.pfValue=t:I.pfValue="px"!==L&&L?this.getEmSizeInPixels()*t:t,"ms"!==L&&"s"!==L||(I.pfValue="ms"===L?t:1e3*t),"deg"!==L&&"rad"!==L||(I.pfValue="rad"===L?t:(C=t,Math.PI*C/180)),"%"===L&&(I.pfValue=t/100),I}if(d.propList){var B=[],z=""+t;if("none"===z);else{for(var F=z.split(/\s*,\s*|\s+/),q=0;q0&&l>0&&!isNaN(n.w)&&!isNaN(n.h)&&n.w>0&&n.h>0)return{zoom:a=(a=(a=Math.min((s-2*t)/n.w,(l-2*t)/n.h))>this._private.maxZoom?this._private.maxZoom:a)=n.minZoom&&(n.maxZoom=t),this},minZoom:function(e){return void 0===e?this._private.minZoom:this.zoomRange({min:e})},maxZoom:function(e){return void 0===e?this._private.maxZoom:this.zoomRange({max:e})},getZoomedViewport:function(e){var t,n,r=this._private,i=r.pan,o=r.zoom,a=!1;if(r.zoomingEnabled||(a=!0),N(e)?n=e:D(e)&&(n=e.level,null!=e.position?t=it(e.position,o,i):null!=e.renderedPosition&&(t=e.renderedPosition),null==t||r.panningEnabled||(a=!0)),n=(n=n>r.maxZoom?r.maxZoom:n)t.maxZoom||!t.zoomingEnabled?o=!0:(t.zoom=s,i.push("zoom"))}if(r&&(!o||!e.cancelOnFailedZoom)&&t.panningEnabled){var l=e.pan;N(l.x)&&(t.pan.x=l.x,a=!1),N(l.y)&&(t.pan.y=l.y,a=!1),a||i.push("pan")}return i.length>0&&(i.push("viewport"),this.emit(i.join(" ")),this.notify("viewport")),this},center:function(e){var t=this.getCenterPan(e);return t&&(this._private.pan=t,this.emit("pan viewport"),this.notify("viewport")),this},getCenterPan:function(e,t){if(this._private.panningEnabled){if(T(e)){var n=e;e=this.mutableElements().filter(n)}else A(e)||(e=this.mutableElements());if(0!==e.length){var r=e.boundingBox(),i=this.width(),o=this.height();return{x:(i-(t=void 0===t?this._private.zoom:t)*(r.x1+r.x2))/2,y:(o-t*(r.y1+r.y2))/2}}}},reset:function(){return this._private.panningEnabled&&this._private.zoomingEnabled?(this.viewport({pan:{x:0,y:0},zoom:1}),this):this},invalidateSize:function(){this._private.sizeCache=null},size:function(){var e=this._private,t=e.container;return e.sizeCache=e.sizeCache||(t?function(){var e=w.getComputedStyle(t),n=function(t){return parseFloat(e.getPropertyValue(t))};return{width:t.clientWidth-n("padding-left")-n("padding-right"),height:t.clientHeight-n("padding-top")-n("padding-bottom")}}():{width:1,height:1})},width:function(){return this.size().width},height:function(){return this.size().height},extent:function(){var e=this._private.pan,t=this._private.zoom,n=this.renderedExtent(),r={x1:(n.x1-e.x)/t,x2:(n.x2-e.x)/t,y1:(n.y1-e.y)/t,y2:(n.y2-e.y)/t};return r.w=r.x2-r.x1,r.h=r.y2-r.y1,r},renderedExtent:function(){var e=this.width(),t=this.height();return{x1:0,y1:0,x2:e,y2:t,w:e,h:t}},multiClickDebounceTime:function(e){return 
e?(this._private.multiClickDebounceTime=e,this):this._private.multiClickDebounceTime}};sa.centre=sa.center,sa.autolockNodes=sa.autolock,sa.autoungrabifyNodes=sa.autoungrabify;var la={data:cr.data({field:"data",bindingEvent:"data",allowBinding:!0,allowSetting:!0,settingEvent:"data",settingTriggersEvent:!0,triggerFnName:"trigger",allowGetting:!0,updateStyle:!0}),removeData:cr.removeData({field:"data",event:"data",triggerFnName:"trigger",triggerEvent:!0,updateStyle:!0}),scratch:cr.data({field:"scratch",bindingEvent:"scratch",allowBinding:!0,allowSetting:!0,settingEvent:"scratch",settingTriggersEvent:!0,triggerFnName:"trigger",allowGetting:!0,updateStyle:!0}),removeScratch:cr.removeData({field:"scratch",event:"scratch",triggerFnName:"trigger",triggerEvent:!0,updateStyle:!0})};la.attr=la.data,la.removeAttr=la.removeData;var ua=function(e){var t=this,n=(e=J({},e)).container;n&&!L(n)&&L(n[0])&&(n=n[0]);var r=n?n._cyreg:null;(r=r||{})&&r.cy&&(r.cy.destroy(),r={});var i=r.readies=r.readies||[];n&&(n._cyreg=r),r.cy=t;var o=void 0!==w&&void 0!==n&&!e.headless,a=e;a.layout=J({name:o?"grid":"null"},a.layout),a.renderer=J({name:o?"canvas":"null"},a.renderer);var s=function(e,t,n){return void 0!==t?t:void 0!==n?n:e},l=this._private={container:n,ready:!1,options:a,elements:new So(this),listeners:[],aniEles:new So(this),data:a.data||{},scratch:{},layout:null,renderer:null,destroyed:!1,notificationsEnabled:!0,minZoom:1e-50,maxZoom:1e50,zoomingEnabled:s(!0,a.zoomingEnabled),userZoomingEnabled:s(!0,a.userZoomingEnabled),panningEnabled:s(!0,a.panningEnabled),userPanningEnabled:s(!0,a.userPanningEnabled),boxSelectionEnabled:s(!0,a.boxSelectionEnabled),autolock:s(!1,a.autolock,a.autolockNodes),autoungrabify:s(!1,a.autoungrabify,a.autoungrabifyNodes),autounselectify:s(!1,a.autounselectify),styleEnabled:void 0===a.styleEnabled?o:a.styleEnabled,zoom:N(a.zoom)?a.zoom:1,pan:{x:D(a.pan)&&N(a.pan.x)?a.pan.x:0,y:D(a.pan)&&N(a.pan.y)?a.pan.y:0},animation:{current:[],queue:[]},hasCompoundNodes:!1,multiClickDebounceTime:s(250,a.multiClickDebounceTime)};this.createEmitter(),this.selectionType(a.selectionType),this.zoomRange({min:a.minZoom,max:a.maxZoom});l.styleEnabled&&t.setStyle([]);var u=J({},a,a.renderer);t.initRenderer(u);!function(e,t){if(e.some(F))return ir.all(e).then(t);t(e)}([a.style,a.elements],(function(e){var n=e[0],o=e[1];l.styleEnabled&&t.style().append(n),function(e,n,r){t.notifications(!1);var i=t.mutableElements();i.length>0&&i.remove(),null!=e&&(D(e)||M(e))&&t.add(e),t.one("layoutready",(function(e){t.notifications(!0),t.emit(e),t.one("load",n),t.emitAndNotify("load")})).one("layoutstop",(function(){t.one("done",r),t.emit("done")}));var o=J({},t._private.options.layout);o.eles=t.elements(),t.layout(o).run()}(o,(function(){t.startAnimationLoop(),l.ready=!0,O(a.ready)&&t.on("ready",a.ready);for(var e=0;e0,u=vt(n.boundingBox?n.boundingBox:{x1:0,y1:0,w:r.width(),h:r.height()});if(A(n.roots))e=n.roots;else if(M(n.roots)){for(var c=[],d=0;d0;){var D=C.shift(),N=S(D,P);if(N)D.outgoers().filter((function(e){return e.isNode()&&i.has(e)})).forEach(O);else if(null===N){Oe("Detected double maximal shift for node `"+D.id()+"`. Bailing maximal adjustment due to cycle. 
Use `options.maximal: true` only on DAGs.");break}}}E();var L=0;if(n.avoidOverlap)for(var j=0;j0&&m[0].length<=3?l/2:0),d=2*Math.PI/m[r].length*i;return 0===r&&1===m[0].length&&(c=1),{x:K+c*Math.cos(d),y:Y+c*Math.sin(d)}}return{x:K+(i+1-(o+1)/2)*a,y:(r+1)*s}})),this};var ga={fit:!0,padding:30,boundingBox:void 0,avoidOverlap:!0,nodeDimensionsIncludeLabels:!1,spacingFactor:void 0,radius:void 0,startAngle:1.5*Math.PI,sweep:void 0,clockwise:!0,sort:void 0,animate:!1,animationDuration:500,animationEasing:void 0,animateFilter:function(e,t){return!0},ready:void 0,stop:void 0,transform:function(e,t){return t}};function va(e){this.options=J({},ga,e)}va.prototype.run=function(){var e=this.options,t=e,n=e.cy,r=t.eles,i=void 0!==t.counterclockwise?!t.counterclockwise:t.clockwise,o=r.nodes().not(":parent");t.sort&&(o=o.sort(t.sort));for(var a,s=vt(t.boundingBox?t.boundingBox:{x1:0,y1:0,w:n.width(),h:n.height()}),l=s.x1+s.w/2,u=s.y1+s.h/2,c=(void 0===t.sweep?2*Math.PI-2*Math.PI/o.length:t.sweep)/Math.max(1,o.length-1),d=0,f=0;f1&&t.avoidOverlap){d*=1.75;var v=Math.cos(c)-Math.cos(0),m=Math.sin(c)-Math.sin(0),y=Math.sqrt(d*d/(v*v+m*m));a=Math.max(y,a)}return r.nodes().layoutPositions(this,t,(function(e,n){var r=t.startAngle+n*c*(i?1:-1),o=a*Math.cos(r),s=a*Math.sin(r);return{x:l+o,y:u+s}})),this};var ma,ya={fit:!0,padding:30,startAngle:1.5*Math.PI,sweep:void 0,clockwise:!0,equidistant:!1,minNodeSpacing:10,boundingBox:void 0,avoidOverlap:!0,nodeDimensionsIncludeLabels:!1,height:void 0,width:void 0,spacingFactor:void 0,concentric:function(e){return e.degree()},levelWidth:function(e){return e.maxDegree()/4},animate:!1,animationDuration:500,animationEasing:void 0,animateFilter:function(e,t){return!0},ready:void 0,stop:void 0,transform:function(e,t){return t}};function ba(e){this.options=J({},ya,e)}ba.prototype.run=function(){for(var e=this.options,t=e,n=void 0!==t.counterclockwise?!t.counterclockwise:t.clockwise,r=e.cy,i=t.eles,o=i.nodes().not(":parent"),a=vt(t.boundingBox?t.boundingBox:{x1:0,y1:0,w:r.width(),h:r.height()}),s=a.x1+a.w/2,l=a.y1+a.h/2,u=[],c=0,d=0;d0)Math.abs(y[0].value-x.value)>=v&&(y=[],m.push(y));y.push(x)}var w=c+t.minNodeSpacing;if(!t.avoidOverlap){var _=m.length>0&&m[0].length>1,k=(Math.min(a.w,a.h)/2-w)/(m.length+_?1:0);w=Math.min(w,k)}for(var E=0,S=0;S1&&t.avoidOverlap){var O=Math.cos(T)-Math.cos(0),M=Math.sin(T)-Math.sin(0),D=Math.sqrt(w*w/(O*O+M*M));E=Math.max(D,E)}C.r=E,E+=w}if(t.equidistant){for(var N=0,L=0,A=0;A=e.numIter)&&(Ta(r,e),r.temperature=r.temperature*e.coolingFactor,!(r.temperature=e.animationThreshold&&o(),se(t)):(Fa(r,e),s())}()}else{for(;u;)u=a(l),l++;Fa(r,e),s()}return this},wa.prototype.stop=function(){return this.stopped=!0,this.thread&&this.thread.stop(),this.emit("layoutstop"),this},wa.prototype.destroy=function(){return this.thread&&this.thread.stop(),this};var _a=function(e,t,n){for(var r=n.eles.edges(),i=n.eles.nodes(),o={isCompound:e.hasCompoundNodes(),layoutNodes:[],idToIndex:{},nodeSize:i.size(),graphSet:[],indexToGraph:[],layoutEdges:[],edgeSize:r.size(),temperature:n.initialTemp,clientWidth:e.width(),clientHeight:e.width(),boundingBox:vt(n.boundingBox?n.boundingBox:{x1:0,y1:0,w:e.width(),h:e.height()})},a=n.eles.components(),s={},l=0;l0){o.graphSet.push(x);for(l=0;lr.count?0:r.graph},Ea=function e(t,n,r,i){var o=i.graphSet[r];if(-10)var s=(u=r.nodeOverlap*a)*i/(g=Math.sqrt(i*i+o*o)),l=u*o/g;else{var 
u,c=La(e,i,o),d=La(t,-1*i,-1*o),f=d.x-c.x,h=d.y-c.y,p=f*f+h*h,g=Math.sqrt(p);s=(u=(e.nodeRepulsion+t.nodeRepulsion)/p)*f/g,l=u*h/g}e.isLocked||(e.offsetX-=s,e.offsetY-=l),t.isLocked||(t.offsetX+=s,t.offsetY+=l)}},Na=function(e,t,n,r){if(n>0)var i=e.maxX-t.minX;else i=t.maxX-e.minX;if(r>0)var o=e.maxY-t.minY;else o=t.maxY-e.minY;return i>=0&&o>=0?Math.sqrt(i*i+o*o):0},La=function(e,t,n){var r=e.positionX,i=e.positionY,o=e.height||1,a=e.width||1,s=n/t,l=o/a,u={};return 0===t&&0n?(u.x=r,u.y=i+o/2,u):0t&&-1*l<=s&&s<=l?(u.x=r-a/2,u.y=i-a*n/2/t,u):0=l)?(u.x=r+o*t/2/n,u.y=i+o/2,u):0>n&&(s<=-1*l||s>=l)?(u.x=r-o*t/2/n,u.y=i-o/2,u):u},Aa=function(e,t){for(var n=0;n1){var p=t.gravity*d/h,g=t.gravity*f/h;c.offsetX+=p,c.offsetY+=g}}}}},Ra=function(e,t){var n=[],r=0,i=-1;for(n.push.apply(n,e.graphSet[0]),i+=e.graphSet[0].length;r<=i;){var o=n[r++],a=e.idToIndex[o],s=e.layoutNodes[a],l=s.children;if(0n)var i={x:n*e/r,y:n*t/r};else i={x:e,y:t};return i},za=function e(t,n){var r=t.parentId;if(null!=r){var i=n.layoutNodes[n.idToIndex[r]],o=!1;return(null==i.maxX||t.maxX+i.padRight>i.maxX)&&(i.maxX=t.maxX+i.padRight,o=!0),(null==i.minX||t.minX-i.padLefti.maxY)&&(i.maxY=t.maxY+i.padBottom,o=!0),(null==i.minY||t.minY-i.padTopp&&(d+=h+t.componentSpacing,c=0,f=0,h=0)}}},qa={fit:!0,padding:30,boundingBox:void 0,avoidOverlap:!0,avoidOverlapPadding:10,nodeDimensionsIncludeLabels:!1,spacingFactor:void 0,condense:!1,rows:void 0,cols:void 0,position:function(e){},sort:void 0,animate:!1,animationDuration:500,animationEasing:void 0,animateFilter:function(e,t){return!0},ready:void 0,stop:void 0,transform:function(e,t){return t}};function Va(e){this.options=J({},qa,e)}Va.prototype.run=function(){var e=this.options,t=e,n=e.cy,r=t.eles,i=r.nodes().not(":parent");t.sort&&(i=i.sort(t.sort));var o=vt(t.boundingBox?t.boundingBox:{x1:0,y1:0,w:n.width(),h:n.height()});if(0===o.h||0===o.w)r.nodes().layoutPositions(this,t,(function(e){return{x:o.x1,y:o.y1}}));else{var a=i.size(),s=Math.sqrt(a*o.h/o.w),l=Math.round(s),u=Math.round(o.w/o.h*s),c=function(e){if(null==e)return Math.min(l,u);Math.min(l,u)==l?l=e:u=e},d=function(e){if(null==e)return Math.max(l,u);Math.max(l,u)==l?l=e:u=e},f=t.rows,h=null!=t.cols?t.cols:t.columns;if(null!=f&&null!=h)l=f,u=h;else if(null!=f&&null==h)l=f,u=Math.ceil(a/l);else if(null==f&&null!=h)u=h,l=Math.ceil(a/u);else if(u*l>a){var p=c(),g=d();(p-1)*g>=a?c(p-1):(g-1)*p>=a&&d(g-1)}else for(;u*l=a?d(m+1):c(v+1)}var y=o.w/u,b=o.h/l;if(t.condense&&(y=0,b=0),t.avoidOverlap)for(var x=0;x=u&&(D=0,M++)},L={},A=0;A(r=Tt(e,t,x[w],x[w+1],x[w+2],x[w+3])))return v(n,r),!0}else if("bezier"===o.edgeType||"multibezier"===o.edgeType||"self"===o.edgeType||"compound"===o.edgeType)for(x=o.allpts,w=0;w+5(r=Pt(e,t,x[w],x[w+1],x[w+2],x[w+3],x[w+4],x[w+5])))return v(n,r),!0;y=y||i.source,b=b||i.target;var _=a.getArrowWidth(l,c),k=[{name:"source",x:o.arrowStartX,y:o.arrowStartY,angle:o.srcArrowAngle},{name:"target",x:o.arrowEndX,y:o.arrowEndY,angle:o.tgtArrowAngle},{name:"mid-source",x:o.midX,y:o.midY,angle:o.midsrcArrowAngle},{name:"mid-target",x:o.midX,y:o.midY,angle:o.midtgtArrowAngle}];for(w=0;w0&&(m(y),m(b))}function b(e,t,n){return Ie(e,t,n)}function x(n,r){var i,o=n._private,a=p;i=r?r+"-":"",n.boundingBox();var s=o.labelBounds[r||"main"],l=n.pstyle(i+"label").value;if("yes"===n.pstyle("text-events").strValue&&l){var 
u=b(o.rscratch,"labelX",r),c=b(o.rscratch,"labelY",r),d=b(o.rscratch,"labelAngle",r),f=n.pstyle(i+"text-margin-x").pfValue,h=n.pstyle(i+"text-margin-y").pfValue,g=s.x1-a-f,m=s.x2+a-f,y=s.y1-a-h,x=s.y2+a-h;if(d){var w=Math.cos(d),_=Math.sin(d),k=function(e,t){return{x:(e-=u)*w-(t-=c)*_+u,y:e*_+t*w+c}},E=k(g,y),S=k(g,x),C=k(m,y),P=k(m,x),T=[E.x+f,E.y+h,C.x+f,C.y+h,P.x+f,P.y+h,S.x+f,S.y+h];if(Ot(e,t,T))return v(n),!0}else if(_t(s,e,t))return v(n),!0}}n&&(l=l.interactive);for(var w=l.length-1;w>=0;w--){var _=l[w];_.isNode()?m(_)||x(_):y(_)||x(_)||x(_,"source")||x(_,"target")}return u},getAllInBox:function(e,t,n,r){for(var i,o,a=this.getCachedZSortedEles().interactive,s=[],l=Math.min(e,n),u=Math.max(e,n),c=Math.min(t,r),d=Math.max(t,r),f=vt({x1:e=l,y1:t=c,x2:n=u,y2:r=d}),h=0;h0?Math.max(e-t,0):Math.min(e+t,0)},P=C(E,_),T=C(S,k),O=!1;"auto"===v?g=Math.abs(P)>Math.abs(T)?i:r:v===l||v===s?(g=r,O=!0):v!==o&&v!==a||(g=i,O=!0);var M,D=g===r,N=D?T:P,L=D?S:E,A=ut(L),j=!1;(O&&(y||x)||!(v===s&&L<0||v===l&&L>0||v===o&&L>0||v===a&&L<0)||(N=(A*=-1)*Math.abs(N),j=!0),y)?M=(b<0?1+b:b)*N:M=(b<0?N:0)+b*A;var R=function(e){return Math.abs(e)=Math.abs(N)},I=R(M),B=R(Math.abs(N)-Math.abs(M));if((I||B)&&!j)if(D){var z=Math.abs(L)<=d/2,F=Math.abs(E)<=f/2;if(z){var q=(u.x1+u.x2)/2,V=u.y1,U=u.y2;n.segpts=[q,V,q,U]}else if(F){var W=(u.y1+u.y2)/2,Z=u.x1,H=u.x2;n.segpts=[Z,W,H,W]}else n.segpts=[u.x1,u.y2]}else{var K=Math.abs(L)<=c/2,Y=Math.abs(S)<=h/2;if(K){var X=(u.y1+u.y2)/2,$=u.x1,G=u.x2;n.segpts=[$,X,G,X]}else if(Y){var Q=(u.x1+u.x2)/2,J=u.y1,ee=u.y2;n.segpts=[Q,J,Q,ee]}else n.segpts=[u.x2,u.y1]}else if(D){var te=u.y1+M+(p?d/2*A:0),ne=u.x1,re=u.x2;n.segpts=[ne,te,re,te]}else{var ie=u.x1+M+(p?c/2*A:0),oe=u.y1,ae=u.y2;n.segpts=[ie,oe,ie,ae]}},ns.tryToCorrectInvalidPoints=function(e,t){var n=e._private.rscratch;if("bezier"===n.edgeType){var r=t.srcPos,i=t.tgtPos,o=t.srcW,a=t.srcH,s=t.tgtW,l=t.tgtH,u=t.srcShape,c=t.tgtShape,d=!N(n.startX)||!N(n.startY),f=!N(n.arrowStartX)||!N(n.arrowStartY),h=!N(n.endX)||!N(n.endY),p=!N(n.arrowEndX)||!N(n.arrowEndY),g=3*(this.getArrowWidth(e.pstyle("width").pfValue,e.pstyle("arrow-scale").value)*this.arrowShapeWidth),v=ct({x:n.ctrlpts[0],y:n.ctrlpts[1]},{x:n.startX,y:n.startY}),m=vf.poolIndex()){var h=d;d=f,f=h}var p=s.srcPos=d.position(),g=s.tgtPos=f.position(),v=s.srcW=d.outerWidth(),m=s.srcH=d.outerHeight(),y=s.tgtW=f.outerWidth(),b=s.tgtH=f.outerHeight(),x=s.srcShape=n.nodeShapes[t.getNodeShape(d)],w=s.tgtShape=n.nodeShapes[t.getNodeShape(f)];s.dirCounts={north:0,west:0,south:0,east:0,northwest:0,southwest:0,northeast:0,southeast:0};for(var _=0;_0){var V=u,U=dt(V,at(t)),W=dt(V,at(q)),Z=U;if(W2)dt(V,{x:q[2],y:q[3]})0){var ie=c,oe=dt(ie,at(t)),ae=dt(ie,at(re)),se=oe;if(ae2)dt(ie,{x:re[2],y:re[3]})=u||y){c={cp:g,segment:m};break}}if(c)break}var b=c.cp,x=c.segment,w=(u-f)/x.length,_=x.t1-x.t0,k=s?x.t0+_*w:x.t1-_*w;k=gt(0,k,1),t=pt(b.p0,b.p1,b.p2,k),i=function(e,t,n,r){var i=gt(0,r-.001,1),o=gt(0,r+.001,1),a=pt(e,t,n,i),s=pt(e,t,n,o);return us(a,s)}(b.p0,b.p1,b.p2,k);break;case"straight":case"segments":case"haystack":for(var E,S,C,P,T=0,O=r.allpts.length,M=0;M+3=u));M+=2);var D=(u-S)/E;D=gt(0,D,1),t=function(e,t,n,r){var i=t.x-e.x,o=t.y-e.y,a=ct(e,t),s=i/a,l=o/a;return 
n=null==n?0:n,r=null!=r?r:n*a,{x:e.x+s*r,y:e.y+l*r}}(C,P,D),i=us(C,P)}a("labelX",n,t.x),a("labelY",n,t.y),a("labelAutoAngle",n,i)}};u("source"),u("target"),this.applyLabelDimensions(e)}},ss.applyLabelDimensions=function(e){this.applyPrefixedLabelDimensions(e),e.isEdge()&&(this.applyPrefixedLabelDimensions(e,"source"),this.applyPrefixedLabelDimensions(e,"target"))},ss.applyPrefixedLabelDimensions=function(e,t){var n=e._private,r=this.getLabelText(e,t),i=this.calculateLabelDimensions(e,r),o=e.pstyle("line-height").pfValue,a=e.pstyle("text-wrap").strValue,s=Ie(n.rscratch,"labelWrapCachedLines",t)||[],l="wrap"!==a?1:Math.max(s.length,1),u=i.height/l,c=u*o,d=i.width,f=i.height+(l-1)*(o-1)*u;Be(n.rstyle,"labelWidth",t,d),Be(n.rscratch,"labelWidth",t,d),Be(n.rstyle,"labelHeight",t,f),Be(n.rscratch,"labelHeight",t,f),Be(n.rscratch,"labelLineHeight",t,c)},ss.getLabelText=function(e,t){var n=e._private,r=t?t+"-":"",i=e.pstyle(r+"label").strValue,o=e.pstyle("text-transform").value,a=function(e,r){return r?(Be(n.rscratch,e,t,r),r):Ie(n.rscratch,e,t)};if(!i)return"";"none"==o||("uppercase"==o?i=i.toUpperCase():"lowercase"==o&&(i=i.toLowerCase()));var s=e.pstyle("text-wrap").value;if("wrap"===s){var l=a("labelKey");if(null!=l&&a("labelWrapKey")===l)return a("labelWrapCachedText");for(var u=i.split("\n"),c=e.pstyle("text-max-width").pfValue,d="anywhere"===e.pstyle("text-overflow-wrap").value,f=[],h=/[\s\u200b]+/,p=d?"":" ",g=0;gc){for(var b=v.split(h),x="",w=0;wE)break;S+=i[P],P===i.length-1&&(C=!0)}return C||(S+="\u2026"),S}return i},ss.getLabelJustification=function(e){var t=e.pstyle("text-justification").strValue,n=e.pstyle("text-halign").strValue;if("auto"!==t)return t;if(!e.isNode())return"center";switch(n){case"left":return"right";case"right":return"left";default:return"center"}},ss.calculateLabelDimensions=function(e,t){var n=ve(t,e._private.labelDimsKey),r=this.labelDimCache||(this.labelDimCache=[]),i=r[n];if(null!=i)return i;var o=e.pstyle("font-style").strValue,a=e.pstyle("font-size").pfValue,s=e.pstyle("font-family").strValue,l=e.pstyle("font-weight").strValue,u=this.labelCalcCanvas,c=this.labelCalcCanvasContext;if(!u){u=this.labelCalcCanvas=document.createElement("canvas"),c=this.labelCalcCanvasContext=u.getContext("2d");var d=u.style;d.position="absolute",d.left="-9999px",d.top="-9999px",d.zIndex="-1",d.visibility="hidden",d.pointerEvents="none"}c.font="".concat(o," ").concat(l," ").concat(a,"px ").concat(s);for(var f=0,h=0,p=t.split("\n"),g=0;g1&&void 0!==arguments[1])||arguments[1];if(t.merge(e),n)for(var r=0;r=e.desktopTapThreshold2}var C=r(t);v&&(e.hoverData.tapholdCancelled=!0);o=!0,n(g,["mousemove","vmousemove","tapdrag"],t,{x:c[0],y:c[1]});var P=function(){e.data.bgActivePosistion=void 0,e.hoverData.selecting||a.emit({originalEvent:t,type:"boxstart",position:{x:c[0],y:c[1]}}),p[4]=1,e.hoverData.selecting=!0,e.redrawHint("select",!0),e.redraw()};if(3===e.hoverData.which){if(v){var T={originalEvent:t,type:"cxtdrag",position:{x:c[0],y:c[1]}};y?y.emit(T):a.emit(T),e.hoverData.cxtDragged=!0,e.hoverData.cxtOver&&g===e.hoverData.cxtOver||(e.hoverData.cxtOver&&e.hoverData.cxtOver.emit({originalEvent:t,type:"cxtdragout",position:{x:c[0],y:c[1]}}),e.hoverData.cxtOver=g,g&&g.emit({originalEvent:t,type:"cxtdragover",position:{x:c[0],y:c[1]}}))}}else if(e.hoverData.dragging){if(o=!0,a.panningEnabled()&&a.userPanningEnabled()){var O;if(e.hoverData.justStartedPan){var M=e.hoverData.mdownPos;O={x:(c[0]-M[0])*s,y:(c[1]-M[1])*s},e.hoverData.justStartedPan=!1}else 
O={x:b[0]*s,y:b[1]*s};a.panBy(O),a.emit("dragpan"),e.hoverData.dragged=!0}c=e.projectIntoViewport(t.clientX,t.clientY)}else if(1!=p[4]||null!=y&&!y.pannable()){if(y&&y.pannable()&&y.active()&&y.unactivate(),y&&y.grabbed()||g==m||(m&&n(m,["mouseout","tapdragout"],t,{x:c[0],y:c[1]}),g&&n(g,["mouseover","tapdragover"],t,{x:c[0],y:c[1]}),e.hoverData.last=g),y)if(v){if(a.boxSelectionEnabled()&&C)y&&y.grabbed()&&(d(x),y.emit("freeon"),x.emit("free"),e.dragData.didDrag&&(y.emit("dragfreeon"),x.emit("dragfree"))),P();else if(y&&y.grabbed()&&e.nodeIsDraggable(y)){var D=!e.dragData.didDrag;D&&e.redrawHint("eles",!0),e.dragData.didDrag=!0,e.hoverData.draggingEles||l(x,{inDragLayer:!0});var L={x:0,y:0};if(N(b[0])&&N(b[1])&&(L.x+=b[0],L.y+=b[1],D)){var A=e.hoverData.dragDelta;A&&N(A[0])&&N(A[1])&&(L.x+=A[0],L.y+=A[1])}e.hoverData.draggingEles=!0,x.silentShift(L).emit("position drag"),e.redrawHint("drag",!0),e.redraw()}}else!function(){var t=e.hoverData.dragDelta=e.hoverData.dragDelta||[];0===t.length?(t.push(b[0]),t.push(b[1])):(t[0]+=b[0],t[1]+=b[1])}();o=!0}else if(v){if(e.hoverData.dragging||!a.boxSelectionEnabled()||!C&&a.panningEnabled()&&a.userPanningEnabled()){if(!e.hoverData.selecting&&a.panningEnabled()&&a.userPanningEnabled()){i(y,e.hoverData.downs)&&(e.hoverData.dragging=!0,e.hoverData.justStartedPan=!0,p[4]=0,e.data.bgActivePosistion=at(f),e.redrawHint("select",!0),e.redraw())}}else P();y&&y.pannable()&&y.active()&&y.unactivate()}return p[2]=c[0],p[3]=c[1],o?(t.stopPropagation&&t.stopPropagation(),t.preventDefault&&t.preventDefault(),!1):void 0}}),!1),e.registerBinding(window,"mouseup",(function(i){if(e.hoverData.capture){e.hoverData.capture=!1;var o=e.cy,a=e.projectIntoViewport(i.clientX,i.clientY),s=e.selection,l=e.findNearestElement(a[0],a[1],!0,!1),u=e.dragData.possibleDragElements,c=e.hoverData.down,f=r(i);if(e.data.bgActivePosistion&&(e.redrawHint("select",!0),e.redraw()),e.hoverData.tapholdCancelled=!0,e.data.bgActivePosistion=void 0,c&&c.unactivate(),3===e.hoverData.which){var h={originalEvent:i,type:"cxttapend",position:{x:a[0],y:a[1]}};if(c?c.emit(h):o.emit(h),!e.hoverData.cxtDragged){var p={originalEvent:i,type:"cxttap",position:{x:a[0],y:a[1]}};c?c.emit(p):o.emit(p)}e.hoverData.cxtDragged=!1,e.hoverData.which=null}else if(1===e.hoverData.which){if(n(l,["mouseup","tapend","vmouseup"],i,{x:a[0],y:a[1]}),e.dragData.didDrag||e.hoverData.dragged||e.hoverData.selecting||e.hoverData.isOverThresholdDrag||(n(c,["click","tap","vclick"],i,{x:a[0],y:a[1]}),b=!1,i.timeStamp-x<=o.multiClickDebounceTime()?(y&&clearTimeout(y),b=!0,x=null,n(c,["dblclick","dbltap","vdblclick"],i,{x:a[0],y:a[1]})):(y=setTimeout((function(){b||n(c,["oneclick","onetap","voneclick"],i,{x:a[0],y:a[1]})}),o.multiClickDebounceTime()),x=i.timeStamp)),null!=c||e.dragData.didDrag||e.hoverData.selecting||e.hoverData.dragged||r(i)||(o.$(t).unselect(["tapunselect"]),u.length>0&&e.redrawHint("eles",!0),e.dragData.possibleDragElements=u=o.collection()),l!=c||e.dragData.didDrag||e.hoverData.selecting||null!=l&&l._private.selectable&&(e.hoverData.dragging||("additive"===o.selectionType()||f?l.selected()?l.unselect(["tapunselect"]):l.select(["tapselect"]):f||(o.$(t).unmerge(l).unselect(["tapunselect"]),l.select(["tapselect"]))),e.redrawHint("eles",!0)),e.hoverData.selecting){var g=o.collection(e.getAllInBox(s[0],s[1],s[2],s[3]));e.redrawHint("select",!0),g.length>0&&e.redrawHint("eles",!0),o.emit({type:"boxend",originalEvent:i,position:{x:a[0],y:a[1]}});var v=function(e){return 
e.selectable()&&!e.selected()};"additive"===o.selectionType()||f||o.$(t).unmerge(g).unselect(),g.emit("box").stdFilter(v).select().emit("boxselect"),e.redraw()}if(e.hoverData.dragging&&(e.hoverData.dragging=!1,e.redrawHint("select",!0),e.redrawHint("eles",!0),e.redraw()),!s[4]){e.redrawHint("drag",!0),e.redrawHint("eles",!0);var m=c&&c.grabbed();d(u),m&&(c.emit("freeon"),u.emit("free"),e.dragData.didDrag&&(c.emit("dragfreeon"),u.emit("dragfree")))}}s[4]=0,e.hoverData.down=null,e.hoverData.cxtStarted=!1,e.hoverData.draggingEles=!1,e.hoverData.selecting=!1,e.hoverData.isOverThresholdDrag=!1,e.dragData.didDrag=!1,e.hoverData.dragged=!1,e.hoverData.dragDelta=[],e.hoverData.mdownPos=null,e.hoverData.mdownGPos=null}}),!1);var _,k,E,S,C,P,T,O,M,D,L,A,j,R=function(t){if(!e.scrollingPage){var n=e.cy,r=n.zoom(),i=n.pan(),o=e.projectIntoViewport(t.clientX,t.clientY),a=[o[0]*r+i.x,o[1]*r+i.y];if(e.hoverData.draggingEles||e.hoverData.dragging||e.hoverData.cxtStarted||0!==e.selection[4])t.preventDefault();else if(n.panningEnabled()&&n.userPanningEnabled()&&n.zoomingEnabled()&&n.userZoomingEnabled()){var s;t.preventDefault(),e.data.wheelZooming=!0,clearTimeout(e.data.wheelTimeout),e.data.wheelTimeout=setTimeout((function(){e.data.wheelZooming=!1,e.redrawHint("eles",!0),e.redraw()}),150),s=null!=t.deltaY?t.deltaY/-250:null!=t.wheelDeltaY?t.wheelDeltaY/1e3:t.wheelDelta/1e3,s*=e.wheelSensitivity,1===t.deltaMode&&(s*=33);var l=n.zoom()*Math.pow(10,s);"gesturechange"===t.type&&(l=e.gestureStartZoom*t.scale),n.zoom({level:l,renderedPosition:{x:a[0],y:a[1]}}),n.emit("gesturechange"===t.type?"pinchzoom":"scrollzoom")}}};e.registerBinding(e.container,"wheel",R,!0),e.registerBinding(window,"scroll",(function(t){e.scrollingPage=!0,clearTimeout(e.scrollingPageTimeout),e.scrollingPageTimeout=setTimeout((function(){e.scrollingPage=!1}),250)}),!0),e.registerBinding(e.container,"gesturestart",(function(t){e.gestureStartZoom=e.cy.zoom(),e.hasTouchStarted||t.preventDefault()}),!0),e.registerBinding(e.container,"gesturechange",(function(t){e.hasTouchStarted||R(t)}),!0),e.registerBinding(e.container,"mouseout",(function(t){var n=e.projectIntoViewport(t.clientX,t.clientY);e.cy.emit({originalEvent:t,type:"mouseout",position:{x:n[0],y:n[1]}})}),!1),e.registerBinding(e.container,"mouseover",(function(t){var n=e.projectIntoViewport(t.clientX,t.clientY);e.cy.emit({originalEvent:t,type:"mouseover",position:{x:n[0],y:n[1]}})}),!1);var I,B,z,F,q,V,U,W=function(e,t,n,r){return Math.sqrt((n-e)*(n-e)+(r-t)*(r-t))},Z=function(e,t,n,r){return(n-e)*(n-e)+(r-t)*(r-t)};if(e.registerBinding(e.container,"touchstart",I=function(t){if(e.hasTouchStarted=!0,w(t)){h(),e.touchData.capture=!0,e.data.bgActivePosistion=void 0;var r=e.cy,i=e.touchData.now,o=e.touchData.earlier;if(t.touches[0]){var s=e.projectIntoViewport(t.touches[0].clientX,t.touches[0].clientY);i[0]=s[0],i[1]=s[1]}if(t.touches[1]){s=e.projectIntoViewport(t.touches[1].clientX,t.touches[1].clientY);i[2]=s[0],i[3]=s[1]}if(t.touches[2]){s=e.projectIntoViewport(t.touches[2].clientX,t.touches[2].clientY);i[4]=s[0],i[5]=s[1]}if(t.touches[1]){e.touchData.singleTouchMoved=!0,d(e.dragData.touchDragEles);var u=e.findContainerClientCoords();M=u[0],D=u[1],L=u[2],A=u[3],_=t.touches[0].clientX-M,k=t.touches[0].clientY-D,E=t.touches[1].clientX-M,S=t.touches[1].clientY-D,j=0<=_&&_<=L&&0<=E&&E<=L&&0<=k&&k<=A&&0<=S&&S<=A;var f=r.pan(),p=r.zoom();C=W(_,k,E,S),P=Z(_,k,E,S),O=[((T=[(_+E)/2,(k+S)/2])[0]-f.x)/p,(T[1]-f.y)/p];if(P<4e4&&!t.touches[2]){var 
g=e.findNearestElement(i[0],i[1],!0,!0),v=e.findNearestElement(i[2],i[3],!0,!0);return g&&g.isNode()?(g.activate().emit({originalEvent:t,type:"cxttapstart",position:{x:i[0],y:i[1]}}),e.touchData.start=g):v&&v.isNode()?(v.activate().emit({originalEvent:t,type:"cxttapstart",position:{x:i[0],y:i[1]}}),e.touchData.start=v):r.emit({originalEvent:t,type:"cxttapstart",position:{x:i[0],y:i[1]}}),e.touchData.start&&(e.touchData.start._private.grabbed=!1),e.touchData.cxt=!0,e.touchData.cxtDragged=!1,e.data.bgActivePosistion=void 0,void e.redraw()}}if(t.touches[2])r.boxSelectionEnabled()&&t.preventDefault();else if(t.touches[1]);else if(t.touches[0]){var m=e.findNearestElements(i[0],i[1],!0,!0),y=m[0];if(null!=y&&(y.activate(),e.touchData.start=y,e.touchData.starts=m,e.nodeIsGrabbable(y))){var b=e.dragData.touchDragEles=r.collection(),x=null;e.redrawHint("eles",!0),e.redrawHint("drag",!0),y.selected()?(x=r.$((function(t){return t.selected()&&e.nodeIsGrabbable(t)})),l(x,{addToList:b})):c(y,{addToList:b}),a(y);var N=function(e){return{originalEvent:t,type:e,position:{x:i[0],y:i[1]}}};y.emit(N("grabon")),x?x.forEach((function(e){e.emit(N("grab"))})):y.emit(N("grab"))}n(y,["touchstart","tapstart","vmousedown"],t,{x:i[0],y:i[1]}),null==y&&(e.data.bgActivePosistion={x:s[0],y:s[1]},e.redrawHint("select",!0),e.redraw()),e.touchData.singleTouchMoved=!1,e.touchData.singleTouchStartTime=+new Date,clearTimeout(e.touchData.tapholdTimeout),e.touchData.tapholdTimeout=setTimeout((function(){!1!==e.touchData.singleTouchMoved||e.pinching||e.touchData.selecting||n(e.touchData.start,["taphold"],t,{x:i[0],y:i[1]})}),e.tapholdDuration)}if(t.touches.length>=1){for(var R=e.touchData.startPosition=[],I=0;I=e.touchTapThreshold2}if(r&&e.touchData.cxt){t.preventDefault();var x=t.touches[0].clientX-M,T=t.touches[0].clientY-D,L=t.touches[1].clientX-M,A=t.touches[1].clientY-D,R=Z(x,T,L,A);if(R/P>=2.25||R>=22500){e.touchData.cxt=!1,e.data.bgActivePosistion=void 0,e.redrawHint("select",!0);var I={originalEvent:t,type:"cxttapend",position:{x:s[0],y:s[1]}};e.touchData.start?(e.touchData.start.unactivate().emit(I),e.touchData.start=null):a.emit(I)}}if(r&&e.touchData.cxt){I={originalEvent:t,type:"cxtdrag",position:{x:s[0],y:s[1]}};e.data.bgActivePosistion=void 0,e.redrawHint("select",!0),e.touchData.start?e.touchData.start.emit(I):a.emit(I),e.touchData.start&&(e.touchData.start._private.grabbed=!1),e.touchData.cxtDragged=!0;var B=e.findNearestElement(s[0],s[1],!0,!0);e.touchData.cxtOver&&B===e.touchData.cxtOver||(e.touchData.cxtOver&&e.touchData.cxtOver.emit({originalEvent:t,type:"cxtdragout",position:{x:s[0],y:s[1]}}),e.touchData.cxtOver=B,B&&B.emit({originalEvent:t,type:"cxtdragover",position:{x:s[0],y:s[1]}}))}else if(r&&t.touches[2]&&a.boxSelectionEnabled())t.preventDefault(),e.data.bgActivePosistion=void 0,this.lastThreeTouch=+new Date,e.touchData.selecting||a.emit({originalEvent:t,type:"boxstart",position:{x:s[0],y:s[1]}}),e.touchData.selecting=!0,e.touchData.didSelect=!0,o[4]=1,o&&0!==o.length&&void 0!==o[0]?(o[2]=(s[0]+s[2]+s[4])/3,o[3]=(s[1]+s[3]+s[5])/3):(o[0]=(s[0]+s[2]+s[4])/3,o[1]=(s[1]+s[3]+s[5])/3,o[2]=(s[0]+s[2]+s[4])/3+1,o[3]=(s[1]+s[3]+s[5])/3+1),e.redrawHint("select",!0),e.redraw();else if(r&&t.touches[1]&&!e.touchData.didSelect&&a.zoomingEnabled()&&a.panningEnabled()&&a.userZoomingEnabled()&&a.userPanningEnabled()){if(t.preventDefault(),e.data.bgActivePosistion=void 0,e.redrawHint("select",!0),ee=e.dragData.touchDragEles){e.redrawHint("drag",!0);for(var 
z=0;z0&&!e.hoverData.draggingEles&&!e.swipePanning&&null!=e.data.bgActivePosistion&&(e.data.bgActivePosistion=void 0,e.redrawHint("select",!0),e.redraw())}},!1),e.registerBinding(window,"touchcancel",z=function(t){var n=e.touchData.start;e.touchData.capture=!1,n&&n.unactivate()}),e.registerBinding(window,"touchend",F=function(r){var i=e.touchData.start;if(e.touchData.capture){0===r.touches.length&&(e.touchData.capture=!1),r.preventDefault();var o=e.selection;e.swipePanning=!1,e.hoverData.draggingEles=!1;var a,s=e.cy,l=s.zoom(),u=e.touchData.now,c=e.touchData.earlier;if(r.touches[0]){var f=e.projectIntoViewport(r.touches[0].clientX,r.touches[0].clientY);u[0]=f[0],u[1]=f[1]}if(r.touches[1]){f=e.projectIntoViewport(r.touches[1].clientX,r.touches[1].clientY);u[2]=f[0],u[3]=f[1]}if(r.touches[2]){f=e.projectIntoViewport(r.touches[2].clientX,r.touches[2].clientY);u[4]=f[0],u[5]=f[1]}if(i&&i.unactivate(),e.touchData.cxt){if(a={originalEvent:r,type:"cxttapend",position:{x:u[0],y:u[1]}},i?i.emit(a):s.emit(a),!e.touchData.cxtDragged){var h={originalEvent:r,type:"cxttap",position:{x:u[0],y:u[1]}};i?i.emit(h):s.emit(h)}return e.touchData.start&&(e.touchData.start._private.grabbed=!1),e.touchData.cxt=!1,e.touchData.start=null,void e.redraw()}if(!r.touches[2]&&s.boxSelectionEnabled()&&e.touchData.selecting){e.touchData.selecting=!1;var p=s.collection(e.getAllInBox(o[0],o[1],o[2],o[3]));o[0]=void 0,o[1]=void 0,o[2]=void 0,o[3]=void 0,o[4]=0,e.redrawHint("select",!0),s.emit({type:"boxend",originalEvent:r,position:{x:u[0],y:u[1]}});p.emit("box").stdFilter((function(e){return e.selectable()&&!e.selected()})).select().emit("boxselect"),p.nonempty()&&e.redrawHint("eles",!0),e.redraw()}if(null!=i&&i.unactivate(),r.touches[2])e.data.bgActivePosistion=void 0,e.redrawHint("select",!0);else if(r.touches[1]);else if(r.touches[0]);else if(!r.touches[0]){e.data.bgActivePosistion=void 0,e.redrawHint("select",!0);var g=e.dragData.touchDragEles;if(null!=i){var v=i._private.grabbed;d(g),e.redrawHint("drag",!0),e.redrawHint("eles",!0),v&&(i.emit("freeon"),g.emit("free"),e.dragData.didDrag&&(i.emit("dragfreeon"),g.emit("dragfree"))),n(i,["touchend","tapend","vmouseup","tapdragout"],r,{x:u[0],y:u[1]}),i.unactivate(),e.touchData.start=null}else{var m=e.findNearestElement(u[0],u[1],!0,!0);n(m,["touchend","tapend","vmouseup","tapdragout"],r,{x:u[0],y:u[1]})}var y=e.touchData.startPosition[0]-u[0],b=y*y,x=e.touchData.startPosition[1]-u[1],w=(b+x*x)*l*l;e.touchData.singleTouchMoved||(i||s.$(":selected").unselect(["tapunselect"]),n(i,["tap","vclick"],r,{x:u[0],y:u[1]}),q=!1,r.timeStamp-U<=s.multiClickDebounceTime()?(V&&clearTimeout(V),q=!0,U=null,n(i,["dbltap","vdblclick"],r,{x:u[0],y:u[1]})):(V=setTimeout((function(){q||n(i,["onetap","voneclick"],r,{x:u[0],y:u[1]})}),s.multiClickDebounceTime()),U=r.timeStamp)),null!=i&&!e.dragData.didDrag&&i._private.selectable&&w2){for(var T=[u[0],u[1]],O=Math.pow(T[0]-e,2)+Math.pow(T[1]-t,2),M=1;M0)return g[0]}return null},f=Object.keys(c),h=0;h0?l:Et(i,o,e,t,n,r,a)},checkPoint:function(e,t,n,r,i,o,a){var s=Vt(r,i),l=2*s;if(Mt(e,t,this.points,o,a,r,i-l,[0,-1],n))return!0;if(Mt(e,t,this.points,o,a,r-l,i,[0,-1],n))return!0;var u=r/2+2*n,c=i/2+2*n;return!!Ot(e,t,[o-u,a-c,o-u,a,o+u,a,o+u,a-c])||(!!Lt(e,t,l,l,o+r/2-s,a+i/2-s,n)||!!Lt(e,t,l,l,o-r/2+s,a+i/2-s,n))}}},ms.registerNodeShapes=function(){var 
e=this.nodeShapes={},t=this;this.generateEllipse(),this.generatePolygon("triangle",zt(3,0)),this.generateRoundPolygon("round-triangle",zt(3,0)),this.generatePolygon("rectangle",zt(4,0)),e.square=e.rectangle,this.generateRoundRectangle(),this.generateCutRectangle(),this.generateBarrel(),this.generateBottomRoundrectangle();var n=[0,1,1,0,0,-1,-1,0];this.generatePolygon("diamond",n),this.generateRoundPolygon("round-diamond",n),this.generatePolygon("pentagon",zt(5,0)),this.generateRoundPolygon("round-pentagon",zt(5,0)),this.generatePolygon("hexagon",zt(6,0)),this.generateRoundPolygon("round-hexagon",zt(6,0)),this.generatePolygon("heptagon",zt(7,0)),this.generateRoundPolygon("round-heptagon",zt(7,0)),this.generatePolygon("octagon",zt(8,0)),this.generateRoundPolygon("round-octagon",zt(8,0));var r=new Array(20),i=qt(5,0),o=qt(5,Math.PI/5),a=.5*(3-Math.sqrt(5));a*=1.57;for(var s=0;s=e.deqFastCost*g)break}else if(i){if(h>=e.deqCost*l||h>=e.deqAvgCost*s)break}else if(p>=e.deqNoDrawCost*_s)break;var v=e.deq(t,d,c);if(!(v.length>0))break;for(var m=0;m0&&(e.onDeqd(t,u),!i&&e.shouldRedraw(t,u,d,c)&&r())}),i(t))}}},Es=function(){function e(t){var n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:Ee;g(this,e),this.idsByKey=new ze,this.keyForId=new ze,this.cachesByLvl=new ze,this.lvls=[],this.getKey=t,this.doesEleInvalidateKey=n}return m(e,[{key:"getIdsFor",value:function(e){null==e&&Pe("Can not get id list for null key");var t=this.idsByKey,n=this.idsByKey.get(e);return n||(n=new qe,t.set(e,n)),n}},{key:"addIdForKey",value:function(e,t){null!=e&&this.getIdsFor(e).add(t)}},{key:"deleteIdForKey",value:function(e,t){null!=e&&this.getIdsFor(e).delete(t)}},{key:"getNumberOfIdsForKey",value:function(e){return null==e?0:this.getIdsFor(e).size}},{key:"updateKeyMappingFor",value:function(e){var t=e.id(),n=this.keyForId.get(t),r=this.getKey(e);this.deleteIdForKey(n,t),this.addIdForKey(r,t),this.keyForId.set(t,r)}},{key:"deleteKeyMappingFor",value:function(e){var t=e.id(),n=this.keyForId.get(t);this.deleteIdForKey(n,t),this.keyForId.delete(t)}},{key:"keyHasChangedFor",value:function(e){var t=e.id();return this.keyForId.get(t)!==this.getKey(e)}},{key:"isInvalid",value:function(e){return this.keyHasChangedFor(e)||this.doesEleInvalidateKey(e)}},{key:"getCachesAt",value:function(e){var t=this.cachesByLvl,n=this.lvls,r=t.get(e);return r||(r=new ze,t.set(e,r),n.push(e)),r}},{key:"getCache",value:function(e,t){return this.getCachesAt(t).get(e)}},{key:"get",value:function(e,t){var n=this.getKey(e),r=this.getCache(n,t);return null!=r&&this.updateKeyMappingFor(e),r}},{key:"getForCachedKey",value:function(e,t){var n=this.keyForId.get(e.id());return this.getCache(n,t)}},{key:"hasCache",value:function(e,t){return this.getCachesAt(t).has(e)}},{key:"has",value:function(e,t){var n=this.getKey(e);return this.hasCache(n,t)}},{key:"setCache",value:function(e,t,n){n.key=e,this.getCachesAt(t).set(e,n)}},{key:"set",value:function(e,t,n){var r=this.getKey(e);this.setCache(r,t,n),this.updateKeyMappingFor(e)}},{key:"deleteCache",value:function(e,t){this.getCachesAt(t).delete(e)}},{key:"delete",value:function(e,t){var n=this.getKey(e);this.deleteCache(n,t)}},{key:"invalidateKey",value:function(e){var t=this;this.lvls.forEach((function(n){return t.deleteCache(e,n)}))}},{key:"invalidate",value:function(e){var t=e.id(),n=this.keyForId.get(t);this.deleteKeyMappingFor(e);var r=this.doesEleInvalidateKey(e);return 
r&&this.invalidateKey(n),r||0===this.getNumberOfIdsForKey(n)}}]),e}(),Ss={dequeue:"dequeue",downscale:"downscale",highQuality:"highQuality"},Cs=Ae({getKey:null,doesEleInvalidateKey:Ee,drawElement:null,getBoundingBox:null,getRotationPoint:null,getRotationOffset:null,isVisible:ke,allowEdgeTxrCaching:!0,allowParentTxrCaching:!0}),Ps=function(e,t){var n=this;n.renderer=e,n.onDequeues=[];var r=Cs(t);J(n,r),n.lookup=new Es(r.getKey,r.doesEleInvalidateKey),n.setupDequeueing()},Ts=Ps.prototype;Ts.reasons=Ss,Ts.getTextureQueue=function(e){var t=this;return t.eleImgCaches=t.eleImgCaches||{},t.eleImgCaches[e]=t.eleImgCaches[e]||[]},Ts.getRetiredTextureQueue=function(e){var t=this.eleImgCaches.retired=this.eleImgCaches.retired||{};return t[e]=t[e]||[]},Ts.getElementQueue=function(){return this.eleCacheQueue=this.eleCacheQueue||new c.default((function(e,t){return t.reqs-e.reqs}))},Ts.getElementKeyToQueue=function(){return this.eleKeyToCacheQueue=this.eleKeyToCacheQueue||{}},Ts.getElement=function(e,t,n,r,i){var o=this,a=this.renderer,s=a.cy.zoom(),l=this.lookup;if(!t||0===t.w||0===t.h||isNaN(t.w)||isNaN(t.h)||!e.visible()||e.removed())return null;if(!o.allowEdgeTxrCaching&&e.isEdge()||!o.allowParentTxrCaching&&e.isParent())return null;if(null==r&&(r=Math.ceil(lt(s*n))),r<-4)r=-4;else if(s>=7.99||r>3)return null;var u=Math.pow(2,r),c=t.h*u,d=t.w*u,f=a.eleTextBiggerThanMin(e,u);if(!this.isVisible(e,f))return null;var h,p=l.get(e,r);if(p&&p.invalidated&&(p.invalidated=!1,p.texture.invalidatedWidth-=p.width),p)return p;if(h=c<=25?25:c<=50?50:50*Math.ceil(c/50),c>1024||d>1024)return null;var g=o.getTextureQueue(h),v=g[g.length-2],m=function(){return o.recycleTexture(h,d)||o.addTexture(h,d)};v||(v=g[g.length-1]),v||(v=m()),v.width-v.usedWidthr;P--)S=o.getElement(e,t,n,P,Ss.downscale);C()}else{var T;if(!x&&!w&&!_)for(var O=r-1;O>=-4;O--){var M=l.get(e,O);if(M){T=M;break}}if(b(T))return o.queueElement(e,r),T;v.context.translate(v.usedWidth,0),v.context.scale(u,u),this.drawElement(v.context,e,t,f,!1),v.context.scale(1/u,1/u),v.context.translate(-v.usedWidth,0)}return p={x:v.usedWidth,texture:v,level:r,scale:u,width:d,height:c,scaledLabelShown:f},v.usedWidth+=Math.ceil(d+8),v.eleCaches.push(p),l.set(e,r,p),o.checkTextureFullness(v),p},Ts.invalidateElements=function(e){for(var t=0;t=.2*e.width&&this.retireTexture(e)},Ts.checkTextureFullness=function(e){var t=this.getTextureQueue(e.height);e.usedWidth/e.width>.8&&e.fullnessChecks>=10?je(t,e):e.fullnessChecks++},Ts.retireTexture=function(e){var t=e.height,n=this.getTextureQueue(t),r=this.lookup;je(n,e),e.retired=!0;for(var i=e.eleCaches,o=0;o=t)return o.retired=!1,o.usedWidth=0,o.invalidatedWidth=0,o.fullnessChecks=0,Re(o.eleCaches),o.context.setTransform(1,0,0,1,0,0),o.context.clearRect(0,0,o.width,o.height),je(r,o),n.push(o),o}},Ts.queueElement=function(e,t){var n=this.getElementQueue(),r=this.getElementKeyToQueue(),i=this.getKey(e),o=r[i];if(o)o.level=Math.max(o.level,t),o.eles.merge(e),o.reqs++,n.updateItem(o);else{var a={eles:e.spawn().merge(e),level:t,reqs:1,key:i};n.push(a),r[i]=a}},Ts.dequeue=function(e){for(var t=this,n=t.getElementQueue(),r=t.getElementKeyToQueue(),i=[],o=t.lookup,a=0;a<1&&n.size()>0;a++){var s=n.pop(),l=s.key,u=s.eles[0],c=o.hasCache(u,s.level);if(r[l]=null,!c){i.push(s);var d=t.getBoundingBox(u);t.getElement(u,d,e,s.level,Ss.dequeue)}}return i},Ts.removeFromQueue=function(e){var 
t=this.getElementQueue(),n=this.getElementKeyToQueue(),r=this.getKey(e),i=n[r];null!=i&&(1===i.eles.length?(i.reqs=_e,t.updateItem(i),t.pop(),n[r]=null):i.eles.unmerge(e))},Ts.onDequeue=function(e){this.onDequeues.push(e)},Ts.offDequeue=function(e){je(this.onDequeues,e)},Ts.setupDequeueing=ks({deqRedrawThreshold:100,deqCost:.15,deqAvgCost:.1,deqNoDrawCost:.9,deqFastCost:.9,deq:function(e,t,n){return e.dequeue(t,n)},onDeqd:function(e,t){for(var n=0;n=3.99||n>2)return null;r.validateLayersElesOrdering(n,e);var a,s,l=r.layersByLevel,u=Math.pow(2,n),c=l[n]=l[n]||[];if(r.levelIsComplete(n,e))return c;!function(){var t=function(t){if(r.validateLayersElesOrdering(t,e),r.levelIsComplete(t,e))return s=l[t],!0},i=function(e){if(!s)for(var r=n+e;-4<=r&&r<=2&&!t(r);r+=e);};i(1),i(-1);for(var o=c.length-1;o>=0;o--){var a=c[o];a.invalid&&je(c,a)}}();var d=function(t){var i=(t=t||{}).after;if(function(){if(!a){a=vt();for(var t=0;t16e6)return null;var o=r.makeLayer(a,n);if(null!=i){var s=c.indexOf(i)+1;c.splice(s,0,o)}else(void 0===t.insert||t.insert)&&c.unshift(o);return o};if(r.skipping&&!o)return null;for(var f=null,h=e.length/1,p=!o,g=0;g=h||!kt(f.bb,v.boundingBox()))&&!(f=d({insert:!0,after:f})))return null;s||p?r.queueLayer(f,v):r.drawEleInLayer(f,v,n,t),f.eles.push(v),y[n]=f}}return s||(p?null:c)},Ms.getEleLevelForLayerLevel=function(e,t){return e},Ms.drawEleInLayer=function(e,t,n,r){var i=this.renderer,o=e.context,a=t.boundingBox();0!==a.w&&0!==a.h&&t.visible()&&(n=this.getEleLevelForLayerLevel(n,r),i.setImgSmoothing(o,!1),i.drawCachedElement(o,t,null,null,n,true),i.setImgSmoothing(o,!0))},Ms.levelIsComplete=function(e,t){var n=this.layersByLevel[e];if(!n||0===n.length)return!1;for(var r=0,i=0;i0)return!1;if(o.invalid)return!1;r+=o.eles.length}return r===t.length},Ms.validateLayersElesOrdering=function(e,t){var n=this.layersByLevel[e];if(n)for(var r=0;r0){e=!0;break}}return e},Ms.invalidateElements=function(e){var t=this;0!==e.length&&(t.lastInvalidationTime=le(),0!==e.length&&t.haveLayers()&&t.updateElementsInLayers(e,(function(e,n,r){t.invalidateLayer(e)})))},Ms.invalidateLayer=function(e){if(this.lastInvalidationTime=le(),!e.invalid){var t=e.level,n=e.eles,r=this.layersByLevel[t];je(r,e),e.elesQueue=[],e.invalid=!0,e.replacement&&(e.replacement.invalid=!0);for(var i=0;i3&&void 0!==arguments[3])||arguments[3],i=!(arguments.length>4&&void 0!==arguments[4])||arguments[4],o=!(arguments.length>5&&void 0!==arguments[5])||arguments[5],a=this,s=t._private.rscratch;if((!o||t.visible())&&!s.badLine&&null!=s.allpts&&!isNaN(s.allpts[0])){var l;n&&(l=n,e.translate(-l.x1,-l.y1));var u=o?t.pstyle("opacity").value:1,c=o?t.pstyle("line-opacity").value:1,d=t.pstyle("curve-style").value,f=t.pstyle("line-style").value,h=t.pstyle("width").pfValue,p=t.pstyle("line-cap").value,g=u*c,v=u*c,m=function(){var n=arguments.length>0&&void 0!==arguments[0]?arguments[0]:g;"straight-triangle"===d?(a.eleStrokeStyle(e,t,n),a.drawEdgeTrianglePath(t,e,s.allpts)):(e.lineWidth=h,e.lineCap=p,a.eleStrokeStyle(e,t,n),a.drawEdgePath(t,e,s.allpts,f),e.lineCap="butt")},y=function(){var n=arguments.length>0&&void 0!==arguments[0]?arguments[0]:v;a.drawArrowheads(e,t,n)};if(e.lineJoin="round","yes"===t.pstyle("ghost").value){var 
b=t.pstyle("ghost-offset-x").pfValue,x=t.pstyle("ghost-offset-y").pfValue,w=t.pstyle("ghost-opacity").value,_=g*w;e.translate(b,x),m(_),y(_),e.translate(-b,-x)}i&&a.drawEdgeUnderlay(e,t),m(),y(),i&&a.drawEdgeOverlay(e,t),a.drawElementText(e,t,null,r),n&&e.translate(l.x1,l.y1)}}},Ys=function(e){if(!["overlay","underlay"].includes(e))throw new Error("Invalid state");return function(t,n){if(n.visible()){var r=n.pstyle("".concat(e,"-opacity")).value;if(0!==r){var i=this,o=i.usePaths(),a=n._private.rscratch,s=2*n.pstyle("".concat(e,"-padding")).pfValue,l=n.pstyle("".concat(e,"-color")).value;t.lineWidth=s,"self"!==a.edgeType||o?t.lineCap="round":t.lineCap="butt",i.colorStrokeStyle(t,l[0],l[1],l[2],r),i.drawEdgePath(n,t,a.allpts,"solid")}}}};Ks.drawEdgeOverlay=Ys("overlay"),Ks.drawEdgeUnderlay=Ys("underlay"),Ks.drawEdgePath=function(e,t,n,r){var i,o=e._private.rscratch,a=t,s=!1,l=this.usePaths(),u=e.pstyle("line-dash-pattern").pfValue,c=e.pstyle("line-dash-offset").pfValue;if(l){var d=n.join("$");o.pathCacheKey&&o.pathCacheKey===d?(i=t=o.pathCache,s=!0):(i=t=new Path2D,o.pathCacheKey=d,o.pathCache=i)}if(a.setLineDash)switch(r){case"dotted":a.setLineDash([1,1]);break;case"dashed":a.setLineDash(u),a.lineDashOffset=c;break;case"solid":a.setLineDash([])}if(!s&&!o.badLine)switch(t.beginPath&&t.beginPath(),t.moveTo(n[0],n[1]),o.edgeType){case"bezier":case"self":case"compound":case"multibezier":for(var f=2;f+35&&void 0!==arguments[5])||arguments[5],a=this;if(null==r){if(o&&!a.eleTextBiggerThanMin(t))return}else if(!1===r)return;if(t.isNode()){var s=t.pstyle("label");if(!s||!s.value)return;var l=a.getLabelJustification(t);e.textAlign=l,e.textBaseline="bottom"}else{var u=t.element()._private.rscratch.badLine,c=t.pstyle("label"),d=t.pstyle("source-label"),f=t.pstyle("target-label");if(u||(!c||!c.value)&&(!d||!d.value)&&(!f||!f.value))return;e.textAlign="center",e.textBaseline="bottom"}var h,p=!n;n&&(h=n,e.translate(-h.x1,-h.y1)),null==i?(a.drawText(e,t,null,p,o),t.isEdge()&&(a.drawText(e,t,"source",p,o),a.drawText(e,t,"target",p,o))):a.drawText(e,t,i,p,o),n&&e.translate(h.x1,h.y1)},$s.getFontCache=function(e){var t;this.fontCaches=this.fontCaches||[];for(var n=0;n2&&void 0!==arguments[2])||arguments[2],r=t.pstyle("font-style").strValue,i=t.pstyle("font-size").pfValue+"px",o=t.pstyle("font-family").strValue,a=t.pstyle("font-weight").strValue,s=n?t.effectiveOpacity()*t.pstyle("text-opacity").value:1,l=t.pstyle("text-outline-opacity").value*s,u=t.pstyle("color").value,c=t.pstyle("text-outline-color").value;e.font=r+" "+a+" "+i+" "+o,e.lineJoin="round",this.colorFillStyle(e,u[0],u[1],u[2],s),this.colorStrokeStyle(e,c[0],c[1],c[2],l)},$s.getTextAngle=function(e,t){var n=e._private.rscratch,r=t?t+"-":"",i=e.pstyle(r+"text-rotation"),o=Ie(n,"labelAngle",t);return"autorotate"===i.strValue?e.isEdge()?o:0:"none"===i.strValue?0:i.pfValue},$s.drawText=function(e,t,n){var r=!(arguments.length>3&&void 0!==arguments[3])||arguments[3],i=!(arguments.length>4&&void 0!==arguments[4])||arguments[4],o=t._private.rscratch,a=i?t.effectiveOpacity():1;if(!i||0!==a&&0!==t.pstyle("text-opacity").value){"main"===n&&(n=null);var s,l,u=Ie(o,"labelX",n),c=Ie(o,"labelY",n),d=this.getLabelText(t,n);if(null!=d&&""!==d&&!isNaN(u)&&!isNaN(c)){this.setupTextStyle(e,t,i);var 
f,h=n?n+"-":"",p=Ie(o,"labelWidth",n),g=Ie(o,"labelHeight",n),v=t.pstyle(h+"text-margin-x").pfValue,m=t.pstyle(h+"text-margin-y").pfValue,y=t.isEdge(),b=t.pstyle("text-halign").value,x=t.pstyle("text-valign").value;switch(y&&(b="center",x="center"),u+=v,c+=m,0!==(f=r?this.getTextAngle(t,n):0)&&(s=u,l=c,e.translate(s,l),e.rotate(f),u=0,c=0),x){case"top":break;case"center":c+=g/2;break;case"bottom":c+=g}var w=t.pstyle("text-background-opacity").value,_=t.pstyle("text-border-opacity").value,k=t.pstyle("text-border-width").pfValue,E=t.pstyle("text-background-padding").pfValue;if(w>0||k>0&&_>0){var S=u-E;switch(b){case"left":S-=p;break;case"center":S-=p/2}var C=c-g-E,P=p+2*E,T=g+2*E;if(w>0){var O=e.fillStyle,M=t.pstyle("text-background-color").value;e.fillStyle="rgba("+M[0]+","+M[1]+","+M[2]+","+w*a+")",0===t.pstyle("text-background-shape").strValue.indexOf("round")?function(e,t,n,r,i){var o=arguments.length>5&&void 0!==arguments[5]?arguments[5]:5;e.beginPath(),e.moveTo(t+o,n),e.lineTo(t+r-o,n),e.quadraticCurveTo(t+r,n,t+r,n+o),e.lineTo(t+r,n+i-o),e.quadraticCurveTo(t+r,n+i,t+r-o,n+i),e.lineTo(t+o,n+i),e.quadraticCurveTo(t,n+i,t,n+i-o),e.lineTo(t,n+o),e.quadraticCurveTo(t,n,t+o,n),e.closePath(),e.fill()}(e,S,C,P,T,2):e.fillRect(S,C,P,T),e.fillStyle=O}if(k>0&&_>0){var D=e.strokeStyle,N=e.lineWidth,L=t.pstyle("text-border-color").value,A=t.pstyle("text-border-style").value;if(e.strokeStyle="rgba("+L[0]+","+L[1]+","+L[2]+","+_*a+")",e.lineWidth=k,e.setLineDash)switch(A){case"dotted":e.setLineDash([1,1]);break;case"dashed":e.setLineDash([4,2]);break;case"double":e.lineWidth=k/4,e.setLineDash([]);break;case"solid":e.setLineDash([])}if(e.strokeRect(S,C,P,T),"double"===A){var j=k/2;e.strokeRect(S+j,C+j,P-2*j,T-2*j)}e.setLineDash&&e.setLineDash([]),e.lineWidth=N,e.strokeStyle=D}}var R=2*t.pstyle("text-outline-width").pfValue;if(R>0&&(e.lineWidth=R),"wrap"===t.pstyle("text-wrap").value){var I=Ie(o,"labelWrapCachedLines",n),B=Ie(o,"labelLineHeight",n),z=p/2,F=this.getLabelJustification(t);switch("auto"===F||("left"===b?"left"===F?u+=-p:"center"===F&&(u+=-z):"center"===b?"left"===F?u+=-z:"right"===F&&(u+=z):"right"===b&&("center"===F?u+=z:"right"===F&&(u+=p))),x){case"top":case"center":case"bottom":c-=(I.length-1)*B}for(var q=0;q0&&e.strokeText(I[q],u,c),e.fillText(I[q],u,c),c+=B}else R>0&&e.strokeText(d,u,c),e.fillText(d,u,c);0!==f&&(e.rotate(-f),e.translate(-s,-l))}}};var Gs={drawNode:function(e,t,n){var r,i,o=!(arguments.length>3&&void 0!==arguments[3])||arguments[3],a=!(arguments.length>4&&void 0!==arguments[4])||arguments[4],s=!(arguments.length>5&&void 0!==arguments[5])||arguments[5],l=this,u=t._private,c=u.rscratch,d=t.position();if(N(d.x)&&N(d.y)&&(!s||t.visible())){var f,h,p=s?t.effectiveOpacity():1,g=l.usePaths(),v=!1,m=t.padding();r=t.width()+2*m,i=t.height()+2*m,n&&(h=n,e.translate(-h.x1,-h.y1));for(var y=t.pstyle("background-image").value,b=new Array(y.length),x=new Array(y.length),w=0,_=0;_0&&void 0!==arguments[0]?arguments[0]:P;l.eleFillStyle(e,t,n)},L=function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:M;l.colorStrokeStyle(e,T[0],T[1],T[2],t)},A=t.pstyle("shape").strValue,j=t.pstyle("shape-polygon-points").pfValue;if(g){e.translate(d.x,d.y);var R=l.nodePathCache=l.nodePathCache||[],I=me("polygon"===A?A+","+j.join(","):A,""+i,""+r),B=R[I];null!=B?(f=B,v=!0,c.pathCache=f):(f=new Path2D,R[I]=c.pathCache=f)}var z=function(){if(!v){var n=d;g&&(n={x:0,y:0}),l.nodeShapes[l.getNodeShape(t)].draw(f||e,n.x,n.y,r,i)}g?e.fill(f):e.fill()},F=function(){for(var 
n=arguments.length>0&&void 0!==arguments[0]?arguments[0]:p,r=!(arguments.length>1&&void 0!==arguments[1])||arguments[1],i=u.backgrounding,o=0,a=0;a0&&void 0!==arguments[0]&&arguments[0],o=arguments.length>1&&void 0!==arguments[1]?arguments[1]:p;l.hasPie(t)&&(l.drawPie(e,t,o),n&&(g||l.nodeShapes[l.getNodeShape(t)].draw(e,d.x,d.y,r,i)))},V=function(){var t=(S>0?S:-S)*(arguments.length>0&&void 0!==arguments[0]?arguments[0]:p),n=S>0?0:255;0!==S&&(l.colorFillStyle(e,n,n,n,t),g?e.fill(f):e.fill())},U=function(){if(C>0){if(e.lineWidth=C,e.lineCap="butt",e.setLineDash)switch(O){case"dotted":e.setLineDash([1,1]);break;case"dashed":e.setLineDash([4,2]);break;case"solid":case"double":e.setLineDash([])}if(g?e.stroke(f):e.stroke(),"double"===O){e.lineWidth=C/3;var t=e.globalCompositeOperation;e.globalCompositeOperation="destination-out",g?e.stroke(f):e.stroke(),e.globalCompositeOperation=t}e.setLineDash&&e.setLineDash([])}};if("yes"===t.pstyle("ghost").value){var W=t.pstyle("ghost-offset-x").pfValue,Z=t.pstyle("ghost-offset-y").pfValue,H=t.pstyle("ghost-opacity").value,K=H*p;e.translate(W,Z),D(H*P),z(),F(K,!0),L(H*M),U(),q(0!==S||0!==C),F(K,!1),V(K),e.translate(-W,-Z)}g&&e.translate(-d.x,-d.y),a&&l.drawNodeUnderlay(e,t,d,r,i),g&&e.translate(d.x,d.y),D(),z(),F(p,!0),L(),U(),q(0!==S||0!==C),F(p,!1),V(),g&&e.translate(-d.x,-d.y),l.drawElementText(e,t,null,o),a&&l.drawNodeOverlay(e,t,d,r,i),n&&e.translate(h.x1,h.y1)}}},Qs=function(e){if(!["overlay","underlay"].includes(e))throw new Error("Invalid state");return function(t,n,r,i,o){if(n.visible()){var a=n.pstyle("".concat(e,"-padding")).pfValue,s=n.pstyle("".concat(e,"-opacity")).value,l=n.pstyle("".concat(e,"-color")).value,u=n.pstyle("".concat(e,"-shape")).value;if(s>0){if(r=r||n.position(),null==i||null==o){var c=n.padding();i=n.width()+2*c,o=n.height()+2*c}this.colorFillStyle(t,l[0],l[1],l[2],s),this.nodeShapes[u].draw(t,r.x,r.y,i+2*a,o+2*a),t.fill()}}}};Gs.drawNodeOverlay=Qs("overlay"),Gs.drawNodeUnderlay=Qs("underlay"),Gs.hasPie=function(e){return(e=e[0])._private.hasPie},Gs.drawPie=function(e,t,n,r){t=t[0],r=r||t.position();var i=t.cy().style(),o=t.pstyle("pie-size"),a=r.x,s=r.y,l=t.width(),u=t.height(),c=Math.min(l,u)/2,d=0;this.usePaths()&&(a=0,s=0),"%"===o.units?c*=o.pfValue:void 0!==o.pfValue&&(c=o.pfValue/2);for(var f=1;f<=i.pieBackgroundN;f++){var h=t.pstyle("pie-"+f+"-background-size").value,p=t.pstyle("pie-"+f+"-background-color").value,g=t.pstyle("pie-"+f+"-background-opacity").value*n,v=h/100;v+d>1&&(v=1-d);var m=1.5*Math.PI+2*Math.PI*d,y=m+2*Math.PI*v;0===h||d>=1||d+v>1||(e.beginPath(),e.moveTo(a,s),e.arc(a,s,c,m,y),e.closePath(),this.colorFillStyle(e,p[0],p[1],p[2],g),e.fill(),d+=v)}};var Js={};Js.getPixelRatio=function(){var e=this.data.contexts[0];if(null!=this.forcedPixelRatio)return this.forcedPixelRatio;var t=e.backingStorePixelRatio||e.webkitBackingStorePixelRatio||e.mozBackingStorePixelRatio||e.msBackingStorePixelRatio||e.oBackingStorePixelRatio||e.backingStorePixelRatio||1;return(window.devicePixelRatio||1)/t},Js.paintCache=function(e){for(var t,n=this.paintCaches=this.paintCaches||[],r=!0,i=0;ia.minMbLowQualFrames&&(a.motionBlurPxRatio=a.mbPxRBlurry)),a.clearingMotionBlur&&(a.motionBlurPxRatio=1),a.textureDrawLastFrame&&!d&&(c[a.NODE]=!0,c[a.SELECT_BOX]=!0);var y=l.style(),b=l.zoom(),x=void 0!==i?i:b,w=l.pan(),_={x:w.x,y:w.y},k={zoom:b,pan:{x:w.x,y:w.y}},E=a.prevViewport;void 0===E||k.zoom!==E.zoom||k.pan.x!==E.pan.x||k.pan.y!==E.pan.y||g&&!p||(a.motionBlurPxRatio=1),o&&(_=o),x*=s,_.x*=s,_.y*=s;var 
S=a.getCachedZSortedEles();function C(e,t,n,r,i){var o=e.globalCompositeOperation;e.globalCompositeOperation="destination-out",a.colorFillStyle(e,255,255,255,a.motionBlurTransparency),e.fillRect(t,n,r,i),e.globalCompositeOperation=o}function P(e,r){var s,l,c,d;a.clearingMotionBlur||e!==u.bufferContexts[a.MOTIONBLUR_BUFFER_NODE]&&e!==u.bufferContexts[a.MOTIONBLUR_BUFFER_DRAG]?(s=_,l=x,c=a.canvasWidth,d=a.canvasHeight):(s={x:w.x*h,y:w.y*h},l=b*h,c=a.canvasWidth*h,d=a.canvasHeight*h),e.setTransform(1,0,0,1,0,0),"motionBlur"===r?C(e,0,0,c,d):t||void 0!==r&&!r||e.clearRect(0,0,c,d),n||(e.translate(s.x,s.y),e.scale(l,l)),o&&e.translate(o.x,o.y),i&&e.scale(i,i)}if(d||(a.textureDrawLastFrame=!1),d){if(a.textureDrawLastFrame=!0,!a.textureCache){a.textureCache={},a.textureCache.bb=l.mutableElements().boundingBox(),a.textureCache.texture=a.data.bufferCanvases[a.TEXTURE_BUFFER];var T=a.data.bufferContexts[a.TEXTURE_BUFFER];T.setTransform(1,0,0,1,0,0),T.clearRect(0,0,a.canvasWidth*a.textureMult,a.canvasHeight*a.textureMult),a.render({forcedContext:T,drawOnlyNodeLayer:!0,forcedPxRatio:s*a.textureMult}),(k=a.textureCache.viewport={zoom:l.zoom(),pan:l.pan(),width:a.canvasWidth,height:a.canvasHeight}).mpan={x:(0-k.pan.x)/k.zoom,y:(0-k.pan.y)/k.zoom}}c[a.DRAG]=!1,c[a.NODE]=!1;var O=u.contexts[a.NODE],M=a.textureCache.texture;k=a.textureCache.viewport;O.setTransform(1,0,0,1,0,0),f?C(O,0,0,k.width,k.height):O.clearRect(0,0,k.width,k.height);var D=y.core("outside-texture-bg-color").value,N=y.core("outside-texture-bg-opacity").value;a.colorFillStyle(O,D[0],D[1],D[2],N),O.fillRect(0,0,k.width,k.height);b=l.zoom();P(O,!1),O.clearRect(k.mpan.x,k.mpan.y,k.width/k.zoom/s,k.height/k.zoom/s),O.drawImage(M,k.mpan.x,k.mpan.y,k.width/k.zoom/s,k.height/k.zoom/s)}else a.textureOnViewport&&!t&&(a.textureCache=null);var L=l.extent(),A=a.pinching||a.hoverData.dragging||a.swipePanning||a.data.wheelZooming||a.hoverData.draggingEles||a.cy.animated(),j=a.hideEdgesOnViewport&&A,R=[];if(R[a.NODE]=!c[a.NODE]&&f&&!a.clearedForMotionBlur[a.NODE]||a.clearingMotionBlur,R[a.NODE]&&(a.clearedForMotionBlur[a.NODE]=!0),R[a.DRAG]=!c[a.DRAG]&&f&&!a.clearedForMotionBlur[a.DRAG]||a.clearingMotionBlur,R[a.DRAG]&&(a.clearedForMotionBlur[a.DRAG]=!0),c[a.NODE]||n||r||R[a.NODE]){var I=f&&!R[a.NODE]&&1!==h;P(O=t||(I?a.data.bufferContexts[a.MOTIONBLUR_BUFFER_NODE]:u.contexts[a.NODE]),f&&!I?"motionBlur":void 0),j?a.drawCachedNodes(O,S.nondrag,s,L):a.drawLayeredElements(O,S.nondrag,s,L),a.debug&&a.drawDebugPoints(O,S.nondrag),n||f||(c[a.NODE]=!1)}if(!r&&(c[a.DRAG]||n||R[a.DRAG])){I=f&&!R[a.DRAG]&&1!==h;P(O=t||(I?a.data.bufferContexts[a.MOTIONBLUR_BUFFER_DRAG]:u.contexts[a.DRAG]),f&&!I?"motionBlur":void 0),j?a.drawCachedNodes(O,S.drag,s,L):a.drawCachedElements(O,S.drag,s,L),a.debug&&a.drawDebugPoints(O,S.drag),n||f||(c[a.DRAG]=!1)}if(a.showFps||!r&&c[a.SELECT_BOX]&&!n){if(P(O=t||u.contexts[a.SELECT_BOX]),1==a.selection[4]&&(a.hoverData.selecting||a.touchData.selecting)){b=a.cy.zoom();var 
B=y.core("selection-box-border-width").value/b;O.lineWidth=B,O.fillStyle="rgba("+y.core("selection-box-color").value[0]+","+y.core("selection-box-color").value[1]+","+y.core("selection-box-color").value[2]+","+y.core("selection-box-opacity").value+")",O.fillRect(a.selection[0],a.selection[1],a.selection[2]-a.selection[0],a.selection[3]-a.selection[1]),B>0&&(O.strokeStyle="rgba("+y.core("selection-box-border-color").value[0]+","+y.core("selection-box-border-color").value[1]+","+y.core("selection-box-border-color").value[2]+","+y.core("selection-box-opacity").value+")",O.strokeRect(a.selection[0],a.selection[1],a.selection[2]-a.selection[0],a.selection[3]-a.selection[1]))}if(u.bgActivePosistion&&!a.hoverData.selecting){b=a.cy.zoom();var z=u.bgActivePosistion;O.fillStyle="rgba("+y.core("active-bg-color").value[0]+","+y.core("active-bg-color").value[1]+","+y.core("active-bg-color").value[2]+","+y.core("active-bg-opacity").value+")",O.beginPath(),O.arc(z.x,z.y,y.core("active-bg-size").pfValue/b,0,2*Math.PI),O.fill()}var F=a.lastRedrawTime;if(a.showFps&&F){F=Math.round(F);var q=Math.round(1e3/F);O.setTransform(1,0,0,1,0,0),O.fillStyle="rgba(255, 0, 0, 0.75)",O.strokeStyle="rgba(255, 0, 0, 0.75)",O.lineWidth=1,O.fillText("1 frame = "+F+" ms = "+q+" fps",0,20);O.strokeRect(0,30,250,20),O.fillRect(0,30,250*Math.min(q/60,1),20)}n||(c[a.SELECT_BOX]=!1)}if(f&&1!==h){var V=u.contexts[a.NODE],U=a.data.bufferCanvases[a.MOTIONBLUR_BUFFER_NODE],W=u.contexts[a.DRAG],Z=a.data.bufferCanvases[a.MOTIONBLUR_BUFFER_DRAG],H=function(e,t,n){e.setTransform(1,0,0,1,0,0),n||!m?e.clearRect(0,0,a.canvasWidth,a.canvasHeight):C(e,0,0,a.canvasWidth,a.canvasHeight);var r=h;e.drawImage(t,0,0,a.canvasWidth*r,a.canvasHeight*r,0,0,a.canvasWidth,a.canvasHeight)};(c[a.NODE]||R[a.NODE])&&(H(V,U,R[a.NODE]),c[a.NODE]=!1),(c[a.DRAG]||R[a.DRAG])&&(H(W,Z,R[a.DRAG]),c[a.DRAG]=!1)}a.prevViewport=k,a.clearingMotionBlur&&(a.clearingMotionBlur=!1,a.motionBlurCleared=!0,a.motionBlur=!0),f&&(a.motionBlurTimeout=setTimeout((function(){a.motionBlurTimeout=null,a.clearedForMotionBlur[a.NODE]=!1,a.clearedForMotionBlur[a.DRAG]=!1,a.motionBlur=!1,a.clearingMotionBlur=!d,a.mbFrames=0,c[a.NODE]=!0,c[a.DRAG]=!0,a.redraw()}),100)),t||l.emit("render")};for(var el={drawPolygonPath:function(e,t,n,r,i,o){var a=r/2,s=i/2;e.beginPath&&e.beginPath(),e.moveTo(t+a*o[0],n+s*o[1]);for(var l=1;l0&&o>0){f.clearRect(0,0,i,o),f.globalCompositeOperation="source-over";var h=this.getCachedZSortedEles();if(e.full)f.translate(-n.x1*l,-n.y1*l),f.scale(l,l),this.drawElements(f,h),f.scale(1/l,1/l),f.translate(n.x1*l,n.y1*l);else{var p=t.pan(),g={x:p.x*l,y:p.y*l};l*=t.zoom(),f.translate(g.x,g.y),f.scale(l,l),this.drawElements(f,h),f.scale(1/l,1/l),f.translate(-g.x,-g.y)}e.bg&&(f.globalCompositeOperation="destination-over",f.fillStyle=e.bg,f.rect(0,0,i,o),f.fill())}return d},sl.png=function(e){return ul(e,this.bufferCanvasImage(e),"image/png")},sl.jpg=function(e){return ul(e,this.bufferCanvasImage(e),"image/jpeg")};var cl={nodeShapeImpl:function(e,t,n,r,i,o,a){switch(e){case"ellipse":return this.drawEllipsePath(t,n,r,i,o);case"polygon":return this.drawPolygonPath(t,n,r,i,o,a);case"round-polygon":return this.drawRoundPolygonPath(t,n,r,i,o,a);case"roundrectangle":case"round-rectangle":return this.drawRoundRectanglePath(t,n,r,i,o);case"cutrectangle":case"cut-rectangle":return this.drawCutRectanglePath(t,n,r,i,o);case"bottomroundrectangle":case"bottom-round-rectangle":return this.drawBottomRoundRectanglePath(t,n,r,i,o);case"barrel":return 
this.drawBarrelPath(t,n,r,i,o)}}},dl=hl,fl=hl.prototype;function hl(e){var t=this;t.data={canvases:new Array(fl.CANVAS_LAYERS),contexts:new Array(fl.CANVAS_LAYERS),canvasNeedsRedraw:new Array(fl.CANVAS_LAYERS),bufferCanvases:new Array(fl.BUFFER_COUNT),bufferContexts:new Array(fl.CANVAS_LAYERS)};var n="-webkit-tap-highlight-color",r="rgba(0,0,0,0)";t.data.canvasContainer=document.createElement("div");var i=t.data.canvasContainer.style;t.data.canvasContainer.style[n]=r,i.position="relative",i.zIndex="0",i.overflow="hidden";var o=e.cy.container();o.appendChild(t.data.canvasContainer),o.style[n]=r;var a={"-webkit-user-select":"none","-moz-user-select":"-moz-none","user-select":"none","-webkit-tap-highlight-color":"rgba(0,0,0,0)","outline-style":"none"};q()&&(a["-ms-touch-action"]="none",a["touch-action"]="none");for(var s=0;s0;--l)if(r=t[l].dequeue()){i=i.concat(s(e,t,n,r,!0));break}}return i}(n.graph,n.buckets,n.zeroIdx);return r.flatten(r.map(u,(function(t){return e.outEdges(t.v,t.w)})),!0)};var a=r.constant(1);function s(e,t,n,i,o){var a=o?[]:void 0;return r.forEach(e.inEdges(i.v),(function(r){var i=e.edge(r),s=e.node(r.v);o&&a.push({v:r.v,w:r.w}),s.out-=i,l(t,n,s)})),r.forEach(e.outEdges(i.v),(function(r){var i=e.edge(r),o=r.w,a=e.node(o);a.in-=i,l(t,n,a)})),e.removeNode(i.v),a}function l(e,t,n){n.out?n.in?e[n.out-n.in+t].enqueue(n):e[e.length-1].enqueue(n):e[0].enqueue(n)}},6456:function(e,t,n){"use strict";var r=n(8899),i=n(2212),o=n(1898),a=n(6744),s=n(8392).normalizeRanks,l=n(7652),u=n(8392).removeEmptyRanks,c=n(1652),d=n(4093),f=n(5384),h=n(7348),p=n(3090),g=n(8392),v=n(2990).Graph;e.exports=function(e,t){var n=t&&t.debugTiming?g.time:g.notime;n("layout",(function(){var t=n(" buildLayoutGraph",(function(){return function(e){var t=new v({multigraph:!0,compound:!0}),n=C(e.graph());return t.setGraph(r.merge({},y,S(n,m),r.pick(n,b))),r.forEach(e.nodes(),(function(n){var i=C(e.node(n));t.setNode(n,r.defaults(S(i,x),w)),t.setParent(n,e.parent(n))})),r.forEach(e.edges(),(function(n){var i=C(e.edge(n));t.setEdge(n,r.merge({},k,S(i,_),r.pick(i,E)))})),t}(e)}));n(" runLayout",(function(){!function(e,t){t(" makeSpaceForEdgeLabels",(function(){!function(e){var t=e.graph();t.ranksep/=2,r.forEach(e.edges(),(function(n){var r=e.edge(n);r.minlen*=2,"c"!==r.labelpos.toLowerCase()&&("TB"===t.rankdir||"BT"===t.rankdir?r.width+=r.labeloffset:r.height+=r.labeloffset)}))}(e)})),t(" removeSelfEdges",(function(){!function(e){r.forEach(e.edges(),(function(t){if(t.v===t.w){var n=e.node(t.v);n.selfEdges||(n.selfEdges=[]),n.selfEdges.push({e:t,label:e.edge(t)}),e.removeEdge(t)}}))}(e)})),t(" acyclic",(function(){i.run(e)})),t(" nestingGraph.run",(function(){c.run(e)})),t(" rank",(function(){a(g.asNonCompoundGraph(e))})),t(" injectEdgeLabelProxies",(function(){!function(e){r.forEach(e.edges(),(function(t){var n=e.edge(t);if(n.width&&n.height){var r=e.node(t.v),i={rank:(e.node(t.w).rank-r.rank)/2+r.rank,e:t};g.addDummyNode(e,"edge-proxy",i,"_ep")}}))}(e)})),t(" removeEmptyRanks",(function(){u(e)})),t(" nestingGraph.cleanup",(function(){c.cleanup(e)})),t(" normalizeRanks",(function(){s(e)})),t(" assignRankMinMax",(function(){!function(e){var t=0;r.forEach(e.nodes(),(function(n){var i=e.node(n);i.borderTop&&(i.minRank=e.node(i.borderTop).rank,i.maxRank=e.node(i.borderBottom).rank,t=r.max(t,i.maxRank))})),e.graph().maxRank=t}(e)})),t(" removeEdgeLabelProxies",(function(){!function(e){r.forEach(e.nodes(),(function(t){var n=e.node(t);"edge-proxy"===n.dummy&&(e.edge(n.e).labelRank=n.rank,e.removeNode(t))}))}(e)})),t(" 
normalize.run",(function(){o.run(e)})),t(" parentDummyChains",(function(){l(e)})),t(" addBorderSegments",(function(){d(e)})),t(" order",(function(){h(e)})),t(" insertSelfEdges",(function(){!function(e){var t=g.buildLayerMatrix(e);r.forEach(t,(function(t){var n=0;r.forEach(t,(function(t,i){var o=e.node(t);o.order=i+n,r.forEach(o.selfEdges,(function(t){g.addDummyNode(e,"selfedge",{width:t.label.width,height:t.label.height,rank:o.rank,order:i+ ++n,e:t.e,label:t.label},"_se")})),delete o.selfEdges}))}))}(e)})),t(" adjustCoordinateSystem",(function(){f.adjust(e)})),t(" position",(function(){p(e)})),t(" positionSelfEdges",(function(){!function(e){r.forEach(e.nodes(),(function(t){var n=e.node(t);if("selfedge"===n.dummy){var r=e.node(n.e.v),i=r.x+r.width/2,o=r.y,a=n.x-i,s=r.height/2;e.setEdge(n.e,n.label),e.removeNode(t),n.label.points=[{x:i+2*a/3,y:o-s},{x:i+5*a/6,y:o-s},{x:i+a,y:o},{x:i+5*a/6,y:o+s},{x:i+2*a/3,y:o+s}],n.label.x=n.x,n.label.y=n.y}}))}(e)})),t(" removeBorderNodes",(function(){!function(e){r.forEach(e.nodes(),(function(t){if(e.children(t).length){var n=e.node(t),i=e.node(n.borderTop),o=e.node(n.borderBottom),a=e.node(r.last(n.borderLeft)),s=e.node(r.last(n.borderRight));n.width=Math.abs(s.x-a.x),n.height=Math.abs(o.y-i.y),n.x=a.x+n.width/2,n.y=i.y+n.height/2}})),r.forEach(e.nodes(),(function(t){"border"===e.node(t).dummy&&e.removeNode(t)}))}(e)})),t(" normalize.undo",(function(){o.undo(e)})),t(" fixupEdgeLabelCoords",(function(){!function(e){r.forEach(e.edges(),(function(t){var n=e.edge(t);if(r.has(n,"x"))switch("l"!==n.labelpos&&"r"!==n.labelpos||(n.width-=n.labeloffset),n.labelpos){case"l":n.x-=n.width/2+n.labeloffset;break;case"r":n.x+=n.width/2+n.labeloffset}}))}(e)})),t(" undoCoordinateSystem",(function(){f.undo(e)})),t(" translateGraph",(function(){!function(e){var t=Number.POSITIVE_INFINITY,n=0,i=Number.POSITIVE_INFINITY,o=0,a=e.graph(),s=a.marginx||0,l=a.marginy||0;function u(e){var r=e.x,a=e.y,s=e.width,l=e.height;t=Math.min(t,r-s/2),n=Math.max(n,r+s/2),i=Math.min(i,a-l/2),o=Math.max(o,a+l/2)}r.forEach(e.nodes(),(function(t){u(e.node(t))})),r.forEach(e.edges(),(function(t){var n=e.edge(t);r.has(n,"x")&&u(n)})),t-=s,i-=l,r.forEach(e.nodes(),(function(n){var r=e.node(n);r.x-=t,r.y-=i})),r.forEach(e.edges(),(function(n){var o=e.edge(n);r.forEach(o.points,(function(e){e.x-=t,e.y-=i})),r.has(o,"x")&&(o.x-=t),r.has(o,"y")&&(o.y-=i)})),a.width=n-t+s,a.height=o-i+l}(e)})),t(" assignNodeIntersects",(function(){!function(e){r.forEach(e.edges(),(function(t){var n,r,i=e.edge(t),o=e.node(t.v),a=e.node(t.w);i.points?(n=i.points[0],r=i.points[i.points.length-1]):(i.points=[],n=a,r=o),i.points.unshift(g.intersectRect(o,n)),i.points.push(g.intersectRect(a,r))}))}(e)})),t(" reversePoints",(function(){!function(e){r.forEach(e.edges(),(function(t){var n=e.edge(t);n.reversed&&n.points.reverse()}))}(e)})),t(" acyclic.undo",(function(){i.undo(e)}))}(t,n)})),n(" updateInputGraph",(function(){!function(e,t){r.forEach(e.nodes(),(function(n){var r=e.node(n),i=t.node(n);r&&(r.x=i.x,r.y=i.y,t.children(n).length&&(r.width=i.width,r.height=i.height))})),r.forEach(e.edges(),(function(n){var i=e.edge(n),o=t.edge(n);i.points=o.points,r.has(o,"x")&&(i.x=o.x,i.y=o.y)})),e.graph().width=t.graph().width,e.graph().height=t.graph().height}(e,t)}))}))};var 
m=["nodesep","edgesep","ranksep","marginx","marginy"],y={ranksep:50,edgesep:20,nodesep:50,rankdir:"tb"},b=["acyclicer","ranker","rankdir","align"],x=["width","height"],w={width:0,height:0},_=["minlen","weight","width","height","labeloffset"],k={minlen:1,weight:1,width:0,height:0,labeloffset:10,labelpos:"r"},E=["labelpos"];function S(e,t){return r.mapValues(r.pick(e,t),Number)}function C(e){var t={};return r.forEach(e,(function(e,n){t[n.toLowerCase()]=e})),t}},8899:function(e,t,n){var r;try{r={cloneDeep:n(8121),constant:n(1547),defaults:n(6933),each:n(9430),filter:n(86),find:n(1211),flatten:n(5506),forEach:n(6514),forIn:n(9144),has:n(7805),isUndefined:n(2530),last:n(5727),map:n(2034),mapValues:n(7702),max:n(9627),merge:n(9286),min:n(6452),minBy:n(3638),now:n(72),pick:n(6460),range:n(6222),reduce:n(5080),sortBy:n(4286),uniqueId:n(804),values:n(2063),zipObject:n(4827)}}catch(i){}r||(r=window._),e.exports=r},1652:function(e,t,n){var r=n(8899),i=n(8392);function o(e,t,n,a,s,l,u){var c=e.children(u);if(c.length){var d=i.addBorderNode(e,"_bt"),f=i.addBorderNode(e,"_bb"),h=e.node(u);e.setParent(d,u),h.borderTop=d,e.setParent(f,u),h.borderBottom=f,r.forEach(c,(function(r){o(e,t,n,a,s,l,r);var i=e.node(r),c=i.borderTop?i.borderTop:r,h=i.borderBottom?i.borderBottom:r,p=i.borderTop?a:2*a,g=c!==h?1:s-l[u]+1;e.setEdge(d,c,{weight:p,minlen:g,nestingEdge:!0}),e.setEdge(h,f,{weight:p,minlen:g,nestingEdge:!0})})),e.parent(u)||e.setEdge(t,d,{weight:0,minlen:s+l[u]})}else u!==t&&e.setEdge(t,u,{weight:0,minlen:n})}e.exports={run:function(e){var t=i.addDummyNode(e,"root",{},"_root"),n=function(e){var t={};function n(i,o){var a=e.children(i);a&&a.length&&r.forEach(a,(function(e){n(e,o+1)})),t[i]=o}return r.forEach(e.children(),(function(e){n(e,1)})),t}(e),a=r.max(r.values(n))-1,s=2*a+1;e.graph().nestingRoot=t,r.forEach(e.edges(),(function(t){e.edge(t).minlen*=s}));var l=function(e){return r.reduce(e.edges(),(function(t,n){return t+e.edge(n).weight}),0)}(e)+1;r.forEach(e.children(),(function(r){o(e,t,s,l,a,n,r)})),e.graph().nodeRankFactor=s},cleanup:function(e){var t=e.graph();e.removeNode(t.nestingRoot),delete t.nestingRoot,r.forEach(e.edges(),(function(t){e.edge(t).nestingEdge&&e.removeEdge(t)}))}}},1898:function(e,t,n){"use strict";var r=n(8899),i=n(8392);e.exports={run:function(e){e.graph().dummyChains=[],r.forEach(e.edges(),(function(t){!function(e,t){var n,r,o,a=t.v,s=e.node(a).rank,l=t.w,u=e.node(l).rank,c=t.name,d=e.edge(t),f=d.labelRank;if(u===s+1)return;for(e.removeEdge(t),o=0,++s;s0;)t%2&&(n+=l[t+1]),l[t=t-1>>1]+=e.weight;u+=e.weight*n}))),u}e.exports=function(e,t){for(var n=0,r=1;r=2),s=c.buildLayerMatrix(e);var v=o(e,s);v=e.barycenter)&&function(e,t){var n=0,r=0;e.weight&&(n+=e.barycenter*e.weight,r+=e.weight);t.weight&&(n+=t.barycenter*t.weight,r+=t.weight);e.vs=t.vs.concat(e.vs),e.barycenter=n/r,e.weight=r,e.i=Math.min(t.i,e.i),t.merged=!0}(e,t)}}function i(t){return function(n){n.in.push(t),0===--n.indegree&&e.push(n)}}for(;e.length;){var o=e.pop();t.push(o),r.forEach(o.in.reverse(),n(o)),r.forEach(o.out,i(o))}return r.map(r.filter(t,(function(e){return!e.merged})),(function(e){return r.pick(e,["vs","i","barycenter","weight"])}))}(r.filter(n,(function(e){return!e.indegree})))}},3616:function(e,t,n){var r=n(8899),i=n(5213),o=n(1982),a=n(4929);e.exports=function e(t,n,s,l){var u=t.children(n),c=t.node(n),d=c?c.borderLeft:void 0,f=c?c.borderRight:void 0,h={};d&&(u=r.filter(u,(function(e){return e!==d&&e!==f})));var p=i(t,u);r.forEach(p,(function(n){if(t.children(n.v).length){var 
i=e(t,n.v,s,l);h[n.v]=i,r.has(i,"barycenter")&&(o=n,a=i,r.isUndefined(o.barycenter)?(o.barycenter=a.barycenter,o.weight=a.weight):(o.barycenter=(o.barycenter*o.weight+a.barycenter*a.weight)/(o.weight+a.weight),o.weight+=a.weight))}var o,a}));var g=o(p,s);!function(e,t){r.forEach(e,(function(e){e.vs=r.flatten(e.vs.map((function(e){return t[e]?t[e].vs:e})),!0)}))}(g,h);var v=a(g,l);if(d&&(v.vs=r.flatten([d,v.vs,f],!0),t.predecessors(d).length)){var m=t.node(t.predecessors(d)[0]),y=t.node(t.predecessors(f)[0]);r.has(v,"barycenter")||(v.barycenter=0,v.weight=0),v.barycenter=(v.barycenter*v.weight+m.order+y.order)/(v.weight+2),v.weight+=2}return v}},4929:function(e,t,n){var r=n(8899),i=n(8392);function o(e,t,n){for(var i;t.length&&(i=r.last(t)).i<=n;)t.pop(),e.push(i.vs),n++;return n}e.exports=function(e,t){var n=i.partition(e,(function(e){return r.has(e,"barycenter")})),a=n.lhs,s=r.sortBy(n.rhs,(function(e){return-e.i})),l=[],u=0,c=0,d=0;a.sort((f=!!t,function(e,t){return e.barycentert.barycenter?1:f?t.i-e.i:e.i-t.i})),d=o(l,s,d),r.forEach(a,(function(e){d+=e.vs.length,l.push(e.vs),u+=e.barycenter*e.weight,c+=e.weight,d=o(l,s,d)}));var f;var h={vs:r.flatten(l,!0)};c&&(h.barycenter=u/c,h.weight=c);return h}},7652:function(e,t,n){var r=n(8899);e.exports=function(e){var t=function(e){var t={},n=0;function i(o){var a=n;r.forEach(e.children(o),i),t[o]={low:a,lim:n++}}return r.forEach(e.children(),i),t}(e);r.forEach(e.graph().dummyChains,(function(n){for(var r=e.node(n),i=r.edgeObj,o=function(e,t,n,r){var i,o,a=[],s=[],l=Math.min(t[n].low,t[r].low),u=Math.max(t[n].lim,t[r].lim);i=n;do{i=e.parent(i),a.push(i)}while(i&&(t[i].low>l||u>t[i].lim));o=i,i=r;for(;(i=e.parent(i))!==o;)s.push(i);return{path:a.concat(s.reverse()),lca:o}}(e,t,i.v,i.w),a=o.path,s=o.lca,l=0,u=a[l],c=!0;n!==i.w;){if(r=e.node(n),c){for(;(u=a[l])!==s&&e.node(u).maxRanks)&&l(n,t,u)}))}))}return r.reduce(t,(function(t,n){var o,a=-1,s=0;return r.forEach(n,(function(r,l){if("border"===e.node(r).dummy){var u=e.predecessors(r);u.length&&(o=e.node(u[0]).order,i(n,s,l,a,o),s=l,a=o)}i(n,s,n.length,o,t.length)})),n})),n}function l(e,t,n){if(t>n){var r=t;t=n,n=r}var i=e[t];i||(e[t]=i={}),i[n]=!0}function u(e,t,n){if(t>n){var i=t;t=n,n=i}return r.has(e[t],n)}function c(e,t,n,i){var o={},a={},s={};return r.forEach(t,(function(e){r.forEach(e,(function(e,t){o[e]=e,a[e]=e,s[e]=t}))})),r.forEach(t,(function(e){var t=-1;r.forEach(e,(function(e){var l=i(e);if(l.length){l=r.sortBy(l,(function(e){return s[e]}));for(var c=(l.length-1)/2,d=Math.floor(c),f=Math.ceil(c);d<=f;++d){var h=l[d];a[e]===e&&tl.lim&&(u=l,c=!0);var d=r.filter(t.edges(),(function(t){return c===y(e,e.node(t.v),u)&&c!==y(e,e.node(t.w),u)}));return r.minBy(d,(function(e){return o(t,e)}))}function m(e,t,n,i){var o=n.v,a=n.w;e.removeEdge(o,a),e.setEdge(i.v,i.w,{}),h(e),d(e,t),function(e,t){var n=r.find(e.nodes(),(function(e){return!t.node(e).parent})),i=s(e,n);i=i.slice(1),r.forEach(i,(function(n){var r=e.node(n).parent,i=t.edge(n,r),o=!1;i||(i=t.edge(r,n),o=!0),t.node(n).rank=t.node(r).rank+(o?i.minlen:-i.minlen)}))}(e,t)}function y(e,t,n){return n.low<=t.lim&&t.lim<=n.lim}e.exports=c,c.initLowLimValues=h,c.initCutValues=d,c.calcCutValue=f,c.leaveEdge=g,c.enterEdge=v,c.exchangeEdges=m},4441:function(e,t,n){"use strict";var r=n(8899);e.exports={longestPath:function(e){var t={};r.forEach(e.sources(),(function n(i){var o=e.node(i);if(r.has(t,i))return o.rank;t[i]=!0;var a=r.min(r.map(e.outEdges(i),(function(t){return n(t.w)-e.edge(t).minlen})));return a!==Number.POSITIVE_INFINITY&&void 
0!==a&&null!==a||(a=0),o.rank=a}))},slack:function(e,t){return e.node(t.w).rank-e.node(t.v).rank-e.edge(t).minlen}}},8392:function(e,t,n){"use strict";var r=n(8899),i=n(2990).Graph;function o(e,t,n,i){var o;do{o=r.uniqueId(i)}while(e.hasNode(o));return n.dummy=t,e.setNode(o,n),o}function a(e){return r.max(r.map(e.nodes(),(function(t){var n=e.node(t).rank;if(!r.isUndefined(n))return n})))}e.exports={addDummyNode:o,simplify:function(e){var t=(new i).setGraph(e.graph());return r.forEach(e.nodes(),(function(n){t.setNode(n,e.node(n))})),r.forEach(e.edges(),(function(n){var r=t.edge(n.v,n.w)||{weight:0,minlen:1},i=e.edge(n);t.setEdge(n.v,n.w,{weight:r.weight+i.weight,minlen:Math.max(r.minlen,i.minlen)})})),t},asNonCompoundGraph:function(e){var t=new i({multigraph:e.isMultigraph()}).setGraph(e.graph());return r.forEach(e.nodes(),(function(n){e.children(n).length||t.setNode(n,e.node(n))})),r.forEach(e.edges(),(function(n){t.setEdge(n,e.edge(n))})),t},successorWeights:function(e){var t=r.map(e.nodes(),(function(t){var n={};return r.forEach(e.outEdges(t),(function(t){n[t.w]=(n[t.w]||0)+e.edge(t).weight})),n}));return r.zipObject(e.nodes(),t)},predecessorWeights:function(e){var t=r.map(e.nodes(),(function(t){var n={};return r.forEach(e.inEdges(t),(function(t){n[t.v]=(n[t.v]||0)+e.edge(t).weight})),n}));return r.zipObject(e.nodes(),t)},intersectRect:function(e,t){var n,r,i=e.x,o=e.y,a=t.x-i,s=t.y-o,l=e.width/2,u=e.height/2;if(!a&&!s)throw new Error("Not possible to find intersection inside of the rectangle");Math.abs(s)*l>Math.abs(a)*u?(s<0&&(u=-u),n=u*a/s,r=u):(a<0&&(l=-l),n=l,r=l*s/a);return{x:i+n,y:o+r}},buildLayerMatrix:function(e){var t=r.map(r.range(a(e)+1),(function(){return[]}));return r.forEach(e.nodes(),(function(n){var i=e.node(n),o=i.rank;r.isUndefined(o)||(t[o][i.order]=n)})),t},normalizeRanks:function(e){var t=r.min(r.map(e.nodes(),(function(t){return e.node(t).rank})));r.forEach(e.nodes(),(function(n){var i=e.node(n);r.has(i,"rank")&&(i.rank-=t)}))},removeEmptyRanks:function(e){var t=r.min(r.map(e.nodes(),(function(t){return e.node(t).rank}))),n=[];r.forEach(e.nodes(),(function(r){var i=e.node(r).rank-t;n[i]||(n[i]=[]),n[i].push(r)}));var i=0,o=e.graph().nodeRankFactor;r.forEach(n,(function(t,n){r.isUndefined(t)&&n%o!==0?--i:i&&r.forEach(t,(function(t){e.node(t).rank+=i}))}))},addBorderNode:function(e,t,n,r){var i={width:0,height:0};arguments.length>=4&&(i.rank=n,i.order=r);return o(e,"border",i,t)},maxRank:a,partition:function(e,t){var n={lhs:[],rhs:[]};return r.forEach(e,(function(e){t(e)?n.lhs.push(e):n.rhs.push(e)})),n},time:function(e,t){var n=r.now();try{return t()}finally{console.log(e+" time: "+(r.now()-n)+"ms")}},notime:function(e,t){return t()}}},6206:function(e){e.exports="0.8.5"},6118:function(e,t,n){var r=n(5828);e.exports={Graph:r.Graph,json:n(5710),alg:n(5280),version:r.version}},6666:function(e,t,n){var r=n(980);e.exports=function(e){var t,n={},i=[];function o(i){r.has(n,i)||(n[i]=!0,t.push(i),r.each(e.successors(i),o),r.each(e.predecessors(i),o))}return r.each(e.nodes(),(function(e){t=[],o(e),t.length&&i.push(t)})),i}},672:function(e,t,n){var r=n(980);function i(e,t,n,o,a,s){r.has(o,t)||(o[t]=!0,n||s.push(t),r.each(a(t),(function(t){i(e,t,n,o,a,s)})),n&&s.push(t))}e.exports=function(e,t,n){r.isArray(t)||(t=[t]);var o=(e.isDirected()?e.successors:e.neighbors).bind(e),a=[],s={};return r.each(t,(function(t){if(!e.hasNode(t))throw new Error("Graph does not have node: "+t);i(e,t,"post"===n,s,o,a)})),a}},9919:function(e,t,n){var 
r=n(4871),i=n(980);e.exports=function(e,t,n){return i.transform(e.nodes(),(function(i,o){i[o]=r(e,o,t,n)}),{})}},4871:function(e,t,n){var r=n(980),i=n(6071);e.exports=function(e,t,n,r){return function(e,t,n,r){var o,a,s={},l=new i,u=function(e){var t=e.v!==o?e.v:e.w,r=s[t],i=n(e),u=a.distance+i;if(i<0)throw new Error("dijkstra does not allow negative edge weights. Bad edge: "+e+" Weight: "+i);u0&&(o=l.removeMin(),(a=s[o]).distance!==Number.POSITIVE_INFINITY);)r(o).forEach(u);return s}(e,String(t),n||o,r||function(t){return e.outEdges(t)})};var o=r.constant(1)},6953:function(e,t,n){var r=n(980),i=n(8172);e.exports=function(e){return r.filter(i(e),(function(t){return t.length>1||1===t.length&&e.hasEdge(t[0],t[0])}))}},5053:function(e,t,n){var r=n(980);e.exports=function(e,t,n){return function(e,t,n){var r={},i=e.nodes();return i.forEach((function(e){r[e]={},r[e][e]={distance:0},i.forEach((function(t){e!==t&&(r[e][t]={distance:Number.POSITIVE_INFINITY})})),n(e).forEach((function(n){var i=n.v===e?n.w:n.v,o=t(n);r[e][i]={distance:o,predecessor:e}}))})),i.forEach((function(e){var t=r[e];i.forEach((function(n){var o=r[n];i.forEach((function(n){var r=o[e],i=t[n],a=o[n],s=r.distance+i.distance;s0;){if(n=l.removeMin(),r.has(s,n))a.setEdge(n,s[n]);else{if(c)throw new Error("Input graph is not connected: "+e);c=!0}e.nodeEdges(n).forEach(u)}return a}},8172:function(e,t,n){var r=n(980);e.exports=function(e){var t=0,n=[],i={},o=[];function a(s){var l=i[s]={onStack:!0,lowlink:t,index:t++};if(n.push(s),e.successors(s).forEach((function(e){r.has(i,e)?i[e].onStack&&(l.lowlink=Math.min(l.lowlink,i[e].index)):(a(e),l.lowlink=Math.min(l.lowlink,i[e].lowlink))})),l.lowlink===l.index){var u,c=[];do{u=n.pop(),i[u].onStack=!1,c.push(u)}while(s!==u);o.push(c)}}return e.nodes().forEach((function(e){r.has(i,e)||a(e)})),o}},1731:function(e,t,n){var r=n(980);function i(e){var t={},n={},i=[];if(r.each(e.sinks(),(function a(s){if(r.has(n,s))throw new o;r.has(t,s)||(n[s]=!0,t[s]=!0,r.each(e.predecessors(s),a),delete n[s],i.push(s))})),r.size(t)!==e.nodeCount())throw new o;return i}function o(){}e.exports=i,i.CycleException=o,o.prototype=new Error},6071:function(e,t,n){var r=n(980);function i(){this._arr=[],this._keyIndices={}}e.exports=i,i.prototype.size=function(){return this._arr.length},i.prototype.keys=function(){return this._arr.map((function(e){return e.key}))},i.prototype.has=function(e){return r.has(this._keyIndices,e)},i.prototype.priority=function(e){var t=this._keyIndices[e];if(void 0!==t)return this._arr[t].priority},i.prototype.min=function(){if(0===this.size())throw new Error("Queue underflow");return this._arr[0].key},i.prototype.add=function(e,t){var n=this._keyIndices;if(e=String(e),!r.has(n,e)){var i=this._arr,o=i.length;return n[e]=o,i.push({key:e,priority:t}),this._decrease(o),!0}return!1},i.prototype.removeMin=function(){this._swap(0,this._arr.length-1);var e=this._arr.pop();return delete this._keyIndices[e.key],this._heapify(0),e.key},i.prototype.decrease=function(e,t){var n=this._keyIndices[e];if(t>this._arr[n].priority)throw new Error("New priority is greater than current priority. 
Key: "+e+" Old: "+this._arr[n].priority+" New: "+t);this._arr[n].priority=t,this._decrease(n)},i.prototype._heapify=function(e){var t=this._arr,n=2*e,r=n+1,i=e;n>1].priorityl){var u=s;s=l,l=u}return s+a+l+a+(r.isUndefined(o)?i:o)}function d(e,t){return c(e,t.v,t.w,t.name)}s.prototype._nodeCount=0,s.prototype._edgeCount=0,s.prototype.isDirected=function(){return this._isDirected},s.prototype.isMultigraph=function(){return this._isMultigraph},s.prototype.isCompound=function(){return this._isCompound},s.prototype.setGraph=function(e){return this._label=e,this},s.prototype.graph=function(){return this._label},s.prototype.setDefaultNodeLabel=function(e){return r.isFunction(e)||(e=r.constant(e)),this._defaultNodeLabelFn=e,this},s.prototype.nodeCount=function(){return this._nodeCount},s.prototype.nodes=function(){return r.keys(this._nodes)},s.prototype.sources=function(){var e=this;return r.filter(this.nodes(),(function(t){return r.isEmpty(e._in[t])}))},s.prototype.sinks=function(){var e=this;return r.filter(this.nodes(),(function(t){return r.isEmpty(e._out[t])}))},s.prototype.setNodes=function(e,t){var n=arguments,i=this;return r.each(e,(function(e){n.length>1?i.setNode(e,t):i.setNode(e)})),this},s.prototype.setNode=function(e,t){return r.has(this._nodes,e)?(arguments.length>1&&(this._nodes[e]=t),this):(this._nodes[e]=arguments.length>1?t:this._defaultNodeLabelFn(e),this._isCompound&&(this._parent[e]=o,this._children[e]={},this._children[o][e]=!0),this._in[e]={},this._preds[e]={},this._out[e]={},this._sucs[e]={},++this._nodeCount,this)},s.prototype.node=function(e){return this._nodes[e]},s.prototype.hasNode=function(e){return r.has(this._nodes,e)},s.prototype.removeNode=function(e){var t=this;if(r.has(this._nodes,e)){var n=function(e){t.removeEdge(t._edgeObjs[e])};delete this._nodes[e],this._isCompound&&(this._removeFromParentsChildList(e),delete this._parent[e],r.each(this.children(e),(function(e){t.setParent(e)})),delete this._children[e]),r.each(r.keys(this._in[e]),n),delete this._in[e],delete this._preds[e],r.each(r.keys(this._out[e]),n),delete this._out[e],delete this._sucs[e],--this._nodeCount}return this},s.prototype.setParent=function(e,t){if(!this._isCompound)throw new Error("Cannot set parent in a non-compound graph");if(r.isUndefined(t))t=o;else{for(var n=t+="";!r.isUndefined(n);n=this.parent(n))if(n===e)throw new Error("Setting "+t+" as parent of "+e+" would create a cycle");this.setNode(t)}return this.setNode(e),this._removeFromParentsChildList(e),this._parent[e]=t,this._children[t][e]=!0,this},s.prototype._removeFromParentsChildList=function(e){delete this._children[this._parent[e]][e]},s.prototype.parent=function(e){if(this._isCompound){var t=this._parent[e];if(t!==o)return t}},s.prototype.children=function(e){if(r.isUndefined(e)&&(e=o),this._isCompound){var t=this._children[e];if(t)return r.keys(t)}else{if(e===o)return this.nodes();if(this.hasNode(e))return[]}},s.prototype.predecessors=function(e){var t=this._preds[e];if(t)return r.keys(t)},s.prototype.successors=function(e){var t=this._sucs[e];if(t)return r.keys(t)},s.prototype.neighbors=function(e){var t=this.predecessors(e);if(t)return r.union(t,this.successors(e))},s.prototype.isLeaf=function(e){return 0===(this.isDirected()?this.successors(e):this.neighbors(e)).length},s.prototype.filterNodes=function(e){var t=new this.constructor({directed:this._isDirected,multigraph:this._isMultigraph,compound:this._isCompound});t.setGraph(this.graph());var 
n=this;r.each(this._nodes,(function(n,r){e(r)&&t.setNode(r,n)})),r.each(this._edgeObjs,(function(e){t.hasNode(e.v)&&t.hasNode(e.w)&&t.setEdge(e,n.edge(e))}));var i={};function o(e){var r=n.parent(e);return void 0===r||t.hasNode(r)?(i[e]=r,r):r in i?i[r]:o(r)}return this._isCompound&&r.each(t.nodes(),(function(e){t.setParent(e,o(e))})),t},s.prototype.setDefaultEdgeLabel=function(e){return r.isFunction(e)||(e=r.constant(e)),this._defaultEdgeLabelFn=e,this},s.prototype.edgeCount=function(){return this._edgeCount},s.prototype.edges=function(){return r.values(this._edgeObjs)},s.prototype.setPath=function(e,t){var n=this,i=arguments;return r.reduce(e,(function(e,r){return i.length>1?n.setEdge(e,r,t):n.setEdge(e,r),r})),this},s.prototype.setEdge=function(){var e,t,n,i,o=!1,a=arguments[0];"object"===typeof a&&null!==a&&"v"in a?(e=a.v,t=a.w,n=a.name,2===arguments.length&&(i=arguments[1],o=!0)):(e=a,t=arguments[1],n=arguments[3],arguments.length>2&&(i=arguments[2],o=!0)),e=""+e,t=""+t,r.isUndefined(n)||(n=""+n);var s=c(this._isDirected,e,t,n);if(r.has(this._edgeLabels,s))return o&&(this._edgeLabels[s]=i),this;if(!r.isUndefined(n)&&!this._isMultigraph)throw new Error("Cannot set a named edge when isMultigraph = false");this.setNode(e),this.setNode(t),this._edgeLabels[s]=o?i:this._defaultEdgeLabelFn(e,t,n);var u=function(e,t,n,r){var i=""+t,o=""+n;if(!e&&i>o){var a=i;i=o,o=a}var s={v:i,w:o};r&&(s.name=r);return s}(this._isDirected,e,t,n);return e=u.v,t=u.w,Object.freeze(u),this._edgeObjs[s]=u,l(this._preds[t],e),l(this._sucs[e],t),this._in[t][s]=u,this._out[e][s]=u,this._edgeCount++,this},s.prototype.edge=function(e,t,n){var r=1===arguments.length?d(this._isDirected,arguments[0]):c(this._isDirected,e,t,n);return this._edgeLabels[r]},s.prototype.hasEdge=function(e,t,n){var i=1===arguments.length?d(this._isDirected,arguments[0]):c(this._isDirected,e,t,n);return r.has(this._edgeLabels,i)},s.prototype.removeEdge=function(e,t,n){var r=1===arguments.length?d(this._isDirected,arguments[0]):c(this._isDirected,e,t,n),i=this._edgeObjs[r];return i&&(e=i.v,t=i.w,delete this._edgeLabels[r],delete this._edgeObjs[r],u(this._preds[t],e),u(this._sucs[e],t),delete this._in[t][r],delete this._out[e][r],this._edgeCount--),this},s.prototype.inEdges=function(e,t){var n=this._in[e];if(n){var i=r.values(n);return t?r.filter(i,(function(e){return e.v===t})):i}},s.prototype.outEdges=function(e,t){var n=this._out[e];if(n){var i=r.values(n);return t?r.filter(i,(function(e){return e.w===t})):i}},s.prototype.nodeEdges=function(e,t){var n=this.inEdges(e,t);if(n)return n.concat(this.outEdges(e,t))}},5828:function(e,t,n){e.exports={Graph:n(1311),version:n(4161)}},5710:function(e,t,n){var r=n(980),i=n(1311);function o(e){return r.map(e.nodes(),(function(t){var n=e.node(t),i=e.parent(t),o={v:t};return r.isUndefined(n)||(o.value=n),r.isUndefined(i)||(o.parent=i),o}))}function a(e){return r.map(e.edges(),(function(t){var n=e.edge(t),i={v:t.v,w:t.w};return r.isUndefined(t.name)||(i.name=t.name),r.isUndefined(n)||(i.value=n),i}))}e.exports={write:function(e){var t={options:{directed:e.isDirected(),multigraph:e.isMultigraph(),compound:e.isCompound()},nodes:o(e),edges:a(e)};r.isUndefined(e.graph())||(t.value=r.clone(e.graph()));return t},read:function(e){var t=new i(e.options).setGraph(e.value);return r.each(e.nodes,(function(e){t.setNode(e.v,e.value),e.parent&&t.setParent(e.v,e.parent)})),r.each(e.edges,(function(e){t.setEdge({v:e.v,w:e.w,name:e.name},e.value)})),t}}},980:function(e,t,n){var 
r;try{r={clone:n(8787),constant:n(1547),each:n(9430),filter:n(86),has:n(7805),isArray:n(3629),isEmpty:n(6364),isFunction:n(4786),isUndefined:n(2530),keys:n(2742),map:n(2034),reduce:n(5080),size:n(9467),transform:n(5653),union:n(6310),values:n(2063)}}catch(i){}r||(r=window._),e.exports=r},4161:function(e){e.exports="2.1.8"},5641:function(e,t,n){e.exports=n(2132)},2132:function(e,t){var n,r,i;(function(){var o,a,s,l,u,c,d,f,h,p,g,v,m,y,b;s=Math.floor,p=Math.min,a=function(e,t){return et?1:0},h=function(e,t,n,r,i){var o;if(null==n&&(n=0),null==i&&(i=a),n<0)throw new Error("lo must be non-negative");for(null==r&&(r=e.length);nn;0<=n?t++:t--)u.push(t);return u}.apply(this).reverse(),l=[],r=0,i=o.length;rg;0<=g?++c:--c)v.push(u(e,n));return v},y=function(e,t,n,r){var i,o,s;for(null==r&&(r=a),i=e[n];n>t&&r(i,o=e[s=n-1>>1])<0;)e[n]=o,n=s;return e[n]=i},b=function(e,t,n){var r,i,o,s,l;for(null==n&&(n=a),i=e.length,l=t,o=e[t],r=2*t+1;r-1}},2683:function(e){e.exports=function(e,t,n){for(var r=-1,i=null==e?0:e.length;++r0&&o(c)?n>1?e(c,n-1,o,a,s):r(s,c):a||(s[s.length]=c)}return s}},5099:function(e,t,n){var r=n(372)();e.exports=r},5358:function(e,t,n){var r=n(5099),i=n(2742);e.exports=function(e,t){return e&&r(e,t,i)}},8667:function(e,t,n){var r=n(3082),i=n(9793);e.exports=function(e,t){for(var n=0,o=(t=r(t,e)).length;null!=e&&nt}},7852:function(e){var t=Object.prototype.hasOwnProperty;e.exports=function(e,n){return null!=e&&t.call(e,n)}},529:function(e){e.exports=function(e,t){return null!=e&&t in Object(e)}},4842:function(e,t,n){var r=n(2045),i=n(505),o=n(7167);e.exports=function(e,t,n){return t===t?o(e,t,n):r(e,i,n)}},4906:function(e,t,n){var r=n(9066),i=n(3141),o="[object Arguments]";e.exports=function(e){return i(e)&&r(e)==o}},1848:function(e,t,n){var r=n(3355),i=n(3141);e.exports=function e(t,n,o,a,s){return t===n||(null==t||null==n||!i(t)&&!i(n)?t!==t&&n!==n:r(t,n,o,a,e,s))}},3355:function(e,t,n){var r=n(2854),i=n(5305),o=n(2206),a=n(8078),s=n(8383),l=n(3629),u=n(5174),c=n(9102),d=1,f="[object Arguments]",h="[object Array]",p="[object Object]",g=Object.prototype.hasOwnProperty;e.exports=function(e,t,n,v,m,y){var b=l(e),x=l(t),w=b?h:s(e),_=x?h:s(t),k=(w=w==f?p:w)==p,E=(_=_==f?p:_)==p,S=w==_;if(S&&u(e)){if(!u(t))return!1;b=!0,k=!1}if(S&&!k)return y||(y=new r),b||c(e)?i(e,t,n,v,m,y):o(e,t,w,n,v,m,y);if(!(n&d)){var C=k&&g.call(e,"__wrapped__"),P=E&&g.call(t,"__wrapped__");if(C||P){var T=C?e.value():e,O=P?t.value():t;return y||(y=new r),m(T,O,n,v,y)}}return!!S&&(y||(y=new r),a(e,t,n,v,m,y))}},3085:function(e,t,n){var r=n(8383),i=n(3141),o="[object Map]";e.exports=function(e){return i(e)&&r(e)==o}},8856:function(e,t,n){var r=n(2854),i=n(1848),o=1,a=2;e.exports=function(e,t,n,s){var l=n.length,u=l,c=!s;if(null==e)return!u;for(e=Object(e);l--;){var d=n[l];if(c&&d[2]?d[1]!==e[d[0]]:!(d[0]in e))return!1}for(;++l=u){var v=t?null:s(e);if(v)return l(v);h=!1,d=a,g=new r}else g=t?[]:p;e:for(;++ct||a&&s&&u&&!l&&!c||i&&s&&u||!n&&u||!o)return 1;if(!i&&!a&&!c&&e=l?u:u*("desc"==n[i]?-1:1)}return e.index-t.index}},291:function(e){e.exports=function(e,t){var n=-1,r=e.length;for(t||(t=Array(r));++n1?n[o-1]:void 0,s=o>2?n[2]:void 0;for(a=e.length>3&&"function"==typeof a?(o--,a):void 0,s&&i(n[0],n[1],s)&&(a=o<3?void 0:a,o=1),t=Object(t);++r-1?s[l?t[u]:u]:void 0}}},6381:function(e,t,n){var r=n(7255),i=n(3195),o=n(1495);e.exports=function(e){return function(t,n,a){return a&&"number"!=typeof a&&i(t,n,a)&&(n=a=void 0),t=o(t),void 0===n?(n=t,t=0):n=o(n),a=void 0===a?tf))return!1;var p=c.get(e),g=c.get(t);if(p&&g)return 
p==t&&g==e;var v=-1,m=!0,y=n&s?new r:void 0;for(c.set(e,t),c.set(t,e);++v-1&&e%1==0&&e-1}},7109:function(e,t,n){var r=n(7112);e.exports=function(e,t){var n=this.__data__,i=r(n,e);return i<0?(++this.size,n.push([e,t])):n[i][1]=t,this}},4086:function(e,t,n){var r=n(9676),i=n(8384),o=n(5797);e.exports=function(){this.size=0,this.__data__={hash:new r,map:new(o||i),string:new r}}},9255:function(e,t,n){var r=n(2799);e.exports=function(e){var t=r(this,e).delete(e);return this.size-=t?1:0,t}},9186:function(e,t,n){var r=n(2799);e.exports=function(e){return r(this,e).get(e)}},3423:function(e,t,n){var r=n(2799);e.exports=function(e){return r(this,e).has(e)}},3739:function(e,t,n){var r=n(2799);e.exports=function(e,t){var n=r(this,e),i=n.size;return n.set(e,t),this.size+=n.size==i?0:1,this}},234:function(e){e.exports=function(e){var t=-1,n=Array(e.size);return e.forEach((function(e,r){n[++t]=[r,e]})),n}},284:function(e){e.exports=function(e,t){return function(n){return null!=n&&(n[e]===t&&(void 0!==t||e in Object(n)))}}},4634:function(e,t,n){var r=n(9151),i=500;e.exports=function(e){var t=r(e,(function(e){return n.size===i&&n.clear(),e})),n=t.cache;return t}},9620:function(e,t,n){var r=n(8136)(Object,"create");e.exports=r},8836:function(e,t,n){var r=n(2709)(Object.keys,Object);e.exports=r},4221:function(e){e.exports=function(e){var t=[];if(null!=e)for(var n in Object(e))t.push(n);return t}},9494:function(e,t,n){e=n.nmd(e);var r=n(1032),i=t&&!t.nodeType&&t,o=i&&e&&!e.nodeType&&e,a=o&&o.exports===i&&r.process,s=function(){try{var e=o&&o.require&&o.require("util").types;return e||a&&a.binding&&a.binding("util")}catch(t){}}();e.exports=s},3581:function(e){var t=Object.prototype.toString;e.exports=function(e){return t.call(e)}},2709:function(e){e.exports=function(e,t){return function(n){return e(t(n))}}},4262:function(e,t,n){var r=n(3665),i=Math.max;e.exports=function(e,t,n){return t=i(void 0===t?e.length-1:t,0),function(){for(var o=arguments,a=-1,s=i(o.length-t,0),l=Array(s);++a0){if(++i>=t)return arguments[0]}else i=0;return e.apply(void 0,arguments)}}},511:function(e,t,n){var r=n(8384);e.exports=function(){this.__data__=new r,this.size=0}},835:function(e){e.exports=function(e){var t=this.__data__,n=t.delete(e);return this.size=t.size,n}},707:function(e){e.exports=function(e){return this.__data__.get(e)}},8832:function(e){e.exports=function(e){return this.__data__.has(e)}},5077:function(e,t,n){var r=n(8384),i=n(5797),o=n(8059),a=200;e.exports=function(e,t){var n=this.__data__;if(n instanceof r){var s=n.__data__;if(!i||s.length=t||n<0||m&&e-g>=d}function w(){var e=i();if(x(e))return _(e);h=setTimeout(w,function(e){var n=t-(e-p);return m?l(n,d-(e-g)):n}(e))}function _(e){return h=void 0,y&&u?b(e):(u=c=void 0,f)}function k(){var e=i(),n=x(e);if(u=arguments,c=this,p=e,n){if(void 0===h)return function(e){return g=e,h=setTimeout(w,t),v?b(e):f}(p);if(m)return clearTimeout(h),h=setTimeout(w,t),b(p)}return void 0===h&&(h=setTimeout(w,t)),f}return t=o(t)||0,r(n)&&(v=!!n.leading,d=(m="maxWait"in n)?s(o(n.maxWait)||0,t):d,y="trailing"in n?!!n.trailing:y),k.cancel=function(){void 0!==h&&clearTimeout(h),g=0,u=p=c=h=void 0},k.flush=function(){return void 0===h?f:_(i())},k}},6933:function(e,t,n){var r=n(8794),i=n(9231),o=n(3195),a=n(3961),s=Object.prototype,l=s.hasOwnProperty,u=r((function(e,t){e=Object(e);var n=-1,r=t.length,u=r>2?t[2]:void 0;for(u&&o(t[0],t[1],u)&&(r=1);++n-1&&e%1==0&&e<=t}},103:function(e,t,n){var 
r=n(3085),i=n(6194),o=n(9494),a=o&&o.isMap,s=a?i(a):r;e.exports=s},8092:function(e){e.exports=function(e){var t=typeof e;return null!=e&&("object"==t||"function"==t)}},3141:function(e){e.exports=function(e){return null!=e&&"object"==typeof e}},3977:function(e,t,n){var r=n(9066),i=n(1137),o=n(3141),a="[object Object]",s=Function.prototype,l=Object.prototype,u=s.toString,c=l.hasOwnProperty,d=u.call(Object);e.exports=function(e){if(!o(e)||r(e)!=a)return!1;var t=i(e);if(null===t)return!0;var n=c.call(t,"constructor")&&t.constructor;return"function"==typeof n&&n instanceof n&&u.call(n)==d}},6995:function(e,t,n){var r=n(8680),i=n(6194),o=n(9494),a=o&&o.isSet,s=a?i(a):r;e.exports=s},6769:function(e,t,n){var r=n(9066),i=n(3629),o=n(3141),a="[object String]";e.exports=function(e){return"string"==typeof e||!i(e)&&o(e)&&r(e)==a}},152:function(e,t,n){var r=n(9066),i=n(3141),o="[object Symbol]";e.exports=function(e){return"symbol"==typeof e||i(e)&&r(e)==o}},9102:function(e,t,n){var r=n(8150),i=n(6194),o=n(9494),a=o&&o.isTypedArray,s=a?i(a):r;e.exports=s},2530:function(e){e.exports=function(e){return void 0===e}},2742:function(e,t,n){var r=n(7538),i=n(3654),o=n(1473);e.exports=function(e){return o(e)?r(e):i(e)}},3961:function(e,t,n){var r=n(7538),i=n(8664),o=n(1473);e.exports=function(e){return o(e)?r(e,!0):i(e)}},5727:function(e){e.exports=function(e){var t=null==e?0:e.length;return t?e[t-1]:void 0}},2034:function(e,t,n){var r=n(8950),i=n(6025),o=n(3849),a=n(3629);e.exports=function(e,t){return(a(e)?r:o)(e,i(t,3))}},7702:function(e,t,n){var r=n(2526),i=n(5358),o=n(6025);e.exports=function(e,t){var n={};return t=o(t,3),i(e,(function(e,i,o){r(n,i,t(e,i,o))})),n}},9627:function(e,t,n){var r=n(3079),i=n(1954),o=n(2100);e.exports=function(e){return e&&e.length?r(e,o,i):void 0}},9151:function(e,t,n){var r=n(8059),i="Expected a function";function o(e,t){if("function"!=typeof e||null!=t&&"function"!=typeof t)throw new TypeError(i);var n=function n(){var r=arguments,i=t?t.apply(this,r):r[0],o=n.cache;if(o.has(i))return o.get(i);var a=e.apply(this,r);return n.cache=o.set(i,a)||o,a};return n.cache=new(o.Cache||r),n}o.Cache=r,e.exports=o},9286:function(e,t,n){var r=n(4173),i=n(9934)((function(e,t,n){r(e,t,n)}));e.exports=i},6452:function(e,t,n){var r=n(3079),i=n(2580),o=n(2100);e.exports=function(e){return e&&e.length?r(e,o,i):void 0}},3638:function(e,t,n){var r=n(3079),i=n(6025),o=n(2580);e.exports=function(e,t){return e&&e.length?r(e,i(t,2),o):void 0}},9694:function(e){e.exports=function(){}},72:function(e,t,n){var r=n(7009);e.exports=function(){return r.Date.now()}},6460:function(e,t,n){var r=n(4980),i=n(7038)((function(e,t){return null==e?{}:r(e,t)}));e.exports=i},38:function(e,t,n){var r=n(9586),i=n(4084),o=n(5823),a=n(9793);e.exports=function(e){return o(e)?r(a(e)):i(e)}},6222:function(e,t,n){var r=n(6381)();e.exports=r},5080:function(e,t,n){var r=n(2095),i=n(7927),o=n(6025),a=n(750),s=n(3629);e.exports=function(e,t,n){var l=s(e)?r:a,u=arguments.length<3;return l(e,o(t,4),n,u,i)}},4485:function(e,t,n){var r=n(379);e.exports=function(e,t,n){return null==e?e:r(e,t,n)}},9467:function(e,t,n){var r=n(3654),i=n(8383),o=n(1473),a=n(6769),s=n(4651),l="[object Map]",u="[object Set]";e.exports=function(e){if(null==e)return 0;if(o(e))return a(e)?s(e):e.length;var t=i(e);return t==l||t==u?e.size:r(e).length}},4286:function(e,t,n){var r=n(5182),i=n(3226),o=n(8794),a=n(3195),s=o((function(e,t){if(null==e)return[];var n=t.length;return 
n>1&&a(e,t[0],t[1])?t=[]:n>2&&a(t[0],t[1],t[2])&&(t=[t[0]]),i(e,r(t,1),[])}));e.exports=s},8174:function(e){e.exports=function(){return[]}},9488:function(e){e.exports=function(){return!1}},1495:function(e,t,n){var r=n(2582),i=1/0,o=17976931348623157e292;e.exports=function(e){return e?(e=r(e))===i||e===-i?(e<0?-1:1)*o:e===e?e:0:0===e?e:0}},9753:function(e,t,n){var r=n(1495);e.exports=function(e){var t=r(e),n=t%1;return t===t?n?t-n:t:0}},2582:function(e,t,n){var r=n(821),i=n(8092),o=n(152),a=NaN,s=/^[-+]0x[0-9a-f]+$/i,l=/^0b[01]+$/i,u=/^0o[0-7]+$/i,c=parseInt;e.exports=function(e){if("number"==typeof e)return e;if(o(e))return a;if(i(e)){var t="function"==typeof e.valueOf?e.valueOf():e;e=i(t)?t+"":t}if("string"!=typeof e)return 0===e?e:+e;e=r(e);var n=l.test(e);return n||u.test(e)?c(e.slice(2),n?2:8):s.test(e)?a:+e}},168:function(e,t,n){var r=n(8950),i=n(291),o=n(3629),a=n(152),s=n(170),l=n(9793),u=n(3518);e.exports=function(e){return o(e)?r(e,l):a(e)?[e]:i(s(u(e)))}},6576:function(e,t,n){var r=n(4503),i=n(3961);e.exports=function(e){return r(e,i(e))}},3518:function(e,t,n){var r=n(2446);e.exports=function(e){return null==e?"":r(e)}},5653:function(e,t,n){var r=n(4550),i=n(5763),o=n(5358),a=n(6025),s=n(1137),l=n(3629),u=n(5174),c=n(4786),d=n(8092),f=n(9102);e.exports=function(e,t,n){var h=l(e),p=h||u(e)||f(e);if(t=a(t,4),null==n){var g=e&&e.constructor;n=p?h?new g:[]:d(e)&&c(g)?i(s(e)):{}}return(p?r:o)(e,(function(e,r,i){return t(n,e,r,i)})),n}},6310:function(e,t,n){var r=n(5182),i=n(8794),o=n(9602),a=n(6279),s=i((function(e){return o(r(e,1,a,!0))}));e.exports=s},804:function(e,t,n){var r=n(3518),i=0;e.exports=function(e){var t=++i;return r(e)+t}},2063:function(e,t,n){var r=n(8019),i=n(2742);e.exports=function(e){return null==e?[]:r(e,i(e))}},4827:function(e,t,n){var r=n(8463),i=n(2971);e.exports=function(e,t){return i(e||[],t||[],r)}},888:function(e,t,n){"use strict";var r=n(9047);function i(){}function o(){}o.resetWarningCache=i,e.exports=function(){function e(e,t,n,i,o,a){if(a!==r){var s=new Error("Calling PropTypes validators directly is not supported by the `prop-types` package. Use PropTypes.checkPropTypes() to call them. Read more at http://fb.me/use-check-prop-types");throw s.name="Invariant Violation",s}}function t(){return e}e.isRequired=e;var n={array:e,bigint:e,bool:e,func:e,number:e,object:e,string:e,symbol:e,any:e,arrayOf:t,element:e,elementType:e,instanceOf:t,node:e,objectOf:t,oneOf:t,oneOfType:t,shape:t,exact:t,checkPropTypes:o,resetWarningCache:i};return n.PropTypes=n,n}},2007:function(e,t,n){e.exports=n(888)()},9047:function(e){"use strict";e.exports="SECRET_DO_NOT_PASS_THIS_OR_YOU_WILL_BE_FIRED"},4463:function(e,t,n){"use strict";var r=n(2791),i=n(5296);function o(e){for(var t="https://reactjs.org/docs/error-decoder.html?invariant="+e,n=1;n