Docs cleanup components #12865

Merged
merged 15 commits into from
Oct 17, 2023
82 changes: 31 additions & 51 deletions docs/docs/concepts/components/llm-configuration.mdx
@@ -1,7 +1,7 @@
---
id: llm-configuration
sidebar_label: Configuration
title: Configuration
sidebar_label: LLM Providers
title: LLM Providers
abstract: |
Instructions on how to set up and configure Large Language Models from
OpenAI, Cohere, and other providers.
@@ -11,41 +11,22 @@ abstract: |

## Overview

This guide will walk you through the process of configuring Rasa to connect to an
LLM, including deployments that rely on Azure OpenAI service. Instructions for
other LLM providers are further down the page.
All Rasa components which make use of an LLM can be configured.
This includes:
* Which LLM provider to use
* The model you want to use
* The sampling temperature
* The prompt template

## Assistant Configuration
and other settings.
This page applies to the following components which use LLMs:

To use LLMs in your assistant, you need to configure the following components:
* [LLMCommandGenerator](../dialogue-understanding.mdx)
* DocsearchPolicy
* IntentlessPolicy
* ContextualResponseRephraser
* LLMIntentClassifier

```yaml title="config.yml"
recipe: default.v1
language: en
pipeline:
  - name: LLMCommandGenerator

policies:
  - name: rasa.core.policies.flow_policy.FlowPolicy
  - name: rasa_plus.ml.DocsearchPolicy
  - name: RulePolicy
```

To use the rephrasing capability, you'll also need to add the following to your
endpoint configuration:

```yaml title="endpoints.yml"
nlg:
  type: rasa_plus.ml.ContextualResponseRephraser
```

Additional configuration parameters are explained in detail in the documentation
pages for each of these components:

- [LLMCommandGenerator](../dialogue-understanding.mdx)
- [FlowPolicy](../policies.mdx#flow-policy)
- Docsearch
- [ContextualResponseRephraser](../contextual-response-rephraser.mdx)

## OpenAI Configuration

@@ -55,18 +36,10 @@ can be configured with different LLMs, but OpenAI is the default.
If you want to configure your assistant with a different LLM, you can find
instructions for other LLM providers further down the page.

### Prerequisites

Before beginning, make sure that you have:

- Access to OpenAI's services
- Ability to generate API keys for OpenAI

### API Token

The API token is a key element that allows your Rasa instance to connect and
communicate with OpenAI. This needs to be configured correctly to ensure seamless
interaction between the two.
The API token authenticates your requests to the OpenAI API.

To configure the API token, follow these steps:

@@ -102,16 +75,23 @@ To configure the API token, follow these steps:
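As a minimal sketch (assuming the standard `OPENAI_API_KEY` environment variable, which OpenAI client libraries read by default), the token can be provided like this:

```bash
# Generate an API key in the OpenAI dashboard, then export it in the
# shell where you run Rasa. The value below is a placeholder, not a real key.
export OPENAI_API_KEY="<your-openai-api-key>"
```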

### Model Configuration

Rasa allows you to use different models for different components. For example,
you might use one model for intent classification and another for rephrasing.
Many LLM providers offer multiple models through their API.
The model is specified individually for each component, so you can use a
combination of different models if you want to.

To configure models per component, follow these steps described on the
pages for each component:
```yaml title="config.yml"
recipe: default.v1
language: en
pipeline:
  - name: LLMCommandGenerator
    model: "gpt-4"

policies:
  - name: rasa.core.policies.flow_policy.FlowPolicy
  - name: rasa_plus.ml.DocsearchPolicy
    model: "gpt-3.5-turbo"
```

1. [intent classification instructions](../../nlu-based-assistants/components.mdx#llmintentclassifier)
2. [rephrasing instructions](../contextual-response-rephraser.mdx#llm-configuration)
3. [intentless policy instructions](../policies.mdx#flow-policy)
4. docsearch instructions

### Additional Configuration for Azure OpenAI Service

2 changes: 1 addition & 1 deletion docs/docs/concepts/components/llm-custom.mdx
@@ -1,6 +1,6 @@
---
id: llm-custom
sidebar_label: Customization
sidebar_label: Customizing LLM Components
title: Customizing LLM based Components
abstract:
---
114 changes: 77 additions & 37 deletions docs/docs/concepts/components/overview.mdx
@@ -1,62 +1,102 @@
---
id: overview
sidebar_label: Overview
title: Model Configuration
description: Learn about model configuration for Rasa.
sidebar_label: Configuration
title: Configuration
description: Configure your Rasa Assistant.
abstract: The configuration file defines the components and policies that your model will use to make predictions based on user input.
---

The recipe key allows for different types of config and model architecture.
Currently, "default.v1" and the experimental "graph.v1" recipes are supported.
import RasaProLabel from '@theme/RasaProLabel';

:::info New in 3.5
You can customise many aspects of how Rasa works by modifying the `config.yml` file.

The config file now includes a new mandatory key `assistant_id` which represents the unique assistant identifier.
A minimal configuration for a [CALM](../calm.mdx) assistant looks like this:

```yaml-rasa title="config.yml"
recipe: default.v1
language: en
assistant_id: 20230405-114328-tranquil-mustard

pipeline:
  - name: LLMCommandGenerator

policies:
  - name: rasa.core.policies.flow_policy.FlowPolicy
```

:::tip Default Configuration
For backwards compatibility, running `rasa init` will create an NLU-based assistant.
To create a CALM assistant with the right `config.yml`, add the
additional `--template` argument:

```bash
rasa init --template calm
```

:::

The `assistant_id` key must specify a unique value to distinguish multiple assistants in deployment.
The assistant identifier will be propagated to each event's metadata, alongside the model id.
Note that if the config file does not include this required key or the placeholder default value is not replaced, a random
assistant name will be generated and added to the configuration every time you run `rasa train`.
### The recipe, language, and assistant_id keys

The language and pipeline keys specify the components used by the model to make NLU predictions.
The policies key defines the policies used by the model to predict the next action.
The `recipe` key only needs to be modified if you want to build a [custom graph recipe](./graph-recipe.mdx).
The vast majority of projects should use the default value `"default.v1"`.

The `language` key is a 2-letter ISO code for the language your assistant supports.

## Suggested Config
The `assistant_id` key should be a unique value and allows you to distinguish multiple
deployed assistants.
This id is added to each event's metadata, together with the model id.
See [event brokers](../production/event-brokers.mdx) for more information.
Note that if the config file does not include this required key or the placeholder default value
is not replaced, a random assistant name will be generated and added to the configuration
every time you run `rasa train`.

TODO: update

You can leave the pipeline and/or policies key out of your configuration file.
When you run `rasa train`, the Suggested Config feature will select a default configuration
for the missing key(s) to train the model.

Make sure to specify the language key in your `config.yml` file with the
2-letter ISO language code.
## Pipeline

Example `config.yml` file:
The `pipeline` key lists the components which will be used to process and understand the messages
that end users send to your assistant.
In a CALM assistant, the output of your pipeline components is a list of [commands](../dialogue-understanding.mdx).

```yaml-rasa (docs/sources/data/configs_for_docs/example_for_suggested_config.yml)
The main component in your pipeline is the `LLMCommandGenerator`.
Here is what an example configuration looks like:

```yaml-rasa title="config.yml"
pipeline:
  - name: LLMCommandGenerator
    llm:
      model_name: "gpt-4"
      request_timeout: 7
      temperature: 0.0
```

The selected configuration will also be written as comments into the `config.yml` file,
so you can see which configuration was used. For the example above, the resulting file
might look e.g. like this:
The full set of configurable parameters is listed [here](../dialogue-understanding.mdx).

All components which make use of LLMs have common configuration parameters, which are listed [here](../llm-configuration.mdx).

```yaml-rasa (docs/sources/data/configs_for_docs/example_for_suggested_config_after_train.yml)

### Combining CALM and NLU-based components

<RasaProLabel />

Rasa Pro allows you to combine both NLU-based and CALM components in your pipeline.
See a full list of NLU-based components [here](../nlu-based-assistants/components.mdx).
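For illustration only (the exact set of components depends on your assistant), a combined pipeline might pair the CALM `LLMCommandGenerator` with an NLU-based component such as the `LLMIntentClassifier` mentioned earlier on this page:

```yaml-rasa title="config.yml"
pipeline:
  # CALM component: translates user messages into commands
  - name: LLMCommandGenerator
  # NLU-based component running alongside the CALM pipeline
  - name: LLMIntentClassifier
```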

## Policies

The `policies` key lists the [dialogue policies](../policies.mdx) your assistant will use
to progress the conversation.

```yaml-rasa title="config.yml"
policies:
  - name: rasa.core.policies.flow_policy.FlowPolicy
```

If you like, you can then un-comment the suggested configuration for one or both of the
keys and make modifications. Note that this will disable automatic suggestions for this
key when training again.
As long as you leave the configuration commented out and don't specify any configuration
for a key yourself, a default configuration will be suggested whenever you train a new
model.
The `FlowPolicy` currently doesn't have any additional configuration parameters.

:::note nlu- or dialogue- only models
### Combining CALM and NLU-based dialogue policies

Only the default configuration for `pipeline` will be automatically selected
if you run `rasa train nlu`, and only the default configuration for `policies`
will be selected if you run `rasa train core`.
:::
<RasaProLabel />

Rasa Pro allows you to use both NLU-based and CALM dialogue policies in your assistant.
See a full list of NLU-based policies [here](../nlu-based-assistants/policies.mdx).
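As a sketch, reusing the `FlowPolicy` plus `RulePolicy` combination shown earlier on this page, a mixed policy configuration could look like this:

```yaml-rasa title="config.yml"
policies:
  # CALM dialogue policy
  - name: rasa.core.policies.flow_policy.FlowPolicy
  # NLU-based dialogue policy
  - name: RulePolicy
```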
1 change: 1 addition & 0 deletions docs/docs/concepts/dialogue-understanding.mdx
@@ -30,6 +30,7 @@ You can find all generated commands in the [command reference](#command-reference).

To use this component in your assistant, you need to add the
`LLMCommandGenerator` to your NLU pipeline in the `config.yml` file.
Read more about the `config.yml` file [here](./components/overview.mdx).

```yaml-rasa title="config.yml"
pipeline:
  - name: LLMCommandGenerator
```
8 changes: 4 additions & 4 deletions docs/sidebars.js
@@ -63,9 +63,6 @@ module.exports = {
items: [
"concepts/components/overview",
"concepts/components/llm-configuration",
"concepts/components/llm-custom",
"concepts/components/custom-graph-components",
"concepts/components/graph-recipe",
],
},
"concepts/policies", // TODO: ENG-538
@@ -258,8 +255,11 @@ module.exports = {
collapsed: true,
items: [
"nlu-based-assistants/glossary",
"concepts/components/custom-graph-components",
"concepts/components/llm-custom",
"concepts/components/graph-recipe",
Collaborator:

for me it makes more sense to keep this under the Components section. Can you explain your reasoning?

Contributor (author):

thinking is that components is under 'key concepts' which to me implies things you should understand as a Rasa user. IMO these pieces are not 'key' but rather advanced topics.

Collaborator:

My worry is that we put things under the Reference section and it becomes a catch-all section for everything that doesn't fit with Key concept or the sections we added.

Because one key selling point of Rasa is to allow for customisation and configuration, it could actually make sense to have a section for building advanced assistants for pushing the boundaries of what Rasa bakes in by default. WDYT? I would see that section between "Operating at scale" and "Reference".

Contributor (author):

I like that idea, especially if we can use it as a place to articulate what we consider the public facing API and what things are internal and you shouldn't touch unless you want to make your own life hard. But clearly outside the scope of this PR. Happy to put this back in key concepts for now

"telemetry/telemetry",
"telemetry/reference",
"telemetry/reference",
require("./docs/reference/sidebar.json"),
],
},