[Question]: Could I use my local embedding model? #82
Comments
@ucas010 please take a look at this guide: https://pathway.com/developers/templates/private-rag-ollama-mistral#_3-embedding-model-selection. The code it describes comes from this template: https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/private-rag, but it can be used with any template. For relevant documentation, see the guide and template linked above.
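To make this concrete, here is a minimal sketch of the model selection that guide walks through, assuming the `SentenceTransformerEmbedder` and `LiteLLMChat` wrappers from the `pathway.xpacks.llm` module as used in the private-rag template. The embedding model name matches that template; the Ollama URL and LLM name are placeholders you would swap for your own endpoints:

```python
from pathway.xpacks.llm import embedders, llms

# Local embedding model, run in-process via sentence-transformers.
# This is the model used in the private-rag template; any
# sentence-transformers-compatible model name works here.
embedder = embedders.SentenceTransformerEmbedder(
    model="avsolatorio/GIST-small-Embedding-v0",
)

# Local LLM served through a LiteLLM-compatible endpoint,
# e.g. Ollama running on your own machine (placeholder URL).
chat = llms.LiteLLMChat(
    model="ollama/mistral",
    api_base="http://localhost:11434",
    temperature=0,
)
```

Both objects then slot into the pipeline wherever the template's defaults were used.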
Thanks!
I think there is no problem here: the Pathway Vector Store is a built-in, in-memory vector index (built around Tantivy, with performance comparable to the FAISS implementation). You do not need any extra steps to set it up; it works out of the box, so just try running the code. Setting up an external vector store or integrating with FAISS is significantly more work, and we currently do not provide templates for that.
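For reference, a minimal sketch of spinning up the built-in store, assuming the `VectorStoreServer` API from the llm xpack; the documents path and the port are placeholders:

```python
import pathway as pw
from pathway.xpacks.llm import embedders
from pathway.xpacks.llm.vector_store import VectorStoreServer

# Stream documents from a local folder (placeholder path).
docs = pw.io.fs.read("./documents", format="binary", with_metadata=True)

# Any embedder works here, including a local one as shown above.
embedder = embedders.SentenceTransformerEmbedder(
    model="avsolatorio/GIST-small-Embedding-v0",
)

# The in-memory index is built and served out of the box;
# no external vector database is needed.
server = VectorStoreServer(docs, embedder=embedder)
server.run_server(host="127.0.0.1", port=8000)
```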
Hi @ucas010, did you give the built-in vector store a try?
Steps to reproduce
I have my own embedding encoder, which I query over an HTTP URL to get vectors. How can I plug it in?
I also have my own LLM, which I likewise query through a URL using the ChatOpenAI or OpenAI functions. How do I replace the default OpenAI?
Thanks!
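For context, a rough sketch of how I call my models today, using the OpenAI-compatible client pointed at my own URLs; the endpoints and model names below are placeholders:

```python
import requests
from openai import OpenAI

# My LLM sits behind an OpenAI-compatible endpoint (placeholder URL).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
reply = client.chat.completions.create(
    model="my-local-llm",  # placeholder model name
    messages=[{"role": "user", "content": "hello"}],
)

# My embedding encoder is a plain HTTP service returning vectors
# (placeholder URL and request schema).
vectors = requests.post(
    "http://localhost:9000/embed",
    json={"texts": ["hello world"]},
).json()
```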
Relevant log output
No response
What did you expect to happen?
To be able to replace the embedding encoder and the LLM with my own.
Version
no
Docker Versions (if used)
No response
OS
Linux
On which CPU architecture did you run Pathway?
ARM64 (AArch64, Apple silicon)