
Questions tagged [ollama]

Ollama is a tool for running Llama 2, Code Llama, and other large language models locally.

0 votes · 0 answers · 15 views

ConnectError: All connection attempts failed when connecting to a Neo4j database using PropertyGraphIndex with llama3

I am working on a knowledge graph, and connecting to the Neo4j browser succeeds (using Neo4j Desktop on Windows, not a Docker deployment). However, with llama3 I am running the same notebooks as in the property ...
asked by Kcndze (21)
0 votes · 0 answers · 16 views

How to immediately cancel an asyncio task that uses the Ollama Python library to generate an answer?

I'm using Ollama to generate answers from large language models (LLMs) via the Ollama Python API. I want to cancel the response generation by clicking a stop button. The problem is that the task ...
asked by noocoder777
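Cancelling the `asyncio.Task` is the usual way to abort a streaming call: `task.cancel()` raises `CancelledError` at the task's next `await`. A minimal sketch of the pattern, using a stand-in async generator in place of the stream returned by `ollama.AsyncClient().chat(..., stream=True)`:

```python
import asyncio

async def fake_token_stream():
    # Stand-in for the async token stream from the Ollama Python library.
    for token in ["Hello", " ", "world", "!", " (more", " tokens...)"]:
        await asyncio.sleep(0.01)
        yield token

async def generate(collected):
    # Consuming loop; cancellation interrupts it at the next await.
    async for token in fake_token_stream():
        collected.append(token)

async def main():
    collected = []
    task = asyncio.create_task(generate(collected))
    await asyncio.sleep(0.025)   # let a few tokens arrive, then "click stop"
    task.cancel()                # raises CancelledError inside the task
    try:
        await task
    except asyncio.CancelledError:
        pass                     # expected: the task was cancelled mid-stream
    return collected

partial = asyncio.run(main())
print(partial)  # only the tokens produced before cancellation
```

In a GUI, the stop button's callback would hold a reference to the task and simply call `task.cancel()`.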
0 votes · 1 answer · 22 views

Does langchain with llama-cpp-python fail to work with very long prompts?

I'm trying to create a service using the llama3-70b model by combining langchain and llama-cpp-python on a server workstation. While the model works well with short prompts (question1, question2), it ...
asked by bibiibibin
0 votes · 0 answers · 23 views

How should I use Llama-3 properly?

I downloaded the Meta-Llama-3-70B-Instruct model using download.sh and the URL provided in the email from Meta, and these are all the files in the folder. When I tried to use ...
asked by Joey1205
0 votes · 1 answer · 72 views

Slow Ollama API - how to make sure the GPU is used

I made a simple demo of a chatbot interface in Godot that lets you chat with a language model running under Ollama. Currently, the interface between Godot and the language model is based ...
asked by randomal (6,462)
0 votes · 0 answers · 31 views

How to use the godot-llama-cpp plugin

Godot newbie here. I made a simple chatbot demo (repo here) in Godot, which takes as input the text typed by a user and outputs the replies generated by a large language model running locally using ...
asked by randomal (6,462)
-1 votes · 0 answers · 124 views

Ollama error (HTTPError: 404 Client Error: Not Found for url: http://localhost:11434/v1/chat/completions)

I'm trying to translate the results I scraped using Selenium, but I keep facing the same issue while using Ollama. Here's the error in detail, with my code: import os import openai import requests ...
asked by nodistraction96
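A 404 on `/v1/chat/completions` usually means the local Ollama predates its OpenAI-compatible API (added around v0.1.24) or the path is mistyped. A sketch of a well-formed request built with the standard library only; the `llama3` model name is an assumption, and actually sending the request requires a running, up-to-date Ollama server:

```python
import json
from urllib import request

# Ollama's OpenAI-compatible endpoint; a 404 here usually means an older
# Ollama version or a typo in the path, not a problem with the payload.
url = "http://localhost:11434/v1/chat/completions"

payload = {
    "model": "llama3",  # assumed model name; it must already be pulled locally
    "messages": [{"role": "user", "content": "Translate 'bonjour' to English."}],
}

req = request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Sending requires a running server, so it stays commented out here:
# with request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

If the endpoint still 404s with a current Ollama, checking `ollama --version` and the exact URL spelling is usually the first step.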
3 votes · 0 answers · 54 views

langchain4j and Ollama - chat does not work because of uppercased role value

I am using Ollama v0.2.3 on Windows with tinyllama, locally installed, and langchain4j v0.32.0. I followed a very simple example of sending a chat query to Ollama. To my surprise I got back a very ...
asked by JanDasWiesel
0 votes · 0 answers · 86 views

How to stop Ollama model streaming

So I have this class that streams the response from a model: from langchain_community.llms.ollama import Ollama from app.config import ( LLM_MODEL_NAME, MAX_LLM_INPUT_LENGTH, ...
asked by KZiovas (4,377)
0 votes · 0 answers · 32 views

Embedding a document in Ollama with LangChain always fails with an error (400 Bad Request)

I always get a 400 Bad Request error when trying to embed a document in Ollama with LangChain.js. Here is my code: const embeddings = new OllamaEmbeddings({ model: "orca-mini", baseUrl: "...
asked by Muhammad Hakim
2 votes · 1 answer · 86 views

How to instantly terminate a thread? Using the Ollama Python API with tkinter to stream a response from llama2

I'm using ollama to stream a response from llama2 large language model. The functionality I need is, when I click the stop button, it should stop the thread immediately. The code below works but the ...
asked by noocoder777
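CPython offers no safe way to kill a thread from outside; the usual compromise is a `threading.Event` checked between tokens, which stops at the next token boundary rather than truly instantly. A sketch of that pattern, with a fixed token list standing in for the chunks yielded by `ollama.chat(..., stream=True)`:

```python
import threading
import time

# Python threads cannot be forcibly killed, so the worker cooperates:
# it checks a threading.Event between tokens and returns when it is set.
stop_event = threading.Event()
received = []

def stream_worker():
    # The token list is a stand-in for a streaming ollama.chat(...) call.
    for token in ["one ", "two ", "three ", "four "]:
        if stop_event.is_set():
            return              # exit the thread at the next token boundary
        received.append(token)
        time.sleep(0.01)        # simulate time between streamed chunks

t = threading.Thread(target=stream_worker)
t.start()
time.sleep(0.015)               # simulate clicking "stop" after ~1-2 tokens
stop_event.set()
t.join()
print(received)                 # only the tokens streamed before stop
```

In a tkinter app, the stop button's command would call `stop_event.set()`; the thread then exits on its own, so no forced termination is needed.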
-1 votes · 1 answer · 48 views

Ollama isn't using my GPU on a runpod.io pod

I am testing different AI models on runpod.io. One of those models is dolphin-mixtral:8x22b. I followed Runpod's tutorial for setting up the pod with Ollama: https://docs.runpod.io/tutorials/pods/run-...
asked by Anton (1)
0 votes · 0 answers · 21 views

How to build the Ollama setup exe from its source code

I need to build the Ollama setup exe from source on Windows. I found the steps, which note that the Windows build for Ollama is still under development. First, install the required tools: MSVC ...
asked by Diksha Gupta
0 votes · 0 answers · 64 views

The relationship between chunk_size, context length and embedding length in a Langchain RAG Framework

Currently I am working on a LangChain RAG framework using Ollama, and I have a question about the chunk size in the document splitter. I have decided to use the qwen2:72b model as both the embedding ...
asked by Joesf.Albert
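The usual constraint is: each chunk (plus any prompt wrapping) must fit within the embedding model's context length, while the embedding length is simply the model's fixed output dimension and is unaffected by chunk_size. A rough sketch of checking that relationship, with a bare-bones character splitter standing in for LangChain's text splitters; all limits here are illustrative assumptions, not qwen2:72b's real values:

```python
# chunk_size bounds each chunk; every chunk must fit the context window;
# the embedding length is a fixed property of the model, independent of both.
def split_text(text, chunk_size, chunk_overlap):
    # Bare-bones stand-in for a LangChain text splitter: fixed-size
    # character windows that overlap by chunk_overlap characters.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

CONTEXT_LENGTH_TOKENS = 8192   # assumed context window, for illustration only
CHARS_PER_TOKEN = 4            # common rough estimate for English text

def fits_context(chunk):
    estimated_tokens = len(chunk) / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_LENGTH_TOKENS

text = "lorem ipsum " * 500    # 6000 characters of dummy text
chunks = split_text(text, chunk_size=1000, chunk_overlap=200)
assert all(fits_context(c) for c in chunks)
print(len(chunks), max(len(c) for c in chunks))
```

If a chunk's token estimate exceeded the context window, the embedding call would truncate or fail, so chunk_size is normally chosen well below the context length; the embedding dimension never changes with it.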
0 votes · 1 answer · 63 views

Ollama Embeddings: Are the embeddings in the same order as the documents?

I’m using OllamaEmbeddings from langchain_community.embeddings (in Python) to generate embeddings of documents. I need to be absolutely sure that the embeddings are in the same order as the documents that ...
asked by David (1)
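The `embed_documents` contract in LangChain is one vector per input text, and OllamaEmbeddings embeds the texts sequentially, so the i-th vector corresponds to the i-th document. Pairing them with `zip` immediately makes that correspondence explicit; the sketch below uses a deterministic stand-in embedder so it runs without a server:

```python
# A deterministic stand-in for OllamaEmbeddings.embed_documents, which
# returns one vector per input text, in input order.
def fake_embed_documents(texts):
    # Toy 2-d "embedding": (length of text, number of vowels).
    return [[float(len(t)), float(sum(c in "aeiou" for c in t))] for t in texts]

documents = ["alpha", "beta", "gamma"]
vectors = fake_embed_documents(documents)

# Zip right away so a vector can never be separated from its document,
# even if the lists are later filtered or re-sorted.
pairs = list(zip(documents, vectors))

print(pairs[0])  # → ('alpha', [5.0, 2.0])
```

Storing the `(document, vector)` pairs, rather than two parallel lists, removes any later doubt about ordering.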
