Commit 4c0afd0

Authored by XinyaoWa, XuhuiRen, lvliang-intel, Spycsh, and pre-commit-ci[bot]
Add knowledge graph components (#171)
This squashed commit includes the following changes:

* enable ragas (#129)
* Fix RAG performance issues (#132)
* add microservice level perf statistics (#135)
* Add More Contents to the Table of MicroService (#141)
* Use common security content for OPEA projects (#151)
* Enable vLLM Gaudi support for LLM service based on the official Habana vLLM release (#137)
* add knowledge graph microservice
* Support Dataprep Microservice with Llama Index (#154)
* Support Embedding Microservice with Llama Index (#150)
* Support Ollama microservice (#142)
* Fix dataprep microservice path issue (#163)
* update CI to support dataprep_redis path level change (#155)
* Add Gateway for Translation (#169)
* Update LLM readme (#172)
* add milvus microservice (#158)
* enable python coverage (#149)
* Add Ray version for multi file process (#119)
* Add codecov (#178)
* Rename lm-eval folder to utils/lm-eval (#179)
* Support rerank and retrieval of RAG OPT (#164)
* Support vLLM XFT LLM microservice (#174)
* Update Dataprep Microservice README (#173)
* fixed milvus port conflict issues during deployment (#183)
* Fix dataprep timeout issue (#203)
* Add a new embedding MosecEmbedding (#182)
* expand timeout for microservice test (#208)

Signed-off-by: XuhuiRen, Xinyao Wang, lvliang-intel, zehao-intel, chensuyue, tianyil1, letonghan, jinjunzh, Sun Xuehao, Chendi Xue, changwangss, Xinyu Ye, Jincheng Miao

Co-authored-by: XuhuiRen, lvliang-intel, Sihan Chen, pre-commit-ci[bot], zehao-intel, chen suyue, Tianyi Liu, Letong Han, jasperzhu, Chendi Xue, Sun Xuehao, Wang Chang, XinyuYe-Intel, Jincheng Miao
1 parent 23e6ed0 commit 4c0afd0

File tree

19 files changed: +452 -0 lines changed

comps/__init__.py

Lines changed: 1 addition & 0 deletions

```diff
@@ -16,6 +16,7 @@
     TextDoc,
     RAGASParams,
     RAGASScores,
+    GraphDoc,
     LVMDoc,
 )
```

comps/cores/mega/constants.py

Lines changed: 3 additions & 0 deletions

```diff
@@ -27,6 +27,7 @@ class ServiceType(Enum):
     UNDEFINED = 10
     RAGAS = 11
     LVM = 12
+    KNOWLEDGE_GRAPH = 13


 class MegaServiceEndpoint(Enum):
@@ -50,6 +51,8 @@ class MegaServiceEndpoint(Enum):
     RERANKING = "/v1/reranking"
     GUARDRAILS = "/v1/guardrails"
     RAGAS = "/v1/ragas"
+    GRAPHS = "/v1/graphs"
+
     # COMMON
     LIST_SERVICE = "/v1/list_service"
     LIST_PARAMETERS = "/v1/list_parameters"
```
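Because Python `Enum` members can be looked up by value, the new `GRAPHS` entry lets a gateway resolve an incoming request path back to its endpoint. A minimal sketch, using a trimmed stand-in for `MegaServiceEndpoint` with only the values shown in the diff above:

```python
from enum import Enum

# Trimmed stand-in for MegaServiceEndpoint; values copied from the diff above.
class MegaServiceEndpoint(Enum):
    RERANKING = "/v1/reranking"
    GUARDRAILS = "/v1/guardrails"
    RAGAS = "/v1/ragas"
    GRAPHS = "/v1/graphs"

# Resolve an incoming request path to its endpoint member.
endpoint = MegaServiceEndpoint("/v1/graphs")
print(endpoint.name)  # prints GRAPHS
```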

comps/cores/proto/docarray.py

Lines changed: 13 additions & 0 deletions

```diff
@@ -104,6 +104,19 @@ class RAGASScores(BaseDoc):
     context_precision: float


+class GraphDoc(BaseDoc):
+    text: str
+    strtype: Optional[str] = Field(
+        description="type of input query, can be 'query', 'cypher', 'rag'",
+        default="query",
+    )
+    max_new_tokens: Optional[int] = Field(default=1024)
+    rag_index_name: Optional[str] = Field(default="rag")
+    rag_node_label: Optional[str] = Field(default="Task")
+    rag_text_node_properties: Optional[list] = Field(default=["name", "description", "status"])
+    rag_embedding_node_property: Optional[str] = Field(default="embedding")
+
+
 class LVMDoc(BaseDoc):
     image: str
     prompt: str
```
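The diff above introduces `GraphDoc`, the request schema for the knowledge graph service. A rough stand-in using a plain dataclass (the real class extends docarray's `BaseDoc` with pydantic `Field`s) shows the field names and defaults in action:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical plain-dataclass stand-in for GraphDoc (the real class extends
# docarray's BaseDoc); field names and defaults mirror the diff above.
@dataclass
class GraphDocSketch:
    text: str
    strtype: str = "query"  # one of 'query', 'cypher', 'rag'
    max_new_tokens: int = 1024
    rag_index_name: str = "rag"
    rag_node_label: str = "Task"
    rag_text_node_properties: List[str] = field(
        default_factory=lambda: ["name", "description", "status"]
    )
    rag_embedding_node_property: str = "embedding"

doc = GraphDocSketch(text="MATCH (t:Task) RETURN count(*)", strtype="cypher")
print(doc.max_new_tokens)  # prints 1024
```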

comps/knowledgegraphs/README.md

Lines changed: 146 additions & 0 deletions

# Knowledge Graph Microservice

This microservice is designed for efficiently handling and retrieving information from a knowledge graph. It integrates a text retriever, knowledge graph quick search, and an LLM agent, which can be combined to enhance question answering.

The service supports three modes:

- "cypher": query the knowledge graph directly with Cypher
- "rag": apply similarity search on embeddings of the knowledge graph
- "query": an LLM agent automatically chooses tools (RAG or CypherChain) to enhance question answering

Here is the overall workflow:

![Workflow](doc/workflow.png)

A prerequisite for using this microservice is a knowledge graph database that is already running; currently [Neo4J](https://neo4j.com/) is supported for quick deployment. Set the graph service's endpoint in an environment variable; the microservice uses it for data ingestion and retrieval. The "rag" and "query" modes additionally require a running LLM text generation service (e.g., TGI, vLLM, or Ray).

Overall, this microservice provides efficient support for applications working with graph datasets, especially for answering multi-part questions or other cases involving complex relationships between entities.
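The three modes above amount to a dispatch on the request's `strtype` field. A minimal sketch (not the actual service code; the handler names are hypothetical):

```python
# Minimal sketch (not the actual service code) of how the three strtype
# modes could be routed to handlers; the handler names are hypothetical.
def route(strtype: str) -> str:
    handlers = {
        "cypher": "run_cypher",        # run the text directly as a Cypher query
        "rag": "similarity_search",    # embedding similarity search over the graph
        "query": "agent_with_tools",   # LLM agent picks RAG or CypherChain
    }
    if strtype not in handlers:
        raise ValueError(f"unknown strtype: {strtype!r}")
    return handlers[strtype]

print(route("rag"))  # prints similarity_search
```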
# 🚀1. Start Microservice with Docker

## 1.1 Setup Environment Variables

```bash
export NEO4J_ENDPOINT="neo4j://${your_ip}:7687"
export NEO4J_USERNAME="neo4j"
export NEO4J_PASSWORD=${define_a_password}
export HUGGINGFACEHUB_API_TOKEN=${your_huggingface_api_token}
export LLM_ENDPOINT="http://${your_ip}:8080"
export LLM_MODEL="meta-llama/Llama-2-7b-hf"
export AGENT_LLM="HuggingFaceH4/zephyr-7b-beta"
```
## 1.2 Start Neo4j Service

```bash
docker pull neo4j

docker run --rm \
  --publish=7474:7474 --publish=7687:7687 \
  --env NEO4J_AUTH=$NEO4J_USERNAME/$NEO4J_PASSWORD \
  --volume=$PWD/neo4j_data:"/data" \
  --env='NEO4JLABS_PLUGINS=["apoc"]' \
  neo4j
```
## 1.3 Start LLM Service for "rag"/"query" Mode

You can start any LLM microservice; here we take TGI on Gaudi as an example.

```bash
docker run -p 8080:80 \
  -v $PWD/llm_data:/data --runtime=habana \
  -e HABANA_VISIBLE_DEVICES=all \
  -e OMPI_MCA_btl_vader_single_copy_mechanism=none \
  -e HUGGING_FACE_HUB_TOKEN=$HUGGINGFACEHUB_API_TOKEN \
  --cap-add=sys_nice \
  --ipc=host \
  ghcr.io/huggingface/tgi-gaudi:2.0.0 \
  --model-id $LLM_MODEL \
  --max-input-tokens 1024 \
  --max-total-tokens 2048
```

Verify the LLM service:

```bash
curl $LLM_ENDPOINT/generate \
  -X POST \
  -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":32}}' \
  -H 'Content-Type: application/json'
```
## 1.4 Start Microservice

```bash
cd ../..
docker build -t opea/knowledge_graphs:latest \
  --build-arg https_proxy=$https_proxy \
  --build-arg http_proxy=$http_proxy \
  -f comps/knowledgegraphs/langchain/docker/Dockerfile .

docker run --rm \
  --name="knowledge-graph-server" \
  -p 8060:8060 \
  --ipc=host \
  -e http_proxy=$http_proxy \
  -e https_proxy=$https_proxy \
  -e NEO4J_ENDPOINT=$NEO4J_ENDPOINT \
  -e NEO4J_USERNAME=$NEO4J_USERNAME \
  -e NEO4J_PASSWORD=$NEO4J_PASSWORD \
  -e HUGGINGFACEHUB_API_TOKEN=$HUGGINGFACEHUB_API_TOKEN \
  -e LLM_ENDPOINT=$LLM_ENDPOINT \
  opea/knowledge_graphs:latest
```
# 🚀2. Consume Knowledge Graph Service

## 2.1 Cypher Mode

```bash
curl http://${your_ip}:8060/v1/graphs \
  -X POST \
  -d "{\"text\":\"MATCH (t:Task {status:'open'}) RETURN count(*)\",\"strtype\":\"cypher\"}" \
  -H 'Content-Type: application/json'
```

Example output:

![Cypher Output](doc/output_cypher.png)
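The same request can be issued from Python with only the standard library. A minimal client sketch (the `build_payload`/`query_graph` helpers are hypothetical; only the port 8060 and the `/v1/graphs` route come from this README):

```python
import json
import urllib.request

# Hypothetical helper: build the JSON body expected by the /v1/graphs route.
def build_payload(text: str, strtype: str = "query", **extra) -> dict:
    return {"text": text, "strtype": strtype, **extra}

# Hypothetical helper: POST the payload to the knowledge graph microservice.
def query_graph(host: str, text: str, strtype: str = "query", **extra) -> dict:
    req = urllib.request.Request(
        f"http://{host}:8060/v1/graphs",
        data=json.dumps(build_payload(text, strtype, **extra)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```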
## 2.2 RAG Mode

```bash
curl http://${your_ip}:8060/v1/graphs \
  -X POST \
  -d "{\"text\":\"How many open tickets are there?\",\"strtype\":\"rag\", \"max_new_tokens\":128}" \
  -H 'Content-Type: application/json'
```

Example output:

![RAG Output](doc/output_rag.png)
## 2.3 Query Mode

First example:

```bash
curl http://${your_ip}:8060/v1/graphs \
  -X POST \
  -d "{\"text\":\"Which tasks have optimization in their description?\",\"strtype\":\"query\"}" \
  -H 'Content-Type: application/json'
```

Example output:

![Query Output](doc/output_query1.png)

Second example:

```bash
curl http://${your_ip}:8060/v1/graphs \
  -X POST \
  -d "{\"text\":\"Which team is assigned to maintain PaymentService?\",\"strtype\":\"query\"}" \
  -H 'Content-Type: application/json'
```

Example output:

![Query Output](doc/output_query2.png)

comps/knowledgegraphs/__init__.py

Lines changed: 2 additions & 0 deletions

```python
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
```
Lines changed: 7 additions & 0 deletions

```bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

docker build -t opea/knowledge_graphs:latest \
  --build-arg https_proxy=$https_proxy \
  --build-arg http_proxy=$http_proxy \
  -f comps/knowledgegraphs/langchain/docker/Dockerfile .
```
