
Commit c99421d

samchon and Copilot authored
feat(webiste): utiliziation cases of LLM frameworks. (#1797)
* feat(webiste): utiliziation cases of LLM frameworks.
* Potential fix for pull request finding (repeated 11 times)
* fix BOM
* fix BOM and Emoji bugs
* Potential fix for pull request finding (repeated 4 times)

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
1 parent 7bceb47 commit c99421d

File tree

13 files changed: +806 additions, -763 deletions

Lines changed: 19 additions & 0 deletions (new file: the `ClientRedirect` component)

```diff
@@ -0,0 +1,19 @@
+"use client";
+
+import { useEffect } from "react";
+
+export interface ClientRedirectProps {
+  href: string;
+}
+
+export function ClientRedirect(props: ClientRedirectProps) {
+  useEffect(() => {
+    window.location.replace(props.href);
+  }, [props.href]);
+
+  return (
+    <p>
+      Moving to <a href={props.href}>{props.href}</a>.
+    </p>
+  );
+}
```
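The new component redirects with `window.location.replace()` rather than assigning `location.href`. A minimal sketch of the difference, using a plain array as a stand-in for the browser history stack (an illustration only, not the DOM API):

```typescript
// Illustrative mock of browser history behavior (NOT the real DOM API).
// location.replace(url) swaps the current history entry, so pressing
// "Back" skips the redirect page; assigning location.href pushes a new
// entry, leaving the redirect page in history.

type HistoryStack = string[];

// like `location.href = url`: push a new entry
function navigateAssign(history: HistoryStack, url: string): HistoryStack {
  return [...history, url];
}

// like `location.replace(url)`: replace the current entry
function navigateReplace(history: HistoryStack, url: string): HistoryStack {
  return [...history.slice(0, -1), url];
}

const stack: HistoryStack = ["/docs/llm/langchain"];
console.log(navigateAssign(stack, "/docs/utilization/langchain/"));
// ["/docs/llm/langchain", "/docs/utilization/langchain/"]
console.log(navigateReplace(stack, "/docs/utilization/langchain/"));
// ["/docs/utilization/langchain/"]
```

Using `replace()` here is the sensible choice for a redirect stub: the old URL never lingers in history, so the back button behaves as users expect.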

website/src/content/docs/_meta.ts

Lines changed: 1 addition & 0 deletions

```diff
@@ -34,3 +34,4 @@ export default {
     href: "https://dev.to/samchon/series/22474",
   },
 } satisfies MetaRecord;
+
```

website/src/content/docs/llm/chat.mdx

Lines changed: 1 addition & 1 deletion

```diff
@@ -79,7 +79,7 @@ await agent.conversate("Hello, I want to create an article.");
 
 You also can make the super A.I. chatbot by Swagger document too.
 
-With `@agentica`, you can build Agentic AI chatbot only with Swagger document built by [`@nestia/sdk`](/docs/swagger). Complex agent workflows and graphs required in conventional AI agent development are not necessary in `@agentica`. Only with the Swagger document, `@agentica` will do everything with the function calling.
+With `@agentica`, you can build Agentic AI chatbot only with Swagger document built by [`@nestia/sdk`](https://nestia.io/docs/swagger). Complex agent workflows and graphs required in conventional AI agent development are not necessary in `@agentica`. Only with the Swagger document, `@agentica` will do everything with the function calling.
 
 Look at below demonstration, and feel how `@agentica` is powerful. Now, you can let users to search and purchase products only with conversation texts. The backend API functions would be adequately called in the AI chatbot with LLM function calling.
 
```

website/src/content/docs/llm/http.mdx

Lines changed: 1 addition & 1 deletion

````diff
@@ -82,7 +82,7 @@ flowchart
   end
 ```
 
-`HttpLlm` first upgrades any OpenAPI version to an emended OpenAPI v3.2 format, then converts each operation into an `IHttpLlmFunction` with parameter schemas, descriptions, and HTTP metadata. The resulting `IHttpLlmController` can be passed to [MCP](./mcp), [Vercel AI SDK](./vercel), or [Agentica](./chat).
+`HttpLlm` first upgrades any OpenAPI version to an emended OpenAPI v3.2 format, then converts each operation into an `IHttpLlmFunction` with parameter schemas, descriptions, and HTTP metadata. The resulting `IHttpLlmController` can be passed to [MCP](/docs/utilization/mcp/), [Vercel AI SDK](/docs/utilization/vercel/), or [Agentica](./chat).
 
 </Callout>
 
 ## `HttpLlm.controller()`
````
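The changed paragraph describes a conversion pipeline: each OpenAPI operation becomes a function-like record with a name, description, and parameter schemas. Below is a toy sketch of that idea under assumed, simplified shapes (`IToyDocument`, `IToyFunction` are hypothetical names for illustration; the real `HttpLlm` in `@typia/utils` additionally upgrades document versions, emends schemas, and attaches full HTTP metadata):

```typescript
// Toy conversion sketch (assumption: simplified shapes, NOT the real
// @typia/utils implementation). Each OpenAPI operation becomes one
// function-like record an LLM could call.

interface IToyOperation {
  summary?: string;
  parameters?: Array<{ name: string; schema: { type: string } }>;
}

interface IToyDocument {
  // path -> HTTP method -> operation
  paths: Record<string, Record<string, IToyOperation>>;
}

interface IToyFunction {
  name: string; // derived from method + path, e.g. "get_users_id"
  method: string;
  path: string;
  description?: string;
  parameters: Array<{ name: string; schema: { type: string } }>;
}

function toToyFunctions(doc: IToyDocument): IToyFunction[] {
  const out: IToyFunction[] = [];
  for (const [path, methods] of Object.entries(doc.paths))
    for (const [method, op] of Object.entries(methods))
      out.push({
        // sanitize the path into an identifier-like name
        name: `${method}_${path
          .replace(/[/{}]+/g, "_")
          .replace(/^_|_$/g, "")}`,
        method,
        path,
        description: op.summary,
        parameters: op.parameters ?? [],
      });
  return out;
}

const doc: IToyDocument = {
  paths: {
    "/users/{id}": {
      get: {
        summary: "Get a user",
        parameters: [{ name: "id", schema: { type: "string" } }],
      },
    },
  },
};
console.log(toToyFunctions(doc));
// [{ name: "get_users_id", method: "get", path: "/users/{id}", ... }]
```

The real converter produces richer `IHttpLlmFunction` records, but the mapping idea (operations in, callable function descriptors out) is the same.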
Lines changed: 2 additions & 257 deletions

````diff
@@ -1,261 +1,6 @@
 ---
 title: Guide Documents > Large Language Model > LangChain
 ---
-import { Callout, Tabs } from "nextra/components";
+import { ClientRedirect } from "../../../components/internal/ClientRedirect";
 
-import LocalSource from "../../../components/LocalSource";
-
-## `toLangChainTools()` function
-
-<Tabs items={[
-  <code>@typia/langchain</code>,
-  <code>ILlmController</code>,
-  <code>IHttpLlmController</code>,
-  <code>HttpLlm.controller</code>,
-]}>
-  <Tabs.Tab>
-```typescript filename="@typia/langchain" showLineNumbers
-export function toLangChainTools(props: {
-  controllers: Array<ILlmController | IHttpLlmController>;
-  prefix?: boolean | undefined;
-}): DynamicStructuredTool[];
-```
-  </Tabs.Tab>
-  <Tabs.Tab>
-    <LocalSource
-      path="packages/interface/src/schema/ILlmController.ts"
-      filename="@typia/interface"
-      showLineNumbers />
-  </Tabs.Tab>
-  <Tabs.Tab>
-    <LocalSource
-      path="packages/interface/src/http/IHttpLlmController.ts"
-      filename="@typia/interface"
-      showLineNumbers />
-  </Tabs.Tab>
-  <Tabs.Tab>
-    <LocalSource
-      path="packages/utils/src/http/HttpLlm.ts"
-      filename="@typia/utils"
-      showLineNumbers />
-  </Tabs.Tab>
-</Tabs>
-
-[LangChain.js](https://github.com/langchain-ai/langchainjs) integration for [`typia`](https://github.com/samchon/typia).
-
-`toLangChainTools()` converts TypeScript classes or OpenAPI documents into LangChain `DynamicStructuredTool[]` at once.
-
-Every class method becomes a tool, JSDoc comments become tool descriptions, and TypeScript types become JSON schemas — all at compile time. For OpenAPI documents, every API endpoint is converted to a `DynamicStructuredTool` with schemas from the specification.
-
-Validation feedback is embedded automatically.
-
-## Setup
-
-```bash filename="Terminal"
-npm install @typia/langchain @langchain/core
-npm install typia
-npx typia setup
-```
-
-## From TypeScript Class
-
-<Tabs items={[
-  "LangChain Agent",
-  <code>Calculator</code>,
-  <code>BbsArticleService</code>,
-  <code>IBbsArticle</code>,
-]}>
-  <Tabs.Tab>
-```typescript filename="src/main.ts" showLineNumbers {1-6, 10-15, 17-27}
-import { ChainValues, Runnable } from "@langchain/core";
-import { ChatPromptTemplate } from "@langchain/core/prompts";
-import { DynamicStructuredTool } from "@langchain/core/tools";
-import { ChatOpenAI } from "@langchain/openai";
-import { toLangChainTools } from "@typia/langchain";
-import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
-import typia from "typia";
-
-import { Calculator } from "./Calculator";
-
-const tools: DynamicStructuredTool[] = toLangChainTools({
-  controllers: [
-    typia.llm.controller<Calculator>("calculator", new Calculator()),
-  ],
-});
-
-const agent: Runnable = createToolCallingAgent({
-  llm: new ChatOpenAI({ model: "gpt-4o" }),
-  tools,
-  prompt: ChatPromptTemplate.fromMessages([
-    ["system", "You are a helpful assistant."],
-    ["human", "{input}"],
-    ["placeholder", "{agent_scratchpad}"],
-  ]),
-});
-const executor: AgentExecutor = new AgentExecutor({ agent, tools });
-const result: ChainValues = await executor.invoke({
-  input: "What is 10 + 5?",
-});
-```
-  </Tabs.Tab>
-  <Tabs.Tab>
-    <LocalSource
-      path="tests/test-langchain/src/structures/Calculator.ts"
-      filename="Calculator.ts"
-      showLineNumbers />
-  </Tabs.Tab>
-  <Tabs.Tab>
-    <LocalSource
-      path="examples/src/llm/BbsArticleService.ts"
-      filename="BbsArticleService.ts"
-      showLineNumbers />
-  </Tabs.Tab>
-  <Tabs.Tab>
-    <LocalSource
-      path="examples/src/llm/IBbsArticle.ts"
-      filename="IBbsArticle.ts"
-      showLineNumbers />
-  </Tabs.Tab>
-</Tabs>
-
-Create controllers from TypeScript classes with `typia.llm.controller<Class>()`, and pass them to `toLangChainTools()`.
-
-- `controllers`: Array of controllers created via `typia.llm.controller<Class>()` or `HttpLlm.controller()`
-- `prefix`: When `true` (default), tool names are formatted as `{controllerName}_{methodName}`. Set to `false` to use bare method names
-
-<Callout type="warning">
-**Type Restrictions**
-
-Every method's parameter type must be a keyworded object type with static keys — not a primitive, array, or union. The return type must also be an object type or `void`. Primitive return types like `number` or `string` are not allowed; wrap them in an object (e.g., `{ value: number }`). See [`typia.llm.application()` Restrictions](../application#restrictions) for details.
-</Callout>
-
-## From OpenAPI Document
-
-```typescript filename="src/main.ts" showLineNumbers {1-3, 5-18}
-import { DynamicStructuredTool } from "@langchain/core/tools";
-import { toLangChainTools } from "@typia/langchain";
-import { HttpLlm } from "@typia/utils";
-
-const tools: DynamicStructuredTool[] = toLangChainTools({
-  controllers: [
-    HttpLlm.controller({
-      name: "shopping",
-      document: await fetch(
-        "https://shopping-be.wrtn.ai/editor/swagger.json",
-      ).then((r) => r.json()),
-      connection: {
-        host: "https://shopping-be.wrtn.ai",
-        headers: { Authorization: "Bearer ********" },
-      },
-    }),
-  ],
-});
-```
-
-Create controllers from OpenAPI documents with `HttpLlm.controller()`, and pass them to `toLangChainTools()`.
-
-- `name`: Controller name used as prefix for tool names
-- `document`: Swagger/OpenAPI document (v2.0, v3.0, or v3.1)
-- `connection`: HTTP connection info including `host` and optional `headers`
-
-## Validation Feedback
-
-`toLangChainTools()` embeds [`typia.validate<T>()`](/docs/validators/validate) in every tool for automatic argument validation. When validation fails, the error is returned as text content with inline `// ❌` comments at each invalid property:
-
-```json
-{
-  "name": "John",
-  "age": "twenty", // ❌ [{"path":"$input.age","expected":"number"}]
-  "email": "not-an-email", // ❌ [{"path":"$input.email","expected":"string & Format<\"email\">"}]
-  "hobbies": "reading" // ❌ [{"path":"$input.hobbies","expected":"Array<string>"}]
-}
-```
-
-The LLM reads this feedback and self-corrects on the next turn.
-
-<Callout type="warning">
-**Bypassing LangChain's Built-in Validation**
-
-LangChain internally uses `@cfworker/json-schema` to validate tool arguments, which throws `ToolInputParsingException` before custom validation can run. `@typia/langchain` solves this by using a passthrough Zod schema (`z.record(z.unknown())`), allowing `typia`'s much more detailed and accurate validator to handle all argument validation instead.
-</Callout>
-
-In the [AutoBe](https://github.com/wrtnlabs/autobe) project (AI-powered backend code generator), `qwen3-coder-next` showed only 6.75% raw function calling success rate on compiler AST types. However, with validation feedback, it reached 100%.
-
-Working on compiler AST means working on any type and any use case.
-
-- [AutoBeDatabase](https://github.com/wrtnlabs/autobe/blob/main/packages/interface/src/database/AutoBeDatabase.ts)
-- [AutoBeOpenApi](https://github.com/wrtnlabs/autobe/blob/main/packages/interface/src/openapi/AutoBeOpenApi.ts)
-- [AutoBeTest](https://github.com/wrtnlabs/autobe/blob/main/packages/interface/src/test/AutoBeTest.ts)
-
-```typescript filename="AutoBeTest.IExpression" showLineNumbers
-// Compiler AST may be the hardest type structure possible
-//
-// Unlimited union types + unlimited depth + recursive references
-export type IExpression =
-  | IBooleanLiteral
-  | INumericLiteral
-  | IStringLiteral
-  | IArrayLiteralExpression // <- recursive (contains IExpression[])
-  | IObjectLiteralExpression // <- recursive (contains IExpression)
-  | INullLiteral
-  | IUndefinedKeyword
-  | IIdentifier
-  | IPropertyAccessExpression // <- recursive
-  | IElementAccessExpression // <- recursive
-  | ITypeOfExpression // <- recursive
-  | IPrefixUnaryExpression // <- recursive
-  | IPostfixUnaryExpression // <- recursive
-  | IBinaryExpression // <- recursive (left & right)
-  | IArrowFunction // <- recursive (body is IExpression)
-  | ICallExpression // <- recursive (args are IExpression[])
-  | INewExpression // <- recursive
-  | IConditionalPredicate // <- recursive (then & else branches)
-  | ... // 30+ expression types total
-```
-
-## Structured Output
-
-Use `typia.llm.parameters<T>()` with LangChain's `withStructuredOutput()`:
-
-```typescript filename="src/main.ts" copy showLineNumbers {1-3, 5-11, 13-14, 16-22, 25-28}
-import { ChatOpenAI } from "@langchain/openai";
-import { dedent, LlmJson } from "@typia/utils";
-import typia, { tags } from "typia";
-
-interface IMember {
-  email: string & tags.Format<"email">;
-  name: string;
-  age: number & tags.Minimum<0> & tags.Maximum<100>;
-  hobbies: string[];
-  joined_at: string & tags.Format<"date">;
-}
-
-const model = new ChatOpenAI({ model: "gpt-4o" })
-  .withStructuredOutput(typia.llm.parameters<IMember>());
-
-const member: IMember = await model.invoke(dedent`
-  I am a new member of the community.
-
-  My name is John Doe, and I am 25 years old.
-  I like playing basketball and reading books,
-  and joined to this community at 2022-01-01.
-`);
-
-// Validate the result
-const result = typia.validate<IMember>(member);
-if (!result.success) {
-  console.error(LlmJson.stringify(result));
-}
-```
-
-> ```bash filename="Terminal"
-> {
->   email: 'john.doe@example.com',
->   name: 'John Doe',
->   age: 25,
->   hobbies: [ 'playing basketball', 'reading books' ],
->   joined_at: '2022-01-01'
-> }
-> ```
-
-The `IMember` interface is the single source of truth. `typia.llm.parameters<IMember>()` generates the JSON schema, and `typia.validate<IMember>()` validates the output — all from the same type. If validation fails, feed the error back to the LLM for correction.
+<ClientRedirect href="/docs/utilization/langchain/" />
````
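The removed LangChain page describes a validation-feedback format where each invalid property gets an inline `// ❌` comment listing the expected type at its path. A self-contained sketch of that annotation idea (my own simplified rendering for flat objects, not typia's actual implementation):

```typescript
// Simplified sketch of the "// ❌" feedback format described in the
// removed docs (assumption: flat objects only; NOT typia's validator).

interface IValidationError {
  path: string; // e.g. "$input.age"
  expected: string; // e.g. "number"
}

function annotate(
  input: Record<string, unknown>,
  errors: IValidationError[],
): string {
  const keys = Object.keys(input);
  const lines: string[] = ["{"];
  keys.forEach((key, i) => {
    const comma = i < keys.length - 1 ? "," : "";
    let line = `  "${key}": ${JSON.stringify(input[key])}${comma}`;
    // attach error details inline, right where the LLM made the mistake
    const hits = errors.filter((e) => e.path === `$input.${key}`);
    if (hits.length > 0) line += ` // ❌ ${JSON.stringify(hits)}`;
    lines.push(line);
  });
  lines.push("}");
  return lines.join("\n");
}

console.log(
  annotate({ name: "John", age: "twenty" }, [
    { path: "$input.age", expected: "number" },
  ]),
);
```

Placing the error next to the offending value, rather than in a separate list, is what lets the model self-correct on the next turn: the feedback reads like a code review comment on its own output.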

0 commit comments
