fix(website): correct grammar in chat.mdx harness section #1802
Merged
samchon merged 2 commits into website/harness on Mar 20, 2026
Conversation
Copilot (AI) changed the title from "[WIP] Address feedback on harness wording for LLM module in the website feature" to "fix(website): correct grammar in chat.mdx harness section" on Mar 20, 2026
samchon approved these changes on Mar 20, 2026
Contributor
Pull request overview
Fixes minor grammar in the LLM function-calling harness documentation section to improve readability and clarity in the website docs.
Changes:
- Replace “The answer is not” with “The answer is no”.
- Replace “if correct it” with “if you correct it”.
Comment on lines +230 to 232
> The answer is no, and LLM (Large Language Model) vendors like OpenAI take a lot of type level mistakes when composing the arguments of the target function to call. Even though an LLM function calling schema has defined an `Array<string>` type, LLM often fills it just by a `string` typed value. This is where the **function calling harness** comes in — a deterministic correction loop of schema generation, lenient parsing, type coercion, and validation feedback that turns unreliable LLM output into 100% correct structured data.
>
> Therefore, when developing an LLM function calling agent, the validation feedback process is essentially required. If LLM takes a type level mistake on arguments composition, the agent must feedback the most detailed validation errors, and let the LLM to retry the function calling referencing the validation errors.
This sentence still has a couple of grammatical issues: “vendors … take a lot of type level mistakes” reads incorrectly (typically “make many type-level mistakes”), and “type-level” should be hyphenated.
Suggested change
- The answer is no, and LLM (Large Language Model) vendors like OpenAI take a lot of type level mistakes when composing the arguments of the target function to call. Even though an LLM function calling schema has defined an `Array<string>` type, LLM often fills it just by a `string` typed value. This is where the **function calling harness** comes in — a deterministic correction loop of schema generation, lenient parsing, type coercion, and validation feedback that turns unreliable LLM output into 100% correct structured data.
- Therefore, when developing an LLM function calling agent, the validation feedback process is essentially required. If LLM takes a type level mistake on arguments composition, the agent must feedback the most detailed validation errors, and let the LLM to retry the function calling referencing the validation errors.
+ The answer is no, and LLM (Large Language Model) vendors like OpenAI make many type-level mistakes when composing the arguments of the target function to call. Even though an LLM function calling schema has defined an `Array<string>` type, LLM often fills it just by a `string` typed value. This is where the **function calling harness** comes in — a deterministic correction loop of schema generation, lenient parsing, type coercion, and validation feedback that turns unreliable LLM output into 100% correct structured data.
+ Therefore, when developing an LLM function calling agent, the validation feedback process is essentially required. If LLM makes a type-level mistake on arguments composition, the agent must feedback the most detailed validation errors, and let the LLM to retry the function calling referencing the validation errors.
> About the validation feedback, `@agentica/core` is utilizing [`typia.validate<T>()`](https://typia.io/docs/validators/validate) and [`typia.llm.application<Class>()`](https://typia.io/docs/llm/application/#application) functions. They construct validation logic by analyzing TypeScript source codes and types in the compilation level, so that detailed and accurate than any other validators like below.
- Such validation feedback strategy and combination with `typia` runtime validator, `@agentica/core` has achieved the most ideal LLM function calling through the **function calling harness** pattern. In my experience, when using OpenAI's `gpt-4o-mini` model, it tends to construct invalid function calling arguments at the first trial about 50% of the time. By the way, if correct it through validation feedback with `typia`, success rate soars to 99%. And I've never had a failure when trying validation feedback twice.
+ Such validation feedback strategy and combination with `typia` runtime validator, `@agentica/core` has achieved the most ideal LLM function calling through the **function calling harness** pattern. In my experience, when using OpenAI's `gpt-4o-mini` model, it tends to construct invalid function calling arguments at the first trial about 50% of the time. By the way, if you correct it through validation feedback with `typia`, success rate soars to 99%. And I've never had a failure when trying validation feedback twice.
Minor grammar: “success rate soars to 99%” is missing an article (e.g., “the success rate soars …”).
Suggested change
- Such validation feedback strategy and combination with `typia` runtime validator, `@agentica/core` has achieved the most ideal LLM function calling through the **function calling harness** pattern. In my experience, when using OpenAI's `gpt-4o-mini` model, it tends to construct invalid function calling arguments at the first trial about 50% of the time. By the way, if you correct it through validation feedback with `typia`, success rate soars to 99%. And I've never had a failure when trying validation feedback twice.
+ Such validation feedback strategy and combination with `typia` runtime validator, `@agentica/core` has achieved the most ideal LLM function calling through the **function calling harness** pattern. In my experience, when using OpenAI's `gpt-4o-mini` model, it tends to construct invalid function calling arguments at the first trial about 50% of the time. By the way, if you correct it through validation feedback with `typia`, the success rate soars to 99%. And I've never had a failure when trying validation feedback twice.
samchon added a commit that referenced this pull request on Mar 20, 2026
* feat(website): the harness wording on llm module.
* fix remove
* do not abuse harness wording much a lot
* fix(website): correct grammar in chat.mdx harness section (#1802)
  * Initial plan
  * fix(website): correct grammar in chat.mdx harness section
  Co-authored-by: samchon <13158709+samchon@users.noreply.github.com>
  Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
* fix: add missing `tags` import in function-calling-harness.md example (#1801)
  * Initial plan
  * fix: add tags to import in function-calling-harness.md example
  Co-authored-by: samchon <13158709+samchon@users.noreply.github.com>
  Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>

Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Two grammatical errors in the LLM function calling harness description paragraph:
- "The answer is not" → "The answer is no"
- "if correct it" → "if you correct it"