Gentoro FAQs
1. When and why do I need Gentoro?
Interacting with enterprise systems through natural language requires complex LLM function-calling, deep API knowledge, careful function design and implementation, and expertise in the underlying systems. Gentoro simplifies this by automatically generating and managing these functions, saving your developers time.
Given the unpredictable and sensitive nature of GenAI, a minor change to a function intended to improve accuracy for one prompt can cause other prompts to hallucinate. Achieving the right balance requires a great deal of patient trial and error. Gentoro addresses this by evaluating the entire function set holistically and, if needed, rebuilding it to maintain overall accuracy and reliability.
2. What Large Language Models (LLMs) does Gentoro work with?
Gentoro is LLM-agnostic, meaning it works with any Large Language Model. This flexibility allows you to switch between different LLM providers without being locked into one, ensuring your integrations stay adaptable as AI technologies evolve. Additionally, since each LLM has unique behaviors, Gentoro can automatically rebuild the function set to ensure smooth compatibility with any new model you adopt.
3. What enterprise resources does Gentoro connect to?
Gentoro connects to any API that publishes an OpenAPI specification; other protocols such as GraphQL and Protobuf are added as needed. For authentication, Gentoro presently supports API keys, with OAuth support coming soon.
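For illustration, here is a minimal Python sketch of how tool definitions could be derived from an OpenAPI spec. It is not Gentoro’s implementation, and the "petstore.yaml" file name is a placeholder.

```python
# Minimal sketch (not Gentoro's implementation): turning OpenAPI operations
# into LLM function-calling tool definitions. Assumes a spec file named
# "petstore.yaml" with standard "paths" / "operationId" fields.
import yaml  # pip install pyyaml

def tools_from_openapi(spec_path: str) -> list[dict]:
    """Map each OpenAPI operation to a function-calling tool definition."""
    with open(spec_path) as f:
        spec = yaml.safe_load(f)

    tools = []
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            if method not in ("get", "post", "put", "patch", "delete"):
                continue
            # Each documented parameter becomes a JSON Schema property.
            properties = {
                p["name"]: {"type": p.get("schema", {}).get("type", "string"),
                            "description": p.get("description", "")}
                for p in op.get("parameters", [])
            }
            tools.append({
                "name": op.get("operationId", f"{method}_{path}"),
                "description": op.get("summary", ""),
                "parameters": {"type": "object", "properties": properties},
            })
    return tools

if __name__ == "__main__":
    for tool in tools_from_openapi("petstore.yaml"):
        print(tool["name"], "-", tool["description"])
```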
4. How much code will I have to write to get Gentoro to work?
Gentoro generates and tests the code needed to connect to services automatically. If the task exceeds the LLM’s capabilities, Gentoro provides a workbench where users can offer hints to guide the LLM in refining the code. For example, a user might say, “The name is incorrectly being blanked,” which is often enough to correct the behavior without reviewing the code. In rare cases, a developer may need to review the code or provide more precise hints.
5. How is Gentoro deployed?
Gentoro can run in private data centers or in public clouds behind your firewall so data can remain within your security perimeter. Gentoro does not extract and store any data in its own environment and never takes possession of your data.
6. How does Gentoro handle data security and privacy in AI applications?
For enterprise use, data confidentiality is crucial, and sensitive data often cannot be shared with public LLMs or even within the organization. Gentoro ensures compliance with company policies through a rule-based privacy and security layer. Sensitive data, like Social Security numbers or HIPAA-related information, is automatically detected, tokenized, and securely handled. Data can also be encrypted and decrypted as needed, offering flexible and secure processing.
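As a rough illustration of rule-based tokenization, the Python sketch below detects U.S. Social Security numbers with a regex, swaps them for opaque tokens before text reaches an LLM, and restores them afterwards. The rule and token format are assumptions for the example, not Gentoro’s actual mechanism.

```python
# Minimal sketch of rule-based tokenization; the SSN regex and token format
# are illustrative assumptions, not Gentoro's actual detection rules.
import re
import uuid

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def tokenize_sensitive(text: str) -> tuple[str, dict[str, str]]:
    """Replace SSNs with opaque tokens before the text reaches an LLM."""
    vault: dict[str, str] = {}

    def _replace(match: re.Match) -> str:
        token = f"<SSN:{uuid.uuid4().hex[:8]}>"
        vault[token] = match.group(0)  # original value stays inside your perimeter
        return token

    return SSN_PATTERN.sub(_replace, text), vault

def detokenize(text: str, vault: dict[str, str]) -> str:
    """Restore original values in the LLM's response."""
    for token, value in vault.items():
        text = text.replace(token, value)
    return text

redacted, vault = tokenize_sensitive("Customer SSN is 123-45-6789.")
print(redacted)                      # Customer SSN is <SSN:...>.
print(detokenize(redacted, vault))   # Customer SSN is 123-45-6789.
```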
7. Are Lambdas supported?
Yes, Gentoro supports LLM vendors that use Lambdas, such as Amazon Bedrock. If the Lambda option is chosen, Gentoro generates and deploys the Lambdas directly into the platform. Once deployed, prompts can be sent directly to the platform for execution without needing to go through Gentoro.
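The sketch below shows the general shape of a tool implemented as an AWS Lambda handler in Python. The event fields ("tool", "parameters") and the "get_order_status" tool are illustrative assumptions, not the exact payload Bedrock or Gentoro uses.

```python
# Minimal sketch of a tool implemented as an AWS Lambda handler, of the kind
# that could be generated and deployed for a Bedrock-hosted agent. The event
# shape below is an illustrative assumption.
import json

def lambda_handler(event, context):
    tool = event.get("tool")
    params = event.get("parameters", {})

    if tool == "get_order_status":  # hypothetical tool name
        order_id = params.get("order_id")
        # A real deployment would call the order system's API here.
        result = {"order_id": order_id, "status": "shipped"}
    else:
        return {"statusCode": 400,
                "body": json.dumps({"error": f"unknown tool {tool!r}"})}

    return {"statusCode": 200, "body": json.dumps(result)}
```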
8. What level of technical expertise is required to implement Gentoro’s products?
The technical expertise required depends on the complexity of the use case. For simple tasks, no coding is needed: Gentoro provides built-in tools that work right out of the box, or you can adapt them by describing the missing features, all without writing code.
For more complex cases involving multiple APIs, coding is often still unnecessary; a clear description of the issue usually resolves it. If needed, Gentoro provides developer tools for more advanced situations, but its primary goal is to empower non-coders while offering full support for developers when required.
9. How does Gentoro differ from a Retrieval-Augmented Generation (RAG) system?
Retrieval-augmented generation (RAG) is a technique that enhances prompts by supplementing them with facts from a knowledge base. Its purpose is to address the limitation of large language models (LLMs) that rely solely on their training data and may lack up-to-date or domain-specific information.
Gentoro, on the other hand, focuses on a different limitation of LLMs: their inability to control external systems. Gentoro utilizes a specialized LLM capability called function-calling, which allows it to execute API calls on behalf of the LLM, enabling interaction with external systems and services.
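To make the distinction concrete, here is a minimal function-calling sketch using the OpenAI Python SDK. The "get_invoice_status" tool is a hypothetical example of the kind of function a system like Gentoro would generate and execute on the model’s behalf, not Gentoro’s actual API.

```python
# Minimal function-calling sketch using the OpenAI Python SDK; the tool
# definition is a hypothetical example, not Gentoro's actual API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_invoice_status",
        "description": "Look up the status of an invoice in the billing system.",
        "parameters": {
            "type": "object",
            "properties": {"invoice_id": {"type": "string"}},
            "required": ["invoice_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Has invoice INV-1042 been paid?"}],
    tools=tools,
)

# Instead of answering directly, the model asks for the function to be called;
# a system like Gentoro executes the call and feeds the result back.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```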
1. What is MCP, and how does it work with Gentoro?
MCP, or Model Context Protocol, is an open standard that enables LLM-powered applications such as Claude Desktop to interact dynamically with external tools and enterprise systems.
2. How does Gentoro work with MCP?
Gentoro leverages MCP to provide seamless connectivity, allowing AI agents to invoke tools developed and hosted by Gentoro, process data, and respond intelligently to complex tasks.
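For context, this is what a minimal MCP server exposing a single tool looks like with the official MCP Python SDK. The "lookup_customer" tool is a hypothetical stand-in, not part of Gentoro’s actual tool catalog.

```python
# Minimal MCP server sketch using the official MCP Python SDK (pip install mcp).
# The "lookup_customer" tool is a hypothetical example.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def lookup_customer(customer_id: str) -> str:
    """Return basic details for a customer record."""
    # A real tool would query a CRM or database here.
    return f"Customer {customer_id}: active, premium plan"

if __name__ == "__main__":
    # An MCP client such as Claude Desktop connects over stdio and can then
    # discover and invoke the tool above.
    mcp.run()
```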
3. What tools can I build and host with Gentoro?
Gentoro supports the creation of agentic tools for a wide range of use cases. These tools can perform functions like querying databases, automating workflows, and accessing proprietary enterprise systems.
4. How are the tools built with Gentoro?
Gentoro comes with pre-built tools for common use cases. When a new tool needs to be built, Gentoro supports an AI-assisted method, similar to products like GitHub Copilot and Cursor.
5. Can I use MCP with platforms other than Claude Desktop?
Yes, MCP is compatible with a variety of AI clients that support the protocol. While Claude Desktop was the first example, Gentoro’s MCP support is designed to work with other MCP clients, ensuring flexibility for diverse development needs. MCP is now supported by major AI-assisted development platforms such as Replit and Cursor.
6. How does Gentoro ensure data security and privacy when using MCP?
Gentoro incorporates robust security measures, including role-based access controls (e.g., using OAuth) and advanced anonymization, to protect sensitive data. AI agents interacting through MCP only access tools and data authorized for their specific role, ensuring compliance with enterprise security standards.
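A simplified sketch of role-scoped tool access follows; the role-to-tool mapping and tool names below are illustrative only, not Gentoro’s actual policy model.

```python
# Minimal sketch of role-based access control in front of tool invocation,
# assuming the caller's role arrives via an OAuth scope or claim.
ROLE_TOOLS = {
    "support_agent": {"lookup_customer", "create_ticket"},
    "finance":       {"get_invoice_status"},
}

def authorize(role: str, tool_name: str) -> bool:
    """Allow a tool call only if the caller's role is granted that tool."""
    return tool_name in ROLE_TOOLS.get(role, set())

def invoke_tool(role: str, tool_name: str, **params):
    if not authorize(role, tool_name):
        raise PermissionError(f"role {role!r} may not call {tool_name!r}")
    print(f"calling {tool_name} with {params}")  # dispatch to the real tool here

invoke_tool("support_agent", "lookup_customer", customer_id="C-42")    # allowed
# invoke_tool("support_agent", "get_invoice_status", invoice_id="I-7") # denied
```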
7. How does Gentoro’s MCP server improve enterprise workflows?
By bridging AI agents with enterprise tools and systems such as Slack, Salesforce, and internal databases, MCP enhances workflows through intelligent automation and dynamic tool invocation. This reduces manual tasks and enables faster, more accurate decision-making.
Customized Plans for Real Enterprise Needs
Gentoro makes it easier to operationalize AI across your enterprise. Get in touch to explore deployment options, scale requirements, and the right pricing model for your team.