What if you could identify all your duplicate APIs without manual intervention?
In the past, we’ve shown how to use a script and an AI model to comb through a list of APIs in our enterprise marketplace, Amplify Engage.
Since then, we’ve spent time thinking about more ways to apply AI to Amplify. Here’s a glimpse into the value we see in AI-driven additions, and where we’ve landed.
AI supports both sides of the Amplify journey
Leveraging artificial intelligence with the Amplify Platform is driven by two overarching goals:
- accelerating our customers’ journey by enabling AI capabilities within the platform, and
- accelerating their customers’ journey by providing capabilities that facilitate the delivery of AI-based services.
For Axway customers, AI can be a powerful tool for streamlining the usage of Amplify software.
It’s about carrying out routine activities so teams get up and running faster: automatically generating API product documentation, creating integration templates, and providing natural-language search capabilities.
Amplify can assist with the AI adoption journey by providing services necessary for a complete and secure implementation, such as LLM brokering, governance and access control, rule enforcement, and usage limits.
On the other side, we have developers who can better utilize the Amplify platform with the support of AI.
How can they find APIs in the API marketplace? How can they build products with these APIs?
AI can help consumers locate the assets they need and kickstart those efforts faster.
Couldn’t I just use ChatGPT?
These initiatives are tethered to a common starting point: AI models. It’s a matter of asking questions and having these models deliver meaningful output.
You may be wondering what Amplify brings into the picture versus ChatGPT.
Consider situations where private information should be gated based on who you are. Or, you may want to add data that makes the AI smarter to get better output.
See also: Retrieval-Augmented Generation (RAG) Using Amplify Fusion
Amplify can enhance the experience of using AI models by properly formatting input data, preventing prompt injection, improving response accuracy, and ensuring the right people have access to the right information.
Inserting a governance layer between your users and the AI model lets you control what users can ask of it.
That can mean filtering inappropriate requests or language, as well as making sure trivial requests don’t incur unexpected expense, as in reports that saying ‘please’ and ‘thank you’ to ChatGPT is costing OpenAI tens of millions of dollars.
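As a rough sketch, a governance check of this kind might screen each prompt before it is forwarded to (and billed by) the model. The rule names and patterns below are illustrative assumptions, not Amplify features:

```python
import re

# Hypothetical governance rules; a real deployment would manage these centrally.
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),  # prompt injection
    re.compile(r"\bpassword\b", re.IGNORECASE),                         # sensitive data requests
]

# Pleasantries that don't need a round trip to the model.
TRIVIAL_PROMPTS = {"hi", "hello", "thanks", "thank you", "please"}

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason). Reject injection attempts and trivial
    requests before they reach the AI model and incur cost."""
    text = prompt.strip()
    if text.lower() in TRIVIAL_PROMPTS:
        return False, "trivial request: answered locally, not forwarded to the model"
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            return False, f"blocked by governance rule: {pattern.pattern}"
    return True, "ok"
```

A trivial "thanks" or an injection attempt is rejected locally; everything else passes through to the model.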
There’s also another core issue at play here: data ownership and monetization.
Drawing more value from your data with Amplify + AI
The data and information you’ve provided to your AI are your intellectual property, which you may want to monetize. Amplify Engage allows you to attach payment and usage plans for the APIs which provide access to your AI, opening the opportunity for additional revenue channels.
While interactions with AI models typically happen via an API, the metric for metering these interactions does not follow the traditional API model, where users have quotas and plans based on the number of API calls. This means a plan like the following would not be appropriate:
“For $50/month I will grant you 1000 API calls.”
In the domain of AI, the transaction unit is not always the most appropriate metric. The amount of “work” the AI has to perform to answer each request is proportional to the size of the request sent by the user and the response returned.
This means that a single large request may be more expensive to execute than several small requests. For this reason, AI conversations tend to be measured in “tokens”, where a token can be a single word or chunk of words within the request sent by the user.
Example: “Tell me the salary of all employees” could be 7 tokens, while “tell me the average salary of all developers in Germany” would be 10 tokens.
The second request is more complex, requiring more work from the AI and therefore making it a more expensive request (the OpenAI tokenizer illustrates how request strings are chunked into tokens).
Amplify Fusion can be used to parse the user request into tokens.
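For illustration, the simplest possible token count splits on whitespace, which matches the example figures above. This is only an approximation: production tokenizers (such as OpenAI’s BPE-based ones) split text into subword chunks, so real counts are usually higher.

```python
def count_tokens(prompt: str) -> int:
    """Approximate token count by splitting on whitespace.
    Real tokenizers use subword (e.g. BPE) chunking instead."""
    return len(prompt.split())

count_tokens("Tell me the salary of all employees")                      # 7
count_tokens("tell me the average salary of all developers in Germany")  # 10
```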
Engage provides the ability for API providers to create customized Consumption Units which can be used instead of transactions when setting usage quotas and pricing models for API products.
This combination allows API providers to create usage plans where they can say something like, “For $50/month I will grant you 50000 tokens” for use when interacting with the AI model.
Now, Fusion can tell Engage how many tokens a user consumes every time they interact with the AI model and have those counted against the user’s Consumption Unit quota.
This is a much more relevant metric for controlling and monetizing users’ access to AI models.
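A minimal sketch of that kind of token-based metering might look like the following. The class and method names are hypothetical, not Amplify Engage APIs:

```python
class ConsumptionUnitQuota:
    """Illustrative token-based metering, modeled on the Consumption
    Unit quota described above."""

    def __init__(self, monthly_tokens: int):
        self.monthly_tokens = monthly_tokens
        self.used = 0

    def record(self, tokens: int) -> bool:
        """Count tokens against the quota; return False if the
        interaction would exceed it."""
        if self.used + tokens > self.monthly_tokens:
            return False
        self.used += tokens
        return True

    @property
    def remaining(self) -> int:
        return self.monthly_tokens - self.used

# "For $50/month I will grant you 50,000 tokens"
plan = ConsumptionUnitQuota(monthly_tokens=50_000)
plan.record(1_200)   # a typical conversation turn is accepted
plan.record(49_000)  # this one would exceed the quota, so it is rejected
```

In practice, Fusion would report the token count for each interaction and Engage would apply it against the consumer’s plan, but the accounting logic reduces to something like this.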
Three examples of Axway Amplify AI in action
A chatbot that provides salary information based on credentials
Using an API built in Amplify Fusion and hosted in Amplify Engage, salary data is sent to and populated in an LLM. A chatbot pulls from HR data within that LLM to respond to inquiries, with Fusion acting as the LLM’s interface.
With some rules in place, it’s possible to control the responses individuals get based on their credentials.
For instance, HR administrators can see data relating to the highest salaries across specific fields—but not the names of companies tied to that data. Developers, meanwhile, aren’t able to query this information.
There’s control over how data is chunked, manipulated, and populated in the data source.
A chat bubble that accelerates the build-out of integrations
An integration developer new to Amplify Fusion needs to pick a file from an SFTP site, ingest it, format it, and insert it into MongoDB. Being new to the platform, though, they aren’t sure where to start.
With an AI-powered chat bubble, the developer can use natural language to ask Amplify Fusion how to build out that integration.
The chat’s response: to do that, you’ll need an SFTP client, a data mapper, and a connection that performs an insert into MongoDB.
From there, AI builds out the workflow for you directly within the Amplify platform.
The only thing left to do is tell Amplify where the SFTP server and database are and define what the map looks like.
Smart search capabilities so developers can find ideal APIs faster
Developers want to be able to find relevant APIs fast. Building out a more intelligent search engine lends itself to more dynamic and seamless developer experiences.
When a developer opens up a smart search in Amplify, an LLM will look at all of the APIs within your marketplace. The search engine will return any API results related to the specific search query, including any associated documentation if available.
This is another example of the retrieval-augmented generation model, where data is sliced and diced in the background before the trained model returns relevant data.
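To make the retrieval step concrete, here is a toy sketch that ranks catalog entries by word overlap with the query; a real smart search would compare embedding vectors instead, and the catalog entries below are invented for illustration:

```python
def score(query: str, doc: str) -> float:
    """Word-overlap similarity; a real implementation would use embeddings."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

# Hypothetical marketplace catalog (API name -> description).
CATALOG = {
    "Payments API": "create and refund card payments and invoices",
    "Shipping API": "track parcels and schedule courier pickups",
    "Employee API": "query employee records roles and salary bands",
}

def search(query: str, top_n: int = 2) -> list[str]:
    """Return the most relevant API names; in the RAG pattern, the
    retrieved descriptions are then handed to the LLM as context."""
    ranked = sorted(CATALOG, key=lambda name: score(query, CATALOG[name]), reverse=True)
    return ranked[:top_n]

search("refund a payment")  # "Payments API" ranks first
```

The retrieval narrows the marketplace down to a few candidates, and the LLM then generates the answer from that slice of data rather than from the entire catalog.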