AI Surge: Large Language Models Unlock Innovation and Capability
In the last two years, the adoption of AI-driven Large Language Models (LLMs) has surged dramatically across industries and applications. The wave largely began with OpenAI debuting state-of-the-art models such as GPT-3, GPT-3.5, GPT-4, and ChatGPT. Other major players soon followed: Google introduced LaMDA, Bard, and Gemini; Meta (Facebook) launched LLaMA; Anthropic released Claude; Microsoft released Copilot; and Amazon rolled out Bedrock and Titan. These LLMs are pushing the boundaries of Natural Language Processing (NLP), helping industries automate customer service, enhance content creation, and analyze data more efficiently.
Additionally, most of these LLMs expose REST APIs, which makes it easier for businesses to integrate advanced AI capabilities into their applications and workflows.
The Application and Threat Intelligence (ATI) team at Keysight has analyzed the network traffic patterns of the REST API calls of various popular LLMs and recently launched a generic application named “AI LLM over Generic HTTP”, along with its three SuperFlows (one 2-arm and two 1-arm), as part of the bi-weekly release ATI-2024-19 on September 30, 2024 (available from BPS v10.00 Patch 3).
Figure 1: AI LLM over Generic HTTP App and its SuperFlows in BPS
This blog details how to configure the Application and its SuperFlows to call the APIs of popular LLMs such as OpenAI and Gemini.
How to Configure OpenAI 1-arm Superflow in BreakingPoint
Warning: While this operation is supported, it is not normally necessary to make automated connections to a live production service that you do not own. Check each provider’s terms of service if there are any concerns, and keep the request performance parameters very low. You have been warned.
To communicate with the actual OpenAI REST API server via an HTTP POST request, the request header field values should be specified under the “LLM Request” action as shown below –
Figure 2: OpenAI 1-arm Superflow in BPS
Here the action parameter includes the following OpenAI-specific HTTP request header values –
- Request URI – “v1/chat/completions”
- Hostname – “api.openai.com”
- Accept Encoding – “gzip, deflate”
- User Agent – “python-requests/2.25.1”
- Content Type – “application/json”
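For reference, the parameter values above translate into an HTTP POST request like the one assembled below. This is an illustrative sketch, not BPS output: the Authorization bearer token is a placeholder (a real OpenAI API key is required), and the request is only constructed, never sent.

```python
import json

# Header values taken from the "LLM Request" action parameters above.
# The Authorization key is a placeholder -- substitute a real OpenAI key.
headers = {
    "Host": "api.openai.com",
    "Accept-Encoding": "gzip, deflate",
    "User-Agent": "python-requests/2.25.1",
    "Content-Type": "application/json",
    "Authorization": "Bearer sk-PLACEHOLDER",
}

# Minimal chat-completions payload; model and prompt are illustrative.
body = json.dumps({
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}],
})
headers["Content-Length"] = str(len(body))

# Assemble the raw HTTP POST as it would appear on the wire.
wire = (
    "POST /v1/chat/completions HTTP/1.1\r\n"
    + "".join(f"{k}: {v}\r\n" for k, v in headers.items())
    + "\r\n"
    + body
)
print(wire.split("\r\n")[0])  # POST /v1/chat/completions HTTP/1.1
```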
Additional HTTP headers, both standard and custom, can also be specified via the “Upload Custom Request Header File” UI parameter in CSV format, as shown below –
Figure 3: Custom HTTP headers for OpenAI API Call
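A custom header file can be generated with a few lines of Python. The two-column name,value layout used here is an assumption for illustration; match the exact format shown in the figure above when building the real file, and note that the API key is a placeholder.

```python
import csv

# Hypothetical extra headers for the OpenAI call. The name,value
# per-row layout is an assumption -- follow the CSV format that the
# "Upload Custom Request Header File" parameter actually expects.
extra_headers = [
    ("Authorization", "Bearer sk-PLACEHOLDER"),  # placeholder API key
    ("X-Custom-Header", "ati-demo"),             # illustrative custom header
]
with open("custom_headers.csv", "w", newline="") as f:
    csv.writer(f).writerows(extra_headers)

print(open("custom_headers.csv").read().strip())
```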
Next, using the “Upload Request Body File” parameter, the user can upload their own request body content that they want to send to the OpenAI API server. For example –
Figure 4: HTTP POST request payload for OpenAI API Call
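A minimal request body in the chat-completions format that this endpoint expects can be written out as follows; the model name, prompt, and token limit are illustrative values, not anything mandated by BPS.

```python
import json

# Minimal OpenAI chat-completions payload; all values are illustrative.
payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "user", "content": "What is BreakingPoint?"}
    ],
    "max_tokens": 64,  # keep responses (and cost) small for testing
}

# Save as the file to upload via "Upload Request Body File".
with open("request_body.json", "w") as f:
    json.dump(payload, f, indent=2)
```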
Note: To configure the OpenAI 1-arm Superflow in BreakingPoint Systems, the gateway IP address of “api.openai.com” must be specified as the “Base IP Address” of the “IPV4 EXTERNAL HOSTS” within the “Network Neighborhood” configuration.
After the successful authentication and processing of the OpenAI API request, the OpenAI server responds with a 200 OK HTTP response as shown below –
Figure 5: Sample HTTP 200 OK response from OpenAI API server
How to Configure Gemini 1-arm Superflow in BreakingPoint
To communicate with Google’s Gemini REST API server via an HTTP POST request, the request header field values should be specified under the “LLM Request” action as shown below –
Figure 6: Gemini 1-arm Superflow in BPS
Here the action parameter includes the following Gemini-specific HTTP request header values –
- Request URI – “/v1/models/gemini-1.5-flash:generateContent”
- Hostname – “generativelanguage.googleapis.com”
- Accept Encoding – “gzip, deflate”
- User Agent – “python-requests/2.25.1”
- Content Type – “application/json”
Additional HTTP headers, both standard and custom, can also be specified via the “Upload Custom Request Header File” UI parameter in CSV format, as shown below –
Figure 7: Custom HTTP headers for Gemini API Call
Next, using the “Upload Request Body File” parameter, the user can upload their own request body content that they want to send to the Gemini API server. For example –
Figure 8: HTTP POST request payload for Gemini API Call
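For Gemini, the request body follows the generateContent format of Google’s public Gemini REST API; the prompt text below is illustrative.

```python
import json

# Minimal Gemini generateContent payload; the prompt is illustrative.
payload = {
    "contents": [
        {"parts": [{"text": "Explain HTTP in one sentence."}]}
    ]
}

# Save as the file to upload via "Upload Request Body File".
with open("gemini_body.json", "w") as f:
    json.dump(payload, f, indent=2)
```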
Note: To configure the Gemini 1-arm Superflow in BreakingPoint Systems, the gateway IP address of “generativelanguage.googleapis.com” must be specified as the “Base IP Address” of the “IPV4 EXTERNAL HOSTS” within the “Network Neighborhood” configuration.
After the successful authentication and processing of the Gemini API request, the Gemini server responds with a 200 OK HTTP response as shown below –
Figure 9: Sample HTTP 200 OK response from Gemini API server
For the 2-arm simulation (involving both the client and server sides), the user can specify the HTTP 200 OK response header field values through the UI parameters under the “LLM Response” action. Additionally, the “Upload Custom Response Header File” parameter allows the user to upload extra response headers in CSV format, and the “Upload Response Body File” parameter lets the user upload a custom response payload to send to the client, as shown below –
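For the server side of the 2-arm SuperFlow, the uploaded response body can mimic a real completion. The sketch below builds a canned response in the OpenAI chat-completions shape; every field value here is fabricated purely for the simulation.

```python
import json

# Illustrative canned 200 OK body in the OpenAI chat-completions shape;
# all values are fabricated for the 2-arm server-side simulation.
canned = {
    "id": "chatcmpl-demo",
    "object": "chat.completion",
    "model": "gpt-4",
    "choices": [{
        "index": 0,
        "message": {
            "role": "assistant",
            "content": "Hello from the simulated server.",
        },
        "finish_reason": "stop",
    }],
}

# Save as the file to upload via "Upload Response Body File".
with open("response_body.json", "w") as f:
    json.dump(canned, f)
```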
Figure 10: AI LLM over Generic HTTP 2-arm Superflow in BPS
Leverage Subscription Service to Stay Ahead of Attacks
Keysight’s Application and Threat Intelligence (ATI) subscription provides daily malware updates and bi-weekly updates of the latest application protocols and vulnerabilities for use with Keysight test platforms. The ATI Research Centre continuously monitors threats as they appear in the wild. BreakingPoint customers now have access to attack campaigns for different advanced persistent threats, allowing them to test the ability of their currently deployed security controls to detect or block such attacks.