Google Gemini

The Google Gemini connection allows the Intelligence Hub to send prompts to, and receive responses from, models hosted by Google Gemini, supporting generative AI and LLM workflows. Below are details on the connection and input settings.

Use care when integrating this connection into production workflows. Due to the rapid pace of AI and LLM development, providers have made breaking changes to their APIs in the past. While we aim to respond quickly to such changes, they may temporarily disrupt production systems.

Connection Settings

| Setting | Description |
| --- | --- |
| API Key | The API key used to authenticate with the Google Gemini service. |
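
The Intelligence Hub handles authentication internally; the sketch below only illustrates, as an assumption based on Google's public Generative Language REST API, how an API key of this kind authenticates a request (here, listing the models the key can access). The endpoint and header name are not Hub settings.

```python
import requests

# Hypothetical placeholder; in practice this is the same key configured on the connection.
API_KEY = "YOUR_GEMINI_API_KEY"

# Assumption: the public Gemini REST API accepts the key via the x-goog-api-key header
# (or as a `key` query parameter).
resp = requests.get(
    "https://generativelanguage.googleapis.com/v1beta/models",
    headers={"x-goog-api-key": API_KEY},
    timeout=30,
)
resp.raise_for_status()

# Print the model identifiers available to this key, e.g. "models/gemini-2.0-flash".
for model in resp.json().get("models", []):
    print(model["name"])
```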

Input Settings

| Setting | Description |
| --- | --- |
| Model | An identifier for the model to use, typically a model name or ID provided by Google Gemini, for example "gemini-2.0-flash". |
| Instructions (Optional) | The prompt or instructions that guide the model's response. |
| Message | The message sent to the model. This is the input the model processes, guided by the instructions. |
| Response Format | The format in which the model should respond. When set to JSON, the connection attempts to parse the response as JSON; if the response is not valid JSON, the read fails. |
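
As a minimal sketch only, the example below shows roughly how these input settings could map onto a single generateContent request against Google's public Gemini REST API. The endpoint, field names (systemInstruction, contents, generationConfig.responseMimeType), and sample values are assumptions about Google's API, not the Hub's internal implementation; the final JSON parse mirrors the Response Format behavior described above.

```python
import json
import requests

API_KEY = "YOUR_GEMINI_API_KEY"   # Connection setting: API Key (placeholder)
MODEL = "gemini-2.0-flash"        # Input setting: Model
INSTRUCTIONS = "Reply with a JSON object containing a 'summary' field."   # Instructions
MESSAGE = "Summarize: pump P-101 is running with 3 active alarms."        # Message

url = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent?key={API_KEY}"
)

body = {
    # Instructions guide the model's behavior for the whole request.
    "systemInstruction": {"parts": [{"text": INSTRUCTIONS}]},
    # Message is the input the model actually processes.
    "contents": [{"parts": [{"text": MESSAGE}]}],
    # Response Format = JSON: request a JSON-formatted reply from the model.
    "generationConfig": {"responseMimeType": "application/json"},
}

resp = requests.post(url, json=body, timeout=60)
resp.raise_for_status()

# The generated text is in the first candidate's content parts.
text = resp.json()["candidates"][0]["content"]["parts"][0]["text"]

# Mirror the connection's behavior: parse the reply as JSON; invalid JSON raises an error.
result = json.loads(text)
print(result)
```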