Building Intelligent dApps with KeepKey Desktop
An Ollama-powered crypto tooling SDK

In this tutorial, we’ll explore how to create intelligent decentralized applications (dApps) using KeepKey, leveraging the powerful Ollama-powered crypto tooling SDK. We’ll guide you through building a basic chat app that integrates with KeepKey and utilizes Ollama’s advanced AI capabilities to interact with the KeepKey SDK.
Tool support · Ollama Blog: "Ollama now supports tool calling with popular models such as Llama 3.1. This enables a model to answer a given prompt…" (ollama.com)
At KeepKey, we have been very excited about Llama 3.1 support in Ollama, which enabled native tool calling. That feature, together with the offline, private nature of Ollama itself, inspired this KeepKey Desktop Ollama integration.
Prerequisites
Before we dive into the tutorial, ensure you have the following:
- KeepKey Desktop version 3.1.3 or higher: This version includes the bundled Ollama executable, which is essential for this tutorial. You can download the pre-release version here.
- Ollama experimental settings enabled: Verify that the Ollama experimental settings are turned on in the KeepKey Desktop settings.
- Familiarity with the basic KeepKey Template, as we’ll be building on top of this.
Learn how to become a KeepKey beta tester for early access and insights into new features.

Get started with the KeepKey Template to deploy your dApp with ease.
Quick Start
Repo: https://github.com/keepkey/keepkey-ai-example
If you’re eager to jump right in, you can try the completed example here.
Step-by-Step Guide
Step 1: Setting Up the Project
Start by cloning the KeepKey template:
npx degit keepkey/keepkey-template keepkey-ai-example
This command creates a fresh copy of the template for your project. Next, clear out the contents of pages/home/index.tsx so we can build our own home page.
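After clearing it, a bare-bones starting point might look something like this (a sketch only; the file path and export style should match whatever the template already uses):

// src/lib/pages/home/index.tsx -- stripped-down starting point (illustrative)
const Home = () => {
  return <div>keepkey-ai-example</div>; // the chat UI from the next step will go here
};

export default Home;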
Step 2: Building the Chat App
We’ll begin by creating a basic chat interface that connects to the KeepKey device. Here’s a simple example:
<Grid gap={4}>
  {keepkeyConnected ? (
    <div>
      <Chat sdk={sdk} apiKey={apiKey}></Chat>
    </div>
  ) : (
    <div>not Connected</div>
  )}
</Grid>
This component checks if the KeepKey device is connected before rendering the chat interface.
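For context, here is a rough sketch of the state the snippet above relies on. The real template populates these values during pairing with KeepKey Desktop; the hook and setter names below are assumptions, not the template's actual API:

import { useState } from "react";

// Illustrative only: connection state consumed by the snippet above
export const useConnectionState = () => {
  const [keepkeyConnected, setKeepkeyConnected] = useState(false); // true once KeepKey Desktop pairs
  const [sdk, setSdk] = useState<any>(null);                       // connected KeepKey SDK instance
  const [apiKey, setApiKey] = useState<string>("");                // pairing key for the desktop app
  return { keepkeyConnected, setKeepkeyConnected, sdk, setSdk, apiKey, setApiKey };
};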
Step 3: Model Management with Ollama
In our app, we’ll use the Mistral model from Ollama. While Ollama can download models on the fly, it’s crucial to inform users about the download process.
Here’s how you can handle model downloading in your app:
<Box>
  {isDownloading ? (
    <Box textAlign="center" mt={4}>
      <Spinner size="xl" />
      <Text mt={4}>Downloading model, please wait...</Text>
      <Progress value={percent} size="lg" colorScheme="blue" mt={4} borderRadius="md" />
      <Text mt={2}>{percent}%</Text>
    </Box>
  ) : (
    <>
      {/* Render selected component if necessary */}
      {selectedComponent && (
        <Box mb={4}>
          {/* Insert your selected component rendering logic here */}
        </Box>
      )}
      <Text>Model: {model}</Text>
      {/* Conversation Messages */}
      <Box>
        {conversation.slice(-10).map((msg, index) => (
          <Card key={index} mb={4}>
            <Box p={4}>
              <Text>
                {msg.role === "user" ? "User" : "Assistant"}: {msg.content}
              </Text>
            </Box>
          </Card>
        ))}
      </Box>
      {/* Input and Submit Button */}
      <HStack spacing={4} mt={4}>
        <Input
          placeholder="Type your message here..."
          value={input}
          onChange={(e) => setInput(e.target.value)}
        />
        <Button onClick={() => handleSubmitMessage(input)} disabled={loading}>
          {loading ? <Spinner size="sm" /> : "Send"}
        </Button>
      </HStack>
    </>
  )}
</Box>
This code snippet gates the chat components behind the isDownloading flag: while the model is downloading, the user sees a spinner and progress bar instead of the chat input.
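How the isDownloading and percent state gets driven depends on how you trigger the download. A minimal sketch, assuming keepkey-ollama mirrors the standard Ollama JavaScript client's pull API with streaming progress (the setter names match the component above):

// Illustrative: stream pull progress into React state
const downloadModel = async (modelName: string) => {
  setIsDownloading(true);
  try {
    const stream = await ollama.pull({ model: modelName, stream: true });
    for await (const part of stream) {
      // progress parts include total/completed byte counts while layers download
      if (part.total && part.completed) {
        setPercent(Math.round((part.completed / part.total) * 100));
      }
    }
  } finally {
    setIsDownloading(false);
  }
};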
Step 4: Configuring Ollama for KeepKey Integration
Install the Ollama npm package: https://www.npmjs.com/package/keepkey-ollama
npm install keepkey-ollama
Now, let's set up the Ollama configuration in pages/home/chat/inference.tsx:
import { Ollama } from "keepkey-ollama/browser";

const MODELS_KNOWN = [
  "mistral:latest", // Recommended for tools
  "llama3.1:latest",
];

const response = await ollama.chat({
  model,
  messages: newMessages,
  tools: TOOLS,
});
Here, we're leveraging the /browser build of keepkey-ollama. The chat call takes the model to use, the messages array (the conversation so far, including system prompts and tool results), and the TOOLS definitions we'll build later in this tutorial. Below is an example of the full message flow for a single submitMessage call:
{ "inference": { "submitMessage": [ { "role": "system", "content": "You are an assistant whose primary task is to execute functions to retrieve information. Whenever a user requests information, especially a crypto address, you MUST immediately call the appropriate function without providing any explanation, instructions, or commentary." }, { "role": "user", "content": "What's my Bitcoin address?" }, { "role": "tool", "content": "The response for getBitcoinAddress is {\"address\":\"3M9rBdu7rkVGwmt9gALjuRopAqpVEBdNRR\"}" }, { "role": "system", "content": "You are a summary agent. The system made tool calls. You are to put together a response that understands the user's intent, interprets the information returned from the tools, then summarizes for the user. If you are more than 80% sure the answer is logical, tell the user this. Otherwise, apologize for failing and return the logic of why you think the response is wrong." } ] } }
Understanding the Roles in Ollama's Chat System
When building a dApp that utilizes AI, it's crucial to understand how the different roles within the chat system interact with each other. In Ollama's framework, these roles define how the AI processes information, executes functions, and communicates with the user.
1. System Role
The system role is the backbone of the AI's behavior and functionality. It defines the instructions and constraints under which the AI operates. These commands are pre-defined and act as the primary directives that guide the AI's decision-making process.
For example, consider the following system command:
{ "role": "system", "content": "You are an assistant whose primary task is to execute functions to retrieve information. Whenever a user requests information, especially a crypto address, you MUST immediately call the appropriate function without providing any explanation, instructions, or commentary." }
In this case, the system command instructs the AI to prioritize function execution whenever the user requests specific information. The AI is explicitly told not to engage in unnecessary dialogue or explanations, focusing solely on retrieving and returning the requested data.
The system role is essential because it ensures consistency in how the AI handles various scenarios, providing a controlled environment for function calls and responses.
2. User Role
The user role represents the inputs provided by the user interacting with the dApp. These are typically natural language queries or commands that the AI needs to interpret and act upon.
For instance, a user might ask:
{ "role": "user", "content": "What's my Bitcoin address?" }
In this context, the AI, guided by the system commands, will recognize that it needs to call a specific function (e.g., getBitcoinAddress) rather than answer in prose.
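When the model decides to call a tool, its reply carries a tool_calls entry instead of a text answer. Roughly what that assistant message looks like, assuming the standard Ollama chat response shape:

// Illustrative shape of the assistant's reply when it requests a tool call
const assistantMessage = {
  role: "assistant",
  content: "",
  tool_calls: [
    { function: { name: "getBitcoinAddress", arguments: {} } },
  ],
};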
3. Tool Role
The tool role represents the responses generated by the function calls made by the AI. When the AI executes a function (as instructed by the system commands), it receives a result, which is then formatted as a tool response.
Here’s an example of a tool role response:
{ "role": "tool", "content": "The response for getBitcoinAddress is {\"address\":\"3M9rBdu7rkVGwmt9gALjuRopAqpVEBdNRR\"}" }
This response contains the data returned by the getBitcoinAddress call; it is appended to the conversation so the model can use it when composing its final reply.
4. Summary Agent Role
Finally, the summary agent role is responsible for interpreting the results from the tool responses and compiling a coherent and user-friendly reply. The summary agent reviews the tool responses and provides a final output to the user, ensuring that the AI's response is accurate and relevant.
Here’s an example of a summary agent role in action:
{ "role": "system", "content": "You are a summary agent. The system made tool calls. You are to put together a response that understands the user's intent, interprets the information returned from the tools, then summarizes for the user. If you are more than 80% sure the answer is logical, tell the user this. Otherwise, apologize for failing and return the logic of why you think the response is wrong." }
This role ensures that the final response provided to the user is meaningful and aligned with the user's initial query. The summary agent might either confirm the accuracy of the response or offer an apology if the AI determines that the logic or data might be flawed.
Putting It All Together
In your dApp, these roles work in tandem to create a seamless and intelligent user experience. The system commands set the groundwork for how the AI should behave, the user inputs direct the AI's focus, the tool responses deliver the necessary data, and the summary agent ensures that the final output is clear and accurate.
Understanding and correctly implementing these roles is key to building robust and intelligent dApps that leverage the full potential of AI and KeepKey's crypto tooling SDK.
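As a rough end-to-end sketch of that flow (simplified pseudocode; the names follow the example code later in this tutorial, and error handling is omitted):

// Simplified loop: user input -> model -> tool calls -> tool results -> summary agent -> reply
async function submitMessage(userInput: string) {
  const newMessages = [...conversation, { role: "user", content: userInput }];

  // 1. Ask the model; it may answer directly or request tool calls
  const first = await ollama.chat({ model, messages: newMessages, tools: TOOLS });

  // 2. Execute any requested tools and append their results as "tool" messages
  for (const call of first.message?.tool_calls || []) {
    const result = await availableFunctions[call.function.name](call.function.arguments);
    newMessages.push({
      role: "tool",
      content: `The response for ${call.function.name} is ${JSON.stringify(result)}`,
    });
  }

  // 3. Ask the model again, as a summary agent, for the final user-facing reply
  newMessages.push({ role: "system", content: "You are a summary agent. ..." });
  const final = await ollama.chat({ model, messages: newMessages, tools: TOOLS });
  return final.message.content;
}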
Step 5: Building a Basic Function Wrapper for the KeepKey SDK
We need to define the functions that Ollama can call. These functions can be anything from API calls to CLI commands. For simplicity, we’ll start with basic functions like getting Bitcoin addresses:
The full wrapper lives at keepkey-ai-example/src/lib/pages/home/chat/functions/keepkey.ts in the keepkey/keepkey-ai-example repository on GitHub. Here we cover the basics: getting addresses.
export const EXAMPLE_WALLET = (sdk: any) => ({
  getCoins: async () => {
    return Object.keys(COIN_MAP_KEEPKEY_LONG);
  },
  getBitcoinAddress: async (params: { network: any }) => {
    const addressInfo = {
      addressNList: [0x80000000 + 49, 0x80000000 + 0, 0x80000000 + 0, 0, 0],
      coin: "Bitcoin",
      scriptType: "p2sh-p2wpkh",
      showDisplay: false,
    };
    const response = await sdk.address.utxoGetAddress({
      address_n: addressInfo.addressNList,
      script_type: addressInfo.scriptType,
      coin: addressInfo.coin,
    });
    return response;
  },
  // ... more
});
Here we are defining some simple functions that wrap KeepKey SDK calls.
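The inference loop in Step 6 looks these functions up by name, so the wrapper is typically instantiated once with the connected SDK. A small sketch (the variable name matches the snippet later in this tutorial):

// Illustrative: build the name -> function map the inference loop will call into
const availableFunctions: Record<string, (args?: any) => Promise<any>> = EXAMPLE_WALLET(sdk);
// availableFunctions["getBitcoinAddress"]() now asks the connected KeepKey for an address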
And now we "map" these functions into a tool template that the Ollama LLM can read:
https://github.com/keepkey/keepkey-ai-example/blob/master/src/lib/pages/home/chat/chart.tsx#L172
export const TOOLS: any = [
  {
    name: "getBitcoinAddress",
    description: "Retrieve the Bitcoin address",
    parameters: {
      type: "object",
      properties: {},
      required: [],
    },
  },
  {
    name: "getDogecoinAddress",
    description: "Retrieve the Dogecoin address",
    parameters: {
      type: "object",
      properties: {},
      required: [],
    },
  },
  {
    name: "getMayachainAddress",
    description: "Retrieve the Mayachain address",
    parameters: {
      type: "object",
      properties: {},
      required: [],
    },
  },
  // ... more
];
To keep things simple, none of these tools require any parameters. You can also declare required parameters for function calling; examples are here.
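If a tool does need arguments, the parameters schema describes them and the model fills them in when it makes the call. A hypothetical example (getAddressBalance is not part of this demo's TOOLS):

// Hypothetical tool definition with a required parameter (illustrative only)
{
  name: "getAddressBalance",
  description: "Retrieve the balance for a given address",
  parameters: {
    type: "object",
    properties: {
      address: { type: "string", description: "The address to look up" },
    },
    required: ["address"],
  },
}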
Step 6: Letting the LLM Call the Tools
Now let's let the LLM call the tooling for the Pioneer SDK itself:
https://github.com/keepkey/keepkey-ai-example/blob/master/src/lib/pages/home/chat/inference.tsx#L185
const functionCall = isFunction[i];
console.log("functionCall: ", functionCall);
const functionName = functionCall?.function?.name;
console.log("functionName: ", functionName);
if (availableFunctions[functionName]) {
  const functionResponse = await availableFunctions[functionName](
    functionCall.function.arguments
  );
  const toolResponseMessage = {
    role: "tool",
    content: `The response for ${functionName} is ${JSON.stringify(
      functionResponse
    )}`,
  };
  console.log(tag, "toolResponseMessage: ", toolResponseMessage);
  newMessages.push(toolResponseMessage);
}
This runs whenever the LLM returns a "tool_calls" entry: it executes the matching wrapper function and appends the result to the message list as a tool message.
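For reference, the isFunction array in the loop above holds the tool calls the model requested. A minimal sketch of how it might be populated, assuming the standard Ollama chat response shape:

// Illustrative: collect the tool calls the model asked for
const response = await ollama.chat({ model, messages: newMessages, tools: TOOLS });
// tool_calls is present when the model decides to invoke a tool instead of answering directly
const isFunction = response?.message?.tool_calls || [];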
We then go to the LLM again and ask for a full summary:
newMessages.push({
  role: "system",
  content: `You are a summary agent. The system made tool calls. You are to put together a response that understands the user's intent, interprets the information returned from the tools, then summarizes for the user. If you are more than 80% sure the answer is logical, tell the user this. Otherwise, apologize for failing and return the logic of why you think the response is wrong.`,
});
console.log(tag, "newMessages: ", newMessages);

const finalResponse = await ollama.chat({
  model,
  messages: newMessages,
  tools: TOOLS,
});
console.log(tag, "finalResponse: ", finalResponse);
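The content of finalResponse is what the user actually sees. A small sketch of surfacing it in the chat UI (the setConversation setter name is an assumption about the component's state):

// Illustrative: append the summary agent's reply to the conversation state
const reply = finalResponse?.message?.content || "";
setConversation((prev: any[]) => [...prev, { role: "assistant", content: reply }]);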
And there we go, the results:

The system prompts in this basic demo are deliberately very strict about function calling. This project is just a basic demo to get developers started.
This is the first in a long series of tutorials, and we will cover the following in the near future:
Future Topics
- Generative UI Components: Display addresses and QR codes on demand. The LLM will present data to users in more dynamic ways beyond simple text.
- Remote Tool Calls: Learn how to use APIs to retrieve balances for addresses and accounts.
- Generative UX for Text-Assisted Bitcoin Transfers: Discover how to create a user experience that seamlessly integrates text-based interactions with Bitcoin transactions.
What topics would you like to see covered next? Leave a comment below!