Cooking Without the Fuss: Building an Intelligent Recipe Chatbot with Amazon Q
I come from southern Italy 🇮🇹, I love food and I love to cook. In recent years, however, working from home (yes, you end up working more than you would in the office), I have been so focused on work that I often almost skip lunch or have to throw something together at the last moment. It is not always easy to decide what to cook on the spot, and sometimes you don't have many ingredients available. Technology gives us plenty of tools today, so why not use them to make our lives easier?

During re:Invent 2023, AWS launched a service called Amazon Q. Amazon Q is a powerful assistant powered by generative AI that can help you get quick, relevant answers to questions, solve problems, and generate content using the data and insights in your information repositories, code, and company systems. Amazon Q understands natural language queries, so you can ask it questions in plain English. It is a fully managed service built on Amazon Bedrock, so you don't need to worry about the underlying infrastructure, and it integrates with services like Amazon Kendra and other supported data sources such as Amazon S3, Microsoft SharePoint, Salesforce, and many more.
So I decided to use Amazon Q to create a chatbot that suggests recipes or advises me on what to prepare based on the ingredients I have at home. Here are the steps I followed.
Create the application
At the moment Amazon Q is only available in two regions, N. Virginia (us-east-1) and Oregon (us-west-2); I chose Oregon. On the main screen choose “Get started”
And in the next one “Create application”
You will automatically be offered a name for the application, but I advise you to choose something more human-readable. Leave the option to create and use a new service role checked and give it a name too.
Data is encrypted by default with a KMS key, but if you want to use a different key you can check the “Customize encryption settings” box and create a new KMS key or use an existing one. You can also apply tags to the application. Once everything has been filled out, click the “Create” button; both the application and the role will be created, which takes about 30 seconds.
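If you prefer to script these steps instead of clicking through the console, the same application can be created with the Amazon Q Business API. Here is a minimal sketch using boto3; the display name and service role ARN are just placeholders from my setup, not values you need to reuse:

```python
import boto3

# Amazon Q Business client in one of the supported regions
qbusiness = boto3.client("qbusiness", region_name="us-west-2")

# Create the application; display name and service role ARN are placeholders
response = qbusiness.create_application(
    displayName="recipe-chatbot",
    roleArn="arn:aws:iam::123456789012:role/recipe-chatbot-service-role",
)

application_id = response["applicationId"]
print(f"Application created: {application_id}")
```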
Select the retriever
The next step is to select the retriever you want to use. There are two options, “Native retriever” and “Existing retriever”. The existing retriever uses an Amazon Kendra index you already have, while the native retriever creates an Amazon Q index and lets you choose from more than 20 supported data source connectors.
We then select the number of provisioning units for our index. You can provision from 1 to 50 units, and each unit corresponds to 20,000 indexed documents. My application is based on cooking recipes, so one unit is more than enough. Let's move on with “Next”.
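For completeness, the same index and retriever setup can be sketched with the API as well. This assumes the boto3 qbusiness client and the application_id from the previous step; one capacity unit matches the 20,000-document tier described above:

```python
# Create a native index with a single capacity unit (about 20,000 documents)
index = qbusiness.create_index(
    applicationId=application_id,
    displayName="recipe-index",
    capacityConfiguration={"units": 1},
)
index_id = index["indexId"]

# Attach a native retriever that searches the index we just created
retriever = qbusiness.create_retriever(
    applicationId=application_id,
    type="NATIVE_INDEX",
    displayName="recipe-retriever",
    configuration={"nativeIndexConfiguration": {"indexId": index_id}},
)
```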
Let's now choose the data source we want to use. Not having a PDF cookbook to upload, I chose to use the recipes published on www.giallozafferano.it, so I selected “Web crawler”.
In the next screen give a name to the data source, select the “Source URLs” option and add the domains you want to be indexed.
Scrolling down, select the type of authentication required; in my case the website does not require authentication. You can also connect through a web proxy, and use AWS Secrets Manager to store the credentials if the website you want to crawl through the proxy requires authentication.
Scrolling down further, you can choose whether to place the connector in a VPC. In the IAM role section choose “Create a new service role” and give the role a name, then in the “Sync scope” section select what to synchronize. You can limit crawling to the domains and subdomains of the website URLs you listed in Source, or you can crawl everything, including other domains that the web pages link to. In my case I chose “Sync domains with subdomains only”.
In the “Additional configuration” section, expand the “Scope settings” tab. Here you can adjust the crawl depth (from 1 to 10), the maximum size of a page to be scanned (from 1 byte to 50 MB), the maximum number of links crawled per page (from 1 to 1,000) and the maximum number of URLs crawled per host name per minute (from 1 to 300). Keep this last parameter low: too many requests per minute could lead the server to block the connection, mistaking the crawler for an attempted attack.
In the “Sync mode” section we choose how we want our application to be updated when the contents of the data source change. Since I'm only interested in indexing the changes, I chose “New, modified, or deleted content sync”.
Let's scroll down again and choose when to synchronize the data source; in my case I chose every day at 3:00 in the morning. Add tags if you want, and leave the “Field mappings” section with the default settings. This section structures the data for chat retrieval and filtering: Amazon Q scans the document attributes or metadata of the data source and maps them to the fields of your Amazon Q index. For the purposes of this guide we can ignore it. Finally, click on “Add data source”.
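The whole data source can also be registered through the API. The call below is only a sketch: create_data_source and its top-level parameters come from the qbusiness SDK, but the inner JSON of the Web Crawler configuration is connector-specific, so treat those keys as illustrative placeholders and check the official connector template before using them:

```python
# Illustrative placeholder for the Web Crawler connector configuration;
# the real JSON schema is defined in the Amazon Q Web Crawler documentation.
web_crawler_configuration = {
    "type": "WEBCRAWLER",  # assumed connector type name
    "connectionConfiguration": {
        "repositoryEndpointMetadata": {
            "seedUrlConnections": [{"seedUrl": "https://www.giallozafferano.it"}]
        }
    },
    "syncMode": "CHANGE_LOG",  # index only new, modified, or deleted content
}

data_source = qbusiness.create_data_source(
    applicationId=application_id,
    indexId=index_id,
    displayName="giallozafferano-web-crawler",
    configuration=web_crawler_configuration,
    syncSchedule="cron(0 3 * * ? *)",  # every day at 3:00 AM (UTC)
    roleArn="arn:aws:iam::123456789012:role/recipe-datasource-role",  # placeholder
)
data_source_id = data_source["dataSourceId"]
```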
Test the chatbot
Now we have created our application and added the data source. By clicking on “Preview web experience” we could try to chat with the newly created application.
If we tried, however, we would be disappointed: the application would respond that it is unable to find the necessary information.
This is because, although the application is ready and the data source is connected, the crawling has not started yet. The data source is scheduled to synchronize at 3:00 in the morning, so we must start the first synchronization manually. Open the application, select the data source and click on “Sync now”.
Depending on the number of pages that need to be synchronized and indexed it could take from a few minutes to hours, be patient.
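If you do not want to wait in the console, the first sync can also be triggered and monitored from code. Another sketch with the boto3 qbusiness client; I am assuming here that the most recent job is returned first in the sync history:

```python
import time

# Start a sync job manually instead of waiting for the 3:00 AM schedule
sync_job = qbusiness.start_data_source_sync_job(
    applicationId=application_id,
    indexId=index_id,
    dataSourceId=data_source_id,
)
print(f"Sync job started: {sync_job['executionId']}")

# Poll the sync history until the latest job is no longer running
while True:
    history = qbusiness.list_data_source_sync_jobs(
        applicationId=application_id,
        indexId=index_id,
        dataSourceId=data_source_id,
    )["history"]
    status = history[0]["status"] if history else "UNKNOWN"
    if status not in ("SYNCING", "SYNCING_INDEXING"):
        break
    time.sleep(60)

print(f"Sync finished with status: {status}")
```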
Once the crawling work is finished, we can try again to ask for our pasta alla carbonara recipe (there is no cream in it 😉)
or ask it to suggest a recipe based on what we have available at home.
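You can also chat with the application programmatically instead of using the web experience preview. Here is a minimal sketch with the chat_sync API, assuming the calling identity is allowed to use the application; the user ID and the question are just examples:

```python
# Ask the chatbot a question through the conversational API
answer = qbusiness.chat_sync(
    applicationId=application_id,
    userId="recipe-user",  # placeholder user identifier
    userMessage="What can I cook tonight with eggs, guanciale, and pecorino?",
)

print(answer["systemMessage"])

# Sources used to build the answer, e.g. the crawled recipe pages
for source in answer.get("sourceAttributions", []):
    print("-", source.get("title"), source.get("url"))
```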
As you can see, this is a simple example of how to exploit the great potential offered by Amazon Q, a versatile tool that can help employees be more productive. Users can ask Amazon Q questions in everyday language and receive complete, understandable answers. It supports a wide range of tasks, including answering questions, finding information, writing emails, summarizing texts, drafting documents and brainstorming ideas, and even getting suggestions on what to prepare for lunch and dinner.
Buon appetito!
P.S.
In the next blog I will talk about how to deploy the web experience and how to configure the access using an identity provider.