Tutorials
September 24, 2024

DIY Raspberry Pi automatic dog or cat feeder: Full tutorial

Written by
Arielle Mella
Developer Advocate

If your pet is as insatiable as mine, you are familiar with waking up every morning to the sound of gentle whining at the door and the pitter-patter of begging paws on the floor—two hours before the alarm.

The sun has barely risen as you glance out your east-facing window, and you can see a moist nose peering under your door frame. They command: it’s time to eat.

An image classification model within Viam accurately detecting a dog.

By following this tutorial, you'll:

  • Build your own automatic pet feeder using Raspberry Pi.
  • Train a custom machine learning (ML) model with Viam’s Data Manager to recognize your pet.
  • Implement Viam’s Machine Learning and Vision Services to run the model on your robot.
  • Integrate a stepper motor and 3D-printed dispenser to release treats when your pet is identified.

Requirements

Hardware

You will need the following hardware components:

  1. A computer running macOS or Linux
  2. Raspberry Pi with microSD card (and microSD card reader), with viam-server installed following the Installation Guide.
  3. Raspberry Pi power supply
  4. Stepper motor and motor driver
  5. 12V power supply adaptor for motor driver
  6. Simple USB powered webcam
  7. Assorted jumper wires
  8. Four 16mm or 20mm M3 screws

Tools and other materials

You will also need the following tools and materials:

  1. Wide mouth Mason Jar or blender cup (if you want to avoid using glass!)
  2. Small pet treats or dry kibble
  3. Tools for assembly such as screwdrivers and allen keys
  4. 3D printer (or somewhere you can order 3D printed parts from)
  5. 3D printed STL models, wiring, and configuration recommendations.

Software

You will need the following software:

  • Python 3
  • pip
  • viam-server installed to your board. If you haven’t done this, we’ll walk you through it in the next section.

1. Assemble your DIY dog or cat feeder 

The STL files for the smart feeder robot are available on GitHub.

  1. Prepare your 3D prints. The front of the main body of your print is the side with the dog bone.
The CAD file of the DIY smart dog or cat feeder, showing the areas where the Raspberry Pi is inserted.
  2. Mount your Raspberry Pi to the side of the main body of your pet feeder using the provided mounting screw holes.
  3. Connect your power source to the Pi through the side hole.
  4. Mount your webcam to the front of your pet feeder. Connect the USB cable to your Pi.
  5. Insert the 3D printed stepper motor wheel into your pet feeder. This is what funnels treats out of your pet feeder programmatically.
  6. Place your stepper motor into the motor holder part and gently slide the wires through the hole in the body of your feeder so the cables come out on the Raspberry Pi side.
  7. Slide the motor driver holder into the body of your feeder; it should sit flush and fit nicely.
  8. Connect your stepper motor to the motor driver according to this wiring diagram:
A wiring diagram for the DIY automatic pet feeder: webcam, stepper driver, 12V power supply, and stepper motor attached to the Raspberry Pi.

2. Configure and test your automatic pet feeder 

Now that you’ve set up your robot, you can start configuring and testing it.

Set up your Raspberry Pi and components

If you haven’t already, set up the Raspberry Pi by following our Raspberry Pi Setup Guide.

Add a new machine in the Viam app. Then follow the setup instructions to install viam-server on the computer you’re using for your project and connect to the Viam app. Wait until your machine has successfully connected.

  1. Configure your Raspberry Pi board component.
  2. Configure your webcam camera component.
The webcam being configured within Viam’s app.
  3. Configure your stepper motor component with type motor and model gpiostepper.
    • If you used the same pins as in the wiring diagram, set the direction to pin 15 GPIO 22, and the step logic to pin 16 GPIO 23.
    • Set the Enable pins toggle to low, then set the resulting Enabled Low dropdown to pin 18 GPIO 24.
    • Set the ticks per rotation to 400 and select your board model, pi.
The stepper motor component being configured within the Viam app.
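For reference, the stepper's attributes in raw JSON might look something like the sketch below. The field names (pins, dir, step, en_low, ticks_per_rotation, board) are our best-guess rendering of the builder settings above, assuming the pin assignments from the wiring diagram; check the JSON view of your own configuration in the Viam app for the exact schema.

```json
{
  "name": "stepper",
  "type": "motor",
  "model": "gpiostepper",
  "attributes": {
    "board": "pi",
    "ticks_per_rotation": 400,
    "pins": {
      "dir": "15",
      "step": "16",
      "en_low": "18"
    }
  }
}
```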

Save your configuration.

Test your DIY smart pet feeder’s configuration

To check that everything is wired and configured correctly, head to the Control tab. Start by testing the motor: click on the motor panel, set the RPM to 20 and Go For (# of revolutions) to 100, and watch your treat-dispensing mechanism in action. Feel free to tweak these values to achieve the desired speed of your dispenser.
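As a sanity check on those numbers: go_for runs the motor at the given RPM for the given number of revolutions, so you can estimate how long a run takes with quick arithmetic (plain Python, no robot needed):

```python
def run_time_seconds(rpm: float, revolutions: float) -> float:
    """Estimate how long a go_for(rpm, revolutions) call runs:
    revolutions / rpm gives minutes; multiply by 60 for seconds."""
    return revolutions / rpm * 60


# The test values above: 100 revolutions at 20 RPM take 5 minutes.
print(run_time_seconds(20, 100))  # → 300.0
```

The same math tells you the dispense move used later in this tutorial (2 revolutions at 80 RPM) takes about a second and a half.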

The stepper motor being tested within Viam’s Control tab, running forward at 20 RPM.

Test your DIY smart pet feeder’s camera

Next, test your camera. Click on the camera panel and toggle the camera on. Now check if you can see your pet! Your pet may be a little skeptical of your robot at first, but once you get some treats in there, your furry friend will love it in no time!

The camera is working perfectly—just look at my dog, Toast, for proof!

3. Use ML to recognize your cat or dog

Let’s make our pet feeder smart with some data capture and ML models! To do that, you’ll first have to configure Data Management to capture images. Then you can use these images to train an ML model on your pet.

Configure data management

To enable the data capture on your robot, do the following:

  1. Under the CONFIGURE tab, add the Data Management service which will allow your machine to sync data to the Viam app in the cloud.
  2. Give your service a name. We used pet-data for this tutorial.
  3. Ensure that both Data Capture and Cloud Sync are enabled. Enabling data capture allows you to view the saved images in the Viam app, tag them easily, and train your own machine learning model. You can leave the default directory as is; this is where your captured data is stored on-robot (by default, the ~/.viam/capture directory on your machine).
In the Viam app, active data capture is indicated by the green toggle switch.

Next, enable the Data Management service on the camera component on your robot:

  1. Scroll down to the camera component you previously configured.
  2. Click + Add method in the Data Capture Configuration section.
  3. Set the Type to ReadImage and the Frequency to 0.333. This captures an image from the camera roughly once every 3 seconds. Adjust the frequency if you want the camera to capture more or less image data; the more pictures of your pet you have, the more accurate your classifier model can be. Also select the Mime Type that you want to capture. For this tutorial, we are capturing image/jpeg data.
Configure the camera to capture an image every 3 seconds by setting the appropriate frequency and Mime Type.
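The Frequency field is in hertz (captures per second), so the interval between captures is simply its reciprocal:

```python
def capture_interval_seconds(frequency_hz: float) -> float:
    """Convert a Data Capture frequency (in Hz) to seconds between captures."""
    return 1 / frequency_hz


# 0.333 Hz works out to roughly one image every 3 seconds.
print(round(capture_interval_seconds(0.333), 1))  # → 3.0
```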

Capture images of your dog or cat

Now it’s time to start collecting images of your beloved pet. Set your feeder up near an area your pet likes to hang out, such as your couch or their bed, mount it temporarily over their food bowl, or even just hold it in front of them for a couple of minutes.

You can check that data is being captured and synced by clicking on the menu icon on the camera configuration pane under the CONFIGURE tab and selecting View captured data.

The camera configuration pane open with the "View captured data" option selected.

Capture as many images as you want. If possible, capture your pet from different angles and with different backgrounds. Disable Data Capture after you’re done capturing images of your pet. You also want to make sure you capture images without your pet so that you have a diverse dataset to train your model on.

Create a dataset and tag images

Head over to the DATA page and select an image captured from your machine. After selecting the image, you can type a custom tag for the objects you see in it and add it to a dataset. First, consider which tags you want to create and how you want your custom model to function.

Multiple images of my dog, Toast, captured from different angles and distances within Viam’s data manager.

For the treat dispenser, start by tagging images with the name of the pet. To train a model successfully you will need both images with your pet and without. You can leave images without your pet untagged or tag them with something like not-pet.

Notice that in our image collection, we captured images at different angles and with different background compositions. This is to ensure that our model can continue to recognize the object no matter how your robot is viewing it through its camera. To be able to train on the data you are tagging you also need to add each image to a dataset.

Begin by selecting the image you would like to tag, and you will see all of the data that is associated with that image. Type in your desired tag in the Tags section.

Be mindful of your naming: you can only use alphanumeric characters and underscores, because the model will be exported as a .tflite file with a corresponding .txt file for labeling.
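To illustrate the naming rule, here is a quick check you could run on candidate tag names. Note that is_valid_tag is our own throwaway helper, not part of the Viam SDK:

```python
import re

# Tag names may contain only letters, digits, and underscores.
TAG_PATTERN = re.compile(r"^[A-Za-z0-9_]+$")


def is_valid_tag(tag: str) -> bool:
    """Return True if the tag uses only alphanumerics and underscores."""
    return bool(TAG_PATTERN.match(tag))


print(is_valid_tag("toast"))    # → True
print(is_valid_tag("not_pet"))  # → True
print(is_valid_tag("my dog"))   # → False (spaces are not allowed)
```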

Then use the Datasets dropdown to create a new dataset and assign the image to it. We called our dataset petfeeder.

For each image, add tags to indicate whether it contains your pet and add the image to your dataset.

The image tag I used for my DIY automatic smart pet feeder: simply "toast".

Add tags for each image that shows your pet, and add all the images (both with and without your pet) to the dataset.

Continue working through your collected images, tagging them and assigning them to your dataset until you are happy with it. This is important for the next step.

View your dataset

Once you have finished tagging your dataset, you can view its contents by clicking on your dataset’s name in the image sidebar or on the DATASETS subtab.

My automatic smart pet feeder’s dataset featuring the image tags and bounding box labels highlighting my dog, Toast.

Train a model to automatically recognize your pet

From the dataset view, click Train model, name your model, and select Single label as the model type. Then select the label or labels that you used on your pet images. We called our model petfeeder and selected the tags toast and no-toast to train on images of the pup and images that do not contain the pup.

The Viam platform can train various model types, including image classification and object detection models.

In this case, we're focusing on training an image classification model, which classifies whether or not an image contains your pet based on the labeled data you provide. This allows you to build a highly accurate model that can differentiate between images with and without your pet.

Training a classification model on the tags "toast" and "not-toast".

If you want your model to be able to recognize multiple pets you can instead create a Multi Label model based on multiple tags. Go ahead and select all the tags you would like to include in your model and click Train Model.

Deploy your model to your dog or cat feeder

Once the model has finished training, deploy it by adding an ML model service to your robot:

  1. Navigate to the machine page on the Viam app. Add a service with the type ML Model, and select TFLite CPU.
  2. Enter puppyclassifier as the name, then click Create.
  3. To configure your service and deploy a model onto your robot, select Deploy model on machine for the Deployment field.
  4. Select your trained model (petfeeder) as your desired Model.

Use the vision service to detect your pet

To detect your pet with your ML model, you need to add a vision service that uses the model and a transform camera that applies the vision service to an existing camera stream and specifies a confidence threshold:

  1. Add a vision service with the model ML Model.
  2. Enter a name or use the suggested name for your ML model service and click Create.
  3. Select the model you previously created in the dropdown menu.
  4. Optionally add a transform camera by selecting type camera and model transform.
  5. Enter classifier_cam as the name for your camera, then click Create.
  6. Replace the JSON attributes with the following object which specifies the camera source for the transform cam and also defines a pipeline that adds the classifier you created.
{
  "source": "petcam",
  "pipeline": [
    {
      "attributes": {
        "classifier_name": "puppyclassifier",
        "confidence_threshold": 0.9
      },
      "type": "classifications"
    }
  ]
}
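The confidence_threshold means only classifications scoring at least 0.9 pass through the transform. Conceptually, the filtering amounts to something like the following plain-Python sketch (the real filtering happens inside the vision service; this is just to show the logic):

```python
def confident_matches(classifications, target, threshold=0.9):
    """Keep (class_name, confidence) pairs that name the target pet
    and meet the confidence threshold."""
    return [
        (name, conf)
        for name, conf in classifications
        if name == target and conf >= threshold
    ]


results = [("toast", 0.95), ("toast", 0.42), ("not-toast", 0.97)]
print(confident_matches(results, "toast"))  # → [('toast', 0.95)]
```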

Head to your robot’s Control tab, click on your transform cam, and toggle it on. You should see the transform cam’s view; if it is pointed at your pet, it should show the classifier detecting your pet!

My dog, Toast, being accurately classified by the ML model on the Viam app.

4. Control your DIY smart pet feeder programmatically

With your robot configured, you can now add a program to your robot that controls the pet feeder when executed, using the Viam SDK in the language of your choice. This tutorial uses Python.

Set up your Python environment

Open your terminal and ssh into your Pi. Run the following command to install the Python package manager onto your Pi:

sudo apt install python3-pip

Create a folder named petfeeder for your code and create a file called main.py inside.

The Viam Python SDK allows you to write programs in the Python programming language to operate robots using Viam. To install the Python SDK on your Raspberry Pi, run the following command in your existing ssh session to your Pi:

pip3 install --target=petfeeder viam-sdk python-vlc

IMPORTANT: If you want your robot to automatically run your code upon startup, it is important to install the package into the petfeeder folder because of how the Viam platform runs the process.

Add the connection code

Go to your robot’s page on the Viam app and navigate to the Code sample tab. Select Python, then copy the generated sample code and paste it into the main.py file.

API KEY AND API KEY ID: By default, the sample code does not include your machine API key and API key ID. We strongly recommend that you add your API key and API key ID as an environment variable and import this variable into your development environment as needed. To show your machine’s API key and API key ID in the sample code, toggle Include secret on the CONNECT tab’s Code sample page.

CAUTION: Do not share your API key or machine address publicly. Sharing this information could compromise your system security by allowing unauthorized access to your machine, or to the computer running your machine.

Save the file and run this command to execute the code:

python3 main.py

When executed, this sample code connects to your robot as a client and prints the available resources.

Add the logic

If your program ran successfully and you saw a list of resources printed from the program, you can continue to add the robot logic.

You’ll be using the puppyclassifier. The following code initializes a camera and the puppyclassifier and shows you how to get the classifications from the classifier when passing in the camera stream as an argument:

Remove the existing code in the main function from the boilerplate code generated in the Code Sample tab and replace it with the following logic. The code gets classifications from the puppyclassifier based on the camera stream, and if a pet is found, activates the stepper motor using the go_for() method to move a certain amount to dispense a treat.

petcam = Camera.from_robot(robot, "petcam")
puppyclassifier = VisionClient.from_robot(robot, "puppyclassifier")
classifications = await puppyclassifier.get_classifications_from_camera(
    camera_name)

async def main():
    robot = await connect()

    # robot components + services below, update these based on how you named
    # them in configuration
    pi = Board.from_robot(robot, "pi")
    petcam = Camera.from_robot(robot, "petcam")
    stepper = Motor.from_robot(robot, "stepper")
    puppyclassifier = VisionClient.from_robot(robot, "puppyclassifier")

    try:
        while True:
            # check whether the camera sees the dog
            found = False
            classifications = await \
                puppyclassifier.get_classifications_from_camera(camera_name)
            for d in classifications:
                # only act when the model is confident in the classification
                if d.confidence > 0.7:
                    print(d)
                    if d.class_name.lower() == "toastml":
                        print("This is Toast")
                        found = True

            if found:
                # turn on the stepper motor to dispense a treat
                print("giving snack")
                await stepper.go_for(rpm=80, revolutions=2)
                # wait five minutes so treats aren't dispensed continuously;
                # asyncio.sleep keeps the event loop responsive
                await asyncio.sleep(300)
            else:
                # make sure the stepper motor is stopped
                print("it's not the dog, no snacks")
                await stepper.stop()

            await asyncio.sleep(5)
    finally:
        # close the robot connection when the loop exits
        await robot.close()


if __name__ == '__main__':
    asyncio.run(main())

Save your file and run the code, put your pet in front of the robot to check it works:

python3 main.py

5. Run the program automatically

One more thing. Right now, you need to run the code manually every time you want your robot to work. However, you can configure Viam to automatically run your code as a process.

Navigate to the CONFIGURE tab of your machine’s page in the Viam app. Click on the Processes subtab and navigate to the Create process menu.

Enter main as the process name and click Create process.

In the new process panel, enter python3 as the executable. Click Add argument and enter main.py as the argument. Then set the working directory to /home/pi/petfeeder.

Click Save config in the bottom left corner of the screen.

Now your robot starts looking for your pet automatically once booted!

6. Upgrade your DIY cat or dog feeder

Why stop at just dispensing treats? Take your smart pet feeder to the next level with these creative upgrades:

  • Add speakers to play a personalized message each time a treat is dispensed.
  • Train an ML model to recognize and reward your pet for performing a trick.
  • Include a button system that allows your pet to choose their favorite treat by pressing different colored buttons.
  • If you have multiple pets, you could configure different treats for each pet by training the ML model on each pet, and dispensing different treats depending on the pet recognized.
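For the multiple-pets idea in particular, the dispatch logic can be as simple as a lookup table keyed by the model's class names. The pet names, treat names, and portion counts below are purely hypothetical:

```python
# Hypothetical mapping from ML class name to (treat hopper, portions).
TREAT_FOR = {
    "toast": ("chicken", 2),
    "mochi": ("salmon", 1),
}


def dispense_plan(class_name: str, confidence: float, threshold: float = 0.7):
    """Return the treat plan for a recognized pet, or None if the
    classification is below the confidence threshold or unknown."""
    if confidence < threshold:
        return None
    return TREAT_FOR.get(class_name.lower())


print(dispense_plan("Toast", 0.92))  # → ('chicken', 2)
print(dispense_plan("Toast", 0.40))  # → None
```

In the main loop, you would then call go_for() with parameters chosen from the returned plan instead of a fixed value.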

Build other Raspberry Pi projects

Ready to keep building with your Raspberry Pi? Try out some AI-based projects or explore more home automation tutorials.

And for more general robotics projects, check out other guides in our Codelabs.

Full code

import asyncio
import os
import time

from viam.robot.client import RobotClient
from viam.rpc.dial import Credentials, DialOptions
from viam.components.board import Board
from viam.components.camera import Camera
from viam.components.motor import Motor
from viam.services.vision import VisionClient

# these must be set, you can get them from your machine's 'Code sample' tab
robot_api_key = os.getenv('ROBOT_API_KEY') or ''
robot_api_key_id = os.getenv('ROBOT_API_KEY_ID') or ''
robot_address = os.getenv('ROBOT_ADDRESS') or ''

# change this if you named your camera differently in your robot configuration
camera_name = os.getenv('ROBOT_CAMERA') or 'petcam'


async def connect():
    opts = RobotClient.Options.with_api_key(
      api_key=robot_api_key,
      api_key_id=robot_api_key_id
    )
    return await RobotClient.at_address(robot_address, opts)


async def main():
    robot = await connect()

    # robot components + services below, update these based on how you named
    # them in configuration
    pi = Board.from_robot(robot, "pi")
    petcam = Camera.from_robot(robot, "petcam")
    stepper = Motor.from_robot(robot, "stepper")
    puppyclassifier = VisionClient.from_robot(robot, "puppyclassifier")

    try:
        while True:
            # check whether the camera sees the dog
            found = False
            classifications = await \
                puppyclassifier.get_classifications_from_camera(camera_name)
            for d in classifications:
                # only act when the model is confident in the classification
                if d.confidence > 0.7:
                    print(d)
                    if d.class_name.lower() == "toastml":
                        print("This is Toast")
                        found = True

            if found:
                # turn on the stepper motor to dispense a treat
                print("giving snack")
                await stepper.go_for(rpm=80, revolutions=2)
                # wait five minutes so treats aren't dispensed continuously;
                # asyncio.sleep keeps the event loop responsive
                await asyncio.sleep(300)
            else:
                # make sure the stepper motor is stopped
                print("it's not the dog, no snacks")
                await stepper.stop()

            await asyncio.sleep(5)
    finally:
        # close the robot connection when the loop exits
        await robot.close()


if __name__ == '__main__':
    asyncio.run(main())