RPi Node-Red: Streaming the RPi camera to the dashboard

Goal:

Broadcast a live video feed from the RPi camera to a webpage that is accessible both locally and over the network.

Resources:

An RPi and an RPi camera.

Hardware:

The only setup here is connecting the RPi camera to the Pi using the ribbon cable.

Installing Streaming Software:

We’ll be following this tutorial here: https://elinux.org/RPi-Cam-Web-Interface

The page there is extremely verbose and scary, but the actual setup is very simple and should only take a few minutes.

First open a terminal, and enter

sudo apt-get update

This updates the package lists so the installer can find the packages it needs.
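
The guide linked above also suggests bringing the installed packages up to date before running the installer (optional, and it can take a while):

sudo apt-get dist-upgrade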

Then paste or type this into the same terminal and press enter.

git clone https://github.com/silvanmelchior/RPi_Cam_Web_Interface.git

This will start downloading the git repo for the web interface; this should only take a few seconds.

cd RPi_Cam_Web_Interface

Enters the directory that we just downloaded.

./install.sh

This runs the installation script. It will prompt you for some settings; press enter to use the defaults.

 

The article we’re following mentions:

The scripts are

    install.sh main installation as used in step 4 above
    update.sh check for updates and then run main installation
    start.sh starts the software. If already running it restarts.
    stop.sh stops the software
    remove.sh removes the software
    debug.sh is same as start but allows raspimjpeg output to console for debugging

    To run these scripts make sure you are in the RPi_Cam_Web_Interface folder then precede the script with a ./
    E.g. To update an existing installation ./update.sh
    E.g. To start the camera software ./start.sh
    E.g. To stop the camera software ./stop.sh

 

We just ran ‘install.sh’, which installs everything. Now if we want to start the stream we’ll use ‘start.sh’:

 

./start.sh

If we want to start the stream later, say after restarting the Pi, we’ll have to navigate back to this directory and run the start script.

That process just looks like this:

cd RPi_Cam_Web_Interface
./start.sh
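
If you’re ever unsure whether the stream is already running, one quick check (a suggestion only, based on the raspimjpeg process mentioned in the script list above) is:

pgrep -a raspimjpeg

If it prints a line, the camera software is running; if it prints nothing, run ./start.sh again.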

 

Viewing the stream:

Now for the fun part, actually viewing the stream.

First we’ll need our IP address; this can be found by hovering the mouse over the WiFi applet like this:
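
If you’d rather use the terminal, this command prints every IP address the Pi currently has:

hostname -I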

Next, on the same Pi or on any computer on the same local network, we can open the stream via this address.

The URL will look like this, except with the IP address switched out for your own:

10.71.0.174/html
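
To quickly confirm the camera software is serving pages (using the example address above; swap in your own IP), you can request the page headers from any machine on the network, assuming curl is installed:

curl -I http://10.71.0.174/html/

A 200 OK (or a redirect) response means the web interface is up.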

Here’s what the default page looks like when we’re streaming.


You can edit all kinds of settings here, but they aren’t really necessary for a basic setup.

If your image happens to be upside down you can change the rotation in the “Camera Settings” tab here:

There’s also a more minimal page that only shows the video here:
http://10.71.0.184/html/min.php

Remember that the IP address will be different for you.

 

Embedding the stream into node-red dashboard:

The T3 RPi kit comes with the node-red dashboard nodes installed already, and this is what we’ll use to view the stream in node-red. The advantage of doing this is that you can have buttons, graphs, or other data alongside the video feed.

Here is an example where we have the feed coming from a remote controlled car, with buttons alongside to drive the car around.

In this tutorial we’ll focus on getting the video itself to display.

Here is what the configuration for the template node looks like:

Everything is set to default, except we’ve added some HTML code to the template box.
Here’s the code:

<iframe scrolling="no" marginwidth="0" marginheight="0" frameborder="0" height="439" width="553" src="http://10.71.0.200/html/min.php"></iframe>

For it to work in your template node, you’ll have to replace the URL here with the one for your Pi, so the IP address will be different.

The other change is adding a new ui_group.

If you click the pen here, it’ll open the panel to define the new default group.
You can leave this all at default, but I think it looks better with the video if you raise the width slightly.
Here’s mine with the width raised to 9.

Now if you deploy and navigate to http://your_ip:1880/ui you should be able to see your video stream embedded in the node-red dashboard.

You can fine tune the iframe settings like width and height in the template node, and the layout size in the dashboard options.

RPi Node-Red: Uploading pictures to Google Photos

This tutorial is being worked on – the Google Photos node may not be working at this time due to changes in the underlying repositories.

Goal:

Upload pictures from a node-red flow into a Google Photos folder.

What you will learn:

How to install and configure the easybotics-gphoto-upload node, and use it in your flow.

What you need to know:

Getting started with node-red

Parts List:

  • A Raspberry Pi set up with Node-Red
  • Raspberry Pi camera

Getting Started:

  • First connect your camera to your Raspberry Pi while the power is off.
  • Power on your Raspberry Pi and make sure the camera is enabled in the configuration menu (Preferences > Raspberry Pi Configuration > Interfaces, or sudo raspi-config from a terminal).
  • If it is not already installed you need to install the gphoto-upload node. Because of some peculiarities with its dependencies, this node has to be installed manually; the procedure follows:

Copy and paste, or type, these lines into a terminal window. If typing, press enter after each line; if pasting in the whole block, you should only need to press enter once.

cd ~/.node-red
sudo node-red-stop
sudo npm i node-red-contrib-easybotics-gphoto-upload

Then, once that install is done (1-5 min), copy and paste or type this line and press enter:

sudo npm i axios@^0.16.2

Wait for this install to finish (1-5 min).
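
Optionally, as a quick sanity check, you can confirm both modules ended up under ~/.node-red before restarting:

cd ~/.node-red
npm ls node-red-contrib-easybotics-gphoto-upload axios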

Restart Node-Red – you can type node-red-start into the terminal, or click on the Node-Red program link.

Open a web browser and view 127.0.0.1:1880. If you already had the web browser open, refresh the Node-Red page.

Node usage:

Drag the upload-photo node in from the palette, and enter the setup menu by double clicking on it.

It’s definitely a good idea to set up an entirely separate Google account for use with node-red, rather than pasting your actual username and password into nodes, no matter how trusted they are.

To allow node-red to log in to Google Photos without two-factor authentication, you must allow less secure app access here: https://support.google.com/accounts/answer/6010255

 

In the settings page, the filename and album can be set. Usually you would inject a filename, but if you inject an empty message it takes the defaults set here.

 

To set up the camera node, just make sure it’s set to output the filename, and that the Image Resolution is set to anything other than the maximum.

Notes:

If you take pictures at a rate of one every 2 minutes, here is how many days you can expect before 8GB of free space on the SD card fills up:

Quality = 80

  • 2592×1944 – 2.7MB per photo = 3.5 days
  • 1920×1080 – 1.2MB per photo = 7 days
  • 1024×768 – 546KB per photo = 17 days
  • 320×240 – 96KB per photo = 98 days

If you need more space, a larger 64, 128, or even 256GB card will store many more photos before filling up – do some math to figure out how many photos each can hold, as in the sketch below.
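
Here’s a rough worked example you can adapt (a sketch only; the per-photo sizes above are approximate, and a real card loses some space to the OS and filesystem):

# one photo every 2 minutes is 720 photos per day
# example: a 64GB card at 1.2MB per photo (1920x1080, quality 80)
awk 'BEGIN {
    photos_per_day = 24 * 60 / 2
    card_mb  = 64 * 1000
    photo_mb = 1.2
    photos   = card_mb / photo_mb
    printf "about %.0f photos, or %.0f days\n", photos, photos / photos_per_day
}'

With those numbers the card holds roughly 53,000 photos, or around 74 days of shooting.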

Selfie station example project

Here is an example of an applied design thinking project done with a class of 17 students at the University of Hawaii Hilo Upward Bound T³ Alliance during the summer of 2018.  Node Red and physical Raspberry Pi setup instructions can be found on this post.

Students had mastered the skills associated with basic physical computing and Node Red. They were capable of setting up a button, an LED ring, a sonic sensor, a PIR sensor, and a small camera. They had been able to combine all these components using Node Red and were capable of generating emails that sent a photo.

I asked around for a person or group at the university that might be interested in a device that could take photos and have them emailed instantly. Eventually I found the perfect potential client in Shara Mahoe, the director of new student services. She was planning a scavenger hunt for the new student orientation day later in the summer, when 600 freshmen appear and nervously try to find their way around the large UH Hilo campus. She listened to my description of what the students in our T3 Alliance program were able to do and how the design thinking process worked. Once she understood what was involved, she signed up to be a client.

Shara and a colleague showed up in my classroom the next day and I interviewed her in front of the class.  She described her day and what she hoped it would feel like to new students.  She listed out 5 locations around the University that could use a selfie station and asked my students if they could find a solution that would work.  I had previously broken the students up with an ice breaker activity and they now found themselves choosing one of the sites around campus to design a selfie station.  As a team, they discussed what they had heard Shara speak about and filled out the first section of the guide questions associated with the “empathy” stage of the design thinking process.

We took a quick walking field trip to each location and the students finished the “define” phase, where they articulated exactly what was needed and what the constraints were, and moved into a brainstorming “ideation” phase.  Students were tempted to think there was just one type of solution to the selfie station problem, but they sketched out three different ideas. When this was finished they chose a “prototype” design they wanted to build and they wrote out a mini grant proposal.  

When the proposals were complete, we submitted them to the Upward Bound director for approval. We had prepared ahead for this type of project with wires, buttons, and extra Raspberry Pi devices with cameras and power supplies. After the proposals were approved we handed out and checked off the items that had been requested on each one. The students got right to work building the prototype selfie stations.

We instructors restrained ourselves from helping too much and let the teams figure out how to build their designs.  When students would ask for help, we would respond with a question.  Eventually, the students learned to frame their questions in such a way as to be able to google the answer.  We helped in the areas where a skill had not been introduced, such as soldering, or learning to “remote” into the pi.  The teams were responsible for building the prototype, writing the code that controlled it, and recording and editing a short video.

Several days later, Shara met with us to see the results. The students walked around the campus with her, demonstrating the way their selfie stations worked and noting what could be improved. One team had an opportunity to radically modify their design because it didn’t take into account the safety considerations necessary when a crowd of students would be moving past a certain area.

The students were beaming when Shara thanked the group.  She appreciated their efforts and asked them to sign their work so that new students at the school knew who had built these stations.  Each group modified and perfected the design and the instructions.

As part of the initial mini grant application, the teams had been responsible for writing user instructions, making a video about the project, and writing a short report about the progress of the project.

RPi Node-Red: Camera

Goal:

Install the Raspberry Pi camera and take a picture using Node-Red

Parts List:

  • Pi Camera
  • Pi Camera Ribbon Cable

Getting Started:

Power off your Raspberry Pi and then follow the instructions on this page to install the camera.

Setting up Node-Red

Start Node-Red and navigate to 127.0.0.1:1880. Drag an “inject” node and a “camerapi takephoto” node into the flow area.

Link them together.

Deploy the flow, then point the camera at something and press the inject button.  The camera will take a picture and then store it in /home/pi/Pictures by default.  Open the file explorer.

Click on the Pictures folder.

There your photo is!
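
If you’d rather check from a terminal than the file explorer, listing the newest files in the default folder shows the photo as well:

ls -lt /home/pi/Pictures | head -n 5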

What’s Next?

  • Try and trigger the camera node with a Raspberry Pi input node connected to a button or PIR sensor.
