Anemometer Computer Vision – Wind Speed

Anemometer Computer Vision is a method to determine wind speed by analyzing frames captured by a camera; I filed a patent on this not long ago.

Back in April 2018, I participated in the "eMerge Americas" Hackathon with a project called "Augmented Reality Anemometer." It used Augmented Reality (AR) to capture video frames from ARKit, computed the speed on an OpenCV backend, and sent the result back to the phone to display the wind speed derived from the video frames.

Anemometer Project

For this hackathon the idea was different: to expand upon it by creating a network of cameras that could capture video, process it, and generate wind speed computations from the frames of each input device. In fact, one of our members, Chris, brought a Ring camera and worked with the Ring API to retrieve video from that device, but this would work with any IP-based camera.

All this took place at the Palm Beach Tech Hackathon, hosted by Office Depot and organized by the "Palm Beach Tech" team, led by Joe Russo.

Eye of the Storm 

It was a fun event and we did it: our team won 3rd place and $500 with a project called "Eye of the Storm," basically a collection of cameras positioned in multiple locations across South Florida simulating a hurricane event, displaying videos with the computed wind speed in MPH (miles per hour).

The code is located on GitHub at http://github.com/wind2013/eye-of-the-storm/. Here is my team:

How does it work? 


The current implementation is defined in wind_speed_detector.py.

You need to install FFmpeg and OpenCV to run this code. As shown below, the steps performed are:

  • Convert each frame to grayscale, to detect pockets of water.
  • Compute the delta from the original (reference) frame.
  • Find all contours and their bounding rectangles:
(_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# multiple iterations can be handled with this function.
  • Once this is done, I found that a useful pattern is the number of rectangles divided by the number of squares, or square-like regions, among all the green rectangles being drawn on the frame (a minimal sketch of the whole pipeline follows this list):
cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
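
Putting the steps together, here is a minimal sketch of the per-frame pipeline; it is not the exact code in wind_speed_detector.py, and the helper name, blur kernel, threshold, and minimum contour area are assumptions (the OpenCV 3.x findContours signature is used, matching the snippet above):

import cv2

def process_frame(prev_gray, frame, min_area=500):
    # 1. Grayscale and blur the current frame to smooth out noise
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    # 2. Delta from the reference frame, then threshold and dilate
    delta = cv2.absdiff(prev_gray, gray)
    thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)
    # 3. Find the contours of the moving regions (OpenCV 3.x signature)
    (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rects = []
    for c in cnts:
        if cv2.contourArea(c) < min_area:
            continue
        (x, y, w, h) = cv2.boundingRect(c)
        rects.append((x, y, w, h))
        # 4. Draw the green rectangle on the frame
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return gray, rects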

This simplistic approach yields a ratio from 0 to about 5–6, where 5–6 corresponds to high-speed winds of roughly 150 MPH; anything higher does not provide useful information, or corresponds to a static image.
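
As a rough illustration of that mapping (the squareness cutoff and linear scaling below are assumptions, not the exact constants in wind_speed_detector.py):

def rects_to_ratio(rects, squareness=0.75):
    # Ratio of all rectangles to the roughly square-like ones
    squares = [(x, y, w, h) for (x, y, w, h) in rects
               if w > 0 and h > 0 and min(w, h) / float(max(w, h)) >= squareness]
    if not squares:
        return 0.0
    return len(rects) / float(len(squares))

def ratio_to_mph(ratio, max_ratio=5.0, max_mph=150.0):
    # Ratios above ~6 carry no information (noise or a static image)
    if ratio > 6.0:
        return None
    return min(ratio, max_ratio) / max_ratio * max_mph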

Additionally, to remove false positives, a low-pass filter was applied when computing the speed and generating a JSON object with the information extracted from the video feed.
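
A simple way to do that filtering is an exponential moving average over the per-frame speeds (the actual filter in the repository may be implemented differently; the alpha value here is an assumption):

def low_pass(speeds, alpha=0.3):
    # Exponentially smooth the per-frame speeds to suppress false positives
    smoothed = []
    prev = None
    for s in speeds:
        if s is None:  # frames flagged as noise are skipped
            continue
        prev = s if prev is None else alpha * s + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed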

I ran into issues with VideoWriter, so instead I store all the processed result images in a folder called "Videos." The resulting video can then be generated with FFmpeg as follows, and the code produces a JSON dump for each video generated:

ffmpeg -r 1/5 -i img%04d.png -c:v libx264 -r 30 -pix_fmt yuv420p out.mp4
{"name": "video2", "url": "https://youtu.be/M5Yqf38BZFc", "lat": "6.3453489", long:"-80.2466852", "data": [84.9678135385856, 79.60910098458474, 79.41670132581257, 75.42241593223598, 78.19563803555593, 65.56233203743804, 62.501120420257415, 90.02225734791524, 82.48358932733932, 87.88036269286269, 69.28044871794872, 99.2660984848485, 74.38048756798756, 80.67856449106449, 45.89786428848928, 72.84379856254856, 66.00986513486514, 83.3637508325008, 88.89919656787255, 100.63683134353609, 78.20004894752464, 70.53741000448593, 84.57500257513873, 36.72310943545803, 39.902204810583285, 66.87129811566815, 78.32142857142857, 68.77380952380953, 81.72871572871573, 76.74702380952381, 67.85416666666667, 58.186011904761905, 67.49900793650794, 69.3050595238095, 74.4672619047619, 64.0625, 70.11309523809524, 64.92113095238095, 69.61904761904762, 54.1026477177793, 33.61351373915387, 41.35319076563271, 56.99321715865834, 83.90395021645021, 36.04464285714286, 65.86309523809524, 41.895833333333336, 46.88365005092946, 60.79374098124098, 57.11404220779221, 78.36061507936509, 64.97614538239537, 83.39272879897878, 73.59424603174602, 88.63045634920634]}

To facilitate embedding the videos, I simply uploaded them to YouTube; the links can be found in video1.json, video2.json, video3.json, and video4.json. The code for the site demo shown here is in the storm-eye directory.

The Google Maps API is used to create the map with the following markers:

 latlng[0] = new google.maps.LatLng(26.0244368, -80.1645212);
 latlng[1] = new google.maps.LatLng(26.3453489, -80.2466852);
 latlng[2] = new google.maps.LatLng(26.3453149, -80.2466853);
 latlng[3] = new google.maps.LatLng(26.4051288, -80.1205341);

A "click" listener is added via google.maps.event.addListener to load an iframe with page1.html:

marker[1] = new google.maps.Marker({
        position: latlng[1],
        map: map
    });
// Open an InfoWindow containing the page1.html iframe when the marker is clicked
google.maps.event.addListener(marker[1], "click", function(){
        bubble = new google.maps.InfoWindow({
          content: '<iframe title="page1" type="text/html" width="480" height="390" src="../page1.html" frameborder="0"></iframe>'
        });
        bubble.open(map, marker[1]);
    });

As a result, the map includes all the objects required for this demo.


Eye of the Storm – Screen Capture
