IoT One Cloud: A Preview

Added by Daniel Wisnewski 4 months ago

A brief preview of what IoT One Cloud is: the future of IoT. Watch how quickly we can build an IoT-enabled device with the security and reliability of XMPP!

IoT One Cloud: What will you Create?

Autonomous Obstacles Detecting Vehicle : Autonomous Robot v1

Added by Pawel Zaborny 4 months ago

After a few weeks of work, the connection between the two most important parts of the project is ready to be shown. This is the first step in introducing the robot to an autonomous driving mode.

So far the goal has been to create a movement control and vision system able to recognize objects and move relative to them. This time I want to show the general features and problems related to that topic. In the first video, we can easily see how the robot reacts to visible objects and tries to adjust its position to point at the single orange ball.

After seeing this video, the first questions that will probably come to mind are: why did the robot decide to point at the second ball when it appeared, and why is it still rotating after pointing at the ball?

The answer to the first question is related to the algorithm that processes the video stream. It tries to find the smallest rectangle that will become the bounding box for the biggest detected object. So if there are two identical objects, the program always chooses the closest and biggest one.
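The selection logic described above can be sketched in plain Java. Note this is an illustration only; the `Box` class and method names are invented for the example and are not the project's actual code:

```java
// Hypothetical sketch: choosing the biggest detected bounding box.
import java.util.Arrays;
import java.util.List;

public class BiggestObject {
    static final class Box {
        final int x, y, w, h;
        Box(int x, int y, int w, int h) { this.x = x; this.y = y; this.w = w; this.h = h; }
        int area() { return w * h; }
    }

    // Among all detected boxes, return the one with the largest area.
    // Of two identical objects, the closer one appears bigger on camera,
    // so this also picks the closest one.
    static Box largest(List<Box> boxes) {
        Box best = null;
        for (Box b : boxes) {
            if (best == null || b.area() > best.area()) best = b;
        }
        return best;
    }

    public static void main(String[] args) {
        Box far = new Box(10, 10, 20, 20);   // identical ball, farther away
        Box near = new Box(60, 40, 45, 45);  // same ball, closer => bigger on screen
        Box target = largest(Arrays.asList(far, near));
        System.out.println("target area = " + target.area()); // prints 2025
    }
}
```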

To answer the second question, I'll present another video from a different perspective.

This time we can easily notice that the robot cannot instantly aim at the detected object. This is a very common issue in control theory, and it's called overshooting. But it's not a problem, and we can see that the error decreases over the next iterations.
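The overshoot-and-settle behavior can be illustrated with a minimal proportional-correction loop. The gain value and names here are invented for the illustration, not taken from the robot's actual controller:

```java
// Illustrative only: a proportional controller whose gain is high enough
// to overshoot the target heading, with the error shrinking each iteration.
public class Overshoot {
    // One control step: turn by gain * error (gain > 1 causes overshoot,
    // because each correction turns past the target).
    static double step(double heading, double target, double gain) {
        return heading + gain * (target - heading);
    }

    public static void main(String[] args) {
        double heading = 0.0, target = 90.0, gain = 1.5;
        for (int i = 0; i < 5; i++) {
            heading = step(heading, target, gain);
            System.out.printf("iteration %d: heading = %.2f%n", i, heading);
        }
        // The heading oscillates around 90 (135, 67.5, 101.25, ...),
        // with the error halving in magnitude on every iteration.
    }
}
```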

Another noticeable thing is how the robot moves and the fact that these movements are not smooth at all. First of all, our robot is not equipped with encoders that would give us information about wheel rotation. Instead, we only receive feedback from the vision system, which does not provide as good accuracy as encoders. So we decided to use a control system that is not a typical PID controller but something closer to a Kalman filter.

We measured how much time it takes the robot to make a 360-degree rotation at full throttle on different surfaces, and took the mean of those times as an initial constant. We can then halve that time to make a 180-degree rotation, or divide it by 4 to make a 90-degree rotation. But it's quite obvious that the result will never be exactly 180 or 90 degrees, and the size of that error depends on many different factors. So the next task is to build a control system that will estimate the next movement, measure the error, and adjust its parameters to increase accuracy. After that come some safety features and, of course, the capability to move toward an object.
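The timed-rotation idea above can be sketched as follows. The calibration numbers and method names are made up for illustration; the actual constants would come from the robot's own measurements:

```java
// Sketch of timed rotation without encoders, under stated assumptions.
public class TimedRotation {
    // Mean measured time (ms) for a full 360-degree spin at full throttle,
    // averaged over runs on different surfaces during calibration.
    static double fullSpinMs(double[] measuredMs) {
        double sum = 0;
        for (double m : measuredMs) sum += m;
        return sum / measuredMs.length;
    }

    // Motor run time for a desired angle, scaled from the 360-degree constant.
    static double durationMs(double fullSpinMs, double degrees) {
        return fullSpinMs * degrees / 360.0;
    }

    // After a move, compare the commanded angle with the angle actually
    // observed by the vision system and correct the constant accordingly.
    static double adjust(double fullSpinMs, double commandedDeg, double observedDeg) {
        return fullSpinMs * commandedDeg / observedDeg;
    }

    public static void main(String[] args) {
        double c = fullSpinMs(new double[] {1900, 2100, 2000}); // mean: 2000 ms
        System.out.println(durationMs(c, 180)); // half the spin time: 1000.0
        System.out.println(durationMs(c, 90));  // a quarter: 500.0
        // The robot overshot: we asked for 90 degrees but the vision system
        // observed 100, so the constant shrinks for the next move.
        c = adjust(c, 90, 100);
        System.out.println(durationMs(c, 90)); // now 450.0
    }
}
```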

Autonomous Obstacles Detecting Vehicle : Object detection deployed on Raspberry Pi

Added by Pawel Zaborny 6 months ago

This is what our first task looks like. We want the robot to detect one of these two objects and then give us feedback about its position in the console.

As we can see, a random HSV range setup doesn't give us any meaningful results. The robot converts the camera view to a binary representation and shows it in the right window. This is called a mask, and the white areas represent our target. What we can see now are our objects, plus tons of noise.

After adjusting the saturation and value parameters, we remove the noise caused by reflected sunlight captured by the camera. As a result, we get a clear picture of our objects. In the original camera view the program has added green pointers around the orange ball; this is because the program tries to find the biggest object among the white areas in the right window and then marks it in the left window. This approach is called finding bounding boxes and was chosen due to its low complexity, which doesn't cause a decrease in FPS.

At this point we still haven't defined the exact color that we want to follow. Let's say that we want to follow the yellow ball. All we need to do is slightly adjust the last parameter, "hue". And that's the result: we have marked the yellow ball, and information about its bounding box position is printed to the console 30 times per second.
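The HSV thresholding described in this post can be sketched without OpenCV so the example stays self-contained. The ranges below are invented for illustration, not the project's actual values:

```java
// Illustrative per-pixel HSV threshold test, the core of building a mask.
public class HsvMask {
    // A pixel belongs to the mask (white) when each HSV channel falls
    // inside the configured range; hue is in [0,180), sat/val in [0,255],
    // following the common OpenCV-style 8-bit HSV convention.
    static boolean inRange(int h, int s, int v,
                           int hMin, int hMax, int sMin, int vMin) {
        return h >= hMin && h <= hMax && s >= sMin && v >= vMin;
    }

    public static void main(String[] args) {
        // Orange-ish range (hypothetical): hue 5-20.
        System.out.println(inRange(12, 200, 180, 5, 20, 100, 80));  // true
        // Sunlight reflection: bright but washed out (low saturation),
        // rejected once the minimum saturation is raised.
        System.out.println(inRange(12, 40, 250, 5, 20, 100, 80));   // false
        // Retargeting the yellow ball only shifts the hue range, e.g. 25-35.
        System.out.println(inRange(30, 210, 190, 25, 35, 100, 80)); // true
    }
}
```

Raising `sMin` and `vMin` is what removes the sunlight noise, while changing only `hMin`/`hMax` switches the tracked color from orange to yellow.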

Public Release Documents: Update for IoT One Cloud

Added by Daniel Wisnewski 6 months ago

We’re hard at work developing and bringing the world of IoT to your Raspberry Pi, and we’re excited to show off some of the work we’ve been doing. Today we released a video showing compatibility with an LED matrix screen. While this alone is a cool use of the technology, essentially letting you draw patterns on the matrix, there’s something else unseen being shown off in the video: speed.

The video shows the cell phone and the Pi right next to each other, but they are communicating half a world away! The mobile device is sending commands from Poland to a server located on the West Coast of the United States, which then sends the commands back to the Raspberry Pi, again in Poland. Once the LED status has changed on the Pi, that information is published back through the server on the West Coast and on to the cell phone. That’s a round trip of over 10,000 miles in about as much time as it takes you to blink. All this traffic is secure and encrypted. IoT One Cloud from Tigase is poised to give you internet connectivity to devices you never thought could be controlled or deliver information over the web. What will you create?

Public Release Documents: Security notification and confusing emails

Added by Artur Hefczyc 7 months ago

Many of our users and customers received email notifications from our system about changes to their accounts, such as email address changes and other information.

After investigation, it turned out these emails were generated by our development system, on which the team decided to change all users’ email addresses to avoid spamming them with unwanted email notifications that could have been generated while testing new features.

Unfortunately, this resulted in exactly what we tried to avoid: a flood of emails notifying our users and customers about changes to their accounts.

We are very sorry for the inconvenience, the confusion, and the unnecessary emails.

The good thing is that there is nothing to worry about: accounts have not been hacked or compromised.

Autonomous Obstacles Detecting Vehicle : Investigating how to deploy an OpenCV JavaFX GUI

Added by Pawel Zaborny 7 months ago

First of all, after creating a valid pom.xml and including all dependencies and native libraries, I realized that the native library included in the jar file has to be built directly on the Raspberry Pi, as the ".so" file has to be built for a specific architecture, in this case an ARM processor. The process of building the OpenCV dependencies is the same for all devices and is posted on the wiki here: OpenCV.

Another problem was related to the lack of JavaFX support in Java 8 on ARM processors, which required redesigning the code from JavaFX to Java Swing.

The last issue was caused by the lack of v4l2 on the Raspberry Pi. It's really misleading, as the typical camera test "raspistill -v -o test.jpg" doesn't require v4l2, but every attempt to run the camera from a Java application will fail without it. Fortunately, it can easily be fixed with a few lines of code, as explained here: Camera and V4L2.

Autonomous Obstacles Detecting Vehicle : Deploying software and communication with RaspberryPi

Added by Pawel Zaborny 8 months ago

In the last two weeks I made some minor changes to the code, added comments, and started deploying software to the robot. I connected to the RaspberryPi via ssh for quick access to the computer and via vnc to observe camera actions. I also started learning about Maven and Gradle. What took me most of the time was creating a Java OpenCV jar file, which isn't finished yet. I'll add the code to the repository as soon as I've checked that it works properly on the RaspberryPi.

