I've recently succeeded in building my first collision-avoidance Arduino robot with 2 DC motors, and it works pretty well. However, it doesn't yet move in a straight line when it should. I'm now studying which method I should implement to make the robot go straight. I've heard about using an IMU or encoders with feedback control. I pretty much understand how to use the encoders, but I'm not sure about the gyro. Should I use just one of those or a combination of them?
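For the encoder option, the simplest scheme I can think of is a proportional correction on the difference between the two encoder counts (a rough sketch only: the pins and gain are made up, and the tick counters are assumed to be updated in encoder interrupts elsewhere):
// Sketch: proportional straight-line correction from wheel encoder counts.
const int LEFT_PWM_PIN = 5;     // assumed motor-driver wiring
const int RIGHT_PWM_PIN = 6;    // assumed motor-driver wiring
const int basePwm = 150;        // nominal forward speed
const float Kp = 2.0;           // correction gain, to be tuned

volatile long leftTicks = 0, rightTicks = 0;   // incremented in encoder ISRs (not shown)

void driveStraight() {
  long error = leftTicks - rightTicks;          // >0 means the left wheel has travelled farther
  int correction = (int)(Kp * error);
  analogWrite(LEFT_PWM_PIN,  constrain(basePwm - correction, 0, 255));
  analogWrite(RIGHT_PWM_PIN, constrain(basePwm + correction, 0, 255));
}
The gyro, as I understand it, would correct for heading drift that the encoders cannot see (e.g. wheel slip), which is why I am unsure whether I need both.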
|
I am looking for a physics simulator which can accurately model a robot hand picking up an object. The main requirement is for accuracy / realism, rather than speed. It needs to be able to model soft bodies, such as the rubber "skin" on robotic finger tips. It also needs to be a dynamics engine, such that the object is actually moved around by the hand, modelling effects such as slippage.
From the research I have already done, there are two good candidates. First, GraspIt! (http://graspit-simulator.github.io/). This is open-source, and specifically designed for grasping, rather than physics simulation in general. Second, MuJoCo (http://www.mujoco.org/). This is a more general simulator, is a commercial product, and has been adopted by some big names such as DeepMind.
I have tried using the Bullet physics engine for robot grasping simulation, but soon realised that this was not going to be strong enough, because Bullet is really designed for games, and hence sacrifices realism for speed. However, I'm much more interested in something which is as realistic as possible, even if the computation is slow.
Does anyone have any suggestions as to how I can proceed? Anybody with any experience with GraspIt! or MuJoCo?
Thanks!
|
I'm developing a project which involves a Raspberry Pi 3 remote-control rover, and I need to know the exact location of the rover in a set field.
Let's say I have four logs, one in each corner of the square field (the goal right now is to extend this to any shape of field, with any number of corners), each of them equipped with (some kind of wave technology) that allows me to triangulate the position (based on signal intensity) of the Raspberry Pi rover.
The distance between logs should not be bigger than 30 m (~100 feet) and there is no line of sight guaranteed.
The question is: which kind of technology should I use, infrared, Wi-Fi, Bluetooth, radio, ultrasound, etc.? Or is there any better approach to this problem?
|
I am using a physics simulator to simulate a robot arm. For a revolute joint in the arm, there are two parameters which need to be specified: damping and friction. If a torque is applied to the joint, both the damping and the friction seem to reduce the resulting motion of the joint. But what is the difference between the damping and the friction?
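My current mental model (which may well be wrong, hence the question) is something like
$$\tau_{net} = \tau_{applied} - b\,\dot{q} - \tau_c\,\mathrm{sign}(\dot{q})$$
i.e. damping would be a torque proportional to the joint velocity (coefficient $b$), while friction would be a roughly constant-magnitude torque $\tau_c$ that always opposes the direction of motion. So damping vanishes at zero velocity, whereas friction does not. Is that the distinction the simulator parameters are making?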
|
I am building my first drone.
Objective: control the drone over Wi-Fi from a phone or laptop using OpenPilot's ground station software.
I have an Arduino Mega 2560, a CC3D OpenPilot flight controller, and a Raspberry Pi with built-in Wi-Fi and Bluetooth.
Now I am not able to understand how to go forward: should I connect the Arduino to the OpenPilot CC3D flight controller, or the Raspberry Pi directly to the CC3D flight controller?
Do I really need the Arduino 2560 now?
Also, how do I connect the Raspberry Pi to the CC3D flight controller, and how do I mock the PWM signals?
|
I have a 'Baron' robot frame with 4 fixed wheels, all driven by a motor. At the moment I'm thinking of handling it like a 2-wheel differential drive: left and right wheels would receive the same signal. You can think of it as a tank on tracks, except there is no link between the two wheels on each side.
Does anyone have a different idea about this?
PS: The purpose of the robot will be to know its exact location. I will use a Kalman filter (EKF) to do sensor fusion of the odometry and an IMU with accelerometer, gyro and magnetometer. So in the Kalman filter I add the odometry model of a differential drive robot.
|
I want to rotate the entire magnitude of a 3D vector into one axis using quaternion rotations.
The reason is that I want to align the X and Y axes of my smartphone with the X and Y axes of my vehicle in order to detect lateral and longitudinal acceleration separately on these two axes. Therefore I want to detect the first straight-line acceleration of the car and rotate the whole acceleration vector into the heading axis (X axis) of the phone, assuming straight forward motion.
How do I achieve this?
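The construction I have been looking at is the "shortest arc" quaternion between the measured acceleration direction and the phone's X axis (a sketch; it assumes both vectors are already normalized and not anti-parallel):
#include <cmath>

struct Quat { double w, x, y, z; };

// Shortest-arc quaternion that rotates unit vector a onto unit vector b:
// vector part = a x b, scalar part = 1 + a.b, then normalize.
Quat shortestArc(double ax, double ay, double az,
                 double bx, double by, double bz) {
  Quat q;
  q.w = 1.0 + (ax * bx + ay * by + az * bz);
  q.x = ay * bz - az * by;
  q.y = az * bx - ax * bz;
  q.z = ax * by - ay * bx;
  double n = std::sqrt(q.w * q.w + q.x * q.x + q.y * q.y + q.z * q.z);
  q.w /= n; q.x /= n; q.y /= n; q.z /= n;
  return q;
}

// e.g. Quat q = shortestArc(axNorm, ayNorm, azNorm, 1.0, 0.0, 0.0);
// and then rotate every subsequent accelerometer sample by q.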
|
An Inertial Measurement Unit (IMU) is an important sensor used in aerial robotics. A typical IMU will contain an accelerometer and a rate gyroscope. Which of the following information does a robot get from an IMU?
Position
Orientation
Linear velocity
Angular velocity
Linear acceleration
Angular acceleration
I don't think it gets its orientation information from IMU. The last time I took the test, I said that all but the first two are true. I failed.
|
All of the examples of keeping a double/triple inverted pendulum balanced using a PID controller I've seen seem to be on a cart. Like this one https://www.youtube.com/watch?v=cyN-CRNrb3E
How come the PID controller always controls a cart rather than a servo that holds the first pendulum? The second/third pendulum could be connected loosely on the first pendulum and the PID controller controls the first pendulum. Is it because servos tend to be too slow or are there other reasons?
|
I took a course to have a better understanding of drones and their design. At the end of the course there was a test question that I got wrong and I would like to understand why.
I was supposed to select the choices that best describe SLAM.
and the possible answers were:
1. Estimates the location of features in the environment?
2. Controls the robot's flight through the environment?
3. Causes the robot to avoid obstacles in the environment?
4. Navigate in a cluttered environment?
5. Estimates the position and orientation of the robot with respect to the environment?
At first I knew that at least 3 and 4 were right because I watched a drone doing these things. I also thought that the last answer was linked to these two so I said yes to it too. Finally, I thought that the only thing that was still controlled by the user would be the flight...
Yet I failed again... Therefore what does Simultaneous Localization And Mapping (SLAM) software do?
|
Please can somebody explain to me the difference between Position Control, Velocity Control, and Torque Control? Specifically, I am thinking in terms of a robot arm. I understand that position control tries to control the position of the actuators, such that the error signal is the difference between the current position and the desired position. Velocity control is then trying to control the velocity of each actuator, and torque control is trying to control the torque of each actuator.
However, I don't understand why these are not all the same thing. If you want to send a robot arm to a certain position, then you could use position control. But in order to move an actuator to a certain position, you need to give it a velocity. And in order to give it a velocity, you need to give it a torque. Therefore, whether the error is in position, velocity, or torque, it always seems to come back to just the error in torque. What am I missing?
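To make my confusion concrete, my current picture is that the three controllers are usually combined as a cascade, where each loop closes around a different measured quantity (a sketch with purely illustrative gains):
// Cascaded control sketch: the outer position loop commands a velocity,
// the inner velocity loop commands a torque (motor current).
struct JointState { double pos, vel; };

double positionLoop(double posRef, const JointState& s) {
  const double Kp_pos = 5.0;                 // illustrative gain
  return Kp_pos * (posRef - s.pos);          // output: velocity command [rad/s]
}

double velocityLoop(double velCmd, const JointState& s, double dt, double& integ) {
  const double Kp_vel = 0.8, Ki_vel = 2.0;   // illustrative gains
  double err = velCmd - s.vel;
  integ += err * dt;
  return Kp_vel * err + Ki_vel * integ;      // output: torque command [Nm]
}

// The innermost torque loop would then close around measured motor current,
// since torque is (approximately) proportional to current.
So is the difference simply which measured quantity each loop's error is computed from?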
|
This is my first post here, so hello all. I really hope I can learn a lot from you guys.
I am trying to build a robotic arm to carry an object and put it inside of different boxes that are placed in different fixed locations.
I found a few robotic arms that can do it, but I am still trying to find the right motor for the job. I read a lot online about the different motors, but I am not sure which one to pick. Since the boxes are located in fixed places, the motors have to move in a precise way, so, according to my research, servo motors are the ones I should use.
Since it is a low-budget project (I am a college student), I wasn't sure which motor to choose (there are a lot of servo motors out there). I found several servo motors online, for example an analog-feedback servo, and I was wondering what is the best servo motor I can buy for a really low-cost project? I think I can spend about $10-20 per motor (I need 5 motors).
I already have an RPi and I know that pin 18 is the PWM pin that controls the motor's movement precisely, but before I purchase a PWM controller and additional motors I need to run some tests to find out how precise the motor is.
By the way, how can I calculate the amount of weight the motor can handle?
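My rough static estimate so far (ignoring the arm's own weight, which I know is optimistic) is that the torque a joint must hold is the payload weight times its horizontal distance from that joint:
$$\tau \ge m\,g\,d$$
For example, holding $m = 0.2$ kg at $d = 0.15$ m from a joint would need roughly $0.2 \times 9.81 \times 0.15 \approx 0.3$ Nm, i.e. about 3 kg·cm of servo stall torque, before adding any margin for the arm's own weight and for acceleration. Is that the right way to think about it?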
Any ideas and information will be greatly appreciated.
Thank you
|
I thought there were twelve ways:
Six for the directions between two propellers,
Six others for the rotations about these directions.
But according to Vijay Kumar, Dean of Penn Engineering, it seems that I was wrong...
Then I read this article about modeling and robust trajectory tracking control for a novel six-rotor unmanned aerial vehicle, and this one about navigation and autonomous control of a hexacopter in indoor environments, but was never able to find such information.
I then guessed that 3 of the rotors could spin in one direction and the other three in another, which would add 6 other ways of rotating and therefore 6 others for simply flying, but that is only a guess.
|
Good day everyone :)
I am an undergraduate student working on a project involving the use of high-torque, small-sized DC motors for designing a person-following trolley bag. The problem is to use small-sized motors while still maintaining the usability and efficiency needed to carry loaded luggage.
I have been looking for motors in local stores as well as at RS Components and Element14. However, I am not sure if my choices are the right fit, as I am at a loss about what specifications to look for when selecting a particular motor for this application. I have also tried to look for the motors used in current products that could fit my application, such as today's electric skateboards, but unfortunately had no luck finding their suppliers.
Basically, the question I would like to ask is: what specifications or calculations can I use to select the proper motors given the size constraints and weight-carrying requirements? Or does anyone have suggestions for common motors that are already normally used for this application? My target maximum load capacity is 20 kg.
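The rough sizing calculation I have attempted so far is (all numbers below are just illustrative guesses):
$$F = m\,g\,(C_{rr}\cos\theta + \sin\theta), \qquad \tau_{wheel} = \frac{F\,r}{n_{driven}}, \qquad P \approx F\,v$$
For example, with total mass $m = 25$ kg (bag plus luggage), wheel radius $r = 0.05$ m, rolling-resistance coefficient $C_{rr} = 0.02$, a 5° ramp and two driven wheels, I get $F \approx 26$ N, about 0.66 Nm per wheel, roughly 290 rpm at a 1.5 m/s walking pace, and around 40 W of total mechanical power before gearbox losses and acceleration margin. Is this the right kind of calculation, and are there other specifications I should be checking?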
Thank you!
|
What are the degrees of freedom (DOF) of the Rostock delta-robot 3D printer (a delta mechanism that consists of three prismatic joints)?
Here's the link to the delta mechanism I'm referring to:
https://www.youtube.com/watch?v=AYs6jASd_Ww.
Thanks in advance for your help!
|
For instance, how would you hook up an electric pump to communicate with a motherboard? Let's say I buy an electric pump and attach it to some sort of metal structure so that, when the pump is turned on, it moves the structure. How would I hook up the pump to my motherboard so that I can program it?
|
For hobbyists, you go to a store to buy products. The prices for these products are all clearly listed in the store catalog, and you can easily search for parts by lowest price or read customer reviews of the products.
For industrial engineers building complex machines, how do they buy components? Or don't they worry about cost, and leave it to their employer to eat the cost as a part of doing their line of work?
Is it possible to "shop around" for low-cost engineering components?
It is unclear to me how someone building a robot on their own as a small one-man startup can make the step from the world of toy robots, to larger and more industrial robotic components.
Most of the non-hobbyist stuff is hidden away and not exposed to the world. While a product catalog might be available, there are no prices listed for anything.
For larger industrial components, there does not seem to be any realistic way to shop around for the lowest price or best value, since pricing for much of the big stuff is basically unavailable.
For me personally, I am interested in trying to build my own powered exoskeleton on a middle class American income, so I can't afford to be paying 1000 bucks for a single electrohydraulic proportioning servo valve, when I'll need probably 50 of them. But shopping around for low cost ones is basically impossible as far as I can determine, because pricing info is generally not available or searchable from the majority of manufacturers.
|
I am at the moment trying to read and understand the paper Task Constrained Motion Planning in Robot Joint Space, but seem to have a hard time understanding the math.
The paper describes how to perform task constrained motion planning in cases where a frame is constrained to a specific task.
The problem the paper tackles is that, when sampling in joint space, randomized planners typically produce samples that lie outside the constraint manifold. The method they propose uses a specified motion constraint vector to formulate a distance metric in task space and projects samples to within a tolerance distance of the constraint.
Given this, I am a bit confused about some simple terms they define in this paper.
For example: how is a task space coordinate defined? What information does it contain?
They compute $$\Delta x = T_e^t(q_s)$$ which is the transformation of the end effector with respect to the task frame.
What I don't get is: why the end effector? And why the end effector with respect to the task frame?
Secondly.
Later in the paper they write down an expression that relates the task space motion to the joint space motion. They do it using the Jacobian, but seem to miss explaining (in my opinion) what $E(q_s)$ actually does.
$$J(q_s) = E(q_s)J^t(q_s)$$
What is said about it in the paper is that
Given the configuration $q_s$, instantaneous velocities have a linear
relationship $E(q_s)$
Why the need for "instantaneous"? What is the definition of an instantaneous component? How does it differ from the information given by the Jacobian?
Basically, I don't understand how and why the mapping is defined as it is.
|
I have a laser pointer on a handle grip and I'm trying to keep the laser pointer's yaw direction fixed while the handle can rotate at around 10 deg/s. So I have the laser pointer on a stepper motor and an accelerometer/gyro in the handle. What's a good way of maintaining its yaw direction? Could I simply turn the shaft according to the accelerometer/gyro's yaw readings, or is control theory (PID) needed?
That is, if my stepper makes 4096 steps/rev, one step gives 0.0879 deg. If the handle is turned by, say, 0.879 deg, then turn 10 steps in reverse (instantaneously). Would this be jerky, and would PID be needed?
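In other words, the naive open-loop version I have in mind looks like this (a sketch; yawDeg() and stepMotor() are placeholders for my IMU fusion and stepper driver):
// Sketch: hold the pointer's absolute yaw by stepping against handle rotation.
const float DEG_PER_STEP = 360.0 / 4096.0;    // ~0.0879 deg per step
float targetYaw = 0.0;                        // yaw the pointer should keep
long stepsCommanded = 0;                      // steps already issued

float yawDeg();          // fused handle yaw in degrees (placeholder, provided elsewhere)
void stepMotor(long n);  // step n steps, sign = direction (placeholder, provided elsewhere)

void holdYaw() {
  float error = yawDeg() - targetYaw;               // how far the handle has turned
  long stepsNeeded = lround(error / DEG_PER_STEP);  // steps required to cancel it
  long delta = stepsNeeded - stepsCommanded;
  if (delta != 0) {
    stepMotor(-delta);            // step opposite to the handle rotation
    stepsCommanded += delta;
  }
}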
Any thought appreciated.
|
I want to make a object tracking quadcopter for a project. While I'm using the Arduino Mega 2560 as the flight controller, I was thinking of using an additional offboard microcontroller/board for getting data from the onboard camera,which would then send an appropriate command to the onboard Arduino.
I was hoping someone could provide clarification on the advantages/disadvantages of doing object tracking with either choice.
Thanks!
|
I haven't found any modules to charge my 11.1 V LiPo battery, only ones for 3.7 V cells with a 5 V power supply. How can I handle charging it through a micro-USB connector on my robot platform?
|
I'm unsure if this is the correct community to ask this question (vs. Electronics or Aviation Stack Exchange, for example), but I recently purchased a Hubsan x4 HD video drone from Amazon.
This is my second Hubsan drone so I am already familiar with using the recording feature. However, after every recording, the recordings are the correct length, with the correct audio, but the image is black. I tried formatting the micro SD, using different micro SDs, reading up on forums, etc. but nothing seems to do the trick.
Is mine defective, or has someone had this issue and has been able to solve it?
|
I am trying to measure the height of water inside a column. The column is 50mm in dia and 304mm long. I have mounted the sensor just above the column.
To measure the accuracy of the sensor, I filled the column up to a known value (a) and got the average sensor reading (b). a+b should give me the height of the sensor from the base of the column.
Repeating this for different values of a, I got very different values for (a+b). see attached chart.
My questions are:
Is the sensor expected to have errors of this order?
Is my setup of confining the sensor inside a column producing such errors?
Any other ideas to get the water column height? Please note that during the actual test the water inside will be oscillating (up and down). I am thinking of making a capacitive sensor using aluminium foil: water will work as the dielectric and the level of water will determine the capacitance.
P.S. I also did some open tests (not through a column) to get the distance of a fixed object, and it was quite accurate.
Any help is appreciated.
Arduino Code
#include <NewPing.h>
#define TRIGGER_PIN 7 // Arduino pin tied to trigger pin on the ultrasonic sensor.
#define ECHO_PIN 8 // Arduino pin tied to echo pin on the ultrasonic sensor.
#define MAX_DISTANCE 200 // Maximum distance we want to ping for (in centimeters). Maximum sensor distance is rated at 400-500cm.
double DB_ROUNDTRIP_CM = 57.0;
NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE); // NewPing setup of pins and maximum distance.
void setup() {
Serial.begin(9600); // Open serial monitor at 9600 baud to see ping results.
}
void loop() {
delay(100);
unsigned int uS = sonar.ping();
double d = uS / DB_ROUNDTRIP_CM;
Serial.println(d);
}
|
I am searching for a way to minimize the size of a stereo vision module and cannot find any ICs that will combine and sync two MIPI CSI-2 (4-lane) data streams without an FPGA and too much code. There was one online (MAX7366A 3D Video Combiner/Synchronizer with two MIPI CSI-2 inputs and one MIPI CSI-2 output) but the product is not publicly available. Does anyone have knowledge of an arrangement of ICs that I could try?
|
I am new to robotics. I recently came into contact with opcodes, so my teacher had me use a serial port app for Android to enter the opcodes, but the robot did not show any reaction.
I am using the communications cable with adapter on an Android phone.
App used: DroidTerm (USB Serial port)
Serial port settings:
Baud: 115200 (19200 also tried)
Data bits: 8
Parity: None
Stop bits: 1
Flow control: None
I tried to enter opcodes (e.g. 128, 135, 134, ...), but there is no response from either the phone or the robot.
I want to control the robot according to the opcode instructions so that it performs the specified actions.
|
I have an arduino wired to an MPU6050 breakout board. The arduino continuously collects accelerometer and gyroscope data from the MPU6050 and calculates angle and velocity.
Simply plotting the vector components (x,y,z) of this data does not allow one to reason about the motion of the sensor or robot. It's possible, though not easy, to do sanity checks (Is the sensor oriented as expected? Is gravity working?). But it's very difficult to look at a x,y,z plot of accelerometer log data and imagine what the robot did for instance.
I was wondering if there is some sort of tool or Python library to visualise accelerometer and gyro, or IMU data? (I'm looking for something like this- https://youtu.be/6ijArKE8vKU)
|
I am making a project with a 4-wheeled differential robot to do visual SLAM using a stereo rig. I have some encoders to measure the displacement and the steering angle of the robot, and I want to use the odometry motion model in the FastSLAM algorithm.
To use the odometry motion model you need to calculate the values it needs from the odometry reading (incremental encoders), $u_t=(\bar{x}_{t-1},\bar{x_t})$ where $\bar{x}_{t-1}=(\bar{x}\>\bar{y}\>\bar{\theta})$ and $\bar{x}_t=(\bar{x}'\>\bar{y}'\>\bar{\theta}')$ are the previous and the current pose extracted from the odometry of the vehicle.
My question is about how to obtain those values from the encoders. I guess that in this case I would need to obtain the equations from the geometric model for the differential robot:
$D_L=\frac{2\cdot\pi \cdot R_L}{N_c}\cdot N_L$
$D_R=\frac{2\cdot\pi \cdot R_R}{N_c}\cdot N_R$
$D_T=\frac{D_L+D_R}{2}$
$\Delta\theta=\frac{D_L-D_R}{L}$
where $D_L$ is the advance of the left wheel, $D_R$ is the advance of the right wheel, $R_L$ and $R_R$ are the left and right wheel radii, $N_L$ and $N_R$ are the pulse counts read from the left and right encoders, $N_C$ is the number of encoder pulses per wheel revolution, $D_T$ is the total distance travelled by the robot and $\Delta\theta$ the change of heading. $L$ is the distance between the wheels.
Using those equations is possible to obtain the pose in every time step:
$\bar{x}_{t}=\bar{x}_{t-1}+D_T\cos(\theta_{t-1})$
$\bar{y}_{t}=\bar{y}_{t-1}+D_T\sin(\theta_{t-1})$
$\bar{\theta}_{t}=\bar{\theta}_{t-1}+\Delta\theta$
So those last values are the ones I need to feed into the odometry motion model, and then add Gaussian noise to them.
Am I right? Or is there another way of computing the pose from odometry for a differential 4-wheel robot?
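In code, my understanding of the update would be something like this (just a sketch; variable names are mine, and the sign of $\Delta\theta$ depends on the chosen convention):
#include <cmath>

struct Pose { double x, y, theta; };

// Pose update from the incremental encoder counts accumulated since the last call.
// R: wheel radius [m], Nc: encoder pulses per wheel revolution, L: wheel separation [m].
Pose odometryUpdate(const Pose& prev, long nLeft, long nRight,
                    double R, double Nc, double L) {
  double dL = 2.0 * M_PI * R * nLeft  / Nc;   // advance of the left wheel
  double dR = 2.0 * M_PI * R * nRight / Nc;   // advance of the right wheel
  double dT = 0.5 * (dL + dR);                // advance of the robot
  double dTheta = (dR - dL) / L;              // heading change

  Pose p;
  p.x = prev.x + dT * std::cos(prev.theta);
  p.y = prev.y + dT * std::sin(prev.theta);
  p.theta = prev.theta + dTheta;
  return p;
}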
|
I have a heated compartment, inside which, there is another object heated up by independent heater. I want to control temperatures of both chamber and the object.
I could achieve this by simple PID (or PI) controllers for both chamber and object, but I would like to try more thoughtful approach :) I have two temperature sensors, and two PWM outputs for heaters. How do I identify a model for an object I want to control?
|
I'm new to robotics and I'm looking to make a 5-6 axis robotic arm out of stepper motors but I honestly don't know how much torque I should have for each part. Below I have described in more detail what my current plan is but I'm really not sure as to how much I really should be spending on each of these joints.
My general plan for this project was to make an arm that, when fully extended, would only be around 40-50 cm (max) long. It would be made of lightweight aluminum and I am hoping for it to weigh only a couple of pounds when done.
Here is my current list of actuators for each of the joints:
(Bottom = 1, Top = 6)
1st joint, (Nema 23 CNC Stepper Motor 2.8A)
2nd and 3rd joints
4th, 5th and 6th joints
My real question is: is this overkill, or is it not enough for what I'm trying to make? I don't need it to pick up a lot of weight, at most 1 to 2 kilos, and I highly doubt I will ever pick up more than that. Anyway, I just wanted to see if this is sufficient for my project.
|
I'm trying to design a control system for a robot that tracks a moving object. I want the robot to match the position and velocity state of the object: not simply to arrive at the position, but to arrive at the position with the same velocity as the object.
Object velocity and position data will be provided externally.
I'm not sure if a traditional PID controller (with velocity control) using just a position-based error is enough. Wouldn't a position-only goal result in tracking that always lags behind?
Is PID what I want or should I be looking at something else like trajectory controls?
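To be concrete, the structure I am considering is position feedback plus a velocity feedforward term (a sketch with made-up gains, reduced to one dimension):
// Sketch: position feedback plus target-velocity feedforward.
struct Target { double pos, vel; };

double trackingCommand(const Target& tgt, double robotPos, double robotVel) {
  const double Kp = 2.0, Kd = 0.5, Kff = 1.0;   // illustrative gains
  double posErr = tgt.pos - robotPos;
  double velErr = tgt.vel - robotVel;
  // Feeding the target velocity forward means zero position error no longer
  // implies zero commanded velocity, which is what removes the steady-state lag.
  return Kff * tgt.vel + Kp * posErr + Kd * velErr;   // commanded robot velocity
}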
|
I'm making a target for an outdoor robot competition.
The target should automatically detect when a robot touches or hits it, and a hit can come from any direction (360 degrees).
I'm searching for the perfect sensor to detect a hit without getting false positives from wind.
My options right now are:
1- ultrasonic sensor (bad coverage)
2- tilt sensor (bad false-positive rate)
3- conductive wood (touch sensing)
I would like to know if someone has other ideas (affordable: less than $30 per target would be OK).
Edit: the target is static and just waits for a robot to touch it.
Edit: The specs are:
1- The target dimensions are 1 meter height, 0.5 meter width, 0.3 meter depth.
2- To trigger the target, the robot should come within around 10 centimeters of any point of the target surface.
3- To trigger the target, the robot needs to get as close as 10 centimeters or press with around 1 Newton of force; the robot might even throw an object that satisfies the previous condition.
4- Detection must come only from intentional touch.
5- The conductive wood triggers because a human is electrically conductive; this might not work when an object is thrown.
6- The target will be placed outdoors, so the sensor needs to be wind-resistant (not extreme wind, just around 20-25 km/h).
7- I prefer a sensor that detects touch (rather than proximity), because I estimate it would make the solution cheaper and more reliable (in terms of the number of sensors needed).
Thanks.
Guy
|
I'm developing a 5 axis robotic arm with stepper motors and I am getting around to ordering most of my parts. I plan on using 5 EasyDriver shields to drive all of my motors. I am also planning on using just a basic arduino uno board to go with it. So here are my questions:
Is there any alternative instead of buying a ton of Easy Drivers and connecting all of them to a single board?
And if there isn't, how would the setup look when using more than 3 stepper motors? This is the most useful picture I found; however, it only shows 3, and while I know I could plug in a 4th, I am unsure whether I could plug in a 5th.
|
I own an iRobot create2 on which I am planning to implement a control algorithm. After playing with the different drive commands, I noticed that changing the desired velocity values marginally doesn't seem to do anything.
Even the Drive PWM command that ranges from -255 to 255 seems to have an internal granularity that is bigger than 1.
In this video the create seems to change its driving direction nearly seamlessly, which I am not able to reproduce with the described behavior.
Does anyone have any suggestions?
|
I was looking at the contact points for the Atlas in the DRCsim package. Each foot has 4 contact points, one at each vertex of the rectangle. I'd like to know how these points are determined. I've tried looking at the ODE code, but C++ isn't my strong suit so I had some difficulty figuring out what was going on. What I understand is that ODE compares the geometries one by one; however, it's not possible to compare all points, so it only compares a select few. What I'm trying to understand is on what basis those particular points are selected. Why does the Atlas have the 4 contacts set up the way they are, and not some additional points on the heel? Can I add them myself?
Thanks.
|
I've been making my own quadcopter flight controller using Arduino Mega. This is the sample code I wrote in order to test the esc timers and motors:
byte channelcount_1, channelcount_2, channelcount_3, channelcount_4;
int receiverinput_channel_1, receiverinput_channel_2, receiverinput_channel_3, receiverinput_channel_4, start;
unsigned long channel_timer_1, channel_timer_2, channel_timer_3, channel_timer_4, current_time, esc_looptimer;
unsigned long zero_timer, timer_1, timer_2, timer_3, timer_4;
void setup() {
// put your setup code here, to run once:
DDRC |= B11110000; //Setting digital pins 30,31,32,33 as output
DDRB |= B10000000; //Setting LED Pin 13 as output
//Enabling Pin Change Interrupts
PCICR |= (1 << PCIE0);
PCMSK0 |= (1 << PCINT0); //Channel 3 PIN 52
PCMSK0 |= (1 << PCINT1); //Channel 4 PIN 53
PCMSK0 |= (1 << PCINT2); //Channel 2 PIN 51
PCMSK0 |= (1 << PCINT3); //Channel 1 PIN 50
//Wait till receiver is connected
while (receiverinput_channel_3 < 990 || receiverinput_channel_3 > 1020 || receiverinput_channel_4 < 1400) {
start++;
PORTC |= B11110000;
delayMicroseconds(1000); // 1000us pulse for esc
PORTC &= B00001111;
delay(3); //Wait 3 ms for next loop
if (start == 125) { // every 125 loops i.e. 500ms
digitalWrite(13, !(digitalRead(13))); //Change LED status
start = 0; //Loop again
}
}
start = 0;
digitalWrite(13, LOW); //Turn off LED pin 13
zero_timer = micros();
}
void loop() {
// put your main code here, to run repeatedly:
while (zero_timer + 4000 > micros());
zero_timer = micros();
PORTC |= B11110000;
channel_timer_1 = receiverinput_channel_3 + zero_timer; //Time calculation for pin 33
channel_timer_2 = receiverinput_channel_3 + zero_timer; //Time calculation for pin 32
channel_timer_3 = receiverinput_channel_3 + zero_timer; //Time calculation for pin 31
channel_timer_4 = receiverinput_channel_3 + zero_timer; //Time calculation for pin 30
while (PORTC >= 16) //Execute till pins 33,32,31,30 are set low
{
esc_looptimer = micros();
if (esc_looptimer >= channel_timer_1)PORTC &= B11101111; //When delay time expires, pin 33 is set low
if (esc_looptimer >= channel_timer_2)PORTC &= B11011111; //When delay time expires, pin 32 is set low
if (esc_looptimer >= channel_timer_3)PORTC &= B10111111; //When delay time expires, pin 31 is set low
if (esc_looptimer >= channel_timer_4)PORTC &= B01111111; //When delay time expires, pin 30 is set low
}
}
//Interrupt Routine PCI0 for Receiver
ISR(PCINT0_vect)
{
current_time = micros();
//Channel 1
if (PINB & B00001000)
{
if (channelcount_1 == 0 )
{
channelcount_1 = 1;
channel_timer_1 = current_time;
}
}
else if (channelcount_1 == 1 )
{
channelcount_1 = 0;
receiverinput_channel_1 = current_time - channel_timer_1;
}
//Channel 2
if (PINB & B00000100)
{
if (channelcount_2 == 0 )
{
channelcount_2 = 1;
channel_timer_2 = current_time;
}
}
else if (channelcount_2 == 1)
{
channelcount_2 = 0;
receiverinput_channel_2 = current_time - channel_timer_2;
}
//Channel 3
if (PINB & B00000010)
{
if (channelcount_3 == 0 && PINB & B00000010)
{
channelcount_3 = 1;
channel_timer_3 = current_time;
}
}
else if (channelcount_3 == 1)
{
channelcount_3 = 0;
receiverinput_channel_3 = current_time - channel_timer_3;
}
//Channel 4
if (PINB & B00000001) {
if (channelcount_4 == 0 )
{
channelcount_4 = 1;
channel_timer_4 = current_time;
}
}
else if (channelcount_4 == 1)
{
channelcount_4 = 0;
receiverinput_channel_4 = current_time - channel_timer_4;
}
}
However, my issue here is that the BLDC motors I'm using don't work smoothly when connected to the Arduino. They erratically stop and even change direction of rotation at the same throttle input. I've tested them by connecting them directly to the transmitter, and they work fine there with perfect rotation and speed. Can someone please help me out and tell me where I might be going wrong?
EDIT: I do realize posting the entire Arduino code might be overkill, but I've been trying to solve this problem for three days (as of 22nd June,16) and I really do hope someone can point out any improvements/corrections in my code.
|
I am new to the iRobot Create 2 but I do know a thing or two about the Arduino (don't assume too much though). However, in this case, I am beyond stumped over what I am sure is something simple but is somehow not obvious to me. Three people have confirmed my wiring from the Create 2 to the Arduino to be correct and the code I have looks similar to many examples that I have seen on this forum. However, I cannot get my Create 2 to do ANYTHING. I am not at all sure what is wrong and I am starting to wonder if the robot is even receiving commands let alone doing anything with them. Is there anything wrong with this code and can anybody suggest a way to verify that the robot is receiving data (since it does not beep or provide return messages)? Thank you.
EDIT (06/24 01:10 EST): Updated code (with a few notes).
#########################
#include <SoftwareSerial.h>
#include <SPI.h>
int baudPin = 17;
int i;
int ledPin = 13;
int rxPin = 19;
int txPin = 18;
unsigned long baudTimer = 240000; // 4 minutes
unsigned long thisTimer = 0;
unsigned long prevTimer = 0;
SoftwareSerial Roomba(rxPin, txPin);
void setup() {
pinMode(baudPin, OUTPUT);
pinMode(ledPin, OUTPUT);
pinMode(rxPin, INPUT);
pinMode(txPin, OUTPUT);
// I have tired communicating with both baud rates (19200 and 115200).
// When trying the 115200 baud, I set "i<=0;" in the loop below since
// the pulse does not need to be sent.
Roomba.begin(19200);
Serial.begin(115200);
delay(2000);
// I hooked up an LED in series with the baudPin so that it would turn
// off when low thus giving me some kind of visual confirmation that a
// pulse is being sent. See additional note in loop() below.
for (i = 1; i <= 3; i++) {
digitalWrite(baudPin, HIGH);
delay(100);
digitalWrite(baudPin, LOW);
delay(500);
digitalWrite(baudPin, HIGH);
}
// I know this might not be the right way to send data to the robot,
// but I was fiddling with this while trying to figure out a separate
// problem regarding the TX/RX lines which I am putting off until I
// get the baud issue straightened out.
/*
int sentBytes = Roomba.write("128");
Serial.print(sentBytes);
Serial.print("\n");
*/
i = 0;
}
void loop() {
thisTimer = millis();
// The LED that I have hooked up in series with the baudPin blinks
// when the pulse is low, thus indicating that a pulse is being sent.
// However, it only seems to wake the robot when it is asleep. If the
// robot is already awake when the pulse is sent, it has no affect and
// the robot will fall asleep a minute later.
if (thisTimer - prevTimer > baudTimer) {
prevTimer = thisTimer;
i = 10;
Serial.print("Sending pulse...\n");
digitalWrite(baudPin, LOW);
delay(500);
digitalWrite(baudPin, HIGH);
}
/*
i++;
Serial.print(prevTimer);
Serial.print(" --> ");
Serial.print(thisTimer);
Serial.print(" --> ");
Serial.print(i);
Serial.print("\n");
delay(1000);
*/
}
#########################
|
Is it helpful in robotics to first learn about "Linux kernel development" or "device driver development in Linux" before I start learning ROS? I know C and Java. In brief, I want to know any prerequisites that are essential to understand ROS better.
|
I'm using a basic trig/echo Ultrasonic Sensor with an Arduino Uno. I get accurate readings until I cover the sensor at which point I receive very large numbers. Why is this?
Program
int trigPin = 8;
int echoPin = 9;
float pingTime;
float targetDistance;
const float speedOfSound = 776.5; // mph
void setup() {
Serial.begin(9600);
pinMode(trigPin, OUTPUT);
pinMode(echoPin, INPUT);
}
void loop() {
digitalWrite(trigPin, LOW);
delayMicroseconds(2000);
digitalWrite(trigPin, HIGH);
delayMicroseconds(15);
digitalWrite(trigPin, LOW);
delayMicroseconds(10);
pingTime = pulseIn(echoPin, HIGH);
pingTime /= 1000000; // microseconds to seconds
pingTime /= 3600; // hours
targetDistance = speedOfSound * pingTime; // miles
targetDistance /= 2; // to from target (averaging distance)
targetDistance *= 63360; // miles to inches
Serial.print("distance: ");
Serial.print(targetDistance);
Serial.println("");
delay(100);
}
Example Output
I moved my hand from 10" away until I cover the sensor
10.20 distance: // my hand is 10" away from the sensor
10.01 distance:
9.51 distance:
8.71 distance:
7.85 distance:
6.90 distance:
5.20 distance:
4.76 distance:
3.44 distance:
2.97 distance:
1.65 distance:
1211.92 distance: // my hand is now pressed up against the sensor
1225.39 distance:
1197.16 distance:
1207.43 distance:
1212.66 distance:
1204.60 distance:
EDIT
I changed the amounts from inches to millimeters to get a more precise reading. I held the sensor ~100mm from a granite counter-top and quickly lowered it until the tabletop covered the front of the sensor.
distance: 103.27 // 100mm from tabletop
distance: 96.50
distance: 79.84
distance: 76.72
distance: 62.66
distance: 65.78
distance: 54.85
distance: 47.04
distance: 44.95
distance: 38.71
distance: 28.81
distance: 25.69
distance: 27.08
distance: 25.17
distance: 27.77
distance: 22.04 // sensor continues toward table but values start to increase when they would logically decrease ??
distance: 23.95
distance: 26.73
distance: 28.81
distance: 46.52
distance: 2292.85 // sensor is now flush against tabletop
distance: 2579.59
distance: 2608.75
distance: 2595.56
distance: 2591.57
distance: 2583.75
distance: 2569.87
distance: 2570.91
distance: 2600.07
distance: 30579.64 // extreme high & low values with sensor is same place against tabletop
distance: 37.66
distance: 30444.43
distance: 37.66
distance: 30674.23
distance: 38.71
|
I'm working on a project where I'm using a voltage that is higher than what most microcontrollers can handle. I'm looking for a kind of switch that will connect a power source to an electromagnet, with all of this controlled by my microcontroller. I also thought about using a potentiometer to control the speed of two high-voltage DC motors via my microcontroller, so please tell me if this is a good idea as well.
Thanks for your time
Zakary
|
I was wondering whether there is any special name for controllers whose:
Output unit is the same as the input's, i.e. velocity [m/s] as input and velocity [m/s] as output.
Output unit is different from the input's, i.e. position [m] as input, velocity [m/s] as output.
I would appreciate all help.
|
So I was thinking about launching projectiles without a propellant like gunpowder. I've seen coil guns, but that's a little out of my way. I was wondering: if I know the force required to propel an object, could I program a robot to exert that force and propel the object the same way (in a linear fashion)?
|
I have never yet had the Create 2's incremental encoder roll over, but I want to write my code to be prepared for this to happen and to test it. When the encoder rolls past 32767 (14.5 m), does it roll over to -32768 and count up from there, or start at 0 again and count up from there?
One other odd thing but not a big deal. When I reset the Create2, the first value is 1 not 0.
|
I am trying to control my F450 dji quadcopter using a PID controller. From my IMU, I am getting the quaternions, then I convert them to Euler's angles, this is causing me to have the Gimbal lock issue. However, is there a way that I directly use the quaternions to generate my control commands without converting them to Euler's angle?
This conversation here discusses a similar issue but without mentioning a clear answer for my problem.
The three errors so far I am trying to drive to 0 are:
double errorAlpha = rollMaster - rollSlave;
double errorTheta = pitchMaster - pitchSlave;
double errorPsi = yawMaster - yawSlave;
where the Master generates the desired rotation and the Slave is the IMU.
UPDATE:
Here are some pieces of my code:
Getting the current and the reference quaternions for both the Master and the Slave from the ROTATION_VECTOR:
/** Master's current quaternion */
double x = measurements.get(1);
double y = measurements.get(2);
double z = measurements.get(3);
double w = measurements.get(4);
/** Slave's current quaternion */
double xS = measurements.get(5);
double yS = measurements.get(6);
double zS = measurements.get(7);
double wS = measurements.get(8);
/** Master's Reference quaternion */
double x0 = measurements.get(9);
double y0 = measurements.get(10);
double z0 = measurements.get(11);
double w0 = measurements.get(12);
/** Slave's Reference quaternion.
* If the code has not been initialized yet, save the current quaternion
* of the slave as the slave's reference orientation. The orientation of
* the slave will henceforth be computed relative to this initial
* orientation.
*/
if (!initialized) {
x0S = xS;
y0S = yS;
z0S = zS;
w0S = wS;
initialized = true;
}
Then I want to know the orientation of the current quaternion relative to the reference quaternion for both the Master and the Slave.
/**
* Compute the orientation of the current quaternion relative to the
* reference quaternion, where the relative quaternion is given by the
* quaternion product: q0 * conj(q)
*
* (w0 + x0*i + y0*j + z0*k) * (w - x*i - y*j - z*k).
*
* <pre>
* See: http://gamedev.stackexchange.com/questions/68162/how-can-obtain-the-relative-orientation-between-two-quaternions
* http://www.mathworks.com/help/aerotbx/ug/quatmultiply.html
* </pre>
*/
// For the Master
double wr = w * w0 + x * x0 + y * y0 + z * z0;
double xr = w * x0 - x * w0 + y * z0 - z * y0;
double yr = w * y0 - x * z0 - y * w0 + z * x0;
double zr = w * z0 + x * y0 - y * x0 - z * w0;
// For the Slave
double wrS = wS * w0S + xS * x0S + yS * y0S + zS * z0S;
double xrS = wS * x0S - xS * w0S + yS * z0S - zS * y0S;
double yrS = wS * y0S - xS * z0S - yS * w0S + zS * x0S;
double zrS = wS * z0S + xS * y0S - yS * x0S - zS * w0S;
Finally, I calculate the Euler angles:
/**
* Compute the roll and pitch adopting the Tait–Bryan angles. z-y'-x" sequence.
*
* <pre>
* See https://en.wikipedia.org/wiki/Rotation_formalisms_in_three_dimensions#Quaternion_.E2.86.92_Euler_angles_.28z-y.E2.80.99-x.E2.80.B3_intrinsic.29
* or http://nghiaho.com/?page_id=846
* </pre>
*/
double rollMaster = Math.atan2(2 * (wr * xr + yr * zr), 1 - 2 * (xr * xr + yr * yr));
double pitchMaster = Math.asin( 2 * (wr * yr - zr * xr));
double yawMaster = Math.atan2(2 * (wr * zr + xr * yr), 1 - 2 * (yr * yr + zr * zr));
and I do the same thing for the Slave.
At the beginning, the reference quaternion should be equal to the current quaternion for each of the Slave and the Master, and thus, the relative roll, pitch and yaw should be all zeros, but they are not!
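For reference, the alternative I have been considering is to skip Euler angles entirely and feed the vector part of the error quaternion to the three PIDs. This is only a sketch in C-style code (my project is in Java), and I am not sure the sign conventions match my frames:
struct Quat { double w, x, y, z; };

// Hamilton product a * b.
Quat qMul(const Quat& a, const Quat& b) {
  Quat r;
  r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
  r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
  r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
  r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
  return r;
}

Quat qConj(const Quat& q) { Quat r = { q.w, -q.x, -q.y, -q.z }; return r; }

// Error quaternion qErr = qMaster * conj(qSlave). For small errors the vector part,
// scaled by 2, approximates the roll/pitch/yaw error in radians, so it can be used
// directly as the PID error without any Euler-angle conversion.
void attitudeError(const Quat& qMaster, const Quat& qSlave,
                   double& errRoll, double& errPitch, double& errYaw) {
  Quat e = qMul(qMaster, qConj(qSlave));
  if (e.w < 0) { e.x = -e.x; e.y = -e.y; e.z = -e.z; }  // take the shorter rotation
  errRoll  = 2.0 * e.x;
  errPitch = 2.0 * e.y;
  errYaw   = 2.0 * e.z;
}
Would this be a valid replacement for the three Euler-angle errors above?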
|
I am trying to solve a Create 2 sensor-reading problem, and I came across @NBCKLY's posts (Part 1 and Part 2), which I believe are exactly what I am looking for. I copied his code from the original post into my project and updated it from the second post as best as I could interpret... but something is not going according to plan.
For example, I am printing the angle to my serial monitor (for now) but I am constantly getting a value of 0 (sometimes 1).
Can @NBCKLY or anybody please check out this code and tell me what I'm doing wrong? I would appreciate it. Thank you very much.
int baudPin = 2;
int data;
bool flag;
int i;
int ledPin = 13;
int rxPin = 0;
signed char sensorData[4];
int txPin = 1;
unsigned long baudTimer = 240000;
unsigned long prevTimer = 0;
unsigned long thisTimer = 0;
void drive(signed short left, signed short right) {
Serial.write(145);
Serial.write(right >> 8);
Serial.write(right & 0xFF);
Serial.write(left >> 8);
Serial.write(left & 0xFF);
}
void updateSensors() {
Serial.write(149);
Serial.write(2);
Serial.write(43); // left encoder
Serial.write(44); // right encoder
delay(100);
i = 0;
while (Serial.available()) {
sensorData[i++] = Serial.read();
}
int leftEncoder = int((sensorData[0] << 8)) | (int(sensorData[1]) & 0xFF);
int rightEncoder = (int)(sensorData[2] << 8) | (int)(sensorData[3] & 0xFF);
int angle = ((rightEncoder * 72 * 3.14 / 508.8) - (leftEncoder * 72 * 3.14 / 508.8)) / 235;
Serial.print("\nAngle: ");
Serial.print(angle);
Serial.print("\n");
}
void setup() {
pinMode(baudPin, OUTPUT);
pinMode(ledPin, OUTPUT);
pinMode(rxPin, INPUT);
pinMode(txPin, OUTPUT);
delay(2000);
Serial.begin(115200);
digitalWrite(baudPin, LOW);
delay(500);
digitalWrite(baudPin, HIGH);
delay(100);
Serial.write(128);
Serial.write(131);
updateSensors();
drive(50, -50);
}
void loop() {
thisTimer = millis();
if (thisTimer - prevTimer > baudTimer) {
i = 0;
prevTimer = thisTimer;
digitalWrite(baudPin, LOW);
delay(500);
digitalWrite(baudPin, HIGH);
Serial.print("Pulse sent...\n");
}
updateSensors();
}
#
What I am asking is why do I only get an angle of rotation of 0 or 1 degrees when the robot is moving in a circle. The angle should be incrementing while the robot is moving.
The output I am getting on the serial monitor shows a line of what looks like garbage, which I assume are the bytes sent back from the Create, followed by "Angle: 0 (or 1)". What I was expecting to see was an increasing angle value (1, 2, 3, ..., 360, and so on).
|
I've googled a lot but wasn't able to find official definitions of these 3 parts. Maybe the explanations of servo and controller are good enough, but I'm still trying to look for a more "official" one.
Any ideas?
|
I am using a stereo rig to do SLAM, calibrated using the MATLAB Calibration Tool. I need to compute the 2D coordinates of a landmark using the observation model obtained from triangulation (the images are rectified).
The equations obtained from triangulation are the ones presented in the blue box here. Because I am doing SLAM in 2D the coordinates I need to use are $Z_p$ and $X_p$. The parameters needed to compute those values are $f$, $T$ and $disparity (x_L - x_R)$.
After doing the calibration intrinsics matrices $K_L$ and $K_R$ are obtained and a common intrinsic matrix for the stereo rig is calculated from $K = 1/2*(K_L +K_R)$ so I get the parameters needed in triangulation from this common matrix.
The focal length is supplied from the manufacturer, and for my Logitech C170 is 2.3mm. The baseline $T$ from the calibration is 78.7803 mm. To compute the disparity I am obtaining SURF points and using RANSAC to discard the outliers so I get x coordinates from both rectified images.
The problem is that with those values I can't obtain correct values for $Z_p$ and $X_p$ and I am not sure why or where I am doing the wrong step. Anyone can help with this? Are those the correct steps to do triangulation from rectified stereo images?
EDIT: My stereo rig looks like the figure I attach:
If you compare the coordinates system with the one used in the link before is easy to see that my $X_r$ corresponds to the $Z_p$ from the link and the $Y_r$ corresponds to $X_p$, so the equations to calculate the distance using triangulation and with the coordinate system of the figure are:
$x_r=\frac{fb}{x_L-x_R}$
$y_r = \frac{(x_L-p_x)b}{x_L-x_R}-\frac{b}{2}$
Being $f$ the focal length, $b$ the baseline, $p_x$ the x coordinate of the central point and $x_L-x_R$ the disparity. The $X_r$ $Y_r$ coordinate system is situated between the two cameras, so this is the meaning of the $\frac{b}{2}$ displacement in the equations.
Calibration
To obtain the cameras calibration I am using the Stereo Camera Calibrator Toolbox with the chessboard pattern.
After calibration, I made some tests using the MATLAB functions triangulate and reconstructScene to check whether the parameters are well calculated. The distances I obtained using these functions (which use the stereoParams object created by the calibrator) work well and I obtain distances very similar to the actual ones. So I suppose the calibration works well.
The problem, as I explained before, is when I try to calculate the distances using the equations $x_r$ and $y_r$ because I am not sure how to obtain the common matrix $K$ for the stereo rig (the calibrator gives one intrinsic matrix for each camera, so you have two matrices).
The value of the baseline given by the calibration makes sense; I made a measurement with a ruler and it gives approximately 78 mm.
The $f$ value I assume should be in pixels, but here again the calibration gives an $f_x$ and an $f_y$ value, so I am not sure which one I should use.
Those are the intrinsic matrices I obtain:
Left:
$\begin{pmatrix} 672.6879&-0.7752&282.2488\\0&674.3705&240.1287\\0&0&1 \end{pmatrix}$
Right: $\begin{pmatrix} 681.7049&0.0451&331.2612\\0&681.8235&246.1209\\0&0&1\end{pmatrix}$
Being the parameters of $K$: $\begin{pmatrix}f_x&s&p_x\\0&f_y&p_y\\0&0&1\end{pmatrix}$
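To check my understanding, this is how I would compute the two coordinates in code (a sketch). I am using the pixel-unit $f_x$ taken from $K$ (roughly 673-682 px above) rather than the 2.3 mm physical focal length, since the disparity is measured in pixels:
// Sketch: planar landmark position from a rectified stereo pair, using the
// x_r / y_r equations above. f is the focal length in pixels (f_x from K),
// b the baseline in metres, px the principal-point x coordinate in pixels.
struct Point2D { double xr, yr; };

Point2D triangulate2D(double xL, double xR, double f, double b, double px) {
  double disparity = xL - xR;                      // must be > 0 for a valid match
  Point2D p;
  p.xr = f * b / disparity;                        // depth along the forward axis X_r
  p.yr = (xL - px) * b / disparity - b / 2.0;      // lateral offset Y_r, origin between cameras
  return p;
}
Is this use of $f_x$ (and $p_x$) from the averaged $K$ the correct way to evaluate the equations?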
|
I was asked to make some sort of trigger pads for the foot section of an organ, working over MIDI to an electric piano. My friend wants it to be pressure sensitive so we can program in the note velocity when he's not using the organ sound.
https://www.youtube.com/watch?v=DuariiHWJQg
That is what I am trying to achieve. I want the pads to not just be on/off but also be able to control the velocity of the MIDI note.
I'm planning to use an Arduino Uno with a MUX Shield II from Mayhew Labs to get 36 analog inputs. I'm not exactly sure about the wiring yet, but I have looked at some guides and videos on Google to get a feel for how it can be made.
All 36 piezo "sensors" are planned to register how hard you push the pedals and then send out a MIDI signal, with a specific note corresponding to the pedal and a velocity, to the electric piano so you can control the low notes with your feet.
http://www.thomann.de/se/clavia_nord_pedal_key_27.htm
Just like that but more pedals and a lot cheaper.
Will the Arduino be able to read the analog output of the piezo sensor even though it's going through a multiplexer?
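To show what I mean by velocity-sensitive pads, this is roughly the per-pedal code I have in mind for a single analog input (ignoring the MUX channel selection for now, and assuming the usual 1 MΩ bleed resistor and clamping across each piezo; threshold and scaling are placeholders to tune):
// Sketch: read one piezo pad and emit a MIDI note-on scaled from the peak reading.
const int PIEZO_PIN = A0;
const int THRESHOLD = 40;          // ADC counts; reject noise and light touches

void sendNoteOn(byte note, byte velocity) {
  Serial.write(0x90);              // note-on, MIDI channel 1
  Serial.write(note);
  Serial.write(velocity);
}

void setup() {
  Serial.begin(31250);             // standard MIDI baud rate
}

void loop() {
  int peak = analogRead(PIEZO_PIN);
  if (peak > THRESHOLD) {
    byte velocity = map(constrain(peak, THRESHOLD, 1023), THRESHOLD, 1023, 1, 127);
    sendNoteOn(36, velocity);      // 36 = a low C; one fixed note per pedal
    delay(50);                     // crude debounce so one strike sends one note
  }
}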
|
So, while I was out drinking with a couple of my friends, one of us said something like 'man, wouldn't it be cool if the beer just came to us?' and that got me thinking.
We all have seen some crazy things people do with quadcopters (or polycopters even), but would it be possible (and not too expensive) to build a quadcopter that could carry, say, a crate of beer? (16-20kg)
I'm a bit of a tinkerer and I've built some minor things with rasp. pi's before but never tried myself at a quadcopter, because they are quite a big piece of work, but being able to fly a crate of beer right in front of me would be pretty awesome.
That aside, how strong would such a quadcopter have to be? In terms of motors, propellers, battery & frame. I'm a complete noob when it comes to RPM and the like, so I wouldn't even know where to begin. I have, of course, read through most of the available tutorials on the internet, but they don't answer my question of what exactly to look for when I want my quadcopter to be able to carry something specific.
|
I've recently implemented a kalman filter to estimate altitude for a small robot with an IMU+Baro sensor mounted on it.
My objective is to get max precision I can have, using this two sensor, with small computing power that a MCU can provide me. I've tuned my filter and it seems to work pretty well.
Can I obtain a significant improvement using an Extended Kalman Filter instead of a normal Kalman Filter, and is it worth the time to implement it?
More specifically (since this depends on the application): should a model function that uses the barometer and accelerometer as states be linearized and used in an EKF, and can this improve data reliability compared to a simple KF?
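For context, the model I currently use is essentially linear, with the accelerometer treated as an input and the barometer as the measurement (a sketch of it):
$$x_k = \begin{bmatrix} z_k \\ v_k \end{bmatrix},\qquad x_{k+1} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} x_k + \begin{bmatrix} \tfrac{1}{2}\Delta t^2 \\ \Delta t \end{bmatrix} a_k,\qquad y_k = \begin{bmatrix} 1 & 0 \end{bmatrix} x_k + n_k$$
where $a_k$ is the gravity-compensated vertical acceleration and $y_k$ the barometric altitude. As far as I can tell, as long as the model stays linear like this, an EKF would reduce to the same filter, and it would only matter if I added nonlinear states (attitude, accelerometer bias through attitude, etc.). Is that reasoning correct?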
|
Landmarks are often used in SLAM. What are the algorithms used to extract them, and how can a robot differentiate between landmarks if it detects one at point A at pose $x_t$ and another at pose $x_{t+1}$? How can the robot know whether it is the same landmark or not?
|
Are there any better or more advanced ways of steering a line-following robot other than a PID controller? If so, what are they?
|
What are the criteria to consider when ordering dc motors for a line following robot?
Is there a way to calculate the torque required?
|
I would like to create a simulation model (basically a signal generator) which will allow me to generate the 3 output signals of an accelerometer based on 3 location input signals (x,y and z). I would like a more realistic model of the data produced by an accelerometer (with some noise and bias offsets).
How can I convert the series of points into a simulated accelerometer output?
Specifically:
I have a series of positions which describe a trajectory in 3D space...If an accelerometer was moving along the trajectory described by the series of positions, I am interested in knowing (simulating!) the data that the accelerometer would produce as the result of moving along the described trajectory.
I could just calculate the 2nd derivative of the trajectory, but that would probably be too ideal. I am looking for a model which is more realistic.
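To be concrete, this is the kind of generator I have in mind as a starting point (a sketch; it ignores gravity and sensor orientation, which is partly why I am asking for a more realistic model):
#include <cmath>
#include <random>
#include <vector>

struct Vec3 { double x, y, z; };

// Simulate accelerometer output along an equally spaced position trajectory:
// central second difference of position, plus a constant bias and white noise.
std::vector<Vec3> simulateAccel(const std::vector<Vec3>& pos, double dt,
                                Vec3 bias, double noiseStdDev) {
  std::default_random_engine rng;
  std::normal_distribution<double> noise(0.0, noiseStdDev);
  std::vector<Vec3> accel;
  for (size_t i = 1; i + 1 < pos.size(); ++i) {
    Vec3 a;
    a.x = (pos[i-1].x - 2*pos[i].x + pos[i+1].x) / (dt*dt) + bias.x + noise(rng);
    a.y = (pos[i-1].y - 2*pos[i].y + pos[i+1].y) / (dt*dt) + bias.y + noise(rng);
    a.z = (pos[i-1].z - 2*pos[i].z + pos[i+1].z) / (dt*dt) + bias.z + noise(rng);
    accel.push_back(a);
  }
  return accel;
}
What I am unsure about is what else (gravity in the body frame, scale factor, bandwidth, quantization) is worth adding to make it realistic.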
|
My Arduino + Raspberry Pi robot was working fine in the morning. I tested it, it ran perfectly, and then I switched it off.
Now in the evening when I'm trying to run it again, with the same batteries and everything, it just doesn't move!
I stripped it down to the motor compartment and found that when I try to run my main motor, I can see sparks through the translucent plastic on the back.
Does that mean my motor is gone?
|
I'm using a HC-SR04 sensor to detect obstacles. What are the pitfalls with an ultrasonic sensor?
Here are a couple I've found during my testing:
The signal can bounce off one wall onto another and then get picked up, distorting the measured time of flight
Absorbent materials sometimes don't bounce the signal back
Check the datasheet for supported range (min/max)
|
My goal is to control a drone with a Raspberry Pi. The Raspberry Pi uses a camera and OpenCV, and sends control commands to an AVR microcontroller which will generate the PWM control signal. Meaning that it will simulate a pilot with a transmitter-receiver setup.
In other words (to make it more clear). Raspberry tells the Atmega8 that the drone needs to go more forward. Atmega8 generates custom PWM signals on 8 pins. Those signals are sent directly to CC3D pins responsible for roll, pitch etc. Atmega8 replaces controller receiver in this setup. It generates signal not based on user input but on what Raspberry tells it.
In order to do that I need the parameters (period, voltage etc.) of the PWM signal that CC3D accepts to properly simulate it. I have found this topic:
CC3D - Replacing RC emitter with an RPi
He has the same problem as I do and he found the solution. Unfortunately I can't send pm and I can't comment because I'm new to the site... so basically there is no way for me to contact him.
So any help would be appreciated.
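From what I have read, a standard hobby-RC receiver output is a pulse of roughly 1000-2000 µs (1500 µs centre) repeated about every 20 ms at logic level, but I am not certain that is exactly what the CC3D inputs expect, which is why I am asking. On the Atmega/Arduino side I would generate something like this per channel (a crude, blocking, single-channel sketch):
// Sketch: generate one RC-style PWM channel (assumed 1000-2000 us pulses at ~50 Hz).
const int CH_PIN = 9;
int pulseUs = 1500;                // value commanded by the Raspberry Pi

void setup() {
  pinMode(CH_PIN, OUTPUT);
}

void loop() {
  digitalWrite(CH_PIN, HIGH);
  delayMicroseconds(pulseUs);      // 1000-2000 us high time encodes the stick position
  digitalWrite(CH_PIN, LOW);
  delay(20 - pulseUs / 1000);      // pad the frame out to roughly 20 ms
}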
|
I'm trying to find known techniques for keeping a manually controlled robot within a known polygon fence. More specifically, a pilot controls a robot by issuing desired velocity vectors, and the autopilot adjusts the velocity so that the distance to any boundary is always at least the stopping distance of the robot.
My goal is to implement a system that:
Tries to follow the pilot's desired velocity as closely as possible.
Is robust to changes in position and desired velocity. At a minimum, I want the velocity to change continuously with respect to the position of the robot and desired velocity of the pilot. Informally, this means that sufficiently small changes in the position or desired velocity of the pilot induce arbitrarily small changes in the velocity.
The second point is particularly important. Suppose that the policy were to find the intersection with the boundary in the direction of the desired velocity and slow down smoothly to that point. The below figure depicts a couple of scenarios in which this would not be continuous. In this figure, the black lines represent the fence boundary, the red dot is the position of the robot, and the blue line is the desired velocity of the pilot. In figure (a), a small perturbation of the position to the left will cause a large increase in allowed velocity because the desired velocity will intersect the far edge instead of the near edge. In figure (b), a small clockwise rotation of the velocity vector will result in a large decrease in allowed velocity because the desired velocity will intersect the near edge instead of the far edge.
I have searched for relevant papers, but most of the papers I've seen have dealt with fully autonomous obstacle avoidance. Moreover, I haven't seen any papers address the robustness/continuity of the system.
:EDIT:
The robot knows its own location and the location of the boundary at all times. I also have some equations for maximum velocity that allow a smooth ramp-down to a single line boundary (though I'd be interested in seeing a better one). I would like the velocity limits to be continuous in the position and desired velocity of the pilot.
I want to continuously throttle the user's input such that a minimum safe distance between the robot and the boundary is maintained, but see the figure that I added to the question. The hard part (I think) is to make sure that small changes in position (e.g. due to sensor noise) or small changes in desired velocity (e.g. due to pilot noise) don't cause huge changes in what the autopilot allows.
I want continuity because I think it will provide a much nicer experience for the pilot while still enforcing the fence boundary. There is a trade-off with optimality, but I think this is worth it. Even though the physical world smooths any discontinuities in velocity, big changes could still cause large jerk, which would be somewhat disturbing to the pilot. The goal is for the autopilot not to introduce large oscillations not intended by the pilot.
This will be implemented on a physical system that has sensors providing an estimate of position, and the boundary shape is known and unchanging. The actual system that I'm targeting is a quadcopter.
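To make the kind of continuity I am after concrete, here is one conservative construction I have sketched: cap the allowed speed purely by the distance to the nearest edge and scale the pilot's vector down to that cap. It is continuous in both position and desired velocity, but it ignores direction entirely, so it throttles motion away from the boundary too (the distance function and constants are placeholders):
#include <algorithm>
#include <cmath>

// Distance from (x, y) to the closest fence edge; continuous in position. Provided elsewhere.
double distanceToNearestEdge(double x, double y);

const double A_MAX  = 2.0;   // maximum deceleration [m/s^2]
const double D_SAFE = 0.5;   // minimum standoff from the boundary [m]

void limitVelocity(double x, double y, double& vx, double& vy) {
  double margin = std::max(0.0, distanceToNearestEdge(x, y) - D_SAFE);
  double vAllowed = std::sqrt(2.0 * A_MAX * margin);   // speed we can still stop from
  double vDesired = std::hypot(vx, vy);
  if (vDesired > vAllowed && vDesired > 0.0) {
    double s = vAllowed / vDesired;   // scale factor, continuous in x, y and v
    vx *= s;
    vy *= s;
  }
}
I am hoping there is a known technique that keeps the continuity but is less conservative than this (e.g. only limiting the velocity component toward the nearest boundary, which I suspect reintroduces discontinuities at corners).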
|
I need some ideas for strategies or algorithms to apply on these strategies to perform obstacle avoidance while navigating.
At the moment I'm doing offline path planning and obstacle avoidance of known obstacles with an occupancy grid, and running the A* algorithm over the created matrix. After that, my robot follows the resulting trajectory. This is done by splitting the whole trajectory into sub-paths. The robot adjusts its heading to the new target and follows the straight line. The robot is controlled by a fuzzy logic controller to correct deviations from the ideal line (steering) and to adjust the velocity according to the steering action and the distance to the target. So far so good, and it's working very well.
As sensor system, I solely use the Google Project Tango (Motion Tracking and Area Learning for proper path following). Now I want to use the depth perception capability of the device. Getting the appropriate depth information and extracting a possible obstacle is done with a quite simple strategy. The robot analyses the depth information in front of the robot and if any object is in between the robot and the target point of the sub-path, an obstacle must be there.
Now I'm wondering how to bypass this obstacle most efficiently. The robot is only aware of the height and width of the obstacle, but has no clue about the depth (only the front of the obstacle is scanned). Feeding the occupancy grid with this new obstacle and running again the A* algorithm is not effective, because of the missing depth. One possible strategy I could imagine is estimating a depth of the length of the grid cell, re-plan and continue the navigation. If the robot faces the same obstacle again, the depth is increased by the size of one additional grid cell length. But I think this is extremely ineffective.
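To make the cell-by-cell strategy concrete, this is roughly what I mean by assuming a one-cell depth and growing it on repeated detections (a rough sketch; the grid layout and helper names are my own assumptions):

import numpy as np

def mark_obstacle(grid, front_cells, away_step, assumed_depth_cells=1):
    # grid: 2D numpy array, 0 = free, 1 = occupied.
    # front_cells: (row, col) cells where the obstacle front was observed.
    # away_step: (drow, dcol) unit step pointing away from the robot.
    for (r, c) in front_cells:
        for d in range(assumed_depth_cells):
            rr, cc = r + d * away_step[0], c + d * away_step[1]
            if 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]:
                grid[rr, cc] = 1
    return grid

# If the same obstacle front is detected again after re-planning with A*,
# call mark_obstacle(...) with assumed_depth_cells + 1 and plan again.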
The requirement is to only use the Google Project Tango and no additional sensors, such as ultrasonic to sense the sides.
Update 1
The first picture illustrates the given trajectory from the path planning (orange). The gray and blue data points are the sensed obstacles in front of the robot. The notch behind the blue obstacle is actually the wall, but is shadowed by the blue obstacle. Image 2 shows the same scene just from a different perspective.
The issue I have to solve is how to optimally bypass the blue obstacle even though I don't know how deep it is. Driving to the left and right just to capture better data points (to build a 3D model) is not possible.
Update 2
Yes, I'm using a depth sensor, the one integrated in the Google Project Tango. It's a visual measurement: an infrared laser projects a grid onto the objects, and an RGB-IR camera captures it and computes the corresponding depth information.
|
I am trying to assess the pros and cons of steering a robot car using different speeds of 2 or more DC motors versus using a servo and a steering mechanism. From your experience, which is better in terms of:
Steering accuracy (e.g. prompt responsiveness or skidding while on higher speeds)
Efficiency in electrical power consumption
Durability and maintenance
Control complexity (coding and electronics)
I researched and understood how both approaches work, but I need some practical insight to select the most suitable approach. Any hint or research direction is appreciated.
|
I am working on a 6 DOF robotic arm(industrial manipulator). I have the basic structural specs (dimensions, weights etc) for the links and joints with me.
Basically, I want to simulate both the static torque (due to the weight of the arm) and the dynamic torque (due to the accelerating joints' motion) that the joints will need to bear for a given set of motions.
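For reference, the quantity I'm after is the joint torque from the standard manipulator equation of motion, evaluated along the planned motion (this is just the textbook form, not specific to any tool):
$$\tau = M(q)\,\ddot q + C(q,\dot q)\,\dot q + g(q)$$
where $q$, $\dot q$, $\ddot q$ are the joint positions, velocities and accelerations sampled along the move, $M$ is the joint-space inertia matrix, $C$ collects the Coriolis/centrifugal terms and $g(q)$ is the gravity torque; the static torque is just $g(q)$ evaluated at the pose.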
I have looked on the web and found tools like the ROS-MoveIt Visualiser, Gazebo, V-REP which let me visually see a robotic arm and simulate the position logic and external factors like collisions etc. But I have been unable to simulate/calculate dynamic torque values from these tools.
Ideally, I'd want to define a fixed motion of the end effector (i.e. move the robot between 2 positions) and measure the torque (both static and dynamic) during that particular move.
These torque values are essential for selecting the optimum motors and gearboxes for my design and payload.
|
I'm working on an extremely simple robot (my very first project) that attempts to find the source of a Bluetooth signal. There are two motors that drive the platform and each has an encoder. We've already used a Kalman filter to calculate the approximate distance to the Bluetooth beacon within reasonable error.
I worked out a manual solution using some trig that solves the problem in theory, but it fails if there is any error (for example, it attempts to turn 73 degrees but turns 60).
My question is how can I reasonably drive the motors based on the encoder data to continuously minimize the distance to the signal? Furthermore, is there a generic solution to problems like these? (I guess you might call it a stochastic "Hotter/Colder" problem)
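In case it helps clarify what I'm after, the naive fallback I can think of is a 'hotter/colder' loop that never relies on absolute turn angles (a rough sketch; read_distance, move_forward and turn are placeholders for my Kalman-filtered range estimate and encoder-based motor commands):

def seek_beacon(read_distance, move_forward, turn,
                step=0.2, turn_angle=30.0, stop_dist=0.3):
    # Greedy, gradient-free search: keep driving while the filtered distance
    # decreases; turn by a fixed amount whenever it increases.
    d_prev = read_distance()
    while d_prev > stop_dist:
        move_forward(step)
        d = read_distance()
        if d > d_prev:
            # Got colder: change heading and try again.
            turn(turn_angle)
        d_prev = d

It converges slowly and zig-zags, which is why I'm hoping there is a more principled generic solution.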
Thanks in Advance.
|
We know that a quadcopter needs to be tuned to its ideal PID values to minimise the pitch, roll and yaw errors, etc. Before releasing it to the market, do manufacturers tune every unit and then ship it? Or is a different algorithm used which doesn't require any tuning? Every motor/ESC or chassis will not be exactly the same, which adds to the noise.
|
I am working on path planning for a 2-arm, 4-DOF (2 DOF for each arm) robot. I am currently using a centralised planning methodology (treating the multi-robot system as a single one with higher DOF, 4 in this case) and the A* algorithm to find the shortest path. The problem with this algorithm is its high computation time. Is there any way to reduce the computation time while still obtaining the shortest route?
Note: decentralised path planning is not good enough for my case.
|
Good day,
I am currently working on an obstacle-avoiding UAV using stereo vision to obtain depth maps. I noticed that the quadcopter would sometimes not steer in the correct direction.
I am using the Raspberry Pi Compute Module IO board which comes with two CSI ports used with two v1 Pi Cameras.
Issue
I soon found out that, due to the latency between the cameras, the left and right images are not in sync, which causes the errors in the resulting depth map.
Steps taken:
I noticed image blur when moving the cameras around, so I adjusted the shutter speed via the UV4L/raspicam driver settings. Along with the shutter speed, I also tried to increase the framerate, as I've read that it improves the latency issue. In my code, which uses the OpenCV library, I replaced the read() command with grab() and retrieve() so that the frames from both cameras are grabbed as close together in time as possible; however, it didn't help much.
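For reference, the grab/retrieve pattern I'm using looks roughly like this (a minimal sketch with the OpenCV Python API; my real code differs in details):

import cv2

cap_left = cv2.VideoCapture(0)
cap_right = cv2.VideoCapture(1)

while True:
    # Grab both frames back-to-back first (cheap), then decode them,
    # so the two capture instants are as close together as possible.
    if not (cap_left.grab() and cap_right.grab()):
        continue
    _, frame_left = cap_left.retrieve()
    _, frame_right = cap_right.retrieve()
    # ... compute the disparity / depth map from frame_left and frame_right ...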
Does anyone know any possible solutions?
|
In the “iRobot_Roomba_600_Open_Interface_Spec.pdf” provided for the iRobot Create 2, there is a section titled “Roomba Internal Screw Boss Locations”. It states that “Screws may be replaced with threaded standoffs.”
Does anyone know what screw/thread size of standoffs should be used to match the screw threads?
(I saw another similar thread but the only solution listed was to re-thread the holes, which I would like to avoid if at all possible.)
Thanks!
|
I'm trying to build my own motorised camera gimbal using a BLDC like this, where the shaft is hollow. Does anyone know how the camera platform should be mounted? Should a shaft be somehow pressed into the hole?
Any thought appreciated.
|
I am using an L298N IC (not a driver shield) and an Arduino.
I would like to know how to use the IC with the Arduino to run a six wire stepper motor.
Could I have a detailed explanation for wiring the IC connections on the breadboard and the Arduino?
|
First off, just to be transparent, I'm a total newbie when it comes to DC motors (and pretty much anything robotic).
I've got a couch that's right up against a window with the lever-type openings (Andersen windows). With the couch there, I have no clearance to turn the lever to open it. Given that I've replaced most of my house switches/outlets with home-automatable ones, I figured I'd see if I can build a small motor setup that I can automate to open these as well. To be absolutely honest, I've got no clue where to start. I have no problem coding the automation part, but I don't even know what kind of motors to look for that would be able to turn my knob (or rather how to actuate the thing my knob connects to)...
Help!
Thanks :)
|
Consider multiple mobile bases driving around in some area. In order to get meaningful data from the lidar of each base, the sensors should be mounted as horizontally as possible. Due to safety regulations, the lidars should also be mounted at a height of 15 cm from the floor. When I checked the data sheet of SICK lidars, it shows that all models use the wavelength 904 nm. Does that mean that mobile bases equipped with lidars with coplanar scan lines will end up mutually blinding each other?
If that is the case, how is this problem solved? (I don't consider tilting the lidars a solution, as it defeats the purpose of having "2D" lidars: even if the tilting angle is known, what the lidar observes becomes dependent on the robot's pose and its distance from any obstacles.)
|
Update
Hey, I have the following subscriber on an Nvidia TX1 board running on an agricultural robot. We have the following issue with subscribing to sensor_msgs::CompressedImage:
ImageConverter(ros::NodeHandle &n) : n_(n), it_(n_)
{
image_pub_ = it_.advertise("/output_img",1);
cv::namedWindow(OPENCV_WINDOW);
image_transport::TransportHints TH("compressed");
image_sub_compressed.subscribe(n,"/Logitech_webcam/image_raw/compressed",5,&ImageConverter::imageCallback,ros::VoidPtr(),TH);
}
And the callback function
void imageCallback(const sensor_msgs::CompressedImageConstPtr& msg)
When I compile this I get an error:
from /home/johann/catkin_ws/src/uncompressimage/src/publisher_uncompressed_images.cpp:1:
/usr/include/boost/function/function_template.hpp: In instantiation of ‘static void boost::detail::function::function_void_mem_invoker1<MemberPtr, R, T0>::invoke(boost::detail::function::function_buffer&, T0) [with MemberPtr = void (ImageConverter::*)(const boost::shared_ptr<const sensor_msgs::CompressedImage_<std::allocator<void> > >&); R = void; T0 = const boost::shared_ptr<const sensor_msgs::Image_<std::allocator<void> > >&]’:
/usr/include/boost/function/function_template.hpp:934:38: required from ‘void boost::function1<R, T1>::assign_to(Functor) [with Functor = void (ImageConverter::*)(const boost::shared_ptr<const sensor_msgs::CompressedImage_<std::allocator<void> > >&); R = void; T0 = const boost::shared_ptr<const sensor_msgs::Image_<std::allocator<void> > >&]’
/usr/include/boost/function/function_template.hpp:722:7: required from ‘boost::function1<R, T1>::function1(Functor, typename boost::enable_if_c<boost::type_traits::ice_not<boost::is_integral<Functor>::value>::value, int>::type) [with Functor = void (ImageConverter::*)(const boost::shared_ptr<const sensor_msgs::CompressedImage_<std::allocator<void> > >&); R = void; T0 = const boost::shared_ptr<const sensor_msgs::Image_<std::allocator<void> > >&; typename boost::enable_if_c<boost::type_traits::ice_not<boost::is_integral<Functor>::value>::value, int>::type = int]’
/usr/include/boost/function/function_template.hpp:1069:16: required from ‘boost::function<R(T0)>::function(Functor, typename boost::enable_if_c<boost::type_traits::ice_not<boost::is_integral<Functor>::value>::value, int>::type) [with Functor = void (ImageConverter::*)(const boost::shared_ptr<const sensor_msgs::CompressedImage_<std::allocator<void> > >&); R = void; T0 = const boost::shared_ptr<const sensor_msgs::Image_<std::allocator<void> > >&; typename boost::enable_if_c<boost::type_traits::ice_not<boost::is_integral<Functor>::value>::value, int>::type = int]’
/home/johann/catkin_ws/src/uncompressimage/src/publisher_uncompressed_images.cpp:27:126: required from here
The red error statement was:
/usr/include/boost/function/function_template.hpp:225:11: error: no match for call to ‘(boost::_mfi::mf1<void, ImageConverter, const boost::shared_ptr<const sensor_msgs::CompressedImage_<std::allocator<void> > >&>) (const boost::shared_ptr<const sensor_msgs::Image_<std::allocator<void> > >&)’
BOOST_FUNCTION_RETURN(boost::mem_fn(*f)(BOOST_FUNCTION_ARGS));
I am not using Boost directly, and searching around hasn't helped me solve it.
|
I am working on a differential drive robot with two motor wheels with encoders and caster wheels. The robot also has an Intel RealSense depth camera.
When I launch RViz, the Global Options > Fixed Frame is set to Base_link and all the transforms for the differential drive nodes are shown, but an error appears for the depth camera nodes with messages saying:
No transform from Camera_depth_frame to baselink
No transform from Camera_depth_optical_frame to baselink
No transform from Camera_link to baselink
No transform from Camera_rgb_frame to baselink
If I change the Global Options > Fixed Frame to Camera_link, I can see all the transforms for the depth camera, but now the differential drive transforms are not available.
Hope you can help.
|
I accidentally ended up supplying 12 V to the Arduino 5V output pin instead of the Vin pin. Does that mean that I can't use the 5V output pin anymore, i.e. it's fried?
|
I am studying for a Bachelor of Dental Surgery but have an interest in learning this subject, so please tell me about a good book to read.
|
I am currently busy with a final year project which requires me to track people walking through a doorway.
I initially thought this might be possible using a normal camera and some motion detection functions provided in OpenCV; however, I have come to the conclusion that the camera is mounted too low for this to work effectively (height shown in the image below).
I have now been looking into using a 3D camera or a stereo camera to try and get around this problem.
I have seen similar examples where a Kinect(from Xbox 360) has been used to generate a depth map which is then processed and used to do the tracking, this was however done from a higher vantage point, and I found that the minimum operating range of the Kinect is 0.5m.
From what I have found, the Kinect uses an IR projector and receiver to generate its depth map, and have been looking at the Orbbec Astra S which uses a similar system and has a minimum working distance of 0.3m.
My question now:
What exactly would the difference be between the depth maps produced by a 3D camera that uses an IR projector and receiver, and a stereo camera such as the DUO/ZED type options?
I am just looking for some insight from people who may have used these types of cameras before.
On a side note, am I going about this the right way? Or should I be looking into time-of-flight cameras instead?
----EDIT----:
My goal is to count the people moving into and out of the train doorway. I began this using OpenCV, initially with a background subtraction and blob detection method. This only worked for one person at a time, and with a test video filmed from a higher vantage point, as a "blob-merging" problem was encountered, as shown in the left image below.
So the next method tested involved an optical flow method using motion vectors obtained from OpenCV's dense optical flow algorithm.
From this I was able to obtain motion vectors from the higher test videos and track them, as shown in the middle image below; because the motion vectors were densely packed and easily detected, it was simple to cluster them.
But when this same system was attempted with footage taken from inside a train at a lower height, it was unable to give a consistent output. My thinking on the reason for this is the low height of the camera: single-camera tracking works when there is sufficient space between the camera and the top of the person, but as that distance shrinks, the moving person takes up a larger and larger area of the frame, and the region against which the person can be compared is reduced (or at least that is how I understand it). Below on the right you can see how the colour of the person's clothing is almost uniform in the image; optical flow is therefore unable to detect it as motion in both cases.
I only started working with computer vision a few months ago so please forgive me if I have missed some crucial aspects.
From what I have seen in my research, most commercial systems make use of 3D cameras, stereo cameras or time-of-flight cameras, but I am unsure how the specifics of each of these would suit my application.
|
I'm using OpenRAVE to simulate a quadruped, in order to get an idea of torque requirements.
To get started I made a single DOF, single link pendulum to test controllers etc out on.
I've whipped up an inverse-dynamics-based PD controller using ComputeInverseDynamics(), whose outputs I apply using SetDOFTorques(). I then set a desired position, with the desired velocity being zero. This all appears to work well and I can start the simulation, with the pendulum driving up to the desired position and settling.
My concern is the value of the output torques. My pendulum is modeled as a simple box of length 1, mass manually set to 1, with a COM of 0.5.
When I run my simulation, I output the gravity component from ComputeInverseDynamics(). This gives 4.9 Nm, which matches the hand-calculated torque I expect from the pendulum (e.g. the static case) when it is driven to the desired position (from down to horizontal).
But the output torques passed to SetDOFTorques() are much higher and vary depending on what I set the simulation timestep to.
If I maintain a controller update rate of 0.001 seconds, then for a simulation update of 0.0001 seconds my output torque is approximately 87 Nm. If I change the simulation timestep to 0.0005 seconds, keeping the controller rate the same, the output torque drops down to about 18 Nm.
As an experiment I removed the inverse dynamics controller and replaced it with a plain PD controller, but I still see large output torques.
Can anyone shed some light on this? It's very possible I'm missing something here!
Thanks very much
Edits:
I'm adding the main section of my code. There is no trajectory generation, really. I'm just trying to get to a fixed static position.
In the code, if I keep dt fixed, and alter env.StartSimulation(timestep=0.0001), I get the issues popping up.
with env:
    robot = env.GetRobots()[0]
    robot.GetLinks()[0].SetStatic(True)
    env.StopSimulation()
    env.StartSimulation(timestep=0.0001)

dt = 0.001
w = 100
eta = 5
Kp = [w*w]
Kv = [2*eta*w]

# Desired pos, vel and acc
cmd_p = [3.14/2]
cmd_v = [0]
cmd_a = [0]

while True:
    with env:
        torqueconfiguration, torquecoriolis, torquegravity = robot.ComputeInverseDynamics([1],None,returncomponents=True)
        err_p = cmd_p - robot.GetDOFValues()
        err_v = cmd_v - robot.GetDOFVelocities()
        # ID Controller
        M = compute_inertia_matrix(robot, robot.GetDOFValues())
        a_cmd = (Kp*err_p + Kv*err_v + cmd_a)
        taus = torquegravity + torquecoriolis + M.dot(a_cmd.transpose()).transpose()
        # Just PD(ish) controller
        #taus = Kp*err_p - Kv*robot.GetDOFVelocities()
        with robot:
            robot.SetDOFTorques(taus,False) # True = use limits
        print (taus, torquegravity+torquecoriolis, a_cmd, M.dot(a_cmd.transpose()).transpose())
    time.sleep(dt)

# https://scaron.info/teaching/equations-of-motion.html
def compute_inertia_matrix(robot, q, external_torque=None):
    n = len(q)
    M = np.zeros((n, n))
    with robot:
        robot.SetDOFValues(q)
        for (i, e_i) in enumerate(np.eye(n)):
            m, c, g = robot.ComputeInverseDynamics(e_i, external_torque, returncomponents=True)
            M[:, i] = m
    return M
<?xml version="1.0" encoding="utf-8"?>
<Robot name="Pendulum">
<RotationAxis>0 1 0 90</RotationAxis> <!-- makes the pendulum vertical -->
<KinBody>
<!-- <Mass type="mimicgeom"><density>100000</density></Mass> -->
<Body name="Base" type="dynamic">
<Translation>0.0 0.0 0.0</Translation>
<Geom type="cylinder">
<rotationaxis>1 0 0 90</rotationaxis>
<radius>0.3</radius>
<height>0.02</height>
<ambientColor>1. 0. 0.</ambientColor>
<diffuseColor>1. 0. 0.</diffuseColor>
</Geom>
<mass type="custom">
<!-- specify the total mass-->
<total>5.0</total>
<!-- specify the 3x3 inertia matrix-->
<!--<inertia>2 0 0 0 3 0 0 0 5</inertia> -->
<!-- specify the center of mass (if using ODE physics engine, should be 0)-->
<com>0.1 0.0 0.0</com>
</mass>
</Body>
<Body name="Arm0" type="dynamic">
<offsetfrom>Base</offsetfrom>
<!-- translation and rotation will be relative to Base -->
<Translation>0 0 0</Translation>
<Geom type="box">
<Translation>1 0 0</Translation>
<Extents>1 0.1 0.1</Extents>
<ambientColor>1. 0. 0.</ambientColor>
<diffuseColor>1. 0. 0.</diffuseColor>
</Geom>
<mass type="custom">
<!-- specify the total mass-->
<total>1.0</total>
<!-- specify the 3x3 inertia matrix-->
<!--<inertia>2 0 0 0 3 0 0 0 5</inertia> -->
<!-- specify the center of mass (if using ODE physics engine, should be 0)-->
<com>0.5 0.0 0.0</com>
</mass>
</Body>
<Joint circular="true" name="Joint0" type="hinge">
<Body>Base</Body>
<Body>Arm0</Body>
<offsetfrom>Arm0</offsetfrom>
<weight>0</weight>
<axis>0 0 1</axis>
<maxvel>100</maxvel>
<resolution>1</resolution>
</Joint>
</KinBody>
</Robot>
Here is some data for dt = 0.001 and env.StartSimulation(timestep=0.0001)
In this data,
taus is the torque command to the simulation,
torquegravity+torquecoriolis is returned from the inverse dynamics
a_cmd is the controller command and
M*a_cmd is the command after being multiplied by the mass matrix
The gravity and Coriolis parts appear to be correct for steady state, where the value should be about 4.9 Nm.
taus, torquegravity+torquecoriolis, a_cmd, M*a_cmd
3464.88331508, 0.48809828, 5329.83879509, 3464.39521681
330.67177959, 1.47549936, 506.45581573, 329.19628023
-785.91806527, 2.45531014, -1212.88211601, -788.37337541
-1065.4689484, 3.23603844, -1644.16151823, -1068.70498685
-1027.47479809, 3.80261774, -1586.58063974, -1031.27741583
-877.83110127, 4.18635604, -1356.94993433, -882.01745731
-707.25108627, 4.4371714, -1094.9050118, -711.68825767
-554.34483533, 4.6006198, -859.91608481, -558.94545512
-432.22314217, 4.70818921, -672.20204828, -436.93133138
-327.797496, 4.7768792, -511.65288492, -332.5743752
-240.77203429, 4.82021019, -377.83422228, -245.59224448
-172.18942128, 4.84807059, -272.3653721, -177.03749186
-117.58895761, 4.86591166, -188.39210657, -122.45486927
-74.51920719, 4.87743369, -122.14867828, -79.39664088
-39.91183436, 4.88473444, -68.91779816, -44.7965688
-12.82321495, 4.88971433, -27.25066043, -17.71292928
8.45349476, 4.89281357, 5.47797105, 3.56068118
25.35468725, 4.89489884, 31.47659755, 20.45978841
38.84080509, 4.896309, 52.22230167, 33.94449609
48.72668147, 4.89724689, 67.42989936, 43.82943458
56.78552877, 4.89790152, 79.82711885, 51.88762725
65.515892, 4.89836756, 93.25772991, 60.61752444
68.81359264, 4.89867903, 98.33063633, 63.91491362
73.86961896, 4.89891052, 106.10878221, 68.97070844
76.67416578, 4.89907489, 110.42321674, 71.77509088
79.62549808, 4.89919702, 114.96354008, 74.72630105
85.17343708, 4.89928669, 123.49869291, 80.27415039
85.13686188, 4.89934963, 123.44232654, 80.23751225
85.75675034, 4.89939931, 124.39592466, 80.85735103
86.55192592, 4.89943807, 125.61921208, 81.65248785
86.39672231, 4.89946802, 125.38039121, 81.49725429
87.4299925, 4.89949202, 126.97000073, 82.53050048
87.42776523, 4.8995098, 126.96654682, 82.52825543
87.15472709, 4.8995251, 126.54646461, 82.255202
86.97240783, 4.89953825, 126.26595319, 82.07286958
86.98023044, 4.89954905, 126.27797137, 82.08068139
86.75364661, 4.89955809, 125.92936696, 81.85408852
86.9853716, 4.89956526, 126.28585591, 82.08580634
88.01679721, 4.89957062, 127.8726563, 83.1172266
89.2610231, 4.89957348, 129.78684557, 84.36144962
88.47969399, 4.89957495, 128.58479851, 83.58011903
88.77623594, 4.89957711, 129.04101359, 83.87665884
90.87280518, 4.89957739, 132.2665043, 85.9732278
88.9513552, 4.89957707, 129.3104279, 84.05177813
89.14100099, 4.89957773, 129.60218964, 84.24142327
And here is some data for dt = 0.001 and env.StartSimulation(timestep=0.0005)
taus, torquegravity+torquecoriolis, a_cmd, M*a_cmd
-313.62240349, 0.98927261, -484.01796324, -314.61167611
-242.03525463, 2.00886997, -375.45249938, -244.0441246
-199.82226305, 2.79259699, -311.71516928, -202.61486003
-190.02605484, 3.39367572, -297.56881625, -193.41973056
-162.08293067, 3.8525617, -255.28537288, -165.93549237
-125.84847045, 4.17559368, -200.03702174, -130.02406413
-103.89936813, 4.40068949, -166.61547326, -108.30005762
-82.32305905, 4.5566127, -133.66103347, -86.87967175
-64.56801352, 4.66415211, -106.51102404, -69.23216563
-49.68124446, 4.73812107, -83.72210081, -54.41936553
-37.91265825, 4.78890663, -65.6947152, -42.70156488
-27.99189838, 4.82374208, -50.48560071, -32.81564046
-19.81225948, 4.84762415, -37.9382825, -24.65988362
-12.55978349, 4.8636252, -26.80524414, -17.42340869
-6.89165107, 4.87470983, -18.10209369, -11.7663609
-3.13313345, 4.88256746, -12.33184754, -8.0157009
0.69831646, 4.88796162, -6.44560793, -4.18964516
3.86277859, 4.89166745, -1.58290594, -1.02888886
6.12163439, 4.8941598, 1.88842245, 1.22747459
8.58189707, 4.89593332, 5.67071346, 3.68596375
9.1580546, 4.89712981, 6.55526891, 4.26092479
11.81854706, 4.89798468, 10.64701905, 6.92056238
12.40540565, 4.89856409, 11.54898701, 7.50684156
14.04109075, 4.89897979, 14.06478609, 9.14211096
14.39924399, 4.89926951, 14.61534535, 9.49997448
14.98060951, 4.89947252, 15.50944153, 10.08113699
16.08890875, 4.89961544, 17.2142974, 11.18929331
16.01955973, 4.89971637, 17.10745133, 11.11984337
17.06493791, 4.89978831, 18.71561478, 12.16514961
17.35364328, 4.89983976, 19.15969772, 12.45380352
17.62239334, 4.89987688, 19.57310225, 12.72251646
17.84455913, 4.89990387, 19.91485424, 12.94465525
17.43825648, 4.89992362, 19.28974286, 12.53833286
17.58436934, 4.89993826, 19.51450935, 12.68443108
17.70571012, 4.8999492, 19.70117065, 12.80576093
18.40852272, 4.89995746, 20.78240808, 13.50856525
18.49492461, 4.89996372, 20.91532445, 13.59496089
18.56575802, 4.89996852, 21.02429154, 13.6657895
18.62430693, 4.89997223, 21.11436108, 13.7243347
16.54216482, 4.89997511, 17.91106109, 11.64218971
18.71146936, 4.89997747, 21.24844907, 13.81149189
18.13316504, 4.89997923, 20.35874741, 13.23318581
18.77330006, 4.89998067, 21.34356829, 13.87331939
Despite the differences in torque command (a_cmd) I still get similar performance, in that the arm drives to the right position fairly quickly.
As another experiment I set the initial position to pi/2 and just fed back the gravity term to the torque output. My understanding is that the arm should then float, as in gravity compensation. But it just drops, as if only a small torque is applied.
Thanks again!
|
I would like to know if there is any way to get all the possible inverse kinematics solutions for a 6-DOF robotic arm.
I have found some good MATLAB code, but it gives only one solution, as in Peter Corke's book.
Thank you in advance.
|
Scott Adams, creator of Dilbert, recently shared an article about a robot the police used to kill a suspect by detonating a bomb in close range.
This made me wonder -- when was the first time a robot took a human life?
Good comments were made on this, which leads me to clarify that I mean a purposeful taking of life. I shy away from the term "murder" because that involves legal concepts, but I mean an intentional killing.
An interesting subdivision would be between robots under active human direction ("remote control") and those with a degree of autonomy.
|
Is torque related to size or power at all in electric motors? And what about gas motors?
I have a go-kart that is 2.5 hp and 50 cc, and it's about 1ft x 2ft x 1ft in size. I also see online that there are 0.21 cubic inch gas motors for R/C cars that are also 2.5 hp, the difference being that the R/C motor spins at 32k rpm while the go-kart motor spins at 12k rpm. If I were to put a gear reduction on the R/C motor, would it perform more or less the same as the go-kart motor? Why is there a size difference?
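If I understand the relation correctly, torque is power divided by angular speed (torque = P / omega): 2.5 hp is about 1864 W, so at 12,000 rpm (about 1257 rad/s) that works out to roughly 1.5 N·m, while at 32,000 rpm (about 3351 rad/s) it is only about 0.56 N·m. Same power, about 2.7 times less torque, which is what a 2.7:1 gear reduction would (ideally, ignoring losses) recover.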
Same for electric motors. I can buy an RC car electric motor that's 10hp and the size of a pop can. The CNC machine at work has a 10hp motor the size of a 5 gal bucket. Again, the only difference is the RPM.
If I were to gear both setups down so they spun at the same RPM, would they perform the same?
The only reasons I can think of are 1. cooling and 2. RPM control (for PID loops and sensors).
|
I am a third-year electrical engineering student and am working on an intelligent autonomous robot in my summer vacations.
The robot I am trying to make is supposed to be used in rescue operations. The information I would know is the position of the person to be rescued from a building on fire (the person's coordinates, given in a JSON file). I would also know the rooms of the building from a map, but I don't know where the robot may be placed inside the building to start the rescue operation.
That means I have to localise the robot, placed at an unknown position in a known environment, and then the robot can plan its path to the person who has to be rescued. But since this is not my domain, I would like you to guide me on the best method for localisation, given that I can use an IMU (gyro, accelerometer, magnetometer) and ultrasonic sensors to do the localisation job. I cannot use a GPS module or a camera for this purpose.
I, however, do know how to do path planning.
As far as my research on the Internet goes, I have found a method called "Kalman filtering" that may be able to do the localisation job. But I think there are some other filtering methods as well. Which one should I use? Or is there any other simpler/better method out there that I don't know about yet?
I am also attaching the map of the building which is known to me.
Edit:
The terrain is flat, and I would like to know where the robot is on the map, e.g. at coordinate (0, 4).
|
I'm doing a project with the iRobot Create 2. I want it to be able to map out a room and, for example, navigate to a point. My problem is that the robot doesn't have any distance sensors. What it can do is detect whether there is an obstacle ahead of it or not (0 or 1), and it can measure how far it has travelled in millimetres. Are there any good techniques out there, or is it best to buy an IR sensor?
|
I am building an application that executes graphSLAM using datasets recorded in a simulated environment. The dataset has been produced in MRPT using the GridMapNavSimul application. To simulate the laser scans, one can specify the bearing and range error standard deviations of the range finder.
Currently I am using a dataset recorded with range_noise = 0.30m, bearing_noise = 0.15deg. Am I exaggerating with these values? Could somebody provide me with typical values for these quantities? Do laser scanner manufacturers provide these values?
Thanks in advance,
|
I am trying to implement quaternions and I am using the CC2650 SensorTag board from TI. This board has the MPU9250 from InvenSense, which has a Digital Motion Processor (DMP) in it. This DMP gives a quaternion, but for my own understanding I implemented my own quaternion. I used the gyroscope and accelerometer values coming out of the DMP (which are calibrated) to calculate the angle of rotation. I feed this angle, in 3 directions (x, y, z), into my quaternion. I am not able to match my quaternion values with the DMP quaternion values. In fact they are way off, so I am wondering what I have done wrong.
Following are detailed steps that i did :
1) Tapped Gyro sensor values from function “read_from_mpl”.
2) Converted the gyro values into float by dividing by 2^16, as the gyro values are in Q16 format.
3) Now used Gyro values of 3 axis and found out resultant using formula :
Gr = sqrt(Gx^2+Gy^2+Gz^2)
Where Gx,Gy and Gz are Gyro values along x-axis,y-axis and z-axis respectively.
4) Now the angle is derived from the resultant Gr found above by:
Angle = Gr * 1/sample_rate
where sample_rate is found using the API call mpu_get_sample_rate(&sample_rate).
5) This angle is fed to the angle_to_quat function, which basically does the angle-to-axis conversion and then the quaternion multiplication.
/* Angle to axis and quaternion multiplication: */
temp.w = cos((Angle*1.0/RAD_TO_DEG)/2);
temp.x = sin((Angle*1.0/RAD_TO_DEG)/2);
temp.y = sin((Angle*1.0/RAD_TO_DEG)/2);
temp.z = sin((Angle*1.0/RAD_TO_DEG)/2);
temp.x = temp.x *gyro_axis[0];//gyro_axis[0]=Gx
temp.y = temp.x *gyro_axis[1]; //gyro_axis[0]=Gy
temp.z = temp.x *gyro_axis[2]; //gyro_axis[0]=Gz
/* quaternion multiplication and normalization */
res = quat_mul(*qt,temp);
quat_normalize(&res);
*qt = res;
6) I also added angle calculations from the accelerometer as follows. Here also the accelerometer values are converted to float by dividing by 2^16, as they are also in Q16 format.
//acc_data[0]->Ax, acc_data[1]->Ay, acc_data[2]->Az
temp = (acc_data[0]*acc_data[0]) + (acc_data[1]*acc_data[1]);
acc_angle[0]=atan2(acc_data[2],temp)*RAD_TO_DEG;
temp = (acc_data[1]*acc_data[1]) + (acc_data[2]*acc_data[2]);
acc_angle[1]=atan2(acc_data[0],temp)*RAD_TO_DEG;
temp = (acc_data[1]*acc_data[1]) + (acc_data[0]*acc_data[0]);
acc_angle[2]=atan2(acc_data[1],temp)*RAD_TO_DEG;
Find the resultant angle of this also as:
inst_acc_angle = (sqrt(acc_angle[0]*acc_angle[0] + acc_angle[1]*acc_angle[1] + acc_angle[2]*acc_angle[2]));
7) Then the complementary filter is:
FinalAngle = 0.96*Angle + 0.04*inst_acc_angle;
This final angle is fed to step 5 to get the quaternion.
The quaternion multiplication is done as below and then normalized to get the new quaternion (q).
quater_mul :
q3.w = -q1.x * q2.x - q1.y * q2.y - q1.z * q2.z + q1.w * q2.w;
q3.x = q1.x * q2.w + q1.y * q2.z - q1.z * q2.y + q1.w * q2.x;
q3.y = -q1.x * q2.z + q1.y * q2.w + q1.z * q2.x + q1.w * q2.y;
q3.z = q1.x * q2.y - q1.y * q2.x + q1.z * q2.w + q1.w * q2.z;
quat_normalize:
double mag = pow(q->w,2) + pow(q->x,2) + pow(q->y,2) + pow(q->z,2);
mag = sqrt(mag);
q->w = q->w/mag;
q->x = q->x/mag;
q->y = q->y/mag;
q->z = q->z/mag;
When I check my quaternion values against the DMP, they are way off. Can you please provide some insight into what could be wrong here?
Source code :
acc_data[0]=data[0]/65536.0;
acc_data[1]=data[1]/65536.0;
acc_data[2]=data[2]/65536.0;
double temp = (acc_data[0]*acc_data[0]) + (acc_data[1]*acc_data[1]);
acc_angle[0]=atan2(acc_data[2],temp)*RAD_TO_DEG;
temp = (acc_data[1]*acc_data[1]) + (acc_data[2]*acc_data[2]);
acc_angle[1]=atan2(acc_data[0],temp)*RAD_TO_DEG;
temp = (acc_data[1]*acc_data[1]) + (acc_data[0]*acc_data[0]);
acc_angle[2]=atan2(acc_data[1],temp)*RAD_TO_DEG;
gyro_rate_data[0]=data[0]/65536.0;
gyro_rate_data[1]=data[1]/65536.0;
gyro_rate_data[2]=data[2]/65536.0;
float inst_angle = (sqrt(gyro_rate_data[0]*gyro_rate_data[0] + gyro_rate_data[1]*gyro_rate_data[1] + gyro_rate_data[2]*gyro_rate_data[2]));
gyro_rate_data[0] = gyro_rate_data[0]/inst_angle;
gyro_rate_data[1] = gyro_rate_data[1]/inst_angle;
gyro_rate_data[2] = gyro_rate_data[2]/inst_angle;
inst_angle = inst_angle *1.0/sam_rate;
float inst_acc_angle = (sqrt(acc_angle[0]*acc_angle[0] + acc_angle[1]*acc_angle[1] + acc_angle[2]*acc_angle[2]));
inst_angle = WT*inst_angle + (1.0-WT)*inst_acc_angle;
angle_to_quat(inst_angle,gyro_rate_data,&q);
/* The function for angle to quaterinion and multiplication,normalization */
void angle_to_quat(float Angle,float *gyro_axis,struct quat *qt)
{
struct quat temp;
struct quat res;
temp.w = cos((Angle*1.0/RAD_TO_DEG)/2);
temp.x = sin((Angle*1.0/RAD_TO_DEG)/2);
temp.y = sin((Angle*1.0/RAD_TO_DEG)/2);
temp.z = sin((Angle*1.0/RAD_TO_DEG)/2);
temp.x = temp.x *gyro_axis[0];
temp.y = temp.x *gyro_axis[1];
temp.z = temp.x *gyro_axis[2];
res = quat_mul(*qt,temp);
quat_normalize(&res);
*qt = res;
}
This variation occurs when I keep the device stationary.
Y-axis: resultant of all 3 gyro axes.
X-axis: the number of samples (not converted to time).
The sample rate is 3 Hz.
|
I'm trying to build a line follower robot and I'm interested in predicting the curves on the track.
I have an 8-sensor binary array (QRE1113).
My goal is to make a system that can generalize what it has learned about the curves and predict where the robot should be on the line to pass them as fast as possible.
How can I integrate a system like Q-learning, and how can I train it?
And also, how can I combine such a system with a Type C PID controller?
Is there a paper about this, or would you be willing to explain?
This is an important project for me and I am running against the clock, so quick help would be appreciated.
|
Good Day,
I am working on an autonomous quadcopter. May I ask whether my control loop dropping from 500 Hz to 460 Hz, due to added lines of code, makes a significant enough difference to require retuning of the PID gains? And if retuning is required, is it correct to assume that only the I and D gains should be retweaked, since they are the only constants that are time dependent? Thank you :)
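To make the time dependence concrete, the discrete update I have in mind looks roughly like this (a sketch, assuming the measured loop period dt appears explicitly in the integral and derivative terms):

def pid_step(err, prev_err, integral, kp, ki, kd, dt):
    # If dt is measured every loop and used like this, the continuous-time
    # gains kp, ki, kd are in principle independent of the loop rate.
    integral += err * dt
    derivative = (err - prev_err) / dt
    return kp * err + ki * integral + kd * derivative, integral

If instead the gains are pre-scaled per-tick constants (i.e. dt is baked into the I and D gains), then those two terms are the ones affected by a 500 Hz to 460 Hz change.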
|
Could you please see the attached battery images and tell me if it is safe to continue using this battery or should I discard it?
|
What are the specifications of the digital compass used in the iPhone 6S?
I am trying to measure yaw angle using the magnetometer. I observed the magnetometer/digital compass in the iPhone is really very stable. The north direction is always the same, while the magnetometer I am using (or the magnetometer used in Nexus) needs to be calibrated again and again to function properly.
I found that the digital compass AK8963C is used in the iPhone 6, but it needs calibration. So I am not sure what is inside iPhone 6S because it works without a calibration procedure.
|
I was looking for a Python implementation of SLAM and stumbled upon BreezySLAM which implements tinySLAM aka CoreSLAM.
My robot is equipped with the hokuyo urg-04lx-ug01.
I have odometry, hence I pass it to the updater:
self.slam.update(ls_array, (dxy_mm, dtheta_deg, dt));
As I start moving, the robot starts discovering room A and then rooms B & C; already the map seems to have rotated. I come back to room A and return to the initial pose (end = start) using the same path. Now I notice that room A has rotated significantly in relation to the other rooms. Consequently the map isn't correct at all, and neither is the path travelled by the robot.
Wasn't the SLAM supposed to store and keep the boundaries for the first room it discovered?
Why this rotation may be happening?
How could I troubleshoot this issue with the data I have collected (odometry, calculated position, lidar scans)? (See the replay sketch below.)
Can I tune SLAM to do a better job for my robot?
SLAM is pretty new to me, so please bear with me; any pointers to literature that may clarify and moderate my expectations of what SLAM can do would be welcome.
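For the troubleshooting part, my first idea is to replay the logs offline and plot the dead-reckoned odometry against the positions the SLAM reported, something like this (a sketch; I'm assuming per step I logged the (dxy_mm, dtheta_deg, dt) increments passed to update() and the estimated (x, y) positions):

import math
import matplotlib.pyplot as plt

def dead_reckon(odometry):
    # odometry: list of (dxy_mm, dtheta_deg, dt) increments.
    x, y, theta = 0.0, 0.0, 0.0
    xs, ys = [x], [y]
    for dxy_mm, dtheta_deg, _dt in odometry:
        theta += math.radians(dtheta_deg)
        x += dxy_mm * math.cos(theta)
        y += dxy_mm * math.sin(theta)
        xs.append(x)
        ys.append(y)
    return xs, ys

def compare(odometry, slam_positions):
    xs, ys = dead_reckon(odometry)
    plt.plot(xs, ys, label='dead-reckoned odometry')
    plt.plot([p[0] for p in slam_positions],
             [p[1] for p in slam_positions], label='SLAM estimate')
    plt.axis('equal')
    plt.legend()
    plt.show()

If the rotation already shows up in the dead-reckoned path, the odometry itself is the culprit; if it only appears in the SLAM output, that points at the scan matching or its parameters.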
Extra
... and here is the best video I found to understand particle filters.
|
I am doing a project on odometry using a Raspberry Pi. I know that the encoder motor will tell me how much distance my robot has covered, but I have no idea how to implement this completely. I just need a guideline on which steps to follow. So far I have interfaced the motor with the Raspberry Pi and counted the number of rotations. My questions are as follows:
How do I plot an odometry map, and with which language and library?
If you know anything, please give me a guideline on the steps to follow.
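As a starting point, this is roughly the kind of odometry update and plot I have been thinking about (a minimal sketch in Python with matplotlib, assuming a differential drive with known wheel radius, track width and encoder ticks per revolution; all numbers are placeholders):

import math
import matplotlib.pyplot as plt

TICKS_PER_REV = 360      # encoder ticks per wheel revolution (placeholder)
WHEEL_RADIUS = 0.03      # metres (placeholder)
TRACK_WIDTH = 0.15       # distance between the two wheels, metres (placeholder)

def update_pose(x, y, theta, dticks_left, dticks_right):
    # Convert tick increments to distance travelled by each wheel.
    dl = 2 * math.pi * WHEEL_RADIUS * dticks_left / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * dticks_right / TICKS_PER_REV
    d = (dl + dr) / 2.0
    dtheta = (dr - dl) / TRACK_WIDTH
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# Example tick increments; in practice these come from the counting code.
encoder_log = [(10, 10), (10, 12), (10, 12), (12, 10)]
path = [(0.0, 0.0, 0.0)]
for dticks_left, dticks_right in encoder_log:
    path.append(update_pose(*path[-1], dticks_left, dticks_right))

plt.plot([p[0] for p in path], [p[1] for p in path])
plt.axis('equal')
plt.show()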
|
I want to use an MPU9150 to give me the position (XY) and heading (angle) of a wheeled robot. This MPU9150 from InvenSense has a Digital Motion Processor in it which can give me a quaternion.
But how do I convert this quaternion data to an XY-coordinate and an angle so I can plot the position of my vehicle?
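As far as I understand it, a quaternion only encodes orientation, so on its own it cannot give an XY position; the usual approach for a wheeled robot is to take the heading (yaw) from the quaternion and integrate the wheel-odometry distance along that heading. Roughly (a sketch, assuming a unit quaternion (w, x, y, z) with the z-axis pointing up and a distance increment from wheel encoders):

import math

def yaw_from_quaternion(w, x, y, z):
    # Standard yaw extraction from a unit quaternion (ZYX convention).
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

def step_position(px, py, quat, distance_travelled):
    # distance_travelled: distance since the last update, from wheel encoders.
    yaw = yaw_from_quaternion(*quat)
    px += distance_travelled * math.cos(yaw)
    py += distance_travelled * math.sin(yaw)
    return px, py, yaw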
|
I recently bought a DW558 Quadrocopter (http://www.gearbest.com/rc-quadcopters/pp_110531.html).
After a few minutes, the battery is dead, which is understandable since the battery is so tiny. It is a 3.7V 250mAh battery, included with the quadrocopter. I was thinking about buying spare batteries for it, and I have a few questions about this:
1: Can I buy any kind of 3.7V 250mAh battery of the same size, or is there any other property I have to pay attention to?
2: Can I buy batteries of 3.7V and 350mAh (100 more than the included battery) and expect my quadrocopter to be more "energetic"? Is it bad to buy batteries with more mAh?
2b: If I buy a few 3.7V 350mAh batteries, will I be able to charge them with the same charger I got with my 3.7V 250mAh batteries, or do I have to buy a specific charger for these too?
(these are the batteries I want to buy, any comment is greatly appreciated: 350mAh batteries x5 http://www.gearbest.com/rc-quadcopter-parts/pp_196991.html and/or 4x 250mAh batteries + charger http://www.gearbest.com/rc-quadcopter-parts/pp_331372.html)
Thank you very much for your input. I think I just discovered my new hobby and I can't wait to have my spare batteries!
|
I'm trying to develop an Extended Kalman Filter (EKF) for the positioning of a wheeled vehicle. I have a 'Baron' robot frame with 4 static wheels, all driven by a motor. On the 2 rear wheels I have an encoder. I want to fuse this odometry data with data from an 'MPU9150' 9 DOF IMU.
This is my MATLAB code for what I call the 'medium-size' EKF. This uses data from the encoders, the accelerometer in the x and y axes, and the gyroscope z-axis.
Medium-size EKF
Inputs: x: "a priori" state estimate vector (8x1)
t: sampling time [s]
P: "a priori" estimated state covariance vector (8x8)
z: current measurement vector (5x1) (encoder left; encoder right; x-acceleration, y-acceleration, z-axis gyroscope)
Output: x: "a posteriori" state estimate vector (8x1)
P: "a posteriori" state covariance vector (8x8)
State vector x: an 8x1 vector $\begin{bmatrix} x \rightarrow \text{X position in global frame} \\ \dot x \rightarrow \text{speed in X direction, global frame} \\ \ddot x \rightarrow \text{acceleration in X direction, global frame} \\ y \rightarrow \text{Y position in global frame} \\ \dot y \rightarrow \text{speed in Y direction, global frame} \\ \ddot y \rightarrow \text{acceleration in Y direction, global frame} \\ \theta \rightarrow \text{vehicle angle in global frame} \\ \dot \theta \rightarrow \text{angular speed of the vehicle} \end{bmatrix}$
Measurement vector z:
a 5x1 vector $\begin{bmatrix} \eta_{left} \rightarrow \text{wheel-speed pulses on left wheel} \\ \eta_{right} \rightarrow \text{wheel-speed pulses on right wheel} \\ \dot \theta_z \rightarrow \text{gyroscope measurement, z-axis vehicle frame} \\ a_x \rightarrow \text{accelerometer measurement, x-axis vehicle frame} \\ a_y \rightarrow \text{accelerometer measurement, y-axis vehicle frame} \end{bmatrix}$
function [x,P] = moodieEKFmedium(x,t,P,z,sigma_ax,sigma_ay,sigma_atau,sigma_odo,sigma_acc,sigma_gyro)
% Check if input matrixes are of correct size
[rows columns] = size(x);
if (rows ~= 8 && columns ~= 1)
error('Input vector size incorrect')
end
[rows columns] = size(z);
if (rows ~= 5 && columns ~= 1)
error('Input data vector size incorrect')
end
% Constants
n0 = 16;
r = 30;
b = 50;
Q = zeros(8,6);
Q(3,3) = sigma_ax;
Q(6,6) = sigma_ay;
Q(8,8) = sigma_atau;
%[Q(1,8),Q(3,6),Q(6,3)] = deal(small);
dfdx = eye(8);
[dfdx(1,2),dfdx(2,3),dfdx(4,5),dfdx(5,6),dfdx(7,8)] = deal(t);
[dfdx(1,3),dfdx(4,6)] = deal((t^2)/2);
dfda = zeros(6,6);
[dfda(3,3),dfda(6,6),dfda(8,8)] = deal(1);
dhdn = eye(5,5);
R = zeros(5,5);
[R(1,1),R(2,2)] = deal(sigma_odo);
R(3,3) = sigma_gyro;
[R(4,4),R(5,5)] = deal(sigma_acc);
%[R(2,1),R(1,2)] = deal(small);
% Predict next state
% xk = f(xk-1)
xtemp = zeros(8,1);
xtemp(1) = x(1) + t*x(2)+((t^2)/2)*x(3);
xtemp(2) = x(2) + t*x(3);
u1 = normrnd(0,sigma_ax);
xtemp(3) = x(3) + u1;
xtemp(4) = x(4) + t*x(5)+((t^2)/2)*x(6);
xtemp(5) = x(5) + t*x(6);
u2 = normrnd(0,sigma_ay);
xtemp(6) = x(6) + u2;
xtemp(7) = x(7) + t*x(8);
u3 = normrnd(0,sigma_atau);
xtemp(8) = x(8) + u3;
x = xtemp
% Predict next state covariance
% Pk = dfdx * Pk-1 * transpose(dfdx) + dfda * Q * transpose(dfda)
P = dfdx * P * transpose(dfdx) + dfda * Q * transpose(dfda);
% Calculate Kalman gain
% Kk = P * transpose(dhdx) * [dhdx * P * transpose(dhdx) + dhdn * R * transpose(dhdn)]^-1
dhdx = zeros(5,8);
if(x(2) == 0 && x(5) == 0)
[dhdx(1,2),dhdx(2,2)] = deal(0);
[dhdx(1,4),dhdx(2,4)] = deal(0);
else
[dhdx(1,2),dhdx(2,2)] = deal(((t*n0)/(2*pi*r))*(x(2)/sqrt(x(2)^2+x(5)^2)));
[dhdx(1,4),dhdx(2,4)] = deal(((t*n0)/(2*pi*r))*(x(5)/sqrt(x(2)^2+x(5)^2)));
end
%[dhdx(1,2),dhdx(2,2)] = deal(((t*n0)/(2*pi*r))*(x(2)/sqrt(x(2)^2+x(5)^2)));
%[dhdx(1,4),dhdx(2,4)] = deal(((t*n0)/(2*pi*r))*(x(5)/sqrt(x(2)^2+x(5)^2)));
dhdx(1,6) = (t*n0*b)/(2*pi*r);
dhdx(2,6) = -(t*n0*b)/(2*pi*r);
dhdx(4,3) = sin(x(7));
dhdx(4,6) = -cos(x(7));
dhdx(4,7) = (x(3)*cos(x(7)))+(x(6)*sin(x(7)));
dhdx(5,3) = cos(x(7));
dhdx(5,6) = sin(x(7));
dhdx(5,7) = (-x(3)*sin(x(7)))+(x(6)*cos(x(7)));
Kk = P * transpose(dhdx) * (dhdx * P * transpose(dhdx) + dhdn * R * transpose(dhdn))^(-1)
% Update state
H = zeros(5,1);
n1 = normrnd(0,sigma_odo);
H(1) = (((t*n0)/(2*pi*r))*sqrt(x(2)^2+x(4)^2))+(((t*n0*b)/(2*pi*r))*x(6)) + n1;
n2 = normrnd(0,sigma_odo);
H(2) = (((t*n0)/(2*pi*r))*sqrt(x(2)^2+x(4)^2))-(((t*n0*b)/(2*pi*r))*x(6)) + n2;
n3 = normrnd(0,sigma_gyro);
H(3)= x(8) + n3;
n4 = normrnd(0,sigma_acc);
H(4)=(x(3)*sin(x(7))-(x(6)*cos(x(7))))+n4;
n5 = normrnd(0,sigma_acc);
H(5)=(x(3)*cos(x(7))+(x(6)*sin(x(7))))+n5;
x = x + Kk*(z-H)
% Update state covariance
P = (eye(8)-Kk*dhdx)*P;
end
This is the filter in schematic :
These are the state transition equations I use :
$$\ x_{t+1} = x_{t} + T \cdot \dot x_{t} + \frac{T^{2}}{2} \cdot \ddot x_{t}$$
$$\ \dot x_{t+1} = \dot x_{t} + T \cdot \ddot x_{t} $$
$$\ \ddot x_{t+1} = \ddot x_{t} + u_{1} $$
$$\ y_{t+1} = y_{t} + T \cdot \dot y_{t} + \frac{T^{2}}{2} \cdot \ddot y_{t}$$
$$\ \dot y_{t+1} = \dot y_{t} + T \cdot \ddot y_{t} $$
$$\ \ddot y_{t+1} = \ddot y_{t} + u_{2} $$
$$\ \dot \theta_{t+1} = \dot \theta_{t} + T \cdot \ddot \theta_{t} $$
$$\ \ddot \theta_{t+1} = \ddot \theta_{t} + u_{3} $$
These are the observation equations I use :
$$\ \eta_{left} = \frac{T \cdot n_{0}}{2 \cdot \pi \cdot r} \cdot \sqrt{\dot x^{2} + \dot y^{2}} + \frac{T \cdot n_{0} \cdot b}{2 \cdot \pi \cdot r} \cdot \dot \theta + n_{1}$$
$$\ \eta_{right} = \frac{T \cdot n_{0}}{2 \cdot \pi \cdot r} \cdot \sqrt{\dot x^{2} + \dot y^{2}} - \frac{T \cdot n_{0} \cdot b}{2 \cdot \pi \cdot r} \cdot \dot \theta + n_{2}$$
$$\ \dot \theta_{z} = \dot \theta + n_{3}$$
$$\ a_{x} = \ddot x \sin \theta - \ddot y \cos \theta + n_{4}$$
$$\ a_{y} = \ddot x \cos \theta + \ddot y \sin \theta + n_{5}$$
Small-size EKF
I wanted to test my filter, so I started with a smaller one, in which I only give the odometry measurements as input. This is because I know that if I always receive the same number of pulses on the left and right encoders, then my vehicle should be driving in a straight line.
Inputs: x: "a priori" state estimate vector (6x1)
t: sampling time [s]
P: "a priori" estimated state covariance vector (6x6)
z: current measurement vector (2x1) (encoder left; encoder right)
Output: x: "a posteriori" state estimate vector (6x1)
P: "a posteriori" state covariance vector (6x6)
State vector x: a 6x1 vector $\begin{bmatrix} x \rightarrow \text{X position in global frame} \\ \dot x \rightarrow \text{speed in X direction, global frame} \\ y \rightarrow \text{Y position in global frame} \\ \dot y \rightarrow \text{speed in Y direction, global frame} \\ \theta \rightarrow \text{vehicle angle in global frame} \\ \dot \theta \rightarrow \text{angular speed of the vehicle} \end{bmatrix}$
Measurement vector z:
a 2x1 vector $\begin{bmatrix} \eta_{left} \rightarrow \text{wheel-speed pulses on left wheel} \\ \eta_{right} \rightarrow \text{wheel-speed pulses on right wheel} \end{bmatrix}$
% Check if input matrixes are of correct size
[rows columns] = size(x);
if (rows ~= 6 && columns ~= 1)
error('Input vector size incorrect')
end
[rows columns] = size(z);
if (rows ~= 2 && columns ~= 1)
error('Input data vector size incorrect')
end
% Constants
n0 = 16;
r = 30;
b = 50;
Q = zeros(6,6);
Q(2,2) = sigma_ax;
Q(4,4) = sigma_ay;
Q(6,6) = sigma_atau;
%[Q(1,8),Q(3,6),Q(6,3)] = deal(small);
dfdx = eye(6);
[dfdx(1,2),dfdx(3,4),dfdx(5,6)] = deal(t);
dfda = zeros(6,6);
[dfda(2,2),dfda(4,4),dfda(6,6)] = deal(1);
dhdn = eye(2,2);
R = zeros(2,2);
[R(1,1),R(2,2)] = deal(sigma_odo);
%[R(2,1),R(1,2)] = deal(small);
% Predict next state
% xk = f(xk-1)
xtemp = zeros(6,1);
xtemp(1) = x(1) + t*x(2);
u1 = normrnd(0,sigma_ax);
xtemp(2) = x(2) + u1;
xtemp(3) = x(3) + t*x(4);
u2 = normrnd(0,sigma_ay);
xtemp(4) = x(4) + u2;
xtemp(5) = x(5) + t*x(6);
u3 = normrnd(0,sigma_atau);
xtemp(6) = x(6) + u3;
x = xtemp
% Predict next state covariance
% Pk = dfdx * Pk-1 * transpose(dfdx) + dfda * Q * transpose(dfda)
P = dfdx * P * transpose(dfdx) + dfda * Q * transpose(dfda);
% Calculate Kalman gain
% Kk = P * transpose(dhdx) [dhdx * P * transpose(dhdx) + dhdn * R * transpose(dhdn)]^-1
dhdx = zeros(2,6);
if((x(2) < 10^(-6)) && (x(4)< 10^(-6)))
[dhdx(1,2),dhdx(2,2)] = deal((t*n0)/(2*pi*r));
[dhdx(1,4),dhdx(2,4)] = deal((t*n0)/(2*pi*r));
else
[dhdx(1,2),dhdx(2,2)] = deal(((t*n0)/(2*pi*r))*(x(2)/sqrt(x(2)^2+x(4)^2)));
[dhdx(1,4),dhdx(2,4)] = deal(((t*n0)/(2*pi*r))*(x(4)/sqrt(x(2)^2+x(4)^2)));
end
%[dhdx(1,2),dhdx(2,2)] = deal(((t*n0)/(2*pi*r))*(x(2)/sqrt(x(2)^2+x(4)^2)));
%[dhdx(1,4),dhdx(2,4)] = deal(((t*n0)/(2*pi*r))*(x(4)/sqrt(x(2)^2+x(4)^2)));
dhdx(1,6) = (t*n0*b)/(2*pi*r);
dhdx(2,6) = -(t*n0*b)/(2*pi*r);
Kk = P * transpose(dhdx) * ((dhdx * P * transpose(dhdx) + dhdn * R * transpose(dhdn))^(-1))
% Update state
H = zeros(2,1);
n1 = normrnd(0,sigma_odo);
H(1) = (((t*n0)/(2*pi*r))*sqrt(x(2)^2+x(4)^2))+(((t*n0*b)/(2*pi*r))*x(6)) + n1;
n2 = normrnd(0,sigma_odo);
H(2) = (((t*n0)/(2*pi*r))*sqrt(x(2)^2+x(4)^2))-(((t*n0*b)/(2*pi*r))*x(6)) + n2;
x = x + Kk*(z-H)
% Update state covariance
P = (eye(6)-Kk*dhdx)*P;
end
Odometry observation equations
If you wonder how I arrive at the observation equations for the odometry data:
$\ V_{vl} = V_{c} + \dot \theta \cdot b \rightarrow V_{vl} = \sqrt{\dot x^{2} + \dot y^{2}} + \dot \theta \cdot b$
Problem
If I try the small-size EKF using a MATLAB user interface, it does seem to drive a straight line, but not under a heading of 0° like I would expect, even though I start with a state vector of $\ x= \begin{bmatrix}0\\0\\0\\0\\0\\0\end{bmatrix}$, meaning starting at position [0,0] in the global coordinate frame, with speed and acceleration of zero and under an angle of 0°.
In the top right corner you can see the measurement data which I give as input, which is 5 wheel-speed counts on every wheel, every sampling period (simulating a straight-driving vehicle).
In the top left corner you see a plot of the X and Y coordinate (from state vector) at the end of one predict+update cycle of the filter, labeled with the timecycle.
Bottom left corner is a plot of the angle in the state vector. You see that after 12 cycles the angle is still almost 0° like I would expect.
Could anyone please provide some insight into what could be wrong here?
Solutions I've been thinking on
I could use the 'odometry motion model' as explained in this question. The difference is that the odometry data is inserted in the predict step of the filter. But if I did this, I see 2 problems: 1) I don't see how to make a small-size version of this for testing purposes, because I don't know which measurements to add in the update step, and 2) for the medium-size version I don't know how to form the observation equations, as the state vector doesn't include velocity and acceleration.
I could use the 'odometry motion model' and in the update step use the Euler angle, which can be linked to $\ \theta $. This Euler angle I can obtain from the Digital Motion Processor (DMP) implemented in the IMU. Then it is no problem that angular velocity is not in the state vector. But then I still have a problem with the acceleration observation equations.
|
I recently bought a drone (quadcopter). I like how the drone works, but now I would like to create an application that will allow me to control the drone remotely from my PC or phone.
How does a computer or phone interface to an aerial vehicle?
Some of my initial thoughts were
Get an RF detector, detect what signals are being sent to the drone, and replicate them using a new transmitter.
Replace the control circuit in the drone with an Arduino and send the corresponding signals to it to fly the drone.
Both of these methods seem kind of far-fetched and hard, but I'm not sure how else this could be done.
|
I need a distance sensor (IR, optical, or any other) with a 90-degree view angle to sense a rectangular surface.
In this case the sensor must be placed at the same level as the surface. Please help me solve this.
|
I have implemented a particle filter algorithm for the state estimation of a mobile robot.
There are several external range sensors (transmitters) in the environment which give information on the distance (radius) to the robot, based on the time taken for the receiver on the robot to send back its acknowledgement. So, using three or more such transmitters, it is possible to triangulate the position of the robot.
The particle filter is initialized with 15000 particles and the sensor noise is relatively low (0.02m).
Update phase: At each iteration, range information from one external sensor is received. This assigns higher weights to the particles along the radius of that external sensor. Not all the particles are equally weighted, since the process noise is low. Hence in most cases a particle relatively close to the robot gets a lower weight than an incorrect one that happens to lie along the radius. The weight is a pdf.
Resampling phase: At this stage, the lower-weighted particle (the correct one) that has negligible weight gets lost, because the higher-weighted particles get picked instead.
All this happens at the first iteration and so when the range information from another sensor arrives, the robot is already kidnapped.
Googling around suggests that this problem is called sample impoverishment, and that the most common approach is to resample only when the particle variance is low (Effective Sample Size < number of particles / 2).
But when the particles are assigned negligible weights and there are relatively few particles with higher weights, the diversity of the particles is lost in the resampling phase. So, when the variance is higher, resampling is done, which removes the lower-weighted particles, and hence the diversity of the particles is lost. Isn't this completely the opposite of the above idea of ESS?
Is my understanding of sample impoverishment correct? Is there a way this issue can be fixed?
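For reference, the ESS check I'm referring to looks roughly like this (a sketch with numpy; systematic (low-variance) resampling is shown as one common choice):

import numpy as np

def effective_sample_size(weights):
    w = np.asarray(weights, dtype=float)
    w = w / np.sum(w)
    return 1.0 / np.sum(w ** 2)

def low_variance_resample(particles, weights):
    n = len(particles)
    w = np.asarray(weights, dtype=float)
    w = w / np.sum(w)
    positions = np.random.uniform(0.0, 1.0 / n) + np.arange(n) / n
    indexes = np.minimum(np.searchsorted(np.cumsum(w), positions), n - 1)
    return [particles[i] for i in indexes], np.full(n, 1.0 / n)

# Resample only when the weights have degenerated:
# if effective_sample_size(weights) < len(particles) / 2:
#     particles, weights = low_variance_resample(particles, weights)

That is the check I mean when I talk about resampling only when the variance criterion triggers.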
Any pointers or help would be highly appreciated.
|
I am a third-year electrical engineering student and am working on an intelligent autonomous robot in my summer vacations.
The robot I am trying to make is supposed to be used in rescue operations. The information I would know is the position of the person to be rescued from a building on fire (the person's coordinates, given in a JSON file that can be changed at any time except during the challenge). I would also know the rooms of the building from a map, but I don't know where the robot may be placed inside the building to start the rescue operation.
That means I have to localise the robot placed at an unknown position in a known environment, and then the robot can plan its path to the person who has to be rescued. I can use gyroscope, accelerometer, magnetometer and ultrasonic sensors to do the localising job. I cannot use a GPS module or a camera for this purpose.
The object to be rescued (whose location is known in terms of coordinates and can be changed at any time) is surrounded by walls on 3 sides, hence adding more walls to this map.
According to my research, a particle filter is the best method for robot localization. But how can I deal with the landmarks (walls), both those that are fixed as shown in the map image and those that vary depending on the location of the object to be rescued provided in the JSON file?
I can do the path planning from a known position to the target position, but I'm not sure how to determine the starting position.
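To be explicit about what I mean by dealing with the walls in the measurement update, my current idea is to ray-cast each particle's expected ultrasonic ranges against the known grid map (the fixed walls plus the extra walls added around the object's coordinates from the JSON file) and weight the particle by how well that matches the measured ranges. A rough sketch (the grid, resolution, sensor angles and noise value are my own placeholders):

import math
import numpy as np

def raycast_range(grid, res, x, y, theta, max_range=4.0):
    # Step along the beam until an occupied cell (1) or max_range is reached.
    step = res / 2.0
    r = 0.0
    while r < max_range:
        cx = int((x + r * math.cos(theta)) / res)
        cy = int((y + r * math.sin(theta)) / res)
        if cx < 0 or cy < 0 or cy >= grid.shape[0] or cx >= grid.shape[1]:
            break
        if grid[cy, cx] == 1:
            return r
        r += step
    return max_range

def measurement_weight(grid, res, particle, measured, sensor_angles, sigma=0.05):
    x, y, theta = particle
    w = 1.0
    for z, a in zip(measured, sensor_angles):
        expected = raycast_range(grid, res, x, y, theta + a)
        w *= math.exp(-0.5 * ((z - expected) / sigma) ** 2)
    return w

The walls that depend on the JSON file would simply be rasterised into the grid before the run, since the file does not change during the challenge.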
More about JSON file:
(1) The JSON file containing the coordinates of the object to be rescued can change. (2) It won't change during the challenge. (3) The JSON file will be provided to me on an SD card that my robot has to read. I have successfully written the code that allows the robot to read the JSON file and hence the coordinates of the object to be rescued.
Here is the map of the building which is known to me.
|
I have a 6 DOF robot arm, and I want to do some object grasping experiments. Here, the robot is rigidly mounted to a table, and the object is placed on a different adjacent table. The robot must pick up the object with its gripper parallel to the normal of the object's table, such that it is pointing directly downwards at the point of grasping.
Now, the two tables have adjustable heights. For any given height difference between them, there will be a fixed range of positions over which the robot arm can achieve this perpendicular pose. What I am trying to figure out is the optimum relative distance between the tables such that this range of positions is maximized.
Is there a way to compute this analytically given the robot arm kinematics? Or is there a solution which applies to all robot arms (e.g. it is optimum when the tables are at the same height)?
If it is important, the arm is the Kinova MICO: https://www.youtube.com/watch?v=gUrjtUmivKo.
Thanks!
|
I'm currently doing some research on collaborative robotics. One area of interest is the type of sensor(s) used in these kind of machines.
I had a look at some robots by FANUC and Universal Robots and I've noticed that they do not come equipped with sensors; they are sold as an add-on.
Is this inherent to collaborative robots? Do customers need to buy sensors as an add-on, which has both advantages and disadvantages?
Thanks for your help.
|
I am trying to make an IR distance sensor. I have seen this online. My goal, however, is to measure the distance between an IR transmitter and my IR sensor. In the example above, he uses the IR LED's ambient light and timing to track the distance. Is there a way to find the distance between, let's say, an IR remote and a sensor? It would only have to be accurate to about 1 meter. I am also open to other ideas for accurately tracking the distance between two objects, whether that be Bluetooth/IR/ultrasonic.
|
I'm programming a flight controller on an Arduino. I've researched how other people have written theirs but without notes it's often so obfuscated that I've decided it will be easier and better to write my own.
This is my pseudocode thus far, will this work?
All of this will happen inside the main Arduino loop:
Read RX signal
Calculate desired pitch, roll, and yaw angles from RX input
Signal ESCs using PWM in order to match desired pitch, roll, and yaw from RX input
Gather IMU values (using Kalman filter to reduce noise)
Compare filtered IMU values vs. RX input to find errors in desired outcome vs. actual outcome
Use PID algo to settle errors between IMU vs. RX
Rinse and repeat
Suggestions are greatly appreciated
|
My team has been working on a wearable glove to capture data about hand movements, and use it as a human-computer interface for a variety of applications. One of the major applications is the translation of sign language, shown here: https://www.youtube.com/watch?v=7kXrZtdo39k
Right now we can only translate letters and numbers, because the signs for them require the person to hold their hand still in one position ('stationary' signs). I want to be able to translate words as well, which are non-stationary signs. Also the position of the hands really matters when signing words, for example it matters whether the hand is in front of the forehead, eyes, mouth, chest, cheeks, etc.
For this we need a portable and highly accurate position sensor. We have tried getting position from a 9-DOF IMU (accelerometer, gyroscope, magnetometer) but as you might guess, there were many problems with double integration of the noise and accelerometer bias.
So is there a device that can provide accurate position information? It should be portable and wearable (for example worn in the chest pocket, headband/cap, etc...be creative!).
EDIT (more details):
I'm going to emphasize certain aspects of this design that weren't clear before, based on people's comments:
My current problem of position detection is due to errors in the double integration of the accelerometer data. So hopefully the solution has some incredibly powerful Kalman filter (I think this is unlikely) or uses some other portable device instead of an accelerometer.
I do not need absolute position of the hand in space/on earth. I only need the hand position relative to some stable point on the body, such as the chest or belly. So maybe there can be a device on the hand that can measure position relative to a wearable device on the body. I don't know if such technology exists; I guess it'd use either magnets, ultrasound, bend sensors, or EM waves of some sort. Be creative :)
|