I'm reading this paper: http://arxiv.org/abs/1310.2053 (The role of RGB-D benchmark datasets: an overview) and see the following words:
Thanks to accurate depth data, currently published papers could
present a broad range of RGB-D setups addressing well-known problems
in computer vision in which the Microsoft Kinect ranging from SLAM
[10, 12, 19, 17, 35, 11] over 3d reconstruction [2, 33, 38, 32, 1]
over realtime face [18] and hand [30] tracking to motion capturing and
gait analysis [34, 41, 8, 7, 4]
I thought the terms SLAM and 3D reconstruction referred to the same thing, while the paper treats them as different, with a bunch of citations (which still don't tell the two apart).
In my opinion, the mapping in SLAM is the same thing as 3D reconstruction, while localization is the essential ingredient for mapping. So I don't see a difference between SLAM and 3D reconstruction. Am I wrong (or is the author misclassifying)?
|
I went to install ROS on my Raspberry Pi and found that there are 5 different variants. What is the difference between them, and where can I go to learn about these differences for future updates?
Link to the ROSberryPi downloads I'm talking about:
http://wiki.ros.org/ROSberryPi/Setting%20up%20ROS%20on%20RaspberryPi
|
Am I correct in assuming that the iRobot Create 2 does not have the 25-pin connector that the original iRobot Create has? Thanks much... Rick
|
I'm trying to detect obstacles at distances of up to 10 meters in an outdoor environment. By "up to" I mean that I also want to be able to detect obstacles that are close to the robot. I am thinking of doing this using stereo vision, but I am unsure whether it is in fact even possible (before I buy expensive hardware). So is it possible? Has anyone had any success?
If this isn't possible, then what kind of sensor could give me a decent point cloud for such a range (outdoors)? I need a sensor that will fit a medium-sized robot. It also needs to not be overly expensive, since I have a limited budget.
Thanks
|
I reference the following article.
http://www.egr.msu.edu/classes/ece480/capstone/spring15/group02/assets/docs/nsappnote.pdf
I have followed the article's code, but this appears:
How can I solve this problem?
|
I am trying to install ROS Kinetic Kame on Ubuntu 16.04, but after trying the first step, "Setup your sources.list",
I am getting "cannot create /etc/apt/sources.list.d/ros-latest.list: Permission denied". What should I do now?
|
I would like to use GPS data as measurement input for an extended Kalman filter. Therefore I need to convert from GPS longitude and latitude to x and y coordinates. I found information about the equirectangular projection, given by these formulas:
$$\ X = r_{earth} \cdot \lambda \cdot \cos(\phi_0) $$
$$\ Y = r_{earth} \cdot \phi $$
However, I think these formulas only apply when the x- and y-axes of my local frame are aligned with the east and north directions of the earth.
But my vehicle starts at the origin of my local reference frame, heading straight in the y-direction. Whatever compass angle I point my vehicle in, this should always be the starting position.
I can measure the angle $ \alpha $ to north with a compass on my vehicle.
Now, what is the relationship between (longitude, latitude) and (x, y) of my local frame?
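To make the relationship I have in mind concrete (a sketch, assuming the projection is taken relative to the starting point $(\lambda_0,\phi_0)$ and that $\alpha$ is the clockwise angle from north to my local y-axis; the signs depend on these conventions):
$$ E = r_{earth} \cdot (\lambda-\lambda_0) \cdot \cos(\phi_0), \qquad N = r_{earth} \cdot (\phi-\phi_0) $$
$$ \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} E \\ N \end{bmatrix} $$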
|
I read a paper from 2015, "Structural bootstrapping - A novel, generative mechanism for faster and more efficient acquisition of action-knowledge", which introduces a concept called "structural bootstrapping with semantic event chains and dynamic movement primitives" that confused me a little bit.
To my knowledge, a robot arm is controlled by a PDDL-like planner. The PDDL file is a "qualitative physics engine" which can predict future events. The paper says the "qualitative physics engine" consists of dynamic movement primitives (DMPs) which are learned from motion capture data.
My question is: How can a DMP be used for simulating physics?
|
Is it possible to replace just the wheels on the Create 2 robot? Is it a standard shaft/coupling?
|
Is it possible to measure the voltages of 2 different batteries on an Arduino? Currently I am able to use a resistor divider / voltage divider of 2x 10K resistors to an analog pin to read the voltage of the battery supplying the Arduino.
Currently the system looks like 6v battery -> 5v power regulator to Arduino -> resistor divider attached to 6v (unregulated) battery. GND is common throughout.
How could I measure the voltage of another battery given that it will be on a different circuit? e.g. different ground loop.
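For reference, this is how I currently read the supply battery, extended to a hypothetical second divider. It assumes both dividers are referenced to the Arduino's ground and use 10k/10k resistors, which is exactly the part I am unsure about given the separate circuits:

//Minimal sketch: read two battery voltages through 10k/10k dividers.
//Assumptions: both dividers share the Arduino ground, 5.0 V ADC reference,
//and the dividers feed pins A0 and A1.
const float ADC_REF = 5.0f;        //ADC reference voltage
const float DIVIDER_RATIO = 2.0f;  //10k/10k halves the battery voltage

float readBatteryVolts(uint8_t pin) {
  int raw = analogRead(pin);                  //0..1023
  float pinVolts = raw * ADC_REF / 1023.0f;   //voltage at the analog pin
  return pinVolts * DIVIDER_RATIO;            //undo the divider
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.print("Battery 1: ");
  Serial.println(readBatteryVolts(A0));
  Serial.print("Battery 2: ");
  Serial.println(readBatteryVolts(A1));
  delay(1000);
}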
|
I am trying to find the name (nomenclature) of the linkage (or carriage) that is being driven by the dual linear servo (actuator) arrangement in the following Youtube videos:
Servo Basic Concepts
YouTube - 4 X Linear Servo Application
The linkage (carriage) appears to be able to rotate about a 180 degree arc.
What is this metal linkage (or carriage) system called?
|
I have a two-wheeled differential drive robot that uses PID for low-level control to follow a line. I implemented Q-learning, which collects samples for 16 iterations and then uses them to decide the best position to be on the line, so the car takes the turn from there. This lets the PID settle and gives smooth, fast following. My question is: how can I set up a reward function that improves the performance, i.e. lets the Q-learning find the best position?
Edit
What it tries to learn is this: it has 16 inputs which contain the line positions for the last 15 iterations and this iteration. The line position is between -1 and 1, where -1 means only the left-most sensor sees the line and 0 means the line is in the center. I want it to learn a line position such that, when it faces this input again, it will treat that line position as if it were the center and take the curve relative to it. For example, the error is (required position - line position), so say I had 16 zeros as input and calculated the required position as 0.4; after that, the car will center itself at 0.4. I hope this helps :)
You asked for my source code; I post it below.
void MainController::Control(void){
float linePosition = sensors->ReadSensors();
if(linePosition == -2.0f){
lost_line->FindLine(lastPos[1] - lastPos[0]);
}
else{
line_follower->Follow(linePosition);
lastPos.push_back(linePosition);
lastPos.erase(lastPos.begin());
}
}
My sensor reading returns a value between -1.0f and 1.0f; 1.0f means only the outermost sensor on the right sees the line. I have 8 sensors.
void LineFollower::Follow(float LinePosition){
float requiredPos = Qpredictor.Process(LinePosition,CurrentSpeed);
float error = requiredPos - LinePosition;
float ErrorDer = error -LastError;
float diffSpeed = (KpTerm * error + (KdTerm * ErrorDer));
float RightMotorSpeed = CurrentSpeed - diffSpeed;
float LeftMotorSpeed = CurrentSpeed + diffSpeed;
LastError = error;
driver->Drive(LeftMotorSpeed,RightMotorSpeed);
}
Here are the memory values used by QPredictor (this is what I call the learning part), and finally the QPredictor itself:
float Memory[MemorySize][DataVectorLength] =
{
{0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
{0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3},
{0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6},
{0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8},
{0.000, 0.012, 0.050, 0.113, 0.200, 0.312, 0.450, 0.613, 0.800, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.012, 0.050, 0.113, 0.200, 0.312, 0.450, 0.613, 0.800, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.012, 0.050, 0.113, 0.200, 0.312, 0.450, 0.613, 0.800, 1.000, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.000, 0.012, 0.050, 0.113, 0.200, 0.312, 0.450, 0.613, 0.800, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.000, 0.000, 0.012, 0.050, 0.113, 0.200, 0.312, 0.450, 0.613, 0.800, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.012, 0.050, 0.113, 0.200, 0.312, 0.450, 0.613, 0.800, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.012, 0.050, 0.113, 0.200, 0.312, 0.450, 0.613, 0.800, 1.000},
{0.000, 0.025, 0.100, 0.225, 0.400, 0.625, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.050, 0.200, 0.450, 0.800, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000},
{0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000}
};
QPredictor::QPredictor(){
for(int i=0;i<MemorySize;i++){
output[i]=0.0f;
input[i]=0.0f;
}
state = 0;
PrevState = 0;
}
float QPredictor::Process(float linePosition,float currentBaseSpeed){
    //shift the history back by one sample; iterate backwards so older
    //samples are not overwritten before they are copied
    for(int i=DataVectorLength-1;i>0;i--){
        input[i] = input[i-1];
    }
input[0] = m_abs(linePosition);
int MinIndex = 0;
float Distance = 10000.0f;
float sum = 0.0f;
for(int i=0;i<MemorySize;i++){
sum = 0.0f;
for(int j=0;j<DataVectorLength;j++){
sum +=m_abs(input[j] - Memory[i][j]);
}
if(sum <= Distance){
MinIndex = i;
Distance = sum;
}
}
sum = 0.0f;
for(int i=0;i<DataVectorLength;i++){
sum += input[i];
}
float eta = 0.95f;
output[MinIndex] = eta * output[MinIndex] + (1 - eta) * sum;
return -m_sgn(linePosition) * output[MinIndex];
}
float QPredictor::rewardFunction(float *inputData,float currentBaseSpeed){
float sum = 0.0f;
for(int i=0;i<DataVectorLength;i++){
sum += inputData[i];
}
sum /= DataVectorLength;
return sum;
}
I now only have the average error, and I am currently not using learning because it is not complete without a reward function. How can I adjust it according to the dimensions of my robot?
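For illustration of what I mean by a reward function, here is one hypothetical shape it could take with the data I already have (the 0.5f speed weight is an arbitrary placeholder, not something I have tuned or validated):

//Hypothetical sketch only: reward staying centered and keeping speed up.
//The 0.5f speed weight is an arbitrary placeholder, not a tuned value.
float hypotheticalReward(const float *inputData, int length, float currentBaseSpeed){
    float avgAbsPos = 0.0f;
    for(int i = 0; i < length; i++){
        avgAbsPos += m_abs(inputData[i]);
    }
    avgAbsPos /= length;
    //high when centered (small average |line position|) and moving fast
    return 0.5f * currentBaseSpeed - avgAbsPos;
}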
|
Given a desired transform matrix of the end effector relative to the base frame of the P560:
John J. Craig, in his book Introduction to Robotics: Mechanics and Control, computes the inverse kinematic solutions of a Puma 560 with (correct me if I'm wrong) Modified DH parameters and gets the following set of equations for the theta angles:
and I noticed that there are no alpha angles in these calculations.
So, my question is: why aren't the alpha angle values used in the calculation for the desired pose with the given end effector transform? Why is it independent of the axis twist angles of the robot?
|
I am working in the field of automated vehicles, mainly in the domain of passenger and commercial vehicles. I have been studying whatever I can get regarding measuring the state (relative position, relative velocity, relative heading and rotation, a.k.a. yaw rate) of surrounding objects, especially other vehicles, using sensors.
While everything else can be measured precisely using on-board sensors, I have found that not much literature is available on measuring the vehicle heading and yaw rate of other vehicles, which is baffling to me given the extreme precision of laser-based sensing (albeit using time stamps).
I am looking for:
Reference to literature with experiments for estimation of yaw rate and vehicle heading.
As far as I can see from the literature available (or the lack thereof), there is no direct way of measuring yaw rate other than by using LIDAR or camera data from consecutive time stamps or scans. However, this inherently requires the data to be correct. Hence, I would think that, due to the inaccuracies involved, this method is not used. Is this correct?
Are there any commercially available sensors that give accurate heading and yaw rate information of other vehicles?
Sources and research papers would be most welcome!
Edit: By "this inherently requires the data to be correct" I mean that, given the high sensitivity to error in heading or yaw rate at high vehicle speeds, the values computed using sensor information are not accurate enough to be put to use in practice.
|
I am not able to clearly differentiate between the two platforms:
RoboEarth, and;
KnowRob.
|
I am having issues with bringing the robot out of its sleep or off mode. It seems to go into sleep mode when there is no activity for about 4 minutes. I am using the iRobot Create 2 serial cable. When it is in sleep mode, I try removing the cable end plugged into the robot and connecting a jumper wire between pins 5 and 6 on the robot's 7-pin connector for a brief period. This effectively shorts the BRC pin to GND for a short time (less than 1 second). Then I reconnect the serial cable to the robot's 7-pin connector and try giving the robot a command, but no go. I have also read that commands 173 and 173 173 can help with this issue, but I may be mistaken. Any help on this is very much appreciated! Rick
|
I was planning on using the odometry model in the prediction stage of an Extended Kalman Filter.
State transition equations:
$$ f(X_t,a_t) = \begin{bmatrix}
x_{t+1} = x_t + \frac{\delta s_r + \delta s_l}{2} \cdot \cos(\theta_t) +u_1
\\ y_{t+1} = y_t + \frac{\delta s_r + \delta s_l}{2} \cdot \sin(\theta_t) + u_2
\\ \theta_{t+1} = \theta_t + \frac{\delta s_r + \delta s_l}{b} \cdot \sin(\theta_t)+u_3
\end{bmatrix} $$
with $\delta s_r$ and $\delta s_l = \frac{n}{n_0} \cdot 2 \cdot \pi \cdot r$
$X_t = \begin{bmatrix} x_t & y_t & \theta_t\end{bmatrix}^T$ state matrix containing XY-coordinate and heading $\theta$ of vehicle in global reference frame
$\delta s_r$ and $\delta s_l$ distance travelled by respectively right and left wheel
$b$ distance from center of the vehicle to the wheel
$n$ encoder pulses count during sampling period t
$n_0$ total pulses count in 1 wheelturn
$r$ wheel radius
$u_1,u_2$ and $u_3$ random noise N(0,$\sigma^2$)
Now I doubt whether this noise indeed has a zero mean.
Wheel slip will always make the estimated distance travelled shorter than the measured distance, won't it?
|
I am trying to control a Dobot arm. The arm moves with angles, whereas I need to work with Cartesian coordinates. From inverse kinematics equations and polar coordinates I have implemented the x, y and z coordinates, and they work very well on their own. But I cannot combine the coordinates so that they all work together: when I add them up, the arm does not go to the desired place. How can I combine these coordinates? I got some help from (https://github.com/maxosprojects/open-dobot) but could not manage to successfully move the Dobot.
Edit: I've written the code in Qt, and I've also added the triangles used for the angle calculations.
//foreArmLength=160mm rearArmLength=135mm
float DobotInverseKinematics::gotoX(float x) //func for x-axis
{
    float h=qSqrt(qPow(lengthRearArm,2)-qPow(x,2)); //height from ground
    QList<float> zEffect=gotoZ(h); //trying to find the effect of x movement on z-axis
    float cosQ=h/lengthRearArm; //desired joint angle
    float joint2=qRadiansToDegrees(qAcos(cosQ));
    //move in range control
    if(joint2 != joint2){ //NaN check
        joint2=0;
        qDebug()<< "joint2NAN";
    }
    return joint2;
}

QList<float> DobotInverseKinematics::gotoY(float y) //func for y-axis
{
    QList<float> result;
    float actualDist=lengthForeArm+distToTool; //distance to the end effector
    float x=(qSqrt(qPow(actualDist,2)+qPow(y,2)))-actualDist; //calculating x movement caused by y movement
    float joint1=qRadiansToDegrees(qAcos(actualDist/(actualDist+x))); //desired joint angle
    float joint2=gotoX(x); //the angle calculation of y movement on x axis
    if(joint1 != joint1){ //NaN check
        joint1=0;
        qDebug()<< "joint1NAN";
    }
    result.append(joint1);
    result.append(joint2);
    return result;
}

QList<float> DobotInverseKinematics::gotoZ(float z) //func. for z-axis
{
    QList<float> result;
    float joint3=qSqrt(qPow(160,2.0)-qPow(z,2.0))/ 160; //desired joint angle
    float temp=160-qSqrt(qPow(160,2.0)-qPow(z,2.0));
    float joint2=qSqrt(qPow(lengthRearArm,2)-qPow(temp,2.0))/lengthRearArm; //desired joint angle
    if(joint3 != joint3){ //NaN check
        joint3=0;
        qDebug()<< "joint3NAN";
    }
    joint2=qAcos(joint2)*(180/M_PI);
    joint3=qAcos(joint3)*(180/M_PI);
    result.append(joint2);
    result.append(joint3);
    return result;
}
|
I recently thought about building a lab bench power supply; it comes out cheaper and I love to build things...
But then I also have a LiPo charger, an iMax B6AC, that I had bought for my quadcopter, and the idea came up of whether I could use the charger as a lab bench power supply...
My question is: could this work, and how could I make it work?
|
Zeno Behaviour or Zeno Phenomenon can be informally stated as the behavior of a system making an infinite number of jumps in a finite amount of time.
While this is an important Control system problem in ideal systems, can Zeno behavior exist in real systems? Any examples?
If so, why don't noise or external factors deviate a system from achieving Zeno?
|
Why don't cheap toy robotic arms like this move smoothly? Why can't it even move itself smoothly, (even without any load)?
In other words - what do real industrial robotic arms have, that cheap toys don't?
|
I want to basically make a pin matrix controlled either by springs, electromagnets or small motors (springs being the most viable option), something like what's shown in the image. I'm pretty new to Arduino and hardware in general, so any input would be appreciated.
I mostly know the Arduino end but don't have a clue about the hardware part. I also don't have the technical expertise: I know electromagnets won't be a good option since I have to control individual pins and not clusters. Springs have the disadvantage of needing to be pushed back in, but other than that they are a very viable option. And it is not practical to have individual motors for so many pins.
|
Is it possible to convert a bicycle into an electric bike by using brushless outrunner motors that are usually meant for RC planes, multicopters, helicopters, etc.?
If it is possible, what specs do my motors need in order to provide enough power to bring my bicycle up to speed?
Will I need a gear system?
|
EDIT: Moved to ElectricalEngineering StackExchange Community
I'm using Sparkfun Razor IMU 9DOF sensor which incorporates accelerometer, gyroscope, and magnetometer, for giving the Euler's angles (yaw, pitch, and roll). I'm using the firmware at this link. It has Processing sketch for calibration of magnetometer, but it doesn't give the precise measurements. Especially, the yaw is imprecise. I'm using this sensor for measuring azimuth and altitude of stellar objects. The altitude is mostly correct, but azimuth (yaw) isn't.
I have several questions:
Is there a better way to calibrate magnetometer? Is the calibration sufficient, without using the Madgwick or Kalman filter?
Is there some nonlinearity present in the sensor? Since the yaw offset isn't constant, it changes (around -12 degrees up north to almost correct value at southwest). And if it is, how could I measure that nonlinearity and apply to the yaw measurements?
If I have to use Madgwick or Kalman, do I have to apply them on quaternions? I believe that applying them at the final yaw measurements wouldn't do the job.
|
I have estimated localization data and GPS ground truth (GPS_truth), and a [3x3] covariance matrix generated along with them.
What I would like to do is check whether the covariance is correct or not.
Can we check it by plotting the covariance?
|
I have the actual DH parameters of a robot:
d1 = 0.4865 m
d2 = 0.600 m
d3 = 0.065 m
a1 = 0.150 m
a2 = 0.475 m
all other di's and ai's are zero.
Can I use these for an analytic closed-form inverse kinematics computation, or should I measure the virtual distances in the 3D environment?
I am actually asking whether the theta angles yielded by the computations depend on the scale of those distances.
EDIT: Scale factor is unknown
|
As a hardware engineer, I have studied sensor specs quite a lot, including accelerometers, gyros and magnetometers, as well as a custom-made fluxgate. I have studied matrices and quaternions (complex numbers) and so on.
I am moving into the calibration arena. I have seen a lot of articles on calibration but am not sure which is the best solution for fixing offsets and axis misalignment. Can anyone point me to the best open-source code?
I'm not interested in output results related to flight, such as quadcopters and GPS, but more interested in directional math for drilling pipes, where toolface, inclination, azimuth and position are most important. What are the best theses or papers that cover this topic, and is there open-source or example code (in C) for this application? Do I need a Kalman filter or similar advanced post-capture data processing? Any tips on how to avoid getting too involved with the maths?
|
I have some measurements of a real-life robot and a 3D model of that robot (let's say in Unity), and I want to know the scale factor. I don't want to find the 3D model's measurements and then divide them by the real-world ones to find the scale factor, in order not to get more confused with more mathematics than I already am. So, is there a methodology, or will I have to do it as I mentioned?
|
I am building a robotic arm with these specifications:
1 meter long
approx. 1kg weight
it is made of 4 motors (2 at the base: one for rotating the whole arm left and right and another for rotating it up and down; 1 motor for rotating the second half of the arm, only up and down; and 1 for the claw used for grabbing)
it must be able to lift at least 4 kg in addition to its own 1 kg weight, and move at a speed of 180 degrees in 2 seconds (90 degrees per second, roughly 15 rpm).
Which kind of motor would be best for the project (servo or stepper) and how much torque will it need? (Please also give me an explanation of how I can calculate the torque needed). Could you also give me an example or two of the motors I would need and/or a gearbox for them (models or links).
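My own rough static worst-case attempt, assuming the full 4 kg payload acts at the 1 m tip and the arm's own 1 kg is lumped at mid-length (both simplifications on my part), is
$$ \tau \approx (4\,\mathrm{kg}\cdot 1\,\mathrm{m} + 1\,\mathrm{kg}\cdot 0.5\,\mathrm{m})\cdot 9.81\,\mathrm{m/s^2} \approx 44\,\mathrm{N\,m} $$
at the shoulder, but I am not sure how much margin to add for accelerating the load at the stated speed.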
|
I am currently working on a SLAM-like application using a variable baseline stereo rig. Assuming I'm trying to perform visual SLAM using a stereo camera, my initialization routine would involve producing a point cloud of all 'good' features I detect in the first pair of images.
Once this map is created and the cameras start moving, how do I keep 'track' of the original features that were responsible for this map, as I need to estimate my pose? Along with feature matching between the stereo pair of cameras, do I also have to perform matching between the initial set of images and the second set to see how many features are still visible (and thereby get their coordinates)? Or is there a more efficient way of doing this, through for instance, locating the overlap in the two points clouds?
|
I have a Robot with tracks. One of the tracks broke and I need to find a replacement, the tracks use the same plastic interconnects/pieces as this:
They were very popular years back. Does anyone know the brand/name?
|
We need to determine the 2D position of my robot. To do so, we have a LIDAR at a known height, scanning in a horizontal plane, which gives us the distance to the nearest point for each angular degree (so 360 points per rotation). The environment is known, so I have a map of the position of every object that the LIDAR is likely to hit.
My question is: based on the scatter plot that the LIDAR returns, how can we retrieve the position of my robot in the map? We would need the x, y position in the map frame and also the theta angle from the map frame to my robot frame.
We have tried to match objects on the map with groups of points, based on their distance from each other, and to identify those objects and the way the LIDAR "sees" them to retrieve the robot position, but it is still unsuccessful.
To put it in a nutshell, we want to make SLAM without the mapping part. How is it possible, and with what kind of algorithms ?
A first step could be to stop the robot while acquiring data if it seems easier to process.
|
I experienced some drifting when coming near to magnetic fields with my IMU, so I wondered if it is possible to completely shield the IMU from external influences. Would this be theoretically possible or does the IMU rely on external fields like the earth magnetic field? Are there maybe alternatives to IMUs that are less susceptible to magnetic interferences? I only need the rotational data of the sensor and don't use translational output.
|
I'm currently designing an autonomous robotic system to manipulate clothes using computer vision and complex moving hardware. My current design incorporates quite a number of moving parts. My biggest worry is a frame (140 x 80 x 40 cm) that rotates from 0 to 90 degrees every time it manipulates a piece of cloth. Beyond this, the design involves various other moving parts to achieve successful manipulation of the cloth. It seems like the hardware is capable of achieving the task despite the high number of complex moving parts.
So the question is: what design considerations should I take into account when designing an automated system? Should I think of an alternative design with fewer parts, or proceed with the current design if it does the job? Sorry, I am in a position where I can't disclose much information about the project.
Thank you.
|
I want to make a simple Arduino based programmable insect robot.
I want it to walk on legs, and the legs will be made of hard aluminium wire. It needs to have 4 legs. I am planning to use an Arduino Nano for it. I just have a few questions:
How do I arrange the servos and wire to produce motion?
I also want it to be able to turn sideways.
Where can I read good theory on making insect like robots?
|
I am currently trying to implement a GraphSLAM/SAM algorithm for LIDAR. From papers I've read, you generate a directed graph from expected LIDAR measurements to landmarks similar to the image below (taken from the Square Root SLAM Paper by Dellaert).
However in practice the point clouds you obtain from LIDAR are similar to this (taken from the KITTI car collected dataset):
It seems algorithms such as SIFT for 3D point clouds aren't as accurate yet. Is there a commonly used technique to efficiently find features in consecutive point clouds to find landmarks for SLAM algorithms without using >30,000 points in a point cloud?
|
I am building a robot and I want to be able to hear sounds from its environment (ideally from my laptop). What is the best way to get live audio from my robot's microphone to my computer?
I have looked into a few solutions for hosting live audio streams using packages such as darkice and icecast. I'm just wondering about better solutions for robotic applications.
Additional details:
- I have access to hardware such as Raspberry Pi, Arduino, etc.
|
I am building a Pi car with 4 gear motors (190-250 mA each, max). Now I want to use my 10000 mAh USB power bank to power up the Raspberry Pi along with the 4 gear motors.
I can power up the raspberry pi directly but I want to use my power bank as the only source of power for the Pi Car. How can I connect both my RPi and motor controller L298N to my USB power bank?
|
I want to create an amateur wire looping machine with Arduino, with similar functionality to this machine. I don't need the automatic wire feeding, as for my purposes this part can be done manually. I just want to automate the wire loop creation process, assuming that I already have straight wires.
I'm new to the world of motors, robotics, etc., so please be as descriptive as possible :)
From the video, I can tell that there are two motors:
Makes the initial wire bending
Spins to create the loop
The wire that I will be working with is galvanized steel of 11 gauge (2.0 - 2.5 mm diameter).
So what type of motors would be recommended for this application, taking into account:
They need to be accurate in their positioning, for repeatability
They need to have enough torque (especially the one that creates the loop itself) to work with this type of material
They need to be as fast as (or close to) the ones in the video
This is not going to be an industrial grade machine that will be running all the time
Ideally, they need to be not that expensive.. I don't want to be bankrupt by the end of this project :)
If links can be included for recommended products, it would be great.
Thanks!
|
i have "TAROT ZYX-GS 3-Axis Gimbal Stabilization System ZYX13" sensor that gives me the value of Roll tilt and Pan.The 3 axis accelerometer give me the value of x y and z.so can we use the Gimbal stabiliztion system in place of accelerometer
|
I'm starting out with robotics, got my first DC gear motor with quadrature encoder (https://www.pololu.com/product/2824):
I ultimately plan to hook it up to a motor driver connected to a Tiva Launchpad. However, since I'm a noob, and curious, I am starting by just playing with it with my breadboard, oscilloscope, and voltage source. E.g., when I plug in the motor power lines into my (variable) voltage source the axis spins nicely as expected between 1 and 12 V.
The problems start when I try to check how the encoder works. To do this, first I plug a 5V source into the encoder GND/Vcc, and then try to monitor the encoder output.
While the motor is running, I check the yellow (encoder A output) cable, referencing it to the green (encoder GND) cable. I made a video that shows a representative output from one of the lines (no USB on my old oscilloscope, so I took a video of it using my phone).
As you can see in the video, the output doesn't look anything like the beautiful square waves you typically see in the documentation. Instead, it is an extremely degraded, noisy sine wave (at the correct frequency for the encoder). The amplitude of the sine is not constant, but changes drastically over time. Strangely, sometimes it "locks in" and looks like the ideal square wave for about a second or two, but then it gets all wonky again.
Both of the lines (encoder A and B output) act this way, and they act this way at the same time (e.g., they will both lock in and square up at the same time, for those brief glorious moments of clarity). Both of my motors are the same, so I don't think it's that I have a bad motor.
I have also checked using Vcc=12V, but it made no difference other than changing the amplitude of the output.
Note I already posted this question at reddit:
https://www.reddit.com/r/robotics/comments/502vjt/roboredditors_my_quadrature_encoder_output_is/
|
I have a 3D gimbal system and I want to use this sensor in place of an IMU (3-axis accelerometer) in a quadcopter.
|
For this robot, the gear attached to the motor is linked to the gear attached to the wheels by a bicycle chain (I am using a bicycle wheel and transmission set as the parts for the robot's movement).
How does using a bicycle chain affect the power transmission efficiency, and how does this impact the torque?
|
I came across the Robotics Library (RL), but I am quite unclear about its real purpose. Is it an FK/IK solver library or simply a graphical simulator? RL has poor documentation, so it's not clear how to use it. I'm looking for a C++ library with APIs to solve FK/IK analytically. Thank you.
|
I have a particular example robot that interests me:
http://www.scmp.com/tech/innovation/article/1829834/foxconns-foxbot-army-close-hitting-chinese-market-track-meet-30-cent
See first image, the bigger robot on the left, in particular the shoulder pitch joint that would support the most weight. I'm curious because I know it's really hard to balance strength and precision in these types of robots, and want to know how close a hobbyist could get.
Would they be something similar to this: rotary tables w/ worm gears?
http://www.velmex.com/products/Rotary_Tables/Motorized-Rotary-Tables.html
Looking for more actuation schemes to research, thanks!
|
I have thought of a technique to increase the resolution of a POV (persistence of vision) display. In a typical POV display, the LEDs are arranged in a strip and spun in a circle. There are two limiting factors to increasing the resolution along the circumference of any one circular path that an LED follows. One is, depending on the speed of the POV wheel, the minimum time required (determined by the microcontroller) to change the LED's color in the case of an RGB LED. The other factor is the LED's width, which increases the 'bleeding' of color from one pixel to the neighboring pixel if the LED changes color or brightness too fast.
If one were to fix a slit in front of an LED, |*| <-- like so, would this help improve the resolution of the POV display? By doing this one would in effect be reducing the width of the 'pixel' along the circumference that the LED traverses.
Thus if one were to use a fast enough microcontroller and a narrow enough slit, one could probably obtain a very high resolution along one dimension at least.
To be clear I've not yet implemented this, and am just looking for any experienced person who can tell if this will work or not.
|
I am trying to research swarm robotics and to find helpful information or even articles/papers to read on the process of setting up communication protocols between different robots. For instance, using a LAN connection, does each robot need to have a wireless adapter, and how would I begin setting up a network for, say, 5-10 smaller robots?
More generally, could someone help me understand how devices connect and communicate across networks? I understand the basics of IP addressing, but I haven't researched the further complexities.
Any advice or direction is appreciated.
|
I am trying to build a robot, but a bigger robot than a Raspberry Pi connected to some tiny something the size of a can of Coke. I am planning to build a robot of a size of 1.2-1.5 m. I have already chosen a torso, arms and so on, but the problem is the base (bigger wheels, able to cope with a weight of 10 kg minimum).
I was thinking about using the iRobot Create 2 platform, but it is not that robust and cannot pass over slightly higher doorsteps. I want to let it go outdoors eventually. Do you know of some product or similar project? Every base sold in shops is very small, something which can be used on a table, and not something which could serve as a good robust mechanism with wheels, servos and everything...
|
I'm new to robotics (and maybe it's not even robotics), and want to build a "pushing mechanism" to automatically push a "pop up stick" up from my kitchen bench (see product here: http://www.evoline.no/produkter/port/).
By design, the "pop up stick" is made to be dragged up by hand, but I want to install a mechanism under the bench, to do it by a trigger of some kind.
I'm a programmer, and I have some experience with Raspberry Pi. Does anybody have any hints of what products I could use to build this?
Thanks :)
|
At the moment this project is purely hypothetical but my friend and I were looking to make a model airplane which could stabilize its flight to be a straight line. Basically, we want there to a button on the controller or a separate transmitter which, when activated, would cause the plane to fly in a straight, horizontal line in whichever direction it was facing. The instrumentation we believe we would need is a three axis accelerometer, so that it can level the plane to fly horizontally with the ground and keep the roll and yaw steady. My question is, would this work? When I talked it over with my dad (who does a lot of this kind of thing) he suggested that we might need a Kalman filter to keep the instrumentation from gradually drifting off course but, being a high school student, that sounds a little intimidating. Any comments on feasibility or improvements would be greatly appreciated.
|
I had to make a Unity3D robot model (ABB IRB 1600-6R/6DOF) that, given a desired end effector transformation matrix, calculates and rotates the robot joints to the appropriate angles (inverse kinematics computation). I found some code in the Robotics Toolbox for MATLAB that (let's say that you trust me) actually calculates the needed angles (it's the general "offset" case in ikine6s.m), but for a different zero-angle position than my chosen one, which is corrected using the appropriate offsets.
So, I have set up my 3D robot model in Unity3D correctly and the angles are correct. I give the same parameters to the Robotics Toolbox in MATLAB and the results are the same; I plot the robot stance in MATLAB to see it, and it's in position. I then run the code in Unity3D and the robot model moves to the stance I saw in MATLAB, but it is off position: the end effector is away from its desired position.
Am I missing something?
The scaling is correct. I have subtracted a translation (equal to the distance from the bottom of the model's base contact with the floor to the start of the next link, as MATLAB doesn't account for it) from the Y component of the desired position of the end effector (in its homogeneous transformation matrix I use the identity matrix as the rotation part, so we do not care about that part).
Here are some pictures showing my case(say Px, Py, Pz is my desired EE position):
MATLAB-This is the plot of the results of the MATLAB ikine6s with input Px, Py, Pz in the corresponding translation part of the desired homogenous transform matrix:
Unity3D-This is what I get for the same input and angles in Unity3D-the EE is off position(should be half inside white sphere):
|
I'm building a modified version of the standard hanging plotter (v-plotter). The basic idea is that you have two cables hanging from stepper motors which form a triangle supporting the pen at the tip.
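For reference, in the unmodified plotter (cables running straight from motor anchor points $A=(x_A,y_A)$ and $B=(x_B,y_B)$ to the pen at $(x,y)$, an idealization rather than my anchored-at-$C$/$D$ design described below) the inverse kinematics reduce to two distances:
$$ L_A = \sqrt{(x-x_A)^2+(y-y_A)^2}, \qquad L_B = \sqrt{(x-x_B)^2+(y-y_B)^2} $$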
My design has the strings anchored at points $C$ and $D$, which causes the behavior to be somewhat different from the normal plotter, especially when operating close to the motors.
I was able to work out the forward kinematics fairly easily, but the inverse kinematics are turning out to be a real headache. You can see my attempt at a standard geometry solution on math.stackexchange here.
Is there anything specific to calculating kinematics for robotics which could help me?
I'm not interested in modifying the hardware to make the math easier. I'm also not interested in discussing the center of gravity of the pen, cable weight...etc. I want to calculate the kinematics for this as an ideal system.
Any and all advice is appreciated.
Thank you.
|
I am building a rover which can navigate autonomously. I do not want to use wheel encoders for generating robot odometry since it causes drifts due to slippage etc.
I want to use GPS for generating odometry. Would it be useful for me to use the differential GPS approach, or shall I use a simple GPS?
I am thinking of using DGPS because it enhances the accuracy to a great extent. But at the same time, I think that in generating odometry, absolute GPS values are not what matters; instead, relative GPS values are important, to find, for example, the distance covered by the robot.
Any suggestions on this? Thanks
|
I'm a swimmer and would like to create a small object that would help me follow my training plan.
In other words, I want to create a custom stopwatch with 5 colored LEDs that will be on or off according to the training program.
This is my training:
0:00 to 5:00 : Low Effort (Green light)
5:00 to 15:00 : Medium Effort (Yellow light)
15:00 to 20:00 : High Effort (Orange Light)
20:00 to 25:00 : Medium Effort (Yellow light)
25:00 to 26:00 : Maximum Effort (Red light)
26:00 to 30:00 : Low Effort (Green light)
30:00 Training done (should turn off automatically)
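To make the logic concrete, this is roughly the timing sketch I have in mind for an Arduino-class board (pin numbers and one LED per color are my assumptions; waterproofing, battery and start-button handling are left out, and timing simply starts at power-on):

//Hypothetical pin assignment: one LED per color in the schedule above.
const int GREEN = 2, YELLOW = 3, ORANGE = 4, RED = 5;
const int NUM_LEDS = 4;
const int LEDS[NUM_LEDS] = {GREEN, YELLOW, ORANGE, RED};

void lightOnly(int pin) {            //turn on one LED, all others off
  for (int i = 0; i < NUM_LEDS; i++) {
    digitalWrite(LEDS[i], LEDS[i] == pin ? HIGH : LOW);
  }
}

void setup() {
  for (int i = 0; i < NUM_LEDS; i++) pinMode(LEDS[i], OUTPUT);
}

void loop() {
  unsigned long minutes = millis() / 60000UL;   //minutes since power-on
  if      (minutes < 5)  lightOnly(GREEN);      //0:00-5:00   low effort
  else if (minutes < 15) lightOnly(YELLOW);     //5:00-15:00  medium effort
  else if (minutes < 20) lightOnly(ORANGE);     //15:00-20:00 high effort
  else if (minutes < 25) lightOnly(YELLOW);     //20:00-25:00 medium effort
  else if (minutes < 26) lightOnly(RED);        //25:00-26:00 maximum effort
  else if (minutes < 30) lightOnly(GREEN);      //26:00-30:00 low effort
  else                   lightOnly(-1);         //30:00 done: all LEDs off
}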
The whole thing must be waterproof.
Must work on battery.
I'm asking for some guidelines; I'm not asking for a complete patent (though that would be appreciated), just the main idea of how to do it, so I can avoid wasting my time on research and tests.
I don't want to spend too much time on the project, less than 100 hours maybe. I have studied computer science and I have most of the materials I'll need; I can buy what is missing.
I'm a little bit of a noob at this kind of project, so I want to know how you would do it.
Thanks for the help.
|
At least, on two legs. ASIMO, one of the best-known humanoid robots, is already able to walk, although it doesn't seem to do this very stably, and this is a recent result.
As far as I know, the legs are essentially many-dimensional non-linear systems, and the theory of their control is somewhere on the border between "very hard" and "impossible".
But, for example, airplanes are similarly many-dimensional and non-linear; despite that, autopilots have been controlling them well enough for decades. They are trusted enough that the lives of hundreds of people are entrusted to them.
What is the essential difference? What makes walking so hard, while controlling an airplane is so easy?
|
I'm trying to automate a process wherein individual small parts must be weighed one after another for identification / sorting.
The parts would be fed to the scale using a conveyor. A mechanism is in place to feed the parts one at a time (i.e. sufficient spacing between the parts).
I know that it is possible to have weighing idlers on industrial conveyors but their precision and repeatability seem to be inadequate for precision weighing. The parts range in weight from 100mg to 30g so the only type of scale that I've found that has sufficient precision (10mg or better) are precision/laboratory scales. For example: Torbal Precision Scales.
I can imagine a solution wherein a lightweight flat surface (e.g. plexiglass) could be laid atop the scale platter, upon which the feed conveyor would deposit the parts to be weighed. Then, after a stable weight has been read by the computer via the integrated RS-232 port, a motor-controlled "sweeping mechanism" could brush the part off the plexiglass platter onto a discharge conveyor system for routing into one of many possible part bins.
Given this, I have two questions:
Is there a technology that I'm unaware of that could achieve individual small part weighing that would be simpler than stated solution?
Would the plexiglass platter (used to create a larger flat surface) cause too much pre-loading of the load cells, thereby affecting the scale's performance / durability in any way?
|
I want to make a dispenser system for Indian spices.
Dispense in multiple of teaspoons (5 ml)
Handle powdered spices and small seeds like mustard/cumin
I ordered a spice carousel from Amazon and was hoping I'd be able to add an actuator to it, but it needs a lot of force to click the dial.
I'm thinking that I need something similar to the sugar dispensers in coffee machines. To start with I want to design individual units, but eventually I want to make a solution that can dispense 6 different spices.
I'd appreciate any ideas on how to get started on this; thanks in advance!
|
On Google Scholar there are a lot of papers which explain the advantages of motion primitives: instead of searching inside the state space (the configuration space of a robot), the solver has to search inside the plan space of motion primitives. Sometimes this concept is called a lattice graph.
Even if all papers are convinced about motion primitives in general, there is room for speculation about how exactly this idea should be implemented. Two different schools of thought are out there:
Machine learning to generate motion primitives. This is based on Q-learning, neural networks and motion capture. The project "Poeticon" (Yiannis Aloimonos) is a good example.
Hand-coded motion primitives. This concept is based on manually coded finite state machines (FSMs) which can only solve a concrete example like "pushing the box". Additional functionality has to be implemented by hand.
The question is: which concept is better on real-life examples?
|
I have recently purchased an Orbbec Astra camera, which uses the same technology and produces the same style depth map as a Microsoft Kinect.
What would be the correct file format to save the depth map frames in, and how would I go about saving the recorded videos?
I have been able to load a stream, but I am not sure what format the frames should be saved in so that I can load them for testing at a later stage and still have all the same information.
I am using OpenNI2, OpenCV3.1.0 and C++.
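One approach I am considering (not sure if it is the right one, and it assumes the OpenNI2 stream delivers 16-bit depth values wrapped in a cv::Mat): save each frame as a 16-bit PNG, since cv::imwrite preserves 16-bit single-channel data losslessly in PNG, and reload it unchanged later.

#include <opencv2/opencv.hpp>
#include <sstream>
#include <string>

//Save one 16-bit depth frame losslessly as PNG.
//'depth' is assumed to be a CV_16UC1 Mat wrapping the OpenNI2 frame buffer.
void saveDepthFrame(const cv::Mat& depth, int frameIndex) {
    CV_Assert(depth.type() == CV_16UC1);
    std::ostringstream name;
    name << "depth_" << frameIndex << ".png";
    cv::imwrite(name.str(), depth);   //16-bit PNG keeps the raw depth values
}

//Reload later without any conversion to 8 bits.
cv::Mat loadDepthFrame(const std::string& path) {
    return cv::imread(path, cv::IMREAD_UNCHANGED);
}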
|
I am fairly new to ROS and Gazebo. I have posted this question on the ROS/Gazebo forums, but they appear to be dead.
I am using ros-kinetic and gazebo7.3 on Ubuntu 16.04. I have been following this tutorial (Tutorial) and have adapted it slightly for my own vehicle (4 wheels instead of 2, skid drive plugin instead of diff drive).
I receive errors when running roslaunch on my own project, and I also get the same errors when using the same command on the completed project provided by the creators of the tutorial (Complete Project Github). I assume that means I am missing something in ROS or Gazebo.
Running:
roslaunch jaguar4x4_gazebo jaguar4x4_world.launch
gives the following errors (repeated for each of the four joint and trans):
[ERROR] [1472045218.311147687, 0.142000000]: No valid hardware interface element found in joint 'left_back_wheel_hinge'.
[ERROR] [1472045218.311359894, 0.142000000]: Failed to load joints for transmission 'left_back_trans'.
Also:
[ERROR] [1472045218.646822285, 0.365000000]: Exception thrown while initializing controller leftfrontWheel_effort_controller. Could not find resource 'left_front_wheel_hinge' in 'hardware_interface::EffortJointInterface'. [ERROR] [WallTime: 1472045219.647578] [1.363000] Failed to load leftfrontWheel_effort_controller [INFO] [WallTime: 1472045219.647938] [1.363000] Loading controller: leftbackwheel_effort_controller [ERROR] [WallTime: 1472045220.651591] [2.366000] Failed to load leftbackwheel_effort_controller
This error is repeated for each of the 4 wheels.
I have seen similar error messages posted online, however, the solutions don’t seem to fix mine (I can't post any more links due to my reputation level).
(1). I already use the hardwareInterface tags
(2). I have already installed ros_control
If anyone has any ideas on how to fix these errors I would appreciate it.
EDIT -------------------------------------------------------------------
I think maybe there is an issue with the way that I have created my joints and the use of the hardwareInterface tags.
In my macros.xacro file I create my joints and link the transmission separately (I repeat this twice for the front and back wheels):
<joint name="${lr}_front_wheel_hinge" type="continuous">
<parent link="chassis"/>
<child link="${lr}_front_wheel"/>
<origin xyz="${+wheelPos-chassisLength+2*wheelRadius} ${tY*wheelWidth/2+tY*chassisWidth/2} ${wheelRadius}" rpy="0 0 0" />
<axis xyz="0 1 0" rpy="0 0 0" />
<limit effort="100" velocity="100"/>
<joint_properties damping="0.0" friction="0.0"/>
</joint>
<transmission name="${lr}_front_trans">
<type>transmission_interface/SimpleTransmission</type>
<joint name="${lr}_front_wheel_hinge" />
<actuator name="${lr}_front_Motor">
<hardwareInterface>EffortJointInterface</hardwareInterface>
<mechanicalReduction>10</mechanicalReduction>
</actuator>
</transmission>
Doing it this way gives the errors mentioned above, but my model in Gazebo appears.
If I try to merge both of these blocks so that there is just one joint tag wrapped by the transmission tags then I get the following error and my model does not appear in Gazebo:
[ERROR] [1473672367.041892175]: Failed to find root link: Two root links found: [footprint] and [left_back_wheel]
I don't understand why I get this error because I have a joint between my chassis base link and the world in my Jaguar4x4.xacro file :
<link name="footprint" />
<joint name="base_joint" type="fixed">
<parent link="footprint"/>
<child link="chassis"/>
</joint>
I now get a number of errors when trying to combine the joint and transmission blocks, so I imagine that this is not the best way to go?
|
What is the best way to estimate the state
[x-position;
y-position;
heading (yaw angle);
velocity;
acceleration;
curvature (or yaw rate)]
of a moving leading vehicle with sensors mounted on a follower/ego vehicle?
The following measurements of the leading vehicle are obtained via radar sensors mounted on the ego vehicle.
x-y-position in the ego coordinate frame
heading in ego coordinates
relative and absolute velocity and acceleration
No information about curvature (yaw rate) is available; it should be estimated, which is possible using the lateral acceleration and longitudinal velocity.
For estimation I think of using EKF or nonlinear moving horizon estimation.
Consider that no prediction about the moving leader vehicle can be made, because its control inputs are unknown; only the measurements (update step) and the movements of the ego vehicle (incorporated in the prediction) are available.
What kind of model would be appropriate for the whole scenario?
just a model for the leading vehicle? (e.g. bicycle model)
just a model for the following ego vehicle?
or a combination of both vehicles?
Option 1 would be perfect in combination with a simple bicycle model if there were an outside observer who is not moving. Option 2 is not really an option, because the configuration of the leader vehicle is what should be estimated. Option 3 seems to me the right way because of the following thoughts:
Looking from the ego coordinates: is it correct that a motion change of the ego vehicle will appear as if the measured locations of the leading vehicle had changed? If so, will I need a coordinate transformation, or is it better to use a model in global coordinates and then transform the measurements (which are in ego coordinates)? The approach using global coordinates seems counter-intuitive, because the final estimate should be used by the follower/ego vehicle as a reference trajectory.
Can you give me a hint about which coordinate frame (global or ego) to use, which model to use in the prediction step, and whether my thoughts on motion changes are correct?
Or do you know any sources that address this or a similar issue?
For the process model I thought of
x_k+1 = x_k + v_k * cos(heading_k)
y_k+1 = y_k + v_k * sin(heading_k)
heading_k+1 = heading_k
v_k+1 = v_k
a_k+1 = a_k
curvature_k+1 = curvature_k + a_y_k/v_x_k
Of course, there should be some process noise added, especially for heading, velocity, acceleration and curvature, because these measurements are rather inaccurate.
For heading, velocity and acceleration I would use the previous estimates (or measurements), since no other source except the measurements from the sensors is available.
Curvature is computed using the acceleration in the (global?) y-direction and the (global?) velocity in the x-direction: curvature = a_y/v_x
The measurement model probably looks something like this:
y = [1 1 1 1 1 0;
0 1 0 0 0 0;
0 0 1 0 0 0;
0 0 0 1 0 0;
0 0 0 0 1 0;
0 0 0 0 0 0]*x
where x is the state vector [x-position; y-position; heading (yaw angle); velocity; acceleration; curvature or (yaw rate)]
The trajectory must be smooth and should match the driven path of the leader vehicle exactly. So I think some sort of estimation will be necessary in order to avoid following the measurements in straight lines, which would lead to an unsteady trajectory.
The measurements of heading, velocity and acceleration have a rather high variance, depending on the situation.
|
My problem says that, for the articulated arm shown, I should determine:
The algorithm for the end effector position in terms of q1, q2, q3, q4 and q5 with the Modified Denavit-Hartenberg convention, and
The coordinates of the end effector EF for the following data table.
I have already made the algorithm; can I have some help with all the matrices?
And also with the graphical forward kinematics?
|
I actively take part in robotics competitions with my school's robotics club, and all of the line followers we use are implemented using a PID algorithm (PD, actually). Recently my electronics teacher told me to get my feet wet with fuzzy logic, and he explained the reasons for using it (a mathematical model of a robot is hard to come up with, fuzzy logic is more "human-friendly" than PID, etc.). My question is whether I should really bother with fuzzy logic, stay with PID, or maybe change to another less-known but better algorithm. The tracks that the robot has to follow are regular white-over-black continuous ones, with no weird extras or anything like that, though most of them have marks on the sides at the beginning and end of every turn; apart from that, nothing else. So should I stick with PID or change to something else? Any advice would be very helpful, thanks.
|
The IONI drive can accept a new setpoint using an analog input. How many bits is the ADC on the IONI drive? Is it 12, or 16 bit?
|
Say, I have a flat square landing place for a copter, made of e.g. aluminium or plastic, with marks on it making a square, looking like this from the top:
-------------------------------------
| |
| * * |
| |
| |
| |
| | |
| --+-- |
| | |
| |
| |
| |
| * * |
| |
-------------------------------------
The copter is smaller than the landing field, so it can land properly. Say it has some sensors, or can be manipulated by the computer that powers the logic of the landing square. The square itself can have sensors too.
I want the copter to land as precisely as possible on the spot marked by the landing marks.
What ways are possible to implement this functionality, knowing that it's a DIY prototype where I have no access to secret technology, but only to what is available in online electronic component stores?
I think of these:
Having a camera either on the drone or on the landing plate that finds the marks and calculates whether these marks are in the right place or whether the drone must move along the X or Y axis. This looks like overkill and will be hard to configure for difficult conditions like rain, night or snow.
Using laser transmitters and receivers.
Some solution I am not aware of but which is still simple and efficient enough.
In what direction should I look to find a solution for this problem?
|
Can anyone throw some light on using accelerometers to measure angular acceleration and hence angular velocity? This approach is meant to avoid gyroscopes and their drift errors. Any links on this would also be very helpful.
Thank you.
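From what I understand so far (please correct me), the gyro-free idea uses two accelerometers mounted at different radii $r_1$ and $r_2$ along the same radial line of the rotating body; assuming pure rotation about a fixed axis, the tangential and centripetal components give
$$ \dot{\omega} = \frac{a_{t,2}-a_{t,1}}{r_2-r_1}, \qquad \omega^2 = \frac{a_c}{r} $$
so angular velocity would follow either by integrating $\dot{\omega}$ (which reintroduces drift) or from the centripetal term (which loses the sign).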
|
I am trying to build a bot that is "always on". I will be using a qi platform and am looking for a battery that charges through Qi. I've found several batteries that can charge your phone through Qi, but I am looking for one that charges itself through Qi. Any clue where I can find one like that? As a second question, do you think such an idea would work? Any foreseeable problems?
|
I'm working on creating a robotic device capable of oscillating an adult human leg using the knee as the pivot, while being able to constrain the amplitude of oscillation by controlling the mechanism with a microcontroller and some other electronic components.
The only idea I have for doing this is placing a pair of servomotors and some gears that will rotate the structure that holds the leg, and then controlling the PWM. However, I don't think I will be able to find servos strong enough to lift both the structure and the leg, so it would be very useful to hear some suggestions for mechanisms and motors to accomplish this device.
|
The Mars Exploration Rover (MER) Opportunity landed on Mars on January 25, 2004. The rover was originally designed for a 90 Sol mission (a Sol, one Martian day, is slightly longer than an Earth day at 24 hours and 37 minutes). Its mission has been extended several times, the machine is still trekking after 11 years on the Red Planet.
How has it kept working for 11 years? Can anyone explain the hardware-related aspects and how smart this rover is?
I would like to know how this rover is serviced on Mars with regard to hardware-related issues (if any hardware is not working properly, how can it be repaired on the Red Planet?).
|
My main problem is measuring joint torque/load economically. There are ways to detect the electric current drawn which is proportional to the torque. Since my motor is a BLDC motor, there will be a driver which drives the 3 phases and hence the current drawn will also include losses, which would be variable. Is there a proper way to estimate the load by sensing current?
Is there any way to reliably get the torque/load on a motor joint, what are the common ways to do so without spending a ton of money on torque sensors?
Feel free to link me to any papers/research on this subject.
I need to judge the load on the joints of a 6 dof robotic arm programmatically and detect unexpected impacts.
|
I've been developing an inspection rover with two miniature cameras on two dual axis gimbals. I've tried analog, digital and now brushless servos from various R/C parts makers and all of them eventually develop a flutter or shake. I am getting ready to step up to a more expensive servo that will be almost 30X more expensive than the typical $30 R/C servos I started with. The loads seem to be easily controlled at first but after a few weeks the shaking starts and just gets worse until I replace the servo. I've tried two different R/C controllers and that hasn't helped and I can't seem to control by programming my Spektrum DX-9 controller. Any suggestions?
|
So I have this motor: https://www.servocity.com/23-rpm-hd-premium-planetary-gear-motor-w-encoder
and this motor shield for my arduino uno:
https://www.pololu.com/docs/pdf/0J49/dual_vnh5019_motor_driver_shield.pdf
And I have no background in electronics to speak of. I cannot find a basic enough resource to tell me where to attach the wires, or really what each of them does. The resources that are simple enough for me only deal with motors with a couple of wires. Information on motors with six wires goes immediately into things that are way beyond me, or at least uses terms that I don't know.
So the motor has six leads, and the only info I can find on them in the documentation are this: Black (Motor -), Red (Motor +), Green (Ground), Brown (Channel B), Yellow (Channel A), and Orange (Sensor Voltage +).
I have the black and red hooked up to M1A and M1B on my shield, so I am able to turn the motor on, vary its speed and direction. I also had all the other wires hooked up to various pins mentioned in the Demo sketch associated with the shield.
I have spent a week trying to figure this out on my own, but I am not getting anywhere (except that I know that hooking up orange to almost anything triggers a fault).
So in addition to the four wires that are not connected to anything, I have these pins mentioned in the sketch that are also not connected to wires: D2 M1INA, D4 M1INB, D6 M1EN/DIAG, D9 M1PWM and A0 M1CS.
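If it helps to show where I am at, this is the sort of minimal encoder test I have been trying to piece together alongside the shield's demo sketch (just a sketch; the pin assignments and the 5V/GND hookup for the orange and green wires are my own guesses, not something from the documentation):

```cpp
// Minimal encoder-only test sketch (Arduino Uno with the VNH5019 shield).
// Pin choices are my guesses: the shield already uses D2 (M1INA), so I put
// Channel A (yellow) on D3, which is the Uno's other interrupt pin, and
// Channel B (brown) on the free pin D5. Orange (Sensor Voltage +) would go
// to 5V and Green (encoder ground) to GND -- these are assumptions.
const int ENC_A = 3;   // yellow, interrupt-capable on the Uno
const int ENC_B = 5;   // brown, plain digital input

volatile long encoderCount = 0;

void onChannelA() {
  // On a rising edge of A, the level of B tells the direction of rotation.
  if (digitalRead(ENC_B) == HIGH) {
    encoderCount++;
  } else {
    encoderCount--;
  }
}

void setup() {
  Serial.begin(9600);
  pinMode(ENC_A, INPUT_PULLUP);
  pinMode(ENC_B, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(ENC_A), onChannelA, RISING);
}

void loop() {
  Serial.println(encoderCount);   // should change as the output shaft turns
  delay(200);
}
```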
I hope someone can provide a pointer to a resource or a good, plain-language explanation for what these wires and pin descriptions mean. Thanks in advance!
|
The illustration above shows the basic setup for this question. The scenario is the following:
A potentiometer (providing 0 to 5 V) is an analog input on A0. This potentiometer voltage is converted to a PWM signal (pin 9) which controls a fan, M1. This means the human directly controls the M1 fan speed.
This directly speed-controlled M1 fan creates air pressure on a sensor, Sen1.
On the other hand, I want the M2 fan to reach a speed (via pin 10 PWM) such that Sen2 outputs the same voltage as Sen1. This means the voltage at analog input A1 should eventually equal the voltage at A2, so one can achieve the same air pressure at Sen1 and Sen2.
I'm not experienced in PID or other types of control. I would be glad if someone could help me relate the parameters and code this scenario. I was advised to ask here instead of on the Arduino Stack Exchange.
Edit: Do you think I can use the following idea? If I make these changes: the PWM to the LED in that example becomes the PWM to M2 in my case; the poti (potentiometer) input in the example becomes the Sen1 input in my case; and the LDR input in the example becomes the Sen2 input in my case. Do you think it is worth trying?
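To show what I mean, here is the rough loop structure I have in mind (just a sketch; the gains Kp and Ki are placeholders I would still have to tune, and the pin/sensor mapping follows my description above):

```cpp
// Sketch of the idea: M1 follows the pot directly (open loop),
// M2 is closed-loop so that Sen2 matches Sen1.
const int POT_PIN  = A0;
const int SEN1_PIN = A1;
const int SEN2_PIN = A2;
const int M1_PWM   = 9;
const int M2_PWM   = 10;

float Kp = 0.5, Ki = 0.05;   // placeholder gains, to be tuned
float integral = 0;

void setup() {
  pinMode(M1_PWM, OUTPUT);
  pinMode(M2_PWM, OUTPUT);
}

void loop() {
  // Open loop: pot reading (0-1023) -> PWM (0-255) for M1.
  int pot = analogRead(POT_PIN);
  analogWrite(M1_PWM, pot / 4);

  // Closed loop: drive M2 until Sen2 matches Sen1.
  int error = analogRead(SEN1_PIN) - analogRead(SEN2_PIN);
  integral += error * 0.01;                       // 10 ms loop time
  float u = Kp * error + Ki * integral;
  analogWrite(M2_PWM, constrain((int)u, 0, 255));

  delay(10);
}
```

If a simple PI loop like this is not the right structure, that is exactly where I would appreciate guidance on how to choose and tune the parameters.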
|
I have my IMU and I can get attitude (pitch, roll, yaw) as well as gyro (x, y, z)
As far as I can tell, attitude is all I need to stabilize my drone.
Is there any reason to implement gyro or will attitude suffice?
-- EDIT --
It seems the benefit of the gyro is an immediate response to an outside factor affecting drone stability.
Example:
A gust of wind hits your drone. The attitude estimate will not react quickly enough to combat the effects of the wind. As far as I can tell, the gyro will be the best way to combat extreme wobble and outside influences such as wind or a tether (leash).
-- EDIT 2 --
It's been a while since I posted this, but I should add that I ended up leveraging the infamous PID algorithm on my Arduino.
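To illustrate what the first edit is getting at, here is a single-axis sketch of the usual angle-plus-rate cascade, where the raw gyro rate closes a fast inner loop (my own illustration, not code from any particular flight stack; the gains are placeholders):

```cpp
// Single-axis illustration of why the gyro helps: the attitude (angle) loop
// sets a desired rotation rate, and the gyro closes a faster inner rate loop.
float angleKp = 4.0;                 // outer loop: angle error -> desired rate
float rateKp = 0.7, rateKd = 0.02;   // inner loop: rate error -> motor correction
float lastRateError = 0;

float stabiliseAxis(float desiredAngle, float measuredAngle,
                    float measuredRate, float dt) {
  // Outer (slow) loop on the fused attitude estimate.
  float desiredRate = angleKp * (desiredAngle - measuredAngle);

  // Inner (fast) loop on the raw gyro rate: it reacts to a wind gust
  // before the fused attitude estimate has fully caught up.
  float rateError = desiredRate - measuredRate;
  float correction = rateKp * rateError
                   + rateKd * (rateError - lastRateError) / dt;
  lastRateError = rateError;
  return correction;   // added to / subtracted from the motor outputs
}
```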
|
I am controlling an IRobot Create 2 with a raspberry pi, which I am powering with one of these:
https://www.adafruit.com/product/1566
(I know, I know, I should have just gotten power from the battery that's already in the Roomba.)
Anyway, I am trying to read the electric current values flowing through the Roomba as it cleans. I should be getting negative values (which I did before switching to this battery). My code was working, although I know that doesn't mean much.
After much thought, I was wondering if maybe the Roomba is drawing power through the serial port from the Raspberry Pi, giving it a slight charge even though it is not on the charger.
Is this possible, and is there a way to confirm or deny my suspicions?
Any help is appreciated.
|
I have a 7 pin cable connected to the OI port of my Create 2. While the Create is on the floor away from the dock, if I touch the BRC pin (7) to the ground pin (5), the Create wakes up, makes a beep, and the green Clean button light turns on.
However, if I put the Create on the dock and let it sit for several minutes and then touch the BRC pin to the ground pin, nothing happens.
Is there a method for waking up/starting the Create while it is on the dock/charging?
(Note: I have seen this answer and it does not solve the issue IRobot Create 2: Powering Up after Sleep)
If I send a command or try to fetch sensor data after the Create has been sitting on the dock for several minutes, I get no response and the Create doesn't do anything (no lights, sounds, movements, etc).
While the Create is already awake, the response from a restart command (0x7) looks like this
bl-start
STR730
bootloader id: #x4718535E 7DDBCFFF
bootloader info rev: #xF000
bootloader rev: #x0001
2007-05-14-1715-L
EDIT: simplified the scope of the question.
EDIT 2: added info from Jonathan's questions.
|
For a mobile robot - four wheels, front wheel steering - I use the following (bicycle) prediction model to estimate its state based on accurate radar measurements only. No odometry or any other input information $u_k$ is available from the mobile robot itself.
$$
\begin{bmatrix}
x_{k+1} \\
y_{k+1} \\
\theta_{k+1} \\
v_{k+1} \\
a_{k+1} \\
\kappa_{k+1} \\
\end{bmatrix} =
f_k(\vec{x}_k,u_k,\vec{\omega}_k,\Delta t) =
\begin{bmatrix}
x_k + v_k \Delta t \cos \theta_k \\
y_k + v_k \Delta t \sin \theta_k \\
\theta_k + v_k \kappa_k \Delta t \\
v_k + a_k \Delta t \\
a_k \\
\kappa_k + \frac{a_{y,k}}{v_{x,k}^2}
\end{bmatrix}
+
\begin{bmatrix}
\omega_x \\
\omega_y \\
\omega_{\theta} \\
\omega_v \\
\omega_a \\
\omega_{\kappa}
\end{bmatrix}
$$
where $x$ and $y$ are the position, $\theta$ is the heading, and $v$, $a$ are the velocity and acceleration respectively. The vector $\vec{\omega}$ is zero-mean white Gaussian noise and $\Delta t$ is the sampling time. The state variables $\begin{bmatrix} x & y & \theta & v & a \end{bmatrix}$ are all measured, although $\begin{bmatrix} \theta & v & a \end{bmatrix}$ have high variance. The only state that is not measured is the curvature $\kappa$. Therefore it is computed from the measured lateral acceleration $a_{y,k}$ and longitudinal velocity $v_{x,k}$.
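To make the prediction step explicit, this is roughly how I would write the deterministic part of $f_k$ in code (my own sketch; the process noise $\vec{\omega}_k$ is omitted):

```cpp
#include <cmath>

struct State {
  double x, y, theta, v, a, kappa;
};

// Deterministic part of the prediction model above (process noise omitted).
// ay and vx are the measured lateral acceleration and longitudinal velocity
// used for the curvature term, as described in the question.
State predict(const State& s, double ay, double vx, double dt) {
  State n;
  n.x     = s.x + s.v * dt * std::cos(s.theta);
  n.y     = s.y + s.v * dt * std::sin(s.theta);
  n.theta = s.theta + s.v * s.kappa * dt;
  n.v     = s.v + s.a * dt;
  n.a     = s.a;                          // constant-acceleration assumption
  n.kappa = s.kappa + ay / (vx * vx);     // curvature from lateral accel / v^2
  return n;
}
```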
My Question:
Is there a better way on predicting heading $\theta$, velocity $v$, acceleration $a$, and curvature $\kappa$?
Is it enough for $a_{k+1}$ to just assume gaussian noise $\omega_a$ and use the previous best estimate $a_k$ or is there an alternative?
For curvature $\kappa$ I also thought of using yaw rate $\dot{\theta}$ as $\kappa = \frac{\dot{\theta}}{v_x}$ but then I would have to estimate the yaw rate too.
To make my nonlinear filter model complete here is the measurement model:
$$
\begin{equation}
\label{eq:bicycle-model-leader-vehicle-h}
y_k = h_k(x_k,k) + v_k =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
\end{bmatrix}
\begin{bmatrix}
x_k \\
y_k \\
\theta_k \\
v_k \\
a_k \\
\kappa_k \\
\end{bmatrix}
+
\begin{bmatrix}
v_x \\
v_y \\
v_{\theta} \\
v_v \\
v_a \\
\end{bmatrix}
\end{equation}
$$
More Info on the available data:
The measured state vector is already obtained/estimated using a Kalman filter. What I want to achieve is a smooth trajectory together with an estimate of $\kappa$. For this it is a requirement to use another Kalman filter or a moving horizon estimation approach.
|
Robots can be better than humans. If we attach a wheel to a robot, it can move faster than a human with legs.
But why, then, is so much research going into making robots look, walk, and talk like humans? Wouldn't it be better to make robots better than humans, rather than limiting them to human body parameters?
|
Standard particle filters can produce a bad localization result if the initial particle generation step produces no particle that is close to the location (and bearing) of the tracked object. The accuracy therefore depends on a large number of particles, so that at least one particle ends up very close to the state of the tracked object.
Could we introduce in resampling stage a small number of completely random new particles? For example, 99% of particles are randomly selected with weighted probability, while 1% are new particles with random state.
My reasoning is that new particles that are bad guesses would quickly disappear, while good guesses would improve accuracy beyond what was possible with fixed particle pool. Does this improvement to particle filters make sense?
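To make the idea concrete, the resampling step I have in mind would look roughly like this (a sketch; randomParticle() is a placeholder for drawing a uniformly random state over the map):

```cpp
#include <random>
#include <vector>

struct Particle {
  double x, y, bearing;
  double weight;
};

// Weighted resampling with a small fraction of freshly randomized particles
// mixed in, as proposed above.
std::vector<Particle> resampleWithInjection(const std::vector<Particle>& in,
                                            double injectionRatio,
                                            std::mt19937& rng,
                                            Particle (*randomParticle)(std::mt19937&)) {
  std::vector<double> weights;
  for (const auto& p : in) weights.push_back(p.weight);
  std::discrete_distribution<size_t> pick(weights.begin(), weights.end());
  std::uniform_real_distribution<double> coin(0.0, 1.0);

  std::vector<Particle> out;
  out.reserve(in.size());
  for (size_t i = 0; i < in.size(); ++i) {
    if (coin(rng) < injectionRatio) {
      out.push_back(randomParticle(rng));   // e.g. 1% completely random guesses
    } else {
      out.push_back(in[pick(rng)]);         // standard weighted resampling
    }
    out.back().weight = 1.0 / in.size();
  }
  return out;
}
```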
|
After a fair bit of research into industrial robot arms I've learned that a lot of them use components like harmonic drives and sometimes cycloidal gearboxes.
Two main questions:
How feasible are these for use in a hobby project? I have several thousand dollars at my disposal to build a 6 DOF arm. Arm goals at a glance: 1 m total reach, 5 kg payload, reasonable speed.
What sources would you recommend? Online the sites seem very specialized, gotta go through sales reps, etc, which kind of worries me regarding the prices lol. Plus for Harmonic Drives I only found one vendor.
Side/related question: would buying from China work out alright or would I not get what I'm looking for (namely: good torque at <1 arcmin backlash)? E.g.
Another example.
--
I have little experience with this technology (precision eccentric gearing), but would definitely like to explore it and purchase test equipment to construct a basic arm. Thanks for any help, looking for a decent starting point.
EDIT -- I would like to use one of these special gearboxes with a NEMA stepper motor if possible (and if it saves cost) since we have a bunch of those lying around, heh.
EDIT2 -- Curious if this item would actually have some sort of strain wave gearing inside of it.
If so this seems very affordable and could provide decent precision on part of a robot arm.
|
I've been looking around for a lidar/radar rangefinder for a few months now and can't seem to find anything reasonable for my application. My requirements are pretty straightforward:
100m range
I2C or other serial protocol
Accuracy +/- 0.25m
Sampling > 50hz
This isn't a spinning/mapping application, so sampling rate isn't very demanding, and I don't think I'm asking much regarding accuracy, but I can't find anything for less than $600. It seems like there's a bit of a hole in the market?
The closest I've come is the LIDAR-Lite from PulsedLight (now Garmin), but it's only good up to 40 m, and I've had one back-ordered for 4 months now! If they offered this product in an 80 m or even 60 m version it would help me, but 40 m isn't quite enough.
I would assume the technology/demand just isn't there for anyone to produce this device cheaply, but I see lots of lidar-based rangefinders for golf courses, gun sighting, etc. that are around $100 and work at these ranges. What's different about those technologies? Example. What technology does this use, and are there sensors out there that I can buy directly?
Thanks
I also found this, but it seems specific to altimeters. Is there something about this device that would make it applicable only to drone altimeter applications? I know there must be considerations for the incoming environmental light and the reflectiveness of the surface it's detecting, so possibly it wouldn't work in an outdoor application where environmental lighting will vary? Also, this device doesn't call itself 'lidar', so let me ask a very basic question:
Are all laser rangefinders considered 'lidar'? Surprisingly I haven't found this answer.
|
I'd like to attach some extra equipment to the iRobot. Does anyone have any idea of a weight limit before mobility is restricted or it just breaks? Could it possibly carry a few extra pounds?
|
What's the name of the technique used to read symbols with computer vision?
Like the one used for this wood cutter:
I want to use something like this to measure distance from the camera to a piece of tape with symbols, or read QR-like symbols to recognize areas, but I can't find information on this because I don't know how this technique or method is called.
What is it called?
|
I am following the code obtained from the MATLAB File Exchange for the paper
http://ieeexplore.ieee.org/document/4655611/
The author calculates the Hessian matrix as follows, using the Jacobian matrix J:
H = inv(J'*J); % Hessian matrix
How is this relation true?
Also, the value g = 1 is used to construct the cost function to be minimized using least squares, but shouldn't it be the local gravity value at the particular place where you are carrying out the calibration?
Link to the code :
MATLAB Code
Please throw some light on this.
Thank You.
|
I am building a two wheeled robot which will have:
max speed of 10 km/h and acceleration of 0.5 m/s²
can climb 12° slopes
weighs at most 30 kg
wheel diameter is 50 cm
By doing the calculations myself, and checking with several online calculators, I find that in the "worst" possible conditions I would need:
Around 9.5 Nm of torque
Around 100 rpm for speed
Which results in approximately a 100 W motor.
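For reference, here is how I arrived at those numbers (my own rough calculation for the worst case: climbing the 12° slope at full acceleration, with the load split evenly between the two wheels and a wheel radius of 0.25 m):
$$
F \approx m g \sin(12^\circ) + m a = 30 \cdot 9.81 \cdot 0.208 + 30 \cdot 0.5 \approx 76\ \text{N}
$$
$$
\tau_{\text{wheel}} = \frac{F}{2}\, r = \frac{76}{2} \cdot 0.25 \approx 9.5\ \text{Nm}
$$
$$
n = \frac{v}{\pi d} = \frac{10/3.6}{\pi \cdot 0.5} \approx 1.77\ \text{rev/s} \approx 106\ \text{rpm}
$$
$$
P \approx \tau_{\text{wheel}}\,\omega = 9.5 \cdot \frac{2\pi \cdot 106}{60} \approx 105\ \text{W (per motor)}
$$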
When I look for suitable motors online I come across a lot of sites recommending different motors depending on robot weight.
An example would be.
There, this kind of motor is recommended for robots of up to 150 lbs, and it is a 30-50 W motor (I believe they use 4 of them).
Is the power I'm looking for in my case too "overkill"?
Seeing as this is an indoor robot, but one which will need to be able to go up slopes to reach different areas, do I need to find a motor that matches my numbers exactly, or do I need something even more powerful, since efficiency is never 100%?
|
I've been thinking about building a small UAV with an onboard LIDAR, just for fun. I'm interested in SLAM and autonomous flight indoors and thought that I would need a lidar to get a 3D map of the environment. Now, I've spent some more time looking into SLAM techniques and have seen very impressive results with simple RGB cameras, not even necessarily stereo setups. For instance, these results of the CV group of TU Muenchen. They are capable of constructing 3D pointclouds from simple webcams, in real-time on a standard CPU.
My question: are there cases where you'd still need a LIDAR or can this expensive sensor be replaced with a standard camera? Any pros/cons for either sensors?
I'm going to list some pros/cons that I know/can think of:
LIDARs are better at detecting featureless objects (blank walls) whereas a vision-based SLAM would need some features.
Using LIDARs would be computationally less intensive than reconstructing from video
The single RGB camera 3D reconstruction algorithms I found need some movement of the camera to estimate depth whereas a LIDAR does not need any movement.
Using a single camera for SLAM would be cheaper, lighter and possibly have a better resolution than a LIDAR.
|
I am trying to integrate Angular acceleration obtained from a set of accelerometers positioned specifically at opposite corners of a cube, based on "EcoIMU: A Dual Triaxial-Accelerometer Inertial Measurement Unit for Wearable Applications" paper.
I am getting the angular acceleration on each axis. This signal is quite noisy.
Then, after integrating the angular acceleration to angular velocity using the trapezoidal rule, I get signals which drift heavily and randomly.
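For reference, the integration I am doing is just the straightforward trapezoidal rule on each axis (a sketch of one step):

```cpp
// One trapezoidal step: integrate angular acceleration (alpha, rad/s^2)
// into angular velocity (omega, rad/s); dt is the sample period in seconds.
// Any constant bias in alpha accumulates linearly in omega, which is the
// drift I am seeing.
double integrateStep(double omegaPrev, double alphaPrev,
                     double alphaCurr, double dt) {
  return omegaPrev + 0.5 * (alphaPrev + alphaCurr) * dt;
}
```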
I understand that the noise and also the numerical integration are causing this effect. Other than low-pass filtering the data, are there any other methods to reduce the noise?
And since the major factor behind the drift is the numerical integration, how can this be handled?
Please help me out with this.
|
Looking at the rotation matrices, why do the first and third rotations appear to rotate in the opposite direction from the second rotation? This is confusing.
An image is attached.
In this image we can note that for the X and Z rotations the pattern of non-zero elements is the same, but for the Y rotation the sign of sin(theta) is flipped.
Why is that so?
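For reference (in case the image does not display), the matrices I am referring to are the standard elementary rotations:
$$
R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}, \quad
R_y(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}, \quad
R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
$$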
|
For my robotics arm project I have recently come across the following ball bearing at low discount price which I think could make a great rotary table for my arm.
However it does not seem to have any mounting connectors, and the inner and outer rings are the same height.
My mechanical creativity is a bit limited, but my goal is to drive (spin) it with a large stepper motor (I have 34 and 42 series available which, even ungeared, should be enough for my smaller arm). On top of the rotary table that I'm trying to create will sit a U-channel that will hold the rest of the robot arm. So in the bottom of that channel I will drill holes and connect it to... something.
If anyone could point me to, recommend, or create a simple design - I have access to basic metalworking tools and scrap aluminum - that would be absolutely fantastic. My main goal is to work with what I have: the bearing, stepper motors, aluminum, other things at a standard hardware store.
BTW, here is how I refer to the standard robot axes (of a 6-DOF industry bot) by name.
|
In (nonlinear) moving horizon estimation the aim is to estimate an unknown state sequence $\{x_k\}_{k=0}^T$ over a moving horizon N using measurements $y_k$ up to time $T$ and a system model as constraint. All the papers I've seen so far 1,2,3 the following cost function is used:
$$
\phi = \min_{z,\{\omega_k\}_{k = T-N}^{T-1}} \sum_{k = T-N}^{T-1} \underbrace{\omega_k^TQ^{-1}\omega_k + \nu_k^TR^{-1}\nu_k}_{\text{stage cost } L_k(\omega,\nu)} + \underbrace{\mathcal{\hat{Z}}_{T-N}(z)}_{\text{approximated arrival cost}}
$$
The noise sequence $\{\omega_k\}_{k = T-N}^{T-1}$ should be optimized/minimized in order to solve for the unknown state sequence using the prediction model:
$$
x_{k+1} = f(x_k,\omega_k)
$$
whereas the measurement model is defined as
$$
y_k = h(x_k) + \nu_k
$$
with additive noise $\nu$.
The question is what exactly is $\omega_k$ in the stage cost?
In the papers $\nu$ is defined to be
$$
\nu_k = y_k - h(x_k)
$$
However, $\omega_k$ remains undefined.
If I can assume additive noise $\omega$ in the prediction model, I think $\omega_k$ is something like the following:
$$
\omega_k = x_{k+1} - f(x_k)
$$
If this should be correct, then my next Problem is that I don't know the states $x_k$, $x_{k+1}$ (they should be estimated).
EDIT
Is it possible that my guess for $\omega_k = x_{k+1} - f(x_k)$ is "wrong" and it is enough to just consider:
the measurement $\nu_k = y_k - h(x_k)$ in the cost function $\phi$
the prediction model as constraint?
And let the optimization do the "rest" in finding a possible solution for a noise sequence $\{\omega_k\}_{k = T-N}^{T-1}$? In that case, which MATLAB solver would you suggest? I thought of using lsqnonlin since my problem is a sum of squares. Is it possible to use lsqnonlin with the prediction model as a constraint?
1 Fast Moving Horizon Estimation of nonlinear processes via Carleman linearization
2 Constrained state estimation for nonlinear discrete-time systems: stability and moving horizon approximations
3 Introduction to Nonlinear Model Predictive Control and Moving Horizon
Estimation
|
I'm trying to find a way where I can estimate the location of my drone on a floorplan. Note that right now, I will just be moving the drone around manually and not flying it.
I read up on PDR and what I want to do is this:
Provide an initial location of my drone on the floorplan, and as I move the drone around, using information from the IMU/accelerometers, I want to update the position of my drone on the floorplan.
I've worked with ROS a bit and I want to know if there are packages in ROS that could do this. For now, I'm looking for rough estimates and not perfect solutions.
Thanks!
|
I'm using an Extended Kalman filter where the motion model is a function of the states and the inputs, with additive white noise, i.e.
$$ x_k = f(x_{k-1},u_{k-1}) +\delta_{k-1} \quad , \quad \delta_{k-1} \sim N(0,\Delta_{k-1})$$
If $x_{k-1}$ and $u_{k-1}$ are know, then the prediction step is done as
$$\hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1},u_{k-1}) $$
$$ f' = \frac{\partial f}{\partial x_{k-1}}\Big|_{x_{k-1}=\hat{x}_{k-1|k-1}~,~u=u_{k-1}} $$
$$ P_{k|k-1} = f'P_{k-1|k-1}f'^T + \Delta_{k-1} $$
However, at some time steps I won't know the value of $u$, the input. What is the optimal way to perform the prediction step in this scenario?
My thoughts so far are to set
$$\hat{x}_{k|k-1} = \hat{x}_{k-1|k-1} ~,$$
since I have no new information to update it... but no idea how to estimate the covariance matrix $P_{k|k-1}$.
|
I'm starting out in more complex robotics, which in this case includes the bilateral stabilization of a humanoid robot.
I know that when a human walks, his/her pelvis moves in correspondence, which allows for a basic stabilization when being on one leg in between steps.
I'm wondering about how to replicate this type of effect in a humanoid robot.
Thank you for your time!
|
I'm trying to figure out how to use the inertial sensors in an AR Drone to perform a rough version of dead reckoning. I want to move the drone around a room (without flying it) and using the velocity and orientation data from the drone to plot the trajectory that I have moved it so far.
I know that inertial odometry data is prone to heavy drift with continuous use, but for a short term exploration, I am ok with that.
I'm running the ardrone_autonomy package on ROS and I am using the odometry data from the drone to plot my trajectory on Rviz. However, Rviz only shows me the orientation of my drone and does not update movement whatsoever.
This is how my Rviz looks:
As you can see, even if I move the drone around the entire room, the position on the map does not get updated, but the orientation does.
Can anyone tell me what I am doing wrong here?
|
I'm trying to finish up a localization pipeline, and the last module I need is a filtering framework for my pose estimates. While a Kalman filter is probably the most popular option, I'm using cameras for my sensing, so I can't guarantee the kinds of noise profiles a KF is good for, and I doubt it would work as well with suddenly appearing outliers in my poses. So I am looking for other options which can work with real-time data and be immune to outliers.
One thing I came across is a Kalman filter with threshold-based update rejection, something like a Mahalanobis-distance gate: but I don't know if this is completely applicable, because the localization would be performed in real time, and it's not like I have a large set of 'good poses' to start with.
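For reference, my understanding of the threshold-based rejection is a gate on the innovation (my own notation; $z_k$ is the measured pose):
$$
d_k^2 = \left(z_k - h(\hat{x}_{k|k-1})\right)^T S_k^{-1} \left(z_k - h(\hat{x}_{k|k-1})\right), \qquad S_k = H_k P_{k|k-1} H_k^T + R_k,
$$
where the update is simply skipped whenever $d_k^2$ exceeds a chi-squared threshold for the measurement dimension.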
|
My task is simple: I want to move my drone manually around a room (IMPORTANT: I DO NOT WANT TO FLY IT) and I want to see its position update on a map using IMU data.
I've attempted something in ROS and Rviz, but the position on the map does not update when I pick up my drone from spot A and place it 5 meters away at spot B. It does, however, update when I fly the drone from A to B.
My question is this: is there some way I can do dead reckoning using the drone while not flying it? Or is flying the drone necessary to perform dead reckoning using the drone's IMU?
EDIT: Here's what my Rviz shows when I pick up the drone from A and put it at B. As you can see, the orientation is updated, but the position remains the same. However, when I fly the drone, I can see the red arrow move around the map.
Any help would be greatly appreciated!
EDIT : The duplicate question just asks for general solutions. Here I've found a solution, I just can't get it to work; I need help debugging.
|
What is the core principle of a monocular visual odometry algorithm? I mean, after calibrating a single camera (undistortion etc.) images are fed into an algorithm - what exactly does this algorithm do with the images in order to get the translation/rotation between successive frames?
Do various mono algorithms use different techniques, or is the core principle the same everywhere? I see some libraries use image features (indirect approach) and some use pixel intensities (direct approach), but I am not really able to understand the principles from the papers... I can only see that the algorithms use various methods of estimating the translation/rotation (5-point, 8-point algorithms...).
Also, is it true that no mono algorithm is able to get the absolute scale of the scene? How does the relative scale work - is it set randomly?
I found following mono odometry libraries:
indirect methods (using image features)
Avi Singh via OpenCV (blog post) - uses Nister’s 5-point algorithm
VISO2 - uses 8-point algorithm (paper)
ORB_SLAM / ORB_SLAM2 - indirect approach?
direct approach (using whole edges etc.)
SVO: Fast Semi-Direct Monocular Visual Odometry (paper)
LSD-SLAM: Large-Scale Direct Monocular SLAM (paper) - needs ROS (but only for input/output)
DSO: Direct Sparse Odometry (paper)
I understand how stereo visual odometry works - the 3D scene is reconstructed in each image frame, the point clouds of successive frames are compared (registered), and the distance traveled is obtained directly - a pretty simple principle.
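For the indirect/feature-based mono case, my current understanding in code form is roughly the following (a sketch using OpenCV's 5-point essential-matrix functions, in the spirit of Avi Singh's blog post - not the actual library code):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Rough sketch of one step of a feature-based (indirect) mono VO pipeline:
// track features between two frames, estimate the essential matrix with the
// 5-point algorithm inside RANSAC, and recover R, t (t only up to scale).
void monoVoStep(const cv::Mat& prevGray, const cv::Mat& currGray,
                double focal, cv::Point2d pp,
                cv::Mat& R, cv::Mat& t) {
  // 1. Detect features in the previous frame.
  std::vector<cv::Point2f> prevPts, currPts;
  cv::goodFeaturesToTrack(prevGray, prevPts, 2000, 0.01, 10);

  // 2. Track them into the current frame with sparse optical flow.
  std::vector<uchar> status;
  std::vector<float> err;
  cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err);

  // Keep only successfully tracked points.
  std::vector<cv::Point2f> p0, p1;
  for (size_t i = 0; i < status.size(); ++i) {
    if (status[i]) { p0.push_back(prevPts[i]); p1.push_back(currPts[i]); }
  }

  // 3. Essential matrix (Nister's 5-point algorithm + RANSAC),
  //    then decompose it into a rotation and a unit-scale translation.
  cv::Mat mask;
  cv::Mat E = cv::findEssentialMat(p1, p0, focal, pp, cv::RANSAC, 0.999, 1.0, mask);
  cv::recoverPose(E, p1, p0, R, t, focal, pp, mask);
}
```

As far as I understand, the recovered $t$ here only has unit norm, which seems related to why the absolute scale cannot be obtained from a single camera - but I would like this confirmed.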
|
I want my robot to store about 20 MB of data, or a little more. This will be mostly text, plus some pictures so that it can recognize one or two particular objects.
So obviously, I want a camera connected to it, and I want it to be able to save images (no need to display them, just save them; I would access them through a PC later). However, I do want it to have a text output screen, i.e. I want it to be able to display plain text on a screen.
I also want it to be controlled by speech recognition, so it has to be able to accept voice input. I also want it to be capable of producing voice output, but only one or two prerecorded sounds.
Lastly, an infrared sensor will be used for distance gauging and a push button one for contact.
Summary of needs:
Microcontroller and parts required to:
store at least 20 MB of data, text (the microcontroller needs to be
able to access this text and run algorithms on it, GET data only, not
change it), and images (images only for image recognition)
allow saving images taken by the camera
speech recognition as input
output audio (only one or two prerecorded sounds)
output plain text on a screen
equip an infrared sensor for measuring distance and contact
push buttons for detecting contact
If relevant, I am using tracked wheels, two DC motors, the robot will move upon detecting an object using the camera. I will decide what motor controller to use when one of you good fellows let me know what microcontroller would be adept for my wants.
Note: for speech recognition input, I will use an advanced sound sensor, and for the infrared and push button sensors I obviously know what I'm using, same with the camera.
I only mentioned these ones so you would know what I need the microcontroller to be capable of to use. But as for the data storage and saving photos and screen, I have no idea what I can use as I have never used anything similar in a robot before.
|
The Map1
The task is divided into 2 runs. In the first run, we have to navigate through the whole map and reach the end mark, taking all checkpoints.
In the second run, using the map from the first run, we have to follow the shortest path.
I have been reading algorithms and code for days, and I know the steps that should be taken:
Navigate the maze using the Trémaux algorithm, turning right at each possible junction.
Then save that map in memory and use Dijkstra's algorithm or the A* algorithm to find the shortest path.
But how do I implement this practically?
Also, since all the distances and angles are known, as well as the wheel diameter, motor RPM, and bot width, can I skip wheel encoders and use calculation (dead reckoning) instead?
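To make the second step concrete for myself, I sketched what Dijkstra over the junction graph recorded in run 1 might look like (the node/edge representation below is my own assumption, not from the contest rules):

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra over a graph of maze junctions. Each node is a junction discovered
// in run 1; each edge carries the measured distance between two junctions.
struct Edge { int to; double dist; };

std::vector<int> shortestPath(const std::vector<std::vector<Edge>>& graph,
                              int start, int goal) {
  const double INF = std::numeric_limits<double>::infinity();
  std::vector<double> cost(graph.size(), INF);
  std::vector<int> prev(graph.size(), -1);
  using Item = std::pair<double, int>;                 // (cost so far, node)
  std::priority_queue<Item, std::vector<Item>, std::greater<Item>> open;

  cost[start] = 0.0;
  open.push({0.0, start});
  while (!open.empty()) {
    auto [c, u] = open.top(); open.pop();
    if (c > cost[u]) continue;                         // stale queue entry
    if (u == goal) break;
    for (const Edge& e : graph[u]) {
      double nc = c + e.dist;
      if (nc < cost[e.to]) {
        cost[e.to] = nc;
        prev[e.to] = u;
        open.push({nc, e.to});
      }
    }
  }

  // Walk back from goal to start to get the junction sequence for run 2.
  std::vector<int> path;
  for (int v = goal; v != -1; v = prev[v]) path.insert(path.begin(), v);
  return path;
}
```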
|
I am using Constant Turn Rate and Veloctity (CTRV) model to predict the position of the vehicle.The CTRV model assumes a circular path between two consecutive time steps of the car as depicted in the figure, where the vehicle moves from point A to point B. Also, the yaw rate and velocity are assumed constant between two timesteps.
The displacement between the two points is denoted by S and R is the radius of the curvature. The five input parameters of the CTRV model are [ x, y, yaw angle, velocity and yaw rate]. The previous x, y position of the car is known to me. The laser data of the surrounding environment is available for each timestep. With this laser data, I have already calculated the Rotation matrix and translation vector between two time steps using ICP.
Now my question is how to calculate the input parameters of the CTRV model with the available data?
The following things are clear to me:
With the help of the rotation angle I can calculate: $\theta = S / R$
The velocity can be calculated using the timestep: $v = S / \Delta t$
The yaw rate can be calculated using: $v = \dot{\theta} \cdot R$
Considering the first equation, I am not sure how to calculate the displacement $S$ if the radius $R$ is not known to me.
Pardon me if my queries are a bit naive. I am still in the process of learning. Any help from your side will be much appreciated :)
|