I'd like to get an object's pose based on an AR marker attached to the object. I believe I only need one tag to get the 6D pose. However, I sometimes see more than one AR marker used in research papers or in demos by Boston Dynamics (https://boygeniusreport.files.wordpress.com/2016/02/screenshot-76.png) or by the teams at the Amazon Robotics Challenge. Do I need to use more than one marker? If so, how can I calculate the 6D pose from more than one marker's information?
Let us assume that I am able to create a 3D map (octree) based on information gathered by a lidar and/or a camera on a quadrotor. Now, based on the current 3D representation of the world, I want to go from the current pose to another one (which is in the known 3D environment). To do so I want: first, to create a set of waypoints using RRT*; then, based on the waypoints, to build polynomial curves. So I took a look at OMPL, and although I understood the basics (well, I think), I still do not understand how to define the geometry of the robot (a box in my case) and how to define the workspace (here a binary octree, for instance). From the examples I found, I see that goal and start spaces are defined, then a planner along with the problem definition is chosen, and finally solve is called. But I cannot see where the geometry and/or kinematics is defined, nor the world space... Sorry for the naive question, but if somebody can provide a hint it would be very helpful. Thank you
My robot needs to obtain visual odometry, with the condition that the algorithm used has to have as low computational requirements as possible and the camera/sensor used shouldn't be very expensive. The thing is that it'll run on a platform like a Jetson together with lots of other CPU-expensive processes. Currently I'm trying monocular odometry packages from ROS (fovis, svo...) with an ASUS Xtion sensor, which doesn't give very good results (due to computational power with fovis, or probably insufficient FOV with svo). As I don't have a stereo camera to compare with and I don't want to buy one until I know whether it helps, I want to ask whether stereo algorithms would be a better choice here (are they more effective?), keeping in mind the low-cost budget and low CPU requirements. The robot also has to keep the depth sensor (it doesn't have to be strictly the ASUS Xtion), so the possibilities are to use a different sensor usable for visual odometry or to equip the robot with another camera. If you have any other suggestions I will appreciate them. Thank you in advance!
The specifications at the official Lego site here say that a dual-core 2.0 GHz CPU or better is needed. Still, maybe a quad core at 1.8 GHz (like the Prestigio MultiPad Visconte's Intel Atom quad core) would work smoothly enough? Has anyone tried to run Lego Mindstorms EV3 on that kind of CPU?
I am currently working on a robotics project in which I wish to integrate a depth/object scanner. My focus is currently on an IR camera in combination with some sort of IR pattern projector. I am having a hard time finding any sort of IR projector which can project a pattern. I have found several topics suggesting that I use an IR LED and create a pattern in front of it, but that would mean the projector would take up significant space. Are there any projectors on the market in a sensible form factor for consumers? I came across the Structure Core, but that's only available to OEMs.
Most home or entertainment robots I see are either under manual human remote control or have the processor on-board and are completely self-contained. Contrast this with robots which are neither, but are controlled wirelessly by a CPU elsewhere, not in the body. Another CPU in the body is there merely to relay commands and readings wirelessly to and from the main CPU. A hypothetical example of this are the battle droids from Star Wars, which were controlled by a "droid control ship", without which they were helpless. So this is the type of robotics architecture I am referring to. I have built such a robot already, so this is really a robotics question, not a science fiction question; I just wanted to convey the architecture I am referring to with a well-known example. What is the term for the physical shell? What is the term for the main external CPU that houses the actual algorithm? If the term "robot" applies, which entity is the term "robot" more appropriately applied to? Or are there other actual industry terms which specifically classify the components in this architecture? I'm after the most widely accepted terms that are still precise.
I am searching for open source simulation software which is able to simulate robots and their movements in a factory. I would like to send control signals and receive sensor data back, so this software should allow me to communicate with it. I want to use Java to send control signals and to receive sensor data. My goal is to use the simulation software on a Raspberry Pi. I would be grateful for any kind of help.
I saw some radio controllers for kids' robots, similar to Xbox/PlayStation game pads, a few months ago, but I couldn't find them again by searching. I am looking for some cheap radio controllers for RC cars or other kids' robots that look like console game pads. Can you help me?
I'm looking for any recommended approaches to digitally process a room full of people, like an auditorium or a movie theater, and output the following information:
- How many people are in the room
- Unoccupied and available seating
- A display of the available seating for ushers, or people attending late, on an iPad or TV outside the auditorium (similar to viewing airplane seat occupancy online)
Some sensors I'm considering:
- LiDAR scanning
- Many ceiling (IR) cameras looking straight down (or perpendicular to the seating if the seating is inclined)
Some challenges: being able to take accurate enough measurements when the lights are on, when the lights are off or in low light, when people are standing, when people are sitting, and when the seating is a long bench rather than a chair.
Constraints: individual sensors for each seat aren't ideal, as we need to count people when they are standing as well.
Thoughts on LiDAR: I like the thought of scanning the room with LiDAR to develop a 3D map from which I can determine the number of prominent heads to count, and the distance/location of each head to identify a seat that is taken or open. Are there any affordable (under $10,000) LiDAR scanners that make this easy?
Thoughts on top-down infrared (IR) cameras: the auditorium seats 700 people and has a balcony. If the cameras are positioned in a top-down perspective, we would need to install about 22 cameras. It would be nice to have less installation overhead. Also, we'd like to respect the audience and not look down shirts.
Thoughts on 1 or 2 infrared (IR) cameras viewing the crowd from the stage: this could work for counting faces (using OpenCV or another computer vision library) but would make it difficult to tell which seats are open and available, especially when people are standing.
Images of the auditorium:
I have built a quadcopter from scratch, including my own flight controller. I have implemented a sensor fusion algorithm (the Madgwick algorithm), which returns the current yaw, pitch and roll angles. Then a PID control algorithm, based on these measured angles and the desired angles (which are set by the control sticks on the transmitter), adjusts the speed of the motors. For pitch and roll the control is obvious and works just fine (i.e., if the pitch stick is 100% forward, the desired pitch is "30 degrees", and the current measured pitch tries to catch up with that value using PID control). However, for yaw I would like the yaw stick to rotate the quad in the corresponding direction for as long as I am holding the yaw stick in a non-zero position, with a rotation speed proportional to the yaw stick angle. Since I have the current measured yaw angle (relative to a "reference yaw", which is the yaw the quad had when the throttle was zero, not relative to an absolute magnetic-north-referenced yaw), I am basically just adding quadRotationSpeed = yawStickValue*delta_t to the desired yaw angle in each iteration of the control loop, and then doing the rest as a regular PID control: based on desiredYaw and measuredYaw I calculate the motor torques. Now the questions: 1) Is this approach wrong? 2) When I turn the yaw stick, the quad seems to rotate with a speed proportional to the stick angle, but when I let go of the stick, it rapidly turns back halfway and then keeps turning slowly back to the "desired angle" I set with the yaw stick. 3) After this yaw maneuver the quad seems to be much less stable during flight, but gradually it stabilizes. My quad is about 800 grams and is quite large, about 60 cm across the diagonal.
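To make the approach concrete, here is a rough sketch of what my control loop does (this is not my actual code; names, gains and the angle wrapping are just placeholders):

MAX_YAW_RATE = 90.0          # deg/s at full stick deflection (assumed)
KP, KI, KD = 2.0, 0.0, 0.1   # placeholder gains

def wrap_deg(angle):
    """Wrap an angle in degrees into [-180, 180)."""
    return (angle + 180.0) % 360.0 - 180.0

class YawController:
    def __init__(self):
        self.desired_yaw = 0.0
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, yaw_stick, measured_yaw, dt):
        # stick value in [-1, 1] scaled to a rotation rate and integrated into the setpoint
        self.desired_yaw = wrap_deg(self.desired_yaw + yaw_stick * MAX_YAW_RATE * dt)
        error = wrap_deg(self.desired_yaw - measured_yaw)
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return KP * error + KI * self.integral + KD * derivative   # yaw torque command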
I was playing at connecting wires to get the feel of it when I accidentally connected the servo connector's red wire to the Arduino 5V pin while the Arduino was connected to my computer (and hence powered). I did not connect a battery to the ESC, though. Did I just effectively run 5 V through my BEC and risk damaging it? All I managed to find online is that I shouldn't have done that with the battery connected, because I would burn out my Arduino. But since I didn't have my battery connected to the ESC, will my BEC get damaged instead? I can't check my ESC now because I don't have any suitable batteries with me and I have to wait till Monday to get them. I just thought I might find a quicker answer here.
In SLAM for Dummies, on page 40, there is this formula: $X = X+K(z-h)$. What is $z-h$ in the update of $X$?
I am a high school student and a beginner in robotics, and I have no idea where to start. I have spent days searching for tutorials and books I could read to start building and programming my first robot, but most of them just make me more confused; most of them require me to buy pre-assembled components and simply put them together like a puzzle, whereas I want to learn to build a robot from scratch. My questions are:
1. What should I read / learn first?
2. What programming language should I use? (I know a variety of programming languages, C++ being the one I know best.)
3. What should I learn from physics? (Circuits, electricity, etc.)
4. How do you think I should start from this point, where I know absolutely nothing except how to program?
I am analyzing a concept for a surgical robot with four revolute joints and one sliding joint. I am not able to fix the coordinate frame for the last prismatic joint. Following are the schematics of the robot and the DH frames I have fixed. As the X4 axis and X5 axis intersect each other, I am not able to capture the joint distance variable ($d_i$) for the slider. How can I fix this? Following are the parameters for the rest of the frames. Updated frames:
In whatever literature I've read on quadcopter dynamics, the state of a quadcopter is defined as a 12x1 vector, containing the coordinate positions (x, y, z) and velocities (xdot, ydot, zdot) in the Earth frame, the Euler angles (theta, phi, psi) and the Euler angle derivatives (thetadot, phidot, psidot). However, this code that I am reading seems to have modeled the state vector as a 13x1 vector. It's modeled as: [x, y, z, xdot, ydot, zdot, qw, qx, qy, qz, p, q, r]. I am lost at these last 7 variables. In the initialization module, (qw, qx, qy, qz) are initialized as Quat(1), Quat(2), Quat(3), Quat(4) respectively, and (p, q, r) are all zero. I'm guessing (p, q, r) are derivatives, hence they are set to zero during initialization. Quat() is supposed to be the vector of quaternions, which again is something I haven't found in most literature on the modeling of quadcopter dynamics. What is this and why is it needed? I have no prior background in robotics or control systems!
I want to estimate the covariance matrix of a measurement for a robot evolving on a plane and having the following state vector: $$ X = \begin{bmatrix} x\\ y\\ \theta \end{bmatrix} $$ $x$ and $y$ are the coordinates in the $XY$ plane and $\theta$ is the heading angle. The measurement is taken from a place recognition algorithm which returns a pose $$z= \begin{bmatrix} x_{m}\\ y_{m}\\ \theta_{m} \end{bmatrix} $$ The covariance matrix of the measurement should have the following form $$ \Sigma = \begin{bmatrix} \sigma_{x}^2 & 0 & 0 \\ 0 & \sigma_{y}^2 & 0 \\ 0 & 0 & \sigma_{\theta }^2 \\ \end{bmatrix} $$ How do I calculate $\sigma_{x}$, for example, given that I have the ground truth pose $$ground_{truth}= \begin{bmatrix} x_{gt}\\ y_{gt}\\ \theta_{gt} \end{bmatrix} $$ and the measurement $z$?
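To make the question concrete: if I could collect several matched (measurement, ground truth) pairs over a test sequence, I assume I could simply take the per-component sample standard deviation of the errors, as in the sketch below (synthetic data, made-up names). Is that the right way to do it, and what should I do when I only have a single pair?

import numpy as np

def measurement_sigmas(z, gt):
    """z, gt: arrays of shape (N, 3) holding [x, y, theta] per sample."""
    err = z - gt
    # wrap heading errors to (-pi, pi] so e.g. 359 deg vs 1 deg counts as a small error
    err[:, 2] = (err[:, 2] + np.pi) % (2 * np.pi) - np.pi
    return err.std(axis=0)                       # [sigma_x, sigma_y, sigma_theta]

# toy usage with synthetic data
gt_samples = np.zeros((100, 3))
z_samples = gt_samples + np.random.normal(0.0, [0.05, 0.05, 0.01], size=(100, 3))
sx, sy, st = measurement_sigmas(z_samples, gt_samples)
Sigma = np.diag([sx**2, sy**2, st**2])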
It is widely discussed in this forum how to sync multiple cameras (e.g. best done via hardware trigger). Now I'd like to know how to actually determine the time delay between two image streams, and the overall offset between a real event and the moment when the captured frame is available in the PC's buffer. So I have a picture P which captured an event at time T, and the captured image will be available at P(T+dT). How do I determine dT? I have two cameras capturing pictures P1 and P2, but due to the two unsynchronized shutters, P1 will be captured at T1 and P2 at T2. How do I determine dt = T2 - T1? P.S.: I really don't want to synchronize the cameras, just determine these delays. I searched intuitively for publications on multi-camera/Kinect tracking or event-related synchronisation but wasn't lucky at all. Does someone have some suggestions?
I am trying to fuse a u-blox M8 with a MicroStrain IMU via a loosely coupled architecture. I was wondering if there are any suggestions or insights based on the results that I am getting. I based most of my code off of Paul Groves' 2nd edition book. I did a six-point tumble test to calibrate my IMU to get the accelerometer fixed bias, the accelerometer scale factor and the accelerometer misalignment. I also got the gyroscope fixed bias. I don't have a rate table, so I can't get the gyroscope scale factor and misalignment yet. The filter is not currently estimating any calibration information. I ran a test of the code for about 6 hours and 40 minutes. I have a couple of questions about the procedure. My main difficulty is that I am not sure what I should be expecting from the hardware/integration architecture that I am using. What would you expect after a test of 6 hours with a loosely coupled architecture? I am also having difficulty deciding on how to tune the filter. Are there any papers/procedures that you recommend for deciding what should go into the Q and R matrices? I tried propagating the IMU standalone to see how quickly it diverged from its initial position. I also took GPS data to see how it diverged from its initial position. I am wondering if the tuning would be a function of my update interval, as well as of how long it takes the two systems to diverge to a specified distance. For my R matrix, I am taking the uncertainty reported by the GPS. For my Q matrix, I am using the power spectral density, though I do have some difficulty understanding the reasoning behind this. Finally, I am wondering how much you think estimating calibration information in my filter would help with a long-term solution. Please ignore the x label of the figures: it says the time in seconds was about 28 days, but the test lasted just 6 hrs and 40 minutes.
Does the literature on swarm robotics define the smallest number of robots required to make a "swarm"? Is it 10, 20 or 100 robots, or does the name depend on something other than the number of robots?
Please note: although this question involves a Raspberry Pi (hereafter RPi), it is really a pure robotics question at heart! I am trying to connect my RPi 1 Model A to a breadboard with a single, simple LED on it. The LED is rated at 1.7 V and 20 mA. The RPi's GPIO pins provide 3.3 V and are not really meant to exceed 16 mA. So my circuit needs a resistor, and I believe its calculation is as follows: R = V differential / I = (3.3 V - 1.7 V) / 0.02 A = 1.6 V / 0.02 A = 80 ohms. If I'm understanding this calculation correctly, that means that giving my LED circuit 80 ohms or more will safely limit the current in the circuit. Problem is, I only have 47-ohm resistors available to me. So I'm wondering if I can daisy-chain two 47-ohm resistors (for a total of 94 ohms, which is greater than the 80-ohm requirement) and not fry the RPi and/or the LED. Here's my wiring. In the pic above:
- A red jumper connects a GPIO pin to a place on the breadboard; this feeds power to...
- the first 47-ohm resistor; this then feeds power to...
- the second 47-ohm resistor; this then feeds power to...
- the LED's anode; the LED is powered and current flows out of the cathode to...
- a pair of jumpers (a small yellow one and then a brown one) that lead back to GND on the RPi.
Here's a slightly better angle of the breadboard. So I ask: will I fry my Pi with this setup? Have I calculated the resistance correctly (80 ohms)? Can I daisy-chain two 47-ohm resistors together? Does anything look "off" about my wiring setup in general? Thanks in advance!
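Just to double-check my own arithmetic for the two-resistor case (plain Ohm's law, nothing RPi-specific):

supply_v = 3.3      # GPIO high level
led_drop_v = 1.7    # LED forward voltage
r_total = 47 + 47   # two 47-ohm resistors in series simply add

current_a = (supply_v - led_drop_v) / r_total
print("{:.1f} mA".format(current_a * 1000))   # ~17.0 mA: under the LED's 20 mA rating,
                                              # though slightly above the ~16 mA GPIO guideline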
I am trying to build an autonomous vessel able to plan the best route to a certain waypoint or to follow a pre-defined route. Currently, I have imported Esri GIS files containing elevation data (coordinates can be extracted from the grid size and the lower-left corner position), and the robot automatically loads the correct map into a dynamic vector using a geofencing algorithm. The idea is to use the elevation data to keep the boat away from shallow water and land, possibly using a D* algorithm to determine the best route. However, those GIS datasets are discrete and sometimes have pretty low accuracy, while the GPS on board uses RTK to get 2 cm accuracy. The problem comes when I try to convert a discrete distribution into a continuous map, especially for elevation data: there are certain areas where the altitude goes from -10 (underwater) to 100 from one point to the next. Any ideas?
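To be concrete about what I mean by a continuous map: something like bilinear interpolation between the four surrounding grid cells, roughly as sketched below (assuming a regular grid with row 0 at the lower-left corner; names are placeholders). It smooths between samples, but obviously it cannot remove the genuine jumps in the data itself.

import numpy as np

def elevation_at(grid, x0, y0, cell, x, y):
    """grid: 2D array indexed [row, col]; (x0, y0) is the lower-left corner, cell the grid spacing."""
    fx = (x - x0) / cell
    fy = (y - y0) / cell
    i, j = int(np.floor(fy)), int(np.floor(fx))
    tx, ty = fx - j, fy - i
    z00, z01 = grid[i, j],     grid[i, j + 1]
    z10, z11 = grid[i + 1, j], grid[i + 1, j + 1]
    return (z00 * (1 - tx) * (1 - ty) + z01 * tx * (1 - ty)
            + z10 * (1 - tx) * ty     + z11 * tx * ty)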
I'm reading the book Introduction to AI Robotics by Robin R. Murphy, and in the first chapter it said, more or less, that after chapter 5 I will be able to design and implement my own robots, either as a real robot or in simulation. I don't have a lot of money to buy the components to build a robot, so I want to do it in simulation. My question is: where can I do a robot simulation? I've found that ROS might be suitable for this, but I'm not sure. Is it a good idea to use ROS with this book? I have no idea how to do it or what I need, because this is the first time I have done anything with robotics. I have also found this Virtual Simulator For Robots.
I am a 1st-year grad student majoring in robotics. I have the opportunity to select and execute my own project for a course called Robot Modeling. For the project, I have decided to model a UR5 and perform trajectory planning. The task would be to pick an object from one place and deploy it to another place while avoiding any obstacles in between. I am constrained to use MATLAB Simscape Multibody for my project and it will only be a simulation. Is the problem statement challenging enough to be completed in 8 weeks (assuming I can dedicate 5 hours a week)? I would also love to hear some takes on what would make this a more interesting or challenging problem statement for me. Thanks!
The question here opened the discussion on FPGAs in robotics applications. I would like to train an LSTM RNN/CNN for border detection and feature detection on an FPGA. The feature detection methods I would like to have include image morphology algorithms and basic ML algorithms to start with. What are the most common machine learning libraries or deep learning algorithms used on FPGAs? Is it feasible to use traditional ML libraries such as H2O on FPGAs?
I am new to this space and the field, so please pardon my lack of intuition or knowledge. I have a recording stand with a joint in the middle: a pair of screws through which the whole apparatus pivots like a hand, and I am trying to convert it into a robotic hand. I am trying to add a servo motor at the joint so that the whole apparatus can adjust its height accordingly. My question is: how can I replace the screw area (with the screws) with a motor? I have an MG995 TowerPro servo motor, but I am not sure how to connect the two things together. Please help!
I have a robot that takes a measurement of its current pose in the form $$ z = \begin{bmatrix} x\\ y\\ \theta \end{bmatrix} $$ $x$ and $y$ are the coordinates in the $XY$ plane and $\theta$ is the heading angle. I also have ground truth poses $$ z_{gt} = \begin{bmatrix} x_{gt}\\ y_{gt}\\ \theta_{gt} \end{bmatrix} $$ I want to estimate the standard deviation of the measurement. The standard deviation formula is $\sigma=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_{i}-\mu)^2}$. Is it correct, in this case where I have only one measurement, i.e. $N=1$, to calculate $\sigma_{xx}=\sqrt{(x-x_{gt})^2}$, and the same for $\sigma_{yy}$ and $\sigma_{\theta\theta}$? Edit: The measurement is taken from a place recognition algorithm where, for a query image, the best match from a database of images is returned. To each image in the database is associated the pose where that image was taken, i.e. $z_{gt}$. That's why I only have one measurement. In order to integrate it into the correction step of a Kalman filter, I want a model of the measurement standard deviation to estimate a covariance of the following form: $$ \Sigma = \begin{bmatrix} \sigma_{x}^2 & 0 & 0 \\ 0 & \sigma_{y}^2 & 0 \\ 0 & 0 & \sigma_{\theta }^2 \\ \end{bmatrix} $$
I've got a project at university to build a vehicle based on an Arduino, and I'd like to implement reinforcement learning for it. The processors on an Arduino are, of course, too slow for this task, so here is my question: is it possible to perform all the learning in the cloud, communicating with the vehicle via WiFi? If so, I would really appreciate some hints or references. Thanks.
I am working on a concept for a robotic arm for which I have fixed the DH frames and parameters. Is there any way to validate the correctness of the DH parameters and frames?
I am building a beach-cleaning robot. I am having problems with it moving on the sand; I have attached a video of it struggling to move. How can I proceed (I think it has enough torque)? It moves comfortably on normal ground. Do I need different wheels, motors, wheel sizes, etc.? Video of slippage
I have been looking into non-linear control algorithms for controlling a quadcopter. I know that we have a total of 6 DOF for a quad (3 translation and 3 rotation) and our input belongs to $\mathbb{R}^4$, i.e. the 4 motor inputs. So if I look at a quad following a specified trajectory $x(t)$, where $x\in \mathbb{R}^3$, then I am actually controlling only the desired position coordinates. The angles and angular velocities vary such that the position is attained, but we don't actually command the angles. So, my doubt is: we have a 4-dimensional input but only a 3-dimensional controlled output. Is it possible to have position tracking along with a particular angle, such as yaw tracking, i.e. controlling 4 states using 4 commands? Am I fundamentally wrong in my reasoning? Alternatively, in terms of movements being directly actuated, we do have three moment torques and a thrust as input. Would we consider this a 4-DOF command instead? Any reference or paper would be welcome too.
I know Android uses the Linux kernel. Nowadays processors are becoming more powerful, robots are becoming more interactive with graphical user interfaces, and Android is good in this field. So, is it a good idea to invest time in Android and leave Linux distributions like Ubuntu behind? Can we do some classifying and say for which robots it's better to use Android and for which Linux is the better choice? I ask this because I think Linux is faster than Android; am I right about that?
I need help converting this project to reality for the office that I currently work in. I basically want to have sensors (I don't know which or what kind) behind a bookshelf, in individual racks, and have those sensors communicate with an app to tell whether each rack is full or not. I have no background in programming or robotics, but I am willing to learn and read as much as possible, so if someone can guide me in the right direction it would be really helpful. I also want it to be as cheap as possible. I have attached an image of how I want the setup to be. Thanks in advance
I mean bricks like Lego/Meccano with a community like the Instructables site, where people share their models and robots with others. Also, the 3D models of the bricks (in SolidWorks format, for example) are free to use, or the parts are available to buy on the market!
Are there any programmable microcontrollers or add-on boards that contain an IMU or ESCs? I've looked at the ArduCopter, but that board's code is not open source, and I need something that can be programmed.
Let me start by saying that I know nothing about robotics. My wife goes a little crazy with Halloween decorations. She judges success by how many children she can make cry. Last year I built her a coffin with a hinged lid that we prop a skeleton in. I'd like to punch it up this year and have the lid open with some actuators. I have no idea what parts I would need to make this work. I'm thinking I need something weather resistant and powerful enough to lift a wooden lid. I'll probably attach a skeleton prop to the underside of the lid so it looks like he's sitting up when the lid opens. For controls I'd like just a switch that we can turn on/off or maybe get fancy and add a motion sensor. Any help you can give is appreciated. Thanks in advance.
I am interested in using a cheap LiDAR module for an outdoor robot I am building. The LiDAR modules that I am looking at are not waterproof, and I was wondering how hard it would be to build a waterproof case and how much that would impact performance. If anyone has experience related to this and can give some advice, that would be greatly appreciated.
Note: I'm just a day or so into the use of inertial measurements and trying to learn everything at once, so this may be a noob question (it's my first here). I have seen this "Figure 8" image on several sites (geekmomprojects, makerworkshop, husstechlabs, aros.se), but I don't understand it or how the equations are derived. If I set an accelerometer on a table so that $a_x$ and $a_y$ are nulled and rotate it around the $\hat{z}$ axis, I'm changing the yaw of the accelerometer, but of course there is no change in the value of $\theta$ away from zero as given by the last equation. So how is this a measurement of yaw? Or am I missing something obvious? What do these three expressions actually mean? I'm beginning to think that the first two expressions are simply the "tilt angles" and the third is some geometrical angle that is not actually independent of the other two.
Previous question: Hello, in continuation of my question above, I have another question. I have managed to control the speed of the treadmill using a PID, and StevO explained to me how the scaling from process variable (speed) to control signal (PWM) is done. Now I want to control the position of the robot at a fixed position on the treadmill. I want to use the first approach shown in the picture below. Note that the labeling is for a quadcopter; I want to use the same approach, but the labeling is different in my case. For me, the inputs to the Stabilize PID are the desired robot position on the treadmill and the actual robot position on the treadmill. The Rate PID is up and working. I want to use the Stabilize PID to make the robot hold a fixed position on the treadmill. I can measure the position of the robot using a distance sensor and find the error = desired distance - actual distance. My problem is that the error is in terms of distance (meters), but the control signal needs to be a speed that would be fed to the speed controller; the speed controller would then do its job. To sum up, my question is: for the Stabilize PID block, using a process variable which is a position (meters), how can I get a control signal that is a speed? What kind of scaling should be done here? Thank you.
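To show what I have in mind, here is a rough sketch of the outer loop (all names, gains and limits are placeholders, not my actual code): the output of the position PID is interpreted directly as the speed setpoint handed to the existing Rate (speed) PID, and my question is essentially whether the "scaling" is anything more than choosing these gains and clamping to the treadmill's speed range.

MAX_SPEED = 2.0                          # m/s, treadmill limit (assumed)
KP_POS, KI_POS, KD_POS = 1.5, 0.0, 0.2   # placeholder gains

class PositionLoop:
    def __init__(self):
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, desired_pos, measured_pos, dt):
        error = desired_pos - measured_pos            # meters
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        speed_cmd = KP_POS * error + KI_POS * self.integral + KD_POS * derivative
        return max(-MAX_SPEED, min(MAX_SPEED, speed_cmd))   # m/s, fed to the Rate PID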
I'm building a 7-DOF arm with series elastic compliant joints. I need series elasticity because I'm training the robot to do autonomous manipulation using machine learning, and I want force control (and to prevent the arm from killing itself if it hits something). I want the final joint to be something like this. I'm having real trouble finding plans online for any kind of rotary series elastic joint; I wonder why this is, given how easy it is to find arm plans. I'm going to add it to this robot. An example of the kind of motor I'll be sticking it on is a 5:1 planetary-gear stepper motor. My plan right now is to buy a rotary magnetic absolute encoder and a torsional spring. I don't really know if there's a standard way to mount these things, so if anyone could point me in the right direction, that would be great :)
Are there any specific tools or practices which robotics companies use in order to keep track of errors in:
- sensors/actuators malfunctioning or returning unexpected results,
- programming errors or exception states?
The aim of tracking this diagnostic information would be:
- remote diagnosis of errors,
- root cause analysis,
- understanding of errors which 'cascade' from devices/robots to platforms and cloud services.
I'm interested specifically in the parallels between normal software tools, e.g. Sentry or Rollbar for exception tracking, or APM systems such as Dynatrace, and the distributed and long-lived state of the autonomous systems world. Those systems seem better suited to errors in a request/response format, where the history of a device does not have as much effect on its future as in the autonomous systems world. So far the only tool I've seen is the diagnostics tooling built into the ROS messaging bus, but it does not seem well suited to production. How are teams currently doing this in production?
I have been asking a couple of questions here and around about autonomous boats, and finally I decided to post the design and gather some feedback. The idea is to build a small vessel capable of advanced autonomous navigation (on a given path), path optimisation and obstacle avoidance, while being completely independent from fuel thanks to renewable energy sources, relying only on solar panels and thermoelectric generators. The displacement of the boat is around 2 m, with a bulb at the bottom containing LiFePO4 batteries. On the electronics side, I thought to split tasks into several units, made of microcontrollers running an RTOS (ChibiOS or FreeRTOS, I still have to choose) and a single-board computer with Arch Linux. Up to now the units identified are:
- High-level navigation unit
- Low-level navigation unit
- Power unit
- Engine unit
The high-level navigation unit would perform the following tasks:
- Path planning: global route calculation, considering shorelines, high-traffic routes and sea bathymetry to avoid groundings;
- Obstacle avoidance: using a forward-looking sonar for underwater obstacles, AIS for boats, a thermal imaging camera and maybe a small radar for other types of objects;
- Local navigation: optimise the path taking into account the general predefined path (from the path planner), using inertial navigation and dead reckoning in case of GPS failure;
- Local system management: put sensors in sleep mode when they're not needed or in case of power shortages.
The low-level navigation unit is responsible for:
- Local navigation;
- Local system management;
- Communication: transmit and receive messages via radio link or satellite.
The power unit contains an MPPT solar charger and a battery monitor and is responsible for:
- Charging and powering the units;
- Sensing the battery and informing the other units of the charge status;
- Disabling units to go into low-power mode (one after the other).
The last unit is simply an interface to the engines; basically it's an ESC connected to the other units. The backbone of the boat would be the well-established CAN bus running NMEA 2000, since almost all sensors use this protocol, and both the SBC and the uC would share messages via the CAN bus. Locally, the SBC will have a CAN bus interface and a message bus where the threads responsible for the aforementioned tasks will stream messages (inter-thread) and eventually publish frames on the CAN bus to interface with the sensors (say, put the sonar in sleep mode) or the other units. Ideally, in case of failure the boat should be able to navigate with just one of the two navigation units, obviously losing some functionality. My main concern is to reduce failure points so that a single error or a broken sensor/unit would not compromise the whole system. Ideally there could be failsafe additions such as a Draculino, an Arduino running on RF and piezoelectric harvested energy to stream QRSS and Morse position messages to nearby boats, but probably it's useless. Fun fact: this design should be COLREG compliant thanks to the several anti-collision features and some small details I didn't mention, such as the navigation lights or the manoeuvre normaliser (to make the boat steer in a human-understandable way).
I'm currently working on a small robot that uses an ArduPilot for its control. As I'm working indoors, I can't use a real GPS, so I'm making a dummy GPS based on UWB. From UWB I get a position at +/- 10 cm in Cartesian coordinates. To make it easier I convert my position to latitude & longitude using:

double Phi = this->latitude_0 * M_PI /380;
this->lat_to_meter = 111132.92 - 558.82*cos(2*Phi) + 1.175*cos(4*Phi) - 0.0023*cos(6*Phi);
this->lon_to_meter = 111412.84 * cos(Phi) - 93.5 * cos(3*Phi) + 0.118 * cos(5*Phi);

with latitude_0 being 0 (all my references are shifted to the point (0, 0), close to the west of Africa). I'm building GPGGA, GPRMC & GPVTG frames to feed the ArduPilot. The GPS is detected & the position is updated without any trouble. But I'm struggling with the track made good for the GPRMC & GPVTG sentences. GPGGA, GPRMC & GPVTG definitions. Speed & everything else is trivial to calculate, but I can't understand the track. If I'm not mistaken, the track is the angle of the course relative to true north, i.e. the orientation of the travelled distance (?). So my question is: given 2 coordinates in latitude & longitude (conversion to meters is trivial), how can I calculate the track?
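My current guess is that the track is simply the bearing from the previous fix to the current one, measured clockwise from true north, which with the same metres-per-degree conversion as above would look something like this (Python just for illustration, names are placeholders). Is that the right interpretation?

import math

def track_deg(lat_prev, lon_prev, lat_now, lon_now, lat_to_meter, lon_to_meter):
    d_north = (lat_now - lat_prev) * lat_to_meter
    d_east = (lon_now - lon_prev) * lon_to_meter
    # atan2(east, north) gives the angle clockwise from north; wrap into [0, 360)
    return (math.degrees(math.atan2(d_east, d_north)) + 360.0) % 360.0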
I am looking for a rotational actuator with high torque whose rotation angle can be controlled using Arduino control signals. The project is an e-brake for an e-bike. Working mechanism: the control input from a sensor is fed to the Arduino, and from the Arduino to an actuator. The actuator (to which the brake wire is connected) should rotate a predefined angle so that the brakes are engaged. Any suggestions on implementing this, or any alternative mechanisms?
I'm looking at this Dexter robot that uses FPGA controllers to get really accurate position and force sensing on a cheap ($2K) arm. They say that the speed of the processor lets them get this accurate sensing. I have a couple of questions: (1) What does processor speed have to do with force sensing? Normal clock speeds seem pretty fast. (2) Why do they use stepper motors if they want the robot to be cheap? Everyone I've talked to has said that for small hobby robots, Dynamixel servos are by far the best choice.
At my work we use BeagleBone Blue boards to control somewhat small AUVs. The board is placed in an underwater tube along with the batteries, the ESCs for the thrusters and all the other electronics. We use the built-in magnetometer and gyro to calculate the heading for navigating under water. Testing the BeagleBone alone on the desk shows that the heading is fairly stable and correct within reasonable limits. The problem arises when we do offshore tests with the AUV. There the heading will start to drift. A controller has been configured for the AUV to follow a reference heading for X amount of time. From the data we can see that the AUV actually does keep a steady heading, but from inspecting the vehicle in real life it becomes obvious that something is wrong, as the vehicle follows an arc instead of a straight line. So the AUV actually thinks that it keeps the same heading, and it actually does so for the first few meters, but then it starts to turn, following an arc, while still thinking it is on the same heading. I know that we introduce quite some noise on the sensors by having the batteries, ESCs and wiring around the board, but could this really be the cause of this? Can any of you think of other possibilities? And can anyone suggest a solution (other than moving the power units and wires to another shielded tube; this is not an option at the moment)?
Let's say, for example, I am controlling a motor's speed by adjusting how much power I feed into it. The motor shaft is connected to some physical device that applies varying amounts of torque, so in order to keep the speed the same I implement a PID control loop. Let's say that the base power I am sending the motor is X. The PID outputs an adjustment value, say Y. Should I add Y to X on every pass through the PID loop? Or should I add only the changes in Y to X? For example, if Y changes from 5 to 10, then add 5 to X.
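To make the two options concrete, here is roughly what I mean in pseudo-Python (names made up): the first is what I understand to be the "positional" form, where the controller output Y is the full correction added to the base power X every loop, and the second is the "incremental" form, where only the change in the output is accumulated.

class PositionalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err, dt):
        self.integral += err * dt
        d = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * d   # full Y; power = X + Y

class IncrementalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0   # error one step ago
        self.e2 = 0.0   # error two steps ago

    def update(self, err, dt):
        # change in Y; power += delta each loop
        delta = (self.kp * (err - self.e1)
                 + self.ki * err * dt
                 + self.kd * (err - 2 * self.e1 + self.e2) / dt)
        self.e2, self.e1 = self.e1, err
        return delta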
I built a self-balancing robot, or at least that's what it should have been. I have tried implementing a complementary filter which combines gyro and accelerometer data, but the problem is that the motor and the IMU (MPU-6050) are on the same board and at most an inch away from each other. So, if I get it right, the vibrations from the motor influence the accelerometer far too much; hence I am not getting any usable result when the robot is set on the ground. So, what techniques can I apply to get the absolute orientation? [EDIT] This, right now, is my code with the complementary filter. My problem is that even when the motor is off, the yAngle value differs greatly from the yAcc value, although the yAcc value is correct. Is there any error in my code causing the gyro drift to break the whole filter?

// 100.0 is the loop rate in Hz (so dt = 1/100 s), 14.375 is the gyro-specific resolution (LSB per deg/s)
float gyroY = (float)(readY()) / 14.375 / (100.0);
int z = getAccZ();
int x = getAccX();
float yAcc = atan((float)z / (float)x) / PI * 180;
yAngle = 0.98 * (yAngle + gyroY) + (0.02 * yAcc);
My system consists of the following: a PC with a single Ethernet port and several USB ports, a robot mobile base, a SICK LMS lidar with an Ethernet cable, and a Velodyne sensor with an Ethernet cable. The robot is connected via a serial-to-USB converter and works fine. The lidar can also be connected to the Ethernet port and works fine. Now that we have acquired a Velodyne 3D lidar, I am confused as to how to connect it to the computer, as there is only one Ethernet port available. Should I use a router? Has anyone tried to connect multiple sensors like this? I would be thankful if you could provide details.
How do I shorten these values? They are the result of a matrix like this. When I individually run sin(theta1) and similar functions, they give me the correct value as a zero or a one (the angles I am working with are 0 or 90 degrees). In some cases they may turn out to be values like 1.574. I know there is a round-off function in MATLAB, but then I would have to apply that function to every element individually. Is there any easier way to achieve this? P.S.: ST stands for sin(Theta), SA for sin(Alpha), CA for cos(Alpha), and so on. P.P.S.: I tried the eval function; it is not working at all. Edit 1: The code I am using is as follows:

init_lib;
clc;
load_robot;
syms q1 q2 q3 q4 q5;

%DH Parameters for the robot:
robot.DH.theta= '[pi/2 pi/2 0 q(4) q(5)]';
robot.DH.d='[q(1) q(2) q(3) 0.1 0.020]';
robot.DH.a='[0 0 0 0 0]';
robot.DH.alpha= '[pi/2 pi/2 0 pi/2 0]';

% We input the joint parameters:
q = [q1 q2 q3 q4 q5];

%Storing the evaluated values of 'q'
Theta=eval(robot.DH.theta);
d=eval(robot.DH.d);
a=eval(robot.DH.a);
alpha=eval(robot.DH.alpha);

A01=dh(Theta(1), d(1), a(1), alpha(1));
A12=dh(Theta(2), d(2), a(2), alpha(2));
A23=dh(Theta(3), d(3), a(3), alpha(3));
A34=dh(Theta(4), d(4), a(4), alpha(4));
A45=dh(Theta(5), d(5), a(5), alpha(5));

A05 = A01*A12*A23*A34*A45;
disp(A05);

Here, dh is a function that comes from a predefined library. It basically substitutes the four values into a generalized form of the matrix I posted as the second image.
I'm really confused about how to do this; a general guideline would be very much appreciated.
Can anyone please tell me: if a sensor's output is 0 to +/-10 V, and I connect the sensor to a differential amplifier to get an output of 0 to 3.3 V, then what is the input voltage to the differential amplifier: -10 V to +10 V, or (0 to 10 V or -10 to 0 V)? If I use -10 to +10 V, then the differential amplifier adds both input voltages and gives +20 V, and with the gain we can adjust it to 0 to 3.3 V. But if I use 0 to 10 V or 0 to -10 V, then I get half the voltage with the above set gain. Can anyone please explain? Thank you!
I have been working on code where an A.R. Drone 2.0 will detect a color and put a red dot in the middle of the image. I am using streaming from the drone. The goal is for the drone to detect a white gutter and fly straight over it from one point to the other, essentially following a line. I noticed that when I changed the BGR to 0, 0, 255, I get the entire gutter distinguished, but it detects white spots as well. Is there a way to isolate the detection so it only sees the gutter? Maybe using shapes: once the gutter is detected, put a bounding box around it. And my final question is: how do I tell my drone to follow the red dot, or maybe a drawn line? I looked at Python AR.Drone libraries but don't know how to apply them. This is my code:

import numpy as np
import cv2

# open the camera
cap = cv2.VideoCapture('tcp://192.168.1.1:5555')

def nothing(x):
    pass

cv2.namedWindow('result')

# Starting with 100's to prevent error while masking
h,s,v = 100,100,100

# Creating track bars
cv2.createTrackbar('h', 'result',0,179,nothing)
cv2.createTrackbar('s', 'result',0,255,nothing)
cv2.createTrackbar('v', 'result',0,255,nothing)

while True:
    # read the image from the camera
    ret, frame = cap.read()

    # You will need this later
    frame = cv2.cvtColor(frame, 35)

    # converting to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # get info from track bars and apply to result
    h = cv2.getTrackbarPos('h','result')
    s = cv2.getTrackbarPos('s','result')
    v = cv2.getTrackbarPos('v','result')

    # Normal masking algorithm
    lower_blue = np.array([h,s,v])
    upper_blue = np.array([180,255,255])
    mask = cv2.inRange(hsv, lower_blue, upper_blue)
    result = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow('result', result)

    # find center
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    center = None
    if len(cnts) > 0:
        c = max(cnts, key=cv2.contourArea)
        ((x, y), radius) = cv2.minEnclosingCircle(c)
        M = cv2.moments(c)
        center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
        if radius > 10:
            # cv2.circle(frame, (int(x), int(y)), int(radius), 2)
            cv2.circle(frame, center, 5, (0, 0, 255), -1)

    # color detection limits
    lB = 5
    lG = 50
    lR = 50
    hB = 15
    hG = 255
    hR = 255
    lowerLimits = np.array([lB, lG, lR])
    upperLimits = np.array([hB, hG, hR])

    # Our operations on the frame come here
    thresholded = cv2.inRange(frame, lowerLimits, upperLimits)
    outimage = cv2.bitwise_and(frame, frame, mask=thresholded)
    cv2.imshow('original', frame)

    # Display the resulting frame
    cv2.imshow('processed', outimage)

    # Quit the program when Q is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
print 'closing program'
cap.release()
cv2.destroyAllWindows()
For our minor "Robotics and Vision" we need to use ROS to create a system that can navigate to and from a coffee machine. The available hardware consists of a robotic platform from Dr Robot and a manipulator that uses Dynamixel actuators. The system should use computer vision to navigate the robotic platform and manipulator. Besides the robotic platform and manipulator, we have a limited budget of 500 euros to get all the other necessary hardware, including the processing device(s). This device (or these devices) needs to run the ROS core and the nodes that are responsible for:
- analyzing the environment with computer vision;
- controlling the engines;
- controlling the manipulator;
- interfacing with a database.
After research we found that we could use a PC, like an Intel NUC or a custom PC, or we could use several Raspberry Pis to run separate nodes and let them communicate within a network. We tried to find out how much computational power is necessary to fulfill our requirements, but after searching online for a while we could not find a fitting answer. Our question is whether anyone knows how much computational power we need to fulfill the aforementioned requirements, or how we could find out (without spending the entire budget). EDIT: The problem for us is that we are using the V-model, which means that we design our entire system before implementing it; therefore we have to decide on the type of hardware before we "are allowed" to write any software. This makes it impossible to test before ordering the computing part.
I have successfully controlled an analog 360-degree (continuous rotation) servo using a slide potentiometer, with the pot sending a 1000 µs pulse at one end of its travel and 2000 µs at the other. This gives me nice and smooth rotation speed control along the whole pot axis. I recently got hold of TowerPro MG90D servos, which would be ideal for the application I have in mind because of their size and power, but I cannot seem to find the right signal to send. If I run the same code, the servo first turns 100% clockwise, then in the middle there is a jittery, very small step movement, and then at some point it goes full speed counterclockwise. So my question is: what signal should I be sending to this digital servo? This servo is indeed a 360° servo. Please see https://youtu.be/pyGhbPLZA04. Many thanks in advance!
I'm building a LynxMotion Scout biped and believe that IK is the best way to develop an effective gait. I'm a mechanical design engineer; I currently have minimal skills/experience with programming, which I am looking to improve. Does anyone have any suggestions for IK resources for bipeds that would assist my learning/knowledge of IK? (I have already searched online, with little success.) I would really appreciate any pointers or resources you can provide.
Searching online, I have stumbled across the following types of sensors that claim to measure distance. I don't quite understand the differences between them or their pros and cons, and would appreciate it if someone could explain. I am trying to build a bot and want to figure out which sensor to use for opponent detection, balancing accuracy, speed and sensor price:
- Sharp proximity IR sensors (these are fairly affordable)
- Diffuse-type sensors (these are for some reason very expensive; why?)
- Mini lidar (these are somewhat in between, but still on the pricey side)
I am far from an expert on robotics, but I am currently investigating the feasibility of automating a key task in our factory. We have an application where we would need image recognition and depth sensing to specify a coordinate in 3D space where an object exists; think of something hanging off a small tree/bush. Then we need to make the robot arm move there and use a cutting end effector to cut the string the object hangs from. As far as I understand, it seems like most industrial robots today need to be pre-programmed. Are there any good solutions where you could basically say to the robot: moveTo(x,y,z) cut() return(), where the (x,y,z) coordinates come from a depth camera setup with image recognition? I am sorry if the question is too unspecific, but I would like to know what the current state of the technology is.
I'm currently working with a Kinect v2. I can do all sorts of stuff on the PC with it. What I want to do next is to get the data I want on the PC and control the Raspberry Pi with that data (for example, I will move the Pi with motors when I tilt my head to the right). I have sorted out the motors and everything, but I just don't know how to use the tracking data I have on the PC to control the Pi. I hope the question makes sense; I'm just extremely new to both the Pi and the Kinect. Edit: further clarification of what I'm trying to achieve. I have a Raspberry Pi which is connected to 2 stepper motors through a motor driver. With this setup, I can run a Python script and turn the motors on or off. Now I also have a Kinect v2 connected to my PC. What I want to achieve is to issue the 'turn on / turn off' commands from the Kinect which is connected to my PC, not from the Pi. I want to tell the Pi from my computer, for example: when I turn my head left, turn on the motors connected to the Pi. I can issue the commands through the Python script on the Pi; now I want to issue those commands from my PC, which has the Kinect data, and I want to do it over a WiFi connection.
Step 1: Get the head movement data from the Kinect on my PC.
Step 2: Send that data to the Raspberry Pi, somehow (the step where I have the problem).
Step 3: The Pi turns the motors on according to that data.
I hope this clarifies the problem. Thanks for the help!
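One idea I am considering for step 2 is a plain TCP socket over the WiFi connection: the Pi listens and the PC connects and sends small text commands (the command names here are made up, and any messaging layer such as MQTT or ZeroMQ would presumably work just as well). Would something like this sketch be a reasonable way to do it?

# --- on the Raspberry Pi (server), next to the existing motor code ---
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5005))
server.listen(1)
conn, addr = server.accept()
while True:
    data = conn.recv(64)
    if not data:
        break
    command = data.decode().strip()
    if command == "HEAD_LEFT":
        pass   # turn the motors on here
    elif command == "STOP":
        pass   # turn them off

# --- on the PC (client), after reading the head pose from the Kinect ---
# client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# client.connect(("<pi-ip-address>", 5005))
# client.sendall(b"HEAD_LEFT\n")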
I'm a programmer and I've decided to make some bots for fun. I might start with wheels, but my ultimate goal is to make bipedal robots. To that end, I first decided to watch a bunch of videos of such bots to see what their motion and parts looked like. One thing that struck me as counter-intuitive is how the forward motion of the robots is carried out in every example I see. When a human walks forward, they land heel-first and roll to the toe on their stride with one leg while pushing off the ground with the opposing heel (or is it the toe?). In any case, I don't see any bipedal robots doing this. But why not? It seems to me that, given sufficiently replicated feet and legs, bipedal robots should be able to carry out the same motion. Sure, the programming and/or training for this might be difficult, but obviously doable. Or am I missing something? Edit: I do realize that when a person walks, they shift their weight to perform their stride. Also doable though, right? There isn't anything magical about the motion of a walking human as far as I know.
The Extended Kalman Filter is more or less a mathematical "hack" that allows you to apply these techniques to mildly nonlinear systems. The problem with the Extended Kalman Filter is that if I initialize the filter with poor conditions (i.e., a poor initial state), it will quickly diverge. If propagation and/or measurement updates happen at too great a timestep, it will also quickly diverge. In my view, the EKF is not good for control engineering, due to the risk of divergence. So my question is whether the Unscented Kalman Filter (UKF) is a better choice for me if I want to be sure that my controller is stable, or should I use the original (static) Kalman-Bucy filter? I'm working with feedback systems.
This is a question I have thought about for a very long time: what are industrial controllers? From research I found that PID is the most used in industry; PID controllers fall under classical control. Model Predictive Control (MPC) is used as well; MPC controllers fall under predictive control. LQG controllers are used too, and LQG controllers fall under optimal control. For nonlinear systems, fuzzy controllers are used, and fuzzy controllers fall under nonlinear control. Then we have two types of controllers left: Model Reference Adaptive Control (MRAC), which falls under adaptive control, and H-infinity control, which falls under robust control. So my questions for you are: why are robust control and adaptive control not used in industry? Are they both still in a development stage? Please correct me if I'm wrong, because I really like robust control, but I'm not going to learn it if there is no chance for me to use it in industrial work.
Originally I have an image with a perfect circle grid, denoted as A. I add some lens distortion and a perspective transformation to it, and it becomes B. In camera calibration, A would be my destination image and B would be my source image. Let's say I have all the circle center coordinates in both images, stored in stdPts and disPts.

//25 center pts in A
vector<Point2f> stdPts(25);
for (int i = 0; i <= 4; ++i) {
    for (int j = 0; j <= 4; ++j) {
        stdPts[i * 5 + j].x = 250 + i * 500;
        stdPts[i * 5 + j].y = 200 + j * 400;
    }
}
//25 center pts in B
vector<Point2f> disPts = FindCircleCenter();

I want to generate an image C that is as close as possible to A, from the inputs B, stdPts and disPts. I tried to use the intrinsics and extrinsics generated by cv::calibrateCamera. Here is my code:

//prepare object_points and image_points
vector<vector<Point3f>> object_points;
vector<vector<Point2f>> image_points;
object_points.push_back(stdPts);
image_points.push_back(disPts);

//prepare distCoeffs rvecs tvecs
Mat distCoeffs = Mat::zeros(5, 1, CV_64F);
vector<Mat> rvecs;
vector<Mat> tvecs;

//prepare camera matrix
Mat intrinsic = Mat::eye(3, 3, CV_64F);

//solve calibration
calibrateCamera(object_points, image_points, Size(2500, 2000), intrinsic, distCoeffs, rvecs, tvecs);

//apply undistortion
string inputName = "../B.jpg";
Mat imgB = imread(inputName);
cvtColor(imgB, imgB, CV_BGR2GRAY);
Mat tempImgC;
undistort(imgB, tempImgC, intrinsic, distCoeffs);

//apply perspective transform
double transData[] = { 0, 0, tvecs[0].at<double>(0),
                       0, 0, tvecs[0].at<double>(1),
                       0, 0, tvecs[0].at<double>(2) };
Mat translate3x3(3, 3, CV_64F, transData);
Mat rotation3x3;
Rodrigues(rvecs[0], rotation3x3);
Mat transRot3x3(3, 3, CV_64F);
rotation3x3.col(0).copyTo(transRot3x3.col(0));
rotation3x3.col(1).copyTo(transRot3x3.col(1));
translate3x3.col(2).copyTo(transRot3x3.col(2));
Mat imgC;
Mat matPerspective = intrinsic * transRot3x3;
warpPerspective(tempImgC, imgC, matPerspective, Size(2500, 2000));

//write
string outputName = "../C.jpg";
imwrite(outputName, imgC);  // a JPG file is saved

And here is the resulting image C, which doesn't deal with the perspective transformation at all. So could someone teach me how to recover A? Thanks.
I want to make a line-following robot (LFR) using IR sensors. So far I have coded my robot specifically for the track I am going to compete on, so it won't work on any other track. Now I want to code it in a way that it will work on almost any track with any layout.
To record .avi video, I am using video_recorder from the image_view package. I want to do this through the launch file. Here is how I put it in there:

<node pkg="image_view" type="video_recorder" name="video_record_$(arg camera_name)"
      machine="$(arg machine)" if="$(eval record and enable_camera)">
  <arg name="fps" value="$(arg frame_rate)" />
  <arg name="codec" value="HFYU" />
  <arg name="encoding" value="$(arg image_encoding)" />
</node>

I get the error: WARNING: WARN: unrecognized 'arg' child tag in the parent element. Unfortunately I could not find a go-to syntax for this and I am new to ROS. Any help is appreciated. Feel free to ask for more details.
I'm looking at a pan-tilt (RR) system. I want to define the kinematics between a world frame and the end effector. Every example I've seen treats the world frame as being at the first joint (the zero frame). Can DH parameters be used with an offset base? My initial thought is yes, and that you define your world frame, your base frame, and your joint frames. Then there is an extra T matrix, but that one doesn't change, since the transformation between your world frame and the fixed base frame is fixed. Thanks
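In other words, I am assuming the chain would just be pre-multiplied by the constant world-to-base transform, something like
$$ {}^{W}T_{E} = {}^{W}T_{B}\, {}^{B}T_{1}(\theta_1)\, {}^{1}T_{2}(\theta_2)\, {}^{2}T_{E}, $$
where ${}^{W}T_{B}$ is fixed and the remaining factors come from the DH parameters of the two revolute joints. Is that the right way to think about it?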
I am looking for an inverse kinematics library (preferably compatible with ROS) that includes functions to calculate the inverse Jacobian, to go from an end-effector velocity (twist) to joint velocities. So far I've only found KDL with this capability, but before implementing KDL I would like to see if there are any other options I could use. I also saw TRAC-IK, but as far as I can see from the source code, it only supports inverse position kinematics.
I have only recently started reading Steve LaValle's book on motion planning. It mentions that a rotating link with a fixed pivot has a configuration space of a unit circle. Since the configuration space is the set of all possible configurations the link can have, i.e. all possible angles from 0° to 360°, shouldn't the C-space be a line rather than a circle? Also, for a 2-link robotic arm in the same plane, how is the C-space a 2-torus even though the joints rotate in the same plane? Lastly, what's the difference between the representations shown below?
I want to choose a stepper-motor lead screw with a slider, and use it to shift a cube with 5 cm sides and a weight of 250 g. I found this motor. How can I calculate the speed and torque of the slider?
I'm a graduate student in mechanical engineering. I took a computer vision (stereo vision) course last semester, but I'm still fairly new to this area. I wonder if I may ask 3 questions related to computer vision?
1. We have a 6-DOF robot arm, and we want to orient the end effector (the z axis of the end effector) along the normal of a curved surface. The geometry of this curved surface is unknown. Could you suggest some good tutorials/papers/implementations of extrinsic calibration?
2. Assuming I have a stereo camera (a ZED, for example) which is fixed in space, I want to get the camera's coordinates, and therefore all the pixel coordinates, expressed in the robot base frame. I think this might be a hand-eye calibration problem, but maybe there are better methods?
3. May I use a stereo camera like the ZED to get the surface normal? Assume I want to get the surface normal of a cylinder, and that I have the extrinsically calibrated camera ready and facing this cylinder. My proposed method is to mark 3 nearby points a1, a2, a3 with a blue marker pen. I can get the "physical coordinates" of these three points in the robot base frame, then I take the cross product between the physical coordinate of (a3-a1) and the physical coordinate of (a2-a1), and then I have a normal at the physical coordinate of a1. Any suggestions/improvements on my method?
My biggest concern is that the ZED camera may not be a good option here; after all, 400 dollars is not cheap for an international student. More importantly, I always feel there must be a better option. I'm all ears. I may not have given a good description of my problem. Thank you for your attention and help in advance. Best,
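For question 3, the cross-product step I mean is just this (NumPy sketch with placeholder names), using the three marked points already expressed in the robot base frame:

import numpy as np

def surface_normal(a1, a2, a3):
    a1, a2, a3 = map(np.asarray, (a1, a2, a3))
    n = np.cross(a3 - a1, a2 - a1)       # normal at a1, up to sign
    return n / np.linalg.norm(n)         # flip the sign afterwards if it points into the surface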
I would like to connect my RPi 3 to the HKPilot. While most connections are pretty straightforward, I would like to know how I can power the Pi. It is stated in the above link that "The RPi can be powered by connecting the red V+ cable to the +5V pin" — can the HK32 pilot handle the power consumption of the RPi? It is also stated that "The RPi can be powered ... from USB in"; in this case, is the ground still shared? Where can I get a 5 V output from? Do I need to build a voltage regulator? Thanks,
I am new to robotics, and currently trying to develop a purely numerical simulation of a quadcopter. As I understand, the problem of quadcopter control includes take-off control, hover stabilization and landing. However, I am only working with navigation control right now, foregoing take-off and landing. I am imagining a scenario wherein the quadcopter is already off the ground at a certain height (is that called hovering?) and it has to follow a trajectory in the presence of simulated noise (which I intend to introduce through random deviations in the angular orientations of the quadcopter) . The trajectory is nothing but a finely discretized curve in XYZ space (with a constant Z for now). I am trying to build on this tutorial. It's a highly simplified model, just taking into account the thrust, external torques and frictional forces on the quadcopter. I have two questions here: 1. Is my setup even feasible just to demonstrate a proof of concept fuzzy control? 2. In this setup, and according to the initial condition mentioned above, I understand that if the quadcopter already has to be at a certain height h, then the net thrust in Z direction should balance the gravitational force. However, that would be the case at any height above the ground. But to get it to height h, how do I calculate the angular velocities in body frame that are required to keep it at that height? I am asking this because by means of affecting these velocities I'll be able to add some noise to the system and then work my way from there. The platform for this numerical simulation is to be Matlab, if that information adds anything. I am planning to use only Matlab scripts for now, and not Simulink. As for my knowledge of dynamic systems, I understand basics of linear and rotational kinematics. I have only worked with implementing text book algorithms in a standard computer science course, and that's as far as my programming experience goes.
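On question 2, assuming the common simplified model where each rotor produces a thrust proportional to the square of its angular speed, $T_i = k\,\omega_i^2$, the hover condition only balances weight and does not depend on the height:

$$ 4\,k\,\omega_h^{2} = m\,g \quad\Longrightarrow\quad \omega_h = \sqrt{\frac{m\,g}{4\,k}} $$

So the rotor speeds that keep the quadcopter at height $h$ are the same as at any other height; $h$ itself comes from the initial condition on the vertical position, and deviations from it (e.g. the noise you inject) are then corrected by the controller rather than by a different nominal speed.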
I am looking for something like the part in these photos, to prototype my robot ideas, but I don't know its name! Also, if there are better options, I will be happy to hear about them.
I tried to get the yaw, roll, pitch and throttle integers from the Pi to the CC3D as a PWM signal over the GPIOs, with Python and C++, but the output didn't seem to be right, or it was delayed. Could someone who knows how to program PWM, PPM, SBus or ExBus help me write such an application? Or does someone know a better way to do it than a serial connection?
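Not a full answer, but one option that avoids the timing jitter of bit-banged software PWM on the Pi is the pigpio daemon, which generates hardware-timed servo pulses on any GPIO. A minimal sketch — the pin numbers are hypothetical, and the CC3D would have to be configured to accept PWM input on the matching channels:

```python
import pigpio

# Hypothetical GPIO assignments for the four RC channels
CHANNELS = {"roll": 17, "pitch": 18, "throttle": 27, "yaw": 22}

pi = pigpio.pi()          # requires the pigpiod daemon to be running
if not pi.connected:
    raise RuntimeError("pigpiod is not running")

def set_channel(name, pulse_us):
    """Send an RC-style pulse (typically 1000-2000 us at 50 Hz) on one channel."""
    pi.set_servo_pulsewidth(CHANNELS[name], int(pulse_us))

set_channel("throttle", 1000)   # idle
set_channel("roll", 1500)       # centre stick
set_channel("pitch", 1500)
set_channel("yaw", 1500)
```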
Over the last 5 years I received 2 quadcopters as gifts. The first one was basic and manual: I had to set the 4 motor speeds myself so it wouldn't crash during take-off, and I was always controlling the amount of thrust myself. 1/ Why was that necessary? If you send the same command to the 4 motors, they should spin at the same rate and thus provide the same thrust, no? The second one was a smartphone-controlled drone: you just hit the take-off button and it flies and stays at the same place/altitude. To do that, it had an ultrasonic sensor below the drone; if I made it fly above a mattress, it would crash. 2/ OK, that's nice, but what about high altitude? If my drone is hovering at 500 m, what kind of sensor can I use to make it stay at the same place and altitude? I already fiddled with GPS and I don't think it's fast/precise enough. Thanks.
I am starting off with a very simple Kalman filter for vision based pose estimation (PnP algorithm). The filter is inspired by the constant velocity model in this OpenCV tutorial, but I am ignoring roll, pitch and yaw for now and I am only estimating the XYZ pose. As PnP is formulated as a non linear least squares problem, I have access to a covariance matrix from the solver I am using (Ceres), and I am using this matrix as an estimate of the measurement noise covariance matrix $R$ at each step. Process noise covariance $Q$ remains constant. My understanding of the filter is that if I obtain more and more 'good' measurements (with low $R$), the system covariance should recursively decrease; and even if the measurements worsen later, the posteriors should not worsen too much. So if I were to start from an area of bad measurements with covariance $P_1$, move to an area where I receive some good measurements and back to the area of bad measurements with covariance this time being $P_2$, $P_2 < P_1$. On the other hand, in my formulation, my posterior covariance $P$ seems to be blindly following $R$: in my previous example of bad-good-bad areas, $P$ decreases and increases to almost the same extent, so I can't see the advantage of receiving good measurements reflected in the final covariance. On the other hand, if I reduce $Q$ even lower, I can see some variation in initial vs. final covariances, but the system is not trusting the measurements at all and is smoothing the poses out way too much. I am confused as to how to pick the best values of $Q$ and $R$, and mainly as to how I can write my filter in a way that it recognizes the advantage of getting good measurements.
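It sometimes helps to watch the covariance recursion on its own, stripped of the pose estimation. A scalar toy sketch (not your model) showing that P is re-inflated by Q at every predict step — which is why the benefit of a stretch of good measurements fades once R becomes large again, and why shrinking Q makes P retain more history at the price of trusting measurements less:

```python
# Scalar random-walk model: x_k = x_{k-1} + w (var Q), z_k = x_k + v (var R_k)
Q = 1e-2
P = 1.0
for R in [10.0, 10.0, 10.0, 0.01, 0.01, 0.01, 10.0, 10.0, 10.0]:
    P = P + Q              # predict: covariance grows by Q every step
    K = P / (P + R)        # Kalman gain
    P = (1.0 - K) * P      # update: shrinks according to how small R is
    print(f"R = {R:6.2f}  ->  P = {P:.4f}")
```

In steady state P settles to a value governed by Q and R, so with a constant, non-negligible Q there is no mechanism for the filter to "remember" earlier good measurements indefinitely; that forgetting is by design for a random-walk / constant-velocity process model.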
I'm working on a smart camera that does key point prediction (predicting locations of wrists, elbow, shoulder, ears, eyes, nose, etc.) for gesture recognition. Right now, the neural net is running on the embedded GPU (a Jetson TX2) and performance isn't ideal (<3 FPS). So I'm exploring whether it makes more sense to constantly upload images to the cloud, doing the predictions there, and sending the results back to the device. I'm curious what approach others would recommend for a smart camera. Specifically: Performance differences between using an embedded GPU vs. the cloud? Cost differences between using an embedded GPU vs. the cloud? What other smart cameras are doing (Nest, Lighthouse, etc.)? If there is an alternative better than the Jetson TX2 if going the embedded path? Any advice for any of the questions would be great.
I would like to determine the relative camera pose given two RGB camera frames. I assume there is overlap in the field of view between the two cameras; what I am looking for ultimately is the rotation and translation between the two cameras. I understand how to do this in theory, and am looking for existing OpenCV implementations in Python. An existing one for MATLAB can be found here: https://www.mathworks.com/help/vision/ref/relativecamerapose.html#inputarg_inlierPoints1. But as far as I can tell, there is no OpenCV API in Python for this.
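For what it's worth, the OpenCV Python bindings do expose this pipeline via the essential matrix (feature detection and matching are left to you). A sketch assuming you already have matched pixel coordinates pts1/pts2 (Nx2 float arrays) and the intrinsics K — note that the recovered translation is only defined up to scale:

```python
import cv2

def relative_pose(pts1, pts2, K):
    """Relative rotation and (unit-scale) translation between two views."""
    E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K,
                                          method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
    # The cheirality check inside recoverPose picks the valid (R, t) decomposition of E
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
    return R, t
```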
To simulate a system, a global timer will be set and all submodules — such as the PID controller, Kalman filter, PWM module, etc. — will be synchronized with that timer to work together. How do you sync the timers in ROS? 1) Should I pass a timer pointer as a parameter to PID(), i.e. hand the system "ros::Time xxx" to the PID? Is this timer "real-time" enough for a balance car? (For the PID controller I can do this, because the PID controller and the PWM generator will be in the same node.) 2) Or should I receive the messages from the other nodes and extract and use their time stamps? Can the latency be ignored, or how do I evaluate it? (The Kalman filter will be in the sensors node, and from the code's point of view we cannot pass the system timer pointer into the Kalman filter, e.g. kalman().) The PID controller runs at around 500 Hz and the Kalman filter at a lower rate; my PWM is 5 kHz. Thanks a lot!
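A small rospy sketch of the two mechanisms this usually boils down to — using the stamp the publisher put in the header (rather than the arrival time) and measuring how stale the data is, plus driving the control loop from a fixed-rate timer. The topic name and message type are placeholders; with use_sim_time both sides share the simulation clock, otherwise the machines' clocks must be synchronized for the latency number to mean anything:

```python
import rospy
from sensor_msgs.msg import Imu   # placeholder message type

latest = {"stamp": None}

def sensor_cb(msg):
    latest["stamp"] = msg.header.stamp
    age = (rospy.Time.now() - msg.header.stamp).to_sec()
    rospy.loginfo_throttle(1.0, "sensor data age: %.4f s", age)

def control_step(event):
    # 500 Hz control loop; use the *stamped* time of the data inside the
    # filter/PID, not the time the callback happened to fire.
    if latest["stamp"] is None:
        return
    # ... run the PID / Kalman update here ...

rospy.init_node("sync_example")
rospy.Subscriber("imu/data", Imu, sensor_cb)
rospy.Timer(rospy.Duration(1.0 / 500.0), control_step)
rospy.spin()
```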
I am trying to develop a line following algorithm where a drone will detect a bounding box and follow what is inside the bounding box. I am filtering all the colors to only see the color white. Once that color is detected, I want the drone to go in a straight line from one end of the box to the other. Maybe it is easier using the draw line function on openCV but I am not sure. Any way, my biggest problem is telling the drone to follow the color or in other words the detected object. I am using this repository from GitHub. Anyway this the code I have so far, and it only follows items that are moving. I need to follow an object that is stationary if that is possible. #include "ardrone/ardrone.h" int main(int argc, char *argv[]) { // AR.Drone class ARDrone ardrone; // Initialize if (!ardrone.open()) { std::cout << "Failed to initialize." << std::endl; return -1; } // Thresholds int minH = 0, maxH = 255; int minS = 0, maxS = 255; int minV = 0, maxV = 255; // XML save data std::string filename("thresholds.xml"); cv::FileStorage fs(filename, cv::FileStorage::READ); // If there is a save file then read it if (fs.isOpened()) { maxH = fs["H_MAX"]; minH = fs["H_MIN"]; maxS = fs["S_MAX"]; minS = fs["S_MIN"]; maxV = fs["V_MAX"]; minV = fs["V_MIN"]; fs.release(); } // Create a window cv::namedWindow("binalized"); cv::createTrackbar("H max", "binalized", &maxH, 255); cv::createTrackbar("H min", "binalized", &minH, 255); cv::createTrackbar("S max", "binalized", &maxS, 255); cv::createTrackbar("S min", "binalized", &minS, 255); cv::createTrackbar("V max", "binalized", &maxV, 255); cv::createTrackbar("V min", "binalized", &minV, 255); cv::resizeWindow("binalized", 0, 0); // Kalman filter cv::KalmanFilter kalman(4, 2, 0); // Sampling time [s] const double dt = 1.0; // Transition matrix (x, y, vx, vy) cv::Mat1f A(4, 4); A << 1.0, 0.0, dt, 0.0, 0.0, 1.0, 0.0, dt, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0; kalman.transitionMatrix = A; // Measurement matrix (x, y) cv::Mat1f H(2, 4); H << 1, 0, 0, 0, 0, 1, 0, 0; kalman.measurementMatrix = H; // Process noise covairance (x, y, vx, vy) cv::Mat1f Q(4, 4); Q << 1e-5, 0.0, 0.0, 0.0, 0.0, 1e-5, 0.0, 0.0, 0.0, 0.0, 1e-5, 0.0, 0.0, 0.0, 0.0, 1e-5; kalman.processNoiseCov = Q; // Measurement noise covariance (x, y) cv::Mat1f R(2, 2); R << 1e-1, 0.0, 0.0, 1e-1; kalman.measurementNoiseCov = R; char textBuffer[80]; cv::Scalar green = CV_RGB(0,255,0); float speed = 0.0; bool learnMode = false; // Main loop while (1) { // Key input int key = cv::waitKey(33); if (key == 0x1b) break; // Get an image cv::Mat image = ardrone.getImage(); // HSV image cv::Mat hsv; cv::cvtColor(image, hsv, cv::COLOR_BGR2HSV_FULL); // Binalize cv::Mat binalized; cv::Scalar lower(minH, minS, minV); cv::Scalar upper(maxH, maxS, maxV); cv::inRange(hsv, lower, upper, binalized); // Show result cv::imshow("binalized", binalized); // De-noising cv::Mat kernel = getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)); cv::morphologyEx(binalized, binalized, cv::MORPH_CLOSE, kernel); //cv::imshow("morphologyEx", binalized); // Detect contours std::vector<std::vector<cv::Point>> contours; cv::findContours(binalized.clone(), contours, cv::RETR_CCOMP, cv::CHAIN_APPROX_SIMPLE); // Find largest contour int contour_index = -1; double max_area = 0.0; for (size_t i = 0; i < contours.size(); i++) { double area = fabs(cv::contourArea(contours[i])); if (area > max_area) { contour_index = i; max_area = area; } } // Object detected if (contour_index >= 0) { // Moments cv::Moments moments = cv::moments(contours[contour_index], 
true); double marker_y = (int)(moments.m01 / moments.m00); double marker_x = (int)(moments.m10 / moments.m00); // Measurements cv::Mat measurement = (cv::Mat1f(2, 1) << marker_x, marker_y); // Correction cv::Mat estimated = kalman.correct(measurement); // Show result cv::Rect rect = cv::boundingRect(contours[contour_index]); cv::rectangle(image, rect, cv::Scalar(0, 255, 0)); } // Prediction cv::Mat1f prediction = kalman.predict(); int radius = 1e+3 * kalman.errorCovPre.at<float>(0, 0); // Calculate object heading fraction float heading = -((image.cols/2)-prediction(0, 0))/(image.cols/2); sprintf(textBuffer, "heading = %+3.2f", heading); putText(image, textBuffer, cvPoint(30,30), cv::FONT_HERSHEY_COMPLEX_SMALL, 0.8, green, 1, CV_AA); // Show predicted position cv::circle(image, cv::Point(prediction(0, 0), prediction(0, 1)), radius, green, 2); //Speed if ((key >= '0') && (key <= '9')) { speed = (key-'0')*0.1; //printf("speed = %3.2f\n", speed); } sprintf(textBuffer, "speed = %3.2f", speed); putText(image, textBuffer, cvPoint(30,60), cv::FONT_HERSHEY_COMPLEX_SMALL, 0.8, green, 1, CV_AA); // Drone control double vx = 0.0, vy = 0.0, vz = 0.0, vr = 0.0; // Auto-follow vx = speed; vr = -heading; if (key == 0x260000) vx = 1.0; if (key == 0x280000) vx = -1.0; if (key == 0x250000) vr = 1.0; if (key == 0x270000) vr = -1.0; if (key == 'q') vz = 1.0; if (key == 'a') vz = -1.0; ardrone.move3D(vx, vy, vz, vr); // See you ardrone.close(); return 0; } I also tried using PID's like so. // Find largest contour int contour_index = -1; double max_area = 0.0; for (int i = 0; i < (int)contours.size(); i++) { double area = fabs(cv::contourArea(contours[i])); if (area > max_area) { contour_index = i; max_area = area; } } // A marker detected if (contour_index >= 0) { // Moments cv::Moments moments = cv::moments(contours[contour_index], true); marker.y = (int)(moments.m01 / moments.m00); marker.x = (int)(moments.m10 / moments.m00); // Show result cv::Rect rect = cv::boundingRect(contours[contour_index]); cv::rectangle(image, rect, cv::Scalar(0, 255, 0)); //cv::drawContours(image, contours, contour_index, cv::Scalar(0,255,0)); } // Take off / Landing if (key == ' ') { if (ardrone.onGround()) ardrone.takeoff(); else ardrone.landing(); } // Move using keyboard double vx = 0.0, vy = 0.0, vz = 0.0, vr = 0.0; if (key == 0x260000) vx = 1.0; if (key == 0x280000) vx = -1.0; if (key == 0x250000) vr = 1.0; if (key == 0x270000) vr = -1.0; if (key == 'q') vz = 1.0; if (key == 'a') vz = -1.0; // Switch tracking ON/OFF static int track = 0; if (key == 't') track = !track; cv::putText(image, (track) ? "track on" : "track off", cv::Point(10, 20), cv::FONT_HERSHEY_SIMPLEX, 0.5, (track) ? 
cv::Scalar(0, 0, 255) : cv::Scalar(0, 255, 0), 1, CV_AA); // Marker tracking if (track) { // PID gains const double kp = 0.001; const double ki = 0.000; const double kd = 0.000; // Errors double error_x = (binalized.rows / 2 - marker.y); // Error front/back double error_y = (binalized.cols / 2 - marker.x); // Error left/right // Time [s] static int64 last_t = 0.0; double dt = (cv::getTickCount() - last_t) / cv::getTickFrequency(); last_t = cv::getTickCount(); // Integral terms static double integral_x = 0.0, integral_y = 0.0; if (dt > 0.1) { // Reset integral_x = 0.0; integral_y = 0.0; } integral_x += error_x * dt; integral_y += error_y * dt; // Derivative terms static double previous_error_x = 0.0, previous_error_y = 0.0; if (dt > 0.1) { // Reset previous_error_x = 0.0; previous_error_y = 0.0; } double derivative_x = (error_x - previous_error_x) / dt; double derivative_y = (error_y - previous_error_y) / dt; previous_error_x = error_x; previous_error_y = error_y; // Command velocities vx = kp * error_x + ki * integral_x + kd * derivative_x; vy = kp * error_y + ki * integral_y + kd * derivative_y; vz = 0.0; vr = 0.0; std::cout << "(vx, vy)" << "(" << vx << "," << vy << ")" << std::endl; } // Move ardrone.move3D(vx, vy, vz, vr);
I am reading about the condition number, which is considered a measure of the manipulability of a robotic arm. It is defined as $$ k = \parallel J \parallel \parallel J^{-1} \parallel $$ where $$ \parallel J \parallel = \sqrt {tr(JNJ^{T})} $$ How can this be a unique value for a given end-effector location, when the manipulator can reach a point in the workspace with different joint variable values, which in turn change the Jacobian? Reference paper: "Comparative study of performance indices for fundamental robot manipulators", Serdar Kucuk, Zafer Bingul.
I am trying to formulate an optimization problem for determining the link lengths of the 3R manipulator shown in the picture below. The constraints are: the robot arm should be able to reach the point x = 100, y = 0; link 3 should sweep at least a 60° angle at that end point (i.e. min Φ = 240°, max Φ = 300°); and $ 20°\leq\theta_{1}\leq160°$, $ 200°\leq\theta_{2}\leq340°$, $ 200°\leq\theta_{3}\leq340°$. The objective is to minimize $l_{1} + l_{2} + l_{3}$. How can I define the 2nd constraint (the minimum sweep angle) mathematically? Reference for 3R robot kinematics: http://www.seas.upenn.edu/~meam520/notes/planarkinematics.pdf Current formulation: Minimize $f(x) = l_{1} + l_{2} + l_{3}$ subject to $l_{1}\cos(\theta_{1})+l_{2}\cos(\theta_{1}+\theta_{2})+l_{3}\cos(\theta_{1}+\theta_{2}+\theta_{3}) = 100$, $l_{1}\sin(\theta_{1})+l_{2}\sin(\theta_{1}+\theta_{2})+l_{3}\sin(\theta_{1}+\theta_{2}+\theta_{3}) = 0$, $20°\leq\theta_{1}\leq160°$, $200°\leq\theta_{2}\leq340°$, $200°\leq\theta_{3}\leq340°$.
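Not an answer to the sweep constraint itself, but as a starting point, here is the formulation exactly as currently written, coded with SciPy's SLSQP (the initial guess and the lower bounds on the link lengths are my own assumptions). One way to then encode the sweep requirement might be to repeat the two position constraints for several sampled orientations Φ = θ₁+θ₂+θ₃ between 240° and 300°, each sample with its own set of joint angles:

```python
import numpy as np
from scipy.optimize import minimize

deg = np.pi / 180.0

def fk(v):
    """Planar 3R forward kinematics; v = [l1, l2, l3, t1, t2, t3]."""
    l1, l2, l3, t1, t2, t3 = v
    x = l1 * np.cos(t1) + l2 * np.cos(t1 + t2) + l3 * np.cos(t1 + t2 + t3)
    y = l1 * np.sin(t1) + l2 * np.sin(t1 + t2) + l3 * np.sin(t1 + t2 + t3)
    return x, y

objective = lambda v: v[0] + v[1] + v[2]                 # l1 + l2 + l3

constraints = [
    {"type": "eq", "fun": lambda v: fk(v)[0] - 100.0},   # x = 100
    {"type": "eq", "fun": lambda v: fk(v)[1]},           # y = 0
]
bounds = [(1e-3, None)] * 3 + [(20 * deg, 160 * deg),
                               (200 * deg, 340 * deg),
                               (200 * deg, 340 * deg)]

x0 = np.array([40.0, 40.0, 40.0, 90 * deg, 270 * deg, 270 * deg])   # arbitrary guess
res = minimize(objective, x0, method="SLSQP", bounds=bounds, constraints=constraints)
print(res.x, res.fun)
```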
What technology do robot vacuum cleaners use to locate their charger when the battery is low?
I'm using monocular visual odometry. As the absolute scale can't be recovered in this case, we instead compute a relative scale for subsequent transformations, as in this tutorial. To do this, triple matches across three frames are required. We then triangulate two $3D$ points $X_{m}$ and $X_{n}$ from images $\{i,i+1\}$ and $\{i-1,i\}$, and the relative scale can be determined from the distance ratio between the point pairs in subsequent image pairs as follows: $r=\frac{\parallel X_{m,\{i,i+1\}}-X_{n,\{i,i+1\}}\parallel}{\parallel X_{m,\{i-1,i\}}-X_{n,\{i-1,i\}}\parallel}$ My question concerns the coordinate system of the triangulated $3D$ points: do they have to be expressed in the coordinate system of the first, second or third frame, or does it not matter since we are only using the norm? The triangulation method that I'm using outputs a $3D$ point expressed in the first viewpoint. How do I convert it to another viewpoint?
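On the last point, a sketch of the usual change of frame, under the assumption that $(R, t)$ is the relative pose that maps coordinates expressed in the first viewpoint into the second:

$$ X^{(2)} = R\,X^{(1)} + t, \qquad X^{(1)} = R^{\top}\!\left(X^{(2)} - t\right) $$

Since this is a rigid transform it preserves distances, so $\lVert X_m - X_n\rVert$ is the same whichever of the two frames both points are expressed in — what matters is only that both points of a pair are triangulated in the same frame and at a consistent scale.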
What I've been noodling on is how to convert that energy into thrust to create an all electric model rocket. I got as far as realizing that dumping electricity from a battery into a capacitor might result in fast enough power discharge, but I am stuck on converting that electricity into thrust for a model rocket. Any ideas? This is related to the following Stack Exchange question: Are LiPo really 100 times more energy dense than model rockets?
I have written MATLAB code for the inverse kinematics of a 3R robotic arm, which returns the joint angles for given link lengths and a given end-effector position and orientation. But if the location of the point is outside the workspace, or it can't be reached with a certain orientation, the program gives an error. Is there any way to check whether a particular point is reachable with a given orientation?
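Assuming a planar 3R arm with links $l_1, l_2, l_3$ and no joint limits, one standard pre-check is to subtract the last link from the target to get the "wrist centre" and test whether the first two links can reach it:

$$ W = \begin{bmatrix} x - l_3\cos\phi \\ y - l_3\sin\phi \end{bmatrix}, \qquad |l_1 - l_2| \;\le\; \lVert W \rVert \;\le\; l_1 + l_2 $$

If the inequality holds, the point $(x, y)$ is reachable with orientation $\phi$; joint limits, if any, would still have to be checked on the resulting angles.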
The length of the last link (1.73) of my model (of a human finger) is not represented in any of the parameters that I calculated using the Denavit-Hartenberg algorithm, which I think can't be right. I suppose it should be represented in either a₃ or α₃, but those are both 0. It should be noted that the model has no end-effector/tool, but I aligned the z₃ with the approach vector anyway. If anyone has an idea where I'm messing up, that would be much appreciated! The model in question:
It is clear that it is not a good idea to resample in a particle filter if the robot is not moving (there is no action), as the particles will converge towards a single particle. However, the Probabilistic Robotics book briefly mentions that "It is usually a good idea to stop measurement integration if the robot does not move", and I am not able to fully understand why. My only guess is that, if the robot does not move, the sampling step (not resampling) will give the same particles. So is there any other negative effect besides wasting computational power? Because in a world with lots of noise, I think it might be helpful to keep the sampling running in order to get more proper weights over time.
I have this exercise. The dynamic model is $B(q)\ddot{q}+S(q,\dot{q})\dot{q}=\tau+\tau_k$, where $B(q)$ is the inertia matrix, $S(q,\dot{q})\dot{q}$ are the centrifugal and Coriolis terms, $\tau$ is the actuator input and $\tau_k=J^T(q)F$ is the torque imposed by the external force. Since at rest $\ddot{p}=J(q)\ddot{q}$, $S(q,\dot{q})\dot{q}=0$ and we have no actuator input, substituting $\ddot{q}$ from the dynamic model we have $\ddot{p}=J(q)B^{-1}(q)J^T(q)F$. For point 1, I assume that, by Newton's second law, the end-effector accelerates as in case B. Now, in point 3, if the masses of the links are non-uniform, does Newton's second law still apply in the same way? I guess I can choose an inertia matrix $B(q)$ with suitable centers of mass of the three links such that, at the rest configuration $q_{eq}$, $\ddot{p}=J(q_{eq})B^{-1}(q_{eq})J^T(q_{eq})F=\begin{bmatrix} p_1 \\ 0 \\ 0\end{bmatrix}$, and so the end effector accelerates as in case D, right?
For some months now, on many websites, I have been reading about and seeing images of the warehouses of Alibaba (the Chinese e-commerce giant). One example is the following article: http://www.dailymail.co.uk/news/article-4754078/China-s-largest-smart-warehouse-manned-60-robots.html It seems interesting, and it looks like essentially the same thing Amazon is doing with its Kiva Systems. The equivalent of Kiva Systems here should be a company called Quicktron, but I am not able to find anything about it on the internet. Does anyone know anything about this company? A website?
I have been searching for a solution to the problem mentioned above, but nothing was accurate enough. I am trying to find the time-optimal trajectory for an object from an initial point A to a final point B. The velocity at those points is 0, and the maximum velocity and acceleration are v_max and a_max. How can I solve this?
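For the 1-D case with symmetric limits and rest-to-rest boundary conditions, the time-optimal solution is the classic bang-bang / trapezoidal velocity profile, and the minimum time has a closed form. A small sketch (the example numbers are arbitrary):

```python
import math

def min_time(d, v_max, a_max):
    """Minimum time to travel distance d, starting and ending at rest,
    with |v| <= v_max and |a| <= a_max (1-D, symmetric limits)."""
    d = abs(d)
    if d <= v_max**2 / a_max:
        # Triangular profile: v_max is never reached
        return 2.0 * math.sqrt(d / a_max)
    # Trapezoidal profile: accelerate, cruise at v_max, decelerate
    return d / v_max + v_max / a_max

print(min_time(10.0, 2.0, 1.0))   # hypothetical numbers -> 7.0 s
```

If the path from A to B is a straight line and the limits apply along that line, the same profile can be applied to the path parameter directly.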
Background: I am working on a design in which an autonomous RC car uses a LIDAR sensor to navigate a course. The primary LIDAR sensor returns a 270 array of points, scanning in a line-like fashion, each point representing the distance at a particular angle. These can easily be converted from polar to cartesian coordinates. Question: Given a set of points, how would you estimate the number of lines that exist within this plot and the slope of each line? An explanation of simple linear regression is not required; an explanation of how to detect discontinuities in the data would be helpful.
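One simple starting point, before reaching for split-and-merge, RANSAC, or a Hough transform, is to exploit the fact that the scan is angularly ordered: break it wherever adjacent points jump apart by more than a threshold, then fit a line to each piece. A sketch with placeholder threshold values; note that a y = mx + b fit degenerates for near-vertical walls, which an (r, θ) line parameterization or total least squares would avoid:

```python
import numpy as np

def segment_and_fit(points, jump_thresh=0.3, min_points=5):
    """Split an angularly ordered scan (Nx2 cartesian points) at range
    discontinuities and fit a line to each segment.

    Returns a list of (slope, intercept) pairs, one per detected line.
    jump_thresh (metres) and min_points are tuning values, not standards.
    """
    points = np.asarray(points, dtype=float)
    gaps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    breaks = np.where(gaps > jump_thresh)[0] + 1     # indices where a new segment starts
    lines = []
    for seg in np.split(points, breaks):
        if len(seg) < min_points:
            continue                                 # too few points to trust a fit
        slope, intercept = np.polyfit(seg[:, 0], seg[:, 1], 1)
        lines.append((slope, intercept))
    return lines
```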
Hi, I'm designing a robotic arm based on this NOVABOT idea; I only want to replace its structural parts with aluminum tubes. In this design's 3D model, it uses the HS-311 servo, which has a max torque of 3.7 kg·cm, and it is made of some plastic material. I made my calculations following this URL and made an Excel sheet that I attach here. The masses of my design have been taken from the iProperties of Autodesk Inventor, so they are in mass units, not weight. I'm also neglecting the masses of the joints (m5, m6) to get a first idea of the values, because I don't have the selected motors yet. My problem is that I think I'm missing something here, because it results in very high-torque motors for this simple design. Here is my Excel sheet:
I am trying to run the "PX4 SITL simulation" using Gazebo 7 on my machine with "Ubuntu 14.04 LTS", using these commands: mkdir -p ~/src cd ~/src git clone https://github.com/PX4/Firmware.git cd Firmware git submodule update --init --recursive But when I enter the next command, i.e.: make posix_sitl_default gazebo it gives an error --> CMake Error: The source directory "/home/arpit/src/Firmware/build/posix_sitl_default/-Wno-deprecated" does not exist. Specify --help for usage, or press the help button on the CMake GUI. /bin/sh: 1: cd: can't cd to /home/arpit/src/Firmware/build/posix_sitl_default make: *** [posix_sitl_default] Error 2 Any solutions?
I'm building my first robotic arm and I am new to robotics generally, but I have tried to do some research. I need to build a robotic arm that will be relatively lightweight and will need to grab things off of a shelf, place them higher up, and then return them later. I have attached 3 photos showing the process I am imagining. The first shows the folded arm going to grab the object from the shelf, the second shows the arm extending up, and the third shows the arm completely extended and placing the object on top of the shelf. The whole arm will be about 30 inches long and weigh about 3-4 pounds including the object it will be grabbing. I know that stepper motors are the best route to go, but what kind of holding torque will I need for that weight, as well as for gripping the object, which weighs only 120-200 grams? Also, bipolar or unipolar? For a controller I am sure I need an Arduino, but will the Uno be adequate to simultaneously control 6 motors? I don't think so. Any recommendations for one that will, or for how I can get multiple motors to move in sync? The arm's motions will be repetitive, as there will only be so many places on the shelf for the arm to go, say 200 total. The command as to which spot will be sent remotely, and I know Arduino has WiFi options. I will also need Darlington arrays, right? One for each motor? Am I missing anything in terms of hardware? Any help you can give me would be greatly appreciated.
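A back-of-the-envelope worst case for the base/shoulder joint, assuming roughly 1.6 kg of arm acting at mid-length and a 0.2 kg payload at the tip of a fully horizontal 0.76 m (30 in) arm — both assumptions of mine, not measurements:

$$ \tau_{base} \;\approx\; (1.6)(9.81)(0.38) + (0.2)(9.81)(0.76) \;\approx\; 6.0 + 1.5 \;\approx\; 7.5\ \text{N·m} \;\approx\; 76\ \text{kg·cm} $$

Joints further out see much less, and any gearbox or lead screw between motor and joint divides the torque the motor itself must hold, so the required motor torque depends heavily on the transmission you choose.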
If my state vector is just a representation of the error state of a quaternion, represented as $[\delta \bf{q} ]$, which is a 3x1 vector, and my external update is from an accelerometer, how would I compute the Jacobian $\bf{H}$ matrix? One implementation that I've found uses $$ \bf{H} = [R^w_b g]_\times $$ where $\times$ represents the skew-symmetric matrix, $\textbf{g} = [0, 0, 9.80665]^T$, and $\textbf{R}^w_b$ is the rotation matrix obtained from the true-state quaternion. How does $z = Hx$ make sense in this formulation? Thank you!
Dear Robotics Community, I am participating in a robotics tourney that will start in a few weeks. The challenge I signed up for is to deliver and dump ping pong balls into a box. I already have my robot and the ball container built, but I need some help with coding it. I just need help coding an SR04 ultrasonic sensor for my Arduino robot. I want it so that when it senses the box in front of it, the robot turns on a motor, and stops when the box is a certain distance away from the robot. I also want it to back up and go back to its starting position when it is done delivering the payload. I was told by my coach that a "boolean variable" would do the job, but I do not know how to program this either. Thank you! (Here is the robot platform I am using: https://www.elegoo.com/product/elegoo-uno-project-smart-robot-car-kit-v1-0/)
I have been given a Jacobian for a 3 revolute joint robot. I need to obtain the origins of each of the joint. Origins of the joints can be obtained once I get $T^0_1$, $T^0_2$, $T^0_3$ and $T^0_4$. 4th frame is my end effector. Is there any way to calculate the transformation matrices from the Jacobian? I have gone through literature extensively and I can get the Jacobian from the transformation matrices using the below relations The Jacobian is $$J = \begin{bmatrix} J_v\\ J_w \end{bmatrix}$$ where for revolute joints, $$ J_{v_i} = z_{i-1} \times (o_n - o_{i-1}) \\ J_{w_i} = z_{i-1}$$ Is the reverse possible i.e, obtaining the transformation matrices when I have the Jacobian? Is there any way to obtain just the origins of the frames $ o_1, o_2, o_3$ ($o_0 = \begin{bmatrix} 0 &0 &0 \end{bmatrix}^T$ ) from the given Jacobian? The reverse cross product method does not give me the complete co-ordinates of the origin.
I would like to know if there is any software with which I can 3D-model my robot and then insert that model into some sort of simulation software where I can run AI-related code (in Python). Any suggestions would be appreciated.
I am working with a Nao robot. One of the things I want to do is the following: Take an image of geometrical objects in front of the Nao (done) Extract features from the objects in the image, such as x, y, color, etc. (done) Make the Nao point to one of the objects (TO DO) So, what I need is to transform a 2D coordinate in the image plane to a 3D coordinate in the robot's coordinate system. I have some idea of how to do this, but I am not sure if it is correct. I start by transforming the 2D image coordinate to the 3D coordinate system that originates in the camera of the Nao. For this, I use the height and width (in pixels) of the image ($r_w$ and $r_h$) and the horizontal and vertical camera opening ($\theta_v$ and $\theta_h$). This transformation is: $$ v_c = \left( \begin{matrix} 1 \\ -\frac{x_i}{r_h} \tan{\frac{\theta_h}{2}} \\ \frac{y_i}{r_w} \tan{\frac{\theta_v}{2}} \end{matrix} \right) $$ Then, I need to transform this vector using the orientation of the head of the Nao. The resulting coordinate system is parallel to the entire robot's coordinate system. The rotation of the camera is given by $R_c$, so the transformation is: $$ v_t = R_c v_c $$ Finally, I project this into the robot's coordinate system. The Nao can use 2 coordinate systems, one that originates on the ground between its legs (FRAME_ROBOT) or one that originates in its chest (FRAME_TORSO). Which one is used has no real importance for me. Let's say the offset between the coordinate systems is given by $t_c$. The transformation is: $$ v_r = v_t + t_c $$ So, given a position in the image $(x_i, y_i)$, I get this position in the robot's coordinate system. Is this a correct approach?
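For whatever it's worth, here is a direct NumPy transcription of the three steps described above, with all numbers as placeholders (image size, field of view, head rotation and camera offset would come from the Nao's API in practice); it adds nothing beyond the equations, but it makes the order of operations explicit:

```python
import numpy as np

def pixel_to_robot(x_i, y_i, r_w, r_h, theta_h, theta_v, R_c, t_c):
    """Steps from the question: pixel -> camera-frame vector -> rotate -> offset."""
    v_c = np.array([1.0,
                    -(x_i / r_h) * np.tan(theta_h / 2.0),
                    (y_i / r_w) * np.tan(theta_v / 2.0)])
    v_t = R_c @ v_c           # orient into a frame parallel to the robot frame
    return v_t + t_c          # shift to the robot frame origin

# Placeholder values, not Nao specifics
R_c = np.eye(3)
t_c = np.array([0.0, 0.0, 0.45])
p = pixel_to_robot(10, -20, 320, 240, np.radians(60.0), np.radians(45.0), R_c, t_c)
```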
How can I track two set-points in a system using only one output? Example: make a motor track a position x while moving at a speed y, when I only have voltage as an output and speed and position as feedback. Can this be done with PID, or is there a need for more complex methods?