I am trying to implement GraphSLAM from this tutorial, The GraphSLAM Algorithm with Applications to Large-Scale Mapping of Urban Structures, and I have some doubts from studying the paper. I hope people who work in the same field can resolve them. As I don't know how to enter the notation in the StackExchange editor, I have uploaded a picture with my queries. The questions, written in the image, concern lines 11, 14 and 19 of the algorithm: What is the dimension of the noise covariance matrix? If it is 3x3, then the matrix dimensions are mismatched at lines 18 and 19. How do I calculate $\mu_{j,x}$ and $\mu_{j,y}$? What is the value of $z_t^i$? I have some other doubts: Why are two for loops needed, one at line 10 and another at line 12? Are they used to change the value of $Q_t$ in each iteration? How can all measurements and observed features be different from each other? I am using the UTIAS Multi-Robot Cooperative Localization and Mapping Dataset, which has a sensor measurement file and an odometry measurement file. They are two separate files and their timestamps are different: roughly every 100 odometry records there is one record that observes a feature. How can I relate these datasets to this algorithm?
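To make the last question concrete: one common way to relate the two files is to merge them by timestamp and apply motion updates from the odometry stream until the next measurement record arrives. The sketch below only illustrates that bookkeeping, not GraphSLAM itself; the file names, column layout and the two update functions are placeholders.

```python
# Minimal sketch: interleave odometry and measurement records by timestamp.
# Assumed UTIAS-style layout: odometry rows = [t, v, w], measurement rows = [t, id, range, bearing].
import numpy as np

odom = np.loadtxt("odometry.dat")      # hypothetical file name
meas = np.loadtxt("measurements.dat")  # hypothetical file name

i, j = 0, 0
while i < len(odom):
    t = odom[i, 0]
    # Consume every measurement whose timestamp has been reached by the odometry stream.
    while j < len(meas) and meas[j, 0] <= t:
        add_measurement_constraint(meas[j])  # placeholder for the GraphSLAM measurement update
        j += 1
    add_motion_constraint(odom[i])           # placeholder for the GraphSLAM motion update
    i += 1
```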
What kind of camera models can be used for lenses with a field of view of more than 180 degrees? Is it possible, and if not, why? Are fisheye lenses suitable for this purpose?
I'm new to the Roomba Create 2. I just want a program that sends and receives data from the bumper sensor only. I tried this code on an Arduino Uno:

#include <SoftwareSerial.h>
#include "roombaDefines.h"

int rxPin = 10;
int txPin = 11;
int sensorbytes;
SoftwareSerial Roomba(rxPin, txPin);

void setup()
{
  Roomba.begin(19200);
  Serial.begin(9600);
  pinMode(ddPin, OUTPUT);
  wakeUp();     // Wake-up Roomba
  startSafe();  // Start Roomba in Safe Mode
  setPowerLED(128, 255);
  delay(1000);
  setPowerLED(128, 0);
  delay(1000);
}

void loop()
{
  Roomba.write(142); // request a sensor packet
  Roomba.write(7);   // packet 7: bumps and wheel drops
  delay(100);
  sensorbytes = Roomba.read();
  if (sensorbytes != 0) {
    setPowerLED(255, 255);
    Serial.println(sensorbytes);
  } else {
    setPowerLED(255, 0);
    Serial.println("not_Press");
  }
  delay(100);
}

The code works fine for a period of time, but after about two minutes I get some weird data from the robot (it shows in my serial monitor), and it makes my program stall and crash. So I tried this loop to check where the data comes from:

void loop()
{
  int i = 0;
  while (Roomba.available()) {
    byte c = Roomba.read();
    sensorbytes = c;
    Serial.println(sensorbytes);
    i++;
  }
  delay(150);
}

to check whether the robot sends data over the serial port on its own. Even if I don't command it, I get a packet of data roughly every two minutes: 32 32 32 32 70 108 97 115 104 32 67 82 67 32 115 117 99 148 205 205 233 32 48 120 48 32 40 48 120 48 41 10 13 and it is always the same. So I think the Roomba Create 2 sends data over the serial port every two minutes. Can I avoid that data?
I want to get telemetry data from my flight controller (Matek F405-CTR) and send control commands to it via the MSP protocol. I've already configured the UART3 port on the Matek in the INAV configurator (see the picture). Then I connected a USB-serial converter (CP2102) to the Matek on the UART3 port for testing, so the Matek controller shows up as COM3 on my PC and the CP2102 as COM4. But when I try to run a simple test from the pyMultiWii library I get nothing from the Matek controller or from the CP2102. I've tried this test with both the COM3 and COM4 ports.

#!/usr/bin/env python
"""show-attitude.py: Script to ask the MultiWii Board attitude and print it."""

from pymultiwii import MultiWii
from sys import stdout

if __name__ == "__main__":
    board = MultiWii("COM4")
    #board = MultiWii("COM3")
    try:
        while True:
            board.getData(MultiWii.ATTITUDE)
            #print board.attitude  # uncomment for regular printing
            # Fancy printing (might not work on windows...)
            message = "angx = {:+.2f} \t angy = {:+.2f} \t heading = {:+.2f} \t elapsed = {:+.4f} \t".format(
                float(board.attitude['angx']),
                float(board.attitude['angy']),
                float(board.attitude['heading']),
                float(board.attitude['elapsed']))
            stdout.write("\r%s" % message)
            stdout.flush()
            # End of fancy printing
    except Exception as error:
        print("Error on Main: " + str(error))

Maybe I am doing something wrong? I've already read this post, but it doesn't help me. Any advice will be appreciated!
Last month I visited a quadrotor laboratory ($20\,m^2$ approx). They used both OptiTrack and Vicon systems. However, they told me that Vicon costs 10 times as much as OptiTrack. Considering that any motion-capture system relies on the same principle of IR markers, what might be the reason for this difference in price?
I am trying to find a solution in $\mathbb{R}^2 \times S^1$ (x, y, orientation) with obstacles (refer to the image) using RRT* and the Dubins model. The code takes a lot of time to find a suitable random sample (x, y, theta) such that a Dubins path can be connected between the two points without the vehicle (a rectangle) colliding with any of the obstacles. Only about 1 in 100,000 random samples has the correct angle so that the vehicle's path is collision-free, which makes the code very slow even when my computer is at full processing power. None of my internal routines take much time; I timed all of them, and only the search for that 1-in-100,000 sample makes the code take so long. I tried halving my discretization of the space, but the problem still exists.
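For reference, the loop described above (sample a pose, connect it with a Dubins path, reject on collision) can be written as the sketch below; `nearest_node`, `dubins_path` and `collides` are placeholders for the planner's own primitives, and the workspace bounds are made up.

```python
import math
import random

def sample_reachable_pose(obstacles, turning_radius, max_tries=100000):
    """Rejection-sample an (x, y, theta) pose that can be connected collision-free."""
    for _ in range(max_tries):
        x = random.uniform(0.0, 10.0)            # assumed workspace bounds
        y = random.uniform(0.0, 10.0)
        theta = random.uniform(-math.pi, math.pi)
        path = dubins_path(nearest_node(x, y), (x, y, theta), turning_radius)  # placeholder steer
        if not collides(path, obstacles):                                      # placeholder check
            return (x, y, theta), path
    return None, None
```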
I am looking at the ServoCity Bogie platform - it comes with 6 motors. The listed motor specs are: Suggested Voltage: 4.5 VDC; No Load Current: 190 mA; Max. Load Current: 250 mA. I was planning on using Adafruit's TB6612-based solution - it can handle "Power Supply current VM=15V max; Output current - IOUT=1.2 A (ave) / 3.2 A (peak)". My questions are: Can I combine each side's 3 motors onto 1 H-bridge? I assume so, and I would wire them in parallel. Is it then simply 3 x 250 mA = 750 mA draw? How do I calculate the battery voltage requirement? A bonus question - the Feather board linked above has two of these chips; can I bridge the H-bridge outputs to allow for double the output current? H-bridge => 2 outputs in parallel => 3 motors in parallel. If so, this would allow for slightly more powerful motors in the future.
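As a rough sanity check on the numbers (back-of-the-envelope arithmetic only; the driver voltage drop and stall factor below are placeholders to be read off the TB6612 and motor datasheets):

```python
# Per-side draw with three motors wired in parallel on one H-bridge channel.
motors_per_channel = 3
max_load_current = 0.25                 # A, from the motor spec
channel_current = motors_per_channel * max_load_current   # 0.75 A at max load
# Stall current is normally well above the max-load figure (assumption), so check the peak rating too.
assumed_stall_factor = 2.0
peak_estimate = channel_current * assumed_stall_factor

motor_voltage = 4.5                     # V, suggested motor voltage
driver_drop = 1.0                       # V, placeholder: use the drop from the TB6612 datasheet
battery_voltage_needed = motor_voltage + driver_drop
print(channel_current, peak_estimate, battery_voltage_needed)
```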
I'm having some difficulty understanding the concept of teleoperation in ROS, so I'm hoping someone can clear some things up. I am trying to control a Baxter robot (in simulation) using an HTC Vive device. I have a node (publisher) which successfully extracts PoseStamped data (containing pose data referenced to the lighthouse base stations) from the controllers and publishes it on separate topics for the right and left controllers. So now I wish to create the subscribers which receive the pose data from the controllers and convert it to a pose for the robot. What I'm confused about is the mapping... after reading documentation regarding Baxter and robotic transformations, I don't really understand how to map human poses to Baxter. I know I need to use IK services, which essentially calculate the joint configuration required to achieve a pose (given the desired location of the end effector). But it isn't as simple as just plugging the PoseStamped data from the node publishing controller data into the ik_service, right? Human and robot anatomy are quite different, so I'm not sure if I'm missing a vital step. Looking at other people's example code for doing the same thing, I see that some have created a 'base'/'human' pose which hard-codes coordinates for the limbs to mimic a human. Is this essentially what I need? Any insight is very much appreciated!
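A skeleton of the subscriber side might look like the sketch below. The topic name, the scale/offset numbers and `request_ik_and_move` are all placeholders (the scale/offset plays the role of the hard-coded 'human base pose' mentioned above); this is not Baxter's actual API.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped

def controller_callback(msg):
    # Map the controller pose (lighthouse frame) into a pose in the robot's base frame.
    target = PoseStamped()
    target.header.frame_id = "base"
    target.pose.position.x = 0.5 * msg.pose.position.x + 0.6   # hand-tuned placeholder scale/offset
    target.pose.position.y = 0.5 * msg.pose.position.y
    target.pose.position.z = 0.5 * msg.pose.position.z + 0.3
    target.pose.orientation = msg.pose.orientation
    request_ik_and_move(target)   # placeholder: call the IK service, then command the limb

if __name__ == "__main__":
    rospy.init_node("vive_teleop_left")
    rospy.Subscriber("/vive/left_controller/pose", PoseStamped, controller_callback)  # assumed topic
    rospy.spin()
```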
Suppose we have many images of the same scene (with known poses already, using for example a GPS), and the same feature appears in more than 2 images. How do I use all the available information to compute the 3D reconstruction of the scene as precisely as possible? (Actually, I want a dense map, but let's start with a single point.) For 2 images it's a least-squares problem solved with the OpenCV function cv2.triangulatePoints, but for many images, how do I find the best estimate? Any ideas on the subject? [no problem with dense math]
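For a single point seen in N views, the standard linear (DLT) approach stacks two rows per view from the constraint $x \times (PX) = 0$ and takes the singular vector of the smallest singular value; a bundle-adjustment-style refinement of the reprojection error can then polish the result. A minimal numpy sketch, assuming 3x4 projection matrices and 2D observations in the matching image coordinates:

```python
import numpy as np

def triangulate_nviews(proj_matrices, points_2d):
    """Linear (DLT) triangulation of one 3D point from N views.
    proj_matrices: list of 3x4 arrays; points_2d: list of (u, v) observations."""
    A = []
    for P, (u, v) in zip(proj_matrices, points_2d):
        A.append(u * P[2, :] - P[0, :])
        A.append(v * P[2, :] - P[1, :])
    A = np.vstack(A)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize
```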
I'm building an omni-directional robot base and want to add some kind of low-level obstacle detection. The complete robot will have LIDAR and a camera for navigation, but I'd like to add something at the low level to stop the motors in case the high-level navigation fails or crashes. The issue is that because the robot is omni-directional, the sensors have to cover all directions, which even with cheap sonar sensors starts to get pretty pricey. I was thinking of adding pressure-sensor bumpers similar to what Kiva/Amazon uses: Meet the drone that already delivers your packages, Kiva robot teardown. It's a neat idea: just a flexible tube attached to a pressure sensor, and if anything bumps into the tube you get a small pressure change in the tube. This means I could cover the entire robot with a ring of tube and use a single pressure sensor! The main issue I have is figuring out what type of pressure sensor to use. The Kiva robot appears to have a single pressure sensor connected to one end of the tube. I found this example, Air pressure based bumpers by Axbri on Lets Make Robots, which uses a pair of differential pressure sensors. The main question is about the type of sensor: should it be absolute, gauge or differential (as in the LMR example)? And roughly how would I calculate the pressure range required for the sensor? I guess the static pressure would be based on the length and diameter of the tube, and the dynamic pressure (caused by the obstacle) would also depend on the size of the obstacle and how much the tube is compressed?
I have gone through The GraphSLAM Algorithm with Applications to Large-Scale Mapping of Urban Structures and implemented the code for GraphSLAM. The fundamental formula of GraphSLAM is $$\mu = \Omega^{-1}\xi$$ When I invert $\Omega$ I get an error that the matrix is singular. The document gives some hints on how to avoid this error, but I failed to understand them. On the eighth page of the document - on page 410 - it is discussed in section 4, The GraphSLAM Algorithm. If anybody understands it, please help me understand. From the text: In particular, line 2 in GraphSLAM_linearize initializes the information elements. The “infinite” information entry in line 3 fixes the initial pose $x_0$ to (0 0 0)$^T$. It is necessary, since otherwise the resulting matrix becomes singular, reflecting the fact that from relative information alone we cannot recover absolute estimates. I added my code here for better understanding:

import java.io.IOException;
import java.io.File;
import java.util.Scanner;
import org.ujmp.core.Matrix;
import org.ujmp.core.SparseMatrix;

public class Test1 {
    public static void main(String args[]) throws IOException {
        Matrix omega = SparseMatrix.Factory.zeros(5, 5);
        Matrix Xi = SparseMatrix.Factory.zeros(5, 1);
        int i = 0, i1 = 0, k1 = 0, k2 = 0, l = 0, l1 = 0;
        double[] timex = new double[5];
        double[] forwardx = new double[5];
        double[] angularx = new double[5];
        double[] x1 = new double[5];
        double[] y1 = new double[5];
        double[] theta1 = new double[5];
        double[] landx = new double[2];
        double[] landy = new double[2];
        double[] timeya = new double[2];
        double[] codea = new double[2];
        double[] rangea = new double[2];
        double[] bearinga = new double[2];

        Scanner x = new Scanner(new File("/home/froboticscse/IdeaProjects/UJMPtest/src/main/java/a.txt"));
        Scanner y = new Scanner(new File("/home/froboticscse/IdeaProjects/UJMPtest/src/main/java/b.txt"));

        // Read odometry (time, forward velocity, angular velocity) and integrate a crude pose
        while (x.hasNext()) {
            double time = x.nextDouble();
            double forward = x.nextDouble();
            double angular = x.nextDouble();
            timex[i] = time;
            forwardx[i] = forward;
            angularx[i] = angular;
            x1[i] = ((forwardx[i] * 0.006 + Math.cos(0 + (angularx[i] * 0.006) / 2)));
            y1[i] = ((forwardx[i] * 0.006 + Math.sin(0 + (angularx[i] * 0.006) / 2)));
            theta1[i] = (angularx[i] * 0.006);
            i++;
        }

        // Read measurements (time, landmark id, range, bearing)
        while (y.hasNext()) {
            double timey = y.nextDouble();
            double code = y.nextDouble();
            double range = y.nextDouble();
            double bearing = y.nextDouble();
            timeya[i1] = timey;
            codea[i1] = code;
            rangea[i1] = range;
            bearinga[i1] = bearing;
            i1++;
        }

        // Fill Omega and Xi by walking both streams in time order
        while (k1 < timex.length && k2 < timeya.length) {
            if (timex[k1] < timeya[k2]) {
                omega.setAsDouble(1, k1, k1);
                omega.setAsDouble(1, k1, k1 + 1);
                Xi.setAsDouble(x1[l], k1, 0);
                k1++;
                l++;
            } else if (timex[k1] == timeya[k2]) {
                landx[l1] = x1[k1] + rangea[k2] * (Math.cos(bearinga[k2] + theta1[k1]));
                omega.setAsDouble(1, k1, k2);
                omega.setAsDouble(1, k2, k1);
                Xi.setAsDouble((landx[l1] + x1[l1]), k1, 0);
                k2++;
                l1++;
            } else {
                System.out.println("Nothing to add");
            }
            // System.out.println(Xi);
        }
        System.out.println(omega);
        /* Matrix mu = omega.inv().mtimes(Xi);
           System.out.println(mu); */
    }
}
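For what it's worth, the "infinite information" entry the quoted text refers to amounts to adding a very large prior to the block of $\Omega$ that corresponds to the first pose, which is what keeps the matrix invertible. A numpy sketch of that one step (the 3x3 block size and the weight are the usual choices, not something taken from the code above):

```python
import numpy as np

def anchor_first_pose(omega, xi, pose_dim=3, weight=1e6):
    """Fix x0 = (0, 0, 0)^T by adding a strong prior to its information block.
    Built only from relative constraints, Omega is singular without this anchor."""
    omega = omega.copy()
    omega[:pose_dim, :pose_dim] += weight * np.eye(pose_dim)
    # xi is unchanged because the anchored pose is the zero vector.
    return omega, xi
```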
I was hoping to pick your brains about a problem that, even after much reading, has left me baffled. My application: for the sake of simulating underwater scenarios in very large environments (up to 100 km by 100 km in the X-Y plane), I am attempting to implement a path planner capable of planning a path that brings an underwater vehicle such as a submarine from the starting position to a final 3D waypoint while avoiding obstacles. The vehicle will have high-level knowledge of the map for things like the location of the ocean floor, and will sense other obstacles as it moves through the environment. The environment will not be obstacle-dense, but the obstacles can be moving (other submarines, etc.). My general problem: underwater vehicles such as submarines have significant kinodynamic constraints associated with them. This makes using strictly geometric planners such as RRT* (the geometric version, anyway) or D*-Lite unrealistic, since the generated path could be entirely unimplementable by the vehicle even after smoothing. However, kinodynamic path planners cannot reasonably be applied over such large distances due to their computational complexity, especially in the context of new obstacles potentially being discovered all the time, requiring repairing or re-planning of the path. Are there standard methods for dealing with these types of complex scenarios that I am unaware of? Otherwise, can anyone propose a strategy well suited to this type of problem? Thank you. One idea I had was to use a geometric planner like RRT* as a 'global' planner that generates a coarse plan that would then be roughly followed by a 'local' kinodynamic planner such as RRT-X. The local planner would ensure nearby obstacles are avoided while adhering to the constraints of the vehicle. I am a complete novice in this area, though, so I am unsure if even that is reasonable.
I am building a robot arm that is actuated by a pneumatic cylinder. Extending and retracting the piston would increase and decrease the joint angle respectively. This is done using a 5/3 solenoid valve controlled by a relay switch. The actual joint angle is fed back by a potentiometer connected to the analog A0 pin of an Arduino Uno. I have a basic idea of how the whole mechanism would work to control the position of the piston, but I have some queries, particularly on using the analog comparator as an interrupt source. The components I am using are: a pneumatic cylinder fitted with speed-reducing connectors to hopefully slow the piston down enough to be controllable, a 5/3 solenoid valve, a relay module, and a potentiometer. Basic idea of how the system would function: the potentiometer reading is mapped to the joint angle range; the user keys in a target joint angle; if the target joint angle < measured joint angle, switch on the relay to power the corresponding solenoid valve and extend the piston, and do the opposite when the target joint angle > measured joint angle; if the target joint angle = measured joint angle, switch off the relays so the solenoid valve is in the closed position, i.e. no air input and the piston stays in position. Queries: Interrupt vs polling. The control loop needs to know when the actual joint angle = target joint angle so as to switch off the relay and power down the solenoid valve. Does this require an interrupt signal when the potentiometer reading reaches a certain value? Could polling the potentiometer using analogRead() miss a reading? Or would polling simply be the more straightforward way to get joint-angle feedback? Analog comparator. If an interrupt is the way to go, this would mean using the built-in analog comparator on the Uno. I read up on it, and the idea is to put a reference voltage on D7 of the Uno, while the potentiometer value is input at D6. When the input voltage passes the reference voltage, an interrupt (either falling or rising edge) occurs. But this would mean that there can only be one fixed reference voltage, right? Or is there a way to feed a variable reference voltage into D7 according to the user's desired value? And since the Uno only has these two pins for the analog comparator, it would not be able to support a two-joint robot arm with angles measured by two different potentiometers, correct? Or are there smarter ways to achieve control of the piston position?
I was going through UC Berkeley's CS287 - Advanced Robotics course. In particular - Optimal Control for Linear Dynamical Systems and Quadratic Cost (“LQR”) by Pieter Abbeel (Slide 10). Let us say we use LQR to go from point A to point B. The motion planner will probably output a trajectory, which we will follow using LQR. In the finite horizon discrete time case, how do we set the value of total time horizon $T$, and the value of each time step $\delta t$, and the waypoints $x^*_t$ to follow?
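One common bookkeeping scheme (a sketch, assuming the planner returns a timed trajectory) is to pick $\delta t$ from the control rate, set $T$ from the trajectory duration, and obtain each $x^*_t$ by sampling or interpolating the planner output at $t\,\delta t$:

```python
import numpy as np

def build_reference(times, states, dt):
    """times: planner timestamps (increasing); states: (len(times), n) planner states.
    Returns the horizon length T and waypoints x*_0..x*_T on a uniform dt grid."""
    duration = times[-1] - times[0]
    T = int(np.ceil(duration / dt))
    grid = times[0] + dt * np.arange(T + 1)
    x_ref = np.vstack([np.interp(grid, times, states[:, k])
                       for k in range(states.shape[1])]).T
    return T, x_ref
```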
I am trying to implement GraphSLAM. I am going through this paper: http://robots.stanford.edu/papers/thrun.graphslam.pdf. I have some doubts about the algorithm, and I have attached a picture with this question. My question is: why does an identity matrix need to be added (augmented) in lines 7 and 8 of this algorithm? Please help me clear up this doubt.
Background information: I'm new to robotics, and my school currently has access to a 6-DOF Universal Robots arm (UR-10). I'm programming it using its scripting language, URScript, and the arm has these two commands: speedl(vector of speeds [x, y, z, Rx, Ry, Rz]), where the speeds are linked to the end effector, and speedj(joint speeds [base, shoulder, elbow, wrist1, wrist2, wrist3]), given in rad/s. The built-in speedj is by far superior in terms of smooth motion; however, using a PS3 controller it is much harder to control, so I would like to be able to send simple linear commands (left, right, fwd, back, up, and down) and have them converted into appropriate joint speeds. Problem: I want to move the arm linearly, but I don't know how to convert linear motion into an equivalent representation in terms of joint speeds. Where should I start researching? Is there a topic that covers my question? I doubt there's an immediate answer to my problem, but I'm unsure where to start looking. Any keywords that might facilitate my research would be awesome.
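The keywords to search for are the manipulator Jacobian and "resolved-rate" or differential (velocity) kinematics: joint rates follow from the desired end-effector twist via $\dot{q} = J^{-1}(q)\,\dot{x}$ (or a pseudo-inverse). A tiny sketch of that mapping, where `jacobian(q)` is a placeholder for whatever routine computes the 6x6 UR-10 Jacobian:

```python
import numpy as np

def joint_speeds_from_twist(q, twist):
    """twist = [vx, vy, vz, wx, wy, wz] of the end effector; returns speedj-style joint rates.
    jacobian(q) is a placeholder for a UR-10 Jacobian routine."""
    J = jacobian(q)                         # 6x6 at the current joint angles
    return np.linalg.pinv(J) @ np.asarray(twist)
```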
I've read this link: http://wiki.ros.org/kinetic/Installation/Source and have learned that the command rosdep install --from-paths src --ignore-src --rosdistro kinetic --simulate can list all necessary dependencies of ROS on a clean system. I've also read this link: http://wiki.ros.org/catkin/package.xml. So I'm wondering whether it is possible to list only the exec_depend dependencies of ROS?
I have homing switches and I'm unsure whether my zero should be right where the axes end up after homing, or whether the zero is supposed to be the center of my build volume.
I have some straight and curved pieces with numbers on them; they are used to build tracks (of $5$ lanes) for my cars (figure $1$). I can send commands to the cars using an SDK on the Raspberry Pi (set the speed, for example). When a car is moving, a built-in downward-facing camera scans the infrared markings on the track, and a microcontroller inside the car decodes these markings to get the id of the position on the piece (see figure $2$) and the number of the piece; a message PM = (piece number, position id, car linear velocity) is then sent via Bluetooth Low Energy to the Raspberry Pi. The car can send these messages AT LEAST every $0.2$ seconds. The problem is that the car can reach a linear velocity of $100\,cm/s$ and move quite a distance after it sends the PM message, so there will be a lag between the actual state of the car and the state contained in the received PM message. There is also the problem of delayed and lost packets containing PM messages, because we use one Raspberry Pi to communicate with $3$ cars via Bluetooth Low Energy (sometimes the Raspberry Pi keeps receiving packets from one car even if the others try to send), which makes the lag larger. So we cannot rely only on the received PM messages to determine the real-time position of the car. Therefore I am trying to implement, on the Raspberry Pi, a Kalman filter to estimate the position of the car on the track in real time. But given that I don't have the $(x,y)$ coordinates of the car, I don't know how to apply the algorithm using (position id, number of the piece), and I also want to know how the update step can take the delayed and lost measurements into account. When the car leaves a piece it sends the wheel displacements, but I don't think that is sufficient; I have neither the heading nor the angular velocity of the car. There is one thing that I think is very useful: if no commands are sent to the car, the microcontroller inside it ensures that the car keeps moving in the same lane and tries to maintain its linear velocity. I would like to have your advice, opinions or ideas for another filter or algorithm; if you have any questions, please ask me.
Let's say a quadrotor's rotors are 2 feet apart, and 2 feet beneath it is attached a symmetrical, balanced payload that is x feet wide (with space for air from the rotors to move through). If the rotors are powerful enough, would it still be somewhat controllable laterally if x were greater than 2 feet, or would that configuration ruin all ability to control it, regardless of the rotors' power?
I want to use MATLAB to estimate the position of the end effector given the joint angles. I saw a convenient tool for building a robot model using RigidBodyTree. My manipulator is planar, with only revolute joints and N joints in total. For a given manipulator (where N can be changed), I want to give the manipulator's joint angles as input and get the position of the end effector. I only found InverseKinematics, which does exactly the opposite, but nothing about forward kinematics. How could I do that in MATLAB?
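Independently of the RigidBodyTree model, the planar forward kinematics is just a running sum of joint angles; a small Python sketch of the underlying computation (link lengths are whatever the manipulator uses):

```python
import numpy as np

def planar_fk(link_lengths, joint_angles):
    """End-effector (x, y, phi) of an N-link planar arm with revolute joints."""
    x = y = phi = 0.0
    for L, th in zip(link_lengths, joint_angles):
        phi += th
        x += L * np.cos(phi)
        y += L * np.sin(phi)
    return x, y, phi

# Example with a 3-link arm
print(planar_fk([1.0, 0.8, 0.5], np.deg2rad([30, -45, 10])))
```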
I am trying to build a microcontroller-based, higher-payload version of a servo motor using a geared induction motor with a VFD as the control device. For this purpose I have selected a 1 HP motor running at 1800 RPM, which is geared down to 30 RPM. The position feedback is captured using an AS5048 14-bit magnetic encoder and a Cortex-M4 microcontroller. Since I don't have any idea about state-space controllers and their implementation on a microcontroller, I am planning to implement it using a PID controller. I have the following questions: 1) Is it possible to implement this with a PID controller for a varying load without affecting the tuning parameters over a reasonable range of loads, or does it require changing the parameters on the fly depending on the load? 2) How do I select the update rate of the PID control loop for motor position control? I am communicating with the encoder over a 400 kHz I2C interface, and the controller output is given as a voltage signal to the VFD for frequency/speed control.
I have been trying to implement FastSLAM 1.0. To implement it, I need to create particles. My confusion is about how to create the particles: I have some odometry and measurement data; using those data, how could I generate the particles?
Currently I'm working on an in-pipeline inspection robot that can navigate through a narrow pipeline. My design uses a propeller at the back to push the robot forward. The front part of the robot will have a camera to inspect the pipeline interior. Additionally, the robot will have support wheels to provide stability and braking and to enable the robot to turn either right or left. I need help in determining what type of propeller will provide enough force to push the robot forward through the pipeline, and also what type of motor to use for that propeller. Thank you for your effort and time.
I am studying and coding particle filters and I am using the low-variance sampling algorithm suggested in the Probabilistic Robotics book. I understand the procedure for the algorithm: a random number r is picked from the interval (0, 1/M), and a variable U, calculated from r, is used to navigate the sample space systematically. A variable c (cumulative sum) is initialized with the first weight and incremented by adding weights until it is higher than U. Once the cumulative sum is higher than U, the sample corresponding to the weight last added is picked. The problem I have is that I don't see how this picks a good sample set for the next iteration. It seems very random, or at least favorable to lower-valued weights. If the initial value of r is very low, U is also low initially, and it may pick a sample whose weight is low, unless the weight vector is sorted from high to low (is it sorted?). However, this video suggests that particles with higher weight have a better chance of getting picked. The algorithm doesn't convey this idea to me. Please help if you have an explanation.
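For concreteness, here is the sampler as described above written out in Python (weights assumed normalized). Note that the weights do not need to be sorted: a particle of weight w owns a segment of length w of the cumulative sum, so the M evenly spaced pointers land on it about w·M times on average, which is exactly the "higher weight, better chance" behaviour.

```python
import random

def low_variance_resample(particles, weights):
    """Low-variance (systematic) resampling, Probabilistic Robotics style."""
    M = len(particles)
    r = random.uniform(0.0, 1.0 / M)
    c = weights[0]
    i = 0
    resampled = []
    for m in range(M):
        U = r + m / M          # M evenly spaced pointers in [0, 1)
        while U > c:           # advance until the cumulative weight covers U
            i += 1
            c += weights[i]
        resampled.append(particles[i])
    return resampled
```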
I am trying to implement GraphSLAM from Sebastian Thrun's paper, The GraphSLAM Algorithm with Applications to Large-Scale Mapping of Urban Structures. When I compute the inverse of my information matrix, $\Omega^{-1}$, I get an error, "Matrix is singular", because $\Omega$ is singular. How can I avoid matrix singularity when implementing GraphSLAM? Please don't suggest the pinv() method; it cannot give me the actual answer. I had considered using a pseudo-inverse, but instead I want to prevent $\Omega$ from becoming singular so that I can compute the exact inverse. Below is where $\Omega$ is computed (see line 7), from Table 2, Calculation of $\Omega$ and $\xi$ in GraphSLAM:
For manipulators that lift heavy loads, e.g. a car or a cow, what would be the ideal angular velocity? Are there any standards for a safe arm movement speed? (Of course I know it depends on the project and use; the question is a yes or no.) Is there any difference when the manipulator works around people? (Since in some cases people can be moving around it, should it move slower?)
I have an AUV with a 12 inch diameter, 29 inch long capsule. Inside the capsule sits an Intel NUC computer, an NVIDIA Jetson GPU, an Arduino Mega, a Sparton IMU (9 DOF) as well as a custom voltage regulator. Given the noise, would it be practical to attempt to use my magnetometer, even with calibration and filtering? If so, what should I keep in mind regarding the placement of my IMU in the AUV's capsule, given that I have already read this post? I have tried moving it as far away from the sources of noise as possible but the noise persists. I looked into shielding, but that looks like it would block the Earth's magnetic data as well, which would render the shield pointless.
I am working on a language (computer language) for robots to communicate with each other. I am looking for a naming standard that is unique and usable for robotics, so that when two robots communicate with this language, they use one word (standard) for an object like "door" that is understandable to both, because they use the same naming standard. As I searched the internet I couldn't find anything helpful for naming the objects, senses and actions robots may share with each other so that they understand what they mean. Syntax of my language: SEND BY loc ON object = door. This language is a query language like SQL, in which the programmer, based on programming conditions, writes communication queries to retrieve some data from the destination robot or request some actions from it. In the code above, loc and door are names that should be declared by a standard that both robots can understand. I'm asking whether you can suggest any naming standard for saving and sharing names on robots, and whether there is a robotics communication standard to suggest, especially from scholars. Thanks.
I'm using an adjustable desktop power supply (0-30 volt DC, 0-10 amp) to power my robot which has 7 standard 6-volt servos. There are some situations where the robot gets jammed, and power draw increases to 6A and recently I've even seen it hit 9.5A. I think my servo controller is only rated for around 5 amp continuous, and the servos are not intended to hit anywhere near that for any length of time. A servo burnout would be a big problem because I'd need to deconstruct, rebuild, and re-calibrate - none of which are easy operations. So, is there any device that can set a maximum current and can be easily reset? I could use a traditional fuse, but may end up replacing those far too often. It's fine if the device affects voltage moderately, as I can easily adjust the power supply up a bit and honestly I'm happy if it lowers voltage when it needs to lower current as well - that would further help prevent damage.
I am thinking about using a combination of mu-metal, a faraday cage, and spacial separation to reduce magnetic interference from a brushless motor on a compass sensor. The compass will be used to calculate heading. If the vehicle that it is attached to is standing still or slowly rotating, however, will earth's magnetic field be shielded by the mu-metal? The following is a quote from Wikipedia: The high permeability of mu-metal provides a low reluctance path for magnetic flux, leading to its use in magnetic shields against static or slowly varying magnetic fields A slowly rotating or fixed compass would experience a static or slowly varying magnetic field from the earth, no? So would the mu-metal interfere with the compass's readings? I could always just eliminate the mu-metal and use only a faraday shield, which I know don't affect static or slowly changing magnetic fields. EDIT: someone please let me know if there is a more appropriate SE site on which to ask this question.
I am creating a project where a robot needs to face a control panel and do some specific tasks. Some reference tags will be placed on the panel. I want to use the tags as an AR reference to fix my robot's orientation. But I am new to AR and I need to learn more; can you give me some hints on how to do this? I want to use OpenCV with Python.
I'm working on a project and I want to use a sensor. I can plug in the sensor via USB and I do not need any driver for it. My task is to get access to the sensor data. I have followed the tutorial on this page: http://clearpathrobotics.com/assets/guides/ros/Udev%20Rules.html but I don't know if this is my solution, and I don't know what to do now. How can I access the data in ROS? Do I have to create a publisher?
This might be a tricky question, but having dug in deep with ROS I am noticing the complexity that one has to deal with. Simple Arduino programming is much simpler but of course can’t do many useful things. No threading, IPC, and many many more. To be clear, I am not planning on using an Arduino, I am just using it as the extreme end point of a spectrum. So my question is, is there another package or library or framework that does some of the things that ROS does but perhaps not at the level of sophistication? Additional Thanks to the responses so far, and the feedback. I've added below and updated above. As an example application, let's say an experimental (i.e I am not trying to build a commercial product - yet) mobile robot, with a depth camera, Lidar, and wheels, that can navigate indoors and be controlled over WiFi. It would likely include slam navigation and/or fiducials. Obviously I am OK with buying the hardware or the hardware components and doing a lot of programming. I would prefer something other than C++ but that's not a strict requirement.
I'm looking at how COLMAP does multi-view triangulation. I can't work out what this function is doing, and I can't find any formulas that look similar. The input proj_matrices come from pose data; the points are from the mono camera measurements. It seems like it is reducing an error in the projection matrices themselves? Is it trying to find some sort of overall projection error for all the observations? Any pointer in the right direction would be greatly appreciated!!

Eigen::Vector3d TriangulateMultiViewPoint(
    const std::vector<Eigen::Matrix3x4d>& proj_matrices,
    const std::vector<Eigen::Vector2d>& points) {
  CHECK_EQ(proj_matrices.size(), points.size());
  Eigen::Matrix4d A = Eigen::Matrix4d::Zero();
  for (size_t i = 0; i < points.size(); i++) {
    const Eigen::Vector3d point = points[i].homogeneous().normalized();
    const Eigen::Matrix3x4d term =
        proj_matrices[i] - point * point.transpose() * proj_matrices[i];
    A += term.transpose() * term;
  }
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix4d> eigen_solver(A);
  return eigen_solver.eigenvectors().col(0).hnormalized();
}
I will be using stepper motors in a 6-DOF robotic arm along with servo motors. Unlike servo motors, I find stepper motors difficult to control. I was using only servo motors earlier, but due to lack of torque I had to switch to stepper motors. I read that stepper motors have no initial or reference (0 degrees) position, unlike servos. I searched various links and came across rotary encoders, but these won't be any good for my robotic arm. I will be using a Raspberry Pi to control the stepper motors. I need a way to control the stepper motor so that there is some reference angle for it.
What's the statistically and mathematically correct way to process lidar data so that I can get a good current estimate of the closest obstacle and its rate of change? Should I be learning about Kalman filters, or is that the wrong kind of processing?
I am new to IMUs. Regarding the measurements from an IMU, different materials give different explanations. From this link: https://www.it.uu.se/edu/course/homepage/systemid/vt14/tokp2.pdf — here b probably indicates the body frame of the IMU, and e probably indicates the earth frame. I can't understand what the rotation matrix $R^{be}$ in the second equation means. Is $R^{be}$ the extrinsic parameter between the camera and the IMU obtained from camera-IMU calibration? Is the measurement from the accelerometer expressed in the IMU frame or not?
I've been looking into implementations of Extended Kalman Filters over the past few days, and I'm struggling with the concept of "sensor fusion". Take the fusion of a GPS/IMU combination, for example. If I applied a Kalman filter to both sensors, which of these would I be doing? 1) Convert both sensors to give similar measurements (e.g. x, y, z), apply a Kalman filter to both sensors, and return an average of the estimates. 2) Convert both sensors to give similar measurements (e.g. x, y, z), apply a Kalman filter, and return the estimate of the sensor I trust more, based on a certain parameter (measurement noise covariance)?
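For context, a very common structure is a single EKF in which one sensor drives the prediction step and the other provides the measurement update, each weighted by its own covariance, so there is no averaging of two separate filters. A schematic sketch with placeholder models:

```python
import numpy as np

def ekf_step(x, P, imu_input, gps_meas, Q, R, f, F_jac, h, H_jac):
    """One predict/update cycle: the IMU drives the prediction, the GPS corrects it.
    f, h and their Jacobians F_jac, H_jac are placeholders for your models."""
    # Predict with the IMU (process model)
    x_pred = f(x, imu_input)
    F = F_jac(x, imu_input)
    P_pred = F @ P @ F.T + Q
    # Update with the GPS (measurement model)
    H = H_jac(x_pred)
    y = gps_meas - h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```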
I have some odometry data based on robot movement. I can feed those raw data into a motion equation from which I get the x, y, and theta coordinates of the robot. If I plot those x, y coordinates I get a path which shows me the robot's movement. This path shows that the robot moves back and forth through a corridor, so there is a loop. Now, how could I determine, from raw data consisting of forward and angular velocities, that at a particular timestamp the robot came back to a previous position? Also, what is residual uncertainty in the context of robotics?
I am trying to implement GraphSLAM on a real dataset. My dataset includes cases where the robot observes the same landmark over and over with a large time difference in between. Prof. Sebastian Thrun's lectures (Udacity: https://classroom.udacity.com/courses/cs373) are not enough to understand this type of situation, so I am studying "The GraphSLAM Algorithm with Applications to Large-Scale Mapping of Urban Structures" by Sebastian Thrun and Michael Montemerlo (http://robots.stanford.edu/papers/thrun.graphslam.pdf). In this paper, on page 414, Table 4, "Algorithm for Updating the Posterior mu" is given. I cannot understand this algorithm. First I remove all the landmark locations and then I add them again, but how should I add them? This is not clear to me. Also, why is it necessary to first remove all landmarks and then add them again? If anyone knows, please help me.
I have a drone which has certain services, say random.srv. I want to use this service via Python. Now I know that there is a page in the ROS wiki that tells you how to create a service client in Python. Unfortunately that is not sufficient for me. Here is my case: once I start the drone node, the service is listed as /main/random_task. An important point to note is that the name of the service file is, as stated above, random.srv. The function that I am writing is:

from random.srv import *
import rospy

def enableRandom():
    rospy.wait_for_service('main/random_task')
    try:
        droneRandom = rospy.ServiceProxy('main/random_task', random)
        doSomething(1)  # something seems to be missing
    except rospy.ServiceException:
        print('Service call failed')

if __name__ == '__main__':
    enableRandom()

The thing is, it gives an error saying random is undefined. But that is the name of the srv file. Can someone also explain why it doesn't work the way it should? You can guess from this question that I am new to this. Edit 1: Now I understand that in the statement from random.srv import * the 'random' should be the package name. This is not the case here. The package name is, let's call it, my-package1. The service random.srv that I want to use is in src/my-package1/main/srv/. Another thing to clear up is that /main/random_task is the name under which random.srv shows up in rosservice list. The new code now is:

from my-package1.srv import random
import rospy

def enableRandom():
    rospy.wait_for_service('/main/random_task')
    try:
        droneRandom = rospy.ServiceProxy('/main/random_task', random)
        doSomething()
    except rospy.ServiceException:
        print('Service call failed')

if __name__ == '__main__':
    enableRandom()

Now 'my-package1' is flagged as invalid syntax (I guess because of the '-' symbol). If I use from main.srv import random it says it cannot find 'main'. First, if I have to use 'my-package1.srv', how do I get around the syntax issue with '-'? Secondly, if I have to use the subfolder name, i.e. main.srv, why does it not recognize the package if it can recognize the services inside it (when using rosservice list in the terminal)? I think there is an issue with the syntax.
I have a question about the GraphSLAM implementation. To find the path and map using GraphSLAM we rely on this equation: $$\mu=\Omega^{-1}\xi$$ where $\Omega$ is the information matrix, which encodes the links between pairs of graph nodes and between landmarks and nodes. The $\xi$ vector gives the value of the constraint between two consecutive robot poses, or between a robot pose and a landmark. I want to understand why a matrix inverse multiplied by a vector gives us the path and the map. What is the mathematical logic, algorithm or assumption behind this? Is it related to Floyd's shortest-path algorithm or any other graph-based algorithm? For reference, I studied The GraphSLAM Algorithm with Applications to Large-Scale Mapping of Urban Structures and listened to the Udacity lecture series; however, I cannot understand where this equation comes from. I also have a little doubt about implementing GraphSLAM on a real dataset, because of the huge dimension of the $\Omega$ matrix and its inversion. Is it feasible for a dataset of 75,000 records?
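Two notes for context: the equation comes from writing the posterior as a Gaussian in information (canonical) form, whose negative log-likelihood is $\frac{1}{2}\mu^T\Omega\mu - \xi^T\mu$ up to a constant; setting the gradient to zero gives $\Omega\mu = \xi$, so it is a linear least-squares solve, not a graph search like Floyd's algorithm. And because $\Omega$ is very sparse (each constraint touches only two nodes), large problems are handled by solving the system with a sparse factorization instead of forming the inverse. A minimal scipy sketch:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

def solve_graphslam(omega, xi):
    """Solve Omega * mu = xi without ever forming Omega^{-1}.
    A sparse factorization scales far better than a dense inverse for ~75k variables."""
    omega_sparse = csr_matrix(omega)
    mu = spsolve(omega_sparse, np.asarray(xi).ravel())
    return mu
```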
Background: I have access to a UR-10, a 6-DOF robotic arm, through my school (I'm very new to robotics). I know the desired set of linear speeds that I want in the x, y, z directions in terms of the end effector ([x, y, z, rx=0, ry=0, rz=0]). Using an analog controller I receive linear speeds in the x, y, z directions ranging from -0.1 to 0.1 m/s. I found the forward kinematics for the UR-10 online and began to derive the Jacobian matrix. (If anyone has the Jacobian matrix for a UR-10, that would be awesome.) Since I'm only interested in the linear motion, where rx, ry, rz = 0, I thought I could simplify my Jacobian to a 3x3 matrix. I realized that by doing so I would be unable to solve for all the joint speeds 1-6. $J^{-1} \dot{X} = \dot{Q}$, where $J^{-1}$ is the inverse Jacobian, $\dot{X}$ is the Cartesian velocity vector and $\dot{Q}$ is the joint velocity vector. With the above simplification, [3x3][3x1] = [3x1] joint velocity vector; however, I need a 6x1 vector so that I have the speed for each joint. What am I doing wrong? What are the other 3 equations I would need to define a full 6x6 Jacobian and solve for the appropriate joint speeds? EDIT: I foresee a problem: since my linear speeds change incrementally, there may be singularities when calculating my inverse Jacobian. How could I work around that?
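On the singularity concern in the EDIT: a common workaround is the damped least-squares (Levenberg-Marquardt-style) inverse, which keeps joint speeds bounded near singular configurations. A sketch, with `jacobian(q)` a placeholder for the full 6x6 UR-10 Jacobian:

```python
import numpy as np

def dls_joint_speeds(q, twist, damping=0.05):
    """Damped least squares: qdot = J^T (J J^T + lambda^2 I)^{-1} xdot."""
    J = jacobian(q)                       # placeholder, 6x6
    xdot = np.asarray(twist)              # [vx, vy, vz, wx, wy, wz]; rotations are zero here
    JJt = J @ J.T + (damping ** 2) * np.eye(6)
    return J.T @ np.linalg.solve(JJt, xdot)
```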
I want to use a trapezoidal velocity profile (Cartesian), but I struggle with implementing the equations in code (C++/Python). Does anyone have an example of this? UPDATE: more details about my problem. I want to create a Cartesian trajectory with a trapezoidal velocity profile from point A to point B. The start and end velocities must be zero.
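Since an example was asked for, below is a minimal, self-contained sketch of a straight-line Cartesian move with a trapezoidal speed profile and zero start/end velocity. The vmax/amax limits and the time step are placeholders; if the distance is too short to reach vmax, the profile degenerates into a triangle.

```python
import numpy as np

def trapezoidal_profile(p0, p1, vmax, amax, dt=0.01):
    """Return (times, positions) for a straight-line move from p0 to p1."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    D = np.linalg.norm(p1 - p0)
    direction = (p1 - p0) / D
    t_acc = vmax / amax
    if amax * t_acc ** 2 >= D:            # triangle case: vmax is never reached
        t_acc = np.sqrt(D / amax)
        vpeak, t_flat = amax * t_acc, 0.0
    else:
        vpeak, t_flat = vmax, (D - amax * t_acc ** 2) / vmax
    t_total = 2 * t_acc + t_flat
    times = np.linspace(0.0, t_total, int(np.ceil(t_total / dt)) + 1)
    s = np.empty_like(times)
    for k, t in enumerate(times):
        if t < t_acc:                     # accelerating
            s[k] = 0.5 * amax * t ** 2
        elif t < t_acc + t_flat:          # cruising at vpeak
            s[k] = 0.5 * amax * t_acc ** 2 + vpeak * (t - t_acc)
        else:                             # decelerating
            td = t_total - t
            s[k] = D - 0.5 * amax * td ** 2
    return times, p0 + np.outer(s, direction)

# Example: move 0.3 m along x with vmax = 0.1 m/s, amax = 0.2 m/s^2
t, pos = trapezoidal_profile([0, 0, 0], [0.3, 0, 0], vmax=0.1, amax=0.2)
```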
How are the coordinate frames and D-H parameters determined in industry? Does one use the CAD drawing of the arm or physically take measurements of the actual arm?
Is it possible to calculate the gear reduction based on the datasheet of the SG90 servo motor? I want to do some modeling in Simulink, and the RC servo model asks for a gear reduction, but I can't find any information about it.
I am trying to follow the ROS monocular camera calibration for a USB camera I have, but the calibration routine fails when I try to run it. I was expecting to see an image like the one below, but I get a segmentation fault instead. These are the commands I used to try to start the calibration routine:

mona@Mona:~$ roscore
mona@Mona:~/catkin_ws/src/usb_cam$ rosrun usb_cam usb_cam_node
mona@Mona:~/catkin_ws/src/usb_cam$ rosrun rviz rviz
mona@Mona:~/catkin_ws/src/usb_cam$ rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 image:=/camera/image_raw camera:=/camera

But this is the output I get:

('Waiting for service', '/camera/set_camera_info', '...')
Service not found
Segmentation fault (core dumped)

What am I missing? How should I get it to work? When I list the ROS topics, I get:

$ rostopic list
/rosout
/rosout_agg
/usb_cam/camera_info
/usb_cam/image_raw
/usb_cam/image_raw/compressed
/usb_cam/image_raw/compressed/parameter_descriptions
/usb_cam/image_raw/compressed/parameter_updates
/usb_cam/image_raw/compressedDepth
/usb_cam/image_raw/compressedDepth/parameter_descriptions
/usb_cam/image_raw/compressedDepth/parameter_updates
/usb_cam/image_raw/theora
/usb_cam/image_raw/theora/parameter_descriptions
/usb_cam/image_raw/theora/parameter_updates
Benedict Evans, a general partner at Andreessen Horowitz, claims that “almost all autonomy” projects are using lidar for SLAM, and that not all of them use HD maps. An MIT group is testing self-driving cars on public roads without HD maps. My question is whether the difference in error between lidar and cameras is significant. Benedict Evans and others claim that lidar is necessary for accurate enough SLAM in self-driving cars, but at first glance the KITTI benchmark data seems to contradict that claim. I want to confirm or refute that impression. The KITTI Vision benchmark leaderboard for visual odometry/SLAM methods shows a lidar-based method called V-LOAM in first place and a stereo camera-based method called SOFT2 in fourth place. They have the same rotation error, and a difference of 0.05 percentage points in their respective translation errors. Is a 0.05-percentage-point difference in translation accuracy large or insignificant when it comes to autonomous car navigation? The KITTI Vision benchmark leaderboard for odometry/SLAM methods:
I will be using a 1:4 gear ratio and thus will require a motor with continuous 360-degree motion and high torque for efficient functioning. Servo motors with continuous rotation and high torque are very expensive and rare, so my choice has boiled down to a DC motor or a stepper motor. However, I have not completely understood how to configure them for a robotic arm. I don't want to use rotary encoders, as installing a potentiometer or Hall-effect sensor on the shaft would be cumbersome and inaccurate due to vibration. Is there a way to move a DC motor or stepper motor to a particular angle with ICs/modules like the L293D or other motor drivers?
I have lots of doubts about GraphSLAM, as described in The GraphSLAM Algorithm with Applications to Large-Scale Mapping of Urban Structures. When I implement it in practice I get a matrix singularity error. I took the data from the UTIAS Multi-Robot Cooperative Localization and Mapping Dataset. This dataset contains 75,000 odometry records and 5,000 sensor records, with known correspondences. As per the algorithm, I initially think the information matrix should be a 75015x75015 matrix, but practically this is impossible to implement (I am using the Universal Java Matrix Package). Then I thought that the robot may come back to the same position after roaming for a certain amount of time, so I have to identify locations that are the same as previous locations. I watched Lecture 7: Visual Navigation for Flying Robots, where the Iterative Closest Point algorithm is described; this algorithm identifies matching locations. But I have some doubts about the lecture. The professor said: Given two corresponding point sets (clouds) $$P = \{p_1,...,p_n\}\text{ and }Q = \{q_1,...,q_n\}$$ Where does he get those points? Why are there two data sets? Does $P$ represent the X axis and $Q$ the Y axis? I have raw odometry and sensor data; from which one do I create this point cloud? Do I really need to use this technique (ICP) for my implementation?
I am working with ROS Indigo and a Clearpath Husky A200, and I want to implement EKF localization with unknown correspondences using my own Hokuyo lidar data for a school project. Given the algorithm on page 217 of the Probabilistic Robotics book by Thrun (a picture of the algorithm is given below), what does step 9 mean by “for all observed features”? Does it mean all the raw data in a lidar scan, or does it mean to process the raw data first to find features? If it's the latter, what would be some techniques to process the raw data to find features? This stackoverflow post helped me understand this algorithm a lot better, but now I am just not sure how to provide the observed features from my own lidar scan data. Any help is appreciated, thank you.
I was doing a MATLAB programming assignment on quadrotor thrust and height control for an introductory course. The control input for the PD controller was, according to the equation, $$u = m(\ddot{z}_{des} + K_p e + K_v \dot{e} + g)$$ where $\ddot{z}_{des}$ is the second derivative of the desired height (the desired height is 1 meter), $K_p$ and $K_v$ are the proportional and derivative gains, which require tuning, $e$ is the position error and $\dot{e}$ is the velocity error. Note: the desired $K_p$, $K_v$ and height are required to give a rise time within 1 second and less than 5% overshoot. The code template provided by the instructor was:

function [ u ] = pd_controller(~, s, s_des, params)
% PD_CONTROLLER PD controller for the height
%
% s: 2x1 vector containing the current state [z; v_z]
% s_des: 2x1 vector containing desired state [z; v_z]
% params: robot parameters

% FILL IN YOUR CODE HERE

end

Finally, the simulation doesn't go well, and I couldn't find the error.
I'm currently working on a project to achieve stable altitude and automatic control for a quadcopter. I'm using an Arduino as the flight controller, with an HC-SR04 ultrasonic sensor and an MPU-6050. The expected outcome of my project is to have the quadcopter fly and hold altitude at 50 cm above the ground. My problem is that I don't know which of the sensors I have to use for the PID setpoint. Can anyone help me with the code or the functions I need to use?
I am trying to implement the ekf_localization algorithm on page 217 (Table 7.3) of the Probabilistic Robotics book by Thrun. From my previous post, I understand that I need to extract observed features at step 9 of the algorithm given in the book. So I am planning to use a line extraction algorithm (https://github.com/kam3k/laser_line_extraction) to extract lines, then find the center point of each line and use that point as my observed feature in step 9. Click part1, part2 to see Table 7.3. Now, I am having trouble understanding what the map (m) input is. The ekf_localization algorithm assumes that the map is already given; let's say figure 1 is the actual map that my robot will navigate in. Does this mean that m consists of points in the world coordinate frame, and that I can choose them manually? For example, the dots in figure 1 are my point landmarks that I provide to the algorithm (m = {(2,2), (2,4), (5,1), (5,2), (5,3), (6,2)}). If so, how many points should I provide? It would be great if you could help. C.O Park.
I am new to the robotics field and to sensor fusion. I am trying to localize my robot using the data from my camera and the odometry through an extended Kalman filter. I have the data offline and I have synchronized it. However, I am having trouble constructing the $F$ and $H$ matrices. Any help?
It's not technically robotics, but: I've been trying to reproduce in Simulink a spacecraft attitude simulation using quaternions, and the kinematics and dynamics seem to work fine; however, I'm having a bit of trouble with the controller. I followed the model given in the 7th chapter, which seems to be some sort of PD controller. The control equation I used involves $q_e$, the quaternion error, and $\omega_e$, the rotation speed error. But my results seem to be off. The initial quaternion and rotation speed are $q_i = [0;0;0;1]$ and $\omega_i = [0;0;0]$, and I give a desired reference of $q = [0;1;0;1]$ and $\omega = [0;0;0]$. I get the following response: $q(1)$ and $q(3)$ stay at zero as expected. But $q(2)$ goes towards -1 instead of 1 (as far as I understand, the sign ambiguity does not explain this, since $q(4)$ stays around 1), and $q(4)$ does not stay at 1 (I am not sure if this is related to the fact that the controller is only a PD). I've tried adding -1 gains, but it doesn't seem to solve the problem. Why would the step response of $q(2)$ go to -1 instead of 1? And why is $q(4)$ decreasing? For reference, I've added the Simulink model and the "Error quaternion" block. Edit: (response after Chuck's answer)
Can anyone give a justification for using screws (twists, wrenches) instead of the traditional approach (rotation matrices, homogeneous transforms)? Even if screws are more compact, the situation gets complicated whenever we want to consider accelerations, so using screws in dynamics seems cumbersome to me.
Suppose I implement a particle filter with $n$ particles. This is a brief description of my understanding of a particle filter. For the first step, I throw out $n$ particles some distance from my vehicle. I weight the particles according to some Gaussian distribution: $$ w_{j,t} = \frac{e^{-X_{j,t}^{2}/2\sigma^{2}}}{\sum_{j=1}^n{e^{-X_{j,t}^{2}/2\sigma^{2}}}} $$ where $X_{j,t}$ is some (noisy) difference between a measurement taken at the vehicle and at the particle at time $t$. I then translate these particles with my vehicle (with some uncertainty) and do the same thing again, and the weights of these particles (the same particle pool) are $$ w_{j,t+1} = \frac{e^{-X_{j,t+1}^{2}/2\sigma^{2}}}{\sum_{j=1}^n{e^{-X_{j,t+1}^{2}/2\sigma^{2}}}} w_{j,t} $$ We resample if, according to Wikipedia, $K = 1/\sum_j{w_{j,t}^2} < thresh$, where $thresh$ is some threshold we pick. Resampling is done according to each particle's weight (the probability of being chosen is given by that particle's weight). My question is thus: if $K < thresh$, that means that some particles are highly weighted. So won't resampling give us a very degenerate list of the highest-weighted particles, on average? Suppose this new, resampled population is composed of only $n/2$ different particles, 2 times each. How do you get $n$ particles back?
This robot has 2 revolute joints at the start, then one cylindrical joint, then one revolute joint, then another cylindrical joint, then three revolute joints at the end. There is no D-H table for it, so I modeled it in Simulink with the help of Simscape. Now I need the D-H table for the controller part in the MATLAB function. I know that if the rotation of a z-axis happens about its x-axis, as for joint number 2, I can express this change as an alpha in my D-H table. As the figure illustrates, the rotation of the z-axis of revolute joint number 4 happens about the y-axis of TJ2. How can I express this kind of z-axis rotation in my D-H table?
The first node in graph SLAM should be fixed. The famous "A tutorial on graph-based SLAM" paper shows that we can fix a node by adding an identity matrix. Why does adding the identity to the Hessian block of a specific node result in fixing that node? What is the theory behind it? Any good material to read?
What is the definition of loop closure in graph SLAM? Ref: Graph SLAM. The theory of graph SLAM is defined here, but I think there are no hints about loop closure. I give this reference for review purposes.
I am going to buy this motor: https://robokits.co.in/motors/encoder-dc-servo/faulhaber-coreless-17w-encoder-motor-120rpm-pid-dc-servo-drive?zenid=im3bp6grppt2f2ofdo2r6pq5c2 The datasheet of its driver: http://robokits.download/documentation/Serial_DC_Encoder_Servo_Driver.pdf I don't know how I will be able to control this motor with an Arduino. From reading the datasheet I know about the values that control the position and speed of the motor, but the part where the Arduino is used to send these values is still unclear. I know about serial communication, but I have only used it for communication between a PC and an Arduino. And can I control more than one of these motors with one Arduino?
I have a Nomad 4WD rover kit, but I saw that the motors don't have encoders. However, to use odometry with ROS I need them. I cannot find any suggestions via a Google search, so I am asking here if someone could point me to some solutions. Thanks. Edit: Sorry, I forgot to add the kit link: https://www.servocity.com/nomad
For a differential-drive robot with 4 wheels: if, for example, the back wheels used different motors than the front ones (i.e. running at a faster/slower speed), would that in general generate more motion (torque/force) than having free-running wheels instead? Or would free-running wheels be the better option because of the slip related to the slower wheels? Note: it's a sumo bot.
I am using a servo motor, model no. MG995, with an Arduino. I don't know the current it consumes while powered on and while lifting a 10 kg weight. I am currently using a 5 V, 1 A power supply with one motor, but in the future I need to connect 6 motors to a single supply.
In the literature I have read so far, I saw that RRT* is run for multiple iterations to converge to a better (near-optimal) solution. I was wondering how I could implement that, as most pseudocode doesn't explain that part.
I want to control a quadrotor with a Python script and run the simulation as fast as my laptop can, not only in real time. I've modified my world, and now the simulation runs with a real-time factor of 7-10. My problem is that after acting I want the physics to run for a determinate amount of simulation time, but if I do time.sleep(steptime) the sleep is in real time, which is 7-10 times simulation time. So I need to know the real-time factor in Python to divide the sleep time. Is there a way to get it?
I am reading the paper On-Manifold Preintegration for Real-Time Visual-Inertial Odometry. There is one paragraph about the IMU model, and I have two questions. The first one: "An IMU commonly includes a 3-axis accelerometer and a 3-axis gyroscope and allows measuring the rotation rate and the acceleration of the sensor with respect to an inertial frame" - what does an inertial frame mean? The second one: "The vector (the second quantity from the first equation) is the instantaneous angular velocity of B relative to W expressed in coordinate frame B." This sentence is difficult for me, especially the highlighted part.
I'm experimenting with SLAM for the first time. I am using ROS on a TurtleBot3, which has a Raspberry Pi and a single lidar sensor, the Robotis LDS-01. I am running a configuration with three computers: the Pi, a computer running just roscore, and a computer where I do my development. I have a very simple maze set up covering around a 3x3 m carpeted area. I am using gmapping to make the map. As I am a remote worker, I've not had an "expert" looking over my shoulder, although I am very active on boards asking questions. My experience with SLAM is that I am not impressed with the results so far. I've been following these instructions from Robotis as a starting point. I run gmapping and traverse the maze completely, and I do get a map that is pretty accurate, although the resolution is I think 5x5 cm by default, so it's a little rough. It's super simple. But then I save the map and run navigation, using move_base, amcl and map_server. To my eye the results are not impressive: when I use RViz to give the initial pose hint, the map is not particularly well aligned with the results of the lidar; when I ask it to navigate, it gets there but throws lots of errors in the console (see below for a sampling of the errors); and the robot moves in fits and starts, sometimes stopping for a few seconds, sometimes repeating the same motion over and over again, and in general not being smooth. Questions: Do I have a setup problem fundamentally? Is this just what you get when you use only lidar? Is my maze too small? Is this because I need to "tune" my SLAM? Sampling of errors from SLAM:

$ roslaunch turtlebot3_navigation turtlebot3_navigation.launch map_file:=$HOME/map.yaml
[ INFO] [1535381131.875583266]: Got new plan
[ INFO] [1535381132.075471469]: Got new plan
[ WARN] [1535381132.180568313]: DWA planner failed to produce path.
[ INFO] [1535381132.275184519]: Got new plan
[ INFO] [1535381132.475446249]: Got new plan
[ WARN] [1535381132.575722364]: Rotate recovery behavior started.
[ERROR] [1535381132.576713131]: Rotate recovery can't rotate in place because there is a potential collision. Cost: -1.00
[ INFO] [1535381132.675482447]: Got new plan
[ INFO] [1535381132.875441949]: Got new plan
[ INFO] [1535381156.901500928]: Got new plan
[ INFO] [1535381157.101396956]: Got new plan
[ INFO] [1535381157.301632070]: Got new plan
[ INFO] [1535381157.302170120]: Goal reached
[ WARN] [1535381169.092857809]: Costmap2DROS transform timeout. Current time: 1535381169.0925, global_pose stamp: 1535381168.5424, tolerance: 0.5000
[ WARN] [1535381169.093196300]: Could not get robot pose, cancelling reconfiguration
[ WARN] [1535381170.092852095]: Costmap2DROS transform timeout. Current time: 1535381170.0926, global_pose stamp: 1535381168.5424, tolerance: 0.5000
[ WARN] [1535381170.192729304]: Could not get robot pose, cancelling reconfiguration
[ WARN] [1535381171.097531131]: Costmap2DROS transform timeout. Current time: 1535381171.0972, global_pose stamp: 1535381168.5424, tolerance: 0.5000
[ WARN] [1535381171.292652884]: Could not get robot pose, cancelling reconfiguration
[ WARN] [1535381172.097543923]: Costmap2DROS transform timeout. Current time: 1535381172.0973, global_pose stamp: 1535381168.5424, tolerance: 0.5000
[ WARN] [1535381172.393145114]: Could not get robot pose, cancelling reconfiguration
[ WARN] [1535381190.292336646]: Costmap2DROS transform timeout. Current time: 1535381190.2920, global_pose stamp: 1535381189.7725, tolerance: 0.5000
[ WARN] [1535381190.292683124]: Could not get robot pose, cancelling reconfiguration
^C[rviz-5] killing on exit

Here is map.yaml:

resolution: 0.050000
origin: [-10.000000, -10.000000, 0.000000]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196

And here is the map:
I am using this motor in a robotic arm. I wanted to move it to a position and the position was provided via an input but I am getting errors on Serial Monitor. Initially when I tried this , name of the serial controller i.e Rhino Motor Controller got printed on serial monitor indicating that the controller is working .But the problem is that I am not able to control the motor via Serial communication because I think it is not accepting the serial commands. Here is the datasheet for the motor driver. Here is the code: #include <SoftwareSerial.h> int angle ; int position ; int speed ; String stringG1 ; SoftwareSerial Serial1(10,11); void setup() { Serial.begin(38400); Serial1.begin(38400); } void loop() { angle = Serial.read(); int position = 4*angle ; speed = 10 ; if(Serial.available()>0) { stringG1 = "N1" + String(position) + String(speed); Serial1.println(stringG1); if (Serial1.available()>0) { Serial.print(Serial1.readString()); } } } Output on serial monitor , N1-410 Err :N1-410 Err :N122810 Err :N1-410 ⸮⸮⸮5 :N119215⸮⸮5 :Nj⸮⸮0 Err :N⸮⸮2810 Err :N1-410 Err :N119610 Err :N119210 Err :N119210RErr :N1-41jT⸮⸮5 :N122810 ⸮⸮⸮5 :N1-410 ⸮⸮⸮5 :
In testing my iRobot Create 2, I discovered that the following message is being sent periodically over the serial interface: " Flash CRC successful: 0x0 (0x0)". Is this expected behavior or am I making an error in the way I'm connecting? I could not find this message documented anywhere. To reproduce this: iRobot Create 2 connected to a Windows 10 PC via the USB-Serial cable. Connecting via Putty at 115200 baud. Type ctrl-G to reboot the robot. I'll see the normal reboot message, then after a few minutes, the robot sends the above Flash CRC message.
I've read that Arduinos are actually not necessarily a good option for getting into robotics/embedded systems, as there are many shortcuts and you don't learn how it all really works. When I started app dev, I jumped straight into it and never had any practice with simpler things such as Scratch or other app builders; I went straight into the IDE and learned the programming language. If Arduinos are just for hobbyists, I want nothing to do with them. I'm able to grasp things quickly and study intensively, so should I start with an individual microcontroller or still stick with the Arduino? If a microcontroller, which one and why? I understand the differences between the Arduino and a stand-alone microcontroller and have no previous experience programming hardware, but that was the same with app dev and I got on just fine with the programming. Many thanks.
I want to write a Doppler Velocity Log (DVL) SensorPlugin for gazebo, but gazebo fails to load the plugin. My DvlPlugin.cpp looks like this #include "DvlPlugin.hpp" using namespace std; using namespace gazebo; void DvlPlugin::Load(sensors::SensorPtr sensor, sdf::ElementPtr pluginElement){ gzmsg << "Load" << endl; } and my DvlPlugin.hpp looks like this #ifndef _GAZEBO_DVL_PLUGIN_HPP_ #define _GAZEBO_DVL_PLUGIN_HPP_ #include <gazebo/common/common.hh> #include <gazebo/sensors/Sensor.hh> namespace gazebo { class DvlPlugin : public gazebo::SensorPlugin { public: DvlPlugin(){} ~DvlPlugin(){} void Load(gazebo::sensors::SensorPtr sensor, sdf::ElementPtr sdf); }; GZ_REGISTER_SENSOR_PLUGIN(DvlPlugin) } #endif So there is really nothing big that is done in the code, and everything compiles without errors. When I load following simple sdf file <?xml version="1.0" ?> <sdf version="1.6"> <world name="worl_test"> <model name="model_test"> <link name="link_test"> <pose>0 0 0 0 0 0</pose> <inertial><mass>0.01</mass></inertial> <sensor type="dvl" name="dvl_test"> <plugin name="gazebo_dvl" filename="libgazebo_dvl.so"/> </sensor> </link> </model> </world> </sdf> I get the error [Err] [SensorManager.cc:276] Unable to create sensor of type[dvl] (which means, when we look the SensorManager.cc code, that the Sensor dvl is not in the SensorFactory. Do you have an Idea why I get this error? PS: I am under Ubuntu 16.04 and I use gazebo 7 and I get the error even if i load the full path to the libgazebo_dvl.so.
So I have an idea which would require the following:

- 1-4 servos
- 1 microcontroller
- some way of communicating with it via smartphone (Bluetooth or WiFi)
- the ability to run for 14 days on a small power source, 2-3 AA or AAA batteries

I'm seeing a lot of boards that run in the 200 mA range, which makes using batteries look like a no-go: even with a 13000 mAh pack that only lasts 65 hrs, or about 3 days. If I were able to get into the micro-amp range this would be great, or at least a very low mA range. The servos should only have to run for <1 second about 2-3 times a day. I'm looking for starter information on this.
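To put numbers on this (my own rough budget, assuming roughly 1000-2500 mAh for an AAA/AA pack — cells in series share one capacity — and ignoring conversion losses), running for 14 days means the average draw has to satisfy

$I_{avg} \le \dfrac{1000\text{ to }2500\ \text{mAh}}{14 \times 24\ \text{h}} \approx 3\text{-}7\ \text{mA},$

so a board drawing 200 mA continuously misses that budget by a factor of 30 or more. In other words, the controller and radio would have to spend almost all of their time asleep in the micro-amp range and wake only for the 2-3 short servo moves per day. Does that budget look right, and is there a board/radio combination that can realistically hit it?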
I have a dynamical system of the following form: $\dot x=\mathcal F_1(m_1)x+\mathcal F_2(m_2)x$. My objective is to find the parameters $m_1$ and $m_2$ via an LMI (linear matrix inequality), using the Lyapunov function $V=x^TPx$, where $x$ is the state and $P$ is a positive definite matrix. The problem is that I can't find a feasible solution with a single matrix $P$, so I tried to solve the problem in two steps:

I consider $\dot x=\mathcal F_1(m_1)x+\omega_1$; then, using the Lyapunov function $V_1=x^TP_1x$ (with the usual mathematical machinery), I can find the first parameter $m_1$.
I consider $\dot x=\mathcal F_2(m_2)x+\omega_2$; as in the first step, I can find the second parameter $m_2$ using the Lyapunov function $V_2=x^TP_2x$.

Using the resulting $m_1$ and $m_2$ in simulation works very well. My questions are:

$m_1$ and $m_2$ were found independently, so how can I guarantee the stability of the system when both are used together; in other words, how can I prove the stability of the system?
It is well known that the Lyapunov function is also called the energy of the system, so can I say that $V=\frac{1}{2}(V_1+V_2)$ is also the energy of the system?
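To state precisely what I mean by the first question (my own formulation, using the structure above): for the combined system I would need a single $P \succ 0$ satisfying

$\dot V = x^{T}\Big[\big(\mathcal F_1(m_1)+\mathcal F_2(m_2)\big)^{T}P + P\big(\mathcal F_1(m_1)+\mathcal F_2(m_2)\big)\Big]x < 0 \quad \forall x \ne 0,$

whereas my two-step procedure only produces $\mathcal F_1(m_1)^{T}P_1 + P_1\mathcal F_1(m_1) \prec 0$ and $\mathcal F_2(m_2)^{T}P_2 + P_2\mathcal F_2(m_2) \prec 0$ with two different matrices $P_1 \ne P_2$, and I do not see how these two separate certificates imply the first inequality.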
I know they are bad for positional tracking and drift from actual position over time but would like to know what is the situation with rotation only. I know Oculus DK1 used ordinary off-the-shelf cheap IMUs for rotation tracking of the user's head, as does any other VR headset, such as GearVR, where there is no positional tracking, but I haven't had chance to use them more than few minutes to know how much they (their IMUs) drift from original orientation over time.
I have some 50 ms latency cameras at hand and an 800 Hz IMU (gyro + accelerometer + magnetometer). I would like to know exactly how I should fuse such an IMU and camera to correct the positional drift of the IMU-only estimate. I'm not able to find many resources online. The reason I don't want to go with just a camera is its 50 ms latency. The optical markers for the camera can be LEDs, ORB-SLAM data, or the ArUco markers which I currently use and which add another few ms of latency to the camera tracking. Maybe there is even an existing library or documented implementation I can use?
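To show where I am starting from, here is the naive blend I have in mind (my own sketch in plain C++, no particular library assumed): integrate the IMU at 800 Hz and, whenever a late camera pose arrives, pull the position toward it with a constant gain. This completely ignores the 50 ms latency, which is exactly the part I don't know how to handle properly.

struct Vec3 { double x = 0, y = 0, z = 0; };

struct NaiveFusion {
    Vec3 pos, vel;
    double alpha = 0.02;  // blend gain per camera update, hand-tuned guess

    // called at 800 Hz with world-frame acceleration (gravity assumed already removed)
    void imuUpdate(const Vec3& accWorld, double dt) {
        vel.x += accWorld.x * dt;  vel.y += accWorld.y * dt;  vel.z += accWorld.z * dt;
        pos.x += vel.x * dt;       pos.y += vel.y * dt;       pos.z += vel.z * dt;
    }

    // called at camera rate with the marker/SLAM position (which is about 50 ms old)
    void cameraUpdate(const Vec3& camPos) {
        pos.x += alpha * (camPos.x - pos.x);
        pos.y += alpha * (camPos.y - pos.y);
        pos.z += alpha * (camPos.z - pos.z);
    }
};

Is the proper approach to keep a short buffer of IMU states, apply the camera correction at the timestamp it was taken, and re-propagate, or is there a standard filter formulation (and ideally a library) that already handles delayed measurements like this?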
I am trying to tune my PID to make my motor hold a consistent output: input (pidOutput) => 100 RPM. I used the following steps to tune it:

Set all gains to zero.
Increase the P gain until the response to a disturbance is a steady oscillation.
Increase the D gain until the oscillations go away (i.e. it's critically damped).
Repeat steps 2 and 3 until increasing the D gain does not stop the oscillations.
Set P and D to the last stable values.
Increase the I gain until it brings you to the setpoint with the number of oscillations desired (normally zero, but a quicker response can be had if you don't mind a couple of oscillations of overshoot).

As I did not know which values to start from, I began with P = 0.2, D = 0, which gave an output of about 36~38 RPM. Then I increased to P = 0.3, D = 0 and the output went to about 42~46 RPM. I then slowly increased D until I found a sweet spot, e.g. D = 0.2, where the RPM stayed in the 44~46 range. After about 3 hours I have only reached about 60~62 RPM, which is frustrating because progress is really slow and I have two motors and have not even finished tuning one. My doubts are:

Am I tuning wrongly, given that it is taking this long and I am still far from my target of 100 RPM? How wide an RPM range is considered unacceptable?
I have two motors, each with its own PID. Would a difference of 2 RPM between the wheels cause the car to drift sideways?
After tuning (with the I gain), what output should I expect to see? A constant 100 RPM, or something like 99~101?

The PID code I used:

double integral = 0;   // integrator state, must persist between calls
double prevRPM = 0;    // previous measurement for the derivative term

double pidtune1(double rpm) {
  double kp = 0.79;
  double ki = 0;
  double kd = 0.6395;

  double error = 100 - rpm;          // setpoint is 100 RPM; use the rpm passed in
  integral += error;

  double p = kp * error;
  double i = ki * integral;
  double d = kd * (prevRPM - rpm);   // derivative on the measurement
  prevRPM = rpm;

  return p + i + d;
}

Called by:

while (distance < 100) {
  pid_output1 = pidtune1(rpm1);
  pid_output2 = pidtune1(rpm2);
  md.setSpeeds(pid_output1, pid_output2);
  recalculateRPMs();
}
md.setBrakes(400, 400);
I use Arduino Uno R3 connected to the Roomba Create 2 as in this picture All the output commands for Arduino work fine but I have a problem from reading data from Create 2 sensors: I get invalid/wrong sensor sensor data every approximately 2 minutes. So I tried to test with an empty program to check the incoming data from Roomba. This is the code: #include <SoftwareSerial.h> #include "roombaDefines.h" int rxPin=10; //yellow int txPin=11; //green byte data1 ; SoftwareSerial Roomba(rxPin,txPin); void setup() { Roomba.begin(19200); Serial.begin(9600); pinMode(ddPin, OUTPUT); delay(1000); wakeUp (); // Wake-up Roomba startSafe(); // Start Roomba in Safe Mode playSound(1); setPowerLED(128,255); Serial.println("Start"); } void loop() { while(Roomba.available()>0) { data1 = Roomba.read(); Serial.println(data1); } } and I got this from Serial monitor approximately every two minutes Start 32 32 32 32 70 108 97 115 104 32 67 32 115 117 99 99 101 115 115 102 117 108 58 32 48 120 48 32 40 48 120 48 41 10 13 I think this is the cause of the error in reading the sensor data. Did anyone have this problem before ? How can I avoid this and read correct sensor data from Roomba ? I see from another post that somebody had the same issue as me, see iRobot Create 2 Flash CRC message but I already tried op code 128 to start Roomba in OI mode and it still have the same issue
Suppose I want to mesh two identical gears (size and number of teeth). What would be the difference if both had 12 teeth versus both having 48? Assume the motor spins at the same speed in both scenarios.
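The only quantitative difference I can compute myself (my own calculation, so please correct me if it is off the point): with the shaft turning at $n$ rpm and $N$ teeth per gear, teeth engage at a frequency of

$f = \dfrac{N \cdot n}{60}\ \text{Hz},$

so at the same motor speed the 48-tooth pair meshes four times as often as the 12-tooth pair, while the ratio stays 1:1 in both cases. Beyond that, is there a practical reason (tooth strength, backlash, smoothness) to prefer one tooth count over the other?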
The Arduino turns off when I try to put the wire from the servo to its respective pin (9). Also, I've tried the code from the Arduino IDE, the knob code where the position of the motor depends on the value of the potentiometer. I don't really know what's wrong. Every time the servo tries to turn, the Arduino turns off, so the motor only turn for a very small degree, then the Arduino turns on, and so on. By the way, I'm using an MG996R Tower Pro Digital Servo motor. Is there a problem with the motor or/and the Arduino? I can't seem to remedy this problem.
My Code gives the following convergence characteristics, I wanted to know if it is correct Updated code { %Basic RRT star algorithm for non-holonomic body with obstacles close all clc clear all %Map and Initialization Data x_max = 100; y_max = 100; obs1 = [30,0,20,20]; obs2= [30,60,20,20]; EPS = 5; % Step Size Iter = 2000; q_start.pos = [10 10]; q_start.cost = 0; q_start.parent = 0; q_start.child=[]; q_goal.pos = [90,90]; q_goal.cost = 1e9; q_goal.child=[]; q_new.pos =[0,0]; q_new.cost=0; q_new.parent=0; q_new.child=[]; goal_reached=0; tree(1) = q_start; figure(1) axis([0 x_max 0 y_max]) rectangle('Position',obs1,'FaceColor','b') rectangle('Position',obs2,'FaceColor','b') hold on plot(q_start.pos(1),q_start.pos(2),'.','MarkerSize',10,'Color','m') plot(q_goal.pos(1),q_goal.pos(2),'.','MarkerSize',10,'Color','b') goal_nodes=[]; best_goal=[]; finalnodes=[]; goals=0; fc=[]; t=cputime; for i=1:1:Iter q_rand=random_state(x_max,y_max);%Sampling a random state from the configuration space [q_near,val,idx] = nearest_neighbour(q_rand,tree); % Obtaining the nearest neighbour from the tree and its distance from q_rand q_new.pos=move(q_rand,q_near.pos,val,EPS); %Checking if goal has been reached stat= distance(q_rand,q_goal.pos); if (stat<=4 &&~isCollision(q_rand,q_near.pos,obs1)&&~isCollision(q_rand,q_rand,obs2)) goal_reached = 1; goals=goals+1; goal_nodes(goals,:)=q_rand; q_new.pos=q_rand; end if(~isCollision(q_new.pos,q_near.pos,obs1)&&~isCollision(q_new.pos,q_near.pos,obs2)) %line([q_near.pos(1), q_new.pos(1)], [q_near.pos(2), q_new.pos(2)], 'Color', 'k', 'LineWidth', 1); q_new.cost = distance(q_new.pos, q_near.pos) + q_near.cost; % Within a radius of r, find all existing nodes q_nearest = []; r = 10; neighbor_count = 0; ni=[]; for j = 1:1:length(tree) if (noCollision(tree(j).pos,q_new.pos,obs1)&&noCollision(tree(j).pos,q_new.pos,obs2)&&distance(tree(j).pos, q_new.pos) <= r) neighbor_count = neighbor_count+1; q_nearest(neighbor_count).pos = tree(j).pos; q_nearest(neighbor_count).cost = tree(j).cost; ni=[ni,j]; end end % Initialize cost to currently known value q_min = q_near; C_min = q_new.cost; % Iterate through all nearest neighbors to find alternate lower % cost paths for k = 1:1:length(q_nearest) if (q_nearest(k).cost + distance(q_nearest(k).pos, q_new.pos) < C_min) q_min = q_nearest(k); C_min = q_nearest(k).cost + distance(q_nearest(k).pos, q_new.pos); end end % Update parent to least cost-from node for j = 1:1:length(tree) if tree(j).pos == q_min.pos q_new.parent = j; q_new.cost = C_min; break end end % Add to tree tree = [tree,q_new]; tree(q_new.parent).child=[tree(q_new.parent).child,length(tree)]; %Rewire %Iterate through all nearest neighbors to rewire them with lower cost %path for k = 1:1:length(q_nearest) if ( q_new.cost + distance(q_nearest(k).pos, q_new.pos) < q_nearest(k).cost) tree(ni(k)).parent=length(tree); tree(ni(k)).cost= q_new.cost + distance(q_nearest(k).pos, q_new.pos); updatecost(tree,ni(k)); %Update cost of children end end end best_cost=1e9; best_goalindex=0; if(goal_reached) for k=1:1:length(goal_nodes(:,2)) for j = 1:1:length(tree) if(tree(j).pos(1)==goal_nodes(k,1)&&tree(j).pos(2)==goal_nodes(k,2)&&tree(j).cost<best_cost) best_goal=tree(j); best_cost=tree(j).cost; best_goalindex=j; end end end % Search backwards from goal to start to find the optimal least cost path q_goal.parent = best_goalindex; q_end = q_goal; finalcost=0; tree = [tree q_goal]; while q_end.parent ~= 0 start = q_end.parent; finalcost=finalcost+distance(q_end.pos,tree(start).pos); 
plot(q_end.pos(1),q_end.pos(2),'*','Color','r') line([q_end.pos(1), tree(start).pos(1)], [q_end.pos(2), tree(start).pos(2)], 'Color', 'r', 'LineWidth', 2); hold on q_end = tree(start); end finalnodes=[finalnodes,i]; fc=[fc,finalcost]; drawnow end end e=cputime-t figure(2) plot(finalnodes,fc); title('RRT* Convergence') xlabel('Iteration') ylabel('Path Cost') }
I am making an indoor robot and in order to detect static and dynamic objects I have decided to go with ST VL53L1X ToF proximity sensor, here is the link: https://www.st.com/en/imaging-and-photonics-solutions/vl53l1x.html My question is, will this sensor show me a 3D reconstruction of what the sensor actually sees by emitting a light? And what is the difference between this sensor and other 3D ToF cameras?
I have been trying to read Roomba raw encoder counts I am receiving values, but I have a sudden increase/decrease in values as can be seen over here. 142 152 153 166 167 180 181 196 50101 33280 207 223 222 238 236 252 #include <SoftwareSerial.h> #include <Wire.h> // Roomba Create2 connection int rxPin = 10; int txPin = 11; SoftwareSerial Roomba(rxPin, txPin); void setup() { pinMode(10, INPUT); pinMode(11, OUTPUT); pinMode(5, OUTPUT); Serial.begin(115200); Roomba.begin(19200); delay(1000); Roomba.write(128); //Start Roomba.write(132); //Full mode delay(1000); } char command; void loop() { if (Serial.available()){ command=Serial.read();} switch (command) { case '8': Roomba.write(byte(145)); Roomba.write(byte(0)); Roomba.write(byte(0x28)); Roomba.write(byte(0)); Roomba.write(byte(0x28)); delay(50); encoder_counts(); break; case '0': Roomba.write(byte(145)); Roomba.write(byte(0)); Roomba.write(byte(0)); Roomba.write(byte(0)); Roomba.write(byte(0)); break; } } void encoder_counts() { unsigned int right_encoder; unsigned int left_encoder; byte bytes[4]; Roomba.write(byte(149)); Roomba.write(byte(2)); Roomba.write(byte(43)); Roomba.write(byte(44)); delay(50); int i=0; while(Roomba.available()) { bytes[i++]=Roomba.read(); left_encoder = (unsigned int)(bytes[0] << 8)|(unsigned int)(bytes[1]&0xFF); right_encoder =(unsigned int)(bytes[2] << 8)|(unsigned int)(bytes[3]&0xFF); } Serial.print(right_encoder); Serial.print(" "); Serial.println(left_encoder); } so please kindly advice, Thank you
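Edit: one variant of my encoder_counts() above that I was considering (my own sketch, not yet tested on the robot) is to wait until all four bytes of the reply are buffered before combining them, so the two 16-bit values are never assembled from a partial packet:

void encoder_counts() {
  // request packets 43 and 44 (2 bytes each), as in my original code
  Roomba.write(byte(149));
  Roomba.write(byte(2));
  Roomba.write(byte(43));
  Roomba.write(byte(44));
  delay(50);

  if (Roomba.available() >= 4) {          // only parse a complete 4-byte reply
    byte b[4];
    for (int i = 0; i < 4; i++) b[i] = Roomba.read();
    unsigned int left_encoder  = ((unsigned int)b[0] << 8) | b[1];
    unsigned int right_encoder = ((unsigned int)b[2] << 8) | b[3];
    Serial.print(right_encoder);
    Serial.print(" ");
    Serial.println(left_encoder);
  } else {
    while (Roomba.available()) Roomba.read();  // flush a short or garbled reply
  }
}

Would that be the right direction, or are the sudden jumps in the values caused by something else?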
My thesis work is about GraphSLAM (The GraphSLAM Algorithm with Applications to Large-Scale Mapping of Urban Structures). I tried to implement it with the help of this paper, but during implementation I found that it requires a huge amount of memory if the dataset is large: it is an offline SLAM algorithm, so everything it computes is kept in memory. Because of this limitation, I failed to implement it. I then looked for an alternative, and from Probabilistic Robotics I found the Sparse Extended Information Filter (SEIF), which is referred to as the online version of GraphSLAM. The lectures on YouTube are very helpful for implementing SEIF. The picture shows the motion update for SEIF SLAM. My questions:

How many times does this algorithm iterate? There is no looping concept in it.
From line 9, I understand that $\Omega$ is a 33*33 matrix if I have 15 landmarks: the first 3*3 block is for the pose update and the rest are for the landmarks. From line 10, I understand that $\xi$ (Xi) is also a 3*3 matrix. Am I correct? According to the algorithm's derivation $\xi$ should have dimension 3*1, but $\bar\Omega_tF_x^t\delta_t$ is a 3*3 matrix, and matrix addition requires both operands to have the same dimension, with the result having that same dimension too. So I conclude that $\xi$ is 3*3. What is the significance of $\xi$, and what does that 3*3 block represent?
It is stated that "Just like the EKF, the SEIF integrates out past robot poses and only maintains a posterior over the present robot pose and map." Can anyone clarify this sentence with respect to this algorithm?
If in a dataset the first 500 records are odometry only and measurements are integrated from record 501 onward, how does the algorithm work? Which blocks are updated over time?
I'm a beginner in robotics and so far I have followed the lecture series by Prof. Oussama Khatib and some blogs and papers. Currently I'm following this studywolf blog, and in the end I plan to build a simple robot arm capable of moving a weight. Since all my questions concern the Lagrange equation, I'll ask them all here.

The blog says that we can ignore the Coriolis and centrifugal part of the equation and instead use a PD controller. Is that OK?
"We're actually going to have a PD controller for each joint." Does that mean different $k_p$, $k_v$ values for each joint arranged in matrix form, or separate equations for each joint?
The control signal here is the torque. Does that mean I need current (torque) feedback, or is encoder feedback alone (the speed and position required for the PD controller) enough, with the current simply fed to the motor without being measured?
When practically implementing this, do I need current-mode PWM to apply the torque, or is voltage PWM enough?
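To make sure I am reading the controller right, this is my understanding of the joint-space PD law (my own paraphrase, not a quote from the blog): with $q$ the vector of joint angles and $q_d$ the desired angles, the commanded torque would be

$\tau = K_p\,(q_d - q) - K_v\,\dot q,$

where $K_p$ and $K_v$ are diagonal matrices, so each joint effectively gets its own $k_p$ and $k_v$ even though it is written as one matrix equation. Is that the intended meaning of "a PD controller for each joint"?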
I have some doubts about EKF SLAM with known correspondences, specifically in the measurement update step. I follow the algorithm from Probabilistic Robotics by Sebastian Thrun (Chapter 10, page 249) and attached a snapshot of the part in question. Lines 10 and 12 of the algorithm confuse me: I want to know what $\bar\mu_{t,x}$ and $\bar\mu_{t,y}$ are. As I understand it, this is the robot position at the timestamp when it sees the landmarks, but I am not sure, so it would be very helpful if anyone could clarify lines 10 and 12.
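To show where my confusion is (this is my own reading, not a quote from the book): I take $\bar\mu_t = (\bar\mu_{t,x}, \bar\mu_{t,y}, \bar\mu_{t,\theta})^{T}$ to be the predicted robot pose after the motion update, and $(\bar\mu_{j,x}, \bar\mu_{j,y})$ to be the estimated position of landmark $j$, so the quantity built from these terms would be the offset from the robot to the landmark,

$\delta = \begin{pmatrix} \delta_x \\ \delta_y \end{pmatrix} = \begin{pmatrix} \bar\mu_{j,x} - \bar\mu_{t,x} \\ \bar\mu_{j,y} - \bar\mu_{t,y} \end{pmatrix},$

which then feeds the predicted range and bearing. Is $\bar\mu_{t,x}, \bar\mu_{t,y}$ really just the predicted robot position at time $t$, or does it mean something else in this algorithm?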
I studied the Sparse Extended Information Filter (SEIF) SLAM and want to clarify some points. As I understand SEIF SLAM, when the robot sees some landmarks it updates the information matrix according to its position and the landmark indices. Say the robot is at position $x_t$ and sees landmarks $m_1$ and $m_2$; according to the algorithm it updates the rows and columns of $x_t$, $m_1$, $m_2$. Then it moves, and at position $x_{t+1}$ it sees landmark $m_3$, so it updates the rows and columns of $x_{t+1}$, $m_3$, and there is also a weak link between $m_1$, $m_2$ and $m_3$. At position $x_{t+1}$ it discards the previous position, i.e. the rows and columns corresponding to $x_t$.

My doubt is: what happens to the information matrix when the robot moves around the arena without observing any landmarks? Say $x_1$ is the robot's initial position and it then goes to the next position $x_2$; as per the algorithm the $x_1$ position is discarded. So if the robot only moves around without observing landmarks, does that mean only the first cell of the information matrix, say (0,0), is overwritten all the time? And what is the effect on $\mu$? The key formula on which the algorithm is based is $\mu=\Omega^{-1}\xi$, where $\Omega$ is the information matrix and $\xi$ is the information vector. The algorithm is taken from Probabilistic Robotics, Chapter 12, page 315 [pdf] / page 304 [hard copy].
I am looking into building a self-docking robot that can charge itself when needed. To accelerate the prototyping phase, I am considering an AlphaBot2 with a Raspberry Pi 3 B+ as the development platform. I have two main concerns:

1) The AlphaBot2 docs have very little information on power consumption and on providing an alternative battery source. I am worried about the tightly designed PCB not providing a way to charge the battery pack or add an alternative power source. Does anybody have experience with this kit?

2) Would an inductive charging set like this one be able to charge a 3.7 V Li-Po or Li-Ion battery through the PowerBoost 500 charger? I don't care about the charge time being long, as long as it's possible, since I am more interested in the software challenge of finding the dock and aligning with it.

I appreciate any guidance and information you can provide. Thank you!
I am trying to install ROS on my Intel-based Ubuntu 18.04 laptop, and from the documentation it seems they only made it for AMD? Am I missing something? It never finds any of the packages: "Unable to locate package ros-melodic-desktop-full". Any solutions other than formatting the machine and going back to 14.04?
I am trying to decide between taking a compilers and interpreters course where we create an OpenGL shader compiler and a databases course. My aim is to go into robotics engineering and I am wondering which of these courses would provide the most benefit for that career path?
I am working on Sparse Extended Information Filter SLAM, using Probabilistic Robotics by Dr. Sebastian Thrun as the reference (Chapter 12, page 303). I have a doubt about the implementation of the algorithm, specifically the concept of passive and active landmarks. As per the picture, at position $x_t$ the robot sees $y_1$ and $y_2$ and updates the matrix accordingly. Now let's say the robot sees landmark $y_1$ again at position $x_{t+10}$. The information matrix $\Omega$ then replaces the $x_t$ entries with $x_{t+10}$, but what happens to the row and column belonging to $y_1$? I cannot figure it out even after reading the algorithm several times.
It is quite hard (or, for large robots, almost impossible) to hand-code ROS robot definitions in URDF XML files, which is why SolidWorks has a model exporter: http://wiki.ros.org/sw_urdf_exporter. SolidWorks has open-source alternatives like FreeCAD and Blender. My question is: does the SolidWorks CAD-to-URDF exporter work from FreeCAD, or is there an equivalent exporter for other open-source CAD programs?
I'm trying to build a quadruped robot. I studied dynamics and to get an idea, I read and watched papers and videos related to robots like ALof and StarlETH and some more. But since this is my first project it would be pretty difficult to follow everything and technically I won't be able to, because of no prior experience. To get to the problem, I'm stuck on choosing an actuation method. I plan to build the thing using aluminum and keep it lightweight. Hopefully 10kg and battery powered. so hydraulics is a no go. Currently I am on the designing stage. I have considered using dc geared motors directly on joints rather than -keeping on hip and transferring via gears and chains- to keep the design simple, but I guess it adds more strain on hip motors with my design. due to budgetary limitations and availability, I'm stuck with motors like XD37GB520 (currently 12v 100rpm but can change it) and no harmonic drivers or maxon motors. but I'm concerned with its ability to hold the weight of the robot. if I try to increase the torque capacity I lose angular velocity. I have also considered about lead-screw and motor method. but have little or zero knowledge about it. I know that the linear actuation's speed depends on the number of threads per unit length of screw and rpm of the motor, but how should I estimated the torque of the motor and the torque or force that can be expected from the mechanism (something like rated and stall torques of DC motors, I suspect stall torque would just depend on screw joints to the leg links) does the torque of the motor only depend on the friction of the nut? I hope to have the arrangement of motors as follows left: 3 DoFs leg with DC geared motors on each joint top motor is parallel to the drawing-plan. right: same leg with lead-screw, top screw is in the plane So how should I select a motor (torque rating) if I want to implement lead screw and motor? if I'm to stick with DC geared on joints, how much minimum rpm should I keep at a joint? (so that I can try to find a motor with that rpm and torque required). what is better for this kind of thing dc geared or lead-screw?. PS: I'm still designing this and I still need to start implementing Matlab or Simulink simulations. So I don't really know which rated torque will be required. I just need to decide on an actuation method to finish the design and move forward.
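For the lead-screw sizing specifically, the back-of-envelope relation I am planning to use (the standard power-screw approximation, so please correct me if it does not apply here) links the axial force $F$ to the required motor torque $T$ through the screw lead $l$ (axial travel per revolution) and an overall efficiency $\eta$:

$T \approx \dfrac{F\, l}{2\pi\,\eta}, \qquad v = \dfrac{l \cdot \text{rpm}}{60}.$

So a fine-pitch screw trades linear speed for a large force at modest motor torque, and with a low enough efficiency (a plain nut rather than a ball screw) it can even be non-backdrivable, which would let a joint hold position without stalling the motor. Does that match how these mechanisms are normally sized, and is the friction of the nut really the dominant term in $\eta$?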
I am trying to run the RRT algorithm for motion planning for a quadrotor. The quadrotor sometimes ends up in corners without reaching the goal. From sources online I gathered that this is due to a local minimum occurring. I want to know what a local minimum is, why it occurs, and how to keep the quadrotor from getting stuck in corners. Do effective methods exist to overcome local minima in sampling-based motion planning algorithms?
So I was browsing through the localization section and found this question and the code which answered it. https://robotics.stackexchange.com/a/7932/21145 but I have a follow-up question to this. So the code is exactly what I need but my "beacon" isn't in the middle but in the front (image following). Do I have to change the code or is the solution implemented general enough, I am not quite sure? The beacon is marked as point A. Edit: Code and clarification better showcase of beacon location So I chose the mathematical convection and I took the code from the previous question and modified it. The heading is alligned with the x-Axis. Is it theoratically right? So new it would be: A_x = M_x + cos(θ + (-25))*r A_y = M_y + sin(θ + (-25))*r //current points float xc = -300; float yc = 300; //target points float xt = -300; float yt = -300; //turning angle float turnAngle; //*************** float beaconHeading = -25; float startingHeading = 0; float currentHeading = startingHeading + beaconHeading; //*************** //*************** float turnRight = 1; float turnLeft = -1; float wheelBase = 39.5; float tireRadius = 7; float speedModifier = 0; float vRightMotor = 0; float vLeftMotor = 0; float vMotor = 0; float theta = 0; float distance = 0; //************** void setup() { // pin setup Serial.begin(9600); } void loop() { //************* destinationHeading = atan2((yt-yc), (xt-xc)); //calculate turning angle destinationHeading = destinationHeading * 180/3.1415; //convert to degrees turnAngle = destinationHeading - currentHeading; //************* if (turnAngle > 180) { turnAngle = turnAngle-360; } if (turnAngle < -180) { turnAngle = turnAngle+360; } //*************** if (turnAngle < 0) { speedModifier = turnRight; } if (turnAngle > 0) { speedModifier = turnLeft } theta = 0; while abs(abs(theta)-abs(turnAngle)) > 0 { vRightMotor = speedModifier * <100 percent speed - varies by your application>; vLeftMotor = -speedModifier * <100 percent speed - varies by your application>; <send the vRightMotor/vLeftMotor speed commands to the motors> vMotor = vRightMotor; thetaDot = (tireRadius * vMotor) / (wheelBase/2);` theta = theta + thetaDot*dT; } <send zero speed to the motors> currentHeading = destinationHeading; distance = sqrt((xt - xc)^2 + (yt - yc)^2); if (distance > 0) { // go forward } xc = xt; yc = yt; //**************** } I feel like I only had to change one line or am I wrong? I mean if I change the currentheading so that beacon is on a straight line with the next point it works. Maybe I didn't clarify it but I want to reach the next point so that the point A is the same as the point I'm heading to. I did make the angular offset a negative number because if I compare the angle to x-Axis it does go cw and not ccw, I tested it with geogebra and it is fine if I do it like this. The robot I use is a Arlo Complete Robot System so I could let the robot turn with other commands.
I'm building a robot that uses the arduino uno microcontroller, and the tracks system is run by the tamiya 70168-Double-Gearbox-Kit. Each Tamiya motor only uses 3V which is fine but it seems the stepper will help to control speed and direction of the motors. Does the Arduino Uno Microcontroller not allow you to control speed and direction? Do I need a stepper to have control of the tamiya gearbox?
I'm designing a PMSM controller; the motor has an optical encoder, so I can get position information from it. But how should I measure the velocity of the motor? My current design is to take the difference of successive positions and then apply a low-pass filter to estimate the velocity, but this approach does not work well when the motor runs at low speed. I have read some research on improving velocity estimation, such as the Luenberger observer, the extended state observer, and the Kalman filter, but I don't know which one I should choose since I have almost no theoretical background.
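For reference, this is the baseline I have right now (a minimal sketch of the difference-plus-low-pass approach I described, assuming a fixed sample period Ts and an encoder position that has already been converted to radians and unwrapped):

// finite difference of encoder position followed by a first-order low-pass filter
struct VelocityEstimator {
    double Ts;           // sample period in seconds, e.g. 0.001 for a 1 kHz loop
    double tau;          // low-pass filter time constant in seconds
    double prevPos = 0;  // previous position in rad
    double velFilt = 0;  // filtered velocity estimate in rad/s

    VelocityEstimator(double Ts_, double tau_) : Ts(Ts_), tau(tau_) {}

    double update(double pos) {
        double velRaw = (pos - prevPos) / Ts;  // raw difference, very noisy at low speed
        prevPos = pos;
        double a = Ts / (tau + Ts);            // discrete first-order low-pass gain
        velFilt += a * (velRaw - velFilt);
        return velFilt;
    }
};

At low speed the raw difference is only a few encoder counts per sample, so quantization noise dominates and the filter has to be made slow, which is why I am looking at the observer-based alternatives. Which of them is the usual starting point, and how accurate a motor model do they need?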
The answer options were:

1. 6, because a rigid body has six degrees of freedom
2. 4, since it is similar to a quadrotor, except with more motors
3. 6, because there are six motors

I was taking an online quiz, and I thought option 3 was the correct one, only to find out I got the answer wrong. I know that I could change the answer 2 times and by chance I finally got it right, but I want to understand why option 3 is wrong.