Background: Drone 1 is a six-propeller drone with a 20C 3S 6400 mAh 11.1 V LiPo battery. Drone 2 is a four-propeller drone with a 25C 2S 5000 mAh 7.4 V LiPo battery. Behavior: Drone 1 flies with ease; Drone 2 struggles to hover 2-3 inches above the ground. Question: The microcontroller, all props, ESCs, and motors are the same. I'm thinking the reason the drones fly so differently is the difference in batteries. If the batteries are the reason, what property is most responsible for the difference in flight?
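A quick back-of-the-envelope comparison using only the numbers quoted above (treating the C ratings, capacities and nominal voltages as exact) may help narrow down which battery property matters:
$$I_{max,1} = 20\,\text{C} \times 6.4\,\text{Ah} = 128\,\text{A}, \qquad P_{max,1} \approx 11.1\,\text{V} \times 128\,\text{A} \approx 1420\,\text{W}$$
$$I_{max,2} = 25\,\text{C} \times 5.0\,\text{Ah} = 125\,\text{A}, \qquad P_{max,2} \approx 7.4\,\text{V} \times 125\,\text{A} \approx 925\,\text{W}$$
The deliverable current is nearly identical for the two packs, so any battery-driven difference has to come in through the pack voltage (cell count): with identical motors, ESCs and props, the achievable motor speed scales with the applied voltage.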
I need to build a conversion/mapping algorithm from a controller (PID etc.) output to a duty cycle in order to command my BLDC motor via an ESC. I haven't been able to do it yet, because I think I don't understand what the controller output actually means. Can anybody point me in the right direction?
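A minimal sketch of the kind of mapping the question is asking about, assuming the controller output has been normalized to [0, 1] and the ESC expects a standard RC-style 1-2 ms pulse (both assumptions, not facts from the question); the function name is made up:

def controller_output_to_pulse_us(u, u_min=0.0, u_max=1.0):
    """Map a normalized controller output u to an RC ESC pulse width in microseconds."""
    u = max(u_min, min(u_max, u))                # saturate the command
    fraction = (u - u_min) / (u_max - u_min)     # 0.0 = idle, 1.0 = full throttle
    return 1000.0 + fraction * 1000.0            # 1000 us -> 0% throttle, 2000 us -> 100%

# Example: a controller output of 0.35 becomes a 1350 us pulse, i.e. a duty cycle of
# 1350/20000 = 6.75% at a 50 Hz ESC update rate.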
I have a 16-Channel Servo Driver board from Adafruit (see here), and I communicate with it via I2C using a Raspberry Pi. The servo board is controlling a Qbrain by sending a PWM pulse between 1 ms and 2 ms, and it works great. The problem is, I'm trying to create a kill switch such that the signal from the servo board would cease and the ESC would stop because it detects no PWM signal. I have placed a toggle switch that cuts the VCC to the servo board, so technically it should no longer produce any PWM signal. However, when the power is cut, the ESC jumps to 100% throttle. I can only assume this is because the ESC believes the signal is at 100% duty cycle. How do I solve this?
The task of the robot is as follows. My robot should catch another robot in the arena, which is trying to escape. The exact position of that robot is sent to my robot at 5 Hz. Other than that, I can use sensors to identify that robot. Is it possible to estimate the next position of the other robot using a mathematical model? If so, can anyone recommend tutorials or books to refer to?
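A minimal sketch of the simplest such model, a constant-velocity predictor that extrapolates the target's next position from the 5 Hz updates mentioned above. Variable names are illustrative; in practice this prediction is usually wrapped in a Kalman filter to cope with measurement noise:

import numpy as np

dt = 1.0 / 5.0   # time between position updates (5 Hz)

def predict_next_position(p_prev, p_curr, horizon=dt):
    """Constant-velocity extrapolation from the last two measured positions."""
    p_prev = np.asarray(p_prev, dtype=float)
    p_curr = np.asarray(p_curr, dtype=float)
    velocity = (p_curr - p_prev) / dt        # estimated velocity of the escaping robot
    return p_curr + velocity * horizon       # where it should be one update ahead

# Example: the target moved from (1.0, 2.0) to (1.2, 2.1) in one update,
# so the predicted next position is (1.4, 2.2).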
I'm working on a 2-wheeled robot and have connected a Raspberry Pi to an L298N motor driver. I'm sending the enable pin of a particular motor a software-generated PWM signal at 100 Hz with a 50% duty cycle. I observe with an oscilloscope: a fairly clean square wave going into the enable pin, as expected, and a fairly dirty square wave across the output motor terminals. The motor turns at about 50% speed/torque, as expected. I find myself wondering if it would be better to control the speed of the motor by placing a flat, lower, constant voltage across its terminals rather than oscillating a square wave; i.e. to get 50% speed/torque, instead of oscillating between 0 V and 5 V, just put a constant 2.5 V across the motor terminals. I wonder if the oscillation is a waste of power/energy. Is this true? Or doesn't it make any difference? Do high-end motor drivers use a variable flat analog voltage to control speed/torque, or do they use PWM? If PWM, does the frequency make any difference?
My Background: My experience is in solid mechanics and FEA, so I have zero experience in robotics/controls. Problem Description: I'm developing a control strategy to stabilize a complicated 6-legged dynamical system. Torques Ti from each leg's joints will be used to create a net moment M on the body, stabilizing the system. This moment M is known from the pre-determined control strategy. (Side note: the dynamical solver is of the nonlinear computational type.) Due to my lack of background, I have a fundamental confusion with the dynamical system. I want to use joint torques Ti to create this known net moment M on the body. This moment M is a function of: the current positions/angles of all the leg segments; the reaction forces and moments (that cannot be controlled) of each leg; the controllable joint torques Ti of each leg; and time. $(*)$ At a given time $(n-1)\Delta t$: from the control strategy, the desired net moment M is computed/known; one can read/sense the legs' positions, angles, reaction forces, and reaction moments (say, from well-placed sensors) at this time $t = (n-1)\Delta t$; from this information, vector algebra easily yields the desired joint torques Ti required to create the net moment M. $(**)$ At time $n\Delta t$: one applies the previously determined joint torques Ti (determined at $t = (n-1)\Delta t$) to create the desired moment M; of course these torques Ti are applied at the immediately following time step because they cannot be applied instantaneously. So this is exactly where my fundamental confusion lies. The torques Ti were calculated in $(*)$, based on the angles/positions/reactions at $(*)$, with the objective of creating moment M. However, these torques Ti are applied in $(**)$, where the data (angles/positions/reactions) are now different, so the desired net moment M can never be created (unless you can magically apply actuation at the instant of sensing). Am I understanding the controls problem correctly? Questions: Am I understanding the robotics problem correctly? What are the terms and strategies around this dilemma? Of course I could make the time step between sensing and actuation infinitesimally small, but this would be unrealistic/dishonest. What is the balance between a time step that is realistic and one that still performs the task well?
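A minimal runnable sketch (Python, with placeholder stub functions that stand in for the user's own sensing, control strategy and inverse-dynamics solver) of the sense/compute/actuate structure described above, just to make the one-step delay explicit. This is the standard discrete-time control setup; the delay is normally either treated as part of the plant model or made negligible by choosing the control period much shorter than the system's dominant time constants:

import time

dt = 0.001   # control period (assumed), much shorter than the body's dominant dynamics

# Placeholder stubs: in the real system these talk to sensors/actuators and the
# user's own control strategy and torque solver (all names here are made up).
def read_sensors():            return {}          # angles, positions, reactions at time n*dt
def control_strategy(state):   return 0.0         # desired net moment M on the body
def solve_joint_torques(M, s): return [0.0] * 18  # joint torques Ti that produce M for this state
def apply_joint_torques(u):    pass

u_next = None
while True:
    state = read_sensors()
    if u_next is not None:
        apply_joint_torques(u_next)            # torques computed from the PREVIOUS sample
    M = control_strategy(state)                # desired net moment at this sample
    u_next = solve_joint_torques(M, state)     # will only take effect one step later
    time.sleep(dt)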
I saw Buddy's page and want to purchase one for my SLAM research. However, I wonder whether it is possible to program Buddy for SLAM. According to Buddy's spec, there are only a few IR sensors, sonars and a camera. As far as I know, most SLAM algorithms are implemented with more powerful sensors such as an RGB-D/stereo camera, or even a laser range finder. Are there any papers that mention IR-based SLAM?
I am using ikfast in OpenRAVE for my inverse kinematics. This is an analytical solver, so if your robot's DOF matches the IK type's DOF, then you get all possible solutions. But if your robot has more DOFs, then you need to pick some joints to have a constant value. (However, if you use OpenRAVE's Python interface it will discretize that joint for you, i.e. give you a set of solutions for every 0.1 radians of that joint. But my question holds for either interface.) I have a 7-DOF anthropomorphic arm with joints: Roll-Pitch-Roll-Pitch-Roll-Pitch-Yaw, as seen in this image: The discretized joints are called "free joints" in OpenRAVE's terminology. If I let ikfast decide, it picks joint 3 (upper arm roll) to be the free joint. However, I have been using joint 4 (elbow) as the free joint because it is easier for me to think about. But then I realized that perhaps joint 5, 6, or 7 would be better to discretize because they are closer to the end of the chain. Won't the IK solutions suffer if joints closer to the start of the chain have a large discretization? Or is OpenRAVE picking the optimal joint to discretize? I was just wondering if there are some standard practices or known conventions for this sort of thing. Put simply: I want a set of IK solutions for the end-effector at some pose. I will fix a joint either near the start or the end of the kinematic chain, and what I set it to isn't going to be perfect. Let's say it is off from some "ideal" position by some epsilon. Now you can imagine that if I want the hand in front of the robot, and I pick a bad angle for the shoulder (like straight up, for example), the rest of the joints will have a hard time getting the end-effector to the target pose, if they can at all. But if I fix the wrist at some awkward angle, there is still a good chance of getting the end-effector there, or at least close. What kind of trade-offs are there? Which will have a "better" set of solutions?
I am trying to find the relation between RPM and thrust for a battery + motor + propeller combination. The image shows my setup and also the measurement results. Can anyone explain how I should use this data? (I know Kv·V gives the RPM, but my voltage values decrease under load because of the P = V·I relation, etc.)
Can I connect a UDOO board to a PC using a straight-through ethernet cable? Or do I need a cross-over cable? As far as I know, most modern devices can use the two interchangeably. However, I am not sure if a UDOO can do that. Anyone with any experience? Thank you for your help. (PS: I don't have a UDOO on me at the moment, so I can't test it myself. Couldn't find any information in the documentation either).
Say I had an object with 4 motors/wheels attached (in a fairly standard arrangement). I need to calculate the amount of torque required from the motors to be able to move an object of x kilograms consistently (without skipping any steps) at a velocity of y, travelling up a slope of angle z. I'm guessing this would also depend on factors like the grip of the tyres and such?
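A rough first-pass estimate, ignoring acceleration and aerodynamic drag and assuming the load is shared evenly by four wheels of radius $r$ with rolling-resistance coefficient $C_{rr}$ (both $r$ and $C_{rr}$ are assumptions not given in the question):
$$\tau_{\text{per motor}} \approx \frac{m\,g\,\left(\sin z + C_{rr}\cos z\right)\, r}{4}$$
At constant velocity the torque does not depend on $y$, but the required mechanical power does: $P \approx m\,g\,(\sin z + C_{rr}\cos z)\,y$. Tyre grip then sets an upper bound: the total drive force must stay below roughly $\mu\, m\, g \cos z$, otherwise the wheels slip no matter how much torque the motors provide.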
I am trying to implement EKF-SLAM using the algorithm for unknown correspondences proposed in the book "Probabilistic Robotics" by Sebastian Thrun, in Table 10.2. By now I understand essentially all of the algorithm except the initialization of new landmarks in the covariance matrix $P_{new}$. In that algorithm, when a new landmark is detected, the procedure is just the same as a normal measurement update for an already observed landmark: the Kalman gain $K$ is calculated for the new landmark and then the covariance is updated with that Kalman gain and the Jacobian $H$ of that new landmark, like this: $P_{new} = (I - K H) P$. In my understanding, a newly observed landmark should not have any effect on the rows and columns of the covariance matrix that correspond to already mapped landmarks or the robot pose. Instead, I think that just two new rows and columns for x and y should be created with some uncertainty, as proposed here: the uncertainty of initializing new landmark in EKF-SLAM. I tried to break down the calculation of $P_{new}$ by computing it blockwise, to see if I could somehow arrive at the same initialization as shown in the link above. But I end up with a different covariance matrix, where the new landmark apparently affects the rows and columns of the old covariance, which in my view can't be right. I hope I don't misunderstand the pseudocode of the book, or that I made a mistake in my attempt to arrive at the same initialization. Any advice on how the initialization of new landmarks works in that code, or whether it actually is the same as in the link, will be appreciated. Edit: So basically what I am asking is: why would they do a normal Kalman update of the covariance matrix in line 24 of Table 10.2 for a newly observed landmark? Why is there no explicit case for initializing new rows/columns of newly observed landmarks in the covariance matrix? It seems to me like they just do a normal measurement update even for a landmark that has only just been observed.
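For reference, a minimal numpy sketch of the explicit covariance-augmentation step that the linked answer describes, i.e. the alternative the question is comparing Table 10.2 against. The variable names are made up; Gx is the Jacobian of the inverse observation function with respect to the robot pose (assumed to occupy the first three states) and Gz its Jacobian with respect to the range/bearing measurement, with measurement noise R:

import numpy as np

def augment_with_new_landmark(x, P, Gx, Gz, R, landmark_xy):
    """Append a newly observed landmark to the state and covariance.
    x, P: current state vector and covariance; R: measurement noise (2x2).
    Gx (2x3), Gz (2x2): Jacobians of the inverse observation model."""
    n = len(x)
    x_new = np.concatenate([x, landmark_xy])                 # grow the state

    P_new = np.zeros((n + 2, n + 2))
    P_new[:n, :n] = P                                        # old entries are left unchanged
    P_ll = Gx @ P[:3, :3] @ Gx.T + Gz @ R @ Gz.T             # new landmark covariance
    P_lx = Gx @ P[:3, :n]                                    # cross-covariance with pose/old landmarks
    P_new[n:, n:] = P_ll
    P_new[n:, :n] = P_lx
    P_new[:n, n:] = P_lx.T
    return x_new, P_new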
I'm using Matlab to suppress low-frequency components with a high-pass filter. Objective: filter angular velocity measurements affected by high-frequency noise and bias, in order to get the best estimate of the angular position. The output when the gyroscope is still looks like this. First approach: the easiest way to remove the baseline is to remove the average, which can be achieved in Matlab using one line of code: yFilt = y - mean(y). Second approach: we can design a high-pass filter to attenuate low-frequency components. If we analyze the frequency components of the signal, we will see one peak at low frequency and "infinitely" small components at all frequencies due to noise. With a second-order Butterworth filter with normalized cutoff frequency Wn = 0.2 we get what we are looking for. Filtered data: Tilting the gyro: when we tilt the gyroscope, the situation changes. With a sampling frequency of 300 Hz we get the following plot. The first half of the DFT is shown below on a normalized scale. You can find the sample.mat file here. The first approach works great. I would like to apply the second one to this particular case, but here there are other low-frequency components that make the job harder. How can I apply the second approach, based on the high-pass filter, to remove the bias? EDIT 1: You can find more information here. EDIT 2: How can we filter this signal to remove the bias while keeping the angular velocity information (from the 110th to the 300th sample) intact? If gyroscopes have the bias problem only when they are not experiencing any rotation, then the offset is present only in the first ~110 samples. If the above hypothesis is correct, maybe if we apply high-pass filtering only to the first 110 samples and deactivate the filter during rotations of the gyro, the estimated angular position will be more accurate.
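For reference, a minimal Python/scipy version of the second approach (the same design as MATLAB's butter/filtfilt), using the 300 Hz sampling rate and the normalized cutoff Wn = 0.2 mentioned above; the stand-in signal is made up and should be replaced with the data from sample.mat:

import numpy as np
from scipy import signal

fs = 300.0                                       # sampling frequency from the question
y = np.random.randn(1000) * 0.01 + 0.5           # stand-in gyro signal: noise plus a constant bias
b, a = signal.butter(2, 0.2, btype='highpass')   # 2nd-order Butterworth high-pass, Wn = 0.2 of Nyquist
y_filt = signal.filtfilt(b, a, y)                # zero-phase filtering removes the constant offset

Note that such a filter will also attenuate any genuinely slow rotation, which is exactly the tension described in EDIT 2.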
I have an Arduino, wires, resistors, all of that good stuff. However, I don't have materials to build the structure of the robot. What do you recommend? I don't have a place to solder yet, so I can't solder; is there a kit or material that you recommend? Will it work well with motors and other components? Thanks! P.S. I plan on building a standard driving robot, but I want to be able to make other robots with the same materials/kit. I don't want a kit that only makes one robot; I want a Lego-esque approach to building the structure, where I can build whatever I want with it.
I am working on a project that needs to track the location and speed of pedestrians/runners/athletes (so not really robotics, but I see a lot of related usage and posts in the robotics domain, and an answer to this question could help with follower robots). I'm interested in just the 2D location (latitude-longitude). Using just the GPS position gives noisy/jumpy samples, plus degradation due to multipath near trees etc. From reading about filtering solutions, I understand that sensor fusion, which fuses GPS with data from inertial sensors (INS), helps improve a lot of these issues. Also, this kind of sensor fusion seems to be used in a lot of places: robotics, wearables, drones etc. Hence I think there might be off-the-shelf chips/modules/solutions for this, but I couldn't find any. I found a sensor hub from InvenSense that integrates the 9-DoF inertial sensors and comes with the fusion firmware, but it doesn't seem to have hookups and firmware for fusing GPS and providing a filtered latitude-longitude. So, what should I be looking for? Are there any off-the-shelf chips/modules/solutions that come with built-in sensor fusion software/firmware for doing GPS+INS fusion? I understand that it will still need some parameter tuning as well as some calibration.
I want to use an accelerometer to find displacement. How can I compute displacement from accelerometer readings? I want to use this on a quadcopter.
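A minimal sketch of what this computation involves: subtract gravity, then numerically integrate twice. The data here is made up (a hovering IMU sampled at an assumed 100 Hz); in practice the result drifts quickly because noise and bias get integrated twice, which is why accelerometers are normally fused with other sensors (GPS, barometer, vision) on a quadcopter rather than used alone:

import numpy as np

dt = 0.01                                          # 100 Hz sample period (assumed)
accel = np.zeros((500, 3)); accel[:, 2] = 9.81     # stand-in IMU data: level hover, z-axis sees gravity
gravity = np.array([0.0, 0.0, 9.81])

velocity = np.zeros(3)
position = np.zeros(3)
for a_meas in accel:
    a_lin = a_meas - gravity        # remove gravity (assumes the orientation is known)
    velocity += a_lin * dt          # first integration: velocity
    position += velocity * dt       # second integration: displacement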
How does a quadcopter's arm length affect stability? My view is that I'll have better control of the copter with longer arms, at the cost of higher stresses in the arms, and that arm length doesn't affect lift capability.
We're building a 6-DoF joystick, and we need to accurately measure the displacement of our central device. We can easily use a mechanical connection to the edges, but there has been some discussion about the best way to achieve this. The range of motion will be fairly small, but accuracy is incredibly important for us. Which sensors are most easily and accurately measured? My first instinct is that rotational and linear potentiometers are the most reliable, but others have been arguing for gyros/accelerometers. I've also heard that Hall effect sensors can be used to great effect.
I'm searching for a (commercial) projector that just projects a single laser point into the world (e.g. using two moving mirrors). However, I'm struggling because I'm not sure what such a thing is called. I either find area projectors that use lasers, party equipment or laser pointers. What is the name for such a device?
How are the brushless motors in a gimbal assembly designed? Obviously it doesn't need continual rotation, but it does need accurate control of precise position. I've noticed that the motors in my gimbal don't have the usual magnetic 'snap' positions that my other motors do. What are the primary design differences in these kinds of motor, if any?
I have created a three-wheeled omni robot like the one in the diagram below. Now I am unsure of how to program it. I want to use a single joystick, so one x and one y value. The values for x and y are between -1 and 1; the motors can also be set anywhere from -1 to 1. How do I use this data to make the robot move based on the joystick without changing orientation? After doing some initial research this seems like a complex problem, but I am hoping there is a formula that I can use.
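A minimal sketch of the standard formula for this, assuming the three wheels are mounted 120 degrees apart at angles of 0, 120 and 240 degrees in the robot frame (the actual angles depend on how the diagram is oriented) and that the rotation term is held at zero so the orientation does not change:

import math

WHEEL_ANGLES = [math.radians(a) for a in (0.0, 120.0, 240.0)]   # assumed mounting angles

def joystick_to_wheel_speeds(x, y):
    """Map joystick (x, y) in [-1, 1] to three wheel speeds in [-1, 1], with no rotation."""
    speeds = [-math.sin(a) * x + math.cos(a) * y for a in WHEEL_ANGLES]
    m = max(1.0, max(abs(s) for s in speeds))    # rescale so no wheel command exceeds +/-1
    return [s / m for s in speeds]

# Example: pushing the stick straight forward (x=0, y=1) gives roughly [1.0, -0.5, -0.5].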
I'm working on a Python script which reads the data from the MPU6050 IMU and returns the angles using sensor fusion algorithms: a Kalman filter and a complementary filter. Here is the implementation: class MPU6050 reads the data from the sensor and processes it; class Kalman is the implementation of the Kalman filter. The problem is the following: neither the Kalman filter nor the complementary filter returns appropriate angle values for the Y angle. The filters work fine on the X angle, but the Y angle values make no sense. See the graphs below. I've checked the code a million times, but still can't figure out where the problem is.

import math
import time
import smbus
from array import array

class MPU6050():
    def __init__(self):
        self.bus = smbus.SMBus(1)
        self.address = 0x68
        self.gyro_scale = 131.072    # 65535 / full scale range (2*250deg/s)
        self.accel_scale = 16384.0   # 65535 / full scale range (2*2g)
        self.iterations = 2000
        self.data_list = array('B', [0,0,0,0,0,0,0,0,0,0,0,0,0,0])
        self.result_list = array('h', [0,0,0,0,0,0,0])
        self.gyro_x_angle = 0.0
        self.gyro_y_angle = 0.0
        self.gyro_z_angle = 0.0
        self.kalman_x = Kalman()
        self.kalman_y = Kalman()

    def init_sensor()...

    def calculate_angles(self):
        dt = 0.01
        comp_y = 0.0
        comp_x = 0.0
        print("Reading data...")
        while True:
            self.read_sensor_raw()
            gyro_x_scaled = (self.result_list[4] / self.gyro_scale)
            gyro_y_scaled = (self.result_list[5] / self.gyro_scale)
            gyro_z_scaled = (self.result_list[6] / self.gyro_scale)
            acc_x_scaled = (self.result_list[0] / self.accel_scale)
            acc_y_scaled = (self.result_list[1] / self.accel_scale)
            acc_z_scaled = (self.result_list[2] / self.accel_scale)
            acc_x_angle = math.degrees(math.atan2(acc_y_scaled, self.dist(acc_x_scaled, acc_z_scaled)))
            acc_y_angle = math.degrees(math.atan2(acc_x_scaled, self.dist(acc_y_scaled, acc_z_scaled)))
            comp_x = 0.95 * (comp_x + (gyro_x_scaled * dt)) + 0.05 * acc_x_angle
            comp_y = 0.95 * (comp_y + (gyro_y_scaled * dt)) + 0.05 * acc_y_angle
            kalman_y_angle = self.kalman_y.filter(acc_y_angle, gyro_y_scaled, dt)
            kalman_x_angle = self.kalman_x.filter(acc_x_angle, gyro_x_scaled, dt)
            self.gyro_x_angle += gyro_x_scaled * dt
            self.gyro_y_angle -= gyro_y_scaled * dt
            self.gyro_z_angle -= gyro_z_scaled * dt
            time.sleep(dt)

    def read_sensor_raw(self):
        self.data_list = self.bus.read_i2c_block_data(self.address, 0x3B, 14)
        for i in range(0, 14, 2):
            if(self.data_list[i] > 127):
                self.data_list[i] -= 256
            self.result_list[int(i/2)] = (self.data_list[i] << 8) + self.data_list[i+1]

    def dist(self, a, b):
        return math.sqrt((a*a) + (b*b))

class Kalman():
    def __init__(self):
        self.Q_angle = float(0.001)
        self.Q_bias = float(0.003)
        self.R_measure = float(0.03)
        self.angle = float(0.0)
        self.bias = float(0.0)
        self.rate = float(0.0)
        self.P00 = float(0.0)
        self.P01 = float(0.0)
        self.P10 = float(0.0)
        self.P11 = float(0.0)

    def filter(self, angle, rate, dt):
        self.rate = rate - self.bias
        self.angle += dt * self.rate
        self.P00 += dt * (dt * self.P11 - self.P01 - self.P10 + self.Q_angle)
        self.P01 -= dt * self.P11
        self.P10 -= dt * self.P11
        self.P11 += self.Q_bias * dt
        S = float(self.P00 + self.R_measure)
        K0 = self.P00 / S
        K1 = self.P10 / S
        y = float(angle - self.angle)
        self.angle += K0 * y
        self.bias += K1 * y
        P00_temp = self.P00
        P01_temp = self.P01
        self.P00 -= K0 * P00_temp
        self.P01 -= K0 * P01_temp
        self.P10 -= K1 * P00_temp
        self.P11 -= K1 * P01_temp
        return self.angle

EDIT: I've added some information based on @Chuck's answer: self.result_list[3] contains the temperature. In my opinion the complementary filter is implemented correctly: gyro_x_scaled and gyro_y_scaled are angular velocities, but they are multiplied by dt, so they give angles; acc_?_scaled are accelerations, but acc_x_angle and acc_y_angle are angles. Check my comment, where the complementary filter tutorial is. Yes, there was something missing in the Kalman filter; I've corrected it. I totally agree with you that sleep(dt) is not the best solution. I've measured how much time the calculation takes, and it is about 0.003 seconds. The Y angle filters return incorrect values even if sleep(0.007) or sleep(calculatedTimeDifference) is used. The Y angle filters still return incorrect values.
I am currently working on a pose estimation problem for which I would like to use filtering. To explain the system briefly: it consists of two cameras, each with its own GPS/IMU module. The main assumption is that camera 1 is fixed and stable, whereas camera 2 has a noisy pose in 3D. I am using computer vision to obtain the pose (metric translation and rotation) of camera 2 w.r.t. camera 1, so that I can improve upon the inherent noise of the GPS/IMU modules. The problem here is that the translation obtained through the vision method is only up to an arbitrary scale, i.e. at any given instant I can only obtain a unit vector that specifies the "direction" of the translation, not the absolute metric translation. The camera-based estimation, although accurate, has no idea how much actual distance is between the cameras, which is why I have the GPS, which gives me position data with some noise. Example: if camera 2 is 5 m to the east of camera 1, the pose from my vision algorithm would say [1, 0, 0]; if it is 1 m north-east of camera 1, it would be something like [0.7, 0.7, 0]. Hence, would it be possible to take the GPS estimate of the metric translation, along with its covariance ellipse, and somehow link it with the normalized camera measurements to obtain a final, more accurate estimate of the metric translation? I am not sure what kind of filters would be happy to use a measurement that has no absolute value in it. Thanks!
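A minimal sketch, just to make the combination concrete (not a full filter): use the vision unit vector for direction and the GPS baseline only for its length; in a proper filter the GPS covariance would weight how much that length is trusted. Variable names are illustrative:

import numpy as np

def fuse_scale(unit_dir_vision, baseline_gps):
    """Scale the (directionally accurate) vision unit vector by the GPS baseline length."""
    d = np.asarray(unit_dir_vision, dtype=float)
    d = d / np.linalg.norm(d)                                        # ensure unit length
    scale = np.linalg.norm(np.asarray(baseline_gps, dtype=float))    # metric distance from GPS
    return scale * d                                                 # metric translation estimate

# Example: vision gives [0.7, 0.7, 0] and GPS says the cameras are ~1.0 m apart,
# so the fused translation is roughly [0.71, 0.71, 0] m.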
I am trying to build a 2-axis robot arm with a pan and tilt mechanism. The gripper/holder will hold an object weighing 300 grams. The total weight of the arm, including the motors, will be around 2 kg. I have decided to use 180-degree servo motors. The maximum arm reach will be 340 mm. What I want to ask is: What kind of servos (analog/digital) will be suitable to support the total weight (2 kg) and the object weight (300 g)? How do I calculate the required torque? How many servos should I use to make sure that my arm doesn't flip over? Please suggest a better approach to designing the robot if there is one. I am fairly new to electronics and this is the first time I am building a robot. Thanks in advance.
Are there any standards regarding single vs multiple MCU in a robotic system? More specifically, if a single MCU can handle all of the sensor data and actuator controls, is it better to use a single MCU or multiple MCUs in a hierarchical manner? Are there any references/papers regarding this topic? What are the arguments towards one or the other? I am looking for facts, not personal opinions, so pros, cons, standards and such.
I am a young researcher/developer coming from a different (non-robotics) background. I did some research on camera localisation and I have come to the point where I can say that I am lost and need some of your help. I have discovered that there are a lot of SLAM algorithms used for robots etc. As far as I know, they all work in unknown environments. But my situation is different. My problem, and idea at the same time, is this: I will be placed in a known room/indoor environment (the dimensions would be known). I would like to use a handheld camera. I can use predefined landmarks if they would help; in my case, I can put some "unique stickers" on the walls at predefined positions if that would help in any way for faster localisation. I would like to get my camera position (with its orientation etc.) in real time (30 Hz or faster). To begin with, I would like to ask which SLAM algorithm is the right one for my situation, or where to start. Or do you have any other suggestions for how to get real-time camera positions inside a known room/environment? It must be really fast and must allow fast camera movements. The camera would be carried by a person, not mounted on a robot. Thank you in advance.
I want to control a brushless motor with an "EMAX Simon Series 30 A ESC" and an Arduino (Leonardo) board. I am really confused about how to do that. I can't understand which beep sounds mean what. I have tested many code examples, but they weren't useful.
Hi there, I just found this old Rx & Tx in my loft and need to know whether it is compatible with my APM Micro 2.7.2. I already have telemetry, but that does not give me manual control. My guess is I need a new Rx, because the current one will make a hash of the electronics on the APM. Thanks in advance. (Photos of the Rx and Tx were attached.)
Could I have your opinions on PID type selection? System description: Here is a very simple system: $\mbox{Output}(t) = k (\mbox{Input}(t) + \mbox{systemVariable}(t))$. $k$ is constant and $\mbox{systemVariable}(t)$ is a system variable which may change over time. The goal of the whole system is to maintain the system output at $0$; it has to be as close to zero as possible. The controller has to compensate for $\mbox{systemVariable}$. The change in $\mbox{systemVariable}$ is modeled as a very slow ramp. Controller description: The controller's input is the output of the system. However, the measurements are always noisy, and I modeled band-limited white noise into the measurements. After the PID controller, the output goes into an integrator, since the PID controller always calculates the "change" of the plant input. Questions: My original thought was that a controller with only a proportional term P = 1/k should be enough, since every time the controller gets an error $e$, one can calculate back that the compensation on the controller output should be $e/k$. However, Matlab auto-tuning always gives me a full PID. Why is that? What is the relation between the P term of the PID and measurement noise? If P is large, the output will tend to fluctuate a lot due to the noise. If P is small, the system will tend not to converge to the correct value, or will do so only very slowly. How do I make the trade-off? In other words, how do I prevent the system from fluctuating too much while still getting a quick response? Thanks a lot!
I am working with an STM32F103C8, which has a flash size of 64 kB. I am using ChibiOS 2.6 and the build produces a binary file of 82 kB. Using the ST-LINK Utility, the program gets flashed into the microcontroller's flash memory. My question is: how does an 82 kB binary fit in 64 kB of flash? How is the size of that .bin file calculated? I am attaching a picture of the display. I did a "compare ch.bin with device memory" and it doesn't report any errors found. All parts of the code work just fine; I don't see any problems anywhere, I've tried all the features of the code, and nothing breaks or behaves abnormally. Could someone please explain this? Thanks!
I need an iRobot Create Serial Cable (one end 7-pin Mini-DIN Connector and the other end is USB) for Turtlebot I. How can I connect my bot to my PC?
I know that RC servo motors are designed for precise movement, rather than a DC motor's continuous rotation. Are most RC servo motors limited to movement within one rotation, or can they actually be made to rotate continuously? That is to say, is their movement limited to a specific arc? Or does it depend on the type of RC servo motor? I have seen videos of industrial-size steppers rotating continuously but, more specifically, I was wondering whether an MG995 can. I don't own any RC servo motors yet, so I can't actually test it myself; I just want to make sure before I make a purchase. I keep seeing conflicting information; for example the instructable, How to modify a RC servo motor for continuous rotation (One motor walker robot), implies that an RC servo motor will not rotate continuously, otherwise why would there be a need to modify it? Addendum: I have just realised, after further digging about on Google, and as HighVoltage points out in their answer, that I have confused steppers and servos. In addition, I found out how to hack the TowerPro MG995 servo for continuous rotation.
I am currently working on a balancing robot project which features fairly low-cost sensors such as a 9-DoF IMU with the measurement states $\textbf{x}_\text{IMU} = \left[a_x, a_y, a_z, g_x, g_y, g_z, m_x, m_y, m_z \right]^\text{T}$. Currently I use the accelerometer and gyroscope readings, fused by a complementary filter, to get the angular deviation from the robot's upright (stable) position. The magnetometer values are tilt-compensated and yield the robot's orientation with respect to the earth's magnetic field (awful when close to magnetic distortion). Furthermore, I have pretty decent rotational encoders mounted on the wheels which deliver information on each wheel's velocity, $\textbf{x}_\text{ENC} = \left[v_l, v_r\right]^\text{T}$. Given these measurements, I want to try to get the robot's pose (position + heading), $\textbf{x}_\text{ROB} = \left[x, y, \theta\right]^\text{T}$. I have some theoretical knowledge of the EKF and KF, but it is not sufficient for me to actually derive a practical implementation. Note that my computational resources are fairly limited (Raspberry Pi B+ with an RTOS) and that I want to avoid using ROS or any other non-standard libraries. Can anybody help me with how to actually approach this kind of problem?
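A minimal sketch of the prediction (dead-reckoning) step that most approaches to this problem start from: differential-drive odometry driven by the two encoder velocities, which a complementary or extended Kalman filter would then correct with the fused IMU heading. The wheel separation b and the update period are assumptions:

import math

b = 0.20     # wheel separation in metres (assumed)
dt = 0.01    # update period in seconds (assumed)

def propagate_pose(x, y, theta, v_l, v_r):
    """Dead-reckon the pose (x, y, theta) from the encoder velocities over one time step."""
    v = 0.5 * (v_l + v_r)          # forward velocity of the robot
    omega = (v_r - v_l) / b        # yaw rate from the wheel speed difference
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# In an EKF, this function plays the role of the motion model f(x, u); the heading from the
# complementary filter / magnetometer then enters as the measurement that corrects theta.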
I want to conduct the following experiment: I want to set up a scene with a kuka lwr4+ arm, a 3D model of an object and a camera overlooking them. I want to find the pose of the object using some pose estimation algorithm and move the arm towards the object. In general I want a piece of software or a combination of cooperating software that can do all that without having to reinvent the wheel. Is there anything available?
I am new to the Create 2, and I downloaded RealTerm to program it, opened an interface to the robot, and sent numbers to it. I can only get the drive command to work, and with it I only know how to make the robot go faster, go slower, or turn around. I would like to know how to make the other commands work, along with making it go left and right.
Recently we encountered the Kalman filter algorithm for state estimation in a course on Probabilistic Robotics. After taking several days to try to read Kalman's original paper published in 1960, A New Approach to Linear Filtering and Prediction Problems, I find it a bit difficult to read, and it seems the majority of it is devoted to showing that the orthogonal projection is the optimal estimate under certain conditions, and to solutions of Wiener's problem. But I did not find in this original paper the exact algorithm that appears in the textbook. For example, is there an explanation of the "Kalman gain" in this paper? Does Kalman's paper provide a mathematical derivation of the Kalman filter algorithm?
Can anyone please explain these lines, found on page 5 of Kinematics Equations for Differential Drive and Articulated Steering by Thomas Hellström, to me? Note that plugging in $r$ and $v$ for both left and right wheel result in the same $\omega$ (otherwise the wheels would move relative to each other). Hence, the following equations hold: $$ \begin{align} \omega~ \left(R+\frac{l}{2}\right) &= v_r\\ \omega~ \left(R-\frac{l}{2}\right) &= v_l\\ \end{align}$$ where $R$ is the distance between the ICC and the midpoint of the wheel axis, and $l$ is the length of the wheel axis (see Figure 6). Figure 6: when the left and right wheels rotate with different speeds, the robot rotates around a common point denoted the ICC. My questions are: How do these equations come to be? Why does $\omega$ have to be the same for both wheels if we want to analyse the behaviour after changing one wheel's velocity relative to the other? How do we know that the circle on which the robot rotates, when we vary one wheel's velocity, passes through the centre point between the two wheels?
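For reference, subtracting and adding the two quoted equations gives the closed-form expressions usually used in code (pure algebra on the equations above, no extra assumptions):
$$\omega = \frac{v_r - v_l}{l}, \qquad R = \frac{l}{2}\,\frac{v_r + v_l}{v_r - v_l}$$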
I'm an agricultural engineering student and a complete newbie trying to build a simple mechanism, attached to a drone, that dispenses a grease-type fluid. However, since I'm not familiar with the field, I'm having a hard time googling because I don't know the correct terms to search for. I'm looking for a mechanism that will remotely push the grease out. The problem is carrying the weight needed for a hectare (300 g to 1.5 kg of fluid) plus the dispenser mechanism on the drone. So I'm looking for a lightweight dispenser mechanism capable of delivering small amounts of this fluid (3 g) distributed over the trees' canopy. The grease does not need to be heated, as it flows naturally at normal temperatures (like toothpaste). Either a pump or a syringe-type arrangement would be fine, as long as I can control it remotely.
Suppose I have a 3-link (one-dimensional) chain in which all the joints are revolute. The axis of the first revolute joint is along the global Z-axis, and the axis of the second joint is along the global X-axis. The first link is along the global X-axis and the second link is along the global Z-axis. Now, in order to use the DH representation, I introduced a local frame for link 1 at joint 1 (z-axis along Z and x-axis along X) and another frame at joint 2. Here the z-axis is along the axis of rotation (the global X-axis), and here I am clueless about how to determine the x-axis for joint 2, because the two z-axes intersect (the standard procedure is to find the common normal between the two z-axes). Thanks for your time.
I am trying to read IMU sensor data from an Arduino Mega 2560 UART with the serial receive block of the Arduino support package for Simulink. The IMU can send binary packets and also NMEA packets, and I can configure it to either output. When the serial receive block output is used directly, it displays just numbers between 0-255. I need help with how to parse the incoming data, which contains the Euler angles that I want to use. Here is the binary structure:

"s", "n", "p", packet type (PT), Address, Data Bytes (D0...DN-1), Checksum 1, Checksum 0

The PT byte specifies whether the packet is a read or a write operation, whether it is a batch operation, and the length of the batch operation (when applicable). The PT byte is also used by the UM7 to respond to commands. The specific meaning of each bit in the PT byte is given below.

Packet Type (PT) byte: bit 7 Has Data, bit 6 Is Batch, bit 5 BL3, bit 4 BL2, bit 3 BL1, bit 2 BL0, bit 1 Hidden, bit 0 CF.

Packet Type (PT) bit descriptions:
7 (Has Data): If the packet contains data, this bit is set (1). If not, this bit is cleared (0).
6 (Is Batch): If the packet is a batch operation, this bit is set (1). If not, this bit is cleared (0).
5:2 (Batch Length, BL): Four bits specifying the length of the batch operation. Unused if bit 7 is cleared. The maximum batch length is therefore 2^4 = 16.
1 (Hidden): If set, then the packet address specified in the "Address" field is a "hidden" address. Hidden registers are used to store factory calibration and filter tuning coefficients that do not typically need to be viewed or modified by the user. This bit should always be set to 0 to avoid altering factory configuration.
0 (Command Failed, CF): Used by the autopilot to report when a command has failed. Must be set to zero for all packets written to the UM7.

The address byte specifies which register will be involved in the operation. During a read operation (Has Data = 0), the address specifies which register to read. During a write operation (Has Data = 1), the address specifies where to place the data contained in the data section of the packet. For a batch read/write operation, the address byte specifies the starting address of the operation.

The "Data Bytes" section of the packet contains data to be written to one or more registers. There is no byte in the packet that explicitly states how many bytes are in this section, because it is possible to determine the number of data bytes that should be in the packet by evaluating the PT byte. If the Has Data bit in the PT byte is cleared (Has Data = 0), then there are no data bytes in the packet and the checksum immediately follows the address. If, on the other hand, the Has Data bit is set (Has Data = 1), then the number of bytes in the data section depends on the value of the Is Batch and Batch Length portions of the PT byte. For a batch operation (Is Batch = 1), the length of the packet data section is equal to 4*(Batch Length). Note that the batch length refers to the number of registers in the batch, NOT the number of bytes; registers are 4 bytes long. For a non-batch operation (Is Batch = 0), the length of the data section is equal to 4 bytes (one register). The data section lengths and total packet lengths for different PT configurations are shown below.

The two checksum bytes consist of the unsigned 16-bit sum of all preceding bytes in the packet, including the packet header.

Read operations: To initiate a serial read of one or more registers aboard the sensor, a packet should be sent to the UM7 with the "Has Data" bit cleared. This tells the device that this will be a read operation from the address specified in the packet's "Address" byte. If the "Is Batch" bit is set, then the packet will trigger a batch read in which the "Address" byte specifies the address of the first register to be read. In response to a read packet, the UM7 will send a packet in which the "Has Data" bit is set, and the "Is Batch" and "Batch Length" bits are equivalent to those of the packet that triggered the read operation. The register data will be contained in the "Data Bytes" section of the packet.

Here is an example binary communication code:

typedef struct UM7_packet_struct
{
    uint8_t Address;
    uint8_t PT;
    uint16_t Checksum;
    uint8_t data_length;
    uint8_t data[30];
} UM7_packet;

// parse_serial_data: This function parses the data in 'rx_data' with length 'rx_length' and
// attempts to find a packet in the data. If a packet is found, the structure 'packet' is filled
// with the packet data. If there is not enough data for a full packet in the provided array,
// parse_serial_data returns 1. If there is enough data, but no packet header was found,
// parse_serial_data returns 2. If a packet header was found, but there was insufficient data to
// parse the whole packet, then parse_serial_data returns 3. This could happen if not all of the
// serial data has been received when parse_serial_data is called. If a packet was received, but
// the checksum was bad, parse_serial_data returns 4. If a good packet was received,
// parse_serial_data fills the UM7_packet structure and returns 0.
uint8_t parse_serial_data( uint8_t* rx_data, uint8_t rx_length, UM7_packet* packet )
{
    uint8_t index;
    // Make sure that the data buffer provided is long enough to contain a full packet.
    // The minimum packet length is 7 bytes.
    if( rx_length < 7 )
    {
        return 1;
    }
    // Try to find the 'snp' start sequence for the packet
    for( index = 0; index < (rx_length - 2); index++ )
    {
        // Check for 'snp'. If found, immediately exit the loop
        if( rx_data[index] == 's' && rx_data[index+1] == 'n' && rx_data[index+2] == 'p' )
        {
            break;
        }
    }
    uint8_t packet_index = index;
    // Check to see if the variable 'packet_index' is equal to (rx_length - 2). If it is, then
    // the above loop executed to completion and never found a packet header.
    if( packet_index == (rx_length - 2) )
    {
        return 2;
    }
    // If we get here, a packet header was found. Now check to see if we have enough room left in
    // the buffer to contain a full packet. Note that at this point, the variable 'packet_index'
    // contains the location of the 's' character in the buffer (the first byte in the header).
    if( (rx_length - packet_index) < 7 )
    {
        return 3;
    }
    // We've found a packet header, and there is enough space left in the buffer for at least the
    // smallest allowable packet length (7 bytes). Pull out the packet type byte to determine the
    // actual length of this packet.
    uint8_t PT = rx_data[packet_index + 3];
    // Do some bit-level manipulation to determine if the packet contains data and if it is a
    // batch. We have to do this because the individual bits in the PT byte specify the contents
    // of the packet.
    uint8_t packet_has_data = (PT >> 7) & 0x01; // Check bit 7 (HAS_DATA)
    uint8_t packet_is_batch = (PT >> 6) & 0x01; // Check bit 6 (IS_BATCH)
    uint8_t batch_length = (PT >> 2) & 0x0F;    // Extract the batch length (bits 2 through 5)
    // Now finally figure out the actual packet length
    uint8_t data_length = 0;
    if( packet_has_data )
    {
        if( packet_is_batch )
        {
            // Packet has data and is a batch. This means it contains 'batch_length' registers,
            // each of which has a length of 4 bytes.
            data_length = 4*batch_length;
        }
        else // Packet has data but is not a batch. This means it contains one register (4 bytes)
        {
            data_length = 4;
        }
    }
    else // Packet has no data
    {
        data_length = 0;
    }
    // At this point, we know exactly how long the packet is. Now we can check to make sure we
    // have enough data for the full packet.
    if( (rx_length - packet_index) < (data_length + 5) )
    {
        return 3;
    }
    // If we get here, we know that we have a full packet in the buffer. All that remains is to
    // pull out the data and make sure the checksum is good. Start by extracting all the data.
    packet->Address = rx_data[packet_index + 4];
    packet->PT = PT;
    // Get the data bytes and compute the checksum all in one step
    packet->data_length = data_length;
    uint16_t computed_checksum = 's' + 'n' + 'p' + packet->PT + packet->Address;
    for( index = 0; index < data_length; index++ )
    {
        // Copy the data into the packet structure's data array
        packet->data[index] = rx_data[packet_index + 5 + index];
        // Add the new byte to the checksum
        computed_checksum += packet->data[index];
    }
    // Now see if our computed checksum matches the received checksum.
    // First extract the checksum from the packet
    uint16_t received_checksum = (rx_data[packet_index + 5 + data_length] << 8);
    received_checksum |= rx_data[packet_index + 6 + data_length];
    // Now check to see if they don't match
    if( received_checksum != computed_checksum )
    {
        return 4;
    }
    // At this point, we've received a full packet with a good checksum. It is already fully
    // parsed and copied to the 'packet' structure, so return 0 to indicate that a packet was
    // processed.
    return 0;
}
I was trying to reproduce this youtube tutorial in V-rep and I came across some problems concerning blob detection. There are some complaints on this matter under the video. I don't believe that blob detection stopped working in recent v-rep versions, but I was unable to make it work (as a new v-rep user myself). Has anyone any idea how to properly implement it? More specifically, I have a vision sensor named cam and I want it to follow a red ball. The vision sensor will detect the position of the ball and I will use it to control the joints that steer the sensor (yaw and pitch). My script follows:

threadFunction=function()
    yaw=simGetObjectHandle("yaw")
    pitch=simGetObjectHandle("pitch")
    cam=simGetObjectHandle("cam")
    while simGetSimulationState()~=sim_simulation_advancing_abouttostop do
        result,pack1,pack2=simReadVisionSensor(cam)
        if result>0 then
            xtarget=pack2[5]
            ytarget=pack2[6]
            simAuxiliaryConsolePrint(out,string.format("\n x: %0.2f, y: %0.2f",xtarget,ytarget))
            simSetJointTargetVelocity(yaw,1*(0.5-xtarget))
            simSetJointTargetVelocity(pitch,1*(0.5-ytarget))
        end
    end
end

simSetThreadSwitchTiming(2)
out = simAuxiliaryConsoleOpen("Debug",8,1)

res,err=xpcall(threadFunction,function(err) return debug.traceback(err) end)
if not res then simAddStatusbarMessage('Lua runtime error: '..err) end

When I run the simulation I can see that the sensor sees the red ball at some point, but result is always 0, meaning that no detection takes place. Here is my scene
I have been trying to calculate the Jacobian for days now. But first some details: within my Master's thesis I have to numerically calculate the Jacobian for a tendon-driven continuum robot. I have all the homogeneous transformation matrices, as I already implemented the kinematics for this robot. Due to its new structure there are no discrete joint variables anymore, but rather continuous parameters; therefore I want to compute the Jacobian numerically. It would be awesome if someone could provide a detailed way to compute the numerical Jacobian for a 6-DoF rigid-link robot (only rotational joints => RRRRRR). From that I can transfer it to the continuum robot. I've already started computing it. Let T be the homogeneous transformation matrix for the end-effector (tip), with $$T=\begin{bmatrix}R & r \\ 0 & 1 \end{bmatrix}$$ where R is the rotation matrix (contains the orientation) and $r = \begin{bmatrix} x & y & z \end{bmatrix}^T$ is the end-effector position. My approach is to compute the first three rows of J by successively incrementing the joints, computing the difference of the resulting end-effector position from the "original" position, and dividing it by the increment $\delta$; the joint space is $q = \begin{bmatrix} q_1 & q_2 & q_3 & \dots & q_6 \end{bmatrix}^T$: $q_1 = q_1 + \delta$ => $J(1,1) = (X_{increment} - X_{orig})/\delta$, $q_2 = q_2 + \delta$ => $J(1,2) = (X_{increment} - X_{orig})/\delta$, and so on. I do the same for the y and z coordinates, so I get the first 3 rows of J. Now I don't know how to compute the last three rows, as they refer to the rotation matrix R. Since it's a 3x3 matrix and not a scalar value, I don't know how to handle it.
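A minimal numpy sketch of one common way to fill in the last three rows: perturb each joint, form the relative rotation $R_{\delta} = R_{pert} R_{orig}^T$, convert it to a rotation vector (axis times angle), and divide by $\delta$. Here forward_kinematics is a placeholder for the user's own function returning the 4x4 homogeneous transform of the tip:

import numpy as np

def rotation_to_rotvec(Rm):
    """Convert a rotation matrix to an axis-angle (rotation) vector."""
    angle = np.arccos(np.clip((np.trace(Rm) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-9:
        return np.zeros(3)
    axis = np.array([Rm[2,1]-Rm[1,2], Rm[0,2]-Rm[2,0], Rm[1,0]-Rm[0,1]]) / (2.0*np.sin(angle))
    return axis * angle

def numerical_jacobian(forward_kinematics, q, delta=1e-6):
    """6xN Jacobian by finite differences; forward_kinematics(q) must return a 4x4 transform (user-supplied)."""
    T0 = forward_kinematics(q)
    r0, R0 = T0[:3, 3], T0[:3, :3]
    J = np.zeros((6, len(q)))
    for i in range(len(q)):
        q_pert = np.array(q, dtype=float)
        q_pert[i] += delta
        Ti = forward_kinematics(q_pert)
        J[:3, i] = (Ti[:3, 3] - r0) / delta                       # translational rows
        J[3:, i] = rotation_to_rotvec(Ti[:3, :3] @ R0.T) / delta  # rotational rows (angular-velocity approximation)
    return J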
I was wondering whether maybe you could help me with this problem. I have a double pendulum. I have set the origin of the Cartesian coordinates to be the "head" of the first arm, which is fixed. The end of the second arm is attached to a block that slides along the x-axis. What I want to do is derive the equations relating the pendulum's angles to the distance from the origin to the block. Now, I know how I could go about deriving the equations without the constraint: $$x_1 = L_1\cos(a_1)$$ $$y_1 = L_1\sin(a_1)$$ where $x_1$ and $y_1$ are where the first arm joins the second arm, and $a_1$ is the angle between the horizontal and the first arm. Similarly, I can derive the equations for the end of the second arm, $x_2 = x_1 + L_2 \cos(a_2)$ and $y_2 = y_1 - L_2 \sin(a_2)$. Now then, if I attach a sliding block to the end of my second arm, I don't know whether my equation for $x_2$ would change at all. I don't think it would, but would I have to somehow restrict the swing angles so that the block only moves along the x-direction? Basically, the problem is finding the equation for $x_2$ if it's attached to a block that only moves along the x-direction.
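For reference, the constraint that the block stays on the x-axis can be written directly from the $y_2$ expression above; the $x_2$ expression itself does not change, but the two angles are no longer independent (taking the positive square-root branch, i.e. one of the two elbow configurations):
$$y_2 = L_1\sin(a_1) - L_2\sin(a_2) = 0 \;\Rightarrow\; \sin(a_2) = \frac{L_1}{L_2}\sin(a_1),$$
$$x_2 = L_1\cos(a_1) + L_2\cos(a_2) = L_1\cos(a_1) + L_2\sqrt{1 - \left(\frac{L_1}{L_2}\right)^{2}\sin^{2}(a_1)}.$$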
So, I need to know a couple of things about soldering. My primary workspace, for robotics and otherwise, is a desk with a computer and only a little bit of free space (4 ft by 6 in). I am wondering if it is safe to solder in such a small area. Also, what level of ventilation do I need to solder safely? My desk is in a normal house room, right next to an air vent, and my house has heating and A/C. Do I need a fan or a fume extractor? I plan to only solder a little, to get things to stay in my solderless breadboard (soldering header pins onto wires and such). So, basically, what are the minimum requirements for soldering safely (space and ventilation)? Also, if anyone could point me to some hobby/beginner-level soldering must-haves on Amazon, that would be great, thanks.
I've noticed the iRobot Create 2 does not respond to the app's commands when it has been sleeping. If I press the Clean button and re-run the app, then the robot is responsive to the commands. My initialization sequence (Android/Java) using usb-serial-for-android is:

port.open(connection);
port.setParameters(115200, 8, UsbSerialPort.STOPBITS_1, UsbSerialPort.PARITY_NONE);
command(Opcode.START);
command(Opcode.SAFE);

The physical architecture is an iRobot Create 2 connected by the iRobot serial cable to a Google Project Tango tablet. How can my app wake the Roomba from its sleep?
I am using an 8051 microcontroller and a DC motor. What do I do if I have to rotate the motor at a fixed RPM, let's say 120 RPM? And if it is possible by generating PWM, how do I do the calculations for the relation between duty cycle and RPM?
I want to create a rotating control mechanism that can turn a surface to face any direction in a sphere. My dad (an electrical engineer) said I can probably do it by connecting two servo motors together. I am looking for a servo motor that can do what I want, which is moving the sphere with decent precision (within ~1 degree), but I don't know which kinds of motors can handle such precision. Another challenge is that one servo will have to hold the second servo on top of it. As I understand it, the torque rating determines the maximum amount of force the servo can exert on its load, so can I figure out whether the servo is strong enough through some math?
I'm a software developer not experienced in AI or machine learning, but I'm now interested in developing this kind of software. I want to develop software that recognizes some specific objects, specifically animals, from a video stream (or a sequence of static images). I saw there's a library called OpenCV which is often mentioned in this forum, but what I have seen so far is that this library is a helper for working with images; I didn't find the object recognition or self-learning part. Is OpenCV a good starting point? Is it better to go for some theory first? Or are there other already-developed libraries or frameworks aimed at object recognition? EDIT: To give some context: I will have one camera watching a landscape, mostly static, but some leaves may move with the wind or some person may step in, and I want to get an alert when some animal comes into view. I can reduce the "animals" to only birds (I won't always have a nice bird/sky contrast). I did some work with supervised neural networks some 15 years ago and studied some AI and machine learning theory, but I guess things have improved way too much since then; that's why I was asking for some more practical first steps. Thank you.
Project Tango Development Kits come with a mini-dock (see picture below). I am controlling the iRobot Create 2 by the mounted Tablet using the USB cable provided plugged into the mini-dock. (see docs). The USB 3.0 port on the mini-dock is only functional when the tablet is docked. The port can be used to attach an external memory drive or standard peripherals to the tablet. I wish to recharge the tablet using the power from the iRobot. The mini dock comes with a port for external charging: The mini-dock accepts a power adapter for faster charging (not provided). The power adapter output must be 12V, 2A, and the connector must be a barrel plug with 5.5mm outer diameter, 2.1mm inner diameter, center positive. Ideally the charging would happen only when the iRobot is also charging, but charging all the time is acceptable. Is this possible? If so, how?
I have a dual (sequential) loop control system controlling the angle of a rotational joint on a robot using an absolute encoder. I have tuned the inner control loop (for the motor) and am now working on tuning the outer loop (for the joint). Example of a dual loop controller. When I disturb the system, the response isn't what I would expect (plots below; first with Kp = 0.4, then with Kp = 0.1, Kd = 0.001). I didn't add a Ki term because I don't have any steady-state error. I'm confused by the fact that the second overshoot in the first plot is larger than the first one. No matter how I adjust the parameters, I can't seem to get rid of the oscillation in the velocity of the joint (seen in the second plot). One limitation I have is that if I increase both Kp and Kd too much, the gearbox becomes very noisy, because the noise in the encoder signal creates larger adjustments in the position of the motor. I'm working on adding a filter to the output using the method described here. The code I'm using for the outer loop is:

static float e_prev = 0.0;
e = joint_setpoint - joint_angle;
e_i += e/0.001;             // dt = 0.001s
e_d = (e - e_prev)/0.001;   // dt = 0.001s
e_prev = e;
motor_setpoint += k_p * e + k_i * e_i + k_d * e_d;

I'm beginning to think that the system might not be able to be modeled by a first-order equation, but would this change the implementation of the control loop at all? Any advice is appreciated! Ben
Like the title says: will it work? I know about the Due's 3.3 V limitation. I want to build a hexapod with 18 servos. The shield I am looking at: http://yourduino.com/sunshop2/index.php?l=product_detail&p=195 If it isn't compatible, is there an alternative shield which will work? I can't seem to find much for the Due.
The transmission of telemetry data between the ground base station and the APM 2.x (ArduCopter) using XBee is not well documented. The only documentation is Telemetry-XBee, but it does not specify which XBee version is used. I have been checking and I guess it is version 1 (this one has a point-to-point link and the others do not), but I am not sure. I would like to know: which XBee modules do people use for flying drones? Do they have problems with the APM connection? How can I control the drone remotely using the XBee link with the MAVLink protocol?
My 6-joint robot arm structure doesn't meet the requirements for a closed-form solution (no three consecutive axes intersecting at a point, or three parallel axes, ...). What would be the best method to adopt to get a solution in 1 ms or less, with an estimation accuracy of 1 mm? I'm assuming the computation is done on an average laptop (Intel Core i3, 1.7 GHz, 4 GB RAM).
Is it possible to localize a robot without any sensors, odometers or servo motors? Assume the robot has DC motors and no obstacles.
I would like to locate the position of a stationary autonomous robot in the x-y-z axes relative to a fixed starting point. Could someone suggest sensors that would be suitable for this application? I am hoping to move the robot in 3D space and be able to locate its position wirelessly. The rate of position update is not important, as I would like to stop the robot from moving and then relay the information wirelessly. The range I am looking for is roughly 2 km or more (the more the better), with an accuracy of +/- 1 cm. Is there any system that could do this? Thanks for your help.
My goal is to move the robot to certain points, as shown in the figure. Its initial position is (x0, y0) and it moves along the other coordinates. I am able to track the robot's position using a camera which is connected to a PC and located at the top of the arena. I've mounted an IR beacon on the robot; the camera finds this beacon and locates its coordinates (in cm) in the arena. Using this coordinate, how can I move my robot to another position, say a new position (x1, y1)? My robot has an Arduino Mega 2560 with two DC motors, and communication between the PC and the robot is done using Bluetooth. Update: Thanks @Chuck for the answer; however, I still have a few doubts regarding the turning angle. My robot position setup is as shown in the image. (xc, yc) is the current position and (xt, yt) is the target position. If I want to align the robot in the direction of the target coordinates, I have to calculate atan2 between the target and current coordinates. But the angle remains the same, since the current position is not changing with respect to the target point, so I assume the robot simply makes a 360 degree rotation at the current position? Update: The path points are as shown below in the image; is my initial heading angle assumption correct? '1' is the starting point. Update: Thank you for your patience and time. I'm still stuck at turning; my code goes like this:

//current points
float xc = -300;
float yc = 300;
//target points
float xt = -300;
float yt = -300;
//turning angle
float turnAngle;

void setup() {
  // pin setup
  Serial.begin(9600);
}

void loop() {
  turnAngle = atan2((yt-yc), (xt-xc));  //calculate turning angle
  turnAngle = turnAngle * 180/3.1415;   //convert to degrees
  if (turnAngle > 180) {
    turnAngle = turnAngle-360;
  }
  if (turnAngle < -180) {
    turnAngle = turnAngle+360;
  }
  if (turnAngle < -10) {
    //turn right
  }
  if (turnAngle > 10) {
    //turn left
  }
}

Since the angle is always -90 degrees, the robot only makes right turns in a loop at the current point, since the angle is not changing. I think I'm missing something here.
Can I charge a LiPo nano-tech battery with an IMAX B3 charger? The battery is 2650 mAh, 35/70C, 3S.
Let's say I would like to use an EKF to track the position of a moving robot. The EKF would not only estimate the position itself but also variables affecting the position estimate, for example IMU biases, wheel radius, wheel slip and so on. My question is, is it better to use one big EKF (state vector containing all estimated variables) or multiple smaller EKFs (each one responsible for tracking a subset of all variables to be estimated)? Or is there no difference? As for the example above, the EKF could be split into one for tracking position, one for estimating wheel radius and slip and one for estimating IMU biases. The position EKF would of course use the estimations output from the other concurrent EKFs and vice versa. To me it seems it would be easier to tune and test multiple smaller EKFs rather than just one big. Are there any other advantages/disadvantages (execution time, ease of debugging etc.) assuming the resulting estimates are equal in the two approaches (or close enough at least)? Thanks, Michael
I am new to this field and am looking for high-precision gyroscopes and accelerometers for attitude measurement. The precision requirement is around 0.2~0.5 deg/s dynamic. I have done some digging myself, and not a single integrated MEMS sensor can do that without costing too much. So some heavy math is needed, but that's fine. I need to make sure the right sensors are chosen; the budget is less than 100 USD. Can anyone help? Thanks in advance.
I have a mobile robot and I would like it to follow the walls of a room. I have: A map of the room. Wheel encoders for the odometry. A Kalman filter for fusing data from wheel encoders and IMU. A Hokuyo lidar for localization and obstacle avoidance A Kinect to see obstacles which can not be seen by the Hokuyo. Amcl for localization. A couple of sharp sensors on the side for wall following. I am not planning to use the global or local costmap because the localization of the robot is not perfect and the robot might think that it is closer (or further away) to the wall than it actually is and therefore, wall following might fail. So, I am planning to just use the data from Hokuyo lidar and sharp sensors to do wall following and maintain constant distance from the wall (say 10 cm). Now, I would like to know what is the best technique for doing wall following in this manner? Also, how can one deal with the issue of open gaps in the wall (like open doors, etc..) while doing wall following using the above approach? I know this is a very general question but any suggestions regarding it will be appreciated. Please let me know if you need more information from me. Update: I am just trying to do wall following in a given room (I have the vertices of the room in a global reference frame) For example, Lets say I have a map of a room (shown below). I want to make the robot follow the wall very closely (say 10 cm from the wall). Also, if there is an open space (on bottom left), the robot should not go in the adjacent room but should keep on doing wall following in the given room (For this, I have the boundary limits of the room which I can use to make sure the robot is within the given room). The approach which I am thinking is to come up with an initial global path (set of points close to the wall) for wall following and then make sure robot goes from one point to the next making sure that it always maintains a certain distance from the wall. If there is no wall, then the robot can just follow the global path (assuming localization is good). I am not sure about its implementation complexity and whether there is a better algorithm/ approach to do something like this.
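For what it's worth, a rough sketch of the side-distance part of such a controller, assuming a desired gap of 0.10 m, a wall on one particular side, and made-up gains (so only the structure is meant to carry over):

def wall_follow_cmd(side_dist, front_dist, v_nominal=0.2,
                    d_des=0.10, k_p=2.0, d_front_stop=0.3):
    # side_dist: distance to the wall from the side-facing sharp sensor [m]
    # front_dist: shortest lidar range straight ahead [m]
    # Returns (linear velocity, angular velocity), or None to signal
    # "no wall here, fall back to the precomputed global path".
    if front_dist < d_front_stop:
        # Wall ahead (inside corner): stop and rotate away from it.
        return 0.0, 0.5
    if side_dist > 2.0 * d_des:
        # Gap in the wall (open door): do not steer into the opening.
        return None
    error = side_dist - d_des   # >0 means too far from the wall
    omega = -k_p * error        # sign depends on which side the wall is on
    return v_nominal, omega

The "return None" branch is where the boundary check against the room polygon and the global path of near-wall waypoints would take over, which matches the approach described above for open doorways.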
I'm building my first quadcopter, and these are the components I intend to buy:
Motor: EMAX BL2212 1400 KV Brushless Outrunner Motor, around 0.9 kg thrust.
Flight Controller: Multiwii V2.5 Flight Controller.
Propellers: I don't know which ones to get: fut-electronics propellers collection.
GPS: Skylab UART GPS Module SKM58 (Small Form Factor).
Radio Communication: Radio Telemetry 915 MHz (3DR); is there an affordable alternative to buying a radio telemetry kit, maybe using Wi-Fi?
ESCs: 4x1 ESC (4x25A) - Speed Controller for Quadcopter.
Battery: I don't know which one to choose.
My questions are: Are the components compatible? What battery should I choose? If I'm not planning to do GPS planned missions, would the GPS be important for anything else? By the way, I intend to attach a camera or a smartphone to it for video capture; I think that is about an extra 200 grams.
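As a rough compatibility check (every number other than the 0.9 kg motor thrust listed above is an assumption, not a spec of the listed parts):

motor_thrust_kg = 0.9            # per motor, from the EMAX BL2212 figure above
n_motors = 4
max_thrust = n_motors * motor_thrust_kg        # ~3.6 kg total
max_takeoff_weight = max_thrust / 2.0          # rule of thumb: hover near 50% throttle
assumed_base_weight = 0.9                      # frame + electronics + battery, assumed
camera_weight = 0.2                            # from the question
payload_margin = max_takeoff_weight - (assumed_base_weight + camera_weight)
print(max_thrust, max_takeoff_weight, payload_margin)   # 3.6, 1.8, ~0.7 kg

For 1400 KV motors in this class a 3S pack in the 2200-3000 mAh range is a common starting point, but the real check is the motor's current draw on the chosen prop against the 25 A per-channel rating of the 4-in-1 ESC, read from the motor's thrust table.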
Why are 'cell decomposition' methods in motion planning given the name, "combinatorial" motion planning?
Using an IMU (gyro, accelerometer and magnetometer), as found in most smartphones, can I detect the differences between tilting the device, say forward, along different (parallel) axis positions? To clarify, if the axis of rotation is far from the sensor, the the motion contains a translational component. Can the distance and position of this axis be extracted from the IMU data and if so how? Is there some data fusion algorithm that can do all this?
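The relation that makes this possible is standard rigid-body kinematics (not specific to any particular phone or fusion library): for a sensor located at offset $\mathbf{r}$ from a point on the rotation axis,
$$\mathbf{a}_{\text{sensor}} = \mathbf{a}_{\text{axis}} + \dot{\boldsymbol{\omega}} \times \mathbf{r} + \boldsymbol{\omega} \times (\boldsymbol{\omega} \times \mathbf{r}),$$
where $\boldsymbol{\omega}$ and $\dot{\boldsymbol{\omega}}$ come from the gyro. If the axis itself is not accelerating, then after removing gravity using the orientation estimate, everything the accelerometer measures is lever-arm effect, and the equation can be solved for $\mathbf{r}$ in a least-squares sense over a window of samples. In practice accelerometer bias and imperfect gravity removal dominate, so the recovered axis distance tends to be noisy unless the rotation is fairly vigorous.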
I have a rig for which I have a pretty good estimate of the static transformation between the camera and a joint based off of the CAD. It has some errors though and I was hoping to fix it by doing a hand eye calibration. So, I started off with generating some data based off of the transformation that I have already. From the papers that I have been reading, they all want to solve the $$AX = XB$$ problem by either converting $A$, $B$ to dual quaternions or simplifying the equation to something like $$ n_A = Xn_B $$ where $n_A$, $n_B$ are the eigenvectors corresponding to the eigenvalue of 1 for the $A$ and $B$ rotations. After generating the data, I tested if my data collection was correct and I validated it by checking if $AX = XB$ for all of the $A$s and $B$s that I generated. I used the CamOdoCal library to try and solve the problem but I got this - /hand_eye_calib_node : [ 0.00196822, -0.457069, 0.889429, 0.143463; -0.999965, -0.00813605, -0.00196822, -1.74257; 0.00813605, -0.889394, -0.457069, 0.0270069; 0, 0, 0, 1] ---------------------------------------- /hand_eye_calib_node : Actual transform 0 0 1 0.08891 -1 0 0 -0.070465 0 -1 0 0.07541 0 0 0 1 The actual transform is the one that I had based my $A$ and $B$ data on. Then I tried implementing the Tsai-Lenz and Horaud and Dornaika's Nonlinear optimization techniques using LM solver but to no avail. I do not get the correct transformation out of any of the solvers. So, I was wondering if you could point me to a hand eye calibration library or paper that has worked.
I wanted to know if there is any sort of archive of mechanisms that contains a brief description of each mechanism, such as its type of motion and the forces involved, rather than lengthy derivations and other material.
I am trying to make line following robot. I am using atmega328p mcu, pololu 10:1 motors, pololu qtr6-rc sensor, 2s li-po. Here is my code: /* * LineFollower.c * * Created: 30.04.2015 16:00:05 * Author: Mikk */ #define F_CPU 20000000 //we're running on 20mHz clock #define numberOfButtons 1 #define READPORT PORTC #define READDDR DDRC #define READPIN PINC // lines connected to PC0 - PC5 #define MAXTICKS 2500 #define QTRCNT 6 #include <avr/io.h> #include <util/delay.h> #include <avr/interrupt.h> #include <Mikk/Button.h> #include <Mikk/QTRRCSensors.h> int baseSpeed = 70; int maxSpeed = 140; const float Kp = 8.1; const float Kd = 400; uint8_t mode = 0; //indicates in which mode program is uint8_t RmotorSpeed = 0; // uint8_t LmotorSpeed = 0; //motors void button(void); void setMotors(int ml, int mr) { if(ml > maxSpeed) //make sure that speed is not out of range for left motor ml = maxSpeed; if(ml < -maxSpeed) ml = -maxSpeed; if(mr > maxSpeed) //make sure that speed is not out of range for right motor mr = maxSpeed; if(mr < -maxSpeed) mr = maxSpeed; if(ml > 0) //if left motor speed is positive then drive motor forwards LmotorSpeed = ml; if(ml == 0) //if left motor speed is 0 then stop motor LmotorSpeed = 0; if(mr > 0) //if right motor speed is positive then drive motor forwards RmotorSpeed = mr; if(mr == 0) //if right motor speed is 0 then stop motor RmotorSpeed = 0; } void emittersOn(void) //function for turning emitters on { PORTD |= (1 << PIND0); } void emittersOff(void) //function for turning emitters off { PORTD &= ~(1 << PIND0); } void LedOn(void) //function for turning led on { PORTB |= (1 << PINB5); } void LedOff(void) //function for turning led off { PORTB &= ~(1 << PINB5); } void stop(void) //stop everything { LedOff(); setMotors(0, 0); emittersOff(); } void calibration(void) //calibration takes about 5 seconds { //turn led on LedOn(); //turn emitters on emittersOn(); // reset minimums and maximums for (int i = 0; i < QTRCNT; i++) { QTRmax[i] = 0; QTRmin[i] = MAXTICKS; } //calibrate sensors for(int i=0; i<250; i++) { calibrateQTRs(); _delay_ms(5); } //turn emitters off emittersOff(); //turn led off LedOff(); } void start(void) { //turn led on LedOn(); //create all necessary variables int power_difference = 0; float error = 0; float lastError = 0; float derivative = 0; int position = 0; //turn emitters on emittersOn(); _delay_ms(500); //wait so you can pull your hand away while(mode == 2) { //check for mode change button(); //read position position = readLine(); //make calculations error = position - 2500; derivative = error - lastError; //remember last error lastError = error; //calculate power_difference of motors power_difference = error/(Kp/100) + derivative*(Kd/100); //make sure that power difference is in correct range if(power_difference > baseSpeed) power_difference = baseSpeed; if(power_difference < -baseSpeed) power_difference = -baseSpeed; //drive motors if(power_difference > 0) setMotors(baseSpeed+power_difference, baseSpeed-power_difference/2); else if(power_difference < 0) setMotors(baseSpeed+power_difference/2, baseSpeed-power_difference); else if(power_difference == 0) setMotors(maxSpeed, maxSpeed); } } void button(void) { char buttonState = 0; //check for current button status buttonState = ButtonReleased(0, PINB, 1, 200); //check if button is pressed if(buttonState) //pin change from low to high { mode++; if(mode == 1) calibration(); } } void pwmInit(void) { //set fast-PWM mode, inverting mode for timer0 TCCR0A |= (1 << COM0A1) | (1 << COM0A0) | (1 << WGM00) | (1 << 
WGM01) | (1 << COM0B1) | (1 << COM0B0); //set fast-PWM mode, inverting mode for timer2 TCCR2A |= (1 << COM2A1) | (1 << COM2A0) | (1 << WGM20) | (1 << WGM21) | (1 << COM2B1) | (1 << COM2B0); //set timer0 overflow interrupt TIMSK0 |= (1 << TOIE0); //set timer2 overflow interrupt TIMSK2 |= (1 << TOIE2); //enable global interrupts sei(); //set timer0 prescaling to 8 TCCR0B |= (1 << CS01); //set timer2 prescaling to 8 TCCR2B |= (1 << CS21); } int main(void) { DDRB |= 0x2A; //0b00101010 DDRD |= 0x69; //0b01101001 DDRC |= 0x00; //0b00000000 //clear port d PORTD |= 0x00; //enable pull-up resistor PORTB |= (1 << PINB1); initQTRs(); pwmInit(); //blink 2 times indicate that we are ready for(int i=0; i<4; i++) { PORTB ^= (1 << PINB5); _delay_ms(500); } while(1) { button(); if(mode == 0) stop(); if(mode == 2) start(); if(mode >= 3) mode = 0; } } //update OCRnx values ISR(TIMER0_OVF_vect) { OCR0A = RmotorSpeed; } ISR(TIMER2_OVF_vect) { OCR2A = LmotorSpeed; } And here is my qtr library: #ifndef QTRRCSensors #define QTRRCSensors #define SLOW 1 #define FAST 0 static inline void initQTRs(void) { TCCR1B = (1 << CS11); } uint16_t QTRtime[QTRCNT], QTRmax[QTRCNT], QTRmin[QTRCNT]; static inline void readQTRs(uint8_t forceSlow) { uint8_t lastPin, i, done = 0; for (i = 0; i < QTRCNT; i++) // clear out previous times QTRtime[i] = 0; READDDR |= 0b00111111; // set pins to output READPORT |= 0b00111111; // drive them high _delay_us(10); // wait 10us to charge capacitors READDDR &= 0b11000000; // set pins to input READPORT &= 0b11000000; // turn off pull-up registers TCNT1 = 0; // start 16bit timer at 0 lastPin = READPIN; while ((TCNT1 < MAXTICKS) && ((done < QTRCNT) || forceSlow)) // if forceSlow, always take MAXTICKS time { if (lastPin != READPIN) // if any of the pins changed { lastPin = READPIN; for (i = 0; i < QTRCNT; i++) { if ((QTRtime[i] == 0) && (!(lastPin & (1<<i)))) // did pin go low for the first time { QTRtime[i] = TCNT1; done++; } } } } if (done < QTRCNT) // if we timed out, set any pins that didn't go low to max for (i = 0; i < QTRCNT; i++) if (QTRtime[i] == 0) QTRtime[i] = MAXTICKS; } void calibrateQTRs(void) { uint8_t i, j; for (j = 0; j < 10; j++) { // take 10 readings and find min and max values readQTRs(SLOW); for (i = 0; i < QTRCNT; i++) { if (QTRtime[i] > QTRmax[i]) QTRmax[i] = QTRtime[i]; if (QTRtime[i] < QTRmin[i]) QTRmin[i] = QTRtime[i]; } } } void readCalibrated(void) { uint8_t i; uint16_t range; readQTRs(FAST); for (i = 0; i < QTRCNT; i++) { // normalize readings 0-1000 relative to min & max if (QTRtime[i] < QTRmin[i]) // check if reading is within calibrated reading QTRtime[i] = 0; else if (QTRtime[i] > QTRmax[i]) QTRtime[i] = 1000; else { range = QTRmax[i] - QTRmin[i]; if (!range) // avoid div by zero if min & max are equal (broken sensor) QTRtime[i] = 0; else QTRtime[i] = ((int32_t)(QTRtime[i]) - QTRmin[i]) * 1000 / range; } } } uint16_t readLine(void) { uint8_t i, onLine = 0; uint32_t avg; // weighted total, long before division uint16_t sum; // total values (used for division) static uint16_t lastValue = 0; // assume line is initially all the way left (arbitrary) readCalibrated(); avg = 0; sum = 0; for (i = 0; i < QTRCNT; i++) { // if following white line, set QTRtime[i] = 1000 - QTRtime[i] if (QTRtime[i] > 50) { // only average in values that are above a noise threshold avg += (uint32_t)(QTRtime[i]) * (i * 1000); sum += QTRtime[i]; if (QTRtime[i] > 200) // see if we're above the line onLine = 1; } } if (!onLine) { // If it last read to the left of center, return 0. 
if(lastValue < (QTRCNT-1)*1000/2)
  return 0;
// If it last read to the right of center, return the max.
else
  return (QTRCNT-1)*1000;
}
lastValue = avg/sum; // no chance of div by zero since onLine was true
return lastValue;
}
#endif
I am trying to find the Kp constant, but when it's 7 my robot just runs off the line, always at the same spot. When Kp is 8 it follows a straight line but wobbles a lot and can't take corners. I also tried increasing Kd 10 to 20 times while Kp was 8, but it didn't change much. How can I get it working? Here is my robot and the track I want to follow.
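One detail in the control loop above that may be worth double-checking (offered as an observation, not a confirmed fix): the proportional term is written as error/(Kp/100), which equals error*100/Kp, so increasing Kp actually weakens the correction rather than strengthening it. The conventional form multiplies instead, sketched here in Python for clarity with placeholder gains:

def pd_correction(error, last_error, kp=0.5, kd=5.0):
    # Conventional PD form: a larger kp gives a stronger correction.
    # kp and kd here are placeholders, not tuned values.
    derivative = error - last_error
    return kp * error + kd * derivative

If the division is intentional (Kp used as a divisor), that is fine too; it just inverts what "increasing Kp" means while tuning.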
I am a BE student doing a mega-quadcopter as my final year project. Can you please help me with the full hand calculations for the copter, i.e. the procedure and formulas? I want to know how to calculate the dimensions of the frame, the specifications of the motors and propellers, the rating of the ESCs, and the power rating and number of batteries. I do not want direct answers, just the procedure and formulas. I want to lift a load of around 20-30 kg. Please feel free to help.
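Not a substitute for the full procedure, but the sizing usually starts from a thrust budget. A rough first pass for a 25 kg payload (all other numbers are assumptions to be replaced by real data):

import math

payload = 25.0                # kg, middle of the 20-30 kg range
airframe_fraction = 1.0       # assume frame + motors + ESCs + battery roughly equal the payload
takeoff_weight = payload * (1 + airframe_fraction)      # ~50 kg
thrust_to_weight = 2.0        # common design margin for a controllable multirotor
total_thrust = takeoff_weight * thrust_to_weight        # ~100 kg of thrust
n_motors = 8                  # a lift like this usually needs 8+ motors, not 4
thrust_per_motor = total_thrust / n_motors              # ~12.5 kg per motor/prop
print(takeoff_weight, total_thrust, thrust_per_motor)

From thrust_per_motor you pick a motor/prop combination from the manufacturer's thrust tables, read off its current draw at hover and at full throttle, size the ESCs with roughly 20-30% current headroom, and size the battery from total hover current and the desired flight time (capacity in Ah is approximately total current in A times flight time in hours, divided by the usable fraction of the pack, often taken as 0.8).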
I recently discovered this ROS package: http://wiki.ros.org/laser_ortho_projector . It is basically exactly what I need. However, I am not using ROS, so I need to do what this package does myself. Basically, the information I have is the range measurement r and the angle theta for every measurement point of a 360 degree laser scan, plus the orientation (roll, pitch, yaw angles) of the laser scanner. Yaw is not important for me and can be ignored. I really can't get my head around how to project those points onto the ground plane. It is easy for the measurement points which align with the roll and pitch axes, but I don't know what to do with the points in between. One solution I thought of is this:
1. Convert the measurement point (r, theta) into a Cartesian (x,y,z) vector.
2. Use rotation matrices: create a rotation matrix for the rotation around the roll axis with the roll angle, and likewise for the pitch axis. Multiply both matrices and then multiply the result with the (x,y,z) vector.
3. The orthogonal projection of the measurement is then the (x,y,z) vector with z=0.
4. Convert the (x,y) vector back to polar coordinates (r, theta).
However, step 2 in particular seems complicated to me, because the rotation matrices change according to the sign of the roll and pitch angles, right? I would like to note that the absolute values of the roll and pitch angles will always be < 90°, so there should not be any ambiguity in the rotations. Is there an easier (or maybe more elegant) way to solve my problem? My guess is that this problem must have been solved for basically every robot application that uses a 2D laser scanner which is not fixed to one axis, but I cannot find the solution anywhere. I would be very glad if anyone could point me in the right direction.
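A compact numpy version of the four steps listed above; note that the same rotation matrices work for positive and negative angles (the sine and cosine take care of the sign), so no case distinction is needed. Frame conventions are assumed here, so the rotation order and signs may need flipping for a particular scanner/IMU pair:

import numpy as np

def project_scan_to_ground(ranges, angles, roll, pitch):
    # 1. polar -> Cartesian in the scanner frame (scan assumed to lie in the x-y plane)
    x = ranges * np.cos(angles)
    y = ranges * np.sin(angles)
    z = np.zeros_like(x)
    pts = np.vstack((x, y, z))                 # 3 x N

    # 2. rotate the points into a horizontal frame using roll and pitch
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    pts_level = Ry @ Rx @ pts                  # order/signs depend on the IMU convention

    # 3. orthogonal projection onto the ground plane: drop z
    xh, yh = pts_level[0], pts_level[1]

    # 4. back to polar
    r_proj = np.hypot(xh, yh)
    theta_proj = np.arctan2(yh, xh)
    return r_proj, theta_proj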
I have built a quadcopter but the problem is balancing: it doesn't go up. I am using a PID technique for balancing, but I am not finding suitable values for the PID tuning. I am using an MPU6050 as the sensor. I get the accelerometer values of the x and y axes and find the error from them; that is, let's say if the acceleration on x is not zero, then it is an error, because it should be zero when balanced. I am using the +-2g sensitivity scale of the accelerometer. The motors I am using are DJI 920 KV. What values for kp, ki, and kd should I set? I can't tune them while in flight because it is completely out of balance. This is the design, completely homemade; I have modified it a little after this photo. The accelerometer is at 2g, so at balance z will be 32768/2.
short PID() {
  short error, v;
  error = desired - current;
  //error/=390;
  integ += error;
  der = error - perror;
  x = error;
  x2 = integ;
  x3 = der;
  x *= kp;
  x2 *= ki;
  x3 *= kd;
  v = kpi; x /= 100;
  v = kii; x2 /= 1000;
  v = kdi; x3 /= 1000;
  x = x + x2 + x3;
  //x/=390;
  perror = error;
  return x;
}
There are also a few more questions: should I scale the error or the PID output? The error ranges from 0 to 16380 at the 2g setting, so I am scaling it from 0 to 42. So should I divide the error or the PID output by some value?
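On the scaling question, one alternative that sidesteps it (a sketch in Python for clarity; the arithmetic ports directly to the Arduino side): convert the raw counts to an angle first, so the error is in degrees (roughly -180 to 180) regardless of the +-2g setting, and the PID gains keep a physical meaning:

import math

def tilt_error_deg(acc_y_raw, acc_z_raw, setpoint_deg=0.0):
    # Roll-type angle from the accelerometer, in degrees. The raw counts
    # cancel in the ratio, so the +-2g scale factor does not matter here.
    angle = math.degrees(math.atan2(acc_y_raw, acc_z_raw))
    return setpoint_deg - angle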
I need a basic erector set that the parts will fit with servo motors and dc motors. Preferably below $100. I've looked at Minds-i basic set and it looks good except I don't know if it will function with my servos without hot glue or extensive modifications. If it matters, I am making a bipedal robot so I don't require any wheels or anything pre-built. I just need a basic set that I can add on to to build a whole bunch of different robots.
Is there a firmware upgrade for available for the Create 2? I had some issues in March when using these for assigning a University of Tennessee programming project. We are getting ready to use them again (we have 10 now) and I'd like to get them all updated to the latest firmware.
I'm not sure if this is the right place to post this, but here goes. So, as the title states, I'm planning on building a desk that doubles as an air hockey table and has a robot on the other side. The robot would be mounted on a rail and should be able to move left and right using a linear actuator. It should be able to "attack" the puck using two servos. The real problem is: how should I detect the puck's location? My idea: Since the table would have tiny holes in the corners of every square (0.5in x 0.5in), I could fit a laser into the bottom part of the table, one laser for every 1 in (so a 1in x 1in grid). The same locations would be mirrored on the "ceiling" of the table, but instead of laser diodes they would hold LDRs. So I'm planning on building a matrix, reading the signals of the LDRs' columns and rows, and then performing some logic to locate the center of the puck. PROBLEMS: While I don't see any performance flaws in my plan, I see tons of flaws if it is done even slightly imperfectly. I have to be exactly accurate regarding the laser diodes' positions; they have to be at the center of the holes, right below the z-axis. This would be easy if I were only going to place 4 or 5 of them, but I'm not: according to my estimates, I'm going to have to use 300-700 laser diodes, depending on whether I put the lasers only on the opponent's side or across the entire board. It would definitely be costly. Imagine 300... This isn't really a huge problem, more like a hassle: wiring 300 of these. Forget the PCBs, the project area is just too large. I have thought of numerous ways to mitigate this, like using a color sensor to get the x-axis location and a laser situated at negative x pointing toward positive x to locate the puck's y location, but I'm still comparing ideas. Advantages: I could get a 3D-like graphical representation with 3D-like controls (3D-like in reality, but technically 2D since the lasers are only laid out in the x and y axes, though facing the z-axis). Since this project is going to be my room desk, situated in an automated room, I was thinking of making "desk modes" which would toggle between a game that takes advantage of the lasers and their controls, a control desk for my room, an ordinary desk mode, and an air hockey mode. My question (more like a request): does anyone have another idea for how I could locate the puck's x and y location accurately in real time? EDIT: The table is rollable and stored underneath a loft bed which has an under-bed height of 5'4", which means I can't go big on a vertical solution. EDIT #2: Thanks to the helpful people here, I have come to the conclusion of using a camera. The camera will be a smartphone's; I'll create an app that tracks an object by color and uses a fixed size comparison to identify the distance of the robot from the puck. The phone will then process this and send signals via Bluetooth. The phone is anchored at the end of the robot's moving part, so the camera view is reminiscent of those games with a first-person view. Incoming problems: I'm expecting some delay, given the processing time.
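For the camera route settled on in EDIT #2, a rough OpenCV sketch of colour-based centroid tracking (the HSV thresholds are placeholders for whatever colour the puck actually is, and the OpenCV 4 return signature of findContours is assumed; the same calls exist in OpenCV's mobile SDKs):

import cv2
import numpy as np

LOWER = np.array([100, 120, 70])    # placeholder HSV range for the puck colour
UPPER = np.array([130, 255, 255])

def find_puck(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.erode(mask, None, iterations=2)      # knock out speckle
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)          # assume the biggest blob is the puck
    M = cv2.moments(c)
    if M["m00"] == 0:
        return None
    return (M["m10"] / M["m00"], M["m01"] / M["m00"])   # pixel centroid (x, y)

If the phone can see a few known table corners, a perspective transform (cv2.getPerspectiveTransform) maps the pixel centroid to table coordinates, which tends to be more robust than distance-from-apparent-size alone.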
I'm building a robotic tea-maker/watchdog robot and have a power problem. I would like to be able to have the robot approach a socket and insert the power cord of a cheap immersion heater (120V, 300W, see links below) to turn the heater on. However, the force and precision required to plug it into the wall is beyond the capabilities of my stepper motors/Arduino. My solution was a magnetic breakaway power cord like the charger on a Mac but at higher voltage. Deep fat fryers have suitable ones (120V, high power, see links below). However, the problem is I need both sides of the connector, and I can only find the magnetic breakaway power cord, not the opposite side, which would normally be built into the deep fat fryer. I don't fancy buying a whole fryer just to get one little part... Any ideas? Alternatives to a breakaway cord? Anyone know of any (cheap) 120V induction chargers? I'll resort to a mechanical on/off switch and just leave the robot plugged in if I have to, but I was hoping for something a bit sleeker. Links: Immersion heater Fryer cord
I would like to know how to calculate the distance to each car when I run my application on an autonomous vehicle in real time, and how to implement the calculation in C++. You can see in the images that the distance to each vehicle is known, but I don't know what code I should use to make all these calculations for every vehicle. Please check the photo to understand more about what I'm trying to achieve.
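The question asks for C++, but the arithmetic is tiny and identical in any language, so here is a sketch in Python. Note that a single camera can only give distance if something metric is assumed, for example a typical vehicle width or a calibrated ground plane; the width value below is an assumption:

def distance_from_bbox(bbox_width_px, focal_length_px, real_width_m=1.8):
    # Pinhole model: real_width / distance = bbox_width / focal_length
    # real_width_m = 1.8 m is an assumed average car width.
    return focal_length_px * real_width_m / bbox_width_px

focal_length_px comes from the camera calibration (the fx entry of the intrinsic matrix), and the function is simply called once per detected bounding box. Stereo cameras, lidar or radar give the distance without the size assumption.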
I have two Series 1 XBees that won't be in transparent mode because they are in AT command mode when I'm not in X-CTU. I had asked for help elsewhere and no one had the answer except telling me about flow control. The XBees had been configured properly with the MY and DL settings. I'm thinking maybe I should shorten the timeout so they supposedly get out of AT command mode but they both stay in AT command mode. The only time I can get the two Series 1 XBees to talk is under X-CTU. I need the two Series 1 XBees to automatically be in transparent mode when powered on.
Having a camera mounted on my robot and looking upwards, I want to estimate the distance to the ceiling as the robot moves, and also the positions of landmarks observed on the ceiling (lamps, for example). I know this is a structure-from-motion problem, but I am confused about how to implement it. This case is much simpler than general bundle adjustment: the intrinsic calibration of the camera is known, the camera pose changes only in the x and y directions, and the observed scene is a planar ceiling. Odometry may also be available, but I would like to start solving it without it. Do you know any libraries that offer a good and simple API to do such a thing? Preferably based on Levenberg-Marquardt or similar optimization algorithms, taking in more than just two observations. (Python bindings would be nice to have.)
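Not a full library recommendation, but for a problem this small scipy is often enough. A minimal sketch, assuming the camera looks straight up, the (x, y) camera position for each observation is roughly known (odometry or an initial guess), the intrinsics have already been applied so observations are in normalised image coordinates, and every landmark is visible in every frame (all names here are made up):

import numpy as np
from scipy.optimize import least_squares

def residuals(params, cams, obs):
    # params = [ceiling height, X1, Y1, X2, Y2, ...] for the landmarks
    h = params[0]
    lm = params[1:].reshape(-1, 2)
    res = []
    for (cx, cy), uv in zip(cams, obs):
        # Upward-looking pinhole: u = (X - cx)/h, v = (Y - cy)/h
        pred = (lm - np.array([cx, cy])) / h
        res.append((pred - uv).ravel())
    return np.concatenate(res)

def solve(cams, obs, n_landmarks, h0=2.5):
    # cams: list of (x, y) camera positions; obs: list of (n_landmarks, 2) arrays
    x0 = np.concatenate(([h0], np.zeros(2 * n_landmarks)))
    sol = least_squares(residuals, x0, args=(cams, obs))
    return sol.x[0], sol.x[1:].reshape(-1, 2)

Without any pose information the height is only recoverable up to scale, so in practice either odometry or one known distance is needed to fix the scale; with the poses unknown they can be appended to params and estimated jointly, which is where a proper bundle-adjustment library starts to pay off.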
I've been looking at parts for a beginners robotics kit (I teach at a museum) and have been wondering about servos. You can buy continuous servos with relative position encoders. But I can't find continuous rotation servos with absolute position encoders. Do these exist? If not, why not? I understand that some forums don't like shopping questions, but I suspect that this part doesn't exist and I'd like to understand why. Also, I understand that most servos use a potentiometer as a position encoder and that these don't turn more than 1 rotation, but there are other types of encoders that seem like they would do the job. Thanks for the help!
This is a simple question that I can't seem to find the answer for but when setting up the weave function how exactly does frequency (Hz) determine how fast it moves back and forth? In other words if I raise frequency will it move quicker or slower and what factors must I consider?
Is there any way to add a reset button to the Create2 that would be the equivalent of temporarily disconnecting the battery?
I am working on a quadcopter project, so I have to use PID to stabilize it. I think I am going wrong, because I am adding the PID output to the motors' thrust, while the motors' thrust corresponds to an acceleration. The reason for my previous statement is that when the quad is static in the air (not going up or down), the thrust is just enough to cancel gravity; the thrust acts as a negative gravity, that is, an acceleration. So if I add the PID output to the thrust, that is, to the motors' acceleration, it will be wrong. I should add the PID output to the speed of the motors, which is not directly visible. My quad is not stabilizing, and the reason I see is this: I am adding the PID output to an acceleration, while it should be added to a speed (virtually). What should I do? Should I differentiate the PID output and add that to the thrust? https://mbasic.facebook.com/photo.php?fbid=1545278952394916&id=100007384772233&set=a.1447457675510378.1073741830.100007384772233&refid=17&ft=top_level_post_id.1545278952394916%3Athid.100007384772233%3A306061129499414%3A69%3A0%3A1443682799%3A-1394728329505289925&tn=E https://mbasic.facebook.com/photo.php?fbid=1545281645727980&id=100007384772233&set=a.1447457675510378.1073741830.100007384772233&refid=17&tn=E This is the drawing of my circuit. I am taking the power from one ESC for the whole circuit; the other ESCs have only their PWM wire connected to the circuit.
I was trying to implement the IBVS algorithm (the one explained in the Introduction here) in MATLAB myself, but I am facing the following problem : The algorithm seems to work only for the cases that the camera does not have to change its orientation in respect to the world frame.For example, if I just try to make one vertex of the initial (almost) square go closer to its opposite vertex, the algorithm does not work, as can be seen in the following image The red x are the desired projections, the blue circles are the initial ones and the green ones are the ones I get from my algorithm. Also the errors are not exponentially dereasing as they should. What am I doing wrong? I am attaching my MATLAB code which is fully runable. If anyone could take a look, I would be really grateful. I took out the code that was performing the plotting. I hope it is more readable now. Visual servoing has to be performed with at least 4 target points, because else the problem has no unique solution. If you are willing to help, I would suggest you take a look at the calc_Rotation_matrix() function to check that the rotation matrix is properly calculated, then verify that the line ds = vc; in euler_ode is correct. The camera orientation is expressed in Euler angles according to this convention. Finally, one could check if the interaction matrix L is properly calculated. function VisualServo() global A3D B3D C3D D3D A B C D Ad Bd Cd Dd %coordinates of the 4 points wrt camera frame A3D = [-0.2633;0.27547;0.8956]; B3D = [0.2863;-0.2749;0.8937]; C3D = [-0.2637;-0.2746;0.8977]; D3D = [0.2866;0.2751;0.8916]; %initial projections (computed here only to show their relation with the desired ones) A=A3D(1:2)/A3D(3); B=B3D(1:2)/B3D(3); C=C3D(1:2)/C3D(3); D=D3D(1:2)/D3D(3); %initial camera position and orientation %orientation is expressed in Euler angles (X-Y-Z around the inertial frame %of reference) cam=[0;0;0;0;0;0]; %desired projections Ad=A+[0.1;0]; Bd=B; Cd=C+[0.1;0]; Dd=D; t0 = 0; tf = 50; s0 = cam; %time step dt=0.01; t = euler_ode(t0, tf, dt, s0); end function ts = euler_ode(t0,tf,dt,s0) global A3D B3D C3D D3D Ad Bd Cd Dd s = s0; ts=[]; for t=t0:dt:tf ts(end+1)=t; cam = s; % rotation matrix R_WCS_CCS R = calc_Rotation_matrix(cam(4),cam(5),cam(6)); r = cam(1:3); % 3D coordinates of the 4 points wrt the NEW camera frame A3D_cam = R'*(A3D-r); B3D_cam = R'*(B3D-r); C3D_cam = R'*(C3D-r); D3D_cam = R'*(D3D-r); % NEW projections A=A3D_cam(1:2)/A3D_cam(3); B=B3D_cam(1:2)/B3D_cam(3); C=C3D_cam(1:2)/C3D_cam(3); D=D3D_cam(1:2)/D3D_cam(3); % computing the L matrices L1 = L_matrix(A(1),A(2),A3D_cam(3)); L2 = L_matrix(B(1),B(2),B3D_cam(3)); L3 = L_matrix(C(1),C(2),C3D_cam(3)); L4 = L_matrix(D(1),D(2),D3D_cam(3)); L = [L1;L2;L3;L4]; %updating the projection errors e = [A-Ad;B-Bd;C-Cd;D-Dd]; %compute camera velocity vc = -0.5*pinv(L)*e; %change of the camera position and orientation ds = vc; %update camera position and orientation s = s + ds*dt; end ts(end+1)=tf+dt; end function R = calc_Rotation_matrix(theta_x, theta_y, theta_z) Rx = [1 0 0; 0 cos(theta_x) -sin(theta_x); 0 sin(theta_x) cos(theta_x)]; Ry = [cos(theta_y) 0 sin(theta_y); 0 1 0; -sin(theta_y) 0 cos(theta_y)]; Rz = [cos(theta_z) -sin(theta_z) 0; sin(theta_z) cos(theta_z) 0; 0 0 1]; R = Rx*Ry*Rz; end function L = L_matrix(x,y,z) L = [-1/z,0,x/z,x*y,-(1+x^2),y; 0,-1/z,y/z,1+y^2,-x*y,-x]; end Cases that work: A2=2*A; B2=2*B; C2=2*C; D2=2*D; A2=A+1; B2=B+1; C2=C+1; D2=D+1; A2=2*A+1; B2=2*B+1; C2=2*C+1; D2=2*D+1; Cases that do NOT work: Rotation by 90 degrees and zoom out 
(zoom out alone works, but I am doing it here for better visualization) A2=2*D; B2=2*C; C2=2*A; D2=2*B;
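One thing that may be worth checking in the update step, offered as a possibility rather than a confirmed diagnosis: the velocity $v_c = -\lambda L^+ e$ produced by IBVS is the camera twist expressed in the camera frame, while the state being integrated (position and X-Y-Z Euler angles) is expressed in the world frame. The line ds = vc therefore only matches when the camera frame is aligned with the world frame, which would be consistent with the pure-translation test cases working and the rotation cases failing. The usual correction is of the form
$$\dot{\mathbf{r}} = R\,\boldsymbol{\nu}_c, \qquad \dot{\boldsymbol{\Theta}} = J(\boldsymbol{\Theta})^{-1}\,R\,\boldsymbol{\omega}_c,$$
where $R$ is the current camera-to-world rotation, $\boldsymbol{\nu}_c$ and $\boldsymbol{\omega}_c$ are the linear and angular halves of vc, and $J(\boldsymbol{\Theta})$ is the matrix that maps Euler-angle rates to angular velocity for the chosen X-Y-Z convention (so its inverse maps the other way).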
Is there a way I can control my Arduino robot from anywhere in the world? The robot goes out of range of my home Wi-Fi, so my Wi-Fi shield can't help. Is there a way to make sure the robot is always on the Internet no matter where it goes?
I'm looking for a complete tutorial textbook for how to build and control a quadrotor (dynamics, control, etc.). I'm an engineer with a broad background in programming, mechanics, and control but it's been several years and I'm rusty. I was just wondering if anyone knew of a great "from the ground up" tutorial for quadrotors? I found this book which looks interesting but thought I'd ask here too. Thanks! EDIT So, assume I've taken a formal course on all necessary topics: system modeling, mechanics, control theory, state estimation, programming, etc. I'm looking for a book that assumes the reader is familiar with the topics but also goes step-by-step. For example, instead of just stating "here are the system equations" I'm looking for "let's derive the system equations" (but assumes you are familiar with modeling/kinematics). I'd like to start a quadcopter as a side project but have precious spare time so I'd prefer a single good reference instead of jumping from each individual topic textbook; maybe I'm just being greedy :)
Which method is better, in terms of accuracy, for indoor localization of a drone: a camera-based system, or wireless techniques like WLAN or Bluetooth?
I want to control the attitude (roll, pitch, yaw) of a vehicle capable of pitching and rolling. To do this I have created a quaternion PID controller. First I take the current attitude of the vehicle, converting it to a quaternion Qc, and do the same for the desired attitude with the quaternion Qd. I then calculate the input to my PID controller as Qr = Qc' x Qd. The imaginary parts of the quaternion are then fed as force requests on the roll, pitch and yaw axes of the vehicle. I tested it in a simulator and the control works, but it becomes unstable in some cases (e.g. a request for R: 60, P: 60, Y: 60). I also want this to work near singularities (i.e. pitch 90). Does anyone know why I get this behavior, and if so, can you explain (thoroughly) what I'm doing wrong?
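One common source of exactly this kind of instability, offered as a guess rather than a certain diagnosis, is the double cover of rotations: Qr and -Qr describe the same attitude, but the vector part of the "long way round" quaternion asks the controller for a rotation of more than 180 degrees, which can make large set-point changes like R: 60, P: 60, Y: 60 fight themselves. A small sketch of the usual guard, in Python with [w, x, y, z] ordering assumed:

import numpy as np

def attitude_error(q_current, q_desired):
    # q_err = conj(q_current) * q_desired, quaternions as [w, x, y, z]
    w1, x1, y1, z1 = q_current
    w1, x1, y1, z1 = w1, -x1, -y1, -z1          # conjugate of the current attitude
    w2, x2, y2, z2 = q_desired
    q_err = np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])
    if q_err[0] < 0.0:
        q_err = -q_err          # take the short way around
    return q_err[1:]            # vector part -> per-axis input for the PID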
I'm implementing an extended Kalman filter and I'm facing a problem with showing the covariances to the user. The covariance matrix estimate contains all the information we have about the current value estimate, but that is too much to display. I would like to have a single number that says "our estimate is really good" when close to 0 and "our estimate is not worth much" when large. My intuitive simple solution would be to average all the values in the covariance estimate matrix (or maybe just the diagonal), except that in my case the values have different units and different ranges. Is it possible to do something like this?
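Two common single-number summaries, sketched with numpy; the per-state scales are an assumption the user has to supply, and they are what deal with the mixed units:

import numpy as np

def normalized_uncertainty(P, scales):
    # P: covariance estimate; scales: per-state std-devs considered "bad".
    sigmas = np.sqrt(np.diag(P))
    return float(np.mean(sigmas / scales))     # ~0 good, ~1 or more poor

def log_volume(P):
    # Log-determinant: proportional to the log of the volume of the
    # uncertainty ellipsoid (units still mixed, but useful for trends).
    sign, logdet = np.linalg.slogdet(P)
    return float(logdet)

The normalised average is easy to explain to a user; the log-determinant is monotone in the volume of the uncertainty ellipsoid and is handy for trend plots, even though its absolute value still mixes units.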
What subjects are involved in robotics? If I want to build robots, what are the necessary things I need to learn, and in what order, as a beginner?
As far as I know, a robot sends its orders as discrete signals. However, isn't computer simulation based on continuous models? Do you know whether any important differences can appear when comparing reality to simulation in some cases? I heard that cable-driven robots are quite sensitive to this.
I have recently built a raspberry pi based quadcopter that communicates with my tablet over wifi. The problem is that it drifts a lot. At first I thought that the problem was vibration, so I mounted the MPU-6050 more securely to the frame. That seemed to help a bit, but it still drifts. I have tried tuning the PID, tuning the complementary filter, and installing a real time OS. Nothing seems to help very much. Below is my code written completely in java. Any suggestions are appreciated. QuadServer.java: package com.zachary.quadserver; import java.net.*; import java.io.*; import java.util.*; import com.pi4j.io.i2c.I2CBus; import com.pi4j.io.i2c.I2CDevice; import com.pi4j.io.i2c.I2CFactory; import se.hirt.pi.adafruit.pwm.PWMDevice; import se.hirt.pi.adafruit.pwm.PWMDevice.PWMChannel; public class QuadServer { private final static int FREQUENCY = 490; private static final int MIN = 740; private static final int MAX = 2029; private static Sensor sensor = new Sensor(); private static double PX = 0; private static double PY = 0; private static double PZ = 0; private static double IX = 0; private static double IY = 0; private static double IZ = 0; private static double DX = 0; private static double DY = 0; private static double DZ = 0; private static double kP = 1.95; //2.0 private static double kI = 10.8; //8.5 private static double kD = 0.15; //0.14 private static long time = System.currentTimeMillis(); private static double last_errorX = 0; private static double last_errorY = 0; private static double last_errorZ = 0; private static double outputX; private static double outputY; private static double outputZ; private static int val[] = new int[4]; private static int throttle; static double setpointX = 0; static double setpointY = 0; static double setpointZ = 0; static double errorX; static double errorY; static double errorZ; static long receivedTime = System.currentTimeMillis(); private static String data; static int trimX = -70; static int trimY = 70; public static void main(String[] args) throws IOException, NullPointerException { DatagramSocket serverSocket = new DatagramSocket(40002); PWMDevice device = new PWMDevice(); device.setPWMFreqency(FREQUENCY); PWMChannel esc0 = device.getChannel(0); PWMChannel esc1 = device.getChannel(1); PWMChannel esc2 = device.getChannel(2); PWMChannel esc3 = device.getChannel(3); /*Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() { public void run() { System.out.println("terminating"); try { esc0.setPWM(0, calculatePulseWidth(MIN/1000.0, FREQUENCY)); esc1.setPWM(0, calculatePulseWidth(MIN/1000.0, FREQUENCY)); esc2.setPWM(0, calculatePulseWidth(MIN/1000.0, FREQUENCY)); esc3.setPWM(0, calculatePulseWidth(MIN/1000.0, FREQUENCY)); } catch (IOException e) { e.printStackTrace(); } } })); System.out.println("running");*/ Thread read = new Thread(){ public void run(){ while(true) { try { byte receiveData[] = new byte[1024]; DatagramPacket receivePacket = new DatagramPacket(receiveData, receiveData.length); serverSocket.receive(receivePacket); String message = new String(receivePacket.getData()); data = ""+IX; addData(IY); addData(sensor.readAccelAngle(0)); addData(sensor.readAccelAngle(1)); byte[] sendData = new byte[1024]; sendData = data.getBytes(); InetAddress IPAddress = InetAddress.getByName("192.168.1.9"); DatagramPacket sendPacket = new DatagramPacket(sendData, sendData.length, IPAddress, 1025); serverSocket.send(sendPacket); setpointX = Double.parseDouble(message.split("\\s+")[0])*0.7; setpointY = 
Double.parseDouble(message.split("\\s+")[1])*0.7; throttle = (int)(Integer.parseInt((message.split("\\s+")[3]))*12.67)+MIN; kP = Math.round((Integer.parseInt(message.split("\\s+")[4])*0.05)*1000.0)/1000.0; kI = Math.round((Integer.parseInt(message.split("\\s+")[5])*0.2)*1000.0)/1000.0; kD = Math.round((Integer.parseInt(message.split("\\s+")[6])*0.01)*1000.0)/1000.0; trimX = (Integer.parseInt(message.split("\\s+")[7])-50)*2; trimY = (Integer.parseInt(message.split("\\s+")[8])-50)*2; double accelSmoothing = 0.02;//(Integer.parseInt(message.split("\\s+")[8])*0.05)+1; double gyroSmoothing = 0.04;//(Integer.parseInt(message.split("\\s+")[7])*0.01); sensor.setSmoothing(gyroSmoothing, accelSmoothing); //System.out.println("trimX: "+trimX+" trimY: "+trimY); System.out.println("kP: "+kP+", kI: "+kI+", kD: "+kD+", trimX: "+trimX+", trimY: "+trimY); receivedTime = System.currentTimeMillis(); } catch (IOException e) { e.printStackTrace(); } } } }; read.start(); while(true) { Arrays.fill(val, throttle); errorX = sensor.readGyro(0)-setpointX; errorY = -sensor.readGyro(1)-setpointY; errorZ = sensor.readGyro(2)-setpointZ; double dt = (double)(System.currentTimeMillis()-time)/1000; double accelAngleX = sensor.readAccelAngle(0); double accelAngleY = sensor.readAccelAngle(1); if(dt > 0.005) { PX = errorX; PY = errorY; PZ = errorZ; IX += (errorX)*dt; IY += (errorY)*dt; //IZ += errorZ*dt; IX = 0.98*IX+0.02*accelAngleX; IY = 0.98*IY+0.02*accelAngleY; DX = (errorX - last_errorX)/dt; DY = (errorY - last_errorY)/dt; //DZ = (errorZ - last_errorZ)/dt; last_errorX = errorX; last_errorY = errorY; last_errorZ = errorZ; outputX = kP*PX+kI*IX+kD*DX; outputY = kP*PY+kI*IY+kD*DY; outputZ = kP*PZ+kI*IZ+kD*DZ; time = System.currentTimeMillis(); } //System.out.println(IX+", "+IY+", "+throttle); add(-outputX-outputY-outputZ-trimX+trimY, 0); //clockwise add(-outputX+outputY+outputZ-trimX-trimY, 1); //counterClockwise add(outputX+outputY-outputZ+trimX-trimY, 2); //clockwise add(outputX-outputY+outputZ+trimX+trimY, 3); //counterclockwise //System.out.println(val[0]+", "+val[1]+", "+val[2]+", "+val[3]); try { if(System.currentTimeMillis()-receivedTime < 1000) { esc0.setPWM(0, calculatePulseWidth(val[0]/1000.0, FREQUENCY)); esc1.setPWM(0, calculatePulseWidth(val[1]/1000.0, FREQUENCY)); esc2.setPWM(0, calculatePulseWidth(val[2]/1000.0, FREQUENCY)); esc3.setPWM(0, calculatePulseWidth(val[3]/1000.0, FREQUENCY)); } else { esc0.setPWM(0, calculatePulseWidth(800/1000.0, FREQUENCY)); esc1.setPWM(0, calculatePulseWidth(800/1000.0, FREQUENCY)); esc2.setPWM(0, calculatePulseWidth(800/1000.0, FREQUENCY)); esc3.setPWM(0, calculatePulseWidth(800/1000.0, FREQUENCY)); } } catch (IOException e) { e.printStackTrace(); } } } private static void add(double value, int i) { if(val[i]+value > MIN && val[i]+value < MAX) { val[i] += value; }else if(val[i]+value < MIN) { //System.out.println("low"); val[i] = MIN; }else if(val[i]+value > MAX) { //System.out.println("low"); val[i] = MAX; } } static void addData(double value) { data += " "+value; } private static int calculatePulseWidth(double millis, int frequency) { return (int) (Math.round(4096 * millis * frequency/1000)); } } Sensor.java: package com.zachary.quadserver; import com.pi4j.io.gpio.GpioController; import com.pi4j.io.gpio.GpioFactory; import com.pi4j.io.gpio.GpioPinDigitalOutput; import com.pi4j.io.gpio.PinState; import com.pi4j.io.gpio.RaspiPin; import com.pi4j.io.i2c.*; import java.net.*; import java.io.*; public class Sensor { static I2CDevice sensor; static I2CBus bus; static byte[] 
accelData, gyroData; static long accelCalib[] = {0, 0, 0}; static long gyroCalib[] = {0, 0, 0}; static double gyroX; static double gyroY; static double gyroZ; static double smoothedGyroX; static double smoothedGyroY; static double smoothedGyroZ; static double accelX; static double accelY; static double accelZ; static double accelAngleX; static double accelAngleY; static double smoothedAccelAngleX; static double smoothedAccelAngleY; static double angleX; static double angleY; static double angleZ; static boolean init = true; static double accelSmoothing = 1; static double gyroSmoothing = 1; public Sensor() { try { bus = I2CFactory.getInstance(I2CBus.BUS_1); sensor = bus.getDevice(0x68); sensor.write(0x6B, (byte) 0x0); sensor.write(0x6C, (byte) 0x0); System.out.println("Calibrating..."); calibrate(); Thread sensors = new Thread(){ public void run(){ try { readSensors(); } catch (IOException e) { e.printStackTrace(); } } }; sensors.start(); } catch (IOException e) { System.out.println(e.getMessage()); } } private static void readSensors() throws IOException { long time = System.currentTimeMillis(); long sendTime = System.currentTimeMillis(); while (true) { accelData = new byte[6]; gyroData = new byte[6]; int r = sensor.read(0x3B, accelData, 0, 6); accelX = (((accelData[0] << 8)+accelData[1]-accelCalib[0])/16384.0)*9.8; accelY = (((accelData[2] << 8)+accelData[3]-accelCalib[1])/16384.0)*9.8; accelZ = ((((accelData[4] << 8)+accelData[5]-accelCalib[2])/16384.0)*9.8)+9.8; accelZ = 9.8-Math.abs(accelZ-9.8); double hypotX = Math.sqrt(Math.pow(accelX, 2)+Math.pow(accelZ, 2)); double hypotY = Math.sqrt(Math.pow(accelY, 2)+Math.pow(accelZ, 2)); accelAngleX = Math.toDegrees(Math.asin(accelY/hypotY)); accelAngleY = Math.toDegrees(Math.asin(accelX/hypotX)); //System.out.println(accelAngleX[0]+" "+accelAngleX[1]+" "+accelAngleX[2]+" "+accelAngleX[3]); //System.out.println("accelX: " + accelX+" accelY: " + accelY+" accelZ: " + accelZ); r = sensor.read(0x43, gyroData, 0, 6); gyroX = (((gyroData[0] << 8)+gyroData[1]-gyroCalib[0])/131.0); gyroY = (((gyroData[2] << 8)+gyroData[3]-gyroCalib[1])/131.0); gyroZ = (((gyroData[4] << 8)+gyroData[5]-gyroCalib[2])/131.0); if(init) { smoothedAccelAngleX = accelAngleX; smoothedAccelAngleY = accelAngleY; smoothedGyroX = gyroX; smoothedGyroY = gyroY; smoothedGyroZ = gyroZ; init = false; } else { smoothedAccelAngleX = smoothedAccelAngleX+(accelSmoothing*(accelAngleX-smoothedAccelAngleX)); smoothedAccelAngleY = smoothedAccelAngleY+(accelSmoothing*(accelAngleY-smoothedAccelAngleY)); smoothedGyroX = smoothedGyroX+(gyroSmoothing*(gyroX-smoothedGyroX)); smoothedGyroY = smoothedGyroY+(gyroSmoothing*(gyroY-smoothedGyroY)); smoothedGyroZ = smoothedGyroZ+(gyroSmoothing*(gyroZ-smoothedGyroZ)); /*smoothedAccelAngleX = accelAngleX; smoothedAccelAngleY = accelAngleY; smoothedGyroX = gyroX; smoothedGyroY = gyroY; smoothedGyroY = gyroY;*/ /*smoothedAccelAngleX += (accelAngleX-smoothedAccelAngleX)/accelSmoothing; smoothedAccelAngleY += (accelAngleY-smoothedAccelAngleY)/accelSmoothing; smoothedGyroX += (gyroX-smoothedGyroX)/gyroSmoothing; smoothedGyroY += (gyroY-smoothedGyroY)/gyroSmoothing; smoothedGyroZ += (gyroZ-smoothedGyroZ)/gyroSmoothing;*/ } angleX += smoothedGyroX*(System.currentTimeMillis()-time)/1000; angleY += smoothedGyroY*(System.currentTimeMillis()-time)/1000; angleZ += smoothedGyroZ; angleX = 0.95*angleX + 0.05*smoothedAccelAngleX; angleY = 0.95*angleY + 0.05*smoothedAccelAngleY; time = System.currentTimeMillis(); //System.out.println((int)angleX+" "+(int)angleY); 
//System.out.println((int)accelAngleX+", "+(int)accelAngleY); } } public static void calibrate() throws IOException { int i; for(i = 0; i < 100; i++) { accelData = new byte[6]; gyroData = new byte[6]; int r = sensor.read(0x3B, accelData, 0, 6); accelCalib[0] += (accelData[0] << 8)+accelData[1]; accelCalib[1] += (accelData[2] << 8)+accelData[3]; accelCalib[2] += (accelData[4] << 8)+accelData[5]; r = sensor.read(0x43, gyroData, 0, 6); gyroCalib[0] += (gyroData[0] << 8)+gyroData[1]; gyroCalib[1] += (gyroData[2] << 8)+gyroData[3]; gyroCalib[2] += (gyroData[4] << 8)+gyroData[5]; try { Thread.sleep(1); } catch (Exception e){ e.printStackTrace(); } } gyroCalib[0] /= i; gyroCalib[1] /= i; gyroCalib[2] /= i; accelCalib[0] /= i; accelCalib[1] /= i; accelCalib[2] /= i; System.out.println(gyroCalib[0]+", "+gyroCalib[1]+", "+gyroCalib[2]); System.out.println(accelCalib[0]+", "+accelCalib[1]+", "+accelCalib[2]); } public double readAngle(int axis) { switch (axis) { case 0: return angleX; case 1: return angleY; case 2: return angleZ; } return 0; } public double readGyro(int axis) { switch (axis) { case 0: return smoothedGyroX; case 1: return smoothedGyroY; case 2: return smoothedGyroZ; } return 0; } public double readAccel(int axis) { switch (axis) { case 0: return accelX; case 1: return accelY; case 2: return accelZ; } return 0; } public double readAccelAngle(int axis) { switch (axis) { case 0: return smoothedAccelAngleX; case 1: return smoothedAccelAngleY; } return 0; } public void setSmoothing(double gyro, double accel) { gyroSmoothing = gyro; accelSmoothing = accel; } }
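One small thing visible in the Sensor class above, noted as an observation rather than a confirmed cause of the drift: angleX and angleY are integrated using the elapsed time, but the line angleZ += smoothedGyroZ; has no dt factor, and in the main loop the accelerometer correction is folded into the PID integral terms (IX, IY) rather than into a separate attitude estimate. A plain complementary filter usually keeps the attitude estimate and the controller state separate, roughly like this (a sketch, not a drop-in for the Java above):

def complementary_update(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    # Integrate the gyro over the real elapsed time, then pull the estimate
    # gently toward the accelerometer angle to cancel slow gyro drift.
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

The PID would then act on the filtered angle (with the gyro rate available as a clean derivative term), keeping its own integral state separate from the attitude estimate.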
I currently have a description of my 22-joint robot in "classic" DH parameters. However, I would like the "modified" parameters. Is this conversion as simple as shifting the $a$ and $\alpha$ columns of the parameter table by one row? As you can imagine, 22 joints is a lot, so I'd rather not re-derive all the parameters if I don't have to. (Actually, the classic parameters are pulled out of OpenRave with the command planningutils.GetDHParameters(robot).)
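For reference (standard definitions, worth checking against whatever source produced the 22-joint table), the two conventions compose the same four elementary transforms in a different order:
$$\text{classic:}\quad {}^{i-1}T_i = \mathrm{Rot}_z(\theta_i)\,\mathrm{Trans}_z(d_i)\,\mathrm{Trans}_x(a_i)\,\mathrm{Rot}_x(\alpha_i)$$
$$\text{modified (Craig):}\quad {}^{i-1}T_i = \mathrm{Rot}_x(\alpha_{i-1})\,\mathrm{Trans}_x(a_{i-1})\,\mathrm{Rot}_z(\theta_i)\,\mathrm{Trans}_z(d_i)$$
So the modified table reuses the same numbers with $a$ and $\alpha$ indexed one joint earlier while $d$ and $\theta$ keep their rows, which is essentially the column shift described above (plus bookkeeping for the first and last frames). The usual sanity check is to multiply out the full chain both ways for a few joint configurations and compare the end-effector poses.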
I am taking part in a robotics competition, where the challenge is to create a pair of robots which successfully navigate a series of obstacles. However, the rules state that of the two robots, only one may have a driving actuator. The other must somehow be moved by the first robot, WITHOUT PHYSICAL CONTACT. I could think of either putting sails on the non-driving robot and moving it with fans on the driving one, or electromagnets on the driving one and permanent magnets of the opposite polarity on the non-driving one. However, the problem with both is that efficiency falls off drastically with distance. Thus, I am looking for possible ways to overcome this problem. Thanks :) Also, the driving robot has a cabled power supply, while the non-driving one may only have batteries. Rulebook: http://ultimatist.com/video/Rulebook2016_Final_website_1_Sep_15.zip
I am at the moment learning about rotation matrices. It seems confusing how it can be that $R_A^C=R_A^B R_B^C$ is the rotation relating coordinate frames A and C, where A, B, C are different coordinate frames. For the 2x2 case, a rotation between two frames $a$ and $b$ is defined as
$$ R = \left( \begin{matrix} \hat{x}_a\cdot\hat{x}_b & \hat{x}_a\cdot\hat{y}_b \\ \hat{y}_a\cdot\hat{x}_b & \hat{y}_a\cdot\hat{y}_b \end{matrix} \right) $$
where $\hat{x}_a, \hat{y}_a$ and $\hat{x}_b, \hat{y}_b$ are the axes of the two different coordinate frames. I don't see how, using this definition, the multiplication stated above will give the same matrix as $R_A^C$. Some form of clarification would be helpful here.
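The composition rule drops out of chaining coordinate changes; writing $p_A$ for the coordinates of a point expressed in frame A,
$$p_A = R_A^B\, p_B,\qquad p_B = R_B^C\, p_C \;\;\Rightarrow\;\; p_A = R_A^B R_B^C\, p_C = R_A^C\, p_C .$$
With the dot-product (direction-cosine) definition, the $(i,j)$ entry of $R_A^B R_B^C$ expands to $\sum_k (\hat e_{A,i}\cdot\hat e_{B,k})(\hat e_{B,k}\cdot\hat e_{C,j})$, and because the $\hat e_{B,k}$ form an orthonormal basis this sum collapses to $\hat e_{A,i}\cdot\hat e_{C,j}$, which is exactly the $(i,j)$ entry of $R_A^C$. So the product of the two dot-product matrices really is the dot-product matrix between the outer pair of frames.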
// MPU-6050 Short Example Sketch // By Arduino User JohnChi // August 17, 2014 // Public Domain #include<Wire.h> #include <Servo.h> Servo firstESC, secondESC,thirdESC,fourthESC; //Create as much as Servoobject you want. You const int MPU=0x68; // I2C address of the MPU-6050 int speed1=2000,speed2=0,speed3=0,speed4; int16_t AcX,AcY,AcZ,Tmp,GyX,GyY,GyZ; float integ=0,der=0,pidx=0,kp = .5,ki=0.00005 ,kd=.01,prerror,dt=100; void setup(){ firstESC.attach(3); // attached to pin 9 I just do this with 1 Servo secondESC.attach(5); // attached to pin 9 I just do this with 1 Servo thirdESC.attach(6); // attached to pin 9 I just do this with 1 Servo fourthESC.attach(9); // attached to pin 9 I just do this with 1 Servo Wire.begin(); Wire.beginTransmission(MPU); Wire.write(0x6B); // PWR_MGMT_1 register Wire.write(0); // set to zero (wakes up the MPU-6050) Wire.endTransmission(true); Wire.beginTransmission(MPU); Wire.write(0x1c); // PWR_MGMT_1 register Wire.write(0<<3); // set to zero (wakes up the MPU-6050) Wire.endTransmission(true); Serial.begin(9600); firstESC.writeMicroseconds(0); secondESC.writeMicroseconds(0); thirdESC.writeMicroseconds(0); fourthESC.writeMicroseconds(0); firstESC.writeMicroseconds(2000); secondESC.writeMicroseconds(2000); thirdESC.writeMicroseconds(2000); fourthESC.writeMicroseconds(2000); delay(2000); firstESC.writeMicroseconds(700); secondESC.writeMicroseconds(700); thirdESC.writeMicroseconds(700); fourthESC.writeMicroseconds(700); delay(2000); } void loop(){ Wire.beginTransmission(MPU); Wire.write(0x3B); // starting with register 0x3B (ACCEL_XOUT_H) Wire.endTransmission(false); Wire.requestFrom(MPU,4,true); // request a total of 14 registers AcX=Wire.read()<<8|Wire.read(); // 0x3B (ACCEL_XOUT_H) & 0x3C (ACCEL_XOUT_L) AcY=Wire.read()<<8|Wire.read(); // 0x3D (ACCEL_YOUT_H) & 0x3E (ACCEL_YOUT_L) firstESC.writeMicroseconds(0); secondESC.writeMicroseconds(700-(pidx/10)); thirdESC.writeMicroseconds(700+(pidx/10)); fourthESC.writeMicroseconds(0); PID(); //if(Serial.available()) //speed1 = Serial.parseInt(); //Serial.print("AcX = "); Serial.print(AcX); //Serial.print(" | AcY = "); Serial.print(AcY); //Serial.println(); //delay(333); } void PIdD() { float error; error = (atan2(AcY,AcZ)*180/3.14); now = millis(); dt = now-ptime; if(error>0)error=180-error; else error = -(180+error); error=0-error; integ = integ+(error*dt) ; der = (error - prerror)/dt ; prerror=error; pidx = (kp*error); pidx+=(ki*integ); pidx+=(kd*der); if(pidx>1000)pidx=1000; if(pidx<-1000)pidx=-1000; ptime = now; } The above is my program for my quadcopter, but now I have to tune the PID values, that is kp, ki, and kd. My accelome is at 2g. Please point to me what is wrong with the program? Is the error signal not appropriate? Please also give me or help me choose correct PID tuning. My limitation is I always have to connect my Arduino to pc and change kp ki or kd values, that is I have no remote control available currently.
I don't seem to be able to get any battery power from the Create 2. I spliced the original cable it came with, and tried to use the power from red/purple (+) and yellow/orange (-) to power a Raspberry Pi 2, with no luck. While the serial-to-USB cable still works, and I am able to command the robot via Python, there seems to be no power on the red/purple wires. I tried with a multimeter with no luck, even as I moved the device through passive/safe/full modes. There is no power even when the Create 2 is charging/docked.
What is the maximum rotational velocity of miniature ball-screw (diameters up to 12mm) for approximately 1000 thrust cycles, and which type/brand would that be, if the speed is limited by the ball return mechanism? The fastest I could find was 4000 rpm at 3000 N thrust, but this was from a datasheet with a big safety margin (millions of cycles). I'm looking for either experience and data, or a general method/formula that can be used to find the maximum velocity (and load) as function of cycles or the other way round (similar to those of ball bearings). Suggestions and knowledge about faster types and brands of ballscrews than the ones I have been able to find is welcome as well. Some more background information: Ball screws are very interesting transmissions for electrically actuated legged robotics, since they provide a high-geared rotary-to-linear transmission that is accurate, precise, energy efficient and possibly backlash-free. However, the big downside is their limited rotational speed. The maximum rotational velocity is limited by resonance and the ball return mechanism. The former limit is easy to calculate (eigenfrequency calculation), and mostly not problematic for small spindles. However, the latter is a bigger problem. The balls in a ball screw roll through the threaded spindle and have to be recirculated to the other end of the nut. The recirculation limits the rotational velocity of the ballscrews. The corresponding maximum rotational velocities are not calculate-able (for as far as I know) and are provided by manufacturers in catalogues, either directly in rpm or via a so-called $D_n$-value, where the rotational velocity in rpm is $n=D_n/d$ where d is the diameter of the ball screw. But even then, the maximum rotational velocity of ball screws is capped at 4000 rpm or lower according to datasheets (depending on brand and ball return mechanism). The highest permissible rotational velocities I found were those of Steinmeyer ballscrews, at 4000 rpm, using an end-cap-return mechanism. Note that for electrical motors (up to 200W) ideal (maximum power) velocities are higher than 4000 rpm, and even more than twice as high for many brushless motors. It appears however that ball screws can run at higher speeds than what they are specified for in reality, because the specifications hold for many millions of cycles. I can only find a single unofficial source where someone claims to have run their ball-screws up to 6000 rpm, and in missiles (one-time-use) up to 7500 rpm. I'm interested in a theory or more experimental data that backs this up.
I am trying to understand how to use and compute the homogeneous transformation matrix, and what it requires. I know two points from two different frames, and the two origins of the corresponding frames. I know what the transformation matrix looks like, but what confuses me is how I should compute the (3x1) position vector which the matrix needs. As I understand it, this vector is the origin of the old frame expressed relative to the new frame. But how do I calculate it? The obvious answer (I think) would be to subtract the two ($O_{new} - O_{old}$), but it does not feel right. I know it's a simple question, but my head cannot get around this issue, and how can I prove it the right way with the information I know?
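One way to pin down the translation column (standard convention, but worth double-checking against the textbook in use): in the transform that maps coordinates from the old frame to the new frame, the (3x1) position vector is the origin of the old frame expressed in the coordinates of the new frame. Plain subtraction is therefore almost right; the difference just has to be re-expressed along the new frame's axes. If both origins are given in some common (world) frame,
$$\mathbf{d} \;=\; R_{new}^{\top}\,\bigl(O_{old} - O_{new}\bigr),$$
where $R_{new}$ is the rotation whose columns are the new frame's axes written in the common frame. A quick consistency check with the known data: applying the full transform to each known point expressed in the old frame must reproduce that point's coordinates in the new frame, i.e. $p_{new} = R\,p_{old} + \mathbf{d}$ for both point pairs.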
I'm currently undertaking a project to build remote-controlled shades from scratch. I currently have every piece figured out except that I don't know much about the motors involved in something like this. I am looking for suggestions on what type of motor to search for. I imagine I need a type that can go forward and back, as well as stop when the shade is fully retracted. I don't know what to search for, though. Any help is much appreciated.
I'm developing a small-scale cart-pole balancing robot consisting of two wheels driven by a single motor at the base (essentially like a unicycle, but with two wheels to constrain balance to a one-dimensional problem). I'm not sure what qualities to look for in that motor. I think the motor should be able to accelerate quickly in the direction opposite to the motion, as dictated by the control system. However, I'm not sure whether this rapid acceleration calls for a higher-torque motor or a faster motor. I suspect higher-torque motors would be too slow to react to control commands; in contrast, fast motors may not be able to overcome the momentum of the cart. Are there any design equations or other calculations I can make based on my robot's dimensions and weight to determine the right specs for my robot's motor? How can I determine the right motor specs for this application without resorting to brute-force trial-and-error experiments?
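A rough first-pass calculation that ties the spec to the robot's numbers (all values below are placeholders to be replaced with the actual mass, wheel radius and allowed tilt): to start recovering from a tilt $\theta$ the base has to accelerate at roughly $g\tan\theta$, which sets a torque requirement at the wheel, while the catch-up speed sets the rpm requirement.

import math

m_total = 1.0                     # kg, cart + pole (placeholder)
r_wheel = 0.04                    # m, wheel radius (placeholder)
theta_max = math.radians(15.0)    # largest tilt the controller should recover from
v_max = 1.0                       # m/s, top base speed needed while catching the pole
g = 9.81

a_req = g * math.tan(theta_max)             # base acceleration to start recovering
torque_req = m_total * a_req * r_wheel      # axle torque, ignoring friction and rotor inertia
speed_req_rpm = (v_max / r_wheel) * 60.0 / (2.0 * math.pi)
print(torque_req, speed_req_rpm)            # ~0.105 N*m and ~240 rpm for these numbers

The answer to the torque-versus-speed dilemma is that the motor (plus gearbox) must deliver torque_req and speed_req_rpm at the same operating point on its torque-speed curve, with a margin of 2-3x for friction, voltage sag and rotor inertia; gearing trades one for the other along that curve, so neither "high torque" nor "high speed" alone is the right target.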
Say we have a line-following robot that has a moving obstacle in front, that is a one-dimensional problem. The moving obstacle is defined by its initial state and a sequence of (longitudinal) acceleration changes (the acceleration function is piecewise constant). Let's say the robot can be controlled by specifying again a sequence of acceleration changes and its initial state. However, the robot has a maximum and minimum acceleration and a maximum and minimum velocity. How can I calculate the sequence of accelerations minimizing the time the robot needs to reach a goal. Note that the final velocity must not necessarily be zero. Can you briefly explain how this problem can be addressed or point me to some references where an algorithm is described? Or point out closely related problems? Furthermore, does the solution depend on the goal position or could the robot just brake as late as possible all the time (avoiding collisions) and still reach any goal in optimal time? A more formal problem description: Given the position of the obstacle $x_B(t) = x_{B,0} + \int_{t_0}^t v_B(t) dt$, and the velocity of the obstacle $v_B(t) = v_{B,0} + \int_{t_0}^t a_B(t) dt$, where $a_B$ is a known piecewise constant function: $$a_B(t) = \begin{cases} a_{B,1} & \text{for } t_0 \leq t < t_1 \\ a_{B,2} & \text{for } t_1 \leq t < t_2 \\ \dots & \\ \end{cases}$$ and given the initial state of the line-follower $x_{A,0}, v_{A,0} \in \mathbb{R}$ we search for piecewise constant functions $a_A$, where $a_{min} \leq a_A(t) \leq a_{max}$, $v_{min} \leq v_A(t) \leq v_{max}$ and $x_A(t) \leq x_B(t)$ (collision freeness) holds at all times. Reasonable assumptions are e.g. $v_B(t) \geq 0$ and $x_{B,0} \geq x_{A,0}$. Among the feasible solutions I would like to pick those minimizing $\int_{t_0}^{\infty} x_B(t) - x_A(t) dt$ or a similar objective. Approximation algorithms are also ok. Some numbers for those who would like a test input: http://pastebin.com/iZsm2UhB
These days I'm trying to build an IR 40 kHz long-range receiver using an IR phototransistor; I don't want to use components like a TSOP... I need to build a daylight filter and amplify the filtered signal, because I want to use the output of this sensor with a microcontroller. Can someone help me? Any ideas? Thanks.