Would a control system consisting of two PID controllers and one plant be considered a cascade controller?
And what would a proper tuning method be?
As far as I've googled, it seems the only good method is to tune them manually, one by one.
This is how my system looks:
http://snag.gy/rJH2J.jpg
|
I'm looking to build a test rig for a robot that rotates a 1" diameter pipe 180º (roll, not yaw or pitch). I am currently testing a motor's performance when subjected to various kinds of PWM (high duty cycle, low duty cycle, etc.). I would like to characterize how this performs under various loads.
Is there a simple mechanical mechanism I can attach to a fixture and insert into or around the pipe that lets me control how easy or difficult it is to rotate?
I am thinking of something like a drill bit chuck that fits inside the pipe and expands, or a circular clutch that clamps down around the pipe to add resistance when one tightens a thumbscrew. I would like to go from no resistance to full stop for a 4 N·m motor. I would like to be able to test how 'sticky' the pipe is using a torque wrench.
I imagine this would be very simple, but I can't think of something that would do this!
|
I am having a hard time grasping the concept of a DC motor with load being unstable, yet made stable by a controller.
My confusion arises as I try to design a controller for one using the Ziegler-Nichols (Z-N) method: the transfer function I've identified using Matlab tells me that my DC motor will always be stable.
Which makes sense, since feeding it a constant voltage leads to a constant velocity.
But to use the Z-N approach, the system has to be able to become unstable, and since this isn't possible, I am confused about whether the motor I am designing a controller for can become unstable at all.
The question, in simple terms:
how can a controller make a motor unstable, if the motor itself cannot (according to its pole-zero plot)?
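For reference, a textbook armature-controlled DC motor model (the standard form, not the transfer function I identified) makes this concrete:
$$ \frac{\Omega(s)}{V(s)} = \frac{K}{(Js + b)(Ls + R) + K^{2}} $$
With inertia $J$, friction $b$, inductance $L$, resistance $R$ and motor constant $K$ all positive, both poles lie in the left half-plane, so the open-loop motor is always stable; instability can only come from the feedback loop itself, e.g. high gain combined with the extra phase lag of the controller or of delays.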
|
I am a bit uncertain how I should interpret the definition of a robust controller.
As far as I've understood, the closed-loop system including the controller has to have high gain at frequencies where disturbances appear, and roll off at frequencies above the working range, where noise dominates. Both of these can be determined from a Bode plot, thereby determining the robustness of my closed-loop system.
|
I want to cut acrylic pieces that will be assembled into the body of a robot. What are some recommendations for acrylic/plastic cutting services? Does laser cutting produce the best results?
|
I'm writing a C code generator geared toward RobotC and complex tasks for an FTC team, and was wondering about some performance and storage concerns:
How much memory is available for my program's data? It'll be mostly pre-defined lookup tables, generally in the form of multidimensional arrays.
How much NXT memory is available for my program itself? As in, roughly how much code can I expect to fit into a single RobotC compiled program?
How quickly do programs execute, generally? Looking at disassembly most of my generated lines correspond to 2-4 opcodes.
Based on these, I'm trying to make a decision of precomputation vs runtime pathfinding.
I'm using NXT/Tetrix. My major interest at this point with these questions is pathfinding. I plan to have a 64x64 grid and to run A* (Dijkstra's algorithm with a heuristic) where the heuristic assigns a penalty to turns and is as close to consistent as possible (not sure if consistency/monotonicity is achievable with the turn penalty).
Roughly 8 paths would be cached if I decide to use the pre-cached lookup tables.
Instead of a set, I'll probably use a boolean array for the set of nodes visited. The fact that I'm working with a square layout will allow me to use a 2D array for the map needed to reconstruct the path.
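To make the plan concrete, here is a minimal sketch (in Python for readability, not RobotC; names and the turn-penalty value are illustrative) of A* with a boolean visited array and a 2D parent array for path reconstruction:

import heapq
from itertools import count

def astar(blocked, start, goal, turn_penalty=2):
    # blocked: 2D boolean grid (True = obstacle); start, goal: (row, col).
    # visited is a boolean 2D array instead of a set; parent is a 2D array
    # used to reconstruct the path. Note: handling the turn penalty exactly
    # would require the heading to be part of the search state; this sketch
    # keys only on the cell, so results are near-optimal, not guaranteed.
    n = len(blocked)
    visited = [[False] * n for _ in range(n)]
    parent = [[None] * n for _ in range(n)]
    h = lambda r, c: abs(r - goal[0]) + abs(c - goal[1])  # Manhattan heuristic
    tie = count()  # tiebreaker so heapq never compares headings
    pq = [(h(*start), 0, next(tie), start, None, None)]  # (f, g, tie, cell, heading, parent)
    while pq:
        _, g, _, (r, c), d, par = heapq.heappop(pq)
        if visited[r][c]:
            continue
        visited[r][c] = True
        parent[r][c] = par
        if (r, c) == goal:
            path, cell = [], goal  # walk parents back from the goal
            while cell is not None:
                path.append(cell)
                cell = parent[cell[0]][cell[1]]
            return path[::-1]
        for d2 in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + d2[0], c + d2[1]
            if 0 <= nr < n and 0 <= nc < n and not blocked[nr][nc] and not visited[nr][nc]:
                ng = g + 1 + (turn_penalty if d is not None and d != d2 else 0)
                heapq.heappush(pq, (ng + h(nr, nc), ng, next(tie), (nr, nc), d2, (r, c)))
    return None  # goal unreachable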
I'd love some feedback and answers to my question if anyone has any. Thanks!
|
I am in a situation where I need to secure a 20" wheel made of 3/4" thick MDF (of my own making) onto a 1-1/4" precision-ground steel shaft, and this joint needs to be strong enough to convey a rather large amount of torque, about 575-600 in-lbs, effectively a 5hp motor driving the shaft at 500-600 rpm.
I don't have a milling machine or any metalworking tools, so my preferred option of 'milling flats onto the shaft' is out. My second option was to attempt to increase the surface area of the wheel's bore and then use an appropriate adhesive such as JB Weld.
Is JB Weld a suitable solution to this problem, or is there a better method of fastening that doesn't involve modifying the steel shaft at all?
|
I have upgraded the motors in my robotic arm to sensored, brushless RC car motors. The hope was to reuse the Hall sensors to double as a rotary encoder, by tapping 2 Hall sensors and treating the 2 bits as a quadrature signal (a crude quadrature since 2 of the 4 states will be longer than the other 2).
This works when none of the motor phases are powered and I just rotate the motor manually. But once the stator coils are energized, the encoder no longer counts correctly: When running at low power, the counting is correct, but when running under high power, the count is monotonic (only increases or decreases) no matter if I run in reverse or forward.
I'm almost certain this is because of the stator coils overpowering the permanent magnets on the rotors. So is there still a way to use the Hall sensors as an encoder?
Sorry if this is an obvious question. I'd love to research this problem more if I had more time.
Update:
I've measured the waveforms with my DSO Quad and see the expected 120-degree-separated signals (the measurement for phase C gets more inaccurate over time because I only had 2 probes, so I measured phases A & B first, then A & C, and then merged them).
When ESC speed is 0.1:
When ESC speed is 0.3:
Previously, I was using a hardware quadrature counter (EQEP module on a BeagleBone). At speed=0.3, this was counting backwards no matter if I do forward or reverse!
I then implemented quadrature counting on an LPC1114FN28 uController. The result was still bad at high speeds (count didn't change at all). The logic was:
void HandleGPIOInterrupt()
{
    // valid successors of each 2-bit quadrature state:
    // [s][0] = next state going forward, [s][1] = next state going backward
    const uint8_t allowableTransitions[4][2] = {{1, 2}, {3, 0}, {0, 3}, {2, 1}};
    static int prevState = -1;

    int state = phaseA | (phaseB * 2);  // pack the two Hall bits into a 2-bit state
    if (prevState != -1)
    {
        if (allowableTransitions[prevState][0] == state)
        {
            ++rotations;
        }
        else if (allowableTransitions[prevState][1] == state)
        {
            --rotations;
        }
    }
    prevState = state;
}
Then I got the idea to change the code so that prevState is not updated until an expected state occurs (to deal with glitches):
int state = phaseA | (phaseB * 2);
if (prevState != -1)
{
    if (allowableTransitions[prevState][0] == state)
    {
        ++rotations;
        prevState = state;
    }
    else if (allowableTransitions[prevState][1] == state)
    {
        --rotations;
        prevState = state;
    }
    else
    {
        // unexpected transition: assume it was a glitch and keep
        // prevState until a valid neighbouring state shows up
    }
}
else
{
    prevState = state;
}
Now the counting finally is correct in both directions, even at speeds higher than 0.3!
But are there really glitches causing this? I don't see any in the waveforms?
|
I wanted to present a voice-controlled robot in my lab's upcoming demo contest. My robot is essentially a x86 Ubuntu notebook resting on top of a two-wheeled platform, so in principle any solution available on Linux would do.
I looked into Julius, but it seems the only comprehensive acoustic model available for it is aimed at the Japanese language – which coincidentally I can speak a little, but apparently not clearly enough to produce anything beyond garbled text. I also tried the Google Speech API, which has a decent selection of languages and worked very well, but requires Internet access. Finally there is CMU Sphinx, which I haven't yet tested, but I'm afraid it might have a problem with my accent (I'm a native Brazilian Portuguese speaker, and apparently there is no such acoustic model available for it).
Is that all there is to it? Have I missed any additional options? As you may have guessed, my main requirement is support for my native language (Brazilian Portuguese), or failing that, good performance for English spoken with foreign accents. A C++ API is highly desirable, but I can make do with a shell interface.
|
Let's say we have a bunch of observations $z^{i}$ from a sensor, and we have a map from which we can get the predicted measurements $\hat{z}^{i}$ for landmarks. In the correction step of EKF localization, should we compare each observation $z^{i}$ with every predicted measurement $\hat{z}^{k}$, in which case we have two loops? Or do we just compare each observation with its corresponding predicted measurement, in which case we have one loop? I assume the sensor can give all observations for all landmarks every scan. The following picture depicts the scenario. Now every time I execute the EKF localization I get $z^{i} = \{ z^{1}, z^{2}, z^{3}, z^{4}\}$ and I have $m$, so I can get $\hat{z}^{i} = \{ \hat{z}^{1}, \hat{z}^{2}, \hat{z}^{3}, \hat{z}^{4}\}$. To get the innovation step, this is what I did
$$
Z^{1} = z^{1} - \hat{z}^{1} \\
Z^{2} = z^{2} - \hat{z}^{2} \\
Z^{3} = z^{3} - \hat{z}^{3} \\
Z^{4} = z^{4} - \hat{z}^{4} \\
$$
where $Z$ is the innovation. For each iteration I get four innovations. Is this correct? I'm using the EKF localization algorithm from the book Probabilistic Robotics, page 204.
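For concreteness, a minimal sketch of the single-loop pairing I describe (Python/NumPy standing in for pseudocode; assumes correspondences are known, i.e. observation $i$ belongs to landmark $i$):

import numpy as np

def innovations(z, z_hat):
    # Single-loop pairing: observation i is compared only with its
    # corresponding predicted measurement i (known correspondences).
    # z, z_hat: lists of [range, bearing] measurement vectors.
    Z = []
    for z_i, zhat_i in zip(z, z_hat):
        nu = np.asarray(z_i, dtype=float) - np.asarray(zhat_i, dtype=float)
        nu[1] = (nu[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing to [-pi, pi)
        Z.append(nu)
    return Z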
|
I am experimenting with using a stepper motor for a robotics project. I'd like to use microstepping to give a better resolution and smoother movement, but I have noticed that the finer the microsteps, the lower the torque from the motor. Why is this?
For reference I'm using the Allegro Micro A4988 motor driver, and a bipolar stepper motor.
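For reference, a commonly quoted rule of thumb (not from the A4988 datasheet; stated as an approximation): with $N$ microsteps per full step and holding torque $T_{H}$, the incremental torque between adjacent microstep positions is roughly
$$ T_{inc} \approx T_{H} \, \sin\!\left(\frac{\pi}{2N}\right), $$
so at 16 microsteps per full step each microstep can only generate about 10% of the full-step torque.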
|
I am going to be competing in RoboCup Rescue in Thailand next year - I was too busy to pull off a campaign for Brazil this year :(
I will be using CUDA-powered GPUs, Kinect/Xtions, and ROS as the primary navigation system, but I need a sensor for long-range scanning - at least 25 meters. It is probably overkill for the competition, but I want it to be usable in other real-world applications. It will need to be very robust, fairly light, high resolution, and proven. The cheaper the better, but high quality is a must.
Have read this question, but I need something that is available and proven now:
What different sensing approaches are used in the current batch of indoor 3D cameras?
A similar question was asked before, but closed:
LIDAR solutions. The suggestion was good, but I need something with a lot more range.
At the moment I am probably going to go with the RobotEye RE05 or RE08 3D LiDAR.
Here is a paper that describes how this sensor can be used on a mobile robot: www.araa.asn.au/acra/acra2012/papers/pap125.pdf
Does anyone have any alternative techniques, or suggestions of a sensor that can achieve similar results?
|
I had the opportunity to work for a factory/company in the production domain, and they want to use a robotic arm for part of the production line.
They basically want a robotic arm with a payload of about 2 kg or more and an arm length of more than 1600 mm.
I have researched a few companies like Kuka.com, but I am not sure what I should be looking for when making suggestions and researching this.
Are there any suggestions you can give me on good points to be careful about with robotic arms? Any innovative companies out there I should consider? How is an installation done, and should I find a supplier for it, etc.? Please enlighten me.
|
I have built several quadcopters, hexacopters, and octacopters. Between the flight controller (I use 3DR APM2.6 or Pixhawk) and the motors I use heavy duty power wires as well as a servo-style cable carrying a PWM control signal for the ESC. Three short heavy-duty wires then connect the motor to the ESC, one for each phase.
Several times I've heard or read people saying that the electronic speed controllers (ESCs) should be mounted far away from the flight controller (FMU seems to be the abbreviation en vogue) and close to the motors. I think the idea is that this cuts down on interference (I'm not sure what sort) that could be emitted by the long ESC -> motor wires that would be required if you have the ESCs all at the center of the aircraft. Another consideration is that ESCs can be cooled by propellers if they are right under the rotor wash, as mine usually are.
So, I've always mounted ESCs close to motors, but realized that design could be much simpler if ESCs are mounted centrally. So, my question is: what are the pros and cons of mounting ESCs close to the motor versus close to the FMU?
|
http://pointclouds.org/documentation/tutorials/normal_distributions_transform.php#normal-distributions-transform
I've used this program with the sample PCD's given and it came out correctly. This was confirmed by experienced users on here. Now I'm trying to use my own pcd's. I didn't want to bother changing the program so I just changed the names to room_scan1 and room_scan2. When I attempt to use them, I get this error:
Loaded 307200 data points from room_scan1.pcd
Loaded 307200 data points from room_scan2.pcd
Filtered cloud contains 1186 data points from room_scan2.pcd
normal_distributions_transform: /build/buildd/pcl-1.7-1.7.1/kdtree/include/pcl/kdtree/impl/kdtree_flann.hpp:172:
int pcl::KdTreeFLANN::radiusSearch(const PointT&, double, std::vector&, std::vector&, unsigned int) const
[with PointT = pcl::PointXYZ, Dist = flann::L2_Simple]:
Assertion `point_representation_->isValid (point) && "Invalid (NaN, Inf) point coordinates given to radiusSearch!"' failed.
Aborted (core dumped)
This is the program I compiled: http://robotica.unileon.es/mediawiki/index.php/PCL/OpenNI_tutorial_1:_Installing_and_testing#Testing_.28OpenNI_viewer.29
Before you suggest it, I will let you know I already changed all of the PointXYZRGBA designations to just PointXYZ. It threw the same error before and after doing this. The thing that confuses me is that I looked at my produced PCD files and they seem to be exactly the same as the samples given for NDT.
Mine:
2320 2e50 4344 2076 302e 3720 2d20 506f
696e 7420 436c 6f75 6420 4461 7461 2066
696c 6520 666f 726d 6174 0a56 4552 5349
4f4e 2030 2e37 0a46 4945 4c44 5320 7820
7920 7a0a 5349 5a45 2034 2034 2034 0a54
5950 4520 4620 4620 460a 434f 554e 5420
3120 3120 310a 5749 4454 4820 3634 300a
4845 4947 4854 2034 3830 0a56 4945 5750
4f49 4e54 2030 2030 2030 2031 2030 2030
2030 0a50 4f49 4e54 5320 3330 3732 3030
0a44 4154 4120 6269 6e61 7279 0a00 00c0
7f00 00c0 7f00 00c0 7f00 00c0 7f00 00c0
Sample from NDT page:
2320 2e50 4344 2076 302e 3720 2d20 506f
696e 7420 436c 6f75 6420 4461 7461 2066
696c 6520 666f 726d 6174 0a56 4552 5349
4f4e 2030 2e37 0a46 4945 4c44 5320 7820
7920 7a0a 5349 5a45 2034 2034 2034 0a54
5950 4520 4620 4620 460a 434f 554e 5420
3120 3120 310a 5749 4454 4820 3131 3235
3836 0a48 4549 4748 5420 310a 5649 4557
504f 494e 5420 3020 3020 3020 3120 3020
3020 300a 504f 494e 5453 2031 3132 3538
Does anyone have any ideas?
|
In general, what is a good programming language for robotics? I am a starting robo nerd and don't know anyone who would know things like this.
|
I want to give my Linux robot the ability to locate a sound source and drive towards it. I am reading a paper on sound localization that seems to cover the theory well enough, but I'm at a loss as to how do I implement it. Specifically I would like to know:
How do I connect two microphones to a Linux PC?
How do I record from two microphones simultaneously?
Is there any library of sound processing algorithms (similar to how OpenCV is a library of computer vision algorithms) available for Linux?
|
Hello, I want to simulate a busy urban road, similar to the DARPA Urban Challenge, for an autonomous self-driving car, and I'm searching for simulators for that.
I've looked at Gazebo, since its integration with ROS is easy, but editing world files, or indeed creating them, is difficult. The TORCS simulator has many world files but not many sensors. I don't want much physics in my simulation: I want a lightweight simulator (for checking out path planning on an urban road) in which creating roads is easy.
I've even searched for Gazebo SDF files resembling an urban city, but in vain.
|
I am developing a quadcopter platform on which will be extended over the next year. The project can be found on Github. Currently, we are using an Arduino Uno R3 as the flight management module.
At present, I am tuning the PID loops. The PID function is implemented as:
int16_t pid_roll(int16_t roll)
{
    static int16_t roll_old = 0;  // running sum of past errors (integral state)
    int16_t result =
        (KP_ROLL * roll) +               // proportional: current error
        (KI_ROLL * (roll_old + roll)) +  // integral: accumulated error plus this sample
        (KD_ROLL * (roll - roll_old));   // derivative: current error minus the accumulated sum
    roll_old += roll;
    result = constrain(result, PID_MIN_ROLL, PID_MAX_ROLL);
    return -result;
}
I am having trouble interpreting the system response on varying the constants. I believe the problem is related to the questions below.
How frequently should a PID controller update the motor values? Currently, my update time is about 100-110 milliseconds.
What should be the maximum change that a PID update should make to the motor thrusts? Currently, my maximum limit is about ±15% of the thrust range.
At what thrust range or values, should the tuning be performed? Minimum, lift off, or mid-range or is it irrelevant?
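For context on how the update period interacts with tuning, here is a minimal sketch (Python for clarity, not the Arduino code above; names are illustrative) of a discrete PID that folds the sample time into the integral and derivative terms, so the gains keep their meaning when the loop rate changes:

class PID:
    # Discrete PID with the sample period dt made explicit, so the same
    # gains behave (approximately) the same at different update rates.
    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(self.out_min, min(self.out_max, out))  # clamp, like constrain()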
|
I've read in a lot of places that designing a controller which cancels an unwanted pole or zero is not good design practice.
It can make the system uncontrollable, which of course isn't wanted.
But what alternatives do I have,
considering I have a system in which all poles and zeros lie in the RHP?
Can someone please help me with the Jacobian matrix equations for the ABB IRB 140 robot? Or an easy way by which I can derive them given the DH parameters. I need them to implement some form of control that I am working on. Thanks.
|
This is quite a basic question. I'm practising robot programming with V-REP. I have two K3 robots in the scene. One robot follows a predefined path. I want the second robot to move "in parallel" with the first one, so they keep the same orientation and the same distance at all times. When there is a turn, I want the follower to slow down/accelerate a little to keep the parallel.
In my implementation, I use wireless communication. The first robot periodically "tells" the second its speed and orientation. The second uses these parameters to calculate the two speeds for its two wheels. But when I run it, it doesn't work: the orientation of the follower is wrong and the distance is not maintained. I was totally confused.
I think this is quite a rudimentary task. There must be some practice to follow. Can somebody help to provide some ideas, references? That would be highly appreciated!
|
I have been reading about kinematic models for nonholonomic mobile robots such as differential wheeled robots. The texts I've found so far all give reasonably decent solutions for the forward kinematics problem; but when it comes to inverse kinematics, they weasel out of the question by arguing that for every possible destination pose there are either infinite solutions, or in cases such as $[0 \quad 1 \quad 0]^T$ (since the robot can't move sideways) none at all. Then they advocate a method for driving the robot based on a sequence of straight forward motions alternated with in-place turns.
I find this solution hardly satisfactory. It seems inefficient and inelegant to cause the robot to do a full-stop at every turning point, when a smooth turning would be just as feasible. Also the assertion that some points are "unreachable" seems misleading; maybe there are poses a nonholonomic mobile robot can't reach by maintaining a single set of parameters for a finite time, but clearly, if we vary the parameters over time according to some procedure, and in the absence of obstacles, it should be able to reach any possible pose.
So my question is: what is the inverse kinematics model for a 2-wheeled differential drive robot with shaft half-length $l$, two wheels of equal radii $r$ with adjustable velocities $v_L \ge 0$ and $v_R \ge 0$ (i.e. no in-place turns), and given that we want to minimize the number of changes to the velocities?
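For reference, the forward kinematics those texts start from, in the notation above (wheel angular velocities $v_L$, $v_R$, wheel radius $r$, shaft half-length $l$):
$$ \dot{x} = \frac{r}{2}(v_R + v_L)\cos\theta, \qquad \dot{y} = \frac{r}{2}(v_R + v_L)\sin\theta, \qquad \dot{\theta} = \frac{r}{2l}(v_R - v_L) $$
The inverse problem is then to choose $v_L(t), v_R(t)$ that drive $(x, y, \theta)$ to the goal pose.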
|
I am working on a quadrotor and am trying to solve the problems described here. In attempts to bring the refresh rate to 100 Hz, I did an analysis of the functions and most of the time 35+ ms is being taken by the RC receiver input function. To tackle this, I have decided on two solutions:
Use interrupts (PinChangeInt library) instead of pulseIn
Reduce the frequency of pilot input
The second solution, which is much simpler, is to read the pilot input once every $(n+1)$ PID updates. So, $n$ times we have an update time of $8\;ms$, and on the $(n+1)^{th}$ time we have an update time of $T\;ms$. $n$ will be around $10$.
This creates a system that runs on average every $(8n + T)/(n+1)\;ms$.
Now, how does a dual/variable frequency affect the PID system? Does the system behave as if working at the effective frequency? I have been searching for some time but I cannot find anything that discusses such a situation.
|
I am searching for a way to wait for some conditions on ports before applying a new state.
My concrete problem:
I want to make sure that my AUV aligns to the right pipeline. Therefore, before starting the pipeline tracking, I want to check the current system heading.
My current state-machine looks like this:
find_pipe_back = state target_move_def(:finish_when_reached => false ,
:heading => 1 ...)
pipe_detector = state pipeline_detector_def
pipe_detector.depends_on find_pipe_back, :role => "detector"
start(pipe_detector)
forward pipe_detector.align_auv_event, success_event
Roughly, I am looking for a way to put a condition on the last forward.
|
I use Autodesk inventor professional 2014. I design my gears using the design accelerator. However, whenever I create gear trains, parts of certain gears become transparent. This seems completely random because sometimes if I zoom in or out or when I pan or orbit, the gears look normal again.
I have experienced this problem using both the default and other material types.
I also have ensured that each of these gears are enabled.
Here are some example pictures
Any help or suggestions will be greatly appreciated.
|
I have a riddle about EtherCAT in mind and I'd like to have your point of view about it...
With the rise of open platforms and hardware, and easily accessible embedded machines, it is now rather straightforward to install an RT system such as Xenomai on a Raspberry Pi, a BeagleBone Black, or whatever cheap platform you prefer...
Now, an RT bus to connect these would be really cool (e.g. EtherCAT...).
Hence my question: every hobbyist faces the same problems with RT communication, so is there any good reason why there is no open EtherCAT shield for the Raspberry Pi or BeagleBone? It would solve so many problems...
Any thoughts on why? Any ideas?
|
I have a standard 5 V servo.
I am using the horn that came with it, I mean this piece:
Like the long arm in the middle. Each of its holes is 1 mm in diameter.
Then I have a 3D-printed crankshaft that I made:
Its holes are also 1 mm. So while the servo horn is attached to the servo, I attach this crank to the horn in order to lift or lower a small structure.
What I am not sure about is how to connect these two pieces (the horn and the 3D-printed crankshaft). So far I have been using a paper clip, and at both ends of it I placed two blobs of tin using a soldering iron. This has worked for nearly a year, but today it failed, and I was wondering if there's something more specific for my problem, which seems like a common one.
I have seen some people use something called a Dubro EZ connector, but it seems overkill, plus it won't have space for my 3D-printed piece. Some people seem to use a clevis pin, but I cannot find any with a diameter of less than 1 mm.
So my question is, how can I fix it? What can I put at both ends to stop if from slipping away? I have already tried simple things like simply bending it.
|
The Unscented Kalman Filter is a variant of the Extended Kalman Filter which uses a different linearization relying on transforming a set of "Sigma Points" instead of first-order Taylor series expansion.
The UKF does not require computing Jacobians, can be used with discontinuous transformation, and is, most importantly, more accurate than EKF for highly nonlinear transformations.
The only disadvantage I found is that "the EKF is often slightly faster than the UKF" (Probabilistic Robotics). This seems negligible to me, and their asymptotic complexity seems to be the same.
So why does everybody still seem to prefer EKF over UKF? Did I miss a big disadvantage of UKF?
|
E.g., what general multicopter configurations would be recommended to lift 0.5 kg, 1 kg, 2 kg, 4 kg, etc.?
Is there any general correlation between the number of motors on a similar-sized frame and lift capacity?
|
It might be kind of a stupid question, but how many degrees of freedom are there in a typical quadcopter? I see some saying 4 and some saying 6. The difference lies in translation along the other two (horizontal) axes. Being strict about what you can directly tell the quadcopter to do, only 4 movements are possible, since you cannot apply a pure lateral force. But you can tilt to start a lateral movement, align the body right after, and let it hover while moving along a horizontal axis, theoretically. So, formally, how many degrees of freedom should I consider to exist?
|
I am implementing the ATLAS SLAM framework for a ground robot, using EKF SLAM for the local maps and line segment features. The line segment features can be abstracted to their respective lines [d, α], where d and α represent the distance and angle in the distance-angle representation of lines.
In the given framework, there is a local map matching step where lines of the local maps are matched, and a distance metric between two lines is needed. The Mahalanobis distance is suggested in the literature; however, strictly speaking, a Mahalanobis distance is between a single measurement and a distribution, not between two distributions.
How do I find the mahalanobis distance between line 1 [d1,α1] with covariance matrix S1 and line 2 [d2,α2] with covariance matrix S2?
In the EKF Algorithm from the book Probabilistic Robotics by Sebastian Thrun, there is a computation during the feature update step, where it looks like the covariances (of a new measurement and an existing measurement) are multiplied to give a resultant covariance matrix, and then the inverse is used in the Mahalanobis distance computation.
That would be similar to
$$ D_M = [d_2 - d_1,\ \alpha_2 - \alpha_1] \, (S_1 S_2)^{-1} \, [d_2 - d_1,\ \alpha_2 - \alpha_1]^{T} $$
Is that correct?
|
Given part of the following algorithm from page 217 of Probabilistic Robotics, the algorithm for EKF localization with unknown correspondences:
9. for all observed features $z^{i} = [r^{i} \ \phi^{i} \ s^{i}]^{T} $ do
10. for all landmarks $k$ in the map $m$ do
11. $q = (m_{x} - \bar{\mu}_{x})^{2} + (m_{y} - \bar{\mu}_{y})^{2}$
12. $\hat{z}^{k} = \begin{bmatrix}
\sqrt{q} \\
atan2(m_{y} - \bar{\mu}_{y}, m_{x} - \bar{\mu}_{x} ) - \bar{\mu}_{\theta} \\
m_{s} \\ \end{bmatrix}$
13. $ \hat{H}^{k} = \begin{bmatrix}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33} \\ \end{bmatrix} $
14. $\hat{S}^{k} = H^{k} \bar{\Sigma} [H^{k}]^{T} + Q $
15. endfor
16. $ j(i) = \underset{k}{\operatorname{arg\,max}} \ \ det(2 \pi S^{k})^{-\frac{1}{2}} \exp\{-\frac{1}{2} (z^{i}-\hat{z}^{k})^{T}[S^{k}]^{-1} (z^{i}-\hat{z}^{k})\} $
17. $K^{i} = \bar{\Sigma} [H^{j(i)}]^{T} [S^{j(i)}]^{-1}$
18. $\bar{\mu} = \bar{\mu} + K^{i}(z^{i}-\hat{z}^{j(i)}) $
19. $\bar{\Sigma} = (I - K^{i} H^{j(i)}) \bar{\Sigma} $
20. endfor
My question is why the second loop ends at line 15. Shouldn't it end after line 19? I've checked the errata of this book, but there is nothing about this issue.
|
How would one typically integrate a neural network into an online automation system?
As an example, we have developed a neural network that predicts a difficult to measure variable within a reactor using multiple sensors. We then use this predicted variable to tell the automation system to, for example, increase/decrease the stirrer speed.
How would someone implement this idea in a commercial system? Would they develop a function block that simulates the neural network? Would they run software on a server that reads from and writes to the PLC control tags?
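One minimal sketch of the second idea (server-side software bridging the PLC and the model), assuming the PLC exposes its tags over OPC UA; the python-opcua package is real, but the endpoint, tag node IDs, and the predictor are illustrative assumptions, not a specific vendor's API:

from opcua import Client  # python-opcua; assumes the PLC exposes tags via OPC UA
import time

def predict_stirrer_speed(sensor_values):
    # Placeholder for the trained neural network's forward pass.
    return 0.5 * sum(sensor_values)  # illustrative only

client = Client("opc.tcp://plc.example:4840")  # hypothetical endpoint
client.connect()
try:
    sensors = [client.get_node("ns=2;s=Reactor.Temp"),      # hypothetical tag IDs
               client.get_node("ns=2;s=Reactor.Pressure")]
    setpoint = client.get_node("ns=2;s=Reactor.StirrerSetpoint")
    while True:
        values = [s.get_value() for s in sensors]                 # read PLC tags
        setpoint.set_value(float(predict_stirrer_speed(values)))  # write prediction back
        time.sleep(1.0)  # the actual control loop still runs on the PLC
finally:
    client.disconnect()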
|
I need help on how to go about building quadcopter software from scratch with the tools available to me. I don't have a radio transmitter, therefore the only way I can do remote control is using an Android phone with the ITead Studio Bluetooth shield that I was recently given.
How can I use the existing open-source software, i.e. AeroQuad or ArduCopter? The following are the parts that I have:
Arduino Uno
Bluetooth shield
Four brushless motors
Q450 frame
Four Turnigy ESCs
MPU6050
|
I'm using the amcl package in ROS to localize a mobile robot. I've changed min_particles and max_particles several times, then calculated the output difference with odometry to evaluate these parameters. The table below demonstrates the results; as you see, there is no notable change in the output, and if you ignore the first row of the table, the output variance is small.
And this is the Particle Filter output on the map:
|
I'm developing a program for communicating with Ardupilot using Mavlink. I've generated code based on the Mavlink definition for Ardupilot, and I have the basic communication working.
What I can't figure out is how to request that ArduPilot send a specific MAVLink message. I'd like ArduPilot to send me MAVLink message ATTITUDE (#30) every second. How can I do this?
|
I want to overwrite the git source of a package in autoproj. That package is by default on gitorious, and I forked it on spacegit to apply specific patches.
According to the autoproj documentation (http://rock-robotics.org/stable/documentation/autoproj/customization.html), I set the new repo in the overrides.yml by:
- control/orogen/<package>:
url: git://spacegit.dfki.uni-bremen.de/virgo/orogen-<package>.git
But if I inspect the remotes of the newly checked-out package, only the fetch URL is adapted to spacegit, whereas the push URL still points to the default gitorious repo:
$ git remote -v
autobuild git://spacegit.dfki.uni-bremen.de/<project>/orogen-<package>.git (fetch)
autobuild [email protected]:/rock-control/orogen-<package>.git (push)
How can I overwrite both the fetch and the push source of a package in the
overrides.yml?
|
Assume I have a rather simple system I want to control, but all sensor measurements exhibit considerable time delay, i.e.:
$z_t = h(x_{(t-d)}) \neq h(x_t)$
With my limited knowledge about control, I could imagine the following setup:
One observer estimates the delayed state $x_{(t-d)}$ using control input and (delayed) measurements.
A second observer uses the delayed observer's estimate and predicts the current state $x_t$ using the last control inputs between delayed measurement and current time.
The second observer's estimate is used to control the system (see the sketch after this list).
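A minimal sketch of this two-observer arrangement (assuming a simple discrete linear model $x_{k+1} = A x_k + B u_k$, $z_k = H x_{k-d}$, and a buffer of past inputs; all names illustrative):

import numpy as np

class DelayCompensatedEstimator:
    # A Luenberger observer tracks the *delayed* state from delayed
    # measurements; the buffered control inputs are then replayed to
    # predict the *current* state, which is handed to the controller.
    def __init__(self, A, B, H, L, delay_steps, x0):
        self.A, self.B, self.H, self.L = A, B, H, L  # L is the observer gain
        self.x_delayed = np.asarray(x0, dtype=float)            # estimate of x[k-d]
        self.u_buf = [np.zeros(B.shape[1])] * delay_steps       # inputs u[k-d .. k-1]

    def step(self, z_delayed, u_now):
        # 1) correct the delayed estimate with the delayed measurement
        self.x_delayed += self.L @ (z_delayed - self.H @ self.x_delayed)
        # 2) replay the buffered inputs to predict forward to "now"
        x = self.x_delayed.copy()
        for u in self.u_buf:
            x = self.A @ x + self.B @ u
        # 3) advance the delayed estimate one step and slide the buffer
        self.x_delayed = self.A @ self.x_delayed + self.B @ self.u_buf[0]
        self.u_buf = self.u_buf[1:] + [np.asarray(u_now, dtype=float)]
        return x  # estimate of x[k], used by the controller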
Can I do any better than that? What is the standard approch to this problem? And is there any literature or research about this topic?
|
I want to implement the velocity motion model in Matlab. According to Probabilistic Robotics page 124, the model is as following
\begin{align*}
\hat{v} &= v + sample(\alpha_{1} v^{2} + \alpha_{2} w^{2}) \\
\hat{w} &= w + sample(\alpha_{3} v^{2} + \alpha_{4} w^{2}) \\
\hat{\gamma} &= sample(\alpha_{5} v^{2} + \alpha_{6} w^{2}) \\
x' &= x - \frac{\hat{v}}{\hat{w}} \sin \theta + \frac{\hat{v}}{\hat{w}} \sin(\theta + \hat{w} \Delta{t}) \\
y' &= y + \frac{\hat{v}}{\hat{w}} \cos \theta - \frac{\hat{v}}{\hat{w}} \cos(\theta + \hat{w} \Delta{t}) \\
\theta' &= \theta + \hat{w} \Delta{t} + \hat{\gamma} \Delta{t}
\end{align*}
where $sample(b^{2}) \Leftrightarrow \mathcal{N}(0, b^{2})$. With this kind of variance $\alpha_{1} v^{2} + \alpha_{2} w^{2}$, the Kalman Gain is approaching singularity. Why?
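For concreteness, a minimal sketch of sampling this model (NumPy standing in for my Matlab code; the guard for $\hat{w} \approx 0$ is noted but not implemented):

import numpy as np

def sample_motion_model_velocity(x, y, theta, v, w, dt, alpha):
    # Velocity motion model (Probabilistic Robotics, p. 124).
    # alpha: the six noise parameters a1..a6; sample(b^2) ~ N(0, b^2),
    # so np.random.normal receives the standard deviation sqrt(b^2).
    a1, a2, a3, a4, a5, a6 = alpha
    v_hat = v + np.random.normal(0.0, np.sqrt(a1 * v**2 + a2 * w**2))
    w_hat = w + np.random.normal(0.0, np.sqrt(a3 * v**2 + a4 * w**2))
    g_hat = np.random.normal(0.0, np.sqrt(a5 * v**2 + a6 * w**2))
    r = v_hat / w_hat  # needs a guard for w_hat ~ 0 (straight-line motion)
    x_new = x - r * np.sin(theta) + r * np.sin(theta + w_hat * dt)
    y_new = y + r * np.cos(theta) - r * np.cos(theta + w_hat * dt)
    theta_new = theta + w_hat * dt + g_hat * dt
    return x_new, y_new, theta_new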
|
We are building a 6-DOF robotic arm as a college project, and we've almost finished the designs. The problem is with the controls: we still haven't thought about how to control the arm (software, GUI interfaces, etc.). Any suggestions on this? Also, is there any software for simulating and testing robotic arms?
|
I'm facing problems with this book, and it is the only book that discusses localization in depth. The results that I'm getting make no sense. I've read a lot of papers; the majority of them copy the localization algorithm from this book. My question here is: why are $\bar{\mu}$ and $\bar{\Sigma}$ being changed every iteration? I'm using them to get the predicted measurements in lines 11-13, so they should stay fixed.
9. for all observed features $z^{i} = [r^{i} \ \phi^{i} \ s^{i}]^{T} $ do
10. $j = c^{i}$
11. $q = (m_{x} - \bar{\mu}_{x})^{2} + (m_{y} - \bar{\mu}_{y})^{2}$
12. $\hat{z}^{i} = \begin{bmatrix}
\sqrt{q} \\
atan2(m_{y} - \bar{\mu}_{y}, m_{x} - \bar{\mu}_{x} ) - \bar{\mu}_{\theta} \\
m_{s} \\ \end{bmatrix}$
13. $ \hat{H}^{i} = \begin{bmatrix}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33} \\ \end{bmatrix} $
14. $\hat{S}^{i} = H^{i} \bar{\Sigma} [H^{i}]^{T} + Q $
15. $K^{i} = \bar{\Sigma} [H^{i}]^{T} [S^{i}]^{-1}$
16. $\bar{\mu} = \bar{\mu} + K^{i}(z^{i}-\hat{z}^{i}) $
17. $\bar{\Sigma} = (I - K^{i} H^{i}) \bar{\Sigma} $
18. endfor
19. $\mu = \bar{\mu}$
20. $\Sigma = \bar{\Sigma}$
Please suggest other books that discuss EKF localization in depth.
|
I'm a noobie just starting out and trying to work out what I need to build my first quadrocopter; I just wanted to run something by people with some experience before I commit to buying anything.
Would this ESC be fine for running this motor? As I understand it, the ESC should be rated slightly above the motor's maximum current draw.
On top of that, should this battery be able to run all of the motors without any issue?
|
How might I be able to control one function (like brightness control of an LED) with two different triggers (like a tactile switch and an IR remote)?
I am trying to be able to control the brightness with switches as well as an IR remote when desired.
|
I'm going to build a small robot system, and it seems like ROS provides a nice framework to control and program the system.
However, I am wondering which is the best practice to manage the components of my robot.
Does it make sense to put all the sensors in one node?
Should I only put the sensors of the same type in one node or is it better to have one node for one sensor?
Is it good practice to have some kind of handler node, which takes input from the sensors and steers the corresponding actuators, or should the actuator nodes and sensor nodes communicate directly? The three options are:
Fused sensor nodes and actuator nodes with handler
Single sensor and actuator nodes with handler
Direct communication
For me, I guess the best is to have some kind of handler which manages the communication between sensors and actuators, and one node for each element of the robot (as in figure 2), because this way the system is loosely coupled and can be extended easily, but I want to know what your opinion is.
|
I have just tested my first monitor, following the suggestion in How to define conditions for state-machines in roby?, and it resulted in the following error.
Unfortunately I ran into a runtime error; I don't know whether this is a bug or if I misused the monitor...
16:28:27.564 (Roby) = failed emission of the weak_signal event of Pipeline::Detector:0x71f5cf0
16:28:27.564 (Roby) = Backtrace
16:28:27.564 (Roby) |
16:28:27.564 (Roby) | /home/auv/dev/tools/roby/lib/roby/task.rb:663:in `emitting_event'
16:28:27.564 (Roby) | /home/auv/dev/tools/roby/lib/roby/task_event_generator.rb:46:in `emitting'
16:28:27.564 (Roby) | /home/auv/dev/tools/roby/lib/roby/event_generator.rb:628:in `emit_without_propagation'
16:28:27.564 (Roby) | /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:1017:in `block (2 levels) in event_propagation_step'
16:28:27.564 (Roby) | /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:648:in `propagation_context'
16:28:27.564 (Roby) | /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:1015:in `block in event_propagation_step'
16:28:27.564 (Roby) | /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:559:in `block in gather_propagation'
16:28:27.565 (Roby) | /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:648:in `propagation_context'
16:28:27.565 (Roby) | /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:559:in `gather_propagation'
16:28:27.565 (Roby) | /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:1014:in `event_propagation_step'
16:28:27.565 (Roby) |/home/auv/dev/tools/roby/lib/roby/execution_engine.rb:783:in `block in event_propagation_phase'
16:28:27.565 (Roby) | /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:761:in `gather_errors'
16:28:27.565 (Roby) | /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:779:in `event_propagation_phase'
16:28:27.565 (Roby) | /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:1426:in `process_events'
16:28:27.565 (Roby) | /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:1940:in `block (2 levels) in event_loop'
16:28:27.565 (Roby) | /home/auv/dev/tools/roby/lib/roby/support.rb:176:in `synchronize'
16:28:27.565 (Roby) | /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:1939:in `block in event_loop'
16:28:27.565 (Roby) | /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:1917:in `loop'
16:28:27.565 (Roby) /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:1917:in `event_loop'
16:28:27.565 (Roby) | /home/auv/dev/tools/roby/lib/roby/execution_engine.rb:1797:in `block (3 levels) in run'
16:28:27.565 (Roby) =
Here is the action_state_machine I'm using:
describe("Find_pipe_with_localization").
    optional_arg("check_pipe_angle", false)
action_state_machine "find_pipe_with_localization" do
    find_pipe_back = state target_move_def(... some long stuff here ...)
    pipe_detector = state pipeline_detector_def
    pipe_detector.depends_on find_pipe_back, :role => "detector"
    start(pipe_detector)

    pipe_detector.monitor(
        'angle_checker',                         # the name
        pipe_detector.find_port('pipeline'),     # the port for the reader
        :check_pipe_angle => check_pipe_angle).  # arguments
        trigger_on do |pipeline|
            angle_in_range = true
            if check_pipe_angle
                angle_in_range = pipeline.angle < 0.1 && pipeline.angle > -0.1
            end
            state_valid = pipeline.inspection_state == :ALIGN_AUV || pipeline.inspection_state == :FOLLOW_PIPE
            state_valid && angle_in_range # last condition
        end. emit pipe_detector.success_event
    # for non-monitor use, this works if the above is commented out
    # forward pipe_detector.align_auv_event, success_event
    # forward pipe_detector.follow_pipe_event, success_event

    forward pipe_detector.success_event, success_event
    forward pipe_detector, find_pipe_back.success_event, failed_event # timeout here on moving
end
|
I'm looking for a device that can push out independent pin points, similar to a Pin Point Impression Toy. I'm looking to create a 3D image from, for example, my computer. Does anybody know the name of such a device, or can point me in the right direction for making one?
I've been looking for a while now, but I'm having some slight problems finding a good way to describe it as a search term.
I'm sorry if this is the wrong forum.
|
Let's say I have a hypothetical sensor that provides, for example, velocity estimates, and I affix that sensor at some non-zero rotational offset from the robot's base. I also have an EKF that is estimating the robot's velocity.
Normally, the innovation calculation for an EKF looks like this:
$$ y_k = z_k - h(x_k) $$
In this case, $h$ would just be the rotation matrix of the rotational offset. What are the ramifications if, instead, I pre-process the sensor measurement by rotating $z_k$ by the inverse rotation, which puts its coordinates in the frame of the robot? Can I then safely make $h$ the identity matrix $I$?
|
Where can I buy multi-directional omni wheels?
I'm specifically looking for something which can support in excess of 100 kg/wheel, so around 400 kg in total. Also, a possible mission profile would include a 300-meter excursion outdoors on an asphalt path, so they should be a little durable. The only ones I can find online are small ones for experimenting.
|
It is "good enough" for PID output directly controls, without further modelling, the PWM duty cycle?
Logic behind the question is,
In case of pure resistance heater, PWM duty cycle percentage directly relates to power (on off time ratio). So, direct control is appropriate.
However, motor has two additional effects,
a) with considerable inductance, initial current is smaller and ramping up over time
b) as RPM gradually ramping up, after time constant of mechanical inertia etc, increasing back EMF will reduce current
Will it be wise to ignore the above two effects and still expect a reasonably good outcome?
Application is 6 volts, 2 watt DC brushed motor, gear 1:50, 10000 RPM no load, PWM frequency 490Hz, driving DIY 1kg robot.
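A rule-of-thumb check on effect (a), using assumed rather than measured motor constants: compare the electrical time constant with the PWM period,
$$ \tau_{e} = \frac{L}{R} \quad \text{vs.} \quad T_{PWM} = \frac{1}{490\,\text{Hz}} \approx 2\,\text{ms}. $$
If $\tau_{e} \gg T_{PWM}$, the current ripple is small and the motor effectively sees the average voltage, so duty cycle maps near-linearly to voltage; effect (b) acts on the much slower mechanical time constant, which is the kind of plant behaviour a feedback loop is there to correct anyway.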
|
I’m a graduate student, and we're doing a project that will introduce a robot arm into manufacturing. Our goal is to build an autonomous object classification system. We already have the software and hardware required for the task, but we have no idea if there is an existing manufacturing scenario where we can apply the system and really improve efficiency or save human resources.
Here is some info about the robot arm:
For the hardware part, the robot arm has 7 DOF and a 5 kg payload (the weight of the end effector is not counted). The end effector is a 1.5 kg three-fingered robot hand with a 2 kg payload. The workspace is approximately a sphere 0.9 m in diameter.
For the software part, we have programming by touch, by which a human can drag the robot and record the desired pose. Besides, we have PCL object recognition that can recognize an object and its pose in the scene. Lastly, we have an online trajectory generator and dynamic obstacle avoidance that improve safety when the robot cooperates with humans.
Since we know little about manufacturing, we hope that someone can give us a hint about a scenario and an actual application where we can apply this system.
|
I'm getting this warning from Matlab about Kalman Gain.
Warning: Matrix is close to singular or badly scaled.
Results may be inaccurate. RCOND = 9.996841e-19.
The problem comes from the high variance of the measurement model. My question is: does the EKF work with high noise in the sensor?
|
Following my previous question about pcDuino+Kinect, I decided to go ahead and buy the pcDuino I wish to run my robot with (Kinect + pcDuino + shields). However, I'm having trouble getting started: I tried installing OpenNI, NITE and SensorKinect, but the OpenNI installation fails (I haven't even gotten to installing NITE and SensorKinect yet, so no idea whether those would work). I tried a bunch of pointers (here and here). For example, the error I get if I follow link 1 is:
ubuntu@ubuntu:~/kinect/OpenNI/Platform/Linux/CreateRedist \$ sudo ./RedistMaker.Arm
Target: Linux-Arm
Version: 1.5.7.10
Num of compile jobs: 0
Building OpenNI...
Common/CommonDefs.mak:36: * Cross-Compilation error. Can't find ARM-J1_CXX and ARM-J1_STAGING. Stop.
failed to execute: make PLATFORM=Arm-j1 -C
/home/ubuntu/kinect/OpenNI/Platform/Linux/CreateRedist/../Build clean
/home/ubuntu/kinect/OpenNI/Platform/Linux/CreateRedist/Output/BuildOpenNI_clean.txt
Cleaning Failed!
ubuntu@ubuntu:~/kinect/OpenNI/Platform/Linux/CreateRedist$
After someone suggested it, I tried removing the -mfloat-abi=softfp option, but that didn't help. There seems to be some compiling/linking issue due to float types which I'm not able to figure out. In link 1 the author mentions removing 'calc_jobs_number()', but that does not work and I get a similar error. A similar problem exists for link 2 above.
If I follow link 2, 'make' won't work and gives the following error:
/usr/bin/ld: error: ../../Bin/Arm-Release/libOpenNI.so uses VFP register arguments, ./Arm-Release/tinyxmlparser.o does not
/usr/bin/ld: failed to merge target specific data of file ./Arm-Release/tinyxmlparser.o
Another approach would be to use SimpleCV instead of OpenNI on the pcDuino, as someone else claims it has worked before. However, I've never used SimpleCV with the Kinect before, so unless it's not radically different, I prefer using OpenNI.
Any suggestions as to why I might be getting these errors are appreciated. Any other pointers for solving the problem of installing OpenNI on Pcduino would be welcome.
Please let me know if you need more details about anything else.
Thanks in advance
|
I installed SimpleCV and libfreenect on the pcDuino (running Lubuntu). I separately verified that SimpleCV reads my USB webcam well, and libfreenect (the glview tutorial) gives me depth and RGB correctly, albeit at a pathetic framerate. What I want is to call cam = Kinect() in SimpleCV, but when I do that, I get the warning "You dont seem to have the freenect library installed. This will make it hard to use a kinect". Although this is a warning, I then get an error if I do cam.getDepth(), which says "NameError: global name 'freenect' is not defined".
How do I let SimpleCV know that I've installed libfreenect?
|
I'm making a quadcopter. I have set up a PID loop to stabilize it to a given Euler angle (pitch and roll). The problem arises when the roll approaches 90 degrees (45 degrees and up). The values don't make sense anymore, as it approaches the gimbal lock. I intend to make it do complex maneuvers like looping etc., which exceeds the 45 degree roll limit.
How can I use quaternions to overcome this problem? (I get quaternions from the MPU-9150.) I have read many articles on quaternions, but they all talk about rotations in 3D software and tweening between two rotation points. This makes little sense to me, as I do not know imaginary numbers and matrices.
|
There are two different conventions that can determine DH parameters. What is the difference between Craig's [1, Sec 3.4] convention and the Spong [2, Sec. 3.2] convention?
I know that both methods must yield the same result.
[1]: Craig, John J. Introduction to robotics: mechanics and control. Addison-Wesley, 1989.
[2]: Spong, Mark W., Seth Hutchinson, and Mathukumalli Vidyasagar. Robot modeling and control. Wiley, 2006.
|
Let me start off by saying that I am currently going to university majoring in computer engineering. I love software/hardware and I especially love robotics and I want to apply my knowledge of software/hardware in robots. I have never taken a formal class on robotics, so I don't really know where to start or how to approach the mathematics that robots entail.
Currently, I am interested in calculating the inverse kinematics of a delta robot. To clarify a bit more, I am trying to determine the required joint angles that will position the end-effector of the delta robot to a specific location given some x,y,z coordinate. The delta robot that I will be basing my design off of is shown in the image below.
Based on the research that I have been doing for the past few days, I found that the mathematics involved is usually that of Denavit-Hartenberg parameters, Jacobian matrices, etc. I am going to be honest: I have never encountered Denavit-Hartenberg parameters or Jacobian matrices, and I don't know how to apply them to solve the kinematics equations, let alone find the kinematics equations in the first place. Most of the articles that I have read mainly deal with serial manipulator robots and the mathematics of finding their kinematics equations. I couldn't really find any good material on parallel manipulators, or material that was easy to understand given my current situation.
I wanted to ask my question here in the hopes that someone in the community could direct me to where I can start on learning more on obtaining the inverse kinematics equations of parallel manipulators and solving those equations.
Any help will be much appreciated.
Thank you.
|
I will have a belt-driven linear actuator, consisting of a gantry plate riding on two rails. I'm thinking of using a brushed DC motor.
The gantry will move from the home position to the right (outbound) at 1 m/s. The mass of the gantry will vary from 3 kg to 6 kg. On the return home (inbound) one must avoid spillage of contents, which may require a soft start/stop or simply a slow return to home.
In the outbound case, what I'd like to know is how, in a practical sense, you brake the mass and bring the gantry to a stop, ensuring that the gantry plate always comes to rest within 0.5 mm of an end plate?
I'm clearer on how I can ensure the gantry stops within 0.5 mm of the home position, because I can use a PWM ramp to slowly decelerate.
I want to avoid using an MCU; I just want to use an IC with switches and potentiometers.
You can also use math if you want to explain.
Of course, one seeks to begin arresting the mass as close to the end stop as one can in the outbound case without problems.
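For reference, the constant-deceleration kinematics behind that trade-off (the 50 mm braking distance below is an assumed example, not a requirement):
$$ a = \frac{v^{2}}{2d}, $$
so stopping the 1 m/s gantry in 50 mm needs a deceleration of $1^{2}/(2 \cdot 0.05) = 10\;\text{m/s}^2$ (about 1 g), i.e. a braking force of roughly 30-60 N for the 3-6 kg gantry.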
Thanks.
|
I am using a Ruby script to connect the Multi-Layer Surface Map of the velodyne_slam component to the vizkit3D visualization.
The visualization plugin is loaded like this:
envireViz = Vizkit.default_loader.EnvireVisualization
Is it possible to get the MLSVisualisation object from the EnvireVisualization in order to set visualization properties (like colors etc.) from the Ruby script?
Ruby's introspection abilities didn't help a lot here...
|
I need to search the git history of a couple of packages to get back to a working state for a demo. I am searching by checking out commits manually until I find the commits of all affected packages that work together.
By checking out commits manually, I will get into the detached HEAD state:
$ git checkout 995e018
-> You are in 'detached HEAD' state. [...]
To save the current state of all packages, a snapshot is created:
$ autoproj snapshot demo_working
Now demo_working/overrides.yml pins the commit that HEAD is pointing to (e.g. 5e2e3a259) instead of the commit that I chose manually for the package (995e018).
Is this the desired behaviour? In my opinion, a snapshot should store the current state of all my git repositories, meaning that I can also select commits manually.
|
I need to get position $x$ by integrating velocity $v$. One could use 1st-order Euler integration as
$x_{t+1} = x_t + \delta \, v_t.$
However, doing so leads to errors proportional to the sampling time $\delta$. Do you know of any more accurate solution?
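For comparison, one common higher-order alternative (a general numerical-integration fact, assuming $v_{t+1}$ is available at update time) is the trapezoidal rule, whose error shrinks with $\delta^{2}$:
$$ x_{t+1} = x_t + \frac{\delta}{2} \, (v_t + v_{t+1}). $$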
|
I just got an iRobot Create base and I've followed the instructions given in the ROS tutorials to set up the turtlebot PC and the workstation. I could successfully ssh into username@turtlebot through the workstation, so I'm assuming that is all good. I had an issue with the Create not detecting the USB cable, which I solved using the detailed answer given for the question here. This solved the problem of "Failed to open port /dev/ttyUSB0" that I was facing before.
Now the next step would be to ssh into the turtlebot (which I've done) and use roslaunch turtlebot_bringup minimal.launch to do whatever the command does (I've no idea what to expect upon launch). But apparently something's amiss, since the Create base chirps and then powers down after showing [kinect_breaker_enabler-5] process has finished cleanly as output along with the log file location (see output below), but I don't see a prompt. I checked the battery and it's charged, so that's not the problem. Following is the terminal output.
anshul@AnshulsPC:~$ roslaunch turtlebot_bringup minimal.launch
... logging to /home/anshul/.ros/log/9d936a6a-fbdc-11e3-ba6b-00265e5f3bb9/roslaunch-AnshulsPC-5038.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://128.110.74.233:48495/
SUMMARY
========
PARAMETERS
* /cmd_vel_mux/yaml_cfg_file
* /diagnostic_aggregator/analyzers/digital_io/path
* /diagnostic_aggregator/analyzers/digital_io/startswith
* /diagnostic_aggregator/analyzers/digital_io/timeout
* /diagnostic_aggregator/analyzers/digital_io/type
* /diagnostic_aggregator/analyzers/mode/path
* /diagnostic_aggregator/analyzers/mode/startswith
* /diagnostic_aggregator/analyzers/mode/timeout
* /diagnostic_aggregator/analyzers/mode/type
* /diagnostic_aggregator/analyzers/nodes/contains
* /diagnostic_aggregator/analyzers/nodes/path
* /diagnostic_aggregator/analyzers/nodes/timeout
* /diagnostic_aggregator/analyzers/nodes/type
* /diagnostic_aggregator/analyzers/power/path
* /diagnostic_aggregator/analyzers/power/startswith
* /diagnostic_aggregator/analyzers/power/timeout
* /diagnostic_aggregator/analyzers/power/type
* /diagnostic_aggregator/analyzers/sensors/path
* /diagnostic_aggregator/analyzers/sensors/startswith
* /diagnostic_aggregator/analyzers/sensors/timeout
* /diagnostic_aggregator/analyzers/sensors/type
* /diagnostic_aggregator/base_path
* /diagnostic_aggregator/pub_rate
* /robot/name
* /robot/type
* /robot_description
* /robot_pose_ekf/freq
* /robot_pose_ekf/imu_used
* /robot_pose_ekf/odom_used
* /robot_pose_ekf/output_frame
* /robot_pose_ekf/publish_tf
* /robot_pose_ekf/sensor_timeout
* /robot_pose_ekf/vo_used
* /robot_state_publisher/publish_frequency
* /rosdistro
* /rosversion
* /turtlebot_laptop_battery/acpi_path
* /turtlebot_node/bonus
* /turtlebot_node/port
* /turtlebot_node/update_rate
* /use_sim_time
NODES
/
cmd_vel_mux (nodelet/nodelet)
diagnostic_aggregator (diagnostic_aggregator/aggregator_node)
kinect_breaker_enabler (create_node/kinect_breaker_enabler.py)
mobile_base_nodelet_manager (nodelet/nodelet)
robot_pose_ekf (robot_pose_ekf/robot_pose_ekf)
robot_state_publisher (robot_state_publisher/robot_state_publisher)
turtlebot_laptop_battery (linux_hardware/laptop_battery.py)
turtlebot_node (create_node/turtlebot_node.py)
auto-starting new master
process[master]: started with pid [5055]
ROS_MASTER_URI=http://128.110.74.233:11311
setting /run_id to 9d936a6a-fbdc-11e3-ba6b-00265e5f3bb9
process[rosout-1]: started with pid [5068]
started core service [/rosout]
process[robot_state_publisher-2]: started with pid [5081]
process[diagnostic_aggregator-3]: started with pid [5102]
process[turtlebot_node-4]: started with pid [5117]
process[kinect_breaker_enabler-5]: started with pid [5122]
process[robot_pose_ekf-6]: started with pid [5181]
process[mobile_base_nodelet_manager-7]: started with pid [5226]
process[cmd_vel_mux-8]: started with pid [5245]
process[turtlebot_laptop_battery-9]: started with pid [5262]
[WARN] [WallTime: 1403641073.765412] Create : robot not connected yet, sci not available
[WARN] [WallTime: 1403641076.772764] Create : robot not connected yet, sci not available
[kinect_breaker_enabler-5] process has finished cleanly
log file: /home/anshul/.ros/log/9d936a6a-fbdc-11e3-ba6b-00265e5f3bb9/kinect_breaker_enabler-5*.log
Following is the log file: /home/anshul/.ros/log/9d936a6a-fbdc-11e3-ba6b-00265e5f3bb9/kinect_breaker_enabler-5*.log output:
[rospy.client][INFO] 2014-06-24 14:20:12,442: init_node, name[/kinect_breaker_enabler], pid[5538]
[xmlrpc][INFO] 2014-06-24 14:20:12,442: XML-RPC server binding to 0.0.0.0:0
[rospy.init][INFO] 2014-06-24 14:20:12,443: ROS Slave URI: [http://128.110.74.233:51362/]
[xmlrpc][INFO] 2014-06-24 14:20:12,443: Started XML-RPC server [http://128.110.74.233:51362/]
[rospy.impl.masterslave][INFO] 2014-06-24 14:20:12,443: _ready: http://128.110.74.233:51362/
[xmlrpc][INFO] 2014-06-24 14:20:12,444: xml rpc node: starting XML-RPC server
[rospy.registration][INFO] 2014-06-24 14:20:12,445: Registering with master node http://128.110.74.233:11311
[rospy.init][INFO] 2014-06-24 14:20:12,543: registered with master
[rospy.rosout][INFO] 2014-06-24 14:20:12,544: initializing /rosout core topic
[rospy.rosout][INFO] 2014-06-24 14:20:12,546: connected to core topic /rosout
[rospy.simtime][INFO] 2014-06-24 14:20:12,547: /use_sim_time is not set, will not subscribe to simulated time [/clock] topic
[rospy.internal][INFO] 2014-06-24 14:20:12,820: topic[/rosout] adding connection to [/rosout], count 0
[rospy.core][INFO] 2014-06-24 14:20:20,182: signal_shutdown [atexit]
[rospy.internal][INFO] 2014-06-24 14:20:20,187: topic[/rosout] removing connection to /rosout
[rospy.impl.masterslave][INFO] 2014-06-24 14:20:20,188: atexit
From the logs, I can tell that something told the Create to power down. And since the log is named with 'kinect', I tried minimal.launch with and without the Kinect attached to the turtlebot PC. It doesn't make any difference.
Any clue what I might be missing? Or is this the way bringup works (I guess not)?
|
Is it possible to control the Create without using ROS at all? I know it has all these serial/digital I/O pins that ROS drives through its drivers/libraries, but how hard would it be to control it directly using, say, a pcDuino?
I'm asking this because I'm having trouble launching the Create using ROS (question).
|
I have a BS2 mounted on a Parallax Board of Education Rev D.
I was trying to use a wire to determine whether a control was pressed.
However, whenever a wire is connected, the state seems to fluctuate between 1 and 0 instead of staying at one or the other. When connected to the desired button it still exhibits this behavior, but with the added quality of switching to 0 when the button is pressed. Ideally it would stay 0 while the button is pressed and 1 when it's not; instead it flickers between 1 and 0 when unpressed.
What causes this behavior, and why does it occur even when the wire is not connected to anything except the bus?
The code used to get the state is:
DO
  DEBUG CRSRXY, 0, 3, "P5:", BIN1 IN5
LOOP
|
I am using Autodesk Inventor 2013 and I need to round a component of a device. I want to round the green-marked edges, but not the red-marked ones. But when I click "round", the bottom edge is always added to the rounding and I cannot de-select it. Any hints on how to solve this problem?
|
This question is an extension of my previous problem (Data association with EKF). My problem here is with line 16 in the aforementioned link.
16. $ j(i) = \underset{k}{\operatorname{arg\,max}} \ \ det(2 \pi S^{k})^{-\frac{1}{2}} \exp\{-\frac{1}{2} (z^{i}-\hat{z}^{k})^{T}[S^{k}]^{-1} (z^{i}-\hat{z}^{k})\} $
When I compute this line, I get a huge number: 1.0e+09 * 3.5230, i.e. about $3.5 \times 10^{9}$. This is a probability density function, so why is it exceeding 1 in such a huge way?
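For reference, a minimal numpy sketch of how that line is typically evaluated (names are illustrative; z and z_hat are the measurement vectors and S the innovation covariance):
import numpy as np

def measurement_likelihood(z, z_hat, S):
    # Gaussian *density* of the innovation z - z_hat with covariance S.
    # A density is not a probability: when det(S) is very small (a tight
    # covariance), the value can legitimately be far greater than 1.
    nu = z - z_hat
    norm = np.linalg.det(2.0 * np.pi * S) ** -0.5
    return norm * np.exp(-0.5 * nu @ np.linalg.solve(S, nu))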
|
Problem: the Cartesian position of an end effector (no orientation) of a robot arm is recorded, say, every millisecond (the time steps cannot be changed) during a motion. The robot arm is commanded along the same path but with different velocities, so I get different trajectories. I want to calculate the deviation between the paths, i.e. the distances between equivalent points of two paths. The problem is to find equivalent points. Since the velocities differ, comparing the trajectories at the same time steps makes no sense. I can assume that the paths underlying the trajectories being compared are rather similar: the deviation from the ideal path is smaller than 1% of a typical length dimension of the path, and I want to detect deviations much smaller than that.
I have to map the timestamp of each recorded point to path length and compare points at the same path length. But of course the total path lengths also differ between paths, so any deviation would distort the result for all later points. How can I compensate for this?
Is there a reliable algorithm ? Where can I find information ?
Note: time warp algorithms (even memory optimized ones) are out of the game because of memory consumption.
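A minimal sketch of the arc-length reparameterization described above, assuming each trajectory is an (N, 3) numpy array of positions (sample count and names are illustrative); memory use is linear in the trajectory length, unlike time-warping methods:
import numpy as np

def arclength(traj):
    # cumulative path length at each recorded point, starting at 0
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    return np.concatenate(([0.0], np.cumsum(seg)))

def deviation(traj_a, traj_b, n_samples=1000):
    # resample both paths at the same *normalized* arc lengths in [0, 1]
    sa, sb = arclength(traj_a), arclength(traj_b)
    u = np.linspace(0.0, 1.0, n_samples)
    pa = np.column_stack([np.interp(u, sa / sa[-1], traj_a[:, i]) for i in range(3)])
    pb = np.column_stack([np.interp(u, sb / sb[-1], traj_b[:, i]) for i in range(3)])
    return np.linalg.norm(pa - pb, axis=1)
Normalizing by the total length is one crude way to compensate for the differing overall path lengths; whether it is acceptable depends on where along the path the length difference accumulates.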
|
How do you make a gripper changer for a robotic arm like this? I don't see how you could connect power/control wires or what you use to hold the gripper to the arm.
|
I am working on a robot that has an accelerometer which measures the robot's vibration. When the robot reaches a certain vibration level, I would like it to slow down in order to reduce the vibration. I thought about a PID controller, but I don't think it would work. Does anybody have input on different types of controllers I could use?
Mechaman
|
I'm controlling the angular position of a pendulum using a DC motor with a worm gearbox. Mechanically, worm gears are impossible to backdrive.
Using a PID controller on a pendulum system with a regular DC motor (no worm gear), the integrator would help the motor find the appropriate constant power setting to overcome gravity so the pendulum can hold any arbitrary position. With the worm gear, however, there is no need to apply constant power to the motor once the desired position is achieved. Power to the motor can be cut off and the worm gear will resist gravity's force to backdrive the pendulum to the lowest gravity potential.
It seems to me, then, that the integrator of the PID algorithm will cause large overshoots once the desired position is achieved. I want the integrator initially to help control the pendulum to the desired position. But once the position is achieved, I'd need the integrator to turn off.
The only solution I can come up with is to test for a special condition in the PID algorithm that checks whether the position has been reached AND the angular speed is small, and then instantaneously resets the integrator to zero. Is there a better way to handle the integrator in a system that resists backdrive?
** EDIT **
When I originally worded my question, I was mostly interested in the academic approach to backdrive resistance in a PID loop. But it will help if I explain the actual mechanism I'm building. The device is a robotic arm that rotates on a car window motor. It will also occasionally pick up and drop small weights at the end of the arm. Manufacturing variability in motors and the difference in drive torque when picking up the small weights led me to consider a PID loop.
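For what it's worth, a minimal sketch of the conditional reset proposed above (thresholds and names are placeholders to tune):
def pid_step(error, speed, state, kp, ki, kd, dt, pos_tol=0.01, speed_tol=0.01):
    state['i'] += error * dt
    # once the target is reached and motion has stopped, dump the integrator:
    # the worm gear holds position with no applied power, so the accumulated
    # term would only cause an overshoot on the next disturbance
    if abs(error) < pos_tol and abs(speed) < speed_tol:
        state['i'] = 0.0
    d = (error - state['prev']) / dt
    state['prev'] = error
    return kp * error + ki * state['i'] + kd * d
with state initialized as {'i': 0.0, 'prev': 0.0}.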
|
I want to calculate a humanoid robot's hand position given the shoulder roll and pitch angles and the elbow roll angle.
I'm able to calculate the elbow position using a rotation matrix that includes the shoulder angles.
But I don't know how to calculate the hand position from the elbow position and the elbow roll angle.
Can you propose a method to calculate the hand position?
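One possible method, sketched under assumed conventions (which axes "roll" and "pitch" refer to, and the rest pose of the arm, depend on your robot): compose the elbow rotation onto the shoulder rotation and push the forearm vector through the product.
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def hand_position(sh_roll, sh_pitch, el_roll, l_upper, l_fore):
    R_sh = rot_x(sh_roll) @ rot_y(sh_pitch)          # shoulder orientation
    p_elbow = R_sh @ np.array([0.0, 0.0, -l_upper])  # upper arm assumed along -z at rest
    R_el = R_sh @ rot_x(el_roll)                     # elbow frame expressed in the torso frame
    return p_elbow + R_el @ np.array([0.0, 0.0, -l_fore])
The key point is that the forearm offset is rotated by the product R_sh @ rot_x(el_roll), not by the elbow roll alone.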
|
Is it possible to strengthen permanent magnet DC motors by simply attaching extra magnets on the outside of the motor casing - adding to the magnetic field?
If this is not possible, the question becomes: what happens if I replace the magnets inside the motor with better magnets?
I know that the coils will not handle more current than they currently do, so what will the net effect on the motor be?
|
I am trying to build a 2 ft square XY plotter. I have seen three designs so far: 1) rack and pinion, 2) threaded screw, and 3) belt-driven. All of these use a stepper motor to drive the system.
Each one has obvious pros and cons, but correct me if I am wrong: I believe the rack-and-pinion system is the sturdiest and easiest to put together.
I googled for rack and pinion, but all I get is industrial websites. Is there any place that sells cheaper rack-and-pinion sets for hobbyists? The payload of the XY table is an electromagnet that isn't extremely heavy (maybe half a kilogram at most).
So obviously the motor must be strong enough to move another rack, which will be significantly heavier than the payload.
This is my first real robotics project, so I am new to all this.
|
I want to implement a manipulator link using a physics library. I can only apply torque at the centre of mass, but the torque should be applied at the beginning of the link.
Shifting the reference frame from the centre of mass and recalculating the inertia tensor in the new frame is not a problem, and neither is recalculating the torque based on the change of distance, but I don't think that is the correct solution.
In short: how can I convert a control torque applied at the beginning of the link into the torque that the physics simulation should apply at the centre of mass? Thanks.
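For reference, standard rigid-body mechanics (independent of any particular physics engine) gives the mapping between the two application points. A force $f$ applied at a point $p$ contributes an extra moment about the centre of mass $c$, while a pure torque is a free vector and is unchanged:
$$ \tau_{c} = \tau_{p} + (p - c) \times f $$
So if the control signal is a pure torque at the joint with no accompanying force, it can be applied unchanged at the centre of mass; the joint constraint in the simulation supplies the reaction forces.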
|
I have been working on a robot project for a while. Now I am tired of hunting for parts that just barely do the job, so it is time to make my own parts.
A 3D printer will do the trick for many parts, and 3D printers share a lot with a CNC mill in terms of control and parts. So my question is this:
I am building a RepRap-style printer, but I will use heavier-duty parts and motors, hoping to build an aluminum-capable 3-axis mill later. I found some bipolar NEMA 23 stepper motors rated at 1.9 Nm and 3 A per coil. The reprap.org website recommends NEMA 17 and low voltage; it seems to me that they use voltage to limit the current.
Can I build a RepRap using current-limiting stepper drivers with an Arduino and some software I find online, and get away with these large stepper motors? Or am I in for a lot of trouble?
|
I ran several tests with different setups in order to achieve acceptable speech-recognition quality. It works well when I push a button to activate it, but now I want it to activate automatically when a user speaks. This is a big problem, especially since I only use the energy of the audio signal to guess when the user is speaking. That is why I thought about using a headset rather than a distant microphone: in a headset the microphone is very close to the user's mouth, so it is easier to guess correctly when the user is speaking. Now my question is whether the Bluetooth headsets used with mobile phones also have this property. They are not long enough and their microphone is not positioned exactly in front of the mouth. Is it possible that such devices also capture speech/noise from a distant user? Is there a significant difference in signal energy between the user's speech and that of a person 1 meter away?
|
I am a programmer who has never worked with electronics before. I am learning the concepts and hoping to build a quadcopter, with the control software entirely written by me. Motor control seems to be the most important part.
Is it true that the typical brushless DC motor and ESC (Electronic Speed Control) can only approximately control the speed? That's because the ESC seems to have only a very approximate idea of how fast the motor is revolving. This still works for a PID (Proportional-Integral-Derivative) controller because it gets indirect feedback, from say a gyroscope, on whether the motor is going fast enough, and so it can tell the ESC to make it revolve "even faster" or "even slower", and that's good enough.
Is my understanding in the above paragraph correct?
If so, I wonder whether a servo motor that can inform about its current rate of rotation could help do away with the ESC entirely? I feel that if the microcontroller can receive an input about motor speeds and send an output requesting a certain speed, it would not need the ESC. But I am not sure how servo motors work -- what happens immediately after you request 100rpm when say they were at 80rpm?
Since they cannot adjust immediately, should the microcontroller immediately adjust the other motors to account for the fact that not all motors are at 100 rpm yet? Does that imply that the microcontroller should only request very small deltas from the currently measured speed, so that the period of deviation from the desired state is negligible?
In the latter model of requesting only very small deltas from the currently measured speed, the algorithm seems like it would not really be PID, since there is no way to control the acceleration. But maybe requesting the servo to go from 80 rpm to 100 rpm causes it to reach 81 rpm much faster than requesting it to go from 80 rpm to 81 rpm?
I feel I know so little I cannot put my finger on it more precisely, but I hope this gives an idea of the concepts I am struggling to absorb.
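For what it's worth, the loop described a few paragraphs up matches how typical flight controllers work: an inner rate loop runs on gyro feedback and merely nudges each ESC command up or down. A one-axis sketch with illustrative gains:
def rate_pid(setpoint_dps, gyro_dps, state, dt, kp=0.7, ki=0.4, kd=0.02):
    # the gyro closes the loop around whatever speed error the ESC/motor pair has
    err = setpoint_dps - gyro_dps
    state['i'] += err * dt
    d = (err - state['prev']) / dt
    state['prev'] = err
    return kp * err + ki * state['i'] + kd * d  # added to the base throttle per motor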
To summarize, the questions are:
can a servo (brushless dc) motor allow doing away with ESC?
does a servo motor accept control inputs such as "revolve at 100rpm"?
does a servo motor offer an output saying "i am at 80rpm now"?
does a servo motor at 80rpm go to 81rpm faster if it is requested to revolve at 100rpm versus at 81rpm?
the less precise questions implicit in the text above.
(crossposted from electronics.stackexchange)
|
I'd like to drive the position of various components within a virtual assembly based on sensor data being collected in real time from an external device. Does Inventor support such a setup?
The goal is to match the relative movements of the components on screen to the real-world counterpart. For example, a absolute rotary encoder records the current angle of a physical joint and the virtual joint is rotated to match. Is this feasible?
My past searches for information on this have turned up empty; perhaps because I'm using the wrong search terms. Most results point to irrelevant mechanical stress simulations.
|
I'm thinking of starting my adventure in the area of professional manufacturing. When I started to look at machines, I noticed that they are built somewhat like in the 70s: huge footprint, big 3 kW electric motors, etc.
Is there any explanation for why they are built that way?
The only one I can think of is: they were developed a long time ago, and if it worked, it stays as it is.
BTW: If you know other place where to ask this question please let me know!
|
I came up with an idea and am working with a mechanical engineer to design and prototype the idea but I keep sketching out my own ideas in the process and I just came up with this.. I'm quite sure this is not an idea he'll go with but I'm just kinda curious whether or not this would actually be feasible. Or for all I know it's already common place, or totally stupid... I dunno.
What do you think?
|
I have a Chinese CNC mill (CNC3020T, though several different devices go under this name), and its Z axis was very imprecise, often being randomly off position by as much as 0.5mm. I've disassembled the linear actuator and discovered several problems with it.
First problem is that they apparently forgot to lubricate the linear ball bearings. I make this conclusion because the rails have a set of grooves ground into them, and after wiping the rails with a tissue the only thing that is left is the finely powdered metal, with no traces of oil or other lubricant.
Second problem is the nut. I expected to see a ballnut, but in reality it is just a piece of threaded PTFE! The leadscrew rotates smoothly in it, but there is quite some lateral movement, i.e. I can tilt it slightly without any opposing force.
Third problem is the overall mounting. In the picture below, the top left screw has been sheared in the factory and then they hid their mistake by tapping a larger thread and putting in a shorter screw that doesn't actually hold anything in the top plate. So the whole assembly was fixed in three, rather than four, points. However, the remaining screw was quite tight.
So my closely related questions are:
Is the assembly even salvageable? How do I verify that the linear ball bearings and the PTFE nut are relatively undamaged?
Can I just rotate the rails by 45° to get smooth surface again?
What do I lubricate the linear bearings with? Do I clean them before lubrication? I have an ultrasonic cleaner.
Any other advice on maintenance of the whole assembly? There may be something that I missed.
|
Suppose we have a moving object (horizontal projectile motion is one of the most basic examples). Is there any way to predict where it will finally hit? Please note that I'm looking for a machine learning method, not a closed-form solution.
Although we can track the motion using a Kalman filter, that is only applicable for predicting the near future (as far as I'm concerned). But I need to predict the ultimate goal of a moving object.
To better express the problem, consider the following example:
Suppose a goalkeeper robot that of course uses filtering methods to smooth the ball's motion. It needs to predict whether the ball is going to enter the goal before it decides to catch the ball or let it go out.
Input data is a time series of location and velocity [x,y,z,v].
|
I want to make a simple device that causes my cellphone to vibrate for 30 seconds when the phone is 10 feet away from it. How would I go about doing that? How small could I make the device?
|
I'm working on an application where I need to apply a linear or angular force to operate a linkage mechanism, but I don't (yet) know what amount of force I will need. I anticipate that it will be less than 4.5 kg (44 N). The travel distance on the linkage input should be less than 15 cm.
As I look through available servos, they seem to exist firmly in the scale-model realm of remote-control vehicles, and as such I am uncertain whether any will be suitable for my application. For example, one of Futaba's digital servos, the mega-high-torque S9152, is listed at 20 kg·cm.
From what I understand, this means that at 1 cm from the center of the servo shaft, I can expect approximately 20 kg of force. If I wanted 15 cm of travel distance I would need roughly a 10.6 cm radius, which would diminish the applied force to 20 / 10.6 = 1.9 kg, well below the 4.5 that might be required.
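For reference, the conversion being used here, assuming the rated torque (properly kg·cm, i.e. kgf·cm) is available across the whole range:
$$ F = \frac{\tau}{r} = \frac{20\ \text{kg·cm}}{10.6\ \text{cm}} \approx 1.9\ \text{kg} \approx 18.5\ \text{N} $$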
Question:
Is my understanding and calculation even remotely accurate? Should I be looking at other types of actuators instead of servos? They seem to become prohibitively expensive above 20 kg·cm torque. (For the purposes of this project, the budget for the actuator is less than $250 US.)
For my application, I'd like to have reasonable control over intermediate positions across the travel range, good holding power, and fairly fast operation. For this reason I have dismissed the idea of using a linear actuator driven by a gearmotor and worm drive.
I am relatively new to robotics in the usage of motorized actuators, but I've used pneumatic cylinders for many years. For this application, I can't use pneumatics.
Edit:
Per comments, some additional constraints that are important:
Linkage Details: The linkage is a planar, one degree-of-freedom, part of a portable system (similar to a scissor lift mechanism). It is for a theatrical effect where the motion is amplified and force reduced (speed ratio and mechanical advantage are < 1).
Power: It will be carried by a person. As such, the actuation needs to be battery-operated, as no tubing or wiring can tether the person. Tubing or wiring that is self-contained is okay. Because this is a portable system, battery-power will be used. The control system will be designed specifically for an appropriate actuator. Rechargeable batteries up to 12V will most likely be employed. Actuators could operate on as high as 24V. Ideally a motor would not exceed 1-2 amperes draw, but as it is not in continuous operation, this is not a hard limit.
Not Pneumatic: I've considered pneumatic actuation, using CO2 cartridges, for example, but the client would prefer not to use pneumatics. Also, the ability to stop/hold at intermediate points in the motion range is desirable, and somewhat more complicated to do with pneumatic actuators.
Speed: An ideal actuator will be able to move the input coupling 15 cm in 1-2 seconds.
Weight: Weight constraints are not well-defined. As it will be carried by a person, it should be moderately lightweight. The actuator itself should probably be less than 1kg, but certainly this can vary. (The rest of the mechanism will probably be 6-8 kg.)
Size: The primary size constraint is that everything must fit within a space measuring no more than 500 x 500 x 120 mm (H x W x D). The linkage mechanism extends from and collapses outside the enclosure, parallel to the width.
Noise: The quieter the better, but noise is the least priority.
Servos seemed like the best choice for the job, but they don't seem to be available with the sort of torque I need.
|
I'm trying to increase the torque on the output shaft of my robot's gearbox. I have a motor with a pinion attached to it with 8 teeth. I want to create a gear with 33 teeth that will mesh with the pinion that I currently have. I've got access to a 3D printer to make the gear, but I don't know how to design the second gear so that it will mesh properly.
What parameters do I need to know about the first gear (8 teeth) to ensure that the second gear (33 teeth) will mesh correctly? How do I translate these parameters into the design of the second gear?
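For reference, assuming standard involute spur gears: two gears mesh when they share the same module $m$ (or, equivalently, diametral pitch) and the same pressure angle (commonly 20°). With $d$ the pitch diameter and $z$ the tooth count,
$$ m = \frac{d}{z}, \qquad a = \frac{m\,(z_{1} + z_{2})}{2} $$
where $a$ is the required centre distance; e.g. at $m = 1\ \text{mm}$ the 8- and 33-tooth pair would sit $20.5\ \text{mm}$ apart. Measure or infer the pinion's module and pressure angle, then generate the 33-tooth gear with the same two values.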
|
Is there any well-documented robot interaction language? I would imagine something like taking a user's speech in English, parsing it using natural language processing (like NLTK or Stanford NLP), and then building a new sentence understandable by the robot. Does something like this already exist?
I recently found ROILA (http://roila.org/language-guide/), but it seems to be a whole different language and not just a reformulation of sentences using English words with less grammatical complexity.
|
I am in charge of studying the passage of six different species of fish between lakes in the Patagonian Andes. We've been thinking of deploying video cameras underwater, but we'd need software that controls the cameras and records images only when the video changes sufficiently, so as to avoid having to check the video continuously.
If the software is also capable of recognizing the species that would even be better.
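Off-the-shelf motion-detection software exists, but a minimal frame-differencing trigger is also easy to sketch with OpenCV (the blur size and thresholds are placeholders that would need tuning for underwater footage):
import cv2

cap = cv2.VideoCapture('lake_camera.avi')   # hypothetical source: file, device or URL
ok, prev = cap.read()
prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)
frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_no += 1
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(prev, gray)
    mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
    if cv2.countNonZero(mask) > 500:         # "adequate change": record this frame
        cv2.imwrite('capture_%06d.png' % frame_no, frame)
    prev = gray
Recognizing the species is a much harder problem; that would mean training a classifier on labelled footage rather than simple differencing.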
|
Given a PID controller with an anti-windup, what are some practical ways to retune the controller once oscillation has been caused and detected? I have access to the magnitude and period of the oscillation.
I don't want to use the Ziegler-Nichols method; rather, I'd like a method that allows me to specify a phase/gain margin as I am retuning the system.
Could someone point me toward a book, article, or theory?
|
The propellers of a multicopter produce thrust. Unfortunately, the vertical component of that thrust shrinks the more the copter is tilted. I am wondering whether there is an established method to calculate how much the overall thrust has to be increased to hold the current altitude, based on the current attitude.
This is how I calculate the motor output so far; rol/pit/yaw output has already run through the PIDs.
// Calculate the speed of the motors
int_fast16_t iFL = rcthr + rol_output + pit_output - yaw_output;
int_fast16_t iBL = rcthr + rol_output - pit_output + yaw_output;
int_fast16_t iFR = rcthr - rol_output + pit_output + yaw_output;
int_fast16_t iBR = rcthr - rol_output - pit_output - yaw_output;
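One commonly used approximation (my assumption here, not something taken from this code) is to divide the collective throttle by the cosine of the tilt before mixing, since only the vertical component $T \cos\theta$ opposes gravity. A sketch:
import math

def altitude_hold_throttle(rcthr, roll_rad, pitch_rad, max_scale=1.5):
    # only T*cos(roll)*cos(pitch) points straight up, so boost by the inverse;
    # clamp the boost so a large tilt cannot command runaway thrust
    tilt = math.cos(roll_rad) * math.cos(pitch_rad)
    return rcthr * min(max_scale, 1.0 / max(tilt, 1e-3))
The scaled value would replace rcthr in the four mixing lines above.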
|
Is there any software where I can simulate production-line elements (joints, motors, springs, actuators, movement)? For example, I want to simulate a mechanism to unwind paper from a big roll, weld it to bubble foil, and finally make bubble-foil envelopes; the mechanism will look like this:
I need it as simple as possible and preferably free.
|
I have a differential drive robot whose motors are virtually quiet while driving on a completely flat surface, but they make a lot of noise on an incline. This is likely due to the correction required to maintain speed under the high inertial load, where the robot cannot accelerate fast enough for the PID to keep up.
But I noticed that some of the noise is related to acceleration: the higher the acceleration, the less noise I hear, or the shorter the time the same level of noise lasts (up to a certain acceleration limit, beyond which the motors get really noisy again).
I am trying to find out how to use an IMU that I have available in order to change the acceleration based on how steep the path's incline is.
Any documentation (papers, tutorials, etc) about motion planning related to this topic that you can point me to?
|
I am doing stereo camera calibration as described in this blog post. I wonder why I do not need to input the camera baseline for the calibration. The reason probably goes back to some very basic mathematics of triangulation. Can someone explain?
|
I'm searching for a Python toolbox/library to do visibility-graph-based motion planning. I have searched the internet but couldn't find anything; I'm probably missing something...
Is there any package you can recommend?
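In the meantime, a brute-force visibility graph is small enough to sketch directly with shapely and networkx (assuming polygonal obstacles and taking their vertices plus start/goal as nodes; the pairwise edge test is fine for modest maps):
import itertools
import networkx as nx
from shapely.geometry import LineString

def visibility_graph(points, obstacles):
    # points: list of (x, y) tuples; obstacles: list of shapely Polygons
    g = nx.Graph()
    for p, q in itertools.combinations(points, 2):
        edge = LineString([p, q])
        # keep the edge only if it does not pass through any obstacle interior
        if not any(edge.crosses(obs) or edge.within(obs) for obs in obstacles):
            g.add_edge(p, q, weight=edge.length)
    return g
Shortest paths then come straight from nx.shortest_path(g, start, goal, weight='weight').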
|
How do I determine the limit range of end-effector orientation (roll-pitch-yaw) at one specific point (XYZ)? I have derived the forward/inverse kinematics. I'm making a program for a 6-DOF articulated robot arm so that the user can know the limits of tool rotation in the global axes (roll-pitch-yaw) at a certain point.
|
Is there a standard format for storing stereo calibration data (the various matrices, usually saved as XML)? Can I load calibration data generated, say, by an OpenCV script in C into another OpenCV script in C++, or into completely different software where I create the disparity image?
|
I've been looking for ideas on how to launch a ping-pong ball a small distance (< 1 metre) for a game. Solenoids look like they might be useful, but I'm not 100% sure what force/type I need. I can mount one under a base and have the balls roll over it, with a pin pushing the ball up a ramp to its target.
As it's only a ping-pong ball, it should be light. I was considering something like this: http://www.adafruit.com/product/412
Am I along the right lines, or should I go back to the drawing board?
|
I am currently building a line-following mobile robot. I've done all my image-processing work in C#, and now I am in the control phase. I am looking for a PD controller program written in C# to start with. I've searched a lot but without success. My robot is not Arduino-based; it has a motherboard with a Core i3 CPU, and I am using a camera, not an LDR sensor.
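Until you find one: a PD loop is only a few lines, so here is a minimal sketch (in Python for brevity; the structure ports line-for-line to C#, and every name below is a placeholder):
def pd_step(error, prev_error, dt, kp=0.8, kd=0.2):
    # proportional term steers toward the line, derivative term damps overshoot
    derivative = (error - prev_error) / dt
    return kp * error + kd * derivative

# quick check with a fake error sequence standing in for the camera output
base, prev, dt = 0.5, 0.0, 1.0 / 30.0
for err in [0.4, 0.3, 0.1, -0.05, 0.0]:
    steer = pd_step(err, prev, dt)
    left, right = base - steer, base + steer   # differential-drive mixing
    prev = err
    print(round(left, 3), round(right, 3))
Here error would be the lateral offset of the line from the image centre, as produced by your C# image-processing stage.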
|
A screw is defined by a six-dimensional vector of forces and torques. It can represent any spatial movement of a rigid body (as written here). But I don't get the following distinction between screw and wrench:
The force and torque vectors that arise in applying Newton's laws to a rigid body can be assembled into a screw called a wrench.
It seems to be some kind of contextualisation but in what way?
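For context, in one common convention (component ordering varies by author) both objects are six-dimensional, and the "contextualisation" is which physical quantity fills the slots:
$$ \text{twist (velocities): } \mathcal{V} = \begin{bmatrix} v \\ \omega \end{bmatrix}, \qquad \text{wrench (statics): } \mathcal{F} = \begin{bmatrix} f \\ \tau \end{bmatrix} $$
A screw is the shared geometric structure (an axis plus a pitch); calling the assembled force-torque pair a wrench just says which interpretation of that structure is meant.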
|
Held and rotated by the knurled ends, one in each hand, the silver spokes rise and fall as the assembly rotates. What is it, some company's salesman demo tool? It was found in an old building; the unit has no markings.
|
I've successfully implemented the EKF localization algorithm with known and unknown correspondences as stated in "Probabilistic Robotics". The results make perfect sense, so I can estimate the position of a robot without using GPS or odometry. Now I've moved on to EKF-SLAM with known correspondences in the same book. I don't understand this matrix:
$$
F_{x,j} =
\begin{bmatrix}
1 & 0 & 0 & 0 \cdots 0 & 0 & 0 & 0 & 0 \cdots 0 \\
0 & 1 & 0 & 0 \cdots 0 & 0 & 0 & 0 & 0 \cdots 0 \\
0 & 0 & 1 & 0 \cdots 0 & 0 & 0 & 0 & 0 \cdots 0 \\
0 & 0 & 0 & 0 \cdots 0 & 1 & 0 & 0 & 0 \cdots 0 \\
0 & 0 & 0 & 0 \cdots 0 & 0 & 1 & 0 & 0 \cdots 0 \\
0 & 0 & 0 & \underbrace{0 \cdots 0}_{3j-3} & 0 & 0 & 1 & \underbrace{0 \cdots 0}_{3N-3j} \\
\end{bmatrix}
$$
What exactly is the bottom part of this matrix, namely the following?
$$
F_{x,j} =
\begin{bmatrix}
0 \cdots 0 & 1 & 0 & 0 & 0 \cdots 0 \\
0 \cdots 0 & 0 & 1 & 0 & 0 \cdots 0 \\
\underbrace{0 \cdots 0}_{3j-3} & 0 & 0 & 1 & \underbrace{0 \cdots 0}_{3N-3j} \\
\end{bmatrix}
$$
Is it as follows (assuming N = 3):
$$
F_{x,j} =
\begin{bmatrix}
1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1\\
\end{bmatrix}
$$
Or
$$
F_{x,j} =
\begin{bmatrix}
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
\end{bmatrix}
$$
where the ones represent a specific landmark.
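A small sketch that builds $F_{x,j}$ exactly as defined above (landmarks indexed from j = 1, three state components each):
import numpy as np

def F_xj(j, N):
    # 6 x (3 + 3N): selects the robot pose (x, y, theta) and landmark j
    # from the full EKF-SLAM state vector
    F = np.zeros((6, 3 + 3 * N), dtype=int)
    F[:3, :3] = np.eye(3)              # top rows: robot pose block
    col = 3 + 3 * (j - 1)              # 3j - 3 zero columns precede landmark j
    F[3:, col:col + 3] = np.eye(3)     # bottom rows: landmark j block
    return F

print(F_xj(j=1, N=3)[3:, :])
For N = 3 and j = 1 the printed bottom rows match the second of the two candidate matrices: the identity sits only in landmark 1's columns, with zeros everywhere else, since $F_{x,j}$ is a pure selection matrix.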
|