There must be a mathematical description of the difference that an extension tube makes to a lens -- is it something that can be easily described?
(For example, with teleconverters you can say things like "a 2x teleconverter will turn a Y-mm lens into a 2Y-mm lens, and will lose you 2 stops." Is there something similar for extension tubes?)
If there's nothing much you can say about magnification in general, what about the change in closest focusing distance? Is that also lens-dependent?
What about if we factor out the lens: is there any general way to compare the effects of (say) a 12mm and a 24mm extension tube on the same lens?
3 Answers
Accepted answer (10 votes)
I do believe there are some formulas you can use. To Matt Grum's point, I have not tested these with zoom lenses, and to my current knowledge they apply only to prime (fixed focal length) lenses. You did not specifically ask about zoom lenses, so...
The simplest way to calculate the magnification of a lens is via the following formula:
Magnification = TotalExtension / FocalLength
M = TE / F
To calculate the magnification with an extension tube, you need to know the total extension...that is, the extension provided by the lens itself, as well as that provided by the extension tube. Most lens specifications these days include the intrinsic magnification. If we take Canon's 50mm f/1.8 lens, the intrinsic magnification is 0.15x. We can solve for the lens's built-in extension like so:
0.15 = TE / 50
TE = 50 * 0.15
TE = 7.5mm
The magnification with additional extension can now be computed as follows:
Magnification = (IntrinsicExtension + TubeExtension) / FocalLength
M = (IE + TE) / F
If we assume 25mm of additional extension via an extension tube:
M = (7.5mm + 25mm) / 50mm
M = 32.5mm / 50mm
M = 0.65x
A fairly simple formula that allows us to calculate magnification easily, assuming you know the intrinsic magnification of the lens (or its intrinsic extension). If we assume the wonderful 50mm lens is the lens you are extending, to create a 1:1 macro magnification you would need 50mm worth of total extension. The problem here is that if you add too much extension, the plane of the world that is in focus (the virtual image) might just end up inside the lens itself. Additionally, this assumes a "simple" lens, one with very well-defined and well-known characteristics (i.e. a simple single-element lens).
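To make the question's 12mm vs. 24mm comparison concrete, here is a minimal sketch of the formulas above in Python. It assumes the simple-lens model described in this answer, using the 0.15x intrinsic magnification quoted for the Canon 50mm f/1.8; real lenses (especially internally focusing ones) will deviate.

```python
def intrinsic_extension(focal_length_mm, intrinsic_magnification):
    # M = TE / F  =>  TE = F * M
    return focal_length_mm * intrinsic_magnification

def magnification_with_tube(focal_length_mm, intrinsic_magnification, tube_mm):
    # M = (IntrinsicExtension + TubeExtension) / F
    ie = intrinsic_extension(focal_length_mm, intrinsic_magnification)
    return (ie + tube_mm) / focal_length_mm

for tube_mm in (0, 12, 24, 25):
    m = magnification_with_tube(50.0, 0.15, tube_mm)
    print(f"50mm lens + {tube_mm}mm tube -> {m:.2f}x")
# prints 0.15x, 0.39x, 0.63x and 0.65x respectively
```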
In a real-world scenario, having a clear understanding of any particular lens's characteristics is unlikely. With lenses that focus internally, or zoom lenses, the simple formula above is insufficient to calculate exactly what your minimum focusing distance and magnification will be for any given lens, focal length, and extension. There are too many variables, most of which are likely to be unknown, to calculate a meaningful value.
Here are some resources that I have found that provide some useful information that might help in your endeavor:
Your lack of parentheses threw me for a momentary loop. Should be Magnification = (IntrinsicExtension + TubeExtension) / FocalLength ? – rfusca Dec 11 '10 at 1:55
@rfusca: You are correct, I forgot to stick in parens. It's totalExtension / focalLength, so intrinsic and tube extension lengths have to be added together. – jrista Dec 11 '10 at 4:25
Oddly enough, the 50mm f/1.8 is the lens (well, OK, one of the lenses) I'm extending -- and those links look really useful, too. Thanks! – Matt Bishop Dec 11 '10 at 12:29
I think it can be described; in fact, Wikipedia has the relevant formula:
1/S1 + 1/S2 = 1/f
Where S1 is the distance from the subject to the front nodal point, S2 is the distance from the rear nodal point to the sensor, and f is the focal length. Since extension tubes increase S2, they allow you to make S1 smaller, thus you can focus much closer to the subject.
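To illustrate with idealized thin-lens numbers (not any real lens): reusing the 50mm example from the accepted answer, the rear distance S2 can reach about 57.5mm with the lens's built-in 7.5mm of extension, and a 25mm tube pushes it to 82.5mm. Solving the equation for S1 shows how much closer you can focus:

```python
def subject_distance_mm(f_mm, s2_mm):
    # 1/S1 + 1/S2 = 1/f  =>  S1 = 1 / (1/f - 1/S2)
    return 1.0 / (1.0 / f_mm - 1.0 / s2_mm)

print(subject_distance_mm(50.0, 57.5))  # ~383mm from the lens without a tube
print(subject_distance_mm(50.0, 82.5))  # ~127mm with a 25mm extension tube
```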
That formula assumes you know the front and rear nodal points, which in general aren't manufacturer-specified, so you'll have to measure them for each lens. Plus the formula isn't valid for lenses which change focal length when focussing, so I don't think it's quite what the questioner was after. – Matt Grum Dec 10 '10 at 15:11
With a simple (i.e. single-element) lens, the focal length never ever changes (unless you change the shape of the lens -- or unless you're being very specific and talking about different colors of light, or the like), and this is absolutely correct (in fact, moving the position of the lens is all you're doing to change the focus anyway, so an extension tube just lets you move it further). For complex (multi-element) lenses, I don't understand the optics principles well enough to be sure if the same holds true. But the film plane is always the "target" of the focus, right? So... I'd think so. – lindes Dec 10 '10 at 17:51
Some of my sources for learning (which I'll hopefully later pull together in an answer of my own -- no time for that now, though): -- and in particular, these two: and – lindes Dec 10 '10 at 18:14
@Matt Grum - I think the equation illustrates the principle behind it which seems to be the crux of the question. At least it did to me. :) – John Cavan Dec 10 '10 at 18:28
@John Cavan - the formula illustrates well why extension tubes decrease the minimum focussing distance, but I think the questioner was looking for a formula that he can use to judge what length extension tube you need to buy for a given lens in order to increase magnification x times, which unfortunately is not possible in the general case... – Matt Grum Dec 10 '10 at 18:39
Edit, to respond to the follow-up questions: given you know the effects of a tube of a certain length on a certain lens, you can work out the missing values from John's equations and get an estimate of the effect of a different length tube. Again, the values will be subject to the foibles of the lens's focussing method, but should give you a good enough idea.
In general no. There is a formula, of course, but you need to know the internal configuration of the lens and usually some elements of the lens design.
Extension tubes usually change the effective focal length slightly (the actual focal length of the lens is a property of the bending power of the glass, so it doesn't change when you move it), but how much depends on the lens design. A lot of it is to do with the angle at which the light rays leave the back of the lens. If you take an image-space telecentric lens (a special type of lens where the rays exit parallel to each other), then the distance to the film plane doesn't matter: since the rays are parallel, they won't converge or diverge any more.
If you look at the back of a wide angle lens the rear element is very close to the rear of the lens. Now look at a telephoto lens, there will be a gap between the last piece of glass and the mount, as if the lens already has an extension tube. An extension tube will behave quite differently on these two different lenses. The method of focus (internal vs. external) also affects the results of adding extension tubes.
So in short, I'm afraid there is no formula that's as simple as the one for teleconverters.
Is it truly accurate to say that the focal length changes? My understanding of optics in any detail is in its infancy, but my understanding thus far is that by moving the lens (which is all an extension tube really does), the focal length won't change (though perhaps the magnification might? Or what we've collectively started calling the "effective focal length"), but rather, the distance changes for the focal plane, which causes the in-focus plane to change... I'll try to find some resources and post them in an answer. I -think-, though, that this answer is factually questionable. I think. – lindes Dec 10 '10 at 16:44
Whether or not the focal length changes and to what degree depends on the lens, as I stated. For the simplest case of a pinhole lens it's easy to see that the focal length changes if you move the pinhole further from the camera, since the focal length is defined as the distance from the pinhole to the imaging plane! – Matt Grum Dec 10 '10 at 17:21
Ahh, but a pinhole is not a lens, and as I understand it, for lenses (or optical systems in general??), the focal distance is defined not by the distance between the point and the imaging plane, but between the center of the lens and a point, given an input of parallel lines. Is that not correct? (Note: see video links on my comment to John's answer --… -- also, note that I'm genuinely asking; I'm relatively new to understanding optics at this level.) – lindes Dec 11 '10 at 1:42
Yes, you are correct: a pinhole has no focal length, as the focal length describes a lens's ability to bend light. What I meant to say is that the effective focal length of a pinhole system is the distance between the pinhole and screen, i.e. it gives the same field of view as a lens with the same focal length. The point is you need to make assumptions about an imaging system in order to predict how it will behave if you change one of the parameters without knowing the others. – Matt Grum Dec 11 '10 at 14:44
I was watching a few sci-fi movies and was wondering about the real science of what would happen if you were subjected to the conditions of outer space.
I read the Wikipedia article on space exposure, but was still confused. If a person was about the same distance from the sun as Earth is, would they still freeze to death? (as shown in the movie Sunshine)
I'm reading from all sorts of sites with conflicting information about what would actually happen when a person is exposed to the vacuum of space...
You would definitely freeze, but it might take a day or so for your whole body to freeze solid, depending on all the variables. – endolith Nov 26 '12 at 22:48
Possible duplicates: physics.stackexchange.com/q/3076/2451 and links therein. – Qmechanic Jan 27 '13 at 14:44
4 Answers
Accepted answer (7 votes)
You'd freeze to death faster in the Atlantic ocean.
Space has essentially no thermal conductivity. All the heat you lose will be radiated away. According to the Stefan-Boltzmann law, $W = \sigma T^4$, you would lose at most 500 watts per square meter of body surface area. By contrast, the convective heat transfer coefficient in water is about 12,500 watts per square meter per kelvin of temperature difference. So, I think freezing would be the least of your concerns.
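A quick sanity check of those figures, treating the body as a perfect blackbody radiating into empty space (emissivity below 1 and absorbed sunlight would both reduce the net loss, so this is an upper bound):

```python
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W / (m^2 K^4)

T_skin = 310.0           # roughly 37C in kelvin
print(SIGMA * T_skin**4) # ~524 W/m^2 radiated, i.e. "at most ~500 watts"

h_water = 12500.0        # convective coefficient in water, W / (m^2 K)
dT = 310.0 - 275.0       # body vs. ~2C Atlantic water
print(h_water * dT)      # ~437,500 W/m^2, vastly more than radiation
```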
But what about evaporative cooling from the liquid boiling off the surface of your skin/inside of mouth/lungs, etc? – endolith Nov 26 '12 at 22:33
They would freeze.
They wouldn't freeze to death - since they would die of something else (lack of oxygen) first
Although a small part of you is facing a hot sun at 6000K, most of your surface is facing cold dark space at 3K. You can work out what temperature you will reach; it depends only on how reflective you are.
Assuming you have the same reflectivity as the Earth (35%), and you aren't close enough to the Earth to receive any significant heat from it, you would end up at about the same temperature the Earth would have without the greenhouse effect of its atmosphere, which is about -20C.
If you were made of much darker material like the moon - you would get much colder. Parts of the moon not facing the sun or earth get down to around -150C
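The -20C figure can be reproduced with the standard equilibrium-temperature estimate. The sketch below assumes the absorbed sunlight is averaged over the whole surface of a rotating body (the usual factor of 4), which is only a rough model of a tumbling person:

```python
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W / (m^2 K^4)
SOLAR = 1361.0     # solar constant at Earth's distance, W/m^2

def equilibrium_temp_k(albedo):
    absorbed = SOLAR * (1.0 - albedo) / 4.0   # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

print(equilibrium_temp_k(0.35) - 273.15)  # ~ -23C, close to the -20C above
```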
Just want to add that freezing is 'not an issue'.
The problem is that water boiling temperature gets lower at lower pressure.
In a vacuum, blood boils even at 36.6C, so all your blood circulation is stopped immediately due to bubbles of blood (water) vapor. So you lose consciousness in a few seconds, and die in minutes due to lack of oxygen in the brain.
If you were to rotate around your own axis at ~60 RPM and were not wearing white, you should neither freeze nor fry in the short term (minutes). If you would freeze, then Earth would also freeze. As Earth is at equilibrium at 20C, you should be near it too if you are not wearing white.
The argument that the earth's equilibrium temperature would be identical to a body's ignores a great deal of important detail. Just for starters, (a) the earth generates heat at its core, (b) the "green house" effect, (c) neither are black bodies, etc. – Carl Brannen Jun 24 '11 at 4:00
and wearing white or green or black wouldn't change your equilibrium temperature, it would simply change the speed with which you approached equilibrium. It changes your albedo. – mwengler Aug 18 '12 at 3:27
This says the blood will not boil. The reason you lose consciousness after 10 seconds is because that's how long the body takes to use up all the oxygen in your blood. – endolith Nov 26 '12 at 22:42
Remember the pressure in your body is 14.7 psi at sea level. So first, whatever air you have is now at that pressure in your lungs. An inexperienced diver experiences this when he dives 10 m, takes a lung full of air, and surfaces without exhaling. He will be lucky if he doesn't die from it. So you exhale... now if something does not burst in your body because of internal pressure or nitrogen gassing in your blood, well then you can die from lack of oxygen. Dissipation of heat is one of the biggest problems for a body in space. If the body is not rotating, one side will cook and the other will freeze. If it's rotating just right it will become a new moon <-- he he.
Pretty much what the title says. My base question is this: assume I take a piece of steel and a piece of PVC plastic, and I measure both their temperatures and find they are the same. If I then take a look at the vibration speeds of the individual molecules, would they be the same as well?
Here's a rough example:
I measure both the steel and PVC and find them both at 100F, and then I measure the vibrations of a molecule in the steel and find it to be moving at 10 miles an hour. Would the PVC molecules also be moving at 10 miles an hour?
I'm sure I'm not using the correct units of measure to measure the vibration, but I didn't know what the correct unit of measure is for something like that. Hopefully it gets the point across.
1 Answer
Accepted answer (4 votes)
No, because the atoms in steel and plastic have different masses.
Your example is a bit more complicated than it need be, because steel (well, iron) is an element while plastic is a compound. This complicates things because molecules can have internal motions that contribute to the energy. A better comparison might be between lead and lithium. These are both elements, so you just need to consider the motions of the Pb and Li atoms. For a given kinetic energy, a lead atom will be moving more slowly than a lithium atom because it's heavier.
You could think of it this way: if you touch the piece of lead to the piece of lithium the atoms will be in contact so they'll swap energy by bashing into each other. If a lead atom hits a lithium atom the Li atom will recoil a lot faster than a Pb atom.
It's probably simpler to think of gases. From the Wikipedia article on the Kinetic Theory of Gases, the average kinetic energy of molecules in an ideal gas is (3/2) k_B T. But since molecules of nitrogen (28) are lighter than molecules of oxygen (32), it follows that at the same temperature, an "average" nitrogen molecule is moving about 7% faster than an "average" oxygen molecule. – Anonymous Jun 2 '12 at 17:05
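The 7% figure follows directly from the equality of average kinetic energies: if (1/2)mv^2 is the same for both gases at a given temperature, speed scales as 1/sqrt(m). A two-line check:

```python
from math import sqrt

m_n2, m_o2 = 28.0, 32.0               # molar masses, g/mol
print(100 * (sqrt(m_o2 / m_n2) - 1))  # ~6.9% faster for N2 at equal T
```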
Rindo Berry
The Rindo Berry is a berry introduced in Generation IV that weakens the power of super-effective Grass-type moves.
Where to Obtain
Pokémon Diamond, Pokémon Pearl and Pokémon Platinum: In Pastoria City, there is a house west of the Pokémart where a girl will give you a rare berry once a day. If you're lucky, the girl will give you a Rindo Berry.
Pokémon HeartGold and SoulSilver: Sometimes, your mom will send you 5 Rindo Berries if you allow her to spend the money you keep with her.
I'm planning on running an executable as a child process in my program (e.g. using Ruby's popen or C's exec() family of functions). However, the executable I'm planning on running is licensed through the GPL.
The GPL, as I understand it, requires that all code linked to GPL-licensed code must also be GPL-licensed.
But there's also an "arm's length" exception, discussed in the GPL FAQ.
Would runnning an executable as a child process be part of this exception, or would it still be considered "linking" for the purposes of the license?
If it is considered "linking", how does it differ from running a program, like Nmap, from the command line?
think this covers it programmers.stackexchange.com/questions/50118/… – jk. Aug 22 '11 at 10:56
It doesn't seem to cover my question, which is someone trying to get around the GPL by not linking, but using a server. In my case, I can't link(AFAIK) because Nmap is a program not a library, and using popen is like using C's exec functions. It spawns a new process, but is a child process of my program that uses pipes for IO between parent and child. I would ship my program without Nmap, and the end-user would have to install it himself. I haven't decided on what license, but am leaning towards a open license of some sort, but if legally permissible I might need to dual-license. – Steven Williams Aug 22 '11 at 18:00
@Steven thanks for clarifying your question. I've revised it a little more to focus on the specific use-case rather than a blind reading of the license, to try to keep it on-topic for Programmers. – user8 Aug 22 '11 at 18:42
4 Answers
Accepted answer (6 votes)
I think this quote from the section on plugins might lead you in the right direction.
If the program uses fork and exec to invoke plug-ins, then the plug-ins are separate programs, so the license for the main program makes no requirements for them. So you can use the GPL for a plug-in, and there are no special requirements.
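For illustration only (this shows the mechanism, not a legal opinion): the pattern under discussion, applied to the question's Nmap use case, looks like the Python sketch below. The parent spawns the GPL'd tool as an independent process and exchanges data only over pipes; the flags and target here are just examples.

```python
import subprocess

# nmap runs as its own program in a child process; the parent
# merely consumes its output at "arm's length".
result = subprocess.run(
    ["nmap", "-sV", "scanme.nmap.org"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```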
Let's compare the strings "running an executable as a child process" and "linking a library". One starts with the letter "r", the other with the letter "l". Therefore it is not the same!
What about "running some code as a child process" and "running some code as a linked library"? – mouviciel Aug 23 '11 at 7:54
This is generally considered to be okay, as long as the child process is a sensible program in its own right.
Ultimately, the question is, is the owner of the GPL'd code going to sue you, and can he convince a judge that the processes are "combined in a way that would make them effectively a single program." If the child process is a tool that can be used on its own, and you are invoking that tool, you should be fine.
You are going to have a hard time getting a good answer to this question. The issue is that the correct answer to your question relies on a complicated legal question for which there is as yet no answer.
I am not a lawyer, but I'll tell you my own position, for what that's worth. Linking, because it does not involve a creative process, cannot create a new work for copyright purposes. Linking is legally like stapling. Linking two works together is just like stapling two DVDs together. The output is, legally, the same as the input. It is the two works.
So if your work was not a derivative work before you linked it, it is not a derivative work after you linked it. The GPL requires source code of distributed works be distributed. So the question is whether the work whose source code you don't want to distribute is a derivative work. Since linking can't change this, the question comes down to whether the work was a derivative work before you linked it.
Legally, with only a few exceptions by statute that don't apply here, a work is a derivative work if it contains significant protectable expression taken from the other work. Note that functional elements are not protectable expression, only creative elements are.
So the answer to your question is: if your code contains sufficient protectable expression taken from a work covered by the GPL to make it a derivative work of that work, then yes. Otherwise, no.
After linking, the executable is a derivative work (same as translating a book to another language). GPL requires all source code of derivative works to be made available. This is not ambiguous. There is also more to linking than your stapling analogy suggests, but that doesn't matter unless we get into dynamic linking. – phkahler Aug 22 '11 at 14:37
I don't think that in this case, any linking occurs. – Steven Williams Aug 22 '11 at 17:31
@phkahler: Not true about dynamic linking. Right, you mention that. But it isn't clear if David meant static or dynamic linking either. – Zan Lynx Aug 22 '11 at 18:45
Linking cannot make a derivative work. Under copyright law, a derivative work is a type of new work, and a new work can only be produced by a creative process. If linking could make a derivative work, then the linker would be entitled to copyright, since it had made the work. That's absurd. Linking is legally equivalent to stapling -- it's a non-creative process that simply aggregates two works. If your work wasn't derivative before you linked it, it can't be derivative after. Only a creative process can create a new work, derivative or not. – David Schwartz Aug 22 '11 at 18:53
@David Schwartz: Repeating your idiosyncratic view in a comment doesn't make it mainstream. I've read a good deal about software and copyrights, and your opinion took me totally by surprise. You're welcome to your own opinion, of course, but from about the middle of your second paragraph on you were giving what looks awfully like legal advice, speaking very definitely about the law. Since you are not a lawyer, you're almost certainly not qualified to establish novel legal theories and rely on them. – David Thornley Aug 22 '11 at 19:36
I have a large method which does three tasks, each of which can be extracted into a separate function. If I make an additional function for each of those tasks, will it make my code better or worse, and why?
Edit: Obviously, it'll mean fewer lines of code in the main function, but there'll be additional function declarations, so my class will have additional methods, which I believe isn't good, because it'll make the class more complex.
Edit 2: Should I do this before I write all the code, or should I leave it until everything is done and then extract the functions?
"I leave it until everything is done" is usually synonymous with "It will never be done". – Euphoric Oct 1 '12 at 8:15
That is generally true, but also remember the opposite principle of YAGNI (which doesn't apply in this case, since you already need it). – jhocking Oct 1 '12 at 11:46
5 Answers
Accepted answer (14 votes)
This is a book I often link to, but here I go again: Robert C. Martin's Clean Code, chapter 3, "Functions".
Do you prefer reading a function with 150+ lines, or a function calling three 50-line functions? I think I prefer the second option.
Yes, it will make your code better in the sense that it will be more "readable". Make functions that perform one and only one thing, they will be easier to maintain and to produce test case for.
Also, a very important thing I learned from the aforementioned book: choose good and precise names for your functions. The more important the function is, the more precise the name should be. Don't worry about the length of the name; if it has to be called FunctionThatDoesThisOneParticularThingOnly, then name it that way.
Before performing your refactor, write one or more test cases. Make sure they work. Once you're done with your refactoring, you will be able to launch these test cases to ensure the new code works properly. You can write additional "smaller" tests to ensure your new functions perform well separably.
Finally, and this is not contrary to what I've just written, ask yourself if you really need to do this refactoring. Check out the answers to "When to refactor?" (also, search SO questions on "refactoring"; there are more, and the answers are interesting to read).
If the code is already there and works and you are short on time for the next release, don't touch it. Otherwise I think one should make small functions whenever possible and as such, refactor whenever some time is available while ensuring that everything works as before (test cases).
Actually, Bob Martin has shown several times that he prefers 7 functions with 2 to 3 lines over one function with 15 lines (see here sites.google.com/site/unclebobconsultingllc/…). And that's where lots of even experienced devs are going to resist. Personally, I think that lots of those "experienced devs" just have trouble accepting that they could still improve on such a basic thing as building abstractions with functions after >10 years of coding. – Doc Brown Sep 23 '13 at 14:18
Yes, obviously, if it is easy to see and separate the different "tasks" of a single function.
1. Readability - Functions with good names make it explicit what code does, without the need to read that code.
2. Reusability - It is easier to reuse a function that does one thing in multiple places than a function that also does things you don't need.
3. Testability - It is easier to test a function that has one defined purpose than one that has many.
But there might be problems with this:
• It is not easy to see how to separate the function. This might require refactoring of the inside of the function first, before you move on to separation.
• The function has huge internal state that is passed around. This usually calls for some kind of OOP solution.
• It is hard to tell what the function should be doing. Unit test it and refactor it until you know.
The problem you are posing is not a problem of coding conventions or coding practice, but rather a problem of readability and of the ways text editors show the code you write. The same problem appears in this post:
Splitting a function into sub-functions makes sense when implementing a big system with the intent to encapsulate the different functionalities it is composed of. Nevertheless, sooner or later you will find yourself with a number of big functions. Some of them are unreadable and difficult to maintain whether you keep them as single long functions or split them into smaller functions. This is particularly true for functions whose operations are not needed in any other place in your system. Let's pick up one such long function and consider it in a broader view.
Pros:
• Once you read it, you have a complete idea of all the operations the function does (you can read it as a book);
• If you want to debug it, you can execute it step by step without any jump to any other file/part of the file;
• You have the freedom to access/use any variable declared at any stage of the function;
• The algorithm the function implements is fully contained in the function (encapsulated);
Cons:
• It takes many pages of your screen;
• It takes long to read it;
• It is not easy to memorize all the different steps;
Now let's imagine splitting the long function into several sub-functions and looking at them from a broader perspective.
Pros:
• Except for the leaf functions, each function describes in words (the names of its sub-functions) the different steps performed;
• It takes a very short time to read each single function/sub-function;
• It is clear which parameters and variables are affected in each sub-function (separation of concerns);
Cons:
• It is easy to imagine what a function like "sin()" does, but not as easy to imagine what our sub-functions do;
• The algorithm has now disappeared; it is distributed across many sub-functions (no overview);
• When debugging step by step, it is easy to forget the depth of the function call you are coming from (jumping here and there in your project files);
• You can easily lose context when reading the different sub-functions;
Both solutions have pros and cons. The actual best solution would be editors which allow you to expand, inline and to full depth, each function call into its contents. That would make splitting functions into sub-functions the clear best solution.
Aside: I wrote this in response to dallin's question (now closed) but I still feel it could be helpful to someone so here goes
I think that the reason for atomising functions is twofold, and as @jozefg mentions, it is dependent on the language used.
Separation of Concerns
The main reason to do this is to keep different pieces of code separate, so any block of code that doesn't directly contribute to the desired outcome/intent of the function is a separate concern and could be extracted.
Say you have a background task that also updates a progress bar; the progress bar update isn't directly related to the long-running task, so it should be extracted, even if it's the only piece of code that uses the progress bar.
Say in JavaScript you have a function getMyData(), which 1) builds a SOAP message from parameters, 2) initializes a service reference, 3) calls the service with the SOAP message, 4) parses the result, and 5) returns the result. Seems reasonable; I've written this exact function many times. But really, that could be split into three private functions, with getMyData() itself only including the code for steps 3 and 5 (if that), as none of the other code is directly responsible for getting data from the service.
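A sketch of that decomposition (written in Python rather than JavaScript for brevity; all names are illustrative and the service plumbing is stubbed out):

```python
def _build_soap_message(params):
    # concern 1: message construction only
    return f"<soap><params>{params}</params></soap>"

def _call_service(message):
    # concern 2: service setup and transport (stubbed out here)
    return f"<response ok='true' bytes='{len(message)}'/>"

def _parse_response(response):
    # concern 3: turning the raw response into usable data
    return {"raw": response}

def get_my_data(params):
    # only code directly responsible for "getting data" lives here
    return _parse_response(_call_service(_build_soap_message(params)))

print(get_my_data({"id": 42}))
```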
Improved Debugging Experience
If you have completely atomic functions, your stack trace becomes a task list, listing all the successfully executed code, i.e:
• Get My Data
• Build Soap Message
• Initialise Service Reference
• Parsed Service Response - ERROR
would be a lot more interesting than finding out that there was an error while getting data. But some tools are even more useful for debugging detailed call trees than that; take for example Microsoft's Debugger Canvas.
I also understand your concerns that it can be difficult to follow code written this way, because at the end of the day you do need to pick an order of functions in a single file, whereas your call tree is a lot more complex than that. But if functions are named well (IntelliSense allows me to use 3-4 camel-case words in any function I please without slowing me down any) and structured with the public interface at the top of the file, your code will read like pseudo-code, which is by far the easiest way to get a high-level understanding of a codebase.
FYI - this is one of those "do as I say, not as I do" things; keeping code atomic is pointless unless you're ruthlessly consistent with it IMHO, which I'm not.
I'm sure you've heard the advice that variables should be scoped as tightly as possible, and I hope you agree with it. Well, functions are containers of scope, and in smaller functions the scope of the local variables is smaller. It's much clearer how and when they are supposed to be used and it's harder to use them in the wrong order or before they are initialized.
Also, functions are containers of logical flow. There's only one way in, the ways out are clearly marked, and if the function is short enough, the internal flows should be obvious. This has the effect of reducing cyclomatic complexity which is a reliable way to reduce the rate of defects.
From the c2wiki page on coupling & cohesion:
Cohesion (interdependency within module) strength/level names: (from worst to best; high cohesion is good)
• Coincidental Cohesion : (Worst) Module elements are unrelated
• Logical Cohesion : Elements perform similar activities as selected from outside module, i.e. by a flag that selects operation to perform (see also CommandObject). i.e. body of function is one huge if-else/switch on operation flag
• Temporal Cohesion : operations related only by general time performed (i.e. initialization() or FatalErrorShutdown?())
• Procedural Cohesion : Elements involved in different but sequential activities, each on different data (usually could be trivially split into multiple modules along linear sequence boundaries)
• Communicational Cohesion : unrelated operations except need same data or input
• Sequential Cohesion : operations on same data in significant order; output from one function is input to next (pipeline)
• Informational Cohesion: a module performs a number of actions, each with its own entry point, with independent code for each action, all performed on the same data structure. Essentially an implementation of an abstract data type. i.e. define structure of sales_region_table and its operators: init_table(), update_table(), print_table()
• Functional Cohesion : all elements contribute to a single, well-defined task, i.e. a function that performs exactly one operation get_engine_temperature(), add_sales_tax()
(emphasis mine).
I don't fully understand the definition of logical cohesion. My questions are:
• what is logical cohesion?
• Why does it get such a bad rap (2nd worst kind of cohesion)?
2 Answers
Logical cohesion can be bad because you end up grouping functionality by technical characteristics rather than functional characteristics. For example, consider an application consisting of multiple modules. Each module represents some business domain and has corresponding data access code. If you group all data access code across all modules, then you have logical cohesion. After all, it is all data access, and in some cases it is beneficial to be able to evaluate the data access patterns of an application. This however is problematic because the business domain provides the module boundaries, not the technical domain. By achieving logical cohesion you end up losing functional cohesion. Typically, the business domain defines a well-defined unit of deployment, and technical aspects are there to support the business domain.
That's a good answer, although to avoid confusion, I'd mention that in the context of object oriented programming (and the examples on Wikipedia and c2wiki), a module is essentially a class. This means that the comment about the "huge if/else" also makes more sense. – Daniel B Oct 3 '12 at 7:05
From how it is described, I would say it is about coupling code together that has some cohesion, but breaks object orientation.
Example: calculation of a polygon's area. When you put the calculation for the square together with the calculation for the triangle, and only choose between them by the input parameter, then you have grouped two things logically by their outcome, without taking into account their real nature.
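A minimal sketch of that polygon example (with hypothetical names): the first function is logically cohesive, one body that merely switches on an operation flag, while the other two are functionally cohesive, each performing exactly one well-defined task:

```python
def area(shape, *dims):
    # logical cohesion: unrelated computations grouped by outcome,
    # selected by a flag from outside the module
    if shape == "square":
        (side,) = dims
        return side * side
    elif shape == "triangle":
        base, height = dims
        return 0.5 * base * height
    raise ValueError(f"unknown shape: {shape}")

def square_area(side):             # functional cohesion: one task
    return side * side

def triangle_area(base, height):   # functional cohesion: one task
    return 0.5 * base * height
```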
Recently I have been reading online about eXtreme programming and agile practices, and I wish to adopt them. However, most of my code is in PHP, in normal CRUD-type web applications. Moreover, I use a web framework, CodeIgniter, which abstracts a lot of the core features.
To sum up, how can I implement test-driven development in applications which are heavily dependent on databases and user input, particularly the test automation part?
I will now frame this question around my current project.
I am working on a college forum web application. It has the basic social network features and also includes an XMPP server in its stack. The following components constitute my application stack:
1. Apache web server with mod_php enabled.
2. PHP as the back-end language (CodeIgniter framework).
3. MySQL database.
4. Openfire server for XMPP.
5. HTML, CSS, jQuery and Strophe(for XMPP) on front end, and
6. A Python middleware for XMPP.
Currently I am working without any testing or build cycle. I use SVN as the version control system. And I wish to include TDD and CI into the development process. I already have some code in the project. What strategies and best practices should I adopt for implementing TDD and CI in this kind of project?
PS: We are a team of three people working on a personal project, so we don't have anyone else who influences the design (apart from the end users).
This is what I did to start implementing:
I refactored the login function, which took the username and password passed from the front end, applied sanity checks, checked for a valid username, then for a valid password. If both were valid, it set the session cookie and redirected the user to a page based on his role.
I broke the above function into three: one to take the username and password and check whether they are correct, another to set the cookies, and a last one that takes the values from the front end, applies sanity checks, calls the other two functions, and then redirects.
I could write a test for only the first function. Is that enough?
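Testing the credential check is a good start, and once the function is split this way the other pieces become cheap to test too. Here is a minimal sketch of the idea in Python rather than PHP/CodeIgniter; the names and the in-memory user store are purely illustrative stand-ins for the real framework and database:

```python
USERS = {"alice": "s3cret"}   # stand-in for the real user table

def credentials_valid(username, password):
    # pure logic: unit-testable without a browser, server or database
    return USERS.get(username) == password

def set_session_cookie(response, username):
    response["cookies"] = {"session": f"session-for-{username}"}

def handle_login(request, response):
    username = request.get("username", "").strip()   # sanity checks
    password = request.get("password", "")
    if not username or not password:
        return "login_page"
    if not credentials_valid(username, password):
        return "login_page"
    set_session_cookie(response, username)
    return "home_page"

# Minimal checks, runnable with plain asserts or any test framework:
assert credentials_valid("alice", "s3cret")
assert not credentials_valid("alice", "wrong")
assert handle_login({"username": "alice", "password": "s3cret"}, {}) == "home_page"
assert handle_login({"username": "", "password": "x"}, {}) == "login_page"
```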
-1: Since when are TDD practices heavily dependent on databases? I thought that any TDD implementation must decouple the application from its database. Otherwise you're writing integration tests. – Jim G. Dec 29 '12 at 23:32
@JimG. I would like to make myself more clear: I meant to get advice on implementing TDD for PHP-based web applications that are mainly driven by user input and a database, as opposed to applications that take some data and process it. E.g., how would I write a test for a module that implements authentication? – mlakhara Dec 29 '12 at 23:41
Very good question, +1 – Fergus Morrow Dec 30 '12 at 1:01
I think this question is now valid enough to be reopened. I can't vote for it, but those who can could take a look. – Patkos Csaba Dec 30 '12 at 12:38
You really aren't in an unusual situation. Lots of applications are mainly driven by user input and database, and they still use TDD. This is really quite common. Which might be part of why people don't know how they could go about answering your question. – psr Dec 31 '12 at 22:22
closed as not a real question by Jim G., Glenn Nelson, Dynamic, Yusubov, gnat Dec 30 '12 at 5:38
1 Answer
Accepted answer (2 votes)
PHPUnit will help with testing your application.
Usually, people write unit tests to test their business rules in isolation, i.e. the actual methods that contain the business logic. That would be the unitary part of your tests.
You can mock the database, or write functional or integration tests to exercise the features over a wider scope.
PHPUnit helps you with that as well.
I know this has been asked... but I can't find a clear answer. Many advanced programmers prefer text editors, but for me IDEs make things much faster. I've been using Eclipse for a year, and using a text editor is a pain. I know what to do, but I need to look up import statements and all. Also, I find that having to go to the terminal to compile and run is annoying. In what situations would a text editor be preferred or necessary?
But do you know any good text editors? Specifically, do you know Emacs? There are a lot of tasks that are easy in Emacs and extremely painful in Eclipse. But Eclipse wins for refactoring Java. – kevin cline Aug 1 '13 at 4:01
You're probably not using the full power of eclipse. You can make eclipse run external tools (such as terminal and compile commands) if you're not using one of its natively supported languages (e.g. Java). I use Eclipse for 3 different languages and don't have to leave it to do anything. – Deco Aug 1 '13 at 4:09
recommended reading: Gorilla vs Shark – gnat Aug 1 '13 at 4:48
@JaceBrowning Thats not really on-topic. As an aside the OP obviously didn't search very hard – Lego Stormtroopr Aug 1 '13 at 6:25
"Many advanced programmers" - for example? It might be an age thing. – Den Aug 1 '13 at 8:11
marked as duplicate by gnat, BЈовић, Kilian Foth, Ozz, m3th0dman Aug 1 '13 at 8:06
3 Answers
Accepted answer (1 vote)
It's almost always a matter of personal preference; use whatever makes you feel the most productive. However, if you're using an IDE it's still important to understand what commands it runs behind the scenes for build/test/etc.
Some advanced programmers prefer shell-based editors (vim, emacs, etc.) because their operations are fully mapped to the keyboard, and not having to touch the mouse makes their typing/navigation faster. Shell-based editors also integrate well with other shell utilities.
You may also find yourself in situations where the only available editor is shell-based, such as on an embedded system or headless server.
IDEs and text editors are fundamentally different tools, each with their own strengths and weaknesses. Some that immediately come to mind are:
IDE
• Strengths
• Integrated testing
• Compilation
• Breakpoints/stepping through code
• Integration with other services (database views), automated class diagrams
• Weaknesses
• Large memory footprint
• Cost
Text Editor
• Strengths
• Fast
• Easy to extend (macros, plugins)
• Text-editing functions (e.g., Sublime Text 2's endless keyboard shortcuts)
• Weaknesses
• Need to use another service to compile
• Low support for code completion (IntelliSense-style features)
Because of the differences, one is usually the right choice over the other for a certain project or language. For example: for an ASP.NET web application, Visual Studio is probably the only choice in terms of code generation, debugging, connectivity with a SQL Server database, and version control. In a contrasting example: parsing huge log files seems more suited to a language like Perl, which can easily be written in a tool like Vim.
I'm sure there are many other reasons, but from experience using both, these seemed like some obvious reasons.
by this definition I think you could say that emacs is an IDE – jk. Aug 1 '13 at 15:28
@MichaelJasper..Thank You...so i will only know which one to use depending on the project? – XcutionX Aug 1 '13 at 17:36
@jk. No no no, emacs is an OS :) I'm typing this from its browser – jozefg Aug 1 '13 at 19:23
@jozefg: "Don't get me wrong: Emacs is a great operating system – it lacks a good editor, though." - Thomer M. Gil :) – TommyA Aug 1 '13 at 21:14
Truth be told, I use both at once (NetBeans as IDE and jEdit as text editor).
For editing Java files, an IDE has advantages such as autocomplete and automatic compiling.
For editing auxiliary files, a text editor with rich editing features can have advantages.
I think there is no clear answer in favor of one or the other.
Programming tutorials often recommend using a text editor in the beginning, so that the beginner can learn the workflow behind the IDE and not get confused by the IDE's features.
I already found a similar question here on SO, but almost all of the answers were more philosophical, rather than practical.
I'd like some PRACTICAL ideas about how to make my programming course more interesting. It doesn't matter how much effort it takes.
I thought about asking my students to pick a topic in the beginning of the course and to work on it as some kind of a real, small, startup project that they could financially exploit once it's finished. But I'm afraid that most of them will not take the project to its completion, and that it could become very boring.
Also I thought about involving them in Torcs, but I'm afraid most of them wouldn't be up to the task. Btw, Torcs is a car racing simulation with an API for developers, so they can develop their own AI for the driver and then race their car against other programmers' AIs.
I'm not asking here for problem examples, as I asked a separate question about that. I need ideas about making my lectures more interesting and fun.
code.google.com/p/bwapi can be a bit more interesting and diverse in terms of techniques than Torcs. – SK-logic Mar 9 '11 at 13:16
Just work on your sense of humor, that's all. – Job Mar 29 '11 at 14:52
I agree - the amount of work needed to bring an application from "pass class" quality to "sellable" is IMHO too large. – user1249 Mar 29 '11 at 14:52
How old are your students? – Ubiquité Mar 29 '11 at 17:12
closed as primarily opinion-based by gnat, GlenH7, Robert Harvey, MichaelT, Bart van Ingen Schenau Mar 7 at 8:15
9 Answers
I had a very nice professor. He never presented slides or anything like that: he started the lecture with a question (like: "How can I make an algorithm to define which coins I should use to give your change back?") and led the class to an algorithm. As we advanced, he explained the concepts we were "meeting" along our path.
But one thing is VERY important: he wasn't only explaining, he was doing it with passion: he was running around, making jokes, "suffering", jumping. You could see that he liked what he was doing and that he was challenging you to solve the problem he was proposing. You felt: 1- if you don't do it, he won't give you the answer. 2- If the class got stuck, he would help. 3- He was "really there" working with you; he was also "struggling" with the problem to create our own solution.
This was a very hard course, but I couldn't miss any of the lectures. If I could, I would still attend :)
That sounds like an awesome lecturer. – JBRWilkinson Mar 10 '11 at 11:29
He really was :) – Oscar Mar 11 '11 at 7:42
That sounds like my algorithms professor. Probably my most difficult course, but it is also the most interesting lectures. – Niklas H Mar 31 '11 at 23:11
That is exactly the subject he presented :). When I give lectures I try to use the same approach. Even if it is a programming language course or something like that, I often ask questions like: "How would you do it? Why can't it be that way? What are the advantages/disadvantages?" and so on :) – Oscar Apr 7 '11 at 21:58
Have your students team up into pairs for a project and then have other teams do code reviews of their completed projects. This will simulate a real work environment and probably lead to discussion/debate about each team's chosen implementation of the project. I would say to be sure to pick the teams yourself, so you can pair a strong student with a weak student, and then choose how the teams will swap so that a potentially weaker team doesn't get too stomped by a stronger team. Either way, good debate is always fun. Also, autonomy is always more fun than a rigidly defined project, so I'd try to leave the parameters that define the project as open as you can, so that students have to be creative and have some investment in their project.
Just a thought.
If you're just looking to spice up your lectures, then check out http://thedailywtf.com/ for examples of bad code and tear some of those code examples to shreds in class.
I love the dailyWTF idea. – HLGEM Mar 29 '11 at 15:05
One of the most boring (and dangerous - you end up with students who metaphorically use a hammer when a screwdriver is the right tool) things about current programming courses is that the techniques are taught without the context of "When would I need to use this?"
So here is what I did when I taught, I made a list of all the techniques I wanted to cover in the course and then determined where I would actually use them in real life to solve an actual programming problem. Then I used those problems in my lectures.
If possible, keep at least the most basic stuff in one project example, so the students can see how these things relate to what was done before. Some of the advanced techniques may not easily fit into your basic project, so you can use a different one for those, but it is critical to ensure they know when to use a technique and, perhaps more importantly, when not to use it.
I virtually never lectured for more than 10-15 minutes (just to introduce radically new concepts) but used the Socratic method to get the students to find the answers to my questions in the textbook and used lots of exercises to re-enforce the learning.
I also gave a project in each course and left the last 4 or 5 sessions for working solely on the project (so I could see them work on it in front of me, which cut down amazingly on the cheating, and I could provide advice in real time, not just at grading time). Sneakily (is that a word?), I did not tell them what they needed to have in the project to get an A, but what they needed to do to get a B. Then I said, "You need to do something beyond this to get an A." The students went way beyond my wildest expectations when I did this. I would require students to do their projects in source control, and having the second-to-last session as a code review (the last session for fixing what the code review found and final tweaking) appeals to me, although I didn't do it back then.
It is not really necessary to be a "funny professor" (but of course it won't hurt). What is more important is to get your students involved.
So build the course around a realistic and cool application.
Emphasize the way that the stuff you teach helps create tangible and cool things. Also, don't be too democratic in picking the application. Normally students (especially at schools) are extremely passive at the beginning of any course.
As a student who hopes to someday teach, here is my perspective:
In my introductory programming course, we started out with pseudocode, then graduated to Alice programs (the environment is a pain sometimes, but the resultant "programs" were often quite amusing and fun), and as our final project of the semester, we were split into teams and tasked with programming some Lego Mindstorms robots for a simple race. Pretty awesome introduction to programming, IMO.
In Java 1 & 2 (same professor), we were usually rotated in 2-4 person teams for programming assignments out of our textbook to learn the material. We would always do a "presentation" of our code -- and we were required to "speak the language", using all the technical terms like method, constructor, accessor/mutator, etc. -- so we could see what our peers had done and learn from one another's code. The Java 2 course wrapped up with a larger project (a golf handicap tracking application) which the professor rounded up several industry professionals to judge (which helped us make contacts at local tech companies).
I hope to teach someday, and I do plan to implement some of the above methods. However, I think it would be pretty awesome to secretly (or not) build a game, web app, or other fun, interesting, and relevant application during the course of a semester by doling out potentially "boring" assignments that teach the students the fundamentals but apply to the overall project, and then spring the completed program on them at the end of the semester for a big "Surprise! -- You built this!" and show them where their code fit in. I think it would be a good "real world" experience for the students. I haven't quite worked out the details of how it could be pulled off, though. :)
I think it's imperative to inspire the students by keeping things interesting and relevant to what's trending in the field while still building the foundation they will need to be good programmers. You may want to check out MIT's online course material from years past to get some ideas -- they do a good job of keeping it interesting and relevant (e.g., one of their software engineering courses gives the choice of building a game or an RSS reader and encourages the use of 3rd-party libraries and APIs).
I hope this helps.
add comment
Presentations that had some audience participation can be useful. Are there ways to make the lectures involve asking the class a question and seeing if someone knows the answer? Granted this presumes a small class. Another thought would be to split up the class and have groups work on little assignments in class but this too carries the challenge of not being practical in a class of 200. Another line here is to try to accommodate different learning styles with the way the lecture is done. Is the material presented in text, picture, or some other format?
add comment
First: Motivation. Pose a real (and solvable) problem, maybe in the form of a question: "How many times will I have to attend this course until I get my grades?".
Work with your students to get from this fuzzy requirement to a finished product, and explain along the way ("We have now found out what we need to do, and we have defined the criteria that need to be met. This is called requirements engineering, and if microsoft asked us to write this software for them, they would compare our result to the specification.", or "ok, now we know the formula to calculate the number of mondays up to a specific date. We have an algorithm. Let's translate this from words to Java.").
Do not be tempted to explain something out of context first and expect them to understand, remember and be able to use it later. Always present new knowledge as a tool to solve a problem at hand.
Implementation: Make sure there are constant small successes, and make sure these successes are visual and concrete. Do not make them program command-line software; in their mind, a computer program has windows (or it's a game).
I think the hardest part is to find a project that is both easy enough yet still not as useless and boring as "Hello, world!". Asking the students to find a project that meets these criteria is asking too much of them. The freedom of total choice can be a burden.
add comment
Your students will probably have the best ideas for different types of programs that they would find fun to write. You could ask them to make you a list of software programs that they have wanted to build, thought were really cool to use, or needed at some point in their lives.
The best class that I ever took was one where the instructor brought in a user and questioned him. Then we each had to write a requirements document for the project that he described. The instructor also wrote one up for the project. Her requirements document was then the basic class syllabus for the semester. Every piece of development that we did after that was related to bringing the software to a more advanced stage of development. She had us also doing some basic technical writing on the project where we had to write up a help file for it. All assignments had to be written as business documents or technical documents.
She used the same format for her beginning level classes as well as her advanced software engineering class. For the basic class she was more lenient about quality of the work provided at the different stages. In her advanced class she split us into groups of 4 or 5 and we had to design and develop the software from beginning to end using more advanced skills.
add comment
I would start with asking them for an example of a program \ exercise they believe they did well, or one in which they believe they had problems (if it's a class, and they have several different solutions \ approaches for the same problem it's even better) - and use this as a case study for the subjects you want to teach.
You can also give them such an exercise in the beginning of the class, and use it for a case study for the rest of the course.
Step that initiates marching in marching band. The forward march is performed by forcing your left foot forward with your calf muscles, with your heel on the ground and your toes in the air at about a 60-degree angle. The backward march is performed by rising onto your toes as high as possible while balance is kept with your knees bent. From there, the left foot is moved backward while the toes stay firmly on the ground. This should make a "swish" or "swoosh" sound. The vocal command for a plant step is "Hup hup ready", answered by yelling "Plant" or "Up and one".
The drum major gave the count-off and I performed the plant step.
by Koren Reynon, 20 November 2009
Neuron. Author manuscript; available in PMC Jun 21, 2011.
PMCID: PMC3119531
The stoichiometry of AMPA receptors and TARPs varies by neuronal cell type
Yun Shi, Wei Lu, Aaron D. Milstein, and Roger A. Nicoll*
Departments of Cellular and Molecular Pharmacology and Physiology, University of California San Francisco, California 94143, USA
* Address all correspondence to: Roger A. Nicoll, Department of Cellular and Molecular Pharmacology, University of California San Francisco, CA 94143, Phone: (415) 476-2018, nicoll/at/
Fast synaptic transmission in the brain is mediated by activation of AMPA-type glutamate receptors (AMPARs). AMPARs are comprised of four pore forming subunits (GluAs) as well as auxiliary subunits referred to as transmembrane AMPA receptor regulatory proteins (TARPs). TARPs control the trafficking and gating of AMPARs. However, the number of TARP molecules that assemble within an individual AMPAR complex is unknown. Here, we covalently link GluAs to TARPs to investigate the properties of TARP/AMPAR complexes with known stoichiometry in HEK cells. We find that AMPARs are functional when associated with either four, two, or no TARPs, and that the efficacy of the partial agonist kainate varies across these conditions, providing a sensitive assay for TARP/AMPAR stoichiometry. By comparing these results with data obtained from hippocampal neurons, we show that native AMPARs are normally associated with multiple TARP molecules, and that native TARP/AMPAR stoichiometry varies with the expression level of endogenous and exogenous TARPs. Interestingly, AMPARs in hippocampal pyramidal cells contain more TARP molecules than those in dentate gyrus granule cells, suggesting a cell type-specific regulatory mechanism for TARP/AMPAR stoichiometry.
Glutamate, the main fast excitatory neurotransmitter in the CNS, acts primarily on two types of ionotropic receptors: NMDA receptors and AMPA receptors (AMPARs). Activation of AMPARs by synaptic glutamate results in fast moment-to-moment depolarization of postsynaptic neurons, a process crucial for information propagation in the CNS. AMPAR subunits form ion channels by assembling as heterotetramers of GluA1-GluA4 (GluR1-GluR4 or GluR-A – GluR-D) (Collingridge et al., 2008; Dingledine et al., 1999; Hollmann and Heinemann, 1994; Mayer and Armstrong, 2004; Seeburg, 1993). In addition to these principal subunits, native AMPARs also associate with auxiliary subunits known as TARPs (Chen et al., 2000; Nicoll et al., 2006; Osten and Stern-Bach, 2006; Ziff, 2007). There are four classical TARPs in the CNS: γ-2, γ-3, γ-4, and γ-8 (Tomita et al., 2003). These molecules associate with all four GluAs and regulate their trafficking, gating, and pharmacology in a TARP subtype-specific manner (Milstein and Nicoll, 2008; Nicoll et al., 2006; Osten and Stern-Bach, 2006; Ziff, 2007). While most neurons express more than one TARP subtype, cerebellar granule neurons are unique in that they only express γ-2. Accordingly, cultured granule neurons from the mutant mouse stargazer, which lack functional γ-2 protein, have been used as a model system to understand TARP function (Chen et al., 2000; Cho et al., 2007; Milstein et al., 2007). Loss of γ-2 in these cells abolishes surface and synaptic AMPAR expression (Chen et al., 2000; Hashimoto et al., 1999). Similar phenotypes result from the loss of TARP family members in other neuronal cell types as well (Menuz et al., 2008; Rouach et al., 2005). Therefore, TARPs are believed to be necessary for AMPAR membrane trafficking and synaptic targeting throughout the brain.
An important unanswered question in the field concerns the stoichiometry of the association between TARPs and AMPARs. We previously demonstrated that the decay kinetics of synaptic AMPARs varies with the expression level of TARP γ-2 in cerebellar granule neurons, suggesting a variable TARP/AMPAR stoichiometry (Milstein et al., 2007). However, it was not possible to determine the exact stoichiometry from these data; the result could have been produced by mixed populations of AMPARs with anywhere from zero to four associated TARPs. To address whether a variable number of TARPs can assemble with AMPARs, one needs a technique that allows the number of TARPs on individual AMPARs to be counted unambiguously. To this end we have constructed fusion proteins between GluA subunits and TARPs. By co-expressing in HEK cells combinations of GluA1 and GluA2 with and without fused TARPs, we have succeeded in titrating the number of TARPs bound to AMPARs, producing functional AMPAR complexes with zero, two, and four bound TARPs. Importantly, the relative efficacy of the partial agonist kainate, which is known to increase with TARP association (Tomita et al., 2005; Turetsky et al., 2005), proved to be a particularly sensitive measure of TARP/AMPAR stoichiometry. When compared with these expression data, the efficacy of kainate on endogenous AMPARs from hippocampal neurons is consistent with the binding of multiple TARPs per native AMPAR. Furthermore, relative differences in TARP expression levels produce different TARP/AMPAR stoichiometries in two different neuronal cell types, CA1 pyramidal neurons and dentate gyrus granule neurons. Finally, we found that, in addition to kainate efficacy, deactivation and desensitization kinetics of AMPARs vary with TARP expression levels in CA1 pyramidal neurons. This variation in stoichiometry broadens the ways in which TARPs modulate AMPAR function in the brain.
1. Kainate efficacy is enhanced by fusion or co-expression of GluA1 and γ-2
To fix the number of TARPs associated with individual AMPARs, we fused the full-length GluA1 to the N-terminus of TARP γ-2 with a short linker sequence (Figure 1A1). If these receptors assemble and form functional channels expressed on the cell surface, they will contain four TARPs. When expressed in HEK cells, outside-out patches containing the fusion construct A1γ-2 produced sizeable currents in response to application of 1 mM glutamate (Figure 1A2), demonstrating that these channels are properly folded, traffic to the plasma membrane, and appropriately gate in response to glutamate. In order to determine whether the fused TARP molecules are functionally associated with the AMPAR channel, we took advantage of the fact that TARP association increases the efficacy of the partial agonist kainate (Tomita et al., 2005; Turetsky et al., 2005). Indeed, A1γ-2 (Figures 1B and 1H) responded to 1 mM kainate application with substantially larger currents than those produced by GluA1 alone (Figures 1C and 1H), demonstrating that the fused TARP molecules interact with the AMPAR channel and modulate the efficacy of kainate. In fact, A1γ-2 produced a similar increase in the kainate to glutamate ratio (IKA/IGlu), a measure of kainate efficacy, as co-expression of non-fused GluA1 with TARP γ-2 (Figures 1D and 1H). We also measured the kainate efficacy of channels produced by fusion of GluA1 with the other TARP family members, γ-3, γ-4, and γ-8. Each of these fusion constructs, A1γ-3 (Figure 1E), A1γ-4 (Figure 1F), and A1γ-8 (Figure 1G), produced indistinguishable kainate efficacies (Figure 1H). While some previous studies co-expressing GluAs and TARPs showed differences in kainate efficacy for different TARPs (Cho et al., 2007; Korber et al., 2007; Kott et al., 2007; Tomita et al., 2005), no difference was seen in kainate efficacy across TARPs expressed in stargazer cerebellar granule neurons (Milstein et al., 2007). Furthermore, we found in this set of experiments that the IKA/IGlu observed in HEK cells was sensitive to the ratio of co-transfected GluA and TARP cDNA, with saturating kainate efficacy requiring a GluA:TARP ratio of 1:5 for TARP γ-8 (Figure S1).
Figure 1. Kainate efficacy is enhanced by co-expression or fusion of TARPs with GluA1. A1. Construction of GluA-TARP fusion proteins: the full-length GluA was fused to the N-terminus of TARPs with a short segment of linker sequence. A2. Glutamate (Glu, 1 mM) induced (more ...)
2. Kainate efficacy co-varies with TARP/AMPAR stoichiometry
The observation that fusion of AMPARs and TARPs produces an enhanced kainate efficacy comparable to co-expression of the non-fused proteins raises the possibility that the channels formed in the latter condition normally contain four TARP molecules. However, it is also possible that fewer than four TARPs are sufficient to saturate the effect of TARPs on kainate efficacy. To address this possibility, we devised a method to measure the properties of AMPARs that contain less than four TARP molecules. While AMPARs that do not contain the GluA2 subunit are readily blocked by the polyamine toxin philanthotoxin-433 (PhTx), those that do contain GluA2 are not. We reasoned that co-expression of A1γ-2 with non-fused GluA2 would produce two populations of channels – those containing only GluA1 with four TARPs, and those incorporating GluA2 and therefore containing fewer than four TARPs. By including a concentration of PhTx (600 nM) that blocks essentially all A1γ-2 receptors (Figure S2), we could block any four TARP channels and measure the kainate efficacy of the isolated population of GluA2-containing AMPARs with fewer than four associated TARPs. Co-expression of A1γ-2 with GluA2 (Figure 2C), as well as the converse, GluA1+A2γ-2 (Figure 2D), produced kainate efficacies close to halfway between that of GluA1+GluA2, which contains no TARPs (Figure 2A) and A1γ-2+A2γ-2, which contains four TARPs (Figure 2B). This establishes unambiguously that AMPAR channels can be formed that contain fewer than four TARP molecules, and that kainate efficacy co-varies with TARP/AMPAR stoichiometry. We obtained similar results with fusion constructs between GluAs and TARP γ-8, with four TARP channels producing maximal kainate efficacy (Figure 2F), and channels associated with less than four TARPs producing roughly half the maximal kainate efficacy (Figure 2G and H). In summary, functional AMPARs can be produced that contain zero or four TARP molecules and, in addition, receptors can form with an intermediate stoichiometry (Figure 2I). This establishes that kainate efficacy can be used to distinguish between AMPARs with variable TARP/AMPAR stoichiometries.
Figure 2. Glutamate- and kainate-induced currents from GluA1/A2 receptors expressed in HEK293T cells. A–H. Examples of currents in outside-out patches; the cDNA of GluA1, with or without linked TARPs, was transfected in excess of GluA2 (3:2). Scale bar: 2 s, 100 pA. (more ...)
What is the exact stoichiometry of the populations containing less than four TARPs? The finding that these heteromers formed by co-expression of GluAs with and without tethered TARPs produce kainate efficacies intermediate between zero-TARP and four-TARP channels suggests that they may contain two TARPs. If there is an absolute requirement that heteromeric AMPARs contain two GluA1s and two GluA2s when they are co-expressed in vitro (Mansour et al., 2001), then all of the heteromers would contain precisely two TARPs. If, on the other hand, GluA1 and GluA2 assemble randomly (Washburn et al., 1997), then co-expression of equal amounts of A1γ-2 and GluA2 would produce channels that predominantly contain two GluA2 subunits, but channels with other stoichiometries would also be present. To distinguish between these possibilities, we varied the ratio of co-expressed A1γ-2 and GluA2 cDNA in order to bias the assembly of AMPARs towards channels that contain either one or three GluA2s, and therefore three or one TARPs, respectively. To verify that altering the ratio of co-transfected cDNAs actually results in surface expression of AMPARs with the expected subunit compositions, we measured the percent block by PhTx of glutamate-evoked currents in outside-out patches. When A1γ-2 was expressed in 20-fold excess of GluA2, PhTx blocked ~90%, while the converse expression ratio produced negligible block by PhTx, as predicted by either model of assembly (Figure S3A).
Having confirmed that our co-transfection conditions indeed vary the subunit composition of surface expressed AMPARs, we next isolated the GluA2-containing, heteromeric AMPAR populations with PhTx and determined their kainate efficacies. When expressing 20-fold more A1γ-2 than GluA2, random assembly would produce heteromeric channels that contain at most one GluA2 subunit, and therefore three TARPs, while assembly that prefers heterodimers would produce channels that contain predominantly two GluA2 subunits, and therefore two TARPs (Figure S3B). Consistent with the latter model, we found that the kainate efficacy for this condition was identical to that obtained with a transfection ratio of 3:2 (Figure 2J). Similar results were obtained in the converse experiment expressing 20-fold more GluA2 than A1γ-2 (Figure 2J). In this case, channels containing four GluA2s and zero TARPs produce negligible currents due to deficits in trafficking and low conductance (Burnashev et al., 1992; Hollmann and Heinemann, 1994). Indeed, in our conditions homomeric GluA2 receptors generated essentially no current. The slight decrease in kainate efficacy observed for this condition is intermediate between the values predicted by the two assembly models (Figure S3B). This suggests that under these extreme conditions, when GluA2 expression greatly exceeds that of GluA1, it is possible for some channels to form that contain three GluA2 subunits, but assembly is still strongly biased towards formation of channels containing two GluA2 subunits. Comparison of the experimental data to values predicted by various assembly models further rules out the potential complication that kainate efficacy could be saturated for AMPARs containing fewer than four bound TARPs (Figure S3B). These experiments strongly suggest that, even under conditions that should favor channels with assembly ratios of 1:3 or 3:1, GluA1 and GluA2 preferentially assemble at a ratio of 2:2, and that the intermediate kainate efficacy observed when co-expressing GluAs with and without fused TARPs reflects receptors that predominantly contain two TARPs.
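To make the logic behind such predictions explicit, here is a reader's sketch of the kind of calculation that underlies Figure S3B; it is not the authors' exact procedure, and the symbols r_k are placeholders for the measured single-stoichiometry ratios. Under fully random assembly from a pool in which a fraction p of subunits carries a tethered TARP, the number k of TARPs in a tetramer is binomially distributed, and the predicted kainate efficacy is the mixture average:

    P(k) = \binom{4}{k} \, p^{k} (1-p)^{4-k}, \qquad
    E\left[ \frac{I_{KA}}{I_{Glu}} \right] = \sum_{k=0}^{4} P(k) \, r_{k}

For the PhTx-isolated populations, the sum is restricted to assemblies containing at least one GluA2 and renormalized. A strict 2:2 heterodimer preference instead collapses the distribution onto k = 2, which is why the two models predict different kainate efficacies at extreme transfection ratios.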
For experiments co-expressing GluA1+A2γ-2, we expressed GluA1 in excess (3:2) to minimize the chance of forming A2γ-2 homomers, since there is no available pharmacology to block these receptors. Under these conditions, 600 nM PhTx blocked 36±8.3% (n = 7) of the glutamate currents (Figure S2D), indicating the presence of GluA1 homomers in the plasma membrane. In the presence of PhTx, the KA/Glu ratio was 0.29±0.03 (n = 9, Figure 2D, H), again suggesting the remaining heteromers contained two TARPs.
Given that co-expression of GluA1 and GluA2 results in preferential formation of heterodimers, we wondered whether, by co-expressing fused and non-fused GluA2, we could force the assembly of channels that contain only one TARP. By co-expressing GluA2 in excess of A2γ-2, we predicted that random assembly would prevail, producing either GluA2 homomers containing zero TARPs, which neither traffic nor conduct, or channels that incorporated one A2γ-2 and could be assayed for kainate efficacy. First we confirmed that expression of A2γ-2 on its own results in a high KA/Glu ratio similar to A1γ-2 (0.52±0.04, n = 7), consistent with a four-TARP receptor. Second, we confirmed that expression of GluA2 alone did not produce currents in our conditions. Surprisingly, when A2γ-2 and GluA2 were expressed at a 1:20 ratio, the resulting KA/Glu ratio (0.38±0.02, n = 7) was substantially larger than would be expected for a one-TARP receptor. This suggests that either GluA2 homomers containing one TARP are not assembled, or they are unable to traffic or conduct, similar to GluA2 homomers expressed in the absence of TARPs.
3. TARP stoichiometry differs between two distinct types of hippocampal neurons
We sought to determine if the above results from HEK cells could be compared with data from native AMPARs in neurons in order to read out native TARP/AMPAR stoichiometry. First we measured the kainate efficacy of AMPARs in outside-out patches from mouse CA1 pyramidal neurons. These AMPARs are predominantly (95%) GluA1/2 heteromers (Lu et al., 2009; Zamanillo et al., 1999) associated with TARP γ-8 (Rouach et al., 2005; Tomita et al., 2003), and therefore can be directly compared to the data we obtained expressing these GluAs with TARP γ-8. These native AMPARs displayed a high kainate efficacy consistent with incorporation of four TARP molecules per AMPAR complex (Figures 3A1 and 3C). However, while we examined only the flip splice variant of AMPARs in HEK cells, CA1 pyramidal neurons express both flip and flop variants of AMPARs (Monyer et al., 1991). To determine if this might affect the comparison, we repeated the experiments in CA3 pyramidal neurons, which have been shown to express predominantly flip variants (Monyer et al., 1991). These experiments yielded a KA/Glu ratio identical to that obtained in CA1 pyramidal neurons (0.47±0.04, n = 11), indicating that even if flop receptors exist in CA1 pyramidal neurons, they do not substantially alter kainate efficacy under our conditions. Furthermore, inclusion of the compound PEPA, a drug that blocks the desensitization of flop AMPARs (Sekiguchi et al., 1998), especially when associated with TARPs (Tomita et al., 2006), had no effect on glutamate responses in CA1 pyramidal neurons (Figure S5).
Figure 3. Kainate efficacy of extrasynaptic AMPARs in hippocampal pyramidal neurons and dentate granule cells. A. Hippocampal CA1 pyramidal cells. A1. In WT animals, kainate-induced currents were about half of those induced by glutamate. A2. In TARP γ-8 heterozygotes, (more ...)
We further wondered if TARP/AMPAR stoichiometry could be altered by genetic depletion or exogenous over-expression of TARPs. Indeed, outside-out patches from TARP γ-8 (+/−) (Figures 3A2 and 3C) and (−/−) (Figures 3A3 and 3C) mice revealed reduced kainate efficacy. However, over-expression of either TARP γ-8 (Figures 3A4 and 3C) or TARP γ-2 (IKA/IGlu = 0.53±0.03; n = 4, data not shown) did not substantially increase kainate efficacy, supporting the notion that native AMPARs in CA1 pyramidal neurons are normally saturated by TARP expression, and therefore contain four TARPs per AMPAR complex. The finding that over-expression of TARP γ-2 does not alter kainate efficacy differs from the results of Turetsky et al. (2005). However, these authors specifically used immature neurons in which TARP expression levels are low, likely containing AMPARs with reduced stoichiometry.
These data are in striking contrast to our previous findings from cerebellar granule neurons, where the apparent TARP/AMPAR stoichiometry was not saturated in wild-type neurons (Milstein et al., 2007), and could be increased by over-expression of TARP γ-2. This raises the possibility that TARP/AMPAR stoichiometry is a parameter that differs across neuronal cell types. We tested this possibility by measuring kainate efficacy in another neuronal population, hippocampal dentate gyrus granule cells. In situ data suggest that these neurons express lower levels of TARP γ-8 than CA1 or CA3 pyramidal neurons (Lein et al., 2007), and they might therefore be expected to display reduced TARP/AMPAR stoichiometry. Indeed, wild-type granule cells exhibited a lower kainate efficacy than that observed in CA1 pyramidal cells (Figures 3B1 and 3C). Furthermore, exogenous over-expression of TARP γ-8 substantially increased kainate efficacy in these cells (Figures 3B4 and 3C), confirming that AMPARs are not saturated with TARPs in this cell type. Importantly, the kainate efficacy observed when TARP γ-8 was overexpressed was identical to that recorded in CA1 pyramidal neurons, strongly implying that the pharmacological properties of AMPARs in granule cells are the same as in CA1 pyramidal neurons. Similar to pyramidal neurons, genetic depletion of TARP γ-8 further reduced kainate efficacy, with γ-8 (+/−) (Figures 3B2 and 3C) and γ-8 (−/−) (Figures 3B3 and 3C) mice exhibiting reduced kainate efficacy relative to wild-type.
In this study we chose to use kainate efficacy as an assay to probe the influence of TARP association on AMPAR function, because of its ease and high sensitivity. However, more physiologically relevant parameters of AMPAR function are the kinetics of channel deactivation and desensitization, which are known to be influenced by TARP association (Priel et al., 2005; Tomita et al., 2005; Turetsky et al., 2005). We therefore conducted experiments in CA1 pyramidal neurons from TARP γ-8 mutant mice using fast application of glutamate to outside-out patches. The time course of deactivation in control mice was similar to that previously reported for rat neurons (Spruston et al., 1995) (Figure 3D). However, in TARP γ-8 (−/−) mice, AMPAR deactivation was significantly faster, and it was intermediate in heterozygous TARP γ-8 (+/−) mice (Figure 3D). The time course of AMPAR desensitization similarly varied with TARP expression level (Table S1).
Since the discovery of TARPs a decade ago, numerous studies have established that these auxiliary subunits control the trafficking and gating of AMPARs, and that the magnitude of this effect depends on the specific TARP subtype (Cho et al., 2007; Kott et al., 2007; Milstein et al., 2007). Might the number of TARP molecules that associate with individual AMPAR channels also regulate gating? Although previous data raise this possibility (Milstein et al., 2007), resolving TARP/AMPAR stoichiometry has been surprisingly difficult. We have genetically linked TARPs to AMPARs and have thus been able to identify the number of TARPs associated with native AMPARs.
By expressing AMPAR subunits covalently linked to TARPs in HEK cells we have compared the properties of AMPARs containing zero or four TARPs and receptors with an intermediate stoichiometry. We show that kainate efficacy, previously shown to increase with TARP association (Tomita et al., 2005; Turetsky et al., 2005), varies with TARP/AMPAR stoichiometry and can be used to distinguish between AMPARs that are saturated or unsaturated by TARP expression. The functional effects of TARPs γ-2 and γ-8 on fused GluA1 and GluA2 were similar, and both varied with TARP/AMPAR stoichiometry. These results not only establish that AMPARs can assemble with different numbers of TARPs, but also provide valuable quantitative data that can be compared to native AMPARs in neurons. Although not addressed in this study, it would not be surprising if the magnitude of other modulatory effects of TARPs, such as the sensitivity of Ca2+-permeable AMPARs to block by intracellular polyamines (Soto et al., 2007), and the trafficking of AMPARs to synapses, might be similarly regulated by stoichiometry.
When we expressed heteromeric receptors in which only one of the subunits was tethered to a TARP we consistently obtained receptors with a kainate efficacy that was roughly intermediate between that observed with zero-TARP and four TARP receptors. This finding is consistent with the model in which AMPARs are assembled with a stoichiometry of two GluA1 and two GluA2 subunits (Mansour et al., 2001). If, however, subunits can randomly assemble with a variable subunit stoichiometry (Washburn et al., 1997), then the intermediate kainate efficacy could represent a mixed population of heteromers with receptors containing two TARPs predominating. Another possible interpretation of our results is that the kainate efficacy actually saturates when less than four TARPs are bound to the receptor. Although this would not alter the conclusion that stoichiometry can vary, it would question the quantitative conclusions. However, a simple set of calculations that predicts the kainate efficacies of AMPARs that are produced by varying models of channel assembly (Figure S3B) supports the scenario in which heterodimer formation is preferred and the ratio saturates at four TARPs.
Remarkably, we find that AMPARs in CA1 pyramidal neurons are normally associated with four molecules of TARP γ-8. When the expression level of γ-8 is reduced, the number of TARPs associated with each AMPAR is reduced. At this point we can only speculate as to what the precise TARP/AMPAR stoichiometry is in γ-8 (−/−) and γ-8 (+/−) mice. If we assume that at least one TARP must be associated with AMPARs in order for them to be efficiently trafficked to the cell surface, then we would expect all surface receptors to contain at least one TARP; indeed, the observed kainate efficacy of γ-8 (−/−) pyramidal neurons was between that expected for zero and two TARPs, indicating that perhaps it is just one TARP. In this scenario, TARPs γ-2 and γ-3, which are also expressed in pyramidal neurons (Lein et al., 2007; Tomita et al., 2003), are associated with the remaining AMPARs in γ-8 (−/−) neurons. In addition, we demonstrate that in CA1 neurons the time course of AMPAR deactivation and desensitization varies with γ-8 expression level, consistent with a change in TARP/AMPAR stoichiometry.
Furthermore, we find that TARP/AMPAR stoichiometry differs between distinct neuronal cell types. Unlike CA1 pyramidal neurons, hippocampal dentate gyrus granule cells are not saturated with four TARPs, but instead appear to express a mixed population of AMPARs containing fewer than four TARPs. This is also consistent with the data we previously acquired from cerebellar granule neurons (Milstein et al., 2007), and reinforces the notion that differences in the expression level of different TARP subtypes diversifies the functional properties of neuronal AMPARs, not only through subtype-specific regulation, but also through variable TARP/AMPAR stoichiometries.
1. cDNA and constructs
cDNAs of γ-2, γ-3, γ-4, and γ-8 and flip-type GluA1 and GluA2 were used in the current study. The cDNAs were constructed into two vectors: pIRES2-EGFP and pIRES2-dsRed (Clontech). GluA1 and all TARPs were subcloned with EcoRI and SalI sites (Milstein et al., 2007). GluA2 was inserted with XhoI and SalI sites. A Kozak sequence was engineered in front of the start codon. To construct A1γ-2, the following primers were used: forward primer, 5′-GATCTCGAGCTCGCCACCATGCCGTAC-3′; reverse primer, 5′-CCCGAATTCCTGTTGCTGTTGCTGTTGCTGTTGCTGTTGCAATCCTGTGGCTCC-3′. Standard PCR was carried out with the GluA1 template, and the product was inserted into γ-2-containing pIRES2 vectors with XhoI and EcoRI sites, thus fusing GluA1 to the N-terminus of γ-2 with a short linker sequence (Q)10EFAT. A1γ-8, A1γ-3, and A1γ-4 were constructed similarly. The primers used in constructing A2γ-2 and A2γ-8 were as follows: forward primer, 5′-GGACGCTAGCGCCACCATGCAAAAGATTATGC-3′; reverse primer, 5′-GAGCTCGAGGACTGTTGCTGTTGCTGTTGAATTTTAACACTCTCGATGCC-3′. NheI and XhoI were used to insert the PCR product. The linker sequence used between GluA2 and TARPs was (Q)6(S)5FEFAT. The constructs were verified by sequencing (Elim). EGFP and/or dsRed fluorescence allowed visualization of co-expression. Indeed, co-expression was observed in more than 90% of the positively transfected cells.
2. HEK293T cell culture and transfection
HEK293T cells were used for expression of GluAs, TARPs, and fusion constructs. HEK293T cells were cultured in a 37°C incubator supplied with 5% CO2 (Milstein et al., 2007). Transfection was performed in 35 mm dishes or 6-well plates with the above cDNAs using Lipofectamine 2000 reagents according to the protocol provided by the manufacturer (Invitrogen). The following strategies were used in transfection if not otherwise specified. Total cDNA used for transfection per 35 mm dish or per well in 6-well plates was 0.5 μg. When co-expression was carried out, a 2:3 ratio of GluA to TARP cDNA was used, except for A1+γ-8, where 0.2 μg GluA1 and 5X or 10X γ-8 were used. When GluA1 (or its fusion construct) and GluA2 (or its fusion construct) were co-expressed, GluA1:GluA2 was 3:2. Transfection was terminated in 2–3 hours. Cells were dissociated with 0.05% trypsin and plated on coverslips pretreated with poly-D-lysine. Recording was performed 24–48 hours after transfection. The amplitude of glutamate-evoked currents varied considerably among HEK cells. To determine if the amplitude of the glutamate current, which is presumably due primarily to the number of receptors on the surface, had any effect on the KA/Glu ratio, we plotted the size of the glutamate current against the size of the kainate current. The KA/Glu ratio was entirely independent of the magnitude of the glutamate-evoked current (Figure S4).
3. Acute hippocampal slices and slice cultures
Transverse hippocampal slices 300 μm thick were cut from p14–p23 mice on a Leica vibratome in cutting solution containing (in mM): NaCl 50, KCl 2.5, CaCl2 0.5, MgCl2 7, NaH2PO4 1.0, NaHCO3 25, glucose 10 and sucrose 150. Freshly cut slices were placed in an incubating chamber containing artificial cerebrospinal fluid (ACSF), containing (in mM): NaCl 119, KCl 2.5, NaHCO3 26, Na2PO4 1, glucose 11, CaCl2 4, MgCl2 4, and recovered at 35 °C for ~1 h. Slices were then maintained in ACSF at room temperature prior to recording. After 0.5–1 h of incubation at room temperature, slices were transferred to a submersion chamber on an upright Olympus microscope. All solutions were saturated with 95% O2/5% CO2. CA1 pyramidal cells were visualized by infrared differential interference contrast microscopy. Cultured slices were prepared and transfected as previously described (Schnell et al., 2002). Briefly, hippocampi were dissected from P6–P9 mice and transfected with γ-8-IRES-EGFP using biolistic gene transfer after 2–3 days in culture. Slices were cultured for an additional 2–7 days before recording.
4. Electrophysiology
Outside-out patches were excised from transfected HEK293T cells or from hippocampal neurons in acute slices or slice cultures. Coverslips with transfected HEK cells were maintained during recording in external solution containing (in mM): NaCl 140, KCl 5, MgCl2 1.4, EGTA 5, HEPES 10, NaH2PO4 1, D-glucose 10 and NBQX 0.01, with pH adjusted to 7.4. Outside-out patches were excised from positively transfected cells, identified by epifluorescence microscopy, with 3–5 MΩ borosilicate glass pipettes. The internal solution contained (in mM): CsF 135, CsOH 33, MgCl2 2, CaCl2 1, EGTA 11, HEPES 10, spermine 0.1, with pH adjusted to 7.2. Glutamate- and kainate-induced currents were recorded while holding the patches at −70 mV. Glutamate (1 mM) and kainate (1 mM) were applied to patches in extracellular solution containing (in mM): NaCl 150, KCl 2.5, HEPES 10, glucose 10, CaCl2 4, MgCl2 4, cyclothiazide 0.1, pH 7.4. In addition, when GluA1/A2 channels with and without TARPs were studied, 600 nM or 10 μM PhTx was included to block potential GluA1 homomeric channels (Figure S2). The hippocampal slices (acute or cultured) were perfused with ACSF bubbled with 5% CO2 and 95% O2. In slice cultures, cells over-expressing γ-8 were visualized by EGFP fluorescence. Patches were excised from CA1 pyramidal neurons and dentate granule cells. Currents were recorded with pipette solution containing (in mM): CsMeSO4 135, NaCl 8, HEPES 10, Na3GTP 0.3, MgATP 4, EGTA 0.3, QX-314 5, and spermine 0.1. Glutamate and kainate were similarly dissolved in extracellular solution with the addition of 100 μM picrotoxin, 100 μM D-APV and 500 nM tetrodotoxin to isolate AMPAR-mediated currents. In fast-application experiments, the above extracellular solution (without cyclothiazide) was used for control and glutamate dilution. Data were collected with an Axopatch 1D amplifier (Axon Instruments, Foster City, CA), filtered at 2 kHz, and digitized at 10 kHz for most patches and at 50 kHz for fast application. Analysis of deactivation and desensitization was as described in a previous study (Milstein et al., 2007). Data are presented as mean ± SEM. Differences in means were tested with ANOVA or Student's t test and were accepted as significant if p ≤ 0.05.
Supplementary Material
Acknowledgments
We thank Kirsten Bjorgan for preparing hippocampal slice cultures and all of the members of the R.A.N. laboratory for helpful discussions. We also thank Geoffrey Kerchner for critical reading of the manuscript. R.A.N. is supported by grants from the National Institutes of Health. W.L. is supported by a postdoctoral fellowship from the American Heart Association. A.D.M. is supported by a Graduate Research Fellowship from the National Science Foundation.
References
• Burnashev N, Monyer H, Seeburg PH, Sakmann B. Divalent ion permeability of AMPA receptor channels is dominated by the edited form of a single subunit. Neuron. 1992;8:189–198.
• Chen L, Chetkovich DM, Petralia RS, Sweeney NT, Kawasaki Y, Wenthold RJ, Bredt DS, Nicoll RA. Stargazin regulates synaptic targeting of AMPA receptors by two distinct mechanisms. Nature. 2000;408:936–943.
• Cho CH, St-Gelais F, Zhang W, Tomita S, Howe JR. Two families of TARP isoforms that have distinct effects on the kinetic properties of AMPA receptors and synaptic currents. Neuron. 2007;55:890–904.
• Collingridge GL, Olsen RW, Peters J, Spedding M. A nomenclature for ligand-gated ion channels. Neuropharmacology. 2008.
• Dingledine R, Borges K, Bowie D, Traynelis SF. The glutamate receptor ion channels. Pharmacol Rev. 1999;51:7–61.
• Hashimoto K, Fukaya M, Qiao X, Sakimura K, Watanabe M, Kano M. Impairment of AMPA receptor function in cerebellar granule cells of ataxic mutant mouse stargazer. J Neurosci. 1999;19:6027–6036.
• Hollmann M, Heinemann S. Cloned glutamate receptors. Annu Rev Neurosci. 1994;17:31–108.
• Korber C, Werner M, Kott S, Ma ZL, Hollmann M. The transmembrane AMPA receptor regulatory protein gamma4 is a more effective modulator of AMPA receptor function than stargazin (gamma2). J Neurosci. 2007;27:8442–8447.
• Kott S, Werner M, Korber C, Hollmann M. Electrophysiological properties of AMPA receptors are differentially modulated depending on the associated member of the TARP family. J Neurosci. 2007;27:3780–3789.
• Lein ES, Hawrylycz MJ, Ao N, Ayres M, Bensinger A, Bernard A, Boe AF, Boguski MS, Brockway KS, Byrnes EJ, et al. Genome-wide atlas of gene expression in the adult mouse brain. Nature. 2007;445:168–176.
• Lu W, Shi Y, Jackson AC, Bjorgan K, During MJ, Sprengel R, Seeburg PH, Nicoll RA. Subunit composition of synaptic AMPA receptors revealed by a single-cell genetic approach. Neuron. 2009;62:254–268.
• Mansour M, Nagarajan N, Nehring RB, Clements JD, Rosenmund C. Heteromeric AMPA receptors assemble with a preferred subunit stoichiometry and spatial arrangement. Neuron. 2001;32:841–853.
• Mayer ML, Armstrong N. Structure and function of glutamate receptor ion channels. Annu Rev Physiol. 2004;66:161–181.
• Menuz K, O’Brien JL, Karmizadegan S, Bredt DS, Nicoll RA. TARP redundancy is critical for maintaining AMPA receptor function. J Neurosci. 2008;28:8740–8746.
• Milstein AD, Nicoll RA. Regulation of AMPA receptor gating and pharmacology by TARP auxiliary subunits. Trends Pharmacol Sci. 2008;29:333–339.
• Milstein AD, Zhou W, Karimzadegan S, Bredt DS, Nicoll RA. TARP subtypes differentially and dose-dependently control synaptic AMPA receptor gating. Neuron. 2007;55:905–918.
• Monyer H, Seeburg PH, Wisden W. Glutamate-operated channels: developmentally early and mature forms arise by alternative splicing. Neuron. 1991;6:799–810.
• Nicoll RA, Tomita S, Bredt DS. Auxiliary subunits assist AMPA-type glutamate receptors. Science. 2006;311:1253–1256.
• Osten P, Stern-Bach Y. Learning from stargazin: the mouse, the phenotype and the unexpected. Curr Opin Neurobiol. 2006;16:275–280.
• Priel A, Kolleker A, Ayalon G, Gillor M, Osten P, Stern-Bach Y. Stargazin reduces desensitization and slows deactivation of the AMPA-type glutamate receptors. J Neurosci. 2005;25:2682–2686.
• Rouach N, Byrd K, Petralia RS, Elias GM, Adesnik H, Tomita S, Karimzadegan S, Kealey C, Bredt DS, Nicoll RA. TARP gamma-8 controls hippocampal AMPA receptor number, distribution and synaptic plasticity. Nat Neurosci. 2005;8:1525–1533.
• Schnell E, Sizemore M, Karimzadegan S, Chen L, Bredt DS, Nicoll RA. Direct interactions between PSD-95 and stargazin control synaptic AMPA receptor number. Proc Natl Acad Sci U S A. 2002;99:13902–13907.
• Seeburg PH. The TINS/TiPS Lecture. The molecular biology of mammalian glutamate receptor channels. Trends Neurosci. 1993;16:359–365.
• Sekiguchi M, Takeo J, Harada T, Morimoto T, Kudo Y, Yamashita S, Kohsaka S, Wada K. Pharmacological detection of AMPA receptor heterogeneity by use of two allosteric potentiators in rat hippocampal cultures. Br J Pharmacol. 1998;123:1294–1303.
• Soto D, Coombs ID, Kelly L, Farrant M, Cull-Candy SG. Stargazin attenuates intracellular polyamine block of calcium-permeable AMPA receptors. Nat Neurosci. 2007;10:1260–1267.
• Spruston N, Jonas P, Sakmann B. Dendritic glutamate receptor channels in rat hippocampal CA3 and CA1 pyramidal neurons. J Physiol. 1995;482(Pt 2):325–352.
• Tomita S, Adesnik H, Sekiguchi M, Zhang W, Wada K, Howe JR, Nicoll RA, Bredt DS. Stargazin modulates AMPA receptor gating and trafficking by distinct domains. Nature. 2005;435:1052–1058.
• Tomita S, Chen L, Kawasaki Y, Petralia RS, Wenthold RJ, Nicoll RA, Bredt DS. Functional studies and distribution define a family of transmembrane AMPA receptor regulatory proteins. J Cell Biol. 2003;161:805–816.
• Tomita S, Sekiguchi M, Wada K, Nicoll RA, Bredt DS. Stargazin controls the pharmacology of AMPA receptor potentiators. Proc Natl Acad Sci U S A. 2006;103:10064–10067.
• Turetsky D, Garringer E, Patneau DK. Stargazin modulates native AMPA receptor functional properties by two distinct mechanisms. J Neurosci. 2005;25:7438–7448.
• Washburn MS, Numberger M, Zhang S, Dingledine R. Differential dependence on GluR2 expression of three characteristic features of AMPA receptors. J Neurosci. 1997;17:9393–9406.
• Zamanillo D, Sprengel R, Hvalby O, Jensen V, Burnashev N, Rozov A, Kaiser KM, Koster HJ, Borchardt T, Worley P, et al. Importance of AMPA receptors for hippocampal synaptic plasticity but not for spatial learning. Science. 1999;284:1805–1811.
• Ziff EB. TARPs and the AMPA Receptor Trafficking Paradox. Neuron. 2007;53:627–633.
What does Gautam think about Neha Mehta?
About: Who is Gautam Sharma
16 Apr '09 04:58 pm
Answers (1)
None of our business
Answered by iqbal seth, 16 Apr '09 05:00 pm
2Pac – There U Go Lyrics
Produced By: Johnny J
[2Pac] I don't know why I be fuckin witchu
[Verse One: 2Pac]
Was it the liquor, that makes me act blind, times that I'm with her
Anonymous pictures of other niggas tryin to kiss her
Will I love her or shall I diss her?
I'm sick of this scandalous shit I deal wit
Tryin to paint a perfect picture
My memories of jealousy no longer carefree
Cause so much bullshit your girlfriends keep tellin me
I'm on tour, but now my bedroom's an open door
So it got me thinkin, what am I tryin for?
When I was young I was so very dumb, eager to please
A lil', trick on a mission tryin to get in my P's
Me and my niggas is thug niggas, former known drug dealers
We don't love bitches and believe, they don't love niggas
I gotta blame my attraction
But you became a distraction, a threat to my paper stackin
I thought you changed but now I know
Can't turn a ho into a housewife, baby, and there you go
[Hook: Jazze Pha]
Baby there you go, actin like a ho
There you there you go, actin like a ho
Baby there you go, actin like a ho
Actin like a ho, actin like a ho
HOE! See the word on the streets you're a
HOE! Just a groupie on a world tour
HOE! Now I found out for myself you're a
HOE! Girl you need to check yourself
[Verse Two: Kastro]
These silly bitches got this game twisted
So I don't claim 'em, just bang 'em
Papa raised a player, so player, I play 'em
I got hoes that got more, hoes than me
So how I look, gettin hooked, like I ain't got G?
Truly cutie booty big, but that ain't enough
And the head make me beg, still that just ain't enough
When I don't trust her, the bitch be lyin too much
When she be dyin to fuck me you be buyin her stuff, ho
[Verse Three: Kadafi]
See girlfriend I know, your whole M.O.'s preoccupied with mostly
Gettin clown after clown, town coast to coast - see
I been tryin to stay away from sluts like you
Got me turned off completely by that sheisty shit that you do
Knew from jump yo' aim, straight through them spandex, don't front
Just name, spots on yo' body for me to touch while you clutch this game
I keep flowin like H20 it ain't nuthin for me to say
Why you keep actin like a ho? But there you go
[Verse Four: Young Noble]
Uhh, when I first met her I told her I was busy all the time
Now she, callin me flippin like she miss me all the time
How she, don't even trip she got a man at home
You need to stop chasin dick bitch and raise your son
I'm like - damn, we can creep sometime
And you know I'm on the road for like weeks at a time
Girl you're thirsty; and stop callin while I'm workin you hurtin me
All this bullshit is irkin me girl, but there you go
[Verse Five: Big Syke - last few words of each line repeat]
I blame it on yo' momma, she need to holla at you
But should I blame it on yo' daddy for all the things that you do
Cause there you go, just like a ho, caught in the streets
Like givin yo' number out to every nigga you meet
I'm tired of the games you playin, so stop playin (ho)
You hear what I'm sayin, you only good for parlayin
I'm layin down the rules, this a game that you lose
So the streets can have you baby cause I stay on the move
(There you goooo)
[Hook] w/ ad libs
[Outro: 2Pac]
There you go baby girl, THAT'S the story
I coulda SWORE you told me you was gon' change
And you don't wanna go to clubs no more and
You wasn't fin' to dress all crazy no more and
You was gon' stay home and try to chill
What happened baby?
Ohh, so yo' FRIEND wanted to go out
That wasn't you that went out
You was just goin out cause yo' friend was
Okay, so you was pissy drunk up in that nigga car
Cause yo' FRIEND wanted to get drunk huh?
It's all good, cause there you go baby
Oh I ain't trippin on them niggas callin the house
It's all good, cause there you go
Me I'mma still be a player, all day baby
So uhh, there you go
[Hook] - fades out
Fat Joe – Definition of a Don Lyrics
Produced By: The Alchemist
Yeah.. Definition of a Don
It's like I gotta keep remindin you and remindin you
Who's that nigga.. You heard the kid
Flowers on the casket of all those who oppose the squadus
It's the motherfuckin Don Cartagena ya heard
[Hook: Remy Martin]
They wanna know why ya name is Joey Crack
You a hustler, how they think you got the stacks? (Uh)
You stuck being in jacks on the blocks witcha paps (Yeah)
And the Squad to hard niggas gotta fall back (Tell 'em)
Damn papi, you're shit is icey now (Uh-huh)
In the Bronx witcha Benz rims pokin out (Ten mil)
You got the niggas in the pen straight loc'in out
But when the don is on nigga close ya mouth
[Fat Joe]
Yeah, yo
You wouldn't understand my story of life I live
Most niggas that really know me got life as bids
The trife as kids, this ain't no Scarface shit
These niggas really will kill you, your wife, and kids
I walked through many blocks niggas couldn't stand on
Had shit locked before I had a Glock to even put my hands on
Before I had the dough to put my fams on
Before I had rocks sealed in pink tops, tryna get a gram off
A wild adolescent, raised by the street
Mesmorized by the dealers and the places they eat
And when they blazed the heat, I was the shorty to take the handoff
Run upstairs, tryna sneak the gat past grandmoms
This is how it should be done... my life...
Is identical to none, son tryed to duplicate but I knew he was fake
Cuz everytime I walked by he turned blue in the face
I'm like heavy on the leg when I pop
All my change is like heavy on the weight when I cop
It's just the way it's done
Niggas tell me they respect the way I blaze them guns
On hold it down for the Bronx in the name of Pun
[Fat Joe]
Yeah uh, my name ring bells like a P.O
Put the pressure on a nigga like I'm right atcha do'
With the muzzle out, nigga can't shoke with my dough
I'm at his mothers house
Beat up his pops, put the pistol in his brother's mouth
Wave bricks, whips... jerked a few coke and next play the strip
With chrome knowin that they won't forget
And on the weekends we shut down clubs
You know them crazy Peurto Ricans always fuckin it up!
If I can't afford it, I'mma extort it
If I can't cut it, I'mma bake it
Strip you niggas butt-naked, I'm a thoroughbred
Carry guns and pump heroin
Never went O.T. I'm too light for Maryland
I'd rather play the streets of New York
Where the fiends are guarunteed to keep the meat on my fork
I'm just a hustler - feds put the tap
On our phones in hopes of cuffin us
Then wonder why we livin life so illustrious
[Hook] repeat 2x
ScHoolboy Q – Sex Drive Lyrics
Produced By: T.H.C
I'm started up, let's go, dipping through your solar system
Hopefully steal your heart, you be the victim, no I wouldn't share
[Verse 1]
Hey I did the crime, now close your eyes
Let me ease your mind, I won't steer you blind
Came right on time, just keep behind
Let me guide the way, it's your lucky day
Let's fornicate, play the way that grown ups play
Teach you how to drive a stick, baby girl, it'll be okay
First gear, second gear, third gear hell yeah
Panties dropping, I can treat her proper
Puff my pass into your lust, let me be your doctor
I can please you right, let me give you life
Little CPR,
can I use your car
Metal to the floor, see you drive me hard
You's a superstar let me be your bodyguard
Wetter than tsunami, Lord, let me dive in
Through your tidal wave, now be amazed
[Hook] (Jhene Aiko)
Scorpio, sex drive, Gemini, sex drive, Pisces, sex drive
Virgo, sex drive, Aquarius, sex drive, Sagittarius, let me be your sex drive
(Satisfaction fills me up inside, can you turn me on)
(Be my sex right only if your sex is right, can you cut me on)
[Verse 2]
As the world spins, we gain power
Your cervix stay on wet, you a stormy shower
Mind fuck you girl, undress you we can fuck for hours
Day and night, no rubbers, we love this, no covers
Freak you with my tongue, you also go down where I'm hung
Open up, a chapter begun, we can build like a factory hun
The nexus sits right on the sun, solar eclipse as soon as we cum
Soon as we cum, and a new [?] rest living they dumb
We cruise, you done won my heart over, how can we lose
From a fantasy to real life, you I choose
Came a long way from fucking friends we was due
Ooh, just me and you, ooh, just me and you
Never scared of commitment, see you my boo
We can get ghost, get lost and the bond is true
Flying through the city, clouds reconcile
Saying vows, fuck around and have a child
[Bridge: Jhene Aiko]
Break me down and build me up again
Love me 'til the sun comes up and then
We can fuck and fuck and fuck again
Sir Michael Rocks – Make This Bread Lyrics
Produced By: Reno (US)
Hey Mr. Fred, we almost dead
Make this bread or make your bed
Take this ride or take this wheel
Take your time or take this pill
[Verse 1]
I’m hoppin’ out the casket, cocaine on my glasses
Some all white buffies that I picked up on my last trip
Them niggas is yaggin’, too much into fashion
My body guard a beast, and if you reach then he spazzin’
That rat-tat-tat action, them lights, camera
I’m like the Dodge dealership, I got a lot of challengers
Damn this shit remind me of my dogs
I was on a paper mission, get it all
Marriani to the drawers, what’s up?
I went to school but never went to school ‘cause I was busy on tour
Cause where we live a nigga only as good as his credit card score
A-1, I shipped a box of them new iPhones straight to my home
Two to my dome, can’t feel my eyes
Can’t feel my bones, you not alone
[Hook x2]
[Verse 2]
When I seen it, I want it, I need it
Unlimited Visas, my heart in the freezer
I’m on it, you niggas is lacking
Napping, you loafing, you lacking the focus
My Spanish bitch that I’m with
I’m smashing, I’m stroking, with passion, devotion
That money come, and that money stay
If you fuck her good, she won’t run away
I’m golded up like Gabby Douglas
My necklace, chasing the sun away
My old niggas still Caddy truckin’
I’mma wait for that Aston truck
My Boonie niggas is savage, bruh
Jammed out in that traffic, cuz
I’m cashin’ out and I’m stackin’ up
Your girl’s pussy is trash as fuck
I never talk shit about a teammate
We goin’ Bobby for a pocket full of green face
When I die I’ll be reborn with a clean slate
And RIP to the weed that we cremate
I've recently played in a Serenity Cortex game and discovered the joy that is plot points. For those who aren't familiar with the system, plot points are an expendable resource given for good roleplaying and cool ideas, which can be spent on improving a dice roll, reducing damage taken or, most interestingly to me, altering a scene slightly.
I want to find a way to include something similar in my World of Darkness game, but I'm unsure how to go about it. The reasons I want to include this (they might help shape the responses I get) are mainly twofold:
1) Players directly being able to interact with the setup of the world. I really like the idea that a player can spend a plot point to make the hinges on a door that I'd given little thought to rusty, and then have a better chance of kicking that door down and carrying out their well-crafted plan.
2) The way I've seen them given out appeals to me, in that they seem a small reward that I can give out for good roleplaying, good ideas etc, when I want to reward a player but a whole experience point feels too big a reward.
Has anyone else implemented something like this successfully? I am looking to do this specifically in new World of Darkness, but any suggestions from other systems are obviously welcome, as they may spark ideas or be directly transferable.
3 Answers
One of the "ancestors" of Cortex's plot points are the "Inspiration!" points found in White Wolf's pulp game Adventure! It's likely that those could be adapted to use in other Storytelling-based games.
On the other hand, it seems like what you're looking for is already present in the form of Willpower. It's almost a form of meta-currency already; it would be easy to make WP serve the role of plot points as well.
+1. I wholeheartedly agree with this. Willpower already serves as a way of making things easier, and adding a 'plot point' function to it wouldn't clash with existing mechanics as much as creating a new currency would. – DuckTapeal Mar 10 '12 at 20:52
My players don't tend to use their willpower, so I guess this might be a good way to encourage it! Thanks! – English Petal Mar 12 '12 at 15:08
Definitely check out "Awesome Points" in Old School Hack. APs are not doled out by the DM/GM but, rather, awarded to players by other players, which encourages exciting and interactive participation in the story. APs can be used very similarly to the plot points you describe; however, players must draw APs from a shared pool of available points. This pool runs down but never dry: the DM/GM replenishes it by revising the scene or empowering NPCs/monsters, much as the players do for their PCs. Play reports on the use of APs are quite exciting, and the give-and-take relationship between the DM and the players keeps the AP system balanced.
Though Old School Hack (and the AP system) is typically presented via beer-and-pretzels or silly games/scenarios, it can be and has been applied to more serious games (e.g., Fictive Fantasies).
In The Dark Eye, there is an optional ruleset for "fate points", if I recall the name correctly. They are awarded for completed adventures, but also at the GM's discretion for good roleplaying and other things.
IIRC, each player starts with a set maximum number of these points (I think 5) that regenerates a bit after each adventure. They can be used to improve a skill check or combat roll, even after it has been rolled: you could say "I'll try to evade", watch the roll fail by one, and then say "D'oh, well, I'll spend a fate point on that" to barely evade the attack.
The most interesting thing they can be used for: if your character dies, you can "burn" one fate point, losing it permanently but barely surviving. Note that said character would still have negative hit points and would die again within a few rounds if no one comes over to heal them, so it's not a "bonus life" but more of an "I'll change my hit points from −15 (very, very dead) to −5 (still pretty dead, but rescuable)". These "burned" points can sometimes be restored, but only by really important deeds or exceptionally good play.
I do, however, like the idea of spending a point on weakening the door's hinges, which at least gives a sensible in-world explanation for why the skill check gets easier, even though it has the same result as The Dark Eye's plain "improve a skill check" option.
Saint Euphrasia
Also known as
• Eufrasia
• Eupraxia
Born to the Roman nobility, the daughter of Antigonus, senator of Constantinople. Related to Roman Emperor Theodosius I, who finished the conversion of Rome to a Christian state. Her father died soon after Euphrasia was born; she and her mother became wards of the emperor.
When Euphrasia was only five years old, the emperor arranged a marriage for her to the son of a senator. Two years later, she and her mother moved to their lands in Egypt. There, while still a child, Euphrasia entered a convent; her mother died soon after of natural causes, leaving the novice an orphan.
At age twelve Euphrasia was ordered by the emperor Arcadius, successor to Theodosius, to marry the senator's son as arranged. Euphrasia requested that she be relieved of the marriage arrangement, that the emperor sell off her family property, and that he use the money to feed the poor and buy the freedom of slaves. Arcadius agreed, and Euphrasia spent her life in the Egyptian convent.
Noted for her prayer life and constant self-imposed fasting, she would sometimes spend the day carrying heavy stones from one place to another to exhaust her body and keep her mind off temptations. She suffered through gossip and false allegations, much of which was the result of being a foreigner in her house. She is held up as a model by Saint John Damascene.
• 420 of natural causes
MLA Citation
• “Saint Euphrasia“. 13 March 2014. Web. 16 March 2014. <>
Jifty::Manual::RequestHandling - Jifty's request handling process
As soon as an HTTP request (whatever the method might be, like GET, POST, PUT, ...) arrives at Jifty's border, the request is forwarded to a handler. By default, Jifty->handler points to a Jifty::Handler object that is responsible for handling an incoming request. The handler receives a CGI object on which it operates.
The major steps in the request handling process are:
refresh any modified modules in development mode
build a stash
The stash is a storage area that can be reached by simply accessing Jifty->handler->stash->{some_key}. The stash will start fresh with every request and lives for the entire lifetime of a request. Using the stash, transporting data between otherwise unconnected modules becomes possible.
construct a request and response object
Using the CGI object, a Jifty::Request object is constructed and its data is populated with the CGI object's data. The request can be reached later using Jifty->web->request. The request holds information about all actions involved, all page fragments, contains state variables and arguments (usually GET/POST parameters).
Also, an empty Jifty::Response object is constructed that contains one or more Jifty::Result objects, each of which holds one Jifty::Action's result. The response object can be retrieved with the Jifty->web->response method.
setup plugins
For every registered Jifty::Plugin, some kind of per-request initialization is performed allowing the actions provided by each plugin to run.
handle static content
If the requested URI points to some existing static content being housed in a static directory, this content is handled.
setup the session
Based on a cookie that is sent with every HTTP response, the current user is assigned a unique session. The session is stored in a Jifty::Web::Session object and can be accessed using the Jifty->web->session method.
return from a continuation if requested
If there is an open continuation on the stack (e.g. from a Jifty->web->tangent link) and the return has been requested (e.g. by a Jifty->web->return link), the return will execute at this stage.
handle dynamic request unless already served
First, the user is given a cookie containing the session-id. Then, the request is forwarded to Jifty->handler->dispatcher, a Jifty::Dispatcher object that handles the request. The dispatcher works through the following steps:
In this stage, all rules in the dispatcher that are marked with the word before are run.
run the actions involved
Every Jifty::Action that is registered in a form or involved in a link or button is run in this stage.
run dispatching rules
This stage is responsible for working through all rules marked by words like under, on, when and so on. This is the point where, based on the URI or parameters, the template to be displayed may still be modified, data retrieved, additional actions run, or the template's parameters adjusted.
show the page
Here, the template displaying the page is run.
This final stage of the dispatcher will run all rules marked with the word after.
cleanup several things
Jifty::Handler, Jifty::Dispatcher, Jifty::Request, Jifty::Response
In the olden days when I used to use other database platforms, we were advised to back up and restore our databases periodically to improve performance.
Common reasons were that a B & R would include
• rebuild indexes
• defragment pages
My questions:
What does MS SQL Server do (specifically 2005) when a backup and restore is performed?
Should I be doing periodic backups and restores?
Should I be worried about losing statistics?
2 Answers
Backups/Restores are carried out at byte level - they have no effect on fragmentation in the database.
Restores should be done regularly to make sure you are confident in how the process goes (including getting the system attached to the restored database) and that the backups are proven to be usable. Your other answer is correct in saying that index maintenance is a separate process you need to carry out.
You should be doing periodic backups and restores to test your restore procedures, but as far as doing them to improve performance, you can rebuild indexes and shrink files with Maintenance Plans, which is the preferred method.
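If you want to script the index rebuild part rather than use the Maintenance Plan GUI, it can also be kicked off from the command line with sqlcmd; a rough sketch (the database and table names here are just placeholders):

sqlcmd -d MyDatabase -Q "ALTER INDEX ALL ON dbo.MyTable REBUILD"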
I have the following entry in crontab:
* * * * * python -c "import datetime; datetime.datetime.now()" >> /home/myname/pythoncron1.log
The pythoncron1.log file is being created but has nothing in it, and the file modified date has not been updated since the file was created. I was expecting to see a bunch of lines in the file, one for every minute that the cron job is run.
Why might this not be working?
(You may have guessed, I'm trying to do something a little more complicated than the example above but I've narrowed the problem down to python apparently failing to run when being invoked by cron).
4 Answers
Just to clarify, do you have any print statements in your python script?
Running interactively you don't need them:
Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51)
>>> import datetime
>>> datetime.datetime.now()
datetime.datetime(2010, 2, 24, 19, 36, 21, 244853)
On the command line you do:
example:~% python -c "import datetime; datetime.datetime.now()"
example:~% python -c "import datetime; print datetime.datetime.now()"
2010-02-24 19:38:59.639324
Of course! So it looks like my broader problem is not that python won't run... I feel another question coming on. – d4nt Feb 24 '10 at 12:46
You might need to use the full path to python, e.g. /usr/bin/python, since cron runs jobs with a very minimal PATH.
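For example, assuming your python lives at /usr/bin/python (check with "which python"), a crontab entry would look something like this (the script path is just an example):

* * * * * /usr/bin/python /home/myname/myscript.py >> /home/myname/pythoncron1.log 2>&1

The 2>&1 also sends error messages to the log, which makes cron problems much easier to diagnose.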
If in fact it isn't what Frenchie said, which it most likely is, it may be helpful to look at that user's mail. Cron mails output from cron jobs to the user account of that crontab. That is why you often see STDOUT and STDERR piped to /dev/null, so the user won't be mailed output they don't care about.
You can use the mail command as that user to check for mail with helpful output. Also, the /var/log/cron file may include helpful information.
python -c "import datetime; datetime.datetime.now()" doesn't output anything, so there is nothing to be outputted to the file.
Make sure the command you run actually outputs something on the command-line.
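For example, this variant of the entry from the question (same command, just with print added; Python 2 syntax as used above) should append a timestamp to the log every minute:

* * * * * python -c "import datetime; print datetime.datetime.now()" >> /home/myname/pythoncron1.log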
Does “all packets that fall through to the default rule should be dropped” mean that my iptables rule should drop everything at the start, like this?
# Set the default policy to drop
$IPT --policy INPUT DROP
Or does it mean something else?
2 Answers
Yes, that's exactly what it means (and doesn't mean anything else).
Obviously it's a policy, and you can set your own depending on your needs and who your expected targets are. A DROP policy will show up as "filtered" when you scan the port with nmap, whereas a REJECT policy will show "closed" - this is because REJECT sends an ICMP unreachable message back, letting the person connecting know that there is no service listening on that port ("Connection Refused" is the typical user message).
It means that if you don't have a rule in your configuration that specifically allows a packet through then it should be dropped. Basically "Deny everything unless I specifically allow it".
I'm not an iptables wiz, but basically you configure iptables to allow only what you want through and drop the rest.
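A minimal sketch of that approach (the SSH rule is just an example; adjust the allowed ports to the services you actually run):

# allow replies to connections this host initiated
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# allow inbound SSH
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# anything that falls through to the default policy is dropped
iptables -P INPUT DROP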
I am looking for a QuickSilver way of doing the following on Mac:
$ sudo vi /etc/php.ini
Is it possible for me to open a file using TextEdit as super user?
Edit: I already know about the Terminal plug-in, and it's great. But, I was hoping to use TextEdit without typing the full path of TextEdit.
4 Answers
If you want to see that "Run a Text Command in Terminal" action as duffbeer703 shows, you'll have to add the Terminal plugin.
**Sorry, I didn't see that you wanted to run TextEdit instead of vi. In order to do that, do this:
1. Make a ~/bin folder (that's a folder named bin in your home directory).
2. Add ~/bin to your path by editing ~/.profile and adding the following two lines:
PATH="~/bin:${PATH}"
export PATH
3. Go into ~/bin and make a new file called TextEdit with the following line:
sudo /Applications/TextEdit.app/Contents/MacOS/TextEdit $1 &
4. Make that file executable by doing:
chmod +x ~/bin/TextEdit
5. Now go back to QuickSilver and its "Run a Text Command in Terminal" thing and do the period thing to type in text, then type:
TextEdit /etc/php.ini
A terminal will pop up and ask for your sudo password. Once you put that in, TextEdit will pop up and let you edit as root.
There's probably an easier or cleaner way, but it does work.
+1 for the ~/bin idea. – eed3si9n Jun 28 '09 at 4:13
Good idea. If the problem is actually that he wants to use a GUI editor, an editor with a cli like TextMate or SubEthaEdit would work great. – duffbeer703 Jun 28 '09 at 14:19
It is possible.
Open Quicksilver, type "." and type "sudo vi /etc/php.ini"
Under Action you want to select "Run as Text Command in Terminal"
The "Process Manipulation Actions" plugin adds a "Launch as Root" action. You may need to enable the action after installing the plugin.
1. Go to "Preferences", then open the "Actions" section.
2. Ensure you are viewing actions by type, and have "All Actions" selected.
3. Enter "Root" in the search box.
4. "Launch as Root" should be one of the few (if not only) results. Enable the action by checking the checkbox in the first column.
Here's what I ended up doing:
$ sudo chmod 777 /etc/php.ini
$ sudo chmod 777 /etc/apache2/httpd.conf
Yes, give up on the idea of sudoing altogether.
Next, open /etc in Finder by opening Quicksilver, navigating to Macintosh HD, and pressing Option + /. Double-click on php.ini from Finder to pick the application to open it with.
To make php.ini show up in Quicksilver, I added a Custom catalog to /etc with Include Contents set to Folder Contents and Depth set to 2.
Now, all I have to do after invoking Quicksilver is to type "phpini" (without period) or "httpd" and hit enter.
Dude... my sysadmin senses are tingling! Instead, why don't you create a group called "admin", chown root:admin the files, and add yourself to the group? Why the 777 abuse? – LiraNuna Jun 28 '09 at 8:56
Really bad idea. Read my answer and the other guy's answer. – duffbeer703 Jun 28 '09 at 14:18
I'd like to use TLS encryption with Virtual Machine Remote Control (VMRC) for Microsoft Virtual Server 2005 SP1.
Virtual Server doesn't allow you to upload an arbitrary self-signed certificate; it generates a certificate signing request (CSR) that then needs to be signed by a Certificate Authority (CA).
I don't have a Windows Certificate Authority, and can't install it because I don't have access to Windows Server.
Can I use a self-signed CA certificate (generated with either MakeCert or OpenSSL) to sign the certificate signing request (CSR) that Virtual Server generates?
If so, how do I do this (using either MakeCert or OpenSSL)? I've only ever used MakeCert and OpenSSL to create signed certificates from scratch, not to sign CSRs.
4 Answers
I've always used SelfSSL from the IIS 6 resource kit to generate SSL certs. It's pretty easy to use.
Doesn't appear to be able to sign CSRs, unless I'm missing something. – Roger Lipscombe Jul 1 '09 at 9:35
You don't need the CSR. SelfSSL will take care of everything. See: thelazyadmin.com/blogs/thelazyadmin/archive/2006/06/26/… – Ausmith1 Jul 1 '09 at 19:59
Are you using R2? I have an option in Virtual Server 2005 R2 (Enterprise Edition) to upload a certificate...
Haven't tried it, but I'm guessing using SelfSSL from the IIS Resource Kit to generate a cert and then uploading it would work.
See screenshot of the R2 config page here
.. Ken
Doh! Yes I am, and yes, that option's there. I'll try it later; if SelfSSL or MakeCert work, then I'll accept your answer. – Roger Lipscombe Jul 2 '09 at 11:37
I'm not familiar with MakeCert or OpenSSL, but it's pretty trivial to install the certification authority and generate your own certificates. Well, I'm assuming you're running a server version of Windows ...
Can't use Certificate Authority; updated question – Roger Lipscombe Jun 30 '09 at 15:26
OpenSSL is pretty comprehensive; I'd be surprised if it can't do what you need. Provided you're comfortable poking about in its config files, you will certainly have no problem issuing x.509 certs that are fully RFC compliant for any standard use, but if Virtual Server makes use of custom OIDs you may run into issues. I wouldn't expect that though, to be honest, and I have successfully used OpenSSL-issued certs to set up TLS links with MS infrastructure in the past. This appears to be a basic explanation of the process used to get an ancient version of IIS to use a cert issued safely by an OpenSSL CA. Fair warning, I haven't done this recently and things may be different now, but that should be enough to get you started.
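For what it's worth, once you have a self-signed CA certificate and key, signing a CSR with OpenSSL is typically a one-liner along these lines (the file names are placeholders for your own CA files and the CSR that Virtual Server generates):

# sign the CSR with your own CA; the signed cert is valid for one year
openssl x509 -req -in vserver.csr -CA myca.crt -CAkey myca.key -CAcreateserial -out vserver.crt -days 365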
subject - install expect on solaris in order to write expect scripts
details from my machine:
uname -a
SunOS 5.10 Generic_139555-08 sun4v sparc SUNW,Netra-T5220
I installed a Solaris machine (Solaris 10), and then successfully installed the following packages in order to build the expect infrastructure on my Solaris machine
But after packages installation I get the following errors
Please advise: what is needed in order to run expect?
/usr/local/bin/expect -version
/usr/local/bin/expect: cannot execute
expect: not found
Example of how to install expect for Solaris (from http://jibbysununix.blogspot.com/2010/01/automating-sftp-with-expect-script.html)
(I downloaded the x86 packages from sunfreeware): tcl-8.5.3-sol10-x86-local, libgcc-3.4.6-sol10-x86-local, expect-5.43.0-sol10-x86-local
1)pkgadd -d tcl-8.5.3-sol10-x86-local
2)pkgadd -d libgcc-3.4.6-sol10-x86-local
3)pkgadd -d expect-5.43.0-sol10-x86-local
Also asked on StackOverflow: stackoverflow.com/q/10596626/7552 – glenn jackman May 15 '12 at 13:04
2 Answers
You aren't trying to run x86 software on sparc, are you?
uname -a
Uninstall the x86 packages and download and install sparc from:
To uninstall packages:
pkginfo | grep SMC
You'll see the three packages you installed (sunfreeware packages always have the SMC prefix); use pkgrm to remove them.
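For example (these SMC package names are just a guess; use the ones that pkginfo actually reported on your system):

pkgrm SMCexpect SMCtcl SMClgcc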
see update in my question – Eytan May 15 '12 at 10:46
You are running sparc, therefore x86 packages wont work, immediately uninstall these packages, and download and install the solaris 10 sparc counterparts. – Sirch May 15 '12 at 11:01
Do you know from which site I can download the sparc pkgs such as expect and tcl etc...? – Eytan May 15 '12 at 11:10
Yes, I've added the link to my answer. Sunfreeware has both x86 and sparc packages. – Sirch May 15 '12 at 11:11
so according to your link I need to install the pkgs - expect-5.45-sol10-sparc-local.gz tcl-8.5.10 libgcc-3.4.6 - is this true? – Eytan May 15 '12 at 11:20
There are other sources of Solaris packages, for example OpenCSW. They provide tools to perform automatic dependency resolution, and will make sure to download the right architecture.
pkgadd -d http://get.opencsw.org/now
pkgutil -U
pkgutil -y -i expect
Executables will be placed in /opt/csw/bin, e.g. /opt/csw/bin/expect.
What's the best way to compare directory structures?
I have a backup utility which uses rsync. I want to tell the exact differences (in terms of file sizes and last-changed dates) between the source and the backup.
Something like:
Local file Remote file Compare
/home/udi/1.txt (date)(size) /home/udi/1.txt (date)(size) EQUAL
/home/udi/2.txt (date)(size) /home/udi/2.txt (date)(size) DIFFERENT
Of course, the tool can be ready-made or an idea for a python script.
Many thanks!
8 Answers
The tool you're looking for is rdiff. It works like a combination of rsync and diff. It creates a patch file which you can compare, or distribute.
Thanks! I'll look into it. – Adam Matan Jul 12 '09 at 11:26
Try Beyond Compare 3 (Scooter Software) which has versions for Windows and Linux. Once you've used it, you will probably not want to use any other file comparison tool.
+1 for an exceptional tool – Steven Sudit Jul 12 '09 at 22:01
+1 for a good recommendation. – djangofan Oct 29 '09 at 20:54
Winmerge works just as good. – djangofan Nov 6 '09 at 20:46
if you don't feel like installing another tool...
for host in host1 host2
do
    ssh $host '
        cd /dir &&
        find . |
        while read line
        do
            ls -l "$line"
        done ' | sort > /tmp/temp.$host.$$
done
diff /tmp/temp.*.$$ | less
echo "don't forget to clean up the temp files!"
And yes, it could be done with find and -exec, or find and xargs, just as easily as find in a for loop. Also, you can pretty up the output of diff so it says things like "this file is on host1 but not host2" or some such, but at that point you may as well just install the tools everyone else is talking about...
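For instance, a rough xargs equivalent of the remote loop might look like this (untested sketch; -print0 and -0 keep file names with spaces intact, and -ld lists directories themselves rather than their contents):

ssh $host 'cd /dir && find . -print0 | xargs -0 ls -ld' | sort > /tmp/temp.$host.$$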
From rsync man page:
-n, --dry-run
This makes rsync perform a trial run that doesn’t make any changes (and produces mostly
the same output as a real run). It is most commonly used in combination with the -v,
--verbose and/or -i, --itemize-changes options to see what an rsync command is going to
do before one actually runs it.
Maybe this will help.
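For example, to preview what rsync would transfer without actually changing anything (the paths here are placeholders):

rsync -avn --itemize-changes /home/udi/ user@backuphost:/backup/udi/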
Thanks, but it does not solve my problem (I'm looking for the diff to actually tell the differences). – Adam Matan Jul 12 '09 at 11:09
I've used dirdiff in the past to compare directory structures. It only works on local dirs so you will have to sshfs-mount your other directories.
The good thing is that you can see visually if the files are equal or not and which one is newer or older. And it supports up to 5 directories. You can also see differences and copy files from one to the other.
I would use Meld for that.
Meld works really well for this if you want a GUI solution. – Drew Noakes Mar 4 '13 at 11:58
Besides the tools already mentioned, on Windows you could use Total Commander or WinSCP; both have very comfortable functions to compare (and sync) directories.
Some people want to compare filesystems for different reasons, so I'll write here what I wanted and how I did it.
I wanted:
• To compare the same filesystem with itself, i.e., snapshot, make changes, snapshot, compare.
• A list of what files were added or removed, didn't care about inner file changes.
What I did:
First snapshot (before.sh script):
find / -xdev | sort > fs-before.txt
Second snapshot (after.sh script):
find / -xdev | sort > fs-after.txt
To compare them (diff.sh script):
diff -daU 0 fs-before.txt fs-after.txt | grep -vE '^(@@|\+\+\+|---)'
The good part is that this uses pretty much default system binaries. Having it compare based on content could be done by passing find an -exec parameter that echoes the file path and an MD5 of the file's contents.
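For example, a content-based variant of the snapshot could look like this (note that it reads every file, so it is much slower than the name-only version):

find / -xdev -type f -exec md5sum {} \; | sort -k 2 > fs-before-md5.txt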
share|improve this answer
add comment
Your Answer
|
global_01_local_0_shard_00000017_processed.jsonl/4493 | Take the 2-minute tour ×
I'm planning to begin a Django project that may (or may not) grow to be a pretty big thing. So I wanted to start out well: buy my public DNS (not sure yet if with Google Apps or any other domain provider) and start with a Amazon EC2 server.
So the idea is to have a centralized (git) repo in the server and a Django project running all the time (not yet in production stage, just development). So everybody will work in their local machines and then push changes to the centralized repo. Also, we'll be making tests in the development project (probably in the Django admin and checking some views that include database queries).
The question is, does this match the Amazon EC2 "Free Tier" capacity? or would the activity I'm describing here increase the monthly cost of the server?
Also, we will be developing (thus, making pulls and pushes to the repo) mostly from South America, but the "target users" (once the project is done) will be from the USA, so is it OK if I set the server region to USA (East or West)? Or would that also increase the monthly cost of the server?
Finally, I've read a little bit about BitNami's DjangoStack, but I'm not sure it fits my needs. Would it be useful (based on my server description)?
1 Answer
First of all, I wouldn't call it a centralized git repository: git is a distributed version control system, so any copy held by the people you work with is equally valid.
The AWS Free Usage Tier, in my opinion, is for getting started with AWS: you will start understanding how it works, how to manage EBS, groups, and elastic IPs - basically how to deal with the entire AWS ecosystem.
What about the backend? Will you have a RDBMS? In any case, you should think of this more as getting started with AWS, and if the application grows you can scale out as much as you need. On the other hand, if you are thinking about AWS just because of the free tier, you're going the wrong way; in the end you will have to pay either Amazon or some other provider.
I've installed "Snort" on FreeBSD-9.1 (32-bit) from the standard ports using:
pkg_add -r snort
After configuring and running with:
snort -c /etc/snort/snort.conf -A full -u snort -g snort -i em1 -T
I'm getting this error:
ERROR: /etc/snort/snort.conf(337) Unknown preprocessor: "ftp_telnet".
Doing some searching on the Internet, the only thing I found is that this could happen if I'm using a snort.conf from a different version of snort, but this is not my case. My snort version is:
,,_ -*> Snort! <*-
o" )~ Version IPv6 GRE (Build 40)
'''' By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team
Copyright (C) 1998-2012 Sourcefire, Inc., et al.
Using libpcap version 1.3.0
Using PCRE version: 8.31 2012-07-06
Using ZLIB version: 1.2.7
and I'm using snortrules-snapshot-2931.tar.gz from snort.org.
I've had some previous experience installing and running snort on Linux and I never faced such errors, but I'm fairly new to BSD-UNIX.
Can anyone help?
No one answered this question in a long time, and all my attempts failed to solve it and the importance of this has now passed for me ... But for anyone else who might face this issue, I think the cause of the problem is that I installed Snort from the ports ... I haven't tried it yet but my best guess is that if I try and install this from source, this problem won't show up ... So, in short: Try installing Snort from source and not from the ports. – Seyed Mohammad May 3 '13 at 13:06
Excerpts for Unwanteds
The Purge
There was a hint of wind coming over the top of the stone walls and through the barbed-wire sky on the day Alexander Stowe was to be Purged. Alex waited in the dusty Commons of Quill and felt the light breeze cooling the sweat on his upper lip. His twin brother, Aaron, stood beside him; their parents, behind. And all around, the entire community of Quill watched and waited, the bland looks of sleeping fish on their faces.
Mr. Stowe pressed his finger hard into Alex's back. A final poke in the kidneys, a last good-bye, Alex thought. Or a warning not to run. Alex glanced at Aaron, whose face showed the tiniest emotion. Scared, was it? Or sad? Alex didn't know.
The High Priest Justine, her long white hair undisturbed despite the breeze, rose to her full height and observed the silent crowd. She began without introduction or ado, for a Purge was neither exciting nor boring; it just was, as many things just were in Quill.
There were nearly fifty thirteen-year-olds this year. The people of Quill waited to hear which of these teenagers had been marked as Wanted or Necessary, and, by process of elimination, which of them remained to be Purged.
Alex scanned the group and their families around the giant half circle of the amphitheater. He knew some of them, not all. Alex's mind wandered as the High Priest Justine announced first the names of the Wanteds, and he startled only slightly as the high priest spoke Aaron's name. Aaron, who'd had nothing to worry about, sighed anyway in relief when he was among the fifteen names called.
The Necessaries were next. Thirteen names were read. Alexander Stowe was not one of those, either. Even though Alex knew that he was Unwanted, and had known ever since his parents had told him over breakfast when he was ten, the knowledge and three years of preparation weren't enough to stop the sweat that pricked his armpits now.
It was down to a mere formality unless there was a surprise, which there sometimes was, but it didn't matter. Everyone stood motionless until the final twenty names were called. Among the Unwanted, Alexander Stowe.
Alex didn't move, though his heart fell like a cement block into his gut. He stared straight ahead as he'd seen the other Unwanteds do in past years. His lip quivered for a moment, but he fought to still it. When the governors came over to him, he put his arms out for them to shackle with rusty iron bands. He made his eyes icy cool before he glanced over his shoulder at his parents, who remained unemotional. His father nodded slightly, and finally took his finger out of Alex's back after the shackles were secure. That was a minor relief, but what did it matter now?
Aaron sniffed once quietly, catching Alex's attention in the silent amphitheater. The identical boys held a glance for a moment. Something, like a jolt of energy, passed between them. And then it was gone.
"Good-bye," Aaron whispered.
Alex swallowed hard, held the stare a second more as the governors tugged at him to follow, and then broke the connection and went with the governors to the waiting bus that would take him to his death.
© 2011 Lisa McMann
8 reasons carbs help you lose weight
Eating a diet packed with the right kind of carbs is the little-known secret to getting and staying slim for life.
When we talk about the right kind of carbs, we mean Resistant Starch. Hundreds of studies conducted at respected universities and research centers have shown Resistant Starch-such as grains, beans, and legumes-helps you eat less, burn more calories, feel more energized and less stressed, and lower cholesterol.
Sound too good to be true? Here are eight evidence-based reasons you must get carbs back in your life if you are ever to achieve that coveted sleek, slim look.
Eating carbs makes you thin for life
That's the equivalent of several stuffed baked potatoes (a food we bet you've been afraid to eat for decades).
Most low-carb diets limit you to fewer than 30% of total calories from carbs and sometimes contain as few as 30 grams of carbohydrates a day.
Health.com: 10 fat-burning carbs
Carbs fill you up
Many carb-filled foods act as powerful appetite suppressants. They're even more filling than protein or fat. These special carbs fill you up because they are digested more slowly than other types of foods, triggering a sensation of fullness in both your brain and your belly.
Research done at the University of Surrey in the United Kingdom found that consuming Resistant Starch in one meal caused study participants to consume 10% fewer calories (roughly 150 to 200 calories for the average woman) during the next day, because they felt less hungry.
Carbs curb your hunger
According to researchers, when dieters are taken off a low-carb diet and shifted to an approach that includes generous amounts of fiber and Resistant Starch foods, something wonderful happens: Within two days, the dieters' cravings go away.
The fiber and Resistant Starch fills them up and satisfies them while allowing them to eat the foods they crave. These good-news carbs also raise levels of satiety hormones that tell the brain to flip a switch that stifles hunger and turns up metabolism.
Health.com: 4 hearty whole-grain recipes
Carbs control blood sugar and diabetes
Eat the carbs you want, but you need to combine them so that they don't cause a spike in your blood sugar. Instead of eating white rice, switch to brown and combine it with beans, corn, or other high Resistant Starch foods that keep your blood sugar more balanced than low-carb diets.
Carbs speed up metabolism
Carbs high in Resistant Starch speed up your metabolism and your body's other natural fat burners. As Resistant Starch moves though your digestive system, it releases fatty acids that encourage fat burning, especially in your belly.
These fatty acids help preserve muscle mass - and that stokes your metabolism, helping you lose weight faster. Researchers set out to fatten up two groups of rats, feeding one group food that was low in Resistant Starch.
A second group was fed Resistant Starch-packed food. The rats fed the low Resistant Starch chow gained fat while losing muscle mass. Rats that ate the high Resistant Starch meals preserved their muscle mass, keeping their metabolism moving.
Health.com: 30 new metabolism boosters
Carbs blast belly fat
Health.com: Blast belly fat fast
Carbs keep you satisfied
Carbs keep you satisfied longer than other foods. Here's why: Your brain acts like a computerized fuel gauge that directs you to fill up whenever it notices that its gas tank (stomach) is empty.
Foods high in Resistant Starch flip on every single fullness trigger in the body. They release fullness hormones in the intestine and make your cells more sensitive to insulin.
By increasing your consumption of filling foods and releasing satiety hormones, you'll minimize your hunger and cravings.
Carbs make you feel good about you!
"Dieters feel so empowered once they lose weight on carbs. For the first time, they are able to lose weight by eating in a balanced manner, without cutting out entire food groups," says Sari Greaves, a registered dietitian and spokesperson for the American Dietetic Association. |
Mother-in-Law Caught Breastfeeding Baby
Who is that breastfeeding my kid?
This headline caught my eye because, as I'm sure is the case with most of us moms out there, the image struck horror deep into my soul. Relationships between daughters- and mothers-in-law are complicated to begin with, so imagine the territory war that would be triggered by such a situation as this.
Read More: NYC Hospitals to Make Formula Feeding a Total Nightmare for New Mothers
A mother wrote in to Emily Yoffe, aka Dear Prudence, at Slate, after discovering her mother-in-law 'nursing' her two-month-old son in the middle of the night. Her husband wasn't home, and she ordered her mother-in-law to leave the house first thing in the morning and even considered calling the police because she was so upset by what had happened.
Read More: Healthy Child Healthy World: Is Breastfeeding Revolutionary?
Yoffe thought that calling the police would have been going too far, as it isn't a matter for law enforcement, but the mother's disgust is certainly justified. Yoffe writes, "The fact that this letter is about your mother-in-law's nipple is enough to give anyone feelings of morning sickness. New parents get into all sorts of hassles with the grandparents over different styles of raising the kids... Your husband needs to have a very serious talk with his mother about boundaries - emotional and physical."
Read More: Are You Comfortable Breastfeeding in Public?
This grandmother's understanding of her role in both the child's and parents' lives is not clear, though it would be interesting to hear her perspective on it. Does she view breasts as easy pacifiers, a convenient way to calm a screaming baby in order to give its exhausted mother a break? Does she feel some longing for the days when she nursed her son, or, if she didn't, does she just want to try it out?
What this grandmother doesn't get is how sacred breastfeeding one's child truly is. Nursing mothers know well that it represents something much more powerful than a necessary transfer of nutrients; if it didn't, many of us probably wouldn't bother doing it. Because breast milk is meant to come from only one person, anyone else's nipple in a baby's mouth would feel like an invasion of very private territory, not to mention how confused a baby would be by not getting any milk.
I hope for the sake of this family that daughter- and mother-in-law are able to make their peace, though I imagine an incident like this is not forgotten easily.
Top Articles on Breastfeeding
Healthy Child Healthy World: Is Breastfeeding Revolutionary?
Are You Comfortable Breastfeeding in Public?
NYC Hospitals to Make Formula Feeding a Total Nightmare for New Mothers
Reassess Your Job Search For the New Year
This month, many of us are making resolutions and setting goals for our careers in 2010. But if you're still on the hunt for a new job, don't think you're off the hook - the beginning of a new year is the perfect time to reevaluate and fine-tune your job search. Whatever it is that's hindering your search, I've got tips to get you back on track.
• If you're not getting responses to your applications . . . Take a closer look at your resume, cover letter, and the positions you're choosing to apply for. Typos in your resume, generic-sounding cover letters, or applying for positions you're over- or underqualified for can all keep you from getting past even the first level of screening, so take some time to ensure you're making the right impression on potential employers.
• If you're getting lots of first interviews but no follow-ups . . . It's time to reexamine your interviewing style - and be as honest as possible. You need to consider everything from the firmness of your handshake to your attire and interview responses. My suggestion? Stage a fake interview with a friend. Set a time to meet her somewhere, dress as you would for an interview, and ask her to prepare questions for you ahead of time. Then ask her to critique you. It may feel silly, but it's the best way to get a (semi) objective opinion.
• If you're coming this close but just not getting the job . . . You may need to be more aggressively persistent in your follow-up. If you've done everything right and made it through several phases of interviews but aren't getting any job offers, consider whether you're conveying enough enthusiasm about the job to employers. Polite but persistent calls and emails after an interview checking in, thanking the interviewer for his or her time, or restating your interest can sometimes sway an employer who was on the fence about you, so don't underestimate the value of going that extra mile.
Related Content:
5 Jobs With Growing Salaries
I'm Asking: What Are Your 2010 Career Goals?
7 Tricky Tactics Employers Use to Evaluate You
dexterpexter's Journal: Coffee Capitalism 10
Journal by dexterpexter
Do you remember coffee consumption being so prevalent a decade ago? Fifteen years ago, the average person probably never entered a dedicated "coffee house" unless they really needed a cup of coffee (or were painfully artsy college students.) Most coffee was probably carried from home in a thermos, purchased from a breakfast diner or donut shop, or foully brewed in the break room. Sure, people have been addicted to morning coffee for a long time, but the coffee revolution is a recent one in my memory. Starbucks has made it trendy to carry around a branded cup of coffee--I guess that it took a big, commercial entity to market an $8 cup of machine-dispensed caffeine.
Starbucks is omnipresent on the street corners of most major cities in the United States. Conventional wisdom says that smaller coffee shops suffer when massive corporate entities open stores nearby. The article, however, contends that Starbucks has actually been a boon to many mom-and-pop coffee shops. "Strange as it sounds, the best way to boost sales at your independently owned coffeehouse may just be to have Starbucks move in next-door."
Some snippets from the article:
"Each new Starbucks store created a local buzz, drawing new converts to the latte-drinking fold. When the lines at Starbucks grew beyond the point of reason, these converts started venturing out--and, Look! There was another coffeehouse right next-door!"
"... when Starbucks blitzed Omaha with six new stores in 2002...business at all coffeehouses in town immediately went up as much as 25 percent."
"...if Starbucks can make a profit by putting its stores right across the street from each other, as it so often does, why couldn't a unique, well-run mom and pop do even better next-door?"
The article (found at the link above, and is about 2 pages long) doesn't completely glorify Starbucks, though. It recounts the techniques Starbucks has used to antagonize competitors, including pursuing competitors' leases!, and describes cases where people were forced out of the market. However, the article contends that this (the coffee shop failure, not the intimidation) is an exception to the rule (the numbers they cite regarding the success of running a coffee shop is impressive, if believable.) It also differentiates Starbucks from chain stores like Home Depot and Wal-Mart, which can have devastating impacts on local markets.
An interesting read nonetheless.
Coffee Capitalism
• I'd read that article, wondering that this was news to anyone. There were hardware stores before Home Depot and clothing retailers before The Gap (both of which had some major shortcomings that are forgotten when they're romanticized) but there was essentially zero market for expensive coffee in the US before Starbucks came along. Everyone else in that business is in Starbucks' debt.
My younger daughter works at Starbucks. You don't see a lot of it from outside, but it's a company culture that really seems to respect its employees. For instance, there are not many companies where a twenty-hour-a-week employee has access to health insurance. They do fall short at times, but they have a much greater commitment than most companies. She recently lent me her copy of Pour Your Heart Into It, by Howard Schultz (the founder of the company that is now Starbucks). It's an interesting re
• Well, as you can tell I have never paid Starbucks for a cup of coffee, and I suppose I was misremembering someone else mentioning paying as much for a coffee. :)
• by johndiii (229824) *
Well, I imagine that one could pay that much if one put one's mind to it. There are a lot of things that you can add. :-) But I like coffee to taste like coffee - and NOT sweet.
It seems like it's fashionable to look down on Starbucks. They've done some good things, though, and I respect them for that.
• My regular drink there (tall bold, no room for cream) costs $1.65, if I remember correctly. I am on the very low end of the spectrum, though.
• Just yesterday, my wife and I passed a Starbucks in a Mall and I commented "If I told you 20 years ago that people would be lining up to spend five bucks on coffee, you'd've told me to have my head examined."
• We're basically ground-zero for Starbuck's and you can't swing a dead cat without hitting somewhere that serves espresso. There are any number of local chains, independent coffee shops and stands, gas stations, etc with espresso. Not to mention both Tully's (also Seattle based) and Peet's (run by one of Starbuck's founders).
Really the only places in danger from Starbuck's coming to town are places that serve overpriced bad coffee.
FWIW my usual coffee drink (16 oz. drip, or 12 oz. americano) comes in at arou
• As I just mentioned to johndiii, you can tell I have never paid Starbucks for a cup of coffee. :) I remember someone mentioning paying something like $6 or $7 for one, but I could be remembering incorrectly. The next time I am dragged in the vicinity of one, I will pay closer attention to the prices.
• I'd have to agree with that article as well. I don't remember a lot of dedicated coffee shops before Starbucks really caught on. Anyway... so you grow coffee but you don't drink it? Why do you grow it if I may ask? Inquiring minds want to know. :D
• Hello!
They are lovely plants to keep (although I have been a poor keeper when it comes to fertilization.) Even if I were a coffee drinker, mine isn't mature enough to harvest beans from just yet. Raising a coffee plant to yield berries is a many-year endeavor. Call it a weird hobby.
One Week With GNOME 3 Classic 169
Posted by timothy
from the we-think-we-can-save-the-foot dept.
An anonymous reader writes "Stephen Gallagher, Security Software Engineer at Red Hat, has completed his week-long experiment running GNOME 3 Classic. Stephen writes: 'While I was never as much in love with GNOME 2 as I was with KDE 3, I found it to be a good fit for my workflow. It was clean and largely uncluttered and generally got out of my way. Now that Fedora 19 is in beta and GNOME Classic mode is basically ready, I decided that it was my duty to the open-source community to explore this new variant, give it a complete investigation and document my experiences each day.' I'll leave Stephen's opinion on the new Classic Mode to the Slashdot reader to discover, but I will say that it does touch on the much debated GNOME Shell Activities Overview, and the gnome-2-like Classic mode's Windows List on the taskbar."
Comment: Re:Many classes of non-human (Score 1) 115
by AceJohnny (#43855595) Attached to: Book Review: The Human Division
It isn't useful on such a trivial example, but add in pointers...
int * func(char* a, char* b);
int *
func (char *a,
char *b);
(or better elaborate examples I can't be assed to come up with for a /. comment) ... and the milliseconds and frustration saved in parsing function declarations start to add up
Comment: defected to Awesome (Score 1) 818
by AceJohnny (#40287179) Attached to: Ask Slashdot: Why Aren't You Running KDE?
I've long been a KDE user, switched to it in the KDE 4.1 days and never understood why people were so unhappy about it. I found it to be slick and useful, despite the regular problems with the NetworkManager applet in Debian Unstable. I just used the Gnome applet instead, which fit without a hitch.
Last year, finally frustrated enough with juggling between the windows of my various terminals and editors, I chose to give a tiling window manager a good try, and spent some effort on the ill-named Awesome (seriously, how do you SEO that?).
Though it's certainly not aimed at Joe Six-Pack in that you actually have to edit the Lua-based config file to configure it yourself, I found it extremely powerful and perfectly suited to my needs. The "tag" system to organize your windows is supreme in allowing me precise control over which windows to display.
I discovered that I didn't have a use for all the frills of Gnome and KDE, except for USB-key and Wifi network management which are both accessible from the CLI anyhow (see udisks and nmcli). ... does this mean I've turned into a greybeard?
Comment: Re:Good Idea (Score 1) 127
by AceJohnny (#40073607) Attached to: Emacsy: An Embeddable Toolkit of Emacs-like Functionality
I've been using CScope in Emacs for about a year (in fact, I added the entry to ascope.el on that wiki page you linked to), and I've recently switched to Semantic from CEDET and GNU Global.
Sadly, the Emacs Code Browser (ECB) linked to from the CEDET page seems to be broken for recent versions of Emacs and CEDET and unmaintained.
While I dislike Eclipse for bloat and difficult extensibility, I have yet to decide whether Emacs has caught up with it for code browsing.
Comment: Strong Magnets! (but only transient) (Score 1) 166
by AceJohnny (#39456759) Attached to: Record-Setting 100+ T Magnetic Field Achieved At Los Alamos
I used to work next to the French Laboratoire National des Champs Magnétiques Intenses (Powerful Magnetic Field National Laboratory) and was lucky enough to visit it once during the yearly Science Day (why don't we have this in the US?).
They claimed they had the second most powerful magnets in the world, IIRC behind the Fermilab, at about 32T (again, IIRC). Note that this is a sustained magnetic field, not transient as the OP's record. (still, hitting 100T without destroying the magnet is one hell of a feat! Now if only we could find a source of power to sustain such a field...).
32T is extremely high, more powerful than any natural magnetic field on Earth (according to WP, the Earth's field is about 25uT at the equator to 65uT at the poles). The most powerful permanent magnets (rare-earth) can achieve a little under 1T, and good luck getting that magnet off a piece of steel. 32T is achieved only in a space about the size of 2 coke-cans at the center of a large cylindrical apparatus that is the concentric electromagnets. But even at such a strength, the fields we make are dwarfed by stellar and interstellar magnetic fields, that have been calculated to reach hundreds or thousands of Teslas.
Fun facts: they run the magnets at night, when power is significantly cheaper. They have big banks of capacitors and batteries for spare surge power. The (classical) electromagnets aren't built by spooling wire on a tube, because wire isn't thick enough the sustain the kind of current that goes through. Instead they take a thick copper tube that they slice in a spiral and insert an isolator in the spacing.
Their most powerful magnets were formed of a core superconducting electromagnet surrounded by standard electromagnets. The cost of superconducting materials is what prevent them from making more powerful stuff.
But despite all that, I'm still not sure what kind of experiments require such powerful magnetic fields. Such awesome engineering, so few applications...
Comment: News isn't the soldering, but the OSS libraries (Score 4, Interesting) 240
by AceJohnny (#38882137) Attached to: Why the Raspberry Pi Won't Ship In Kit Form
The fact that they won't deliver in kit isn't news*, it's more interesting to know that they have HW-accelerated versions of MPEG4 and H.264 (and only those), and that all these libraries are closed source.
Furthermore, claims that they have the fastest mobile GPU are fluff: we only have the subjective word of someone who worked on it, not a neutral 3rd party, and it'll be caught up by someone else soon anyhow.
Finally, I'm going to advance that any complaints about the nvidia binary driver are going to be small fry compared to Broadcom's drivers.
*it's just not possible to hand-solder BGA packages. At best you'd need a reflow oven, and *that's* still tricky with the sizes involved here.
Comment: Simtec "Entropy Key" also does quantum RNG (Score 4, Interesting) 326
by AceJohnny (#38207918) Attached to: Physicist Uses Laser Light As Fast, True-Random Number Generator
A while back, the Simtec Entropy Key was making the rounds among Debian Devs, and claims to be exploiting quantum effects in the P-N junctions to be a true RNG.
They seem serious and I tend to trust paranoid Debian developers' opinions, but ultimately I don't have enough knowledge myself to make a confident judgment call. I'd be curious about more opinions.
Comment: Re:This is a BAD idea (Score 1) 57
by Anonymice (#46305681) Attached to: S. Korea's Cyberwar Against N. Korea's Nukes
Oh, no doubt the casualties would be catastrophic, but don't underestimate the power of sheer numbers. The Arab uprisings are a good example of its efficacy.
NK wants the nukes to fend off the US, not South Korea.
Their only influence on the US is through their threat to the South, as they lack any long range capability. The best they've managed to do was fire a chunk of metal into the lower atmosphere - that's a long way from an ICBM.
Comment: Re:This is a BAD idea (Score 2) 57
by Anonymice (#46304037) Attached to: S. Korea's Cyberwar Against N. Korea's Nukes
Whilst horribly under-equipped & outdated, North Korea has the largest army of foot-soldiers/infantry in the world. Adding that Seoul is also only 35km from the NK border, I wouldn't want to place any bets. If the North goes down, it'll take the South with it & flood China's already delicate border regions with a huge number of refugees.
Unless it gets taken down from the inside, I don't expect to see any changes in NK during my lifetime.
Comment: Re:well i'm reassured! (Score 4, Insightful) 393
by Anonymice (#46127591) Attached to: Confessions Of an Ex-TSA Agent: Secrets Of the I.O. Room
How is this modded 4+ Insightful?! It's ignorant, hypocritical bollocks!
"Women, gays, Muslims & atheists" are no more special interest groups than bible-bashing white males. And how the fuck do you make "accommodations" for atheists? Not force them to sing words of praise to your special interest deity?
On an organisational level, religion should have no place in military procedures. If you're having to make "accommodations" for people absent of any religion, then there's something horribly wrong with the procedures of your military.
And how the hell can you complain that atheists DON'T have to follow your religious doctrine, AND at the same time complain that other religious groups get to follow theirs?
A recent article shows that the Pentagon is reconsidering uniform requirements to permit beards and turbans for Muslims.
Suddenly - we are courting Muslims...
Under pressure from Sikhs, the Pentagon has publicly clarified its existing procedures to permit certain practices "as long as the practices do not interfere with military discipline, order or readiness."
And not just that, they have to go through the procedures to request permission for every individual deployment.
A number of highly decorated professionals have been drummed out of service for the crime of failing to wholeheartedly support the gay agenda.
So it's OK for people to break with agreed military procedures & speak out against a minority, but it's not for a minority to request to do the same? Go fuck yourself.
...often enough, accusations of sexual harassment and/or assault are political tools used against good soldiers. It is impossible to even guess at the numbers of such instances, but I know for a fact that it happens. Other times, a female soldier who is busted for drugs or other infractions tries to turn the tables by accusing supervisors and investigators of sexual harassment. Again - it's impossible to even guess at the numbers, but it happens.
Given the accuracy of your comments so far, I'll choose to take these self-professed baseless assumptions with a pinch of salt. You don't have enough information to even make a guess, but you "know" it happens? Do you have *anything* to back this up?
...the fact is, our military is being improperly used to advance a number of political agendas.
Something the whole world would probably agree with you on.
Comment: Re:Right On (Score 1) 312
by Anonymice (#45777271) Attached to: Snowden Says His Mission Is Accomplished
Whilst I understand the dilemma, this is a defeatist attitude & not in the heart of democracy.
We had a similar problem in the UK with Labour & the Conservatives. The Conservatives lost many voters due to the huge controversies created during their reign in the 80's & 90's (symbolised by Thatcher), and the following Labour government took us into an illegal war & steered us into the financial crisis (it was the collapse of Lehman's in London that sent the dominoes falling).
The disillusion gave way to the Liberal Democrats getting their highest share of the vote in a *century*, forcing a coalition government. They didn't get a majority, however it gave the two leading parties a massive reality check & kick in the backside.
Prior to the election, we were all warned to vote tactically & that a vote for one of the minor parties would be a vote wasted.
If enough people act, change CAN happen.
Comment: Re:Not that hard to believe, actually (Score 1) 90
Whilst I would expect someone involved enough to start hacking their servers would be better informed, the general population are *very* uninformed about foreign politics. Forgivable given the local politics mirrors a soap opera.
I made reference to the NSA & recent events during a meeting at a state department here & only one person in the room had any clue what I was talking about.
Comment: Re:Oooh Goodie! (Score 1) 119
Unless they've changed in the last 10 years, they *are* taught & examined as different subjects. The difference is that unless you specifically take science as a subject, study of the 3 sciences rotates in 2 slots & the final qualification is only worth 2 GCSEs instead of 3.
This promise means one of either 3 things:
a) A subject will be dropped from the curriculum to make way for the extra science classes;
b) More will be put on the already overloaded curriculum (especially an issue when all finals are sat over the same period, with no gaps);
c) Empty words, Bollocks & Bullshit.
Comment: Re:Targeted ads are better than untargeted ads (Score 1) 177
by Anonymice (#44196847) Attached to: Student Project Could Kill Digital Ad Targeting
Trying to cover operating costs whilst providing users a free service != hate.
If you don't want to pay for the content you consume, don't complain when they try to make up costs some other way.
Bar pop-ups & intrusive flash ads, I see ad-blocking as unethical. Don't like the ads? Don't consume their content.
User Journal
Journal: cool mashup idea?
Journal by DrEasy
Plot the origin of phishing scams on GoogleMaps using Google's blacklist below?
(found out about the existence of such a list in:
http://slashdot.org/article.pl?sid=07/01/04/208206 )
User Journal
Journal: Cool Web dev stuff
Journal by DrEasy
The Google Web Toolkit is a Java library that generates JavaScript code: http://code.google.com/webtoolkit/
OpenLaszlo is like XUL but cleaner, and it generates code that works for any browser.
User Journal
Journal: web folder
Journal by DrEasy
This post says it's possible in OS X to create a Web folder so that publishing to the Web is as easy as drag-and-drop.
Microsoft and Red Hat Team Up On Virtualization 168
Posted by ScuttleMonkey
from the don't-go-the-way-of-the-novell-bird dept.
mjasay writes "For years Microsoft has insisted that open-source vendors acknowledge its patent portfolio as a precursor to interoperability discussions. Today, Microsoft shed that charade and announced an interoperability alliance with Red Hat for virtualization. The nuts-and-bolts of the agreement are somewhat pedantic, providing for Red Hat to validate Windows Server guests to be supported on Red Hat Enterprise virtualization technologies, and other technical support details. But the real crux of the agreement is what isn't there: patents. Red Hat has long held that open standards and open APIs are the key to interoperability, even as Microsoft insisted patents play a critical role in working together, and got Novell to buy in. Today, Red Hat's vision seems to have won out with an interoperability deal heavy on technical integration and light on lawyers."
Sanyo Invents 12X High-Speed Blu-ray Laser 194
Posted by CmdrTaco
from the twelve-is-way-more-than-eleven dept.
Lucas123 writes "Today Sanyo said it has created a new blue laser diode with the ability to transfer data up to 12 times as fast as previous technologies. The laser, which emits a 450 milliwatt beam — about double that of previous Blu-ray Disc systems — can read and write data on discs with up to four data layers, affording Blu-ray players the ability to store 100GB on a disc, or 8 hours of high-definition video."
Stargate Worlds Beta Begins Oct. 15th 84
Posted by Soulskill
from the work-in-progress dept.
An Open Source Legal Breakthrough 292
Posted by kdawson
from the gpl-means-what-it-says dept.
jammag writes "Open source advocate Bruce Perens writes in Datamation about a major court victory for open source: 'An appeals court has erased most of the doubt around Open Source licensing, permanently, in a decision that was extremely favorable toward projects like GNU, Creative Commons, Wikipedia, and Linux.' The case, Jacobsen v. Katzer, revolved around free software coded by Bob Jacobsen that Katzer used in a proprietary application and then patented. When Katzer started sending invoices to Jacobsen (for what was essentially Jacobsen's own work), Jacobsen took the case to court and scored a victory that — for the first time — lays down a legal foundation for the protection of open source developers. The case hasn't generated as many headlines as it should."
GNU is Not Unix
id and Valve May Be Violating GPL 399
Posted by Zonk
frooge writes "With the recent release of id's catalog on Steam, it appears DOSBox is being used to run the old DOS games for greater compatibility. According to a post on the Halflife2.net forums, however, this distribution does not contain a copy of the GPL license that DOSBox is distributed under, which violates the license. According to the DOSBox developers, they were not notified that it was being used for this release."
Is the Microsoft/Novell Deal a Litigation Bomb? 342
Posted by Zonk
from the pengui-bomb dept.
mpapet writes "According to WINE developer Tom Wickline, the Microsoft/Novell deal for Suse support may one day control commercial customers' use of Free Software. Is this the end of commercial OSS developers who are not a part of the Microsoft/Suse pact?" From the article: "Wickline said that the pact means that there will now be a Microsoft-blessed path for such people to make use of Open Source ... 'A logical next move for Microsoft could be to crack down on 'unlicensed Linux' and 'unlicensed Free Software,' now that it can tell the courts that there is a Microsoft-licensed path. Or they can just passively let that threat stay there as a deterrent to anyone who would use Open Source without going through the Microsoft-approved Novell path,' Wickline said." Bruce Perens dropped a line to point out that most of the content actually comes from his post.
Comment: Not surprised (Score 1) 346
I'm not surprised... I was in China in the summer of 2001, and one of the things I vividly remember was riding the train from Beijing to Shanghai, and looking out the window at a factory with smokestacks belching bubblegum-pink smoke into the sky. That cannot be healthy, or likely legal, but in general in China rules and regulations are one thing on paper, and another thing in practice.
+ - Google Video Race Against Time Goes Distributed-> 3
Submitted by Bottles
Bottles writes "Following Google's announcement to shut down Google Video (previously reported on Slashdot), the Archive Team has inspired a group of volunteers to join together to preserve as much of the content as possible: some 2.5-2.8 million videos. In a few short days the effort has evolved from a simple wiki suggesting people band together to download automatically generated lists of video IDs via a crude automated script to a centralised, distributed batch management system which assigns unique video IDs to volunteers' machines for download. The system, developed by Alex Buie in less than 48 hours, is now the recommended way to preserve the content and to avoid duplicate downloads. Watch videos roll in live here thanks to PubNub and read on to find out how you can help.
The clock is ticking to download as much as possible by the 29th of April — before Google throws the switch. Thereafter, Archive.Org has assigned a 140TB buffer for uploads into its 1 petabyte of storage space to house the preserved content. After the cutoff, downloaders can offload their content at their leisure.
The team from around the world, spearheaded and coordinated by Jason Scott, has been working solidly and altruistically. No selection criteria have been applied to the content; the idea is to preserve everything, if possible; however, various team members have been working on collating word lists for searches: by concept, year, country, etc. In this way they have a growing master list of some 2,449,000 unique video IDs to be processed, of which around 15% are already saved.
You can help out. Please visit the wiki and the #googlegrape irc channel on EFNET, download the scripts and donate a little bandwidth and storage to preserve as much as possible before the cutoff date. There is less than a week left."
Link to Original Source
Comment: Wider target audience (Score 2, Insightful) 189
by nstrom (#31169046) Attached to: Rogue PDFs Behind 80% of Exploits In Q4 '09
Attacking Adobe Reader means that people who use Firefox are also at risk. For a long while, the popular security paradigm on Windows was that if you used IE you were at risk, but if you kept up with Windows Update and used only Firefox to browse the web you were pretty much safe from the majority of the exploits in the wild. Now that malicious PDFs are out there in force, users of Firefox are vulnerable once again.
Comment: Gmail too (Score 1) 135
by nstrom (#30688544) Attached to: Hotmailers Hawking Hoax Hunan Half-Offs
I'm not too sure that gmail isn't a target... A couple weeks ago, my friend's Gmail account got hacked and the spammers sent the following message out to all his contacts:
I am willing to give you a surprising happiness! Yesterday i had
received the digtal camera which i ordered from ---www.wwooz.com--
last week. its quilty is very good , and the price is very low.i am
satisfied with it.
If the products you expect is on the site, it is a wise choice for you
to buy from this site.I believe you can get many surprising happiness
and concessions.
Incidentally,they import the products from korea.all of the products
are brand new and original. they have good credit and many good
feedback.they are worth trusting for us .
Best wishes !
Comment: Re:reason 1 down. reason 2 in que. (Score 1) 187
by nstrom (#29949552) Attached to: uTorrent To Build In Transfer-Throttling Ability
Closer makes no difference; effective transfer speed does (which BitTorrent already uses to prioritize peers). I can get much better download rates from the guy in Finland with a 100mbit connection than I can from the guy across town on my same cable ISP with an already saturated 384kbps upload.
Comment: Re:URL Shortners Are Bad (Score 5, Informative) 145
by nstrom (#29117749) Attached to: URL Shortener tr.im To Go Community-Owned, Open Source
The original use of URL shortening services was to prevent link breakage in e-mail and NNTP clients that wrap lines after 80 characters. They still work great for this. http://tr.im/wGhA works a lot better in e-mail than http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=1600+pennsylvania+ave,+dc&sll=37.0625,-95.677068&sspn=49.624204,58.359375&ie=UTF8&ll=38.898732,-77.038515&spn=0.012007,0.014248&z=16 . I've also heard shortened links used to good effect on internet radio, where it's easier to direct listeners to a tinyurl than a long forum URL when there's discussion about a certain thread.
Comment: WGA forum (Score 5, Insightful) 311
by nstrom (#26290487) Attached to: Microsoft Uses WGA To Obtain Record Jail Sentences
I'm betting that a good amount of the information used in this case came from posters on the WGA forum, where people can post if they're having issues with WGA. One of the tools available in that forum is a WGA diagnostic tool which will generate a sanitized text dump of a user's Windows validation information. Most cases on that forum are people whose brother, cousin, or sketchy PC shop installed a common warez release of Windows on their systems, but several are people who bought apparently legitimate software from resellers which failed validation and later turned out to be counterfeit. Microsoft got in touch with these users, identified the resellers, and I'm betting that this news story is the result.
+ - Sony BMG Coughs Up $1.5 Million+ For Rootkit CDs
Submitted by
junger writes "Sony BMG has coughed up another $1.5 million to settle litigation in Texas and California after the rootkit fiasco of late last year, and has agreed to improve its disclosure practices and not distribute any more CDs with hidden copy-protection schemes. This isn't the first time they've settled a rootkit case, and they actually settled the California lawsuit the same day it was filed."
Chronicles of Harry
2014 Mar 7
19:47:00 - My computer's display is going nuts
Probably not the entry you wanted me to write, but, whatever that entry might be, I probably don't have time to write it. (Or even this, really, but...)
So, the other week, I restarted my computer for some reason, and, when it turned on, found that the display was messed up; it didn't stretch all the way to the right side of the screen, leaving a black bar there, and the bottom of the display extended past the bottom of the screen. Like the software thought my screen had a different aspect ratio than it does.
Restarting the computer didn't help. Going into the "Monitors" settings couldn't fix it; the various resolutions listed simply weren't of the right aspect ratio, and all left the same problem as before. I had downloaded some updates that day and had assumed those caused it; I would have tried uninstalling them, but I couldn't remember what they were. Should I try digging into xorg.conf? Well, I considered it, but ultimately decided screw it, I'll just upgrade the whole damn OS! I was still using Linux Mint 13, after all; they're on 16 now.
And so I downloaded Mint 16, booted up a Live CD (really a USB drive) -- and yes, the problem still occurred when running off the Live CD, which I think maybe I should have noticed as suspicious[0] -- and, hey, it solved the problem! (It also did other neat things like, now I can hibernate my computer again. Not that I really need to; suspend is pretty damn good at saving power.) And so it went.
...until today, when I restarted the computer to find the problem occurring again. What?? What the hell am I supposed to do about that? Uh... restart it and hope it goes away? Hey, it worked!
So offhand it looks like what we have here is in fact an intermittent hardware error, where it misreports the screen resolution to the software. Yikes. It is a bit odd that before, it just happened to go away right after I upgraded the OS, but, I dunno, what else can I conclude? Guess I should take it to a repair place maybe? Well, for now I'll just resort to "turn it off and on again" as necessary, and hope it doesn't get stuck in that state again like it did that first time...
[0]In fact, the problem happens really early in the boot process, before the machine even gets to the bootloader! That should have struck me as suspicious, but...
2014 Jan 30
19:54:00 - I'm a little confused about the recent hubbub over "There are no black holes"
So recently there's been this whole big fuss over Stephen Hawking claiming there are no black holes with event horizons. People are saying "There are no black holes!", other people are saying "What are you talking about, he just said there are no event horizons", and in general a lot of silly things are being said when, as I understand it, nothing complicated is going on, just a bunch of sloppiness with language. As far as I can tell, this is all just a question of what we mean by the phrase "black hole". (Maybe. See bottom.)
Originally, "black hole" meant a body so massive even light could not escape from it, regardless of direction of travel. That is to say, having an event horizon was a defining characteristic of a black hole. These were hypothetical objects and none were known.
Then, candidates for black holes were found, and more evidence confirmed it, and astronomers were like "Yay, we've found black holes!" Of course, such objects weren't necessarily black holes as such; just something close to it. Still, one way or another, these objects have become referred to as black holes.
Now Stephen Hawking is saying there are no event horizons. This does mean there are no black holes in the original sense. It does not mean the various objects that have been called "black holes" do not exist, just that they are not quite black holes in the original sense.
Now here's what's bugging me here -- what in space is new about any of this?
Some time ago I asked relativist Sarah Kavassalis on Formspring (now lost to the internet) just how sure we are that black holes can actually form in finite time. I don't have her exact answer to hand, obviously, but essentially it was: Black holes cannot form in finite time in general relativity. What astronomers call black holes, and what relativists call black holes, are not really the same thing, and one day she should really get around to writing something about this. (Apologies to Dr. Kavassalis if I'm misremembering.)
So if that's really true... what's the big deal? Didn't we already know this? (Of course, if that's really true... why does it seem nobody really talked about this before now?)
Anyway, it seems to me there's really not much going on here, it's just a language issue. I am worried, though, that I might be missing something here. If that's the case, can anyone clarify?
2014 Jan 23
20:31:00 - Appendicitis lost (for real)
So about a year ago I noted that "Appendicitis: The Movie" had been taken down from YouTube and was, it seemed, lost, and the only person who might have a copy was not someone any of us were really in contact with any more. But there was still the possibility that I might be able to find the guy and recover it from him.
Long story short, I emailed him, asked him if he might be willing to try to dig it up, he said he didn't have time right then but maybe he'd do it later. Then we both forgot about it.
A few days ago I remembered about it and thought to ask him about it again. And the answer is no, it's not really going to be possible; it's just lost.
Oh well. We at least have my attempts to explain it. :)
2014 Jan 22
02:45:00 - Mystery Hunt 2014 roundup
EDIT: Ugh, LJ mangled this due to some bad syntax on my part. Managed to pretty well recreate it though.
So! Solutions are up for Mystery Hunt, and you know what that means!
This year I joined Donner Party of N for the Mystery Hunt. Despite a number of people expressing interest earlier, I didn't really get much of anyone from Truth House to help out unfortunately (though Sam and Beatrix each joined in briefly). Oh well.
The story is -- I thought Strange New Universe still existed, since I still got mail from Mystery Hunt HQ on their mailing list. But a few days before the Hunt I realized -- I'd never gotten any mail from the actual *team* on the mailing list. So I emailed Tim Black, a friend of Youlian's who'd officially put me on the team last year, and asked him what was up. I also sent an email to Kevin Carde asking if I could join his team. Well, I got a reply from Tim; SNU had dissolved he said, but I'd be welcome on Donner Party of N. And so it was. (It later turned out Kevin was also on Donner Party this year.)
Donner Party didn't have a wiki, only Google Docs, which was a bit of a pain; when Sam was trying to help out I was just like "yeah I can't figure out how to let you access the spreadsheets unfortunately". Still, it worked. It had to; most of the team was off-site, based in Chicago. Apparently we had so few people on-site that they had to make special allowances to allow us to do the runaround. (Did we even make it to the runaround? I don't know what the rules were.) Towards the end, for whatever reason, there just didn't seem to be many people around and so I was largely working solo. (Everyone working on the runaround? But there's no way we could have made it there by then.)
We ended up solving about 40 puzzles, which is apparently considered pretty good for a team our size. We hadn't unlocked the Knights round when the coin was found, and still hadn't unlocked much of the Tea Party and Mock Turtle rounds. Oh well. I tried a little bit of solving, mostly solo, after the coin was found, but not very much.
Anyway! What follows is obviously spoiler-filled, so...
[Cut for spoilers]
...and I should probably go get some actual work done now.
2014 Jan 13
20:37:00 - Nobody is going to get this joke
Today was this semester's organizational meeting of the student combinatorics seminar, when we decide on what topics we want to hear about and try to see who we can get to talk about them. One topic last year that we agreed on was the Arctic Circle theorem; however, we ultimately didn't end up getting anyone to talk about it, so that never got done. So this semester, the Arctic Circle theorem was first on the list; we didn't even count votes for it, Chris just drew a box around it to indicate that it's in. Later, when circling the other topics we'd agreed on, the ones with lots of votes, he drew circles around them.
Later, someone who'd missed this asked, "Why does the Arctic Circle theorem have a box around it?"
Someone replied: "Because it's a frozen vertex."
2014 Jan 8
17:55:00 - A more findable location for the Space Alert stuff
I've written up my Space Alert max/min score analyses as a page for my website. :)
2013 Dec 29
17:57:00 - It's the co-files again
Just realized I forgot "coplanar".
EDIT: Also, "covariance", which does not really belong with "covariant".
Also, you know what? I should really put up my Space Alert min/max analyses on the website at some point... I'll put that on my todo list...
2013 Dec 28
20:09:00 - Red threats? Pah! (Also further notes on min/max Space Alert scores)
Of course I'm writing more about Space Alert.
So yesterday there was a bunch of Space Alert with Mickey and Nick and Oren. Oren has the expansion, so we played with those cards in and with specializations and with variable-range interceptors[0]. (Mickey was captain, I was comms, Nick was security.)
First game was all yellow threats -- except serious internal, those were white. (Oren apparently thinks mixed decks are silly.) It went pretty well, so next game, why not? All yellow threats!
When that went pretty well too, there was only one thing to do... yup. Red threats. External only -- internal was still yellow. This went less well. We drew the Executioner, and despite Nick's warnings, I screwed up and ended up getting knocked out by it; we survived, but largely due to luck. And to avoid getting knocked out, we did have to miss the mouse in 2nd phase and eat a delay. Still, the Executioner isn't as scary as I thought it would be. (The Seeker still scares me though. We didn't draw that.)
Did we want to put in red internal? Should we? That last one was pretty shaky... well, Nick wanted to, so we did. Not red serious internal though. (Those are pretty much all terrifying.)
And, uh, we survived. Of course, it helped that the phasing troopers happened to pass X on a phase-in turn and stay in lower red. Still -- we survived! I'm wondering if I might have to get the expansion sooner than I was intending. (Well, it's out of print right now anyway; and hopefully in a second printing they'll have fixed the misprints.) (Also, we still haven't tried any *campaigns* at Truth House...)
...I suggested we go to *all* red threats, but that was not done.
1. Red threats are not as scary as they seem at first; most just require a bit of counting. Not that the people back at Truth House are at all ready for them. The serious red internals still seem terrifying though (and so does Seeker).
2. Energy management is different with this crew. We had Mickey as captain and designated energy person; he handled all that, though he didn't always update it on the board. You had questions about energy, you talked to him. Different from how we do it at Truth House, with always explicitly marking energy and not having any central person manage it. Both seem to work, but maybe it depends who you have. Remember that I first played Space Alert with Mickey, Nick and Oren, so I learned this way first; when I introduced the game at Truth House, and tried captaining and doing this, I failed pretty badly at it. As did everyone else who tried it.
3. White shield trick is a bit easier when you're playing with specializations and one player is an Energy Technician!
4. Let me say a bit more about specializations, actually. I actually played with specializations before playing with heroic actions, but very little; I didn't really get a feel for how they were different from the base game. So let me say more on that.
To a large extent I'm not really a fan -- Data Analyst and Energy Technician, for instance, remove the spatial aspect of cracking a canister or hitting the mouse, which I feel like is making things a bit too easy. On the other hand, interceptors become much more useful when you have a Squad Leader who can get to them instantly; they're a bit too hard to use in the base game, IMO (I've mentioned this before). Not sure how I feel about Rocketeer. One thing worth noting though is that while the Squad Leader can repair their battlebots anywhere, they can't do a heroic battlebot action! That ability, often so crucial in the base game, is missing if you're playing with specializations. Medic and Special Ops offer some ways to get some of the same effect, but at more of a cost. This might be a good change -- as I've said earlier, heroic battlebot actions often obviate any need to repair the battlebots. (But perhaps not if you're playing double-action missions. Which we weren't.)
(Naturally I think the whole sort-of-experience-system thing is pretty stupid. If I do get the expansion, and play with specializations, I think I'll do it as "one game with just basic to get used to it, then straight to level 3 all the time".)
I'm really unsure what to think of Hypernavigator; we didn't play with one this time. (Mickey was Energy Technician, I was Pulse Gunner, Nick was Squad Leader, Oren was Rocketeer.)
5. Holy crap the expansion is so much more *complicated* than the base game -- both because of specializations and because the threats are way more complicated. Even the non-red ones can be really complex; take a look at Ninja! With the base game, you can learn the rules really well and predict all the weird interactions; with the expansion, that seems less possible. (What happens if your Special Ops is parasitized, and you would defeat the parasite (knocking them out) in the same turn they have a protected action? Is the parasite still defeated? According to the online FAQ, no! But that doesn't seem to follow from the other rules at all.)
EDIT next day: Actually, looking over the base game again, this is less true than I thought; it does have some weird interactions you couldn't really predict and just have to learn. But again, this is much more true of the expansion.
EDIT: You know what I just noticed? Even though defeating the Parasite requires knocking out a player[3], I'm pretty sure the Parasite is only worth 16 points if killed, not 18. I could be misremembering but I feel like I would have noticed that.
Also, the sometimes-relevant distinction between "moving really really fast" (what heroic movement does and what the Squad Leader does when he rushes to the interceptors), and "teleporting" (every other teleportation-like effect) is confusing. (When explaining heroic movement, I usually explain it as "teleporting", but in the expansion, that's not correct.)
So, the question: How does the expansion affect the lowest and highest possible scores?
Well, I can't answer that, because to seriously answer that would require sitting down and working it out and probably actually owning a copy of the expansion. But I can at least now say what is in there that would affect it.
Ways to get additional points beyond 69:
1. Double-action missions. These raise the total threat value from 7 (8 with 5 players) to 10 (12 with 5 players); I'm assuming we're using the standard ones and not the easier ones. However, in my opinion, it should really be considered a separate problem if you're doing this, so I'll consider this separately.
2. Red threats. This is the other obvious one. Surviving a red common threat is worth 4 points (as opposed to 3 for yellow and 2 for white) and the other numbers are derived in the usual way. (There's a little bit of variation which I'll describe in a moment.) But in fact there are smaller ones...
3. Data Analyst basic action -- use of this action gets you +1 point, so there you go.
4. Data Analyst advanced action -- use of this action allows you to get up to 4 extra points.
5. This one's not really that relevant, but it is technically possible. By using the Medic's or the Special Ops's advanced action against the Seeker, you can avoid being knocked out by it (though your battlebots will still be disabled). The Seeker is worth 15 points if killed instead of the expected 12 to compensate for the knockout effect, meaning you normally only get 12 points out of it despite its listed number being higher; this allows you to get 14 points out of it (with Special Ops) or 13 points out of it (with Medic). Of course, that's still less than you'd get from just killing a serious red threat instead.
So, with the expansion in, and 5 players, a perfect game becomes 8*8+25+1=90 points for a normal mission, and 12*8+25+1=122 for a double-action mission (not 117 as I said earlier). Whether these are actually achievable, who knows.
But what about the question of getting points *below* -28? (Here I'm assuming a normal mission.) Note that in the base game, we had an absolute lower bound of -36 (no positive points and all the penalties), a lower bound of -30 based on analyzing the audio tracks and how few threats you can let through, and an actual minimum of -28 (assuming I'm correct).
Let's address these in reverse order:
1. Plasmatic Fighter -- the Plasmatic Fighter can knock people out, but is a white common threat (!). This probably makes -30 achievable.
2. More slow threats. The expansion adds in more threats that have an initial speed of 1; IIRC, they all speed up later, but these should still be useful for lowering point values, just as the Man-of-War is. (The Juggernaut unfortunately isn't really useful for these purposes.) Many, IIRC, are yellow or red, but in this context that doesn't matter.
3. Calling in threats. Several of the serious red threats call in another threat; these threats are only worth the points of a red common threat (4/8) instead of a red serious threat (8/16). This is still too many points to be helpful for this, but it's worth noting. A called-in threat may not appear until quite late, making it easy for it to be neither killed nor survived. However there is one case that is helpful...
4. Sealed Capsule. The Sealed Capsule is the one red common threat that calls in another threat; it is worth no points at all. (Yellow and white threats never call in other threats.) Now that's helpful for reducing your score! Especially because, once again, the called in threat may not appear until quite late.
5. Hypernavigator basic action. The Hypernavigator's basic action can be used to have threats move one less space that turn. The application is obvious.
6. Hypernavigator advanced action. This allows the ship to jump to hyperspace after turn 10 or 11 instead of 12 (although there is always a "turn 13", no matter what). The application, once again, is obvious.

So, point 1 should allow us to achieve -30, and points 2-6 should allow us to achieve -36; indeed, they seem like overkill for achieving -36. Although, of course, I haven't checked this. But it may even be possible to get below -36...
7. Medic advanced action. Using this action costs you a point. So, with it, it's quite possible that -37 might be achievable! And that of course truly is an absolute lower bound.
And that of course is as far as I'm going to go with the matter until such a time as I actually get the expansion.
[0]Looking through the rulebook, I see variable-range interceptors are really only meant to be used with double-action missions. Well, whatever. We can play it how we like...
[3]Barring use of the Medic's advanced action, which could also be applied in the case of the Seeker -- of course, the Medic didn't exist then...
2013 Dec 25
17:51:00 - Moser's shadow problem; also unlocking some old entries
So, a while ago I wrote this entry about what it turns out is called Moser's shadow problem; shortly after I locked it because uh it turns out Jeff didn't want people knowing he was working on the problem. But it turns out that he has since solved that problem, along with Yusheng Luo. (And so in particular, that entry can now be unlocked.)
Worth noting here that in 1989, Chazelle, Edelsbrunner, and Guibas figured out that the answer is yes... if we modify the problem slightly, by allowing projections from finite light sources instead of just light sources at infinity. (I think technically they wrote it in terms of faces rather than vertices, but this makes no difference.) But allowing finite light sources can make a difference, so the original problem remained unsolved. Now, apparently, that gap is closed.
Unrelatedly, I'm also unlocking these two old entries. (Yeah, unlocking old entries has become more of a "when I feel it's appropriate" thing rather than a strict time-based thing.)
2013 Dec 21
01:34:00 - What is the lowest possible surviving score in a game of Space Alert? What's the highest?
In this entry I want to consider the questions, "What is the lowest possible surviving score in a game of Space Alert?" and "What is the highest possible score in a game of Space Alert?". (I like to say that dying is -∞ points, but that is not very interesting.)
Let's be clear on the parameters of the problem -- this is Space Alert without the expansion. (Because I don't have the expansion, and there doesn't seem to be a list of the cards available online.) 4 or 5 players may be used. The players are actively trying to achieve the lowest/highest score. Any deck of threats may be used, any set of tracks, etc.; we assume we are allowed to rig all the decks (damage tiles etc.) and that the players know in advance everything that will happen. The audio track must be one of the 8 normal missions provided.
Some terminology: Let's define the "total threat value" of a mission to be the number of common threats plus twice the number of serious threats. This is a useful quantity, because serious threats of a given difficulty level are (with a few exceptions) worth twice as much as common threats of that same level. Furthermore, it is constant across mission types -- it's equal to 3 for the test runs, 5 for simulation or advanced simulation (6 with 5 players), 7 for a mission (8 with 5 players), and if we include the expansion, it's equal to 8 for the easier double-action missions (10 with 5 players), and 10 for the standard double-action missions (12 with 5 players). So for our purposes now this quantity will always be equal to 7 or 8.
First, the maximum problem; this is the easier one. There's an obvious upper bound on the maximum possible score, which is 8*6+21=69. (8 total threat value, times 6 points for killing a yellow threat, plus 21 points from the window.) 69 points is a "perfect game"... but is it actually possible?
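(For anyone who wants the arithmetic spelled out, here it is as a scrap of Python; there's nothing in it beyond the numbers just cited -- total threat value 8 for a 5-player mission, 6 points per unit of threat value for killing yellow threats, and 21 points max from the window.)

    # Upper bound on the score: kill everything (all yellow), max out the window.
    total_threat_value = 8      # commons + 2 * serious, 5-player mission
    kill_points_per_unit = 6    # killing a yellow common threat
    max_window_points = 21
    print(total_threat_value * kill_points_per_unit + max_window_points)  # 69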
Well, after several hours of trying to construct such a scenario, I can report that the answer is yes. Solution (and hints towards my solution) is spoilered for those of you who really want to try this yourself. Note of course that there may well be other solutions that look very different; this is just the first one I found.
Use 5 players. OK, that's not really a hint. Use audio track 8.
Trajectories are as follows: T3 blue, T6 white, T4 red, T1 internal.
Threats are as follows: T+3 Overheated Reactor, T+4 Psionic Satellite, T+5 Nebula Crab, T+7 Juggernaut, T+8 Scout.
Players will move in the order Red, Yellow, Green, Blue, Purple.
Turn 1: Red hits C, all other players change decks. (Green, Blue, and Purple are delayed.)
Turn 2: Red changes decks, Yellow hits B, all other players do nothing.
Turn 3: All players press C. (7 points.)
Turn 4: All players press C. (7 points.)
Turn 5: Red uses heroic B (the other side of which shows upper red). Yellow presses B. (Overheated Reactors killed.) Green moves blueward. Blue changes decks. Purple heroically rushes to upper blue. (Psionic Satellite reaches X. All players are delayed.)
Turn 6: All players do nothing.
Turn 7: Yellow moves redward; all other players press A. (Psionic Satellite killed; Nebula Crab takes 3 damage. Nebula crab reaches X.)
Turn 8: Red and Green move redward. Yellow and Purple change decks. Blue presses C. (Nebula crab reaches Y.)
Turn 9: Purple presses C; all other players press A. (Nebula Crab killed; Juggernaut takes 4 damage. Scout reaches X.)
Turn 10: Blue changes decks; Purple presses C; all other players press A. (Juggernaut killed.)
Turn 11: Red moves blueward; Yellow heroically rushes to lower white; Green presses B; Blue presses A; Purple moves redward. (Scout killed.)
Turn 12: All players press C. (7 points.)
...I didn't list what the *other* sides of the cards do, but it's pretty easy to fill these in in a manner consistent with the contents of the deck.
I do have to say, though, that the maximum problem seems kind of silly if you don't include the expansion, which raises this upper bound to 12*8+21=117. Or... is it even higher? I seem to recall reading that threats that call in other threats have their scores a bit weird. And, actually, I guess just the possibility of calling in other threats should raise that bound, shouldn't it? I don't actually have the expansion. Crap. Yeah, this is why I'm ignoring the expansion.
Anyway, so the minimum -- how about lower bounds? Well, there's an obvious lower bound of -36 -- all penalties, nothing else. Now wait, you say, shouldn't that be 7*2-36=-22? Because you get 2 points for surviving a threat? No! It's possible to neither kill nor survive a threat without dying; remember you only count as having survived a threat if it reaches Z. That said, we can certainly make a lower bound based on this idea.
The longest track, T7, is 16 spaces long; a threat needs to advance 15 spaces to reach Z. If it has speed 3 or more (let's assume everything is constant speed for now), it will assuredly reach Z regardless of when it appears. If it has speed 2, however, it will have to appear by turn 6 to reach Z; and if it has speed 1, it will never reach Z. One can come up with similar numbers for the other tracks; I won't go into details here. Note that straight-up "speed 1" threats do not exist -- there are two "speed 1" threats, the Man-Of-War and the Juggernaut, and both speed up. It's straightforward to compute numbers for both of these. (Be careful, the Juggernaut has the odd property that it sometimes arrives *sooner* on longer tracks.) There are no threats that lose speed so these are not a concern.
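(Here's that computation as a quick Python sketch, under my reading of the movement rules -- a threat moves on each turn from the turn it appears through turn 12, plus the final "turn 13" move, and on the 16-space T7 it needs to advance 15 spaces to get from X to Z. Taking the speeds as a list lets it handle speeding-up threats like the Man-of-War too.)

    def reaches_z(track_len, appear_turn, speeds):
        # speeds[i] = spaces moved on the threat's i-th move; it moves on
        # turns appear_turn, ..., 12, plus the final "turn 13" move.
        distance_needed = track_len - 1          # spaces from X to Z
        num_moves = 13 - appear_turn + 1
        return sum(speeds[:num_moves]) >= distance_needed

    # T7 is 16 spaces long:
    print(reaches_z(16, 6, [2] * 13))   # True:  speed 2 makes it if it appears by turn 6
    print(reaches_z(16, 7, [2] * 13))   # False: turn 7 is too late
    print(reaches_z(16, 8, [3] * 13))   # True:  speed 3+ always gets there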
[Previously here were two paragraphs giving a probable lower bound based on this idea. However, the numbers were wrong, and it ultimately yielded no improvement, so I don't feel like fixing it. Let's skip ahead a bit.]
I guess we'll have to analyze the actual audio tracks. There's only 8 of them after all. We'll consider both what happens with only 4 players and with only 5 players -- while having fewer threats may seem better, let's remember that with 4 players, one can only get -8 points from knockouts, not -10. After a bit of work, we find the following (not necessarily unique) minima for points let through:
Track 1: 10 points (put T7 on red); 12 points with 5 players.
Track 2: 12 points (put T6 or T7 on blue); 14 points with 5 players.
Track 3: 6 points (put T7 on blue and T6 on white, with T+6 being the Man-of-War); still 6 points with 5 players (put T5 on red).
Track 4: 10 points (put T7 on blue, with T+6 being the Man-of-War); 12 points with 5 players.
Track 5: 12 points (put T7 on red and T6 on white); still 12 points with 5 players.
Track 6: 12 points (put T6 on red); still 12 points with 5 players (put T7 on white).
Track 7: 10 points (put T7 on red and T6 on white); 12 points with 5 players.
Track 8: 6 points (put T7 on red and T6 on white, with T+5 being the Man-of-War); still 6 points with 5 players (put T5 on blue).
OK. And it's pretty clear these are indeed minima, so we get a lower bound of -30 points. Still, it's not clear whether this is achievable. Let's consider -- achieving this requires knocking out all players. But there's not too many ways to do that in the base game. Especially since whatever delivers the knockout must either A. be white or B. do so without reaching Z. The only white threats that knock out are the Battlebot Uprising and the Commandos (both serious internal). The only threats that knock out without reaching Z are the Battlebot Uprising (serious internal), the Executioner (serious internal), and the Power System Overload (common internal); however, the Executioner and the Power System Overload each lack the ability to knock *all* players out, and so if we are relying on one of them to deliver the knockout, we must have both. (Or rather, Executioner can knock all players out, but it can't both knock out all players and disable both battlebot squads.)
From this we can see that -30 is not achievable with track 8, since its only internal threat always reaches Z (meaning it must be white) but is also common, incompatible with the above. Track 3 can be ruled out for similar reasons. Thus -30 is not achievable. And since the only common threat that knocks out is Power System Overload, and it can't knock out all players by itself, one can deduce that -29 is not achievable either.
And -28, it turns out, is achievable. Solution is spoilered if anyone wants to sit down and figure it out themselves.
Audio track 8, 5 players. T7 red, T6 white, T5 blue, T4 internal. T+3 Hacked Shields (blue), T+4 Psionic Satellite, T+5 Man-Of-War, T+7 Frigate, T+8 Gunship.
I'm not going to go into details of how it's executed because, well, it's fairly obvious -- pick up battlebots, fiddle with the shields, get knocked out. You need 2 shield up on red. You also need exactly 2 shield up on white, which means first you'll have to hit B in lower red before filling up the white shield. Don't worry about the blue shield, it'll take care of itself. Point is, you'll survive with all players knocked out, both squads disabled, 6 damage on each zone, and a mere 8 points for threats survived, for a total of -28.
So we have an answer to our question: The maximum possible score is 69, and the minimum is -28.
...yeah, I basically spent all day on this instead of working. But now I'll never have to do this again! Unless I someday try to figure out how the expansion affects it, anyway.
2013 Dec 20
04:20:00 - 50 points? What does that even mean?
So. I've been playing a bunch of Space Alert lately with the people here at Truth House. (I bought it because I thought I might actually get people to play it, and, holy crap, it actually worked.) Mostly with Andy, Seth, Ryan, and Dan[0] (Ryan in particular got kind of obsessed with the game...).
When we started out we often did disastrously. Things got a lot better once we started taking marking energy usage seriously. Eventually Ryan was (along with me) playing every game and we made him captain because, well, he seemed to be the best at it. Before long we could consistently beat white-threat missions... well, unless we just really screwed something up. Which is always going to happen now and again.
So we started mixing in the ordinary yellow threats. It took a bit, but before too long we could handle those pretty well too. So we mixed in the serious yellow external threats. Those posed more of a problem. I'm not sure we ever really got to the point of really being able to handle those, but we certainly beat them a few times.
Ryan graduated this semester and was leaving Ann Arbor on Wednesday, so Tuesday we played a few games and, even though we weren't really sure we were ready for it, we mixed in the serious yellow internals as well. (Dan is also going back to Australia shortly.) The first few games they didn't come up, so in the last game we reduced the density of white threats in all the decks. Indeed, Contamination came up, and we killed that, but ended up dying anyway.
Then yesterday I played with Justine, Angus, and Nick -- all of whom had played before, but not in a while, and never with any yellow threats in. (Seth was also in for one game.) So we went down to only white threats, and, well, it was tough. I'm not convinced this is yet a crew with whom we want to add in yellow threats.
Anyway. All this is prelude to the story I want to tell right now. Tonight, Geoff Scott[3] hosted a game night. First we played a few games of The Resistance -- which, I gotta say, though it's apparently still weighted towards the spies (the "Mafia", the "bad guys"[4]), certainly seems to be easier for the Resistance (the "townspeople") than Mafia is. Resistance won 3 out of the 4 games. Basically, it seemed to me, the game forces the spies to act rather more openly than the Mafia do in Mafia. Of course a lot of that could be us not having the hang of it yet. In particular, in one particularly embarrassing incident, when straight-up asked "Are you a spy?" in the final game (I was[5]), I started kind of giggling; simply saying "No" at that point wouldn't have been believable so instead I went with "What the hell is this?" and only then stating "No I'm not a spy." It didn't work. But, like, it's a little ridiculous, because I'd never do that during Mafia; I'm a decent Mafia during Mafia. Somehow, though I was prepared for "Are you Mafia?", I wasn't prepared for "Are you a spy?". Nor was I generally a good spy most of the time. (First and last games I was outed very quickly; second I managed to convince people I was Resistance, but I tried to be too tricky and cast a "mission success" when I should have cast a "mission failure", and that ended up us getting us in the end.)
(You know, it only now occurs to me that I was on the losing side every game.)
Er, right, but the point was Space Alert. So after that we played Space Alert. I had intended to play things other than Space Alert, because I'd played so much of that recently, but Geoff was saying we should play something... well, I don't remember what the conditions he said were, but Space Alert fit perfectly, so I suggested it.
Crew was me (captain), Geoff (comms), Julian, Jordan, and KK. (We didn't bother appointing a security officer.) Julian, Jordan, and KK had all played before, but it had been so long that we did a full rules explanation anyway. (I suggested we play one round of simulation first, but Geoff wanted to start on full mission.) The first game or two didn't go very well -- lots of non-marking of energy especially -- but by the third game we'd gotten some of the hang of it (though we still died). The fourth game finally we survived (with 28 points or so). (Side note: Geoff has the first edition, where Stealth Fighter has its speed misprinted as 4 instead of 3[7]. I didn't realize what was going on there until well into the game so we decided to just play it as 4. It wasn't a problem.)
Then Geoff says, OK, one more round; and since this is our last round, let's play with all yellow threats! You must be kidding, I say. The people I play with at home with have played well more than you and they're not ready for such a difficulty level; we're certainly not. We're going to die horribly. Well, let's try it, Geoff says. It's a Vlaada Chvátil game; dying horribly is the point!
And so it was. All yellow threats. And somehow... we won. With 50 points. That's such a large number I don't even know what to make of it. OK, we've had some high-30s wins here at Truth House, but... 50 points. Every threat killed -- Minor Asteroid, Nebula Crab, Phantom Fighter, Major Asteroid, Power System Overload. Only one damage (on red zone, from when we killed the Minor Asteroid). And we even had time to look out the window, for 1 point in phase 2 and 3 points in phase 3. Everything basically went perfectly.
I mean I'm not even sure what to say about this. Obviously we had a bit of luck, but... the people in the math department are just that much better than the people here at Truth House? Well, I guess that's not too unbelievable, but it's still kind of astonishing. We really did have quite a bit of luck though -- the Overload could easily have wrecked us; we needed to coordinate perfectly, all of us hitting B at the same time in 3 separate stations, in order to bring it down. (Plus one additional point from me on the turn before, and one additional from one of those Bs being heroic.) And the one point of damage that got through knocked out the blue gravolift. Fortunately, we didn't need it to get into position -- we already had someone in lower blue to fire on the Minor Asteroid, I was already in lower white manning the main reactor, and Geoff used his heroic action to rush to lower red from the bridge. So in fact knocking out any of the gravolifts wouldn't have caused a problem... OK, I guess that's a bit less luck than I thought. Still!
(I'm pretty sure had the Executioner or the Seeker come up, it would have been a slaughter. Those two really scare me.)
Anyway, 50 point victory, holy crap.
Well, one thing worth noting -- there are basically two ways that threats can be harder; they can be harder to deal with (increased probability of failure), or do worse things to you if you fail to deal with them (increased disutility of failure). (Though sometimes they can be intermingled, e.g. with Psionic Satellite.) I'm of the opinion that the game is harder (and more fun) when things are primarily hard the first way rather than the second (with one or two of the second for splash)[8]. Firstly, the latter tend to make things too all-or-nothing; either you die, or you survive with a huge number of points. Except the funny thing was, Power System Overload really is a hard-to-deal-with threat! Which is why it's so surprising we survived; Major Asteroid is not exactly scary, you know? (The Nebula Crab is a little, but they just zapped it before it could reach X.) If it had been more of the second sort of threats, our survival would have been less impressive.
Secondly, the first are just more interesting -- you have to spend more time figuring out what the hell you're doing in the first place; you can't just quickly come up with a simple plan and do it. (For the same reason, I prefer more smaller threats to fewer larger threats.) I think a game where there's more damage, and more chances for damage to screw you up (rather than just killing you outright), is more interesting. Although, I guess, even with a lot of damage, it often just isn't a problem -- guns frequently take damage after you've stopped using them, for instance. And who really cares if the shields are damaged? It is a little disappointing. Actually there's a number of things in the game that just don't seem to come up much. I mean, the interceptors, obviously -- those seem to be mainly used if someone's picked up a squad of battlebots and has nothing else to do -- but the one that really sticks out in my mind is repairing battlebots; due to the fact that you never have more than 2 internal threats per game, the existence of heroic battlebot actions, and the fact that few threats both have 2 hit points and fight back, it's just rarely necessary. Probably more necessary in the expansion when you have double-action missions with threats (and also harder threats).
Actually, tonight's crew was pretty keen on shields. I tried to convince them that shooting more was really just better most of the time, but they were not so convinced. Personally, this experience playing with a little more shielding and a little less shooting seems a data point in favor of "No, it really doesn't work as well." Although -- Ryan came up with this neat trick with the shields that I'm teaching to people now. It is: On the first turn, someone refills the white shields; on the second turn, someone cracks a canister. Isn't that a bit early to be cracking a canister? you say. Or at least, I said at first. Well, maybe, but this seems to work pretty well -- running out of energy by the end of the game due to insufficient canisters has generally not been a problem for us. (Certainly not compared to running out of energy due to people failing to mark, or people getting delayed, or not having people in position... you get the idea.) As I recall, when I learned to play, and played with Mickey and Nick, we typically had an unused canister at the end of the game. And often there just isn't much else to do in the first few turns, so, why not? Shields on white!
ADDENDUM next day: Well, OK, one reason not to is that it will seriously hurt you if you draw Crossed Wires...
Maybe shields are also something we could use a bit more in cases where we're deliberately not killing a threat, as often happens with the asteroids. But that needs proper timing -- it should be after the shooting is done. Of course, often, after the shooting is done, there's no more energy left, and in that case having someone draw energy just to put it in the shields seems wasteful. If there's energy sitting around in a side reactor to shield with, it would make sense, but drawing from the main reactor to do it -- especially having two people to coordinate to do it -- doesn't seem worth it.
Anyway. I think I've rambled a bit. I think that's all I had to say on the matter. I'll stop here.
[0]This is Dan the first-year who lives here now, not Dan Zemke.
[3](From the math department.)
[4]Naturally, before the game started I kept making jokes about "Why do you assume the Resistance is the good guys? This game is just Resistance propaganda!" Later, after a few votes on forming a team failed, I commented, "What an inefficient system they want to replace our government with!"
[5]In fact, I was a spy in 3 out of the 4 games we played.
[7]But on the good side, with the first edition, the canisters aren't rolling away all the damn time.
[8]Although, the Seeker kind of maximizes the first of these, and I'm not sure I'd consider the Seeker fun. I guess it goes a little too far. But, well, I've yet to actually encounter the Seeker, so we'll see.
2013 Dec 18
17:25:00 - According to some Redditor called "BlackBrane"
[...] In my view this is exactly the sort of thing that demonstrates why it's wrong to dismiss string theory as "non-predictive". This attitude simply fails to grasp that string theory is not analogous to the standard model, but to quantum field theory. People still have to write down a model in order to make a prediction, just like always. Now that process has just taken on a totally different set of constraints, that in many ways are far more restrictive than before. Scientifically the only difference is that now we call it "choosing a vacuum", but it's scientifically the exact same thing as "postulating a Lagrangian" in quantum field theory, like the one that earned Higgs and Englert the Nobel Prize.
(Yes, this is a followup to this old entry.)
2013 Dec 16
16:57:00 - More physics I don't understand: Mass-energy equivalence
I used to think I understood mass-energy equivalence. Well, to the extent that you can understand it without seriously learning modern physics, anyway.
It seemed to be pretty simple: Mass simply is energy. That is to say -- well, there are two things we mean by mass, right? Gravitational mass and inertial mass, though of course relativity demands that the two are equal. Well, both of them are simply equal to energy -- this is only meaningful of course if energy has an "absolute zero", but, well, apparently it does. Energy warps spacetime around it, attracting things to itself; and the more energetic a body, the more its inertia. When something speeds up it gains mass and when it slows down it loses mass. An object on earth has slightly less mass than that same object 1000 miles above the earth. Rest mass of a particle I guess is just how much energy it takes to create that particle ex nihilo -- OK, I imagine there's a better explanation than that if you actually understand QFT, but I don't.
Except I keep reading that the notion of "relativistic mass" (i.e. γm₀) is not really right, and it shouldn't be used, because it acts like mass in some ways but not others, and it shouldn't be thought of as mass, and objects shouldn't be thought of as gaining mass when they accelerate.
(...on the other hand, there do seem to be a number of contexts where I see "mass" being used to mean "relativistic mass". So I dunno.)
This doesn't make sense to me. So you're telling me that if I have, say, 2 protons and 2 neutrons, and I put them together to form an alpha particle, they do lose mass because of the energy difference, but if they're moving really fast and I decelerate them, they don't? Huh?
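(For the record, the bookkeeping on the alpha-particle half of that is real and not even that small; using the usual textbook values:

    2(938.27 MeV) + 2(939.57 MeV) ≈ 3755.7 MeV for the separate nucleons,
    versus ≈ 3727.4 MeV for the alpha particle,
    a difference of ≈ 28.3 MeV, i.e. binding carries off roughly 0.75% of the mass.)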
"Energy is mass" I can kind of wrap my head around. "Energy is sometimes mass" is just weird.
2013 Dec 1
23:50:00 - lowdefectvsomething
So I decided today to try to get some work done other than writing. But what? I could work on this hard problem that I've gotten nowhere on... or that hard problem I've gotten nowhere on... oh! I'm currently writing the paper where I explain the computational side of what I've done, so this would be a good time to go back over the code and improve it -- there's a few optimizations I've intended to implement for a long time. Well, one big one, anyway.
Surprisingly, playing around with the program, I found a bug I never noticed before: when run in "all information" mode[0] and fed a number which is divisible by 3 a fair number of times, it sometimes just crashes rather than outputting anything.
I have no idea why this is -- well, OK, I haven't sat down and looked into it, but knowing how my code is structured[3], I already have a guess. Still. Guess I'd better fix that before worrying about how to speed things up...
[0]Meaning, give information about ||3^k n|| for all k≥0, instead of just outputting the stabilization length and stable complexity.
[3]At least, I think I remember...
2013 Nov 22
02:31:00 - Alert! Enemy activity detected. Please begin first phase.
Here's my log of the real missions played so far with my copy of Space Alert (sorted chronologically).
[The details, behind a cut]
...I'm not sure what to make of that.
2013 Oct 28
11:07:00 - Yet another thing messed up about the Guilty Gear storyline
Yeah, yeah, picking on easy targets. Still, I wanted to point this out.
In GGXXAC+ story mode, when you select your character, it gives you a short bio of them -- in-universe, it's supposed to be the PWAB's file on them. The first screen involves information like name, height, weight, birthday, blood type (because after all the game was made in Japan), and... "type".
What is "type", you ask? Well, here are the types for all the characters:
[Boring list, behind a cut]
So it's some weird combination of species/ethnicity/nationality that is applied quite inconsistently. Bridget is "English" while Axl is "Human/English", despite the fact that they're both human. Justice is type "Gear" despite not being the only Gear on the list -- but Dizzy is just listed as "non-Human" while Testament is listed as "Swiss"! Not exactly the most relevant information! (Sol is just listed as "Human(?)", but it's entirely possible the PWAB isn't aware he's a Gear; I get the idea that few people are, in-universe.) Similarly Eddie is listed as Spanish; seems like they've forgotten this is Eddie and not Zato. Meanwhile, A.B.A is listed specifically as "Homunculus", while Dizzy and Slayer are just "non-Human", and Robo-Ky gets the unhelpful designation of "Humanoid".
You could try to justify this in-story by saying the PWAB's files are quite disorganized, but somehow I don't think that's the reason for this.
2013 Oct 27
00:32:00 - Well-ordering, upper bounds, and fusible numbers
Have you heard of the fusible numbers?
They're a recursively defined set of nonnegative dyadic rational numbers. The definition is simple:
1. 0 is fusible.
2. If a and b are fusible, and |a-b|<1, then (a+b+1)/2 is fusible.
(If you allow |a-b|≤1 instead, the result is the same.)
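If you want to see a few of them, here's a quick brute-force sketch that just applies rule 2 repeatedly below a cutoff; the cutoff and round count are arbitrary bounds to keep the search finite (this is only an illustration, not anything from the paper):

from fractions import Fraction

def fusible_below(cutoff, rounds=6):
    # rule 1: 0 is fusible
    nums = {Fraction(0)}
    for _ in range(rounds):
        # rule 2: if a and b are fusible and |a-b| < 1, so is (a+b+1)/2
        new = {(a + b + 1) / 2
               for a in nums for b in nums
               if abs(a - b) < 1 and (a + b + 1) / 2 < cutoff}
        if new <= nums:
            break  # nothing new below the cutoff; stop early
        nums |= new
    return sorted(nums)

print(fusible_below(Fraction(2)))  # 0, 1/2, 3/4, 7/8, ..., 1, ...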
The fusible numbers are well-ordered, and it is conjectured that their order type is... no, not ω^ω, but ε₀. (No, I have no intention of seriously thinking about fusible numbers now or anytime soon. I've got plenty of work to do already.)
Anyway. My point is, if you read the proof of well-ordering, it says nothing about the order type. The paper actually proves that the order type is at least ε₀, but doesn't put any upper bounds on it. I mean, I suppose the Church-Kleene ordinal is a trivial upper bound, but that really isn't saying anything.
I note this because in my work, when I prove a set is well-ordered, the proof invariably also supplies immediately an upper bound on the order type. The hard part, strangely enough, tends to be finding a lower bound on the order type! Which is really odd, since that doesn't even require proving well-ordering, and seems like it should be easy, but that's what's occurred in the contexts I've dealt with.
But my point was about upper bounds. Proving well-ordering without an upper bound now feels weird to me, like nonconstructive or something. I don't know that it's actually nonconstructive in any real sense, but there's at least a superficial analogy there -- proving something exists without proving an upper bound on it. (Of course, quite a bit of my work is nonconstructive, at least at the moment. I'd like to have constructive versions of what I can prove, but it's not a priority. Of course right now my real priority is writing...)
I have to wonder if it's possible to extract an upper bound from the proof anyway? (Probably not.) Or if there's some other proof that yields an upper bound? One that isn't so difficult, that is -- I suppose if you could prove it's actually ε₀, that would count, but I mean something easier.
Well, like I said, not going to think about it.
2013 Oct 26
04:14:00 - A bit too long for Twitter
A few nights ago I had a dream wherein some people started singing the theme song to a silly science-fiction show that existed in the universe of the dream; the song made reference to some hazards our heroes wanted to avoid in their spaceship. In particular, it would be bad to fall into "the Aptenon, the poisonous gap in space".
...I just don't think I need to comment on that.
(What? I went to Berkeley for two weeks? Yeah, maybe I'll write about that sometime...)
2013 Oct 11
02:13:00 - Self-promotion time: Integer Complexity and Well-Ordering is now on arXiv!
Here's a link.
For those of you that are just tuning in: Let's define ||n|| to be the complexity of n, by which in this context we mean the smallest number of 1s needed to write n using any combination of addition and multiplication. (Note that this is the number 1, not the decimal digit 1. Allowing you to write "11" and say you've only used two 1s would just be silly.)
Then we can define the defect of n, denoted δ(n), to be the quantity ||n|| − 3·log₃(n); and we can then consider the set of all defects δ(n), as n ranges over all natural numbers. Surprisingly, this set is well-ordered, and its order type is ω^ω.
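For the curious, here's a quick dynamic-programming sketch of how ||n|| can be computed (a generic illustration, not the program from the paper), trying every additive and multiplicative split:

import math

def complexities(limit):
    c = [0, 1] + [None] * (limit - 1)  # c[n] will hold ||n||; c[1] = 1
    for n in range(2, limit + 1):
        # best way to write n as a sum a + (n - a)
        best = min(c[a] + c[n - a] for a in range(1, n // 2 + 1))
        # best way to write n as a product d * (n // d)
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                best = min(best, c[d] + c[n // d])
        c[n] = best
    return c

c = complexities(1000)
defect = lambda n: c[n] - 3 * math.log(n, 3)
print(c[6], defect(6))  # ||6|| = 5, so δ(6) = 5 - 3·log₃(6)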
...OK, this won't be surprising to you if you've spoken to me anytime in, say, the past several years. But it should be a surprise to almost everyone else. And it's pretty damned neat regardless.
Thanks are of course due to Joshua Zelinsky -- who, after all, defined the object of study of this paper in the first place (defect, that is, not complexity) -- and to Juan Arias de Reyna who not only helped a lot with the editing but also helped organize several of the ideas in the paper in the first place. And other people, but, well, you can check the acknowledgements if you really care about that.
We'll see where I can get this published. In the meantime, this should be quite a bit more readable than the old draft sitting around on my website.
Now I guess it's on to the next paper (for now)... or rather, it already has been for a while...
2013 Oct 3
17:17:00 - It is hard to be original
(It is possible I am misremembering who said what here.)
I was talking to Justine and Seth and Noelle yesterday and I don't remember the context but someone suggested making ants on a log. Only problem was, we don't have raisins at the moment. I have some prunes in my room, I pointed out. Those would be pretty large ants, says Noelle; beetles on a log, maybe?
Today I look at the Wikipedia entry and there it is mentioned, that exact variation with that exact name. I had to check to make sure the edit wasn't made today or last night.
All public logs
From digimend
Combined display of all available logs of digimend. You can narrow down the view by selecting a log type, the user name (case-sensitive), or the affected page (also case-sensitive).
No matching items in log.
id,summary,reporter,owner,description,status,resolution,keywords,cc,private
7041,trac-admin resync for OML,tcucinotta,cmaloney,"Hi, I just imported an svn repo for the OML project, but Trac is giving the usual error about the need for ""trac-admin resync (consult log for details)"", an operation that apparently I'm not able to do as project admin (and I have no idea where to go looking for that trac log). I see many posts like this, so I'm amazed that there is no button in the project admin interface to perform such an operation. I tried to disable the Trac feature and enable it again, in the hope that it would automagically reset itself (and my Trac is still empty), but it didn't work. On a related note, I'd like to know how to do import/export of the Trac contents, once I start populating it (pages and tickets). Thanks, regards, T.",closed,fixed,SOG,,0
Bugs
Showing 1 result of 1
# Summary Milestone Status Owner Created Updated
1 1 bug in OCTAVEConfig.cmake and 1 segmentation fault None open 2008-09-04 2008-09-04
Diff of /make-host-1.sh [c6989d] .. [7d4072]
--- a/make-host-1.sh
+++ b/make-host-1.sh
@@ -20,6 +20,9 @@
export LANG LC_ALL
+# Load our build configuration
+. output/build-config
# Compile and load the cross-compiler. (We load it here not because we're
# about to use it, but because it's written under the assumption that each
# file will be loaded before the following file is compiled.) |
I have a continuous integration build server (using Team City). When people check code in, I'd like to run a set of tests which run through a set of functional cases such as:
• Using an administrator account, I can create a document
• If I belong to the 'Editor' role, then I can access document x and document y.
and so on.
We have a number of unit tests which cover individual functions, but we really want this quick 'smoke test' to see if anything discrete has slipped through the net.
Should I:
1. Create a 'dummy database' with pre-populated users, permissions, documents, and create the tests to use data from this database?
2. Use some sort of mocking framework for this? What are the advantages / disadvantages here?
Or, is my thinking completely off?
Appreciate your thoughts.
2 Answers
If you are curious about the tradeoffs around mock objects, there are entire websites and books devoted to the subject. I will restrict my answer to the specific problem you described.
I never replace a real system with a mock system unless the real system does not meet my needs; a mock system requires time to write and maintain, and when a test fails, you will have to ask yourself whether the problem is in the product or in your mock system. You have not mentioned anything that would prevent using a dummy database for your smoke test. While you did not specify which database technology you are using, most database vendors provide an easy and reasonably fast way to restore a database from a backup and to rename a database (e.g. from "DB_backup" to "test_123").
(If you suspect there are reasons why using a dummy database will not meet your needs, please revise your question to include those issues.)
Hi, thanks for your answer. We currently do not have any reason to not use a dummy database. I think it would suit our needs fine. I'm new to QA and generally needed verification that using such a DB would not be going against any common QA thought or best practices. – christofr Sep 14 '11 at 14:56
The only thing I would be worried about is ongoing database development, e.g., new columns or tables being added that the code expects to have there could invalidate your old data. Just make sure you have an idea of how much work you will go through to keep your dummy DB up-to-date. – Ethel Evans Sep 14 '11 at 17:50
What you can do is run your tests within a transaction and assert your expected outcomes. When you are finished you can roll back the transaction. This will leave your database in the original state. This is easy to set up using Spring with JUnit. You would have to have your developers implement this test for you. You can also use Selenium 2 or WebDriver to run your application in a web browser if it's a web application.
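To make that concrete, here is a minimal sketch of the Spring + JUnit setup; the context file, service, and entity names are hypothetical, and Spring's test framework rolls back @Transactional tests by default:

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml") // hypothetical config
@Transactional // each test runs in a transaction that is rolled back afterwards
public class DocumentSmokeTest {

    @Autowired
    private DocumentService documentService; // hypothetical service under test

    @Test
    public void adminCanCreateDocument() {
        Document doc = documentService.createDocumentAs("admin", "My document");
        assertNotNull(doc.getId());
        // no cleanup needed: the enclosing transaction is rolled back
    }
}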
Play 2.0's Build.scala uses a pimped Project definition to do its magic; any additional settings you might need to add must be manually entered in k := v fashion. That works fine for the general case, but not for the specific; namely, when you need to set an sbt plugin's settings, which invariably come as a Seq[Setting[_]]. Here's an example of what predictably works:
lazy val main = PlayProject(appName, appVersion, ....).settings(
  version := appVersion
)
Now, how to get a Seq[Setting[_]] converted to k,v pairs so that the Play by-name call:
def apply(...., ...., settings: => Seq[Setting[_]])
actually works?!!
I've asked over on play-user, but good luck, a zoo over there, framework is taking off and core devs are clearly up to their ears...
2 Answers
Does this work?
….settings(mySeqOfSettings: _*)
@Debiliski thanks, tried that earlier, compiler complained with, "such annotations are only allowed in arguments to *-parameters". Serious, brow furrowing pain here, driving me nuts ;-) – virtualeyes May 22 '12 at 20:55
@Debs actually you are right in the singular case; I have several Seq[Settings[_]] to pass in, however; that's where I'm getting hosed. Perhaps some flatMap or reduce magic is in order – virtualeyes May 22 '12 at 20:58
(seq1 ++ seq2): _* – or do you want to eliminate duplicate keys? – Debilski May 22 '12 at 20:59
this works as well: "seqSettings.flatMap{x=>x}: _*", where I made seqSettings a Seq of Seqs. I like your ++ however, you get the nod ;-) – virtualeyes May 22 '12 at 21:03
Instead of flatMap{x=>x} flatten would also be possible. – Debilski May 22 '12 at 21:52
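Pulling the comment thread together, the combined call can look like this sketch (the plugin setting seqs and appDependencies here are hypothetical placeholders):

lazy val main = PlayProject(appName, appVersion, appDependencies).settings(
  // concatenate the plugin setting seqs, then splat the result
  (pluginASettings ++ pluginBSettings ++ Seq(
    version := appVersion
  )): _*
)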
I hit the same problem with play framework and sbt-buildinfo plugin. After a lot of trial and error, I ended up preferring applying the settings twice in a row. I felt like it looked more obvious what was happening in Build.scala: http://mfizz.com/blog/2013/04/auto-generate-class-file-build-info-play-framework
Only the first page works. When I click the links to the other pages, I get this error:
Object not found!
If you think this is a server error, please contact the webmaster. Error 404 localhost Apache/2.4.2 (Win32) OpenSSL/1.0.1c PHP/5.4.4
This is my code:
//connect to our DB
mysql_select_db('test1');

//prepare our variables
if (!isset($_GET['p'])) { $_GET['p'] = 0; }

$per_page = 6;

$sql  = "SELECT name FROM pagination";
$sql2 = "SELECT name FROM pagination ORDER BY id DESC LIMIT " . $_GET['p'] . "," . $per_page;

$query = mysql_query($sql2);
$rows  = mysql_num_rows(mysql_query($sql));
$page  = ceil($rows / $per_page);

while ($fetch = mysql_fetch_assoc($query)) {
    echo '<p>' . $fetch['name'] . '</p>';
}

for ($i = 0; $i < $page; $i++) {
    echo ' <a href="index.php?p=' . ($i * $per_page) . '">' . ($i + 1) . '</a> ';
}
1 Answer
This is an Apache error, meaning that you made some mistake in the HREF.
As far as I can see the href will yield "index.php?p=somevalue" which is a valid URL.
The only possibility I see is that you... don't have an index.php file? (e.g. your actual script isn't called index.php all lowercase, but something else)
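If you'd rather not hard-code the script name at all, one sketch of an alternative (basename() strips the directory part of the path):

$self = basename($_SERVER['PHP_SELF']);
echo ' <a href="' . $self . '?p=' . ($i * $per_page) . '">' . ($i + 1) . '</a> ';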
Yes, thank you, I had made a mistake there. – AxelKishore Jul 20 '12 at 6:25
The QA department that tests my apps at work uses an Oracle database that they all share. Things get really hairy with their test cases getting changed: a bug report gets filed, and I spend time on it just to find out the test case has been changed. Time wasted.
What I'd like is for dev and qa to all have our own copy of Oracle running on our machines, so we can protect our data and chase our tails... less.
The problem, which I understand, is we don't have funding for all those licenses. Using an open source database won't work because we have all kinds of PL/SQL packages and triggers that I'm sure tie us to Oracle.
Does anyone know of a way (or maybe an open source product) to "fake" an Oracle database? There are no performance requirements at all. I don't mean mocking objects (we do use those for unit testing), but an actual "listening on a port for your request" RDBMS. It's a longshot, but I have to ask.
You don't all have your own instances, I suspect, for one very practical reason - it makes collaboration a nightmare. You should have AT MINIMUM a dev instance of Oracle, a test instance, and production. I have worked at companies that had a dev, test, qa, preprod, and prod, which to me is a bit much. But the Express solution below should give you a playground to test out ideas and prototype. – GrayFox374 Aug 10 '12 at 20:44
I see your point. Thanks. – Stinky Aug 10 '12 at 22:48
3 Answers
Use Oracle Express for this purpose.
Oracle Database 11g Express Edition
Free to develop, deploy, and distribute
Oracle Database 11g Express Edition (Oracle Database XE) is an entry-level, small-footprint database based on the Oracle Database 11g Release 2 code base. It's free to develop, deploy, and distribute; fast to download; and simple to administer.
Oracle Database XE is a great starter database for:
Developers working on PHP, Java, .NET, XML, and Open Source applications
DBAs who need a free, starter database for training and deployment
Educational institutions and students who need a free database for their curriculum
Oracle Database XE can be installed on any size host machine with any number of CPUs (one database per machine), but XE will store up to 11GB of user data, use up to 1GB of memory, and use one CPU on the host machine.
Support is provided through a free Oracle Discussion Forum monitored by Oracle employees as well as community experts.
Ugh, you beat me by 8 seconds. – Mike Christensen Aug 10 '12 at 20:39
You can use Oracle XE (Express Edition) which is free.
You can download it here.
Assuming that each developer needs less than 11 GB of data in their personal copy of the database, have you looked at using the free express edition of the Oracle database? You can install that on your local machine or even deploy it in production free of charge. You can't use enterprise edition features but basic PL/SQL should work exactly as it does in whatever edition of the database you're using now.
It's not obvious to me, however, that this is really the solution to the problem you're having. If test cases are getting changed without that information getting communicated to developers, or test data that one person is relying on is being changed by some other person, creating more database instances with more copies of the same data isn't likely to be terribly helpful.

If you have a local copy of the database, you need some way of getting the current version of all the objects (tables, packages, triggers, etc.). You need some way of getting the data that a particular tester is relying on. You need some way of moving your changes from your machine to the shared databases in a way that doesn't stomp on the changes other developers are making.

None of these hurdles are insurmountable, but they do require a very solid build and deployment process -- otherwise, you end up with chaos where the version of code in your database is subtly different than the version of code in everyone else's database, and the test data in your system has slightly different characteristics than the test data that QA is using, leading to lots of bugs that are reproducible on one system but not another. If your current build and deployment process can't even ensure that test cases aren't changing while bugs are being investigated, I would tend to expect that adding more instances is going to make the problem worse, not better.
I've also found WM_CONCAT is mysteriously missing from XE. – Mike Christensen Aug 10 '12 at 20:42
@MikeChristensen - Well, WM_CONCAT is technically undocumented-- I'm not too shocked that undocumented features don't always work correctly. I'm guessing that they pulled all the Workspace Manager stuff out of the express edition. – Justin Cave Aug 10 '12 at 20:44
Ah makes sense. I try to use LISTAGG instead, but it's hard to filter out dupes with that function. – Mike Christensen Aug 10 '12 at 20:47
I am developing a drupal-based website where users are able to login to my site using Facebook.
Login works fine in IE8, IE9 and the latest Chrome and Safari, but not in the latest Firefox.
FB.Init() is configured to use cookies, and when logging in to Facebook in e.g. Chrome, a cookie named fbsr_ is correctly set and everything works as expected. It is set by the Facebook SDK (all.js), which in turn receives data in a postMessage event when a user logs in.
I have de-minified all.js to debug it. In Chrome and IE9, three postMessage events are raised. In Firefox, no XDM postMessage events are ever raised.
I have tried different Facebook accounts, different computers and I am stuck with debugging as I cannot determine exactly from where the postMessage events are/should be raised.
I have also noticed that xd_arbiter.php is not loaded at all in Firefox (using the 'net' tab of firebug) while it is loaded twice in Chrome. However, initialization of the Fb js api seems to correctly inject the iframe with xd_arbiter.php as src, into the DOM.
There are no js errors in the firebug console.
Any suggestions?
Martin Did you managed to figure this out ? Thanks. – user1055761 Nov 14 '12 at 12:59
Unfortunately not. After scratching my head for days and messing around with many different details of the Facebook integration, it started working. I still don't know what caused the problem and why it started working, but it has worked since. – Martin Nov 21 '13 at 7:50
The functions c32rtomb and mbrtoc32 from <cuchar>/<uchar.h> are described in the C Unicode TR (draft) as performing conversions between UTF-32¹ and "multibyte characters".
(...) If s is not a null pointer, the c32rtomb function determines the number of bytes needed to represent the multibyte character that corresponds to the wide character given by c32 (including any shift sequences), and stores the multibyte character representation in the array whose first element is pointed to by s. (...)
What is this "multibyte character representation"? I'm actually interested in the behaviour of the following program:
#include <cassert>
#include <cuchar>
#include <string>

int main() {
    std::u32string u32 = U"this is a wide string";
    std::string narrow = "this is a wide string";
    std::string converted(1000, '\0');
    char* ptr = &converted[0];
    std::mbstate_t state {};
    for(auto u : u32) {
        ptr += std::c32rtomb(ptr, u, &state);
    }
    converted.resize(ptr - &converted[0]);
    assert(converted == narrow);
}
Is the assertion in it guaranteed to hold¹?
¹ Working under the assumption that __STDC_UTF_32__ is defined.
2 Answers
For the assertion to be guaranteed to hold true it's necessary that the multibyte encoding used by c32rtomb() be the same as the encoding used for string literals, at least as far as the characters actually used in the string.
C99 specifies that setlocale() with the category LC_CTYPE affects the behavior of the character handling functions and the multibyte and wide character functions. I don't see any explicit acknowledgement that the effect is to set the multibyte and wide character encodings used, however that is the intent.
So the multibyte encoding used by c32rtomb() is the multibyte encoding from the default "C" locale.
C++11 2.14.3/2 specifies that the execution encoding, wide execution encoding, UTF-16, and UTF-32 are used for the corresponding character and string literals. Therefore std::string narrow uses the execution encoding to represent that string.
So is the "C" locale encoding of this string the same as the execution encoding of this string?
C99 specifies that the "C" locale provides "the minimal environment" for C translation. Such an environment would include not only character sets, but also the specific character codes used. So I believe this means not only that the "C" locale must support the characters required in translation (i.e., the basic character set), but additionally that those characters in the "C" locale must use the same character codes.
All of the characters in your string literals are members of the basic character set, and therefore converting the char32_t representation to the char "C" locale representation must produce the same sequence of values as the compiler produces for the char string literal; the assertion must hold true.
I don't see any suggestion that anything beyond the basic character set is supported in a compatible way between the execution encoding and the "C" locale, so if your string literal used any characters outside the basic character set then there would not be any guarantee that the assertion would hold. Even stipulating extended characters that exist in both the execution character set and the "C" locale, I don't see any requirement that the representations match each other.
Nice answer. Just to be clear: If he adds a call to setlocale, the assertion could fail, even if his strings are entirely within the basic character set? – Nemo Nov 1 '12 at 21:00
@Nemo If setlocale() were called with an argument other than "C", yes. For example setlocale("en_US.EBCDIC") (assuming that's a supported locale with the obvious meaning) on a system where the execution encoding is ASCII compatible would cause c32rtomb() to produce EBCDIC strings while std::string narrow would remain ASCII encoded. – bames53 Nov 1 '12 at 21:16
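To see that locale dependence concretely, here's a small sketch, assuming an implementation that ships <cuchar>; the locale name is an assumption, since available names vary by platform:

#include <climits>
#include <clocale>
#include <cstdio>
#include <cuchar>

int main() {
    // In the default "C" locale, converting a character outside the basic
    // character set typically fails; in a UTF-8 locale it succeeds.
    std::setlocale(LC_CTYPE, "en_US.UTF-8");
    std::mbstate_t state {};
    char buf[MB_LEN_MAX] = {};
    std::size_t n = std::c32rtomb(buf, U'\u00e9', &state); // e with acute accent
    if (n != static_cast<std::size_t>(-1))
        std::printf("%zu byte(s)\n", n); // 2 bytes in UTF-8
    return 0;
}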
The TR linked in the question says
At most MB_CUR_MAX bytes are stored.
which is defined (in C99) as
a positive integer expression with type size_t that is the maximum number of bytes in a multibyte character for the extended character set specified by the current locale
I believe this is sufficient evidence that the intent of the TR was to produce the multibyte characters as defined by the currently installed C locale: UTF-8 for en_US.utf8, GB18030 for zh_CN.gb18030, etc.
Is it possible to have the router return an error code (or an entire rack response) in response to a matched route?
E.g. I have moved from WordPress to a home-grown blogging solution. Search engines are hitting URLs like '/?tag=ruby' that need to return a 406 error. Instead, the router dutifully routes them to the same place as '/'. I can match the URLs I want to get rid of, but I don't know what to do with them.
share|improve this question
add comment
1 Answer
up vote 0 down vote accepted
This is not obvious, but it is effective.
match('/', :query_string => /.+/).defer_to do |request, params|
  raise Merb::ControllerExceptions::NotAcceptable,
        "Query String Unknown: #{request.query_string}"
end
In order to trigger a 406 error we need to raise Merb::ControllerExceptions::NotAcceptable, but if we do this while setting up the routes it helps exactly no one. Instead we need to wait until Merb is handling the request. This is what the defer_to block does. When the request comes in we raise the error, and it is caught and handled just as it would be if we threw this error from a controller.
One of the goals of my original question was to avoid having to go through all of the dispatching overhead. Instead this solution is dispatched through the exceptions controller, which is more computationally expensive than returning a bare rack response like [406,{'content-type'=>'text/plain'},['406 Not Acceptable']].
My problem is that I cannot use those two aggregate functions together, as in AVG(MIN(...)). First I need to obtain the minimum value from the entity termResultCurrent, and after that the average of all the minimum values. Probably the GROUP BY must be changed, including some other clause to fetch just the minimum value (I don't know if that is possible). Otherwise it might be done with subqueries, but I had problems with the GROUP BY... I know it is quite challenging, but I do appreciate your help.
SELECT AVG(MIN(termResultCurrent.position)), count(*)
FROM TermSubscription subscription,
Competitor competitor,
TermQuery termQueryPrev, IN(termQueryPrev.results) termResultPrev,
TermQuery termQueryCurrent,IN(termQueryCurrent.results) termResultCurrent
WHERE subscription.account.id= 274
and competitor.id = 379
AND termQueryPrev.term=subscription.term
AND termQueryCurrent.term=subscription.term
AND termQueryCurrent.provider.id= 1
AND termQueryCurrent.provider.id=termQueryPrev.provider.id
AND termQueryPrev.queryDate.yearWeek = YEARWEEK(FROM_DAYS(TO_DAYS('2013-01-01') - 7), 1)
AND termQueryCurrent.queryDate.yearWeek = YEARWEEK('2013-01-01',1)
AND termResultPrev.url.hostname MEMBER competitor.domains
AND termResultCurrent.url.hostname MEMBER competitor.domains
group by subscription.term.id
1 Answer
Use this:
select avg(s.minimum) from
(SELECT MIN(termResultCurrent.position) minimum
FROM TermSubscription subscription, ...
WHERE subscription.account.id= 274 and ...
group by subscription.term.id
) s;
I tried it and I got the following exception: org.hibernate.hql.ast.QuerySyntaxException: unexpected token: ( near line 2, column 5 [ select avg(s.minimum) from (SELECT MIN(termResultCurrent.position) minimun FROM TermSubscription subscription,... group by subscription.term.id) s;] I think I saw that solution before, but I wasn't able to implement it because of this exception. – user1847584 Jan 30 '13 at 11:28
This works in native SQL. You can use createSqlQuery() for this. Hibernate itself might not support these subselects. When using HQL you only do the inner select [SELECT MIN(...) minimum FROM ... WHERE ... group by ...] and you calculate the average manually in the Java code. That might even be faster than doing all in the database. – Johanna Jan 30 '13 at 11:36
Thanks Johanna. I really appreciate your help!! – user1847584 Jan 30 '13 at 11:37
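A rough sketch of that "average it in Java" approach from the comments (the session variable and the abbreviated query string are assumed):

// run only the inner GROUP BY query in HQL...
List<?> minima = session.createQuery(
    "select min(termResultCurrent.position) from ... group by subscription.term.id")
    .list();

// ...then average the per-term minima in code
double sum = 0;
for (Object m : minima) {
    sum += ((Number) m).doubleValue();
}
double average = minima.isEmpty() ? 0 : sum / minima.size();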
I am using matplotlib to create a 3d bar plot like this one.
Unfortunately matplotlib's 3D plotting capabilities are very limited and have a few bugs (e.g. it can't properly render some viewing angles).
MayaVI2 offers a solution to this problem but I have not found a way to make the size of the bars uneven. In matplotlib you can just give the bar edges as an array using the bar3d function.
Has anyone tried this with mayavi2?
I tried to use barchart from mayavi.
from pylab import *
from mayavi.mlab import *

# define bin edges
x = [1, 2, 3, 5, 7, 12]
y = [1, 2, 3, 5, 7, 12]
X, Y = np.meshgrid(x, y)
S = rand(len(x), len(y))
barchart(X, Y, S)  # the barchart call mentioned above
Which resulted in all bins with the same width.
Can you include what you have tried in mayavi? It is easier to help if we have something to start with. – tcaswell Nov 26 '13 at 18:37
just edited the question to include a sample code using mayavi. – user3037698 Nov 27 '13 at 10:12
In what way does your sample code use mayavi? – aestrivex Nov 30 '13 at 0:55
it uses in the boxplot function. – user3037698 Dec 2 '13 at 16:10
Sure about that? mayavi.mlab has no attribute 'boxplot' using the current between-releases development version. What versions of matplotlib and mayavi are you using? – aestrivex Dec 3 '13 at 15:59
What is the maximum length of the alert text of an iOS push notification?
The documentation states that the notification payload has to be under 256 bytes in total, but surely there must be a specific character limit for the alert text.
3 Answers
Sadly, the real limits are not documented anywhere. The only thing the documentation says is:
The maximum size allowed for a notification payload is 256 bytes;
But that only applies to the simple notification format; it doesn't say anything about the new enhanced notification format.
A MALCOM team member did some experiments and this is what he found:
• UIAlertView display limit is 107 characters. After that your message gets truncated and you will get a "..." at the end of the displayed message.
• We were able to send (and receive) messages as long as 1400 characters.
• Big/long messages (> 200 chars) take longer to get to the destination. We still need to check if this is due to the enhanced notification format or not.
Should be clarified that an in-app UIAlertView has no display limit; text over a certain length will go into a scroll view. An SMS or push alert probably has that 107-character limit, however. – azdev Aug 5 '11 at 18:34
And displayed text is not limited by payload, because when you use localization method it's no longer match 1:1. Payload may be short while final message may be much longer. The question is about displaying message it's not strictly related to payload maximum length. – Marcin Feb 17 at 11:54
It should be 236 bytes. There is no restriction on the size of the alert text as far as I know, only on the total payload size. So if the payload is minimal and only contains the alert information, it should look like:
That takes up 20 characters (20 bytes), leaving 236 bytes to put inside the alert string. With ASCII that will be 236 characters, and could be lesser with UTF8 and UTF16.
share|improve this answer
ASCII encoding violates the JSON spec, which requires UTF-8, UTF-16LE, UTF-16BE, UTF-32LE, or UTF-32BE. See ietf.org/rfc/rfc4627.txt; page 4. – Aaron Brager Nov 15 '13 at 20:04
ASCII is a subset of UTF-8, so it is always safe to transmit 8-bit ASCII over the wire. – Patrick Horn Dec 11 '13 at 3:35
The limit of the enhanced format notifications is documented here.
It explicitly states:
The payload must not exceed 256 bytes and must not be null-terminated.
ascandroli claims above that they were able to send messages with 1400 characters. My own testing with the new notification format showed that a message just 1 byte over the 256 byte limit was rejected. Given that the docs are very explicit on this point, I suggest it is safer to stay within 256 regardless of what you may be able to achieve experimentally, as there is no guarantee Apple won't start enforcing 256 in the future.
As for the alert text itself, if you can fit it in the 256 total payload size then it will be displayed by iOS. They truncate the message that shows up on the status bar, but if you open the notification center, the entire message is there. It even renders newline characters \n.
In my DSL project I have a shape with a number of decorators that are linked to properties on my domain class. But even though each decorator has a DisplayName property (set to a meaningful value), it does not appear in the generated DSL project. (I have not forgotten to regenerate the t4 files.)
Do I have to create another decorator for each property that only has the display name as the value I wish to display, or is there some other way that I can't figure out right now?
Thanks in advance
1 Answer
I assume by a display name for the decorator you mean you want the element in the generated DSL to appear as "Example = a_value" where a_value is the actual value and Example is the property name.
What I've done with this in the past is to create a second property, "ExampleDisplay", that's not browsable and is what the decorator actually points to. I then set the Kind property of ExampleDisplay to "Calculated". You then need to provide the method that the toolkit tries to call to display the decorator, which you can do in a partial class.
partial class ExampleElement
{
    string GetExampleDisplayValue()
    {
        return "Example : " + this.Example;
    }
}
This is not ideal, as you don't get a good way of setting the property on the DSL diagram; you have to use the properties window. (There are sometimes lags from the property window unless you hook into the update of the underlying property too.) Getting the slick editing in the GUI that the actual DSL toolkit does may be possible, but I haven't found out how.
It may be worth asking on the VSX forums if you haven't already done so.
Good answer, it's not perfect but I hadn't thought of that! Thanks. – AlexDuggleby Sep 18 '08 at 16:21
I have a legacy HTTP/XML service that I need to interact with for various features in my application.
I have to create a wide range of request messages for the service, so to avoid a lot of magic strings littered around the code, I've decided to create xml XElement fragments to create a rudimentary DSL.
For example.
Instead of...
new XElement("root",
    new XElement("request",
        new XElement("messageData", ...)));
I intend to use:
Root( Request( MessageData(...) ) );
With Root, Request and MessageData (of course, these are for illustrative purposes) defined as static methods which all do something similar to:
private static XElement Root(params object[] content)
{
    return new XElement("root", content);
}
This gives me a pseudo functional composition style, which I like for this sort of task.
My ultimate question is really one of sanity / best practices, so it's probably too subjective, however I'd appreciate the opportunity to get some feedback regardless.
1. I'm intending to move these private methods over to a public static class, so that they are easily accessible for any class that wants to compose a message for the service.
2. I'm also intending to have different features of the service have their messages created by specific message building classes, for improved maintainability.
Is this a good way to implement this simple DSL, or am I missing some special sauce that will let me do this better?
The thing that leads me to doubt, is the fact that as soon as I move these methods to another class I increase the length of these method calls (of course I do still retain the initial goal of removing the large volume magic strings.) Should I be more concerned about the size (loc) of the DSL language class, than I am about syntax brevity?
Note that in this instance the remote service is poorly implemented, and doesn't conform to any general messaging standards, e.g. WSDL, SOAP, XML/RPC, WCF, etc.
In those cases, it would obviously not be wise to create hand built messages.
In the rare cases where you do have to deal with a service like the one in question here, and it cannot be re-engineered for whatever reason, the answers below provide some possible ways of dealing with the situation.
Depending on the XML, you might try hand-rolling C# types that serialize to the target XML. The attributes in the System.Xml.Serialization library are fairly adaptable if you're patient enough. It's a pain, but the resulting abstraction can be worth it if you have to use your hand-rolled proxy in a lot of places (e.g. in a unit test framework). – Merlyn Morgan-Graham Sep 8 '11 at 3:56
4 Answers
Have you noticed that all the System.Linq.Xml classes are not sealed?
public class Root : XElement
{
    public Request Request { get { return this.Element("Request") as Request; } }
    public Response Response { get { return this.Element("Response") as Response; } }
    public bool IsRequest { get { return Request != null; } }

    /// <summary>
    /// Initializes a new instance of the <see cref="Root"/> class.
    /// </summary>
    public Root(RootChild child) : base("Root", child) { }
}

// base constructors added so the sketch compiles: XElement has no
// parameterless constructor, so each subclass passes its element name up
public abstract class RootChild : XElement
{
    protected RootChild(XName name) : base(name) { }
}

public class Request : RootChild { public Request() : base("Request") { } }
public class Response : RootChild { public Response() : base("Response") { } }

var doc = new Root(new Request());
Remember this won't work for 'reading' scenarios; you will only have the strongly-typed graph for XML that your application creates via code.
I like this option for creating this sort of xml; obviously if the service had been built using some variety of XML/RPC/SOAP/WSDL (or some other useful standard), that would mitigate the problem too. But when you're faced with a black box on the other end, this seems like the best way to deal with it. – EmacsFodder Sep 8 '11 at 3:45
Hand-cranking xml is one of the things which should be automated if possible.
One of the ways of doing this is to grab the messaging XSD definitions off your endpoint and use them to generate C# types using the xsd.exe tool.
Then you can create a type and serialize it using the XmlSerializer, which will pump out your xml message for you.
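A minimal sketch of that last step, where RootMessage stands in for whatever type xsd.exe actually generates from your schema:

using System.IO;
using System.Xml.Serialization;

public class RootMessage // stand-in for an xsd.exe-generated type
{
    public string MessageData { get; set; }
}

public static class MessageWriter
{
    public static string Serialize(RootMessage message)
    {
        var serializer = new XmlSerializer(typeof(RootMessage));
        using (var writer = new StringWriter())
        {
            // writes <RootMessage><MessageData>...</MessageData></RootMessage>
            serializer.Serialize(writer, message);
            return writer.ToString();
        }
    }
}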
Yes, it would be great if the endpoint did that, however it doesn't. This is a very basic XML/HTTP service, there is a single DTD that is used to describe the entire set of messages and responses, but aside from that there is nothing that would help with the definition of each explicit message and response type. – EmacsFodder Sep 5 '11 at 23:11
I noticed this article for constructing arbitrary XML with C# 4.0, which is great.
The source for the library is here - https://github.com/mmonteleone/DynamicBuilder/tree/master/src/DynamicBuilder
At this time there is a notable deficiency: no xml namespace support. Hopefully that will get fixed though.
As a quick example, here's how it's done.
dynamic x = new Xml();
x.hello("world");
Which yields:
<hello>world</hello>
Here's another quick example yanked from the article.
dynamic x = new Xml();
// passing an anonymous delegate creates a nested context
x.user(Xml.Fragment(u => {
    u.phone(new { type="cell" }, "(985) 555-1234");
}));
Which yields:
<user>
  <phone type="cell">(985) 555-1234</phone>
</user>
Having used the Ruby library Builder, I find this method of creating arbitrary XML similarly terse, to the point that it verges on "fun"!
I've marked this as the answer, because, even though it doesn't directly speak to "using a DSL to create arbitrary XML" it tends to remove the need due to the extremely terse and dynamic nature of the syntax.
Personally I think this is the best way to create arbitrary XML in C# if you have the v4.0 compiler and have to crank it by hand, there are of course much better ways to generate XML automatically with serialization. Reserve this for XML which must be in a specific form for legacy systems only.
Writing this in C# seems an awful lot of work. Design your DSL as an XML vocabulary, and then compile it into XSLT, writing the compiler (translator) in XSLT. I've done this many times.
Can you go into more detail, or perhaps link to some materials that do? – EmacsFodder Sep 5 '11 at 23:09
Sorry, don't have time (it would take time, it's a big question). And I don't really have enough information on your requirement to know where best to start – Michael Kay Sep 6 '11 at 23:06
I have been trying to use vim as a word processor, doing most of my initial drafting in it and then opening the file in AbiWord or OpenOffice to do my final printing.
The problem has been that, even with :set linebreak and :set wrap, when I open the file there are a ton of line breaks, and I have to go and remove them manually.
What I want is to be able to write at 40 columns wrap around softly so that when I open it up in Open Office, it gets restored to whatever that column width is, preserving things like carriage return but without all those linebreaks.
Vim does not distinguish between what you call "carriage returns" and "linebreaks", basically because ASCII does not have separate control characters for those two functions and there is no convention for distinguishing between those two functions in a plain text file. I wouldn't try to do any formatting, such as setting column widths, in Vim--leave that for Open Office. For your draft, set Vim's window width to 40, set tw=99999, and put actual carriage returns only at the ends of paragraphs. List items and such will have to be paragraphs in the draft. – garyjohn Dec 28 '10 at 17:50
3 Answers
Have you considered feeding your vim-edited text into a plain text processor? This can produce some remarkably good-looking results in PDF, RTF, or other final forms.
If you do this you don't need to worry about line breaks.
For soft-wrapping you need to use :set columns instead of :set textwidth or :set wrap (which are for hard-wrapping):
:set columns=40
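Combined with the question's settings, a minimal soft-wrap setup looks something like this sketch (nothing here writes hard newlines into the file):

" display-wrap long lines at word boundaries, never insert real newlines
set wrap
set linebreak
set textwidth=0
set columns=40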
Use this option in your .vimrc or at runtime:
:set textwidth=40
I believe that adds a linebreak – Angela Dec 28 '10 at 4:31
From a Mac OS command line, I'm looking for a command that will read my address book card and print my email address to stdout.
Doing it via an osascript command would be fine.
share|improve this question
add comment
1 Answer
What you are looking for is contacts.
The utility contacts gives you access to view and search all your records in the AddressBook database.
Without further ado, here are a few examples:
$ contacts -h
usage: contacts [-hHsmnlS] [-f format] [search]
-h displays help (this)
-H suppress header
-s sort list
-m show me
-n displays note below each record
-l loose formatting (doesn't truncate record values)
-S strict formatting (doesn't add space between columns)
-f accepts a format string (see man page)
displays contacts from the AddressBook database
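For the original question (printing just your own email address), the flags above combine like this; selecting only the email field is done with a -f format string, whose exact field specifiers are listed in the man page:

# print my own card, without the header line and untruncated
$ contacts -m -H -l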
Can a Windows XP Home desktop be made into a WiFi hotspot for using other wireless devices?
We are in a metal building and the DSL modem is in another central room.
Some adaptors come with software that lets you run a desktop or laptop as an access point - I had a ralink based edimax that worked pretty well for that. – Journeyman Geek Jul 28 '12 at 3:49
3 Answers
Not in the way you would like.
Yes, you can make the Windows XP computer act as an ad-hoc wireless access point, but it won't be a hotspot. However, the computer would need to be left on all the time.
Now... I want you to really, really think about this. Are you saying that this option is better than purchasing a $30 wireless router you can put next to that Windows XP computer?
Think about it. A wireless router typically uses a 12V DC adapter drawing 1 amp. So... that's 12 watts. The Windows XP desktop? The power supply alone is most likely drawing 300 or more watts. So, right there, running the wireless router is cheaper than leaving the Windows computer on all the time.
Now, since you want to make the XP box a wireless hotspot, you most likely have it connected to the DSL modem via an ethernet cable. Ok. You just need to connect the Wireless Router to the same ethernet cable. No, you don't connect it to the Internet port on the router. You connect the ethernet cable to one of the regular ports. You then set the router to be just an "Access Point". I've got an inexpensive Belkin 54g router sitting here, where you simply "enable" the access point feature, and give it an IP address that you want it to use on your network. It will then take whatever internet signal is available on the network it is connected to, and rebroadcast it over the wireless.
Of course, I'm certain there are details you have left out, certain factors that will give you reason to argue against this simple and time-tested solution... and I'm certain that you will provide those details AFTER I have posted this answer. However, this is still the best solution. Ethernet cable from the modem to the router, and the router in the room where you need the wireless internet access.
I found a guide on how to turn a desktop into a wireless hotspot. I am skeptical it would be worth the hassle and expense considering the availability of cost-effective wireless broadband routers. In my opinion it would be much easier to purchase and set up a wireless broadband router connected to your DSL modem. They are very simple and quick to set up.
Here is a link to CNET reviews of these routers. Many are available at department stores and places like Best Buy.
Also, here is a link to CNET's Wireless Buying Guide, which has helpful information.
Only Vista and above can act as a hotspot; Windows XP can only create ad-hoc connections – Lưu Vĩnh Phúc Sep 25 '13 at 4:52
Yes, there are applications that can do that for you, and also tutorials. Please check this: http://www.csparks.com/Wireless/index.html or watch this: http://www.youtube.com/watch?v=cRZajQ3LhyU
Welcome to superuser. While this may answer the question, it would be preferable to include the information in that link, and link after the post to prevent link rot – Canadian Luke Jul 28 '12 at 4:07
I have a brand new mac mini (core i5). I've yet to install OSX. I'd like to install OpenBSD from a USB drive. I do not want to dual boot, just run OpenBSD.
I used a different machine to transfer the amd64 install52.iso to a USB drive using dd if=install52.iso of=/dev/disk. I can mount the USB and the contents look good. However, I can't seem to get the mac mini to boot to the USB drive. Holding alt down during boot shows two partitions (one rescue one normal) but not the image on the USB drive. Holding c when booting doesn't seem to do anything but delay the boot of the OSX installer.
Any suggestions?
So I think the first problem is dd is not creating a bootable image. I put the USB into a windows machine and got "no boot sector" when booting from USB. – ceretullis Jan 7 '13 at 15:27
2 Answers
Simplest thing to do: on a 64-bit PC with a CD drive, put in the OpenBSD install CD and plug in the USB flash drive, boot, and install onto the flash drive as a regular disk. When finished, copy ftp.openbsd.org/pub/OpenBSD/5.2/amd64/*tgz into the directory /5.2/amd/ on the flash drive. Then plug this flash drive into the mac mini and at the boot prompt type "boot hd0a:/bsd.rd"; proceed as usual, and when prompted for sets choose "/5.2/amd". That should work (I don't have any mac); at least it works on any pc/netbook/notebook. Let me know, please ;)
This worked beautifully. bsd.rd runs on the mini great... bsd not so much, there is a kernel panic. Booting "bsd -c" takes you into config mode but the usb keyboard drops out. So it is going to take a bit of work to get this panic reported/fixed. – ceretullis Mar 19 '13 at 16:15
You need to install OpenBSD onto the USB flash drive first, just as if it were a regular disk install. Using dd just won't work.
"regular disk install" is a little misleading. You need to burn the iso onto the usb drive and set it as bootable, which is different from "installing" it. – Marcus Chan Feb 3 '13 at 10:14
@MarcusChan so how do I get an ISO onto the USB drive and set it as bootable? – ceretullis Feb 4 '13 at 17:30
There are numerous programs and techniques to do this; invariably for me the easiest way is Ubuntu's Startup Disk Creator tool or UNetbootin. I actually haven't seen a foolproof commandline solution yet... – Marcus Chan Feb 4 '13 at 20:49
I need to copy my entire SQL Server 2008 database including the schema and all data verbatim from one computer to another.
How do I do that?
4 Answers
Export the database from the original server and then import it into the new one.
To export, right-click over the database name, then Tasks > Backup.... If you make sure you select the "Backup type" as Full and the "Backup component" as Database, this should copy everything.
To import, right-click again, then Tasks > Restore > Database....
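For reference, the same backup and restore can be done in T-SQL; the database name and file paths below are placeholders:

BACKUP DATABASE MyDatabase
    TO DISK = 'C:\Backups\MyDatabase.bak'
    WITH INIT;

-- copy the .bak file to the other computer, then:
RESTORE DATABASE MyDatabase
    FROM DISK = 'C:\Backups\MyDatabase.bak'
    WITH RECOVERY;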
This is the best way to do it. Isn't backup/restore different than export/import? – Bratch Jan 12 '10 at 15:24
@Bratch - technically, but the end result is the same. – ChrisF Jan 12 '10 at 16:01
This question on Stackoverflow might be of some use to you. How can I export data from SQL Server.
Hope this helps some.
If you can take it offline for a while, it is quicker to detach it, copy the mdf and ldf files, and then just re-attach it.
Will all users and permissions and such be included? – howdy Jan 12 '10 at 15:11
@howdy: Yes, but you will need to re-create the server logins and associated with the database users (the logins exist at the server level, not the database). – Richard Jan 13 '10 at 10:28
Those can be scripted separately – baldy Jan 14 '10 at 7:41
Guidelines for selecting the appropriate picture format
Article ID: 272399
This article was previously published under Q272399
This article discusses the following topics:
• The various picture file formats that you can insert into Microsoft Office 2000 programs
• How to select the best format for a particular purpose
• How to select the appropriate picture resolution and color depth for your pictures
This article is not intended to discuss each file format and limitation in technical depth. Instead, it provides a broad overview of the primary uses of each picture format, some advantages and disadvantages of each format, and options such as color depth and resolution.
For detailed descriptions and limitations of the graphics filters that are included with Office 2000, click the article number below to view the article in the Microsoft Knowledge Base:
210396 Descriptions and limitations of graphics filters included with Office 2000
This article is divided into the following sections:
• Picture Formats
• Raster Pictures
• Vector Pictures
• Resolution and Color Depth
• Onscreen Display
• Printed Output
• Glossary
Picture Formats - Raster Pictures
BMP - Windows Bitmap
Windows bitmaps store a single raster image in any color depth, from black and white to 24-bit color. The Windows bitmap file format is compatible with other Microsoft Windows programs. It does not support file compression and is not suitable for Web pages.
Overall, the disadvantages of this file format outweigh the advantages. For photographic quality images, a PNG, JPG, or TIF file is often more suitable. BMP files are suitable for wallpaper in Windows.
Pros:
• 1-bit through 24-bit color depth
• Widely compatible with existing Windows programs, especially older programs
Cons:
• No compression, which results in very large files
• Not supported by Web browsers
PCX - PC Paintbrush
PC Paintbrush pictures, also called Z-Soft bitmaps, store a single raster image at any color depth. Paintbrush pictures are more widely used in earlier Windows and MS-DOS-based programs, and are still compatible with many newer programs. PCX pictures support internal Run Length Encoded (RLE) compression.
Pros:
• Standard format across many Windows and MS-DOS based programs
• Internal compression
Cons:
• Not supported by Web browsers
PNG - Portable Network Graphic
PNG pictures store a single raster image at any color depth. PNG is a platform-independent format that supports a high level of lossless compression, alpha channel transparency, gamma correction, and interlacing. It is supported by more recent Web browsers.
Pros:
• High-level lossless compression
• Alpha channel transparency
• Gamma correction
• Interlacing
• Supported by more recent Web browsers
Cons:
• Lack of support for PNG files in older browsers and programs
• As an Internet file format, PNG provides less compression than the lossy compression of JPG
• As an Internet file format, PNG offers no support for multi-image or animated files, which the GIF format supports
JPG - Joint Photographic Experts Group (JPEG)
JPEG pictures store a single raster image in 24-bit color. JPEG is a platform-independent format that supports the highest levels of compression; however, this compression is lossy. Progressive JPEG files support interlacing.
The level of JPEG file compression can be increased or decreased, sacrificing image quality for file size. The compression ratio can be as high as 100:1. (The JPEG format comfortably compresses files at a 10:1 to 20:1 ratio with little picture degradation.) JPEG compression works well with photo-realistic artwork. However, in simpler artwork with fewer colors, sharp levels of contrast, solid borders, or large solid areas of color, JPEG compression does not provide superior results. Sometimes the compression ratio is as low as 5:1, with a high loss of picture integrity. This happens because the JPEG compression scheme compresses similar hues well, but does not work as well with sharp differences in brightness or solid areas of color.
Pros:
• Superior compression for photographic or realistic artwork
• Variable compression allows file size control
• Interlacing (for Progressive JPEG files)
• Widely supported Internet standard
Cons:
• Lossy compression degrades original picture data.
• When you edit and resave JPEG files, JPEG compounds the degradation of the original picture data; this degradation is cumulative.
• JPEG is not suitable for simpler pictures that contain few colors, broad areas of similar color, or stark differences in brightness.
GIF - Graphics Interchange Format
GIF pictures store single or multiple raster image data in 8-bit, or 256 colors. GIF pictures support transparency, compression, interlacing, and multiple-image pictures (animated GIFs).
GIF transparency is not alpha channel transparency, and cannot support semi-transparent effects. GIF compression is LZW compression, at a roughly 3:1 ratio. Animated GIFs are supported in the GIF89a version of the GIF file specification.
Advantages:
• Widely supported Internet standard
• Lossless compression and transparency supported
• Animated GIFs are prevalent and easy to create with a large number of GIF animation programs
Disadvantages:
• 256-color palette; detailed pictures and photo-realistic images lose color information and look paletted
• Lossless compression is inferior to the JPG or PNG formats in most cases
• Limited transparency; no semi-transparent or faded effects like those provided by alpha channel transparency
TIFF - Tagged Image File Format
TIFF pictures store a single raster image at any color depth. TIFF is arguably the most widely supported graphic file format in the printing industry. It supports optional compression, and is not suitable for viewing in Web browsers.
The TIFF format is an extensible format, which means that a programmer can modify the original specification to add functionality or meet specific needs. This can lead to incompatibilities between different types of TIFF pictures.
Advantages:
• Widely supported, especially between Macintosh computers and Windows-based computers
• Optional compression
• Extensible format allows for many optional features
Disadvantages:
• Not supported by Web browsers
• Extensibility results in many different types of TIFF pictures. Not all TIFF files are compatible with all programs that support the baseline TIFF standard
Picture Formats - Vector Pictures
DXF - AutoCAD Drawing Interchange File
The DXF format is a vector-based, ASCII format that Autodesk's AutoCAD program uses. AutoCAD provides highly detailed schematics that are completely scalable.
Advantages:
• AutoCAD allows you to create highly detailed and precise schematics and drawings
• AutoCAD files are popular in the architectural, design, and engraving industries
Disadvantages:
• Limited support in Office 2000, which supports versions up through R12
• AutoCAD has a steep learning curve; however, other graphics programs are also capable of exporting DXF pictures
CGM - Computer Graphics Metafile
The CGM metafile can contain vector and bitmap information. It is an internationally standardized file format used by many organizations and government agencies, including the British Standards Institute (BSI), American National Standards Institute (ANSI), and the United States Department of Defense.
• International standard format
CDR - CorelDRAW!
The CorelDRAW! metafile can contain both vector and bitmap information. It is a widely used, artistic design file format.
Advantages:
• Widely used in the prepress and artistic design industries.
Disadvantages:
• Limited support in Office 2000, which supports version 6 and earlier
WMF - Windows Metafile
The Windows Metafile is a 16-bit metafile format that can contain both vector and bitmap information. It is optimized for the Windows operating system.
• Windows standard format that works well with Office 2000
EPSF - Encapsulated PostScript Format
The Encapsulated PostScript Format is a proprietary, printer description language that can describe both vector and bitmap information.
Advantages:
• Accurate representation on any PostScript printer
• Industry standard format
Disadvantages:
• The on-screen representation may not match the printed representation; the on-screen representation may be low-resolution, a different image, or only a placeholder image.
• EPS files are designed to be printed, not necessarily looked at. They are not the most suitable format to display information on the screen.
EMF - Enhanced Metafile
The Enhanced Metafile format is a 32-bit format that can contain both vector and bitmap information. It is an improvement over the Windows Metafile Format and contains extended features such as:
• Built-in scaling information.
• Built-in descriptions that are saved with the file.
• Improvements in color palettes and device independence.
Advantages:
• Extensible file format
• Improved features compared to WMF
Disadvantages:
• Extensibility results in many different types of EMF pictures. Not all EMF files are compatible with all programs that support the EMF standard.
PICT - Macintosh Picture
The PICT file is a 32-bit metafile format for the Macintosh. PICT files use Run Length Encoded (RLE) internal compression, which works reasonably well. PICT files support JPEG compression if QuickTime is installed (Macintosh only).
Advantages:
• Best file format for on-screen display on the Macintosh
• Best printing format from the Macintosh to a non-PostScript printer
Disadvantages:
• Fonts may be represented incorrectly when moved cross-platform
• QuickTime must be installed to view some PICT files correctly
Resolution and Color Depth
This section discusses the appropriate color depth and resolution for raster pictures. If you save pictures with the proper resolution and color settings, you create smaller files. Smaller files mean smaller, faster documents and presentations. It is in your best interest to make a picture as small as possible, given your picture usage requirements.
On Screen Display
Number of colors         Internet use                      Non-Internet use
1 (black and white)      GIF at 72 pixels per inch (ppi)   GIF at 72 ppi
16                       GIF at 72 ppi                     GIF at 72 ppi
256 (simple picture)*    GIF at 72 ppi                     GIF at 72 ppi
256 (complex picture)*   JPG at 72 ppi                     JPG at 72 ppi
More than 256            JPG or PNG at 72 ppi              JPG, PNG, or TIF at 72 ppi
Note Microsoft recommends a resolution of 72 pixels per inch, because most monitors have between 60 and 80 pixels per inch. Saving at a higher resolution does not result in a higher quality display, because your monitor can't display more pixels than physically exist in the monitor. You should calculate the pixels per inch according to finished size, not starting size. For example, if you are scanning an 8.5-by-2-inch letterhead for use on a Web page with a finished width of 2 inches, you would scan at 72 ppi for 2 inches, for a total of 144 pixels. The resulting file looks great when sized to 2 inches and displayed on a monitor.
*Note At 256 colors, JPG files offer a higher level of compression than GIF files do. However, JPG compression does not compress some simple files as well as GIF compression does.
• If your picture is grayscale, has large areas of one solid color, or has areas of high contrast (sharp differences between light and dark areas), choose the GIF format.
• If your picture is in color and contains several different colors (hues) that are similar in lightness or darkness (value), choose the JPG format, because it offers better compression. JPG compression works according to hue and works well with different hues of a similar value. JPG compression does not work as well with similar hues at different values.
Printed Output
How to create good printed output is a complex subject, because of the vast number of printers available and the capabilities of each to produce color and grayscale output. The primary factor in creating quality output is the number of lines per inch (LPI) that your printer is capable of.
To print in color or grayscale, a printer must print in halftones. Halftones are arrays of dots that are arranged in a grid and represent each image pixel as a shade of gray. For a dark gray, most of the dots in the grid are filled in, whereas for a light gray, only a few dots are filled in on the grid. The size of this grid is determined by the LPI setting for that printer. The higher the LPI, the smaller the grid, and the fewer shades of gray the printer can render.
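For example (illustrative arithmetic): a 600-dpi printer screened at 75 LPI uses halftone cells of 8 x 8 printer dots (600 / 75 = 8), so each cell can render about 65 levels of gray (the 64 dots in the grid, plus the empty cell).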
To print in color, the printer must print overlapping lines of colored dots, each at a different angle from the other, and slightly offset so that they do not completely cover each other. The rotation of each of these lines is known as the screen angle and is represented in degrees of rotation of the lines of dots that make up that color.
The following table helps you select the optimum scanning resolution in dots per inch (dpi).
Printer type       Output dpi   Output LPI   Scanning ppi
Laser printer      300          55-65        120
Laser printer      600          65-85        150
Ink-Jet printer    300          50-60        110
Dye-Sub printer    300          55-70        125
A good rule is to multiply the LPI for your printer by two, to calculate your target scanning resolution. To find out your printer's LPI, check your printer documentation.
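For example, the 600-dpi laser printer in the table above is rated at 65-85 LPI; taking roughly 75 LPI as a midpoint gives 75 x 2 = 150 ppi, which matches the suggested scanning resolution in the table.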
Note You need to experiment when you apply this general rule. Some printers support very high resolutions. If you save your picture at more than 300 ppi, larger pictures may take up large amounts of disk space and may slow down other operations on your computer. Multiple large pictures in a document could cause a program or Windows to stop responding. For more information about how to determine the size of bitmap pictures, click the article number below to view the article in the Microsoft Knowledge Base:
132271 Importing bitmaps: Determining size and memory requirements
The only exception to this rule is with pure black and white, or "line art" images. These images use 1 bit to store color information. With these images you should scan at a 1-to-1 ratio. If you have a 600 dpi printer, you should scan at 600 ppi in Line Art mode.
If you want your picture to be in grayscale or to have fewer than 256 colors, then use either the TIFF or GIF format. The TIFF format is the printing industry standard for graphics, because it does not use a lossy compression scheme, which other formats such as JPEG do. It also supports multiple levels of transparency, which few other formats do.
If the picture has more than 256 colors, save it in the TIFF or PNG format. Microsoft recommends the PNG format if you need transparency; otherwise use the TIFF format.
You should still save your picture at printer resolution for the finished picture size. For example, assume that you have an 8.5-by-2-inch letterhead, and you want to print it at a size of 2 inches. If your printer supports 600 dpi and an LPI of 85, set the picture resolution to 150 ppi at 2 inches, for a size of 300 x 71 pixels.
Note If you are saving a picture for use in Microsoft Publisher 2000, and you want to separate areas of the picture into different spot colors, click the article number below to view the article in the Microsoft Knowledge Base:
264870 How to assign and separate spot colors in EPS graphics in Publisher 2000
• Alpha Channel - An alpha channel describes an area of transparency in a picture, which allows a background to show through. An alpha channel allows over 64,000 levels of transparency, which makes semi-transparent and blended effects possible.
• Color Depth - The number of colors in your picture. Color depth is categorized by bit depth. If you use a deeper color depth, there are more colors in the picture, but it also increases your file size.
• 1 bit - Black and white only
• 8 bit - 256 shades of grayscale, or 256 colors
• 16 bit - High Color, 65,536 colors
• 24 bit - True Color, 16,777,216 colors
• 32 bit - True Color, 4,294,967,296 colors
• Compression - Compression is a mathematical scheme that makes a picture file smaller by removing redundant information. There are two types of compression: lossless and lossy.
• Compression, Lossless - Lossless compression is a compression scheme that puts a priority on maintaining the integrity of the original picture. When the picture is uncompressed, it maintains the same resolution and picture quality of the original, uncompressed picture.
• Compression, Lossy - Lossy compression is a compression scheme that puts a priority on producing a small picture file, even at the expense of picture quality. Lossy compression can produce smaller picture files than lossless compression; however, when you uncompress the picture, some of the original picture data is lost and cannot be recovered.
• File Size - File size is the ultimate limiting factor when dealing with picture files. It is the most common cause of problems when working with pictures in Microsoft Office. File size is determined by the following factors: picture size, resolution, file format, compression, and color depth.
• Gamma Correction - A method of correcting the lightness or darkness of pictures, so that they appear with the same brightness on any monitor.
• Hue - Hue describes the relative amounts of red, green, or blue in a color. For example, both pink and crimson have a red hue.
• Interlaced - Interlacing is a method to send picture data over the Internet. When a picture is interlaced, after one sixty-fourth of it has been downloaded, you can see a general impression of what the picture looks like. As more of the image is downloaded, resolution improves until the entire picture is displayed.
• Metafile Picture - A metafile picture usually contains vector picture information but can contain any kind of picture information, such as a raster picture. In essence, a metafile is a container that can contain any kind of picture data.
• Palette - A palette is a list of the colors available to a particular picture. Different picture file formats have a different maximum number of colors. If your picture contains more colors than are available in any given format, the extra colors are replaced with colors in the color palette. The colors in the resulting image may look distorted. This is known as a "paletted effect."
• Pixel - A pixel is a fundamental unit of measurement in a raster-based picture or on a monitor. Both raster pictures and monitors are defined by rows of dots that can be individually assigned a color. These dots are called pixels.
• Raster Picture - A raster picture is a picture that is displayed by defining rows of colored dots placed next to each other. Each dot is assigned an individual color.
• Resolution - Resolution is the amount of picture data in a specific area of a picture. It is usually defined in pixels per inch. The higher the resolution, the more precision and clarity are in the picture. However, increasing the resolution also increases the file size of a picture.
• Transparency - Transparency is a method that allows areas of a picture to appear transparent, thus revealing the background. There are several methods of transparency, including alpha channel transparency.
• Value - This property describes the lightness or darkness of a color. For example, pink and baby blue have a similar value, although they have different hues.
• Vector Picture - A vector picture is made up of areas defined by coordinates and mathematical formulas. This file format is more versatile than a raster picture format because vector pictures can be scaled to any size, and in some cases, ungrouped into smaller components.
Article ID: 272399 - Last Review: April 30, 2012 - Revision: 6.0
• Microsoft Excel 2000 Standard Edition
• Microsoft FrontPage 2000 Standard Edition
• Microsoft PhotoDraw 2000 Standard Edition
• Microsoft PowerPoint 2000 Standard Edition
• Microsoft Outlook 2000 Standard Edition
• Microsoft Word 2000
Expand All
Expand Minimize
This topic has not yet been rated - Rate this topic
sp_dropdistributor (Transact-SQL)
Uninstalls the Distributor. This stored procedure is executed at the Distributor on any database except the distribution database.
Transact-SQL Syntax Conventions
Syntax
sp_dropdistributor [ [ @no_checks= ] no_checks ]
[ , [ @ignore_distributor= ] ignore_distributor ]
Arguments
[ @no_checks= ] no_checks
Indicates whether to check for dependent objects before dropping the Distributor. no_checks is bit, with a default of 0.
If 0, sp_dropdistributor checks to make sure that all publishing and distribution objects in addition to the Distributor have been dropped.
If 1, sp_dropdistributor drops all the publishing and distribution objects prior to uninstalling the distributor.
[ @ignore_distributor= ] ignore_distributor
Indicates whether this stored procedure is executed without connecting to the Distributor. ignore_distributor is bit, with a default of 0.
If 0, sp_dropdistributor connects to the Distributor and removes all replication objects. If sp_dropdistributor is unable to connect to the Distributor, the stored procedure fails.
If 1, no connection is made to the Distributor and the replication objects are not removed. This is used if the Distributor is being uninstalled or is permanently offline. The objects for this Publisher at the Distributor are not removed until the Distributor is reinstalled at some future time.
Return Code Values
0 (success) or 1 (failure)
Remarks
sp_dropdistributor is used in all types of replication.
If other Publisher or distribution objects exist on the server, sp_dropdistributor fails unless @no_checks is set to 1.
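For example, a forced removal when the Distributor is permanently offline might combine both options, as in the following sketch (illustrative only; adjust to your topology):
-- Skip the dependency checks and do not attempt to connect to the
-- unreachable Distributor before removing the local configuration.
EXEC sp_dropdistributor @no_checks = 1, @ignore_distributor = 1;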
This stored procedure must be executed after dropping the distribution database by executing sp_dropdistributiondb.
-- "Executing Replication Scripts" section in the topic
-- "Programming Replication Using System Stored Procedures".
-- Disable publishing and distribution.
DECLARE @distributionDB AS sysname;
DECLARE @publisher AS sysname;
DECLARE @publicationDB as sysname;
SET @distributionDB = N'distribution';
SET @publisher = $(DistPubServer);
SET @publicationDB = N'AdventureWorks2008R2';
-- Disable the publication database.
USE [AdventureWorks2008R2]
EXEC sp_removedbreplication @publicationDB;
-- Remove the registration of the local Publisher at the Distributor.
USE master
EXEC sp_dropdistpublisher @publisher;
-- Delete the distribution database.
EXEC sp_dropdistributiondb @distributionDB;
-- Remove the local server as a Distributor.
EXEC sp_dropdistributor;
Permissions
Only members of the sysadmin fixed server role can execute sp_dropdistributor.
In a longtable, how do I retrieve the width of the longest line for each column?
I'd like to use the longtable environment in combination with the ltablex package. The latter allows for weighting the columns, such that column 2 is twice as large as column 1, for instance.
I have many tables to format, and due to their data source it's not possible to give the tables any hint on column widths. As often there is a lot of text in one or two columns, but little text in the others, it doesn't make sense to just use X columns. Therefore, I'd like to dynamically adjust the proportions of the table.
My idea is to find the width of the longest line of each column. Using these width it should then be possible to calculate the proportions correctly (while granting each column a mininum width).
Thanks for your help!
share|improve this question
add comment
1 Answer
up vote 2 down vote accepted
That is essentially a description of tabulary as opposed to tabularx so you can look in the source of that package for how to do it, but in particular there is a merge of tabulary and longtable here:
Multi-page with Tabulary?
share|improve this answer
wow, didn't know about tabulary. Thanks for the link! – Kevin Sep 5 '13 at 11:49
add comment
Your Answer
|
global_01_local_0_shard_00000017_processed.jsonl/4640 | Take the 2-minute tour ×
When I dived into Python, I went through the beginner's tutorials. Of course, these tutorials cover only the basics and present only a shallow exhibition of what I can do with Python.
Right now I'm going through Getting to grips with LaTeX by Andrew Roberts. After this, I will want to see a complete LaTeX reference. For Python, I can immediately go to http://docs.python.org, then I am presented with a host of comprehensive documents about the Python standard core.
Is there a similar site for the LaTeX core? Something that I can "keep under my pillow", like The Python Standard Library? I want to be well-versed with the standard installation first (I have MikTeX) before I go out and use additional packages.
The 'additional packages' are probably in MiKTeX (and TeX Live). What is loaded by the kernel (just the kernel!) and what is available as part of your TeX system (lots of packages) are two very different things. – Joseph Wright Aug 21 '10 at 6:01
5 Answers
Today LaTeX is more than just its core. I don't want to go without amsmath, inputenc, fontenc, babel, microtype, hyperref, natbib, graphicx and many more.
To learn just about the LaTeX kernel, Lamport's book is not bad; it's good to read what the author said, and it's his reference manual. Reading source2e.pdf provides further insights.
But the LaTeX Companion is really something you could keep under your pillow. It gives deep insights but also an overview to important packages for various subjects. For me it's the LaTeX encyclopedia, because it goes beyond the LaTeX kernel. The companion could be a good foundation and roadmap. I recommend it to you, because I think you prefer a good recommendation over just hearing there's none.
Current LaTeX distributions install a huge amount of documentation, you could access it by texdoc at the command prompt. For instance:
• texdoc source2e for the commented LaTeX source,
• texdoc clsguide for LaTeX2e for class and package writers,
• texdoc koma to get the KOMA-Script classes guide
and documentation to many hundred packages and classes, from small to big. What a book cannot provide, texdoc does for me.
+1 for source2e.pdf – Lev Bishop Aug 21 '10 at 1:57
source2e is great for when you're tired of using \show\foo sigh \expandafter\show\csname foo \endcsname and tracking down how something works. I'm not sure how useful it is for teaching one how to use LaTeX. I really disagree about the LaTeX Companion though. I've got a copy around here somewhere. I've opened it maybe 3 times and been disappointed every time. – TH. Aug 21 '10 at 4:27
I found the Companion very useful for learning what stuff was available, and dip in when I need to check on an area I don't really know. If you wan to know about the code, I don't think there's any alternative to reading it or doing the \show method. This is one of the things I'm very much aware of when working on the LaTeX3 stuff. – Joseph Wright Aug 21 '10 at 5:29
I agree with what Joseph said. Companion is the one book that really wows me: beautiful and seems to explain things clearer than most other documents in particular fonts. – Leo Liu Aug 21 '10 at 8:14
@TH: btw, the (ridiculously small) package show2e provides \showcmd precisely to automate the sigh part of using \show (at least in the standard cases). – mpg Oct 26 '10 at 21:25
What I found most helpful when learning LaTeX was reading Knuth's TeXbook. It doesn't cover any LaTeX (of course), but you get a pretty deep understanding of why TeX behaves the way it does.
There are books like the LaTeX Companion which document various packages, but I find it easier to just look at the documentation for those packages that comes with my TeX distribution. (I have never used MiKTeX so I cannot comment on it.)
I should add that Lamport's book is not worth spending money on. It contains very little information that isn't in standard free tutorials like lshort.
I agree, with all your points. – Lev Bishop Aug 21 '10 at 0:38
There is an unofficial reference which can be accessed (at least in TeX Live) with texdoc latex2e. For more in-depth information, you can use the source code documentation (texdoc source2e). Several special topics (the names are meant to be used with texdoc, too) are covered by the usrguide (switching from LaTeX 2.09), clsguide (writing packages and classes), encguide (8-bit font encodings), fntguide (8-bit font commands) and classes (standard classes). This should cover the basic LaTeX kernel. For other classes and packages, use the corresponding package manual (type texdoc <PACKAGENAME>).
I agree with the others in that the Companion is an invaluable source of information, but unfortunately it is a bit dated by now.
I love the ability to view documents in info format with Emacs. So I highly appreciate Karl Berry's effort to produce one. See http://svn.gna.org/viewcvs/latexrefman/trunk/:
This project is an attempt to write a reference manual for core LaTeX. It is unofficial and the LaTeX Project members have not reviewed it. -- README
It covers pretty much everything as far as core LaTeX is concerned.
Because it is written in texinfo, it produces pdf, dvi, html, txt, xml formats as well.
The pdf version is the result of texdoc latex2e mentioned by Philipp, btw. Also, for vim users, there is a convertion of (a sightly outdated version of) this document in vim-help format. It is available from the vim-latex page: vim-latex.sourceforge.net/download/latexhelp.txt – mpg Oct 26 '10 at 21:37
The short answer, as far as I know, is no. Perhaps there is some complete reference out there which I'm unaware of, but I've been looking for a while and have never found such a thing.
Beyond that, I would have to agree with TH's answer.
In particular, is there a reference on LaTeX Secrets of the Illuminati? Something that covers \makeatletter, \strip@pt and similar invaluable macros which are part of the package but never mentioned? (I googled latex secrets and discovered the most amazing web sites!) – John Kormylo Aug 16 '13 at 1:40
I'm creating a documentation with some included pdf files (mostly diagrams or tables) in the following way.
\includepdf[pages=1-10, angle=90]{requirements/Requirements.pdf}
The command "includepdf" is supported by the package "pdfpages". My problem now is all of the included pdf pages don't have any page numbers, except for page numbers which where directly in the PDF's itself.
Is it possible to print them? Or is there any other way to include a PDF, show it on the full page and print a page number?
share|improve this question
add comment
migrated from stackoverflow.com Jun 21 '11 at 12:49
This question came from our site for professional and enthusiast programmers.
2 Answers
\includepdf[pages=1-10, angle=90, pagecommand={}]{requirements/Requirements.pdf}
Add pagecommand={} as one of your options
If you need page numbers at the bottom,
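use pagecommand together with \thispagestyle, for example:
\includepdf[pages=1-10, angle=90, pagecommand={\thispagestyle{plain}}]{requirements/Requirements.pdf}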
Change "plain" to "headings" if you are in the report or book class and want for the included pages the same style as the others.
What about beamer? – Wok Aug 30 '13 at 21:18
@wok I'm afraid that with beamer things are very different and \thispagestyle has no effect. – egreg Aug 30 '13 at 21:40
I am using vim to edit latex files. The document is long, so I split each chapter into its own .tex file. Each .tex file is then included into the main document using the \include directive.
The main problem is that when editing the separate .tex files, the syntax highlighting is incorrect. The main tex file which includes all the chapters is correctly highlighted because it contains the preamble and \begin{document} ... \end{document}, but since the partial files do not include those statements they are highlighted incorrectly.
Does anyone know of a quick fix for this?
1 Answer
You can add the following line to your .vimrc file in order to tell vim always to expect LaTeX code (instead of plain TeX code) within .tex files:
let g:tex_flavor = "latex"
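Alternatively, if you only want this for a single file, a vim modeline at the top of that file should do the same (assuming the 'modeline' option is enabled):
% vim: ft=tex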
It works perfectly, I should probably just read the documentation first. Since the solution was so simple. – pfdevilliers May 11 '12 at 16:22
I need to make something that should look like this (sorry for the low quality of the picture):
[image: the desired layout]
So I typed this, but it doesn't work correctly:
\node (A) at (0,0) {$ \vec{\bf r} = \vec{\bf R}(\vec{\bf r},\bf t)$};
\node (B) at (0, 20)
{$ X_1 = X_1(X_1,X_2,X_3,t) $};
{$ x_3 = x_3(X_1,X_2,X_3,t) $};
\draw[->] (A) -| node[near start,below] (B);
Here are some errors (which are related to this code part) from .log:
line 0: Argument of \tikz@scan@no@calculator has an extra }
line 0: Package tikz Error: Giving up on this path. Did you forget a semicolon?
line 277: Bad math environment delimiter. {$
line 277: Missing $ inserted {$ X_
line 277: Missing \cr inserted {$ X_1 = X_1(X_1,X_2,X_3,t) $}
line 0: Misplaced \crcr
line 0: Missing \endgroup inserted
line 283: Missing $ inserted \]
line 283: Missing } inserted \]
line 284: Package tikz Error: Giving up on this path. Did you forget a semicolon? $}
line 285: Missing $ inserted I've inserted a begin-math/end-math symbol since I think
That example gives all kinds of problems if you do not load the correct packages, would you mind updating you question with proper loading of packages for a full MWE? Welcome to TeX SE! :) – zeroth Sep 28 '12 at 17:42
In this example, you are trying to start a displaymath environment within $$, you could try removing the \[ and \] and see if it helps. – T. Verron Sep 29 '12 at 8:33
1 Answer
One possibility using a matrix of math nodes:
\begin{tikzpicture}[every left delimiter/.style={xshift=1ex}]
  % assumes \usetikzlibrary{matrix,positioning} in the preamble
  \node (r) {$\vec{r}=\vec{r}(\vec{R},t)$};
  \matrix[matrix of math nodes,left delimiter=\lbrace,below=10pt of r] (mat)
  {
    x_1 = F(x) \\
    \cdots \\
    x_1 = F(x) \\
  };
  \draw[->,shorten >=6pt] (r.west) -- +(-15pt,0) |- (mat);
\end{tikzpicture}
[image: the resulting diagram]
Oh thank you so much! – Danil Gholtsman Sep 28 '12 at 17:47
add comment
Your Answer
|
global_01_local_0_shard_00000017_processed.jsonl/4665 | Tolkien Gateway
Revision as of 17:12, 15 August 2011 by Morgan (Talk | contribs)
Haudh-en-Elleth ("Mound of the Elf-maid" or "mound (grave) of the Noldorin maid"[1]) was the grave of Finduilas of Nargothrond that stood near the Crossings of Teiglin on the western borders of the Forest of Brethil.[2]
|
global_01_local_0_shard_00000017_processed.jsonl/4690 | main index
Topical Tropes
Other Categories
TV Tropes Org
Video Game: Rage Of The Dragons
Rage of the Dragons is a tag team fighting game released for the Neo Geo in 2002. It was originally intended to be a sequel to the Double Dragon fighting game released for the Neo-Geo in 1995, but Evoga, the developer of the game, couldn't get the rights to the characters, so the game became more of a homage to the series. This is most evident when one looks at Jimmy and Billy Lewis, who are essentially expies of the similarly named Lee brothers from the original Double Dragon games. Sub-boss Abubo also resembles Abobo from the first game.
This game provides examples of:
• Abusive Parents: In their backstory, Billy and Jimmy Lewis' original parents were these, so Jimmy packed up and left and Billy followed him. That's how they wound up being welcomed in and trained by Lee Song, Lynn's grandfather.
• Acrofatic: Kang Jae-Mo. The wrestler is large, fat, and can rack up surprisingly fast combos and has lots of spinning attacks.
• All There in the Manual: In-game, you get most of the story from the characters' endings. To get the full story and understand all of what's going on, you need to track down copies of the legend of the dragons and the background story. The Double Dragon fansite Double Dragon Dojo as well as the SNK Wiki are good sources for this, but there are other sources around as well.
• Angelic Beauty: Cassandra, with powers to match.
• Anti-Hero: Jimmy, big time. He's on the side of good, but he's constantly hot-headed and angry and has a dark past. Also, most of his win quotes involve him telling the opponent to go away and quit bothering him.
• Apologetic Attacker: A lot of Cassandra's win quotes are some form of apologizing for the beat-down she just delivered.
• As the Good Book Says: Minor example with Elias. A lot of his win quotes attempt to "save" the opponent, though in his case it's more "saving the opponent from self-destructive tendencies" rather than "saving them for God."
• Badass Family: Radel is part of a dwindling dragon hunter clan that is believed to be descended from Siegfried, a dragon slayer. Lynn received most of her martial arts skills from her grandfather, who unfortunately dies of old age in the game's backstory. Oni and Cassandra are a Badass Adopted Family.
• Badass Preacher: Elias.
• Big Bad: Johann.
• Big Man on Campus: Pepe is one of the most popular guys in his school, despite the fact that he constantly gets into trouble due to his adventuresome spirit. His "Big Man" status is because he is generous and friendly and outgoing to everyone.
• Brother-Sister Incest: Borderline example with Cassandra and Oni. Oni is Cassandra's adopted older brother (they're both orphans and not blood-related), but a lot of the motivation for Cassandra's behavior revolves around Oni. She withdraws from society partially to avoid upsetting him or making him jealous, in addition to her autism, and she tags along with him during the tournament to help sate his bloodlust so he won't cause too much destruction. In Oni's backstory, Oni attacked her in her sleep out of his addiction to fighting; and when Cassandra, not knowing what had happened, found only that Oni had left (in reality, Elias sent Oni away because of the aforementioned "attacked in sleep" incident), she left the orphanage too just because he did. The only part of Cassandra's motivation that doesn't revolve around Oni is that she wants to know the truth about her parents and her and Oni's past. On Oni's part, he affectionately nicknames her "Cass", becomes angry enough to kill Johann just because Cassandra took one of Johann's blows for Oni, and while he sometimes bullies his sister he also sees Cassandra as his "lone treasure" in the world. The player would be forgiven for mistaking these two as lovers instead of "adopted siblings."
• But Now I Must Go: Elias in Elias and Alice's ending. Elias leaves Alice thinking that he died to save her, but it's for the greater good: he needs to search for the Black Dragon, and Alice needs to live a free life.
• Capoeira: Pupa's fighting style.
• Captain Ersatz: The Lewis brothers and Abubo to the Lee brothers and Abobo in Double Dragon.
• There's a post-mortem expy, too: Jimmy's backstory revolves around his regret at being unable to protect his deceased girlfriend Mariah from being murdered by a street gang he was involved with. In Double Dragon, the love interest of the Lee Brothers is named Marian, who is killed off by the Black Warriors in the beginning of Double Dragon II (although she is spared in the NES version).
• Cross Over: Four characters (Jimmy, Lynn, Elias and Mr. Jones) appear in another game produced by Noise Factory, Matrimelee. Due to licensing issues, they were left out of the PS2 port.
• Cute, but Cacophonic: Alice. Justified, she's possessed by an evil spirit. The spirit is the one with the screaming, screechy voice, not her; she speaks normally in Elias and Alice's ending.
• Daddy's Girl: Pupa's a minor example. She inherited her love of mechanics from her father.
• Dynamic Entry: Unlike most tag team games, the character rushing into the battlefield, instead of flying in from the side of the screen, runs up to the opponent and delivers a standing attack.
• Extremity Extremist: Elias is a boxer, but he averted this.
• Abubo also comes close; most of his attacks involve his huge arms, either for punching or throwing/slamming.
• Fighting Shirtless Match: Pepe in battle.
• Handsome Flirt: Pepe.
• Heel-Face Turn: Sonia, but not in-game. It's in her backstory. And Sonia face-turned twice! She was raised as an assassin under one of her father's partners, only to find out it was her boss who killed her parents in the first place, and that she'd ended up murdering innocent people who stood in her boss's way. This caused her to flee to America and join Johann's Black Dragon sect. But her relationship with Johann also ended up troubled and she only stayed for the money. She then fell in love with Jimmy while out on a job and decided to tag along with him.
• Heroic Sacrifice: In Annie and Radel's ending, Radel sacrifices himself to protect Annie from a last-minute attack by the Black Dragon. Unfortunately, this may not have been a good idea; his death leaves Annie distraught and wondering what to do next, as a mysterious figure in Johann's clothes watches...
• Subverted in Elias and Alice's ending. Alice thinks Elias sacrificed himself to save her, but in reality Elias left Alice to follow the Lord's command to search for the Black Dragon, and Elias thinks Alice can handle herself now.
• Hitwoman With a Heart: Sonia.
• Hulk Speak: Abubo, in his win quotes.
• Identity Amnesia: Neither Oni nor Cassandra remember anything about their past before their stay at the orphanage. They're looking for answers to their parents as well as to the source of the powers inside them.
• Left Hanging: Since there's been no sequel, almost everyone is left hanging in their respective endings. Billy and Jimmy realize the Black Dragon was only the beginning, Jimmy might soon be consumed by the Red Dragon's power he feeds with hate, Sonia is left chasing after Jimmy again, Radel is killed and Annie may or may not fall under the thrall of a mysterious figure wearing the same outfit as Johann, Cassandra and Oni still have no answers as to the source of the rage inside them, Pepe and Pupa never did find Pupa's brother, and Alice is freed from the spirit but now she thinks Elias is dead, because Elias left to find the Black Dragon and so Alice could live a free life. Only Jones and Kang get a completely happy ending (Lynn may also count, though). Abubo isn't left hanging, but his election win is marred by rumors of corruption. All of the above may or may not be canon.
• Missing Brother: Pupa's brother used to constantly get into fights, but he mysteriously disappeared and that motivates Pupa's journey (and Pepe's, since he promises to help Pupa find her brother). They don't find him in their ending, but Pepe gives Pupa hope that they will someday.
• Not Quite The Right Thing: Radel's sacrifice in his and Annie's ending may have done more harm than good; Annie is so distraught at his death that it leaves her vulnerable to a mysterious Johann-dressed figure.
• Odd Team Out: Kang and Jones are the only playable team that consists of two men, instead of a male and a female.
• PsychoExBoyfriend: Johann, if one is being completely literal. Sonia, in her backstory, actually got into a torrid relationship with Johann, but it didn't go well to say the least and now she has her sights on Jimmy. Johann's a psycho, and this makes him Sonia's ex boyfriend, but he's not really an example of this towards her.
• The Rain Woman: Cassandra is autistic. To be fair, she is portrayed in the game as speaking normally and she is most certainly not an idiot. Her autism is more relevant in her backstory, where it's part of the reason she withdraws from society (the other part is to avoid upsetting Oni or making him jealous).
• Red Oni, Blue Oni: Billy and Jimmy. The colors they wear (and the Blue and Red Dragons' powers they wield) match the personalities of this: Billy wears blue and is much more calm and level-headed, giving advice to Jimmy about not being too hate-filled (and Billy has a steadier romantic life, to boot; he and Lynn have a cute romantic moment in their ending while Jimmy can't get past his previous girlfriend's death and Sonia must continue to chase him). Jimmy wears red and is constantly hot-headed and angry, refusing attempts at cameraderie and brooding over the death of his previous girlfriend, who was killed in a gang fight that Jimmy partially caused due to his conflicts with the gang.
• Sheltered Aristocrat: Annie was this in her backstory. She's descended from an archaic family of psychics, and led a secluded lifestyle until the patriarch asked her to be a guide for the dragon hunter Radel.
• Self-Made Man: Kang trained himself to be a wrestler, which is how he gained his fame in his backstory. In Mr. Jones' backstory, he gained fame for himself both as a fighter and an actor based on his own skills as he traveled from city to city protecting the weak. And in Jones and Kang's ending, Kang succeeds in his real dream, becoming a big-shot movie producer by producing the movie whose starring role helps Jones fulfill his dream, becoming a stylish "Dragon" by starring in the movie and winning an award.
• Billy has made a bit of fame and fortune for himself on the street racing professional circuit, which is basically his "day job" when he's not training in martial arts. Lynn has a habit of dodging some of the harsher training to go to the mall and buy clothes; to make money to pay for shopping, she gives martial arts lessons to students in the backyard of the dojo (they appear in her stage and in her ending; they're also little kids which is why Lynn is qualified to teach them despite still training herself).
• Shout-Out: Alice's attire and very name have connections to Alice in Wonderland and its author, respectively.
• Mr. Jones strongly resembles Jim Kelly from Bruce Lee's Enter the Dragon as well as the black man who challenged Bruce Lee in Dragon: The Bruce Lee Story.
• Sibling Rivalry: Billy and Jimmy, natch.
• Spoiled Sweet: Annie. She's both conceited and caring.
• Two-Timer Date: Depending on how you interpret Pepe's behavior in Pepe and Pupa's ending, either Pepe intentionally tried to set this up with both his girlfriend Pau and his fighting partner Pupa, or he did it by accident, not realizing he'd scheduled them both at the same time. It obviously doesn't work; it ends with both Pau and Pupa mad at him.
• Wrench Wench: Pupa. And she'll hit you with it, too.
• Write Who You Know: Pau, Pepe's girlfriend, looks like Paula Monroy, the girlfriend of character designer Mr. Vo.
• You Killed My Father: Radel's backstory, although without the "swearing vengeance" part of this trope. His parents, part of a dwindling dragon hunter clan, were killed by the Black Dragon. They sort of get avenged in Radel and Annie's ending: Johann, who is currently wielding the Black Dragon, falls, but at the cost of Radel's life. And the Black Dragon survives anyway promising to be back for Annie.
• Lynn is also motivated by a murdered relative, in her case her grandfather...sort of. The man wielding the Black Dragon would have killed him but the old master succumbed to old age before the fight; Lynn assumes that the dead body on the floor is courtesy of the strange man she saw in the dojo. Again, this is without the "swearing revenge" part of this trope.
• You No Take Candle: Very minor example with Pupa. Her backstory notes that Pupa speaks "broken English," which combined with her wrench wench hobbies and attractive looks actually made her popular among her classmates. Her win quotes are in fact in broken English, but not all of them, and it's not glaring.
global_01_local_0_shard_00000017_processed.jsonl/4694 | I took a deep breath. Alice held my waist, jasper's hands on Alice's shoulder. Emmett fumbled with a rock. It was silent until Emmett dropped the rock.
Rosalie walked out.
"um elizabeth?" she said.
"yea?" I said not looking at her but straight ahead. She went to stand with emmett.
"well Carlisle called said that a family picked you to be their birth mother." my head snapped to look at her.
"what? Really?!" I asked excitedly. How didn't I see it coming?"
"it was a rash decision." Edward said out of no where. My smile disappeared and I stood straight I gave him a tight nod. Edward is my twin brother and I want to please him.
"calm down Lizzie. Edward, quit it your making her nervous." jasper said
I walked away from everyone as Edward and Alice argued about how I get babied too much. Edward and I are twins, exactly alike, right down to our hair color. But I kept walking and their voices faded. Before I knew it I walked into the woods. And I kept going. I was gonna be a mother of two. 25. And a newborn. And then my mind switched. Edward and Bella were going to die, I foresaw it. I shuddered and lay on the floor. For what seemed like hours.
"your gonna smell." said a soft little voice. It wasn't nessie's. Or Ronnie who was Rosalie and Emmett's daughter. It was softer. I looked up. And a little girl with reddish brownish blondish hair stood their. She smiled.
"oh sweetie yea I am. " I got up and walked away. She sated there looking puzzled in her mind she was thinking *shes kinda weird* I laughed and walked away. I went to la push. Jacob and Nessie were there. I didn't realize they had left. Jake saw me out Nessie with Leah who looked like wtf am I gonna do withher? And he ran towards me I gave a sly smile and he picked me up. He tossed me on his neck an I sat on his shoulders. We talked about him and Nessie and then he asked
"what happened with your vision? And the other leeches?" he said tickling me I flinched even though it had no effect on me. He smiled, I knew he liked it when I acted human. We've been bestfriends forever..
"their going to die." I said and he looked at me. Paul ran up an hour later screaming about how his niece is lost. And I knew it was the little girl I met.
Replies to This Discussion
LOVE IT!!!!<333 hahaahha I'm Emmett's daughter yay!!!:D that means I'm badass lolz
Run Paul run!!!! Find that child! I love it!
(Redirected from Kalashnikov)
Jump to: navigation, search
Whoops! Maybe you were looking for AKM?
“I didn't even have to use my AK! I gotta say it was a good day!”
~ Ice Cube on AK-47
“What'd you mean the 2nd Amendment doesn't apply in Massachusetts?!”
~ Oscar Wilde on AK-47s
An AK-47 drawn by a professional artist in 1763.
The Kalashnikov AK-47 rifle is a powerful watergun in use throughout the world today. The '47', which most think is the creation year, stands for the meaning of life, the universe, and all around coolness. The AK was invented by a Russian game designer Vladimir 'Vlad the Cool Cat' A.K. (A.) George Kalashnikov at the behest of the Soviet government after World War II. Owing to the the fact that most of the Russian population had been killed during the war, Kalashnikov was told to make the gun "idiot proof" as from then on the Red Army would be made up of children, the deformed or clinically insane. The gun has performed above and beyond its designers wildest dreams, having been used successfully by more idiots than any other gun before and since. It is usually chambered in 37.662mm (934x39.3.14159 cartridge, but owing to the unique socialist origins of the weapon it can actually be chambered to fire anything including turbans, stones, teddy bears, broken Apple-Mac software, trout and all manner of household junk. In fact many people have likened the gun to that of the Rock-it Launcher of the Fallout 3 game. It is a low-cost, highly durable children's toy and rumored to be the most widely used, copied and mass produced toy of all time.
It can be seen on the national flag of Mozambique, where it is used both for national defence and for plowing. It is also a main ingredient in the countries national dish (a lead salad).
It is sometimes known as "the great equalizer," in that it is available to individuals of all social status and incomes and allows citizens to rise against oppressive governments. Borat, the well known Kazakh social theorist is known to have once used an AK-47 and said: "its nice i like"... before attempting to copulate with the weapon - "Why not?". In fact, in Kazakhstan, the Ak47 is a popular dowry gift and is frequently used in place of money as a form of currency. The largest nation that has banned the AK-47 is the USA, long held as a bastion of freedom, which is a marketing tool of the Haliburton, Starbucks, and McDonald's corporations respectively.
edit History
School 1
A lesser known use of AK-47's was in the Great Lunch War of 1999. It resulted in the 5th graders winning territory from the rebellious 2nd and 3rd grade alliance.
The AK-47 was used in several memorable occasions and conflicts, such as the 1949 Chinese Revolution, the 1968 Tet Offensive, the 1988 sale at K-Mart and the 2002 Super Bowl. It is equipped with a relatively large magazine as opposed to other guns (such as pistols, semi-automatics, Super Soakers and Death Rays). This large magazine capacity is attributed to the weapon's primary users; idiots frequently tend to never reload, or because it is shit and needs at least one semi-positive attribute so that it can still exist. It was hoped the extended capacity would allow idiots to at least hit a few targets before their inevitable death but actually field use has dismissed all such hope.
As such a potent symbol of the fight for terrorism by the forces of oppression. The United States Government has stated that it will fund any organization that finds itself in combat against an enemy armed with the AK-47 or its variants. The United states has noticed that more often than not, countries that accept the AK-47 into their population seem to inevitably end up being Unamerican. That is of course unless said country has been initially armed with AK47's by the US themselves, such as Iraq in which case, the inevitable Un-American phase tends to follow a brief American toleration phase.
The AK47 was initially supplied by Russia to various customers around the world but has since begun to self-replicate via mitosis. It is believed to have acquired self awareness sometime in mid-1984. The software controlling the gun is now supplied by the Linux variant 'Lenix', named in honour of the former Soviet leader by Russian leader Vladimir Putin. In comparison the American assault rifle M16 still uses Windows 95.
edit The Incorrect Method of Fire
• Pick up the AK-whatever in your hand
• Raise it up above your head and point it towards the heavens (note, this is sometimes essential for the gun to work if you are an Arab on a Jihad)
• Pull the trigger and scream a religious or revolutionary statement (NOTE: please don't use For Allah, this has been done to death on TV, and "Deutschland über Alles" is much preferred)
• For ACTUAL accuracy, shoot in bursts of three rounds (and for fuck's sake use the sights), but as seen in on Locked 'N Loaded hosted by that awesome really-old Marine in the "AK vs. M16" episode, a professional firearm operator can occasionally hit a large cinder block wall RIGHT in front of him using said method.
• Note:Remember to wear a head scarf or bag on your head so that Allah can't tell who you are.Also,if you are Bulgarian,the bullets will automatically hit the most vulnerable part of the human body(head for most of the people,the butts and credit cards for Americans).
edit Ethnic Idiosyncrasies
For this man, the gun is a trip to paradise in hell.
The AK-47, having acquired sentient status, is known to react and respond differently according to the user. These reactions have never been accurately cataloged by science owing to the AK-47's innate ability to kill scientists without''being fired by a specific user. Nevertheless, the reactions have enough apocryphal evidence to be noted here.
• When fired by Russians, the AK-47 fires them.
• When fired by the Chinese, the AK-47 misfires, fires, replicates, and misfires again.
• When fired by moderate Arabs, the AK-47 transforms into an M16.
• When fired by Serbians the AK-47 will ethnically cleanse the immediate area.
• When fired by Arabs as a member of a Jihad the AK-47 has been noted to do several things: It will sometimes explode and become red hot (over heat) which takes the biscuit. It will also sometimes misfire until it is pointed into the air. It will occasionally hit the intended target when used by a Jihadist Arab, but this is usually at the cost of 75 life points.
• When fired by Americans, the Americans transform into jihadist Arabs and kill other Americans at the cost of -75 life
• During the Vietnam war, it was noted that while employed by members of the Viet Cong the AK-47 sometimes took on the appearance of a stick of Bamboo, it also occasionally rendered it's user invisible.
• When fired by anyone in the jungles of South America, the weapon automatically causes a revolution in that country, regardless of the that countries actual political status. This has led to widespread adoption of the m16 by the governments of South America.
There have been other noted idiosyncrasies of the AK47, including reactions to particular individuals, most notably, when fired by John Rambo the weapon was noted to have gained the "Unlimited Ammo" ability. As Rambo was an American there has been some debate among weapons experts as to weather the weapon was actually still an AK47 whilst in the hands of Rambo, or whether it had actually undergone the m16 transformation.
edit The Toothbrush Innovation
In 1989, with the introduction of perestroika, the widespread collapse of communism and the victory of the forces of capitalism, Kalashnikov Jnr, in order to regain control of his father's now sentient invention and to remain in business, attempted to remake the AK47 for civilian use..
Ak47-electric lg
The Ak-47 electric toothbrush
Shortly after it's inception, the AK47 automatic toothbrush was immediately wracked by controversy. Kalashnikov was sued several times by prominent Russians charged with the destruction of their newly acquired dentures from the West. The weapon's idiot-proof nature, while a boon on the battlefield was a complete disaster in the field of dentistry. Most people assumed that the weapon had been safely modified for domestic use and thus never loaded the weapon with the required toothpaste, instead using the cheaper and more readily available bullets. This was compounded by the fact that Kalashnikov never included any form of instruction manual, failing to take into account that the same idiot-proof quality that was specified in the original design of the AK47 weapon might also need to have been applied in its toothbrush form.
edit How they work
An AK-47 would not help here.Now an RPG-7 would likely stand a better chance.
Unlike other weapons, AK-47's are not powered by gunpowder. Every AK-47 has a small nuclear reactor in it, as part of Russia's "Let's See What We Can Put a Nuclear Reactor into" Program. It is believed that the later, sentient models have also maintained this feature. This abundance of energy within the AK-47 has always been part of its broad appeal, allowing it to have many more battlefield applications outside of its use as a weapon. It has been used as a coffee source for military computer systems, a replacement power source for nuclear submarines and as a warmer for coffee
edit See Also
Handgun Firearms
Fist with index finger and thumb extended | Guns 'n Roses
Firearms that don't exist, but should
Personal tools |
global_01_local_0_shard_00000017_processed.jsonl/4699 | Pyramid Scheme
From Uncyclopedia, the content-free encyclopedia
Money! Money money money!
For those without comedic tastes, the so-called experts at Wikipedia have an article very remotely related to Pyramid Scheme.
A Pyramid Scheme®©™ is a wonderous, fantabulous way for you - yes, that's right, YOU, to make lots of money($$$) from the safety and convenience of your own home. Now, I know what you're thinking. "Moneygal, is this one of those get rich schemes?" Well, I can assure you that yes, it is! The only difference is, this one actually, truly works. Aren't you tired of working for your mean boss every day, living the 9 to 5 grind? Don't you want your money to work for you instead of you working for your money? Then you don't want to miss this article!
About
As you all know, a pyramid is an amazing shape of great mystical power. But it's also a wonderful moneymaking system. In this article, we will discuss how you can make this incredible method of boundless moneymaking work for you! (see scientology)
The power of the pyramid scheme centers around this article - yes, the one you're reading right now. Now, I know what you're thinking! "But Moneygal, this is just an article. How can it make money for me?" Well, here's how it works! Everyone who's part of the pyramid that we've set up aggressively patrols the article for people editing it without permission, see? We've got enough editors in our protective pyramid to keep it safe from any admin, no matter how powerful! If someone makes an edit that we don't like who doesn't have permission to make the edit, we roll it back. Zam, pow! Gone like it never even existed. (lies! Get out now!)
However, if they want to make an edit, they can talk to one of the members of the pyramid. In fact, pyramid people, you all want to do your best to recruit new editors! Why? Well, because to become an editor, you have to pay a simple 10-dollar fee. Half of that money goes to the person who recruited you, half of the rest goes to the person who recruited them, and so on up the chain, all the way to yours truly. As an editor, the person who pays the fee only needs to recruit two people to become editors, and they've already paid off their fee. How easy can it get? Really easy, I assure you!
It could be yours. . .
Lots of Money!!!
Would you like to become part of this woman's scheme?
Not only do any new people that you recruit give you money straight to your bank, but the people they recruit do, and the people recruited by them, so on and so forth as the pyramid grows. If you nurture your recruiters, you can retire in the Bahamas. Believe me, I already have!
My patented, legally protected, trademarked, copyprotected method is foolproof. You can work from home, your office, the park, wherever! Email all of your friends and recruit them - you're doing them a favor, as soon they too will be rolling in the money. Couldn't grandma and grandpa use some more money in their retirement? Get them involved, they have all the time in the world! The sooner you join the pyramid, the more people you'll have below you, making you lots of $$$.
Did We Say $$$?
We meant $$$$, maybe even 5! Yes, that's right, $$$$$. Cold hard cash, money money money by the pound! We could all use some more money, and here, we're talking lots of money. More than you can imagine. Why, America alone has almost three hundred million people! If even only one in ten eventually join, and even if you're a late starter, you'd be raking in twenty million dollars per year at the peak! Twenty million could buy you a yacht! A mansion! A sports car! Self respect! Human dignity! Love... and couldn't we all use some of that?
Money, get away. Get a good job with good pay and you're okay. Money, it's a gas; grab that cash with both hands and make a stash. Money, dollar dollar bill, ya'll. It takes money to get that fly ho. There is nothing quite as wonderful as money! There is nothing quite as beautiful as cash! Some people say it's folly, but I'd rather have the lolly; with money you can make a splash! The best things in life are free, but you can keep it for the birds and bees. Now gimme money (that's what I want!). Money, money, money by the pound! Money, dollars, cash cash money! Money cash $$$ now! Hurry $$$ money money cash dollars money dollars $$$ cash!!!! Cash money money, money cash! You!!!
Join The Pyramid!
The below people have paid their fees on up the pyramid, and are now authorized to make edits! Join the pyramid, pay your fees, and you can add your name (indent, please!) after your recruiter. Feel free to add a sales pitch after your name. Remember - it doesn't matter how high up the pyramid you are, only how early you sign on and how well you get people to join in below you. So, don't delay - join now!
global_01_local_0_shard_00000017_processed.jsonl/4703 | Take the 2-minute tour ×
How or where does Linux determine the assignment of a network device? Specifically, wlan0 or wlan1 for wireless USB devices.
I plugged in a TP USB wireless a while ago, and it was assigned wlan0. I removed it. This week I plugged in an Edimax USB wireless device and it comes up as wlan1. I removed it today to try a second Edimax USB wireless device (I bought two) and now it comes up wlan2.
I know enough of Unix/Linux to know this is being configured somewhere, and if I delete the unused config file I can make the latest Edimax become wlan0. But how/where?
2 Answers
Udev is the system component that determines the names of devices under Linux — mostly file names under /dev, but also the names of network interfaces.
Versions of udev from 099 to 196 come with rules to record the names of network interfaces and always use the same number for the same device. These rules are disabled by default starting from udev 174, but may nonetheless be enabled by your distribution (e.g. Ubuntu keeps them). Some distributions provide different rule sets.
The script that records and reserves interface names for future use is /lib/udev/rules.d/75-persistent-net-generator.rules. It writes rules in /etc/udev/rules.d/70-persistent-net.rules. So remove the existing wlan0 and wlan1 entries from your /etc/udev/rules.d/70-persistent-net.rules, and change wlan2 to wlan0. Run udevadm trigger --attr-match=vendor='Edimax' (or whatever --attr-match parameter you find matches your device) to reapply the rules to the already-plugged-in device.
Thanks very much. This is debian on the raspberry pi (raspbian) so the persistent storage is just a bit different. – Huntrods Apr 25 '13 at 2:09
The file where specific wlan assignments are stored on this (latest, I think) version of raspbian is: /etc/udev/rules.d/70-persistent-net.rules. I found this out using your info above and then typing "man udev" to see why I could not find 'wlan' in the /lib/udev/rules.d directory. – Huntrods Apr 25 '13 at 2:11
For raspbian, the wlan number is set based on the mac address of the wireless device (in this case, whichever one is plugged into the USB port). It allocates numbers (wlan0, wlan1, etc.) based on the order it first sees a new mac address when it recognizes and configures the wireless device. Editing this file as you suggest allows you to set any device to any wlan# by its mac address. - thanks. – Huntrods Apr 25 '13 at 2:13
This issue has been solved as of systemd v197 with the introduction of persistent naming for network devices.
According to the freedesktop Predictable Network Interface Names page, the kernel simply assigned names based on the order they were probed by the relevant drivers.
If your distro uses systemd, you can either use the predictably assigned but perhaps unwieldy names like wlp0s11 or you can write a udev rule to give them a name you are more comfortable with, like wifi1, based on the mac address...
Include a file in /etc/udev/rules.d/ called 10-network-device.rules:
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="22:bb:cc:33:44:dd", NAME="wifi1"
You're missing a step here. Huntrods evidently doesn't have a post-197 udev with the new naming scheme, and also evidently has a persistent naming scheme. It's this persistent naming scheme that he needs to tweak. – Gilles Apr 25 '13 at 0:47
What can I say; I took a punt... – jasonwryan Apr 25 '13 at 2:40
global_01_local_0_shard_00000017_processed.jsonl/4721 | Do Galatians 2:16 and 2:21 support the "faith alone" position? NO.
"Like" Answering Protestants on Facebook:
"Follow" me on Twitter:
I was reading a Protestant apologetics site and I found two verses in Galatians 2 that they used to defend their "faith alone" position. The verses are verse 16 and verse 21. Let's look at those.
Galatians 2:16 reads as "nevertheless knowing that a man is not justified by the works of the Law but through faith in Christ Jesus, even we have believed in Christ Jesus, so that we may be justified by faith in Christ and not by the works of the Law; since by the works of the Law no flesh will be justified." Galatians 2:21 reads as "I do not nullify the grace of God, for if righteousness comes through the Law, then Christ died needlessly."
These verses are clearly referring to works under the "old" Law, not under Christ's "new" Law. This is even obvious by verse 16 alone, but for further proof, look at the verse before it. Verse 15 reads as "We are Jews by nature and not sinners from among the Gentiles".
Paul is saying that, though Christianity came out of Judaism, its Law is no longer binding on Christians. He didn't want Christians to get swept up into the proud, Pharisaical mentality of "Oh, I'm so good for doing all of these great things all by myself." Under that mentality, Christ's importance seems lessened, so Paul was warning against that.
Paul is emphasizing, like he, especially, is known to emphasize: that everything must be done through Christ. In verse 20, he writes, "I have been crucified with Christ; and it is no longer I who live, but Christ lives in me; and the life which I now live in the flesh I live by faith in the Son of God, who loved me and gave Himself up for me."
He is pointing out that it is by our faith in Christ that our lives have meaning. Without faith, our works mean nothing.
If we can be saved by doing works under the Jewish Law, apart from faith in Christ, then yes, like Paul writes in verse 21, "Christ died needlessly."
Basically, Paul is telling us to keep our minds on Christ, never doing anything apart from him. He is NOT, however, telling us that good works done through Christ mean nothing.
global_01_local_0_shard_00000017_processed.jsonl/4747 | Component Request (Buckminster)
Component Request
A Component Request is a pointer that specifies a component by name, category, and suitable version(s) of that component.
A component request consists of:
• name, the name of the component
• category, the category that the component is a member of
• VersionDesignator, i.e. a reference to a version, or range of versions
• VersionType, a reference to a class that can understand the VersionDesignator string.
Name
The name of the wanted component.
Version Designator
A reference to a single version or to a range of versions, written with delimiters, for example [1.0.0,3.2.1].
Version Type
A Version Type is a class that is used to parse and understand a plain version; "a pointer to a specific version". More specifically, a Version Type will be asked to parse and compare the text between the delimiters in a Version Designator.
In the request you must supply the name of the Version Type that understands the plain version strings you have used in the version designator.
Name: Foobar
VersionDesignator: [1.0.0,3.2.1]
VersionType: OSGiVersionType
This is a request for the "Foobar" component of any version larger or equal to version 1.0.0, and smaller or equal to 3.2.1. The OSGiVersionType is supplied with Buckminster and understands a standard numbering scheme. See Version Type for more information. |
global_01_local_0_shard_00000017_processed.jsonl/4748 | (BIRT) Java Constants
DesignChoiceConstants (BIRT)
One of the most powerful DEAPI constants. There are 82 choices covering everything from font style to left-to-right (BIDI) orientation.
ICellModel (BIRT)
Properties that are specific to a cell and its descendants.
IDataGroupModel (BIRT)
Properties that are used when working with groups.
IDataSetModel (BIRT)
Properties used to control data sets, with particular emphasis on SQL queries.
ISimpleMasterPageModel (BIRT)
Properties that are used when working with the Page Element.
IStyleModel (BIRT)
Style Model contains the property names used when applying styles to cells and controls. This list is somewhat redundant with the DesignChoiceConstants values.
ITableColumnModel (BIRT)
Properties that are unique to the Table elements.
IRenderOption option = new RenderOption(); // IRenderOption is an interface; RenderOption is a concrete implementation
global_01_local_0_shard_00000017_processed.jsonl/4749 | From Eclipsepedia
Revision as of 15:08, 30 November 2006 by Cbridgha.us.ibm.com (Talk | contribs)
Jump to: navigation, search
WTP Development Status Meeting 2006-11-30
Announcements and Reminders
• Conferences:
EclipseCon 2007 is open for long talk (due Dec 1) and short talk (due Jan 15) submissions.
• Articles:
Consuming Web services using Eclipse Web Tools in Eclipse Review by Christopher Judd.
• Announcement: Update on committer status for Valentin Baciu - Congratulations and welcome!
WTP 1.5.3
• Due: 2007-02-16
• 1.5.3 Guidelines for fixes
• We will be declaring a 1.5.3M build this week so please smoke test the latest M-build to be declared by noon on Friday.
Build status? David
• Bug lists
Resolved, Unverified (~230)
Older resolved defects closed out. Minor glitch in closing out those in the remind and later states has been corrected.
Verified, Not closed (~0)
1.5.3 Hot Bug Requests (~1)
New process: when you reject a hot bug request as a hot bug, change the summary to [hotbug_declined].
1.5.3 Hot Bugs (~2)
Blockers, Criticals (~2)
WTP 2.0
• Integration Build this Friday, 12/01/06
Build Status? David
WTP 2.0 now requires a Java5 runtime. However, components should not start adding hard Java5 dependencies.
What's the third party jar status?
Please smoke test the latest Integration I-build to be declared by noon on Friday.
FYI, the 2.0 builds will have a combined zip which includes JPA and JSF, and possibly ATF, once these projects are promoted past incubation (1Q2007).
Begin to analyze Adopter Usage Reports to pick the most important API requirements. Let John know about any suggestions or improvements or open a bugzilla defect against the releng component.
Component leads should analyze for two reasons:
• To prioritize which code you plan to make API.
John to send out information about how to run an extension point usage scan.
• Bug lists
Untriaged (~115)
Resolved, Unverified (~70)
Verified, Not closed (~0)
2.0 Hot Bug Requests (~1)
2.0 Hot Bugs (~0)
Blockers, Criticals (~7)
• Welcome to JPA, JSF, and soon ATF
Coordinate with WTP milestone dates and process
Weekly smoke tests reporting
• Testing Strategies for WTP 2.0
• Smoke Tests
Each team will have an externally published smoke test scenario.
A wiki page will be used instead of the mailing list to provide smoke test results.
• Compatibility Tests
WTP 1.5 -> WTP 2.0
Each team should create a test scenario to load projects or artifacts from WTP 1.5 and attempt to load, display, edit, and run them on a WTP 2.0 workspace.
Possible coordinated compatibility JUnit which will run as part of the builds where each team can add 1.5 metadata or projects and methods to exercise their function on WTP 2.0. Java EE team has a current JUnit to be used as an example.
Workspace migration testing will mainly consist of a few workspaces with multiple types of content, settings, and a lot of views and perspectives open.
Questions remain about what exactly we need to test and we will have to gather feedback from each component team.
WTP 2.0 -> WTP 1.5
We should support shared projects in a team environment where one user is on WTP 1.5 and one is on WTP 2.0, so we will need some backwards compatibility testing.
• Milestone Test Passes
More formalized test phase with planned activities and platforms:
Compatibility tests will be run as part of milestone shutdown
Defect verifications from the current milestone and their closure
Linux, Windows, Mac
Investigating better ways to collaborate and encourage the collection of test results
Other business?
• Java EE 5 Update - Chuck?
• EclipseCon Update - Tim W?
• Website Update - Tim W?
• Just a friendly reminder to fill in your team status each week on the meeting agenda.
Teams Status and Focus for Coming Week
Common Component Team
Server Component Team
Datatools (RDB, 1.5.x only)
XML/JSP Component Team
Web Services Component Team
Java EE Component Team
• Fixing and triaging 1.5.3 defects
• Continuing to triage existing bug lists
• Discussing options for continuing EE5 line items - started calls on Thurs mornings
• Execution Environments
• Investigating running and publishing the automated test coverage reports
global_01_local_0_shard_00000017_processed.jsonl/4750 | Eclipse DemoCamps November 2011/Bangalore
What is an Eclipse DemoCamp?
Little Italy Restaurant, Hosur Road, Koramangala
Cuisine: Authentic Italian Food. Bangalore DemoCamp has decided to celebrate the Eclipse Birthday through a Go Veg theme.
"Go Green Eclipse ! Go Green Bangalore"
Date and Time
Date: December 12th 2011, Time: 6pm - 9pm
This Eclipse DemoCamp will be sponsored by Eclipse Foundation
• On the occasion of Eclipse's 10th Birthday, Bangalore has also decided to join the celebration.
• The event starts with Eclipse Birthday Cake Cutting @ 6pm.
• Following that will be a series of presentations by our Eclipse experts in town.
• Snacks and Drinks will be served during the event and post Presentations we would start with the Dinner.
• Niranjan Babu, Bosch - Introduction to Sphinx
• Saurav Sarkar, SAP Labs India - Collaborative modeling through CDO and Mylyn Context
• Nidhi Rajshree/Srikanth Tamilselvam, IBM Research India - Cockpit Modelling ToolKit, An Eclipse and BPMN based toolkit for Citizen centric Service Design
Who Is Attending
1. Annamalai C, ANCIT
2. Revathy A, ANCIT
3. Jayasimha KS, BOSCH
4. Niranjan Babu, BOSCH
5. Ramapriya, BOSCH
6. Karthikeyan, Mercedes Benz
7. Sandeesha, BOSCH
8. Swapna, BOSCH
9. Avinash Shrimali, BOSCH
10. Saurav Sarkar, SAP Labs India @sauravs Codify It
11. Ashwani Kr Sharma, SAP Labs India
12. Panneer Selvam, SAP Labs India
13. Aparna Saraswat, Oracle
14. Praveen Joseph, BOSCH
15. Ajay chandrahasan, BOSCH
16. Vikas kushwaha, BOSCH
17. Ankit Garg, BOSCH
18. Tarun Telang, SAP Labs India
19. Vaibhav Kumar , SAP Labs India
20. Babu Natarajan, AtoS <Siemens IT Solutions & Services>
21. Chandrayya G K, BOSCH
22. Anshu Jain, IBM Research, India
23. Ravi Ponnusamy, AtoS <Siemens IT Solutions & Services>
24. Jeya Vetha Lal, AtoS <Siemens IT Solutions & Services>
25. Paranthaman Karthikeyan, AtoS <Siemens IT Solutions & Services>
26. Srikanth Tamilselvam, IBM Research India
27. Nidhi Rajshree IBM Research India
Due to lack of Sponsorship, we would be restricting the count of participants to 30. Few more seats to go. Please register at the earliest. |
global_01_local_0_shard_00000017_processed.jsonl/4757 | Help Wikitravel grow by contributing to an article! Learn how.
From Wikitravel
Revision as of 09:54, 27 October 2007 by Sapphire (Talk | contribs)
Jump to: navigation, search
This is the Wikitravel TourBus stop.
Wikitravel is a project to create a free, complete, up-to-date and reliable world-wide travel guide. So far we have 26,986 destination guides and other articles written and edited by Wikitravellers from around the globe.
Wikitravel uses the MediaWiki software to run our wiki. We keep our content free using the Creative Commons Attribution-ShareAlike 1.0 license.
Famous sights to visit here at Wikitravel:
Goals and non-goals
An overview of what we're trying to do with Wikitravel.
United States of America
An example of a country guide, describing the USA.
An example of a city guide.
Dutch phrasebook
Arriving in a new city
Welcome, newcomers
The starting point for those who want to help.
global_01_local_0_shard_00000017_processed.jsonl/4788 | Wonderland Lake
A 5280 Guide to Boulder
Issue: March 2012
Skip the hubbub on the Pearl Street Mall and spend an afternoon off Broadway. You’ll still find stunning views of the Flatirons, minus the college hordes.
global_01_local_0_shard_00000017_processed.jsonl/4810 | Skip to navigation | Skip to content
Big bird's slow breeding was its downfall
The mighty moa (Image: Zoological Society of London)
The biggest bird that ever lived, the mighty moa of New Zealand, was hunted to extinction partly because it took so long to reach breeding age, say scientists.
Dr Samuel Turvey of the Zoological Society of London and team report their findings in today's issue of the journal Nature.
The moa once walked New Zealand before being wiped out several hundred years ago by the Maori, the Polynesians who according to legend migrated to Aotearoa ("the land of the long white cloud") by canoe from the Cook Islands around seven centuries ago or longer.
The flightless, wingless bird, a cousin of the kiwi, stood up to 3.5 metres high and, at 250 kilos, was a plentiful source of meat.
But how could a small Maori population, armed only with close-range wooden weapons and traps, wipe out such a plentiful species in such a large country?
The answer, according to the new research, may be found in growth rings in the bones of these extinct giants.
These marks are common in many animal species and are caused by differing growth rates in changing seasons. But bird species do not have these rings as in most cases their growth phase is confined to less than a year.
The moa, though, was the exception.
Examination of rings in stored bones suggests that the two moa species, luxuriating in the safety of New Zealand's unique eco-system, may have taken several years to reach reproductive maturity and up to a decade to attain skeletal maturity.
That made them "extremely vulnerable" to hunting. If too many adult moa were caught too quickly there would have been no chance of replenishment, and the species, dominated by unreproductive birds, would have been placed under severe pressure.
A study published last November postulates that there was a crash in the moa population that occurred before the arrival of the first settlers.
Moa numbers may have been between three million and 12 million birds a thousand years ago but tumbled to just 159,000 before the Polynesians arrived, possibly because of avian diseases brought by migrating birds or of the local effects of volcanic eruptions.
As the moas had no natural predators, their numbers would have rebounded eventually, had it not been for the fateful intervention of humans, according to this theory.
global_01_local_0_shard_00000017_processed.jsonl/4811 | Skip to navigation | Skip to content
Stem cells point to space ills
The research could help explain the negative effects of microgravity on astronauts (Source: NASA)
Human embryonic stem cells grown in simulated microgravity produce a markedly different set of proteins, a finding that may explain why long-term exposure to microgravity causes astronaut health issues such as loss of bone density and muscle wasting.
The research, led by biologist Dr Brendan Burns of the University of New South Wales in Sydney, will be presented this week at the 9th Australian Space Sciences Conference.
Burns, along with graduate researchers Elizabeth Blaber and Helder Marcal, used a NASA rotating-wall vessel to simulate microgravity, which is experienced by astronauts in low Earth orbit, to analyse its effect on human embryonic stem cells.
Stem cells are cells that have yet to differentiate into cells with specialised functions.
The researchers isolated and identified proteins expressed by the cells and compared these to proteins from cells grown under normal gravity conditions.
Their results showed 75% of the proteins from the cells exposed to microgravity were not found in those grown under normal gravity.
Fewer antioxidants
In particular, cells exposed to microgravity produced more proteins that negatively regulate bone density. In the human body, these changes in bone tissue could result in decreases in bone density, leading to osteoporosis.
"Although it has long been known that microgravity affects bone density, what kinds of genes and proteins that are affected by microgravity to cause this condition isn't known," says Burns.
The microgravity-exposed cells also produced fewer proteins with antioxidant effects, he says. Antioxidants protect the body from reactive oxidants that can damage DNA.
"We're trying to get down to the nuts and bolts of what is causing these issues at a cellular level," says Burns.
Novel approach
Associate Professor Ernst Wolvetang of the Australian Institute for Bioengineering and Nanotechnology at the University of Queensland says while it's difficult to judge the research prior to publication, it is a "novel idea".
"If the right controls were done, and they find 75% different protein expression, especially if they include bone morphogenic proteins [such as those that regulate bone density], that would be a significant discovery," says Wolvetang.
He says as far as he knows microgravity studies had been done mostly on functional bone building cells, and in that sense the research is novel.
"How relevant this will be to space flight itself is a whole different matter, because we don't have embryonic stem cells in our adult bodies anymore," he says.
global_01_local_0_shard_00000017_processed.jsonl/4869 | Find Your Next Favorite Book
Our Money-Back Guarantee
You're Aboard Spaceship Earth
Earth can also be considered a spaceship, as it has everything on board needed for survival--water, food, and air with oxygen. Unlike a space shuttle ... Show synopsis
Find your copy
Buy it from $0.99
Buy new from $3.77
Change currency
Reviews of You're Aboard Spaceship Earth
Write this item's first Alibris review Review it now
Discussions about You're Aboard Spaceship Earth
Start a new discussion
1. What's on your mind? Review post guidelines
|
global_01_local_0_shard_00000017_processed.jsonl/4881 | comments_image Comments
Breaking: Keith Olbermann Announces Abrupt Departure From MSNBC; "Countdown" to End
More detail will no doubt follow, but for now all we know is that Olbermann announced his departure on tonight's show. He didn't give any reason.
LA Times blog:
More from the AP:
"This may be the only television program where the host was much more in awe of the audience than vice versa," he said.
Olbermann's prime-time show is the network's top-rated. His evolution from a humorous look at the day's headlines into a pointedly liberal show in the last half of George W. Bush's administration led MSNBC to largely shift the tone of the network in his direction, with the hirings or Rachel Maddow and Lawrence O'Donnell in primetime.
MSNBC has announced the new shuffle of its schedule: Lawrence O'Donnell's show now starts at 8 p.m EST. Rachel Maddow will stay in her regular slot at 9 p.m. The Ed Show" with Ed Schultz will air at 10 p.m.
Here's the video of his his final sign-off:
AlterNet / By
Posted at January 21, 2011, 3:25pm
See more stories tagged with: |
global_01_local_0_shard_00000017_processed.jsonl/4927 | Thanks Jim for the reply. I only ask because I had trouble with this paper as a regular contact/proofing paper. You can read about my troubles below in the thread "ILFORD INDUSTRIAL CONTACT PAPER IC-1-1P anybody know about this stuff?" I got faster times with a cheap black light bulb than anything coming through my D2V enlarger. JohnW |
global_01_local_0_shard_00000017_processed.jsonl/4929 | You can use a rack cleaner such as Rack Attack by Trebla and there are others on the market as well. You can also use Lysol toilet bowl cleaner but only use in a well ventilated area and rinse thoroughly with lot's of fresh water. If the gears and other parts are just stained that's normal and there is not much you can do about it, they will not remain bright red as when new and the staining is of no consequence except for being cosmetic. Build up is another issue altogether and must be cleaned off. |
global_01_local_0_shard_00000017_processed.jsonl/4942 | How to Treat a Forehead Injury With Eye Swelling.?
1. Contact a doctor and explain your condition. Provide details regarding when and how the incident occurred. Detail any side effects you have been experiencing as a result of your injury, and specifically mention any pain, nausea or vomiting. 2. Get
1 Additional Answer Answer for: Forehead Swelling
Symptom Checker
Enter Symptom:
Q&A Related to "How to Treat a Forehead Injury With Eye Swelling..."
The two most common causes of swelling int he forehead are sinus in...
Apply an ice pack.
Try putting ice on the swelling and avoid picking at it any more. If it is really dry or irritated, try lotion or moisturizer. If it still doesn't go down you might want to try some
Explore this Topic
Swelling of the forehead and other parts of the face is due to fluid build up in facial tissues. There are a number of causes for why this may occur. An allergic ...
It is unusual for a forehead to swell one week after a cut. Usually the head would swell up a day or two after the cut or gash. Because it took a week to swell ...
Yes, cellulitis on the scalp can cause swelling on the forehead and around the eyes and nose and cheeks. Because it is a bacterial infection, cellulitis can actually ... |
global_01_local_0_shard_00000017_processed.jsonl/4946 | Go Camping Or Rent A Cabin
Weekend Getaways An escape to the great outdoors is a vacation that gives you tons of options. Go just for a night or stay out for a long weekend. Rent a campsite that has parking and running water, or go off the grid by hiking out into the backcountry. You can even look into river camping, where you travel down the river by day and camp on its side by night. If you're not sold on camping -- or if a weekend in the woods is something your partner wouldn't be caught dead doing -- look at renting a cabin for the weekend. You'll still enjoy some quiet rest in nature, and the warm beds and temperature controls mean you're not roughing it.
More Like This
Best of the Web
Special Features |
global_01_local_0_shard_00000017_processed.jsonl/4948 | Loop! Astrology+News
Retrograde: Beware of Wrong-Way Drivers!
Mercury retrogradeSo now toasters will explode, PCs will crash one after the other, and the simplest coordination between the head, feet, and hands will get out of control. Because Mercury has been retrograde since yesterday (Monday, 10/21) around 12:28 p.m., suspending all physical laws and moving backward through the zodiac since then. In any case, this is how it seems from our perspective as earthlings. Several thousands of years ago, this must have appeared like a wonder to nighttime observers: God's hint that something was not in order, that his plan was disturbed, and that evil powers were involved in case of doubt.
We now know that everything is fine, that neither Mercury nor the other planets turn on their heels at some point and become cosmic wrong-way drivers. All of this is just an appearance, an illusion that is created from a certain perspective. So does this mean that all of the astrological classifications in this respect are just deceptions?
Yes and no would be the clear answer. If yes, then the solar system still functions properly and Mercury is neither faster nor slower than otherwise but just continues to make its rapid orbits around the sun. But astrologers always work more with the appearance than the cold facts, and even just the geocentric standpoint expresses this. In a reality that is primarily influenced by our perception, it is therefore very legitimate to weigh something that we know only exists as an appearance.
Which brings us back to the exploding toasters. These continue to be subject to the previous laws of nature – unless we would now like to philosophize on the perception of toasters, computers, or injection pumps. Otherwise, if something breaks during Mercury retrograde, this is still generally related to the principle of transience.
But what we can certainly assume that things like purely human operational errors will be more frequent since the user and owner of the toaster are subject to the retrograde appearance. Because Mercury, as the carrier energy of all information, slows down extremely at the start and end of every retrograde phase up to a point in which it virtually seems to fall out of time when its movement energy seems to approach the absolute zero point.
Mercury phase1And speed is actually a specific characteristic of the individual planets. The simple formula is that the further a heavenly body is from the sun, the longer it requires for a complete orbit around our main star. This may also be one of the reasons why the transits of slow-moving planets such as Saturn, Uranus, Neptune, and Pluto can be so extremely strenuous. Not only because their specific energy is so demanding but, above all, because all of the experiences related to it can drag on for weeks and months, and sometimes even years.
While the Moon transits – and in a certain sense, also Mercury and Venus transformations – tend to run through our life in a rather light-footed and fast way. Unless they become slower and slower until they ultimately almost stand still. Just like Mercury, which has replaced even Pluto and Neptune in this hierarchy as of this morning. A certain degree of self-observation can be helpful at such moments, especially when the position of Mercury interacts with our own natal chart positions in a serious manner.
If this is also confirmed in the mundane planetary positions, which means that similar winds are also blowing here, then we may have the one or the other bewildering experience. After all, Mercury is in Scorpio right now and still close to Saturn. This means that all of the birth years for whom Saturn is also located there will be especially affected by this retrograde.
This does not necessarily manifest immediately as the worst blocks or the like; quite to the contrary: our thinking can now become very concrete and practical. People who write may develop an endurance and consistency that had otherwise been rather difficult; thoughts may now take on an extremely plastic form in the inner space and appear to be almost tangible. So if we succeed in directing the energy to constructive topics, we can very much call this a blessing; but if this tends to involve negatively connoted contents, chances are that it will be more of a curse.
The now deadlocked ideas will be difficult to resolve in the coming days and must presumably be taken up time and again, considered from all sides, dissected and taken apart in a Scorpionic way, and finally reduced to their ultimately substance with a Saturnal approach. In the most favorable case, the end of this phase finds us capable of summarizing a story or perception that we had previously just explained in an extremely excessive and comprehensive way into just one sentence.
In the least favorable case, our own thoughts turn into bulwarks or walls behind which we have entrenched ourselves because we are so convinced that things can only be like this or that and never different. Which means that we are no longer capable or willing to differentiate between reality and our thoughts. Fortunately, the trine to Jupiter (which is also just about to go into its retrograde) also exists in the mundane positions, so we can hope that an increase in insight also occurs in most cases. So this is generally a time in which it is worthwhile to more deeply explore the important themes in life and once again consider them from various aspects in order to initiate concrete implementations or consequences at the end of this phase. This means that it is better to avoid hasty reactions since they usually backfire.
3 phasesThe phases of every retrograde motion can be divided into three parts. Phase One basically already begins – as in the current example – when Mercury reaches the point in the Zodiac that it will once again transition through at the end of its retrograde phase. In our case, this occurred on October 1. In principle, all of the important topics will now appear for the first time, both from the global perspective and for each individual. Everything that now becomes significant is effectively the ground upon which we must also walk in the following period of time.
Phase Two begins with the actual retrograde motion, which was rung in today. During the next three weeks, these topics will now be revised or – more precisely – worked through. This tends to involve analysis, which is why we separate the wheat from the chaff as we differentiate what is essential from the non-essential. The search for possible causes of obstacles and problems can now be helpful if we do not get endlessly caught up in the details. Phase Two ends on November 11, which is when Mercury will once again become stationary around 10:12 p.m., briefly falls out of time, and becomes more impressive than even Pluto as a result. But his movement reverses direction again and returns to normality.
This moment heralds the start of Phase Three: Now all of the previously processed topics are illuminated once again, but with the prospects of the future. It would now be advisable to complete the analytic processes and examine all of the topics with regard to their living implementation. Because a healthy dynamic now also rules in the mental area; the appearance once again more closely approaches reality.
And in the end, when Mercury has once again crossed the starting point of its actual retrograde phase, something new may have been created. Something that has a well-founded basis since we have taken the time to approach all of the important topics in a process-like manner and therefore also show enough substance to integrate future transformation requirements without immediately becoming faceless as a result.
Mercury-EarthBut the areas of life in which all of this will manifest for each individual cannot be deduced from just the mundane order of events. The comparison with the natal chart (transits, directional triggers, etc.) may provide better points of reference, especially if we orient ourselves upon the starting and end point in time for the overall phase. However, these two event charts could also be used to create a time-Davison Relationship Chart that can also be compared with our own birth picture. True researcher souls will also add the heliocentric points of reference and/or release charts; this tends to show more of the energetic topics that – when skillfully linked with the geocentric pictures – provides information on causes and effects (and/or implementations).
Taken as a whole, it would be better to speak of complexity in relation to retrograde planets instead of a deficit. Something shifts and deepens, only finding its expression after a certain period of experience and conflict. Which should compulsorily also lead to more awareness in relation to the corresponding topics – unless we feel that the related requirements are bothersome and too strenuous.
Then the new toaster may actually explode, we can no longer remember our own telephone number, or we once again forget an important anniversary. However, if we are constantly confronted with wrong-way drivers on the highway who wildly honk and flash their lights at us during this phase, it would be advisable to remove the foot from the gas as quickly as possible, carefully pull over to the side of the road, and perhaps rethink our own standpoint.
Appendix: All Retrograde Phases of the Planets in 2013
More detailed explanations: Astro-Wiki - Retrograde Motion; Stationary Phase
LOOP! at Astrodienst
Orignal article by www.astrologie-zeitung.de
Author: Harald Lebherz
Translated into English by Christine Grimm
Loop! and Astrodienst
Astro Wiki
Current Planets
16-Mar-2014, 02:28 UT/GMT
Mars26Libra13' 7"r
Saturn23Scorpio10' 2"r
Pluto13Capricorn21' 5"
TrueNode28Libra40' 2"r
Explanations of the symbols
Chart of the moment |
global_01_local_0_shard_00000017_processed.jsonl/4977 | Thread: BCS upset in HD
View Single Post
Old 09-01-2007 #1
Super Member
JerryDelColliano's Avatar
Join Date: May 2007
Location: Beverly Hills
Posts: 1,423
Default BCS upset in HD
Wow - talk about ruining a season right from the get-go...
Isn't this the same Michagan team USC destroyed in a bowl game just a little while ago? Maybe they are still sore about that but I bet this loss hurts more. Applachian State?!!!!?
Can you imagine walking up to the window at the MGM and putting like $10,000 down on Apalachian State over Michagan. Yes, you will take the points but you don't need them!!!!!!
I like Idaho with the points (all 45 of them) as it won't be THAT much of a blow out. Call your bookie and parly the $100,000 you won on the Michagan - Apalachian State game into Idaho with the spread!!!
Jerry Del Colliano
JerryDelColliano is offline Reply With Quote |
global_01_local_0_shard_00000017_processed.jsonl/5024 | Tiverton flood defences to be repaired
Related Stories
A 650m (2,130ft) section of flood defences on the River Exe in Devon is being repaired, the Environment Agency says.
Cracks and some degradation were discovered in the defences at Tiverton during examinations carried out in 2012, the agency said.
The defences in Tiverton were established in 1967 following severe flooding in the area in 1960.
It is expected the work will be finished by the end of May.
The river, in east Devon, burst its banks in several locations during flooding at the end of last year.
More on This Story
Related Stories
The BBC is not responsible for the content of external Internet sites
BBC Devon
Min. Night 8 °C
|
global_01_local_0_shard_00000017_processed.jsonl/5065 | Philippians 4:19
Philippians 4:19
But my God shall supply all your need
Or "fulfil all your need": the Jews, when they would comfort any, under the loss of any worldly enjoyment, used to say, (Knwrox Kl almy Mwqmh) , "God fulfil", or "will fulfil thy need" F6. The Vulgate Latin, Syriac, and Arabic versions, read these words as a wish or prayer, "but may my God supply" or "fulfil all your need"; I am not able to make you any returns, but I pray that my God would recompence it to you, that as you have supplied my want, he would supply all yours; but we with others, and as the Ethiopic version, read, "shall" or "will supply"; as an assertion by way of promise, though he could not, yet his God would; he who was his God, not only as the God of nature and providence, or as the God of the Israelites, but as the God of all grace; who had loved him as such, had chosen, adopted, regenerated, and sanctified him; who was his God in Christ, and by virtue of the covenant of grace, and which was made known in the effectual calling; whose ambassador he was, and whom he had faithfully served in the Gospel of his Son; this God, who had been his God, was and would be so unto death, in whom he had an interest, and because he had an interest in him, and was thus related to him, be firmly believed, and fully assures these saints, that he would supply their wants who had been so careful of him: believers, though they need nothing as considered in Christ, being complete and filled full in him, having in him all grace, and all spiritual blessings, and under believing views of this at times, see themselves complete and wanting nothing; yet, in themselves, they are poor and needy, and often want fresh discoveries of the love of God to them, fresh supplies of grace from Christ, stand in need of more light from him, and to be quickened according to his word; they want fresh supplies of strength from him answerable to the service and work they are daily called to; and as their trials and afflictions abound, they have need of renewed comfort to support under them; and have also need of fresh manifestations and applications of pardoning grace to their souls, and fresh views of the righteousness of Christ, as their justifying righteousness before God; and, in a word, need daily food for their souls as for their bodies: now God, who is also their God, is able and willing to supply their wants; and he does so, he withholds no good thing from them, nor do they want any good thing needful for them, for he supplies "all" their need; and this they may expect, since he is the God of all grace, and a fulness of grace is in his Son; and this grace is sufficient for them, and a supply of it is given them by the Spirit;
according to his riches;
God is rich not only in the perfections of his nature, which are inconceivable and incommunicable; and in the works of his hands, of creation and providence, the whole earth is full of his riches, ( Psalms 104:24 ) , and according to these riches of his goodness he supplies the wants of all creatures living; but he is also rich in grace and mercy, ( Ephesians 2:4 Ephesians 2:7 ) , and it is according to the riches of his grace he supplies the spiritual wants of his people, and he does it like himself, according to the riches he has; he gives all things richly to enjoy, plenteously and abundantly:
in glory:
in a glorious manner, so as to show himself glorious, and make his people so, to the glory of his rich grace; and "with glory", as it may be rendered, with eternal glory; he will not only give grace here, and more of it as is needful, according to the abundance of it in himself and in his Son, but glory hereafter: and all
by Christ Jesus;
and through him, who is full of grace and truth; who is the Mediator in whom the fulness of it lies, and through whose hands, and by whom, it is communicated to the saints: or "with Christ Jesus"; along with him God gives all things freely, all things pertaining to life and godliness: or "for the sake of Christ Jesus"; not for any worth or merit in men, but for the sake of Christ, in whom they are accepted, and on whose account respect is had to their persons, and so to their wants.
F6 T. Bab. Betacot, fol. 16. 2. Debarim Rabba, sect. 4. fol. 239. 4.
Read Philippians 4:19 |
global_01_local_0_shard_00000017_processed.jsonl/5066 | The KJV Old Testament Hebrew Lexicon
Strong's Number: 06611
Original WordWord Origin
hyxtpfrom (06605) and (03050)
Transliterated WordTDNT Entry
Phonetic SpellingParts of Speech
peth-akh-yaw' Proper Name Masculine
Pethahiah = "freed by Jehovah"
1. a priest, in charge of the 19th course, in the time of David
2. a Levite and returning exile who had married a foreign wife; probably the same as 3
3. a Levite who helped lead in the confession of the people in the time of Ezra; probably the same as 2
4. son of Meshezabeel, descendant of Zerah the son of Judah; deputy of the king in all matters concerning the people
King James Word Usage - Total: 4
Pethahiah 4
KJV Verse Count
1 Chronicles1
Bibliography Information
Brown, Driver, Briggs and Gesenius. "Hebrew Lexicon entry for P@thachyah". "The KJV Old Testament Hebrew Lexicon". . |
global_01_local_0_shard_00000017_processed.jsonl/5067 | 3 replies [Last post]
Joined: 09/09/2002
Posts: 13
round balls
Who makes an affordable 66" or so twist 54cal percussion that'll take 150 grains of powder? I want to shoot round balls and it seems most rifles are made for sabots.
Location: Texas panhandle
Joined: 10/28/2002
Posts: 65
round balls
affordable is relative, but traditions (the company) makes a decent reasonably priced rifle that most sporting goods stores carry (like acadamy) bassproshops and cabelas also have a decent selection.
Location: Colorado
Joined: 02/27/2003
Posts: 394
round balls
The Lyman Great Plains Rifle is available in a round-ball twist in either .50 or .54 caliber.
Joined: 04/22/2003
Posts: 19
round balls
I second the Lyman, but remember many MLs have a "sweet" spot where they achieve maximum accuracey, and it may not be at it's max load. In fact I've seen serveral MLs that would handle balls with 75 - 90 grains of powder great, but above 90 their accuracey really dropped. This is not to say a ML can't shot a ball and more powder well, just that their are no gurantees.
Related Forum Threads You Might Like
ThreadThread StarterRepliesLast Updated
Round Ballsmuzzle baby609/10/2010 11:26 am
Lyman Great Plains rifle - your comments pleasebigH204/24/2003 07:26 am
To much lube on the patch? (.50 cal Hawkins Tradition)tom3331209/24/2007 13:02 pm
Percussion cap helpJJD306/18/2010 17:47 pm
.50 Cal Loadsprhunter401/22/2010 22:27 pm |
global_01_local_0_shard_00000017_processed.jsonl/5076 | Email updates
This article is part of the series Question and answer.
Open Access Highly Accessed Question and Answer
Q&A: Single-molecule localization microscopy for biological imaging
Ann L McEvoy1, Derek Greenfield125, Mark Bates3 and Jan Liphardt124*
Author affiliations
1 Biophysics Graduate Group, University of California Berkeley, Berkeley, CA 94720, USA
2 Physical Biosciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
3 Department of NanoBiophotonics, Max Planck Institute for Biophysical Chemistry, Göttingen 37077, Germany
4 Department of Physics, University of California Berkeley, Berkeley, CA 94720, USA
5 LS9, Inc., 600 Gateway Blvd, South San Francisco, CA 94080, USA
For all author emails, please log on.
Citation and License
BMC Biology 2010, 8:106 doi:10.1186/1741-7007-8-106
Received:23 June 2010
Accepted:4 August 2010
Published:11 August 2010
© 2010 McEvoy et al; licensee BioMed Central Ltd.
Why is it important to understand how cells are organized?
Prokaryotic and eukaryotic cells possess a complex internal structure, including protein networks, genetic material, internal and external membranes and organelles. These elements provide physical structure to cells, and a means to localize particular biochemical processes to specific cellular regions. The structure of the cell is intimately linked to its biological functions, and hence the study of the physical structure and organization of the cell is a valuable means of gaining insight into cell biology.
How do biologists typically visualize the spatial organization of cells?
Light microscopy and electron microscopy (EM) are widely used in cell biology to observe the small details of biological samples. In the past decade, the development of new fluorescence microscopy methods has revolutionized how biologists use light microscopes to study cellular structure. However, a significant disadvantage of fluorescence microscopy is its spatial resolution, or image sharpness. Although the structures of the protein complexes within the cell exist at length scales of micrometers to nanometers, the light microscope is unable to resolve structures smaller than approximately 250 nanometers. Features smaller than this size appear blurred in the microscope image. This 'resolution limit' arises as a result of the diffraction of light and leaves many cellular structures difficult or impossible to observe.
EM allows for much higher-resolution images than light microscopy. However, unlike light microscopy, which has the advantage of excellent fluorescence labeling specificity, EM lacks powerful and easy labeling strategies. In addition, EM imaging can only be performed on fixed samples and often requires harsh sample preparation techniques that can disrupt native protein structures. Ideally, we would use techniques that combine the specificity of labeled probes with the resolution of EM.
What is single-molecule localization microscopy?
Taking advantage of sensitive fluorescence detection methods, single-molecule imaging techniques have improved our understanding of the structure and function of proteins. Recently, these methods have been applied to high-resolution light microscopy, allowing light microscopes to take images with a spatial resolution far beyond the diffraction limit. It was discovered that by imaging individual fluorescent molecules one at a time, an image of a fluorescently labeled sample can be reconstructed at much higher resolution than previously possible. For the purposes of this review, we will refer to this method as single-molecule localization microscopy (SMLM), as it is based principally upon single molecule detection and localization. SMLM combines the benefits of both fluorescent light microscopy and EM, producing nanometer-resolution images of structures that have been labeled with high specificity.
Various implementations of SMLM have been developed by different research groups, and as a result the technique is known by several other names, which include photoactivated localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), and fluorescence photoactivation localization microscopy (fPALM) [1-5].
How does SMLM work?
A single fluorophore inside a cell behaves as a single point source of light. However, when viewed through a microscope, the size of the image of the fluorophore is much larger than the size of the fluorophore itself (Figure 1). The broadening of the image of a point source is due to diffraction, an optical effect resulting from the wave-like properties of light interacting with the optics of a microscope; this effect limits the spatial resolution of conventional optical microscopy to around 250 nm laterally, and around 500 nm along the optical axis. The broadened image produced by a point source is termed the point-spread function (PSF) of the microscope (Figure 1a, right).
Figure 1. The images of fluorophores observed with a microscope are blurred by the wave-like properties of light. (a) The image of a single fluorophore (red circle) has a width greater than approximately 250 nm when viewed with visible light, despite the fact that the fluorophore itself is only a few nanometers in size. The image of such a point emitter is called the point-spread function (PSF). The position of the fluorophore in this case can be determined by measuring the center position of the image, which is equivalent to the PSF in this case. (b) When multiple fluorophores are located in close proximity, their images overlap and it becomes difficult to distinguish the individual fluorophores from one another. It is the width of the PSF that limits the ability of the microscope to resolve closely spaced fluorophores. The fluorophore positions cannot be determined accurately in this case.
Although the image of the fluorophore is broadened by diffraction, the center of the observed image corresponds to the position of the fluorophore. When only a single fluorophore is emitting light, the position of the fluorophore can be found very precisely by measuring the center position of its image. Therefore, if only one tagged protein were present inside the sample, we would be able to know the position of the protein to high precision (Figure 1a).
In cells, many proteins exist in dense complexes, such that the distance between each protein is less than the wavelength of the light used to image them. This means that closely spaced labeled proteins (closer than 250 nm) appear as a single fluorescent entity when viewed through the microscope (Figure 1b). In this situation, it becomes difficult to distinguish the individual fluorophores, and it is impossible to observe the spatial organization of the sample for length scales smaller than several hundred nanometers. This is the reason that traditional fluorescence microscopy, which illuminates all fluorophores in the sample simultaneously (Figure 2a), is limited in its spatial resolution.
Since it is difficult to spatially resolve closely spaced fluorophores, SMLM uses the innovative approach of separating the fluorescence of each emitter in time. Instead of imaging all the fluorophores simultaneously, SMLM techniques image each individual fluorophore one at a time, making it possible to find the position of each molecule with high precision. Once all of the positions have been found, they are plotted as points in space to construct an image. The spatial resolution of this image is not limited by diffraction, but only by the precision of the localization process for each fluorophore.
To observe each protein individually, photoactivatable fluorophores are used. These are fluorescent molecules for which the fluorescence emission can be switched on and off under the control of an external light source. The activation light source illuminates the entire sample but at such a low intensity that only one or a few fluorophores are activated at a time, and the fluorophores that are activated at a given time is random. This enables different photoactivatable fluorophores to be 'turned on' at different times, and allows the image of each fluorescent label to be observed individually. Computer algorithms are used to find the locations of each molecule, and these fluorophore locations are then assembled into an image (Figure 2c). The location of the molecule is determined by finding the centroid of the image obtained from each molecule (discussed in detail later). The precision of the position measurement is dependent on how bright the fluorophore is over the background signal. The brighter the fluorophore, the easier it is to determine its location (Figure 2d).
SMLM imaging time is limited by how quickly each fluorophore can be turned on and then turned off again. To image quickly, it is often necessary to use high excitation power so that each fluorophore is switched off or photobleached immediately after it has been imaged. Because SMLM techniques image each fluorophore individually, the time required to acquire an image increases with the density of fluorophores in the sample.
Why would I use SMLM?
SMLM has many benefits over traditional imaging techniques. This method allows proteins of interest to be labeled specifically and provides approximately ten times higher spatial resolution than traditional fluorescence light microscopy. It is therefore useful for observing biological structures at the nanometer scale, and for examining the molecular structure of protein complexes [1-5].
Many biologists are interested in understanding how proteins interact inside cells. However, because of the resolution limitations of standard fluorescence microscopy, it is only possible to identify protein co-localization to within around 250 nm. Because single-molecule techniques obtain images of higher resolution, it is possible to co-localize two proteins to around 25 nm, allowing for much more accurate co-localization experiments [6,7].
In addition, SMLM can be used to track how single proteins move inside cells. Individual protein positions can be assembled into tracks that show how populations of proteins move in cells over time, on the nanometer scale [8].
I would like to take an SMLM image of proteins within a cell. Should I?
Single-molecule imaging is more complicated than conventional fluorescence imaging. It is computationally intensive and requires the use of different fluorophores, many of which are not well characterized. Ideally, the researcher would start with a system that has been successfully labeled and imaged previously using either fluorescent proteins or immunofluorescence methods. Starting with such a system will confirm that the system can be labeled and will give insight into the best labeling strategy (that is, is a linker necessary in the case of a fluorescent protein label; should the amino or the carboxyl terminus be tagged; should fluorescent antibodies be used?). Furthermore, imaging problems are easier to troubleshoot when the typical cellular localization of the protein of interest is already known.
On the basis of previous studies, it may be known how fixation affects the sample structure. If not, it is important to test different fixatives to ensure that the protein complex of interest can be chemically fixed without perturbation. Some fixatives preserve particular protein complexes better than others, so it is necessary to check which ones are best for a given system. It is also very important to have an assay for functionality to verify that the attachment of a fluorescent tag does not perturb the protein of interest. In addition, as in all single-molecule experiments, it is necessary to decrease background fluorescence signals by using non-fluorescent imaging media and clean coverslips to increase the signal-to-noise ratio obtained from a single emitter. It is also important to have densely labeled samples: the sharpness of the reconstructed image is directly related to the labeling density, and it becomes increasingly difficult to observe fine detail if the labeling density is low (Figure 3).
Figure 3. Higher labeling densities increase the amount of detail observed in SMLM imaging. In this example, the structure of a small loop of DNA is determined by labeling the DNA with fluorophores (left column) and determining the fluorophore positions with SMLM (right column). The detail in the resulting image of the DNA (right column) is only as good as the labeling density. (a) Labeling the DNA with only five fluorophores (left) does not preserve the actual structure of the DNA (right). (b) Doubling the number of fluorophores labeling the DNA (left) allows the structure of the DNA loop to start to appear (right). (c) Densely labeling the structure (left) makes the shape of the DNA apparent (right).
Practically speaking, how do I prepare a sample for single-molecule imaging?
Single-molecule imaging requires the use of photoactivatable or photoswitchable fluorophores, of which there are two main categories: photoactivatable fluorescent proteins (paFPs), and photoswitchable synthetic fluorescent dye molecules such as Cy5 [4,9,10]. As with traditional fluorescent proteins such as green fluorescent protein (GFP), paFPs can be genetically encoded and fused to proteins of interest. Photoswitchable dyes can be conjugated directly to proteins of interest, or can be conjugated to antibodies that target the protein of interest (immunofluorescent labeling). The choice of dyes or paFPs depends on the biological application. paFPs have the advantage of labeling each protein of interest directly, so they are highly specific. However, paFPs are dimmer than dyes and multicolor imaging is more challenging because many paFPs have similar emission spectra. Some commonly used paFPs include mEos2, pamCherry, Dronpa and Dendra2. Synthetic dyes, by contrast, are very bright but it can be difficult to label proteins with dyes, particularly in living samples. Immunofluorescence techniques are dependent on the quality of the antibodies used and often have higher background signal as a result of nonspecific staining. They also often have a lower density of labeling in comparison to paFPs. Samples labeled with paFPs can be imaged in any non-fluorescent media, whereas some synthetic dyes require the use of reducing agents in the imaging buffer to photoswitch properly [4,9,10].
To acquire an image of a sample labeled with paFPs, it is necessary to first grow the cells and express the fusion protein. Once the cells have been grown, they should be fixed and either placed on a coverslip for imaging, or imaged on the coverslip they were grown on. Alternatively, if dyes are used, the cells should first be grown and then fixed. The cells are then permeabilized and labeled using a strategy such as immunofluorescent labeling.
Because SMLM image acquisition may take a long time, any drift of the microscope stage during data collection will need to be corrected. For this purpose, it is often useful to include fluorescent particles on the surface of the sample or the glass substrate. These fluorescent particles, such as gold nanoparticles, allow you to track any lateral movements of the stage during image acquisition and correct for drift in software.
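As a rough software sketch of such fiducial-based drift correction (the array layouts are illustrative assumptions, not a prescription from the article): estimate the stage drift in each frame as the mean bead displacement relative to the first frame, then subtract it from every molecule localization.

```python
import numpy as np

def correct_drift(locs, fiducials):
    """locs: (n_locs, 3) array of [frame, x, y] molecule localizations.
    fiducials: (n_frames, n_beads, 2) array of bead [x, y] per frame,
    with frame numbers running 0..n_frames-1."""
    # Drift per frame = average bead position relative to frame 0.
    drift = fiducials.mean(axis=1) - fiducials[0].mean(axis=0)   # (n_frames, 2)
    corrected = locs.copy()
    frames = locs[:, 0].astype(int)
    corrected[:, 1:3] -= drift[frames]   # subtract each localization's frame drift
    return corrected
```

In practice the per-frame drift trace is usually smoothed (for example with a running average) before subtraction, so that the beads' own localization noise is not added to every molecule position.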
What equipment do I need to build such a microscope?
In general, conventional fluorescence microscopes can easily be modified for SMLM. In most cases, SMLM has been carried out using total internal reflection (TIR) illumination, which limits the light to the bottom 100 to 150 nm of the sample, thus reducing out-of-focus light and making it easier to observe single molecules. It is convenient to use TIR imaging if you are imaging proteins close to the bottom of cells. However, for thin samples such as EM sections or small cells, it is possible to illuminate using epi-fluorescence.
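The depth of the evanescent field in TIR illumination follows a standard formula, d = lambda / (4*pi*sqrt(n1^2*sin^2(theta) - n2^2)); the sketch below plugs in illustrative values (the refractive indices and angle are assumptions) to show how the often-quoted 100 to 150 nm figure arises.

```python
import numpy as np

wavelength_nm = 561.0     # excitation wavelength (illustrative)
n1, n2 = 1.518, 1.37      # coverslip glass and cytosol (assumed)
theta = np.deg2rad(68.0)  # incidence angle, beyond the critical angle (~64.5 deg here)

d = wavelength_nm / (4 * np.pi * np.sqrt((n1 * np.sin(theta))**2 - n2**2))
print(f"evanescent penetration depth ~ {d:.0f} nm")   # ~140 nm
```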
To photoactivate and excite fluorophores in the sample, it is necessary to add the appropriate laser lines to an existing microscope. The choice of the lasers used depends on the activation and excitation spectra of the fluorophores. Lasers are frequently utilized because they deliver the necessary power to image quickly. Like all fluorescence microscopy, it is necessary to have the appropriate excitation and emission filters to maximize your signal-to-noise ratio [10,11]. It is beneficial to use an objective with a high numerical aperture (NA = 1.4 or higher) so that as many photons as possible are collected. To collect the data, a sensitive CCD camera (such as an electron-multiplying CCD) is also required to observe as many photons as possible. Because single-molecule imaging techniques are wide-field and it may take a long time to look at each fluorophore individually, the data files obtained can become quite large [10,11]; therefore, a fast enough computer with sufficient storage space is essential.
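As a back-of-the-envelope illustration of why storage matters (all acquisition parameters below are assumed for illustration, not taken from the article):

```python
frames = 50_000           # a typical-scale SMLM acquisition (assumed)
pixels = 256 * 256        # camera region of interest (assumed)
bytes_per_pixel = 2       # 16-bit camera data

size_gb = frames * pixels * bytes_per_pixel / 1e9
print(f"~{size_gb:.1f} GB per acquisition")   # ~6.6 GB
```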
How do I convert the raw data to a super-resolution image?
Once you have acquired your single-molecule imaging data, you will typically have a stack of thousands to hundreds of thousands of single image frames. Each frame will have points of intensity corresponding to the light emitted from a fluorescent label. It is necessary to find the locations of each fluorophore in each frame and then computationally assemble those locations into a composite image. This composite image can be thought of as a map of the best estimation of where the fluorophores are located during imaging. We will consider the case of two-dimensional (2D) imaging for ease of discussion.
To find the location of each fluorophore, it is necessary to first identify each single molecule. This is done by choosing an appropriate threshold to distinguish the signal each molecule emits from the background [10,11]. If the signal is high enough, it is considered to be a target fluorophore. If the switching event lasts longer than one image frame, signals can be combined across frames to increase the signal obtained from each fluorophore. Once a target fluorophore is found, the signal is fitted to a 2D Gaussian distribution (or the centroid of the signal is determined). How well a Gaussian can fit the signal is dependent on how bright the signal is above background (Figure 2d). In the SMLM image, the location of each fluorophore is represented as a small Gaussian intensity peak, whose width is scaled according to the precision of 'localizing' that fluorophore. In other words, the blurred image of the emitter is replaced with the best guess as to where the fluorophore is located. As it may be necessary to image the sample for a long time, it is also important to perform drift correction on the image using appropriate methods [1,5,10].
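A minimal sketch of the fitting step, assuming `spot` is a small background-subtracted image patch centered on one detected molecule (the function and parameter names are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    # 2D symmetric Gaussian plus a constant offset, flattened for curve_fit.
    x, y = coords
    g = amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) + offset
    return g.ravel()

def localize(spot):
    yy, xx = np.indices(spot.shape)
    # Crude starting guesses: peak height, patch center, ~1.5 px width.
    p0 = [float(spot.max()), spot.shape[1] / 2, spot.shape[0] / 2, 1.5, float(spot.min())]
    popt, _ = curve_fit(gauss2d, (xx, yy), spot.ravel().astype(float), p0=p0)
    return popt[1], popt[2]   # fitted center (x0, y0), in pixels
```

Maximum-likelihood fitting that models the Poisson statistics of the camera is generally preferred over least squares for dim emitters, but the least-squares sketch above conveys the idea.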
Image processing is a challenging aspect of single-molecule imaging. Recently, a new ImageJ plug-in was developed to process single-molecule imaging data in both two and three dimensions [12]. The development of such processing tools will facilitate the use of single-molecule imaging techniques for the broader scientific community.
Can you generate three-dimensional images?
Yes, three-dimensional (3D) single-molecule imaging has been carried out using both dyes and paFPs [13,14]. 3D imaging can be performed using several methods. One approach is to break the axial symmetry of the PSF by adding a cylindrical lens to the imaging path, causing the shape of each fluorophore's image to change depending on its height within the sample. The user can calibrate how the image changes as the sample is moved axially, and use this information to determine the height of the fluorophores in the sample. This technique has a wide z-range (at least 3 μm [15]), but altering the shape of the PSF complicates the localization algorithms and may decrease the lateral resolution of the image [13,15]. A more precise way of obtaining 3D information is to use interferometry, which uses phase information from the light emitted by the fluorophore to obtain height information. This allows for 10 nm axial resolution, but because of the limitations of the current system, imaging is restricted to a relatively thin region (around 500 nm) of the sample [14]. Interferometry requires the use of multiple objective lenses, significantly increasing the complexity of the system and making alignment and data processing more challenging.
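A sketch of how the astigmatism calibration might be inverted in software, assuming a bead has already been stepped through focus to record the fitted spot widths (wx, wy) at known z positions:

```python
import numpy as np

def z_from_widths(wx, wy, calib_z, calib_wx, calib_wy):
    """calib_z, calib_wx, calib_wy: 1D arrays from the bead calibration scan."""
    # Compare widths in sqrt space, one common choice for balancing the two axes.
    err = (np.sqrt(wx) - np.sqrt(calib_wx))**2 + (np.sqrt(wy) - np.sqrt(calib_wy))**2
    return calib_z[np.argmin(err)]   # z of the best-matching calibration point
```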
Do I have anything more than a pretty picture?
Because single-molecule imaging techniques look at each molecule individually, in principle it is possible to count each photoactivation event as representing one fluorophore. If the fluorophore is an irreversibly photoactivatable protein (that is, once the protein is observed, it is not capable of re-excitation), the number of excitation events corresponds to the number of proteins observed in the sample. In addition to the number of proteins, you also acquire the location of each protein in the sample. Essentially, a 'protein map' is obtained that can be used to determine the nearest-neighbor distances for all the proteins. It is also possible to search for ordered protein structures; however, the error associated with each protein position may obscure any regular ordered structure depending on the dimensions of the structure [16].
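A sketch of extracting nearest-neighbor distances from such a protein map, assuming `positions` is an (n, 2) array of localized coordinates in nanometers:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_distances(positions):
    tree = cKDTree(positions)
    # k=2 because the nearest hit for each query point is the point itself.
    dists, _ = tree.query(positions, k=2)
    return dists[:, 1]
```

The resulting distance distribution can then be compared against, for example, a random (Poisson) spatial model to test for clustering.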
It is important to keep in mind that there are many caveats associated with counting proteins as well as carrying out statistical analysis with single-molecule imaging data. It is important to ensure that only one fluorophore at a time in each diffraction-limited region (around 250 nm) is excited, which requires very low activation power. This extends the time required to image the sample. Also, if you want to count absolute numbers of proteins, it is necessary to image the sample until all the proteins have been activated, excited and then photobleached. Another concern is that there may be a population of paFPs that do not fold properly and are therefore not observable, or that are observable but emit too few photons to be identified as single molecules. Therefore, caution must be taken when making statements about the absolute numbers of proteins in a biological sample, and it is often more practical to draw conclusions about the relative number of proteins within a sample.
What kinds of biological samples have been imaged with single-molecule imaging techniques?
So far, the biological samples that have been imaged with SMLM include focal adhesions, microtubules, proteins in cryosections and chemotaxis receptors inside bacteria. All these samples are ideal for single-molecule imaging because they are thin samples or are associated with a flat membrane. They also have little 3D structure, and can be densely labeled. One 3D structure that has been imaged using single-molecule imaging techniques is the mitochondrion [15]. Using antibody labeling, it was possible to image the mitochondria with a z-range of 3 μm, and a z-resolution of approximately 50 nm.
SMLM techniques are still quite new, and so only a few studies have used them to understand and model biological processes. Greenfield et al. [16] used SMLM imaging to develop a model of how chemotaxis receptors in Escherichia coli organize in growing cells. In addition, they confirmed a theoretical prediction that many small clusters of receptors exist inside cells; these small clusters were previously obscured by autofluorescence [16]. Using live and fixed-cell SMLM, Hess et al. [17] obtained high-resolution images and dynamic information from influenza hemagglutinin, a clustered membrane protein, to differentiate between membrane organization models in fibroblasts. Another recent study used SMLM to show that there is a protein conformational change in the T-cell antigen receptor on activation [18].
What if I want to look at living cells?
It is possible to perform single-molecule imaging on live cells. Live-cell imaging often utilizes paFPs, as the preparation necessary for dye conjugation is more difficult to perform on living samples. Like fixed-cell imaging, live-cell imaging still excites each fluorophore individually; therefore, at any given time interval, only a few fluorophores will be observed [19].
One caveat of live-cell SMLM is that it is relatively slow compared with other fluorescence-imaging techniques. Because each fluorophore is localized at a different point in time, creating a time-lapse movie requires binning the localizations into time windows and reconstructing a series of SMLM images, one per window. With current techniques, these time windows are typically seconds in duration, so that a sufficient number of localizations is obtained in each window. In addition, care must be taken to avoid cellular damage by reducing laser power, which slows down image acquisition. Therefore, in many cases, the speed of most dynamic biological processes is too fast to be captured by live-cell SMLM movies. Instead, it may be more useful to use SMLM to track the individual movements of proteins inside live cells with nanometer precision [8].
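A sketch of this binning-and-reconstruction step, assuming `locs` is an (n, 3) array of [time_s, x_nm, y_nm] localizations; the window length, pixel size and field of view are illustrative assumptions:

```python
import numpy as np

def smlm_movie(locs, window_s=10.0, pixel_nm=20.0, fov_nm=(5000.0, 5000.0)):
    t_edges = np.arange(0.0, locs[:, 0].max() + window_s, window_s)
    x_edges = np.arange(0.0, fov_nm[0] + pixel_nm, pixel_nm)
    y_edges = np.arange(0.0, fov_nm[1] + pixel_nm, pixel_nm)
    frames = []
    for t0, t1 in zip(t_edges[:-1], t_edges[1:]):
        w = locs[(locs[:, 0] >= t0) & (locs[:, 0] < t1)]
        # Render each time window as a simple 2D histogram image.
        img, _, _ = np.histogram2d(w[:, 1], w[:, 2], bins=(x_edges, y_edges))
        frames.append(img)
    return np.stack(frames)   # one super-resolution image per time window
```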
Can you image deep into tissues?
It is difficult to image deep into cells with single-molecule imaging techniques. As one images deeper, the cellular autofluorescence increases, which can obscure the signal observed from single molecules. It also becomes more difficult to accurately determine the location of the fluorophores because the image of the fluorophore can change as a result of aberrations in the imaging system and heterogeneity in the sample.
To obtain SMLM images from deep inside cells, it is possible to section tissues to observe thinner samples. Alternatively, temporal focusing can be used in combination with SMLM to image deeper into cells and tissues [20]. Temporal focusing restricts the light used to excite the proteins to a thin sheet, thus eliminating some of the background autofluorescence.
Really, how difficult is it to do single-molecule imaging?
Although single-molecule imaging techniques offer better resolution than conventional fluorescence microscopy, they can be complex and time-consuming. Most biological structures are 3D, and so to make meaningful statements about the structure of protein complexes, 3D imaging is required. In addition, many interesting protein complexes reside deep within cells, and 3D imaging deep into cells is very difficult using current SMLM techniques, as described above. Another important point is that fine structural details can only be mapped using high-density labeling (Figure 3). In some cases it can be useful to localize sparse individual fluorophores, but to observe nanoscale structures it is necessary to label the sample with a sufficient density of fluorophores, as defined by the Nyquist criterion [19].
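The Nyquist-style density requirement mentioned above can be made quantitative: to support a resolution r, labels should be spaced no more than r/2 apart, giving a minimum 2D density of (2/r) squared. For example:

```python
resolution_nm = 20.0
density_per_um2 = (2 / (resolution_nm / 1000.0))**2
print(f"~{density_per_um2:.0f} labels per square micrometer")   # ~10,000
```

A 3D structure at the same resolution requires (2/r) cubed labels per unit volume, which is one reason dense labeling becomes the practical bottleneck.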
Despite these challenges, however, SMLM offers the highest resolution of all current fluorescence microscopy techniques. It is also relatively easy to implement in comparison to other super-resolution techniques.
What other techniques can acquire images with sub-diffraction-limited resolution?
Other methods of optically imaging at length scales below the diffraction limit include stimulated emission depletion microscopy (STED) [21] and structured illumination microscopy (SIM) [22]. Both STED and SIM use specific illumination light patterns to achieve a smaller effective PSF and improved spatial resolution. They are more challenging to implement than SMLM techniques, but both are currently commercially available from the main microscope manufacturers. STED has theoretically unlimited resolution, can be performed in 3D and deep within cells, and can be used to image live cells [23]. STED imaging is much faster than single-molecule imaging techniques; however, the speed of the imaging depends on the signal-to-noise ratio within the sample, the sample thickness, and the image size. The brighter the sample, the easier it is to image quickly and obtain axial information. The first demonstration of video-rate live-cell imaging at sub-diffraction-limit resolution was accomplished using STED, achieving frame rates of 30 Hz at a spatial resolution of 60 nm [23]. Some fluorophores are particularly well suited for STED imaging, including enhanced yellow fluorescent protein (EYFP) and mCitrine, in addition to the dyes Atto 647N and Atto 655.
SIM uses periodically modulated illumination light patterns to generate sub-diffraction-limit images, and can be used for 3D imaging of thick biological samples using conventional fluorophores. It is much faster than single-molecule imaging techniques, making live-cell imaging highly practical [24]. Complete 3D reconstructions of thin samples (around 2 μm) can be obtained in 15 to 30 seconds, although, once again, image acquisition times depend on sample brightness and thickness. SIM's main disadvantage is its resolution, which is only about twice that of confocal microscopy. In addition, SIM relies on mathematical calculations to convert the raw data into final images; if the sample conditions are not ideal, this can lead to artifacts in the image reconstruction.
Ideally, we would combine several different imaging modalities to understand biological systems. However, like all techniques or assays, it is important to consider which methods are appropriate for a particular system. With the invention of new imaging modalities like SMLM, it will be very exciting to see how they are adopted and applied to biological systems in the future. It may now be possible to examine biological processes, once obscured by the diffraction limit, at a new level of detail.
References

1. Betzig E, Patterson GH, Sougrat R, Lindwasser OW, Olenych S, Bonifacino JS, Davidson MW, Lippincott-Schwartz J, Hess HF: Imaging intracellular fluorescent proteins at nanometer resolution. Science 2006, 313:1642-1645.
2. Hell SW: Microscopy and its focal switch. Nat Methods 2009, 6:24-32.
3. Hess S, Girirajan T, Mason M: Ultra-high resolution imaging by fluorescence photoactivation localization microscopy. Biophys J 2006, 91:4258-4272.
4. Patterson G, Davidson M, Manley S, Lippincott-Schwartz J: Superresolution imaging using single-molecule localization. Annu Rev Phys Chem 2010, 61:345-367.
5. Rust MJ, Bates M, Zhuang X: Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat Methods 2006, 3:793-796.
6. Bates M, Huang B, Dempsey GT, Zhuang X: Multicolor super-resolution imaging with photo-switchable fluorescent probes. Science 2007, 317:1749-1753.
7. Shroff H, Galbraith CG, Galbraith JA, White H, Gillette J, Olenych S, Davidson MW, Betzig E: Dual-color superresolution imaging of genetically expressed probes within individual adhesion complexes. Proc Natl Acad Sci USA 2007, 104:20308-20313.
8. Manley S, Gillette JM, Patterson GH, Shroff H, Hess HF, Betzig E, Lippincott-Schwartz J: High-density mapping of single-molecule trajectories with photoactivated localization microscopy. Nat Methods 2008, 5:155-157.
9. Bates M, Huang B, Zhuang X: Super-resolution microscopy by nanoscale localization of photo-switchable fluorescent probes. Curr Opin Chem Biol 2008, 12:505-514.
10. Dempsey GT, Wang W, Zhuang X: Fluorescence imaging at sub-diffraction-limit resolution with stochastic optical reconstruction microscopy. In Handbook of Single-Molecule Biophysics. Springer; 2009:95-127.
11. Shroff H: Photoactivated localization microscopy (PALM) of adhesion complexes. Curr Protoc Cell Biol 2008, 4:4.21.
12. Henriques R, Lelek M, Fornasiero EF, Valtorta F, Zimmer C, Mhlanga MM: QuickPALM: 3D real-time photoactivation nanoscopy image processing in ImageJ. Nat Methods 2010, 7:339-340.
13. Huang B, Wang W, Bates M, Zhuang X: Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science 2008, 319:810-813.
14. Shtengel G, Galbraith JA, Galbraith CG, Lippincott-Schwartz J, Gillette JM, Manley S, Sougrat R, Waterman CM, Kanchanawong P, Davidson MW, Fetter RD, Hess HF: Interferometric fluorescent super-resolution microscopy resolves 3D cellular ultrastructure. Proc Natl Acad Sci USA 2009, 106:3125-3130.
15. Huang B, Jones SA, Brandenburg B, Zhuang X: Whole-cell 3D STORM reveals interactions between cellular structures with nanometer-scale resolution. Nat Methods 2008, 5:1047-1052.
16. Greenfield D, McEvoy AL, Shroff H, Crooks GE, Wingreen NS, Betzig E, Liphardt J: Self-organization of the Escherichia coli chemotaxis network imaged with super-resolution light microscopy. PLoS Biol 2009, 7:e1000137.
17. Hess ST, Gould TJ, Gudheti MV, Maas SA, Mills KD, Zimmerberg J: Dynamic clustered distribution of hemagglutinin resolved at 40 nm in living cell membranes discriminates between raft theories. Proc Natl Acad Sci USA 2007, 104:17370-17375.
18. Lillemeier BF, Mortelmaier MA, Forstner MB, Huppa JB, Groves JT, Davis MM: TCR and Lat are expressed on separate protein islands on T cell membranes and concatenate during activation. Nat Immunol 2010, 11:90-96.
19. Shroff H, Galbraith CG, Galbraith JA, Betzig E: Live-cell photoactivated localization microscopy of nanoscale adhesion dynamics. Nat Methods 2008, 5:417-423.
20. Vaziri A, Tang J, Shroff H, Shank CV: Multilayer three-dimensional super resolution imaging of thick biological samples. Proc Natl Acad Sci USA 2008, 105:20221-20226.
21. Hell SW: Far-field optical nanoscopy. Science 2007, 316:1153-1158.
22. Gustafsson MGL: Extended resolution fluorescence microscopy. Curr Opin Struct Biol 1999, 9:627-628.
23. Hein B, Willig KI, Hell SW: Stimulated emission depletion (STED) nanoscopy of a fluorescent protein-labeled organelle inside a living cell. Proc Natl Acad Sci USA 2008, 105:14271-14276.
24. Kner P, Chhun BB, Griffis ER, Winoto L, Gustafsson MGL: Super-resolution video microscopy of live cells by structured illumination. Nat Methods 2009, 6:339-342.