AuthenticationManager::CustomTargetNameDictionary Property
Gets the dictionary that contains Service Principal Names (SPNs) that are used to identify hosts during Kerberos authentication for requests made using WebRequest and its derived classes.
Namespace: System.Net
Assembly: System (in System.dll)
static property StringDictionary^ CustomTargetNameDictionary {
	StringDictionary^ get ();
}
Property Value
Type: System.Collections.Specialized::StringDictionary
A writable StringDictionary that contains the SPN values for keys composed of host information.
An SPN is a name by which a client uniquely identifies an instance of a service or application on a server for purposes of mutual authentication. Mutual authentication is requested by default, and you can require it by setting WebRequest::AuthenticationLevel to MutualAuthRequired in your request.
When a WebRequest requires mutual authentication, the SPN for the destination must be supplied by the client. If you know the SPN, you can add it to the CustomTargetNameDictionary before sending the request. If you have not added SPN information to this dictionary, the AuthenticationManager uses the RequestUri property to compose the most likely SPN; however, this is a computed value and might be incorrect. If mutual authentication is attempted and fails, you can check the dictionary to determine the computed SPN. No SPN is entered into the dictionary if the authentication protocol does not support mutual authentication.
To add an SPN value to this dictionary, use the AbsoluteUri of the RequestUri as the key. Internally, the key is truncated to include the Scheme, Host, and the Port if it is not the default port.
Accessing the methods and properties of the CustomTargetNameDictionary requires unrestricted WebPermission.
When Kerberos authentication is performed through a proxy, both the proxy and the ultimate host name need to be resolved to an SPN. The proxy name resolution is protected by a timeout. Resolution of the ultimate host name to a SPN requires a DNS lookup, and there is no timeout associated directly with this operation. Therefore synchronous operations may take longer to timeout. To overcome this, add the ultimate host's URI prefix to the SPN cache prior to making requests to it.
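For example, the following C# sketch (a minimal illustration rather than part of the original topic; the host name and SPN shown are placeholders for the values registered for your server) adds an entry to the dictionary before issuing a request. It assumes using directives for System, System.Net, and System.Net.Security.
static void RegisterSpnAndRequest()
{
   Uri resource = new Uri( "http://server1.someDomain.contoso.com/" );

   // Key the entry by the AbsoluteUri of the request; internally the key is truncated
   // to the scheme, host, and non-default port. The value is the SPN that the
   // destination service is registered under.
   AuthenticationManager.CustomTargetNameDictionary.Add(
      resource.AbsoluteUri, "HTTP/server1.someDomain.contoso.com" );

   // Subsequent requests to this resource use the supplied SPN when Kerberos
   // mutual authentication is negotiated.
   WebRequest request = WebRequest.Create( resource );
   request.UseDefaultCredentials = true;
   request.AuthenticationLevel = AuthenticationLevel.MutualAuthRequired;

   using ( WebResponse response = request.GetResponse() )
   {
      Console.WriteLine( "Response received from {0}", resource );
   }
}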
Version 3.5 SP1 now defaults to specifying the host name used in the request URL in the SPN in the NTLM (NT LAN Manager) authentication exchange when the CustomTargetNameDictionary property is not set. The host name used in the request URL may be different from the Host header specified in the System.Net::HttpRequestHeader in the client request. The host name used in the request URL may be different from the actual host name of the server, the machine name of the server, the computer's IP address, or the loopback address. In these cases, Windows will fail the authentication request. To address the issue, you may need to notify Windows that the host name used in the request URL in the client request ("contoso", for example) is actually an alternate name for the local computer.
The following code example demonstrates displaying the contents of the CustomTargetNameDictionary.
static void RequestResource( Uri^ resource )
{
   // Create a new HttpWebRequest object for the specified resource.
   WebRequest^ request = dynamic_cast<WebRequest^>(WebRequest::Create( resource ));

   // Supply the default client credentials and require mutual authentication.
   request->UseDefaultCredentials = true;
   request->AuthenticationLevel = AuthenticationLevel::MutualAuthRequired;
   HttpWebResponse^ response = dynamic_cast<HttpWebResponse^>(request->GetResponse());

   // Determine whether mutual authentication was used.
   Console::WriteLine( L"Is mutually authenticated? {0}", response->IsMutuallyAuthenticated );

   // Display the contents of the CustomTargetNameDictionary.
   System::Collections::Specialized::StringDictionary^ spnDictionary = AuthenticationManager::CustomTargetNameDictionary;
   System::Collections::IEnumerator^ myEnum = spnDictionary->GetEnumerator();
   while ( myEnum->MoveNext() )
   {
      DictionaryEntry^ e = safe_cast<DictionaryEntry^>(myEnum->Current);
      Console::WriteLine( "Key: {0} - {1}", dynamic_cast<String^>(e->Key), dynamic_cast<String^>(e->Value) );
   }

   // Read and display the response.
   System::IO::Stream^ streamResponse = response->GetResponseStream();
   System::IO::StreamReader^ streamRead = gcnew System::IO::StreamReader( streamResponse );
   String^ responseString = streamRead->ReadToEnd();
   Console::WriteLine( responseString );

   // Close the stream objects.
   streamRead->Close();
   streamResponse->Close();

   // Release the HttpWebResponse.
   response->Close();
}
The output from this example will differ based on the requested resource
and whether mutual authentication was successful. For the purpose of illustration,
a sample of the output is shown here:
Is mutually authenticated? True
Key: http://server1.someDomain.contoso.com - HTTP/server1.someDomain.contoso.com
.NET Framework
Supported in: 4, 3.5, 3.0, 2.0
.NET Framework Client Profile
Supported in: 4, 3.5 SP1
IKeyFrameAnimation Properties
The IKeyFrameAnimation type exposes the following members.
Name Description
Public property KeyFrames Gets or sets an ordered collection of KeyFrames associated with this animation sequence.
DataTable.Merge Method (DataTable)
Merge the specified DataTable with the current DataTable.
Namespace: System.Data
Assembly: System.Data (in System.Data.dll)
public void Merge (
	DataTable table
)
public function Merge (
	table : DataTable
)
Parameters
table
The DataTable to be merged with the current DataTable.
The Merge method is used to merge two DataTable objects that have largely similar schemas. A merge is typically used on a client application to incorporate the latest changes from a data source into an existing DataTable. This allows the client application to have a refreshed DataTable with the latest data from the data source.
The merge operation takes into account only the original table, and the table to be merged. Child tables are not affected or included. If a table has one or more child tables, defined as part of a relationship, each child table must be merged individually.
When performing a merge, changes made to the existing data before the merge are preserved by default during the merge operation. Developers can modify this behavior by calling one of the other two overloads for this method, and specifying a false value for the preserveChanges parameter.
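For example (a hedged sketch; existingTable and incomingTable are placeholder variables), passing false for preserveChanges lets the incoming data overwrite edits that have not yet been accepted:
// Overwrite unaccepted local edits with the incoming values.
existingTable.Merge(incomingTable, false);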
In a client application, it is usual to have a single button that the user can click that gathers the changed data and validates it before sending it back to a middle tier component. In this scenario, the GetChanges method is first invoked. That method returns a second DataTable optimized for validating and merging. This second DataTable object contains only the DataRow objects that were changed, resulting in a subset of the original DataTable. This subset is generally smaller and thus more efficiently passed back to a middle tier component. The middle tier component then updates the original data source with the changes through stored procedures. The middle tier can then send back either a new DataTable that includes original data and the latest data from the data source (by running the original query again), or it can send back the subset with any changes that have been made to it from the data source. (For example, if the data source automatically creates unique primary key values, these values can be propagated back to the client application.) In either case, the returned DataTable can be merged back into the client application's original DataTable with the Merge method.
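That round trip can be sketched as follows (a minimal illustration; UpdateDataSource is a placeholder for whatever middle-tier call applies the changes and returns the refreshed rows):
private static void SubmitChanges(DataTable clientTable)
{
    // Gather only the rows that have changed since the last AcceptChanges call.
    DataTable changes = clientTable.GetChanges();
    if (changes == null)
        return; // nothing to send

    // Placeholder: the middle tier applies the changes and returns the result.
    DataTable returned = UpdateDataSource(changes);

    // Fold the returned data back into the original table.
    clientTable.Merge(returned);
    clientTable.AcceptChanges();
}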
When merging a new source DataTable into the target, any source rows with a DataRowState value of Unchanged, Modified, or Deleted are matched to target rows with the same primary key values. Source rows with a DataRowState value of Added are matched to new target rows with the same primary key values as the new source rows.
The following console application creates a simple DataTable and adds data to the table. The example then creates a copy of the table, adding rows to the copy. Finally, the example calls the Merge method to merge the data in the second table with the data in the first table.
private static void DemonstrateMergeTable()
{
    DataTable table1 = new DataTable("Items");

    // Add columns.
    DataColumn column1 = new DataColumn("id", typeof(System.Int32));
    DataColumn column2 = new DataColumn("item", typeof(System.Int32));
    table1.Columns.Add(column1);
    table1.Columns.Add(column2);

    // Set the primary key column.
    table1.PrimaryKey = new DataColumn[] { column1 };

    // Add RowChanged event handler for the table.
    table1.RowChanged +=
        new System.Data.DataRowChangeEventHandler(Row_Changed);

    // Add some rows.
    DataRow row;
    for (int i = 0; i <= 3; i++)
    {
        row = table1.NewRow();
        row["id"] = i;
        row["item"] = i;
        table1.Rows.Add(row);
    }

    // Accept changes.
    table1.AcceptChanges();
    PrintValues(table1, "Original values");

    // Create a second DataTable identical to the first.
    DataTable table2 = table1.Clone();

    // Add three rows. Note that the id column can't be the
    // same as existing rows in the original table.
    row = table2.NewRow();
    row["id"] = 14;
    row["item"] = 774;
    table2.Rows.Add(row);

    row = table2.NewRow();
    row["id"] = 12;
    row["item"] = 555;
    table2.Rows.Add(row);

    row = table2.NewRow();
    row["id"] = 13;
    row["item"] = 665;
    table2.Rows.Add(row);

    // Merge table2 into the table1.
    table1.Merge(table2);
    PrintValues(table1, "Merged With table1");
}

private static void Row_Changed(object sender,
    DataRowChangeEventArgs e)
{
    Console.WriteLine("Row changed {0}\t{1}",
        e.Action, e.Row.ItemArray[0]);
}

private static void PrintValues(DataTable table, string label)
{
    // Display the values in the supplied DataTable:
    Console.WriteLine(label);
    foreach (DataRow row in table.Rows)
    {
        foreach (DataColumn col in table.Columns)
        {
            Console.Write("\t " + row[col].ToString());
        }
        Console.WriteLine();
    }
}
.NET Framework
Supported in: 3.0, 2.0
.NET Compact Framework
Supported in: 2.0
XNA Framework
Supported in: 1.0
Submitted by MattS 601d ago | news
Another developer makes a critical hit mobile game... and sells 22 copies on Android
Digitally Downloaded writes: "We often hear about how the iOS $0.99 app, as well as Free-to-Play and in-app purchases are a bit of a saving grace for some of the challenges that game developers are currently facing in the market.
So, here’s some numbers that are a little worrying." (Android, iPad, iPhone, Monstaaa!)
guitar_nerd_23 + 601d ago
One factor is probably the time between releases, iOS games that succeed are like fads and if you miss the window that they're relevant for no one cares after they're over.
I've bought quite a few games on Android since I caved and bought a Nexus 7.
If any developers come here, add native controller support for everything that's not designed to be touch only and I'll be more likely to buy your games, see my purchase history as proof.
insomnium2 + 601d ago
Marketing is the key. Casual people do not go to MC and look at games to buy. Or pretty much any site outside of Facebook for that matter. Sad but true.
kneon + 601d ago
Even with good marketing the odds are not in your favour. Only about 1% of Android and IOS developers make back their development costs let alone make a profit.
There are just too many developers making too many apps, most of which are crap anyway.
sjaakiejj + 601d ago
It's a matter of understanding the type of market you're dealing with. People using iOS are generally more willing to pay a premium price, whereas Android users are more likely to have no problem with in-game advertisements. Gear your development and marketing strategy to that, and don't expect Android users to pay upfront - they won't.
schlanz + 601d ago
I personally hate in-game ads and have purchased well over 20 games from the Android market. Maybe part of the reason fewer Android users buy apps is that you don't set up payment info upon creating a Google account (like you do for an Apple ID).
sjaakiejj + 601d ago
And for that, a premium version of the product can be made available, but for many android users, a free version with ads is a must.
neutralgamer19 + 600d ago
I have an Android phone and have never bought a game from the Play market, nor will I in the foreseeable future. The only reason I've played Angry Birds is because it was a free download, and I rarely play that. There is a game called Wind-up Knight that's pretty generic, but I play it from time to time.
The Cutline
The Guardian experiments with publishing lists of stories in progress
Dylan Stableford, Yahoo News
Ever wonder how the news sausage gets made? London's Guardian newspaper announced on Monday an experiment in "opening up" its news coverage to readers by publishing the daily list of upcoming stories that is maintained by the paper's editors.
The Guardian is hoping that by publishing its "newslist," readers will help shape upcoming stories by "talking to editors and reporters about upcoming stories as we work on them."
The "newslists" include direct links to the Twitter feeds of on-duty editors and reporters, and the paper is encouraging readers to contact them:
You can tell us what you think of individual stories and suggest lines of inquiry using Twitter by tweeting to the hashtag #opennews. We will retweet a selection in the panel of our tweets opposite. Alternatively, try contacting whichever reporter has been assigned to the story by clicking on the link next to their name and sending them a Twitter message. For anything confidential, ask one of us to follow you and you can send a direct message instead, or if you'd rather not even do that you can send us an email via - though this is less likely to be spotted than a tweet.
Some critics, however, predict it won't be readers contacting the Guardian's editors and reporters--it'll be public-relations flacks.
"We're a busy newsdesk so we won't be able to reply to everything," the editors explained. "But we will be reading it and taking your views into account."
The Guardian isn't completely opening up its news meetings for public view:
We won't quite show you everything. We can't tell you about stories that are under embargo or, sometimes, exclusives that we want to keep from our competitors, but most of our plans will be there for all to see, from the parliamentary debates we plan to cover to the theatre we plan to review. We reserve the right to stick to our guns, but would love to know what you think. Sometimes you will see how quiet it is; other times you will wonder how we intend to fit it all in. Above all, bear in mind these are real-time working documents and, by definition, only provisional.
The editors plan to keep this "experiment in openness" live for at least two weeks.
the schnabulous life
Schnabel’s Schleepypants for Schale!
So sayeth Ginnifer.
Smoky burning wood. Turpentine. Tacos. These are just a few of the things we have long imagined Julian Schnabel's soft, well-worn pajamas would schmell like. Now, some lucky person will get the chance to find out, since they're being auctioned off on ebay, along with, heaven help us, a portrait Annie Leibovitz took of the Schnabulous one luxuriating in said pajamas, to benefit the Dumbo Arts Center. The auction runs February 11 to 21. Our only question is: Other than us, who in the world would pay big money for this curmudgeonly old dude's farted-in pajamas? [Dumbo Arts Center, NY Press] |
Oregon, USA
NOVEMBER 1, 2010 6:40PM
Exclusive: Talk with Rep Peter DeFazio
Impeach Justice Roberts
DeFazio has been in national news because of the large amount of secret corporate money funneled into Oregon to defeat him. Peter calls his “very own” Wall Street fat cat (Robert Mercer) “Bob”. Bob unwisely spent $400,000 trying to unseat DeFazio who won reelection with 83% of the vote in 2008. Peter (who is running against a tea party wingnut who wants to abolish public schools) is being reelected again by a comfortably wide margin.
DeFazio said the reason that “Bob” does not like him is because Peter supports the securities and derivative transaction tax of 0.25%, which would result in a tax of $250 on a $50,000 stock buy. He also supports raising the capital gains tax to normal income levels. Both of these proposals would diminish Mercer's earnings.
As DeFazio puts it, all he wants is for Bob to “pay taxes like an American”.
Peter DeFazio
After the rally, I asked DeFazio if he was going to press forward with his investigation into articles of impeachment against Supreme Justice Roberts. He told me that he was committed on this course of action and was putting out the word to attract lawyers to argue the case.
Apparently, Bob is homeless and lives in two post office boxes: one in the Bahamas for tax purposes and one in New York:
Bob Mercer
P.O. Box 1507
Stony Brook, New York 11790
As an Oregon voter, I am going to write him a letter telling him how I do not appreciate him trying to subvert the democratic process in Oregon.
I am also going to tell him that I heard Santa was not bringing him any more cars for his train set.
Team DeFazio: Powered by the People
While you're impeaching, can you take out Alassho, Thomass and Scaliar, too?
I know I have been calling him Super DeFazio but I think taking out the chief would be sufficient for one man.
Oh, if he only could get rid of those four jerk-offs. I'm not talking about replacing them with liberals either. Just people who believe in the constitution and don't go bird hunting or to parties thrown by the very people that they rule on.
You got it, scanner.
The old boys system has to go...
I've got your back, but don't leave behind Scalia and Thomas.
Nobody I'd want more backing me than you.
What was cool was how he looked when I asked him. He looked over my head into the middle distance like he was imagining the fight this would be.
Great post. I support your mission fully!
Thanks, Dicky!
You keep up your cartooning! I am a fan!
The Wright Sight,
I already done it! (Voted that is)
As a proud Colorado voter, I was very happy to send my $25 to Defazio to influence Oregon elections!
How about a Sanders/Defazio ticket?
Your money I do not mind at all! LOL
Sanders/DeFazio sounds great!
(Santa is going to be good to you this year!)
I didn't even know you could impeach a Supreme. I read about that out-of-state donor. Sounds like he didn't get his way after all - good!
Hello Blue!
Yep. It might not work but it will put a very bright spotlight on all the doings of the Supremes.
(How's that tea?)
After 2000 and the hijacked voting...the SC really amazes me every time!
I am hoping that this action will put some light on their actions. There is also a group that is beginning the long process of a Constitutional amendment restricting the Bill of Rights to living, breathing human beings.
I've always liked DeFazio and love his call to impeach Roberts. You go girl!
Go, De Fazio! We need a Supreme Court that doesn't try to subvert our elections with corporations allowed to act like individuals.
Thanks, Trish!
He is so cool. Was a treat to shake hands with him.
And my shirt!
Just to let you know, I've been walking in Springfield solid for the last two weeks.
Hello Shiral!
Yes, the corruption is pretty plain. Power and greed...
old new lefty,
Good for you!
I just got back from the phone banks. Saved three ballots from the post office.
DeFazio-- one thing going right.
Like your sign, your hat, and your shirt!
Thanks, Sweet sister.
He pulled it out of the hat tonight. Too close for comfort... |
I have been in several software projects but not as a leader. In all the projects we all knew the tools and languages etc. before the project started.
I am wondering whether it is OK, or even good practice, to set aside time for developers to familiarize themselves with tools and technologies after the project kicks off (e.g. after requirements specification)?
Whether or not it's okay, it's necessary. How are you going to write it if you don't understand the tools? – Alex Feinman Apr 5 '11 at 14:39
6 Answers
If the familiarization is done in a comprehensive, requirements-driven manner, it's OK. Otherwise it's just another word for slacking, and you'd better start coding in an unfamiliar technology right away, and refactor later.
A good example of familiarization is creating a mockup of the system that uses the same key technologies and deploys to the same foundation as the target project. As steve314 adds in his comment, you'll make all the newbie mistakes you would have made if you started coding the project outright, but you won't have to throw away a lot of code that took the time to be written but made you learn nothing useful.
share|improve this answer
Thank you, this is helpful. – serengeti12 Apr 3 '11 at 12:03
+1 - The best way to learn something is to use it. It's a good idea to do some familiarization exercises that cover the same ground that you'll use for real, in part because you don't mind making all the newbie mistakes in the learning exercises, but for most things you should quickly (often as little as a few days, or even a few hours) reach the point where the most appropriate learning exercise is to do the job. "Plan to throw one away" may still be good advice, though - hopefully for a few early components rather than the whole project, but that depends how novel the technology is. – Steve314 Apr 3 '11 at 12:06
I think that reading a tutorial beforehand is not "slacking". I learn best by reading documentation first. I am one of those kids who got a new toy and actually read the directions before playing with it. I usually had a lot more fun with it too, because I knew things that were non-discoverable otherwise. – Nemi Apr 5 '11 at 13:09
I agree that creating a mockup of the system using the same technologies as the target project is the best way to familiarize yourself with the tools. – Amy Apr 5 '11 at 14:40
If you have decided to use technology with which your developers are unfamiliar, it is ok, good practice and inevitable.
I would plan for some slower performance at the beginning of the project. There may also be some setup time. Try to get the setup standardized.
If you need significant familiarization, then you may need some training time. Well done training should be more efficient than just playing around with the tools. Self-paced training is fine, and may be more efficient. Get some cheat-sheets, or develop project specific cheat-sheets.
EDIT: Project-specific cheat-sheets can be of three varieties (or a combination thereof).
• Abbreviated cheat-sheets for tools used by the project omitting features not used by the project.
• Cheat-sheets for project specific tools and or libraries. Basically, anything project specific that could be on a cheat-sheet.
• Merged cheat-sheets for particular work-flows using multiple tools.
Automate your processes when you can. The cheat-sheet would then point to the appropriate automated process.
Consider using a Wiki to hold your cheat-sheets. This is also a good place to document your process. (It helps to document alternatives looked at and why the chosen one was selected.)
By cheat-sheets do you mean cheat sheets of tools you'll be using? What is project specific cheat-sheets? – serengeti12 Apr 5 '11 at 10:49
@serengeti12: Yes I do mean cheat sheets for tools you will be using. – BillThor Apr 5 '11 at 12:54
One thing I would recommend is taking some time to get a broad overview of what's covered in the framework and what isn't and also read up on best practices and common conventions of the frameworks/tools.
I've started some projects at a high speed where there wasn't time taken to learn in general about the frameworks and tools and we just learned as we went. This means though that often you only learn about the framework parts or tool abilities that you're working with directly, which can lead to the following problems:
1) reinventing something that your framework/tool already provides support for, because you didn't know it was there.
2) not following the correct conventions and best practices recommended for a tool, which can lead to maintainability problems down the line, problems upgrading etc.
On my projects we usually have a trailblazer when we are using new tools/technology/processes. It could be the same person or different for each task, but it needs to be someone who knows when they've been able to achieve the end-goal when they get there. The trailblazer gets to make all the mistakes and settle upon the way the project will do things. They then do some sort of presentation/how to guide/template/example that the other developers can then use and hopefully prevent all the other developers from making the same or other mistakes.
Other than that, I'm not sure specifically setting aside time to learn tools is a good idea. I would just leave that up to the developers to determine when they need the Just-In-Time training to take it upon themselves at that point. If you set aside the time and the particular developer isn't ready to use the tool yet, then odds are they will divert their attention someplace else. Then when they need the tool, that's when they'll learn it. In that event, I think you lost time instead of gaining time.
I would use a proof of concept stage in the project for getting familiar with new technologies and ensuring that they will be appropriate for the job
Sorry, is that a stage in the SDLC? How would you do that, for example? – serengeti12 Apr 5 '11 at 11:03
Yes. You would normally perform it before the design phase. In essence it is used to prove that you are capable of building what has been proposed. You could test out a number of different technologies to see which one(s) are appropriate for the job. If you haven't done the design yet you could maybe squeeze it in but it needs to be relevant to the project, not just playing around with technology for the sake of it. – John Shaft Apr 5 '11 at 12:09
4,205 reputation
Website: memmove.blogspot.com
Location: Charleston, SC
Age: 28
Member for: 2 years, 8 months
Last seen: 2 days ago
I am a Software Development Engineer with Amazon.com. I have extensive experience with object-oriented programming in C++, JAVA, and C#.NET. Also, I have done a great deal of work with relational databases and embedded development.
This user has not participated in any bounties |
Reiaze's God of War PlayStation Trophies
Viewing Reiaze's God of War trophies.
1.21 Gigawatts
Acquire Poseidon’s Rage
Beat a Dead Horse
Complete the Centaur sacrifice to Hades
Bolt Action
Acquire Zeus’ Fury
Don't Leave Her Hanging
Rescue the Oracle with 10 seconds to spare
Don't They Ever Shut Up!
Defeat the Desert Sirens
Get Me a Beer Kid
Free yourself from the depths of Hades
Get the Ball Rolling
Complete the Challenge of Atlas
Getting My Ass Kicked
Die enough times to get offered Easy mode
God Killer
Kill Ares
Head Hunter
Obtain the head of the Architect's wife
Get a 100 Hits Combo
Hitman 2
Get a 200 Hits Combo
It's the HUGE One
Retrieve the Captain's Key
I’ll Take the Physical Challenge
Complete the Challenge of the Gods
Key to Success
Collect all of the Muse Keys
Complete the Spike Room Box Puzzle
Kratos' Marble Collection
Collect all the Gorgon Eyes
Legend of the Twins
Watch the 'Birth of the Beast' Treasure
Win the first Minotaur fight
Prepare to be a God
Beat the Game on any Difficulty
Rockin' the Boat
Complete the Sex Mini-Game
Rocking Out
Acquire Medusa’s Gaze
Roll Over ... and Die
Win the first Cerberus fight
Scape Goat
Win the first Satyr fight
Seeing Red
Max out all Weapons and Magic
Soul Search
Acquire Army of Hades
Speed of Jason McDonald
Beat the game in under 5 hours on any Difficulty
Kiss the Nyad
Stick it in Your Cap!
Collect all the Phoenix Feathers
Sword Man
Acquire Blade of Artemis
Take the Bull by the Horns
Defeat Pandora's Guardian
The Power to Kill a God
Retrieve Pandora’s Box
Totally Baked
Complete the Sacrifice
Trophy of Zeus
Unlock all God of War® Trophies
You Got the Touch!
Climb the Spiked Column in Hades without taking damage
Zero Health = Bronze Trophy
Open a Health Chest when your health meter is already full |
Dev Biol. Author manuscript; available in PMC Mar 15, 2012.
Published in final edited form as:
PMCID: PMC3044432
A gene regulatory network controlling hhex transcription in the anterior endoderm of the organizer
Scott A. Rankin,1 Jay Kormish,1# Matt Kofron,1 Anil Jegga,2 and Aaron M. Zorn1*
1 Division of Developmental Biology, Cincinnati Children’s Research Foundation and Department of Pediatrics, College of Medicine, University of Cincinnati, 3333 Burnet Avenue, Cincinnati, Ohio 45229, USA
2 Division of Biomedical Informatics, Cincinnati Children’s Research Foundation and Department of Pediatrics, College of Medicine, University of Cincinnati, 3333 Burnet Avenue, Cincinnati, Ohio 45229, USA
* Corresponding author: Aaron M Zorn, Ph D, Division of Developmental Biology, Cincinnati Children’s Research Foundation, 3333 Burnet Avenue, TCHRF Room 2564, Cincinnati, Ohio 45229-3039, USA, Aaron.zorn/at/, Tel: 513 636 3770, Fax: 513 636 4317
#Current address: Department of Biological Science, University of Calgary, Canada.
The homeobox gene hhex is one of the earliest markers of the anterior endoderm, which gives rise to foregut organs such as the liver, ventral pancreas, thyroid, and lungs. The regulatory networks controlling hhex transcription are poorly understood. In an extensive cis-regulatory analysis of the Xenopus hhex promoter we determined how the Nodal, Wnt, and BMP pathways and their downstream transcription factors regulate hhex expression in the gastrula organizer. We show that Nodal signaling, present throughout the endoderm, directly activates hhex transcription via FoxH1/Smad2 binding sites in the proximal −0.44 Kb promoter. This positive action of Nodal is suppressed in the ventral-posterior endoderm by Vent 1 and Vent2, homeodomain repressors that are induced by BMP signaling. Maternal Wnt/β-catenin on the dorsal side of the embryo cooperates with Nodal and indirectly activate hhex expression via the homeodomain activators Siamois and Twin. Siamois/Twin stimulate hhex transcription through two mechanisms: 1) They induce the expression of Otx2 and Lim1 and together Siamois, Twin, Otx2 and Lim1 appear to promote hhex transcription through homeobox sites in a Wnt-responsive element located between −0.65 to −0.55 Kb of the hhex promoter. 2) Siamois/Twin also induce the expression of the BMP-antagonists Chordin and Noggin, which are required to exclude Vents from the organizer allowing hhex transcription. This work reveals a complex network regulating anterior endoderm transcription in the early embryo.
The homeodomain (HD) transcription factor Hhex is one of the earliest markers of the foregut progenitor cells that give rise to the liver, ventral pancreas, thyroid and lungs (Keng et al., 1998; Newman et al., 1997; Thomas et al., 1998). The regulatory networks that control gene expression in the early foregut progenitors, and hhex transcription in particular, are poorly understood. A greater understanding of this process could provide insight into congenital foregut organ defects and enhance our ability to direct the differentiation of stem cells into foregut organ lineages.
In Xenopus, hhex is first expressed at the blastula stage in the dorsal-anterior endoderm of the Spemann organizer, which after gastrulation gives rise to the ventral foregut progenitors (Brickman et al., 2000; Jones et al., 1999; Newman et al., 1997). The organizer and its equivalent in other species is a heterogeneous population of cells that plays an essential role in axial patterning, with sub-regions of the organizer having distinct functions (De Robertis, 2009; Niehrs, 2004). The chordomesoderm component regulates trunk formation whereas the hhex-expressing anterior endoderm regulates head and cardiac induction (Bouwmeester et al., 1996; Foley and Mercola, 2005; Jones et al., 1999; Niehrs, 2004). Hhex function is essential for these activities as hhex-deficient mouse and Xenopus embryos have head truncations as well as heart and foregut organ defects (Bort et al., 2004; Keng et al., 2000; Martinez Barbera et al., 2000; McLin et al., 2007; Smithers and Jones, 2002).
In Xenopus, the organizer is formed in the dorsal margin of the blastula by the intersection of Nodal signaling in the vegetal cells and a maternal Wnt11/β-catenin (mWnt) pathway active on the future dorsal side of the embryo (Heasman, 2006). Activation of the canonical Wnt signaling causes β-catenin to accumulate in the nucleus, where it interacts with Tcf/Lef transcription factors to displace Groucho/Tle co-repressors and directly stimulate the transcription of Wnt-target genes such as the related HD factors Siamois (Sia) and Twin (Twn) (Brannon et al., 1997; Carnac et al., 1996; Fan et al., 1998; Kessler, 1997; Laurent et al., 1997; Lemaire et al., 1995). In addition, β-catenin/Tcf complexes cooperate with the vegetally-localized maternal T-box transcription factor VegT to activate transcription of Nodal-related ligands (xnr1, 5, 6) resulting in high levels of Nodal signaling in the dorsal-vegetal cells of the blastula (Hilton et al., 2003; Hyde and Old, 2000). Nodal-activated receptors phosphorylate Smad2 proteins, which translocate to the nucleus and interact with DNA-binding proteins such as Foxh1, Wbscr11, Mixer, and Bix2 to activate mesendoderm gene transcription (Chen et al., 1997; Germain et al., 2000; Ring et al., 2002).
The combination of Nodal and mWnt signaling promotes the expression of organizer-specific transcription factors including Gsc, Otx2, Lim1/Lhx1, as well as a number of secreted BMP- and Wnt-antagonists. These include Chordin, Noggin, Sfrp2, Sfrp3/FrzB, Crescent, Dkk1, and Cerberus, which mediate the organizer’s inductive activities by inhibiting BMP4 and zygotic Wnt8 (zWnt8) ligands expressed in the ventral marginal zone (De Robertis, 2009; Niehrs, 2004). BMP4 and zWnt8 promote ventral-posterior fates and restrict dorsal-anterior fates, in part by inducing the expression of the HD transcriptional repressors Vent1 and Vent2, which inhibit organizer gene expression (Friedle and Knochel, 2002; Karaulanov et al., 2004; Onichtchouk et al., 1998; Ramel and Lekven, 2004; Rastegar et al., 1999; Sander et al., 2007).
Promoter analyses in Xenopus have begun to reveal how interactions between these various signaling pathways and transcription factors are integrated on cis-regulatory elements to control gene expression. One of the most extensively characterized models of organizer transcription is the gsc promoter, which is coordinately regulated by Nodal and mWnt signaling through distinct proximal and distal cis-elements (PE and DE respectively) (Koide et al., 2005). Nodal/Activin stimulate gsc transcription though Smad-Foxh1 complexes binding to the PE and Smad-Wbscr11 complexes binding to a the DE (Blythe et al., 2009; Labbe et al., 1998; Ring et al., 2002; Watabe et al., 1995). Studies have shown that Sia/Twn also bind to the PE to stimulate transcription in response to mWnt signaling (Kessler, 1997; Laurent et al., 1997; Watabe et al., 1995). After the initial activation of gsc transcription, a number of other HD factors including Lim1, Otx2, Bix2, Mix1, and Mixer maintain gsc expression by binding to a series of homeobox sites in the PE and DE (Germain et al., 2000; Latinkic and Smith, 1999; Mochizuki et al., 2000). In the ventral-posterior mesendoderm, these same homeobox sites appear to be utilized by the HD repressors Vent1/2, Msx1, and Pou2, which inhibit gsc transcription (Danilov et al., 1998; Trindade et al., 1999; Witta and Sato, 1997).
A few other organizer gene promoters (sia, twn, lim1, foxa4, noggin, and cerberus) have also been analyzed (Howell and Hill, 1997; Kaufmann et al., 1996; Tao et al., 1999; Watanabe et al., 2002) but other than cerberus, their expression is not restricted to endoderm component of the organizer like hhex. An analysis of cerberus transcription indicates that it is an indirect target of Nodal and mWnt signaling and suggests that like gsc it is cooperatively regulated by Sia, Lim1, Otx2, and Mix1 complexes (Yamamoto et al., 2003). It is unclear to what extent this mode of regulation can explain all anterior endoderm transcription.
In this study we have examined how the Nodal, Wnt, and BMP pathways and their downstream transcription factors impact cis-regulatory elements to control hhex transcription in the dorsal-anterior endoderm of the organizer. By coupling promoter analysis in Xenopus transgenics with an extensive series of loss-of-function and rescue experiments we have elucidated a gene regulatory model linking our understanding of axial patterning to early foregut organ development.
Embryo manipulations and gene expression assays
Xenopus laevis embryos were cultured as previously described (Zorn et al 1999). Embryos with clear dorsal and ventral pigmentation differences were selected for 32-cell stage injections. In explant experiments the following were added to the media as indicated: cycloheximide (10 μg/ml; Sigma), dexamethasone (4 μg /ml; Sigma), Recombinant human Activin A (100 ng /ml; R&D systems), LiCl (200 mM; Sigma) or BIO (10 μM; Stemgent).
Generation of the -6Kb:hhex:gfp transgenic lines was previously described (McLin et al., 2007). For deletion analysis hhex promoter fragments were PCR amplified (details available upon request), sequence verified, and cloned into either the pGFP3 or the pGL2-Basic (Promega) reporter vectors. Mutations were made using the GeneTailor site-directed mutagenesis kit (Invitrogen). Transient transgenics were generated by nuclear transplantation as previously described (Kroll and Amaya, 1996; Sparrow et al., 2000). To visualize GFP, transgenic embryos were fixed in MEMFA for 2 hours, bisected in PBS and fluorescence was directly imaged by microscopy.
For luciferase assays, hhex:luc promoter constructs (300 pg) were microinjected along with a pRL-TK:Renilla control vector (25 pg) and activity was determined using standard kits (Promega). In every experiment each construct was assayed in biological triplicate (three tubes of 5 embryos each) and the mean normalized luciferase/renilla activity and standard deviation were determined. Experiments were repeated at least three separate times. In all cases the same trends were observed and a representative example is shown.
In situ hybridization (McLin et al., 2007) and RT-PCR analysis (Kofron et al., 2004) were performed as previously described. The cDNA for the maternal FoxH1-depletion experiment was from (Kofron et al., 2004). Chromatin Immunoprecipitation (ChIP) analysis was performed as described in Blythe et al 2009 with minor modifications using the PCR primers provided in supplementary Table S1.
Morpholino oligos and synthetic mRNAs
All morpholino oligos (MOs) in this study, with the exception of the Smad2a-MO (20ng, 5’ggtgaaaggcaagatggacgacatg-3’) and Smad2b-MO (20ng, 5’ggtgaatggcaaaatcgagcacatg-3) have been previously published and shown to generate specific loss-of-functions: β-catenin-MO (Heasman et al., 2000), Tcf3-MO (Liu et al., 2005), Siamois-MO and Twin-MO (Ishibashi et al., 2008), Otx2-MO (Carron et al., 2005), Lim1-MO (Schambony and Wedlich, 2007), Chordin-MOs (Oelgeschlager et al., 2003), Noggin-MO (Kuroda et al., 2004), Gsc-MO, Vent1-MO and Vent2-MO (Sander et al., 2007). For each MO we reproduced the published phenotypes (Supplementary Fig. S1).
The following synthetic mRNAs have been previously described: Cer-S (Piccolo et al., 1999); stabilized pt-β-catenin (Yost et al., 1996); Δ NTcf3 (Molenaar et al., 1996); Xnr1 (Zorn et al., 1999); FoxH1 (Kofron et al., 2004); FoxH1-EnR and FoxH1-VP16 (Watanabe and Whitman, 1999); Smad2 (Shimizu et al., 2001); Siamois and Sia-EnR (Kessler, 1997); GR-Siamois (Kodjabachian and Lemaire, 2001); Otx2 (Gammill and Sive, 1997); Lim1 and GR-Lim1/3m (Yamamoto et al., 2003); Gsc (Yao and Kessler, 2001). To construct pT7Ts-GR-Vent2-VP16, the Vent2-VP16 open reading frame was PCR amplified from the pRN3-Vent2-VP16 vector (Onichtchouk et al., 1998), cloned in-frame into the pT7Ts-GR plasmid, and sequence verified.
A -6Kb hhex:gfp transgene recapitulates anterior endoderm expression
To better understand the gene regulatory network controlling early anterior endoderm gene expression we analyzed the regulation of hhex transcription in transgenic Xenopus laevis embryos. Previously we generated two independent -6Kb:hhex:gfp transgenic lines containing approximately 6 Kb of genomic laevis sequence upstream of the hhex transcriptional start site (McLin et al., 2007). Here we show that these transgenic lines recapitulate early hhex expression in the anterior endoderm (Fig. 1). Transcription of endogenous hhex and gfp were simultaneously activated in the dorsal-anterior vegetal cells of the late blastula (stage 9.5) and exhibited identical expression in the anterior endoderm and ventral foregut until stages 25–27. By stage 35 gfp mRNA was undetectable in the hhex-expressing liver and thyroid primordia (Fig. 1), although persistent GFP fluorescence was still detected. Unlike endogenous hhex the transgene was not expressed in developing vasculature and we observed ectopic transgene expression in the head at stage 35. Thus the −6.0 Kb upstream sequence is sufficient to recapitulate early hhex expression in the anterior endoderm.
Figure 1. A −6Kb:hhex:gfp transgene recapitulates hhex expression
Regulation of hhex transcription by Nodal and Wnt/β-catenin signaling
mWnt and zygotic Nodal signaling are known to regulate hhex transcription in the organizer (Xanthos et al., 2002; Zorn et al., 1999). However, it was unclear whether these pathways acted in parallel or if one was epistatic to the other. Moreover, it was not known in any species whether Nodal or Wnt signaling directly activate hhex transcription.
We therefore performed a series of loss-of-function and rescue experiments in -6Kb:hhex:gfp transgenic embryos at early gastrula. Inhibition of the mWnt pathway either by injection of an antisense β-catenin morpholino oligo (β-cat-MO) (Heasman et al., 2000) or mRNA encoding a constitutive repressor form of Tcf (Δ NTcf3) (Molenaar et al., 1996) resulted in a severe reduction of hhex and gfp expression (Fig. 2A). Moreover, injection of a Tcf3 morpholino (Tcf3-MO) (Liu et al., 2005) resulted in ectopic hhex and gfp expression throughout the endoderm (Supplementary Fig. S2). This is consistent with published findings that Tcf3 represses organizer gene expression in ventral cells that lack mWnt signaling, whereas in the dorsal cells where mWnt/β-catenin are active, Tcf3 repression is lifted and partially redundant Tcf1 and Tcf4 activate organizer transcription (Houston et al., 2002; Liu et al., 2005; Standley et al., 2006). In regards to the Nodal pathway, injection of mRNA encoding a secreted Nodal-antagonist Cer-S (Piccolo et al., 1999) abolished hhex and gfp expression. In rescue experiments, injection of nodal (xnr1) mRNA was sufficient to induce hhex and gfp expression in embryos where mWnt signaling was blocked by either the β-cat-MO or the Δ NTcf3. In contrast, injection of mRNA encoding stabilized β-catenin (Yost et al., 1996) was unable to rescue hhex or gfp expression in embryos where Nodal signaling was inhibited by Cer-S (Fig. 2A).
Figure 2. Regulation of hhex expression by Nodal and Wnt/β-catenin signaling
These data demonstrate that the −6.0Kb hhex promoter is regulated in an identical fashion to endogenous hhex and that both Nodal and mWnt are required to initiate hhex transcription. While these data suggest that mWnt signaling lies upstream of nodal ligand (xnr) expression, (Supplemental Fig, S3) (Hilton et al., 2003; Xanthos et al., 2002), they do not exclude the possibility that mWnt might also function in parallel with Nodal signals to simulate hhex transcription.
To test this possibility we injected a −6Kb:hhex:luc reporter construct (the −6 Kb hhex promoter driving luciferase), into either the C1 (dorsal-anterior mesendoderm), C4 (ventral-posterior mesendoderm), or A4 (ectoderm) blastomeres at the 32-cell stage and assayed luciferase activity at stage 10. Similar to endogenous hhex, the reporter was highly active in the dorsal-anterior mesendoderm, weakly active in ventral cells, and exhibited little if any expression in ectoderm (Fig. 2B). Injection of either xnr1 or β-catenin mRNAs were sufficient to activate the −6Kb:hhex:luc reporter in the ectoderm, with low doses of xnr1 (5pg) plus β-catenin (20pg) having an additive effect (Fig. 2B). Importantly β-catenin does not activate nodal expression (xnr1,2,4,5,6) in animal cap ectoderm cells, as it does in vegetal tissue (Sinner et al., 2004; Takahashi et al., 2000). β-catenin does induce xnr3, but this divergent ligand does not signal via the Smad pathway. We conclude that 1) Wnt/β-catenin alone can stimulate hhex transcription in the ectoderm independently from promoting nodal ligand expression and 2) β-catenin can cooperate with Nodal signaling to induce robust hhex expression.
We next tested whether hhex is a direct transcriptional target of Nodal or Wnt signaling (Fig. 2C). Animal cap ectoderm tissue was isolated from −6Kb:hhex:gfp transgenic blastulae and treated with cycloheximide (CHX) to block the translation of secondary factors. After 30 minutes, control and CHX-treated explants were further exposed to either Activin to stimulate the Nodal pathway or Gsk3-inhibitors (Bio or LiCl) to stimulate the Wnt pathway. Analysis of explants at stage 11 showed that while both Activin and the GSK3-inhibitors induced hhex and gfp expression, only Activin induced their expression when translation was blocked by CHX (Fig. 2C; data not shown). As controls we also assayed xnr3, a direct transcriptional target of β-catenin/Tcf (McKendry et al., 1997), and cerberus, an indirect Nodal target (Yamamoto et al., 2003).
The results from Figure 2 demonstrate that Nodal signaling is required to directly activate hhex transcription. Maternal Wnt/β-catenin is also essential but acts indirectly by promoting xnr expression in the dorsal-anterior endoderm, as well as through Nodal-independent mechanisms. We next sought to determine how these signaling pathways impact the hhex promoter.
Identification of cis-elements controlling hhex spatial expression
To identify the cis-regulatory elements controlling hhex transcription we generated a series of deletion constructs and tested these in hhex:gfp transient transgenics or by injecting hhex:luc constructs into either the dorsal-C1 or the ventral-C4 blastomeres. We then assayed GFP or luciferase activity at stage 10.5. Transgenic expression of the −6.0, −3.2 and −1.56 Kb constructs were indistinguishable from endogenous hhex (Fig. 3A). Robust anterior endoderm expression was observed in all deletion constructs from −6.0 to −0.44 Kb, whereas the −0.38 Kb deletion was not expressed above background. Together, the transgenics and the luciferase assays indicated that deletion of sequences between −2.3 to −0.55 Kb resulted in a progressive increase in ectopic GFP and luciferase in the central and ventral endoderm (Fig. 3A, B; Table S2), suggesting the loss of repressor elements. This ectopic expression was more obvious in sensitive luciferase assays (compare the ratio of C1 to C4 activity) than in transgenics (Table S2), consistent with previous reports that GFP fluorescence under-reports in the opaque yolk-rich endoderm (Ahmed et al., 2004).
Figure 3. Mapping Nodal and Wnt-responsive cis-elements
Mapping Nodal and Wnt responsive elements
To define Nodal and Wnt responsive cis-elements, we injected the hhex:luc deletion constructs with or without xnr1 or β-catenin RNA into the A4 ectoderm cells (Fig. 3C). This analysis indicated that a Nodal-responsive element (NRE) was contained within the proximal −0.44 Kb, which coincides with the minimal region required for endoderm expression (Fig. 3A). A separate Wnt/β-catenin-responsive element (WRE) localized between −0.65 and −0.55 Kb (Fig. 3C), confirming that mWnt signaling can stimulate hhex transcription by mechanisms other than just promoting xnr expression. This arrangement of distinct Nodal- and Wnt-responsive elements (Fig. 4A) is similar to the cis-regulation of gsc described by Cho and colleagues (Koide et al., 2005; Watabe et al., 1995). We next sought to determine how Nodal and Wnt signaling regulated hhex transcription through these cis-elements.
Figure 4. Sequence analysis of the hhex promoter
Nodal directly activates hhex transcription via Foxh1 and Smad2
In Xenopus, Nodal-responsive transcription can mediated by Foxh1, Wbscr11, or the HD proteins Mixer and Bix2 (Chen et al., 1997; Germain et al., 2000; Ring et al., 2002). There are no obvious Wbscr11 DNA-binding sites in the proximal −0.44 Kb promoter and although there is one putative homeobox site (Fig. 4) it is not predicted to be bound by Mix-family proteins (Germain et al., 2000; Latinkic et al., 1997; Noyes et al., 2008). However, the NRE contains three potential Foxh1 DNA-binding sites, two of which are flanked by putative Smad-binding sites (Fig. 4). We mutated the two Foxh1/Smad DNA-binding sequences in the context of the −6Kb:hhex:luc reporter (Fig. 5A) and assayed their activity in the dorsal-anterior mesendoderm at early gastrula. Mutation of individual Smad-sites (Δ S1 or Δ S2) resulted in a modest but significant reduction in luciferase activity, whereas mutation of the either Foxh1-site (Δ F1 or Δ F2) severely compromised expression (Fig. 5B) and mutation of both Smad sites (Δ S1+S2) or both Foxh1 sites (Δ F1+F2) largely abolished expression (Fig. 5B). Moreover, the Foxh1 and Smad sites were required to mediate robust Nodal-stimulated transcription in ectoderm injections (Fig. 5C).
Figure 5. Nodal signaling directly activates hhex transcription through Foxh1/Smad2 binding in the proximal −0.44 Kb promoter
To determine whether Mix-like factors might also contribute to hhex activation downstream of Nodal, we tested whether over-expression of Mix1, Mixer, Bix1, Bix2 or Bix4 stimulated hhex transcription in animal cap assays. Only Bix1 and Bix4 (and not the Smad-interacting Mixer or Bix2) robustly activated the −6Kb:hhex:luc reporter but deletion analyses indicated that they act through via sequences between −1.0 and −0.65 Kb and not via the NRE (Supplementary Fig. S4).
To confirm that Foxh1 and Smad2 regulated endogenous hhex, we performed a series of loss- and gain-of-function experiments. Injection of morpholino oligos to knockdown Smad2 or mRNA encoding a Foxh1-Engrailed (Foxh1-EnR) constitutive repressor construct (Watanabe and Whitman, 1999) abolished hhex expression, whereas ventral injection of constitutively active Foxh1:VP16 or Smad2:VP16 fusion constructs induced ectopic hhex (Fig. 5D). Ventral over-expression of wild type Foxh1 or Smad2 individually had no effect, but together Foxh1 + Smad2 were sufficient to induce ectopic hhex (Fig. 5D). Finally, we examined embryos where maternal foxh1 mRNA had been depleted using the host transfer method (Kofron et al., 2004), and found that hhex expression was severely reduced. This was partially rescued by adding back synthetic foxh1 mRNA (Fig. 5E). Expression of the foxh1-related gene fast3 was not affected in these experiments.
We next used chromatin immunoprecipitation (ChIP) to determine whether Foxh1 associated with the NRE in vivo. As there are no Xenopus anti-Foxh1 antibodies available, we injected a low level of myc-tagged Foxh1 mRNA (50 pg) into embryos and performed ChIP with anti-myc. This level of Foxh1-myc had no detectable effect on development or endogenous hhex expression (Fig. 5D). QPCR of immunoprecipitated chromatin amplified DNA fragments containing the F1 and F2 Foxh1-binding sites in the hhex NRE from both the dorsal and ventral mesendoderm at levels equivalent to the positive control mix2 Activin response element (mix2-ARE) (Chen et al., 1997). The negative control gene mlc2 was not amplified (Fig. 5F). We conclude that Nodal signaling directly activates hhex transcription through Foxh1/Smad-binding sites in the proximal −0.44 Kb NRE.
Siamois and Twin promote hhex expression downstream of mWnt
We next examined the Wnt-responsive element in more detail. Consistent with Wnt/β-catenin acting indirectly, the WRE does not contain Tcf/Lef-binding sites. It does however contain three homeobox sites including two tandem sites with the sequence 5’-TAATGTAAT-3’ (Figs. 4, ,6;6; HD2 and HD3); this is identical to the sequence found in the Wnt-responsive proximal enhancer of gsc, that can be bound by the HD factor Twin (Laurent et al., 1997; Watabe et al., 1995). Direct transcriptional targets of mWnt, Sia and Twn are transiently expressed in the dorsal-anterior endoderm of the blastula similar to hhex (Fig. 6A).
Figure 6. Sia/Twn act downstream of Wnt/β-catenin to activate the WRE
To test whether Sia/Twn mediate the mWnt activation of hhex we performed a series of loss-of-function and rescue experiments in −6.0Kb:hhex:gfp transgenic embryos (Fig. 6C). Knockdown of Sia and Twn by antisense MOs (Ishibashi et al., 2008) caused a dramatic reduction in hhex and gfp expression (Fig. 6C). In addition sia mRNA injection rescued hhex expression in β-catenin-depleted embryos, consistent with Sia/Twn acting downstream of mWnt. Although endogenous xnr mRNA levels were largely unchanged in Sia/Twn-MO embryos (Supplementary Fig. S3), we found that Xnr1 over-expression restored hhex and gfp expression in Sia/Twn-depleted embryos. In contrast Sia over-expression did not rescue hhex or gfp when Nodal signaling was blocked (Fig. 6C), consistent with reports that Sia needs to cooperates with Nodal to induce some organizer genes (Engleka and Kessler, 2001).
Using the hhex:luc deletion constructs we confirmed that Sia stimulates hhex transcription via the WRE (Fig. 6D) and that homeobox sites were required for Sia-induced activation of the reporter in animal caps (Fig. 6B,E). Mutation of the HD1 site alone had no effect on Sia-responsiveness, the Δ HD23 construct exhibited reduced activation, and the Δ HD123 construct with all three sites mutated was not activated above reporter-alone levels. In addition, we observed that Sia cooperated with Xnr1 to activate the hhex:luc reporter, and this cooperation required both the WRE and the Foxh1 sites in the NRE (Supplementary Fig. S5).
Mechanisms of Sia/Twn regulation
These results, together with published reports that Twn can bind to the HD23 sequence from the gsc promoter, suggested that Sia/Twn directly activate hhex transcription. Since there are no ChIP antibodies available to assay Sia/Twn’s association with chromatin in vivo, we tested whether a dexamethasone (DEX)-inducible form of Sia (GR-Sia) (Kodjabachian and Lemaire, 2001) could directly activate hhex transcription in ventral mesendoderm or animal cap explants when translation was blocked by CHX. Surprisingly, DEX-activated GR-Sia could not induce hhex expression in either CHX-treated caps or ventral explants (Fig. 7A; data not shown). We considered two mechanisms to explain this result (Fig. 7B).
Figure 7. Otx2 and Lim1 promote hhex transcription downstream of Sia/Twn.
In the first model, Sia/Twn activate the expression of other HD factors, which in turn stimulate hhex transcription via the WRE (either by themselves or in a complex with Sia/Twn). Candidates include Otx2, Lim1, and Gsc because they are all regulated by Sia/Twn and their expression overlaps with that of hhex (Fig. 7C) (Blitz and Cho, 1995; Cho et al., 1991; Kodjabachian et al., 2001; Laurent et al., 1997; Taira et al., 1994; Xanthos et al., 2002).
In the second model, Sia/Twn indirectly promote hhex transcription via inhibition of BMP signaling (Fig. 7B). Sia/Twn are known to activate the expression of the secreted BMP antagonists Chordin and Noggin in the organizer (Collart et al., 2005; Ishibashi et al., 2008; Kessler, 1997), which inhibit expression of the BMP targets vent1 and vent2. In this model, Vents repress hhex transcription in the ventral endoderm, but not in the dorsal-anterior endoderm as a result of Sia/Twn activity. In support of this model, there are at least eight potential Vent DNA-binding sites (5’-CTAAT-3’) (Friedle et al., 1998; Trindade et al., 1999) in the −1.4 Kb hhex promoter (Fig. 4), and ectopic over-expression of Vent2 can inhibit hhex expression in the foregut during later somite stages of development (McLin et al., 2007).
Finally, it was possible that Sia/Twn promote hhex expression via both mechanisms. Because we observed that GR-Sia directly activated the transcription of otx2, lim1, and chordin in ventral explants treated with CHX + DEX, and that GR-Sia indirectly suppressed vent1/2 expression (Fig. 7A), we tested both models.
Regulation of hhex transcription by Otx2, Lim1 and Gsc
Consistent with the first model, gsc, otx2, and lim1 were dramatically down-regulated in Sia/Twn-depleted embryos and were ectopically induced in the ventral mesendoderm by Sia injection (Fig. 7C). We then tested whether Gsc mediated the effects of Sia/Twn and found that injection of gsc mRNA was unable to rescue hhex expression in Sia/Twn-depleted embryos (data not shown). Moreover, injection of a Gsc-MO had no obvious effect on hhex expression at stage 10.5 (Supplementary Fig. S6), even though the control genes vent1, vent2 and wnt8, which are known to be repressed by Gsc (Sander et al., 2007), were up-regulated. However, by stage 12 hhex was severely reduced in Gsc-depleted embryos (Supplementary Fig. S6). This demonstrates that while Gsc is not required to initiate hhex transcription, it participates in maintaining hhex expression, possibly by suppressing Vents and Wnt8.
We next examined the roles of Otx2 and Lim1. Injection of a Lim1-MO (Schambony and Wedlich, 2007) or an Otx2-MO (Carron et al., 2005) resulted in a modest reduction of hhex expression (Supplementary Fig. S7). However, depletion of both Otx2 and Lim1 resulted in a dramatic loss of hhex transcripts, comparable to Sia/Twn depletion (Fig. 7C). Otx2/Lim1-depleted embryos also exhibited reduced gsc and chordin expression and expanded vent1/2 expression (Supplementary Fig. S7). Ectopic over-expression of either Otx2 or Lim1 alone was not sufficient to induce hhex (data not shown), but when co-injected together they did induce ectopic hhex and gsc. However, this combination of Otx2 + Lim1 did not rescue hhex expression in Sia/Twn-MO embryos (although it did rescue gsc) (Fig. 7C). One possible explanation for this result is that Otx2 and Lim1 might require other interacting partners, possibly Sia/Twn themselves.
Since otx2 + lim1 mRNA injection was sufficient to induce hhex in the ventral mesendoderm, we tested whether they acted via the homeobox sites in the WRE and whether they could cooperate with Sia. We found that Otx2 plus Lim1 activated the −0.65 Kb reporter in an additive fashion that was enhanced by Sia co-injection, and that their activity required the HD DNA-binding sites (Fig. 7D). Together these data suggest that Otx2 and Lim1 act downstream of, or in combination with, Sia/Twn to promote hhex transcription via the WRE. We next tested whether the co-injection of GR-Otx2 + GR-Lim1 (with or without GR-Sia) could directly induce hhex expression in ventral explants treated with CHX and DEX. They could not directly induce hhex, but GR-Otx2 + GR-Lim1 did directly induce chordin transcription (data not shown). This, along with the observation that Otx2 and Lim1 are required for chordin expression (Supplementary Fig. S7), suggests that while Otx2 and Lim1 may act positively through the WRE, they also indirectly promote hhex transcription by inhibiting BMP and Vent.
Sia/Twn promote hhex expression by inducing Chordin and inhibiting Vent
We next tested the second model, in which Sia/Twn indirectly promote hhex expression by inhibiting BMP activity (Fig. 7B). As predicted, chordin was dramatically reduced in Sia/Twn-depleted embryos, whereas vent1 and vent2 were ectopically expanded into the dorsal-anterior endoderm (Fig. 8). Consistent with this, ventral injection of sia mRNA induced ectopic chordin and caused a dramatic reduction in vent1/2 transcripts (Fig. 8). The Sia/Twn-MO phenotype was partially rescued by inhibiting BMP signaling in the dorsal-anterior mesendoderm with a dominant-negative (truncated) BMP receptor (tBR), confirming that the ectopic vent expression and repressed hhex were due in part to elevated BMP signaling.
Figure 8. Sia/Twn promote hhex transcription by inhibiting BMP and Vent.
To formally test whether Chordin and Noggin are required for hhex expression, we injected antisense MOs to knock down these factors (Kuroda et al., 2004; Oelgeschlager et al., 2003) in hhex:gfp transgenic embryos. As predicted, Chd/Nog-depleted embryos exhibited a striking reduction in hhex, gfp, chordin, gsc, otx2, and lim1 levels and ectopic vent1/2 (Fig. 8; data not shown). Conversely, when we injected antisense MOs targeting both Vent1 and Vent2 (Sander et al., 2007), we observed ectopic hhex and gfp expression throughout the endoderm. Vent1/2 depletion also resulted in increased gsc, otx2, and chordin expression, but did not alter xnr1,2,4,5,6 or sia mRNA levels (Supplementary Fig. S8). Finally, we tested whether the loss of hhex caused by Sia/Twn-MOs could be rescued by knockdown of the ectopic Vent1/2. Co-injection of Sia/Twn-MOs plus Vent1/2-MOs into the dorsal-anterior mesendoderm strikingly restored hhex, gfp, and chordin levels (Fig. 8), although their expression boundaries were not as defined as in control embryos. These data indicate that Vent1/2 repress hhex expression in the ventral-posterior endoderm and that Sia/Twn exclude vent1/2 from the organizer through the action of BMP antagonists, thereby creating a permissive environment for hhex transcription.
Vents repress hhex transcription
To test whether Vents can act directly on the hhex promoter, we generated an inducible GR-Vent2-VP16 construct, which converts Vent2 from a transcriptional repressor into a potent activator (Onichtchouk et al., 1998). In animal cap assays, DEX-activated GR-Vent2-VP16 directly induced hhex and otx2 transcription (but not chordin) when translation was inhibited by CHX (Fig. 9A). This suggests that during normal development, Vent2 directly represses the hhex and otx2 promoters.
Figure 9. Vent and Sia directly act on the hhex promoter.
There are eight potential Vent DNA-binding sites in the hhex upstream region (Fig. 4). To map where Vent1/2 act, we injected hhex:luc deletion constructs into the ventral-posterior mesendoderm along with the Vent1/2-MOs (Fig. 9B). Vent depletion resulted in a robust activation (de-repression) of the −6.0 Kb, −0.65 Kb, and −0.65 Kb:ΔHD reporter constructs. The −0.55 Kb hhex:luc construct was also significantly activated over background by Vent depletion, albeit to lower levels than the −0.65 Kb construct. This suggests that the WRE mediates some but not all of Vent’s repressive activity. In contrast, Vent-MO injection did not stimulate the −0.44 Kb Nodal-responsive proximal promoter, and mutation of the Foxh1 sites dramatically impaired Vent-MO-mediated activation of the −6 Kb reporter (Fig. 9B), arguing that the ectopic hhex expression in Vent-depleted embryos was due to Nodal signaling. These data suggest that Vent1/2 act at several locations on the hhex promoter, including sequences between −0.55 and −0.44 Kb, which contain one consensus Vent DNA-binding site (Fig. 4).
Siamois can directly activate hhex transcription in the absence of Vent1/2
Altogether, our data suggest that Sia/Twn promote hhex transcription by simultaneously preventing Vent repression (via Chordin and Noggin) and activating the HD sites in the WRE. We therefore repeated the CHX experiment with the GR-Sia construct, but also injected Vent1/2-MOs to deplete animal caps of endogenous Vent1/2. The depletion of Vent1/2 from the cap would negate the need for Sia to induce BMP antagonists and allow us to ask whether Sia can act directly on the hhex promoter. It is important to note that the Vent1/2-MOs do not induce ectopic hhex in animal cap ectoderm (Fig. 9), as they do in ventral-posterior mesendoderm (Fig. 8), because animal caps lack Nodal signaling. We found that when Vent1/2 were depleted from animal caps, DEX-activated GR-Sia was now able to directly induce hhex expression in the presence of CHX. We conclude that Sia/Twn promote hhex transcription both by relieving Vent repression and by activating the WRE.
A Gene Regulatory Network controlling hhex transcription
We have uncovered a complex gene regulatory network controlling hhex expression in the early embryo. This study, combined with the work of others, links our understanding of axis specification with foregut organogenesis. Our data suggest a model (Fig. 10) to explain how the three major signaling pathways, acting in distinct spatial domains of the Xenopus blastula (1, Nodal signaling active throughout the mesendoderm; 2, maternal Wnt11/β-catenin on the dorsal side; and 3, repressive BMP/Vent activity in the ventral-posterior region), all converge on DNA cis-regulatory elements to control hhex transcription in the dorsal-anterior endoderm of the organizer.
Figure 10. A model of the regulatory network controlling hhex transcription.
Nodal signaling is absolutely required to directly activate hhex transcription via Foxh1/Smad2 complexes binding to DNA sites in the −0.44 Kb proximal NRE. Bix1 and Bix4 further maintain hhex expression downstream of Nodal through cis-elements between −1.0 and −0.65 Kb. Activation of hhex transcription by Nodal is repressed in the ventral mesendoderm by Vent1/2, which are targets of BMP and zygotic Wnt8 signals. Our data suggest that Vent1/2 directly repress hhex transcription through multiple DNA sites located between −1.5 and −0.44 Kb of the hhex promoter, although further work is needed to define these sites precisely. Our data suggest that the balance between stimulation by Foxh1/Smad2 and repression by Vent results in hhex being poised but not actively transcribed.
Maternal Wnt signaling on the dorsal side of the blastula cooperates with Nodal signals to indirectly promote hhex transcription in several ways. First, mWnt promotes xnr1,5,6 transcription, resulting in higher Nodal activity in the dorsal-anterior endoderm. mWnt also directly induces the expression of Sia and Twn, which activate hhex transcription via two complementary mechanisms: 1) Sia/Twn activate hhex transcription (possibly in a complex with Otx2 and Lim1) via homeobox sites in the −0.65 to −0.55 Kb WRE; 2) Sia/Twn (as well as Otx2 and Lim1) induce the expression of Chordin and Noggin, which inhibit BMP activity and exclude vent1/2 expression from the organizer, so that hhex is not repressed in these cells. Our data indicate that both activation and relief of Vent repression are essential for hhex transcription, with the interplay between the positively acting Sia/Chordin axis and the negatively acting BMP/Vent axis defining the hhex expression domain.
Comparison to other Organizer genes
The dual activation and de-repression mechanism that we describe may be broadly applicable to the regulation of many Sia-target genes. Cis-regulatory analyses suggest that gsc and cerberus transcription are also regulated by a combination of positively acting Sia/Otx2/Lim-containing complexes and negatively acting Vent-containing complexes that interact with clusters of overlapping homeobox sites (Koide et al., 2005; Mochizuki et al., 2000; Yamamoto et al., 2003). In the future, it will be important to determine how endogenous HD complexes are assembled on chromatin in vivo and to test, for example, whether Sia- and Vent-containing complexes compete for the same cis-regulatory elements.
Another striking parallel between hhex and gsc is the functional interaction between distinct Sia-associated WREs and Smad-associated NREs (Koide et al., 2005). Interestingly, we and others have found that Sia requires Nodal signaling to induce certain organizer genes (Engleka and Kessler, 2001). Sia over-expression could not activate hhex transcription in the endoderm when Nodal signaling was blocked, and mutation of the NRE impaired the ability of Sia to activate the hhex:luc reporter. Although the mechanisms of this Nodal dependency are unknown, one possibility is that in order for Sia-WRE interactions to stimulate transcription, the NRE must also be bound by Smad2. Indeed, recent studies indicate that Smad2 DNA-binding can cause epigenetic modifications that make chromatin transcriptionally permissive (Dahle et al., 2010).
Regulation of hhex transcription in mammals
There is ample genetic evidence that the signaling pathways regulating anterior endoderm gene expression are conserved in mammals. For example, in mice, Wnt3a signaling in the primitive streak promotes Nodal expression, and Nodal, Smad2, Foxh1, Otx2 and Lim1 are all required for anterior mesendoderm development (Zorn and Wells, 2009). In addition, a combination of Wnt3a and Activin is commonly used to induce anterior endoderm lineages in human and mouse ES cells. Finally, there is evidence that BMP antagonism protects Nodal signaling to promote anterior development in the mouse gastrula (Yang et al., 2010). Thus, the overall signaling crosstalk that we describe here is likely to be broadly applicable to all vertebrates.
There are, however, some distinctions between hhex regulation in Xenopus and mice. For example, mice lack Sia/Twn and Vent orthologs, although there is a Ventx gene in humans (Moretti et al., 2001). We speculate that in mice Otx2 and Lim1 might substitute for Sia/Twn, whilst Msx factors might play the role of Vents. In addition, deletion analysis of the mouse hhex locus concluded that gastrula endoderm expression is controlled by elements in the 3rd intron and not the upstream region, in contrast to what we have found in Xenopus (Rodriguez et al., 2001). A cross-species BLAST search revealed no obvious homology between the Xenopus −6 Kb upstream region and mammalian hhex genomic loci (Supplementary Fig. S9), although the mouse 3rd intron does contain putative Smad, HD and Fox DNA-binding sites (Rodriguez et al., 2001). While the functional importance of these sites has not been tested, it is possible that Xenopus and mouse share similar cis-regulatory cassettes located in different genomic regions. It is formally possible that the 3rd intron of the Xenopus hhex gene might also contribute to anterior endoderm expression.
Regulation of hhex transcription by temporally distinct Wnt signaling
In this study we show that maternal Wnt promotes hhex transcription; however, during gastrula and early somite stages, zygotic Wnt/β-catenin signaling has the opposite effect and represses hhex expression (McLin et al., 2007). We propose that these temporally distinct Wnt activities can be explained by a common regulatory cassette: repression by Vents. The vent2 promoter contains an essential BMP-responsive element, as well as TCF/Lef DNA-binding sites that modulate the strength of vent2 expression (Friedle and Knochel, 2002; Karaulanov et al., 2004). In the blastula, BMP signaling directly induces vent2 in the ventral mesendoderm (Onichtchouk et al., 1996; Rastegar et al., 1999), while mWnt, through the action of Sia and Chordin, inhibits BMP and vents in the organizer, thus permitting hhex transcription. In contrast, during gastrula and somite stages, zygotic Wnts in the posterior ventral-lateral mesoderm cooperate with BMP4 and act on the TCF sites in the vent2 promoter to maintain its expression in the posterior endoderm (Karaulanov et al., 2004; Li et al., 2008; McLin et al., 2007). At this time, secreted Wnt antagonists such as sFRP5 are required to exclude high levels of Wnt signaling and vent expression from the foregut, thus maintaining hhex expression. This provides a paradigm for how crosstalk between signaling pathways can have temporally distinct effects on the same target genes.
Research Highlights
• Nodal signaling directly activates hhex transcription via Foxh1/Smad2 sites in the proximal promoter.
• The BMP targets Vent1 and Vent2 repress hhex in ventral tissue.
• Maternal Wnt cooperates with Nodal and indirectly activates hhex expression via Siamois and Twin.
• Siamois/Twin promote hhex transcription by inducing BMP antagonists, which exclude Vents from the organizer.
Supplementary Material
Acknowledgements
We are grateful to Drs. Cho, Dawid, De Robertis, Heasman, Hoppler, Kessler, Kodjabachian, Niehrs, Sive, Taira, and Whitman for providing reagents. This work was supported by NIH grants DK70858 to AMZ and P30 DK078392 (DHC bioinformatics core). We also thank members of the Zorn and Wells labs for helpful suggestions throughout this study, and Ira Blitz and Shelby Blythe for advice on ChIP.
• Ahmed N, Howard L, Woodland HR. Early endodermal expression of the Xenopus Endodermin gene is driven by regulatory sequences containing essential Sox protein-binding elements. Differentiation. 2004;72:171–84. [PubMed]
• Blitz IL, Cho KW. Anterior neurectoderm is progressively induced during gastrulation: the role of the Xenopus homeobox gene orthodenticle. Development. 1995;121:993–1004. [PubMed]
• Blythe SA, Reid CD, Kessler DS, Klein PS. Chromatin immunoprecipitation in early Xenopus laevis embryos. Dev Dyn. 2009;238:1422–32. [PMC free article] [PubMed]
• Bort R, Martinez-Barbera JP, Beddington RS, Zaret KS. Hex homeobox gene-dependent tissue positioning is required for organogenesis of the ventral pancreas. Development. 2004;131:797–806. [PubMed]
• Bouwmeester T, Kim S, Sasai Y, Lu B, De Robertis EM. Cerberus is a head-inducing secreted factor expressed in the anterior endoderm of Spemann's organizer. Nature. 1996;382:595–601. [PubMed]
• Brannon M, Gomperts M, Sumoy L, Moon RT, Kimelman D. A beta-catenin/XTcf-3 complex binds to the siamois promoter to regulate dorsal axis specification in Xenopus. Genes Dev. 1997;11:2359–70. [PubMed]
• Brickman JM, Jones CM, Clements M, Smith JC, Beddington RS. Hex is a transcriptional repressor that contributes to anterior identity and suppresses Spemann organiser function. Development. 2000;127:2303–15. [PubMed]
• Carnac G, Kodjabachian L, Gurdon JB, Lemaire P. The homeobox gene Siamois is a target of the Wnt dorsalisation pathway and triggers organiser activity in the absence of mesoderm. Development. 1996;122:3055–65. [PubMed]
• Carron C, Bourdelas A, Li HY, Boucaut JC, Shi DL. Antagonistic interaction between IGF and Wnt/JNK signaling in convergent extension in Xenopus embryo. Mech Dev. 2005;122:1234–47. [PubMed]
• Chen X, Weisberg E, Fridmacher V, Watanabe M, Naco G, Whitman M. Smad4 and FAST-1 in the assembly of activin-responsive factor. Nature. 1997;389:85–9. [PubMed]
• Cho KW, Blumberg B, Steinbeisser H, De Robertis EM. Molecular nature of Spemann's organizer: the role of the Xenopus homeobox gene goosecoid. Cell. 1991;67:1111–20. [PMC free article] [PubMed]
• Collart C, Verschueren K, Rana A, Smith JC, Huylebroeck D. The novel Smad-interacting protein Smicl regulates Chordin expression in the Xenopus embryo. Development. 2005;132:4575–86. [PubMed]
• Dahle O, Kumar A, Kuehn MR. Nodal signaling recruits the histone demethylase Jmjd3 to counteract polycomb-mediated repression at target genes. Sci Signal. 2010;3:ra48. [PubMed]
• Danilov V, Blum M, Schweickert A, Campione M, Steinbeisser H. Negative autoregulation of the organizer-specific homeobox gene goosecoid. J Biol Chem. 1998;273:627–35. [PubMed]
• De Robertis EM. Spemann's organizer and the self-regulation of embryonic fields. Mech Dev. 2009;126:925–41. [PMC free article] [PubMed]
• Engleka MJ, Kessler DS. Siamois cooperates with TGFbeta signals to induce the complete function of the Spemann-Mangold organizer. Int J Dev Biol. 2001;45:241–50. [PubMed]
• Fan MJ, Gruning W, Walz G, Sokol SY. Wnt signaling and transcriptional control of Siamois in Xenopus embryos. Proc Natl Acad Sci U S A. 1998;95:5626–31. [PubMed]
• Foley AC, Mercola M. Heart induction by Wnt antagonists depends on the homeodomain transcription factor Hex. Genes Dev. 2005;19:387–96. [PubMed]
• Friedle H, Knochel W. Cooperative interaction of Xvent-2 and GATA-2 in the activation of the ventral homeobox gene Xvent-1B. J Biol Chem. 2002;277:23872–81. [PubMed]
• Friedle H, Rastegar S, Paul H, Kaufmann E, Knochel W. Xvent-1 mediates BMP-4-induced suppression of the dorsal-lip-specific early response gene XFD-1' in Xenopus embryos. Embo J. 1998;17:2298–307. [PubMed]
• Gammill LS, Sive H. Identification of otx2 target genes and restrictions in ectodermal competence during Xenopus cement gland formation. Development. 1997;124:471–81. [PubMed]
• Heasman J. Patterning the early Xenopus embryo. Development. 2006;133:1205–17. [PubMed]
• Heasman J, Kofron M, Wylie C. Beta-catenin signaling activity dissected in the early Xenopus embryo: a novel antisense approach. Dev Biol. 2000;222:124–34. [PubMed]
• Hilton E, Rex M, Old R. VegT activation of the early zygotic gene Xnr5 requires lifting of Tcf-mediated repression in the Xenopus blastula. Mech Dev. 2003;120:1127–38. [PubMed]
• Houston DW, Kofron M, Resnik E, Langland R, Destree O, Wylie C, Heasman J. Repression of organizer genes in dorsal and ventral Xenopus cells mediated by maternal XTcf3. Development. 2002;129:4015–25. [PubMed]
• Howell M, Hill CS. XSmad2 directly activates the activin-inducible, dorsal mesoderm gene XFKH1 in Xenopus embryos. Embo J. 1997;16:7411–21. [PubMed]
• Hyde CE, Old RW. Regulation of the early expression of the Xenopus nodal-related 1 gene, Xnr1. Development. 2000;127:1221–9. [PubMed]
• Ishibashi H, Matsumura N, Hanafusa H, Matsumoto K, De Robertis EM, Kuroda H. Expression of Siamois and Twin in the blastula Chordin/Noggin signaling center is required for brain formation in Xenopus laevis embryos. Mech Dev. 2008;125:58–66. [PMC free article] [PubMed]
• Jones CM, Broadbent J, Thomas PQ, Smith JC, Beddington RS. An anterior signalling centre in Xenopus revealed by the homeobox gene XHex. Curr Biol. 1999;9:946–54. [PubMed]
• Karaulanov E, Knochel W, Niehrs C. Transcriptional regulation of BMP4 synexpression in transgenic Xenopus. Embo J. 2004;23:844–56. [PubMed]
• Kaufmann E, Paul H, Friedle H, Metz A, Scheucher M, Clement JH, Knochel W. Antagonistic actions of activin A and BMP-2/4 control dorsal lip-specific activation of the early response gene XFD-1' in Xenopus laevis embryos. Embo J. 1996;15:6739–49. [PubMed]
• Keng VW, Fujimori KE, Myint Z, Tamamaki N, Nojyo Y, Noguchi T. Expression of Hex mRNA in early murine postimplantation embryo development. FEBS Lett. 1998;426:183–6. [PubMed]
• Keng VW, Yagi H, Ikawa M, Nagano T, Myint Z, Yamada K, Tanaka T, Sato A, Muramatsu I, Okabe M, Sato M, Noguchi T. Homeobox gene Hex is essential for onset of mouse embryonic liver development and differentiation of the monocyte lineage. Biochem Biophys Res Commun. 2000;276:1155–61. [PubMed]
• Kessler DS. Siamois is required for formation of Spemann's organizer. Proc Natl Acad Sci U S A. 1997;94:13017–22. [PubMed]
• Kodjabachian L, Karavanov AA, Hikasa H, Hukriede NA, Aoki T, Taira M, Dawid IB. A study of Xlim1 function in the Spemann-Mangold organizer. Int J Dev Biol. 2001;45:209–18. [PubMed]
• Kodjabachian L, Lemaire P. Siamois functions in the early blastula to induce Spemann's organiser. Mech Dev. 2001;108:71–9. [PubMed]
• Kofron M, Puck H, Standley H, Wylie C, Old R, Whitman M, Heasman J. New roles for FoxH1 in patterning the early embryo. Development. 2004;131:5065–78. [PubMed]
• Koide T, Hayata T, Cho KW. Xenopus as a model system to study transcriptional regulatory networks. Proc Natl Acad Sci U S A. 2005;102:4943–8. [PubMed]
• Kroll KL, Amaya E. Transgenic Xenopus embryos from sperm nuclear transplantations reveal FGF signaling requirements during gastrulation. Development. 1996;122:3173–83. [PubMed]
• Kuroda H, Wessely O, De Robertis EM. Neural induction in Xenopus: requirement for ectodermal and endomesodermal signals via Chordin, Noggin, beta-Catenin, and Cerberus. PLoS Biol. 2004;2:E92. [PMC free article] [PubMed]
• Labbe E, Silvestri C, Hoodless PA, Wrana JL, Attisano L. Smad2 and Smad3 positively and negatively regulate TGF beta-dependent transcription through the forkhead DNA-binding protein FAST2. Mol Cell. 1998;2:109–20. [PubMed]
• Latinkic BV, Smith JC. Goosecoid and mix.1 repress Brachyury expression and are required for head formation in Xenopus. Development. 1999;126:1769–79. [PubMed]
• Latinkic BV, Umbhauer M, Neal KA, Lerchner W, Smith JC, Cunliffe V. The Xenopus Brachyury promoter is activated by FGF and low concentrations of activin and suppressed by high concentrations of activin and by paired-type homeodomain proteins. Genes Dev. 1997;11:3265–76. [PubMed]
• Laurent MN, Blitz IL, Hashimoto C, Rothbacher U, Cho KW. The Xenopus homeobox gene twin mediates Wnt induction of goosecoid in establishment of Spemann's organizer. Development. 1997;124:4905–16. [PubMed]
• Lemaire P, Garrett N, Gurdon JB. Expression cloning of Siamois, a Xenopus homeobox gene expressed in dorsal-vegetal cells of blastulae and able to induce a complete secondary axis. Cell. 1995;81:85–94. [PubMed]
• Liu F, van den Broek O, Destree O, Hoppler S. Distinct roles for Xenopus Tcf/Lef genes in mediating specific responses to Wnt/{beta}-catenin signalling in mesoderm development. Development. 2005;132:5375–85. [PubMed]
• Martinez Barbera JP, Clements M, Thomas P, Rodriguez T, Meloy D, Kioussis D, Beddington RS. The homeobox gene Hex is required in definitive endodermal tissues for normal forebrain, liver and thyroid formation. Development. 2000;127:2433–45. [PubMed]
• McKendry R, Hsu SC, Harland RM, Grosschedl R. LEF-1/TCF proteins mediate wnt-inducible transcription from the Xenopus nodal-related 3 promoter. Dev Biol. 1997;192:420–31. [PubMed]
• McLin VA, Rankin SA, Zorn AM. Repression of Wnt/{beta}-catenin signaling in the anterior endoderm is essential for liver and pancreas development. Development. 2007;134:2207–17. [PubMed]
• Mochizuki T, Karavanov AA, Curtiss PE, Ault KT, Sugimoto N, Watabe T, Shiokawa K, Jamrich M, Cho KW, Dawid IB, Taira M. Xlim-1 and LIM domain binding protein 1 cooperate with various transcription factors in the regulation of the goosecoid promoter. Dev Biol. 2000;224:470–85. [PubMed]
• Molenaar M, van de Wetering M, Oosterwegel M, Peterson-Maduro J, Godsave S, Korinek V, Roose J, Destree O, Clevers H. XTcf-3 transcription factor mediates beta-catenin-induced axis formation in Xenopus embryos. Cell. 1996;86:391–9. [PubMed]
• Moretti PA, Davidson AJ, Baker E, Lilley B, Zon LI, D'Andrea RJ. Molecular cloning of a human Vent-like homeobox gene. Genomics. 2001;76:21–9. [PubMed]
• Newman CS, Chia F, Krieg PA. The XHex homeobox gene is expressed during development of the vascular endothelium: overexpression leads to an increase in vascular endothelial cell number. Mech Dev. 1997;66:83–93. [PubMed]
• Niehrs C. Regionally specific induction by the Spemann-Mangold organizer. Nat Rev Genet. 2004;5:425–34. [PubMed]
• Noyes MB, Christensen RG, Wakabayashi A, Stormo GD, Brodsky MH, Wolfe SA. Analysis of homeodomain specificities allows the family-wide prediction of preferred recognition sites. Cell. 2008;133:1277–89. [PMC free article] [PubMed]
• Oelgeschlager M, Kuroda H, Reversade B, De Robertis EM. Chordin is required for the Spemann organizer transplantation phenomenon in Xenopus embryos. Dev Cell. 2003;4:219–30. [PubMed]
• Onichtchouk D, Gawantka V, Dosch R, Delius H, Hirschfeld K, Blumenstock C, Niehrs C. The Xvent-2 homeobox gene is part of the BMP-4 signalling pathway controlling [correction of controling] dorsoventral patterning of Xenopus mesoderm. Development. 1996;122:3045–53. [PubMed]
• Onichtchouk D, Glinka A, Niehrs C. Requirement for Xvent-1 and Xvent-2 gene function in dorsoventral patterning of Xenopus mesoderm. Development. 1998;125:1447–56. [PubMed]
• Piccolo S, Agius E, Leyns L, Bhattacharyya S, Grunz H, Bouwmeester T, De Robertis EM. The head inducer Cerberus is a multifunctional antagonist of Nodal, BMP and Wnt signals. Nature. 1999;397:707–10. [PMC free article] [PubMed]
• Ramel MC, Lekven AC. Repression of the vertebrate organizer by Wnt8 is mediated by Vent and Vox. Development. 2004;131:3991–4000. [PubMed]
• Rastegar S, Friedle H, Frommer G, Knochel W. Transcriptional regulation of Xvent homeobox genes. Mech Dev. 1999;81:139–49. [PubMed]
• Ring C, Ogata S, Meek L, Song J, Ohta T, Miyazono K, Cho KW. The role of a Williams-Beuren syndrome-associated helix-loop-helix domain-containing transcription factor in activin/nodal signaling. Genes Dev. 2002;16:820–35. [PubMed]
• Rodriguez TA, Casey ES, Harland RM, Smith JC, Beddington RS. Distinct enhancer elements control Hex expression during gastrulation and early organogenesis. Dev Biol. 2001;234:304–16. [PubMed]
• Sander V, Reversade B, De Robertis EM. The opposing homeobox genes Goosecoid and Vent1/2 self-regulate Xenopus patterning. Embo J. 2007;26:2955–65. [PMC free article] [PubMed]
• Schambony A, Wedlich D. Wnt-5A/Ror2 regulate expression of XPAPC through an alternative noncanonical signaling pathway. Dev Cell. 2007;12:779–92. [PubMed]
• Shimizu K, Bourillot PY, Nielsen SJ, Zorn AM, Gurdon JB. Swift is a novel BRCT domain coactivator of Smad2 in transforming growth factor beta signaling. Mol Cell Biol. 2001;21:3901–12. [PMC free article] [PubMed]
• Sinner D, Rankin S, Lee M, Zorn AM. Sox17 and beta-catenin cooperate to regulate the transcription of endodermal genes. Development. 2004;131:3069–80. [PubMed]
• Smithers LE, Jones CM. Xhex-expressing endodermal tissues are essential for anterior patterning in Xenopus. Mech Dev. 2002;119:191–200. [PubMed]
• Sparrow DB, Latinkic B, Mohun TJ. A simplified method of generating transgenic Xenopus. Nucleic Acids Res. 2000;28:E12. [PMC free article] [PubMed]
• Standley HJ, Destree O, Kofron M, Wylie C, Heasman J. Maternal XTcf1 and XTcf4 have distinct roles in regulating Wnt target genes. Dev Biol. 2006;289:318–28. [PubMed]
• Taira M, Otani H, Saint-Jeannet JP, Dawid IB. Role of the LIM class homeodomain protein Xlim-1 in neural and muscle induction by the Spemann organizer in Xenopus. Nature. 1994;372:677–9. [PubMed]
• Takahashi S, Yokota C, Takano K, Tanegashima K, Onuma Y, Goto J, Asashima M. Two novel nodal-related genes initiate early inductive events in Xenopus Nieuwkoop center. Development. 2000;127:5319–29. [PubMed]
• Tao QH, Yang J, Mei WY, Geng X, Ding XY. Cloning and analysing of 5' flanking region of Xenopus organizer gene noggin. Cell Res. 1999;9:209–16. [PubMed]
• Trindade M, Tada M, Smith JC. DNA-binding specificity and embryological function of Xom (Xvent-2) Dev Biol. 1999;216:442–56. [PubMed]
• Watabe T, Kim S, Candia A, Rothbacher U, Hashimoto C, Inoue K, Cho KW. Molecular mechanisms of Spemann's organizer formation: conserved growth factor synergy between Xenopus and mouse. Genes Dev. 1995;9:3038–50. [PubMed]
• Watanabe M, Rebbert ML, Andreazzoli M, Takahashi N, Toyama R, Zimmerman S, Whitman M, Dawid IB. Regulation of the Lim-1 gene is mediated through conserved FAST-1/FoxH1 sites in the first intron. Dev Dyn. 2002;225:448–56. [PubMed]
• Watanabe M, Whitman M. FAST-1 is a key maternal effector of mesoderm inducers in the early Xenopus embryo. Development. 1999;126:5621–34. [PubMed]
• Witta SE, Sato SM. XIPOU 2 is a potential regulator of Spemann's Organizer. Development. 1997;124:1179–89. [PubMed]
• Xanthos JB, Kofron M, Tao Q, Schaible K, Wylie C, Heasman J. The roles of three signaling pathways in the formation and function of the Spemann Organizer. Development. 2002;129:4027–43. [PubMed]
• Yamamoto S, Hikasa H, Ono H, Taira M. Molecular link in the sequential induction of the Spemann organizer: direct activation of the cerberus gene by Xlim-1, Xotx2, Mix.1, and Siamois, immediately downstream from Nodal and Wnt signaling. Dev Biol. 2003;257:190–204. [PubMed]
• Yang YP, Anderson RM, Klingensmith J. BMP antagonism protects Nodal signaling in the gastrula to promote the tissue interactions underlying mammalian forebrain and craniofacial patterning. Hum Mol Genet. 2010;19:3030–42. [PMC free article] [PubMed]
• Yao J, Kessler DS. Goosecoid promotes head organizer activity by direct repression of Xwnt8 in Spemann's organizer. Development. 2001;128:2975–87. [PubMed]
• Zorn AM, Butler K, Gurdon JB. Anterior endomesoderm specification in Xenopus by Wnt/beta-catenin and TGF-beta signalling pathways. Dev Biol. 1999;209:282–97. [PubMed]
• Zorn AM, Wells JM. Vertebrate endoderm development and organ formation. Annu Rev Cell Dev Biol. 2009;25:221–51. [PMC free article] [PubMed]
Kirsty Logan's Writings
Short Story
The Forest Book of Bedtime Stories
When the dog starts barking, we know it's beginning. Or rather, ending. We grab handfuls of bottles and climb up onto the roof of the house. She stumbles and her foot slips into the gutter sopped with dead leaves. I grab her wrists and pull her clear - sure, she's not the person I'd choose to do this with, but she's my only option so I might as well be nice. Plus...
I'm looking to expand my game system library, and I would like to tap the hive mind to find funky, offbeat core rules or settings.
What system should I add to my game library that best features novel rules that actually work in play?
Guilty pleasures are welcome, but I want to know why it has a place in your heart; actual play reporting is encouraged.
If it's out of print and/or produced by an independent game publisher, I'm interested. But I need you to tell me why it appeals to you. If you have any links to a site detailing your experiences with your system, please feel free to post that in your answer.
Example Answer: I enjoyed playing Psi World back in the '80s because the idea of an underground resistance movement in America started by people with emerging psionic powers is just cool. Yes, it's a premise detailed in X-Men, but Psi World accomplishes the same paranoia feeling in a much grittier way without so much spandex.
Define mainstream, please. :) – Brian Ballsun-Stanton Apr 27 '11 at 12:05
@T.W.Wombat I'm sorry your question appears to be entirely subjective. There is no way people can objectively measure "favorite". We allow quite a bit of subjective on RPG, as much of the hobby can't be quantified, but favorite doesn't add value. – C. Ross Apr 27 '11 at 12:14
I've done a massive edit based on a conversation over in chat. The new question hopefully focuses more on good-subjective (play experience looking for new odd games which are fun for the poster) rather than "big honking list" which is not handled well by the site. Voting to reopen. – Brian Ballsun-Stanton Apr 27 '11 at 12:30
I thought this topic might be a bit too broad and too subjective. Thanks for the edits, Brian - I think that'll cover what I'm looking for in a more focused way. – T.W.Wombat Apr 27 '11 at 12:34
I would add Apocalypse World to the list of answers, except that I haven't actually played it yet. All of Vincent Baker's games qualify, though. – gomad Apr 27 '11 at 17:15
closed as not constructive by wax eagle, Brian Ballsun-Stanton, C. Ross Mar 25 '12 at 20:12
5 Answers
Accepted answer (score 5)
Apocalypse World is what I'd recommend, which gomad mentioned in comments. For my part, I have played and run it, so I feel like I can speak to it with at least a bit of authority.
Apocalypse World has the mixed distinction right now of being the "new hotness"—or at least it did last year during the run-up to its public release and after. It has been called a game-changer, which is high praise. I won't say whether it's true in the usual meaning of the phrase, but I can say that it has changed my personal game profoundly. Whether any of the praise is deserved or is just hype can really only be judged personally once you've played, but it's something to be aware of if you go do more reading about the game online.
The game has three virtues going for it relative to your question:
First, it's a very new game; the most recent expression of a well-respected indie designer's cumulative growth over the course of several successful games. Notably, Apocalypse World has outsold by several times all of Vincent Baker's previous games put together, and games like Dogs in the Vineyard aren't anything to sneeze at.
So, it has the vote of the indie-games-buying market, and you can be sure that there will be lots of online discussion, actual play reports, and accumulated wisdom on how to run the game.
Second, AW is a seamless blend of a complete game and an instruction manual on how to GM—at least, in one specific style. It explicitly says that its way isn't the only way to run Apocalypse World, but it does say that the way to GM that it describes is the surest way for the average GM to run AW in a way that gets the most awesome out of the system.
To that end, it describes this very specific style of GMing in exhaustive but accessible detail, going so far as to create a set of rules for the GM to follow. This manual on "exactly what to do right this minute during your game" is one of the innovations that have got people talking about the game so much online.
One of the principal advantages of the GMing style it lays out is that the GM does not need to have a prepared plot. The game goes so far as to say the GM is not allowed to plot things in advance. This enables part of the core goals for the player experience of Apocalypse World: as a player, you can go anywhere, do anything, and impact the setting any way you'd like to try. You may not succeed, but it will be because of poor planning or failed rolls, never because the GM blocked your actions to preserve the story. This is also great as the GM, because you get to be just as entertained finding out what happens as the players.
The GM "rules" enable this through a set of tools and techniques that are designed to be invisible to the players (and they are!) while being highly structured for ease of GMing the game even for novice GMs. The structure is all about creating bits of setting, NPCs, and other playing pieces the GM can bring to bear to either react to the players, or throw a spontaneous (yet structured) development at them when they're not giving you things to react to.
I could go on, but eventually I'd just be spewing parts of the book. I'll let it be and just finish by saying that this is one of the prime reasons to buy the book, even if it never gets run. As a manual of concrete "do this, then do this" GMing techniques, the book is solid gold.
Third and finally, the game manages to be both a set of rules tailored specifically to the Apocalypse World setting and a general rules engine for any kind of setting you might like to create. While the "how to run" rules are given to the GM and make up most of the book, the "how to be a player" rules are (nearly) all on the character sheet (usually a folded one called a "playbook").
Each playbook contains the basic Moves shared by all characters as well as the Moves specific to the class in the playbook. The Moves of a Chopper all involve dealing with your biker gang, enforcing authority, and raiding across the wastes. The Moves of a Hardholder let you detail a stronghold (that exists by player fiat) and exercise authority (or fail to) over your people. The other playbooks contain similar Moves that, together, create much of the setting.
In fact, part of what makes GMing Apocalypse World easy is that the setting is created for you by the players' choices of playbooks—you have a Hardholder, a Chopper, and an Angel? Then your campaign is going to be about the threats and opportunities for the hold, enforcing the "peace" and raiding with the biker gang, and all the blood and guts that the Angel is going to have to mop up and put back inside the people whom they value (and why). A game with a Brainer, a Driver, and a Battlebabe, on the other hand, is likely going to be about weird mind-control things (whether the PCs' or NPCs') and wrestling with the Psychic Maelstrom, staying always on the move while trying to keep the car fueled up, and getting into (and often winning) nasty fights.
But while the setting is expressed by the rules built into the playbooks, using AW as an engine for a completely different genre and setting is entirely possible just by writing your own playbooks; and writing a playbook is only slightly harder than playing from one, which is deliberately easy: a novice player can pick one up and just start playing the game cold. There is, in fact, an official Apocalypse World subforum devoted to setting hacks, and a D&D-inspired hack called Dungeon World is about to go to the printers (a lite version of which is available for download). Oh yeah, and Vincent Baker is very happy to have people run with his game engine in whatever direction they want, including a commercial venture, which is a good unofficial fourth virtue to wrap up this already lengthy answer with.
So in sum, there's a lot Apocalypse World has to offer in mind-expanding rules, both for a game of AW and for GMing and game-designing in general. And if you do actually run it instead of just absorbing the rules innovations, it's a lot of fun to play. I had a great time with my Hardholder character, and I've successfully run it cold-start, with no prep by me other than reading the book, and no previous exposure for the players until I dumped the playbooks on the table. It doesn't hurt that it's downright cheap for a complete game.
That's awesome. I loved Dogs in the Vineyard, and lite prep is the way I GM. This sounds like it's right up my alley and I think it'll rise to the top of my wish list. Thanks for the lengthy description! – T.W.Wombat Apr 28 '11 at 11:16
For novel rules I can't help but think of Penny for My Thoughts which is GM-less and uses pennies to indicate progress. The idea is that every player has amnesia and the story is how the different players help each other remember what caused their memory loss. It's designed to work literally as a pick up game where one person reads aloud the process as if an actual patient were following steps laid out by a doctor to aid their recovery.
The other I am thinking of is Dread, which uses Jenga as its mechanism for determining success. When you need to test an action, just pull another piece out of the tower. Bad things happen if the tower falls over. The website gives a better description than I could, plus has quick-start rules to get you going.
Jenga as resolution mechanic? I'm intrigued. Thanks for the pointers! – T.W.Wombat Apr 27 '11 at 17:40
There are 5 I'm going to recommend:
Og: Unearthed Edition
It's a great one-shot kind of game, but it's also suitable for recurrent play. It's not 'high art,' but is an almost elegant system, and a novel way to play.
The setting is essentially a typical newspaper-comic-style dinosaurs and cavemen... think Alley Oop... but even less wordy!
CORPS
"semi-diceless"... I've had big battles resolved with just a handful of dice rolls, and it's not TOO deterministic.
The system is a generic, two-pool point-generation, d10s-only universal system. The supplements (VDS and 3G3) are excellent.
Houses of the Blooded and Blood and Honor
Same system, two great settings. This is the one where you don't roll to succeed, but to see who decides if you succeed. Surprisingly fun.
Burning Empires
Burning Wheel adapted to run the Iron Empires setting from Chris Moeller's Graphic Novels.
PVP and PVGM, but still cooperative. Narrativist-Gamist. Play hard; the GM is limited by the same rules as the players. Absolutely not about house rules: it's a competition, and if the GM is changing the rules, he's actually cheating!
Also, my first encounter with the new GM Rule 0: Don't be a dick!
Plus, it has an excellent version of BW's various minigames.
Diaspora
Fate system, with character growth without skill growth. Really a good fun time: low prep, and the players create much of the setting in the first session.
Plus, lots of cool minigames.
I've looked at CORPS briefly, but I'll give it another shot. I actually have an old version of BTRC's Time Lords, which got pretty arcane bonus-wise but had some great setting ideas. I haven't been exposed to Houses of the Blooded, so I'll give that a shot as well. I've heard good things about Diaspora, so that's already on my list. And "Don't be a dick" is actually Wheaton's Law, but it does work well as a new corollary to Rule Zero. Thanks! – T.W.Wombat Apr 28 '11 at 11:08
CORPS bears little resemblance to SpaceTime (which is the same engine as TimeLords; I have ST but not TL). It's very streamlined, and does gritty fantasy and streetlevel supers fairly well. – aramis Apr 29 '11 at 7:05
Og: Unearthed Edition
This comedy game of anachronistic, bumbling cavemen uses limited vocabulary to hilarious effect.
I have played this game numerous times. Every time has been a hit - even with zero prep on anyone's part.
It's by Robin D. Laws, so you're getting novel mechanics from one of the very best.
This answer provides more details.
Nice. I missed Og when it came out. I'll give it a look. Thanks! – T.W.Wombat Apr 27 '11 at 17:39
3:16 Carnage Among the Stars is good for bare-bones space marines. It's kind of refreshing to find a system that plays so well with only two main stats: fighting and non-fighting. It's kind of tricky to run long-term, due to how things scale on the high end, but it's great for a one-shot or a couple of sessions with maybe 5-10 minutes of prep, for the players and GM combined.
Another I had success with was Swashbucklers of the 7 Skies, which uses the PDQ# rules. PDQ# stands for Prose Descriptive Qualities, with the "#" thrown in to signify added rules crunch. Characters have ranked "fortes," which grant a bonus to rolls if they apply. More than one can apply. Fortes are also the damage tracking mechanism, which means that when a character is hurt, he must choose which forte gets a penalty. This forte then generates a plot hook for the GM. It's been said that in PDQ#, you can "punch Spider-man in the girlfriend."
7 Skies is a complete and unique system and setting package, and has a pretty solid chunk of the book devoted to GMing techniques. It's also got a "Further Reading" section, which has a list of books, movies, and other such things for genre inspiration.
Every time I log in to our CentOS 5 MediaTemple (dv) server, I have to do:
ssh-agent $BASH
ssh-add .ssh/my_id
once I've managed to log in. This is used for our server to participate in our git workflow, and as we iterate quite a bit while in development, I'm growing tired of doing the same thing over and over again.
Adding this to the server's .bash_profile does not quite work, and my very limited understanding of ssh tells me it's because running ssh-agent $BASH pretty much just spawns a new login shell (and thus, it stops executing the next lines of the file...).
How do I get the server to have its key working every time I remotely log in to it?
2 Answers
Accepted answer (score 2)
The easiest thing to do is to enable agent forwarding in SSH so that it uses the agent on your local system instead of on the server. Failing that, you can do as it suggests in the man page and run eval $(ssh-agent -s).
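For reference, agent forwarding is switched on from the client side, not on the server; a minimal sketch (the host alias and hostname are placeholders, not taken from the question):

    # ~/.ssh/config on your local machine (not the server)
    Host dv
        HostName your.server.example.com
        ForwardAgent yes

    # or as a one-off, without editing the config file:
    ssh -A user@your.server.example.com

With the key loaded into your local agent (ssh-add ~/.ssh/my_id), git commands run on the server authenticate through the forwarded agent, so nothing needs to be started from .bash_profile at all. Only forward to servers you trust, since root on that server can make use of your forwarded agent.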
Great, so it would use the ssh-agent on my local machine, but present the key stored in the server when git asks for them? I'm not very familiar at all with SSH, so if you could tell me about some resources I could look into, I'd really appreciate it! – Rob Jul 15 '10 at 10:29
The key would be stored on the agent in your local machine once it is added. ibm.com/developerworks/linux/library/l-keyc3 – Ignacio Vazquez-Abrams Jul 15 '10 at 10:38
Yes, agent forwarding is the answer I was looking for. Thanks a lot! – Rob Jul 26 '10 at 19:21
Thanks! Just what I was looking for! – Daniel Elliott Apr 27 '11 at 9:01
This is copied from a script I made a few months ago for teams I work with and use in my .bashrc. It was compiled from a collection of ideas and tweaked to work on multiple operating systems/environments as we found incompatibilities. It checks for a running agent, and if necessary starts one (saving the data for other shells). It then checks if that agent has keys added, and if not adds them for 10hrs (designed for a workday) with usage confirmation. If desired, confirmation can be removed by removing the -c options to ssh-add.
SSH_ENV="$HOME/.ssh/environment"
function start_agent {
    echo "Initialising new SSH agent..."
    /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}"
    echo succeeded
    chmod 600 "${SSH_ENV}"
    . "${SSH_ENV}" > /dev/null
}

# Source SSH settings, if applicable
if [ -f "${SSH_ENV}" ]; then
    . "${SSH_ENV}" > /dev/null
    # ps ${SSH_AGENT_PID} doesn't work under cygwin
    ps -ef | grep ${SSH_AGENT_PID} | grep -q ssh-agent$ || start_agent
else
    start_agent
fi

# Add identities (with confirmation, for 10 hours) if the agent has none loaded
/usr/bin/ssh-add -l > /dev/null || {
    echo 'No ssh identities detected. Running "ssh-add -c -t 36000"...'
    /usr/bin/ssh-add -c -t 36000
}
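One note on the last step: with no arguments, ssh-add only loads the default identity files (id_rsa and friends), so if your key has a non-default name, such as the .ssh/my_id from the question, pass it explicitly:

    /usr/bin/ssh-add -c -t 36000 ~/.ssh/my_id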
I will try it out myself and report back. Thanks! – Rob Jul 15 '10 at 10:31
I have a CentOS box with 10 2TB drives & an LSI RAID controller, used as an NFS server.
I know I'm going to use RAID 1 to create 5TB of usable space. But in terms of performance, reliability & management, which is better, create a single 5TB array on the controller, or create 5 1TB arrays and use LVM to regroup them into one (or more) VGs.
I'm particularly interested in hearing why you would pick one approach or the other.
Strictly speaking, you can't make "a single 5TB array" using RAID1 from 10 physical 2TB disks. If you use 5 2TB disks to build a RAID1, the array will be only 2TB large, but survive up to 4 failing disks. To make "a single 5TB array" you'll have to use either RAID 0+1 or RAID 1+0. – earl Jul 21 '10 at 22:15
You're right, of course. Interestingly, the RAID software will let me create a thing it calls a 5TB RAID 1 array. I wonder what it actually is? – Jeff Leyser Jul 21 '10 at 22:32
4 Answers
Accepted answer (score 2)
If the controller will allow you to provision a 10-disk raid 10 (rather than 2 8-disk units with 2 disks left over) that would probably be the best bet. It's simple to manage, you get good write performance with battery backed cache and the RAID card does all the heavy lifting, monitoring, management. Just install the RAID card's agent in the OS so you can reconfigure and monitor status from within the OS and you should be set.
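For an LSI card that usually means MegaCLI (or the MegaRAID Storage Manager GUI). A few illustrative status checks, assuming the common /opt/MegaRAID/MegaCli install path and a single adapter; adjust the binary name and paths to your install:

    # Logical drive (array) state
    /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL
    # Physical drives: look for "Firmware state: Online"
    /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL
    # Battery backup unit health (needed for safe write-back caching)
    /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL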
Putting everything in the care of the RAID card makes the quality of the software on the card the most important factor. I have had RAID cards crash, causing the whole IO subsystem to "go away" and requiring a server reboot; I've even had instances of a card completely losing the array configuration, requiring either that it be carefully reconfigured from the console or that the whole thing be restored from backups. The chances that you, with your one server, would see any particular problem are low, but if you had hundreds or thousands of servers you would probably see these kinds of problems periodically. Maybe newer hardware is better; I haven't had these kinds of problems in a while.
On the other hand, it is possible and even probable that the IO scheduling in Linux is better than what's on the RAID card, so either presenting each disk individually or presenting them as 5 RAID 1 units and using LVM to stripe across them might give the best read performance. Battery-backed write cache is critical for good write performance, though, so I wouldn't suggest any configuration that doesn't have that feature. Even if you can present the disks as a JBOD and have battery-backed write cache enabled at the same time, there is additional management overhead and complexity to using Linux software RAID and smartd hardware monitoring. It's easy enough to get set up, but you need to work through the procedure for handling drive failures, including the boot drive. It's not as simple as popping out the disk with the yellow blinky light and replacing it. Extra complexity can create room for error.
So I recommend a 10-disk RAID 10 if your controller can do it, or 5 RAID 1s with LVM striping if it can't. If you test out your hardware and find that JBOD and Linux RAID work better, then use that, but you should specifically test for good random write performance across a large portion of the disk using something like sysbench, rather than just sequential reads using dd.
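A sketch of that kind of test with sysbench's fileio mode (sysbench 0.4-era syntax; the file size and thread count are arbitrary choices and should comfortably exceed the controller's cache):

    # Prepare test files
    sysbench --test=fileio --file-total-size=32G prepare
    # Random write workload, 16 threads, 5 minutes
    sysbench --test=fileio --file-total-size=32G --file-test-mode=rndwr \
             --max-time=300 --max-requests=0 --num-threads=16 run
    # Clean up the test files afterwards
    sysbench --test=fileio --file-total-size=32G cleanup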
The one thing I'd recommend checking with this approach is that it's actually RAID 10, and not 0+1. The only difference is if two drives fail -- in the case of RAID 0+1 any subsequent drive failures EXCEPT the other half of the failed mirror will fail the array, whereas in RAID 10 any other drive is fine to lost EXCEPT the other half of the failed mirror. The easiest way to test is to fail a drive, and then see pull out and plug back in other drives, noting failures of the entire array as you go. Make sense? I have seen commercial arrays that do RAID 0+1. – Jeff McJunkin Jul 21 '10 at 23:01
As an aside that card is very likely to not pass any SMART data to the OS. – Banis Jul 22 '10 at 1:40
That's actually R10, not R1 - and it's R10 I'd use, i.e. let the OS see all ten raw disks and manage it 100% in software; anything else is needlessly overcomplex.
The upside is you get to use the neat Linux RAID 10 implementation but the downside is that you lose the write performance benefit of battery backed cache. There is also an upside in that the Linux IO scheduler keeps separate queues for each disk so will work most efficiently when the number of disks isn't abstracted away but the downside is that management and monitoring of the array and hardware from the OS is more complex as its not abstracted away by the RAID controller. – mtinberg Jul 21 '10 at 22:31
But isn't that an argument against all hardware RAID? Why not do the RAID 10 at the controller level? – Jeff Leyser Jul 21 '10 at 22:33
Don't get me wrong, I do R10 in hardware all the time, but it does tie your array into one manufacturer. Use LVM and the array can move from controller to controller as needed. – Chopper3 Jul 21 '10 at 22:47
If you're stuck with 2TB LUNs due to 32-bittedness somewhere, I'd strongly lean towards making 5x 1TB RAID1 LUNs on the RAID card and throwing them into a volume-group to make one big 5TB hunk o' space. That way the card handles the write multiplication implicit in the RAID1 relationship, and you get 5TB of space.
If you can make LUNs larger than 2TB, I lean towards making that one big array on the RAID card. The strength of my lean depends A LOT on the capabilities of the RAID card in question. I don't know what it is, so I can't advise you. If I didn't trust it, I'd stick with the 5x 1TB RAID1 arrangement.
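If you do go the 5-LUN route, the LVM side is a few commands; a sketch with hypothetical device names (sdb through sdf standing in for the five RAID1 LUNs, volume names invented for illustration):

    pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
    vgcreate vg_nfs /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
    # Stripe across all five mirrors for throughput; drop -i/-I to simply concatenate
    lvcreate -i 5 -I 64 -l 100%FREE -n lv_export vg_nfs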
64bit OS, so LUN size is not a problem. "Trust" the RAID array in what sense? It's an LSI MegeRAID 8888ELP. – Jeff Leyser Jul 21 '10 at 22:12
It's not about the OS, the question was whether the RAID card is capable of presenting LUNs greater than 2TB; not all versions of the LSI can. For example I have a Dell PE2950 with LSI RAID and I have to have one 2TB LUN and one 1.5TB LUN that I concatenate with LVM because the controller can't present the full size of the RAID disk. – mtinberg Jul 21 '10 at 22:28
With a RAID card that fancy, you should be OK there. – sysadmin1138 Jul 21 '10 at 22:49
I'd suggest using the expensive raid controller to do the bulk of the raid work. LSI cards and the software they come with work quite nicely. When properly configured, they will send you email when interesting things happen to the array. Like when disks fail. There is nothing wrong with either of the two linux software raid options, but you've gone out and purchased a somewhat fancy raid card. Let it do the work.
Configure the disk array to expose one big device to Linux. If you would like to break up the final device into small volumes use lvm for that. One big physical volume, one big volume group and cut the volume group into whatever number of logical volumes you need.
A while ago I gave root a password so I could log in as root and get some stuff done. Now I want to disable root login to tighten security, since I'm going to be exposing my server to the internet. I've seen several ways of doing this (sudo passwd -l root, fiddling with /etc/shadow, and so on), but nowhere that says what the best/most sensible way of doing it is. I've done sudo passwd -l root but I've seen advice that says this can affect init scripts, and that it's not as secure as it looks since it still asks for a password if you try to log in, rather than flat out denying access. So what would be the way to achieve that?
EDIT: to clarify, this is for local login as root; I've already disabled remote login via SSH. Though trying to log in as root over SSH still prompts for root's password (which always fails). Is that bad?
Disabling local access for root has almost zero security benefit. A user with physical access can pwn your box in countless ways. – jscott Sep 4 '10 at 14:45
Point taken. No need to log in as root if you can just take the hard drives out. I'd still like to know how to return the root account to how it was before I changed it though, if only out of curiosity now. – Ben Hymers Sep 4 '10 at 15:36
see the update to my answer below. I think I now understand what you're asking. – jscott Sep 4 '10 at 15:52
7 Answers
Accepted answer:
It's debatable, to me, that disabling root is worth the potential issues. I have never tested a server configured in such a manner. My preference is to allow root local access only. If an attacker has physical access to your server, you can forget everything you've done to "secure" your install anyway.
Disable root ssh access by editing /etc/ssh/sshd_config to contain:
PermitRootLogin no
Fiddling with /etc/shadow or chsh -s /bin/false root can all be undone with a simple bootable CD/thumbdrive.
Update per your comment:
From help.ubuntu.com: "By default, the root account password is locked in Ubuntu". Please see the section "Re-disabling your root account" specifically. In order to reset the state of root's account to the install default, use the following command:
sudo usermod -p '!' root
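If you just want to confirm that the lock took effect, the stock shadow-utils passwd can report the account status; a quick check, assuming the standard Ubuntu tools, is:
sudo passwd -S root
# 'L' in the second field means the password is locked, 'P' means a usable password is set,
# 'NP' means no password; sudo passwd -l root produces a similar lock by prefixing the hash with '!'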
No system is secure when an attacker has physical access to that system . When you can edit /etc/shadow, what stops you from editing /etc/ssh/sshd_config? – SvW Sep 4 '10 at 14:33
@SvenW: Exactly. That is why I debate the usefulness, security-wise, of even bothering to "disable" root. Restrict root's access, yes. Disable the account, no. – jscott Sep 4 '10 at 14:38
Brilliant, thanks! Exactly what I wanted. – Ben Hymers Sep 4 '10 at 15:59
I assume you refer to remote login via ssh. Add the following line to /etc/ssh/sshd_config:
PermitRootLogin no
and then restart the ssh service
sudo service ssh restart
That should do the job and you can keep your root account as it is (or try to disable it anyway if you feel that is necessary).
Sorry, I should have said, this is for local logins. I've updated the question. – Ben Hymers Sep 4 '10 at 14:38
Replacing the encrypted password with a * in /etc/shadow (second field, after the first ':') is the best way, IMHO. Also, deactivate root login for ssh (this way it's simply impossible to login via ssh as root) and maybe restrict ssh to certificate logins, which is much more secure than password-based logins.
In most cases, SSH should be the only service accessible from the outside which potentially allows root login, so this door would be locked.
In order to further restrict this, you could install something like fail2ban, which bans IP addresses for a certain amount of time after a number of unsuccessful login attempts.
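If you do go the key-only route, the relevant /etc/ssh/sshd_config directives are sketched below (install and test your public key from a second session before restarting sshd, otherwise you can lock yourself out):
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no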
I've a feeling '*' is the same as '!', as per the accepted answer, so I'll vote this answer up too. – Ben Hymers Sep 4 '10 at 16:00
The main question has been answered several times, but the secondary has not. SSH still prompts for a password when you enter root, even after the account is disabled; this is a security feature. It will also trigger if you try to log in as lkjfiejlksji.
This is to prevent someone from testing a pile of usernames, to try and find out which are valid on your system. However, from a security standpoint, if you've disabled root over SSH, I'd also set up a bruteforce detection program (like fail2ban), and set it so that if someone even tries to log in as root, it blocks them from trying any additional attacks.
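A minimal fail2ban jail along those lines might look like this in /etc/fail2ban/jail.local (the jail name is "ssh" in older Debian/Ubuntu packages and "sshd" in newer ones, and the log path varies by distribution, so treat these values as an example):
[ssh]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
bantime = 3600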
Good answer, thanks! I've got fail2ban set up already, I'll configure it to block on the first attempt at logging in as root though, good advice. – Ben Hymers Sep 6 '10 at 8:44
Re: Security.
IMHO there is only so much you can do, security wise, short of unplugging the box, disconnecting it from the network, and welding it inside a 3" thick bullet-proof carbide-steel box.
Think of it this way - if folks can hack the Department of Defense, the CIA, the FBI, and Citibank - the rest of us mere mortals can't do much better.
Re: SSH security.
I not only forbid root access via ssh, I also set the "AllowUsers" parameter to my, and only my, username. This way nobody but my own user can log in via ssh. This may be redundant as in my own case, I only create ONE non-root user anyway.
Unfortunately, as others have said many times before, as soon as someone gets physical access to the box, all bets are OFF!
Certificate exchange for ssh login? Hmmmm. . . . sounds good. How do you do it?
Jim (JR)
JR et al,
Your AllowUsers led me to this https://help.ubuntu.com/community/SSH/OpenSSH/Configuring
sudo vi /etc/ssh/sshd_config
PermitRootLogin yes (changed to no)
(add line at bottom of file) DenyUsers user1 user2
save and exit and then
sudo service ssh restart
Solved my issue. Thanks to all.
If you want to disable local root login, you can try to modify /etc/passwd and replace /bin/bash by /bin/false. HOWEVER, since I haven't tested it, I would say leave a root session open on the side, test it, and if there is any weird side effect, change it back.
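A sketch of the same idea using the standard tools instead of hand-editing /etc/passwd (shell paths differ between systems, so check what exists on yours):
sudo chsh -s /bin/false root
# or, if you prefer the variant that prints a short notice on login attempts:
sudo usermod -s /usr/sbin/nologin root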
I am having a strange issue occur in the office:
A client has set up a VPN connection for some of our employees to connect to their network. I am able to set up and connect to the VPN on all of the desktop PCs in the building, but not on the laptops. I receive an 800 error code saying that the connection is unreachable or something like that. The odd thing is, all computers (PCs and laptops alike) are on the same domain, same subnet, etc. and are all connected via CAT5 cabling - none of the laptops are using the wireless connection. They are all using the same OS (Windows XP Professional), and the users in question all have administrative rights. I've tried disabling Windows Firewall and no change. All the computers also have Symantec Endpoint Protection installed. The only difference here is that all the desktops are managed by an EPP management server, while the laptops are unmanaged and handle updates on their own. However, I have also tried disabling the software altogether on the laptops, and still no luck.
The connection was setup via the "New Connection Wizard" in Windows XP. After setup, the only properties changed were the IPSec Settings. There is a pre-shared key used for authentication.
The error message comes after trying to connect and reads as follows:
"Error 800: Unable to establish the VPN connection. The VPN server may be unreachable or security parameters may not be configured properly for this connection."
Does anyone have any other suggestions or advice on something I might be overlooking??
Let's see some more information. Where does this error code come from and what exactly is the wording? What kind of VPN connection is it, and what is the subnet/IP configuration? – wolfgangsz May 20 '11 at 15:18
Okay, I added more information above. Hopefully that will help. – Randy Cleary May 20 '11 at 19:10
What version of SEP? Are you using Network Threat Protection on the laptops? What happens when you configure a laptop to get managed by a SEPM server (like the desktops)? – Cypher May 20 '11 at 21:33
Can you ping the vpn host from the laptops? (assuming the remote host accepts icmp echo) – Cypher May 20 '11 at 21:37
What type of VPN are you using (PPTP, L2TP, etc?) – devicenull May 22 '11 at 3:56
After a reboot of my Ubuntu server, the nfs shares are not accessible. I have to do sudo /etc/init.d/nfs-kernel-server restart and then clients can mount the nfs shares just fine. The service is started at boot, and I've tried adding this line to rc.local but I still have to actually log in and run the command manually before clients can connect. I only have to do this once after the server boots up, and then it works fine from then on.
Any idea why it is requiring this manual restart?
2 Answers
Check the permissions on the start up script.
The environment or $PATH might be different when the script is run during startup vs. when you run it by hand causing different results. – mtinberg Jun 29 '11 at 18:24
The startup script is the one provided by the Ubuntu apt repo, just fyi. @Chris - the perms are the same as all other start scripts: 755 owned by root.root. – James Jun 29 '11 at 19:52
@mtinberg - that would make sense. Any idea what part of the environment could be affecting it? – James Jun 29 '11 at 19:54
I would expect that the script does work or it would be broken for everybody so it must be something different about your local setup. – mtinberg Jun 29 '11 at 20:14
Accepted answer:
I figured it out. It is apparently a problem with name resolution. I was using dns names in my export definitions. When I changed the dns names to ip addresses, then it works fine after a restart. I would rather use the names, but I guess I'll live with using ip addresses for now since I don't expect them to change any time soon.
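For reference, an IP-based entry in /etc/exports looks something like the following (the path, client addresses, and options are only an example); re-export with sudo exportfs -ra after editing:
/srv/share,sync,no_subtree_check)
# a whole subnet also works and avoids per-host name lookups:
/srv/share,sync,no_subtree_check)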
So I created a new user, put in the normal details, said yes to create mailbox, then tried to share with someone. After an hour of battling cached mode etc etc, I thought to check the permissions of someone else.
Normally under Exchange Advanced > Mailbox Rights, you would have a bunch of items: administrator, anon logon, domain admins, everyone, mail ops, SELF, etc.
But this one only had SELF!
So my question is, why did it fail to add these permissions (I eventually fixed the problem by adding them in myself)?
3 Answers
Accepted answer:
The Exchange server only syncs with AD every few hours. The delay is tuneable. It might just be that it hadn't synced with AD yet, so the full permissions weren't there. You can bounce the server to force a sync, but that's a bit drastic.
I had to set the sync interval down to about 2 hours on the server that I looked after, to get around similar problems.
The mailbox has not actually been created yet, so inherited security privileges that apply to the mailbox don't show up.
To nudge this process along, get the user to log in to the mailbox or send an email message to it. The permissions will quickly fall into line with what you expect.
That wasn't the case. I sent 2 test messages, I could login and see them in OWA, but the only perms were SELF (and the one I added). – Joshua D'Alton Aug 19 '11 at 3:02
You (or the user) need to log in once to the mailbox for the permissions to be created. I am not sure if the OWA login works this way (never tried it), so maybe try via Outlook.
I'm setting up an environment for wordpress on apache2, on a fresh install of ubuntu 12.04.
In order to get friendly URLS working, I'm trying to set up mod_rewrite. I followed some instructions I found on the net, and used a2enmod.
Now, after restarting Apache, I'd like to check whether the module is actually loaded.
The command that I've found for getting a list of loaded modules is this:
apache2 -t -D DUMP_MODULES
However, this returns an error:
apache2: bad user name ${APACHE_RUN_USER}
So, how do I actually list all loaded modules, or otherwise check to see if mod_rewrite has been enabled?
5 Answers
KM01 meant apachectl but that will not give you what you need. That will control starting/stopping/restarting of the server, along with giving some status information. The php file option works, but requires some extra work on your part. Instead, try running php from the command line: $ php -i. This outputs what phpinfo() outputs, only on the command line.
You can get a list of compiled-in modules by running $ apache2 -l, but that doesn't help for viewing dynamically loaded modules using the LoadModule (or other) directives.
You can see what modules are being dynamically loaded by looking at the entries in /etc/apache2/mods-enabled/. Some have an additional conf file in the same directory for configuration. Those modules are NOT getting loaded twice. You can see a list of available modules to load dynamically by looking in /etc/apache2/mods-available/. You can enable them on the command line with $ a2enmod <module_name>. You can unload them with $ a2dismod <module_name>.
When you are done enabling/disabling, you must restart apache with $ service apache2 restart or $ apachectl graceful. You will need root (sudo) privileges to do most, if not all, of this work.
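On Debian/Ubuntu you can also avoid the "bad user name ${APACHE_RUN_USER}" error from the question, because the apache2ctl wrapper sources /etc/apache2/envvars before calling the binary; the grep pattern below is just an example:
sudo apache2ctl -M | grep rewrite
# or load the environment yourself and run the original command:
sudo sh -c '. /etc/apache2/envvars && apache2 -t -D DUMP_MODULES'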
1) Put <?php phpinfo(); ?> in a PHP file, save it, and open that file on the server.
2) Now you can see the full configuration listing; just search for the word "mod_rewrite" using the browser's search function.
3) If it is found under the "Loaded Modules" section then the module is already loaded; otherwise you need to enable the mod_rewrite module and restart Apache.
PHP info won't always show you whether or not it is enabled. Sorry!
However, this page over on Stack Overflow does get you pointed in the right direction.
Alternatively, here is some php to list them all out:
<?php foreach( apache_get_modules() as $module ) echo "$module<br />"; ?>
You're right, phpinfo (or via apache extension) solution works only if PHP is compiled/used as an Apache module. Not in *CGI, FPM, etc. Besides, mod_rewrite could be loaded (LoadModule) but rewriting could be denied, according to AllowOverride and/or Options. – julp Dec 11 '12 at 16:58
Instead of using apache2 command, do you have the apachectl command? It should be in the same location as apache2. Or you could execute the command with elevated privileges using sudo apache2 -t -D DUMP_MODULES
Just as @Richard explained, but what you actually need to do in order to achieve that is to have a file with the following contents: <?php phpinfo(); ?>
This prints out various information about your apache/php configuration. Other useful stuff that you will find there would be if imagick is installed or not. All loaded apache modules are there as well.
I am not that knowledgeable about electricity, so I'll try to put in as much info as possible.
I have a Rack Mount UPS of 1800 Watts Model: PRP 3050 RM (in total I have 2 of these - talking about one specifically)
I had to change its batteries today, and that led me to wonder whether it can support the hardware that is connected to it.
*I will add the specs I was able to find on each manufacturer's site with links to PDFs. Currently connected to it are:
• Netgear 1100 specs out of production
• QNAP U-859 RP+ specs *no amps indication
• Dell R610 spec
• Dell 1950 spec
• Dell 2850 spec
• Screen L1710S *minimal screen watts - < 0.5A
I have created on Dell's Power calculator ESSA, a map of the hardware I have:
Map of Rack Tower from Dell ESSA Site
• Total AMPS used is 5.8
• All three Dell's each have a redundant PSU
• according to Dell's site - that means that each PSU uses half of the said/needed Watt's.
• Each Dell is connected to 2 UPS's (both are PRP 3050 )
• My UPS's specs say it has an 8.5Ah battery - does that mean I can connect more appliances until I reach the MAX?
• What can I learn from this information that I have provided?
adding link to power calculator that was suggested by @amotzg http://www.jobsite-generators.com/power_calculators.html
This FAQ entry might help: superuser.com/questions/9946/… – amotzg Aug 5 '12 at 12:20
If you add up the Watts used by your hardware and the sum is larger than the watts provided by your UPS, that's usually not good. – Oliver Salzburg Aug 5 '12 at 12:24
Technically it depends on the specific UPS design, but I would not generally recommend it. Though you can work near its maximum and reduce consumption when triggered, to stretch the up time. – amotzg Aug 5 '12 at 12:37
Expanding on UPS design: there are passive UPSes, which detect power failure and quickly switch to battery-supplied power, and active UPSes, which always provide power from the batteries while continuously charging. – Hennes Aug 5 '12 at 13:03
On a different note: Can it support extra hardware today (with fresh batteries) or can it support it over a year or two (with batteries near replacement again). Plan for the last case. – Hennes Aug 5 '12 at 13:06
2 Answers
Accepted answer:
If you have 8.5 Ah and draw 8.5 amps, you can do it for one hour. Conversely, you can draw 5.8 amps for about 47% longer, or approximately 88 minutes (8.5 Ah / 5.8 A ≈ 1.47 hours). Really you should only try to draw 80% of your max rating. Batteries get HOT under 100% load.
I would be nervous about operating so close to the stated maximum wattage of the UPS's. You should upgrade as soon as possible to the next available size (wattage).
The formula for wattage is very simple. It is Volts (electrical pressure) times Amps (electrical current). So a 120 Volt, 5 Amp (maximum current draw) device would need a 600 Watt power supply.
What would all of this information provide you? Figure out the total in kilowatts, and then multiply that by the number of hours it's on per day (daily kilowatt-hours). Then multiply that by the cost per kilowatt-hour, and you now know what this rack of equipment costs you in electricity every day.
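As a rough worked example using the 5.8 A figure from the question, and assuming 120 V mains and an electricity rate of $0.10 per kWh (both of these are assumptions, so substitute your own numbers):
5.8 A × 120 V ≈ 700 W = 0.7 kW
0.7 kW × 24 h ≈ 16.7 kWh per day
16.7 kWh × $0.10 ≈ $1.67 per day, or roughly $50 per month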
First thing to do is to ignore the nameplate and actually measure the current draw. Get an electrician to do that if you don't have the gear for it. Make measurements at the heaviest load your servers normally experience. It's surprising how different (and generally lower) real-life figures are compared to the server specs.
Next ensure your calculated figure is no more than half of the UPS rating. While you could theoretically draw the full Ah rating of the battery for an hour, drawing anything more than half that figure will result in a drastic reduction of the battery life. Ideally even stay below one quarter of the rating.
Of course your UPS should also state the maximum current it can supply, so be sure to factor that in as well.
If you're using an on-line UPS also remember that the UPS has to be able to supply your expected startup load. Powering up a rack of servers (spinning up the hard drives and fans) often draws even more wattage than "peak load" for a machine that's running. You need to plan for this by either allowing extra UPS capacity or having a specific power-up sequence to avoid overloading the UPS. – voretaq7 Aug 10 '12 at 16:42
If you're not running your own dns server (or if you must run your own to do this assume you are) how would you measure how many DNS queries per second you get for your domain?
Is there anyway to tell if a web request came using your DNS server as the authoritative source or a cached result?
The idea is to be able to tell if you are adequately able to meet demand with your existing DNS server.
Your question is vague, which led me to post an answer that you didn't like. Next time be specific and tell us who hosts your DNS so we can give you an accurate answer. – joeqwerty Oct 1 '09 at 11:42
dlamblin, you're smacking people for their answers but in fairness you didn't specify who hosts your DNS and I offer this statement from your question as proof that your question is vague and hypothetical: "(or if you must run your own to do this assume you are)". So stop smacking people and downvoting them because you don't like their answers. Next time be specific and tell us who hosts your DNS and don't post a question that can be misunderstood or misinterpreted. – joeqwerty Oct 1 '09 at 12:34
Ok, either you are, or are not running your own DNS server. Which is it? If you are not, and do not have at least a trust relationship established with it, there is no way to tell. A similar question would be "What number am I thinking of?" .. you have no way of knowing. – Tim Post Oct 2 '09 at 2:48
@all; wow I never down-voted anyone here. So don't whine if someone else did. I think my question is clear, I am not currently running my own DNS server, but I am interested in measuring the number of requests. If I can only measure the number of requests by running my own, then I will. I understand that allows for two different answers. – dlamblin Oct 2 '09 at 18:42
4 Answers
Accepted answer:
Run a packet capture program on the DNS server, start a capture and filter for only DNS, run the capture for an hour, calculate from the data collected in the capture.
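A capture along those lines with tcpdump might look like this (the interface and file names are placeholders):
# capture DNS traffic for an hour, then stop with Ctrl-C
sudo tcpdump -i eth0 -w dns-sample.pcap port 53
# rough count of captured DNS packets; this includes responses as well as queries,
# so filter further or roughly halve it to estimate queries
tcpdump -r dns-sample.pcap | wc -l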
AFAIK, there's no way to know how a client resolved a DNS lookup unless you run a packet capture on the client.
It's not possible to capture packets if the server is run by my webhost is it? – dlamblin Oct 1 '09 at 4:44
He didn't say who was running the DNS for him. Look at his question again: "If you're not running your own dns server (or if you must run your own to do this assume you are)" So clearly he's not saying who is hosting his DNS and he's asking how to measure DNS queries and I told him how. – joeqwerty Oct 1 '09 at 11:36
Sorry dlamblin, didn't catch that you were the one who commented But to rephrase your question the way I understood it you asked "How do I do this if I don't run my own DNS, or do I need to run my own DNS to do this?" and I gave you a valid answer. So 1 downvote for you for being vague in your question and not specifying who was hosting your DNS. – joeqwerty Oct 1 '09 at 11:41
Short answer: In your situation you have no way of telling.
If you have DNS hosted for you I would suggest contacting the hosting provider to discuss the matter. It's pretty certain they won't be prepared to give you access to the logs or any other real information but they should at least be able to give you an indication of whether their system is coping well of not. I imagine their system is providing DNS for other customers as well as yourself, so if it wasn't keeping pace they would be getting complaints.
Finally, someone who actually read the question – Mark Henderson Oct 1 '09 at 5:30
If you do run the DNS server, I prefer joeqwerty's solution (a capture program like DSC) to tinkertim's solution (turn on query logging) because query logging is:
• DNS server software dependent
• slows down the name server
In any case, whether you run the DNS server or not, no, you cannot say what happened at the client side. If your name server receives a DNS request, you can be reasonably sure it means it was not in the cache at the other side. If you don't receive the request, you know nothing.
While this slows down the server (a tiny bit), you can simply turn on query logging. This produces a log of queries with timestamps; after that it's relatively easy to take averages over periods of time.
It's rather easy to do in most versions of BIND, not sure about others. I use the same thing to police some DNS servers where we have no choice but to allow almost wide-open recursion.
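In BIND 9, for instance, query logging can be toggled at run time with rndc, or configured permanently in named.conf; the file path and channel name below are only illustrative:
rndc querylog
logging {
    channel query_log {
        file "/var/log/named/queries.log";
        severity info;
    };
    category queries { query_log; };
};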
So you're assuming I must run my own server? – dlamblin Oct 1 '09 at 4:44
@dlamblin , your question was poorly worded and lead me to think that you did have control over your DNS server. – Tim Post Oct 2 '09 at 2:46
The very first words in his question are "If you're not running your own dns server" - what is ambiguous about that? – Mark Henderson Oct 2 '09 at 11:03
We have a production SQL server hosted offsite at a hosting company, and we have a staging environment within our own network. We want to be able to setup a SQL job that copies content from a table on the staging server to prod on a regular basis, and I think we need to setup a linked server connection to do this. What do I need to get the hosting company to do to allow us to set this up? We have RDP access to the production servers, I just need to know what network and security configurations need to happen from the hosting company's perspective so I can ask them to do it.
1 Answer
Accepted answer:
A linked server is not the best option.
• it opens the SQL Server for remote T-SQL execution, a very serious security hole
• it requires SQL password based authentication because of the different domains involved
• it does not offer any redundancy when faced with spotty connectivity
• TDS as a protocol is not designed for speed
A much better alternative is to use Service Broker:
• SSB operates on a dedicated port that does not allow arbitrary T-SQL commands, like a linked server
• SSB supports certificate based authentication across distinct domains
• Message fragmentation and delivery fairness ensures a smooth operation over bad/slow connections
• SSB uses a high throughput protocol designed for speed, the same protocol used in database mirroring.
If you insist on the linked server then you must:
• enable SQL Server to listen on the public internet addresses
• enable TCP on the server and open the SQL listening port (default TCP 1433) on the firewall. If the server listens on non-default ports, then you must start the SQL Browser service and open port 1434 UDP on the firewall, and allow sqlservr.exe to open arbitrary ports on the firewall.
• you must enable SQL Authentication to allow for SQL based password.
• To protect the traffic you should ensure SSL is used see How to enable SSL encryption for an instance of SQL Server by using Microsoft Management Console and How to Enable Channel Encryption.
• Check, re-check and double-recheck that the [sa] login has a bulletproof password that is known only by people that you have absolute 100% trust in. Your TDS port opened to the internet will be subject to a constant barrage of brute force attacks on [sa] from a million automated bots.
Technical Director at Fuel Industries in Ottawa, Ontario.
I am currently responsible for developing and maintaining highly scalable online gaming properties using PHP, ASP.NET and Java.
In my spare time I play with everything from NoSQL databases and Node.js to C++ and Ruby on Rails. I am a blog and social media addict.
Is it possible to show multiple display groups in SharePoint search instead of the scope dropdown? How do I customize the search results page? I need to change the results page so that if I click on a result link, the corresponding description is displayed at the bottom of the page. For that I need to know in which format the result will be returned.
can u explain what do you mean by result format? – Deepu Nair Feb 29 '12 at 18:00
Format in which xml data will be returned.. will it be same for all search results or it may vary for list, documents, etc.. – Sanker Mar 1 '12 at 4:28
1 Answer
You can add multiple display groups by going to Site Actions -> Site settings -> Search Scopes under Site collection settings.
Now, if you have added multiple display groups, you can configure each of those display groups for each of your search centers in your sites or subsites. Basically, if you edit the results.aspx page in the search center, you can edit the scopes drop-down web part, where you can see an option to enter the scope display group name. This will enable you to select any of the scope display groups that you have already created.
Can a single search page have three display groups displayed? – Sanker Mar 1 '12 at 4:32
A single search page just has one search drop down and each search drop down can accommodate only one display group. If you need multiple display groups, then you ought to have multiple search centers in each of your sub sites, where each search center will have one display group. – Deepu Nair Mar 1 '12 at 5:27
Now i have a scenario that i have to use search option in my page where there are many filters used like category, type, etc.. in a drop down box and also search box..when i enter search query and click search button my search results should be depend on these filters also..i thought of using scope display group for each filter..is there any other solution available where i want to use existing search option in sharepoint rather creating custom one.. – Sanker Mar 1 '12 at 5:50
Refer this link technet.microsoft.com/en-us/library/gg185660.aspx – Sanker Mar 2 '12 at 7:53
Here they stated it is possible to assign different display groups to single search page.. – Sanker Mar 2 '12 at 7:55
8 Multiple SICStus Runtimes in a Process
It is possible to have more than one SICStus runtime in a single process. These are completely independent (except that they dynamically load the same foreign resources; see Foreign Resources and Multiple SICStus Runtimes).
Even though the SICStus runtime can only be run in a single thread, it is now possible to start several SICStus runtimes, optionally each in its own thread.
SICStus runtimes are rather heavy weight and you should not expect to be able to run more than a handful.
• What is Hyperpigmentation?
The word hyperpigmentation refers to excess, undesired pigment in the skin. This applies to various skin conditions such as age spots, melasma, acne scarring, and postinflammatory hyperpigmentation (a darkening of the skin caused by irritation).
Most hyperpigmentation is caused or aggravated by sun exposure, but some is also caused by a pigment-producing reaction in the skin triggered by irritation. This type of hyperpigmentation, called post-inflammatory hyperpigmentation, is most common in darker skin types.
Treatments for Hyperpigmentation
To fade unwanted pigmentation, there are two main options: bleaching agents and exfoliants.
Bleaching agents are ingredients that suppress excess melanin production and distribution and are, in general, the most effective option.
Exfoliants fade pigmentation by increasing cell turnover and encouraging the shedding of pigmented or damaged skin cells. Exfoliants such as alpha hydroxy acids and retinol also have added anti-aging benefits and can bring faster results when used in combination with a bleaching treatment.
When treating any form of hyperpigmentation, sun protection is key. Always wear sunblock with broad-spectrum protection and with an SPF of 30 or more. |
Gates Tries to Explain .Net 613
Posted by michael
from the effing-the-ineffable dept.
AdamBa writes "Speaking to financial analysts and reporters, Bill Gates admitted that .NET hadn't caught on as quickly as he had hoped. The headline ('Gates admits .NET a "misstep"') is a bit misleading; he doesn't think all of .NET was a misstep, just the My Services part (aka Hailstorm). He also said that labelling the current generation of enterprise products as .NET might have been 'premature.' Summary: Microsoft got too excited about locking in users via Hailstorm and botched the overall .NET message." There's also a Reuters report and a NYTimes story on the same subject, which includes the interesting line: "Microsoft also warned today that the era of "open computing," the free exchange of digital information that has defined the personal computer industry, is ending." It isn't clear if Microsoft is talking about something happening beyond their control, or if they're boasting about ending it.
• by Telastyn (206146) on Thursday July 25, 2002 @11:04AM (#3951228)
Wouldn't that truly be one of the travesties of humanity? Ending the Information Revolution by returning to where we were before it... Let us just hope and act in such a way that this does not come to pass.
• by ultima (3696) on Thursday July 25, 2002 @11:04AM (#3951232)
Free exchange of digital information (like Open Source Software) which defined personal computing (GNU did quite a bit of defining with gcc, emacs, &c) is ending?
Sounds like FUD aimed at open source software -- particularly because he uses the term "open computing" :)
On another note, my personal experience of .NET is that it seems to revolve around Visual Basic style API, buzzwords, and commercialism. I was thinking this morning that it seems like companies no longer have any interest in providing developer tools to people who develop for the sake of developing, but rather tools for rather poor coders working for large profiteering companies. It's a shame because it would have been so nice if it wasn't such garbage.
• by Zone5 (179243) on Thursday July 25, 2002 @11:04AM (#3951237)
"The era of open computing is ending"
You bet your ass it's ending because they're ending it. If the universal pushing of Passport, .Net, and Palladium haven't convinced you yet, you need to do a little reading.
I am genuinely afraid of what personal computing will look like in ten years if Microsoft has their way, and I have never been too concerned in the past, so I am hardly an alarmist Microsoft conspiracy nut either.
• Marketing to blame (Score:4, Insightful)
by glh (14273) on Thursday July 25, 2002 @11:07AM (#3951256) Homepage Journal
I think the main problem with .NET is the marketing. .NET means something different to just about everyone. To me as a developer it means the new development tools (ASP.NET, VB.NET, C#, Web Services). I definitely don't think that was a misstep - it is 100x better than its predecessor (COM). However, I think branding Hailstorm and all the new versions of the enterprise servers as .NET was a mistake. MS was trying to put everything under the .NET umbrella, but since some of those products/concepts have failed (i.e. Hailstorm) it is now going to paint all things .NET in a negative light, especially to people who aren't totally familiar with it. I hope they learn the lesson. I can remember visiting the web site several times that talks about what .NET is, and seeing it change about every month :)
• by FatRatBastard (7583) on Thursday July 25, 2002 @11:09AM (#3951280) Homepage
... because this quote is dopey no matter who said it:
Jim Allchin, one of the company's top vice presidents, acknowledged the shift in focus in the industry from personal computers to plumbing, and bemoaned the difficulty of getting Microsoft's traditional consumers to care about its new vision.
Well gee, Jim, you have it a bit backwards, don't you. Shouldn't the company care about its customers' vision? I mean, if Porsche designed a kick ass lawnmower -- I mean an innovative leap in lawnmower technology -- would you expect Porsche's traditional customers to care about Porsche's new vision?
• by Rahga (13479) on Thursday July 25, 2002 @11:12AM (#3951301) Homepage Journal
That will happen when they pry the webserver out of my dead hands.
Seriously, what is going to happen? MSN will supply all the content for the world? I doubt it.
http://www.rahga.com forever, and I suggest you do the same.
• .NET (Score:5, Insightful)
by Twister002 (537605) on Thursday July 25, 2002 @11:12AM (#3951302) Homepage
I think when developers talk about .NET, we're talking about the .NET framework. Which does have many wonderful features and improvements to the languages (C#, VB.NET is a big improvement over VB 6.0), the ease of making web services. It's much easier to manipulate XML than in previous versions. In the developer community (at least the ones that make money by programming on the Windows platform) it is slowly gaining popularity and many web sites have converted over to ASP.NET.
When the general public thinks about .NET, I think they are referring to the nebulous cloud of "web services" that Microsoft has alluded to, "Hailstorm", ".NET My Services", etc... Those still seem to be up in the air and not many people see the need for them.
I don't think I'd pay Microsoft for a subscription to Word.NET when I can just keep using MS Word 2000 or OpenOffice 1.0, or AbiWord. I don't want to store my credit card info in my Passport (or liberty alliance or any other online identity service) account. Heck, I want the people in the checkout lane to ASK to see my ID when I hand them a credit card, I certainly don't want to hand over all the info that a thief needs to charge things to my credit card.
• free exchange? (Score:3, Insightful)
by bigpat (158134) on Thursday July 25, 2002 @11:14AM (#3951317)
Well, I think we should see the writing on the wall for this one. No large monopolistic corporation can make good enough money on a free (as in Paul Revere) internet, so they are trying to divvy it up with proprietary systems and protocols to impose artificial monopolies.
Big companies may be able to undercut the competition at first, but the total cost of ownership will hurt you in the end.
• by Zone5 (179243) on Thursday July 25, 2002 @11:16AM (#3951332)
I didn't say anything about ending Open Source. I said they're ending open computing. Two different things.
Open source is of course, freely available source code. Open computing is the basic interoperability and data exchange upon which we all rely to make things 'just work' together. Try just for a minute to tell me that MS wouldn't foreclose on any interoperability standard they could if it would result in increased sales of their products.
Open source isn't ending, and it never will. It's currently our best hope for keeping MS as honest as possible.
• by interiot (50685) on Thursday July 25, 2002 @11:17AM (#3951342) Homepage
• Gates also acknowledged that confusion still reigns about .NET's very definition.
Good -- they understand one problem. People can perhaps point to the CLR and associated libraries, but .NET has been touted as much more than that, especially to non-techies.
• On Wednesday, he hammered home a new definition: "software to connect information, people, systems and services."
Unfortunately, this definition doesn't help at all. Pretty much all internet-based software does this.
• by sam_handelman (519767) <skh2003@co l u m bia.edu> on Thursday July 25, 2002 @11:18AM (#3951351) Homepage Journal
Wherever "open computing" survives will become the dominant cultural force of the next century.
The United States is in a position to maintain cultural hegemony over the whole world - if we don't kill the free exchange of culture in order to make a quick buck.
If we do, I predict, within a couple of generations, that other parts of the world will have outpaced us. Killing open computing will destroy our best way-out of the recent doldrums in popular movies and music.
• by GreyPoopon (411036) <[email protected]> on Thursday July 25, 2002 @11:22AM (#3951385)
He could be speaking of the end of open source in the business sense.
Where in the article did it mention him indicating the end of Open Source? The warning statement was about the end of "Open Computing," and I believe he was referring to Digital Rights Management and other cryptographic technologies being built into the hardware and operating system. Personally, I find this concept MORE frightening than ending Open Source, but he's doing nothing more here than repeating what all of the big corporate conglomerates (RIAA, etc) have been trying to convince us of. Sad really. As much as I don't like Mr. Gates, I would have hoped that the geek in him wouldn't have caved so quickly.
• by croanon (567416) on Thursday July 25, 2002 @11:28AM (#3951427)
Then why I am seeing everyone is converting to Java in the last 2 years? No one is using .NET or planning to use it around. My firm tested it, tried to call some legacy activex controls and unmanaged C++ code, they of course rejected it after a biiiiiiig performance hit.
I know lots of developers who shifted to Java from MS platforms though. :)
.NET is new. Not tested, not trustable. Java existed 7 years ago. Why should I risk it? Why should I develop in .NET, just another VM based technology, but this time lock myself to Windows? I know that there will be other implementations of .NET, such as Mono on Linux, but those will not be cross platform compatible at all. Even they say it. One reason is that .NET's most important parts are not given to ECMA, such as WinForms and ADO.NET. Do not forget that. MS is still holding the patents.
etc. etc.
.NET my BUTT. I will never use it.
• by Jucius Maximus (229128) <zyrbmf5j4x&snkmail,com> on Thursday July 25, 2002 @11:31AM (#3951468) Homepage Journal
Gates indicated that the company's software Promised Land will be a new version of its Windows operating system code-named Longhorn, which is still at least two years off.
Don't we hear this story every few years, but with a different product's name? Before that it was Windows XP, and before that it was "Chicago/Windows 4.0/Win95" and before that it was DOS 6 and before that it was ...
According to MSFT, the 'Promised Land of Computing' has always been waiting for us in their home just over the next ridge.
• by will592 (551704) on Thursday July 25, 2002 @11:35AM (#3951501)
Thank God someone finally has something good to say about Java. I've been developing java based solutions for the past 3 years and I honestly don't see any reason for this .Net crap. Seems like more and more people are moving their server side code over to Java and not looking back. But all you hear is that Java is dead. Maybe no one is using java on the client but Java seems to be surging forward on the server. Chris
• by Vicegrip (82853) on Thursday July 25, 2002 @11:37AM (#3951515) Journal
There are two main potential .NET targets:
1. Companies who have not yet started to deploy solutions using J2EE or Java and are trying to decide which to use: Java or .NET
2. Companies who have a need for some software that is only as a .NET application.
I won't address issues involving getting companies to deploy the .NET environment to their PCs... Microsoft is most likely going to have to force people-- which may not be popular.
a1. If you already have a substantial investment in software written in anything but a .NET language, chances are you aren't very motivated to switch paradigms.
a1. Regardless of how you view .NET the fact is java has been here for quite a while and has a good following. I have yet to meet a serious java developer who has any interest in .NET
a1. Regardless of all the claims Microsoft makes about C#/.NET maturity, nobody in their right mind is going to bet the company on a new MS platform just because the pay-for-pundits say it's sexy. .NET has to earn the industry's trust-- not an easy hill to climb these days.
a2. There is little imperative to adopt something for which there are no major non-Microsoft commercial offerings.
a2. Either way, I suspect the difficult part of the sell for .NET is in convincing CEOs that they aren't further limiting their licensing choices and options in order to adopt something they just don't need-- at least not yet. The wait-and-see approach is a tried and true paradigm with respect to version 1.0 software from Microsoft.
Personally, I find it hard to get excited about something from a company whose major call to fame these days is the latest way it is reaming its customers.
• by Eric Damron (553630) on Thursday July 25, 2002 @11:43AM (#3951555)
It seems clear enough to me. Microsoft and the entertainment industry are in bed together. Both have something to gain from DRM.
The entertainment industry can stop music and movie pirating, take away our fair use rights and set the stage for a future market. That market being the sale of digital video and music which will be streamed directly to hardware. It is important to the entertainment industry that we are not allowed to record the digital data because once recorded we, as individuals, could illegally swap the files with others. Obviously, that would greatly reduce the incentive to pay again and again for the privilege of having the entertainment industry stream it to us. So say good-by to your fair use rights.
Microsoft has a lot to gain here also, on an entirely different front. They are fighting for their Corporate lives against a foe unlike any they have had to deal with before. Linux can not be made to go bankrupt, it cannot be sued into oblivion and it is steadily gaining popularity. How can Microsoft deal with this specter of doom? They must use any weapon available to them.
1. FUD. Yep, good ol' fear, uncertainty and doubt has always helped Microsoft in the past. It hasn't worked very well against Linux because their FUD has been too transparent. People just weren't buying it. They need a more complex strategy.
2. The Law. Make open source illegal. Hmmm... I'm sure they thought about that one... but how?
How about using FUD, a grain of truth to paint open source users as pirates, thieves and other assorted forms of lower life. Then join together with the entertainment industry to buy a senator like say.... SENATOR HOLLINGS FROM SC. And have him draft legislation that will ram DRM down our throats.
Once all hardware is DRM enabled, only the entertainment industry's bed partner will be allowed to receive digital data that will be streamed by this industry. Microsoft will do its part to ensure that as few applications as possible will be allowed to run on Linux and have access to this new market. Definitely not open source. Thus they prevent competition. Typical strategy for Microsoft. Being afraid of competition they don't go head to head unless they can ensure themselves an advantage.
• by FatRatBastard (7583) on Thursday July 25, 2002 @11:44AM (#3951566) Homepage
Unfortunately, this is how the IT industry works (or has worked). I guess all marketing departments do this to an extent, but IT is really the worst.
A. Promise the moon, to be delivered within two years
B. Spend 6 months talking about the Moon, but never really getting into details beyond buzzwords.
B2. If new and interesting technology comes along within those 6 months claim the Moon will contain it as well
C. Come out with alpha software (Moon v.1 Preview) that has little functionality built in but looks nice
D. Slip schedule ('We're adding new and exciting features')
E..Y Wait
Z. Deliver something that could quite possibly be useful and innovative, but delivers about 1/10th of the orig. promise.
• by Jord (547813) on Thursday July 25, 2002 @11:45AM (#3951576) Homepage
I would have to disagree that Java is dead on the client. I think it suffered a major stroke with AWT and then again with the first versions of Swing.
However with the release of 1.4, there have been vast improvements made on the client side (read GUI) that makes it much more viable as an option. The company I am currently with is designing an entire GUI with Swing and so far things have been very positive.
On the server side, however, Java is king. There are very few "single" technologies that can do as much as smoothly as Java does. Yes you can do everything that Java does with other technologies, but using a single technology, Java owns this arena currently.
.NET is new. People are suspicious of it. A large number of developers out there view it as a clone and say "why do we want it". .NET does give you less in the interoperability department (basically windows only) than J2EE does plus it still has to prove itself.
Give .net a couple more years. It will either get a foothold or die. Personally, I hope it dies.
• by jav1231 (539129) on Thursday July 25, 2002 @11:45AM (#3951583)
I think that MS may see this as an opportunity to garner control along with RIAA via things like the DMCA. MS has practically embraced the idea of more control over content and media. Legislation like the DMCA simply reinforces their further control of "innovations" as they call them. If things like proprietary encryption and the like come down the pike, MS will be the medium. The fact that this will further alienate the Open Source community is a huge bonus for them.
• by PanopticnetPrisoner (593699) <[email protected]> on Thursday July 25, 2002 @11:50AM (#3951606)
While open source is a subset of open computing, the two are in no way synonyms. The idea Microsoft is trying to convey is that business models are finally beginning to catch up to modern technology. Open computing could be taken to cover everything from internet access (where business models are already beginning to evolve from unlimited monthly access to capped transfer/bandwidth or pay-by-MB) to P2P file sharing systems (no explanation necessary). Personally, I still believe technologically open solutions are evolving faster than traditional business models, but certainly the industry is now actively aware of this open computing -- not "problem" -- but "opportunity" to make more money. (Or, after the latest string of quarterly losses, make ANY money). I've always found it interesting how gargantuan companies can lose millions (or billions) of dollars each year, yet the CEO's of said companies still manage to turn a profit of hundreds of millions of dollars and live in houses with six hot tubs and three pools (at least one indoor) and other such ludicrously excessive luxuries.
• by Rader (40041) on Thursday July 25, 2002 @11:55AM (#3951650) Homepage
"If I'm building a box, am I going ot include a Palladium component"
Well, that sounds good until a couple years from now where your video card is getting really doggy, and the CPU's that are available are 4 times faster than what you've got, and no one is using CD-r's anymore, and the 27GB blue disc DVD's are looking nice and cheap.
If Palladium passes and they enforce making the sale of non-Palladium hardware illegal... then all the companies will start making Palladium-compliant hardware. Sure, you can find hardware from the pre-Palladium days, but every year, those will seem so slow, it won't be worth it.
• by croanon (567416) on Thursday July 25, 2002 @11:56AM (#3951660)
Yes, Java! Because: - Java is cross platform compatible. .NET may never be 100% cross platform compatible, including the Mono project etc., since MS is holding patents on very important parts of .NET, such as WinForms and ADO.NET. They did not submit all the parts of .NET to ECMA. They kept the most important parts.
- Java was there 7 years ago! :) Think about it. Now it is mature and reliable. There are millions of Java programmers (and there will be 50% more demand in 2003 according to Gartner research), and thousands of open/closed, ready-to-use, mature programs, frameworks, and libraries written in Java. .NET is a newbie and needs at least 3 years to become reliable. During this time, Java will be much better.
- Java is working already. Its doing everything I need. Why should I change to .NET? :) There are many programs written in Java, basically working on many different platforms already.
- Performances of .NET and Java are not very different. Both are VM based. .NET might be faster than Java on Windows, especially in client applications, but, it is not very important, since CPUs are fast enough, and Java is getting better optimized with every release. In short, Java is fast enough.
- All the big companies other than MS, such as Sun, Oracle, Sybase, IBM, BEA, HP, Fujitsu, Nokia, Sony/Ericcson, JBoss, etc. already rolled their dice and chosen Java. They have many products based on Java. Why should they burn their investments and move to MS's .NET? Of course they won't.
- Java is not from the most unethical company in the history of mankind. Some people believe in ethics and don't use it. Such as me.
• by letxa2000 (215841) on Thursday July 25, 2002 @12:03PM (#3951715)
It's actually quite ingenious on the part of Gates. Admit that MS hasn't done as well with .NET as they would like. Everyone knows that to be true, but Gates' honesty shocks and surprises everyone.
In the next breath he mentions that not everything is going to be so open and free in the future. But since he just scored "honesty points" by admitting a less-than-great performance by his company, the general public automatically attaches a little more credibility to his comment about "open and free."
If Gates just comes out and spews FUD about open source, etc. it's just more of the same. If Gates makes an out-of-character negative critique about his company and THEN spews FUD about open source, it sounds like its part of a fit of honesty.
• by pmz (462998) on Thursday July 25, 2002 @12:16PM (#3951817) Homepage
Your taxpayer dollars are paying good money to port from one completely proprietary platform (2k/ASP) to another (ORACLE/SUN). The only difference? The latter costs more.
This is a bit trollish. Oracle on Sun offers tremendous flexibility, it can be extremely reliable, and it is much simpler to administer well. Conversely, I've seen Oracle on Windows NT, and it was an embarrassing travesty.
I really wish people who see only up-front costs would take off their blinders and have just a little insight into the future. UNIX, believe it or not, is still cheaper in the long-term than Windows, and going with non-Microsoft applications may actually reduce risk. Perhaps this is a good thing for the taxpayers?
Microsoft has been very successful at making people put all their eggs in one basket and at providing an operating system that requires what seems to be a one-to-one ratio between administrators and computers. Is this really what you want?
• nail on the head (Score:5, Insightful)
by mblase (200735) on Thursday July 25, 2002 @12:32PM (#3951928)
Shouldn't the company care about its customers' vision?
Some columnist recently pointed out that Apple achieved in one stroke everything MS is trying to achieve with .NET, by announcing iCal [apple.com] and iSync [apple.com] last week at MacWorld. Those two programs allow users of Mac OS X Jaguar to connect their PDAs, cell phones and desktop PIM software to a single database and publish them on the Internet, connect with the calendars of others, and resolve conflicts between the two.
In other words, while Microsoft spent two years talking about Web services and technologies, Apple quietly went about actually building them into a program its users will want to use. MS has been announcing and releasing software for other people to build these Web applications, but Apple decided to lead by example instead.
No doubt the next release of Windows will include similar features, and of course they'll be more widely used than Apple's. But just think what might be happening right now if Microsoft had spent as much time creating Web applications for Windows XP as they did promoting them.
If a person could synchronize their PocketPC to their MSN account and Outlook at the same time, then reconcile with all their coworkers' calendars and documents, without having to do anything more than press a button, Microsoft wouldn't need subscriptions to sell the next version of Office or Windows. Instead they settled for getting halfway there so that they could sell more copies of Exchange Server and keep PocketPCs as expensive as humanly possible.
• by rseuhs (322520) on Thursday July 25, 2002 @12:34PM (#3951949)
Face it:
People want open computing, otherwise we would all run Macs now.
In the last 2 weeks I've installed Linux for 2 friends, and yesterday I was called by another one who is no longer able to rip DVD movies with Windows XP after he did an online update. (Yes, he wants to try Linux too after this "experience".)
Pirated music, movies and software are what keeps the whole computer thing going at home. Or do you really think that granny is going to shell out $400 for MS Office to write 2 letters a month?
If you take that away, you immediately lock out the vast majority of home users, who will accept great pain and suffering to escape (and switching over to Linux is not as hard as it used to be. But even if it were, that would not matter, because a DRM computer would be useless for most home users.)
Palladium and universal DRM are just not going to happen in a free market.
Of course semi-democracies like the US might force it by law, but just like alcohol prohibition, it won't last very long and nobody will care about it anyway. (Actually, alcohol prohibition reduced alcohol consumption only in the first 2 years while the market adapted. Then, because of harder drinks (easier to smuggle) and more aggressive distribution (no more youth protection), alcohol consumption per head was much higher at the end of prohibition than at the start.)
Millions of users currently don't care about copyright, so why should they care whether DRM is mandatory or not?
• by ASeed (195654) <alberto.intersaint@org> on Thursday July 25, 2002 @12:48PM (#3952048) Homepage
".NET Signals an Industry Shift"
also referenced as the article about "Moore's Triple Crisis".
The author of the article (David Bau, who made the popular "Dave's Google Quicksearch Bar") writes about a three-way Moore's law crisis: crisis in systems, apps and development.
Systems: "the exponentially rising power of PC technology has started to overshoot the needs of the ordinary customer. This means people are starting to shop for cheaper computers instead of more powerful ones."
Development: "Moore's law crisis affects development costs just as dramatically as it affects hardware costs. As computing power gets cheaper and software becomes more ephemeral, it makes sense to save software development hours by wasting CPU cycles." The garbage collectors and intermediate languages of .NET and Java are in line with that. Scripting languages are too.
Applications: "Microsoft is facing the problem of saturation. The widely recognized issue here is that almost everybody who wants to do something with their computer software can already do it. Why would you buy a new version of Microsoft Word or Excel?" "Microsoft is facing competitors like America Online that are using a new model for software applications."
That's why Microsoft introduced its .NET services.
• by Animats (122034) on Thursday July 25, 2002 @01:08PM (#3952183) Homepage
The next big thing was supposed to be Applications Service Providers. Rent your key business apps. A hosting provider with a support staff would resell applications. Remember? Where are those guys now?
There are successes in that business, but Microsoft isn't one of them. PeopleSoft, Oracle, SAP, EDS, and Automatic Data Processing are the successful players. They're big, vertically integrated companies that build and service what they sell. They're not value-added resellers, and they don't usually work through value-added resellers.
Microsoft's model, that you download something, pay for it forever, and don't bother them much, isn't how it's done. The big service providers provide real service; they are in the business of outsourcing corporate support functions, not pushing software.
• by jpellino (202698) on Thursday July 25, 2002 @01:08PM (#3952184)
Nothing new. Bill Redux: I remember hearing of an episode from back when GEM and Windows were still battling it out, at a conference panel where Bill and Gary Kildall were both members. Gary was going on about OSs and how there'd be plenty of ways to run your computer. Bill grabbed a microphone and interrupted, with a clarification to the effect that "No, there will be one way to operate your computers. One. (uncomfortable silence) You may continue."
• by AmateurCoder (574449) on Thursday July 25, 2002 @03:08PM (#3952943)
. . . PHP is also an excellent alternative to ASP.
I read somewhere that PHP is the fastest growing scripting language on the web, and has already surpassed the popularity of the more mature ASP.
Excellent development tools available for Java make it a good choice for some bigger web projects, but the downside is the cost of setting up a server. Not too many people offer virtual hosting for Java. You pretty much need your own server with root access to set things up.
For smaller projects you can get a domain name, virtual host with PHP, and mySQL for about $20 US per month.
Of course you can design and test both technologies on your free OS, with your free web server, with your free database.
So why is anybody switching to .NET?
• by Eric Damron (553630) on Thursday July 25, 2002 @03:33PM (#3953101)
"Re:It seems clear to me... (Score:2)
by sheldon on Thursday July 25, @02:05PM (#3952551)
(User #2322 Info | http://www.sodablue.org/)
Microsoft's position on this is quite understandable. They aren't in bed together, but Microsoft feels that if they do not incorporate DRM into their applications and utilities, someone else will and that application will supplant Windows as a desired choice."
I'm not buying it. With all of the applications out there and over 90% of computers in the entire world running a Microsoft OS there is no OS poised to "supplant Windows as a desired choice."
In their recent FUD they claimed that the reason for their Palladium strategy is to protect customers from evil hackers and "un-trusted" code. Yet it will not do a thing to prevent the majority of attacks. This initiative is mostly about hurting open source for Microsoft, and about curtailing future P2P file swapping for the entertainment industry.
You bet Microsoft is in bed with the entertainment industry.
One more partner that I didn't mention in my previous post was the hardware manufacturers. To pull this off they have to play along as well. All of them need to exclusively sell DRM enabled hardware because if any of them are not on board with this scheme then people will have a choice. Given the choice of hardware that the entertainment industry and Microsoft controls or uncrippled hardware, you can guess what people will choose. So we must not be allowed a choice.
And just in case some of the hardware companies are reluctant to play along Microsoft and the entertainment industry have bought and paid for SENATOR HOLLINGS FROM SC. This is one corrupt SOB that needs to be removed from the equation. If you are from SC I would suggest voting the bastard out.
As far as my opinion being FUD, I think not. It is based far more on fact than on fear, uncertainty and doubt.
• by Vicegrip (82853) on Thursday July 25, 2002 @05:41PM (#3954307) Journal
".NET has nothing to do with COM. It exists as it is even if COM never existed."
Well that's pretty rich. I guess I was imagining all those GUIDs.
"Yes, just as you can't use a PHP function in Java. I'm not sure what your point is."
Not having to reinvent the wheel for a new paradigm was the point... you know.. reusing existing code... anyways..
"We had code in Beta2 that runs flawlessly on the 1.0 CLR less one minor exception (minor syntax change)."
I'm glad to hear Microsoft didn't redesign the CLR between beta2 and version 1.0 ... that must have been a big relief.
Working for a company that has the budget to redesign and re-code everything must be nice though. I'm glad not everyone is hurting in this economy.
• by Mr. Firewall (578517) on Thursday July 25, 2002 @06:51PM (#3954804) Homepage
...what, in your view, was Gates's motivation then in grasping the security nettle so publicly the other day...?
I don't know Mr. Gates personally so I can only guess based on what I was told, by someone who does know him, in a conversation that occurred last winter.
My friend said that Gates finally "got it" about two years ago as far as realizing that security is actually important, but still did not realize that security is something that must be designed into a technology from the very beginning. He described Mr. Gates as a visionary who likes to dream up new stuff and believed that security was something that could be added on to a technology later -- by low-level underlings. Kind of like believing that you could make the Corvair safe by simply adding air bags.
He also mentioned that BillG considered security to be more of a PR issue than a real one.
The "Trusted Computing" letter to which you refer is consistent with that view. Most of the letter is pure PR and most of the rest is consistent with a viewpoint that security can be obtained by simply having coders go back through source code looking for bugs.
I don't think Gates realized until just recently that he has literally built Windows on a very dangerous foundation (ActiveX, for one example) that CANNOT be made secure. I think that's what Palladium is about: yet another add-on by underlings (hardware designers, in this case) so that he does not have to admit that he made some very fatal errors several years ago when he designed the Win32 architecture.
Gates is a betting man -- he played a LOT of poker in his college days and usually won -- and it shows in the way he keeps "betting the farm" on his company's products and technologies. If the world ever figures out what he's done, he's going to lose it all.
So to answer your question, I THINK that he believes that he really is on the track to better security. I think he's starting to realize that it ain't really true, but I think he also believes that he can bluff his way out of this one just as he has no doubt done in countless poker games in the past.
It will be interesting to see whether that actually happens.
Comment: More disks (Score 1) 959
by DrHyde (#46466189) Attached to: How Do You Backup 20TB of Data?
I'm backing up 8TB at home, by rsyncing to another 8TB of disk space. It's been working reliably for years, starting back when a TB was a lot and adding/replacing disks over time.
A 4TB hard disk is pretty cheap these days, so he just needs to get six of 'em and make another RAID array. Once you've done the initial rsync, I presume that subsequent changes will be relatively small, so transfer speed doesn't matter much, so he could hang them off a USB port in one of those USB-to-SATA dock things.
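Something along these lines would do it; this is only a rough sketch, and the device names, filesystem and paths are placeholders you'd swap for your own:
# Build the second array from six new 4TB disks (placeholder device names --
# check yours with lsblk/fdisk before running anything destructive).
mdadm --create /dev/md1 --level=5 --raid-devices=6 /dev/sd[b-g]
mkfs.ext4 /dev/md1
mkdir -p /mnt/backup && mount /dev/md1 /mnt/backup
# The initial copy takes ages; after that only the changes move, so even a
# USB-to-SATA dock is fast enough for the nightly run.
rsync -aHS --delete /srv/data/ /mnt/backup/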
Comment: Re:What is "computer-directed flight control"? (Score 1) 353
It would have been an electrical (or possibly mechanical if they could make it light enough) analogue computer. Analogue fire control computers were common on naval ships from WW1 onwards, and used in bomb sights and anti-aircraft guns in WW2. I presume that it would just be a moderately complex negative feedback system.
Mind you, the pictures make it look like it wouldn't really have been a useful military plane. Too small to carry any significant load, guns, or fuel. It was designed as a racer, not a military plane, and while companies like Supermarine could apply lessons from racing to mass-produced military machines, they still had to design the military machines from scratch and not just do quick adaptations to existing designs. The Spitfire, for example, has its origins in a 1931 design, and had two substantial re-designs before finally entering service in 1938. Even if the Germans get their hands on this unfinished prototype in mid 1940 when France falls, there's not a chance that they'd get anything related to it into production until they're already getting their heads kicked in by the Red Army and RAF Bomber Command.
Comment: Not video games (Score 2) 669
by DrHyde (#46286037) Attached to: Ask Slashdot: What Games Are You Playing?
I play Go. With real people, face-to-face, on a wooden board. I'm not interested in big flashy video games, and haven't been since Doom.
There are a few interesting games on iOS devices. They're mostly good because the very limited user interface - you don't have eleventy million keys or joysticks - and the limited CPU grunt, storage and memory mean that game designers have to actually think about gameplay and come up with original ideas instead of just releasing yet another Doom clone with MOAR MEGGERPIKSELS. Harbour Master, Osmos, and Tower Bloxx are all a few years old but still great fun.
Comment: I don't care about upgradeability (Score 1) 477
by DrHyde (#45528725) Attached to: Ask Slashdot: Best Laptops For Fans Of Pre-Retina MacBook Pro?
I'm posting this on my Macbook Pro, made of Chinese slaves' retinas. I love it.
I've never felt the urge to perform brain surgery on any of the laptops I've owned over the years. I bought each one pretty much maxed out, and ran it for four or five years. The one thing that irritates me about my latest Macbook is that I can't carry a spare battery with me. But on the other hand, its battery life is very good and it's very rare for me to spend so long between charging opportunities that it's a problem. And the one time it was, well, it's a price worth paying. Other laptops - and I looked at many when deciding which to buy - all found worse ways to suck.
The Internet
Some Of Australia's Tubes Are About To Be Filtered 339
Posted by samzenpus
from the think-of-the-koala-children dept.
Slatterz writes "The first phase of Australia's controversial Internet filters were put in place today, with the Australian government announcing that six ISPs will take part in a six-week pilot. The plan reportedly includes a filter blocking a list of Government-blacklisted sites, and an optional adult content filter, and the government has said it hasn't ruled out the possibility of filtering BitTorrent traffic. The filters have been widely criticized by privacy groups and Internet users, and people have previously even taken to the streets to protest. While Christian groups support the plan, others say filters could slow down Internet speeds, that they don't work, and that the plan amounts to censorship of the Internet. At this stage the filters are only a pilot, and Australia's largest ISP, Telstra, is not taking part. But if the $125.8 million being spent by the Australian Government on cyber-safety is any indication, it's a sign of things to come."
Comment: Re:Let's cut the conspiracy theory (Score 1) 1589
by DrHyde (#26058021) Attached to: When Teachers Are Obstacles To Linux In Education
I hope you always charge this nincompoop for helping him. After all, if you help him for free, there has to be a catch, there has to be a law being broken. And I don't mean just helping him with his computer. Charge him for helping him trim his garden hedge, for helping clear snow from his drive, for keeping an eye on his house when he's away, ...
Comment: Re:Let's cut the conspiracy theory (Score 1) 1589
by DrHyde (#26057973) Attached to: When Teachers Are Obstacles To Linux In Education
The bit of the teacher's letter that is quoted doesn't even say that she's a member of a union, let alone that specific one.
Anyway, I know where *my* union's money comes from and what its goals are. The money comes from members, and a very little bit comes from adverts in our magazine.
Comment: Re:WTF? (Score 1) 117
by DrHyde (#26030881) Attached to: Free Resources for Windows Perl Development
I'm one of that group. And yes, my beard *is* going grey. I've not used Windows for over a decade. I have no idea how to set up and configure a current Windows to be secure and to have a reasonable development environment. Nor do I care to learn, as I have better things to do with my time, like making tasty booze and grumbling about The Youth Of Today with their ghetto blasters and hard core pornography.
But if this lets me test my code on Windows before releasing it, and spot and fix stupid errors, then that's a Big Win. I still won't bother fixing any major Windows-only bugs (like Adam says, it's too big a time investment from which I will get no benefit), but the vast majority of bugs are trivial little things and I *will* fix those.
KDE 4.1 Release Announcement
Submitted by Syde
Comment: Re:Global Warming - why?? (Score 1) 164
by Syde (#24077023) Attached to: Gentoo 2008.0 Released
I agree, speed is important, but the flexibility of Gentoo is amazing. I tend to refer to Gentoo as a tweaker's dream. I came from the days before we had useful package management systems in Linux... back when you had to compile all of your packages and dependencies yourself. (X and a window manager sure was a lot of fun to install back then!) So I remember the time of compiling everything and getting everything properly optimized for your hardware (and other software)... so for me Gentoo is kind of the best of both worlds, package management, but still doing it from source.
Open Source Economics and Why IBM Is Winning 146
Posted by kdawson
from the committed dept.
Comment: Re:Interesting... (Score 1) 115
by The Lerneaen Hydra (#31514058) Attached to: Firmware Hack Allows Video Analysis On a Canon Camera
Well, for what it's worth, the only thing the camera in the video seems to be doing is a threshold compare on a pixel-by-pixel basis, i.e., if a pixel changes from the previous frame by more than a certain amount, it gets highlighted, which is a pretty simple operation. Still, it's a cool proof of concept.
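If you want to play with the same idea on a desktop, a rough approximation with ImageMagick looks something like this (the frame file names are made up):
# Difference two consecutive frames, then threshold the result so only pixels
# that changed by more than ~10% end up white in the mask.
composite -compose difference frame1.png frame2.png diff.png
convert diff.png -colorspace Gray -threshold 10% mask.png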
Comment: LTspice & TINA-spice (Score 2, Informative) 211
by The Lerneaen Hydra (#28912799) Attached to: Cheap, Cross-Platform Electronic Circuit Simulation Software?
I've actually been in the same situation myself. Two free (as in beer) SPICE derivatives I've found to work well are LTspice and TINA-spice (from Linear Technology and Texas Instruments respectively). They are Windows binaries but function very well in WINE (in fact the developer(s) of LTspice have designed it to function as well as possible with WINE).
I've mostly used LTspice and it works very well and has a low learning threshold. Of course you can insert spice directives in the schematic to do more advanced functions like basic parameter sweeps as well as monte-carlo simulations and so on and so forth. Check out LTspice's yahoo group for a bunch of documentation.
As far as other recommendations for Eagle go, I doubt that's what you're looking for, as Eagle is solely for schematic capture and PCB design; there are no simulation capabilities in it.
Comment: Unlikely numbers (Score 2, Interesting) 516
by The Lerneaen Hydra (#26411145) Attached to: The Environmental Impact of Google Searches
I don't buy these numbers. Assuming the summary is correct and one search uses as much energy as boiling half a cup of water, then the total energy dissipated is E = m x c x dT.
Which for water gives (assuming 80 degrees of temperature difference and 75 g of water, or about half a small cup of tea) roughly 0.075 kg x 4186 J/(kg*K) x 80 K, or about 25 kJ.
A few google searches I just did took on average 0.2 seconds each, as reported by google.
Dividing roughly 25 kJ by 0.2 s would give a power draw of about 125 kW, for just running the services that handled my single request!
Now, I must say that I don't know a lot about how much power Google's servers draw, and of course running the search engine servers isn't the whole story; Google needs to update its database and do lots of other maintenance. All in all this strikes me as far too much.
Does anyone happen to have any real knowledge about this?
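For anyone who wants to redo the arithmetic themselves, here's the whole thing in bc (taking ~4186 J/(kg*K) as the specific heat of water):
echo "0.075 * 4186 * 80" | bc -l          # energy: ~25116 J
echo "0.075 * 4186 * 80 / 0.2" | bc -l    # power:  ~125580 W, i.e. ~125 kW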
Comment: Mint+Mate or CentOS (Score 1) 573
by doodleboy (#43266521) Attached to: Ask Slashdot: New To Linux; Which Distro?
By which I mean, a distro that runs Gnome2. I've been using Linux as my primary desktop OS since sometime in the late 90's and I actually work as a shell programmer. I am not interested in using some new UI that is designed to run on a tablet, or that is written by some cabal of out of touch developers for their own masturbatory purposes. I want something that is easy to install that I don't have to waste a lot of time dicking around with. I assume most other people who have lives feel the same way. My 2 cents:
CentOS: A clone of Red Hat Enterprise Linux. It is quite stable but does not have quite the same selection of packages as Ubuntu and its derivatives like Mint. Also, the software tends to lag a bit behind faster-churning distros like Ubuntu. But if you don't care about living on the bleeding edge, CentOS is for you.
Mint+Mate: An Ubuntu derivative that runs the Mate UI, which is a fork of Gnome2. I'm using it now on my home PC. It's fast enough for me and I have it set up so that it looks very similar to the way I had 10.04. So far I have had zero problems with it.
In short, if you want to be on the bleeding edge and don't mind a few bugs, get Mint+Mate. Otherwise, get CentOS.
Comment: The case for lower resolution (Score 1) 375
by doodleboy (#42906217) Attached to: Ask Slashdot: What Is Your Favorite Monitor For Programming?
Folks get all excited about having the highest possible resolution, but that is only part of the story. I have 2 x Samsung p2770fh 27" 1920x1080 monitors. They're discontinued now, but 2 years ago I paid $280 each at the local Costco. (I would suggest buying monitors locally so they can be returned if you get dead pixels.)
Anyway, about that resolution. I'm 48 years old and my eyeballs don't work as well as they used to. I have a smokin' work-issued laptop, a Lenovo w520. I love that I can run multiple VMs at once on the thing, but I find myself squinting at it because of the higher pixel density. But at home on the 27's everything is nice and big and easy to read, even if I'm leaned back in my chair.
Otherwise the screens are nice and bright and text is very easy to read. Video looks great. For less than $600 I am a happy camper.
Comment: Re:Atlas Shrugged (Score 1) 700
by doodleboy (#41644347) Attached to: Ask Slashdot: What Books Have Had a Significant Impact On Your Life?
I don't say this to be a smart ass, so please don't take it that way, but perhaps it was that simple to you because you read it when you were 16? Mind you, my youngest child is older than that and I spent half of my life overseas in the Army, so I am neither young nor naive. Give it another shot. You may be surprised.
I did read it again about 10 years ago, 20 years after the first go-round and after picking up a BA in philosophy and literature. It was a remarkably different experience from being a 16-year-old fanboy. The book is not very well constructed, and Galt's speech, practically a book in itself, was nearly impossible to get through.
If I was going to recommend any Rand book it would be The Fountainhead, because it gets the basic message across without all the interminable editorializing.
Comment: Re:Atlas Shrugged (Score 3, Interesting) 700
by doodleboy (#41639301) Attached to: Ask Slashdot: What Books Have Had a Significant Impact On Your Life?
Most of the people who criticize Atlas Shrugged haven't read it, even if they say they have. It's a great book. I second the recommendation!
I read Atlas Shrugged and to my knowledge all of Ayn Rand's other published works. In fact I thought she was the shiznit when I was 16. It all seemed so simple: these people over here are good, and those other people over there are evil. However, I have come to understand real life is a good deal more complex than that, and the binary distinctions favoured by ideologues like Rand in no way correspond with reality.
I have come to believe that any philosophy based on hate is fundamentally untenable.
Comment: rsync scripty goodness (Score 1) 304
by doodleboy (#39536693) Attached to: Ask Slashdot: It's World Backup Day; How Do You Back Up?
I haven't bothered with offsite backups. I don't need to because I live in Florida and it's not like we ever get hurricanes or anything like that.
I have a 3ware RAID card in my 10.04 box with 4 drives in RAID 5, as well as an eSATA drive. I export a TB of the RAID array and a TB from the eSATA drive via iSCSI to two 2k8 servers running in VirtualBox VMs. In the Windows VMs, DFS mirrors the data to the two mountpoints. I export those shares as a Z: drive which maps on login. I set up the free Microsoft SyncToy PowerToy to mirror the local My Documents directories to the Z: drive. When SyncToy is run, the data is backed up in two places.
I have another esata drive which mirrors my home partition every night. This is slightly complicated because I have a couple dozen virtual machines that could be running (it's usually less than 10), so what I wanted was a way to pause any VMs that might be running, back everything up, then unpause. Here's the script I wrote to do that.
#!/bin/bash
# nightly_backup: Script to pause any virtual machines that are running,
# do an rsync backup, then unpause the virtual machines. Set the SRCE
# and DEST variables below, as well as the USER variable. Script assumes
# that $DEST is a separate partition. If this is not the case for you,
# comment out the line _mount_check below.
# Sample cron entry:
# 30 04 * * * /usr/local/bin/nightly_backup &>>/var/log/nightly_backup.log
# Sample /etc/logrotate.d/nightly_backup file
# /var/log/nightly_backup.log {
#     monthly
#     missingok
#     rotate 4
#     compress
# }
# --exclude-from file syntax:
# Copy directory but not its contents:
#     + Cache/
#     - **/Cache/**
# Do not copy (file or directory)
#     - .gvfs
# $Id: nightly_backup,v 1.1 2011/12/03 19:23:15 doodleboy Exp kevin $

# Example values -- set these for your own system
SRCE="/home"
DEST="/media/backup"
USER="kevin"

ARGS="-aHS --delete --stats --exclude-from=/usr/local/bin/rsync_exclude"

# Function to pause or resume running virtual machines
_pause-resume() {
    ARG=$1
    VMS=$(su - $USER -c "vboxmanage --nologo list runningvms")
    if [ -n "$VMS" ]; then
        printf "$VMS\n" | while read VM; do
            VM=${VM%% \{*}
            printf "Running $ARG on $VM...\n"
            su - $USER -c "vboxmanage --nologo controlvm $VM $ARG"
        done
    else
        printf "No VMs are running.\n"
    fi
}

# Abort backup if $DEST partition is not mounted
_mount_check() {
    if mount | grep -w "$DEST" &>/dev/null; then
        printf "$DEST is mounted. Proceeding with backup.\n"
    else
        printf "$DEST is not mounted. Aborting backup.\n"
        printf "*** $(date): Aborting nightly backup ***\n\n"
        exit 1
    fi
}

# Start banner
printf "*** $(date): Starting nightly backup ***\n"

# Make sure $DEST is mounted
# Comment out _mount_check if $DEST is not a partition
_mount_check

# Pause virtual machines
_pause-resume pause

# Flush pending writes
sleep 3

# Do the backup
rsync $ARGS "$SRCE" "$DEST"

# Resume virtual machines
_pause-resume resume

# Exit banner
printf "*** $(date): Finished nightly backup ***\n\n"
I wrote another script to email me the status of my raid array every night. Admittedly this is only useful if you have a 4-drive 3ware card, but it could be adapted to other hardware. Here it is:
#!/bin/bash
# Mail a nightly status report for the 3ware array on controller /c4.
RAID=$(tw_cli /c4 show)
U0=$(echo "$RAID" | awk '/^u0/ {print $3}')
P0=$(echo "$RAID" | awk '/^p0/ {print $2}')
P1=$(echo "$RAID" | awk '/^p1/ {print $2}')
P2=$(echo "$RAID" | awk '/^p2/ {print $2}')
P3=$(echo "$RAID" | awk '/^p3/ {print $2}')
BB=$(echo "$RAID" | awk '/^bb/ {print $4}')
for status in "$U0" "$P0" "$P1" "$P2" "$P3" "$BB"; do
    if [ "$status" = "OK" ]; then
        SUBJECT="RAID Status OK"
    elif [ "$status" = "VERIFYING" ]; then
        SUBJECT="ISSUES with RAID Array!!!"
        break
    else
        # Any other state (DEGRADED, REBUILDING, ...) is also flagged
        SUBJECT="ISSUES with RAID Array!!!"
        break
    fi
done
# Mail the full controller output with the chosen subject line
cat <<EOF | mailx -s "$SUBJECT" [email protected] &
$RAID
EOF
Comment: Re:ltsp with fat clients (Score 1) 202
Well, you can PXE boot LTSP over wifi if you have a wireless bridge. It's not exactly reliable though, at least it wasn't when I tried it last year.
Where I work we have 300 remote locations running LTSP on lucid. One server at each location, perhaps as many as a dozen thin clients using PXE boot. We built our own update mechanism, where the LTSP servers rsync a directory tree that contains the updates. Anything new, they run the update. If an update fails for whatever reason they send an email back to hq. It's been working fairly well for us.
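The update puller on each branch server is nothing fancy; stripped down it's roughly the sketch below (host name, paths and the address are placeholders, and the real script does a lot more error handling):
#!/bin/bash
# Pull the update tree from HQ and run anything we haven't applied yet.
HQ="rsync://hq.example.com/ltsp-updates"     # placeholder host
CACHE="/var/cache/branch-updates"
APPLIED="$CACHE/.applied"

mkdir -p "$CACHE" && touch "$APPLIED"
rsync -a --delete --exclude=.applied "$HQ/" "$CACHE/"

for update in "$CACHE"/*.sh; do
    [ -e "$update" ] || continue
    grep -qx "$(basename "$update")" "$APPLIED" && continue
    if bash "$update"; then
        basename "$update" >> "$APPLIED"
    else
        echo "$(basename "$update") failed on $(hostname)" |
            mailx -s "LTSP update failure: $(hostname)" [email protected]
    fi
done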
LTSP enabled us to put a modern Linux desktop with Firefox, OO.org, etc., on every underpowered thin client that we own. This saved us from having to obsolete a big chunk of our infrastructure, probably a couple million in new hardware and depreciation costs.
We used a Clonezilla cluster to build the disk images. We wrote a config script that configured the base images (hostname, network, etc) for each location. It was a big effort but it went well.
Comment: Not Really Possible to go Paperless (Score 1) 311
by doodleboy (#39008863) Attached to: Ask Slashdot: How To Go Paperless At Home?
If it's more work to save a doc in a paperless format, or if it costs more, then it isn't practical and doesn't make a lot of sense. Also, if you are all digital and a little lazy about backups, you're only a disk crash away from disaster. I like having paper copies of important stuff.
I do print most everything double-sided. This alone will save a huge amount of paper. Duplex printers aren't nearly as expensive as they used to be. I have a samsung clp-620nd, a networked color duplex laser printer. It's fantastic for the money (about $300), but I'm sure there are others out there that would work just as well.
If I do need to scan, I have a cheap HP j4550 multifunction inkjet. I never bothered buying new ink for it, but I do use the scanner. Normally I'll import into SimpleScan and output to PDF. SimpleScan works surprisingly well. I also print to PDF for receipts and the like if I want to keep a digital copy. If it's important I'll also print a copy and put it in the file cabinet.
My thought on scanning vs. printing is: if it's important, then do both. Don't keep anything that matters in just one place.
Comment: Been there (Score 1) 315
by doodleboy (#38405828) Attached to: Ask Slashdot: Good Metrics For a Small IT Team?
We had a new IT director show up a few years ago that came around to talk to everyone about their hopes and dreams and all the rest of it. Because he cared about us as people. Shortly after that the IT department shrunk by a third.
It's Friday. I took the night off. I will be VPNing in tomorrow to do a bunch of stuff. I have to go in on Sunday to do a bunch of other stuff I can't do remotely.
Fuck this shit.
Comment: Re:I know this isn't what you asked but... (Score 2) 320
by doodleboy (#37904030) Attached to: Which OSS Clustered Filesystem Should I Use?
I also have a 3ware card and four 1 TB drives in RAID 5 in my 10.04 desktop PC at home. Some of that space is exported via iSCSI to a couple of Windows boxes. Then I back the RAID array up with a couple of external SATA drives. My wife thinks this is excessive, but I lost a lot of data once; nothing critical, but stuff I cared about: emails and papers from college, pics of friends and family, etc. But when the drive started throwing SMART errors I thought, yup, better go pick up a new drive soon... 3 days later, it was dead.
The irony is that one of my main responsibilities at work is backups, mostly with shell scripts I wrote myself.
Many of you probably have most of your important stuff on one drive that you don't back up. At the very least, pick up an external USB drive and schedule backups for anything you care about.
Comment: Re:Tape can be unreliable (Score 1) 611
by doodleboy (#28749987) Attached to: Best Home Backup Strategy Now?
Since when is tape unreliable?
It sure can be, especially the lower end stuff like Travan. Where I work we have over 300 remote sites, which used to have TR-5 tapes and drives that failed continuously. We replaced all of them with a local rsync to a different partition with snapshots going back a week, along with a remote rsync to a bank of servers with snapshots going back a month. We had to shell out some cash for the backup servers and some dev time for the scripts, but the savings from not buying tapes paid for them fairly quickly. The local rsyncs take the place of tapes, while the remotes provide secure off-site storage. We have been able to rebuild branch office servers using data off the backup servers with no data loss and minimal downtime. Hard drives are cheap, fast and reliable. I honestly don't understand the appeal of tapes.
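The snapshot side of it is less clever than it sounds; the guts of it look something like this (paths, host and retention are placeholders, not our production script):
#!/bin/bash
# Local tier: mirror to the second partition, then freeze today's state as a
# hard-linked snapshot (unchanged files cost no extra space), keep a week.
rsync -aHS --delete /data/ /backup/current/
cp -al /backup/current "/backup/$(date +%F)"
find /backup -maxdepth 1 -type d -name '20*' -mtime +7 -exec rm -rf {} +

# Remote tier: push the same tree to the backup bank over ssh; the servers
# at the other end keep their own month of snapshots the same way.
rsync -aHSe ssh --delete /data/ backuphost:/branches/$(hostname)/current/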
Comment: Re:The right tools for the job (Score 1) 421
by doodleboy (#28466867) Attached to: How Do You Sync & Manage Your Home Directories?
At work we're starting to install Ubuntu 9.04 to dualboot with XP on upper management's laptops. Ubuntu is pretty slick these days, but there is the problem of syncing files across both operating systems. We've been kicking around the idea of using a fat32 partition to keep files on, but that sucks on many levels. Reading your post, it occurs to me that unison will do exactly what we need. I knew I came here for a reason.
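For anyone curious, the unison side is about this much work; the mount point and user name below are just guesses at a typical dual-boot layout:
# ~/.unison/winxp.prf -- two-way sync between a Linux directory and the XP
# "My Documents" folder on the NTFS partition (mounted read/write via ntfs-3g)
root = /home/jsmith/Documents
root = /mnt/winxp/Documents and Settings/jsmith/My Documents
auto = true
times = true
Then it's just "unison winxp" whenever you're booted into Linux.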
Comment: Re:Moving parts are the main problem (Score 5, Informative) 655
by doodleboy (#27474127) Attached to: How Do I Provide a Workstation To Last 15 Years?
My full solution would be a fanless rig, with RAID 1 for full redundancy of disks so if a hard disk fails, it doesn't take your data with it, and weekly backups to DAT tape stored off-site. Then I'd use a pair of power supplies, using a diode to prevent power from one from getting into the other, and a zener diode or 78 series linear regulators to ensure a failing supply can't overpower any one line. Then, from my little power circuit, the two power supplies would feed the one motherboard, which would be underclocked at reduced voltage. It would have the highest possible amount of RAM in it, because that would reduce the writes to the hard drives.
On the software side, I would consider hosting the DOS app on linux using an emulator such as dosemu or dosbox. The OP's dad would have an environment very similar to what he's using now. I would probably use Debian stable for both boxes, which has very long release cycles and is very stable.
With linux comes the option to replace the DAT tapes with an off-site rsync over ssh. If the main box dies, you'd be able to just swap in the backup box in a couple of minutes. If the data set isn't very large the mirror will complete in a couple of seconds. It's very easy to do:
Create a RSA public/private key pair: ssh-keygen -t rsa, press enter at the password prompts.
Copy the public key to the remote box: ssh-copy-id -i ~/.ssh/id_rsa.pub remotebox.
Have a nightly cron job to push the files: rsync -ave ssh --delete /localfiles/ remotebox:/localfiles.
For bonus points you could even throw in snapshots.
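For example (just a sketch of the idea, not an exact cron job): keep dated directories on the remote box and let rsync hard-link anything unchanged against yesterday's snapshot, so each day only costs the changed files.
# First run will warn that the link-dest directory doesn't exist yet and just
# copy everything; every run after that only stores what changed.
rsync -aHe ssh --delete \
    --link-dest="../$(date -d yesterday +%F)" \
    /localfiles/ remotebox:/snapshots/"$(date +%F)"/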
I'm backing up hundreds of partitions this way at work, each with snapshots going back a month. Tapes are slow, unreliable and expensive. I would not use them for any purpose.
Comment: Re:a good friend (Score 1) 189
by garett_spencley (#44597249) Attached to: Ask Slashdot: Experiences Working At a High-Profile Game Studio?
I see both sides.
If I witness a good friend who I think is about to walk off of a cliff (metaphorically or literally) then I usually say something, but it has to be pretty serious because otherwise I don't think it is my place. I'm sure the person posting the question to /. does think this is serious, as he is worried his friend is turning down a lot of money and an excellent opportunity for something that won't pan out. Yet at the same time, if this person wants to go work for a game studio, that kind of implies "that is my dream" to me. And the type of friend who would discourage someone's dream is the antithesis of someone I would want to call a "friend."
That's the question that needs to be answered. If the programmer / student wants to work for a game studio because he thinks the game industry is growing and it's his best bet at securing a solid financial future there is no harm in presenting facts to the contrary. But if, on the other hand, he is akin to a musician who dreams of becoming a rock star and is perfectly content pumping gas and sleeping on couches trying to carve out a career and has no ambitions of getting married, starting a family and having a comfortable middle class retirement then it is a very big asshole move to step into the role of father and say "No don't do that because there's no money in it." He's probably already had to put up with enough of that bullshit from his parents already. What he needs from his friends is support in that case.
Comment: Free market (Score 4, Insightful) 238
by garett_spencley (#44456969) Attached to: How Did My Stratosphere Ever Get Shipped?
1) No one, not even the most "hard core" fiscal conservatives / libertarians, claims the free market is "infallible." The free market is individual human beings making individual economic decisions without coercive interference from others. Human beings are fallible, thus the free market is "fallible."
2) 3rd party reviews = free market. What is not free market is when government creates oversight organizations / watchdogs through taxation and uses them to enforce laws and regulations. Examples are the FCC, FDA etc.
3) As imperfect as it may be, at least when a company releases a major catastrophe of a buggy product they get penalized with support and replacement costs, bad PR and a market that will think long and hard before buying another product from that company.
4) There is nothing stopping anyone from implementing your suggestion for creating better cellphone reviews. That's the beauty of the free market. The fact that no one has done it (as far as we know) does not hint at the free market's imperfections; it means there is a business opportunity waiting to make someone some money.
Comment: Re:Technical illiteracy among politicians (Score 3) 266
by garett_spencley (#44372989) Attached to: British Porn-Censoring MP Has Website Defaced With Porn
I've tried to follow this discussion. Let me see if I've gotten it right.
First, someone says that studies have found that pornography causes no psychological or ill effects.
You respond with information to the contrary.
You get a response which basically said "So what? Some people get addicted, they also get addicted to TV and other things, and I take objection to your 'nothing good comes out of it'" and he makes some points.
Now you're saying "OK well I guess some good comes out of it but wouldn't you agree that there's more negative than good?"
My response to that is: "Citation needed."
I am really interested in why you are "anti-porn." Why are you fixated on whether it's largely good or bad? We're talking about something that people do in private for themselves. Now, I don't tend to look at things in collectivist ways, meaning I don't judge something based on its "contributions to society," because I am an individualist. However, I can make the argument that "society" is a collection of individuals, and so if porn provides some sort of positive service to any individual then I would judge it to be good. And if you're looking at things in terms of "how many individuals does it hurt vs. how many individuals does it help," then given that the overwhelming majority of people who consume porn do not get addicted (and we can be very broad with the word "addicted" by defining addiction as "when the activity begins to interfere with your day-to-day life"), even looking at it in collectivist / "greater good" terms you would have to conclude that it actually does more good than harm.
Comment: Re:What else did you expect? (Score 5, Interesting) 387
by garett_spencley (#44252629) Attached to: Steve Ballmer Reorganizing Microsoft
Ballmer might be a horrible CEO (I don't really care enough to know), but you would think a CEO should have some idea of what parts of the company are "important", and "important" should not be a matter of opinion, but of objective profit measurement.
Books have been written about why companies that focus do better than companies that try to get their hands into everything. PepsiCo owns everything from Frito-Lay to KFC to East Side Mario's restaurants, but Coca-Cola and McDonald's each have PepsiCo beat in terms of net asset value, despite each corporation focusing tightly on only beverages or a single fast-food chain.
It's not against anyone's best interests for Microsoft to cut the fat and sell off divisions and brands that aren't integral to its core focus. What the core focus is, if it has one, I don't know. My guess is it should probably be Windows and related products like Office. Xbox should at the very least drop the Microsoft brand and be treated as a separate company, if not actually spun off into a completely separate company. There's really no reason not to. The shareholders can spin off divisions or brands held by Microsoft Corp. into completely new companies and still retain ownership in those new companies. They would just elect new presidents for those new corps, hire new executive teams (preferably by promoting experts within those divisions who know what they're doing), and let them be run as tightly focused companies that don't need to compete for capital and resources with all of the other divisions under the currently bloated umbrella corp that is Microsoft. The shareholders continue to profit from their holdings as long as the new company is profitable, and the employees working in those divisions benefit from working for a company that is dedicated solely to achieving the success of the products they actually work on, rather than being treated as "unimportant" compared to the other divisions (i.e., no more infighting). As long as there is any hope for those products, they stand to do much better as stand-alone companies.
Another reason defocused companies are at a disadvantage is that they often need to sell to their competitors. Pepsi actually outsells Coca-Cola in supermarkets, but in restaurants Coca-Cola destroys them, and as a result Coca-Cola wins in terms of net profits. The reason is that McDonald's and others don't want to buy from PepsiCo when PepsiCo owns Taco Bell, KFC and other competitors.
Comment: Re:Thou hast angered thy King (Score 1) 260
by garett_spencley (#44076541) Attached to: China Says Serious Polluters Will Get the Death Penalty
Reparations can be made to a living person, and dead people cannot exercise habeas corpus.
People screw up when dealing with each other; it's a fact of life. One can even argue that that's the entire justification for government existing in the first place: to provide a means for resolving interpersonal conflicts. The government, being human, has the capacity for error. And so governments need to be held accountable for their actions, which is the point of checks and balances. Those checks and balances become meaningless in death. A person who was falsely executed for a crime he/she did not commit has no access to habeas corpus and cannot hold the government accountable for the mistake that ended their life.
Morally, a person who murders another deserves to die. But under that same principle it is better to sentence 9 murderers to life in prison than it is to execute 1 innocent person, as it only takes that one false execution to turn the state itself into a murderer by definition.
Comment: Re:Of course (Score 1) 260
by garett_spencley (#43798045) Attached to: Ask Slashdot: Can Yahoo Actually Stage a Comeback?
I'm going to back up your "So What?" with another point of view.
There is a principle that traditional "big business" has long understood, but that the big Internet corps like Yahoo and Google have yet to "get": the less you focus, the worse a job you will do.
Corporations like Procter & Gamble have solved the problem with heavy branding: Tide, Bounty, Charmin, Crest, Oral-B, etc., etc.
Each brand exists as if it's a complete and separate company. While I doubt there's many people who haven't heard the name "Procter & Gamble", most people use their products without realizing that they're using a P&G product. Some P&G brands might even compete against each other.
There is no reason that Yahoo needs to "glue" its products into some sort of "Yahoo identity." In fact, if the Yahoo! "brand" is dying, they could opt to kill it off entirely and go the branding route. Keep Flickr as "Flickr" and Tumblr as "Tumblr." They're solid brands unto themselves. I think that even gives Yahoo an edge, because people psychologically become more likely to use something that stands on its own rather than gets package-dealt with something else. For example, psychologically people tend to think "If Yahoo Search sucks then Yahoo sucks and so 'Yahoo Flickr' must suck too." Keep the branding separate. Flickr = Flickr, Tumblr = Tumblr, and then only people who are really passionate about their reasons for liking or disliking specific corporations will care that Flickr and Tumblr just happen to be owned by Yahoo.
Google has a really good thing going with Youtube, as a brand, and should *not* try and integrate it with the Google name in any way. Notice how many steps in that direction have resulted in negative blow-back. Like trying to force people to use their real names for comments, and link their Youtube accounts with a Google account. I used to have a registered Youtube account, I don't anymore because of that. Gmail was a success story, and in some ways it might qualify as a brand unique from "Google", but people think of "Google" as a search engine. They'd be better off keeping it that way. Blogspot should stay Blogspot, Chrome should stay Chrome. There's no reason not to drop the "Google" name from each of those brands entirely and let them stand on their own. While this is pure conjecture, I kind of suspect that Google Plus may have had a slightly better chance of succeeding as a Facebook killer if they had done a better job with branding, and not associated it with Google. It should have focused entirely on what separates it from Facebook and makes it *unique and compelling* instead of "Hey Google has one too!" ... the appropriate response to that was "so what?"
Apple is a total anomaly in the world of branding. They've created an "Apple identity" and their individual brands have been able to benefit from that. But it also puts their individual brands in potential jeopardy, because if the Apple brand takes a hit it's more likely to trickle down to their individual products.
Yahoo could be very successful as a holding company with many unique brands that each focus on their own individual "identity." They don't need to integrate a thing or attach the Yahoo name to any of them. Just let each product shine on it's own.
Comment: Re:The definition of PC (Score 2) 184
by garett_spencley (#43239127) Attached to: Apple Yanks "Sweatshop Themed" Game From App Store
It is exactly, every bit, a "straw man" argument, because not one single person is making any of the claims that you are saying they make. You are building up an argument for the sake of tearing it down. That is a "straw man" by definition.
Your post shows a complete lack of having even read my paragraph, which clearly stated that to many abortion is about preventing a murder, and has nothing to do with "wanting the woman to do anything." To them it is about preventing a wrong, not enforcing a particular behaviour or forcing a woman to do anything. And once again, that is THEIR position, not mine. I am probably more "pro choice" than most on the pro-choice side.
Comment: Re:The definition of PC (Score 4, Insightful) 184
by garett_spencley (#43236219) Attached to: Apple Yanks "Sweatshop Themed" Game From App Store
Any time you have a movement or an ideology that affects people who don't share that ideology you see outrage. That outrage often comes with straw-man tactics used in discourse.
I can think of many examples of so-called "right wing" or "conservative" ideologies that are on the receiving end. The "pro-life" movement is one example. To most who are "pro-life," the issue is that life begins at conception, and so an abortion is literally murder. But many on the "pro-choice" side have accused the "pro-life" crowd of hating women and wanting to enslave them. That's a very blatant straw-man argument from my point of view. And FWIW, I'm probably more "pro-choice" than most.
Fiscal conservatism receives straw-man arguments all the time. Whenever people accuse a fiscal conservative of being "on the side of the wealthy" or "greedy," whenever someone claims that libertarianism is "anarchy for rich people," those are straw-man arguments.
Comment: Re:Mounting evidence - of hype. (Score 3, Interesting) 335
by garett_spencley (#41090123) Attached to: Why Cell Phone Bans Don't Work
But your false dichotomy is irrelevant anyway: I'd rather have neither group on the road with me.
I'd rather have no one on the road with me. What's your point ?
I absolutely hate driving. No other activity has inclined me towards removing myself from society all together and going off the grid in the wilderness somewhere. I love the ability to drive, I just can't stand the driving itself.
But until someone actually causes an accident and inflicts some sort of harm or injury I respect their right to use the roads and drive a vehicle, even though I fantasize about being a tyrannical dictator that makes a law giving myself exclusive use of the roads when I feel like driving somewhere.
Blood alcohol limits, graduated licensing, road tests, license renewals, hell ... even licenses themselves ... are all preemptive; taking a pessimistic view of people and treating them as a danger and potential criminal by default. If we took the same view towards other day to day activities that we take towards driving we would have curfews and random stops and searches and all sorts of other nanny-state intrusions in the name of keeping people safe. I do understand where the sentiment comes from. 5 minutes of driving is enough to make someone really pessimistic about the driving abilities of the average person, but it's telling that with all our laws and regulations and licensing and testing those idiots are still there causing accidents and being jerks. IMO we should be throwing the book at people who get into accidents due to negligence and recklessness as we do with all other crimes, you know innocent until proven guilty, and stop trying to nanny the hell out of everyone's driving habits.
Comment: Re:The Best Advertising... (Score 1) 716
by garett_spencley (#41076763) Attached to: Ask Slashdot: To AdBlock Or Not To AdBlock?
For me its just the opposite. An advertisement is an attempt to get me to trust the advertiser's word on their product. If they want to convince me, the way to start is by being honest about what they're doing and not try and disguise it as something else.
Why did you interpret my thoughts as endorsing any sort of dishonesty ?
While trust is certainly a factor, I would go even further and say that "marketing" (which is a much wider field than "advertising") serves the purpose of informing people about a solution to a problem.
You're very set on this idea that "an ad is an ad is an ad." I don't think that making an "ad" that is entertaining and offers some sort of value in and of itself has to wear a disguise in any way whatsoever. You can inform people quite honestly about a product and do so in a way that gets people to care. You don't have to mislead them to be entertaining or informative.
I suppose that product placement could be viewed as "an ad in disguise" but it doesn't have to be. I have no shame in admitting that I would love to drive an Audi because that's what James Bond drives. If they're shit vehicles then it hurts the Bond franchise and people will start to think of that as blatant and crummy product placement. But they've got a reputation for being luxury vehicles that I think is hard earned. If I did have the money to buy one I'd do more research to make sure my impressions are accurate, the real point is that I wouldn't even bother if Bond didn't drive one.
Comment: Re:The Best Advertising... (Score 2) 716
by garett_spencley (#41076675) Attached to: Ask Slashdot: To AdBlock Or Not To AdBlock?
What sort of disclosure do you display on this sponsored content? Are users clearly informed they're ads? This suggests not:
I was intentionally vague, because I'm not here to pitch my web-site or talk about what I do etc. But you did hit on something:
Much better would be if I could learn about things through unbiased content written by you and your users, and you get paid through affiliate-like mechanisms.
That's a pretty accurate description of what I do. I don't work for anyone or promote one given company. The ads that people are there to see are the content of the site, and they are a subset of what it's trying to sell. But you can't get it on the "manufacturer's" web-site without paying for it. My site provides free samples. Think of people who might go to Costco on Friday just for the samples; if they really like something in particular they might buy it. The only difference is, people usually don't perceive the content on my site to be an advertisement, and I'm in the very fortunate position where 99.99% of my competitors shove blatant ads and pop-ups down their surfers' throats. People tell me they come to my site because there are no ads.
by garett_spencley (#41076609) Attached to: Ask Slashdot: To AdBlock Or Not To AdBlock?
Except they stole your time and attention with no recourse. They probably rang at meal time too.
To an extent, I agree. I hate receiving unsolicited phone calls, and I did point that out. I would much rather that I had sought them out as opposed to the other way around. But I guess the reason they won me over was a) they did offer a solution to a problem I had at the time, and it was a solution I would not have thought to research on my own, and b) it turned out to be an enjoyable conversation. I could have hung up at any time without feeling any guilt nor any obligation to be "polite" (I'm not a very polite person, especially to telemarketers), but I chose not to. So it didn't feel like they "stole my time" at all.
You're paying for that "value" in the increased price of the product to pay for the ad.
Not necessarily. If the company makes up for the cost of the marketing campaign in sales generated by the campaign, the costs do not have to be passed on to the consumer. And the entire point of the marketing campaign is to increase sales. So there's no reason to increase the prices, especially when companies are competing on prices as well as other factors.
This actually reminds me of another marketing tactic famously employed. When Microsoft released the X-Box they sold it at a loss expecting to make up for it in game sales. That's not an example of advertising but it is an example of a marketing strategy that may tempt you to say "people paid for the X-Box in part by the cost of games", but if they sold more games at the same price than they would have without taking a loss then the price was not necessarily passed back to consumers. I don't think any customers would have felt ripped off by paying less for an X-Box and then buying more games because they wanted to and were satisfied with their product (I'm not saying that's what actually happened, just that that was the intended outcome and was therefore a good idea IMO).
Comment: Re:Just block all ads and don't worry about it (Score 5, Insightful) 716
by garett_spencley (#41076353) Attached to: Ask Slashdot: To AdBlock Or Not To AdBlock?
we should be asking if it's in the public interest
This is a nitpick, but I'd rather ask if it's in any individual's interest.
I like to differentiate between "marketing" and "advertising." If you'll bear with me for one second: marketing, as I see it, is about trying to develop relationships with customers, present or potential, and provide them a solution to a problem they have. Advertising is one single tool that can be used as part of a marketing campaign.
As long as there is more than a single monopoly providing a given good or service then individuals really do need a way to become informed about alternatives and make decisions. I think that's where marketing comes in. And it doesn't have to be the company jumping in front of you, interrupting what you're trying to do in an attempt to get your attention. If you are, for example, shopping for a laptop you might ask your friends. If they have had a good experience with a given company, that's a form of marketing (marketing isn't trying to make a sale, it's trying to keep customers as well and get them to speak highly about their experiences). If you google "laptops" and read user reviews, maybe even go to a consumer review site, that's also marketing. And a good consumer review site will realize that people are there looking to buy things and instead of shoving ads in their face, will provide affiliate links in appropriate places so when someone decides to check out, say, "Dell Computers" the link they click on will provide a track-back to the consumer review site and the user will never think that they've just earned someone some ad revenue.
I think there are a lot of crappy ads out there and companies that haven't the first clue how to market properly. I also think that advertising is necessary and "good." And us having this debate right now, and using ad block software etc. is also a "good thing" because it's how our opinions get shoved in the faces of advertisers. The good marketers will take notice and respond. They'll realize that making people happy in some way is the whole point of a business and that marketing is about informing choices. Not informing people who don't care, but people who are actively seeking that information.
Comment: The Best Advertising... (Score 5, Interesting) 716
by garett_spencley (#41076177) Attached to: Ask Slashdot: To AdBlock Or Not To AdBlock?
... is advertising that doesn't come across as advertising.
People who say they loathe advertising in any form actually just loathe the bad advertising; the advertising that detracts from what you're trying to do and immediately screams "this is an advertisement, I'm here to interrupt you in some way in the hope that somehow it will get you to buy something even though I've pissed you off."
A few years ago I received an unexpected phone call on my wife's cellphone from a company offering a CDN service. At first I was really pissed off that this company had reached me in such an inappropriate manner ... but the guy on the other end didn't try to sell me anything and the conversation was unlike any telemarketing call that I had ever received. It was personal and appealed to my geeky curiosity (CDNs were very new at the time, the only companies that were using them were heavy traffic movers like Yahoo, so I wanted to know how it worked), it was offering me a solution to a problem I had at the time and the conversation was very informal. Within a minute or two I was actually asking him questions, and that's how it works. And to top it off when I told the guy I wasn't going to buy from him he chuckled and said "I'm not trying to sign you up today, don't worry." It kept me on the phone. I didn't buy but I was impressed enough that if I had chosen to purchase a CDN service within the next little while I probably would have given them a second look.
I still don't like people phoning me, and I think there are far better ways to reach out to people, but everything that transpired within that phone call was an example of marketing done in the right direction.
I'm self-employed and have been running a high-traffic web-site that generates money via ad revenue for 11 years now, and the people who visit my web-site have no idea that the entire site is one giant advertisement; in fact, people have complimented and praised me for not having any ads on the site. And yet when fellow webmasters in the same industry as myself share their sales and conversion stats I always get a big smile on my face. Their sites are crawling with blatant advertisements and they need 2 to 5 times the traffic to generate the same revenue. I've never understood how pissing off your customers can be regarded as any form of business model.
I think the best well-known type of advertisement that's going in the right direction is product placement. It can be done poorly, yes and I know I am about to get a bunch of replies from people telling me that they always notice it and it ruins the program etc. But it *CAN* be done in a subtle way that blends with the program and does not detract, to the point where the viewer does not notice or care.
But I think the real way to do "advertising" is provide a value to the viewer as the advertisement itself. Imagine an hour long infomercial on television that was entertaining and/or informative enough to get you to watch it for its own sake, with no intention of buying anything. Remember that "punch the monkey" ad that was on every single web-site a decade ago? Imagine if that had actually been a real game that you could play. No pushiness whatsoever. Not shoved in your face and not done as a banner / flash ad. Instead, something people genuinely wanted to play, with an entertaining sales pitch as part of it. Good advertising can be done, and occasionally is. We just don't notice because we're too distracted and pissed off at the "BOO!!! HAHAH! THIS IS ADVERTISEMENT! YOU WILL BUY NOW LOLZ!"
I've practiced "magic"/illusion-performance as a hobby for a few years and in reading/studying I've learned that corporations will often hire magicians at trade-shows to pitch new products to retailers. Some of the better magicians have crafted entire 20-minute magic routines around the product they're hired to pitch. It's entertainment and people want to watch it for that purpose alone, but it's also an advertisement.
Comment: Re:Honestly (Score 4, Funny) 76
Reasons to hate the TPP:
"latest attempt by the US at legislative colonisation of sovereign countries' IP laws."
Reasons to love the TPP:
Stick it to neo-religious enviro-nazi "green" hippy fanatics.
World spinning. Can't ... decide ... where .. to ... stand... gaaaaaaaaaaaaaaah
Comment: Re:It IS FLAC (Score 1) 397
Most of the time in that article is spent claiming that 192kHz is overkill because everything above 20kHz is inaudible. He shows how a square-looking waveform has all the right spectral components in the 20kHz range and therefore is not missing anything. This is a Fourier and Nyquist type argument that assumes linearity.
As you put it, F(a+b) = F(a) + F(b). When this is true then it's as he said. But if F(a+b) != F(a) + F(b) then you need more than 20kHz to describe the spectrum.
I'm not saying 192kHz is the right thing. I'm just saying the entire argument in the article assumes linearity to draw the conclusion that the 0-20kHz spectrum contains all the information you can hear.
In fact we already know that ears are not linear. This is in fact how some compression algorithms function. They know that as it gets loud you can't hear quieter frequencies as efficiently, so those are removed. This is an example that actually works in the opposite direction: there's less information needed. However it supports the notion that describing everything by spectral analysis is wrong when things are not linear.
You said, well, it's just a change of basis. Sort of. How tightly you want to sample has to be determined first. This is what actually sets the bases that the analysis is going to be changing between. A given point spacing in time for a given length of time forces the interval over which the Fourier transform exists. Conversely, if you insist that the highest frequency is 20K (or 40K for Nyquist) then you have fixed the time interval of the sampling. You are then blind to anything in the intervals between, which is where the non-linear effects could, conceivably, hide.
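As a small numeric illustration of this point (my own sketch, not taken from the article): squaring the sum of a 15 kHz tone and an 18 kHz tone, a crude stand-in for a non-linear response, produces components at 3 kHz, 30 kHz, 33 kHz and 36 kHz, so a non-linearity pushes energy well outside the original 0-20 kHz band. The program below just verifies the trig expansion numerically.

public class NonlinearityDemo {
    public static void main(String[] args) {
        double f1 = 15000.0, f2 = 18000.0; // two tones inside the audible band
        double fs = 192000.0;              // sample rate
        double maxDiff = 0.0;

        for (int n = 0; n < 1000; n++) {
            double a = 2.0 * Math.PI * f1 * n / fs;
            double b = 2.0 * Math.PI * f2 * n / fs;

            // memoryless non-linearity: square the sum of the two tones
            double squared = (Math.sin(a) + Math.sin(b)) * (Math.sin(a) + Math.sin(b));

            // analytic expansion of the same signal:
            // DC, 30 kHz (2*f1), 36 kHz (2*f2), 3 kHz (f2-f1) and 33 kHz (f1+f2)
            double expansion = 1.0
                    - 0.5 * Math.cos(2.0 * a)
                    - 0.5 * Math.cos(2.0 * b)
                    + Math.cos(a - b)
                    - Math.cos(a + b);

            maxDiff = Math.max(maxDiff, Math.abs(squared - expansion));
        }

        // Prints a value at rounding-error level (~1e-15): the squared signal
        // really is built from those components, two of which sit above 20 kHz.
        System.out.println("max difference = " + maxDiff);
    }
}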
Comment: Re:It IS FLAC (Score 1) 397
Apparently this link hasn't been posted enough times yet. It addresses both your first question (partially) and your second question (in huge detail).
The video you're comparing to is being treated no better than audio. It's simply that human eyes are much better than human ears, so to give a comparable experience much higher bitrates are needed for video than audio.
What all these linear analyses assume is that hearing is a linear process. If it's non-linear then these analyses are incorrect.
Comment: Re:It IS FLAC (Score 5, Informative) 397
Pono music is an ecosystem to sell music in FLAC audio file format: 1) production of FLAC files from existing recordings, 2) a dedicated player, and 3) a web store to sell FLAC files.
The problem with FLAC is: how does one get FLAC? You could use your own encoder to record a CD in FLAC, but then you just have CD quality. Why not reach back to the studio quality if you are going the FLAC route? Because you don't have access to that. But now you do: the Pono ecosystem does that. And if you wanted to play that FLAC file, well, your MP3 player might not play it, and if it does it probably has a lot less memory than you would like, so Pono players are chubbier in memory. And finally, what if you are not one of those people who likes to roll their own, and you prefer to just buy it pre-recorded? Well, again, the Pono ecosystem is there for you.
Comment: Re:Chromebook (Score 1) 286
by goombah99 (#46429883) Attached to: Ask Slashdot: Linux For Grandma?
It does not. It only supports "cloud enabled printers".
The plausible reason for this is that there's no reason to put printer drivers in the OS in the year 2014. Printers should be smart black boxes with a universal interface advertising their capabilities. Apple too seems to have the same philosophy of dropping support for things early when there's a better but less used solution available. Dropping PostScript printers, 3.5" floppies, parallel ports, serial ports, and Flash support on iPhones were all logical moves that, while causing a little pain, ultimately ushered in the right way of doing things.
Comment: Re:Sensitive information? (Score 1) 152
I guess those children brought along by their families willfully chose to have their parents or parent bring them into the USA through undocumented immigration. Why do you insist on calling these people illegals when they're just as law-abiding as you and I, except they were brought here underage and only found out they were undocumented when they got into High School or College? They are not criminals, they did not knowingly break any laws. Most of the time they don't even speak Spanish, so deporting them to Mexico where they can't speak the language and they don't know any of their family there is exceptionally cruel punishment for someone who ultimately didn't do anything except get brought over by an authority figure early on in life.
It's not a soft friendly PC name calling them undocumented immigrants, it's being realistic about the fact that not every single one of them committed any crime willingly or otherwise. Calling them illegals is absolutely pejorative, and it has no purpose other than to paint the "illegals" as nothing more than criminals. The world isn't so black-and-white, and there are a lot of privileged people who simply cannot fathom that maybe some undocumented people in America aren't here because they decided to cross the border one day, as they had little choice in the matter once their immediate family says "This is what we're doing" and the children don't even yet understand what is happening.
Not every undocumented immigrant comes to America this way, of course, but there are many who do, and to call them criminals via blanket statements is, frankly, ignorant. A child being brought over the border illegally by his/her family has two choices: Go with your family or go live on the streets. They're presented with these choices when they're often times not even old enough to comprehend anything about what's going on, other than "We're moving". I really hope you can begin to consider that maybe everyone you call "illegal" did nothing at all to earn that mean and inaccurate label. It's a pejorative term used to dehumanize the actual people involved, and there are better ways to describe this swath of people without the stigma of "Illegals".
"Affected by current laws" makes it sound like they had the unfortunate accident of being dropped in this country by mistake rather than choosing, willfully, to be in violation of the law.
It's staring you right in the face in the opening of your post, but for some reason it eludes you. There actually are a lot of people who absolutely were not brought here willingly or at an age of understanding.
Comment: Re:Come on (Score 5, Insightful) 1223
by oddfox (#41472033) Attached to: Torvalds Uses Profanity To Lambaste Romney Remarks
The guy laughs at the most inappropriate moments, a lot. Recalling a horrible cruel prank on a gay person back in college? Haha! Talking about a situation where his wife could have died? Haha! Talking about the Seamus-on-his-car-roof-in-a-kennel incident? Haha! Talking about your father closing a factory in Michigan and moving it to Wisconsin and there was a mishap with the band playing the wrong song? Haha! Romney's laughter isn't an indicator of sincerity. It's an indicator of extreme nervousness and discomfort meant to distract. And the sad thing is that it apparently works on people.
Comment: Re:Same with their up/down voting (Score 2) 192
by oddfox (#40411279) Attached to: Reddit Cofounder Says Site Was Built By a Horde of Fake Accounts
I can't imagine why a jagoff like you gets downvoted on Reddit. Oh wait, I can, it's because you take a lot of text and time to say a lot of bullpucky amounting to nothing of real value. The fact that you got modded up for posting conspiratorial drivel is kind of worrying, but I guess a lot of people here on /. really do want to think they're being silenced by "the man", whoever or whatever "the man" may be for a particular website. Honestly, you're not nearly as important to the people that run the site or the various sub-reddits that you seem to believe you are.
Comment: Re:Still busted (Score 1) 537
by oddfox (#34889134) Attached to: Firefox 4 Beta 9 Out, Now With IndexedDB and Tabs On Titlebar
I'm not trying to say you're wrong or anything but that sounds the exact opposite of how the Square pre-release is on Linux platforms right now. HW Accel is only supported in the 32-bit flash and is supposed to be available for 64-bit users when the next Flash release is finalized. If it's the other way around, then I find it pretty strange that I still don't have 1-5% CPU usage with 1080p under Linux w/NVidia while I have no issue whatsoever under Windows w/NVidia. As it is I can expect a 1080p flash video to eat about 40% of one of my four cores under Linux. Actually, just tested now in my Chromium nightly (32-bit) with the latest Flash and I'm getting about 10-20% CPU usually for the 1080p playback.
If things are really that way in Mac-land, the opposite of how it is in Linux-land, congratulations on having a better 64-bit plugin than the 32-bit plugin. Seems you are unable to really make use of it though.
Classic Games (Games)
Pac-Man's Ghost Behavior Algorithms 194
Posted by Soulskill
from the i-hate-the-pink-one dept.
Comment: Re:Assange (Score 1) 579
by oddfox (#34429662) Attached to: Moscow Has Eyes On WikiLeaks, Too
To quote your linked article:
On Saturday a WikiLeaks spokesman, who said he uses the name Daniel Schmitt in order to protect his identity, told The Associated Press that the group had requested help from NATO to check the files prior to publication to ensure the lives of civilians were not put at risk.
"For this reason, we conveyed a request to the White House prior to the publication, asking that the International Security Assistance Force provide us with reviewers," Schmitt said. "That request remains open. However, the Pentagon has stated that it is not interested in 'harm minimization' and has not contacted us, directly, or indirectly to discuss this offer."
If the government/NATO wants to protect civilians that assisted them then they should do what's right and help Wikileaks to redact such names from the documents. This is exactly the same thing that happened with the leaked cables -- Wikileaks asks the government to, if it wishes, tell them what names need to be redacted from these documents, and the government refuses to do so. The recklessness is coming from NATO and the government, not from Wikileaks which has shown every interest in addressing this particular issue.
And if you don't think they're serious about cooperating for redactions, I would remind you that the ball is not in Wikileaks court regarding such redactions. Anyone that does die because of these leaks (something that has not been proven anywhere) dies because the government refused to shield them, when it should be doing the right thing and focusing on damage control. The stuff is out there and it's going to be released, it's only responsible to try to make sure that innocent people who aided your efforts won't get hurt because you want to play the role of tough guy.
Comment: Re:Still using KDE 3.5.X... (Score 2, Interesting) 224
by oddfox (#34360940) Attached to: KDE 4.6 Beta 1 – a First Look
I honestly think if more people knew about NX they would never use VNC unless it was absolutely the only solution available, period. VNC just blows chunks way too bad, and NX makes things so easy when bandwidth is important. Anyone who has not tried NX and uses VNC should seriously give it a try because the difference is night and day.
Comment: Re:Changes seem irrelevant... (Score 2, Informative) 473
by oddfox (#33854040) Attached to: Ubuntu 10.10, Maverick Meerkat, Now Available
Ext4 doesn't have online defrag yet, it is planned. Btrfs has an fsck tool but it is not capable of fixing any problems on the disk, it can apparently only let you know there are problems (I say apparently because while I've used btrfs I haven't before had to fsck it thanks to lots of luck with not running into any hiccups during my usage). They say as much on the front page of the Btrfs wiki. To quote the main page of this wiki:
So not only are you railing against ext4 for a fsck operation which should take a long time (5TB? Come on, most people don't have 1TB in their box, and we're talking about desktop users), but you are unawares of the features and capabilities of both filesystems you are discussing. Btrfs is great, but it's not something Average Joe should be using just yet either in production or on their desktop. I have used it before and I will use it again in the future, but it is not complete yet.
If I had any mod points I probably just would have modded up ratboy666's reply because he did a fantastic job of explaining the whole situation.
Comment: Re:Well that's stupid. (Score 1) 495
WoW and most MMOs are under constant development, and while early development could be interpreted as "during alpha/beta" it could also be interpreted to mean "early on in the game's lifetime".
It wasn't even really a trick of wording though, because the patch notes linked by KingMotley explains the change in more depth. The change was drastically for the better, as the patch notes explained it allows for more freedom in exploring aspects of gameplay aside from level grinding. Before, you had a set amount of time before your XP was diminished significantly. After the modification, the only thing that was really impacted by being rested was level grinding. This change gave users the freedom to not have to worry about spending the first few hours grinding, and they could play at their own pace.
So your point really doesn't stand because much more of the gameplay was changed than "oh the rested mechanism works slightly different but pretty much the same" because it's an oversimplification. If you don't see the very real and very drastic differences between how it worked and how it's been tweaked to work, you aren't trying very hard or you haven't played the game for a significant amount of time.
Comment: Re:Well that's stupid. (Score 1) 495
As a small addendum I forgot to mention: In the beta the XP gains worked differently. Beta was a long time ago and gamers don't generally consider discussions of Beta stages to be relevant to the game at-and-post-launch. TBH your description is similar to how things worked in Beta, but it's not like there was widespread outrage because WoW didn't exactly have a huge public beta test (Or any public beta test that I can remember).
See this for more details on how the situation was in Beta, and then scroll up to see how it's been ever since.
Comment: Re:Well that's stupid. (Score 1) 495
World of Warcraft was never like that. If you spend any time, logged out or logged in, inside a major city or an inn at a minor city/outpost then you become "rested". While you are rested you gain a bonus to your XP gains. This mechanism has never been modified and has been in place since launch, and I do not remember ever hearing anyone raising a stink over getting to log out for the night in town and come back in the morning with half a level or more rested so that your leveling is in fact highly accelerated.
Comment: Re:Good to see (Score 1) 203
by oddfox (#33668748) Attached to: Microsoft Says IE9 Beta Demand Overwhelming
The results from html5test have my Chromium nightly leading the pack with a score of 241 with 8 bonus points out of a total of 300. For comparison, in order of most "compliant" to least:
• Chromium 7.0.531.0 (60152): 241/8
• Chrome 7.0.517.8 dev: 231/12
• Firefox 4 (Minefield nightly): 207/9
• Opera 10.61 build 3484: 159/7
• Firefox 3.6.10: 139/4
• Internet Explorer 9 beta: 96/5
So the IE9 beta does lag pretty far behind the competition on this test for html5 support, and I don't think anyone would be surprised to see Chromium/Chrome in the lead with FF4 gaining ground. The point is, though, that no browser is completely standards compliant yet, much less with html5 and/or CSS3.
NASA Running Low On Fuel For Space Exploration 282
Posted by timothy
from the let's-explore-earth-for-more dept.
smooth wombat writes "With the end of the Cold War came warmer relations with old adversaries, increased trade and a world less worried about nuclear war. It also brought with it an unexpected downside: lack of nuclear fuel to power deep space probes. Without this fuel, probes beyond Jupiter won't work because there isn't enough sunlight to use solar panels, which probes closer to the sun use. The fuel NASA relies on to power deep space probes is plutonium-238. This isotope is the result of nuclear weaponry, and since the United States has not made a nuclear device in 20 years, the supply has run out. For now, NASA is using Soviet supplies, but they too are almost exhausted. It is estimated it will cost at least $150 million to resume making the 11 pounds per year that is needed for space probes."
Confusion Reigns As Analog TV Begins Shutdown 434
Posted by kdawson
from the how-not-to-do-it dept.
As TV stations across the country switch off their analog signals, uncertainty reigns. Some 691 stations will have converted to digital broadcasting by midnight tonight (some interpreted the mandate as going digital by Feb. 17, not during Feb. 17, and shut down yesterday). This represents about a third of TV broadcasters nationwide. No one can say how many of the estimated 5.8 million households unready for the transition are in areas served by the stations that are switching now. The FCC added to the uncertainty by imposing extra conditions, making it unclear until last Friday exactly which stations would be switching at the beginning of the transition period. The article quotes a former analyst at Barclays Capital who said the whole process has been "botched politically."
PoV: ?uestlove + DJ Premier Get Their Hands Dirty
By Gotty™ / 07.12.12
At first, this was going to be all about an imaginary conversation that could've taken place between Premo and ?uest as they shared the tables. Then, a new Nardwuar popped up featuring Prem and upended everything. There's some rule that deems it impossible (and maybe illegal) not to share one of the wacky Canadian's interviews, especially when he's talking with a legend.
I heard a reference to Privacy testing recently (James Whittaker mentioned it here) and it seems like a valuable area to know something about. It's extremely important these days as companies balance "social features" with protecting users' PII (personally identifiable information).
My question is how would someone go about understanding the basis for user privacy? Assuming you’d need to understand what privacy is before you could test for it?
Are there references (blog posts, Wikipedia, books, videos, experts in the field?) on the factors involving user privacy? The end result could involve checking for your own or others user privacy.
Are there specific regulatory rules that you need to comply with, or do you have a general but non-specific "the software shall respect privacy" requirement? – user246 Oct 20 '11 at 1:03
No requirement just interested in understanding more about the testing subject. Things to consider, etc. – Chris Kenst Oct 20 '11 at 18:03
I'd suggest that if it's not a real problem, you won't get good quality answers. I'm inclined to think we should close this question unless you can add enough detail to make it answerable. sqa.stackexchange.com/faq#dontask Opinions? Anyone able to suggest good edits to make this question more relevant/useful? – testerab Oct 22 '11 at 17:43
I updated the question to try and make it easier to answer. Does it make more sense? – Chris Kenst May 22 '12 at 21:16
3 Answers
The first thing to be mindful of when it comes to privacy is the simple fact that you are dealing with specific legal constraints upon what is and is not allowed. Therefore unless you have dealt with regulatory issues before, you are going to need somewhat of a different mental model for deriving requirements and testing.
Your location, the type of application (web or desktop), does the app require / transmit personal data, where the servers are and many other factors; will determine which, if any, laws you need to become familiar with.
Consider the following scenario, a web based app that is created by developers in country X with users around the world, and hosted by a third party cloud provider, whose physical server location may change on the fly. Which jurisdictions laws do you need to worry about? Probably all of them.
Fundamentally therefore privacy considerations need to be baked into the architecture and design from day one. There are far too many ways for personally identifiable information to leak otherwise. It is also not possible to secure this data in isolation, there needs to be an understanding of the risks from top to bottom of the organization (large or small) on who may get access to this.
Privacy testing is really a combination of the following:
1. Store only as little information as needed. You can't leak what you don't know.
2. Compliance with relevant regulations for what information you do store
3. Prevention of unintentional leaks e.g. security testing for info leaks, SQL injection etc.
4. Methods to identify intentional leaking. Monitor for unusual data movement patterns, back doors...
5. Recovery procedures in the case of a data leak. This is technically covered by item 2 but worth reiterating. There are, depending on jurisdiction, legal obligations regarding notification of data loss.
Here is a way to think about the problem: you should not intentionally violate privacy, and you should not unintentionally violate privacy. You test the former by examining the application's user interface as well as everything the software produces: e.g. emails, files, data streams, queries to other systems, and text messages. You test the latter via security testing. And of course, as Steve said, you can't leak what you don't know. – user246 Oct 27 '11 at 19:36
I believe you are asking about testing for your own privacy, i.e. your identity must be protected when browsing: testing what the web browser reveals about you.
The OP didin't specify browsing, specifically - there's all manner of possible non-browser software that could be handling sensitive information. For example, I know of various handy desktop applications designed to make it easier to fill out tax returns, and various smartphone applications might handle billing information. – user867 May 23 '12 at 0:02
Protecting your own privacy is one aspect. See my updated question. – Chris Kenst May 24 '12 at 23:06
up vote 0 down vote accepted
There seems to be a lot of information on the web about privacy but nothing so specific as it relates to testing it. However in order to test it you probably need to have experience in some / many aspects. From what I can tell there are several aspects to Privacy including:
• Technical: As Steve mentions it depends on the type of application, how data is transmitted and stored, etc. This would also related to how you track users, how you verify who's identity is in use, etc.
• Legal: Privacy laws for each country or state; laws for specific industries like Health Care, how legal jurisdictions work. Does the legal aspect cross other aspects?
• Social: How does society value privacy and in what ways? What are the social norms and are they changing with social media? Is this area based on a lot of subjectivity - in the eye of the beholder?
• Ethical: What information should we collect? How should be it be collected? Are there certain groups of the population we shouldn't and others where we can collect more? Protecting your own privacy might fall into this area if you feel others can't be trusted to do it for you.
• Historical: How do our past involvements with privacy influence our current views? What is the history of Privacy / privacy law in certain countries, states, etc.? Consequences?
Others I can think of include cultural influences, tools that are used, people and organizations involved in the debate.
This isn't a comprehensive list but I'd like to get more input. Anyone see some category I've missed or have additional details on the categories?
--- Chris
bio website stevehhh.com
location Victoria
I started off in the 1980s by learning Atari, Microsoft, and BBC DOS and BASIC systems. In the 1990s, I got into the Amiga and eventually Windows 3.0. Courses in programming taught me the basics of assembly, Pascal, COBOL, C, Prolog, LISP, SQL, TCP/IP, Unix, networking, and mainframe systems.
In the years since, I've worked on all kinds of systems, including (but not limited to) Unixes (Solaris, OS X, BSD), Windows, Linuxes (SuSE, Fedora, Ubuntu), plus a variety of Ant, Bash, C++, C#, Erlang, Java, PowerShell, Python, Perl, and other projects.
My current preferences lean towards Git over Subversion, IntelliJ IDEA over Eclipse, Fish over Bash, vi over emacs, Postgres over MySQL, Maven and Ant over Make, VMware over Parallels or VirtualBox, OS X or BSD over Linux and Windows, Nginx over Apache, Pixelmator and iDraw over Photoshop and Illustrator, PathFinder over Finder, PowerGREP over grep, Ghost over Wordpress, folders over iPhoto and iTunes, BeyondCompare over Kaleidoscope, colour over color. And so on.
I have researched this question for ages and cannot get it right!
I have a populated hashmap and an identically formatted hashmap (Map<Integer, ArrayList<String>>) that I have been working on (it has key value/s equal to other keys in the populated hashmap such as 0,1,2 etc). When I use the .put command to update the populated hashmap the few/one I have been working on replaces everything in the populated hashmap - is this normal? Where am I going wrong? I was expecting it to simply replace the key in question + values....
Excuse the not supplying code but it would mean posting quite an amount to demonstrate, just wondering if anyone could help explain where this might be going wrong. I could throw something together to show if needed...
Much obliged!
I did not understand your question? Do you mean that when you put an already present key in the map why does it get replaced? – noMAD Apr 29 '12 at 21:58
it would be good if you could just post the loop/replacement code in question – g13n Apr 29 '12 at 21:58
You will have to post code. Your question is quite incomprehensible and anyway, how could we know what you're doing wrong. Reduce your code to the minimum necessary to reproduce your problem, don't just dump everything you've got. – Marko Topolnik Apr 29 '12 at 21:58
Ok, there is something fishy going on here. Providing code would possibly solve this fast. Are you sure that the two variables aren't pointing to the same object? That is are you either making a deep-copy of the original hashmap, or when adding elements adding them (with for example put()) to both HashMaps? – esej Apr 29 '12 at 22:24
The question is unclear. Please give a simple example: What is already in the map? What do you update and how? What would you expect as result? What do you get as result instead? I have the impression your map contains (1, List ("foo", "bar")) - you update (1, "foobar") and expect (1, List ("foo", "bar", "foobar")). – user unknown Apr 29 '12 at 22:38
2 Answers
up vote 2 down vote accepted
This is how a code example might look:
import java.util.*;

public class NumFormEx
{
    public static ArrayList<String> listIt (String... params)
    {
        ArrayList<String> as = new ArrayList<String> ();
        for (String s : params)
            as.add (s);
        return as;
    }

    public static void main (String args[])
    {
        Map<Integer, ArrayList<String>> mils = new HashMap<Integer, ArrayList<String>> ();
        mils.put (1, listIt ("foo", "bar"));
        mils.put (2, listIt ("zacka", "zacka"));
        System.out.println ("mils:\t" + mils);
        mils.put (1, listIt ("foobar"));
        System.out.println ("mils:\t" + mils);
    }
}
java NumFormEx
mils: {1=[foo, bar], 2=[zacka, zacka]}
mils: {1=[foobar], 2=[zacka, zacka]}
I would say: as expected.
that does help, thanks! would you expect to be able to take the mils map and .put it into a Map<String, Map<Integer, ArrayList<String>>> to the same effect? – user1360809 Apr 30 '12 at 0:44
I don't quiet understand to the same effect. Yes, I could put that map into another map, where Strings are keys. Not that easy to imagine a use case, but you could have a mapping of words usages: John used 2 times "zacka zacka", 1 times "foo", 1 times "bar". Peter has a different usage-map. For different names we have different maps, maps get updated, inserted, removed ... . Can you build an example showing problems? – user unknown Apr 30 '12 at 14:19
Since a map doesn't allow duplicate keys you can do:
myMap.put(2, new ArrayList<String>());
This will take the element with key 2 and replace its list with a new ("blank") list.
I have been using: mainMapLayout.put(int, updateMapLayout); I might have to correct myself as I am adding a Map<Integer, ArrayList<String>> to Map<String, Map<Integer, ArrayList<String>>>. Map<String (for mainMapLayout) = user name and Map<Integer, ArrayList<String>> for customer values (Integer = 0.1,2,3 etc and values = ArrayList<String>) my apologies! – user1360809 Apr 29 '12 at 22:39
results, original file contents: 2.d.title={0=[1, 2, 3, , 4, 5, 6, 7, 8, , 9, 10, 11], 1=[1, 2, 3, , 4, 5, 6, 7, 8, , 9, 10, 11]}} updated file contents: 2.d.title={0=[1, 2, 3, , 4, 5, 6, 7, 8, , 9, x, 11]}} using command: mainMapLayout.put(string, updateMapLayout); – user1360809 Apr 29 '12 at 22:43
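A short sketch of the nested-map case from these comments (mainMapLayout, updateMapLayout and the "2.d.title" key are taken from the comments above; listIt is the helper from the answer): put() swaps out the whole inner map stored under that key, so to change one inner entry you fetch the inner map and update it in place.

Map<String, Map<Integer, ArrayList<String>>> mainMapLayout =
        new HashMap<String, Map<Integer, ArrayList<String>>>();

Map<Integer, ArrayList<String>> originalLayout = new HashMap<Integer, ArrayList<String>>();
originalLayout.put(0, listIt("1", "2", "3"));
originalLayout.put(1, listIt("4", "5", "6"));
mainMapLayout.put("2.d.title", originalLayout);

// Replaces the ENTIRE inner map stored under "2.d.title"; key 1 is gone afterwards.
Map<Integer, ArrayList<String>> updateMapLayout = new HashMap<Integer, ArrayList<String>>();
updateMapLayout.put(0, listIt("1", "2", "x"));
mainMapLayout.put("2.d.title", updateMapLayout);

// To change only key 0 and keep everything else, update the existing inner map instead:
mainMapLayout.get("2.d.title").put(0, listIt("1", "2", "x"));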
I am currently using the storyboard interface builder to design my GUI.
My structure looks like the following in the storyboard:
-> Navigation Controller -> Tab Bar Controller -> SomeViewController
-> AnotherViewController
The Navigation Bar and Tab Bar appear fine in the ViewControllers, and the titles are set and visible in the editor, but in the simulator the titles disappear.
How do I resolve this problem?
FYI: Navigation Controller and Tab Bar Controller are not bound to any Custom Class.
2 Answers
up vote 6 down vote accepted
self.navigationController.navigationBar.topItem.title = @"YourTitle";
Simply put that in viewDidAppear in your ViewControllers.
Alternatively, setting self.title = @"YourTitle"; (or self.navigationItem.title) should also work; put it inside viewDidAppear or viewWillAppear.
I have a JSON which is returned from the Java code. I use toJSON to show the JSON, but it doesn't show anything and gives an error when I debugged it using Firebug. Below is the response which I have to show in the browser.
Below is the function in the JSP which I am calling:
$.ajax({
    type: 'post',
    url: '...',  // URL omitted in the original snippet
    success: function(data) {
        alert(data);               // 1st alert
        var json = $.toJSON(data);
        alert(json);               // 2nd alert
    },
    error: function() { alert("request failed"); }
});
1st alert shows [object] but 2nd alert is not showing anything.
could you show the alert(data) output ? – jbduzan May 14 '12 at 8:09
what kind of ajax call type are you doing? please provide more code – Fabrizio Calderan May 14 '12 at 8:10
yes sure...it is [object Object] – Java_NewBie May 14 '12 at 8:11
I have provided the code for ajax as well – Java_NewBie May 14 '12 at 8:13
@Java_NewBie Please, don't use alert for debugging, use console.log or console.dir with chrome or firebug. – Yoshi May 14 '12 at 8:21
closed as not a real question by Quentin, TJHeuvel, Christian, Perception, kapa May 14 '12 at 16:26
1 Answer
up vote 0 down vote accepted
There is no such thing as $.toJSON(), thats why you get an error. Use JSON.stringify() and JSON.parse().
Alternatively, if you set the correct contentType, you should be able to use it without any conversions.
edit: to be correct toJSON() is a plugin. Imo there is no need for this, the standard JSON-handling from the browsers and jQuery is sufficient for that task.
code.google.com/p/jquery-json – Quentin May 14 '12 at 8:28
@Quentin It's a plugin and not part of the standard jQuery. And why use it, when every "normal" browser does this by default. – Christoph May 14 '12 at 8:36
I'm not saying it should be used, just that the statement There is no such thing as $.toJSON() is incorrect. – Quentin May 14 '12 at 8:43
JSON.stringify() solves the issue...thanks. – Java_NewBie May 14 '12 at 8:45
My situation is that I want to rotate the Postfix maillog any time after my PHP script sends mail.
That way I can analyze the log file, dispatch different error messages to the different senders, and after that delete the rotated log file, preventing the original maillog from growing too large.
My FreeBSD version is 8.0-RELEASE.
Is it possible to do this without installing any extra tools, which would add difficulty to setting up the system?
thanks in advance
closed as off topic by X-Istence, user97693321, S.L. Barth, Linger, Sirko Nov 2 '12 at 13:14
2 Answers
up vote 0 down vote accepted
FreeBSD uses newsyslog(8) for its log rotation. It is configured by /etc/newsyslog.conf and by default uses a daily rotation for /var/log/maillog.
If really necessary you can trigger a non-scheduled rotation by calling newsyslog -F /var/log/maillog. But IMHO it is preferable not to and only to change the policy in newsyslog.conf because that a) requires no additional code and b) means that the config documents the system's state.
Couldn't logrotate be helpful to you? It's a rather standard tool for log rotation on nix systems.
If I were in your situation, I would use logrotate and write custom scripts to do what I want with the logs: since you don't express your needs in a precise way, it's hard to give precise answers.
When you say you want to use PHP for that, I hope for you that you don't mean "from within the web server", but "as a system script language". I would really not let the Apache PHP module, for instance, manipulate the system logs. I would ensure that they're out of its reach, actually.
Additionally, I think it might be a question for Server Fault or UNIX / Linux instead.
Yes you're right I write some php script and run with crontab every 5 min. Use shell_exec to call external program or shell to do log rotate is the only way that I can figure out. I find that there no "logrotate" commend after I install my freebsd. Only newsyslog and syslog that I can use. I mean that if it is possible to not to install logrotate or other non-default tool to rotate the log manually. – inker Aug 31 '12 at 7:36
Is representing user permissions better in the user table or better in its own permissions table?
Permissions in User table
Putting permissions in the user table means making a column for each permission in the user table. An advantage is queries should run faster because no joins are necessary when relating users to user permissions. A disadvantage is that having many permissions columns clutters the user table.
Permissions in Permission table joined to User table with many-to-many relationship
Doing it this way cleanly separates out the permissions from the user table, but requires a join across two tables to access user permissions. Database access might be slower, but database design seems cleaner.
Perhaps keeping permissions in a separate table is better when there are many permissions. What are other considerations in making this decision, and which design is better in various situations?
2 Answers
up vote 2 down vote accepted
Your first approach is feasible when the number of different roles/permissions is relatively small. For example, if you only have two types of users, normal and admin, a separate table looks like overkill. A single is_admin column is sufficient and simple.
However this approach does not scale once the number of roles exceeds a few. It has several drawbacks:
• the user table becomes very "wide", having a lot of empty columns (wasting space)
• adding a new role to the system requires altering the user table. This is cumbersome and might be time-consuming for a large user database
• listing a user's roles requires enumerating over all columns, as opposed to a simple database query.
Enumerating permissions columns to show all rolls and running ALTAR command on the db - two things I didn't think of. The separate table quickly becomes more desirable. – steampowered Oct 27 '12 at 18:46
The standard pattern for access control is called Role Based Security. As both the number of users and the number of different types of permissions you need grows, the management of your user-to-permissions links can become increasingly difficult.
For example, if you have five administrators and fifty users, how do you keep the permissions of each group in synch? When one of your users is promoted to an administrator, how many edits do you need to make? The answer is to create two intersections: users-to-roles and roles-to-permissions.
This solution is described (including entity relationship diagram) in my answer to this question.
[ERD omitted: User, Role and Permission entities linked by user-to-role and role-to-permission intersection tables]
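For illustration only, here is a rough in-memory sketch of those two intersections (the class and field names are invented; in a real system the sets would be backed by the user-to-role and role-to-permission tables): an access check walks roles first, then permissions.

import java.util.HashSet;
import java.util.Set;

class Role {
    final String name;
    final Set<String> permissions = new HashSet<String>(); // role-to-permission intersection
    Role(String name) { this.name = name; }
}

class User {
    final String login;
    final Set<Role> roles = new HashSet<Role>();           // user-to-role intersection
    User(String login) { this.login = login; }

    boolean hasPermission(String permission) {
        for (Role role : roles) {
            if (role.permissions.contains(permission)) {
                return true;
            }
        }
        return false;
    }
}

Promoting a user to administrator is then a single change (add the admin role to that user) instead of editing many per-user permission flags.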
This is a robust answer for a more complex application. As @Nurkeiwicz notes, a simple application may not even need a rolls table or permissions table. – steampowered Oct 29 '12 at 17:53
nice diagram, what tool did you use? – dangerousdave Dec 13 '13 at 10:38
@dangerousdave - Sorry for delayed response, I've been out of town. I use Visio with custom smart shapes that I built to use the James Martin ERD visual convention and a custom line pattern that gives it a hand-drawn look. – Joel Brown Dec 16 '13 at 17:16
@Joel - it's very good! – dangerousdave Dec 17 '13 at 18:53
For instance, I have a sheet which contains the names of 100+ people. In column H I have their birth dates. What will the code look like if I want to find out which friend's birthday is today? Of course the macro will need to run through that column and see if today's date matches one in column H. I have very little experience with VBA/Macros. Please and thank you for your help.
What have you tried? What does your data look like? – The Unfun Cat Nov 13 '12 at 0:16
Conditional formatting might be a better approach. – Tim Williams Nov 13 '12 at 0:22
2 Answers
This code will put a message in column I if someone's birthday is today as per column H (assuming column G has the person's name):
Sub BirthdayAlert()
Dim lastRow As Long
Dim ws As Worksheet
Dim varArray As Variant
Dim lb As Long
Dim i As Integer
Set ws = Sheets("Sheet1")
lastRow = ws.Range("H" & Rows.Count).End(xlUp).Row
varArray = Application.Transpose(ws.Range("H2:H" & lastRow).Value)
lb = LBound(varArray)
For i = LBound(varArray) To UBound(varArray)
If IsDate(varArray(i)) Then
If Month(CDate(varArray(i))) = Month(Date) And Day(CDate(varArray(i))) = Day(Date) Then 'compare month and day so the birth year is ignored
varArray(i) = "Today is Your Birthday " & ws.Range("G" & (i + 1)).Value 'names are in column G; array index i corresponds to row i + 1
Else
varArray(i) = ""
End If
End If
Next i
If UBound(varArray) > 0 Then
ws.Range("I2:I" & lastRow).Value = Application.Transpose(varArray)
End If
End Sub
Here's something simple for it:
Sub birthdayThing()
Dim rng As Range
For Each rng In Range("H2:H100")
If Month(rng.Value) = Month(Date) And Day(rng.Value) = Day(Date) Then rng.Offset(0, 1).Value = "Birthday" 'compare month and day only, so the birth year does not matter
Next rng
End Sub
There are already a few questions regarding the fact that methods in GWT RPC should not return an interface like List, but rather a concrete class like ArrayList, because otherwise "GWT needs to include all possible implementations". See e.g. In GWT, why shouldn't a method return an interface?
Here's my question: is this limited to the return type itself? How about parameters of the method? And what if the return object contains an interface, e.g.
public class MyReturnObject implements IsSerializable {
    List<String> listOfUnspecifiedType1;
    List<Long> listOfUnspecifiedType2;
}
The examples I have seen all talk of the return type itself. I don't see why it would be a problem to return an interface, but not a problem to return an object which just wraps an interface; but maybe I am missing something?
1 Answer
up vote 2 down vote accepted
It's clear from the linked question that it applies recursively (and as soon as you understand why you should use the most derived types possible, it becomes obvious that it is recursive).
This is also true of method arguments, not only the return types and their fields: if you send a List<X> then GWT has to generate serialization code for all List classes: ArrayList, LinkedList, etc.
And of course the same applies to classes, not only interfaces: AbstractList is no different from List.
And because generation comes before optimization, all possible classes from the source path will be included, not only those that you use in your code; and then they get in the way of the optimization pass, as all those classes are now used by your app.
Therefore, the rule is: use the most specific types possible. The corollary is: don't fear DTOs, and don't try to send your business/domain objects at all costs.
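As a minimal sketch of such a DTO (mirroring the MyReturnObject from the question; the field and accessor names are just for illustration), declaring the fields with concrete collection types keeps the GWT compiler from generating serializers for every List implementation on its source path:

import java.util.ArrayList;
import com.google.gwt.user.client.rpc.IsSerializable;

public class MyReturnObject implements IsSerializable {
    // concrete ArrayList fields instead of the List interface
    private ArrayList<String> listOfStrings = new ArrayList<String>();
    private ArrayList<Long> listOfLongs = new ArrayList<Long>();

    // GWT RPC serializable types need a zero-argument constructor
    public MyReturnObject() {
    }

    public ArrayList<String> getListOfStrings() {
        return listOfStrings;
    }

    public ArrayList<Long> getListOfLongs() {
        return listOfLongs;
    }
}

The same reasoning would apply to the RPC service signature itself, e.g. ArrayList<MyReturnObject> fetchAll() rather than List<MyReturnObject>.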
Thanks… It does make sense, I just wanted to make sure. – user711413 Jan 16 '13 at 21:21
So here's the thing... I'm making a small app that should be able to list EVERYTHING on a user's Desktop - including shortcuts.
So I was doing this:
string filepath = Environment.GetFolderPath(Environment.SpecialFolder.Desktop);
DirectoryInfo d = new DirectoryInfo(filepath);
foreach (var file in d.GetFiles())
That gives me the following:
But on my Desktop I can see these:
Microsoft Office 2010
VLC Media Player
So I tried to pull some WMI info from: Win32_ShortcutFile without any luck. (It lists stuff I don't have on the desktop like Windows Live.)
So at the moment I'm kind of clueless...
I hope this made any sense!
Any pointers in the right direction would be awesome!
EDIT: I forgot to mention - the target node is a Windows Server 2008 SP1 machine.
EDIT: I also forgot to mention that I am already checking for folders on the desktop.
You need a check for folders too. – P.Brian.Mackey Jan 28 '13 at 22:02
Yeah, sorry forgot to mention that - I have that check implemented as well. – RobinNilsson Jan 28 '13 at 22:03
You also need to check the Public user's (or All Users in XP) Desktop for items. – itsme86 Jan 28 '13 at 22:03
I think in XP you need to add the All Users/Desktop folder manually. – Johnny Mopp Jan 28 '13 at 22:03
Not everything on a user's Desktop is represented as a file or directory on the filesystem. Do you want "virtual" files, like the Recycle Bin, too? – Gabe Jan 28 '13 at 22:10
5 Answers
up vote 5 down vote accepted
You need to check the public user's desktop.
In .Net 4.0 and above, you can use the Environment.SpecialFolder.CommonDesktopDirectory special folder to get at that directory.
On your machine it is probably C:\Users\Public\Desktop if you have not changed it. If you look in there, you should see the files that are missing from the C:\Users\YourUserName\Desktop folder.
If you are on .net 3.5 or below, then the CommonDesktopDirectory does not exist in the special folder enum. If that is the case, you will need to use a Win32 API call to get the folder path.
[DllImport("shfolder.dll", CharSet = CharSet.Auto)]
private static extern int SHGetFolderPath(IntPtr hwndOwner, int nFolder, IntPtr hToken, int dwFlags, StringBuilder lpszPath);
private const int MAX_PATH = 260;
private const int CSIDL_COMMON_DESKTOPDIRECTORY = 0x0019;
public static string GetAllUsersDesktopFolderPath()
{
    StringBuilder sbPath = new StringBuilder(MAX_PATH);
    SHGetFolderPath(IntPtr.Zero, CSIDL_COMMON_DESKTOPDIRECTORY, IntPtr.Zero, 0, sbPath);
    return sbPath.ToString();
}
That, sir - was exactly what I was looking for! Thank you! – RobinNilsson Jan 28 '13 at 22:19
Also you need to scan this directory:
string filepath = Environment.GetFolderPath(Environment.SpecialFolder.CommonDesktopDirectory);
Some files may be showing up when you look (but not to code) because they're actually in the shared desktop folder. On Windows 7, this is C:\Users\Public\Public Desktop. On XP I think it's C:\Documents and Settings\All Users\Desktop, but I can't check that right now.
Hard coding a path like that is not a good approach, because it does not work if users move that folder. Using the SpecialFolders enum is the correct way to do it. – John Koerner Jan 28 '13 at 22:08
@JohnKoerner - Very good point. It's still good to know what the defaults are so you can manually check that that's the issue. – Bobson Jan 28 '13 at 22:09
If you want to get all desktop items you will have to check both DesktopDirectory and CommonDesktopDirectory (Concat needs a using System.Linq; directive):
var list = new DirectoryInfo(Environment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory)).GetFiles()
    .Concat(new DirectoryInfo(Environment.GetFolderPath(Environment.SpecialFolder.CommonDesktopDirectory)).GetFiles());

foreach (var file in list)
{
    Console.WriteLine(file.Name); // or whatever processing you need per item
}
While many of the items come from the All Users Desktop, as explained in other answers, that by no means completes your search.
If you want to use the same list that Windows does for desktop items, you need to call SHGetDesktopFolder and invoke EnumObjects on the resulting object. I don't think the .NET Base Class library exposes this functionality, but I'm sure someone has already written a wrapper that does all the p/invoke heavy lifting. There's a thin wrapper (interface declarations converted to C#) already provided at pinvoke.net
I'm a beginner developing a small application. I have 3 TextViews on screen; when I touch any one of the views it should navigate to a new screen. How do I do this? Please help me.
1 Answer
yourTextView.setOnTouchListener(new OnTouchListener() {
    public boolean onTouch(View v, MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_UP) {
            // do something here when the element is clicked
            ScreenManager.setCurrent(new YourNewPage());
            // return true: the event was handled and should not be
            // given further down to other views
            return true;
        }
        // if the event was not handled here, let other views see it
        return false;
    }
});
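Since the question mentions three TextViews, here is a minimal sketch of the usual Android approach: attach one click listener to all three views and start another Activity. NextActivity and the R.id.* values are placeholder names (the target Activity also has to be declared in AndroidManifest.xml); the needed imports are android.content.Intent and android.view.View.

// inside your Activity's onCreate(), after setContentView(...):
View.OnClickListener openNextScreen = new View.OnClickListener() {
    public void onClick(View v) {
        // start the next screen
        startActivity(new Intent(v.getContext(), NextActivity.class));
    }
};

// wire the same listener to all three TextViews
findViewById(R.id.textView1).setOnClickListener(openNextScreen);
findViewById(R.id.textView2).setOnClickListener(openNextScreen);
findViewById(R.id.textView3).setOnClickListener(openNextScreen);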
We are an organisation who have purchased a system which is used by doctors to view test results of patients (quite sensitive information). Being a programmer, I have poked and prodded with the system and found that it submits the username and password via a HTTP GET request. On the domain it is run on, all computers are set to bypass the proxy, so the URL with the request won't be saved in some proxy log somewhere. But I would argue this is an unsafe way of handling username and passwords anyway.
The vendor will argue that since we never asked for it, it will be an 'enhancement' which will require additional $$$. (We never wrote the specifications for the system in the first place).
What kind of case could I make to management to make them feel this isn't up to standard and that probably the only way this system would be secure is through HTTPS?
EDIT: Thanks for all your responses! I have raised the issue with the project leader, her response was along the lines of "what's HTTP?". So I plan to explain it all to her in better detail, investigate the legal implications and try to raise the issue with the programmers directly asking why they went that path. I will also try and explain the situation to other colleagues who don't have any direct involvement but may be able to have some influence on the matter.
Is there any reason for not using POST? – different Oct 26 '08 at 22:40
POST wouldn't be any more secure than get – Blair Conrad Oct 26 '08 at 23:06
It at least stops casual inspection of the password in history or on the screen. – RodeoClown Oct 27 '08 at 2:47
Should be POSTed over SSL. If the original specification mandates any kind of security, then you ought to be able to get the vendor to fix it, provided you can convince the right people that it's a problem. – Rob Oct 27 '08 at 3:18
9 Answers
If it's medical data and you live in the United States, there is an excellent chance that access to it is subject to HIPAA regulations, including security requirements. You should review http://www.cms.hhs.gov/SecurityStandard/Downloads/SecurityGuidanceforRemoteUseFinal122806.pdf. If you don't live in the United States, I would suggest that you could still point to HIPAA as relevant to the domain.
... assuming you're in the US, of course. But other countries may have similar requirements. – TimB Oct 26 '08 at 23:17
According to MrGreen's profile, he's in Brisbane Australia... – Kevin Haines Oct 26 '08 at 23:37
A good way to make your case is to grab a relatively technical (or bright) manager who'll understand if you show them a live ethereal trace of a login (look! here's the password for user: MrGreen. What, don't believe me? Here try it yourself!).
Only do this without asking first if you trust and know the manager, else just talk to him about this and if he doesn't believe you, ask for permission to show. If he doesn't grant it, you could point to this question or other online resource. But if they don't care, you're out of luck, I'd say.
Do the live trace, explain simply what you did (anybody on our network can do this, it's just as easy as installing this program). Afterwards explain that it's almost free to get encryption going on the system which would prevent that and that the application barely has to be modified in the least. And that it would have the benefit of transmitting everything encrypted so the records would be a lot safer as well.
Then leave that manager to take care of the appropriate permissions/budget approval/whatever.
And the only sane way to fix it overall is indeed using POST (to fix the password being sent in the URLs) and HTTPS.
What worries me the most is that people who send plaintext passwords over the network will probably have many other security flaws.
Careful, you could get in trouble for "hacking" unless you have permission from higher-ups. – wnoise Oct 26 '08 at 22:59
True... Maybe is to tell them without doing it first, and if they don't believe you ask them for permission to show them. Of course all this is best avoided if there's a trusted, reasonable manager (which you can't count on, I know) – Vinko Vrsalovic Oct 26 '08 at 23:04
You have two issues here, one technical, one contractual (and hence legal). I would not be asking for legal advice on Stack Overflow.
The technical answer is obvious - these guys that did your system are clowns, since they left a gaping security hole in it.
Legally, it's going to depend on which country you're in (I notice you're from Brisbane so hello from the other side of the country). Many will have medical and/or privacy legislation which may have been violated so that's one thing to check for. The HIPAA laws that others have suggested looking into are US only; we may have an equivalent in Australia but I'm pretty certain privacy laws here in Oz could be bought into play.
Similarly, you need to look over the contract (whether you drafted it or not, I'm assuming you (or your predecessor) signed it otherwise there's no obligation on your part to pay them at all) to see if privacy was a requirement. Even if not, a competent lawyer could argue that it was an implicit requirement.
You may well have to suck it up and pay the extra money - I've worked for some big companies and they tend to lay off all responsibility for anything not listed in the deliverables to the client (this is usually written into the contract). If your vendor is a competent one (in terms of business rather than client satisfaction of course), they will have done exactly this.
But first, contact a lawyer for advice. They're scum-sucking bottom feeders :-), but they are the people who will know what to do and they are best able to examine the contracts and advise you of the best options open to you. I used one about 10 years ago to get out of a car contract that I could no longer afford and, even though it cost several thousand dollars, that was much better than the alternative.
Unless they're frequenting SO, the advice you're going to get here is either skewed to the technical side (best case) or downright dangerous in a legal sense (especially since it'll be mostly based on US law). Not wishing to advertise for lawyer types, I do know you can find one here.
Best of luck.
Take a look at privacy.gov.au and privacy.gov.au/health/index.html for Federal Australian laws regarding privacy. – Kevin Haines Oct 26 '08 at 23:56
Even when using SSL, please remember that when usernames and passwords are sent using GET, they are included as part of the URL.
This will mean that any server logs will contain the usernames and passwords as part of the logging process. Therefore you will need to secure these logs, or at least prevent the logging of the query string.
I think for your case you should insist on https - even if it is over a "secure" network.
This is similar to http basic (basic allows it in the header - which is preferable, but you can also put it in the URL in a certain format, see rfc2617 for more details).
with SSL/https, the host name will be in the clear (obviously as it has to find the server) but the other parts of the URL should be safely encrypted.
No, the host name is not in the clear, except in the DNS request. Only the IP address is needed to find the server (though the host name is in the encrypted request). – ysth Oct 27 '08 at 2:48
This is no less secure than built in basic http authentication.
This is true, except for one subtle point, that the username & password, depending on how the system is designed, may appear in the browser window's address bar.
At the very least, I think they should POST that information to the server.
Agree, if it appears in the browser URL bar, then it's being cached on all client machines. – Brian R. Bondy Oct 26 '08 at 22:49
If you'll do anything, do it right, POST will still send the password in the clear – Vinko Vrsalovic Oct 26 '08 at 22:51
Or http header at least, for a get. – Michael Neale Oct 26 '08 at 22:51
I'm not disagreeing with you guys. However, it seems like the vendor is pretty much refusing to make this right, and it sounds like the requirements are unclear. Switching from GET to POST is pretty much trivial, and with SSL, is adequate security, I figured that's the path of least resistance. – Jack Leow Oct 27 '08 at 0:32
GET data will also usually appear in the server's access log. – ysth Oct 27 '08 at 2:46
This is no less secure than built in basic http authentication. Just make sure that the username/password is not being cached by the client web browsers (Is it in your browser history?)
I think the easiest and cheapest way would be to require HTTPS to secure your web application. If the user goes to a URL that is HTTP you can simply redirect them to the HTTPS equivalent URL.
If you must allow HTTP access though (and I'm not sure why that would be the case), then it is absolutely not secure. You should instead implement something like HTTP digest access authentication.
I don't think that enhancing the security is something that you should get for free though from the person doing the coding. That is unless the u/p appears in the browser history.
Since you're dealing with doctor and patient information, it also sounds to me like the content itself should be encrypted, and not just the authentication. So you really should be using HTTPS anyway.
I would argue that this is less secure than HTTP basic auth, as most web servers will log URLs accessed by default, including parameters to GET requests. I am not aware of any which default to logging HTTP auth credentials. – Dave Sherohman Oct 27 '08 at 3:41
I agree, it is actually less secure, given all of the places the GET request is recorded. – erickson Dec 5 '08 at 1:00
"This is no less secure than built in basic http authentication." Actually, this is even less secure than unrestricted access, as it is about the same actual level of security, while providing a false sense of security. – Piskvor Oct 27 '09 at 14:47
Usernames and passwords should never be sent unencrypted across the network, so insist on HTTPS for at least authentication. My preference would be that the username/password only be accepted via POST (so that it doesn't appear in the URL at all), but you could conceivably encrypt and encode the password so that it could be put in a GET request. I can't envision any reason why I would do this instead of a POST.
[EDIT] As others have indicated, if you have patient-related data, you may need to encrypt all communications with the server. If you are in the US, I would urge you to look into the HIPAA regulations to see what if any apply here with regard to securing the data, especially subsection 164.306 of the Privacy Rule (PDF).
Jonathan, thanks for the typo catches. – tvanfosson Oct 27 '08 at 0:19
Was this custom software or something used by others? If the latter, consider joining or starting a user group representing all those who use the software.
I moved a typed dataset from one project to an ASP Web Application project. I put the typed dataset into one of the existing directories as it was in the App_Code directory of the previous site but don't see the option to create that asp.net folder in this project.
Now, when I try to instantiate the typed dataset, the compiler says 'The type or namespace name '' could not be found (are you missing a using directive or an assembly reference?)'.
4 Answers
You may need to re-gen the DataSet. When you move the .xsd, you've only moved the xml layout of the DataSet.
Delete any generated code file, open the xsd, move something, and then save it. The save operation calls the generator. Or you can right-click on the .xsd file and call the generator directly.
Just to clarify - this does work. You can expand the XSD in the Solution Explorer, and simply delete the Designer.vb file. Then open your dataset (XSD file) and drag the dataset a little bit. Then Save. I couldn't see what it actually changes in the Designer file. – Joe N Oct 18 '10 at 23:01
Make sure the compiler knows it's a dataset and not just an Xml file. Select the DataSet.xsd in Solution Explorer, then in the Project window ensure that "Custom Tool" is set to MSDataSetGenerator.
After that, instead of guessing, open up the dll file in Reflector and look for your DataSet class. Make sure it's in the namespace you think it is.
I had a blank CustomTool setting, so no code was getting generated. This fixed it, cheers – codeulike Apr 1 '11 at 16:33
You have included the "using namespace" statement in the new code?
Yes. When I did that and the compiler says 'Type or namespace name not be found <namespace>'. I referenced newly imported classfiles from another namespace just fine. The typed dataset is the only file so far it it's namespace. – Bill Martin Dec 31 '08 at 15:03
Confirm if there is namespace in a .cs file generated from .xsd (right-click on .xsd > "View Code"). There is a "Custom Tool Namespace" you can specify in XSD file properties - check out if assigning it helps. – DK. Dec 31 '08 at 16:30
I moved my web site to a Web Application Project and experienced the same issues. I took the approach mentioned in the first answer and was able to get the project to compile eventually. I would like to add a little more detail to the first answer.
To be explicit: I first deleted all of the files associated with the xsd file except for the xsd file itself (the xss, xsc, cs, ...). I then right-clicked on the xsd file, selecting "View Designer" and then "View Code", and then "Run Custom Tool". All of the files were re-gened and the references compiled.
I am trying to build Qt myself because I need it for Visual Studio 2010. The build script (configure.exe) is quite straightforward, but by default the build takes ages. After waiting for an hour on a fast notebook with an SSD, I've given up (using no flags, just configure.exe).
What are the recommended flags for a basic, lightweight Qt application? I mean for someone learning Qt, who doesn't need WebKit and so on.
I've collected the most useful links I found:
The reference page for Qt configure: http://doc.qt.nokia.com/4.7/configure-options.html
Ben's advice from this question: Building Qt 4.5 with Visual C++ 2010
-no-webkit -no-phonon -no-phonon-backend -no-script -no-scripttools -no-multimedia -no-qt3support -fast
Rubenvb's advice from this question: How to compile qt as static
1. Disable debug: -release
2. Disable modules you don't need, especially QtWebKit: -no-webkit -no-script -no-scripttools -no-qt3support -nomake demos -nomake tools -nomake examples
3. Disable LTCG support, which has the nasty side effect of generating huge static libraries: no-ltcg
@Peter I can't find no-ltcg flag in configure, I tested this on Qt 4.7.4. – SIFE Dec 23 '11 at 10:39
1 Answer
That should be ok. Everything beyond webkit is just micro-optimization, if it comes to build times (webkit is huge). I wouldn't set -nomake tools when you want to explore Qt, as you might want to use those tools.
Thank you! After a lot of experimenting, I've settled down on: configure.exe -release -no-webkit -no-phonon -no-phonon-backend -no-script -no-scripttools -no-qt3support -no-multimedia -no-ltcg The only one I'm not so sure about is -no-ltcg. Does it have any side effect? – zsero Apr 8 '11 at 12:36
I have built an application with Unity 3D that just streams data through USB and uses it. But when quitting the application, it hangs. The same happens when I run it from the Unity 3D engine: it runs fine, but once I quit, the engine hangs (Not Responding). This happens only for this application. Any idea why this happens? I am using multiple system threads (C#), is that a probable cause?
1 Answer
Got the problem: I was not disconnecting the USB connection.
I have an NSMutableArray named totalunits. It has some data, and each element has a value like this: ({blah blah blah},{blah = 1}).
The second value should change to 0 instead of 1. That is what I try to do in the following code.
NSMutableDictionary *inappDict=[[NSMutableDictionary alloc] init];
[inappDict setObject:@"0" forKey:@"inapp"];
[[totalunits objectAtIndex:currenttag] replaceObjectAtIndex:1 withObject:newDict];
But using this, I'm getting an exception like this:
[__NSCFArray replaceObjectAtIndex:withObject:]: mutating method sent to immutable object.
Help me.
Thanks in advance
1 Answer
You've initialized your array as a plain NSArray, so it has no mutation methods; double-check that you are operating on an NSMutableArray and not an NSArray.
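A minimal sketch of the usual fix, reusing the names from the question (whether you patch a mutable copy back in like this, or build totalunits out of NSMutableArray objects in the first place, depends on the surrounding code):
// The inner array is immutable, so mutate a copy and put the copy back.
NSMutableArray *inner = [[totalunits objectAtIndex:currenttag] mutableCopy];
[inner replaceObjectAtIndex:1 withObject:inappDict];
[totalunits replaceObjectAtIndex:currenttag withObject:inner];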
well, your exception description is telling another stuff =); an array that is returned by the [totalunits objectAtIndex:currenttag] is a NSArray, not a NSMutableArray - just post here a code, which adds items into the totalunits array – Denis Nov 30 '11 at 12:43
I have a one dimensional grid. Its spacing is a floating point value. I have a point with a floating point coordinate as well. I need to find its distance to the closest grid point.
For example:
0 0.1 0.2 0.3 0.4 0.5
The result would be -0.02 since the closest point is behind it.
However if it was
-1 -0.8 -0.6 -0.4 -0.2 0
The result will be 0.06. As you can see its in floating point and can be negative.
I tried the following:
float spacing = ...;
float point = ...;
while(point >= spacing) point -= spacing;
while(point < 0) point += spacing;
if(std::abs(point - spacing) < point) point -= spacing;
It works, but I'm sure there is a way without loops
Is the spacing linear? – GWW Dec 1 '11 at 18:46
In his examples it is linear. – GWW Dec 1 '11 at 18:56
@MooingDuck: its linear, just not constant (its parameter) – Dani Dec 1 '11 at 19:00
@GWW: its linear – Dani Dec 1 '11 at 19:00
5 Answers
Let us first compute the nearest points on the left and right as follows:
leftBorder = spacing * floor(point/spacing);
rightBorder = leftBorder + spacing;
Then the distance is straightforward:
if ((point - leftBorder) < (rightBorder - point))
    distance = leftBorder - point;
else
    distance = rightBorder - point;
Note that, we could find the nearest points alternatively by ceiling:
rightBorder = spacing * ceil(point/spacing);
leftBorder = rightBorder - spacing;
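Putting the two steps together, a self-contained sketch of the same idea (the function name and the sample values in main are illustrative, not part of the original answer):
#include <cmath>
#include <iostream>

// Signed distance from `point` to the nearest multiple of `spacing`:
// negative when the nearest grid point lies behind the point.
float distanceToGrid(float point, float spacing)
{
    float leftBorder  = spacing * std::floor(point / spacing);
    float rightBorder = leftBorder + spacing;
    if (point - leftBorder < rightBorder - point)
        return leftBorder - point;  // nearest grid point is behind: negative
    return rightBorder - point;     // nearest grid point is ahead: positive
}

int main()
{
    std::cout << distanceToGrid(0.12f, 0.1f) << '\n';  // roughly -0.02
    std::cout << distanceToGrid(-0.26f, 0.2f) << '\n'; // roughly  0.06
}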
Could you please include an explanation of the code? – N.N. Dec 1 '11 at 20:04
Thanks for the suggestion. I modified the variables to make it self-expressive. Should I add more explanation? – petrichor Dec 1 '11 at 20:10
An explanation in natural language would only make your answer better so go for it! – N.N. Dec 1 '11 at 20:11
I added some sentences in the natural language :) – petrichor Dec 1 '11 at 20:18
std::vector<float> spacing = ...;
float point = ...;
float result;
Since you say the spacing isn't (linear), I would cache the sums:
std::vector<float> sums(1, 0.0);
float sum=0;
for(int i=0; i<spacing.size(); ++i)
    sums.push_back(sum += spacing[i]);
//This only needs doing once.
//sums needs to be in increasing order.
Then do a binary search to find the point to the left:
std::vector<float>::iterator iter;
iter = std::lower_bound(sums.begin(), sums.end(), point);
Then find the result from there:
if (iter+1 == sums.end())
return point-*iter;
else {
float midpoint = (*iter + *(iter+1))/2;
    if (point < midpoint)
        result = point - *iter;
    else
        result = *(iter+1) - point;
}
[EDIT] Don't I feel silly. You said the spacing wasn't constant. I interpreted that as not-linear. But then your sample code is linear, just not a compile-time constant. My bad. I'll leave this answer as a more general solution, though your (linear) question is solvable much faster.
The spacing is constant inside invocation, its not constant between invocations... – Dani Dec 1 '11 at 18:59
Here is my first blush attempt, note that this is not tested at all.
float remainder = fmod(point, spacing); // This is the fractional difference of the spaces
int num_spaces = point/spacing; // This is the number of "spaces" down you are, rounded down
// If our fractional part is greater than half of the space length, increase the number of spaces.
// Not sure what you want to do when the point is equidistant to both grid points
if(remainder > .5 * spacing)
    num_spaces++;
float closest_value = num_spaces*spacing;
float distance = closest_value - point;
In his comment to your answer he says it is constant within invocations. – Craig H Dec 1 '11 at 19:01
@MooingDuck: No he states that it isn't constant. – GWW Dec 1 '11 at 19:01
@MooingDuck: I didn't state its not linear, I just state its not 0.1 always. (its a parameter) – Dani Dec 1 '11 at 19:01
@Dani: I misinterpreted the comment. My bad. – Mooing Duck Dec 1 '11 at 19:03
You should just round the number using this:
float spacing = ...;
float point = ...;
(point > 0.0) ? floor(point + spacing/2) : ceil(point - spacing/2);
The spacing isn't constant – Dani Dec 1 '11 at 18:40
@Dani: I clarified how it can be done with non-constant spacing. – Mooing Duck Dec 1 '11 at 19:02
floor and ceiling round to the nearest integer, not the nearest step value. – Craig H Dec 1 '11 at 19:08
Much, much more generally, for arbitrary spacing, dimensions, and measures of distance (metric), the structure you're looking for would be a Voronoi Diagram.
For "fun" I'm loosely porting a few Java classes to PHP (e.g. java.util.regex.Matcher), so I'd like to be able to run/port the unit tests for these:
• java.lang.StringBuilder
• java.util.regex.Pattern
• java.util.regex.Matcher
Where can I find them? Tests from any semi-recent version would be helpful.
I don't know whether these are the same as Oracle's JDK proper, but maybe OpenJDK has such tests? – fge Dec 13 '11 at 16:13
I wouldn't say "native Java classes", but rather "classes in the Java runtime library" – Ingo Kegel Dec 13 '11 at 16:31
1 Answer
The OpenJDK project offers its source online. The test directory for e.g. StringBuilder can be found at http://hg.openjdk.java.net/jdk7u/jdk7u2/jdk/file/58ad18490a50/test/java/lang/StringBuilder/. Replace the version number with the version you are interested in, and adjust the path accordingly to find the tests for all other classes.
Wow, would've expected ... higher ... quality ... tests. Still, +1 – Johan Sjöberg Dec 13 '11 at 16:37
+1: StringBuilder is used in a few places and if it were broken it would show up somewhere else. You only need tests for functionality no other test covers. The code is basically the same as hg.openjdk.java.net/jdk7u/jdk7u2/jdk/file/58ad18490a50/test/… so you can use these tests too. – Peter Lawrey Dec 13 '11 at 16:52
Shouldn't they look exactly the same? Or does Java not have its own fonts? Does it just map font names to OS fonts?
Java does not have its own fonts AFAIK. It uses set of fonts from OS. Like you said, its taking from OS if nothing is specified. – vpram86 Feb 23 '10 at 9:16
Could you post screenshots to show the difference? That way everyone knows what we're talking about :-) – Ivo Flipse Feb 23 '10 at 10:37
couldn't figure out how to post directly with superuser, so here's a link: img246.imageshack.us/img246/9348/41814352.png top left "EUR/USD" is MAC, other 2 are XP to show the difference... notice the 7s and 8s look different. Plus the characters kinda feel a bit wider in OS X screenshot. And the "EUR/USD" looks wider or bolder than the XP GBP/JPY text. – Mikey Feb 23 '10 at 12:24
1 Answer
Java normally uses system fonts, just like other programs. According to the spec: "All implementations of the Java 2 platform must support TrueType fonts; support for other font technologies is implementation dependent." (Javadocs of java.awt.font).
There's one twist: Since a Java program cannot know in advance what fonts will be available on the target platform, there are "logical font names" ("serif", "monospaced" etc.). These are always available in Java; the Java runtime will map these pseudo-fontnames to an appropriate font on the system.
This mapping is configurable. See http://java.sun.com/j2se/1.5.0/docs/guide/intl/fontconfig.html.
I suspect that the Java app in question uses these logical font names, which are simply mapped to different fonts on Windows and on Mac OS.
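A small illustrative snippet (not from the original answer) that makes the mapping visible: the face reported for the logical name will typically differ between Windows and Mac OS, while a physical family name resolves to the same face wherever that font is installed.
import java.awt.Font;

public class FontCheck {
    public static void main(String[] args) {
        // "Serif" is a logical font name; the Java runtime maps it to some platform font.
        Font logical = new Font("Serif", Font.PLAIN, 12);
        // "Arial" is a physical family name; you only get this face if it is installed.
        Font physical = new Font("Arial", Font.PLAIN, 12);
        System.out.println("logical  -> " + logical.getFontName());
        System.out.println("physical -> " + physical.getFontName());
    }
}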
I have a problem with one of my computers. It does not display any website properly. The background clouds on some websites do not show up, the layout of some websites are not properly displayed. And it only happens to one computer.
OS: Windows XP
Internet Explorer: 7 (not working on either Internet Explorer 7 or Internet Explorer 8)
I have tried to add my website as a trusted site >> still that site is not working properly. I also tried Internet Options >> Advanced >> ... >> checked Show images, smart image dithering, and enable visual styles in buttons and controls on webpages.
Is there anything else I could do to fix the problem?
A screenshot would be nice. Have you tried Firefox or Chrome or Opera ? – Michael B. Mar 1 '10 at 22:16
I'm not going to put this as an answer because it would be downvoted into hell. Don't use IE. Use an actual browser. – Wuffers Oct 21 '10 at 22:43
1 Answer
My brother had a problem like this. Somehow his color scheme was changed to High Contrast, and this also disabled background images in accessibility-aware applications like Internet Explorer and Firefox.
Check your color scheme:
1. Right click on the desktop and choose "Properties" from the context menu.
2. Click on the "Appearance" tab on the Display Properties control panel.
3. On the "Color Scheme" drop down, choose "Windows Standard".
4. Click Ok.
See if that fixes your problem.
I would like to use a conditional formatting rule in an excel file that would color any box with a question mark in it red. It seems that Excel is using a question mark as a wild card and will turn all cells with at least one character in them red. How can i escape the question mark?
These don't seem to work:
• "?"
• \?
• '?'
• ??
2 Answers
Prefix it with a tilde: ~?
Thanks for the quick answer. – kzh May 24 '10 at 21:20
Change the rule to "cells ending with" "?". This will color all cells with a question mark assuming it is the last character.
When I send an email to a non-existent domain, or a domain without an mx record, I am receiving the email at my domain for the same user. To clarify my issue, here's an example:
from the command line I send an email
/usr/sbin/sendmail [email protected]
this is my message
After doing this, I find an email to [email protected] (where www.mydomain.com is the domain name of the server from which I sent the email). In the headers of the email, I see that originally the To: address was [email protected], but then the server changed it to [email protected].
Ideally, I want the mail server to discard this message or bounce it, not relay it to my domain.
I've been playing around with the sendmail config (/etc/mail/sendmail.mc) for hours, but I am still having no luck with figuring out why this is happening. Is this even sendmail that's doing it, or am I looking in the wrong place?
Thanks in advance.
It is sendmail that's doing it. I think it is assuming that thisdomaindoesnotexist.com is a host on the local domain. As to the fix, I am digging. – hbdgaf Nov 12 '10 at 19:05
1 Answer
Thanks for the help aking1012, and thank you parallels forums: http://www.forum.psoft.net/showthread.php?t=13231
This was actually a DNS issue. Our nameserver had a wildcard ('*') entry that resolved to our domain. When I ping'd a non-existent domain on any of our servers, it was resolving to our domain again. I took out the wildcard entry in the DNS Zone record, and bingo! In the thread referenced above there are ideas on what can be done without having to remove the wildcard entry. In our case, it just made sense to take it out.
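For anyone hunting for the same thing: in a BIND-style zone file such a wildcard is typically a record of roughly this shape (illustrative values only):
* IN A 203.0.113.10
It makes every otherwise-undefined name under the zone resolve to that address, so mail sent to a name that should not exist still finds an A record and ends up at your own server instead of bouncing.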
I usually look at the event log to find out at what time I switched on my computer. I look for the entry saying "The Event Log service was started". I use this to record in-out times at office. These days, I usually hibernate my computer instead of shutting it down. Now, the event log does not record an event when the machine is switched on (and rightly so).
How do I find out at what time the computer was switched on when it has booted out of hibernation?
2 Answers
Assuming that you have the Event Log enabled,
Check the System Log events after a restore from hibernation.
You will find a series of events
• First will be the Event Log itself starting again
• This will be followed by Service Control Manager events starting services again.
The timestamp for these events would help you identify the exact time you came out of hibernation. Once you know this pattern, it can be used any time in future.
Some notes.
1. You can reach the Event Logs through: Start ==> Administrative Tools ==> Event Viewer
2. You might want to increase the log file size if you want to retain events too far back in time (but, usually your last hibernation would not be too far back in time)
3. You can export the event log in various formats (CSV, TXT, EVT) to search through them later
This works for me: start a command prompt and run net statistics workstation. The first thing returned is the start time.
He needs the time he came out of hibernation, not boot time. For that you could just use systeminfo | findstr "Up Time:" – John T Aug 24 '09 at 6:18
Sorry, I misunderstood the question. – alex Aug 24 '09 at 6:30
John T, even systeminfo gives me the time since the last full boot instead of the time since it came out of hibernation. – Agnel Kurian Aug 24 '09 at 9:35
I know, that's what I said... If it worked for hibernation time I would have answered with it. – John T Aug 24 '09 at 15:34
While looking for a wireless mouse I am surprised how hard it is to find security information on them (I guess most consumers simply don't care). I managed to find some information about security in Logitech devices. Is there any similar information about Microsoft mice I failed to find?
1 Answer
I could not find a general document like the one you posted, but this Google search shows that some models use 128-bit AES encryption. I would use the following search, and add your specific model to the search.
Search "microsoft wireless mouse or keyboard encryption site:microsoft.com" (without the quotes).
Thanks. My various tries simply dropped "encryption". Now I find at least for some models... – Maciej Piechotka Jul 17 '11 at 13:22
Thank you for your excellent acceptance rate. – KCotreau Jul 17 '11 at 13:31
It's only one click (and 2 free points for me ;) ). I didn't accepted answer in one case when neither of the options worked. – Maciej Piechotka Jul 17 '11 at 14:24
In the last few days I've noticed really poor performance on a virtual machine I'm using at work. It's running Windows Server 2003 SP 2. When I check Task Manager, the process wpffontcache_v0400.exe is consuming almost all of the CPU.
What is wpffontcache_v0400.exe and how can I get it to not consume all of the CPU?
When I kill the process it starts up again straight away. Sometimes it resumes to normal CPU usage, other times it just jumps back to 100% usage no matter how many times I kill it.
Maybe this...spywareremovalhelp.org/spyware-removal-help/… – Moab Dec 16 '11 at 6:17
1 Answer
Clearing the font cache as described here by Jordan and here by Cody seems to do the trick.
Before I installed Java JDK1.6 on WinXP machine I typed from the CMD line
'javac' is not recognized as internal command ...
Here I concluded I did not have a JDK installation on my PC
So, I installed java 1.6.0_30
Usage: javac <options> <source> files . ....
Conclusion: I now have JDK 1.6 installed
c:\java -version
java version "1.4.2_06"
Conclusion: I don't have JDK 1.6
Discovery: I have the following files on my hard disk as follows:
c:\DevSuiteHome\jre\1.4.2.\bin\java.exe -- before I installed JDK 1.6
c:\Program Files\Java\jdk1.6.0_30\java.exe -- recently installed.
My PATH Environment points to both JDKs
c:\DevSuiteHome_1\jck\jre\bin;c:\Program Files\Java\jdk1.6.0_30\jre\bin...
Questions: How can I get 'java -version' to reference 1.6.0_30? Can I expect errors with my java code because I have two JDK versions?
1 Answer
It doesn't point to the one you want. Amend your PATH. Remove both these references.
Add this.
c:\Program Files\Java\jdk1.6.0_30
Now open a cmd prompt, then run java -version and hopefully it reports 1.6.
And no you won't get errors having 2 java versions. It'd only use one or the other.
I have a sheet that I would like to highlight cells that are equal to any of the values listed in range of cells on another sheet. How do I do this?
Sheet 1 contains values I want to highlight in green any cell that equals any value in Row A of sheet 2.
For example, column A contains Apples, Oranges, and Peaches.
I want to highlight any cell on sheet 1 that equals Apples, Oranges, or Peaches.
1 Answer
In Excel 2010 you can use cells from other sheets in the conditional formats formula. In earlier versions this is not possible.
To search if the value is in a list use the CountIF function.
Conditional Format Rule Dialog Box
I don't want to count the number of occurrences. I have edited my question to make it more clear. (I hope) – Larry Aug 9 '12 at 2:06
Have you tried it? The conditional format is asking a true false question. Anything other than 0 is true. If the count is greater than zero it will correctly highlight the cell – wbeard52 Aug 9 '12 at 2:20
Yes and it said I had too few arguments. Can you break down what's in the parenthesis? – Larry Aug 9 '12 at 2:25
Here is what I have =COUNTIF('Classes I Teach'!$A:$A,) what do I put after the comma for the range? – Larry Aug 9 '12 at 2:51
The CountIf function needs two arguments. ([Range to search in], [Value to Search]). For you it looks you need to add the value you are searching for after the 'Classes I Teach'!$A:$A. My advice, get the formula to work in an adjacent cell (not in the conditional format box) and then copy that formula to the conditional format – wbeard52 Aug 9 '12 at 3:27
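Putting the pieces together, the conditional-formatting rule should end up with a formula roughly like this (assuming, as in the comments, that the lookup list sits in column A of a sheet named 'Classes I Teach' and that the rule is applied to a range whose first cell is A1, so A1 acts as a relative reference to each formatted cell):
=COUNTIF('Classes I Teach'!$A:$A,A1)>0
Any count greater than zero makes the condition true, so the highlight is applied.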
I remember for all the previous versions of windows there were skins to make your computer look better (task bars etc with a better design etc) in the form of skins. Is there something like this for the modern UI in windows 8 yet? I really don't like the look of it and I think I would be able to adapt to it better if it had a better look.
2 Answers
There are a few built in themes you can set under the settings app to change the color scheme and backgrounds for the start screen.
Also, You can adjust the look and feel of desktop mode in the same way you could in windows 7.
Anything outside of that you will need to wait for a third party app to figure it out like WindowBlinds, etc.
Aren't there things out yet? It has been arounbd in dev mode for almost a year now – Maarten Oct 26 '12 at 20:44
The only one I know of that works great (IMO) is windowblinds, and they have not released a Windows 8 version yet, but if the release of windows 7 follows this time, It shouldn't be very long before WB and a matching SDK is released. at that point its up to the community to produce the themes. – Jared Tritsch Oct 26 '12 at 21:02
I am not aware of any "themes" but you can definitely customize the Metro display by performing these steps:
1. Launch the Metro display by pressing WinKey.
2. Type "customize".
3. Click "Settings" on the right hand side:
4. Click "Customize Your Start Screen".
5. You will then be able to change the style and color of the start screen.
Im looking for more drastic "skins" like this digitaltrends.com/web/… (this isn't an actual skin, just an example of a change that would be drastic) – Maarten Oct 26 '12 at 20:47
Possible Duplicate:
Is there a way to set my current location manually on Windows 8?
Is it possible to spoof a location in Windows 8. The motivation is for testing some Windows 8 location querying software.
1 Answer
Since you added that it is a web app, the only way would be using anything that modifies your public IP address:
You can do that with visual studio:
Source: mvark.blogspot.it
Is this the emulator? I don't have the source for this program that I want to see the different location, or rather it is a web app. – Mikhail Nov 2 '12 at 17:11
I know that in a particular commit I added a line and can see it in a private GitHub account. In my current branch it isn't in there any more. How do I find which commit removed it?
1 Answer
Use git blame with the --reverse option:
$ git blame --reverse START.. file.ext
where START is a revision which still contains the line in question.
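For example, if the line was still present at the (hypothetical) tag v1.0 and lives in path/to/file.ext:
$ git blame --reverse v1.0.. path/to/file.ext
Each line in the output is annotated with the last commit in which it still existed; the commit that actually removed your line is the one that follows it in that stretch of history.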
Excel 2010 does not update hyperlink references when previous rows to the destination are added or deleted unlike excel XP. I can create a reference as text by searching for the proper cell on another sheet but hyperlinks do NOT accept text as a reference. INDIRECT does not work since hypertext then takes the value at the destination as the reference.
Is there a question in your statement or are you just venting frustration? This issue would be a lot easier to understand if you added a few examples to illustrate. For e.g. What are "previous rows to the destination"?? – teylyn Feb 8 '13 at 5:28
I use SuperDuper! to backup my Mac (running SL 10.6.1) to my LaCie Rugged Hard Disk.
Because I lock my hard drive away when it's not in use, every time I plug it in it gives me a pop-up box saying:
time machine popup box
I always click 'Don't Use' but now it always pops up regardless of what I click.
This seems to have only started since I installed Snow Leopard. When I ran Leopard (10.5) it only appeared once (when I plugged in the HD for the first time).
Is this a new 'feature' that Apple has included in SL? Or is this not normal, and if so how can I fix it?
UPDATE: Sorry Forgot to mention that Time Machine is turned off in system preferences.
I believe this is governed by some hidden file, what does ls -la | grep ' \.' give? – cobbal Oct 18 '09 at 18:43
4 Answers
A workaround from Stop Time Machine from Nagging About Every External Disk at AFP548:
Every time you plug in a different external disk Time Machine asks if you want to use it for backups. [..] What we need is a way to set a policy that tells Time Machine to not ask about every disk that is plugged in. [..] Here it is:
defaults write com.apple.TimeMachine DoNotOfferNewDisksForBackup -bool YES
Quick and straight forward, perfect answer. – Jasarien Oct 19 '09 at 19:28
Quality. Thanks! – OrangeBox Jan 20 '12 at 2:35
Small chance (as you did not have this problem in 10.5 Leopard): is SuperDuper! overwriting the whole disk each time? If so, then it might also remove some hidden file that tells Time Machine you want to ignore that disk. (I'm just guessing; maybe that setting is kept in some extended attribute rather than in some hidden file.) If it indeed uses some hidden file, and if SuperDuper! deletes that, then maybe you can set that hidden file to be read-only for SuperDuper!? (Funny !?... ;-))
(Hmmm, this discussion might cover it: Did you schedule an erase-then-copy backup, rather than a Smart Update? This is the kind of thing that TM does when a new drive appears... But you're using Smart Update already.)
Still, Time Machine being switched off, it should not bother you in the first place, so something surely is wrong. Maybe just remove /Library/Preferences/com.apple.TimeMachine.plist and disable it again?
I have seen weird files like .com.apple.timemachine.supported pop up on my external drives... – Isaac Waller Oct 18 '09 at 21:21
I think that's something else (people also use that to make Time Machine support network drives that are actually not supported...) – Arjan Oct 18 '09 at 21:22
nope it does a 'Smart Update' – cust0s Oct 18 '09 at 21:24
I thought Hasaan Chop's solution would work for me, but I had to do this:
defaults write /Library/Preferences/com.apple.TimeMachine DoNotOfferNewDisksForBackup -bool YES
Perhaps because I'm on Snow Leopard? Found on Apple Support.
If you don't use TM at all you can go into System Preferences > Time Machine and turn it off. This will stop the prompts.
Sorry Forgot to mention that Time Machine is turned off in system preferences. – cust0s Oct 18 '09 at 18:27
I have a dvr set up and working on local net and my ports are forwarded. I checked and all ports are open but I can't access it from remote using public IP address.
Also the zviewer app for my phone won't connect; it doesn't say the port isn't open, it just keeps checking, and from a friend's computer it says it cannot access the page either.
I've used a fresh IP address and it is still not working, and I have already tried the firewall on both computers. Is there any reason I'd be having this problem?
Did you check ports on a port scanner if you are access over the internet? – AthomSfere Mar 27 '13 at 3:39
Fun In The Philippines
(image 29 of 40)
An un-ridden left somewhere on one of the 7000+ islands in the Republic of the Philippines. Note the short interval swell lines...that there is typhoon swell! Photo: Zak Noyle/SPL
swfchan: wakfu.swf (wiki)
This is the wiki page for Flash #102657
15,01 MiB, 08:00 | [W] [I]
Threads (4):
ARCHIVEDDiscovered: 23/3 -2013 20:55:23 Ended: 27/6 -2013 14:02:49Flashes: 1 Posts: 2
/ > /fap/ > Thread 2063 Age: 88.72d Health: 0% Posters: 2 Posts: 2 Replies: 1 Files: 1+3
>> Anonymous 23mar2013(sa)20:32 No.6034 OP P1
Discovered that this flash had never been seen with "zone" in the file name, meaning it didn't show up if someone search for "zone
[IMG] zone - wakfu.swf (15.01 MiB) 900x500, Compressed. 11527 frames, 24 fps (08:00). Ver11, AS3. Network access: No. Text: Yes. B
>> Hyper Time 27mar2013(we)07:59 No.6075 A P2R1 The ending credits did not need to be there. Just saying
end of thread
ARCHIVEDDiscovered: 29/8 -2012 10:34:39 Ended: 5/10 -2012 19:06:39Flashes: 1 Posts: 8
/ > /fap/ > Thread 1430 Age: 30.47d Health: 0% Posters: 7 Posts: 8 Replies: 7 Files: 1+3
>> VewtifulGuy 29aug2012(we)10:19 No.4365 OP P1 Zone Archive- Wakfu Enjoy
>> Anonymous 2sep2012(su)09:58 No.4410 A P2R1 I think Wankfu would have been a better title.
>> Nanonymous 3sep2012(mo)03:52 No.4434 B P3R2
i still wish someone would hack this like they did with the sakura flash
>> Anonymous 3sep2012(mo)21:03 No.4437 C P4R3
Anyone knows if there's a flash loop of that animation at the end? without the duckcensor of course...
>> Anonymous 4sep2012(tu)04:22 No.4451 D P5R4 >>4437 just click the duck and it'll go away
>> Lennart Gustafsson 4sep2012(tu)23:50 No.4452 B P6R5 >>4451 thats not enough time to fap, lol
>> J0hnik 5sep2012(we)00:53 No.4453 E P7R6 yeh need more time =\
>> Anonymous 5sep2012(we)14:04 No.4456 F P8R7
>>4437 http://rule34.paheal.net/post/view/792595 Here's a .gif
end of thread
[HSNC37R]F MULTI !!! http://boards.4chan.org/f/res/1604650
ARCHIVEDDiscovered: 20/1 -2012 09:29:39 Ended: 8/2 -2012 06:32:52Flashes: 2 Posts: 71
File[Wakfu.swf] - (2 KB) [_] [H] Wakfu Wakfu 01/20/12(Fri)03:23 No.1604650 Wakfu alt for /f/
Marked for deletion (old). >> [_] Virus Virus 01/20/12(Fri)03:30 No.1604651
Virus Virus Virus Virus Virus Virus Virus Virus Virus Virus Virus Virus Virus Virus Virus Virus Virus Virus Virus Virus Virus Viru
>> [_] Anonymous 01/20/12(Fri)03:32 No.1604652 >>1604651
are you kidding me i made it just now as a little script so people can still enjoy it on /f/
>> [_] Anonymous 01/20/12(Fri)03:36 No.1604653 >>1604652 What does it even do?
>> [_] Anonymous 01/20/12(Fri)03:37 No.1604654 seems legit
>> [_] Anonymous 01/20/12(Fri)03:38 No.1604655
you click the button and it goes to the wakfu flash from my drop box >>1604653
>> [_] Anonymous 01/20/12(Fri)03:40 No.1604657 Legit, good stuff.
>> [_] Anonymous 01/20/12(Fri)03:42 No.1604659 >>1604657 thanks for a legit reply
>> [_] Anonymous 01/20/12(Fri)03:46 No.1604661 bump >> [_] Anonymous 01/20/12(Fri)03:48 No.1604662
>>1604657 Second. Very legit. >> [_] Anonymous 01/20/12(Fri)03:49 No.1604663
>dat comments roll I bet he delayed wakfu JUST for that. So luls.
>> [_] Anonymous 01/20/12(Fri)03:52 No.1604666 Best Zone yet.
>> [_] Anonymous 01/20/12(Fri)03:54 No.1604667
fuckin super high def hot as shit fapped relentlessly, sounds were great animations were great hnnnnnng. OP *3*
>> [_] Anonymous 01/20/12(Fri)03:57 No.1604668
>>1604667 yeah because i simply made it a button i didn't need to compress it so its the original copy
>> [_] Anonymous 01/20/12(Fri)04:09 No.1604670 nom nom nom nom nom nom.
>> [_] Anonymous 01/20/12(Fri)04:10 No.1604672 >>1604650
any way we can make this a flash without having to go through your dropbox? incase you delete it or something in the future is all
>> [_] Anonymous 01/20/12(Fri)04:16 No.1604674
>>1604672 i never touch the items in my drop box because i bought the premium storage option so dw
>> [_] Anonymous 01/20/12(Fri)04:20 No.1604676
Its legit people. I tried it on my phone which can play flash for some reason lol
>> [_] Anonymous 01/20/12(Fri)04:26 No.1604677
>>1604672 if you really want to download the wakfu flash take the dropbox dl link. link it somewhere right click on it and save as
>> [_] Anonymous 01/20/12(Fri)04:31 No.1604680
holy shit balls,mfw it wasnt ren and stimpy or some other trollbait.
end of preview
[BJN8W5B]F ! http://swfchan.org/946/
ARCHIVEDDiscovered: 14/1 -2012 03:08:30 Ended: 29/5 -2012 17:23:57Flashes: 1 Posts: 12
/ > /fap/ > Thread 946 Age: 135.74d Health: 0% Posters: 7 Posts: 12 Replies: 11 Files: 1+3
>> Anonymous 14jan2012(sa)01:57 No.3053 OP P1 Wakfu wakfu
>> Anonymous 14jan2012(sa)02:20 No.3054 A P2R1 Finally! Thanks a million for posting this!
>> Anonymous 14jan2012(sa)06:03 No.3059 B P3R2 This stuff is great man...
>> Anonymous 14jan2012(sa)21:52 No.3061 C P4R3 From the front page of swfchan.com:
You can download wakfu.swf much faster using this link. http://www.megaupload.com/?d=AQ8LCZ5Z
>> Anonymous 17jan2012(tu)00:34 No.3083 D P5R4
oh well, this is what you get from making things with fluffy, this vanilla softcore gayness.
>> Anonymous 17jan2012(tu)06:27 No.3084 A P6R5
>>3083 >fluffy So THAT'S what that damn text said. They made it so hard to read I thought it just said "Slutty". Not that "Fluffy"
>> Anonymous 2feb2012(th)04:46 No.3149 E P7R6 Someone hurry up and decompile this.
>> Anonymous 3feb2012(fr)15:54 No.3157 F P8R7 >>3149 Why? What do you want changed?
>> Anonymous 5feb2012(su)21:17 No.3160 E P9R8
>>3157 Really nothing. I just want it decompiled and posted because he went to the trouble to encrypt it. I've tried a few convolu
>> Anonymous 5feb2012(su)22:04 No.3161 A P10R9
>>3160 You must have meant to say that he has obfuscated the code. If the flash was in fact encrypted we wouldn't be able to watch
Why do you want a non-obfuscated version of the flash? You're not planning to make one of those dumb "hood" versions, are you...?
>> Anonymous 7feb2012(tu)03:41 No.3190 E P11R10
>>3161 Haha. No, if I intended any edit, it would just be to get rid of the "fluff" garbage, and launch directly into the sexing.
>> Anonymous 7feb2012(tu)17:13 No.3195 A P12R11
Being able to jump directly to the last scene (where there are text scrolling on the right side of the screen) would have been nic
end of thread
In the answer to bibtex vs. biber and biblatex vs. natbib the advantages and disadvantages of Biber vs. BibTeX (and of natbib vs. biblatex) were explained. However, one point was omitted: The question of compatibility of bibliographies between BibTeX and biblatex.
After using BibTeX for a while, you get a collection of bibliographies in BibTeX style. Furthermore, many journals give BibTeX entries for citations on their home page. A lot of bibliography management programs either support BibTeX export, or even manage BibTeX files directly. Also, advanced editors like Emacs have special support for BibTeX bibliographies.
In short, there's a massive infrastructure for creating and managing BibTeX bibliographies and a big stock of existing ones. Moreover, as was noted in the other question, you cannot use Biber together with natbib, which is used by quite a few journals.
Therefore the following question arises: Is the bibliography format of Biber compatible with the bibliography format used by BibTeX? Or if not, is there at least an automatic conversion tool?
Note that both directions are interesting: If Biber cannot use existing BibTeX bibliographies, this means that you have to convert every existing BibTeX bibliography, and do so also for BibTeX entries from journals (unless they also offer Biber, but often it's just BibTeX and EndNote) and bibliographies exported from bibliography management programs. On the other hand, if Biber files cannot be used for BibTeX and no conversion exists, it means that any Biber bibliography is useless for journals using natbib, and any programs interpreting directly BibTeX bibliography files cannot be used any more (including editor support).
Also, in case that Biber can read BibTeX files, but not vice versa: Does Biber still have advantages over BibTeX if the files are restricted to BibTeX-compatible ones?
3 Answers
To parse BibTeX format files, Biber uses a C library called "btparse" which is, for all intents and purposes, 99.9% compatible with BibTeX. So, You should rarely have problem using Biber as a drop-in replacement for BibTeX. As mentioned by others, the issue is rather the slightly different data model which biblatex has compared with the data model in BibTeX.
So, your question really relates to the difference in data models between plain BibTeX and BibLaTeX, regardless of whether you are using Biber as the biblatex backend. Be aware that in the future, around BibLaTeX 2.x, BibTeX will no longer be supported as a biblatex backend as it has too many limitations. Of course BibTeX format data files will always be supported.
The more important question is, as you mention, what the advantages of Biber might be even if you are not using any of the biblatex data model specifics. Here are some advantages of Biber in this respect (you can get an idea by searching for the string "Biber only" in the biblatex manual), omitting the features which require data source changes:
• Support of data sources other than .bib (currently, RIS, Endnote XML, Zotero RDFXML)
• Support for remote data sources (.bib files available via ftp or http)
• Support of other output formats (in 0.9.8 it will support GraphViz .dot output for data visualisation and conversion to the planned biblateXML format)
• Full Unicode 6.0 support (including file names and citation keys)
• A sorting mechanism which I think is probably as good or better than any commercial product - full Unicode, multi-field, per-field case and direction, CLDR aware and completely user configurable. BibTeX doesn't come close in this regard.
• Automatic name and name list disambiguation. I think this is quite an impressive feature. See section 4.11.4 of the biblatex manual for a very good explanation of this with examples.
• Completely customisable crossref inheritance rules. BibTeX has a very basic static rule only.
• Automatic encoding and decoding, including UTF-8 <-> LaTeX macros
• Very flexible configuration file "sourcemap" option which can be used to change the .bib data as it is read by Biber, without changing the actual data source itself. You can use this to do all sorts of things like drop certain fields, add fields, conditionally drop/add fields, change fields using full Perl 5.14 regular expressions (see Biber manual section 3.1.2).
This last feature is particularly interesting for you as you can potentially map your pure BibTeX .bib files into the biblatex model on the fly as Biber reads them but without altering the files. It's also very useful for dropping fields like abstract which often cause trouble due to LaTeX reserved characters.
There are also some other features implemented in Biber which are available in BibLaTeX 2.x:
• Customisable labels
• Multiple bibliographies in the same refsection with their own sorting/filtering
• "Related" entries - a general solution to the issue of all these "reprinted as", "translated as" etc. requirements.
I forgot to mention that Biber automatically applies the BibLaTeX field and entrytype mappings (address -> location etc.) mentioned in the documentation. It does this by implementing some driver-level source mappings (see \DeclareSourcemap and its variants in the biblatex documentation).
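For illustration, a map of roughly this shape in the preamble drops every abstract field as the .bib data is read, without touching the file itself (a sketch along the lines of the biblatex documentation; check the exact syntax against the biblatex/Biber versions you use):
\DeclareSourcemap{
  \maps[datatype=bibtex]{
    \map{
      \step[fieldset=abstract, null]
    }
  }
}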
Thank you for this answer. So if I understand you correctly, I can use standard bibtex .bib file and augment them with a file of transformation rules for biber/biblatex. So I can still use bibtex in cases where is is required, but yet use all biblatex features through transformation rules. Is that right? In that case, I think I'll revise my previous conclusion. – celtschk Dec 5 '11 at 11:43
Yes, that should be largely possible. – PLK Dec 5 '11 at 12:07
Maybe I'm missing something here, but I don't think that mapping really helps. Take the example I already mentioned, the bookauthor field. It doesn't exist in bibtex; now if I want to make use of this field in biblatex I have to actually write it in the .bib file. There is no way, to automatically map an author or editor field (which are the only ones traditionally supported by bibtex) into a bookauthor field, biber itself can't decide which editor is actually a bookauthor and which one has to remain an editor in biblatex. Mapping is only possible if a clear equivalent exist. – Simifilm Dec 5 '11 at 12:56
You're quite right - the mapping can only help when adding information if it is static and predictable from the information already in the data. It can't help you with completely new information. But in that case, you can add the new fields anyway and bibtex will ignore them if it doesn't know about them. – PLK Dec 5 '11 at 13:23
Thank you. I've now switched the accepted answer to yours because it contains the most relevant information. – celtschk Dec 5 '11 at 20:24
As a principle: the more you make use of biblatex/Biber's strengths, the harder it gets to go back to a traditional BibTeX workflow.
If you come with your BibTeX database and want to use it with biblatex/Biber there are only few areas you have to tweak: chapter 2.3 of the biblatex manual lists the following points:
• The entry type @inbook. See §§ 2.1.1 and 2.3.1 for details.
• The fields institution, organization, and publisher as well as the aliases address and school. See §§ 2.2.2, 2.2.5, 2.3.4 for details.
• The handling of certain types of titles. See § 2.3.5 for details.
• The field series. See §§ 2.2.2 and 2.3.7 for details.
• The fields year and month. See §§ 2.2.2, 2.3.8, 2.3.9 for details.
• The field edition. See § 2.2.2 for details.
• The field key. See § 2.3.2 for details.
Not all of the changes will probably concern you and not all really result in incompatibilities.
The more you delve into biblatex and the more you make use of its unique types, fields and features, the more difficult it gets to go back, of course. The whole question of conversion between between the two actually becomes moot, since it's not really a question of different formats but of biblatex simply offering many things BibTeX has no equivalent for.
Just one example which was important for me: Traditional BibTeX doesn't have a bookauthor field which is a kind of deal breaker in certain areas of humanities. The fact that biblatex has this field was one of the main reasons I originally switched to biblatex. Now since BibTeX doesn't know about this field there's no sensible way to have a useful conversion between the two. The same is true for many other fields. So going to biblatex is a kind of a one way street after you crossed a certain point.
Thank you for your answer. That's exactly the type of information I was after. So I now understand that as long as the important journals don't accept biblatex/biber bibliographies, it also doesn't make much sense for me to use them on other files. – celtschk Dec 5 '11 at 0:10
As you are asking about Biber vs. BibTeX (that is, binary drivers for processing the .aux file generated by LaTeX with respect to citations) I assume you are using biblatex as bibliography package, as Biber does not support any other bibliography package (old-style BibTeX, natbib).
With respect to the bibliography databases: AFAIK Biber is mostly compatible to BibTeX. It is, however, more picky wrt having well-formed entries, so chances are high that something "does not work" out of the box. If you do not want to fix all this immediately, using biblatex together with bibtex8 -W is generally a good workaround. You do not have to switch immediately to Biber just for using biblatex.
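In case it is useful, that workaround amounts to something like the following (a sketch; add whatever style options you normally use):

% load biblatex with the bibtex8 backend instead of Biber
\usepackage[backend=bibtex8]{biblatex}
% then process the bibliography with:  bibtex8 -W <jobname>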
A more serious problem is that old-style BibTeX and biblatex are not to a hundred percent compatible wrt the database entries. Actually, biblatex cleans up with many idiosyncrasies of the old-style BibTeX database format:
For instance, in old-style BibTeX there are two fields to indicate an address: address for the address of the publisher, which is used in all entry types except @inproceedings. For @inproceedings entries one has to give the location of the venue instead; the address of the publisher must not be given (BibTeX warns about this and ignores it).
With biblatex, both are synonyms for "where it has been published", which makes more sense, as you never use both within the same entry – theoretically. However, many @inproceedings entries one finds on the web (ACM, IEEE, ...) contain both, even though this is technically not correct. When you have imported these entries to your personal .bib files and use them with biblatex, the address (of the publisher) sometimes overwrites the location of the venue, resulting in wrong information.
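A hypothetical entry of the sort described above (all values invented for illustration) would carry both fields at once:

@inproceedings{doe2010example,
  author    = {John Doe},
  title     = {An Example Paper},
  booktitle = {Proceedings of Some Conference},
  year      = {2010},
  location  = {Vienna},
  address   = {New York},
  publisher = {ACM},
}

Here location names the venue while address carries the publisher's address; once biblatex treats both as synonyms, the imported address can displace the venue information in the way just described.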
Thank you for the explanation. Actually up to now I'm using neither biblatex nor biber, but I'm trying to find out whether I should (where I'm not constrained in what I use). Especially important for me is that I can at any time go back to bibtex/natbib where required, and that I don't lose all that infrastructure. As far as I understand, for biblatex vs.standard/natbib the changes are localized to a single document. But not so for bibtex/biber. I'm not yet accepting your answer because you only answered part of my question, and I still hope for answers to the rest, esp. about reverse dir. – celtschk Dec 4 '11 at 20:22
\ovalbox{A} \\
\uparrow \downarrow \\
\ovalbox{B} \\
\uparrow \\
(The fancybox package is used here, so that \ovalbox{B} appears as a capital B in a box with curved corners.)
This is a crude sort of picture of a simple directed graph, and although all I need is something simple like this, I also want to be able to show an arrow pointing from \ovalbox{B} to itself, set to the right of \ovalbox{B}. How can I do that?
Welcome to TeX.sx! A tip: If you indent lines by 4 spaces or enclose words in backticks `, they'll be marked as code, as can be seen in my edit. You can also highlight the code and click the "code" button (with "{}" on it). Also, it's usually preferable to post a complete minimal document instead of a code fragment. – Alan Munn Apr 15 '12 at 19:16
For looking up symbols, see “How to look up a symbol?”. However, like Alan, I would suggest using TikZ for this sort of thing. The second tutorial in the manual (pgfmanual.pdf) should contain everything you need. Also see tex.stackexchange.com/questions/503/why-is-preferable-to – Caramdir Apr 15 '12 at 19:55
1 Answer
I'm not sure the way you're doing this is ideal. I would suggest using the TikZ automata library:
% needs \usetikzlibrary{automata,positioning} in the preamble
\begin{tikzpicture}[every state/.style={draw,rectangle, rounded corners},node distance=2em]
\node[state] (A) {A};
\node[state] (B) [below= of A] {B};
\node[state] (C) [below= of B] {C};
\path[thick,-to] (A.-105) edge (B.105)
(B) edge (C)
(B.75) edge (A.-75)
(B) edge [loop right] (B);
\end{tikzpicture}
[Image: output of the code above]
In my thesis I have to quote a lot from a single weblog. The URLs are mostly of this structure:
• blogname.providername.com/stories/123456
• blogname.providername.com/stories/some-random-text-taken-from-the-first-words-of-the-blogpost
• blogname.providername.com/topics/some-topic
or if it is a comment
• blogname.providername.com/stories/123456/#123456
I quote probably several dozen of these very long URLs mainly within one chapter, normally in the footnotes. The problem is that the footnotes get almost unreadable, bloated and hard to typeset properly (line breaks etc.). Because of that I thought of shortening them and citing them in an appendix (next to the bibliography).
The idea is to use something along the lines of “see URL-1, 2012-03-01” and in the next citation “see URL-2, 2010-07-20”, so they are numbered sequentially. And in the appendix there would be
URL-1: blogname.providername.com/stories/123456
URL-2: blogname.providername.com/stories/327162/#123456
The questions are:
• would it be useful to quote them via a bibtex/biblatex database? Or is there a better way, because the list in the appendix should be separated from the ‘normal’ bibliography
• how to number them automatically as they are quoted?
For regular bibliography management I use biblatex and biber. I quote other URLs as well, so the bibliography driver for URLs shouldn’t be modified generally. BTW: the URLs have to be put into \url{}, because of hyperlinking them via hyperref and mainly because some of them include hash keys, which cause problems otherwise.
Does anyone have an idea how to do this? Any hints where and how to start would be welcome.
Edit1: Regarding jon’s hint: This discussion might be showing the right direction, but I don’t get it to work yet. I tried using the following new definition of a numeric cite command.
\newbibmacro*{numcite}{%
  \printtext[bibhyperref]{%
    \printfield{prefixnumber}%
    \printfield{labelnumber}%
  }}

\DeclareCiteCommand{\numcite}[\mkbibbrackets]
  {\usebibmacro{prenote}}
  {\usebibmacro{citeindex}%
   \usebibmacro{numcite}}
  {\multicitedelim}
  {\usebibmacro{postnote}}
But then I would still need a redefinition for my subbibliography. And I don’t know if that is possible.
Edit2: minimal working example (see my comment)
%!TEX encoding = UTF-8 Unicode
@book{HorkheimerAdorno-DdA,
  Author = {Max Horkheimer and Theodor W. Adorno},
  Location = {Frankfurt am Main},
  Publisher = {Fischer Taschenbuch Verlag},
  Subtitle = {Philosophische Fragmente},
  Title = {Dialektik der Aufklärung},
  Year = {1988}}

@online{url-blog-example,
  Url = {http://blogname.provider.com/stories/some-random-text-from-the-beginning-of-the-blog-entry/#8360558},
  Keywords = {Blogurl},
  Year = {22.9.2010}}

@online{url-blog-another-example,
  Url = {http://blogname.provider.com/stories/some-more-but-different-random-text-from-the-beginning-of-the-blog-entry/#1239877},
  Keywords = {Blogurl},
  Year = {13.4.2004}}

\lipsum*[1]\footcite[34]{HorkheimerAdorno-DdA}
\lipsum*[2]\footnote{Cf. \cite{url-blog-example} and cf. \cite{url-blog-another-example}.}
\printbibliography[notkeyword=Blogurl,heading=subbibliography,title={Main Sources}]
\printbibliography[keyword=Blogurl,heading=subbibliography,title={Blog Citations}]
A numeric citation/bibliography style would work well for this, I think. – jon Sep 5 '12 at 18:17
Your appendix idea is possible, but I'd sooner suppress URLs in citations entirely and print them in the bibliography. Some details on your citation/bibliography style and sample bib file entries will probably help get this question answered. – Audrey Sep 5 '12 at 22:54
@jon: I forgot to mention that I use authortitle-dw (a verbose style) as my main style for both citation and bibliography. But you’re right, what I want is a numeric style just for these special URL-citations. Thanks for the hint! – peregrinustyss Sep 7 '12 at 17:12
@Audrey: I do want to suppress the URLs in the citations as I tried to say in my description. In the text there is supposed to be only e.g. "URL-1" (verbatim) and the date. The URL itself should be printed in the list in the appendix. I’ve tried to give a minimal example with subdivided bibliography in my Edit. But without the redefinitions mentioned in my other comment to jon the citations in the footnotes are incomplete. – peregrinustyss Sep 7 '12 at 17:12
1 Answer
Example 18 from the biblatex documentation (18-numeric-hybrid.tex) will get you most of the way there. The document below takes the example a step further by integrating citation labels for URL references via the shorthand field, dispensing with the need for an additional citation command. The solution requires some fixes to the biber-only multiple sorting scheme feature available with biblatex 2.3+ and biber 1.3+.
% adapted from authortitle-dw.cbx
\ifboolexpr{ test {\ifbool{cbx:fullcite}} or
( test {\ifbool{cbx:firstfull}} and not test {\ifciteseen} ) }
Url = {http://blogname.provider.com/stories/entry/#8360558},
Keywords = {blog},
Year = {2010-09-22}}
Url = {http://blogname.provider.com/stories/text/#1239877},
keywords = {blog},
Year = {2004-04-13}}
\printbibliography[title={Main References},notkeyword=blog,omitnumbers]
\printbibliography[title={Blog References},keyword=blog,env=blog,prefixnumbers={URL-},sorting=none]
Thanks a lot, @Audrey, that was exactly what I needed (sorry for answering so late). Could you perhaps edit your entry and remove "labelnumber, " in the second bib-entry, because otherwise your example won't compile. – peregrinustyss Jan 1 '13 at 16:49
I've got one more question: how do I get a colon after the labels in the bibliography (e.g. URL1:). I tried to put an \addcolon after \printfield{labelnumber}}, but then something in the indentation gets wrong, if I use more than 10 entries. – peregrinustyss Jan 1 '13 at 16:51
You need to adjust the labelnumberwidth format. I'll make an edit to demonstrate. – Audrey Jan 1 '13 at 22:03
Thanks a lot once again. Now everything seems to be fine. – peregrinustyss Jan 2 '13 at 22:28
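For reference, the labelnumberwidth adjustment mentioned in the comments above might look roughly like this (my assumption; the actual edit is not reproduced here). Putting the colon inside the format keeps the label measurement, and therefore the hanging indent, consistent:

% sketch: print the URL-<n> labels with a trailing colon
\DeclareFieldFormat{labelnumberwidth}{#1\addcolon}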
I need some C code to generate Bezier curves from Metafont source code. Ideally, I'd like to have a C/C++ function with the following specifications:
Input: an array of text strings (the lines in the MetaFont program)
Output: an array of Bezier curves representing the centerlines of the "pen" strokes (plus some information about the pen used for each stroke).
So, internally, this code would parse the MetaFont source, solve little systems of linear equations, and calculate the control points of the Bezier curves.
Code like this must exist in both MetaFont and MetaPost, but I'm having trouble finding it. Which files/functions should I look at first ? Thanks.
Adding edits here, as instructed: Remaining questions are:
(1) Are mplib.c and mplib.h the best starting points ?
(2) If they are, where can I find them ?
(3) If not, where else should I look ?
I think I have the answers, but I couldn't figure out how to answer my own question:
I think mplib.c and mplib.h probably are the best starting points. They are not part of the source distributions. You have to generate them by applying ctangle to mp.w (which is huge).
I found the mplib project. That sounds promising. – bubba Oct 20 '12 at 3:46
Even better -- I found mplibapi.tex. I'd like to typeset this document, to make it easier to read. But it contains strange markup like \usetypescript[palatino] and there is no standard latex markup like begin{document}. I see pdf files for older versions, but not the latest (and much improved) version that was created about a month ago. – bubba Oct 20 '12 at 4:26
I discovered that the unfamiliar markup is ConTEXt stuff, and found a way to typeset the document. So, now I have something to study, at least. – bubba Oct 20 '12 at 5:39
Apparently, I should be looking at mplib.h. But I can't find it. It doesn't seem to be included in the metapost source that I found here: foundry.supelec.fr/gf/project/metapost – bubba Oct 20 '12 at 6:34
Please edit your question instead of adding comments to it. – Martin Schröder Oct 20 '12 at 12:43
1 Answer
The mplib package is pretty close to what I wanted (despite its prehistoric architecture) and mplib.h and mp.c are the relevant files.
They can't be found in the source repository because they are generated by using the "tangle" utility. After finding out about tangle, you might be tempted to look for a source file called mplib.w. That doesn't exist, either -- the actual source file is called mp.w.
There are about 27000 lines of code in mp.w, and the TeX file it generates has about 800 pages. But, it looks like the needed capabilities are in there. Somewhere.
For simple tasks, the documented API is all that's needed, so it's not necessary to dig through the old internal code.
American Tolkien Society
The American Tolkien Society is a Tolkien society based across the United States.
History
The American Tolkien Society was founded in 1975 by Phil and Marci Helms, Paul Ritz and Dave Dettman. Its journal, Minas Tirith Evening Star, has been published quarterly since 1967, as it began as an independent publication.
In response to:
The Republican Hispanic Challenge
LonfromPen Wrote: Dec 03, 2012 2:16 PM
Parker is right that Republicans could at least make a significant dent in the Hispanic vote if they could convince Hispanics that their policies would help them escape poverty, or move from lower middle class up to middle class and beyond. It is curious that she seems to miss that this is a problem, even if one believes that Republican policies would have this effect, despite the lack of historical evidence to the contrary. The fact is that Hispanics clearly were not convinced this time around, and she does not seem to advocate any change. Apparently she thinks Republicans can keep doing the same thing with different results. But that is a common definition of insanity.
mulbery Wrote: Dec 03, 2012 2:24 PM
She's saying "Vote fo us, we'll make sure your wages, working conditions, schools, and possibility of having healthcare are all as dim as possible...It'll start to remind you more and more of home...except your bosses won't look like you...but yeah, vote for us.
LonfromPen Wrote: Dec 03, 2012 2:17 PM
And some Hispanics were probably turned off by the culture war stuff. After all Republicans lost Asians by a slightly larger margin than Hispanics, and statistically Asians do not suffer the above ills at the same rates as Hispanics.
Allan60 Wrote: Dec 03, 2012 2:34 PM
It's different with Asians because a large percentage of the recently arrived Hispanic population is largely unskilled. The same is not generally true of people who came through the immigration process where skill sets are desirable.
The bottom line is that constant illegal immigration drives down the wages in low skilled areas, but the Democrats have managed to convince Hispanics that anyone who is against wide open borders hates people with brown skin.
LonfromPen Wrote: Dec 03, 2012 4:23 PM
It makes sense that it is different with Asians. But the interesting, even surprising, thing is that the voting results were pretty much exactly the same. That was my point.
If Parker's analysis was correct, why didn't the Republicans win the Asian vote? Was there some other explanation that exactly balanced it out? Maybe. But that is quite a coincidence. (I'm not sure what the right answer is here).
Thread: PST Substitute
01-20-2013, 08:59 PM, #8
Andyroo10567 (Join Date: Nov 2011; Location: Home; Posts: 1,101)
LOLLLL. Babolat has a 1 year warranty on the racquets. Now go call in and tell them what happened. Your honesty will go a long way here. They can tell if you slammed the racquet and whatnot. So if you really did break 4 of them, I suggest asking Babolat for replacements. With those replacements, go sell them on E(bay) or sell them to TW/Classifieds.
Racquet : TGK238.1 2X / Head YTPP 1X
Strings : BHB7 17G 49lb / LUX 4g Rough 17G
[[http://en.wikipedia.org/wiki/Kuji-in Kuji-in]] "Nine Syllable Seals" originated as a specialized form of Buddhist meditation. It has spread to Taoism, Shinto, {{Onmyodo}}, [[{{Ninja}} Ninjutsu]] and folk magic throughout East Asia.
In popular culture, ''kuji-in'' has become a way of performing magic. Details vary widely, from empowering a physical blow, to powering up an ''ofuda'', to weaponized ki.
Kuji-in consists of nine ''[[MagicalGesture mudras]]'' and their related ''[[MagicalIncantation mantras]]''. The order is specific: "Rin Pyo To Sha Kai Jin Retsu Zai Zen". There is also the long version of these mantra chants, in Sanskrit garbled by Japanese spelling and pronunciation (as in "On nōmaku sanmanda basaradan kan"). In visual media such as anime, manga or a video game, the associated ''kanji'' may be overlaid as each seal is performed. At minimum, only the ''mantras'' are spoken.
Compare HandSeals.
[[folder: Anime and Manga ]]
* Sailor Mars from ''SailorMoon'' was able to immobilize monsters using this, namely reciting the (short version of the) chant and then hitting the monster in the face with an Ofuda while shouting "Akuryō Taisan" (literally, "Evil spirits, begone").
** The English dub tried to cut this out and replaced the chant and the subsequent "Evil spirits begone" with "I summon the power of Mars, Mars Fireball Charge", despite it having nothing to do with her powers as Sailor Mars (or fire, for that matter), instead coming from her Miko training, leaving fans confused as to why she could perform it untransformed. They later tried to fix it by replacing it again with simply "Evil Spirit Begone!"
** She later combined the move, albeit with a different chant, with her Fire Soul creating Fire Soul Bird.
** In the manga, Akuryō Taisan is a fire attack but only when Sailor Mars was transformed.
* In a filler arc of ''InuYasha'', Tsukiyomi, a magic- and sword-wielding female samurai, used this to seal Hoshiyomi.
* The Buddhist monk from ''Manga/GhostHunt'' usually uses the longer version of this chant.
** Ayako the Shinto priestess uses the short version of the chant a few times.
** Mai learns the longer version of the chant from the monk and later the shorter version from Ayako so she can defend herself from hostile ghosts.
* ''{{X 1999}}'' sees many mages cast their spells this way, specially Subaru Sumeragi.
* In the manga/anime version of ''{{Harukanaru Toki no Naka de}}'', the local [[{{Onmyodo}} onmyoji]], Abe no Yasuaki, uses this on a regular basis, both the mantras and the mudras (though the ''Haruka'' franchise in fact doesn't [[SmallReferencePools limit]] the chant selection used by Yasuaki and [[{{Expy}} Yasutsugu]] to just this one).
* The Kuji Kanesada is a sword with the nine words of the Kuji on it; "to cut away the souls of man." Sword of Shiki Ryogi from ''Literature/KaraNoKyoukai''.
* Himiko Se from ''VampirePrincessMiyu'' uses similar chants at least twice in the OAV.
* Kantarou from ''{{Tactics}}'' uses this when fighting {{youkai}}.
* RurouniKenshin: A pair of filler villains armed with RazorFloss are often heard chanting this to make their abilities seem supernatural.
* The Shinryuuji Nagas of {{Eyeshield 21}} do the long version of this chant while meditating under a waterfall.
* A variation is used in ''Anime/TheTwelveKingdoms'' to bind [[{{Youkai}} Youma]] into a Kirin's service.
[[folder: Comic Books ]]
* LarryHama's ''ComicBook/NthManTheUltimateNinja'' can use this to disorient everyone nearby, by making them feel the world has turned upside down.
** Also by Hama: his ''[[ComicBook/GIJoeARealAmericanHeroMarvel G.I. Joe]]'' comics sometimes has the Arashikage ninja clan recite this as a purification/initiation rite.
[[folder: Film ]]
* Recited by Franco Nero's character near the beginning of ''EnterTheNinja''.
[[folder: Folklore ]]
* [[CaptainObvious East Asian folk magic uses this as part of a ritual.]]
[[folder: Literature ]]
* Shows up in Eric Van Lustbader's Ninja books.
[[folder: Live Action TV ]]
* This is used in a Hong Kong live-action TV show (the title is something to the effect of "I Have a Date With a Vampire") about an exorcist and her vampire boyfriend as the standard exorcising chant.
[[folder: Video Games ]]
* Zhuzen in ShadowHearts uses this chant when using his magic.
* One of [[{{Tekken}} Raven]]'s winning pose had him chant this for the sake of [[RuleOfCool coolness]]. Also, in Yoshimitsu's story mode, he teaches Raven to do it the correct way.
* [[VideoGame/SuperRobotWarsGaiden Hwang Yang Long]]'s ultimate move with Granveil, Kafuu Seiun Ken, has him chant this (missing the Sai) to further empower his FlamingSword; after blowing black winds that turn into fire at the enemy, he proceeds to lay the smackdown on them.
* In ''PsychicForce'', Genma Rokudou's ultimate involves him throwing a talisman and chanting these words, whereupon the talisman generates a massive explosion.
* In ''Super VideoGame/StreetFighterIV'' the urban ninja Guy chants this as one of his personal actions, complete with the HandSeals.
* Neo Geo's ''VideoGame/DoubleDragon'' fighting game character Amon (a ninja), also chants this as part of his ultimate move.
* In ''SamuraiWarriors'', both Hanzo Hattori and his rival Kotaro use this chant.
* The kanji are used in ''{{Tenchu}} 3'': The Wrath of Heaven/ Return from Darkness (same game, different consoles). Each stealth kill you get lights up from one half to one and a half kanjis, depending on the kill method and the target (dogs are only worth half, frontal stealth kills get you one and a half). If all are lit up, you learn a new move but you need to beat the level in question to be able to keep it.
* They're also used in the XBOX version of ''NinjaGaiden'' -- Each 'Life of the Gods' jewel you get lights up a meter by one. Find or make a complete set of nine and your health bar increases. Also, when dialogue is set to Japanese, Ryu will shout the chant to cut a stone platform out of the ground so he can fight the penultimate boss.
** In Ninja Gaiden Dragon Sword, tracing the Sanskrit letter on the touchscreen will activate the ninjutsu associated with it.
** In Day 5 of ''Ninja Gaiden 3'', Momiji recites one prior to the Obaba boss fight to create a ForceField.
* [[http://paragonwiki.com/wiki/Ninjitsu#Kuji-In_Rin Kuji-In Rin]], [[http://paragonwiki.com/wiki/Ninjitsu#Kuji-In_Sha Sha]], and [[http://paragonwiki.com/wiki/Ninjitsu#Kuji-In_Retsu Retsu]] appear in ''CityOfVillains'' as player-character abilities for the Stalker (read: {{Backstab}}ber) class. Respectively, they provide protection from certain StandardStatusEffects, act as a [[HealingFactor self-healing ability]], and provide temporary [[NighInvulnerable near-invulnerability]] (of the "95% of attacks miss" sort). Ninja Masterminds have [[http://paragonwiki.com/wiki/Ninjas#Kuji_In_Zen Kuji-In Zen]], which is the final upgrade to the abilities of their henchmen.
* Used liberally in ''VideoGame/{{Touhou}}'', with several human characters using paper talismans, and Sanae using a spell card specifically named 'Nine Syllable Seals'.
* Bang Shishigami from ''VideoGame/BlazBlue'' attempts to chant one for a fire jutsu. Unfortunately, before the last syllable, he sneezes, causing the jutsu to run amok.
* Emon Five from ''{{Otomedius}}'' recites this chant while charging his D-Burst attack. He runs out of time during the charge period, so he ends it by shouting "Pierce them!" instead of the final phrase. |
Film: Sky Captain and the World of Tomorrow
"This is Sky Captain. I'm on my way."
Sky Captain and the World of Tomorrow is a 2004 movie homage to the Two-Fisted Tales of the 1930's. The film follows the adventures of Ace Pilot 'Joe' Sullivan, known as Sky Captain (Jude Law) and Intrepid Reporter Polly Perkins (Gwyneth Paltrow).
They begin investigating the affairs of the mysterious German scientist, Dr. Totenkopf, after his machines attack New York City, searching for something. Further implicating Totenkopf is a string of kidnapped scientists, all of whom point back to Totenkopf's work...
The plot shamelessly uses the outrageous gadgets and cliches of the Pulp Magazine and Comic Book genres, plus numerous shout outs to other media of the period. Filmed with live actors against computer-generated surroundings, the movie did not make enough money to offset its production costs, so a sequel is unlikely.
The movie has the following Awesome, but Impractical machines:
Other tropes include:
• Absurdly Dedicated Worker: Dr. Totenkopf's machines carry on his work of assembling a "Noah's Ark"-type rocket and loading animals on it despite him having died 20 years prior.
• Action Dress Rip: Polly tears her skirt to run better during the NYC robot attack. But leaves on the heels....
• Action Girl: Commander Francesca "Franky" Cook of the Royal Navy, who sports an Eyepatch of Power. She is also part of a Love Triangle in the movie.
• Actor Allusion: Jude Law and Gwyneth Paltrow of The Talented Mr Ripley.
• Aerial Canyon Chase: The title character flies his fighter plane along the streets of New York just above ground level while trying to escape Dr. Totenkopf's robot ornithopters.
• Alternate History: There is a Hindenburg III zeppelin in the opening scene, which implies either that the first Hindenburg did not explode, or else its explosion was not an impediment to further airship construction.
• Anachronism Stew
• Anguished Declaration of Love: Spoofed. Joe Sullivan believes Polly Perkins deliberately sabotaged his plane in China while Going for the Big Scoop. When they're trapped in a cave packed with dynamite that's about to explode, Joe looks her in the eyes and asks...if she really did cut his fuel line. Polly is understandably annoyed that they're going to spend their last moments on Earth discussing this point. And on Totenkopf's island, she admits she did. A pissed off Joe then admits that he did sleep with Franky.
• Apologetic Attacker - Played With: The note Totenkopf leaves at his death chair reads "Forgive me", and there's indications he tried to shut down his entire project, and it's implied his robot assassin/kidnapper rebelled when he tried to shut everything down.
• Belligerent Sexual Tension: Polly and Joe bicker throughout the movie.
Joe: Could we just for once die without all this bickering?
• Bedmate Reveal: After an Outrun the Fireball moment, Polly wakes up naked in a bed next to an equally naked Joe. An embarrassed Polly tells him to turn around, which a grinning Joe does only to find he's also in bed with their guide, Kaji, who is also naked.
• Bilingual Dialogue: Polly can understand both written and spoken German.
• Billing Displacement: A sterling example. Law and Paltrow are billed diagonally. On the same card, Jolie gets the "and" honor by being billed as "and Angelina Jolie". They thus manage a triple simultaneous diagonal bill where everyone gets a position of honor.
• Bring It: The Mysterious Woman makes a gesture to Sky Captain before fighting him outside the rocket ship.
• Brits with Battleships: Flying battleships.
• Camera Obscurer: Polly Perkins spends much of the movie with only one frame left on her only roll of film, and wants to save it for a truly awesome photo. In the film's denouement, she decides to take a photo of Joe Sullivan, only for him to look at her and say "Lenscap".
• Casting Gag: Totenkopf turns out to have been Dead All Along, just like his actor Sir Laurence Olivier, who died 15 years before the movie was made.
• Catch Phrase: Joe (Sky Captain) says "Good boy, Dex" whenever his Sidekick Dex does something good.
• Chair Reveal: Dr. Totenkopf. Only it turns out he's been Dead All Along.
• Chekhov's Gun
• The two metal tubes that Polly Perkins received from Dr. Jennings.
• Dex also built Chekhov's Ray Gun and is seen chewing Chekhov's Gum.
• Chroma Key: The actors used only the most basic sets and props, with CGI backgrounds used in every shot. This was allegedly done for two reasons: One, the producers wanted to give the movie a "comic book" atmosphere, and two, the studio was too small to accommodate such large sets.
• Cut and Paste Environments - Toward the beginning of the movie, just after the robots attack Manhattan, Sky Captain lands at his base and drives his plane into a huge hangar. At the top of the doors of the hangar are these huge windows of 8x10 panes. In every window, some of the panes are broken. In every window, it's EXACTLY THE SAME PANES that are broken.
• Cutting the Gordian Knot - Polly and the door to Dr. Jenning's lab.
• Deliberately Monochrome: Filmed in colour and desaturated, then resaturated again to make it more like a painting than a photorealistic movie.
• Diesel Punk: With a healthy helping of Tesla Punk and Raygun Gothic, to boot.
• Disintegrator Ray
• Gadgeteer Genius Dex is shown testing a Buck Rogers raygun that can burn a hole through solid metal with luminous rings of energy.
• To a lesser extent, the superelectric field outside of Totenkopf's room that disintegrates anyone who steps on it, first stripping them to a skeleton, then pulverizing that to dust. An unfortunate no-name scientist finds this out the hard way.
• Doing It for the Art: The film's creator spent years developing software to achieve a film with this specific look, and then pitched a film around it.
• The Dragon: Dr. Totenkopf's Action Girl agent. A rather extreme case of Dragon Their Feet.
• The End of the World as We Know It/Utopia Justifies the Means: The planet-colonizing rocketship seems benign, until it's revealed that its afterburners will ignite the Earth's atmosphere.
• Eureka Moment: "Rana is a star!"
• Eyepatch of Power: Franky (Angelina Jolie's character)
• Fake Brit/Fake American: Englishman Jude Law plays the American hero, while American Angelina Jolie plays "Franky". Jolie's accent was mocked by some critics, though she's merely riffing on the stiff-upper-lip jargon of British war propaganda.
• Fake Shemp: Laurence Olivier, via the magic of stock footage and CGI, comes back to life as Dr. Totenkopf.
• The Fantastic Trope of Wonderous Titles: As a deliberate throwback.
• Genre Throwback: to 30's sci-fi and 40's-50's war fiction.
• Giant Flyer: The protagonists encounter giant prehistoric birds on Totenkopf's island.
• A God Am I: Totenkopf, in the video that plays while his rocket is rising through the atmosphere. It plays a twisted version of the first few verses of Genesis, replacing him as God and him seeing that "Man was evil."
• Gratuitous German: The German in this movie is often mangled.
• A particularly noticeable example is a button labeled with "Dringlichkeitsfreigabe", which then gets translated as "Emergency Release", while it actually means "Urgency Release". It should be "Notfallfreigabe/-abkopplung/-entriegelung/-freisetzung".
• The German newspaper headline about the robot invasion translates to "Very Big Metallc [sic!] Machines Steal Steal Reserves"
• The Grotesque: The sole survivor of Dr. Totenkopf's uranium mining and experiments.
• Gunship Rescue. An entire fleet of heliocarriers turns up to rescue the protagonists at the end, though they don't really need saving by that stage. And Dex has a Big Damn Heroes moment when he arrives in a hoversled just in time to save Joe and Polly from the swarm of flying killer robots.
• Herr Doctor: All the scientists are German and Austrian.
• Homages: The attack by giant bipedal robots is copied from the 1941 Superman cartoon "The Mechanical Monsters". Their laser sound-effects are the same as the Martian Disintegrator Ray in the 1953 The War of the Worlds film; similarly Polly's phoned-in report on the attack uses lines lifted from the famous Orson Welles radio broadcast. On seeing one of the robots, Dex mutters "Shazam!" The silhouette of Godzilla can be seen in a newspaper from Japan. During an underwater sequence we see both the wreck of the Titanic and the ship from King Kong, complete with ape-holding cage. King Kong himself can be seen at the top of the Empire State Building during one shot with the robots in the streets. The flying robots on Totenkopf's island have the same chest controls as Commando Cody's Jet Pack. The Wizard of Oz is seen playing in the cinema where Polly meets with one of the scientists, and the entire Totenkopf hologram head sequence is a massive homage to the The Wizard of Oz's giant head scene (Captain Sky even mentions the film when the Hologram appears and starts speaking.). Et cetera.
• Hostage for MacGuffin: When Dr. Totenkopf's thugs capture Polly Perkins in the uranium mine.
• Hot Scoop: Polly.
• Hologram Projection Imperfection: As the protagonists approach Dr. Totenkopf's office a Tesla-type generator creates a Huge Holographic Head of Totenkopf that explains his motives and warns them to get out or die. Both the image and voice are distorted when powering up, highlighting the more primitive 1930's zeerust technology of the film. The imperfections also hint this is a case of The Tape Knew You Would Say That.
• Hyperspace Arsenal: The many many things concealed in Joe's P-40 Warhawk. It's anyone's guess where the plane's builders found room to put the fuel and engine.
• Infodump: Dex and the scientists explain Totenkopf's entire plan (as well as mentioning an offscreen escape where the majority of them got killed) in a single moment of exposition. Although not unusual in the Comic Books on which the movie is based, the scene appears clumsy on screen.
• It Is Beyond Saving: Totenkopf's motive in creating his World of Tomorrow (and destroying the old one in the process).
• It's the Only Way: Spoofed.
Sky Captain: "Is it safe?"
(Sky Captain and Polly step across the booby-trapped threshold, holding hands, in lock-step and are relieved to be unharmed)
Dex: "...I meant throw something."
• Justified Title: The Character Name and the Noun Phrase title is obviously a reference to the retro-futuristic nature of the movie, but "Sky Captain" is the nickname of the main character, and the villain calls his scheme to seed life on another planet the "World of Tomorrow".
• Let's Split Up, Gang: Sky Captain, while in Dr. Totenkopf's abandoned uranium mine.
• Lois Lane: While she doesn't have the myopic identity problems Lois had, she's definitely filling this role in the film as the no-nonsense female reporter determined to follow the hero to the end and get the story.
• Mad Scientist Laboratory: The laboratory of Dr. Walter Jennings (with mutated fetus and tiny elephant), and the room in Shangri-La (shown in a deleted scene) where Totenkopf conducted experiments on radiation victims from his uranium mine.
• Meaningful Name: "Totenkopf" is German for "death's head." (Totenkopf wanted to destroy the world and create a new one out in space.) It also harkens back to Nazi Germany's SS "totenkopf" division that was in charge of the concentration camps.
• Minor Crime Reveals Major Plot: Kidnapping scientists -> Plot to build a spaceship that will destroy the Earth's atmosphere.
• Misguided Missile: Twice, both by Frankie and Sky Captain.
• My God, What Have I Done?: Totenkopf, realizing the error of his ways and unable to stop what he started (the rockets he's sending off the planet will destroy the Earth's atmosphere to escape its gravity), simply leaves a note on his corpse: "I'm sorry."
• Names to Run Away From Really Fast: Dr. Totenkopf (Death Head in German). Subverted, he's Dead All Along and it is implied that he was rather a Well-Intentioned Extremist. It's more commonly used by Germans as name for the bare skull. Hence the skull motif.
• Never Trust a Trailer: Angelina Jolie is in the movie for all of 15 minutes, but you'd think she was a main character.
• No OSHA Compliance: The walkways inside the rocket ship. They're barely wide enough to walk on, and have no railings.
• Noodle Incident: Polly is visibly annoyed when fellow fliers (and implied ex-lovers) Franky and Joe share an incomprehensible nostalgia moment.
• One Bullet Left: Polly is down to one shot left in her camera, so she's forced to forgo the chance to photograph the lost kingdom of Shangri-La, a top-secret flying aircraft carrier, a giant prehistoric bird, and every creature on Earth being loaded two-by-two into a giant rocketship. In the end Polly passes up the Scoop of the Century for a photograph of Joe... who promptly informs her she left the lens cap on.
• Percussive Prevention: Joe knocks out Polly to stop her from accompanying him on a one-way trip to destroy Totenkopf's rocketship. It doesn't work.
• Plummet Perspective: Happens on at least three occasions.
• Practical Voiceover: A radio announcer (along with a Spinning Paper montage) is used to show that the robot attack on New York is part of a worldwide phenomenon.
• Prophetic Names: Totenkopf, literally "dead man's head" (e.g. skull) in German. Not only alluding to the skull symbol on his creations, but guess what you find him as. Well, still sorta... intact.
• Put Down Your Gun and Step Away: One of Dr. Totenkopf's Mooks tries this on Joe in the Tibetan uranium mine.
• Putting on the Reich: Dr. Totenkopf's emblem looks very much like Nazi Germany's coat of arms, with a death's head in place of a swastika.
• Raygun Gothic
• Reconstruction
• Redshirt Army: The Flying Legion gets wiped out by an air raid, but Joe is only concerned about his missing buddy Dex. Additionally, Franky's troops.
• Reluctant Mad Scientist: The former members of Unit 11.
• Running Gag
• Polly and Joe's discussions about whether she cut his fuel line. As it turns out, she did cut it.
• Polly missing a shot with her camera, or having it not show up for any reason. (She's a reporter, this is very important.)
• Shamgri-La: Shangri-La itself, whose people were forced to work in Totenkopf's uranium mine.
• Shoot Out the Lock: The title character throws an object and hits the control box for a door, causing the door to close.
• Shout-Out: By the dozen. Everything from The Land That Time Forgot, The Wizard of Oz, The Neverending Story, Apple's 1984 commercial, Star Wars Episode I and the anime film Castle in the Sky.
• Sidekick: Dex, to Joe (Sky Captain). But there's a lot more to Dex than this; he may look like a skinny little nerd, but he's taken a level in badass and has matters well in hand when Joe finally arrives to 'rescue' him. To his credit, Joe doesn't seem very surprised and just asks for a heads up on the plan.
• Sitting Duck: The Flying Legion is caught on the ground by an air raid launched by Totenkopf's robot flyers.
• Stripped to the Bone: One of the escaping scientists on Totenkopf's island gets skeletonised by a bolt of electricity from a Tesla coil.
• Stylistic Suck: A couple of effects were CGI'ed to resemble early stop-motion effects from serials. Note: This does not apply to the film's overall sepia-tone look, which is quite elaborate.
• Surprise Vehicle: Dex, in one of Totenkopf's hoversleds.
• That Was the Last Entry: When the group finally gets to Totenkopf's office, they find his papers and discover that "the last entry in his journal was made on October 11, 1918", 20 years before the setting of the film. Shortly thereafter, they find his mummified body.
• There Was a Door: Inverted.
• Too Awesome to Use: See One Bullet Left.
• Traitor Shot: Two of the guides look at each other while standing under Joe's plane as sinister music plays.
• What Happened to the Mouse?: A survivor of Totenkopf's experiments asks for one last favor: to be killed. We never find out if Joe obliges, although his rather sad look after they've left Shangri-La implies that he did it.
• Why Won't You Die?: Invoked word-for-word when The Dragon, beaten, revealed as a Robot Girl, and left for dead at the villain's island, sneaks into the rocketship to attack Joe one last time.
• Would Hit a Girl: In a rare heroic/noble example, Joe knocks Polly out to keep her from needlessly putting her life in danger.
• Wronski Feint: Subverted. It looks like Joe is doing this with the ornithopters chasing him out over the water, when actually he has every intention of actually crashing into the ocean. His plane is able to transform into a submersible mode!
• Zeerust: Basically the whole point of the film. Influences include the futuristic designs of Norman Bel Geddes, Raymond Loewy and Hugh Ferriss.
Estonian game halted by assault of a challenge
Rakvere Tarvas' 3-1 win over Puuma in the second tier of Estonian football was punctuated by an astonishing assault of a challenge from Puuma's Yaroslav Dimitriev.
The Russian forward, irritated to have lost the ball, crunched into his opposite number with a savage kick to the victim's gut.
Unsurprisingly he was given a straight red card, but not before he had managed to belt the loose football straight into his prostrate victim's midriff.
If there's been a rougher challenge in Europe in recent weekends, we've not seen it.
Category:Asian Nonnude Images
Asian Nonnude Images are now in their own separate and (more or less) equal category.
UnNews:Paris Hilton Killed By Tourists Trying to Get Rooms in Paris
This article is part of UnNews, UnFair and UnBalanced
15 August 2006
PARIS, France -- Just a few hours after being attacked by an Iranian diplomat in Kentucky, Paris Hilton was stabbed to death by tourists in Paris, France.
[Image: Paris Hilton, moments before her untimely death.]
The terrible catastrophe occurred just three hours ago, while Paris was headed to her namesake hotel after an all-night drinking binge. Apparently, the two tourists, dressed in raincoats though it was a clear night, came up to her, called her a tramp, then stabbed her repeatedly in the breasts and nether regions. They ran away half an hour before police arrived, but they were easy to find; being the only couple in Paris wielding knives loaded with blonde pubic hair.
The two tourists, Jan and Pat Steinberg, turned out to be a middle-aged couple from Vermont who were just sick of trying to find rooms at the Paris Hilton and getting numerous sex videos of Paris Hilton. Says Pat, "That dirty tramp was the bane of our existence."
Many were saddened to hear of her demise, including Uncyclopedia founder and resident sage Oscar Wilde. Said Wilde, "What a fucking shame; I never got to bed her!"
Dark Lord of the Underworld Satan also prepared a statement. It reads, in part: "Yeah, you know, I feel bad and all, but, hey, look on the bright side; at least she's finally got a competent sexual partner -- ME!!!"
It does seem quite a tragedy that such a bright, young girl should leave the world so early in her life, but, really, the only thing worth it was that this reporter got to spend a night in Paris.
The Free World
A Novel
David Bezmozgis
Alec Krasnansky stood on the platform of Vienna's Southern Terminal while, all around him, the representatives of Soviet Jewry--from Tallinn to Tashkent--roiled, snarled, and elbowed to deposit their belongings onto the waiting train. His own family roiled among them: his parents, his wife, his nephews, his sister-in-law, and particularly his brother, Karl, worked furiously with the suitcases and duffel bags. He should have been helping them but his attention was drawn farther down the platform by two pretty tourists. One was a brunette, Mediterranean and voluptuous; the other petite and blond--in combination they attested, as though by design, to the scope of the world's beauty and plenitude. Both girls were barefoot, their leather sandals arranged in tidy pairs beside them. Alec traced a line of smooth, tanned skin from heel to calf to thigh, interrupted ultimately by the frayed edge of cutoff blue jeans. Above the cutoff jeans the girls wore thin sleeveless shirts. They sat on their backpacks and leaned casually against each other. Their faces were lovely and vacant. They seemed beyond train schedules and obligations. People sped past them, the Russian circus performed its ludicrous act several meters away, but they paid no attention. Alec assumed they were Americans. He guessed they were intheir early twenties. He was twenty-six, but he could pass for younger. In school and university he had run track and had retained a trim runner's build. He also had his father's dark, wavy hair. From the time Alec was a boy he had been aware of his effect on women. In his presence, they often became exaggerated versions of themselves. The maternal ones became more maternal, the crude ones became cruder, the shy ones shyer. They wanted only that he not make them feel foolish and were grateful when he did not. In his experience, much of what was good in life could be traced to a woman's gratitude.
Looking at the two girls, Alec had to resist the urge to approach them. It could be the simplest thing in the world. He had studied English. He needed only to walk over and say, Hello, are you Americans? And they needed only to respond, Yes.
--Where in America do you live?
--Chicago. And where are you from?
--Riga, Latvia. The Soviet Union.
--How interesting. We have never met anyone from the Soviet Union before. Where are you traveling to?
--Chicago.
--No. Is this true?
--Yes, it is true. I am traveling to Chicago.
--Will this be your first time in Chicago?
--Yes, it will be my first time in Chicago. Can you tell me about Chicago?
--Yes, we can tell you about it. Please sit down with us. We will tell you everything about Chicago.
--Thank you.
--You are welcome.
Alec felt Karl's hand on his shoulder.
--What's the matter with you?
--We have seven minutes to finish loading everything onto the train.
He followed Karl back to where their parents were arranging thesuitcases so that Karl and Alec could continue forcing them through the window of the compartment. Near them, an elderly couple sat dejectedly on their bags. Others worked around them, avoiding not only helping them but also looking them in the face. Old people sitting piteously on luggage had become a familiar spectacle.
--I see them, Karl said. Move your ass and if there's time we'll help them.
Alec bent into the remaining pile of suitcases and duffel bags on the platform. Each seemed heavier than the last. For six adults they had twenty articles of luggage crammed with goods destined for the bazaars of Rome: linens, toys, samovars, ballet shoes, nesting dolls, leather Latvian handicrafts, nylon stockings, lacquer boxes, pocketknives, camera equipment, picture books, and opera glasses. One particularly heavy suitcase held Alec's big commercial investment, dozens of symphonic records.
First hefting the bags onto his shoulder and then sliding them along the outside of the train, Alec managed to pass them up to the compartment and into the arms of Polina and Rosa, his and Karl's wives.
Karl turned to the old couple.
--All right, citizens, can we offer you a hand?
The old man rose from his suitcase, stood erect, and answered with the formality of a Party official or university lecturer.
--We would be very obliged to you. If you will allow, my wife has with her a box of chocolates.
--It's not necessary.
--Not even a little something for the children?
Karl's two boys had poked their heads out the compartment window.
--Do as you like. But they're like animals at the zoo. I suggest you mind your fingers.
Alec and Karl shouldered the old people's suitcases and passed them into their compartment. Alec noticed the way the old man looked at Polina.
--This is your wife?
--A true Russian beauty.
--I appreciate the compliment. Though she might disagree. Emigration is not exactly cosmetic.
--Absolutely false. The Russian woman blossoms under toil. The Russian man can drink and fight, but our former country was built on the back of the Russian woman.
--What country wasn't?
--That may be so, but I don't know about other countries. I was a Soviet citizen. To my generation this meant something. We sacrificed our youth, our most productive years, our faith. And in the end they robbed us of everything. This is why it does my heart proud to see your wife. Every Jew should have taken with him a Russian bride. If only to deny them to the alcoholics. I'm an old man, but if the law had allowed, I would have taken ten wives myself. Real Russian women. Because that country couldn't survive five minutes without them.
The old man's wife, the incontrovertible product of shtetl breeding, listened to her husband's speech with spousal indifference. There was nothing, her expression declared, that she hadn't heard him say a hundred times.
--To women, Alec said. When we get to Rome we should drink to it.
Alec helped the old couple onto the car and scrambled up as it began to edge forward. He squeezed past people in the narrow passageway and found his family crammed in with their belongings. Perched on a pile of duffel bags, his father frowned in Alec's direction.
--What were you talking about with that old rooster?
--The greatness of the Russian woman.
--Your favorite subject. You almost missed the train.
Samuil Krasnansky turned his head and considered their circumstances.
--The compartments are half the size.
This was true, Alec thought. Say what you want about the Soviet Union, but the sleeping compartments were bigger.
--You want to go back because of the bigger compartments? Karl asked.
--What do you care about what I want? Samuil said.
Samuil Krasnansky said nothing else between Vienna and Rome. He sat in silence beside his wife and eventually fell asleep.
THE FREE WORLD. Copyright © 2011 by Nada Films, Inc. All rights reserved. |
A site I get on a lot has updated a few of their files, and I wanted to know the meaning of the change. For example, their CSS and the majority of their images: before the update, the files were called "default.css" or "navigation.png". Now, with their new updates, their files and images have this ?v at the end of them with a number, such as "default.css?v2" or "navigation.png?v3". Can anybody explain to me what this means?
This question isn't asking about User Experience, but rather the technicality of file naming conventions. – slawrence10 Jan 3 '13 at 1:25
closed as off topic by Charles Boyung, JonW Jan 3 '13 at 8:41
2 Answers
It's a method of versioning the otherwise cached static files. Essentially, browsers can store various files/filetypes locally for any amount of time, as dictated by the end user. The purpose of adding/appending these values is to juke said cache without creating a file-naming nightmare on the server.
That's at least my assumption in this situation.
Additionally, the reason this invalidates the static cache or gets around it is that they are requesting a file first and then also attempting to pass a query string to it. The query string creates a new context for the file, and thus a prior stored version is not conjured up from the end user's cache; instead the browser is forced to download the file from the server again.
It's a short way to invalidate cache or caching proxies, without actually changing any filenames.
For static files and unless some specific processing is taking place, you can usually rather safely add a question mark, "?", followed by any arbitrary string.
The web-server that has these static files will ignore everything after and including "?", but the web-developer can usually ensure that fresh copies of these files will be re-requested by the client (e.g. web-browser or caching-proxy), instead of possibly the old and incompatible cached versions being used, since caching is almost always done only on the complete URLs with all the HTTP GET params, which include "?" etc.
Note that the explicit approach you've described, with adding just a "?v2", only works if you always go forward on the web-server, and always in atomic steps. If you ever downgrade back to v1 whilst a v2 page requests a v2 document, or have a partial upgrade from v1 to v2, then a v1 (or v2) document will be served as if it was a v2 (or v1), since the web-servers generally don't do anything with these "?v1" / "?v2" markings on static files.
Audiotrieve's InBoxer is a Microsoft Outlook email filter that can block more than 99 percent of spam and junk email. InBoxer builds filters that define legitimate messages and junk for each user according to his or her mail folders. The software uses Bayesian and other advanced language analysis techniques to filter email. Users can also define trusted senders and companies. As each new message arrives, InBoxer decides whether to block it. The software places questionable messages in a review folder. InBoxer learns by example, so it can let a newsletter about your favorite hobby come through to your Inbox even if other people consider the message to be spam. InBoxer can also learn the difference between an intimate note from your spouse and offensive mail from a bulk mailer. The product doesn't require an update subscription to remain accurate. InBoxer installs three buttons on your Outlook toolbar so that you can access all of its capabilities. You don't need to launch a separate program or change your email settings. Pricing is $27.95 for one unit. Volume discounts apply and start at $19.99 for 10 units.
Tim Lukehurst of Felixstowe, Suffolk, UK, says, "I want to pass on all credit to InBoxer for painlessly reducing the volume of spam mail I deal with. This product works within Outlook and progressively learns what is spam and quarantines it rather than requiring me to go through a mountain of messages every day. It's worth its weight in gold!"
Reader Tim Lukehurst
Felixstowe, Suffolk, UK
Product InBoxer
Company Audiotrieve
X1 Technologies, developer of business productivity software, released an update to its X1 Search, software that provides a Weblike search experience for personal data on the desktop. Chris Taylor of Nepean, Ontario, nominated X1 as a product that makes his job much easier. Chris says, "X1 indexes your email messages and disk files. It turns finding a needle in the haystack into a breeze! Indexing is quite quick. On my machine, X1 indexed approximately 200,000 files by name, size, path, date, and time; 30,000 files by full content; and 60,000 email messages that included thousands of attachments in a couple of hours. X1 keeps the index up-to-date with only 15 seconds of activity every 10 minutes and yields to the CPU cycles of other applications if necessary. Finding elusive email messages and files is now an operation that takes mere seconds. As fast as I can specify a search term, X1 finds the file or email message. X1 is an excellent product that I would be hard-pressed to do without, and it just keeps getting better!"
The software displays the files' contents and email messages in the format of the program that created them--X1 supports 255 formats--so Microsoft Word files look like Word documents, Microsoft Excel files look like Excel spreadsheets, and PDF files are displayed in .pdf format. New features include the ability to perform phrase searches, Boolean search capabilities, the option to save frequently used searches for future use, full support for attachments in Microsoft Outlook Express and Netscape Mail, and increased maximum individual file size for content indexing from 10MB to 2GB. X1 costs $99. The product supports Windows XP/2000/Me/98 systems.
Reader Chris Taylor
Nepean, Ontario
Product X1 Search
Company X1 Technologies
Darren McBride of Reno, Nevada, nominated Host Interface's Double Image-O as a top product. Darren says, "Double Image-O has dramatically simplified my life. The product is unique and useful for a variety of reasons. The product eliminates mysterious 'write cache failures' when backing up to USB hard drives, which is a huge deal for my company because we only back up to USB devices and have had problems with conventional backup software. Unlike most backup products, Double Image-O backs up by copying the original file and folder structure to the destination backup media. This makes it so much easier to confirm your backup and to restore data quickly and easily. The product is so easy to use. Double Image-O can back up open files. Instead of reviewing error logs full of skipped files, I get good backups without having to pay for expensive add-on open file modules that most backup products require."
Double Image-O works at the disk/volume block/sector level and isn't disturbed by the OS, regardless of new service packs or OS upgrades. The target data is accurate, giving you an exact copy of your source at an exact instant before the beginning of the backup activity. Double Image-O runs on Windows 2003/XP/2000/NT workstations and servers and starts at $199.
Reader Darren McBride
Reno, Nevada
Product Double Image-O
Company Host Interface
Alfonso Gomez of El Paso, Texas, works for a company with approximately 800 client computers all over Asia, Mexico, and the United States. His company was hit by the Blaster worm and needed a patch-management solution. Alfonso says, "We're a complete Microsoft shop, so I decided to use Microsoft Systems Management Server as our patch-management solution. After fighting with Systems Management Server, I decided to change my strategy and started using Microsoft Software Update Services. Software Update Services really came to the rescue in our environment, and everybody was happy until my boss came to me and asked for a status report on all the updates that I was approving for our clients. I started to do research and ran into's Flarepath Windows Update Analyser, which gives me the capability to generate reports for upper management and auditors to review. Flarepath Windows Update Analyser is a very inexpensive and user-friendly application."
Flarepath Windows Update Analyser features email notification about failed downloads or installations, background machine discovery and scanning capability, Active Directory (AD) integration, the ability to see which updates haven't been installed on a machine, imitation of the AD structure for easier grouping of machines, a dashboard that shows machines with a troublesome status, Software Update Services (SUS) server integration to show which updates were approved and when, and the ability to force a single computer or multiple computers to poll for SUS updates. For pricing, contact
See associated figure
Reader Alfonso Gomez
El Paso, TX
Product Flarepath Windows Update Analyser
global_01_local_0_shard_00000017_processed.jsonl/30097 | Many people think of marketing as a dirty word, a synonym for words such as lying, tricking, or overpromising. However, when done properly—and ethically—marketing can be pretty amazing. Consider what Apple has accomplished with the iPhone.
A little over two years ago, there was no such thing as the iPhone, but now it would be hard to find anyone in the industrialized world who hasn't seen or heard of it. Even my local paper, the famously stodgy Toledo Blade, ran a front-page story about the iPhone 3G release last summer. That level of familiarity and popularity is all due to a brilliantly conceived and executed marketing campaign, not to mention a product that appeals to consumers of all stripes.
I reviewed the iPhone 3G with iPhone OS 2.0 software, which was the first version to include Microsoft Exchange ActiveSync (EAS) support, and found that, as a mail device, the iPhone didn't stack up well against Windows Mobile. Now that Apple has released version 3.0 of the software along with the iPhone 3GS, it's time to take a look at its changes and see whether Apple has been able to move the needle.
Let's start with calendaring, which was one of the weakest parts of the 2.0 release. Apple has made some notable improvements here. The biggest change is that users can now invite people to meetings! This seems like a small step—and it is—but it removes one of the biggest limitations of the 2.0 software. For Exchange 2007 users, you can also view the attendee status of people you invite. When you receive a new invitation, you can use the Show in Calendar option to get a graphic view of where the appointment falls (see my blog for screenshots). However, there's no textual warning if the new meeting conflicts with an existing appointment. Also, you still can't see free/busy data for invitees or get suggested meeting times.
iPhone OS 3.0 has some notable improvements to email, too. You can choose individual folders to automatically sync (provided that you've turned push mail on), although the iPhone still insists on expanding every folder in your mailbox when you use the folder picker. iPhone OS 3.0 supports the use of client certificate authentication for Exchange access, and it adds support for some additional EAS password policy controls—as well as the EAS policy for controlling whether people can use the built-in camera.
Overall, though, even with these improvements, the iPhone still lags behind Windows Mobile 6.1 and 6.0 because it lacks some of their key features:
• task support, including over-the-air synchronization
• an easy way to add a contact from the GAL to your personal contacts lists
• the ability to flag and unflag messages, a necessity for triaging email on the road
• the conversation view found in the Windows Mobile 6.5 version of Outlook Mobile (remember, it's back-portable to Windows Mobile 6.1 devices as well)
• correct behavior for IMAP deletes (the iPhone still doesn't send the correct IMAP EXPUNGE command to remove deleted messages)
So, my bottom line: the 3.0 release makes progress, but the iPhone still isn't as powerful or useful an email device as Windows Mobile.
I have an older WordPress site that is going away but I am taking the content from the site and importing it into my new WordPress site.
To do this, I used the WordPress Export tool, located in my dashboard, to create an export file that included all content, including images. The export process created the export file, which I successfully imported into my other WordPress site.
I noticed that if an imported post that comes up in a search or is linked in the sidebar has a thumbnail, the URL for that thumbnail still links back to the original WordPress site, the site where I created the export file.
How do I fix this?
2 Answers
The easiest way is to use the Cache Image plugin, which 'sideloads' images in your posts that are from other domains.
I've had mixed results, mainly because it seems to not work well for thousands of posts and images - if your site is smaller you may have better results.
I ended up editing the import XML file directly; a quick search and replace did the trick – Tom Castonzo Apr 28 '11 at 12:47
The only problem with that approach (which I was going to suggest), is that it doesn't add the images as attachments, and so it's possible that themes which use post-thumbnails and Featured images might not work. However, if you don't need to do that, then you're set :) – anu Apr 28 '11 at 12:50
You could have used the Search Regex plugin to search for the image tags and replace the URL with your new URL. However, now that you have manually edited the XML you will want to pull the images into the media manager.
The Add From Server plugin searches for images that are not already in the media library and adds them. This makes them usable from the media manager just like the other images you upload directly into WordPress.
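If neither plugin is an option, the same search-and-replace can be done directly against the database with WordPress's $wpdb API. The sketch below is mine, not part of either answer: the domain names are placeholders, it assumes the old URLs appear as plain text in post_content (it won't touch serialized data such as widget settings), and you should back up the database before running anything like it, for example from a throwaway plugin file.

global $wpdb;

// Placeholder domains - swap in your real old and new site URLs.
$old = '';
$new = '';

// Rewrite image (and other) URLs stored in post content in one pass.
$wpdb->query(
  $wpdb->prepare(
    "UPDATE {$wpdb->posts} SET post_content = REPLACE(post_content, %s, %s)",
    $old,
    $new
  )
);

Like the XML edit mentioned in the comments above, this only fixes the URLs; it does not import the files as attachments, so you would still want something like Add From Server afterwards if your theme relies on the media library.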
Background Briefing 7 April 2013
Sunday 7 April 2013 8:05AM
In this episode
• As the Royal Commission into Child Sexual Abuse begins, it could learn from an outcry in Britain over the handling of abuse cases. A police whistleblower says a high profile prosecution in northern England failed some of the victims and left a new generation of girls potentially at risk. Jane Deith investigates.
Dinosaurs burrowed to keep warm
Burrowing dinosaur
Remains of this burrowing dinosaur, Oryctodromeus cubicularis, were found at the end of a long tunnel. Palaeontologists say its snout and other features suggest it could gouge out and shift soil (Image: Lee Hall)
Palaeontologists have found the remains of small dinosaurs that made their home in a burrow, a finding that suggests dinosaurs could exploit a much wider habitat than thought.
Fossilised bones and the dinosaurs' underground den were found in the US state of Montana.
Researchers have named the dinosaurs Oryctodromeus cubicularis, a mix of Greek and Latin that means "digging runner of the lair".
The burrow's soil has been dated to the mid-Cretaceous, a hothouse period that ran from about 135-115 million years ago.
"[It is] the first trace and body fossil evidence of burrowing behaviour in a dinosaur," the trio of US and Japanese researchers say.
Their paper appears in the Proceedings of the Royal Society B, a journal of the UK's Royal Society.
The burrow, long filled with sediment, comprises a sloping, sinuous tunnel more than 2 metres long and about 70 centimetres wide, and is somewhat similar to holes made today by striped hyenas and puffins.
It ends in a chamber, where the skeletal remains of an adult and two juveniles were found.
Their snout, shoulder girdle and pelvis have the telltale shape of bones found in animals capable of gouging out and shifting soil.
Based on the preserved vertebrae, the adult would measure around 2.1 metres long, with a weight of 22-32 kilograms, which made it small on the dinosaur scale.
The authors, led by Assistant Professor David Varricchio of Montana State University, say burrowing would help small dinosaurs survive in extreme climates.
Unlike mammals, reptiles cannot regulate their own body temperature. Thus, in deserts, a burrow would provide shelter from extreme heat, while in polar regions and chilly mountains, it would preserve warmth.
The burrowers may have survived, at least for a while, the twilight of the dinosaurs, the authors say.
The prevailing theory is that this happened around 65 million years ago, when a massive asteroid struck the Earth. That kicked up a thick veil of dust and ash that cooled the planet, killing off the kinds of vegetation on which the dinosaurs depended.
The end of the Cretaceous bequeathed the planet to avian, or bird-like, dinosaurs and other species that either thrived without the old predators around or found a niche in a suddenly-changed climate.
Tags: science-and-technology, dinosaurs, palaeontology
Project QA: Writing a submodule
Let’s walk through building a submodule for Project QA, shall we? There’s three main steps to implementing your own extension to projectqa:
1. Create a fresh module
2. Create entities to store your data
3. Implement hook_eval_gitcommit
Set up your module
Creating a new module is outside the scope of this post. If you are new to module development or need a refresher, head over to for all the resources.
I am curious what modules people are working on, though. If you start one, I'd love to hear about it. It may be valuable to a wide enough audience to consider including in the main module.
Create entities to store your data
This part of projectqa is intentionally left wide open. You are expected to develop the entities that store your data, which also means you have complete control over that data's structure and how it is accessed. I highly recommend leveraging the Entity API module. It will make your life easier, as well as the lives of anyone building off of your work. Additionally, it's already a dependency for projectqa, so you'll have access to it on any system that you're building your projectqa submodule for.
Consider how you want to leverage your data but also try to keep it as normalized as possible. For PHPLOC I decided to store in two tables; one table for the extracted data and one table for the delta data that was calculated. Putting it all in one table would’ve made a far too large table as well as too much information if I’m only interested in the delta. A bit more description on this will be in the next step.
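If it helps to see the shape of that, here is a minimal hook_schema() sketch of the two-table layout described above. To be clear, the table and field names are invented for illustration and are not the real projectqa_phploc schema; if you follow the Entity API recommendation, you would also expose these tables as entity types via hook_entity_info().

/**
 * Implements hook_schema().
 *
 * Illustrative only: a raw-values table plus a separate delta table.
 */
function mymodule_schema() {
  $schema['mymodule_result'] = array(
    'description' => 'Raw metric values extracted for each commit.',
    'fields' => array(
      'rid'         => array('type' => 'serial', 'not null' => TRUE),
      'commit_hash' => array('type' => 'varchar', 'length' => 40, 'not null' => TRUE),
      'loc'         => array('type' => 'int', 'not null' => TRUE, 'default' => 0),
    ),
    'primary key' => array('rid'),
    'indexes' => array('commit_hash' => array('commit_hash')),
  );

  // Deltas get their own table so reports that only care about the change
  // between commits stay small and fast.
  $schema['mymodule_result_delta'] = array(
    'description' => 'Per-commit change in each metric.',
    'fields' => array(
      'did'         => array('type' => 'serial', 'not null' => TRUE),
      'commit_hash' => array('type' => 'varchar', 'length' => 40, 'not null' => TRUE),
      'loc_diff'    => array('type' => 'int', 'not null' => TRUE, 'default' => 0),
    ),
    'primary key' => array('did'),
  );

  return $schema;
}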
Implement hook_eval_gitcommit
hook_eval_gitcommit($repo_path, $git_commit)
This hook gets fired by the main projectqa module for each git commit in the history of a repo. You don't need to worry about walking the repo history; that is taken care of for you. You also don't need to execute any git commands, as the repo is already checked out to the proper location for you.
$repo_path: The system path where the repo is located.
$git_commit: The git commit to process.
The most important piece of information is the repo path that gets passed to you. This tells you where on the filesystem to look in order to start your processing of the code. Be sure not to alter any of the files, as that would pollute the code to be processed for both your module and any other module accessing that git commit after your code is executed. The git commit hash that is passed to you is more of an optional piece of information. You don't need to do anything with it unless it is valuable to whatever data-processing algorithm you are utilizing. The git commit hash is already being saved to the projectqa_gitcommit table for you.
In the case of the projectqa_phploc submodule I was interested in calculating the deltas between commits and storing them so reporting could be easier and more performant. In order to accomplish this, I made this call from within my hook implementation:
function projectqa_phploc_eval_gitcommit($repo_path, $git_commit) {
  // ... run the PHPLOC analysis against $repo_path and store the raw results ...
  // Look up the parent commit so deltas against it can be calculated.
  $parent_hash = projectqa_get_parent_hash($repo_path, $git_commit->sha1);
  // ... load the parent's stored values, then compute and save the deltas ...
}
This way I can now access the correct records in my own table for the previous commit (remember, do not alter the file system or execute git commands) and generate the diff against my current commit and save to the database along with my current values. For better organization and scalability I keep all delta values in a separate table (entity).
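As a rough illustration of that last step (again with invented names: the entity type, property names, and helper functions below are placeholders, and they assume the delta table from the earlier sketch was registered as an Entity API entity type), the save might look something like this inside the hook:

// Hypothetical: load the stored totals for the parent commit from your own table.
$parent = mymodule_load_result_by_commit($parent_hash);

// Hypothetical: the totals you just extracted for the current commit.
$current = mymodule_run_analysis($repo_path);

// Build and save a delta record via Entity API.
$delta = entity_create('mymodule_result_delta', array(
  'commit_hash' => $git_commit->sha1,
  'loc_diff'    => $current['loc'] - ($parent ? $parent->loc : 0),
));
entity_save('mymodule_result_delta', $delta);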
If you end up developing a submodule for projectqa be sure to stay in contact and keep an eye out for updates. I already have plans for a few more hooks to implement that may help a number of developers. I will also be implementing some functionality to help validate, reprocess, and catch up data if needed.
Poetry editor Donald Merriam Allen dead at 92
BY admin
September 10 2004 12:00 AM ET
Poetry editor Donald Merriam Allen, who helped bring the beat poets and other fringe artists into the mainstream, died on August 29 in San Francisco at the age of 92, according to The New York Times. Allen, who was gay, edited Grove Press's The New American Poetry: 1945-1960, a groundbreaking anthology of 44 notable poets, including such legendary gay bards as Allen Ginsberg and Frank O'Hara. The book drew both praise and scorn upon its publication in 1960 but has endured as a classic literary collection. Allen also translated four plays by Eugene Ionesco and edited collections of writings by O'Hara, Jack Kerouac, Robert Creeley, and many others. He also founded and managed two publishing houses, Grey Fox and Four Seasons Foundation, which published poetry as well as gay and lesbian literature and books on Buddhism and philosophy. He is survived by a sister and by his friend and executor, Michael Williams.
Analysis and comments on The Waste Land by T.S. Eliot
Comment 4 of 14, added on November 27th, 2005 at 6:30 AM.
Referring to the title: try the Parcival legend! As far as I know it's a Middle High German epic loosely linked to the King Arthur cycle. In short: while Parcival is trying to become a knight errant he meets the Fisher King (Amfortas), who presides over a dead land and suffers from a mortal wound but cannot die. As soon as any stranger shows care by asking the king a question, the wound will heal, the waste land will be restored, and the one who asked will become the new king of the grail. Parcival, who misses his first chance, has to learn that all his knightly behavior is worth nothing unless he also learns charity.
Comment 3 of 14, added on November 2nd, 2005 at 10:42 AM.
I have started to study "The Waste Land", but I have not yet understood the poem's title. Could you give me any hint?
newton penna from Brazil
Comment 2 of 14, added on December 26th, 2004 at 5:35 AM.
The water image in Eliot's poems
Water, which is one of the four natural elements, plays a distinguished role in Eliot's poetry. In 'The Waste Land', for example, water is both destructive and constructive. Rain is either destructive or constructive depending on its intensity. In 'Death by Water' it is destructive; yet throughout the poem it is constructive, and rather refers to the faith through which the waste land may be resurrected.
by Difaf Ibrahim
difaf ibrahim
Comment 1 of 14, added on November 26th, 2004 at 3:04 PM.
this is the best poem ever. ...fear in a handful of dust, heap of broken images, look to windward. just g8. chhhhhheeeeerrrrrrssssssssss!!!!!!!
tom browning
Information about The Waste Land
Poet: T.S. Eliot
Poem: The Waste Land
Volume: The Waste Land
Year: 1922
Stephan Wysochanski with his family (Wysochanski family photo)
Eugene Visochanskiy
This is a photo of the Wysochanski (or Vysochanski in Ukrainian) family. They lived in Ukraine (at the time a Russian province) and were "shlaht" (Ukrainian nobles).
I know only some of the men in the photo by name. These are (from the right): Step(h)an, my great-grandfather, and his sons Step(h)an (standing) and Illarion (sitting). Step(h)an, the standing boy, was my grandfather and later became an agronomist. Illarion later was a chief on a railroad. He was repressed after the revolution and his family was exiled to the GULAG. Another boy (standing on the left) was also named Step(h)an (junior). I have adult photos of the boys and hope that some descendants of Illarion and Step(h)an junior are still alive.
Photo taken at Kiev, Kharkov, Ukraine on
Code Geass: Tales of an Alternate Shogunate (GN)
Title: Code Geass: Tales of an Alternate Shogunate
Volume: GN
Pages: 160
Distributor: Bandai Entertainment
Release date: 2011-07-26
Suggested retail price: $10.99
Age rating: 13+
ISBN-10: 1604962593
ISBN-13: 9781604962598
Set in an alternate 1853, Lelouch is the commander of the Shogunate's military counterinsurgency brigade known as the Shinsengumi, which fights the Black Revolutionaries, a rebel group led by a mysterious masked individual known as Rei. An alternate-history retelling of the story of Code Geass!
MACO only mentions 12 (yellow), 22 (orange), 25 (red), 29 (dark red), 70 (very dark red), 88A, 87, and 87C (black, IR transmitting) on their website...
No mention of 89C on, nor in my "filter bible": CRC Handbook of Chemistry and Physics, 62nd edition. There is an 89B, with 50% transmittance at 720nm, however...
The 50% points are (roughly):
25: 600nm
29: 620nm
70: 680nm - narrow band filter
87: 795nm
87C: 860nm
88A: 750nm
89B: 720nm
So 87 and 87C would be very dark for MACO 820, almost totally black for Konica IR750. 89B looks more promising, and is equivalent to what is sold as "R72". 70 would give about the same results as 89B with IR films - except Kodak HIE, where it would cut off some of the far infrared.
This lens is really nice, mint, never used as far as I can tell, includes the original Nikon plastic bubble (CP-2) and some swell foam to keep it stable in there.
Really, you need this. If you do medium format, this is it, your enlarging lens. Look no farther.
Why aren't I using it if it's so good? Many moons ago I had the wisdom to pick up a Vivitar VHE, which is really a Schneider Companon, so I'm also set. And now so can you be too. Toss that cheap Beseler lens or whatever it is you're using and step up to some lovely glass.
How lovely? I use a Nikkor el lens on my Leitz Focomat, never looked back.
$100 brings it to you anywhere in the US, post paid. Overseas, I gotta get some postage help, say $10? Sorry.
paypal to, which is also my direct email.
thanks, Charlie Trentelman |