text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Today, continuing the series, get ready for some code. The first thing we’ll need to do is lay out some files to work with. I previously described the syntax in three separate parts: Thankfully, those can map very nicely to three different files. These files will all be placed in a widgets directory somewhere on the PYTHONPATH, representing the entire framework. __init__.py: It’s necessary anyway, and it can be imported as simply import widgets. So we’ll use this to bring in components from the other files into one import location. base.py: This will contain the `Widget“ base class and everything necessary to support it. prefs.py: Since the main attribute type we’ll be using are preferences, this module will contain all the attribute classes. The first thing to do is set up a base class to extend. This may look fairly complicated, and it will encompass the rest of this post, but it’s really not that bad. The real trick here is that we’ll need to use a metaclass. A full explanation of metaclasses is being the scope of this article, and maybe someday I’ll do it justice, but for now, just know that it’s a necessary part of the process. The best way to sum up what you’re missing is with a quote from Tim Peters: Metaclasses are deeper magic than 99% of users should ever worry about. If you wonder whether you need them, you don’t (the people who actually need them know with certainty that they need them, and don’t need an explanation about why). What it basically boils down to is that we need to set up two different classes just to make one work properly. First, we’ll start with the metaclass, which we’ll call WidgetBase, keeping with Django’s naming scheme. The following code is pretty much copied out of Django itself, changing Model to Widget:) There’s a lot going on here, and I apologize for not explaining it more fully, but here are the important bits. This class will be called any time a Widget class definition is processed, including Widget itself. The code above makes sure that Widget and any classes that don’t subclass Widget just get processed like any other class. Any additional code in the __new__ method can then safely assume that it’s processing a subclass of Widget, which is exactly what we want. For now, we’ll just return the the new class, as it was defined, with no additional processing: return type.__new__(cls, name, bases, attrs) When working with the rest of the metaclass code, it’s important to know what arguments you’re dealing with. As you can see from the code above, there are four arguments passed to __new__: cls: The actual class object that was processed by Python name: The name of the class, as it was defined in source bases: A tuple of classes that were defined as base classes (for this app, this can be safely ignored) attrs: A dictionary containing all the things that were specified in the class declaration (this is the real key) Now, to create the foundation of the base Widget class, we’ll create just a basic class and tell it to use WidgetBase as its metaclass: class Widget(object): __metaclass__ = WidgetBase While this doesn’t yet do anything special, it gets some of the most compliated stuff out of the way, leaving us free to explore more useful code in future articles.
https://www.martyalchin.com/2007/nov/11/using-declarative-syntax-part-2/
CC-MAIN-2017-13
en
refinedweb
Explorations in Go: A dupe checker in Go and Ruby Posted on by Jon Cooper in Everything Else /* * Add. The code Source code is here: Argument parsing Ruby Go Longer than the Ruby idiom, for sure, but still a breeze compared to, say, doing in in straight C. The 'flag' package that I've used here gets us some niceties, too, such as the ability to print a help message: Filesystem traversal. MD5 hashing files.) Printing results). Benchmarking I've applied a super craptastic benchmarking technique here: I just run each version three times. On my home directory (49k-ish files): Ruby Go Conclusion. Feedback Your feedback Jesper Louis Andersen October 4, 2011 at 3:38 pm Try running the two things with /usr/bin/time -v and see how much of that run time is IO. Bill Brasky October 4, 2011 at 8:10 pm A profile of the Go code shows that most of the time is spent on IO. The majority of the time is in syscall.Syscall and scanblock: (pprof) top5 Total: 53 samples 18 34.0% 34.0% 18 34.0% syscall.Syscall 8 15.1% 49.1% 11 20.8% scanblock Running one command after the other makes it even more obvious. The first run is slow, the second returns instantly because everything is cached. (the command ‘itime’ is just /usr/bin/time -f ‘%Uu %Ss %er %MkB %I inputs %C’) Clear your cache: $ sudo -i ‘echo 3 > /proc/sys/vm/drop_caches’ $ itime ./dupe ~/Downloads/ Total duped files found: 10 0.08u 0.36s 43.69r 10304kB 124832 inputs ./dupe /home/bill/Downloads/ $ itime ruby dupe.rb ~/Downloads/ Total duped files found: 10 0.06u 0.05s 0.49r 14240kB 3496 inputs ruby dupe.rb /home/bill/Downloads/ Clear the cache again to prove to yourself that it works in reverse: $ sudo -i ‘echo 3 > /proc/sys/vm/drop_caches’ $ itime ruby dupe.rb ~/Downloads/ Total duped files found: 10 0.16u 0.33s 43.30r 14224kB 126144 inputs ruby dupe.rb /home/bill/Downloads/ $ itime ./dupe ~/Downloads/ Total duped files found: 10 0.04u 0.03s 0.12r 9904kB 2320 inputs ./dupe /home/bill/Downloads/ The point of this article wasn’t to see whether Go or Ruby runs faster. Jon was trying to show how easily a person can whip up a quick program in either language. For the brevity, clarity, and ease of use that Ruby gets you, I see no reason to go with anything else. IO-intensive tasks demand a programming language that’s comfortable and easy. Clearly, Go is not that language. Jon Cooper October 11, 2011 at 10:51 am Thanks, Bill, appreciate the detail. My point was just as you say: to explore how hard it is to do something that I’d normally do in Ruby in Go instead. I didn’t start with a performance benchmark in mind. That’s why I referred to it as a “craptastic benchmark”. 🙂 Aaron Oman October 4, 2011 at 4:32 pm Have you looked at the D programming language? I think there’s a big intersection between Go and D, but I suspect D is a more robust language that offers better defaults and more flexibility. I’ve only done the most bare amount of programming in either of the two languages, though; so don’t take my word for it! Kk October 5, 2011 at 8:26 am By “robust” you mean bloated with every feature and paradigm ever conceived? While the various D compilers only implement various buggy subsets of the language. You have a very strange definition of “robust” Jacek Masiulaniec October 4, 2011 at 5:12 pm Many type declarations are redundant, the assignment operator can take care of them. These lines could be shortened: var md5sum hash.Hash = md5.New() md5sum.Write(contents) return md5sum.Sum() to: return md5.New().Write(contents).Sum() Also, loading entire file to memory can be wasteful; avoid ioutil.ReadFile for large files. Jon Cooper October 11, 2011 at 10:54 am Cheers. As I said to another commenter, I’ll update the repository with feedback from the comments. Jacek Masiulaniec October 4, 2011 at 5:15 pm (I haven’t tested the code, but you get the general idea.) Guest October 4, 2011 at 6:19 pm Your go program does not build with the latest version of go. Here is a fixed one: package main import ( “crypto/md5” “flag” “fmt” “io/ioutil” “log” “os” “path” “path/filepath” ) var verbose = flag.Bool(“verbose”, false, “Print the list of duplicate files.”) func MD5OfFile(path string) []byte { contents, err := ioutil.ReadFile(path) if err != nil { log.Fatal(err) } s := md5.New() s.Write(contents) return s.Sum() } func PrintResults(pathsByName map[string][]string) { dupes := 0 for name, paths := range pathsByName { if len(paths) 0 { rootDir = flag.Arg(0) } paths := FindDupes(rootDir) PrintResults(paths) } Jon Cooper October 11, 2011 at 10:37 am Thanks for posting this. I am using Go r60.1 (via Homebrew) on OS X. Kyle October 4, 2011 at 6:26 pm You don’t need to put parens around the condition of “if”s in Go. If you run your Go code through gofmt (“gofmt -w -s filename” fixes in place and simplifies where it can), it’ll take care of that and all other formatting for you. Also, I’d write MD5OfFile like this: func MD5OfFile(fullpath string) []byte { if f, e := os.Open(fullpath); e == nil { defer f.Close() hash := md5.New() if _, e := io.Copy(hash, f); e == nil { return hash.Sum() } } return nil } Longer, and possibly slower, but works nicely on files of any size. Also possibly worth noting: the filepath.Walk api has changed to be a little more sane, though it isn’t part of the release version of Go yet. And, it’s a matter of taste, but if you’re going to use flag.Arg(0), probably best to check flag.NArg(). However, it may be nicer overall to do: if args := flag.Args(); len(args) > 0 { rootDir = args[0] } Nobody October 4, 2011 at 6:28 pm Also, replace uses of “println” with fmt.Println. println is more of a bootstrapping function; it isn’t quite as smart, and it may write to stderr. Jon Cooper October 11, 2011 at 10:53 am Thanks for the feedback; I will update the code in the repository, although I’m not sure I’ll update the blog post. Bill Brasky October 4, 2011 at 7:15 pm Am I running the wrong version of Ruby? $ ruby dupe.rb dupe.rb:21:in `print_results’: undefined method `select!’ for # (NoMethodError) from dupe.rb:38 Bill Brasky October 4, 2011 at 7:25 pm Apparently, I am. select! doesn’t show up in Ruby until version 1.9.2. To get the code to work (if anyone has an older version of Ruby), change the offending line to: @full_paths_by_filename.delete_if { |filename, fullpaths| fullpaths.length < 2 } Jon Cooper October 11, 2011 at 10:53 am Whoops, should have probably noted that. (Next time I’ll include a .rvmrc as well.) Anonymous October 4, 2011 at 8:09 pm Nice intro Jon. I’ve been wanting to check out this language. I’m currently making my way through Riton October 5, 2011 at 9:49 am Maybe a better way to cumpute the hash of a file in Go using a “pipe” : hash := md5.New() file, _ := os.Open(path) // check the error if you want io.Copy(hash, file) // Just pipe the file into the hasher ! return hash.Sum() //et voilà Jon Cooper October 11, 2011 at 10:54 am That’s pretty. Thanks for the feedback. Joao October 6, 2011 at 3:28 am I’ve spent a few days learning about Go as well. It’s funny in the way that I had to take a second look at Ruby to like it. And with Go I also disregarded at first. But now I think it has gained momentum and made me to appreciate it more. I’ve watched videos, read some of the documentation and followed blogs and so on covering Go. The Go Tour has made me to get wet with the exercises. I can’t say I’m ready to jump ship from Ruby. Javascript just doesn’t feel as nice even with Node. The only nice thing is the excellent V8 engine. 🙂 Go is cool but we need the “Ruby batteries” included in it so we can program prototyping in it with faster iterations. Michael Conlen October 14, 2011 at 9:58 am I may have to include it in my upcoming ISP, a comparison of algorithms implemented in a selection of languages to see how language choice should influence algorithm implementation. One example is don’t recurse in Python (were it me I’d say don’t use Python at all…). I have one language I like (C), several I don’t like (Python, Perl, PHP) and one I’m on the fence about so far (Ruby). Any guess how Go will compare for speed for algorithms or operations on structures one might find in say Cormen, etc? John January 3, 2012 at 6:47 pm I ported your program to Factor, which I found to be pretty clean (and faster than both the Go and Ruby examples). roger pack June 6, 2014 at 7:56 am are those code links broken perhaps? or is it missing some inlines? xojoc February 4, 2015 at 5:39 pm I give a lot of examples of how to use filepath.Walk in among them a (naive) file duplication checker. angel ikaz February 25, 2016 at 3:20 am i would suggest you to try DuplicateFilesDeleter , it can help resolve duplicate files issue. Riona July 15, 2016 at 8:26 am You can try Long Path Tool, it will help you a lot.
http://blog.carbonfive.com/2011/10/04/explorations-in-go-a-dupe-checker-in-go-and-ruby/
CC-MAIN-2017-13
en
refinedweb
This is your resource to discuss support topics with your peers, and learn from each other. 02-02-2011 07:52 AM I have a fixed size vertical manager, that prevents fields to become too wide. It works fine, but after sublayout() I would like to inspect the fields to see if any wrapped around because the set width of my manager was too small. As can be seen in my picture, the LabelField wrapped around. I would like to know how do I know programmically if the LabelField wrapped around? public class FixedSizeVerticalManager extends VerticalFieldManager { private int width; public FixedSizeVerticalManager(int width) { super(0); this.width = width; } protected void sublayout(int width, int height) { width = this.width; super.sublayout(width, height); } } 02-02-2011 07:58 AM just some ideas: - check the field height against the font height - overwrite drawText, do some checks and call super. 02-03-2011 07:16 AM Yeah I used the font.getadvance for that. thought there was a nicer way, but thanks anyway. always good to have it confirmed
https://supportforums.blackberry.com/t5/Java-Development/How-would-you-know-if-a-LabelField-was-forced-to-wrap-during-the/m-p/771869
CC-MAIN-2017-13
en
refinedweb
Functional Programming using C# 4.0 – Functional Composition C# 4.0 treats functions as regular objects which enables programmers to use functions as arguments of other functions. Functions those take other functions as arguments are technically called as higher order functions. Functional composition is one such example of higher order functions. Functional composition is a functional programming concept in which two or more functions can be combined to compose a higher order function. Definition Functional composition is a feature supported as part of functional programming features in C# 4.0. Suppose, say we have two functions represented by generic types: Func Code Snippet Code snippet for functional composition (for example described above): static Func { return (x) => f2(f1(x)); } Application The generic function shown in the code snippet above can be used in various scenarios. Following is an example: Func Func Func (2 * x + 3) * (2 * x + 3) double y = gf(3); // result would be 81 Here is an example of reverse; i.e. Func 2 * x * x + 3 double y2 = fg(3); // result would be 21 Two transforms g(f(x) )and f(g(x)) are different! ComposeFuncs (Func f1, Func f2) f = 2 * x + 3; g = x * x; gf = ComposeFuncs (f, g); //equivalent to fg = ComposeFuncs (g, f); //equivalent to
http://www.dotnetspider.com/resources/44207-Functional-Programming-using-C-40-Functional-Composition.aspx
CC-MAIN-2018-39
en
refinedweb
import java.awt.*; import java.awt.event.*; import javax.swing.*; import javax.swing.event.*; These are some representative import statments. Each one has an asterisk (or "star") at the end of it. What does that mean?: import java.awt.Frame; import java.awt.Panel; import java.awt.Component; import java.awt.Color; import java.awt.Dialog; import java.awt.Dimension; import java.awt.Graphics; import java.awt.Image; Since star can take the place of any name, we can replace all of those import statements with one: import java.awt.*; This imports every class in java.awt. It also imports all the interfaces, exceptions, and errors defined in java.awt. It makes absolutely everything in the java.awt package available to our program. OK, then why do we need to do this? import java.awt.*; import java.awt.event.*; If the * makes us get everything in java.awt, why do we need to then go an import java.awt.event.*? The reason is that java.awt.event is a different package than java.awt. The stuff in java.awt.event isn't actually in in java.awt, the way that System.out is in the class System. Just remember, every package with its own name is a totally different package. You can tell a class name from a package name by whether it is capitalized: import java.awt.Color; Color is a class in the java.awt package. This is one of the reasons we follow the convention of using a capital letter at the start of a class's name. Whereas this: import java.awt.color.*; is an import of the classes in the java.awt.color package. These classes are not in java.awt, so you can't import them using "import java.awt.*;".
http://beginwithjava.blogspot.com/2008/07/stars-in-import-statements.html
CC-MAIN-2018-39
en
refinedweb
An object providing a means to localize the ASPxSpreadsheet's user interface elements at runtime. public class ASPxSpreadsheetLocalizer : XtraLocalizer<ASPxSpreadsheetStringId> Public Class ASPxSpreadsheetLocalizer Inherits XtraLocalizer(of ASPxSpreadsheetStringId) The ASPxSpreadsheet allows you to localize its user interface elements at runtime. This approach can be useful, for example, if you want to set the resource value based on a run-time condition. ASPxSpreadsheet runtime interface localization can be performed via the ASPxSpreadsheetLocalizer object. It provides default (en) culture resource string values and allows you to override them. In a different way, you can use the ASPxSpreadsheetResourcesLocalizer class to localize ASPxSpreadsheetSpreadsheetStringId> ASPxSpreadsheetLocalizer
https://documentation.devexpress.com/AspNet/DevExpress.Web.ASPxSpreadsheet.Localization.ASPxSpreadsheetLocalizer.class
CC-MAIN-2018-39
en
refinedweb
Building Your Next Microservice With Eclipse MicroProfile Building Your Next Microservice With Eclipse MicroProfile This quick tutorial will show you how to build your next microservice with the recently updated Eclipse MicroProfile APIs.. Eclipse MicroProfile has been gaining a lot of attention recently with the latest release, 1.2, as well as recent new additions, including Oracle. This is a quick tutorial for building your next microservice with the MicroProfile APIs. MicroProfile is built from core JavaEE technologies: While adding to them a set of specifications that make your microservices ready for the cloud including: These specifications together make up Eclipse MicroProfile 1.2, the third release of the APIs. So how do you use all of this? Since this is only a specification and not an implementation, you'll need to check vendor-specific requirements, but this is a quick guide to building your first application. In many cases, you'll continue to deploy a WAR file like you would with JavaEE. First, add the MicroProfile dependency to your project. Maven: <dependency> <groupId>org.eclipse.microprofile</groupId> <artifactId>microprofile</artifactId> <version>1.2</version> <type>pom</type> <scope>provided</scope> </dependency> Gradle: dependencies { compileOnly 'org.eclipse.microprofile:microprofile:1.2' } This one dependency brings in all of the needed APIs to build your application, however, you may need to consult with your server's runtime to see if other dependencies are required. So what would a typical service. Monitoring. We want to know how often this service is invoked, and how long each request takes. be resilient. First, we have our rest controller, which should look very familiar. @Path("/api/books") // just a basic JAX-RS resource @Counted // track the number of times this endpoint is invoked public class BooksController { @Inject //use CDI to inject a service private BookService bookService; @GET @RolesAllowed("read-books") // uses common annotations to declare a role required public Books findAll() { return bookService.getAll(); } } If we dive in further to the); } } Next, let's suppose we also want to handle the creation of books, the publication process. A()); } } The last part, if we consider that managing authors is a separate bounded context, we want that to be represented as a discreet service. As a result, we want a client to that remote server to check that the author exists. @ApplicationScoped public class AuthorService { @Inject //inject a configuration property for the URL to the remote service @ConfigProperty(name = "author.service.url") private String authorUrl; private ConcurrentMap < String, Author > authorCache = new ConcurrentHashMap < > (); @Retry // indicate that this should trigger a retry in case the remote server // is unavailable or at capacity @CircuitBreaker // We also wrap the invocation in a circuit breaker, it will eventually @Fallback(fallbackMethod = "getCachedAuthor") // indicate that we should fallback to the local cache public Author findAuthor(String id) { // in MicroProfile 1.3 we'll be including a type safe rest client, // to make this setup even easier! Author author = ClientBuilder.newClient() .target(authorUrl) .path("/{id}") .resolveTemplate("id", id) .request(MediaType.APPLICATION_JSON) .get(Author.class); // ideally we want to read from the remote server. However, we can build // a cache as a fallback when the server is down authorCache.put(id, author); return author; } public Author getCachedAuthor(String id) { return authorCache.get(id); } } So there you have it! A couple of rest controllers, services, and you have a microservice built with Eclipse MicroProfile to manage books. You can download this sample code on GitHub. With SnapLogic’s integration platform you can save millions of dollars, increase integrator productivity by 5X, and reduce integration time to value by 90%. Sign up for our risk-free 30-day trial! Published at DZone with permission of John Ament . See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/building-your-next-microservice-with-eclipse-micro
CC-MAIN-2018-39
en
refinedweb
RavenDB… What am I Persisting, What am I Querying? (part 1) RavenDB… What am I Persisting, What am I Querying? (part 1) Join the DZone community and get the full member experience.Join For Free MariaDB TX, proven in production and driven by the community, is a complete database solution for any and every enterprise — a modern database for modern applications. Author: Phillip Haydon A couple of questions that pop’s up a lot in the #RavenDB JabbR chat room by people picking up RavenDB for the first time are; what am I persisting?, and how do I query relationships?. When we use relational databases we often de-normalize our data into multiple tables, usually this is done to get rid of duplication of data. We do this by adding 100’s of foreign keys to our tables relating things all over the place, we had a CountryId to our Address, a UserId to our Order, an OrderId to our OrderLine. There’s many reasons why this was done, some of which Oren describes in his Embracing RavenDB post. Then when we go to query those relationships we have to join data, when we have an entity with multiple relationships we end up getting into complex queries with cartesian joins, performance starts to degrade, and things just get messy. When working with Document Databases we throw all that out the window and we deal with Root Aggregates. These are objects that are responsible for their child objects, you don’t load the child objects individually, they are loaded with the root or parent object. The most common example I see is Blog/Posts/Comments, but I’m going to explain an easier scenario. Order/OrderLine The Order/Orderline is much easier to understand since it’s a scenario would probably always end up being the same in every system. It also easier to understand because when displaying an OrderLine in any system, it’s always displayed with the Order details, and never by itself. So when we query for the Order it makes sense to always get the OrderLine at the same time. When working with Business Rules applied to an Order, they almost always apply to the OrderLines also, so again you’re working with the entire Order, not a portion of it. When starting out it’s hard to imagine, but the OrderLine is actually part of the Order, it’s not a separate entity, it’s just when we persist it in two tables since that makes more sense in a relational database, and it ends up feeling like two separate things, when in reality, it’s still the same object. public class Order { public string Id { get; set; } // Other properties… public IEnumerable<OrderLine> Lines { get; set; } } public class OrderLine { public int Quantity { get; set; } public decimal Price { get; set; } public decimal Discount { get; set; } public string SkuCode { get; set; } // Other Properties } So when we persist this with a Relational Database these would go into two different tables. Order and OrderLine tables, joined by a foreign key. But now that we are thinking about the Root Aggregate, the Order, when we persist this with RavenDB we persist just the Order. When we persist ‘just the order’ that means we persist the ENTIRE Order object, including the OrderLines, since they are the Order. When persisted to RavenDB we end up with a JSON document that looks similar to: { Id: 'orders/123', Lines: [ { Quantity: 1, Price: 12.95, Discount: null, SkuCode: 'N1C3' }, { Quantity: 3, Price: 6.23, Discount: null, SkuCode: 'F4K21' } ] } Note: I purposely left out other properties for now. As you can see we are persisting the entire root object itself. We don’t put OrderLines into a separate document or collection. Note: I do realise I’ve mentioned persisting the entire object multiple times, but it’s something that some people find hard to wrap their head around at first. It confused me when I first started messing around with MongoDB. When we query for the Order: session.Load<Order>(“orders/123”); we end up fetching all the OrderLines at the same time. No joins, no separate queries, just the entire order. In a relational database we would have had to issue 2 separate queries, or join the tables together, like: SELECT * FROM [Order] o LEFT JOIN [OrderLine] ol ON o.Id = Ol.OrderId This makes querying the database more complicated than it needs to be. There are other ways around this in a relational database, you can blob the OrderLines. But then you lose the ability to search against OrderLines. Why this example and not Post/Comments? I don’t think Post/Comments is a good example to work from, Comment’s can be displayed with a Post, and without a Post, they can be paged, displayed on an individual page, in a ‘latest comments’ column on your blog, etc. Some of these scenarios may justify putting Comments into their own collection. However, more often than not, non-popular blogs such as my own only occur a few comments, so there’s no real reason to put them in their own collection, you can easily get away with putting them on the Post document. I think this comes down to personal preference and the business problem you’re trying to solve, but for a learning exercise it makes it harder to understand. My personal preference is to store Comment’s in a separate collection, because you click through from the post listing screen to the post and load the comment’s, and if there are > x number of comments then I would page them and only display the latest comments, or high rated comments if they were rated voted up/down. I hope that clear’s up what’s being persisted. In part 2 I’m going to go over References (Relationships), and in part 3 MapReduce (doing all those fancy SQL queries inside RavenDB and what is happening) MariaDB AX is an open source database for modern analytics: distributed, columnar and easy to use. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/ravendb%E2%80%A6-what-am-i-persisting
CC-MAIN-2018-39
en
refinedweb
0 in my C++ class we were given an assignment (as follows) and i only have one last objective with it. the user is given an option to be able to see the entire queue list and i don't know how to get the program to show that. does anyone know how this can be done? my problem starts at about line 42 with if(iChoice == 3). here's what i have so far: #include <iostream> #include <list> #include <queue> #include <string> using namespace std; int main() { queue<string> myqueue; string strName; char cContinue; char cAccess; int iChoice; while(1) { cout << "Hello! Welcome to Dr.Smith's office! \n" << endl; cout << "To continue, press any key" << endl; cin >> cContinue; cout << "If you would like to put your name on the waiting list, please enter '1'" << endl; cout << "If you would like to see who is next in line, please enter '2'" << endl; cout << "If you would like to access the entire queue, please enter '3'" << endl; cin >> iChoice; if(iChoice == 1) { cout << "Please enter your name" << endl; cin >> strName; cout << "Thank you " << strName << "! Please be seated and the doctor will see you momentarily." << endl; myqueue.push(strName); } if(iChoice == 2) { cout << "The Next person in line is "; cout << "" << myqueue.front() << endl; myqueue.pop(); if(myqueue.empty()) { cout << "There is no one in the queue." << endl; } } if(iChoice == 3) { for(int x=0; x < myqueue.size();x--) { cout << "These are the people currently waiting: \n" << endl; cout << "" << << endl; } if(myqueue.empty()) { cout << "There is no one in the queue." << endl; } } } return 0; }
https://www.daniweb.com/programming/software-development/threads/343806/queue-problems
CC-MAIN-2018-39
en
refinedweb
Python Tutorial – Python Programming For Beginners Recommended by 163 users Python Tutorial I will start this Python Tutorial by giving you enough reasons to learn Python. Python is simple and incredibly readable since closely resembles the English language. Through this Python Tutorial, I will introduce you to each aspect of Python and help you understand how everything fits together to gain insights from it. In this Python Tutorial blog, I would be covering following topics: - Hello World Program - Python & It’s Features - Python Applications - Variables - Data types - Operators - Conditional Statements - Loops - Functions Hello World Program Python is a great language for beginners, all the way up to seasoned professionals. In Python, you don’t have to deal with complex syntaxes, let me give you an example: If I want to print “Hello World” in Python, all I have to write is: print ('Hello World') It’s that simple! Python & It’s Features Python is an open source scripting language whch was created by Guido van Rossum in 1989. It is an interpreted language with dynamic semantics and is very easy to learn. Let’s look at some cool features of Python. Let me give you one more motivation to learn Python, It’s wide variety of applications. Python Applications: Python finds application in a lot of domains, below are few of those: This is not all, it is also used for automation and for performing a lot of other tasks. After this Python tutorial, I will be coming up with a separate blog on each of these applications. Let’s move ahead in this Python Tutorial and understand how Variables work in Python. Variables in Python:. Assigning values to a variable:. Data Types in Python:. Numeric:. List: - You can consider the Lists as Arrays in C, but in List you can store elements of different types, but in Array all the elements should of the same type. - List is the most versatile datatype available in Python which can be written as a list of comma-separated values (items) between square brackets. Consider the example below:. Tuples: A Tuple is a sequence of immutable Python objects. Tuples are sequences, just like Lists. The differences between tuples and lists are: - Tuples cannot be changed unlike lists - Tuples use parentheses, whereas lists use square brackets. Consider the example below:: - A Set is an unordered collection of items. Every element is unique. - A Set is created by placing all the items (elements) inside curly braces {}, separated by comma. Consider the example below: Set_1 = {1, 2, 3} In Sets, every element has to be unique. Try printing the below code: Set_2 = {1, 2, 3, 3} Here 3 is repeated twice, but it will print it only once. Let’s look at some Set operations: Union: Union of A and B is a set of all the elements from both sets. Union is performed using | operator. Consider the below example: A = {1, 2, 3, 4} B = {3, 4, 5, 6} print ( A | B) Output = {1, 2, 3, 4, 5, 6} Intersection: Intersection of A and B is a set of elements that are common in both sets. Intersection is performed using & operator. Consider the example below: A = {1, 2, 3, 4} B = {3, 4, 5, 6} print ( A & B ) Output = {3, 4} Difference:. Access elements from a dictionary: Dict = {'Name' : 'Saurabh', 'Age' : 23} print(Dict['Name']) Output = Saurabh Changing elements in a Dictionary: Dict = {'Name' : 'Saurabh', 'Age' : 23} Dict['Age'] = 32 Dict['Address'] = 'Starc Tower' Output = {'Name' = 'Saurabh', 'Age' = 32, 'Address' = 'Starc Tower'} Next in Python Tutorial, let’s understand the various Operators in Python.. Arithmetic Operators:. Assignment Operators: Bitwise Operators:. Identity Operators:: Conditional statements are used to execute a statement or a group of statements when some condition is true. There are namely three conditional statements – If, Elif, Else. Consider the flowchart shown below: Let me tell you how it actually works. - First the control will check the ‘If’ condition. If its true, then the control will execute the statements after If condition. - When ‘If’ condition is false, then the control will check the ‘Elif’ condition. If Elif condition is true then the control will execute the statements after Elif condition. - If ‘Elif’ Condition is also false then the control will execute the Else statements.. Loops: - In general, statements are executed sequentially. The first statement in a function is executed first, followed by the second, and so on - There may be a situation when you need to execute a block of code several number of times A loop statement allows us to execute a statement or group of statements multiple times. The following diagram illustrates a loop statement: Let me explain you the above diagram: - First the control will check the condition. If it is true then the control will move inside the loop and execute the statements inside the loop. - Now, the control will again check the condition, if it is still true then again it will execute the statements inside the loop. - This process will keep on repeating until the condition becomes false. Once the condition becomes false the control will move out of loop. There are two types of loops: - Infinite: When condition will never become false - Finite: At one point, the condition will become false and the control will move out of the loop There is one more way to categorize loops: - Pre-test: In this type of loops the condition is first checked and then only the control moves inside the loop - Post-test: Here first the statements inside the loops are executed, and then the condition is checked Python does not support Post-test loops. Learn Python From Experts Loops in Python: In Python, there are three loops: - While - For - Nested: Functions are a convenient way to divide your code into useful blocks, allowing us to order our code, make it more readable, reuse it and save some time. def add (a, b): return a + b c = add(10,20) print(c) Output = 30.
https://www.edureka.co/blog/python-tutorial/
CC-MAIN-2018-39
en
refinedweb
jruby 9.0.4.0 (2.2.2) 2015-11-12 b9fb7aa Java HotSpot(TM) 64-Bit Server VM 24.79-b02 on 1.7.0_79-b15 +jit [Windows 7-amd64] I have the following Java class: ================================================== class JC { private Integer vi; public JC(int i) { vi = new Integer(i); } public void square() { vi = vi*vi; } public Integer get_int() { return vi; } } ================================================== I create (in the Java part of my code) an object of this class, and pass it to a Ruby method. On the Ruby side, I find that I can't invoke the method 'square', even though the object seems to be of the right type: ================================================== include Java class RCconn def initialize @doer = RC.new end def dispatch(jc) puts "Received object of type #{jc.class}" jc.square # @doer.do_your_duty(jc) jc end end ================================================== Inside dispatch, I get: Received object of type Java::Default::JC NoMethodError: undefined method `square' for #<Java::Default::JC:0xa124e5> dispatch at ...... What did I do wrong? on 2015-11-20 14:30 on 2015-11-20 16:49 Ronald, Put the public modifier before your java class: public class JC { private Integer vi; public JC(int i) { vi = new Integer(i); } public void square() { vi = vi*vi; } public Integer get_int() { return vi; } } Move it to its own file and try again. The method, as you have it, is not accessible to Ruby. Cris on 2015-11-20 17:20 Cris S. wrote in post #1179417: > Ronald, > > Put the public modifier before your java class Waaaaa! This works!!!! Could you explain, please, why this is necessary? I had JC defined in its own file before, and I could executed all its methods (they are public, after all) from my Java application. Why, then, can't Ruby see it? Does this have to do with package visibility? But I didn't use any package declaration. Ronald on 2015-11-20 19:18 The other java files were in the same package, so they can see the package level methods. The private, protected, public (and java has no modifier -- package access (or friendly)) are a bit different then Ruby. I came from the java side first and I often get confused on the Ruby side :-). Cris on 2015-11-20 19:21 Since you are all setup with this example... Answer for me a question. JRuby has the java_package method. If you set your Ruby class to the same package can you access your method w/o the public modifier? Cris on 2015-11-23 12:40 Cris S. wrote in post #1179422: > JRuby has the java_package method. If you set your Ruby class to the > same package can you access your method w/o the public modifier? It didn't work, but I'm not sure, whether I did it in a right way. Since the Java class JC is in the global namespace, I added to my Ruby file after the include Java: java_package '' But maybe this is not the correct way to specify the global namespace. I searched several places for an explanation of how java_package works, but they don't explain handling the global namespace. on 2015-11-23 16:23 I am not sure either how one would handle an unnamed package. I suspect you might have to package your code. Cris on 2015-11-23 18:16 I'm still fighting with how to do packages properly in Java (I've opened a thread on this at...) - maybe, if you have time, you could have a look. It seems to be more difficult than expected, and as soon as I have a solution for this, I will try out java_package. Ronald on 2015-11-23 18:53 Ronald, Are you aware that in java package structures mandate that your file be in an appropriate directory or compilation fails? If you have package 'phee.phi.pho' Then class 'Phum' must be in the directory phee/phi/pho. It must be in a file named Phum.java if it is has the public modifier before the keyword class. Cris on 2015-11-23 19:17 Cris S. wrote in post #1179475: > Are you aware that in java package structures mandate that your file be > in an appropriate directory or compilation fails? I thought this is just a convention. It is strange: The compiler could see that the package name does not match the filename, but it did not complain ("file in a wrongly named directory" or something like this) when I compiled it. I will try to modify my example and let you know the findings. Ronald on 2015-11-24 11:26 Cris S. wrote in post #1179475: > If you have package 'phee.phi.pho' Then class 'Phum' must be in the > directory phee/phi/pho. It must be in a file named Phum.java if it is > has the public modifier before the keyword class. I have now restructured file structure and code accordingly, but running the program now doesn't find the main class. I have attached all files to this message (jr7.zip), in case you would like to hava a look. On Windows, I would run it by executing jr7.bat. I have also isolated the error, since it is not related to java_import, and posted the problem here:... on 2015-11-24 13:41 OK, fixed it now. There were several small changes necessary, which were not obvious - at least not to me. You find the updated test case in the attachment. As for the java_package on the Ruby side: This did not work as expected. I get the error message NameError: cannot load Java class DoIt I then changed back java_import 'DoIt' do .... to java_import 'foo.DoIt' do ... (although I think this should not be necessary, because we have a java_package 'foo' before that), but same effect.
https://www.ruby-forum.com/topic/6877191
CC-MAIN-2018-39
en
refinedweb
This simple kernel sends a ball bouncing around on the screen. Turn it into your own Pong, Breakout, or Tank clone. To run this way, Threads are usually used to allow more than one thing to be going on at a time in a Java program. We've looked at a simple way of using threads before, the Timer class. Here's a really, really simple video game kernel. It has all the basic elements of a video game. /* A simple video game style kernel by Mark Graybill, August 2010 Uses the Timer Class to move a ball on a playfield. */ // Import Timer and other useful stuff: import java.util.*; // Import the basic graphics classes. import java.awt.*; import javax.swing.*; public class VGKernel extends JPanel{ // Set up the objects and variables we'll want. that initializes things: public VGKernel(){ super(); screen = new Rectangle(0, 0, 600, 400); ball = new Rectangle(0, 0, 20, 20); bounds = new Rectangle(0, 0, 600, 400); // Give some starter with this as a method. if (right) ball.x+=ball.width; // If right is true, move ball right, else ball.x-=ball.width; // otherwise move left. if (down) ball.y+=ball.height; // Same for up/down. else ball.y-=ball.width;; } } public static void main(String arg[]){ java.util.Timer vgTimer = new java.util.Timer(); // Create a Timer. VGKernel panel = new VGKernel(); // Create and instance of our kernel. // Set the intial ball movement direction. panel.down = true; panel.right = true; // Set up our JFRame, 100); } } This example can be expanded with methods to get control inputs, additional players on the playfield (like paddles), and logic to determine when someone scores. The code here is far from perfect, but I've made some compromises to make things as simple as I could while still showing a full working example. Not that any code that runs and does what is supposed to is really bad, but there are other, better ways of doing this. But this works and is fairly easy to understand. What the program does is create a JPanel that has an inner class (a class defined within itself) of VGTimerTask. The VGTimerTask is a kind of TimerTask, which can be scheduled to occur on a regular basis by a Timer. Since VGTimerTask is an inner class of VGPanel, it has access to all the members of VGPanel. This is critical. Without that, it wouldn't be able to access the ball and redraw the screen easily (it can still be done, but in a more complex way.) Timer is a decent way of running a simple game, but more complex games should use some other timing mechanism. java.util.Timer is affected by a number of outside events, so to get smoother, more reliable timing you a timer like the one in the Java3D package would work better. A Simple Improvement There are many ways of improving on this basic example. One way that is very simple is to smooth the animation. The movement of the ball is pretty jerky. This is caused by both the distance that the ball moves each "turn", and by the time between screen updates. We can smooth out the animation by addressing both of these. First, let's change moveBall() to shift the ball a smaller distance each time: public void moveBall(){ // Ball should really be its own class with this as a method. if (right) ball.x+=ball.width/4; // If right is true, move ball right, else ball.x-=ball.width/4; // otherwise move left. if (down) ball.y+=ball.height/4; // Same for up/down. else ball.y-=ball.width/4;; } }Now the ball is being moved only one quarter of its size each turn. Next, change the Timer schedule to draw the screen every 20 milliseconds instead of every 100 milliseconds: // Set up a timer to do the vgTask regularly. vgTimer.schedule(panel.vgTask, 0, 20); Now you have a ball that moves a lot smoother. I'll be expanding on this basic kernel and improving it in future articles, starting with Java Video game Programming: Game Logic
http://beginwithjava.blogspot.com/2010/08/simple-java-video-game-kernel.html
CC-MAIN-2018-39
en
refinedweb
does the sharedObject get destroyed when the browser is closed?cyber0897 Mar 10, 2010 4:09 PM hey guys, so i just wanted to make sure that when my internet browser is closed it destroys or clears my sharedObject?? right now i wasnt sure so im doing somehting like the following public var sharedObj:SharedObject; if(sharedObj != null){ sharedObj.clear(); } sharedObj = SharedObject.getLocal(...); i just wnated to make sure that this is the most effective way to destory the sharedObject just in case there is one left in the browser... is there another way to make sure i clear the sharedObjects?? 1. Re: does the sharedObject get destroyed when the browser is closed?David_F57 Mar 10, 2010 5:00 PM (in response to cyber0897) Hi, A shared object will only save data if it is flushed, which writes it to the local drive or the remote drive. Think of a shared object as a smarter more peristant cookie. If you create a shared object use it but don't flush it the behaviour then becomes the same as an array, just like any other memory object it is destroyed when the swf is closed, if you do flush the object(save it) then you need to programmatically clear the data if you don't want it to be available the next time the use accesses the swf. Local shared objects get stored in the use profiler on windows(not sure where on a mac), so you can have a look for the shared object if you want to see what is happening with it. this is the path to the SO for the sample app I gave you. D:\Profiles\David\AppData\Roaming\Macromedia\Flash Player\#SharedObjects\3K9EESA2\gumbo.flashhub.net\login.sol David 2. Re: does the sharedObject get destroyed when the browser is closed?JohanVelthuis Mar 10, 2010 5:36 PM (in response to cyber0897)1 person found this helpful hello cyber0897, No sharedObjects are stored in a cookielike textfile on the clients harddrive, As far as I know there is not an option like with cookies to be destroyed after the browser closes. The reset you wrote should be in another order: var mySharedObject:SharedObject = SharedObject.getLocal.... //the above will never give null, because it is created when it doesnt exist. //the if (object != null) is useless because of the above, in your post it will be always null, since it is not instantiated... mySharedObject.clear(); mySharedObject.flush(); //not sure if this is neccesary You could reset the sharedObject when initializing your application, but if you just want to store variables and get them wherever you need them in your app you could use the singleton class below Usage: <s:TextInput id="text1" text="@{PrefObj.instance.myString} /> <s:TextInput id="text2" text="@{PrefObj.instance.myString} /> (the @ only works in flex 4) PrefObj.as package { import flash.events.EventDispatcher; [Bindable] public final dynamic class PrefObj extends EventDispatcher { private static var _instance:PrefObj = new PrefObj(); public function PrefObj(){ if (_instance != null){ throw new Error("PreferencesManager can only be accessed through PreferencesManager.instance"); } } public static function get instance():PrefObj { return _instance; } //add your variables here: public var myString:String = ""; } } 3. Re: does the sharedObject get destroyed when the browser is closed?David_F57 Mar 10, 2010 9:54 PM (in response to JohanVelthuis)1 person found this helpful Hi Johan, When you flush a local SO it writes to the local hard drive, this has always been the case, when an SO is created it writes an empty file to the drive which is stored in a folder that carries the swfs domain url in the path. If you don't write the data(flush) then the drive SO remains empty you can open the SO file and test to see if it contains data next time you run the swf, if you run the same swf from a different location it will create/read the SO for that url. So the file may not be 'text' based but shared objects can be used in a cookie style way as per my provided login example. You can clear the data in the SO if it is sensitive or you can update the data the beauty is that it is controllable and has some big advantages over the 'cookie' mechanism. David. 4. Re: does the sharedObject get destroyed when the browser is closed?cyber0897 Mar 11, 2010 9:17 AM (in response to David_F57) thank you soo much guys for all your help..! after your explainations i know exactly what im doing now... thank you soo much
https://forums.adobe.com/thread/593852
CC-MAIN-2018-39
en
refinedweb
About analyzers¶ Overview¶ An analyzer is a function or callable class (a class with a __call__ method) that takes a unicode string and returns a generator of tokens. Usually a “token” is a word, for example the string “Mary had a little lamb” might yield the tokens “Mary”, “had”, “a”, “little”, and “lamb”. However, tokens do not necessarily correspond to words. For example, you might tokenize Chinese text into individual characters or bi-grams. Tokens are the units of indexing, that is, they are what you are able to look up in the index. An analyzer is basically just a wrapper for a tokenizer and zero or more filters. The analyzer’s __call__ method will pass its parameters to a tokenizer, and the tokenizer will usually be wrapped in a few filters. A tokenizer is a callable that takes a unicode string and yields a series of analysis.Token objects. For example, the provided whoosh.analysis.RegexTokenizer class implements a customizable, regular-expression-based tokenizer that extracts words and ignores whitespace and punctuation. >>> from whoosh.analysis import RegexTokenizer >>> tokenizer = RegexTokenizer() >>> for token in tokenizer(u"Hello there my friend!"): ... print repr(token.text) u'Hello' u'there' u'my' u'friend' A filter is a callable that takes a generator of Tokens (either a tokenizer or another filter) and in turn yields a series of Tokens. For example, the provided whoosh.analysis.LowercaseFilter() filters tokens by converting their text to lowercase. The implementation is very simple: def LowercaseFilter(tokens): """Uses lower() to lowercase token text. For example, tokens "This","is","a","TEST" become "this","is","a","test". """ for t in tokens: t.text = t.text.lower() yield t You can wrap the filter around a tokenizer to see it in operation: >>> from whoosh.analysis import LowercaseFilter >>> for token in LowercaseFilter(tokenizer(u"These ARE the things I want!")): ... print repr(token.text) u'these' u'are' u'the' u'things' u'i' u'want' An analyzer is just a means of combining a tokenizer and some filters into a single package. You can implement an analyzer as a custom class or function,). Note that this only works if at least the tokenizer is a subclass of whoosh.analysis.Composable, as all the tokenizers and filters that ship with Whoosh are. See the whoosh.analysis module for information on the available analyzers, tokenizers, and filters shipped with Whoosh. Using analyzers¶ When you create a field in a schema, you can specify your analyzer as a keyword argument to the field object: schema = Schema(content=TEXT(analyzer=StemmingAnalyzer())) Advanced Analysis¶ Token objects¶ The Token class has no methods. It is merely a place to record certain attributes. A Token object actually has two kinds of attributes: settings that record what kind of information the Token object does or should contain, and information about the current token. Token setting attributes¶ A Token object should always have the following attributes. A tokenizer or filter can check these attributes to see what kind of information is available and/or what kind of information they should be setting on the Token object. These attributes are set by the tokenizer when it creates the Token(s), based on the parameters passed to it from the Analyzer. Filters should not change the values of these attributes. Token information attributes¶ A Token object may have any of the following attributes. The text attribute should always be present. The original attribute may be set by a tokenizer. All other attributes should only be accessed or set based on the values of the “settings” attributes above. So why are most of the information attributes optional? Different field formats require different levels of information about each token. For example, the Frequency format only needs the token text. The Positions format records term positions, so it needs them on the Token. The Characters format records term positions and the start and end character indices of each term, so it needs them on the token, and so on. The Format object that represents the format of each field calls the analyzer for the field, and passes it parameters corresponding to the types of information it needs, e.g.: analyzer(unicode_string, positions=True) The analyzer can then pass that information to a tokenizer so the tokenizer initializes the required attributes on the Token object(s) it produces. Performing different analysis for indexing and query parsing¶ Whoosh sets the mode setting attribute to indicate whether the analyzer is being called by the indexer ( mode='index') or the query parser ( mode='query'). This is useful if there’s a transformation that you only want to apply at indexing or query parsing: class MyFilter(Filter): def __call__(self, tokens): for t in tokens: if t.mode == 'query': ... else: ... The whoosh.analysis.MultiFilter filter class lets you specify different filters to use based on the mode setting: intraword = MultiFilter(index=IntraWordFilter(mergewords=True, mergenums=True), query=IntraWordFilter(mergewords=False, mergenums=False)) Stop words¶ “Stop” words are words that are so common it’s often counter-productive to index them, such as “and”, “or”, “if”, etc. The provided analysis.StopFilter lets you filter out stop words, and includes a default list of common stop words. >>> from whoosh.analysis import StopFilter >>> stopper = StopFilter() >>> for token in stopper(LowercaseFilter(tokenizer(u"These ARE the things I want!"))): ... print repr(token.text) u'these' u'things' u'want' However, this seemingly simple filter idea raises a couple of minor but slightly thorny issues: renumbering term positions and keeping or removing stopped words. Renumbering term positions¶ Remember that analyzers are sometimes asked to record the position of each token in the token stream: So what happens to the pos attribute of the tokens if StopFilter removes the words had and a from the stream? Should it renumber the positions to pretend the “stopped” words never existed? I.e.: or should it preserve the original positions of the words? I.e: It turns out that different situations call for different solutions, so the provided StopFilter class supports both of the above behaviors. Renumbering is the default, since that is usually the most useful and is necessary to support phrase searching. However, you can set a parameter in StopFilter’s constructor to tell it not to renumber positions: stopper = StopFilter(renumber=False) Removing or leaving stop words¶ The point of using StopFilter is to remove stop words, right? Well, there are actually some situations where you might want to mark tokens as “stopped” but not remove them from the token stream. For example, if you were writing your own query parser, you could run the user’s query through a field’s analyzer to break it into tokens. In that case, you might want to know which words were “stopped” so you can provide helpful feedback to the end user (e.g. “The following words are too common to search for:”). In other cases, you might want to leave stopped words in the stream for certain filtering steps (for example, you might have a step that looks at previous tokens, and want the stopped tokens to be part of the process), but then remove them later. The analysis module provides a couple of tools for keeping and removing stop-words in the stream. The removestops parameter passed to the analyzer’s __call__ method (and copied to the Token object as an attribute) specifies whether stop words should be removed from the stream or left in. >>> from whoosh.analysis import StandardAnalyzer >>> analyzer = StandardAnalyzer() >>> [(t.text, t.stopped) for t in analyzer(u"This is a test")] [(u'test', False)] >>> [(t.text, t.stopped) for t in analyzer(u"This is a test", removestops=False)] [(u'this', True), (u'is', True), (u'a', True), (u'test', False)] The analysis.unstopped() filter function takes a token generator and yields only the tokens whose stopped attribute is False. Note Even if you leave stopped words in the stream in an analyzer you use for indexing, the indexer will ignore any tokens where the stopped attribute is True. Implementation notes¶ Because object creation is slow in Python, the stock tokenizers do not create a new analysis.Token object for each token. Instead, they create one Token object and yield it over and over. This is a nice performance shortcut but can lead to strange behavior if your code tries to remember tokens between loops of the generator. Because the analyzer only has one Token object, of which it keeps changing the attributes, if you keep a copy of the Token you get from a loop of the generator, it will be changed from under you. For example: >>> list(tokenizer(u"Hello there my friend")) [Token(u"friend"), Token(u"friend"), Token(u"friend"), Token(u"friend")] Instead, do this: >>> [t.text for t in tokenizer(u"Hello there my friend")] That is, save the attributes, not the token object itself. If you implement your own tokenizer, filter, or analyzer as a class, you should implement an __eq__ method. This is important to allow comparison of Schema objects. The mixing of persistent “setting” and transient “information” attributes on the Token object is not especially elegant. If I ever have a better idea I might change it. ;) Nothing requires that an Analyzer be implemented by calling a tokenizer and filters. Tokenizers and filters are simply a convenient way to structure the code. You’re free to write an analyzer any way you want, as long as it implements __call__.
https://whoosh.readthedocs.io/en/latest/analysis.html
CC-MAIN-2018-39
en
refinedweb
A one sample t-test is used to determine whether or not the mean of a population is equal to some value. This tutorial explains how to conduct a one sample t-test in Python. Example: One Sample t-Test in Python Suppose a botanist wants to know if the mean height of a certain species of plant is equal to 15 inches. She collects a random sample of 12 plants and records each of their heights in inches. Use the following steps to conduct a one sample t-test to determine if the mean height for this species of plant is actually equal to 15 inches. Step 1: Create the data. First, we’ll create an array to hold the measurements of the 12 plants: data = [14, 14, 16, 13, 12, 17, 15, 14, 15, 13, 15, 14] Step 2: Conduct a one sample t-test. Next, we’ll use the ttest_1samp() function from the scipy.stats library to conduct a one sample t-test, which uses the following syntax: ttest_1samp(a, popmean) where: - a: an array of sample observations - popmean: the expected population mean Here’s how to use this function in our specific example: import scipy.stats as stats #perform one sample t-test stats.ttest_1samp(a=data, popmean=15) (statistic=-1.6848, pvalue=0.1201) The t test statistic is -1.6848 and the corresponding two-sided p-value is 0.1201. Step) is greater than alpha = 0.05, we fail to reject the null hypothesis of the test. We do not have sufficient evidence to say that the mean height for this particular species of plant is different from 15 inches. Additional Resources How to Conduct a Two Sample T-Test in Python How to Conduct a Paired Samples T-Test in Python
https://www.statology.org/one-sample-t-test-python/
CC-MAIN-2022-21
en
refinedweb
RationalWiki:Saloon bar/Archive32 Contents - 1 Drunken editing - 2 Working toward a post-CP RW - 3 freethoughtpedia - 4 TK in my pub... - 5 District 9 - 6 Questions for the mob - 7 Important - 8 Funny Stuff.... - 9 Fail. - 10 Maus - 11 Awesome film - 12 Usain Bolt - 13 Media termns - 14 WTF - 15 Pornographie! - 16 Cinema/Movies - 17 That health care bill sure would be nice right about now.... - 18 Spam? - 19 "When Zombies Attack!: Mathematical Modelling of an Outbreak of Zombie Infection" - 20 Math question - 21 US ‘may take military action’ to liberate Britain from the NHS - 22 Here, have a bag of LOLs - 23 Fucking crickets! - 24 UFOs - 25 Master of the Internet! - 26 Bible translation Drunken editing[edit] I just got in from a friend's party (1am now here in the USSR UK) and of the approximately sixty odd people there only about five of them got sick. Three of them in my mate's kitchen though. HA! Poor guy. Anyway, I saw my ex-girlfriend there and we shared an awkward moment talking about the past. I'm actually surprised how off guard it caught me as I have no real outlet and have simply tried avoiding her for the last two months. Not that I reacted outwardly but I felt a weird twinge. Then "I have a sudden need for another cigarette!" I can't wait until I'm scarpering to uni a month or so to get away from this. SJ Debaser 00:16, 15 August 2009 (UTC) - You didn't suggest a quick and temporary reunion in the closet toilet? CrundyTalk nerdy to me 14:24, 16 August 2009 (UTC) Working toward a post-CP RW[edit] Alright, Armondikov and I were talking last night about removing CP references from mainspace articles, and I've been doing a lot of editing toward the goal of making the mainspace CP-free. That may piss a few folks off, but I think it would be good for the project to make it even less CP-centric. My prolonged boycott idea didn't really get off the ground, and that's disappointing. We're smart and funny. The world is full of nonsense. That should be enough to keep us busy--who needs to pick on a loser like Andy? For now, I'm not removing the "CP has an article on this too...." template. But should we? Thoughts? TheoryOfPractice (talk) 14:34, 15 August 2009 (UTC) - I'd suggest linking to more sites, not fewer. Linking to Creation Wiki, WikiSynergy and aSK as well as CP would allow a wider range of wingnuttery to be exposed. If, at some point in the future, CP finally disappears, just take it out of a common template. SuspectedReplicant (talk) 14:50, 15 August 2009 (UTC) - Good idea--I'll start making the relevant templates this evening. TheoryOfPractice (talk) 14:53, 15 August 2009 (UTC) - I've been working on a few new-age medical woo items and there are certainly some good WTF sites out there. It takes a bit more work than just watching the CP ant-farm but it's getting like the Last of the Mohicans over there. Генгисpillaging 15:12, 15 August 2009 (UTC) - Linking out to them and using the examples is great, WND (although they have gone a bit birtherific) for example. I'd encourage using CP as evidence as CP is certainly, as our article on them suggests, a manifestation of how such people think. So indeed it's a great first stop. But the random "kendoll" references and "some people" links things need to go or at least be integrated in properly as in chaning "some dicks think" to "a neo-conservative perspective thinks... as evidenced by Conservapedia". narchist 15:18, 15 August 2009 (UTC) - Ann Coulter, Rush Limbaugh and other stalwarts of the American right wing could provide ample amusement and discussion. EddyP (talk) 15:20, 15 August 2009 (UTC) - Where CP has a particularly funny history, there's no reason why RW couldn't host a separate article to preserve teh lulz but keep the mainspace articles CP-free. In fact, since so many diffs have been TKed, it would be a good excuse to give things a thorough house cleaning. SuspectedReplicant (talk) 15:24, 15 August 2009 (UTC) - That's why we have the CP space--I'm not talking about getting rid of that. Just, as A-kov says, lame links to CP material in the mainspace....TheoryOfPractice (talk) 15:26, 15 August 2009 (UTC) - Sorry... yes, that's what I meant. I just forgot to mention the actual namespace. SuspectedReplicant (talk) 15:40, 15 August 2009 (UTC) - While most of us know about Coulter, Limbaugh, Hannity et al, we don't get the same exposure to them outside North America. It was the online presence of CP which allowed the rest of the world to follow their antics. Генгисpillaging 15:46, 15 August 2009 (UTC) - To comment on the initial point, I think that raking CP out of mainspace is a great idea. I would have suggested that it should be discussed a bit more generally first, but as I agree with the idea wholeheartedly I'm not going to mention it.--BobNot Jim 16:52, 15 August 2009 (UTC) - Bob, you're right, I should have brought it up for discussion before going on a tear, but I was drunkfigured presenting as a fait accompli would get things moving quicker and generate discussion should there be any protest....TheoryOfPractice (talk) 16:59, 15 August 2009 (UTC) - - The only objection I would have to this mainspace scrubbing is that we lose a large number of diff-links to lunacy on Conservapedia. ListenerXTalkerX 17:06, 15 August 2009 (UTC) Most of what I've removed has been internal links to Andy articles or other sysop articles, a few uses of the Andy picture and links to CP stories that aren't difflinks/permalinks. AFAIK, no difflinks have been removed.TheoryOfPractice (talk) 17:12, 15 August 2009 (UTC) - Exactly. The working difflinks are preserved but we have an opportunity to scrub broken ones. We don't lose anything by removing a link that doesn't work. SuspectedReplicant (talk) 17:32, 15 August 2009 (UTC) - We wouldn't lose much by removing working ones that just weren't relevent. But in most cases, they usually are relevent enough. As ToP and I have pointed out, it's mostly the internal in-jokes that have really run their course. narchist 17:59, 15 August 2009 (UTC) - I approve of this conversation and its apparent conclusions. It also would not bother me if the CP-article and no-CP-article boxes slowly went to the great template graveyard in the sea. ħuman 20:40, 15 August 2009 (UTC) - Perhaps we should replace the cp template with a more generic one that links to other places as well (ask, creationwiki, etc.), e.g. {{others|cp=article|ask=other article}}, which would then produce a similar box ("For those living in an alternate reality, here are some "articles" about pagename:" and a list of links). -- Nx / talk 20:46, 15 August 2009 (UTC) - I can only see CP disappearing under the following scenario: 'a shooter, white male, goes and kills gays, Muslims, feminists etc. After being apprehended/his suicide, a check on his computer shows various segments of CP bookmarked. The government investigates if CP does not close for bad coverage after deleting what he has bookmarked. CP is accused of hate speech or something. Fox News martyrs CP and Andy. CP is told to close, which it complies but promises it's readers it will fight it'. ANYWAY, I think that we can always broaden the site. I do not care about CP more than the occasional laugh. I like going after cult leaders and organizations that spring up during moral panics, so I will focus on them and anything else I feel worthy.--Tabris (talk) 20:53, 15 August 2009 (UTC) - - Are you kidding, Tabris? Getting the EVIL FASCIST LIBRUL GUBMINT to persecute him would be like pure crack to Andy. He's be getting wingnut welfare funds for the defense faster than you can say "Rush Limbaugh". --Gulik (talk) 18:40, 16 August 2009 (UTC) I like the idea of de-CP-ing; I don't like adding ask templates nor Wikisynergy templates. ASK and Wikisynergy are tiny, tiny and not worth the effort. Sterile Toyota 22:36, 15 August 2009 (UTC) - To be honest, I like the cp template as it can quickly link to a first hand account of what the "opposition" supposedly think. The wp one is also massively useful. But I do like the idea of the "super external link" better, perhaps CreationWiki's articles on evolution etc. are more relevant and give better examples than the corresponding CP ones - which are usually just less coherent rehashes of other creationist stuff. narchist 16:33, 16 August 2009 (UTC) freethoughtpedia[edit] Does anyone know anything about these guys? They seem kinda small (really small), but a lot like us. Sterile Toyota 22:32, 15 August 2009 (UTC) - On the recent changes list there seem to be a grand total of 6 edits in the last week. It looks way too anti-religion for me... SJ Debaser 22:45, 15 August 2009 (UTC) Fuck, I'll bite--in what ways can one be sufficiently racist and homophobic? TheoryOfPractice (talk) 23:14, 15 August 2009 (UTC) - I don't think you can apply the same criteria of anti-religion to anti-racism and homophobia. Sure, they may incorporate the latter two into religion, but I still think anti-religion can go too far. As for homophobia and racism... unacceptable. SJ Debaser 23:21, 15 August 2009 (UTC) - PC hasn't even edited there for weeks. Talk about your wiki graveyards... ħuman 04:58, 16 August 2009 (UTC) - I don't think she likes me any more after I basically left Liberapedia. SJ Debaser 10:54, 16 August 2009 (UTC) - I just had a quick look and it appears that they plagiarised our Poe's Law article. It still has redlinks where ours are blue and they even retained one of our templates, which, of course, is borked. Lily Inspirate me. 14:07, 16 August 2009 (UTC) The freethinkers have always struck me as being so rantingly anti-religion that "free-thought" doesn't come into it - it's just a kneejerk. They're the kind of people who accuse you of shoving religion down their throats when you mention the bible. Free of religion they may be, but free thinkers they are not. Totnesmartin (talk) 12:22, 16 August 2009 (UTC) It's certainly possible to be too anti-religious. Forcing people to publicly repent isn't healthy behavior for ANY cause, even ESPECIALLY one that claims to be the "good guys". And as the USSR found out, nothing strengthens religious fervor like the sweet, sweet feeling of persecution. --Gulik (talk) 18:45, 16 August 2009 (UTC) - I suppose it's possible to be "too" anything. Too generous, too handsome, too rich. Sadly I suffer from all of these. --BobNot Jim 20:50, 16 August 2009 (UTC) - You can be too anti-racist and anti-homophobic; its the people who interpret any criticism towards a member of a certain group as being motivated by hatred for that group. For example: a black guy is not doing his job well. Boss: 'You're not doing your job well'. Anti-racist:'Racist!' EddyP (talk) 21:27, 16 August 2009 (UTC) - Is that along the lines of the recent trend of considering any and every criticism of Israel to be "anti-semetic"? Although I suspect that's more of a deliberate, cynical overreaction than being honestly overzealous in some cases. --Kels (talk) 21:32, 16 August 2009 (UTC) - Anti-Semite!!!!!1111!1!111oneone!! --The Emperor Kneel before Zod! 21:39, 16 August 2009 (UTC) TK in my pub...[edit] I was in a pub last week and noticed an unusual spot of graffiti upon the bathroom window... The Super Secret Police of Conservapedia are watching us! SJ Debaser 11:09, 16 August 2009 (UTC) - Would it be disingenuous of me to say that a men's bathroom is exactly the kind of place I'd expect to find TK lurking? --Psy - C20H25N3OYou know you want to 12:02, 16 August 2009 (UTC) - Just obvious. - π 12:03, 16 August 2009 (UTC) - But worthy of lulz, surely. SJ Debaser 12:15, 16 August 2009 (UTC) - We shold block him for being a member of a vandal site. Totnesmartin (talk) 12:55, 16 August 2009 (UTC) Again[edit] I was in the same pub again last night! Man, it's creepy now... SJ Debaser 10:41, 19 August 2009 (UTC) District 9[edit] Anybody seen this yet? Know we don't normally talk films, but when a $30m sci-fi flick, shot in and around Jo'burg with no big name actors, kicks Michael Bey off the top of the US box office, it must be kinda special. Alas, hasn't opened here yet - typical. --Psy - C20H25N3OYou know you want to 13:50, 16 August 2009 (UTC) - I'm going to check it out tonight, it looks like it has potential.--الملعب الاسود العقل In my prime like Optimus 16:13, 16 August 2009 (UTC) - The Spill guys all gave it the Better than Sex rating. I'm interested. ENorman (talk) 20:53, 16 August 2009 (UTC) - Its fucking awesome, Peter Jackson produced it and its gold. Ace McWickedModel 500 09:37, 17 August 2009 (UTC) Questions for the mob[edit] Resurrected from RationalWiki:Saloon_bar/Archive26#Discussion_regarding_hiding_revisions I'll try to keep this simple - Should bureaucrats be able to hide a revision with the show/hide button in such a way that only another bureaucrat can see it and restore it? - Currently, if you hide a revision, any sysop can view it. Since basically every user is a sysop, the point is somewhat lost. With this option privacy violations could be hidden so that sysops can't see them. - Which users should be able to see a log of bureaucrats hiding revisions from sysops: bureaucrats only, sysops too, registered users, or everyone including BONs? - I'm only asking this question because there is an option, though I believe most people will say open the log to everyone. -- Nx / talk 14:29, 16 August 2009 (UTC) - I assume this came up because of something that happened in the dim and distant past. Can somebody give me a link or a quick precis? SuspectedReplicant (talk) 16:26, 16 August 2009 (UTC) - No comment on the first matter. However, EVERYONE should be able to see what happened, BON's included. --The Emperor Kneel before Zod! 19:16, 16 August 2009 (UTC) - Well no argument on everyone being able to see the log. Is the hiding really necessary though? I'm sure that, at the time, people will be able to find the hidden edit, but if I wanted to locate that stuff now I wouldn't know where to look. Besides, given the stupidly large number of 'crats this site has, you aren't really hiding anything all that securely. SuspectedReplicant (talk) 19:34, 16 August 2009 (UTC) - I say leave that option for sysops and 'crats only. Or at least limit it as far as autoconfirmed users goes. Given some of the problems here, and considering issues of privacy, do we really want cyberstalking to be possible through us?Lord of the Goons (talk) 19:38, 16 August 2009 (UTC) - Here's an example of the log. It's basically the same as what you get when you use the interface as a sysop, only it goes into a different log (I have no idea why they made it that way, the deleted revision is still visible in the article history, so this isn't like oversight where it vanishes without a trace). As for the first option, all it does is add an extra checkbox on the show/hide page titled "Apply these restrictions to administrators and lock this interface". -- Nx / talk 20:07, 16 August 2009 (UTC) - Suggestion: Any implementation should be done after Trent is back so Just in case someone screw things up with newly powers (for them anyways) it would be less of a panic. - I am personally indifferent who can supress revisions as long as sysops can view it (it's mostly violation of community standards, so presumably those who follows the rules won't do much with the extra information, and it's easier to see what happened) ThiehWhat is going on? 23:24, 16 August 2009 (UTC) - It should be possible for revisions to be deleted such that sysops cannot see them, but the log should note, for all to see, that revisions were removed. The information that is being deleted may indeed be harmful, but I see no reason to hide from anyone that something was deleted. Fedhaji (talk) 07:48, 17 August 2009 (UTC) Important[edit] I am going away for a few weeks on Tuesday. I will not have physical access to the servers for two weeks. More than ever, it is important for people to make sure they are aware of rationalwiki.blogspot.com. This will be always be my primary form of communication if the site is inaccessible. If anyone has any ideas about how to spread the word more effectively about the existence of the blog please feel free to do so. tmtoulouse 19:05, 16 August 2009 (UTC) - Aaaarrgh! Doomed! We're all doomed! Lily Inspirate me. 19:14, 16 August 2009 (UTC) - Maybe it's time to have another whip round and buy him a watchdog timer board for Christmas so he can go away without too much worry. --JeevesMkII The gentleman's gentleman at the other site 19:59, 16 August 2009 (UTC) Funny Stuff....[edit] If only because there are not a lot of rap songs that name-check Margaret Sanger. TheoryOfPractice (talk) 20:24, 16 August 2009 (UTC) - "I Would Never (Have Sex with You)" should be required watching for all teenage males. Would prevent alotta disappointment to be sure! - Clepper Fail.[edit] with a goat. That is all. ĵ₳¥ášÇ♠ʘ secret trainer of boars! 20:48, 16 August 2009 (UTC) - "Dunno source" - the source is the evil Cult of Jerboa and their photoshopping lies. Totnesmartin (talk) 21:32, 16 August 2009 (UTC) Maus[edit] #2 cat has just presented me with a (dead) mouse & is noisily eating it as I type. I am eating & honeychat 01:56, 17 August 2009 (UTC) - Thought we were about to talk comic books. TheoryOfPractice (talk) 01:59, 17 August 2009 (UTC) - Naw! Just nature RED in tooth & claw. I am eating & honeychat 02:03, 17 August 2009 (UTC) - That's one of my favorite things about owning a cat. I've never actually seen a mouse here, but on several occasions he's managed to kill a bird. He never fails to present it right in front of my lounge chair.--الملعب الاسود العقل How strange it is to be anything at all 02:13, 17 August 2009 (UTC) - All that's left is a smear of blood and what looks like a mouse sized liver. #1 cat used to bring loads of mice, but didn't kill 'em, just chased 'em around until they died of terror or hid under the fridge. I am eating & honeychat 02:26, 17 August 2009 (UTC) - My old cat used to kill the occasional mice that managed to make it into the house, and then just sort of leave it where it lay. I doubt she ever tried to eat one. --Kels (talk) 02:35, 17 August 2009 (UTC) - Sometimes the "what to do after you torture it to death" part is bred out, but the "torture" part is still there. Sometimes they eat all but the best bits and bring them to their pet human to make mouse liver omelettes with. One I had caught a bird and slowly devoured it a yard from where I now sit, when she was done all that was left were some feather tips. She was then the most contented cat for about 24 hours. Live food is good food for carnivores. ħuman 03:01, 17 August 2009 (UTC) - Evolution at work. Lily Inspirate me. 07:57, 17 August 2009 (UTC) - Well, actually it's artificial selection. But there isn't much of a difference, since "we" are the modern domestic cat's environment. ħuman 08:14, 17 August 2009 (UTC) - Evolution either way, methinks. - Clepper - It is also improving the mouse population, the better mice survive. Генгисpillaging 09:37, 17 August 2009 (UTC) - Ah, but do you people not see that this is evidence for the existence of God? Cats instinctively know to leave offerings to their deity: you, the human being. They practice religion the way they know, bringing an offering of a mouse now and then to leave in front of the altar (your easy chair). NEED VICODIN NOW (talk) 21:29, 17 August 2009 (UTC) - One of my kitty cats seems to be almost actively vegetarian. He won't eat any sort of raw or cooked meat, but does eat catfood that allegedly is made of meat, but mostly just eats dry food. However, he still has the hunting instinct and will catch all sorts of small furry animals or even birds. He just doesn't really know what to do with them after he's killed them. He's never managed something so daring as a former kitty cat I had years ago who caught a large rabbit and managed to drag it in, headless, and hide it under the sideboard. --JeevesMkII The gentleman's gentleman at the other site 11:08, 18 August 2009 (UTC) - I had to rescue yet another mouse from my youngest (and most psychopathic) kitteh in the middle of the night. Seriously, if there is an afterlife then when I die I expect all the mice I've saved to have chipped in and bought me a nice watch or something. CrundyTalk nerdy to me 10:56, 19 August 2009 (UTC) Awesome film[edit] District 9. damn good. You'll see. End Transmission. Ace McWickedModel 500 09:33, 17 August 2009 (UTC) - I saw a trailer for that the other day. Looks most unusul. SJ Debaser 10:13, 17 August 2009 (UTC) - What's it about? --Gulik (talk) 01:02, 18 August 2009 (UTC) - Check out the District 9 tralier on youtube. It one of the best films I have seen in recent memory. Ace McWickedModel 500 01:41, 18 August 2009 (UTC) - Wangled an invite to the première here tomorrow night (so will be on best behaviour) - looking forward to it. @Gulik - aliens arrive in Johannesburg, turns out they're refugees and are placed in a refugee camp. Later on, humans get fed up with them and go all apartheid on them. (Ok, that's a brief, bad summary). --Psy - C20H25N3OYou know you want to 16:08, 18 August 2009 (UTC) Usain Bolt[edit] I was just watching Usain Bolt on Youtube, and one of the commenters put this little gem: right.. now he's obviously the fastest person alive...anyone half man/half cheetah hiding around here? c'mon, itd be awesome to beat him ;) - Where is that boy when you need him? Totnesmartin (talk) 10:28, 17 August 2009 (UTC) - I caught the race last night on the telly - it was beautiful to watch - I felt kinda sorry for Gay, he runs a superb race, breaks the US national record and still he's a metre behind Bolt. Bob Soles (talk) 10:37, 17 August 2009 (UTC) - And the guy was pretty quick around the Top Gear track too. narchist 13:45, 17 August 2009 (UTC) - Absolutely ridiculous, human beings aren't supposed to be able to run that fast. I always end up wondering what track stars would do in other sports. If he can catch a football someone needs to sign him, he'd make young Randy Moss look slow.--الملعب الاسود العقل Your ballroom days are over, baby 14:53, 17 August 2009 (UTC) - Half man/half cheetah? Does Aimee Mullins count? --Kels (talk) 15:24, 17 August 2009 (UTC) - That must be CUR's favourite pin-up - all his dreams come true at once! Bob Soles (talk) 15:32, 17 August 2009 (UTC) - I ran into a talk she gave at TED (which led to finding another, older one) and she's awesome. It's funny, the cheetah legs she wore for the photo shoot were totally impractical for walking, so she had all sorts of trouble getting around on the shoot. Great result, though. --Kels (talk) 15:41, 17 August 2009 (UTC) Media termns[edit] Some funny stuff. Some terms may even be worth stealing and expanding on as articles. I quite like "PR-reviewed", as in contrast to peer-reviewed. Ben Goldacre brought it up on a very recent blog entry too. I definitely reckon it has something to it. narchist 16:03, 17 August 2009 (UTC) WTF[edit] Just...wtf Totnesmartin (talk) 17:18, 17 August 2009 (UTC) - Worrying is the "Ex Transexual" part... is it physically possible to have an operation to become a woman and then have it a second time to become a man again? And "Ex HIV Positive"? Isn't that just HIV in remission or is that not one of "those diseases"... SJ Debaser 17:32, 17 August 2009 (UTC) - I reckon that's worth stealing and using as an article illustration. I just can't think what. narchist 17:49, 17 August 2009 (UTC) - Bullshit? Faith Healing? ENorman (talk) 20:29, 17 August 2009 (UTC) - It's not physically impossible to reverse the SRS.. it's just really expensive and you won't really be the same. The reason why doctors sell Sexual Reassignment Surgery as permanent is because you technically won't be able "function" down there again after a reverse surgery. - Vaginoplasty takes the penis and morphs it into a penis.. magically--because I was too scared to look at the case studies of it. Same with a vagina (Labiaplasty), they take the labia and "such" (didn't research that part too much) and bippity-boppity-boo.. a penis! - Financially speaking, changing a penis into a vagina costs about $50,000 (for the best), then back again would cost around $75,000 (for the best). Penis transformed into a vagina is easier because there's more to "work with". Silly Mr. Cat 20:41, 17 August 2009 (UTC) - First of all a transexual does not necessarily have to have had reassignment surgery, just a change of mind. He's also married "with a woman who has no uterus" not to a woman who has no uterus. And I think it would be theoretically possible for a woman without a uterus to still donate eggs which could be fertilised and implanted in a surrogate. Also, judging by some message-board comments this should have happened last year, yet there doesn't appear to be any other record of it on the ineternet (OK I didn't bother clicking on the second Google search page) apart from the picture. Генгисpillaging 20:49, 17 August 2009 (UTC) - @Skittlebucket. It's not just the penis, they usually remove the testes as well. Генгисpillaging 20:53, 17 August 2009 (UTC) - Actually it says "married with a woman who had no uterus". Does that mean she's got one now? Генгисpillaging 21:02, 17 August 2009 (UTC) - From what I know of it, the common practice is to skin the penis like a banana, toss away most of the insides, invert it into the body to make the vagina, toss the testes away, turn the scrotum into labia, and re-route the nerves. Tends to be very good results. The other direction, that ain't so easy. The more "traditional" is to take a skin graft from the arm or leg, make a tube out of that and graft it to where the now-closed vagina used to be, plus the same re-routing of nerves. Not a great result, overall a bit of a Frankenstein job, no ejaculation of course and usually a pump to get hard. Better results come from another version where the clitoris is released from the hood and anchoring to hang more freely. Re-route the urethera, make a new scrotum out of the labia where possible and insert fake balls, and add some hormone therapy which increases the size of the clitoris, and voila. A smallish and non-ejaculatory penis that gets hard the natural way and generally has better sensitivity, without the huge scarring and suchlike. Either way, nothing a person would go through "on a lark", so I expect this "ex-transsexual" is either lying, or is living a happy life of repression pushed by the church. Either way, I don't expect there was ever any surgery involved. --Kels (talk) 21:06, 17 August 2009 (UTC) - - Uh, that is, I think, a lot more than most of us needed to know about that... ListenerXTalkerX 04:10, 18 August 2009 (UTC) - I didn't make it past "skin the penis like a banana". --الملعب الاسود العقل You pray for rain, I'll pray for blindness 04:20, 18 August 2009 (UTC) - There is a website but it makes no mention of Prophet Isaac. The site doesn't appear to have been updated much since early 2008. Генгисpillaging 05:13, 18 August 2009 (UTC) - There is also a USA organisation with the same name but their website is no longer available. However, Google cache gives this gem from their about us page : - Who We Are - A sinificant [sic] Christian presence in the third world country is strong in the big cities which leaves the rural areas with very minimal or no Christian influence due to the economy and lack of trained leadership. - We seek to establish and help churches develop a significant Christian witness in the rural community which requires a shift in ministry paradigm appropriate for the rural context. - Генгисpillaging 05:20, 18 August 2009 (UTC) (UD) One thing they missed out on the list of Exes is Ex-parrot. He could be dead. Lily Inspirate me. 05:59, 18 August 2009 (UTC) Pornographie![edit] And no, it is not just to get your attention. I have a question for the residential porn experts that bothers me for quite some time now: why does most of the internet assume that German porn is all about pissing and shitting and other disgusting stuff? I`ve been surfing the internet for Porn for about 10 years now, and I don`t see anything especially shitty about our domestic porn. Is it because Germans are said to give a shit about everything? Or do you think we just want to piss of the rest of the world? Nothing but questions...— Unsigned, by: Gmb / talk / contribs - Hmm. My impression of German porn is either the soft stuff with middle-aged guys in lederhosen and hats with feathers in them rolling in the hay with a blond jungfrau with bunches or overly-serious masked (maybe even gas-masked) swingers with rubber and leather fetishes. But then I haven't seen a lot of it. Генгисpillaging 20:25, 17 August 2009 (UTC) - I assumed it was Russian. Just like the beastiality stuff. (wandering 4chan does too much to your mind). Ke klik 20:43, 17 August 2009 (UTC) - The fact that Germans love pissing and shitting in their fucking is an incontrovertible truism. Denying this is like that denying the beauty of fall foliage reduces the risk of cancer. Are you against school prayer by any chance? Judging by your other, more ludicrous positions on the state of the art of German pornography, I am 97.5% certain you are. — Signed, by: Neveruse513 / Talk / Block 20:45, 17 August 2009 (UTC) "Anonymous User" (what an absurd user name that is), your determination to deny conservative values is remarkable. I've managed to read your rants and nearly every one of your postings here includes a puerile, sneering remark, so my confidence is over three standard deviations away from the mean that you're almost certainly a liberal who's here to push your misguided ideology rather than genuinely help anyone learn. Like all atheists, you deny that action at a distance helps resist moral decay. As I've said before, it's a myth that were atheists in ancient times. There may not even be true atheists today. Your statement does not explain how a materialist, which most atheists are, can believe in something immaterial like love. Most don't. Try to open your mind in the future.--Aschlafly 09:00, 16 May 2022 (UTC) oh come on, such a wonderfull lead-in and all you can come up with is quotes from the assfly? show some more creativity Gmb (talk) 21:26, 17 August 2009 (UTC) - Fuck you bossy bum, you show some more creativity. 75.127.68.98 03:12, 18 August 2009 (UTC) - Whenever Bill uses Emule to download films and he gets a fake pornographic one it frequently seems to involve Germans involved in fairly basic acts. On the other hand I've only got his word for this story from start to finish so there may be some doubts about this data.--Hillary Rodham Clinton (talk) 07:13, 18 August 2009 (UTC) Cinema/Movies[edit] Just a little query: How often to people go to the movies? The last time I went was in 1962 (Sean Connery/Dr. No). I just haven't got the attention span to sit in one place just watching a film for that length of time. The only films I've seen for ages have been on DVD. Right now I'm flicking between Firefox (3 windows, 17 tabs) and Open Office Writer & the TV is constantly on. My other half is similarly occupied but with a book instead of OOW (she did go to see Titanic!). Am I unusual? I am eating & honeychat 01:24, 18 August 2009 (UTC) - You're typical (ADD for adults is fun, not a disease!). I actually like settling in and watching a complete film (screw movies, they have commercials, whatever). Last one I saw in a theater was Fargo, IIRC. But I rent and watch things I own, only pausing for pee or food/drink breaks. ħuman 02:39, 18 August 2009 (UTC) - I see a large number of films in the theater, mostly with no cash outlay, since I am on the mailing-lists for those free preview screenings the film distributors put on. ListenerXTalkerX 04:09, 18 August 2009 (UTC) - Lucky bastid. ħuman 05:04, 18 August 2009 (UTC) - I go very very rarely (much to chargrin of Ms McWicked) as you cant smoke, I always need to take a piss and when I was a movie reviewer I had to see 2 - 3 a day weekends on end, it really put me off. Although as I have already stated District 9 - awesome. Ace McWickedModel 500 05:23, 18 August 2009 (UTC) - That's always been my problem too. About halfway into the movie I start thinking "shit, I need a smoke" and can't fully enjoy the film. I'd say I go once every couple months though.--EcheNegraMente I can't drive straight counting your fake frowns 05:28, 18 August 2009 (UTC) - Wow. Just how many people on this site smoke? It seems to be way above average. I gave up (counts) two and a half years ago. SuspectedReplicant (talk) 05:32, 18 August 2009 (UTC) Poll: ListenerXTalkerX 05:37, 18 August 2009 (UTC) Further poll for smokers, how many per day?: Ace McWickedModel 500 05:41, 18 August 2009 (UTC) M - I am 6 foot 4 in the old money so after about 30 minutes in those seats my ass is numb and I can't wait to get out. Still, the only movie I ever walked out of was Sunshine. Terrible, terrible film (apologies to those that liked it). Rad McCool (talk) 10:17, 18 August 2009 (UTC) - NZlanders use cockney rhyming slang? Last movie I saw that sucked arse was the Butterfly Effect. I love that bit in Family Guy... "I got the idea to build a panic room after I saw that movie - you know, the Butterfly Effect? I thought 'wow, this is terrible, I wish I could escape to somewhere where this movie could never find me,' and then-" SJ Debaser 10:48, 18 August 2009 (UTC) So (on a huge sample size) half of RW users have smoked at some point. I'm sure Schlafly could come up with a statistic on that. Worst film? Well obviously Plan 9 from Outer Space, but that's so bad it's funny. You have to go a long way to beat the Blair Witch Project. I thought that was utterly rubbish. SuspectedReplicant (talk) 14:00, 18 August 2009 (UTC) - Been about three years since I've seen a movie in the theatre (went to see Howl's Moving Castle, totally worth it), but coincidentally I'm going to see Ponyo today. For a while before that I went fairly often to the local rep cinema, saw some great stuff, but it's been rare over the past decade for me to go to a main line cinema. Gonna be fun in October though, I'll be volunteering at the Ottawa Animation Festival. --Kels (talk) 14:04, 18 August 2009 (UTC) - Me and the wife go to the movies quite frequently, maybe twice a month, not to mention our extensive DVD collection and bitchin home theater. And as far as bad movies go, only two movies need to be considered; American Movie, and the Howling 7. Howling 7 wins though, because at least American Movie tried to have a plot. No, I wouldn't recommend anyone ever seeing either of these. Z3rotalk 15:48, 18 August 2009 (UTC) - I haven't been since 300. It was OK until the big giant Persian thing walked on and I thought "they have a cave troll." there's one in every film of that type nowadays, going by the trailers. Totnesmartin (talk) 09:50, 19 August 2009 (UTC) smoking poll[edit] So, a few people here smoke... but what do you smoke? Totnesmartin (talk) 09:51, 19 August 2009 (UTC) That health care bill sure would be nice right about now....[edit] Well, the battle over health care just became a fairly personal one for me. I just found out I have a stage 2-3 case of chronic kidney disease. I'm only 22 so this sucks pretty hard, even if I might've done it to myself. Luckily, I have decent insurance so I'm not totally in the shitter, but things are going to be really tight for a while. A little bit of extra help would go a long way. Hopefully this shit will get sorted out soon, I sure would like to able to afford more than tortillas and velveeta!--PitchBlackMind So analyze me, surprise me, but can't magmatize me 01:31, 18 August 2009 (UTC) - Fuck dude, sorry to hear that man. Wishing the best though. Ace McWickedModel 500 01:40, 18 August 2009 (UTC) - EC) That's a bastard, PBM. At 22 y.o. it's specially bad, you expect bits to start breaking down at my age but that sucks. Suggest that you start sounding out blood relatives with working parts ASAP. I am eating & honeychat 01:44, 18 August 2009 (UTC) - I don't know the details about your condition but it obviously doesn't sound good. Best wishes for successful treatment. Генгисpillaging 02:07, 18 August 2009 (UTC) - I can't offer anything aside from a "that really sucks man" Javasca₧ A sig not even he can predict! 02:12, 18 August 2009 (UTC) - Hang in there. Sterile Toyota 02:18, 18 August 2009 (UTC) - Best of luck, and concentrate on keeping that insurance up-to-date and fully paid. And, oh, yeah, seconding the ghoulish "mine your relatives for organs" comment. ħuman 02:41, 18 August 2009 (UTC) - That terrible mate. Best of luck, always keep positive about things. - π 02:48, 18 August 2009 (UTC) Thanks everyone, I really appreciate that. I should've explained more about it when I mentioned it, I wasn't sure what it was when I first heard it either. It's serious, but it's manageable. There's a strong possibility I'll be on dialysis or in need of a transplant when I'm around 55-60. Though that might not happen at all, only time will tell. Either way there should be plenty of time to life a good life. Still, there will be copious amounts of medication and doctor visits in my future, so Uncle Sam picking up the tab here and there would help a great deal.--EcheNegraMente When I look up the sky's all I see 02:54, 18 August 2009 (UTC) Good luck with this. I know it might not do any good, but apparently, various liberal groups are trying to stir up people to go to these townhall meetings to counteract the screaming rightwing loonies. You might want to consider attending a few, as a good personal story is worth a thousand pages of statistics. (Planning on going to a local one myself tomorrow.) --Gulik (talk) 03:57, 18 August 2009 (UTC) - It's never nice to find out seomthing nasty about one's health. However, you have apparently still got a long life in front of you before you need major stuff. That's 30+ years of medical research. I'm sure that things will have changed enormously by then, who knows what the options will be? Lily Inspirate me. 06:03, 18 August 2009 (UTC) - Slightly intrigued by: "even if I might've done it to myself.". I am eating & honeychat 13:54, 18 August 2009 (UTC) - Okay, I'll bite. I was pretty severely addicted to crystal meth for many years. Being basically pure poison, it's extremely hard on just about every part of your body. It may not be the specific cause, but it's undoubtedly a contributing factor.--PitchBlackMind Midnight wish blow me a kiss 14:34, 18 August 2009 (UTC) - Aah! Possibly Too Much Information. I am eating & honeychat 14:46, 18 August 2009 (UTC) - "If my answers frighten you then you should cease asking scary questions." :)--PitchBlackMind So analyze me, surprise me, but can't magmatize me 15:26, 18 August 2009 (UTC) - So, er, do you watch Breaking Bad? And if so, how "authentic" is it when it's not trying to be hilarious? ħuman 20:50, 18 August 2009 (UTC) - On the bright side, at least you're off the crystal meth now, right? Lily Inspirate me. 21:21, 18 August 2009 (UTC) - Yes Lily, I've been off of it for quite some time. Breaking Bad is a fantastic show, Human. It's a remarkably accurate take on the meth world and the people involved with it. Even the funnier stuff is plausible. They must have done some serious research for the show, because everything about it is as close to reality as you can get.--PitchBlackMind A smooth operator operating correctly 23:35, 18 August 2009 (UTC) Spam?[edit] I received the following email today: [email protected] Reply Follow up message The following is an e-mail sent to you by an administrator of "RichardDawkins.net Forum". If this message is spam, contains abusive or other comments you find offensive please contact the webmaster of the board at the following address: [email protected] Include this full e-mail (particularly the headers). Message sent to you follows: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You have been invited to join It is one of the best sites around to get all your free application downloads, latest movies, games of all sorts, and many other great things. Please make an introduction topic there so you can get a warm welcome from your members! -- Thanks, RDF is it spam or wot? I am eating & honeychat 01:39, 18 August 2009 (UTC) - "WAREZ" = Piracy! Spam definitely. Генгисpillaging 01:58, 18 August 2009 (UTC) - Yup, I got two of these. Considering it's from a "forum administrator" I'm guessing old dickie hasn't kept his forum software up to date and some spammers have hijacked an account to send spam. CrundyTalk nerdy to me 09:27, 18 August 2009 (UTC) "When Zombies Attack!: Mathematical Modelling of an Outbreak of Zombie Infection"[edit] I'm.... speechless ... about this. Sterile Toyota 02:21, 18 August 2009 (UTC) - I'll say. There no discussion on previous zombie infection models, I would have asked for a more detailed literature review if I was the editor. - π 02:41, 18 August 2009 (UTC) Math question[edit] Maybe a tad technical, but I know we have at least a few mathematicians that frequent these parts. I need some help. I have a set of discrete points on a Cartesian plane and I need to calculate the area swept out underneath these points. Now if this was a function I know you just integrate under the curve, but these are discrete points. My two best guesses so far on what to do are to calculate some polynomial function with n degrees of freedom that meets some arbitrarily small mean square error between the function and the points. Then use that function to calculate the area. My second idea was to use some Runge-Kuttaesque method where I do a stepwise integration between two points with some arbitrarily small step size, then sum up the area between all the points. Would either of these be likely to fail? Is there a better, more accepted way to do this? Thanks much for anyone that happens upon this and can provide some insight. tmtoulouse 04:29, 18 August 2009 (UTC) - Without really being to sure exactly what you are trying to do, but based on "if this was a function I know you just integrate under the curve, but these are discrete points", I would say you would need to take the Lebesgue integral. 192.43.227.18 04:52, 18 August 2009 (UTC) US ‘may take military action’ to liberate Britain from the NHS[edit] Made me laugh. SuspectedReplicant (talk) 05:51, 18 August 2009 (UTC) - (explanatory links for non-Brits: [1] [2] and [3]) SuspectedReplicant (talk) 05:54, 18 August 2009 (UTC) - At least the US gave us ER. Casualty was just a pale imitation and they had to get their babies from the US as well. Lily Inspirate me. 06:10, 18 August 2009 (UTC) - Wait, I thought Casualty had been around for way longer than ER? CrundyTalk nerdy to me 09:30, 18 August 2009 (UTC) - It has. "Casualty is the longest running emergency medical drama series in the world" according to WP. It started in 1986, compared to 1994 for E.R. SuspectedReplicant (talk) 09:47, 18 August 2009 (UTC) I demand an equal opportunity to be freed from the tyranny of the Spanish health service Taliban.--BobNot Jim 09:56, 18 August 2009 (UTC) Bookmarked. That'll go nicely with my daily mash. Totnesmartin (talk) 10:17, 18 August 2009 (UTC) Obama is believed to have abandoned his plans to adopt the NHS model for American healthcare, even though he has privately commented that the bed-hopping antics of British medics 'sure look like fun'. Class. SJ Debaser 10:56, 18 August 2009 (UTC) - That's some funny parody, that there is, it is. I like how it actually takes the trouble to be two different jokes wrapped up in one. ħuman 20:54, 18 August 2009 (UTC) - Ah - I didn't know about the Daily Mash. Now bookmarked. SuspectedReplicant (talk) 06:24, 19 August 2009 (UTC) - Could be worse. They could have intercepted the transmission of a couple of episodes of Doctors. narchist 11:21, 19 August 2009 (UTC) Here, have a bag of LOLs[edit] I'd never heard this before. Have you? --Kels (talk) 15:26, 18 August 2009 (UTC) - Also this from him. CrundyTalk nerdy to me 08:26, 19 August 2009 (UTC) Fucking crickets![edit] OK, I was at Scientific evidence for God's existence and I knew those crickets were digital. But I closed it. Now I still here crickets. Well, hell, it's cricket season. Or is Mike Malloy reading Scientific evidence for God's existence while ranting? I once recorded an early version of a song, and you can hear a cricket chirping on the master in the background at the beginning and the end. Meaning the damn insect is deep in the background of the entire song. Never figured out an "easy" way to make the much later, listenable, version incorporate that delightful feature. Except... oh, shit, it's August! Cricket season! I could turn everything off and mike the "current" cricket on a spare track and mix the bastid in! I wonder if I could do that competently, it's not that late, I've only had a few "soul enriching beverages".... ħuman 02:42, 19 August 2009 (UTC) - Just think, in but a few short months, they will be gone. Summer really has been going fast. I'm already starting to prepare firewood--Tabris (talk) 02:54, 19 August 2009 (UTC) Then there's the real cricket. Last Test starts tomorrow (? well, 1000GMT Thursday 20/8). This is serious stuff for us Aussies - revenge for 2005.RagTopGone sailing 11:18, 19 August 2009 (UTC) UFOs[edit] Hehehe "One report, from 1995, describes how two children from Bovingdon, in Hertfordshire, were almost lured onto a helicopter-shaped spacecraft by an alien who 'could walk backwards but made it look like he was walking forwards'. According to the files: "The translucent creature called to them in a melodic falsetto and was only scared off after a local farmer threatened to report him and his pet monkey to the police."" CrundyTalk nerdy to me 08:33, 19 August 2009 (UTC) Master of the Internet![edit]Totnesmartin (talk) 09:36, 19 August 2009 (UTC) Bible translation[edit] Anyone thought about doing a RW re-translation? "Jesus then rebuked the evil spirit, "Shut up, muthafucka..."" narchist 12:27, 19 August 2009 (UTC)
https://rationalwiki.org/wiki/RationalWiki:Saloon_bar/Archive32
CC-MAIN-2022-21
en
refinedweb
I have been writing about Entity Framework Core ( EF Core ) lately and how to integrate it with .NET Core. If you are interested in learning how to use EF Core in a .NET Core Console Application, you may be interested in my article Getting Started with Entity Framework Core and SQLite. I then showed how to use EF Core Migrations to manage the SQLite database in the article EF Core Migrations Tutorial. In this article I will be showing how to use EF Core with ASP.NET Core MVC to display a list of reminders in a SQLite database. I will be using EF Core Migrations to create the database. I will also be using Dependency Injection in ASP.NET Core with a custom DbContext Class in Entity Framework Core. All of this code is written by hand using Visual Studio Code on macOS. New ASP.NET Core MVC Project In a terminal window on macOS, I create a new mvcdata directory and .NET Core Application and open the files in Visual Studio Code. mkdir mvcdata cd mvcdata dotnet new dotnet restore code. I need to install a few dependencies in this project. At a minimum I need the Kestrel Server to serve my ASP.NET Core MVC pages and, of course, I need ASP.NET Core MVC ( Microsoft.AspNetCore.Server.Kestrel and Microsoft.AspNetCore.Mvc). I will be using EF Core with SQLite as well as the EF Core Tools for database migrations, so I also add Microsoft.EntityFrameworkCore.Design, Microsoft.EntityFrameworkCore.Sqlite, and Microsoft.EntityFrameworkCore.Tools. Since I am writing this from scratch in Visual Studio Code, it also helps to add a developer exception page in the ASP.NET Core Pipeline. Therefore I will also be using Microsoft.AspNetCore.Diagnostics in the application. And finally, I set preserveCompilationContext to true in the build options for use with the Razor View Engine. My project.json is as follows: { "version": "1.0.0-*", "buildOptions": { "debugType": "portable", "emitEntryPoint": true, "preserveCompilationContext": true }, "dependencies": { "Microsoft.AspNetCore.Diagnostics": "1.0.0", "Microsoft.AspNetCore.Mvc": "1.0.0", "Microsoft.AspNetCore.Server.Kestrel": "1.0.0", "Microsoft.EntityFrameworkCore.Design": "1.0.0-*", "Microsoft.EntityFrameworkCore.Sqlite": "1.0.0" }, "tools": { "Microsoft.EntityFrameworkCore.Tools": "1.0.0-*" }, "frameworks": { "netcoreapp1.0": { "dependencies": { "Microsoft.NETCore.App": { "type": "platform", "version": "1.0.0" } }, "imports": "dnxcore50" } } } Configure ASP.NET Core MVC I wire up Kestrel to serve my web pages as well as add the ASP.NET Core MVC Middleware. If this is totally new to you, I recommend reading my ASP.NET Core MVC From Scratch Article that walks through this information in more detail. using System.IO; using Microsoft.AspNetCore.Hosting; namespace MvcData { public class Program { public static void Main(string[] args) { var host = new WebHostBuilder() .UseKestrel() .UseContentRoot(Directory.GetCurrentDirectory()) .UseStartup<Startup>() .Build(); host.Run(); } } } using Microsoft.AspNetCore.Builder; using Microsoft.Extensions.DependencyInjection; using MvcData.Models; using MvcData.Services; namespace MvcData { public class Startup { public void ConfigureServices(IServiceCollection services) { services.AddMvc(); } public void Configure(IApplicationBuilder app) { app.UseMvcWithDefaultRoute(); } } } I will be adding more to the Startup Class in a minute. This is the bare minimum to get ASP.NET Core MVC up and running. The Startup Class is the entry point into my ASP.NET Core Applications. I am adding ASP.NET Core MVC to the Dependency Injection Framework and its middleware to the pipeline. I could configure it to use custom routes, but for this example I will just use the ASP.NET Core MVC Default Routing. Developer Exception Page in ASP.NET Core As I mentioned earlier, I don't always get these sample applications right the first time. Adding logging would be overkill, but the developer exception page is a necessity for displaying exceptions that occur in the pipeline. I add this to the middleware pipeline in the Configure Method of the Startup Class before ASP.NET Core MVC. public void Configure(IApplicationBuilder app) { app.UseDeveloperExceptionPage(); app.UseMvcWithDefaultRoute(); } Adding Entity Framework Core I add a custom DbContext, called MyDbContext, as well as a custom entity, called Reminder, to save reminders in my ASP.NET Core MVC Web Application. using Microsoft.EntityFrameworkCore; namespace MvcData.Models { public class MyDbContext : DbContext { public DbSet<Reminder> Reminders { get; set; } protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) { optionsBuilder.UseSqlite("Filename=./db.sqlite"); } } } using System.ComponentModel.DataAnnotations; namespace MvcData.Models { public class Reminder { [Key] public int Id { get; set; } [Required] public string Title { get; set;} } } I am using System.ComponentModel.DataAnnotations to give Entity Framework Core a bit more information about the properties on my Reminder Class. The property, Id, will be the primary key in the SQLite Database and the Title Column associated with the Title Property is required and should be set to non-nullable. The primary key would have been picked up using Entity Framework Core conventions, but I thought I would make it explicit anyway. I am wrapping the use of MyDbContext into a service, called MyData, for a bit more abstraction. This is a bit overkill for this example, but in larger projects one may want to limit the surface area of Entity Framework Core. using System.Collections.Generic; using System.Linq; using MvcData.Models; namespace MvcData.Services { public interface IMyData { List<Reminder> FetchReminders(); } public class MyData : IMyData { private readonly MyDbContext _db; public MyData(MyDbContext db) { _db = db; } public List<Reminder> FetchReminders() { return _db.Reminders.ToList(); } } } As I will explain in a moment, dependency injection in ASP.NET Core will inject MyDbContext into the MyData Service. ASP.NET Core Dependency Injection ASP.NET Core will inject my custom DbContext, called MyDbContext, into the MyData service in my ASP.NET Core Web Application. Again, dependency injection in ASP.NET Core happens in the Startup Class so I will add it there. I also register MyData with the dependency injection framework, because it will be injected into my RemindersController. The final Startup Class wiring up dependency injection, ASP.NET Core MVC, and a developer exception page looks as follows: using Microsoft.AspNetCore.Builder; using Microsoft.Extensions.DependencyInjection; using MvcData.Models; using MvcData.Services; namespace MvcData { public class Startup { public void ConfigureServices(IServiceCollection services) { services.AddMvc(); services.AddDbContext<MyDbContext>(); services.AddTransient<IMyData,MyData>(); } public void Configure(IApplicationBuilder app) { app.UseDeveloperExceptionPage(); app.UseMvcWithDefaultRoute(); } } } ASP.NET Core MVC Controller and Views Now that the ASP.NET Core Middleware Pipeline is set-up and configured, I can create ASP.NET Core MVC Controllers and Views. The Index Action of the RemindersController will retrieve a list of reminders in the SQLite Database via Entity Framework Core and display the reminders in an unordered list. using Microsoft.AspNetCore.Mvc; using MvcData.Services; public class RemindersController : Controller { private readonly IMyData _myData; public RemindersController(IMyData myData) { _myData = myData; } public IActionResult Index() { var reminders = _myData.FetchReminders(); return View(reminders); } } @model List<Reminder> @using MvcData.Models <ul> @foreach(var reminder in Model) { <li>@reminder.Title</li> } </ul> EF Core Migrations to Create SQLite Database The ASP.NET Core Web Application will not run without the SQLite Database. I can create it using EF Core Migrations. See my EF Core Migrations Tutorial for more detail on using migrations. From a Terminal Window in Visual Studio Code I run the Entity Framework Core Tools commands that create the initial migration and database. I also manually populated a couple of reminders into the SQLite Database for testing purposes. dotnet ef migrations add InitialMigration dotnet ef database update Run the ASP.NET Core MVC Web Application Run the ASP.NET Core MVC Web Application and display the results in a browser. dotnet run Project mvcdata (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation. Hosting environment: Production Content root path: /Users/Sasquatch/Core/mvcdata Now listening on: Application started. Press Ctrl+C to shut down. Conclusion If you are learning ASP.NET Core MVC, I hope you found the tutorial useful. I wrote this simple tutorial using Visual Studio Code on macOS so it seems like a great deal of configuration, but if you use Visual Studio 2015 a lot of this is done for you. I like to create these samples from scratch using a simple editor, because it gives me a better understanding of all the nitty gritty details. You can find me on twitter as Koder Dojo. I hope to hear from you! Best Wishes!
https://www.koderdojo.com/blog/getting-started-with-asp-net-core-mvc-and-ef-core
CC-MAIN-2022-21
en
refinedweb
Print string to text file It is strongly advised to use a context manager. As an advantage, it is made sure the file is always closed, no matter what: with open("Output.txt", "w") as text_file: text_file.write("Purchase Amount: %s" % TotalAmount) This is the explicit version (but always remember, the context manager version from above should be preferred): text_file = open("Output.txt", "w")text_file.write("Purchase Amount: %s" % TotalAmount)text_file.close() If you're using Python2.6 or higher, it's preferred to use str.format() with open("Output.txt", "w") as text_file: text_file.write("Purchase Amount: {0}".format(TotalAmount)) For python2.7 and higher you can use {0} {}instead of In Python3, there is an optional file parameter to the with open("Output.txt", "w") as text_file: print("Purchase Amount: {}".format(TotalAmount), file=text_file) Python3.6 introduced f-strings for another alternative with open("Output.txt", "w") as text_file: print(f"Purchase Amount: {TotalAmount}", file=text_file) In case you want to pass multiple arguments you can use a tuple price = 33.3with open("Output.txt", "w") as text_file: text_file.write("Purchase Amount: %s price %f" % (TotalAmount, price)) More: Print multiple arguments in python If you are using Python3. then you can use Print Function : your_data = {"Purchase Amount": 'TotalAmount'}print(your_data, file=open('D:\log.txt', 'w')) For python2 this is the example of Python Print String To Text File def my_func(): """ this function return some value :return: """ return 25.256def write_file(data): """ this function write data to file :param data: :return: """ file_name = r'D:\log.txt' with open(file_name, 'w') as x_file: x_file.write('{} TotalAmount'.format(data))def run(): data = my_func() write_file(data)run()
https://codehunter.cc/a/python/print-string-to-text-file
CC-MAIN-2022-21
en
refinedweb
table of contents NAME¶ getnetent, getnetbyname, getnetbyaddr, setnetent, endnetent - get network entry SYNOPSIS¶ #include <netdb.h> struct netent *getnetent(void); struct netent *getnetbyname(const char *name); struct netent *getnetbyaddr(uint32_t net, int type); void setnetent(int stayopen); void endnetent(void); DESCRIPTION¶. RETURN VALUE¶ The getnetent(), getnetbyname(), and getnetbyaddr() functions return a pointer to a statically allocated netent structure, or a null pointer if an error occurs or the end of the file is reached. FILES¶ - /etc/networks - networks database file ATTRIBUTES¶ For an explanation of the terms used in this section, see attributes(7). In the above table, netent in race:netent signifies that if any of the functions setnetent(), getnetent(), or endnetent() are used in parallel in different threads of a program, then data races could occur. CONFORMING TO¶ POSIX.1-2001, POSIX.1-2008, 4.3BSD. NOTES¶ In glibc versions before 2.2, the net argument of getnetbyaddr() was of type long. SEE ALSO¶ getnetent_r(3), getprotoent(3), getservent(3) RFC 1101 COLOPHON¶ This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at
https://manpages.debian.org/bullseye/manpages-dev/getnetbyname.3.en.html
CC-MAIN-2022-21
en
refinedweb
SYNOPSIS #include <sasl/sasl.h> int sasl_canon_user_t(sasl_conn_t *conn, void *context, const char *user, unsigned ulen, unsigned flags, const char *user_realm, char *out_user, unsigned out_umax, unsigned *out_ulen) DESCRIPTION sasl_canon_user_t Is the callback for an application-supplied user canonicalization function. This function is subject to the requirements that all user canonicalization functions are: It must copy the result into the output buffers, but the output buffers and the input buffers may be the same. context context from the callback record user and ulen Un-canonicalized username (and length) flags Either SASL_CU_AUTHID (indicating the authentication ID is being canonicalized) or SASL_CU_AUTHZID (indicating the authorization ID is to be canonicalized) or a bitwise OR of the the two. user_realm Realm of authentication. out_user and out_umax and out_ulen The output buffer, max length, and actual length for the username. RETURN VALUESASL callback functions should return SASL return codes. See sasl.h for a complete list. SASL_OK indicates success.
https://manpages.org/sasl_server_userdb_checkpass_t/3
CC-MAIN-2022-21
en
refinedweb
RationalWiki:Saloon bar/Archive34 Contents - 1 On being pedantic - 2 In honor of my birthday - 3 Hell... - 4 Namespaces - 5 New Logo - 6 Politicians - 7 Bandwidth - 8 The Demon Insomnia and Holiday Hangover - 9 Dreams of wiki - 10 Conservapedia create account - 11 I still have three tabs open on RW - 12 Abducted girl found after 18 yrs - 13 TGIF! - 14 user Republicans Reject Science; Scientists Reject Republicans - 15 Porn for the Blind - 16 interesting observation - 17 Archive binging - 18 Ed Poor on Wikipedia - 19 Spiny Norman! - 20 DO SOMETHING ALREADY - 21 Possibly the best site on the internet - 22 hello - 23 An unexpected problem - 24 Happy Spring Day - 25 World War Two... - 26 Aubrey O'Day - 27 Enron - 28 Stumblin' - 29 Praying for a profit - 30 lol WND - 31 Health care - 32 Evolutionary mutation rate... - 33 Vaccination story - 34 Blarrrgh - 35 I Want one - 36 World War Two (again) - 37 Jon Voight on Obama - 38 Language - 39 Nothing changes is creationism land - 40 IM - 41 Archiving - 42 REWARD - 43 Porting to RW - 44 50 Greatest Comedy Films - 45 BNP to give up racism! - 46 A Whorehouse of Knowledge - 47 Best bible verse EVER - 48 My day - 49 WikiSynergy pulled down - 50 Idiots Convention - 51 RationalWiki - 52 WHOOT! Welcome Home! - 53 The future - 54 Fairy tales - 55 Holy bloomin' crap - 56 Isn't this amazing? - 57 Recovery - 58 In the Garden of Eden - 59 We're back just in time - 60 The Atheist's Guide To Christmas - 61 All in the Family - 62 Intelligent Design Explained - 63 How is it possible? - 64 Sending video files - 65 Preventing server collapse - 66 De-CPying the main space - 67 Online behaviour. - 68 Popular British Names On being pedantic[edit] Ah the intricacies of being pedantic in English. It reminds me of the old Jonathon Miller sketch in "Beyond the Fringe". . ‘Come in’, he said. I decided to wait awhile in order to test the validity of his proposition. ‘Come in’, he said once again. ‘Very well’, I replied, ‘if that is in fact truly what you wish’. I opened the door accordingly and went in, and there was Moore seated by the fire with a basket upon his knees. ‘Moore’, I said, ‘do you have any apples in that basket?’ ‘No’, he replied, and smiled seraphically, as was his wont. I decided to try a different logical tack. ‘Moore’, I said, ‘do you then have some apples in that basket?’ ‘No’, he replied, leaving me in a logical cleft stick from which I had but one way out. ‘Moore’, I said, ‘do you then have apples in that basket?’ ‘Yes’, he replied. And from that day forth, we remained the very closest of friends.' Genghis Khant 23:00, 26 August 2009 (UTC) - OK, my brain is fried, anyone want to explain the subtle pedantry in that one to me? Kthxbai... Human 03:59, 27 August 2009 (UTC) - At first I thought it was some trick on "some" and "any" but it doesn't work. Click here for the original soundtrack which makes it clear it's a gag. - Bob M 09:34, 27 August 2009 (UTC) - I think I've sort of figured it out, without the help of wikipedia's non-existent article on bare plurals. "Any apples" can mean "all apples" ("any apple grows on a tree"); "Some apples" is a bit tougher, but still has a meaning like "some apples are red", which may be the logical direction Moore was trying to avoid being skewered on. Simply "apples" means what it says on the tin - more than one apple, and nothing else. Human 21:51, 27 August 2009 (UTC) - (Hey, this thread is almost on-topic for this wiki!) Human 21:52, 27 August 2009 (UTC) - That's pretty much how I interepreted it. Genghis Khant 21:58, 27 August 2009 (UTC) In honor of my birthday[edit] Happy birthday! I have put some 15 year, cask-strength Macallan on tap here for all RationalWiki refugees! Drink up!Gooniepunk2005 23:23, 26 August 2009 (UTC) - Oooooooh, that's goood stuff. Did you catch Nancy on Ron Reagan's show last hour? Human 01:12, 27 August 2009 (UTC) - For some reason, I am never around when Ron Reagan's show is on, so, no, I didn't hear his interview with Nancy. Was it good?Gooniepunk2005 22:19, 27 August 2009 (UTC) - Yeah, it was. They sounded like a son and his mother chatting about an old friend of their dad's (ok, not her dad, mangled sentence). She was pretty lucid for her age, and very nice about Ted, and how close they all were. The mother/son thing was kinda sweet, that they would let down their hair a bit and be a little personal on the air. Might be at whiterosesociety or Ron's website if you want to hear it, it was roughly the last 20 minutes or so of the show. - Happy Birthday, Goonie, sláinte mhath, and thanks for the drink. RagTop 03:52, 27 August 2009 (UTC) - Cheers mate! Many happy returns and all that. SuperJosh 17:33, 27 August 2009 (UTC) - Thanks all. I had a blast yesterday.Gooniepunk2005 22:19, 27 August 2009 (UTC) Hell...[edit] ...is giving 4 presentations in 1 day, in a language I've barely spoken for the last 12 months. Just wanted to get that off my chest. -- PsyWhut? 18:43, 27 August 2009 (UTC) Namespaces[edit] I have noticed that there are Debate and Essay namespaces here, which don't seem to make much sense. I would suggest that you replace them with things like Lesson, and change Free stuff into a category, which is more fitting its purpose. Phantom Hoover 18:12, 28 August 2009 (UTC) - I could do. To be honest they are not much used but there might be in the future. The world of TEFL is by no means a done deal and is really rife with debate about methodology and such like. Essay will allow people to create opinion type pieces if they so wish. "Free stuff" doesn't really fit into the mainspace (or won't in theory) because one could give a lesson a name which would then look like an article. As you will have guessed I stoled the ideas of those spaces who in turn (I think) stoled them from CP.--Bob M 19:06, 28 August 2009 (UTC) - But namespaces should separate different kinds of page, like articles from essays and material for lessons, meanwhile categories are for indicating shared attributes of pages; therefore "Free stuff" shouldn't be a namespace, because you can have two lessons, one of which is free and one of which isn't, but "Lesson" should, because, as you say, you should separate lessons from articles. Phantom Hoover 19:20, 28 August 2009 (UTC) - Space "free stuff" is for class materials which teachers have developed and which they have donated to the community. Not everything that a teacher might develop is, in fact, a "lesson". It could be a game for example, or a particular activity which the teacher has found useful for explaining or clarifying a particular point. We will never host paid lessons here - indeed it's not clear how we could. We also do have a couple of categories for "free lessons" or resources which people can obtain from the web. --Bob M 19:38, 28 August 2009 (UTC) New Logo[edit] After much to-ing and fro-ing we have a new logo at Teflpedia. I would like to express my thanks to Toast who initially suggested the use of IPA text; to Phantom Hover and Nx who gave up their time and patience to actually create it; and to all those who contributed opinions. It should serve as a memory of the RW vacations in 09.--Bob M 14:20, 26 August 2009 (UTC) - "initially suggested the use of IPA text" and teh mortar board. Toast 14:30, 26 August 2009 (UTC) - Then I stand corrected. And the mortar board.--Bob M 15:08, 26 August 2009 (UTC) - Heh! I haz teh ideas and the peons do the work. That's management! Toast 15:26, 26 August 2009 (UTC) - I prefer to see it as me and Nx helping the tech-unsavvy old people make pretty pictures. Phantom Hoover 15:45, 26 August 2009 (UTC) - This senior citizen has been making pretty pictures on computers probably since teh "tech savvy" people were in nappies. Toast 15:51, 26 August 2009 (UTC) - I'd already gathered that you had done computery stuff before. Phantom Hoover 16:09, 26 August 2009 (UTC) - Actually, now I look at it again - and look at some other logos - do you think we should but a line of sub text. "The wiki for English teachers" or something of that nature?--Bob M 15:34, 26 August 2009 (UTC) - The only people who will ever see it are already here and presumably know why, so no. If you use it external to the site (on printed matter e.g.) then yes.— Unsigned, by: Toast / talk / contribs - - Well, it's good to see you got something positive out of letting us all crash here. Armondikov 15:48, 26 August 2009 (UTC) Text added here. Phantom Hoover 20:07, 26 August 2009 (UTC) - "The English teaching encyclopaedia" doesn't sound right to me. Does it mean an 'English encyclopaedia of teaching' or an 'English-teaching encyclopaedia'? Perhaps it should be 'The encyclopaedia of English-teaching' or 'The encyclopaedia of teaching English' or even 'The wiki for teachers of English'. Genghis Khant 22:03, 26 August 2009 (UTC) - It's not actually an encyclopaedia is it? It's a resource. Toast 22:14, 26 August 2009 (UTC) - I'll change it tomorrow. Phantom Hoover 22:15, 26 August 2009 (UTC) Well the mainpage says: Teflpedia is a wiki dedicated to everything associated with teaching English as a foreign language or second language. Our objective is to become a user-generated resource for English teachers internationally. We opened our doors on 27th May 2008 and we've been slowly growing ever since. So - how to describe it? Resource? Wiki? Encyclopaedia? Meeting point? Perhaps simply calling it a "wiki" would be best? Opinions?--Bob M 09:41, 27 August 2009 (UTC) - Thanks PH. Now I see it I'm not so sure. Would "The English Teachers' wiki" look better?--Bob M 10:34, 27 August 2009 (UTC) - I just want to second Bob's suggesting of a RW summer outing. No, not that sort of outing. Armondikov 09:46, 27 August 2009 (UTC) - Yeh, but I bet we forget next year. Unless Trent "arranges" it again.--Bob M 10:34, 27 August 2009 (UTC) - Hehe, yeah. And in your caption above, either "Teachers'" should be uncapped or "wiki" should be capped, right? Human 21:35, 27 August 2009 (UTC) - Yes, you're right. My bad. (Unless you were to regard "English teacher" as a proper noun or a title. But that's clutching at straws.) Where is Tolerance when you need him - or was it her?--Bob M 22:13, 27 August 2009 (UTC) - English teacher = teacher (of any subject) who happens to be English. - Wiki for teachers of - English as a Foreign Language - Toast 22:25, 27 August 2009 (UTC) - That's a bit unwieldy, isn't it? Also, while "English teacher" can be read that way, it also can clearly mean "teacher of English". In fact, I doubt the confusion will come up much, if at all. If I said "my English teacher" I doubt anyone would think I meant "my math teacher from England". See how it compares simply to "math teacher"? Human 00:24, 28 August 2009 (UTC) (UI)You forget I am in a foreign country, and, here, the first three phrases mean what you would expect, and "English teacher" means a teacher of English. Human 00:38, 28 August 2009 (UTC) - Have it your way. Toast 00:39, 28 August 2009 (UTC) - Well, it's Bob's decision, and I'm sure he will weigh our arguments in his fine balance. If he even bothers to read all of them ;) Human 00:47, 28 August 2009 (UTC) - As for me, I'll just wait until you've all come to something approaching an agreement, and then I'll update it. Phantom Hoover 07:18, 28 August 2009 (UTC) - Actually the description "English teacher" is acknowledged to cause problems. See our article teacher.--Bob M 07:23, 28 August 2009 (UTC) - OK, that article needs some work, but how about "Resources for teachers of English as a foreign language" on two lines? Human 07:26, 28 August 2009 (UTC) - That doesn't fit on two lines at the font size I'm using, and I don't want to make it any smaller. Phantom Hoover 08:19, 28 August 2009 (UTC) - OK, how about "For teachers of English"? Or how about the older logo, no text except the IPA? Bob? Human 08:31, 28 August 2009 (UTC) - The thing is that while "English teacher" is potentially ambiguous. (Is the teacher English, or is English the subject being taught, and is that a TEFL teacher or a "normal" English teacher.) In reality such ambiguity never really causes problems. Anybody coming to a site called "TEFL something" will know full well what is going on. And If they don't the first paragraph on the mainpage explains what we are. So I don't think that we need to get too hung up over this or get into too long an explanation. I also like PH's font and font size , and I like the idea of a bit of text.--Bob M 09:17, 28 August 2009 (UTC) Request[edit] 23:32, 28 August 2009 (UTC) Politicians[edit] I thought the House of Commons was bad but I really don't know what to make of this. Can anyone explain it to me? Genghis Khant 22:21, 26 August 2009 (UTC) - Yeah, they do this stuff all the time. Most "work" is done in committee, so the place is 3/4 empty and no one cares, but "speeches" like that end up in the Congressional Record. Note that she was limited to 5 minutes. In the good old days of filibustering the old farts would just stand there reading the bible or the phone book or whatever they had handy in order to keep talking. We haven't had a good fist fight in ages, though. Human 22:42, 26 August 2009 (UTC) - Apparently one of the record lowest turnouts for the UK Commons was during a discussion on truancy... But yes, all the real work is done in smaller committees (Select Committees in the UK, IIRC) leaving some weird shit to go down in the actual chambers. Armondikov 09:43, 27 August 2009 (UTC) Bandwidth[edit] Bloody hell. Hosting RW is really chewing up my bandwidth. My previous maximum bandwith in a month was 900 meg and I've got 2000 contracted. At the moment I'm at about 1400 used - which means that I've got about 600 left. My maximum daily load with RW on board was 206, though 180 seems more like an average. With four and a half days of the month left that looks a bit ominous, especially as things are more likely to go up than down. A word with my provider is in order I feel.--Bob M 10:57, 27 August 2009 (UTC) - Indeed. You may get some leverage if you say it's temporary. If you need a few bucks to get an allowance increase to cover the usage, then I know I'm happy to contribute. Worm 11:10, 27 August 2009 (UTC) - I'm just in live chat with them now. The bastards are playing hardball.--Bob M 11:16, 27 August 2009 (UTC) - Now you know how Trent feels. Theemperor 12:06, 27 August 2009 (UTC) - And you're not really getting hit as hard as RW usually does. Good luck, I'll stop refreshing the page 10 times a second :p. But cheer up, it's really only a few days before Trent is back at home, chained to the server so he stays within kicking distance. Armondikov - 500 used in the last seven days, 600 to last youten days... not gonna make it. Is there a way for us to chip in if you go over? 188.220.32.68 15:16, 27 August 2009 (UTC) - - I wonder... How much of the bandwidth will have been taken up by me and Nx uploading more than ten different logo candidates? Phantom Hoover 17:02, 27 August 2009 (UTC) - I don't think that'll be very high. What will increase the load though is the number of people looking at those pages. I must admit that I refresh Rc loads of times & that's gotta be fairly high bytewise. Suggestion: remove all RW added Javascript/CSS from personal and common. (dunno if that'd help at all. How much does "Use enhanced Recent changes" contribute over a week? Toast 17:22, 27 August 2009 (UTC) - I was wondering about what chews bandwith as well. I'm at 1469 now, but I can't remember the exact number from this morning, but we don't seem to be chewing up bandwith at quite the same rate. - Talking to their finance people was like negotiating with a brick wall, their only solution was to upgrade my packages. Their negotiating position was - "If you go over your limit we switch you off." But, in the middle of the conversation - weirdly just as I asked them to put a value on the upgrades - , they transferred me to their techo folk. They were a bit weird, but gave me an extra 100 mbg. - So we'll wait and see how close we get to the line. Perhaps people could hold off uploading stuff till the new month starts? Would that make much difference?--Bob M 17:30, 27 August 2009 (UTC) The top bandwidth users are: - Great Britain 322.46 MB - United States 307.37 MB - Spain 204.32 MB - Hungary 106.85 MB - Australia 85.77 MB Some of that will be regular traffic. The UK and Gb are usually near the top. For comparison last month was: - Spain 296.40 MB - United States 48.01 MB - Great Britain 21.33 MB - Malaysia 8.07 MB - China 16.30 MB --Bob M 18:01, 27 August 2009 (UTC) - *Hungary 106.85 MB - Hm. I guess I better lay off the refresh button Nx 18:26, 27 August 2009 (UTC) - I find it vaguely weird that the UK beats the US. Phantom Hoover 18:32, 27 August 2009 (UTC) - Yeah, I thought that too. - Would archiving older parts of student bar make much difference?--Bob M 18:42, 27 August 2009 (UTC) - Bob, it would be a good idea to archive a large chunk of this page (and any other long pages like WIGO ASK talk) as the bandwidth usage increases geometrically. Also I would suggest commenting out any images as well. You've made us all quite welcome here and it would be a pity if we trashed your wiki with our presence. Lily The Pink 19:00, 27 August 2009 (UTC) Update[edit] Update: 1530 used out of 2000.--Bob M 22:22, 27 August 2009 (UTC) Purple Scissors at wikisynergy had offered up a "room" where we could play if we wanted. We might want to try splitting the load or something? Offload the talk wigocp or the student bar to WS maybe? Human 23:55, 27 August 2009 (UTC) - Why don't we fire up an Amazon EC2 server to use and we chip in to the account holder? I have an account set up and so I can boot one up, but considering it would cost about $100 / month (including the bandwidth) I'd need some contributions to avoid being castrated by tehmizus. Crundy 07:53, 28 August 2009 (UTC) - Either that or I can give you an account on my Linux reseller server with something like 10Gb of bandwith. Do you need root access to use MediaWiki or can you install it on shared hosting servers? Crundy 07:55, 28 August 2009 (UTC) WikiSynergy apparently has plenty of spare BW to share. If things get scary, Bob, why not make a banner link thing to a random page or subpage over there. I'm sure she won't mind. Human 08:50, 28 August 2009 (UTC) - Update: Now on 1565 out of 2000 so things seem to be slowing down, or people are checking recent changes less often or, archiving some pages has had an effect. If we run right up to the edge I'll do what you suggest and put a banner on Student bar suggesting that people take it to a page on WikiSynergy. --Bob M 09:09, 28 August 2009 (UTC) - If we can keep the pages here relatively small (mainly talk pages anyway) then that should do the trick. Moving to another site will only complicate merging everything back into RW. Also, if people could use a slightly smaller font then that might help as well. ;) Genghis Khant 10:24, 28 August 2009 (UTC) Update 1622 of 2000.--Bob M 19:15, 28 August 2009 (UTC) Update:1709 of 2000. Thanks for the idea Crundy but I'm afraid that I don't know how to do that. Don't know if any of the more techy types here know.--Bob M 06:51, 29 August 2009 (UTC) - I think we should move either talk wigo CP (or the student bar, nah, keep that here, it's a cool pagename) to wikisynergy soon. We don't want to bump you over the line. In html you can do a "redirect" that is automatic. Can you copy talk wigo cp over to WS and insert such magic? Or just leave a simple one line link? Or, will "300"-odd last for two more days? I am trying to use small words as much as I can. Human 07:23, 29 August 2009 (UTC) - I think it might, actually, given that the count has only gone up by a little under 100 for the last two days. Phantom Hoover 08:03, 29 August 2009 (UTC) Update: 1772 of 2000. I should like to thank people for taking it easy. :-) I should especially like to thank the financial contributor from the other end of the world.--Bob M 16:51, 29 August 2009 (UTC) Update: 1834 of 2000. We should make plans for the worst. :-( --Bob M 06:58, 30 August 2009 (UTC) I should stop hitting F5 every ten minutes on the recent changes page.. Web 07:31, 30 August 2009 (UTC) - If you're a RC-junkie then at least turn off image downloads (check advanced settings for your browser if you don't know how) and in your personal settings reduce the number of entries to display to 10. Ⓖⓔⓝⓖⓗⓘⓢ 07:48, 30 August 2009 (UTC) Update: 1889 of 2000--Bob M 20:25, 30 August 2009 (UTC) Update: 1960 of 2000. She's going to blow captain!--Bob M 07:34, 31 August 2009 (UTC) - Need to make redirects to and ASAP? Human 09:18, 31 August 2009 (UTC) - I assume Bob's host uses gmt... if that is so, there are only 7 hours left until september ... hopefully teflpedia won't be down for long if it does down. Nx 15:40, 31 August 2009 (UTC) - I assume my host is giving me a little bit of rope. I have been getting automated messages telling me that I'm at 99% but so far the knife has not fallen. From previous dealings with them I have the impression they don't want to lose clients, although when I spoke to their accounts dept a few days ago they weren't that helpful. Should have spoken to sales. Yes, they are on GMT which is notably ahead of US time. - I'm reluctant to look at my statistics pages in case looking at them provokes some bot to action. I have the impression that some of my e-mails have followed immediately on from my looking at the stats. Sorry about reducing chances to play for everybody. It's a bit like inviting everybody to a party and then finding out you don't have enough beer.--Bob M 16:51, 31 August 2009 (UTC) Well, August is over, but for future reference you should look into enabling output compression to reduce bandwith. MediaWiki actually does this by default, but you need to enable zlib support in PHP. --79.40.239.85 05:47, 1 September 2009 (UTC) - Bob, I've uploaded a little memento, just in case you never see it again. 19:04, 2 September 2009 (UTC) The Demon Insomnia and Holiday Hangover[edit] One of those christing insomniac nights. Always happens to me seasonally, as I am moving into spring down here in this wayward hempishere my body reacts badly. Last night I was in the throes of some kind of alcholic sicknesses that bewitch those that spend the week drinking heavily whilst on holiday. Combined with some belly infection the struck me as soon as I was airbourne from Melbourne airport I was in a world of pain. Tonight though is straight insomnia. So I get back up and leave Ms McWicked to her own sleepless purgatory - on the boarder of waking hell and heavenly sleep - and pour myself a double triple quad full glass of scotch and partake in a cigarette in my backyard. Under these beautiful NZ skys, clear stars wink at me as they complete their circus. I mull it all over, feel the wonderousness that can only come from imagining deep space. Then I get cold and come to see what you fuckers are up to. So wow me. Ace McWicked 12:11, 27 August 2009 (UTC) - You scumbags are boring, I am going back to bed. Ace McWicked 13:03, 27 August 2009 (UTC) - I laughed at a crying kid today. That was the high point of it. SuperJosh 18:07, 27 August 2009 (UTC) - I put some dirty knickers in a Kiwi's airplane food on a flight from Melbourbe to Wellington a few days ago. I wonder if it made him sick as heck. Hi Ace! Nutty Roux 24.14.72.223 18:55, 27 August 2009 (UTC) Dreams of wiki[edit] I was having a bit of a bad dream last night and wanted to get out of it. So I found a wiki-link in the dream, clicked on it and felt myself go to a different page, and a different dream. Am I spending too much time on-line?--Bob M 17:35, 27 August 2009 (UTC) - No, but when you react to a joke by saying lol,you've reached the point of no return. -- PsyWhut? 17:54, 27 August 2009 (UTC) - I thought the point of no return was when you end an ironic remark by saying ;) --79.45.237.109 20:41, 27 August 2009 (UTC) - No, you're too far gone when you say "bracket bracket Liberal bracket bracket" Theemperor 05:08, 29 August 2009 (UTC) Nother dream[edit] I had a dream last night that I was on the London Underground (London train service) with a friend and the train got hijacked by a guy that took it to a secret house where he and a group of friends were pirating DVDs. It was an immensely cool dream, as I never usually remember them. SuperJosh 10:28, 29 August 2009 (UTC) Conservapedia create account[edit] There's no create account button on Conservapedia - I know Phantom Hoover bought this up the other day, but it's still not there. I think they've finally done it and effectively ended free speech on Conservapedia. Other people's verdicts? SuperJosh 19:54, 27 August 2009 (UTC) - Sometimes they forget to switch it back on. But just like night editing and range blocks, anything that serves to put off new editors can only be good news for the forces of reason. Genghis Khant 20:02, 27 August 2009 (UTC) - Whenever we get RW back, we totally need an FAQ to answer the "Why is there no create account button?" question. --Jeeves 20:03, 27 August 2009 (UTC) - I think the fact that me and Josh are both in Britain may have altered our perception of things like night mode, because of the time difference. Do you get the Create Account button if you've been blocked? TK rangeblocked me a while ago after I made a slip, so that might explain it. Phantom Hoover 20:06, 27 August 2009 (UTC) - I seem to recall this happened at least once before. I can't believe they'd do such a thing deliberately.--Bob M 20:09, 27 August 2009 (UTC) - You should still be able to see the create account button even if you've been blocked from creating an account - it just won't complete. Genghis Khant 20:59, 27 August 2009 (UTC) It's back[edit] The button's back. Allow the sockpuppetry to begin! SuperJosh 12:21, 28 August 2009 (UTC) - Oh wait, I'm IP blocked. Oops! SuperJosh 12:22, 28 August 2009 (UTC) I still have three tabs open on RW[edit] A template talk I've had for ages, an article talk page where the first ref is a dead link, and talk:C-decay where I was arguing poorly with h/2h over something that seems so... long ago... I go there when I need to eat brains. I hope my firefox doesn't crash before Septic VI, these tabs are my only link with our glorious past, my friends. Human 07:21, 28 August 2009 (UTC) - No doubt that is what brought RW down! Human's open tabs!--Bob M 11:45, 28 August 2009 (UTC) Abducted girl found after 18 yrs[edit] I'm sure that most of you have heard about the 11-years old girl who was abducted 18 years ago and recently walked into a police station. I was not at all surprised to read this from the BBC article:. .... Some of those who had had contact with Mr Garrido over recent years said he had developed increasingly strong religious beliefs. Those evil amoral atheists, they're everywhere nowadays. Genghis Khant 11:36, 28 August 2009 (UTC) - Sounds like something from Jack Chick. Evil rapist kidnaps young girl but later finds God so that's all right then.--Bob M 11:46, 28 August 2009 (UTC) - I bet that's something we're going to hear about on Conservapedia, the website which doesn't censor the truth. SuperJosh 12:21, 28 August 2009 (UTC) - Ah, I didn't know this topic was here, I just added a link to this story at WIGO world. Imagine the relief and joy of the girl's mother, what a situation though, that nutter kept her confined in his back yard for 18 years. Refugee 22:22, 28 August 2009 (UTC) - At the time of her disappearance she was living with her mother and step-father (no mention of real father, perhaps he died earlier on), after she disappeared their marriage broke up and the step-father has been under suspicion of involvement. Finally he is exonerated. Genghis Khant 22:41, 28 August 2009 (UTC) - No mention in the story who the father of the young girls is... (her daughters). Creepy. Human 01:57, 29 August 2009 (UTC) - I'm sure it has been stated (somewhere) that Garrido is the father of the young girls (too lazy to check). Apparently the girls have never seen a doctor nor a school. I wonder if they have been homeschooled? Now the police are saying that they are also investigating a series of prostitute murders. Genghis Khant 06:49, 29 August 2009 (UTC) 06:48, 29 August 2009 (UTC) TGIF![edit] That's all I have to say, TGIF! About freakin time! 3 day weekend now for me! :D Refugee 22:23, 28 August 2009 (UTC) - Our host is under some constraints with bandwidth so please refrain from too much page refreshing and archive stuff to prevent frequently-viewed pages getting too long. Genghis Khant 22:44, 28 August 2009 (UTC) - What? Refugee 00:32, 29 August 2009 (UTC) user Republicans Reject Science; Scientists Reject Republicans[edit] I can't remember if this was discussed before Teh GCOON, but only 6% of scientists are republicans. maybe there is hope for the world. PsyWhut? 15:28, 29 August 2009 (UTC) Porn for the Blind[edit] Ok, last one for today. porn for the Blind. I kid you not. -- PsyWhut? 15:34, 29 August 2009 (UTC) - Thanks.. I can use this when I'm too tired to keep my eyes open to look at my fiancée. *downloads each one and loads it onto my mp3 player* Kektklik 17:58, 29 August 2009 (UTC) - Sounds reasonable to me. We have guide dogs for the deaf and music for the deaf. Why not porn for the blind?--Bob M 18:55, 29 August 2009 (UTC) interesting observation[edit] We are always yacking on about how many of us are active on RW, and we've tried to use various metrics to get a good idea of the numbers. At Category:RationalWiki I count over 50 users - people who are into RW enough to have taken the trouble to join at this refuge. And people are still signing up. Just an observation. Human 21:05, 29 August 2009 (UTC) - A surprising number actually. After this month ends It would be interesting to start: "How I found the student bar." But not until the 1st of September please.--Bob M 21:43, 29 August 2009 (UTC) - Screw it. I wasn't going to overload the bandwidth, but if we're going to use "who found da site?" as a metric, I'll sign up slavishly. SuspectedReplicant 22:24, 29 August 2009 (UTC) - Yeah, but you're a sockpuppet... Human 04:13, 30 August 2009 (UTC) - Err... nope. SuspectedReplicant 07:37, 30 August 2009 (UTC) Archive binging[edit] I'm currently going through Irregular Webcomic. I've been at it for nearly two days, and I'm still only at about 600. Phantom Hoover 21:57, 29 August 2009 (UTC) Ed Poor on Wikipedia[edit] I realize he's a lame duck over there (one who won't leave, of course), compared to the license he has to drop his little 'quoticles' all over Conservapedia, but some of his recent WP edits have been a bit blatant in trying to slowly remove criticism of the Unification Church. This, on the other hand, is just amazing. I vaguely remember something on RW about him actually being employed by the Unification Church. Is that for real? Mt 22:14, 29 August 2009 (UTC) - I don't know if he was "employed" - as in getting paid - but I think that he said that he had helped out with his local church's website and maybe with the UC's website as well. Ⓖⓔⓝⓖⓗⓘⓢ 22:30, 29 August 2009 (UTC) - Wow, Ed is a master of writing policies that are bound to make tons of people lash out. The only question is: Does he do it on purpose or is he just hardwired to do so? Either way, if there is one thing we should thank Conservapedia for is that it keeps idiots like Ed from doing even more harm on Wikipedia. --Sid 20:40, 30 August 2009 (UTC) Spiny Norman![edit] Our own Spiny... I'm so proud! (I think.) Sterile 00:36, 30 August 2009 (UTC) DO SOMETHING ALREADY[edit] There hasn't been a single edit here or at Wikiindex for nearly two hours. WHAT THE HELL IS GOING ON? Phantom Hoover 20:11, 30 August 2009 (UTC) - Perhaps people are trying to conserve bandwidth.--Bob M 20:24, 30 August 2009 (UTC) - Ah. Phantom Hoover 20:26, 30 August 2009 (UTC) - (EC) We're holding our breath until September 1st. Ⓖⓔⓝⓖⓗⓘⓢ 20:27, 30 August 2009 (UTC) - It's a bank holiday weekend. Sane people aren't in front of their computers, working. --Jeeves 20:50, 30 August 2009 (UTC) - Oh, yes; I forgot that. It isn't in Scotland. Phantom Hoover 21:26, 30 August 2009 (UTC) - Hah, Scottish people. Shouldn't you be out shooting wild haggis or something? --Jeeves 21:39, 30 August 2009 (UTC) - Not in Edinburgh. Phantom Hoover 21:42, 30 August 2009 (UTC) Possibly the best site on the internet[edit] I have just discovered Objective Ministries, whos website is possibly the funnist thing ever. Some choice highlights- In their News section: (apparently they were subjected to a spam attack from YTMND, bold mine) ." There's also the bit where the plug Conservapedia, of course to top it all off, check out their kids section My favorite is the atheist Mr.Gruff. That is all. Theemperor 21:46, 30 August 2009 (UTC) - Sorry about the image, but it was too much. We removed the GIFs from the header; that had to go too. Phantom Hoover 21:51, 30 August 2009 (UTC) OMG 24.14.72.223 22:10, 30 August 2009 (UTC) - This has to be parody, the kids section is the giveaway. The creation science giraffe is especially over the top.PitchBlackMind 22:15, 30 August 2009 (UTC) - "Were Neanderthals the "monkey men" Evolutionists keep talking about?" "No! Neanderthals were humans with abnormal bone growth due to very advanced age and Flood-cloud-related rickets!" headdeskdeaddesk 24.14.72.223 22:16, 30 August 2009 (UTC) - I think it's for real, sadly. Oh, and the images? Just link to them, and save to your HD to upload at RW when it's back? Human 23:35, 30 August 2009 (UTC) - Um, surely not... Isn't it done by the same people who brought us Landover Baptist? --Jeeves 23:50, 30 August 2009 (UTC) - You may be right... my Poe meter is in for repairs. Human 23:54, 30 August 2009 (UTC) - This is listed on our Poe's Law article on the test yourself list, it is one of the parodies. The members list is the give away. Dr. Richard Paley showed up during the Lenski debacle on Conservapedia, I think Lenski mentioned him in the letter. Pi 00:44, 31 August 2009 (UTC) - Ah, well, I hadn't memorized the article, and, well, couldn't check :( Richard Paley, isn't he rather "well known" for digging up dead people in Africa? Human 01:56, 31 August 2009 (UTC) - Him possibly? Pi 02:44, 31 August 2009 (UTC) 02:41, 31 August 2009 (UTC) hello[edit] Back on line. Toast 05:31, 1 September 2009 (UTC) - That wasn't me: it was Marghanita Laski on the phone. Toast 12:01, 1 September 2009 (UTC) - I think we're alone now. TheoryOfPractice 05:41, 1 September 2009 (UTC) - Hello. Pi 05:55, 1 September 2009 (UTC) - Phew, I can breathe again, now. Ⓖⓔⓝⓖⓗⓘⓢ 06:06, 1 September 2009 (UTC) - Hi. So normal service is back.--Bob M 06:12, 1 September 2009 (UTC) - The service might be normal - not sure about the users ;) Worm 07:57, 1 September 2009 (UTC) - That might be too much to hope for.--Bob M 08:02, 1 September 2009 (UTC) - Of course, it's Sept 1st, RWians can raise hell here again! Huzzar!! Armondikov 10:10, 1 September 2009 (UTC) I'm using this conveniently titled section to say hello. Hello! NightFlare 20:46, 1 September 2009 (UTC) An unexpected problem[edit] Well, this is a bit rough. Just got a message from my main client cancelling my contract. They'd previously put off negotiations during the end of August, but it seems their HO has just obliged everyone to use one main supplier. Ha well. Have to look for other opportunities.--Bob M 12:48, 1 September 2009 (UTC) - Ouch. I hope something turns up for you soon. SuspectedReplicant 13:02, 1 September 2009 (UTC) Though my previous client is being most helpful. Things are in motion. --Bob M 08:52, 2 September 2009 (UTC) Happy Spring Day[edit] Spring is sprung! The grass is riz I wonder where My undies is Happy spring day to all Southerners. (That's real southerners, not you namby-pamby lot hovering above the equator.) -- PsyWhut? 17:34, 1 September 2009 (UTC) World War Two...[edit] ...Britain declared war on Germany 70 years ago today. Just remembered. SuperJosh 18:16, 1 September 2009 (UTC) - Close. Germany started kicking the stuffing out of Poland today. Britain/France declared war on the 3rd (My dad's birthday... he's 2 minutes older than WW2). -- PsyWhut? 18:25, 1 September 2009 (UTC) Aubrey O'Day[edit] Anyone see her on Hannity last night? He was JAQing off trying to corner her and she flat out told the truth...Hitler was a brilliant man. Hannity almost crapped himself. It was hilarious. 19:32, 1 September 2009 (UTC) - Wow. Story and Video. SuspectedReplicant 19:43, 1 September 2009 (UTC) - I think it's a great example of revisionism on Hannity's part. He is incapable of acknowledging Castro or Hitler's intelligence. The other funny revisionism thing that happened recently was Glenn Beck saying Obama was running an "Oligarhy [sic]". Yeah... and a bush was in the white house the last 24 out of 32 years or something??? 207.67.17.45 19:51, 1 September 2009 (UTC) - I saw video of the "OLIGARHY" thing - and the way he then desperately tried to make a point about "Czars" to make it appear as if he was making a point rather than making a f***-up. You yanks really seem to have much more fun with your TV than we do! SuspectedReplicant 20:07, 1 September 2009 (UTC) Enron[edit] On More 4 (UK TV) now. Mr Bush was in it up to the neck. Toast 23:01, 1 September 2009 (UTC) Stumblin'[edit] Dropped on this re a lib v con experiment. among other things: "Based on the results, he said, liberals could be expected to more readily accept new social, scientific or religious ideas." Interesting? Toast 02:44, 2 September 2009 (UTC) - Which means that conservatives are less likely? Surely that is what defines them as being conservative? Ⓖⓔⓝⓖⓗⓘⓢ 06:18, 2 September 2009 (UTC) Praying for a profit[edit] This caused me a certain amount of schadenfreude. 20,000 metric tonnes of gold? That's double the US reserve and worth $670bn at current prices. Just goes to show who the really gullible ones are. (Sorry to read that someone felt compelled to take their own life because of it though.) Ⓖⓔⓝⓖⓗⓘⓢ 10:24, 2 September 2009 (UTC) lol WND[edit] an excerpt from this story about right wingers getting fed up with WND. Joseph Farah goes on a masterful defense with this analysis: - Last Saturday, the Herald published an article titled, "Secret camps and guillotines? Groups make 'birthers' look sane." In that story, reporter Steven Thomma of McClatchy, a newspaper chain founded on the notion of promoting "public ownership of private property" (and doing its best to fulfill that mission, I might add), alleges "WorldNetDaily.com says that the government is considering Nazi-like concentration camps for dissidents." - To back up that complete misrepresentation, Thomma offers this excerpt from a brief news story in WND dating back to Feb. 1 and written by Jerome Corsi, a senior staff writer and two-time No. 1 New York Times best-selling author: "[A] proposal in Congress 'appears designed to create the type of detention center that those concerned about use of the military in domestic affairs fear could be used as concentration camps for political dissidents, such as occurred in Nazi Germany.'" -." - Corsi's is a much more nuanced and accurate statement – acknowledging the fears of many Americans recently maligned by the Homeland Security Department as "right-wing extremists" and potential terrorists – than the Herald's allegation that Corsi and WND claim "the government is considering Nazi-like concentration camps for dissidents." But perhaps you need to be slightly more literate – or concerned about the truth – than Thomma to see the distinction. So it's not Corsi claiming that "the government is considering Nazi-like concentration camps for dissidents" because Thomma left out that the WND thinks the mainstream media wasn't covering it? Is it too early to read, or is Joseph Farah just flat-out crazy? 12:49, 2 September 2009 (UTC) - Farah is not completely crazy, he has a history of this. Unless the Republicans win, we are three days away from some crisis that will allow the Democrats to take over the country, lock people in FEMA camps and outlaw Christianity. The reason I say he is not crazy is because I doubt he believes any of that, he just a hyper-partisan who hates the Democrats and peddles this crap for money. Look at the site, it is giant advertisement for books they write and DVDs they produce masquerading as a news site. They sell "end of the world"/"the new world order is coming" crap to people who are afraid their own shadow is a commie. Pi 11:31, 3 September 2009 (UTC) Health care[edit] A quick question for the Americans if I may: What is it about a national health service that makes Americans angry? I mean surely if you don't want the terrible free health care then you can just get health insurance and get the premium package? Which is what you have to do at the moment to get any kind of health care? Is this just a case of the rich people saying "I can afford good health care, so f**k the poor"? Crundy 13:31, 2 September 2009 (UTC) - It's socialism. Socialism = Communism = Evil. But it seems like your last sentence is quite correct. Armondikov 14:09, 2 September 2009 (UTC) - I still can't quite understand that. most likely as I'm a Brit and I've grown up with free Health Service. I can understand why some yanks might not want to accept welfare payments, as I've just been reading the paper and it seems the way Labour has arranged welfare has destroyed half the country (then again I was reading the Daily Mail and it's a very conservative-biased paper). But to say "no money = go die" is quite disgraceful. SuperJosh 16:42, 2 September 2009 (UTC) - I've actually argued with Americans (who can themselves afford health insurance) who claim that if they were poor they'd happily forgo lifesaving medical treatment for the sake of principle. I don't believe them. --Jeeves 16:48, 2 September 2009 (UTC) - Interesting, because if it's a Christian-religious thing, Christians (and some other religions in general) think of life as sacred and usually do whatever they can to preserve it (unless they're Jehovah's and don't want other people's blood). SuperJosh 17:02, 2 September 2009 (UTC) - My friend's father opposes free health care for the poor because his taxes will go up, and he's already struggling to get his four kids (my friends) the money they need to get into college. Pretty much 'just because you fucked up your life doesn't mean my kids should suffer'. I still side with the poor, and I find it hard to believe my friends can't get student loans. Apparently there's a limit on the amount allowed to be loaned per household, or something. - Clepper - I can understand where your mate's dad is coming from. It's not always that you necessarily want to kill, kill, kill, kill, kill the poor, just that the tax works it easier on oneself. Like every other issue in the world, it is a controversial one. SuperJosh 17:18, 2 September 2009 (UTC) - Why should society be expected to pay for his four kids? If he'd stopped at two, he'd probably be considerably better off. If I were to buy four cars would I expect people with only two cars to help with my car maintenance? It's one thing that annoys me about the NHS: funding IVF for people who want kids. I want a car, where's my funding? Toast 17:27, 2 September 2009 (UTC) - Education ain't an expense, it's an investment. A country full of incredibly stupid people isn't much of a tax base. University is another thing that doesn't tend to bankrupt people in the UK either. Sure, there are fees now, and loans instead of grants but you're probably not going to run up debts exceeding £20K for a degree, which is less than the average graduate salary per annum. --Jeeves 17:33, 2 September 2009 (UTC) - No, it more the freedom/libertarian thing. We like to think that they are quite similar to the Brits, Canucks or Antipodeans because they use a similar language and have similar popular culture but there really is a completely different mindset in the USA. You have to understand the anti-communism/socialism attitude is quite deeply embedded in the American psyche even if they don't always know what it really means, they've just been indoctrinated to think it's bad. RobS is a child of McCarthyism, he may be an extreme example but there are many who have been brought up to think along similar lines. So anything that's "socialised" automatically gets a bad rap even from those whom would actually benefit. I hate to upset our transatlantic cousins but to my mind. The idea that anyone can become president is a mantra that is reiterated like winning the lottery but it requires money and a agreement to play the power-game. You may come from humble beginnings but you quickly leave them behind. Unlike Europe there is no large-scale grass-roots socialism movement so although we may like to think that the Democrats are the equivalent of the Labour party they are closer to the left-wing of the Tory party. The US two-party system makes it difficult for other groups to get a foothold, so those with any left of centre leanings end up supporting the Democrats by default. Before Obama got elected I actually thought it would be a bad thing for the Democrats because they were coming in at the wrong stage of the economic cycle. The financial excrement is about to hit the fan and the Democrats will get the blame even though the real culprits are the Republicans. Starting with Nixon who abandoned the gold standard and thereby gave rise to all the later financial excesses of Reagan and the shrubbery. I my opinion Obama is trying to do the right thing but at the wrong time. Ⓖⓔⓝⓖⓗⓘⓢ 17:45, 2 September 2009 (UTC) - Oh, how liberals love to censor and control! -Teh Assfly, 19:55, 2 Septembver 2009AD (UTC) - @Toast - I mostly agree with you about IVF. I believe the NHS typically pays for one course of IVF treatment, which I think is fair enough. Some Health Authorities pay for more though, which does piss me off. When you get the couples on the news whining that they've had three free cycles, still haven't got preggers, but demand more for free because "having children is a right"... I'm afraid my only response is "No it isn't". SuspectedReplicant 17:48, 2 September 2009 (UTC) - My missus had a hysterectomy before I met her so there has never been any question of little Khants running around. So yes it does annoy me as well when people demand multiple IVF treatments on the NHS especially if they've already had kids by previous partners. As far as I'm concerned kids are a lifestyle choice. Ⓖⓔⓝⓖⓗⓘⓢ 18:46, 2 September 2009 (UTC) - "..." That pretty much sums it up. Reagan managed to convinced the "working class" and the poor that government could only mess up their lives more, while also promoting the "I intend to rich one day, so I don't want their taxes raised" silliness (people really do think this). Human 19:50, 2 September 2009 (UTC) - Genghis - you're right about the significant difference in healthcare and finanical issues between America and Britain - I was actually saying that to PC earlier on Liberapedia. I guess the platform differences between Labour and Democrats, and Tories and Republicans are so because America suffered a whole lot more than Britain from the Reds-under-the-Beds scare - RobSConservapedia is living proof of this. I gues it didn't affect Britain so badly because we were already starting our welfare state at the beginning of the 20th century. SuperJosh 20:00, 2 September 2009 (UTC) - I like to think these arguments don't sway British people because by and large we know something about history. The things American conservatives want, like an abolition of the welfare state and its responsibilities shouldered by private charities, have all been tried and have been dismal failures at ensuring a reasonable society. Indeed, these things are the very root cause of the communism they claim to so despise, both in Europe's past and in the present of the developing world. You only have to look at the vast base of support for modern Maoism amongst India's underclass to see that. --Jeeves 20:06, 2 September 2009 (UTC) - While I agree that people who need help should get it, I also believe the welfare system in general needs reforming in Britain. Like Ace said when he mentioned a while ago about his freakout at some lazy protesting bastard, I believe if you have the ability to work you should - obviously circumstances taken into account. No bloody government handouts, as taking advantage of the welfare system has got our country into the state it is - reading the paper earlier, we're one of the highest places in the western world for binge drinking, smoking, teenage pregnancy. I'll admit I'm guilty of all threethe first two, but I think the welfare state has led to lazy unemployed parents letting their kids run around and do whatever they want. SuperJosh 20:14, 2 September 2009 (UTC) - Gee, stereotype much there? Margret Thatcher got herself elected on just that platform. You might remember how well her "reforms" worked out. Your lazy stereotyping ignores the much more typical cases, like my friend who wsa unfairly banned from driving when some stupid bint rearended him after he'd had half a pint, and as consequence lost his job as a builder. By and large, the "welfare queens" of popular legend don't exist. --Jeeves 20:22, 2 September 2009 (UTC) (O/D - getting way too deep) My dad was a Liberal (ie, pre-Lib Dem) councillor back in the day. He spent a huge amount of time helping out two families in my hometown that seemed to spend their entire time ****ing each other and spitting out new kids. I was in the same class at school with two of them. At the age of 9, they knew that they were "due" a new council house because of the number of people in the one they had. While I would love to agree with Jeeves that "welfare queens" don't exist, I'm afraid they do. It still doesn't mean that welfare is a bad idea though. SuspectedReplicant 20:35, 2 September 2009 (UTC) - I don't know enough about it to get into a proper debate, but the reform I'm talking about is to deal with those who SuspectedReplicant mentioned - if people are in perfectly good health but do not work when they're able, then they shouldn't live on handouts. There's nothing Thatcherite about it - simply stop work-shy freeloaders (as you know Jeeves, Mark once said to Jez!)SuperJosh 20:42, 2 September 2009 (UTC) - Can I just add something to the whole IVF debate? Considering that there are thousands of orphan / abandoned babies throughout the whole world who need a good home, I find it extremely selfish of couples to start demanding exceptionally expensive treatments to have children who have their DNA. If you can't have kids and want some, do something good and give that love to someone who hasn't had any so far. Crundy 20:47, 2 September 2009 (UTC) - You're totally ignoring all the reasons why an able bodied person, even one who has been offered work, might perfectly legitimately choose not to. Single mothers, especially single mothers of large broods and who are only able to obtain low paid work, will often find that childcare expenses exceed their earning potential. It's completely irresponsible to label these people "freeloaders." There is no point punishing these people by taking away their benefits. That's just masking the symptoms, plus inflicting unjust child poverty. You need to address the root causes of family breakdown and poor family planning if you want to tackle the problem. --Jeeves 20:52, 2 September 2009 (UTC) - Right. I live in south London. Some people round here will work as long and as hard as it takes to earn as much as they need to get the best for their kids. Some will sit on their asses and then shout "Well! We're entitled!!!" when they don't get exactly what they want. Most are in the middle ground. It all comes down to: do you penalize the children of group b because of the actions of their parents? SuspectedReplicant 21:05, 2 September 2009 (UTC) - This is straying into a different topic. Sex education in the UK does a pretty poor job at preventing teenage pregnancies but there are those young girls (and it also applies to the US - see Freakanomics) who have no aspirations other than to be mothers. Some of it is a vicious spiral but in the long term it boils down to providing a decent education system, good role models and incentives not to sponge on the state. Although some politicians on the right have placed the blame on single mothers I think that this report today indicates that teenage boys are just as culpable. Unfortunately red-top tabloids and lads mags do not encourage a respectful relationship between the sexes. Ⓖⓔⓝⓖⓗⓘⓢ 21:17, 2 September 2009 (UTC) - Getting back on topic - I've heard it suggested that the US government pays more on a per capita basis for health care for its citizens than european countries do, but as a large proportion of this payment disappears into the medical companies profits the result is worse. Is there any truth is this?--Bob M 21:52, 2 September 2009 (UTC) - Yeah, absolutely true. The US government spends $500 more per capita providing non-universal healthcare than the UK spends providing universal care. Medical outcomes wise, there's precious little to choose between the systems for most comparable diagnoses. However, I suspect if you compared outcomes for medicaid/medicare patients vs. the NHS, the NHS would win hands down. Sadly, that isn't tracked in any useful sense. I don't know where the money goes in the US, but it certainly isn't to provide superior care. With private health care included, per capita health spend in the US is more than $3000 higher than the UK. (Oh yeah, and to add insult to injury, medical expenses are the leading cause of bankruptcy in the US compared. In the UK, bankruptcy due to medical expenses is more or less unknown.) --Jeeves 22:13, 2 September 2009 (UTC) - While nationalised services like the NHS may not be 100% efficient they don't pay dividends to their share holders, they pay lower salaries and also pay less for drugs because of bulk buying. I used to get BUPA insurance through my agency and paid extra for my wife - about £450 per year. When that stopped, because of IR30 rules, I looked at buying private health insurance out of my own pocket and was facing a bill of almost £2000 a year for myself and the same again for my wife. Of course that would be subject to limits so when people say that private is better in the UK, I ask whether they are part of a company scheme or pay directly out of their own pocket. When my wife had her stem cell transplant and was in an isolation room (on the NHS) she even got free phone calls thanks to the Friends of the Hospital (obviously one was aware of the fact and was expected not to abuse the privilege). Recently I had some dental surgery at a private hospital and while the conditions were very pleasant it did cost me a considerable sum. Ⓖⓔⓝⓖⓗⓘⓢ 22:21, 2 September 2009 (UTC) - In addition to the above stats, the "non-universal" system in the US leaves 1/6 of the population with no "health insurance". And those bankrupties? Many of those people had health insurance, and either got kicked out or still had to saddle far more than they could afford of the costs. Yeah, the US "system" is a travesty. 2k quid a year for a plan? That's like, less than $200 a month, right? That's what the last plan I had cost ($300/mo) before I gave up - 12-15 years ago. Human 01:11, 3 September 2009 (UTC) - If you lived a few score miles to the south, in Taxachusetts, without coverage, you would be paying the "breathing tax" along with all the others. Come tax time, if you can't prove you had an insurance plan in place, the state charges you an equivalent penalty. Next thing you know, we will have to buy walking licenses to cross the street. 66.189.117.133 01:48, 3 September 2009 (UTC) Hm, is having children a right or a privilege? Once Rationalwiki comes back it might make an interesting debate. Two perfectly rational agents should never disagree and what not. - Clepper - At BON, MA is doing the best they can given a totally disastrous general situation. They are trying to make sure all residents have access to health care. Is it a good system? No, not really. Mostly because they can't utilize the constitutional powers of the federal government to tax the obscenely rich to pay for basic services. Human 04:39, 3 September 2009 (UTC) - 'Twas ever thus. The very wealthy have the means to influence the kleptocrats[1] in office, as do the various insurance entities responsible for the high overhead associated with delivering the care. Not totally responsible of course, since there are systemic factors (oh, say, capitalism?) that encourage the high cost/performance ratio we get. I'm not sure how the recently fashionable strident polarization of US civic discourse plays into it; cause, effect, or simple inconvenient fact? 66.189.117.133 13:05, 3 September 2009 (UTC) - ↑ Taxation is theft. Any society bigger than about 1600 or 2400 people needs a "governing" organization that reserves to itself the "lawful" use of deadly force, making the rest of the population pay for the "privilege." See Jared Diamond, "Guns, Germs, and Steel." Evolutionary mutation rate...[edit] Apparently between 100 and 200 for each person. I assume this is actual mutations rather than just swapping alleles around and so on. But it's interesting and quite a high figure showing the the number of chances that nature has for producing something good. Armondikov 14:12, 2 September 2009 (UTC) Bugger, EC'd. Each of us has at least 100 new mutations in our DNA, according to research published in the journal Current Biology. Looking forward to someone using this at a Whorehouse of Knowledge. Ⓖⓔⓝⓖⓗⓘⓢ 14:19, 2 September 2009 (UTC) - Considering there's like 3 billion base pairs in our genome, it isn't that many. Though I suppose just one frame shift in the wrong place could produce some fairly bizarre aberrations. Or just kill the organism. Whatever. --Jeeves 16:34, 2 September 2009 (UTC) - I think the question is what kind of mutations. I guess that the vast majority are ether irrelevant or in non-coding DNA - that's where I hope mine are anyway. The chances of producing something very bad are a lot better than the chances of producing something good. --Bob M 17:54, 2 September 2009 (UTC) - There's a really good (and recent) article on the subject here (to which I was directed by Pharyngula). Cue Y chromosome-related jokes from the ladies present :) SuspectedReplicant 18:03, 2 September 2009 (UTC) - Well, yeah, that considered it's not many. But for the people who go "all mutations are bad" well, 100-200 in each individual (individuals who are otherwise perfectly healthy) it's massive kick in the teeth. If you want to take the view that all mutations are bad, you'd only expect maybe one or two that were present, so hundreds per person is high in that sense at least. Armondikov 14:51, 3 September 2009 (UTC) Vaccination story[edit] Go here and watch the clip or read the transcript of the whooping cough epidemic story. This was broadcast in Australia last evening. RagTop 23:07, 2 September 2009 (UTC) Blarrrgh[edit] I just finished eating the best pizza ever... a BBQ chicken pizza saturated with napkins. Yummers. Javascap 00:30, 3 September 2009 (UTC) I Want one[edit] of these Toast 03:10, 3 September 2009 (UTC) - eggselent! Human 03:48, 3 September 2009 (UTC) stumblin[edit] Why does this make me laugh immoderately? Toast 07:11, 3 September 2009 (UTC) - Because everything ever done by Gary Larson is pure, unadulterated comedy gold. The greatest cartoonist ever, in my book. DogP 16:47, 3 September 2009 (UTC) World War Two (again)[edit] NOW it was declared seventy years ago today... by which I mean Britain declaring war on Germany. SuperJosh 11:17, 3 September 2009 (UTC) - Well, happy birthday to Psy's dad. Crundy 13:37, 3 September 2009 (UTC) Jon Voight on Obama[edit] Wow. Just wow. -- PsyWhut? 19:02, 3 September 2009 (UTC) - Tom Waits Ol' 55 IS a great song. A real "wow" song. Don't see the connection to Obama or washed-up actors, though...TheoryOfPractice 22:10, 3 September 2009 (UTC) Damn, remember when Voight was cool? When he was in stuff like Runaway Train and Deliverance? I miss those days. Language[edit] - Arthur (Smith) - Do you know, I can say, in Danish, "I have spilled coffee on the anteater." - Stephen (Fry) - I would like you to do that for us now. - Arthur - Jeg har spildt kaffe på Myresluger''! That's all. Toast 22:34, 3 September 2009 (UTC) - I can say "hello" and "thank you" in Russian! "Zrazvytchye" and "Da Svedanya". Although that's almost definitely not how to spell the roman alphabet interpretation... SuperJosh 11:16, 4 September 2009 (UTC) - Although I have no clue how to spell it, I can say "fuck me with a cucumber" in Swedish. (Google says it's "knulla mig med en gurka") Armondikov 12:02, 4 September 2009 (UTC) - I made a slight language boo-boo. In the supermarket checkout queue a very sweaty, smelly manager-looking type (from said supermarket) walked to the checkout girl in the aisle next to us and said something to her. I said (quite loud) to my wife "Uh duh duh, pursino madour chod" ("Urgh, sweaty mother f**ker" in Gujurati), and then turned round to see our checkout girl was asian, and smirking at me. Bugger. Crundy 13:02, 4 September 2009 (UTC) Nothing changes is creationism land[edit] I've been reading Augustine of Hippo's City of God this week and amongst the general hilarity this chapter stands out in particular. Pah, those idolatrous Egyptians have records from back before the world began? Lets invent a reason to compress their timetables to fit our divinely inspired one. Now they're doing it with Egyptology and radiometric dating. That's progress! --Jeeves 11:08, 4 September 2009 (UTC) IM[edit] Anyone use instant messaging around here? I use MSN but I don't know if it's compatilable with yahoo messenger or whatever. SuperJosh 12:08, 4 September 2009 (UTC) - They tend not to be compatable with each other, but you can get free all-in-one clients which you give your credentials for all your different IM accounts to and it seems as though you are using a single protocol. Don't know of any good ones though. Crundy 13:04, 4 September 2009 (UTC) - Years ago I used something called Miranda that did them all supposedly. Pi 13:07, 4 September 2009 (UTC) - Pidgin. It's good.--PitchBlackMind 16:03, 4 September 2009 (UTC) - Pidgin or Adium for Mac (not sure if Pidgin supports Growl notifications but Adium does) or Trillium for Windoze. Nutty Roux 16:13, 4 September 2009 (UTC) - I use only IRC through Chatzilla. Phantom Hoover 16:57, 4 September 2009 (UTC) Archiving[edit] Actually a bit of archiving would be a good idea. I'm at 34% of my monthly bandwith now. I figure 50% by the 6th, which barring accidents, should leave with me enough to spare for the month - but cutting the page down wouldn't be a bad idea.--Bob M 18:49, 4 September 2009 (UTC) - I've just shoved some into the archive, but with "Healthcare" being pretty massive but not particularly old, and stuck in the middle, it makes the cut off point a little difficult. Unless you're happy to say that 3rd Sept is old enough to say it's finished. Armondikov 19:59, 4 September 2009 (UTC) - I've said it before and I'll say it again: you should enable output compression on the server. It would save you quite some bandwith. Look into enabling zlib support in PHP. --79.30.235.165 20:25, 4 September 2009 (UTC) - - I'm sure it's on the cards and Bob will ask around for it at some point but probably has other things to do. Also, as I was feeling slightly bored and wanting to edit something (yes, Wikidot is right, editing wikis is addictive), I archived the healthcare one too. Probably screwing your bandwidth in the process by looking at the other archives to see what they were headed with. It'll all be over soon :) Armondikov 20:35, 4 September 2009 (UTC) Bob, can you get rid of the background image on every page? Human 21:06, 4 September 2009 (UTC) - The book background? That's about 7kb (the TP logo is 8 kb and the CC-BY-SA logo is 5kb) and is probably cached locally so it wouldn't make too much of a difference to what the server has to deal with I think. Armondikov 21:12, 4 September 2009 (UTC) REWARD[edit] I will pay $10,000 to anyone who can prove that Glenn Beck did not rape and murder a young woman in 1990. 19:06, 4 September 2009 (UTC) - Can I choose which young woman? --79.30.235.165 20:26, 4 September 2009 (UTC) - The easiest thing to do would be to try and find the identity of the woman he allegedly raped and killed in 1990. If no possible rape victim can be found, it might not be true. 21:19, 4 September 2009 (UTC) Porting to RW[edit] Probably need to think about the things to be moved over. All the specific RW stuff and their archives obviously. I'll then delete them here. Everything from the student Bar will need to be cleaned out, but I think I'll keep the page as there are some external links to it. I'll leave a note up for a couple of months with a link to RW saying it is now active. Also it might be a good idea to have a student bar anyway, though I'll need to disinfect it and refurbish it after the RW departure . User accounts. I was thinking that these would be removed, but now that I think about it some more there is no obvious reason why they should be. If they are never used they will just disappear into wiki history after a time. So perhaps best to leave it up to individuals to put "retired" or just fade away or whatever. I think all the talk pages are going to magically copied over to RW. Is that correct?--Bob M 10:29, 5 September 2009 (UTC) - I think Nx is going to merge the page histories before RW goes back online. Pi 10:35, 5 September 2009 (UTC) - Well, it's time to start thinking about this. - First I need you to put {{RW|export}} on all pages that should be moved to RationalWiki. Whether you want your userpage and user talk page moved over is up to you. - If you add your userpage to the export category it will replace your userpage at RationalWiki. Don't panic! The pre-crash history will remain intact, so you can revert to the pre-crash version. - Next, there's the issue of differing usernames and page titles. I started listing those on my userpage, but stopped. You can help out with that, but please make sure the info is correct (and remember that wikis are case sensitive i.e. JeevesMkII and JeevesMKII are not the same thing - and if I import the edits with an incorrect username, the only way to reassign them is with manual database hacking). We don't need to rename them on the wiki, I can do that in the xml file with some search and replace. - Then there's the archives that have been created here. It would be best to do those manually, with copy-paste (since archives contain no history anyway), so make sure they are not in the RW export category. - Finally, there's the problem of edits while I'm doing the moving. Unless we coordinate things perfectly, I cannot really prevent edits to RationalWiki before I'm done with this. They are not a serious problem, but they will make things slightly more messed up. E.g. if someone replies on the RW saloon bar to a topic that has been there before the crash, after I merge the articles, the history will look like this: - history before crash -> new page: student bar -> history of student bar -> one edit which replaces student bar with the old saloon bar with the new post added - Also, I'll lock all the articles in the export category before I start so that no edits get lost. Anyone who has a sysop account here please don't edit them. - Once I've imported WIGO CP, I'll add the img tags. If you have a capture, just click the red img tag to get the file name and upload the image. When you upload it, please include the http address (right click - copy link address on the link before the img tag in the wigo entry) and put the image into Category:August 2009 Conservapedia screencaps (already exists) or Category:September 2009 Conservapedia screencaps (I'll create this one). Do not put the image into Category:Conservapedia screencaps - it took me half a day and 4000 edits to clean it up. Alternatively, ignore everything I said and leave it to me to fix things :) - @Bob: - User accounts cannot be removed, unless their edits are reassigned to another user. Removing their edits would require serious surgery to your database, and would leave article histories completely messed up. - I can run a bot to delete all the pages in the export category if you'd like. - Nx 11:27, 5 September 2009 (UTC) - Ok, if you bot can do it, fine. I'd like to keep (or if it's easier re-create) the student bar.--Bob M 14:00, 5 September 2009 (UTC) - That depends on whether you want the history hidden or visible to anyone. Nx 14:12, 5 September 2009 (UTC) - Need to think about it. It would be nice for nostalgic reasons; on the other hand it really has no logical reason to be here. Probably be best to wipe it I suppose. :-( --Bob M 16:09, 5 September 2009 (UTC) - I've just pulled all the archive pages out of the RW|Export and into just the RW category. The "active" talk pages are still in the RW|Export cat. What about the actual WIGO pages etc.? The vote formats won't necessarily blend in, or will we just archive out all the WIGO's with a separate page for what was done here and start fresh? Armondikov 12:39, 5 September 2009 (UTC) - Thanks. Because of the vote template, it's trivial to actually add the votecp tag to them with a search and replace. Have wigo entries been archived as well? Nx 12:55, 5 September 2009 (UTC) - I see there's already an archive for WIGO CP. I'll unarchive the WIGO's before I move them to RW. Nx 13:12, 5 September 2009 (UTC) - Okay, so I'll let you take care of that manually. I've added the RW template to the archive page anyway to make sure it doesn't get left behind. Armondikov 15:15, 5 September 2009 (UTC) Nx, when you are doing this, I think both wikis will need to be locked - or at least, not edited, as you point out. So I think it is best that until you are finished with the porting, RW not be visible. Archives: is there anyway to "skip" over the archive making (ie, large deletions from the bar, etc.) so all the "adding" edits are copied over but none of the "subtracting" ones? Then just run pibot so archives end up where they really should be? Because copying the ad hoc archive over from here is gonna make a mess. It sounds to me like you night have it figured out, I'm just asking. Human 22:42, 5 September 2009 (UTC) - The problem is, I have to be able to see the wiki, which means you will be able to see it too. And it will be after midnight here when Trent comes home. As for archives, there's no way to do that, because with every edit the whole page is saved to the database. Archives should not be in the export category, we'll just move the contents of those to RationalWiki with copypaste, because they contain no valuable history. Nx 06:18, 6 September 2009 (UTC) - OK on the archives, I see. As far as editing while you are working, I'd suggest some big ugly banners saying "thou shalt not edit" while you work? Human 06:41, 6 September 2009 (UTC) - (EC)We could just use the site message to leave a note requesting nobody edits it during the transfer, although that would just invite vandals and trolls to come stuff it up. By the way was the site message used to create the intercom, because you can't have both at the same time? Pi 06:44, 6 September 2009 (UTC) - Would it be best if I de-sysop everybody but you so as to prevent accidental edits to sensitive pages?--Bob M 06:44, 6 September 2009 (UTC) - It is your site, you can de-sysop everyone if you wish. Pi 06:46, 6 September 2009 (UTC) - I'll ask Trent to lock the RW database (I can still import with mediawiki's command line tools), as for Teflpedia, I'll lock the export pages before I start. I don't think there's a need to desysop everyone here, and if a sysop edits a locked page worst case is that their edit will not be moved to RationalWiki. Nx 06:52, 6 September 2009 (UTC) 50 Greatest Comedy Films[edit] As voted by the British public. It was on telly last night and I caught the top 14 - I was pleased to see that the top 7 films were some of my personal favourites. Monty Python's Life of Brian came number one obviously - what else could it possibly have been!? - but I think Ghostbusters deserved a better spot than 24, especially with Bill Murray in the lead role. "I make it a rule never to get involved with posessed people... actually it's more of a guideline than a rule... wait, no, I can't, it sounds to me like there's all ready at least two people in there." SuperJosh 11:57, 6 September 2009 (UTC) BNP to give up racism![edit] Well, not exactly, but it's getting there. Extra points for playing the Orwell card with "Undemocratic Orwellian equality laws" (and quite incorrectly, I don't think Oceania enforced equality like that, IIRC). Armondikov 13:34, 4 September 2009 (UTC) - I fixed the link as it was broken. After finding this, I don't think we can ever say the BNP will be "giving up" racism. If only my socks weren't blocked on CP I'd love to send this to Andy and say "these are the people you'd say would get your vote on education alone, you retard." Sorry for that, but the man has the same IQ as a starfish. SuperJosh 13:42, 4 September 2009 (UTC) - I think we can say the BNP will be as hostile as ever to ethnic minorities. This isn't a case of them voluntarily giving up their racism, but rather being forced to by a court decision. Instead, I dare say we'll see a version of the old "No Irish, No Blacks" stance of racist landlords from a few decades ago: "Oh I'd love to sign you up, Mr Black Person, but I just used my last membership form". - As for good BNP links, I hope you've all seen this (actually for the interviews rather than the pics)? SuspectedReplicant 14:02, 4 September 2009 (UTC) - Every black, asian and jewish person in the country should now sign up and elect only ethnic minorities to be their official candidates and local leaders, and vote for policies like a closer relationship with the EU etc. --Jeeves 14:17, 4 September 2009 (UTC) - This constitution rewrite will probably increase the powers of the executive, allowing them to select their candidates without local approval and so forth. The next fight will be discrimination about the selections. Pi 14:36, 4 September 2009 (UTC) - Nice idea Jeeves! (The "BNP gives up racism!" headline was more a joke, in case people were wondering :P.) Armondikov 15:40, 4 September 2009 (UTC) - Replicant: That is probably the best webpage, ever, in the history of the interwebs. For example: -. A Whorehouse of Knowledge[edit] I've been wanting to ask this for a while... Why do so many of you bother with aSK? Is it because PJR was a voice of (semi-)reason over at CP? If so, I'm afraid you're on to a loser. PJR is just as mad as any other YECretin and is never going to change his mind. He will always find time to point out the motes in your own eyes while ignoring the beams in his own. It looks like Human has finally got that, but there are still several people trying to engage PJR. Please, I'd like to know what people are hoping to achieve. If there's a chance of making a difference then I'll finally create an account, but right now it seems as if there are several people banging their heads on the walls of PJR (et al)'s insanity with no hope of changing anything. SuspectedReplicant 14:11, 4 September 2009 (UTC) - 'Cos aSoK is like CP used to be before the night of the blunt knives. Full of insane people who you could prod for lulz. It might not be productive, but it's more fun than wandalism. --Jeeves 14:14, 4 September 2009 (UTC) - Oh. If it's mainly for the lulz then... okay. It seemed to be a bit more than that in many cases though - your demolition of PJR's "most attested event in history" idiocy was particularly fun to read. SuspectedReplicant 14:19, 4 September 2009 (UTC) - Whilst we are on the topic of ASK, an idea floated on Human's talkpage is that we replace WIGO ASK with a more general WIGO Wikis, seeing as our trip abroad has allowed us to explore other horizons. Some suggestions included watching, Wikisynergy and WikiIndex, but we also had the failed CAM and Metapedia ones we could mix in. Throw in a few more we could have a little all purpose WIGO going that would allow us to ride out CPs fall a little easier. Pi 14:30, 4 September 2009 (UTC) - Yes, I saw that. It seems like a great idea to me. There's not enough to keep the aSK WIGO ticking, but combining it with other storehouses of idiocy might help. SuspectedReplicant 14:42, 4 September 2009 (UTC) - I participate at ASK from time to time because it's fun. I have no absolutely no hope of changing anybody's mind. If you believe something becasue of faith, how is reasoned argument going to change you mind? Obviously if you have faith then you must know that the argument must be wrong. But it's fun seeing the responses. - To answer the other point - yes, a generalised WIGO would be good. I'm not sure that WikiIndex would give much to report on in the long term though. It's only notable at the moment because of things associated with the RW server meltdown and the personalities who have temporally taken up lodging.--Bob M 14:43, 4 September 2009 (UTC) - Definitely, anything that facilitates RW's scope expanding is good by me. CP, obviously, would stay separate for now, but the "wikisphere" would be a nice one to go with blogs and clogs. "Forumsphere" too perhaps, but that would require watching things a lot closer I reckon. Armondikov 15:38, 4 September 2009 (UTC) - I walked away when I noticed the only people he was sysoping were from CP. I realised then that it wouldn't be a broader-based wiki than Conservapedia - it would inevitably end up repeating CP's mistakes. Totnesmartin 16:18, 4 September 2009 (UTC) - "looks like Human has finally got that" Well, really, I had it from the beginning. YECiots never change their "minds". All I do over at AWK is gawk, read the great comments people like Jeeves and Sterile and Bob make and laugh at PJR's twisted "logics". What happened last night was I came across a thing to quote to Philip, and finally figured out a way to word that diff to my user page. - As far as WIGOwikisphere, yeah, it's a great idea. It helps deal with say, wikiindex, being interesting only once in a while - it gives us a place to put the item of interest other than the saloon bar. Folding wigoAWK and wigo4R into it makes sense to me. Do we want to use "wikisphere"? PS, this also gives us a place to comment on user number 186's Ediocy on wikipedia. Great idea overall, whoever had it. Human 21:02, 4 September 2009 (UTC) - I'm getting attached to "wikisphere", it sort of merges with the other two terms that are already in use. And of course, what happens with a wiki tends to be a lot different to forums and blogs so it warrants a separate arena. And of course, sometimes shit happens at Wikipedia too that's worth mentioning, but only people who are also regular WP editors will probably notice it. Armondikov 21:08, 4 September 2009 (UTC) - Yes, it's growing on me, too. Human 00:22, 5 September 2009 (UTC) - It'd give somewhere for Ed's WP stoopids too. Toast 00:34, 5 September 2009 (UTC) - - What is going on in the wikisphere? has too good of a ring to it to pass up. Pi 00:38, 5 September 2009 (UTC) Best bible verse EVER[edit] Hosea 9:7 "The days of visitation are come, the days of recompence are come; Israel shall know it: the prophet is a fool, the spiritual man is mad, for the multitude of thine iniquity, and the great hatred." Javascap 16:12, 4 September 2009 (UTC) - I beg to differ... - Deu. [1]. - Kektklik 23:07, 4 September 2009 (UTC) No, no, no, this is the best Bible verse ever: Ezekiel 23:20 There she lusted after her lovers, whose genitals were like those of donkeys and whose emission was like that of horses. - I raise that a bit, and bring in Teh Jebus, Matthew 15:-7, "For God commanded, saying, Honour thy father and mother: and, He that curseth father or mother, let him die the death." That is one nice Jesus... ĴαʊΆʃÇä₰ my sig is my own opiate! 15:25, 7 September 2009 (UTC) My day[edit] I went to a funeral of a friend of my parents whom I have known since the 60s. Opposite the cemetery/crematorium there was a cafe called "Hell's Kitchen". Ⓖⓔⓝⓖⓗⓘⓢ 20:56, 4 September 2009 (UTC) - What an appropriate place to build a cafe called Hell's Kitchen. My condolensces. SuperJosh 17:27, 5 September 2009 (UTC) - Was there an aggressive, wrinkle-faced ex-footballing chef taking his shirt off while talking to a camera? CrundyTalk nerdy to me 15:46, 7 September 2009 (UTC) WikiSynergy pulled down[edit] "This site is being shut down. The reason is that the threat level to the founder is unacceptable. Threats have been made to attack the my reputation in real life. Although I believe that the friend who made the threats and then apologized will keep their word, that does not eliminate the threat from others. If even one of the several people who know who I really am make an accidental slip (as already happened once), I could be subject to a campaign of harassment and lies which will deeply effect my life. I had planned to continue my campaign to eliminate pseudoscience from the skeptical community, by providing a place where skeptics and believers could hash out their differences in a collegial environment. Still, there will be those whose ideas we debunk, because they are unscientific. The site is intrinsically a place for controversy and dispute. I know that in the future, the risk would only get worse. I cannot risk my real-world life, or my real-world material existence, in order to help the arena of fringe ideas progress to a higher level. It is possible that the site may be put back up, so please watch this space. But as of now, WikiSynergy is gone." Human 03:46, 5 September 2009 (UTC) What were the said comments? Theemperor 04:10, 5 September 2009 (UTC) - Damn shame, too. Now, I fear, that my prediction about a certain green goose trashing RationalWiki instead of WS will come true. Now more than ever. Besides that, it wasn't really a bad wiki for promoting pseudosciences.Gooniepunk2005 05:10, 5 September 2009 (UTC) - I liked the place and will miss it, I hope PS figures out a way to put it back up. I think PS is over-reacting a bit to lame threats from internet jerks. Oh, and Goonie, you think that lame goose is any match for the combined forces of the wikians that are rational? Human 05:15, 5 September 2009 (UTC) - Absolutely not!!! Especially with the numbers of syspos and 'crats we have, compared to the two that WikiSynergy had. I actually sent PS an e-mail telling him to feel free to join us at RationalWiki once it is back up.Gooniepunk2005 05:23, 5 September 2009 (UTC) - Although now, as if it wasn't obvious anyways, I can, at least, admit my sock: I was the one known as "The Janitor." But I think everyone but goose poop caught that one.Gooniepunk2005 05:34, 5 September 2009 (UTC) - A shame. What was the stuff that some "people" wanted deleted? Or would that simply compound the problem? (Now I think about it I can make a good guess though.)--Bob M 06:56, 5 September 2009 (UTC) - I just think the bullies got the better of her. They bullied, she acquiesced, so they bullied more. Assholes (sorry Bobwiki). I'm trying to communicate with her and get her over the paranoia hump if I can. Oh, and Goonie, you know PS is not a "he", don't you? And she's already on RW? Idea: Port all of WS to a namespace on RW for a bit where us tougher hombres and homgals can drive off the trolls? Human 07:25, 5 September 2009 (UTC) - There is something to be said for the power of the mob in resisting bullying. That's one reason Terry Koeckritz prefers to do it behind closed doors. Ⓖⓔⓝⓖⓗⓘⓢ 09:38, 5 September 2009 (UTC) - I'm surprised at how abrupt the whole thing is. Oh, well. I'll have to put the thing about all the molecules going to one side of the room on RW at some point. Sterile 11:44, 5 September 2009 (UTC) - I really really wanted to comment on that but I never found it again! It was a poor "example" since it's not possible - as soon as the molecules are even slightly "out of balance" the pressure will be higher in one area, making its molecules leave for the lower pressure area(s). Human 22:38, 5 September 2009 (UTC) - Yeah, you are right about that. My point was that even if we just use a simple model-assume that molecules are on one side of the room 50% of the time, how often will they be on one side, no real "chemistry" involved--that events like that just aren't going to happen (0.0000 ... 1 x 1023 0's 1% of the time, essentially never)--and, as you said, they would fly apart at a rate 100s of miles per second, such that in half a microsecond, we'd be back to normal. No paranormalsupernatural dude required. Sterile 02:05, 6 September 2009 (UTC) - Well, that is sort of chemistry. It sounds like statistical thermodynamics, the model that explains why entropy always increases. But the numbers you'd need to quote would be powers of powers (possibly another "of powers" on top). :S Armondikov 18:42, 6 September 2009 (UTC) Actually, you don't. Entropy is an energy phenomenon, and the partition functions you imply are about energy. (An increase range of translational energy equates to an increased density of states, hence more available microstates and more entropy.) The two are clearly related, but the argument I was making was purely based on a probability argument. They amount to the same answer, but are slightly different. Sterile 18:49, 6 September 2009 (UTC) - I have to point out that I hate thermodynamics. And I was taught stat-therm by a maniac, who, despite being an incredibly fun guy, can't teach for shit. So my understanding of it is pretty much limited to "you have more disordered states than ordered states, go figure". With any luck I'll never need to use it again. postate 15:17, 7 September 2009 (UTC) Idiots Convention[edit] Now this looks like fun. I have already planned my itinerary. - How to stop abortions: a new approach (DVD: Maafa 21 & Discussion) - How to deal with supremacist judges - How to stop socialism in health care - How to defend America vs. missile attack - How the media can help us take back America - How to recognize living under Nazis & Communists Looks like a fun week. Pi 01:34, 6 September 2009 (UTC) Damn I want to attend that so badly! Really! Theemperor 01:53, 6 September 2009 (UTC) - Aw, gee, I wonder why she didn't invite her prize idiot son to give an hour of monotonic moronicism that we could transcribe? Human 02:12, 6 September 2009 (UTC) - Andy will be speaking at How to use the Internet effectively: Internet 101, as the creator of teh largest online Christian Conservativetm encyclopedia. Theemperor 02:56, 6 September 2009 (UTC) - Seriously? You had better not be yanking my chain. Pi 03:05, 6 September 2009 (UTC) - You've been yanked. Hey, we should send CUR! Human 03:16, 6 September 2009 (UTC) - We only have a few days to get him into the early bird registration. Where is he these days? I saw him over at wikiindex once. Pi 03:20, 6 September 2009 (UTC) - I've been looking for to A Storehouse of Cheetahs, but to no avail. Theemperor 03:26, 6 September 2009 (UTC) - I have a theory that it's Lumeniki. Phantom Hoover 08:03, 6 September 2009 (UTC) - No, that guy use to have a wiki on ScribbleFarm, a bit pre-CUR. Pi 08:19, 6 September 2009 (UTC) - I think he's probably just retired from the wiki world, or he's socking on RW. As for "how to take back America"... a bunch of white, right-wing, McCarthyite, borderline racists saying the same thing over and over again: "Jesus, abortion, terrorism, Obama, Jesus, abortion, terrorism, Obama, Jesus, abortion, terrorism, Obama, Jesus, abortion, terrorism, Obama..." SuperJosh 11:36, 6 September 2009 (UTC) - Bloody hell... "How to understand Islam"... I really hope someone puts videos of this clusterf*ck on the net. SuperJosh 11:38, 6 September 2009 (UTC) - I bet no more than 10 minutes into the opening speech Phyllis Schlafly proves Godwin's Law and they don't return from that point for the rest of conference. I reckon I it will set new standards in Godwinian mechanics. Pi 11:52, 6 September 2009 (UTC) - Well, they'll have to be bootlegged and mirrored because you can guarantee that if they posted them officially, the comments section would be down. But keeping an eye on this should be an RW specialty. At least for the politics junkies. Armondikov 19:53, 6 September 2009 (UTC) RationalWiki[edit] Any chance that RationalWiki will come back soon? Proxima Centauri 14:36, 6 September 2009 (UTC) - No. It's gone forever. Don't bother checking back later. --Jeeves 14:39, 6 September 2009 (UTC) - Yeah, Trent got sued and is now in jail for the next 42 years. Now scurry back to Liberapedia and stay there. Theemperor 15:06, 6 September 2009 (UTC) - What went wrong? Where are the backups? I hope you're joking, I had trouble with Ratwiki but it will be sad if all that secular information vanishes from teh Innertubes. I know His Imperial Majesty is joking as it would take longer than a few weeks for a case to get to court. Proxima Centauri 15:08, 6 September 2009 (UTC) - Also, when I was very mad at you, I wanted to delete you from RationalWiki and accidentally deleted everything, including the backups. Nx 15:46, 6 September 2009 (UTC) - Well little Nixie, that just shows you shouldn't do irrational things when you're angry. Proxima Centauri 16:48, 6 September 2009 (UTC) - Moreover, after Trent was jailed the bailiffs claimed the server and backups and wiped them for resale. Phantom Hoover 18:22, 6 September 2009 (UTC) - Wasn't the server burned by the FBI after they mistook it as a stash for Class A drugs? Armondikov 18:46, 6 September 2009 (UTC) WHOOT! Welcome Home![edit] HURRAH! YAY FOR TRENT!!! YAY FOR BOB!!! YAY FOR ACE!!! Ace McWickedModel 500 23:17, 6 September 2009 (UTC) - what? Oh.. tomorrow. This'll be great. Kettle o' fish 23:23, 6 September 2009 (UTC) - AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA... AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA - Yay! ħuman 23:35, 6 September 2009 (UTC) - Could someone go to my monobook.js and delete the import'recentchanges.js'? I am kind of worried it is the problem. Cheers. Pi. 23:36, 6 September 2009 (UTC) - My long nightmare of productivity is over at last! Ace McWickedModel 500 23:37, 6 September 2009 (UTC) - I was doing so well at work, guess that's over now. Rad McCool (talk) 23:54, 6 September 2009 (UTC) - LOL Shot info (talk) 00:12, 7 September 2009 (UTC) - Yay, it's about time we get back online. I like being back at a more familiar wiki.Lord of the Goons (talk) 00:38, 7 September 2009 (UTC) - Well that was a nice holiday but it's good to get back home. Генгисmutating 06:17, 7 September 2009 (UTC) - Yeah, where I can finally add/edit all the stuff I was planning on doing while the site was down, but have since, naturally, forgotten... We should have kept a To Do list at Teflon. postate 14:07, 7 September 2009 (UTC) - ĴάΛäšςǍ₰ <insert witty comment here> 14:57, 7 September 2009 (UTC) - Wayhey! The gang's all here! Gosh it feels good to be back - I hope nobody's taken my favourite chair. I would just like to take this moment to say that Andy Schlafly linking to my blog from Conservapedia must be the biggest win evah! --Psy - C20H25N3OYou know you want to 17:29, 7 September 2009 (UTC) - What a wonderful thing to wake up to. Goodbye productivity!--الملعب الاسود العقل We laughed in the faces of kings never afraid to burn 17:32, 7 September 2009 (UTC) The future[edit] We really need to work out a way to prevent this happening next time Trent visits his homies. ħuman 03:15, 7 September 2009 (UTC) - Trent, do you know what went wrong? -- Nx / talk 07:55, 7 September 2009 (UTC) - Was it a MW fault, or h/ware? Shot info (talk) 08:06, 7 September 2009 (UTC) - Well, if there's a physical thing wrong with the physical server, if you can't access the physical object, then you can't do much about it. It was probably just a minor issue that was compounded because Trent was away, short of giving his keys to someone nearby, there's probably not much you can do about it. postate 14:03, 7 September 2009 (UTC) - It was a hardware issue, I have my suspicions, but no proof yet....I will be putting up a post in the next few days with some suggestions on what I think should be implemented to decrease the chances of this happening again. tmtoulouse 16:37, 7 September 2009 (UTC) Fairy tales[edit] Some are over 2,600 years old. And some are from 4004 BCE (allegedly). Where's the difference? I am eating & honeychat 11:02, 7 September 2009 (UTC) - The difference: 2600 year old fairytales have no ASK and CP to "support" them and give us lulz. --Editor at CPOh, Finland! Why? 13:11, 7 September 2009 (UTC) Holy bloomin' crap[edit] Stumbled upon this website, and though it might interest the rest of the mob. That is a large stack of monies.... ĴαʊΆʃÇä₰ the US Navy aircraft carrier of all air conditioners! 14:55, 7 September 2009 (UTC) - It's amazing, but I recall a reasonably old interview with a UK Chancellor or Shadow Chancellor or something, where the he said that when you get to this level, it pretty much stops being "real" money and becomes more of a concept. Which is sort of true, when a government pledges x million or billion to a cause, it's not like they're handing over that money, that instant, in cash, or even a cheque. It comes from different sources, goes different places, in different amounts at different times and it just gets confusing to think about (hence the comment that it's abstract and conceptual). But it's very interesting to see this much money visualised like this. postate 15:02, 7 September 2009 (UTC) - Where's the next, biggest stack? The one that represents what Bush/Obama spent to bail out failed capitalists? TheoryOfPractice (talk) 15:06, 7 September 2009 (UTC) - To be honest, I would also have liked to see that... didn't Bush drop out 700 billion, and Obama 800 billion or so? Hmm... ĴαʊΆʃÇä₰ lavish cutting boards bomb me... 15:07, 7 September 2009 (UTC) - For completeness, I'd like to see stacks of the average salary next to stacks representing government income. That'd be an eye-opener for some people. Or maybe stacks of the poor people's stuff next to the stacks of the rich ones... I suppose it's a fairly simple calculation that you could, with the expertise, put into a javascript and hook it up to some visualisation... postate 15:22, 7 September 2009 (UTC) Isn't this amazing?[edit] The first thing we do upon returning to RW is... ban one another silly. ĴαʊΆʃÇä₰ brah raaarrrgh bruh murgh bruagh! 16:21, 7 September 2009 (UTC) - Anyone like a dry sherry? The drinks are on me. Генгисmutating 16:22, 7 September 2009 (UTC) - Goodness no, just a Virgin Pina Colada, please. *bans Khant when he turns around. Javasca₧ wasn't me! 16:41, 7 September 2009 (UTC) - Just get a machine to get your drinks. Kettle o' fish 17:10, 7 September 2009 (UTC) - Virgin Piña Colada? Wimp. (Oh fuck, this is going to sound boring.) I had a proper Piña Colada at the bar where it was invented in Puerto Rico (only fresh pineapple and coconut milk does the business, and no paper brollies either). Генгисmutating 17:44, 7 September 2009 (UTC) - Actually that was several, not just 'a'. Генгисmutating 17:45, 7 September 2009 (UTC) Recovery[edit] The first thing we need to do is get re-indexed by search engines. If you have a blog or other dynamically updated site please consider picking your favorite RW page and tossing a link over our way to prod the bots. tmtoulouse 23:43, 6 September 2009 (UTC) - Will this require fresh links or simply clicking on existing links? Either way, i can get some stuff rolling.-- -PalMD --Yee haw! 02:17, 7 September 2009 (UTC) - The idea is to just post a blog post or something that google will index quickly with a link to an RW page, and hope that the bot following the link will realize the site is back up. tmtoulouse 02:20, 7 September 2009 (UTC) - I guess I should remove my nasty public comments on Google now? Kettle o' fish 23:44, 6 September 2009 (UTC) - GENTLEMEN OPERATION BRAINS HAS BEGUN!!11!11 --The Emperor Kneel before Zod! 23:55, 6 September 2009 (UTC) - OK, i got some new links up to get indexed. The first set went up on WCU last night, the next set goes up on science-based medicine this afternoon.-- -PalMD --Yee haw! 12:50, 7 September 2009 (UTC) - I've added some too - the ones I thought important. maybe give us a list of which ones people think are most NB and we can add them accordingly. --Psy - C20H25N3OYou know you want to 17:51, 7 September 2009 (UTC) - Jinx's blog comments doesn't mark the homepage external link no follow . - π 00:13, 8 September 2009 (UTC) - Google search for teh assfly has us back in the running!-- -PalMD --Yee haw! 01:29, 8 September 2009 (UTC) In the Garden of Eden[edit] Going to redo the question I put up on Trent's blog. Would it have been right for Adam and Eve to destroy the tree of knowledge, as they were only told not to eat off of it? And please, use your imagination as to how to destroy it, I go with talking dinosaurs.--Tabris (talk) 03:47, 7 September 2009 (UTC) - They were trying to destroy it! They started by eating its fruit... next would come the twigs, branches, and roots. ħuman 04:40, 7 September 2009 (UTC) - Isn't it obvious? The only way to destry teh evil tree o' knowledge was to throw it deep into the fires of Mount Doom, in the Heart of Mordor, where the shadows lie... SJ Debaser 10:12, 7 September 2009 (UTC) - Well, accepting the ridiculous premise of the question, there would be no right or wrong about it as they were innocent of what it was. It was just a tree to them and as they had been given dominion over the Earth destroying it would have been an acceptable option, especially during winter when burning a bit of old wood might keep the chill away from those naked bodies. Генгисmutating 19:06, 7 September 2009 (UTC) We're back just in time[edit] Thank goodness we're back.... I almost died on Thursday and need my Rationalwiki support group. This week at the Auraria campus has been fall fest. It's a great little festival, with groups from all over campus setting up tables and trying to get members. During the course of the festival, I spoke to many different religious groups including Christians (Baptist, Methodist and Catholic (I royally pissed off the Catholic table by the way)) Jews (Reform and Orthodox) and Muslims. At the very end of the line, there was a table labeled Menorah Ministries. I had never heard about them, so I started looking at some of their literature. Turns out to be a Messianic Judaism group, which is fine. I start talking to the guy and can tell right away that he's a little.... we'll just say off. Anyway, so then I look down and see this piece of shit book on the table.] He saw the look on my face and said, I swear "This man is one of the most brilliant thinkers of our time." I almost choked to death on my own saliva. I got into a little mini argument with some of the problems is that book (I literally had him open to a random page and I just found the first fallacy and attacked it) He comes back with the whole "well, I guess you're proving him right" speech. I just responded by saying "I'm not an Atheist; I just like nonfiction books better." Then I left, still shaking. However, I do believe I got the last laugh; I wondered back over to the Orthodox Judaism table and casually mentioned what he was saying at his table. They were not happy and were talking about sending someone over to have a talk with him. If you read on the news that some Orthodox Jews beat the shit of a Menorah Ministries guy, you never saw me. SirChuckBCall the FBI 09:05, 7 September 2009 (UTC) - I can't conceive of the level of delusion required to conclude that Ray Comfort is a brilliant thinker. I'm fairly sure that guy requires medication. --JeevesMkII The gentleman's gentleman at the other site 09:25, 7 September 2009 (UTC) - The UK Amazon has the tag "arsewater" for "You Can Lead an Atheist to Evidence, But You Cant Make Him Think" (along with complaints that it isn't possible to give a negative number of star reviews). The man is clearly a military-grade fuckwit Silvermute (talk) 09:39, 7 September 2009 (UTC) - It is a simple matter of perspective. If you were talking to Einstein and he was struck by lightning, instantly doubling his IQ would you notice any difference? All you have to be is one percent cleverer than someone and you look brilliant to them. Ray's arguments are brilliant. Atheists are wrong, they will instantly disagree and they will say false things; so by disagreeing with him you are proving him more right. - π 09:43, 7 September 2009 (UTC) - People like VenomFang think that they can prove the existence of gOD in seven words (or something like that) and they attract a bevy of followers who shriek "Wow, he's so right!" just because it supports their own preconception/delusion/religion. These people don't even approach the act of "thinking", like PJR they just regurgitate what someone else has spouted and fail to recognise the contradictions. Thanks for reminding me, Chuck, just why I started editing at CP and why I believe in our mission here. Генгисmutating 18:36, 7 September 2009 (UTC) - Aw, don't use "fag" as an insult... and Pi, yeah, I agree. You can tell how "smart" people dumber than you are, but you can't tell how much "smarter" someone smarter than you is. ħuman 19:42, 7 September 2009 (UTC) - Sorry that was a typo. Генгисmutating 06:22, 8 September 2009 (UTC) - Eliezer Yudkowsky at lesswrong.com came up with the idea that all people over one standard deviation above your own IQ start to blur together when it comes to judging intelligence. Seems reasonable to me. (That was horrible phrasing, sorry. I can't write currently.) Clepper is fallible 01:03, 8 September 2009 (UTC) The Atheist's Guide To Christmas[edit] Thought I'd give a shout out to Ariane Sherine's next project following on from where the Atheist bus campaign left off, The Atheist's Guide to Christmas. 42 writers, entertainers, comedians and scientists all contribute, and it's a pretty impressive line-up: Richard Dawkins, Charlie Brooker, Ben Goldacre, Jenny Colgan, David Baddiel, Simon Singh, AC Grayling, Brian Cox and Richard Herring. And a few more that Ariane have said are writing. Believer or not, it'll probably be far, far more entertaining that You Can Lead an Atheist to Evidence... or The Dawkins Delusion. postate 14:13, 7 September 2009 (UTC) - Thanks for the recommend. Even if it only had half the contributors and even if the profits were not going to a good cause it would still be essential reading. As it is I'll be bulk buying for Christmas stockings. Err.. solstice stockings. Err... It's bloody cold and we need an excuse to party stockings. Bob Soles (talk) 14:47, 7 September 2009 (UTC) - Mrs Khant and I try and avoid these religious celebrations. We hide away in our yurt with some figgy pudding, a Colston Bassett and a mixed case of festive beverages. BTW we had a recent get together of friends and I decided to open one of my bottles of Croft '63 which I bought back in 1978. Twas delicious. Генгисmutating 16:32, 7 September 2009 (UTC) - British Christmas is mostly secular nowadays anyway, isn't it? All you need to change to make it fully non-Christian is to change channels during the Service from Westminster Abbey and you're done. What's Christmas Day like in the United McDocracy of Colonia? Totnesmartin (talk) 18:53, 7 September 2009 (UTC) - A lot of casual lip service (people wishing each other a Merry one). For some time now the big stores have been open on Xmas day. Apart from the actual War on Zmas, it's pretty secularized. And, oh yeah, traffic and shopping get really bad in December. ħuman 19:47, 7 September 2009 (UTC) - Lots of pretty lights in rich neighborhoods and awkward moments when you realize you don't have change for the Salvation Army. Malls become unbearable, rush hour doubly so. Clepper is fallible 01:35, 8 September 2009 (UTC) All in the Family[edit] Given it's cultural significance, and the fact that I'm a huge fan, would it be alright if I do an article on this classic example of American television?--Tabris (talk) 02:23, 8 September 2009 (UTC) - I really don't know what it is about, but when I hear TV show my first reaction is likely off-mission. What are you looking to say about it? - π 02:25, 8 September 2009 (UTC) - '"The program you are about to see is All in the Family. It seeks to throw a humorous spotlight on our frailties, prejudices, and concerns. By making them a source of laughter we hope to show, in a mature fashion, just how absurd they are." That sounds a little like us. Besides,haven't you ever heard of the "Archie Bunker" Vote?--Tabris (talk) 02:39, 8 September 2009 (UTC) - No, but I do not live in the US and not all of their pop culture makes it our way. Have you heard of a donkey vote? - π 02:40, 8 September 2009 (UTC) - The asshole vote?--Tabris (talk) 02:43, 8 September 2009 (UTC) - Again, not culturally relevant, as we have a different electoral system. How about you write the article and I will learn from it? - π 02:46, 8 September 2009 (UTC) - American popular culture? Actually it was a rip-off of Till Death Us Do Part. Генгисmutating 06:31, 8 September 2009 (UTC) - I'd say go ahead and write it, as it's an example of a political parody. If worst comes to worst, we can debate later if it is off mission. But it has a significance in that it made fun of what was left over from the "Leave It To Beaver" conservative generations.Lord of the Goons (talk) 06:35, 8 September 2009 (UTC) Intelligent Design Explained[edit] Intelligent Design Explained; apparently it was a poker game between Satan and God with the foreskin being the stakes... postate 09:02, 8 September 2009 (UTC) - I love how they have NZlander accents. Very funny, but also blasphemous - "slaughter a goat!" SJ Debaser 12:09, 8 September 2009 (UTC) - No no, it was God lighting his fart. Damnit why do they keep removing all the good clips off youtube? CrundyTalk nerdy to me 14:33, 8 September 2009 (UTC) How is it possible?[edit] The section two above makes me sad. How is it possible that people would donate a third of million dollars to a convicted fraudster so that he could keep his fraudulently obtained property? I don't really keep up to date on the antics of Kent Hovind, but they're usually good for a laugh when I do hear of them. This particular thing however just depresses me. Can't people think of better things to do with their money than keep some yokel crime family in style to which they've become accustomed? --JeevesMkII The gentleman's gentleman at the other site 11:22, 8 September 2009 (UTC) - It is fucking awful. Farah can raise $125,000 to prop WorldNutDaily upto help with his billboardscampaign to expose the truth about Obama. It seems that if you can get people emotionally invested enough in something they will hand over cash to them with little regards about to what happens to it. - π 11:35, 8 September 2009 (UTC) - It's the one flaw in the wingnut dream of "you can spend your money better than the government can!"... the one flaw being "it's bollocks". postate 11:53, 8 September 2009 (UTC) Sending video files[edit] I'm about to send some chunks of digital video off to become part of a DVD that someone's making. what's the best way of doing this? - copy files to cdr - copy files to dvdr - record watchable files to dvdr I'm not very technical about these things, I just point the camera... Totnesmartin 12:44, 8 September 2009 (UTC) - I guess just save the files (MPEGs?) onto DVD-Rs and post those. If you burn as a watchable DVD then they'll probably have to convert them back to reauthor the DVD. CrundyTalk nerdy to me 14:28, 8 September 2009 (UTC) - Ok I'll do that then unless somebody here goes NOOOOO before I go to Moprrison's. Totnesmartin 15:36, 8 September 2009 (UTC) Preventing server collapse[edit] While anything can happen, 90 percent of the time when the website goes down it is fixed by simply restarting the server. Our nearly 3 week long outage was fixed in less than 10 seconds when I got home by pushing the reset button. A large amount of our intermittent downtime could be avoided if this were to be automated. It would also greatly reduce the chances of a repeat incident during my travels. I will be traveling a lot this coming year. In fact, I am leaving for Chicago for a week in a month. I would like to invest in some power management hardware that can auto detect when the website is off line and reset the machine/modem/network switch, etc. I will be putting up a mini-fundraiser for people who want to pitch in and help. Based on the response I get for that I will get the best possible hardware I can for the money donated to help manage this problem in the future. If you are able and willing to toss a few dollars my way I am sure every user of the site would appreciate it. Hey if Kent Hovind's son can raise $200,000 dollars to save Dino Adventure Land we should be able to get a few hundred dollars to keep this site alive. tmtoulouse 23:28, 7 September 2009 (UTC) - I suggest you buy one of these to do the job of hitting the reset button when you aren't there to do it. That should solve more or less all your problems at a stroke. --JeevesMkII The gentleman's gentleman at the other site 05:51, 8 September 2009 (UTC) De-CPying the main space[edit] Before the crash ToP and a friend decided to start a de-CPying of the mainspace. I quite like this idea, as most of the in jokes about Conservapedia are little of putting to non CP-centric readers. How does the mob feel and what do we do about it? Also sorry to AP about reverting your edits without first raising this here, I forgot that the discussion about this was only a few hours old before the crash. - π 00:59, 8 September 2009 (UTC) - This was, in my eyes at least, only about getting rid of the pointless and lame connections and links. Stuff like where people have written "some people" and wiki-linked every other word to random CP sysops. Articles where CP has a famous (or infamous) article on the subject, or the right-wing/fundamentalist opinion is best described by Conservapedia, then the links and mentions should stay. The boxes like the CP template we were planning on expanding to include a more general "wikisphere" mentality. postate 08:23, 8 September 2009 (UTC) - I liked the idea at the time and I still do. My tweak was that I would add more links to other wikis. Ideally, have one honkin' great template that allows CP, WP, aSK, and anything else to get a link. I don't think RW should be afraid of linking to wingbut sites and it would be helpful if it can act as a hub on topics of interest. SuspectedReplicant 08:58, 8 September 2009 (UTC) - I'd say that if it is necessary to link to other wonky wikis to make a point then that's fine. I'd say that CP should no longer hold pride of place in the wonky wiki world - but equally we shouldn't go out of our way to link to any of them. Furthermore, if we do link to them it should be clear to an "outsider" what they are and why the link exists.--BobNot Jim 09:41, 8 September 2009 (UTC) - I'm having a look at making a "wikisphere" template. Though I have no idea exactly how best to implement it. Perhaps just stick the WP and CP templates inside a box, like the userbox boxes, and make it collapsible. postate 12:07, 8 September 2009 (UTC) - e.g., here. Though obviously, integrated better, but my wiki-fu is probably not up to scratch on that yet. postate 12:11, 8 September 2009 (UTC) Online behaviour.[edit] So if somebody from our little community was engaging in online behaviour that might not reflect well on them, would it be in our place to call them out on it? TheoryOfPractice (talk) 04:04, 8 September 2009 (UTC) - Depends, in my opinion. Does it reflect poorly on our community or just them?Lord of the Goons (talk) 04:05, 8 September 2009 (UTC) - (EC)Does it affect RationalWiki? - π 04:06, 8 September 2009 (UTC) - I don't think it effects RW as such--but we are "the vandal site"--we are ALL tarred with the same brush no matter what. Prolly not a big deal. TheoryOfPractice (talk) 04:11, 8 September 2009 (UTC) - If it reflects poorly on us, then I'd say go ahead. If not, then let them misbehave in their life, since it is their own. If you are unsure, then just give us a general idea of what without giving too much information, and the mob can be the judge. If it doesn't really matter, then let it go.Lord of the Goons (talk) 04:16, 8 September 2009 (UTC) - What we are talking about is user:Proxima Centauri? Or am I wrong? Please, people, name names if you are going to name names. ħuman 04:18, 8 September 2009 (UTC) No. You're right--we may be a mobocracy, but we're not a mob. One of us is not all of us, and if one of us is doing something that another one finds a little distasteful, so be it. TheoryOfPractice (talk) 04:21, 8 September 2009 (UTC) - FUCK YOU, YOU COMMUNIST FUCKTARDSend random outburst ĴαʊΆʃÇä₰ Banhammer, Renamer, and Goat 04:22, 8 September 2009 (UTC) - Is it me? Do tell ToP. Ace McWickedModel 500 04:24, 8 September 2009 (UTC) - (EC)Glad that is sorted, can we now move on to some of the site rebuilding problems above? - π 04:25, 8 September 2009 (UTC) - It's nice to be home. Oh, it's so nice to be home. (ECx4) Shut up, Pi. ħuman 04:27, 8 September 2009 (UTC) - Exorzize Cheezbrgrz for cn etz! Tarantallegra (talk) 06:01, 8 September 2009 (UTC) - Tarantallegra (talk) 06:08, 8 September 2009 (UTC) - Sorry Tara. You're not famous you've just been had by the evil Template:USERNAME trick in Javascap's sig. Генгисmutating 12:29, 8 September 2009 (UTC) - You could always send 'em an email if they have it enabled. SuspectedReplicant 08:45, 8 September 2009 (UTC) - Is it me with all my accounts on Cp? o.O Ke klik 09:27, 8 September 2009 (UTC) - Yeah I know what it was! No one around here gets my sense of humor. Just not cut out for this site. Tarantallegra 22:41, 8 September 2009 (UTC) Popular British Names[edit] I remember Andy claiming that Mohammed was the most popular name in Britain and being roundly slapped by people who prefer to check their facts. Can't remember where the "debate" was though. In any case, the BBC has a new list and Mohammed is in 16th place overall, although it gets up to #2 in the West Midlands. If anybody is interested... SuspectedReplicant 13:08, 8 September 2009 (UTC) - Full data from here. To be fair, if you add the count for "Mohammed", "Muhammad" and "Mohammad" together, it would be in 3rd place overall. I'm disappointed to see Robert down in 89th place though. SuspectedReplicant 13:12, 8 September 2009 (UTC) - Oh, I thought this thread was going to be related to this article "Britain's naughtiest names". Генгисmutating 13:26, 8 September 2009 (UTC) - CP conversation here. Aboriginal Noise with 4 M's and a silent Q 13:28, 8 September 2009 (UTC) - Ah, thanks! I thought it was a World History lecture but I couldn't find it. SuspectedReplicant 13:46, 8 September 2009 (UTC) - Now that's interesting. Though it's not exactly cause-and-effect with names (there's that theory that names do make a person but it's mostly woo-woo that I can tell). Think of the kind of parents who call their kid Chardonnay (which got popular after Footballer's Wives started IIRC) and you see that the link can be pretty valid. postate 13:34, 8 September 2009 (UTC) - Yeah, around here, Sidney/Sydney is a popular name for both boys and girls, due to our popular hockey player. Aboriginal Noise with 4 M's and a silent Q 13:41, 8 September 2009 (UTC) - Chardonnay? Good grief. Some people really shouldn't be allowed to have kids at all. SuspectedReplicant 13:46, 8 September 2009 (UTC) - (EC) That was my snobbish take on it as well. Parents who give names that echo celebrity culture (lawks a mercy, I'm sounding like Andy) are probably not the ones who would encourage their kids to sit down and read a nice book. While those who call their kids Alice, Benjamin or Daniel are most likely to have traditional mores and encourage their kids to do well at school. Генгисmutating 13:58, 8 September 2009 (UTC) - When I am in charge after taking over the world with Trent's mind control powers, I shall ban any names which are a) already English words; b) contain the letters x or z and c) are "Tiffany". Educated harmonic Hoover! 14:01, 8 September 2009 (UTC) - This whole "your name defines you" is a bit bollocks. I'm named after a war god and I don't go conquering empires, although I did glare rather sternly at someone on the bus the other day. Totnesmartin 14:02, 8 September 2009 (UTC) - - Not that it proves anything, but Jesus and Mary - or rather their Spanish equivalents - must be the most popular names in Spain. Some people have both.--BobNot Jim 14:10, 8 September 2009 (UTC) - My name seems to be increasing in popularity and is getting towards the top 50, however, I have yet to find one of those souvenir mugs / pens / gonks with my name on it. CrundyTalk nerdy to me 14:22, 8 September 2009 (UTC) - I have never met or heard of anyone with my name and spelling outside of Ireland. Educated harmonic Hoover! 14:32, 8 September 2009 (UTC) - I've never met anybody anywhere called Phantom Hoover. Totnesmartin 15:34, 8 September 2009 (UTC) - Thinking about this I now understand the issue. The Catholic Church used to be very powerful in Spain, and they were hot on boys and girls being called either Jesus or Mary - so every child automatically got this name as either a first or second name. And this no matter what the parents wishes were - when baptism came 'round the priest would add it automatically in addition to the chosen name. Of course everybody was not actually called Mary or Joseph in real life (though some were), as that would have been too confusing. Now I'm going to make a guess about Mohamed. My guess is that all Muslim male children are called Mohamed in the same way that all Spanish boys were called Jesus. That's obviously going to skew the statistics somewhat - but it hardly means that the UK is suddenly a Muslim country.--BobNot Jim 14:29, 8 September 2009 (UTC) (undent)Also in the Beeb British teachers can spot trouble makers by name alone. Apparently 'Callum' is a definite no-no. As that's an alternative spelling for my (real) name I have to agree. Bob Soles 14:43, 8 September 2009 (UTC) - Looks like an also also (see Britain's Naughtiest Names above). Генгисmutating 14:51, 8 September 2009 (UTC) - Callums are naughty? Callumny! I'm sorry... Educated harmonic Hoover! 15:06, 8 September 2009 (UTC) - That sounds about right. Considering the wide variety of "christian" names, it doesn't take much to become most common. Say, a really low 5% minority group with 50% having the same name in there somewhere (probably do to what you say regarding it being semi-compulsory) still gives a whopping 2.5% of the total population with the same name. Even with these low figures I've pulled out of my arse, this name becomes more common than the most common name among my friends (which is Sarah, best that I can tell). postate 15:00, 8 September 2009 (UTC) The full list is weirdness itself. Alfie at #6? Have you ever met anyone called Alf? If I ever met one, I'd expect him to be a jolly, rotund, mid-forties publican. Charlie at #7... apparently people aren't content with being called Charles and having a matey version for their friends. Also, Cameron is the name of a hot terminator, not a name for your baby boy. Additionally, when the hell did Ruby come back? And why aren't there any Ekaterins, dammit. (Edit: Also, Lexie? Do you want your kid to be a hooker, stripper or porn star?) --JeevesMkII The gentleman's gentleman at the other site 15:21, 8 September 2009 (UTC) - I have never met a female Cameron, but at least one male one. Educated harmonic Hoover! 15:24, 8 September 2009 (UTC) - I suspect it may well now become popular. Plus there's David Cameron. Oh, no, wait... --JeevesMkII The gentleman's gentleman at the other site 15:26, 8 September 2009 (UTC) Kriss AkabusiAAAWOOOGAAAR!!1 15:28, 8 September 2009 (UTC) - Cameron is more popular in Scotland. I knew several when I lived up there. I agree about Lexie though. SuspectedReplicant 15:32, 8 September 2009 (UTC) - The Scottish data is here. Muhammad/Mohammed show up in 82nd/96th and would be 50th if combined - significantly lower than the England/Wales figure. Which probably isn't that surprising. What stands out for me is that the top boys names are fairly standard, (Jack, Lewis, Daniel) while the top girls generally seem to be middle-class oriented (Sophie, Emily, Olivia) or that simply my bias showing through? For a real laugh, the full Scottish list is here - yes 2 people called their son A Worm (t | c) 15:56, 8 September 2009 (UTC) - Two As but no Qs? If you name your baby Q, I suppose you have to worry that your kid might grow up either to be a brilliant engineer or the biggest trekkie nerd imaginable. --JeevesMkII The gentleman's gentleman at the other site 16:01, 8 September 2009 (UTC) - That's hilarious! Don't they know "A" is a girl's name??? SuspectedReplicant 16:02, 8 September 2009 (UTC) - (EC2) My guess is that the name of "Mohammed" has reached that prominence in Britain because there is no single name that is nearly as popular among the British as "Mohammed" is among Muslims. ListenerXTalkerX 16:03, 8 September 2009 (UTC) - Holy shit. "Precious-Alexia" and "Princess-Vanessa". These babies should be taken in to care and renamed, immediately. Also, a see some cake related bullying in the future for Sarah-Leigh. One can only hopes she doesn't ever get fat. --JeevesMkII The gentleman's gentleman at the other site 16:07, 8 September 2009 (UTC) - 4 Cs. Computer nerd parents, evidently. Educated harmonic Hoover! 16:10, 8 September 2009 (UTC) - Hah. When they have grandkids, instead of C Jr. they can be called c-plus-plus. --JeevesMkII The gentleman's gentleman at the other site 16:14, 8 September 2009 (UTC) - Assuming this is the population as a whole, I think I'm on it (and I'm unique!). As an unrelated aside, there are at five spellings of "Matilda" there. Educated harmonic Hoover! 16:17, 8 September 2009 (UTC) - Wait, no, it's only 2008 births. Awww. I'm not special any more. Educated harmonic Hoover! 17:28, 8 September 2009 (UTC) 20 years ago I won a muffin with the answer to the trivia question "what's the most common first name in the world?" At Bob M, that is exactly correct. Many many male Muslims have Mohammed as their first name, and of course is a middle name and last name that makes them distinguishable (Shah Mohammed Reza Pahlavi..), or simply fame (Mohammed Ali). ħuman 22:26, 8 September 2009 (UTC)
https://rationalwiki.org/wiki/RationalWiki:Saloon_bar/Archive34
CC-MAIN-2022-21
en
refinedweb
Hello, Hello <?xml:namespace prefix = o Thanks for your request. I managed to reproduce the problem on my side. Your request has been linked to the appropriate issue. You will be notified as soon as it is fixed. Best regards, Hi, Andrey. I’m very interested in resolving this issue. Could you please help me? I can send you additional materials if it’s necessary. Our tech support is prolonged… Hi Ilya, <?xml:namespace prefix = o Thanks for your request. This issue will be fixed and included into the next hotfix, which will be released within 4-5 weeks. You will be notified. Also could you please attach your document here for testing? Best regards, Hello Alexander and Ilya.<?xml:namespace prefix = o I have inspected the source document and figured out the cause of the problem. This might give you a feasible workaround. The list is a so-named legacy list coming from Microsoft Word 6. List items are indented incorrectly because Aspose.Words does not process legacy lists perfectly. So you can recreate the list using ordinary list formatting technique. Newer versions of Microsoft Word never create legacy lists. Please ask me if there are any difficulties with document refactoring. Regards,Regards, I have looked at this too. The lists are "legacy" lists. There is documentation that explains how to process such lists, but it does not appear to be correct because MS Word renders the list not according to that description. I cannot figure out from the data in the list how to render it correctly. I am postponing this issue. We will not be working on it until we see more customer reports or until we have figured out a suitable solution. The issues you have found earlier (filed as 14758) have been fixed in this update. The issues you have found earlier (filed as 14758) have been fixed in this update. This message was posted using Notification2Forum from Downloads module by aspose.notifier.
https://forum.aspose.com/t/converting-word-to-pdf-cause-different-linebreaks/75739
CC-MAIN-2022-21
en
refinedweb
Hi, I have an Excel document (region and language is german) with some dates and times. When I save this document as HTML using Aspose.Cells the date and time formats are switched to the english ones. I'm using Aspose.Cells 4.8.2.0. I have attached the Excel document and the corresponding HTML output. Is ther any way that the date and time formats will not be changed when saving the document? Problem with date and time format in an Excel document (german region and language) saved as HTML Hi,<?xml:namespace prefix = o Thank you for sharing the template file. We have found your mentioned issue after an initial test. We will look into it and get back to you soon. Your issue has been registered in our internal issue tracking system with issue id: CELLSNET-16910. Thank You & Best Regards, Hi, Thank you Hi, with the attached version the problem has been fixed. Thank and Regards. Hi, sorry, but there are still some problems remaining. Please find attached a new sample document. The time format of cell B7 is not as aspected in the HTML and also the date format of cell D3. Hi, Thank you for sharing the template file. We have found your mentioned issue after an initial test. We have re-opened your issue. we will further look into it and get back to you soon. Thank You & Best Regards Hi, After further investigation, the number format of “B7” is [$-F400]h:mm:ss AM/PM. The displayed number format in MS is h:mm:ss. The number format of “D3” is [$-F800]TTTT, MMMM TT, JJJJ. The display number format in MS is TTTT,TT. MMMM JJJJ. We ignored the header “[$-Fxxx]” of the number in converting date time to a string value. By the way, we will fix the two issue soon. But we do not know whether there are some other special formats that start with “[$-Fxxx]”. So, could you simply set the number format as actual displayed number format: h:mm:ss, TTTT,TT. MMMM JJJJ? Thank you. Hi, We have fixed your issue, kindly try the attached version. Thank you. The issues you have found earlier (filed as 16910) have been fixed in this update. This message was posted using Notification2Forum from Downloads module by aspose.notifier. While trying to keep the API as straightforward and clear as possible, we have decided to recognize and honor the common development practices of the platform; we have re-arranged API Structure/ Namespaces. With this release, we have reorganized the API classes for Aspose.Cells component. This change has some major aspects that we follow. We have added new namespaces. The entire API (classes, interfaces, enumerations, structures etc.) were previously located in the Aspose.Cells namespace. Now, certain sets of API have been moved to their relative namespaces, which make the relationship of classes (with their members) and namespaces clear and simplified. It is to be noted here, we have not renamed existing API (classes, enumerations etc.) but, I am afraid you still need to make certain adjustments in your existing projects accordingly. For complete reference, please see the product's API Reference.
https://forum.aspose.com/t/problem-with-date-and-time-format-in-an-excel-document-german-region-and-language-saved-as-html/137016
CC-MAIN-2022-21
en
refinedweb
Set MAX TX power in LoRaWAN mode Hi everyone: Is there any way to make the LoPy4 default to max TX power (20 dBm in US915) in LoRaWAN mode? I am using ABP and the latest 1.20.2.rc6 firmware from Pycom. I'm thinking of this possible scenario: LoPy4 cannot hear GW, and GW cannot hear LoPy4 when transmitting at 5 dBm (so no comms at all). If the LoPy4 defaulted to the max TX power, it's possible that at least the GW would be able to hear the LoPy4 (and telemetry packets would be received, which is really all I care about since I'm using ABP). In this case, you can notice that I would not be able to use ADR because the LoPy cannot hear the GW. So it would be best for me if the LoPy increased its own transmit power. Any help you could give is greatly appreciated :) Best, Dan P.S. Regarding the much higher current consumption of transmitting at 20 dBm, that is not an issue for the setup I am running. Edit: I also understand this has been a recurrent topic in the forum, but unfortunately I haven't found any solutions in the responses. Here's the links to the other posts that look into this: I was wondering if possibly @jmarcelino or @jcaron could pitch in, as I've seen their responses in other posts... Thanks guys! @Eric73 Oh wow, that makes sense... So it actually is transmitting at max power, as expected. That's good to know. Thanks so much for your insight! @catalin @Xykon would you be able to confirm this? If so I think it's important to change the documentation in Thanks! @d-alvrzx As i understood source code , the documentation is wrong. tx_power is not in dBm but in lorawan coding. For US915 regional parameters are defined as following (Table 14: US902-928 TX power table) TXPower Configuration (conducted power) 0 30 dBm – 2*TXPower 1 28 dBm 2 26 dBm 3 : 13 …. 14 2 dBm 15 Defined in LoRaWAN1 So TW_POWER 5 is 20dBm (maximum EIRP of the sx1276 chip used in pycom hardware) @Eric73 Hi! Thanks so much for your reply :) Yes, I was presuming that the whole point of LoRaWAN is to have control over your end-devices, and so having them have control over their own TX characteristics isn't a great idea, specially if you have an area with lots of devices owned by different people. But, it was my understanding that the LoPy should default to its max TX power when ADR is disabled (according to @jmarcelino 's response on this thread) (?) I've checked it's only transmitting at 5 dBm by using the lora.stats()function, so I'm sure its not being confused for DR... Here's the code running on my LoPy to test this out. You'll see that I always transmit at DR0because I need to maximize range. from network import LoRa import socket import ubinascii import struct from utime import ticks_us, ticks_diff, sleep # Initialise LoRa in LORAWAN mode. lora = LoRa(mode=LoRa.LORAWAN, region=LoRa.US915, adr=False, device_class=LoRa.CLASS_C) # clean all channels print("Initializing channels...") [lora.remove_channel(channel) for channel in range(0,72)] # open selected channels CH_RANGE = [8,9,10,11,12,13,14,15] FREQ_RANGE = [903900000, 904100000, 904300000, 904500000, 904700000, 904900000, 905100000, 905300000] [lora.add_channel(channel, frequency=thisFreq, dr_min=0, dr_max=3) for channel, thisFreq in zip(CH_RANGE, FREQ_RANGE)] # create an ABP authentication params dev_addr = struct.unpack(">l", ubinascii.unhexlify('00000005'))[0] nwk_swkey = ubinascii.unhexlify('2B7E151628AED2A6ABF7158809CF4F3C') app_swkey = ubin, 0) # wait for setup sleep(5) while True: # start timer t = ticks_us() # make the socket blocking # (waits for the data to be sent and for the 2 receive windows to expire) s.setblocking(True) # send some data s.send(bytes([0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])) # make the socket non-blocking # (because if there's no data received it will block forever...) s.setblocking(False) # print time delta = ticks_diff(ticks_us(), t) print("Sent. TX Time =", delta/1000) # get any data received (if any...) data = s.recv(256) print("Received:", data) # print stats of last packet print("Stats:", lora.stats()) print("\n") # wait sleep(5) On the LoPy, no matter how long I wait, lora.stats()always returns 5 dBm on TX power: Sent. TX Time = 5002.944 Received: b'' Stats: (rx_timestamp=0, rssi=0, snr=0.0, sfrx=0, sftx=0, tx_trials=0, tx_power=5, tx_time_on_air=371, tx_counter=27, tx_frequency=904300000) And the uplinks on the server look like this: My devices are stationary (mostly), they move every couple of weeks from place to place. So I think having ADR enabled wouldn't be ideal, because they could move from a location close to the GW to a location very far away, and the GW would have no way of knowing this. Basically, then... Should the LoPy default to 20 dBm when ADR is disabled? Or am I mistaken? Best wishes, Dan Hi, this is not how lorawan is supposed to work, that's why there's no python endpoint to do this. First at all, how can you say that your device always send at 5dBm even when you joined network with ADR disabled ? Are you sure to dont confuse txpower and DR ? Your device is static or mobile? This can change how you can handle your situation (if it's mobile i hope you have an accelerometer on your board) Hey, guys. Anybody out there that could pitch in regarding this? I have noted that even when having ADR disabled (both in the server as in the LoPy), the tx power is still stuck at the minimum of 5 dBm. How can I increase it?
https://forum.pycom.io/topic/5996/set-max-tx-power-in-lorawan-mode
CC-MAIN-2022-21
en
refinedweb
Quick Start Welcome! If you are new to Touca, this is the right place to be! Our main objective here is to introduce Touca without taking too much of your time. We hope to make you excited enough to check out our hands-on tutorials next. Revisiting Unit Testing Let us imagine that we want to test a software workflow that reports whether a given number is prime. - Python - C++ - TypeScript - Java def is_prime(number: int): bool is_prime(const int number); function is_prime(number: number): boolean; public static boolean isPrime(final int number); We can use unit testing in which we hard-code a set of input numbers and list our expected return value for each input. - Python - C++ - TypeScript - Java from code_under_test import is_prime def test_is_prime(): assert is_prime(-1) == False assert is_prime(1) == False assert is_prime(2) == True assert is_prime(13) == True #include "catch2/catch.hpp" #include "code_under_test.hpp" TEST_CASE("is_prime") { CHECK(is_prime(-1) == false); CHECK(is_prime(1) == false); CHECK(is_prime(2) == true); CHECK(is_prime(13) == true); } import { is_prime } from "code_under_test"; test("test is_prime", () => { expect(is_prime(-1)).toEqual(false); expect(is_prime(1)).toEqual(false); expect(is_prime(2)).toEqual(true); expect(is_prime(13)).toEqual(true); }); import org.junit.jupiter.api.Test; import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertTrue; public final class PrimeTest { @Test public void isPrime() { assertFalse(Prime.isPrime(-1)); assertFalse(Prime.isPrime(1)); assertTrue(Prime.isPrime(2)); assertTrue(Prime.isPrime(13)); } } Unit testing is an effective method to gain confidence in the behavior of functions we are going to implement or have already implemented. But let us, for a moment, review some of the fundamentals of building and maintaining unit tests: - For each input, we need to specify the corresponding expected output, as part of our test logic. - As our software requirements evolve, we may need to go back and change our expected outputs. - When we find other interesting inputs, we may need to go back and include them in our set of inputs. In our. Introducing Touca Touca makes it easier to continuously test workflows of any complexity and with any number of test cases. - Python - C++ - TypeScript - Java import touca @touca.Workflow def is_prime_test(testcase: str): touca.check("output", is_prime(int(testcase))) if __name__ == "__main__": touca.run() #include "touca/touca.hpp" #include "code_under_test.hpp" int main(int argc, char* argv[]) { touca::workflow("is_prime", [](const std::string& testcase) { const auto number = std::stoul(testcase); touca::check("output", is_prime(number)); }); return touca::run(argc, argv); } import { touca } from "@touca/node"; import { is_prime } from "./code_under_test"; touca.workflow("is_prime_test", (testcase: string) => { touca.check("output", is_prime(Number.parseInt(testcase))); }); touca.run();); } } This code needs some explanation. Let us start by reviewing what is missing: - We have fully decoupled our test inputs from our test logic. We refer to these inputs as "test cases". Our open-source SDKs retrieve the test cases from the command line, or a file, or a remote Touca server and feed them one by one to our code under test. - We have removed the concept of expected values. With Touca, we only describe the actual behavior and performance of our code under test by capturing values of interesting variables and runtime of important functions, from anywhere within our code. Touca SDKs submit this description to a remote Touca server which compares it against the description for a trusted version of our code. The server visualizes any differences and reports them in near real-time. We can run Touca tests with any number of inputs from the command line: - Python - C++ - TypeScript - Java export TOUCA_API_KEY=<TOUCA_API_KEY> export TOUCA_API_URL=<TOUCA_API_URL> python3 prime_app_test.py --revision v1.0 --testcase 13 17 51 export TOUCA_API_KEY=<TOUCA_API_KEY> export TOUCA_API_URL=<TOUCA_API_URL> ./prime_app_test --revision v1.0 --testcase 13,17,51 export TOUCA_API_KEY=<TOUCA_API_KEY> export TOUCA_API_URL=<TOUCA_API_URL> node dist/is_prime_test.js --revision v1.0 --testcase 13 17 51 export TOUCA_API_KEY=<TOUCA_API_KEY> export TOUCA_API_URL=<TOUCA_API_URL> gradle runExampleMinimal --args='--revision v1.0 --testcase 13 17 51' Where TOUCA_API_KEY and TOUCA_API_URL can be obtained from the Touca server at app.touca.io. This command produces the following output: Touca Test Framework Suite: is_prime_test/v1.0 1. PASS 13 (127 ms) 2. PASS 17 (123 ms) 3. PASS 51 (159 ms) Tests: 3 passed, 3 total Time: 0.57 s ✨ Ran all test suites. If and when we change the implementation of is_prime, we can rerun the test (passing a different version number) and submit the results for the new version to the Touca server. The server takes care of storing and comparing the results submitted between the two versions and reports the differences in near real-time. The Touca server considers the test results submitted for the first version of our test, as our baseline: all subsequent versions submitted to the server would be compared against it. If, for any reason, requirements of our software change and we decide to change this baseline, we can do so by clicking a button right from the Touca server UI. Value Proposition highlighted design features of Touca.
https://touca.io/docs/basics/quickstart/
CC-MAIN-2022-21
en
refinedweb
Java Program for Printing Mirrored Rhombus Star Pattern Printing Mirrored Rhombus Star Pattern In this problem we’re going to code a Java Program for printing mirrored rhombus star pattern. For doing so we’ll take a number input from user and store it in variable rows and then run the for loop start from i=0 to ii which print the spaces and then take a another loop to print star start from j=0 to j Algorithm: - Take the number of rows as input from the user (length of side of rhombus) and store it in any variable.(‘row‘ in this case). - Run a loop ‘row’ number of times to iterate through each of the rows. From i=0 to i<row. The loop should be structured as for(int i=0;i<rows;i++) - Run a nested loop inside the main loop to print the spaces before the rhombus. From j=row to j>i. The loop should be structured as for(int j=rows;j>i;j–) - Run another nested loop inside the main loop after the previous loop to print the stars in each column of a row. From j=0 to j<row. The loop should be structured as for(int j=0;j<rows;j++) inside this loop print System.out.println(“*”); - Move to the next line by printing a new line System.out.println(); Code in Java: import java.util.Scanner; public class Pattern1 { public static void main(String[] args) { Scanner sc = new Scanner(System.in); System.out.println("Enter No"); int rows = sc.nextInt(); for(int i=0;i<rows;i++) //loop controlling number of rows { for(int j=rows;j>i;j--) //inner loop for spaces System.out.print(" "); //printing spaces for(int j=0;j<rows;j++) //inner loop for printing the stars in each column of a row System.out.print("*"); //printing stars System.out.println(); // printing a new line after each row } } } This code is contributed by Shubham Nigam (Prepinsta Placement Cell Student) Login/Signup to comment One comment on “Java Program for Printing Mirrored Rhombus Star Pattern” Thank you prepinsta for publishing my code…
https://prepinsta.com/java-program/mirrored-rhombus-star-pattern/
CC-MAIN-2022-21
en
refinedweb
Mouse Automation in Python using PyAutoGUI In this tutorial, we’ll learn about Mouse Automation in Python using PyAutoGUI. PyAutoGUI is a module in python which can automate Mouse movements as well as keystrokes. Mouse automation in python using PyAutoGUI has many applications in robotics. To read more about this amazing module, go visit its documentation. So let’s get started! Installing PyAutoGUI Installing the module in Windows is pretty easy. Just type in the following into your command line… pip install pyautogui Learning basic functions First, let’s import the PyAutoGUI module. import pyautogui as gui Then, check the size of our screen. this is done so that the mouse coordinates never go out of range. gui.size() Output: Size(width=1920, height=1080) Now we’ll see how to change the current position of our cursor. This is done using the moveTo() function. The first two parameters are the coordinates to be moved to, and the duration parameter is the time to be taken by the cursor to go to that position. The moveTo() function can also be replaced by the moveRel() function which moves the cursor to a relative position with respect to its current position. gui.moveTo(0,0,duration=1) gui.moveRel(0,-300,duration=1) We can also determine the current position of the mouse through the position() function. Let’s see a small program to help us understand this better. try: while True: x_cord,y_cord=gui.position() print("x={} y={}".format(x_cord,y_cord)) except KeyboardInterrupt: print("Exit") This bit of code will continuously print the current position of the mouse until CTRL+C is pressed. Output: x=1354 y=540 x=1354 y=540 x=1288 y=226 x=1288 y=226 x=1331 y=269 x=1331 y=269Exit Initiating mouse clicks for Mouse Automation in Python There are functions for left-click, right-click, dragging, scrolling and for many more events. Here are some of them: gui.click() #left-click gui.doubleClick() #double-click gui.rightClick() #right-click gui.dragTo(0,0) #drags from current position to (0,0) gui.dragRel(0,100) #drags down 100 pixels To better understand these functions, let’s write a program. We will draw a picture using just PyAutoGUI. gui.moveTo(200,400,duration=1) gui.click() for i in range(5): gui.dragRel(100,0,duration=1) gui.dragRel(0,100,duration=1) First, we go to our starting position from where we want our drawing to start. Then, the click() function is used to bring the window into focus. Further, we loop the dragRel() functions 5 times. The first dragRel() goes right by 100 pixels, and the second one goes down by 100 pixels. Thus creating a stairs-like figure. Output: Also read: KNN Classification using Scikit-Learn in Python
https://www.codespeedy.com/mouse-automation-in-python-using-pyautogui/
CC-MAIN-2020-50
en
refinedweb
Communication link failure trying to access Studio. Scott, try to enable Audit and events Protect and LoginFailure and then check if any events are recorded in Audit log, when you try to login into Studio and see the error.... I don't get to the point of trying to login. As soon as Studio launches for this connection, instead of showing the logon box, it goes directly to the error. I've never connected to it from Studio, so my username/password are not saved. There is also no entry in the audit log when I try to open Studio, no event for this is being logged. I've also tried ccontrol stop then start on the instance. I believe you can click Cancel on that "Communication link failure" error and then go to File -> Change namespace -> Connect -> choose the instance and then Studio will ask you for credentials Yes, you can do that, but it results in the same error after entering username/password and clicking OK. If Audit is enabled and LoginFailure and Protect events are enabled and there is no audit event recorded when you enter username/password and see error after clicking OK. then I would say that connection attempt from Studio does not reach HealthShare. Can you do telnet on port 1972 from the computer where you have Studio? Instead of serverspecify IP-address of server with HealthShare I was mistaken, it's not port 1972, it's port 19800. But, that is how the connection is configured, with 19800. I can't telnet from my machine, but was able to telnet to the HealthShare server that is refusing Studio connections from another server where HealthShare is installed. If you can’t telnet to that server/port from computer with Studio then something in the middle prevents this connection. This is question for network administrators. Yes, I meant I don't have telnet installed. I have admin access, will install it and try connecting. I'm connecting through a VPN. Network team is also looking into it. When you say you can't telnet from your machine, do you mean that you don't have a telnet client installed and/or cannot install one? Are you connecting to the server via a local LAN connection or through a VPN? There may be port-range restrictions for non-local IP addresses in the VPN server's configuration. Even if you're not using a VPN client to connect, there may be a firewall between your location and the server's location that filters traffic based on source network and/or destination port. Finally, there may be similar rules in the Caché host's internal firewall that could impact connections from specific local networks or hosts. This would explain why an Atelier connection works but a Studio connection does not. It's also possible that your workstation's firewall/anti-virus may be blocking outbound traffic on specific ports or port ranges. If it's so locked down that you can't install a telnet client to test with, I wouldn't be surprised if this was the case. I was mistaken, it's not using port 1972, but instead 19800. I am able to telnet to that machine/port from another machine, but not connect from Studio. FWIW, my money is still on this being caused by the Linux firewall on the server not allowing incoming TCP connections to port 1972. Those look correct. Is there perhaps a firewall blocking your Studio host's TCP/IP access to port 1972 on your server? Is the %Service_Bindings service enabled? Does it have any IP restrictions specified? %Service_Bindings is enabled, has Password and Delegated checked, no IP restrictions. I don't believe there is a firewall issue, but will investigate that too. If you are indeed telling Studio to use the same Web Server Port that you're successfully using with Atelier, then that's your problem. Studio connections are handled by the Superserver port (e.g. 1972), not the Web Server one (e.g. 57772). Studio settings: Atelier settings: Hello everyone, I have the same issue on logging to Ensemble Studio. The error I am receiving is: TCP Connect() failed-exception satisfied select () Reason (10061, 0X274d) No connection be made because the target machine actively refused it. Any input much appreciated! thanks, Social networks InterSystems resources To leave a comment or answer to post please log in Please log in To leave a post please log in
https://community.intersystems.com/post/communication-link-failure-trying-access-studio
CC-MAIN-2020-50
en
refinedweb
.TH socket_bind6 3 .SH NAME socket_bind6 \- set the local IP address and port of a socket .SH SYNTAX .B #include <socket.h> int \fBsocket_bind6\fP(int \fIs\fR, char \fIip\fR[16], uint16 \fIport\fR, uint32 \fIscope_id\fR); .SH DESCRIPTION socket_bind6 returns 0. If anything goes wrong, socket_bind6. .SH EXAMPLE #include <socket.h> int \fIs\fR; char \fIip\fR[16]; uint16 \fIp\fR; uint32 \fIscope_id\fR; \fIs\fR = socket_tcp6(); socket_bind6(s,ip,p,scope_id); socket_connect6(s,ip,p); .SH "SEE ALSO" socket_bind4(3), socket_getifidx(3)
https://git.lighttpd.net/mirrors/libowfat/src/commit/6919cf8bf38669d0b609f7d188cd5b5fa3eb73d0/socket/socket_bind6.3
CC-MAIN-2020-50
en
refinedweb
Building an Android Library Tutorial See how to create an Android library using Android Studio, publish the library to a Maven repository on Bintray, and host the library in the public JCenter repository. Version - Kotlin 1.2, Android 4.1, Android Studio 3 . In this tutorial, you’ll get to learn everything about building an Android library, from creation to publishing it for others to consume. In the process, you’ll learn: - How to create an Android library - How to publish your Android library - How to use your Android library - Best practices around building Android libraries Note: If you’re new to Android Development, it’s highly recommended that you work through Beginning Android Development and Kotlin for Android to get a grip on the basic tools and concepts. Other prerequisites include knowledge of using the bash/Terminal, git and Gradle. You’ll also need Android Studio 3.0 or later, and to publish your library you’ll need to have a GitHub account. Introduction Code reuse has been around since the advent of code, and Android is no exception. The end goal of every library developer is to simplify abstract complexities of code and package the code for others to reuse in their projects. As Android developer, you often come across situations where some code is a good candidate to get reused in the future. It is in these situations when packaging that code as an Android Library/SDK makes the most sense. Every Android developer is different in their own ways. At the same time, there are no set standards around building Android libraries. What that means is that developers come up with their own version of the solution and usually that leads to inconsistencies. Best practices, if defined and followed, could make things more streamlined. In case of library/SDK development, the goal should be to design better APIs so as to enable intended usage. Along with that, you also need to make sure that the API users are clear about its intended use and limitations. The above holds true for every library and not only for Android library development. If you spent some time solving a problem and believe that others might be facing the same problem, then abstract it into an Android Library. At the very least, it is going to save you precious time in the future when you revisit the same problem. And hopefully a bigger group of Android developers will benefit from your Android library. So it is a win-win in either case. It’s important to note, however, that the reason to create an Android library should not just be because you think so. If a solution already exists then use that, and if it does not solve your issue then you could make a request for the feature in the existing Android library. Brownie points to you if you decide to solve the problem and contribute back to the existing Android library. The advantage of that is huge because you helped to make the ecosystem better and in the process helped a lot of other Android developers. Better still you did not have to spend a lot of time managing an Android library but your code is going to get used by others. Phew! Looks like you are all set to embark on the journey to becoming a better Android Library developer! Let’s dive right into it! :] Getting Started Begin by downloading the materials for this tutorial at the top or bottom of the page. Inside, you will find the XML Layouts and associated Activities containing some boilerplate code for the app, along with helper scripts for publishing the Android library to Bintray,. MainActivity contains three EditText which you can use to enter email, password and credit card number. When you tap the Validate button, you’ll pass the text entered in the EditText fields and validate them via already declared methods. Another thing to note is the usage of the ext variable from the project’s build.gradle file in the app/build.gradle file. You will be defining more ext variables in this tutorial and referencing them in the same way later on. Build and run the app. You should see the following: You will see it does not do anything right now. That is where you come in. Next up you are going to create the Android library which will validate the entered text in the fields. Let’s get rolling! Creating the Android Library Inside your Android Studio, click File\New\NewModule… Select Android Library and hit Next. You will be at the Configure the new module step of the wizard. At this point, you are required to provide a name for your library , module name, package name and minimum SDK. Put validatetor as library name and module name, com.raywenderlich.android.validatetor as package name and set the minimum SDK to be at 14. Once done, hit Finish and go get yourself a coffee. Just kidding, wait or be back in 5 minutes tops (Gradle doing what it does best, compiling and building stuff…) for the next steps! Looks like you are back and you also have your Android library module named validatetor setup in your project. Go through the new module added to your project and get familiar with its files. An important thing to note is that under the validatetor module the folder src/com.raywenderlich.android.validatetor/ is empty! Hmm, that seems odd. Usually, the app module has at least the MainActivity.kt or MainActivity.java inside the same path under it. Well, let me clear this up for you! The reason it is empty is because it is a library and not an app. You need to write the code that the app module will later on consume. You need to set your Android library up for future steps. To do that, Add ext variables to the project’s(root) build.gradle file inside the already defined ext block, below the Project variables ext { // Project .. // ValidateTor Library Info libVersionCode = 1 libVersionName = '1.0.0' } Next, update your validatetor/build.gradle file to use the ext variables from project’s build.gradle file. - Update compileSdkVersion and buildToolsVersion: compileSdkVersion rootProject.ext.compileSdkVersion buildToolsVersion rootProject.ext.buildToolsVersion - Update minSdkVersion, targetSdkVersion, versionCode and versionName: minSdkVersion rootProject.ext.minSdkVersion targetSdkVersion rootProject.ext.targetSdkVersion versionCode rootProject.ext.libVersionCode versionName rootProject.ext.libVersionName - Update the version field for the support library dependency: testImplementation "com.android.support:appcompat-v7:$supportLibVersion" - Update version field for the junit dependency: testImplementation "junit:junit:$rootProject.ext.junitVersion" All the above makes sure is that your code is consistent when it comes to defining versions. It also enables control over all these from the project’s build.gradle file. Next, you have to write the logic of validating the strings that will be included in the validatetor library. The validation code is not the focus of this tutorial, so you’ll just use the Java files from the download materials in the validatetor-logic-code folder. Once downloaded, extract the files and copy all the files: Paste the copied files inside your validatetor module under src/com.raywenderlich.android.validatetor/ folder. This is what your project will look like now: Adding your Android Library to your app Open your app’s build.gradle file and add the following inside dependencies after // Testing dependencies // added validatetor Android library module as a dependency implementation project(':validatetor') Now sync up your project Gradle files. That is it. You have just added the validatetor Android library module to your app. This is one of the ways of adding an Android library to an app project. There are more ways to add an Android library to your app which will be discussed later on in the tutorial. Now that you have the validatetor library added as a dependency, you can reference the library code in your app. You will now put in place the validation code using the validatetor library for all three EditText fields in your app. Note: you will be referencing the Java-based Android library inside the Kotlin MainActivity class. There is no difference in usage except for following the Kotlin syntax Navigate to the app module and open the MainActivity.kt file inside the root package of the project to edit. - Create an instance of ValidateTor: private lateinit var validateTor: ValidateTor - Initialize the instance of ValidateTor by appending the following to onCreate(): // Initialize validatetor instance validateTor = ValidateTor() - Inside validateCreditCardField(editText:)replace // TODO: Validate credit card number...: if (validateTor.isEmpty(str)) { editText.error = "Field is empty!" } if (!validateTor.validateCreditCard(str)) { editText.error = "Invalid Credit Card number!" } else { Toast.makeText(this, "Valid Credit Card Number!", Toast.LENGTH_SHORT).show() } - Inside validatePasswordField(editText:)replace // TODO: Validate password...: if (validateTor.isEmpty(str)) { editText.error = "Field is empty!" } if (validateTor.isAtleastLength(str, 8) && validateTor.hasAtleastOneDigit(str) && validateTor.hasAtleastOneUppercaseCharacter(str) && validateTor.hasAtleastOneSpecialCharacter(str)) { Toast.makeText(this, "Valid Password!", Toast.LENGTH_SHORT).show() } else { editText.error = "Password needs to be a minimum length of 8 characters and should have at least 1 digit, 1 upppercase letter and 1 special character " } - Inside validateEmailField(editText:)replace // TODO: Validate email...: if (validateTor.isEmpty(str)) { editText.error = "Field is empty!" } if (!validateTor.isEmail(str)) { editText.error = "Invalid Email" } else { Toast.makeText(this, "Valid Email!", Toast.LENGTH_SHORT).show() } Run your app. You can now enter text in the EditText fields and hit the Validate button to run validations on the text entered. You can use the test credit card number 4111111111111111, a 4 with fifteen 1’s. :] You have successfully used the validatetor library in your sample app. Next up you will make your Android library available to others for use in their own apps by publishing to JCenter. Publishing your Android library In order to proceed with publishing your library for this tutorial, you’ll need to first put your Android project into a public repo on your GitHub account. Create a public repo in your GitHub account and push all the files to repo. If you don’t have a GitHub account, please just read along to see the steps involved with publishing the library. You just created your shiny new validatetor Android library and used it in your own app by referencing it as a module dependency. Right now only you can use this library because the library module is available to your project only. In order to make validatetor library available to others, you will have to publish the library as a Maven artifact on a public Maven repository. You have 3 options here You will publish your validatetor library to JCenter as it is the most common one and a superset of MavenCentral. Setup your Bintray Account You first need to create an account on Bintray. - Register for an Open Source Account at bintray.com/signup/oss: - Click on the activation link in the activation email they sent you and login into your account. - Click on Add New Repository: - Fill out the form as in the following screenshot and click Create. An important thing to note is that type has to be Maven: - You should now have a Maven repository. The URL for it should be of the form[bintray_username]/maven. If you’re not already on the Maven repository page, you can browse to it by clicking on it from your Bintray profile page: - Click on Edit button from inside your Maven repository: - Enable GPG signing for artifacts uploaded to your Maven repository on Bintray: Get API Key and Link GitHub Account Next, you need to get your API key to push your Android library to this Maven repository later on, and also link your GitHub account to Bintray. - Open your profile: - Click on Edit: - Navigate to the API Key list item and copy the API key using the button on the top right: - Go to your Bintray profile, hit the Edit button, go to the Accounts tab, and click Link your GitHub account to link your GitHub account to Bintray. Get your project ready for publishing To begin, back in Android Studio, switch to the Project view: Add API keys Double click your local.properties file and append the below bintray.user=[your_bintray_username] bintray.apikey=[your_bintray_apikey] Note: [your_bintray_apikey] is the key you copied earlier from your Bintray account. [your_bintray_username] is your Bintray username Add other details Open your gradle.properties file and append the following: # e.g. nisrulz POM_DEVELOPER_ID=[your_github_username] # e.g. Nishant Srivastava POM_DEVELOPER_NAME=[your_name] # e.g. [email protected] POM_DEVELOPER_EMAILID=[your_email_id] # You can modify the below based on the license you want to use. POM_LICENCE_NAME=The Apache Software License, Version 2.0 POM_LICENCE_URL= POM_ALL_LICENCES='Apache-2.0' # e.g. com.github.nisrulz GROUP=[your_base_group] POM_PACKAGING=aar Note: You should ideally put the above code block inside your global gradle.properties because these would be common to all libraries you publish Setup helper scripts There is a folder named scripts under your project. Drag and drop this folder into your validatetor module. You will see the move dialog on dropping it on the validatetor module. Click OK in the dialog. Once moved, your project structure under validatetor module will look like Add details specific to your Android library Open your project’s build.gradle file and append the following to the ext block below // ValidateTor Library Info, updating the values based on your specifics instead of mine where needed: libPomUrl = '[github_username]/[repo_name]' libGithubRepo = 'nisrulz/validatetor' libModuleName = 'validatetor' libModuleDesc = 'Android library for fast and simple string validation.' libBintrayName = 'validatetor' Setup publishing plugin and helper script - Add the publishing plugins under // NOTE: Do not place your application...inside your project’s build.gradle file // Required plugins added to classpath to facilitate pushing to JCenter/Bintray classpath 'com.jfrog.bintray.gradle:gradle-bintray-plugin:1.7.3' classpath 'com.github.dcendents:android-maven-gradle-plugin:2.0' - Apply the helper script to the very bottom of the validatetor/build.gradle file // applied specifically at the bottom apply from: './scripts/bintrayConfig.gradle' Publish to Bintray Next, execute the following command inside your project folder using the command line in a Terminal window: ./gradlew clean build install bintrayUpload -Ppublish=true Let me explain what the above line does ./gradlew: Execute Gradle wrapper clean: Execute the clean gradle task, which cleans up the repository of unused references build: Execute the build gradle task, which builds the whole project. This generates the aar file, i.e. your Android library package install: Execute the install gradle task, which is used to create the Maven artifact (aar file + POM file) and then add it to the local Maven repository. This task is available from the plugin you added earlier bintrayUpload: Execute the bintrayUpload gradle task, which is used to publish the installed Maven artifact to the Bintray hosted Maven repository. This task is available from the plugin you added earlier -Ppublish=true: This is simply a flag used to control the publishing of the artifact to a Bintray account. This is required to push the artifact to the Bintray account and is defined in the helper script Once your command completes successfully, head over to your Bintray account and navigate to your Maven repository. You should see the following: Hit Publish. Awesome. Your Android library is now published on Bintray. In order to use the library as it stands as a Maven repositiry, you would have to take the following steps: - In your project’s build.gradle file you will have append the below under allprojects/repositories // e.g. url '' maven { url '[bintray_username]/maven' } - To add it to your app, use the below (replace the module dependency you added earlier) // e.g. implementation 'com.github.nisrulz:validatetor:1.0' implementation '[group_name_you_defined_in_gradle_properties]:[library_name]:[library_version]' But you can eliminate the first step altogether, because jcenter() is the default repository for dependencies. So you need to publish your library to JCenter. Publish to JCenter - Goto binray.com and navigate to your Maven repository. - Find your library and open it by clicking its name. - Mouse over Maven Central tab and you should be able to see a popup like shown below - Select Click here to get it included link. This will initiate your request to include your Android library in JCenter After this you will have to wait for an email from JCenter confirming that your Android library is published on JCenter. Once published, you or anyone interested can use your Android library by adding to their app/build.gradle file under the dependencies block. // e.g. implementation 'com.github.nisrulz:validatetor:1.0' implementation '[group_name_you_defined_in_gradle_properties]:[library_name]:[library_version]' You can try it out once you have the email from JCenter. When doing so, remove the import on the local validatetor module. Note: You do not need to reference your Bintray Maven repository anymore. Your validatetor Android Library is hosted from JCenter now. Sweet. You just published your Android library for everyone. Feels good, right? :] Using your Android library You have already seen three ways of referencing an Android library in your app projects. They are summarized as: - Adding as module dependency: // inside app/build.gradle file implementation project(':validatetor') - Adding as a dependency from a remote Maven repository, i.e. a Bintray hosted Maven repository: // project's build.gradle file, under allprojects/repositories maven { url '' } // inside app/build.gradle file implementation 'com.github.nisrulz:validatetor:1.0' - Adding as a dependency from a public Maven repository, i.e. JCenter: // inside app/build.gradle file implementation 'com.github.nisrulz:validatetor:1.0' But what about if you have a local AAR file? First, you need to drop your AAR file inside the app/libs folder. Then to add the local AAR file as a dependency you need to add the below to your app/build.gradle file dependencies { compile(name:'nameOfYourAARFileWithoutExtension', ext:'aar') } repositories { flatDir { dirs 'libs' } } Then sync your Gradle files. You will have a working dependency. Cheers! Best practices for building Android libraries Hopefully, you now have an understanding about building and publishing an Android library. That’s great, but let’s also look at some of the best practices to follow when building Android libraries. Ease of use When designing an Android library, three library properties are important to keep in mind: - Intuitive: It should do what a user of the library expects it to do without having to look up the documentation. - Consistent: The code for the Android library should be well thought out and should not change drastically between versions. Follows semantic versioning. - Easy to use, hard to misuse: It should be easily understandable in terms of implementation and its usage. The exposed public methods should have enough validation checks to make sure people cannot use their functionality other than what they were coded and intended for. Provide sane defaults and handle scenarios when dependencies are not present. Avoid multiple arguments Don’t do this // Do not DO this void init(String apikey, int refresh, long interval, String type, String username, String email, String password); // WHY? Consider an example call: void init("0123456789","prod", 1000, 1, "nishant", "1234","[email protected]"); // Passing arguments in the right order is easy to mess up here :( Instead do this // Do this void init(ApiSecret apisecret); // where ApiSecret is public class ApiSecret { String apikey; int refresh; long interval; String type; String name; String email; String pass; // constructor // validation checks (such as type safety) // setter and getters } Or use the Builder pattern: AwesomeLib awesomelib = new AwesomeLib.AwesomeLibBuilder() .apisecret(mApisecret).refresh(mRefresh) .interval(mInterval).type(mType) .username(mUsername).email(mEmail).password(mPassword) .build(); Minimize permissions Every permission you add to your Android library’s AndroidManifest.xml file will get merged into the app that adds the Android library as a dependency. - Minimize the number of permissions you require in your Android library. - Use Intents to let dedicated apps do the work for you and return the processed result. - Enable and disable features based on whether you have the permission or not. Do not let your code crash just because you do not have a particular permission. If possible, have a fallback functionality when the permission isn’t approved. To check if you have a particular permission granted or not, use the following Kotlin function: fun hasPermission(context: Context, permission: String): Boolean { val result = context.checkCallingOrSelfPermission(permission) return result == PackageManager.PERMISSION_GRANTED } Minimize requisites Requiring a feature by declaring it in the AndroidManifest.xml file of your Android library, via // Do not do this <uses-feature android: means that this would get merged into the app AndroidManifest.xml file during the manifest-merger phase of the build and thus hide the app in the Play Store for devices that do not have Bluetooth (this is something the Play Store does as filtering). So an app that was earlier visible to a larger audience would now be visible to a smaller audience, just because the app added your library. The solution is simple. Simply do not add the line to the AndroidManifest.xml file for your Android Library. Instead use the following Java code snippet to detect the feature from your library during runtime and enable/disable feature accordingly: // Runtime feature detection String feature = PackageManager.FEATURE_BLUETOOTH; public boolean isFeatureAvailable(Context context, String feature) { return context.getPackageManager().hasSystemFeature(feature); } // Enable/Disable the functionality depending on availability of feature Support different versions A good rule of thumb: support the full spectrum of Android versions with your library: android { ... defaultConfig { ... minSdkVersion 9 targetSdkVersion 27 ... } } Internally detect the version and enable/disable features or use a fallback in the Android library: if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) { // Enable feature supported on API for Android Oreo and above } else { // Disable the feature for API below Android Oreo or use a fallback } Provide documentation - Provide a README file or a Wiki which explains the library API and its correct usage. - Include javadocs and other comments in the code wherever you see the need. Your code will be read by others so make sure it is understandable. - Bundle a sample app that is the most simplistic app showcasing how the Android library can be used. The sample project you used in this tutorial could serve as an example project. - Maintain a changelog Where to Go From Here? You can find the final project in the zip file you can download using the button at the top and bottom of the tutorial. You can see my version of the source code on GitHub here and the published library here. Contrary to usual belief, Android library development is different from app development. The differentiating factor is that apps target certain platforms and are consumed by users directly, but an Android library is consumed by Android developers and has to cover a much larger spectrum of platforms to enable its use by app supporting lower or higher platforms alike. Hopefully, after finishing this tutorial you have a solid understanding of building and publishing better Android libraries. If you want to learn more about Android library development, checkout Google’s official documentation. I hope you enjoyed this tutorial on building an Android library, and if you have any questions or comments, please join the forum discussion below!
https://www.raywenderlich.com/52-building-an-android-library-tutorial
CC-MAIN-2021-04
en
refinedweb
Writing Server-rendered React Apps with Next.js Free JavaScript Book! Write powerful, clean and maintainable JavaScript. RRP $11.95 The dust has settled a bit as far as the JavaScript front-end ecosystem is concerned. React has arguably the biggest mindshare at this point, but has a lot of bells and whistles you need to get comfortable with. Vue offers a considerably simpler alternative. And then there are Angular and Ember — which, while still popular, are not the first recommendations for starting a new project. So, while React is the most popular option, it still requires lot of tooling to write production-ready apps. Create React App solves many of the pain points of starting, but the jury is still out on how long you can go without ejecting. And when you start looking into the current best practices around front-end, single-page applications (SPAs) — like server-side rendering, code splitting, and CSS-in-JS — it’s a lot to find your way through. That’s where Next comes in. Why Next? Next provides a simple yet customizable solution to building production-ready SPAs. Remember how web apps were built in PHP (before “web apps” was even a term)? You create some files in a directory, write your script and you’re good to deploy. That’s the kind of simplicity Next aims at, for the JavaScript ecosystem. Next is not a brand new framework per se. It fully embraces React, but provides a framework on top of that for building SPAs while following best practices. You still write React components. Anything you can do with Next, you can do with a combination of React, Webpack, Babel, one of 17 CSS-in-JS alternatives, lazy imports and what not. But while building with Next, you aren’t thinking about which CSS-in-JS alternative to use, or how to set up Hot Module Replacement (HMR), or which of many routing options to choose. You’re just using Next — and it just works. I’d like to think I know a thing or two about JavaScript, but Next.JS saves me an ENORMOUS amount of time. — Eric Elliott Getting Started Next requires minimal setup. This gets you all the dependencies you need for starting: $ npm install next react react-dom --save Create a directory for your app, and inside that create a directory called pages. The file system is the API. Every .js file becomes a route that gets automatically processed and rendered. Create a file ./pages/index.js inside your project with these contents: export default () => ( <div>Hello, Next!</div> ) Populate package.json inside your project with this: { "scripts": { "dev": "next", "build": "next build", "start": "next start" } } Then just run npm run dev in the root directory of your project. Go to and you should be able to see your app, running in all its glory! Just with this much you get: - automatic transpilation and bundling (with Webpack and Babel) - Hot Module Replacement - server-side rendering of ./pages - static file serving: ./static/is mapped to /static/. Good luck doing that with Vanilla React with this much setup! Features Let’s dig into some of the features of modern SPA apps, why they matter, and how they just work with Next. Automatic code splitting Why it Matters? Code splitting is important for ensuring fast time to first meaningful paint. It’s not uncommon to have JavaScript bundle sizes reaching up to several megabytes these days. Sending all that JavaScript over the wire for every single page is a huge waste of bandwidth. How to get it with Next With Next, only the declared imports are served with each page. So, let’s say you have 10 dependencies in your package.json, but ./pages/index.js only uses one of them. pages/login.js import times from 'lodash.times' export default () => ( return <div>times(5, <h2> Hello, there! </h2>)</div>; ) Now, when the user opens the login page, it’s not going to load all the JavaScript, but only the modules required for this page. So a certain page may have fat imports, like this: import React from 'react' import d3 from 'd3' import jQuery from 'jquery' But this won’t affect the performance of the rest of the pages. Faster load times FTW. Scoped CSS Why it Matters? CSS rules, by default, are global. Say you have a CSS rule like this: .title { font-size: 40px; } Now, you might have two components, Profile, both of which may have a div with class title. The CSS rule you defined is going to apply to both of them. So, you define two rules now, one for selector .post .title, the other for .profile .title. It’s manageable for small apps, but you can only think of so many class names. Scoped CSS lets you define CSS with components, and those rules apply to only those components, making sure that you’re not afraid of unintended effects every time you touch your CSS. With Next Next comes with styled-jsx, which provides support for isolated scoped CSS. So, you just have a <style> component inside your React Component render function: export default () => ( <div> Hello world <p>These colors are scoped!</p> <style jsx>{\ p { color: blue; } div { background: red; } `}</style> </div> ) You also get the colocation benefits on having the styling (CSS), behavior (JS), and the template (JSX) all in one place. No more searching for the relevant class name to see what styles are being applied to it. Dynamic Imports Why it matters? Dynamic imports let you dynamically load parts of a JavaScript application at runtime. There are several motivations for this, as listed in the proposal: This could be because of factors only known at runtime (such as the user’s language), for performance reasons (not loading code until it is likely to be used), or for robustness reasons (surviving failure to load a non-critical module). With Next Next supports the dynamic import proposal and lets you split code into manageable chunks. So, you can write code like this that dynamically loads a React component after initial load: import dynamic from 'next/dynamic' const DynamicComponentWithCustomLoading = dynamic( import('../components/hello2'), { loading: () => <p>The component is loading...</p> } ) export default () => <div> <Header /> <DynamicComponentWithCustomLoading /> <p>Main content.</p> </div> Routing Why it matters? A problem with changing pages via JavaScript is that the routes don’t change with that. During their initial days, SPAs were criticized for breaking the web. These days, most frameworks have some robust routing mechanism. React has the widely used react-router package. With Next, however, you don’t need to install a separate package. With Next Client-side routing can be enabled via a next/link component. Consider these two pages: // pages/index.js import Link from 'next/link' export default () => <div> Click{' '} <Link href="/contact"> <a>here</a> </Link>{' '} to find contact information. </div> // pages/contact.js export default () => <p>The Contact Page.</p> Not only that, you can add prefetch prop to Link component, to prefetch pages even before the links are clicked. This enables super-fast transition between routes. Server rendering Most of the JavaScript-based SPAs just don’t work without JavaScript disabled. However, it doesn’t have to be that way. Next renders pages on the server, and they can be loaded just like good old rendered web pages when JavaScript is disabled. Every component inside the pages directory gets server-rendered automatically and their scripts inlined. This has the added performance advantage of very fast first loads, since you can just send a rendered page without making additional HTTP requests for the JavaScript files. Next Steps That should be enough to get you interested in Next, and if you’re working on a web app, or even an Electron-based application, Next provides some valuable abstractions and defaults to build upon. To learn more about Next, Learning Next.js is an excellent place to start, and may be all you’ll need. Get practical advice to start your career in programming! Master complex transitions, transformations and animations in CSS!
https://www.sitepoint.com/writing-server-rendered-react-apps-next-js/
CC-MAIN-2021-04
en
refinedweb
Apache Spark 2 on CML Apache Spark is a general purpose framework for distributed computing that offers high performance for both batch and stream processing. It exposes APIs for Java, Python, R, and Scala, as well as an interactive shell for you to run jobs. In Cloudera Machine Learning (CML), Spark and its dependencies are bundled directly into the CML engine Docker image. CML supports fully-containerized execution of Spark workloads via Spark's support for the Kubernetes cluster backend. Users can interact with Spark both interactively and in batch mode. Dependency Management: In both batch and interactive modes, dependency management, including for Spark executors, is transparently managed by CML and Kubernetes. No extra required configuration is required. In interactive mode, CML leverages your cloud provider for scalable project storage, and in batch mode, CML manages dependencies though container images. Autosc or spot instances. Workload Isolation: In CML, each project is owned by a user or team. Users can launch multiple sessions in a project. Workloads are launched within a separate Kubernetes namespace for each user, thus ensuring isolation between users at the K8s level.
https://docs.cloudera.com/machine-learning/1.0/product/topics/ml-apache-spark-overview.html
CC-MAIN-2021-04
en
refinedweb
I am a beginner to dapper . I was going through the code and building samples . But I am having problems in retrieving data . My code is as follows Console.WriteLine("Reading Values"); string readSatement = "select * from employee where Id=@Id "; IEnumerable<Employee> objEmp1 = con.Query<Employee>(readSatement, new { Id = empId }); var objEmp2 = con.Query(readSatement, new { Id = empId }); In this code objEmp2 retrieves values from db for the id passed . But objEmp1 gives null values for the attributes of the object . Employee class is as below public class Employee { public int EmpId { get; set; } public string EmpName { get; set; } public int EmpAge { get; set; } } Whats wrong with the code . You need to ensure all your database columns either match the properties in your class you are using for the query or you return the columns with names that match. For example in your query above, I believe you might want to write it like: select Id as EmpId, otherColumn as Propertyname, etc.. from employee where Id = @Id
https://dapper-tutorial.net/knowledge-base/13080523/dapper---mapper-query-not-getting-values-where-as-dynamic-object-mapper-query-does
CC-MAIN-2021-04
en
refinedweb
PhantomJS is a headless Selenium WebDriver with JavaScript support. It is based on WebKit, making it behave similarly to Google Chrome or Safari. It is slightly faster than a regular WebDriver like ChromeDriver or FirefoxDriver in both startup time and performance. PhantomJS has many options and services that alter the behavior of the test, such as hiding the command prompt or not loading images. The easiest way of installing PhantomJS is by using a NuGet Package Manager. In your project, right click "References", and click on "Manage NuGet Packages" as shown: Then, type "PhantomJS" to the search bar, select it and install it as shown below. Here's a list of other recommended packages: Now, add these references at the beginning: using OpenQA.Selenium; using OpenQA.Selenium.PhantomJS; Now you can test it with a simple program like this [C#]: using (var driver = new PhantomJSDriver()) { driver.Navigate().GoToUrl(""); var questions = driver.FindElements(By.ClassName("question-hyperlink")); foreach (var question in questions) { // This will display every question header on StackOverflow homepage. Console.WriteLine(question.Text); } } var page = require('webpage').create(); page.open('', function(status) { console.log("Status: " + status); var title = page.evaluate(function() { return document.title; }); console.log("Loaded page: " + title); phantom.exit(); });
https://sodocumentation.net/phantomjs
CC-MAIN-2021-04
en
refinedweb
I am brand new to Android development. Hopefully this is an easy one. Relevant code (I hope) below. import android.location.LocationManager; import android.content.Context; LocationManager locationManager = (LocationManager) getSystemService(Context.LOCATION_SERVICE); I get the following: Error: error: cannot find symbol method getSystemService(String) All the code samples I am seeing say to do the above. I do notice that if I do this instead... Context cont; locationManager = (LocationManager) cont.getSystemService(Context.LOCATION_SERVICE); then that particular error goes away, but of course, you have an uninstatiated variable cont. As mentioned, I am new to Android, so I don't know what a Context object is and how to get one. I've been doing a little research on that. However, all the code samples I've seen seem to be of the variety where the method getSystemService is called without any Context object, so presumably it's compiling fine for everyone but me. Any ideas? On a different note, the preview is coming up rather odd in the code tags. "getSystemService" is in red and crossed out, "corrected" with "get SystemService" with a space between "get" and "System". Is this a function of the code tags? This is Java code and the function is "getSystemService" with no space. Is there some auto-correct HTML parsing going on?
https://www.daniweb.com/programming/mobile-development/threads/503664/android-cannot-resolve-method-getsystemservice-string
CC-MAIN-2021-04
en
refinedweb
Great achievements are fueled by passion This blog is about those who have purchased GPU+CPU and want to configure Nvidia Graphic card on Ubuntu 18.04 LTS and play with tensorflow-gpu. This blog will cover installing Nvidia drivers on ubuntu machine which will help you to install CUDA Toolkit 9.0, CUDNN 7.0. In the end of this tutorial, we will cover installing the virtual environment on Ubuntu and installing tensorflow-gpu on it using pip command. Everything will be divided into the steps. Every step will be explained with proper care and screenshots wherever it is possible. Please bear patience at each step. Nvidia does have any official downloads for ubuntu 18.04 but files available for 17.04 works for it. Please look below what is the content of this blog which we will be covering. - Disabling Open Source Ubuntu NVIDIA drivers - Install Nvidia Drivers - Install CUDA Toolkit 9.0 - Install CUDNN 7.0 - Install libcupti - Setting the path - Installing the virtual environment - Installing and verifying tensorflow-gpu What you will achieve after this blog - Tensorflow-gpu will be configured with ubuntu 18.04 - Nvidia graphic card will be installed for ubuntu - You will learn how to create virtual environment - Some superficial knowledge about CUDA will be delivered to you. 1. Disabling Open Source Ubuntu Nvidia drivers Nouveau is the name of the program which has developed high quality, free software drivers for Nvidia graphic card by reverse engineering. It is written in C language under MIT Licence. It provides accelerated open source drivers for nvidia card. It is managed by X. org foundation. Out first step would be to disable free open source nvidia drivers using the below steps - Step 1: Create the below file in your ubuntu machine nano /etc/modprobe.d/blacklist-nouveau.conf - Step 2: Please edit the file with the following content blacklist nouveau options nouveau modeset=0 - Step 3: Regenerate the kernel initramfs using the below command sudo update-initramfs -u - Step 4: If all the three steps has been done by you successfully and you donot face any errors, then simply reboot the system. Please enter the below command to reboot the system. sudo reboot - Step 5: When you reboot the system, we need to verify that nouveau drivers are loaded or not. Please enter the below command to verify it. lsmod | grep nouveau If nouveau drivers are still loaded, please don’t go to the next step and troubleshoot why it is still loading. 2. Install Nvidia Drivers Once the first step is complete and before installing the CUDA Toolkit, you need to install Nvidia Drivers. Please use the below command to detect the model of the graphic card and the recommended drivers. To execute the above command, please execute the below command. ubuntu-drivers devices You will get the list of model of your graphic card and list of non-free drivers and which one is recommended. It will help us to install recommended drivers. For example if nvidia-410 is detected as recommended one, please install it using the below command sudo apt install nvidia-410 3. Install CUDA Toolkit 9.0 Once you have completed the above step, please reboot the system and execute the below command. nvidia-smi If you will get the output representing your gpu model and version of driver which you have installed in the step 2, it means everything is going smooth else please look at the previous step and do troubleshooting. This step is about installing CUDA Toolkit. Please look at the below image to understand the relation between cpu and gpu First of all CPU arrays are initialized. GPU memory is allocated for these arrays. Arrays are transferred from CPU to GPU which uses cores to process it. Here comes the use case of CUDA. Please note that CUDA is meant for only Nvidia. CUDA stands for Compute Unified Device Architecture. It is the programming language developed by Nvidia. Operation of the GPU are performed by CUDA cores. Performance of the CUDA cores depends on the Nvidia architecture of the GPU. Cuda cores performance will be different for RTX 2080 Ti, GTX 1080 Ti, Pascal, Maxwell. Once the manipulation is done by the GPU, results are transferred from GPU memory to CPU memory. At that time GPU memory is de-allocated. To download the CUDA Toolkit 9.0, Please follow the below steps - Please visit NVIDIA LINK for CUDA TOOLKIT. Installing CUDA 9.0 Step 1 - Now you have to select the Operating System. Next step would be to select the architecture, then distribution, version and finally select installer type as runfile (local). To get visual understanding of this point, have a look at the below screenshot. Installing CUDA Toolkit 9.0 Step 2 - Downloading file would be about 1.6 GB which will take time according to your internet connection so have patience till the file is being downloaded. If you want to utilize the time till the download is finished, you can watch one of my video of which is my parent company. I am sharing the link for it. Please have the link - When you got the files, please navigate to the folder where the files are downloaded. Hopefully it would be downloads or any path that you have set. Open the terminal using ctrl + alt + t and type the below command sudo chmod +x cuda_9.0.xxxxxxxxx.run ./cuda_9.0.xxxxxxxxx --override xxxxxxxxx: means anything which is your version. While installing, you have to accept terms and conditions. During such, please type yes when asked for installing with unsupported configuration and no to install Nvidia Accelerated Graphic drivers for linux-x89-64xxxx. 4. Install CUDNN 7.0 Once the previous step is installed, please install CUDNN 7.0. Let us proceed with this step. - Please visit the link to install CUDNN. You will get the following screen when you click on this link. Installing CUDNN - You will be taken to the next page where membership is required. Please create account here. If you don’t have you can create a new one else login with the account you have. If you face any problem, please see the screenshot as some person understand better with the help of visual and images. Membership Page - When you login with the credentials you will be subjected to the next page where you have to accept the terms and conditions. Once you tick it, you can download CUDNN. Have a look at the below screenshot if you face any problem. Installing file for CUDNN - Once the file is downloaded, file is extracted. Downloading time will only depend on internet speed. If it is taking some time to download it, you don’t worry. I have some videos for you which will consume your time in an effective way. Please have a look at videos made by our parent company. - Next step is to extract the file. Please point to the folder where the file is downloaded and type the following command. tar -zxvf cudnn-9.0-linux-x64-xxx xxx: Represents the version which you have downloaded. - Move the content of the extracted folder to the following location. Please note if you have set any customize location for installing cuda 9.0 while installing it. sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda-9.0/lib64/ sudo cp cuda/include/cudnn.h /usr/local/cuda-9.0/include/ - Give read access to all the user using the below command sudo chmod a+r /usr/local/cuda-9.0/include/cudnn.h /usr/local/cuda/lib64/libcudnn* 5. Install libcupti This is the next step. If all the above steps are successful, please proceed with this step using the below command. sudo apt-get install libcupti-dev 6. Setting the path I hope that all the previous steps has been configured by you. Now it is important to set the path. Please open the file ~/.bashrc with your favorite editor. My favorite one is nano. You may also use vim or sublime or gedit. Please type the below at the end of the file. export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}} export LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} 7. Installing the virtual environment Once you have done with all the above steps, it is important to check whether we have successful configured we need to install tensorflow gpu. We can create the virtual environment and can install tensorflow using pip statement. Please follow the below commands to install the virtual environment in ubuntu for python sudo apt install -y python3-venv Please type the below command to install virtual environment python3.3 -m venv ai_sangam Installing and verifying tensorflow-gpu This is the last step. We are near to aim which is installing tensorflow GPU to use Nvidia Graphic card. Once the virtual environment is installed, please activate it using the below command. source ai_sangam/bin/activate Now please execute the below command to install tensrorflow-gpu. pip3 install --upgrade tensorflow-gpu You can check whether gpu tensorflow is properly configured by typing python3. Once you type python3 in the terminal (virtual environment is activated) python3 will be launched. Please type the below command import tensorflow If no error came, it means that tensorflow gpu is properly configured. As the part of this tutorial we have learned how to configure tensorflow GPU using Nvidia graphic card on Ubuntu machine. We have promised that after reading this blog you will get some understanding about the following. - Tensorflow-gpu will be configured with ubuntu 18.04 - Nvidia graphic card will be installed for ubuntu - You will learn how to create virtual environment - Some superficial knowledge about CUDA will be delivered to you. I hope all the points are being covered in this part. In a nutshell, we can also conclude that configuring GPU is not as difficult as it seems. We need to follow proper steps. You can also watch some of our trending videos from the below link Real time face recognition on custom images using Tensorflow Deep learning Object detection Custom Training of Image Mask CNN Deep Learning | AI Sangam Auto music tagging prediction using Deep learning Tensorflow Image classification using Inception-v3 deep learning 17 Comments Hello Self Awareness hub, Hope you are find and enjoying good health. There is question from my side. My question follows as how to get the knowledge which of the cuda and cudnn fits for which version of tensorflow-gpu. I hope that you got my question. If you need any more clarification to my question, please let me know. With regards foundationideas First of all thanks to foundationideas for reaching to us. It is our pleasure to respond you back here With respect to your question stated “how to get the knowledge which of the cuda and cudnn fits for which version of tensorflow-gpu.” answer follows as below First of all please check cuda and cuDNN version respectively using the below command Please see the below tensorflow-gpu version====> cuDNN version ======>CUDA Name of tensorflow-gpu version cuDNN Version CUDA Version tensorflow_gpu-1.12.0 =======> 7=======> 9 tensorflow_gpu-1.4.0=========> 6=======> 8 Sir it is great to hear such a response from you. This has made my day but in the code that you have attached here, i did not understand what is the meaning of cat command. Can you elaborate this command also. With regards foundationideas Okay, Please see the description as below Cat functions in Linux 1.) Display contents of file. 2.) View Contents of Multiple Files in terminal 3.) Create a new file using cat >test2 Hello Self awareness hub!! How are you. I hope you are fine. All of your articles are very impressive and I loved to be here. Some of the features that I loved for this article are as below:- 1.) Content is impressive and invoking. 2.) Screenshots helped me a lot. I too have configured GPU ubuntu but while i was running the model, I was struct in the following error. terminate called after throwing an instance of ‘std::bad_alloc’ what(): std::bad_alloc Aborted (core dumped) Can you help me out With regards Vishal Kumar First of all Kumar self awareness hub welcomes you for reaching us. It is pleasure to hear that you have configured your GPU. Great work kumar. As I can see the error, it reveals that your memory of GPU is exhausted. So if you are creating the model using tensorflow, you must clear the graphs between the tensorflow training using the below command If you are using keras, please execute the below command May be these helps you in clearing the memory With regards Thanks a lot With Regards Vishal Kumar Hello Self Awareness hub!! I congratulate you for writing such an flamboyant article. I loved to read the article again and again. One of the reason I loved about this article is steps are elaborated as screenshots are added so as to make the things simple for the readers. This article is earmarked by me. I have successfully configured my GPU with Ubuntu. My aim is to see the GPU usage both of processor as well as memory. Please suggest me the way to do so. With Regards Alex First of all, I would thank you for reaching to us. To measure the GPU Usage, you can use glances which is a monitoring program. Please run the below command to install it After the successful installation please run the below command sudo glances Moreover, please visit the following link to find different ways to monitor the gpu processor and memory usage. With regards Hello Guys, It is great to comment here. I have gone through all other comments and found them very useful to me especially which version of cuda is linked with which version of tensorflow. I got the problem as below while I was running the code with tensorflow-gpu. Loaded runtime CuDNN library: 7.0.5 but source was compiled with: 7.2.1. Please help me out. With regards Rahul Hello Rahul. Hope you are okay.From my point, first of all check the version of tensorflow-gpu you are using using the below command Please upgrade the version to 1.9.0 using the below command and uninstalling the previous installation If you need more help in learning about the basics of tensorflow please do visit the following links With regards I would like to ask you that while running the model using tensorflow-gpu , I got the following error cuda memory error or exhausted memory: My gpu supports 11 GB VRAM. Could you explain how to resolve this error Hello sonam. Hope you are fine and in good health. Donot worry about the problem. First of all see which of the processes are using your GPU memory by executing the following command in terminal Now note the PID number of such command and type sudo kill -9 PID Number. Please run the code again. Problem is resolved. Hello sir, comments are really useful to me and helped me a lot. There is a question from my side and I hope you will help me to resolve it. My question follows how one can know which of the nvidia drivers are best for my system. Please do comment on this. Thanks for such question. Actually it is important to note which of the nvidia drivers are recommended for the system so that best drivers could be installed. Please follow the below code to know it. Paste this command in the terminal Hello, I read the blog and found the blog very useful especially for those who want to configure GPU for the first time. I have also configured it and installed pytorch. I want to test whether it is using cuda and gpu. I would be grateful to you. Thanks for asking the question. Please run the below command to verify it.
http://selfawarenesshub.org/index.php/2019/02/03/install-tensorflow-gpu-use-nvidia-graphic-card-ubuntu-18-04-lts/
CC-MAIN-2021-04
en
refinedweb
Deploying Python functions This topic describes how to deploy Python functions. You can deploy Python functions in Watson Machine Learning the same way that you can deploy models. Your tools and apps can use the Watson Machine Learning Python client or REST API to send data to your deployed functions the same way that they send data to deployed models. Deploying functions gives you the ability to hide details (such as credentials), preprocess data before passing it to models, perform error handling, and include calls to multiple models, all within the deployed function instead of in your application. Deploying a function from a deployment space This topic describes how to deploy a function using the Python client, but you can also deploy a function from a deployment space via the user interface. For details on creating and deploying from a deployment space, see Deployment spaces. Watson Machine Learning Python client library reference Watson Machine Learning Python client library There are six basic steps for creating and deploying functions in Watson Machine Learning: - Define the function - Authenticate and define a space - Store the function in the repository - Get the software specification - Deploy the stored function - Send data to the function for processing Step 1: Define the function To define a function, create a Python closure with a nested function named “score”. Example Python code #wml_python_function def my_deployable_function(): def score( payload ): message_from_input_payload = payload.get("input_data")[0].get("values")[0][0] response_message = "Recieved message - {0}".format(message_from_input_payload) # Score using the pre-defined model score_response = { 'predictions': [{'fields': ['Response_message_field'], 'values': [[response_message]] }] } return score_response return score You could test your function like this: input_data = { "input_data": [{ "fields": [ "message" ], "values": [[ "Hello world!" ]] } ] } function_result = my_deployable_function()( input_data ) print( function_result ) It will return the message “Hello world!”. Python closures To learn more about closures, see: Requirements for the nested, “score” function The following are requirements and usage notes for the nested function for online deployments: - score () must accept a single, JSON input parameter - The scoring input payload will be passed as value for input parameter for score(). Therefore, the value of score() input parameter must be handled accordingly inside the score(). - The scoring input payload must match the input requirements for the concerned Python Function. - Additionally, the scoring input payload must include an array with the name valuesas shown in this example schema. Note that the input_dataparameter is mandatory in the payload. { "input_data": [{ "values": [[ "Hello world!" ]] } ] } - The output payload expected as output of score()must include the schema of the “score_response” variable described in the “Step 1: Define the function” section for status code 200. Note that the predictionparameter, which has an array of JSON objects as its value, is mandatory in the score() output. - The scorefunction must return a JSON-serializable object (for example: dictionaries or lists). - When a Python function is saved using the Python client where a reference to the outer function is specified, only the code in the scope of the outer function (including its nested functions) are saved. Therefore, the code outside the outer function’s scope will be not be saved and thus will not be available when you deploy the function. Step 2: Authenticate with the Python client Add a notebook to your project by clicking Add to project and selecting Notebook. Authenticate with the Python client, following the instructions in Authentication. Initialize the client with the credentials: from ibm_watson_machine_learning import APIClient wml_client = APIClient(wml_credentials) (Optional) Create a new deployment space. To use an existing deployment space, skip this step and enter the name of the space in the next step, entering the credentials for your Cloud Object Storage. metadata = { client.spaces.ConfigurationMetaNames.NAME: 'YOUR DEPLOYMENT SPACE NAME, client.spaces.ConfigurationMetaNames.DESCRIPTION: description', client.spaces.ConfigurationMetaNames.STORAGE: { "type": "bmcos_object_storage", "resource_crn": 'PROVIDE COS RESOURCE CRN ' }, client.spaces.ConfigurationMetaNames.COMPUTE: { "name": 'INSTANCE NAME, "crn": 'PROVIDE THE INSTANCE CRN' } } space_details = client.spaces.store(meta_props=metadata) - Get the ID for the deployment space: def guid_from_space_name(client, space_name): instance_details = client.service_instance.get_details() space = client.spaces.get_details() return(next(item for item in space['resources'] if item['entity']["name"] == space_name)['metadata']['guid']) - Enter the details for the deployment space, putting the name of your deployment space in place of ‘YOUR DEPLOYMENT SPACE’. space_uid = guid_from_space_name(client, 'YOUR DEPLOYMENT SPACE') print("Space UID = " + space_uid) Out: Space UID = b8eb6ec0-dcc7-425c-8280-30a1d7a9c58a Set the default deployment space to work. client.set.default_space(space_uid) Step 4: Get the software specification Your function requires a software specification to run. To view the list of predefined specifications: client.software_specifications.list() Find the id of the software specification environment that the function will be using : software_spec_id = client.software_specifications.get_id_by_name('ai-function_0.1-py3.6') print(software_spec_id) Step 4: Save the function - Create the function metadata. function_meta_props = { client.repository.FunctionMetaNames.NAME: 'sample_function_with_sw', client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: software_spec_id } - Extract the function UID from the details. function_artifact = client.repository.store_function(meta_props=function_meta_props, function=my_deployable_function) function_uid = client.repository.get_function_id(function_artifact) print("Function UID = " + function_id) Function UID = 0f263463-21ec-4d2f-a277-2a7525f64b4e - Get the saved function metadata from Watson Machine Learning using the function UID. function_details = client.repository.get_details(function_uid) from pprint import pprint pprint(function_details) - To confirm the function was saved, list all of the stored functions using the list_functionsmethod. client.repository.list_functions() Step 5: Deploy the stored function To select the hardware runtime environment to deploy the function, first view available hardware configurations: client.hardware_specifications.list() Select a hardware configuration: hardware_spec_id = client.hardware_specifications.get_id_by_name('NAME OF THE HARDWARE SPECIFICATION') for example : #hard_ware_spec_id = client.hardware_specifications.get_id_by_name('M')} - Deploy the Python function to the deployment space by creating deployment metadata and using the function UID obtained in the previous section. deploy_meta = { client.deployments.ConfigurationMetaNames.NAME: "Web scraping python function deployment", client.deployments.ConfigurationMetaNames.ONLINE: {}, client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { "id": hardware_spec_id} } - Create the deployment. deployment_details = client.deployments.create(function_uid, meta_props=deploy_meta) - View the deployment details. deployment_details - To confirm that the deployment was created successfully, list all deployments. client.deployments.list() Step 6: Send data to the function for processing Follow these steps to score the function and return a prediction. - List the function you plan to score. client.repository.list_functions() - Prepare the scoring payload, matching the schema of the function. job_payload = { client.deployments.ScoringMetaNames.INPUT_DATA: [{ # "input_data": [{ 'fields': ['url'], 'values': [ '' ] }] } pprint(job_payload) {‘input_data’: [{‘fields’: [‘url’], ‘values’: [‘’]}]} - Generate the prediction and display the results: job_details = client.deployments.score(deployment_uid, job_payload) pprint(job_details['predictions'][0]['values'][0][:10]) [‘02’, ‘2018’, ‘2019’, ‘459’, ‘49’, ‘575’, ‘about’, ‘accelerate’, ‘accelerates’, ‘accelerator’] Increasing scalability for a function When you deploy a function from a deployment space or programmatically, a single copy of the function is deployed by default. To increase scalability, you can increase the number of replicas by editing the configuration of the deployment. Additional replicas allow for a larger volume of scoring requests. The following example uses the Python client API to set the number of replicas to 3. change_meta = { client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { "name":"S", "num_nodes":3} } client.deployments.update(<deployment_id>, change_meta)
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-functions_local.html
CC-MAIN-2021-04
en
refinedweb
Using the Caché Object Binding for .NET One of the most important features of Caché is the ability to access database items as objects rather than rows in relational tables. In Caché .NET Binding applications, this feature is implemented using Caché proxy objects. Proxy objects are instances of .NET classes generated from classes defined in the Caché Class Dictionary. Each proxy object communicates with a corresponding object on the Caché server, and can be manipulated just as if it were the original object. The generated proxy classes are written in fully compliant .NET managed code, and can be used anywhere in your project. This section gives some concrete examples of code using Caché proxy classes. Introduction to Proxy Objects — a simple demonstration of how proxy objects are used. Generating Caché Proxy Classes — using various tools to generate proxy classes. Using Caché Proxy Objects — using proxy objects to create, open, alter, save, and delete objects on the Caché server. Using Caché Queries — using a pre-existing Caché query to generate and manipulate a result set. Using Collections and Lists — manipulating Caché lists and arrays. Using Relationships — using Caché relationship objects to access and manipulate data sets. Using I/O Redirection — redirecting Caché Read and Write statements. Although the examples in this chapter use only proxy objects to access Caché data, it is also possible to access database instances via ADO.NET classes and SQL statements (as described in “Using Caché ADO.NET Managed Provider Classes”). SampleCode.cs, located in <Cache-install-dir>\dev\dotnet\samples\bookdemos (see “Caché Installation Directory” in the Caché Installation Guide for the location of <Cache-install-dir> on your system). Introduction to Proxy Objects A Caché .NET project using proxy objects can be quite simple. Here is a complete, working console program that opens and reads an item from the Sample.Person database: using System; using InterSystems.Data.CacheClient; using InterSystems.Data.CacheTypes; namespace TinySpace { class TinyProxy { [STAThread] static void Main(string[] args) { CacheConnection CacheConnect = new CacheConnection(); CacheConnect.ConnectionString = "Server = localhost; " + "Port = 1972; " + "Namespace = SAMPLES; " + "Password = SYS; " + "User ID = _SYSTEM;"; CacheConnect.Open(); Sample.Person person = Sample.Person.OpenId(CacheConnect, "1"); Console.WriteLine("TinyProxy output: \r\n " + person.Id() + ": " + person.Name ); person.Close(); CacheConnect.Close(); } // end Main() } // end class TinyProxy } This project is almost identical to the one presented in “UsingCaché ADO.NET Managed Provider Classes” (which does not use proxy objects). Both projects contain the following important features: The same Using statements may be added: using InterSystems.Data.CacheClient; using InterSystems.Data.CacheTypes;Copy code to clipboard The same code is used to create and open a connection to the Caché SAMPLES namespace: CacheConnection CacheConnect = new CacheConnection(); CacheConnect.ConnectionString = "Server = localhost; " + "Port = 1972; " + "Namespace = SAMPLES; " + "Password = SYS; " + "User ID = _SYSTEM;"; CacheConnect.Open();Copy code to clipboard Both projects have code to open and read the instance of Sample.Person that has an ID equal to 1. It differs from the ADO.NET project in two significant ways: The project includes a file (WizardCode.cs) containing code for the generated proxy classes. See “Generating Caché Proxy Classes” for a detailed description of how to generate this file and include it in your project. The instance of Sample.Person is accessed through a proxy object rather than CacheCommand and CacheDataReader objects. No SQL statement is needed. Instead, the connection and the desired instance are defined by a call to the OpenId() class method: Sample.Person person = Sample.Person.OpenId(CacheConnect, "1");Copy code to clipboard Each data item in the instance is treated as a method or property that can be directly accessed with dot notation, rather than a data column to be accessed with CacheReader: Console.WriteLine("TinyProxy output: \r\n " + person.Id() + ": " + person.Name );Copy code to clipboard In many cases, code with proxy objects can be far simpler to write and maintain than the equivalent code using ADO.NET Managed Provider classes. Your project can use both methods of access interchangeably, depending on which approach makes the most sense in any given situation. Generating Caché Proxy Classes This section covers the following topics: Using the Caché Object Binding Wizard — a GUI program that leads you through the process of generating proxy classes. Running the Proxy Generator from the Command Line — a DOS program that allows you to generate proxy classes from a batch file or an ANT script. Generating Proxy Files Programmatically — calling the Proxy Generator methods directly to create proxy classes from within a .NET program. Adding Proxy Code to a Project — what to do with new proxy files once you've got them. Methods Inherited from Caché System Classes — a set of standard methods that the Proxy Generator adds to all proxy files. Using the Caché Object Binding Wizard The Caché Object Binding Wizard can be run either as a stand-alone program (CacheNetWizard.exe, located in <Cache-install-dir>\dev\dotnet\bin\v2.0.50727 by default) or as a tool integrated into Visual Studio (See “Adding the Caché Object Binding Wizard to Visual Studio”). When you start the Wizard, the following window is displayed: Enter the following information: Select the Caché server you wish to connect to: Select the server containing the Caché classes for which you want to generate .NET classes. To select the server: Click Connect and select your server Enter your username and password at the prompt. The Cache Connection Manager is displayed: Select the namespace containing your class (this will be SAMPLES for the bookdemos project) Click OK. Select language: For the bookdemos project, you would select Language: C#. Select where the Wizard output will go: Generally, this will be the same folder that contains the .csproj file for your project. In this example, the file will be named WizardCode.cs, and will be placed in the main bookdemos project directory. Select the classes you wish to use: For this exercise, you should select the Sample.Person and Sample.Company classes from the SAMPLES namespace. The Sample.Address and Sample.Employee classes will be included automatically because they are used by Sample.Person and Sample.Company. If you check Show System Classes, classes from %SYS (the standard Caché Class Library) will be displayed along with those from SAMPLES. Generator options: For this exercise, check Methods with default arguments and leave the other fields empty. The options are: Use .Net Compact Framework — generate proxy code for mobile applications. Methods with default arguments — generates some optional overloads for certain system methods. Application Namespace — optional namespace that will be added to the names of all generated proxy classes. For example, if you entered MyNamespace, the generated code would contain references to MyNamespace.Sample.Person rather than just Sample.Person.Note: The server will not know about this namespace. To ensure that proxy objects referenced through relations will be generated properly, you should either use the name of your application's main assembly, or set CacheConnection.AppNamespace to the value you enter here (see “Instantiating a Proxy Object by Name” in “Using Caché Proxy Objects” for more information). Press 'Generate' to create classes: The generated file can now be added to your project (see “Adding Proxy Code to a Project”). Running the Proxy Generator from the Command Line The command-line proxy generator program (dotnet_generator.exe, located in <Cache-install-dir>\dev\dotnet\bin\v2.0.50727 by default) is useful when the same set of proxy files must be regenerated frequently. This is important when the Caché classes are still under development, since the proxy classes must be regenerated whenever the interface of a Caché class changes. The command-line generator always requires information about the connection string, output path and type of output file (cs or vb), and a list of the classes to be generated. The following arguments are used: -conn <connection string> — standard connection string (see “Creating a Connection”). If generating a single output file for all classes, use -path: -path <full filename> — path and name of the output file for the generated code. Type of output file to be generated is determined by extension of the filename (for example, C:\somepath\WizardCode.vb will generate a Visual Basic code file). If generating one output file for each class, use -dir and -src-kind: -dir <path> — directory where the generated proxy files will be placed. -src-kind <cs|vb> — type of proxy file to generate. For each class, a file named <namespace_classname>.<src-kind> will be generated in the directory specified by -dir. Options are cs or vb. -class-list <full filename> — path and name of a text file containing a list of the classes to be used. Each class name must be on a separate line. The following optional arguments are also available: -gen-default-args <true | false> — switch that controls generation of optional overloads to certain generated system methods. Options are true or false. -app-nsp<namespace> — optional namespace that will be added to the names of all generated proxy classes. For example, if you entered MyNamespace, the generated code would contain references to MyNamespace.Sample.Person rather than just Sample.Person.. -use-cf <true | false> — switch that controls whether code is generated for mobile devices or standard PCs. Options are true or false. The DOS batch file in this example calls dotnet_generator twice, generating the following output: The first call generates a single file containing several proxy classes. This command generates exactly the same WizardCode.cs file as the Object Binding Wizard (see the example in “Using the Caché Object Binding Wizard”). The second call generates one proxy file for each class, and generates Visual Basic code rather than C#. The filenames will be of the form <namespace_classname>.vb. Both calls use the same connection string, output directory, and class list file. set netgen=C:\Intersystems\Cache\dev\dotnet\bin\v2.0.50727\dotnet_generator.exe set clist=C:\Intersystems\Cache\dev\dotnet\samples\bookdemos\Classlist.txt set out=C:\Intersystems\Cache\dev\dotnet\samples\bookdemos set conn="Server=localhost;Port=1972;Namespace=SAMPLES;Password=SYS;User ID=_SYSTEM;" rem CALL #1: Generate a single WizardCode.cs proxy file %netgen% -conn %conn% -class-list %clist% -path %out%\WizardCode.cs -gen-default-args true rem CALL #2: Generate one <namespace_classname>.vb proxy file for each class %netgen% -conn %conn% -class-list %clist% -dir %out% -src-kind vb -gen-default-args true The contents of the class list file, Classlist.txt, are: Sample.Company Sample.Person Although only two classes are listed, proxy classes for Sample.Address and Sample.Employee are generated automatically because they are used by Sample.Person and Sample.Company. Generating Proxy Files Programmatically The CacheConnection class includes the following methods that can be used to generate proxy files from within a .NET program: Generates a new CS or VB proxy file that may contain definitions for several classes. CacheConnection.GenSourceFile(filepath, generator, classlist, options, errors); Parameters: filepath — A string containing the path and filename of the. Generates a separate CS or VB proxy file named <classname>.<filetype> for each class in classlist. CacheConnection.GenMultipleSourceFiles(dirpath, filetype, generator, classlist, options, errors); Parameters: dirpath — A string containing the directory path for the files to be generated. filetype — A string containing either ".vb" or ".cs", depending on the. For a working example that uses both methods, see the Proxy_8_MakeProxyFiles() method in the bookdemos sample program (see “The Caché .NET Sample Programs”). Using the Proxy Generator Methods The following code fragments provide examples for defining the method parameters, and for calling each of the proxy generator methods. The generator can be either a CSharpCodeProvider or a VBCodeProvider. System.CodeDom.Compiler.CodeDomProvider CS_generator = new CSharpCodeProvider(); System.CodeDom.Compiler.CodeDomProvider VB_generator = new VBCodeProvider(); Each of the methods accepts an iterator pointing to the list of classes to be generated. Although only two classes are listed in the following example, proxy classes for Sample.Address and Sample.Employee are generated automatically because they are used by Sample.Person and Sample.Company. ArrayList classes = new ArrayList(); classes.Add("Sample.Company"); classes.Add("Sample.Person"); System.Collections.IEnumerator classlist classlist = classes.GetEnumerator(); In this example, no special namespace will be generated for the proxy code, a complete set of inherited methods will be generated for each class, and no extra code will be generated for use by mobile applications. InterSystems.Data.CacheClient.ObjBind.GeneratorOptions options options = new GeneratorOptions(); options.AppNamespace = ""; options.GenDefaultArgMethods = true; options.UseCF = false; The errors parameter will store the error messages (if any) returned from the proxy generator method call. All three methods use this parameter. System.Collections.IList errors errors = new System.Collections.ArrayList(); This example generates a C# proxy file named WizardCode.cs in directory C:\MyApp\. The file will contain code for Sample.Person, Sample.Company, Sample.Address, and Sample.Employee. string filepath = @"C:\MyApp\WizardCode.cs"; System.CodeDom.Compiler.CodeDomProvider generator = new CSharpCodeProvider(); conn.GenSourceFile(filepath, generator, classlist, options, errors); This example generates a single VB proxy file for each class. string dirpath = @"C:\MyApp\"; string filetype = ".vb"; System.CodeDom.Compiler.CodeDomProvider generator = new VBCodeProvider(); conn.GenMultipleSourceFiles(dirpath, filetype, generator, classlist, options, errors); The following files will be generated in C:\MyApp\: Person.vb Company.vb Address.vb Employee.vb The proxy files for Sample.Address and Sample.Employee are generated automatically because they are used by Sample.Person and Sample.Company. Adding Proxy Code to a Project After generating .NET proxy files, add the code to your project as follows: From the Visual Studio main menu, select Project > Add Existing Item... Browse to the generated proxy file (or files, if you chose to generate one file for each class) and click Add. The file will be listed in the Visual Studio Solution Explorer. You can now use proxy objects as described in the following sections. A generated proxy class is not updated automatically when you change the corresponding Caché class. The generated classes will continue to work as long as there are no changes in the signatures of the properties, methods, and queries that were present when the proxy classes were generated. If any signatures have changed, the proxy class will throw CacheInvalidProxyException with a description of what was modified or deleted. Methods Inherited from Caché System Classes The proxy file generators also provide proxy methods for certain classes inherited from the standard Caché Class Library. For example, the Sample classes inherit methods from Caché %Library.Persistent and %Library.Populate. Proxies for these methods are automatically added when you generate the proxy files. This section provides a quick summary of the most commonly used methods. For more detailed information on a method, see the entries for these classes in the Caché Class Reference. For a generic guide to the use of Caché objects, see “Working with Registered Objects” in Using Caché Objects. The following %Library.Persistent proxies are generated: Id() — Returns the persistent object ID, if there is one, of this object. Returns a null string if there is no object ID. string ID = person.Id();Copy code to clipboard Save() — Stores an in-memory version of an object to disk. If the object was stored previously (and thus, already has an OID), Save() updates the on-disk version. Otherwise, Save() saves the object and generates a new OID for it. CacheStatus sc = person.Save();Copy code to clipboard Open() — Loads an object from the database into memory and returns an OREF referring to the object. OpenId() — Loads an object from the database into memory and returns an OREF referring to the object. OpenId() is identical in operation to the Open() method except that it uses an ID value instead of an OID value to retrieve an instance. Sample.Person person = Sample.Person.OpenId(CacheConnect, "1");Copy code to clipboard ExistsId() — Checks to see if the object identified by the specified ID exists in the extent. if (!(bool)Sample.Person.ExistsId(CacheConnect, ID)) { string Message = "No person with id " + ID + " in database."; };Copy code to clipboard DeleteId() — Deletes the stored version of the object with the specified ID from the database. CacheStatus sc = Sample.Person.DeleteId(CacheConnect, ID);Copy code to clipboard Extent() — This is a system provided query that yields a result set containing every instance within this extent. CacheCommand Command = Sample.Person.Extent(CacheConnect);Copy code to clipboard KillExtent() — Deletes all instances of a class and its subclasses. CacheStatus sc = Sample.Person.KillExtent(CacheConnect)Copy code to clipboard The following %Library.Populate proxies are generated: Populate() — Creates a specified number of instances of a class and stores them in the database. long newrecs = (long)Sample.Person.Populate(CacheConnect, 100);Copy code to clipboard OnPopulate() — For additional control over the generated data you can define an OnPopulate() method within your class. If an OnPopulate() method is defined then the Populate() method will call it for each object it generates. PopulateSerial() — Create a single instance of a serial object. For a working example that uses the KillExtent() and Populate() methods, see the Proxy_6_Repopulate() method in the bookdemos sample program (see “The Caché .NET Sample Programs”). Using Proxy Objects Caché proxy objects can be used to perform most of the standard operations on instances in a database. This section describes how to open and read an instance, how to create or delete instances, and how to alter and save existing instances. Opening and Reading Objects Use the OpenId() method to access an instance by ID (instances can also be accessed through SQL queries, as discussed later in “Using Caché Queries”). OpenId() is a static class method, qualified with the type name rather than an instance name: Sample.Person person = Sample.Person.OpenId(CacheConnect, "1"); Once the object has been instantiated, you can use standard dot notation to read and write the person information: string Name = person.Name string ID = person.Id(); person.Home.City = "Smallville"; person.Home.State = "MN"; In this example, person.Home is actually an embedded Sample.Address object. It is automatically created or destroyed along with the Sample.Person object. For a working example, see the Proxy_1_ReadObject() method in the bookdemos sample program (see “The Caché .NET Sample Programs”). Creating and Saving Objects Caché proxy object constructors use information in a CacheConnection object to create a link between the proxy object and a corresponding object on the Caché server: Sample.Person person = new Sample.Person(CacheConnect); person.Name = "Luthor, Lexus A."; person.SSN = "999-45-6789"; Use the Save() method to create a persistent instance in the database. Once the instance has been saved, the Id() method can be used to get the newly generated ID number: CacheStatus sc = person.Save(); Display.WriteLine("Save status: " + sc.IsOK.ToString()); string ID = person.Id(); Display.WriteLine("Saved id: " + person.Id()); The ExistsId() class method can be used to test whether or not an instance exists in the database: string personExists = Sample.Person.ExistsId(CacheConnect, ID).ToString() Display.WriteLine("person " + ID + " exists: " + personExists) For a working example, see the Proxy_2_SaveDelete() method in the bookdemos sample program (see “The Caché .NET Sample Programs”). Instantiating a Proxy Object by Name In some cases, an object that is returned from the server differs from the object that the client requested. For example, the client may request an instance of Sample.Person, but the server returns Sample.Employee. In order to instantiate an object of the desired class, the binding has to know the exact name of the proxy type, including the application namespace (if any). When a proxy class is generated, there is an option to specify the namespace that contains it. For example, if the application namespace is MyAppNsp, the Sample.Person proxy class can be specified as MyAppNsp.Sample.Person. Alternatively, the object could be generated as Sample.Person and then "MyAppNsp" could be assigned to the connection.AppNamespace property. Either option allows the binding to deduce that the full name of the proxy type is "MyAppNsp.Sample.Person". The binding tries to avoid instantiation by name as much as possible, so if a class is already loaded in memory, the binding uses the type in memory to create an instance. In this case, the exact class name is not necessary. In the following example, Y() returns a proxy object that the client knows must be Sample.Person: Sample.Person p = new Sample.Person(conn); Sample.Person q = x.Y(); The first line creates object p, and loads Sample.Person in memory. In this case, the binding does not need to the full name, and x.Y() will not throw an exception. When the first line is commented out, the second line will fail if the full name of the proxy class is actually something like "MyAppNsp.Sample.Person". Closing Proxy Objects The Close() method disconnects a proxy object and closes the corresponding object on the server, but does not change the persistent instance in the database: person.Close(); Always use Close() to destroy a proxy object. Object reference counts are not maintained on the client. Every time the server returns an object (either by reference or as a return value) its reference count is increased. When Close() is called, the reference count is decreased. The object is closed on the server when the count reaches 0. Do not use code such as: person = nothing; //Do NOT do this! This closes the proxy object on the client side, but does not decrement the reference count on the server. This could result in a situation where your code assumes that an object has been closed, but it remains open on the server. By default Close() calls are cached. Although the proxy object can no longer be used, it is not actually destroyed until the reference count can be decremented on the server. This does not happen until the server is called again (for example, when a different proxy object calls a method). In some situations, caching may not be desirable. For example, if an object is opened with Concurrency Level 4 (Exclusive Lock), the lock will not be released until the next server call. To destroy the object immediately, you can call Close() with the optional useCache parameter set to false: person.Close(false); This causes a message to be sent to the server immediately, destroying the proxy object and releasing its resources. Deleting Persistent Objects from the Database The DeleteId() class method deletes the instance from the database. You can use the ExistsId() method to make sure that it is gone: CacheStatus sc = Sample.Person.DeleteId(CacheConnect, ID); Display.WriteLine("Delete status: " + sc.IsOK.ToString()); Display.WriteLine("person " + ID + " exists: " + Sample.Person.ExistsId(CacheConnect, ID).ToString()); For a working example, see the Proxy_2_SaveDelete() method in the bookdemos sample program (see “The Caché .NET Sample Programs”). Using Caché Queries A Caché Query is an SQL query defined as part of a Caché class. For example, the Sample.Person class defines the ByName query as follows: Query ByName(name As %String = "") As %SQLQuery(CONTAINID = 1, SELECTMODE = "RUNTIME") [ SqlName = SP_Sample_By_Name, SqlProc ] { SELECT ID, Name, DOB, SSN FROM Sample.Person WHERE (Name %STARTSWITH :name) ORDER BY Name } Since queries return relational tables, Caché proxy objects take advantage of certain ADO.NET classes to generate query results. In the Sample.Person proxy class, ByName is a class method. It accepts a connection object, and returns an ADO.NET Managed Provider CacheCommand object that can be used to execute the predefined SQL query: CacheCommand Command = Sample.Person.ByName(CacheConnect); In this example, the Command.Connection property has been set to CacheConnect, and Command.CommandText contains the predefined ByName query string. To set the Command.Parameters property, we create and add a CacheParameter object with a value of A (which will get all records where the Name field starts with A): CacheParameter Name_param = new CacheParameter("name", CacheDbType.NVarChar); Name_param.Value = "A"; Command.Parameters.Add(Name_param); The CacheParameter and CacheDataReader ADO.NET Managed Provider classes must be used to define parameters and execute the query, just as they are in an ADO.NET SQL query (see “Using SQL Queries with CacheParameter”). However, this example will use the query to return a set of object IDs that will be used to access objects. A CacheDataReader object is used to get the ID of each row in the result set. Each ID is used to instantiate the corresponding Sample.Person proxy object, which is then used to access the data: Sample.Person person; string ID; CacheDataReader reader = Command.ExecuteReader(); while (reader.Read()) { ID = reader[reader.GetOrdinal("ID")].ToString(); person = Sample.Person.OpenId(CacheConnect, ID); Display.WriteLine( person.Id() + "\t" + person.Name + "\n\t" + person.SSN + "\t" + person.DOB.ToString().Split(' ')[0].ToString() ); }; For a working example, see the Proxy_3_ByNameQuery() method in the bookdemos sample program (see “The Caché .NET Sample Programs”). Using Collections and Lists Caché proxy objects interpret Caché collections and streams as standard .NET objects. Collections can be manipulated by iterators such as foreach, and implement standard methods such as add() and insert(). Caché lists ($List format) are interpreted as CacheSysList objects and accessed by instances of CacheSysListReader (in the InterSystems.Data.CacheTypes namespace). Collections of serial objects are exposed as .NET Dictionary objects. Serial objects are held as global nodes, where each node address and value is stored as a Dictionary key and value. The Person class includes the FavoriteColors property, which is a Caché list of strings. The foreach iterator can be used to access elements of the list: CacheListOfStrings colors = person.FavoriteColors int row = 0; foreach (string color in colors) { Display.WriteLine(" Element #" + row++ + " = " + color); } The standard collection methods are available. The following example removes the first element, inserts a new first element, and adds a new last element: if (colors.Count > 0) colors.RemoveAt(0); colors.Insert(0,"Blue"); colors.Add("Green"); For a working example, see the Proxy_4_Collection() method in the bookdemos sample program (see “The Caché .NET Sample Programs”). Caché does not support the creation of proxy classes that inherit from collections. For example, the Caché Proxy Generator would throw an error when attempting to generate a proxy for the following ObjectScript class: Class User.ListOfPerson Extends %Library.ListOfObjects {Parameter ELEMENTTYPE = "Sample.Person";} Using Relationships If a Caché database defines a relationship, the Caché Proxy Generator will create a CacheRelationshipObject class that encapsulates the relationship. The Sample.Company class contains a one-to-many relationship with Sample.Employee (which is a subclass of Sample.Person). The following example opens an instance of Sample.Employee, and then uses the relationship to generate a list of the employee's co-workers. The employee instance is opened by the standard OpenId() method. It contains a Company relationship, which is used to instantiate the corresponding company object : Sample.Employee employee = Sample.Employee.OpenId(CacheConnect,ID) Sample.Company company = employee.Company; Display.WriteLine("ID: " + (string)employee.Id()); Display.WriteLine("Name: " + employee.Name) Display.WriteLine("Works at: " + company.Name); The company object contains the inverse Employees relationship, which this example instantiates as an object named colleagues. The colleagues object can then be treated as a collection containing a set of Employee objects: CacheRelationshipObject colleagues = company.Employees; Display.WriteLine("Colleagues: "); foreach (Sample.Employee colleague in colleagues) { Display.WriteLine("\t" + colleague.Name); } For a working example, see the Proxy_5_Relationship() method in the bookdemos sample program (see “The Caché .NET Sample Programs”). Using I/O Redirection When a Caché method calls a Read or Write statement, the statement is associated with standard input or standard output on the client machine by default. For example, the PrintPerson() method in the Sample.Employee class includes the following line: Write !,"Name: ", ..Name, ?30, "Title: ", ..Title The following example calls PrintPerson() from a Sample.Employee proxy object: Sample.Employee employee = Sample.Employee.OpenId(CacheConnect, "102"); employee.PrintPerson(); By default, output from this call will be redirected to the client console using the CacheConnection.DefaultOutputRedirection delegate object, which is implemented in the following code: public static OutputRedirection DefaultOutputRedirection = new OutputRedirection(CacheConnection.OutputToConsole); static void OutputToConsole(string output) { Console.Out.Write(output); } The default redirection delegates are defined when a CacheConnection object is created. The constructor executes code similar to the following example: private void Init() { OutputRedirectionDelegate = DefaultOutputRedirection; InputRedirectionDelegate = DefaultInputRedirection; } In order to provide your own output redirection, you need to implement an output method with the same signature as OutputToConsole, create an OutputRedirection object with the new method as its delegate, and then assign the new object to the OutputRedirectionDelegate field of a connection object. This example redirects output to a System.IO.StringWriter stream. First, a new output redirection method is defined: static System.IO.StringWriter WriteOutput; static void RedirectToStream(string output) { MyClass.WriteOutput.Write(output); } The new method will redirect output to the WriteOutput stream, which can later be accessed by a StringReader. To use the new delegate, the WriteOutput stream is instantiated, a new connection conn is opened, and RedirectToStream() is set as the delegate to be used by conn: WriteOutput = new System.IO.StringWriter(); conn = new CacheConnection(MyConnectString); conn.Open(); conn.OutputRedirectionDelegate = new CacheConnection.OutputRedirection(MyClass.RedirectToStream); When PrintPerson() is called, the resulting output is redirected to WriteOutput (which stores it in an underlying StringBuilder). Now a StringReader can be used to recover the stored text: ReadOutput = new System.IO.StringReader(WriteOutput.ToString()); string capturedOutput = ReadOutput.ReadToEnd(); The redirection delegate for the connection object can be changed as many times as desired. The following code sets conn back to the default redirection delegate: conn.OutputRedirectionDelegate = CacheConnection.DefaultOutputRedirection; Input from Caché Read statements can be redirected in a similar way, using an InputRedirection delegate. For a working example, see the Proxy_7_Redirection() method in the bookdemos sample program (see “The Caché .NET Sample Programs”).
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GBMP_PROXY
CC-MAIN-2021-04
en
refinedweb
Entity Framework Core Quick Overview Entity Framework (EF) Core is a lightweight, extensible,. EF Core supports many database engines, see Database Providers for details. If you like to learn by writing code, we'd recommend one of our Getting Started guides to get you started with EF Core. What is new in EF Core If you are familiar with EF Core and want to jump straight into the details of the latest releases: - What is new in EF Core 2.1 (currently in preview) - What is new in EF Core 2.0 (the latest released version) - Upgrading existing applications to EF Core 2.0 Get Entity Framework Core Install the NuGet package for the database provider you want to use. E.g. to install the SQL Server provider in cross-platform development using dotnet tool in the command line: dotnet add package Microsoft.EntityFrameworkCore.SqlServer Or in Visual Studio, using the Package Manager Console: Install-Package Microsoft.EntityFrameworkCore.SqlServer See Database Providers for information on available providers and Installing EF Core for more detailed installation steps. The Model With EF Core, data access is performed using a model. A model is made up of entity classes and a derived context that represents a session with the database, allowing you to query and save data. See Creating a Model to learn more. You can generate a model from an existing database, hand code a model to match your database, or use EF Migrations to create a database from your model (and evolve it as your model changes over time). using Microsoft.EntityFrameworkCore; using System.Collections.Generic; namespace Intro { public class BloggingContext : DbContext { public DbSet<Blog> Blogs { get; set; } public DbSet<Post> Posts { get; set; } protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) { optionsBuilder.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=MyDatabase;Trusted_Connection=True;"); } } public class Blog { public int BlogId { get; set; } public string Url { get; set; } public int Rating { get; set; } public List<Post> Posts { get; set; } } public class Post { public int PostId { get; set; } public string Title { get; set; } public string Content { get; set; } public int BlogId { get; set; } public Blog Blog { get; set; } } } Querying Instances of your entity classes are retrieved from the database using Language Integrated Query (LINQ). See Querying Data to learn more. using (var db = new BloggingContext()) { var blogs = db.Blogs .Where(b => b.Rating > 3) .OrderBy(b => b.Url) .ToList(); } Saving Data Data is created, deleted, and modified in the database using instances of your entity classes. See Saving Data to learn more. using (var db = new BloggingContext()) { var blog = new Blog { Url = "" }; db.Blogs.Add(blog); db.SaveChanges(); }
https://docs.microsoft.com/en-gb/ef/core/
CC-MAIN-2018-22
en
refinedweb
A term used when designing recursive algorithms to describe the case where the problem has been reduced to its simplest or most trivial form. If a recursive algorithm doesn't have this element, it will never unravel (that is, it will never stop recursing and begin returning results back up the chain). Here's some code: everyone's favorite trivial recursion algorithm, the factorial. In a factorial n!, the most trivial cases are where n == 1 (1! == 1) or n == 0 (0! = 0). Otherwise, n! == n * (n - 1)!, which is solved in the recursive case. #include <stdio.h> #include <stdlib.h> int factorial(int n) { if (n < 2) /* base case */ return(1); else /* recursive case */ return(n * factorial(n - 1)); } int main(int argc, char *argv[]) { printf("%d", factorial(atoi(argv[1]))); } This is extra-bad code which will return bad results if given a number bigger than 19 (that's on my x86 -- that number will change on different architectures). Just an example. Need help? [email protected]
https://everything2.com/user/flamingweasel/writeups/base+case
CC-MAIN-2018-22
en
refinedweb
Django-Readonly Put your website into read-only mode for maintanance. It blocks any POST requests and signs users out. It doesn't lock any database transactions (check out for that). Usage - pip install django-readonly - settings.py: add 'readonly.middleware.ReadOnlyMiddleware', to MIDDLEWARE_CLASSES - settings.py: READ_ONLY = True - template: {% if request.read_only %}<p>Website is currently in read-only mode.</p>{% endif %}
https://bitbucket.org/pombredanne/django-readonly/src
CC-MAIN-2018-22
en
refinedweb
Jeffrey F. Jaffe Spring Semester 2016 Corporate Finance FNCE 100 Syllabus, page 1. Spring 2016 Corporate Finance FNCE 100 Wharton School of Business - Paul Doyle - 1 years ago - Views: Transcription 1 Corporate Finance FNCE 100 Syllabus, page 1 Spring 2016 Corporate Finance FNCE 100 Wharton School of Business Syllabus Course Description: This course provides an introduction to the theory, the methods, and the concerns of corporate finance. It forms the foundation for all subsequent courses such as speculative markets, investments and corporate finance. The purpose of this course is to develop a framework for analyzing a firm s investment and financing decisions. Since the emphasis is on the fundamental concepts underlying modern corporate finance, the approach will be analytical and rigorous, and some familiarity with accounting, mathematical, and statistical tools is necessary. The topics covered in the course include (1) discounted cash flow (time value of money), (2) capital budgeting, (3) valuation of stocks, (4) valuation of bonds, (5) security market efficiency, (6) corporate financing and optimal capital structure, (7) portfolio analysis and the Capital Asset Pricing Model (CAPM), and (8) options. Grading: There are two midterms, each counting 30%, and a final exam, counting 40%. The midterms are scheduled for Monday, February 22 and Monday, April 4. Both midterms will be given from 6:15 8:15 PM. Attendance: Students are responsible for all material presented in class. You must attend the section in which you are enrolled. Required Reading: The textbook is Corporate Finance by Ross, Westerfield, Jaffe, and Jordan, 11 th edition, customized for FNCE 100. 2 Corporate Finance FNCE 100 Syllabus, page 2 Readings: a) Value and Capital Budgeting Firms and individuals invest in a large variety of assets. The objective of these investments is to maximize the value of the investment. In this part, we will develop tools that can be used to determine the best investment from several alternatives. Ch. 4 Discounted Cash Flow Valuation Ch. 5 Net Present Value and Other Investment Rules Ch. 6 Making Capital Investment Decisions Ch. 8 Interest Rates and Bond Valuation Ch. 9 Stock Valuation b) Capital Structure As with capital-budgeting decisions, firms seek to create value with their financing decisions. Therefore, firms must find positive NPV financing arrangements. However, to maximize NPV in financial markets, firms must consider taxes, bankruptcy costs, and agency costs. In this part, we will develop the methodology to maximize the value of the financing decision. Ch. 14 Efficient Capital Markets and Behavioral Challenges Ch. 16 Capital Structure: Basic Concepts Ch. 17 Capital Structure: Limited Use of Debt Ch. 18 Valuation and Capital Budgeting for the Levered Firm c) Risk and Portfolio Analysis In this part, we will investigate the relationship between expected return and risk for portfolios and individual assets. This relationship determines the shareholders required (expected) return and the firm s cost of equity capital. The capital-asset-pricing model is used to measure risk and expected return. Ch. 10 Risk and Return: Lessons from Market History Ch. 11 Return and Risk: The Capital-Asset-Pricing Model (CAPM) Ch. 13 Risk, Cost of Capital, and Valuation d) Options In this part, we study both the principles and uses of options. Ch. 22 Options and Corporate Finance 3 Corporate Finance FNCE 100 Syllabus, page 3 DETAILED DESCRIPTION OF TOPICS The first four topics deal with the time value of money and its application to capital budgeting: TOPIC I FUTURE AND PRESENT VALUE This topic examines one of the most important concepts in all of corporate finance, the relationship between $1 today and $1 in the future. Chapter 4 Compounding the one period case Assignment 1 Discounting the one period case Compounding beyond one year Discounting beyond one year Compounding more rapidly than once a year Annual percentage rate vs. effective annual yield Continuous compounding Multiperiod valuation Short cuts for multiperiod valuation: Perpetuity Growing perpetuity Annuity Growing annuity Examples Pension fund and Mortgage TOPIC II THE RULES OF CAPITAL BUDGETING This topic examines alternative approaches to capital budgeting. Chapter 5 Definition of capital budgeting (Excl. Sect. 5.6) The justification for net present value Independent vs. mutual exclusive projects Simple net present value example Payback example Problems with payback Internal rate of return (IRR) Problems of IRR with independent projects Borrowing vs. lending Multiple rates of return No internal rates of return Problems of IRR with mutually exclusive returns Timing Scale Replacement chains 4 Corporate Finance FNCE 100 Syllabus, page 4 TOPIC III THE PRACTICE OF CAPITAL BUDGETING This topic considers the practical application of capital budgeting techniques. Most of the emphasis here is on the determination of cash flows. Chapter 6 Brief review of capital budgeting Relation between cash flow and accounting income Important considerations in determining cash flows Incremental cash flows Opportunity costs Taxes Stockholders vs. tax books Working capital and capital budgeting Inflation and capital budgeting Interest rates and inflation Cash flow and inflation Discounting: nominal vs. real Direct cash flow effects of purchase and sale of capital assets Initial outlay Depreciation Resale of used asset TOPIC IV VALUATION OF STOCKS AND BONDS This topic uses earlier techniques (present value and future value) to value stocks and bonds. Chapter 9 Stocks (Excl. Sect. 9.5) Brief discussion of discount rate Relationship between short-term investor and long-term investor Assignment 2 Dividends vs. capital gains Estimating growth Difference between income and growth stocks Growth opportunities Price-Earnings ratio Pitfalls in applying dividend discount model and related approaches Chapter 8, including appendix Assignment 3 Bonds Pure discount bonds Coupon bonds Interest rates and bond prices Coupon vs. yield to maturity Term structure of interest rates Spot rates and yield to maturity 5 Corporate Finance FNCE 100 Syllabus, page 5 Forward rates Explanation of term structure Corporate Debt The next five topics deal with capital structure decisions. TOPIC V EFFICIENT CAPITAL MARKETS AND CAPITAL STRUCTURE This topic defines efficient capital markets, presents empirical evidence, and shows why timing decisions on capital structure are suspect. Chapter 14 Definition of efficient capital markets Types of market efficiency Empirical evidence Implications for corporate managers TOPICS VI AND VII CAPITAL STRUCTURE WITHOUT TAXES AND WITH TAXES Topic VI examines the basic issues of capital structure, finishing with the Modigliani- Miller relationship without taxes. Topic VII extends the Modigliani-Miller relationship to the world of corporate taxes. Chapter 16 (pp ) Assignment 4 The goal of the manager: Maximizing the value of the firm The relationship between firm value and stock price How to maximize value: The traditionalist s approach A counter-example to traditionalist approach The effect of leverage on value: Modigliani-Miller (MM) Proposition I The effect of leverage on required equity return: Modigliani- Miller (MM) Proposition II Justification for equality between personal and corporate borrowing rate Example when inequality between rates occurs The concept of market value balance sheets (pp ) Assignment 5 The basic paradigm: The pie chart Why the IRS treats interest more favorable than dividends The value of the tax shield The value of the levered firm: MM Proposition I The effect of leverage on required equity return: MM Proposition II Market value balance sheets Effect of leverage on stock prices 6 Corporate Finance FNCE 100 Syllabus, page 6 TOPIC VIII ADJUSTED PRESENT VALUE, WEIGHTED AVERAGE COST OF CAPITAL AND FLOWS TO EQUITY This topic shows how the earlier material on capital structure can be used to perform capital budgeting on levered firms. Chapter 18 Adjusted Present Value (APV) (Excluding 18.7) The base case: Review of capital budgeting Tax shield Market Value Balance Sheets Weighted average cost of capital (WACC) The cost of equity The cost of debt Calculating WACC Flows to Equity Determining cash flows Determining discount rate EPS and shareholder risk Comparison of WACC and APV The scale enhancing project The known debt level case A suggested guideline Recapitalization LBO Example TOPIC IX COSTS OF DEBT AND OPTIMAL CAPITAL STRUCTURE Topic IX shows why firms must balance the tax benefits of debt with agency costs of debt when considering capital structure. Chapter 17 Relationship between MM theory with taxes and real world behavior (Excluding 17.7 and The search for costs of debt: Bankruptcy 17.8) Direct costs of financial distress Indirect costs of financial distress Who bears costs of financial distress Taxes vs. bankruptcy costs: The tradeoff The three determinants of debt level Decision-Making in the real world Agency costs of equity Application to LBOs Bonding the managers How LBOs reduce agency costs The future of LBOs 7 Corporate Finance FNCE 100 Syllabus, page 7 The next three topics deal with the relationship between risk and returns in its application to the determination of the discount rate in capital budgeting. TOPIC X STATISTICAL CONCEPTS AND AN OVERVIEW OF CAPITAL MAR- KETS Chapter 10 Preview of the next three topics Review of definition of return Risk statistics for an isolated stock Variance Standard deviation Risk statistics for a diversified investor Covariance Correlation An historical perspective to risk and return TOPIC XI RETURN AND RISK The topic develops the relationship between the expected return on a stock and its risk. Chapter 11 Statistical parameters for a portfolio Expected return on a portfolio Variance and standard deviation of a portfolio The efficient frontier Efficient set for 2 assets Efficient set for many assets Efficient set and diversification Efficient frontier and riskless borrowing and lending The relationship between risk and return Beta: The measure of risk for individual security in context of a large portfolio Expected return as compensation for beta The capital asset pricing model (CAPM) Empirical evidence on CAPM Determining beta in the real world Formula for calculating beta 8 Corporate Finance FNCE 100 Syllabus, page 8 TOPIC XII THE CAPM AND CAPITAL BUDGETING This topic shows how discount rates for projects can be determined from the relationship between risk and return. Chapter 13 Review of rationale for choosing a discount rate (Excl. Sect ) Relationship between beta of a stock and beta of a project Determinants of beta of a project Practical application of CAPM to capital budgeting TOPIC XIII OPTIONS This topic discusses both the principles and uses of options. Chapter 22 Definition of calls and puts (Excl ) Combinations of options Covered calls Put-call parity Other option strategies 9 Corporate Finance FNCE 100 Syllabus, page 9 RECOMMENDED END-OF-CHAPTER PROBLEMS (Not to be handed in) While attending class and reading the textbook are obviously necessary for learning the FNCE 100 material, doing end-of-chapter problems is also an extremely important way to understand and reinforce the material. Students should work through the following end-of-chapter problems. While I strongly urge all students to attempt both the suggested and additional problems, students should, at the very least, tackle the suggested ones. Chapter 4 Basic: # 3, 4, 6, 12, 13, 14, 15, 16 Intermediate: # 21, 24, 25, 26, 27, 28, 30, 34, 38, 40, 41, 45, 46, 49 Challenge: # 52, 54, 56, 57, 65, 66, 67, 68, 69, and 70 Chapter 5 Basic: # 1, 5 Intermediate: # 11, 13 Challenge: # 23, 28 Basic: # 6 Intermediate: # 14, 17 Chapter 6 Basic: # 1, 3, 4, 5, 9 Intermediate: # 14, 15, 23, 25 Challenge: # 30, 31, 33, 34 Intermediate: # 19 Challenge: # 38a 10 Corporate Finance FNCE 100 Syllabus, page 10 Chapter 9 Basic: # 1, 4, 9 Intermediate: # 13, 14, 15, 16, 18, 21, 23 Challenge: # 30, 33, 34 Intermediate: # 17, 24, 25 Challenge: # 31 Note: #25 the P/E ratio that you are asked to calculate is the trailing P/E ratio. Chapter 8 Basic: # 3, 4 Intermediate: # 17, 18, 19, 20, 21 Challenge: #31 and # 1 6 in Appendix to Bond Chapter Intermediate: # 24, 26 Chapter 14 Concept Questions: # 2, 3, 5, 6, 7 Basic: # 2, 3, 4 Concept Questions: # 11 11 Corporate Finance FNCE 100 Syllabus, page 11 Chapter 16 Basic: # 1, 2, 12, 13, 14, 15, 16 Intermediate: # 17, 18, 19, 23, 24, 25 Challenge: #26 Note: #26 while this is a theoretical question, the rudiments of this question will be discussed in class. Chapter 18 (excluding the CAPM questions which will not be on the second midterm) Basic: # 1, 3 Intermediate: # 10, 11, 12 Challenge: # 15, 16, 17 Chapter 17 Concept Questions: # 1, 2, 4 Intermediate: #8 Chapter 10 Basic: # 1, 2, 12, 17 Intermediate: #23 Basic: # 4, 6 Chapter 11 Basic: # 1, 2, 3, 5, 10, 12, 16 Intermediate: # 26, 28, 29, 30, 31 Challenge: # 34, 36, 37, 38 12 Corporate Finance FNCE 100 Syllabus, page 12 Note: #37 the sentence Assume the CAPM holds should be ignored. #38 uses calculus which will not be on exam. Basic: # 6, 9 Intermediate: # 22 Chapter 13 Basic: # 1, 3, 5, 10, 11, 12, 13 Intermediate: # 16, 19, 21 Challenge: #24 (ignore part e and flotation cost) Note: #24 this is a good problem to go over but ignore calculations of flotation cost, which you will not be responsible for on the final Chapter 18 (CAPM questions which can be on the final exam but not on the second midterm) Basic: #4 Intermediate: #13 Chapter 22 Concept Questions: # 1, 2, 3, 4, 5, 6, 7, 11, 12, 13 Basic: #2, 3 1. What are the three types of business organizations? Define them Written Exam Ticket 1 1. What is Finance? What do financial managers try to maximize, and what is their second objective? 2. How do you compare cash flows at different points in time? 3. Write the formulas NUS Business School. FIN2004 Finance. Semester II 2013/2014 NUS Business School FIN2004 Finance Semester II 2013/2014 COURSE DESCRIPTION This course provides students with the foundations to understand the key concepts and tools used in Finance. It offers a broad UNIVERSITY OF WAH Department of Management Sciences BBA-330: FINANCIAL MANAGEMENT UNIVERSITY OF WAH COURSE DESCRIPTION/OBJECTIVES The module aims at building competence in corporate finance further by extending the coverage in Business Finance module Course Outline. BUSN 6020/1-3 Corporate Finance (3,0,0) Course Outline Department of Accounting and Finance School of Business and Economics BUSN 6020/1-3 Corporate Finance (3,0,0) Calendar Description Students acquire the knowledge and skills required to effectively Course Title : Financial Management. Teaching Hours : 42 hours (3 hours per week) Course Title : Financial Management Course Code : BUS201 / BUS2201 No of Credits/Term : 3 Mode of Tuition : Sectional Approach Teaching Hours : 42 hours (3 hours per week) Category in major Programme :. KEY EQUATIONS APPENDIX CHAPTER 2 CHAPTER 3 KEY EQUATIONS B CHAPTER 2 1. The balance sheet identity or equation: Assets Liabilities Shareholders equity [2.1] 2. The income statement equation: Revenues Expenses Income [2.2] 3.The cash flow identity: SOLUTIONS. Practice questions. Multiple Choice Practice questions Multiple Choice 1. XYZ has $25,000 of debt outstanding and a book value of equity of $25,000. The company has 10,000 shares outstanding and a stock price of $10. If the unlevered beta CAPITAL STRUCTURE [Chapter 15 and Chapter 16] Capital Structure [CHAP. 15 & 16] -1 CAPITAL STRUCTURE [Chapter 15 and Chapter 16] CONTENTS I. Introduction II. Capital Structure & Firm Value WITHOUT Taxes III. Capital Structure & Firm Value WITH Corporate CORPORATE FINAN AND INVESTMENT Seventh Edition CORPORATE FINAN AND INVESTMENT Decisions and Strategies Richard Pike Bill Neale Philip Linsley i B12-1163 List of figures and tables xii Preface xiv Guided tour xviii Using MyFinanceLab BF 6701 : Financial Management Comprehensive Examination Guideline BF 6701 : Financial Management Comprehensive Examination Guideline 1) There will be 5 essay questions and 5 calculation questions to be completed in 1-hour exam. 2) The topics included in those essay and Fundamentals of Corporate Finance Fundamentals of Corporate Finance Lamar Savings Centennial Professor of Finance University of Texas at Austin David 8. Kidwell Professor of Finance and Dean Emeritus Universityof Minnesota WILEY John Wiley MGT201 Solved MCQs(500) By MGT201 Solved MCQs(500) By Why companies invest in projects with negative NPV? Because there is hidden value in each project Because there may be chance of rapid growth Because Level 6 Advanced Diploma in Finance (531) 126 Credits Level 6 Advanced Diploma in Finance (531) 126 Credits Unit: Finance Theory Guided Learning Hours: 210 Exam Paper No.: 4 Prerequisites: Knowledge of Finance. Number of Credits: 21 Corequisites: A pass or: Leverage and Capital Structure Leverage and Capital Structure Ross Chapter 16 Spring 2005 10.1 Leverage Financial Leverage Financial leverage is the use of fixed financial costs to magnify the effect of changes in EBIT on EPS. Fixed The City University of New York, College of Technology The City University of New York, College of Technology Department of Business Managerial Finance (BUS 2340-2048) Prof. A. Zissu, Chairperson Prof. L. Bernard Spring 2013 Class Meets: Thursdays (6:00 PM Education & Training Plan Finance Professional Certificate Program with Externship Testing Services and Programs 1200 N. DuPont Highway Dover, DE 19901 Contact: Amystique Harris-Church 302.857.6143 [email protected] Student Full Name: Education & Training Education & Training Plan Finance Professional. Chapter 15: Debt Policy FIN 302 Class Notes Chapter 15: Debt Policy Two Cases: Case one: NO TAX All Equity Half Debt Number of shares 100,000 50,000 Price per share $10 $10 Equity Value $1,000,000 $500,000 Debt Value $0 $500,000 VALLIAMMAI ENGINEERING COLLEGE DEPARTMENT OF MANAGEMENT STUDIES BA7202-FINANCIAL MANAGEMENT - QUESTION BANK Year and Semester FIRST YEAR II SEMESTER (EVEN) Subject Code and Name BA7202 FINANCIAL MANAGEMENT Faculty Name 1) Mrs.A.UMA DEVI ASST PROF (OG) 2) Mr.J.ANAND ASST PROF (OG) Q.No UNIT I PART A BT Level Practice Exam (Solutions) Practice Exam (Solutions) June 6, 2008 Course: Finance for AEO Length: 2 hours Lecturer: Paul Sengmüller Students are expected to conduct themselves properly during examinations and to obey any instructions Chapter 17 Does Debt Policy Matter? Chapter 17 Does Debt Policy Matter? Multiple Choice Questions 1. When a firm has no debt, then such a firm is known as: (I) an unlevered firm (II) a levered firm (III) an all-equity firm D) I and III only Introduction. Objectives. Learning Outcomes. Content. Methodology. Evaluation. Corporate Finance COURSE OUTLINE GLOBAL EXECUTIVE MBA PROGRAM MODULE 5 Introduction is about companies, investors, and their interaction in financial markets. Essentially, companies make decisions (how to raise capital, how to invest that capital, ) and investors react to On the Applicability of WACC for Investment Decisions On the Applicability of WACC for Investment Decisions Jaime Sabal Department of Financial Management and Control ESADE. Universitat Ramon Llull Received: December, 2004 Abstract Although WACC is appropriate Brandeis University INTERNATIONAL BUSINESS SCHOOL. Business 71a Introduction to Finance Tuesday/Thursday 5:00 pm to 6:30 pm Spring 2013 Brandeis University INTERNATIONAL BUSINESS SCHOOL Business 71a Introduction to Finance Tuesday/Thursday 5:00 pm to 6:30 pm Spring 2013 Instructor: Debarshi K. Nandy Office: Sachar International Center, MASTER FINANZAS DE EMPRESA MASTER FINANZAS DE EMPRESA Subject Corporate Finance Code 607627 Mode Credits 4 Compulsory Attending 4 Nonattending Course First Year Semester 1 Language English LECTURERS 0 Department Professor Alejandro Far-western University Faculty of Management Far-western University Faculty of Management Course: Fundamentals of Financial Management Full marks: 100 Course No. Mgt FIN 2001 Pass marks: 45 Nature of the course: Theory Total periods: 120 Year: Second Financial Markets and Valuation - Tutorial 6: SOLUTIONS. Capital Structure and Cost of Funds Financial Markets and Valuation - Tutorial 6: SOLUTIONS Capital Structure and Cost of Funds (*) denotes those problems to be covered in detail during the tutorial session (*) Problem 1. (Ross, Westerfield MODERN PORTFOLIO THEORY AND INVESTMENT ANALYSIS MODERN PORTFOLIO THEORY AND INVESTMENT ANALYSIS EIGHTH EDITION INTERNATIONAL STUDENT VERSION EDWIN J. ELTON Leonard N. Stern School of Business New York University MARTIN J. GRUBER Leonard N. Stern School Use the table for the questions 18 and 19 below. Use the table for the questions 18 and 19 below. The following table summarizes prices of various default-free zero-coupon bonds (expressed as a percentage of face value): Maturity (years) 1 3 4 5 Price Course Outline. FNCE 4130-3 Advanced Financial Management (3,0,0) Course Outline Department of Accounting and Finance School of Business and Economics FNCE 4130-3 Advanced Financial Management (3,0,0) Calendar Description Building on either FNCE 2120: Financial Management Pearson Education International EXCEL MODELING AND ESTIMATION IN CORPORATE FINANCE Third Edition CRAIG W. HOLDEN Max Barney Faculty Fellow and Associate Professor Kelley School of Business Indiana University PEARSON Pearson Education MBA 8135 Corporate Finance J. Mack Robinson College of Business Department of Finance MBA 8135 Corporate Finance Course Syllabus Spring 2008 NOTE: It is your responsibility to read, understand and abide by all of the course information Cost of Capital, Valuation and Strategic Financial Decision Making Cost of Capital, Valuation and Strategic Financial Decision Making By Dr. Valerio Poti, - Examiner in Professional 2 Stage Strategic Corporate Finance The financial crisis that hit financial markets in Protective Put Strategy Profits Chapter Part Options and Corporate Finance: Basic Concepts Combinations of Options Options Call Options Put Options Selling Options Reading The Wall Street Journal Combinations of Options Valuing Options Valuation. The Big Picture: Part II - Valuation Valuation The Big Picture: Part II - Valuation A. Valuation: Free Cash Flow and Risk Apr 1 Apr 3 Lecture: Valuation of Free Cash Flows Case: Ameritrade B. Valuation: WACC and APV Apr 8 Apr 10 Apr 15 Lecture: Capital Budgeting and Corporate Finance Certification Capital Budgeting and Corporate Finance Certification Summary Syllabus September 2015 Study and Reference Guide Copyright 2015 Institute of Capital Markets Karachi, Pakistan All rights reserved This is Principles of Financial Management. 3 3 Lecture/Laboratory Hours COURSE OUTLINE BUS218 Course Number Principles of Financial Management Course Title Credits 3 3 Lecture/Laboratory Hours Course description: Principles of financial management as applied to the firm including Examiner s report F9 Financial Management June 2011 Examiner s report F9 Financial Management June 2011 General Comments Congratulations to candidates who passed Paper F9 in June 2011! The examination paper looked at many areas of the syllabus and a consideration Source of Finance and their Relative Costs F. COST OF CAPITAL F. COST OF CAPITAL 1. Source of Finance and their Relative Costs 2. Estimating the Cost of Equity 3. Estimating the Cost of Debt and Other Capital Instruments 4. Estimating the Overall Cost of Capital CHAPTER 5 HOW TO VALUE STOCKS AND BONDS CHAPTER 5 HOW TO VALUE STOCKS AND BONDS Answers to Concepts Review and Critical Thinking Questions 1. Bond issuers look at outstanding bonds of similar maturity and risk. The yields on such bonds are used ( ) ( )( ) ( ) Fundamentals Level Skills Module, Paper F9 Answers Fundamentals Level Skills Module, Paper F9 Financial Management June 2008 Answers 1 (a) Calculation of weighted average cost of capital (WACC) Cost of equity Cost of equity using capital asset E. V. Bulyatkin CAPITAL STRUCTURE E. V. Bulyatkin Graduate Student Edinburgh University Business School CAPITAL STRUCTURE Abstract. This paper aims to analyze the current capital structure of Lufthansa in order to increase market value MBA 8130 FOUNDATIONS OF CORPORATION FINANCE FINAL EXAM VERSION A MBA 8130 FOUNDATIONS OF CORPORATION FINANCE FINAL EXAM VERSION A Fall Semester 2004 Name: Class: Day/Time/Instructor:. Read the following directions very carefully. Failure to follow these directions will Ch. 18: Taxes + Bankruptcy cost Ch. 18: Taxes + Bankruptcy cost If MM1 holds, then Financial Management has little (if any) impact on value of the firm: If markets are perfect, transaction cost (TAC) and bankruptcy cost are zero, no CHAPTER 12 RISK, COST OF CAPITAL, AND CAPITAL BUDGETING CHAPTER 12 RISK, COST OF CAPITAL, AND CAPITAL BUDGETING Answers to Concepts Review and Critical Thinking Questions 1. No. The cost of capital depends on the risk of the project, not the source of the money. Paper F9. Financial Management. Specimen Exam applicable from September Fundamentals Level Skills Module Fundamentals Level Skills Module Financial Management Specimen Exam applicable from September 2016 Time allowed: 3 hours 15 minutes This question paper is divided into three sections: Section A ALL 15 Bachelor of Financial & Accounting Science Investment & Portfolio Management / ACF 354 This course covers topics related to the theoretical and practical background for the investment process and the tools and methods used in evaluating financial SampleFinal Finance 320 Finance Department SampleFinal Finance 320 Finance Department Name Chapters: 1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, and 13 1) A C corporation earns $4.50 per share before taxes. The corporate tax rate is 35%, the personal tax Things to Absorb, Read, and Do Things to Absorb, Read, and Do Things to absorb - Everything, plus remember some material from previous chapters. This chapter applies Chapter s 6, 7, and 12, Risk and Return concepts to the market value Executive Summary of Finance 430 Professor Vissing-Jørgensen Finance 430-62/63/64, Winter 2011 Executive Summary of Finance 430 Professor Vissing-Jørgensen Finance 430-62/63/64, Winter 2011 Weekly Topics: 1. Present and Future Values, Annuities and Perpetuities 2. More on NPV 3. Capital Budgeting Paper F9. Financial Management. Fundamentals Pilot Paper Skills module. The Association of Chartered Certified Accountants Fundamentals Pilot Paper Skills module Financial Management Time allowed Reading and planning: Writing: 15 minutes 3 hours ALL FOUR questions are compulsory and MUST be attempted. Do NOT open this paper More Tutorial at Corporate Finance Corporate Finance Question 1. The cost of capital (8 points) St. Claire Enterprises is a levered firm. The equity cost of capital for St. Claire is 7%. The debt cost of capital for St. Claire is 2%. Assume Paper F9. Financial Management. Specimen Exam applicable from December 2014. Fundamentals Level Skills Module Fundamentals Level Skills Module Financial Management Specimen Exam applicable from December 2014 Time allowed Reading and planning: 15 minutes Writing: 3 hours This paper is divided into two sections:.
http://docplayer.net/26009472-Jeffrey-f-jaffe-spring-semester-2016-corporate-finance-fnce-100-syllabus-page-1-spring-2016-corporate-finance-fnce-100-wharton-school-of-business.html
CC-MAIN-2018-22
en
refinedweb
This week at Build 2016, we released Visual Studio 2015 Update 2 and Visual Studio “15” Preview. Both releases include many new language features that you can try today. It’s safe to install both versions of Visual Studio on the same machine so that you can check out all of the new features for yourself. New C# and VB features in Visual Studio 2015 Update 2 In Visual Studio 2015 Update 2, you’ll notice that we’ve added some enhancements to previous features as well as added some new refactorings. The team focused on improving developer productivity by cutting down time, mouse-clicks, and keystrokes to make the actions you perform every day more efficient. Interactive Improvements (C# only in Update 2, VB planned for future) The C# Interactive window and command-line REPL, csi, were introduced in Visual Studio Update 1. In Update 2, we’ve paired the interactive experience with the editor by allowing developers to send code snippets from the editor to be executed in the Interactive window. We’ve also enabled developers to initialize the Interactive window with a project’s context. To play with these features: - Highlight a code snippet in the editor, right-click, and press Execute in Interactive (or Ctrl+E, Ctrl+E), as shown in the image below. - Right-click on a project in the Solution Explorer and press Initialize Interactive with project. Add Imports/Using Improvements (VB and C# in Update 2) We’ve improved the Adding Imports/Using command to support “fuzzy” matching on misspelled types and to search your entire solution and metadata for the correct type—adding both a using/imports and any project/metadata references, if necessary. You can see an example of this feature with a misspelled “WebCleint” type. The type name needs to be fixed (two letters are transposed) and the System.Net using needs to be added. Refactorings A couple refactorings we sprinkled in were: - Make method synchronous (VB and C# in Update 2) - Use null-conditional for delegate invocation (C# only in Update 2, Maybe for VB? – Read More below) So, the killer scenario for this feature is raising events in a thread-safe way. Prior to C# 6 the proper way to do this was to copy the backing field of the event to a local variable, check the variable for null-ness and invoke the delegate inside the if. Otherwise Thread B could set the delegate to null by removing the last handler after Thread A has checked it for null resulting in Thread A unintentionally throwing a NullReferenceException. Using the null-conditional in C# is a much shorter form of this pattern. But in VB the RaiseEvent statement already raised the event in a null safe way, using the same code-gen. So the killer scenario for this refactoring really didn’t exist and worse, if we add the refactoring people might mistakenly change their code to be less idiomatic with no benefit. From time to time we review samples that don’t understand this and perform the null check explicitly anyway so this seems likely to reinforce that redundant behavior. Let us know in the comments if you think the refactoring still has tons of value for you outside of raising events and we’ll reconsider! -ADG Roslyn Features (VB and C# in Update 2) We’ve added two new compiler flags to the Roslyn compiler: - deterministic: This switch will ensure builds with the same inputs produce the same the outputs, byte for byte. Previously, PE entries–like MVID, PDB ID and Timestamp–would change on every build but now can be calculated deterministically based on the inputs. - publicSign: Supports a new method of signing that is similar to delay signing except it doesn’t need to add skip verification entries to your machine. Binaries can be public signed with only the public key and load into contexts necessary for development and testing. This is also known as OSS signing. Sneak Peek: What’s in Visual Studio “15” Preview We released a first look of Visual Studio “15” this week at Build. It is point in time view of what we’ve been working on. Some features will still change and others are still coming. It’s a good opportunity to provide feedback on the next big release of Visual Studio. Play with C# 7 Prototypes (VB 15 Prototypes planned) The guiding theme for C# 7 language design is “working with data”. While the final feature set for C# 7 is still being determined by the Language Design Committee, you can play with some of our language feature prototypes today in Visual Studio “15” Preview. To access the language prototypes, right-click on your project in Solution Explorer > Properties > Build and type “__DEMO__” in the “Conditional compilation symbols” text box. This will enable you to play with a preview of local functions, digit separators, binary literals, ref returns, and pattern matching. There is a known bug related to ref return IntelliSense, which can be worked around by: - right-click on your project in Solution Explorer > Unload Project - right-click on your project after it’s been unloaded > Edit csproj - in the first Property Group under <AssemblyName> add: <Features>refLocalsAndReturns</Features> - Ignore any XML schema warnings you may see Custom Code Style Enforcement (VB and C# in Visual Studio “15” Preview) The feature you all have been asking for is almost here! In Visual Studio “15” Preview, you can play around and give us feedback on our initial prototype for customizable code style enforcement. To see the style options we support today, go to Tools > Options > C#/VB > Code Style. Under the General options, you can tweak “this.”/”Me.”, predefined type, and “var”/type inference preferences. Today, with “var’ preferences, you can control the severity of enforcement—e.g., I can prefer “var” over explicit types for built-in types and make any violation of this squiggle as an error in the editor. You can also add Naming rules, for instance, to require methods to be PascalCase. Please Keep up the Feedback Thanks for all the feedback we’ve received over the last year. It’s had a big impact on the features that I’ve described here and on others we’ve been working on. Please keep it coming. The language feedback on the open source Roslyn project has been extensive. It’s great to see a broader language community developing around the Roslyn open source project on Github. To give feedback, try one of the following places: - The Roslyn project, for language feedback - Send Feedback Option Visual Studio, for Visual Studio feedback - UserVoice , for Visual Studio suggestions Thanks for using the product. I hope you enjoy using it to build your next app. Over ‘n’ out Kasey Uhlenhuth, Program Manager, Managed Languages Team Join the conversationAdd Comment People have said this before about things like tab and spaces settings, but these really shouldn’t only live in the IDE on the machine – as someone who works on different OSS, personal and work projects on the same machine, having to flip these settings every time is a real pain. I know there is Editor Config, but there’s only a limited set of features. Sorry, I should have clarified that I’m referring to the Style Settings. An excellent point — we should be able to persist these style settings to the file system so that we can check in a style settings file alongside the source code. I agree, best solution is in my opinion would be for Microsoft to embed the external plugin for keeping responsability and flexibility. It has run some years now, so it could be considered to embed the support directly inside Visual studio. My team has for many years used a centrally placed visual studio teamsetting config and other OSS have used a synced settings, first with sublime settings files and since with. On windows we pushed the team settings to the registry by network group policy. This has worked fine for many years, only challenged by new editors that required support for editorconfig.org. Today there is support for all editors on editorconfig.org but it requires plugin, embedding the support inside Visual Studio, like has been done with grunt.js and other tools would be an excellent move for supporting better standardization on projects. Regards, Jan Hebnes Just tried to check out the /publicSign switch, but the compiler gives me a CS2007 error (unknown option). Update 2 is installed. How does this work? OK, I figured it out. But documentation is missing on /help (I’m using a localized build). I like the content here but the naming could use a bit of work. I kind of get that Visual Studio 2015 ‘Preview’ means an early look at the next update to Visual Studio 2015, but I think I looked at the Visual Studio 2015 “Preview” last year. Maybe you should just drop the number and and go to the inner ring, outer ring nomenclature. The naming could use work, because it’s Visual Studio 15 ‘Preview’ (not 2015). Really hard to imagine who thought that was a good idea. The title of this article was improvements to C# and VB. Could you republish with VB improvements? +1 Agreed. Hey DanW52, Please see the sections “Add Using/Imports Improvements”, “Refactorings” (i.e., make method synchronous), and “Roslyn features” for the VB features. Thanks, Kasey .NET Managed Languages If your saying that all that functionality works in both, just take 2 screenshots to demonstrate that.. is it really that hard? I think the point you missed is that the term “VB” only exists in the title and not in the document. In contrast “C#” is in the title then 6 more times in the article. The comment is more attributed to the lack of coherent explanation rather than the poster missing something. You made it sound like the reader was to blame for not understanding – but it’s poor articulation instead that is to blame. Hey Sean, Sorry about that. We were sure to include the VB screenshot in that section but didn’t explicitly call it out in the text for easy searching.. Also, wouldn’t want people writing less idiomatic code in VB because they think it’s somehow better than what they’re writing today. We’re going to update the post with more call-outs to what’s in VB and what’s coming for VB in the future. Thanks for the feedback. Regards, -ADG VB is only mentioned in the title. Is it no more a secret that MS is letting it die?? Shall we start recoding everything to c# or not yet?? +1, Thumbs Up Hey CAKCy, Visual Basic was only mentioned in the title but it was actually shown in the body (in screenshot #2). But the feedback is taken that it’s hard to search for the most relevant content for VB so we’re going to update the post with more explicit call outs to make it easier to find. I have a comment above about one of the refactorings which was deemed less relevant to VB because an existing language feature in VB already solves the problem. Other stuff is either planned or in progress for VB. Regards, -ADG Hey Anthony and thanks for you reply! The general feeling we, VB developers, get (those of us left from past eras) – and I’ve seen this on many sites – is that MS is favoring C# over VB for whatever reason. If MS considers VB to be a dying/dead language they should say so. We see more and more tools in VS applicable only to C# as if VB developers could not benefit from those IF they were offered in “our” language… Regs. And with the integration of Xamarin, we now have a major C# only component of Visual Studio. I was a “VB” developer from the early days of VBA (in Access 2.0) to just a few years ago. I had noticed more and more people saying that “C# was superior” and that “MS was allowing VB to die without publicly saying so”, but I tenaciously clung to using VB.NET, as C# had a couple of show-stoppers for me at the time. Case sensitivity made it a major chore to try to write code in C#, and C# didn’t have background compiling at the time. C# code also looked like complete gibberish to me back then, whereas the intention of VB.NET code just seemed “obvious”. Then a few years ago, I got so sick of online samples etc only being provided in C# that I decided to force myself to start using it a bit and see how I went. To my huge surprise I actually picked it up extremely quickly. Today I cringe if I have to deal with any of my “legacy” VB code. I would never, EVER, have thought that there’d be a day that I’d prefer to code in C# as opposed to VB.NET, but that day arrived! C# is now my main language; the one that I use every day. I used to read people saying similar things but I never believed them. Now I’m in the same boat as they were. I second Yann. C# is the future. VB.NET can be used to teach coding. But, to really get into coding and OOP and stuff, we need a language like C#. It is just so awesome. I feel bad for my love for old VB. But that’s how it goes. At one point, we need to move on. C# is definitely intimidating in start (like everything else new) but once you get along with it, it is a piece of cake. I agree, show VB some love, it’s been around a long time. Disrespect or ignore it at your own peril. C# still playing catch-up, I extensively use both C# and VB, and anyone who thinks VB is inferior to C# doesn’t know what they talking about: .” – Anthony D. Green [MSFT] RIP 2016-2016 I understand that Local Methods are under consideration. I think this is terrific. I spent many years programming in Pascal and was bitterly disappointed that they were excluded from c#. The provide terrific support for organizing code while still allowing for methods that have only one purpose. I hope they will be in the final release. Seems to me that “var” type inference for Funcs/delegates would work just fine for this case — internal fn’s should only be a line or two anyway. In F# we have: let addx item = “x” + item or let addx = fun item -> “x” + item in C# we’d just have: var addx = item => “x” + item; as a shortcut for the current Func addx = “x” + item; Func addx = “x” + item; Really? Func < string,string > = “x”+item Nice features, indeed. What has tied down my productivity much more is the absence of good documentation. MS offers **tens of thousands of APIs** but their documentation is handled by a tool that has manyfold deficits: 1. Different versions of docs and library tools for SQL Server 2014 and VS 2013 2. No possibility to have “Intelligent Favorites” which are part of a project and are maintained by the development team. 3. The Docs are structures as websites which a TOC that forgets your navigation (!!!) 4. The version management is annoying. Docs are offered in the latest but often wrong version from the projects perspective. 5. Descriptions of functions are trivial and so are many of the samples. 6. The two main views on information, Guides and References, have been mingled to stew hardly to digest: Guides show details on handling tools and references are incomplete and lack samples. An example of the **bright** of that story are the language references. Formatted as eBooks with a detailed TOC, you can find answers “at your finger tops”. I’d be interested in how you see the style settings functionality playing with StyleCop, especially with the Roslyn based StyleCop.Analyzers out there already. Compared to what that offers, this seems like a pretty limited release. Especially if as this post implies styles are applied to the IDE not the source control. Even in the same company software created at different times might have different associated styles, this is even more true when you bring open source projects into the mix. MS has embraced open source initiatives before instead of reinventing the wheel, I can’t help but feel like this is an occasion where you should defer to the existing option. Does defining “__DEMO__” enable anything for VB? Mark, Not yet. When we started this design cycle there was a broad set of new features we were looking at that would impact both languages and it didn’t really make sense to chase down every possibility in both languages. This proved to save a lot of time as many of those features have since been cut. Now that we have a narrower view of what kind of platform-driving features we’d like to go out with the next version of the languages we’re doing to work to properly design and implement these features for VB. That includes Tuples, ref returns, pattern matching, and other things that have thusfar only been discussed in C#. The VB language design team is meeting this week to discuss some of the VB particulars and we’ll be posting the output of that meeting as updates to the .NET blog and on GitHub and they will appear either in a future preview drop or directly through a mechanism we’re working on that allows you to download language feature prototypes from the next version directly from github at anytime (without installing a new VS or Update). Regards, -ADG Hi all, We had an issue with this post where we accidentally posted the same page twice *with the same url*. This wreaked havoc on Word Press causing one of the posts to be unreachable, the content to appear twice on a single page, and some comments to disappear entirely. We think we’ve corrected the problem by deleting one of the posts from the database. This had the unfortunately side effect of losing a bunch of comments. But, many of the comments were, rightly, complaining about the original confusion or the lack of response from team members so hopefully on balance this was a good trade-off. **Future Commentators** If you commented on this page before AndAlso your comment no longer appears AndAlso your comment wasn’t addressed by either fixing the broken page OrElse one of the team member responses Then please repost your comment and we’ll try to address it as quickly as possible. Regards, -ADG Not liking the VB later or ? statements in this article, we have invested a lot in VB across WinForms and ASP.NET ? Tip: If you want to use the new deterministic compiler option in VS 2015 Update 2, you can manually edit a .csproj or .vbproj file to include <Deterministic>True</Deterministic> in a <PropertyGroup> element. In the context of building service fabric applications in Team Services, this does not completely work. You’ll get a warning that says this: This package contains ‘*.pdb’ files, which will change in every build even if the ‘deterministic’ compiler flag is used. Exclude these files from your package in order to accurately detect if the package has changed. I want to see better intellisense features for JavaScript, such as a way to see all references and integration with TDD tools. Add Imports/Using Improvements I like this feature, but it doesn’t always find correct assembly to reference. I mean it picks wrong location. Say I included MS Prism DLLs from ThirdParty folder outside my solution. You can find it by looking at how project files reference it. However quick fix found this DLL in a build folder of some project, and suggested it instead. My coding experience is limited, but thought I would share my experience with Visual Studio in areas where I think efficiency needs to be improved. My development framework is Web Pages. In markup I can roll up code by clicking on “-” to the right of the line numbers. Why MS has not provided a similar feature on the code side of the page is beyond me. There is a plug in solution, but it is hopelessly torturous in use. Why is it that VS insists on formating a webgrid markup block every time I copy or move grid.Column() lines? CTRL-Z undoes some of the damage, but it feels like I am re-arranging lines and spaces tens of minutes a day. A big time waster for me is the repeated crashes of IIS express, requiring a complete restart of VS and the browser. This happens several times a day. I may waste half an hour a day with this. Why is the debugger getting slow as molasses over time? It used to be that after an exception it would take a few seconds for the error screen to come up in the browser, now it takes close to a minute. Same for starting the debugger, seconds before, now it it takes 15 seconds to get the page the show up in the browser. Another big time waster for me has always been tutorials made by Microsoft themselve, partners and MVPs that do not get updated or fixed when they have bugs. Showing how to integrate with the Paypal api, is an excellent tutorial. But it is an absolute time waster if it doesn’t work. This tutorial and others like that should be kept in maintained state for as long as possible. If authors cannot commit to that responsibility than they should not write any. Although improving the capabilities of FIND and REPLACE is nice, that will never get me back my time I waste with the big issues? Shouldn’t MS concentrate on the big time wasters first? I’ve just run this piece of code with “Execute in Interactive” and ran in trouble: > XElement elem = null; (1,1): error CS0246: The type or namespace name ‘XElement’ could not be found (are you missing a using directive or an assembly reference?) Looks pretty useless if this works only in a very prepared situation. Or I made something wrong. Hey Vitor, If you right-click on your project, you should see a command called “Initialize Interactive with project”. This should reset your interactive session and add your project references so you can use “Execute in Interactive” without having to manually #r your references. Hope this helps! Kasey I’ve been browsing online more than three hours today, yet I never found any interesting article like yours. It’s lovely price sufficient for me. In my opinion, if all website owners and bloggers made good content material as you did, the net can be a lot more helpful than ever before. I came into this world, looking for Visual Basic that I couldn’t load in my 64 bit system from Visual Studio 2002. AND I DON’T FIND ANY WAY TO GET TO IT!!!! You is noot going to feel any cravvings for an extended time and inn a position to avoid any extra calories. Usse Smaller Plates – It might sound crazy, but eating off large plates sometimes leaves you with food that you do nnot really want. It’s a bad suggestion to shell out your time and energy captivated by what you wiill be likely to eat tomorrow.
https://blogs.msdn.microsoft.com/dotnet/2016/04/02/whats-new-for-c-and-vb-in-visual-studio/?replytocom=84805
CC-MAIN-2018-22
en
refinedweb
Say I have a script like this: p = subprocess.Popen(['python forked_job.py'], shell=True) status = p.wait() # Do something with status import os import sys print 'hi' pid = os.fork() if pid == 0: sys.exit(do_some_work()) else: sys.exit(do_other_work()) When you fork, you have a parent and a non-parent process. When pid == 0, you are in the child process; your else statement is when you're within the parent process. Similar to calling Popen.wait, as you do in the first script, you want to call os.wait in the second one. os.wait() Wait for completion of a child process, and return a tuple containing its pid and exit status indication: a 16-bit number, whose low byte is the signal number that killed the process, and whose high byte is the exit status (if the signal number is zero); the high bit of the low byte is set if a core file was produced. Availability: Unix As you can see, this of course assumes that you're running unix. Since os.fork is also Unix-only, this seems likely. So, have the parent call os.wait and reflect the status back up in what the parent returns. One thing to note, though it probably doesn't matter, and you're probably aware. You're technically not doing this: main_script / \ forked_job forked_job But instead: main_script | forked_job_parent | forked_job_child (I'm attempting to show the "ownership", and hence the usage of the second wait.)
https://codedump.io/share/flfa9ojbnJsx/1/python-subprocess---check-exit-codes-of-forked-program
CC-MAIN-2018-22
en
refinedweb
copy_with_extension_gen 1.0.8 Provides Dart Build System builder for generating copyWith extensions for classes annotated with copy_with_extension. For more info on this package check out my blog article. Usage # In your pubspec.yaml file: - Add to dependenciessection copy_with_extension: ">=1.0.0 <2.0.0" - Add to dev_dependenciessection copy_with_extension_gen: ">=1.0.0 <2.0.0" - Add to dev_dependenciessection build_runner: ">=1.0.0 <2.0.0" - Set environmentto at least Dart 2.7.0 version like so: ">=2.7.0 <3.0.0" Your pubspec.yaml should look like so: name: project_name description: project description version: 1.0.0 environment: sdk: ">=2.7.0 <3.0.0" dependencies: ... copy_with_extension: ">=1.0.0 <2.0.0" dev_dependencies: ... build_runner: ">=1.0.0 <2.0.0" copy_with_extension_gen: ">=1.0.0 <2.0.0"';. The extension will be generated: // GENERATED CODE - DO NOT MODIFY BY HAND part of 'basic_class.dart'; // ************************************************************************** // CopyWithGenerator // ************************************************************************** extension CopyWithExtension on BasicClass { BasicClass copyWith({ String id, }) { return BasicClass( id: id ?? this.id, ); } } Launch code generation: flutter pub run build_runner build 1.0.8 Analyzer rules # - Suppresses some of the analyzer's rules as we do not support generic types yet. 1.0.7 Extension name fix # - Creates a unique extension name for each class. 1.0.6 Minor corrections # - Minor metadata and description corractions. 1.0.0 Initial release # - Lets you generate a copyWithextension for objects annoteted with @CopyWith(). import 'package:meta/meta.dart' show immutable; import 'package:copy_with_extension/copy_with_extension.dart'; /// Make sure that `part` is specified, even before launching the builder part 'example.g.dart'; @immutable @CopyWith() class SimpleObject { final String id; final int value; /// Make sure that constructor has named parameters (wrapped in curly braces) SimpleObject({this.id, this.value}); } Use this package as a library 1. Depend on it Add this to your package's pubspec.yaml file: dependencies: copy_with_extension_gen: ^1.0.8 2. Install it You can install packages from the command line: with pub: $ pub get Alternatively, your editor might support pub get. Check the docs for your editor to learn more. 3. Import it Now in your Dart code, you can use: import 'package:copy_with_extension_gen/builder.dart'; We analyzed this package on Mar 26, 2020, and provided a score, details, and suggestions below. Analysis was completed with status completed using: - Dart: 2.7.1 - pana: 0.13.6 Health suggestions Fix lib/src/copy_with_generator.dart. (-0.50 points) Analysis of lib/src/copy_with_generator.dart reported 1 hint: line 76 col 34: 'name' is deprecated and shouldn't be used. Check element, or use getDisplayString().
https://pub.dev/packages/copy_with_extension_gen
CC-MAIN-2020-16
en
refinedweb
In this tutorial, you will learn how to create a server-side Blazor application that interacts with an external web API using HttpClientFactory. Later in the series, you will add IdentityServer4 authentication to protect the API and authorize the client web app. In Part 1, you will create a public Web API, and you will learn the right way to interact with it from a server-side Blazor app. In the next tutorial, you will protect the API using IdentityServer4 and learn how to authorize your Blazor app using an access token. Though this tutorial is written for server-side Blazor applications, the techniques can also be used in other ASP.NET web apps, including both MVC and Razor Pages projects. Create the Shared Models Project Start by creating a shared library that will contain the models to be used in the solution. The models contained in the shared library will be referenced by both the API and the Blazor web application frontend, so this is a good place to start! Launch Visual Studio and create a New Project. Select Class Library (.NET Standard). A .NET Standard class library can be added as a reference to all .NET Core web applications (MVC, Razor Pages, and Blazor), and it can also be included as a reference in a Xamarin project if you ever decide to create a mobile frontend. In this series, you will create a contact management application. For Solution Name, type BlazorContacts. Enter BlazorContacts.Shared for the Project Name, and click Create to scaffold the shared library from template. In the Solution Explorer pane, right-click the BlazorContacts.Shared project and select Add > New Folder. Name the folder Models. This directory will contain all the shared models you will need in your application For this application, you will need a model that contains information about individual contacts. Right-click the Models folder, and select Add > Class. Name the class Contact.cs. This will generate a file that should look like the following: using System; using System.Collections.Generic; using System.Text; namespace BlazorContacts.Shared.Models { class Contact { } } The Contact model should be public, so you can access it from outside the class. It should also include a unique identifier for the contact, and, for this example, the contact’s name and phone number. You may choose to split the Name property into two properties, one for FirstName and one for LastName, to provide for more robust sorting and filtering in your end product. You could also add more fields, such as Address, EmailAddress, and Location. namespace BlazorContacts.Shared.Models { public class Contact { public long Id { get; set; } public string Name { get; set; } public string PhoneNumber { get; set; } } } There is a useful Nuget package for annotating data models that I recommend. Using it will make data validation much easier on both the backend database as well as the frontend UI. It can be used to define required fields, length constraints, valid characters, etc. Add the package to your project. Install-Package System.ComponentModel.Annotations Then include the package in your class library. By also including a JSON serialization library, you can ensure the public properties in your model are linked to the correct JSON property produced by the API. using System.ComponentModel.DataAnnotations; using System.Text.Json.Serialization; I am using the new System.Text.Json namespace instead of NewtonSoft. With this package, your annotated Models class might look like the following: using System.ComponentModel; using System.ComponentModel.DataAnnotations; using System.Text.Json.Serialization; namespace BlazorContacts.Shared.Models { public class Contact { [Key] [JsonPropertyName("id")] public long Id { get; set; } [Required] [JsonPropertyName("name")] public string Name { get; set; } [Required] [DisplayName("Phone Number")] [JsonPropertyName("phonenumber")] public string PhoneNumber { get; set; } } } The [Key] attribute can be used when attaching the model to a database. It is not actually necessary in this case, because a property with name Id will automatically be used as the key in a database entry. The [Required] attribute denotes a required field, and DisplayName allows you to denote a human readable name for a field. The DisplayName will be shown in the case of error messages, for example. Create the Web API Now, it is time to create the Web API which will be used to add, delete, and fetch contacts. With the BlazorContacts solution open, add a New Project , and select ASP.NET Core Web Application. Name the project BlazorContacts.API , and click Create. On the next page, select the API project template. Leave the Authentication setting as No Authentication. Later, you will configure IdentityServer4 to grant API access to your Blazor frontend. Click Create , and wait for the API project template to scaffold. In the Solution Explorer pane of your newly created API project, right click the BlazorContacts.API project and select Add > Reference. In the Reference Manager, add BlazorContacts.Shared as a reference for your API project. This will allow you to reference the Contact model you just created. Next, right-click the Controllers directory of the API project and select Add > Controller. In the Add New Scaffolded Item dialog, choose API Controller – Empty and click Add. In the next dialog, name the controller ContactsController. This will scaffold a blank API controller class called ContactsController.cs that looks like the following: using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc; namespace BlazorSecure.API.Controllers { [Route("api/[controller]")] [ApiController] public class ContactsController : ControllerBase { } } Add a using directive to include the BlazorContacts.Shared.Models namespace. using BlazorContacts.Shared.Models; For demonstration purposes, this API will simply store the contacts in memory. In production applications, you could use a database. Declare a collection of contacts, and add a method to populate the collection. private static readonly List<Contact> contacts = GenerateContacts(5); private static List<Contact> GenerateContacts(int number) { return Enumerable.Range(1, number).Select(index => new Contact { Id = index, Name = $"First{index} Last{index}", PhoneNumber = $"+1 555 987{index}", }).ToList(); } The GenerateContacts() method will generate and return a list of a given number of unique contacts. You could also generate contacts one by one and add them to the contacts variable. Again, this is all just sample data stored in memory, so do whatever you’d like. contacts.Add(new Contact { Id = 1, Name="First1 Last1", PhoneNumber="+1 555 123 9871" }); Next, add public methods for interacting with the API. Below are sample methods. // GET: api/contacts [HttpGet] public ActionResult<List<Contact>> GetAllContacts() { return contacts; } // GET: api/contacts/5 [HttpGet("{id}")] [ProducesResponseType(StatusCodes.Status404NotFound)] public ActionResult<Contact> GetContactById(int id) { var contact = contacts.FirstOrDefault((p) => p.Id == id); if (contact == null) return NotFound(); return contact; } // POST: api/contacts [HttpPost] public void AddContact([FromBody] Contact contact) { contacts.Add(contact); } // PUT: api/contacts/5 [HttpPut("{id}")] public void EditContact(int id, [FromBody] Contact contact) { int index = contacts.FindIndex((p) => p.Id == id); if(index != -1) contacts[index] = contact; } // DELETE: api/contacts/5 [HttpDelete("{id}")] public void Delete(int id) { int index = contacts.FindIndex((p) => p.Id == id); if (index != -1) contacts.RemoveAt(index); } There are two GET methods, one for fetching all contacts and one for getting individual contacts by ID. Next, there is a POST method for adding a new contact. The PUT method can be used to update an existing contact, and the DELETE method will delete a contact by ID. One final note regarding the BlazorContacts.API project: I am using the following launch settings (Properties > launchSettings.json) to start the project on. This address will be used when configuring the Identity Server authentication and when setting the base api address for the web frontend. { "iisSettings": { "windowsAuthentication": false, "anonymousAuthentication": true, "iisExpress": { "applicationUrl": "", "sslPort": 0 } }, "$schema": "", "profiles": { "IIS Express": { "commandName": "IISExpress", "launchUrl": "api/contacts", "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" } }, "BlazorContacts.API": { "commandName": "Project", "launchUrl": "api/contacts", "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" }, "applicationUrl": "" } } } Create the Blazor Web App Now it is time to create the web interface to interact with the API. Create a New Project within the BlazorContacts solution. Select Blazor App and name the project BlazorContacts.Web. On the next page, select Blazor Server App. First, from the new BlazorContacts.Web project, Add > Reference that points to the BlazorContacts.Shared project, just like you did when creating the API. Then add a new folder called Services to the BlazorContacts.Web project. Right-click the new folder and Add > New Class called ApiService.cs.. using BlazorContacts.Shared.Models; using System.Collections.Generic; using System.Net.Http; using System.Text.Json; using System.Threading.Tasks; namespace BlazorContacts.Web.Services { public class ApiService { public HttpClient _httpClient; public ApiService(HttpClient client) { _httpClient = client; } public async Task<List<Contact>> GetContactsAsync() { var response = await _httpClient.GetAsync("api/contacts"); response.EnsureSuccessStatusCode(); using var responseContent = await response.Content.ReadAsStreamAsync(); return await JsonSerializer.DeserializeAsync<List<Contact>>(responseContent); } public async Task<Contact> GetContactByIdAsync(int id) { var response = await _httpClient.GetAsync($"api/contacts/{id}"); response.EnsureSuccessStatusCode(); using var responseContent = await response.Content.ReadAsStreamAsync(); return await JsonSerializer.DeserializeAsync<Contact>(responseContent); } } } In this example, I have written two methods for the ApiService class. One will return a List<Contact> and the other will return an individual Contact by its unique ID property. You could also just use a single method that returns a string and perform the deserialization in your page controller, instead. return await response.Content.ReadAsStringAsync(); Now that the service is created, you must register the HttpClientFactory interface. Still in BlazorContacts.Web , open Startup.cs and locate the public void ConfigureServices(IServiceCollection services) method and register the ApiService typed client. services.AddHttpClient<Services.ApiService>(client => { client.BaseAddress = new Uri(""); }); Go ahead and assign BaseAddress to the address where your web API is located, as shown above. The path argument in the the GetApiAsync() method of the ApiService class will append to this base address. Passing “api/contacts” to the method, for example, will fetch the result from which is configured in the API’s ContactsController.cs file to return a list of all contacts. Create the Blazor Page All that is left is to create Blazor pages that will interact with the API, through the ApiService. Create a new Razor component in the Pages folder called Contacts.Razor. To make this page appear at the /contacts route, add a page directive to the top of the page. @page "/contacts" Next, inject the HttpClientFactory interface you previously registered, and import the namespace of the shared contact model. @inject Services.ApiService apiService @using BlazorContacts.Shared.Models You can use the service in your page code as follows. In this case, I am overriding the OnInitialized method of the Blazor component so I can use the List<Contact> in the web interface. List<Contact> contacts; protected override async Task OnInitializedAsync() { contacts = await apiService.GetContactsAsync(); } A complete Blazor page that relies on apiService.GetContactsAsync() may look like the following. @page "/contacts" @inject Services.ApiService apiService @using BlazorContacts.Shared.Models <h3>Contacts</h3> @if (contacts == null) { <p><em>Loading...</em></p> } else { <table class="table"> <thead> <tr> <th>ID</th> <th>Name</th> <th>Phone Number</th> </tr> </thead> <tbody> @foreach (var contact in contacts) { <tr> <td>@contact.Id</td> <td>@contact.Name</td> <td>@contact.PhoneNumber</td> </tr> } </tbody> </table> } @code { List<Contact> contacts; protected override async Task OnInitializedAsync() { contacts = await apiService.GetContactsAsync(); } } To use the GetContactByIdAsync() method, simply pass the id value of the contact whose information you want to retrieve to the method. Contact contact3 = await apiService.GetContactByIdAsync(3); Putting it all together To test your Blazor solution, you will need to launch the Web API and the Blazor App at the same time. To do this, you must configure multiple projects to startup when you launch the debugger. Right click the BlazorContacts solution in the Solution Explorer. Select Set StartUp Projects. In the dialog that opens, choose Multiple startup projects and set the action for BlazorContacts.API and BlazorContacts.Web to Start. Now, when you start debugging, the API will run on and the web server will run on another port. I have configured mine to run on, for example. When you navigate to the /contacts page, the server-side Blazor frontend will fetch data from the external API using techniques that will properly dispose HttpClient and not result in socket exhaustion. Source code for this project is available on GitHub. In the coming tutorials, we will build off this application by adding IdentityServer4 protection to the API. 4 thoughts on “Blazor, HttpClientFactory, and Web API” Hi, pretty nice. I would like to separate the webapi project from the blazor project. Try to follow your step by step example. But i’am in trouble with starting all together in VS2019. If i set the class-library as a startup, it says “a project with output type class library cannot be started directly”. Yepp, but if i change the start-project to “whatever, webapi or blazor-web” it didn’t start together. Try to compare everything with your github project, but can’t figure out the difference. Any hint would be very fine You won’t set the Shared project as a startup project, but you do need to add it as a reference to each of the projects that depend on it, such as the webapi and Blazor projects. You’ll then use multiple startup projects, setting both the webapi and Blazor projects to Start. I hope that helps! If you get stuck, let me know Thx alot again! Now it works, but anyway can’t figure out why it is not working at first hand. I’ve to create a new Solution (Blazor) add the two existing Projects, did the referencing … and it works. I think the first solution was kinda broken. So if someone has the same problem: delete the solution, create a new and add the projects again … in Creating the Blazor App… .” I cant see where this is being done only injecting in IHttpClient…. Can you help?
https://wellsb.com/csharp/aspnet/blazor-httpclientfactory-and-web-api/
CC-MAIN-2020-16
en
refinedweb
helper class for notifying XPropertyChangeListeners More... #include <shapepropertynotifier.hxx> helper class for notifying XPropertyChangeListeners The class is intended to be held as member of the class which does the property change broadcasting. Definition at line 101 of file shapepropertynotifier.hxx. constructs a notifier instance Definition at line 94 of file shapepropertynotifier.cxx. Definition at line 99 of file shapepropertynotifier.cxx. Definition at line 147 of file shapepropertynotifier.cxx. is called to dispose the instance Definition at line 159 of file shapepropertynotifier.cxx. References aEvent, and m_xData. notifies changes in the given property to all registered listeners If no property value provider for the given property ID is registered, this is worth an assertion in a non-product build, and otherwise ignored. Definition at line 113 of file shapepropertynotifier.cxx. References aEvent, DBG_UNHANDLED_EXCEPTION, Exception, m_xData, and cppu::OInterfaceContainerHelper::notifyEach(). Referenced by SdrObject::notifyShapePropertyChange(). registers an IPropertyValueProvider Definition at line 103 of file shapepropertynotifier.cxx. References ENSURE_OR_THROW, and m_xData. Definition at line 153 of file shapepropertynotifier.cxx. Definition at line 136 of file shapepropertynotifier.hxx. Referenced by addPropertyChangeListener(), disposing(), notifyPropertyChange(), registerProvider(), and removePropertyChangeListener().
https://docs.libreoffice.org/svx/html/classsvx_1_1PropertyChangeNotifier.html
CC-MAIN-2020-16
en
refinedweb
Here is the program I created! Don't forget to add -lpthread in the linker options if you are running this! Code: Select all #include <stdio.h> #include <pthread.h> typedef struct { int start; int end; int step; }data; int isPrime(long int number) { long int i; for(i=2; i<number; i++) { if(number % i == 0) { //not a prime return 0; } } return number; } void calcPrimes(int start, int stop, int step) { long int s; for(s=start; s<=stop; s+=step) { if (isPrime(s) > 0) { //Its a prime number!!! } } } void* thread1Go() { calcPrimes(3, 100000, 8); //stepping 8 numbers for 4 cores return NULL; } void* thread2Go() { calcPrimes(5, 100000, 8); //starting thread 2 at the next odd number jumping 8 spaces for 4 cores return NULL; } void* thread3Go() { calcPrimes(7, 100000, 8); // starting thread 2 at the next odd number and jumping 8 spaces for a 4 core run return NULL; } void* thread4Go() { calcPrimes(9, 100000, 8); // think you get it. return NULL; } int main() { printf("Calculate Prime Numbers\n"); printf("==================================================\n\n"); //create the threads pthread_t thread0; pthread_create(&thread0, NULL, thread1Go, NULL); pthread_t thread1; pthread_create(&thread1, NULL, thread2Go, NULL); pthread_t thread2; pthread_create(&thread2, NULL, thread3Go, NULL); pthread_t thread3; pthread_create(&thread3, NULL, thread4Go, NULL); //wait for threads to join before exiting pthread_join(thread0, NULL); pthread_join(thread1, NULL); pthread_join(thread2, NULL); pthread_join(thread3, NULL); return 0; } And my results were not quite what I was expecting... sort of! 1 thread = 33.016 s 2 thread = 16.531 s 3 thread = 16.637 s <= WTF!!! 4 thread = 8.871 s I noticed that running on 3 threads the activity seemed only to be divided by 2 cores.... thought I would see 3 active cores and one ticking over? But its great to see a massive performance increase by having the extra cores to play with. I thought this was interesting so I shared it.
https://lb.raspberrypi.org/forums/viewtopic.php?p=838760
CC-MAIN-2020-16
en
refinedweb
Hello everyone, Welcome to my new article, where we will explore React Native CheckBox Component by creating a quiz app screen. The concept is pretty simple and straightforward. The Checkbox component is best used in Quiz apps, so to Demonstrate how it works, I think we should make one and explore from there. So let’s get into it. QUIZ App UI Concept The example we are building will be pretty simple and clean. Most of Quiz apps screens out there usually contain a heading text with the question of the quiz. The list of potential answers selections, and a submit button. So, This is what we will be building. Environment Setup To get started as usual create a new project', }, }); We will also need to install native base, since we are going to use it’s Checkbox component. To install native base, follow these instructions. Install the package npm install native-base --save If you are using React Native Cli, one more step is required react-native link Now we are ready to build our quiz app example. Getting Started Import the Checkbox component from native base import {CheckBox} from "native-base" Also import the TouchableOpacity from React Native, since we have to add a submit button import { StyleSheet, Text, View, TouchableOpacity } from 'react-native'; Our initial state will just have a single property to hold the users’ answer state={ selectedLang:0 } in the root view, we want to add a header Text at the top with a bold and clean color, for the quiz question. <Text style={styles.header}>What's your favorite programming language?</Text> With the style below header:{ fontSize:25, fontWeight:"bold", color:"#364f6b", marginBottom:40 } Checkbox Component Now or the checkbox component part, we are going to have an item View to hold the checkbox and the text option. We also want to make the text dynamic and changes color and thickness when its option is selected. Plus the checkbox’s functionality update onPress and have a clean color. To Achieve this, our code will look something similar to this. <View style={styles.item} > <CheckBox checked={this.state.selectedLang===1} color="#fc5185" onPress={()=>this.setState({selectedLang:1})}/> <Text style={ {...styles.checkBoxTxt, color:this.state.selectedLang===1?"#fc5185":"gray", fontWeight:this.state.selectedLang===1? "bold" :"normal" }} >Python</Text> </View> item:{ width:"80%", backgroundColor:"#fff", borderRadius:20, padding:10, marginBottom:10, flexDirection:"row", }, checkBoxTxt:{ marginLeft:20 }, Add more checkbox items for each potential answer for the question. in my case I will add 3 more. And finally, a submit button with the same checkbox color and a simple submit text <TouchableOpacity style={styles.submit}> <Text style={{color:"white"}}>SUBMIT</Text> </TouchableOpacity> submit:{ width:"80%", backgroundColor:"#fc5185", borderRadius:20, padding:10, alignItems:"center", marginTop:40 } And there you have it a simple yet elegant React Native Checkbox Component Example, by building a Quiz app screen. Final Result I have prepared a github repository for the project and an expo project as well, feel free to use them at your will. Recommended Articles React Native Countdown Timer Example Using MomentJs React Native Camera Expo Example React Native Accessibilityinfo API React Native Flatlist Example React Native Screen Transitions
https://reactnativemaster.com/react-native-checkbox-component-example/
CC-MAIN-2020-16
en
refinedweb
Which Powershell command in the PowerCLI module for VMware ESX used to interact with UI apps? By ur, in PowerShell Recommended Posts Similar Content -It. - By antonioj84 I need some help with the powershell code below #include <AutoItConstants.au3> #include <Array.au3> #RequireAdmin $PS='Get-NetConnectionProfile | Where-Object { $_.NetworkCategory -match "$Public" } | Set-NetConnectionProfile -NetworkCategory Private' $sCommands = "powershell -Command " & $PS &"" $iPID = Run(@ComSpec & " /k " & $sCommands, "", @SW_SHOW , $stdout_child) - By TheyCallMeBacon Has anyone had success managing LAPS with AutoIT? (LAPS is Microsoft's Local Admin Password Solution.) I am running v3.3.14.2 and Powershell 5.1.17134.858 on Windows 10 1803 build 17134.885. I have read the entire AutoIT Help file, all of the AD UDF scripts and supporting HTML files, and a large part of the Internet and have researched myself into paralysis. My company has more than one domain with two-way trusts and use LAPS on each domain. At present, we remote in to a jump box in each domain when we need to manage a device there. I want to build a multiple-domain console that works just like the LAPS UI, but allows the user to select a domain via pull-down. At this point, I can't even get the crazy thing to work on the current domain. If I feed it $sComputername = 'T4211BLC1' $sComputerName = GUICtrlRead($idComputerName) $iPID = Run('powershell.exe -executionpolicy bypass Get-AdmPwdPassword "' & $sComputerName & '"', "c:\", @SW_Show, $STDOUT_CHILD) ; Wait until the process has closed using the PID returned by Run. ProcessWaitClose($iPID) ; Read the Stdout stream of the PID returned by Run. While 1 $sOutput = StdoutRead($iPID) if @error then ExitLoop if $sOutput <> "" Then $sStdout = $sStdout & @CRLF & $sOutput WEnd sends this to the console: Get-AdmPwdPassword : The term 'Get-AdmPwdPassword' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + Get-AdmPwdPassword T4211BLC1 + ~~~~~~~~~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (Get-AdmPwdPassword:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException But if I put this on the Windows command line: powershell.exe -executionpolicy bypass Get-AdmPwdPassword "T4211BLC1" ...it runs perfectly. ComputerName DistinguishedName Password Expiration Timestamp ------------ ----------------- -------- ---------- T4211BLC1 CN=T4211BLC1,OU=GPO Computers Testing OU,O... YQc7Cl39wFrIF5 6/10/20... So (if you're still awake), Why can't Powershell find 'Get-AdmPwdPassword' when called from within AutoIT? Why can't I read STDOUT? FYI - I've tried ShellExecute, and calling a .ps1 from the script, even Run('cmd /k ...) and I get the same result - Powershell doesn't recognize the cmdlet. Thanks in advance!! -
https://www.autoitscript.com/forum/topic/201167-which-powershell-command-in-the-powercli-module-for-vmware-esx-used-to-interact-with-ui-apps/?tab=comments
CC-MAIN-2020-16
en
refinedweb
This is a playground to test code. It runs a full Node.js environment and already has all of npm’s 400,000 packages pre-installed, including react-spartez-support-chat-widget with all npm packages installed. Try it out: require()any package directly from npm awaitany promise instead of using callbacks (example) This service is provided by RunKit and is not affiliated with npm, Inc or the package authors. Support Chat Widget is a ReactJS component allowing easy integration of your React webapp with Chat for Service Desk. npm install react-spartez-support-chat-widget Your package.json must look like this { "dependencies": { "react": "^16.6.0", "react-dom": "^16.6.0", "react-scripts": "2.0.5", "react-spartez-support-chat-widget": "^0.0.4" } } import React from 'react'; import SupportChatWidget from 'react-spartez-support-chat-widget' . . . <SupportChatWidget url={''} portal={1} // delay={100} // delay between page load and chat load in milliseconds // container={'spartez-support-chat-container'} // ID of the page element that will be replaced by chat // iconClass={'spartez-support-chat-container-icon'} // additional class added to the chat icon // chatClass={'spartez-support-chat-container-chat'} // additional class added to the chat widget /> . . . npm install npm run start(so webpack can watch the files and build on every change) npm link npm link react-spartez-support-chat-widget npm run build npm publish
https://npm.runkit.com/react-spartez-support-chat-widget
CC-MAIN-2020-16
en
refinedweb
Doing my best to work smart, and not hard... Those of you that know me IRL, are aware that I work as a civil engineer in Canada -- typically on municipal type projects for sewers, pump stations, water treatment, and roadway design. That sort of stuff. Either on the ground, or under it (generally). I've found myself recently working very very iteraviely on some concept designs in AutoDesks Civil3D (very powerful, modern day engineering tool) -- but it felt tedious, and clunky, and involved a fair amount of going back and forth w/ spreadsheets to review / tweak / confirm designs. Enter: Blender, and Python! Currently, I'm very much in the beginning phases of understanding Blenders python API that they've put together to allow devs / designers to create / manipulate objects -- and I gotta say that I'm pretty impressed with the quality of documentation that they've put together. Something that I could learn from for my own projects and scripting adventures -- same with @exhaust. Here's the gist of my simple script that I hacked together as a bit of a proof of concept: from mathutils import * from math import * import bpy D = bpy.data # Find Site Surface meshes = D.meshes mesh = 0 for object in meshes: if object.name == "Surface": mesh = object # Find Lots lots = [] ROWs = [] for poly in mesh.polygons: if poly.material_index == 1: lots.append(poly) else: ROWs.append(poly) print("Found a bunch of lots!") # print(lots) # Find lot edges shared w/ ROWs row_verts = [] for face in ROWs: for vertex in face.vertices: if vertex not in row_verts: row_verts.append(vertex) print(row_verts) for lot in lots: print("Checking Lot No. " + str(lot.index)) counter = 0 row_pts = [] lot_verts = list(lot.vertices) set_lot = set(lot_verts) set_row = set(row_verts) lot_ROW_pts = list((set_row-(set_row-set_lot))) print(lot_ROW_pts) if len(lot_ROW_pts) == 2: frontage_vector = (mesh.vertices[lot_ROW_pts[0]].co - mesh.vertices[lot_ROW_pts[1]].co) print("Frontage Vector:", frontage_vector) frontage_vector = frontage_vector.normalized() print("IP#1:",mesh.vertices[lot_ROW_pts[0]].co) print("IP#2:",mesh.vertices[lot_ROW_pts[1]].co) print("Frontage Unit Vector:", frontage_vector) san_service_location = Vector( ( (mesh.vertices[lot_ROW_pts[0]].co.x+mesh.vertices[lot_ROW_pts[1]].co.x)/2, (mesh.vertices[lot_ROW_pts[0]].co.y+mesh.vertices[lot_ROW_pts[1]].co.y)/2, (mesh.vertices[lot_ROW_pts[0]].co.z+mesh.vertices[lot_ROW_pts[1]].co.z)/2 ) ) print("SAN:",san_service_location) wat_service_location = san_service_location+frontage_vector*2 print("WAT:",wat_service_location) bpy.ops.mesh.primitive_cube_add(location=san_service_location) bpy.context.active_object.name = "SAN Service" bpy.ops.mesh.primitive_ico_sphere_add(location=wat_service_location) bpy.context.active_object.name = "WAT Service" My "proof of concept" script above, generally works as follows: - I've created a random "subdivision layout" consisting of private properties (shaded blue w/ a simple material), and public road right-of-ways (ROWs) (shaded grey w/ a simple material) with some undulating topography; - The script loads up the mesh ("surface" for you C3D types), and iterates through all the available faces / polygons; - If the polygon has a blue material ( material_index == 1), it adds it to the list of private properties; - If the polygon has a grey material ( material_index == 0), it adds it to the list of public ROWs; - This is obviously a huge simplification -- but I'm starting simple. - After that, it detects with vertices of each lot are shared w/ the public ROWs, so we can determine which side of the private property is the frontage -- this is where our utility services will come in; - Currently, I've only implemented support for sites that have TWO common vertices (this would be typical of a mid-block property), and have yet to support any lots w/ 3 (corner lot) or 4 (lots w/ a rear easement / alley way); - After that, we take the two vertices that define the lot frontage (which would, in the real world, have iron pins set into the ground), and create a normalized (unit) Vector()object. - For those of you that don't remember your geometry / linear algebra (or whatever branch of math vectors are most used), a unit vector is simple a vector where the LENGTH = 1. That makes it easy for use to follow along the frontage, and move along in 1m, 2m, 0.4m, or any arbitrary increments. - For starters, we set the "Sanitary service location" as being at the direct midpoint of the frontage, and set the "Water service location" 2m offset from that. - Then we create some placeholder objects so we can see that it's working. I'm pretty fired up that this seems to work nicely, and quickly. Next few steps should be: - Create gravity service pipes at ~1.2m below surface, and connect to collection / distribution main in ROW; - Gravity sewers would connect at a minimum -1% grade; - Water services can connect anywhere, as they utilize flexible tubing and don't flow by gravity; - Create a gravity sewer collection system that connects to the sanitary service connection at every private property; - Sewers should maintain a minimum depth of cover (frost protection); - Add manholes at sewer intersections and / or every 120m in linear sewer; - Disregard practicality of construction for now. I anticipate that some sewers will be really really deep; - Coming up with different alternatives will come later; - Create a water distribution system that connects to the water service connection at every private property; - This one is easier, as it doesn't need to flow by gravity; - Maintain depth of cover; - Look for conflicts between pipes; Long term goals: - Implement tools for hydraulic design, rather than just "connecting the dots"; - Apply approximate sewage generation rates for each lot (80~100% of average water consumption of 360L/person/day); - Determine required pipe diameter and slope to convey flows safely; - Implement unit rates for cost estimation; - Come up w/ various options for servicing: - 100% gravity collection to global low point; - Check for opportunities for pump stations if required; - Determine if certain properties should be serviced via individual, privately owned, pump stations rather than a gravity connection (ex -- ocean-front properties that are lower than the adjacent ROW); - Compatibility w/ LandXML schema for import / export to AutoDesk Civil3D. Any other civil engineers out here on the chain? I'd be keen to hear your thoughts on what you think may be an effective way to automate 'preliminary design' tasks like this. I think it'll be VERY complicated to script something for detailed design -- but for the early stage "options discovery", I think there's quite a bit of potential. And yes -- I know that the python code above is like very inefficient. If you've got some cool suggestions for how to improve, I'd be keen to hear it! That's a creative use of the tool. I've not really done anything with Blender, but I know it's very capable. I knew you could use Python with it. I use Python at work, but that's to generate reports from a database for now. I do like it as a language. Ah yeah, I thought it was a cool idea! Any time I can automate something, I get excited to use Python. Usually, when I play w/ Blender -- it's just making some fun animations or something like that. But more and more, I think it could be used as a powerful (and free!) engineering tool. Fun fact -- EXHAUST is built on a Django backend -- so it's pretty much just working with python (with some JS mixed in there). If you're interested in dabbling with some information, I could extend the API so you could pull your data and run some analysis on it. Ooh, that would be cool. I could do with exploring more aspects of it. There are libraries for everything. Very very cool, honey! You're so clever! Thankyouuuuuuuuuuu. Hi, @mstafford! Thank you for using the #diy tag. This post has been rewarded with BUILD tokens in the form of an upvote. Build-it is a new tribe on the steem blockchain that serves as a central hub for DIY contents on steemit. We encourage steemians to post their DIY articles via our website. Have a question? Join us on Discord and telegram This project is run and supported by our witness @gulfwaves.net. If you like what we do, vote for us as a witness on the steem blockchain Cool beans! I'll start checkin it out! damn! I finally need to teach mayself some python.. And Blender afterwards.. :D Do it. Both of them are rad.
https://steemit.com/stem/@mstafford/python-scripting-in-blender
CC-MAIN-2020-16
en
refinedweb
MySQL Connector/Python is a driver released by Oracle themselves to make it easier to connect to a MySQL database with Python. MySQL Connecter/Python supports all versions of Python 2.0 and later, also including versions 3.3 and later. To install MySQL Connector/Python simply use pip to install the package. pip install mysql-connect-python --allow-external mysql-connector-python Connecting to database and committing to it is very simple. import mysql.connector connection = mysql.connector.connect(user="username", password="password", host="127.0.0.1", database="database_name") cur = connection.cursor() cur.execute("INSERT INTO people (name, age) VALUES ('Bob', 25);") connection.commit() cur.close() connection:
https://www.pythonforbeginners.com/mysql/how-use-mysql-connectorpython
CC-MAIN-2020-16
en
refinedweb
The goto statement in C++ is used to transfer the control of program to some other part of the program. A goto statement used as unconditional jump to alter the normal sequence of execution of a program. Syntax of a goto statement in C++ goto label; ... ... label: ... - label : A label is an identifier followed by a colon(:) which identifies which marks the destination of the goto jump. - goto label: : When goto label; executes it transfers the control of program to the next statement label:. C++ goto Statement Example Program #include <iostream> using namespace std; int main(){ int N, counter, sum=0; cout<< "Enter a positive number\n"; cin >> N; for(counter=1; counter <= N; counter++){ sum+= counter; /* If sum is greater than 50 goto statement will move control out of loop */ if(sum > 50){ cout << "Sum is > 50, terminating loop\n"; // Using goto statement to terminate for loop goto label1; } } label1: if(counter > N) cout << "Sum of Integers from 1 to " << N <<" = "<< sum; return 0; }Output Enter a positive number 25 Sum is > 50, terminating loop Enter a positive number 8 Sum of Integers from 1 to 8 = 36 In above program, we first take an integer N as input from user using cin. We wan to find the sum of all numbers from 1 to N. We are using a for loop to iterate from i to N and adding the value of each number to variable sum. If becomes greater than 50 then we break for loop using a goto statement, which transfers control to the next statement after for loop body. - goto statement ignores nesting levels, and does not cause any automatic stack unwinding. - goto statement jumps must be limited to within same function. Cross function jumps are not allowed. - goto statement makes difficult to trace the control flow of program and reduces the readability of the program. - Use of goto statement is highly discouraged in modern programming languages. Uses of goto Statement The only place where goto statement is useful is to exit from a nested loops.For Example : for(...) { for(...){ for(...){ if(...){ goto label1; } } } } label1: statements; In above example we cannot exit all three for loops at a time using break statement because break only terminate the inner loop from where break is executed.
https://www.techcrashcourse.com/2016/12/cpp-programming-goto-statement.html
CC-MAIN-2020-16
en
refinedweb
. Document of TO THEREPUBLIC OF FIJIFOR ATRANSPORT INFRASTRUCTURE INVESTMENT PROJECT February 5, 2015 CURRENCY EQUIVALENTS(Exchange Rate Effective February 2, 2015)Currency Unit = Fijian DollarFJD 1 = US$ 0.495US$ 1 = FJD 2.020FISCAL YEARJanuary 1 to December 31ABBREVIATIONS AND ACRONYMSADBAsian Development BankAPAffected PersonsCEDAW Convention on the Elimination of All Formsof Discrimination Against WomenCENCountry Engagement NoteCEOChief Executive OfficerCESMP Contractors Environmental and SocialManagement PlanDADesignated AccountDPDisplaced PersonsEAEnvironmental AssessmentEIAEnvironmental Impact AssessmentEIRREconomic Internal Rate of ReturnESMFEnvironmental and Social ManagementFrameworkESMPEnvironmental and Social Management PlanEUEuropean UnionEXIMExport-ImportFJDFijian DollarFMFinancial ManagementFRAFiji Roads AuthorityGAPGender Action PlanGDPGross Domestic Product iRAPIRRLARF GoFGRM Government of FijiGrievance Redress Mechanism IBRD IEEIFR LARPM&EMoF MoITMoUNCBNTIPPAMPDO PICsPSAPSCPSTRFPRoW TIIPTLTBUN US$ WA Withdrawal Application FIJITRANSPORT INFRASTRUCTURE INVESTMENT PROJECTTABLE OF CONTENTSPageI. II. III. IV. IMPLEMENTATION .....................................................................................................11A. Institutional and Implementation Arrangements ........................................................ 11B. Results Monitoring and Evaluation ............................................................................ 12C. Sustainability............................................................................................................... 13 V. VI. EA Category Team Leader P150028 B - Partial Assessment James A. Reichert Lending Instrument Financial Intermediaries [ ]Series of Projects [ ] 01-Jan-2016 30-Jun-2020 Joint IFCNoPracticeManager/Manager Country Director Michel Kerf Pierre Guislain Franz R. Drees-Gross Neil Cook Title: Loan [ ] Credit [ ] 167.50 Financing Gap: 0.00 [ ] OtherTotal Bank Financing: 50.00 Financing Source Amount Borrower 16.80 100.70 Total 2016 2017 2018 2019 2020 2021 Annual 3.00 11.50 15.50 14.50 4.00 1.50 Cumulative 30.00 44.50 48.50 Institutional DataPractice Area / Cross Cutting Solution AreaTransport & ICTCross Cutting Areas[X] Climate Change [X] Gender Jobs Sector Transportation AdaptationMitigation CoCo-benefits % benefits %20 100 I certify that there is no Adaptation and Mitigation Climate Change Co-benefits information applicableto this project.ThemesTheme (Maximum 5 and total % must equal 100)Major theme Theme 50 ii 150.00 Technical Assistance 16.70 Capacity Building 0.80 CompliancePolicyDoes the project depart from the CAS in content or in other significantrespects? Yes [ ] No [ X ] Yes [ X ] No [ ] Explanation:The project will seek a waiver from the Board of Executive Directors of the application of the WorldBanks Procurement Guidelines and Consultant Guidelines in order to apply the Asian Development Bank's(ADB) procurement policies during implementation. ADBs procurement policies would apply and ADBwould provide procurement oversight and no objections to government procurement requests. ADB hasagreed to this arrangement, which include: (i) the eligibility of firms and individuals from all countries tooffer goods, works and services under the Project; (ii) the ineligibility of firms and individuals to offergoods, works and services under the Project if they have been declared ineligible by the World Bank inaccordance with the prevailing World Bank sanctions procedures; (iii) application of the World Bankssanctions policies and Anti-Corruption Guidelines; and (iv) the World Banks right to declaremisprocurement if the ADB declares misprocurement in accordance with the ADBs procurement policies.Does the project meet the Regional criteria for readiness for implementation?Safeguard Policies Triggered by the Project Yes [ X ]Yes No [ ]No iii Legal CovenantsName Recurrent Due Date Frequency Ongoing Description of CovenantA Project Supervision Team (PST) consisting of a procurement/project management specialist, anaccounting/financial management specialist, an environmental specialist, and a social safeguards specialist,will be established to support FRA and maintained throughout project implementation.ConditionsSource Of Fund Name Type Description of Condition Team CompositionBank StaffName Title Specialization Unit Julie Babinard Sr Transport. Spec. GTIDR E T Consultant GSURR Marjorie Mpundu Senior Counsel LEGES Manush Hristov Senior ProcurementSpecialist GGODR Shruti Pandya E T Temporary EACNF Gerardo F. Parco Senior OperationsOfficer GENDR Senior InfrastructureSpecialist Team Lead Financial ManagementSpecialist Non-Bank StaffName City John Lowsby Mbabane iv I. STRATEGIC CONTEXTA. Country Context1.Fiji is an island country located in the South Pacific Ocean about two-thirds of the wayfrom Hawaii to New Zealand. It has a territory of 47,329 square km spread over 332 islands, athird of which are inhabited. With a population of about 875,000, nearly 90 percent live on thethree main islands of Viti Levu (10,429 sq. km), Vanua Levu (5,556 sq. km), and Taveuni (470sq. km). Main urban centers on Viti Levu include Suva, the largest city and capital, Nadi, animportant center of tourism and location of Fijis principal international airport, and Lautoka,Fijis second largest city. Labasa is the largest town on Vanua Levu and is the location of FijiSugar Corporations only Vanua Levu sugar mill (Viti Levu has three mills).2.Endowed with forest, mineral, and fish resources, Fiji is one of the most developed of thePacific Island economies, although it still has a large subsistence sector. With an average grossnational income of US$4,110 per capita (2012), Fiji is also one of the wealthier countries in theSouth Pacific. Agriculture, sugar and tourism drive economic activity. Agricultural activitiesemploy around 70 percent of the labor force, but account for just 10 percent of the grossdomestic product (GDP). The sugar industry has traditionally occupied a dominant role ineconomic activity, but has declined significantly in recent years due, in large part, to the end ofpreferential tariffs. The countrys economy is increasingly dependent on tourism, with about650,000 visitors annually. Based on results released by the Fiji Bureau of Statistics inSeptember 2014, the country expects to see a record number of tourists in 2014 and to surpassthe peak of 675,000 in 2011.3.Currently, about 50 percent of Fijians live in rural areas, but that figure is expected todrop to 40 percent by 2030, reflecting the on-going rural-urban migration. Poverty is higher inrural areas, at 44 percent, compared to 26 percent in urban areas, and larger households tend tohave a higher incidence of poverty, particularly in rural locations. In 2009, just over one-thirdof the Fijian population lived in poverty. However, aggregate poverty levels disguise importantdifferences in poverty incidence between urban and rural areas. Since 2003, national povertyhas progressively dropped, but while urban poverty has declined significantly, rural poverty hasremained virtually unchanged.4.Transport infrastructure plays an important role for local economic development and inproviding access. Improved road and maritime sector assets underpin inclusive economicgrowth and social development by providing communities in rural and island areas with reliableaccess to economic opportunities, information and services. While the road network is fairlywell developed, low levels of investment have contributed to the poor condition of many ruralroads, jetties and wharves. Rural populations lack access to reliable roads and must contendwith unsafe jetties and wharves, which results in higher transportation costs for many farmersand can negatively impact island economies. It can also have detrimental effects on visits tohealth care facilities and school attendance. To help alleviate the burden of fees andtransportation costs for children from pre-primary up to grade 12, a new school grant of FijianDollar (FJD) 250 per child per school year was recently introduced. In 2014, The WorldBanks Logistics Performance Index ranked Fiji 111 (out of 160 countries), underlining the countrys high cost of logistics and transportation, and the importance of continued investmentto improve infrastructure.5.Since the early-1980s, tourism has expanded and is now one of Fijis primary economicactivities. In 2013, it accounted for FJD 1.3 billion in gross earnings, or around 40 percent oftotal goods and non-factor service exports. Tourism relies on efficient internal freightdistribution systems and access to island destinations. Passengers, including an increasingnumber of tourists, and inter-island freight rely on coastal and island jetties and wharves, manyof which have been neglected for years and are in need of rehabilitation and/or upgrade.6.Like most Pacific Islands Countries, Fiji is vulnerable to extreme weather events,including tropical cyclones, flooding, earthquakes, and tsunamis, and infrastructure throughoutthe country is at high risk of climate and disaster-related events. Fiji is one of twenty countriesworldwide that have the highest average annual disaster losses in proportion to their GDP as aresult of extreme climatic events. The total value of infrastructure, buildings and cash cropsconsidered at some level of risk in Fiji is high and estimates for asset replacement costs andeconomic losses due to extreme events in Fiji are as much as five to ten times annual GDP.17.Gender differences are strongly embedded in Fijian culture and tradition. The roles ofwomen are impacted by ethnicity and vary in degree at the household level, but male-dominatedhierarchies tend to be common regardless of ethnicity. Womens involvement in political,social and economic activities is promoted through many international and regional genderequality commitments by the government. Womens civil society organizations have beeninstrumental in getting policies and laws in place for womens rights and gender justice in Fiji.In February 2014, the Government approved the National Gender Policy, which seeks topromote gender equity and equality by removing all forms of gender discrimination andinequalities to attain sustainable development.B. Sectoral and Institutional Context8.Sectoral Context. Fijis road network consists of approximately 11,115 km of roads,including 4,250 km of main/national roads, 675 km of rural roads, 340 km of municipal roads,and 5,850 km of cane roads. An estimated 1,480 km, or around 13 percent of the network, issealed. For several years prior to 2012, maintenance and resealing work had been deferred andthe majority of roads are now in fair or poor condition, which contributes to higher vehicleoperating costs and longer travel times. Since the Fiji Roads Authority (FRA) was established in2012 and made responsible for maintaining and constructing roads, bridges, jetties and wharvesthroughout the country, the government has implemented a program to reduce the backlog ofroad maintenance. Most of the secondary and rural roads are graveled and constructed to lowerstandards, which makes them particularly vulnerable during the wet season. Many cane roadsare little more than dirt tracks without proper formation or drainage systems.9.Many of the countrys estimated 950 bridges and 45 rural jetties and wharves are in aserious state of disrepair with an estimated backlog of FJD 900 million in bridge and jetty/wharfrenewals. In an effort to reduce the risk of failure, while staying within available budgets, many1 bridges and rural jetties and wharves only received temporary repairs that were meant to last ayear or two. However, this is beginning to backfire, and since the start of 2014, on average onebridge or jetty has failed each month, resulting in significant local hardship. FRA has stated thatit can no longer guarantee the safety of these assets. In an effort to prioritize improvements,FRA has developed a risk-based approach to assess those bridges, including many with weightrestrictions, requiring urgent attention. In some cases, bridges have been closed until they can berepaired to a safe condition. Several rural jetties and wharves are in equally poor condition withsome failing. The ability of FRA to assess and maintain these structures is made more difficultbecause many are in remote locations. A further FRA objective is to ensure that, prior torehabilitation, bridges, jetties and wharves are designed to be more resilient to climate changeimpacts and for improved safety.In terms of road safety, Fijis annual casualty rate of 7 per 100,000 people (2009) ranks as10.one of the lowest of the Pacific Islands, and is also low with respect to other developingcountries. However, given the growing rate of motorization in the country and the increasedtravel speeds that are expected as a result of improved road conditions, the Government of Fiji(GoF) has stated that road safety will remain a key priority. Fiji maintains systems for datacollection and black spot identification, and has been a regional leader in road safety through itsadoption of a framework and vision outlined in the Fiji Decade of Action on Road SafetyNational Action Plan. As part of this initiative, 600 km of the countrys main road network havebeen assessed and rated for safety according to International Road Assessment Programme(iRAP) methodology. Preliminary results have revealed that some 95 percent of roads surveyedhave a dangerous 1-Star and 2-Star rating.Institutional Context. Following the September 2014 elections, the Ministry of Works,11.Transport and Public Utilities was restructured to become the Ministry of Infrastructure andTransport (MoIT) and assumed responsibility for managing policy, administration and regulatoryactivities for all modes of transport. The principal goal of MoIT is to provide an integratedtransport system that is safe, efficient, affordable, accessible, and environmentally sustainable.MoIT has several units overseeing key activities in the land and maritime subsectors, includingroads, road safety, transport planning, and shipping services: The Transport Planning Unit coordinates policy associated with transport planning,monitoring, and investment programming.The FRA, which had been under the Prime Ministers Office prior to the elections,became part of MoIT on September 25, 2014 (as per Legal Notice 46 published in theGovernment of Fiji Gazette Supplement). It is responsible for maintaining andconstructing roads, bridges, jetties and wharves throughout the country.The Land Transport Authority (LTA) is responsible for enforcing road transportregulations and addresses road user issues.The Land Transport Division is primarily responsible for formulating regulations toenable the efficient performance of LTA and the National Road Safety Council. TheUnit also plays an advisory role to the Minister on policy matters related to landtransport. The Fiji Ports Corporation Limited oversees activities at the countrys internationalports.Fiji Islands Maritime Safety Administration monitors international shipping andregulates inter-island shipping services in accordance with the requirements of theMarine Act 1986 and subsidiary Regulations. The Marine Act is based onInternational Maritime Organization Conventions, and encompasses technicallegislation(s).Government Shipping Services promotes and facilitates national sea transportationthough the provision of shipping and marine navigational aids services, and meetsFijis obligation to international maritime conventions and the maritime community.The Department of Meteorological Services provides weather forecasting services forFiji and other Pacific Island Countries (PICs), including marine and cyclone warningservices on a wider regional scale, and aviation forecasting for the Nadi FlightInformation Region. 12.Until 2011, the Department of National Roads within the former Ministry of Works,Transport and Public Utilities had been responsible for managing the countrys network of roadsand bridges. However, in an effort to strengthen performance, a 2012 Decree transferredresponsibility for maintaining road sector assets to FRA.13.FRA manages its assets by outsourcing works and services to the private sector.2 In 2013,FRA issued three, five-year road maintenance contracts with private contractors from NewZealand to rehabilitate and maintain roads within certain districts. To help manage the threecontracts and to build its capacity to manage its activities with private contractors, FRA enlistedthe support of a private consulting firm with extensive experience in the roads sector.14.Over the past several years, the transport sector has accounted for about 12 percent ofGDP and around 8.3 percent of Fijis total employment in both the formal and informal sectors.3Improving the countrys land and maritime transport infrastructure has been identified as apriority for GoF, but one of the sectors biggest challenges is the large backlog in road andbridge maintenance, which has been neglected for several years. Although average annualexpenditures on road, bridge and jetty maintenance between 2001 and 2011 averaged about twopercent of the annual budget, it was not sufficient and unrepaired road sector assets deterioratedfaster than expected. In addition, almost half of the nearly 1,500 km of sealed network is in needof reseal or rehabilitation, which has been estimated at a cost of FJD 300 million.15.Since 2012, GoF has increased funding for the sector to approximately five percent ofGDP.4 The objective is to restore Fijis road network to a steady state of repair by 2018, and toadequately maintain it thereafter. A key FRA goal is to ensure that improvements lead to moreresilient and safer roads, bridges and rural jetties and wharves. Table 1 summarizes FRAexpenditures from 2007 through 2014 (estimated).2 34 Some maintenance activities on Outer Islands are carried out by the Ministry of Rural and Maritime Developmentunder a Memorandum of Understanding with FRA.Green Growth Framework for Fiji, April 4, 2014.Based on a GDP of US$4 billion in 2013. Capital WorksMaintenanceAdministrationTotals: 200731.031.32.765.0 200858.329.13.691.0 200963.737.53.1104.3 201073.329.23.4105.9 2011133.430.83.4167.6 2012124.887.313.8225.9 2013348.795.94.5449.1 20146387.897.211.5496.5 Spending on maintenance was relatively flat through 2011. However, with the16.establishment of FRA in 2012, the maintenance budget nearly tripled and has remained at thatlevel since. Spending on capital works increased substantially in 2011 and again in 2013, due toseveral large road improvement projects, including rural road upgrades financed through theExport-Import (EXIM) Banks of China and Malaysia. FRAs overall spending in 2013 was FJD449 million, and the estimated budget of FJD 496 million for 2014 is well above the proposedfunding envelope provided by the project. Administration costs rose significantly in 2012 withthe establishment of FRA.17.As an indication of the impact of extreme climatic events on the transport sector, FRAs2014 budget for emergency repairs was FJD 24 million, which was principally required to carryout emergency repairs of roads, bridges and jetties that were damaged by weather events.18.Donor Activity. In the wake of the 2014 elections, several development partners arepreparing to increase their assistance to GoF. In the past, Australia, New Zealand, the EuropeanUnion (EU) and United Nations (UN) agencies have extended support for health and educationservices, and small- and medium-sized enterprise development, and these donors are expected tocontinue assistance in these sectors. The EU also plans to support the sugar sector and to supportalternative livelihoods through agriculture.UN agencies have provided support forenvironmental, gender, governance, human rights, and capacity building activities in severalsectors. The Government of Japan has funded water supply, agriculture, rural transport, andeducation projects, as well as disaster relief. In addition to implementing the transport sectorproject which this project will jointly co-finance, the Asian Development (ADB) is helping Fijiupdate its land and maritime transport strategies, and supporting an urban development masterplan for the greater Suva area. The Governments of the Peoples Republic of China andMalaysia have emerged as major new sources of finance for roads, hydropower development,social housing, and hospitals. Table 2 provides a snapshot of recent and planned donor-fundedinvestments in the transport sector. Expenditures from 2007 through 2011 occurred under the Department of National Roads, while expenditures from2012 to 2014 were under FRA. Source: Ministry of Finance Annual Reports.Figures for 2014 are estimates. China EXIM Malaysia EXIMWorld Bank ProjectThird Upgrading Road ProjectEmergency Flood Recovery ProjectTransport Infrastructure Investment SectorProject (joint with World Bank (WB)Sigatoka-Sera Road Improvement ProjectBuca Bay-Moto Road Improvement ProjectNabouwalu-Dreketi Road Upgrading ProjectQueens Highway Upgrade ProjectTransport Infrastructure Investment Project(joint with ADB) Amount76.1017.60100.00 Timeframe1997-20122009-20142015-2020 48.2053.80123.9026.8050.00 2010-20132010-20132012-20152010-20142015-2020 19.Rationale. The World Bank has supported comparable road projects in the Pacific andglobally for many years. In particular, the World Bank brings significant experience inimproving standards for building road sector assets. Through on-going IDA-financed projects inSamoa and Papua New Guinea, similar work is being undertaken to update design standards andconstruction specifications for roads and bridges to ensure uniformity and incorporate climatechange adaptation considerations for more climate resilient road sector assets. Through itsGlobal Road Safety Facility, the World Bank is a global leader in road safety and activelysupports initiatives to improve safety for all road users. Currently under consideration is aPacific Islands regional road safety project that is proposed to be based in Fiji and modeled onrecent work in Argentina. It is anticipated that there would be synergies between this regionalinitiative and activities planned under the Transport Infrastructure Investment Project (TIIP).20.Supporting more transparent public contracting systems is an area in which the WorldBank has considerable knowledge. Through the Rural Development Project in the Philippines,which makes extensive use of geotagging,7 the World Bank assisted in the development of apublicly accessible website showing the various stages of the procurement process from precontract award to contract award and implementation. This system has enhanced how publiccontracts are monitored, and has received international awards and citations as an effective toolto increasing transparency in project implementation. FRA has expressed interest in pilotingopen contracting and the use of geotagging under TIIP.21.Through a combination of implementing projects worldwide and having sector specialistsparticipate in project preparation and semi-annual implementation support missions, the WorldBank can both gather and apply lessons learned and best practice. This regular participationgenerates ample opportunities to provide guidance and strengthen the performance and7 Geotagging makes use of photos to virtually monitor and evaluate physical progress on works. management of road sector entities, including addressing environmental and social issues, andincreases the chances of higher quality results during construction.C. Higher Level Objectives to which the Project Contributes22.TIIP will be the first major Bank project in Fiji since 1992 and was developedconcurrently with the Country Engagement Note (CEN). The CEN and TIIP are expected to bepresented to the Board together in March 2015.23.The CENs engagement is structured around the themes of: (i) deepening the World BankGroups relationship with, and knowledge of, Fiji; (ii) promoting macro-economic stability andinclusive private-sector led growth; and (iii) protecting vulnerable populations. The projectwould contribute to all three priorities, but particularly the second and third themes. Byupgrading and/or rehabilitating main, municipal, and rural roads, bridges, jetties and wharves,access to economic opportunities, information and services will be improved. By incorporatingrecommendations for more climate and disaster resilience in the designs for roads and bridges tobe rehabilitated, the reliability and safety of assets would be strengthened. The project wouldsupport the Governments broader measures to develop safer road systems by ensuring thatdesigns for assets to be rehabilitated include elements of road safety. Improved access willreduce temporary and/or longer-term breaks in the road network, which in turn would encourageprivate sector led growth. Improvements in rural locations, where the majority of poor peoplelive, would help to reduce inequality by targeting Fijis most vulnerable areas and communities.For the same reasons, TIIP would also contribute directly to the World Bank Groups strategicgoals of reducing poverty and increasing shared prosperity.24.By updating standards and incorporating safety recommendations in designs, the projectwould also further Government objectives identified in its Roadmap to Democracy andSustainable Socio-Economic Development 2010-2014 and Green Growth Framework for Fiji toprovide safe, efficient, affordable and accessible transport systems to all users, and advance keyactions outlined in the Fiji Decade of Action on Road Safety National Action Plan.II. PROJECT DEVELOPMENT OBJECTIVESA. PDO25.The projects development objective (PDO) is to improve the resilience and safety of landand maritime transport infrastructure for users of project roads, bridges and rural jetties andwharves.Project Beneficiaries26.The key beneficiaries of this project are the users of Fijis transport infrastructure, whichcomprise a majority of the population of 875,000 (including 433,000 women) who all makeregular use of the roads, bridges and rural jetties and wharves which form the countrystransportation network. Rural and village communities in hinterland areas located near roads andbridges, as well as commercial industries involved with tourism, agriculture, forestry and sugar sectors, are also expected to benefit. Given the dearth of information regarding transportinfrastructure use rates, the exact quantity and nature of beneficiaries cannot be quantified.27.The project would seek to prioritize investments in high poverty areas, including thepoorer northern islands of Vanua Levu and Taveuni. This prioritization would be realized byincluding poverty factors in the multi-criteria analysis used for the selection of sub-projects.28.According to the World Banks regional hardship and vulnerability study for PICs in2014, which assessed extreme poverty for 11 PICs, the location of the proposed Year 1 subprojects in the upper Sigatoka Valley on Viti Levu has a hardship headcount ratio in excess of 60percent, which is almost double the national rate of 35 percent.A Gender Action Plan (GAP) has been developed to ensure that engineering designs29.consider womens needs for safe travel. Road and bridge designs will include safe pedestrianaccess, such as walkways, guardrails and street lights. For some bridges where it is appropriate,stairways will be constructed to provide safe access from the road level to the waterway below toenable washing of clothes, further benefiting female pedestrians. Construction of certain typesof community assets, such as concrete wash tubs including soak pits, will also be considered andmonitored under TIIP.PDO Level Results Indicators30. Length of roads rehabilitated to revised standards for resilience and safety (km).Length of roads with a minimum 3-Star rating for vehicle occupants based on iRAPassessments (km).8Population at a reduced risk of bridge failure (number) 9Number of rural jetties/wharves requiring high priority attention (number).10 31.Annex 1, Results Matrix and Monitoring Framework, includes additional information onproject indicators. Star ratings for vehicle occupants are based on an assessment of various road attributes that are known to affectthe incidence and severity of crashes (e.g., width of lanes, rumble strips, pavement condition, traffic speed,curvature of the road, etc.). Unsafe roads are categorized as 1-Star and the safest roads are categorized as 5-Star.A 3-Star road is expected to substantially reduce the overall fatal and serious injury crash costs, in comparison to1-Star or 2-Star roads.9Relates to FRAs list of 85 bridges with high priority for rehabilitation. It measures the population in thecatchment areas served by these bridges that are at risk of losing access in the event of bridge failure, as measuresto improve bridges will increase the population who are at a reduced risk of bridge failure.10High priority refers to FRAs prioritization rating for the condition of a rural jetty or wharf that may includedeficiencies in its serviceability, including, but not limited to, structural integrity, vulnerability to adverse weatherevents, and/or user safety issues. carry out independent road safety audits, and implement the use of open contracting, which willinvolve regularly updating information and progress on FRA road contracts (including geotaggedphotos) on a publicly accessible website to enhance transparency of procurement.41.Component 3: Capacity Building (est. US$0.80 million). As a parallel activity, ADBwill provide a grant for initiatives to build government capacity across the transport sector.Areas of focus are expected to include planning, assessing and managing infrastructure projectsfor staff from various ministries and agencies, including FRA.42. B. Project Financing43.The World Bank would provide US$50.00 million to the project, which will be jointly cofinanced with ADB. ADB will extend US$100.70 million towards the project with GoFproviding ten percent in counterpart funding, or about US$16.80 million. Table 3 provides thefinancing plan with contributions of the various stakeholders. Table 4 estimates project costs bycomponent. Retroactive financing of up to US$10.00 million will be available for eligibleexpenses paid after the Board Date and before signing of the Legal Agreements.Table 3: Project Financing PlanSource Amount(US$ mil.)World Bank50.00Asian Development Bank100.70Government of Fiji16.80Totals:167.50 OriginalTotal Share(%)306010100 Cost(US$ mil.)150.0016.700.80167.50 1. Infrastructure Improvements2. Technical Assistance3. Capacity BuildingTotals: Share of total(%)9010100 44.GoFs counterpart funding, which will be made through annual financial contributions toFRA, will be used to pay for civil works, consulting services, taxes and duties, salaries of PSTstaff, and contingencies. GoF will provide in-kind contribution for any land acquisition neededand annual project financial audits.45.ADBs project was approved by its Board of Directors on December 5, 2014 as a standalone operation, which ADB Management intends to restructure in accordance with ADB10 policies following approval by the IBRD Board of this project. The ADB restructuring willinvolve a waiver of some provisions of ADBs procurement and consultant policies that limiteligibility to offer goods, works and services to firms and individuals from its member countries,while also recognizing the ineligibility of those firms and individuals declared ineligible by theWorld Bank.IV. IMPLEMENTATIONA. Institutional and Implementation Arrangements46.Since the first joint Bank-ADB re-engagement mission to Fiji in June 2013, GoF officialshave indicated their strong preference for the World Bank and ADB to prepare a joint operation,rather than separate projects. There is concern that after such a long time without having workedclosely with either donor (except for legacy ADB projects), it would be challenging andunnecessarily difficult to follow separate processes. In subsequent discussions with officials,they expressed interest in borrowing from the World Bank and ADB, but were consistent in theirmessage that the two agencies must minimize duplication, transaction costs and complexity bydeveloping a unified approach, both during preparation and throughout implementation.47.The World Bank and ADB teams subsequently devised a common approach andframework to address each donors respective environmental and social requirements, as well asfinancial management and disbursement arrangements (Annex 4 contains a matrix that detailshow ADB and the World Bank will carry out these key activities during implementation). Thereis also agreement that given the large number of tenders expected under the joint operation, onlyone donors procurement procedures should be utilized during implementation. For severalreasons, including the advanced stage of ADBs project preparation,11 the larger size of the ADBloan, and because ADB has a full-time presence in Suva with an office and staff, it was agreedthat ADB would be the lead cofinancier and that the World Bank team would seek a waiver fromits Board of Executive Directors to use ADB's procurement policies and procedures duringimplementation. Using one of the organizations procedures to procure works, goods andservices would eliminate the need to carry out separate tenders according to different procedures.It would also open the door for joint co-financing, whereby funding from each agency wouldfinance a portion of each project activity. As such, World Bank management has requestedBoard approval of a waiver of the World Banks Procurement Guidelines and the of ADBsprocurement policies and procedures during implementation. If the waiver is not approved,procurement under TIIP would be implemented under parallel financing arrangements. 48.The executing agency will be the Ministry of Finance (MoF), and the implementingagency will be FRA.12 This would be the first time FRA has worked with the World Bank,although it does have some limited experience in implementing projects financed by ADB.Since 2013, a consulting firm with expertise in transport asset management has supported FRAwith its program of transport sector capital works and maintenance. The firm is familiar withinternationally accepted tendering practices and requirements for environmental impactassessments/environmental and social management plans and land acquisition and resettlementplans (LARPs). The 2005 Environmental Management Act requires that any proposal to be1112 ADBs project was negotiated on October 22, 2014 and approved by its Board on December 5, 2014.Since the 2014 elections, the Ministry of Strategic Planning was merged with MoF. 11 The PAM is ADBs equivalent to a project operations manual, or POM, for a Bank project. 12 which has well established data collection methods. Those TIIP indicators that FRA does notroutinely gather will not impose additional costs as data and information needed to calculatethem are readily available.55.Social impacts of sub-projects will be measured through household income andexpenditure surveys at inception, mid-term and project close. These will be included as activitiesto be carried out by the design and supervision consultants.56.FRA will issue quarterly progress reports that will be due the last day of March, June,September and December. These will be forwarded to the World Bank within 30 days of the endof each calendar quarter. A mid-term review will be prepared in mid-2017 (expected), and anImplementation Completion Report completed within six months of the end of projectimplementation. FRA will also monitor progress against agreed performance indicators, asdefined in Annex 1. The design and supervision consultants will work closely with and provideregular updates to the PST Manager about project progress.C. Sustainability57.The establishment of FRA, combined with GoFs substantial increases in funding for roadmaintenance in 2012, 2013 and 2014, as well as its pledge to increase funding for the sector to asmuch as five percent of GDP, demonstrates political commitment and provides a good basis forimplementing TIIP. On the back of this increased investment, local contractor capacity for roadconstruction is growing, although it remains limited for large structural projects. Internationalcontractors will need to be attracted to undertake these works. It is expected that this will beaddressed through appropriate procurement approaches, including bundling works intosufficiently large packages to offset the mobilization costs of international contractors.58.Rehabilitating and/or replacing roads, bridges and rural jetties/wharves to higher, moreclimate resilient standards is key to more sustainable infrastructure. TIIP will support this byupdating design standards and construction specifications for roads and bridges, and byconstructing sub-projects to the revised standards.V. KEY RISKS AND MITIGATION MEASURESA. Risk Ratings Summary Table Risk Category Rating Stakeholder Risk Moderate Substantial Governance Project Risk- Design- 13 14 Expected Disbursementsunder TIIPAverage Annual FRAExpenditures (2011-2014) Year 1 15 Year 2 Year 3 Year 4 Year 5 $5.0 mil. $39.5 mil. $57.8 mil. $49.7 mil. $15.5 mil. $169.3 million 62.Although numerous, small value contracts are expected under TIIP, FRA will have thesupport of the PST and the design and supervision consultants during implementation to helpprocess tenders and supervise works. As such, FRA is well positioned to disburse project fundswithin TIIPs four and a half year implementation period.VI. APPRAISAL SUMMARYA. Economic and Financial Analysis63.A key GoF objective is to develop infrastructure to support economic development, createjobs, and improve access, particularly in areas of high poverty, and a substantial part of theproject is expected to rehabilitate and improve FRA assets in rural areas. Doing so supportsinclusive growth and is an effective means of increasing access to markets, economicopportunities, and vital social services, such as health facilities and schools. 64.FRA will utilize a systematic approach involving economic and social criteria to selectroad (see Annex 3 for a description of sub-project selection). A methodologyfor selecting FRA assets to be rehabilitated has been developed and includes a number of criteria,such as: 65.Principal project costs include construction and the support of engineering consultants todesign and supervise the work of contractors. By improving the condition of key assets, costs15 In Year 1, disbursements will only be made from ADBs loan. The WB loan will be utilized from Years 2 to 5. 15 relating to disruption from premature failure and increases in user costs from imposed speedlimits and/or weight limits as a result of weakened structures would be avoided.66.Given the nature of the anticipated investments, few, if any, are expected to be favorableto private sector financing. In addition, many of the improvements on GoFs long list of possiblesub-projects for funding under TIIP are located in rural areas or along low volume roads.67.Economic Justification for Year 1 Sub-Projects. FRA has identified a long list of roads,bridges and rural jetties and wharves for possible rehabilitation under TIIP. An analysis focusingon vehicle operating costs, user time savings from avoiding lengthy diversions when crossingsare closed, and savings from reduced risk of accident and injury and avoided maintenance costswas utilized to estimate the economic impact of Year 1 sub-projects in the upper SigatokaValley. The assessment was conducted using updated inputs, including traffic counts. Theresults, including sensitivity analysis on traffic volumes and capital costs, are provided in Table6. The assessments use a discount rate of 12 percent over a 30-year period.Table 6: Results of Economic Analysis for Year 1 Sub-Projects(in FJD million)Asset EstimatedCapitalCost(FJD mil.) Base Case Traffic Volume-25% Capital Costs+25% EIRR NPV(FJD mil.) Narata Bridge 5.287 10.3% (0.284) 8.5% (0.581) 8.7% (0.674) Matawale Crossing 1.923 13.8% 0.113 9.7% (0.148) 8.6% (0.316) 68.The economic analyses of the Year 1 sub-projects in the upper Sigatoka Valley yieldedpositive economic internal rates of return (EIRR). The preferred option of replacing NarataBridge with a two-lane structure yielded an EIRR of 10.3 percent, while replacement of thedeteriorated Matawale Crossing with a new Irish Crossing had an EIRR of 13.8 percent.Insufficient returns ruled out bridge and realignment options at Matawale. Sensitivity checks ofthese returns under adverse scenarios showed they remained robust.69.While the Matawale Crossing base case produced the only positive net present value,other social values were considered in the selection of these sub-projects for improvement. TheSigatoka Valley is the most intensively farmed area of Fiji and is notable for the extent ofsmallholder and larger-scale commercial market gardening activities. The area is a majorsupplier of produce for much of Viti Levu, including nearby tourist resorts along the CoralCoast, and there are several farmer associations that export vegetables to Australia and NewZealand.70.In addition, numerous villages in the area of the sub-projects are dependent on theSigatoka Valley Road for vehicular access. The Narata Bridge provides the sole vehicular accessto the entire west bank of the Sigatoka River valley above this point, and has a populationcatchment of around 9,800 people. About 7,500 people rely on the Matawale Crossing, and local 16 residents at Narata Bridge and Matawale Crossing are reliant on the crossings for access toschools, medical facilities, and other community facilities, as well as for rural bus services to theupper Sigatoka valley. The main hospital for the area is located at Sigatoka, a distance ofapproximately fifty km.B. Technical71.Although it has only been in existence for three years, FRA is the sole agency responsiblefor the countrys roads, bridges and rural jetties and wharves. It has developed a structure toefficiently manage multiple contracts with the full-time support of a qualified consulting firm.As such, FRA is the right agency to implement TIIP and well positioned to do so effectively.A methodology and criteria for evaluating and prioritizing sub-projects for rehabilitation72.has been developed. The methodology combines cost-benefit analyses based on commonvariables, including vehicle operating costs, travel times and traffic counts, when reliable data isavailable, as well as other factors, such as levels of poverty, number of beneficiaries, locations ofessential services (clinics, hospitals, schools, etc.), and economic activities that would be servedby an improved asset.73.Works supported under TIIP will conform to international standards. Existing designstandards and construction specifications for roads and bridges will be updated to strengthenresilience to climate change and extreme weather events. However, the updated standards arenot expected until mid-way through implementation, but some of the assets, particularly bridges,are in need of urgent repair/replacement and works cannot be put off until the revised standardsare available. In such cases, specialists would be engaged through the project to provideimmediate input to designs.74.As part of the project, iRAP surveys and road safety audits will be carried out for subprojects involving roads. The surveys will assess various road attributes that are known to affectthe incidence and severity of crashes, including lane widths, rumbles strips, pavement conditions,road curvatures, etc., which subsequently inform a prioritized list of recommendations toimprove the safety of road infrastructure. Several measures might be recommended, includingreducing speeds, constructing road side barriers or improving delineation. The road safety auditswill then examine specific sub-projects to determine which of the iRAP recommendations shouldbe incorporated into the designs of assets to be improved under TIIP, or to identify othermeasures to improve safety of specific sections of road. Combined with other GoF safe systemactions to create appropriate institutional mechanisms, data collection and informationmanagement processes, modify driver behavior, and improve the safety of vehicles and postcrash care, these activities will serve to improve road safety in Fiji.75.Works for roads, bridges and rural jetties and wharves are expected to be based onconventional technologies and construction methods.C. Financial Management76.A financial management (FM) assessment was carried out in accordance with OP/BP10.00 and the Principles Based Financial Management Practice Manual, issued by the Board17 on March 1, 2010. The main FM risks relate to the FRAs (the implementing entity) lack ofexperience with Bank projects, and the added complexity of dealing with two agencies andmultiple contracts, which may result in weak controls and errors, leading both to delays and theuse of project funds not for their intended purpose. FRA will dedicate an FM officer within thePST, and common project procedures will be adopted with the World Bank and ADB processesintegrated, as much as possible. These joint FM procedures will be detailed in a clear set of FMinstructions as part of the PAM. Ongoing training will be provided as part of the World Banksregular FM implementation support. The proposed financial management arrangements satisfythe financial management requirements stipulated in OP/BP 10.00.D. ProcurementBecause of the large number of tenders expected under the joint Project, it was agreed that77.only one donors procedures to procure works, goods and services should be utilized duringimplementation. Doing so eliminates the need to carry out separate tenders according to twoprocedures, and opens the door to co-financing, whereby funding from each donor would financea portion of all project activities. Because ADB will be the lead cofinancier, the World Bank andADB Project Teams agreed that it made sense to use ADB's procurement policies and proceduresduring implementation (refer to Annex 4 for details). As such, the World Bank intends to submita policy waiver request concurrently with the TIIP to its Board of Executive Directors to allowuse of ADBs procurements policies and procedures during implementation. If the waiver is notapproved, procurement under TIIP would be implemented under parallel financing arrangements.78.Annex 3 provides an assessment of FRAs abilities to carry out procurement activitiesunder the Project, as well as an initial summary procurement plan.E. Social79.It is anticipated that TIIP will provide several positive benefits to the rural communities inthe vicinity of sub-project areas. Improving infrastructure has been identified by GoF as crucialto poverty alleviation and for access to basic public services, and this was verified by the Povertyand Social Assessment (PSA) that was developed during project preparation. The PSA identifieda number of important actions for poverty alleviation, particularly for vulnerable households.These actions, which were considered during preparation and incorporated into project safeguarddocumentation, are summarized in Annex 6. In addition, while Fijis road network is fairly welldeveloped, many farmers still do not have access to markets in particular, in the iTaukeivillages up the valleys.16 As a result, transporting goods to markets can result in hightransportation costs and product damage resulting in lower prices. The project will be designedto minimize social risks by rehabilitating existing roads, bridges and rural jetties and wharveswithin existing rights-of-way (RoW) and within formation widths. At its meeting on June 30, 2010, the Cabinet approved the Fijian Affairs [Amendment] Decree 2010, which wentinto effect forthwith. The new law effectively replaced the word Fijian or indigenous or indigenousFijian with the word iTaukei in all written laws, and all official documentation when referring to the originaland native settlers of Fiji. 18 The project triggers OP 4.12 on Involuntary Resettlement, due to the potential for small80.amounts of land acquisition or minor land impacts during implementation. FRA, ADB and theWorld Bank developed and agreed on a harmonized approach to managing environmental andsocial safeguards requirements to ensure compliance with ADBs Safeguard Policy Statement2009 (SPS) and the World Banks Safeguard Policies. In addition to compliance with ADB andWorld Bank requirements, social and environmental assessments and clearances of sub-projectsunder TIIP will comply with Fijian laws (The Land Sales Act 1974, The State Acquisition ofLand Act 1998, the Land Use Decree 2010, The Environment Act 2005 and other relevantnational legislation). ADB and the World Bank will jointly clear all safeguard documents atappropriate milestones (contract award, commencement of works, etc.), as detailed in Annex 4.81.A Land Acquisition and Resettlement Framework (LARF) was prepared to address anyland changes or impacts to livelihoods that might occur as a result of involuntary acquisition ofassets and/or change in land use, including provision for compensation and rehabilitationassistance, which may occur throughout the life of the project. A LARP was prepared for the twoYear 1 sub-projects. For subsequent sub-project preparation involving land acquisition orresettlement, GoF will be responsible for preparing LARPs to help guide the implementationprocess and serve as documentation for compensation.82.Beneficiary communities and local stakeholders were provided with relevant informationabout TIIP, its possible land acquisition requirements, and policies on compensation andentitlements during preparation of the LARF and LARP. The LARF and LARP were disclosedat the World Banks InfoShop on January 12, 2015 and on FRAs website on January 13, 2015.For each Year 1 sub-project, the LARP and an information booklet was provided to stakeholdersin English and Fijian, summarizing entitlements and other relevant information.83.The LARF requires a similar process be followed for future sub-projects. The FRA, theProvincial Council and the iTaukei Land Trust Board (TLTB) will continue to consult andengage with landowners and tenants during sub-project preparation and throughoutimplementation. Additional consultation with beneficiaries will take place after detailed designsare completed and prior to the commencement of any works to enable compilation of a fullcensus and inventory of losses. The cut-off date for entitlement eligibility will be the date acensus is completed, after which the entitlement matrix will be updated and included in finalLARPs. FRA, in coordination with community leaders and representatives from the Departmentof Lands and TLTB, will inform affected parties in advance of Government intent to acquireland, and will respond to all compensation related inquiries. All consultations with affectedcommunities will be documented and follow the requirements established in the project LARF.84.FRA will be responsible for carrying out any necessary land acquisition and formonitoring any required payments.85.The project triggers OP4.10 on Indigenous Peoples, as the majority of the beneficiaries inthe sub-project areas are expected to be iTaukei, or indigenous Fijians. The social assessmentcompleted during preparation reported that the iTaukei will constitute the overwhelming majorityof project beneficiaries living on native iTaukei land that is administered by mataqalis (clans). 19 This land cannot be sold and remains forever as property of the landowning unit, unless sold tothe State and used solely for public purposes.86.In accordance with OP 4.10, because the overwhelming majority of beneficiaries areexpected to be indigenous people, project implementation will not require a separate IndigenousPeoples Plan or an Indigenous Peoples Planning Framework. Instead, the elements of anindigenous peoples plan will be integrated into the design of sub-projects in accordance with theguidelines included in the Environmental and Social Instruments for the Pacific. A well-definedconsultation process has been identified and agreed involving free, prior and informedconsultations. Such a process took place for the Year 1 sub-projects.87.All sub-projects will be designed to be culturally appropriate with community benefits.Also, communities from areas impacted by sub-projects will be consulted to ensure broadcommunity support so that there are no pending issues, such as disputes or affected familieswithin the current alignment and RoW.88.In order to receive and facilitate the resolution of community concerns, complaints, orgrievances in sub-project areas or about the projects overall safeguards performance, aGrievance Redress Mechanism (GRM) was developed and will be utilized for each sub-project.The GRM is gender responsive and readily accessible to all community members at no cost. TheGRM uses traditional systems for conflict and dispute resolution and, as far as possible,problems, concerns and grievances will be resolved at the sub-project level. The GRM will not,however, impede access to Fijis judicial or administrative remedies. In coordination withrelevant agencies, FRA will inform beneficiary communities about the GRM.89.Gender Issues. While Fiji is a signatory to the Convention on the Elimination of AllForms of Discrimination Against Women (CEDAW), gender inequality is still pronounced,particularly in employment, public decision-making and access to assets and resources. Womenare also subject to high levels of gender-based violence. The Millennium Development Goalsreport shows that the third goal of promoting gender equality and empowering women is unlikelyto be achieved by 2015, despite progress in reducing maternal mortality, and being on track toachieve universal primary education.90.To provide better transport infrastructure, womens needs as they pertain to safety,accessibility and affordability, must be taken into consideration when policies and designs areformulated. Women, through care giving and reproductive roles, are likely to be moredisadvantaged by poor access to healthcare centers. Safety and security risks faced by girls, or along walk to school on an unsafe road, may be the impetus for parents to withdraw girls fromschool. Without adequate roads and other forms of transport, women and men may be excludedfrom participating in political and community life, as well as being economically disadvantaged.91.Under TIIP, road, bridge and jetty/wharf infrastructure will be designed to accommodateissues that are important to all, but particularly to women, and will consider street lighting forsafety at night, bus shelters, footpaths on bridges with protective railings, steps near bridges toallow access to rivers for bathing, washing and fishing, and water and sanitation facilities at ruraljetties and wharves for those waiting for ships. To measure impact on gender, an intermediate 20 output indicator will be regularly monitored to assess the number of community ancillary assetsimproved under the project that benefit women.92.A Gender Action Plan (GAP) was developed for TIIP to help ensure that womens travelneeds are considered during sub-project preparation. Through the GAP and public consultationswith affected communities that will occur as part of sub-project preparation, the views of womenwill be considered during the design phase, which will help to ensure that the safety and accessto transport infrastructure for women and vulnerable groups, including the elderly and disabledare fully considered and incorporated.F. EnvironmentThe main environmental issues are expected to be impacts on physical and biological93.resources close to road, bridge, and rural jetty infrastructure, which will be rehabilitated underTIIP. Repairs and rehabilitation works are expected to be carried out within existing alignmentsand RoWs. As such, the potential impacts of sub-projects are expected to be manageable, sitespecific and reversible. The largest likely impacts come from the operation of material quarriesand borrow pits. Nonetheless, TIIP is not expected to have significant environmental impacts, includingon natural habitats or forests. Typical construction related impacts are expected to include smallamounts of vegetation/habitat removal, dust, noise, and impacts from temporary camps. Impactson physical cultural resources (PCR) are possible, in which case the impacts would be addressedby chance-find provisions detailed in the Environmental and Social Management Framework(ESMF). There is also the possibility for unintended impacts, such as improper disposal ofexcess material or siltation of streams and rivers along alignments or within sub-project areas.94.Due to the environmental issues detailed above, including to PCR and natural habitats,TIIP triggers OP 4.01: Environmental Assessment, OP 4.04: Natural Habitats (OP 4.04), andOP4.11: Physical Cultural Resources. The ESMF screens all sub-projects for safeguardsimpacts. By project design, any candidate sub-project that would be designated as a Category Awill not be financed. Category A projects will not be financed because GoF has not worked withthe World Bank or ADB for an extended period of time, and accordingly, the Department ofEnvironment (DoE) and FRA have limited safeguard experience and capacity to work withdevelopment partner funded projects, including undertaking necessary consultations andpreparation of safeguard instruments. As such, it is considered prudent that sub-projects withpotentially significant environmental and social impacts be excluded from funding under TIIPand only Category B and C sub-projects will be funded. The ESMF will guide the FRA indetermining the proper safeguards instruments needed to prepare subsequent sub-projects basedon the results of the screening. To address issues related to potential impacts on NaturalHabitats, the ESMF will screen and assess possible impacts through environmental and socialmanagement plans/environmental impact assessments. Impacts to Physical Cultural Resourceswill also be addressed through the ESMF and EIA/ environmental and social management planESMP. Since roads will be rehabilitated under TIIP, chance finds are possible. Accordingly,chance-find procedures have been included in the ESMF and will be incorporated into workscontracts.95.Public consultations on the projects safeguards instruments were held from July 23 to 25,2014 at Narata, Rararua, Vatubalevu and Wema Villages. The ESMF, LARF, ESMP and LARP21 were subsequently disclosed at the World Banks InfoShop on January 12, 2015 and on FRAswebsite on January 13, 2015. As described above, an appropriate GRM forms part of the ESMFand will be included in all sub-projects. The GRM was disclosed to the communities for Year 1sub-projects and will be publicized in local communities adjacent to future construction activitiesassociated with TIIP, as subsequent sub-projects are identified. FRA has instituted safeguardsmonitoring procedures and will assign a Safeguards Specialist as part of the PST. In addition,ADB has an office in Suva with a safeguard specialist who will assist in the implementation ofsafeguards arrangements.96.For the Year 1 sub-projects (Narata Bridge and Matewale Crossing in the upper SigatokaValley), an environmental impact assessment in the form of an Initial EnvironmentalExamination, or IEE, was prepared and disclosed. The Environmental and Social ManagementPlan from this IEE will be included as part of the contractual obligations of the winningcontractor.97.FRAs environment manager will be responsible for implementing environmental andsocial safeguards at the sub-project level. Environmental and social safeguards specialists bothat an international and national level will be part of the Design and Supervision Consultant(DSC). These specialists will provide capacity building to screen, implement and monitor ESMFrequirements of each sub-project. 22 Project Level Baseline YR1 YR2 YR3 YR4 YR5 End Target Standardsnot revised. Designstandardsrevised. 25 75 35 45 55 65 55,000 110,000 165,000 250,000 33 32 30 29 80 90 10 1,500 2,000 2,500 4,000 5,000 23 Indicator DescriptionProject Development Objective IndicatorsIndicator Name Data Source /Methodology Quarterly FRA Number of jetties/wharvesrequiring high priority attention(number). 24 Community assets constructed that Community assets that benefit women include pedestrianbenefit women (number).infrastructure and facilities, such as footpaths on bridges,protective railings on bridges, steps near bridges to allowaccess to rivers for bathing, washing and fishing, concretewash tubs, soak pits, or water and sanitation facilities at ruraljetties and wharves for those waiting for ships. 3. Capacity BuildingTotals: ProjectCost150.001716.701.0014.000.800.500.40 0.80167.50 World ADBBank46.00 88.304.00 11.703.500.24 10.50.1500.11 0.560.350.290.7050.00 100.70 GoF15.701.001.00 0.1016.80 26 improvements on selected roads, bridges and rural jetties and wharves, such as safety furnitureand signage, and repairing or replacing existing and/or installing new streetlights.4.Sections of existing main, municipal, and rural roads, including those accessing maritimeassets, would be rehabilitated through repairs and resealing of sealed roads, and regravelling orupgrading of existing gravel roads to sealed standards.5.Bridges. The project will help address the backlog of maintenance by reducing thenumber of bridges and jetties/wharves rated as high priority19 through appropriate repair,rehabilitation, or replacement measures.With more than 900 bridges and largeculverts/crossings identified as in need of repair, the project will improve the condition ofstructures on selected roads. For some single-lane bridges, it is possible that their decks could bewidened to accommodate two-lane traffic and sidewalks. Also, the hydraulic capacity could beincreased by, for example, raising deck heights to better accommodate water levels and flowsduring flood events. Where necessary and possible, structural elements involving steel,reinforced concrete or timber, may be replaced or repaired. Scour protection and soilreinforcement may also be installed around piers and abutments to ensure resilience ofunderlying structures to flood events. Where existing structures cannot be economicallyrepaired, a number of replacement options will be considered and the most appropriate optionadopted. The installation of safety improvements, including bridge safety furniture, signage andstreet lighting, is also expected.6.It is anticipated that some bridges in need of urgent repair will be rehabilitated by FRAbefore construction standards are formally revised as part of Component 2. However, FRA issupported by a full-time qualified international consulting firm, which will help design andsupervise works, and any early repairs to bridges should be designed to international standards.7.Rural Jetties/Wharves. Activities may consist of repairing or replacing platforms, pilings,and structural elements, including reinforced concrete, steel or timber sections. Repairs of stormdamage and reconfiguration to better suit current and/or planned operations to improve land andmarine access, and to provide more resilient and safe use of maritime transportation, are alsoexpected. Depending on conditions, some jetties and wharves may be replaced with newstructures.8.Year 1 Sub-Projects. The rehabilitation and replacement of high priority bridges isdeemed to have the greatest immediate impact on safety and reliability, and two bridge crossingsare proposed for the first year of the project. The Narata Bridge and the Matawale Crossing,located in the Sigatoka Valley, provide an important transportation link between the upperSigatoka Valley and Sigatoka Town, which is one of Fijis poorest areas. According to theWorld Banks regional hardship and vulnerability study for Fiji, the upper Sigatoka Valley has ahardship headcount ratio in excess of 60 percent. The Year 1 sub-projects would provide morereliable and safer all-weather access to the highlands, markets, employment opportunities andsocial facilities, while also contributing to economic growth and poverty reduction.19 High priority refers to FRAs prioritization rating for a bridge or jettys condition which includes deficiencies inits serviceability which may include but are not limited to its structural integrity, vulnerability to adverse weatherevents, and/or user safety issues. 27 9.Narata Bridge is a three span, 26.3 meter long, 3.4 meter wide structure with a concretedeck on steel girders resting on concrete pile caps and abutments and concrete pile foundations.The bridge provides the sole vehicular access to the entire west bank of the Sigatoka RiverValley upstream of this point, which has a population catchment of around 9,800 people. It alsoprovides access to several schools and community facilities in the area. The bridge carries about440 vehicles per day, including rural bus services to the upper Sigatoka Valley, and heavy truckscarrying produce and logs to market. The bridge also provides access for farm stock andagricultural tractors, pedestrians and horses, which are common forms of local transport in thevalley.10.The deck, pile caps, piles and abutments of the bridge have suffered damage from pastflood debris impacts, in particular logs. The existing bridge poses a road safety risk due to itsnarrow width, and the lack of guard railings, footpaths, end markers and other protections. Thereis a risk that further damage or deterioration could cause the bridge to be load-limited, orpossibly closed to traffic.11.A decision on whether to repair or replace the bridge is pending. No land acquisition,vegetation removal, or river bed disturbance (such as pile driving) would be required as part ofrepairs to the existing bridge. Repair works are estimated as taking only three to four months intotal to complete.12.Replacing the bridge would involve either the construction of a new 31 meter long, twolane bridge on the upstream side of the existing bridge (the existing bridge would bedemolished), or the construction of a new two-lane bridge in the existing location.13.Matewale Crossing is a single-lane concrete causeway on a gravel road. It is 22.8 meterslong, 4.3 meters wide and approximately 2.0 meters above the bed level. The original crossinghas suffered serious damage from flood scouring beneath it and has settled significantly, with arotation of the whole crossing of about 200 mm upstream evident. Six meters on one side hascollapsed completely and an embankment has been constructed to maintain access. It is likelythat in a significant flood the crossing would become impassable, thus cutting off all traffic to theupper Sigatoka Valley above this point. Several villages upstream of this location, includingvillages on the eastern bank of the Sigatoka via the Draubuta Crossing, rely on the MatewaleCrossing for access. 14.The existing causeway is not economically repairable, so would be either replaced with anew structure consisting of either: (i) another low level structure of improved design; (ii) ahigher level bridge at the same location; or (iii) a bridge about 130 meters upstream of thepresent location with modified road approaches. The construction of a new 44 meter long singlelane bridge on the downstream side of the existing crossing would require the acquisition ofsome additional land.15.The preferred option of a replacement causeway could be constructed adjacent to theexisting crossing on the downstream side, and then the existing crossing demolished. This mightrequire the acquisition of a very small area of additional land, depending on the locations of the 28 road reserve boundaries. The existing crossing would be used as the temporary crossing duringconstruction works. A replacement structure would be expected to take around six months toconstruct, and work would need to be carried out in the drier months of the year.Component 2: Technical Assistance (estimated US$16.70 million)16.Technical assistance would consist of support to establish and maintain a projectsupervision team at FRA to oversee project implementation, finance design and supervisionconsultants, update design and construction standards for roads and bridges, undertake iRAPsurveys and road safety audits, and pilot the use of open contracting, including geotagging.17.Using counterpart funds, FRA would appoint four staff to the PST, including a ProjectManager/Procurement Specialist, Accountant, Environmental Specialist and Social DevelopmentSpecialist. The PST would support overall project implementation, including carrying out allactivities associated with the procurement of services, works and goods, financial management,safeguards, and project monitoring and reporting.18.Design and supervision consultants would be engaged to support PST to carry out detailedfeasibility studies and assessments of proposed sub-projects, to prepare preliminary and detaileddesigns and bid documentation, and to provide support throughout the tender process. Designswould be in accordance with national and international standards using conventional designs andappropriate materials. Sub-project designs will consider access to transport infrastructure forwomen and vulnerable groups, including the elderly and disabled. Supervision duringconstruction would also be financed through this component. As part of these contracts, up totwo vehicles per contract (total of four), as well as associated operating expenses, including fuel,maintenance, and insurance, would be financed. All other operating costs, including rent,furniture and office equipment, would be covered by the consultants.19.TIIP would support the updating of design standards and construction specifications forroads and bridges to: (i) bring consistency to constructed road assets in Fiji; (ii) reflect currentinternational standards for road geometry, pavements, drainage, and associated structures; and(iii) incorporate climate change adaptation considerations for more climate resilient road sectorassets in line with GoFs Green Growth Framework for Fiji. The updated design standards willcontain recommendations for more resilient infrastructure. Some of these recommendationswould be included in the designs to rehabilitate works, which will support the PDO. 20.Funds would be provided to improve road safety by supporting Fijis establishedinstitutional framework and the vision outlined in the Fiji Decade of Action on Road SafetyNational Action Plan. As part of this initiative, 600 km of the countrys main road network havebeen assessed for safety according to iRAP ratings. iRAP rates the condition and safety of roadsbased on a system of star ratings from one to five, with a 1-star rating signifying roads that areleast safe and a 5-star rating signifying the safest roads. Ratings are based on engineeringfeatures of a road and the degree to which they impact the likelihood of crashes for various roadusers. Under TIIP, assistance would include continued support for iRAP activities to deepensurveys already completed or extend surveys to new road corridors, and to prepare associated investment plans and proposed interventions to improve road safety.20 An option that is underconsideration is to analyze and identify interventions to elevate the star ratings of criticalcorridors, such as from Nadi to Lautoka, Nadi to Sigatoka, or Suva to Pacific Harbor. FRA mayoutsource iRAP activities, or develop its own capabilities by acquiring a low-cost iRAPinspection device with GPS camera and attendant training. In addition, TIIP would financeindependent road safety audits at feasibility and design stages to validate proposed iRAPinterventions, or to identify other measures to improve safety for specific sections of road.Recommendations would be included in the designs for assets to be rehabilitated.21.Support would also be extended to implement open contracting, including geotagging, tovirtually monitor and evaluate progress on FRA projects. Open contracting seeks to enhancetransparency and monitoring of public contracts by disclosing relevant public procurementinformation from pre-contract award activities through to contract award and implementation.This allows for better accountability. The approach will follow that used in the Philippines underthe Rural Development Project (P084967), which involves a publicly accessible websitedisplaying project progress reports, including geotagged photos of physical works. While FRAalready publishes pre-award information on its website, via FRAs eProcurement Portal withTenderlink, there is an opportunity to amplify FRAs dedication to transparency by providinginformation on contract awards and implementation.Component 3: Capacity Building (estimated US$0.80 million)22.As a parallel activity, ADB has extended US$700,000 in grant funding (with an additionalUS$100,000 from GoF in counterpart funding) to support initiatives to build governmentcapacity across several sectors. Areas of focus are expected to include planning, assessing andmanaging infrastructure projects for staff from various ministries and agencies, including FRA.The costs of training programs and workshops, including cost of travel and per diems,registration fees, renting of venues, interpretation and translation services, both domestically andinternationally, as well as to bring outside short-term specialists to Fiji, could be financed underthis component. There are several measures, which might include lane widening, resurfacing, shoulder sealing, installing roadsidebarriers, improving delineations or providing street lighting and footpaths. More information on iRAP is availableat. 3.A Project Supervision Team (PST) would be established to support FRA in implementingTIIP. The PST would consist of four individual advisors with specialist expertise inprocurement/project management, accounting and financial management, and social andenvironmental safeguards. These four positions would be funded by FRA through counterpartfunds. The PST, under the guidance of FRA, would have responsibility for overseeing andmanaging project execution and compliance with project requirements, including thoseassociated with procurement, financial management and auditing, safeguards, monitoring andevaluation, and project reporting.4.The PST would conduct fieldwork, research and analysis, including basic comparativesocio-economic cost/benefit and cost effectiveness assessments required to prepare annual shortlists of possible sub-projects for improvement under the TIIP.5.A project administration manual (PAM)21 has been jointly developed with ADB thatdefines procedures for implementing TIIP. This will complement FRAs existing operationsmanual. TIIP will be carried out in accordance with the arrangements and procedures set out inthe PAM, which can be amended from time-to-time, provided all modifications are agreed withthe World Bank in writing prior to any changes.6.The World Bank and ADB signed a Memorandum of Understanding (MoU) defining howboth organizations would respond to issues during implementation, including technical,21 The PAM is ADBs equivalent to the World Banks project operations manual. 31 procurement, financial management and safeguards aspects of TIIP. Both organizations agree toensure the prompt delivery and exchange of information regarding the Project and whenpractical, will field joint missions during implementation to supervise progress. The MoU wouldtake effect after the Boards of Directors of both organizations approve the project. The matrixfrom the MoU showing how ADB and the World Bank will respond to issues duringimplementation is available in Annex 4.7.The project would be implemented over a four and a half year period, from about January1, 2016 to June 30, 2020.Methodology for Selecting Sub-ProjectsFRA will utilize a systematic approach involving economic and social criteria to select8.road.9.Sub-project selection will be undertaken considering national level priorities and projectlevel assessments and should support national level planning documents endorsed bygovernment, including the National Transport Infrastructure Plan, Fijis Green GrowthFramework, Roadmap to Democracy and Sustainable Socio-Economic Development 2010-2014,and Public Sector Investment Program. Sub-projects would also be assessed for equitabledistribution among divisions and provinces, as well as for urgency and project readiness.10.The PAM identifies a methodology for selecting FRA assets to be rehabilitated andincludes a number of criteria, such as: 11.Selection Procedure. Each year, FRA will prepare a short list of possible sub-projects andsubmit it to the PSC for approval. Following PSCs approval of candidate sub-projects, thedesign and supervision consultants will carry out the required feasibility studies and verify subproject eligibility. The studies will involve collecting and analyzing baseline data to assessexpected impacts, using methods and tools established for the sample sub-projects. Each subproject will be subject to: (i) a technical feasibility study; (ii) an economic analysis; (iii) a socialand poverty analysis; (iv) an environmental assessment and environmental management plan inaccordance with the agreed environmental and social management framework; and (v) a land acquisition and resettlement plan in accordance with the agreed land acquisition and resettlementframework. Each assessment will confirm acceptable ratings against the criteria, or recommendfurther works to complete the assessment.12.Once completed, FRA will review the sub-project feasibility studies and endorse the subproject, subject to their assessment that it meets all eligibility criteria. Formal approval fromADB and the World Bank must be obtained before the detailed design of any sub-project canproceed. FRA will be responsible for obtaining MoF approval to include the sub-projects in thenational budget, and for ensuring that counterpart funds are available. Once approved, detaileddesigns will commence.Financial Management, Disbursements and ProcurementFinancial Management13.Risks and Mitigating Strategies. The project will be conducted under a joint co-financingarrangement with ADB. The financial management (FM) risk associated with the loan isassessed as substantial, due primarily to the limited capacity of FRA and the inherent complexityof a joint operation with multiple works contracts. FRA will appoint a dedicated FM officerwithin the PST to oversee TIIP expenditures, and a common set of procedures integrating theWorld Bank and ADB processes has been agreed. These joint FM procedures will be detailed ina clear set of FM instructions that will form part of the PAM. Ongoing training will also beprovided as part of the World Banks regular FM implementation support.Budgeting Arrangements. FRA will develop annual budgets for the project. The budgets14.will be reviewed by FRA at least bi-annually, with analysis of budget against expenditures.15.Flow of Funds. MoF will make loan proceeds available to FRA on a grant basis. Loanproceeds will flow from the World Bank into a Designated Account (DA). FRA will be directlyresponsible for the management, maintenance and reconciliation of DA activities for projectcomponents, including preparation of withdrawal applications (WAs) and supporting documentsfor Bank disbursements. Authorized signatories on WAs will be senior staff from MoF.16.Accounting and Internal Controls. FRA will be responsible for managing, monitoring andmaintaining project accounting records. The project will be integrated, as much as possible, withFRAs existing accounting systems, including its internal controls and accounting procedures.Where necessary, spreadsheet-based systems will be maintained in order to meet reportingrequirements and for the preparation of WAs. The project will observe applicable governmentregulations. Original supporting documents will be retained by FRA in accordance with thelegal agreements. FRA does not have an internal audit unit and will outsource these activities.17.Financial Reporting. FRA will prepare unaudited interim financial reports (IFRs) for theproject on a semester basis. The IFRs will include analyses of expenditures for the period, yearto date, and project to date, and compared with total project budget, and commitments. The IFRdesign format will be developed in consultation with FRA, and included in the PAM. IFRs willbe forwarded to the World Bank within 45 days of the end of each calendar semester as part of project reporting. The World Bank and ADB will also agree on the format for the annualfinancial statements, which will be prepared in accordance with the cash basis InternationalPublic Sector Accounting Standard.18.Audits. Annual audits of the Projects financial statements, together with an associatedmanagement letter, will be required by an auditor acceptable to the World Bank. Annual auditswill be submitted to the World Bank within six months of the end of each year. It is envisagedthat audits will be the responsibility of the Office of the Auditor General.19.Supervision. The projects FM arrangements will be reviewed by the World Banksfinancial management staff based on the assessed risk rating and at least once per annum. TheWorld Bank and ADB have agreed to field joint missions to supervise implementation, and FMstaff will do their best to join. The MoU that was signed between the World Bank and ADBdefines how both organizations would respond to issues associated with financial managementduring implementation.Disbursements20.Disbursement Methods. Three disbursement methods will be available for the project,including advance payments, reimbursements, and direct payments. The supporting documentsrequired for Bank disbursement under the various disbursement methods will be documented inthe Disbursement Letter issued by the World Bank.21.Designated Account (DA). A DA for the implementing agency will be established inFijian Dollars and reconciled on a monthly basis. The DA will be held at a bank acceptable tothe World Bank. The ceiling of the DA will be determined and documented in the DisbursementLetter. Project funds will be disbursed against eligible expenditures, as set out in the legalagreement.22.Disbursement Categories. To be defined in the legal agreement, but will include works,goods, services, and training, and are inclusive of taxes and duties.Procurement23.Capacity Assessment. An assessment of FRAs capacity to implement procurementactions for TIIP was carried out as part of project preparation. The results of the assessment areavailable in the World Banks project portal website and identify risks, risk ratings andmitigation measures. Overall, the assessment indicates a limited level of human resources(including procurement) within FRA to manage projects in accordance with Bank processes,constraints related to the capacity of the contracting industry to respond to FRAs needs, and lackof vigorous public oversight. The overall procurement-related risk is rated substantial. A riskmitigation action plan has been recommended to FRA.24.While the local contracting industry is growing on the back of increased investment, andcompetition for past tenders has been good, the capacity of the local contracting industry remainsa risk, particularly for the structural works required for bridges and rural jetties and wharves. Local contractors lack the equipment and expertise for large structural projects, and internationalcontractors will need to be attracted to undertake these works. While an increasing number ofinternational contractors have participated in recent FRA tenders, works will need to be bundledinto sufficiently large packages to offset mobilization costs of an international contractor.Discussions with industry will need to be ongoing to ensure needs are addressed and toencourage healthy competition for large structural works packages.25.Key Risks and Mitigation Measures. The following table summarizes key procurementrisks and proposed mitigation measures.Key Risks Mitigation Measures Incompleteimplementation ofprocurement under theproject (in terms ofefficiency, competition,transparency).Delay in projectprocessing orimplementation due tolack of proper planning. Reduced competition,incomplete or defectivebids.Delays in project orincrease in claims dueto slow contractimplementation.Lack of publicoversight may lead toabuse or corruption. ByWhomFRA Date DueEffectiveness Duringimplementation 33.Procurement Plans. An initial procurement plan was prepared by FRA in relation tolikely investments to take place during the first 18 months of project implementation.Procurement Arrangements and Schedule for Works Description ofWorks ProcurementMethod DomesticPreference(Yes/No) Review(Prior/Post) Bid Opening(Expected) NCB No Prior October 2015 36 Individual ProposalSubmission Date(Expected)April 2015 May 2015 May 2016 QCBS April 2015 Description of Services 34.Retroactive financing of up to US$10.00 million will be available for eligible expensespaid after the Board Date and before signing of the Legal Agreements, but not more than twelvemonths prior to the date of the countersigning of the Legal Agreements. Eligible expenditureswill include works, goods and consultant services.Environmental and Social (including safeguards) 35.Ensuring safety and accessibility through comprehensive and well maintained networks ofroads, bridges, and rural jetties and wharves will facilitate rural people to remain on their landsand have access to markets and key social services, such as health and education. Rural povertyin Fiji is pronounced in isolated areas and must be addressed through development of betterinfrastructure and services. TIIP should encourage the use of non-polluting forms oftransportation, including bicycles and horses, through the provision of adequate footpaths andsafety measures. The Constitution of the Republic of Fiji (2013), and the Roadmap forDemocracy and Sustainable Socio-Economic Development 2010-2014 identifies the transportsector as a national priority. The development policy objectives set out in the GovernmentsNational Strategic Development Plan 20072011 center on the development of efficient andcost-effective infrastructure for economic and social development and private sector investment,accompanied by institutional strengthening for more capable and better-quality facilities andservices.36.Potential Impacts: TIIP will finance civil works to repair, rehabilitate, reconstruct, orupgrade existing roads, bridges, and rural jetties and wharves. It is expected that many of thesub-projects will be selected from the 20 Year Fiji Transport Infrastructure Investment Plan,which will take into account FRA draft 10 Year Asset Management Plan. By design, anycandidate sub-project that would be designated as a Category A will not be financed, therationale being that neither the World Bank nor ADB have worked with GoF for several years,and accordingly, the Department of Environment (DoE) and FRA have limited safeguardexperience and capacity to work with development partner funded projects, includingundertaking necessary consultations and preparation of safeguard instruments. As such, it is 37 considered prudent that sub-projects with potentially significant environmental and socialimpacts be excluded from funding under TIIP and only Category B and C sub-projects will befunded. A comprehensive ESMF has been prepared which, among other things, will assist inscreening project categorization and to guide the FRA in determining the proper safeguardsinstruments needed to prepare subsequent sub-projects.37.Most impacts are expected to be site-specific and can be readily mitigated, as the roads,bridges and rural jetties/wharves already exist and most works will involve repairs and/orreconstruction within existing structural footprints and transport corridors. Where replacementwill provide a better outcome, or is the only feasible option, the new infrastructure may be sitedalongside or at a nearby location better suited to its design and function and would be screenedand assessed using the processes set out in the project ESMF. The largest likely impacts comefrom the operation of material quarries and borrow pits. Nonetheless, sub-projects are not expectedto have significant environmental impacts, including on natural habitats or forests. Typical constructionrelated impacts are expected to include, small amounts of vegetation/habitat removal, dust, noise,and impacts and impacts from temporary camps. There is the possibility for unintended impacts,such as improper disposal of excess material or siltation of streams and rivers along alignmentsor within sub-project areas. Impacts on PCR are also possible, in which case the impacts wouldbe addressed by chance-find provisions in the ESMF. To screen for and address issues related topotential impacts on Natural Habitats, the project will use the ESMF. All potential impacts forsub-projects will be assessed through Environmental Impact Assessments and Environmentaland Social Management Plans (ESMP), which will be prepared and implemented to mitigateidentified impacts.Due to the environmental issues detailed above, including PCR and natural habitats, TIIP38.triggers OP4.01: Environmental Assessment, OP4.04: Natural Habitats, and OP4.11: PhysicalCultural Resources.39.The project also triggers OP4.12 on Involuntary Resettlement, due to the potential forsmall amounts of land acquisition or minor land impacts during implementation. Landacquisition and/or resettlement, if any, are expected to be minor as the sub-projects are expectedto be carried out within existing corridors and structural footprints. However, there is apossibility of minor land requirements should small realignments be found necessary. A LARFwas prepared to address any land changes or impacts to livelihoods that might occur as a resultof involuntary acquisition of assets and/or change in land use, including provision forcompensation and rehabilitation assistance which may occur throughout the life of the project. ALARP was prepared for the two Year 1 sub-projects. Consultations with community membersand representatives of the villages and provincial councils took place at four sites along theSigatoka Valley Road to ensure that people were informed about the sub-projects and had anopportunity to raise concerns. These communities, which were provided with copies of theLARP and an information booklet summarizing entitlements and other relevant information,strongly supported the sub-projects as the improved bridge and crossing will increase the safetyof road users and pedestrians, while ensuring access and better quality crossings. For subsequentsub-project preparation involving land acquisition or resettlement, GoF will be responsible forpreparing LARPs to help guide the implementation process and serve as documentation forcompensation. 38 40.The project also triggers OP4.10 on Indigenous People (IP), as the majority of thebeneficiaries in the project areas are iTaukei, or indigenous Fijians. The social assessment thatwas completed during preparation reported that the iTaukei constitute the overwhelming majorityof project beneficiaries and live on native iTaukei land that is administered by mataqalis (clans).This land cannot be sold and remains forever as property of the landowning unit, unless sold tothe State and used solely for public purposes. Because the overwhelming majority ofbeneficiaries are expected to be indigenous people, project implementation will not require aseparate Indigenous Plan or an Indigenous Peoples Planning Framework. Instead, the elementsof an IP plan will be integrated into the design of sub-projects in accordance with the guidelinesincluded in the Environmental and Social Instruments for the Pacific. A well-definedconsultation process has been identified and agreed involving free, prior and informedconsultations.41.FRAs environment manager will be responsible for implementing environmental andsocial safeguards at a sub-project level. Environmental and social safeguards specialists both atan international and national level will be part of the Design and Supervision Consultant (DSC).These specialists will provide capacity building for screening, implementation and monitoring ofESMF requirements of each sub-project. Additional support will be provided under a grant fromADB to build government capacity across the transport sector, which is likely to include supportfor safeguards training. Areas of focus are expected to include planning, assessing and managinginfrastructure projects for staff from various ministries and agencies, including FRA. FRA willestablish a Project Supervision Team (PST) and has instituted safeguards monitoring proceduresand will assign an environmental specialist and a social safeguards specialist as part of the PST.42.During sub-project identification and project implementation, the World Bank and ADBwill work jointly to ensure safeguards requirements are carried out in accordance with theapproved LARP and construction ESMPs. Joint missions with ADB will be conducted aboutevery six months, or more frequently, if needed. At least once per year the missions will includesafeguard specialists. Safeguards specialists will also work with counterparts to help ensure thatthe project complies with all relevant safeguard frameworks and plans. The World Bank andADB will be jointly involved in key safeguard activities, including screening of sub-projects,categorization and identification of appropriate instruments, clearing of EIAs ESMPs andLARPs, and issuing no-objection to start construction.43.The safeguards documents, including the ESMF, LARF, ESMP and LARP, weredisclosed at the World Banks InfoShop on January 12, 2015 and on FRAs website on January13, 2015.Monitoring & Evaluation44.To ensure effective monitoring and evaluation, several measures will be taken. Membersof the PST will be required to have demonstrated skills in data collection, collation and reportingon Bank projects. This expertise will be bolstered with support from the project team throughthe provision of reporting templates and feedback on reports. TIIP will also be monitored and 39 evaluated through quarterly reports and measuring outcomes achieved against baseline indicators(as defined in Annex 1).45.FRA will issue quarterly progress reports on the last day of March, June, September andDecember. These will be forwarded to the World Bank within 30 days of the end of eachcalendar quarter. The quarterly reports will cover the entire project, both cumulatively and forthe period covered by each report, and summarize progress several ways, including in terms ofoverall progress by component and any technical challenges encountered, procurement under theproject, including updated procurement plans, expenditures under contracts financed from allsources, estimated cost of the project versus available funding, and estimates of project financingneeds by quarter for the subsequent six months.FRA will be responsible for the overall management and implementation of the46.monitoring framework, and for reporting on progress in meeting performance targets (as outlinedin the Results Framework in Annex 1). Through its ongoing experience in managing privatecontractors, FRA has demonstrated capacity to meet the M&E requirements of this project. Thedesign and supervision consultants will work closely with and provide regular updates to the PSTManager about project progress47.Social impacts of sub-projects will be measured through household income andexpenditure surveys at inception, mid-term and project close. These will be included as activitiesto be carried out by the design and supervision consultants.48.A mid-term review will be carried out in mid-2017, and an Implementation CompletionReport will be prepared within six months of project closing.Role of Partners49.The World Bank is re-engaging with Fiji after a long absence. Although this will be thefirst major project in several years, effective lines of communication have been established withFRA, which was a valuable partner during preparation of TIIP. FRA reviewed and providedinput to project documents and worked in an efficient manner with the World Bank throughoutpreparation.50.The ADB and Bank teams have devised a common approach and framework to jointlyimplement the Project, and to address technical matters and each donors respectiveenvironmental, social, financial management and disbursement requirements.Duringimplementation, procurement will be carried out using ADBs procurement policies andprocedures. As part of this approach, ADB and the World Bank will jointly carry out bi-annualmissions to supervise project progress and implementation. At least one representativedesignated to speak on behalf of each partner will participate. ADB and the World Bank willjointly prepare and issue reports, such as MoUs/Aide Memoires, at the end of each supervisionmission. If a joint mission is not possible, the partners can fielded teams independently, but willshare all project documentation. 40 Technical Issues ADB WB GoF Sharing of information Joint Selection of sub-projects/activities Responsible Progress monitoring Consultant outputs Contractor outputs 41 NotesBoth donors agree to ensure the prompt delivery and exchange ofinformation regarding the Project and its progress. FRA/IA to apply a systematic approach for selecting sub-projects thatis based on pre-agreed economic and social criteria. If ADB and WB cant agree on a sub-project, the dispute resolutionprocess involving the Cofinanciers Coordination Committee (CCC)will be followed. ADB and WB to share copies of all consultants progress reports forinformation, review and comment. ADB and WB to jointly review, comment and agree on quality ofprogress reports. ADB and WB to jointly review, comment and agree on consultantsoutputs and deliverables, specifically for design and supervisionassignments, reports/studies associated with technical assistance andcapacity building initiatives, preliminary and detailed designs, designdrawings, specifications, and cost estimates. ADB and WB to agree position in the event of unsatisfactoryconsultant. Contract variations Supervision missions Project ratings Resolution of differences 42 project. If needed, WB and ADB will follow dispute resolution processinvolving the CCC.Safeguards Notes Disclose ESMF Input Clear EIAs EnvironmentalScreening of subprojects Environmental impact assessment (EIA) is the terminology used in Fijis Environmental Management Act 2005. It is not equivalent to EIA in ADBs SPS or WBs OP 4.01.Within the parameters of SPS it is equivalent to an initial environmental examination as appropriate for a Category B project. All subprojects under the Project will be Category Bor C projects, and will follow the process for screening, assessment, review and implementation as set out in the ESMF prepared for the project. Category A projects are noteligible for financing under the project. 43 submitted.Management of complaints Clear LARPsDisclose and consult on LARP JointInput ResponsibleResponsible Implement LARPs SocialInvoluntary ResettlementDisclose Land Acquisition andResettlement Framework (LARF). 44 Management of complaints Documentation on consultations ProcurementConsultancy Services for Firms orIndividualsDevelop Terms of Reference(ToRs)Review and clear ToRs InputInput Clear evaluationSign and award contractWorks and GoodsDevelop bid documents, includingEmployers Requirements InputInputInputJoint ResponsibleResponsibleResponsibleResponsible Financial Management Disbursement Categories Withdrawal Applications Disbursements 46 NotesTo be defined in respective Cofinancier Agreement(s) with theBorrower. Authorized signatories on withdrawal applications (WAs) for currentADB/WB projects are senior MoF staff. GoF requires two signatureson each WA. If cofinancing arrangements are agreed, the use of special commitments would not apply. 47 48 consultant. If ADB and WB have differences of opinion on quality of outputs,dispute resolution process involving the CCC will be followed. GoF to review and provide technical comments on project outputs andforward to the ADB and WB. ADB and WB Teams will undertake separate reviews and agree onsets of joint comments to be compiled by ADB, which will then bejointly provided to GoF. Each partner to provide comments in timely manner.ADB and WB to jointly review, comment and agree on any revisedscope of work that has a material variation to the original contract. ADBwill issue the no objection. Joint ADB/WB supervision missions will be carried out at least twotimes per year with participation from at least one representativedesignated to speak on behalf of each partner. Both ADB and WB Task Teams and specialists to accompanymissions, visit construction sites, meet government officials andproject managers/consultants representatives, as appropriate. If joint missions are not possible, the Task Teams can be fieldedindependently and will share MOUs and/or Aide Memoires concludedwith GoF. For joint missions, letters announcing the missions will be draftedjointly, but sent under ADBs signature ahead of planned missions. In the case of separate missions, each Cofinancier will prepare its ownletters, but will copy the other Cofinancier. ADB and WB to jointly prepare and issue joint reports (MOUs/AideMemoires) at the end of each supervision mission. If required by ADB, WB or GoF guidelines, project documents to becountersigned by government. ADB and WB to rate project progress separately and indicate findingsin MOUs/Aide Memoires. To the extent possible, ratings on project progress that are common to SafeguardsEnvironmentalScreening of subprojects 49 ProcurementConsultancy Services for Firms orIndividualsDevelop ToRs 51 website. Implementation will be carried out by FRA with support from TTLBand DoL. If there is potential for disputes, advice from ADB/ WBregional safeguard advisor to be obtained. Payments for any form of compensation must be made prior to thecommencement of works and in accordance with cut-off date fordetermining eligibility (i.e., prior to impacts being experienced).ADB and WB provide no objection to start civil works after resettlementmeasures set out in LARF and LARP have been implemented.Payments for any form of compensation must be made prior to thecommencement of works (i.e. prior to impacts being experienced).FRA to monitor implementation of land acquisition/resettlementactivities following the internal monitoring procedures (includingformats) and responsibilities described in LARF and LARP.ADB and WB teams ensure that monitoring reports are prepared andsubmitted according to schedule provided in the LARP. Monitoringreports reviewed by resettlement specialist and comments conveyed toFRA.In the event of a complaint received by either ADB or WB, each partnerwill inform the other through official communication. A commonregister of complaints will be established.FRA organizes free, prior and informed consultations with the affectedlocal communities including IPs about proposed subproject(s) andensures that there is broad community support for the proposedsubproject. Consultations are carried out in manner that is consistentwith social and cultural values of community.A summary document on local consultations including confirmation ofbroad local community support of proposed subproject. The documentwill be included in the feasibility study report to be submitted to ADBand WB.Updated information to be provided to affected local communitiesincluding IPs at each stage of project implementation, including anymodifications to design and to address any adverse effects or concerns.Notes Develop RFPClear RFPCarry out tenderClear evaluationSign and award contractWorks and GoodsDevelop bid documents, includingEmployers Requirements InputInputInputInputInput ResponsibleResponsibleResponsibleResponsibleResponsibleResponsible 52 53 Description: Risk Management: Continue discussions with key ministries involved in the project and ensure effective communityconsultations during preparation and implementation. Joint ADB-Bank missions will encourageparticipation among key ministries and agencies during implementation. Given the poor condition ofroad sector assets generally, improvements to roads and bridges unlikely to be opposed.Resp: Stage: Both All Recurrent: Due Date: Frequency: Status: n/a Continuous In Progress TIIP has been designed to support key tenets of Fijis Roadmap for Democracy and Sustainable SocioEconomic Development and Green Growth Framework. It has also been vetted with the relevantagencies and the consultants currently updating the land and maritime transport strategies (2015 to2025).Resp: Resp: Risk Management:FRA to continue to be responsible for the sector. ADB and Bank have involved the former MWTPU in all phasesof project development and will nurture a relationship with MoIT.Resp: Bank Risk Management:Agreement on methodology and criteria to identify and agree assets to be improved. Also, sub-project selection to 54 PREP 30-Nov-2014 Status:Completed The project will fund a PST and design consultants to provide technical, fiduciary and safeguard support to FRA. TIIP designed with FRA, but close collaboration to be maintained throughout implementation. Client 2.2. Governance Bank missions will monitor the effectiveness of the governance structure, particularly the fiduciary and safeguardsaspects, on an on-going basis.Resp: IMPL Risk Management:Unclear system of accountabilityand oversight of FRA with welldefined roles andresponsibilities. FRA is mandated to oversee all roads, bridges and jetties and wharves in the country. Financial system to manageproject funds to be established and accountant to be financed under project.Resp: Risk Management:TIIP not aligned with FRA 3.1. Design Works under the project will involve rehabilitating existing roads, bridges and rural jetties and wharves, albeit tomore climate resilient design standards. FRA has implemented similar projects. If needed, FRA can call on its inhouse consultants or hire consultants through the project for additional support. 31-Dec-2014 Status:In Progress 3. Project Risks Risk Management:Inability to adjust project designin response to changes in policyor operating environment. TIIP has been flexibly designed to allow most roads, bridges and rural jetties and wharves to be rehabilitated orupgraded.Resp: Risk Management:Ensure support is available from qualified firms with experience in design and supervision of road and maritimesector assets in similar topography and geography. Update design standards to strengthen climate resilience, andFRA to ensure recommendations are incorporated in designs and constructed using international practices.Resp: Consultations for sub-projects to be designed in a way that is culturally appropriate and based on publicconsultations with communities from the area(s) where improvements will occur. Access to properties forconstruction and/or acquisition of small parcels of land may be required, but major land take or resettlement is notanticipated. Viable alternative designs will be explored in an effort to minimize land take and potential forphysical displacement of people.Resp: 56 Risk Management:Social impacts, such as thetransmission of HIV/AIDS,could adversely impact thebeneficiary communities. Road contractors obligated to carry out consultations and training of beneficiary communities. Specific HIV/AIDSclauses will be included in major works contracts.Resp: Risk Management:Project design may negativelyaffect customary land traditionsand social norms. Sub-projects would be designed in a way that is culturally appropriate and based on public consultations withcommunities from the area(s) where improvements will occur, and to ensure broad community support and thatthere are no issues or disputes with individuals or communities affected by current alignments or rights-of-way.Resp: Risk Management:Improved roads may increasevehicle speeds, reducing roadsafety. Consider recommendations from iRAP surveys and feasibility and detailed design stage road safety audits indesigns for road and bridge sub-projects.Resp: Risk Management:Inability of FRA to carry outrequisite safeguardsconsultations or developsafeguards instruments inaccordance with Bank and ADBrequirements. TIIP to fund safeguards specialists to ensure compliance with requirements and support preparation of requisiteinstruments.Resp: If agreement not possible, the project will revert to parallel financing. Parallel financing would extend preparationof a Bank project by as much as one year.Resp: 57 21 Jan-2015 Risk Management:The World Bank Team to identify Board members that might oppose the waiver and discuss benefits of usingADBs procurement procedures prior to Board date.Resp: 5-Mar-2015 Risk Management:Disagreement between theWorld Bank and ADB about theperformance of consultants andcontractors, or the quality ofoutputs during implementation. The World Bank and ADB to continue close coordination and consultation and detailed discussions aboutexpectations during implementation. Also, roles, responsibilities and steps to resolve any differences aredocumented in the joint MoU with ADB, which was signed on December 12, 2014.Resp: Risk Management:The World Bank and ADBdisagree on the handling offraud or corruption cases. Under the MoU, both institutions commit to coordinate, cooperate and exchange information in relation toallegations and investigations of fraud and corruption in relation to the Project, and will establish a CofinanciersCoordination Committee to facilitate this relationship and work through differences in good faith.Resp: Technical assistance under the project to update design standards and construction specifications, superviseconstruction of works, and bring in implementation support to ensure quality monitoring or delivery of reportingrequirements.Resp: Risk Management:Implementing agency will be FRA is mandated to maintain Fijis roads, bridges and jetties and wharves, and their program seeks to 58 significantly reduce the backlog of maintenance by project completion. Doing so would maximize fundingavailable for sustainable maintenance.Resp: 4. Overall RiskImplementation Risk Rating: SubstantialComments:The principal risks during implementation involve capacity of the implementing agency, which has not previously worked with the World Bank and isunfamiliar with World Bank processes and requirements. 59 Fiji Islands Bureau of Statistics. (2011). Report on the 20082009 Household Income and expenditure Survey forFiji. Suva: Fiji Islands Bureau of Statistics 60 Indo-Fijians whose ancestors migrated to the Fiji Islands in the late 19th and early 20th5.centuries, make up about 38 percent of the population. The remaining 5 percent consist of otherminority communities, including people from various Pacific Island countries, Australia, NewZealand, the Peoples Republic of China, and Europe. Indo-Fijians first arrived in Fiji asindentured labor, brought by the British colonizers to work on sugarcane plantations. Between1879 and 1916, nearly 60,000 Indian laborers arrived in Fiji, and the majority of these migrantschose to remain in the country after the expiration of their five-year contracts. Their descendantsconstitute the bulk of the present Indo-Fijian population, the rest being mainly descendants ofGujarati traders and Punjabi agriculturalists who arrived in the 1920s.6.By the end of World War II, Indo-Fijians outnumbered iTaukei in the total population.However, since the end of land leases and political changes, emigration from Fiji has rapidlyincreased. Between 1997 and 2000 alone, 16,825 people migrated. The bulk of the emigrants,about 90 percent, have been Indo-Fijians. In more recent years, educated and skilled iTaukei andother ethnic minority members of the middle class have begun leaving Fiji, but their numbers,while growing, are still small.7.The Poverty and Social Assessment that was carried out as part of preparation identified anumber of important actions for poverty alleviation, particularly for vulnerable households.These actions, which were considered during preparation and incorporated into project safeguarddocumentation, are summarized below: Service Delivery. Linkages between transport sector agencies and central agenciessuch as agriculture, education and health should be encouraged where possible, sothat service delivery and reform programs are actively supported. 61 8.Gender differences are strongly embedded in Fijian culture and tradition. The roles andcontrols of women are impacted by ethnicity and are varying in degree at the household level,but male-dominated hierarchies tend to be common regardless of ethnicity, which hascompromised womens roles in society in general, while increasing education and changingnorms are reflected in both groups. 9.iTaukei culture places considerable emphasis on communal values, respect for theauthority of chiefs, who are predominantly male, and the precedence of men before women.Gender dynamics are influenced by these traditional values that allow women few, if any, rightsto inherit land or formally own property, or to take part in public decision-making. However,iTaukei cultural norms do not place restrictions on womens mobility or on most types ofeconomic participation. As greater numbers of iTaukei move into the urban middle class, gendervalues are becoming more liberal, while differences remain more pronounced in rural areas.10.There has been significant progress for improving womens rights, with better outcomesin a few domains, both in absolute terms and relative to men. The Gender-Related DevelopmentIndex of Fiji is 0.757, with a ranking of 82nd in the world. With the efforts of the nationalmachinery for gender in Fiji, gender mainstreaming and womens involvement in political, socialand economic activities has been promoted. This has been made possible through manyinternational and regional gender equality commitments by GoF. For instance, GoF ratified theConvention on the Elimination of All Forms of Discrimination Against Women (CEDAW) in1995. Other than CEDAW, the regional gender-specific dialogues were committed, such asConvention on the Rights of the Child (1993), Pacific Platform for Action (1994), the JakartaDeclaration for the Advancement of Women in Asia and the Pacific (1994) and the Platform forAction and Beijing Declaration (1995), Millennium Development Goals (2000). Finally, the1998 Constitution includes the statement that all people in Fiji have equal right and status.11.Increasingly, iTaukei value secondary and higher education, for both girls and boys, as ameans of social and economic mobility. iTaukei women tend to be active in informal small-scale 62 fisheries, food production, and produce marketing, and also in formal commercial agricultureand agricultural processing, the hospitality and tourism sector, and other occupations in the paidlabor force. Further, iTaukei women can be responsible for managing household finances and insome cases iTaukei men are comfortable in accepting womans authority which is not always thecase amongst Indo-Fijians.12.Indo-Fijian society is more culturally diverse than iTaukei society, as Indo-Fijiansoriginate from many different parts of the Indian subcontinent and gender relations areinfluenced by various traditional cultural values. Most belong to various Hindu denominations,but there is also a minority of Muslims and Christians of various denominations. Patriarchalideology emphasizes formal male authority in decision-making and over property ownership.Most Indo-Fijians with land practice father-to-son inheritance. The predominance of men onsmallholdings as titleholders to land leases and cane contracts and the lack of women in this rolereflects gendered notions of ownership, the control of land and the commercial production ofcane. A woman not having her name on the lease title legally means she has no right ofownership and effectively control and use of the land has to be negotiated within the farmingfamily. As is the case with the iTaukei ethnicity, education and employment for women havebecome increasingly valued, especially in socially acceptable occupations, such as professionaland clerical work. For example, in the tourism sector Indo-Fijian women work more inadministration and technical fields rather than as housekeepers.Land Tenure13.Land ownership is a critical factor affecting the economic dynamics of Fijian householdsand communities. Land in Fiji is managed through three systems: (i) native land (ii) freeholdland; and (iii) crown, or state, land. Native or iTaukei land makes up 84 percent of all land,freehold 8 percent, and crown land 8 percent.27 iTaukei and state land cannot be bought or sold,but is available on a leasehold basis for up to 99 years. It is divided into reserve (for housingand agricultural use by mataqali),28 or non-reserve, which can be leased. Virtually all iTaukeibelong to a village and have a right to share in the customary lands belonging to their mataqali(clan), even when they have moved away from the village.14.Many iTaukei continue to have plots of land that they farm for own consumption whenthey are wage earners either nearby or elsewhere, or that they plan to return to when they retire.Indo-Fijian farmers either own freehold land or more often lease or rent land when they live inrural areas, and usually do not have access to land when they move to urban areas.15.In the case of Fiji, financial obligations to ones community and church can be high andfamily commitments are also very strong. This demand on a households limited surplus incomecan impact the households ability to save and develop (for example, reducing the capacity tosave for childrens tertiary education or the purchase of land). Market Development Facility and Australia Aid, 2013, Study on Poverty, Gender and Ethnicity in Key Sectors ofthe Fijian Economy, written by Linda Jones and CARDNO team to assess potential of horticulture and tourismindustries.28Mataqali is the clan which is the main land-owning unit. 63 2.A stand-alone Indigenous Peoples Plan will not be prepared for the project as the majorityof communities that will benefit from the sub-projects are expected to be communities of iTaukeiliving on traditionally tenured land. Instead, issues relevant for preserving the interests andlivelihoods of the indigenous population, as well as for maintaining its cultural andsocioeconomic traditions, will be integrated within the project design and implementationphases. Specifically, the project provides the opportunity to protect and enhance the interests ofIPs through the following measures:3.Screening of Projects and Disclosure. The integration of concerns and practices ofindigenous peoples recognizes the need for free, prior, and informed consultations to ensuremeaningful participation by indigenous peoples during the preparation of each sub-project. Theconsultation process seeks to identify concerns by the community in relation to each sub-projectand to identify appropriate mitigation measures. It also seeks to present activities related to eachsub-project and the economic opportunities that may be derived as a result of the implementationor outcome of each sub-project. IPs will be provided with relevant project information anddocuments in language(s) and manner suitable to them, primarily English and Fijian language.4.Social Assessment. Relevant information on demographic characteristics of affected IPcommunities, such as social, cultural and economic will be gathered during sub-projectpreparation. Information about the land and territories that they have traditionally owned orcustomarily used or occupied and the natural resources on which they depend has been recordedunder the projects LARF. Further information on land and asset issues will be gathered throughcommunity groups meetings in the provinces targeted by each sub-project in the context of thepreparation of LARPs to help guide the implementation process.For the Year 1 sub-projects, consultations took place in four sites along the Sigatoka5.Valley Road with community members and representatives of the villages and provincialcouncils to ensure that people were informed about the project and had ample opportunity toraise concerns. These communities strongly supported the sub-projects as the improved bridgeand crossing will increase the safety of road users and pedestrians, while ensuring access andbetter quality crossings.6.Institutional Arrangements. Staff within FRA and among contractors will receivetraining, where necessary, to screen sub-project activities, evaluate their effects on IPs, andaddress any grievances. Under FRAs leadership, and with support from the World Bank Team, 64 contractors will take on increasing responsibility for ensuring that mitigation measures areimplemented.7.Monitoring and Grievance Redress Mechanisms. FRAs Provincial Offices and the PSTwill regularly supervise and monitor the implementation of sub-project activities and possiblerelated impacts on IPs. The grievance procedures, which have been evaluated as culturallyappropriate, will record any impacts and concerns caused by the sub-projects. In addition, whensub-project implementation begins, signs will be erected at all sites to provide the public withupdated project information and to summarize the redress grievance mechanism process,including contact details for FRAs social impact manager.8.Customary Land Tradition. Customary land in Fiji is a form of collective and inalienabletitle that adapts and sustains common benefits for life. Under customary iTaukei law, customaryland cannot be sold, is owned by families and administered by clan leaders. The projects ESMFand LARF will help to address any adverse social impacts that may result due to involuntaryacquisition of assets and/or changes in land use, and includes provisions for compensation andrehabilitation assistance. The ESMF and LARF recognize iTaukei law and iTaukei landownership traditions as the main mechanism for ensuring the maintenance of relationships withancestral land.9.The main framework for resolving any land issues will rest on the following stages for theattempted settlement of disputes over customary land: When sub-project implementation begins, a sign will be erected at all sites providingthe public with updated project information and summarizing the grievance redressmechanism process including contact details of relevant person at FRA. Allcorrective actions and complaints responses carried out on site will be reported backto FRA. FRA will include information from the complaints register and correctiveactions/responses in its progress reports to the ADB and the World Bank. The project manager or engineer supported by FRA safeguard staff and consultantswill be the grievance focal point to receive, review and address project relatedconcerns and to resolve land related disputes in coordination with the governmentauthorities. Community beneficiaries will be notified of the grievance redressmechanism and will be exempt from any fees associated with resolving the grievancepursuant to the projects grievance redress procedure. Complaints will be recorded and investigated by the FRA safeguard team workingwith relevant staff of the individual sub-project. A complaints register will bemaintained which will show the details and nature of the complaint, the complainant,the date and actions taken as a result of the investigation. It will also cross-referenceany non-compliance report and/or corrective action report or other relevantdocumentation. FRA will review and find solutions to problems within two weeks in consultationwith village head (Turaga-ni-Koro), or traditional chief and relevant local agencies.FRA safeguards staff will report the outcome of the review to the village head ortraditional chief and affected persons (APs) within a weeks time. If the complainant is dissatisfied with the outcome, or have received no advice in the allotted timeperiod, he or she can take grievance to the Chief Executive Officer (CEO) of FRA.FRAs CEO, in coordination with the relevant national agency, will review and reportback to the APs, Turaga-ni-Koro or traditional chief about the outcome. Ifunresolved, or at any time the complainant is not satisfied, he or she can take thematter to an appropriate court. Both successfully addressed complaints and nonresponsive issues will be reported to the ADB and the World Bank by FRA. 10. During the entire process, relevant Fiji agencies (Department of Lands, iTaukei LandTrust Board, etc.) will remain available to review public complaints and advice onFRAs performance for grievance redress. The following table sets out the process to address project related grievances.Grievance Redress Process StageActionDuration1Displaced persons (DPs)/village head or traditional chiefAnytimetakes grievance to FRA2FRA reviews and identifies solution to problem in2 weeksconsultation with village head or traditional chief andrelevant agencies3PST reports back an outcome to village/traditional1 weekchief/DPIf unresolved or not satisfied with the outcome at FRA level4DP/village head or traditional chief take grievance FRAWithin 2 weeks of receiptCEO.of Step 3 decision5FRA CEO reviews and identifies solution in coordination4 weekswith relevant agencies6FRA CEO reports the solution/decision to DP/village1 weekhead or traditional chiefIf unresolved or at any stage if DP is not satisfiedDP/village head or traditional chief can take the matter toAs per judicial systemappropriate court 66 3.During implementation, joint missions with ADB will be conducted about every sixmonths, or more frequently, if needed. At least once per year the missions will include technical,fiduciary and safeguard specialists, who will provide input on road and bridge design andconstruction, carry out post-reviews on contract management, conduct random audits ofexpenditures, check on filing of documents, and provide on-the-job and formal training onfiduciary aspects. Safeguards specialists will also work with counterparts to help ensure that theproject complies with all relevant safeguard frameworks and plans.4.The estimated level of annual support needed during implementation and required skillsmix are identified in the following tables.Main Focus of Implementation SupportTimeFirst 12months FocusProject launchand start-up Skills NeededTask Team LeaderRoad/Bridge EngineersFinancial ManagementEnvironmentSocialAdministrative Support 67 ResourceEstimateUS$90,000 Partner RoleADB toparticipate injoint missionsand providetechnical support. Following12 to 60months Projectimplementation US$380,000 ADB toparticipate injoint missionsand providetechnical support. Number ofStaff Weeks Number ofTrips 852224 322221 PartnersNameAsian Development Bank Institution/CountryADB / Manila, PhilippinesADB / Suva, Fiji 68 RoleOngoing support duringimplementation. ANNEX 9: MAPSFIJI: Transport Infrastructure Investment Project (TIIP) 69.
https://de.scribd.com/document/309745609/International-Bank-for-Reconstruction-Development-Project-Appraisal-Document-on-a-proposed-loan-in-the-amount-of-US-50M-to-the-Republic-of-FIji-fo
CC-MAIN-2020-16
en
refinedweb
Hello everyone, welcome into my my new article Where we are going to explore Pure Component in React Native. Which is one of the fundamental ways of making React Native Components. In React Native you can either have your components Pure or Functional. In other cases are referred to as Container and Stateless components. So what are these forms of React Native Components? and How can you make them. Pure Components / Container Components Pure components or Container Components are simply components based on React Component, they are classes, and they hold states. In other words, Pure components are the default form react and react native components comes as. They extend the react component class, that means that they have can have lifecycle methods, constructor etc. They Do have state, and you can use and manipulate it using this.state and this.setState(). Pure components are well optimized, and you mainly need to use them for you React Native Screens. Pure Component Example export default class App extends React.Component{ constructor(props){ super(props) this.state = { counter:1 } } componentDidMount(){ const { counter } = this.state setInterval(() => { this.setState({counter:counter + 1}) }, 1000); } render(){ return ( <View style={styles.container}> <View style={styles.statusBar} /> <Text>Counter : {this.state.counter}</Text> </View> ); } } As you have noticed from the code above, our component is extending React.Component. We are using the constructor method to setup our initial state. If you are not familiar with the constructor term, It’s an Object Oriented concept, and it’s the first function that runs when our React Component is created. Even before the component is mounted, and it’s the perfect place to start an initial state. Pure components also use the lifecycle methods when extending React Component. And that’s where we are setting up an interval to update our component state every second. And lastly, the render method, where the UI components live, here you can map all your states and logic, to have an appearance in your app screens. Best usage cases Pure components are best used for larger components such as screens of the app. Like settings and home screens. You probably would need to create pure components to handle small UI representative UI pieces. For that you can create Stateless components. Stateless Components / Functional Components They are the exact opposite of pure components. Simply they are functions and do not hold a state, just the props. They are mostly used to map props into the UI, and used for small UI representations. Such as a Header in your app, Buttons, etc. Stateless Component Example const Header =({name, openDrawer})=> ( <View style={styles.header}> <TouchableOpacity onPress={()=>openDrawer()}> <Ionicons name="ios-menu" size={32} /> </TouchableOpacity> <Text>{name}</Text> <Text style={{width:50}}></Text> </View> ) This is an example of a stateless components, with no state. It’s a Header that takes 2 props, Screen name and openDrawer function. It’s only used to display the props, and have no logic or calculations. There you have it a Pure Component React Native Overview with examples. I hope you find this article as informative as you expected, if you have any questions, please send me Your feedback. Happy Coding. What is the pros and cons of using Purecomponent. Also what are other situations that PureComponents will come in handy? Hi Jacky, I think I have mentioned that part, but I can elaborate. I wouldn’t look at it as pros or cons, but situational usage. PureComponents are mostly used when your component does not have a state and only serves as a UI display. Like Buttons, Flatlist Items, Headers etc HELLO! Sir, I would like to ask you, is there any library or free template available for react native UI bcoz it help us to save time Yes, I mainly use nativebase, and they have free templates to use. Such as e-commerce store etc. check it out.
https://reactnativemaster.com/pure-component-react-native-overview/
CC-MAIN-2020-16
en
refinedweb
#include <itkErodeObjectMorphologyImageFilter.h> Inheritance diagram for itk::ErodeObjectMorphologyImageFilter< TInputImage, TOutputImage, TKernel >: Erosion of an image using binary morphology. Pixel values matching the object value are considered the "object" and all other pixels are "background". This is useful in processing mask images containing only one object. If the pixel covered by the center of the kernel has the pixel value ObjectValue and the pixel is adjacent to a non-object valued pixel, then the kernel is centered on the object-value pixel and neighboring pixels covered by the kernel are assigned the background value. The structuring element is assumed to be composed of binary values (zero or one). BinaryErodeImageFilter Definition at line 44 of file itkErodeObjectMorphologyImageFilter.h.
http://www.itk.org/Doxygen16/html/classitk_1_1ErodeObjectMorphologyImageFilter.html
crawl-003
en
refinedweb
1. svn co /somewhere/in/the/PYTHONPATH/localdates 2. Add 'localdates' in your settings' INSTALLED_APPS 3. In the templates that you want to use the localdates filter, add {% load local_datefilter %}. 4. You can now use the ldate filter, like that: {{ date_object|ldate:"d {Fp} Y" }} {{ date_object|ldate:"{FULL_DATE}" }} {{ date_object|ldate:"d F Y" }} You can browse the source code to find the available format strings - once they're finalized I'll write up documentation for them. 5. Checkout the test-localdates project for sample usage. (svn co test-localdates) 6. Probably your locale wouldn't be available in the application. If you want to add it, do a make-messages.py -l lang, from the localdates directory, so that the django.po file will be generated. You can then add your locale's datestrings, do a compile-messages.py and reload. You may need to restart the dev server. Don't forget to attach the file in the issue tracker so I can update the source. 7. Check out the sample django/contrib/localflavor-sample package for ways of extending the current functionality. NOTE: It's best to contact me directly, as this might change. Hi! I use following middleware: class DjangoLocaleSwitchMiddleware(object): def process_request(self, request): lang = get_language() loc = 'en_US.UTF-8' if lang == 'ru': loc = 'ru_RU.UTF-8' elif lang == 'kk': loc = 'kk_KZ.UTF-8' locale.setlocale(locale.LC_ALL, loc) Of course it's not so flexible as yours, but it's simpler Hi! I use following middleware: Of course it's not so flexible as yours, but it's simpler
http://code.google.com/p/django-localdates/wiki/Usage
crawl-003
en
refinedweb
.\" ::Spec 3" .TH File::Spec 3 "2002-06-01" "perl v5.8.0" "Perl Programmers Reference Guide" .SH "NAME" File::Spec \- portably perform operations on file names .SH "SYNOPSIS" .IX Header "SYNOPSIS" .Vb 1 \& use File::Spec; .Ve .PP .Vb 1 \& $x=File::Spec->catfile('a', 'b', 'c'); .Ve .PP which returns 'a/b/c' under Unix. Or: .PP .Vb 1 \& use File::Spec::Functions; .Ve .PP .Vb 1 \& $x = catfile('a', 'b', 'c'); .Ve .SH "DESCRIPTION" .IX Header "DESCRIPTION" This module is designed to support operations commonly performed on file specifications (usually called \*(L"file names\*(R", Ko\*:nig, Andy Dougherty, Charles Bailey, Ilya Zakharevich, Paul Schinder, and others. .PP Since these functions are different for most operating systems, each set of \&\s-1OS\s0 specific routines is available in a separate module, including: .PP .Vb 5 \& File::Spec::Unix \& File::Spec::Mac \& File::Spec::OS2 \& File::Spec::Win32 \& File::Spec::VMS .Ve .PP The module appropriate for the current \s-1OS\s0 is automatically loaded by File::Spec. Since some modules (like \s-1VMS\s0) make use of facilities available only under that \s-1OS\s0, it may not be possible to load all modules under all operating systems. .PP Since File::Spec is object oriented, subroutines should not be called directly, as in: .PP .Vb 1 \& File::Spec::catfile('a','b'); .Ve .PP but rather as class methods: .PP .Vb 1 \& File::Spec->catfile('a','b'); .Ve .PP For simple uses, File::Spec::Functions provides convenient functional forms of these methods. .SH "METHODS" .IX Header "METHODS" .IP "canonpath" 2 .IX Item "canonpath" No physical check on the filesystem, but a logical cleanup of a path. .Sp .Vb 1 \& $cpath = File::Spec->canonpath( $path ) ; .Ve .IP "catdir" 2 .IX Item "catdir" Concatenate two or more directory names to form a complete path ending with a directory. But remove the trailing slash from the resulting string, because it doesn't look good, isn't necessary and confuses \&\s-1OS2\s0. Of course, if this is the root directory, don't cut off the trailing slash :\-) .Sp .Vb 1 \& $path = File::Spec->catdir( @directories ); .Ve .IP "catfile" 2 .IX Item "catfile" Concatenate one or more directory names and a filename to form a complete path ending with a filename .Sp .Vb 1 \& $path = File::Spec->catfile( @directories, $filename ); .Ve .IP "curdir" 2 .IX Item "curdir" Returns a string representation of the current directory. .Sp .Vb 1 \& $curdir = File::Spec->curdir(); .Ve .IP "devnull" 2 .IX Item "devnull" Returns a string representation of the null device. .Sp .Vb 1 \& $devnull = File::Spec->devnull(); .Ve .IP "rootdir" 2 .IX Item "rootdir" Returns a string representation of the root directory. .Sp .Vb 1 \& $rootdir = File::Spec->rootdir(); .Ve .IP "tmpdir" 2 .IX Item "tmpdir" Returns a string representation of the first writable directory from a list of possible temporary directories. Returns "" if no writable temporary directories are found. The list of directories checked depends on the platform; e.g. File::Spec::Unix checks \f(CW$ENV\fR{\s-1TMPDIR\s0} and /tmp. .Sp .Vb 1 \& $tmpdir = File::Spec->tmpdir(); .Ve .IP "updir" 2 .IX Item "updir" Returns a string representation of the parent directory. .Sp .Vb 1 \& $updir = File::Spec->updir(); .Ve .IP "no_upwards" 2 .IX Item "no_upwards" Given a list of file names, strip out those that refer to a parent directory. (Does not strip symlinks, only '.', '..', and equivalents.) .Sp .Vb 1 \& @paths = File::Spec->no_upwards( @paths ); .Ve .IP "case_tolerant" 2 .IX Item "case_tolerant" Returns a true or false value indicating, respectively, that alphabetic is not or is significant when comparing file specifications. .Sp .Vb 1 \& $is_case_tolerant = File::Spec->case_tolerant(); .Ve .IP "file_name_is_absolute" 2 .IX Item "file_name_is_absolute" Takes as argument a path and returns true if it is an absolute path. .Sp .Vb 1 \& $is_absolute = File::Spec->file_name_is_absolute( $path ); .Ve .Sp This does not consult the local filesystem on Unix, Win32, \s-1OS/2\s0, or Mac \s-1OS\s0 (Classic). It does consult the working environment for \s-1VMS\s0 (see \*(L"file_name_is_absolute\*(R" in File::Spec::VMS). .IP "path" 2 .IX Item "path" Takes no argument, returns the environment variable \s-1PATH\s0 as an array. .Sp .Vb 1 \& @PATH = File::Spec->path(); .Ve .IP "join" 2 .IX Item "join" join is the same as catfile. .IP "splitpath" 2 .IX Item "splitpath" Splits a path in to volume, directory, and filename portions. On systems with no concept of volume, returns undef for volume. .Sp .Vb 2 \& ($volume,$directories,$file) = File::Spec->splitpath( $path ); \& ($volume,$directories,$file) = File::Spec->splitpath( $path, $no_file ); .Ve .Sp For systems with no syntax differentiating filenames from directories, assumes that the last file is a path unless \f(CW$no_file\fR is true or a trailing separator or /. or /.. is present. On Unix this means that \f(CW$no_file\fR true makes this return ( '', \f(CW$path\fR, '' ). .Sp The directory portion may or may not be returned with a trailing '/'. .Sp The results can be passed to \*(L"\fIcatpath()\fR\*(R" to get back a path equivalent to (usually identical to) the original path. .IP "splitdir" 2 .IX Item "splitdir" The opposite of \*(L"\fIcatdir()\fR\*(R". .Sp .Vb 1 \& @dirs = File::Spec->splitdir( $directories ); .Ve .Sp $directories must be only the directory portion of the path on systems that have the concept of a volume or that have path syntax that differentiates files from directories. .Sp Unlike just splitting the directories on the separator, empty directory names (\f(CW''\fR) can be returned, because these are significant on some OSs. .IP "\fIcatpath()\fR" 2 .IX Item "catpath()" Takes volume, directory and file portions and returns an entire path. Under Unix, \f(CW$volume\fR is ignored, and directory and file are catenated. A '/' is inserted if need be. On other OSs, \f(CW$volume\fR is significant. .Sp .Vb 1 \& $full_path = File::Spec->catpath( $volume, $directory, $file ); .Ve .IP "abs2rel" 2 .IX Item "abs2rel" Takes a destination path and an optional base path returns a relative path from the base path to the destination path: .Sp .Vb 2 \& $rel_path = File::Spec->abs2rel( $path ) ; \& $rel_path = File::Spec->abs2rel( $destination\fR volume, and ignores the \f(CW$base\fR volume. .Sp On systems that have a grammar that indicates filenames, this ignores the \&\f(CW$base\fR filename as well. Otherwise all path components are assumed to be directories. .Sp If \f(CW$path\fR is relative, it is converted to absolute form using \*(L"\fIrel2abs()\fR\*(R". This means that it is taken to be relative to \fIcwd()\fR. .Sp No checks against the filesystem are made. On \s-1VMS\s0, there is interaction with the working environment, as logicals and macros are expanded. .Sp Based on code written by Shigio Yamaguchi. .IP "\fIrel2abs()\fR" 2 .IX Item "rel2abs()" Converts a relative path to an absolute path. .Sp .Vb 2 \& $abs_path = File::Spec->rel2abs( $path ) ; \& $abs_path = File::Spec->rel2abs( $base\fR volume, and ignores the \f(CW$path\fR volume. .Sp On systems that have a grammar that indicates filenames, this ignores the \&\f(CW$base\fR filename as well. Otherwise all path components are assumed to be directories. .Sp If \f(CW$path\fR is absolute, it is cleaned up and returned using \*(L"\fIcanonpath()\fR\*(R". .Sp No checks against the filesystem are made. On \s-1VMS\s0, there is interaction with the working environment, as logicals and macros are expanded. .Sp Based on code written by Shigio Yamaguchi. .PP For further information, please see File::Spec::Unix, File::Spec::Mac, File::Spec::OS2, File::Spec::Win32, or File::Spec::VMS. .SH "SEE ALSO" .IX Header "SEE ALSO" File::Spec::Unix, File::Spec::Mac, File::Spec::OS2, File::Spec::Win32, File::Spec::VMS, File::Spec::Functions, ExtUtils::MakeMaker .SH "AUTHORS" .IX Header "AUTHORS" Kenneth Albanowski , Andy Dougherty , Andreas Ko\*:nig , Tim Bunce . \&\s-1OS/2\s0 support by Ilya Zakharevich . Mac support by Paul Schinder , and Thomas Wegner . \fIabs2rel()\fR and \fIrel2abs()\fR written by Shigio Yamaguchi , modified by Barrie Slaymaker . \fIsplitpath()\fR, \fIsplitdir()\fR, \fIcatpath()\fR and \&\fIcatdir()\fR by Barrie Slaymaker.
http://www.fiveanddime.net/pod/File/Spec.html
crawl-003
en
refinedweb
#include <rtt/TaskObject.hpp> Definition at line 51 of file TaskObject.hpp. Create a TaskObject with a given name and description and tie it to a parent. Add a new child interface to this interface. Reimplemented in RTT::TaskContext. Get a pointer to a previously added TaskObject. Get a list of all the object names of this interface. Returns the parent OperationInterface in which this TaskObject lives. A TaskObject can have only one parent. Implements RTT::OperationInterface. Definition at line 76 of file TaskObject.hpp. Remove and delete a previously added TaskObject. Deletion will only occur if obj_name's parent is this. You can avoid deletion by first calling Set the execution engine of the parent TaskContext. Do not call this method directly. This function is automatically called when a TaskObject is added to a TaskContext. Implements RTT::OperationInterface. Set a new parent for this interface. Do not call this method directly. This function is automatically called when a TaskObject is added to another TaskObject. Implements RTT::OperationInterface. Definition at line 78 of file TaskObject.hpp.
http://people.mech.kuleuven.be/~orocos/pub/stable/documentation/rtt/v1.8.x/api/html/classRTT_1_1TaskObject.html
crawl-003
en
refinedweb
gettimeofday, settimeofday - get / set time Synopsis Description Errors Note #include <sys/time.h> int gettimeofday(struct timeval *tv, struct timezone *tz); int settimeofday(const struct timeval *tv , const struct timezone *tz); The functions gettimeofday and settimeofday can get and set the time as well as a timezone. The tv argument is a timeval struct, as specified in <sys/time.h>: struct timeval { time_t tv_sec; /* seconds */ suseconds_t tv_usec; /* microseconds */ }; and gives the number of seconds and microseconds since the Epoch (see time(2)). The tz argument). The). Traditionally, the fields of struct timeval were longs. SVr4, BSD 4.3. POSIX 1003.1-2001 describes gettimeofday() but not settimeofday(). date(1), adjtimex(2), time(2), ctime(3), ftime(3)
http://www.squarebox.co.uk/cgi-squarebox/manServer/usr/share/man/man2/settimeofday.2
crawl-003
en
refinedweb
OpenLayers Release 2.10 Release - Release Announcements: - Release Notes: Release/2.10/Notes - Current Stable API: - Tar Ball: - Zip: - Browse Source: - SVN Checkout: svn checkout Tickets No results No results No results No results No results No results - #1416 - Drawing the geometry doesn't work correctly, if the scale is changed to 100 - #1780 - Fix tests in Google Chrome - #1839 - Have Format.WFST support multiple typenames - #1997 - NavigationHistory control does not deal with reprojection - #2017 - Popup Class XHTML violation (wrong DIV tag generated) - #2063 - Create OpenLayers.Format.OWSContext - #2096 - measure control should not fire measurepartial on first click - #2131 - add support for WFS hit count (1.1) - #2137 - Handler class constructor optimization - #2221 - Wrong url in links/opengeo.rst - #2273 - broken images in examples - #2360 - make Layer.addOptions call initResolutions if necessary - #2427 - fix and rewrite initResolutions - #2449 - WFS 1.1 writer for GetFeature does not support resultType attribute - #2468 - make build.py callable in python - #2469 - Implement weighted average in getCentroid for Collections - #2478 - WMS 1.3 exception format is now INIMAGE - #2490 - ArcIMS as base layer blows up the Overview Map control - #2493 - Add support for Google Maps v3 API - #2506 - Test of Panel control: Not detected redraw through events. - #2520 - MousePosition control: implement the methods activate/deactivate. - #2546 - drawing features while the layer is not in the map may cause problem - #2548 - Add SOSCapabilities v1_0_0 to read more than one observedProperty per offering - #2549 - Layer.Vector fires featuresadded event with shapes vetoed by beforefeatureadded - #2553 - the documentation for map events is confusing and misleading - #2563 - labelAlign with one character can not works on IE - #2567 - Graticule control: Implements activate, deactivate and usage autoActivate property - #2570 - Create control to perform SLD selections on WMS layers - #2584 - Replace Renderer.Elements' minimumSymbolizer and Render.Canvas' symbolizer defaults with Renderer.defaultSymbolizer - #2600 - Wrong scaleline in lon/lat projections - #2603 - Layer.Grid.setTileSize no need to clone size - #2604 - remove global variables in Rule.js - #2605 - Give Style a unique id and a clone method - #2606 - Remove Prototype.js from license.txt - #2608 - external graphics not displayed when drawing after linestring - #2616 - allow cached tile layers with max resolution different than map - #2620 - remove global variables - #2621 - Renderer.Canvas: fix drawText loop - #2622 - Style::addRules should not create a new rules object - #2625 - upgrade to XMLHttpRequest 1.0.4 - #2626 - make Format.ArcXML tests pass in Safari - #2630 - Attribute Columns Do Not Accurately Display Case of Attribute Names - #2637 - Support WMTS layer in OpenLayers - #2638 - Loading GML layer with GeoJSON format renders, but gives an error - #2640 - make WFSCapabilities format retrieve feature type prefix and namespace - #2642 - Partial RasterSymbolizer support for Format.SLD - #2643 - GetFeature select by click should allow to select several features - #2651 - Anchored popup does not respect anchor size when relative position is 'tl' - #2652 - Element.js crashes locking up Map - #2654 - Add degrees symbol to Util.getFormattedLonLat - #2656 - no way to pass read options from protocol to format - #2660 - Tests broken on FF 3.5/3.6: 'Operation is not supported" code: "9' - #2664 - GetFeature doesn't add the "olCursorWait" CSS class to viewPortDiv when selecting by box - #2665 - SphericalMercator layers should respect custom projection code - #2666 - WMSCapabilities: this.parser not set - #2669 - renderers array should be able to deal with objects as well as strings - #2671 - OWSCommon format should include regExes and namespaces - #2673 - OpenLayers.Map.destroy() nullify panTween - #2674 - remove unneeded parseFloat from OpenLayers.LonLat.fromString - #2675 - OpenLayers.Tween: code simplification - #2676 - add createLayer method to WMTSCapabilities format - #2677 - better matrix updates for the WMTS layer - #2678 - create WMTSGetFeatureInfo control - #2686 - LonLat.add and string values - #2688 - Geometry.Collection does not call parent destructor, so bounds are not reset - #2691 - measure control - wrong measure when moving fast - #2692 - Add vincenty direct formula - #2693 - give featureId to eraseGeometry - #2696 - Format.Filter.v1 may fail to read LowerBoundary and upperBoundary - #2701 - RegularPolygon: clear when not activated - #2702 - Error: missing : after property id - #2710 - Fix SVG tests in webkit - #2712 - Renderer and Format.SLD should use fill and stroke unless set to false - #2713 - Renderer.SVG.importSymbol: don't use string concat - #2714 - TransformFeature transformcomplete event has a wrong feature attached - #2717 - Control.WMSGetFeatureInfo: double setMap on handler - #2722 - getFeatureByFid() for vector layers - #2724 - GML format creates features with geometry = OpenLayers.Bounds - #2727 - WMTS: url management bug - #2729 - Google dependency breaks build - #2730 - Google v3 layer is displayed even if not visible - #2731 - repositionMapElements fails if map object container has no child element - #2732 - Add getter method for maxExtent - #2733 - Rotated graphics are positioned incorrectly after zoom changes in IE - #2735 - Problems with Google layer and zoom box in IE - #2740 - WMC : layer extension for tileSize property - #2742 - Extend FramedCloud to a maximum width of 1200px - #2744 - WMSGetFeatureInfo: add an event when no queryable layers are found - #2745 - Cleaner panel redraw - #2746 - Control.OverviewMap: start maximized - #2750 - setVisibility(false) not respected before layer is fully loaded - #2751 - Changes in languages: "es" and "ca". - #2753 - Panel: Restore the active state of controls when panel is activate after deactivation. - #2755 - "temporary" sketch style from Vector layer does not get applied to OpenLayers.Handler.RegularPolygon - #2758 - Google layer improvements for maps with allOverlays set to true - #2759 - allOverlays trouble with Google v3 layer - #2760 - read and write multiple symbolizers and multiple FeatureTypeStyle elements in SLD - #2764 - Panel: strange behavior when using TYPE_TOOL and TYPE_BUTTON - #2766 - buffer option for tile layer not documented - #2767 - xy property should be an Object rather than an Array - #2768 - Only call dashStyle when strokeDashstyle is set - #2769 - Panel: Adding controls, unnecessarily activations occur followed by deactivations - #2770 - vector layer should fire beforefeaturesremoved - #2771 - parse gx:Track element from KML - #2773 - filter evaluate method should accept a feature - #2774 - add removeAllFeatures - #2775 - clear method should remove references to features - #2778 - add ISO date parsing and toISOString for dates - #2782 - Localization problem with XML format's createTextNode method in IE - #2784 - Once applied, strokes cannot be removed - #2790 - Filter strategy for filtering features passed to a layer - #2791 - Class.js tweak - #2793 - Make it safer to extend the Google layer. - #2796 - mergeWithDefaultFilter should not return null - #2797 - hoverRequest is never aborted - #2814 - SLDSelect control tests failing - #2815 - Cluster Strategy should not destroy all features
http://trac.osgeo.org/openlayers/wiki/Release/2.10
crawl-003
en
refinedweb
Date: Tue, 24 Oct 2000 12:21:04 +0100 From: "Stephen C. Tweedie" <[email protected]> To: Andreas Gruenbacher <[email protected]> Subject: Re: [PROPOSAL] Extended attributes for Posix security extensions Hi, On Sun, Oct 22, 2000 at 04:23:53PM +0200, Andreas Gruenbacher wrote: > > This is a proposal to add extended attributes to the Linux kernel. > Extended attributes are name/value pairs associated with inodes. What timing --- we had a long discussion (actually, several!) about this very topic at the Miami storage workshop last week. One of the main goals we had in getting people together to talk about extended attributes in general, and ACLs in particular, was to deal with the API issues cleanly. In particular, we really want the API to be at the same time: * General enough to deal with all of the existing, mutually-incompatible semantics for ACLs and attributes; and * Specific enough to define the requested semantics unambiguously for any one given implementation of the underlying attributes. These points are really important. We have people wanting to add ACL support to ext2 in a manner which Samba can use --- can we do POSIX ACLs with NT SID identifiers rather than with unix user ids? If we mount an NTFS filesystem, it will have native ACLs on disk. How does the API speficy that we want NT semantics, not POSIX semantics, for a given request? There is also the naming issue. There are multiple independent namespaces. For extended attributes, there may be totally separate namespaces for user attributes and for system ones, or there may be a common namespace with per-attribute system status. Again, these different sets of semantics _already exist_ on filesystems which Linux can mount (eg. NTFS, JFS and XFS), so the API has to deal with them. There is already a kernel API which has this flexibility: the BSD socket API handles these issues through the concepts of protocol families and address families. Those same two concepts are perfectly matched to the extended attributes problem.. Obviously the combinations of name types supported for any given attribute family will depend on the underlying implementation, but that's the whole point --- the API is expressive enough to define unambiguously what the application is trying to do, so that if the underlying filesystem doesn't support (say) POSIX ACLs, we get an error back telling us so rather than attempting to do an incomplete map of the POSIX request onto whatever the underlying filesystem happens to support. Before we look at the syscall API in detail, there's one other point to note. It is common to want to read or set one individual attribute in isolation (even if it is an atomic set-and-get which is being performed on that attribute). Sometimes, however, you want to access the entire set of related attributes as an ordered list. ACLs are the obvious case: if you have underlying semantics which allow you to mix both PASS and DENY ACLs on a file, then the order of the ACLs obviously matters. In such cases, you may sometimes want just to query or set the ACL for a specific user, but often you will want to do something more complex such as change the order of ACLs on the list or replace the entire list as a single entity --- and you want to do so atomically. So, the simple "SET" and "GET" operations on named attributes (which correspond to writing and reading the ACLs for specific named users in the ATR_POSIXACL family) need to be augmented with SET variants which append or prepend to the ACL list, or which atomically replace the old ACL list in its entirety. Our proposed kernel API looks something like this: sys_setattr (char *filename, int attrib_family, int op, struct attrib *old_attribs, int *old_lenp, struct attrib *new_attribs, int new_len); sys_fsetattr(int fd, int attrib_family, int op, struct attrib *old_attribs, int *old_lenp, struct attrib *new_attribs, int new_len); where <op> can be ATR_SET overwrite existing attribute ATR_GET read existing attribute ATR_GETALL read entire ordered attribute list (ignores new val) ATR_PREPEND add new attribute to start of ordered list ATR_APPEND add new attribute to end of ordered list ATR_REPLACE replace entire ordered attribute list and where <attribs> is a buffer of length <len> bytes of variable length struct attrib records: struct attrib { int rec_len; /* Length of the whole record: should be padded to long alignment */ int name_family; /* Which namespace is the name in? */ int name_len; int val_len; char name[variable]; /* byte-aligned */ char val[variable]; /* byte-aligned */ }; ATR_SET will overwrite an existing attribute, or if the attribute does not already exist, will append the new attribute (ie. it does not override existing ACL controls, in keeping with the Principle of Least Surprise). If multiple instances of the name already exist, then the first one is replaced and subsequent ones deleted. If supplied with an "old" buffer, all old attributes of that name will be returned. For the PREPEND/APPEND/REPLACE operations, the entire old attribute set is returned. For GET, the <new> specification is read and all attributes which match any items in <new> are returned, in the order in which they are specified in <new>. The actual value in <new> is ignored; only the name is used. For GETALL, <new> is ignored entirely. *old_lenp should contain the size of the old attributes buffer on entry. It will contain the number of valid bytes in the old buffer on exit. If the buffer is not sufficiently large to contain all of the attributes, E2BIG is returned. This is just a first stab at documenting what feels like an appropriate API. It should be extensible enough for the future, but is pretty easy to code to already --- existing filesystems don't have to deal with any complexity they don't want to. Additionally, the use of well-defined namespaces for attributes means that in the future we can implement things like common code for generic attribute caching, or process authentication groups for non-Unix-ID authentication tokens, without having to duplicate all of that work in each individual filesystem. The extended attribute patch currently on the acl-devel group simply doesn't give us the ability to do extended attributes on any filesystem other than ext2, because it has such specific semantics. I'd rather avoid that, and I'd rather do so without adding a profusion of different ACL and attribute syscalls in the process. Cheers, Stephen - To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in the body of a message to [email protected]
http://lwn.net/2000/1026/a/sct-attributes.php3
crawl-003
en
refinedweb
#include "ace/config-all.h" #include "ace/ACE_export.h" #include "ace/SStringfwd.h" #include "ace/Functor_String.inl" Include dependency graph for Functor_String.h: Class template specializations for ACE_*String types implementing function objects that are used in various places in ATC. They could be placed in Functor.h. But we don't want to couple string types to the rest of ACE+TAO. Hence they are placed in a seperate file.
http://www.theaceorb.com/1.4a/doxygen/ace/Functor__String_8h.html
crawl-003
en
refinedweb
#include <mach-o/loader.h> #include <mach-o/nlist.h> #include <mach-o/stab.h> #include <mach-o/reloc.h> The object files produced by the assembler and link editor are in Mach- O (Mach object) file format. The file name a.out is the default output file name of the assembler as(1) and the link editor ld(1) The format of the object file however is not 4.3BSD a.out format as the name sug- gests, but rather Mach-O format. The link editor will make a.out exe- cutable if the resulting format is an executable type and there were no errors and no unresolved external references. reloca- tion entries. seg- ment and all the segment headers are created and the segments them- selves are padded out to the segment alignment (typically the target pagesize). For the object file type produced by an assembler (or by the link editor for further linking) all the sections are placed in one segment for compactness. When the kernel executes a Mach-O file it maps in the object file's segments, the dynamic link editor (if used) and creates the thread(s) for execution. Any part of the object file that is not part of a seg- ment is not mapped in for execution. For executable using the dynamic link editor the headers and other link edit information is needed to execute the file. These parts include the relocation entries, the sym- bol table and the string table. These parts are mapped in with the use of the link editor's -seglinkedit option which creates a segment that contains these parts. These parts can be stripped down with the -S option to ld(1) or various options to strip(1). as(1), ld(1), nm(1), gdb(1), stab(5), strip(1) Apple Computer, Inc. October 22, 2001 MACH-O(5)
http://www.syzdek.net/~syzdek/docs/man/.shtml/man5/a.out.5.html
crawl-003
en
refinedweb
#include <itkImageIOFactory.h> #include <itkImageIOFactory.h> Inheritance diagram for itk::ImageIOFactory: Definition at line 28 of file itkImageIOFactory.h. Reimplemented from itk::Object. Definition at line 35 of file itkImageIOFactory.h. Convenient typedefs. Definition at line 43 of file itkImageIOFactory.h. Definition at line 34 of file itkImageIOFactory.h. Standard class typedefs. Definition at line 32 of file itkImageIOFactory.h. Definition at line 33 of file itkImageIOFactory.h. Mode in which the files is intended to be used [protected] [static] Create the appropriate ImageIO depending on the particulars of the file. [virtual] Run-time type information (and related methods). Register Built-in factories
http://www.itk.org/Doxygen16/html/classitk_1_1ImageIOFactory.html
crawl-003
en
refinedweb
#include <itkGradientDifferenceImageToImageMetric.h> Inheritance diagram for itk::GradientDifference derivatives of the moving and fixed images after passing the squared difference through a function of type . Spatial correspondance between both images is established through a Transform. Pixel values are taken from the Moving image. Their positions are mapped to the Fixed image and result in general in non-grid position on it. Values at these non-grid position of the Fixed image are interpolated using a user-selected Interpolator. Definition at line 50 of file itkGradientDifferenceImageToImageMetric.h.
http://www.itk.org/Doxygen16/html/classitk_1_1GradientDifferenceImageToImageMetric.html
crawl-003
en
refinedweb
Extracting plain text from HTML Fredrik Lundh | August 2003 | Originally posted to online.effbot.org As some readers may have noticed, my RSS feed no longer includes full articles; instead, each item contains the first 50-100 words from the corresponding article, as plain unstyled text. I may switch back again when I stop posting “standard python library” articles… If you want to use something similar in your feeds, here’s the code that does the work. Tweak as necessary: def textify(html_snippet, maxwords=50): import formatter, htmllib, StringIO, string class Parser(htmllib.HTMLParser): def anchor_end(self): self.anchor = None class Formatter(formatter.AbstractFormatter): pass class Writer(formatter.DumbWriter): def send_label_data(self, data): self.send_flowing_data(data) self.send_flowing_data(" ") o = StringIO.StringIO() p = Parser(Formatter(Writer(o))) p.feed(html_snippet) p.close() words = o.getvalue().split() if len(words) <= 2*maxwords: return string.join(words) return string.join(words[:maxwords]) + " ..." The HTMLParser subclass disables anchor footnotes; the DumbWriter subclass makes sure that HTML list items have proper labels (or in other words, the subclass works around a bug in the standard library).
http://sandbox.effbot.org/zone/textify.htm
crawl-003
en
refinedweb
#include <mlFields.h> Definition at line 879 of file mlFields.h. Default constructor, do not use it. Constructor, creates a field with a name to manage an OutputConnector on the Module module at index outputImageIndex. Destroys the field. Returns a reference to the OutputConnector. Reference to OutputConnector as C-string to return (OutputConnector do not have a value, therefore reference is returned). Returns the value of the field as a string value. setStringValue must be able to interpret this returned string correctly. A notified output image must also clear its PagedImage if it is considered as changed. So it needs to be overloaded here. Default is that it is considered as FieldSensor::CHANGED. Reimplemented from ml::Field.
http://www.mevislab.de/fileadmin/docs/current/MeVisLab/Resources/Documentation/Publish/SDK/ToolBoxReference/classml_1_1OutputConnectorField.html
crawl-003
en
refinedweb
#include <itkImageReverseConstIterator.h> Inheritance diagram for itk::ImageReverseConstIterator< TImage >: ImageReverseConstIterator is a templated class to represent a multi-dimensional iterator. ImageReverseConstIterator is templated over the dimension of the image and the data type of the image. ImageReverseConstIterator is a base class for all the reverse image iterators. It provides the basic construction and comparison operations. However, it does not provide mechanisms for moving the iterator. A subclass of ImageReverseConstIterator must be used to move the iterator. ImageReverseReverseConstIterator holds a reference to the image over which it is traversing. ImageReverseConstIterator assumes a particular layout of the image data. In particular, the data is arranged in a 1D array as if it were [][][][slice][row][col] with Index[0] = col, Index[1] = row, Index[2] = slice, etc. Definition at line 60 of file itkImageReverseConstIterator.h.
http://www.itk.org/Doxygen16/html/classitk_1_1ImageReverseConstIterator.html
crawl-003
en
refinedweb
Hi guys I'm just about to start with the pi universe. This is my first post. I hope i am asking in a accepted way. I do have 15 years experience on linux, but not a lot in scripting/programming. I am not at all savvy in electrical engineering. I have a Pi Zero WH Rev. 1.1 and connected a RPi Motor Driver Board on top through the 40pin header. I do power it through a 12V Power Adapter from an old QNAP TS209 that goes into VIN and GND on the Motor Driver Board. The Motor Driver Board powers the piz zero (through the header) and two 12V DC motors. AFAIK the motor board can do that ( Onboard 5V regulator, provides power to Raspberry Pi There is a IR receiver on the motor board too, although i do not know which one exactly (TSOP etc.). I have installed Raspian Lite Buster from July (this might be a source of the problem, as I do not know how the development state is for IR, Pi Zero etc. on Debian Buster) I want to use my IR Remote to control the motors with python (but IMPORTANT:without LIRC) and i have succeeded to do so using the waveshare's example code for that HAT board. But it seems as if only about 10% of all key-presses (scancodes) are captured by the python code. The rest is ignored. Often i can't stop the motors after one of several key-presses has started the motor(s). If run my pre-configured ir-keytable -t I do receive a scancode for all key-presses using different remotes (NEC with Terratec Cinergy, RC-5 with Hauppauge remote, NEC with a Yamaha HiFi remote). But when i am running the motor.py code it only captures NEC type scancodes and only about 10% of the key-presses are executed through the code. This is the ir remote python script ( motor.py) from Waveshare demo code files i am using What I find strange is that they already take away some parts of the scancode and only using the last bits. But I think this is not the source of the problems i am facing. How can i test/modify the script in order to fix this? Code: Select all import RPi.GPIO as GPIO import time PIN = 18 PWMA1 = 6 PWMA2 = 13 PWMB1 = 20 PWMB2 = 21 D1 = 12 D2 = 26 PWM = 50 GPIO.setmode(GPIO.BCM) GPIO.setwarnings(False) GPIO.setup(PIN,GPIO.IN,GPIO.PUD_UP) GPIO.setup(PWMA1,GPIO.OUT) GPIO.setup(PWMA2,GPIO.OUT) GPIO.setup(PWMB1,GPIO.OUT) GPIO.setup(PWMB2,GPIO.OUT) GPIO.setup(D1,GPIO.OUT) GPIO.setup(D2,GPIO.OUT) p1 = GPIO.PWM(D1,500) p2 = GPIO.PWM(D2,500) p1.start(50) p2.start(50) def set_motor(A1,A2,B1,B2): GPIO.output(PWMA1,A1) GPIO.output(PWMA2,A2) GPIO.output(PWMB1,B1) GPIO.output(PWMB2,B2) def forward(): GPIO.output(PWMA1,1) GPIO.output(PWMA2,0) GPIO.output(PWMB1,1) GPIO.output(PWMB2,0) def stop(): set_motor(0,0,0,0) def reverse(): set_motor(0,1,0,1) def left(): set_motor(1,0,0,0) def right(): set_motor(0,0,1,0) def getkey(): if GPIO.input(PIN) == 0: count = 0 while GPIO.input(PIN) == 0 and count < 200: #9ms count += 1 time.sleep(0.00006) count = 0 while GPIO.input(PIN) == 1 and count < 80: #4.5ms count += 1 time.sleep(0.00006) idx = 0 cnt = 0 data = [0,0,0,0] for i in range(0,32): count = 0 while GPIO.input(PIN) == 0 and count < 15: #0.56ms count += 1 time.sleep(0.00006) count = 0 while GPIO.input(PIN) == 1 and count < 40: #0: 0.56ms count += 1 #1: 1.69ms time.sleep(0.00006) if count > 8: data[idx] |= 1<<cnt if cnt == 7: cnt = 0 idx += 1 else: cnt += 1 if data[0]+data[1] == 0xFF and data[2]+data[3] == 0xFF: #check return data[2] print('IRM Test Start ...') stop() try: while True: key = getkey() if(key != None): print("Get the key: 0x%02x" %key) if key == 0x18: forward() print("forward") if key == 0x08: left() print("left") if key == 0x1c: stop() print("stop") if key == 0x5a: right() print("right") if key == 0x52: reverse() print("reverse") if key == 0x15: if(PWM + 10 < 101): PWM = PWM + 10 p1.ChangeDutyCycle(PWM) p2.ChangeDutyCycle(PWM) print(PWM) if key == 0x07: if(PWM - 10 > -1): PWM = PWM - 10 p1.ChangeDutyCycle(PWM) p2.ChangeDutyCycle(PWM) print(PWM) except KeyboardInterrupt: GPIO.cleanup(); Important side information: I have manually recompiled WebIOPi (otherwise buster issues again) and can control all python functions as used in the IR script without any delay or problems. With WebIOPi it just works.
https://www.raspberrypi.org/forums/viewtopic.php?f=32&t=249572
CC-MAIN-2019-39
en
refinedweb
1 Synopsis Conclusions Bibliography 2 Acknowledgements This book is based on data from the research project “Poverty, ethnicity andgender during market transition.” I was principal investigator of this project, RebeccaEmigh, Eva Fodor and János Ladányi were my co-P.I’s. The project was funded by theFord Foundation. Data were collected in 1999 and 2000 by an international team in sixEuropean post-communist countries. Members of these research teams formed“workgroups”, which met at Yale University during the 2000-2001 academic year andduring the Fall Semester of 2001 in order to conduct preliminary data analysis. I have to thank the dedicated work of Rebecca Emigh (UCLA), who played acrucial role in conceptualizing the research, coordinating the project, developing theresearch instruments and cleaning, organizing the survey and ethnographic data. JánosLadányi (Budapest University of Economics, Hungary) and Eva Fodor (DartmouthCollege) helped me in every stage of the research and they spent time at Yale duri8ng thephase of preliminary data analysis. My special thanks are also due to Gail Kligman(UCLA) who was senior research consultant to the project. She offered all alongintellectual inspiration and participated in the spring 2001 workgroup. I am also gratefulto all those who participated in the workgroups at Yale University and who helped toproject in other ways. Members of the working groups were Adrianne Csizmady(Hungary), Henryk Domanski (Poland), Judit Durst (Hungary), Roman Dzambozovic(Slovakia), Gábor Fleck (Hungary), Joanna Jastrzebska-Szklarska (Poland), LiviaKrajcovicova (Slovakia), Petar-Emil Mitev (Bulgaria), László Péter (Romania), LiviaPopescu (Romania), Elzbieta Tarkowska (Poland), Ilona Tomova (Bulgaria), GabrielTroc (Romania), Tinatin Zurabishvili (Georgia). From Yale University three graduatestudents (Christy Glass, Janette Kawachi and Lucia Trimbur) and one post-doctoralfellow (Eric Kostello) also worked on the project. Ivan Szelenyi 3 Introduction This book is about markets, poverty and inequality. When markets expand and replace redistributive mechanisms does that lead to more or less inequality, more or less poverty? The answer to this question is far from obvious. Radicals of the classical type, Marx and his followers in particular suspected markets to be the primary cause of inequality and poverty. Classical liberals, from Adam Smith to Max Weber took the opposite position: for them less government or bureaucracy and more self regulating market meant less inequality and less poverty. Social democrats - such of Karl Polanyi - believed, that some mix of market and redistribution might keep inequality at bay, while This book is also about poverty under post-communism. After all post- old ideas with new realities. As socialism began the crumble the old question about the views emerged about the social consequences of market penetration in the move from 4undermine socialist clientelistic networks and reduce the power and privileges of the long run - the cause of the ‘direct producers’. To put is brutally simply: markets are good for you (Nee). 2/ Others – following the footsteps of the radicals - argued the opposite: redistribution was an egalitarian system, as markets appeared already within the least) more poverty. The bottom line: markets are bad for you, ‘good’ society’ was the pure type of socialism and already reform communism began the erosion which led to an explosion of social problems with the collapse of the system (Burawoy). 3/Finally it was also proposed – in the sprit of the social democratic theory of the ‘mixed economies’ - that markets attacked the inherent inequalities of socialist redistributive economies while markets only complemented the system of central planning, but they boosted new types of inequalities (new types of poverty) one they became the ‘dominant mechanism.’ The conclusion: under socialism complementing redistribution with market helped the poor; under post-communist capitalism the poor can be helped if markets are complemented by At first glance there is good reason for radicals to rejoice. The fall of communism in Central and Eastern Europe was followed with an explosion of poverty and inequality. According to World Bank estimates during the late 1980s in the former Eurasian socialist countries fewer than one out of twenty-five lived below their ‘absolute poverty line’, hence survived with less than $2.15 a day. Ten years later one out of five people had to live with less than $2.15 a day (The World Bank, 2000, p. 1). Income inequalities also jumped, the region, which was among the most egalitarian in the world just in a decade 5became one of the most inegalitarian ones. GINI coefficients between the late 1980s and the late 1990s increased in all countries. In Central Europe they rose modestly from around 0.20 to 0.25 (in Poland from 0.28 to 0.33), in South Eastern Europe it increased from around 0.25 to 0.30-0.40 and in Russia it jumped from 0.26 to 0.47 (The World So can we finally settle the century-and-half long debate? Far from it. The pure type of socialism, Stalinism may not have been as poverty free and egalitarian as the radicals like to believe; unlike the expectations of the liberals after the collapse of communism poverty and inequality exploded, but less markets and slower reform – just as the liberals would have expected – made it worse, not better (The World Bank, 2000, The story we tell in this book is a complex one, with pros and cons. There are no simple answer to the complex question we posed. Liberals may come back with vengeance after they let radicals to rejoice for a while and social democrats may stand in This book summarized the finding of research we conducted in year 2000 in six survey research on random samples of the adult population (with over-samples of Roma in three countries, Bulgaria, Hungary and Romania). Local graduate students affiliated with the project also carried out with funding from us and under our supervision ethnographic case studies of communities in extreme poverty (often rural Roma ghettos). This book summarizes what we learned about poverty under post communism with our investigation. 6 Let’s walk you through first the story line of this book and to try to develop a (a) The communist past was not that radiant. People do not remember Stalinism affectionately – many people reported that they tended to be poor in what they (b) As socialism progressed countries which were rather different before they turned socialist were converging. By the later epochs of socialism people report similar levels of poverty and low levels of inequality. Market reform does not hurt the reform (like the one in Romania) did. Under the circumstances reform (d) The speed of growth of poverty was not the same in all societies, in some - and in those where market reform was more limited (hence in Russia, Bulgaria or Romania) poverty grew faster than in those countries where liberal market (e) All countries experienced a similar dive into poverty during the first 3-5 years of reforms, bottomed out by the mid-1990s, and in those, which did not implement consistent reform the rate of poverty kept increasing. Hence given (c) and (d) more consistent market reform might be good for the poverty rate for a country. 7 Neo-liberal reform may be bad, but it is better than its alternative; little or incoherent reform; (f) People who remember to have been poor under socialism often grew up in single mother households several siblings and/or they were Roma. The education of their fathers did not matter at all. There are no significant cross-country differences or variations. (g) Those who report declining living standards after 1988 have low educational levels. Ethnicity (Roma may have been poor enough already under socialism), gender or the size of the family does not matter much. There is much less decline across countries: countries, which implemented liberals reform have much lower education and resulting poor labor market performance is the main reason of poverty. Those who live in single mother households and who are Roma tend to be over-represented among the poor, but size of the family is not as important as it Now we are ready to review some of the theoretical and methodological issues of poverty 8 1/ Poverty under socialism and post-communism – theoretical considerations Let’s first review, what we know from our own previous research and from the literature about the nature of social inequality and poverty under socialism. This way we can create a baseline and we can judge whether the conditions changed by 2000 either in State socialism was not an egalitarian society and people under socialism tended to be rather poor. Commentators after the fall of socialism often falsely describe socialism as an egalitarian society. This it was not: neither in its ideology, nor in its practice. As far as ideology goes socialism pretended to be meritocratic (and to some extent and in some sense it was): this was a social system in which in principle each was rewarded according higher educational credentials people had more they were supposed to add to productivity. To put it theoretically: socialist social structure can be described as a rank order. People were allocated to various ranks depending on what their educational credentials (or cultural capital) were, political loyalty (or political capital) acting as “glass ceiling” in the process of promotion to higher rank or office (Eyal at all, 1998). At “the last instance” possession of political capital may have been the final determinant of power and privilege 9in socialist societies, but it typically worked its way through educational credentials. In order to achieve a higher rank (and a higher level of income or privilege) one usually desirable position one needed political capital, one usually could not make such a transition without being the member of the Communist Party. Furthermore, occasionally promotions to such a higher rank did occur without the appointees having earned the appropriate credentials. They were simply promoted due to their political loyalty, though normally they were expected to earn the appropriate credentials sooner rather than later. In particular during the early years of socialism such accelerated promotions of working class and peasant cadres occurred quite frequently, since the new regime did not have enough well trained cadres yet, whom they could thrust. Communist leaders also felt the need to gain legitimacy among the masses by token promotions of ordinary people to high schools and at universities) to help the “prematurely’ promoted to reconcile their status inconsistencies. Thus those who were promoted to high office due to their political loyalty were helped to acquire the necessary level of education by short cutting the normal process through “political education.” (Bobai Li, 2001; Andrew Walder, Bobai Li and Donald Treiman, 2000). As a result empirical research normally finds that level of education was the best predictor of power and privilege under socialism (Ganzeboom; State socialism was not only poor, if you analyze the whole socialist epoch and look broadly across countries it is difficult to question that the system operated with a 10depressed level of consumption. To put it simply: as a general rule people were poor under socialism and looked with envy across the Iron Curtain and generally believed they had a worse life than people in similar social position in market capitalist countries. The main point we try to make is that socialism was an inegalitarian system, though this redistributive economies from those in market economies. While in market economies the law of supply and demand determines the degree of inequality, in state socialism credentials and political loyalty set inequalities. It follows that in capitalist economies the logic and extent of inequalities, which are counter-acted by market transactions b/ the gap between the lowest and highest incomes tended to be smaller in a socialist economy than the gap one would expect in comparable a market capitalist system. This was the result of multiple factors. Let’s name a few of those. First, socialism did private ownership of economic capital was not only illegal, but was non-existent for all practical purposes. Second, while socialism emphasized meritocracy it also claimed that equity is desirable and it tried to present itself as the “dictatorship of the proletariat.” As a result the income gap between higher and lower credentials was typically less than in 11market economies. The “flower of the proletariat” – defined the Marxist way (thus skilled workers in heavy industry, miners, welders etc) earned relatively high wages, often higher than those higher educated who were not in politically important positions (teachers and medical doctors typically had notoriously low salaries). True, wages and salaries did not measure adequately the actual differences in living standards. Socialism consumption was not accumulated in individual wages and salaries, but was allocated Those fringe benefits (subsidized better quality new housing, free tertiary education, special shops, hospitals, vacation homes reserved for cadres) tended to go to the those with more cultural and political capital. The extensive system of fringe benefits created a greater gap in living standards, than the gap between wages and salaries. But even if could take into account these privileges (which were very difficult to measure) it is likely that a “Gini coefficient of the monetary value of living standards” would have been less capitalist economy. Socialist countries were quite different from each other in this respect. The Soviet and Romanian ruling estate was known for its luxurious lifestyles, but the Czechoslovak and Hungarian elite lived rather modest lives. Third, socialism was a system of “production for production sake” (Antonio Carlo, 1978) and therefore it was a system of “dictatorship over needs” (Feher, Heller, Markus, 1983). Until the system began to disintegrate one central aim of the system was to “catch up and overtake” the most advanced economies. State socialist redistribution not only channeled resources from the lower income earners towards the higher income earners, it also channeled 12resources from consumption to production, in particular to productive investments. As the Hungarian Stalinist leader, Rakosi used to say: “you shall not eat the goose which lays the golden egg.” Let’s delay current consumption for the sake of future economic growth (and future affluence). One of the leading ideas of socialism was the “one bowl of rice” – one should consume only as much as necessary for the reproduction of labor power (hence: ”dictatorship over needs” to put it with Feher and his collaborators). This inevitably implied that the bottom had to be elevated and the top had to be suppressed. As socialism was entering its final phase – in particular in the reform communist countries, such as Hungary and Poland – it gave up hope that it can “catch up” with the West, it was less and less “production for production sake”, became more and more concerned with political legitimacy. As a result during the last two decades the more reform oriented countries tried to buy political peace with improving living standards. The reform communist countries began to borrow widely cheap oil-dollars and tried to boost consumption. But even this reversal of the “dictatorship over needs” was carried out in a rather egalitarian fashion. And this was inevitable: to so called “refrigerator socialism” of the Hungarian Janos Kadar (so much admired by the Polish Jaruzelski) was foreign borrowing had to spread rather equally, just as the costs of accelerated 13c/ the bottom of the social hierarchy under socialism tended to be higher than the bottom in market economies at the same level of economic development. While in the classical epoch everyone tended to be rather poor and levels of consumption were repressed across the board (in comparison with market capitalist economies at the same stage of growth) the poorest of the poor improved its conditions early in the game and substantially. The single most important reason for this was that socialism was an “economy of shortage” (Kornai, 1980), which operated with chronic shortages for all goods and factors of production, including for labor. Thus by definition labor was a scarce “commodity”, in general the system operated with full employment. There were epochs and regions with occasional unemployment. But if we speak about the whole socialist system over its whole life span one of its most unique feature was full employment, which included most of women and ethnic minorities, such as Gypsies. Before communism Gypsies were not part of regular labor markets, they rarely had permanent jobs. They were seasonal, casual workers, or were engaged in activities, where they sold products or services (were musicians, blacksmith, bricklayers, were involved in horse-trading, fortune telling and all sorts of similar activities) rather than selling their labor power. Before socialism in Southeastern Europe many Gypsies were travelers and had a semi-nomadic way of life (Tomova, 1995). Socialist regimes settled the travelers and offered to all Gypsies permanent employment. Typically Gypsies during the socialist epoch were employed at the bottom of the social hierarchy, as unskilled laborers, but often in the growth sectors of the socialist economy, thus for instance in mining, steel-mills, construction industry etc (Kemeny, 1973). They earned low incomes and carried out heavy and unhealthy work, 14but for the first time in history they had regular jobs and there was a regular cash inflow into their family budgets from stable wages. Socialist full employment by eliminating unemployment eliminated the major social source of extreme poverty: absence of regular income. Furthermore, socialist regimes not only enabled everyone to enter the labor force and earn regular incomes they forced everyone to live with this opportunity. There were laws against “hooliganism “ – these laws made sure that everyone had a job and had an address. Those who did not have a job or address in their identification card were treated as criminals, could be jailed or sent to labor camps. Work was not only a “right,” it was also a “duty.” And the same applied for housing. Public authorities offered some sort of housing for everyone, but people were also obliged to accept that housing and have an address. Unemployment and homelessness were not only “unnecessary”, but they were also not tolerated. Whether there is a “culture of poverty” (Oscar Lewis) or not has been the subject matter of intense controversy over the past decades. But if there is a “culture of poverty” state socialism certainly wanted to get rid of it. If it were true that some people are unemployed or homeless not by necessity, but by choice this choice was not offered to them by socialist regimes. Finally, state socialism had a highly underdeveloped social welfare system, but the very basic provisions (basic housing, free education, free medical services, child allowances, minimal disability and old age pensions) on top of “compulsory” – in the above sense of the term - regular wages and salaries were secured for everyone. Some critiques of state socialism claim that socialism operated with a “prematurely born” welfare system; that its welfare system was overgrown and not 15system of “collective consumption”, the value of many goods and services were not accumulated in personal incomes, but they were allocated through state redistribution. But to call this a “welfare state” is a misnomer. Most of such redistributive action could be better seen as fringe benefits, salary supplements, allocated to higher income groups, thus arguably they constituted redistribution of income not from the better-to-do to the poorer, but from the poorer to the better- to-do. Public housing is the best example. In market economies local authorities build new public housing to house those who could not afford to buy or rent housing from their wages and salaries. In socialist countries new public housing was systematically allocated to those with higher skills (and by definition higher incomes). The poorest had to build housing for themselves on the “market.” And the poorest of the poor entered the vacancy chain of public housing at the very bottom (Szelenyi, 1983), thus had to live in old, small, substandard housing. On the whole the socialist welfare system was poorly developed: schools, the healthcare system was neglected. Thus for instance the gradual deterioration of the healthcare system is responsible for the declining life expectancy during the last phase of state socialism. This could hardly be called an “overgrown welfare system.” The debate about the question whether the socialist welfare system was overgrown or underdeveloped has far reaching policy implications. If we believe that socialism offered more welfare provisions than it could afford the logical policy conclusion is that the task is to cut back on welfare expenditures and this was and is the policy of neo-liberals in the post-communist world. Post-communist countries are in great need of a social safety net that the welfare institutions they inherited from socialism are inadequate and they have to be developed. Kornai and other liberals have a point when they call for a reduction of “collective 16consumption” and the elimination of state subsidized fringe benefits to those who have high enough incomes to purchase for themselves and their families what they need. But the institutions, which cater for the educational, healthcare, housing and other needs of those who cannot afford to secure these goods or services for themselves at an adequate level have to be created. And this task is particularly urgent, since the “underdeveloped” socialist welfare system did perform one function quite well: the very basic provisions at the very bottom of the social hierarchy were met. Socialism eliminated starvation, put a roof over the head of every family, secured minimal incomes, and educational and healthcare services at a very minimal level. Under post-communism the bottom fells out below the poorest of the poor with the untargeted attack against welfare provisions. d/ the poor and ethnic minorities were segregated from the main-stream society in space, but the degree of such segregation was generally relatively low and it tended to decline. We treat the spatial separation of the poor and/or ethnic minorities as an important spatial segregation by class and ethnicity under socialism. In the cities as new public housing was built outside the urban core in deteriorating inner urban areas older and poorer people and ethnic minorities were concentrated. With urbanization and industrialization substantial regional change took place as well. The more dynamic, younger, better educated families were moving away from the smaller, isolated villages, which became the destinations of the poorest of the poor, who tried to escape urban poverty and poor Gypsies. Nevertheless, spatial segregation was relatively moderate. People rarely had sufficient income to purchase houses and thus to chose the area where 17they wanted to live. Bureaucracies allocated housing to people – the right to chose and the possibility of the better-to-do to segregate themselves into their own neighborhoods aimed at the reduction of spatial segregation. In Hungary and Romania for instance socialist governments made an effort to eliminate Gypsy ghettos. And they were not altogether unsuccessful in this respect. During István Kemény’s first Gypsy survey, carried out in 1971 a substantial proportion of Gypsies still lived in separate Gypsy settlements (Kemény, 1973), by the time of his second study in 1993 the population residing in such settlements was minimal (Kemény, Havas and Kertesi, 1995). Gypsy settlements were by and large eliminated in Romania as well. Not all countries followed these policies though. The Bulgarian communists tried to assimilate Gypsies and remove them from their ghettos during the 1950s and early 1960s. But they gave up on these plans by the mid 1960s and ironically – in order to hide the failure of their policies – they build walls around Roma ghettos. The Roma ghetto in Sliven, which houses today about 15,000 Gypsies was surrounded by a wall during the mid 1960s. The aim was just to make sure those who travel on the train, which passes by the Nadezhda ghetto of Sliven do not see that there is still a ghetto in the area (personal communication by Ilona Gypsy houses and replaced them often by high-rise apartment complexes, but unlike the Hungarian and Romanian governments it did retain spatial segregation. These Gypsy social housing was often built outside the main settlements (an example is for instance to so called Hamor, about two miles outside of the city of Nalepkovo, see Krajcovicova, 2000). Thus, spatial segregation by class and ethnicity was an important feature of the 18socialist urban and regional system. The policies of governments to reduce such segregation had mixed results on the whole the degree of segregation under socialism – given the limited choice of residence people had – was less than what one could expect in capitalisms are emerging on or with the ruins of socialism and the dynamics, extent and nature of poverty and inequality tends to be rather different in these different types of post-communist regimes. The former socialist societies during market transition followed rather diverse trajectories. During socialism there has been at least some convergence among the societies, which experimented with communism. After the fall of state socialism, however, post-communist societies are increasingly different from each other both in terms of the level of their economic development and also in terms of their economic 1 First this ideas were developed in Eyal at all, 1998. In this more developed formulation we rely on Kingand Szelenyi (forthcoming). 19institutions and the characteristics of their social structure. Various European post- communist societies – while all were implementing a revolution “from above” - carry out Central Europe followed more consistently the prescription of neo-liberal reforms, they those members of the former communist elite, which succeeded to reserve their foreign investors. The Czech Republic, Poland and Hungary (possibly East Germany, Slovenia – or even Croatia and Slovakia - and some of the Baltic States, Estonia in particular) may be In Eastern Europe “proper” (Russia, the Ukraine, Byelorussia, the Balkan states, such as Bulgaria, Romania, Serbia) foreign investment did not play such a critical role. These countries instead used various mechanisms to convert public property into private ownership of the emergent new domestic elite, usually recruited from members or 2 In East Asia (in particular in China and Vietnam) capitalism is emerging “from below.” In early stages ofthe transition the public sector is not privatized, capitalism is made in a “gradualist way” by opening upnew spaces for private economic activities. Privatization of the publicly owned corporate sector enters thepolitical agenda some 20 years after transition began. By the end of “the 20th century the formerlycommunist East Asian societies are “hybrid post-communist capitalism.” In sharp contrast to this inEuropean transitional societies capitalism is made “from above.” In 1989-1991 the communist politicalstructure breaks down, the new post-communist elite gives priority to the fast privatization of the publicsector. 20This coincided with relatively slow emergence of market institutions, a prominent role “post-communist neo-patrimonialism”. These pathways produce very different outcomes. Those countries that became neo-liberal regimes have created an economy where market institutions are highly developed and economies are well integrated into the world economy. In “neo- patrimonial” system on the other hand the patron-client relationships pervade the economy: between the state and enterprises, as well as between management and labor. in reciprocity and networks. Businesses cancel each other’s debts, use local monies, and engage in barter3. The two types have some common features as well – they both started with mass privatization of public good4. This strategy opened up rather unexpectedly when 1988- 1989 European communist regimes “melted down.” Until the mid-1980s no one seriously considered that this would be a possibility. During this time, no scenario was developed for a way to turn an economy based exclusively on public ownership into a system of private property. East European economists, speculating about ways to escape the deepening economic crisis of state socialism. They threw their hands up and said jokingly: “we know how to make a fish-soup out of fish (thus how to nationalize private property), but we do not have the faintest idea, how to make fish from a fish-soup.” The 3 East Asian “Capitalism from below” also creates market-integrated systems, but they rely more onrelatively small domestic capitalist enterprises co-existing with a large state owned sector, which areincreasingly market-dependent themselves. We call this hybrid capitalism.4 This is a major difference from “capitalism from below.” In Asia – given the continued dominance of theCommunist party mass privatization of the state sector did not take place for a long time. 21recipe for this culinary miracle was discovered during the second half of the 1980s and it (such as Russia, Romania or Serbia during the Milosevic epoch). However, capitalism “from above” can be combined with capitalism “from without” (as was the case in Central Eastern Europe and the Baltics). Both liberalk and patrimonial forms begin the transition with mass privatization of the corporate sector, but the technologies of privatization are different and the speed that market institutions are established are not regimes privatization is a reasonably transparent process. Public firms are auctioned off are traded on the market place. Prices are deregulated, currency is made convertible, the import are deregulated and so on. Under such circumstances even if corporate management is able to acquire controlling private ownership in their firms, they still have incentives to bring in foreign investors in order to attract capital and secure market access. apparatchiks manage to exchange their political capital into private property, frequently 5 We call these regimes “neo-liberal,” primarily because neo-liberalism is the guiding ideology of theseregimes, which were modeled on the Reagan and Thatcher vision of state-economy relations. Of course,this is an ambiguous legacy, since Reagan oversaw a massive military-keynsian expansion. Furthermore,many of the neo-patrimonial states, like Russia and Kazakhstan, implemented neo-liberal policies, whilesome of the “neo-liberal” systems, such as Poland and Slovenia , failed to implement crucial portions of theagenda (both delayed large scale privatization and instituted an industrial policy). 22using management buy-outs to achieve these aims. Creating a group of owners of giant contacts, and with no capital for restructuring. These structural challenges are exacerbated by the “habitus” of former communist officials turned private owners. They are likely to be less entrepreneurial, and more inclined to be paternalistic towards their business partners and employees. In liberal regimes the domestic expert-managers carry their habituses with themselves as well, and are not immune to paternalism either. Such actors are used to operating within the system of a paternalistic state and are likely to need these local experts exactly because they are well networked and have such local social and political know-how. Nevertheless, in liberal regimes foreign owners call the shots, and this, together with the existence of liberal press and democratic parliamentary neoliberal regimes. Firms frequently can not pay wages. This contributes to labor workers are separated from each other, and their “classness” decreases. Given the exchange. In this situation, the monetary system is very poorly developed, or, in Woodruff’s analysis, “money is unmade” (1999). Workers typically must resort to food grown on garden plots or collective potato farming to survive, and they are thus 23 These paths are only ideal types, and thus features of the various styles of capitalism will be found in all postcommunist countries. There are some multinationals and modern capitalist markets in Russia, while in Hungary, the archetype of “capitalism from without,” paternalism plays an important role not only in the political-cultural sphere, but even in the economy. Barter is not unknown in the Central European economies.6 The first three rows in Table I.1 summarize some of the characteristics of the two European postcommunist capitalist regimes. The rest of Table I.1 is an attempt to identify what forces are responsible for why countries took particular trajectories. We turn now 6 Some degree of the Asian type of “capitalism from below” can be found in the Czech Republic andHungary as well. Indeed, in Poland and Slovenia there were strong elements of this path, because theyseverely delayed the privatization of very large state owned enterprises. Thus, they created a space formore capitalism from the ground up, even as they pursued capitalism “from without” through strategicforeign investment. Even after more than ten years after the transition, many large state owned enterprisesstill exist in these countries. 24 Table I.1. Varieties of European post-communist capitalism 25 The origins of the liberal and patrimonial forms There are exogenous and endogenous factors, which may help us to explain why some countries followed a neo-liberal, or neo-patrimonial path and why some managed to go a long way by “building capitalism” from below. In this paper we will focus our attention to the “endogenous” factors, thus to factors, which are rooted in the class prior communism, and, to some degree, were able to retain under communism as well as the proximity of markets in core countries are likely to play an important role. It cannot be accidental that the only countries which are bordering the European Union, and which argument is that the more developed countries were able to adopt the neo-liberal policies, since they could afford to pay the rather high price of neo-liberal shock therapy. Less developed countries, however, may have experimented with some shock but they had to suspend it before it could work since their population could not tolerate more pain (hence shock without therapy – see Gerber and Hout (1998)). Thus, the argument could be made that the better economic performance in Central Europe may have nothing to do with their economic policies, or the path their capitalist development took, but can be 26explained simply by the fact that they were stronger economies and closer to Western markets. This exogenous explanation is not without merit, but it has its limits. First of all, during the socialist epoch the gap in the level of economic development among the socialist countries narrowed and just began to grow again after the fall of communism. And there are also curious exceptions to such economic determinism. For instance, how can one explain the unusual success of the Baltic states, which were for half a century parts of the Soviet Union. Sure, they were relatively better developed regions of the USSR, nevertheless they were within the USSR and during the 1990s they comfortably cultural explanation to the different routes various societies took. We do not know how far this analysis could be pushed, but it certainly deserves attention that the reconfiguration of post communist states occurred along religious lines. It may be just an artifact, or it may have something to do with elective affinities between religion and various religions. All neo-liberal regimes are dominated by Western Christianity; neo- patrimonial states are either Orthodox or Muslim7. It is beyond our competence to assess how important this fact may be, we just have to note that such an affinity between Now let us turn to the effect of those social structural variables, which are more 7 Hybrid capitalism happened to be in the Confucian and Taoist part of the world 27 The transition from communism to capitalism was the outcome of both inter and intra-class struggles. On the whole classes were not particularly well formed under state socialism. Socialist society can be better described as a rank order, rather than a class stratified society. Nevertheless, classes were in formation and in particular the strength of the working class had far reaching consequences for how intra-class struggles among monopoly of the political apparatus had been challenged for quite some time by an communist party, and critical intellectuals. The formation of the working class was the most advanced in Poland of course, where the collective action of workers in 1980 almost brought down the rule of the communist political apparatus. While, using the real or imagined threat of Soviet intervention, the bureaucracy cracked down on the working class movement. However, this sufficiently weakened the bureaucracy so that it could be Hungary followed a somewhat similar patter. While the Hungarian working class never engaged in the kind of collective action taken by the Polish working class, it was sufficiently a threat to the communist apparatus that it had to try to buy political peace by opening up the second economy to workers and peasants. The resulting petty hegemony of the communist bureaucracy and laid the groundwork for the Hungarian apparatus was wiped out in 1989. It lost political power altogether, and therefore had 28neither the will nor the capacity to carry out a project of political capitalism. The technocratic-intellectual alliance did not last for too long (Szalai 2001). The intellectual elite turned against the technocracy, which was now seen as part of the former communist establishment. In 1990, in both Poland and Hungary the newly formed Socialist Party suffered humiliating defeat. The intellectual themselves were split into a liberal and patriotic- Christian wings and the last decade can be described as struggles among these various political forces. Nevertheless, despite the political differences all of these intellectual and technocratic elites, they followed neo-liberal policies, and in particular, cooperation with foreign investors. Hungary and Poland are in many respect the “purest types.” The alliance of classes or elites may have been somewhat different in the other neo-liberal regimes, but our key hypothesis is that in all of these regimes the communist bureaucracy was unseated by an alliance between reform minded technocrats with liberal and patriotic-Christian intellectuals. This alliance received some initial support from the working class, which for the most part was demobilized after the defeat of the communist bureaucracy. In those countries that wound up with the neo-patrimonial system, the communist political apparatus was able to retain its power and defend the privileges of its clients as well. Communist ideology was of course instantly abandoned, but the former communist parties were not taken over by technocrats and transformed into right wing social democratic movements as it happened in the neo-liberal regimes. Instead the core of the political apparatus retained control over the successor parties and turned the communist 29ideology into nationalist, often xenophobic ideology. Iliescu in Romania and Milosevic in Serbia are prime examples. The transformation was more complex in Russia, where the successor party lost political power, but Yeltsin followed policies, which were quite similar to those of Iliescu and Milosevic. In those countries where working class resistance, political or economic, did not weaken the political apparatus, the technocratic and intellectual opposition could not smash the political bureaucracy. Instead, it adapted Thus, we argued that the patters of class conflict and intra-class alliances determine which path to capitalism is selected. This does not have to be thought of as an different segments of class structure. For example, the fact that a country is closer to Western Europe and more culturally similar make an alliance of technocrats and multinationals seem much more possible. Thus the exogenous factors are mediated by the 8 Arguably, China followed most closely the scrip described by Konrad and Szelenyi (1978). In1978 the political bureaucracy formed an alliance with the technocratic intelligentsia, and in fact acceptedthe leadership of the technocratically minded faction of the party elite during the years of Deng Hsiao Ping.The bureaucracy and technocracy keep each other under control to the present day. The technocracy didnot allow the political apparatus to implement a political capitalist scenario (it had little incentive to do soanyway, since it was not deprived from its political power and many of its economic privileges). Thecommunist bureaucracy, on the other hand, put strict limits on how far the technocracy could go in itsattempts to ally with the intelligentsia and to pursue neo-liberal policy. Workers and peasants coulld takeadvantage of the resulting “balance of power.” Thus Nee is probably correct to a large extent whe he seesthe “direct producers” benefiting from transition in China, while they proved to be the losers in the twoother systems. 30 Social consequences of various pathways, changes at foundations of society So far we focused on what happened at the top of the social hierarchy, but what path former communist societies took is also impacted by and influences processes in the order to a class stratified system. But the formation of classes is a historic task, which is likely take more than one generation and how far the social structure is at the end indeed In this process liberal and patrimonial regimes – just like in their progress to make This trend has its limitations for sure. Since international capital is calling the shots there “without capitalists.” The new petty bourgeoisie, which was emergent from the second economy during late state socialism in Hungary and Poland suffered a defeat as pro big business policies gained ground and as shock therapy opened up domestic markets to foreigner investors and exporters. The working class, which was forming in Poland powerfully during the early 1980s was demobilized as soon as communism collapsed, it is hardly a collective actor and could hardly be seen as a class for itself. Nevertheless labor markets operate according to free market rules, labor is to a large extent separated from the means of subsistence and production, price of labor is set by supply and 31demand, if demand is insufficient labor remains unsold, relationship between conditions. The main source of poverty is absence of employment, those will become and remain unemployed, who do not possess adequate human capital and therefore they often will be locked into long term unemployment and poverty. The consolidation of market institutions and the making of this new structural poverty coincides with the emergence of a middle class. This new middle class offers upward mobility chances to workers and even to ethnic minorities, such as Roma and this serves an important liberal ideological function. It makes possible to ‘blame the victims’ hence some of their fellow workers, or some of the fellow Roma did make it to the middle class. Under such circumstances the long term, structural unemployed – especially if they are Roma - may be defined as an underclass. The relationship between labor and management is primarily paternalistic. Labor and management are bound to each other by mutual loyalties, not just by interests. The workers will have an obligation to turn out at work even if management is in arrears with wages; management will have obligations to “look after” the welfare of their workers even if they do not necessarily need them at this point in time. As the state institutions of socialism collapse and are not replaced by capitalist welfare institutions the functions of they have to protect people, their citizens or employees against the high costs of 32transition, especially the high costs of shock therapy. The advocated therefore gradualism and emphasized social responsibility. They might have spread to costs of transition more poor.” Since meritocratic criteria, possession of skills, human capital regulate primarily the entry to jobs with a large population of working poor poverty will be less determined simply regulated by demographic factors who end up in poverty and who does not. Underprivileged ethnic minorities such as the Roma will be treated as a political problem, they will not be dealt with in economic ways. Hence the Roma will be treated as one ethnic minorities are fragmented along class lines, some being mobile into the mainstream others locked into a new underclass. Patrimonial regimes spread poverty broadly and ascriptive criteria are more important in the determination of poverty. Ethnic minorities are also treated this way, politically as a single category, defined as an under- caste. 33 C/The post-communist experience – the effects of market transition on inequality and poverty economies have been discussed extensively in the so-called market transition debate (See American Journal of Sociology, 1996 issue and elsewhere). To date, this debate has focused on three central questions. First, has market transition really occurred (Nee), or increased (Walder; Lin; Kostello and Szelenyi) or decreased (Nee) as economies have moved from redistribution to market integration? Finally, who has emerged as the beneficiaries, or “winners” of the reform —ordinary people (Nee), former elites (Hankiss new elites (Eyal, Szelenyi and Towsley)? Clearly absent in this debate has been questions regarding the extent and nature of poverty as a consequence of market transition. This book intends to correct this omission as we bring issues of poverty into the center of the debate. In order to develop a guiding theoretical framework for our analysis, we first must cast the scope of market transition debate more broadly to include theories about inequality and poverty during transition. For our purposes, the most notable figures in the between economic growth and social inequality. The so-called “Kuznetz curve” assumes 34the beginning of economic take-off, inequality increases sharply, and once high levels of ways. Nee does not consider the effects of economic growth, but he offers a powerful – inequality and increased economic liberalization. Nee – at least in his 1989 article – hypothesized that with more markets and less redistribution the allocation of assets and incomes may become more equal. If one adds to this assumption the hypothesis that market liberalization may induce more dynamic growth (at least that is what one would expect on the basis of neo-classical economics) one may be able the create a combined theory that at the time of actual “transition”, when the old system breaks down and the new system is not quite operating yet an increase of inequalities is plausible, but it economic growth are realized and the social benefits from growth begin to “trickle down.” Given the robust economic growth China experienced during its “transition” (in contrast with the economic meltdown for former Soviet Union and the former European socialist countries had to suffer from) it is therefore not that surprising that Nee hypothesized about equalizing impact of the markets already in early stages of transition, inequities. 35 Lal and Myint (1996), in their study of non-transitional economies, arrived at a very different hypothesis than did Kuznets/Nee. Lal and Myint do not find evidence that inequality declines with economic growth or, by implication, with freer markets. After all, neo-classical economists assume that a high degree of inequality is necessary, given the constant need for economic incentives for higher productivity and faster growth. Intervention by the welfare state, for example, which aims at reducing inequality, will limit economic growth because it will distort the proper functioning of free markets. Lal and Myint, however, do suggest that freer markets and more dynamic growth eventually reduce absolute poverty, if not inequality. From this theory, one can create a Lal-Myint Curve, which would illustrate that during the early stages of economic take-off, even absolute poverty may increase, given the destructive effects of markets on traditional however, the benefits of growth will “trickle down.” While relative poverty may remain stable once growth has been achieved—after all, relative poverty is more a measure of Patterns of economic growth and social change in the United States during the Clinton years seem to offer some support to the Lal-Myint hypothesis. While during the last decade of the 20th century the United States was becoming more inegalitarian at least extreme poverty, employment was up during this period, the number of people on welfare In order to incorporate the Lal –Myint hypothesis into our analysis of poverty in market transition, we must consider two issues, one of which is specific to transitional 36economies, the other is a general question, equally applicable to advanced Western economies as well. The first question concerns whether the two divergent transition strategies are consequential for the extent and nature of poverty. Did the strategies of the patrimonial systems deliver on the political promises of protecting citizens from the pains of the transition, or did these strategies simply delay the inevitable, or even intensify the negative consequences of transition? And what about neo-liberal strategies? Is there any evidence of trickle-down prosperity, or has the trickle down approach proved to be a false strategies. The second question is more general, and has implications for advanced market economies as well as for transitional economies. Let’s assume that the Lal-Myint curve accurately describes the dynamics of economic growth, namely, that growth leads to development will eventually lift all boats? What if poverty becomes linked to ascriptive criteria, such as race/ethnicity and/or gender, and thus poverty becomes racialized and/or feminized? Would the Lal-Myint curve hypothesis still apply? In other words, would we of people – for instance a certain ethnic minority, or single mothers may become locked into permanent poverty. Thus post-communist “shock therapy” may have produced a 37"new poverty" (Tarkowska, 2001). Stated differently, “shock therapy” may be The purpose of this book is to test whether a “new poverty” has emerged during during the last decade of the 20th century in Central and Eastern Europe. The “underclass concerned; B/ It becomes increasingly likely that the children of such people or groups will also live Gunnar Myrdal used for the first time the notion of underclass (1963, 1964). Myrdal coined the term to designate those who were left out of the benefits of the big post-war economic boom – the boom was so overwhelming that “all society” benefited, except those who were locked outside the society and pushed into an “underclass” position. Julius Wilson (1978) adjusts the notion in interesting ways. There are three important innovations in Wilson’s work: 1/ For Wilson the force, which drives underclass previously had decent jobs – often with respectable skills and incomes – which members of the underclass are not just those who lost jobs or who are poor, their poverty significance of race.” This implies, that once one controls for class the race effect 38weakens or disappears. This happens since some African-Americans are not affected by de-industrialization and the new poverty. The same time, when the new underclass of the inner urban Black ghetto poor was formed a robust process of Black middle class formation was also under way. Blacks are not separated from the rest of the society as a social category or a caste, their exclusion only takes place if race interacts with certain class characteristics. Evidence from the US indicates (Stack, 1974; Casper, McLanahan and Garfinkel, 1994; Rodgers, 1996) that poverty also tends to be feminized. In particular minority women and women from poor families are more likely to become single mothers at a young age and if that happens, their poverty may not be simply a life cycle phenomenon, they and their children, including their male children may be locked into life long poverty. Feminization of poverty can be interpreted very broadly. So for instance one may even consider feminization of poverty within the household. Women may live in poverty in non-poor or not so poor households, since they may carry the burden of poverty more than men. Women also tend to be over-represented among single senior adults and tend to be poorer than single old males. These are socially non-trivial problems, but since in this book the focus of our interest is the interaction of ascriptive characteristics and class headed households with children under 16 and we want to explore whether, much like There is some prima facie evidence that market transition may be accompanied by 39 The Roma minority seems to be affected by post-communist de-industrialization in analogous ways as African-Americans were in inner urban areas in the United States during the 1970s. Roma tended to be concentrated during the state socialist epoch in heavy and construction industries, which suffered the most severe job losses with the fall of communism. They lost jobs, became de-skilled and have little prospect ever finding a job again. They also tend to be locked into urban and rural ghettos in extreme poverty. Furthermore, much like African-American in the US, the process affects some Roma than others. The formation of the Roma underclass in urban and rural ghettos is complemented by them making of a Roma middle class, arguably in neo-liberal regimes more so than in patrimonial ones, where the more traditional social order is more likely to be preserved. life-style choice. Middle class women (and men) may just not want to get married, but they are ready to bear children nevertheless. This does not necessarily lock people into poverty. If out-of-wedlock birth is a life style choice the parent may co-habit and share resources, or even the single mother may be middle class and reasonable well off. Thus single motherhood may only lock someone into poverty if single motherhood interacts hypothesis. Lal-Myint theory was specifically devised to test the effects of economic growth, our own theory tends to test the effects of market penetration, or economic 40liberalization) in unlikely to reduce inequality, but we will expect it to have some countries, which follow the more closely the neo-liberal strategy of transition. We assume, however, that the relationship between market penetration and poverty is likely thus at the time of “take-off”, early stages of market penetration, when shock therapy is applied one may get rather high poverty rates, though those rates tend to decline as the benefits of growth begin to trickle down. Nevertheless, we anticipate that such “trickle ethnic minority status or single motherhood interacts with negatively privileged class position those who are locked into such situation may not see the benefits of economic growth. Data This book is uses statistical data generated by surveys conducted in the six post- during the Fall of 1999 and the Winter-Spring of 2000. In addition to survey data, we rely on ethnographic studies conducted in all of these countries during the same time period. The Ford Foundation supported both the survey and fieldwork components of the 41research project. Ivan Szelenyi was the principal investigator; co-principal investigators were Rebecca Emigh, Eva Fodor, and Janos Ladanyi. Our research team included a group The surveys were based on random samples of the general population in each the Urals; in the other countries the sample size was 1,000. In Romania and Slovakia, the sample was selected using the “random walk” method, thus the primary sampling unit was the household. Individual respondents were selected from the household roster using a Kish table. In all other countries, various lists of individuals constituted the sampling frame. Overall, the samples are sufficiently representative, with the exception of Slovakia, where less educated, lower income groups, and Gypsies are under-represented. Our data also include over-samples of sub-populations, including the poor and the Roma. While both groups are of particular theoretical interest, both comprise too small a During the past few years, various survey research agencies have asked interviewers to identify the Roma. For example, Szonda-Ipsos and TARKI in Hungary have used this technique a number of times. Other studies used other methods to identify Roma. For instance, some scholars and Census designers have relied on self- decided to use a combination of all three methods. Thus, we first relied on interviewer we did not ask the respondents to report their ethnicity. Next, in the actual survey 42instrument, we asked respondents whether or not they self-identify as Roma. Once the interviewer completed our interview he/she was asked for the second time (the interviewer may have been the same or may have been another person than the one who classified during the screening interviews) to classify the respondent by ethnicity. This time the interviewer knew how the respondent self-identified, and this may have effected his/her judgment. Finally, after the survey interviews were completed, we asked experts in those locations where Gypsies were reported to live to identify Gypsies among our respondents. These various systems of classification give us different estimates about the size of the Roma population. While there are cross-country differences in the extent of the discrepancies in all countries fewer people identify themselves as Roma than identified by experts. Finally interviewers tend to identify all those who are classified as Roma by experts as being Gypsy, but they cast their net more broadly and often classify people as Roma, who are not believed to be Gypsies by the experts and who do not classification is the most effective way to over-sample Roma. Whether all people classified this way as Roma are “really” Gypsies or not is another question. Nevertheless it is unlikely that people who will be regarded by expert as Gypsies, or who call themselves Roma would not be classified as such by the interviewers. If we “err” by this method, we err by casting our net to broadly, but the net will catch all other classificatory Romania) where there are sizeable populations of Roma, we worked closely with market research firms that carry out omnibus surveys on a regular basis. Over a period of a year, 43together with the market research firms, we screened between 10,000-19,000 households. To the omnibus questionnaire, we added a short questionnaire that asked the interviewer to tell us whether any given household or any member within the household was Roma. We then asked the interviewer his/her degree of certainty in that assessment. (We also asked the interviewer to explain on what basis he/she classified any given respondent or countries (Bulgaria, Hungary, Poland, and Romania). We did not create a poor over- sample in Russia because our pretests indicated that the general population sample would due to the inability of the survey firm to sample the bottom of the social hierarchy—we had to abandon our attempt to create an over sample of those in extreme poverty. As with the ethnicity over-sample, the poverty over-sample was determined using screening questions attached to the omnibus surveys of the market research firms. Interviewers were asked a series of questions designed to identify extremely poor households (e.g., Are there signs of undernourishment in the household? Is the house dangerous and/or unhealthy etc?). If the interviewer answered positively to any question, the household was selected into the poverty over-sample. In the actual survey, respondents were asked compare the Roma and the non-Roma poor. In this paper we do not analyze data from 44 The Roma and poor over-samples are random samples and we know the interview-identified poor and Roma respondents with great interest, we believe poverty assessments as the “true” measure of who is poor and who is Roma . In our analyses of Roma. We rely on multiple indicators from the survey questionnaire to guide our Our survey data are complemented by ethnographic case studies. In each country villages, with the exception of one urban center. Most communities were Roma poverty stricken or were predominated by a poor ethnic group other than Roma. Junior members of the research team received yearlong fellowships to conduct field projects in of project meetings. Reports of these case studies can be found at:. (From the web site, select the link for “Poverty Project,” followed by the link for “Final Reports.” There you will find the complete set of ethnographic case studies.) Survey data are also posted at the same web-site, though access to data is restricted until January 1, The selection of the six countries was driven by our research questions. The countries differ both in terms of ethnic composition and in terms of which ideal-type of 45“post-communist capitalism” best describes their economic system. Overall, there are two sets of questions for which our research design can provide answers: First, we explore whether the divergent paths leading from communism to “post- To answer this question, we compare the “neo-liberal regimes” of Hungary, Poland, and we test whether the presence of a sizeable ethnic minority—namely, the Roma — which tends to have some “elective affinity” with poverty changes the character of poverty or not. In particular we explore whether the presence of such an ethnic minority leads to drawn more sharply if poverty and ethnicity interacts, is the boundary, which surrounds the ethnically labeled poor more difficult to cross? Is the ethnically labeled poor is more likely to live in life-long poverty, pass its poverty onto the next generation and live spatially segregated ways than the poor who belongs to the ethnic majority? Does the ethnically labeled poor constitute an “underclass” (in the above defined sense of the term), or even if it does not is it on its way to form an underclass. To answer such questions, we compare Poland and Russia with Bulgaria, Hungary, Romania, and Slovakia. Measurement of poverty We use various measures of poverty in this book, some rather conventional, others less conventional. 46 (1) Subjective measures of poverty In order to assess the changes in living conditions between 1988 and 2000 we asked two sets of ‘subjective’ questions about poverty which were identical for 1988 and 2000. We call these questions ‘subjective’, since they explore the ‘lived experiences’ of poverty by the respondents. (We call those measure The first set of subjective measures we call here the ‘experience of poverty.’ We asked four questions which aimed at capturing the subjective experience of absolute poverty. We asked our respondents, whether they went to bed hungry recently since they could not afford to eat enough food, whether they could eat enough mean and whether they had adequate clothing (shoe and winter coat).9 We call those ‘very poor’, who reported experience with hunger, people were regarded as ‘poor’ if they reported deprivation in at least one of the three other indicators and those were regarded as ‘non We asked people whether their family earned in 1988 (is earning in year 2000) below average, average and above average incomes10? This proved to be a good question: people seem to enjoy answering it and the story, which is emerging from the answers is not only sensible, but occasionally rather robust. We struggled a lot, however, to try understand what people exactly want to tell us by answering it. People (the response rate is excellent, over 90%) are unlikely to have a good idea, what the ‘average income’ may have been, or even what it is at the time of the survey. Hence up to 80 percent of the respondents even in the general population sample might tell us that they earned ‘below 9 For exact wording and coding see Appendix 210 See exact wording of the question in Appendix 2 47average incomes’ (as this is indeed the case in 2000 in Bulgaria). Is this a meaningless answer? Far from it! We believe those who report us ‘below average’ income want to tell us their sense of frustration. They do not know what average income is, but they want to tell us that there are (were) people who are (were) doing better than they did. They might be wrong in their judgment, nevertheless the fact that they feel this way tells us something important about their state of mind. The answers to this question can be seen 48 (2) Objective measures of poverty. We also collected data to be able to calculate the ‘objective’ poverty measures and here we followed the procedures applied by the World Bank11. We asked questions about incomes and expenditure, or consumption. The World the basis of household consumption (money expenditures plus the value of food produced on a household plot), rather than on the basis of income12. In our questionnaire we used Following the conventions of the World Bank we calculated two types of ‘poverty lines’ from these data. We calculated an absolute poverty line set as $2.15 and $4.30 per person consumption (or income). In order to make comparison across countries possible in converting the currencies into US$ we used purchasing power parity (PPP) exchange rates as established by the IMF and World Bank. PPP rates measure the relative purchasing power of different currencies over equivalent goods and services.13 We also calculated the relative poverty line set at 50% of the median income. Both poverty lines are calculated per capita and also per equivalent adult. Conversion from per capita into per equivalent adults takes into consideration that the well being of a household is influenced by the size and age and gender composition of that household14.11 We are grateful to Jeanine Braithwaite and Dena Ringold – both from the World Bank - for theirassistance in constructing our questionnaire and working out the strategy of data analysis of expenditurebased measures of poverty. The World Bank also offered some financial support to our project and receivedin exchange our data for analysis when the data were made available to members of our project, thus wellbefore it was released into the public domain. We are grateful to the World Bank for their financial andprofessional support. The World Bank team produced an excellent report from our data (Revenga at all,2002). In this chapter we occasionally used their data and we acknowledge it when we do so.12 For reasons of consumption based measurement of well being and poverty see Appendix A in MakingTransition Work for Everyone, Washington D.C., 2000, pp.367-37713 Making Transition Work for Everyone, p.37014 See the conversion method in Appendix 2 49 A comparative evaluation of subjective and objective measures One may object that the subjective measure we use are unreliable. How reliable are the subjective and There is indeed a problem of reliability with the subjective questions. For instance will people understand the same way what is the meaning of ‘being hungry’, or will it have a different meaning for poor people and not so poor people? Will people remember in a reliable way their past subjective experiences? What speaks for these questions is the high response rate and the ease with which people seem to respond to such questions. Responses to questions about income or even consumption are much more problematic. While people may offer somewhat interpretation when they were ‘hungry’, they simply might not know at all how much for instance they spent for instance on “detergents” etc. We are prepared to be rather aggressive about this: our subjective measure may result on 50 There were other good reasons to use these “subjective” measures in this study. First of all one contribution what our study could make to the extensive work on poverty is to assess the changes in poverty over time in the former communist societies where no reliable income data are available for much of the socialist epoch. It is just not possible to generate retrospective income data for such a long historical period, and our poverty measures are working well. This is demonstrated by the powerful trends we could detect and also by the high response rate to in the case of these questions. Furthermore we also believe – and we try to demonstrate this in this chapter – that though the Roma is also income poor its poverty is income, or consumption underestimate the depth of Roma poverty. There is some cash inflow into Roma families (child-allowances, pensions, disability pensions, social assistance payments and the like). Given their poorer living conditions and other problems Roma can be starving and living in unhealthy and overcrowded conditions even when the inflow of monetary resources into their households are not that much lower than in poor non-Roma households. 51 Organization of the project, collaborators funding from The Ford Foundation to bring teams of scholars from post-communist countries and the U.S. to Yale University to collectively analyze our results. From September 2000 through December 2001 during each semester a different workgroup — composed of 6–8 scholars per team—met consecutively at Yale University in the Center for Comparative Research. During the fall 2000, we conducted preliminary cross-national comparisons of the poverty data; during the spring of 2001, we worked on the issues of ethnicity and poverty; and finally, during the fall of 2001, we focused on the “feminization of poverty.” On November 18-20, 2000 a workshop was organized. The first draft of the book “Poverty and Social Structure” was posted on the internet: and the manuscript was discussed by members of the collaborative research project who did not take part in the workgroup. The discussants were: Henryk Domanski, Livia Popescu, Iveta Radicova (Slovakia), and Dina Ringold (The World Bank, Washington D.C.). So far an edited volume was published from the project (Rebecca Emigh and Ivan Szelenyi (eds). Poverty, Ethnicity, and Gender in Eastern Europe During Market No.4, 2001 in Hungarian was published under the editorship of Ivan Szelenyi. Contributors to the special issue are: Christy Glass, Henryk Domanski, Eva Fodor, Janette Kawachi, Gail Kligman, Janos Ladanyi, Petar-Emil Mitev and Ivan Szelenyi. The 52 Chapter 1 Memories of Socialism Introduction including sharp increases in unemployment, inflation, and the cost of living, accompanied by notable decreases in real wages, and state subsidization of basic social and public goods—were far greater than most had anticipated. Newly acquired freedoms were often and practice, with the extremes of quickly amassed fortunes and the depths of poverty visible for all to see. To the extent that redistributive economies relativized and masked many, there was something increasingly out of sync between the pictures of their lives as imagined after the collapse of communism and their experiences thereafter. What some observers label as nostalgia for a familiar sense of minimal security and well-being is more aptly understood as a re-evaluation of the past in the context of a changing present. Contrary to western popular images of life under socialism—the scarcity of basic goods 53and lack of basic individual freedoms foremost among them—the perceptions of those who lived under socialism are not necessarily so bleak? With life turned dramatically upside down, offering both unexpected opportunities and curtailed access to them, we wondered how people remember their lives during different periods of socialist rule. Did they perceive their lives during various periods of socialism as better or worse? easier or harder? People’s memories of state socialism are the focus of this chapter, in particular, the present, providing a reassessment of the former in light of the latter. The past as the communist era, private memory was highly suspect and subordinated to the state’s construction and control of official, public memory in the form of rewritten histories. Private memories were repressed, relegated to the recesses of the mind.16 Since the collapse of communism, those official histories are themselves being reinterpreted, often transformed into autobiographies, memoirs, and oral histories have flooded the public 15 It is beyond the scope of this chapter to review the ever-broadening literature on memory, personaland/or collective (see, for example, Olick and Robbins 1998, Gillis 1994, special issue, “Memory and theNation,” Social Science History 1998; and special issue, “Grounds for Remembering,” Representations2000). With respect to the communist period, “memory” work has tended to excavate the excesses ofcommunism—repression, violence, and violation--rather than the everyday experiences of life undercommunism. For examples of the latter, see Kligman 1998, Verdery and Kligman in progress; ADD.16 See, for example, Milosz . Sherbakova 1992; Watson 1994). Diaries and memoirs are importantsources of memory to the extent that these were written and preserved. 54sphere, as communism is subjected to personalized and public remembering and In this general context in which “memory work” is being carried out, we wanted to gain a sense specifically of memories of poverty during the communist era. To that end, we surveyed people about their recollections at three distinct points in time so as to gain a range of experiences of poverty over the communist period. First, to establish a base line for comparison, we asked people to remember, when they were fourteen years of age, how they and their families lived;17 we then asked the same questions for 1988 and for 2000. As discussed below in greater detail, we posed questions designed to get (respondent was fourteen in 1948 or before), Stalinist (was fourteen between 1949-1959), or socialist (was fourteen between 1960-1988). In the end, we had an overview of how people recalled their living conditions, their experiences with poverty in early adulthood, in the last year of socialism, and again a decade after the collapse of communism18. 17 In the survey research it is customary to ask people about their living conditions, parents occupation etc,when they were 14 years old in order to establish their life chances when they entered young adulthood.18 In this book we use the terms ‘socialism’ and ‘communism’ interchangeably to designate Soviet typesocieties. On particular we label the epoch, which followed the collapse of the regimes as ‘post-communism,’ though by other authors it is occasionally also referred to as ‘post-socialism.’ 55 1/Methodological considerations data?19 We asked respondents if they recalled, when they were fourteen, whether they went to bed hungry, whether they ate meat in a typical week, whether they had a warm winter coat, or a second pair of durable shoes.20 The responses to these questions yielded memories of going to bed hungry, we refer to this as “deep poverty.” If s/he reported experiencing at least one of the other three indicators, we refer to this as “poverty.” The response rate for this set of questions was excellent: approximately 95% of the based, for example, on median income or PPP.21 It was not possible to gather reliable income or expenditure data retrospectively, hence, our analysis of poverty before and 19 As discussed in the introduction, measuring poverty is a complex matter. Throughout our study, weutilized multiple means of measuring poverty.20 We discuss the meaning of hunger as a measure of absolute poverty below. Similarly, a warm coat anddurable or good shoes suggest that people are able to protect themselves in a basic way against nature’selements.21 As discussed in the introduction, Purchasing Power Parity (PPP) is a method of measuring the relativepurchasing power of different countries’ currencies over the same types of goods and services. CalculatingPPP involves converting local currencies into a common currency and eliminating the differences in pricelevels between countries. Because goods and services may cost more in one country than in another,poverty lines based on PPP allow for comparisons of poverty rates across countries. Standard thresholdsusing PPP measures are $2.15 and $4.30 per capita per day. Our absolute measure of poverty was asubstitute for an indicator such as $2.15 per capita daily income in PPP. Our relative measure was asubstitute for indicators such as household earnings below fifty percent median income. 56 However, we do not feel that the lack of more standardized, conventional poverty measures hinders our historical analysis in any significant way. After all, income-based measures of poverty are themselves proxies for the phenomenon we are attempting to uncover: what were the material conditions of everyday life in which people lived before and during state socialism? As proxies, income-based measures often do not reveal conditions, subjective though they may be, are perhaps the strongest available indicators of the experience of poverty during state socialism. Indeed, when we compare the same subjective measures against more conventional measures available for the year 2000 (see The central question regarding the validity of our retrospective measures remains: what can we actually learn from recollections about past experiences of poverty during socialism? To be clear: this data cannot be used to make statements about the extent or representative sample of people who lived in the countries we studied in the year 2000. We do not have a representative sample of people who lived in these countries during socialism. Secondly, our survey data do not measure the "objective" conditions of our respondents’ lives at earlier time points. They were simply asked to share memories of their past experiences. To reiterate, such memories are shaped not only by their past experiences, but also by their present circumstances and their hopes and fears for the future. 57 Hence, the “memories of socialism” surveyed in our study offer a particular kind of people who grew up under that system. This, like all such recollected histories, is a selective history, based on memories that are as much about who these people are today and who they want to be tomorrow as it is about the “facts” of their pasts. At the same time, it is necessarily a selective history of those who have lived on in the countries of our study. Those who died, were killed, or who emigrated have not told us their relativize the meaning of these “remembered histories.” While memories about earlier times are selective--and their selectivity is shaped by the social conditions and motivations of respondents at the time of data collection-- memories are still formed out experiences; they are not intentional lies. They have a "real" and an "objective" core. How, then, may we interpret the meanings of the answers to the questions we asked our respondents? Those who went to bed hungry because they could not afford adequate food supplies would be regarded as poor under any circumstances. Nonetheless poverty reported this way is subjectively experienced. Being hungry is not just a physical condition; it is one that is socially interpreted. "I am starving" has different meanings for different people under different social and historical circumstances. For some, going to bed hungry may mean: "I cannot eat as much as I would like;” for others, "My body does not get enough calories, I am undernourished, and I may die years earlier than someone 58and is subjectively interpreted, reporting the experience or recollection of hunger offers a measures of poverty have been criticized by sociologists for good reasons (Townsend). One indeed does not have to be hungry, however, to consider oneself or to be considered poor. People may feel poor, believe that society does not treat them adequately, and assert that in comparison with other people, they are not as well off. Termed by in the evening, a light coat on a cold winter morning, or the chill a bare or thinly-covered foot feels in the snow. With our survey we collected data on incomes and expenditures in year 2000, therefore we can measure with some precision ‘relative poverty’, namely important, but they have their limitations for cross-national comparison. Relative have lower level of inequality than more affluent ones and as a result relative poverty indicators may find ‘more poverty’ in a more affluent society, than in a poorer one. In this chapter we present data only about experiences with “absolute” poverty. 22 As noted previously, we do not have reliable retrospective income data therefore we cannot doover-time analysis of relative poverty. We asked, however, our respondents the second set of questionsregarding their family’s income: was it below average, average, or above average, when they were 14? In1988? In 2000? This question is often used in survey research to evaluate how people see their familylocated in the hierarchy of incomes. There is no reason to believe that people actually know what averageincomes are or were at one point in time. When they report below average incomes they express frustrationof being slotted below average, hence we call this a measure of ‘relative deprivation”. These data areincluded in Appendix 1, but they are not systematically analyzed in this Chapter. 59 2/What are the memories of socialism? So, how do people remember socialism? Did people who came of age during later periods of socialism remember less or more poverty than people who turned fourteen during earlier periods of socialism? And did such differences vary across countries? As discussed above, memories of the past are simultaneously and selectively shaped by people’s retrospective responses represent their assessments of how they were faring in the year 2000 when the data were collected combined with and in relation to their assessments of how they fared during various periods before and during state socialism (depending on when they were age fourteen). With regard to our study, and as will be discussed below in greater detail, we generally found that people remember state socialism overall as a period in which poverty declined for most. Furthermore, cross- country differences seem to have narrowed somewhat during the socialist epoch. far the most benevolent memory of their lives in pre-communist times, over 70% of them do not remember any experience with poverty and only 16% of them report hunger. The other East European countries are rather similar between 30 and 50% of those who turned 14 before 1949 remember their families as not poor at all and roughly a third of them report experiences with hunger when they turned young adults. It is interesting that Bulgarians and Romanians are rather similar to Poles and to Slovaks. Neither Poles, nor 60conceivable that the Hungarian data reflect as much the post-communist ideological parties were in power and they made an effort to rehabilitate the image of pre WWII Hungary. In Poland the experiences with the war may darken the picture for the pre 1949 times. In Slovakia people may recall the pre-socialist epoch more negatively since it was the epoch of the ‘first republic’, Czechoslovakia, hence Slovak resentment against Czechs may color their memories and turn it darker. It is conceivable that Slovaks today, after independence from the Czechs and in keeping with national sentiments, are inclined to think in darker terms about the First Republic.23 Nevertheless the cross-country differences are large enough to claim that Hungarians who turned 14 before 1949 experienced less poverty in pre-communist times than people in the other countries. It is also safe to say that Bulgarians and Romanians experienced about as much poverty as Poles and Slovaks. In fact Bulgarians and Romanians remember less ‘deep poverty’ than 23 And they tend to be kinder toward the entire socialist era. Slovaks may feel that throughout the socialistera , Czech-Slovak “fraternal” relations were more balanced, with Czechs having been less privileged thanduring the First Republic. And indeed Slovaks paint a rather rosy picture of the whole socialist era whatthey describe in almost identical terms than the Hungarians. If we had a Czech component in our study, wecould expect that Czech respondents would offer a mirror image of the Slovak experience, looking at thesocialist period more negatively than the pre-socialist period. 61 Table 1.1 24 Created by Janette Kawachi, June 18, 2002 62 During “classical Stalinism”, Russia was by far remembered as the poorest of the countries studied. Our older Russian respondents remembered their youth during the 1930s and 1940s as having been far poorer than do people of the same cohort living in the other countries sampled. And this is not all the effect of the war. We have enough respondents in Russia who turned 14 before 1940 and among a third of them reports of having been “very poor” at that time. It is also interesting that people remember pre-war Soviet Russia not only as poor, but as relatively inegalitarian as well25. Hence in Russia Stalinism, even the ‘golden age’, therefore the 1930 is not recalled as a system, which could deal with extreme poverty and which at was egalitarian. Memories of people still living in Russia in 2000 who had lived through the Stalinist era do not remember the In the East European countries respondents who were young at that time remember the decade of Stalinism (1949-1959) in similar ways across countries than the previous cohort recalled the pre-socialist times. On the whole the proportion of those who told us that they were not poor during the Stalinist epoch is smaller than the same proportion reported for the earlier cohort (see Table 1.1). In all of the other countries surveyed – with the exception of Hungary - the early socialist or Stalinist period was 25 More than half of the people who were young adults in before 1959 recounted that their family earned“below average” incomes --a much higher proportion than anywhere else (See Appendix 1. Table A1.1.1).26 The memories of higher levels of poverty in Stalinist Soviet-Russia can be of course attributed to itslower level of economic development. Nevertheless it is surprising that despite the strong ideologicalcommitment of Stalinism to egalitarianism and reduction of poverty according to the recollection of peoplewho reached early adulthood at those time how little it could deliver on those promises. 63 The sharpest reduction of poverty, however, occurred after Stalinism, during the 1960s or even later. Throughout the entire post-Stalinist epoch (1960-1988), in what we refer to as the socialist period, it is noteworthy that in every country, those who were young adults reported less and less exposure to poverty (see Table 1.1). Socialism seems to have indeed had an homogenizing effect with respect to class distinctions both across and within countries. Hungary is remembered as slightly better off than the other countries during the whole post-Stalinist epoch, though the gap between reports from Hungarians and other respondents narrow greatly if we compare the reports for post- Stalinist and pre-socialist times. Slovaks have almost identical memories of this epoch than Hungarians and Poles do not paint a much darker picture either. Bulgarians and in particular Romanians are somewhat less sanguine, but even in this countries at least ‘deep poverty’ is sharply reduced, almost to the same level as in other countries. Russia is far the worse off, though there is continued reduction of poverty, in particular in the memories of those who turned 14 during the 1980s. Although state socialism did not eradicate differences in poverty among the countries studied, it did narrow the gap among them, at least in the ways people still alive in 2000 remembered their lives27. Of course, the actual differences in living standards may have been greater or lesser than what our respondents recall. We know that before the 1980s, shops in Hungary were full, whereas in Moscow and Bucuresti people stood in long lines to purchase the most basic goods. Having experienced even worse times, the expectations of 27 Data on ‘relative deprivation’ shows an even more consistent reduction of poverty in all countries, with alarge (between and similar proportion of the population (about 80%, up from what used to be 40-50% inearlier times) believing that their parents earned average or above average incomes when they turned 14between 1960 and 1988. See Appendix 1. 64Russians may have been lower, hence perhaps easier to satisfy.28 To reiterate, our data do not allow us to make statements about actual living standards, but it does appear that people’s memories about how bad or good their lives were converged across countries from one cohort to the next. As to the statement that the gap between countries may have narrowed during socialism (or at least it seems that people experienced socialism this poorer countries that had a large agricultural labor pool, from which the process of extensive industrialization could be fed. Yet, the socialist economic system could never quite figure out how to cope with the challenges of “intensive growth” (Kornai 1992; Szelenyi 1989). Hence, at least for a while, the system produced better economic results this regard, the “homogenization” we note does not necessarily represent good news for all of the countries under consideration: it suggests that in the early phase of socialist development, the more economically developed countries may have had more difficulties. Furthermore, because the region was integrated into the Soviet Empire, politically induced. To sum up, on the whole it does not appear that the 1950s represented a time of honey and milk. The Stalinist past is not that radiant after all. About half of those who were young adults during those years remember them as years when their families 28 People also stood in long lines in Bucharest in the 1980s. Although they had experienced better times,by then, they were all too familiar with the daily constraints imposed on them by the Ceausescu regime. 65experienced substantial inadequacies in income; quite often, they also remember having face of an alleged paradise of egalitarianism. Put differently, people remember the one bowl of rice but they also believe that others fared better than they did. These trends changed rather impressively, however, between the 1960s and the 1980s. Those who turned 14 during the 1960s remembered decreasing poverty in comparison with those in the previous cohort. Conditions continued to improve for the improvement seems to have occurred in the East European countries during the 1960s; in one decade the percentage of those who experienced extreme poverty was cut by one third. In Russia, between the 1950s and 1960s, there was a dramatic change in how young adults perceived their family’s situations. Those who were young adults in the 1950s recall the USSR as having been the poorest of the countries we studied Following the Kruschev reforms, however, Russia began to resemble the other socialist countries though remains somewhat poorer all along. Hungary and Slovakia were distinctive. People who turned 14 during the 1970s and 1980s and who had been living in these two countries remember them as places where people did not have to starve and 66 Our next task is to explore, what explains who reports experience of poverty at comparison. Socialism was supposed to have produced poverty, which was only a life- factors, such as education or labor market performance did not play much of a role. The expectation is that with market transition the role of such structural factors to explain poverty will substantially increase. As a result poverty for some will become a lasting phenomenon, which may stick to those who are poorly equipped with human capital and therefore are chronically under-performers on the labor market. We also speculated that racialization and feminization might also be new features of poverty, which come into being with market transition and in a more pronounced way in liberal regimes. With the data we have we cannot test what the determinants of poverty were under socialism since we do not have a representative sample of those who were poor under socialism. Nevertheless if the hypotheses hold for socialism they may hold for our population. And indeed Table 1.2 offer some support for these hypotheses. As expected the demographic factors play a substantial role (what cannot be said – as we will show in impact of demographic factors are minimal). Number of siblings has quite a substantial effect, while class as measured here with the education of the father hardly plays any 67role. Cross-country differences also look unimpressive (and again in Chapter 3 we will The descriptive data do not support however our assumptions about the novelty of 1.2 the strongest effect, however, seems to be attributable to the fact that the respondent lived only with his or her mother when was 14 years old. Poverty at age 14 is associated with single motherhood even more strongly than with Roma ethnicity the other major determinant of poverty. The poverty of those of our respondents, who turned 14 during socialism (and before) was already feminized and racialized and in particular it was feminized in every country. Here again the war certainly had an effect on feminization, war orphans are obviously a part of the poor children who grew up without a father, so we will have to control for cohort effects and that is exactly what we do in Table 1.3 in a regression model. 68 Table 1.2 People who reported to be “very poor” (suffering from hunger) when they were 14 years old if they turned 14 before 198931 Among all Among those Among those Among those Among those Among respondents with four or who were who lived whose father Roma education or lessBulgaria 9.8 14.3 12.1 23.4 11.9 20.0Hungary 10.8 19.2 11.6 26.2 13.5 16.8Poland 14.9 17.0 18.1 34.8 17.6 -Romania 13.1 16.0 14.9 27.8 15.2 21.6Russia 16.1 25.8 20.8 29.7 17.9 -Slovakia 12.4 18.7 14.1 10.1 18.2 - mother only has a formidable effect on poverty of those respondents who experienced poverty under socialism. And this effect remains if we control for the war orphan cohorts. Cohort is the best predictor of poverty – those who were born before 1935, hence turned 14 before socialism are 4 times more likely to have experienced poverty than others. Many of these respondents turned 14 during or immediately after the war years. hence their excessive level of poverty is not that surprising. Achieving young adulthood during the years of Stalinism (they also could be war orphans) is also a predictor of poverty of young adults. But once to cohort effect is controlled for the feminization effect is still 31 Created by Janette Kawachi, June 18, 2002 69robust. Those who lives only with their mothers at age 14 are almost four times more likely to be poor that those who lives also with their fathers as well. Furthermore, in a way why fathers are absent – because they were killed in war, that relevant. The point is that society already before 1989 had difficulties in coping with single motherhood. It was the major source of poverty, more important for instance than the number of siblings one had in childhood and infinitely more important than the social status of the parents. Single motherhood is a good predictor of poverty when measured the same way in year 2000, hence feminization of poverty is indeed one element of post- communist capitalism. But the strength of association between single motherhood and poverty is even stronger for those who were born before 1964 when they turned 14 than in year 2000. Since there are relatively few people who lived only with their mother during their childhood in our step-wise construction of our models the fit of our model does not improve that much when we enter this variable when moving from Model 2 to Model 132. 32 The log likelihood statistics declined only modestly 70 We ought to remember, that here we are dealing with a subjective assessment whether the respondent was poor as a young adult or not. One possible explanation to the strong feminization effect is that it may be easier for people to see themselves as poor if they only lived with their mothers. If their father also lived in the household they may be poverty-stricken. 71 Table 1.3 Probability (odd ratio) that respondent reported poverty (hunger) when was 14 years of age among those who turned 14 before 1989, six countries, GPS33 Model 1 Model 2 Model 3 when 14Class Father had primary 1.327 1.298 - school education or less Mother had primary 2.005 2.020 - lessCohort Born before 1935 4.471*** 4.475*** 4.904*** Born between 1936 3.174*** 3.213*** 3.344*** and 1945Country Central European 1.287 1.337 1.314 countriesLog likelihood -1089.321 -1105.5204 -1257.6956 33 Created by Janette Kawachi, June 18, 2002 72 And what about ethnicity? In Bulgaria, Hungary and Romania Roma are much more likely than gadjo to report hunger if they were 14 years old before 1989. While only 10% of Bulgarian gadjo was according to this measure in ‘deep poverty’ before 1989 twice as many Roma gave such a report. In Hungary and Romania the figures are similar, that once we control for all the major factors, which seem to effect the memory of poverty the odds of Roma to report deep poverty is 2.4 times greater than that of gadjo. Nevertheless, as one can already guess from the descriptive statistics single motherhood is even more important predictor of experiencing poverty before 1989. Among people born before 1964 those who live only with their mothers when they turned 14 are 2.8 times more likely to suffer from hunger than those who lived with their fathers and mothers. Ethnicity was therefore an important determinant of deep poverty before 1989 Those report experience with deep poverty at age 14 before 1989, many of them under socialism who lived with their mother when they were becoming young adults who were Roma and who had many brothers and sisters. The father’s education was not significantly associated with such experience. This is to some extent consistent with what we expected to be the determinants of poverty under socialism. Class indeed does not appear to be a factor, those whose fathers were poor in human capital were not more likely to live as children in families in poverty. Since the socialist economy provided employment to all with relatively egalitarian wages, including those with low levels of 34 See Ladányi and Szelényi, The Making of an Underclass, Chapter 4, Table 4.2 73 Somewhat to our surprise it appears that poverty was already feminized and racialized before 1989 and in fact it was probably more feminized before 1989 than it was later. We expected no feminization or little racialization before 1989 on the same grounds as we anticipated little effect of low educational level on poverty. Given the logic of “economics of shortages (Kornai, 1980) even for Roma there was a full employment under socialism. While Roma was slotted at the bottom of the occupational hierarchy as unskilled workers in mining, construction industry, steel industry etc. their employment was stable and their wages was respectable. Therefore they should not have been in poverty before market transition when their jobs were destroyed first by post- unemployment. We also believed that single mothers would have been in the labor force under socialism. There was a demand for their labor as well and there were inexpensive or free childcare institutions, which made it possible- even for single mothers and even with several children - to work full time. We anticipated feminization of poverty to occur with market transition, since demand for female labor declined and the supply of childcare institutions were reduced and their costs were increased. In pots-communist capitalism – so we expected - single women will drop out of labor force (or will not even able to join it to begin with) and therefore will slide into deep poverty. Those forces of market induced feminization and racialization may indeed working, but our data suggests that poverty may have been as much feminized and almost as much racialized as it is in year 2000. 74 Conclusions Our data, according to the memories of our respondents, reveals that poverty decreased during the socialist era. Did this perceived decline of poverty and inequality result from the pure type of socialism, which radically eradicated market forces or on the contrary from socialist market reform? Some believe that socialism reduced poverty and achieves a reasonable level of equality only as long as market reform did not begin the system, socially unjust and irresponsible, which was at least partially made somewhat poverty the result of socialist policies or would it have occurred anyway as the outcome of economic growth? Some believe that decline of poverty and inequality would have occurred anyway as the economy grew (in fact a capitalist system, with great capacity to grow would have created more equality and certainly would have reduced more sharply poverty),others would maintain, that socialism reduced poverty well beyond what one could have expected simply on the basis of the economic growth ion socialist countries. reform which reduced poverty? The proportion of people who recalled being poor when they were 14 years old steadily declined after 1960, that is, after the break with classical 75Stalinism in all countries, both in reformers35 and non-reformers36. Hence, in Slovakia and in neo-Stalinist Romania where market reforms were eschewed, poverty was reduced as much as elsewhere. Nevertheless, it appears that the purer the socialist model, the less persuasive was its performance in reducing poverty and inequality. Those who lived through the Stalinist era do not remember it as an egalitarian system in which everyone’s by those who grew up in the period of market-oriented reforms under the leaderships of The least we can say is that as socialism began to experiment with market reforms, people living in these countries did not perceive this change as a shift towards greater poverty, inequality and injustice. As previously noted, market reform in Hungary did not result in an increase in the proportion of people who experienced absolute poverty37. Russia during the Gorbachev reform years is also remembered as a country in "economic growth" versus socialist policies. One can possibly argue that the decline in poverty our respondents reported to us would have taken place anyway in a growing economy. Poverty was reduced in capitalist Greece or Spain as well, not only in socialist Bulgaria or Slovakia, hence there is no reason to believe that a capitalist system would have produced the same outcomes in Eastern Europe as well. But if that were the case, why countries, which were rather different in terms of their level of economic 35 Hungary, Poland and in the 1980s Russia, (see Berend)36 Slovakia, Romania and to some extent Bulgaria. Bulgaria experimented with reforms, but not asconsistently as Hungary or Poland (see Berend).37 Similarly, as can be seen in Appendix 1, the proportion of people who remember experiencing relativedeprivation did not increase with market reforms. 76development began to converge once they became socialist? Furthermore, as we will see during market transition. While during socialism our countries were converging, after Romania, Russia) poverty growing faster, while in other countries (Hungary and Poland) poverty after an initial increase stabilizes and is even reduced. Why divergence now after 77Social determinants of poverty before 1989 As expected before 1989 class factors such as education (in our case, when we try to explain the experience of poverty of our respondents when they were 14 years old this is the education of their father) did not play a particularly important role in the depressed wage scales between highly educated and less educated this should not be that were good predictors of poverty, what may have been as a result only a life-cycle Poverty before 1989 on the other hand appears to have been already feminized (to a large extent) and racialized (to some extent). Next to cohort there is no better predictor of poverty at age 14 than the fact that someone lived only with mother at that age. Roma also tend to have more experience with deep poverty than gadjo, though ethnicity has to feminization of poverty but our findings in this chapter suggest that poverty might have 78 Chapter 2 In this chapter we have two tasks. First we document how much change - growth using retrospective and subjective reports on experience of poverty in 1988, 1993 and 2000 and by presenting data on people’s reports on the deterioration of their living standards between 1988, 1993 and 2000. Next, we identify who the losers were of the transition, what are the characteristics of people who reported more poverty in 2000 than former socialist societies. According to the World Bank “In 1998 an estimated one out of every five people in the transition countries of Europe and Central Asia survived on less than $2.15 per day. A decade ago fewer than on our of twenty five lived in such absolute poverty38” Our study is consistent with these claims, we also find a 2-5 fold increase in dynamics of poverty and on the social character of those who report deteriorating living conditions between 1988 and 2000. The key hypotheses which guide the data analysis in 38 The World Bank, Making Transition Work For Everyone, Washington, D.C.: The World Bank, 2000, p.1 79H1 after decades of ‘convergence’ former socialist countries during the 1990s began to diverge in term of the living standards of their population, resulting oín strikingly H2 this divergence is recent, during the first 4-5 years living standards all countries appear to be declining at similar speed, but after 1993 some countries are stabilizing and H3 while the differences between the stabilizing and continuously declining countries have a lot to do with their ‘initial conditions’, government policies, what sort of reforms are done, how consistently and how radically make also a difference and neo-liberal regimes are the ones which stabilized and neo-patrimonials the ones, which continue the decay; H4 next to regime type class (measured by level of education) is the best predictor who are among those who report deteriorating living conditions between 1988 and 2000; the H5 demographic factors are not good predictors of whose living conditions deteriorated and since Roma tended to be rather poor already in 1988 Roma ethnicity is not as a good 801/Cross national divergent trends in poverty after the fall of communism, 1988-2000 comparisons In this Chapter we use the same indicators as in Chapter 1: people’s report of experience We compare three time points, 1988, 1993 and 2000. 1988 is retrospective data from our 2000 survey. Data for 1993 come from the survey conducted by Ivan Szelenyi and Don Treiman a(Social stratification in Eastern Europe after 1989) . In that study we asked a few questions, which were the same as questions asked in our year 2000 study. The 1993- 2000 comparison is interesting, since it suggests that the developmental trajectory across countries began to diverge mainly after 1993. During the first few years all post- growth in poverty rates, but some countries, arguably those, which implemented more Slovakia for instance the proportion of those who report experience of poverty at all according to our indicators by 2000 is somewhat lower than it was in 1988 and in Hungary only slightly higher proportion of the population report poverty in 2000, than they report for 1988. Countries, which either did not try, or abandon neo-liberal reforms poverty and the gap between the two groups of countries, which was rather small in 1993 81The proportion of “very poor” increased in all countries between 1988 and 2000. In terms of “deep poverty” Bulgaria and Romania, however, seems to be even worse off than Russia. About 16 percent of the respondent in these two countries reported that they experience on a weekly basis hunger, thus we classified them as “very poor.” The deterioration is even sharper in Bulgaria than in Romania, since “deep poverty” was respondents – and according to received wisdom too - Romania was the poorest country among the six countries in 1988. The Russian level of deep poverty is not significantly different from the figures in the neo-liberal regimes. This may have a lot to do with the Russians are poor, but they are not hungry since they grow their own food. those who according to our definition are not poor decreases somewhat in Hungary, Poland and Slovakia (from 80-95% to 705-85%), but not with the same speed as in the other three countries Table 2.1). There is a dramatic deterioration of living conditions Russia only about a third or a fourth of the respondent report that they do not experience any poverty in any of our measures. In 1988 all socialist countries were rather similar by these measures. Hungary was the best off, but Bulgarians remember 1988 quite favorably, more favorably than the Poles and about as favorably as the Slovaks. In the “shallow” measure of poverty Romania and Russia were in 1988 the worse off, Bulgaria, Poland and Slovakia do better and Hungary does the best. The country differences 82increase substantially and the grouping of the countries changes also substantially. Table 2.1 offers a strong support to our earlier hypothesis, namely that socialism is remembered of fast differentiation across social systems. Those countries, which do well or at least reasonably well, are clearly countries, which made the most progress on the way to market reform. 83 Table 2.1. Experience of poverty 1988 and 2000 (general population sample)39Country Years Experience of poverty in 1988 and 2000 Altogether, 100.0% (N) Very poor, in % Poor, in % Not poor, in % though not completely inconsistent with the experience of poverty, reported in Table A1.4 It is clear from Table A1.4 that pour respondents see post-communist transition in the whole European region as a process of increasing social inequality. The transition from socialism to capitalism is experienced by many as not having the fair deal what they 39 Created by Janette Kawachi, June 18, 2002 84 Some countries, such as Hungary and Poland implemented rather fast and rather typically sold public property on open auctions to the highest bidders, which occasionally included foreign investors. They liberalized prices, trade, and foreign exchange, and implemented mass reforms of their banking systems. We call these countries “neo-liberal regimes.” gradualist strategy of reform. We will call this set of countries “patrimonial regimes.”41 In patrimonial regimes members of the elites, in particular managers but also workers are paternalistic state, or management against the blind forces of the market. Elites, especially managers – and to some extent even workers - under socialism had some de facto property rights in firms. Managers, fort instance, from the whole “bundle of property rights” had the “right of control” workers had security of employment. Under patrimonial regimes, these inherited rights are defended. Managers and workers may receive vouchers to legalize their de facto ownership in the firms they worked for management will be obliged not to lay off workers. Workers, in exchange, will feel obliged to keep working even if they do not receive their salaries. In Russia (Caleb Southworth) workers often do not receive salaries for several month, they still show up for work, since they have security of employment and their employer provides them with40 Though this more true of Hungary than of Poland, in comparison with other countries it is true for Polandas well.41 Hebert Kitschelt calls the first type of regime “national consensus,” the second “bureaucratic-authoritarian.” See Kitschelt, Post-communist Party Systems, 1999. Our distinction between “neo-liberal”and “patrimonial” regimes attempts to be value-neutral, and describe more the socio-economic strategyrather than the political system. 85housing and with family plots to grow their own food. While some reformers flirted with certain features of the neo-liberal model in these countries as well, reformers who argued that the costs of rapid transition would be too costly42 prevailed. Thus, on the whole, the privatization process in these countries proceeded more slowly than in other countries. Rather than auctioning public property on an open market where foreign investors could management buy-out – see Lawrence King). Price liberalization was cautious and influx between Hungary and Poland on the one hand, and Bulgaria, Romania, and Russia, on the other hand. Since 1994, The European Bank for Reconstruction and Development (EBRD) attempted to assign a measure for the degree transition progress in post- the five countries in our study can be clearly classified into the following two categories: Highly marketized, neo-liberal regimes (Poland and Hungary) on the one hand, and modestly (or low) marketized, patrimonial systems on the other hand (Bulgaria, Romania and Russia). For our purposes, the key questions posed by these liberalization indicators is whether these differences in transition strategies and outcomes are consequential for emerging structures of poverty in these countries, and if so, how? Were patrimonial 42 Not coincidentally, these same reformers also had a hand, not to mention an interest, in ensuring that the“endowments” owed to the managerial elite be converted into private property. 86regimes successful in sheltering the poor from the devastation of the market, or is there question of the market’s effect on poverty rates, and how the extent and nature of poverty is affected by different strategies of transition. Does the growth of the market decrease poverty by raising all ships, or must the patrimonial state endure to prevent massive to levels far below that of classical social democracies of the 1950s or 1960s. In this sense, “capitalism” was “built” over a relatively short period of time throughout the post- communist world, despite the fact that various countries pursued strikingly different strategies in order to achieve this outcome. No theory has been offered, and no systematic poverty during the process of transformation. With the present analysis, we intend to fill this gap. 87 Table 2.2 Economic policies, economic growth and social indicators43 Country Cumulative Real GDP in Survey data from 1993 and 200044 liberalization 1999 % of % of % of % of 1989-1999 (1988=100) population population population population reporting reporting reporting reporting deteriorated deteriorated 45 poverty poverty in living living in 1993 2000 conditions conditions in in 1993 in 2000 in comparison comparison with 1988 with 1988 Hungary 10.0 99 57 44 62 54 Poland 8.0 122 59 44 63 55 Bulgaria 5.0 67 68 81 69 84 Russia 2.0 57 60 74 65 77 Romania 6.0 70 - 69 - 72 Table 2.2 offers some prima facie evidence that there may be some relationship between market liberalization, economic growth and extent of poverty. The two countries that followed more aggressive liberalization strategies (Hungary and Poland) posted some reduction in various measures of poverty between 1993 and 2000. Those countries, the social conditions of their population. There is clearly an increasing gap between the two categories of countries, though it is less obvious whether that is caused by degree of liberalization or economic growth. The more liberalized countries also post better economic growth. It is not obvious whether the slight improvement in social conditions is the result of liberalization or economic growth. We cannot tell it either whether economic 43 Transition Report 2000, European Bank of Reconstruction and Development, p. 21 and. P.6544 Data are from Szelenyi and Treiman, Social stratification in Eastern Europe after 1989 (data from 1993)and Emigh and Szelenyi, Poverty, ethnicity and gender in transitional societies (data from 2000).45 We only have one measure of “poverty” which was asked in identical ways in 1993 and 2000. We askedpeople whether their family earned average, above average or below average incomes. We regard herefamilies who reported earning below average income as “poor”, in the sense that they experience “relativedeprivation”. 88growth was the result of liberalization or it was rather the precondition of it. Nevertheless one may claim that there is an “elective affinity” between liberalization, growth and particularly radiant light. A decade after the fall of communism, Hungary just approached the GDP levels of 1988 (and even Poland – the most dynamic post-communist country during the first decade of transformation - only posted a modest 20% growth over the decade). The proportion of population that reports poverty declined somewhat between 1993 and 2000, but it is still over 40%. In 2000 in Hungary and Poland more than half of our respondent believed that their conditions were worse than in 1988 (Table 2.3). Table 2.3 Change in the standard of living in the respondent’s households between 1988 and 200046 46 Created by Janette Kawachi, June 18, 2002 89We created a new variable from the question whether the household earned below average, average or above average incomes in 1988 and 2000. We combined responses given for 1988 and 2000. Those were ‘downwardly mobile’ (our most interesting category) who reported average or above average incomes in1988 and below or way below average income in 2000. The other categories are defined by the same logic. There is massive pauperisation (downward mobility into poverty) in all countries, more between 30 and 50% of the respondents were not poor in 1988, but reported poverty in 2000. The same proportion is more modest, but it is still an astonishingly high 10-15 % in One important finding of our research is that in terms of the initial conditions all countries were rather similar. The proportion of those who reported experience with poverty in 1988 is almost identical in all Central- and East-European countries. This may not mean that the actual conditions were identical, after all Russia or Romania were poorer than Hungary or Poland, nevertheless people remember 1988 in similar ways. The BIG difference is in change: in people’s experience Eastern Europe is hit so much harder than Central Europe and that makes one wonder whether government policies may alternative than the paternalistic road to capitalism. Thus there is prima facie growth, inequality and level of poverty, and the nature of this relationship requires further 90scrutiny. Our task in this paper is to explore these relationships and the mechanisms, 91 2/ Social determinants of deteriorating living conditions, 1988-2000 Who are the losers of the transition? Who are those, who believe that they lived better in 1988 than 2000, who are those who were not poor by the end of the socialist regime, but who reported poverty more than a decade after the fall of communism? There is no obvious answer to this question. Some commentators believe that the big losers came from the middle class (Tamas Kolosi) others think the poor got poorer with the expansion As Table 2.4 shows it is likely to vary substantially across countries who the losers of the transition were. Demography does not play a major role, households without children did not escape poverty more than the rest. Gender does not make much difference in general. Rural residents fare better in some countries and were slightly over-represented among the losers in other countries. The big losers were those who in year 2000 were either out 92Table 2.4 People, who reported that they were “much worse off” in 2000 than in 198847 Country Among all Among Among Among Among Among Among Among children of of household households school workforce education or lessBulgaria 60.0 62.4 66.1 61.0 65.4 69.6 68.9 74.8Hungary 18.1 18.3 17.7 26.9 16.9 21.7 37.3 41.3Poland 25.1 24.6 22.1 26.3 29.5 31.4 46.3 -Romania 44.9 45.7 40.5 52.3 50.2 46.5 57.7 68.2Russia 51.1 54.7 56.0 54.6 57.2 57.6 61.2 -Slovakia 33.1 33.1 34.7 36.4 38.3 41.6 57.7 - than non-Roma, nevertheless ethnicity effect in this case is modest. Arguably Roma were already in 1988 quite badly off, so there was not much more how worse they could get. It is also notable that gender does not make much of a difference either. from the labour market), which explains who believes to be ‘much worse off’ in year predictor who reported average or above average incomes for 1988 and below average income in 2000 in neo-liberal regimes, but that is not the case for neo-patrimonial systems 47 Created by Janette Kawachi, June 18, 2002 93 The most astonishing - though upon reflection - sensible finding is the relative mobility. Roma in Bulgaria and Romania is only somewhat more likely to have moved down into poverty after 1988 than non-Roma, though the difference between the two ethnic groups is much greater in Hungary (Table 1.6). If this holds in multivariate analysis it shows that Roma was already poor in 1988, therefore there was not much room left for Gypsies to be downwardly mobile. They were already poor. Here those are regarded as downwardly mobile who did not report poverty in any of the “four questions” (hunger, not enough meet, no second pair of shoe, no warm winter coat) in 1988, but report poverty at least in one dimension in 2000. The story with this measure is not that different from the story, when downward mobility was measured with “relative deprivation”. 94 Table 2.5 Probability (log odds) that respondent’s household is muchworse off in 2000 than it was in 1988, GPS for the six countries. Model 1 (full Model 2: Model 3 model) M1 - class M2 – classDemography Number of .961 .978 .952 children Rural 1.000 1.099 1.104 residenceClass Head of HH 1.339*** - - primary school or less Head of HH 2.151*** - not in labor forceFeminization Single 1.235 1.257 .337*** mothers Single women 1.248 1.274 .298***Liberalization Hungary, .312*** .313*** .311*** Poland and Slovakia-2 Log likelihood -3935.4912 -4116.4113 -4123.9568*significance at 0.05 level, ** at 0.01 level The picture, which emerges from Table 2.5 is clear and powerful. There are three determinants of mobility into poverty: the type of regime, the level of education and employment status of the heads of households. The most robust effect is exercises by regime type – in neo-liberal regimes people are 3 fold less likely to become poor during transition than in neo-patrimonial regimes. This is the most interesting finding and it helps us to assess, whether government policies are consequential or not. Here we see the difference in the dynamism of poverty across regime types. We capture here the very process of divergence, hence it is more difficult to refute these findings with reference to the differences in the initial conditions. These data speak to how different the conditions become, rather than to how different they were initially. When we try to explain who reported declining living standards we get similar results. Neither single motherhood, not number of children make much of a difference, but households, where the head has only primary school education (or less) are almost three times more likely to believe that their living conditions turned for the worse. 95 The models presented in Tables 2.5 are problematic. All the independent variables we use in these models describe the year 2000 situation of the households and we do not know what their situation was in 1988. Indeed some of the households who have many children may have been still childless in 1988, single mother households may have formed or become single after 1988. But if number of children or single motherhood only help by the fact that such condition did not exist in 1988. And now let’s turn to the question of ethnicity. On data pooled from the general populations samples (GPS) and from the Roma over-samples (ROS) we test the relative Surprisingly Roma ethnicity in the model presented in Table A3.4 in Ladanyi and Szelenyi (which predicts ‘downward mobility’) has only a relatively weak effect. This is the result of rather wide-speared shallow poverty among Roma already in 1988. So many Roma were already reporting some poverty (even-though extreme poverty such as reporting hunger was rare during late socialism) for 1988 that it was difficult to be Even for the poor it is possible to become poorer – this is what the comparison of Table A3.4.and 4.5 in the Roma book by Ladanyi and Szelenyi tells us. If we try to explain who felt that their living conditions deteriorated between 1988 and 2000 Roma ethnicity plays again a role – even after one controls for ‘class’ Roma are still twice more likely to complain about deterioration in living conditions than others (Table 4.5 in Ladanyi and Szelenyi). One should note, however, that Roma ethnicity is not a 96particularly important predictor of who the losers are, education is a much more important factor. The market played a critical role in picking who the losers will be. It is the bottom of the society in terms of education and labor market performance who complain about deteriorating living standards, or who were not ‘poor’ in 1988 and became poor in 2000, thus were ‘downwardly mobile.’ The demographic factors, which according to earlier sociological studies was so important in identifying who were the poor under socialism does not play much of a role, single motherhood is barely significant, though Roma ethnicity has about as much explanatory power as the employment status of the head of the household (Table 4.5 in Ladanyi and Szelenyi). There is clearly a shift to ‘achievement’, the market measures people up in terms of their ‘human capital’ and unless the social safety net operates punishes those who are poorly endowed with human capital. This can be an important finding. We know that in 2000 Roma and single mother households – even after we control for ‘class’ – twice as likely to be poor than others. The relatively weak effect of ethnicity and single motherhood in explaining who between 1988 and 2000 the losers were suggests that Roma and single mother households were already among the poor by late state socialism, hence neither racialization nor racialized and feminized poverty. The most important such mechanism would be the 97labor market as well, hence it produced ‘excessive demand’ for all labor, including the labor of Roma and mothers of small children. There was near full employment among Roma and given the reasonable availability of childcare institutions a high level of that a more deregulated labor market creates surpluses of labor and the first to be difficult to women with small children, especially single mother to seek gainful employment. And exclusion from the labor market – after low levels of education – is an important source of poverty. It is arguable though that the labor market mechanism does not affect single women or Roma ethnicity more than others, it works through education in both instances and therefore neither racialization nor feminization was aggravated 98 3/ Making sense of re-emergent cross-national divergences: long duree; initial economic and political conditions; government policies and different strategies of transition The comparison of Tables 2.1, 2.2 and 2.3 is very instructive. It shows that the six countries we studies did not diverge right away after the fall of communism, the major changes took place after 1993. We are inclined to use this as evidence that the divergent conditions in year 2000 cannot be therefore simply attributed to the differences in the initial level of economic development. In these respect the countries were close to each other, stayed reasonably close during the first few years of post-communist transition, but they handled the challenges of transition differently. Thus we have at least some evidence, that post-communist economic policies made a difference. Countries were not on an over-determined path. Furthermore the data also suggest that neo-liberal reform, while its costs are also prohibitively high proved historically a better way to proceed that in neo-patrimonial systems. The better performance of the Central European countries in comparison with the 1. Initial conditions immediately. Hungary and Poland are close to EU markets, linked with better roads and railway links thus they are more attractive for foreign investors. One can argue that once 99Central Europe absorbed abut as much DFI as it can capital will move further East and will reduce the newly created gap between Central and Eastern Europe. There was a difference within these two sub-regions in the level of economic development. While there was some convergence in this respect between 1945 and 1989 enough difference remained, therefore the somewhat more affluent Central European countries found it easier to ‘bite the bullet’ and to implement shock therapy. Eastern European regimes tried it, but since they lived below the threshold, which would make the pains of shock tolerable they had to give in to political pressures and soften the reform measures. Finally there are also differences how far reform progressed prior the fall of communism. Reform communism may have paved the road to further faster reform in countries like Poland or Hungary and absence of prior reform may hold back the reform process. 2. Long duree It is of course not clear how far one has to go back to find the proper ‘initial conditions.’ Many of the immediate initial conditions were rather similar in the Baltic states to Russia or the Ukraine, nevertheless the Baltic states followed a similar trajectory than Central Europe. Slovakia at least in terms of no prior reform resembled in the Central European group. Hence it is not inconceivable that there is a longer term society, or even religion. Why Yugoslavia breaks along the lines of religious cleavages 100and why in one considers now the Baltic States as part of Central Europe the divide between Central and Eastern Europe is really a divide between Western and Eastern Christianity? No question about it, initial conditions, even long duree factors do play a role. Nevertheless, what needs an explanation is why the process of divergence begins after a long epoch of convergence. Initial conditions and long duree cannot explain BOTH The convergence among countries between 1945 and 1989 and the divergence during the 1990s suggest that governments have a certain degree of freedom in choosing their transition strategy and depending what strategy they choose they can do better or worse. In retrospect the Czech Republic pursued different policies than Poland and Hungary during the first half of the 1990s and it did so probably for the worse. If the Chinese way was not an option in the post-Soviet world than at least as far as poverty is concerned it appears the least damaging strategy was the reasonably consistent adaptation of neo-liberal policies. The main reason for this is foreign was created by liberal reform. The main source of economic dynamism in European post-socialist capitalism so far was foreign investment and arguably dynamic economic 101 Chapter 3 to a market economy. When socialism was crumbling few anticipated that the costs will be so high and the circle of losers so broad. Nevertheless what we saw in the previous chapter arguably was a price what had to be paid. Looking carefully where post- communist societies are in year 2000, thus 12 years after the collapse of communism will help us to begin to develop a theory what will be the nature of poverty once post- communism crystallizes into a system or even into multiple systems, which will tend to reproduce themselves. Many commentators expected that with the fall of communism a ‘new poverty’ will emerge. Most of these commentators agreed that poverty under socialism was a life cycle phenomenon (Ferge, Bokor, Kolosi). People with many children fell below poverty line when they had to cater for their children, but as kids grew up and left they moved out of poverty as well. People may have ended up in poverty, when they were ill and if they recovered they may have moved out of poverty. Full employment of all (including near full lifetime employment of women and Roma) was a mechanism, which prevented that people could stay below the poverty line all their lives. ‘New poverty’ implies not only a growth in the volume of poverty, but also a new quality of poverty. High level of 102poverty may be temporary anyway, and once dynamic economic growth takes place at least ‘absolute poverty may decline, but the nature of poverty may have changed for good, the importance of demographic factors might have declined and was replaced by In this chapter our task is to explore who are the people who become the ‘new move from a socialist rank order to a capitalist class stratified society achievement, hence factors such as education will play a greater role. But gender an ethnicity might very rates were higher than male unemployment rates and there has been a trend in all countries for women to drop out and stay out of the labor force in larger numbers than men do. This may be by ‘choice’, women may opt to stay at home to take care of children and husbands and may be also enforced by increasing gender segregation on the labor market. Furthermore there has been an increase in out-of-wedlock birth. This again may be by choice, thus some of those birth may occur in families where the couple decides against marriage as a life style choice. But it is not inconceivable that some of the new out-of-wedlock birth may be the result of mothers abandoned by the poverty-stricken fathers of their children. At any rate, increasing out-of-wedlock birth can certainly be 103 Furthermore, it is received wisdom, that the structural changes, which took place with market transition, affected the Roma population more than other groups. According to studies of Roma in the region (Kemény, Tomova, Zamfir) the single most dramatic change in Roma life during the 1990s has been the sharp decline in the proportion of Roma in the labor force. According previous research Roma were the first the laid off and Hence while market penetration may indeed changed the foundation of poverty but gender and ethnicity may counter-act this tendency. Our main aim in this chapter is to test: 1/ Is it indeed the case that the importance of demographic factors, such as rural place of residence or family size may not be less important in explaining who ends up in poverty important will be gender and ethnicity in explaining post-communist poverty, are these factors a ‘match’ to the effect of class, or they just transmit, or modify the effects of Finally in this chapter we also will explore what cross-national variations we find in the extent and in the determinant of poverty. One key hypothesis of this book is that are ‘capitalists’, but might represent rather different types of capitalism and these differences may be highly consequential for the extent and nature of their poverty. We rather differently from neo-patrimonial post-communist regimes. The task of this chapter 104is to explore to what extent this was only a feature of transition or to what extent the level and social determinant of poverty may settle in a different pattern in the two major households, which are the best indicators to predict poverty are rather similar in the six societies we studied. After all, this is not that surprising, since these household surprising result, however if why these similar characteristics were to produce rather different outcomes. 105 Table 3.1 Household characteristics in six post-communist societies, year 200048 Thus in our analysis we will use the following independent variables: the number of children; urban residence; education and employment status of the head of household; households of single mothers and of single women (hence the columns of Table 3.1); Roma ethnicity and ‘country’ or to be more precise the type of capitalism (liberal versus neo-patrimonial). expenditure below a certain threshold. We could not use these indicators so far, since for historical analysis we could not generate retrospective income or expenditure data. But now our subject is now social condition at the time of our survey, in year 2000 we will utilize these indicators. These are regarded indicators of ‘objective poverty’ and distinguish those from ‘subjective poverty’ measure we used so far for historical comparison. The table below summarizes the type of measures used as ‘dependent variables’ in 48 Created by Janette Kawachi, June 18, 2002 106 Various measures of poverty: Subjective: Objective: The real question is not, which measure is ‘better’, more ‘reliable’, or more ‘valid.’ They both have problems with reliability and validity. It is not obvious for instance whether respondents want to tell us what is their income or even expenditure, or even if they know what their expenditures are. Furthermore assigning to a household or an individual a ‘poverty line’ may mean that we call a household poor, or not poor There are similar problems with the subjective measures. The level of inter- subjective validity of reports by respondents both about absolute and relative poverty may not be great. What passes as ‘hunger’ for a certain respondent may not be felt or the most likely that the respondent does not know what ‘average income’ may be in that society at that point in time – thus how could such a response be reliable? Thus objective or subjective measures are not better or worse, they are different and tell us different stories. Subjective measures inform us what the ‘lived experience’ of poverty is, while the objective measure identify those of us who are ‘income/expenditure 107 As a result a household may be ‘income/expenditure poor’, though it may cope well, or it may not be poor in income/expenditure but may cope poorly therefore be in poverty. We will see for instance that income/expenditure wise rural households tend to be poorer than urban ones, nevertheless by the subjective measures the opposite may be the case. The same holds for Roma and for single mother households. Both Roma and single mother households ‘objectively’ tends to be less ‘poor’ (they receive various welfare payments) than subjectively (their crucial needs may not be met by a relatively We need therefore BOTH the subjective and objective measures, the first is likely And lets now turn to the question of absolute and relative poverty. The notion of absolute poverty was under attack for a long time by sociologists (Townsend) and for would prevent researchers of more affluent societies to identify in their societies those who are poor (even-though they may not be as poor as the poor in poverty-stricken countries.) Therefore sociologists for good reasons often advocated relative measures of poverty, such as those who earn 50% or less of median income. These relative measures are indeed useful to identify, who are poor in a given society, but by definition they are not very useful in cross-national comparison. These ‘relative poverty” measures if looked at from the perspective of cross-national comparison tell us more about inequality than poverty. As we will see for instance if we measure poverty as the population, whose median expenditure is 50% or less of the median Poland appears to be a poorer country 108than Bulgaria, which not only contradict common sense but is also disconfirmed by other data available to us. No matter how sympathetic we are to the argument that poverty was not eliminated, but rather transformed in the more affluent countries, nevertheless we should be able to distinguish among poor those who are even poorer. Yes, there are poor people in Sweden as well, but it is also true that the poor of Bangladesh would love to be our Finally there is a lively debate among scholars who poverty and living standards in general whether income or expenditure is a better way to understand poverty. The World Bank….). This is a reasonable position. If in survey research one asks questions about expenditures rather than incomes it is more likely that one can catch what the informal economy and self-provisioning may provide to the households. In our study we adopted the World Bank questionnaire in an abbreviated form. The World Bank developed a interviews with each household. We could not afford to collect data in such a detail, so we reduced the expenditure measurement, which did fit our research purposes in close consultation with World Bank experts. Like the World Bank itself, we also collected data on incomes. In fact the difference between the income and expenditure measures in our survey proved to be rather marginal. In this book we use the expenditure indicators. We do not want to minimize the importance of the difference of income and expenditure “subjective” measures they appear to be the variants of the same family. We believe the 109larger difference is not between income or expenditure, but between subjective and while predicting poverty coefficients for single mothers//single women and Roma variables; gender, but we expect similar levels of poverty as a lived experience in single mother households; 110 We begin our analysis with the data - subjective measures of poverty - we used in Chapter 1 and 2. First we present descriptive statistics and next we use some simple 111 Table 3.2 The most interesting finding in Tables 3.2 is the strength of the “single mother household” variable. Those who report “hunger” is well over-represented among single mother households and generally also among single women households. The ‘feminization” seems to be as strong an effect than low education and that stands for all Compare for instance the education effect in Bulgaria and Poland in Table 4.2: absolute poverty in Poland in the whole sample is 6%, but it jumps over 10% among those who do not have higher than completed primary school education. There is an education effect in Bulgaria as well, but here the national average of 16 % poverty compares with 23% among the poorly educated, Also as we anticipated differences among the two types of 112regimes (H5) are substantial and occasionally greater then differences within countries. The most striking figure is reporting hunger in Bulgaria and Hungary: among Hungarian Roma the proportion of those who report hunger is only slightly higher (20%) than among the general population on Bulgaria (16%). In other words the non-Roma are looking at relative deprivation (Table A1.8) rather than absolute poverty (Table 3.2). But in this case the story otherwise is similar – very strong race effect and a Table 3.3 and Table A1.9 are generally supporting the hypotheses we just evaluated, 113 Table 3.3 Probability (odd ratio) that respondent reported experience of poverty (hunger) in 2000, GPS for six countries49 In Table 3.3 a strong feminization outcome is supported, both for single mothers and for single women. In the full model households, where single women live with their dependent children are twice as likely to report hunger than other households. Nevertheless, the multivariate analysis shows that “class” when the chips come down tells a bigger part of the story than feminization. Low education and employment status of the head of the household in itself has a massive effect (those households, where the head has only primary school education are 2.8 times more likely the experience poverty than others). Furthermore as we build our models in a stepwise fashion we can see that adding the feminization variable improved significantly the fit of the models, but the49 Created by Janatte Kawachi, June 18, 2002 114improvement is much greater when we enter the education and employment status variables This is to be expected. The coefficients for the class variables are grater to begin with and in most countries only about 8-9 percent of the households are composed of single mothers and single women. A much larger proportion of the households is affected by loss of jobs (not only unemployment, but also by early retirement or other forms of ‘voluntary” withdrawal from the labor force) hence this variable indeed should have a greater explanatory power of the overall level of extreme and absolute poverty. The model presented in Table 3.3 also offers strong support to H1 – the demographic factors are relatively weak. Number of children has only a weak effect. The coefficient of this variable is marginally significant and its size is modest. This is not quite the case for rural residence. Once we control for all of our variables rural residence becomes significant – but in sharp contrast to what we will see in the case of rural areas are doing much better, they are about half as likely to report hunger than urban residents. When poverty is measured with income or expenditure, rural population tends to appear to be poorer than urban residents. The model also offers some support for H5: in neo-liberal regimes people are only half as likely to report hunger than in neo- patrimonial ones even after we control for all the major factors, which may impact absolute poverty. If our dependent variable is relative deprivation (whether people felt that their family earned well below average incomes, Table A1.9) the story is similar, though there is one striking difference with Table 3.3: the cross-country differences are much bigger. People in neo-liberal regimes are almost four times less likely to think that their income 115of below average. This is a puzzling finding and may indicate that people in neo-liberal regimes are more likely to accept inequalities than in patrimonial regimes. Since neo- patrimonial regimes legitimates themselves with claims that they “shelter” clients from pains, imposed by mark transition if it does not people challenge its legitimacy. absolute poverty and relative deprivation for the subjective measures, since the extent of the two phenomena is so dramatically different. Nevertheless, much like in the case of reporting experiences with hunger, when the question is which households report relative deprivation we again see the relative weakness of demographic factors. In most models number of children and rural residence is not significant. Education has a robust effect, but feminization has also sizeable coefficients. And as we anticipated it: cross-country variation is even stronger, than variation by class or gender. People who live in neo- liberal regimes are four times less likely to report relative deprivation than people living In order to see the effects of race we have to look at Table 4.8 in Ladanyi- Szelenyi book.. We run separate models for the three countries, where we created a Roma over-sample pooled the Roma over-sample (ROS) with the general population Roma if we only control for the demographic differences between Roma and non- Roma is seven times more likely to report hunger than non-Roma. When we first only control (in Model 2 of Table 4.8 in Ladanyi-Szelenyi) for the impact of feminization that does not reduce substantially the size of the race coefficient. This is not surprising, if we consider that given the strength of extended family among Roma the proportion single 116mothers and single women households is less than in the non-Roma population (see education, next employment status into the models Roma ethnicity loses a fair bit of its explanatory power. In the full model the size of the Roma coefficient is about half of what it was in the base-line model – now Roma is „only“ four times more likely to report poverty than non-Roma. Education and employment status are also important predictors analysis and they are likely to be bigger than they are if we only look at the general population. In Table 3.3 - where we did not consider the Roma over-samples - people in regimes. Now in Table 4.8 in Ladanyi-Szelenyi book - after we include the Roma over- samples - in Bulgaria and Romania people are almost 3 times more likely to report hunger than in Hungary. This tell us that Roma in Hungary is not only merely better off than Roma in Bulgaria and Romania, but there is also less difference between the Roma and non-Roma Hungarians than between Roma and Not Roma Bulgarians and Romanians. book) it is quite astonishing how smaller the Roma effect is in comparison with absolute poverty. While Roma is 4.3 times more likely to report hunger than non-Roma they are only 1.9 times more likely to believe that they earn below average incomes. This is indeed surprising, since gadjo tends to believe that Roma are „complainers“ and they 117believe that everybody is sucking their blood – the picture, which emerges from Table 4.8 and A3.5 from Ladanyi-Szelenyi book does not support this stereotype. whether the subjective measures we used so far are sensible at all and whether the differences between the subjective and objective measures are indeed those kind of 118 Table 3.5 119120 In all the previous data we found Hungary to be rather similar to Poland and quite different from Bulgaria, Romania and Russia. This pattern is confirmed in Table 3.5 where poverty lines are measured in terms of per capita incomes or expenditures. The clearest picture is presented if $4.30 PPP per capita expenditure per equivalent adult is used as the “poverty line.” In this case the proportion of population under poverty line is 9-11 % in Poland and Hungary, while it is about 30-50 % on the other three countries. Table 3.5 shows one striking difference from the ‘subjective’ indicators, namely Bulgaria and Russia swap places at the bottom of societies. While in Bulgaria many more people reported hunger than in Russia, when expenditures are measured in monetary terms hunger, in Russia the same proportion was only 7%, if poverty line is set at $4.30 PPP of adjusted expenditures than in Bulgaria ‘only’ 38% falls below this line, while in Russia the proportion of the poor is 54%. We believe this indicates that the subjective measures might be more accurate than measures which use monetary terms. Russian are undoubtedly ‘income poor’, but given the ‘involutionary’ nature of the Russian economy, they may avoid extreme poverty (especially hunger) more since they have a more Michael Burawoy) that Russia did not dismantle the kholhoz system, instead in a way it generalized it. Now even civil servants or industrial workers do not get paid (or their pay is in arrears for many month). They cope, since firms give access to them to what used to be ‘family lot’ (gardens) in the kholhozes, hence they grow their own food and do not starve (see also Table 3.9). While the World Bank methodology using expenditures 121accounting problem it may not do with sufficient precision (given the difficulty to Furthermore if we draw the poverty line as those whose per capita expenditure is 50% or less of the median than the cross country variation is quite different from the one measured with $4.30 PPP or the ‘subjective’ measure of ‘reporting hunger’. If our aim is to predict in which country does more people have less than 50% of expenditure, than Bulgaria and Hungary has the lowest proportion of poor people (they have about 9-11% such households). Poland is closer to Russia and Romania stands out as the “poorest” 50 When we adopted the World Bank methodology we indeed cut some of the questions about selfprovisioning, hence our data set is not quite as good as World Bank data usually are. The World Bank teamin their report what the4y prepared from our data on Roma poverty - Poverty and Ethnicity by A. Revenga,D. Ringold and W.;. Tracy, March 6, 2002 – used various calculations to estimate the value of self-provisioning. We decided not to adjust the data, but to acknowledge that our expenditure measures to notsufficiently reflect self provisioning. 122 Since we interpret the “relative poverty” primarily as a measure on inequality this finding has important implications. While the rate of poverty indeed seems to be highly impacted by the nature of the regime that is not the case in levels of inequality. Bulgaria and Hungary seem to be the most egalitarian countries (data using GINI coefficients also show the same results, see Eric Kostello), while the other three countries are more unequal. After all, this is rather sensible. The structure of the economy and its dynamism has a lot to do with levels of absolute poverty. Indeed we gain support here for Depak true that more consistent liberal economic reform helps to reduce the level of absolute poverty, there is at least some “trickle down” in liberal, more dynamic economic systems. Inequality however does not necessarily follow the same pattern. Lal speculated that inequality is increasing with economic dynamism (assuming that higher level of post-communists capitalism does not support this (though one can hardly expect from so few cases for such a brief time period to offer a good test of the Lal theory). What we seem to be finding here that levels of inequality may be influenced by country specific variables. It may be culture and politics, which has a greater impact what level of inequality a society accepts rather then the characteristics of the economic system. 123 Table 3.6Determinant of poverty – households at $4.30 PPP (adjusted) per capita daily expenditure or less, in %51 51 Created by Janette Kawachi, May 4, 2002. No household was below $2.15 daily adjusted per capitaexpenditure in Slovakia 124 Table 3.7 Determinants of Absolute Poverty ($4.30 per capita daily expenditure, adjusted to equiv. If poverty is measured with the more conventional, monetary way (Table 3.7) the demographic factors have a stronger predictive power than in the case when we used our 52 Created by Janette Kawachi, June 18, 2002 125subjective indicators (Table 3.3). In this case number of children and rural residence is feminization and the basic demographic variables are about equal importance in predicting poverty in terms of expenditures. Class, however, keeps stealing the show. Education is not quite as important as it was when ‘reporting hunger’ was our measure of absolute poverty, but households where the head has only primary school education (or less) are twice more likely to have an expenditure below $4.30 PPP, than households with secondary school (and higher) education. And the extraordinary power of ‘regime type’ is demonstrated one more time and even more strongly than before. People who lived in neo-liberal regimes in 2000 were ten times less likely to have expenditure below $4.30 There is however an important finding in Tables 4.9 and A3.6 in Ladanyi-Szelenyi book about ethnicity. It is clear that the effect of ethnicity is substantially lower if the measure of poverty is a monetary one, rather than a substantive one (such as reporting hunger, or feeling of relative deprivation). While we used such subjective and substantive indicators the effect of Roma ethnicity was about as strong as the effect of education both at absolute and relative poverty (Table 4.8 and A3.5 in Ladanyi-Szelenyi). Now in Tables 4.9 and Table A3.6 in Ladanyi-Szelenyi book this changed quite radically: when poverty line is established in monetary terms class is the most important determinant, the importance of ethnicity is much reduced. This might mean that Roma is a ‘welfare class’, Roma households receive various transfer payments, hence their poverty is better measured in terms of low quality housing, hunger and frustration rather than by the monetary level of their expenditures. For example in Bulgaria 47% of Roma households 126received child allowances (while only 22% of all households receive such transfer payment). In Hungary the same figures are: 65% for Roma households and 25% of all households. In Romania: 66% of Roma households, 38% for all households. For social assistance and unemployment benefits we can detect the same trend (see Revenga, What are the differences in the various “coping strategies” for households, who find themselves in poverty? To what extent can families rely on the welfare state and what are their chances that make ends meet with resources generated from the second economy? institutions. About the same proportion of the population is eligible to various transfer payments in all countries. The only major outliers are: Russia in housing (there is still substantial public housing assistance in Russia, while that was eliminated by 2000 in the other countries) and child allowances in Poland(Poland must have eliminated universal child allowance provisions, while the other countries retained it). Table 3.8 of course only describes the institutions of the welfare system and does not give an accurate picture of its functioning. Data in Table 3.8 describe only ’access’ and not actual transfers. The value of actual transfer may vary substantially in PPP and that has to be analyzed as well to understand how much of a coping strategy the welfare state can be for households As we already pointed out already in terms of access there are at least substantial cross-ethnic differences. Roma has been established firmly during the early years of 127post-communism as a ‘welfare class’, hence it is more dependent for its survival on 128 Table 3.8 Eligibility for Social Protection Bulgaria Hungary Poland Romania Russia SlovakiHousing1 8.2 5.1 9.1 7.0 43.3 5.4Child allowance 23.5 26.1 7.6 39.1 42.3 27.9Poverty assistance 7.8 7.3 6.1 3.8 2.6 10.2Old age and disability 60.6 59.3 54.1 59.4 52.2 47.6Unemployment benefits 5.4 6.2 7.5 7.8 2.6 12.4Source: GPS only, created by Janette Kawachi, June 18, 2002Notes: 1 Percent actually receiving. provisioning than on the welfare institutions (and if we would calculate not only access, but also the extent of each coping strategy than self-provisioning is likely to be even more important for the general population). There are few outliers: in Russia many more people gather herbs, mushrooms, wood than in other countries, in Hungary the proportion of food consumed by the family which was produced by the households is The main findings: about half of the families (or more) cultivate some land and about 129Table 3.9Self-provisioning strategies (% of Households) Bulgaria Hungary Poland Romania Russia SlovakCultivate a garden 44.0 44.4 38.4 42.2 65.3 60.1Cultivate any agricultural land 28.9 12.2 15.9 30.6 17.8 14.1Average area of cultivated land 1.8 1.9 16.6 2.1 1.1 1.4Gather herbs, mushrooms, wood 6.4 5.0 20.1 11.9 47.4 25.5Avg. share of food from own prod. 40.6 28.7 36.8 39.7 40.0 21.2Keep animals 39.3 29.3 18.0 48.2 21.2 31.9Source: GPS only, created by Janette Kawachi, June 18, 2002 While by year 2000 transfer payments, the welfare system appears to be rather targeted (understandably with the exception of old age pension), self provisioning is more universal. Generally the proportion of families which receive some transfer payments varies between 5% and 25% of the general population, the bottom half of the society just the opposite effect to what we observed in transfer payments. While Roma has more access to most forms of transfer payments than non-Roma, the opposite is true for most forms of self-provisioning (with the exception of ‘gathering’ – Roma tend to gather more, than non Roma, though they are engaged less in such an activity than Russian!) Roma cultivate less land than non-Roma and grow a smaller proportion of the food they 130 Conclusions demographic factors. Indeed number of children or rural place of residence are often not significant and when they are significant their coefficients are usual rather modest, are 2/ Class (short of regime effect) steals the show. In particular low level of measure of poverty one uses. Surprisingly, once one controls for education exclusion from the labor market itself is not such a strong predictor of poverty, though it almost has a statistically significant effect though the coefficient is relatively modest). Most likely that education explains exclusion for the labor market as well. mothers with children under the age of 16 are about twice as likely to be poor than others. poverty. Roma tend to be 2-7 times poorer than non-Roma. Education explains a lot from Roma poverty, but not everything. Even after one controls for education (while such a control indeed reduces the effect of ethnicity) Roma tend to be much poorer than non- Roma. Unlike the United States there does not appear to be an interaction between racialization and feminization. Given the strength of Roma extended family Roma single mothers are more likely to be looked after by their family than non-Roma single mothers. 131 4/Regime type, however, is the best predictor of poverty irrespective what measure of poverty we use. People who live in neo-liberal regimes tend to be less poor than people who live in neo-patrimonial form of post-communist capitalism. This is true not only for absolute, but also for relative poverty. While inequality is not being reduced not vary cross-system, it rather varies cross-countries, driven by political and/or cultural factors) it appears that in neo-liberal regimes people tend to accept great inequality more 5/Coping strategies open to the poor vary little across countries or systems. countries. The welfare system by year 2000 appears to be rather targeted in all receive some sort of welfare payments, while half survives by using some form of self- provisioning. Roma is different from the non-Roma in this respect. Roma has been constituted as a ‘welfare class’, it qualifies for more transfer payments than non- non-Roma the major way of coping is self-provisioning, for Roma the major way of 132 Conclusions This book summarizes what the “story” is of our research carried out in year 2000 about poverty, class, ethnicity and gender in six European post-communist poverty of people who are still alive today if they turned 14 years old before socialism. Hungary is remembered is suffering from the least poverty, Russia before 1949 was the poorest of all countries. Poles and Slovaks remember the pre 1949 times in rather similar - and in comparison with the Hungarians rather negative – ways. Bulgarians and Romanians, however, are surprisingly upbeat about their life before socialism, their responses resemble more the Hungarian ones, rather than those given by Poles or Slovaks. Memories of old Russians of the 1930s are puzzling. People who turned 14 during the peak of Stalinism and were still alive in 2000 remember the 1930s as times they were hungry and when the society they lived in was unfair and inegalitarian. Semi-feudal Hungary is remembered more favorably, not only a country, where there was less People remember socialism as an epoch in which poverty was gradually reduced (though in some countries the Stalinist epoch is recalled rather negatively and in all countries the countries, there is less improvement in Romania than elsewhere, Russia is catching up, 133more not completely. The gap, however among countries in the recollection of people narrowed. Market reform dopes not hurt the trend toward decreasing poverty and increasing equality. More markets under socialism do not generate inequality and it certainly does not create Who were poor under socialism? All what we can tell, whether people who turned 14 reported in 2000 that their family experienced poverty when they were becoming young adults. Who were they? ‘Class’ does not seem to matter, or to matter much. Father’s education does not explain their poverty, but whether they lived only with their mother when they were 14 does in a big way. So does the number of their siblings and their Roma ethnicity but there are no cross country differences. Hence our initial Losers of transition. We asked our respondents to recall how they lived in 1988, compare it with 2000 and to assess, whether the change was for better or worse. Our results are similar to the findings of other studies. In every country the epoch of transition is remembered as one of sacrifices. In year 2000 the majority of the people believed that they lived ‘worse’ or ‘much worse’ than in 1988 – between 55 to 85 percent of the respondent gave such an answer. We also asked our respondents to evaluate whether they experienced extreme poverty in 1988 and in 2000 – again we see a jump in reasonable to conclude that the proportion of population who lived under the poverty line 134increased (depending on how such a poverty line is measured) by two to five-fold during There are, however, dramatic differences in this respect across countries, or ‘regime We conducted a similar survey in 1993 and asked some of the same questions in the some of the same countries. In 1993 the percentage of the population who reported deterioration in their living standards in comparison with 1988 was almost identical in all countries. The growth of incidences of poverty during the first 5 years after the fall of communism was also similar. And let’s add to this, that in people’s experience the the level of economic development and living standards remained across those countries they shrank substantially. In 1988-1993 all societies took a dive, they took it from a relatively similar starting point and the speed of deterioration social and economic The comparison of the 1993 and 2000 data on the other hand tells us a different story. In some countries we see signs of consolidation even some modest improvement in living standards and in the extent of poverty. Thus for instance in 1993 Hungary 62% of the population reported a deterioration in living standards but in 2000 only 54% gave such an answer. In Bulgaria on the other hand the same data are: 69% in 1993 and 84 % in 2000 Levels of poverty were not only low, but were rather similar across countries in 1988 but given the different dynamism of changes during the first 12 years , and especially during the past 5-7 years by 2000 the level of poverty is strikingly different in 135cross-national comparison. In Hungary for instance 11 % of the population lived below the poverty line set by the World Bank (this is measured as $4.30 daily expenditure per ‘equivalent adult’), in Bulgaria the respective figure was 38% in year 2000. We tried to assess poverty in another, ‘softer’ way, so we asked people whether they suffered from hunger in 2000 (‘Did you go to bed hungry last week, since you could not afford to buy food?’). 7% of the Hungarians and 16% of the Bulgarians gave us such a response. pauperisation and which continued the decline and achieved very high levels of poverty by 2000? Hungary, Poland and Slovakia constitutes the first group of countries, while Bulgaria, Romania and Russia belongs to the second group. This poses some intriguing question: a/ why do we see a period of divergence after 1993 when during the previous 40-45 years all countries in the region were on a convergence trajectory? b/ is the Russia), just a temporary phenomenon and once they bottomed out they will ‘catch up’ first with Central Europe and eventually with the EU or do we see the making of two Survey data, collected at one or two time points are not sufficient to answer such questions, but our data call for the formulation of hypotheses along these lines. One might argue that the difference between Central and Eastern Europe is geographic and possible historical, cultural and what government policies were pursued during the last Hence the Central European countries were economically more developed to begin with and being closer to Western markets understandably their performance was better too. 136The East European countries faced the challenges of transition at a lower level of economic development and given their distance from the European markets it may take more time for them to ‘turn around.’ West European and US capital once it absorbed Central Europe will move further East and generate similar economic dynamism it has been generating since the mid-1990s in Central Europe. One may add to this the historical dimension: Hungary and Poland were reform communist countries, hence they started the reforms much earlier than the countries further East, that should help their economic dynamism. While this is an intriguing set of arguments it would be difficult not to see that in the two regions of post-communist Europe the transition process was rather different, both in terms of the speed of transformation and the nature of the emergent new institutions. In Central Europe the neo-liberal reforms were not only implemented faster, but also with more consistency and as a result by the end of the 1990s the institutions of the market, including the legal and political infrastructure (procedural law, rational accounting and banking, stable democracy and free media) which has an ‘elective affinity’ with a market economy were more developed. In the Eastern region of post-communist Europe, while neo-liberal reforms were also attempted they were often compromised and more survived from the paternalistic patterns of state socialism. The important role barter or self- provisioning plays for instance in the Russian economy is a good example of this. Arguably therefore the divergence may not be only temporary and may not be simply the outcome of differences in the ‘initial conditions.’ What we see is arguable the making of two different ‘regime types’ of capitalism, the first we may call neo-liberal (Hungary, 137Poland, Slovakia), given its emphasis to free markets and invisible hand and the second often legitimate themselves with populist policies, they pursue slower change, and the regimes justify this with the needs to buffer the population from the pains of the transition. If such a distinction makes sense at all our finding might speak to some of the more economic development in post-communist Europe during the past 12 years were not government policies also mattered than our data offers support to those who argued against gradualism and suggested that neo-liberal reform while it may be painful to begin There indeed is support for the idea in the data that liberal reform is likely to support economic growth and its results may to some extent ‘trickle down’ and ease poverty. There is no reason, however, for ‘market triumphalism.’ The social performance of neo- liberal regimes are only impressive in comparison with the performance of neo- patrimonial system. After all in Hungary in year 2000 still 54 percent of the respondent believed that their living conditions deteriorated in comparison with 1988 - a rather Who are those who see themselves as the losers, who report deteriorating living standards 12 years after the fall of communism? If we now bracket the cross-national differences (they are the biggest ones – people in neo-liberal regimes are four times less likely to report declining living standards than people in neo-patrimonial systems) than 138education steals the show. Households with low levels of education are almost three times more likely to be among losers than other households – there is no other factor, which comes near to this one. Much to our surprise even Roma ethnicity has a weaker effect that education. The Roma in Bulgaria, Hungary and Romania is only one-and-half Hence social determination of poverty when the respondents turned 14 (for most of them that was during some epoch of socialism) and social determination of who reported worsening conditions between 1988 and 2000 are drastically different. During the respondent, when he/she was 14, whether the respondent lived only with his/her mother and the respondent Roma ethnicity. Losers on the other hand are class defined: they are the least educated and demography, single motherhood does not matter and even Roma ethnicity is relatively unimportant. Hence there is a sharp shift from ascription to Who are the people who are below the $4.30 World Bank poverty line? Who are the poor substantially in comparison with late state socialism, where extreme poverty was relatively rare, due mainly to the practices of full employment. But did the character of poverty also change? Did post-communism create a ‘new poverty’? Many commentators suggest, it did. Earlier research available to us indicates that poverty under state socialism was mainly a life-cycle, demographic phenomenon. The single most important predictor of poverty was number of children. Large families, while children were dependent tended 139to be rather poor, but as children entered the labour market they moved out of poverty. So what could the ‘new poverty’ of post-communism mean? It is reasonable to accept that number of children will not be the major predictor in a market economy. Education and labour market performance, hence achievement rather than ascription will tell us who falls below the poverty line. Thus poverty will be a structural phenomenon, which will not necessarily go away with changes in life cycle. One carries inadequate education and resulting poor labour market performance with it for all his/her life. Our data offers qualified support to the hypothesis that the determination of poverty may The most powerful variable to predict who falls below the World Bank poverty line is again the regime type. People who live in neo-liberal regimes are ten times less likely to be below the poverty line (when in the measurement of the poverty line the value of differences in poverty rates are generally greater than intra-country differences. This is almost true for Roma ethnicity. For instance in Hungary 7% of the non-Roma and 21% of Roma households reported extreme poverty, in Bulgaria the same figures are 16% for non-Roma and 66% for Roma! Hence Hungarian Roma does not live much worse, than education is the most important predictor of poverty. Households with low level of education are 4-5 times more likely to be below the World Bank poverty line than other households. Labour market performance in itself is not that important. Those households, 140where the head of the household is unemployed or is out of the labour force are more likely to be poor than other households, but this is no match to the education effect, probably because education itself explains poor labour market performance as well. As expected by our theory number of children has a relatively modest effect and once one controls for education the explanatory power of Roma ethnicity is greatly reduced, though even after one controls for education Roma will be twice as likely to be poor than non-Roma. Single mother households and single women are also about twice as likely to be under poverty line than other households, hence the effect of gender is about as strong Thus if it is true that under socialism the number of children was the major predictor who fell below poverty line than post-communism did produced a ‘new poverty’, where number of children is secondary and low level of education predicts who will be poor in post-communist capitalism. Nevertheless, this is far from a full swing from ascription to achievement. While education indeed is the strongest factor poverty is also racialized and feminized in post-communist Europe. After one controlled for education single mother households (and their proportion is on the rise as well) and Roma are still about twice more likely to be below the World Bank poverty line. I would like to draw attention to the poverty of single mother households. While Roma poverty has been studies a lot the poverty of single mother households did not receive much attention. I would like to One of our initial hypothesis was that feminization and racialization of poverty occurs during market transition and it is likely to be more pronounced in neo-liberal rather than 141in neo-patrimonial regimes. Our data do not support this assumption. It appears that poverty was already feminized and racialized during socialism, hance neither signle mothers, nor Roma were not among the biggest losers of transition. They were sufficiently poor during socialism, hence for them to be impoverished faster than a rapidly impoverishing society was not conceivable. It is true, nevertheless, that despite the increasing importance of low level of education and inadequate labour market systems is also feminized and racialized, though the gap between single mother households and Roma on the one hand and the rest of the society on the other hand may As we pointed out the strength of Roma ethnicity is greatly reduced as we control for education, but it does not disappear. Both components of this statement are important. vehicle to reduce Roma poverty. On the other hand, it is also clear, that education in itself will not do the trick – once educational inequalities were eliminated substantial opportunities. and Eastern Europe – given the strength of extended Roma families this is not the case. Roma unwed mothers are much more likely to live with the father of their children and if 142they are abandoned by their partners they are likely to live with other kin, hence less How do people in poverty cope? To what extent can they rely on welfare institutions and to what extent do they survive by self-provisioning? In our study we looked both at ‘coping.’ Surprisingly the structure of both of these ‘fields’ rather similar across societies. The available to another 10 or so percent. The welfare system is rather uniform across these also similar. But roughly about half of the households relies on self provisioning when it those who end up in poverty. Ethnicity, however, cross-cuts these trends. Roma is much more likely to receive transfer payments than non-Roma. Almost twice as many Roma family receive child allowance than non-Roma households and they receive 2-3 time more often transfers in forms of social assistance and unemployment benefits than non- Roma families. In self-provisioning the opposite is true: Roma is half as likely to be able persistence of this poverty varies a great deal across regime types. Neo-liberal regimes show lower levels of poverty than neo-patrimonial systems. While during the past 5 years in the former ones there was some moderation in poverty, in neo-patrimonial countries 143the extent poverty increased. The shift to market economy happens with increasing returns to education, therefore those with low education tend to perform poorly on the job market and they are the most likely to end up in poverty. But the most important finding capitalism. The effect of class (measured with education) is complemented by the effects single motherhood, but both Roma and single mothers - after we controlled for education – are still about twice as likely to be poor than other groups. With market transition criteria based on achievement may have gained grounds but ethnicity and gender 144
https://www.scribd.com/document/44966113/Ivan
CC-MAIN-2019-39
en
refinedweb
.NET offers a variety of collections such as ArrayList, Hashtable, queues, Dictionaries. Objective of the Module Introduction:obj.Add("item1");obj.Add("2");obj.Add("Delhi");Each new item in the ArrayList is added to the end of the list, so it has the largest index number. If you wanted to add an item in the middle of the list, then you can use "Insert()" with a numeric argument as follows:Obj.Insert(2,"item2");You can also remove members of the ArrayList using either Remove() or RemoveAt() methods as in the following:Obj.Remove("item1");Obj.RemoveAt(3);Benefits of ArrayLists Limitations of ArrayListsThe flexibility of ArrayList comes at a cost of performance. Since memory allocation is a very expensive business, the fixed number of elements of the simple array makes it much faster to work with.Note: ArrayLists are slower and more resource-intensive than a conventional array.Simple ArrayList ExampleThe following example shows an array list with an #ff0000 size. Elements are added dynamically as required. We are adding elements using the "Add()" method as well as using the "Insert()" method at a specific location. Later we are displaying all the elements by iterating through the list.using System;using System.Collections; namespace CollectionApp{ class Program { static void Main(string[] args) { //Defining an ArrayList ArrayList obj = new ArrayList(); //Adding elements obj.Add("India"); obj.Add("USA"); obj.Add("Russia"); obj.Add("300"); //Adding elements to specific position obj.Insert(2, "Japan"); //Accessing elements for (int i = 0; i < obj.Count;i++ ) { Console.WriteLine("At Index["+i+"]= "+obj[i].ToString()); } Console.WriteLine("____________"); Console.WriteLine("Press any key"); Console.ReadKey(); } }} After building and running this program, the output of the program is as in the following;Figure 1.1 - Array ListHashtablesIn some respects, Hashtable objects are quite similar to an ArrayList object except that it is not required to use a numerical index. Instead, we can use a texture key that can be numeric. You can create a Hashtable object by using the same syntax as an ArrayList as in the following:Hashtable obj = new Hashtable();Once it is created, you must specify the key/value pairs. Remember that the key is like an index for the entry and the value is the data we are storing. We store each element using the "Add()" method with the following syntax;Obj["in"] = "india";Obj["en"] = "England";Obj["us"] = "USA";A Hashtable with numbers or dates for the keys are written as in the following;Obj[Convert.ToDateTime("21/12/2012")] = "The Judgment day";Note: you need to convert DateTime index values when they are used as an index for a Hashtable.To read elements, you just need to specify the key and the value is returned. The following code puts the value "india" into a variable named Country:string Country = Convert.ToString(obj["in"]);Benefits of Hashtable Limitations of Hashtable Simple Hashtable ExampleThe following Hashtable program uses a date index. First we define a Hashtable object and add some predefined dates. Thereafter we create a DateTime type variable to store user input and finally we display the output.using System;using System.Collections; namespace CollectionApp{ class Program { static void Main(string[] args) { //Defining an Hashtable Hashtable obj = new Hashtable(); //Adding elements obj[Convert.ToDateTime("02/14/2012")] = "Valentine day"; obj[Convert.ToDateTime("12/22/2012")] = "Math day"; obj[Convert.ToDateTime("08/15/1947")] = "Independence day"; //input date from user Console.WriteLine("Enter date 'Month/Date/Year' Format"); DateTime DateData = DateTime.Parse(Console.ReadLine()); //display data Console.WriteLine(obj[DateData]); Console.WriteLine("____________"); Console.WriteLine("Press any key"); Console.ReadKey(); } }}After successfully compiling the aforementioned program, the output is as in the following;Figure 1.2 - HashtableBitArrayBit values are ones and zeros, where 1 represents true and 0 is false. A BitArray collection provides an efficient means of storing and retrieving bit values. It allows us to perform bitwise operations as well as count the total number of bits. The contents of this collection are not Objects but Boolean values because a BitArray can hold a large number of Boolean values which are called bits. One of most significant features of a BitArray is resizability, that is useful if you don't know the number of bits needed in advance. You can create a BitArray object by using the same syntax as defined for other collections. BitArray provides several constructors. We are defining 7 bits in length in the BitArray in the following:BitArray obj = new BitArray(7);Note: The static members of BitArray are threaded-safe whereas instance members are not.Simple BitArray ExampleThe following program demonstrates.using System;using System.Collections; namespace CollectionApp{ class Program { static void Main(string[] args) { // BitArray definition BitArray bobj = new BitArray(16); // calaculate total bits Console.WriteLine("Total bits="+bobj.Count); //set all bits to 1 bobj.SetAll(true); //set 2nd bit to false bobj.Set(2, false); //set bit using Indexer bobj[3] = false; DislayBits(bobj); Console.WriteLine("\n___________"); Console.WriteLine("Press any key"); Console.ReadKey(); } static void DislayBits(BitArray obj) { foreach(bool bits in obj) { if (bits) Console.Write(1); // Console.Write(bit ? 1 : 0); else Console.Write(0); } } }}The displayed results of the initialized and configured bits are as in the following;Figure 1.3 - Bit Array The BitArray collection implements the properties and methods of the ICollection interface. The following table shows the properties and methods of BitArray; View All
https://www.c-sharpcorner.com/UploadFile/84c85b/using-net-collections-with-C-Sharp/
CC-MAIN-2019-39
en
refinedweb
Opened -----Original Message----- From: Maarten Coene [mailto:[email protected]] Sent: Wednesday, October 15, 2008 5:10 PM To: [email protected] Subject: Re: Chain resolver does not support namespace? I think it a documentation issue, the same problem exist for some of the other resolver attributes. Could you create a JIRA issue? Maarten ----- Original Message ---- From: "Brown, Carlton" <[email protected]> To: [email protected] Sent: Tuesday, October 14, 2008 9:47:40 PM Subject: Chain resolver does not support namespace? Hi all,? ml Thanks, Carlton ----------------------------------------- ====================================================. ====================================================
http://mail-archives.apache.org/mod_mbox/ant-ivy-user/200810.mbox/%3CEE7473F9E5EC3D4CBC1CDDD38BC1FA3208D17E58@CWA0020EX.ccsouth.cccore.com%3E
CC-MAIN-2019-39
en
refinedweb
A reader for a data format used by Omega3p, Tau3p, and several other tools used at the Standford Linear Accelerator Center (SLAC). More... #include <vtkSLACParticleReader.h> A reader for a data format used by Omega3p, Tau3p, and several other tools used at the Standford Linear Accelerator Center (SLAC). The underlying format uses netCDF to store arrays, but also imposes some conventions to store a list of particles in 3D space. This reader supports pieces, but in actuality only loads anything in piece 0. All other pieces are empty. Definition at line 52 of file vtkSLACParticleReader.h. Definition at line 55 of file vtkSLACParticleReader. Returns true if the given file can be read by this reader. This is called by the superclass. This is the method you should override. Reimplemented from vtkPolyDataAlgorithm. Convenience function that checks the dimensions of a 2D netCDF array that is supposed to be a set of tuples. It makes sure that the number of dimensions is expected and that the number of components in each tuple agree with what is expected. It then returns the number of tuples. An error is emitted and 0 is returned if the checks fail. Definition at line 71 of file vtkSLACParticleReader.h.
https://vtk.org/doc/nightly/html/classvtkSLACParticleReader.html
CC-MAIN-2019-39
en
refinedweb
CSS is what makes the web look and feel the way it does: the beautiful layouts, fluidity of responsive designs, colors that stimulate our senses, fonts that help us read text expressed in creative ways, images, UI elements, and other content displayed in a myriad of shapes and sizes. It’s responsible for the intuitive visual cues that communicate application state such as a network outage, a task completion, an invalid credit card number entry, or a game character disappearing into white smoke after dying. The web would be either completely broken or utterly boring without CSS. Given the need to build web-enabled apps that match or outdo their native counterparts in behavior and performance (thanks to SPAs and PWAs), we are now shipping more functionality and more code through the web to app users. Considering the ubiquity of the web, its very low friction (navigate through links, no installation), and its low barrier to entry (internet access on very cheap phones), we will continue to see more people come online for the first time and join millions of other existing users engage on the web apps we are building today. The less code we ship through the web, the less friction we create for our applications and our users. More code could mean more complexity, poor performance, and low maintainability. Thus, there has been a lot of focus on reducing JavaScript payload sizes, including how to split them into reasonable chunks and minify them. Only recently did the web begin to pay attention to issues emanating from poorly optimized CSS. CSS minification is an optimization best practice that can deliver a significant performance boost — even if it turns out to be mostly perceived — to web app users. Let’s see how! What is CSS minification? Minification helps to cut out unnecessary portions of our code and reduce its file size. Ultimately, code is meant to be executed by computers, but this is after or alongside its consumption by humans, who need to co-author, review, maintain, document, test, debug, and deploy it. Like other forms of code, CSS is primarily formatted for human consumption. As such, we add spacing, indentation, comments, naming conventions, and instrumentation hacks to boost our productivity and the maintainability of the CSS code — none of which the browser or target platform needs to actually run it. CSS minification allows us to strip out these extras and apply a number of optimizations so that we are shipping just what the computer needs to execute on the target device. Why minify CSS? Across the board, source code minification reduces file size and can speed up how long it takes for the browser to download and execute such code. However, what is critically important about minifying CSS is that CSS is a render blocking resource on the web. This means the user will potentially be unable to see any content on a webpage until the browser has built the CSSOM (the DOM but with CSS information), which only happens after it has downloaded and parsed all style sheets referenced by the document. Later in this article, we will explore the concept of critical CSS and best practices around it, but the point to establish here is that until CSS is ready, the user sees nothing. Unnecessarily large CSS files, due to shipping unminified or unused CSS, helps to deliver this undesirable experience to users. Minify vs. compress — any difference? Code minification and compression are often used interchangeably, maybe because they both address performance optimizations that lead to size reductions. But they are different things, and I’d like to clarify how: - Minification alters the content of code. It reduces code file size by stripping out unwanted spaces, characters, and formatting, resulting in fewer characters in the code. It may further optimize the code by safely renaming variables to use even fewer characters. - Compression does not necessarily alter the content of code — well, unless we consider binary files like images, which we are not covering in this exploration. It reduces file size by compacting the file before serving it to the browser when it is requested. These two techniques are not mutually exclusive, so they can be used together to deliver optimized code to the user. With the required background information out of the way, let’s go over how you can minify the CSS for your web project. We will be exploring three ways this can be achieved and doing so for a sample website I made, which has the following CSS in a single external main.css file: html, body { height: 100%; } body { padding: 0; margin: 0; } body .pull-right { float: right !important; } body .pull-left { float: left !important; } body header, body [data-view] { display: none; opacity: 0; transition: opacity 0.7s ease-in; } body [data-view].active { display: block; opacity: 1; } body[data-nav='playground'] header { display: block; opacity: 1; } /* Home */ ; } Standalone online tools If you are totally unfamiliar with minifying CSS and would like to approach things slowly, you can start here and only proceed to the next steps when you are more comfortable. While this approach works, it is cumbersome and unsuitable for real projects of any size, especially one with several team members. A number of free and simple online tools exist that can quickly minify CSS. They include: All three tools provide a simple user interface consisting of one or more input fields and require that you copy and paste your CSS into the input field and click a button to minify the code. The output is also presented on the UI for you to copy and paste back into your project. From the above screenshot of CSS Minifier, we can see the that the Minified Output section on the right has CSS code that has been stripped of spaces, comments, and formatting. Minify does something similar, but can also display the file size savings due to the minification process. In either of these cases, our minified CSS looks like the below: body,html{height:100%}body{padding:0;margin:0}body .pull-right{float:right!important}body .pull-left{float:left!important}body [data-view],body header{display:none;opacity:0;transition:opacity .7s ease-in}body [data-view].active{display:block;opacity:1}body[data-nav=playground] header{display:block;opacity:1}} Minifying CSS in this way expects you to be online and assumes the availability of the above websites. Not so good! Command line tools A number of command line tools can achieve the exact same thing as the above websites but can also work without internet, e.g., during a long flight. Assuming you have npm or yarn installed locally on your machine, and your project is set up as an npm package (you can just do npm init -y), go ahead and install cssnano as a dev dependency using npm install cssnano --save-dev or with yarn add install cssnano -D. Since cssnano is part of the ecosystem of tools powered by PostCSS, you should also install the postcss-cli as a dev dependency (run the above commands again, but replace cssnano with postcss-cli). Next, create a postcss.config.js file with the following content, telling PostCSS to use cssnano as a plugin: module.exports = { plugins: [ require('cssnano')({ preset: 'default', }), ], }; You can then edit your package.json file and add a script entry to minify CSS with the postcss command, like so: ... "scripts": { "minify-css": "postcss src/css/main.css > src/css/main.min.css" } ... "devDependencies": { "cssnano": "^4.1.10", "postcss-cli": "^6.1.2" } ... main.min.css will be the minified version of main.css. With the above setup, you can navigate to your project on the command line and run the following command to minify CSS: npm run minify-css (or, if you’re using yarn, yarn minify-css). Loading up and serving both CSS files from the HTML document locally (just to compare their sizes in Chrome DevTools — you can run a local server from the root of your project with http-server) shows that the minified version is about half the size of the original file. While the above examples work as a proof of concept or for very simple projects, it will quickly become cumbersome or outright unproductive to manually minify CSS like this for any project with beyond-basic complexity since it will have several CSS files, including those from UI libraries like Bootstrap, Materialize, Material Design, etc. In fact, this process requires you to save the minified version and update all style sheet references to the minified file version — manually. Chances are, you are already using a build tool like webpack, Rollup, or Parcel. These come with built-in support for code minification and bundling and might require very little or no configuration to take advantage of their workflow infrastructure. Bring your own bundler (BYOB) Given that Parcel has the least configuration of them all, let’s explore how it works. Install the Parcel bundler by running yarn add parcel-bundler -D or npm install parcel-bundler --save-dev. Next, add the following script entries to your package.json file: "dev": "parcel src/index.html", "build": "parcel build src/index.html" Your package.json file should look like this: { ... "scripts": { "dev": "parcel src/index.html", "build": "parcel build src/index.html" }, ... "devDependencies": { "parcel-bundler": "^1.12.3" } ... } The dev script allows us to run the parcel-bunder against the index.html file (our app’s entry point) in development mode, allowing us to freely make changes to all files linked to the HTML file. We’ll see changes directly in the browser without refreshing it. By default, it does this by adding a dist folder to the project, compiling our files on the fly into that folder, and serving them to the browser from there. All of this happens by running the dev script with yarn dev or npm run dev and then going to the provided URL on a browser. Like the dev script we just saw, the build script runs the Parcel bundler in production mode. This process does code transpilation (e.g., ES6 to ES5) and minification, including minifying our CSS files referenced in the target index.html file. It then automatically updates the resource links in the HTML file to the output code (transpiled, minified, and versioned copies). How sweet! This production version is put in the dist folder by default, but you can change that in the script entry within the package.json file. While the above process is specific to Parcel.js, there are similar approaches or plugins to achieve the same outcome using other bundlers like webpack and Rollup. Do take a look at the following as a starting point: - webpack - Rollup Code coverage and unused code Minifyng CSS in itself is not the goal; it is only the means to an end, which is to ship just the right amount of code the user needs for the experiences they care about. Stripping out unnecessary spaces, characters, and formatting from CSS is a step in the right direction, but like unnecessary spaces, we need to figure out what portions of the CSS code itself is not totally necessary in the application. The end goal is not really achieved if the app user has to download CSS (albeit minified CSS) containing styles for all the components of the Bootstrap library used in building the app when only a tiny subset of the Bootstrap components (and CSS) is actually used. Code coverage tools can help you identify dead code — code that is not used by the current page or the application. Such code should be stripped out during the minification process as well, and Chrome DevTools has an inbuilt inspector for detecting unused code. With DevTools open, click on the “more” menu option (three dots at extreme top right), then click on More tools, and then Coverage. Once there, click on the option to reload and start capturing coverage. Feel free to navigate through the app and do a few things to establish usage if need be. After using the app to your heart’s content — and under the watchful eyes of the Coverage tool — click on the stop instrumenting coverage and show results red button. You will be presented with a list of loaded resources and coverage metrics for that page or usage session. You can instantly see what percentage of the resource entries are used vs unused, and clicking each entry will also show what portions of the code is used (marked green) vs. unused (marked red). In our case, Chrome DevTools has detected that nowhere in my HTML was I using the .pull-right and .pull-left CSS classes, so it marked them as unused code. It also reports that 84 percent of the CSS is unused. This is not an absolute truth, as you will soon see, but it gives a clear indication for where to begin investigating areas to clean up the CSS during a minification process. Determining and removing unused CSS I must begin by saying removing unused CSS code should be carefully done and tested, or else you could end up removing CSS that was needed for a transient state of the app — for instance, CSS used to display an error message that only comes into play in the UI when such an error occurs. How about CSS for a logged-in user vs. one who isn’t logged in, or CSS that displays an overlay message that your order has shipped, which only occurs if you successfully placed an order? You can apply the following techniques to begin more safely approaching unused CSS removal to drive more savings for your eventual minified CSS code. Add just the CSS you need — no more! This technique emphasizes the leverage of code splitting and bundling. Just like we can key into code splitting by modularizing JavaScript and importing just the modules, files, or functions within a file we need for a route or component, we should be doing the same for CSS. This means instead of loading the entire CSS for the Material Design UI library (e.g., via CDN), you should import just the CSS for the BUTTON and DIALOG components needed for a particular page or view. If you are building components and adopting the CSS-in-JS approach, I guess you’d already have modularized CSS that is delivered by your bundler in chunks. Inline CSS meant for critical render — preload the rest! Following the same philosophy of eliminating unnecessary code — especially for CSS, since it has a huge impact on when the user is able to see content — one can argue that CSS meant for the orders page and the shopping cart page qualifies as unused CSS for a user who is just on the homepage and is yet to log in. We can even push this notion further to say CSS for portions below the fold of the homepage (portions of the homepage the user has to scroll down to see) can qualify as unnecessary CSS for such a user. This extra CSS could be the reason a user on 2G (most emerging markets) or one on slow 3G (the rest of the world most of the time) has to wait one or two more seconds to see anything on your web app even though you shipped minified code!. – Addy Osmani on Critical Once you have extracted and inlined the critical CSS, you can preload the remaining CSS (e.g., for the other routes of the app) with link-preload. Critical (by Addy Osmani) is a tool you can experiment with to extract and inline critical CSS. You can also just place such critical-path CSS in a specific file and inline it into the app’s entry point HTML — that is, if you don’t fancy directly authoring the CSS within STYLE tags in the HTML document. Remove unused CSS Like cssnano, which plugs into PostCSS to minify CSS code, Purgecss can be used to remove dead CSS code. You can run it as a standalone npm module or add it as a plugin to your bundler. To try it out in our sample project, we will install it with: npm install @fullhuman/postcss-purgecss --save-dev If using yarn, we will do : yarn add @fullhuman/postcss-purgecss -D Just like we did for cssnano, add a plugin entry for Purgecss after the one for cssnano in our earlier postcss.config.js file, such that the config file looks like the following: module.exports = { plugins: [ require('cssnano')({ preset: 'default', }), require('@fullhuman/postcss-purgecss')({ content: ['./**/*.html'] }), ], }; Building our project for production and inspecting its CSS coverage with Chrome DevTools reveals that our purged and minified CSS is now 352B (over 55 percent less CSS code) from the earlier version that was only minified. Inspecting the new output file, we can see that the .pull-left and .pull-right styles were removed since nowhere in the HTML are we using them as class names at build time. Again, you want to tread carefully with deleting CSS that these tools flag as unused. Only do so after further investigation shows that they are truly unnecessary. Design CSS selectors carefully In our sample project, we might have intended to use the .pull-right and pull-left classes to style a transient state in our app — to display a conditional error message to the extreme left or right hand side of the screen. As we just saw, Purgecss helped our CSS minifier remove these styles since it detected they were unused. Perhaps there could be a way to deliberately design our selectors to survive preemptive CSS dead code removal and preserve styling for when they’d be needed in a future transient app state. It turns out that you can do so with CSS attribute selectors. CSS rules for an error message element that is hidden by default and then visible at some point can be created like this: body [msg-type] { width: 350px; height: 250px; padding: 1em; position: absolute; left: -999px; top: -999px; opacity: 0; transition: opacity .5s ease-in } body [msg-type=error] { top: calc(50% - 125px); left: calc(50% - 150px); opacity: 1 } While we don’t currently have any DOM elements matching these selectors, and knowing they will be created on demand by the app in the future, the minify process still preserves these CSS rules even though they are marked as unused — which is not entirely true. CSS attribute selectors help us wave a magic wand to signal the preservation of rules for styling our error message elements that are not available in the DOM at build time. This design construct might not work for all CSS minifiers, so experiment and see if this works in your build process setup. Recap and conclusion We are building more complex web apps today, and this often means shipping more code to our end users. Code minification helps us lighten the size of code delivered to app users. Just like we’ve done for JavaScript, we need to treat CSS as a first-class citizen with the right to participate in code optimizations for the benefit of the user. Minifying CSS is the least we can do. We can take it further, too, by eliminating dead CSS from our projects. Realizing that CSS has a huge impact on when the user sees any content of our app helps us prioritize optimizing its delivery. Finally, adopting a build process or making sure your existing build process is optimizing CSS code is as trivial as setting up cssnano with Parcel or using a few plugins and configuration for webpack or Rollup..
http://blog.logrocket.com/the-complete-best-practices-for-minifying-css/
CC-MAIN-2019-39
en
refinedweb
Password validation with certain conditions Hi Friends, I needed to validate password for certain conditions. As per the requirements, any valid must password must pass the following conditions: - Minimum of 7 characters - Must have numbers and letters - Must have at least a one special characters- “!,@,#,$,%,&,*,(,),+” I wrote the following method to meet the requirements: import java.util.regex.* boolean validatePassword(String password) { def pattern = /^.*(?=.{7,})(?=.*\d)(?=.*[a-zA-Z])(?=.*[!@#$%*&+()]).*$/ def matcher = password =~ pattern return matcher.getCount() ? true : false } I couldn’t make the single regular expression to make it work, so used two. If somebody knows better way of doing it the please share. Thanks to Imran for helping me in writing the regular expression. Hope this helped! ~~Amit Jain~~ [email protected]
http://www.tothenew.com/blog/password-validation-with-certain-conditions/
CC-MAIN-2019-39
en
refinedweb
#include "lib/cc/torint.h" Go to the source code of this file. Headers for crypto_hkdf.h. Definition in file crypto_hkdf.
https://people.torproject.org/~nickm/tor-auto/doxygen/crypto__hkdf_8h.html
CC-MAIN-2019-39
en
refinedweb
Composite Component — DC Motor This example shows how to implement a DC motor model by means of a composite component. Composite Component Using import Statements This example shows how you can use import statements to implement a DC motor model by means of a composite component. Component Variants — Series RLC Branch This example shows how to implement variants within a component file by using conditional sections. as hidden, and therefore their variables do not appear in the Variables tab of the top-level block dialog box. Specifying Component Connections The structure section of a Simscape™ file is executed once during compilation. Importing Domain and Component Classes An import mechanism provides a convenient means to accessing classes defined in different scopes, or namespaces. Defining Component Variants Use conditional sections to define variants within component file. Converting Subsystems into Composite Components You can generate a composite component from a subsystem consisting entirely of Simscape blocks.
https://de.mathworks.com/help/physmod/simscape/composite-components.html
CC-MAIN-2019-39
en
refinedweb
1 - Introduction This is part one in a multi-part article series on building custom administrative user interfaces using the Dynamicweb UI controls such as the RibbonBar. As you may have seen in the article series on building custom modules, pages you create to manage your custom modules in the administrative section of Dynamicweb are 100% ASP.NET. This means you can use whatever technique you prefer to create these pages. To build up the user interface you can use standard ASP.NET controls such as the Button, the GridView and the ListView. Alternatively, you can use third-party controls such as those from DevExpress or Telerik. However, to create modules that look and feel like Dynamicweb modules, you can also make use of the many built-in controls that ship with Dynamicweb. This helps your end-users familiarize themselves much quicker with your modules as they already know how to interact with many of the UI controls from working with the standard Dynamicweb modules. Note: if you haven’t read the series on building custom modules, you’re encouraged to do that now. The sample application used in this new series builds on top of the sample application that I built and explained in detail in the previous series. In this introductory article, I’ll give you a brief overview of many of the controls that you have at your disposal. In later parts in this series I’ll show you how to use these controls in your own code. You can find a complete list of the available controls and their API documentation on the Engage web site at. The RibbonBar control The RibbonBar is probably the most familiar control. It looks and behaves just as the Ribbon interface used in Microsoft Office 2007 and Office 2010 applications such as Word, Excel and Outlook. Figure 1 shows the RibbonBar in Dynamicweb when editing a page. Figure 1 (Click to enlarge) The control itself can contain a number of other controls, including: - RibbonBarTab – Used to create different tabs, such as the Content and Tools tabs in Figure 1. - RibbonBarGroup – Used to created different groups on a tab, such as the Insert and Content groups in Figure 1. - Within the RibbonBarGroup you can use a number of other controls such as: - RibbonBarButton - RibbonBarCheckbox - RibbonBarPanel - RibbonBarRadioButton - RibbonBarScrollable Many of the controls within the RibbonBarGroup expose client and server side events that you can handle. Part 2 through 4 of this series will show you how to use the RibbonBar and other controls to create good-looking and intuitive user interfaces for your Dynamicweb custom modules. The List control The List control enables you to present content in a list. Examples in the Dynamicweb UI include the list of paragraphs when editing a page, the products in the Product Catalog module and the list of orders. The control supports features such as sorting, paging, selection and filtering. Figure 2 shows an example of the List control displaying the articles in my custom DvkArticles module: Figure 2 (Click to enlarge) In part 5 of this series you see how to build this list to display the articles in the system. The Dialog control The Dialog control enables you to create a floating and draggable pop-up window. The control can contain other markup such as HTML and ASP.NET server controls, enabling you to completely customize its looks and content. The control has properties to change the appearance of various buttons (OK, Close, Cancel etc.) and their client side functions that get executed when you click them. In Figure 3 you see an example of a Dialog that in turn contains an Editor control. Figure 3 (Click to enlarge) In part 3 in this series you see how to use the Dialog control. The InfoBar control The InfoBar control is used to display short messages to the user. It typically appears at the top of the page so it draws the user’s attention. Out of the box it supports three types of messages (Error, Information and Warning), each represented by an icon. You can also assign a custom image. Figure 4 shows the InfoBar using the three default built-in message types and a custom one, featuring the logo of De Vier Koeden: Figure 4 I’ll show you how to use the InfoBar control in a reusable way in part 5 of this series. The GroupBox control The GroupBox control renders as HTML <fieldset /> and <legend /> elements and enables you to logically group controls together. Figure 5 shows the control in action. The text IE 8 Compatibility is set using the control’s Title property. The control’s content can be a combination of regular HTML and ASP.NET server controls. Figure 5 You can see this control at work in the Article module’s _Edit page and in the AddEditArticle.aspx page of the final sample application. The TabHeader control The TabHeader renders a series of tabs and is typically used to divide complex pages or forms in multiple areas, each accessible by its own tab. Figure 6 shows an example of the TabHeader control in action. Figure 6 You can determine the selected tab programmatically in a few different ways, including sending its (one based) index through a Query String variable called Tab. Redirecting to SomePage.aspx?Tab=2 would make the Categories tab the selected item. The Tree control The Tree control (shown in Figure 7) enables you to render a tree-like structure. The Tree is pretty versatile and supports features such as drag and drop and AJAX-enabled load-on demand features. Figure 7 The Tree control is not used in the sample application, but you can find information on how to use it on the Engage web site. The EditableGrid control The EditableGrid is similar to a standard ASP.NET GridView, but enables you to edit multiple items at once. Figure 8 shows the EditableGrid as it’s used in the SDK documentation samples for the UI controls. Figure 8 (Click to enlarge) Just as the Tree control, the EditableGrid is not used in the sample application, but you can find information on how to use it on the Engage web site. The Toolbar control When a RibbonBar is overkill, but you still need to present a few logically grouped buttons, the Toolbar control is the one to use. It enables you to define buttons with standard or custom images. Additionally, you can create an expandable button which in turn can show a ContextMenu that contains more menu items. In Figure 9 you see the toolbar. The Settings button is expandable and shows the context menu that’s associated with it when you click the down arrow next to the button. Figure 9 The Toolbar is not used in the sample application. However, it’s pretty easy to use so if you need it now, you’ll be able to figure out how to use it. The ContextMenu control The ContextMenu control can be used to display a context menu at various locations in your page. A common place for this menu is as a right-click menu for items in the List control. Additionally, it can be used for the expandable button on the Toolbar, as shown in Figure 10: Figure 10 The ContextMenu is used and demonstrated in part 5 of this series to enable a context menu in the list of articles. The RichSelect and RichSelectItem controls The RichSelect control can be used to create drop-down lists with rich content. Examples of the control in Dynamicweb can be found in the Paragraph Template and Layout selection controls. Items can be added programmatically only, using the RichSelectItem class. Figure 11 shows an example of a control with three items, two of which contain a combination of an HTML table, an image and some formatted text. Figure 11 You can find out more about the RichSelect control on the Engage web site. The Editor, FileArchive, FileManager and LinkManager controls You’ve seen these controls used in the article series on building custom modules. Their usage is pretty straight-forward as they don’t require you to set a lot of properties or handle events. I won’t dig any deeper into these controls in this series, but use them where they make sense. Check out the API documentation on the Engage website for an overview of their members and usage: The ControlResources control This control is not visible in your pages, but plays a very important role in most of your administrative pages. You need to include this control in most of your pages (or in a Master Page) in order for the other controls to work. The ControlResources control adds references to the many JavaScript and CSS files that many of the controls rely on to the head section of your page. If you find that the controls look or behave funky, chances are you forgot to include this control in your Admin pages. To use the control, add it to the <head /> section of an ASPX Page or Master Page like this: <head runat="server"> … <dw:ControlResources … </head> The ControlResources control has the following properties that influence its behavior: Depending on the controls you have in your page, and the properties you set on the ControlResources control, you end up with references in the <head /> of your page similar to the following: <head> <!-- Controls resources start --> <link rel="stylesheet" type="text/css" href="/Admin/Images/Ribbon/UI/Tree/Tree.css" /> <link rel="stylesheet" type="text/css" href="/Admin/Images/Ribbon/UI/Richselect/Richselect.css" /> <link rel="stylesheet" type="text/css" href="/Admin/Images/Ribbon/Ribbon.css" /> <script type="text/javascript" src="/Admin/Content/JsLib/dw/Ajax.js"></script> <script type="text/javascript" src="/Admin/Images/Ribbon/UI/Tree/Tree.js"></script> <script type="text/javascript" src="/Admin/Filemanager/Upload/js/EventsManager.js"></script> <script type="text/javascript" src="/Admin/Content/JsLib/prototype-1.6.1.js"></script> <script type="text/javascript" src="/Admin/Images/Ribbon/Ribbon.js"></script> <script type="text/javascript" src="/Admin/Images/Ribbon/UI/Tree/TreeDragDrop.js"></script> <script type="text/javascript" src="/Admin/Images/Ribbon/UI/Richselect/Richselect.js"></script> <script type="text/javascript" src="/Admin/Content/JsLib/scriptaculous-js-1.8.2/src/ scriptaculous.js?load=effects,dragdrop,slider"></script> <!-- Controls resources end --> </head> If you check out the API documentation for the Dynamicweb.Controls namespace, you’ll see that the namespace includes more controls than I have listed here. Some of those are used within the context of others, and thus aren’t discussed separately. Others aren’t used very often (anymore) and are thus not very relevant to this article series. Yet others may be relevant to your custom modules, but I simply haven't discussed them yet. If you want me to dig deeper into some of those controls, let me know through my Contact page. In the next three parts of this article series, I’ll show you how to use the RibbonBar and many of its child controls. The fifth part of this series will dig deeper in a number of the other controls I introduced in this article such as the List and the ContextMenu. Downloads Since this is just a general overview, there's no download for this article. Starting with the next article in the series, you can download the full source for the sample application (which is in C#) that I started in the article about custom modules.
https://devierkoeden.com/articles/building-dynamicweb-module-admin-interfaces-1-introduction
CC-MAIN-2019-39
en
refinedweb
BreizhCTF 2019 - Hallowed be thy nameCTF URL: Solves: ?? / Points: 300 / Category: crypto Challenge description We have the instructions to connect to a server and we can download its Python script. The server offers 3 actions: - “Enter plain, we give you the cipher”. It returns the ciphertext of the plaintext. - “Need a flag ?”. It returns a base64 encoded string, probably encrypted, and different each time it is called even within the same connection 🤔 - Exit Here is the server script, by @G4N4P4T1 (thank you for this challenge 👋), and of course the flag is redacted here: import sys import random import base64 import socket from threading import * FLAG = "bzhctf{REDACTED}" serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) init_seed = random.randint(0,65535) class client(Thread): def __init__(self, socket, address): Thread.__init__(self) self.sock = socket self.addr = address self.start() def get_keystream(self, r, length): r2 = random.Random() seed = r.randint(0, 65535) r2.seed(seed) mask = '' for i in range(length): mask += chr(r2.randint(0, 255)) return mask def xor(self, a, b): cipher = '' for i in range(len(a)): cipher += chr(ord(a[i]) ^ ord(b[i])) return base64.b64encode(cipher) def run(self): r = random.Random() r.seed(init_seed) self.sock.send(b'Welcome to the Cipherizator !\n1 : Enter plain, we give you the cipher\n2 : Need a flag ?\n3 : Exit') while 1: self.sock.send(b'\n>>> ') response = self.sock.recv(2).decode().strip())) elif response == "3": self.sock.close() break if __name__ == "__main__": if len(sys.argv) != 2: print("usage: %s port" % sys.argv[0]) sys.exit(1) serversocket.bind(('0.0.0.0', int(sys.argv[1]))) serversocket.listen(5) print ('server started and listening') while 1: clientsocket, address = serversocket.accept() print("new client : %s" % clientsocket) client(clientsocket, address) Best solution @Creased_ from the winning team AperiKube shared on Twitter the best solution which did not involve brute-forcing at all! Since the seed is the same for every client, I opened a first connection to the service, sent nullbytes to get the mask, then I used another connection to get the xored flag. A simple xor(mask, mask) then gives you the flag :) Here is his very effective script: Challenge resolution Script analysis The script accepts multiple clients in parallel through threads. When it starts, it generates a first seed with: init_seed = random.randint(0,65535) 65’536 possible values: that is not a very random nor robust seed. This seed is global and it is used for all clients. When a client connects, a new thread is started. A first Random object is created using the global seed: r = random.Random() r.seed(init_seed) When the client uses the 1. or 2. action, the self.get_keystream() function is called:)) The get_keystream() function receives the first Random object seeded with the global seed, and it does this: def get_keystream(self, r, length): r2 = random.Random() seed = r.randint(0, 65535) r2.seed(seed) mask = '' for i in range(length): mask += chr(r2.randint(0, 255)) return mask A second seed is created, based on the output of the first Random object, and it is used to seed a second Random object. Like for the first one, only 65’536 values are possible which is weak. The first Random object is used to seed the second… From this second object, a mask is generated with the length passed as argument. This length corresponds to the length of the data to encrypt. Indeed, the mask is combined with the input using the xor() function. The mask can then be considered as an encryption key. Here, we recognize in xor() a common function which applies the XOR operator on both inputs, character by character, and returns it base64-encoded: def xor(self, a, b): cipher = '' for i in range(len(a)): cipher += chr(ord(a[i]) ^ ord(b[i])) return base64.b64encode(cipher) Weakness Random is a PRNG (Pseudorandom number generator) and so it has an interesting weakness: it is actually deterministic! Given a seed, it will always generate the same output sequence 😉 Combined with the fact that we can obtain the ciphertext of a plaintext of our choice, that the seeds are very small, and that the second Random object is not shared with other players (so we will not be perturbated by others): we have a very good chance to brute-force the seeds off-line and therefore the encryption key that allows to decrypt the flag. Solution Our solution is to first send a static plaintext string to the server, then ask for the encrypted flag in the same connection (and nothing between). This way we know that the seed for the first Random object is the same for both requests and that this object is used only twice. # nc ctf.bzh 11000 Welcome to the Cipherizator ! 1 : Enter plain, we give you the cipher 2 : Need a flag ? 3 : Exit >>> 1 Enter plain : test Your secret : n3xljA== >>> 2 Your secret : UlXaKcVLuVuORY3lY3/0myvHh0FjDsoumjjOCempaoVQDRmtHSnJw1WOXb5P9I+I Our script will brute-force the first seed by trying to encrypt our chosen-plaintext and comparing with the obtained ciphertext, with the first Random object re-created everytime to start fresh. When it matches, the state of the first Random object is the good one, and the same as it was on the server, and we can then ask it to generate a second randint() for us. It will be the same as the one generated on the server to encrypt the flag we requested second, so we can generate a mask with it and decrypt the flag ciphertext we got. There is no need to brute-force the second seed too (as we initially thought), as its value is a direct consequence of the state of the first Random object. This is the Python script: import random import base64 import sys const = "test" const_out = "n3xljA==") # brute-force seed1 for seed1 in range(0, 65535): print "try seed1=%d" % seed1 rand1 = random.Random() rand1.seed(seed1) rand2 = random.Random() rand2.seed(rand1.randint(0, 65535)) # first call to first Random object # generate the mask mask = '' for i in range(len(const)): mask += chr(rand2.randint(0, 255)) # apply the mask to encrypt ret = xor(mask, const) if ret == const_out: # we found the seed1! print "GOT IT" print "seed1=%d" % seed1 rand2 = random.Random() seed2 = rand1.randint(0, 65535) # second call to first Random object print "seed2=%d" % seed2 rand2.seed(seed2) mask = '' for i in range(len(flag)): mask += chr(rand2.randint(0, 255)) print base64.b64decode(xor(mask, flag)) sys.exit(0) And its output: try seed1=0 GOT IT seed1=0 seed2=49673 bzhctf{The_sands_of_time_for_me_are_running_low} As we are lucky, or the challenge creator is nice, the first seed is ‘0’ so it does not even have to loop and we instantly get the flag 😁 Fun fact: the challenge title “Hallowed be thy name”, is an Iron Maiden song, and the flag is a verse of the lyrics… “Cheating” solution The solution above is, we believe, the intended solution. However, when writing this, we found that actually we could brute-force only the second seed. Yes it is generated from a first random generator, but as it has only 65’536 possible values, so we can brute-force it on its own 😉 Our trick here is also to know that the flag certainly contains “breizhctf” or “bzhctf”. Without this, and with a truly random flag (otherwise we could search for ASCII-only candidates), this would not work. Python script: import random import base64 import sys) for seed2 in range(0, 65535): rand2 = random.Random() rand2.seed(seed2) # generate the mask mask = '' for i in range(len(flag)): mask += chr(rand2.randint(0, 255)) decode = base64.b64decode(xor(mask, flag)) # if it looks like a flag, it should be a flag ;) if "bzhctf" in decode or "breizhctf" in decode: print decode sys.exit(0) It finds the flag in just a few seconds. Author: Clément Notin | @cnotin Post date: 2019-04-14
https://tipi-hack.github.io/2019/04/14/breizhctf-19-hallowed-be-thy-name.html
CC-MAIN-2019-39
en
refinedweb
In most situations, asynchronous I/O is not required because its effects can be achieved with the use of threads, with each thread execution of synchronous I/O. However, in a few situations, threads cannot achieve what asynchronous I/O can. The most straightforward example is writing to a tape drive to make the tape drive stream. Streaming prevents the tape drive from stopping while the drive is being written to. The tape moves forward at high speed while supplying a constant stream of data that is written to tape. To support streaming, the tape driver in the kernel should use threads. The tape driver in the kernel must issue a queued write request when the tape driver responds to an interrupt. The interrupt indicates that the previous tape-write operation has completed. Threads cannot guarantee that asynchronous writes are ordered because the order in which threads execute is indeterminate. You cannot, for example, specify the order of a write to a tape. #include <aio.h> int aio_read(struct aiocb *aiocbp); int aio_write(struct aiocb *aiocbp); int aio_error(const struct aiocb *aiocbp); ssize_t aio_return(struct aiocb *aiocbp); int aio_suspend(struct aiocb *list[], int nent, const struct timespec *timeout); int aio_waitn(struct aiocb *list[], uint_t nent, uint_t *nwait, const struct timespec *timeout); int aio_cancel(int fildes, struct aiocb *aiocbp); aio_read(3RT) and aio_write(3RT) are similar in concept to pread(2) and pwrite(2), except that the parameters of the I/O operation are stored in an asynchronous I/O control block (aiocbp) that is passed to aio_read() or aio_write(): aiocbp->aio_fildes; /* file descriptor */ aiocbp->aio_buf; /* buffer */ aiocbp->aio_nbytes; /* I/O request size */ aiocbp->aio_offset; /* file offset */ In addition, if desired, an asynchronous notification type (most commonly a queued signal) can be specified in the 'struct sigevent' member: aiocbp->aio_sigevent; /* notification type */ A call to aio_read() or aio_write() results in the initiation or queueing of an I/O operation. The call returns without blocking. The aiocbp value may be used as an argument to aio_error(3RT) and aio_return(3RT) in order to determine the error status and return status of the asynchronous operation while it is proceeding. You.
http://docs.oracle.com/cd/E19253-01/816-5137/gen-29/index.html
CC-MAIN-2017-30
en
refinedweb
I am running a remote command with: ssh = paramiko.SSHClient() ssh.connect(host) stdin, stdout, stderr = ssh.exec_command(cmd) # Wait for the command to finish while not stdout.channel.exit_status_ready(): if stdout.channel.recv_ready(): stdoutLines = stdout.readlines() readlines() # Wait until the data is available while not stdout.channel.recv_ready(): pass stdoutLines = stdout.readlines() recv_ready() That is, do I really first have to check the exit status before waiting for recv_ready() to say the data is ready? No. It is perfectly fine to receive data (e.g. stdout/stderr) from the remote process even though it did not yet finish. Also some sshd implementations do not even provide the exit status of the remote proc in which case you'll run into problems, see paramiko doc: exit_status_ready. The problem with waiting for exit_status_code for short living remote commands is that your local thread may receive the exit_code faster than you check your loop condition. In this case you won't ever enter the loop and readlines() will never be called. Here's an example: # spawns new thread to communicate with remote # executes whoami which exits pretty fast stdin, stdout, stderr = ssh.exec_command("whoami") time.sleep(5) # main thread waits 5 seconds # command already finished, exit code already received # and set by the exec_command thread. # therefore the loop condition is not met # as exit_status_ready() already returns True # (remember, remote command already exited and was handled by a different thread) while not stdout.channel.exit_status_ready(): if stdout.channel.recv_ready(): stdoutLines = stdout.readlines() How would I know if there is supposed to be data on stdout before waiting in an infinite loop for stdout.channel.recv_ready() to become True (which it does not if there is not supposed to be any stdout output)? channel.recv_ready() just indicates that there is unread data in the buffer. def recv_ready(self): """ Returns true if data is buffered and ready to be read from this channel. A ``False`` result does not mean that the channel has closed; it means you may need to wait before more data arrives. This means that potentially due to networking (delayed packets, retransmissions, ...) or just your remote process not writing to stdout/stderr on a regular basis may result in recv_ready being False. Therefore, having recv_ready() as the loop condition may result in your code returning prematurely as it is perfectly fine for it to sometimes yield True (when the remote process wrote to stdout and your local channel thread received that output) and sometimes yield False (e.g. your remote proc is sleeping and not writing to stdout) within an iteration. Besides that, people occasionally experience paramiko hangs that might be related to having stdout/stderr buffers filling up (pot. related to problems with Popen and hanging procs when you never read from stdout/stderr and the internal buffers fill up). The code below implements a chunked solution to read from stdout/stderr emptying the buffers while the channel is open. def myexec(ssh, cmd, timeout, want_exitcode=False): # one channel per command stdin, stdout, stderr = ssh.exec_command(cmd) # get the shared channel for stdout/stderr/stdin channel = stdout.channel # we do not need stdin. stdin.close() # indicate that we're not going to write to that channel anymore channel.shutdown_write() # read stdout/stderr in order to prevent read block hangs stdout_chunks = [] stdout_chunks.append(stdout.channel.recv(len(stdout.channel.in_buffer))) # chunked read to prevent stalls while not channel.closed or channel.recv_ready() or channel.recv_stderr_ready(): # stop if channel was closed prematurely, and there is no data in the buffers. got_chunk = False readq, _, _ = select.select([stdout.channel], [], [], timeout) for c in readq: if c.recv_ready(): stdout_chunks.append(stdout.channel.recv(len(c.in_buffer))) got_chunk = True if c.recv_stderr_ready(): # make sure to read stderr to prevent stall stderr.channel.recv_stderr(len(c.in_stderr_buffer)) got_chunk = True ''' 1) make sure that there are at least 2 cycles with no data in the input buffers in order to not exit too early (i.e. cat on a >200k file). 2) if no data arrived in the last loop, check if we already received the exit code 3) check if input buffers are empty 4) exit the loop ''' if not got_chunk \ and stdout.channel.exit_status_ready() \ and not stderr.channel.recv_stderr_ready() \ and not stdout.channel.recv_ready(): # indicate that we're not going to read from this channel anymore stdout.channel.shutdown_read() # close the channel stdout.channel.close() break # exit as remote side is finished and our bufferes are empty # close all the pseudofiles stdout.close() stderr.close() if want_exitcode: # exit code is always ready at this point return (''.join(stdout_chunks), stdout.channel.recv_exit_status()) return ''.join(stdout_chunks) The channel.closed is just the ultimate exit condition in case the channel prematurely closes. Right after a chunk was read the code checks if the exit_status was already received and no new data was buffered in the meantime. If new data arrived or no exit_status was received the code will keep on trying to read chunks. once the remote proc exited and there is no new data in the buffers we're assuming that we've read everything and begin closing the channel. Note that in case you wan to receive the exit status you should always wait until it was received otherwise paramiko might block forever. This way it is guaranteed that the buffers do not fill up and make your proc hang. exec_command only returns if the remote command exited and there is no data left in our local buffers. The code is also a bit more cpu friendly by utilizing select() instead of polling in a busy loop but might be a bit slower for short living commands. Just for reference, to safeguard against some infinite loops one can set a channel timeout that fires when no data arrives for a period of time chan.settimeout(timeout) chan.exec_command(command)
https://codedump.io/share/63gd1xUC4Zip/1/do-you-have-to-check-exitstatusready-if-you-are-going-to-check-recvready
CC-MAIN-2017-30
en
refinedweb
None 0 Points Jun 16, 2005 08:15 AM|jbowen|LINK You may be able accomplish what you want using the Request.Servervariables("REMOTE_ADDR") header. These may help: All-Star 17785 Points Aug 19, 2006 01:27 PM|vik20000in|LINK Oct 30, 2006 03:41 AM|etariq|LINK HttpContext.Current.Request.UserHostAddress; or HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"]; To get the IP address of the machine and not the proxy use the following code HttpContext.Current.Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; Get users IP address Jul 13, 2007 09:33 AM|eramseur|LINK Take a look at the log module as it shows IP information. You can just copy that functionality. Jul 13, 2007 10:18 AM|eramseur|LINK Rainbow has a log module. You can find this on the portal by going on the Admin menu and clicking on logs and then Monitoring and Logs. This will show you IP info and you can get the functionality from DesktopModules\Monitoring. Jul 13, 2007 10:29 AM|eramseur|LINK When you login to rainbow, you see the admin menu correct? Off of the admin menu is a logs tab, off the logs tab is monitoring. I dont mean admin to the server; I mean admin to your portal. None 0 Points Jan 08, 2008 01:42 AM|balamurugan nagarajan|LINK Getting user's country name using ip Address Jan 24, 2008 01:16 PM|vitta|LINK Even I have simillar kind of problem. We have intranet application and I need to cature the IPAddress of the client. I am trying to capture the remote address. I tried in different ways as below.. but the result of all is 127.0.0.1 HttpContext.Current.Request.ServerVariables( I guess firewall/NAT Box hiding internal IP addresses. is there any way to get actual IP. Thanks in advance. Feb 06, 2008 08:54 AM|zestq|LINK Hi vitta, I got the same problem and I found the solution by doing dis: string strHostName = System.Net.Dns.GetHostName(); string clientIPAddress = System.Net.Dns.GetHostAddresses(strHostName).GetValue(0).ToString(); I got this solution from C# corner. website client ip address Member 10 Points Feb 12, 2008 02:22 AM|satender88|LINK This example demonstrates how to find out the visitor's browser type, IP address, and more: <html> <body> <p> <b>You are browsing this site with:</b> <%Response.Write(Request.ServerVariables("http_user_agent"))%> </p> <p> <b>Your IP address is:</b> <%Response.Write(Request.ServerVariables("remote_addr"))%> </p> <p> <b>The DNS lookup of the IP address is:</b> <%Response.Write(Request.ServerVariables("remote_host"))%> </p> <p> <b>The method used to call the page:</b> <%Response.Write(Request.ServerVariables("request_method"))%> </p> <p> <b>The server's domain name:</b> <%Response.Write(Request.ServerVariables("server_name"))%> </p> <p> <b>The server's port:</b> <%Response.Write(Request.ServerVariables("server_port"))%> </p> <p> <b>The server's software:</b> <%Response.Write(Request.ServerVariables("server_software"))%> </p> </body> </html> ip Mar 31, 2008 04:22 PM|vitta|LINK Thanks Zestq..its working fine.. This is what I exactly want.. I really wonder why others given examples with server variables. I already said, I tried with server variables and couldnt succeed in INTRNET environment where proxy is hiding the original IP. API's under DNS class are solution for my problem. Thanks again Zestq Member 62 Points Apr 13, 2008 03:14 PM|Simon Deshaies|LINK etariq HttpContext.Current.Request.UserHostAddress;To get the IP address of the machine and not the proxy use the following code or HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"]; HttpContext.Current.Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; Thanks this works for me. HttpContext.Current.Request.UserHostAddress; So I can log every user login and their IP. I use it like this in c#. Using This Sql Connection class as MySql. MySqlConnection MySql = new MySqlConnection(); string ip = HttpContext.Current.Request.UserHostAddress; MySql.CreateConn(); MySql.Command = MySql.Connection.CreateCommand(); MySql.Command.CommandType = CommandType.Text; MySql.Command.CommandText = "INSERT INTO dbo.[log.login] (userID, IP, DateTimeStamp, TargetPage) VALUES (" + myUserID + ", '" + ip + "', '" + DateTime.Now + "', '" + Server.UrlDecode(Request.QueryString["p"]) + "')"; MySql.Command.ExecuteNonQuery(); MySql.Command.Dispose(); MySql.CloseConn(); May 04, 2008 09:13 AM|Rizwan328|LINK Thanks ZESTQ, It really works for me...... May 07, 2008 10:44 PM|Hideyoshi|LINK dear zestq, string strHostName = System.Net.Dns.GetHostName(); string clientIPAddress = System.Net.Dns.GetHostAddresses(strHostName).GetValue(0).ToString(); I got this solution from C# corner. website for the solution that you provided on February, Do u know how to solve it by using vb.NET? Hope to hear anyone of you, thank you Jun 12, 2008 07:14 AM|zestq|LINK Hi guyz, The solution I have provided gives the IP Address but If you look into it, It gives you the server IP not the client;s IP Address, while I guess we all need the client's IP Address or the visitor's IP. To get the Client's IP Address, try this: stringclientIPAddress = this.Page.Request.ServerVariables["REMOTE_ADDR"]; NOTE: when you run this it will give you "127.0.0.1", this is because, you are using from the HOST Machine i.e. the application is on ur machine and it is giving you, ur IP address but when any visitor or client hits ur page it will give you the exact IP Address of the Client. I have checked this on my LAN and its working fine,,,,,try this and if it fails do inform me or any 1 got any better solution please let me know. client ip address visitors ip address client machine ip address Jun 12, 2008 07:21 AM|zestq|LINK Hi Hideyoshi here is the VB version of both of my codes: Dim strHostName As String = System.Net.Dns.GetHostName() Dim clientIPAddress As String = System.Net.Dns.GetHostAddresses(strHostName).GetValue(0).ToString() Dim clientIPAddress As String = Me.Page.Request.ServerVariables("REMOTE_ADDR" Following site address is the converter of VB.NET code to C# and vice versa (I have also used this site 2 convert my code). VB.NET code to C# Converter Nov 10, 2008 02:41 AM|TATWORTH|LINK >I want to get the time zone of the Ip Address.how to get the timezone of the particular IP IP Address is frequently wrong even on country let alone Time Zone within a country. Whilst being in the UK, I have been told that according to my IP address one time I was in Germany and another in the US. Given the way IP address blocks are allocated to an Internet Service Provider, it is quite lilely that a national ISP in the US willl have users from a different time zone to where that block is registered. Nov 10, 2008 10:48 AM|Rizwan328|LINK GeoByte is providing good services like these....... following link may help you Member 1 Points Nov 14, 2008 09:15 AM|mick breen|LINK Good Post but I want to removed the domain and only keep the host name eg Q1111.xxxxx.xx.xx.xx I only want Q1111. My code that returns the Q1111.xxxxx.xx.xx.xx is on page load as follows:System.Net.IPHostEntry HostIP = new System.Net.IPHostEntry();HostIP = System.Net.Dns.GetHostEntry(Request.ServerVariables["Remote_Host"]); lbl_NetBiosNameDetail.Text = HostIP.HostName; Can anyone assist? Mick Nov 14, 2008 09:26 AM|TATWORTH|LINK I will write you some code for you, once I clarify what you require. You cited Q1111.xxxxx.xx.xx.xx IP addresses have four components, but you show 5. However you may like to go to and try the IP Extensions project. Member 1 Points Nov 14, 2008 09:58 AM|TATWORTH|LINK Here is the function plus some unit test code #region " Copyle #endregion namespace Demo { using System; using NUnit.Framework; /// <summary> /// IpAddressTest /// </summary> [TestFixture] public class IpAddressTest { /// <summary> /// Test GetBlock1 /// </summary> [Test] public void GetBlock1Test1() { Assert.AreEqual("123", GetBlock1("123.111.222.103")); } /// <summary> /// Test GetBlock1 /// </summary> [Test] [ExpectedException(typeof(ArgumentException))] public void GetBlock1Test2() { Assert.AreEqual("123", GetBlock1("123.111.222")); } /// <summary> /// Get Block 1 /// </summary> /// <param name="input">IP Address</param> /// <returns>Block 1</returns> public string GetBlock1(string input) { var work = (string.Empty + input).Split('.'); if (work.Length != 4) { throw new ArgumentException("input is not in form xxx.xxx.xxx.xxx"); } return work[0]; } } } Member 171 Points Jan 20, 2009 04:53 AM|svibuk|LINK i am using Dim IPAddress IPAddress = Request.ServerVariables("HTTP_X_FORWARDED_FOR") If IPAddress = "" Then IPAddress = Request.ServerVariables("REMOTE_ADDR") End If but i am getting 127.0.0.1 i suppose i am getting it due to my local host based on this ip i need to find the country of that IP using vb.net or javascript. how ca i achieve it hope to get help country ip address Jan 20, 2009 09:56 AM|TATWORTH|LINK >based on this ip i need to find the country of that IP using vb.net or javascript. how ca i achieve it Getting Country from IP address is somewhat problematical. For example whist I have been in the United Kingdom, I have been told that I am in Germany one and in America at another time. Code for Address look up Star 9953 Points Jan 28, 2009 05:14 AM|yrb.yogi|LINK Get Users IP address...??? but How??? can u expalin me???? All-Star 52658 Points Jan 28, 2009 05:32 AM|mudassarkhan|LINK yrb.yogi<div id=ctl00_ctl00_bcr_bcr_PostForm__QuoteText> Get Users IP address...??? but How??? can u expalin me????</div> Jan 28, 2009 05:40 AM|TATWORTH|LINK You can get the user's IP Address by: var useraddress = Request.UserHostAddress; For details, see: Jan 28, 2009 05:45 AM|TATWORTH|LINK Or more fully using the function I gave earlier. var useraddress = Request.UserHostAddress; var test = IPAddress.Parse(useraddress); var result = CommonData.IPAddressToBytes(test); lblOutput.Text = result[0] + "." + result[1] + "." + result[2] + "." + result[3]; This displays (not surpringly) 127.0.0.1 None 0 Points May 02, 2009 02:58 AM|sunnyhasan|LINK string clientIPAddress = System.Net.Dns.GetHostAddresses(strHostName).GetValue(0).ToString(); Is the uppoer statement returning the client ip address? That is a hexcode. But IP address should not be in that format right? string clientIPAddress = System.Net.Dns.GetHostAddresses(strHostName).GetValue(1).ToString(); I have used this GetValue(1) instead GetValue(0). Is that ok? Can you please explain that? I need to log all the ip address of the users when they visit a specific website...please help me. Thanks. - ip ip address May 02, 2009 05:39 AM|TATWORTH|LINK Try var ipAddress = Request.UserHostAddress.ToString(); or if using an earlier version of the framework string ipAddress = Request.UserHostAddress.ToString(); Nov 05, 2009 05:44 AM|Manwatkar|LINK dear sir. i have applied your code in my project string strHostName = System.Net.Dns.GetHostName(); string clientIPAddress = System.Net.Dns.GetHostAddresses(strHostName).GetValue(0).ToString(); but it working correctly on localhost but when i am trying on the server it is giving as address like this 10.49.23.4 which address is this please reply me what is the problem....my email address is [email protected] thanks in advance.... None 0 Points None 0 Points Nov 15, 2009 02:42 PM|jamalqudah|LINK simply if you run the code on a class or the global you wont be able to get the remote IP address just run this code on any code behind page and you'll get the remote user IP Request.UserHostName you can try my URL if you need the IP on the class just pass it through string hope this help Dec 19, 2009 12:23 PM|samuel24|LINK Hi Getting users Ip address code is in the below link.may be it will useful for some one. Get users IP address None 0 Points Dec 19, 2009 01:27 PM|jamalqudah|LINK just use this code ">if you need to use inside the class just pass it with the class<">ex code:</div> inside the code behind page page.aspx.vb not inside a class and it will work if you need to use inside the class just pass it with the class ex code:Imports Microsoft.VisualBasic Public Class Testing Public Shared Function TEsting(ByVal IPaddress As string) As String 'insert to database as log dim IP as string = IPaddress connect.ExecuteScaler("INSERT INTO blabla(IP_Address) " & _ "VALUES (" & IPaddress & ")") Dim result As String = "" Return result End Function End Class in code behind page call the function and pass the IP TEsting(Request.UserHostName) thats it hope this help Mar 01, 2011 02:12 AM|raji539|LINK if ur want in assemblly.. string strHostName = Dns.GetHostName(); IPHostEntry ipEntry = Dns.GetHostEntry(strHostName); IPAddress[] addr = ipEntry.AddressList; string ipAddress = addr[0].ToString(); if ur using web application HttpContext.Current.Request.UserHostAddress Participant 949 Points Mar 02, 2011 11:40 PM|chandradev1|LINK Hi check this code protected void Button1_Click(object sender, EventArgs e) { string strHostName = System.Net.Dns.GetHostName(); string clientIPAddress = System.Net.Dns.GetHostAddresses(strHostName).GetValue(0).ToString(); SqlCommand cmd = new SqlCommand(“insert into tblIpAddress(IPAddress)values(@IPAddress)”, con); cmd.Parameters.AddWithValue(“@IPAddress”, clientIPAddress); con.Open(); cmd.ExecuteNonQuery(); con.Close(); } Member 120 Points Mar 02, 2011 11:57 PM|prakash.radhwani|LINK //use the below code Session["ipAddress"] = Request.ServerVariables["REMOTE_HOST"]; Session["macAddress"] = GetMacAddress(Session["ipAddress"].ToString()); public string GetMacAddress(string IPAddress) { string strMacAddress = string.Empty ; try { string strTempMacAddress= string.Empty ; ProcessStartInfo objProcessStartInfo = new ProcessStartInfo(); Process objProcess = new Process(); objProcessStartInfo.FileName = "nbtstat"; objProcessStartInfo.RedirectStandardInput = false; objProcessStartInfo.RedirectStandardOutput = true; objProcessStartInfo.Arguments = "-A " + IPAddress; objProcessStartInfo.UseShellExecute = false; objProcess = Process.Start(objProcessStartInfo); int Counter = -1; while (Counter <= -1) { Counter = strTempMacAddress.Trim().ToLower().IndexOf("mac address", 0); if (Counter > -1) { break; } strTempMacAddress = objProcess.StandardOutput.ReadLine(); } objProcess.WaitForExit(); strMacAddress = strTempMacAddress.Trim(); } catch (Exception Ex) { //Console.WriteLine(Ex.ToString()); //Console.ReadLine(); } return strMacAddress; } 47 replies Last post Mar 02, 2011 11:57 PM by prakash.radhwani
https://forums.asp.net/t/892765.aspx
CC-MAIN-2017-30
en
refinedweb
Developing using Docker has many benefits, like ensuring your builds always build in the same way and ensuring a consistent way of installation. Although the development environment of our developers may be different, the use of Docker ensures that the build works on every machine, every time. No more libsass incompatibilities for us. With edi, as we call our solution, we provide a set of scripts and Docker images for developing EmberJS apps. Cloning and starting an existing repository could look like: # clone the project git clone [email protected]:fish-tracker cd fish-tracker # install the dependencies edi npm install edi bower install # launch the development browser (ember serve) eds Installation In order to install edi, we have to install the edi and eds commands, and ensure Docker creates files under our own namespace. Docker traditionally runs as the root user on your local machine. The side-effect is that the files which edi ember generate creates, are owned by root. With user-namespaces, it is possible to run Docker under your own user, and thus creating files under your own namespace. The installation shown here should run on modern Linux distributions. Using docker namespaces User-namespaces essentially provide an offset to the regular user identifiers. This means we can map the root’s user-id to your own user-id. This also means that when the root user in the container creates a file, the UID will be the root UID in the container, but it will be our UID outside of the container. In order to make this work we have to supply the mapping. Edit the /etc/subuid and /etc/subgid files with the following commands: MY_USER_UID=`grep my-user /etc/passwd | awk -F':' '{ print $3 }'` MY_USER_GUID=`grep my-user /etc/passwd | awk -F':' '{ print $4 }'` echo "ns1:$MY_USER_UID:65536"| sudo tee -a /etc/subuid echo "ns1:$MY_USER_GUID:65536"| sudo tee -a /etc/subgid Next, we have to tell the Docker daemon to use this new namespace. The configuration script may be in various locations, depending on your Linux distribution. systemctl provides a utility to find and edit the right file. systemctl edit docker.service Once the editor opens, ensure the following contents are present: ExecStart= ExecStart=/usr/bin/dockerd --userns-remap=ns1 Note the ExecStart= line, it is necessary to indicate that we intend to override the original ExecStart command. Without it, our new command will be ignored. Installing the scripts The scripts are contained in the madnificent/docker-ember repository and we will add them to our PATH. Open a terminal in a folder where you have access rights to, and in which you’d like to permanently store the reference to the ember-docker. Then execute the following commands: git clone echo "export PATH=\$PATH:`pwd`/docker-ember/bin" >> ~/.bashrc source ~/.bashrc Our first new app using Docker All commands, except for ember serve which deserves special attention, can be ran through the edi command. Let’s create a new project: edi ember new my-edi-app This will generate a new application. Once all dependencies have been installed, we can move into the application folder, and start the ember server: cd my-edi-app eds You will see the application running. Moving your browser to, you will see the ember-welcome-page. Yay! It works \o/ Let’s remove the welcome-page. As instructed, we’ll remove the {{welcome-page}} component from the build. Open your favorite editor, and update the contents of application.hbs to contain the following instead. <h1>My first edi app</h1> {{outlet}} We can generate new routes with the ember application still running. Open a new terminal and open the my-edi-app folder. Then generate the route: edi ember generate route hello-link edit the app/templates/hello-link.hbs template so it contains the following <p>Hello! This is some nested content from my first page. {{link-to 'Go back' 'index'}}</p> and add a link to app/templates/application.hbs <h1>My first edi app</h1> <p>{{link-to 'Hello' 'hello-link'}}</p> {{outlet}} Boom, we have generated files which we can edit. Lastly, we’ll install the ember-cli-sass addon as an example. edi ember install ember-cli-sass Now restart eds to ensure the ember server picks up the newly installed addon, remove the old app/styles/app.css and add a background-color to app/styles/app.scss body { background-color: green; } Caveats There are some caveats with the use of edi. First is configuring the ember version. You can switch versions easily by editing a single file. Next is configuring the backend to run against. Configuring the ember version You may want to have more than one ember version installed when generating new applications. When breaking changes occur to ember-cli, or if you want to be sure to generate older applications. Perhaps you simple don’t want to download all the versions of ember-cli. Whatever your motives are, you can set the version in ~/.config/edi/settings If the file or folder does not exist, create it. You can write the following content to the file to change the version: VERSION="2.11.0" Supported versions are the tags available at. Linking to a backend Each Docker Container is a mini virtual machine. In this virtual machine, the name localhost indicates that little virtual machine. We have defined the name host to link to the machine serving the application. In case you’d setup a mu.semte.ch architecture backend, published on port 80 of your localhost, you could connect your development frontend to it as such: eds --proxy It is a common oversight to try to connect to localhost from the container instead. Way ahead Our EmberJS development has become a lot more consistent and maintainable with the use of edi. We have used it extensively, and have been able to reproduce builds easily in the past year. Over time we have extended edi with support for developing addons. Stay tuned to read more about this.
https://mu.semte.ch/2017/03/09/developing-emberjs-with-docker/
CC-MAIN-2017-30
en
refinedweb
0 I am a student and am having trouble getting my for statement to work with an array. This is my first time here. Here is what have so far. Any tips would be greatly appreciated. using namespace std; void main() {#include <iostream> // Emp is number of employees needed for each hour in the array. // Each value in the array represents the number of customers present at each hour // The equation for number of employees is 3 + 1 added for each additional 20 customers // Counter variable is Hour int Emp=0; int Cust [24] = {96,112,45,12,0,24,32,36,196,20,25,200,187,96,34,67,107,224,107,98,19,30,45,60}; // 24 values for each hour of the day for (int Hour=1; Hour <= 24; Hour++) { Emp = Cust/20 + 3; if (Cust%20=0) //Modular division tells if Customers is evenly divided { Emp = Emp-1;// Any value disvisible by 20- take 1 employee away because three employees can handle the first 20 } //cout // need to output the Hour, number of Customers and number of employees needed } return; }
https://www.daniweb.com/programming/software-development/threads/195598/array-for-statment-woes-help
CC-MAIN-2017-30
en
refinedweb
- I'd like this package to work well with the existing HTML package, what are some things to keep in mind when subclassing (if that's the proper term) a package in Tcl? - I'd like to generate the HTML through an XML interface, I'd prefer not to resort to puts except in cases where it's practical to do so. What XML interface(s) are useful (I think Tdom is one)? - Should work equally well in a CGI context and with Tcl-based web servers like Tclhttpd. - Needs to take the drudgery out of writing accessible HTML, which is quite tedious due to the overhead imposed by the additional tagging. - A perl version is also required. - Should comply with existing accessibility recommendations, such as those at the W3C. <form action="#" method="get"> <fieldset> <legend>Example Form 1</legend> <p> <label for="first-name" accesskey="n">First <span class="accesskey">N</span>ame:</label> <input type="text" name="first-name" value="First Name" id="first-name" tabindex="1" /> </p> <p> <label for="last-name">Last Name:</label> <input type="text" name="last-name" value="Last Name" id="last-name" tabindex="2" /> </p> <p> <label for="fave-color">Favorite Color:</label> <select name="fave-color" id="fave-color" tabindex="3"> <option value="Select an option" selected="selected">Select an option</option> <option value="Red">Red</option> <option value="Yellow">Yellow</option> <option value="Blue">Blue</option> <option value="Green">Green</option> </select> </p> <p> Your Pet: <input type="radio" name="pet" value="Dog" tabindex="4" id="Dog" /> <label for="Dog">Dog</label> <input type="radio" name="pet" value="Non-dog" tabindex="5" id="Non" checked="checked" /> <label for="Non">Non-dog</label> </p> <p> <input type="submit" class="form-button" value="Save" tabindex="6" /> <input type="reset" class="form-button" value="Reset" tabindex="7" /> </p> </fieldset> </form>Given CSS, compliant browsers will render it something like this: AM Okay, correct me if I am wrong, but I deduce that the label in front of the entries or other controls are intimately connected to these controls, right? This means that if you represent this structure as an XML file you get awkward connections between nodes.My suggestion would be to use the struct::tree module from Tcllib (or some similar implementation of a general tree):This can easily be converted to an XML file by traversing the tree and its nodes, you can associate as many XML nodes with a single tree node as necessary (struct::tree allows you to associate arbitrary keys and values with each node).The way back (from XML to tree) is a bit more complicated, but if the XML/HTML file is structured in a straightforward way, it should not be that difficult. WJR I've been looking for an example of struct::tree's usage. What would be the benefit of this approach? WJR What about using something like xmlgen and doing it as shown in this basic example: package provide htmlable 1.0 package require xmlgen namespace import ::xmlgen::* namespace eval htmlable { namespace export * } proc htmlable::widget {type name text id class} { declaretag label declaretag input # If no ID is supplied, use the name if {$id == ""} { set id $name } set widget [eval label for=$id $text] append widget [eval input type=$type name=$name id=$id class=$class] return $widget }Example tclsh session: % package require htmlable 1.0 % htmlable::widget text last-name {Last Name:} {} text-medium <label for="last-name">Last Name:</label><input type="text" name="last-name" id="last-name" class="text-medium" /> AM There is no particular advantage of struct::tree in itself, it is just that it provides a more general structure than XML-trees, as it does not describe a particular structure of the information. I think the above approach is very suitable too: you create a semantic structure that gets translated into visibly distinct items. The fact that the items are visibly distinct is only a coincidence (a rather nasty one, from the structural point of view) which you solve by introducing another more coherent structure.
http://wiki.tcl.tk/10345
CC-MAIN-2017-30
en
refinedweb
How to: Customize Collection Generation (C#) This article is relevant to entity models that utilize the deprecated Visual Studio integration of Telerik Data Access. The current documentation of the Data Access framework is available here. This topic applies to C#. In order to modify collection generation in VB.NET, please refer to How to: Customize Collection Generation (VB). By default, when you create a new domain model all collections are generated in the following manner: public partial class Category { private IList<Car> _cars = new List<Car>(); public virtual IList<Car> Cars { get { return this._cars; } } } However, in data-binding scenarios you may need to use other types of collections. This example demonstrates how to modify the default code generation templates so that the navigation properties are of type TrackedList<T> or TrackedBindingList<T> instead of List<T>. TrackedList<T> and TrackedBindingList<T> can be found in the Telerik.OpenAccess namespace and are used by Telerik Data Access to monitor the changes in the content of the navigation properties. The process includes the following steps: 1. Copy the Templates Locally At the Code Generation Settings screen of either Add Model wizard when creating a new model or Model Settings dialog when editing an existing one, select the Custom Defined radio button and click the Copy Default To... button. The Copy Default To... button is part of the code generation improvements of Telerik Data Access and is available only in Visual Studio 2010 and Visual Studio 2012. Visual Studio 2008 uses the previous version of the code generation process. This article demonstrates the usage of the newer version of the process only. The Select Folder dialog displays the directory structure of your project. Select an already existing folder, or create a new one to place the code generation templates in and click OK. The path to the entry template, DefaultTemplateCS.tt, will be automatically selected in the path text box. The templates will be copied to the selected directory. Click Finish if you are using the wizard or OK for the Model Settings dialog. 2. Custom changes in the templates At this stage you are ready to perform the necessary custom changes in the templates: - Open PropertiesGenerator.ttinclude Place the following code at the end of the GenerateFieldForProperty() method: if (property.IsNavigationProperty && property.IsIEnumerable) {"; initialValue = " = new " + propertyType + "()"; } Place the following code at the end of the GenerateClassPropertySignature() method: if (property.IsNavigationProperty && property.IsIEnumerable) {"; } For TrackedBindingList<T> you only need to changed propertyType in the above code snippets. The outcome should be similar to this one: public partial class Category { private TrackedList<Car> _cars = new TrackedList<Car>(); public virtual TrackedList<Car> Cars { get { return this._cars; } } } In the cases when your project is tracked by a source control system, we recommend that the whole OpenAccessTemplates folder is checked-in with it. That allows the entire team to work with the hosting project without any risk for the changes in the templates or in the generated service files to be rewritten.
http://docs.telerik.com/data-access/deprecated/developers-guide/code-generation/customizing-code-generation/domain-model/data-access-tasks-customise-code-generation-collection-generation
CC-MAIN-2017-30
en
refinedweb
Program Structure Magpie programs are stored in plain text files with a .mag file extension. Magpie does not compile ahead of time: programs are interpreted directly from source, from top to bottom like a typical scripting language. some code // This is a line comment. // A line comment ends at the end of the line. some more /* This is a block comment. */ code code /* Block comments can span multiple lines. */ Unlike those languages, block comments nest in Magpie. That's handy for commenting out chunks of code which may themselves contain block comments. code /* A /* nested */ block comment */ code code Doc Comments In addition to regular line and block comments, Magpie has a third kind of comment called documentation comments or simply doc comments. They start with three slashes and proceed to the end of the line. def square(n is Int) /// Returns `n` squared. n * n end Doc comments are used to document entire constructs: modules, classes, methods, etc. Unlike other comments, doc comments are not ignored by the language. This means they are only allowed where they are expected: at the beginning of a file, method body, or class definition: defclass Address /// A postal address. val street val city val state end Doc comments are formatted using Markdown and are intended to be parsed to generate documentation files. Reserved Words Some people like to see all of the reserved words in a programming language in one lump. If you're one of those folks, here you go: and async break case catch def defclass do end else false fn for if import in is match not nothing or return then throw true val var while xor Also, the following are punctuators in Magpie which means they are both reserved words and they can be used to separate tokens: ( ) [ ] { } , . .. ... The only built-in operator is =. All other operators are just methods, as explained below. Names Identifiers are similar to other programming languages. They start with a letter or underscore and may contain letters, digits, and underscores. Case is sensitive. hi camelCase PascalCase _under_score abc123 ALL_CAPS Operators Magpie does not have many built-in operators. Instead, most are just methods like any other method. However, the grammar of the language does treat them a bit specially. Lexically, an operator is any sequence of punctuation characters from the following set: ~ ! $ % ^ & * - = + | / ? < > Also, the special tokens .. and ... are valid operator names. But a = by itself is not—that's reserved for assignment. The exact set of operator characters is still a bit in flux. These are all valid operators: + - * ?! <=>&^?! When expressions are parsed, infix operators have the same precedence that you expect from other languages. From lowest to highest: = ! < > .. ... + - * / % Every operator on the same line above has the same precedence. If an operator has multiple characters, the first determines the precedence. So this (unreadable) expression: a +* b *- c <!! d !> e %< f Will be parsed like: (((a +* (b *- c)) <!! d) !> (e %< f)) The goal here is to have code that works more or less like you expect coming from other languages while still being a little more open-ended than those languages. Newlines Like many scripting languages, newlines are significant in Magpie and are used to separate expressions. You can keep your semicolons safely tucked away. // Two expressions: print("hi") print("bye") To make things easier, Magpie will ignore a newline in any place where it doesn't make sense. Specifically, that means newlines following a comma ( ,), equals ( =), backtick ( `), or infix operator ( +, -, etc.) will be discarded: val a = 1, 2 // a will be the record (1, 2). val b = 1 + 2 // b will be 3. val c = true and false // c will be false. If you specifically want to ignore a newline where it otherwise would separate two expressions, you can end the line with a backslash ( \): val a = foo bar() // Sets a to foo then calls bar() val a = foo \ bar() // Equivalent to: // var a = foo bar()
http://magpie.stuffwithstuff.com/program-structure.html
CC-MAIN-2017-30
en
refinedweb
View everybody private void textBox1_KeyPress(object sender, KeyPressEventArgs e) { obj.Key = e.KeyChar; char getvalueFromFunction = obj.getKeyBoardKey(); if (getvalueFromFunction == 'B') { } else { e.KeyChar = getvalueFromFunction; } } this is my code under the presentation layer or form1. public class keyPress { private char key; public char Key { get { r Is there any way to make font size a percentage of the size of either the parent container or the browser window viewport? Having fixed-width fonts inside variable-width containers is causing the layout to break. Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend
http://www.dotnetspark.com/links/64389-changing-font-size-on-table-contents-webpart.aspx
CC-MAIN-2017-30
en
refinedweb
Hi On Mon, May 21, 2012 at 03:12:58PM +0000, Babic, Nedeljko wrote: > Hi, > > CPU features can be detected at runtime in MIPS architectures, however this is quite cumbersome. > For example, information stored in status registers, can't be read from user space. > On the other hand we could obtain some information from /proc/cpuinfo. This was done in MIPS optimizations for pixman. However, this solution also has problems. For example, some vendors (like Broadcom) have their own version of the /proc/cpuinfo description, where they don't mention at all on which MIPS core these platforms are based on. So, this way of runtime detection would prevent MIPS optimizations although they are available for use. > You can see discussion regarding this problem on pixman mailing list (). that is a shitty design, i would suggest that this is fixed either in kernel or hardware so that theres a portable way to get this information from user space at least in the medium term future. > > > disabling the C functions like this is quite unpractical, consider > > how this would look with 7 cpu architectures (not to mention the > > maintaince burden) > > I think there either should be function pointers in a structure > > like reimar suggested (like LPCContext) or if this causes a > > "meassureable" overhead on MIPS and runtime cpu feature detection > > isnt possible or doesnt make sense for MIPS then > > There are a lot of functions that we optimized that are not optimized for other architectures. For most of these functions structures with appropriate function pointers don’t exit. In order to use this approach I would have to make probably a lot of changes in architecture independent parts of code and I am not sure if this is justifiably to do just for our optimizations. for functions that could be optimized in SIMD SSE* easily, structures with function pointers should be added. Because SSE is runtime detectable and once SSE optimizations for them are written there will be function pointers in structures for them. It would be quite inconvenient if we had a different system for these cases for MIPS in place at that point. > > > there simply could be a: > > > > #ifndef ff_acelp_interpolatef > > void ff_acelp_interpolatef(float *out, const float *in, > > const float *filter_coeffs, int precision, > > int frac_pos, int filter_length, int length){ > > ...C code here ... > > } > > #endif > > > > and a > > > > void ff_acelp_interpolatef_mips(float *out, const float *in, > > const float *filter_coeffs, int precision, > > int frac_pos, int filter_length, int length){ > > ... MIPS code here ... > > } > > > > and in some internal header a > > #define ff_acelp_interpolatef ff_acelp_interpolatef_mips > > This is maybe bather approach for us, but we have a lot of code to submit. This will also create maintenance burden. > > What do you suggest how should we proceed regarding this? what kind of maintaince burden do you see with this case: <>
http://ffmpeg.org/pipermail/ffmpeg-devel/2012-May/124934.html
CC-MAIN-2017-30
en
refinedweb
Copyright © 2004 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply. This document defines a mobile profileGT12 17 September 2004. Issues about this document are documented in the last call issues list maintained by the Working Group. expected to become the 5 February 2004 W3C Patent Policy, pending the Advisory Committee review of the renewal of the XML Query Working Group.. Text in light blue boxes describes new SVGT 1.2 features.). In this specification, the module definitions are written in RelaxNG.', 'font-face-uri' requires support for SVG images with the following restrictions. First, the referencing 'image' element's overflow property and the referenced 'svg' overflow property MUST be set to 'visible'. Second, the referenced document should not contain any references to external SVG images. If these restrictions are not met, the document is in error. These restrictions have been added to protect implementations from most of the implementation cost while still covering important use cases with external svg images. The same use cases could have been covered by allowing external use elements instead, the WG feels that the implementation cost of these two alternatives are equal but are interested in any feedback that proves the difference. user agent has the ability to perform any transformation (including scaling) on video content. Note that the above feature strings use the "example.org" domain. In the next version of this specification, the feature strings will use the "" domain. Implementors should use the "" identifier for feature strings. SVGT 1.2 supports the page and pageSet elements as defined in SVG 1.2. The allowed values for preserveAspectRatio for <page> follow the same resrictions as the <svg> element.. The requiredFormats test attribute, allows content to check for particular audio codecs. allows content to switch to an alternate rendering if the user agent does not support transformed video. The requiredFormats test attribute, allows content to check for particular video codecs. SVGT support subsets of SVG 1.2's presentation attributes. SVGT does not support styling with CSS. SVGT can introduce constraints on style properties. Properties and attributes is not restricted, and does not support the tref' elements. SVGT supports a simplified version of the 'tspan' element, where the x, y, dx, dy, rotate, textLength and lengthAdjust attributes are notRoot' element. SVGT 1.2 shall support the following elements, with the listed restrictions, as specified in SVG 1.2: Note that children of a flowRegion element are inserted into the rendering tree before the text is drawn, and have the same rendering behavior as if they were children of a g element. SVG Tiny 1.2 supports the 'editable' attribute with restrictions. SVG Tiny 1.2 supports the 'editable' attrubute on 'text' elements which have no element children. SVG Tiny 1.2 supports the 'editable' attribute on 'flowPara' elements which are the only child of a 'flowRoot' element and which have no element children. If these restrictions are not met, the document is in error. SVG Tiny 1.2 supports the 'progression-align' property that indicates line progression direction alignment in flowing text. An SVG user agent uses this property to align blocks of flowed text within their containing regions. If the line progression direction is left to right, a setting of 'before' will align the block of text at the top of the region. A setting of 'after' will align the text at the bottom of the region. A setting of 'center' will vertically center the block of flowed text. The combined line heights of the entire flowed text is used for alignment (as opposed to maximum glyph extents on the flowed text lines). and background-fill-opacity properties 1.2 also supports the nav-index property, as defined in SVG 1.2. SVGT 1.2 supports the pointer-events property.. In particular, a 'handler' element can have children in an arbitrary XML namespace as a way to specify parameters. embedded within the same document that uses the font or referenced through the 'font-face-src' and 'font-face-uri' elements.:
http://www.w3.org/TR/2004/WD-SVGMobile12-20040813/
CC-MAIN-2017-30
en
refinedweb
I have a question regarding objects. I develop some kind of a basic RPG. In this game I have a character (let's say this Character object is hero). hero has equipment according to primary hand, body armor etc. This hero wants to attack with the weapon wielded in his primary hand, let's say he has a long sword. A word regard my Weapon class - the objects in this class have "attack" method. This method gets as an arguments the attacker and the target, so (weapon objects here).attack(hero, hero2) makes hero attack using the weapon object to attack hero2. My question is as follows - can I avoid using attack(hero, hero2) somehow while the code knows that "hero" is the first objects that calls the attack method ? That's the attack command: hero.getEquipment().getPrimaryHand().attack(hero, hero2); hero.getEquipment().getPrimaryHand().attack(some code here, hero2); public class MeleeWeapon extends Weapon { boolean throwable; MeleeWeapon(String name,String reqTraining, boolean oneHaned, int n, int dice, int attackBonus, int damageBonus,double weight, long cost, boolean throwable) { super(name, reqTraining, weight, cost, oneHaned, n, dice, attackBonus, damageBonus); this.throwable = throwable; } static ArrayList<MeleeWeapon> meleeWeaponList = new ArrayList<MeleeWeapon>(); static { meleeWeaponList.add(new MeleeWeapon("Long Sword","Martial", true, 1, 8, 0, 0,8, 10, false)); meleeWeaponList.add(new MeleeWeapon("Short Sword", "Martial", true, 1, 6, 0, 0,5, 5, false)); meleeWeaponList.add(new MeleeWeapon("Dagger","Basic", true, 1, 4, 0, 0,2, 3, true)); meleeWeaponList.add(new MeleeWeapon("Quarter-staff", "Basic", false, 1, 4, 0, 0,3, 2, false)); meleeWeaponList.add(new MeleeWeapon("Shield", "Shield", false, 1, 4, 0, 0,8, 8, false)); } public void attack(Character attacker, Character defender){ int attackRoll = DiceRoller.roll(20) + attacker.getBaseAttackBonus() + attacker.getModifier(attacker.getStrength()) + getAttackBonus() ; System.out.println(attacker.getName() + " attack Roll: " + attackRoll + "AC: " + defender.getArmorClass()); if (attackRoll >= defender.getArmorClass()){ System.out.println("Defender: " + defender.getName() + " had " + defender.getCurrentHp()); int damage = DiceRoller.roll(getN(), getDice()) + attacker.getModifier(attacker.getStrength()) + getDamageBonus() ; System.out.println("Damage : " + damage); defender.setCurrentHp(attacker.getCurrentHp() - damage); System.out.println("Defender: " + defender.getName() + " has " + defender.getCurrentHp()); } else { System.out.println("Missed Attack"); } } You can have an attack method in your Character class that would wrap your other attack method : public void attack (Character other) { getEquipment().getPrimaryHand().attack (this, other); } And you call it simply : hero.attack (otherHero);
https://codedump.io/share/6wGgqDVXHdQH/1/java---access-objects
CC-MAIN-2017-30
en
refinedweb
I am trying to add seller_id to my items model in a migration by doing the following: rails g model Item title:string description:text price:bigint status:integer published_date:datetime seller:belongs_to class CreateItems < ActiveRecord::Migration def change create_table :items do |t| t.string :title t.text :description t.bigint :price t.integer :status t.datetime :published_date t.belongs_to :user, index: true, foreign_key: true t.timestamps null: false end end end Just explicitly define what you want in the migration file and add the necessary relation to your model, instead of t.belongs_to you could just use: t.integer :seller_id, index: true, foreign_key: true and in your models you could go about this a few ways, if you also want reference your relation as seller on item instances then in the Item class make the relation: belongs_to :seller, class_name: 'User' and in the User class: has_many :items, foreign_key: :seller_id or if you want to reference it as user from the items then: belongs_to :user, foreign_key: :seller_id In terms of editing the generator, that is the default way that a model generator works and in my opinion I like to keep the defaults the way they are. However I do recommend creating your own rails generators or your own rake tasks for when you want to create some custom / special case stuff. For looking at creating your own generator I would point you to the ruby on rails official guide for creating generators to get you started:
https://codedump.io/share/va39lz1s8XoS/1/rails-migration-tbelongsto-user-add-custom-column-name-sellerid
CC-MAIN-2017-30
en
refinedweb
On 16 Apr, 03:36 pm, pje at telecommunity.com wrote: >At 03:46 AM 4/16/2009 +0000, glyph at divmod.com wrote: >>On 15 Apr, 09:11 pm, pje at telecommunity.com wrote: >. [snip clarifications] >Does that all make sense now? Yes. Thank you very much for the detailed explanation. It was more than I was due :-). >MAL's proposal requires a defining package, which is counterproductive >if you have a pure package with no base, since it now requires you to >create an additional project on PyPI just to hold your defining >package. Just as a use-case: would the Java "com.*" namespace be an example of a "pure package with no base"? i.e. lots of projects are in it, but no project owns it? >. Just to clarify things on my end: "namespace package" to *me* means "package with modules provided from multiple distributions (the distutils term)". The definition provided by the PEP, that a package is spread over multiple directories on disk, seems like an implementation detail..*. Right now it just says that it's a package which resides in multiple directories, and it's not made clear why that's a desirable feature. >> The concept of a "defining package" seems important to avoid >>conflicts like this one: >> >> [snip some stuff about plugins and package layout] >Namespaces are not plugins and vice versa. The purpose of a namespace >package is to allow projects managed by the same entity to share a >namespace (ala Java "package" names) and avoid naming conflicts with >other authors. I think this is missing a key word: *separate* projects managed by the same entity. Hmm. I thought I could illustrate that the same problem actually occurs without using a plugin system, but I can actually only come up with an example if an application implements multi-library-version compatibility by doing try: from bad_old_name import bad_old_feature as feature except ImportError: from good_new_name import good_new_feature as feature rather than the other way around; and that's a terrible idea for other reasons. Other than that, you'd have to use pkg_resources.resource_listdir or somesuch, at which point you pretty much are implementing a plugin system. So I started this reply disagreeing but I think I've convinced myself that you're right. ..
https://mail.python.org/pipermail/python-dev/2009-April/088838.html
CC-MAIN-2017-30
en
refinedweb
A new Flutter package project.. Initial Release. example/main.dart import 'package:flutter/material.dart'; import 'package:feedback_to_issue/feedback_to_issue.dart'; void main() => runApp(MyApp()); class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter Demo', theme: ThemeData( primarySwatch: Colors.blue, ), home: MyHomePage(), ); } } class MyHomePage extends StatefulWidget { @override _MyHomePageState createState() => _MyHomePageState(); } class _MyHomePageState extends State<MyHomePage> { final GlobalKey<ScaffoldState> _scaffoldKey = new GlobalKey<ScaffoldState>(); @override Widget build(BuildContext context) { return Scaffold( // we need scaffoldkey to share scaffold state with other widget. key: _scaffoldKey, body: Center( child: OutlineButton( child: new Text('Feedback'), onPressed: () { //assignee is default to your_github_username var feedbackDialogue = new FeedbackDialogue( context, _scaffoldKey, 'your_github_api_token', 'your_github_username', 'your_github_repository_name', assignee: 'assignee_user_name'); feedbackDialogue.prompt(); }, ), )); } } Add this to your package's pubspec.yaml file: dependencies: feedback_to_issue: ^0.0.7 You can install packages from the command line: with Flutter: $ flutter pub get Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use: import 'package:feedback_to_issue/feedback_to_issue.dart'; We analyzed this package on Aug 16, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using: Detected platforms: Flutter References Flutter, and has no conflicting libraries. Format lib/feedback_to_issue.dart. Run flutter format to format lib/feedback_to_issue.dart. Support latest dependencies. (-10 points) The version constraint in pubspec.yaml does not support the latest published versions for 1 dependency ( github). Package is pre-v0.1 release. (-10 points) While nothing is inherently wrong with versions of 0.0.*, it might mean that the author is still experimenting with the general direction of the API.
https://pub.dev/packages/feedback_to_issue
CC-MAIN-2019-35
en
refinedweb
This guide describes how to use 100.5.0 of ArcGIS Runtime SDK for Java to build Java applications that incorporate capabilities such as mapping, geocoding, routing, and geoprocessing, for deployment onto Windows, Linux, and Mac platforms. A great place to start is Get the SDK, which describes three options for setting up your first application: as a Gradle project, as a Maven project, or by downloading the SDK and configuring manually. For an overview of new functionality at this release, including known limitations, see Release notes. With ArcGIS Runtime SDK for Java, you can do many things, including: - Build maps with the latest ArcGIS basemaps and ArcGIS Enterprise map services, feature services, and image services; include specialized layers, such as OpenStreetMap basemaps, WMTS and WMS map service layers. - Perform blazing fast searches for locations (geocode and reverse geocode) and routes. - Build applications to edit features. What you get Download and install ArcGIS Runtime SDK for Java to get the following: - A rich Java SE API provided through a suite of .jar files - A set of Runtime components for Windows (32 and 64 bit), Linux (64 bit), and Mac (64 bit) - An interactive sample viewer application which allows you to view the SDK's capabilities and see application code that you can use to create your own applications. - An open-source toolkit library that includes a set of components to assist rapid application development. You also get a comprehensive documentation set that includes: - A home page—Provides the SDK download, links to key help topics, and a view into ArcGIS Runtime SDK for Java related blog posts. - This guide, which includes conceptual and task-based topics. - An API reference—Describes all the public classes and methods in the API. - Samples - A sample code site—Allows you to browse through samples and their full source code. Each sample page has a link to its source code on GitHub. - Samples in GitHub—The same set of samples on the sample code site is also available in the arcgis-runtime-samples-java GitHub repository.
https://developers.arcgis.com/java/latest/guide/guide.htm
CC-MAIN-2019-35
en
refinedweb
Showcase of React + Redux Web Application Development Showcase of React + Redux Web Application Development This tutorial provides an introduction to using Redux with React and related technologies like React, Formik, react-router, and more in web app development. Join the DZone community and get the full member experience.Join For Free. This is where Redux comes into play. Using Redux with React solves this common problem with the concepts of central data store & application state. In this blog, I'll talk about Redux and explain how it can benefit React front-end development. I'll provide an introduction to using Redux with React and show a demonstration of reconstructing an example React application to React + Redux. Our Example The application that our example will begin with is the Keyhole "Now Playing" React application. The repository is located here.. If you are eager to see the final project, git clone these two repos: and. Construct a local build by using the following instructions provided in README; you can take a look of the running application in your browser at. Introduction to Redux So what is Redux and how does it relate to React? Redux is a predictable state container for JavaScript applications. In its simplest form, it is a library to manage the state of any JavaScript application. The following figure demonstrates the React + Redux flow. The right side of the figure as shows the key element for Redux: - Store: holds all application store into a single object tree. - Reducers: the only way to change state by emitting an action. - Actions: specify anything that could change the application state. The left side shows the integration of Redux with React: - Provider: makes the application state contained in store available to the container. - Containers Component: responsible for an external service call to retrieve application data, the container also maps the state and dispatches action to components. - Presentational Components: the presentation layer. This is the overall flow in a React + Redux application. Now let's go through each individual piece with examples. On the Redux Side Store To have Redux manage an application's state, all state must be contained within a single object. At the minimum, to create a store, you need to have root reducer and initial state. 'react-router-redux' and initial state object. import {createStore} from 'redux' import { routerReducer } from 'react-router-redux' const initialState = { movies: [] }; const store = createStore(rootReducer, initialState ) export default store; With this configuration, the state movies can be managed by Redux store. As you can see, the initial state is an empty array; we will need to create Reducer to change the movies' state. Reducers Reducers are responsible for changing application state with the emitting of actions. As you can imagine, the functions for a Movies Reducer are to retrieve movies and perform searches on the movies you retrieved. Reducers are a pure function. In our example, it returns the action payload based on its type. I'll show you later on how to create action. import {c} from '../constants' const movies = (state = { }, action) => { switch (action.type) { case c.LOAD_MOVIES: return action.payload; case c.SEARCH_MOVIES: return action.payload; default: return state } } export default movies Actions Anything that can cause application state change must happen in Actions. We have previously identified two type of actions that changes state of movies: LOAD_MOVIES and SEARCH_MOVIES. This is the Load movies action implementation; a similar implementation for search action could be found on git repo here. import {fetChMovies, changeRating} from '../services/movie.js'; import {c} from '../constants'; export function preloadMovies() { return (dispatch) => { loadMovies()(dispatch); }; } function loadMovies(dispatch) { return (dispatch) => { fetChMovies().then((movies) => { allMovies = movies; dispatch(loadTheMovies(movies)); }); }; } function loadTheMovies(movies) { return ({type: c.LOAD_MOVIES, payload: movies}); } export const movieActions = { preloadMovies } export default movieActions; In this example, I have, added a service layer of fetchMovies by calling RESTFul API that is published by khs-server application. You can take a look at the implementation details here. By calling dispatch(loadTheMovies(movies), the movie Reducer will update the movies state. On the React Side So by this point, we have demonstrated that application state can be managed by Redux. We have not, however, yet integrated state change with React. React and Redux can be integrated together with the help of provider offered by the react-redux package. Provider The Provider glues the Redux store together with the React component. This means that the application state information in the single store is available for React components. For a React application, you must define the render of the root application element like this: const element = <App /> ReactDOM.render(element, document.getElementById('root')) In order to have Redux work with React, you will need to do this: const element = <App /> ReactDOM.render( <Provider store={store}> <ConnectedRouter history={history}> <div className ="container" id="wrapper"> <div> <App/> </div> </ConnectedRouter> </Provider>, document.getElementById('root')) As you can see, the React application is now managed by the provider which is associated to the Redux store. ConnectedRouter from react-router-redux is also used to provide application routing. Through Provider, the React component is able to refer to the state information contained in Redux store. Based on the different roles in integrating with Redux, React component can be separated into two categories: presentational and container components. Presentational Components Container Components Presentational Component: MovieList Let's demonstrate this. The presentation component MovieList is responsible for displaying the movie's information. import React from 'react' import Movie from './Movie.js' const MovieList = ({ Movies }) => { return ( <div className="movie-container"> <br/> <div> <ul> {movies.map(movie => <li key={movie.id}> <Movie title={movie.title} poster={"/" + movie.poster_path} id={movie.id}/> </li>)} </ul> </div> </div> ) } export default MovieList; Container Component: MoviesContainer The container component called MoviesContainer is used to wrap up the presentation component movies. import {connect} from 'react-redux' import {Component} from 'react' import {bindActionCreators} from 'redux' import React from 'react' import {withRouter} from 'react-router-dom' import MovieList from '../ui/MovieList.js' import MovieActions from '../../actions/movie.js' class MoviesContainer extends Component { componentWillMount() { this.props.preloadMovies(); } render() { return <MovieList movies={this.props.movies/> } } const mapStateToProps = (state, props) => ({movies: state.movies }) const mapDispatchToProps = dispatch => bindActionCreators(MovieActions,dispatch); export default withRouter(connect(mapStateToProps, mapDispatchToProps)(MoviesContainer)) The container component MoviesContainer is responsible for data fetching by dispatching reducer action as followed: this.props.preloadMovies. You may be wondering why preloadMovies is a property here and not an action, and how it could work with Redux? Connect from the react-redux package does the trick. Connect is used to map the store's state and dispatch to the property of a component. As you can see from this example, the state movies is passed as a property to the presentation component. Conclusion What I discussed above are the main components for pre-loading movies. The git repository provides more features, like user registration, login, logout, the ability to search for movies by name, rating movies, to list a few examples. Please check out those features for further reference. You will quickly realize that the implementation pattern is consistent when a new feature is added and the responsibilities for each component are clearly defined. Most importantly, application state change is unidirectional, which means its more predictable, easier to manage, and the code is more maintainable. Redux is designed to manage state for single-page applications. We have discussed how the unification of React + Redux provides a powerful tool for web application development. We have increasingly seen more Keyhole clients choosing the React route for web development front-end technology. I personally recommend they also integrate Redux to manage state in the earliest stage of development. Published at DZone with permission of Jian Li , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/showcase-of-react-redux-web-application-developmen?fromrel=true
CC-MAIN-2019-35
en
refinedweb
These are chat archives for akkadotnet/akka.net Hello, I continue my experiments with Akka Streams, and there's a workflow I can't figure out: Source .From(Enumerable.Range(1, 10)) .Select(x => new[] {x + 10, x + 20, x + 30}) .DoSomethingInParallelWithEachElementOfTheArrayFromTheStepAbove() .AggregateResultsOfThePreviousStep() .To(sink); What happens is that each element of the steam source is converted into a collection, and the want to perform certain operations on each collection item and merge results back. I can't figure out how to express it in Akka Streams terms. SelectAsyncor SelectAsyncUnorderedto perform parallel operations, then SelectManyto flat map collections into single elements to the stream and then Aggregate, Sumor Scanto reduce results myActorRef.Tell(message, Sender)so i don't have to polute my POCO with IActorRefs _log.Error(reason, "Persistence failure when replaying events for persistenceId [{0}]. Last known sequence number [{1}]", PersistenceId, LastSequenceNr);- reason is the Exceptionwhich was thrown there akka:persistence:journal:workerSupervisorActor.highestSequenceNr source.SelectAsync(url => service.GetSomeCollectionAsync(url)).SelectMany(elements => elements) RecoverWithmethod which will be called each time exception was thrown Ask()pattern, webAPI calls var result = await actorA.Ask(request);, after actorA's done with its job, it passes the job over to ActorB using ActorB.tell(jobresult, Sender);, and after ActorB finished successfully, it calls Sender.Tell(finalResult);, but finalResultalways ends up in the unhandled message pile... Ask<object>()? because the response might be one of two messages public class MessageResponse { public MyClass Result1 { get; set; } public MyClass2 Result2 { get; set; } } [DEBUG][10/19/2016 3:29:05 PM][Thread 0008][akka://SolehWebService/user/$b/$a/$b] Unhandled message from akka://SolehWebService/user/$b : SolehServer.Messages.AuthResponse Web service: AuthResponse result; try { result = await Global.AuthActor.Ask<AuthResponse>(new RegisterUser(register), TimeSpan.FromSeconds(10)); } catch (TaskCanceledException e) { } catch (Exception e) { } Auth actor: Receive<RegisterUser>(msg => { MembershipCreateStatus result = _membership.CreateUser(msg.UserName, msg.Email, msg.Password); if (result == MembershipCreateStatus.Success) { _cognitoActor.Tell(new CognitoAuthRequest(msg.Email), Sender); return; } Sender.Tell(new AuthResponse(result)); }); Cognito actor: Receive<CognitoAuthRequest>(msg => { GetOpenIdTokenForDeveloperIdentityResponse idResponse = null; try { idResponse = _identClient.GetOpenIdTokenForDeveloperIdentity(idRequest); } catch (Exception e) { Sender.Tell(new AuthResponse(MembershipCreateStatus.ProviderError)); return; } Sender.Tell(new AuthResponse(idResponse.IdentityId, idResponse.Token)); }); TaskCanceledException, the AuthResponsefrom the successful cognito call always ends up in unhandled Askpattern should work even if the worker actors are behind a pool router, right? Basically, I'm dubious about how far one can go with the stateful approach: -- in particular, this part: Actors can be stateful - and that means we no longer have to factor round-trip times to SQL Server, Redis, Cassandra, or whatever into the design of our applications. The application already has the state it needs to do its job by the time a request arrives! LibraryPatronactor and I send a message to it GetListOfBorrowedBooksand I expect back List<Book>. There seems to be two approaches I could take: return Patron.BorrowedBooks.ToList<Book>;or some other join query or similar. int ExpectedBookResponsesto 100 then Become(CollectBookReplies), decrement the counter with each Bookobject the actor receives, and when it reaches 0, return the accumulated List<Book>object. Or is there a better way?
https://gitter.im/akkadotnet/akka.net/archives/2016/10/19
CC-MAIN-2019-35
en
refinedweb
The @now/node Builder takes an entrypoint of a Node.js function, builds its dependencies (if any) and bundles them into a Lambda. When to Use It This Builder is the recommended way to introduce any Node.js-based dynamic request handling into your codebases. It can be used directly (as a single file, like my-function.js), or you can define an index.js file inside a directory. How to Use It: module.exports = (req, res) => { const { name = 'World' } = req.query res.send(`Hello ${name}!`) } An example serverless Node.js function. @now/nodeBuilder also supports asynchronous functions, an example of which can be found in the Request and Response Objects section.: The example deployment above is open-source and you can view the code for it here: You can pass query parameters to make the name change: The lambda using the name query parameter to change the text using Node.js. Request and Response Objects For each invocation of a Node.js lambda, two objects, request and response, are passed to the function. These objects are the standard HTTP request and response objects given and used by Node.js. Helpers In addition to the standard Node.js request and response methods, we provide a set of helper properties and methods to make it even easier for you to build a new lambda function: The following lambda(`hi ${who}, what's up?`) } Example Node.js lambda using the req.query, req.cookies and req.body helpers. It returns greetings for the user specified using req.send(). helpersto falsein the configuration of the builder. If you opt-out, request objects will be replaced by the standard HTTP request and response. { "builds": [ { "src": "my-file.js", "use": "@now/node", "config": { "helpers": "false" } } ] } You can opt-out of helpers by setting helpers to false in the configuration. lambda using the req.body helper that returns pong when you send ping. Typescript support We provide types for the request and response objects. To install these types, use the following command: yarn add @now/node --dev Or, if you're using npm: npm install @now/node --save-dev Once the types have been installed, import NowRequest and NowResponse from @now/node, to type the request and response objects: import { NowRequest, NowResponse } from '@now/node' export default function(req: NowRequest, res: NowResponse) { const { name = 'World' } = req.query res.send(`Hello ${name}!`) } An example "Hello World" Typescript lambda using the req.query helper. Async support We support asynchronous functions out-of-the-box. In this example, we use the package asciify-image to create ascii art from a person's avatar on github. First, we need to install the package: yarn add. Deploying with Static Content You can upload static content alongside on including static files in your project, read the documentation for @now/static: Technical Details Entrypoint The entrypoint of this Builder is always a JavaScript file that exposes a function. If you want to expose a server, you should read the documentation for the @now/node-server Builder, although this is not recommended. @now/nodewill automatically detect a `package.json` file to install dependencies at the entrypoint level or higher. Dependencies Installation The installation algorithm of dependencies works as follows: - If a package-lock.jsonis present, npm installis used - Otherwise, yarnis used. Private npm modules To install private npm modules, define NPM_TOKEN as a build environment variable in now.json. Build Step, const fs = require('fs');.send(` This Lambda was built at ${new Date(BuiltTime)}. The current time is ${new Date()} `) } now.jsonfile: { "version": 2, "builds": [ { "src": "index.js", "use": "@now/node" } ] } The resulting lambda contains both the build and current time: Node.js Version The Node.js version used is the v8.10. Maximum Lambda Bundle Size" } } ] }
https://docs-560461g10.zeit.sh/docs/v2/deployments/official-builders/node-js-now-node/
CC-MAIN-2019-35
en
refinedweb