text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Python alternatives for PHP functions import string all(c in string.punctuation for c in text) (PHP 4 >= 4.0.4, PHP 5) ctype_punct — Check for any printable character which is not whitespace or an alphanumeric character Checks if all of the characters in the provided string, text , are punctuation character. The tested string..
http://www.php2python.com/wiki/function.ctype-punct/
CC-MAIN-2018-22
en
refinedweb
It's not the same without you Join the community to find out what other Atlassian users are discussing, debating and creating. Hi, I've posted this one before, but it was in the old Atlassian Answers forum and it has been deleted so I have to post it again as I haven't found a solution to it yet. I'm trying to use the Clone and link functionality from ScriptRunner and in the process updating the new ticket a bit using the Additional issue actions field. I'm setting the estimate field on the issue, but it doesn't stick. After the issue has been created it does not have the new value. It has the same value as the ticket it is cloned from. I have added this code: import com.atlassian.jira.bc.project.component.ProjectComponent import com.atlassian.jira.component.ComponentAccessor import com.atlassian.jira.issue.MutableIssue import com.atlassian.jira.issue.Issue import com.atlassian.jira.project.version.Version import com.atlassian.jira.project.version.VersionManager MutableIssue currentIssue = issue BigDecimal timeSpent = getTimeSpent(sourceIssue) log.warn("Clone TLA to new ticket for the new year - timeSpent " + timeSpent) Number remaining = getRemaining(sourceIssue, timeSpent) log.warn("Clone TLA to new ticket for the new year - remaining " + remaining) Long estimate = 187.5 + remaining log.warn("Clone TLA to new ticket for the new year - estimate " + estimate) issue.estimate = estimate private static BigDecimal getTimeSpent(Issue issue) { BigDecimal timeSpent = issue.timeSpent if (timeSpent != null) { timeSpent = timeSpent / 3600 } else { timeSpent = 0 } timeSpent } private static Number getRemaining(Issue issue, BigDecimal usedTime) { BigDecimal remaining = 0 if (issue.originalEstimate != null) { remaining = (issue.originalEstimate) / 3600 - usedTime } remaining } The estimate value gets calculated correctly (issue.estimate = estimate), but somehow never stored in the new cloned ticket. I'm running this as the first post function for a transition. By the way: I'm running the latest version of ScriptRunner. Does anybody have any hints on what I'm doing wrong here? I've found a solution/workaround. I use issue.originalEstimate instead. That works. Just have to add the estiamte in seconds instead of hours. Still curious though why issue.estimate doesn't stick. Have you tried using the .setEstimate() method on the currentIssue. And also passing the currentIssue variable into the methods instead of sourceIssue? I've gone back to basic here. I've replaced everything that I had in the "Additional issue actions" with a single line: issue.estimate = new Long(400) The new cloned issue still gets the estimate from the original ticket. I believe I'm looking at an error in the clone and link feature. Should I move this to the Scriptrunner support instead? What do you think John.
https://community.atlassian.com/t5/Jira-questions/Cannot-update-estimate-on-an-issue-when-using-clone-and-link/qaq-p/460230
CC-MAIN-2018-22
en
refinedweb
In my previous post, Overview of the TestContext in NUnit. Part 1: Static properties and methods, I went over the static properties and methods that are available on the TestContext. I had briefly mentioned the CurrentContext and in this post we are going to dive deeper into this property. The CurrentContext gets created separately for each test but is still accessible anywhere within the TestFixture. It holds information about the current test, fixture execution, and even provides you with a Randomizer class that you can use inside your test to build up random data. Let's dig into the CurrentContext! Current Context - Properties Test - This gets you a representation of the current test. It's properties are: ID- The unique Id of the test Name- The name of the test, whether set by the user or generated automatically FullName- The fully qualified name of the test ClassName- The fully qualified name of the class MethodName- The name of the method representing the test, if any Properties- An IPropertyBagof the test properties If you recall the example we were using in our previous posts, we had this HelloWorld test. using NUnit.Framework; namespace BlogSamples.NUnit { [TestFixture] public class HelloWorld { [Test] public void Hello() { string greeting = BlogSamples.HelloWorld.Hello("get-testy"); Assert.That(greeting.Equals("Hello, get-testy.", StringComparison.Ordinal)); } } } If we were to access TestContext.CurrentContext.Test from inside the method Hello, we would see these values: - ID = "0-1001" - Name = "Hello" - FullName = "BlogSamples.NUnit.HelloWorld.Hello" - ClassName = "BlogSamples.NUnit.HelloWorld" - MethodName = "Hello" - Properties = { Keys: { } } A few things to note about the values. - The IPropertyBagfor Properties is empty. Since there are no extra attributes putting properties onto this test case, it is empty. - The IDis not guaranteed to be the same each test run. It is just to uniquely identify that execution of the test. Now let's take this same method, but change the Name of it and also add a property. [TestCase( TestName = "Hello, get-testy.", Description = "Verify we can get a properly formed message back from HelloWorld.", Author = "James Penning")] public void Hello() { string greeting = BlogSamples.HelloWorld.Hello("get-testy"); Assert.That(greeting.Equals("Hello, get-testy.", StringComparison.Ordinal)); } Now when we access TestContext.CurrentContext.Test, we would see these values: - ID = "0-1001" - Name = "Hello, get-testy." - FullName = "BlogSamples.NUnit.HelloWorld.Hello, get-testy" - ClassName = "BlogSamples.NUnit.HelloWorld" - MethodName = "Hello" - Properties = { Keys: { "Description": "Verify we can get a properly formed message back from HelloWorld.", "Author":"James Penning" } } There are a lot of different attributes that you can add in order to fill in the properties. Description, Author, Category, etc. You also may have noticed that I changed the attribute that declares Hello as a test case. Test and TestCase are subtly different. The basic difference is that you can have multiple TestCase attributes on a method, which will create multiple tests based off the values set in each attribute. Whereas with the Test attribute, you can only have one. Be on the look out to a post I will be making about all the different nunit attributes you can use! - Result - Gets a representation of the test result. Outcome- A ResultStaterepresenting the outcome of the test. PassCount- The number of tests that passed. InconclusiveCount- The number of tests that were Inconclusive. SkipCount- The number of tests that were skipped. WarningCount- The number of tests that had warnings. FailCount- The number of tests that failed. Message- Any message that nunit defines based off test results. IE: "Message: One or more child tests were ignored" would be the message if there were tests that were ignored. StackTrace- Any exception was thrown inside of a test. The ResultState is accessible during the execution of a test, however, it is best to only use it in the teardown stage. It will have all the values populated at that time. - ResultState Status- A TestStatuswith four possible values. Label- An optional string value, which can provide sub-categories for each Status. Site- A FailureSitevalue. These are the four possible values for Status. If you are familiar with the TestResult.xml output from a test execution, these are what get associated with the test-case nodes. I gave an example in my Part 1 post. - TestStatus Inconclusive Skipped Passed Warning Failed These are the possible values for the FailureSite. This will indicate where the failure occurred in the execution of the test execution. - FailureSite Test SetUp TearDown Parent Child One practical way you might use the ResultState would be to check if a test failed. If you have tests that you are running in serial that require the previous to pass, you would want to stop further execution to prevent cascading failures. Here's how you can easily do this: private bool _stopFurtherExecution = false; [SetUp] public void SetUp() { if (_stopFurtherExecution) { Assert.Inconclusive("Previous test failed"); } } [TearDown] public void TearDown() { bool status = TestContext.CurrentContext.Result.Outcome.Status; if(status.Equals(NUnit.Framework.Interfaces.TestStatus.Failed)) { _stopFurtherExecution = true; } } You could put this in a base class and then have any TestFixture that you want to follow this behavior and inherit from it. TestDirectory This will return the full file path of the directory in which the current executing assembly is located. WorkDirectory This will return the full file path of where the test is being executed. We can access the TestDirectory and WorkDirectory like this. TestContext.WriteLine($"Current TestDirectory: {TestContext.CurrentContext.TestDirectory}"); TestContext.WriteLine($"Current WorkDirectory: {TestContext.CurrentContext.WorkDirectory}"); Which, when we execute the tests, would produce a result that looks like this. <!-- Command to execute tests --> D:\SourceCode\BlogSamples> D:\SourceCode\nunit-console\bin\Release\nunit3-console.exe .\BlogSamples.NUnit\bin\Debug\BlogSamples.NUnit.dll NUnit Console Runner 3.7.0 Copyright (C) 2017 Charlie Poole Runtime Environment OS Version: Microsoft Windows NT 10.0.14393.0 CLR Version: 4.0.30319.42000 Test Files .\BlogSamples.NUnit\bin\Debug\BlogSamples.NUnit.dll <!-- Output from our tests --> => BlogSamples.NUnit.HelloWorld.Hello, get-testy. Current TestDirectory: D:\SourceCode\BlogSamples\BlogSamples.NUnit\bin\Debug Current WorkDirectory: D:\SourceCode\BlogSamples You can see that the TestDirectory is where the actual assembly that we are loading to execute tests and the WorkDirectory is where we are currently at in the powershell window. Random This one is probably my favorite property to use. The Random property returns a Randomizer object which can aid you in generating random values to use in your tests. In using Random, if you don't change the assembly or the seed used, you can repeat values on reruns. Which is really cool when you want to send in random data to methods but also be able to rerun tests with the same data if they fail. You might be thinking, "Why would I want to use this instead of using System.Random directly?" Well, there are a couple reasons why, but the main two include: - Uniform methods. These methods extend the behavior on a wide range of types which System.Randomdoes not include. - IE. NextLong, NextLong(long max), and NextLong(long min, long max). - Repeatable values. Like I said above, this is huge for being able to still use random values while also being able to reproduce that random value consistently. Let's see how we can use Random with our HelloWorld tests. We might want to just see if our Hello method can take in a bunch of different strings, but not have to write out hundreds of tests with data. [Test] public void Random() { string randomGreeting = TestContext.CurrentContext.Random.GetString(); TestContext.WriteLine($"RandomGreeting: {randomGreeting}"); string greeting = BlogSamples.HelloWorld.Hello(randomGreeting); Assert.That(greeting.Equals($"Hello, {randomGreeting}.", StringComparison.Ordinal)); } There are a lot of useful properties on the CurrentContext that you can use right away in your tests. Others might only have use when used inside of an Engine Extension, which I will cover in a future post. This concludes the 2 part series on the TestContext. Be on the lookout for more posts on NUnit in the future. Now I want to know, how do you use the TestContext inside of your tests?
https://get-testy.com/test-context-part-2/
CC-MAIN-2018-22
en
refinedweb
Scan input from a wide-character string (varargs) #include <wchar.h> #include <stdarg.h> int vswscanf( const wchar_t * ws, const wchar_t * format, va_list arg ); libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The vswscanf() function scans input from the string designated by ws, under control of the argument format. The vswscanf() function is the wide-character version of vsscanf(), and is a varargs version of swscanf(). The number of input arguments for which values were successfully scanned and stored is returned, or EOF when the scanning is terminated by reaching the end of the input string. It's safe to call vswscanf() in a signal handler if the data isn't floating point.
http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.neutrino.lib_ref/topic/v/vswscanf.html
CC-MAIN-2018-22
en
refinedweb
DocumentDesigner Class Base designer class for extending the design mode behavior of, and providing a root-level design mode view for, a Control that supports nested controls and should receive scroll messages. For a list of all members of this type, see DocumentDesigner Members. System.Object System.ComponentModel.Design.ComponentDesigner System.Windows.Forms.Design.ControlDesigner System.Windows.Forms.Design.ParentControlDesigner System.Windows.Forms.Design.ScrollableControlDesigner System.Windows.Forms.Design.DocumentDesigner [Visual Basic] Public Class DocumentDesigner Inherits ScrollableControlDesigner Implements IRootDesigner, IToolboxUser [C#] public class DocumentDesigner : ScrollableControlDesigner, IRootDesigner, IToolboxUser [C++] public __gc class DocumentDesigner : public ScrollableControlDesigner, IRootDesigner, IToolboxUser [JScript] public class DocumentDesigner extends ScrollableControlDesigner implements IRootDesigner, IToolboxUser Thread Safety Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe. Remarks This designer is a root designer, meaning that it provides the root-level design mode view for the associated document when it is viewed in design mode. You can associate a designer with a type using a DesignerAttribute. For an overview of customizing design time behavior, see Enhancing Design-Time Support. Requirements Namespace: System.Windows.Forms.Design Platforms: Windows 98, Windows NT 4.0, Windows Millennium Edition, Windows 2000, Windows XP Home Edition, Windows XP Professional, Windows Server 2003 family Assembly: System.Design (in System.Design.dll) See Also DocumentDesigner Members | System.Windows.Forms.Design Namespace | ComponentDesigner | ControlDesigner | ParentControlDesigner | ScrollableControlDesigner | IDesigner | IRootDesigner | DesignerAttribute
https://msdn.microsoft.com/en-us/library/x2y3748k(v=vs.71).aspx
CC-MAIN-2018-22
en
refinedweb
-05-09 at 11:51 -0400, Rajarshi Guha wrote: > On Mon, 2005-05-09 at 09:38 +0200, Christoph Steinbeck wrote: > > Rajarshi Guha wrote: > > > > > I thought that if I had multiple classes in a single file, then only the > > > public class would be visible and all other classes in the file would be > > > visible only by this public class. > > > > Still, all the classes defined in this single *.java file should end up > > as discrete .class files eventually. > > You mail doesn't say anything about the mechanism by which you run the > > program, beside your statement that you "compile it into the CDK". After looking at build.xml I see that the reallyRunDoclet task, creates the *.javafiles by considering classes that are designated public. As a result, if I place the surface class in a file and designate it public and also include the other classes in the same file, but with no access qualifier, then the build process will place the surface class *only* in the extra.javafiles (I specified that the surface class should be in the cdk-extra module via cdk.tag). Even if I split up each class into its own file, unless they are all specified as public the ant task does not place them in extra.javafiles. I realize that the documention should only produce docs for public classes and methods, but it also seems to result in non-public classes not being included for compilation. I'm sure I'm missing something here, but I can't seem to see how to fix this problem. ------------------------------------------------------------------- Rajarshi Guha <rxg218@...> <> GPG Fingerprint: 0CCA 8EE2 2EEB 25E2 AB04 06F7 1BB9 E634 9B87 56EE ------------------------------------------------------------------- "355/113 -- Not the famous irrational number PI, but an incredible simulation!" View entire thread
https://sourceforge.net/p/cdk/mailman/message/8849521/
CC-MAIN-2018-22
en
refinedweb
In Haskell The following Haskell code is supposed to just output "hello". main = print $ times 1000000000000 id "hello" times 0 _ x = x times n f x = times (n-1) f (f x) The times function I defined applies the given function many times. times 3 f xis... times 2 f (f x) times 1 f (f (f x)) times 0 f (f (f (f x))) f (f (f x)) id, defined in Prelude, just returns the given argument. You may want to assume the GHC internal optimizer to eliminate any id call. In other words, the Haskell code in the very beginning is supposed to be transformed just to main = print "hello". Unfortunately, no it doesn't, even with -O2 option. $ ghc -O2 a.hs -o a $ ./a ... (long long hours) .. In C I just translated the original Haskell code to C. #include "stdio.h" char *id(char *x) { return x; } char *times(long long n, char *(*f)(char *), char *x) { if (n == 0) { return x; } else { return times(n-1, f, f(x)); } } int main(int argc, char const* argv[]) { puts(times(1000000000000LL, id, "hello")); return 0; } I tried it with GCC and Clang. GCC on Mac: i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3) $ gcc -O2 a.c -o a $ ./a ... (long long hours) .. The assembly code for the main function with -O2 -m32 option is the following. _main: pushl %ebp movl %esp, %ebp pushl %ebx subl $20, %esp call L13 "L00000000001$pb": L13: popl %ebx leal LC0-"L00000000001$pb"(%ebx), %eax movl %eax, 8(%esp) leal _id-"L00000000001$pb"(%ebx), %eax movl %eax, 4(%esp) movl $-727379968, (%esp) call _times movl %eax, (%esp) call _puts xorl %eax, %eax addl $20, %esp popl %ebx leave ret You see it's calling _times for sure. Clang: $ clang -O2 a.c -o a $ ./a hello This gives you the message immediately! The below is the generated LLVM IR. You can easily tell that it just calls puts function directly. Note that the @.str const is just "hello" ending with null. define i32 @main(i32 %argc, i8** nocapture %argv) nounwind uwtable ssp { entry: %call1 = tail call i32 @puts(i8* getelementptr inbounds ([6 x i8]* @.str, i64 0, i64 0)) nounwind ret i32 0 } GCC on Linux: gcc 4.4 main: pushl %ebp movl %esp, %ebp andl $-16, %esp subl $16, %esp movl $.LC0, (%esp) call puts xorl %eax, %eax leave ret It ran like the one with Clang! When the big number isn't long long I wrote the first argument of times in C as int at first. I got warning messages from both GCC and Clang that the value, 1000000000000, was too big for int. In the case, GCC on Linux made the following assembly code. main: pushl %ebp movl %esp, %ebp .L11: jmp .L11 It's just an infinite loop! That actually makes sense because any int values never go to non-int value 1000000000000. Summary - GHC doesn't eliminate idin optimization - GCC eliminates it except for Apple GCC 4.2 - Clang eliminates id - Some optimizations can conflict if the code depends on undefined behaviour I want to know what the smallest modification for times function can prevents such optimization. smallest-type modification: change -O2 to -O0 smallest-range modification: change id() to cause a side-effect smallest-type modification2: use apple gcc 4.2
http://ujihisa.blogspot.com/2011/10/optimizer-comparisons-skip-nops.html
CC-MAIN-2018-22
en
refinedweb
According to the latest news, exploit kits such as Cool EK and Popads are integrating a new exploit for Java, targeting Java 7u11. An exploit for CVE-2013-0431 has been analyzed and shared by SecurityObscurity, and is also now available as a Metasploit module with some improvements for testability. We would like to use this blog post to share some details about the vulnerabilities abused by this new Java exploit. Step 1: Getting access to restricted classes The first of the vulnerabilities abuses the public findClass method from the com.sun.jmx.mbeanserver.MBeanInstantiator to get access to restricted classes: /** * Gets the class for the specified class name using the MBean * Interceptor's classloader */ public Class << ? > findClass(String className, ClassLoader loader) throws ReflectionException { return loadClass(className, loader); } The findClass method relies on loadClass, where the weakness lives, since it has been called with a null ClassLoader and: (1) It will ask for the MBeanInstantiatior ClassLoader, which should return null (bootstrap loader) and (2) Call Class.forName without ClassLoader, so Class.forName will use the Caller ClassLoader, which should be the bootstrap one. /** * Load a class with the specified loader, or with this object * class loader if the specified loader is null. **/ static Class << ? > loadClass(String className, ClassLoader loader) throws ReflectionException { Class << ? > theClass; if (className == null) { throw new RuntimeOperationsException(new IllegalArgumentException("The class name cannot be null"), "Exception occurred during object instantiation"); } try { if (loader == null) loader = MBeanInstantiator.class.getClassLoader(); // (1) if (loader != null) { theClass = Class.forName(className, false, loader); } else { theClass = Class.forName(className); // (2) } } catch (ClassNotFoundException e) { throw new ReflectionException(e, "The MBean class could not be loaded"); } return theClass; } The method above is abused to get a reference to the restricted sun.org.mozilla.javascript.internal.GeneratedClassLoader class: Class class2 = gimmeClass("sun.org.mozilla.javascript.internal.GeneratedClassLoader"); }); } Step: 2 Getting access to methods The exploit abuses the com.sun.jmx.mbeanserver.Introspector class, which makes an insecure use of the invoke method of the java.lang.reflect.Method class, as documented by Adam Gowdiak. The, exploit, as also explained by Adam, invokes getDeclaredMethod from the java.lang.Class to get access to methods of restricted classes. Specifically the defineClass method from the sun.org.mozilla.javascript.internal.GeneratedClassLoader class: Method method2 = getMethod(class2, "defineClass", false);; } Step 3: Bypassing security control In Java 7 Update 10, the security level for unsigned Java apps switched to "High," which means the user is prompted before any unsigned Java app runs in the browser, the exploits found in the wild aren't bypassing this control, so require user interaction in order to run the exploiting applet: The metasploit module is using the Security Control bypass also found by Adam Gowdiak. Adam noticed that the implementation of the above security levels doesn't take into account Java Applets instantiated with the use of serialization. A serialized version of the Exploit class can be obtained with the use of a Java application based in the code published in Adam's advisory: import java.io.*; public class Serializer { public static void main(String[] args) { try { Exploit b = new Exploit(); // target Applet instance ByteArrayOutputStream baos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream(baos); oos.writeObject(b); FileOutputStream fos = new FileOutputStream("Exploit.ser"); fos.write(baos.toByteArray()); fos.close(); } catch (Exception ex) { ex.printStackTrace(); } } } Demo With all the pieces of the puzzle together, complete Java sandboxing and security level bypassing can be accomplished. You'll want to view the video at 720p and a decent-size monitor if you want to actually read the text in the video below. Want to try this out for yourself? Get your free Metasploit download now or update your existing installation, and let us know if you have any further questions.
https://blog.rapid7.com/2013/02/25/java-abused-in-the-wild-one-more-time/
CC-MAIN-2018-22
en
refinedweb
When: using namespace std; class Volcano { private: string m_EnglishName; string m_nativeName; string m_meaning;Publish int m_elevation; //...rest of class definition }: <?xml version="1.0" encoding="utf-8"?> <AutoVisualizer xmlns=""> <Type Name="Volcano"> <DisplayString>Name: {m_EnglishName,sb}</DisplayString> <Expand> <Item Name="Native name">m_nativeName,sb</Item> <Item Name="Meaning">m_meaning</Item> <Item Name="Elevation">m_elevation</Item> </Expand> </Type> </AutoVisualizer>: struct Foo { int i; int j; #if _DEBUG std::string debugMessage; #endif }; . Join the conversationAdd Comment ".natvis files can now be added to projects or solutions and benefit from source control" With this feature natvis visualizers can finaly become a tool for everyday use! Tiny error: The first natvis example says "<DisplayString>Name: {m_EnglishName,sb}</DisplayString>", yet it seems the next image was created with "<DisplayString>Name: {m_nativeName,sb}</DisplayString>" ("Name: Tahoma" instead of "Name: Mount Rainier") Awesome work! Why don't work with mixed debugging? dmz1978, in the video just below that image they show the natvis file being changed from displaying m_EnglishString to displaying m_nativeName. All this goodness is worthless to my group so I'll repeat the plea to update the Mixed Mode debugging engine to handle .natvis. @Hrvoje: Glad to hear this will help! @dmz1978: I overlooked this discrepancy in the image after reordering these during a revision, the images should now be all correct based on the reading order. @Ofek: Thanks, kudos for the blog post on the EE:) @André, fourbadcats: We are currently working on getting .natvis visualizers to work when mixed mode debugging. Keep an eye on the VC++ blog for more details on this in the upcoming months. The new system is definitely an improvement over the VS2010 autoexp.dat system. One thing I miss, though, is the ability to trigger expansions from within the display string. This was useful both for conditionals and for lists. As an example, it was possible to write a visualizer in VS2010 that would crawl the stack frame structures of a scripting engine and piece together the signature for each function, such that a frame could display as "int function(int x=4, str y="foo") [test.s:43]". This doesn't appear to be possible in the natvis system. Other wish list items (forgive me if some are already possible in VS2015): – Display hex values without the 0x prefix ("xb" formatter). – Display a Pascal-style non-delimited, length-specified string. – Apply a shared condition to multiple items instead of having to repeat it on each (item group). – Temporary variables for caching common subexpressions. – Extension views or pseudo-targets for shared types that don't require modifying the base visualizer, e.g. interpret a std::vector<char> field as a bytecode stream that's expandable from a [bytecode] entry. Thanks for that. I've been putting off making these debugging extensions for a templated, fixed point arithmetic class where numbers are represented within an int type with specified numbers of fractional and non-fractional bits for simulating digital circuits with varying precision and this would provide much improved visualization during debugging. @Adam That's excellent news re: possible Mixed mode support. Thank you for listening and working on this! @phaeron > Display a Pascal-style non-delimited, length-specified string. You can specify length in expression "pascalString.data,[pascalString.length] > Temporary variables for caching common subexpressions. You can do it, but only in CustomListItems msdn.microsoft.com/…/jj620914.aspx > Extension views or pseudo-targets for shared types that don't require modifying the base visualizer, e.g. interpret a std::vector<char> field as a bytecode stream that's expandable from a [bytecode] entry. If I understand correct you can use IncludeView and ExcludeView attributes for that Please allow use Synthetic item inside CustomListItems. I want display compressed_vector as vector of synthetic pairs…/classboost_1_1numeric_1_1ublas_1_1compressed__vector.html Still waiting for ability to write a visualizer that provides a pointer to a structure (class instance) or integer, and allows you to produce an arbitrary output string. For example, visualize integer i=123456789 as "123,456,789". There are many times when I could use this facility. Another useful feature would be allow me to display an image from a pointer to a structure (class instance). For example, display a CBitmap. Are expression evaluators still supported? In autoexp.dat I could provide a custom DLL with the syntax $ADDIN(dll, entrypoint). Is there a way to do this with .natvis? I've used them in the past to crawl a data structure in the debugged process and display the result. It worked great for both live and postmortem debugging. Thanks for the detailed writeup! :) I would really, really like a tutorial on how to write a graphical visualizer in c++ e.g. for a custom polygon type, to supplement the natvis text visualizer. It is sometimes hard to make sense of a Geometry by just looking at a text representation. I have currently only seen tutorials on how to write graphical visualizers in c#, but with my c++ types, this isn't that helpful… @phaeron: The xb format specifier is supported for natvis, this is missing from the linked documentation and I will work on getting this updated. For the Pascal-style string, you could use the array-length format specifier "string,[length]" which makes the length an expression rather than a raw numbers. Alternatively, if the string is length-prefixed, like a BSTR, you can simply use the “,bstr” format specifier. To enable extension views for shared types that don't require modifying the base visualizer, you should be able to use the existing IncludeView attribute and the “,view” format specifier. For more info see this blog post: blogs.msdn.com/…/using-visual-studio-2013-to-write-maintainable-native-visualizations-natvis.aspx The other ideas are great suggestions and we will add them to the backlog for future consideration. @fourbadcats: Of course, keep that feedback coming:) @doug: Glad that this will help! @Arkady: Thanks for sharing your knowledge:) We will look into allowing Synthetic to be used inside of CustomListItems. @Virtual memory: Formatting numbers like this would be nice, we will look into adding support for such a format specifier. For visualizing a CBitmap object, this is possible using the CustomVisualizer and UIVIsualizer tags. We are working on getting these properly documented and will put samples online when ready. The Image Watch extension on MSDN does exactly this for OpenCV image types and allows adding support for basic custom image types, however I am not sure if its extensibility model supports the MFC CBitmap: visualstudiogallery.msdn.microsoft.com/e682d542-7ef3-402c-b857-bbfba714f78d @Ivo: This is what the LegacyAddin tag is used for, although it does not appear to be documented on MSDN. In theory, any addin dll that was accepted by the legacy engine should be compatible with the new engine via the LegacyAddin tag. The API set available to a legacy addin is somewhat limited – you can only control the display string (no expansion), and your access to the debugger includes only reading memory from the debuggee (no ability to query symbols or communicate with other parts of Visual Studio). If LegacyAddin is not sufficient, CustomVisualizer and UIVisualizer offer greater levels of flexibility which we are working on properly documenting. I plan on doing future blog posts that show how to create these types of visualizers so keep an eye on the VC++ blog. @Mats: Unfortunately the documentation for authoring advanced native visualizers is minimal, so we are working towards providing some buildable VS2015 samples for such types of visualizers to help developers create their own advanced visualizers. Keep an eye on the VC++ blog for future updates to the documentation and samples. @Adam : About the “xb” format specifier in natvis for VS2015 : is it me or does it replaces the ‘0x’ by ’00’, thus displaying uint64 with 18 characters ? I’m trying to disaply our custom GUID type, stored as 2 unsigned 64 with this : UID={myHigh,xb}{myLow,xb} and they’re shown like this (without the dashes) : 00-32chars-00-32chars Is there a way to not show this prefix ? Thanks. Thanks, Adam. I agree the legacy addins are quite limited as they don't have access to symbols. I'm looking forward to the documentation for the new types of visualizers. Ideally they can query symbols, read from the process memory, and work when viewing dumps. @Mats Taraldsvik: Recently I wrote an extension doing exactly that, visualizing graphical/geometrical data during debugging (see visualstudiogallery.msdn.microsoft.com/4b81868b-8901-408f-a28e-25a6580788fb). For now it suports only Boost.Geometry and Boost.Polygon types but certainly it could be extended. I can imagine that it could allow users to define their own types (in an XML file similar to *.natvis) and display them in one of the known ways (e.g. as a polygon). @Adam Welch: Thanks! Really looking forward to documentation and examples! :) @Adam Wulkiewicz: Thanks, I noticed.:) But aren't your visualizers written in c#/.net? @Mats Taraldsvik Ah yes, the language was your main concern. Sorry for confusion. Indeed, I wrote the extension in C#, because it's convenient (WPF, designer) and as you said there are more resources available in the net. Also because according to my possibly limited knowledge even with C++ I'd be still forced to deal with CLI/.NET. By saying that you saw examples in the net, were you talking about this interface: msdn.microsoft.com/…/e2zc529c.aspx ? And I guess you was thinking about deserializing native, unmanaged C++ objects on the debugee side and then somehow send the data to the debugger? Is it the way how it could be done? This way indeed writing in C++ could make the difference. Still, the debugee part could probably be written in C++ and C# used for the UI. I decided to do something different. On a higher level I'm using EnvDTE interface to run expressions with a Debugger, to access data that is needed. It's because I though that ultimately it'd be the most convenient for the users if it was possible to define how data should be accessed through expressions, just like it's done in the *.natvis file. So if a user wanted to add a visualizer for his type he would simply write some XML defining how e.g. coordinates should be accessed. He wouldn't be forced to write the code or his own extension. Though I guess this approach may be less performant than it could be. @Adam Wulkiewicz Yes, I need to visualize native C++ objects. Well, I was thinking about this UIVisualizer tutorial, which is written in C# only, but explicitly states: "Note that you can implement this VSPackage in different programming languages (this sample uses C#). You can just pick the language of your choice in Visual Studio Package Wizard when you are creating the project." code.msdn.microsoft.com/…/Writing-graphical-debugger-a17e3d75 I'm looking forward to more thorough documentation and tutorials/examples :) My VS2015 lets me create the natvis file but debugger ignores them. What am I doing wrong? @Adam Zielinski: Please send me an email with your .natvis file so we can take a look, thanks! [email protected] It was about time someone improved on the native debugger visualizers… They were comparable in developer-friendliness with Olly Debugger… > Since VS2012, Visual Studio had provided the .natvis visualizer format for declaring custom visualizations for different C/C++ types. Really? That's the first time I hear about this and I see no "natvis" entry in my NEW FILE menu… If you also provided a cross-platform framework like Qt (containing the basic stuff: string, array, path, FileSystem, operations) for development it would be cool. No offense, but std:string et al. ain't good… @Me: The .natvis format has existed officially since 2012, however the .natvis item template is new in VS2015. The template only shows up as an option when you use the "Project" menu or right-click the project/solution folder inside the solution explorer and use the "add new item" option. The "File" menu option to add items does not actually add them to your project or solution and would not work for this feature since being a part of the solution is how the debugger can pick up the visualizer file. I have updated the blog post to more clearly reflect that the item template is found in Project->Add New Item->Visual C++->Utility->Debugger Visualization File (.natvis). While providing visualizers for 3rd party framework is well beyond the scope of the Visual C++ team, we want to ensure that VS provides great capabilities for all framework/library creators to provide their own visualizers. Qt are a great example here because they actually maintain their own .natvis visualizers for Visual Studio 2012+ to enable a great debugging experience on VS. Here is there visualizer for Qt5: github.com/…/qt5.natvis I wonder if I could download a stl.natvis file directly which could visualize vector and other stl containers mostly as this pic shows: I could find no such implement in the original stl.natvis file with installation of VS 2015 Ent. I really want to get a similar visualizer as I used to have with VS 2012. Instead of rewriting a customized one all over again. Hoping to get some help asap. Thanks a lot :-) I’m using boost::string_view. Its got a const char* ptr_ and a unsigned int len_ data member. When debugging its cumbersome, as the default visualiser will show the char ptr_ with more charachters than length (as string_view is not null terminated). I tried creating my own visualiser by looking at the std::string visualiser. But I can even get it to be used by the debugger. Any help appreciated. Broken example: {MyLen= {len_}} {MyPtr= {ptr_}} After much pain I found the solution. It works with: “{ptr_,[len_]}” @AdamWelch – it has been nearly a year. Any news regarding mixed debugging support?
https://blogs.msdn.microsoft.com/vcblog/2015/09/28/debug-visualizers-in-visual-c-2015/
CC-MAIN-2018-22
en
refinedweb
Lately I came to find Django a bit top heavy for one of my projects, so I chose Flask as a lighter and smaller alternative. After fiddling with the tutorials for a bit I wanted to have a setup with several modules. Suprisingly that wasn’t as easy to do as the snippets and examples showed several options and configurations and… So, this is what worked for me. May not be the true gospel but I wanted modules to be set to certain urls like mounted apps in padrino. This is what I came up with: + Project -- start.py + module1 -- __init__.py -- app.py + module2 -- __init__.py -- app.py So module1 and 2 are two functional units which should answer to specific prefixes (localhost:5000/module1 and localhost:5000/module2) and start.py is the file to run the whole show. I used flask-blueprint to get it all under the roof. First let’s get the modules to behave like modules. In module1/app.py I added: from flask import Blueprint app1 = Blueprint('app1', __name__) ... @app1.route ... For module2 app.py looks similar except that app1 is changed to app2. So, now we have the blueprints, of which the project does not know yet. In fact we don’t have any app so far. All the nutrs and bolts go into start.py: from flask import Flask from module1.app import app1 from module2.app import app2 project = Flask(__name__) project.register_blueprint(app1, url_prefix='/path1') project.register_blueprint(app2. url_prefix='/path2') if __name__ == '__main__': project.run() This is the beauty of blueprint (imho). Import the blueprint, register it and pu t it on a dedicated path. Done. To modules in a flask-application.
https://kodekitchen.wordpress.com/tag/microframework/
CC-MAIN-2018-22
en
refinedweb
Hey everyone, Alright, my project requires that I write a program to demonstrate my understanding of class inheritance. Problem is the teacher wants us to use Java, something I have never used and the teacher did not even give us a crash course in. Right now I am trying to compile this file (Publication.java) which contains the super class. I get errors saying the strings are unrecognizable symbols. This is probably pretty simple so could anyone tell me what is wrong? public class Publication { // Constructors public Publication () {title = ""; medium = ""; copies = 0;} public Publication (string name, string type) // Yields an error on this line {title = name; medium = type; copies = 0;} // Methods public int copiesprinted () {return copies;} public void incrementcopies () {copies++;} // Data private string title, medium; private int copies; }
https://www.daniweb.com/programming/software-development/threads/20823/basic-help-with-java-class
CC-MAIN-2018-22
en
refinedweb
using the Spring MVC Test framework. During this blog post we will write unit tests for controller methods which provide CRUD functions for todo entries. Let’s get started. Getting The Required Dependencies with Maven We can get the required testing dependencies by adding the following dependency declarations to our POM file: - Hamcrest 1.3 (hamcrest-all). We use Hamcrest matchers when we are writing assertions for the responses. - Junit 4.11. We need to exclude the hamcrest-core dependency because we already added the hamcrest-all dependency. - Mockito 1.9.5 (mockito-core). We use Mockito as our mocking library. - Spring Test 3.2.3.RELEASE - JsonPath 0.8.1 (json-path and json-path-assert). We use JsonPath when we are writing assertions for JSON documents returned by our REST API. The relevant dependency declarations looks as follows: > <dependency> <groupId>com.jayway.jsonpath</groupId> <artifactId>json-path</artifactId> <version>0.8.1</version> <scope>test</scope> </dependency> <dependency> <groupId>com.jayway.jsonpath</groupId> <artifactId>json-path-assert</artifactId> <version>0.8.1</version> <scope>test</scope> </dependency> Let’s move on and talk a bit about the configuration of our unit tests. Configuring Our Unit Tests The unit tests which we will write during this blog post use the web application context based configuration. This means that we configure the Spring MVC infrastructure by using either an application context configuration class or a XML configuration file. Because the first part of this tutorial described the principles which we should follow when we are configuring the application context of our application, this issue is not discussed in this blog post. However, there is one thing that we have to address here. The application context configuration class (or file) which configures the web layer of our example application does not create an exception resolver bean. The SimpleMappingExceptionResolver class used in the earlier parts of this tutorial maps exception class name to the view which is rendered when the configured exception is thrown. This makes sense if we are implementing a “normal” Spring MVC application. However, if we are implementing a REST API, we want to transform exceptions into HTTP status codes. This behavior is provided by the ResponseStatusExceptionResolver class which is enabled by default.. Writing Unit Tests for a REST API Before we can start writing unit tests for our REST API, we need to understand two things: - We need to know what are the core components of the Spring MVC Test framework. These components are described in the second part of this tutorial. - We need to know how we can write assertions for JSON documents by using JsonPath expressions. We can get this information by reading my blog post which describes how we can write clean assertions with JsonPath. Next we will see the Spring MVC Test framework in action and write unit tests for the following controller methods: - The first controller methods returns a list of todo entries. - The second controller method returns the information of a single todo entry. - The third controller method adds a new todo entry to the database and returns the added todo entry. Get Todo Entries The first controller method returns a list of todo entries which are found from the database. Let’s start by taking a look at the implementation of this method. Expected Behavior The controller method which returns all todo entries stored to the database is implemented by following these steps: - It processes GET requests send to url ‘/api/todo’. - It gets a list of Todo objects by calling the findAll() method of the TodoService interface. This method returns all todo entries which are stored to the database. These todo entries are always returned in the same order. - It transforms the received list into a list of TodoDTO objects. - It returns the list which contains TodoDTO objects. The relevant part of the TodoController class looks as follows: import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.*; import java.util.ArrayList; import java.util.List; @Controller public class TodoController { private TodoService service; @RequestMapping(value = "/api/todo", method = RequestMethod.GET) @ResponseBody public List<TodoDTO> findAll() { List<Todo> models = service.findAll(); return createDTOs(models); } private List<TodoDTO> createDTOs(List<Todo> models) { List<TodoDTO> dtos = new ArrayList<>(); for (Todo model: models) { dtos.add(createDTO(model)); } return dtos; } private TodoDTO createDTO(Todo model) { TodoDTO dto = new TodoDTO(); dto.setId(model.getId()); dto.setDescription(model.getDescription()); dto.setTitle(model.getTitle()); return dto; } } When a list of TodoDTO objects is returned, Spring MVC transforms this list into a JSON document which contains a collection of objects. The returned JSON document looks as follows: [ { "id":1, "description":"Lorem ipsum", "title":"Foo" }, { "id":2, "description":"Lorem ipsum", "title":"Bar" } ] Let’s move on and write an unit test which ensures that this controller method is working as expected. Test: Todo Entries Are Found We can write an unit test for this controller method by following these steps: - Create the test data which is returned when the findAll() method of the TodoService interface is called. We create the test data by using a test data builder class. - Configure our mock object to return the created test data when its findAll() method is invoked. - Execute a GET request to url ‘/api/todo’. - Verify that the HTTP status code 200 is returned. - Verify that the content type of the response is ‘application/json’ and its character set is ‘UTF-8’. - Get the collection of todo entries by using the JsonPath expression $ and ensure that that two todo entries are returned. - Get the id, description, and title of the first todo entry by using JsonPath expressions $[0].id, $[0].description, and $[0].title. Verify that the correct values are returned. - Get the id, description, and title of the second todo entry by using JsonPath expressions $[1].id, $[1].description, and $[1].title. Verify that the correct values are returned. - Verify that the findAll() method of the TodoService interface is called only once. - Ensure that no other methods of our mock object java.util.Arrays; import static org.hamcrest.Matchers.*;_TodosFound_ShouldReturnFoundTodoEntries()"))); verify(todoServiceMock, times(1)).findAll(); verifyNoMoreInteractions(todoServiceMock); } } Our unit test uses a constant called APPLICATION_JSON_UTF8 which is declared in the TestUtil class. The value of that constant is a MediaType object which content type is ‘application/json’ and character set is ‘UTF-8’. The relevant part of the TestUtil class looks as follows: public class TestUtil { public static final MediaType APPLICATION_JSON_UTF8 = new MediaType(MediaType.APPLICATION_JSON.getType(), MediaType.APPLICATION_JSON.getSubtype(), Charset.forName("utf8") ); } Get Todo Entry The second controller method which we have to test returns the information of a single todo entry. Let’s find out how this controller method is implemented. Expected Behavior The controller method which returns the information of a single todo entry is implemented by following these steps: - It processes GET requests send to url ‘/api transforms the Todo object into a TodoDTO object. - It returns the created TodoDTO object. The source code of our controller method looks as follows: import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.*; @Controller public class TodoController { private TodoService service; @RequestMapping(value = "/api/todo/{id}", method = RequestMethod.GET) @ResponseBody public TodoDTO findById(@PathVariable("id") Long id) throws TodoNotFoundException { Todo found = service.findById(id); return createDTO(found); } private TodoDTO createDTO(Todo model) { TodoDTO dto = new TodoDTO(); dto.setId(model.getId()); dto.setDescription(model.getDescription()); dto.setTitle(model.getTitle()); return dto; } } The JSON document which is returned to the client looks as follows: { "id":1, "description":"Lorem ipsum", "title":"Foo" } Our next question is: What happens when a TodoNotFoundException is thrown? Our example application has an exception handler class which handles application specific exceptions thrown by our controller classes. This class has an exception handler method which is called when a TodoNotFoundException is thrown. The implementation of this method writes a new log message to the log file and ensures that the HTTP status code 404 is send back to the client. The relevant part of the RestErrorHandler class looks as follows: import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.http.HttpStatus; import org.springframework.web.bind.annotation.ControllerAdvice; import org.springframework.web.bind.annotation.ExceptionHandler; import org.springframework.web.bind.annotation.ResponseStatus; @ControllerAdvice public class RestErrorHandler { private static final Logger LOGGER = LoggerFactory.getLogger(RestErrorHandler.class); @ExceptionHandler(TodoNotFoundException.class) @ResponseStatus(HttpStatus.NOT_FOUND) public void handleTodoNotFoundException(TodoNotFoundException ex) { LOGGER.debug("handling 404 error on a todo entry"); } } We have to write two unit tests for this controller method: - We have to write a test which ensures that our application is working properly when the todo entry is not found. - We have to write a test which verifies that the correct data is returned to the client when the todo entry is found. Let’s see how we can write these tests. Test 1: Todo Entry Is Not Found First, we must ensure that our application is working properly when a todo entry is not found. We can write an unit test which ensures this by following these steps: - Configure our mock object to throw a TodoNotFoundException when its findById() method is called and the id of the requested todo entry is 1L. - Execute a GET request to url ‘/api/todo/1’. - Verify that the HTTP status code 404 is returned. - Ensure that the findById() method of the TodoService interface is called only once by using the correct method parameter (1L). - Verify that no other methods of the TodoService interface are called duringReturnHttpStatusCode404() throws Exception { when(todoServiceMock.findById(1L)).thenThrow(new TodoNotFoundException("")); mockMvc.perform(get("/api/todo/{id}", 1L)) .andExpect(status().isNotFound()); verify(todoServiceMock, times(1)).findById(1L); verifyNoMoreInteractions(todoServiceMock); } } Test 2: Todo Entry Is Found Second, we must write a test which ensures that the correct data is returned when the requested todo entry is found. We can write a test which ensures this by following these steps: - Create the Todo object which is returned when our service method is called. We create this object by using our test data builder. - Configure our mock object to return the created Todo object when its findById() method is called by using a method parameter 1L. - Execute a GET request to url ‘/api/todo/1’. - Verify that the HTTP status code 200 is returned. - Verify that the content type of the response is ‘application/json’ and its character set is ‘UTF-8’. - Get the id of the todo entry by using the JsonPath expression $.id and verify that the id is 1. - Get the description of the todo entry by using the JsonPath expression $.description and verify that the description is “Lorem ipsum”. - Get the title of the todo entry by using the JsonPath expression $.title and verify that the title is “Foo”. - Ensure that the findById() method of the TodoService interface is called only once by using the correct method parameter (1L). - Verify that the other methods of our mock object are.is;ReturnFoundTodoEntry() throws Exception { Todo found = new TodoBuilder() .id(1L) .description("Lorem ipsum") .title("Foo") .build(); when(todoServiceMock.findById(1L)).thenReturn(found); mockMvc.perform(get("/api/todo/{id}", 1L)) .andExpect(status().isOk()) .andExpect(content().contentType(TestUtil.APPLICATION_JSON_UTF8)) .andExpect(jsonPath("$.id", is(1))) .andExpect(jsonPath("$.description", is("Lorem ipsum"))) .andExpect(jsonPath("$.title", is("Foo"))); verify(todoServiceMock, times(1)).findById(1L); verifyNoMoreInteractions(todoServiceMock); } } Add New Todo Entry The third controller method adds a new todo entry to the database and returns the information of the added todo entry. Let’s move on and find out how it is implemented. Expected Behavior The controller method which adds new todo entries to the database is implemented by following these steps: - It processes POST requests send to url ‘/api/todo’. - It validates the TodoDTO object given as a method parameter. If the validation fails, a MethodArgumentNotValidException is thrown. - It Adds a new todo entry to the database by calling the add() method of the TodoService interface and passes the TodoDTO object as a method parameter. This method adds a new todo entry to the database and returns the added todo entry. - It transforms the created Todo object into a TodoDTO object. - It returns the TodoDTO object. The source code of our controller method looks as follows: import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.*; import javax.validation.Valid; @Controller public class TodoController { private TodoService service; @RequestMapping(value = "/api/todo", method = RequestMethod.POST) @ResponseBody public TodoDTO add(@Valid @RequestBody TodoDTO dto) { Todo added = service.add(dto); return createDTO(added); } private TodoDTO createDTO(Todo model) { TodoDTO dto = new TodoDTO(); dto.setId(model.getId()); dto.setDescription(model.getDescription()); dto.setTitle(model.getTitle()); return dto; } }. } As we can see, this class declares three validation constraints which are described in the following: - The maximum length of the description is 500 characters. - The title of a todo entry cannot be empty. - The maximum length of the title is 100 characters. If the validation fails, our error handler component ensures that - The HTTP status code 400 is returned to the client. - The validation errors are returned to the client as a JSON document. Because I have already written a blog post which describes how we can add validation to a REST API, the implementation of the error handler component is not discussed in this blog post. However, we need to know what kind of a JSON document is returned to the client if the validation fails. This information is given in the following. If the title and the description of the TodoDTO object are too long, the following JSON document is returned to the client: { "fieldErrors":[ { "path":"description", "message":"The maximum length of the description is 500 characters." }, { "path":"title", "message":"The maximum length of the title is 100 characters." } ] } Note: Spring MVC does not guarantee the ordering of the field errors. In other words, the field errors are returned in random order. We have to take this into account when we are writing unit tests for this controller method. On the other hand, if the validation does not fail, our controller method returns the following JSON document to the client: { "id":1, "description":"description", "title":"todo" } We have to write two unit tests for this controller method: - We have to write a test which ensures that our application is working properly when the validation fails. - We have to write a test which ensures that our application is working properly when a new todo entry is added to the database. Let’s find out how we can write these tests. Test 1: Validation Fails Our first test ensures that our application is working properly when the validation of the added todo entry fails. We can write this test by following these steps: - Create a title which has 101 characters. - Create a description which has 501 characters. - Create a new TodoDTO object by using our test data builder. Set the title and the description of the object. - 400 is returned. - Verify that the content type of the response is ‘application/json’ and its content type is ‘UTF-8’. - Fetch the field errors by using the JsonPath expression $.fieldErrors and ensure that two field errors are returned. - Fetch all available paths by using the JsonPath expression $.fieldErrors[*].path and ensure that field errors about the title and description fields are found. - Fetch all available error messages by using the JsonPath expression $.fieldErrors[*].message and ensure that error messages about the title and description fields are found. - Verify that the methods of our mock object are not called during our.containsInAnyOrder; import static org.hamcrest.Matchers.hasSize;_TitleAndDescriptionAreTooLong_ShouldReturnValidationErrorsForTitleAndDescription() throws Exception { String title = TestUtil.createStringWithLength(101); String description = TestUtil.createStringWithLength(501); TodoDTO dto = new TodoDTOBuilder() .description(description) .title(title) .build(); mockMvc.perform(post("/api/todo") .contentType(TestUtil.APPLICATION_JSON_UTF8) .content(TestUtil.convertObjectToJsonBytes(dto)) ) .andExpect(status().isBadRequest()) .andExpect(content().contentType(TestUtil.APPLICATION_JSON_UTF8)) .andExpect(jsonPath("$.fieldErrors", hasSize(2))) .andExpect(jsonPath("$.fieldErrors[*].path", containsInAnyOrder("title", "description"))) .andExpect(jsonPath("$.fieldErrors[*].message", containsInAnyOrder( "The maximum length of the description is 500 characters.", "The maximum length of the title is 100 characters." ))); verifyZeroInteractions(todoServiceMock); } } Our unit test uses two static methods of the TestUtil class. These methods are described in the following: - The createStringWithLength(int length) method creates a new String object with the given length and returns the created object. - The convertObjectToJsonBytes(Object object) method converts the object given as a method parameter into a JSON document and returns the content of that document as a byte array. The source code of the TestUtil class looks as follows: import com.fasterxml.jackson.annotation.JsonInclude; import com.fasterxml.jackson.databind.ObjectMapper; import org.springframework.http.MediaType; import java.io.IOException; import java.nio.charset.Charset; public class Test(JsonInclude.Include.NON_NULL); return mapper.writeValueAsBytes(object); } public static String createStringWithLength(int length) { StringBuilder builder = new StringBuilder(); for (int index = 0; index < length; index++) { builder.append("a"); } return builder.toString(); } } Test 2: Todo Entry Is Added to The Database The second unit test ensures that our controller is working properly when a new todo entry is added to the database. We can write this test by following these steps: - Create a new TodoDTO object by using our test data builder. Set “legal” values to the title and description fields. - Create a Todo object which is returned when the add() method of the TodoService interface is called. - Configure our mock object to return the created Todo object when its add() method is called and a TodoDTO object is given as a parameter. - 200 is returned. - Verify that the content type of the response is ‘application/json’ and its content type is ‘UTF-8’. - Get the id of the returned todo entry by using the JsonPath expression $.id and verify that the id is 1. - Get the description of the returned todo entry by using the JsonPath expression $.description and verify that the description is “description”. - Get the title of the returned todo entry by using the JsonPath expression $.title and ensure that the title is “title”. - Create an ArgumentCaptor object which can capture TodoDTO objects. - Verify that the add() method of the TodoService interface is called only once and capture the object given as a parameter. - Verify that the other methods of our mock object are not called during our test. - Verify that the id of the captured TodoDTO object is null. - Verify that the description of the captured TodoDTO object is “description”. - Verify that the title of the captured TodoDTO object is “title”. The source code of our unit test looks as follows: import org.junit.Test; import org.junit.runner.RunWith; import org.mockito.ArgumentCaptor; junit.framework.Assert.assertNull; import static org.hamcrest.Matchers.is; import static org.junit.Assert.assertThat;ReturnAddedEntry() throws Exception { TodoDTO dto = new TodoDTOBuilder() .description("description") .title("title") .build(); Todo added = new TodoBuilder() .id(1L) .description("description") .title("title") .build(); when(todoServiceMock.add(any(TodoDTO.class))).thenReturn(added); mockMvc.perform(post("/api/todo") .contentType(TestUtil.APPLICATION_JSON_UTF8) .content(TestUtil.convertObjectToJsonBytes(dto)) ) .andExpect(status().isOk()) .andExpect(content().contentType(TestUtil.APPLICATION_JSON_UTF8)) .andExpect(jsonPath("$.id", is(1))) .andExpect(jsonPath("$.description", is("description"))) .andExpect(jsonPath("$.title", is("title"))); ArgumentCaptor<TodoDTO> dtoCaptor = ArgumentCaptor.forClass(TodoDTO.class); verify(todoServiceMock, times(1)).add(dtoCaptor.capture()); verifyNoMoreInteractions(todoServiceMock); TodoDTO dtoArgument = dtoCaptor.getValue(); assertNull(dtoArgument.getId()); assertThat(dtoArgument.getDescription(), is("description")); assertThat(dtoArgument.getTitle(), is("title")); } } Summary We have now written unit tests for a REST API by using the Spring MVC Test framework. This tutorial has taught us four things: - We learned to write unit tests for controller methods which read information from the database. - We learned to write unit tests for controller methods which add information to the database. - We learned how we can transform DTO objects into JSON bytes and send the result of the transformation in the body of the request. - We learned how we can write assertions for JSON documents by using JsonPath expressions. As always, you can get the example application of this blog post from Github. I recommend that you check it out because it has a lot of unit tests which were not covered in this blog post. Hi Petri, Thank you for this very useful following step by step your recommendations @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations = { "classpath:rtls-management-test-context.xml", "classpath:rtls-management-application.xml" }) @WebAppConfiguration @TransactionConfiguration(defaultRollback = true, transactionManager = "hibernatetransactionManager") @Transactional public class UserRestServiceTest { Logger logger = Logger.getLogger(UserRestServiceTest.class.getName()); private MockMvc mockMvc; @Autowired private UserService userService; (userService); mockMvc = MockMvcBuilders.webAppContextSetup(webApplicationContext) .build(); } private User addUser() { logger.info("-> addUser"); User user = new User(); long id = 1; user.setId(id); user.setEnabled(true); user.setFirstname("youness"); user.setUsername("admin"); user.setName("lemrabet"); user.setPassword("21232f297a57a5a743894a0e4a801fc3"); user.setEmail("[email protected]"); logger.info(" findAllUsers"); User user = addUser(); // stubbing when(userService.findAll()).thenReturn(Arrays.asList(user)); mockMvc.perform(get("/get/all")) .andExpect(status().isOk()) .andExpect( content().contentType(TestUtil.APPLICATION_JSON_UTF8)) .andExpect(jsonPath("$", hasSize(2))) .andExpect(jsonPath("$[0].id", is(1))) .andExpect(jsonPath("$[0].enabled", is(true))) .andExpect(jsonPath("$[0].firstname", is("youness"))) .andExpect(jsonPath("$[0].username", is("admin"))) .andExpect(jsonPath("$[0].name", is("lemrabet"))) .andExpect( jsonPath("$[0].password", is("21232f297a57a5a743894a0e4a801fc3"))) .andExpect( jsonPath("$[0].email", is("[email protected]"))); verify(userService, times(1)).findAll(); verifyNoMoreInteractions(userService); logger.info("<- findAllUsers"); } } I get the following error : Failed tests: findAllUsers(com.smartobjectsecurity.management.rest.user.UserRestServiceTest): Status expected: <200> but was: <404> Thank you for your help The status code 404 means that the tested controller method was not found. There are typically two reasons for this: I hope that this answered to your question. Hi Petri, Thanks for the wonderful post. I have implemented a similar kind of test in my environment but I am getting the 404 error. I have checked for the URL as well as included the @ComponentScan(basePackages = {“somepackage”}) . Do you recommend any other thing to be taken care of? Thanks in anticipation. Regards, Amishi Shah Hi, Usually when you get a 404 response status, the reason is that the URL is not correct or the controller class is not found during component scan. You mentioned that you checked the URL and configured the Spring container to scan the correct package. Did you remember to annotate the controller class with the @Controlleror @RestControllerannotation? Hi Petri, I have proper component scan, right url and using @controller annotation for controller class, but still facing 404 issue, do you know any other possible reason. But I am using @Path(“/list”) instead of @RequestMapping in controllers, will it make any difference? Thanks Well, to be honest, I have never used JAX-RS annotations in my Spring web applications even though it seems that you are able to do it if you use Spring Boot. In other words, I would use the @RequestMappingannotation. So you think this tutorial or any thing else will work with JAX-RS annotations :( ? Because now we can’t switch to @RequestMapping annotation. Any help would be highly appreciated. Hi, I did some digging and it seems that you cannot use Spring MVC Test if you use JAX-RS because Spring MVC Test framework uses its own mock implementation of the Servlet API (instead of deploying the application to a servlet container). That being said, you have still several other options: RestTemplatein your test class Note that each one of these tools require that you deploy your application to a servlet container before you run your tests. Hi I was wondering how I could write a test for testing xml response instead of json . Any sample code would be appreciated. I learnt from a lot from this tutorial thanks. Hi Lukman, The answer of this StackOverflow question answers to your question as well. Remember that you have to add XmlUnit to your pom.xml file (the correct scope for this dependency is test). Thanks Petri hi Petri I have the code from the blog. this is probably something silly but would like your input if you can. Everything from the blog is almost the same except the integration test @ContextConfiguration which I changed to @ContextConfiguration(locations = {“classpath:spring/root-Context.xml”, “classpath:spring/app/servlet-context.xml”}) as I made some changes to the files. The integration test run and executes through the controller fine but fails with a response content type Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.619 sec <<< FAILURE! testController(my.tests.web.SomeControllerIntegrationTest) Time elapsed: 4.331 sec <<< FAILURE! java.lang.AssertionError: Content type expected: but was: at org.springframework.test.util.AssertionErrors.fail(AssertionErrors.java:60) at org.springframework.test.util.AssertionErrors.assertEquals(AssertionErrors.java:89) at org.springframework.test.web.servlet.result.ContentResultMatchers$1.match(ContentResultMatchers.java:71) at org.springframework.test.web.servlet.MockMvc$1.andExpect(MockMvc.java:141) The mockMvc object is expecting Json in the response but it is not so. Thanks in advance Hi Chris, There are no silly questions! I assume that you have tested the controller method manually and verified that it returns a JSON document in the request body. Are you using Spring 4? I remember that I was having a similar problem in a project which uses Spring 4 instead of Spring 3.2. I was able to solve it by setting the Accept request header: Is it possible to see the request mapping of the tested controller method (I don’t need to see the actual implementation)? Hi Petri, Yes. This is the controller method under test. It returns a list of terms in json format. If i do a mvn tomcat:run and test it using rest client I can see that it does. @RequestMapping(value = “/terms/”, method = RequestMethod.GET) public ModelAndView listTerms() {…} The version of spring are newer and so is the JSon mapper. I wonder why there is such a version to version compatibility issue with these both but this another animal. 3.2.0.RELEASE 1.9.2 Hi Petri Thank you for your input previously. I think I see how I managed to screw this up. In my spring configuration. I am using .. which I changed now to .. turns out surprisingly that it actually does the conversion from pojo to json but renders it in plain text – which makes sense; so that it can be captured in a jsp etc., except it was not obvious. Many thanks again. Hi Chris, If you want to simplify your code a bit (I assume that you create the ModelAndViewobject inside the controller method), you can change the controller method to look like this: This simplifies the code of your controller method and more importantly, it fixes your content type problem. Hi Petri, Thanks for giving all information i need you help please tell me how i can write the junit test preparation on following code. @RequestMapping(value=”/search-calendars/{resourceTypeCode}”, method=RequestMethod.GET , produces = MediaType.APPLICATION_JSON_VALUE) public ResponseEntity<List> searchResource( @PathVariable String resourceTypeCode, @RequestParam(value = “resourceId”, required = false) Integer resourceId) throws SunshineException { if(StringUtils.isEmpty(resourceTypeCode)){ throw new SunshineException(“Resource code should not be Empty”); } SearchCalendarDTO resourceTypeDTO = new SearchCalendarDTO(resourceTypeCode,resourceId ); List searchlist = calendarService.searchCalendars(resourceTypeDTO); return new ResponseEntity(searchlist, HttpStatus.OK); } } Thanks, Abhay Hi Petri, First, I want to thank you for this great tutorial, it helps me a lot. I found an weird issue in bean validation, spring mvc controllers’ unit tests. To reproduce you juste have to upgrade your dependency for the jsp api to the lastest version and execute your first test in TodoControllerTest (`add_EmptyTodoEntry_ShouldReturnValidationErrorForTitle()`) Upgrading that dependency will make this test fail, because I don’t know how but it kinda bypass bean validation. So you should get a 201 instead of 400. Can you confirm this? Is it a known issue? Thanks in advance for your time. Hi Ahmad, actually I couldn’t reproduce this issue. Did you upgrade any other dependencies to newer versions? Hi Petri, Thanks for your reply, yes actually I upgraded other dependencies (sorry I forgot to mentionned): – Spring: 4.0.2.RELEASE – Hibernate Validator: 5.1.0.Final – Servlet Api: 3.1 If I remember well, I fixed the issue, by adding a dependency to javaee-web-api: 7.0. Petri, If you still can’t reproduce it, I can send you a pull request with the pom.xml if you want. Thanks for your time. Hi Ahmad, actually I happened to run into this issue when I updated Hibernate Validator to version 5.1.0.Final. I was able to solve this issue by using Hibernate Validator 5.0.3.Final. I suspect that Spring Framework 4 doesn’t support Hibernate Validator 5.1.0.Final but I haven’t been able to confirm this. thank you man for this, I was getting very frustrated!! You are welcome! I am happy to hear that I was able to help you out. Petri, these examples are really nice! I particularly like how clean and fast the unit test runs of your controller’s. Do you do the same type of setup in Spring 4? If you have a simple setup and use SpringBoot org.springframework.boot.SpringApplication.Application to bootstrap the application, is there anything significant that would change with your solution? Also, I thought I saw that spring-mvc-test was added into the Spring framework in Spring 3.2 which of course was released later after your blog post about this. I assume you would be using that instead of spring-mvc-test separately or did they make any changes that creates overhead that slows down the unit tests significantly? Thanks, Magnus Actually the example application of this blog post uses Spring Framework 3.2.X and Spring MVC Test framework. The standalone project is called spring-test-mvc, and it is used in my blog posts which talk about the integration testing of Spring MVC Applications. There really isn’t any reason to use it anymore (unless you have to use Spring 3.1). I use the same setup for testing web applications which use Spring Framework 4, but I haven’t used Spring Boot yet so I cannot answer to your question. I am planning to write a Spring Boot tutorial in the future, and I will definitely address this issue in that tutorial. Hi Petri Thanks for the tutorial. I am having some issue to get it start. I have post the question to the Stackoverflow. let me know if you need more info, thanks. Hi, It seems that you already got the correct answer to your StackOverflow question. Follow the instructions given in that answer and you should be able to solve your problem. Hi Petri, I have a question about the your object creation. In the controller test class you build your test data using a builder, but when you’re converting your data with createDTO you don’t use a builder here. Is there a reason for this? Thanks, Paul Thank you for asking such a good question. I cannot remember what my reason was when I made the decision to use the builder pattern only in my tests (I probably didn’t have any reason for this), but nowadays I follow these “rules”: final. Also, I won’t add setters to this DTO. This is handy if I want to just transform read-only information without exposing the internal data model of my application to the outside world. Hi Petri, Nice tutorial. When I tried to use : I found that TodoBuilder class was missing. Can you guide me what went wrong? (Version I have used :) Thanks, Akshay Hi Akshay, Unfortunately I am not sure what is going on because I can find the TodoBuilderclass. :( without TodoBuilder this are piece of shit… Hi, thanks for the feedback! I left a few trivial classes out from this blog post since I assumed that those who are interested in them, will read them on Github. Anyway, if you want to get the source code of those test data builder classes, just click the links below: TodoBuilderclass TodoDTOBuilderclass. Hi Petri, Superb Tutorial !! I have a question. While verifying the response, we are doing an inline compare of values like: .andExpect(jsonPath(“$[0].id”, is(1))) .andExpect(jsonPath(“$[0].description”, is(“Lorem ipsum”))) .andExpect(jsonPath(“$[0].title”, is(“Foo”))) is it possible to collect this response or parts of response into an object. So that I can write a seperate method to pass on the json response to a separate method for asserting values Okay, I have figured this out. MvcResult result = mockMvc.perform(post(“/admin/state/getById/” + state.getId())) .andExpect(status().isOk()) .andExpect(content().contentType(TestUtil.APPLICATION_JSON)) .andDo(print()) .andReturn(); MockHttpServletResponse response = result.getResponse(); String reponseJSON = response.getContentAsString(); this responseJSON now is a string representation of the returned JSON, Now we can use it in whatever way we want, eg: using Jackson convert it into the relevant DTO and compare values. Hi, I am happy to hear that you were able to solve your problem. By the way, is there some reason why you want to do this? Do you like to write assertions for real objects instead of using jsonpath? Hi petri, thanks for nice tutorial.. i have a very simple controller which return JSON array here is my test class to test that controller : here is my java controller which i am trying to test: i could not able to call list() method, which is defined in another class. when i run the test i am getting SecurityException:org.hamcrest.Matchers signature information does not match signature information of other classes in same package. can u help me to solve this issue.. thank you First, your test looks fine to me and it should work (assuming that you have annotated your controller with the @RestControllerannotation). Are you trying to run your unit tests by using Eclipse? The reason why I am asking this is that I found this bug report and one commenter suggests that: He also found a solution to this problem (but unfortunately it sounds like a hack): Unfortunately I cannot provide you any further advice since I haven’t used Eclipse for several years. On the other hand, if you this happened when you tried to run your unit tests by using Maven, let me know and I will take a closer look at this problem. hiiii petri… when i am trying to call any uri of controller i am getting some security error: class:org.hamcrest.Matchers signature information does not match signature information of other classes in same package. can u tell me why i am getting this error Thankl You Hi, I answered to this question in this comment. Thanks for instant response… i am able to solve that security issue error, but my main issue is i am not able to execute this “listContact= exampeldao.list();” statement from my test code. If i define the list() inside the controller class i am able to get the data. but if i define list() in some other class and if i try to access via instance , i could not able to do. can u tell me where probably i am making mistake. Thank You Do you mean that you cannot create a mock object that returns something when the list()method of ExampleDaoobject is invoked? If so (and you want to get good advice), I have to see the source code of the tested method and the source code of your unit test. If I have to guess, I would say that you haven’t configured the mock object correctly or the list()method of the ExampleDaoobject is not invoked during your unit test. hi petri… This is my list() method This method is declared in “Example” interface and which is implemented by ExampleDao class. Here is my controller: here is my complete Testclass: And here is my configuration files: test-context.xml: i don’t no where i m committing mistake :-) help me to fix this issue.. :-) Thank You Wordpress doesn’t allow you to paste XML in comments => Wordpress removed the content of your configuration files. You should paste those files to Pastebin.com and add the link to your comment. Anyway, the only problem I can see is that the ExampleDaoImplclass has a method called list()but your test mocks the list1()method. Also, it would be helpful if you could clarify your problem a bit. Do you mean that your test fails because: list1()method returns an empty list? list1()method is not invoked at all. i could not able to invoke list1() method at all… here is my link of configuration files… Thank You I took a quick look at your configuration files and I have no idea why you don’t get an exception because both configuration files configure the exampleDaobean. If you use the web application context based configuration, you have to split the application context configuration files of your application so that you can use only some of them in your unit tests (get more information about this). In other words, you have two options: I hope that this answered to your question. Hi Petri, instead of excluding hamcrest-core from junit and adding hamcrest-all as a dependency you could just add hamcrest-library as a dependency. Regards, Patrick Hi Patrick, Good catch! I am going to rewrite my Spring MVC Test tutorial during this year (2015), and I will include this fix to my new tutorial. Thank you for pointing this out! Have you rewrite this tutorial, if yes please share the url. Thanks Hi, I was supposed to rewrite this tutorial in 2015, but I decided to concentrate on other tutorials because this one was good enough. Hi again, just another Point: are you always programming your Builders like the TodoBuilder? I mean the fact that you are not able to create multiple Todo instances with one Builder instance (build only returns the reference to model instance). I prefer to copy all fields of the Todo model to the builder class and create a new Todo instance with every build method call. This is especially useful when you want to create multiple instances with e.g. only one difference. What do you think about it? Regards, Patrick Hi, Actually I don’t use the approach that is used in this blog post anymore. I “copy” the fields of the constructed object to the test data builder class (and of course to the “real” builder), and create a new object in the build()method. My main reason for doing this was that often objects have a many mandatory properties, and the “real” builder class verifies that everyone of them is “valid”. If these fields are not valid, it will throw the correct RuntimeException. When I copy these fields to the test data builder class, I also set default values to these mandatory fields (e.g. if the field is a Stringfield, I use string “NOT_IMPORTANT”). This way I can set only those fields that are relevant for the test case. This is one of those things that will change when I update my Spring MVC Test tutorial => don’t change your way of doing things since it is better than the approach described here. Hi Petri, First of all, your tutorial is really great. I am new to unit testing (or to spring in general) and I learned a lot from this tutorial. I was trying to test a controller my self, using the stand alone setup (because I hate xml), but then there’s an error in the .andExpect(jsonPath(“$.var3”, is(“123456”))) line. The error said “json can not be null” and it throws IllegalArgumentException. And I’ve tried almost everything and can’t figure out why it happened. Does it have anything to do with me not using xml? or what Here’s the controller I want to test Here’s my test controller Thank you Hi, Replace this line: With this line: The problem is that the MyRequestobject that is passed to the doSomething()method of the MyServiceclass is not the same object which you create in your test method. The reason for this is that Spring MVC creates a new MyRequestobject when it resolves the method parameters of your controller method (and reads the field values from the request body). Did this solve your problem? OMG, thank you! That did solve my problem. And apparently, it was that simple. I have some other questions if you don’t mind. I was wondering how to pass parameter (Object) annotated with @ModelAttribute when unit testing a method in Controller class. For example, I have this GET method public Response doSomething(@ModelAttribute Request request). How should I unit test that kind of method? What should I do in the mockMvc.perform(get(“/test/something”)) ? Thank you. Thank you for your kind words. I really appreciate them. It depends. If you want to write a unit test for a controller method that processes form submissions, you should read my blog post that describes how you can write unit tests for “normal” Spring MVC controllers. If you want to write a unit test that passes “other” objects to your controller method, the answer to your question depends from the way you “initialize” the @ModelAttributeobject (see Using @ModelAttribute on a method argument for more details). Could you provide an example that provides more details about your problem? Hi Petri, Your tutorial is good and thank you for your tutorials, I am new to this MVC testing and I learned a lot from this tutorial.I have some doubts. In this tutorial u explained about one @pathvariable only mockMvc.perform(get(“/api/todo/{id}”, 1L)) ok fine,how to pass multiple @pathvariables in url ,for ex /add/{id}/{page} Thank you Hi, You can simply pass the url template and variables as method parameters to the factory method that creates the used RequestBuilderobject (see the Javadoc of the MockMvcRequestBuildersclass). For example, if you want to send get a GET request by using url template: ‘/api/person/{personId}/todo/{todoId}’ when the personIdis 1Land the todoIdis 99L, you have to use the following code: As long as you remember to pass the path variable values in the same order than the path variables, you should be fine. If you have any further questions, don’t hesitate to ask them. yes,its working fine.Thank you I have one more questions,In my url am passing pathvariables ,HttpSession and Model for form values am clear about how to pass pathvariables and HttpSession but how to pass that Model. Am passing those all form values in model attribute but still am getting Null Pointer expection Hi, If you want to set the property values of a form object (a controller method parameter that is annotated with the @ModelAttributeannotation), you should set the property values by using the param()method of the MockHttpServletRequestBuilderclass. I have written a blog post that describes how you can write unit tests for controller methods which processes form submissions. It describes how you can use this method in your unit tests. Let me know if this solved your problem. :) Hi, thanks for that blog post thats very helpfull . how to test private methods? Thanks Hi, You cannot (and should not) test private methods. You should only test your public API. If the tested methods use private methods (and often they do), the private methods will be “tested” as well. On the other hand, if you have to test the functionality of a private method, you should move it to another class and make it public. Or you could make it protected (this is useful if you are working with legacy code). Hi, ok thank you. You are welcome! Hi Am trying to get userID form session but am getting null. Thank you Unfortunately I cannot know what is wrong. However, because you get null, it is very likely that the userID is not found from the session. Hi, How to test Spring MVC with tiles Thank you I have never used Tiles, but I assume that you can write controller tests by using the techniques described in this blog post. There seems to be some differences though. Ok, Thank you You are welcome. Hi, In my controller i have HttpSession session as parameter and my using this session to get userid. my mockMVC my passing mockSession as parameter but am getting null how to get userid using this seccion. Thank you Hi, Another reader had the exact same problem. Check out this thread. Hi, This the way to get userid from session. Thanks You are welcome! I am happy to hear that you were able to solve your problem. Hi, I am getting java.lang.ClassNotFoundException: org.springframework.test.web.server.MockMvc exception while building the maven project. could you please help to resolve this issue. Thank you Which Spring version are you using? The reason why ask this is that I noticed that the error message has the “old” package of the MockMvcclass. If you are writing tests for an application that uses Spring 3.1.X, this package is correct. If this is the case, you need to ensure that you have added the standalone spring-test-mvc dependency to your pom.xml file. If you are writing tests for an application that uses Spring 3.2.X or never, you should add the sprint-test library to your pom.xml file. Also, remember that in this case you must remove the spring-test-mvc dependency from your POM file. I am using spring 4.0.1.RELEASE Thank you You have to ensure that your pom.xml contains ONLY the spring-test dependency. If your pom.xml contains the spring-test-mvc dependency, you need to remove it. Also, you need to fix the imports because the package of the MockMvcclass has changed. Hi Now its working fine. Thank you You are welcome. Hi, I am getting “The method when(T) in the type Mockito is not applicable for the arguments (void)” while testing void methods. Thanks Hi, One more doubt whether tomcat is running or not while executing junit testcases. Thanks No. Tomcat is not running when you run unit tests that use the Spring MVC Test framework. The Mockito documentation explains how you can mock voidmethods. Ok Thank you very much Petri. Thanks for the insightful and thorough tutorial . However, it appears that for several of the examples, the controller logic is not actually being tested. Consider the line: when(todoServiceMock.findAll()).thenReturn(Arrays.asList(first, second)); Seems like it would be a more true unit test if the call service.findAll(); was mocked. In this way, we only mock the database service and actually execute the rest of the logic in findAll (which in this particular case is pretty minimal). Thoughts? Hi Paul, Thank your for your kind words. I really appreciate them. Also, thank you for writing an interesting comment. I will answer to it below: Do you mean that the service logic is not tested? The reason why I ask this is that these unit tests isolate the tested code by replacing its dependencies with mock objects. In other words, they test the controller logic but not the service logic (because the service is a mock object). I assume that you want to mock the repository’s findAll()method? This is entirely possible. It just means that you want to use a larger than unit I did. Typically I use small units and mock the external dependencies for two reasons: However, I am saying that this is the one and only way of doing things. It is just an approach that has served me well. If you think that a bigger unit size helps you to write better tests, you should give it a shot. You will notice pretty soon if it is a better approach than using small units. In fact, this example is so simple that the unit size probably don’t matter at all because there really isn’t a lot of logic. P.S. Naturally I write integration and end-to-end tests as well. P.P.S. You might want to check out my Writing Clean Tests tutorial. It should give you something to think about. Hi Great tutorial.. thanks very much. I am having an issue getting it working though and would be grateful for any advice.. I added the jsonPath 0.8.1 jar file to my class path manually through eclipse and now when running my test I get the following error.. On researching it suggest this jar file was compiled with a different jdk version, I am using java 5, could this be the problem? Update: I removed the irrelevant part of the stack trace – Petri Hi, The problem is that your JDK is too old. The latest version of JsonPath (2.1.0) requires JDK 1.6, and I assume that the version 0.8.1 requires it as well (otherwise you wouldn’t get this error). Thanks for the reply. I though that may be the problem. Unfortunately I cannot upgrade my java version as the production server is not compatible. Do you recommend an alternative library I could use to make assertions about the returned json response. Thanks. Maybe I could use jackson object mapper to make assertions like.. Hi Will, that is definitely one way to solve your problem. However, you can also “hide” these implementation details by creating a custom Hamcrest matcher. After you have created your custom matcher, you can use it for writing assertions for the returned JSON. For example, your test case could look like this: Hello, thank you so much for your tutorial. I have encountered an exception when makin unit test using mock: Error: java.lang.AssertionError: Status expected: but was: Contoller: @RestController @RequestMapping(value = “Univ”) @ComponentScan(basePackages = {“com.back.controller”}) public class UniversityController { @Autowired private UniversityService universityService; @RequestMapping(value = “/university/Hello”, method = RequestMethod.GET , produces = “application/json”) public String sayHello() throws UniversityException { return universityService.getName(); } } Class for test: @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations = { “file:WebContent/WEB-INF/spring-config.xml”,”file:WebContent/WEB-INF/AnnotationsDriven.xml”}) @WebAppConfiguration public class TodoControllerTest { @Autowired @Qualifier(“universityForTest”) private UniversityService universityServiceMock; @Autowired private WebApplicationContext webApplicationContext; private MockMvc mockMvc; @Before public void setUp() { Mockito.reset(universityServiceMock); mockMvc = MockMvcBuilders.webAppContextSetup(webApplicationContext).dispatchOptions(true) .build(); } @Test public void tester() throws Exception { when(universityServiceMock.getName()).thenReturn(“Hello Mock!!”); mockMvc.perform(get(“/Univ/university/Hello”)).andExpect(status().isOk()); } } Hi, Your unit test fails because your controller method doesn’t return HTTP response status 200 (OK). Replace the following code: .andExpect(status().isOk())with: andDo(print()). This prints the request and response into System.outand should help you to solve your problem. The last example in this blog illustrates a test case for ToDo object entry into the database. You have already verified returned object success using jsonPath. I do not understand why did we again retrieve the ToDo object using ArgumentCaptor and then again verified ID, Description and other fields. Is it not duplicated? I am sure there must be a reason. Can you please explain. Good question. Actually I have to I have to admit that the test question sucks because it can fail for more than one reason (it has other flaws as well). If I would test that controller method now, I would write several unit tests for it. If you think about the requirements of the tested method, it is a somewhat clear that: Now, if you don’t capture the TodoDTOobject that is passed to the service method, you cannot ensure that the correct information is passed to the service method. Hello, Thanks for the very nice overview. Helped me a lot with understanding the test framework. Question: When I configure as you outlined here and run the test, I keep getting the below error. Any help with how I should troubleshoot it would be helpful. Thank you. org.mockito.exceptions.misusing.NotAMockException: Argument should be a mock, but is: class com.sun.proxy.$Proxy36 at com.travenion.controllers.customer.CustomerControllerTest.setup(CustomerControllerTest.java:50) Update: I removed the irrelevant part of the stacktrace – Petri To add to the above, I was able to get it to work with StandaloneSetup config, but when using webApplicationContext based configuration, I keep running into the above issue. And using WebApplicationContext, I was not even able to inject mocks manually. If I tried injecting it manually, the service is hitting the real object instead of mock object…hence it is going all the way to the database instead of using my test data. The problem is that a real object is injected into your controller instead of a mock object. If you want to use the web application context based setup, you should read this blog post. However, I recommend that you use the standalone setup in your unit tests because it’s a lot easier to maintain if you have a lot services that need to be mocked. Thanks Petri! Yes, I did follow that blog already and that’s where I picked up the standalone setup from. I will continue using the standalone setup. I appreciate your feedback. You are welcome. It is always fun to help other people. Hi Petri I have followed your youtube tutorial and I have a very basic problem as far as I know I am doing everything write but still getting assertion error Content type not set, I have asked this question on stack overflow and other sites no help yet and It’s kinda blocker for me right now So here is my problem this is the setup method @Before public void init() { MockitoAnnotations.initMocks(this); ReflectionTestUtils.setField(restController, “luceneSearchEnabled”, true); mockMvc = standaloneSetup(restController).build(); } This is my test method: @Test public void pmmSearchContentTypeTest() throws Exception { mockMvc .perform(get(“/api/v1/pmm”).contentType(MediaType.APPLICATION_JSON)) .andExpect(content().contentTypeCompatibleWith(MediaType.APPLICATION_JSON_VALUE) .andReturn(); } This is my search method where I am setting content type: @RequestMapping(value = “/api/v1/pmm”, method = RequestMethod.GET, produces ={MediaType.APPLICATION_JSON_VALUE}) @ResponseBody public String pmmSearch() { … } Also I have checked this manually from browser and the content-type is getting set correctly Hi, Typically when you get that error it means that the controller method threw an exception. Have you tried to print the sent request and the received response? I was able to figure that out I had to make a real oblect in standalone setup and I was using incomplete url Thanks for your help though Hi, You are welcome! It is good to hear that you were able to solve your problem. Hi Petri, Thanks for the awesome tutorial. It really helped !! But, I am facing this weird issue while running JUnit test . org.springframework.beans.factory.BeanCreationException: Error creating bean with name ‘com.borrowlenses.services.junits.OrderServiceControllerTest’: Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private com.borrowlenses.controller.OrdersRestController com.borrowlenses.services.junits.OrderServiceControllerTest.mockOrdersRestController; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [com.borrowlenses.controller.OrdersRestController] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)} at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:292) I have added the correct context configuration location. Can you help me to identify the cause of this problem ? Thanks in Advance :) Hi, It seems that the Spring container cannot find the OrdersRestControllerbean. Unfortunately it’s impossible to say what causes this because I cannot run your code. However, take a look at this blog post. It identifies the situations when the NoSuchBeanDefinitionExceptionis thrown and explains how you can solve them. Hi Petri, If our controller methods have Method Security(@PreAuthorize) and checks for roles. How do we write Unit tests for the controller methods ? Should we create mock for spring security ? if so, how do we do ? Hello Petri, Thanks for your tutorials, it really helps to explore Spring Test. I have a question regarding exploring the operation results. Is that anyhow possible to iterate through the returned ToDo lists instead of:”))); ??? Hi, You can invoke the andReturn()method of the ResultActionsinterface. This method returns a MvcResultobject that allows you to access the result of an invoked request. On the other hand, if you want to write a test that returns a list of objects, you can simply invoke the controller method without using the Spring MVC Test framework. Hi Petri, I am a new starter for junit test of Rest api. I have two problems when I tried to implement your code. The first one: When I implement the junit test for add(@Valid@RequestBody TodoDTO todoDto),it throws “java.lang.AssertError: Content type not set”, and I don’t know how to solve it. I have read your previous blogs, and every unit test is ok, and only this one has problem. The detailed information is below: The segment of ToDoController.java: The segment of TodoControllerTest.java: The second one: because I have encounter above problem, so I comment the ” .andExpect(content().contentType(TestUtil.APPLICATION_JSON_UTF8))”, but it throws another problem, and it is “java.lang.IllegalArgumentException:json can not be null or empty”, I followed you every step, and I don’t why it appears the problem.The detailed information is below: The segment of ToDoControllerTest: I have added the FieldErrorDTO.java, ValidationErrorDTO.java, RestErrorHandler.java to my project according to your “Spring From the Trenches: Adding Validation to a REST API” blog. I don’t know whether I have used it correctly. I don’t know if I have explained my problem clearly, if it doesn’t make any sense, could you send me an email, and I can send you my project. My address is “[email protected]”. Thank you in advance. Hi, If you get the error: java.lang.AssertError: Content type not set, the problem (most likely) is that the tested code threw an exception. What log level do you use for the org.springframeworkloggers? I ask this because if you set this log level to DEBUG, you should be able to find the exception from log file after you have run the failed test. I am confused about what it means when you are doing when(todoServiceMock.add(any(TodoDTO.class))).thenReturn(added); Is this is saying “when this method is called in my Service class, rather than call the actual add method, return the test model obj you created earlier? If so, then I am confused how this tests the Service or the Controller. Does the MockMVC run an instance of your server behind the scenes, processing the HTTP requests? And if so, does it automatically bind your MockService with a service instance in that particular controller? Hi, You are right. That particular line configures the object that is returned when the add()method is invoked and a TodoDTOobject is given as a method parameter. This test doesn’t test the service class. The service object is simply replaced with a test double that is created by using Mockito. This is a quite old tutorial and the examples use web application context based setup. In other words, our test double is a Spring bean that is created in the application context configuration class which configures the application context of our unit tests. Nowadays I use the standalone setup when I write unit tests for Spring MVC controllers mainly because the setup code of a single test case is easier read (IMO). The Spring MVC Test framework doesn’t use a real server. It’s build on the top of the Servlet API mock objects which are provided by the spring-test module, and it uses the DispatcherServletclass that provides full support for Spring MVC runtime behavior. No. The mock service is a Spring bean that is injected to the tested controller by the Spring container. That being said, if you use the standalone configuration, you have to create tested controller object yourself (and inject all required dependencies manually). If you have any additional questions, don’t hesitate to ask them! Hi petri, I have been following along your tutorials and i am stuck while returning from the controllers. It gives MockHttpServletRequest: HTTP Method = GET Request URI = /users Parameters = {} Headers = {Accept=[application/json]} Handler: Type = com.firsthelpfinancial.restapp.controller.UserController Method = public java.util.List com.firsthelpfinancial.restapp.controller.UserController.findAll() Async: Was async started = false Async result = null Resolved Exception: Type = org.springframework.web.util.NestedServletException ModelAndView: View name = error/error View = null Attribute = exception value = org.springframework.web.util.NestedServletException: Handler dispatch failed; nested exception is java.lang.AbstractMethodError: org.springframework.mock.web.MockHttpServletResponse.getHeaders(Ljava/lang/String;)Ljava/util/Collection; FlashMap: MockHttpServletResponse: Status = 500 Error message = null Headers = {} Content type = null Body = Forwarded URL = /WEB-INF/jsp/error/error.jsp Redirected URL = null Cookies = [] The stack trace prints: 13:45:52.694 [main] DEBUG org.springframework.web.servlet.mvc.annotation.ResponseStatusEx94 [main] DEBUG org.springframework.web.servlet.mvc.support.DefaultHandlerEx95 [main] DEBUG org.springframework.web.servlet.handler.SimpleMappingEx; any help is greatly appreciated. Hi, This look pretty interesting. In other words, I have never seen this exception before. That being said, it might be caused by incompatible Spring Test and Spring Framework versions. I would check that you use the same Spring Test and Spring Framework version. Yeah, that was it. It was due to the version mismatch as you have pointed out. Thank you very much. You are welcome! hi preti, i need your help . i wan know junit testing code in rest controller .how to test rest controller methods. Hi, This blog post describes how you can write unit tests for a REST API by using JUnit and Spring MVC Test framework. That’s why I am a bit confused. Is this post hard to understand? If so, could you identify the sections which are unclear to you? Hi, Petri Thank you for the post and information. Have an issue. Trying to run unit tests with your reccomendations, but getting such error. java.lang.AssertionError: JSON path “$” Expected: a collection with size but: collection size was at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) at org.springframework.test.util.JsonPathExpectationsHelper.assertValue(JsonPathExpectationsHelper.java:74) at org.springframework.test.web.servlet.result.JsonPathResultMatchers$1.match(JsonPathResultMatchers.java:86) at org.springframework.test.web.servlet.MockMvc$1.andExpect(MockMvc.java:171) at com.softserve.edu.Resources.controller.LookUpControllerTest.testLoadResourceTypes(LookUpControllerTest.java:56)78) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192) This is my TestClass What could be wrong? it was simple mistake). just forgot to add object to List. But still after test running, gets an error: ava.lang.AssertionError: JSON path “$[0].id” Expected: is but: was why it changed the objects order of list? Hi, If you are writing assertions for numbers by using JsonPath and Hamcrest, you should use integers instead of long values. Of course, I cannot be 100% sure that this is the root cause of your problem because your comment doesn’t reveal the expected and actual values of the idproperty. Thus, if this doesn’t help you, let me know. i changed to integers. but still. Java.lang.AssertionError: JSON path “$[0].id” Expected: is 1 but: was 2 why it changed the objects order of list? It’s kind of hard to say for sure because I cannot debug the code, but I can say that I haven’t seen similar behavior in my own tests. In other words, I suspect that the tested code changes the order of the returned objects. I would take run the test by using a debugger and find out when the order of the objects changes. After you have figured out the reason for this behavior, it should be easy to fix it. By the way, I just noticed that your test code doesn’t add the ResourceTypeobjects to the returned list. Are you sure that you are adding them to the list by using the order that is expected by your test case? Have another issue: Here is my another controller method with DTO class static method usage: DTO Class: And when i test this method with the same test as previous, i`ve got: org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.NullPointerException at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:982) There were more erorrs in a trace: could send it if needed. and one more caused: Caused by: java.lang.NullPointerException at com.softserve.edu.Resources.dto.ResourceTypeDTO.(ResourceTypeDTO.java:27) at com.softserve.edu.Resources.dto.DtoUtilMapper.resTypesToResTypesDTO(DtoUtilMapper.java:15) 33 more errors what could be the reason? You are trying to use an object reference that has a nullvalue. Again, the easiest way to figure why this is happens is to run your test by using a debugger. Put a breakpoint to the DtoUtilMapperclass (just before the row that rows the exception) and take a look at the variable values. This should reveal why your test throws the NullPointerException. I learnt a lot from this tutorial thanks. You are welcome!
https://www.petrikainulainen.net/programming/spring-framework/unit-testing-of-spring-mvc-controllers-rest-api/
CC-MAIN-2018-22
en
refinedweb
2.2.4. Upgrading to Nine¶ Upgrading a Buildbot instance from 0.8.x to 0.9.x may require a number of changes to the master configuration. Those changes are summarized here. If you are starting fresh with 0.9.0 or later, you can safely skip this section. First important note is that Buildbot does not support an upgrade of a 0.8.x instance to 0.9.x. Notably the build data and logs will not be accessible anymore if you upgraded, thus the database migration scripts have been dropped. You should not pip upgrade -U buildbot, but rather start from a clean virtualenv aside from your old master. You can keep your old master instance to serve the old build status. Buildbot is now composed of several Python packages and Javascript UI, and the easiest way to install it is to run the following command within a virtualenv: pip install 'buildbot[bundle]' 2.2.4.1. Config File Syntax¶ In preparation for compatibility with Python 3, Buildbot configuration files no longer allow the print statement: print "foo" To fix, simply enclose the print arguments in parentheses: print("foo") 2.2.4.2. Plugins¶ Although plugin support was available in 0.8.12, its use is now highly recommended. Instead of importing modules directly in master.cfg, import the plugin kind from buildbot.plugins: from buildbot.plugins import steps Then access the plugin itself as an attribute: steps.SetProperty(..) See Plugin Infrastructure in Buildbot for more information. 2.2.4.3. Web Status¶ The most prominent change is that the existing WebStatus class is now gone, replaced by the new www functionality. Thus an html.WebStatus entry in c['status'] should be removed and replaced with configuration in c['www']`. For example, replace: from buildbot.status import html c['status'].append(html.WebStatus(http_port=8010, allowForce=True) with: c['www'] = dict(port=8010, plugins=dict(waterfall_view={}, console_view={})) See www for more information. 2.2.4.4. Status Classes¶ Where in 0.8.x most of the data about a build was available synchronously, it must now be fetched dynamically using the Data API. All classes under the Python package buildbot.status should be considered deprecated. Many have already been removed, and the remainder have limited functionality. Any custom code which refers to these classes must be rewritten to use the Data API. Avoid the temptation to reach into the Buildbot source code to find other useful-looking methods! Common uses of the status API are: - getBuildin a custom renderable - MailNotifiermessage formatters (see below for upgrade hints) - doIffunctions on steps Import paths for several classes under the buildbot.status package but which remain useful have changed. Most of these are now available as plugins (see above), but for the remainder, consult the source code. 2.2.4.5. BuildRequest Merging¶ Buildbot 0.9.x has replaced the old concept of request merging ( mergeRequests) with a more flexible request-collapsing mechanism. See collapseRequests for more information. 2.2.4.6. Status Reporters¶ In fact, the whole c['status'] configuration parameter is gone. Many of the status listeners used in the status hierarchy in 0.8.x have been replaced with “reporters” that are available as buildbot plugins. However, note that not all status listeners have yet been ported. See the release notes for details. Including the "status" key in the configuration object will cause a configuration error. All reporters should be included in c['services'] as described in Reporters. The available reporters as of 0.9.0 are MailNotifier IRC HttpStatusPush GerritStatusPush GitHubStatusPush(replaces buildbot.status.github.GitHubStatus) See the reporter index for the full, current list. A few notes on changes to the configuration of these reporters: MailNotifierargument messageFormattershould now be a buildbot.reporters.message.MessageFormatter, due to the removal of the status classes (see above), such formatters must be re-implemented using the Data API. MailNotifierargument previousBuildGetteris not supported anymore MailNotifierno longer forces SSL 3.0 when useTlsis true. GerritStatusPushcallbacks slightly changed signature, and include a master reference instead of a status reference. GitHubStatusPushnow accepts a contextparameter to be passed to the GitHub Status API. buildbot.status.builder.Resultsand the constants buildbot.status.results.SUCCESSshould be imported from the buildbot.process.resultsmodule instead. 2.2.4.7. Steps¶ Buildbot-0.8.9 introduced “new-style steps”, with an asynchronous run method. In the remaining 0.8.x releases, use of new-style and old-style steps were supported side-by-side. In 0.9.x, old-style steps are emulated using a collection of hacks to allow asynchronous calls to be called from synchronous code. This emulation is imperfect, and you are strongly encouraged to rewrite any custom steps as New-Style Build Steps. Note that new-style steps now “push” their status when it changes, so the describe method no longer exists. 2.2.4.8. Identifiers¶ Many strings in Buildbot must now be identifiers. Identifiers are designed to fit easily and unambiguously into URLs, AMQP routes, and the like. An “identifier” is a nonempty unicode string of limited length, containing only ASCII alphanumeric characters along with - (dash) and _ (underscore), and not beginning with a digit Unfortunately, many existing names do not fit this pattern. The following fields are identifiers: - worker name (50-character) - builder name (20-character) - step name (50-character) 2.2.4.9. Serving static files¶ Since version 0.9.0 Buildbot doesn’t use and don’t serve master’s public_html directory. You need to use third-party HTTP server for serving static files. 2.2.4.10. Transition to “worker” terminology¶ Since version 0.9.0 of Buildbot “slave”-based terminology is deprecated in favor of “worker”-based terminology. All identifiers, messages and documentation were updated to use “worker” instead of “slave”. Old API names are still available, but deprecated. For details about changed API and how to control generated warnings see Transition to “worker” terminology. 2.2.4.11. Other Config Settings¶ The default master.cfg file contains some new changes, which you should look over: c['protocols'] = {'pb': {'port': 9989}}(the default port used by the workers) - Waterfall View: requires installation ( pip install buildbot-waterfall-view) and configuration ( c['www'] = { ..., 'plugins': {'waterfall_view': {} }). 2.2.4.12. Build History¶ There is no support for importing build history from 0.8.x (where the history was stored on-disk in pickle files) into 0.9.x (where it is stored in the database). 2.2.2.4.14. More Information¶ For minor changes not mentioned here, consult the release notes for the versions over which you are upgrading. Buildbot-0.9.0 represents several years’ work, and as such we may have missed potential migration issues. To find the latest “gotchas” and share with other users, see.
https://buildbot.readthedocs.io/en/v0.9.8/manual/installation/nine-upgrade.html
CC-MAIN-2018-34
en
refinedweb
This page contains style decisions that both developers and users of TensorFlow should follow to increase the readability of their code, reduce the number of errors, and promote consistency. Python style Generally follow PEP8 Python style guide, except for using 2 spaces. Python 2 and 3 compatible All code needs to be compatible with Python 2 and 3. Next lines should be present in all Python files: from __future__ import absolute_import from __future__ import division from __future__ import print_function - Use sixto write compatible code (for example six.moves.range). Bazel BUILD rules TensorFlow uses Bazel build system and enforces next requirements: - Every BUILD file should contain next header: # Description: # <...> package( default_visibility = ["//visibility:private"], ) licenses(["notice"]) # Apache 2.0 exports_files(["LICENSE"]) - At the end of every BUILD file, should contain: filegroup( name = "all_files", srcs = glob( ["**/*"], exclude = [ "**/METADATA", "**/OWNERS", ], ), visibility = ["//third_party/tensorflow:__subpackages__"], ) - When adding new BUILD file, add this line to tensorflow/BUILDfile into all_opensource_filestarget. "//third_party/tensorflow/<directory>:all_files", - For all Python BUILD targets (libraries and tests) add next line: srcs_version = "PY2AND3", Tensor - Operations that deal with batches may assume that the first dimension of a Tensor is the batch dimension. Python operations A Python operation is a function that, given input tensors and parameters, creates a part of the graph and returns output tensors. The first arguments should be tensors, followed by basic python parameters. The last argument is namewith a default value of None. If operation needs to save some Tensors to Graph collections, put the arguments with names of the collections right before nameargument. Tensor arguments should be either a single tensor or an iterable of tensors. E.g. a "Tensor or list of Tensors" is too broad. See assert_proper_iterable. Operations that take tensors as arguments should call convert_to_tensorto convert non-tensor inputs into tensors if they are using C++ operations. Note that the arguments are still described as a Tensorobject of a specific dtype in the documentation. Each Python operation should have a name_scopelike below. Pass as arguments name, a default name of the op, and a list of the input tensors. Operations should contain an extensive Python comment with Args and Returns declarations that explain both the type and meaning of each value. Possible shapes, dtypes, or ranks should be specified in the description. See documentation details For increased usability include an example of usage with inputs / outputs of the op in Example section. Example: def my_op(tensor_in, other_tensor_in, my_param, other_param=0.5, output_collections=(), name=None): """My operation that adds two tensors with given coefficients. Args: tensor_in: `Tensor`, input tensor. other_tensor_in: `Tensor`, same shape as `tensor_in`, other input tensor. my_param: `float`, coefficient for `tensor_in`. other_param: `float`, coefficient for `other_tensor_in`. output_collections: `tuple` of `string`s, name of the collection to collect result of this op. name: `string`, name of the operation. Returns: `Tensor` of same shape as `tensor_in`, sum of input values with coefficients. Example: >>> my_op([1., 2.], [3., 4.], my_param=0.5, other_param=0.6, output_collections=['MY_OPS'], name='add_t1t2') [2.3, 3.4] """ with tf.name_scope(name, "my_op", [tensor_in, other_tensor_in]): tensor_in = tf.convert_to_tensor(tensor_in) other_tensor_in = tf.convert_to_tensor(other_tensor_in) result = my_param * tensor_in + other_param * other_tensor_in tf.add_to_collection(output_collections, result) return result Usage: output = my_op(t1, t2, my_param=0.5, other_param=0.6, output_collections=['MY_OPS'], name='add_t1t2') Layers A Layer is a Python operation that combines variable creation and/or one or many other graph operations. Follow the same requirements as for regular Python operation. - If a layer creates one or more variables, the layer function should take next arguments also following order: initializers: Optionally allow to specify initializers for the variables. regularizers: Optionally allow to specify regularizers for the variables. trainable: which control if their variables are trainable or not. scope: VariableScopeobject that variable will be put under. reuse: boolindicator if the variable should be reused if it's present in the scope. Layers that behave differently during training should take: is_training: boolindicator to conditionally choose different computation paths (e.g. using tf.cond) during execution. Example: def conv2d(inputs, num_filters_out, kernel_size, stride=1, padding='SAME', activation_fn=tf.nn.relu, normalization_fn=add_bias, normalization_params=None, initializers=None, regularizers=None, trainable=True, scope=None, reuse=None): ... see implementation at tensorflow/contrib/layers/python/layers/layers.py ...
https://www.tensorflow.org/versions/r1.3/community/style_guide
CC-MAIN-2018-34
en
refinedweb
US4558413A - Software version management system - Google PatentsSoftware version management system Download PDF Info - Publication number - US4558413AUS4558413A US06553724 US55372483A US4558413A US 4558413 A US4558413 A US 4558413A US 06553724 US06553724 US 06553724 US 55372483 A US55372483 A US 55372483A US 4558413 A US4558413 A US 4558413A - Authority - US - Grant status - Grant - Patent type - - Prior art keywords - file - object - system - version - to software version management system and method for handling and maintaining software, e.g. software updating uniformily across the system, particularly in a large software development environment having a group of users or programmers. The system is also referred to as the "System Modeller". Programs consisting of a large number of modules need to be managed. When the number of modules making up a software environment and system exceeds some small, manageable set, a programmer cannot be sure that every new version of each module in his program will be handled correctly. After each version is created, it must be compiled and loaded. In a distributed computing environment, files containing the source text of a module can be stored in many places in a distributed system. The programmer may have to save it somewhere so others may use it. Without some automatic tool to help, the programmer cannot be sure that versions of software being transferred to another user or programmer are the versions intended to be used. A programmer unfamiliar with the composition of the program is more likely to make mistakes when a simple change is made. Giving this new programmer a list of the files involved is not sufficient, since he needs to know where they are stored and which versions are needed. A tool to verify a list of files, locations and correct versions would help to allow the program to be built correctly and accurately. A program can be so large that simply verifying a description is not sufficient, since the description of the program is so large that it is impractical to maintain it by hand. The confusion of a single programmer becomes much worse, and the cost of mistakes much higher, when many programmers collaborate on a software project. In multi-person projects, changes to one part of a software system can have far-reaching effects. There is often confusion about the number of modules affected and how to rebuild affected pieces. For example, user-visible changes to heavily-used parts of an operating system are made very seldom and only at great cost, since other programs that depend on the old version of the operating system have to be changed to use the newer version. To change these programs, the "correct" versions of each have to be found, each has to be modified, tested, and the new versions installed with the new operating system. Changes of this type often have to be made quickly because the new system may be useless until all components have been converted. Members or users of large software projects are unlikely to make such changes without some automatic support. The software management problems faced by a programmer when he is developing software are made worse by the size of the software, the number of references to modules that must agree in version, and the need for explicit file movement between computers. For example, a programming environment and system used at the Palo Alto Research Center of Xerox Corporation at Palo Alto, Calif., called "Cedar" now has approximately 447,000 lines of Cedar code, and approximately 2000 source and 2000 object files. Almost all binary or object files refer to other binary or object files by explicit version stamp. A program will not run until all references to an binary or object file refer to the same version of that file. Cedar is too large to store all Cedar software on the file system of each programmer's machine, so each Cedar programmer has to explicitly retrieve the versions he needs to run his system from remote storage facilities or file servers. Thus, the problem falls in the realm of "Programming-the-Large" wherein the unit of discourses the software module, instead of "Programming-in-the-Small", where units include scalor variables, statements, expressions and the like. See the Article of Frank DeRemer and H. Kron, "Programming-in-the-Large versus Programming in the small", IEEE Transactions on Software Engineering, Vol. 2(2), pp. 80-86, June 1976. To provide solutions solving these problems overviewed above, consider the following: 1. Languages are provided in which the user can describe his system. 2. Tools are provided for the individual programmer that automate management of versions of his programs. These tools are used to acquire the desired versions of files, automatically recompile and load aprogram, save new versions of software for others to use, and provide useful information for other program analysis tools such as cross-reference programs. 3. In a large programming project, software is grouped together as a release when the versions are all compatible and the programs in the release run correctly. The languages and tools for the individual programmer are extended to include information about cross-package dependencies. The release process is designed so production of release does not lower the productivity of programmers while the release is occurring. To accomplish the foregoing, one must identify the kinds of information that must be maintained to describe the software systems being developed. The information needed can be broken down into three categories: 1. File Information: For each version of a system, the versions of each file in the system must be specified. There must be a way of locating a copy of each version in a distributed environment. Because the software is always changing, the file information must be changeable to reflect new versions as they are created. 2. Compilation Information: All files needed to compile the system must be identified. It must be possible to compute which files need to be translated or compiled or loaded and which are already in machine runnable format. This is called "Dependency Analysis." The compilation information must also include other parameters of compilation such as compiler switches or flags that affect the operation of the compiler when it is run. 3. Interface Information: In languages that require explicit delineation of interconnections between modules (e.g. Mesa, Ada), there must be means to express these interconnections. There has been little research in version control and automatic software management. Of that, almost none has built on other research in the field. Despite good reasons for it, e.g. the many differences between program environments, and the fact that programming environments ususally emphasize one or two programming languages, so the management systems available are often closely related to those programming languages, this fact reinforces the singularity of this research. The following is brief review of previous work in this area. (1) Make Program The Make program, discussed in the Article of Stuart J. Feldman, "Make-A Program for Maintaining Computer Programs", Software Practice & Experience, Vol. 9 (4), April, 1979, uses a system description called the Makefile, which lists an acyclic dependency graph explicitly given by the programmer. For each node in the dependency graph, the Makefile contains a Make Rule, which is to be executed to produce a new version of the parent node if any of the son nodes change. For example the dependency graph illustrated in FIG. 1 shows that x1.o depends on x1.c, and the file a.out depends on x1.o and x2.o. The Makefile that represents this graph is shown in Table I below. TABLE I______________________________________a.out: x1.o x1.o x2.o______________________________________ cc x1.o x2.ox1.0: x1.c cc -c x1.cx2.o: x2.c cc -c x2.c______________________________________ In Table I, the expression, "cc-c x1.c" is the command to execute and produce a new version of x1.o when x1.c is changed. Make decides to execute the make rule i.e., compile x1.c, if the file modification time of x1.c is newer than that of x1.o. The description mechanism shown in Table I is intuitively easy to use and explain. The simple notion of dependency, e.g., a file x1.o, that depends on x1.c must be recompiled if x1.c is newer, works correctly vitually all the time. The Makefile can also be used as a place to keep useful commands the programmer might want to execute, e.g., print: pr x1.c x2.c defines a name "print" that depends on no other files (names). The command "make print" will print the source files x1.c and x2.c. There is usually only one Makefile per directory, and, by convention, the software in that directory is described by the Makefile. This makes it easy to examine unfamiliar directories simply by reading the Makefile. Make is an extremely fast and versatile tool that has become very popular among UNIX users. Unfortunately, Make uses modification times from the file system to tell which files need to be re-made. These times are easily changed by accident and are a very crude way of establishing consistency. Often the programmer omits some of the dependencies in the dependency graph, sometimes by choice. Thus, even if Make employed a better algorithm to determine the consistency of a system, the Makefile could still omit many important files of a system. (2) Source Code Control System (SCCS) The Source Code Control System (SCCS) manages versions of C source programs enforcing a check-in and check-out regimen, controlling access to versions of programs being changed. For a description of such systems, see the Articles of Alan L. Glasser, "The Evolution of a Source Code Control System", Proc. Software Quality & Assurance Workshop, Software Engineering Notes, Vol. 3(5), pp. 122-125, November 1978; Evan L. Ivie, "The Programmer's Workbench-A Machine for Software Development", Communications of the ACM, Vol. 20(10) pp. 746-753, October, 1977; and Marc J. Rochkind "The Source Code Control System", IEEE Transactions on Software Engineering, Vol. 1(4), pp. 25-34, April 1981. A programmer who wants to change a file under SCCS control does so by (1) gaining exclusive access to the file by issuing a "get" command, (2) making his changes, and (3) saving his changed version as part of the SCCS-controlled file by issuing a "delta" command. His changes are called a "delta" and are identified by a release and level number, e.g., "2.3". Subsequent users of this file can obtain a version with or without the changes made as part of "delta 2.3". While the programmer has "checked-out" the file, no other programmers may store new deltas. Other programmers may obtain copies of the file for reading, however. SCCS requires that there be only one modification of a file at a time. There is much evidence this is a useful restriction in multi-person projects. See Glasser, Supra. SCCS stores all versions of a file in a special file that has a name prefixed by "s.". This "s." file represents these deltas as insertions, modifications, and deletions of lines in the file. Their representation allows the "get" command to be very fast. (3) Software Manufacturing Facility (SMF) Make and SCCS were unified in special tools for a development project at Bell Labs called the Software Manufacturing Facility (SMF) and discussed in the Article of Eugene Cristofer, F. A. Wendt and B. C. Wonsiewicz, "Source Control & Tools=Stable Systems", Proceedings of the Fourth Computer Software & Applications Conference, pp. 527-532, Oct. 29-31, 1980. The SMF uses Make and SCCS augmented by special files called slists, which list desired versions of files by their SCCS version number. A slist may refer to other slists as well as files. In the SMF, a system consists of a master slist and references to a set of slists that describe subsystems. Each subsystem may in turn describe other subsystems or files that are part of the system. The SMF introduces the notion of a consistent software system: only one version of a file can be present in all slists that are part of the system. Part of the process of building a system is checking the consistency. SMF also requires that each slist refer to at least one Makefile. Building a system involves (1) obtaining the SCCS versions of each file, as described in each slists, (2) performing the consistency check, (3) running the Make program on the version of the Makefile listed in the slist, and (4) moving files from this slist to an appropriate directory. FIG. 2 shows an example of a hierarchy of slists, where ab.sl is the master slist. SMF includes a database of standard versions for common files such as the system library. Use of SMF solves the problem created when more than one programmer is making changes to the software of a system and no one knows exactly which files are included in the currently executing systems. (4) PIE Project The PIE project is an extension to Smalltalk developed at the Palo Alto Research Center of Xerox Corporation and set forth in the Articles of Ira P. Goldstein and Daniel G. Bobrow, "A Layered Approach to Software Design", Xerox PARC Technical Report CSL-80-5, December 1980; Ira P. Goldstein and Daniel G. Bobrow, "Descriptions for a Programming Environment", Proceedings of the First Annual Conference of the National Association of Artificial Intelligence, Stanford, Calif., August 1980; Ira P. Goldstein and Daniel G. Bobrow, "Representing Design Alternatives", Proceedings of the Artificial Intelligence and Simulation of Behavior Conference, Amsterdam, July 1980; and the book "Smalltalk-80, The Language and It Implemention" by Adele Goldberg and David Robson and published by Addison-Wesley, 1983. PIE implements a network database of Smalltalk objects, i.e., data and procedures and more powerful display and usage primitives. PIE allows users to categorize different versions of a Smalltalk object into layers, which are typically numbered starting at zero. A list of these layers, most-preferred layer first, is called a context. A context is a search path of layers, applied dynamically whenever an object in the network database is referenced. Among objects of the same name, the one with the layer number that occurs first in the context is picked for execution. Whenever the user wants to switch versions, he or she arranges his context so the desired layer occurs before any other layers that might apply to his object. The user's context is used whenever any object is referenced. The distinction of PIE's solution to the version control problem is the ease with which it handles the display of and control over versions. PIE inserts objects or procedures into a network that corresponds to a traditional hierarchy plus the threads of layers through the network. The links of the network can be traversed in any order. As a result, sophisticated analysis tools can examine the logically-related procedures that are grouped together in what is called a Smalltalk "class". More often, a PIE browser is used to move through the network. The browser displays the "categories", comprising a grouping of classes, in one corner of a display window. Selection of a category displays a list of classes associated with that category, and so on until a list of procedures is displayed. By changing the value of a field labeled "Contexts:" the user can see a complete picture of the system as viewed from each context. This interactive browsing features makes comparison of different versions of software very convenient. (5) Gandalf Project A project, termed the Gandalf project at Carnegie Mellon University, and discussed in the Article of A. Nico Habermann et al., "The Second Compendium of Gandalf Documention", CMU Department of Computer Science, May 1980, is implementing parts of an integrated software development environment for the GC language, an extension of the C language. Included are a syntax-directed editor, a configuration database, and a language for describing what is called system compositions. See the Articles of A. Nico Haberman and Dewayne E. Perry "System Compositions and Version Control for Ada", CMU Computer Science Department, May 1980 and A. Nico Haberman "Tools for Software System Construction", Proceedings of the Software Tools Workshop, Boulder, Colo., May 1979. Various Ph.D these have explored this language for system composition. See the Ph.D Thesis of Lee W. Cooprider "The Representation of Families of Software Systems", CMU Computer Science Department, CMU-CS-79-116, Apr. 14, 1979 and Walter F. Tichy, "Software Development Control Based on System Structure Description", CMU Computer Science Department, CMU-CS-80-120, January 1980. Recent work on a System Version Control Environment (SVCE) combines Gandalf's system composition language with version control over multiple versions of the same component, as explained in the Article of Gail E. kaiser and A. Nico Habermann, "An Environment for System Version Control", in "The Second Compendium of Gandalf Documentation", CMU Department of Computer Science, Feb. 4, 1982. Parallel versions, which are different implementations of the same specification, can be specified using the name of the specific version. There may be serial versions of each component which are organized in a time-dependent manner. One of the serial versions, called a revision, may be referenced using an explicit time stamp. One of these revisions is designated as the "standard" version that is used when no version is specified. Descriptions in the System Version Control Language (SVCL) specify which module versions and revisions to use and is illustrated, in part, in FIG. 3. A collection of logically-related software modules is described by a box that names the versions and revisions of modules available. Boxes can include other boxes or modules. A module lists each parallel version and revision available. Other boxes or modules may refer to each version using postfix qualifiers on module names. For example, "M" denotes the standard version of the module whose name is "M," and "M.V1" denote parallel version V1. Each serial revision can be specified with an "@," e.g., "M.V1@2" for revision 2. Each of these expressions, called pathnames, identifies a specific parallel version and revision. Pathnames behave like those in the UNIX system: a path name that begins, for example, /A/B/C refers to box C contained in box B contained in A. Pathnames without a leading "/" are relative to the current module. Implementations can be used to specify the modules of a system, and compositions can be used to group implementations together and to specify which module to use when several modules provide the same facilities. These ways of specifying and grouping versions and revisions alloy virtually any level of binding: the user may choose standard versions or, if it is important, the user can be very specific about versions desired. The resulting system can be modified by use of components that specialize versions for any particular application as illustrated in FIG. 3. SVCE also contains facilities for "System Generation". The Gandalf environment provides a command to make a new instantiation, or executable system, for an implementation or composition. This command compiles, links, and loads the constituent modules. The Gandalf editor is used to edit modules and edit SVCL implementations directly, and the command to build a new instantiation is given while using the Gandalf editor. Since the editor has built-in templates for valid SVCL constructs, entering new implementations and compositions is very easy. SVCE combines system descriptions with version control, coordinated with a database of programs. Of the existing systems, this system comes closest to fulfillng the three previously mentioned requirements: Their file information is in the database, their recompilation information is represented as lines in the database between programs and their interface information is represented by system compositions. (6) Intermetrics Approach A system used to maintain a program of over one million lines of Pascal code is described in an Article of Arra Avakian et al, "The Design of an Integrated Support Software System", Proceedings of the SIGPLAN '82 Syposium on Compiler Construction, pp. 308-317, June 23-25, 1982. The program is composed of 1500 separately-compiled components developed by over 200 technical people on an IBM 370 system. Separately-compiled Pascal modules communicate through a database, called a compool, of common symbols and their absolute addresses. Because of its large size (90 megabytes, 42,000 names), a compool is stored as a base tree of objects plus some incremental revisions. A simple consistency check can be applied by a link editor to determine that two modules were compiled with mutually-inconsistent compools, since references to code are stamped with the time after which the object file had to be recompiled. Management of a project this size poses huge problems. Many of their problems were caused by the lack of facilities for separate compilation in standard Pascal, such as interface-implementation distinctions. The compool includes all symbols or procedures and variables that are referenced by modules other than the module in which they are declared. This giant interface between modules severely restricts changes that affect more than one separately-compiled module. Such a solution is only suitable in projects that are tightly managed. Their use of differential-updates to the compool and creation times to check consistency makes independent changes by programmers on different machines possible, since conflicts will ultimately be discovered by the link editor. (7) Mesa, C/Mesa and Cedar Reference is now made to the Cedar/Mesa Environment developed at Palo Alto Research Center of Xerox Corporation. The software version management system or system modeller of the instant invention is implemented on this enviroment. However, it should be clear to those skilled in the art of organizing software in a distributed environment that the system modeller may be implemented in other programming systems involving a distributed environment and is not dependent in principle on the Cedar/Mesa environment. In other words, the system modeller may handle descriptions of software systems written in other programming languages. However, since the system modeller has been implemented in the Cedar/Mesa environment, sufficient description of this environment is necessary to be familiar with its characteristics and thus better understand the implementation of the instant invention. This description appears briefly here and more specifcally later on. The Mesa Language is a derivative of Pascal and the Mesa language and programming is generally disclosed and discussed in the published report of James G. Mitchell et al, "Mesa Language Manual, Version 5.0", Xerox PARC Technical Report CSL-79-3, April 1979. Mesa programs can be one of two kinds: interfaces or definitions and implementations. The code of a program is in the implementation, and the interface describes the procedures and types, as in Pascal, that are available to client programs. These clients reference the procedures in the implementation file by naming the interface and the procedure name, exactly like record or structure qualification, e.g., RunTime.GetMemory[] refers to the procedure GetMemory in the interface RunTime. The Mesa compiler checks the types of both the parameters and results of procedure calls so that the procedures in the interfaces are as strongly type-checked as local, private procedures appearing in a single module. The interconnections are implemented using records of pointers to procedure bodies, called interface records. Each client is passed a pointer to an interface record and accesses the procedures in it by dereferencing once to get the procedure descriptors, which are an encoded representation sufficient to call the procedure bodies. A connection must be made between implementations (or exporters) and clients (or importers) of interfaces. In Mesa this is done by writing programs in C/Mesa, a configuration language that was designed to allow users to express the interconnection between modules, specifying which interfaces are exported to which importers. With sufficient analysis, C/Mesa can provide much of the information needed to recompile the system. However, C/Mesa gives no help with version control since no version information can appear in C/Mesa configurations. Using this configuration language, users may express complex interconnections, which may possibly involve interfaces that have been renamed to achieve information hiding and flexibility of implementation. In practice, very few configuration descriptions are anything more than a list of implementation and client modules, whose interconnections are resolved using defaulting rules. A program called the Mesa Binder takes object files and configuration descriptions and produces a single object file suitable for execution. See the Article of Hugh C. Lauer and Edwin H. Satterthwaite, "The Impact of Mesa on System Design", Proceedings of the 4th International Conference on Software Engineering, pp. 174-182, 1979. Since specific versions of files cannot be listed in C/Mesa descriptions, the Binder tries to match the implementations listed in the description with flies of similar names on the invoker's disk. Each object file is given a 48-bit unique version stamp, and the imported interfaces of each module must agree in version stamp. If there is a version conflict, e.g., different versions of an interface, the Binder gives an error message and stops binding. Most users have elaborate command files to retrieve what they believe are suitable versions of files to their local disk. A Librarian, discussed in the Article of Thomas R. Horsley and William C. Lynch, "Pilot: A Software Engineering Case Study", Proceedings of the 4th International Conference on Software Engineering, pp. 94-99, 1979, is available to help control changes to software in multi-person projects. Files in a system under its control can be checked out by a programmer. While a file is checked out by one programmer, no one else is allowed to check it out until it has been checked in. While it is checked out, others may read it, but no one else may change it. In one very large Mesa-language project, which is exemplified in the Article of Eric Harslem and Leroy E. Nelson, "A Retrospective on the Development of Star" Proceedings of the 6th International Conference on Software Engineering, September 1982, programmers submit modules to an integration service that recompiles all modules in a system quite frequently. A newly-compiled system is stored on a file system and testing begins. A team of programmers, whose only duty is to perform integrations of other programmer's software, fix incompatibilities between modules when possible. The major disadvantage of this approach is the amount of time between a change made by the programmer and when the change is tested. The central concern with this environment is that even experienced programmers have a problem managing versions of Mesa or Cedar modules. The lack of a uniform file system, lack of tools to move version-consistent sets of modules between machines, and lack of complete descriptions of their systems contribute to the problem. The first solution developed for version mangement of files is based on description files, also designated as DF files. The DF system automates version control for the user or programmer. This version management is described in more detail later on because experience with it is what led to the creation of the version management system of the instant invention. Also, the version management of the instant invention includes some functionality of the DF system integrated into an automatic program development system. DF files have information about software versions of files and their locations. DF files that describe packages of software are input to a release process. The release process checks the submitted DF files to see if the programs they describe are made from compatible versions of software, and, if so, copies the files to a safe location. A Release Tool performs these checks and copies the files. If errors in DF files are found and fixed employing an interactive algorithm. Use of the Release Tool allows one making a release, called a Release Master, to release software with which he may in part or even to a large extent, not be familiar with. According to this invention, the system modeller provides for automatically collecting and recompiling updated versions of component software objects comprising a software program for operation on a plurality of personal computers coupled together in a distributed software environment via a local area network. As used herein, the term "objects" generally has reference to source modules or files, object modules or files and system models. The component software objects are stored in various different local and remote storage means throught the environment. The component software objects are periodically updated, via a system editor, by various users at their personal computers and then stored in designated storage means. The system modeller employes system modeller when any one of the objects is being edited by a user and the system modeller is responsive to such notification to track the edited objects and alter their respective models to the current version thereof. The system modeller upon command is adapted to retieve and recompile source files corresponding to altered models and load the binary files of the altered component software objects and their dependent objects into the user's computer. The system modeller also includes accelerator means to cache the object pointers in the object models that never change to thereby avoid further retrieving of the objects to parse and to discern the object pointers. The accelerator means for the models includes (1) an object type table for caching the unique name of the object and its object type to enhance the analysis of a model by the modeller, (2) a projection table for caching the unique name of the source object, names of object parameters, compiler switches and compiler version to enhance the translation of objects into derived objects, and (3) a version map for caching the object pathname. The system modeller is an ideal support system in a distributed software environment for noting and monitoring new and edited versions of objects or modules, i.e., source or binary or model files, and automatically managing the compilation, loading saving of such modules as they are produced. Further, the system modeller provides a means for organizing and controlling software and its revision to provide automatic support for several different kinds of program development cycles in a programming system. The modeller handles the daily evolution of a single module or a small group of modules modified by a single person, the assembly of numerous modules into a large system with complex interconnections, and the formal release of a programming system. The modeller can also efficiently locate a large number of modules in a big distributed file system, and move them from one machine to another to meet operational requirements or improve performance. More particularly, the system modeller automatically manages the compilation, loading and saving of new modules as they are produced. The system modeller is connected to the system editor and is notified of new and edited versions of files as they are created by the system editor, and automatically recompiles and loads new versions of software. The system user decribes his software in a system model that list the versions of files used, the information needed to compile the system, and the interconnections between the various modules. The modeller allows the user or programmer to maintain three kinds of information stored in system models. The models, which are similar to a blueprint or schematic, describe particular versions of a system. A model combines in one place (1) information about the versions of files needed and hints about their locations, (2) additional information needed to compile the system, and (3) information about interconnections between modules, such as which procedures are used and where they are defined. To provide fast response, the modeller behaves like an incremental compiler so that only those software modules that have experienced a change are analyzed and recompiled. System models are written in a SML language, which allows complete descriptions of all interconnections between software modules in the environment. Since these interconnections can be very complicated, the language includes defaulting rules that simplify system models in common situations. The programmer uses the system modeller to manipulate systems described by the system models. The system modeller (1) manipulates the versions of files listed in models (2) tracks changes made by the programmer to files listed in the models, (3) automatically recompiles and loads the system, and (4) provides complete support for the release process. The modeller recompiles new versions of modules and any modules that depend on them. The advantages of the system modeller is (1) the use of a powerful module interconnection language that expresses interconnections, (2) the provision of a user interface that allows interactive use of the modeller while maintaining an accurate description of the system, and (3) the data structures and algorithms developed to maintain caches that enable fast analysis of modules by the modeller. These advantages are further expandable as follows. First, the system modeller is easy to use, perform functions quickly and is to run while the programmer is developing his software and automatically update system descriptions whenever possible. It is important that a software version management system be used while the programmer is developing software so he can get the most benefit from them. When components are changed, the descriptions are adjusted to refer to the changed components. Manual updates of descriptions by the programmer would slow his software development and proper voluntary use of the system seems unlikely. The system modeller functioning as an incremental compiler, i.e. only those pieces of the system that are actually change are recompiled, loaded and saved. Second, the exemplified computing environment upon which the described system modeller is utilized is a distributed personal computer environment with the computers connected over an Ethernet local area network (LAN). This environment introduces two types of delays in access to versions of software stored in files: (1) if the file is on a remote machine, it has to be found, and (2) once found, it has to be retrieved. Since retrieval time is determined by the speed of file transfer across the network, the task of retrieving files is circumvented when the information desired about a file can be computed once and stored in a database. For example, the size of data needed to compute recompilation information about a module is small compared to the size of the module's object file. Recompilation information can be saved in a database stored in a file on the local disk for fast access. In cases where the file must be retrieved determining which machine and directory has a copy of the version desired can be very time consuming. The file servers can deliver information about versions of files in a remote file server directory at a rate of up to six versions per second. Since directories can have many hundreds of versions of files, it is not practical to enumerate the contents of a file server while looking for a particular version of a file. The solution presented here depends on the construction of databases for each software package or system that contains information about file locations. Third, since many software modules, e.g., Cedar software modules, have a complicated interconnection structure, the system modeller includes a description language that can express the interconnection structure between the modules. These interconnection structures are maintained automatically for the programmer. When new interconnections between modules are added by the programmer, the modeller updates the model to add the interconnection when possible. This means the user has to maintain these interconnections very seldom. The modeller checks interconnections listed in models for accuracy by checking the parameterization of modules. Further advantages, objects and attainments together with a fuller understanding of the invention will become apparent and appreciated by referring to the following description and claims taken in conjunction with the accompanying drawings. FIG. 1 is an illustration of a dependency graph for a prior art software management system. FIG. 2 is an illustration for a hierarchy of another prior art software management system. FIG. 3 is an illustration of the description specifiers of a still another prior art software management system. FIG. 4 is an illustration of a Cedar system client and implementor module dependency. FIG. 5 is an illustration of a Cedar system source and object file dependency. FIG. 6 is an illustration of a dependency graph for a Cedar System. FIG. 7 is an example of a typical distributed computer evironment. FIG. 8 is a flow diagram of the steps for making a release in a distributed computer environment. FIG. 9 is a dependency graph for DF files in the boot file. FIG. 10 is a dependency graph illustrative of a detail in the boot file. FIG. 11 is a dependency graph for interfaces. FIG. 12 is a dependency graph for files outside the boot file. FIGS. 13a and 13b illustrate interconnections between implementation and interface modules. FIG. 14 illustrates two different versions of a client module. FIGS. 15a and 15b illustrate a client module to IMPORT different versions of the module that EXPORTs. FIG. 16 illustrates a client module with different types of objects. FIG. 17 is an example of a model. FIG. 18 are examples of object type and projection tables. FIG. 19 is an example of a version map. FIG. 20 is an illustration the user's screen for system modeller in the Cedar system. FIG. 21 is a flow diagram illustrating the steps the user takes in employing the system modeller. FIG. 22 is a modeller implementation flow diagram illustrating "StartModel" analysis. FIG. 23 is a modeller implementation flow diagram illustrating computation analysis. FIG. 24 is a modeller implementation flow diagram illustrating loader analysis. FIG. 25 illustrates the Move Phase two of the release utilitity. FIG. 26 illustrates the Build Phase three of the release utility. FIG. 27 is an example of a version map after release. One kind of management system of versions of software for a programmer in a distribution environment is a version control system of modest goals utilizing DF files. Each programmer lists files that are part of his system in a description file which is called a DF file. Each entry in a DF file consists of a file name, its location, and the version desired. The programmer can use tools to retrieve files listed in a DF file and to save new versions of files in the location specified in the DF file. Because recompiling the files in his system can involve use of other systems, DF files can refer also to other DF files. The programmer can verify that, for each file in the DF file, the files it depends on are also listed in the DF file. DF files are input to a release process that verifies that the cross-package references in DF files are valid. The dependencies of each file on other files are checked to make sure all files needed are also part of the release. The release process copies all files to a place where they cannot be erroneously destroyed or modified. The information about file location and file versions in DF files is used by programs running in the distributed programming environment. Each programmer has a personal computer on which he develops software. Each personal computer has its own disk and file system. Machines are connected to other machines using an Ethernet local area network. Files can be transferred by explicit request from the file system on one machine or computer to another machine or computer. Often transfers occur between a personal machine and a file storage means, e.g., a file server, which is a machine dedicated to servicing file requests, i.e., storing and permitting the retrieval of stored files. The major research contributions of the DF system are (1) a language that, for each package or system described, differentiates between (a) files that are part of the package or system and (b) files needed from other packages or systems, and (2) a release process that does not place too high a burden on programmers and can bring together packages being released. A release is complete if and only if every object file needed to compile every source file is among the files being released. A release is consistent if, and only if, only one version of each package is being released and every other package depends on the version being released. The release process is controlled by a person acting as a Release Master, who spends a few days per monthly release running programs that verify that the release is consistent and complete. Errors in DF files, such as references to non-existent files or references to the wrong versions of files, are detected by a program called the Release Tool. After errors are detected, the Release Master contacts the implementor and has him fix the appropriate DF file. Releases can be frequent since performing each release imposes a low cost on the Release Master and on the programmers. The Release Master does not need to know details about the packages being released, which is important when the software of the system becomes too large to be understood by any one programmer. The implementor of each package can continue to make changes to his package until the release occurs, secure in the knowledge that his package will be verified before the release completes. Many programmers make such changes at the last minute before the release. The release process supports a high degree of parallel activity by programmers engaged in software development of a large dsitributed programing environment. The DF system does not offer all that is needed to automate software development. DF files have only that information needed to control versions of files. No support for automatic recompilation of changed software modules is provided in the DF system. The only tool provided is a consistency checker that verifies that an existing system does not need to be recompiled. In order to better understand the software version control system of the instant invention, a general understanding of the programming environment in which it is implemented is desirable. The programming environment is called Cedar. First, some general characteristics of Cedar. The Cedar system changes frequently, both to introduce new function and also to fix bugs. Radical changes are possible and may involve recompilation of the entire system. System requirements are: 1. The system must manage these frequent changes and must give guarantees about the location and consistency of each set of files. 2. Each consistent set of Cedar software is called a "Cedar Release", which is a set of software modules carefully packaged into a system that can be loaded and run on the programmer's personal machine. These releases must be carefully stored in one place, documented and easily accessible. 3. Cedar releases should be accomplished, e.g., as often as once a week, since frequent releases make available in a systematic way new features and bug fixes. The number of users or programmers is small enough that releases do not need to be bug-free since users are generally tolerant of bugs in new components or packages in the system. When bugs do occur, it must be clear who is responsible for the software in which the bug occurs. 4. The system must minimize inconvenience to implementors and cannot require much effort from the person in charge of constructing the release. The scheme must not require a separate person whose sole job is control and maintenance of the system. 5. The system must be added on top of existing program development facilities, since it is not possible to change key properties of such a large distributed programing environment. A limited understanding of the dependency relationships in the Cedar software systems is necessary, i.e., an overview of Cedar modules and dependencies. The view taken in the Cedar system is that the software of a system is completely described by a single unit of text. An appropriate analogy is the sort of card deck that was used in the 1950s to boot, load and run a bare computer. Note that everything is said explicitly in such a system description. There is no operator intervention, such as to supply compiler switches or loader options, after the "go" button is initiated. In such a description there is no issue of "compilation order", and "version control" is handled by distributing copies of the deck with a version number written on the top of each copy. The text of such a system naturally will have integral structure appropriate to the machine on which it runs as well as to the software system itelf. The present system is composed of modules that are stored as text in files termed modules or objects. This representation provides modularity in a physical representation, i.e., a file can name other files instead of literally including their text. In Cedar, these objects are Cedar modules or system models. This representation is convenient for users to manipulate, it allows sharing of identical objects or modules, and facilitates the separate compilation of objects or modules. But it is important to appreciate that there is nothing essential in such a representation. In principle, a system can always be expressed as a single text unit. Unless care is taken, however, the integrity of the system will be lost, since the contents of the named files may change. To prevent this, files are abstracted into named objects, which are simply pieces of text. The file names must be unique and objects must be immutable. By this it is meant that each object has a unique name, never used for any other object. The name is stored as part of the object, so there is no doubt about whether a particular collection of bits is the object with a given name. A name is made unique by appending a unique identifier to a human-sensible string. The contents of an object or module never change once the object is created. The object may be erased, in which case the contents are no longer accessible. If the file system does not guarantee immutability, it can be ensured by using a suitable checksum as the unique identifier of the object. These rules ensure that a name can be used instead of the text of a module without any loss of integrity, in the sense that either the entire text of a system will be correctly assembled, or the lack of some module will be detected. In Cedar, a Cedar module A depends on another Cedar module B when a change to B may require a change to A. If module A depends on module B, and B changes, then a system that contains the changed version of B and an unchanged version of A could be inconsistent. Depending on the severity of the change to B, the resulting system may not work at all, or may work while being tested but fail after being distributed to users. Cedar requires inter-module version checking between A and B that is very similar to Pascal type-checking for variables and procedures. As in Pascal, Cedar's module version checking is designed to detect inconsistency as soon as possible at compile time so that the resulting system is more likely to run successfully after development is completed. Each Cedar module is represented as a source file whose names, for example, ends in "Mesa". The Cedar compiler produces an object file whose name, for example, ends in "Bcd". Each object file can be uniquely-identified by a 48-bit version stamp so no two object files have the same version stamp. Cedar modules depend on other modules by listing in each object file the names and 48-bit version stamps of object files they depend on. A collection of modules that depend on each other are required to agree exactly in 48-bit version stamps. For example, module A depends on version 35268AADB3E4 (hexadecimal) of module B, but B has been changed and is now version 31258FAFBFE4, then the system is inconsistent. The version stamp of a compiled module is a function of the source file and the version stamps of the object files on which it depends on. If module A depends on module B which in turn depends on module C, and C is changed and compiled, then when B and A are compiled their version stamps will change because of the change to C. There are three kinds of software modules in Cedar. They are called interface, implementation, and configuration. There are two programs that produce object files. They are the Cedar Complier and the Cedar Binder. Executing code for a Cedar system is contained in an implementation module. Each implementation module can contain procedures, global variables, and local variables that are scoped using Pascal scoping rules. To call a procedure defined in another implementation module, the caller or client module must IMPORT a interface module that defines the procedure's type i.e. the type of the procedure's argument and result values. This interface module must be EXPORTED by the implementation module that defines it. This module is called the implementor. Both the client and implementor modules depend on the interface module. This dependency is illustrated in FIG. 3. If the interface is recompiled, both client and implementor must be recompiled. The client and implementor modules do not depend on each other, so if either is compiled the other does not need to be. Thus, Cedar uses the interface-implementor module distinction to provide type safety with minimal recompilation cost. A compiler-produced object file depends on (1) the source module that was compiled and (2) the object files of any interfaces that this module IMPORTs or EXPORTs. This dependency is illustrated in FIG. 5. These interface modules are compiled separately from the implementations they described, and interface object files contain explicit dependency information. In this respect, Cedar differs from most other languages with interface or header files. Another level of dependency is introduced by configuration modules, which contain implementation modules or other configuration modules. The programmer describes a set of modules to be packaged together as a system by writing a description of those modules and the interconnections among them in a language called C/Mesa. A C/Mesa description is called a configuration module. The source file for a configuration is input to the Cedar Binder which then produces an object file that contains all the implementation module object files. The Binder ensures the object file is composed of a logically-related set of modules whose IMPORTs and EXPORTs all agree in version. Large system of modules are often made from a set of configurations called sub-configurations. A configuration object file depends on (1) its source file and (2) the sub-configurations and implementation object files that are used to bind the configuration. These object files can be run by loading them with the Cedar Loader which will resolve any IMPORTs not bound by the Binder. In general, a Cedar system has a dependency graph like that illustrated in FIG. 6. Each Cedar programmer has its own personal computer, which is connected to other computers by an Ethernet local area network (LAN). Most files comprising a system are stored on central file servers dedicated to serving file requests and are copied from the central file server(s) to the personal machine by an explicit command, which is similar to the Arpanet "ftp" command. FIG. 7 illustrates a typical environment. In such an environment, a plurality of workstations comprising a personal computer or machine 10 with keyboard, display and local memory are connected to an Ethernet LAN via cable 12. Also connected to cable 12 is file server 14 comprising a server computer 16 and storage disk units 18 capable of storing large amounts of files under designated path or directory names. Cable 12 is also connected to a gateway computer 20 which provides access and communication to other LANs. The user of a machine 10 must first install a boot file that is given control after the machine is powered on. Cedar users install the Cedar boot file that contains the operating system and possibly pre-loaded programs. Since the Binder and Loader ensure that the version stamps of Cedar modules all agree, all Cedar modules could be bound together and distributed to all users for use as the Cedar boot file. However, users who wanted to make changes would have to re-bind and load the system every time they changed a module to test their changes. The resulting boot file would be very large and difficult to transfer and store on the disks of the personal machines. To avoid these problems, Cedar users install this boot file on their machine, which contains a basic system to load and execute Cedar programs, a file system, and a pre-loaded editor and then retrieve copies of programs they want to run that are not already in the boot file. These programs are thus loaded as they are needed. Changes to these programs are possible as long as the versions of interfaces pre-loaded in the Cedar boot file agree with the versions IMPORTed by the program being loaded. Since the boot file EXPORTs are more than 100 interfaces, the programmer can quickly become confused by version error messages for each of the interfaces he uses. This problem could be solved simply by disallowing changes to the Cedar interfaces except, say, once annually. However, it is desirable to be able to adjust interfaces frequently to reflect new features and refinements as they are understood. Control of software in module interconnection languages is analogous to control over types in conventional programming languages, such as Pascal. Still opposed by some, strong type-checking in a language can be viewed as a conservative approach to programming, where extra rules, in the form of type equivalence, are imposed on the program. Proponents claim these rules lead to the discovery of many programming errors while the program is being compiled, rather than after it has started execution. Like strong type-checking of variables, type-checking in a language like Cedar with the explicit notion of an interface module can be performed at the module level so that incompatibilities between modules can be resolved when they are being collected together rather than when they are executing. As in the strong type-checking case, proponents claim this promotes the discovery of errors sooner in the development of programs. Incompatible versions of modules, like incompatible types in a programming languages, may be corrected by the programmers involved. Many times, complex and subtle interdependencies exist between modules, especially when more than a few programmers are involved and the lines of communication between them are frayed or partially broken. In the Cedar Xerox environment, where each module is a separate file and development occurs on different personal computers or machines, module-level type-checking is more important than type-checking of variables in conventional programming languages. This is because maintaining inter-module type consistency is by definition spread over different files, possibly on different computers by more than one programmer/user, while maintaining type-consistency of variables is usually localized in one file by one programmer/user on one computer. Users in Cedar are required to group logically-related files, such as the source and object files for a program they are developing, into a package. Each software package is described by a DF file that is a simple text file with little inherent structure that is editable by the programmer/user. The DF file lists all the files grouped together by the implementor as a package. For each file, the DF file gives a pathname or location where the file can be found and information about which version is needed. In Cedar, files are stored on remote file servers with names like "Ivy" or "Indigo" and have path or directory names, e.g., "Levin>BTrees>". A file like "BTreeDefs.Mesa" would be referenced as "[Ivy]<Levin>BTrees>BTreeDefs.Mesa". In addition, when created, each file is assigned a creation time. Therefore "BTreeDefs.Mesa Of May 13, 1982 2:30 PM" on "[Ivy]<Levin>BTrees>" defines a particular version. A DF file is a list of such files. For syntactic grouping, we allow the user to list files grouped under common directories. The implementor of a B-tree package, for example, might write in his DF file, called BTrees.DF: ______________________________________Directory [Ivy]<Levin>BTrees>______________________________________BTreeDefs.Mesa 2-Oct-81 15:43:09______________________________________ to refer to the file [Ivy]<Levin>BTrees>BTreeDefs.Mesa created at 2-Oct-81 15:43:09. If, for example, the BTree package included an object file for BTreeDefs.Mesa, and an implementation of a B-tree package, it could be described in BTrees.DF as: ______________________________________Directory ______________________________________ Two different DF files could refer to different versions of the same file by using references to files with different create dates. There are cases where the programmer wants the newest version of a file. If the notation, ">", appears in place of a create time notation, the DF file refers to the newest version of a file on the directory listed in the DF file. For example, ______________________________________Directory [Ivy]<Pilot>Defs>______________________________________Space.Bed >______________________________________ refers to the newest version of Space.Bcd on the directory [Ivy]<Pilot>Defs>. This is used mostly when the file is maintained by someone other than the programmer and is content to accept the latest version of the file. Users are encouraged to think of the local disk on their personal computer as a cache of files whose "true" locations are the remote servers. A program called BringOver assures the versions listed in a DF file are on the local computer disk. Since DF files are editable, the programmer who edits, for example, BTreeDefs.Mesa could, when ready to place a new copy on the server,Ivy, store it manually and edite the DF file to insert the new create time for the new version. For large numbers of files, this would always be error prone, so a StoreBack program provides automatic backup of changed versions (1) by storing files that are listed in the DF file but whose create date differs from the one listed in the DF on the assumption that the file has been edited, and (2) by updating the DF file to list the new create dates. The DF file is to be saved on the file server, so we allow for a DF self-reference that indicates where the DF file is stored. For example, in BTrees.DF: ______________________________________Directory [Ivy]<Levin>BTrees>______________________________________BTrees.DF 20-Oct-81 9:35:09BTreeDefs.Mesa 2-Oct-81 15:43:09BTreeDefs.Bed 2-Oct-81 16:00:28BTreeImpl.Mesa 2-Oct-81 15:28:54BTreeImpl.Bed 2-Oct-81 16:44:31______________________________________ the first file listed is a self-reference. The StoreBack program arranges that the new version of BTrees.DF will have the current time as its create date. The Cedar system itself is a set of implementaton modules that export common system interfaces to the file system, memory allocator, and graphics packages. Assume the B-tree package uses an interface from the allocator. The user makes this dependency explicit in their DF file. The BTree package will then IMPORT the interface "Space", which is stored in object form in the file "Space.Bcd". The BTree DF package will reflect this dependency by "importing" Space.Bcd from a DF file "PilotInterfaces.DF" that lists all such interfaces. BTrees.DF will have an entry: ______________________________________Imports [Indigo]<Cedar>Top> 2-Oct-81 15:43:09PilotInterfaces.DF OfUsing[Space.Bed]______________________________________ The "Imports" in a DF file is analogous to the IMPORTS in a Cedar program. As in Cedar modules, BTrees.DF depends on Pilot.DF. Should "Space.Bcd" and its containing DF file "Pilot.DF" change, then BTrees.DF may have to also change. The programmer/user may want to list special programs, such as a compiler-compiler or other preprocessors, that are needed to make changes to his system. This is accomplished using the same technique of IMPORTing the program's DF file. For the individual programmer, there are two direct benefits from making dependency information explicit in his DF file. First, the BringOver program will ensure that the correct version of any imported DF files are on the local disk, so programmers can move from one personal computer to another and guarantee they will have the correct version of any interfaces they reference. Second, listing dependency information in the DF file puts in one place information that is otherwise scattered across modules in the system. How does the programmer/user know which files to list in his DF file? For large systems, under constant development, the list of files is long and changes frequently. The programmer can run a program VerifyDF that analyzes the files listed in the DF file and warns about files that are omitted. VerifyDF analyzes the dependency graph, an example of which is illustrated in FIG. 6, and analyzes the versions of (1) the source file that was compiled to produce the object file and (2) all object files that this object file depends on. VerifyDF analyzes the modules listed in the DF file and constructs a dependency graph. VerifyDF stops its analysis when it reaches a module defined in another package that is referenced by IMPORTs in the DF. Any modules defined in other packages are checked for versionstamp equality, but no modules that they depend upon are analyzed, and their sources do not need to be listed in the package's DF file. VerifyDF understands the file format of object files and uses the format to discover the dependency graph, but otherwise it is quite general. For example, it does not differentiate between interface and implementation files. VerifyDF could be modified to understand object files produced by other language compilers as long as they record all dependencies in the object file with a unique version stamp. For each new such language, VerifyDF needs (1) a procedure that returns the object version stamp, source file name and source create time, and (2) a procedure that returns a list of object file names and object version stamps that a particular object file depends on. If the programmer lists all such package and files he depends on, then some other programmer on another machine will be able to retrieve, using BringOver command, all the files he needs to make a change to the program and then run StoreBack to store new versions and produce a new DF file. Using these tools, that is BringOver, StoreBack, VerifyDF, the programmer/user can be sure he has a DF file that lists all the files that are needed to compile the package (completeness) and that the object files were produced from the source files listed in the DF file, and there are no version stamp discrepancies (consistency). The programmer can be sure the files are stored on central file servers and can turn responsibility for a package over to another programmer by simply giving the name of the DF file. DF files can be used to describe releases of software. Releases are made by following a set of Release Procedures, which are essentially managerial functions by a Release Master and requirements placed on implementors/users. A crucial element of these Release Procedures is a program called the Release Tool, which is used to verify that the release is consistent and complete, and is used to move the files being released to a common directory on a designated file server. If the packages a programmer depends on change very seldom, then use of the tools outlined above is sufficient to manage versions of software. However, packages that almost everyone depends on may be changed. A release must consist of packages that, for example, all use the same versions of interfaces supplied by others. If version mismatches are present, modules that IMPORT and EXPORT different versions of the same interface will not be connected properly by the loader. In addition to the need for consistency and completeness across an entire release, the component files of a particular release must be carefully saved somewhere where they are readily available and will not be changed or deleted by mistake, until an entire release is no longer needed. The administration of Cedar releases are organized around an implementor/user who is appointed Release Master. In addition to running the programs that produce a release, he is expected to have a general understanding of the system, to make decisions about when to try to make a release, and to compose a message describing the major changes to components of the release. Once he decides to begin the release process after conferring with other implementors and users, the Release Master sends a "call for submissions" message through an electronic mail system of the distributed system to a distribution list of programmers/users who have been or are planning to contribute packages to the release. Over a period of a few days, implementors/users are expected to wait until new versions of any packages they depend on are announced, produce a new version on some file server and directory of their choosing, and then announce the availability of their own packages. One message is sent per package, containing, for example, "New Version of Pkg can be found on [Ivy]<Schmidt>Pkg.DF, that fixes the bug . . . ". Programmers who depend on Pkg.DF are expected to edit their DF files by changing them to refer to the new version. Since often it is the newest version, clients of Pkg.DF usually replace an explicit date by the notation, ">". They might refer to Pkg.DF by inserting: ______________________________________Imports [Ivy]<Schmidt>Pkg.DF Of>Using[File1.Bed. File2.Bed]______________________________________ in their DF file. If the package is not changed, a message to that effect will be sent. These submissions do not appear in lock step since changes by one implementor may affect packages that are "above" them in the dependency graph. This pre-release integration period is a parallel exploration of the dependency graph of Cedar software by its implementor/users. If an implementor is unsure whether he will have to make changes as a result of lower level bug fixes, for instance, he is expected to contact the implementor of the lower package and coordinate with him. Circular DF-dependencies may occur, where two or more packages use interfaces exported by each other. In circular cases, the DF files in the cycle have to be announced at the same time or one of the DF files has to be split into two parts: a bottom half that the other DF file depends on and a top half that depends on the other DF file. The Release Master simply monitors this integration process and when the final packages are ready, begins the release. FIG. 7 illustrates the steps being taken to accomplish a release. Once all packages that will be submitted to the release are ready, the Release Master prepares a top-level DF file that lists all the DF files that are part of the release. Packages that are not changed relative to a previous release are also listed in this DF file. DF files are described using a construct similar to "Imports" discussed earlier. The contents of each DF file are referenced by an Include statement, e.g., Include [Ivy]<Levin>BTrees>BTrees.DF Of> refers to the newest version of the BTree package stored on Levin's working directory <Levin>BTrees>. Include is treated as macro-substitution, where the entire contents of BTrees.DF are analyzed by the Release Tool as if they were listed directly in the top-level DF. The Release Master uses the top-level DF as input to phase one of the Release Tool. Phase one reads all the included DF files of the release and performs a system-wide consistency check. A warning message is given if there are files that are part of the release with the same name and different creation times (e.g., BTreeDefs.Mesa of 20-May-82 15:58:23 and also another version of 17-Apr-82 12:68:33). Such conflicts may indicate that two programmers are using different versions of the same interface in a way that would not otherwise be detected until both programs were loaded on the same computer. These warnings may be ignored in cases where the Release Master is convinced that no harm will come from the mismatch. For example, there may be more than one version of "Queue.Mesa" in a release since more than one package has a queue implementation, but each version is carefully separated and the versions do not conflict. Phase one also checks for common blunders, such as a DF file that does not refer to newest versions of DF files it depends on, or a DF file that refers to system or program files that do not exist where the DF file indicates they can be found. The Release Master makes a list, package by package, of such blunders and calls each user and notifies them they must fix their DF files. Phase one is usually repeated once or twice until all such problems are fixed and any other warnings are judged benign. Phase two guarantees system wide completeness of a release by running VerifyDF will warn of files that should have been listed in the DF file but were omitted. Implementor/users are expected to run VerifyDF themselves, but during every release, ot is easy for at least one to forget. Any omissions must be fixed by the implementor/user. Once phases one and two are completed successfully, the Release Master is fairly certain there are no outstanding version of system composition problems, and he can proceed to phase three. To have control over the deletion of old releases, phase three moves all files that are part of a release to a directory that is mutable only by the Release Master. Moving files that are part of the release also helps users by centralizing the files in one phase. The DF files produced by users, however, refer to the files on their working directories. We therefore require that every file mentioned in the DF files that are being released have an additional phrase "ReleaseAsreleasePlace". The BTrees.DF example would look like: ______________________________________Directory [Ivy]<Levin>BTrees>______________________________________Release As [Indigo]<Cedar>Top>BTrees.DF 20-Oct-81 9:35:09ReleaseAs [Indigo]<Cedar>BTrees>BTreeDefs.Mesa 2-Oct-81 15:43:09BTreeDefs.Bed 2-Oct-81 16:00:28BTreeImpl.Mesa 2-Oct-81 15:28:54BTreeImpl.Bed 2-Oct-81 16:44:31______________________________________ which indicates a working directory as before and a place to put the stable, released versions. By convention, all such files must be released onto subdirectories of [Indigo]<Cedar>. To make searching for released DF files on the <Cedar> directory easier, each DF file's self-reference must release the DF file to the special subdirectory <Cedar>Top>. When the third phase is run, each file is copied to the release directory, e.g., B-tree files are copied to <Cedar>BTrees> and new DF files are written that describe these files in their release positions, e.g., ______________________________________Directory [Indigo]<Cedar>Top>Came From [Ivy]<Levin>BTrees>BTrees.DF 9-Nov-81 10:32:45Directory [Indigo]<Cedar>BTrees>Came From ______________________________________ The additional phrase "CameFrom" is inserted as a comment saying where the file(s) were copied from. The other major function of phase three is to convert references using the "newest version" notation, ">", to be explicit dates, since "newest version" will change for every release. Phase three arranges that a reference like: ______________________________________Imports[Ivy]<Levin>BTrees>BTrees.DF Of>Using[BtreeDefs.Bed]______________________________________ becomes ______________________________________Imports [Indigo]<Cedar>BTrees>Btrees.DF Of dateCame from [Ivy]<Levin>BTrees>Using [BTreeDefs.Bed]______________________________________ where date is approximately the time that phase three is run. The notion of a "Cedar Release" has many advantages. In addition to a strong guarantee that the software will work as documented, it has an important psychological benefit to users as a firewall against disasters, since programmers are free to make major changes that may not work at all, and are secure in the knowledge that last release is still available to fall back upon. Since users can convert back and forth between releases, users have more control over which versions they use. There is nothing wrong with more than one such release being in use at one time by different programmer/users, since each programmer has his own personal computer. Users are also allowed to convert to new releases at their own pace. This approach to performing releases fulfills initial requirements: (1). All files in the release have been moved to the release directory. These files are mutually consistent versions of software. All DF files refer to files known to be on the release directory. (2). As described earlier, we cannot make a configuration module that contains all the modules in a release. Cedar releases are composed of (a) a boot file and (b) programs that are mutually consistent and can be run on a personal machine with the boot file being released. Phase two runs VerifyDF on all the components to guarantee that the versions of source and object files listed in the DF file are the ones actually used to build the component and guarantees that all files needed to build the component are listed in the DF file, so no files that conflict in version can be omitted. (3). The release process is automatic enough that frequent releases are possible. Bugs in frequent releases are easily reported since the concept of ownership is very strongly enforced by our approach. The programmer who provides new versions of software is the recipient of bug reports of his software. (4). The Release Master is required to (a) decide when to make a release, (b) send a call-for-submissions message, (c) make a to-level DF file and run the Release Tool, and (d) send a message announcing the release's completion. Because releases are expected, over time, to include more and more system programs, it is important that the Release Master not need to compile packages other than any packages he may be contributing to the release. Indeed, no single person has ever known how to compile the entire system by himself. Since the implementors use DF files for maintaining their own software as well as for submitting components to the release, there is little additional burden on the implementors when doing a release. If the burden were too high, the implementors would delay releases and overall progress would be slowed as the feedback from users to implementors suffered. (5). A general database system to describe the dependency hierarchy of packages when we are producing systems is not needed. A message system is used, rather than a database of information that the programmers can query, to notify implementors that packages they may depend on are ready. Many aspects of bootstrapping Cedar are simplified when interfaces to the lowest and most heavily used parts of the boot file are not changed. Some major releases use the same versions of interfaces to the system object allocator and fundamental string manipulation primitives. Most major releases use the same versions of interfaces to the underlying Pilot system such as the file system and process machinery. The implementations of these stable parts of the system may be changed in ways that do not require interface changes. In the Cedar environment, two previous releases have included changes to the interfaces of the operating system, called Pilot and discussed in the Article of Redell et al. "Pilot: An Operating System for a Personal Computer", Proceedings of the Seventh Symposium on Operating System Principles, December 1979, and thereby forced changes in the style of integration for those releases. Since the released loader cannot load modules that refer to the new versions of operating system interfaces, the software of Cedar environment that is preloaded in the boot file must all be recompiled before any changes can be tested. Highest priority is given to producing a boot file in which these changes can be tested. If the DF files describing the Cedar system were layered in hierarchical order, with the operating system at the bottom, this boot file could be built by producing new versions of the software in each DF file in DF-dependency order. FIG. 9 shows the dependency graph for DF files in the boot file, where an arrow from one DF file, e.g., Rigging.DF, to another, e.g., CedarReals.DF, indicates Rigging.DF IMPORTS some file(s) from CedarReals.DF. In this dependency graph, "tail" DF files depend on "head" DF files. Double headed arrows indicate mutual dependency. Basic Heads.DF means that this DF file includes other files, BasicHeadsDorado.DF, BasicHeadsDO.DF and BasicHeadCommon.DF, Communication.DF includes CommunicationPublic.DF, CommunicationFriends.DF and RS232Interfaces.DF. CompatabilityPackage.DF includes MesaBAsics.DF. Note that Rigging.DF also depends on CompatibilityPackage.DF, but the dependency by CedarReals.DF on CompatibilityPackage.DF ensures a new version of Rigging.DF will be made after both lower DF files. The PilotInterfaces.DF file is at the bottom and must be changed before any other DF files. This dependency graph is not acrylic, however. The most extreme cycle is in the box with six DF files in it, which is expanded in FIG. 10. Each DF file is in a cycle with at least one other DF file, so each DF file depends on the other, and possibly indirectly, and no DF file can be announced "first". There is an ordering in which these component can be built: If the interfaces listed in each of the DF files are compiled and DF files containing those interfaces are stored on <PreCedar>, each programmer can then compile the implementation modules in this component and then store the remaining files on <PreCedar>. An example for the dependency graph for interfaces is shown in FIG. 11. This graph indicates that the interfaces of CIFS, VersionMap, Runtime, WorldVM, ListsAndAtoms, and IO can be compiled in that order. This interface dependency graph had cycles in it in the Cedar release that have since been eliminated. Appendix A contains examaples of some of these DF files before and after the release. Recompilation of all the interfaces in the boot file requires that at least nine programmer/users participate. Since the boot file cannot be produced until all interfaces and implementation modules in the DF files of FIG. 9 are compiled, interface changes are encouraged to be made as soon as possible after a successful release and only once per release. Once the users have made their interface changes and a boot file using the new interfaces is built, the normal period of testing can occur and new changes to implementation modules can be made relatively painlessly. Components being released that are outside the boot file have a much simpler dependency structure, shown in FIG. 12. The majority of these components are application programs that use Cedar system facilities already loaded in the boot file. The information in the DF files of a release help to permit study and planning for the development of the Cedar system. The ability to scan, or query, the interconnection information gives a complete view of the use of software by other programs in the system. For example, one can mechanically scan the DF files of an entire release and build a dependency graph describing the interfaces used in Cedar and which implementors depend on these interfaces. Since VerifyDF ensures all interfaces needed by a component are described in its DF file, an accurate database of information can be assured. This information can be used to evaluate the magnitude of changes and anticipate which components can be affected. One can also determine which interfaces are no longer used, and plan to eliminate the implementation of those interfaces, which happens often in a large programming environment while it is under active development. The Cedar release/DF approach assumes only one person is changing a DF file at a time. How would we cope with more than one modifier of a package? If the package is easily divided, as with the Cedar system window manager and editor, two or more DF files can be included by an "umbrella" DF file that is released. One of the implementors must "own" the umbrella DF file and must make sure that the versions included are consistent by running VerifyDF check on the umbrella file. If the package is not easily divided, then either a check in/check out facility must be used on the DF and its contents to guarantee only one person is making changes at a time, or a merge facility would be needed to incorporate mutually exclusive changes. Should more than one programmer change the same module, this merge facility would have to ask for advice on which of the new versions, if any, to include on the DF file. 2. Module Interconnection Language--SML SML is a polymorphic and applicative language that is used to describe packages of Cedar modules. The programmer/user writes SML programs, which are called system models, to specify the modules in the system the user is responsible for and the interconnections between them. These system models are analyzed by a system modeller of the instant invention that automates the compile-edit-debug cycle by tracking changes to modules and performs the compilation and loading of systems. The specification of module interconnection facilities of the Cedar system requires use of polymorphism, where the specification can compute a value that is later used as the type for another value. This kind of polymorphism is explained in detail later. The desire to have a crisp specification of the language and its use of polymorphism led to base SML on the Cedar Kernal language, which is used to describe the semantics of Cedar developed programs. The semantics of the SML language have to be unambiguous so every syntactically-valid system model has clear meaning. The Cedar Kernal language has a small set of principles and is easily implemented. The clear semantics of Kermel language descriptions give a concise specification of the SML language and give good support to the needs of the module interconnection specification. SML could have been designed without reference to the Kernal language. However, without the Kernel language as a base, there would be less confidence that all language forms had clear meaning. SML is an applicative language, since it has no assignment statement. Names or identifiers in SML are given values once, when the names are declared and the value of a name may not be changed later unless the name is declared in some inner scope. SML is easier to implement because it is applicative and function invocation has no side effects. The fundamental concepts of SML are now presented, followed by a description of SML's treatment of files. The Cedar Kernal language, which serves as a basis for SML, is described, followed by a section on the syntax and semantics of SML expressions. The Cedar System is based on the Mesa language see Mitchell et al., supra and Lauer et al., supra. The system contains features for automatic storage management (garbage collection) and allows binding of types at runtime, i.e. pointers to objects whose types are known only at runtime. The system derives from the Mesa language a rich module interconnection structure that provides information hiding and strong type checking at the module level, rather than at the procedure level. In order to better understand SML, it is important to know about the existing module interconnection facilities used in the Cedar system. As previously indicated in part, a Cedar system consists of a set of modules, each of which is stored in a separate file. A module can be one of two types: an implementation (PROGRAM) module, or an interface (DEFINITIONS) module. Interface modules contain constants found in other Pascal-like languages: procedure declarations, type declarations, and other variables. A module that wishes to call a procedure declared in another module must do so by IMPORTing an interface module that declares this procedure. This interface module must be EXPORTED by a PROGRAM module. For example, a procedure "USortList" declared in a module "SortImpl" would also be declared in an interface Sort, and SortImpl would EXPORT Sort. A PROGRAM that wants to call the procedure USortList does so by IMPORTing Sort. We call the importer of Sort the "client" module and say SortImpl (the exporter) "implements" Sort. Of course, SortImpl may IMPORT interfaces to use that are defined elsewhere. These interconnections are shown in FIG. 13, which shows filenames for each module in the upper left corner. The interface Sort defines an object composed of a pair of x,y coordinates. The EXPORTer, SortImpl.Mesa, declares a procedure that takes a list of these objects and sorts them, eliminating duplicates. LIST in the Cedar system is a built-in type with a structure similar to a Lisp list. ClientImpl.Mesa defines a procedure that calls USortList to sort a list of such objects. Details about "CompareProc" have been omitted for simplicity. Most collections of modules in the system use the same version of interfaces, e.g., there is usually only one version of the interface for the BTree package in a given system. Situations arise when more than one version is used in a system. For example, there could be two versions of an interface to a list manipulation system, each one manipulating a different type of object. FIG. 14 shows, on the left, the module from FIG. 13 and, on the right, another similar module that defines an "Object" to be a string instead of coordinates. A module that refers to the Sort interface would have to be compiled with one of the two versions of the Sort interface, since the compiler checks types of the objects being assembled for the sort. This is referred to as interface type parameterization since the types of items from the interface used by a client (ClientImpl.Mesa) are determined by the specific version of the interface (SortCoord.Mesa or SortNames.Mesa). A different kind of parameterization may occur when two different implementations for the same interface are used. For example, a package that uses the left version of the Sort interface in FIG. 14 above might use two different versions of the module that EXPORTs Sort, one of which uses the QuickSort algorithm and the other uses the HeapSort algorithm to perform the sort. Such a package includes both implementors of Sort and must specify which sort routine the clients (IMPORTers) use when they call Sort.USortList[]. In the Cedar system, it is possible for a client module to IMPORT both versions, as shown in FIG. 15. In FIG. 15, SortQuickImpl and SortHeapImpl both EXPORT different procedures for the Sort interface. One procedure, SortQuickImpl, uses QuickSort to sort the list. The other uses HeapSort to sort the list. The importer, ClientImpl, IMPORTS each version under a different name. SortQuickInst and SortHeapInst are called interface records, since they are represented as records containing pointers to procedures. The client procedure "TestThem" calls each in turn by specifying the name of the interface and the name of the procedure, e.g., SortQuickInst.USortList[]. How are the two interface records that are EXPORTED by SortQuickImpl and SortHeapImpl connected to the two interface records (SortQuickInst and SortHeapIInst) required by ClientImpl? A program called the Mesa Binder makes these connections by reading a specification written in a subset of Mesa called C/Mesa. C/Mesa source files, called CONFIGURATIONs, name the implementation modules involved and specify the interconnections. Below is shown the configuration that makes this connection: ______________________________________ClientConfig: CONFIGURATION = { SQ1: Sort ← SortQuickImpl[]; SHI: Sort ← SortHeapImpl[]; ClientImpl[SortQuickInst: SQ1, SortHeapInst: SHI]; }.______________________________________ Two variables are declared (SQI and SHI) that correspond to the interface records EXPORTED by the two modules. The client module is named, followed by the two interfaces given in keywork parameter notation. This is called interface record parameterization, since the behavior of the client module is a function of which interfaces SortQuickInst and SortHeapInst refer to when they are called in ClientImpl. C/Mesa, as currently defined, cannot express interface type parameterization at all. The semantics of some C/Mesa specifications are ambiguous. Because of this, the use of SML was choosen to replace the use of C/Mesa. SML programs give the programmer/user the ability to express both kinds of parameterization. It is possible to think of SML as an extension of C/Mesa, although their underlying principles are quite different. Before explaining SML, reference is first made to an example of modules that use both interface type and interface record parameterization and show how this can be expressed in SML. The essential features of SML are illustrated by the following simple model and are discussed later on relative to SML's treatment of files. A description of the SML language is also given later. Consider two versions of the Sort interface from FIG. 14 and two EXPORTERs of Sort from FIG. 15. Since the EXPORTERs do not depend on the kind of object (coordinates or names), the EXPORTERs can each be constructed with a different type of object. Assume the client module wants to call USortList with all four combinations of object type and sort algorithm: (coordinates+quicksort, coordinates+heapsort, names+quicksort, names+heapsort). FIG. 16 shows a version of ClientImpl module that uses all four combinations of object type. In SML, a model to express this is shown in Table II below. TABLE II______________________________________ClientModel˜[______________________________________interface typesSortCoord: INTERFACE˜@SortCoord.Mesa[];SortNames: INTERFACE˜@SortNames.Mesa[];interface recordsSQCI: SortCoord˜@SortQuickImpl.Mesa[SortCoord];SQNI: SortNames˜@SortQuickImpl.Mesa[SortNames];SHCI: SortCoord˜@SortHeapImpl.Mesa[SortCoord];give all to clientClient: CONTROL˜@Clientlmpl.Mesa[SortCoord.SortNames.SQCI,SQNI,SHCI,SHNI]______________________________________ SML allows names to given types and bound to values. After the header, two names "SortCoord" and "SortNames" are given values that stand for the two versions of the Sort interface. Each has the same type, since both are versions of the Sort interface. Their type is "INTERFACE Sort", where "INTERFACE" is a reserved word in SML and "Sort" is the interface name. The next four lines bind four names to interface records that correspond to the different sort implementations. "SQCI" is a name of type "SortCoord" and has as value the interface record with a procedure that uses QuickSort on objects with coordinates. Similarly, "SQNI" has as value an interface record with a procedure for QuickSort on objects with strings, etc. Note that each of the four implementations is parameterized by the correct interface, indicating which type to use when the module is compiled. The last line specifies a name "Client" of reserved type "CONTROL" and gives it as value the source file for ClientImpl, parameterized by all the previously defined names. The first two, SortCoord and SortNames, are values to use for the names "SortCoord: INTERFACE Sort" and "SortNames: INTERFACE Sort" in the DIRECTORY clause of ClientImpl. The last four, in order, give interface records for each of the four imports. There are a number of nearly-equal names in the example. If all related names were uniform, e.g., SortQuickCoordImpl instead of SQHI and SortQuickCoordInst, and SortHeapCoordImpl instead of SQHI and SortHeapCoordInst, then the parameter lists in the example could be omitted. The kinds of values in SML follow naturally from the objects being represented: the value of "@ SortCoord.Mesa[]" is the object file for the interface module SortCoord.Mesa when it is compiled. The value of "@ SortQuickImpl.Mesa[]" is an interface record produced when the object file for SortQuickImpl.Mesa is loaded. Note there are two versions of the object file for SortQuickImpl.Mesa: one has been compiled with SortCoord as the interface it EXPORTs, and the other has been compiled with SortNames as the interface it EXPORTs. It is helpful to differentiate the two types of parameterization by the difference in uses: Interface type parameterization is applied when a module is compiled and the types of the various objects and procedures are checked for equality. Interface record parameterization is applied when a module is loaded and the imports of other modules are resolved. The interface records by which a module is parameterized are used to satisfy these inter-module references. The SML language is built around four concepts: 1. Application: The basic method of computing. 2. Values: Everything is a value, including types (polymorphism) and functions. 3. Binding: Correspondence between names and values is made by binding. 4. Groups: Objects can be grouped together. The basic method of computation in the SML language is by applying a function to argument values. A function is a mapping from argument values to result values. A function is implemented either by a primitive supplied by the language (whose inner workings are not open to inspection) or by a closure, which is the value of a λ-expression whose body, in turn, consists of applications of functions to arguments. In SML, λ-expressions have the form λ[free-variable-list]→[returns-list]IN[body-expression] For example, a λ-expression could look like λ[x: STRING, y: STRING]→[a: STRING]IN[exp] where "x" and "y" are the free variables in the λ-expression, "a" is the name of the value returned when this λ-expression is invoked, and exp is any SML expression that computes a value for name "a". "IN" is like "." in standard λ-notation. It is helpful to think of a closure as a program fragment that includes all values necessary for execution except the λ's parameters, hence the term closure. Every λ-expression must return values, since the language has no side effects. Application is denoted in programs by expressions of the form ƒ[arg, arg, . . . ]. A SML program manipulates values. Anything that can be denoted by a name or expression in the program is a value. Thus strings, functions, interfaces, and types are all values. In the SML language, all values are treated uniformly, in the sense that any can be passed as an argument, bound to a name, or returned as a result. These operations must work on all values so that application can be used as the basis for computation and λ-expressions as the basis for program structure. In addition, each particular kind or type of value has its own primitive functions. Some of these (like equality) are defined for most types. Others (like subscripting) exist only for specific types (like groups). None of these operations, however, is fundamental to the language. There is a basic mechanism for making a composite value out of several simpler ones. Such a composite value is called a group, and the simpler ones are its components or elements. Thus [3, x+1, "Hello"] denotes a group, with components 3, x+1, and "Hello". The main use of groups is for passing arguments to functions without naming them. These are sometimes called positional arguments. Groups are similar to other language's "structures" or "records": ordered and typed sequences of values. A binding is an ordered set of [name, type, value] triples, often denoted by a constructor like the following: [x: STRING˜"s", y: STRING˜"t"], or simply [x˜"s", y˜"t"]. Individual components can be selected from a binding using the "." operation, similar to Pascal record selection: binding.element yields the value of the component named "element" in binding. A scope is a region of the program in which the value bound to a name does not change. For each scope there is a binding that determines these values. A new scope is introduced by a [. . . ] constructor for a declaration or binding, or a LET statement illustrated below. A declaration is an ordered set of [name, type] pairs, often denoted [x: STRING, y: STRING]. A declaration can be instantiated (e.g. on block entry) to produce a binding in which each name is bound to a name of the proper type. If d is a declaration, a binding b has type d if it has the same names, and for each name n the value b.n. has the type d.n. In addition to the scopes defined by nested bindings, a binding can be added to the scope using a LET statement, LET binding IN expr that makes the names in binding accessible in expr without qualification. Every name has a type, either because the name is in a binding or the name is in a declaration. Names are given values using bindings. If a name is given an explicit type in the binding, the resulting value must have that type. For example, n: t˜v the type of "v" must be "t". Similarly, if "p" is a λ-expression with "a" as a free variable of type "STRING", then p[b] type-checks if "b" has type "STRING". There are no restrictions on use of type as values in SML. For example, ______________________________________ [nl: t ˜ v1, n2: n1 ˜ v2]______________________________________ declares a name "n1" with a type t and a value v1, and then declares a name "n2" with type "n1" and value "v2". Although each such value can in turn be used as the type of another name, the modeller implementation does not attach semantics to all such combinations. Strings are useful in a module interconnection language for compiler options and as components of file names. SML contains facilities to declare strings. For example, the binding ______________________________________ [x: STRING ˜ "lit", y; STRING ˜ x]______________________________________ gives x and y the string literal value "lit". SML describes software by specifying a file containing data. This file is named in SML by a filename proceded by an @. SML defines @ as source-file inclusion: The semantics of an @-expression are idential to those of an SML program that replaced the @ expression by its contents. For example, if the file inner.sm contained "lit" which is a valid SML expression, the binding ______________________________________ [x: STRING ˜ @inner.sm, y: STRING ˜ @inner.sm] and [x: STRING ˜ "lit", y: STRING ˜ "lit"]______________________________________ The @-expression is used in SML to refer to source modules. Although we cannot substitute the @-expression by the contents of the source file since it is written in C/Cedar, we treat the Cedar source file as a value in the language with a type. This type is almost always a procedure type. The values in SML that describe module interconnection are all obtained by invoking one of the procedure values defined by an @-expression. When compiling a system module, all interfaces it depends on must be compiled first and the compiler must be given unambiguous references to those files. In order to load a module, all imports must be satisfied by filling in indirect pointers used by the microcode with references to procedure descriptors EXPORTed by other modules. Both kinds of information are described in SML by requiring that the user declare objects corresponding to an interface file (for compilation) or an interface record with procedure descriptors (for loading), and then parameterize module objects in SML as appropriate. Consider an interface that depends on no other interfaces, i.e., it can be compiled without reference to any files. SML treats the file containing the interface as a function whose closure is stored in the file. The procedure type of this interface is for a procedure that takes no parameters and returns one result, e.g., []→[INTERFACE Sort] where "Sort" is the name of the interface, as in FIG. 13. The application of this λ-expression (with no arguments) will result in an object of type "INTERFACE Mod". Id: INTERFACE Sort˜@ Sort.Mesa[] declares a variable "Id" that can be used for subsequent dependencies in other files. An interface "BTree" defined in the file "BTree.Mesa" that depends on an interface named "Sort" would have a procedure type like: [INTERFACE Sort]→[INTERFACE BTree] The parameters and results are normally given the same name as the interface type they are declared with, so the procedure type would be: [Sort: INTERFACE Sort]→[BTree: INTERFACE BTree] In order to express this in his model, the user would apply the file object to an argument list: Sort: INTERFACE Sort˜@ Sort.Mesa[]; BTree: INTERFACE BTree˜@ BTree.Mesa[Sort]; These interfaces can be used to reflect other compilation dependencies. An interface that is EXPORTed is represented as an interface record that contains procedure descriptors, etc. These procedures are declared both in the interface being EXPORTed and in the exporting PROGRAM module. One can think of the interface record as an instance of a record declared by the interface module. Consider the implementation module SortImpl.Mesa in FIG. 13. SortImpl EXPORTs an interface record for the Sort interface and calls no procedures in other SortImpl EXPORTs an interface record for the Sort interface and calls no procedures in other modules (i.e., has no IMPORTs). This file would have as procedure type: [Sort: INTERFACE Sort]→[SortInst: Sort] and would be used as follows: Sort: INTERFACE Sort˜@ Sort.Mesa[]; SortInst: Sort˜@ SortImpl.Mesa[Sort]; which declares an identifier "SortInst" of the type "Sort", whose value is the interface record exported by SortImpl.Mesa. If SortImpl.Mesa imported an interface reocrd for "BTree," then the procedure type would be: [Sort: INTERACE Sort, BTree: INTERFACE BTree. BTreeInst: BTree]→[SortInst: Sort] and the exported record would be computed by: SortInst: Sort˜@ SortImpl.Mesa[Sort, BTree, BTreeInst]: where [Sort, BTree, BTreeInstr] is a group that is matched to parameters of the procedure by position. Keyword matching of actuals to formals can be accomplished through a binding described later. LET statements are useful for including definitions from other SML files. A set of standard Cedar interfaces could be defined in the file CedarDefs.Model: ______________________________________Rope: INTERFACE Rope ˜ @Rope.Mesa,IO: INTERFACE IO ˜ @IO.Mesa,Space: INTERFACE Space ˜ @Space.Mesa]______________________________________ Then a LET statement like: LET @ Cedar Defs.Model IN [expression] is equal to: ______________________________________LET [Rope: INTERFACE Rope ˜ @Rope.Mesa,IO: INTERFACE IO ˜ @IO.MesaSpace: INTERFACE Space ˜ @Space.Mesa]IN [expression]______________________________________ and makes the identifiers "Rope", "IO", and "Scope" available within [expression]. SML syntax is described by the BNF grammar below. Whenever "x, . . . " appears, it refers to 0 or more occurrences of x separated by commas. "|" separates different productions for the same non-terminal. Words in which all letters are capitalized are reserved keywords. Words that are all lower case are non-terminals, except for id, which stands for an identifier, string, which stands for a string literal in quotes, and filename, which stands for a string of characters that are legal in a file name, not surrounded by quotes. Subscripts are used to identify specific non-terminals, so they can be referenced without ambiguity in the accompanying explanation. ______________________________________ exp :: = 1 [decl.sub.1 ] → [decl.sub.2 ] IN exp.sub.1 |let [binding] IN exp.sub.1 |exp.sub.1 → exp.sub.2 |exp.sub.1 [exp.sub.2 ] |exp.sub.1 . id |[exp, . . . ] |[decl] |[binding] |id |string |INTERFACE id |STRING |@filename decl :: = id: exp, . . . binding :: = bindelem, . . . bindelem :: = [decl] ˜ exp.sub.1 |id: exp.sub.1 ˜ exp.sub.2 |id ˜ exp.sub.1______________________________________ A model is evaluated by running a Lisp-style evaluator on it. This evaluator analyzes each construct and reduces it to a minimal form, where all applications of closures to known values have been replaced by the result of the applications using β-reduction. The evaluator saves partial values to make subsequent compilation and loading easier. The evaluator returns a single value, which is the value of the model, usually a binding. The semantics for the productions are: exp::=λ[decl.sub.1 ]→[decl.sub.2 ]IN exp.sub.1 The expression is a value consisting of the parameters and returned names, and the closure consisting of the expression exp1 and the bindings that are accessible statically from exp. The type is "decl1 →decl2 ". The value of this expression is similar to a procedure variable in conventional languages, which can be given to other procedures that call it within their own contexts. The closure is included with the value of this expression so that, when the λ-expression is invoked, the body (exp1) will be evaluated in the corect environment or context. exp::=LET [binding]IN exp.sub.1 The current environment of exp1 is modified by adding the names in the binding to the scope of exp1. The type and value of this expression are the type and value of exp1. exp::=exp.sub.1 →exp.sub.2 The value of exp is a function type that takes values of type exp1 and returns values of type exp2. exp::=exp.sub.1 [exp.sub.2 ] The value of exp1, which must be a closure, is applied to the argument list exp2 as follows. A binding is made for the values of the free variables in the λ-expression. If exp2 is a group, then the components of the group are matched by type to the formals of the λ-expression. The group's components must have unique types for this option. If exp2 is a binding then the parameters are given values using the normal binding rules to bind f˜exp2 where exp2 is a binding and f is the decl of the λ-expression. There are two cases to consider: 1. The λ-expression has a closure composed of SML expressions. This is treated like a nested function. The evaluation is done by substitution or β-reduction: All occurrences of the parameters are replaced by their values. The resulting closure is then evaluated to produce a result binding. The λ-expression returns clause is used to form a binding on only those values listed in the λ-expression returns list, and that binding is the value of the function call. 2. If the function being applied is a Cedar source or object file, the evaluator constructs interface types of interface records that correspond to the interface module or to the implementation module's exported interfaces, as appropriate. After the function is evaluated, the evaluator constructs a binding between the returned types in its procedure type and the values of the function call. exp::=[exp, . . . ] The exp1 is evaluated and must be a binding. The component with name "id" is extracted and its value returned. This is ordinary Pascal record element selection. exp::=[exp, . . . ] A group of the values of the component exp's is made and returned as a value. exp::=[decl] decl::=id:exp, . . . Adds names "id" to the current scope with type equal to value of exp. A list of decls is a fundamental object. ______________________________________ exp :: = [binding] binding :: = bindelem, . . . bindelem :: = [decl] ˜ exp.sub.1 |id: exp.sub.1 ˜ exp.sub.2 |id ˜ exp.sub.1______________________________________ A bindelem binds the names in decl to the value of expl. If an id is given instead of a decl, the type of id is inferred from that of exp1. The binding between the names in decl and the values in exp1 follows the same rules as those for binding arguments to parameters of functions. exp::=id id stands for an identifier in some binding (i.e., in an enclosing scope). The value or id is its current binding. exp::=string A string literal like "abc" is a fundamental value in the language. exp::=INTERFACE id This fundamental type can be used as the type of any module with module name id. Note id is used as a literal, not an identifier, and its current binding is irrelevant. The value of this expression is the atom that represents "INTERFACE id". exp::=STRING A fundamental type in the language. The value of "STRING" is the atom that represents string types. exp::=@ filename This expression denotes an object whose value is stored in file filename. If the file is another model, then the string @ filename can be replaced by the content of the file. If it is another file, such as a source or object file, it stands for a fundamental object for which the evauator must be able to compute a procedure type. Function calls in SML are made by applying a closure to (1) a group or (2) a binding. If the argument is a group, the parameters of the closure are matched to the components by type, which must be unique. If the argument is a binding, the parameters of the closure are matched by name with the free variables. For example, if p is bound to: p˜λ[x: STRING, y: INTERFACE Y]→[Z: INTERFACE Z]IN[ . . . ] then p takes two parameter, which may be specified as a group: ______________________________________defs: INTEFACE Y ˜ @Defs.Mesa[],z: INTERFACE Z ˜ p["lit", Defs]]______________________________________ where the arguments are matched by type to the parameters of the closure. The order of "lit" and Defs in the example above does not matter. Also the order of x and y in the call of p in the example does not matter. The function may also be called with a binding as follows: ______________________________________defs: INTERFACE Y ˜ @Defs,Mesa[],z: INTERFACE Z ˜ p[x ˜ "lit", y ˜ Defs]]______________________________________ which corresponds to keyword notation in other programming languages. Since the parameter lists for Cedar modules are quite long, the SML language includes defaulting rules that allow the programmer to omit many parameters. When a parameter list, either a group or a binding, has too few elements, the given parameters are matched to the formal parameters and any formals not matched are given default values. The value for each defaulted formal parameter is the value of a variable defined in some scope enclosing the call with the ame name and type as the formal. Therefore, the binding for Z in: ______________________________________ [ x: STRING ˜ "lit", y: INTERFACE Y ˜ @Defs.Mesa[], z: INTERFACE Z ˜ p[] ]______________________________________ is equivalent to "p[x. y]" by the equal-name defaulting rule. SML also allows projections of closures into new closures with parameter. For example, ______________________________________Y: INTERFACE Y ˜ @Defs.Mea[],pl:[Y: INTERFACE Y] ← [Z: INTERFACE Z] ˜ p["lit"],Z: INTERFACE Z ˜ pl[Y]]______________________________________ sets Z to the same value as before but does it in one extra step by creating a procedure value with one fewer free variable, and then applied the procedure value to a value for the remaining free variable. The defaulting rules allow parameter to be omitted when mixed with projections: ______________________________________X: STRING ˜ "lit",Y: INTERFACE Y ˜ @Defs.Mesa[],pl: [Y: INTERFACE Y] → [Z: INTERFACE Z] ˜ p[],Z: INTERFACE Z ˜ pl[]]______________________________________ Enough parameters are defaulted to produce a value with the same type as the target type of the binding (the type on the left side of the notation, "˜"). When the type on the left side is omitted, the semantics of SML guarantee that all parameters are defaulted in order to produce result values rather than a projection. Thus Z˜p1[] in the preceding examples declares a value Z of type INTERFACE Z and not a projection whose value is a λ-expression. These rules are stated more concisely below. If the number of components is less than those required to evaluate the function body, a coercion is applied to produce either (1) the complete argument list, so the function body may be evaluated, or (2) a projection of the original λ-expression into a new λ-expression with fewer free variables. If the type of the result of "exp1 [exp2 ]" is supplied, one of (1) or (2) will be performed. When the target type is not given, e.g., x˜proc[Y] case (1) is assumed and all parameters of "proc" are assumed defaulted. For example, the expression: proc: [Y: STRING, Z: STRING]→[r: R], x: T˜proc[Y] binds the result of applying proc to Y to x of type T. If T is a simple type (e.g., "STRING"), then the proc[Y] expression is coerced into proc[YU, Z], where Z is the name of the omitted formal in the λ-expression and R must equal T. If Z is undefined (has no binding) an error has occurred and the result of the expression is undefined. If T is a function type (e.g., [Z: STRING]→[r: R]), then a new closure is replaced by tghe value of Y. This closure may be subsequently applied to a value of Z and the result value can be computed. The type of Z must agree with the parameters of the target function type. The SML evaluator is embedded in a program management system that separates the functions of file retrieval, compilation, and loading of modules. Each of these functions is implemented by analyzing the partial values of the evaluated SML expression. For example, the application of a file to arguments is analyzed to see whether compilation or loading is required. For each of these phases, the evaluator could be invoked on the initial SML expression, but this would be inefficient. Since the SML language has no iteration constructs and no recursively-defined functions, the evaluator can substitute indirect references to SML expressions through @-expressions by the file's contents and can expand each function by its defining expression with formals replaced by actuals. This process of substitution must be applied recursively, as the expansion of a λ-expression may involve expansion of inner λ-expressions. The evaluator does this expansion by copying the body of the λ-expression, and then evaluating it using the scope in which the λ-expression was defined after adding the actual parameters as a binding for the function to the scope. The scope is maintained as a tree of bindings in which each level corresponds to a level of binding, a binding added by a LET statement, or a binding for parameters to a λ-expression. Bindings are represented as lists of triples of name, type, value. A closure is represented as a quadruple comprising "list of formals, list of returns, body of function, scope printer", where the scope pointer is used to establish the naming environment for variables inside the body that are not formal parameter. The @-expression is represented by an object that contains a pointer to the disk file named. A variable declared as INTERFACE mod (i.e., an interface type variable), is represented as a "module name, pointer to module file" pair, and a variable given as type and interface type variable, i.e., an interface record variable, is repreented as a "pointer to procedure descriptors, pointer to loaded module". The substitution property of Russell, discussed in the Article of A. Demers et al., "Data Types, Parameters & Type Checking", Proceedings of the Seventh Symposium on Principles of Programming Languages, Las Vegas, Nev., pp. 12-23, 1980, guarantees that variable-free expressions can be replaced by their values without altering the semantics of Russell programs. Since SML programs have no variables and allow no recursion, the substitution property holds for SML programs as well. This implies that the type-equivalence algorithm for SML programs always terminates, since the value of each type can always be determined statically. The following are two further examples of models described in SML. The B-tree package consists of an implementation module in the file "BTreeImpl.Mesa" and an interface "BTree.Mesa" that BTreeImpl EXPORTS. There is no client of BTree, so this model returns a value for the interface type and record for BTree. Some other model contains a reference to this model and a client for that interface. The BTree interface uses some constants found in "Ascii.Mesa", which contains names for the ASCII chaacter set. The BTreeImpl module depends on the BTree interface since it EXPORTs it and makes use of three standard Cedar interfaces. "Rope" defines procedures to operate on immutable, garbage collected strings. "IO" is an interface that defines procedures to read and write formatted data to a stream, often the user's computer terminal. "Space" defines procedures to allocate Cedar virtual memory for large objects, in this case the B-tree pages. ______________________________________Exl.ModelLET [Rope: INTERFACE Rope ˜ @Rope.Bed,IO: INTERFACE IO ˜ @IO.Bed,Space: INTERFACE Space ˜ @Space.Bed,] INBTreeProc ˜λ[RopeInst: Rope, IOIsnt:]O, SpaceInst: Space]→ [BTree: INTERFACE BTree, BTreeInst: BTree]IN [Ascii: INTERFACE Ascii ˜ @Ascii.MesaBTree: INTERFACE BTree ˜ @Btree[Ascii],BTreeInst: BTree ˜]@BTreeImpl.Mesa[BTree, Rope,]O.Space,RopeInst, IOInst. SpaceInst]______________________________________ This model, stored in the file "Exl.Model", describes a BTree system composed of an interface "BTree" and an implementation for it. The first three lines declare three names used later. Since they are given values that are object or binary (.bcd) files, they take no parameters. This model assumes those files have already been compiled. Note they could appear as: Rope˜@Rope.Bcd, IO˜@IO.Bcd, Space˜@Space.Bcd since the types of the three identifiers can be determined from their values. The seventh line binds an identifier "BTreeProc" to a λ-expression with three interface records as parameters. If those are supplied, the function will return (1) an interface type for the BTree system, and (2) an interface record that has that type. Within the body of the closure of the λ-expression, there are bindings for the identifiers "Ascii", "BTree", and "BTreeInst". In all cases, the type could be omitted as well. The file "Exl.Model" can be evaluated. Its value will be a binding of BTreeProc to a procedure value. The value is a λ-expression that must be applied to an argument list to yield its return values. Another model might refer to the BTree package by: [BTree. BTreeInst]˜@Exl.Model).BTreeProc[RopeInst, IOInst, SpaceInst] ______________________________________CedarDefs.ModelRope: INTERFACE Rope ˜ @Rope.Bed,IO: INTERFACE IO ˜ @IO.Bed.Space: INTERFACE Space ˜ @Space.Bed]BTree.ModelLet @CedarDefs.Model IN[BTreeProc ˜λ[RopeInst: Rope, IOIsnt:]O, SpaceInst: Space]→ [BTree: INTERFACE BTree. BTreeInst: BTree]IN[Ascii: INTERFACE Ascii ˜ @Ascii.Mesa.BTree: INTERFACE BTree ˜ @BTree[Ascii],BTreeInst: BTree ˜ @BTreeImpl.Mesa[BTree, Rope,]O, Space,RopeInst,]OInst, SpaceInst]]]______________________________________ The prefix part is split into a separate file. The BTree.Model file contains (1) a binding that gives a name to the binding in CedarDefs. Model, and (2) a LET statement that makes the values in CedarDefs.Model accessible in the λ-expression of BTree.Model. Dividing Example 1 into two models like this allows us to establish standard naming environments, such as a model that names the commonlyused Cedar interfaces. Programmer/users are free to redefine these names with their models if they so desire. The System modeller is a complete software development system which uses information stored in a system model, which describes a software system in the environment, e.g., the Cedar user or programmer, the modeller performs a variety of operations on the systems described by the system models: 1. It implements the representation of the system by source text in a collection of files. 2. It tracks changes made by the programmer. To do this, it is connected to the system editor and is notified when files are edited and new versions are created. 3. It automatically builds an executable version of the system, by recompiling and loading the modules. To provide fast response, the modeller behaves like an incremental complier: only those modules that change are analyzed and recompiled. 4. It provides complete support for the integration of packages as part of a release. Thus, the modeller can manage the files of a system as they are changing, providing a user interface through which the programmer edits, compiles, loads and debugs changes interactively while developing software. The models are automatically updated to refer to the changed components. Manual updates of models by the programmer are, therefore, not normally necessary. The programmer writes a model in SML notation for describing how to compose a set of related programs from their components. The model refers to a component module of the program by its unique name, independently of the location in the file system where its bits are stored. The development of a program can be described by a collection of models, one for each stage in the development; certain models define releases. As previously indicated, SML has general facilities for abstraction. These are of two kinds: (1) A model can be organized hierarchially into parts, each of which is a set of named sub-parts called a binding. Like the names of files in a directory, the names in a binding can be used to select any desired part or parts of the binding. (2) A model can be parameterized, and several different versions can be constructed by supplying different arguments for the parameters. This is the way that SML caters for planned variation in a program. The distributed computing environment means that files containing the source text of a module can be stored in many places. A file is accessed most efficiently if it happens to be on the programmer's own machine or computer.. When invoked, the modeller uses the objects in a model to determine which modules need to be recompiled. The modeller will get any files it needs and try to put the system together. Since it has unique-ids for all the needed sources, it can check to see if they are nearby. If not, it can take the path name in the model as a hint and, if the file is there, it can be retrieved. The modeller may have difficulty retrieving files, but it will not retrieve the wrong version. Having retrieved as many files as possible, it will compile any source files if necessary, load the resulting binary files, and run the program. A model normally refers to source files rather than the less flexible binary or object files produced by the compiler, whose interface types are already bound. The system modeller takes the view that these binary files are just accelerators, since every binary file can be compiled using the right source files and parameters. The model has no entry for a binary file when the source file it was compiled from is listed. Such an entry is unnecessary since the binary file can always be reconstructed from the source. Of course, wholesale recompilation is time consuming, so various databases are used to avoid unnecessary recompilation. Models refer to objects, i.e., source or binary (object) files or other models, using an @-sign followed by a host, directory, and file name, optionally followed by version information. In a model, the expression, @[Indigo]<Cedar>X.Mesa!(July 25, 1982 16:10:09) refers to the source version of X.Mesa created on July 25, 1982 16:10:09 that is stored on file server [Indigo] in the directory <Cedar>. The !(. . . ) is not part of the file name but is used to specify explicitly which version of the file is present. The expression, @[Indigo]<Cedar>X.Bed!(1AB3FBB462BD) refers to the binary or object version of X.Bcd on [Indigo]<Cedar>X.Bcd that has a 48-bit version stamp "1AB3FBB462BD" (hexadecimal). For cases when the user wants the most recently-saved version of X.Mesa or X.Bcd, @[Indigo]<Cedar>X.Mesa!H refers to the most recently stored version of X.Mesa on [Indigo<Cedar>. This "!H" is a form of implicit parameterization. If a model containing such a reference is submitted as part of a software release, this reference to the highest version is changed into a reference to a specific version. The system modeller takes a very conservative approach, so the users can be sure there is no confusion on which versions have been tested and are out in the field of the distributed software system. What happens, however, when a new version V2 of an object is created? In this view, such a version is a new object. Any model M1 which refers to the old object V1 continues to do so. However, it is possible to create a new model M2 which is identical to M1 except that every reference to V1 is replaced by a reference to V2. This operation is performed by the modeller and called Notice. In this way, the notion that objects are immutable is reconciled with the fact of evolution. With these conventions, a model can incorporate the text of an object by using the name of the object. This is done in SML expression by writing an object name preceded by sign "@". The meaning of an SML expression containing an @-expression is defined to be the meaning of an expression in which the @ expression is replaced by its contents. For example, if the object inner.model contains "lit" which is an SML expression, the binding ______________________________________ [x:STRING ˜ @inner.sm, y:STRING ˜ "lit"]______________________________________ has identical values for x and y. With these conventions, a system model is a stable, unambiguous representation for a system. It is easily transferred among programmers and file systems. It has a readable text representation that can be edited by a user at any time. Finally, it is usable by other program utilies such as cross-reference programs, debuggers, and optimizers that analyze intermodule relationships. The modeller uses the creation date of a source object as its unique identifier. Thus, an object name might have the form BTree.Cedar!(July 22, 1982 2:23:56); in this representation the unique identifier follows the "!" character., such as in the form, BTree.cedar!H. This means to consider all the objects whose names begin BTree.cedar, and take the one with the most recent create date. As previously explained, Cedar programing consists of a set of modules. There is included two kinds of modules: implementation (PROGRAM) modules, and interface (DEFINITIONS) modules. An interface module contains constants (numbers, types, inline procedures, etc.) and declarations for values to be supplied by an implementation (usually procedures, but also types and other values). A module M1 that calls a procedure in another module M2 must IMPORT an instance Inst of an interface I that declares this procedure. Inst must be EXPORTED by the PROGRAM module M2. For example, a procedure Insert declared in a module BTreeImpl would also be declared in an interface BTree, and BTreeImpl would EXPORT an instance of BTree. A PROGRAM calls Insert by IMPORTing this instance of BTree and referring to the Insert component of the instance. The IMPORTer of BTree is called the client module, and BTreeImp, the EXPORTer, implements Btree. Of course BTreeImpl may itself IMPORT and uses interfaces that are defined elsewhere. FIG. 17 discloses a very simple system model called BTree, which defines one interface BTree and one instance BTreeInst of BTree. BTree.model in FIG. 17 refers to two modules, BTree.cedar!(Sept. 9, 1982, 13:52:55) and BTreeImpl.cedar!(Jan. 14, 1983 14:44:09). Each is named by a user-sensible name (e.g., BTree.cedar), pat of which identifies the source language as Cedar, and a creation time (e.g. !(Sept. 9, 1982, 13:52:55)) to ensure uniqueness. The @ indicates that a unique object name follows. Each object also has a file location hint, e.g., ([Ivy]<Schmidt>, i.e., file server, Ivy, and the directory, Schmidt). BTree.model refers to two other models, CedarInterfaces.model!(July 25, 1982, 14:03:03) and CedarInstances.model!(July 25, 1982, 14:10:12). Each of these is a binding which gives names to four interface or instance modules that are part of the software system. A clause such as LET CedarInterfaces.model IN . . . makes the names bound in CedarInterfaces (Acii, Rope, IO, Space) denote the associated values (Ascii.cedar!(July 10, 1982, 12:25:00)[], argument. Applying the first function to its interface arguments is done by the compiler; applying the resulting second function to its instance arguments is done by the loader as it links up definitions with uses. In the example of FIG. 17, the BTree interface depends on the Ascii interface from CedarInterfaces. Since it is an interface, it does not depend on any implementations. BTreeImpl depends on a set of interfaces which the model does not specify in detail. The "*" in front of the first parameter list for BTreeImpl means that its arguments are defaulted by name matching from the system environment. In particular, it probably has interface parameters BTree, Rope, IO, and Space. All these names are defined in the environment, BTree explicitly and the others from CedarInterfaces through the LET clause, BTreeImpl also depends on Rope, IO and Space instances from CearInstances, as indicated in the second argument list. The interface parameters are used by the compiler for type-checking, and so that details about the types can be used to improve the quality of the object code. The instance parameters are used by the loader and they specify how procedures EXPORTed by one module should be linked to other modules which IMPORT them. The system modeller provides an interactive interface for ordinary incremental program development. When used interactively, the role of the modeller is similar to that of an incremental compiler; it tries to do as little work as it can as quickly as possible in order to produce a runnable system. To do this, it keeps track incrementally of as much information as possible about the objects in the active models under use. For example, consider the following Scenario. Assume a model already exists, say BTree.model, and a user wants to change one module to fix a bug (code error). Earlier, the user has started the modeller with BTree.model as the current model. The user uses the system editor to make a change to BTreeImpl.cedar!(Jan 14, 1983 14:44:09). When the user finishes editing the module and creates a new version BTreeImpl.cedar!(Apr. 1, 1983, 9:22:12), the editor notifies the modeller by calling its Notice procedure, indicating the BTreeImpl.cedar!(Apr. 1, 1983, 9:22:12) has been produced from BTreeImpl.cedar!(Jan. 14, 1983, 14:44:09). If the latter is referenced by the current model, the modeller notices the new version and updates BTree.model!(Jan. 14, 1983, 14:44:11) to produce BTree.model!(Apr. 1, 1983, 9:22:20), which refers to the new version. The user may edit and continue to change more files. When the user wants to make a runnable version of the system, upon command to the modeller, which then compiles everything in correct order and, if there are no errors, produces a binary file. A more complex scenario involves the parallel development of the same system by two programmers. Suppose both start with a system described by the model M0, and end up with different models M1 and M2. They may wish to make a new version M3 which merges their changes. The modeller can provide help for this common case as follows: If one programmer has added deleted or changed some object not changed by the other, the modeller will add, delete, or change that object in a merged model. If both programmers have changed the same object in different ways, the modeller cannot know which version to prefer and will either explore the changed objects recursively, or ask the user for help. More precisely, we have M.sub.3 =Merge[Base˜M.sub.0, New.sub.1 ˜M.sub.1, New.sub.2 ˜M.sub.2 ] and Merge traces out the three models depth-first. At each level, for a component named p: ______________________________________If Add to result______________________________________Base.p=M1.p=M2.p Base.pBase.p=M.sub.1/2.p≠M2/1.p M.sub.2/1.pBase.p=M.sub.1/2.p, no M2/1.p leave p outno Base.p or M.sub.1/2.p M.sub.2/1.pBase.p≠M.sub.1.p≠M.sub.2.p, all models Merge[Base.p:,M.sub.1.p,M.sub.2.p]ELSE error, or ask what to do.______________________________________ At all points, the modeller maintains a model that describes the current program. When a user makes a decision to save a module or program, this is accomplished by an accurate description in the model. Since the models are simply text files, the user always has the option of editing the model as preferred, so the modeller does not have to deal with specifically obscure special cases of editing. In a session which is part of the daily evolution of a program of software system, the user begins by creating an instance of the modeller, which provides a window on the user's screen, as shown in FIG. 20, in this case being that of the Cedar environment. The following explanation and subsequent sections to follow give an overview of its use, suggested by the contents of the Figure per se. The modeller window is divided into four fields, which are, from top to bottom: (1) A set of screen initiated names in field 30 that function as buttons to control the modeller, (2) A field 32 where object names may be typed, (3) A feedback field 34 for compiler progress messages, and (4) A feedback field 36 for modeller messages. To aid in the explanation modeller, the following example follows the steps the user performs to use the modeller. These steps are illustrated in the flow diagram of FIG. 21. Step 1. Assume that the modeller instance has just been created. The user decides to make changes to the modules in Example.Model. The name of the model is entered in the field 32 following the "ModelName:" prompt, and initiates the StartModel button in field 30. From this point on the modeller is bound to Example.Model. StopModel in field 30 must be initiated before using this instance of the modeller on another model. StartModel initializes data structures in this instance of the modeller, StopModel frees the data. Step 2. The user makes changes to objects on the user's personal machine or computer. The system editor calls the modeller's Notice procedure to report that a new version of an object exists. If the object being edited is in the model, the modeller updates its internal representation of the model to reflect the new version. If the changes involve adding or deleting parameters to modules, the modeller uses standard defaulting rules to modify the argument list for the object in the model. Step 3. Once the user has made the intended edits, the user initiates Begin in field 30, which (a) recompiles modules as necessary, (b) loads their object files into memory, and (c) the programs, the user may want to make changes simple enough that the old module may be replaced by the new module without re-loading and restarting the system. If so, after editing the modules, the user initiates "Continue" in field 30, which tries to replace modules in the already loaded system. If this is successful, the user may proceed with the testing of the program and the new code will be used. If the module is not replaceable, the user must initiate "Begin" in field 30, which will unload all the old modules in this model and load in the new modules. Step 5. After completing desired changes, the user can initiate "StoreBack" in field 30 to store copies of his files on remote file servers, and then initiate "Unload" to unload the modules previously loaded, and finally initiate "StopModel" to free modeller data structures. The following is a more further explanation of some of the field 30 initiated functions. StartModel: The modeller begins by reading in the source text of a model and buiding an internal tree structure traversed by subsequent phases. These phases use this tree to determine which modules must be compiled and loaded and in what order. Since parameters to files may have been defaulted, the modeller uses a database of information about the file to check its parameterization in the model and supply defaults, if necessary. If the database does not have an entry for the version of the file listed in the model, the modeller will read the file and analyze it, adding the parameterization information to the database for future reference. This database is described later. Notice Operation: The system editor notifies a modeller running on the machine when a new version of a file is created. The modeller searches its internal data structure for a reference to an earlier version of the file. If one is found, the modeller changes the internal data structure to refer to the new version. While making edits to modules, users often alter the parameterization of modules, i.e., the interface types and IMPORTed interface records. Since editing the model whenever this happens is time-consuming, the modeller automatically adjusts the parameterization, whenever possible, by using the defaulting rules of the modelling language: If a parameter is added and there is a variable with the same name and type as the new parameter, that variable is used for the actual parameter. If a parameter is removed, then the corresponding actual parameter is removed. The modeller re-parses the header of a "noticed" module to determine the parameters it takes. Some changes made by the user cannot be handled using these rules. For example, if the user changes a module so that it IMPORTs an interface record, and there is no interface record in the model with that name, the modeller cannot known which interface record was intended. Similarly, if the user changes the module to EXPORT a new interface record, the modeller cannot know what name to give the EXPORTed record in the model. In these situations, the user must edit the model by hand to add this information and start the modeller again on the new version of the model. Compilation and Loading: After the user initiates "Begin," the modeller uses the internal data structure as a description of a software system the user wants to run on the particular machine. To run the system, each module must have been compiled, then loaded and initialized for execution. The modeller examines each module using the dependency graph implied by the internal data structure. Each module is compiled in correct compilation order if no suitable object file is available. Modules that take no parameters are examined first, then modules that depend on modules already analyzed are examined for possible recompilation, and so on, until, if necessary, all modules are compiled. Modules are only recompiled if (1) the modules they depend on have been recompiled, or (2) they were compiled with a different version of the compiler or different compiler switches than those specified in the model. If there are no errors, the modeller loads the modules by allocating memory for the global variables of each module and setting up links between modules by filling in the interface records declared in the module. When loading is completed, execution begins. StoreBack: Models refer to files stored on central file servers accessable by users on the distributed system. The user types a file name without file server or directory information to the system editor, such as "BTreeImpl.Mesa," and the editor uses information supplied by the modeller to add location information (file server and directory) for the files. If the file name without location information is ambiguous, the user must give the entire file name to the editor. To avoid filling file servers with excess versions, the modeller does not store a new version of a source file on a file server after the source file is edited. Instead, the new versions are saved on the local disk. When the user initiates "StoreBack", all source files that have been edited are saved on designated remote directories. A new version of the model is written to its remote directory, with references to the new versions of source files it mentions. The compiler may have produced new versions of object files for source files listed in the model. Each object file so produced is stored on the same directory as its corresponding source file. Multiple Instances of Modellers: More than one modeller may be in use on the same machine. The user can initiate the "NewModel" button to create another window with the four subwindows or fields shown in FIG. 20 and is used in the same manner. Two instances of a modeller can even model two versions of the same system model. Since file names without locations are likely to be ambiguous in this case, the user will have to type file names and locations to the editor and do the same for the "ModelName:" field 32 in the modeller window. Other aspects of the operation of the modeller and modeller window in FIG. 20 is described in the following sections. Some models are shared among many users, who refer to them in their own models by using the @-notation and then using returned values from these shared models. An example is the model, "BasicCedar.Model," which returns a large number of commonly used interfaces (interface types) that a user might use. Although it is always possible to analyze all sub-models such as BasicCedar.Model, retrieving the files needed for analysis is very time consuming. When the user initiates "MakeModelBcd" in field 30, the modeller makes an object file for a model, much as a compiler makes an object file for a source file. This model object file, called a .modelBcd file, is produced so that all parameters except interface records are given values, so it is a projection of the source file for the model and all non-interface record parameters. The .modelBcd file acts as an accelerator, since it is always possible to work from the sources to derive the same result as is encoded in the .modelBcd. The loading ability of the modeller gives the user the ability to load the object files of any valid model. This speed of loading is proportional to the size of the system being loaded and the inter-module references. As the system gets larger, it takes more time to load. However, the Cedar Binder has the ability to take the instructions and symbol table stored in each object file, merge these pieces of object, and produce an object file that contains all the information of the constituent modules while combining some tables used as runtime. This transformation resolves references from one module to another in the model, which reduces the time required to load the system and also saves space, both in the object file and when the modules are loaded. To speed loading of large systems, this feature has been preserved in the modeller. If "Bind" is initiated after "StartModel" and then "Compile" or "Begin" are initiated, an object file with instructions and symbol tables merged is produced. The programmer may choose to produce a bound object file for a model instead of a .modelBcd file when (1) the model is very large and loading takes too long or the compression described above is effective in reducing the size of the file or (2) the object file will be input to the program that makes the boot file for the system. The ability to replace a module in an already loaded system can provide faster turnaround for small program changes. Module replacement in the Cedar type system is possible if the following conditions are met: (1). The existing global data of the module being replace may change in very restricted ways. Variables in the old global data must not change in position relative to other variables in the same file. New variables can only be added after the existing data. If the order changed, outstanding pointers to that data saved by other modules might be invalidated. (2). Any procedures that were EXPORTed by the old version of the module must also be EXPORTed by the new version, since the address of these objects could have been passed to other modules, e.g., a procedure that is passed as a parameter. (3). There are a number of architectural restrictions, such as the number of indices in certain tables, that must be obeyed. (4). No procedures from the affected module can be executing or stopped as a breakpoint during the short period of time the replacement is occurring. The modeller can easily provide module replacement since it loaded the modules initially and invokes the compiler on modules that have been changed. When the user initiates "Continue" in the field, the modeller attempts to hasten the compile-load-debug cycle by replacing modules in the system, if possible. Successful module replacement preserves the state of the system in which the replacement is performed. The modeller calls the compiler through a procedural interface that returns a boolean true if rules (1) and (2) are obeyed; the modeller will also check to see that rules (3) and (4) are obeyed. If all four checks succeed, the modeller will change the runtime structures to use a new pointer to the instructions in the new module, which in effect replaces the old instructions by the new ones. Some changes are substantial enough to violate rules (1)-(4), so after edits to a set of modules, some modules are replaceable and others are not. When this happens, the modules that are replaceable are replaced by new versions. The modules for which replacement failed are left undisturbed, with the old instructions still loaded. If desire, the user may try to debug those changes that were made to modules that were replaceable. If not, the user can initiate the "Begin" button to unload the current version and reload the system. Since no extra compilations are required by this approach, the user will always try module replacement if there is a possibility that it will succeed and the user wants to preverse the current state of the program or software system. When the Cedar debugger examines a stopped system, e.g., at a breakpoint, the debugger can follow the procedure call stack and fine the global variables for the module in which the procedure is declared. These global variables are stored in the global frame. The modeller can provide the debugger with module-level information about the model in which this module appears, and provide file location and version information. This is particularly useful when the debugger wants to inspect the symbol table for a module, and the symbol table is stored in another file that is not on the local machine or computer disk or the user. The programmer/user deals with the model naturally while debugging the system. Since more than one modeller can be in use on a machine or computer, the modeller(s) call procedures in an independent runtime loader to add each model to a list of models maintained for the entire running system. When the modules of a model are loaded or unloaded, this list is updated, as appropriate. To simplify the design, the list of models is represented by the internal data structures used by the modeller to describe a model. This model has no formal parameters and no file where it is stored in text form, but it can be printed. This allows the debugger to use a simple notion of scope: a local frame is contained in the global frame of a module. This module is listed in a model, which may be part of another model that invokes it, and so on, until this top-most model is encountered. The debugger can easily enumerate the siblings in this containment tree. It can enumerate the procedures in a module, or all the other modules in this model, as appropriate. This type of enumeration occurs when the debugger tries to match the name of a module typed by the user against the set of modules that are loaded, e.g., to set the naming environment for expressions typed to the debugger. The procedures of the modeller can be categorized into these functional groups: 1. Procedures to parse model source files and build an internal parse tree. 2. Procedures to parse source and object files to determine needed parameterization. 3. Procedures that maintain a table, called the projection table, that expresses relationships between object files and source files, as described below. 4. Procedures that maintain a table, called the file type table, that gives information about files described in models. This includes information about the parameters needed by the file, e.g., interface types, and information about its location on the file system. 5. Procedures that load modules and maintain the top-level model used by the debugger. 6. Procedures used to call the compiler, connect the modeller to the editor, and other utility procedures. 7. Procedures to maintain version maps. The sections below discuss essential internal data structures used in these groups, illustrations of which are shown in the tables of FIGS. 18 and 19. The model is read in from a text file and must be processed. The modeller parses the source text and builds an internal parse tree. This parse tree has leaves reserved for information that may be computed by the modeller when compiling or loading information. When a Notice operation is given to the modeller, it alters the internal data structures to refer to new versions of files. Since new models are derived from old models when Notice operations occur, the modeller must be able to write a new copy of the model it is working on. There is one parse tree per source model file. The links between model files that are "called" by other model files are represented as pointers from one model's internal data structure to another in virtual memory. The internal data structure represents the dependency graph used to compile modules in correct compilation order by threading pointers from one file name to another in the parse tree. three tables that record the results of computations that are too extensive to repeat. These tables serve as accelerators for the modeller and are stored as files on the local computer disk. These tables are of three types and are maintained independently from instances of the modeller on a local computer disk. The information in a table is like a cache for the modeller. It can be automatically reconstructed whenever it is not present, as the information is never purged. When the file containing the table becomes too large, the user simply deletes it from his local disk and the information is reconstructed. Object Type Table: This table contains a list of objects that are referenced by models and have been analyzed as to their types. An example is shown in FIG. 18. The modeller abstracts essential properties of the objects in models and stores the information in this table. For example, a Cedar source file is listed along with the implied procedure type used by the modeller to compile and load it. The unique name of an object is the key in this table and its type is the value. The object type table also contains information that records whether a file has been edited, and if so, whether it has been saved on a remote file server. Projection Table: This table keeps a list of entries that describe the results of running the compiler or other programs that takes a source object file and any needed parameters, such as interfaces, and produces a binary object file. An example is shown in FIG. 18. Before invoking, for example, the compiler on a source file to produce an object user-sensible name of the source object, plus the version stamp, the 48-bit hash code of all the other information. An entry is added to the projection table whenever the compiler is successfully run. If an entry is not in the table, there may be an object file on the disk made by the compiler that predates the information in the projection table. If not, the compiler is invoked to produce the object file. In either case a new entry is added to the table for later use. It is possible for these tables to fill up with obsolete information. Since they are just caches and can always be reconstructed from the sources, or from information in the .modelBinary objects,. The projection table does not include the location of object files. Version maps, described below, are used for this. Version Maps: The central file servers used by the system modeller can store more than one version of a source file in a directory. An example is shown in FIG. 19. Each version is given a version number, which ranges from 1 to 32767 and is typically less than 100. Obtaining the creation time of a source file or the 48-bit version stamp of object files from a central file server takes between 1/4 and 1 second. For directories with many versions of a file, searching for the create time or version stamp can take a few seconds per file. Since the modeller must determine the explicit version number of the file that is referenced in the model, this slow search for large numbers of files referenced by models is prohibitively excessive. To avoid this excessive searching when it is running, the modeller uses an index between create times or version stamps and full path names that include explicit version numbers for files. Since the version numbers used by the file servers are not unique and may be reused, the modeller uses this index as a cache of hints that are checked when data in the file is actually used. If there is no entry for a file in the cache, or if it is no longer valid, the versions of a file are searched and an entry is added or updated if already present. Commonly referenced files of the software system are inserted in a version map maintained on each computer or machine. In summary, the Object Type table speeds the analysis of files, the Projection table speeds the translation of objects into derived objects, and Version Maps are used to avoid extensive directory searches. The modeller keeps its caches on each machine or computer. It is also desirable to include this kind of precomputed information with a stored model, since a model is often moved from one computer or machine to another, and some models are shared among many users, who refer to them in their own models by using the @-notation. An example is the model CedarInterfaces.model, which returns a large number of commonly used interfaces that a program might need. Furthermore, even with the caches, it is still quite extensive to do all the typechecking for a sizable model. For these reasons, the modeller has the ability to create and read back compiled models. A compiled model contains (1) a tree which represents a parsed and typechecked version of the model; (2) object type and projection tables with entries for all the objects in the model; (3) a version map with entries for all the objects in the model. When the user initiates the "MakeModelBcd" button in field 30 of FIG. 20, the modeller makes this binary object for the current model, much as a compiler makes a binary file from a source file. In a .modelBcd object any parameters of the model which are not instances may be given specific argument values. This is much like the binary objects produced by the compiler, in which the interface parameters are fixed. The .modelBcd objects acts merely as an accelerator, since it is always possible to work from the sources of the model and the objects it references, to derive the same result as is encoded in the .modelBcd. As just indicated, .modelBcd file can be produced for a model that has been analyzed by initiating the "MakeModelBcd" button. The .modelBcd file contains the same information described in the previous tables. Only information relevant to the model being is analyzed is stored. The .modelBcd contains (a) a representation of the internal parse tree that results from reading and parsing the source file for the model, (b) an object type table for source files referenced by the model, (c) a projection table describing the object files are are produced, for example, by the compiler, and (d) a version map that describes, for each source and object file in (b) and (c), a file location including a version number. A model may refer to other models in the same way it refers to other source files. The projection table includes references to .modelBcd files for these inner models. The information stored in the model-independent tables or present in .modelBcd files is used in four different ways: three ways when the modeller is used, and once by the release process, which is described later. StartModel Analysis: Each application of a source file to a parameter list in the model is checked for accuracy and to see if any parameters have been defaulted. The version information (create time) following the source file name is employed to look up the parameters needed by the file in the file type table. If no entry is present, the source file must be parsed to get its parameters. The version map is used to obtain an explicit file on a file server. If there is no entry for the create time of this file in a version map, all versions of the source file on the directory listed in the model are examined to see if they have the right create time. If so, an entry for that version is added to the version map and the file is read and its type is added to the object type table. If so such version can be found by enumeration, an error is reported in field 36. If the version of the source file is given as "!H", meaning the highest version on that directory, the directory is probed for the create time of the highest version, and that create time is used as if it were given instead of "!H". FIG. 22 illustrates by flow diagram how a reference to "[Ivy]<Schmidt>X.Mesa" of July 25, 1982 14:03:02 is treated by the StartModel analysis. Compilation Analysis: After the user initiates "Begin" or "Compile" in field 30, the modeller constructs object files for each source file in the model. Each source file and its parameters is looked up in the projection table. If not present, the modeller constructs the 48-bit version stamp that an object file would have if it had been compiled from the source and parameters given. The version map is used to search for an object file with this 48-bit version stamp. If not found in the version map, the modeller searches for an object file in the directory where the source file is stored. If found, an entry is added to the version map and to the projection table. The modeller does not search for object files compiled from source files that have just been edited since it has knowledge that these have to be compiled. If the modeller must compile a source file because it cannot find an object file previously compiled, the source file is read using the version map entry for the source and an object file produced on the local computer disk. Information about this object file is added to the model-independent tales and version maps. The object file is stored on a file server later when "StoreBack" is initiated. The compilation analysis for this is illustrated in FIG. 23. Loader Analysis: Each object file must be read to copy the object instructions into memory. The modeller loader, as illustrated in the loading analysis of FIG. 24, looks up the 48-bit version stamp in the version map to find the explicit version of the file to read. Since the version maps are hints, the presence of an entry for a file in a version map does not guarantee that the file is actually present on the file server and, therefore, each successful probe to the version map delays the discovery of a missing file. For example, the fact that a source file does not exist may not be discovered until the compilation phase, when the modeller tries to compile it. When the modeller stores file type, projection, and version map information in .modelBcd files, it stores only information relevant to the model in use. When the modeller reads .modelBcd files, it takes the information from the .modelBcd and adds it to cache tables maintained on each machine or computer. When a module is compiled for the first time, this information is added to the tables manage centrally on each computer. This information can, over time, become obsolete and require large amounts of disk space, since these tables are stored in files on the local computer disk. If these files are deleted from the local disk, the modeller will reconstruct the information as it uses it. As previously indicated,]: (1) Checks that M and each component of M is legal: syntactically correct, type-correct, and causes no compiler errors. (2) Ensures that all objects needed by any component of M are components of M, and that only one version of each object exists (unless multiple versions are explicitly specified). (3) Builds the system described by M. (4) appropriate implementor/user to correct the model. Releases can be frequent, since performing each release imposes a low cost on the Release Master and on the environment programmers. The Release Master does not need to know any details about the packages being released, which is important when the software of the system becomes too large to be understood by any single programmer/user. The implementor/user of each package can continue to make changes until the release occurs, secure in the knowledge that the package will be verified before the release completes. Many programmers make such changes at the last minute before the release. The release process supports a high degree of parallel activity by programmers engaged in software development. ˜ [BTree ˜ @[indigo]<Int>BTree.Model!H --ReleaseAs [Indigo]<Cedar>--.Runtime ˜ @[Indigo]<Int>Runtime.Model!H --ReleaseAs[Indigo]<Cedar>--__________________________________________________________________________ The Top model is used during the development phase as a description of models that will be in the release and gives the locations of these objects while they are being developed. The Top model provides the list of moldels that will be released. Models not mentioned in the Top model will not be released. Every model M being released must have a LET statement at the beginning that makes the components in the Top model accessible in M. Thereafter, M must use the names from Top to refer to other models. Thus, M must begin ______________________________________LET@[Indigo<Int>Top.Model!H IN [. . .RTypes: INTERFACE ˜ Runtime,. . .______________________________________ Clients of a release component, e.g., RTTypes, are not allowed to refer to its model by @-reference, since there is no way to tell whether that model is part of the release. Aside from the initial reference to Top, a release component may have @-references only to sub-components of that component. A model M being released must also have a comment that gives its object name in the Top Model (e.g. BTree), and the working directory that has a copy of the model, e.g., --ReleaseName BTree --WorkingModelOn [Indigo]<Int>BTree.Model These comments are redundant but allow a check that Top and the component, and hence the Release Master and the implementor/user, agree about what is being released. M must also declare the release position of each file, by appending it as a comment ater the filename in the model, e.g., @[Ivy]<Work>XImpl.Mesa!H--ReleaseAs [Indigo]<Cedar>XPack>--[] A global ReleaseAs comment can define the default release position of files in the model (which may differ from the release position of the model itself). Thus if the model contains a comment, --DefaultReleaseAs [Indigo]<Cedar>BTrees>-- then the user may omit the --ReleaseAs [Indigo[<Cedar>BTrees>-- clauses. The modeller must be able to analyze large collections of modules quickly, and must provide interfaces to the compiler, loader, debugger, and other programs. Described first are the basic algorithms used for evaluation and then a description of the algorithms used for releases. The cache tables used have been previously explained which gently improve performance in the normal case of incremental changes to a large software system. In order to build a program or system, the modeller must evaluate the model for the program. As previously explained, a model is an expression written in SML notation. Evaluating an SML expression is done in three steps: (1) The Standard β-reduction evaluation algorithm of the typed lambda calculus converts the expression into one in which all the applications are of primitive objects, namely system modules. Each such application corresponds to compilation or loading of a module. β-reduction works by simply substituting each argument for all occurrences of the corresponding parameter. SML operations such as selecting a named component of a binding are executed as part of this process. Thus, in the example, LET Instances˜@CedarInstances.model IN Instances.Rope evaluates to @[Indigo]<Cedar>RopeImpl.cedar!(July 10, 1982, 17:10:24)[. . . ][. . . ] where the arguments of RopeImpl are filled in according to the defaulting rules. (2) Each application of a .cedar object is evaluated by the compiler, using the interface arguments computed by (1). The result is a .binary or .Bcd object. Of course, each interface argument must itself be evaluated first; i.e., the interfaces on which a module depends must be compiled before the module itself can be compiled. (3) Finally, each application of a .Bcd or software system. Step (1) is done when the user initiates the StartModel screen button shown in FIG. 20 or on the affected subtree whenever the current model is modified by a Notice operation. For StartModel, system system, INTEFACE B]→[[A]→[B]] i.e., it is a function taking two interface arguments and returning, after it is compiled, another function that takes an instance of A and returns an instance of B. The modeller checks that the arguments supplied in the model have thee types, and defaults them if appropriate. SML typechecking is discussed in detail in the Article of B. W. Lampson et al, "Practical Use of a Polymorphic Applicative Language", Proceedings of the 10th Symposium on Principles of Programming Languages, Austin, Tex., January 1983. After the entire model has been evaluated, the modeller has determined the type of each module and has checked to determine that every module obtains the arguments of the type it wants. Any syntactic or type errors discovered are reported to the user. If there are none, then whenever a value is defined in one module and used in another, the two modules agree on its type. Nothing at this point has yet been compiled or loaded. After step (1) , the value of the model is a tree with one application for each compilation or loading operation that must be done. The compilation dependencies among the modules are expressed by the aguments: or system, each application of a source module must be evaluated by the compiler, as described in (2). During this evaluation, the compiler may find errors within the module. This step is done when the user initiates the "Compile" or "Begin" button. After step (2), the value of the model is a tree in which each application of a source object has been replaced by the binary object that the compiler produced. To get from this tree to a runnable program or system, each binary object must be loaded, and each instance record filled in with the procedures EXPORTed from the modules that implement it. The details of how this is done are very dependent on the machine architecture and the runtime data structures of the implementing language. After preparation of all models that are to be released, the Relase Master runs the Release Utility, Release, which makes three passes over the module being released. for correction of errors caught in this phase. The Move phase moves the files of the release onto the release directory and makes new versions of the models that refer to files on the release directory instead of the working directory. For each model listed in the release position list, Move: (1) reads in the model from the working directory, (2) moves each file explicitly mentioned in the model to its release position, (3) writes a new version of the source file for the model in the release directory. This release version of the model is like the working version except that (a) all working directory paths are replaced by paths on the release directory, (b) a comment is added recording the working directory that contained the working version of the model, and (c) the LET statement referring to the Top model is changed to refer to the one on the release directory. For example, the model may look like the following: ______________________________________ReleaseName BTreeModelCameFromModelOn [Indigo]<Int>Btree.ModelDefaultCameFrom [Indigo]<Int>BTrees>LET @[ivy}<Rel>ReleasePosition.Model IN [. . .RTTypes:INTERFACE ˜ @[Indigo]<Cedar>XPack>file.bed!1234CameFrom [Indigo]<Int>XPack>--,. . .______________________________________ Any references to highest version, "!H", are changed to be explicit create times as the model is written. At the end of phase Move, the working position model is automatically converted to a release position model that defines the same variables as the working position model, but sets those variables to refer to the model stored on the release directory. A release position model might be ______________________________________Position ˜ [BTreeModel ˜ @[Indigo]<Cedar>BTree.Model!1234,RuntimeModel ˜ @[Indigo]<Cedar>Runtime,Model!]2345]______________________________________ Note that the LET switch is a deviation from explicit parameterization that allows us to change the nature of each model from being a development version to being a released version. The LET switch could be avoided if every model took a parameter that controlled whether its LET statement should refer to the working position model or the release position model. The SML language could be augmented with a type "BOOLEAN" and an IF-THEN-ELSE expression to accomplish this. Because Release has to rewrite models anyway to eliminate "!H" references, the LET switch is chosen to be accomplished automatically. Phase Move also constructs a directed graph of models in reverse dependency order that will be used in phase Build. In this dependency graph, if Model A refers to model B, then B has an edge to A. FIG. 22 illustrates the movement of files by this phase. The Build phase takes the dependency graph computed during the move phase and uses it to traverse all the models in the release. For each model: (1) All models on incoming edges must have been examined. (2) For every source file in the model, its object file is moved to the release directory from the working directory. (3) A.modelBed file is made for the version on the release directory. (4) If a special comment in the model is given, a fully-bound object file is produced for the model, usually to use as a boot file. After this is done for every model, a version map of the entire release is stored on the release directory. FIG. 23 illustrates the movement of files by this phase. At the conclusion of phases Check, Move and Build, Release has established that: (1) Check: All reachable objects exist, and derived objects for all but the top object have been computed. This means the files input to the release are statically correct. (2) Move: All objects are on the release directory. All references to files in these models are by explicit create time (for source files) or version stamps (for object files). (3) Build: The system has been built and is ready for execution. All desired accelerators are made, i.e., .modelBcd files and a version map for the entire release. Phase Check. In order to know the parameterization of files referenced in the model, some part of each system file must be read and parsed. Because of the large number of files involved, phase Check maintains object type and projection tables and a version map for all the files on their working directories. These tables are filled by extracting the files stored in the .modelBcd files for the models being submitted to the release. Any models without .modelBcd accelerators are read last in phase Check and the result of analyzing each file is entered into the database. The version map information about object file location(s) and projection table are used later in phase Build. Because files can be deleted by mistake after the .modelBcd file is made and before phase Check is run, Release checks that every version of every file in the release is present on the file server by verifying the file location hints from the .modelBcd files. Phases Move and Build. The Move and Build phases could have been combined into a single phase. Separating them encourages the view that the Build phase is not logically necessary, since any programmer can build a running system using the source models and source files that are moved to the release directory during the Move phase. The Build phase makes a runnable system once for all users and stores the object files on the release directory. The Build phase could be done incrementally, as each model is used for the first time after a release. This would be useful when a release included models that have parameters that are unbound, which requires the user to build the model when the model is used and its parameter are given values. The Check phase file type and projection tables and version map are used to make production of the .modelBcd files faster. The projection table is used to compute the version stamps of object files needed, and the version map is used to get the file name of the object file. This object file is then copied to the release directory. The file type entry, projection entry and new release position of source and object files are recorded in the .modelBcd being built for the released model. The Build phase has enough information to compile sources files if no suitable object files exist. To speed up releases, it is preferred that the programmer/user make valid object files before the operation of Move and Build. If such an object file is not on the same directory as the source file, the programmer/user is notified of his error and ask to prepare one. If the Release Master ran the compiler, he would most likely compile a file that the programmer had fogotten to recompile, and this file might have compilation errors in it. The ability to automatically compile every file during a release is useful in extensive bootstraps, however. For example, a conversion to a new instruction set, where every module in the release must be compiled, is easily completed using a cross-compiler during the phase Build. The Build phase produces the version map of the release by recording the create time or version stamp of every file stored by Release on the release directory, along with file server, directory, and version number for the file. The version maps supplied by the .modelBcd files that were submitted to the release cannot be used, since they refer to files on their development directories and not on the release directories. This released version map is distributed to every machine or computer. Although the .modelBcd files also have this information, it is convenient to have allthe version information released in one map. FIG. 24 is an example of a single version map. The working position model may list other nested working position models. The objects defined in the nested working position model are named by qualifying the name of the outer object. For example, if Top contained ______________________________________Top ˜ [. . .NestedSet ˜ @[Indigo]<Int>NestedWPM.Model!H -- ReleaseAs-Indigo]<Cedar>. . .______________________________________ Then, the elements of the nested working position model can be referred to using "." notation, e.g., Top.NestedSet.Element. The "ReleaseAs" clause in Top indicates the directory in which the analogous release position model is written. The same algorithm is used to translate the working model into a release model. A model refers to objects, i.e. source files, binary (object) files or other models, by their unique names. In order to build a system from a model, however, the modeller must obtain the representations of the objects. Since objects are represented by files, the modeller must be able to deal with files. There are two aspects to this: (1) Locating the file which represents an object, starting from the object's name. (2) Deciding where in the file system a file should reside, and when it is no longer needed and can be deleted. It would be desirable if an object name could simply be used as a file system name. Unfortunately, file systems do not provide the properties of uniqueness and immutability that object names and objects must have. Furthermore, most file systems require a file name to include information about the machine or computer that physically stores the file. Hence, a mapping is required from object names to the full pathnames that unambiguously locate files in the file system. To locate a file, the modeller uses a location hint in the model. The object reference @[Ivy]<Schmidt>BTreeImpl.cedar!(Jan. 14, 1983, 14:44:09) contains such a hint, [Ivy]<Schmidt>. To find the file, the modeller looks on the file server Ivy in the directory Schmidt for a file named BTreeImpl.cedar. There may be one or more versions of this file; they are enumerated, looking for one with a creation date of Jan. 14, 1983, 14:44:09. If such a file is found, it must be the representation of this object. The distributed environment introduces two types of delays in access to objects represented by files: (1) If the file is on a remote machine, it has to be found. (2) or computer and directory has a copy of the version desired can be very time consuming. Even when a file location hint is present and correct, it may still be necessary to determine the Version Map, discussed previously. Note that both source objects, whose unique identifiers are creation dates, and binary objects, whose unique identifiers are version stamps, appear in the version map. The full pathname includes the version number of the file, which is the number after the "!". This version number makes the file name unique in the file system so that a single reference is sufficient to obtain the file. Thus, the modeller's strategy for minimizing the cost of referencing objects has three paths: (1) Consult the object type table or the projection table, in the hope that the information needed about the object is recorded there. If it is, the object need not be referenced at all. (2) Next, consult the version map. If the object is there, a single reference to the file system is usually sufficient to obtain it. (3) If there is no entry for the object in the version map, or if there is an entry but the file it mentions does not exist, or does computer or machine and in each .modelBcd object. A .modelBcd version map has an entry for each object mentioned in the model. A machine version map has an entry for each object which has been referenced recently on that machine. In addition, commonly referenced objects of the software system are added to the machine version map as part of each release. Since the version maps are hints, a version map entry for an object does not guarantee that the file is actually present on the file server. Therefore, each successful probe to the version map delays the discovery of a missing file. For example, the fact. While the system modeller has been described in conjunction with specific embodiments, it is evident that alternatives, modifications and variations will be apparent to those skilled in this art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications and variations as fall within the spirit and scope of the appended claims.
https://patents.google.com/patent/US4558413
CC-MAIN-2018-34
en
refinedweb
Hi Manish, Can u explain your problem clearly. mayb ur code snippet where u wish to get the user. You can get the user by IPortalComponentRequest request = (IPortalComponentRequest) this.getRequest(); IUser user = (IUser) request.getUser(); As mentioned in this thread.> regards, Saravanan Hi Sarvana, Thanx for the code! But the problem here is I am using Repository Framework to connect to a content server. Now here the main block is the Repository Manager. I want to retrieve the portal user who has logged into the portal to retrieve the corresponding Content Server user and then Log into the system. The code where I need the the request object to use the request object to retrieve the user. public class ACEPRepositoryManager extends AbstractManager implements IReconfigurable{ private static Location logger = Location.getLocation(com.filenet.acep.ACEPRepositoryManager.class); private static final Set supportedOptions = new HashSet(); hi manish, I am not able to understand your query properly.But i hope the following code will help you. a) First make sure to import 1.com.sapportals.portal.prt.component.IPortalComponentRequest; 2.com.sapportals.portal.prt.component.IPortalComponentResponse; b) IPortalComponentRequest request = (IPortalComponentRequest) this.getRequest(); IUser user = (IUser) request.getUser().getUser(); ResourceContext ctxt = new ResourceContext(user); RID rid = RID.getRID("/documents"); IResource resource =ResourceFactory.getInstance().getResource(rid, ctxt); IRepositoryManager manager=resource.getRepositoryManager(); Regards, Srinath Add comment
https://answers.sap.com/questions/1297525/index.html
CC-MAIN-2019-18
en
refinedweb
Prologue to the Revelation Revelation Series: Episode III The Prologue In any well written epistle one would expect to be ushered into its body by way of an introduction. John does not disappoint his readers, for in the first eight verses of chapter one he offers up his introduction which includes a prologue, and the greetings with a doxology. In the very beginning of the manuscript the writer identifies himself as John (see verse 1). From the most earliest of times the church fathers have understood this John to be the son of Zebedee, the beloved disciple of our Lord. The Revelation is addressed to seven churches which existed in the western part of modern Turkey, then called Asia Minor (verse 4). In verses 5,6,7 John gives the reader a view of the whole drama of redemption: in verse 5 the death and resurrection of Christ; in verses 6 the empowering of the Church as kings and priests; and finally, in verse 7 the return (parousia - the coming) of Christ; verse 8 is a summation of the preceding three verses in that Jesus declares Himself to be Sovereign over human history. ♱ 1:1 The Revelation of Jesus Christ, which God gave Him to shew his servants things which must shortly come to pass; and he sent and signified it by his angel to his servant John: Dear disciple in the opening verse you are informed that before you, nay, in your very hands is “the revelation of Jesus Christ.” The Greek word translated “revelation” is “apocalypse,” which means: the unveiling. Therefore, my fellow disciples, you are told, in these very first strokes of John’s quill that Jesus Christ of Nazareth (who, according to Paul, is the mystery of God the Father [Col 2:1-2]) is about to be un-veiled before you. The Son of God (i.e. the humanity of Christ) received this “revelation” from the Father (i.e. the Spirit, Jn 4:24) for the benefit of His ‘servants.” The servants mentioned here are, of course, all believers. Notice that the things concerning Jesus which are about to be revealed are to “shortly come to pass.” It is interesting, and important that both here and in the epilogue of this work the disciples are instructed to expect an imminent fulfillment of the Revelation’s predictions. The Revelation is to be “signified” to John by the angel of Christ. From this word “esēmana” (from sēma) comes our English “sign;” “to give a sign,” or, “to make known.” We may expect, then, for the subsequent revelation to be given in signs or symbol images and languages (see Episode II, Literary Form). The phrase “his angel” may reference any number of possibilities. The word “angel” is the Greek word “angelos” meaning: a messenger, or envoy of God. This word, along with its plural, appears over 70 times in the Revelation. The word may apply to created angelic beings (Ge 28:12, Ps 68:17), or, redeemed saints (see 17:1 cf 19:10; and 21:9 cf 22:9), or more likely, in this case, “his angel” has the “angel of the LORD” in view. This view may be held with confidence because of chapter 22 and verses 6 and 16. In verse 6 John records “...and to show unto his servants the things which must shortly be done.” This verse is clearly a companion to the verse under consideration here (1:1). It should be pointed out at this point that Jesus identified this angel of the LORD as His angel (22:16): “I, Jesus, have sent My angel to testify to you these things in the churches. ...” A study of the Angel of the LORD (LORD=Yahweh) shows Him to be one and the same as Christ. The same statements made about the nature, character, mission, and activities of the angel of the LORD are also stated of Jesus. Angel of the LORD Activity or Attribute Jesus Genesis 16:7,13 Called “LORD” (YHWH) John 20:28 Genesis 48:15-16 Called-God Jude v25 Exodus 48:15-16 “I am” John 8:58 Exodus 13:20-23 Sent from God John 5:30 Joshua 5:13-18 Capt. of the LORD’s Host Isaiah 9:6 Isaiah 63:9 Redeemed His own Ephesians 5:25 ♱ 1:2 Who bare record of the word of God, and of the testimony of Jesus Christ, and of all things that he saw. Although some, such as Dionysius of North Africa, in the third century, have doubted the authorship of John (the beloved disciple of our Lord) for the Revelation, here is the very first sentence of his book is an interesting evidence to that evangelist’s favor. No, not that he gave his name, for indeed any could have signed a signature. The evidence is of greater import. Namely, the similarity of this first sentence with the first verses of both the Fourth Gospel and the First Epistle which bears the same name: Gospel of;" Revelation 1:2, "Who bare record of the word of God, and of the testimony of Jesus Christ, and of all things that he saw". The similarities between these three passages are the brush strokes of the same painter. Dear disciple, dwell for a moment on these three opening sentences for three separate works and tell me truly: Do you not agree that this is evidence of the same author? Concerning the author recording all that he saw there may need to be a qualification made here; for in chapter 10 verses 4-5 John was not permitted to record the utterances of the “seven thunders.” However, the prohibition was on what he heard and not no what he saw. So even in this account (of the seven thunders) we may be confident that John was a faithful eyewitness. ♱ 1:3 Blessed is he that readeth, and they that hear the words of this prophecy, and keep those things which are written therein: for the time is at hand . Verse 3 is the first of seven beatitudes in the Revelation. The other six can be found at: 14:13; 16:15; 19:9; 20:6; 22:7, 14. The beatitudes keep the symmetry of the book with having their number at seven. It would be proper to list all seven of the beatitudes together so the disciples may obtain an overall view of their scope. The Seven Beatitudes of the Apocalypse (NKJV) 1. (1:3) Blessed is he who reads and those who hear the words of this prophecy, and keep those things which are written in it; for the time is near. - (14:13) Then I heard a voice from heaven saying to me, “Write: ‘Blessed are the dead who die in the Lord from now on.’” “Yes,” says the Spirit, “that they may rest from their labors, and their works follow them.” - (16:15) “Behold, I am coming as a thief. Blessed is he who watches, and keeps his garments, lest he walk naked and they see his shame.” - (19:9) Then he said to me, “Write: ‘Blessed are those who are called to the marriage supper of the Lamb!’” And he said to me, “These are the true sayings of God.” - (20:6) Blessed and holy is he who has part in the first resurrection. Over such the second death has no power, but they shall be priests of God and of Christ, and shall reign with Him a thousand years. - (22:7) “Behold, I am coming quickly! Blessed is he who keeps the words of the prophecy of this book.” - (22:14) Blessed are those who do His commandments, that they may have the right to the tree of life, and may enter through the gates into the city. The word “blessed” (Greek: makarios) means: happy; but, not only that; it indicates the favorable position God has placed one into. One hears the psalmist sing: “Blessed is the man that walketh not in the counsel of the ungodly, or standeth in the way of sinners, nor sitteth in the seat of the scornful ...He shall be like a tree planted by the rivers of water, That bringeth forth his fruit in his season: His leaf also shall not wither; And whatsoever he doeth shall prosper. The ungodly are not so ...” (Ps 1:1ff KJV): blessed, that is. One should consider the beatitudes set forth it the Gospel of the Kingdom (Mt 5:3-12) in order to discover the most propitious position of the disciples of the Way. In this first beatitude of the Apocalypse the blessing is pronounced upon the reader and the hearer of this wonderful manuscript. We must acknowledge here that the idea conveyed is one of oral reading. This is seen in that the blessing is for the hearer as well as the reader; and it follows that the reading must be oral for there to be a hearer. Both prayer and reading of Scripture are intended to be spoken. There is creative power in the “spoken word.” Notice, that in the beginning God “said,” “Let there be ...!” (see Ge 1:3,6,9,11,14,20,and 24). At His spoken word His world leapt into existence. Because we humans are made in His likeness and image something of spoken creativeness is intrinsic to us. Jesus underlined this truth with His words recorded in Mark 11:23; He taught His disciples to “speak” to their circumstances. However, the blessings pronounced upon the readers and hears of this book is unparalleled in any other biblical treatise. The intent was clearly for the Revelation to be read aloud to the congregations of the seven churches to which the book is addressed (1:4). The nature of the apocalypse is prophecy (as is stated in verse 3 which we are presently viewing). This, alone, sets it apart from the gospels and epistles; earning for itself a place beside the other apocalyptic books of the Old Testament. Similar to these sister works, and following the true Hebraic nature of apocalyptic material, the Revelation is written in a highly covert style. Because of this and the esoteric knowledge it contains the Revelation experienced a long and ardent fight for its place among the canonical books of the New Testament. This being said, it must be acknowledged that prophecy consist of “telling forth” as well as "foretelling.” Both forms of prophecy are found here and the reader is admonished to “keep those things which are written in it.” This admonition makes clear that the Revelation provides exhortation for Christian living as well as futuristic prognostication, and both are to be kept in the mind and heart of the disciple. The need to “keep” these sayings is emphasized by the last statement of verse 3, “for the time is at hand.” The NKJV states: “for the time is near.” The author was obviously under the divine impression that he was writing for the benefit of the believers of his generation. To John, these events were “at the door.” This truth, dear disciple, must be kept before you - for it is the consistent theme of this ancient manuscript (see v1, here, 22:6,7,10,20). One would do well to consider the words written by James, our Lord’s brother (some say as early as A. D. 50): “Grudge not one against another, brethren, lest ye be condemned: behold, the judge standeth before the door” (Ja 5:9). The question must be asked: Were these holy, God inspirited, men mistaken? Or, are you, dear reader, on a journey of discovery during which time you will read prophecy, some of which, has already had a fulfillment that can be ascertained, and thereby give testimony to the sureness and faithfulness of the Holy Scripture? Yes. That surely will be the case. But not only that, for the prophecy is unto the every end of human history: a history that contains both the author who is writing this humble commentary, and you, dear readers, whom have given him the honor of considering it. Apostolically Speaking ♱ J L Hayes Popular I love the seven beatitudes! What a great presentation of the blessings received by reading and hearing the Book of Revelation. I'm very glad I've read the Book, for it is as the climax of all the prophecies written in the age-old Scriptures. Do you know if Revelation truly was the last book of the Bible written, by chance? I know Job is said to be the oldest book of the Bible, yet the Scriptures begin with the books written by Moses (Genesis-Deuteronomy, aka the Torah). I look forward to the sequence of your Revelation hubs, so I can glean from what has already been fulfilled and what is yet to come.
https://hubpages.com/religion-philosophy/Prologue-to-the-Revelation
CC-MAIN-2019-18
en
refinedweb
As defined in the Twelve-Factor-App it’s important for cloud-native applications to store configuration externally, rather than in the code since this makes it possible to deploy applications to different environments. An app’s config is everything that is likely to vary between deploys (staging, production, developer environments, etc). This includes: Resource handles to … backing services. Credentials to external services … Microservices that are implemented with Java EE can leverage MicroProfile Config. The configuration can be done, for example, in Kubernetes yaml files and accessed from Java code via annotations and APIs. Here is a simple example from the cloud-native-starter repo. The ‘articles’ service uses configuration to define whether or not to create ten articles the first time it is invoked.In the yaml file an environment variable pointing to a ConfigMap is defined. kind: Deployment apiVersion: apps/v1beta1 metadata: name: articles spec: replicas: 1 template: metadata: labels: app: articles version: v1 spec: containers: - name: articles image: articles:1 ports: - containerPort: 8080 env: - name: samplescreation valueFrom: configMapKeyRef: name: articles-config key: samplescreation restartPolicy: Always --- kind: ConfigMap apiVersion: v1 metadata: name: articles-config data: samplescreation: CREATE In the Java code the configuration can be accessed via @Inject and @ConfigProperty. public class CoreService { private static final String CREATE_SAMPLES = "CREATE"; @Inject @ConfigProperty(name = "samplescreation", defaultValue = "dontcreate") private String samplescreation; @PostConstruct private void addArticles() { if (samplescreation.equalsIgnoreCase(CREATE_SAMPLES)) addSampleArticles(); } Note that you cannot access the injected variable in the constructor. Instead use the @PostConstruct annotation. Thanks to Emily Jiang for figuring this out. If you want to try this feature and many other MicroProfile and Istio features, get the code from the cloud-native-starter repo and run these commands. $ git clone $ scripts/check-prerequisites.sh $ scripts/deploy-articles-java-jee.sh $ scripts/show-urls.sh To learn more about MicroProfile Config, check out these resources:
http://heidloff.net/article/configuring-java-microservices-microprofile-kubernetes/
CC-MAIN-2019-18
en
refinedweb
I'm a beginner student learning about functions and do not understand them fully. My assignment was to write a program that deals with temperature conversion. My program had to use 6 functions (1. Program overview 2. Enter temp 3. Enter scale 4. Convert F to C 5. Convert C to F 6. Display results) It also needs to display an error message if an inappropriate input is entered for both scale (which I have) and temperature (which I do not know where to place). Appropriate values for temps are >= -459.67 F or -273.15 C. I do not know where to place this scale either. I also have multiple errors in my program. If anyone could give me any guidance in helping me to fix my program, I would greatly appreciate it!! Code:#include <iostream> using namespace std; void programOverview (); char getScale (); int getDegree (); float convertFtoC (float); float convertCtoF (float); int getResults (); int main () { cout.precision (2); programOverview (); getDegree (); getScale(); { if (scale == "F") { convertFtoC(); } else if (scale == "C") { convertCtoF(); } else cout << "ERROR: Invalid temperature scale" << endl; } getResults(); return 0; } //This function displays a brief overview explaining the program to the user void programOverview () { cout << "This program will convert a temperature reading provided in" << endl; cout << "either Fahrenheit or Celsius to the other measurement scale." << endl; } //This function requires the user to enter the temperature scale to be used char getScale () { char scale; cout << "Enter the letter of the temperature scale that will be used:" << endl; cout << "F = Fahrenheit; C = Celsius)" << endl; cin >> scale >> endl; return scale; } //This function requires the user to enter the temperature reading in degrees int getDegree () { int degree; cout << "Enter your temperature reading in degrees:" << endl; cin >> degree; return degree; } //This function converts a Fahrenheit temperature to Celsius float convertFtoC (float Ftemp) { float Ctemp; Ctemp = (Ftemp - 32) / 1.8; return Ctemp; } //This function converts a Celsius temperature to Fahrenheit float convertCtoF (float Ctemp) { float Ftemp; Ftemp = 1.8 * Ctemp + 32; return Ftemp; } //This function displays the results { int getResults () cout << "Your temperature reading converts as follows:" << endl; cout << "Fahrenheit:" << return Ftemp << endl; cout << "Celsius:" << return Ctemp << endl; }
https://cboard.cprogramming.com/cplusplus-programming/136435-temperature-conversion-program.html
CC-MAIN-2017-47
en
refinedweb
My entire project is fucked if I can't do this; Basically, I need to be able to add an element to a JList (doesn't need to be a JList, just needs to be a list component) then have a tag added to the element too, so when I get the selected list item I can cast it to the object I need... I can't really explain it in words, let me explain it in code. public class MyObject { public int i; public MyObject(int i) { this.i = i; } } public class Main { public void addJListElement() { jList.addElement("My object", new MyObject(10)); } public void getSelectedListElement() { MyObject myObject = (MyObject)jList.getSelectedElement().getTag(); int i = myObject.i; //should be 10. } } Please excuse the poor code, I was rushing to create an example. Is there anyway I can make this possible, or do something similar?
https://www.daniweb.com/programming/software-development/threads/357177/jlist-element-tags
CC-MAIN-2017-47
en
refinedweb
Are you sure? This action might not be possible to undo. Are you sure you want to continue? university of virginia fall 2006 Early Interfaces GUI Design and Implementation cs205: engineering software Schedule design meetings this week 1 IBM 705 Univac 1956 cs205: engineering software 2 Sketchpad • Ivan Sutherland, 1963 (PhD thesis supervised by Claude Shannon) • Interactive drawing program • Light pen Birth of the GUI .” 3 cs205: engineering software Medal of Technology 2000 Douglas Engelbart, Augmenting Human Intellect (1962) 4 cs205: engineering software Computer as “Clerk”. Douglas Engelbart, Augmenting Human Intellect (1962) Engelbart’s Demo (1968) • First Mouse • Papers and folders • Videoconferencing • Email • Hypertext • Collaborative editing cs205: engineering software 5 cs205: engineering software 6 2006 9 cs205: engineering software 10 Designing GUIs • Requires lots of skill • Psychology.org/screenshots/lisaos10 cs205: engineering software OS X Leopard. Abstraction. Specification – Testing: especially hard • Unique-ish Aspects – Event-Driven (network programming also like this) – Multi-Threaded (network. 1973 cs205: engineering software 1983 7 cs205: engineering software 8 Lisa Interface Any real progress since then? Alto Apple Lisa Xerox PARC. others) – Huge APIs cs205: engineering software 11 cs205: engineering software 12 .guidebookgallery. Cognitive Science • User studies • Good taste Read Donald Norman’s and Ed Tufte’s books Building GUIs • Like all Programming – Encapsulation. SwingUtilities. frame.awt. in java.invokeLater(new Runnable() { public void run() { showGUI().awt.swing. shows this. hides this.awt.Frame javax.awt.awt. // inherited from java.2) real reason for Swing coming later. Otherwise.Model-View-Controller • Model: domain data and logic • View: presents model • Controller: receives input and alters model Goal: abstraction separate display from model separate control interface Invented at PARC in 1970s (Smalltalk) cs205: engineering software Java GUI Toolkits AWT Abstract Window Toolkit Looks like Java 13 cs205: engineering software Swing (added JDK 1.. display EFFECTS: If b. public class Main { private static void showGUI() { //Create and set up the window. } public static void main(String args[]) { javax.pack().setVisible(true).lang. } } Based on Sun’s Swing tutorials:: public Component add(Component c) MODIFIES: this EFFECTS: Appends c to the end of this container.Component java.. 15 cs205: engineering software Main windows are JFrame objects JFrame frame = new JFrame("Swing GUI").Container java.swing.swing. cs205: engineering software 17 18 . frame. 14 Frames java. } }).sun.Window java.html cs205: engineering software Adding to a Frame public java. Window Title 16 Swing Application import javax.JFrame cs205: engineering software JFrame Methods // inherited from java.Container getContentPane() EFFECTS: Returns the contentPane object for this.awt. JFrame frame = new JFrame("Swing GUI").Object java.com/docs/books/tutorial/uiswing/learn/example1.awt.Window public void pack() MODIFIES: this EFFECTS: Causes this Window to be sized to fit the preferred size and layouts of its subcomponents.*.Component public void setVisible(boolean b) MODIFIES: this.awt. . 1963 (major influence on Alan Kay inventing OOP in 1970s) cs205: engineering software cs205: engineering software 20 What can you add? Components in Sketchpad public Component add(Component c) Component Container Window Frame JFrame cs205: engineering software JComponent JLabel JPanel AbstractButton JButton . The motivation for making the functions as general as possible came from the desire to get as much result as possible from the programming effort involved. } public static void main(String args[]) { .add(new JButton ("Click Me")). Sketchpad: a Man-Machine Graphical Communication System. For example. Each of the general functions implemented in the Sketchpad system abstracts..What can you add? public Component add(Component c) Component Container Window Frame JFrame JComponent JLabel JPanel AbstractButton JButton GUIs and Subtyping In the process of making the Sketchpad system operate.swing. a few very general functions were developed which make no reference at all to the specific types of entities on which they operate... in some sense. These general functions give the Sketchpad system the ability to operate on a wide range of problems. public class Main { private static void showGUI() { //Create and set up the window. java.. frame. content. some common property of pictures independent of the specific subject matter of the pictures themselves. frame.setVisible(true). The rewards that come from implementing general functions are so great that the author has become reluctant to write any programs for specific jobs. content.awt.*. } } cs205: engineering software Layout // in Container: public void setLayout(LayoutManager mgr) MODIFIES: this EFFECTS: sets the layout manager to mgr for this container. What happened to “Yo!”? 23 cs205: engineering software 24 .getContentPane().and hundreds (?) more subtypes in API 22 21 cs205: engineering software Adding Components import javax. the general function for expanding instances makes it possible for Sketchpad to handle any fixed geometry subpicture.add(new JLabel ("Yo!")). .pack(). JFrame frame = new JFrame("Swing GUI")..and hundreds (?) more subtypes in API 19 Ivan Sutherland.Container content = frame. content. } . frame. content. JFrame frame = new JFrame("Swing GUI").setVisible(true).swing.LayoutManager Implementations LayoutManager (interface) FlowLayout BoxLayout BorderLayout Adding Components import javax..add(new JButton ("Click Me")).awt.*.setVisible(true).pack().getContentPane().add(new JLabel ("Yo!")). JFrame frame = new JFrame("Swing GUI"). content. frame. } Charge • GUI APIs are subtyping and inheritance paradises.*.*.getContentPane().swing.sun. import java.Container content = frame.lang.about 30 more in API!(new FlowLayout()).Container content = frame.setLayout(new FlowLayout()).awt.IllegalArgumentException: adding container's parent to itself Creating a simpler GUI requires more complex programming 27 cs205: engineering software cs205: engineering software 28 . java.*. content. import java.. and portability Exception in thread "AWT-EventQueue-0" java.pack(). public class Main { private static void showGUI() { //Create and set up the window.awt. java.html cs205: engineering software 25 cs205: engineering software 26 Don’t try this at home? import javax.awt.com/docs/books/tutorial/uiswing/layout/visual.add(frame). frame. public class Main { private static void showGUI() { //Create and set up the window. frame. content. concurrency morasses • GUI APIs are huge and complex – Java’s is especially complex because of AWT + Swing.
https://www.scribd.com/document/144830491/Lecture-35
CC-MAIN-2017-47
en
refinedweb
No dia 18 eu palestrei na semana global do empreendedorismo, la na Plug'n work. A idéia da palestra foi mostrar para os empreendedores que estão começando a desenvolver suas idéias uma maneira de desenvolver seus protótipos (ou até mesmo um MVP) utilizando Python, web2py, bootstrap e o browser. Além de apresentar Python e ressaltar sua facilidade, assim como todo o poder do web2py para este nicho de público, eu tive a intenção de focar em uma opinião pessoal que é a minha repulsa pelo termo "Sócio técnico" e como isso soa como enganação e é claro mostrar como qualquer empreendedor que saiba usar um computador e pelo menos tenha noção de estrutura de dados (ja tenha usado uma planilha excel) é capaz de desenvolver seu próprio protótipo utilizando o web2py. Pretendo melhorar este material e quem sabe transformar em um vídeo e tambem estou disponível para dar a mesma palestra em outros eventos, universidades etc.. Seguem os slides. Em Janeiro de 2014 darei um treinamento Python para empreendedores na Yacows Academy, o curso é 100% prático e voltado para pessoas que não sabem nada (ou sabem pouco) de programação e desejam desenvolver seu próprio web app. O único requisito e ter um notebook e saber o básico do uso de uma planilha eletronica tipo excel, google doc. Em breve mais informações aqui e no Yacows Academy Gravando logs de aplicativos web2py Como utilizar o módulo logging do Python em seus apps web2py. Este video é parte da aula 4 do cursodepython.com.br web2py and Redis Queue R microblog app microblog app Este tutorial foi criado para o evento RuPy Brasil em parceria com a ZNC Sistemas. O download do app pode ser feito em: Download pacote w2p O tutorial em PDF: Tutorial em PDF Tutorial: Criando um microblog app Agora pretendo aproveitar que o blog tem mais espaço que o PDF para detalhar um pouco mais o app de microblog e também implementar algumas funcionalidades extra. CONTINUE: Quick and dirty search form example Considering models/db.py status_options = {"0": "pending", "1": "confirmed", "3": "canceled"} db.define_table("orders", Field("id_buyer", "reference auth_user"), Field("order_date", "date"), Field("status", requires=IS_IN_SET(status_options), represent= lambda value, row: status_options[value] ), Field("obs","text") ) And the search function controllers/default.py import datetime @auth.requires_login() def index(): # default values to keep the form when submitted # if you do not want defaults set all below to None status_default = request.vars.status date_initial_default = \ datetime.datetime.strptime(request.vars.date_initial, "%Y-%m-%d") \ if request.vars.date_inicial else None date_final_default = \ datetime.datetime.strptime(request.vars.date_final, "%Y-%m-%d") \ if request.vars.date_final else None obs_default = request.vars.obs # The search form created with .factory form = SQLFORM.factory( Field("status", default=status_default requires=IS_EMPTY_OR( IS_IN_SET(status_options, zero="-- All --") ), ), Field("date_initial", "date", default=date_initial_default), Field("date_final", "date", default=date_final_default), Field("obs", default=obs_default), formstyle='divs', submit_button="Search", ) # The base query to fetch all orders of the current logged user query = db.orders.id_buyer == auth.user_id # testing if the form was accepted if form.process().accepted: # gathering form submitted values status = form.vars.status date_initial = form.vars.date_initial date_final = form.vars.date_final obs = form.vars.obs # more dynamic conditions in to query if status: query &= db.orders.status == status if date_initial: query &= db.orders.order_date >= date_initial if date_final: query &= db.orders.order_date <= date_final if obs: # A simple text search with %like% query &= db.orders.obs.like("%%%s%%" % obs) count = db(query).count() results = db(query).select(orderby=~db.orders.data) msg = T("%s registers" % count ) return dict(form=form, msg=msg, results=results) Optionally you can create a view file in views/default/index.html {{extend 'layout.html'}} {{=form}} <hr /> {{=msg}} {{=results}} the end result Download the app: If you need a better and complex search engine I recommend Whoosh. App news reading (portuguese) models/db.py db = DAL("sqlite://news.sqlite") db.define_table("noticias", Field("titulo"), Field("texto", "text"), Field("data", "datetime") ) controllers/default.py def escrever(): form = SQLFORM(db.noticias) if form.process().accepted: redirect(URL("listar")) return dict(form=form) def listar(): noticias = db(db.noticias).select(orderby=~db.noticias.data) return dict(noticias=noticias) def ler_noticia(): id_noticia = request.args(0) or redirect(URL("listar")) # caso não passe um id retorna para /listar noticia = db.noticias[id_noticia] return dict(noticia=noticia) views/default/listar.html {{extend "layout.html"}} <ul> {{for noticia in noticias:}} <li> <a href="{{=URL("default", "ler_noticia", args=noticia.id)}}"> {{=noticia.titulo}} </a> </li> {{pass}} </ul> views/default/ler_noticia.html {{extend "layout.html"}} <h1> <a href="{{=URL("default", "ler_noticia", args=noticia.id)}}"> {{=noticia.titulo}} </a> </h1> <p> {{=XML(noticia.texto)}} </p> views/default/escrever.html {{extend "layout.html"}} <h1> escreva uma noticia</h1> {{=form}} WEb2py 2.0 I am now a member of Python Software Foundation Now. web2py - manage users and membership in the same form As requested by user of Stack Overflow. How to manage users and memberships at the same form NOTE: You have to register the first admin user first, because to manage users and memberships we require to be admin For the purpose of the example we are going to use the file controllers/default.pyaccessible at the url localhost:8000/YOURAPP/default A grid to list your users this is the list admins will see when hit The user list grid 1 - Put on the default.py file #@auth.requires_membership("admin") # uncomment to enable security def list_users(): btn = lambda row: A("Edit", _href=URL('manage_user', args=row.auth_user.id)) db.auth_user.edit = Field.Virtual(btn) rows = db(db.auth_user).select() headers = ["ID", "Name", "Last Name", "Email", "Edit"] fields = ['id', 'first_name', 'last_name', "email", "edit"] table = TABLE(THEAD(TR(*[B(header) for header in headers])), TBODY(*[TR(*[TD(row[field]) for field in fields]) \ for row in rows])) table["_class"] = "table table-striped table-bordered table-condensed" return dict(table=table) With generic views will see this 2 - The edit links to manage_users Now accessing you are going to see the grid showing all users, now if you click on the edit link on grid it goes to manage_user function we defined on btn = lambda row: A("Edit", _href=URL('manage_user', args=row.auth_user.id)) Create this two functions in the same controller The user form #@auth.requires_membership("admin") # uncomment to enable security def manage_user(): user_id = request.args(0) or redirect(URL('list_users')) form = SQLFORM(db.auth_user, user_id).process() membership_panel = LOAD(request.controller, 'manage_membership.html', args=[user_id], ajax=True) return dict(form=form,membership_panel=membership_panel) On the above function we are going to create two objects form which is the form to edit the user object, also we create membership_panel which is an ajax panel to load the manage_membership inside it and ajax managed. note: that this function takes user_id from request.args(0) then if it is not provided it redirects back to the list_users The membership panel #@auth.requires_membership("admin") # uncomment to enable security def manage_membership(): user_id = request.args(0) or redirect(URL('list_users')) db.auth_membership.user_id.default = int(user_id) db.auth_membership.user_id.writable = False form = SQLFORM.grid(db.auth_membership.user_id == user_id, args=[user_id], searchable=False, deletable=False, details=False, selectable=False, csv=False, user_signature=False) return form note that on the manage_membership we are returning the form directly, so we can input it inside the ajax panel membership_panel The manage_user view 3 - Create an html file in YOURAPP/views/default/manage_user.html {{extend 'layout.html'}} <h4> Edit The user </h4> {{=form}} <hr> <h4> User membership </h4> {{=membership_panel}} The end result User Form Add membership Done, using web2py 2.0 (trunk) Lazy DAL - Attempt 3 - Pbreit based on Pbreit request On Wed, Aug 15, 2012 at 2:32 PM, pbreit wrote: What would it take to set it up such that models are defined in mostly the same way as now but in "module" files and then imports are done in controllers/functions that need access to the table. This file goes on modules/mymodels.py # -*- coding: utf-8 -*- from gluon.dal import DAL, Field from gluon import current DBURI = "sqlite://....." TABLE_DEFINITIONS = { "owners": { "fields": [Field("name")], "kwargs": dict(format="%(name)s") }, "cars": { "fields": [Field("name"), Field("owner", "reference owner")], "kwargs": dict(format="%(name)s") } } class Models(object): def __init__(self): self.db = DAL(DBURI) @property def tables(self): return self.db.tables def __call__(self, *args, **kwargs): return self.db(*args, **kwargs) def table_definer(self, tablename): if not tablename in self.db.tables: fields = TABLE_DEFINITIONS.get(tablename, {}).get('fields', []) kwargs = TABLE_DEFINITIONS.get(tablename, {}).get("kwargs", {}) return self.db.define_table(tablename, *fields, **kwargs) return self.db[tablename] def __getattr__(self, key): if hasattr(self.db, key): return getattr(self.db, key) elif key not in TABLE_DEFINITIONS.keys(): raise AttributeError("attr not found") else: return self.table_definer(key) Now in any controller controllers/default.py from mymodels import Models() db = Models() def list_owners(): rows = db(db.owners).select() return dict(rows=rows) I tested and works, the only caveat is that you are not going to use db.define_table but you will put your model definitions on TABLE_DEFINITIONS dict websockets com tornado, web2py, Python, jQuery Comet messaging with Tornado and web2py (in Portuguese)
http://brunorocha.org/python/web2py/
CC-MAIN-2017-47
en
refinedweb
In message <20010524233356.A13206@yellowpig> Bill Allombert <[email protected]> wrote: > the relevant code look like > > #if PARI_BYTE_ORDER == LITTLE_ENDIAN > # define INDEX0 1 > # define INDEX1 0 > #elif PARI_BYTE_ORDER == BIG_ENDIAN > # define INDEX0 0 > # define INDEX1 1 > #else > error... unknown machine > #endif [snip] > solution is correct. Despite ARM systems usually being small endian, the word ordering of double floating point values is as you'd expect on a big endian machine. This typically affects things like compilers, which need a fix similar to what you have here. Peter -- ------------------------------------------------------------------------ Peter Naulls - [email protected] RISC OS Projects Initiative - Java for RISC OS and ARM - Debian Linux on RiscPCs - ------------------------------------------------------------------------
https://lists.debian.org/debian-arm/2001/05/msg00040.html
CC-MAIN-2017-47
en
refinedweb
You need to add reference to following DLLs in your ASP.NET MVC4 project Ninject.dll Ninject.Web.Common.dll Ninject.Web.Mvc.dll Modify controller code to declare a read-only member variable of your service and modify the It will be invalid to create a ProductsController without providing an instance of IProductRepository. Also, I need to mention that I'm calling these actions through AJAX, not normal requests. Here's the HTML with Razor: @model Blog.Models.PeopleViewModel However, some existing frameworks rely primarily on XML mapping files to set up the bindings between types. Terms Privacy Security Status Help You can't perform that action at this time. Returning to the NinjectWebCommon static class, we only need to add one line to use our new dependency resolver. How to tar.gz many similar-size files into multiple archives with a size limit Connecting sino japanese verbs What did John Templeton mean when he said that the four most dangerous words The first one in to use the binaries from the Github download. Why put a warning sticker over the warning on this product? Manual Depdendency Injection1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 namespace ProductStore.Controllers { In cases where you really don't want to create an instance until you need it, you can inject a factory into the business logic. Since I can’t use the Ninject MVC3 package, I need to do something slightly different. In my previous post I focused on testability and the issues with having a Web API handler that returned an HttpResponseMessage. But with the RC build of MVC4, it no longer works because the MVC team changed the dependency resolution mechanism for Web API projects. static class Program { /// It throws up the "MissingMethodException: Cannot create an instance of an interface" error any time I attempt to inject an IRepository I'm googling around this whole evening; also I read many SO posts - no joy; I know I can't instantiate an interface directly. Next Steps I feel pretty good about the state of this project at this point. Bind(typeof (IFactory<>)).To(typeof (InjectionFactory<>)); Bind(typeof (IContext)).ToMethod(c => c.Request.ParentContext); share|improve this answer answered Feb 7 '12 at 18:51 StriplingWarrior 2,1921919 Please, do you have the full implementation for this factory? –Tebo I am interested in Ninject - but couldn't start by reading . Do I need to use DependencyResolver? This may look like the Service Locator anti-pattern, but it isn't because you still keep container usage at an absolute minimum." (Edit: Wait, you ARE Mark! Missingmethodexception: Cannot Create An Instance Of An Interface. Object Type There are cases where the framework you're using makes it practically impossible to use proper dependency injection, and sometimes we're forced to use a service locator, but I'd definitely try to Can it be made better? System.ArgumentNullException: Cannot be null Parameter name: root at Ninject.ResolutionExtensions.GetResolutionIterator(IResolutionRoot root, Type service, Func`2 constraint, IEnumerable`1 parameters, Boolean isOptional, Boolean isUnique) in c:\Projects\Ninject\ninject\src\Ninject\Syntax\ResolutionExtensions.cs:line 258 at Ninject.ResolutionExtensions.Get[T](IResolutionRoot root, IParameter[] parameters) in c:\Projects\Ninject\ninject\src\Ninject\Syntax\ResolutionExtensions.cs:line 37 weblink Application Lifecycle> Running a Business Sales / Marketing Collaboration / Beta Testing Work Issues Design and Architecture ASP.NET JavaScript C / C++ / MFC> ATL / WTL / STL Managed C++/CLI But, as a bonus, things also become a lot easier for using the IoC framework inside other frameworks that don't inherantly support it. Now they’re not. An easy calculus inequality that I can't prove How do I change thickness and color of \hline on a table simultaneously؟ Draw some mountain peaks Graph Chromatic Number Problem RaspberryPi serial just a thought –XtremeBytes Aug 25 '15 at 21:54 @TyCobb Nope. The Ninject.Web.Common package provided it with the code required to bootstrap the Ninject kernel. navigate here You don't need to use a DI Container (such as Ninject) for this, but you can. Join them; it only takes a minute: Sign up MissingMethodException [Cannot create an instance of an interface] thrown selectively by ASP.NET MVC controller up vote 1 down vote favorite I'm getting Sign In·Permalink My vote of 5 Shubha_India8-Jan-13 8:47 Shubha_India8-Jan-13 8:471 Good Clarification Sign In·Permalink Very useful article PraveenKumarReddyChinta3-Dec-12 4:10 PraveenKumarReddyChinta3-Dec-12 4:101 I search many articles regarding Dependency Injection in asp.net mvc4 But one thing I always look for in DI frameworks is that I don’t have to change the code itself very much. Probability of All Combinations of Given Events How do i upgrade my wall sconces How safe is 48V DC? You signed out in another tab or window. While working in an MVC3 application, I developed a custom Controller Creation Factory using NInject, so any controller that is created will have dependencies injected in it through this Controller Factory. How Ninject constructs your types for you Ninject doesn’t do anything crazy like rewriting all your assemblies and replacing all instances of new with a redirection to it – it simply So in my controller I am calling it as below: BackgroundJob.Enqueue(() => _myImportService.AddCars(cars)); return RedirectToAction("Index", "Home"); So this service takes a list of car objects my user has uploaded and passes but the code that adapts between it and ASP.NET Web API will probably have to be different. Strange, uh? –Bozhidar Stoinev Aug 25 '15 at 21:58 whats the structure for NonFederalWorkingDayDto? –XtremeBytes Aug 25 '15 at 21:59 add a comment| 1 Answer 1 active oldest votes Ticks disappears under the axis A perfect metro map What did John Templeton mean when he said that the four most dangerous words in investing are: ‘this time it’s different'? Is there something in particular that you're having issues with? –StriplingWarrior Dec 31 '15 at 16:13 I already implemented it, i just want to know if there were other How can a Cleric be proficient in warhammers? What does the Hindu religion think of apostasy? On verses, from major Hindu texts, similar in purport to those found in the Bhagawat Gita How can I declare independence from the United States and start my own micro nation? Why is this C++ code faster than my hand-written assembly for testing the Collatz conjecture? share|improve this answer answered Oct 16 '13 at 20:52 Dan Csharpster 795620 2 Excactly the same mistake I did :) –BoKDamgaard Apr 30 '14 at 12:12 Me too, Sign In·Permalink My vote of 1 Member 987805020-Aug-14 0:19 Member 987805020-Aug-14 0:191 Not enough info about using it with webapi Sign In·Permalink My vote of 1 prageeth.madhu2-Jul-14 8:21 prageeth.madhu2-Jul-14 8:211 I return _kernel.TryGet(serviceType); 102. } 103. 104. Was there no tax before 1913 in the United States? BundleConfig.RegisterBundles(BundleTable.Bundles); 32. } 33. } 34.} Override OnApplicationStarted to Register Filters, routes and bundles etc. (standard ASP.NET MVC4 stuff which we used to do in Application_Start event when not using ninject) I can successfully create new objects by invoking Create() right now. i.e. About Me Peter Provost is a life-long-learner, hacker, maker, agilista, musician and heavy metal fan. Thanks. If necessary, you can override that by adding a binding of your own of the form: Bind
http://hiflytech.com/cannot-create/cannot-create-an-instance-of-an-interface-ninject.html
CC-MAIN-2017-47
en
refinedweb
Cluster provide a simple PowerShell script to enumerate the current list of Cluster Shared Volumes (CSV) mapping to the physical disk on which the volume resides. Failover Cluster Manager provides the below information about where the CSV is mounted in the local file system of each node of the cluster. Figure 1 Disk Management displays the list of Physical Disks but does not contain information about where the disk is mounted as a CSV. Figure 2 The attached script will enumerate the CSV resources in the cluster and map them to the actual Physical Disk. Sample script output: Cluster Shared Volumes mapped to Physical Disks =============================================== Name CSVPath PhysicalDisk —- ——- ———— Cluster Disk 3 C:\ClusterStorage\Accnt… Physical Disk 4 Cluster Disk 4 C:\ClusterStorage\Volume1 Physical Disk 5 Sample script: # Import-Module FailoverClusters $objs = @() $csvs = Get-ClusterSharedVolume $n = 1 Echo "Cluster Shared Volumes mapped to Physical Disks" > C:\Windows\Cluster\Reports\CSVtoDiskMap.txt Echo =============================================== >> C:\Windows\Cluster\Reports\CSVtoDiskMap.txt Echo `n"Collecting cluster resource information…" foreach ( $csv in $csvs ) { Echo "Processing Cluster Shared Volume $n" $Signature = ( $csv | Get-ClusterParameter DiskSignature ).Value.substring(2) $obj = New-Object PSObject -Property @{ Name = $csv.Name CSVPath = ( $csv | select -Property Name -ExpandProperty SharedVolumeInfo).FriendlyVolumeName PhysicalDisk = ( Get-WmiObject Win32_DiskDrive | Where { "{0:x}" -f $_.Signature -eq $Signature } ).DeviceID.substring(4) } ($obj).PhysicalDisk = ($obj).PhysicalDisk -Replace "PHYSICALDRIVE", "Physical Disk " $objs += $obj $n++ } Echo `n"Output file: C:\Windows\Cluster\Reports\CSVtoDiskMap.txt"`n $objs | FT Name, CSVPath, PhysicalDisk >> C:\Windows\Cluster\Reports\CSVtoDiskMap.txt notepad C:\Windows\Cluster\Reports\CSVtoDiskMap.txt Feel free to take this “as is” and modify it as you need. Play with the output… I am still tweaking to get a more pleasing output myself. How to collect additional information using PowerShell: PowerShell for Failover Clustering: CSV Free Disk Space (and other info) Chris Allen Senior Support Escalation Engineer Microsoft Enterprise Platforms Support Join the conversationAdd Comment Hi Chris, I’ve been looking for this.. can u post this using C# and WMI or atleast May I know how to embed the code and results to a object variable using SSIS Is there anyway of doing this for GPT disks? The signature property is blank for GPT disks. I've been banging my head on my desk for days now Do you have script to map cluster disk to physical disk? Is there anyway of doing this for GPT disks? The signature property is blank for GPT disks. I've been banging my head on my desk for days now Hi. Did You find script for GPT disks? TIA For GPT disks use "MSCluster_ResourceToDisk" to map cluster resources to GPT volumes then use "MSFT_Disk" from in namespace rootMicrosoftWindowsStorage to map a GPT Guid to physical volume.
https://blogs.technet.microsoft.com/askcore/2011/03/22/mapping-a-cluster-shared-volume-to-the-physical-disk/
CC-MAIN-2017-47
en
refinedweb
MVEs are more concerned with the satisfaction of those they help than with the considerable points they can earn. They are the types of people you feel privileged to call colleagues. Join us in honoring this amazing group of Experts. from math import floor import sys s = sys.argv[1] middle = int(floor(len(s)/2)) for i in range(middle): print s[i], if len(s) % 2 == 1: print s[i+1], for i in range(middle): print s[-i + middle - 1], def mirror(s): s = s[:len(s)//2] # take the first half of the string return s + ''.join(reversed(s)) # concatenate with the reversed string print mirror('COMPUTER') print mirror('PYTHON') If you are experiencing a similar issue, please ask a related question Join the community of 500,000 technology professionals and ask your questions. Connect with top rated Experts 11 Experts available now in Live!
https://www.experts-exchange.com/questions/26839937/How-do-I-mirror-text-in-Python.html
CC-MAIN-2017-09
en
refinedweb
input_line() Get a string of characters from a file Synopsis: #include <stdio.h> char* input_line( FILE* fp, char* buf, int bufsize ); extern int _input_line_max; Since: BlackBerry 10.0.0 Arguments: - fp - The file that you want to read from. - buf - A pointer to a buffer where the function can store the string that it reads. - bufsize - The size of the buffer, in bytes. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. This function is in libc.a, but not in libc.so (in order to save space). Description: The input_line() function gets a string of characters from the file designated by fp and stores them in the array pointed to by buf. The input_line() function stops reading characters when: - end-of-file is reached - a newline character is read - bufsize - 1 characters have been read.. Returns: A pointer to the input line. On end-of-file or on encountering an error reading from fp, NULL is returned and errno is set. Examples: ; } Last modified: 2014-06-24 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/i/input_line.html
CC-MAIN-2017-09
en
refinedweb
Why no CPANs? How come almost no other programming languages have anything comparable to Perl's CPAN? I'm not just talking about a small repository of scripts, I'm talking about a central, searchable, categorized repository of libraries scripts to do all sorts of common tasks, complete with documentation, unit testing, bug tracking, and ratings/reviews. I've never ever seen anyone try to dispute the usefulness of CPAN, and everyone always puts CPAN and its modules towards the top of Perl's features list. So why is CPAN unique? Do programmers secretly *like* reinventing wheels over and over again? The Baja revolution. Friday, April 09, 2004 You're right CPAN rocks! Someone tried to make a cross language one once, but it never took off: CPAN is probably a result of the "get the job done" and extreme open source attitude of perl hackers. Matthew Lock Friday, April 09, 2004 How come almost no other programming languages have anything comparable to Perl's CPAN? Python has something similar. Java ones are all just a bunch of JAR files in SourceForge so what's the point? Li-fan Chen Friday, April 09, 2004 Few but ripe. Visual Gauss++ Friday, April 09, 2004 Li Fan- There's lots of things similar, but IME none compare to CPAN. I used to be amazed when I needed a specialized piece of software and found a module on CPAN. It's happened so many times now though, that I'm no longer amazed... I've come to expect it. Seriously, next time you need a specialized piece of software, just give CPAN a search. You might be pleasantly surprised! Friday, April 09, 2004 Which came first, CPAN, online since 1995-10-26 or CTAN, (online since ??)? mackinac Friday, April 09, 2004 For the record, CTAN came first--that's where the idea for CPAN came from. Since Perl's namespace system of hierarchal namespaces within namespaces (e.g., Net::FTP, Net::FTP::Common) facilitates this plethora of reusable Perl code, I guess another question would be why do language designers still resist this feature to this day (php, for example, had namespaces but they were removed from the language not long ago)? Why do so many languages either not support namesapces or only half-support them (like C++)? Do people like prefixing their identifiers with short, hopefully unique module names to avoid collisions (e.g., XML_ParseDocument() instead of just ParseDocument())? The Baja Revolution Friday, April 09, 2004 Cause name spaces add very little for the extra complexity. son of parnas Saturday, April 10, 2004 Some languages allow you to provide a local namespace for an imported module, e.g. Python's "import xxx as yyy". This seems to be the best of both worlds. Tom H Saturday, April 10, 2004 When you're talking about CPAN you may actually be talking about one or more of three distinct entities: 1) CPAN the website: The primary website is and all its links within to testers.cpan.org, rt.cpan.org and cpanratings.perl.org. (There are other sites, like .). This is what most people not so familiar with Perl and CPAN think about, but it really only scratches the surface. 2) CPAN the shell: This is a framework for installing modules, generally represented by the CPAN or CPANPLUS shells. The shell will find and install dependencies for you, fire off test results to the module authors if something fails, and a number of other tasks. This makes it possible for a large program (like the application server OpenInteract) to declare its dependencies and have CPAN install them as needed and is what enables the loosely coupled modules in CPAN. 3) CPAN the distribution network: When you want to contribute to CPAN you're given a PAUSE id -- Perl Authors Upload Server. To contribute a new module you go to the PAUSE website ( ), login, and upload your module. (You can also of course script this.) That's it. Your module is then copied to hundreds of mirrors around the world, appears on 'new on CPAN' lists like (also in RSS at ), and most importantly, is available for people to install via the CPAN shell or other tools that use it. There are lots of tools that people have built to examine and support CPAN-the-infrastructure: what is the freshness of CPAN (new stuff vs old stuff)? how many modules have been created this year vs previous years? what modules are most declared in dependencies? and so on... In the last couple of years the CPAN ratings system appeared ( ) as did the CPAN-wide issue tracking system RT ( ), automated reports from the CPAN testers community ( ), and the CPAN tools (available from any module on search.cpan.org) which allow you to diff all files in a module to one of its previous versions. What people who aren't familiar with CPAN don't get is that it's not just a repository. It's really the infrastructure that glues the Perl community together. This glue not only makes it possible for anyone around the world to contribute but also to painlessly build new modules that take advantage of work other people have done. Me? I'm at: Chris Winters Saturday, April 10, 2004 There was some discussion on comp lang python a while ago about setting something similar up for python, but nothing came of it. I can't remember why, as I wasn't paying attention to that discussion, but it probably had something to do with it being better to have only one way to do things... :) But I suspect it primarily comes down to size of user base. I bet more people use perl than use python, ruby and tcl put together. Insert half smiley here. Saturday, April 10, 2004 Several possible reasons, off the top of my head... Age of the language (Perl is a bit older than the upstarts); so more time to get things organized and more time for the repository of code to grow. Philosophy behind the language. Some languages (sad, but true) don't encourage reusing and sharing code the way Perl does. Flame me if you must, but there are lots more Java programmers but far fewer pieces of shared code. Most Java programmers aren't into this whole "contribute code back to the community" thing. Not criticizing, but I think it is true. Jakarta is a notable exception. Just inertia on the part of the other languages. Like it or not, organizing something like CPAN takes a lot of time and effort. You need server resources (that won't go away in a hurry), you need people to think about and setup hierarchies of code and most important, you need a fair amount of code to get started.. or it wouldn't become a valued resource. Other languages have tried and their efforts are coming up to speed.. but it takes constant effort (CJAN ? anyone know where that went ? ) Automation and making it easy to build modules in the first place.. Module::Build and other modules generally make it much easier to contribute modules, even for someone who has never done it before. deja vu Saturday, April 10, 2004 Recent Topics Fog Creek Home
http://discuss.fogcreek.com/joelonsoftware4/default.asp?cmd=show&ixPost=131438&ixReplies=11
CC-MAIN-2017-09
en
refinedweb
Dear diary, on Wed, Nov 06, 2002 at 10:43:12PM CET, I got a letter,where "Sartorelli, Kevin" <[email protected]> told me, that...>> > I haven't seen anything on the list about this so am assuming that I'm either alone in trying this, or there is something not quite right with my configuration.> > Any thoughts anyone (I know the one about using gcc 2.95.3 ;- ) )Looks as a bug in gcc 3.3 preprocessor to me, or I have hallucinations whenstaring at the #endif at line 391.-- Petr "Pasky" Baudis
http://lkml.org/lkml/2002/11/6/291
CC-MAIN-2017-09
en
refinedweb
27 January 2010 By clicking Submit, you accept the Adobe Terms of Use. This article assumes a familiarity with ActionScript 3. All As the Flash Platform continues to proliferate and reach more devices, developers need to adopt techniques for authoring with multiple screen sizes and resolutions in mind. This article discusses several techniques to help Flash developers author content that will render properly on any device, regardless of its screen resolution and pixel density. The techniques explored in this article are somewhat "low-level" in that they show the programmatic creation of vectors and the use of algorithms (albeit simple ones) to dynamically size and position assets. There will always be a need for this level of authoring control for some applications, but there will also be "higher-level" and simpler alternatives in the future. Adobe is currently working on a mobile Flex framework (codenamed "Slider"), which will automatically apply some of what's discussed here, and will make it much easier for you to write applications that adapt to different screens. Until Slider is available, however—and for those whose applications might not fit into the framework model—the tips and tricks discussed in this article will help to jumpstart your multi-screen development. Before exploring specific techniques for authoring SWF-based applications for multiple screen sizes, it's worth covering some relevant terminology. Although you are probably familiar with the general meaning of these terms, a thorough understanding is necessary to actually put them to use: The goal of a multi-screen application is not necessarily to look identical on every device; rather, it should adapt to any device it's installed on. In other words, multi-screen applications should dynamically adjust to the resolution and the PPI of their host devices, displaying more information on larger screens, removing or shrinking elements on smaller screens, ensuring buttons are physically large enough to tap on, and so on. In order for applications to work across different screens, they must be architected in such a way that they draw and redraw themselves at the proper times and using the proper constraints. Before laying out your application, it's important that you set the Stage's scale mode and alignment. This should be done in your Sprite's constructor, just before or after registering for the Stage resize event (more on this below): this.stage.scaleMode = StageScaleMode.NO_SCALE; this.stage.align = StageAlign.TOP_LEFT; Setting the Stage's scale mode to NO_SCALE indicates that you don't want any kind of automatic scaling or layout of your content to occur, and that you will handle all the layout and scaling yourself. This is what enables applications to dynamically adapt themselves to different screen sizes. Setting the Stage's align property to TOP_LEFT indicates that you want to lay content out relative to the top left-hand corner with the coordinates of 0,0. The best place to do rendering in a multi-screen application is in a Stage resize event handler. The Stage will dispatch a resize event when the application is initialized and the size of the Stage (the area your application has to work with) is set. In a pure ActionScript application, you will want to listen for a Stage resize event in your main Sprite's constructor, like this: this.stage.addEventListener(Event.RESIZE, doLayout); After registering for resize events on the Stage, doLayout will get called whenever the Stage is resized. For example: By performing your layout in the Stage resize event handler, your application will automatically lay itself out whenever the size of the Stage changes, regardless of why it changes. Note:To determine the size of the Stage, use the stage.stageWidth and stage.stageHeight properties. It is not necessary to set the dimensions of your SWF file using SWF metadata. In fact, doing so may prevent your resize event handler from being called when the application initializes. It's best to set the width and height of your application in the initial window section of your application descriptor file like this: <initialWindow> <width>320</width> <height>480</height> <!-- several other properties... --> </initialWindow> Applications designed to run on devices with different screen sizes and resolutions will often need to determine the size of assets dynamically. In other words, a button that looks and works perfectly on one device might be far too small to read or tap on devices with higher resolutions. Consequently, it's important that developers know how to think in terms of both pixels and inches. In order to add a solid-colored background to an application, you need only think in terms of pixels. For example, it doesn't matter how big or small the screen is—the background will always need to match the screen's dimensions in pixels. The code below shows adding a solid-colored background to an application of any size: var bg:Sprite = new Sprite(); bg.x = 0; bg.y = 0; bg.graphics.beginFill(0x006E59); bg.graphics.drawRect(0, 0, this.stage.stageWidth, this.stage.stageHeight); this.addChild(bg); The stageWidth and stageHeight properties on the Stage object indicate the dimensions in pixels of the content's Stage. This information is all you need to create a background that works with any size application on any size device. Sizing assets in pixels works in cases where the assets can be sized relatively (as in the case of a background), but not when assets need to be sized absolutely. In other words, it doesn't matter how big or small a background is as long as it's the size of the entire Stage; however, the size of things like fonts and buttons needs to be controlled more precisely. That's when you have to think in terms of physical units, or PPI. Using PPI to determine an asset's dimensions allows you to control the exact size of an asset regardless of what kind of screen it's being rendered on. For example, to make a button that is always ¾" × ¼" whether it's being rendered on a huge desktop monitor or a small mobile screen, you must use the screen's PPI. Note: Research has shown that a hit target should be no smaller than ¼", or 7mm, in order to be hit consistently and reliably. The only way to make sure your buttons are usable across devices is to think in terms of physical units. The PPI of the current screen can be determined by the Capabilities.screenDPI property. Of course, assets are always ultimately sized in pixels rather than inches, so it's necessary to convert PPI into pixels. I use a simple utility function like this: /** * Convert inches to pixels. */ private function inchesToPixels(inches:Number):uint { return Math.round(Capabilities.screenDPI * inches); } The code below demonstrates how to create a sprite that will appear as ¾" × ¼" on any device: var button:Sprite = new Sprite(); button.x = 20; button.y = 20; button.graphics.beginFill(0x 003037); button.graphics.drawRect(0, 0, this.inchesToPixels(.75), this.inchesToPixels(.25)); button.graphics.endFill(); this.addChild(button); So as not to be too American-centric, and because the metric system is superior at smaller scales, here's a version for converting millimeters to pixels: /** * Convert millimeters to pixels. */ private function mmToPixels(mm:Number):uint { return Math.round(Capabilities.screenDPI * (mm / 25.4)); } Now that your application is architected in such a way that it can be authored to adapt to multiple screen sizes, and now that you have techniques for determining asset sizes, it's time to start laying out assets. The key to laying out assets capable of adapting to different screen sizes is to know what to hard-code and what to calculate based on properties of the current screen. For example, to create a title bar at the top of your application, you know that you want the x and y coordinates to be 0,0 which means those properties can be hard-coded. In other words, regardless of what kind of device your application is running on, you will always want your title bar positioned in the top-left corner. The following code shows creating a new Sprite to be used as a title bar, and hard-coding its position: var titleBar:Sprite = new Sprite(); titleBar.x = 0; titleBar.y = 0; Although the position of the title bar won't change from one device to another, its width will. On high-resolution devices, the width needs to be greater. Determining the width of the title bar is as easy as using the stage.stageWidth property, but what about the height? You could hard-code the height in pixels, but the size of it will change pretty dramatically from device to device depending on resolution. In this case, a better approach is to think in terms of physical units which will give your title bar a consistent look across all devices. The following code creates a title bar that demonstrates all of the following concepts: stage.stageWidthto dynamically determine the width of the title bar in pixels var titleBar:Sprite = new Sprite(); titleBar.x = 0; titleBar.y = 0; titleBar.graphics.beginFill(0x003037); titleBar.graphics.drawRect(0, 0, this.stage.stageWidth, this.inchesToPixels(.3)); titleBar.graphics.endFill(); this.addChild(titleBar); The examples above demonstrate how to dynamically size assets, but what about dynamically positioning them? For example, the position of the title bar is obvious since it always originates from the top-leftcorner, but what about positioning a footer whose x coordinate is always 0 but whose y coordinate is determined by the height of the Stage? The code below shows how to create a footer that will always span the entire width of the application, and always be positioned at the bottom, regardless of the height of the screen: var footer:Sprite = new Sprite(); footer.graphics.beginFill(0x003037); footer.graphics.drawRect(0, 0, this.stage.stageWidth, this.inchesToPixels(.3)); footer.graphics.endFill(); footer.x = 0; footer.y = this.stage.stageHeight - footer.height; this.addChild(footer); There are three primary ways to lay out assets (two of which I've already covered): Calculating the position of an asset based on another asset is referred to as relative positioning, and it's an extremely important technique for designing multi-screen applications. Going back to the title bar example, we succeeded in creating a title bar that will always be positioned and rendered like you want it to, but what about the title itself? You could always hard-code a y position, which would place it a few pixels down from the top, then calculate an x coordinate based on the width of the Stage and the width of the title, but that won't always yield the best results—for two reasons: Both of these issues can be addressed by positioning your title relative to your title bar. Since this is something I find myself doing often, I have a simple utility function that does it for me: /** * Center one DisplayObject relative to another. */ private function center(foreground:DisplayObject, background:DisplayObject):void { foreground.x = (background.width / 2) - (foreground.width / 2); foreground.y = (background.height / 2) + (foreground.height / 2); } Using the center() function above, positioning my title is simple: var title:SimpleLabel = new SimpleLabel("My Application", "bold", 0xffffff, "_sans", this.inchesToPixels(.15)); this.center(title, titleBar); this.addChild(title); Dynamically sizing and laying out things like title bars is one thing, but actual application content can be more challenging. For example, consider a game whose main content is a grid of squares. What's the best technique for making the game playable on multiple devices? Should the squares simply be scaled up or down depending on screen size, or should rows and columns be added or removed? Both are valid approaches, depending on the game. For example, in the case of a chess or checkers game, you can't add or remove rows or columns based on the size of the screen. In this case, it's usually best just to scale your content up or down in order to keep it consistent. Some games can actually adapt their game play based on the size of the screen. For example, a real-time strategy game may be enhanced on a larger screen since higher resolutions can accommodate more tiles, or in the case of smaller screens, it may be best to remove tiles so that the remaining tiles can be larger and render more detail. In this case, you need your content to adapt. There's no single strategy or formula for adapting content to various screen sizes since content is so diverse, but there are some standard techniques that can be used. The following describes the logic of adapting a game board to any size screen: The code below demonstrates the logic of laying as many ¼" blocks as possible in the allotted space while maintaining an equal margin both above and below the game board: // Display as many blocks on the screen as will fit var BLOCK_SIZE:Number = .25; var BLOCK_BUFFER:uint = 3; var blockSize:uint = this.inchesToPixels(BLOCK_SIZE); var blockTotal:uint = blockSize + BLOCK_BUFFER; var cols:uint = Math.floor(this.stage.stageWidth / blockTotal); var rows:uint = Math.floor((this.stage.stageHeight - titleBar.height) / blockTotal); var blockXStart:uint = (this.stage.stageWidth - ((cols * blockSize) + ((cols - 1) * BLOCK_BUFFER))) / 2; var blockX:uint = blockXStart; var blockY:uint = ((this.stage.stageHeight + titleBar.height) - ((rows * blockSize) + ((rows - 1) * BLOCK_BUFFER))) / 2; for (var colIndex:uint = 0; colIndex < rows; ++colIndex) { for (var rowIndex:uint = 0; rowIndex < cols; ++rowIndex) { // Use a private function to draw the block var block:Sprite = this.getBlock(blockSize); block.x = blockX; block.y = blockY; this.addChild(block); blockX += blockTotal; } blockY += blockTotal; blockX = blockXStart; } } The code below is the function that generates each block: /** * Get a new block to add to the game board */ private function getBlock(blockSize:uint):Sprite { var block:Sprite = new Sprite(); block.graphics.beginFill(0xAAC228); block.graphics.drawRect(0, 0, blockSize, blockSize); block.graphics.endFill(); block.cacheAsBitmap = true; return block; } Note: As each block is created, its cacheAsBitmap property is set to true. Although mobile application optimization is beyond the scope of this article, it's always best to set the cacheAsBitmap property to true for DisplayObjects that you don't anticipate will need to be scaled or rotated frequently. Although this improves performance on the desktop, it can have dramatic results on devices with less powerful processors. Fonts are a key element of almost all applications, and must be handled with the same care as other assets when designing for multiple screens. Below are three tips for using fonts in a way that will successfully adapt across devices: TextFieldobjects, but FTE gives you the ability to position your text much more precisely. The properties of TextLinelike ascent, descent, textWidth, and textHeightmake it possible to position text with pixel-perfect accuracy. SimpleLabelwhich encapsulates my use of FTE and makes creating text far simpler. Not only do I save several lines of code everyplace I want to add some text, but I also have one central location where I can make universal text changes, or fix text-related bugs. DisplayObjectobject's widthand heightproperties in order to make text fields work better with some of my utilities like the center()function above. Following are the widthand heightgetters that give me the best results: public override function get width():Number { return this.textLine.textWidth; } public override function get height():Number { return (this.textLine.ascent - 1); } As the Flash Platform proliferates, so do opportunities for Flash developers. The ability to use the same tools, skills, and code to build applications across an increasing array of diverse devices is hugely powerful, and gives Flash developers the chance to reach an unprecedented number of users. In order to take advantage of the ubiquity of the Flash Platform, however, developers need to build applications with multiple screens in mind. Fortunately, the understanding and mastery of just a few relatively simple techniques give developers the tools they need to make the most of the Flash Platform.
http://www.adobe.com/devnet/flash/articles/authoring_for_multiple_screen_sizes.html
CC-MAIN-2017-09
en
refinedweb
Hi there, What I am trying to do is implement my own sequence number program. I have a sequence number i.e 0001 and then I try to convert this to a binary string in byte format of 4 bytes: 00 01 00 00 And then I am appending this in the middle of a byte payload that I have created, which is just a group of numbers. ie choose 12345678 which got converted to something like 32 33 34 35 etc for byte format.. I then want to extract the sequence number portion, which is bytes 4-8 of the 8 byte payload. This is where I am stuck and not sure how to progress. ie my payload looks like 32 33 34 35 00 01 00 00 and I want to extract 00 01 00 00 from this and then convert this back to my original integer. I am trying to use the substr command but am not just trying to extract data from a string, rather a string of bytes. I am just extracting the sequence number part of the last calculated sequence number for now to try and get it working. Here is my code, any help would be great, Many thanks! #include <stdio.h> #include <iostream> #include <string> using std::string; #include "hex.h" unsigned char temp[4]; unsigned int seq; bool flag = true; unsigned int currentseq; unsigned int previousseq = 0; int testing =2; int main() { byte payload[8] = {"1234567"}; for (int i=0; i<10; i++) { if (flag == true) { seq = 0000; flag = false; } seq=seq++; printf("Sender sequence number = %0004x", seq); printf("\n"); // BYTE CONVERSION for( int i=0;i<4;i++) temp[i]= '\0'; memcpy(temp,&seq,4); printf("TEMP = %02x %02x %02x %02x \n", temp[0], temp[1], temp[2], temp[3]); payload[4] = temp[0]; payload[5] = temp[1]; payload[6] = temp[2]; payload[7] = temp[3]; printf ("payload[4] = %02x \n", temp[0]); printf("\n"); } [b] //this is where my problems are, am trying to extract by sequence number in byte format from the binary string, but not sure how string str = payload; const char *p = str.substr(2,4).c_str(); std::cout << "\n" << p << "\n";[/b] }
https://www.daniweb.com/programming/software-development/threads/130222/trying-to-use-substr-to-extract-data-from-a-string-of-bytes-help-would-be-great
CC-MAIN-2017-09
en
refinedweb
Advances in client-side technologies are driving changes in the role of today’s web application server. Android, iOS and purely browser-based clients are powerful computing platforms in their own right. Their success has led to a reevaluation of ‘classic’ web applications. MVC frameworks, such as AngularJS, have been developed that enable client-side developers to build powerful and compelling UIs. AngularJS is a popular open source JavaScript client-side MVC framework supporting the rapid development of client-side applications. By implementing a majority of the MVC functionality at client-side, it reduces the complexity of the server and results in applications composed around well-defined APIs that should be more maintainable and applicable. lets. Challenge A recent and significant change in web application frameworks has been the shift from Model-View-Controller (MVC) on the server to the MVC on the client. Advancements in client-side technology are driving this change. The result, however, is the challenge of building elegant architect applications that span multiple devices hosted in the cloud. Android, iOS and browser-based clients are rich development platforms and each is able to run a fully-fledged MVC framework. Developing the user interfaces (using MVC) on the client is a more natural architecture that better leverages client developers’ skills. The growing popularity of client-side MVC frameworks such as Backbone.js and AngularJS is representative of this shift to the client. These frameworks help reduce complexity and increase reuse in even the simplest of applications. As the UI moves to the client, the server becomes simpler. Server-side developers need not worry about how to construct HTML pages using templates to generate the dynamic content. Rather, they can focus on implementing business logic and data persistence using technologies such as App Engine Datastore access and OAuth2 authentication and then use a framework to expose services to their applications’ clients. The question that remains is how to design and implement these modern web applications. This next section provides some guidance for web application developers and architects wanting to learn how to incorporate browser-based clients into an existing service-based architecture. Overview: Client-side MVC + Google Cloud Endpoints Google Cloud Endpoints is the answer for this question: a feature of Google App Engine that provides a RPC framework: The flow is described in more detail below. Using Cloud Endpoints, developers expose the public methods of any class as a service endpoint with the addition of simple Java annotations. For example, imagine a simple guestbook application that has a single class “GuestbookEndpointV1” with two methods insert() to add an entry new messages on the guestbook and list() to list them. In order to expose the class as a service endpoint, put the @Api annotation on the class definition: @Api(name = "guestbook") public class GuestbookEndpointV1 { ... } This code would be deployed on the Google App Engine server as shown in Figure 1 above. You will also need to put @ApiMethod annotations on any methods you want to publish to the client. @ApiMethod(name = "messages.insert") public void insert(Message message) { // store a Message on Datastore Entity e = new Entity("Message"); e.setProperty("createdAt", message.getCreatedAt()); e.setProperty("createdBy", message.getCreatedBy()); e.setProperty("content", message.getContent()); datastore.put(e); } The insert() method receives a Java object, message, and comes with properties like createdAt (the creation timestamp), createdBy (the owner of the message) and content. Then it creates a Datastore entity “e” with those property values and saves it onto Google App Engine Datastore. These service endpoints are automatically exposed via REST API and you can use Google APIs Explorer to view the service endpoints. It is also good to issue test requests to confirm the responses in JSON format. In the following screen, the guestbook.messages.insert results directly from the annotations to the Java class. Combining AngularJS with Cloud Endpoints The next step addresses the integration of Cloud Endpoints with client-side MVC frameworks such as AngularJS. Such a system would have an architecture similar to the one described in Figure 4: Although the integration between two technologies is straightforward, there are some pitfalls and caveats you should know about before starting your coding. So, let’s take a closer look at the left side of the diagram above (Fig. 4) – the actual JavaScript code that integrates the client-side MVC with the endpoints shown in the right side of the diagram. As an example, let’s consider creating an AngularJS-based web UI for the Guestbook service endpoint we have just defined. The following diagram is a screenshot of how the UI will appear: Creating a View and Model The message form can be created using an ordinary AngularJS form that calls the controller’s insert() method. The form specifies two model properties, createdBy and content, as shown in the following script: <h2>Guestbook</h2> <form ng- <input type="text" ng-<br> <input type="text" ng-<br> <input type="submit" class="btn" value="Post"> </form> The following code demonstrates how you can use an AngularJS iterator to iterate over the messages that will be returned to the client in response to a call to the “list” method on the endpoint: <ul> <li ng- {{message.createdAt|date:'short'}} {{message.createdBy}}: {{message.content}} </ul> Creating a Controller The controller’s insert function (set to the $scope.insert variable) corresponds to the insert method of the endpoint we defined. It builds an object that contains the message content and createdAt/createdBy properties copied from the model properties, and then calls the insert method of the endpoint. This is demonstrated in the following JavaScript fragment: function GuestbookCtrl($scope) { $scope.insert= function() { message = { "createdAt" : new Date(), "createdBy" : $scope.createdBy, "content" : $scope.content }; ... Next, you can send the object by passing it to the insert function of the Cloud Endpoint client library. Remember, this is one of the two methods exposed as a Cloud Endpoint with the addition of annotations to the GuestbookEndpointV1 class. gapi.client.guestbook.messages. insert(message).execute(); In the same way, you can call the service endpoint’s list() method to retrieve messages on the guestbook: $scope.list = function() { gapi.client.guestbook.messages.list().execute(function(resp) { $scope.messages = resp.items; $scope.$apply(); }); } Please note that the anonymous function passed as a callback to the execute() method is called whenever it receives messages from the server,and these would be assigned to the model’s messages property. You will need to call $apply() to apply the model change to the UI, since the callback function is called from outside of the controller thread. Additionally, the Cloud Endpoints client library does not support the Promise API of AngularJS for describing this kind of asynchronous processing. A tricky aspect when integrating AngularJS with Cloud Endpoints is the initialization sequence. It’s important to know the sequence in which the libraries will be loaded and initialized. If you do not pay attention to the sequence of loading required libraries and their initialization, you could create problems that will take a long time to debug. See the Appendix for further details. It should be clear from the examples presented that AngularJS and Cloud Endpoints enable quite a straight forward design pattern that combines client-side MVC and server-side service endpoints. In fact, you will find that it is easier and simpler to use Cloud Endpoints for RPC than it is to implement the XHR and Dependency Injection in AngularJS for server communication. Cloud Endpoints also provides many other benefits including integrated OAuth2 security and multi-client platform support for Android and iOS using a shared, standardized client API. Conclusion To conclude, here is a summary of some of the key benefits of AngularJS + Cloud Endpoints that were presented in this paper: - Cloud Endpoints makes it easy to expose a server-side API by adding annotations to classes and methods to generate client libraries supporting JavaScript. - Cloud Endpoints encapsulate the plumbing for OAuth2 authentication, URI definition and request routing, JSON serialization, and RPC with graceful error handling and more. These tasks can be quite non-trivial when implementing them without Cloud Endpoints. - The AngularJS JavaScript based client-side MVC framework supports the rapid development of client-side applications. Rich user experiences can be easily implemented by HTML5 and JavaScript. This eliminates the need for server-side HTML rendering. - Incorporating the Cloud Endpoints JavaScript client library into AngularJS is so straight forward that you can write a single line of code to call server logic and a callback function to update model properties with the result. References Google App Engine, /appengine/ AngularJS, Cloud Endpoints, /appengine/docs/java/endpoints/ Appendix: Tips on AngularJS + Cloud Endpoints Initialization Tip #1: Be careful on the initialization sequence The guestbook app loads three different JS libraries in the following sequence: - AngularJS - The guestbook app - Google API Client, which contains the Endpoints functionalities To follow this sequence, the index.html contains the following <script> tags in the <head> tag for loading each of the JS libraries: <script src="/js/angular.min.js"></script> <script src="/js/guestbook.js"></script> <script src=""></script> Once loaded, the third library (Google API Client) calls the initialization function specified by its ‘onload’ parameter. In this case, the init() function is expected and invoked. Tip #2: Enter into the AngularJS world as quickly as possible In the initialization sequence, we use the two functions: - init() function - window.init() function This init() function is defined in guestbook.js in the following way: function init() { window.init(); } As you can see the code above, the function just calls window.init() function (i.e. init() function defined in the global window object) and does nothing else. The window.init() is defined in the AngularJS controller as follows: $window.init= function() { $scope.$apply($scope.load_guestbook_lib); }; In AngularJS, the global window object is accessed by “$window” notation which is a wrapper for it. It is a best practice in AngularJS not to access the window object directly to improve testability. The reason why you would not want to execute the initialization in the first init() method is so you can put as much of the code as possible in the AngularJS world, such as controllers, services and directives. As a result, you can harness the full power of AngularJS and have all your unit tests, integrations tests,and so forth. Tip #3: Use a Flag to Indicate If the Backend is Ready Eventually, the $window.init() is called and you can write any application initialization logic in this function. The primary objective here is to use Google API Client’s onload parameter to invoke an initialization function defined inside theAngularJS controller so that AngularJS can execute all the initialization in a predictable sequence.In the guestbook script, $window.init() calls the load_guestbook_lib() function and is defined as follows. $scope.load_guestbook_lib = function() { gapi.client.load('guestbook', 'v1', function() { $scope.is_backend_ready = true; $scope.list(); }, '/_ah/api'); }; No RPC calls should be made until the backend is ready to serve them. The backend’s “readiness” is indicated by property “is_backend_ready”. This property is set in the handler function call back after the guestbook’s endpoints client library loads.To prevent the application logic from calling endpoints before the endpoints are ready, the Guestbook uses a flag named is_backend_ready in the index.html file. <div ng- …. guestbook UI... </div> By controlling the ng-show attribute with the flag value, the Guestbook UI does not need to be shown until it is ready to make calls to the endpoints.
https://cloud.google.com/solutions/angularjs-cloud-endpoints-recipe-for-building-modern-web-applications
CC-MAIN-2017-09
en
refinedweb
Asked by: Include GAC assemblies in helpfile - Hello I'm creating a help file for a component which uses the MailItem and CalendarItem classes from the Microsoft.Office.Interop.Outlook namespace. These Microsoft Office assemblies are stored in the GAC. Is there a way to include them in my Sandcastle help project so that information about the MailItem and CalendarItem class will be generated ? At this moment I am using the SandCastle Help File Builder to generate the files. Thanks StefanThursday, January 03, 2008 7:00 PM Question All replies In Sandcastle Help File Builder it is very simple thing to do, but I will encourage you to post this SHFB specific question on the SHFB forum. Best regards, Paul.Thursday, January 03, 2008 7:48 PM Add them to the Dependencies property as GAC references. You can find answers to this and other common questions in the FAQ in the help file builder's help file. EricFriday, January 04, 2008 2:52 AM - I have added the Microsoft.Office.Interop.Outlook assembly from the GAC to the Dependencies but it doesn't make a difference. None of the Outlook classes is being documentated. e.g. One of the properties of my component is a collection of AppointmentItems : public IEnumerable<AppointmentItem> CalendarItems { get; } SandCastle has generated a link for the IEnumerable interface to the corresponding MSDN web page, but there is no link for the AppointmentItem class. Am I missing some configuration in SHFB or SandCastle ? Or is there a problem with this specific Outlook assembly ?Friday, January 04, 2008 8:13 PM You'll only get links to online MSDN content if an entry is added to the ResolveReferenceLinks2 component in the configuration file. For that to happen, you'd have to run MRefBuilder and a couple of the doc model transforms on the Office Interop assemblies. There's also the question of whether the MSDN web service knows about stuff in the interop assemblies and can produce links to them. That's a question better answered by Anand. So, for now, you can produce a help file that will list the inherited members of the class, but there won't be links to online help for the interop stuff. EricSaturday, January 05, 2008 2:45 AM
https://social.msdn.microsoft.com/Forums/en-US/3061228b-9b78-41ee-ae83-e5467cb057f3/include-gac-assemblies-in-helpfile?forum=devdocs
CC-MAIN-2017-09
en
refinedweb
Add the needed dependencies to your client application Install-Package Auth0-WCF-Service-JWT Show Lock, the login box, to the user public class Service1 : IService1 { public string DoWork() { var claims = ((IClaimsIdentity)Thread.CurrentPrincipal.Identity).Claims; string email = claims.SingleOrDefault(c => c.ClaimType == "email"); return "Hello from WCF " + User.Identity.Name + " (" + email + ")"; } })
https://auth0.com/authenticate/wcf-service/azure-blob
CC-MAIN-2017-09
en
refinedweb
- Setting Up a Node HTTP Server - Conclusion This article is a simple yet effective introduction to getting started with Node.js, which is often described as an asynchronous server-side JavaScript library written in the C language. To server-side OO developers, this may be a bit confusing. After all, how can JavaScript be a server-side implementation? It can’t; but Node allows the JavaScript V8 engine (written by Google) to run on a server-side platform, thus allowing you to deploy Node apps to the cloud (i.e., using Microsoft Azure). One of the main reasons to use Node for a hosted platform is for performance. Unlike traditional application hosting platforms, such as the Apache Web server that uses a single thread to handle a separate request, Node uses an event-loop model for handling incoming application requests. The traditional single-thread-per-request model ends up increasing a processor’s I/O wait time by using multiple threads (one for each request). The event-loop model used by Node decreases the I/O wait time by issuing an event that tells the processor when a request has finished processing using function callbacks of a Node application, thus using a single thread to handle multiple requests. If this seems a bit confusing at first, you can learn more about the event-loop model at. The main thing to take away is that Node uses a single thread to handle all application requests, whereas more traditional servers such as Apache, IIS, and others use a separate thread for each request. To install Node, visit and click the Install button, which detects the operating system you are using to install the right package. To verify that Node is installed, open a command window, type node –version, and see the result. To start using Node, type node at the command prompt and start typing in JavaScript. You will often put your Node application code into a .js file that is executed either from the command prompt or by using an IDE such as Eclipse. Setting Up a Node HTTP Server One of the most common uses for Node is as a web server for applications created with AngularJS or Express.js. The following code sets up a simple server: var http = require('http'); var server = http.createServer(function onRequest(request, response) { response.writeHead(200, { 'Content-Type': 'text/plain'}); response.write('Welcome to a simple HTTP Server'); response.end(); }).listen(3000); The first line includes the HTTP module. Like other frameworks, Node includes a set of built-in modules that perform certain tasks. You can think of a module like a namespace in .NET or a package in Java. The require statement imports the module. Next, the server instance is created using the createServer function. If you are a Java programmer, the syntax may look very similar to setting up a Java servlet. A callback function containing the request and response objects is passed in as a parameter to the main function. Finally, we create a simple header and write out some content to the stream. After running it from your IDE or node command line, the server should start. As indicated by a blinking cursor at the command prompt, the server is waiting for requests. If you open your browser and go to localhost:3000, you should see the welcome message displayed from the previous example. So how do you load application pages using a Node server? To do this, you can use the http, fs, path, and url modules. (The fs module is the file system module and is responsible for streaming files to Node.) Here is an example for loading pages: var http = require('http'); var fs = require('fs'); var path = require('path'); var url = require('url'); var server = http.createServer(function onRequest(request, response) { var urlParts = url.parse(request.url); var page = 'pages' + urlParts.pathname; fs.exists(page, function fileExists(exists) { if (exists) { response.writeHead(200, { 'Content-Type': 'text/plain'}); fs.createReadStream(page).pipe(response); } else { response.write('Page Not Found'); response.end(); } }); }).listen(3000); In the preceding example, another callback function is used to see whether the page exists. If it does, we pipe the file contents to the browser; otherwise, we send a message saying the page cannot be found. You can use the previous code for most of your application hosting needs if your application is simple and does not use an application architecture such as MVC. If your application is a large one, the previous code has to be repeated in different ways for handling different paths, and so on. Node helps solve this problem with a concept referred to as connect middleware (CM). If you come from a Java or .NET background, you can think of CM as a filter. Like filters, CM is some code that plugs into the request pipeline to perform certain functions. Some examples of CM are for logging and authentication. It is built on top of Node’s HTTP server and is referenced through the package module name connect in the application. To add CM to your Node modules, go to a command prompt and type npm connect. The following code is an example of CM in use: var connect = require('connect') , http = require('http'); var app = connect() .use(function(req, res){ res.end('Hello from Connect!\n'); }); http.createServer(app).listen(3000); From this example, you can see that the app variable is passed into the createServer function, which adds all the CM functionality. Here is another way to write the code using CM but with a bit more flexibility: var connect = require('connect'); var intercept = function(request,response,next){ console.log("hello"); next(); }; var server = connect() .use(intercept) .use(function onRequest(request, response) { response.write("Hello from the Connect Middleware Package"); response.end(); }).listen(3000); This example differs from the previous example because we created a function called intercept that handles the request and response, but also passes in a parameter called next. Then we set up the server using the connect package and tell it to first use the intercept function before handling requests (similar to the other examples using just the http module). If you write something to the console in the intercept function, it displays in the console before the Hello from the message appears in the browser. You can see how you can more effectively write your code to handle multiple operations by using this example.
http://www.informit.com/articles/article.aspx?p=2264833
CC-MAIN-2017-09
en
refinedweb
. Okay, good to know. Should there be an equivalent for usr/src? With all the stuff that might go into /etc/make.conf, shouldn't there be an equivalant 'doc' to help prevent DFLY make namespace collision, and centralize the options? Is it just a matter of someone witting and maintaining it? >> I think I have the full cvs tree, per: > head -n 15 usr/src/Makefile,v head 1.11; access; symbols DragonFly_RELEASE_1_2_Slip:1.9.2.1 DragonFly_RELEASE_1_2:1.9.0.2 DragonFly_Preview:1.9 DragonFly_Snap29Sep2004:1.8 DragonFly_Snap13Sep2004:1.8 DragonFly_Stable:1.9 DragonFly_1_0A_REL:1.7 DragonFly_1_0_REL:1.7 DragonFly_1_0_RC1:1.7 FREEBSD_4_FORK:1.1; locks; strict; comment @# @; I'm trying to reproduce the dragonflybsd.com cvsupd, with the addition of FBSD ports. I wouldn't expect the collection name to change when I cvsup from my local cvsupd. I'm only changing the host in my supfiles. Especially per: > find sup/ -type f sup/cvs-site/checkouts.cvs sup/cvs-doc/checkouts.cvs sup/cvs-src/checkouts.cvs sup/cvs-dfports/checkouts.cvs sup/ports-all/checkouts.cvs sup/releases sup/list :-\ Maybe the DFLY master cvsupd config (or sanitized revision) could be published in /usr/share/examples? // George -- George Georgalis, systems architect, administrator Linux BSD IXOYE cell:646-331-2027 mailto:george@xxxxxxxxx
https://www.dragonflybsd.org/mailarchive/users/2005-05/msg00116.html
CC-MAIN-2017-09
en
refinedweb
I recently read a blog post by Santiago Valdarrama about developer programming problems. He said that any programmer should be able to solve five problems in an hour or less with a programming language of his choice. My language of choice is PowerShell, which is probably not what he had in mind when he wrote the blog post. So far, I’ve demonstrated In adding numbers in an array three different ways, merging lists together and solving the Fibonacci series. Problem 4: The Maximal Combination Problem 4 is described thusly: Write a function that given a list of non negative integers, arranges them such that they form the largest possible number. For example, given [50, 2, 1, 9], the largest formed number is 95021. This one got me thinking. PowerShell has a cmdlet for sorting, but it won’t do what I want it to – it will sort numerically or lexically in ascending or descending mode. This isn’t flexible enough for my purposes. At the heart of every sort operation is a comparator – two elements in the list will be compared and one will be shown to be “before” the other. For example, 8 is before 9 (when sorting ascending). Now, let’s take our problem. In order to compare two numbers, we have to look at their combination. Given 5 and 50, 5 comes before 50 because 5-50 is bigger than 50-5 when you push them together. We need to encode that logic in our comparator. Fortunately, we have the full power of .NET at our disposal. Explicitly, there is a sort method on the System.Collections.ArrayList object that has a Sort method that takes a custom IComparer object. Every sort operation ultimately compares two things in the list – the IComparer interface allows us to specify a custom ordering. First of all, we need to get an ArrayList of strings. Let’s take a look at the code in two pieces: # # Custom sort routine for the ArrayList # $comparatorCode = @" using System.Collections; namespace Problem4 { public class P4Sort: IComparer { public static void Sorter(System.Collections.ArrayList foo) { foo.Sort(new P4Sort()); } public int Compare(object x, object y) { string v1 = (string)x + (string)y; string v2 = (string)y + (string)x; return v2.CompareTo(v1); } } } "@ Add-Type -TypeDefinition $comparatorCode The Compare method is used to compare our two numbers. The Sorter method is a static method that will do an in-place sort of the provided ArrayList usign the custom comparator. A quick note – you can only add this type once. You will likely have to restart your PowerShell session if you make changes to it. Now, let’s look at my cmdlet: function ConvertTo-BiggestNumber { [CmdletBinding()] [OutputType([string])] Param ( # Param1 help description [Parameter(Mandatory=$true, Position=0)] [int[]] $Array ) Begin { $stringArray = new-object System.Collections.ArrayList # Convert the original list to an arraylist of strings for ($i = 0 ; $i -lt $Array.Length ; $i++) { $stringArray.Add($Array[$i].toString()) | Out-Null } } Process { [Problem4.P4Sort]::Sorter($stringArray) [string]::join("", $stringArray.ToArray()) } } This starts by converting the array we are provided to a string ArrayList as required by our custom type, and then sorts it using the .NET Framework Sort method we imported using Add-Type. Finally, we join the array list together. Use it like this: $a = @( 60, 2, 1, 9) ConvertTo-BiggestNumber -Array $a You will get the output 96021. That leaves the fifth puzzle. Unfortunately, I was not able to find a neat solution in PowerShell to the fifth problem. I ended up dropping down to embedded C# – my solution came pretty close to the authors solution to the same problem. Whether this one is a suitable question on an interview is an open debate – I contest that this doesn’t actually test programming skills but rather logic skills. Given the problem, you either see how to do it or you don’t. If you don’t then no amount of coding skills is going to solve the problem.
https://shellmonger.com/tag/powershell-2/
CC-MAIN-2017-13
en
refinedweb
This HOWTO introduces the Twisted reactor, describes the basics of the reactor and links to the various reactor interfaces.. You can get to the reactor object using the following code: from twisted.internet import reactor The reactor usually implements a set of interfaces, but depending on the chosen reactor and the platform, some of the interfaces may not be implemented:
http://twistedmatrix.com/documents/current/core/howto/reactor-basics.html
CC-MAIN-2017-13
en
refinedweb
Flash Player Memory Leak in IE 11 with ExternalInterface Issue #10497410 • Assigned to Crispin C. Steps to reproduce I work for a publisher of web delivered materials used by teachers and students in K-12 classrooms. A lot of our districts/schools run older hardware and OS due to budget and IT staffing shortfalls. We consequently see a lot of classrooms continuing to use Windows 7, and of those, quite a few continuing to run IE 11 (thousands of sessions daily). We have an existing flash application (SWF) embedded in our website capable of attaching to a microphone and streaming audio up to the a media server (think students recording what they read for review/grading later on). While we have many customers running legacy browser versions, a larger percentage do run newer versions of browsers, OS, hardware, etc. Since those newer versions of browsers have begun to phase out flash, we’ve rewritten the microphone SWF application to be a JavaScript based application (UserMedia, AudioContext, ScriptProcessor, etc). A hurdle we ran into was that IE 11 doesn’t support this JS approach (Edge on Win 10 is the first browser to support these JS objects). So, we’ve coded up a simple Flash pollyfill that connects through ActionScript to the microphone, and uses the SampleDataEvent in conjunction with the ExternalInterface class to send pass the sound data to a common JS audio handler. We fall back to the pollyfill for browsers that don’t support the JS approach (e.g. IE 11, Safari). Our pollyfill approach seems to have uncovered a memory leak with the flash player in IE 11. After around 10 minutes of recording, the flash player crashes. Using the developer tools memory profiling tab in IE 11, it appears flash continues to consume memory without garbage collecting. Specifically, I believe the memory leak has to do with any data passed via ExternalInterface. If I pass null instead of the sampled data array, memory climbs and reduces – proper garbage collection. Here are some very slimmed down code samples that help isolate the problem: ActionScript package { public class MemLeakTest extends MovieClip { private var microphone:Microphone; public function MemLeakTest() { super.stop(); microphone = Microphone.getMicrophone(); microphone.setSilenceLevel(0, 1000000); microphone.gain = 65; microphone.rate = 44; microphone.setUseEchoSuppression(true); microphone.setLoopBack(false); microphone.addEventListener(SampleDataEvent.SAMPLE_DATA, captureAudioSample); } private function captureAudioSample(sampleDataEvent:SampleDataEvent) { var samples:Array = new Array(); while(sampleDataEvent.data.bytesAvailable) { samples.push(sampleDataEvent.data.readFloat()); } ExternalInterface.call("processAudio", samples); } } } HTML/JavaScript <html> <head> <script type="text/javascript" src="swfobject-2.2.js"></script> <script type="text/javascript"> var processAudio = function (samples) { //Nothing done, but still a memory leak. }; swfobject.embedSWF("mem-leak-test.swf", "swf", "215", "138", "23"); </script> </head> <body> <div>Memory Leak Test</div> <div> <div id="swf"></div> </div> </body> </html> I realize the perspective of many in our industry is that Flash has one foot in the grave (I personally agree in general with that), but this functionality in IE 11 is important to many, many classrooms. It would be a shame if we had to go back to these schools and tell them that they had to use a different browser on Win 7 – and a bigger shame for those that can’t do even that because of limited IT resources, internal rules etc. Microsoft Edge Team Changed Assigned To to “Ibrahim O.” Changed Assigned To from “Ibrahim O.” to “Crispin C.” You need to sign in to your Microsoft account to add a comment.
https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/10497410/
CC-MAIN-2017-13
en
refinedweb
Controlled Integrator example 2¶ Nengo Example: Controlled Integrator 2: $\dot{x} = \mathrm{Ax}(t) + \mathrm{Bu}(t)$ The control in this circuit is A in that equation. This is also the controlled integrator described in the book “How to build a brain.” import numpy as np import matplotlib.pyplot as plt %matplotlib inline import nengo %load_ext nengo.ipynb <IPython.core.display.Javascript at 0x7f271036d610> Step 1: Create the network As before, we use standard network-creation commands to begin creating our controlled integrator. An ensemble of neurons will represent the state of our integrator, and the connections between the neurons in the ensemble will define the dynamics of our integrator. The control signal will be 0 function that changes half way through the run control_func = piecewise({0: 0, 0.6: (212)26d3ed2c 0 (t < 0.6), the neural integrator performs near-perfect integration. However, when the control value drops to -0.5 (t > 0.6), the integrator becomes a leaky integrator. This means that with negative input, its stored value drifts towards zero. Download controlled_integrator2 as an IPython notebook or Python script.
https://pythonhosted.org/nengo/examples/controlled_integrator2.html
CC-MAIN-2017-13
en
refinedweb
Sorting algorithms, which arrange the elements of a list in a certain order (either ascending or descending), are an important category in computer science. We can use sorting as a technique to reduce search complexity. Vast amounts of research has gone into this category of algorithms because of its importance, particularly in database algorithms. Classification Sorting algorithms are generally classified into different categories based on various factors like the number of comparisons, memory usage, recursion, etc. Existing sorting algorithms include the Selection, Bubble, Heap, Insertion, Quick, Merge, Counting, Bucket and Radix sort. All these different algorithms have some advantages and disadvantages. The description of each of these algorithms is beyond the scope of this article. The following gives a glimpse of how sorting works (with the Bubble sort): Void bubble_sort(int arr[], int n) { for(int pass=n-1;pass>=0;pass--) // outer loop executed n times { for( int i=0; i<pass-1;i++) // inner loop execute n times { if(arr[i]> arr[i+1]) { // swapping the elements Int temp=arr[i]; arr[i]= arr[i+1]; arr[i+1]=temp; } } } } The above algorithm takes O(n2) time even in the best case. Try optimising it so that we can skip some extra swaps by introducing an extra flag. The following is optimised code of the above: Void bubble_sort(int arr[], int n) { Int pass, I, temp, swapped=1; for( pass=n-1; pass>=0&& swapped; pass--) { Swapped=0; for( i=0; I<pass-1;i++) { If(arr[i]> arr[i+1]) { temp=arr[i]; arr[i]=arr[i+1]; arr[i+1]=temp; swapped=1; } } } } This optimised version finally improves the time of the worst and best cases of the Bubble sort to O(n). This is called an optimisation of the code. My new sorting algorithm Recently, I developed a sorting algorithm that is able to sort any input (i.e., not ask for input restrictions) in O(n) time with O(n) Space Complexity. Though there are several sorting algorithms out there, which I mentioned earlier, some like Merge, Quick and Heap sort take O(nlogn) time, and some like Bubble, Selection and Insertion sort, take O( n2 ) time; but no algorithm takes O(n) time to throw up the result. Of course, Counting, Radix and Bucket sort take O(n) time, but ask for input restrictions and finally cannot sort all the input given. In this algorithm, we use a hash table implicitly to operate onto input elements, and only by doing two scans we are able to sort any kind of input (no input restriction) provided to us. Implementation: How does it work? Now, let us find a better solution to this problem. Since our objective is to sort any kind of input, what if we get the maximum elements and create the hash? So, create a maximum hash and initialise it with all zeros. For each of the input elements, go to the corresponding position and increment its count. Since we are using arrays, it takes constant time to reach any location. Here is the main code implementation: #include <stdio.h> #include <conio.h> #include <stdlib.h> void main() { long int arr[]={62,4,8,423,43,4,432,44,23,2,55,12,3}; int n=sizeof(arr)/sizeof(arr[0]) ; int i, j, k,l; int min=arr[0]; int max=arr[0] ; for(i=1;i<n;i++) // run for n times { if(arr[i]<min) min=arr[i] ; // required for optimization else if(arr[i]>max) max=arr[i] ; } int *hash=(int*)calloc((max+1), sizeof(int)); //should be max+1 for(i=0;i<n;i++ ) { hash[arr[i]]++; // storing the count of occurances of element } printf(\n); for(i=0;i<max+1;i++) // loop to read the hash table... { if(hash[i]>0) { printf(%d\t, i) ; //print 1st time if the element has occured } j=hash[i]; /* this requires if the count of the element is greater than one(if there is duplicates element in the array) otherwise dupplicated element will not be displayed more than once.*/ if(j==1 ||j==0) continue; while(j!=1) { // here we are printing till count not equal to 1 printf(%d\t, i) ; j--; } } getch(); } The time complexity of the above algorithm is O(n) and with only two scans, we are able to arrive at the result. Its space complexity is O(n), as it requires extra auxiliary space (almost the maximum elements in an array) to get the result. How is it better than other existing sorting algorithms? Lets check how this new algorithm compares with other existing sorting algorithms in terms of time complexity and speedup, flexibility and elegance of code. I have already given you a glimpse of the Bubble sort and its optimisation. Now lets take a look at the time complexity of existing algorithms: How the new sorting algorithm is better in terms of time-complexity and overhead I am using rand(). The function rand() generates a pseudo-random integer number. This number will be in the range of 0 to RAND_MAX. The constant RAND_MAX is defined in the standard library (stdlib). #include<stdio.h> #include<stdlib.h> #include<time.h> main() { int arr[10000],i,j,min,temp; for(i=0;i<10000;i++) { arr[i]=rand()%10000; } //The MySort Agorithm clock_t start,end; start=clock(); min=arr[0]; int max=arr[0] ; for(i=1;i<10000;i++) { if(arr[i]<min) min=arr[i] ; // required for optimization else if(arr[i]>max) max=arr[i] ; } int *hash=(int*)calloc((max+1), sizeof(int)); //should be max+1 for(i=0;i<10000;i++ ) { hash[arr[i]]++; // storing the count of occurances of elements } printf(\n); end=clock(); double extime=(double) (end-start)/CLOCKS_PER_SEC; printf(\n\tExecution time for the MySort Algorithm is %f seconds\n ,extime); for(i=0;i<10000;i++) { arr[i]=rand()%10000; } clock_t start1,end1; start1=clock(); // The Selection Sort for(i=0;i<10000;i++) { min=i; for(j=i+1;j<10000;j++) { if(arr[min]>arr[j]) { min=j; } } temp=arr[min]; arr[min]=arr[i]; arr[i]=temp; } end1=clock(); double extime1=(double) (end1-start1)/CLOCKS_PER_SEC; printf(\n); printf(\tExecution time for the selection sort is %f seconds\n \n,extime1); if(extime1<extime) printf(\tSelection sort is faster than MySort Algorithm by %f seconds\n\n,extime- extime1); else if(extime1>extime) printf(\tMysort Algorithm is faster than Selectionsort by %f seconds\n\n,extime1-extime); else printf(\tBoth algo has same execution time\n\n); } How is this sorting algorithm better than Merge sort and Quick sort algorithms in terms of both time complexity and overhead spent? Both Merge sort [O(nlogn)] and Quick sort [O(nlogn)-best case] use recursion to sort the input; also, the running time of Quick sort in the worst case is O(n2). There are always certain overheads involved while calling a function. Time has to be spent on passing values, passing control, returning values and returning control. Recursion takes a lot of stack space. Every time the function calls itself, memory has to be allocated, and also recursion is not good for production code. In the new sorting algorithm, we are getting rid of all issuesthe recursion, the time and the overhead. In comparison with the Merge and Quick sort, we are almost cutting the time by logn and making the new algorithm to run in O(n) time. It may happen that when you try to compare the execution time of Merge and Quick sort and the new sorting algorithm, they may give the same or equal execution time based on the above method by introducing clock, but again, you cannot measure the accurate running time by doing so. I am providing a Merge sort algorithm here, so that you can refer to and trace the difference in both codes, and see how many recursions and function calls are involved: /* Function to merge the two haves arr[l..m] and arr[m+1..r] of array arr[] */ void merge(int arr[], int l, int m, int r) { int i, j, k; int n1 = m - l + 1;); } } Finally, as every algorithm has some pros and cons according to the input provided, some of the advantages and disadvantages of this new sorting algorithm are listed here. Advantages - Time efficient about O(n) in even the worst case, and has sorting flexibility - Elegant and lucid code that is easy to understand - Requires only two scans to get the result, and it is free from recursion and extra overhead - Is an advantage in many problems that use sorting as pre-processing to arrive at the result Disadvantages - It requires extra space as per input provided - Wastes much of the space if the input is short and has a large number at any point, e.g., {5, 1, 11, 395, 159, 64, 9,}, though situations like this rarely occur Applications of this sorting algorithm There are several applications and uses that require sorting as pre-processing before doing real computation to get the result, which takes O(nlogn) as the total running time. This particular problem can be decreased to a time of O(n) with the help of this new sorting algorithm. Some examples can be: - Finding the majority element - Finding the triplet in an array - Finding two elements whose sum is closest to zero. Connect With Us
http://opensourceforu.com/2015/08/check-this-new-sorting-algorithm-design/
CC-MAIN-2017-13
en
refinedweb
String::Substitution - Simple runtime string substitution functions version 1.001 use String::Substitution -copy; my $subbed = gsub($string, $pattern, $replacement); This module is a collection of functions to enable (global) substitution on a string using a replacement string or function created at runtime. It was designed to take in the string, pattern, and replacement string from input at runtime and process the substitution without doing an eval. The replacement string may contain [numbered] match vars ( $1 or ${1}) which will be interpolated (by using another s/// rather than eval). The sub_* and gsub_* functions come in three variants: copy- Performs the substitution on a copy and returns the copy. modify- Modifies the variable in-place (just like $s =~ s///). context- Guess by the context which version to use. In void context execution will pass to the modifyvariant, otherwise pass to the copyvariant. It's probably best to use copyor modifyexplicitly but the choice is yours. Each version of each function takes three (scalar) arguments: qr//pattern) Besides a string, the replacement can also be a coderef which will be called for each substitution. The regular pattern match variables will be available inside the coderef ( $1) as you would expect. # uppercase the first captured group in $pattern: gsub($string, $pattern, sub { uc $1 }); For convenience, however, the coderef will be passed the list returned from "last_match_vars" to allow you to do other pattern matching without losing those variables. # can also use @_ (same as above) gsub($string, $pattern, sub { uc $_[1] }); # which is essentially: # $string =~ s/$pattern/ $codref->( last_match_vars() );/e # which allows you to get complicated (an example from t/functions.t): gsub(($string = 'mod'), '([a-z]+)', sub { (my $t = $1) =~ s/(.)/ord($1)/ge; "$_[1] ($1) => $t" }); # produces 'mod (d) => 109111100' # notice that $1 now produces 'd' while $_[1] still has 'mod' See "FUNCTIONS" for more information about each individual substitution function. $subbed = gsub_copy($string, $pattern, $replacement); # $string unchanged Perform global substitution on a copy of the string and return the copy. gsub_modify($string, $pattern, $replacement); # $string has been modified Perform global substitution and modify the string. Returns the result of the s/// operator (number of substitutions performed if matched, empty string if not). gsub_context($string, $pattern, $replacement); # $string has been modified $subbed = gsub_context($string, $pattern, $replacement); # $string unchanged If called in a void context this function calls "gsub_modify". Otherwise calls "gsub_copy". $interpolated = interpolate_match_vars($string, @match_vars); Replaces any digit variables in the string with the corresponding elements from the match_vars array (returned from "last_match_vars"). Substitutes single and multiple digits such as $1 and ${12}. A literal $1 can be escaped in the normal way. Any escaped (backslashed) characters will remain in the string and the backslash will be removed (also counts for doubled backslashes): $string = 'the'; $pattern = 't(h)e'; # 'replacement' => 'output' # appearance when printed # '-$1-' => '-h-' # prints: -h- # '-\\$1-' => '-$1-' # prints: -$1- # '-\\\\$1-' => '-\\h-' # prints: -\h- # '-\\\\\\$1-' => '-\\$1-' # prints: -\$1- # '-\\x\\$1-' => '-x$1-' # prints: -x$1- # '-\\x\\\\$1-' => '-x\\h-' # prints: -x\h- This function is used when the substitution functions receive a string as the replacement parameter. Essentially: $interpolated = interpolate_match_vars($replacement, last_match_vars()); @match_vars = last_match_vars(); Return a list of the numeric match vars ( $1, $2, ...) from the last successful pattern match. The first element of the array is undef to make it simple and clear that the digits correspond to their index in the array: @m = (undef, $1, $2); $m[1]; # same as $1 $m[2]; # same as $2 This can be useful when you want to save the captured groups from a previous pattern match so that you can do another (without losing the previous values). This function is used by the substitution functions. Specifically, it's result is passed to the replacement coderef (which will be "interpolate_match_vars" if replacement is a string). In the future the first element may contain something more useful than undef. $subbed = sub_copy($string, $pattern, $replacement); # $string unchanged Perform a single substitution on a copy of the string and return the copy. sub_modify($string, $pattern, $replacement); # $string has been modified Perform a single substitution and modify the string. Returns the result of the s/// operator (number of substitutions performed if matched, empty string if not). sub_context($string, $pattern, $replacement); # $string has been modified $subbed = sub_context($string, $pattern, $replacement); # $string unchanged If called in a void context this function calls "sub_modify". Otherwise calls "sub_copy". This module exports nothing by default. All functions documented in this POD are available for export upon request. This module uses Sub::Exporter which allows extra functionality: There are predefined export groups corresponding to each of the variants listed above. Importing one (and only one) of these groups will rename the functions (dropping the suffix) so that the functions in your namespace will be named sub and gsub but will reference the variation you specified. Surely it is more clear with examples: package Local::WithCopy; use String::Substitution -copy; # now \&Local::WithCopy::gsub == \&String::Substitution::gsub_copy package Local::WithModify; use String::Substitution -modify; # now \&Local::WithModify::gsub == \&String::Substitution::gsub_modify package Local::WithContext; use String::Substitution -context; # now \&Local::WithContext::gsub == \&String::Substitution::gsub_context Note that String::Substitution does not actually have functions named sub and gsub, so you cannot do this: $subbed = String::Substitution::gsub($string, $pattern, $replacement); But you are free to use the full names (with suffixes): $subbed = String::Substitution::gsub_copy($string, $pattern, $replacement); String::Substitution::gsub_modify($string, $pattern, $replacement); String::Substitution::gsub_context($string, $pattern, $replacement); That is the magic of Sub::Exporter. If you are not satisfied with this see Sub::Exporter for other ways to get what you're looking for. Probably a lot. The replacement string only interpolates (term used loosely) numbered match vars (like $1 or ${12}). See "interpolate_match_vars" for more information. This module does not save or interpolate $& to avoid the "considerable performance penalty" (see perlvar). I tried to use this, but when I couldn't get it to install I decided to polish up an old function I had written a while back and try to make it reusable. Plus I thought the implementation could be simpler. You can find documentation for this module with the perldoc command. perldoc String::Substitution-string-substitution.
http://search.cpan.org/~rwstauner/String-Substitution-1.001/lib/String/Substitution.pm
CC-MAIN-2017-13
en
refinedweb
In my last article, I discussed some of my previous experiences with dependency management solutions and set forth some primary objectives I believe a dependency management tool should facilitate. In this article, I’ll show how I’m currently leveraging NuGet’s command line tool to help facilitate my dependency management goals. First, it should be noted that NuGet was designed primarily to help .Net developers more easily discover, add, update, and remove dependencies to externally managed packages from within Visual Studio. It was not designed to support build-time, application-level dependency management outside of Visual Studio. While NuGet wasn’t designed for this purpose, I believe it currently represents the best available option for accomplishing these goals. Approach 1 My team and I first started using NuGet for retrieving application dependencies at build-time a few months after its initial release, though we’ve evolved our strategy a bit over time. Our first approach used a batch file we named install-packages.bat that used NuGet.exe to process a single packages.config file located in the root of our source folder and download the dependencies into a standard \lib folder. We would then run the batch file after adding any new dependencies to the packages.config and proceed to make assembly references as normal from Visual Studio. We also use Mercurial as our VCS and added a rule to our .hgignore file to keep from checking in the downloaded assemblies. To ensure a freshly downloaded solution obtained all of its needed dependencies, we just added a call to our batch file from a Pre-build event in one of our project files. Voilà! Here’s an example of our single packages.config file (note, it’s just a regular NuGet config file which it normally stores in the project folder): <?xml version="1.0" encoding="utf-8"?> <packages> <package id="Antlr" version="3.1.3.42154" /> <package id="Castle.Core" version="2.5.1" /> <package id="Iesi.Collections" version="3.2.0.4000" /> <package id="NHibernate" version="3.2.0.4000" /> <package id="FluentNHibernate" version="1.3.0.717" /> <package id="Machine.Specifications" version="0.4.9.0" /> <package id="Machine.Fakes" version="0.2.1.2" /> <package id="Machine.Fakes.Moq" version="0.2.1.2" /> <package id="Moq" version="4.0.10827" /> <package id="Moq.Contrib" version="0.3" /> <package id="SeleniumDotNet-2.0RC" version="3.0.0.0" /> <package id="AutoMapper" version="1.1.0.118" /> <package id="Autofac" version="2.4.5.724" /> <package id="Autofac.Mvc3" version="2.4.5.724" /> <package id="Autofac.Web" version="2.4.5.724" /> <package id="CassiniDev" version="4.0.1.7" /> <package id="NDesk.Options" version="0.2.1" /> <package id="log4net" version="1.2.10" /> <package id="MvcContrib.Mvc3.TestHelper-ci" version="3.0.60.0" /> <package id="NHibernateProfiler" version="1.0.0.838" /> <package id="SquishIt" version="0.7.1" /> <package id="AjaxMin" version="4.13.4076.28499" /> <package id="ExpectedObjects" version="1.0.0.0" /> <package id="RazorEngine" version="2.1" /> <package id="FluentMigrator" version="0.9.1.0" /> <package id="Firefox" version="3.6.6" /> Here’s the batch file we used: @echo off set SCRIPT_DIR=%~dp0 set NUGET=%SCRIPT_DIR%..\tools\NuGet\NuGet.exe set PACKAGES=%SCRIPT_DIR%..\src\packages.config set DESTINATION=%SCRIPT_DIR%..\lib\ set LOCALCACHE=C:\Packages\ set CORPCACHE=//corpShare/Packages/ set DEFAULT_FEED="" echo [Installing NuGet Packages] if NOT EXIST %DESTINATION% mkdir %DESTINATION% echo. echo [Installing From Local Machine Cache] %NUGET% install %PACKAGES% -o %DESTINATION% -Source %LOCALCACHE% echo. echo [Installing From Corporate Cache] %NUGET% install %PACKAGES% -o %DESTINATION% -Source %CORPCACHE% echo. echo [Installing From Internet] %NUGET% install %PACKAGES% -o %DESTINATION% echo. echo [Copying To Local Machine Cache] xcopy /y /d /s %DESTINATION%*.nupkg %LOCALCACHE% echo. echo Done This batch file uses NuGet to retrieve dependencies first from a local cache, then from a corporate level cache, then from the default NuGet feed. It then copies any of the newly retrieved packages to the local cache. I don’t remember if NuGet had caching when this was first written, but it was decided to keep our own local cache due to the fact that NuGet only seemed to cache packages if retrieved from the default feed. We used the corporate cache as a sort of poor-man’s private repository for things we didn’t want to push up to the public feed. The main drawback to this approach was that we had to keep up with all of the transitive dependencies. When specifying a packages.config file, NuGet.exe only retrieves the packages listed in the file. It doesn’t retrieve any of the dependencies of the packages listed in the file. Approach 2 In an attempt to improve upon this approach, we moved the execution of NuGet.exe into our rake build. In doing so, we were able to eliminate the need to specify transitive dependencies by ditching the use of the packages.config file in favor of a Ruby dictionary. We also removed the Pre-Build rule in favor of just running rake prior to building in Visual Studio. Here is our dictionary which we store in a packages.rb file: packages = [ [ "FluentNHibernate", "1.3.0.717" ], [ "Machine.Specifications", "0.4.9.0" ], [ "Moq", "4.0.10827" ], [ "Moq.Contrib", "0.3" ], [ "Selenium.WebDriver", "2.5.1" ], [ "Selenium.Support", "2.5.1" ], [ "AutoMapper", "1.1.0.118" ], [ "Autofac", "2.4.5.724" ], [ "Autofac.Mvc3", "2.4.5.724" ], [ "Autofac.Web", "2.4.5.724" ], [ "NDesk.Options", "0.2.1" ], [ "MvcContrib.Mvc3.TestHelper-ci", "3.0.60.0" ], [ "NHibernateProfiler", "1.0.0.912" ], [ "SquishIt", "0.7.1" ], [ "ExpectedObjects", "1.0.0.0" ], [ "RazorEngine", "2.1"], [ "FluentMigrator", "0.9.1.0"], [ "Firefox", "3.6.6"], [ "FluentValidation", "3.1.0.0" ], [ "log4net", "1.2.10" ] ] configatron.packages = packages Here’s the pertinent sections of our rakefile: require 'rubygems' require 'configatron' ... FEEDS = ["//corpShare/Packages/", "" ] require './packages.rb' task :default => ["build:all"] namespace :build do task :all => [:clean, :dependencies, :compile, :specs, :package] ... task :dependencies do configatron.packages.each do | package | FEEDS.each do | feed | !(File.exists?("#{LIB_PATH}/#{package[0]}")) and sh "#{TOOLS_PATH}/NuGet/nuget Install #{package[0]} -Version #{package[1]} -o #{LIB_PATH} -Source #{feed} -ExcludeVersion" do | cmd, results | cmd end end end end end Another change we made was to use the -ExcludeVersion switch to enable us to setup up the Visual Studio references one time without having to change them every time we upgrade versions. Ideally, I’d like to avoid having to reference transitive dependencies altogether, but I haven’t come up with a clean way of doing this yet. Approach 2: Update As of version 1.4, NuGet will now resolve a package’s dependencies (i.e. transitive dependencies) from any of the provided sources (see workitem 603). This allows us to modify the above script to issue a single call to nuget: task :dependencies do configatron.packages.each do | package | !(File.exists?("#{LIB_PATH}/#{package[0]}")) and feeds = FEEDS.map {|x|"-Source " + x }.join(' ') sh "nuget Install #{package[0]} -Version #{package[1]} -o #{LIB_PATH} #{feeds} -ExcludeVersion" do | cmd, results | cmd end end end Summary While NuGet wasn’t designed to support build-time, application-level dependency management outside of Visual Studio in the way demonstrated here, it suits my team’s needs for now. My hope is NuGet will eventually support these scenarios more directly. Pingback: The Morning Brew - Chris Alcock » The Morning Brew #941() Pingback: digitalBush » Manage Your Dependencies with Rake and NuGet()
https://lostechies.com/derekgreer/2011/09/20/dependency-management-in-net-using-nuget-without-visual-studio/
CC-MAIN-2017-13
en
refinedweb
Python's standard unittest library is great and I use it all the time. One thing missing from it, however, is a simple way of running parametrized test cases. In other words, you can't easily pass arguments into a unittest.TestCase from outside. Consider the use case: I have some TestCase I want to invoke several times, each time passing it a different argument. One approach often mentioned is create a base TestCase for the bulk functionality and derive sub-classes from it for variations. But this isn't flexible enough - what if you want to add parameters from the outside (command-line) or test with a large amount of parameters? Fortunately, Python is dynamic enough (and unittest flexible enough) to allow a relatively straightforward solution. Here's a class that makes it possible: import unittest class ParametrizedTestCase(unittest.TestCase): """ TestCase classes that want to be parametrized should inherit from this class. """ def __init__(self, methodName='runTest', param=None): super(ParametrizedTestCase, self).__init__(methodName) self.param = param @staticmethod def parametrize(testcase_klass, param=None): """ Create a suite containing all tests taken from the given subclass, passing them the parameter 'param'. """ testloader = unittest.TestLoader() testnames = testloader.getTestCaseNames(testcase_klass) suite = unittest.TestSuite() for name in testnames: suite.addTest(testcase_klass(name, param=param)) return suite Before I explain how this works, here's a sample usage. Let's define some test case that can be parametrized with an extra param argument: class TestOne(ParametrizedTestCase): def test_something(self): print 'param =', self.param self.assertEqual(1, 1) def test_something_else(self): self.assertEqual(2, 2) Note how nothing except inheriting ParametrizedTestCase is required. self.param automagically becomes available in all test methods (as well as in setUp, tearDown, etc.) And here is how to create and run parametrized instances of this test case: suite = unittest.TestSuite() suite.addTest(ParametrizedTestCase.parametrize(TestOne, param=42)) suite.addTest(ParametrizedTestCase.parametrize(TestOne, param=13)) unittest.TextTestRunner(verbosity=2).run(suite) As expected, we get: test_something (__main__.TestOne) ... param = 42 ok test_something_else (__main__.TestOne) ... ok test_something (__main__.TestOne) ... param = 13 ok test_something_else (__main__.TestOne) ... ok ---------------------------------------------------------------------- Ran 4 tests in 0.000s OK Now, a word on how ParametrizedTestCase works. It's a subclass of unittest.TestCase, and the parametrization is done by defining its own constructor, which is similar to TestCase's constructor but adds an extra param argument. This param is then saved as the instance attribute self.param. Test cases interested in being parametrized should then derive from ParametrizedTestCase. To actually create the parametrized test, ParametrizedTestCase.parametrize should be invoked. It accepts two arguments: - A subclass of ParametrizedTestCase - essentially our custom test case class - The parameter we want to pass to this instance of the test case And then uses the test name discovery facilities available in unittest.TestLoader to create the tests and parametrize them. As you can see in the usage example, the approach is easy to use and works quite well. I have a couple of qualms with it, however: - It directly calls TestCase.__init__, which isn't an officially documented feature. - When different parametrized instances of our test case run, we can't know which parameter was passed. I suppose some hack can be crafted that attaches the parameter value to the test name, but this is very much application-specific. I'm really interested in feedback on this post. Could this be done better? Any alternative approaches to achieve the same effect?
http://eli.thegreenplace.net/2011/08/02/python-unit-testing-parametrized-test-cases/
CC-MAIN-2017-13
en
refinedweb
. Please add support for ELF symbol versioning, so that the usual namespace problems can be avoided. I notice from <> that this lib is distributed under the terms of the GPL only, so I have my doubts that it's particularly useful for Debian to adopt it. Is there any particular reason that GNU shishi is not made available under the LGPL? -- Steve Langasek Give me a lever long enough and a Free OS Debian Developer to set it on, and I can move the world. [email protected] Attachment: signature.asc Description: Digital signature
https://lists.debian.org/debian-devel/2005/08/msg01823.html
CC-MAIN-2017-13
en
refinedweb
The goal of this example is to show you how to serve static content from a filesystem. First, we need to import some objects: Site, an IProtocolFactory which glues a listening server port (IListeningPort) to the HTTPChannel implementation: from twisted.web.server import Site File, an IResource which glues the HTTP protocol implementation to the filesystem: from twisted.web.static import File The reactor, which drives the whole process, actually accepting TCP connections and moving bytes into and out of them: from twisted.internet import reactor And the endpoints module, which gives us tools for, amongst other things, creating listening sockets: from twisted.internet import endpoints Next, we create an instance of the File resource pointed at the directory to serve: resource = File("/tmp") Then we create an instance of the Site factory with that resource: factory = Site(resource) Now we glue that factory to a TCP port: endpoint = endpoints.TCP4ServerEndpoint(reactor, 8888) endpoint.listen(factory) Finally, we start the reactor so it can make the program work: reactor.run() And that’s it. Here’s the complete program: from twisted.web.server import Site from twisted.web.static import File from twisted.internet import reactor, endpoints resource = File('/tmp') factory = Site(resource) endpoint = endpoints.TCP4ServerEndpoint(reactor, 8888) endpoint.listen(factory) reactor.run() Bonus example! For those times when you don’t actually want to write a new program, the above implemented functionality is one of the things the command line twistd tool can do. In this case, the command twistd -n web --path /tmp will accomplish the same thing as the above server. See helper programs in the Twisted Core documentation for more information on using twistd.
http://twistedmatrix.com/documents/current/web/howto/web-in-60/static-content.html
CC-MAIN-2017-13
en
refinedweb
NotificationSettingsError Since: BlackBerry 10.2.0 #include <bb/platform/NotificationSettingsError> To link against this class, add the following line to your .pro file: LIBS += -lbbplatform The possible Notification Settings errors. Overview Public Types Index Public Types The possible Notification Settings errors. BlackBerry 10.2.0 - None 0 No errors have occurred. - Unknown 1 An unknown error has occurred.Since: BlackBerry 10.2.0 - InsufficientPermissions 2 The application lacks the proper permissions needed to perform the requested operation.Since: BlackBerry 10.2.0 - Internal 3 An internal error has occurred. Service availability determines whether Notification Settings can be accessed.Since: BlackBerry 10.2.0 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
http://developer.blackberry.com/native/reference/cascades/bb__platform__notificationsettingserror.html
CC-MAIN-2017-13
en
refinedweb
Historically Finagle has depended on a forked org.apache.thrift libthrift version 0.5.x, which happens to not be published to the Central Repository. We have published the artifact to maven.twttr.com, but this requires users to add the maven.twttr.com repository as a resolver for their project. This is normally not a serious problem for most users, but historically maven.twttr.com has had a propensity to be unreliable and inaccessible from certain locations. Our longterm goal is to transition to a current version of Apache libthrift which is hosted by the Central Repository, but there is a significant amount of work we need to do internally before we can get there. However, we have good news: now maven.twttr.com is no longer required! We have published a fork of libthrift v0.5.0 to the central repository under the ‘com.twitter’ organization. The class files still reside in the 'org.apache’ namespace, so users can still upgrade to newer version of libthrift as they wish, but doing so will now require manual exclusion rules because the eviction mechanisms used by sbt rely on matching organization names: // snippet from build.sbt script libraryDependencies ++= Seq( "com.twitter" %% "finagle-thrift" % "6.40.0" exclude("com.twitter", "libthrift"), "org.apache.thrift" % "libthrift" % "0.9.3" ) See the sbt documentation on library management for more details.
http://finagle.github.io/blog/2016/11/29/central-libthrift/
CC-MAIN-2017-13
en
refinedweb
Hi We are using USB barcode scanner to scan material barcode labels. These lables are from our vendors. We does not have control over the label generation. We are using USB barcode scanner for scanning purpose. We want to capture USB barcode scanner output. Normally the scanner returns output in the place holder in which the cursor present. But we do not put writable text box for the cursor. Instead of write allowable text box we want to put locked or readonly text box in the screen and want to capture scanner value. How can we do? Assume that our barcode scanner does not support serial ports. Regards S. Muhilan View Complete Post I Dear All, I'm creating simple web service that will accept some parameters (such as customer name and file name) and return content of an xml file as a string and WS client will save it on his side. I'm using custom binding with binaryMessageEncoding and httpTransport. The question is if it is possible to have these input parameters as well as returned xml content binary encoded somehow? Thanks in advance for your help! I have a user table with userID, userName ...the standards.I also have a "Material" table with materialID, materialName, etc. I combine these in another table "UsersMaterial" with userID and materialID, depending on which materials the user has chosen (between 0 and 5). Now I need a stored procedure that returns the user information + the set of materials that is mapped to the user. But I can't see how I can return one row of user information combined with a set of the user materials? Do I join or can I return a set + one row? Thanks in advance! Niklas I am using the MsiEnumRelatedProducts function and according to all the articles I should be getting text return values such as: But I am only getting numbers returned from the function. I assume I need to use some Enum but I cannot find any example anywhere. I have looked at this article: But there is no explanation of how to actually get those text return values back? I have been reading that a value needs to be returned by a function, otherwise the following error will occur: "Function <function name> doesn't return a value on all code paths. A null reference exception could occur at run time when the result is used." The cure, I have found, is to return something (i.e. "Return 0") before exiting the function. Can someone please enlighten me as to why this might be useful? Can I pass this return value back to the routine that called the function? If so, how do I code this? Most of the time, however, I find I'm not needing to return a value anyway, and just end up adding "0" to the ends of all my Return statements. Does anyone have any comments on this? Links referred to: Thanks! i'm trying to check if the text fields are blank i attached the code below, if i leave both textbox blank, both alerts pop up but confirm how to make it like if the first textbox is blank, alert pop up then exit everything? function Validation(){ if (IsValid()) { if (!(confirm("Are you sure you want to save this record?"))) { return false; }return true; } function IsValid() { if (!isDate(document.getElementById('txtTest'))) { alert("Please insert a Date"); return false; } if (!isBlank(document.getElementById('txtTest2'))) {alert("Please insert a Holiday Name "); return false; } return true; } asp:button onclientclick="return Validation()" hi all, i need your assistance please. select Comments.Id,SuroundId,ArticleId,PosterId,PostRime,Subject,Body,Visible,Users.Username from dbo.Comments inner join Users on Comments.PosterId=Users.Id where SuroundId=@suround and ArticleId=@articleId for some reason it doesnt return any values. before i've added the inner join it worked perfectly. i couldn't find my mistake, though i passed over it several times. thanks. I have the following stored procedure ALTER PROCEDURE dbo.GetProvMoveModifier ( @prov_id int ) AS DECLARE @mod float SELECT @mod = PROV_TYPES.MOVE_MOD FROM PROV_DETAILS INNER JOIN PROV_TYPES ON PROV_DETAILS.TYPE_ID = PROV_TYPES.TYPE_ID WHERE (PROV_DETAILS.PROV_ID = @prov_id) RETURN @mod MOVE_MOD is defined as a float. Yet within Dataclasses.designer.cs the function gets added as: [Function(Name="dbo.GetProvMoveModifier")] public int GetProvMoveModifier([Parameter(DbType="Int")] System.Nullable<int> prov_id) { IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), prov_id); return ((int)(result.ReturnValue)); } Naturally this is causing some issues as my C# code needs to handle decimal values. If I try to manually change the designer code to use floats, I get the following error 'System.Single' is not a valid return type for a mapped stored procedure method. Any thoughts? I could setup a work-around, but I would think I could return a decimal value from a stored procedure.
http://www.dotnetspark.com/links/33470-capturing-usb-barcode-scanner-return-values.aspx
CC-MAIN-2017-13
en
refinedweb
Tools are the lifeblood of any programming language, and F# is no different. While you can be successful writing F# code in your favorite text editor and invoking the compiler exclusively from the command line, you’ll likely be more productive using tools. Like C# and VB.NET, F# is a first-class citizen in Visual Studio. F# in Visual Studio has all the features you would expect, such as debugger support, IntelliSense, project templates, and so on. To create your first F# project, open up the Visual Studio IDE and select File→New Project from the menu bar to open the New Project dialog, as shown in Figure 1-1. Select Visual F# in the left pane, select F# Application in the right pane, and click OK. After you click OK in the New Project dialog, you’ll see an empty code editor, a blank canvas ready for you to create your F# masterpiece. To start with, let’s revisit our Hello, World application. Type the following code into the F# editor: printfn "Hello, World" Now press Control + F5 to run your application. When your application starts, a console window will appear and display the entirely unsurprising result shown in Figure 1-2. It may be startling to see a program work without an explicit Main method. You will see why this is admissible in the next chapter, but for now let’s create a more meaningful Hello, World–type program to get a feel for basic F# syntax. The code in Example 1-1 will create a program that accepts two command-line parameters and prints them to the console. In addition, it displays the current time. Example 1-1. Mega Hello World (* Mega Hello World: Take two command line parameters and then print them along with the current time to the console. *) open System [<EntryPoint>] Now that you have actual F# code, hopefully you are curious about what is going on. Let’s look at this program line by line to see how it works. Example 1-1 introduces three values named greeting, thing, and timeOfDay: let greeting, thing= args.[0], args.[1] let timeOfDay= DateTime.Now.ToString("hh:mm tt") The key thing here is that the let keyword binds a name to a value. It is worth pointing out that unlike most other programming languages, in F# values are immutable by default, meaning they cannot be changed once initialized. We will cover why values are immutable in Chapter 3, but for now it is sufficient to say it has to do with functional programming. F# is also case-sensitive, so any two values with names that only differ by case are considered different: let number = 1 let Number = 2 let NUMBER = 3 A value’s name can be any combination of letters, numbers, an underscore _, or an apostrophe '. However, the name must begin with a letter or an underscore. You can enclose the value’s name with a pair of tickmarks, in which case the name can contain any character except for tabs and newlines. This allows you to refer to values and functions exposed from other .NET languages that may conflict with F# keywords: let ``this.Isn't %A% good value Name$!@# ``= 5 Other languages, like C#, use semicolons and curly braces to indicate when statements and blocks of code are complete. However, programmers typically indent their code to make it more readable anyway, so these extra symbols often just add syntactic clutter to by changing the relevant setting under Tools→Options→Text Editor→F#. Reviewing Example 1-1, notice that the body of the main method was indented by four spaces, and the body of the if statement was indented by another four spaces: If the body of the if statement, the failwith, was dedented four spaces and therefore lined up with the if keyword, the F# compiler would yield a warning. This is because the compiler wouldn’t be able to determine if the failwith was meant for the body of the if statement: [<EntryPoint>] let main (args : string[]) = ifargs.Length <> 2 then failwith"Error: Expected arguments <greeting> and <thing>" Warning FS0058: possible incorrect indentation: this token is offside of context started at position (25:5). Try indenting this token further or using standard formatting conventions The general rule is that anything belonging to a method or statement must be indented further than the keyword that began the method or statement. So in Example 1-1, everything in the main method was indented past the first let and everything in the if statement was indented past the if keyword. As you see and write more F# code, you will quickly find that omitting semicolons and curly braces makes the code easier to write and also much easier to read. Example 1-1 also demonstrated how F# can interoperate with existing .NET libraries: open System// ... let timeOfDay = DateTime.Now.ToString("hh:mm tt") The .NET Framework contains a broad array of libraries for everything from graphics to databases to web services. F# can take advantage of any .NET library natively by calling directly into it. In Example 1-1, the DateTime.Now property was used in the System namespace in the mscorlib.dll assembly. Conversely, any code written in F# can be consumed by other .NET languages. For more information on .NET libraries, you can skip ahead to Appendix A for a quick tour of what’s available. Like any language, F# allows you to comment your code. To declare a single-line comment, use two slashes, //; everything after them until the end of the line will be ignored by the compiler: //Program exit code For larger comments that span multiple lines, you can use multiline comments, which indicate to the compiler to ignore everything between the (* and *) characters: (*Mega Hello World: Take two command line parameters and then print them along with the current time to the console. *) For F# applications written in Visual Studio, there is a third type of comment: an XML documentation comment. If a comment starting with three slashes, ///, is placed above an identifier, Visual Studio will display the comment’s text when you hover over it. Figure 1-3 shows applying an XML documentation comment and its associated tooltip. No credit card required
https://www.safaribooksonline.com/library/view/programming-f/9780596802080/ch01s02.html
CC-MAIN-2017-13
en
refinedweb
This one time, at band camp, Oliver Fuchs said: > On Wed, 11 Dec 2002, Sandip P Deshmukh wrote: > > > i would like to know if i can: > > > > - automatically move these messages to a particular folder - say > > spam > > > > Hi, > add this to your .procmailrc file: > > :0: > * ^X-Spam-Status: Yes > caughtspam ISTR that Sandip sai (in another post) that he's not using procmail. If that's so, just use exim's native filtering abilities. Put this is your ~/.forward: # Spam Drop if $h_X-Spam-Status contains "Yes" then save $home/Mail/SPAM finish endif Exim's filtering ability is fine for most straightforwad filters - My mail goes into about 25 folders based on header keywords, and it's worked reliably here. I understand procmail is more versatile, but so far I haven't needed it. -- -------------------------------------------------------------------------- | Stephen Gran | Help! I'm trapped in a PDP 11/70! | | [email protected] | | | | | -------------------------------------------------------------------------- Attachment: pgprxlXPCs1Eu.pgp Description: PGP signature
https://lists.debian.org/debian-user/2002/12/msg02089.html
CC-MAIN-2017-34
en
refinedweb
HMACSHA512 Class Assembly: mscorlib (in mscorlib.dll) System.Security.Cryptography.HashAlgorithm System.Security.Cryptography.KeyedHashAlgorithm System.Security.Cryptography.HMAC System.Security.Cryptography.HMACSHA512. The following example shows how to sign a file by using the HMACSHA512 object and then how to verify the file. using System; using System.IO; using System.Security.Cryptography; public class HMACSHA512512 hmac = new HMACSHA512512 hmac = new HMACSHA512.
https://msdn.microsoft.com/en-us/library/system.security.cryptography.hmacsha512
CC-MAIN-2017-34
en
refinedweb
Generating, dep.department_name, cursor (select emp.employee_id, emp.first_name, emp.last_name from employees emp where emp.department_id = dep.department_id) AS emps from departments dep, emp.first_name, emp.last_name from employees emp where emp.department_id = dep.department_id); public class MyEmp {() { ArrayList emps = new ArrayList(); OracleResultSet set = getEmpsList(); try { while (set.next()) { MyEmp myEmp = new MyEmp(); myEmp.setId(set.getInt(1)); myEmp.setId(set.getString(2)); myEmp.setId(set.getString(3)); emps.add(myEmp); } //set.beforeFirst(); } catch (Exception e) { e.printStackTrace(); } return em.
https://technology.amis.nl/2006/01/17/using-cursors-in-oracle-adf/
CC-MAIN-2017-34
en
refinedweb
hello every one. i would like to tell you that i am real biggener so please reply accordingly. any kind of help would be appriciated. i was trying to clean a specific registry key's value through my application. i have added a button now i want that on click event of that button that particular registry entry's value is cleaned. what i want to do is to increase the typed URL history through my application. i was told to use a namespace microsoft.win32 but i dont even know how to use the namespace. i would like someone to guide me through. the registry key is HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\TypedURLs anyone can please help with this one and rest i'll do on the same traks. Best Regards Mohit
https://www.daniweb.com/programming/software-development/threads/20678/i-want-to-manipulate-the-registry-key-values-through-a-vb-net-application
CC-MAIN-2018-39
en
refinedweb
EventSource A simple Swift event source for fun and profit EventSource is based on the EventSource Web API to enable Server Sent Events. If you're unfamiliar with this one-way streaming protocol - Start Here. Under the hood, EventSource is built on top of NSURLSession and has zero third-party dependencies. Enjoy! Usage An Event looks like this struct Event { let readyState: EventSourceState // The EventSourceState at the time of the event's creation let id: String? let name: String? let data: String? let error: NSError? } You create an EventSource with an NSURL import EventSource let url = NSURL(string: "")! let eventSource = EventSource(url: url) Opening and closing the connection eventSource.open() eventSource.close() Adding standard event handlers eventSource.onOpen { event in debugPrint(event) } eventSource.onMessage { event in debugPrint(event) } eventSource.onClose { event in debugPrint(event) } eventSource.onError { event in debugPrint(event) } Adding named event handlers eventSource.addHandler("tweet.create") { event in debugPrint(event.data) } Example In the Example directory, you'll find the Server and EventSourceExample directories. The Server directory contains a simple python server that sends events to any connected clients, and the EventSourceExample directory contains a simple iOS app to display recent events from that server. Server Setup The server uses Redis to setup pub / sub channels, and it uses Flask deployed with Gunicorn to serve events to connected clients. Install the following packages to run the simple python server brew install redis pip install flask redis gevent gunicorn Start redis and deploy the server (in two separate terminal tabs) redis-server gunicorn --worker-class=gevent -b 0.0.0.0:8000 app:app Client Setup Open the EventSourceExample Xcode project and run the app in the simulator Tap the "Open" button in the app to open a connection to the server Sending Events Now you can visit in your browser to start sending events Demo If all goes well, you should get a nice stream of events in your simulator Heads Up API Decisions EventSource deviates slightly from the Web API where it made sense for a better iOS API. For example, an Event has a name property so you can subscribe to specific, named events like tweet.create. This is in lieu of the Web API's event property of an Event (because who wants to write let event = event.event? Not me... 😞). Auto-Reconnect An EventSource will automatically reconnect to the server if it enters an Error state, and based on the protocol, a server can send a retry event with an interval indicating how frequently the EventSource should retry the connection after encountering an error. Be warned: an EventSource expects this interval to be in seconds - not milliseconds as described by the Web API. Installation Carthage Add the following line to your Cartfile. github "christianbator/EventSource" Then run carthage update --platform iOS Github Help us keep the lights on Dependencies Used By Total: 0 Releases v0.2-alpha - Nov 22, 2016 - Swift 3 support v0.1-alpha - Aug 30, 2016 - Initial pre-release with basic event source functionality
https://swiftpack.co/package/christianbator/EventSource
CC-MAIN-2018-39
en
refinedweb
I need something that validates input if his/her inputted data (word) exists in .txt file. My code is working if there is just a one condition. if(line.find("2014-1113") != string::npos) But when I try to add else condition.. Everytime I run the program, the else condition is always the output. I don't know why.. I try to have a experiment so that if user enter a word that is not exist in my txt file, there will be an output that something is wrong with his/her inputted data. When I run at using debugging mode. This is the output: cout << "NOT FOUND!"; break; Until I run it, even I change the word and it exist on my txt file, still the ELSE condition is the output.. Does anyone knows my problem? thanks! Here is my sample txt file: 2015-1111,Christian Karl,M 2015-1112,Joshua Evans,M 2015-1115,Jean Chloe,F 2015-1113,Shairene Traxe,F 2015-1114,Paul Howard,M Then my code: #include <iostream> #include <fstream> #include <string> using namespace std; int main() { ifstream stream1("db.txt"); string line ; while( std::getline( stream1, line ) ) { if(line.find("2015-1113") != string::npos){ // WILL SEARCH 2015-1113 in file cout << line << endl; } else{ cout << "NOT FOUND!"; break; } } stream1.close(); system("pause"); return 0; } When your code goes over the first line, it doesn't find what it's looking for, and goes into the else clause. Then it prints "NOT FOUND" and break (the break stops the while loop). What you should do is something along these lines: bool found = false; while( std::getline( stream1, line ) && !found) { if(line.find("2015-1113") != string::npos){ // WILL SEARCH 2015-1113 in file cout << line << endl; found = true; // If you really want to use "break;" Here will be a nice place to put it. Though it is not really necessary } } if (!found) cout << "NOT FOUND"; Since your if condition is inside you loop the else statement will run for every line that does not contain what you are searching for. What you need to do is use a bool flag and set that in the loop. After the loop finishes then you check the flag and see if the line was found or not. bool found = false; while(std::getline(stream1, line) && !found ) { if(line.find("2015-1113") != string::npos){ // WILL SEARCH 2015-1113 in file found = true; } } if (found) std::cout << "Your line was found.";
http://m.dlxedu.com/m/askdetail/3/f98f06d0195d74b31e49b17282d4e48f.html
CC-MAIN-2018-39
en
refinedweb
Solution: JBPM3 and Spring IntegrationShane Paul Aug 19, 2009 6:49 PM I thought I would pass on some of the hard one lessons here to help people avoid some of the pain I have had to go through. While I think my scenario is perhaps not one that jbpm has in mind specifically, I fear that it is quite a typical scenario in the wild. Also I think someone else might have a suggestion or ten. I am not claiming to be a jbpm guru. Required Reading: Problem JBPM3 architecture assumes and relies heavily on an xml configuration file and dependencies are spread throughout several classes. (sometimes in surprising places and ways) Penetration of this is rather difficult and a few versions ago nearly impossible due to private/final members/methods in key classes. This was most evident in the job executor and scheduler. Previously spring-modules was sufficient to integrate with spring, although quite clunky due to the above restrictions. Current wisdom suggests that using the latest version of jbpm with spring-modules can introduce subtle bugs and it appears that the jbpm part of spring modules is now dead. The reference links above outline an alternative solution but it is somewhat flawed unfortunately when it comes to async processing. It also does not go into specifics on the spring config side of things and how to remove the dependency on the jbpm configuration xml file and have a fully spring injected environment. I will assume that you have read the references above and have downloaded the source code they reference. 1. Re: Solution: JBPM3 and Spring IntegrationShane Paul Aug 19, 2009 6:56 PM (in response to Shane Paul) The problem to point out is the job executor thread. The overrided example given does not synchronise the acquire jobs properly. This means that there will be a race condition in the acquire jobs thread between the acquireJobs synchronized block in JobExecutorThread and the unsynched one in SpringExecutorThread. The correct implementation is: public class SpringExecutorThread extends JobExecutorThread { private TransactionTemplate transactionTemplate; private JobExecutor jobExecutor; public GfgSpringExecutorThread( String name, JobExecutor jobExecutor, JbpmConfiguration jbpmConfiguration, TransactionTemplate transactionTemplate, int idleInterval, int maxIdleInterval, long maxLockTime, int maxHistory ) { super(name, jobExecutor, jbpmConfiguration, idleInterval, maxIdleInterval, maxLockTime, maxHistory); this.transactionTemplate = transactionTemplate; this.jobExecutor=jobExecutor; } @Override protected Collection acquireJobs() { synchronized(jobExecutor) { return (Collection) transactionTemplate.execute(new TransactionCallback() { public Object doInTransaction(TransactionStatus transactionStatus) { return GfgSpringExecutorThread.super.acquireJobs(); } }); } } @Override protected void executeJob(final Job job) { transactionTemplate.execute(new TransactionCallback() { public Object doInTransaction(TransactionStatus transactionStatus) { GfgSpringExecutorThread.super.executeJob(job); return null; } }); } @Override protected Date getNextDueDate() { return (Date) transactionTemplate.execute(new TransactionCallback() { public Object doInTransaction(TransactionStatus transactionStatus) { return GfgSpringExecutorThread.super.getNextDueDate(); } }); } } 2. Re: Solution: JBPM3 and Spring IntegrationShane Paul Aug 19, 2009 7:06 PM (in response to Shane Paul). 3. Re: Solution: JBPM3 and Spring IntegrationShane Paul Aug 19, 2009 7:23 PM (in response to Shane Paul) The next issue is the extension of the jbpm configuration object. In this example I would like my configuration to autodeploy my process definitions at startup and fail with warnings if something is not right. I would also like the option of safely starting the jobexecutor AFTER everything is deployed and safe for processing jobs that may have already been queued up. Notes: The object factory being injected is our own custom one that I will talk about next. The line: JbpmConfiguration.Configs.setDefaultObjectFactory(objectFactory); Is necessary because I noticed jbpm attempting to use the default factory if it was not set. This step was the one that was preventing me from completely removing all dependency on an xml file and caused some subtle transaction bugs. A bit random to be honest and I have not looked deeply into why it is trying to undermine my config behind my back. Nasty little troll... JbpmSpringDelegationNode.beanFactory = applicationContext; This is the bit of code I am most unhappy with. Basically there is no way currently (unlike in jbpm4) to hook the process def lookup to the spring beanfactory. So we have to inject it manually. The new Spring 2.5/3.0 configuration annotations could be used here to force injection of properties, but we do not use them. Until then...yuck. public class JbpmConfigurationFactoryBeanImpl implements ApplicationContextAware, FactoryBean, InitializingBean, ApplicationListener { private static org.apache.log4j.Logger log = org.apache.log4j.Logger.getLogger(JbpmConfigurationFactoryBeanImpl.class); private JbpmConfiguration jbpmConfiguration; private TransactionTemplate transactionTemplate; private ObjectFactory objectFactory; private boolean startJobExecutor; private SessionFactory sessionFactory; private Resource[] processDefinitionsResources; @Override public void setApplicationContext(ApplicationContext applicationContext) throws BeansException { JbpmSpringDelegationNode.beanFactory = applicationContext; } public void afterPropertiesSet() throws Exception { JbpmConfiguration.Configs.setDefaultObjectFactory(objectFactory); jbpmConfiguration = new JbpmConfiguration(objectFactory); JbpmContext ctx = null; try { ctx = jbpmConfiguration.createJbpmContext(); ctx.setSessionFactory(sessionFactory); } finally { if (ctx != null) { ctx.close(); } } if (getProcessDefinitionsResources() != null) { InputStream inputStream = null; for (int i = 0; i < getProcessDefinitionsResources().length; i++) { try { ctx = jbpmConfiguration.createJbpmContext(); Resource definitionLocation = getProcessDefinitionsResources(); inputStream = definitionLocation.getInputStream(); final ProcessDefinition processDefinition = ProcessDefinition.parseXmlInputStream(inputStream); final JbpmContext finalContext = ctx; getTransactionTemplate().execute(new TransactionCallback() { @Override public Object doInTransaction(TransactionStatus status) { finalContext.deployProcessDefinition(processDefinition); return null; } }); } finally { if (inputStream != null) { inputStream.close(); } if (ctx != null) { ctx.close(); } } } } StaleObjectLogConfigurer.hideStaleObjectExceptions(); } public Object getObject() throws Exception { return jbpmConfiguration; } @Override public Class getObjectType() { return JbpmConfiguration.class; } @Override public boolean isSingleton() { return true; } @Override public void onApplicationEvent(ApplicationEvent applicationEvent) { if (applicationEvent instanceof ContextClosedEvent) { jbpmConfiguration.getJobExecutor().stop(); } if (applicationEvent instanceof ContextStartedEvent) { if (startJobExecutor) { log.info("Starting job executor ..."); jbpmConfiguration.startJobExecutor(); log.info("Job executor started."); } } } public void setStartJobExecutor(boolean startJobExecutor) { this.startJobExecutor = startJobExecutor; } public void setObjectFactory(ObjectFactory objectFactory) { this.objectFactory = objectFactory; } public void setSessionFactory(SessionFactory sessionFactory) { this.sessionFactory = sessionFactory; } public Resource[] getProcessDefinitionsResources() { return processDefinitionsResources; } public void setProcessDefinitionsResources(Resource[] processDefinitionsResources) { this.processDefinitionsResources = processDefinitionsResources; } public TransactionTemplate getTransactionTemplate() { return transactionTemplate; } public void setTransactionTemplate(TransactionTemplate transactionTemplate) { this.transactionTemplate = transactionTemplate; } } 4. Re: Solution: JBPM3 and Spring IntegrationShane Paul Aug 19, 2009 7:32 PM (in response to Shane Paul) The next is the object factory. We use a map object to store the various nebulous properties that are required as well as getting those core, spring injected service factories in there. The map makes specifying boolean and string properties a little cleaner and allows us to remove some ugly naming in our core spring config file that would otherwise be needed. public class SpringObjectFactory implements ObjectFactory, ApplicationContextAware { private ApplicationContext applicationContext; private Map<String, Object> jbpmProperties; @Override public Object createObject(String objectName) { if (getJbpmProperties().containsKey(objectName)) { return getJbpmProperties().get(objectName); } else if (getApplicationContext().containsBean(objectName)) { return getApplicationContext().getBean(objectName); } else { return null; } } @Override public boolean hasObject(String objectName) { return getJbpmProperties().containsKey(objectName)|| getApplicationContext().containsBean(objectName); } public void setJbpmProperties(Map<String, Object> jbpmProperties) { this.jbpmProperties = jbpmProperties; } public Map<String, Object> getJbpmProperties() { return jbpmProperties; } @Override public void setApplicationContext(ApplicationContext applicationContext) { this.applicationContext = applicationContext; } public ApplicationContext getApplicationContext() { return applicationContext; } } 5. Re: Solution: JBPM3 and Spring IntegrationShane Paul Aug 19, 2009 7:37 PM (in response to Shane Paul) Other java classes we use that are almost the same or identical as the ones in the articles. Just for completeness. Lock monitor thread: public class SpringLockMonitorThread extends LockMonitorThread { private TransactionTemplate transactionTemplate; public SpringLockMonitorThread(JbpmConfiguration jbpmConfiguration, int lockMonitorInterval, int maxLockTime, int lockBufferTime, TransactionTemplate transactionTemplate) { super(jbpmConfiguration, lockMonitorInterval, maxLockTime,lockBufferTime); this.transactionTemplate = transactionTemplate; } @Override protected void unlockOverdueJobs() { transactionTemplate.execute(new TransactionCallback() { public Object doInTransaction(TransactionStatus transactionStatus) { SpringLockMonitorThread.super.unlockOverdueJobs(); return null; } }); } public void setTransactionTemplate(TransactionTemplate transactionTemplate) { this.transactionTemplate = transactionTemplate; } } Job executor: public class SpringJobExecutor extends JobExecutor { private TransactionTemplate transactionTemplate; @Override public synchronized void start() { if (!isStarted) { for (int i = 0; i < nbrOfThreads; i++) { startThread(); } lockMonitorThread = new SpringLockMonitorThread(jbpmConfiguration, lockMonitorInterval, maxLockTime, lockBufferTime, transactionTemplate); isStarted = true; } } @Override protected Thread createThread(String threadName) { log.info("Creating JobExecutor thread " + threadName); return new GfgSpringExecutorThread(threadName, this, jbpmConfiguration, transactionTemplate, idleInterval, maxIdleInterval, maxLockTime, historyMaxSize); } public void setTransactionTemplate(TransactionTemplate transactionTemplate) { this.transactionTemplate = transactionTemplate; } } 6. Re: Solution: JBPM3 and Spring IntegrationShane Paul Aug 19, 2009 7:38 PM (in response to Shane Paul) The horribly ugly delegation node... public class JbpmSpringDelegationNode implements DecisionHandler, ActionHandler, AssignmentHandler { private static final long serialVersionUID = 1L; private String beanName; protected static BeanFactory beanFactory; private Object bean; public String decide(ExecutionContext executionContext) throws Exception { DecisionHandler dh = getBean(); return dh.decide(executionContext); } public void execute(ExecutionContext executionContext) throws Exception { ActionHandler ah = getBean(); ah.execute(executionContext); } public void assign(Assignable assignable, ExecutionContext executionContext) throws Exception { AssignmentHandler ah = getBean(); ah.assign(assignable, executionContext); } @SuppressWarnings("unchecked") private <T> T getBean() { if (bean == null) { bean = beanFactory.getBean(beanName); } return (T) bean; } public String getBeanName() { return beanName; } public void setBeanName(String beanName) { this.beanName = beanName; } } 7. Re: Solution: JBPM3 and Spring IntegrationShane Paul Aug 19, 2009 7:55 PM (in response to Shane Paul) Finally, the spring config to tie it all together which has hopefully survived sanitation intact. :) You will not the class names of the above classes have been fudged to protect the innocent. Notes: I also do not claim this is the perfect solution. There are probably further optimisations that could be made to reduce the size/complexity. We create our own object factory and define various properties in spring. The default jbpm context has the services injected manually to catch code that ignores the object factory. They are all named THE SAME as they would be in the jbpm config file. This is so our custom object factory will pick them up. You could rename them and put them into the objectfactory's map also. <?xml version="1.0" encoding="UTF-8"?> <beans> <bean id="jbpmConfiguration" class="JbpmConfigurationFactoryBeanImpl"> <property name="transactionTemplate" ref="transactionTemplate" /> <property name="sessionFactory" ref="jBPMSessionFactory" /> <property name="startJobExecutor" value="true" /> <property name="objectFactory" ref="springJbpmObjectFactory" /> <property name="processDefinitionsResources"> <list> <value>classpath:where is my xml???</value> </list> </property> </bean> <bean id="springJbpmObjectFactory" class="SpringObjectFactory"> <property name="jbpmProperties"> <map> <entry key="jbpm.hide.stale.object.exceptions"> <bean class="java.lang.Boolean"><constructor-arg</bean> </entry> <entry key="resource.business.calendar" value="org/jbpm/calendar/jbpm.business.calendar.properties" /> <entry key="resource.default.modules" value="org/jbpm/graph/def/jbpm.default.modules.properties" /> <entry key="resource.converter" value="org/jbpm/db/hibernate/jbpm.converter.properties" /> <entry key="resource.action.types" value="org/jbpm/graph/action/action.types.xml" /> <entry key="resource.node.types" value="org/jbpm/graph/node/node.types.xml" /> <entry key="resource.parsers" value="org/jbpm/jpdl/par/jbpm.parsers.xml" /> <entry key="resource.varmapping" value="org/jbpm/context/exe/jbpm.varmapping.xml" /> <entry key="resource.mail.templates" value="jbpm.mail.templates.xml" /> <entry key="jbpm.mail.smtp.host" value="localhost" /> <entry key="jbpm.mail.from.address" value="jbpm@noreply" /> <entry key="jbpm.byte.block.size"> <bean class="java.lang.Integer"><constructor-arg</bean> </entry> <entry key="jbpm.mail.address.resolver"> <bean class="org.jbpm.identity.mail.IdentityAddressResolver"></bean> </entry> <entry key="jbpm.variable.resolver"> <bean class="org.jbpm.jpdl.el.impl.JbpmVariableResolver"></bean> </entry> <entry key="jbpm.task.instance.factory"> <bean class="org.jbpm.taskmgmt.impl.DefaultTaskInstanceFactoryImpl"></bean> </entry> </map> </property> </bean> <bean id="default.jbpm.context" class="org.jbpm.JbpmContext" scope="prototype"> <constructor-arg> <bean class="org.jbpm.svc.Services"> <constructor-arg> <map> <entry key="message" value- <entry key="authentication" value- <entry key="scheduler" value- <entry key="tx" value- <entry key="persistence" value- </map> </constructor-arg> </bean> </constructor-arg> <constructor-arg </bean> <bean id="jbpmCommandService" class="org.jbpm.command.impl.CommandServiceImpl"> <constructor-arg </bean> <bean id="jbpm.job.executor" class="SpringJobExecutor"> <property name="transactionTemplate" ref="transactionTemplate" /> <property name="jbpmConfiguration" ref="jbpmConfiguration" /> <property name="name" value="SpringJbpmJobExecutor" /> <property name="nbrOfThreads" value="40" /> <property name="idleInterval" value="5000" /> <property name="maxIdleInterval" value="1700000" /> <property name="historyMaxSize" value="20" /> <property name="maxLockTime" value="300000" /> <property name="lockMonitorInterval" value="60000" /> <property name="lockBufferTime" value="5000" /> </bean> <!-- services --> <bean id="persistence" class="org.jbpm.persistence.db.DbPersistenceServiceFactory"> <property name="sessionFactory" ref="jBPMSessionFactory" /> <property name="transactionEnabled" value="false" /> <property name="currentSessionEnabled" value="true" /> </bean> <bean id="authentication" class="org.jbpm.security.authentication.DefaultAuthenticationServiceFactory" /> <bean id="message" class="org.jbpm.msg.db.DbMessageServiceFactory" /> <bean name="tx" class="org.jbpm.tx.TxServiceFactory" /> <bean name="scheduler" class="org.jbpm.scheduler.db.DbSchedulerServiceFactory" /> </beans> 8. Re: Solution: JBPM3 and Spring IntegrationShane Paul Aug 19, 2009 8:02 PM (in response to Shane Paul) And that is it. Hopefully that helps someone avoid the painful experience I had. Of course everyone is rushing off to jbpm4 now and I wish I could post something similar on that. (or not have to at all!) Unfortunately after a week of frustration it just would not work with JTA/Spring and async processing. I tried every configuration suggestion on the net and nothing worked. (mainly due to the job executor) A shame as jbpm4 is a step in the right direction on all fronts IMHO. I eagerly await when I can make the move myself. 9. Re: Solution: JBPM3 and Spring IntegrationJoram Barrez Aug 25, 2009 6:11 AM (in response to Shane Paul) Thanks for the posts! I added a link to this thread on my blog. 10. Re: Solution: JBPM3 and Spring IntegrationJoram Barrez Aug 25, 2009 6:15 AM (in response to Shane Paul) Btw for reference, your jBPM4 issues are described here: It makes me wonder, since the GWT console uses JTA transactions. Can you post your configs here? 11. Re: Solution: JBPM3 and Spring IntegrationMustafa Musaji Aug 25, 2009 7:43 AM (in response to Shane Paul) "mr_magoo" wrote:. This is good to know. I'm not using Spring but I am getting this error when I'm executing multiple jobs in parallel using JBPM 4 and MySQL. I've been seeing all sorts of posts/blogs etc skimming this issue but none explaining the cause or if there is a solution. The problem I am having is my workflow doesn't always complete because of the errors. It seems like it's a little hit and miss and sometimes will carry on through to completion and sometimes not. Maybe should post in another thread. 12. Re: Solution: JBPM3 and Spring IntegrationRonald van Kuijk Aug 25, 2009 9:05 AM (in response to Shane Paul) PLEASE post in topics that are ON TOPIC this one is about Spring and jBPM 3, you do not use EITHER 13. Re: Solution: JBPM3 and Spring IntegrationShane Paul Aug 26, 2009 1:14 AM (in response to Shane Paul) Hey there, I wish I could, but I dumped them a few weeks ago once I had JBPM3 all working. Regretting that now. Also there were about 50 variants. That in itself is wierd as the spring integration docs are only a few paragraphs. You guys tempted me with promises of a painless spring integration....now I feel so cheated and cheap. ;) I will give it another hack when I have some time and be sure to keep a log and post then. 14. Re: Solution: JBPM3 and Spring IntegrationJoram Barrez Aug 26, 2009 10:28 AM (in response to Shane Paul) @MrMagoo: there are a lot of people who are using the Spring integration without problems, so don't feel cheap ;-) Do keep us posted on your progress, I'm interested in seeing in what area we can improve things.
https://developer.jboss.org/thread/118944
CC-MAIN-2018-39
en
refinedweb
Validation Toolkit - Introduction - Requirements - Installation - Usage Examples - Contribute - Meta Introduction ValidationToolkit is designed to be a lightweight framework specialised in data validation, such as email format, input length or passwords matching. At the core of this project are the following principles: - Separation of concerns - Availability on all platforms Separation of concerns Think of ValidationToolkit as to an adjustable wrench more than to a Swiss knife. With this idea in mind, the toolkit is composed from a small set of protocols, structs and classes than can be easily composed to fit your project needs. All platforms availability Since validation can take place at many levels, ValidationToolkit is designed to support iOS, macOS, tvOS, watchOS and native Swift projects, such as server apps. Every project is unique in it's challenges and it's great when we can focus on solving them instead of spending our time on boilerplate tasks. ValidationToolkit is compact and offers you the foundation you need to build data validation around your project needs. In addition, it includes a set of common validation predicates that most of the projects can benefit of: email validation, required fields, password matching, url validation and many more to come. Requirements - iOS 8.0+ / macOS 10.10+ / tvOS 9.0+ / watchOS 2.0+ - Xcode 8.1+ - Swift 3.0+ Installation Carthage You can use Carthage to install ValidationToolkit by adding it to your Cartfile: github "nsagora/validation-toolkit" Run carthage update to build the framework and drag the built ValidationToolkit.framework into your Xcode project. Setting up Carthage Carthage is a decentralised dependency manager that builds your dependencies and provides you with binary frameworks. You can install Carthage with Homebrew using the following command: $ brew update $ brew install carthage CocoaPods You can use CocoaPods to install ValidationToolkit by adding it to your Podfile: source '' platform :ios, '8.0' use_frameworks! target 'YOUR_TARGET_NAME' do pod 'ValidationToolkit' end Then, run the following command: $ pod install Note that this requires CocoaPods version 1.0.0, and your iOS deployment target to be at least 8.0. Setting up CocoaPods CocoaPods is a dependency manager for Cocoa projects. You can install it with the following command: $ gem install cocoapods Swift Package Manager You can use the Swift Package Manager to install ValidationToolkit by adding it to your Package.swift file: import PackageDescription let package = Package( name: "YOUR_PROJECT_NAME", targets: [], dependencies: [ .Package(url: "", majorVersion: 1), ] ) Note that the Swift Package Manager is still in early design and development, for more information checkout its GitHub Page. Manually To use this library in your project manually you may: - for Projects, just drag the Sourcesfolder into the project tree - for Workspaces, include the whole ValidationToolkit.xcodeproj Usage example For a comprehensive list of examples try the ValidationToolikit.playground: - Download the repository locally on your machine - Open ValidationToolkit.workspace - Build ValidationToolkit iOStarget - Select the ValidationToolkitplaygrounds from the Project navigator. Predicates The Predicate represents the core protocol and has the role to evaluate if an input matches on a given validation condition. At ValidationToolkit's core we have the following two predicates, which allow developers to compose predicates specific to the project needs. RegexPredicate let predicate = RegexPredicate(expression: "^[a-z]$") predicate.evaluate(with: "a") // returns true predicate.evaluate(with: "5") // returns false predicate.evaluate(with: "ab") // returns false BlockPredicate let pred = BlockPredicate<String> { $0.characters.count > 2 } predicate.evaluate(with: "a") // returns false predicate.evaluate(with: "abc") // returns true In addition, the toolkit offers a set of common validation predicates that your project can benefit of: let predicate = EmailPredicate() predicate.evaluate(with: "hello@") // returns false predicate.evaluate(with: "[email protected]") // returns true predicate.evaluate(with: "hé[email protected]") // returns true URLPredicate let predicate = URLPredicate() predicate.evaluate(with: "") // returns true predicate.evaluate(with: "http:\\") // returns false PairMatchingPredicate let predicate = PairMatchingPredicate() predicate.evaluate(with: ("swift", "swift")) // returns true predicate.evaluate(with: ("swift", "obj-c")) // returns false On top of that, developers can build more advanced or complex predicates by extending the Predicate protocol, and/ or by composing or decorating the existing predicates: Custom Predicate public class MinLenghtPredicate: Predicate { public typealias InputType = String private let minLenght:Int public init(minLenght:Int) { self.minLenght = minLenght } public func evaluate(with input: String) -> Bool { return input.characters.count >= minLenght } } let predicate = MinLenghtPredicate(minLenght: 5) predicate.evaluate(with: "alph") // returns false predicate.evaluate(with: "alpha") // returns true predicate.evaluate(with: "alphabet") // returns true Constraints A PredicateConstraint represents a data type that links a Predicate to an Error, in order to provide useful feedback for the end users. PredicateConstraint let predicate = BlockPredicate<String> { $0 == "Mr. Goodbytes" } let constraint = PredicateConstraint(predicate: predicate, error: MyError.magicWord) let result = constraint.evaluate(with: "please") switch result { case .valid: print("access granted...") case .invalid(let summary): print("Ah Ah Ah! You didn't say the magic word!") } // prints "Ah Ah Ah! You didn't say the magic word!" enum MyError:Error { case magicWord } Constraint Sets A ConstraintSet represents a collection of constraints that allows the evaluation to be made on: - any of the constraints - all constraints To provide context, a ConstraintSet allows us to constraint a piece of data as being required and also as being a valid email. ConstraintSetAn example is that of the registration form, whereby users are prompted to enter a strong password. This process typically entails some form of validation, but the logic itself is often unstructured and spread out through a view controller. ValidationToolkit seeks instead to consolidate, standardise, and make explicit the logic that is being used to validate user input. To this end, the below example demonstrates construction of a full ConstraintSet object that can be used to enforce requirements on the user's password data: let lowerCase = RegexPredicate(expression: "^(?=.*[a-z]).*$") let upperCase = RegexPredicate(expression: "^(?=.*[A-Z]).*$") let digits = RegexPredicate(expression: "^(?=.*[0-9]).*$") let specialChars = RegexPredicate(expression: "^(?=.*[!@#\\$%\\^&\\*]).*$") let minLenght = RegexPredicate(expression: "^.{8,}$") var passwordConstraints = ConstraintSet<String>() passwordConstraints.add(predicate: lowerCasePredicate, error: Form.Password.missingLowercase) passwordConstraints.add(predicate: upperCasePredicate, error: Form.Password.missingUpercase) passwordConstraints.add(predicate: digitsPredicate, error: Form.Password.missingDigits) passwordConstraints.add(predicate: specialChars, error: Form.Password.missingSpecialChars) passwordConstraints.add(predicate: minLenght, error: Form.Password.minLenght(8)) let password = "3nGuard!" let result = passwordConstraints.evaluateAll(input: password) switch result { case .valid: print("Wow, that's a 💪 password!") case .invalid(let summary): print(summary.errors.map({$0.localizedDescription})) } // prints "Wow, that's a 💪 password!" From above, we see that once we've constructed the passwordConstraints, we're simply calling evaluateAll(input:) to get a Summary of our evaluation result. This summary can then be handled as we please. Contribute We would love you for the contribution to ValidationToolkit, check the LICENSE file for more info. Meta This project is developed and maintained by the members of iOS NSAgora, the community of iOS Developers of Iași, Romania. Distributed under the MIT license. See LICENSE for more information. [] Credits and references We got inspired from other open source projects and they worth to be mentioned below for reference: - - Github Help us keep the lights on Dependencies Used By Total: 0 Releases 0.6.2 - May 13, 2018 - Add generated docs 0.6.0 - May 13, 2018 - Introduce async predicates - Introduce async constraints 0.5.0 - Jul 21, 2017 - Change license from Apache License to MIT License - Simplify names
https://swiftpack.co/package/nsagora/validation-toolkit
CC-MAIN-2018-39
en
refinedweb
iconv.h - codeset conversion facility #include <iconv.h> *); None. None. None. <sys/types.h> XSH iconv, iconv_close, iconv_open First released in Issue 4. The restrict keyword is added to the prototype for iconv(). SD5-XBD-ERN-56 is applied, adding a reference to <sys/types.h> for the size_t type. The <iconv.h> header is moved from the XSI option to the Base. return to top of pagereturn to top of page
http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/iconv.h.html
CC-MAIN-2018-39
en
refinedweb
STOMP and Individual Ack?jscheid Jun 1, 2011 11:08 PM Hi, I'm in the process of porting an application from JBM+StompConnect to HornetQ with native STOMP protocol handler. One issue I've just run into is that StompSession calls ServerSessionImpl.acknowledge rather than ServerSessionImpl.individualAcknowledge, forcing acknowledgment of messages in the order in which they were received. This breaks my application, which processes and acknowledges messages out-of-order using separate worker threads. StompConnect doesn't have the same limitation, and the STOMP protocol description also doesn't mention it. Could this be changed to use individualAcknowledge by default, or could this be made configurable? Thanks, Julian 1. Re: STOMP and Individual Ack?Tim Fox Jun 2, 2011 4:15 AM (in response to jscheid) HornetQ is just following the STOMP 1.0 spec : "When a client has issued a SUBSCRIBE frame with the ack header set to client any messages received from that destination will not be considered to have been consumed (by the server) until the message has been acknowledged via an ACK." BTW StompConnect won't do anything different since it just uses the JMS API, and there is no way using JMS to acknowledge an individual message out of order. 2. Re: STOMP and Individual Ack?jscheid Jun 2, 2011 5:51 AM (in response to Tim Fox) Thanks for your reply. You are right that HornetQ is under no obligation to support out-of-order acknowledgements, I was mistaken. I missed the bit where vanilla JMS doesn't support out-of-order acks. I'm not sure I agree that the STOMP spec is clear in this regard -- it doesn't clarify what "any messages" are in the context of "the message" -- but you are probably right that the intention was to match JMS behaviour. That said, if I'm not missing anything HornetQ could easily support out-of-order STOMP acks, and my application -- and probably others -- would benefit from it. Would you consider adding this as a feature that can be enabled in the configuration? I'd be happy to provide a patch to this end. 3. Re: STOMP and Individual Ack?jscheid Jun 2, 2011 6:03 AM (in response to jscheid) Even better than a configuration option might be to introduce a third ack mode, something pseudo-namespaced like "hornetq:individual" maybe? The STOMP spec does mention ack modes beyond "auto" and "client" in passing... ( ) 4. Re: STOMP and Individual Ack?jscheid Jun 2, 2011 9:22 PM (in response to jscheid) Even betterer: STOMP 1.1 provides a new ack type, client-individual! It would be fantastic to add support for this prior to adding full STOMP 1.1 support as per HORNET-553. I'll send over a patch shortly. 5. Re: STOMP and Individual Ack?jscheid Jun 2, 2011 9:56 PM (in response to jscheid) I took the liberty of creating issue HORNETQ-713 with a patch for this.
https://developer.jboss.org/thread/167456
CC-MAIN-2018-39
en
refinedweb
TaskQueue A TaskQueue is basically a FIFO queue where tasks can be enqueued for execution. The tasks will be executed concurrently up to an allowed maximum number. A task is simply a non-throwing asynchronous function with a single parameter which is a completion handler called when the task finished. Features - Employs the execution of asynchronous "non-blocking" tasks. - The maximum number of concurrently executing tasks can be set, even during the execution of tasks. - Employs a "barrier" task which serves as a synchronisation point which allows us to "join" all previous enqueued tasks. - A task queue can be suspended and resumed. - A task queue can have a target task queue where tasks which are ready for execution will be enqueued and the target will then become responsible for execution of the task (which again may be actually performed by another target task queue). - Task and TaskQueue can be a used as a replacement for NSOperationand NSOperationQueue. With barriers, suspend and resume functionality, target relationships and the control of the concurrency level allows us to design complex systems where the execution of asynchronous tasks can be controlled by external conditions, interdependencies and by the restrictions of system resources. Description With a TaskQueue we can control the maximum number of concurrent tasks that run "within" the task queue. In order to accomplish this, we enqueue tasks into the task queue. If the actual number of running tasks is less than the maximum, the enqueued task will be immediately executed. Otherwise it will be delayed up until enough previously enqueued tasks have been completed. At any time, we can enqueue further tasks, while the maximum number of running tasks is continuously guaranteed. Furthermore, at any time, we can change the number of maximum concurrent tasks and the task queue will adapt until the constraints are fulfilled. Installation Note: Swift 4.0, 3.2 and 3.1 requires slightly different syntax: For Swift 4 use version >= 0.9.0. For Swift 3.2 compatibility use version 0.8.0 and for Swift 3.1 use version 0.7.0. Carthage Add github "couchdeveloper/TaskQueue" to your Cartfile. This is appropriate for use with Swift 4, otherwise specify version constraints as noted above. In your source files, import the library as follows import TaskQueue CocoaPods Add the following line to your Podfile: pod 'cdTaskQueue' This is appropriate for use with Swift 4, otherwise specify version constraints as noted above. In your source files, import the library as follows import cdTaskQueue SwiftPM To use SwiftPM, add this to your Package.swift: .Package(url: "") Usage Suppose, there one or more asynchronous tasks and we want to execute them in some controlled manner. In particular, we want to make guarantees that no more than a set limit of those tasks execute concurrently. For example, many times, we just want to ensure, that only one task is running at a time. Furthermore, we want to be notified when all tasks of a certain set have been completed and then take further actions, for example, based on the results, enqueue further tasks. So, what's a task anyway? A task is a Swift function or closure, which executes asynchronously returns Void and has a single parameter, the completion handler. The completion handler has a single parameter where the eventual Result - which is computed by the underlying operation - will be passed when the task completes. We can use any type of "Result", for example a tuple (Value?, Error?) or more handy types like Result<T> or Try<T>. Canonical task function: func task(completion: @escaping (R)->()) { ... } where R is for example: (T?, Error?) or Result<T> or (Data?, Response?, Error?) etc. Note, that the type R may represent a Swift Tuple, for example (T?, Error?), and please not that there are syntax changes in Swift 4: Caution: In Swift 4 please consider the following changes regarding tuple parameters: If a function type has only one parameter and that parameter’s type is a tuple type, then the tuple type must be parenthesized when writing the function’s type. For example, ((Int, Int)) -> Voidis the type of a function that takes a single parameter of the tuple type (Int, Int)and doesn’t return any value. In contrast, without parentheses, (Int, Int) -> Voidis the type of a function that takes two Int parameters and doesn’t return any value. Likewise, because Voidis a type alias for (), the function type (Void) -> Voidis the same as (()) -> ()— a function that takes a single argument that is an empty tuple. These types are not the same as () -> ()— a function that takes no arguments. So, this means, if the result type of the task´s completion handler is a Swift Tuple, for example (String?, Error?), that task must have the following signature: func myTask(completion: @escaping ((String?, Error?))->()) { ... } Now, create a task queue where we can enqueue a number of those tasks. We can control the number of maximum concurrently executing tasks in the initialiser: let taskQueue = TaskQueue(maxConcurrentTasks: 1) // Create 8 tasks and let them run: (0...8).forEach { _ in taskQueue.enqueue(task: myTask) { (String?, Error?) in ... } } Note, that the start of a task will be delayed up until the current number of running tasks is below the allowed maximum number of concurrent tasks. In the above code, the asynchronous tasks are effectively serialised, since the maximum number of concurrent tasks is set to 1. Using a barrier A barrier function allows us to create a synchronisation point within the TaskQueue. When the TaskQueue encounters a barrier function, it delays the execution of the barrier function and any further tasks until all tasks enqueued before the barrier have been completed. At that point, the barrier function executes exclusively. Upon completion, the TaskQueue resumes its normal execution behaviour. let taskQueue = TaskQueue(maxConcurrentTasks: 4) // Create 8 tasks and let them run (max 4 will run concurrently): (0...8).forEach { _ in taskQueue.enqueue(task: myTask) { (String?, Error?) in ... } } taskQueue.enqueueBarrier { // This will execute exclusively on the task queue after all previously // enqueued tasks have been completed. print("All tasks finished") } // enqueue further tasks as you like Specify a Dispatch Queue Where to Start the Task Even though, a task should always be designed such, that it is irrelevant on which thread it will be called, the practice is often different. Fortunately, we can specify a dispatch queue in function enqueue where the task will be eventually started by the task queue, if there should be such a limitation. If a queue is not specified, the task will be started on the global queue ( DispatchQueue.global()). taskQueue.enqueue(task: myTask, queue: DispatchQueue.main) { Result<String> in ... } Note, that this affects only where the task will be started. The task's completion handler will be executed on whatever thread or dispatch queue the task is choosing when it completes. There's no way in TaskQueue to specify the execution context for the completion handler. Constructing a Suitable Task Function from Any Other Asynchronous Function The function signature for enqueue requires that we pass a task function which has a single parameter completion and returns Void. The single parameter is the completion handler, that is a function, taking a single parameter or a tuple result and returning Void. So, what if our asynchronous function does not have this signature, for example, has additional parameters and even returns a result? Take a look at this asynchronous function from URLSession: dataTask(with url: URL, completionHandler: @escaping (Data?, URLResponse?, Error?) -> Swift.Void) -> URLSessionDataTask Here, besides the completion handler we have an additional parameter url which is used to configure the task. It also has a return value, the created URLSessionTask object. In order to use this function with TaskQueue, we need to ensure that the task is configured at the time we enqueue it, and that it has the right signature. We can accomplish both requirements by applying currying to the given function. The basic steps are as follows: Given any asynchronous function with one or more additional parameters and possibly a return value: func asyncFoo(param: T, completion: @escaping (Result)->()) -> U { ... } we transform it to: func task(param: T) -> (_ completion: @escaping (Result) -> ()) -> () { return { completion in let u = asyncFoo(param: param) { result in completion(result) } // handle return value from asyncFoo, if any. } } That is, we transform the above function asyncFoo into another, whose parameters consist only of the configuring parameters, and returning a function having the single remaining parameter, the completion handler, e.g.: ((Result) -> ()) -> (). The signature of this returned function must be valid for the task function required by TaskQueue. "Result" can be a single parameter, e.g. Result<T> or any tuple, e.g. (T?, Error?) or (T?, U?, Error?), etc. Note, that any return value from the original function (here asyncFoo), if any, will be ignored by the task queue. It should be handled by the implementation of the task function, though. You might want to examine this snippet a couple of times to get used to it ;) Then use it as follows: taskQueue.enqueue(task: task(param: "Param")) { result in // handle result ... } This ensures, that the task will be "configured" with the given parameters at the time it will be enqueued. The execution, though, will be delayed up until the task queue is ready to execute it. Example Here, we wrap a URLSessionTask executing a "GET" into a task function: func get(_ url: URL) -> (_ completion: @escaping ((Data?, URLResponse?, Error?)) -> ()) -> () { return { completion in URLSession.shared.dataTask(with: url) { data, response, error in completion((data, response, error)) }.resume() } } Then use it as follows: let taskQueue = TaskQueue(maxConcurrentTasks: 4) taskQueue.enqueue(task: get(url)) { (data, response, error) in // handle (data, response, error) ... } Having a list of urls, enqueue them all at once and execute them with the constraints set in the task queue: let urls = [ ... ] let taskQueue = TaskQueue(maxConcurrentTasks: 1) // serialise the tasks urls.forEach { taskQueue.enqueue(task: get($0)) { (data, response, error) in // handle (data, response, error) ... } } Github Help us keep the lights on Dependencies Used By Total: 0
https://swiftpack.co/package/couchdeveloper/TaskQueue
CC-MAIN-2018-39
en
refinedweb
Well, I faced with the common problem I can't find right solution for that. My scenario: I have a View, let say ViewA and I have the second View - ViewB. From the ViewA I call my command in the ViewModelA, show me ViewB. In the ViewB i have two buttons (Save and Exit). I can easily handle the button Exit just by calling the static command from the View (Command="ApplicationCommands.Close") <Button Content="Exit" Width="77" Command="ApplicationCommands.Close" Height="35 View Complete Post..... I'm trying to create a generic dialog in WPF using the MVVM pattern. The dialog's return value needs to be generic so it can return anything. In this case I want to return an instance of my DialogOptions class. First, I created a DialogBase class that has a dependency property for the return value: using System; using System.Windows; namespace OutputCompare { public class _DialogBase<T> : Window where T : new() { public static readonly DependencyProperty RetValDP = DependencyProperty.Register("RetVal", typeof(T), typeof(_DialogBase<T>), new FrameworkPropertyMetadata()); public T RetVal { get { return (T)GetValue(RetValDP); } set { SetValue(RetValDP, value); } } public _DialogBase() { RetVal = new T(); } } } Next I create a dialog based off it <src:_DialogBase x:Class="OutputCompare.LoadDialog" x:TypeArguments="src:DialogOptions" xmlns="" xmlns:x="" xmlns:src="clr-namespace:OutputCompare" Title="Load" Height="331" Width="518" RetVal="{Binding Option! I've,");. In SharePoint 2010, I know the Modal dialog height is calculated dynamically in Javascript. For some dialogs, I mean if the dialog is opening a edit form then it includes ribbon on top of it which reduces the height of the scrolling part and it really looks ugly. Is there any way to add ribbon height to modal dialog height so that unscrollable ribbon height doesn't affect the scrollable area height? Hope I am explaning it correct. I would really appreciate, if anyone can reply for me. Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend
http://www.dotnetspark.com/links/45545-mvvm-modal-dialog.aspx
CC-MAIN-2017-43
en
refinedweb
Most database systems are largely transactional. The reason being that the main feature of any given database is to store and manipulate data. Providing transactional support is a huge requirement for systems that use databases. If the database doesn't provide the necessary transaction support, the applications that use the database would need to implement it. Transactional support isn't trivial to implement. Especially the kind of transactional support provided by production-grade databases. Moving down to the individual transaction level, what data, exactly, does each transaction need to store? Do transactions need to make full copies of the data being operated on in order to restore previous states? This is one really inefficient way to do it. It is inefficient because the transaction data, once accumulated, would grow uncontrollably large. The better way to store transaction data is to store only what is absolutely necessary to revert the current data to a previous state. Once in a previous state, the same principle can be applied to the data to move further back in time still. Does the transactional model have a place inside application code? Well, maybe on a fractional scale in comparison to database system transactional support. Having simplistic transactional support that fits inside an object-oriented design could potentially be well suited for small edits that need to be made to objects during runtime. In this case, the number of transactions at any given time would be very small and probably wouldn't exist for any significant amount of time. The real benefit here is simplicity. Even if the application you are building does use a database with transactional support, better to leave the heavy transaction lifting to it rather than bother it with smaller edits. An in-memory transaction class could be of use for this purpose. Sub classes could then inherit from this class in order to become transactional. Below is a simple example of such a class as implemented in Python. In this example, the Transactional class is responsible for providing the transaction support for sub classes. The basic idea behind this class is that it will provide very basic transaction support for any string attributes of the class. This means that sub classes can define any number of string attributes and each one will be transactional. #Example; Python transaction objects. #Do imports. from difflib import ndiff, restore from types import StringTypes #String type tuple. STRING_TYPES=StringTypes #A transactional class that should be sub-classed. class Transactional(object): #Constructor. Initialize the transaction list. def __init__(self): self.transactions=[] #Start recording a transaction. def start(self): _attribute={} for i in dir(self): current_attribute=getattr(self, i) if type(current_attribute) in STRING_TYPES: _attribute[i]=current_attribute self.transactions.append(_attribute) #Stop recording a transaction. def stop(self): _tran_index=len(self.transactions)-1 _tran_current=self.transactions[_tran_index] for i in dir(self): current_attribute=getattr(self, i) if type(current_attribute) in STRING_TYPES: _tran_current[i]="\n".join(ndiff(_tran_current[i], current_attribute)) #Rollback the last stored transaction. def rollback(self): _tran_index=len(self.transactions)-1 _tran_current=self.transactions[_tran_index] for i in _tran_current.keys(): setattr(self, i, "".join(restore(_tran_current[i].splitlines(), 2))) self.transactions.pop(_tran_index) #Commit all changes. def commit(self): self.transactions=[] #Simple class capable of storing transactions. class Person(Transactional): #Constructor. Initialize the Transactional class. def __init__(self): super(Person, self).__init__() self.first_name="" self.last_name="" def set_first_name(self, first_name): self.first_name=first_name def set_last_name(self, last_name): self.last_name=last_name #Main. if __name__=="__main__": #Instantiate a person. person_obj=Person() #Start recording a transaction. person_obj.start() #Manipulate the object. person_obj.set_first_name("John") person_obj.set_last_name("Smith") #Stop recording the transaction. person_obj.stop() #Manipulate the object. person_obj.set_first_name("jOhN") person_obj.set_last_name("sMiTh") #Display object data. print "FIRST NAME:",person_obj.first_name print "LAST NAME: ",person_obj.last_name #Rollback to latest stored transaction. person_obj.rollback() #Display object data. print "FIRST NAME:",person_obj.first_name print "LAST NAME: ",person_obj.last_name There are four basic methods to Transactional: start(), stop(), rollback(), and commit(). The Transactional.start() method will start recording a transaction. This means that any changes made after the method is invoked, will be part of the transaction data. The Transactional.stop() method completes the current transaction that is being recorded. It does this by using Python diff support to store only the changes that have been made to the data. The Transactional.rollback() method restores the string attributes to the most recently stored transaction state. Again, this is done using Python diff support. Finally, Transactional.commit() simply purges all transaction data.
http://www.boduch.ca/2009/10/python-transaction-objects.html
CC-MAIN-2017-43
en
refinedweb
PageLoader and effects Is there a way to have an effect on lading page for example right to left? Sorry. I'm talking about the loader qml object. Have you got an example? @mrdebug Is this what you are trying to do ? import QtQuick 2.4 Item { width: 100 height: 120 Loader { id: loader anchors.fill: parent sourceComponent: component } Component { id: component Rectangle { id: rect width: parent.width height: parent.height color: "red" NumberAnimation { running: true target: rect from: -100 to: 0 property: "x" duration: 600 easing.type: Easing.InOutQuad } } } }
https://forum.qt.io/topic/54847/pageloader-and-effects
CC-MAIN-2017-43
en
refinedweb
A simple html parser that constructs DOM tree. Project DescriptionRelease History Download Files It is aimed to provide jquery like API. Example from pyHTMLParser.Query import Q_open, Q_close, Q Q_open('') second_target_link = Q('a[href$="-target.html"]:nth-child(2)') print(second_target_link.attr('href')) >>> Q_close() Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pyHTMLParser/
CC-MAIN-2017-43
en
refinedweb
Fried! Occurrence I would argue that the frequency of occurrence of words and other linguistic elements is the fundamental measure on which much NLP is based. In essence, we want to answer “How many times did something occur?” in both absolute and relative terms. Since words are probably the most familiar “linguistic elements” of a language, I focused on word occurrence; however, other elements may also merit counting, including morphemes (“bits of words”) and parts-of-speech (nouns, verbs, …). Note: In the past I’ve been confused by the terminology used for absolute and relative frequencies —– pretty sure it’s used inconsistently in the literature. I use count to refer to absolute frequencies (whole, positive numbers: 1, 2, 3, …) and frequency to refer to relative frequencies (rational numbers between 0.0 and 1.0). These definitions sweep certain complications under the rug, but I don’t want to get into it right now… Anyway, in order to count individual words, I had to split the corpus text into a list of its component words. I’ve discussed tokenization before, so I won’t go into details. Given that I scraped this text from the web, though, I should note that I cleaned it up a bit before tokenizing: namely, I decoded any HTML entities; removed all HTML markup, URLs, and non-ASCII characters; and normalized white-space. Perhaps controversially, I also unpacked contractions (e.g., “don’t” => “do not”) in an effort to avoid weird tokens that creep in around apostrophes (e.g., “don”+”’”+”t” or “don”+”‘t”). Since any mistakes in tokenization propagate to results downstream, it’s probably best to use a “standard” tokenizer rather than something homemade; I’ve found NLTK’s defaults to be good enough (usually). Here’s some sample code: from itertools import chain from nltk import clean_html, sent_tokenize, word_tokenize # combine all articles into single block of text all_text = ' '.join([doc['full_text'] for doc in docs]) # partial cleaning as example: this uses nltk to strip residual HTML markup cleaned_text = clean_html(all_text) # tokenize text into sentences, sentences into words tokenized_text = [word_tokenize(sent) for sent in sent_tokenize(cleaned_text)] # flatten list of lists into a single words list all_words = list(chain(*tokenized_text)) Now I had one last set of decisions to make: Which words do I want to count? Depends on what you want to do, of course! For example, this article explains how filtering for and studying certain words helped computational linguists identify J.K. Rowling as the person behind the author Robert Galbraith. In my case, I just wanted to get a general feeling for the meaningful words Friedman has used the most. So, I filtered out stop words and bare punctuation tokens, and I lowercased all letters, but I did not stem or lemmatize the words; the total number of words dropped from 2.96M to 1.43M. I then used NLTK’s handy FreqDist() class to get counts by word. Here are both counts and frequencies for the top 30 “good” words in my Friedman corpus: You can see that the distributions are identical, except for the y-axis values: as discussed above, counts are the absolute number of occurrences for each word, while frequencies are those counts divided by the total number of words in the corpus. It’s interesting but not particularly surprising that Friedman’s top two meaningful words are mr. and said –— he’s a journalist, after all, and he’s quoted a lot of people. (Perhaps he met them on the way to/from a foreign airport…) Given what we know about Friedman’s career (as discussed in (1)), most of the other top words also sound about right: Israel/Israeli, president, American, people, world, Bush, … On a lark, I compared word counts for the five presidents that have held office during Friedman’s NYT career: Ronald Reagan, George H.W. Bush, Bill Clinton, George W. Bush, and Barack Obama: - “reagan”: 761 - “bush”: 3582 - “clinton”: 2741 - “obama”: 964 Yes, the two Bush’s got combined, and Hillary is definitely contaminating Bill’s counts (I didn’t feel like doing reference disambiguation on this, sorry!). I find it more interesting to plot conditional frequency distributions, i.e. a set of frequency distributions, one for each value of some condition. So, taking the article’s year of publication as the condition, I produced this plot of presidential mentions by year: Nice! You can clearly see frequencies peaking during a given president’s term(s), which makes sense. Plus, they show Friedman’s change in focus over time: early on, he covered Middle Eastern conflict, not the American presidency; in 1994, a year in which Clinton was mentioned particularly frequently, Friedman was specifically covering the White House. I’m tempted to read further into the data, such as the long decline of W. Bush mentions throughout —– and beyond –— his second term possibly indicating his slide into irrelevance, but I shouldn’t without first inspecting context. Some other time, perhaps. I made a few other conditional frequency distributions using NLTK’s ConditionalFreqDist() class, just for kicks. Here are two, presented without comment (only hints of a raised eyebrow on the author’s part): These plots-over-time lead naturally into the concept of dispersion. Dispersion Although frequencies of (co-)occurrence are fundamental and ubiquitous in corpus linguistics, they are potentially misleading unless one also gives a measure of dispersion, i.e. the spread or variability of a distribution of values. It’s Statistics 101: You shouldn’t report a mean value without an associated dispersion! Counts/frequencies of words or other linguistic elements are often used to indicate importance in a corpus or language, but consider a corpus in which two words have the same counts, only the first word occurs in 99% of corpus documents, while the second word is concentrated in just 5%. Which word is “more important”? And how should we interpret subsequent statistics based on these frequencies if the second word’s high value is unrepresentative of most of the corpus? In the case of my Friedman corpus, the conditional frequency distributions over time (above) visualize, to a certain extent, those terms’ dispersions. But we can do more. As it turns out, NLTK includes a small module to plot dispersion; like so: from nltk.draw import dispersion_plot dispersion_plot(all_words, ['reagan', 'bush', 'clinton', 'obama'], ignore_case=True) To be honest, I’m not even sure how to interpret this plot –— for starters, why does Obama appear at what I think is the beginning of the corpus?! Clearly, it would be nice to quantify dispersion as, like, a single, scalar value. Many dispersion measures have been proposed over the years (see [1] for a nice overview), but in the context of linguistic elements, most are poorly known, little studied, and suffer from a variety of statistical shortcomings. Also in [1], the author proposes an alternative, conceptually simple measure of dispersion called DP, for deviation of proportions, whose derivation he gives as follows: - Determine the sizes s of each of the n corpus parts (documents), which are normalized against the overall corpus size and correspond to expected percentages which take differently-sized corpus parts into consideration. - Determine the frequencies v with which word a occurs in the n corpus parts, which are normalized against the overall number of occurrences of a and correspond to an observed percentage. - Compute all n pairwise absolute differences of observed and expected percentages, sum them up, and divide the result by two. The result is DP, which can theoretically range from approximately 0 to 1, where values close to 0 indicate that a is distributed across the n corpus parts as one would expect given the sizes of the n corpus parts. By contrast, values close to 1 indicate that a is distributed across the n corpus parts exactly the opposite way one would expect given the sizes of the n corpus parts. Sounds reasonable to me! (Read the cited paper if you disagree, I found it very convincing.) Using this definition, I calculated DP values for all words in the Friedman corpus and plotted those values against their corresponding counts: As expected, the most frequent words tend to have lower DP values (be more evenly distributed in the corpus), and vice-versa; however, note the wide spread in DP for a fixed count, particularly in the middle range. Many words are definitely distributed unevenly in the Friedman corpus! A common —– but not entirely ideal –— way to account for dispersion in corpus linguistics is to compute the adjusted frequency of words, which is often just frequency multiplied by dispersion. (Other definitions exist, but I won’t get into it.) Such adjusted frequencies are by definition some fraction of the raw frequency, and words with low dispersion are penalized more than those with high dispersion. Here, I plotted the frequencies and adjusted frequencies of Friedman’s top 30 words from before: You can see that the rankings would change if I used adjusted frequency to order the words! This difference can be quantified with, say, a Spearman correlation coefficient, for which a value of 1.0 indicates identical rankings and -1.0 indicates exactly opposite rankings. I calculated a value of 0.89 for frequency-ranks vs adjusted frequency-ranks: similar, but not the same! It’s clear that the effect of (under-)dispersion should not be ignored in corpus linguistics. My big issue with adjusted frequencies is that they are more difficult to interpret: What, exactly, does frequency*dispersion actually mean? What units go with those values? Maybe smarter people than I will come up with a better measure. Well, I’d meant to include word co-occurrence in this post, but it’s already too long. Congratulations for making it all the way through! :) Next time, then, I’ll get into bigrams/trigrams/n-grams and association measures. And after that, I get to the fun stuff! [1] Gries, Stefan Th. “Dispersions and adjusted frequencies in corpora.” International journal of corpus linguistics 13.4 (2008): 403-437.
http://bdewilde.github.io/blog/blogger/2013/11/03/friedman-corpus-1-occurrence-and-dispersion/
CC-MAIN-2017-43
en
refinedweb
{"_id":"55acf3cd18eefd0d0071d5fb","parentDoc":null,"user":"55a79a4d4a33f92b00b7a111","project":"5540ce1b31827a0d007ab1cc","category":{"_id":"5540e5f131827a0d007ab212","__v":15,"project":"5540ce1b31827a0d007ab1cc","pages":["5540e66b31827a0d007ab217","5540e67731827a0d007ab219","55ac917b5863b817008ae3b4","55acaa4c6b4ff90d00784a92","55acb96418eefd0d0071d553","55acc8aa18eefd0d0071d596","55accd2818eefd0d0071d5a8","55acd06518eefd0d0071d5b6","55acddd7fb7b3c19003739cc","55ace14bfb7b3c19003739d3","55ace3a9fb7b3c19003739d7","55acea24f93f0c0d005b880f","55acf15bf93f0c0d005b8821","55acf3cd18eefd0d0071d5fb","55d1f1c0486de50d00326f17"],"version":"5540ce1c31827a0d007ab1cf","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2015-04-29T14:08:49.271Z","from_sync":false,"order":4,"slug":"android","title":"Android"},"},"__v":56,"updates":["583ea8833a4a941900c5459a","5949ed88106f84001a4fd549"],"next":{"pages":[],"description":""},"createdAt":"2015-07-20T13:12:45.720Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":5,"body":"* [What does Android push token and HWID look like?]()\n* [How can I obtain my Android device push token?]()\n* [How do I find my Google Project Number?]()\n* [What permissions are necessary and what are optional?]()\n* [How accurate is the total number of Android subscribers?]()\n* [Can I use HTML tags in pushes sent to Android?]()\n* [How do I set a notification icon in Android Lollipop (and later versions)?]()\n* [Can I use two GcmListenerService's?]()\n* [Can I use old GcmBroadcastReceiver with the new GcmReceiver?]()\n* [Using Pushwoosh with LeakCanary or AppMetrica libraries]()\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"What does Android push token and HWID look like?\"\n}\n[/block]\nAndroid device push tokens can differ in length (usually below 255 characters), and usually start with **APA**… Push token example:\n```\nAPA91bFoi3lMMre9G3XzR1LrF4ZT82_15MsMdEICogXSLB8-MrdkRuRQFwNI5u8Dh0cI90ABD3BOKnxkEla8cGdisbDHl5cVIkZah5QUhSAxzx4Roa7b4xy9tvx9iNSYw-eXBYYd8k1XKf8Q_Qq1X9-x-U-Y79vdPq\n```\nUsing raw device tokens for targeting specific devices is not the most reliable way because *GCM push tokens tend to change from time to time,* and it’s hard to tell how often it occurs. Therefore, we strongly recommend using **Tags** to send pushes to specific devices. HWID example:\n\n`a9f282012f5dce9e`\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"How can I obtain my Android device push token?\"\n}\n[/block]\nYou can obtain your Android device push token in the console log. Use the logcat tool in [Android Studio]().\n\nOpen **monitor.bat** in `%USERPROFILE%\\AppData\\Local\\Android\\sdk\\tools\\monitor.bat`, connect your device to PC and allow USB debugging in Android settings. Run your application on the device. Locate `/registerDevice`, find the push token for your device to use in Test Devices later on.\n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"Android_Studio_Logcat-1024x443.png\",\n \"1024\",\n \"443\",\n \"#558aa9\",\n \"\"\n ]\n }\n ]\n}\n[/block]\nNote that you can simply launch this file from its location without launching the Android Studio itself.\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"How do I find my Google Project Number?\"\n}\n[/block]\n**Project Number** is automatically assigned by [**Google API Console**]() when you create a project. You can find the **Project Number** in “IAM & Admin” tab of Google API console.\n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"27701da-step-1.png\",\n 319,\n 284,\n \"#ebe5e7\"\n ]\n }\n ]\n}\n[/block]\nThen go to **Settings** tab and here it is, your **Project Number**.\n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"69dde8d-step-2.png\",\n 559,\n 442,\n \"#f6f7f9\"\n ]\n }\n ]\n}\n[/block]\nHere, **852741519435** is the **Project Number** you enter in the app. *Don’t confuse it with Project ID, which is a completely different identifier & is **used only within Google Developers Console!***\nEven though GCM Project Number is a number, make sure you prefix it with the letter “A” when integrating the following Android SDKs into your project: Native, Unity, Marmalade, Adobe AIR and Xamarin.\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"What permissions are necessary and what are optional?\"\n}\n[/block]\nWhen installed on an Android device, the application will ask for the following permissions in connection with Pushwoosh SDK:\n[block:code]\n{\n \"codes\": [\n {\n \"code\": \"<!-- GCM connects to Google Services. -->\\n <uses-permission android:name=\\\"android.permission.INTERNET\\\"/>\\n \\n <!-- GCM requires a Google account to use push notifications. -->\\n <uses-permission android:name=\\\"android.permission.GET_ACCOUNTS\\\"/>\\n \\n <!-- Permission to get device hwid. This permission is optional, if not used, the random hwid is generated -->\\n <uses-permission android:name=\\\"android.permission.READ_PHONE_STATE\\\"/>\\n \\n <!-- Keeps the processor from sleeping when a message is received. -->\\n <uses-permission android:name=\\\"android.permission.WAKE_LOCK\\\"/>\\n \\n <!-- This permission is required for geolocation pushes, and is used to determine whether the device can access the network. -->\\n <uses-permission android:name=\\\"android.permission.ACCESS_NETWORK_STATE\\\"/>\\n \\n <!-- Prevents other applications from registering and receiving the application's messages. -->\\n <uses-permission android:name=\\\"${applicationId}.permission.C2D_MESSAGE\\\"/>\\n \\n <!-- This app has permission to register and receive data message. -->\\n <uses-permission android:name=\\\"com.google.android.c2dm.permission.RECEIVE\\\"/>\",\n \"language\": \"java\"\n }\n ]\n}\n[/block]\nOur SDK doesn’t ask for permission to access images, device contacts, etc.\n[block:callout]\n{\n \"type\": \"info\",\n \"body\": \"**READ_PHONE_STATE** permission: SDK uses this permission to get DeviceId property of the device if ANDROID_ID is not available. You can omit this permission in your APK. However if ANDROID_ID is unavailable, features like AppGroups or cross-application user targeting will not work for this device.\\n\\n**GET_ACCOUNTS** permission: GCM requires a Google account if you target Androids lower than 4.0.4\"\n}\n[/block]\n\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"How accurate is the total number of Android subscribers?\"\n}\n[/block]\nPushwoosh clears unsubscribed Android devices from the database upon receiving the “NotRegistered” response from GCM, which can be returned after the second attempt to reach a specific device. It means that you have to send 2 pushes to an unsubscribed device to have it removed from our database. \nHere’s the most common scenario described in the [GCM documentation]():\n\n1. Your subscriber uninstalls the app.\n2. Pushwoosh sends a message to GCM server.\n3. The GCM server sends the message to your user’s device.\n4. The GCM client on the device receives the message and detects that your application has been uninstalled; the detection details depend on the platform on which the app is running.\n5. The GCM client on the device informs the GCM server that the app was uninstalled.\n6. The GCM server marks the registration ID for deletion.\n7. Pushwoosh sends another message to GCM.\n8. The GCM returns a NotRegistered message.\n9. Pushwoosh removes the push token from your userbase.\n\nIt might take a while for registration ID to be completely removed from GCM. Thus it is possible a message sent in step 7 above gets a valid message ID as response, even though the message will not be delivered to the client app.\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"Can I use HTML tags in pushes sent to Android?\"\n}\n[/block]\nYes, in Android you may use the following HTML tags in order to modify the appearance of a push:\n[block:code]\n{\n \"codes\": [\n {\n \"code\": \"<span style=\\\"color: green;\\\"><b><i><span style=\\\"text-decoration: underline;\\\">Hello world!\\nHello hi hey</span></i></b></span>\",\n \"language\": \"html\"\n }\n ]\n}\n[/block]\nPlace these HTML tags in the **Message** input field, and use them in the API request as well. Note that some Android devices may fail to process these HTML tags properly, but most of the devices we have used for tests displayed formatting properly.\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"How to set a notification icon in Android Lollipop (and later versions)?\"\n}\n[/block]\nIn Android Lollipop icons were changed to be white only. Therefore, if you select **targetSdkVersion >= 21** in your **AndroidManifest.xml** file, Android will only use alpha-channel of the icon. \nSee more on the behavior in [Android documentation]().\n\nThe system ignores all non-alpha channels in action icons and in the main notification icon. Assume these icons will be alpha-only. The system draws notification icons in white and action icons in dark gray. *This is beyond Pushwoosh SDK control*.\n\nHowever, you can revert this behavior to use old style colored icons:\n**1.** Set targetSdkVersion to **19**. This automatically brings back the old behavior.\n\n**OR** \n\n**2.** Create the notification icon according to the [Android guidelines](). As per [documentation](), the system will ignore all the colors. \n**2.1.** Name the icon as **pw_notification.png** and put it in **res/drawable** folder. Pushwoosh SDK will use this icon as default for notifications. \n**2.2.** Alternatively, you can use Remote API and set the `\"android_icon\"` parameter value to the icon image (without file extension).\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"Can I use two GcmListenerService's?\"\n}\n[/block]\nYes you can. Sometimes you need to use Pushwoosh SDK with another SDK that uses GCM as well. Usually one runs into the following problem: **GcmReceiver** launches only **one** service, as a result only one subclassed service will receive push notifications and events. This would be either the one which appears the first in **AndroidManifest.xml** or the one with the higher priority (as outlined here:)\n\nTo solve this problem create master GcmListenerService and proxy events to other services. See the example:\n[block:code]\n{\n \"codes\": [\n {\n \"code\": \"public class GCMListenerRouterService extends GcmListenerService {\\n private void dispatchMessage(String component, Bundle data) {\\n Intent intent = new Intent();\\n intent.putExtras(data);\\n intent.setAction(\\\"com.google.android.c2dm.intent.RECEIVE\\\");\\n intent.setComponent(new ComponentName(getPackageName(), component));\\n \\n GcmReceiver.startWakefulService(getApplicationContext(), intent);\\n }\\n \\n :::at:::Override\\n public void onMessageReceived(String from, Bundle data) {\\n Log.i(\\\"PushwooshTest\\\", \\\"Gcm router service received message: \\\" + (data != null ? data.toString() : \\\"<null>\\\") + \\\" from: \\\" + from);\\n \\n // Base GCM listener service removes this extra before calling onMessageReceived\\n // Need to set it again to pass intent to another service\\n data.putString(\\\"from\\\", from);\\n \\n if (TextUtils.equals(from, getString(R.string.PUSHWOOSH_PROJECT_ID))) {\\n dispatchMessage(PushGcmIntentService.class.getName(), data);\\n }\\n else if (TextUtils.equals(from, getString(R.string.PRIVATE_PROJECT_ID))) {\\n dispatchMessage(PrivateGCMListenerService.class.getName(), data);\\n }\\n\\n }\\n}\",\n \"language\": \"java\",\n \"name\": \"GCMListenerRouterService.java\"\n }\n ]\n}\n[/block]\nRegister this receiver with the highest priority:\n[block:code]\n{\n \"codes\": [\n {\n \"code\": \"<service\\n android:name=\\\".GCMListenerRouterService\\\"\\n android:enabled=\\\"true\\\"\\n android:exported=\\\"false\\\" >\\n <intent-filter android:priority=\\\"100\\\" >\\n <action android:name=\\\"com.google.android.c2dm.intent.RECEIVE\\\" />\\n </intent-filter>\\n</service>\",\n \"language\": \"xml\",\n \"name\": \"AndroidManifest.xml\"\n }\n ]\n}\n[/block]\nUse the following code to handle push token changes:\n[block:code]\n{\n \"codes\": [\n {\n \"code\": \"Subscription<RegistrationSuccessEvent> registrationSuccessEventSubscription = EventBus.subscribe(RegistrationSuccessEvent.class, new EventListener<RegistrationSuccessEvent>() {\\n @Override\\n public void onReceive(RegistrationSuccessEvent event) {\\n String token = event.getData();\\n //send the push token to the other service\\n }\\n});\",\n \"language\": \"java\"\n }\n ]\n}\n[/block]\n\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"Can I use old GcmBroadcastReceiver with the new GcmReceiver?\"\n}\n[/block]\nYes you can and this works out of the box. **GcmReceiver** and **GcmBroadcastReceiver** - are broadcast receivers and both will receive all messages.\n[block:api-header]\n{\n \"title\": \"Using Pushwoosh with LeakCanary or AppMetrica libraries\"\n}\n[/block]\nWhen you integrate analytics tools such as LeakCanary, AppMetrica or others, these libraries start a new process, creating new instance of the app. Since you can't listen for the push notifications in another process, this results in `java.lang.NullPointerException` being thrown.\n\nIf you call `registerForPushNotifications` inside `Application.onCreate()`, you should check if you are in the application's main process. Use the following code to perform this check:\n[block:code]\n{\n \"codes\": [\n {\n \"code\": \"List<ActivityManager.RunningAppProcessInfo> runningAppProcesses = ((ActivityManager) getSystemService(Context.ACTIVITY_SERVICE)).getRunningAppProcesses();\\nif (runningAppProcesses != null && runningAppProcesses.size() != 0) {\\n for (ActivityManager.RunningAppProcessInfo runningAppProcessInfo : runningAppProcesses) {\\n boolean isCurrentProcess = runningAppProcessInfo.pid == android.os.Process.myPid();\\n boolean isMainProcessName = getPackageName().equals(runningAppProcessInfo.processName);\\n if (isCurrentProcess && isMainProcessName) {\\n\\n Pushwoosh.getInstance().registerForPushNotifications(...);\\n break;\\n }\\n }\\n}\",\n \"language\": \"java\"\n }\n ]\n}\n[/block]","excerpt":"","slug":"android-faq","type":"basic","title":"Android FAQ"}
http://docs.pushwoosh.com/docs/android-faq
CC-MAIN-2017-43
en
refinedweb
tag:blogger.com,1999:blog-231604572016-03-06T13:23:05.197+00:00Archeology of the FutureA website about UK Science Fiction, digging through the past to uncover the future.Archeology of the Future as a force for good?<object width="425" height="350"><param name="movie" value=""></param><embed src="" type="application/x-shockwave-flash" width="425" height="350"></embed></object><br /><br /><span style="font-family:arial;">Breaking a lengthy silence, Archeology of the Future emerges from deep cover to share this clip of alien fighting machines bringing their might to fight an earthly battle...</span><br /><br /><span style="font-family:arial;">The originator of this clip ( stevethedalek!) describes it thus:</span><br /><br /><span style="font-family:arial;">"[A] Compilation of my appearances on Sky, BBC and ITN News at the G8 finance ministers' meeting in London on 11th June 2005 (the day that 'Bad Wolf' aired!)"</span><br /><br /><span style="font-family:arial;"...</span><br /><br /><span style="font-family:arial;".</span><br /><br /><span style="font-family:arial;">Imagine that the Daleks really were a political force... Where would they fit in? Who would they support?</span><br /><br /><span style="font-family:arial;".</span><br /><br /><span style="font-family:arial;">Thinking about it, the Dalek on the cobbles is my own version of </span><a style="font-family: arial;" href="">Robby in the Suburbs</a><span style="font-family:arial;">, a point where Science Fiction and ordinary events meet, causing some kind of rip in time and space that causes fantasy to spill out like a modern version of Pandora's Box.</span><br /><br /><span style="font-family:arial;">Technorati Tags: </span><a style="font-family: arial;" href="" rel="tag">doctor who</a><span style="font-family:arial;">, </span><a style="font-family: arial;" href="" rel="tag">british science fiction</a><span style="font-family:arial;">, </span><a style="font-family: arial;" href="" rel="tag">daleks</a><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future Invasion Earth 2150AD: "It appears to be landing in the vicinity of Sloane Square!"<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-style: italic;font-family:arial;" >It’s 2150. In the ruins of a destroyed London, the last members of the human resistance plan and scheme, surviving in the shadows, living like rats in</span><span style="font-style: italic;font-family:arial;" > the walls of the world. All of Bedfordshire is a concentration camp, thousands dying, worked to death by the invaders and those forced into their</span><span style="font-style: italic;font-family:arial;" > service. Nothing remains of civilisation, the invaders rule from the air like gods, picking off any mortal they choose from high in the</span><span style="font-style: italic;font-family:arial;" > clouds.</span><span style="font-style: italic;font-family:arial;" > The survivors stay quiet, hunger ruling the fields and forests; hanging like a mist over</span><span style="font-style: italic;font-family:arial;" > the buildings turned to rubble. Time stopped in the 1960s, nothing new has come into being, reduced to a never-ending</span><span style="font-style: italic;font-family:arial;" > struggle for survival the human race survives, picking through roots and scavenging, a mere irritation to the expressionless beings remaking the world in their own image…</span><br /><br /><span style="font-family:arial;">In an introduction to his 1971 collection of short stories ‘Vermillion Sands’, J.G. Ballard makes the statement:</span> <span style="font-family:arial;"><br /><br /><span style="font-style: italic;">“It is a curious paradox that almost all</span></span><span style="font-family:arial;"><span style="font-style: italic;"> science fiction, however far removed in time and space, is really about the present day. Very few attempts have been made to visualize a unique and self-contained future that offers no warnings to us.”</span></span> <span style="font-family:arial;"><br /><br />Daleks’ Invasion Earth: 2150AD is a film that is often interpreted as a science fiction treatment of British anxieties about the Second World War, with the Daleks cast as the Nazis, the Dalek mine as a forced labour camp and the resistance movement of the film</span><span style="font-family:arial;"> modelled upon the various partisan groups that audiences were used to seeing on screen. Released in 1966, Daleks’</span><span style="font-family:arial;"> Invasion Earth is a film for children who weren’t even born during the Second World War, a war that</span><span style="font-family:arial;"> finished over twenty years earlier. It strikes Archeology of the Future that it is more correct to view Daleks’ Invasion Earth as a film</span><span style="font-family:arial;"> almost nostalgic for the war and the strange certainty of terror it conferred upon Britain, at a time when upheavals of other, more nebulous kinds were occurring in the country and the world at large. </span> <span style="font-family:arial;"><br /><br />The film itself, the second Milton Subotsky produced featuring Terry Nation’s creations The Daleks and the second featuring Peter Cushing as a character called</span><span style="font-family:arial;"> Doctor Who, is a children’s romp with a grim undertone. Released during the school holidays at the height of Dalekmania, it begins with Bernard Cribbins as an affable copper Tom Campbell stumbling into the TARDIS after trying to stop a jewellery heist. As the TARDIS, because of a broken Chameleon Circuit, is stuck in the form of a Blue Police Box this is not an impossible mistake to make. Finding a similarly affable old man with white hair and a nice line in corduroy jackets (Peter Cushing as ‘Doctor Who’), his niece Louise (Jill Curzon), and his granddaughter Susan (Roberta Tovey), Tom is unsurprisingly puzzled. After a little bit of business about the TARDIS being bigger on the inside than on the outside and establishing that it travels in time and space, we are whisked to future Earth, only to find London in</span><span style="font-family:arial;"> ruins.</span><span style="font-family:arial;"> </span> <span style="font-family:arial;"><br /><br />Following the classic Doctor Who structure of separating the Doctor from his companions, the film wastes no time splitting Doctor Who and Tom from Louise and Susan, pitching the lot of latter in</span><span style="font-family:arial;"> with the resistance and getting the former quickly into a bit of bother with The Daleks, who glide</span><span style="font-family:arial;"> menacingly around the bombed out city with their human slaves, the Robomen. <;">Susan and Louise are taken down into the underground, literally, via Embankment tube station, meeting the three major figures of the resistance. David is a young, serious fighter, played with conviction by Ray Brooks. Wyler is a tough scot, played very straight by Andrew Keir. Dortmun (Godfrey Quigley) is a kind of Douglas Bader, in both the sense that he believes the resistance must take the fight to the Daleks and the fact he survives and thrives despite injuries or disabilities that leave him in a wheelchair. The resistance is working on bombs to battle the Daleks, drawing around the radio when the Daleks make broadcasts ordering them to surrender.</span> <span style="font-family:arial;"><br /><br />The film quickly establishes that humanity has been more or less vanquished by the Daleks, who bombarded the earth with meteors, arriving in their spaceship(s) to mop up. They take prisoners, ‘robotising’ the most resourceful and sending the rest to an immense work camp in Bedfordshire.</span><span style="font-family:arial;"> These Robomen, dressed in black PVC jumpsuits and radio helmets with mirrored visors are the</span><span style="font-family:arial;"> Daleks main troops. Patrolling the city in almost mechanical fashion and armed with whips and</span><span style="font-family:arial;"> guns of Dalek origin, they are literally brainwashed into collaboration. Doctor Who and Tom narrowly miss being ‘robotised’ when they are captured and taken to the Dalek ship, the process interrupted by a resistance attack which ends in failure.</span> <span style="font-family:arial;"><br /><br />After this first act, setting up as it does the ruined city and showing just how much the Daleks have destroyed, the second act concerns the country under the Daleks, with the various characters weaving their way to an eventual rendezvous in Bedfordshire, the centre of the Daleks activities. </span> <span style="font-family:arial;"><br /><br /></span><span style="font-family:arial;">Tom and Louise, trapped on the Dalek spaceship, escape when it lands, only to discover they have escaped into a giant forced labour camp, where teams of prisoners, Robomen armed with whips guarding and running them, are worked to death. Susan escapes London with Wyler,</span><span style="font-family:arial;"> leaving a message written in chalk at Embankment for the Doctor to meet her in Watford. Escaping in a van, Dortmun is killed by Daleks in the process, a fate that Susan and Wyler narrowly avoid when the Dalek spaceship targets them from the air as they drive down cold deserted country lanes, forcing them to continue by foot. The Doctor, saved by David from being robotised, is chased through the</span><span style="font-family:arial;"> streets by Daleks. David saves him again leading him through a manhole down into the sewers, end then back to the resistance base. Reading through Dortmun’s notes, he decides that they must make their way to Bedfordshire, pronounced in a lovely eccentric fashion by Cushing as ‘Bedford-Shire’, to get to the bottom of the Daleks plan.</span> <span style="font-family:arial;"><br /><br />When Doctor Who and David are travelling across country to Bedford, the well-dressed and raincoated Brockley captures them, taking them to the Dalek camp, despite his puzzlement at their</span><span style="font-family:arial;"> enthusiasm to go there. A fascinating character, Brockley, played by Philip Madoc, is a kind of spiv</span><span style="font-family:arial;"> or quisling, a pantomime profiteer, selling information to the Daleks and food to the camp inmates.</span><span style="font-family:arial;"> In one scene, after Doctor Who and David have been sleeping, we see Brockley cooking something in a tin over a small fire. When David says that it smells good, Brockley gestures for David to take some. When David bends to pick it up, Brockley gives the can a petulant kick and spills the contents.</span><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-family:arial;">There is pure malice in Brockley with no redeeming features, a kind of self-serving spoilt brat, lacking the backbone and moral fibre that, with the rosy glow of nostalgia, all Britons displayed in the face of threat. He is the nostalgic vision of those who made profit from the conditions of war, a ruthless, immoral turncoat, suspiciously flash and well groomed, a code for everything that the earthy, clean cut Britons of the propagandised national image were not supposed to be. There is no hint of the moral complexity of the situation that occupation actually forces upon people, Brockley is a bad ‘un, a ‘flash Harry’ who eventually gets his comeuppance</span><span style="font-family:arial;"> when he is disposed of by his masters as he informs on the</span><span style="font-family:arial;"> rebellion. In some ways, he represents the view that those who</span><span style="font-family:arial;"> ‘go over to the other side’ do so out of a kind of spite towards the country that raised them.</span> <span style="font-family:arial;"><br /><br />Wyler and Susan have a similar experience. Deciding to make for Bedford on foot, the burly man and the bubbly schoolgirl in her red dress come across a house by a bridge, hidden amongst the misty trees. Stopping for food, they find an old women and her daughter. They are invited to stay the night and fed from the meagre rations the Daleks give the women for cleaning the clothes of people at the camp. Wyler wakes in the night to find the daughter sneaking in with a bag full of food. When he tries to wake Susan silently so that they can make their escape, after hearing the women cackling at their good fortune having, in himself and Susan such a prize to trade, he draws back the curtain to discover a Dalek waiting for them outside.</span><br /><br /><span style="font-family:arial;">This episode has a paradoxical feel of both dream and grim reality. The women, dressed like medieval peasants in their house over the misty bridge feel like characters from fable or fairytale. Indeed, when Archeology of the Future saw this film as a child, we believed that the purpose of the women was to catch unwary travellers and turn them over to the Daleks, so much did they seem rooted in the logic of myth, like the witch in her gingerbread house, waiting intently forever for</span><span style="font-family:arial;"> innocents to arrive and fulfil her function. The reality of the situation, the hunger, the ill health, the</span><span style="font-family:arial;"> selfishness necessary for survival, the facts of most collaboration only really hit us as adults. Britain</span><span style="font-family:arial;"> in this film is a country that has been, to borrow a phrase, almost bombed into the Stone Age.<;">The third act reunites all of our chrononauts, along with David and Wyler, inside of the Daleks forced labour camp. Here Doctor Who devises a way of disrupting the Dalek plan to detonate a bomb inside of the Earth’s core, part of a hazy plan to ‘pilot the Earth like a spaceship’ back to the area of their planet. Redirecting the bomb, it increases the Earth’s magnetic field and destroys the Daleks, freeing humanity and allowing just enough time for Doctor</span><span style="font-family:arial;"> Who to return Tom Campbell to the present in time to foil the jewel thieves, before the jaunty, jazzy theme tune that plays over the cast list tells us it’s time to pick up our things and head out of the cinema into the bright, new world of mid-sixties Britain. </span> <span style="font-family:arial;"><br /><br />Considered in terms of the world that it is born into, Daleks Invasion Earth looks backwards rather than forwards for its inspiration. It arrived in a world where Pop Art, youth cult, rock ‘n’ roll, Tamla Motown and the Beatles and Stones were asserting the primacy of youth, where Harold Wilson and Tony Benn were exploring the possibilities of new markets and new technologies with MinTech, where the Telecom Tower would emerge glittering like a space craft landed at the heart of London, where everything was faster, shinier, newer, younger, more brash, less deferential, less respectful. It is a warning to the young, rather than a discussion for the old. It reduces fears and experiences of World War II to a fable told to damp the spirits of this world of uncertainty and possibility.</span> <span style="font-family:arial;"><br /><br />There are many explicit references to the lore of World War II, to the horrid litany of abuses carried out during its duration. When the chrononauts first arrive, Susan remarks that there are no people, and ‘no birds either’, bringing to mind the horror of death camps. At the Dalek mine in Bedfordshire, Brockley trades food for a handful of rings, again making reference to the horror of death camps, the implication that rings are not the only things of precious metal that are brought to Brockley. Near the beginning of the film, Doctor Who and Tom pass through a room that has a prominent framed picture of a Spitfire or a Hurricane. The use of the Underground as a refuge and</span><span style="font-family:arial;"> a place were industry could continue is again an obvious reference, (<a href="">see this article for more about the underground</a>). The Dalek radio broadcasts, with their mocking tone, again speak to the lore of the War rather than an actual experience of it. The film is nostalgic for the War, in that it strips a complicated narrative down to its most evocative and emotive images, removing them from their context and making them emotional triggers only.<br /><br /></span><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-family:arial;">Like a fable, Daleks Invasion Earth reduces the characters and situations to ciphers and removes all notions of context or politics from the equation. With the characters of Brockley and the Mother and Daughter, we see that collaborators are resoundingly bad. With the Daleks, we are given an</span><span style="font-family:arial;"> image of invasion that removes any of the difficult ambiguities that arise when examining history and the motivations of people. The Daleks are evil, because they are evil. There are no geopolitics, no history, no relationship before their invasion. In many ways, they are the reassuring bogeyman conjured up by wartime propaganda, an all destroying; unreasonable force that can only be beaten rather than accommodated or lived with. In a time of total war, there is a comforting certainty, and an escape from the anxiety of having to make responsible political choices, in having an implacable enemy to oppose.</span> <span style="font-family:arial;"><br /><br />One aspect of the film that is always pointed out is that it doesn’t look to be set in 2150 AD, but about five minutes from when it was filmed. Based on the underlying message of the film, this doesn’t pose as much of a problem as most reviewers suggest. Given the film’s deep hankering for the certainties of the War, the message to the present seems to be ‘don’t get too excited about relentless progress and forward motion, it could all just stop’. This is a very English response to the apocalypse, an anxiety about adversity stopping the march of progress and the advancement of civilization. American popular culture, overall, has a much more rosy view of the breakdown of civilisation, with many seeing it as kind of ‘back to nature’ return to the essentials of life where, at worst, you and your family will be enjoying the kind of life someone on the Frontier might have had.</span><span style="font-family:arial;"> British popular culture, on the other hand, is usually concerned with what will be lost. If there was an invasion by immensely powerful alien beings tomorrow, the process of commodity capitalism as we know it now would stop. There wouldn’t be the time or resources to create new and exciting shiny things to signify that the future had arrived. Although it is a stretch to believe that the Britain of Daleks Invasion Earth is two hundred or so years hence of the date that the film was made, the cultural and technological entropy that it portrays does fit with its overall orientation. < top of all this subtext, there is a fast moving children’s film, with excellent set pieces and some lovely, local references. When Tom and Doctor Who see the Dalek saucer, a brilliant bit of Dan Dare style design, landing in London, Doctor Who exclaims: “It appears to be landing in the vicinity of Sloane Square!”. The fact that Bedfordshire becomes a huge concentration camp adds a huge thrill to the proceedings; as does the fact that characters need to travel across country to get there, plunging into the chilly interior of the landscape. The use of the London Underground as a hiding place for the human underground adds a wonderful sparkle to Archeology of the Future’s enjoyment.</span> <span style="font-family:arial;"><br /><br />In many ways, the Daleks are amongst one of the most recognisable artefacts of the bright, brash, shiny, poppy mid-sixties. They achieved huge popularity and iconic status precisely because they were colourful, exciting and from a very low, mass-culture source, beginning with the kids then finding more and more fans amongst the grown-ups… Looking back at the film now, it almost seems to say, in a grumpy, reactionary way, that we should be careful, lest the new world of Pop and bright colours turn around and wreck our cities and enslave us…<br /><br /><a href=";amp;amp;camp=1634&creative=6738&path=ASIN%2FB00006CFJM%2Fqid%3D1148218478%2Fsr%3D8-1%2Fref%3Dsr_8_xs_ap_i1_xgl">Buy Daleks Invasion Earth: 2150 AD from Amazon.co.uk here.</a><img src="" alt="" style="border: medium none ! important; margin: 0px ! important;" border="0" height="1" width="1" /><br /><br />Technorati Tags: <a href="" rel="tag">doctor who</a>, <a href="" rel="tag">peter cushing</a>, <a href="" rel="tag">daleks</a>, <a href="" rel="tag">science fiction</a>, <a href="" rel="tag">1960s</a>, <a href="" rel="tag">british cinema</a><br /><br /></span><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future Guest, director of Quatermass Xperiment and Quatermass II dies<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><br /><a style="font-family: arial;" href="">Variety.com - Val Guest Obituary</a><br /><br /><span style="font-family:arial;">It with great sadness that Archeology of the Future was informed this evening of the death of Val Guest, Director of 'The Quatermass Xperiment', 'Quatermass II', 'The Day The Earth Caught Fire' and 'The Abominable Snowman'. Val died Wednesday, May 10 in Palm Springs, California at the age of 94.</span><br /><br /><span style="font-family:arial;">If you look at his filmography </span><a style="font-family: arial;" href="">here</a><span style="font-family:arial;">, you can see that Val had a hand in everything from the bleak majesty of Quatermass, to the youth cult of Expresso Bongo to the ill lit, freezing cold smut of 'Confessions...'. As a writer, director, writer of music, producer and even actor Val was around for almost every weird twist and turn taken by the British film industry from the 1930s onward.</span> <span style="font-family:arial;"><br /><br />A sad loss that even the hard headed Quatermass of Val's two Quatermass films would have to mourn...<br /><br /></span><span style="font-family:arial;">Archeology of the Future offers these two previous articles in memory of his work:<br /><br /></span><a style="font-family: arial;" href=""></a><br /><br /><a style="font-family: arial;" href=""></a><br /><br /><span style="font-family:arial;">Archeology of the Future feels saddened that Val's death seems to have been mentioned nowhere of any note. The UK does have a film heritage, but we seem to want to forget it, as if it makes us uncomfortable because it was different to Hollywood. It's as if we feel that people like Val, who were involved in British commercial cinema for all of their careers, are somehow embarrassing, being a reminder of a time when we thought that as a country we could make films on our own terms.<br /><br />A sad death, for the most part unremembered in the country and industry where he worked for the majority of his life.<br /><br />Technorati Tags: <a href="" rel="tag">science fiction</a>, <a href="" rel="tag">british cinema</a>,<br /></span><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future Therapy: Archeology of the Future guest blogs on SF Memes<a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><br /><a href=""><span style="font-family:arial;">Meme Therapy: Archeology of the Future guest blogs on SF Memes</span></a><span style="font-family:arial;"><br /><br />In response to a discussion at Meme Therapy, Archeology of the Future has managed to escape from the twisted wreckage of Winnerden Flats for long enough to discuss Science Fiction memes. Responding to a previous discussion,.<br /><br />Science Fiction is, in its most potent form, only a distorted mirror of the present. Science Fiction ideas are tools and machines for making new and surprising understanding...</span><br /><br /><span style="font-family:arial;">The postcard that accompanies this post can be considered a trailer for an excavation that we're currently undertaking at the request of a regular reader. Just taking in what's going on in the photograph should give you an idea of what's in store when we present what we've dug up...</span><br /><span style="font-family:Arial;"></span><br /><span style="font-family:Arial;">Thanks to <a href="">Meme Therapy</a>.</span><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future"We're on the verge of something so ugly": Quatermass II<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-family:arial;">It is fifty years ago or so, in a place not so alien… In the greens and browns of the countryside, New</span><span style="font-family:arial;"> Towns of tile, concrete and rational planning spring up. Roads arrive metre by metre to join them. Places of work, factories and refineries grow and evolve like intricate corals between. A government still wary of the people that it works to protect plans and plots for the worst</span><span style="font-family:arial;"> of all worlds. Wary people keep quiet; blowing the whistle will destroy a chance of a comfortable life for them</span><span style="font-family:arial;"> and their families. A man tries to turn the tide of history, aware that his vision of the future, clean and bright, is slipping away, consumed by the squalor of plans for war and</span><span style="font-family:arial;"> national hubris… </span><br /><br /><span style="font-family:arial;">Quatermass II, the second film Quatermass film produced by Hammer, is often held up as a kind of British answer to American films such as ‘Invasion of The Bodysnatchers’. While there are similarities, with people being controlled or replaced by a malevolent alien force, there are also</span><span style="font-family:arial;"> significant and interesting differences. </span><br /><br /><span style="font-family:arial;">Often coming off a critical third when compared to the two Quatermass films that bookend it, Quatermass II is generally seen to lack the imagination of its siblings, producing a hackneyed exploration of themes treated far more excitingly in US films of the period. What this analysis misses is the very thing that gives Quartermass II its particlular power. The film is imbued with v</span><span style="font-family:arial;">ery</span><span style="font-family:arial;"> a particular sense of British unease, and is, in fact, extremely responsive to the domestic concerns and unease of a nation nearly a decade out of the War but still in many ways living with it.</span><br /><br /><span style="font-family:arial;">Quatermass returns, played again by famously thirsty American actor Brian Donlevy, this time in a far more precarious position than when we last saw him in The Quatermass Xperiment. The future of his British Rocket Group is hanging in the balance, a grand plan to colonise the moon passed over for funding. Still a hard headed driving force, eyes locked upon achieving his aims, we now</span><span style="font-family:arial;"> see Quatermass as others see him: A grumpy man with big ideas, pacing the corridors of his own scientific fiefdom, his life rotating around the aerodynamic rocket that pokes into the overcast sky above his experimental research establishment.</span><br /><br /><span style="font-family:arial;">Travelling back from another meeting with the people from the ministries, Quatermass runs into a</span><span style="font-family:arial;"> couple in trouble, beginning about a chain of events which climax in the ending of an insidious alien</span><span style="font-family:arial;"> threat and a perversion of his beloved rocket, the Quatermass II, which like the predecessor we saw in our previous meeting with him, brings destruction rather than scientific hope. He can’t even manage to be civil to his own staff, complaining that they are wasting valuable time tracking a succession of meteorites that entering the atmosphere with precise regularity.</span><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-family:arial;">Investigating the claims of the young couple, Quatermass travels into the green emptiness of the</span><span style="font-family:arial;"> English countryside, travelling up a new road that quite literally ends under his feet in a mess o</span><span style="font-family:arial;">f government property signs and distant chainlink fences. Tracking back, he finds the remains of a village, Winnerden Flats; he also finds a huge copy of his moon base, all domes and pipes, sitting seemingly undiscovered amongst the hills. His colleague, uncovering one of the meteorites we’ve seen being tracked, is struck down by what Quatermass describes as ‘a big black bubble’ shooting from it. Uniformed and armed guards arrive, complete with breathing apparatus, bundling Quatermass’s colleague into a car and sending Quatermass on his way.</span><br /><br /><span style="font-family:arial;">Nearby, Quatermass finds a New Town in the early stages of construction. Housing for the workers at the plant, it is still in the process of coming into being and carving its existence into the</span><span style="font-family:arial;"> countryside. It has wide, planned but unfinished streets that end in empty fields and a general air of</span><span style="font-family:arial;"> desolation.</span><br /><br /><span style="font-family:arial;">The community centre, the only bit of the town with any sign of activity, is plastered with posters extolling the necessity of secrecy. Trying to ring for help, Quatermass is stopped by a community leader who tells him directly not to stir up trouble. The plant produces synthetic food, no more, no less.</span><br /><br /><span style="font-family:arial;">Travelling back to London, Quatermass is put in touch with MP Vincent Broadhead, implicitly Labour, who is as unconvinced as Quatermass that the production of synthetic food is what is actually going on at Winnerden Flats. Broadhead has been campaigning and asking questions, mainly about the huge amounts of public money being channel into the project and, having finally arranged an official site visit, invites Quatermass to accompany him.</span><br /><br /><span style="font-family:arial;">Once there, Quatermass, Broadhead and other visitors are guided around the sprawling plant, all tangles of huge pipes, domes and gasholders, empty and exposed. Quatermass manages to sneak off, finding an empty infirmary and no sign of his associate. Their guide is most forceful that they remain with the group, as they have a schedule to keep. </span><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-family:arial;">Broadhead manages to slip away, and it becomes</span><span style="font-family:arial;"> clear to Quatermass that there is some sort of</span><span style="font-family:arial;"> unpleasant fate awaiting him. The guide’s brittle efficiency soon evaporates as Quatermass</span><span style="font-family:arial;"> manages to escape, only to find Broadhead screaming covered in horrid black ooze, slumping to his death amongst the steel and concrete. Under fire, Quatermass manages to escape and make it back to London, taking his experiences to his old friend Inspector Lomax.</span><br /><br /><span style="font-family:arial;">Despite what Quatermass saw, the newspaper shows Broadhead to be alive and well. Lomax, hoping to ask his superior for advice notices a tell-tale mark on his hand, one not dissimilar to the marks that Quatermass has described on those who have been exposed to the meteors… Picking up a drunken yet sharp reporter played by Sid James, a long way from Carry On and all the better</span><span style="font-family:arial;"> for it, Quatermass and Lomax set out to get to the bottom of what seems to be no less than a silent,</span><span style="font-family:arial;"> careful invasion, where figures of authority are being controlled from afar… </span><br /><br /><span style="font-family:arial;">Giving a commentary on the current DVD edition of Quatermass II but nowhere credited on the disc itself as doing so, Nigel Kneale, who handled the adaptation of his script from the television series he had written the previous year for the BBC, says that in the 1950s, there were plenty of things to be afraid of. He talks about a country that was secret prone, where too much was taken for granted. All over, actual research projects and military sites were being created at the same time as New Towns were being built. It is the Hemel Hempstead New Town Development Corporation that is acknowledged in the opening credits, alongside the Shell Haven Refinery, for providing the settings for events. Rather than being set in an imagined landscape of secrecy and change, Quatermass is</span><span style="font-family:arial;"> actually set in a real landscape minded for its maximum discomfort. He talks of consciously setting the events in “<span style="font-style: italic;">the new British scene of the 1950s</span>”, of how it was "<span style="font-style: italic;">easy to imagine anything at that time</span>” with the “<span style="font-style: italic;">guarded, secretive announcements of the War Department</span>” where “<span style="font-style: italic;">no-one knew who was responsible for what</span>” and where “<span style="font-style: italic;">people took a great deal more on trust than they should have</span>”. As we have seen, the UK really was a place where rockets took off and where huge structures did spring up guarded by secrecy.</span><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-family:arial;">Britain as portrayed in Quatermass II is a</span><span style="font-family:arial;"> threatening, dark place full of dislocations and only partially perceived threats. Shot under overcast skies and in squally winds, even the familiar green and pleasant lands have a sense of menace and bleak emptiness, as if the pastoral celebrations of Powell and Pressberger and the little people in triumph of Ealing that so informed Britain’s image of itself during the war years had turned sour in the mouth of post war realities.</span><br /><br /><span style="font-family:arial;">The film has a particularly British sense of localism, displaying a pre-flight and pre-mass car ownership understanding of landscape. Advances in popular mechanical modes of transport, as well as ever proliferating media outlets, mean that we have progressively collapsed space and time, folding together geographically distant points until we feel that we know intimately everything about the UK. We can hold an image of what we think the UK is in our minds, so used are we to moving almost instantly from place to place, town to town, eating up the landscape and carrying it in</span><span style="font-family:arial;"> our heads as an illusion of a totality. In Quatermass II, the landscape is unknown, unknowable. It is possible to go slightly off the usual path and find a whole huge development, unknown to the world at large. New towns are springing up, their inhabitants as distant from London as settlers on a distant island. When Quatermass asks the way to the police station in Winnerden Flats, he is told they don’t have one, the implication being not that this new town is so peaceful and well ordered that it doesn’t need one, but that it is like a frontier town, pushing beyond the reach of the law. When Quatermass finds the site of the alien project, his colleague remarks, “Maybe we’ve struck a rival project”, as if it were to be expected that the countryside would be littered with top-secret establishments. Scenes of uniformed, gasmasked troops spreading out across fields like a virus, occupying the unfamiliar terrain, heighten this sense of the unknown regarding one’s own county. </span><br /><br /><span style="font-family:arial;">There is a sense of fading idealism in Quatermass and his Rocket Group that very much mirrors the feelings of engineers who worked on the real British Space Programme. (<a href="">See this previous post for more details</a>) Relaying the results of his meeting in London to his staff, Quatermass says:</span><br /><br /><span style="font-style: italic;font-family:arial;" >“No more money… To date you’ve spent a lot of money on a rocket that isn’t even safe to launch. At the moment we have projects of far more importance. Isn’t it important enough to be the first to</span><span style="font-style: italic;font-family:arial;" > build a colony on the moon? To get men there against the odds?”</span><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-family:arial;">While it may be inferred from the plot that the project of more importance is the alien bridgehead at Winnerden Flats, there is a much stronger indication that it is the military application of Quatermass’s work which is of interest, a point underlined by the use of his rocket as a bomb to destroy the alien base and the huge, dirty and unpleasant creatures that emanate from it at the films climax. The film seems concerned with this inversion, with the sullying of science by its association with The Bomb. As Nigel Kneale says:<span style="font-style: italic;"> “There was a lot of cheap joy when the British H</span></span><span style="font-family:arial;"> <span style="font-style: italic;">Bomb exploded… It was not healthy that people should rejoice such a thing”</span>. Quatermass is concerned that his work should be put toward constructive, not destructive purposes, but the plot systematically turns this wish on its head. His rocket, the Quatermass II of the title, doesn’t bring life to other worlds; it destroys life on this planet. His moon base design doesn’t protect humanity from the hostile elements; it encloses hostile elements to nurture a hostile form of life. Even this form of life represents ‘getting your hands dirty’, comprising as it does of a huge liquid thing resembling nothing more than a kind of caustic manure, leaving a black, stinking trail behind it. The plant itself is said to produce food but in fact produces the opposite, materials that kill rather than sustain life. There are two wonderful shots in the film of a huge dome, with the party of visitors walking towards it, followed later by Quatermass running away from it alone. Standing as it does on a flat horizon, the dome looks like nothing less than a huge black sun, taking in light and life rather than giving it out.</span><br /><br /><span style="font-family:arial;">In a wonderful shot of the New Town earlier on in the film, a street of houses, empty but for a mother pushing a pram, simply ends in fields. This sense of artificiality is another factor of the film that may not travel well. The people of Winnerden Flats New Town are dependent on the Refinery for work, marooned as they are. The scene where all ages are having a dance or function at the community centre, with drinks served by a brassy young lady from behind a makeshift bar, is at once a perfect expression of the dreams of town planners and a throwback to the frontier towns we are used to seeing in Westerns. They have no choice but to be complicit in the goings on at the Refinery, only rebelling and marching upon it when the actions of Quatermass and his colleagues bring about a brutal repression. The worry about artificiality versus organic growth in communities is a British preoccupation; with town planning being both demonised and lionised depending upon circumstance. The suggestion that the relocation of people to newly created places may introduce them to new pressures and problems may seem obvious now, but at the time of the films production the first wave of post war New Towns was in full swing. It would eventually result in the construction of twenty-nine 'New Towns'. Twenty-three towns in England and Wales and six in Scotland, Stevenage being the first. (<a href="">See this previous post for more on New Towns</a>)</span><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-family:arial;">Quatermass II has a sense of unease in the future that can only be experience by engaging with the present, which it shares with many British New Wave films that followed it. It also suggests that there are more problems in Britain than can be solved by the final destruction of the Refinery at the films climax. While it was possible to vanquish the alien foe, the conditions of the country continue beyond the titles. There is a sense that, while there were extenuating circumstances, the way in which Britain functions made it ripe for an invasion of the sort detailed in the film, with secretive governments and deferential workers conspiring to allow unspeakable things to happen. At one point Quatermass, in despair, exclaims: "We're on the verge of something so ugly."</span><br /><br /><span style="font-family:arial;">In many ways, Quatermass II is the most British of the three Quatermass films, depending as it does on topical, domestic developments for its sense of uneasiness. In a genre widely considered American both in parentage and in practice, Quatermass II unsurprisingly fails to push the right buttons for some audiences.</span><br /><br /><span style="font-family:arial;">Archeology of the Future, on the other hand, thinks it’s a bleak, British wonder.<br /><br />In a strange piece of life imitating art, the climax of Quatermass II had an almost exact mirror in real life when Hemel Hempstead was the scene of the biggest fire in Europe since World War II when Buncefield oil depot exploded. The BBC reported the initial explosion like this:<br /><br /><a href=""></a><br /><br />This site includes some amazing pictures of the huge plumes of smoke and the devastation caused:<br /><br /><a href=""></a><br /><br />Suddenly, the world doesn’t seem so predictable or cosy…<br /><br /><a href=";amp;amp;camp=1634&creative=6738&path=ASIN%2F6305807922%2Fqid%3D1147044791%2Fsr%3D8-1%2Fref%3Dsr_8_xs_ap_i1_xgl">Buy Quatermass II from amazon.co.uk here</a><img src="" alt="" style="border: medium none ! important; margin: 0px ! important;" border="0" height="1" width="1" /><br /><br />Technorati Tags: <a href="" rel="tag">science fiction</a>, <a href="" rel="tag">Quatermass</a>, <a href="" rel="tag">british science fiction</a>, <a href="" rel="tag">post war britain</a>, <a href="" rel="tag">Nigel Kneale</a>, <a href="" rel="tag">1950s</a><br /><br /></span><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future Conservatory: A virtual garden?<a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><br /><a href=""><span style="font-family:arial;">Come and explore the Barbican Conservatory in this 360 degree model!</span></a><span style="font-family:arial;"><br /><br />To further illustrate why I love the Barbican Conservatory, </span><a href=""><span style="font-family:arial;">as discussed in this previous post</span></a><span style="font-family:arial;">, why not have a virtual look round it by following the link above?<br /><br />If you can't imagine an episode of popular children's television programme Doctor Who taking place there, we think that maybe we've been talking cross purposes for the last two months...<br /><br /><a href="">Photograph from sleekit's collection at Flickr</a></span><br /><span style="font-family:arial;"><br />Technorati Tags: </span><a href="" rel="tag"><span style="font-family:arial;">science Fiction</span></a><span style="font-family:arial;">, </span><a href="" rel="tag"><span style="font-family:arial;">doctor who</span></a><span style="font-family:arial;">, </span><a href="" rel="tag"><span style="font-family:arial;">barbican</span></a><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future of Escape: John Wyndham's Chocky<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><br /><span style="font-family:arial;">Archeology of the Future has been raiding the mouldering piles of paperbacks in the cupboard under the stairs, digging through the accumulated strata to uncover some rich gems. </span><br /><br /><span style="font-family:arial;">Chocky by John Wyndham, published in 1968, is the story of a family afflicted, or blessed, with a 'special child'. Rather than dealing with a wider apocalypse or with the world plunged in violent change, Chocky is the exclusive story of one suburban English family. It's a novel of relationships; the relationship between husband and wife, between father and son and between a young boy and an intangible presence that takes up residency in his mind. There is a great change, but not necessarily the one that the reader might expect.</span><br /><br /><span style="font-family:arial;">Before we discuss Chocky and the boy who brings him/her to our attention, we’d like to introduce you to another outlandish figure, an almost- contemporary of Chocky:</span><br /><br /><span style="font-style: italic;font-family:arial;" >“Alan Measles was the leader, the benign dictator, of my made-up land, the glamorous, raffish, effortlessly handsome, commanding character… When I was about ten, I worked out the game was set one hundred years in the future in the 2060s and 2070s. There had been a calamitous nuclear war, almost obliterating Planet Earth. Everyone agreed that technology had advanced</span><span style="font-style: italic;font-family:arial;" > too far, so an international agreement was forged stating that from now on technology could only move backwards. Armies once again began to use old-fashioned, conventional weapons.”</span><br /><br /><span style="font-family:arial;">In the book ‘Grayson Perry: Portrait of the Artist as a Young Girl’, a book of interviews with the 2003 Turner Prize winning artist by Wendy Jones, Perry outlines the content of his childhood fantasy world ruled over as it was by Alan Measles, his teddy bear. A series of rationalisations that allowed him to include whatever toys, and later Lego and Airfix models that came along, the world of Alan Measles, improvised from the topography of Perry’s bedroom, provided both a rich territory for the young Perry’s imagination but also a refuge from his disrupted home life and uneasy, and often violent, relationship with his stepfather. As Perry says:</span><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-style: italic;font-family:arial;" ></span><span style="font-style: italic;font-family:arial;" >.</span> <span style="font-style: italic;font-family:arial;" >“I lived in Alan Measles’s realm, carrying it around with me like a comfy sleeping bag I could pop into at any time… I no longer had separate games, they were all facets of the one game: everything was linked..”</span><br /><br /><span style="font-family:arial;">This retreat into fantasy as defence and the creation of an avatar representing some essential qualities was a key experience for Perry, as they represent a kind of reflexive therapy and playing out of drama that would eventually find expression in his art. For those unfamiliar with his work, Perry works in ceramics, producing glazed vases and other objects with images and photographs that, to us at least, are reminiscent of Pulp songs, with an uneasy mixture of fantasy and ‘kitchen sink’ reality.</span><br /><br /><span style="font-family:arial;">In many ways, the realm of Alan Measles seems to have prompted Perry into pursuing certain</span><span style="font-family:arial;"> paths, as if such interior richness, once tasted needs to be returned to repeatedly in different ways.</span><br /><br /><span style="font-family:arial;">In Chocky too, there is a figure that exists in the mind of a small boy, but of a small boy with a very different set of problems.</span><br /><br /><span style="font-family:arial;">Matthew is the adopted child of the stable Gore family of Hindmere, Surrey, happily undertaking the pursuit of a normal English childhood. His father, David, the narrator of the book, tinkering with a lawnmower in the shed, overhears Matthew one sun kissed suburban afternoon arguing with an unseen 'friend’ about the number days in the week and the number of months in the year. Before he can see who it is that Matthew is arguing with, a child from next door calls and Matthew runs off to meet him.</span><br /><br /><span style="font-family:arial;">For David Gore, the narrator and patriarch of the Gore family, this is the beginning of the story of Matthew and his involvement with Chocky. </span><br /><br /><span style="font-family:arial;">David and his wife Mary debate the best course of action, reasoning that Matthew may be too old for an imaginary friend, but also intrigued by the ‘unMatthewness’ of the questions that he has taken to asking. Hoping that it is a phase that Matthew is going through, like his younger sister had with her imaginary friend Piff, they agree to keep a benign eye on things.</span><br /><br /><span style="font-family:arial;">As the novel progresses, Matthew causes consternation at school with his outlandish questions about matter, space and intelligence. He becomes extremely upset when the family buys a new car, only to have Chocky criticise it as primitive. In art class at school, he finds that Chocky can show him how to do things if he lets his mind go blank, channelling Chocky’s will through him, producing drawings, that while technically proficient, present a certain off kilter air. He manages to save his sister from drowning when they are both deposited into a river when a boat hits a jetty,</span><span style="font-family:arial;"> despite previously being unable to swim. When asked how he did it, he can only answer that Chocky showed him. After coming to national prominence as a boy who was saved from drowning by a guardian angel, as well as winning a prize for his drawings completed under the influence of Chocky, Matthew is finally abducted by the government, which prompts the departure of Chocky.</span><br /><br /><span style="font-family:arial;">Despite being a story about a possible alien incursion into the life of a young boy, Wyndham gives the reader no solid confirmation that Chocky is an actual real entity, despite both David and Matthew believing this to be the case. The ways in which Chocky manifests him/herself are resolutely non-physical, and while stretching credibility, are not impossible. People do spontaneously ‘learn to swim’ in extreme circumstances and other people do manage to tap into ways of manufacturing art that produce unsettling or slightly skewed results.</span><br /><br /><span style="font-family:arial;">David, the narrator and father of Matthew, is a rational, freethinking man, uncomfortable with ‘common sense’ and questioning of orthodox answers. A self-aware suburbanite, he comments when visited by Mary’s sister and her husband: <span style="font-style: italic;">“Kenneth and I kept mostly to the safe, and only slightly controversial, topic of cars.”</span> The world of the Gore family is comfortable and unremarkable: the world outside of their garden hedge is experienced as being filled with grumpy maths teachers, avuncular policemen, skittish arts teachers and irascible family doctors of English sitcom life. </span><br /><br /><span style="font-family:arial;">In the early days of his marriage David decides to move Mary away from her more traditional family, to escape their fecundity, saying of Mary’s family:<span style="font-style: italic;"> “There was so many Bosworths that I had a feeling of being engulfed”</span>. Seemingly unable to have children, David wonders at Mary’s need to do so. Rationalising his feelings as despair at the seemingly unending generative powers of the</span><span style="font-family:arial;"> Bosworth family, while revealing an inability for things to be ‘just so’, he tells a friend: <span style="font-style: italic;">“She’s in a circle where it’s kind of a competition in which every married woman is considered ipso facto an entrant – which makes it damned hard on a non-starter.”</span> Ever practical, David and Mary adopt Matthew, and then move the family to Surrey, allowing them to make a fresh start, away from the interfering figures of Mary’s family, soon afterward conceiving Polly. </span><br /><br /><span style="font-family:arial;">Both Mary and David, eschewing the traditional, earthy advice of Mary’s sisters, seem insecure in their parenting of Matthew in a way that they don’t of his younger sister, Polly. Throughout the book, Mary and David take it in turns to be dismissive of Polly, telling her to be quiet, or to stop being silly. It is as if they feel comfortable in doing this with a child that they brought into the world themselves, but feel that Matthew is something of an unknown quantity. When Chocky first manifests, Mary says:</span><br /><br /><span style="font-style: italic;font-family:arial;" >“I do wish we knew a little bit more about his parents. That might help. In Polly I can see bits of you and bits of me. It gives one something to go on. But with Matthew there’s no guide at all… there’s nothing to give me any idea what to expect…”</span><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-family:arial;">Interestingly, rather than representing a unity, David and Mary are divided in their response to Matthew’s ‘difficultness’. David is more indulgent, at different points instructing Matthew to hide the evidence of Chocky’s presence from his mother. He finds Mary to be too rigid and hard in the face of Matthew’s ‘specialness’. When Matthew informs his parents of Chocky’s departure, completely heartbroken at the loss of this part of himself, David comments after Mary is dismissive of his pain:<br /><br /></span><span style="font-style: italic;font-family:arial;" >“I have been astonished before, and doubtless will be again, how the kindliest and most sympathetic of women can pettify and downgrade the searing anguishes of childhood.”</span><br /><br /><span style="font-family:arial;">Both Mary and David refer to the collusion between David and Matthew in supporting the existence of Chocky. Unable to define exactly what gender Chocky is, Matthew eventually decides that she is more female than male. When David tells Mary this, she answers: <span style="font-style: italic;">“You decide she’s feminine because you feel it will help you and Matthew to gang up on her.”</span></span><br /><br /><span style="font-family:arial;">Throughout the book the reader feels that David, in Matthew’s experiences has glimpsed a land that, maybe, once he himself inhabited. It is as if David, for all of suburban sophistication and outward stability, wishes for something else. Remember that, if this book is taken as being set within a few years of its release, Britain is swinging, the Beatles and the Stones are duking it out in the charts, the world is between the tragedy of Apollo 1, exploding on the dry Florida concrete and the live footage of the planet below beamed back by the triumphant Apollo 7. David seems to want something beyond his nine-to-five family existence. Under the guise of distracting one of Mary’s sisters from expounding her opinions on Matthew, David lets slip his distain for ‘normality’. Talking of his nephew, he says:</span><br /><br /><span style="font-style: italic;font-family:arial;" >“Your Tim is so splendidly normal. It’s hard to imagine him saying anything odd. Though I sometimes think… that it’s a pity that thorough normality is scarcely achievable except at some cost to individuality. Still, there it is, that’s what normality means – average.”</span><br /><br /><span style="font-family:arial;">Indeed, it is revealing that when Chocky makes his/her final departure, she takes David aside to explain. With Matthew channelling her voice in a way now familiar from many UFO fringe cults, she outlines her reasons for settling on Matthew and then warns David that Matthew must remain out of sight in life and not attempt to make use of the insights she has provided him into science, as this will lead him into being used badly by the powers that be. She tells David:</span><br /><br /><span style="font-style: italic;font-family:arial;" >“If you are wise you will discourage him from taking up physics – or any science, then there will be nothing to feed their suspicions. He is beginning to learn how to look at things, and to have an idea of drawing, As an artist he would be safe…”</span><br /><br /><span style="font-family:arial;">It is almost as if the entire book is a justification for David in allowing his son to follow his own path, harnessing his creativity to find a route out of the constraints of orthodox suburban life of which he is acutely aware. We get a feeling of this from David early on in the book when he mentions the demise of Polly’s imaginary friend Piff, forgotten about on a trip to the seaside:</span><br /><br /><span style="font-style: italic;font-family:arial;" >“I was able to feel quite sorry for the deserted Piff, apparently doomed to wander for ever in summer’s traces upon the forlorn beaches of Sussex.”</span><br /><br /><span style="font-family:arial;">It seems to us that Chocky almost represents a way for all of the family to accept that, for Matthew, his path will be the path that leads from suburban dreaming into a more uncertain and more creative future.</span><br /><br /><span style="font-family:arial;">Certainly, Chocky does seem a convenient way for Matthew to raise some issues that trouble him. At one point he raises the question of loving two parents at once, and how difficult Chocky thinks this must be, underlining the dichotomy that exists in Mary and David’s relations with him:</span><br /><br /><span style="font-style: italic;font-family:arial;" >“She thinks it must be terribly confusing to have two parents, and not a good idea at all. She says it is natural and easy to love one person, but if your parent is divided into two people it must be pretty difficult for your mind not to be upset by trying not to love one more than the other. She thinks it’s very likely its’ the strain of that which accounts for some of the peculiar things about us.”</span><br /><br /><span style="font-family:arial;">Chocky allows Matthew to state very baldly both his dilemma and the dilemma that his parents find themselves in regarding him and his sister. This non physical entity that exists only inside Matthew serves to fulfil a similar role to Grayson Perry’s adventures with Alan Measles, providing a safe way of dealing with anxieties. </span><br /><br /><span style="font-family:arial;">Unlike Grayson Perry, who found himself chastised by family for the directions that his fantasy world took him in, Matthew finds acceptance. At the climax of the book, once Chocky has departed, it is David that encourages Matthew to continue his drawing as a way of keeping alive the strange sense of otherness that came to their suburban home.</span><br /><br /><span style="font-family:arial;">Rather than changing the entire outside world, the events of Chocky only open up for one child of the surburbs the possibility of exploring further his own interior world. A profound upheaval, if not a world changing one.</span><br /><br /><span style="font-family:arial;">In many ways, all of us who fall prey to science fiction or to dreams about the way that the future could be different, do the same thing, escaping when we can into another world inside ourselves, peopled with voices and places that don’t really exist.</span><br /><br /><span style="font-family:arial;">Like David Gore, we can’t help but feel that somewhere, there’s always an Alan Measles and a Chocky waiting for the children who left them behind to return.</span><br /><br /><span style="font-family:arial;">You can read a slightly abridged version of John Wyndham’s Chocky online here:</span><br /><br /><span style="font-family:arial;"><a href=""></a></span><br /><br /><span style="font-family:arial;"><a href=";amp;amp;amp;amp;camp=1634&creative=6738&path=ASIN%2F0701178930%2Fqid%3D1146081962%2Fsr%3D8-1%2Fref%3Dsr_8_xs_ap_i1_xgl">You can buy ‘Grayson Perry: A Portrait of the Artist as A Young Girl’ by Wendy Jones at Amazon.co.uk here</a><img src="" alt="" style="border: medium none ! important; margin: 0px ! important;" border="0" height="1" width="1" /></span><br /><br /><span style="font-family:arial;">Technorati Tags:<a href="" rel="tag">science fiction</a>, <a href="" rel="tag">john wyndham</a>, <a href="" rel="tag">wyndham</a>, <a href="" rel="tag">grayson perry</a>, <a href="" rel="tag">imaginary friends</a>, <a href="" rel="tag">book reviews</a>, <a href="" rel="tag">british suburbs</a> </span><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future Holiday Archeology Part 2<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="FLOAT: left; MARGIN: 0pt 10px 10px 0pt; CURSOR: pointer" alt="" src="" border="0" /></a><br /><span style="font-family:arial;">As promised, Archeology of the Future has returned to share the remainder of our bank holiday activities with you...</span><br /><span style="font-family:arial;"><strong></strong></span><br /><span style="font-family:arial;"><strong>The Traveller On Returning<br /></strong><br />Saturday was, of course, the return of a champion, reborn and made anew for each generation. David Tennant, as the latest incarnation of Doctor Who in the popular BBC Television children’s programme of the same name is, on this showing and his first appearance on Christmas day last year, a more disturbing proposition than any of his previous incarnations.</span><br /><span style="font-family:arial;"><br />Saturday’s programme, in which The Doctor and Rose, his South east London travelling companion, visit the home of humanity in the far distant future and uncover a conspiracy in its eqivilent of the National Health Service, revealed a Doctor who was by turns beguiling and unsettling. For the first time since childhood, we</span><span style="font-family:arial;"> found ourselves wildly jealous of the Doctor’s assistant, ready to throw all caution to the wind and join the Doctor in whatever adventure he chose.<br /><br />This Doctor has the wild eyes and quicksilver emotions of a manic-depressive, one moment</span><span style="font-family:arial;"> goggle eyed with childish wonder, the next cynical or flip, suddenly moving to righteous anger and then wet eyed emotion. He is also more self consciously dandy-ish, coming across as more mannered, more brittle than his predecessor. Whether through accident or design, this Doctor is</span><span style="font-family:arial;"> far less centred than Chris Ecclestone’s portrayal of his previous regeneration. Despite his energy and his zaniness, Ecclestone’s Doctor exuded solidity and strength, and a quiet kind of reassurance. Tennant’s Doctor is, on these early showings closer to a Harlequin. Witness this description of Harlequin from wikipedia:<br /><br /><em>“His everlasting high-spirits and cleverness work to save him from several difficult situations his amoral behaviour gets him into during the course of the play. In some Italian forms of the harlequinade, the harlequin is able to perform magic feats. He never holds a grudge or seeks revenge.”</em><br /><br />The reason this connection occurs lies in our reading of <em>‘A Cure fore Cancer’</em> by Micheal Moorcock as detailed in this previous post. Jerry Cornelius is specifically refered to as a Harlequin, an essentially immoral and clever dandy figure whose appetites get him into trouble. Everything is farce to him and nothing important, making him immoral in the way that a child is. It is the strutting, peacock nature of him and the way in which the tone of him alters dependant on the mood of his times that puts us in mind of the current Doctor. His enthusiasm to explore and adventure is the only thing that precipitated the events of Saturday’s episode. For all of his cleverness, charm and charisma, he was essentially dancing between events he set in motion but could not control.</span><br /><span style="font-family:arial;"><br />Watching, Archeology of the Future couldn’t work out whether we wanted to follow him or, more worryingly, be him or be like him. Imagine that, being able to escape into time and the wide</span><span style="font-family:arial;"> universe at will, to skip where and when you pleased…>The Most Science Fiction Place In London</strong><br /><br />On Sunday, Archeology of the Future found ourselves in the most Science Fiction place in London. Standing in the wet</span><span style="font-family:arial;"> artificial heat, surrounded by the sound of birdsong and the fecundity of huge tropical plants climbing and possessing grey concrete walkways, eyes drawn astray by the golden flash of fish in cool water, we thanked whatever powers that exist that the relentless marches of time have overlooked the Barbican Conservatory.<br /><br /></span><span style="font-family:arial;">For those of you unfamiliar with The Barbican, it is a huge housing development that brings together flats and houses, amenities, gardens, a huge theatre, cinemas, offices and green spaces</span><span style="font-family:arial;"> in the City of London. Designed originally as social housing, it is the most well realised example of utopian architectural practice in the UK. Everything about it screams out possibilities of new ways of living, new ways of constructing life in a city. It was purposely built so that everyone could feel themselves to be an actor in a great drama of their own. Through a long gestation period, where different pieces of the development were design, modified, scrapped, reinstated and then finally built, The Barbican absorbed all of the new ideas and experimental developments of British architecture, making it like a dream of possible future architectures. It is here that Archeology of the Future has a spiritual home and here that we shall explore in far greater depth in future posts.<br /><br /></span><span style="font-family:arial;">The Conservatory, opened in 1984, was a late addition to plans and is like a set for a botanical area onboard a space station. Walkways criss-cross the high vault under the glass exterior. Even the interior doors have rounded corners suggestive of airlocks. There is just something so incongruous, so wondrous about the sheer artificiality of the space, surrounded by tall concrete towers and brown brickwork, looking out beyond to the ever growing office towers of Moorgate. Standing there, Archeology of the Future could see the possible future, the new people meeting there under their artificial canopy, the last generation to have a faith in the possibility of changing</span><span style="font-family:arial;"> the course of the future.<br /><br /></span><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="FLOAT: left; MARGIN: 0pt 10px 10px 0pt; CURSOR: pointer" alt="" src="" border="0" /></a><span style="font-family:arial;">Archeology of the Future wasn’t even aware of the existence of The Conservatory until an afternoon exploring amongst the different levels and structures delivered it to us as we rounded a corner. It was like a homecoming. All of the beautiful dreams of bringing nature and artifice together in way that it can be enjoyed at the leisure of the participant come together for us under the glass. The nearest that we can think of is the kind of thing that you get in some large corporate buildings or shopping developments, but this so radically different that the similarity is only fleeting. The Barbican Conservatory is no less than a stab at an artificial Eden, a place to play without being directed or moved on…<br /><br />The Conservatory is being run with a skeleton staff now, and is ever having its opening times reduced. Only open on Sundays now, the popular consensus is that it is being run into the ground before it is closed. This enhances the charm, because there is a sense of the working future, where people patch thing up and things evolve rather than being pristine.<br /><br />If you have the opportunity, we advise you to visit this beautiful glass bubble outside of time while</span><span style="font-family:arial;"> you can. If Archeology of the Future were ever to meet any of you, our audience, it would be among the green fronds of the Conservatory>Designing a New World</strong><br /><br />On Monday, Archeology of the Future made it to the wide, spy haunted streets of South Kensington to visit the Victoria & Albert Museum to see the <em>Modernism 1914-1939: Designing a New World</em> exhibition. It was an exhibition that we’d been waiting for since our teens, bringing together as it does the largest collection of modernist objects ever in one exhibition in the UK. The effect was like sherbet fizzing in our brains, or more correctly, finding that other people shared the dreams you’d nursed inside of you since childhood too.<br /><br />Exhausted by the First World War and energised in one way or another by the Russian Revolution, a generation of artists and designers worked to literally remake the world and bring about a future where the division between art and everyday life was dissolved and where the terrible weight of the past could be escaped. At this exhibition, everything in one-way or another, was Science Fiction.<br /><br />We point this out, because as we said in a previous post, Archeology of the Future is about Science Fiction as a practice of looking or a way of interrogating the world. Too often Science Fiction is limited and inward looking, happy to exist on its own as a genre. In an essay on Philip K. Dick, the late Stanislaw Lem recognised this. Comparing genre fiction to a situation where natural selection is frustrated or interrupted he states:<br /><br /><em>“In culture an analogous situation leads to the emergence of enclaves shut up in ghettos, where intellectual production likewise stagnates because of inbreeding in the form of incessant repetition of the selfsame patterns and techniques. The internal dynamics of the ghetto may appear intense, but with the passage of years it becomes evident that this is only a semblance of motion, since it leads nowhere, since it feeds into nor is fed by the open domain of culture, since it does not generate new patterns or trends, and since, finally, it nurses the falsest of notions about itself, for lack of any honest evaluation of its activities from outside.”</em><br /><br />Walking around the Modernism exhibition, it was clear to us that for the men and women involved in creating the artefacts on show, the division between Science Fiction and the real world would have seemed laughable, as they were actively engaged in a process of trying to build a new future. Everything was geared toward making the future happen in the present, to changing and making anew.<br /><br />We advise anyone with even a passing interest in the idea that it possible to find new ways of organising or living human existence to get along there. It’s on until 23 June 2006, and worth travelling for.<br /><br />It was too much for us to take in, so we bought a badge that says ‘UTOPIA’ from the shop on our way out of the exhibition and began plotting ways to skive off work and visit again.<br /><br />For Archeology of the Future, it was like finding a home.</span><br /><br /><p><span style="font-family:arial;"></span></p><p style="FONT-FAMILY: arial"><a href="">For more information on Modernism 1914-1939 visit the V&A minisite here. </a></p><span style="font-family:arial;">The essay 'Philip K. Dick: A Visionary Among Charlatans' by Stanislaw Lem appears in the anthology '</span><em style="FONT-FAMILY: arial">Microworlds'</em><span style="font-family:arial;">. </span><a style="FONT-FAMILY: arial" href="">Buy it from amazon.co.uk here.</a><img style="BORDER-RIGHT: medium none; BORDER-TOP: medium none; MARGIN: 0px; BORDER-LEFT: medium none; BORDER-BOTTOM: medium none; FONT-FAMILY: arial" height="1" alt="" src="" width="1" border="0" /><br /><br /><span style="font-family:arial;">Doctor Who is broadcast on Saturday nights in the UK, as it should be. It is best watched whilst eating beans on toast.</span><br /><br /><span style="font-family:arial;"><span style="font-family:arial;">More excellent photographs of the Barbican can be found </span><a style="FONT-FAMILY: arial" href="">here</a><span style="font-family:arial;">, </span><a style="FONT-FAMILY: arial" href="">here</a><span style="font-family:arial;"> and also </span><a style="FONT-FAMILY: arial" href="">here</a><span style="font-family:arial;">. The conservatory photograph was taken by sleekit, the waterfall by </span><a style="FONT-FAMILY: arial" href="">will survive</a><span style="font-family:arial;">.</span><br /><br /><span style="font-family:arial;">Technorati Tags: <a href="" rel="tag">science Fiction</a>, <a href="" rel="tag">doctor who</a>, <a href="" rel="tag">barbican</a>, <a href="" rel="tag">modernism</a></span><br /></span><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future Holiday Archeology<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><br /><span style="font-family:arial;">Archeology of the future had a smashing time this bank holiday weekend, undertaking a variety of activities that relate very strongly to the purpose of this blog.</span> <span style="font-family:arial;"><br /><br />The practice of the archaeology of the future, by its very nature, involves sifting through the remains of the past to look for hints of the future, picking through the dustbins of history to find the glittering jewels of hope or apprehension. This means casting a wide net and making investigations far beyond the boundaries of genre.</span> <span style="font-family:arial;"><br /><br />Talking about the testimonies of those who had been involved in revolutions, uprisings, strikes and other attempts to alter the direction that the river of history seemed, in retrospect, to be bound to take, Greil Marcus writes:</span><br /><br /><span style="font-style: italic;font-family:arial;" >“…If one reads in the right frame of mind, the leavings of those stories stir with a truly strange power. Suddenly they are not ephemeral, not extraneous to real history. But plainly, obviously, the true story the events of the past years have been straining towards all along.”</span><br /><br /><span style="font-family:arial;">It is this right frame of mind that Archeology of the Future seeks to apply to a world littered with the remnants of attempts to build, create, imagine or explore possible futures. There are, then, many ‘jumping off points’, both fictional and actual, where it is possible to see efforts to imagine a present very different from our own, or a different course that events could have taken given different conditions.</span> <span style="font-family:arial;"><br /><br />For Archeology of the Future Science Fiction is more a way of seeing than it is a strictly policed exercise in genre exploration. Anything that is suggestive of an attempt to build an alternate future, or that shows a direction that history could of taken, but did not, is an object for interrogation. < Friday, Archeology of the Future spent a pleasant day exploring the expanses of Greenwich Park, the man made expanse of recreation ground surrounding the Royal Observatory and, incidentally, the place where the measurement of time begins. We especially enjoyed our pilgrimage to the Henry Moore sculpture ‘Standing Figure Knife Edge’ which sits on one of the three hills of the park and resembles nothing less than one of those distorted figures that would grace sixties science fiction paperback covers. One of a few monuments in the park, it is like a memorial of some transfiguring event that has yet to occur, a marker of a visitation or a transmogrification.<br /><br /></span><span style="font-family:arial;">Looking down from the base of the statue you see the Royal Navel College, the centre of the naval might of the British Empire and as such a strong presence suffused with power and might, and then over to its distant descendant at Canary Wharf across the Thames, an arrogant series of exclamations shouting out their monetary power to all London. </span> <span style="font-family:arial;"><br /><br />The Royal Observatory itself has a working telescope, and has the globular dome that you would expect. It is sometimes possible to see a green laser beam coming from the Observatory and arcing into the dark night.<;">We were excited to find that Greenwich was the mooring point for The Pierrot, Jerry Cornelius’ cruiser in Michael Moorcock’s “<span style="font-style: italic;">A Cure for Cancer”</span>, which we read over the weekend. In a UK over-run and under siege by the US for no clear reason, Jerry is his usual immoral, swinging London self, shagging, shooting and rollicking through the landscape. Sitting in the park, the spirit of this infused our afternoon:</span> <span style="font-style: italic;font-family:arial;" ><br /><br />“The white hovertruck sang onwards into the ruined roads of South London that were full of columbine, ragged robin, foxglove, golden rain, dog rose, danewort, ivy, creeping cinquefoil, Venus’s Comb, deadnettle, shepherd’s purse and dandelion, then turned towards Greenwich where Jerry’s cruiser, The Pierrot, was moored. As Jerry directed his patients up the gangplank Karen von Krupp pointed to a battered, broken looking building in the distance. ‘What is that, Jerry?’</span> <span style="font-style: italic;font-family:arial;" ><br /><br />‘Greenwich Observatory,’ he said. ‘It’s a bit redundant now, I suppose…’</span> <span style="font-style: italic;font-family:arial;" ><br /><br />The banks of the river and the fields and ruins beyond them were carpeted with flowers of every description… they sailed between fields and old ruined farmhouses, deserted villages and abandoned pubs.”</span> <span style="font-family:arial;"><br /><br />As we have noted previously, Greenwich has a strange ambience and is an odd spot for entropic chaos, with various versions of the future winding down and lying to rust. In the novel, Jerry is the spirit of the age, an embodiment of the late sixties. As the initial utopian lust for change dissipates and becomes mired in the actual heavier events of the seventies, Jerry himself begins to run out of energy with it. Entropy is the point at which no movement is possible because all energy has been expended, so it seems fitting that Jerry should find himself in a Greenwich returned to wilderness, with all cultural energy expended. As we never tire of saying: There’s nothing better than an apocalypse on your doorstep.</span> <span style="font-family:arial;"><br /><br />We’ll tell you about the rest of our weekend later today, but to give a couple of hints, including the picture a the head of this post, it involved the return of a traveller; the most science fiction place in London and the designing of a whole new world… </span><br /><br /><span style="font-family:arial;">The Greil Marcus quote in this post comes from the essay "The Dustbin of History in a World Made Fresh" which features in the collection "The Dustbin of History".</span><br /><br /><a style="font-family: arial;" href=";amp;camp=1634&creative=6738&path=ASIN%2F0674218574%2Fqid%3D1145319421%2Fsr%3D1-2%2Fref%3Dsr_1_0_2">Buy The Dustbin of History by Greil Marcus from amazon.co.uk here.</a><img src="" alt="" style="border: medium none ! important; margin: 0px ! important; font-family: arial;" border="0" height="1" width="1" /><br /><br /><a style="font-family: arial;" href=";amp;camp=1634&creative=6738&path=ASIN%2F1568581831%2Fref%3Dsi_1_1">Buy A Cure For Cancer by Michael Moorcock, as featured in this Jerry Cornelius Anthology, from amazon.co.uk here.</a><img src="" alt="" style="border: medium none ! important; margin: 0px ! important; font-family: arial;" border="0" height="1" width="1" /><br /><br /><br /><span style="font-family: arial;">Technorati Tags: </span><a style="font-family: arial;" href="" rel="tag">greenwich</a><span style="font-family: arial;"> </span><a style="font-family: arial;" href="" rel="tag">science fiction</a><span style="font-family: arial;"> </span><a style="font-family: arial;" href="" rel="tag">michael moorcock</a><span style="font-family: arial;"> </span><a style="font-family: arial;" href="" rel="tag">greil marcus</a><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future The Devil Girl From Limbo<object width="425" height="350"><param name="movie" value=""></param><embed src="" type="application/x-shockwave-flash" width="425" height="350"></embed></object><span style="font-family: arial;"><br />The above embedded video clip is from the much maligned 1954 film 'Devil Girl from Mars', starring Patricia Laffan as the titular extraterrestrial female. Going on this clip it doesn't look half as bad as it's made out to be, with the robot, Chani, having a fair degree of charm and a quite imposing bulk. </span><br /> <br /><span style="font-family: arial;">The problem is that Archeology of the Future only has this clip to go on. Whilst the American films of the same period are well documented and widely available for the most part, the British science fiction cinema that was its parallel has all but disappeared. To the best of Archeology of the Future's knowledge, 'Devil Girl from Mars' is not commercially available anywhere at the moment. Currently, there are very few available British Science Fiction films, or at least very few that are currently 'in print'. Lacking the budgets, charm or allure of their more canonical American counterparts, and falling into the gap between commercial viability and archivists objective, they seem to be consigned to the limbo of small viewership satellite channels and charity shop shelves. Like most things, something has to be considered either of overwhelming historical importance or of salable value to be saved from limbo. </span><br /> <br /><span style="font-family: arial;">Reading through some histories of British Science Fiction, Archeology of the Future has come across a long list of 'missing presumed dead' films that read like a litany of possibilities, frustratingly just out of reach. Some of them Archeology of the Future remembers seeing as a child, others we know only through stills and articles where they appear almost as footnotes to the 'real' business of science fiction on the big screen. </span><br /> <br /><span style="font-family: arial;">Each one represents a possible excavation of a known site that we don't, as yet, have access to, like a rumour of something untoward happening on the moors just out of town, waiting to be investigated. </span><br /> <br /><span style="font-family: arial;">These remote and exciting possible future digs, misty and indistinct, known only through snapshots and third person accounts include: </span><br /> <br /><span style="font-family: arial;">High Treason (1928) </span><br /><span style="font-family: arial;">The Tunnel (1935) </span><br /><span style="font-family: arial;">The Perfect Woman (1949) </span><br /><span style="font-family: arial;">Mr Drake's Duck (1950) </span><br /><span style="font-family: arial;">Dick Barton at Bay (1950) </span><br /><span style="font-family: arial;">The Four-Sided triangle (1953) </span><br /><span style="font-family: arial;">Immediate Disaster (1954) </span><br /><span style="font-family: arial;">Fire Maidens From Outer Space (1956) </span><br /><span style="font-family: arial;">Satellite in the Sky (1956) </span><br /><span style="font-family: arial;">Strange World of Planet X (1957) </span><br /><span style="font-family: arial;">Man Without A Body (1957) </span><br /><span style="font-family: arial;">First Man Into Space (1958) </span><br /><span style="font-family: arial;">Fiend Without A Face (1958) </span><br /><span style="font-family: arial;">Behemoth, the Sea Monster (1959) </span><br /><span style="font-family: arial;">Womaneater (1959) </span><br /><span style="font-family: arial;"> Horrors of the Black Museum (1959) </span><br /><span style="font-family: arial;">Konga (1960) </span><br /><span style="font-family: arial;"> Gorgo (1960) </span><br /><span style="font-family: arial;">Village of the Damned (1960) </span><br /><span style="font-family: arial;">The Damned (1963) </span><br /><span style="font-family: arial;">Unearthly Stranger (1963) </span><br /><span style="font-family: arial;">Children of the Damned (1963) </span><br /><span style="font-family: arial;">The Earth Dies Screaming (1964) </span><br /><span style="font-family: arial;">The Night Caller (1965) </span><br /><span style="font-family: arial;">The Projected Man (1966) </span><br /><span style="font-family: arial;">Invasion (1966) </span><br /><span style="font-family: arial;">Night of the Big Heat (1967) </span><br /><span style="font-family: arial;">Priviledge (1967) </span><br /><span style="font-family: arial;">The Body Stealers (1969) </span><br /><span style="font-family: arial;">Moon Zero Two (1969) </span><br /><span style="font-family: arial;">The Bed Sitting Room (1969) </span><br /><span style="font-family: arial;">No Blade of Grass (1970) </span><br /><span style="font-family: arial;">Percy (1971) </span><br /><span style="font-family: arial;">The Final Programme (1973) </span><br /> <br /><span style="font-family: arial;">Each film, regardless of its place in the overall pantheon of cinema history represents something which we wish to interrogate, question and explore... They really are some of the lost worlds of British Science Fiction. As we've said before, there's something deeply cherishable about an apocalypse on your own doorstep, no matter how small or marginal. </span><br /> <br /><span style="font-family: arial;">Archeology of the Future asks all of you to consider us in our task. If you have a copy of any of these films, taped off late night television or boxed inappropriately for the 1980s video shop boom, we'll happily pay costs if you can let us have them for a while. </span><br /> <br /><span style="font-family: arial;">It'd really help us, if for no other reason, to save from obscurity the history of British futures. </span><br /> <br /><span style="font-family: arial;">Come on, look at the Devil Girl... There's no way she should be consigned to oblivion... </span><br /> <br /><span style="font-family: arial;">For more about the history of British Science Fiction Cinema, read the superb Kim Newman introduction to SF:UK: How British Science Fiction Changed The World. </span><br /> <br /><a style="font-family: arial;" href=";camp=1634&creative=6738&path=ASIN%2F1903111161%2Fqid%3D1144432081%2Fsr%3D8-2%2Fref%3Dsr_8_xs_ap_i2_xgl">Buy SF:UK from amazon.co.uk here</a><img src="" alt="" style="border: medium none ! important; margin: 0px ! important; font-family: arial;" border="0" height="1" width="1" /><span style="font-family: arial;"> </span><br /> <br /><span style="font-family: arial;">Technorati Tags: </span><a style="font-family: arial;" href="" rel="tag">british science fiction</a><span style="font-family: arial;"> </span><a style="font-family: arial;" href="" rel="tag">science fiction</a><span style="font-family: arial;"> </span><a style="font-family: arial;" href="" rel="tag">science fiction films</a><span style="font-family: arial;"> </span><a style="font-family: arial;" href="" rel="tag">british films</a><span style="font-family: arial;"> </span><a style="font-family: arial;" href="" rel="tag">cinema history</a><span style="font-family: arial;"> </span><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future: The Little Satellite That Could<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><br /><span style="font-family:arial;"</span><span style="font-family:arial;"> Baxter, <em>“one day an old Spitfire pilot would fly into orbit… pipe clutched inside his space suit helmet.”</em> A future where Britain would extend itself beyond</span><span style="font-family:arial;"> this planet and take its place amongst the stars.<br /><br />It’s a little remembered fact that, up until October 1971 the UK had its own, independent, space programme. As Francis Spufford puts it in his excellent book “Backroom Boys: The secret return of the British Boffin”:<br /><br /><em></em></span><span style="font-family:arial;"><em> jackets with leather elbow patches sat in control rooms watching bakelite consoles. The countdown was heard in regional accents.”</em><br /></span><br /><span style="font-family:arial;">This is familiar territory for Archeology of the Future, a land where, looking back, we can see the direction that things could have taken, a land where possible futures never materialised, forever suspended tantalisingly out of reach.</span><br /><br /><span style="font-family:arial;"></span><span style="font-family:a;">Prospero’s parents are varied. His first ancestor is <a href="">Blue Streak</a></span><span style="font-family:arial;">. The</span><span style="font-family:arial;"> bastard offspring of American technology and the British workshop pride of De Havilland of Stevenage, Blue Streak was an intermediate range ballistic missile developed as part of a treaty with the US. Commissioned in 1954, it had cost £60 million by 1960, and would have needed another £440 million to be installed in its concrete home in East Anglia. A stillborn, the world moved on while it was crafted and tinkered with, a good rocket but a poor missile. In a race to fire destruction at the enemy, Blue Streak was too dignified and stately, the next generation already threatening to arrive in the time it took Roger Bannister to run a mile.<br /><br /><br /><br /></span><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-family:arial;">The knowledge developed during the building of Blue Streak would eventually find it’s way into a joint project between France, Italy, Great Britain and West Germany. Naming itself The European Launcher Development Organisation, it would build Europa, a three-stage European satellite launcher, with Blue Streak as the brute force that would punch it into the </span><span style="font-family:arial;">blackness above. After a change to a Labour Government in October 1964, and a growing sense of economic crisis, British will reduced to carry on such costly activities. Leaving behind some test firings of Europa, and, <a href="">according to this site</a>, </span><span style="font-family:arial;"> in the jungles of Guiana, one rocket abandoned, lying in the South American jungle, being used as a chicken coop, the British Space Programme again returned to solely British hands.<br /><br /><br /></span><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-family:arial;"> In 1965, The Labour government of Harold Wilson made a great commitment to modernisation through technology and creating a Ministry of Technology on it’s arrival in office, commissioned the <a href="">Royal Aircraft Establishment </a></span><span style="font-family:arial;"> to begin work on a new project: Black Arrow. This was almost fact following fiction. The RAE was a place where ideas and concepts were toyed with before being commission and</span><span style="font-family:arial;"> put out to private contractors for production, almost an analogue of Quatermass’s British Rocket Group. Just look at the <a href="">photographs on this page </a>and tell us that they you can't imagine, just out of shot the sleek shape of the Quatermass 2 rocket. Whilst a military establishment, the RAE had a remit to explore any technologies that might deliver a future strategic advantage, so it is easy to imagine that it became home to people committed to science who tried, where possible, to shoehorn in research into matters lass aggressive and more wondrous. It was charged with the development of an all British Satellite Launcher, but only on the condition that it cost next to nothing. Black Arrow would be funded with the scraps from the table of ELDO and any budget leftovers for the UK’s purchase of Polaris missiles. Eventually, the entire budget for the project would come to £9 million pounds, a drop in the ocean of NASA;">While it was the military that dreamed up the scheme and ran it, it was the men who worked at place like Armstrong Siddeley Rocket Motors at Ansty that made it happen. As Francis Spufford explains:</span><br /><span style="font-family:arial;"></span><br /><span style="font-family:arial;"><em>.</em></span><br /><span style="font-family:arial;"><em></em></span><br /><span style="font-family:arial;"><em>.’”</em></span><br /><span style="font-family:arial;"></span><br /><span style="font-family:arial;" “<em>regard(ed) these small rockets in very much the same way I regard simulators and wind tunnels”.</em>:</span><br /><span style="font-family:arial;"></span><br /><span style="font-family:arial;"><em>“I would not underestimate the romantic reasons why we got into Black Arrow. Even people who worked in the Ministry went home and read science fiction, saw science fiction stuff on the television; they dreamed too.”</em></span><br /><span style="font-family:arial;"></span><br /><span style="font-family:arial;". </span><br /><span style="font-family:arial;"></span><br /><span style="font-family:arial;".</span><br /><span style="font-family:arial;"></span><br /><span style="font-family:arial;">As Spufford puts it, the space dream continued in hearts and dreams of those engineers and science fiction fans: </span><br /><span style="font-family:arial;"></span><br /><span style="font-family:arial;"><em>.”</em></span><br /><span style="font-family:arial;"></span><br /><span style="font-family:arial;">The ghost of this dream still orbits above us now, sucking in energy from the sun. According to <a href="">this article</a>, amateur satellite trackers heard the last tiny cries of Prospero as late as 2000, a phantom voice from an aborted future…</span><br /><span style="font-family:arial;"></span><br /><span style="font-family:arial;">As a postscript to this post, it seems that there is some real Archeology of the Future to be carried out on the remnants of the British Space Programme. According to this Department of <a href="%28">Trade and Industry Committee of HM Government Report published in 2000 </a> great hunks of Britain’s Space History are lying forgotten and overlooked including “The Spadeadam Blue Streak, rotting in a car park in a restricted area hidden from the public.” For more details see:</span><br /><span style="font-family:arial;"></span><br /><a href=""><span style="font-family:arial;"></span></a><br /><span style="font-family:arial;"></span><br /><span style="font-family:arial;">Real life artefacts from a future that never happened, right here right now. Even in the space of thirty years we seem to have forgotten what might have been...<br /><a href=""><br />Check this previous post for more information on the British Space Programme.</a><br /><br /><a href=";amp;amp;amp;amp;camp=1634&creative=6738&path=ASIN%2F0571214975%2Fqid%3D1144272698%2Fsr%3D8-1%2Fref%3Dsr_8_xs_ap_i1_xgl">Click here to buy the extremely wonderful 'Backroom Boys: The Secret Return of The British Boffin' by Francis Spufford from amazon.co.uk</a><br /><br />Technorati Tags: <a href="" rel="tag">science fiction</a> <a href="" rel="tag">rockets</a> <a href="" rel="tag">space</a> <a href="" rel="tag">space history</a><br /><img src="" alt="" style="border: medium none ! important; margin: 0px ! important;" border="0" height="1" width="1" /><br /></span><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future Fiction and the Suburbs<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><span style="font-family:arial;"><img style="FLOAT: left; MARGIN: 0pt 10px 10px 0pt; CURSOR: pointer" alt="" src="" border="0" /></span></a><span style="font-family:arial;">Sometimes you come across something that speaks to something inside of you that you didn't even know was there. In many ways the origin of this blog can be traced back to a single photograph. You can see it here. The photograph is labelled "In July 1956 Robby visited New Malden as part of a trip to promote Forbidden Planet". It appears in 'SF:UK' by Daniel O'Brien. It is one of Archeology of the Future's favourite pictures ever.<br /><br /.<br /><br /.<br /><br /.<br /><br /.<br /><br /.<br />…<br /><br /…<br /><br /.<br /><br /.<br /><br />We defy you to look at this picture and not feel a strange and elusive magic, a sense of something difficult to define about Science Fiction and childhood and England captured and expressed.<br /><br />In quiet streets, fantastic things happen…<br /><br />The juxtaposition of the ordinary and the extraordinary is a fundamental of UK Science Fiction.<br /><br /></span><a href=""><span style="font-family:arial;">Buy SF:UK: How British Science Fiction Changed The World at amazon.co.uk</span></a><span style="font-family:arial;"><img style="BORDER-RIGHT: medium none; BORDER-TOP: medium none; MARGIN: 0px; BORDER-LEFT: medium none; BORDER-BOTTOM: medium none" height="1" alt="" src="" width="1" border="0" /><br /><br />Technorati Tags: </span><a href="" rel="tag"><span style="font-family:arial;">science fiction</span></a><span style="font-family:arial;"> </span><a href="" rel="tag"><span style="font-family:arial;">forbidden planet</span></a><span style="font-family:arial;"> </span><a href="" rel="tag"><span style="font-family:arial;">suburbs</span></a><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future on the Peninsula, the future passed by...<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><br />Archeology of the Future's been out and about this weekend. We took part in this project: <a href="">Greenwich Emotion Map</a>. The project itself is as Science Fiction as they come. Christian Nold, the artist, using Global Positioning System technology and measuring galvanic skin response to indicate arousal, is engaging members of the local community in building an emotion map of the Greenwich Peninsula, home to the Millennium Dome, described by Iain Sinclair as <em>"the tongue of poisoned land, a couple of miles to the east of The Royal Naval College"</em>. You walk, your arousal is recorded, along with your position, then when you get back the whole thing is combined and uploaded onto Google Earth, where it sits with the walk data from all of the other people who have taken part. The arousal appears as a series of peaks on the map like jagged mountain ranges. The higher the peak, the higher the emotion. Combined, the sum of the walks shows a topography of the Peninsula, a map of feeling with sloughs and heights, a physical/psychic landscape overlaying the real one.<br /><br />We turned up on the first spring-like Sunday afternoon this year and were kitted out by Christian with a little GPS device that looked like a mobile phone and , attached to the first two fingers on our left hands, two wires that led to a little silver box that measures a tiny electric current across the skin. The more aroused we are, the more we sweat, the more conductive of current our skin becomes. Standing in a silent residential street, Christian explains with relish that there are twenty four satellites above us and that the GPS device needs to contact at least three of them to be able to locate my position. We tell him it's all a bit Science Fiction.<br /><br />"I designed this stuff, so it can't be that science fiction," grins Christian. "Off you go."<br /><br />Setting ourselves an hour to walk, we set off with no fixed idea of where we're going.<br /><br />According to Iain Sinclair's book 'Sorry Meniscus' , the Peninsula was never a part of the Greenwich story. Formerly Bugsby's Marshes, <em>"The Peninsula was where the nightstuff was handled: foul-smelling industries, the manufacture of ordinance, brewing, confectionery, black smoke palls and sickly sweet perfumes... The Peninsula thrives on secrecy. For as long as anyone can remember much of this land has been hidden behind tall fences. Walkers held their breath and made a wide circuit. Terrible ghosts were trapped in the ground. A site on the west of the Peninsula, now captured by the Teflon-coated fabric of the Dome, had once featured a gibbet where the corpse of some pirate, removed from Execution Dock in Wapping, would be left to decay."<br /></em><br />There's still ghosts on the Peninsula, the physical remnants of The Millennium and all that it did or didn't mean. The Dome itself, we noticed walking around the outside of it today, is like a tribute act to that other great Dome of national identity, The Dome of Discovery at the Festival of Britain. There is obviously the Dome itself, but also the struts that hold it up, each one like a child's version of the Skylon. Pressing our noses up against the fences and gazing in, there are weeds pressing up through the concrete of deserted car parks. There are empty offices, all of the furniture pushed to one end as if the Peninsula had been tipped and shaken slightly. Small pieces of wreckage, shards of scrap, metal, upended benches. All looks as if the area has been vacated in a hurry.<br /><br />We walked through the millennium Village, flats and houses built to take advantage of the regeneration of the Peninsula. Modernist design, of the kind that was used for social housing, is here repackaged and resold to a much more exclusive class of dweller. Some of the huge, colourful flats a quite literally cut off, surrounded by marshland crossed with wooden walkways, as if moated. These are buildings left stranded. The regeneration never happened. The Dome opened, then it closed. There's the great sense of a boom town built by the river, only to find that the river has changed path, leaving the roads empty, the sky too wide, the streets silent and unmoving.<br /><br />For all of the order imposed by the rigid, controlling architecture and traffic flow design,. For most of our walk we were the only people in sight, half expecting to see an invasion, or a lonely astronaut lost and adrift amongst the concrete. Under a mile from hyper-ordered private Island of Canary Wharf, there's a landscape that is huge moon base copy at Winnerton Flats; a landscape that is the entropic world of J.G. Ballard where even meaning has run out of energy; a landscape where the Festival of Britain collides with the scrapyard where Susan goes home each night to her Grandfather, much to the curiosity of Barbara and Ian. Arcane technologies are housed in concrete or stand on masts, rotating slowly. On the Peninsula, different futures jostle with each other, all trying to remain viable, all past.<br /><br />Returning to base camp, Christian uploads our walk onto a laptop which is projected onto the wall in front of me. A jagged set of peaks cuts a serated path up to the tip of the Peninsula, like a body drawn at speed towards the dense gravitational pull of the Dome then flung back on its way. Christian shows me how to label each of the peaks, letting us add annotations to our journey. We notice how much higher our arousal has been than some of the others who have taken part.<br /><br />Science Fiction has a lot to answer for.<br /><br />You can see section of the Greenwich Emotion Map here, with a further explanation of the technology and method: <a href=""></a><br /><br /><a href=";amp;amp;amp;amp;camp=1634&creative=6738&path=ASIN%2F1861971796%2Fqid%3D1143418031%2Fsr%3D8-2%2Fref%3Dsr_8_xs_ap_i2_xgl">Buy Iain Sinclair's 'Sorry Meniscus' from amazon.co.uk here.</a><img style="border: medium none ; margin: 0px;" alt="" src="" border="0" height="1" width="1" /><br /><br />Technorati Tags: <a href="" rel="tag">science fiction</a> <a href="" rel="tag">psychogeography</a> <a href="" rel="tag">london</a> <a href="" rel="tag">mapping</a><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future"If this is living in the future today... I'm all for it."<a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><br />I've never really been a fan of Gerry Anderson and his various miniature and not so miniature extravaganzas. There's a horrid air of 'gadget' hanging over them that stifles any interesting ideas they might have. For me they're always like watching a piece of pornography made for a fetish I don't have: Where there should be a shot of an actors reaction there instead is a ponderous sequence involving some spaceships or some such; where there might be a long short of the couple 'doing it' instead there is a lingering close-up of a toe... Wholely unsatisfying to me, but obviously targeted at the taste of someone, somewhere.<br /><br /><a href="">I couldn't resist, though, flagging up this little snippet of Sylvia Anderson trying to pass off the clothes that the women wear in 'UFO' as practical fashions.</a> According to the obsessively neat and categorized <a href="">'UFO' fansite </a>the clip is from, it comes from a 1970 UK TV programme "Tomorrow Today". It's interesting how much it betrays of Sylvia Anderson's attitude to actors versus puppets. To quote her: "With puppets one could decide what body, what hairstyle a puppet has exactly, how they would dress and so on. You're a little more inhibited with an actor. You can't change a tall man into a short man."<br /><br />To Sylvia, it seems, all is about appearance. That's the peculiar emptiness of the Anderson canon, I suppose.<br /><br />In many ways, 'UFO' is the opposite of what I'm trying to connect with on this blog. It sits in isolation, a collection of gadgets and gimmicks that don't go any further than themselves. It doesn't manage to either create an alternative world or comment on this one, it just exists for the length of time an episode lasts then vanishes. From what I remember of watching it, it was like those action comics where a world was erected for the purposes of housing six pages of inky story and then collapsed again, all of the backgrounds hastily drawn and generic, the locations sketched in very quickly. I suppose what I can't see is the point at which the world of 'UFO' jumps off from this one, our 'real' world. I very much like the idea of fictions as 'secondary realities', realities that are like our world but have deviated at certain points.<br /><br />What is great about this clip, at least, is the seventies future-ness of it. At one point the model in her spangley, stretchy nylon and PVC is seen in a clothes shop not a million miles away from record shop that we see Alex in 'A Clockwork Orange' (<a href="">see this post for more</a>). What is also great is the job we see our futuristic model at, all whirring tape wheels and grey enameled computer appliances.<br /><br />It's a good example of the ephemera of something being far more interesting than the thing itself.<br /><br />Posting up stuff like this wanders very close to what <a href="">TVCream</a> would call 'the wrong sort of nostalgia', taking the past out of context and applying a sort of cynical jokiness to it all. It's difficult to define the wrong sort of nostalgia but you know it when you see it or hear it. It reduces the past to 'blimey, weren't they all unsophisticated, they must have known how daft this would all look in a few years time'. It is ahistorical, because it assumes that the way things seem now is the way they must have seemed then, applying the notion of ironic knowingness retrospectively. It judges everything against the sophisticated 'now' and finds it wanting.<br /><br />All that said, there's also a great futuristic car / freezing cold back road sequence to be cherished. A Sylvia says: "Modern design is <em>very</em> practical"...<br /><br />UFO fansite: <a href=""></a><br /><br />Technorati Tags: <a href="" rel="tag">science fiction</a> <a href="" rel="tag">gerry anderson</a> <a href="" rel="tag">british television</a><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future going to college for?<a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><br />The British Film Institute is a wondrously arcane institution. Living inside the National Film Theatre on London's Southbank, it fights to save the last remnants of UK moving visual culture from disappearance. Sittings in the shadows, studiously taking notes and archiving things, the BFI only occasionally sharing the fruits of its labour with the rest of us.<br /><br />A case in point is this site, <a href=""></a>. It's a big database of important film and television, all stuck online for us, the people. Just check out this section on British science fiction television: <a href=""></a>.<br /><br />You go in and wow! There's clips and complete episodes from such treasures as <a href="">Doomwatch</a>, <a href="">Day of the Triffids</a>, <a href="">Quatermass and The Pit</a> and the masterful <a href="">Survivors</a>. You click the link, for example, on the complete first episode of Survivors, salivating at the thought of seeing Peter Bowles cough himself to death in the seventies, and then you find that you can only watch these clips if you are in an educational establishment or public library.<br /><br />So near and yet so far...<br /><br />If there's anyone out there who can get Archeology of the Future into their college, library or other educational establishment out of hours to watch this stuff, we'd be over the moon.<br /><br />Today's photo comes from <a href=""></a>, a wonderful blog of lovely ephemera, of which Archeology of the Future are huge fans. The picture's linked to the site. Please go and check it out, there's much to be seen and cherished. If Archeology of the Future could chose its family, <a href=""></a> would be the cousin that we thought was so cool we got too embarrassed to talk to them every time we saw them at weddings and birthdays.<div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future've been linked by a Triffid!<a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><br /><span style="font-family:arial;">It's not every day that you can say that you've been highly commended for your efforts by an extra-terrestrial plant, but that's exactly what happened earlier today.</span><br /><span style="font-family:arial;"></span><br /><a href=""><span style="font-family:arial;">Wyndham The Triffid has posted a glowing recommendation of this blog here</span></a><span style="font-family:arial;">.</span><br /><span style="font-family:arial;"></span><br /><span style="font-family:arial;">Wyndham himself seems to be a rum old Triffid, more concerned with the worries of growing old, the toll that journalism takes on the soul and the way that films are never as good as you think they're going to be than wandering menacingly through deserted suburban streets and attacking people with his poisoned tongue.</span><br /><span style="font-family:arial;"></span><br /><a href=""><span style="font-family:arial;">Check out the thoughts of this most unconventional murderous vegetable invader here.</span> </a><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future's nothing better than an apocalypse on your doorstep<span style="font-size:100%;"><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" style="FONT-FAMILY: arial" href=""><img style="FLOAT: left; MARGIN: 0pt 10px 10px 0pt; CURSOR: pointer" alt="" src="" border="0" /></a></span><span style="font-family:arial;font-size:100%;">There's a brilliant scene in 'The Quatermass Xperiment' (<a href=";amp;amp;camp=1634&creative=6738&path=ASIN%2FB00008IAUZ%2Fqid%3D1142884995%2Fsr%3D2-3%2Fref%3Dsr_2_11_3">Buy it at amazon.co.uk here</a>) where Victor Caroon, the last surviving member of the first British mission into space and in the process of transforming into a plant creature, is awoken from his sleep in a river barge by a seven year old Jane Asher playing with her doll. Framed by the wildness and decay of the riverbank at what is supposed to be Deptford, they are </span><span style="font-family:arial;font-size:100%;">two isolated figures in a landscape, as alone as if they were the last people on earth. What you can't see in the colourised photo of this scene is the sheer sense of disarray and an area having been forgotten about. Despite being near to the centre of London, this scene seems to take place in another, more lawless and less structured country. Being the sad fantasist that I am, I couldn’t resist going down to the riverside where Deptford Creek (The River Ravensbourne) meets the Thames. It's changed out of all recognition in the eight or so years since I arrived in London, never mind since the 1950s when they were shooting Quatermass. Even when I arrived there was decaying warehouses, dark moorings and piles of rubble where the scrub pushed through. There's a small amount of the wildness left now, but enough to give me that fantasist’s thrill. Here I was, standing where the scene was supposed to have been (or near enough), early on a Saturday morning, close enough to the location and close enough</span><span style="font-family:arial;font-size:100%;"> to sleep to feel myself on the border between this world and the other world of spacemen and rocketships and silent dignified astronauts fighting to retain their humanity. It was the same when I lived in Surrey and read ‘War of The Worlds’ by H.G. Wells. Walking the quiet lanes and looking out over the green valleys I could almost see the great tripods striding towards London, the smell of burning flesh catching on the warm breeze and falling to the street with the ashes. There is nothing better than seeing the apocalypse on your doorstep.<br /></span></v:imagedata></o:lock></v:path></v:f></v:f></v:f></v:f></v:f></v:f></v:f></v:f></v:f></v:f></v:f></v:f></v:stroke><span style="font-size:100%;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="FLOAT: left; MARGIN: 0pt 10px 10px 0pt; CURSOR: pointer" alt="" src="" border="0" /></a></span><span style="font-family:arial;font-size:100%;">'The Quatermass Xperiment' contains one of the most poignant and potent images of what I'm trying to capture in this blog. A sleek and beautiful rocket, a sister to the Skylon, sticks into the soil of an English meadow at almost a forty-five degree angle. There is no cordon, no helicopters, no quarantine. A van, with five men is all that is sent to retrieve it. Locals take cover in memory of the war when they hear it overhead, and then look on with an almost casual interest. There is a feeling that, for these people, a rocketship seems almost commonplace, as if they are thinking 'of course there's a rocketship'. These are people who have never had it so good. The presentation of technological marvels is an everyday occurrence to them. They are almost certain that they are living in the future. There is no reason not to expect that they belong to country who, having won the war, will go on to beat the world in whatsoever it chooses. The government and it's scientists are, in this world, working away to make the future British.<br /><br />In this incarnation of Quatermass, Quatermass himself reflects this. Rather than the avuncular, donnish figure he is portrayed as elsewhere, in this film he is driven. When he succeeds in dispensing with the monstrous creature that Caroon eventually becomes, he walks from Westminster Abbey, the site of Carroon's final stand, and does not stop walking. He ignores those around him, determined and hard-headed; he brushes aside questions and pleas for information. He is literally the relentless march of science. He is on the way to carry on with preparations for the next launch. Nothing will stand in his way. Compassion, celebration, sadness, and guilt: all are alien to this Quatermass. He is science as destroyer, as over-reacher. Here is the steely callousness that presents more and more technology as the answer to unreliable humanity. Ruthless scientific advancement.<br /><br />Interestingly, this Quatermass, played by American Brian Donlevy, is an accident of circumstance. Nigel Kneale had intended that his first Quatermass serial would be a check to the cheery post-war optimism he saw around him in the country, but it was the Quatermass of the television serial that was the motor for that. It was only in the making of the film, which Kneale didn't have direct involvement with, that it was Quatermass that became the chilling figure he is here. Donlevy, partial to the drink and used to playing hard-nosed characters made his Quatermass inflexible and unpleasant. This has the effect of shifting the anxiety away from the sympathetic Caroon whom we see suffer and fight his transmogrification, and onto the figure of Quatermass as scientist as authoritarian.</span><br /><"><?xml:namespace prefix = o /><o:lock<v:imagedata</v:imagedata></o:lock></v:path></v:f></v:f></v:f></v:f></v:f></v:f></v:f></v:f></v:f></v:f></v:f></v:f></v:stroke><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future"I could viddy myself very clear, running and running on like very light and mysterious feet, carving the whole face of the creeching world"<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><br /><a href=""><span style="font-family:arial;">Classic Trailers - A Clockwork Orange</span></a><span style="font-family:arial;"><br /><br />I watched 'A Clockwork Orange'<a href=";amp;amp;amp;amp;amp;camp=1634&creative=6738&path=ASIN%2FB00005MHNI%2Fqid%3D1142281589%2Fsr%3D8-1%2Fref%3Dpd_ka_1">(Buy A Clockwork Orange from amazon.co.uk here)</a><img style="border: medium none ; margin: 0px;" alt="" src="" border="0" height="1" width="1" /> the other night and was surprised at just how dowdy, cold and worn out it looks. I like to think that Kubrick intended it that way as a counter point to 2001: A Space Odyssey, as if to show that for all of the grand dreams of space flight, the warm antiseptic cocoons of space stations and departure lounges, the ordinary people of the world would still be receiving a raw deal of things.</span><br /><span style="font-family:arial;"><br />Watching it is like watching the seventies refracted through a kaleidoscope. All of the things that we consider to be so emblematic are there but in a slightly different form. Rather than being futuristic, it's contemporary to the time it was made. Kubrick is on record as stating that the locations were drawn from a survey of current architectural journals, catching the possible future as it hatched and then extrapolating its effects. Only three sets were constructed, the rest of the film shot in real buildings and locations.</span><br /><span style="font-family:arial;"><br />Witness the Writer's home, with its split level rooms and pine cladding. The chilly clarity of Thamesmead and Brunel University are exaggerated versions of themselves, shot without warmth, the harsh lines and brutal concrete deliberately presented as alienating. The decor of the DeLarge</span><span style="font-family:arial;"> household is high seventies modern, an explosion of plastic and glitter.<br /><br /></span><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-family:arial;">Watching it now, it's like an alternate seventies... Whilst masquerading as a film set in the future, it actually sat very much in its own present. It makes me think of the fact that none of the characters in the film know that they are 'in the future', to them it is merely the present, evidenced by Alex's parents with their archetypal British family set-up of factories and breakfasts. </span><span style="font-family:arial;">Despite being clad in tinfoil and garnished with moulded plastic, the life of the characters in 'A Clockwork Orange' show that as much as technology advances, many fundamentals of life remain the same. They are typically working class, transplanted as many people were, to the top of a tower poking up into a</span><span style="font-family:arial;"> grey sky. Witness the 'People's Stylings' of the mural at the base of their block.<br /><br />The Film is set in Avengerland almost, the same chilly country lanes, large villas and futuristic eruptions. There are parts where it veers toward British sitcom staples. The main prison guard, the chaplain and the prison governor could almost be moonlighting from 'Porridge'. There is nothing incompatible in 'A Clockwork Orange' that removes it from the seventies as they actually happened. This must have been evident at the time, with youth culture making it a building block, along with 'Cabaret' for Glam and later Punk.<br /></span><br /><span style="font-family:arial;">It's a delicious thought, the idea that out in Thamesmead there actually were little Droogs at play, picking through the wreckage of a country on the wane. In the introduction to his 1974 book 'Soft City',<a href="">(Buy Soft City from amazon.co.uk here)</a><img style="border: medium none ; margin: 0px;" alt="" src="" border="0" height="1" width="1" /> Jonathan Raban talks of a gang called The Envies haunting the South Bank praying on the cultured middle classes as they travel from the Royal Festival Hall to The Hayward to The Purcell Rooms. It's hard to resist the idea that Alex and Co. had escaped from the confines of their two dimensional prison and made it to an analog of their world, feeling immediately at home in the futurism and concrete next to the Thames. In the film there is certainly an implicit lampooning of the cultured middle classes with their self consciously modern art and exciting design and their retreat from the inner city, something oddly senile and dead in the way that they conduct themselves.<br /><br /></span><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><span style="font-family:arial;">The scenes of Alex's everyday life reminded me of the consumerism of Jerry Cornelius in Michael Moorcock's series of novels and stories. The scene where Alex goes shopping in the reflective, shiny, noisy record shop with its mirrors and plastic and flashing lights, swaggering like the Elizabethan dandy he resembles could have come from Moorcock. The same with decadence of the Korova Milkbar with its achingly cutting edge mixture of sexual abandon and stylised nightclub stylings. If Alex has a close cousin in fiction it's Jerry. Both are led by boredom to escape into aesthetic violence. Both have a shocking hole where their conscience and moral bearings should be. Both enjoy dressing up. The amoral dandy as science fiction hero? I'd love to see more of Alex's everyday life in this hyper seventies.<br /><br />I also, for the first time, saw the trailer for 'A Clockwork Orange', which can be viewed by clicking the link at the head of this post. It's a blinder, because it manages to put across an idea of the film without giving any firm idea of what it is will actually be seen on viewing it. Almost a lost art now...</span><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future to Avengerland<a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><br /><span style="font-family:arial;">According to </span><a href=""><span style="font-family:arial;">this site</span></a><span style="font-family:arial;">, Avengerland is "the area within half an hour's drive of the Borehamwood and Pinewood film studios" which provided many of the locations for UK television programmes. Programmes such as as The Prisoner, The Avengers, Randall and Hopkirk (Deceased) and many others seemed to inhabit a landscape made of odd, syphilitic stately homes, cold looking country roads and modernist shopping centres, factories and offices all linked by endless motorways...<br /><br />If you have a flick through this site, you'll find a landscape so familiar, that other England, the England of secret bases, strange eccentics and alien invasions... For many, this other world was more real and more accessible than the actual landscape.</span><a href=""><span style="font-family:arial;"><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></span></a><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future would hang his head in shame...<a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><br /><a href="">Black Arrow: British Rocket Science and the Cold War</a><br /><br />An online seminar on the history of British Rocket Science in the post-war period. Will comment more when I get the chance. Once I do, we'll be able to see just how close old Quatermass was to the truth of matters...<div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future"And a pub right next door to me"... "Oh no you don't."<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="" alt="" border="0" /></a><br /><a href=""><span style="font-family:arial;">The National Archives Research, education & online exhibitions Exhibitions Public Information Films</span></a><span style="font-family:arial;"><br /><br />Another Central Office of Information film, or more correctly, cartoon. Entitled 'New Town', it features Charley, a young cockernee gent on a bicycle extolling the virtues of his choice of a New Town to live in over his old home in the city. Made in 1948, this film presumably attempted to drum up demand for relocation from dirty old London to a nice, clean rational New Town. Quite excellently, it represents the growth of the New Town as being something akin to a 1940s version of Sim City, with isometric buildings and roads dropped into place on a pleasing grid pattern. Witness the city as described in this film: It's an almost exact representation of the London that George Orwell wrote about in 1984, completed the same year. It also mentions Community Centres and 'Hostels where the young people can get together'.<br /><br />Charley complains of the hour it took him to get to work in the city, dragging himself into his office to a funeral dirge. As he puts it:<br /><br /><em>"Well, one day I was proper fed up with it all. It seemed to me we had made a real mess of things in our town. Still, if you can make a muck-up of things you can put them right. Boy, that's when I had a great idea. But. I wasn't the only one - oh no!"</em><br /><br />Charley rises into the air and bursts through the roof of his workplace, to be followed by others across the city. There are obvious visual parallels here with pictorial representations of The Rapture in contemporary evangelical christian literature, with God lifting the chosen out of the strife and trouble of the earth as the apocalypse arrives. The film, therefore, seem to present the City as a kind of hell-on-earth, a mess of structureless and unplanned expansion which can be escaped by reaching the New Town in which Charley has made his home.<br /><br />A further interesting point: The city and its spread into the countryside is described and represented as the spread of a disease, even going so far as to depict it as a kind of horrid amoeba only being repulsed and/or contained by the greenbelt surrounding it. When one of the characters in the film suggests moving beyond the greenbelt, it is represented on screen as if the release of spores ready to travel outward an contaminate the rest of the land.</span><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future"Yes, it's over... He's the smaller of the two"<a href=""><span style="font-family:arial;"><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></span></a><span style="font-family:arial;"><br /><br /><br /></span><a href=""><span style="font-family:arial;">The National Archives Research, education & online exhibitions Exhibitions Public Information Films</span></a><span style="font-family:arial;"><br /><br />A public information film produced by The Central Office of Information and The Observer two weeks before The Festival of Britain buildings were demolished.<br /><br />There's something downbeat about it, as if it was already known that a brief moment of opportunity, a 'jumping off point' had passed. In the opening moments we see images of the future that would be so familiar later in Britain.<br /><br />Despite being new and shiny, these buildings are presented as ruins or remnants, the Skylon standing like a totem for a culture already superseded. Somehow, this version of postwar optimism was already obsolete and strangely laughable. In the first thirty seconds there's a great shot of puddle with a sheet of newspaper trapped in it, underneath the canopy of the Dome of Discovery. There are strangely shaped buildings of a purpose made unclear sitting in amongst scrap metal and odd remnants. The figures in this landscape wander as if bewildered.<br /></span><p><span style="font-family:arial;">In the film, Patrick O' Donovan of the Observer describes the Festival, and by extension the possibilities it represents as: <em>"Really, the place was like a gigantic toyshop for adults. It was a series of surprises; now serious, now witty, now rather vulgar, now even a little mad" </em></span></p><p><span style="font-family:arial;">It seems to me that there's a message here about the prevailing attitude to the future present in the concrete and aluminum of the Festival, a certain desperate whimsy in the face of actual conditions. The film contrasts the Festival with the London that surrounds it, talking of darkness and drabness and churches put up 'on the cheap'.</span></p><p><span style="font-family:arial;">Patrick O' Donovan paints a the visitors that seems to sum up Britain and it's relationship to the idea of progress and the future as much as it sums up the Festival:</span></p><p><span style="font-family:arial;"><em>"In among these unfamiliar shapes, there were the visitors, and they were not dwarfed by the show, they were part of it. There were the thousands of women whose feet hurt and weren´t going to give up. There were clusters of fierce little boys, filled with their secret purposes. There were suspicious housewives who wondered what they´d have to buy the disappointed ones who wanted free samples. There were the militant individualists who weren´t going to take any notice of the officious arrows, and blame the organisers when they got lost. There were the lovers that were indifferent to it all. There were people who began to feel uncomfortable yet hesitated to ask. There were cautious intellectuals who´d seen better in Stockholm and Paris There were the foreigners in un-English clothes who secretly got stared at behind their backs, while they were often amazed at this spectacle of the British at their ease. There were people who wanted tea, and people who wanted a four course dinner with two sorts of wine. And all of them in a special mood, slightly excited, slightly exaggerated. A mood that had been made by the building, the colour and the music."</em></span></p><p><span style="font-family:arial;">As he summarises:</span></p><p><span style="font-family:arial;"><em>"Here at the South Bank there was a blueprint for new towns, light hearted, sensible, not too dear, practical and never boring... There were no resounding proud messages here, no one was taught to hate anything, At a time when nations were becoming assertive and more intolerant, here was a national exhibition that avoided these emotions, and tried to stay rational. In a bad year in the world´s history, it had a spiritual quality that is worth remembering. "</em></span></p><p><span style="font-family:arial;">The Festival of Britain is cast as a tentative toe placed into the sea of the future, a possible direction taken where the major factors are lightness, shininess and, overall, fun. Like the Skylon, the film suggests, this future floats in isolation, unattached to the ground, an alien marvel destined never to be integrated into the world as it is rather than the world as it should be.</span></p><p><span style="font-family:arial;"></span></p><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future shaking hand wipes Dean Jagger's pate...<a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><span style="font-family:arial;">Watched X - The Unknown at the weekend, a phenomenal 1956 Hammer science fiction film.<br /><br />Here are some preliminary notes for future exploration:<br /><br />According to Jimmy Sangster, the screenwriter and production manager for the film, Dean Jagger the American actor who plays the Quatermass surrogate Dr Adam Royston, refused to work on the production because the director originally slated to produce had appeared before the House Un-American Activities committee.<br /><br />There is a wonderful caption in the opening titles thanking the War Office for their kind help.<br /><br />In the film, which features a hungry blob emerging from beneath the Earth's crust to feed on energy, nuclear research is carried out in what seems to be a garden shed.<br /><br />Dr Royston is working on developing a way of neutralising radioactive material with a strange meccano contraption involving some panels spinning at the top of towers. He claims that he is trying to find a way of removing the energy without exploding it.<br /><br />Nuclear research stations behind chainlink fences seem to be the norm in Scotland, where the film is set.<br /><br />Leo McKern turns up as a kind of semi-governmental police officer who has the job of researching atomic events.<br /><br />There are at least two pivotal scenes set in a church.<br /><br />The military feature, with soldiers completing their national service involved in both the initial appearance of the unknown visitor and it's final destruction.<br /><br />The establishment where the atomic material is seems to be a non-commercial venture, possibly government funded.<br /><br />These thoughts to be expanded on later.<br /><br /></span><a href=""><span style="font-family:arial;"></span></a><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future Fiction that we like:<a href=""><span style="font-family:arial;">The Politics of Space: 1 The Last Astronaut ABCtales.com</span></a><span style="font-family:arial;"><br /><br />An example of the kind of UK science fiction in which we're interested. Contains allusions to various UK science fiction characters and situations alongside Walter Benjamin style meditations on ruins and ruination.</span><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>Archeology of the Future
http://feeds.feedburner.com/ArcheologyOfTheFuture
CC-MAIN-2017-43
en
refinedweb
When first I looked through the pages of the book “Hacker’s Delight”, I found myself looking at the chapter about bases. There I learned a very curious fact – with the digits of 0,1 and the base of -2, you can represent any integer. Right afterwards I learned something even more interesting – with the digits of 0,1 and the base of 1-i, you can represent and number of the form a+bi where a and b are integers. Having nothing to do with this curious fact, I let the subject go. Some time later, I was reading through Knuth’s “Art of Computer Programming”, and found that with the base of (1-i)^-1, and digits of 0,1 you can generate the dragon fractal! Generating the fractal is quite simple actually: def create_dragon_set(n): """calculate the dragon set, according to Knuth""" s = set([0.0+0.0j]) for i in range(n): new_power = (1.0-1.0j)**(-i) s |= set(x+new_power for x in s) return s (By the way, can you do it better?) The annoying part is converting the complex numbers to drawable integer points. After doing so, I used PIL to draw the jpeg. Here’s a link to the code. actually there are Dragon curves – very interesting fractals simplest way to get them is folding a paper string by half few times curious fact is that 4 Dragon curves can completely fill 2d space without any interference i was playing with .NET some time ago and have a sample program which generates Dragon curve for few steps ahead with specific folding direction – but it’s horribly slow because i was using strings to represent fractal :) Well, because of your comment I also looked at the dragon curve page on wikipedia. It seems that: 1. Dragon curves may be produced by lsystems (lstrings). On a side note, a version of the book ‘Jurassic Park’ had another level of the dragon curve on the openning page of each chapter. It was drawn by using lsystems. I remember that as a kid reading this book, I really liked it (and tried to write a program to draw it). 2. As you said, about folding paper – well, that really makes me very happy to know. I never made this connection between fractals and folding. Excellent! 3. It seems what I actually drew in this post is the twin-dragon, which is also named the Davis-Knuth dragon. It is constructed from two dragon curves. Thanks! Pingback: Hacker Wannabe » Blog Archive » Dragonul lui Heghway
http://www.algorithm.co.il/blogs/math/fractals-in-10-minutes-no-3-the-dragon/
CC-MAIN-2017-43
en
refinedweb
CodePlexProject Hosting for Open Source Software What is the correct way to hook in when a page render here i composite. My problem is that I have a user there is logged on to my site(I have a complete database with users) I have a metadata field on my content page, where I can mark that the page requere login. When the page load I wanna make a check if metadata field is true and my session variable is there. I have made a function in my dll that can handle login information and give information about that. But have do I make the last final small step :-) It depends a little on whether the actual execution of the page is dependent on this session being there or not. Of not, you can write the validation code in a HttpModule which you wire up to the PostRequestHandlerExecute of your HttpApplication and handle the logic in there. By doing this you can access the currentpage through PageRenderer.CurrentPage and for instance do a redirect based on MetaData value and session. The page will still actually be rendered but the output will not be sent to the client, which is perhaps not the most efficient way but perfectly acceptable since the page would have been rendered anyway if the user for instance was logged in. So a little exampe public class MyModule : IHttpModule { public void Init(HttpApplication app) { app.PostRequestHandlerExecute += (sender, e) => { var currentPage = PageRenderer.CurrentPage; // doSomethingWithCurrentPage } } } It looks like just what I need. Is there ar smart way to get the metadata fields there is attached to the page. I can not find a property on the object Currentpage, only the title and basic attributes. PageRenderer.CurrentPage returns a IPage interface that of course only contains the properties and methods defined on this specific interface. Because of the modularity of C1 each of your metadata has its own interface and you can fetch instances of them using a DataConnection. Personally i have this extensions-class i use for fecthing Page Meta Data, dunno if there is anything similar in the C1 core - i think there should. public static class IPageExtensions { public static T GetProperty<T>(this IPage page) where T : class, IPageData { using (var conn = new DataConnection()) { return conn.Get<T>().FirstOrDefault(t => t.PageId == page.Id); } } public static IEnumerable<T> GetProperties<T>(this IPage page) where T : class, IPageData { using (var conn = new DataConnection()) { return conn.Get<T>().Where(t => t.PageId == page.Id); } } } with this you can fetch a single MetaData instance on your currentpage like this var myMetaData = PageRenderer.CurrentPage.GetProperty<IYourType>(); Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
https://c1cms.codeplex.com/discussions/249215
CC-MAIN-2017-26
en
refinedweb
Rack streamed proxy causing Net::HTTPOK#read_body called twice Showing 1-2 of 2 messages Rack streamed proxy causing Net::HTTPOK#read_body called twice Fabio Kreusch 2/21/12 3:23 PM So, I have a Rack application which acts as a proxy. I have implemented a net/http api to deal with the communication, so the rack app basically calls it like this: api.new(request_method, rack_request, options).response .response returns a valid rack response [status, headers, body]. Body is an instance of the api, and it its implementation is like this (simplified version): def initialize ... @proxy_connection = Net::HTTP.start(proxied_uri.host, proxied_uri.port) @proxy_connection.request(@proxy_request) do |response| @proxy_response = response end end def each(&block) @proxy_response.read_body(&block) end def to_s @proxy_response.read_body end The rack app can then call the proxied service with a full response with to_s, or as a streaming and receive the response in chunks with each. The thing is, when I return the response in rack, I'm getting the following stack trace: Apparently, the server or rack is calling 'each' more than once, but I'm not sure what might be causing it. The Rack app is running under rails 3.2.1. Re: Rack streamed proxy causing Net::HTTPOK#read_body called twice Konstantin Haase 2/22/12 12:41 AM Unfortunately, the Rack specification does not make any assumptions on whether each will be called once or twice or how to reflect upon that. So you always have to assume that each might be called more than once. Konstantin
https://groups.google.com/forum/?_escaped_fragment_=topic/rack-devel/YgEzAlZd8YA
CC-MAIN-2017-26
en
refinedweb
In part 1 of our journey into react we got our windows work environment set up; we installed ConEmu to set up a pretty windows console, the text editor Atom.io to improve our coding experience, and then from within that console we installed Node.js and git. After that we learned how to navigate using the command line, and we created our project directory, and initialized our project with the npm init command. Lastly, we got a feel for how installing dependencies works by installing React itself. In this part of the tutorial we’ll get a working development server set up, and our first react app to display in the web browser. To do this we are going to install several new dependencies, set up our webpack.config file, and of course get our first taste of programming in React. So let’s dive right in! If you’d like to review only the code for this tutorial, you can find it on it’s own branch on my GitHub. Let’s start with something familiar Before delving into the nitty gritty of our first react app, let’s work first with some basic HTML to get us warmed up. We’re going to make the index.html file that our app will be mounted into. In your console, navigate to the root folder of your project and create the index.html file. It’s important that you keep the extensions in mind when making these files as there will be multiple different index files down the road. $ echo.>index.html Setting up the index.html is pretty straight forward and follows the conventions of the language that you’re already familiar with. So make you html, head, and body. Inside the body, you’re going to create an empty div with a unique id: <!DOCTYPE html> <html> <head> <title>Journey into React</title> </head> <body> <div id="app"></div> </body> </html> This div is going to be where we’ll mount our application, which we’ll get to a little bit further on. For now, save this file and let’s get to making our first component. Creating our first component React is made up of blocks of code called components. This is one of the many things that makes React such a powerful language for application development. We are able to create different parts of the program independently of each other, then call them only when we need them. This is also great for organization, working in a team, scalability, and debugging. The first component that we are going to make is going to be the primary one, the one that all other components are mounted into. Our app.js component should be placed in its own directory, which we’ll do right now. By convention, you’ll want to start with a src directory, and inside that your components directory, where you’ll make your app.js file – like so: $ mkdir src $ cd src $ mkdir components $ cd components $ echo.> app.js Now that we have our file made, let’s import React so that we can start working with it. At the top of your app.js file add: import React, { Component } from "react"; Not only can we import just the resources we need, but we can also pick and choose which functions from that resource to bring into our file. That’s what we did just now – we imported the Component function from the react dependency that we installed earlier. This will save us some valuable space in the next part of our code, where we will actually create our component, and share it with the rest of program: export default class App extends Component { render() { return ( <div> Hello World </div> ); } } There’s a lot going on here, so let’s break it down. First, we are creating a Component with the function we imported from React, and we’re exporting it as a default class with the name App. Now anything inside this object will be be called when we import it into our other components. So what did we create here? Inside our render() function we are going to return a div with the contents “Hello World”. Easy as that! It might feel weird at first writing our HTML inside JavaScript, but you’ll get used to it pretty quickly. It’s incredibly powerful to be able to mix our JS and our HTML, as we’ll be able to call variables, and perform logic on what our clients are seeing on their screen. We are able to do this because of something called webpack, and some particular settings we’ll implement below. But first, let’s take it one step at a time and now call this component to the screen so we can see it. Rendering our components with ReactDOM The next step in our first react is to call in the dom dependency. In your console, install ReactDom and save it, like so: $ npm install react-dom --save After a few moments, you’ll have the react-dom module installed and saved to your package.json file so that others can use your application without you transferring your node_modules file. By simply typing npm install from the root of your application, npmwill install all the dependencies in your package.json file. Alright, so next let’s create a new file in our src directory named index.js. $ echo.> index.js Then let’s make the imports that we’ll need so that everything works: import React from 'react'; import ReactDom from 'react-dom'; Also, we’ll want to import the component we just built, so import that as well. Notice that with the component we built, we want to include the relative file path to it: import App from './components/app'; Let’s tell our application to render our component into the #app element we made on our index.html file. ReactDom.render( <App />, document.querySelector('#app')); This little bit of code here is telling your clients screen to render the App component (noted by the tags we put on either side of it) inside a document element with the id of app. Now we’re almost there. We’ve created our webpage itself, our app component which we then imported it into our index.js file. We’ve told index.js to render our app in the webpage. Next we’ll set up our server so that we can view our application. Creating a local development server with Webpack We’ve got some more dependencies to install for our first react app: $ npm install webpack webpack-dev-server --save-dev You’ll notice I did something a little different here. We’ve told npm to install two different packages, and save them to a special list of dependencies in our package.json file that we’ll need only for development. Webpack is one of the most complicated, yet most important part of our application. It’s a module bundler, which means it takes all of the dependencies we’re using, all of our static assets, and bundles them together into one minified file that we then serve our client. It reduces transfer sizes, and has some pretty incredible features that are invaluable, such as “loaders”. Remember when we were writing HTML inside our JavaScript? It’s through the use of loaders inside webpack that we are able to do this. A loader is an extension of webpack that preprocesses our code and converts it into JavaScript. We are going to use a large number of loaders called babel in our application to set up our next file: webpack.config.js. Navigate to your root directory and create it: $ echo.>webpack.config.js Now before we go any further let’s get babel and all it’s additions installed. This might take a few minutes, but sit tight, we only have this one more file to write before we get to see all of our hard work in action! I would recommend copying the below code into your package.json file to save yourself some time. It should go right above your webpack devDependencies, like so: "devDependencies": { "babel-core": "^6.7.6", "babel-loader": "^6.2.4", "babel-plugin-add-module-exports": "^0.1.2", "babel-plugin-react-html-attrs": "^2.0.0", "babel-plugin-transform-class-properties": "^6.3.13", "babel-plugin-transform-decorators-legacy": "^1.3.4", "babel-preset-es2015": "^6.3.13", "babel-preset-react": "^6.3.13", "babel-preset-stage-0": "^6.3.13", "webpack": "^1.13.0", "webpack-dev-server": "^1.14.1" } Now that we have all of those in there, we can save it and run npm install to install them all for us. We are preemptively installing a few of these for future use. They all convert the array of programming languages in our application into JavaScript so that it works properly. After those are all installed, open your webpack.config.js file and define the webpack variable like so: var webpack = require('webpack'); Now let’s configure our webpack file. It’s going to look a bit complicated, but stick with it, and we’ll talk about it more below: module.exports = { context: __dirname, entry: "./src/index.js", module: { loaders: [ { test: /\.js|.jsx?$/, exclude: /(node_modules|bower_components)/, loader: 'babel-loader', query: { presets: ['react', 'es2015', 'stage-0'], plugins: ['react-html-attrs', 'transform-class-properties', 'transform-decorators-legacy'] } } ] }, output: { path: __dirname, filename: "bundle.js" } }; Okay, so there’s a few things going on here. Let’s start at the entry: object, it’s here that we’re telling webpack which part of our first react app to bundle for us. It’s going to read this file, take all of it’s dependencies (and all of their dependencies), apply our loaders, then compress them, and output them as a single file. This file is defined as bundle.js in our output:object, and it’s output path: is our parent directory. Now for the loaders. Loaders can be complex, and have a great number of options, parameters, and plugins that we can use with them, so unfortunately that is something we’re going to want to get into in more depth later. So for our first react app let’s take a look at the one we have typed up here. Basically, what it’s saying is that we are going to look for ( or test:) any files that end in the js or jsx extension, with the exception of ( exclude:) anything in our node_modules folder, and then apply the babel-loader to them. The babel-loader itself then takes some presets and plugins that will make sure we catch all the HTML we’ve written and convert it into JavaScript. Whew! That was a lot, but as you continue to work with and see webpack configuration files, they will begin to make more sense. We only have one last thing to do before we see our first react app greet us on our web browser. If you want to read more about Webpack take a look at my other tutorials here. Add a dev script, and start our first react app Before going any further in our first react app we have to add one last thing to our index.html file. We told webpack that we wanted to output our application in a minified file called bundle.js, so we better get that pulled into our web page. In the body of our html (in our index.html file) add the following: <script type="text/javascript" src="bundle.js" charset="utf-8"></script> The great part about the webpack-dev-server is that it not only it creates a local server for us to view our project on, but it also allows it be edited in real time. We won’t have to stop the server and start it again to view what our changes have done, it will do all of that for us. Technically we could do this and run our application right this second by typing in this monster of a command into our console: $webpack-dev-server --inline --hot But, who wants to do that every time they want to start their server? Nobody does, so let’s make it much easier. Open your package.json and near the top you should see a "scripts" object with a "test" option within it. Erase it, and replace it with this: "dev": "webpack-dev-server --inline --hot" Now, every time we type npm run dev it’ll call that line of code, starting our server and getting us ready for development. We can also type in webpack to create our bundle.jsfile. We would do this if we were say, deploying our app. Finally, we’ve reach the moment of truth! Time to open your browser, navigate to localhost:8080 and start our server with npm run dev. Webpack, by default will always serve our application to port 8080. With any luck, and of course all of your hard work, you should see the simple tiny words “Hello World” displayed in your browser. You’ve done it! You’ve officially created your first React app. Granted it doesn’t do much and it took a long time to get those simple words up on a single web page, but it’s ready to be scaled up and built upon. You and I will begin doing that in the next tutorial where we’ll implement routing so we can change pages, our navigation bar, and add Sass styling! If you have any questions feel free to leave them below and until next time, happ coding! Now that your first react app is done, why not head over to Part 3 where we learn to navigate around with react-router?
https://www.davidmeents.com/blog/journey-into-react-part-2-creating-your-first-react-app/
CC-MAIN-2017-26
en
refinedweb
Hey everyone, I'm having this serious problem with this linke dlist implementation. It is susposed to be part of a flight reservtion system (basic introductory problem in C) - but everytime the execution of initiateNode begins, the program crashes, saying that "The instruction at xxxxxx referenced memory at xxxxxxx. The memory could not be written" I understand that this is a problem with reserved memory trying to be used, but I don't known how to fix it - Heres the parts of the code that are most likely the culprits! -------------------Code------------------------------ //-------------------------- initiateNode --------------------------// /* This function initialises a node, allocates memory for the node, and returns a pointer to the new node. Name, Phone and Flight number are added at this stage. Uses add function to establish node on the actual linked list.*/ struct node * initiateNode(char *name, char *flight_no, int phone) { struct node *ptr; ptr = (struct node *) malloc( 1, sizeof(struct node ) ); //In case of an error, ptr set to NULL, so... if(ptr == NULL) return (struct node *) NULL; //Otherwise take passenger details and. else { strcpy(ptr->name, name); ptr->phone = phone; strcpy(ptr->flight_no, flight_no); return ptr; //This is pointer to the new node. } } //-------------------------- add --------------------------// /* This adds a node to the end of the list (doesn't fill in detials).*/ void add(struct node *newNode) { if( head == NULL ) { //If this is the first node... head = newNode; } (*end).next = newNode; (*newNode).next = NULL; // Set next field to signify the end of list end = newNode; // Make end to point to the last node } This is the implementation part - not all of it! case 2: printf("\nEnter name: "); scanf("%s", &name); printf("\nEnter phone number: "); scanf("%d", &phone); printf("\nCurrent Flights Available: "); for(i = 0; i < number; i++) { printf("%s\n", current_flight[i].flight_no); }//Ignore this part printf("\nEnter Flight number for passenger: "); scanf("%s", &flight_no); ptr = initiateNode(name, flight_no, phone); add(ptr); break; There is also two global variables struct node *head = NULL; struct node *end = NULL; _________________end code_________________ I know its probably too much to ask, but does anyone have any ideas???
https://cboard.cprogramming.com/c-programming/3798-memory-problem-i-think.html
CC-MAIN-2017-26
en
refinedweb
Is it possible to simultaneously read and write from a socket? I've a thread which continuously reads a socket. Since only one thread is reading from socket, read operation is thread safe. Now i've many threads (say 100) which write into socket. Hence it is obvious that i've to make write operation thread safe by doing something like this, package com.mysocketapp.socketmanagement; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.net.Socket; import java.net.UnknownHostException; public class SocketManager { private Socket socket = null; private InputStream inputStream = null; private OutputStream outputStream = null; public SocketManager() { socket = new Socket("localhost", 5555); //server's running on same machine on port 5555 inputStream = socket.getInputStream(); outputStream = socket.getOutputStream(); } public void writeMessage(byte[] message) throws IOException { synchronized (SocketManager.class) { if (message != null) { outputStream.write(message); } } } public byte[] readMessage() throws IOException { byte[] message = new byte[10]; //messages are of fixed size 10 bytes inputStream.read(message); } } Now I have a thread constantly calling readMessage() function (in an while loop). As far as i know if there is no message on the socket to be read the statement inputStream.read(message) will wait for a message. I want to know whether it is safe to exceute outputStream.write(message); while inputStream.read(message); is in execution Yes, a socket is bidirectional and enable full duplex communication, there is no problem in doing (on different threads) simultaneous read and write. And there is no problem in having multiple writing threads. You could synchronize on the socket instance instead of the SocketManager class. Similar Questions
http://ebanshi.cc/questions/945466/is-it-possible-to-simultaneousy-read-from-and-write-into-a-java-net-socket
CC-MAIN-2017-47
en
refinedweb
share|improve this answer answered Jun 18 '12 at 11:09 Simeon Visser 52.9k877106 add a comment| up vote 2 down vote zerkms told you the difference. what was I going to say again? C#C++VB Copy using System; public class ParseTest { [FlagsAttribute] enum Colors { Red = 1, Green = 2, Blue = 4, Yellow = 8 }; public static void Main() { Console.WriteLine("The If you want to send the string as a argument for example you can do like this: var myString = "Test"; MethodThatRequiresStringArrayAsParameter( new[]{myString} ); I honestly can't see any other reason Am I missing something? –KrisTrip Dec 28 '09 at 18:18 2 Yes, add System.Linq to your using –Craig Stuntz Dec 28 '09 at 18:19 Make sure you are asked 4 years ago viewed 87024 times active 10 months ago Related 3836What is the difference between String and string in C#?355How do I convert a string to an enum in You’ll be auto redirected in 1 second. Was there no tax before 1913 in the United States? Is it safe to use cheap USB data cables? Add-in salt to injury? Add-in salt to injury? Since no conversion exists between String and student, the last statement fails. How can I check to see if a program is stopped using bash? Visual Basic Language Features Data Types Type Conversions in Visual Basic Type Conversions in Visual Basic Array Conversions Array Conversions Array Conversions Widening and Narrowing Conversions Implicit and Explicit Conversions Conversions These two types have nothing in common, and no conversion of any kind exists between them.A conversion of one array type to another is widening or narrowing depending on whether the C# String To Int Array However, its underlying type is still Object, and you can subsequently set it to another array of an unrelated class. What is the total sum of the cardinalities of all subsets of a set? An example that fails this requirement is an attempted conversion between a String array and an array of a class derived from System.Attribute. String.Split comes to mind (myString.Split(new[]{", "}, StringSplitOptions.RemoveEmptyEntries); for example) –Øyvind Brathen Jun 18 '12 at 11:15 9 Why so many downvotes? For example, you can use decimal.TryParse to parse “10”, “10.3”, “ 10 “, but you cannot use this method to parse 10 from “10X”, “1 0” (note space), “10 .3” (note How can I trust that this is Google? Convert String To Array C# Split Where are you getting the exception? –David Hoerster May 2 '12 at 11:32 How is Ids(plural) going to be returned as a single string? –Doomsknight May 2 '12 at Without opening the PHB, is there a way to know if it's a particular printing? It isn't trivial. Console.WriteLine( "The string array contains the following values:" ); PrintIndexAndValues( myArr ); } public static void PrintIndexAndValues( ArrayList myList ) { int i = 0; foreach ( Object o in myList Type myType = MethodToGetMyEnumType(); Array enumValuesArray = Enum.GetValues(myType); object[] objectValues new object[enumValuesArray.Length]; Array.Copy(enumValuesArray, objectValues, enumValuesArray.Length); var correctTypeIEnumerable = objectValues.Select(x => Convert.ChangeType(x, t)); share|improve this answer answered Jun 25 '14 at 19:26 Convert String To String C# The title has since been edited to match your actual question. –razlebe Jun 18 '12 at 13:02 add a comment| up vote 3 down vote A string is one string, a Cannot Convert String To String C# What is exactly meant by a "data set"? If this behavior is undesirable, call the IsDefined method to ensure that a particular string representation of an integer is actually a member of enumType. Its contain method ConvertFromString. var ids = (string)o["ids"]; console.WriteLine(ids.GetType()); If ids is of type int[] you have to cast all elements to String. string[] strings = Array.ConvertAll thats why posted here –Mihir Jun 18 '12 at 11:23 | show 3 more comments 10 Answers 10 active oldest votes up vote 45 down vote accepted string[] is an array You’ll be auto redirected in 1 second. The following example defines a Colors enumeration, calls the Parse(Type, String) method to convert strings to their corresponding enumeration values, and calls the IsDefined method to ensure that particular integral values are Source ArrayList Class ArrayList Methods ToArray Method ToArray Method ToArray Method (Type) ToArray Method (Type) ToArray Method (Type) ToArray Method ToArray Method (Type) TOC Collapse the table of content Expand the table Y/N // Y // Enter a number between -2,147,483,648 and +2,147,483,647 (inclusive). // -1000 // The new value is -999 // Go again? C# Convert Nullable String To String Browse other questions tagged c# or ask your own question. You need to specify how you want to join those strings(f.e. Learning resources Microsoft Virtual Academy Channel 9 MSDN Magazine Community Forums Blogs Codeplex Support Self support Programs BizSpark (for startups) Microsoft Imagine (for students) United States (English) Newsletter Privacy & cookies Dev centers Windows Office Visual Studio Microsoft Azure More... Tank-Fighting Alien How safe is 48V DC? Split String To Array C# Enum.Parse Method (Type, String) .NET Framework (current version) Other Versions Visual Studio 2010 .NET Framework 4 Visual Studio 2008 .NET Framework 3.5 .NET Framework 3.0 .NET Framework 2.0 Converts the string representation See more here share|improve this answer answered Jun 18 '12 at 11:10 ericosg 3,65822035 add a comment| up vote 1 down vote In case you are just a beginner and want Dev centers Windows Office Visual Studio Microsoft Azure More... The content you requested has been removed. have a peek here How to: Convert a String to a Number (C# Programming Guide) Visual Studio 2015 Other Versions Visual Studio 2013 Visual Studio 2012 Visual Studio 2010 Visual Studio 2008 Updated: July 20, For example, in most scenarios: to do what you want, you would need a TypeDescriptionProvider and a custom ICustomTypeDescriptor, and hook into several registration APIs. What does the Hindu religion think of apostasy? c# share|improve this question edited Jun 18 '12 at 13:01 razlebe 5,93763249 asked Jun 18 '12 at 11:07 Mihir 3,553134073 2 In what context do you want to convert string So you could try: Array a = ... Real numbers which are writable as a differences of two transcendental numbers For a better animation of the solution from NDSolve What is the text to the left of a command more hot questions question feed lang-cs about us tour help blog chat data legal privacy policy work here advertising info mobile contact us feedback Technology Life / Arts Culture / Recreation Join them; it only takes a minute: Sign up How to convert string to string[]? share|improve this answer answered Jun 18 '12 at 11:10 Øyvind Brathen 33.6k1286116 add a comment| up vote 1 down vote A string holds one value, but a string[] holds many strings, share|improve this answer answered Jun 18 '12 at 11:11 Klaus Byskov Pedersen 62.5k19126193 I don't think this is the intended result. –tazboy Mar 15 at 19:59 add a comment| Advisor professor asks for my dissertation research source-code Is it unethical to poorly translate an exam from Dutch to English and then present it to the English speaking students? Yes No Additional feedback? 1500 characters remaining Submit Skip this Thank you! If the string is not in a valid format, Parse throws an exception whereas TryParse returns false.ExampleThe Parse and TryParse methods ignore whitespace at the beginning and at the end of You can come up with our own implementation of course. See AlsoTypesHow to: Determine Whether a String Represents a Numeric Value.NET Framework 4 Formatting Utility Show: Inherited Protected Print Export (0) Print Export (0) Share IN THIS ARTICLE Is this page Existence proof of Lorentz transformation from lightlike to lightlike vectors Figuring out why I'm going over hard-drive quota Primenary Strings Were the Smurfs the first to smurf their smurfs? And given the scale, without that context: the most correct answer possible genuinely is: "use string.Split". –Marc Gravell♦ Dec 17 '15 at 9:17 | show 5 more comments 2 Answers 2 Y/N // n // Press any key to exit.
http://hiflytech.com/string-to/cannot-convert-from-string-to-array-of-string-c.html
CC-MAIN-2017-47
en
refinedweb
Other than using a compiler that is Legion-aware (MPLC) or using a library that interfaces with Legion (MPI or PVM), the only way to interface objects with Legion is to write glue code by hand or to use the legion stub generator. The Legion stub generator generates client-side and server-side stubs. The client-side stubs are linked to the developer's client code to build the client application. The server-side stubs are linked to the developer's implementation of the server-side functionality to build the Legion object. The stub generator generates strict RPC method calls. Unlike MPLC, generate_stubs does no dataflow detection or complicated graph building. For legacy purposes, generate_stubs functions in two modes: the default mode and the enhanced mode. Example: Below is a C++ header file, which we are going to pass through generate_stubs: class HelloWorld { public: HelloWorld(); int sayHello(); }; First, generate_stubs takes the header file as input and generates the following files: HelloWorld.client-stubs.new.c HelloWorld.client-stubs.new.h HelloWorld.trans.new.c HelloWorld.trans.new.h The first two files define a class called HelloWorld_client, which the developer will use in a client application to make method calls on the Legion HelloWorld object. So, to instantiate HelloWorld_client the developer would use: HelloWorld_client client(loid); To make a method call on the sayHello() method, he would use: int ret = client.sayHello(); There are examples of this in the Legion source tree at $LEGION/SDK/src/Examples/StubGenerator. The two trans files, HelloWorld.trans.new.c and HelloWorld.trans.new.h, define a wrapper around the developer's implementation of the HelloWorld class, as defined in HelloWorld.h and HelloWorld.c. The developer need only link his implementation to the trans object code in order to build a Legion HelloWorld object capable of accepting and processing HelloWorld method calls. The stub generator also allows you to build add-in objects, Legion software components that add functionality. Add-in objects are compiled to .o files, which are then linked with an existing Legion object. An add-in can manipulate the Legion message stack and add functions to the interface. One example of an add-in object is TrivialMayI(), which adds a MayI() security check to the message stack as well as extends the object's interface to allow methods that get and set security information. For examples of objects using the stub generator see $LEGION/SDK/src/Examples/StubGenerator. For examples of add-ins using the stub generator, see the AuthenticationObject and TrivialMayI objects. We can modify the default mode example: class HelloWorld { public: HelloWorld(); SYNC int sayHello(); ASYNC int sayHelloWithReturns(IN int, INOUT int, OUT LRef<LegionPackableString>); }; The above HelloWorld class description would be saved in a local file called HelloWorld.h.idl. This file would be passed as input to generate_stubs which would output the following files: HelloWorld.h // The HelloWorld class description with the // Legion IDL keywords stripped off HelloWorld.client-stubs.new.c HelloWorld.client-stubs.new.h HelloWorld.trans.new.c HelloWorld.trans.new.h These files are used in exactly the same way as they are for the default mode except that the HelloWorld client is defined slightly differently. If a method call is defined as SYNC, it will be defined exactly the same as in the default mode. If, however, a method call is defined as ASYNC, then the client definition of that method call will return an integer representing the id of that invocation of the method as well as methods for retrieving the return parameters of that method. For example, the definition of the sayHelloWithReturns() asynchronous method call described above would have the following client definition: int sayHelloWithReturns_async (int _legion_fnord_arg0, int _legion_fnord_arg1, struct timeval *timeout = NULL); int get_sayHelloWithReturns_async_return ( int id, struct timeval *timeout = NULL ); void get_sayHelloWithReturns_async_param_1(int id, int _legion_fnord_arg1, struct timeval *timeout = NULL); void get_sayHelloWithReturns_async_param_2(int id, LRef <LegionPackableString> _legion_fnord_arg2, struct timeval *timeout = NULL); The first function definition is the actual method call. The subsequent function definitions are the functions for retrieving the return parameters. The first argument for all retrieval methods is the invocation's id. The return from the actual method call must be passed to the retrieval methods in order to get the correct return parameters back. The developer is tasked with memory management of the invocations. The following functions are defined for memory management: void releaseInv(int id); void releaseAll(); The first function will release memory only for the specified invocation. The second function will release all invocations. Note: There are limits to the amount of C++ the stub generator can handle. If you have a type such as UPSet in your interface, you will have to edit the resulting files in order to call new on LPSetLL. The stub generator will generate memory leak warnings. If the new class inherits from other Legion classes, the list of class names will begin with the parent and ending with the new class. Note that multiple inheritance is not supported. No client calls are generated for the object-mandatory interface. Integer where method numbering should begin. Default is 200. Class of resulting object. Defaults to nothing, which means that it will inherit its class ID from its class. But if the new class is a command line class object, you need to specify UVaL_CLASS_ID_COMMANDLINE. Generates code for an add-in trans file instead of a main transfile. Comma-separated list of options. So far, there is only one option, nomain, which generates no main program. This option is intended for programs with custom server loops. Path to the C include files Print debugging messages at run-time for every method invocation Skip C preprocessor. This option is not recommended if the developer's code contains comments. Print startup message. The Legion-CORBA IDL is currently being reworked and is not yet available. Please contact <[email protected]> if you have any questions about using CORBA with Legion. [email protected]
http://www.cs.virginia.edu/~legion/documentation/tutorials/1.8/manuals/Developer_1_8.7.html
CC-MAIN-2017-47
en
refinedweb
Google introduced the Material Design guidelines for Android developers, but, initially, neglected to provide some of the new widgets and elements. At I/O 2015, however, the Android Design Support Library was announced, which greatly simplifies the effort required to implement some of the Material Design widgets and techniques in Android apps. The Design Support Library is incredibly easy to use, and, unsurprisingly, is compatible with Android versions from API 7 (Gingerbread) upwards. Excited yet? Let’s jump right in. Adding the Design Support Library, using Android Studio, is straightforward. Firstly, update your SDK tools to the latest version, then, from Android Studio menu, click “Build”, “Edit Libraries and Dependencies…”, click the green “+” button at the top right, select “Library Dependency”, scroll and locate “design (com.android.support:design:x.y.z)”. On the other hand, edit your app build.gradle, and include the following line in the dependencies section compile 'com.android.support:design:x.y.z' Where x.y.z represents the version available on your installation. As at publication, this is 23.0.0. Some of the most exciting widgets provided by the design support library, which you would use frequently, include: TextInputLayout The TextInputLayout is a great addition, and it is designed to add functionality to the well known EditText. The TextInputLayout isn’t designed to replace the EditText, but should be used along with the EditText. An EditText is wrapped within a TextInputLayout, to get advantage of the TextInputLayout. With an EditText, you can set a hint that is shown to the user before he begins typing in a value. When the EditText is selected, however, the hint disappears. Using the TextInputLayout, the hint transitions to a label above the EditText field. <android.support.design.widget.TextInputLayout android: <EditText android: </android.support.design.widget.TextInputLayout> From the above, you can see that the only change to your current code involves wrapping the EditText with a TextInputLayout. The wrapped EditText can be referenced in code using TextInputLayout’s getEditText() method. Also, the TextInputLayout can be used to show error messages, for example textInputLayout.setErrorEnabled(true); textInputLayout.setError(getString(R.string.text_error_message)); Snackbar Every Android app developer should be familiar with the Toast component, which provides a simple way to show a quick message to the user at the bottom of the screen. Unlike a Toast, a Snackbar can have an Action bundled with it, like an “Undo” button. Snackbars can also be swiped away before the display duration has elapsed, and, when used properly, a Snackbar can alert other widgets of its visibility, enabling these other widgets move out of the way (like a FAB for instance). Implementing the Snackbar is very similar to a Toast, however, the Snackbar must be anchored to a View that knows the bottom of the app’s display (the app’s base view). Snackbar.make(view, editText.getText(), Snackbar.LENGTH_LONG) .setAction("Dismiss", new View.OnClickListener() { @Override public void onClick(View v) { // The snackbar is automatically dismissed, so you add // whatever additional tasks to be performed } } ) .show(); Much like using a Toast, do not forget to call show() on your Snackbar. Floating Action Button The FloatingActionButton is possibly the most well known new widget specified in the Material Design guidelines. The FAB is a very important part of the specifications, considering that it should be the primary action button, and should experience frequent user interaction. While Google designers and developers repeatedly talked about how it was simply a round button, reading the specifications shows that it is a round button with a specific size, should be properly elevated with shadows, should be responsive to clicks, should respond to changes in the app layout…You get the idea. Thankfully, with the Design Support Library, app developers no longer have to spend days/hours either implementing their own interpretation, or using one of the many third party FAB libraries that sprung up. The new standard FAB has two possible sizes, normal (56dp) and mini (40dp). By default, the FAB will use your app’s theme accent color for it’s background, as per the guidelines. Using a FAB is pretty straightforward <android.support.design.widget.FloatingActionButton android: The FAB can be customized with a few attributes. Some of the attributes likely to be altered regularly include - fabSize – Set the size of the button to either “normal” or “mini” - backgroundTint – Set the background colour of the button - borderWidth -Set the border for the button - rippleColor – Pressing a FAB should produce a ripple effect, and this sets the colour of the ripple - src – Customize the icon displayed within the FAB Note that these attributes are in the app namespace, so to set the fabSize for instance, you would use app:fabSize="normal" CoordinatorLayout The CoordinatorLayout is an exciting and interesting new Layout. It enables the creation and implementation of interactions between views, such as the ability to move a child view as a result of the movement (or visibility) of another child view. To take advantage of these effects, be sure to update your support libraries to the same version as the design support library version. For example, having the FAB automatically shift upwards, and out of the bounds of a displayed Snackbar, can easily be achieved by simply using a Coordinator layout as the base layout. <android.support.design.widget.CoordinatorLayout android: </android.support.design.widget.CoordinatorLayout> Some of the important attributes (also in the app namespace) include - layout_anchor – Used to anchor the view on the edge of another view - layout_anchorGravity – Used to set the gravity to the applied anchor To make a FAB move out of a Snackbar’s bounds, simply add the FAB within the CoordinatorLayout, and pass the CoordinatorLayout as the Snackbar’s View parameter Snackbar.make(coordinatorLayout, "Have a small snack", Snackbar.LENGTH_LONG) .show(); TabLayout The TabLayout is a new component, designed to simplify our app development efforts. Tabs are used in a lot of apps, and is a great design when used properly. With the Material Design guidelines specifying how Tabs should look, it is only proper for a new widget that implements the guidelines to be released. The TabLayout can be used with a ViewPager to easily add tabs to a layout, which is great, considering the ViewPager is available via the support library. To use the TabLayout, it can be included in the layout <android.support.design.widget.TabLayout android: The TabLayout can be customized to have either fixed tabs, or scrollable tabs. In addition, we can set different listeners on the TabLayout to track the states of the Tabs, such as - OnTabSelectedListener - TabLayoutOnPageChangeListener, and - ViewPagerOnTabSelectedListener NavigationView The “slide in” navigation drawer design is a commonly used technique in Android app development. Unfortunately, there has been various implementations, with varying degrees of slide distance (or navigation drawer width), height, and content type. The Material Design guidelines has defined very specific rules regarding the correct implementation of a navigation drawer, but there was no standard (official) widget. The design support library has the NavigationView, which simplifies the implementation of simple navigation drawers, and can be easily customized. The NavigationView must be added within a DrawerLayout in the layout xml <android.support.v4.widget.DrawerLayout xmlns: <FrameLayout android: <android.support.design.widget.NavigationView android: </android.support.v4.widget.DrawerLayout> The NavigationView supports the use of an attribute, called headerLayout, that allows the use of a header section in the navigation drawer, above the list of navigation items. The navigation items can be declared in a menu resource file <menu xmlns: <item android: <item android: </group> </menu> CollapsingToolbarLayout and AppbarLayout Have you noticed how the Material Design guidelines seem to complement the migration from the ActionBar to Toolbar? Have you noticed that Toolbars that slide in and out of view (or alter size) in response to scroll events in the app content is now available in a lot of android apps? Have you tried to implement this functionality in your app? The Design Support Library provides new widgets that help app developers implement similar animations with ease and minimal fuss. Using the above mentioned CoordinatorLayout, along with the AppbarLayout, CollapsingToolbarLayout and Toolbar, you can achieve tons of different effects, guaranteed to be smooth and aesthetically pleasing, as well as supported on a wide range of devices. There are many different permutations and combinations (possibilities) for using these layouts together, and we absolutely have an upcoming article discussing some of these possibilities. Conclusion/Roundup The Android Design Support Library is a great addition to the Android developer tool-set. It greatly simplifies the work of developers striving to adhere to the Material Design guidelines. Rather than spending hours trying to achieve simple tasks such as a correct FAB implementation, hiding, and showing Toolbars in response to user scrolling among others, these can be completely abstracted away, and achieved with simple one liners (or more). It is worth mentioning, though, that the Material Design guidelines is much more than simply having the right widget, with the right look and feel. The Google Material guideline specification is available online, but for a summary, check out our Material Design guidelines article. We have been hard at work using and experimenting with these widgets, so stay tuned for our in-depth articles, along with our experiences and challenges. Share your experiences in the comments, or request for a closer look at any of the widgets (and we just might write those first). Happy coding.
https://www.androidauthority.com/android-design-support-library-overview-636694/
CC-MAIN-2017-47
en
refinedweb
Vanguard Demystifies Israel-Palestine Conflict Vanguard Demystifies Israel-Palestine Conflict By Sonia Nettnin Norman G. Finkelstein spoke about the Israel-Palestine conflict in Oak Park, IL. He was the final speaker in “Voices of Conscience and Dissent: Important Perspectives on the Israeli-Palestinian Conflict,” a community education and dialog series. The event was sponsored by the Committee for a Just Peace in Israel and Palestine. Finkelstein is the author of Image and Reality of the Israel-Palestine Conflict, The Rise and Fall of Palestine, A Nation on Trail: The Goldhagen Thesis and Historical Truth, and The Holocaust Industry: Reflections on the Exploitation of Jewish Suffering. He teaches political science at DePaul University in Chicago. “The basic moral and historical facts are not complicated,” he said. “Consensus has been reached on the facts and history.” Controversial disagreements about the conflict fall into one of two categories: one kind is factual and the other kind is fabrication. Perspectives and questions about the basic moral, historical, legal, political, and factual aspects of the conflict are based on reality, therefore legitimate. These valid disagreements can have differences about the solution. An example given is the political disagreement about the Palestinian peoples’ right to return (Al-Awda in Arabic). Within the UN General Assembly, the majority votes for a two-state solution. Yet the balance of power between the majority of the world and the dissenting vote (U.S., Israel and a handful of small countries), determines the end result. According to Finkelstein, one school of thought states that sewing an illusion for people is wrong because they are suffering. What is realistic is a question for discussion. Within scholarly literature and media publications, disagreements about the conflict are confected (made up) also. Finkelstein separates them into three realms. The first dimension is “an effort to envelope the conflict into a mystical, cosmic, ideological cloud,” which contains “Biblical indemnities and ancient clashes.” These arguments state that the conflict is not comparable to anything else. However, these reasons do not suffice for Finkelstein, so the root of the conflict requires further investigation. In Image and Reality in the Israel-Palestine Conflict, Finkelstein explores the academic work of Benny Morris in a chapter called “‘Born of War, Not by Design’ Benny Morris’s ‘Happy Median’ Image.” He examines Morris’s past publications and contends “…that Morris’s own evidence points to the conclusion that Palestine’s Arabs were expelled systematically and with premeditation” and that Morris “….obscures the ideological motivations behind Israel’s decision to expel Palestine’s Arabs.” (p. 53). He concludes that Morris should heed his own advice when “he prepared the results of his research for publication” (87). In January, Morris was interviewed by Ari Shavit for the Ha’aertz. Morris referred to the Palestinians as barbarians. In 2003, the LA Times published an op-ed by Morris, which sparked numerous letters and phone calls. The second dimension of concocted disagreements “plays the holocaust card to justify what the Israelis are doing to the Palestinians.” The explanation for its use is what happened to the Jewish people is unique, so these standards are not applicable. Despite the consensus among mainstream, human rights organizations (Amnesty International, B’tselem – The Israeli Information Center for Human Rights in the Occupied Territories, Human Rights Watch, and Physicians for Human Rights) about the human rights violations of the Palestinians, accountability for these injustices does not exist in this rationalization. Finkelstein made reference to the 30,000 Palestinians tortured in the first, two years of the first intifada. Furthermore, Finkelstein sees the new anti-Semitism literature on the market as an extension of this reasoning. He does not see evidence of this anti-Semitism and describes it as “a complete and total fraud.” He cited recent PEW research center surveys, which asked Europeans questions about attitudes toward Jewish people. Overall, findings conclude that attitudes toward Jews in Europe are more favorable than they were eleven years ago. In France, there have been a few incidences of arson and a few injuries. Nevertheless, 85 per cent of youth gave positive answers about Jewish people and around 80 per cent of North African Muslims gave positive answers about Jewish people. Through Finkelstein’s eyes, criticism of Israel is classified as anti-Semitism. The final dimension concerns the claims of unleashed aggression toward Jewish people. Finkelstein stated that the self-identified, Jewish state has committed loathsome, heinous policies. So why is it a surprise that it creates a resentful sentiment? Moreover, whenever Israel comes under attack, the new anti-Semitism has the spotlight. He believes these strategies serve a political purpose and a vast proliferation of fraud exists on the topic. In academia, the “quality control” for publication has answers to these questions: who published the book; who reviewed the book; and who wrote excerpts for the back cover of the book. According to Finkelstein, the problem with the Israel-Palestine conflict is a book can have a well-known publisher, tremendous reviews by scholars in the field and still contain fraud. He gave the example of From Time Immemorial by Joan Peters, which he describes as a “colossal hoax” plagued with plagiarism. Exactly twenty years ago this month, a reading of Peter’s book initiated the fountainhead of Finkelstein’s academic fraud investigations. “You can write whatever you want and be guaranteed excellent reviews and reception…the truth means nothing,” he said. Finkelstein should launch a nationwide academic-fraud-committee where people are held accountable for their words. When asked about a one-state solution, Finkelstein said that it is far in the future (although he does not believe in borders). He said our first moral responsibility is to diminish the real suffering. If the kibbutz communities (where groups of people live equally) do not include the Palestinian people, then what is fair and equal? Within history is the Diaspora (Hatfutsot) and the Palestinian Diaspora - the catastrophe (Al-Nakba) of 1948 is an outstanding action item for serious address. The members of academia should be more like vanguard Finkelstein and expose the truth, so people can live in freedom.
http://www.scoop.co.nz/stories/HL0405/S00027.htm
CC-MAIN-2017-47
en
refinedweb
I take one look at this project, define my structure.....and my brain goes numb. I don't know where to star..... ///////project/////// Write a program that dynamically allocates an array large enough to hold a user defined number of structures (string name and three double tests). Using a function, the user will enter a name and three test scores for each element in the array. Using another function, the user will sort the array into ascending order by name. Using a third function a table will be displayed on the screen (under a header labeling each column) showing: Name Test 1 Test 2 Test 3 Average Then average for each element (three test grades / 3) will be calculated by a fourth function as it is being displayed. help of any kind is greatly appreciated #include <iostream> struct grades { char name; double test1; double test2; double test3; }; using namspace std; int main() { system("pause") return 0; }
https://www.daniweb.com/programming/software-development/threads/189696/dynamic-array-with-struct-and-pointers-help
CC-MAIN-2017-47
en
refinedweb
Despite Why Haskell is not popular in this post. Myths difficult. :)) Lack of documentation The Lack of documentation Hoogle, wai-websockets package but let’s look on it’s documentation. How to use it? I don’t know how about you, but I don’t understand. I see only one way out of this: to read wai-websockets source code or examples. Is it good? I’m not sure. Standard Library Now let’s talk about Haskell’s standard library. On my look it has many really useful things for haskell and has really little amount things for Real World play. What it means when I am telling about Real World. It’s simple, i mean that Haskell standard library has many things like Control.Category, Control.Arrow, Data.Typeable Network.TcpClient, Network.HttpServer golang for example. I think that it has a perfect standard library.. Why to learn? Lazyness When we start to learn Haskell we can read something like this: Haskell - general purpose programming language with non-strict evaluation. ByteString library. It provides two implementations lazy strings and strict. But I still don’t know where to use first and where to use second. Different abstractions: import sys data = sys.stdin.readlines() print data All is transparent enough. We do this task with all imperative programming languages in this way. First we reading from stdin and put result to a variable and than pass this variable to the printing function. Let’s look at the same Haskell example: main = do getLine >>= putStrLn Ok. Developer can guess about getLine and putStrLn, but what is it >>=. If we open documentation we will read something like this: >>= combine two monadic values… “combine two monadic. Conclusion So it was a short list of my thoughts why Haskell is not popular. I am very interesting what do you think about Haskell popularity. In the end I want to remind that all from this post only my opinion and if you’re agree or disagree with me write me a comment.
https://0xax.github.io/haskell_not_popular/
CC-MAIN-2017-47
en
refinedweb
Overview pentagons is a simple way to add a casual sense of freedom to any webpage. With pentagons, you can easily turn any canvas into a dynamically moving cluster of pentagons. The pentagons rotate, scale, and translate in a semi-random way. In fact, the pentagons move with an algorithm that attempts to prevent them from making "ugly" formations. Example You can create a PentagonsView and dynamically update it as follows: import 'package:pentagons/pentagons.dart'; CanvasElement canvas; PentagonsView view; void main() { canvas = querySelector('#canvas'); canvas..width = window.innerWidth..height = window.innerHeight; view = new PentagonsView(canvas, 15); drawNextFrame(null); } void drawNextFrame(_) { view.draw(); window.animationFrame.then(drawNextFrame); } This example creates a new PentagonsView which will draw 15 pentagons on a canvas named "#canvas". Of course, this example does not handle window resizing. To fix this, add the following code to your main() function: window.onResize.listen((_) { canvas..width = window.innerWidth..height = window.innerHeight; view.updateContext(); }); This way, the view will be alerted when the canvas has resized and will get a new 2D rendering context internally. License pentagons is under the BSD 2-clause license. See LICENSE.
https://www.dartdocs.org/documentation/pentagons/1.0.0/index.html
CC-MAIN-2017-47
en
refinedweb
Author: kevans Date: Fri Apr 6 16:59:58 2018 New Revision: 332115 URL: Log: MFC r330005-r330007, r330021, r330029, r330622, r331207: Solo loader.conf(5) r330005: Go back to one loader.conf We really only need one loader.conf. The other loader.conf was created because the current one took forever to parse in FORTH. That will be fixed in the next commit. r330006: Take a meat cleaver to defaults/loader.conf Remove almost all of the _load=XXX options (kept only those relevant to splash screens, since there were other settings). Remove the excessively cutesy comment blocks. Remove excessive comments and replace with similar content Remove gratuitous blank lines (while leaving some) We have too many modules to list them all here. There's no purpose in doing so and it's a giant hassle to maintain. In addition the extra ~500 lines slow this down on small platforms. It slowed it down so much small platforms forked, which caused other issues... This is a compromise between those two extremes. r330007: loader.conf is loader agnostic, so remove 4th references. r330021: These two directories build man pages, so it's incorrect to tag them NO_OBJ. Also, make sure the loader.conf.5 man gets built and installed. r330029: Fix a typo: "now" -> "no". r330622: loader.conf(5): Document some other settings These tend to have less coverage in other places and they don't have defaults as of yet, so mention them here: - fdt_overlays - kernels_autodetect (lualoader only) r331207: loader.conf: remove obsolete non-x86 beastie menu statement As of r330005 the same loader.conf defaults are used on all platforms. Added: stable/11/stand/defaults/ - copied from r330007, head/stand/defaults/ Deleted: stable/11/stand/arm/loader/loader.conf stable/11/stand/forth/loader.conf stable/11/stand/forth/loader.conf.5 stable/11/stand/mips/uboot/loader.conf Modified: stable/11/stand/Makefile stable/11/stand/defaults/Makefile stable/11/stand/defaults/loader.conf stable/11/stand/defaults/loader.conf.5 stable/11/stand/forth/Makefile Directory Properties: stable/11/ (props changed) Modified: stable/11/stand/Makefile ============================================================================== --- stable/11/stand/Makefile Fri Apr 6 16:48:11 2018 (r332114) +++ stable/11/stand/Makefile Fri Apr 6 16:59:58 2018 (r332115) @@ -9,6 +9,7 @@ SUBDIR+= ficl SUBDIR+= forth .endif +SUBDIR+= defaults SUBDIR+= man .include <bsd.arch.inc.mk> Modified: stable/11/stand/defaults/Makefile ============================================================================== --- head/stand/defaults/Makefile Mon Feb 26 03:16:57 2018 (r330007) +++ stable/11/stand/defaults/Makefile Fri Apr 6 16:59:58 2018 (r332115) @@ -1,11 +1,9 @@ # $FreeBSD$ -NO_OBJ=t - .include <bsd.init.mk> FILES+= loader.conf -FILES+= loader.conf.5 +MAN+= loader.conf.5 FILESDIR_loader.conf= /boot/defaults Modified: stable/11/stand/defaults/loader.conf ============================================================================== --- head/stand/defaults/loader.conf Mon Feb 26 03:16:57 2018 (r330007) +++ stable/11/stand/defaults/loader.conf Fri Apr 6 16:59:58 2018 (r332115) @@ -69,7 +69,7 @@ acpi_video_load="NO" # Load the ACPI video extension #loader_delay="3" # Delay in seconds before loading anything. # Default is unset and disabled (no delay). #autoboot_delay="10" # Delay in seconds before autobooting, - # -1 for now user interrupts, NO to disable + # -1 for no user interrupts, NO to disable #password="" # Prevent changes to boot options #bootlock_password="" # Prevent booting (see check-password.4th(8)) #geom_eli_passphrase_prompt="NO" # Prompt for geli(8) passphrase to mount root Modified: stable/11/stand/defaults/loader.conf.5 ============================================================================== --- head/stand/defaults/loader.conf.5 Mon Feb 26 03:16:57 2018 (r330007) +++ stable/11/stand/defaults/loader.conf.5 Fri Apr 6 16:59:58 2018 (r332115) @@ -23,7 +23,7 @@ .\" SUCH DAMAGE. .\" .\" $FreeBSD$ -.Dd January 6, 2016 +.Dd March 19, 2018 .Dt LOADER.CONF 5 .Os .Sh NAME @@ -249,7 +249,6 @@ be displayed. If set to .Dq YES , the beastie boot menu will be skipped. -The beastie boot menu is always skipped if running non-x86 hardware. .It Va loader_logo Pq Dq Li orbbw Selects a desired logo in the beastie boot menu. Possible values are: @@ -277,6 +276,23 @@ See the entropy entries in .Pq Dq /boot/entropy The name of the very early boot-time entropy cache file. +.El +.Sh OTHER SETTINGS +Other settings that may be used in +.Nm +that have no default value: +.Bl -tag -width bootfile -offset indent +.It Va fdt_overlays +Specifies a comma-delimited list of FDT overlays to apply. +.Pa /boot/overlays +is created by default for overlays to be placed in. +.It Va kernels_autodetect +If set to +.Dq YES , +attempt to auto-detect kernels installed in +.Pa /boot . +This is an option specific to the Lua-based loader. +It is not available in the default Forth-based loader. .El .Sh FILES .Bl -tag -width /boot/defaults/loader.conf -compact Modified: stable/11/stand/forth/Makefile ============================================================================== --- stable/11/stand/forth/Makefile Fri Apr 6 16:48:11 2018 (r332114) +++ stable/11/stand/forth/Makefile Fri Apr 6 16:59:58 2018 (r332115) @@ -7,7 +7,6 @@ MAN+= beastie.4th.8 \ check-password.4th.8 \ color.4th.8 \ delay.4th.8 \ - loader.conf.5 \ loader.4th.8 \ menu.4th.8 \ menusets.4th.8 \ @@ -33,10 +32,9 @@ FILES+= screen.4th FILES+= shortcuts.4th FILES+= support.4th FILES+= version.4th -FILESDIR_loader.conf= /boot/defaults # Allow machine specific loader.rc to be installed. -.for f in loader.rc menu.rc loader.conf +.for f in loader.rc menu.rc .if exists(${BOOTSRC}/${MACHINE:C/amd64/i386/}/loader/${f}) FILES+= ${BOOTSRC}/${MACHINE:C/amd64/i386/}/loader/${f} .else _______________________________________________ [email protected] mailing list To unsubscribe, send any mail to "[email protected]"
https://www.mail-archive.com/[email protected]/msg160770.html
CC-MAIN-2018-43
en
refinedweb
Opened 4 months ago Last modified 4 months ago #29466 new Bug Textual "to" parameter of ForeignKey fails to resolve if placed in abstract model Description import django from django.conf import settings settings.configure(DEBUG=True) django.setup() from django.db import models SHOULD_I_FAIL = True # switch this to reproduce the bug class ReferencedModel(models.Model): field = models.FloatField() class Meta: app_label = 'myapp' ref = 'ReferencedModel' if SHOULD_I_FAIL else ReferencedModel class AbstractModel(models.Model): # NOTE: only abstract models are affected field = models.ForeignKey(ref, on_delete=models.CASCADE) class Meta: abstract = True app_label = 'myapp' class RealModel(AbstractModel): other_field = models.CharField(max_length=100) class Meta: app_label = 'myapp' ffield = AbstractModel._meta.get_field('field') # ValueError: Related model 'ReferencedModel' cannot be resolved print(ffield.target_field) Change History (3) comment:1 Changed 4 months ago by comment:2 Changed 4 months ago by I will assume that this is intended behavior. According to #24215, abstract models does not register pending lookups into Apps registry. I have tried to implement it, but found an issue with recursive relations in abstract model - they cannot be resolved (because abstract models is not registered), but it is valid to have them, because they can be resolved in concrete children classes. So for abstract models lazy-referenced relations should not be resolved. I am not sure about this, need someone more proficient to take a look. comment:3 Changed 4 months ago by Sounds reasonable, but what if I'm sure that all models are already loaded (e.g. during Command execution)? How can I manually resolve textual model name and target_field? Reproduced at 741792961815cf4a95c5ce8ab590dfc7700c6153.
https://code.djangoproject.com/ticket/29466
CC-MAIN-2018-43
en
refinedweb
A long time ago I wanted to show similar/related posts at the end of each post on this blog. At the time, Hugo didn't have built in support to show related posts (nowadays it has). So I decided to implement my own using python, sklearn and Clustering. Program design Reading & Parsing posts Since I write in English and Spanish, I needed to train the model twice, in order to only show English related post to English readers and Spanish ones to Spanish readers. To achieve it, I created a readPosts function that takes in as parameters a path where the post are, and a boolean value indicating whether I want related posts for English or Spanish. dfEng = readPosts('blog/content/post', english=True) dfEs = readPosts('blog/content/post', english=False) Inside this function (you can check it on my github), I read all the English/Spanish posts and return a Pandas Data Frame. The most important thing this function does is select the correct parser, to open files using a yaml parser or a TOML parser. Once the frontmatter is read, readPosts makes a DataFrame using that metadata. It only takes into account the following metadata: tags = ('title', 'tags', 'introduction', 'description') This is the information that will be used for classifying. Help me keep writing Model Selection As I said at the beginning of the post, I decided to use the Clustering technique. As I am treating with text data, I need a way to convert all this data to numeric form, as clustering only works with numeric data. To achieve it, I have to use a technique called TF-IDF. I won't delve into the details of this technique, but give you a short introduction to it. What is TF-IDF (Term Frequency - Inverse Document Frequency) When working with text data, many words will appear for multiple documents of multiple classes, this words typically don't contain discriminatory information. TF-IDF aims to downweight those frequently appearing words in the data (In this case, the Pandas Data Frame). The tf-idf is defined as the product of: - The term frequency. Number of times a term appears in a document. - The Inverse document frequency. How much information the word provides taking into account all documents, that is, if the term is common or rare across all documents. Multiplying the above values gives the tf-idf, quoting Wikipedia: A high weight in tf–idf is reached by a high term frequency (in the given document) and a low document frequency of the term in the whole collection of documents; the weights hence tend to filter out common terms. Since the ratio inside the idf's log function is always greater than or equal to 1, the value of idf (and tf-idf) is greater than or equal to 0. As a term appears in more documents, the ratio inside the logarithm approaches 1, bringing the idf and tf-idf closer to 0. In short, as more common a term is across all documents, less tf-idf score it will have, signaling that this word is not important for classifying. Hyper-Parameter Tunning To select the appropriate parameters for the model I've used sklearn's GridSearchCV method, you can check it on line 425 of my code. Cleaning the Data Now that I have decided what method use (clustering) and how convert the text data to a vector format (TF-IDF), I have to clean the data. Usually, when dealing with text data you have to remove words that are used often, but doesn't add meaning, those words are called stop words (the, that, a etc). This work is done in generateTfIdfVectorizer. In this process I also perform a stemmization of the words. From Wikipedia, Stemming is the process of: Reducing inflected (or sometimes derived) words to their word stem, base or root form—generally a written word form. Depending on which language I am generating the related posts for (English or Spanish) I use def tokenizer_snowball(text): stemmer = SnowballStemmer("spanish") return [stemmer.stem(word) for word in text.split() if word not in stop] for Spanish or def tokenizer_porter(text): porter = PorterStemmer() return [porter.stem(word) for word in text.split() if word not in stop] for English. After this process, finally I have all the data ready to perform clustering. Clustering I've used KMeans to do the clustering. The most time consuming task of all this process was, as usual, clean the data, so this step is simple. I just need a way of know how many clusters I should have. For this, I've used the Elbow Method. This method is an easy way to identify the value of k (How many clusters there are.) for which the distortion begins to increase rapidly. This is best shown with an image: After executing the model, using 16 features, this are the ones selected for Spanish: [u'andro', u'comand', u'curs', u'dat', u'desarroll', u'funcion', u'googl', u'jav', u'libr', u'linux', u'program', u'python', u'recurs', u'script', u'segur', u'wordpress'] and the ones used for English: [u'blogs', u'chang', u'channels', u'curat', u'error', u'fil', u'gento',u'howt', u'list', u'lists', u'podcasts', u'python', u'scal', u'scienc', u'script', u'youtub'] How I integrated it with Hugo This was a tedious task, since I had to read the output of the model (in CSV format) into hugo and pick 10 random post from the same cluster. Although is no longer required to use this, I want to share how I integrated this approach with Hugo to show related posts: {{ $url := string (delimit (slice "static/" "labels." .Lang ".csv" ) "") }} {{ $sep := "," }} {{ $file := string .File.LogicalName }} {{/* First iterate thought csv to get post cluster */}} {{ range $i, $r := getCSV $sep $url }} {{ if in $r (string $file) }} {{ $.Scratch.Set "cluster" (index . 1) }} {{ end }} {{ end }} {{ $cluster := $.Scratch.Get "cluster" }} {{/* loop csv again to store post in the same cluster */}} {{ range $i, $r := getCSV $sep $url }} {{ if in $r (string $cluster) }} {{ $.Scratch.Add "posts" (slice $r) }} {{ end }} {{ end }} {{ $post := $.Scratch.Get "posts" }} {{/* Finally, show 5 randomly related posts */}} {{ if gt (len $post) 1 }} <h1>{{T "related" }}</h1> <ul> {{ range first 5 (shuffle $post) }} <li><a id="related-post" {{ printf "href=%q" ($.Ref (index . 2)) | safeHTMLAttr }} {{ printf "title=%q" (index . 3) | safeHTMLAttr }}>{{ index . 3 }}</a></li> {{ end }} </ul> {{ end }} If you have any comments, or want to improve something, comment below. References Spot a typo?: Help me fix it by contacting me or commenting below!
https://elbauldelprogramador.com/en/related-posts-hugo-sklearn/
CC-MAIN-2018-43
en
refinedweb
Use QZXing in qml Hi all, at the moment I am trying to use QZXing in my qml. But I am failing. If I try to compile I get the following error: "undefined reference to 'QZXing::registerQMLTypes()'" But decoding the qr code of an image in main() works. What I have done so far: Downloaded QZXing source and compiled it to get libQZXing.so. QT += core gui qml quick widgets multimedia ... LIBS += -L$$PWD/QZXing/ -lQZXing INCLUDEPATH += $$PWD/QZXing DEPENDPATH += $$PWD/QZXing main.cpp: #include <QGuiApplication> #include <QQmlApplicationEngine> #include <QZXing.h> int main(int argc, char *argv[]) { QCoreApplication::setAttribute(Qt::AA_EnableHighDpiScaling); QGuiApplication app(argc, argv); QZXing::registerQMLTypes(); QQmlApplicationEngine engine; engine.load(QUrl(QLatin1String("qrc:/main.qml"))); if (engine.rootObjects().isEmpty()) return -1; QImage qrCodeimageToDecode("/xxx/qrcode.png"); QZXing decoder; decoder.setDecoder(QZXing::DecoderFormat_QR_CODE); QString resultQrCode = decoder.decodeImage(qrCodeimageToDecode); return app.exec(); } Any ideas what I am doing wrong? - raven-worx Moderators @MHermann said in Use QZXing as library in qml: Any ideas what I am doing wrong? are you sure the libQZXing.sorelies in $$PWD/QZXing/?? @raven-worx : Yes, I compiled it and copied it there. @raven-worx : The three lines QZXing decoder; decoder.setDecoder(QZXing::DecoderFormat_QR_CODE); QString resultQrCode = decoder.decodeImage(qrCodeimageToDecode); are executing without problems. And they are also used from libQZXing.so. are executing without problems. And they are also used from libQZXing.so. Yes i overread that. seems the QZXING_QMLmacro definition is missing. Recompile the lib with CONFIG += qzxing_qml @raven-worx : I added the line CONFIG += qzxing_qml to QZXing.pro and compiled the lib again. I copied the new *.so files to my directory. But it behaves exactly the same... "undefined reference to `QZXing::registerQMLTypes()'" I forgot to mention, that I also added additional arguments in my project: DEFINES+=QZXING_QML DEFINES+=QML_MULTIMEDIA Any more ideas? @MHermann said in Use QZXing as library in qml: Any more ideas? No, this seemed pretty much the cause for issue to me. Alternatively - if its's an option for you - you could add include($$PWD/QZXing/src/QZXing.pri(and also add CONFIG += qzxing_qml) to your application's .pro file. Then QZXing is compiled into your binary directly. This could also be a solution. But this would not be my preferred solution... That would mean that the whole QZXing source code will be compiled too, each time I am compiling my own source code. Or is it possible to exclude this from the compiling process and compile it only once? I added the QZXing source code via include($$PWD/QZXing/src/QZXing.pri Now it is working.
https://forum.qt.io/topic/86995/use-qzxing-in-qml
CC-MAIN-2018-43
en
refinedweb
What can cause this SIGSEGV error? I received a crash log that I cannot explain. I have searched around and it appears that the SIGSEGV has something to do with memory. But in my case there is nothing of my own code except for the main.m in the stacktrace. Also it doesn't seem to symbolicate any of the system libraries. The crash so far only happened on one iPhone. On other phones I haven't been able to reproduce it. Right now I'm completely stuck and don't know where to continue so if anyone has seen something like this before it would be good to hear their problem and resolution. The crash log: Incident Identifier: TODO CrashReporter Key: TODO Hardware Model: iPhone4,1 OS Version: iPhone OS 6.1.3 (10B329) Report Version: 104 Exception Type: SIGSEGV Exception Codes: SEGV_ACCERR at 0x41fd5903 Crashed Thread: 0 Thread 0 Crashed: 0 libobjc.A.dylib 0x3b0b9564 0x3b0b6000 + 13668 1 libobjc.A.dylib 0x3b0bb1d7 0x3b0b6000 + 20951 2 CoreFoundation 0x33396605 0x332d4000 + 796165 3 CoreFoundation 0x3339635d 0x332d4000 + 795485 4 libobjc.A.dylib 0x3b0bea65 0x3b0b6000 + 35429 5 libc++abi.dylib 0x3ab0b07b 0x3ab0a000 + 4219 6 libc++abi.dylib 0x3ab0b114 0x3ab0a000 + 4372 7 libc++abi.dylib 0x3ab0c599 0x3ab0a000 + 9625 8 libobjc.A.dylib 0x3b0be9d1 0x3b0b6000 + 35281 9 CoreFoundation 0x332dcf21 0x332d4000 + 36641 10 CoreFoundation 0x332dcd49 0x332d4000 + 36169 11 GraphicsServices 0x36eb52eb 0x36eb0000 + 21227 12 UIKit 0x351f2301 0x3519b000 + 357121 13 Stylbar 0x0007109f main (main.m:21) Edit 3th of May: The crash log is sent by a user. I haven't been able to reproduce the issue myself unfortunately, which is why it's so difficult for me to figure out what went wrong with just this crash log. It appeared to have happened about 15 times in a row for the same user when opening a certain view controller. The view controller does several calls to a server to load a post, comments and images and profile pictures. All the code that's executed when this view controller is opened is probably over 2000 lines of code (excluding the RestKit and SBWebImage libraries that are used within this code). Posting that code here wouldn't help anyone I'm afraid. Answers The most simple and useful way to spend your time hunting for the cause of the crash is to look at your code and focus on places where UIKit has a delegate that points back into your code. For example, I found that the most common place this sort of thing would show up was in UITableView. The reason these problems are so hard to track down is that they might only happen in a low memory situation or in some uncommon UI condition that is very hard to reproduce. It is better to just do a code review and make sure that delegate that are set to point to your classes are set back to nil in your own object destructors. If you have many developers, it is often better to work on some higher level abstractions like a generic table and cell class that is used throughout the project than to have every developer coding up a UITableView and making mistakes like forgetting to nil out the delegate that are very difficult to find. SIGSEGV is a problem the occurs when your application tries to access an address of memory that doesn't exists or some address where is already reserved to another program. I have the same issue with an application right now but I have to review my code to figure it out better. One clue for this kind of problem could be something equivalent to this (found in wikipedia): #include <stdlib.h> int main(void) { char p = NULL; / p is a pointer to char that initializes poiting to "nowhere"*/ * p = 'x'; /* Tries to save the char 'x' in 'no address'*/ return 0; } I hope this can help someone. Need Your Help Splitting Strings by slashes Deleting an associated record through a remote_form ruby-on-rails ruby model associationsI have built a ruby on rails app that lets users track their workouts. User has_many workouts. In addition, a User can create a box (gym) if they are a gym owner. Users can then associate with that...
http://unixresources.net/faq/16320955.shtml
CC-MAIN-2018-43
en
refinedweb
Build. Most times, the information you need to get from a user is boolean-like - for example: yes or no, true or false, enable or disable, on or off, etc. Traditionally, the checkbox form component is used for getting these kinds of input. However, in modern interface designs, toggle switches are commonly used as checkbox replacements, although there are some accessibility concerns. In this tutorial, we will see how to build a custom toggle switch component with React. At the end of the tutorial, we would have built a very simple demo React app that uses our custom toggle switch component. Here is a demo of the final application we will be building in this tutorial. Prerequisites Before getting started, you need to ensure that you have Node already installed on your machine. I will also recommend that you install the Yarn package manager on your machine, since we will be using it for package management instead of npm that ships with Node. You can follow this Yarn installation guide to install yarn on your machine. We will create the boilerplate code for our React app using the create-react-app command-line package. You also need to ensure that it is installed globally on your machine. If you are using npm >= 5.2 then you may not need to install create-react-app as a global dependency since we can use the npx command. Finally, this tutorial assumes that you are already familiar with React. If that is not the case, you can check the React Documentation to learn more about React. Getting Started Create new Application Start a new React application using the following command. You can name the application however you desire. create-react-app react-toggle-switch npm **>= 5.2** If you are using npm version 5.2 or higher, it ships with an additional npx binary. Using the npx binary, you don't need to install create-react-app` globally on your machine. You can start a new React application with this simple command: npx create-react-app react-toggle-switch Install Dependencies Next, we will install the dependencies we need for our application. Run the following command to install the required dependencies. yarn add lodash bootstrap prop-types classnames yarn add -D npm-run-all node-sass-chokidar We have installed node-sass-chokidar as a development dependency for our application to enable us use SASS. For more information about this, see this guide. Modify the npm Scripts Edit the package.json file and modify the scripts section to look like the following: "scripts": { "start:js": "react-scripts start", "build:js": "react-scripts build", "start": "npm-run-all -p watch:css start:js", "build": "npm-run-all build:css build:js", "test": "react-scripts test --env=jsdom", "eject": "react-scripts eject", "build:css": "node-sass-chokidar --include-path ./src --include-path ./node_modules src/ -o src/", "watch:css": "npm run build:css && node-sass-chokidar --include-path ./src --include-path ./node_modules src/ -o src/ --watch --recursive" } Include Bootstrap CSS We installed the bootstrap package as a dependency for our application since we will be needing some default styling. To include Bootstrap in the application, edit the src/index.js file and add the following line before every other import statement. import "bootstrap/dist/css/bootstrap.min.css"; Start the Application Start the application by running the following command with yarn: yarn start The application is now started and development can begin. Notice that a browser tab has been opened for you with live reloading functionality to keep in sync with changes in the application as you develop. At this point, the application view should look like the following screenshot: The ToggleSwitch Component Create a new directory named components inside the src directory of your project. Next, create another new directory named ToggleSwitch inside the components directory. Next, create two new files inside src/components/ToggleSwitch, namely: index.js and index.scss. Add the following content into the src/components/ToggleSwitch/index.js file. /_ src/components/ToggleSwitch/index.js _/ import PropTypes from 'prop-types'; import classnames from 'classnames'; import isString from 'lodash/isString'; import React, { Component } from 'react'; import isBoolean from 'lodash/isBoolean'; import isFunction from 'lodash/isFunction'; import './index.css'; class ToggleSwitch extends Component {} ToggleSwitch.propTypes = { theme: PropTypes.string, enabled: PropTypes.oneOfType([ PropTypes.bool, PropTypes.func ]), onStateChanged: PropTypes.func } export default ToggleSwitch; In this code snippet, we created the ToggleSwitch component and added typechecks for some of its props. theme- is a stringindicating the style and color to be used for the toggle switch. enabled- can be either a booleanor a functionthat returns a boolean, and it determines the state of the toggle switch when it is rendered. onStateChanged- is a callback function that will be called when the state of the toggle switch changes. This is useful for triggering actions on the parent component when the switch is toggled. Initializing the ToggleSwitch State In the following code snippet, we initialize the state of the ToggleSwitch component and define some component methods for getting the state of the toggle switch. /_ src/components/ToggleSwitch/index.js _/ class ToggleSwitch extends Component { state = { enabled: this.enabledFromProps() } isEnabled = () => this.state.enabled enabledFromProps() { let { enabled } = this.props; // If enabled is a function, invoke the function enabled = isFunction(enabled) ? enabled() : enabled; // Return enabled if it is a boolean, otherwise false return isBoolean(enabled) && enabled; } } Here, the enabledFromProps() method resolves the enabled prop that was passed and returns a boolean indicating if the switch should be enabled when it is rendered. If enabled prop is a boolean, it returns the boolean value. If it is a function, it first invokes the function before determining if the returned value is a boolean. Otherwise, it returns false. Notice that we used the return value from enabledFromProps() to set the initial enabled state. Also, we have added the isEnabled() method to get the current enabled state. Toggling the ToggleSwitch Let's go ahead and add the method that actually toggles the switch when it is clicked. /_ src/components/ToggleSwitch/index.js _/ class ToggleSwitch extends Component { // ...other class members here toggleSwitch = evt => { evt.persist(); evt.preventDefault(); const { onClick, onStateChanged } = this.props; this.setState({ enabled: !this.state.enabled }, () => { const state = this.state; // Augument the event object with SWITCH_STATE const switchEvent = Object.assign(evt, { SWITCH_STATE: state }); // Execute the callback functions isFunction(onClick) && onClick(switchEvent); isFunction(onStateChanged) && onStateChanged(state); }); } } Since this method will be triggered as a click event listener, we have declared it with the evt parameter. First, this method toggles the current enabled state using the logical NOT ( !) operator. When the state has been updated, it triggers the callback functions passed to the onClick and onStateChanged props. Notice that since onClick requires an event as its first argument, we augmented the event with an additional SWITCH_STATE property containing the new state object. However, the onStateChanged callback is called with the new state object. Rendering the ToggleSwitch Finally, let's implement the render() method of the ToggleSwitch component. /_ src/components/ToggleSwitch/index.js _/ class ToggleSwitch extends Component { // ...other class members here render() { const { enabled } = this.state; // Isolate special props and store the remaining as restProps const { enabled: _enabled, theme, onClick, className, onStateChanged, ...restProps } = this.props; // Use default as a fallback theme if valid theme is not passed const switchTheme = (theme && isString(theme)) ? theme : 'default'; const switchClasses = classnames( `switch switch--${switchTheme}`, className ) const togglerClasses = classnames( 'switch-toggle', `switch-toggle--${enabled ? 'on' : 'off'}` ) return ( <div className={switchClasses} onClick={this.toggleSwitch} {...restProps}> <div className={togglerClasses}></div> </div> ) } } A lot is going on in this render() method - so let's try to break it down. First, the enabledstate is destructured from the component state. Next, we destructure the component props and extract the restPropsthat will be passed down to the switch. This enables us to intercept and isolate the special props of the component. Next, we use classnames to construct the classes for the switch and the inner toggler, based on the themeand the enabledstate of the component. Finally, we render the DOM elements with the appropriate props and classes. Notice that we passed in this.toggleSwitchas the clickevent listener on the switch. Styling the ToggleSwitch Now that we have the ToggleSwitch component and its required functionality, we will go ahead and write the styles for the toggle switch. Add the following code snippet to the src/components/ToggleSwitch/index.scss file you created earlier: /_ src/components/ToggleSwitch/index.scss _/ // DEFAULT COLOR VARIABLES $ball-color: #ffffff; $active-color: #62c28e; $inactive-color: #cccccc; // DEFAULT SIZING VARIABLES $switch-size: 32px; $ball-spacing: 2px; $stretch-factor: 1.625; // DEFAULT CLASS VARIABLE $switch-class: 'switch-toggle'; /_ SWITCH MIXIN _/ @mixin switch($size: $switch-size, $spacing: $ball-spacing, $stretch: $stretch-factor, $color: $active-color, $class: $switch-class) {} Here, we defined some default variables and created a switch mixin. In the next section, we will we will implement the mixin, but first, let's examine the parameters of the switch mixin: $size- The height of the switch element. It must have a length unit. It defaults to 32px. $spacing- The space between the circular ball and the switch container. It must have a length unit. It defaults to 2px. $stretch- A factor used to determine the extent to which the width of the switch element should be stretched. It must be a unitless number. It defaults to 1.625. $color- The color of the switch when in active state. This must be a valid color value. Note that the circular ball is always white irrespective of this color. $class- The base class for identifying the switch. This is used to dynamically create the state classes of the switch. It defaults to 'switch-toggle'. Hence, the default state classes are .switch-toggle--onand .switch-toggle--off. Implementing the Switch Mixin Here is the implementation of the switch mixin: /_ src/components/ToggleSwitch/index.scss _/ @mixin switch($size: $switch-size, $spacing: $ball-spacing, $stretch: $stretch-factor, $color: $active-color, $class: $switch-class) { // SELECTOR VARIABLES $self: '.' + $class; $on: #{$self}--on; $off: #{$self}--off; // SWITCH VARIABLES $active-color: $color; $switch-size: $size; $ball-spacing: $spacing; $stretch-factor: $stretch; $ball-size: $switch-size - ($ball-spacing _ 2); $ball-slide-size: ($switch-size _ ($stretch-factor - 1) + $ball-spacing); // SWITCH STYLES height: $switch-size; width: $switch-size * $stretch-factor; cursor: pointer !important; user-select: none !important; position: relative !important; display: inline-block; &#{$on}, &#{$off} { &::before, &::after { content: ''; left: 0; position: absolute !important; } &::before { height: inherit; width: inherit; border-radius: $switch-size / 2; will-change: background; transition: background .4s .3s ease-out; } &::after { top: $ball-spacing; height: $ball-size; width: $ball-size; border-radius: $ball-size / 2; background: $ball-color !important; will-change: transform; transition: transform .4s ease-out; } } &#{$on} { &::before { background: $active-color !important; } &::after { transform: translateX($ball-slide-size); } } &#{$off} { &::before { background: $inactive-color !important; } &::after { transform: translateX($ball-spacing); } } } In this mixin, we start by setting some variables based on the parameters passed to the mixin. Then we go ahead, creating the styles. Notice that we are using the ::after and ::before pseudo-elements to dynamically create the components of the switch. ::before creates the switch container while ::after creates the circular ball. Also notice how we constructed the state classes from the base class and assign them to variables. The $on variable maps to the selector for the enabled state, while the $off variable maps to the selector for the disabled state. We also ensured that the base class ( .switch-toggle) must be used together with a state class ( .switch-toggle--on or .switch-toggle--off) for the styles to be available. Hence, we used the &#{$on} and &#{$off} selectors. Creating Themed Switches Now that we have our switch mixin, we will continue to create some themed styles for the toggle switch. We will create two themes, namely: default and graphite-small. Append the following code snippet to the src/components/ToggleSwitch/index.scss file. /_ src/components/ToggleSwitch/index.scss _/ @function get-switch-class($selector) { // First parse the selector using `selector-parse` // Extract the first selector in the first list using `nth` twice // Extract the first simple selector using `simple-selectors` and `nth` // Extract the class name using `str-slice` @return str-slice(nth(simple-selectors(nth(nth(selector-parse($selector), 1), 1)), 1), 2); } .switch { $self: &; $toggle: #{$self}-toggle; $class: get-switch-class($toggle); // default theme &#{$self}--default > #{$toggle} { // Always pass the $class to the mixin @include switch($class: $class); } // graphite-small theme &#{$self}--graphite-small > #{$toggle} { // A smaller switch with a `gray` active color // Always pass the $class to the mixin @include switch($color: gray, $size: 20px, $class: $class); } } Here we first create a Sass function named get-switch-class that takes a $selector as parameter. It runs the $selector through a chain of Sass functions and tries to extract the first class name. For example, if it receives: .class-1 .class-2, .class-3 .class-4, it returns class-1. .class-5.class-6 > .class-7.class-8, it returns class-5. Next, we define styles for the .switch class. We dynamically set the toggle class to .switch-toggle and assign it to the $toggle variable. Notice that we assign the class name returned from the get-switch-class() function call to the $class variable. Finally, we include the switch mixin with the necessary parameters to create the theme classes. Notice that the structure of the selector for the themed switch looks like this: &#{$self}--default > #{$toggle} (using the default theme as an example). Putting everything together, this means that the elements hierarchy should look like the following in order for the styles to be applied: <!-- Use the default theme: switch--default --> <element class="switch switch--default"> <!-- The switch is in enabled state: switch-toggle--on --> <element class="switch-toggle switch-toggle--on"></element> </element> Here is a simple demo showing what the toggle switch themes look like: Building the Sample App Now that we have the ToggleSwitch React component with the required styling, let's go ahead and start creating the sample app we saw at the beginning section. Modify the src/App.js file to look like the following code snippet: /_ src/App.js _/ import classnames from 'classnames'; import snakeCase from 'lodash/snakeCase'; import React, { Component } from 'react'; import Switch from './components/ToggleSwitch'; import './App.css'; // List of activities that can trigger notifications const ACTIVITIES = [ 'News Feeds', 'Likes and Comments', 'Live Stream', 'Upcoming Events', 'Friend Requests', 'Nearby Friends', 'Birthdays', 'Account Sign-In' ]; class App extends Component { // Initialize app state, all activities are enabled by default state = { enabled: false, only: ACTIVITIES.map(snakeCase) } toggleNotifications = ({ enabled }) => { const { only } = this.state; this.setState({ enabled, only: enabled ? only : ACTIVITIES.map(snakeCase) }); } render() { const { enabled } = this.state; const headingClasses = classnames( 'font-weight-light h2 mb-0 pl-4', enabled ? 'text-dark' : 'text-secondary' ); return ( <div className="App position-absolute text-left d-flex justify-content-center align-items-start pt-5 h-100 w-100"> <div className="d-flex flex-wrap mt-5" style={{width: 600}}> <div className="d-flex p-4 border rounded align-items-center w-100"> <Switch theme="default" className="d-flex" enabled={enabled} onStateChanged={this.toggleNotifications} /> <span className={headingClasses}>Notifications</span> </div> {/_ ...Notification options here... _/} </div> </div> ); } } export default App; Here we initialize the ACTIVITIES constant with an array of activities that can trigger notifications. Next, we initialized the app state with two properties: enabled- a booleanthat indicates whether notifications are enabled. only- an arraythat contains all the activities that are enabled to trigger notifications. Notice that we used the snakeCase utility from Lodash to convert the activities to snakecase before updating the state. Hence, 'News Feeds' becomes 'news_feeds'. Next, we defined the toggleNotifications() method that updates the app state based on the state it receives from the notification switch. This is used as the callback function passed to the onStateChanged prop of the toggle switch. Notice that when the app is enabled, all activities will be enabled by default, since the only state property is populated with all the activities. Finally, we rendered the DOM elements for the app and left a slot for the notification options which will be added soon. At this point, the app should look like the following screenshot: Next go ahead and look for the line that has this comment: {/_ ...Notification options here... _/} and replace it with the following content in order to render the notification options: { enabled && ( <div className="w-100 mt-5"> <div className="container-fluid px-0"> <div className="pt-5"> <div className="d-flex justify-content-between align-items-center"> <span className="d-block font-weight-bold text-secondary small">Email Address</span> <span className="text-secondary small mb-1 d-block"> <small>Provide a valid email address with which to receive notifications.</small> </span> </div> <div className="mt-2"> <input type="text" placeholder="[email protected]" className="form-control" style={{ fontSize: 14 }} /> </div> </div> <div className="pt-5 mt-4"> <div className="d-flex justify-content-between align-items-center border-bottom pb-2"> <span className="d-block font-weight-bold text-secondary small">Filter Notifications</span> <span className="text-secondary small mb-1 d-block"> <small>Select the account activities for which to receive notifications.</small> </span> </div> <div className="mt-5"> <div className="row flex-column align-content-start" style={{ maxHeight: 180 }}> { this.renderNotifiableActivities() } </div> </div> </div> </div> </div> ) } Notice here that we made a call to this.renderNotifiableActivities() to render the activities. Let's go ahead and implement this method and the other remaining methods. Add the following methods to the App component. /_ src/App.js _/ class App extends Component { toggleActivityEnabled = activity => ({ enabled }) => { let { only } = this.state; if (enabled && !only.includes(activity)) { only.push(activity); return this.setState({ only }); } if (!enabled && only.includes(activity)) { only = only.filter(item => item !== activity); return this.setState({ only }); } } renderNotifiableActivities() { const { only } = this.state; return ACTIVITIES.map((activity, index) => { const key = snakeCase(activity); const enabled = only.includes(key); const activityClasses = classnames( 'small mb-0 pl-3', enabled ? 'text-dark' : 'text-secondary' ); return ( <div key={index} <Switch theme="graphite-small" className="d-flex" enabled={enabled} onStateChanged={ this.toggleActivityEnabled(key) } /> <span className={activityClasses}>{ activity }</span> </div> ); }) } } Here, we have implemented the renderNotifiableActivities method. We iterate through all the activities using ACTIVITIES.map() and render each with a toggle switch for it. Notice that the toggle switch uses the graphite-small theme. We also detect the enabled state of each activity by checking whether it already exists in the only state variable. Finally, we defined the toggleActivityEnabled method which was used to provide the callback function for the onStateChanged prop of each activity's toggle switch. We defined it as a higher-order function so that we can pass the activity as argument and return the callback function. It checks if an activity is already enabled and updates the state accordingly. Now the app should look like the following screenshot: If you prefer to have all the activities disabled by default instead of enabled as shown in the initial screenshot, then you could make the following changes to the App component: /_ src/App.js _/ class App extends Component { // Initialize app state, all activities are disabled by default state = { enabled: false, only: [] } toggleNotifications = ({ enabled }) => { const { only } = this.state; this.setState({ enabled, only: enabled ? only : [] }); } } Accessibility Concerns Using toggle switches in our applications instead of traditional checkboxes can enable us create neater interfaces, especially considering the fact that it is difficult to style a traditional checkbox however we want. However, using toggle switches instead of checkboxes has some accessibility issues, since the user-agent may not be able to interpret the component's function correctly. A few things can be done to improve the accessibility of the toggle switch and enable user-agents to understand the role correctly. For example, you can use the following ARIA attributes: <switch-element</switch-element> You can also listen to more events on the toggle switch to create more ways the user can interact with the component. Conclusion In this tutorial, we have been able to create a custom toggle switch for our React applications with proper styling that supports different themes. We have also been able to see how we can use it in our application instead of traditional checkboxes and the accessibility concerns involved. For the complete sourcecode of this tutorial, checkout the react-toggle-switch-demo repository on Github. You can also get a live demo of this tutorial on Code Sandbox.
https://scotch.io/tutorials/build-a-custom-toggle-switch-with-react
CC-MAIN-2018-43
en
refinedweb
Static Import is a new feature added in Java 5 specification. Java 5 has been around the corner for some time now, still lot of people who are new in Java world doesn’t know about this feature. Although I have not used this feature in my work, still it is interesting to know about. What is Static Import? In order to access static members, it is necessary to qualify references with the class they came from. For example, one must say: double r = Math.cos(Math.PI * theta); or System.out.println("Blah blah blah"); You may want to avoid unnecessary use of static class members like Math. and System. For this use static import. For example above code when changed using static import is changed to: import static java.lang.System.out; import static java.lang.Math.PI; import static java.lang.Math.cos; ... double r = cos(PI * theta); out.println("Blah blah blah"); ... So whats the advantage of using above technique? Only advantage that I see is readability of the code. Instead of writing name of static class, one can directly write the method or member variable name. Also keep one thing in mind here. Ambiguous static import is not allowed. i.e. if you have imported java.lang.Math.PI and you want to import mypackage.Someclass.PI, the compiler will throw an error. Thus you can import only one member PI. Happy importing.. :) Further Reading: Varargs in Java Why bother importing Math.PI and Math.cos if you don’t save anything in the code ( Math.cos(Math.PI * theta) ? Rightly pointed, Some cases I prefer to prefix name of Class but sometime it unnecessary bloat the code e.g.consider calling methods like assertNull from Junit as Assert.assertNull(), nothing wrong but I still like former one. Some more usecase discussed on where to use static import in Java Thanks for pointing out the error. I have changed the code. (Copy/Paste effect ;-)) Looks like a fun little feature. I agree, but if in single class there are too many static imports, it will be real difficult to know which function belongs to which class. This will make readability bit diificult. What are the benfits of static import. :) good. keep it up cool. -:D Nice post
https://viralpatel.net/blogs/static-import-java-example-tutorial/
CC-MAIN-2018-43
en
refinedweb
Write program adds large integers using linked listsemplois Hello I am looking for a similar application ( see the application and give me the price t... As disucssed As discussed As discussed. We have a large list of roughly 20,000 websites to scrape 13,000 keywords from each to check how many times and if the words appear in the websites. ...first. I want to promote my lunch video add on youtube with help of google adds. Create a command-line tool C program, which implements a non-limited range integer calculator. The program should be able to: handle arbitrary integers, their input in multiple lines (choose a continuation symbol, like: ), carry out the five integer base operations as they work in. I need some help with internet app marketing to promote my app to a large audience which includes young men and women and big adults. A project in which we need to read integers from a file of 80-byte records, with zero or more numbers per record, and store the integers in a table. job at hand writing an email to attract costumers and selling them an idea.... Veuillez vous inscrire ou vous connecter pour voir les détails.: [se connecter pour voir l'URL] Would like to hire someone professional about Linked in business page. That person will build our linked in business page like pro. I need a simple program in Java, C++ or C# that will do restoring division utilizing bitwise shifting two integers. Must be able to divide positive and negative numbers - use 8 bit registers. Can provide more details. I need 2 (Two) banner adds - Banner Size: 728x90 and 300x250 - to promote a concert tour I need someone to copy information from some websites. import search filter large json file . search edit export data in many ways xls pdf etc.. will provide sample of small file and example data inside it once we discuss. winner of this project who place best bid . thanks... ...the. I have a website with a google shop. I need google advertisement for my products. looking for email lists dumper who can get mail lists from databases. I need an expert that can rewrite my resume, cover letter and optimise my LinkedIn profile. The service should be exceptional. The person should have an in depth knowledge of the ATS system and how it works as regards resume development there is only to improve the edges of the logo in better format with optimal reshaping me.... ...
https://www.fr.freelancer.com/job-search/write-program-adds-large-integers-using-linked-lists/
CC-MAIN-2018-43
en
refinedweb
VisualEditor Contents - 1 Introduction - 2 Installation notes for MW 1.27x under Ubuntu 16.x - 3 Tips for old version 1 Introduction The Mediawiki VisualEditor is a Wysiwyg editor for Mediawiki versions that came out in late 2014. The system is also deployed in most Wikipedia versions (however since it is still in Beta, only registered users can use it). Installation of this extension is more complex than usual and requires two components: - Extension:VisualEditor - Parsoid, a combined MediaWiki and HTML parser implemented with Node.js I managed to get this installed and working, both on a test server and in this wiki. Anyhow, VisualEditor does seem work on MediaWiki 1.24 and 1.25 and 1.27 but the installation is very tricky and the software combination needs to be right. VisualEditor also seems to handle most pages that include templates and semantic forms. I don't think that semantic tags will work, but since we always use semantic forms that is more or less ok. Though, I consider moving forms-based pages into a special name space. Will have to do some extra testing however - Daniel K. Schneider (talk) 18:38, 26 June 2014 / Sept. 2015. Notice: This extension sometimes breaks (depending on the combo of installed versions) - Daniel K. Schneider (talk) 15:48, 19 September 2014 - Daniel K. Schneider (talk) 12:37, 17 April 2015 (CEST). Therefore upgrade your wiki on a test server first, if you are keen to have this working all the time. 2 Installation notes for MW 1.27x under Ubuntu 16.x It seems to be working for now - Daniel K. Schneider (talk) 20:01, 6 December 2016 (CET) 2.1 Pre-requisites - A node.js server and its npm package manager must be installed on your machine. To check if it exists, type nodjs --version sudo apt-get install nodejs sudo apt-get install npm - Curl must be installed, including the php library. sudo apt-get install curl sudo apt-get install php7.0-curl sudo apt-get install php-curl sudo apachectl restart 2.2 Installation of Parsoid Parsoid is a service that will run under node.js server and that will do the backend of the editing process, i.e. parse MediaWiki and HTML. If you have old developper versions installed it could be a good idea to remove these. My new install only started working after: sudo apt-get --purge remove parsoid rm -r /usr/lib/parsoid etc. My system admin skills are fairly low and I cannot explain why I had to remove every trace of an old parsoid... Installing/upgrading parsoid sudo apt-key advanced --keyserver pgp.mit.edu --recv-keys 90E9F83F22250DD7 sudo apt-add-repository "deb jessie-mediawiki main" sudo apt-get install apt-transport-https sudo apt-get update sudo apt-get install parsoid Configure parsoid Edit file /etc/mediawiki/parsoid/config.yaml For each wiki, add two lines likes this under mwApis:, one that defines the URL for the API and the other that gives a domain name (same as the one used in LocalSettings.php). Make sure the the API URL does work ! mwApis: - # This is the only required parameter, uri: '' domain: 'test' - # This is the only required parameter, uri: '' domain: 'your_wiki' Restart parsoid: sudo service parsoid restart 2.3 Installation of the extension Normal users (tested with MW 1.31, Sept. 2018) Take the one from the official extensions directory and just extract the archive. For example, do not use this, the URL will change wget Advanced users cd extensions git clone cd VisualEditor git submodule update --init Make sure to select the right version, 'master' probably will not work. Checkout the good version and update the submodule again git checkout REL1_27 git submodule update --init but, as we said above, for once it is a safer bet to get this extension from the distributor and not via git .... In LocalSettings.php: # Visual Editor wfLoadExtension( 'VisualEditor' ); // Enable; $wgVirtualRestConfig['modules']['parsoid'] = array( // URL to the Parsoid instance // Use port 8142 if you use the Ubuntu or Debian packages 'url' => '', // Parsoid "domain" -- ADAPT TO YOUR NEED, i.e. replace "your_wiki" by the same "domain" name you gave in the yaml.config file. 'domain' => 'your_wiki' ); 2.4 Allow visual editing in more than default namespaces Example (adapt to yours) $wgVisualEditorAvailableNamespaces = [ "0" => true, "2" => true, "102" => true, "104" => true, "106" => true, "_merge_strategy" => "array_plus"]; Tip: Look at the HTML source of Special:AllPages. The HTML options menu will list the numbers for your namespaces. 2.5 Troubleshooting 2.5.1 New installation is not working It is really really crucial to get the URLs right in the various configuration files 1) Control if Apache is ok. On the command line, type: sudo apache2ctl -S Depending on your installation you should get something like this (and not some random name that does not match your wiki URL !!) VirtualHost configuration: 129.194.7.85:* edutechwiki.unige.ch (/etc/apache2/sites-enabled/mediawiki.conf:1) Make very sure that in your Apache mediawiki.conf file you get the right name of your Internet alias, for example: <VirtualHost edutechwiki.unige.ch> ServerAdmin ....@..... ServerName edutechwiki.unige.ch 2) Make sure that the Parsoid domain is correct and consistent. 3) Uninstall old code and configuration files and start fresh. 2.5.2 Parsoid did work and stopped working This arrived on March 28, 2018, Ubuntu Server 16.04 / Mediawiki 1.27.4. According to a post on Mediawiki one should install an old version. Firstly make sure that this is the problem apt-cache show parsoid If it shows pool/main/p/parsoid/parsoid_0.9.0all_all.deb then you got version 9 and you could try downgrading. It's probably safer than trying to install a new visual editor that also can break everything, but I just do not know. Also make sure to copy the configuration files in case they go away. apt-get purge parsoid wget dpkg -i parsoid_0.8.0all_all.deb apt-mark hold parsoid Remember what you did here. apt-mark hold prevents installing a new package. Revert this after installing a new evething ... apt-mark unhold parsoid 2.6 https support - Is possible, read the official manual page (not tested so far) - Requires getting and installing certificates... 3 Tips for old version I had VisualEditor going for MW 1.25. After that it broke. You can browse through the wiki history if you are really interested in that ...
https://edutechwiki.unige.ch/en/VisualEditor
CC-MAIN-2018-43
en
refinedweb
Meet the bundle: MobileDetectBundle Mobile phones are taking over the world and, by extension, the web. Aside from the growth of responsive web design, websites are increasingly designing specific versions of their sites for mobile devices. In this article we'll introduce MobileDetectBundle, which detects mobile devices and helps you redirect users to the appropriate version of your website. Once installed, this is the only configuration you need to setup the usual scenario of redirecting mobile users to the m. host of your website: Read the official documentation of this bundle to learn how to configure more complex redirection scenarios. Apart from redirecting users, this bundle also provides a mobile_detector service which allows you to detect device types, operating systems and even vendors: All these methods are also available in your Twig templates via the equivalent functions provided by the bundle:! Thinking about buying an iPhone? Check out our deals! {% endif %} You make my day! :D Before I think of whether or not I want to create a community site, a blog, or anything else, I am always sure that I want a site that adopts to the specific features, needs and requirements of the browser and/or OS that is used to view it. To ensure that comments stay relevant, they are closed for old posts. Jean-Marie Lamodière said on Aug 18, 2015 at 10:52 #1 "if is_device('samsung') -> Thinking about buying an iPhone?" Mobile OS war in 3... 2... 1...
https://symfony.com/blog/meet-the-bundle-mobiledetectbundle
CC-MAIN-2018-43
en
refinedweb
getsockopt() Get options associated with a socket Synopsis: #include <sys/types.h> #include <sys/socket.h> int getsockopt( int s, int level, int optname, void * optval, socklen_t * optlen ); Since: BlackBerry 10.0.0. See " Keepalive timing," below. SO_LINGER. SO_OOBINLINE. SO_RCVBUF and SO_SNDBUF. SO_RCVLOWAT. SO_RCVTIMEO. SO_REUSEADDR level: SOL_SOCKET Enables or disables the reuse of duplicate addresses and port bindings. Indicates that the rules used in validating addresses supplied in a bind() call allows/disallows local addresses to be reused. SO_REUSEPORT.. SO_SNDTIMEO. SO. For a few clients (such as windowing systems that send a stream of mouse events that receive no replies), this packetization may cause significant delays. Therefore, TCP provides a boolean option, TCP_NODELAY, to defeat this algorithm.: 2014-06-24 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/g/getsockopt.html
CC-MAIN-2018-43
en
refinedweb
inet_aton() Convert a string into an Internet address stored in a structure Synopsis: #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> int inet_aton( const char * cp, struct in_addr * addr ); Since: BlackBerry 10.0.0 Arguments: - cp - A pointer to the character string. - addr - A pointer to a in_addr structure where the function can store the converted address. Description:(). Returns: - 1 - Success; the string was successfully interpreted. - 0 - Failure; the string is invalid. Last modified: 2014-11-17 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/i/inet_aton.html
CC-MAIN-2018-43
en
refinedweb
Hello all, My assignment is as follows: Write a program that prompts the user for the number of tellers at Nation’s Bank in Hyatesville that worked each of the last three years. For each worker the program should ask for the number of days out sick for each of the last three years. The output should provide the number of tellers and the total number of days missed by all the tellers over the last three years. I've got the body right (I think), so it does ask how many tellers and covers a three year period of time, but for the life of me I cannot figure out how to get totals from each tellers days out to one big total outside the loop. I'm sure it's simple math but I would really appreciate any help with the syntax (hope that was the right word). The last total is where I am stumped. Thanks in advance. // This program finds the number of days a teller was // out of work sick over a three year period. // Christine #include <iostream> using namespace std; int main() { int numTellers; float sickDays, totalsick, total = 0; int teller, year = 0; // these are the counters for the loops cout << "This program finds the number of days a teller \n" "was out of work sick over a three year period\n" << endl; cout << "How many tellers worked at Nation's Bank during each of the \n" "last three years?\n\n"; cin >> numTellers; for (teller = 1; teller <= numTellers; teller++) { totalsick = 0; for (year = 1; year <= 3; year++) { cout << "\nPlease enter the number of days teller " << teller << " was out " "sick in year " << year << "." << endl; cin >> sickDays; totalsick = totalsick + sickDays; } } cout << "\nThe " << numTellers << " tellers were out of work sick for a total \n" "of " << total << " days during the last three years." << endl << endl; return 0; }
https://www.daniweb.com/programming/software-development/threads/475383/calculations-in-for-loop
CC-MAIN-2018-43
en
refinedweb
main and background thread example i wanna write a basic thread example. here's the idea: i have the main event loop. before calling app.exec(), i create an object which itself creates an object and puts it in a separate thread for running. i need some way to tell the background thread that main thread has finished (to join()), so i have a boolto indicate that. however, when the application exits, something goes wrong. i'm running it using visual studio and the debugging session won't stop. i don't understand what's wrong. here's the code: #include <QApplication> #include <QObject> #include <QThread> #include <iostream> #include <thread> #include <QtWidgets/QLabel> #include <atomic> std::atomic<bool> bMainThreadRunning { true }; class ThreadObj { public: void work() { while(bMainThreadRunning) { ++counter; std::this_thread::sleep_for(std::chrono::seconds(5)); } } private: int counter { 0 }; }; class Obj { public: void start() { ThreadObj obj; m_thread = std::thread(&ThreadObj::work, &obj); } void onMainThreadFinished() { bMainThreadRunning = true; m_thread.join(); } private: std::thread m_thread; }; int main(int argc, char *argv[]) { QApplication a(argc, argv); Obj o; o.start(); QWidget w; w.show(); auto ret = a.exec(); o.onMainThreadFinished(); return 0; } Hi, @user4592357 said in main and background thread example: void onMainThreadFinished() { bMainThreadRunning = true; m_thread.join(); } Shouldn't that be bMainThreadRunning = false;? my bad, that's right, i tried to implement what's in my actual project in a snippet and that's what i got automatically. but anyways that's not the result i expect. what i expect is, when the main thread finishes i need the background thread to be notified and finish immediately, but with this code it was for the interval to exit. actually in my actual project i implemented that scenario, it's basically the same code but then i had to change the places where all of these objects are created and this is the result i get. Well, if you thread just started to sleep, you'll have to wait for your 5 seconds before it ends properly. Do you really have that kind of loop in your application ? It could be reworked to use e.g. a QTimer in a worker object. i'll try using QTimerinstead of this_thread::sleep_for(). but it still blows my mind how my previous implementation worked the way i wanted with this code. and is everything else okay with this code (i.e. the use of atomic bool etc.)? i can't figure out how to do that. this is what i have right now: #include <QtWidgets/QMainWindow> #include <QApplication> #include <QObject> #include <QThread> #include <iostream> #include <thread> #include <QtWidgets/QLabel> #include <atomic> #include <QTimer> std::atomic<bool> bMainThreadRunning { true }; class ThreadObj : public QObject { public: ThreadObj() : counter(0) {} void work() { auto timer = new QTimer; //timer.setInterval(3000); connect(timer, &QTimer::timeout, this, &ThreadObj::inc); timer->start(3000); } void inc() { while(bMainThreadRunning) ++counter; } private: int counter { 0 }; }; class Obj { public: void start() { ThreadObj obj; m_thread = std::thread(&ThreadObj::work, &obj); } void onMainThreadFinished() { bMainThreadRunning = false; m_thread.join(); } private: std::thread m_thread; }; int main(int argc, char *argv[]) { QApplication a(argc, argv); Obj o; o.start(); QWidget w; w.show(); auto ret = a.exec(); o.onMainThreadFinished(); return 0; } i did that. what i need is, when in background thread something weird happens, i tell the main thread to show a message box. then if at some time the "good" state of background thread is restored, main thread closes the message box. here's what i have now. actually the application works. but at some points it crashes. looking at the crash log i can say that the reason is the background thread but i don't see how my implementation is wrong. i appended the crash log after code. class BgThread : public QThread { Q_OBJECT public: explicit BgThread (const int &nSeconds, QObject *parent = nullptr) : QThread(parent), interval(1000 * nSeconds) /* to milliseconds */ {} void onStopBgThread() { running = false; } signals: void somethingWentWrong(); void restoreGoodState(); private slots: void performWork() { if(/* something went wrong */) { emit somethingWentWrong(); good_state = false; } else if(!good_state) { // means going from bad state to good state emit restoreGoodState(); good_state = true; } } private: void run() override { QTimer timer; connect(&timer, SIGNAL(timeout()), this, SLOT(performWork()), Qt::DirectConnection); timer.start(interval); exec(); quit(); wait(); } private: int interval; bool running { true }; bool good_state { true }; }; class MainApp { public: void start_bg_thread(const int seconds) { thread = new BgThread { seconds, this }; connect(thread, SIGNAL(finished()), thread, SLOT(deleteLater())); connect(thread, SIGNAL(somethingWentWrong()), SLOT(onSomethingWentWrong())); connect(thread, SIGNAL(restoreGoodState()), SLOT(onRestoreGoodState())); thread->start(); } void stop_bg_thread() { thread->onStopBgThread(); } public slots: void onSomethingWentWrong() { if(!msgBox) { msgBox= new QMessageBox(window()); msgBox->setIcon(QMessageBox::Warning); msgBox->setWindowTitle("window title"); msgBox->setText("message box text"); const auto pExitBtn = msgBox->addButton(tr("Exit"), QMessageBox::AcceptRole); connect(pExitBtn, SIGNAL(clicked()), qApp, SLOT(quit())); } msgBox->exec(); } void onRestoreGoodState() { msgBox->close(); } private: BgThread *thread { nullptr }; QMessageBox *msgBox; }; this is basically it. and somewhere in the app init process i call MainApp app; app.start_bg_thread(5); the crash log is something like this (stacktrace): do_system () from /lib64/libc.so.6 system () from /lib64/libc.so.6 // call signal handler ... pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 QThread::msleep(unsigned long) () start_thread () from /lib64/libpthread.so.0 clone () from /lib64/libc.so.6 waitpid () from /lib64/libc.so.6 do_system () from /lib64/libc.so.6 system () from /lib64/libc.so.6 // call signal handler BgThread::onStopBgThread () MainApp::qt_metacall(QMetaObject::Call, int, void**) () QMetaObject::activate(QObject*, QMetaObject const*, int, void**) () QCoreApplication::exec() main () Did you check the memory used by you application ? Using the top command for example. Valgrind etc. okay, thanks. one last question, as you can see, on timer's timeout i execute a function. is it possible stop the timer (so the function won't be executed), and then restart it again? hi again, i ran valgrind and it says there's a memory leak on exec()which is in run()overridden method. so what's wrong? @user4592357 said in main and background thread example: ThreadObj obj; m_thread = std::thread(&ThreadObj::work, &obj); obj is allocated on the stack, it will go out of scope and delete itself hi, thanks for the reply. the code has been modified since the first post (see my last code-post), so it's not the problem i've read similar articles, saying "you shouldn't subclass QThread". in my case i need to do that. - SGaist Lifetime Qt Champion Out of curiosity, what are you doing that requires to subclass QThread ? nothing requires it. it's just that the whole implementation is done and i don't wanna change everything
https://forum.qt.io/topic/86622/main-and-background-thread-example
CC-MAIN-2018-43
en
refinedweb
With pythonanywhere I develop an application that works with the social network, and it would be great if this social network would be in the white list. With pythonanywhere I develop an application that works with the social network, and it would be great if this social network would be in the white list. @noTformaT: Welcome to PA!!! It should be in the list now. Give it a go and let me know. @glenn I have the following problem. This code: import requests r = requests.post("") print r.text output text: ERROR The requested URL could not be retrieved The following error was encountered while trying to retrieve the URL: Unsupported Request Method a nd Protocol Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request. Your cache administrator is webmaster. Generated Thu, 11 Oct 2012 17:04:1 3 GMT by glenn-liveproxy (squid/2.7.STABLE9) That's odd. My results were different running your example... ========================================================================== [email protected]'s password: 21:32 ~ $ python2.7 Python 2.7.3 (default, Oct 4 2012, 11:28:36) [GCC 4.4.5] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import requests >>> r = requests.post("") >>> print(r.text) {"error":"invalid_client","error_description":"client_secret is undefined"} >>> Results when browsing to the URL via Aurora: {"error":"invalid_client","error_description":"client_secret is undefined"} I ran the code again from a non paid account and got the same results as noTformaT Is that the same bug in requests/urllib3 that we keep running into? It looks like it is the same one. Of course in that case the user bought a premium account, so the thread action stopped. In this case they haven't. I guess the question is on noTformaT...Are you planning to upgrade or do we need to try to figure out a fix for you? @a2j I develop non-commercial applications without the system of donations, so finding the funds to purchase a paid account's difficult for me. Probably will have to cut back on some functional applications. @noTformaT: Nobody's saying you have to upgrade to a paid account. Just that you wouldn't have a problem if you did, so why try and fix it if you were going to upgrade. Now we know we need to help you figure out a solution while using a free account. I think that the problem isn't present if you use urllib instead of requests, is that possible?
https://www.pythonanywhere.com/forums/topic/286/
CC-MAIN-2018-43
en
refinedweb
Interface for requesting credentials in QGIS in GUI independent way. More... #include <qgscredentials.h> Interface for requesting credentials in QGIS in GUI independent way. This class provides abstraction of a dialog for requesting credentials to the user. By default QgsCredentials will be used if not overridden with other credential creator function. QGIS application uses QgsCredentialDialog class for displaying a dialog to the user. Object deletes itself when it's not needed anymore. Children should use signal destroyed() to be notified of the deletion Definition at line 37.
https://api.qgis.org/2.8/classQgsCredentials.html
CC-MAIN-2020-34
en
refinedweb
# 5. Extras At this point, you already have a working full-featured serverless API, well done! 🎉 NestJS is a very comprehensive framework, and there could be a lot more use-cases to cover for your specific needs. I encourage you to dive into the NestJS documentation to learn more about the techniques and tools you can use. If you have more time and feel like it, here are some extras points that I found interesting to cover, especially if you want to build enterprise apps. Note that each of these extra parts is entirely independent, so you can skip to the one you are the most interested in or do them in any order 😉. # Add data validation It is a best practice to check and validate any data received by an API. What do you think would happen if you call your story creation endpoint, but without providing data? Let's try! curl -X POST -d "" Whoops! A new story is created, but with our entity properties are left empty 😱. We might want to make sure a new story has its animal field set and either a description or an image provided. Nest.js provides a built-in ValidationPipe that enforces validation rules for received data payloads, thanks to annotations provided by the class-validator package. To use it, you have to create a DTO (Data Transfer Object) class on which you will declare the validations rules using annotations. First, you need to install the required packages: npm install class-validator class-transformer Then create the file src/stories/story.dto.ts: export class StoryDto { @IsNotEmpty() animal: string; @IsOptional() description: string; @IsOptional() createdAt: Date; } It looks like a lot like our Story entity, but this time you define only properties that are expected in the request payload. That's why there is no imageUrl property here: it will be set by the controller only if an image file is uploaded. The annotations @IsNotEmpty() and @IsOptional() describe which property can be omitted and which one can be set in the payload. You can see the complete list of provided decorators here. Now open src/stories/stories.controller.ts and change the type of the data parameter of your POST function to StoryDto: ... async createStory( @Body() data: StoryDto, @UploadedFile() file: UploadedFileMetadata, ): Promise<Story> { ... Finally open src/main.azure.ts and enable ValidationPipe at the application level, to ensure all endpoints gets data validation: const app = await NestFactory.create(AppModule); app.setGlobalPrefix('api'); app.useGlobalPipes(new ValidationPipe()); Start your server with npm run start:azure and run the previous curl command again. This time you should properly receive an HTTP error 400 (bad request). Pro tip By default, detailed error messages will be automatically generated in case of a validation error. You also specify custom error message in the decorator options, for example: @IsNotEmpty({ message: 'animal must not be empty' }) animal: string; You also use special tokens in your error message or use a function for better granularity. See the class-validator documentation for more details. What about our other constraint, which is to have either a description or an image file provided? Since the imageUrl information is not directly part of the DTO, we cannot use it for validation. As the imageUrl property is set in the controller, that's where you have to perform manual validation. You can use the manual validation methods of the class-validator package for that. This time, it's your turn to finish the job! - Ensure that either descriptionor imageUrlis not empty, using manual validation. - Ensure that descriptionlength is at most 240 characters. - Ensure that animalis either set to cat, docor hamsterusing annotations. - Ensure that createdAtis a date if provided, using annotations. You can read more on data validation techniques in the NestJS documentation. # Enable CORS If you try to access your API inside a web application from your browser, you might encounter an error like that one: This error occurs because browsers block HTTP requests from scripts to web domains different than the one of the current web page to improve security. To bypass this restriction, your funpets-api \ --resource-group funpets \ --allowed-origins If you want to allow any website to use your API, you can replace the website URL by using * instead. In that case, be careful as Azure Functions will auto-scale to handle the workload if millions of users start using it, but so will your bill! # Enable authorization By default, all Azure Functions triggered by HTTP are publicly available. It's useful for a lot of scenarios, but at some point you might want to restrict who can execute your functions, in our case your API. Open the file main/function.json. In the functions, bindings, notice that authLevel is set to anonymous. It can be set to one of these 3 values: anonymous: no API key is required (default). function: an API key specific to this function is required. If none is defined, the defaultone will be used. admin: a host API key is required. It will be shared among all functions from the same app. Now change authLevel to function, and redeploy your function: # Don't forget to change the name with the one you used previously func azure functionapp publish <your-funpets-api> --nozip Then try to invoke again your API: curl https://<your-funpets-api>.azurewebsites.net/api/stories -i You should get an HTTP status 401 error ( Unauthorized). To call a protected function, you need to either provide the key as a query string parameter in the form code=<api_key> or you can provide it with the HTTP header x-functions-key. You can either log in to portal.azure.com and go to your function app, or follow these steps to retrieve your function API keys: // Retrieve your resource ID # Don't forget to change the name with the one you used previously az functionapp show --name <your-funpets-api> \ --resource-group funpets \ --query id # Use the resource ID from the previous command az rest --method post --uri "<resource_id>/host/default/listKeys?api-version=2018-11-01" You should see something like that: { "functionKeys": { "default": "functionApiKey==" }, "masterKey": "masterApiKey==", "systemKeys": {} } Then try to invoke again your API, this time with the x-functions-key header set with your function API key: curl https://<your-funpets-api>.azurewebsites.net/api/stories -i \ -H "x-functions-key: <your_function_api_key>" This time the call should succeed! Using authorization level you can restrict who can call your API, this can be useful especially for service-to-service access restrictions. However, if you need to manage finely who can access your API with an endpoint granularity, you need to implement authentication in your app. # Write tests Your API might currently look fine, but how can you ensure it has as little bugs as possible, and that you won't introduce regression in the future? Writing automated is not the most fun part of development, but it's a fundamental requirement to develop robust software applications. It helps to catch bugs early, preventing regressions and ensuring that production releases meet your quality and performance goals. The good news is NestJS has you covered to make your testing experience as smooth as possible. When you bootstrapped the project using the nest CLI, Jest and SuperTest frameworks have been set up for you. Each time you run the nest generate command, unit test files are also created for you with the extension .spec.ts. There are 5 NPM scripts dedicated to testing in your package.json file: npm test: runs unit tests once. npm run test:watchruns unit tests in watch mode, it will automatically re-run tests as you make modifications to the files. It is suited perfectly for TDD. npm run test:covruns unit tests and generate coverage report, so you can know which code paths are covered by your tests. npm run test:debug: runs unit tests with Node.js debugger enabled, so you can add breakpoints in your code editor and debug your tests more easily. npm run test:e2e: runs your end-to-end tests. Now run the npm test command. Oops, it seems that src/stories/stories.controller.spec.ts test if failing 😱! # Add module and providers mocks If you look at the stack trace, you can see that the reason is that @nestjs/typeorm and AzureStorageModule services cannot be resolved. It's expected: when running unit tests, you want to isolate the code you are testing as much as possible, and for that you can see that each test file provides its own module definition: beforeEach(async () => { const module: TestingModule = await Test.createTestingModule({ controllers: [StoriesController], }).compile(); controller = module.get<StoriesController>(StoriesController); }); The module created with Test.createTestingModule does not import AzureTableStorageModule and AzureStorageModule, so that's why their providers cannot be resolved. Instead of importing them right away to fix the issue, we should write mocks for the providers we use instead. # Mock @nestjs/azure-storage Let's start with mocking what we use in @nestjs/azure-storage module, using jest.mock(<module>) helper function. Add this code just after the imports: jest.mock('@nestjs/azure-storage', () => ({ // Use Jest automatic mock generation ...jest.genMockFromModule('@nestjs/azure-storage'), // Mock interceptor AzureStorageFileInterceptor: () => ({ intercept: jest.fn((context, next) => next.handle()) }) })); For simple modules, using jest.mock(<module>) would be enough to generate mocks automatically according to the module interface. But in our case, AzureStorageFileInterceptor needs to be mocked manually as it is a bit trickier: it must returns an object with a method intercept(context, next) that needs to call next.handle() to not break the chain of interceptors calls. So we provide our own version of the @nestjs/azure-storage module mock, using jest.genMockFromModule(<module>) helper to automatically generates mocks for everything except AzureStorageFileInterceptor. For AzureStorageFileInterceptor we manually reproduce a minimal implementation. Using jest.fn() method here creates a mock function. Thanks to that, we can later change its implementation in a specific test if needed. Then add AzureStorageService to the testting module providers list: beforeEach(async () => { const module: TestingModule = await Test.createTestingModule({ controllers: [StoriesController], providers: [AzureStorageService] }).compile(); controller = module.get<StoriesController>(StoriesController); }); And complete the missing import: import { AzureStorageService } from '@nestjs/azure-storage'; # Mock @nestjs/typeorm We also need to mock the storiesRepository service injected in our controller using @InjectRepository(Story), but how to do that? This time we do not need to mock the entire module, but only this specific service. We can still use Jest automatic mock generation: // Add this code after the imports const mockRepository = jest.genMockFromModule<any>('typeorm').MongoRepository; Its injection token is generated dynamically, so we need to add a custom provider to our testing module to reproduce the same behavior: beforeEach(async () => { const module: TestingModule = await Test.createTestingModule({ controllers: [StoriesController], providers: [ AzureStorageService, { provide: getRepositoryToken(Story), useValue: mockRepository }, ], }).compile(); controller = module.get<StoriesController>(StoriesController); }); Pro tip We had to look at the implementation @InjectRepository() annotation to find out that it uses the method getRepositoryToken() internally. Unfortunately, that's something you have to do sometimes to be able to mock modules properly. Don't forget to add missing imports: import { getRepositoryToken } from '@nestjs/typeorm'; import { Story } from './story.entity'; Now run npm test again, this time the tests should succeed! # Complete test suite Hold on, now that we have solved the mock issue, it's time to write more tests 😃! Try to add: - Unit tests for your controller in src/stories/stories.controller.ts. - An end-to-end for of your endpoints in tests/app.e2e-spec.ts. Take also a look a the report generated by npm run test:cov to see your test coverage. If you are not familiar with Jest you might want to take a look at the documentation. For end-to-end tests, HTTP assertions are made using the SuperTest library. You can also find examples and more information in the NestJS documentation. Solution: see the code for extras
https://black-cliff-0123f8e1e.azurestaticapps.net/step5/
CC-MAIN-2020-34
en
refinedweb
Important: Please read the Qt Code of Conduct - [SOLVED] Accessing UI from QtConcurrent::run - Chick3nN00dleS0up last edited by Chick3nN00dleS0up In my "DelayedRespondingBlock" I try to execute a method delayed without delaying the whole application, so I multithread and make the new thread sleeping for a little time: void DelayedRespondingBlock::updateOutputs() //called from another class { QFuture<void> future = QtConcurrent::run(this, &DelayedRespondingBlock::doUpdateOutputs); } void DelayedRespondingBlock::doUpdateOutputs() { #ifdef Q_OS_WIN Sleep(uint(delay_ms)); #else struct timespec ts = { delay_ms / 1000, (delay_ms % 1000) * 1000 * 1000 }; nanosleep(&ts, NULL); #endif this->produceOutputValues(); } This code works perfectly. I derive "LedBlock" from DelayedRespondingBlock and implement produceOutputValues(): void LedBlock::produceOutputValues() { this->repaint(); //paintEvent() checks some booleans } delay_ms is 1000 for LedBlock. After those 1000 ms, produceOutputValues() is called, so this->repaint() is called and the QWidget-class does some stuff, and finally my paintEvent() is called: void LedBlock::paintEvent(QPaintEvent *event) { QPainter* painter = new QPainter(this); painter->eraseRect(QRect(QPoint(0,0), QPoint(this->size().width(), this->size().height()))); if(this->getInputSockets().at(0)->getValue() == true) { painter->setBrush(QBrush(this->illuminationColor,Qt::SolidPattern)); painter->drawEllipse(QRect(10, 1, 18, 18)); } painter->setPen(QPen(this->getColor(), 2, Qt::SolidLine, Qt::SquareCap)); painter->drawEllipse(QRect(10, 1, 18, 18)); painter->end(); } There are some more steps after my paintEvent is finished, but after that, I crash. How am able to repaint my Widget from another Thread, without making it crash? Thanks in advance! - A Former User last edited by Hi, painting can only be done in the GUI thread. You could send a Qt signal from your worker thread to the GUI thread to trigger the repaint (signals/slots are thread safe). - Chick3nN00dleS0up last edited by @Wieland Thank you, it worked! :) - A Former User last edited by @Chick3nN00dleS0up You're welcome :-)
https://forum.qt.io/topic/52989/solved-accessing-ui-from-qtconcurrent-run
CC-MAIN-2020-34
en
refinedweb
Timeline Jul 27, 2016: - 11:37 PM Changeset in webkit [203814] by - 2 edits in trunk/Source/WebCore [Streams API] Use makeThisTypeError in ReadableStreamDefaultReader.js Patch by Romain Bellessort <[email protected]> on 2016-07-27 Reviewed by Darin Adler. Use makeThisTypeError and makeGetterTypeError in ReadableStreamDefaultReader.js No change in functionality. - Modules/streams/ReadableStreamDefaultReader.js: (cancel): (read): (releaseLock): (closed): - 11:28 PM Changeset in webkit [203813] by - 3 edits in trunk/Source/WebCore [soup] Incorrect usage of relaxAdoptionRequirement in the constructor of SocketStreamHandle Patch by Fujii Hironori <Fujii Hironori> on 2016-07-27 Reviewed by Carlos Garcia Campos. No new tests (No behavior change). Incrementing refcount in a constructor causes an assertion failure that it's not adopted yet. So, relaxAdoptionRequirement() was used to avoid the problem in the constructors of SocketStreamHandle. This is a incorrect solution. The correct solution is to make SocketStreamHandle::create() increment the refcount after calling the constructor. - platform/network/soup/SocketStreamHandle.h: Removed the second constructor of SocketStreamHandle which is not used anymore. Uninlined create() because this is not trivial anymore. - platform/network/soup/SocketStreamHandleSoup.cpp: (WebCore::SocketStreamHandle::create): Do the rest of jobs which was done by the constructors. (WebCore::SocketStreamHandle::SocketStreamHandle): Move the jobs after initialization to create(). Removed the second constructor. - 10:59 PM Changeset in webkit [203812] by - 3 edits in trunk/Source/JavaScriptCore [JSC] Remove some unused code from FTL Patch by Benjamin Poulain <[email protected]> on 2016-07-27 Reviewed by Mark Lam. All the liveness and swapping is done inside B3, this code is no longer needed. - dfg/DFGEdge.h: (JSC::DFG::Edge::doesNotKill): Deleted. - ftl/FTLLowerDFGToB3.cpp: (JSC::FTL::DFG::LowerDFGToB3::doesKill): Deleted. - 10:47 PM Changeset in webkit [203811] by - 2 edits in trunk/Tools LayoutTestRelay should wait for WebKitTestRunnerApp installation to complete Reviewed by Daniel Bates. - LayoutTestRelay/LayoutTestRelay/LTRelayController.m: (-[LTRelayController installApp]): - 9:45 PM Changeset in webkit [203810] by - 3 edits in trunk/LayoutTests Marking http/tests/loading/basic-credentials-sent-automatically.html as flaky on mac and ios-sim wk2 Unreivewed test gardening. - platform/ios-simulator-wk2/TestExpectations: - platform/mac-wk2/TestExpectations: - 6:59 PM Changeset in webkit [203809] by - 4 edits in trunk/Source/WebKit2 [iOS] Add WKUIDelegate SPI for specifying that an attachment list is from a managed source <rdar://problem/27471815> Reviewed by Dan Bernstein. - Platform/spi/ios/UIKitSPI.h: Declared UIPreviewItemTypeAttachment, UIPreviewDataAttachmentList, and UIPreviewDataAttachmentIndex. - UIProcess/API/Cocoa/WKUIDelegatePrivate.h: Declared -_attachmentListForWebView:sourceIsManaged:. - UIProcess/ios/WKContentViewInteraction.mm: (-[WKContentView _dataForPreviewItemController:atPosition:type:]): Used UIPreviewItemTypeAttachment, UIPreviewDataAttachmentList, and UIPreviewDataAttachmentIndex. If uiDelegate responds to -_attachmentListForWebView:sourceIsManaged:, called it instead of -_attachmentListForWebView:. Set sourceIsManaged as the value for the UIPreviewDataAttachmentListSourceIsManaged key in dataForPreview. - 6:36 PM Changeset in webkit [203808] by - 23 edits4 deletes in trunk/Source/JavaScriptCore [JSC] DFG::Node should not have its own allocator Patch by Benjamin Poulain <[email protected]> on 2016-07-27: - 4:54 PM Changeset in webkit [203807] by - 4 edits in trunk/Source/WebInspectorUI Web Inspector: Visual Styles Sidebar should have only one column when it's narrow <rdar://problem/27413248> Reviewed by Joseph Pecoraro. Many CSS values often get clipped in the two colunm layout. Change the layout to one colunm when visual style rows get too narrow. - UserInterface/Views/VisualStyleDetailsPanel.css: (.sidebar > .panel.details.css-style .visual > .details-section .details-section > .content .group > .row): (.sidebar > .panel.details.css-style .visual > .details-section .details-section > .content .group > .metric-section-row): Wrap all rows except for position/padding/margin controls. (.sidebar > .panel.details.css-style .visual > .details-section .details-section > .content .group > .row > .visual-style-property-container:not(.layout-reversed):last-child): Deleted. Margin between the first and the second column doesn't make sense one column layout. Set the margin in .visual-style-property-container instead. - UserInterface/Views/VisualStyleDetailsPanel.js: (WebInspector.VisualStyleDetailsPanel.prototype._generateMetricSectionRows): - UserInterface/Views/VisualStylePropertyEditor.css: (.visual-style-property-container): - 4:50 PM Changeset in webkit [203806] by - 9 edits in trunk First parameter to HTMLMediaElement.canPlayType() should be mandatory Reviewed by Ryosuke Niwa. LayoutTests/imported/w3c: Rebaseline w3c test now that more checks are passing. - web-platform-tests/html/dom/interfaces-expected.txt: Source/WebCore: First parameter to HTMLMediaElement.canPlayType() should be mandatory: Firefox and Chrome agree with the specification. No new tests, rebaselined existing tests. - html/HTMLMediaElement.idl: LayoutTests: Update existing tests to reflect behavior change. - media/encrypted-media/encrypted-media-can-play-type.html: - media/media-can-play-type-expected.txt: - media/media-can-play-type.html: - platform/mac/media/encrypted-media/encrypted-media-can-play-type-expected.txt: - 4:50 PM Changeset in webkit [203805] by - 5 edits in trunk First parameter to setTimeout() / setInterval() should be mandatory Reviewed by Darin Adler. LayoutTests/imported/w3c: Rebaseline W3C test now that more checks are passing. - web-platform-tests/html/dom/interfaces-expected.txt: Source/WebCore: First parameter to setTimeout() / setInterval() should be mandatory: Firefox and Chrome agree with the specification. No new tests, rebaselined existing test. - bindings/js/JSDOMWindowCustom.cpp: (WebCore::JSDOMWindow::setTimeout): (WebCore::JSDOMWindow::setInterval): - bindings/js/JSWorkerGlobalScopeCustom.cpp: (WebCore::JSWorkerGlobalScope::setTimeout): (WebCore::JSWorkerGlobalScope::setInterval): - 4:36 PM Changeset in webkit [203804] by - 2 edits in trunk/Source/WebInspectorUI [Mac] Web Inspector: CodeMirror-based editor bindings for Home and End don't match system behavior <rdar://problem/27575553> Patch by Joseph Pecoraro <Joseph Pecoraro> on 2016-07-27 Reviewed by Brian Burg. - UserInterface/Views/CodeMirrorEditor.js: (WebInspector.CodeMirrorEditor.create): (WebInspector.CodeMirrorEditor): Add some key map overrides for Home and End to better match system Mac behavior. This scrolls to the start or end of a document and does not change the cursor position. - 4:28 PM Changeset in webkit [203803] by - 3 edits2 adds in trunk Parameters to insertAdjacentText() / insertAdjacentHTML() should be mandatory Reviewed by Darin Adler. Source/WebCore: Parameters to insertAdjacentText() / insertAdjacentHTML() should be mandatory: - - Firefox and Chrome agree with the specification (although Firefox does not have insertAdjacentText()). Test: fast/dom/Element/insertAdjacentText-parameters.html - html/HTMLElement.idl: LayoutTests: Add test coverage. - fast/dom/Element/insertAdjacentText-parameters-expected.txt: Added. - fast/dom/Element/insertAdjacentText-parameters.html: Added. - 4:22 PM Changeset in webkit [203802] by - 7 edits in trunk/Source/JavaScriptCore [JSC] Fix a bunch of use-after-free of DFG::Node Patch by Benjamin Poulain <[email protected]> on 2016-07-27 Reviewed by Mark Lam. FTL had a few places where we use a node after it has been deleted. The dangling pointers come from the SSA liveness information kept on the basic blocks. This patch fixes the issues I could find and adds liveness invalidation to help finding dependencies like these. - dfg/DFGBasicBlock.h: (JSC::DFG::BasicBlock::SSAData::invalidate): - dfg/DFGConstantFoldingPhase.cpp: (JSC::DFG::ConstantFoldingPhase::run): Constant folding phase was deleting nodes in the loop over basic blocks. The problem is the deleted nodes can be referenced by other blocks. When the abstract interpreter was manipulating the abstract values of those it was doing so on the dead nodes. - dfg/DFGConstantHoistingPhase.cpp: Just invalidation. Nothing wrong here since the useless nodes were kept live while iterating the blocks. - dfg/DFGGraph.cpp: (JSC::DFG::Graph::killBlockAndItsContents): (JSC::DFG::Graph::killUnreachableBlocks): (JSC::DFG::Graph::invalidateNodeLiveness): - dfg/DFGGraph.h: - dfg/DFGPlan.cpp: (JSC::DFG::Plan::compileInThreadImpl): We had a lot of use-after-free in LCIM because we were using the stale live nodes deleted by previous phases. - 3:34 PM Changeset in webkit [203801] by - 4 edits in trunk/Source/WebCore Add localizable strings for inserting list types -and corresponding- rdar://problem/26102954 Reviewed by Dan Bernstein. - English.lproj/Localizable.strings: - platform/LocalizedStrings.cpp: (WebCore::insertListTypeNone): (WebCore::insertListTypeNoneAccessibilityTitle): (WebCore::insertListTypeBulleted): (WebCore::insertListTypeBulletedAccessibilityTitle): (WebCore::insertListTypeNumbered): (WebCore::insertListTypeNumberedAccessibilityTitle): - platform/LocalizedStrings.h: - 3:15 PM Changeset in webkit [203800] by - 3 edits2 adds in trunk Parameters to DOMParser.parseFromString() should be mandatory Reviewed by Ryosuke Niwa. Source/WebCore: Parameters to DOMParser.parseFromString() should be mandatory: Firefox and Chrome agree with the specification. Test: fast/parser/DOMParser-parameters.html - xml/DOMParser.idl: LayoutTests: Add test coverage. - fast/parser/DOMParser-parameters-expected.txt: Added. - fast/parser/DOMParser-parameters.html: Added. - 3:06 PM Changeset in webkit [203799] by - 4 edits in trunk/Source/WebCore Captions do not render in PiP window when element is hidden <rdar://problem/27556788> Reviewed by Simon Fraser. - html/shadow/MediaControlElements.cpp: (WebCore::MediaControlTextTrackContainerElement::createTextTrackRepresentationImage): Pass new flag so caption layers are always rendered. - rendering/RenderLayer.cpp: (WebCore::RenderLayer::paintLayerContents): Paint non-visible layers when PaintLayerIgnoreVisibility flag is set. - rendering/RenderLayer.h: Define PaintLayerIgnoreVisibility. - 2:59 PM Changeset in webkit [203798] by - 2 edits1 add in trunk/Source/JavaScriptCore concatAppendOne should allocate using the indexing type of the array if it cannot merge <rdar://problem/27530122> Reviewed by Mark Lam. Before, if we could not merge the indexing types for copying, we would allocate the the array as ArrayWithUndecided. Instead, we should allocate an array with the original array's indexing type. - runtime/ArrayPrototype.cpp: (JSC::concatAppendOne): - tests/stress/concat-append-one-with-sparse-array.js: Added. - 2:51 PM Changeset in webkit [203797] by - 11 edits in trunk Parameter to named property getter should be mandatory Reviewed by Ryosuke Niwa. LayoutTests/imported/w3c: Rebaseline W3C test now that more checks are passing. - web-platform-tests/html/dom/interfaces-expected.txt: Source/WebCore: Parameter to named property getter should be mandatory: No new tests, rebaselined existing test. - Modules/mediastream/RTCStatsResponse.idl: - bindings/scripts/test/TestOverrideBuiltins.idl: - html/HTMLOptionsCollection.idl: - html/HTMLSelectElement.idl: - plugins/DOMMimeTypeArray.idl: - plugins/DOMPlugin.idl: - plugins/DOMPluginArray.idl: - 2:36 PM Changeset in webkit [203796] by - 4 edits in trunk First parameter to Range.createContextualFragment() should be mandatory Reviewed by Ryosuke Niwa. LayoutTests/imported/w3c: Rebaseline W3C test now that one more check is passing. - web-platform-tests/domparsing/createContextualFragment-expected.txt: Source/WebCore: First parameter to Range.createContextualFragment() should be mandatory: Firefox and Chrome agree with the specification. No new tests, rebaselined existing test. - dom/Range.idl: - 2:33 PM Changeset in webkit [203795] by - 5 edits4 adds in trunk Align MediaList with the CSSOM specification Reviewed by Ryosuke Niwa. Source/WebCore: Align MediaList with the CSSOM specification: In particular, the parameter to item() / deleteMedium() and appendMedium() is now mandatory. Firefox and Chrome agree with the specification. Test: fast/css/MediaList-mediaText-null.html fast/css/MediaList-parameters.html - css/MediaList.idl: LayoutTests: - fast/css/MediaList-mediaText-null-expected.txt: Added. - fast/css/MediaList-mediaText-null.html: Added. Add test coverage for MediaList.mediaText to make sure it is not nullable and treats null as the empty string. Our IDL did not match the specification here but our behavior was correct. Therefore, this test is passing with and without my change. I just wanted to make sure we had good coverage since I updated our IDL to match the specification. - fast/css/MediaList-parameters-expected.txt: Added. - fast/css/MediaList-parameters.html: Added. Add test coverage for mandatory parameters. - fast/dom/non-numeric-values-numeric-parameters-expected.txt: - fast/dom/script-tests/non-numeric-values-numeric-parameters.js: Update existing test to reflect behavior change. - 2:18 PM Changeset in webkit [203794] by - 2 edits in trunk/LayoutTests Correct the syntax used to skip js/regress/script-tests/bigswitch-indirect-symbol.js Unreviewed test gardening. - js/regress/script-tests/bigswitch-indirect-symbol.js: - 2:11 PM Changeset in webkit [203793] by - 2 edits1 add in trunk/Source/JavaScriptCore We don't optimize for-in properly in baseline JIT (maybe other JITs too) with an object with symbols <rdar://problem/27572612> Reviewed by Geoffrey Garen. The fast for-in iteration mode assumes all inline/out-of-line properties can be iterated in linear order. This is not true if we have Symbols because Symbols should not be iterated by for-in. - runtime/Structure.cpp: (JSC::Structure::add): - tests/stress/symbol-should-not-break-for-in.js: Added. (assert): (foo): - 2:10 PM Changeset in webkit [203792] by - 2 edits in trunk/Source/WebCore Fullscreen video zoom button does not work after rotating when aspect ratio matches display. rdar://problem/27368872 Patch by Jeremy Jones <[email protected]> on 2016-07-27 Reviewed by Eric Carlson. When video and display aspect ratio match, and rotating from landscape to protrait, the transform used in layout will be Identity. This means checking the transform for identity is an insufficient test to see if the bounds need to be resolved. Instead, always attempt to resolve the bounds and do a more accurate test while doing so. - platform/ios/WebVideoFullscreenInterfaceAVKit.mm: (-[WebAVPlayerLayer layoutSublayers]): (-[WebAVPlayerLayer resolveBounds]): - 2:00 PM Changeset in webkit [203791] by - 3 edits in trunk/Source/WebKit2 [iOS] Remove unused textContentType SPI from _WKFormInputSession Patch by Chelsea Pugh <[email protected]> on 2016-07-27 Reviewed by Dan Bernstein. - UIProcess/API/Cocoa/_WKFormInputSession.h: Remove unused SPI. - UIProcess/ios/WKContentViewInteraction.mm: (-[WKContentView textInputTraits]): Set textContentType of _traits to whatever we classify it as based on the assisted node info. The default textContentType is nil, and that is our fallback in our method for determining textContentType. (-[WKFormInputSession textContentType]): Deleted. (-[WKFormInputSession setTextContentType:]): Deleted. - 1:55 PM Changeset in webkit [203790] by - 2 edits1 add in trunk/Source/JavaScriptCore The second argument for Function.prototype.apply should be array-like or null/undefined. <rdar://problem/27328525> Reviewed by Filip Pizlo. The spec for Function.prototype.apply says its second argument can only be null, undefined, or must be array-like. See and. Our previous implementation was not handling this correctly for SymbolType. This is now fixed. - interpreter/Interpreter.cpp: (JSC::sizeOfVarargs): - tests/stress/apply-second-argument-must-be-array-like.js: Added. - 1:52 PM Changeset in webkit [203789] by - 2 edits in trunk/Source/WebCore Stop accepting the deprecated "requiredShippingAddressFields" and "requiredBillingAddressFields" properties rdar://problem/27574519 Reviewed by Simon Fraser. - Modules/applepay/ApplePaySession.cpp: (WebCore::createPaymentRequest): (WebCore::isValidPaymentRequestPropertyName): - 1:51 PM Changeset in webkit [203788] by - 21 edits in trunk First parameter to indexed property getters should be mandatory Reviewed by Ryosuke Niwa. LayoutTests/imported/w3c: Rebaseline W3C test now that more checks are passing. - web-platform-tests/html/dom/interfaces-expected.txt: Source/WebCore: First parameter to indexed property getters should be mandatory: No new tests, rebaselined existing tests. - Modules/gamepad/deprecated/GamepadList.idl: - bindings/js/JSHTMLAllCollectionCustom.cpp: (WebCore::JSHTMLAllCollection::item): - css/CSSRuleList.idl: - css/CSSValueList.idl: - css/StyleSheetList.idl: - css/WebKitCSSFilterValue.idl: - css/WebKitCSSTransformValue.idl: - dom/ClientRectList.idl: - dom/DOMStringList.idl: - dom/DataTransferItemList.idl: - html/HTMLAllCollection.idl: - plugins/DOMMimeTypeArray.idl: - plugins/DOMPlugin.idl: - plugins/DOMPluginArray.idl: LayoutTests: Update existing tests to reflect behavior change. - fast/css/webkit-keyframes-crash.html: - fast/dom/non-numeric-values-numeric-parameters-expected.txt: - fast/dom/script-tests/non-numeric-values-numeric-parameters.js: - 1:05 PM Changeset in webkit [203787] by - 2 edits in trunk/LayoutTests Land test expectations for rdar://problem/27574303. Unreviewed test gardening. - platform/mac-wk2/TestExpectations: - 12:56 PM Changeset in webkit [203786] by - 7 edits in trunk/Source/JavaScriptCore MathICs should be able to emit only a jump along the inline path when they don't have any type data Reviewed by Mark Lam. This patch allows for MathIC fast-path generation to be delayed. We delay when we don't see any observed type information for the lhs/rhs operand, which implies that the MathIC has never executed. This is profitable for two main reasons: - If the math operation never executes, we emit much less code. - Once we get type information for the lhs/rhs, we can emit better code. To implement this, we just emit a jump to the slow path call that will repatch on first execution. New data for add: | JetStream | Unity 3D | Old | 148 bytes | 143 bytes | New | 116 bytes | 113 bytes | ------------------------------------ New data for mul: | JetStream | Unity 3D | Old | 210 bytes | 185 bytes | New | 170 bytes | 137 bytes | ------------------------------------ - jit/JITAddGenerator.cpp: (JSC::JITAddGenerator::generateInline): - jit/JITAddGenerator.h: (JSC::JITAddGenerator::isLeftOperandValidConstant): (JSC::JITAddGenerator::isRightOperandValidConstant): (JSC::JITAddGenerator::arithProfile): - jit/JITMathIC.h: (JSC::JITMathIC::generateInline): (JSC::JITMathIC::generateOutOfLine): (JSC::JITMathIC::finalizeInlineCode): - jit/JITMathICInlineResult.h: - jit/JITMulGenerator.cpp: (JSC::JITMulGenerator::generateInline): - jit/JITMulGenerator.h: (JSC::JITMulGenerator::isLeftOperandValidConstant): (JSC::JITMulGenerator::isRightOperandValidConstant): (JSC::JITMulGenerator::arithProfile): - jit/JITOperations.cpp: - 11:46 AM Changeset in webkit [203785] by - 2 edits in trunk/Source/WebCore [css-grid] The isValidTransition function must not alter the GridSizingData object Reviewed by Darin Adler. It's not correct that a function which purpose is to check out the validity of a state modifies such state. That code was there to allow the extra row track sizing iteration in the case of a grid container with indefinite height. We need to do that in order to figure out its intrinsic height, which will be used then to properly sizing the row tracks. Since the intrinsic height computation uses directly the computeUsedBreadthOfGridTracks function, it does not alter the algorithm state-machine, hence, we can safely remove this code, which was incorrect in any case. No new tests, it's just a refactoring with no functionality change. - rendering/RenderGrid.cpp: (WebCore::RenderGrid::GridSizingData::isValidTransition): - 11:27 AM Changeset in webkit [203784] by - 5 edits2 adds in trunk First parameter to Document.execCommand() / queryCommand*() should be mandatory Reviewed by Darin Adler. LayoutTests/imported/w3c: Rebaseline W3C test now that more checks are passing. - web-platform-tests/html/dom/interfaces-expected.txt: Source/WebCore: First parameter to Document.execCommand() / queryCommand*() should be mandatory: Firefox and Chrome agree with the specification. Test: fast/dom/Document/editing-parameters.html - dom/Document.idl: LayoutTests: Add layout test coverage. - fast/dom/Document/editing-parameters-expected.txt: Added. - fast/dom/Document/editing-parameters.html: Added. - 11:04 AM Changeset in webkit [203783] by - 2 edits in trunk/LayoutTests Fix a typo in TestExpectations. Unreviewed test gardening. - platform/mac-wk1/TestExpectations: - 11:04 AM Changeset in webkit [203782] by - 8 edits2 adds in trunk Align CSSSupportsRule with the specification Reviewed by Darin Adler. Source/WebCore: Align CSSSupportsRule with the specification: In particular: - Make the parameters to insertRule() / deleteRule() mandatory - Expose CSSSupportsRule on the global Window object Both Firefox and Chrome agree with the specification here. Test: fast/css/CSSSupportsRule-parameters.html - css/CSSSupportsRule.idl: LayoutTests: - fast/css/CSSSupportsRule-parameters-expected.txt: Added. - fast/css/CSSSupportsRule-parameters.html: Added. Add layout test coverage for mandatory parameters. - platform/efl/js/dom/global-constructors-attributes-expected.txt: - platform/gtk/js/dom/global-constructors-attributes-expected.txt: - platform/mac-yosemite/js/dom/global-constructors-attributes-expected.txt: - platform/mac/js/dom/global-constructors-attributes-expected.txt: - platform/win/js/dom/global-constructors-attributes-expected.txt: Rebaseline existing test now that CSSSupportsRule is exposed on the global Window object. - 10:15 AM Changeset in webkit [203781] by - 2 edits in trunk/Tools Disable WebCoreNSURLSessionTest API tests on ios-simulator Reviewed by Alexey Proskuryakov. - TestWebKitAPI/Tests/WebCore/WebCoreNSURLSession.mm: - 10:03 AM Changeset in webkit [203780] by - 2 edits in trunk/Source/WebKit2 Fix m_isInBackground initialization for Safari View Services <rdar://problem/27569255> Reviewed by Tim Horton. Fix m_isInBackground initialization for Safari View Services. The code was using m_applicationStateMonitor without initializing it. Instead, use the local applicationStateMonitor and make sure we invalidate it before it gets released. - UIProcess/ApplicationStateTracker.mm: (WebKit::ApplicationStateTracker::ApplicationStateTracker): - 9:18 AM Changeset in webkit [203779] by - 2 edits in trunk/LayoutTests [GTK] Unreviewed gardening: update expectations after r203770 Unreviewed gardening. Patch by Miguel Gomez <[email protected]> on 2016-07-27 - platform/gtk/TestExpectations: - 9:14 AM Changeset in webkit [203778] by - 2 edits in trunk/Source/WebInspectorUI Regression(r203535): Uncaught Exception: TypeError: Not enough arguments at LayerTreeDataGridNode.js:47 <rdar://problem/27540435> Reviewed by Eric Carlson. After r203535, document.createTextNode() requires an argument. - UserInterface/Views/LayerTreeDataGridNode.js: (WebInspector.LayerTreeDataGridNode.prototype.createCellContent): Since this use-site is for creating a cell in an unknown column, initialize it to '–'. Previously it would have been the string "undefined" or empty. - 9:00 AM Changeset in webkit [203777] by - 7 edits5 adds in trunk [GTK] Fix some video/canvas tests that should be passing Patch by Miguel Gomez <[email protected]> on 2016-07-27 Reviewed by Carlos Garcia Campos. Tools: Add a platform identifier to the TestRunner's page user agent when the tests are run on the EFL or GTK platforms. - WebKitTestRunner/efl/TestControllerEfl.cpp: (WTR::TestController::platformConfigureViewForTest): - WebKitTestRunner/gtk/TestControllerGtk.cpp: (WTR::TestController::platformConfigureViewForTest): LayoutTests: Modify the tests to use a tolerance of 6 when running on the GTK or EFL platforms. Also, added new expectations for the tests that need them. - fast/canvas/canvas-createPattern-video-loading.html: - fast/canvas/canvas-createPattern-video-modify.html: - media/video-canvas-createPattern.html: - platform/efl/fast/canvas/canvas-createPattern-video-loading-expected.txt: Added. - platform/efl/fast/canvas/canvas-createPattern-video-modify-expected.txt: Added. - platform/gtk/fast/canvas/canvas-createPattern-video-loading-expected.txt: Added. - platform/gtk/fast/canvas/canvas-createPattern-video-modify-expected.txt: Added. - resources/platform-helper.js: Added. (isGtk): (isEfl): (videoCanvasPixelComparisonTolerance): - 8:58 AM Changeset in webkit [203776] by - 13 edits in trunk/Source/WebKit2 [Coordinated Graphics] Improve scheduling of tasks between threads in CoordinatedGraphicsScene Reviewed by Michael Catanzaro. This patch makes the following improvements: - Avoid scheduling tasks to the main thread if the scene is detached. - Do not take references when not actually sending tasks to another threads. - Use Function instead of std::function on dispatch methods. - Remove purgeBackingStores that is actually never called. It's only scheduled from purgeGLResources() that is always called after detach. - Shared/CoordinatedGraphics/CoordinatedGraphicsScene.cpp: (WebKit::CoordinatedGraphicsScene::dispatchOnMainThread): (WebKit::CoordinatedGraphicsScene::dispatchOnClientRunLoop): (WebKit::CoordinatedGraphicsScene::paintToCurrentGLContext): (WebKit::CoordinatedGraphicsScene::updateViewport): (WebKit::CoordinatedGraphicsScene::onNewBufferAvailable): (WebKit::CoordinatedGraphicsScene::commitSceneState): (WebKit::CoordinatedGraphicsScene::renderNextFrame): (WebKit::CoordinatedGraphicsScene::purgeGLResources): (WebKit::CoordinatedGraphicsScene::commitScrollOffset): (WebKit::CoordinatedGraphicsScene::detach): (WebKit::CoordinatedGraphicsScene::setActive): (WebKit::CoordinatedGraphicsScene::dispatchCommitScrollOffset): Deleted. (WebKit::CoordinatedGraphicsScene::purgeBackingStores): Deleted. - Shared/CoordinatedGraphics/CoordinatedGraphicsScene.h: - Shared/CoordinatedGraphics/threadedcompositor/ThreadedCompositor.cpp: (WebKit::ThreadedCompositor::purgeBackingStores): Deleted. - Shared/CoordinatedGraphics/threadedcompositor/ThreadedCompositor.h: - UIProcess/CoordinatedGraphics/CoordinatedLayerTreeHostProxy.cpp: (WebKit::CoordinatedLayerTreeHostProxy::purgeBackingStores): Deleted. - UIProcess/CoordinatedGraphics/CoordinatedLayerTreeHostProxy.h: - WebProcess/WebPage/CoordinatedGraphics/CompositingCoordinator.cpp: (WebKit::CompositingCoordinator::invalidate): - WebProcess/WebPage/CoordinatedGraphics/CompositingCoordinator.h: - WebProcess/WebPage/CoordinatedGraphics/CoordinatedLayerTreeHost.cpp: (WebKit::CoordinatedLayerTreeHost::invalidate): (WebKit::CoordinatedLayerTreeHost::purgeBackingStores): Deleted. - WebProcess/WebPage/CoordinatedGraphics/CoordinatedLayerTreeHost.h: - WebProcess/WebPage/CoordinatedGraphics/CoordinatedLayerTreeHost.messages.in: - WebProcess/WebPage/CoordinatedGraphics/ThreadedCoordinatedLayerTreeHost.h: - 8:52 AM Changeset in webkit [203775] by - 2 edits in trunk/Source/WebCore [Soup] Test http/tests/xmlhttprequest/auth-reject-protection-space.html fails since added in r203743 Reviewed by Michael Catanzaro. It times out in release and crashes in debug due to an ASSERT_NOT_REACHED(). The soup backend doesn't really support rejecting the protection space and continuing with the next one, but we can at least continue with the request without crendential to make the test pass. - platform/network/soup/ResourceHandleSoup.cpp: (WebCore::ResourceHandle::receivedChallengeRejection): - 8:50 AM WebKitGTK/Gardening/Calendar edited by - (diff) - 8:49 AM Changeset in webkit [203774] by - 2 edits in trunk/Source/WebKit2 [GTK] Remove network setup from web process Reviewed by Michael Catanzaro. We are still doing network init and finish in th web process. It's useless since we switched to mandatory network process. - WebProcess/gtk/WebProcessMainGtk.cpp: - 8:49 AM WebKitGTK/Gardening/Calendar edited by - (diff) - 8:30 AM Changeset in webkit [203773] by - 2 edits in trunk/Tools [Tools] The built product doesn't contains the dwo files when DEBUG_FISSION is enabled. Reviewed by Michael Catanzaro. - BuildSlaveSupport/built-product-archive: (createZip): (dirContainsdwo): (archiveBuiltProduct): - 3:40 AM Changeset in webkit [203772] by - 23 edits4 moves in trunk [Streams API] Replace ReadableStreamReader by ReadableStreamDefaultReader Patch by Romain Bellessort <[email protected]> on 2016-07-27 Reviewed by Youenn Fablet. Replaced ReadableStreamReader by ReadableStreamDefaultReader to align with updated Streams API specification. No change in functionality. Source/WebCore: - CMakeLists.txt: - DerivedSources.cpp: - DerivedSources.make: - Modules/fetch/FetchInternals.js: (consumeStream): - Modules/fetch/FetchResponse.js: (body): - Modules/streams/ReadableStream.js: (getReader): - Modules/streams/ReadableStreamDefaultReader.idl: Renamed from Source/WebCore/Modules/streams/ReadableStreamReader.idl. - Modules/streams/ReadableStreamDefaultReader.js: Renamed from Source/WebCore/Modules/streams/ReadableStreamReader.js. (cancel): (read): (releaseLock): (closed): - Modules/streams/ReadableStreamInternals.js: (privateInitializeReadableStreamDefaultReader): (teeReadableStream): (teeReadableStreamPullFunction): (isReadableStreamDefaultReader): (closeReadableStream): (readFromReadableStreamDefaultReader): - WebCore.xcodeproj/project.pbxproj: - bindings/js/JSDOMGlobalObject.cpp: (WebCore::JSDOMGlobalObject::addBuiltinGlobals): (WebCore::JSDOMGlobalObject::finishCreation): - bindings/js/JSReadableStreamPrivateConstructors.cpp: (WebCore::constructJSReadableStreamDefaultReader): (WebCore::JSBuiltinReadableStreamDefaultReaderPrivateConstructor::initializeExecutable): (WebCore::createReadableStreamDefaultReaderPrivateConstructor): - bindings/js/JSReadableStreamPrivateConstructors.h: - bindings/js/WebCoreBuiltinNames.h: - features.json: LayoutTests: - streams/brand-checks.html: - streams/readable-stream-controller-error-expected.txt: - streams/readable-stream-controller-error.html: - streams/readable-stream-default-reader-read-expected.txt: Renamed from LayoutTests/streams/readable-stream-reader-read-expected.txt. - streams/readable-stream-default-reader-read.html: Renamed from LayoutTests/streams/readable-stream-reader-read.html. - streams/readable-stream-error-messages-expected.txt: - streams/readable-stream-error-messages.html: - streams/reference-implementation/readable-stream-reader-expected.txt: - streams/shadowing-Promise-expected.txt: - streams/shadowing-Promise.html: - 3:07 AM Changeset in webkit [203771] by - 4 edits6 adds in trunk [css-grid] Handle alignment with orthogonal flows Reviewed by Darin Adler. Now that grid sizing and positioning issues wrt orthogonal flows have been clarified in the last spec draft, we can adapt now our alignment logic to work with orthogonal flows. Source/WebCore: Even though basic alignment would work with orthogonal flows with this patch, we still doesn't allow stretching in that case. I'll provide a patch for that feature since it's a complex logic and better have an isolated change. Tests: fast/css-grid-layout/grid-item-alignment-with-orthogonal-flows-vertical-lr.html fast/css-grid-layout/grid-item-alignment-with-orthogonal-flows-vertical-rl.html fast/css-grid-layout/grid-item-alignment-with-orthogonal-flows.html - rendering/RenderGrid.cpp: (WebCore::computeOverflowAlignmentOffset): Using 'size' instead of 'breadth' as concept. (WebCore::RenderGrid::columnAxisPositionForChild): Dealing with orthogonal flow cases. (WebCore::RenderGrid::rowAxisPositionForChild): Dealing with orthogonal flow cases. (WebCore::RenderGrid::columnAxisOffsetForChild): Using 'size' instead of 'breadth' as concept. (WebCore::RenderGrid::rowAxisOffsetForChild): Using 'size' instead of 'breadth' as concept. (WebCore::RenderGrid::findChildLogicalPosition): Dealing with orthogonal flow cases. LayoutTests: These tests ensure that alignment works as expected in the cases where grid and its children are orthogonal. - fast/css-grid-layout/grid-item-alignment-with-orthogonal-flows-expected.txt: Added. - fast/css-grid-layout/grid-item-alignment-with-orthogonal-flows-vertical-lr-expected.txt: Added. - fast/css-grid-layout/grid-item-alignment-with-orthogonal-flows-vertical-lr.html: Added. - fast/css-grid-layout/grid-item-alignment-with-orthogonal-flows-vertical-rl-expected.txt: Added. - fast/css-grid-layout/grid-item-alignment-with-orthogonal-flows-vertical-rl.html: Added. - fast/css-grid-layout/grid-item-alignment-with-orthogonal-flows.html: Added. - fast/css-grid-layout/resources/grid-alignment.css: (.alignContentSpaceBetween): (.alignContentSpaceAround): (.alignContentSpaceEvenly): (.alignContentStretch): (.selfStart): (.selfEnd): (.selfCenter): (.selfRight): (.selfLeft): (.selfSelfStart): (.selfSelfEnd): (.itemsSelfEnd): Deleted. - 12:42 AM Changeset in webkit [203770] by - 2 edits in trunk/Source/WebKit/win Fix Win debug build after r203749. - WebView.cpp: - 12:06 AM Changeset in webkit [203769] by - 1 copy in releases/WebKitGTK/webkit-2.13.4 WebKitGTK+ 2.13.4 - 12:03 AM Changeset in webkit [203768] by - 4 edits in trunk Unreviewed. Update OptionsGTK.cmake and NEWS for 2.13.4 release. .: - Source/cmake/OptionsGTK.cmake: Bump version numbers. Source/WebKit2: - gtk/NEWS: Add release notes for 2.13.4. Jul 26, 2016: - 11:38 PM Changeset in webkit [203767].cpp. Removing WebCoreJSBuiltins.cpp from CMake list as it is not needed. -: - 11:37 PM Changeset in webkit [203766] by - 10 edits4 adds in trunk JS Built-ins should throw this-error messages consistently with binding generated code Patch by Youenn Fablet <[email protected]> on 2016-07-26 Reviewed by Darin Adler. Source/WebCore: Introducing @makeThisTypeError and @makeGetterTypeError to create TypeError objects with a consistent error message. Making use of these functions in streams API and fetch API related built-in code. Refactored JSDOMBinding.cpp code by adding makeThisTypeErrorMessage and makeGetterTypeErrorMessage helper routines These routines are used by both new built-in functions as well as binding generated code helper routine. Tests: fetch/fetch-error-messages.html streams/readable-stream-error-messages.html - Modules/fetch/FetchResponse.js: (body): Adding an explicit check so that the error message is right. The previous error message was related to the call of Response.@isDisturbed. - Modules/streams/ReadableStream.js: (cancel): (getReader): (pipeTo): (tee): (locked): - Modules/streams/ReadableStreamController.js: (enqueue): (error): (desiredSize): - Modules/streams/ReadableStreamReader.js: (cancel): (read): (releaseLock): (closed): - bindings/js/JSDOMBinding.cpp: (WebCore::makeGetterTypeErrorMessage): (WebCore::throwGetterTypeError): (WebCore::makeThisTypeErrorMessage): (WebCore::throwThisTypeError): (WebCore::throwSequenceTypeError): Deleted. (WebCore::throwSetterTypeError): Deleted. - bindings/js/JSDOMBinding.h: - bindings/js/JSDOMGlobalObject.cpp: (WebCore::makeThisTypeErrorForBuiltins): (WebCore::makeGetterTypeErrorForBuiltins): (WebCore::JSDOMGlobalObject::addBuiltinGlobals): - bindings/js/WebCoreBuiltinNames.h: LayoutTests: - fetch/fetch-error-messages-expected.txt: Added. - fetch/fetch-error-messages.html: Added. - streams/readable-stream-error-messages-expected.txt: Added. - streams/readable-stream-error-messages.html: Added. - 11:35 PM Changeset in webkit [203765] by - 3 edits in trunk/Source/WebCore Unreviewed. Fix GTK+ distcheck build. wtf/spi/darwin/dyldSPI.h is not included in GTK+ release tarballs. - html/HTMLObjectElement.cpp: Include wtf/spi/darwin/dyldSPI.h only for iOS. - html/MediaElementSession.cpp: Ditto. - 11:20 PM Changeset in webkit [203764] by - 5 edits4 adds in trunk [iOS] SF-Heavy is inaccessible by web content <rdar://problem/27434423> Reviewed by Dean Jackson. Source/WebCore: Once we create the system font, we need to modify it with the appropriate weight. This is because the CoreText API we use to get the system font on iOS does not let us choose the exact weight we want. Test: fast/text/system-font-weight.html - platform/graphics/ios/FontCacheIOS.mm: (WebCore::baseSystemFontDescriptor): (WebCore::systemFontModificationAttributes): (WebCore::systemFontDescriptor): (WebCore::platformFontWithFamilySpecialCase): - platform/spi/cocoa/CoreTextSPI.h: LayoutTests: - platform/ios-simulator/TestExpectations: system-font-weight-italic.html is expected to fail on iOS 9. - fast/text/system-font-weight-italic-expected.txt: Added. - fast/text/system-font-weight-italic.html: Added. - fast/text/system-font-weight-expected.txt: Added. - fast/text/system-font-weight.html: Added. - 10:31 PM Changeset in webkit [203763] by - 2 edits in trunk/LayoutTests Skip failing JSC test regress/script-tests/bigswitch-indirect-symbol.js Unreviewed test gardening. - js/regress/script-tests/bigswitch-indirect-symbol.js: - 10:20 PM Changeset in webkit [203762] by - 2 edits in trunk/Source/WebCore [GTK] ASSERTION FAILED: !m_adoptionIsRequired when Inspector Server is connected Patch by Fujii Hironori <Fujii Hironori> on 2016-07-26 Reviewed by Carlos Garcia Campos. An assertion fails because refcount of SocketStreamHandle is incremented before adoptRef, in the constructor of SocketStreamHandle. The constructor of SocketStreamHandle needs to increment recount because it passes this pointer to libsoup. - platform/network/soup/SocketStreamHandleSoup.cpp: (WebCore::SocketStreamHandle::SocketStreamHandle): Do relaxAdoptionRequirement() as well as the another constructor. - 9:27 PM Changeset in webkit [203761] by - 9 edits in trunk Move 'dir' attribute from HTMLDocument to Document Reviewed by Sam Weinig. LayoutTests/imported/w3c: Rebaseline W3C test now that more checks are passing. - web-platform-tests/html/dom/interfaces-expected.txt: Source/WebCore: Move 'dir' attribute from HTMLDocument to Document to match the specification: Both Firefox and Chrome have 'dir' on Document already. No new tests, rebaselined existing test. - dom/Document.cpp: (WebCore::Document::dir): (WebCore::Document::setDir): - dom/Document.h: - dom/Document.idl: - html/HTMLDocument.cpp: (WebCore::HTMLDocument::dir): Deleted. (WebCore::HTMLDocument::setDir): Deleted. - html/HTMLDocument.h: - html/HTMLDocument.idl: - 9:12 PM Changeset in webkit [203760] by - 8 edits2 adds in trunk Second parameter to History.pushState() / replaceState() should be mandatory Reviewed by Sam Weinig. LayoutTests/imported/w3c: Rebaseline W3C test now that more checks are passing. - web-platform-tests/html/dom/interfaces-expected.txt: Source/WebCore: Second parameter to History.pushState() / replaceState() should be mandatory: Firefox and Chrome agree with the specification. No new tests, rebaselined existing test. - bindings/js/JSHistoryCustom.cpp: (WebCore::JSHistory::pushState): (WebCore::JSHistory::replaceState): - page/History.idl: LayoutTests: - fast/history/state-api-parameters.html: Added. - fast/history/state-api-parameters-expected.txt: Added. Add layout test coverage. - fast/history/replacestate-nocrash.html: - fast/loader/stateobjects/popstate-fires-with-page-cache.html: Update existing tests to reflect behavior change. - 8:42 PM Changeset in webkit [203759] by - 4 edits in trunk Align Node.isEqualNode() with the specification Reviewed by Sam Weinig. LayoutTests/imported/w3c: Rebaseline W3C test now that one more check is passing. We are now passing all the checks in this test like Firefox and Chrome. - web-platform-tests/dom/nodes/Node-isEqualNode-expected.txt: Source/WebCore: Align our implementation for Node.isEqualNode() to match more closely the text of the specification: - - This also fixes a bug where isEqualNode() would sometimes return false wrongly for elements because we were comparing the value returned by nodeName() which returns the tagName of elements. The issue was that the tagName's case may differ depending on the element being in an HTMLDocument or not. We now compare Element::tagQName() instead which ends up comparing namespace / namespace prefix and localName, as per the specification. The new behavior matches Firefox and Chrome and helps us pass an extra W3C test. No new tests, rebaselined existing test. - dom/Node.cpp: (WebCore::Node::isEqualNode): - 6:23 PM Changeset in webkit [203758] by - 10 edits in trunk/Source/JavaScriptCore rollout r203666 Unreviewed rollout. -::asNumber): (JSC::B3::Air::Arg::castToType): Deleted. -AndRun): (JSC::B3::testSomeEarlyRegister): (JSC::B3::zero): (JSC::B3::run): (JSC::B3::lowerToAirForTesting): Deleted. (JSC::B3::testBranchBitAndImmFusion): Deleted. - 5:45 PM Changeset in webkit [203757] by - 2 edits in trunk/LayoutTests Marking inspector/memory/tracking.html as flaky on El Capitan Debug WK1 Unreviewed test gardening. - platform/mac-wk1/TestExpectations: - 5:44 PM Changeset in webkit [203756] by - 4 edits in trunk/Source/WebKit Fix the Windows debug build. WebResourceLoadScheduler's logging was a holdover from it being in WebCore, and prior to r203749 WebKit was actually using a WebCore log channel. For some reason this doesn't build on Windows debug, so just remove this logging for now. Source/WebKit: - WebCoreSupport/WebResourceLoadScheduler.cpp: (WebResourceLoadScheduler::scheduleLoad): Deleted. (WebResourceLoadScheduler::servePendingRequests): Deleted. (WebResourceLoadScheduler::scheduleServePendingRequests): Deleted. (WebResourceLoadScheduler::requestTimerFired): Deleted. (WebResourceLoadScheduler::HostInformation::addLoadInProgress): Deleted. Source/WebKit/mac: - Misc/WebKitLogging.h: - 4:40 PM Changeset in webkit [203755] by - 2 edits in trunk/Tools Fix tests after r203743. - WebKitTestRunner/TestController.cpp: (WTR::TestController::resetStateToConsistentValues): Reset the new boolean m_rejectsProtectionSpaceAndContinueForAuthenticationChallenges. - 4:39 PM Changeset in webkit [203754] by - 6 edits in trunk/Source Sort the project files. Source/WebCore: - WebCore.xcodeproj/project.pbxproj: Source/WebKit: - WebKit.xcodeproj/project.pbxproj: Source/WebKit2: - WebKit2.xcodeproj/project.pbxproj: - 4:37 PM Changeset in webkit [203753] by - 4 edits4 adds in trunk Align CSSKeyframesRule with the specification Reviewed by Darin Adler. Source/WebCore: Align CSSKeyframesRule with the specification: In particular, the parameter to insertRule() / appendRule() / deleteRule() / findRule() should be mandatory. Both Firefox and Chrome agree with the specification here. Also, the CSSKeyframesRule.name attribute should not be nullable. Chrome agrees with the specification. However, Firefox, has the attribute nullable. This patch aligns our behavior with Chrome and the specification. Tests: animations/CSSKeyframesRule-name-null.html animations/CSSKeyframesRule-parameters.html - css/CSSKeyframesRule.h: (WebCore::StyleRuleKeyframes::name): (WebCore::StyleRuleKeyframes::setName): - css/CSSKeyframesRule.idl: LayoutTests: Add layout test coverage. - animations/CSSKeyframesRule-name-null-expected.txt: Added. - animations/CSSKeyframesRule-name-null.html: Added. - animations/CSSKeyframesRule-parameters-expected.txt: Added. - animations/CSSKeyframesRule-parameters.html: Added. - 4:36 PM Changeset in webkit [203752] by - 28 edits2 adds in trunk [iPhone] Playing a video on tudou.com plays only sound, no video <rdar://problem/27535468> Source/WebCore: Reviewed by Eric Carlson and Dan Bernstein. This patch re-implements r203520 in a much simpler way which doesn't involve a new SPI. The biggest problem with r203520 is that it make it impossible for a WKWebView to match MobileSafari's behavior. Instead of adding this new SPI, a simple version check should be used to keep old apps working. The new behavior is characterized by the following table: | iOS | Non-iOS ============================================================================================= requiresPlayInlineAttribute == true | Old app: honor -webkit-playsinline | honor playsinline | New app: honor playsinline | honor playsinline requiresPlayInlineAttribute == false | Always inline | Always inline Specifically, this patch reverts r203545 which is the commit which actually removes the old SPI. As soon as Safari is migrated back to this old SPI, I'll remove the two new SPIs added in r203520. Tests: media/video-playsinline.html media/video-webkitInlineMediaPlaybackRequiresPlaysInlineAttribute): - testing/InternalSettings.h: - testing/InternalSettings.idl: Source/WebKit/mac: Reviewed by Eric Carlson and Dan Bernstein. - WebView/WebPreferenceKeysPrivate.h: - WebView/WebPreferences.mm: (+[WebPreferences initialize]): (-[WebPreferences inlineMediaPlaybackRequiresPlaysInlineAttribute]): (-[WebPreferences setInlineMediaPlaybackRequiresPlaysInlineAttribute:]): - WebView/WebPreferencesPrivate.h: - WebView/WebView.mm: (-[WebView _preferencesChanged:]): Source/WebKit2: Reviewed by Eric Carlson and Dan Bernstein. - Shared/WebPreferencesDefinitions.h: - UIProcess/API/C/WKPreferences.cpp: (WKPreferencesSetInlineMediaPlaybackRequiresPlaysInlineAttribute): (WKPreferencesGetInlineMediaPlaybackRequires _inlineMediaPlaybackRequiresPlaysInlineAttribute]): (-[WKWebViewConfiguration _setInlineMediaPlaybackRequiresPlaysInlineAttribute:]): - UIProcess/API/Cocoa/WKWebViewConfigurationPrivate.h: - WebProcess/WebPage/WebPage.cpp: (WebKit::WebPage::updatePreferences): Tools: Reviewed by Dan Bernstein. - DumpRenderTree/mac/DumpRenderTree.mm: (setDefaultsToConsistentValuesForTesting): - TestWebKitAPI/Tests/WebKit2Cocoa/RequiresUserActionForPlayback.mm: (RequiresUserActionForPlaybackTest::SetUp): - WebKitTestRunner/TestController.cpp: (WTR::TestController::resetPreferencesToConsistentValues): - WebKitTestRunner/cocoa/TestControllerCocoa.mm: (WTR::initializeWebViewConfiguration): LayoutTests: Reviewed by Eric Carlson and Dan Bernstein. - media/video-playsinline-expected.txt: - media/video-playsinline.html: - media/video-webkit-playsinline-expected.txt: Added. - media/video-webkit-playsinline.html: Added. - 4:33 PM Changeset in webkit [203751] by - 4 edits in trunk/Source/WebCore Move RenderView::shouldDisableLayoutStateForSubtree to SubtreeLayoutStateMaintainer. Reviewed by Darin Adler and Simon Fraser. No change in functionality. - page/FrameView.cpp: (WebCore::SubtreeLayoutStateMaintainer::SubtreeLayoutStateMaintainer): - rendering/RenderView.cpp: (WebCore::RenderView::shouldDisableLayoutStateForSubtree): - rendering/RenderView.h: - 4:31 PM Changeset in webkit [203750] by - 5 edits2 deletes in trunk/Source/WebKit2 Remove unused DownloadAuthenticationClient Patch by Alex Christensen <[email protected]> on 2016-07-26 Reviewed by Darin Adler. - CMakeLists.txt: - NetworkProcess/Downloads/Download.cpp: - NetworkProcess/Downloads/Download.h: - NetworkProcess/Downloads/DownloadAuthenticationClient.cpp: Removed. - NetworkProcess/Downloads/DownloadAuthenticationClient.h: Removed. - WebKit2.xcodeproj/project.pbxproj: - 4:30 PM Changeset in webkit [203749] by - 28 edits1 copy2 adds in trunk Allow LOG macros to be used outside the namespace, and other logging cleanup Reviewed by Anders Carlsson. Source/WebCore:.xcodeproj/project.pbxproj: - platform/LogInitialization.h: Added. - platform/LogMacros.h: Added. - platform/Logging.cpp: (WebCore::initializeLogChannelsIfNecessary): (WebCore::initializeLoggingChannelsIfNecessary): Deleted. - platform/Logging.h: - testing/js/WebCoreTestSupport.cpp: (WebCoreTestSupport::initializeLogChannelsIfNecessary): (WebCoreTestSupport::initializeLoggingChannelsIfNecessary): Deleted. - testing/js/WebCoreTestSupport.h: Source/WebKit:Support/WebResourceLoadScheduler.cpp: Source/WebKit/mac:. - Misc/WebKitLogging.h: - Misc/WebKitLogging.m: - WebView/WebView.mm: (-[WebView _commonInitializationWithFrameName:groupName:]): Source/WebKit/win:KitLogging.cpp: - WebKitLogging.h: - WebView.cpp: (WebView::initWithFrame): Source/WebKit2:. - NetworkProcess/NetworkProcess.cpp: - Platform/LogInitialization.h: Copied from Source/WebKit2/Platform/foundation/LoggingFoundation.mm. - Platform/Logging.cpp: (WebKit::initializeLogChannelsIfNecessary): - Platform/Logging.h: - Platform/foundation/LoggingFoundation.mm: - Shared/WebKit2Initialize.cpp: (WebKit::InitializeWebKit2): - UIProcess/API/Cocoa/WKWebView.mm: (-[WKWebView _updateContentRectsWithState:]): (-[WKWebView _navigationGestureDidBegin]): - UIProcess/WebProcessPool.cpp: (WebKit::m_hiddenPageThrottlingTimer): - WebKit2.xcodeproj/project.pbxproj: Tools: initializeLoggingChannelsIfNecessary -> initializeLogChannelsIfNecessary - DumpRenderTree/TestRunner.cpp: - DumpRenderTree/mac/DumpRenderTree.mm: (resetWebViewToConsistentStateBeforeTesting): - 4:15 PM Changeset in webkit [203748] by - 2 edits in trunk/Websites/perf.webkit.org REGRESSION: Tooltip for analysis tasks doesn't show up on charts Rubber-stamped by Chris Dumez. The bug was caused by ChartPaneBase resetting annotation bars every time the current point has moved. Avoid doing this in ChartPaneBase's _renderAnnotations(). - public/v3/components/chart-pane-base.js: (ChartPaneBase): (ChartPaneBase.prototype.fetchAnalysisTasks): (ChartPaneBase.prototype._renderAnnotations): - 4:06 PM Changeset in webkit [203747] by - 3 edits2 moves in trunk/Source/JavaScriptCore [JSC] Object.getOwnPropertyDescriptors should not add undefined props to result Reviewed by Geoffrey Garen. - runtime/ObjectConstructor.cpp: (JSC::objectConstructorGetOwnPropertyDescriptors): - tests/es6.yaml: - tests/es6/Object_static_methods_Object.getOwnPropertyDescriptors-proxy.js: (testPropertiesIndexedSetterOnPrototypeThrows.set get var): Deleted. (testPropertiesIndexedSetterOnPrototypeThrows): Deleted. - tests/stress/Object_static_methods_Object.getOwnPropertyDescriptors-proxy.js: Renamed from Source/JavaScriptCore/tests/es6/Object_static_methods_Object.getOwnPropertyDescriptors-proxy.js. - tests/stress/Object_static_methods_Object.getOwnPropertyDescriptors.js: Renamed from Source/JavaScriptCore/tests/es6/Object_static_methods_Object.getOwnPropertyDescriptors.js. - 3:56 PM Changeset in webkit [203746] by - 2 edits in trunk/Source/WebCore onpaymentauthorized callback not received when authorizing for a second time rdar://problem/27527151 Reviewed by Tim Horton. Only null out the active session if the status is a final state status. - Modules/applepay/PaymentCoordinator.cpp: (WebCore::PaymentCoordinator::completePaymentSession): - 3:42 PM Changeset in webkit [203745] by - 4 edits in trunk Range.prototype.compareBoundaryPoints.length should be 2 Reviewed by Sam Weinig. LayoutTests/imported/w3c: Rebaseline W3C test now that one more check is passing. - web-platform-tests/dom/interfaces-expected.txt: Source/WebCore: Range.prototype.compareBoundaryPoints.length: We had a bug in our IDL which caused length to be 0 even though both parameters were effectively mandatory. No new tests, rebaselined existing test. - dom/Range.idl: - 3:38 PM Changeset in webkit [203744] by - 6 edits4 adds in trunk Align CSSStyleDeclaration with the specification Reviewed by Darin Adler. Source/WebCore: Align CSSStyleDeclaration with the specification: In particular, the parameters to removeProperty() / item() and getPropertyPriority() should be mandatory. Firefox and Chrome match the specification. Tests: fast/css/CSSStyleDeclaration-cssText-null.html fast/css/CSSStyleDeclaration-parameters.html - bindings/js/JSCSSStyleDeclarationCustom.cpp: (WebCore::JSCSSStyleDeclaration::getPropertyCSSValue): - css/CSSStyleDeclaration.idl: LayoutTests: - fast/css/CSSStyleDeclaration-cssText-null-expected.txt: Added. - fast/css/CSSStyleDeclaration-cssText-null.html: Added. Add layout test coverage for setting cssText to null. This test passes in WebKit, Firefox and Chrome, with or without my change. Our IDL wrongly reported the cssText attribute as nullable but WebKit was already behaving correctly. - fast/css/CSSStyleDeclaration-parameters-expected.txt: Added. - fast/css/CSSStyleDeclaration-parameters.html: Added. Add testing for omitting CSSStyleDeclaration API parameters, to make sure they are mandatory. This test passes in Firefox and Chrome. - fast/dom/non-numeric-values-numeric-parameters-expected.txt: - fast/dom/script-tests/non-numeric-values-numeric-parameters.js: Update existing test to reflect behavior change. - 3:31 PM Changeset in webkit [203743] by - 16 edits2 adds in trunk Networking process crash due to missing -[WebCoreAuthenticationClientAsChallengeSender performDefaultHandlingForAuthenticationChallenge:] implementation <rdar://problem/23325160> Reviewed by Alex Christensen. Source/WebCore: Test: http/tests/xmlhttprequest/auth-reject-protection-space.html - platform/network/mac/AuthenticationMac.mm: (-[WebCoreAuthenticationClientAsChallengeSender performDefaultHandlingForAuthenticationChallenge:]): Added. (-[WebCoreAuthenticationClientAsChallengeSender rejectProtectionSpaceAndContinueWithChallenge:]): Added. Source/WebKit2: - UIProcess/API/C/WKAuthenticationDecisionListener.cpp: (WKAuthenticationDecisionListenerRejectProtectionSpaceAndContinue): - UIProcess/API/C/WKAuthenticationDecisionListener.h: Added new SPI for testing corresponding to calling the completion handler of WKWebView.didReceiveAuthenticationChallenge with NSURLSessionAuthChallengeRejectProtectionSpace. Tools: - DumpRenderTree/TestRunner.cpp: (TestRunner::TestRunner): (setRejectsProtectionSpaceAndContinueForAuthenticationChallengesCallback): - DumpRenderTree/TestRunner.h: (TestRunner::rejectsProtectionSpaceAndContinueForAuthenticationChallenges): (TestRunner::setRejectsProtectionSpaceAndContinueForAuthenticationChallenges): - DumpRenderTree/mac/ResourceLoadDelegate.mm: (-[ResourceLoadDelegate webView:resource:didReceiveAuthenticationChallenge:fromDataSource:]): - WebKitTestRunner/InjectedBundle/Bindings/TestRunner.idl: - WebKitTestRunner/InjectedBundle/TestRunner.cpp: (WTR::TestRunner::queueNonLoadingScript): (WTR::TestRunner::setRejectsProtectionSpaceAndContinueForAuthenticationChallenges): - WebKitTestRunner/InjectedBundle/TestRunner.h: - WebKitTestRunner/TestController.cpp: (WTR::TestController::didReceiveAuthenticationChallenge): - WebKitTestRunner/TestController.h: (WTR::TestController::setRejectsProtectionSpaceAndContinueForAuthenticationChallenges): - WebKitTestRunner/TestInvocation.cpp: (WTR::TestInvocation::didReceiveMessageFromInjectedBundle): Add TestRunner.setRejectsProtectionSpaceAndContinueForAuthenticationChallenges to use for testing. LayoutTests: - http/tests/xmlhttprequest/auth-reject-protection-space-expected.txt: Added. - http/tests/xmlhttprequest/auth-reject-protection-space.html: Added. - 3:04 PM Changeset in webkit [203742] by - 2 edits in trunk/Tools check-for-exit-time-destructors should be usable outside Xcode <> Reviewed by Darin Adler. - Scripts/check-for-exit-time-destructors: Update to parse for exit time destructors on the command-line. The clang compiler will find these at compile-time with the -Wexit-time-destructors switch, but this script will check for them after-the-fact. - 3:03 PM Changeset in webkit [203741] by - 3 edits in trunk/Source/WebKit2 Payment session does not end if user closes all Safari windows rdar://problem/27480873 Reviewed by Tim Horton. Listen for the NSWindowWillCloseNotification of the sheet window and hide the payment UI when the sheet window is going to be closed. - UIProcess/ApplePay/WebPaymentCoordinatorProxy.h: - UIProcess/ApplePay/mac/WebPaymentCoordinatorProxyMac.mm: (WebKit::WebPaymentCoordinatorProxy::platformShowPaymentUI): (WebKit::WebPaymentCoordinatorProxy::hidePaymentUI): - 3:00 PM Changeset in webkit [203740] by - 11 edits2 adds in trunk Parameters to CSSStyleSheet.insertRule() / deleteRule() should be mandatory Reviewed by Darin Adler. Source/WebCore: Parameters to CSSStyleSheet.insertRule() / deleteRule() should be mandatory: They are mandatory in Firefox. They are mandatory in Chrome except for the second parameter of insertRule() which merely logs a deprecation warning. This patch aligns our behavior with Chrome to move towards to specification while limiting the risk of breakage. Test: fast/css/stylesheet-parameters.html - css/CSSStyleSheet.cpp: (WebCore::CSSStyleSheet::deprecatedInsertRule): - css/CSSStyleSheet.h: - css/CSSStyleSheet.idl: LayoutTests: - fast/css/stylesheet-parameters-expected.txt: Added. - fast/css/stylesheet-parameters.html: Added. Add layout test coverage. - editing/selection/first-letter-selection-crash.html: - fast/css/counters/asterisk-counter-update-after-layout-crash.html: - fast/dom/HTMLElement/dynamic-editability-change.html: - fast/dom/non-numeric-values-numeric-parameters-expected.txt: - fast/dom/script-tests/non-numeric-values-numeric-parameters.js: Update existing tests to reflect the behavior change. - 2:52 PM Changeset in webkit [203739] by - 21 edits2 adds in trunk HTMLVideoElement frames do not update on iOS when src is a MediaStream blob <rdar://problem/27379487> Patch by George Ruan <[email protected]> on 2016-07-26 Reviewed by Eric Carlson. Source/WebCore: Test: fast/mediastream/MediaStream-video-element-displays-buffer.html - WebCore.xcodeproj/project.pbxproj: - platform/cf/CoreMediaSoftLink.cpp: Add CMSampleBufferCreateReadyWithImageBuffer and CMVideoFormatDescriptionCreateForImageBuffer softlink. - platform/cf/CoreMediaSoftLink.h: Ditto. - platform/cocoa/CoreVideoSoftLink.cpp: Add CVPixelBufferCreate, kCVPixelBufferCGBitmapContextCompatibilityKey, and kCVPixelBufferCGImageCompatibilityKey. - platform/cocoa/CoreVideoSoftLink.h: Ditto. -::play): Call updateReadyState as a deferred task. (WebCore::MediaPlayerPrivateMediaStreamAVFObjC::currentReadyState): readyState is bumped to HAVE_ENOUGH_DATA only when the MediaPlayerPrivateMediaStreamAVFObjC has received a media sample. AVFObjC that a new SampleBuffer is available to enqueue to the AVSampleBufferDisplayLayer. - platform/mediastream/RealtimeMediaSource.cpp: (WebCore::RealtimeMediaSource::settingsDidChange): Fix grammatical mistake in function name settingsDidChanged(). (WebCore::RealtimeMediaSource::mediaDataUpdated): Relays to all observers that a new SampleBuffer is available. (WebCore::RealtimeMediaSource::settingsDidChanged): Deleted. - platform/mediastream/RealtimeMediaSource.h: - platform/mediastream/mac/AVVideoCaptureSource.mm: (WebCore::AVVideoCaptureSource::processNewFrame): Calls mediaDataUpdated when a new SampleBuffer is captured. - platform/mediastream/mac/MockRealtimeVideoSourceMac.h: - platform/mediastream/mac/MockRealtimeVideoSourceMac.mm: (WebCore::MockRealtimeVideoSourceMac::CMSampleBufferFromPixelBuffer): Convert CVPixelBuffer to CMSampleBuffer. (WebCore::MockRealtimeVideoSourceMac::pixelBufferFromCGImage): Convert CGImage to CVPixelBuffer. (WebCore::MockRealtimeVideoSourceMac::updateSampleBuffer): Creates a CMSampleBuffer from current imageBuffer and sends the CMSampleBuffer to MediaPlayerPrivateMediaStreamAVFObjC - platform/mock/MockRealtimeVideoSource.cpp: (WebCore::MockRealtimeVideoSource::setFrameRate): Fix grammar of settingsDidChanged() to settingsDidChange(). (WebCore::MockRealtimeVideoSource::setSize): Ditto. (WebCore::MockRealtimeVideoSource::generateFrame): Call updateSampleBuffer(). - platform/mock/MockRealtimeVideoSource.h: Change elapsedTime() from private to protected. (WebCore::MockRealtimeVideoSource::updateSampleBuffer): Overriden by MockRealtimeVideoSourceMac.:49 PM Changeset in webkit [203738] by - 5 edits in trunk/Source/WebCore Move ControlStates HashMap to RenderBox. Reviewed by Simon Fraser. Move and modernize it. No change in functionality. - platform/ControlStates.h: (WebCore::ControlStates::ControlStates): Deleted. - rendering/RenderBox.cpp: (WebCore::controlStatesRendererMap): (WebCore::controlStatesForRenderer): (WebCore::removeControlStatesForRenderer): (WebCore::RenderBox::~RenderBox): (WebCore::RenderBox::paintBoxDecorations): - rendering/RenderElement.cpp: (WebCore::controlStatesRendererMap): Deleted. (WebCore::RenderElement::hasControlStatesForRenderer): Deleted. (WebCore::RenderElement::controlStatesForRenderer): Deleted. (WebCore::RenderElement::removeControlStatesForRenderer): Deleted. (WebCore::RenderElement::addControlStatesForRenderer): Deleted. - rendering/RenderElement.h: - 2:35 PM Changeset in webkit [203737] by - 2 edits in trunk/Source/WebCore Occasional crash in WebCore::RenderVTTCue::initializeLayoutParameters Reviewed by Darin Adler. - rendering/RenderVTTCue.cpp: (WebCore::RenderVTTCue::initializeLayoutParameters): Return when firstChild is NULL so a release build will not crash. - 2:01 PM Changeset in webkit [203736] by - 2 edits in trunk/Websites/perf.webkit.org REGRESSION: The arrow indicating the current page doesn't get updated Reviewed by Chris Dumez. The bug was caused by Heading's render() function not updating the DOM more than once. I don't understand how this has ever worked. Fixed the bug by rendering DOM whenever the current page has changed. - public/v3/pages/heading.js: (Heading): (Heading.prototype.render): - 1:59 PM Changeset in webkit [203735] by - 6 edits1 delete in trunk/LayoutTests Remove the tests for legacy custom elements API Reviewed by Chris Dumez. Removed the tests for legacy custom elements v0 API. The tests for the new v1 API is at fast/custom-elements. - fast/dom/custom: Removed. - fast/dom/custom/document-register-basic-expected.txt: Removed. - fast/dom/custom/document-register-basic.html: Removed. - fast/dom/custom/document-register-namespace-expected.txt: Removed. - fast/dom/custom/document-register-namespace.html: Removed. - fast/dom/custom/document-register-reentrant-null-constructor-expected.txt: Removed. - fast/dom/custom/document-register-reentrant-null-constructor.html: Removed. - fast/dom/custom/document-register-reentrant-returning-fake-expected.txt: Removed. - fast/dom/custom/document-register-reentrant-returning-fake.html: Removed. - fast/dom/custom/document-register-reentrant-throwing-constructor-expected.txt: Removed. - fast/dom/custom/document-register-reentrant-throwing-constructor.html: Removed. - fast/dom/custom/document-register-type-extensions-expected.txt: Removed. - fast/dom/custom/document-register-type-extensions.html: Removed. - fast/dom/custom/lifecycle-ready-createElement-recursion-expected.txt: Removed. - fast/dom/custom/lifecycle-ready-createElement-recursion.html: Removed. - fast/dom/custom/lifecycle-ready-createElement-reentrancy-expected.txt: Removed. - fast/dom/custom/lifecycle-ready-createElement-reentrancy.html: Removed. - fast/dom/custom/lifecycle-ready-creation-api-expected.txt: Removed. - fast/dom/custom/lifecycle-ready-creation-api.html: Removed. - fast/dom/custom/lifecycle-ready-innerHTML-expected.txt: Removed. - fast/dom/custom/lifecycle-ready-innerHTML.html: Removed. - fast/dom/custom/lifecycle-ready-parser-only-expected.html: Removed. - fast/dom/custom/lifecycle-ready-parser-only.html: Removed. - fast/dom/custom/lifecycle-ready-parser-script-expected.txt: Removed. - fast/dom/custom/lifecycle-ready-parser-script.html: Removed. - fast/dom/custom/lifecycle-ready-paste-expected.txt: Removed. - fast/dom/custom/lifecycle-ready-paste.html: Removed. - fast/dom/custom/resources: Removed. - fast/dom/custom/resources/document-register-fuzz.js: Removed. - platform/efl/TestExpectations: - platform/gtk/TestExpectations: - platform/ios-simulator/TestExpectations: - platform/mac/TestExpectations: - platform/win/TestExpectations: - 1:57 PM Changeset in webkit [203734] by - 4 edits in trunk Parameters to CustomEvent.initCustomEvent() should be mandatory Reviewed by Ryosuke Niwa. LayoutTests/imported/w3c: Rebaseline W3C test now that more checks are passing. - web-platform-tests/dom/interfaces-expected.txt: Source/WebCore: Parameters to CustomEvent.initCustomEvent() should be mandatory: Firefox and Chrome agree with the specification. No new tests, rebaselined existing test. - dom/CustomEvent.idl: - 1:54 PM Changeset in webkit [203733] by - 13 edits in trunk Second parameter to Range.isPointInRange() / comparePoint() should be mandatory Reviewed by Ryosuke Niwa. LayoutTests/imported/w3c: Rebaseline W3C test now that more checks are passing. - web-platform-tests/dom/interfaces-expected.txt: Source/WebCore: Second parameter to Range.isPointInRange() / comparePoint() should be mandatory and be of type "unsigned long": Firefox and Chrome agree with the specification. No new tests, rebaselined existing tests. - accessibility/AXObjectCache.cpp: (WebCore::AXObjectCache::traverseToOffsetInRange): - dom/DocumentMarkerController.cpp: (WebCore::DocumentMarkerController::removeMarkers): (WebCore::DocumentMarkerController::markersInRange): (DocumentMarkerController::setMarkersActive): - dom/Range.cpp: (WebCore::Range::isPointInRange): (WebCore::Range::comparePoint): (WebCore::Range::compareBoundaryPoints): (WebCore::Range::toString): (WebCore::Range::absoluteTextRects): (WebCore::Range::absoluteTextQuads): (WebCore::boundaryTextNodesMerged): (WebCore::Range::getBorderAndTextQuads): - dom/Range.h: (WebCore::Range::startOffset): (WebCore::Range::endOffset): - dom/Range.idl: - dom/RangeBoundaryPoint.h: (WebCore::RangeBoundaryPoint::ensureOffsetIsValid): (WebCore::RangeBoundaryPoint::toPosition): (WebCore::RangeBoundaryPoint::offset): (WebCore::RangeBoundaryPoint::setOffset): (WebCore::RangeBoundaryPoint::setToBeforeChild): (WebCore::RangeBoundaryPoint::setToAfterChild): (WebCore::RangeBoundaryPoint::setToEndOfNode): (WebCore::RangeBoundaryPoint::childBeforeWillBeRemoved): (WebCore::RangeBoundaryPoint::invalidateOffset): LayoutTests: Update existing test to reflect behavior change. - fast/dom/non-numeric-values-numeric-parameters-expected.txt: - fast/dom/script-tests/non-numeric-values-numeric-parameters.js: - 1:18 PM Changeset in webkit [203732] by - 23 edits8 copies11 adds in trunk [Fetch API] Add support for fetch mode, in particular cors Patch by Youenn Fablet <[email protected]> on 2016-07-26 Reviewed by Darin Adler. LayoutTests/imported/w3c: Rebasing tests. - web-platform-tests/fetch/api/basic/integrity-expected.txt: - web-platform-tests/fetch/api/basic/integrity-worker-basic-worker-expected.txt: - web-platform-tests/fetch/api/cors/cors-cookies-expected.txt: - web-platform-tests/fetch/api/cors/cors-cookies-worker-expected.txt: - web-platform-tests/fetch/api/cors/cors-filtering-expected.txt: - web-platform-tests/fetch/api/cors/cors-filtering-worker-expected.txt: - web-platform-tests/fetch/api/cors/cors-multiple-origins-expected.txt: - web-platform-tests/fetch/api/cors/cors-multiple-origins-worker-expected.txt: - web-platform-tests/fetch/api/cors/cors-preflight-expected.txt: - web-platform-tests/fetch/api/cors/cors-preflight-referrer-worker-expected.txt: - web-platform-tests/fetch/api/cors/cors-preflight-worker-expected.txt: - web-platform-tests/fetch/api/cors/cors-redirect-credentials-expected.txt: - web-platform-tests/fetch/api/cors/cors-redirect-credentials-worker-expected.txt: - web-platform-tests/fetch/api/credentials/authentication-basic-worker-expected.txt: - web-platform-tests/fetch/api/request/request-cache-expected.txt: Source/WebCore: Covered by rebased tests. - Modules/fetch/FetchLoader.cpp: (WebCore::FetchLoader::start): Passing fetch mode to ThreadableLoader. Disabling as a temp fix credentials in case of CORS mode as credential options is not yet supported and would make several tests fail. LayoutTests: Rebasing specific expectations as Maci/iOS WK2 does not like https tests. - platform/ios-simulator-wk2/imported/w3c/web-platform-tests/fetch/api/basic/mode-no-cors-expected.txt: Added. - platform/ios-simulator-wk2/imported/w3c/web-platform-tests/fetch/api/basic/mode-no-cors-worker-expected.txt: Added. - platform/ios-simulator-wk2/imported/w3c/web-platform-tests/fetch/api/cors/cors-basic-expected.txt: Added. - platform/ios-simulator-wk2/imported/w3c/web-platform-tests/fetch/api/cors/cors-basic-worker-expected.txt: Added. - platform/mac-wk2/imported/w3c/web-platform-tests/fetch/api/basic/mode-no-cors-expected.txt: Added. - platform/mac-wk2/imported/w3c/web-platform-tests/fetch/api/basic/mode-no-cors-worker-expected.txt: Added. - platform/mac-wk2/imported/w3c/web-platform-tests/fetch/api/cors/cors-basic-expected.txt: Added. - platform/mac-wk2/imported/w3c/web-platform-tests/fetch/api/cors/cors-basic-worker-expected.txt: Added. - 12:09 PM Changeset in webkit [203731] by - 10 edits2 adds4 deletes in trunk Align NamedNodeMap with the specification Reviewed by Darin Adler. LayoutTests/imported/w3c: Rebaseline W3C test now that more checks are passing. - web-platform-tests/dom/interfaces-expected.txt: Source/WebCore: Align NamedNodeMap with the specification: In particular, mark parameters as mandatory when they should be, and use tighter "Attr" typing instead of Node. Chrome and Firefox agree with the specification. Test: fast/dom/NamedNodeMap-parameters.html - dom/NamedNodeMap.cpp: (WebCore::NamedNodeMap::getNamedItem): (WebCore::NamedNodeMap::getNamedItemNS): (WebCore::NamedNodeMap::removeNamedItem): (WebCore::NamedNodeMap::removeNamedItemNS): (WebCore::NamedNodeMap::setNamedItem): (WebCore::NamedNodeMap::item): - dom/NamedNodeMap.h: - dom/NamedNodeMap.idl: LayoutTests: - dom/html/level2/core/hc_namednodemapinvalidtype1-expected.txt: Removed. - dom/html/level2/core/hc_namednodemapinvalidtype1.html: Removed. - dom/xhtml/level2/core/hc_namednodemapinvalidtype1-expected.txt: Removed. - dom/xhtml/level2/core/hc_namednodemapinvalidtype1.xhtml: Removed. Drop outdated DOM level 2 tests that expect the wrong exception type to be thrown when passing a non-Attr node in. - fast/dom/NamedNodeMap-parameters-expected.txt: Added. - fast/dom/NamedNodeMap-parameters.html: Added. Add layout test coverage. I have verified that this test is passing in both Firefox and Chrome. - fast/dom/NamedNodeMap-setNamedItem-crash-expected.txt: - fast/dom/non-numeric-values-numeric-parameters-expected.txt: - fast/dom/script-tests/non-numeric-values-numeric-parameters.js: Update / rebaseline existing tests to reflect behavior change. - 11:58 AM Changeset in webkit [203730] by - 2 edits in trunk/Source/JavaScriptCore Remove unused DEBUG_WITH_BREAKPOINT configuration. Reviewed by Keith Miller. - bytecompiler/BytecodeGenerator.cpp: (JSC::BytecodeGenerator::emitDebugHook): - 11:58 AM Changeset in webkit [203729] by - 4 edits2 adds in trunk Infinite Canvas context save() causes WebKit to crash <rdar://problem/26759984> Patch by Said Abou-Hallawa <[email protected]> on 2016-07-26 Reviewed by Simon Fraser. Source/WebCore: Limit the size of the canvas context state stack to 1024 * 16 saves. All the saves which come after that limit will stay unrealized. The restore() should not have any effect till there is no unrealized saves. Test: fast/canvas/canvas-context-save-limit.html - html/canvas/CanvasRenderingContext2D.cpp: (WebCore::CanvasRenderingContext2D::realizeSaves): (WebCore::CanvasRenderingContext2D::realizeSavesLoop): - html/canvas/CanvasRenderingContext2D.h: LayoutTests: - fast/canvas/canvas-context-save-limit-expected.txt: Added. - fast/canvas/canvas-context-save-limit.html: Added. - 11:00 AM Changeset in webkit [203728] by - 5 edits3 adds in trunk DOMTokenList should be iterable Patch by Youenn Fablet <[email protected]> on 2016-07-26 Reviewed by Chris Dumez. Source/WebCore: DOMTokenList should be iterable as per Test: fast/dom/domTokenListIterator.html - html/DOMTokenList.idl: Added iterable to the interface description. LayoutTests: - fast/dom/domTokenListIterator-expected.txt: Added. - fast/dom/domTokenListIterator.html: Added. - fast/dom/iterable-tests.js: Added. - fast/dom/nodeListIterator-expected.txt: - fast/dom/nodeListIterator.html: Making use of iterable-tests.js - 10:26 AM Changeset in webkit [203727] by - 2 edits in trunk/Source/WebKit2 [Threaded Compositor] ASSERTION FAILED: canAccessThreadLocalDataForThread(m_thread) after r203718 Reviewed by Michael Catanzaro. I forgot to call purgeGLResources() before invalidating the scene in the compositing thread. - Shared/CoordinatedGraphics/threadedcompositor/ThreadedCompositor.cpp: (WebKit::ThreadedCompositor::invalidate): - 10:13 AM Changeset in webkit [203726] by - 21 edits2 deletes in trunk Unreviewed, rolling out r203719. It is breaking win build (Requested by youenn on #webkit). Reverted changeset: "[Fetch API] Response constructor should be able to take a ReadableStream as body" - 9:57 AM Changeset in webkit [203725] by - 3 edits6 adds in trunk Stop supporting compressed character sets BOCU-1 and SCSU Reviewed by Brent Fulgham. WebKit should not support the compressed character sets BOCU-1 and SCSU. Chrome and Firefox don't and these old formats may pass server-side character filters while still rendering in WebKit. The HTML specification says "The above prohibits supporting, for example, CESU-8, UTF-7, BOCU-1, SCSU, EBCDIC, and UTF-32." Source/WebCore: Tests: http/tests/misc/char-encoding-bocu-1-blacklisted.html http/tests/misc/char-encoding-scsu-blacklisted.html - platform/text/TextEncodingRegistry.cpp: Blacklisted BOCU-1 and SCSU character sets. LayoutTests: - http/tests/misc/char-encoding-bocu-1-blacklisted-expected.txt: Added. - http/tests/misc/char-encoding-bocu-1-blacklisted.html: Added. - http/tests/misc/char-encoding-scsu-blacklisted-expected.txt: Added. - http/tests/misc/char-encoding-scsu-blacklisted.html: Added. - http/tests/misc/resources/bocu-1-cyrillic.php: Added. - http/tests/misc/resources/scsu-cyrillic.php: Added. - 9:32 AM Changeset in webkit [203724] by - 2 edits in trunk/Source/WebKit2 Support configurable autocapitalization. rdar://problem/27536113 Reviewed by Tim Horton. Autocapitalization should be enabled/disabled regardless of whether we are using advance spelling feature. - UIProcess/mac/TextCheckerMac.mm: (WebKit::TextChecker::checkTextOfParagraph): (WebKit::TextChecker::getGuessesForWord): - 9:29 AM Changeset in webkit [203723] by - 2 edits in trunk/Source/WebCore Reviewed by Michael Catanzaro. This is happening in the GTK+ Debug bot when running test loader/load-defer.html (note that the assert is inside a !USE(CF) block). The test is creating an iframe with load deferred, then in a timeout it disables the deferred load and checks that the load actually happens. What happens is that the initial empty document is what calls DocumentLoader::finishedLoading() when load is still deferred. The onload handler is not called because load events are disabled for the initial empty document in SubframeLoader::loadSubframe(), but DocumentLoader::finishedLoading() is called unconditionally from maybeLoadEmpty(). I think it's fine to call DocumentLoader::finishedLoading() for the initial empty document even when load is deferred, so we can simply update the assert to handle that case. - loader/DocumentLoader.cpp: (WebCore::DocumentLoader::finishedLoading): Do not assert if called for the initial empty document when load is deferred. - 9:26 AM Changeset in webkit [203722] by - 3 edits in trunk/Source/WebKit2 [Coordinated Graphics] Test fast/fixed-layout/fixed-layout.html crashes in debug Reviewed by Michael Catanzaro. The problem is that WebPage has its own m_useFixedLayout that is only updated when changed from the UI process. However, layout tests doing internals.setUseFixedLayout() change the frame view directly, and the WebPage doesn't notice it. - WebProcess/WebPage/WebPage.cpp: (WebKit::WebPage::setFixedVisibleContentRect): Deleted. (WebKit::WebPage::sendViewportAttributesChanged): Change the assert to check the main FrameView is in fixed layout mode. - WebProcess/WebPage/WebPage.h: - 9:25 AM Changeset in webkit [203721] by - 5 edits in trunk/Source/WebKit2 [Threaded Compositor] ASSERTION FAILED: isMainThread() when ThreadedCompositor is destroyed since r203718 Reviewed by Žan Doberšek. ThreadedCompositor can be destroyed from a secondary thread, for example, when a task takes a reference and the main threads derefs it, when the task finishes in the secondary thread the lambda ends up deleting the threaded compositor. This is ok for the Threaded compositor but not for the CompositingRunLoop class. this was not a problem before r203718 because the CompositingRunLoop object was created and destroyed in the same thread always, but now it's part of the ThreadedCompositor class. This patch uses std:unique_ptr again to explicitly create the CompositingRunLoop in the ThreadedCompositor constructor and delete in the invalidate() method to make sure it happens in the main thread in both cases. - Shared/CoordinatedGraphics/threadedcompositor/CompositingRunLoop.cpp: (WebKit::WorkQueuePool::invalidate): (WebKit::WorkQueuePool::getOrCreateWorkQueueForContext): -::updateViewport): (WebKit::ThreadedCompositor::scheduleDisplayImmediately): (WebKit::ThreadedCompositor::forceRepaint): - Shared/CoordinatedGraphics/threadedcompositor/ThreadedCompositor.h: - 9:23 AM Changeset in webkit [203720] by - 25 edits in trunk Remove ClientCredentialPolicy cross-origin option from ResourceLoaderOptions Patch by Youenn Fablet <[email protected]> on 2016-07-26 Reviewed by Alex Christensen. LayoutTests/imported/w3c: Below test changes as ResourceLoader is now computing whether to request credentials to clients if: - request is authorized to request credentials (DocumentThreadableLoader only allows same-origin to make such thing) - credential policy is Include or Same-Origin and request is same-origin. This test changes as current fetch loader sets the credential mode to Omit, thus disabling credential request. To be noted that only fetch API is allowing to disable credentials sending for same-origin request using "Omit" credential mode. - web-platform-tests/fetch/api/credentials/authentication-basic-expected.txt: Rebasing test. Source/WebCore: ClientCredentialPolicy had three values (not ask, ask, ask only for same origin). The distinction between allowing cross-origin or same-origin credentials is misleading as it is not supported for synchronous loads and not supported by Network process. It is best replaced by a boolean option (ask or not ask). Same-origin ClientCredentialPolicy option was only used by DocumentThreadableLoader for asynchronous loads. Since DocumentThreadableLoader is already computing whether the request is cross-origin, it can also compute whether credentials may be requested or not. In case of cross-origin redirections, credentials are omitted, thus disabling any possibility for requesting credentials for cross-origin resources after redirections. Moving ClientCredentialPolicy to ResourceLoaderOptions since it is not used by platform code except for some mac-specific code that is already using ResourceLoaderOptions. Covered by existing tests. - loader/CrossOriginPreflightChecker.cpp: (WebCore::CrossOriginPreflightChecker::startPreflight): Setting clearly credential mode to Omit credentials. (WebCore::CrossOriginPreflightChecker::doPreflight): Ditto. - loader/DocumentLoader.cpp: (WebCore::DocumentLoader::startLoadingMainResource): AskClientForAllCredentials -> ClientCredentialPolicy::MayAskClientForCredentials. - loader/DocumentThreadableLoader.cpp: (WebCore::DocumentThreadableLoader::loadRequest): Disabling requesting credentials for any cross-origin request. - loader/FrameLoader.h: - loader/MediaResourceLoader.cpp: (WebCore::MediaResourceLoader::requestResource): AskClientForAllCredentials -> ClientCredentialPolicy::MayAskClientForCredentials. - loader/NetscapePlugInStreamLoader.cpp: (WebCore::NetscapePlugInStreamLoader::NetscapePlugInStreamLoader): Ditto. - loader/ResourceLoader.cpp: (WebCore::ResourceLoader::isAllowedToAskUserForCredentials): Disabling client credential request if ClientCredentialPolicy is CannotAskClientForCredentials. Otherwise, returns true if fetch credentials mode allows it. - loader/ResourceLoaderOptions.h: (WebCore::ResourceLoaderOptions::ResourceLoaderOptions): (WebCore::ResourceLoaderOptions::clientCredentialPolicy): Deleted. (WebCore::ResourceLoaderOptions::setClientCredentialPolicy): Deleted. - loader/cache/CachedResourceLoader.cpp: (WebCore::CachedResourceLoader::requestUserCSSStyleSheet): AskClientForAllCredentials -> ClientCredentialPolicy::MayAskClientForCredentials. (WebCore::CachedResourceLoader::defaultCachedResourceOptions): AskClientForAllCredentials -> ClientCredentialPolicy::MayAskClientForCredentials. - loader/icon/IconLoader.cpp: (WebCore::IconLoader::startLoading): DoNotAskClientForAnyCredentials -> ClientCredentialPolicy::CannotAskClientForCredentials. - platform/graphics/avfoundation/cf/WebCoreAVCFResourceLoader.cpp: (WebCore::WebCoreAVCFResourceLoader::startLoading): DoNotAskClientForCrossOriginCredentials -> ClientCredentialPolicy::CannotAskClientForCredentials. This is ok as credentials mode is omit and stored credentials are not allowed. - platform/graphics/avfoundation/objc/WebCoreAVFResourceLoader.mm: (WebCore::WebCoreAVFResourceLoader::startLoading): Ditto. - platform/network/ResourceHandleTypes.h: - xml/XSLTProcessorLibxslt.cpp: DoNotAskClientForCrossOriginCredentials -> ClientCredentialPolicy::MayAskClientForCredentials. This is ok as this is a synchronous load. (WebCore::docLoaderFunc): - xml/parser/XMLDocumentParserLibxml2.cpp: (WebCore::openFunc): DoNotAskClientForCrossOriginCredentials -> ClientCredentialPolicy::MayAskClientForCredentials. This is ok as this is a synchronous load. Source/WebKit2: Renaming of ClientCredentialPolicy values. - NetworkProcess/Downloads/DownloadManager.cpp: (WebKit::DownloadManager::startDownload): - NetworkProcess/NetworkLoad.cpp: (WebKit::NetworkLoad::continueCanAuthenticateAgainstProtectionSpace): (WebKit::NetworkLoad::didReceiveAuthenticationChallenge): - NetworkProcess/NetworkLoadParameters.h: - NetworkProcess/NetworkResourceLoadParameters.cpp: (WebKit::NetworkResourceLoadParameters::NetworkResourceLoadParameters): - NetworkProcess/NetworkResourceLoader.cpp: (WebKit::NetworkResourceLoader::NetworkResourceLoader): - WebProcess/Network/WebLoaderStrategy.cpp: (WebKit::WebLoaderStrategy::scheduleLoad): - WebProcess/Network/WebResourceLoader.cpp: (WebKit::WebResourceLoader::willSendRequest): LayoutTests: - platform/mac-wk1/imported/w3c/web-platform-tests/fetch/api/credentials/authentication-basic-expected.txt: Removed. - 9:03 AM Changeset in webkit [203719]: - 6:26 AM Changeset in webkit [203718] by - 5 edits in trunk/Source/WebKit2 [Threaded Compositor] Crashes and deadlocks in single web process mode Reviewed by Žan Doberšek. Every WebPage has its own threaded compositor that runs its own compositing thread. That means that when there's more than one WebPage in the same process, we are running OpenGL stuff in different secondary threads. That's causing crashes and deadlocks in X and graphics drivers. We should ensure there's a single compositing thread per process when multiple threads is not supported. This is causing unit test WebKit2.WKPageGetScaleFactorNotZero to time out since we switched to the threaded compositor. That test is creating two pages in the same web process, and most of the times the web process crashes or deadlocks causing the test to never finish and time out. This patch makes CompositingRunLoop use a thread pool that spawns the compositing threads and schedules the tasks there. - Shared/CoordinatedGraphics/threadedcompositor/CompositingRunLoop.cpp: (WebKit::WorkQueuePool::singleton): (WebKit::WorkQueuePool::dispatch): (WebKit::WorkQueuePool::runLoop): (WebKit::WorkQueuePool::invalidate): (WebKit::WorkQueuePool::WorkQueuePool): (WebKit::WorkQueuePool::getOrCreateWorkQueueForContext): (WebKit::CompositingRunLoop::CompositingRunLoop): (WebKit::CompositingRunLoop::~CompositingRunLoop): (WebKit::CompositingRunLoop::performTask): (WebKit::CompositingRunLoop::performTaskSync): -::purgeBackingStores): (WebKit::ThreadedCompositor::renderNextFrame): (WebKit::ThreadedCompositor::commitScrollOffset): (WebKit::ThreadedCompositor::updateViewport): (WebKit::ThreadedCompositor::scheduleDisplayImmediately): (WebKit::ThreadedCompositor::forceRepaint): (WebKit::ThreadedCompositor::tryEnsureGLContext): Deleted. (WebKit::ThreadedCompositor::glContext): Deleted. (WebKit::ThreadedCompositor::updateSceneState): Deleted. - Shared/CoordinatedGraphics/threadedcompositor/ThreadedCompositor.h: - 4:08 AM Changeset in webkit [203717] by - 19 edits in trunk [css-grid] repeat() syntax should take a <track-list> argument Reviewed by Darin Adler. Source/WebCore: The repeat() notation used to allow just 1 <track-size> as second argument. Specs have been recently modified so that a <track-list> is now supported, meaning that we can pass an arbitrary number of track sizes and line numbers. It has been working for some time for repeat() if the first argument was a positive integer, but it requires some changes for the auto repeat cases (auto-fill and auto-fit). - css/CSSComputedStyleDeclaration.cpp: (WebCore::OrderedNamedLinesCollector::OrderedNamedLinesCollector): Store the total number of auto repeat tracks and the length of a single repetition instead of the number of repetitions. (WebCore::OrderedNamedLinesCollector::collectLineNamesForIndex): Do not assume that there is only 1 repeat track. (WebCore::valueForGridTrackList): - css/CSSParser.cpp: (WebCore::CSSParser::parseGridTrackRepeatFunction): Allow multiple tracks in repeat(). - rendering/RenderGrid.cpp: (WebCore::RenderGrid::rawGridTrackSize): Renamed repetitions -> autoRepeatTracksCount. (WebCore::RenderGrid::computeAutoRepeatTracksCount): Use all the repeat tracks to compute the total track size of a single repetition. (WebCore::RenderGrid::computeEmptyTracksForAutoRepeat): - rendering/style/GridPositionsResolver.cpp: (WebCore::NamedLineCollection::NamedLineCollection): Renamed m_repetitions -> m_autoRepeatTotalTracks. Added m_autoRepeatTrackListLength (it was always 1 before). (WebCore::NamedLineCollection::find): Resolve lines inside multiple repeat tracks. (WebCore::NamedLineCollection::firstPosition): Ditto. - rendering/style/GridPositionsResolver.h: LayoutTests: Added new test cases with multiple tracks inside repeat() notation, both for fixed an automatic (auto-fill & auto-fit) repetitions. - fast/css-grid-layout/grid-auto-fill-columns-expected.txt: - fast/css-grid-layout/grid-auto-fill-columns.html: - fast/css-grid-layout/grid-auto-fill-rows-expected.txt: - fast/css-grid-layout/grid-auto-fill-rows.html: - fast/css-grid-layout/grid-auto-fit-columns-expected.txt: - fast/css-grid-layout/grid-auto-fit-columns.html: - fast/css-grid-layout/grid-auto-fit-rows-expected.txt: - fast/css-grid-layout/grid-auto-fit-rows.html: - fast/css-grid-layout/grid-element-auto-repeat-get-set-expected.txt: - fast/css-grid-layout/grid-element-auto-repeat-get-set.html: - fast/css-grid-layout/grid-element-repeat-get-set-expected.txt: - fast/css-grid-layout/grid-element-repeat-get-set.html: - 2:37 AM Changeset in webkit [203716] by - 15 edits in trunk [css-grid] grid-auto-flow|row should take a <track-size>+ Reviewed by Darin Adler. Source/WebCore: - css/CSSComputedStyleDeclaration.cpp: (WebCore::valueForGridTrackSizeList): (WebCore::ComputedStyleExtractor::propertyValue): Return a list of <track-size> instead of just one. - css/CSSParser.cpp: (WebCore::CSSParser::parseValue): Use the new values of TrackListType; (WebCore::CSSParser::parseGridTemplateRowsAndAreasAndColumns): Ditto. (WebCore::CSSParser::parseGridTemplateShorthand): Ditto. (WebCore::CSSParser::parseGridShorthand): Ditto. (WebCore::CSSParser::parseGridTrackList): Changed behavior depending on the value of TrackSizeList. - css/CSSParser.h: TrackListType has now 3 different values which determine the behavior of parseGridTrackList. - css/CSSPropertyNames.in: Use a new converter for lists. - css/StyleBuilderConverter.h: (WebCore::StyleBuilderConverter::convertGridTrackSizeList): - rendering/RenderGrid.cpp: (WebCore::RenderGrid::rawGridTrackSize): Resolve the size of the auto track. - rendering/style/RenderStyle.h: (WebCore::RenderStyle::gridAutoColumns): Return a Vector. (WebCore::RenderStyle::gridAutoRows): Ditto. (WebCore::RenderStyle::setGridAutoColumns): Store a Vector. (WebCore::RenderStyle::setGridAutoRows): Ditto. (WebCore::RenderStyle::initialGridAutoColumns): Return a Vector with one auto track. (WebCore::RenderStyle::initialGridAutoRows): Ditto. - rendering/style/StyleGridData.h: Store a Vector of GridTrackSize instead of just one. LayoutTests: - fast/css-grid-layout/grid-auto-columns-rows-get-set-expected.txt: - fast/css-grid-layout/grid-auto-columns-rows-get-set.html: - fast/css-grid-layout/grid-shorthand-get-set-expected.txt: - fast/css-grid-layout/grid-shorthand-get-set.html: - svg/css/getComputedStyle-basic-expected.txt: CSSPrimitiveValue -> CSSValueList. - 1:06 AM Changeset in webkit [203715] by - 1 edit in trunk/Tools/ChangeLog Test svn.webkit.org functionality after maintenance. - 12:51 AM MathML/Early_2016_Refactoring edited by - (diff) - 12:46 AM Changeset in webkit [203714] by - 6 edits3 adds in trunk MathOperator: Add a mapping from combining to non-combining equivalents Patch by Frederic Wang <[email protected]> on 2016-07-25 Reviewed by Darin Adler. Source/WebCore: Many math fonts provide stretch variants and assemblies for combining characters but not for their non-combining equivalent. In the MathML recommendation, it is suggested to use non-combining charaters, so we allow the operator stretching code to look for constructions associated to these non-combining characters in order to still be able to stretch the combining ones. Test: mathml/presentation/bug159513.html - rendering/mathml/MathOperator.cpp: (WebCore::MathOperator::getGlyph): New function extending getBaseGlyph to retrieve the glyph data for an arbitrary character. (WebCore::MathOperator::getMathVariantsWithFallback): This helper function calls getMathVariants for the base glyph. If no constructions are available, it calls getMathVariants for the glyph associated to equivalent fallback characters as listed in the small characterFallback table. (WebCore::MathOperator::calculateStretchyData): Call getMathVariantsWithFallback instead of getMathVariants. Note that we do not need to do that for calculateDisplayStyleLargeOperator as we do not use fallback for large operators. - rendering/mathml/MathOperator.h: (WebCore::MathOperator::getBaseGlyph): Use getGlyph to implement this function. LayoutTests: - mathml/presentation/bug159513.html: Added. - platform/gtk/mathml/presentation/bug159513-expected.png: Added. - platform/gtk/mathml/presentation/bug159513-expected.txt: Added. - platform/ios-simulator/TestExpectations: Skip this test on iOS. - platform/mac/TestExpectations: Skip this test on Mac. Jul 25, 2016: - 10:46 PM Changeset in webkit [203713] by - 12 edits in trunk Second parameter to Range.setStart() / setEnd() should be mandatory Reviewed by Ryosuke Niwa. LayoutTests/imported/w3c: Rebaseline W3C test now that more checks are passing. - web-platform-tests/dom/interfaces-expected.txt: Source/WebCore: Second parameter to Range.setStart() / setEnd() should be mandatory: Also use "unsigned long" instead of "long" type for the parameter, as per the specification. Firefox and Chrome agree with the specification. No new tests, rebaselined existing test. - dom/Range.cpp: (WebCore::Range::setStart): (WebCore::Range::setEnd): (WebCore::Range::checkNodeWOffset): - dom/Range.h: - dom/Range.idl: - dom/RangeBoundaryPoint.h: (WebCore::RangeBoundaryPoint::set): LayoutTests: Update tests to reflect behavior change. - editing/deleting/delete-uneditable-style.html: - fast/dom/non-numeric-values-numeric-parameters-expected.txt: - fast/dom/script-tests/non-numeric-values-numeric-parameters.js: - fast/regions/simplified-layout-no-regions.html: - 10:27 PM Changeset in webkit [203712] by - 4 edits in trunk DOMTokenList.prototype.toString should be enumerable Reviewed by Ryosuke Niwa. LayoutTests/imported/w3c: Rebaseline W3C test now that one more check is passing. - web-platform-tests/dom/interfaces-expected.txt: Source/WebCore: DOMTokenList.prototype.toString should be enumerable: - - Firefox and Chrome agree with the specification. No new tests, rebaselined existing test. - html/DOMTokenList.idl: - 10:27 PM Changeset in webkit [203711] by - 7 edits in trunk AX: Expose autoFillButtonType to accessibility Reviewed by Chris Fleizach. Source/WebCore: Added a new attribute on Mac to expose the auto-fill button type. Changes are covered in modified test. - accessibility/AccessibilityObject.cpp: (WebCore::AccessibilityObject::isValueAutofillAvailable): (WebCore::AccessibilityObject::valueAutofillButtonType): (WebCore::AccessibilityObject::isValueAutofilled): - accessibility/AccessibilityObject.h: (WebCore::AccessibilityObject::passwordFieldValue): - accessibility/mac/WebAccessibilityObjectWrapperMac.mm: (-[WebAccessibilityObjectWrapper accessibilityAttributeValue:]): LayoutTests: - accessibility/auto-fill-types-expected.txt: - accessibility/auto-fill-types.html: - 8:04 PM Changeset in webkit [203710] by - 2 edits in trunk/Websites/perf.webkit.org Build fix for Safari 9. - public/v3/models/build-request.js: (BuildRequest.prototype.waitingTime): Don't use "const" in strict mode. - 7:45 PM Changeset in webkit [203709] by - 15 edits2 adds in trunk/Websites/perf.webkit.org Perf dashboard should show the list of a pending A/B testing jobs Rubber-stamped by Chris Dumez. Add a page to show the list of A/B testing build requests per triggerable. Ideally, we would like to see a queue per builder but that would require changes to database tables and syncing scripts. Because this page is most useful when the analysis task with which each build request is associated, JSON API at /api/build-requests/ has been modified to return the analysis task ID for each request. Also streamlined the page that shows the list of analysis tasks per Chris' feedback by consolidating "Bisecting" and "Identified" into "Investigated" and moving the toolbar from the upper left corner inside the heading to right beneath the heading above the table. Also made the category page serialize the filter an user had typed in so that reloading the page doesn't clear it. - public/api/analysis-tasks.php: (fetch_associated_data_for_tasks): Removed 'category' from the list of columns returned as the notion of 'category' is only relevant in UI, and it's better computed in the front-end. (format_task): Ditto. (determine_category): Deleted. - public/api/test-groups.php: (main): - public/include/build-requests-fetcher.php: (BuildRequestsFetcher::fetch_for_task): Include the analysis task ID in the rows. (BuildRequestsFetcher::fetch_for_group): Ditto. Ditto. (BuildRequestsFetcher::fetch_incomplete_requests_for_triggerable): Ditto. (BuildRequestsFetcher::results_internal): Ditto. - public/v3/index.html: - public/v3/main.js: (main): Create a newly introduced BuildRequestQueuePage as a subpage of AnalysisCategoryPage. - public/v3/components/ratio-bar-graph.js: (RatioBarGraph.prototype.update): Fixed a bogus assertion here. ratio can be any number. The coercion into [-1, 1] is done inside RatioBarGraph's render() function. - public/v3/models/analysis-task.js: (AnalysisTask.prototype.category): Moved the code to compute the analysis task's category from determine_category in analysis-tasks.php. Also merged "bisecting" and "identified" into "investigated". (AnalysisTask.categories): Merged "bisecting" and "identified" into "investigated". - public/v3/models/build-request.js: (BuildRequest): Remember the triggerable and the analysis task associated with this request as well as the time at when this request was created. (BuildRequest.prototype.analysisTaskId): Added. (BuildRequest.prototype.statusLabel): Use a shorter label: "Waiting" for "pending" status. (BuildRequest.prototype.createdAt): Added. (BuildRequest.prototype.waitingTime): Added. Returns a human readable time duration since the creation of this build request such as "2 hours 21 minutes" against a reference time. (BuildRequest.fetchTriggerables): Added. (BuildRequest.cachedRequestsForTriggerableID): Added. Used when navigating back to - public/v3/pages/analysis-category-page.js: (AnalysisCategoryPage): Construct AnalysisCategoryToolbar and store it in this._categoryToolbar since it no longer inherits from Toolbar class, which PageWithHeading recognizes and stores. (AnalysisCategoryPage.prototype.title): (AnalysisCategoryPage.prototype.serializeState): Added. (AnalysisCategoryPage.prototype.stateForCategory): Added. Include the filter in the serialization. (AnalysisCategoryPage.prototype.updateFromSerializedState): Restore the filter from the URL state. (AnalysisCategoryPage.prototype.filterDidChange): Added. Called by AnalysisCategoryToolbar to update the URL state in addition to calling render() as done previously via setFilterCallback. (AnalysisCategoryPage.prototype.render): Always call _categoryToolbar.render() since the hyperlinks for the category pages now include the filter, which can be updated in each call. (AnalysisCategoryPage.cssTemplate): - public/v3/pages/analysis-category-toolbar.js: (AnalysisCategoryToolbar): Inherits from ComponentBase instead of Toolbar since AnalysisCategoryToolbar no longer works with Heading class unlike other subclasses of Toolbar class. (AnalysisCategoryToolbar.prototype.setCategoryPage): Added. (AnalysisCategoryToolbar.prototype.setFilterCallback): Deleted. (AnalysisCategoryToolbar.prototype.setFilter): Added. Used to restore from a serialized URL state. (AnalysisCategoryToolbar.prototype.render): Don't recreate the input element as it clears the value as well as the selection of the element. Also use AnalysisCategoryPage's stateForCategory to serialize the category name and the current filter for each hyperlink. (AnalysisCategoryToolbar.prototype._filterMayHaveChanged): Now takes an boolean argument specifying whether the URL state should be updated or not. We update the URL only when a change event is fired to avoid constantly updating it while an user is still typing. (AnalysisCategoryToolbar.cssTemplate): Added. (AnalysisCategoryToolbar.htmlTemplate): Added a button to open the newly added queue page. - public/v3/pages/build-request-queue-page.js: (BuildRequestQueuePage): Added. (BuildRequestQueuePage.prototype.routeName): Added. (BuildRequestQueuePage.prototype.pageTitle): Added. (BuildRequestQueuePage.prototype.open): Added. Fetch open build requests for every triggerables using the same API as the syncing scripts. (BuildRequestQueuePage.prototype.render): Added. (BuildRequestQueuePage.prototype._constructBuildRequestTable): Added. Construct a table for the list of pending, scheduled or running build requests in the order syncing scripts would see. Note that the list of build requests returned by /api/build-requests/* can contain completed, canceled, or failed requests since the JSON returns all build requests associated with each test group if one of the requests of the group have not finished. This helps syncing scripts picking the right builder for A/B testing when it had previously been unloaded or crashed in the middle of processing a test group. This characteristics of the API actually helps us here because we can reliably compute the total number of build requests in the group. The first half of this function does this counting as well as collapses all but the first unfinished build requests into a "contraction" row, which just shows the number of build requests that are remaining in the group. (BuildRequestQueuePage.cssTemplate): Added. (BuildRequestQueuePage.htmlTemplate): Added. - public/v3/pages/summary-page.js: (SummaryPage.prototype.open): Use one-day median instead of seven-day median to compute the status. (SummaryPageConfigurationGroup): Initialize _ratio to NaN. This was causing assertion failures in RatioBarGraph's update() while measurement sets are being fetched. - server-tests/api-build-requests-tests.js: Updated the tests per change in BuildRequest's statusLabel. - unit-tests/analysis-task-tests.js: Ditto. - unit-tests/test-groups-tests.js: Ditto. - unit-tests/build-request-tests.js: Added tests for BuildRequest's waitingTime. - 7:40 PM Changeset in webkit [203708] by - 15 edits in trunk/Source/WebCore Cleanup RenderTable*::createAnonymous* Reviewed by Simon Fraser. This patch - tightens the type on createAnonymousBoxWithSameTypeAs, createAnonymousWithParentRendererAndDisplay and createAnonymousWithParentRenderer from RenderObject to the appropriate type. - changes the return type of create* function from raw pointer to std::unique_ptr<> - decouples createAnonymousBoxWithSameTypeAs and createAnonymousWithParentRenderer. createAnonymousBoxWithSameTypeAs misleadingly calls createAnonymousWithParentRenderer while it is never the parent in case of table items. (std::unique_ptr::release() vs. WTFMove() will be addressed in a separate patch) No change in functionality. - rendering/RenderBlock.cpp: (WebCore::RenderBlock::createAnonymousBoxWithSameTypeAs): (WebCore::RenderBlock::createAnonymousWithParentRendererAndDisplay): - rendering/RenderBlock.h: (WebCore::RenderBlock::createAnonymousBlock): - rendering/RenderBox.cpp: (WebCore::RenderBox::layoutOverflowRectForPropagation): - rendering/RenderBox.h: (WebCore::RenderBox::createAnonymousBoxWithSameTypeAs): - rendering/RenderElement.cpp: (WebCore::RenderElement::addChild): - rendering/RenderInline.cpp: (WebCore::RenderInline::splitFlow): - rendering/RenderTable.cpp: (WebCore::RenderTable::addChild): (WebCore::RenderTable::createTableWithStyle): (WebCore::RenderTable::createAnonymousWithParentRenderer): - rendering/RenderTable.h: (WebCore::RenderTable::createAnonymousBoxWithSameTypeAs): - rendering/RenderTableCell.cpp: (WebCore::RenderTableCell::createTableCellWithStyle): (WebCore::RenderTableCell::createAnonymousWithParentRenderer): - rendering/RenderTableCell.h: (WebCore::RenderTableCell::createAnonymousBoxWithSameTypeAs): - rendering/RenderTableRow.cpp: (WebCore::RenderTableRow::addChild): (WebCore::RenderTableRow::createTableRowWithStyle): (WebCore::RenderTableRow::createAnonymousWithParentRenderer): - rendering/RenderTableRow.h: (WebCore::RenderTableRow::createAnonymousBoxWithSameTypeAs): - rendering/RenderTableSection.cpp: (WebCore::RenderTableSection::addChild): (WebCore::RenderTableSection::createTableSectionWithStyle): (WebCore::RenderTableSection::createAnonymousWithParentRenderer): - rendering/RenderTableSection.h: (WebCore::RenderTableSection::createAnonymousBoxWithSameTypeAs): - 7:24 PM Changeset in webkit [203707] by - 3 edits2 adds in trunk Touch properties should be on the prototype Reviewed by Ryosuke Niwa. Source/WebCore: Touch properties should be on the prototype: Chrome agrees with the specification. Test: platform/ios-simulator/ios/touch/Touch-attributes-prototype.html - bindings/scripts/CodeGeneratorJS.pm: (InterfaceRequiresAttributesOnInstanceForCompatibility): Deleted. LayoutTests: Add layout test coverage. - platform/ios-simulator/ios/touch/Touch-attributes-prototype-expected.txt: Added. - platform/ios-simulator/ios/touch/Touch-attributes-prototype.html: Added. - 7:18 PM Changeset in webkit [203706] by - 2 edits in trunk/Source/WebCore Set MediaRemote playback state based on MediaSession playback state. Patch by Jeremy Jones <[email protected]> on 2016-07-25 Reviewed by Eric Carlson. Use playback session state to update media remote playback state instead of unconditionally setting it to playing. - platform/audio/mac/MediaSessionManagerMac.mm: (WebCore::MediaSessionManagerMac::updateNowPlayingInfo): - 7:15 PM Changeset in webkit [203705] by - 9 edits in trunk/Source/WebCore RenderBox::haveSameDirection is used only by table items. Reviewed by Simon Fraser. Remove RenderBox::haveSameDirection() since it's used only by RenderTable* classes. The new stand alone function (with 2 arguments) now checks if both of the objects are valid. No change in functionality. - rendering/RenderBox.h: (WebCore::RenderBox::hasSameDirectionAs): Deleted. - rendering/RenderTable.cpp: (WebCore::RenderTable::tableStartBorderAdjoiningCell): (WebCore::RenderTable::tableEndBorderAdjoiningCell): - rendering/RenderTable.h: (WebCore::haveSameDirection): - rendering/RenderTableCell.cpp: (WebCore::RenderTableCell::hasStartBorderAdjoiningTable): (WebCore::RenderTableCell::hasEndBorderAdjoiningTable): - rendering/RenderTableCell.h: (WebCore::RenderTableCell::borderAdjoiningTableStart): (WebCore::RenderTableCell::borderAdjoiningTableEnd): - rendering/RenderTableRow.h: (WebCore::RenderTableRow::borderAdjoiningTableStart): (WebCore::RenderTableRow::borderAdjoiningTableEnd): - rendering/RenderTableSection.cpp: (WebCore::RenderTableSection::borderAdjoiningStartCell): (WebCore::RenderTableSection::borderAdjoiningEndCell): (WebCore::RenderTableSection::firstRowCellAdjoiningTableStart): (WebCore::RenderTableSection::firstRowCellAdjoiningTableEnd): - rendering/RenderTableSection.h: (WebCore::RenderTableSection::borderAdjoiningTableStart): (WebCore::RenderTableSection::borderAdjoiningTableEnd): - 6:05 PM Changeset in webkit [203704] by - 23 edits4 copies in trunk/Source/JavaScriptCore Unreviewed, rolling out r203703. It breaks some internal tests Reverted changeset: "[JSC] DFG::Node should not have its own allocator" - 5:27 PM Changeset in webkit [203703] by - 23 edits4 deletes in trunk/Source/JavaScriptCore [JSC] DFG::Node should not have its own allocator Patch by Benjamin Poulain <[email protected]> on 2016-07-25: - 5:21 PM Changeset in webkit [203702] by - 8 edits5 adds in trunk ClientRect properties should be on the prototype Reviewed by Geoffrey Garen. Source/WebCore: Move ClientRect properties from the instance to the prototype. This matches the specification, Firefox and Chrome. Also add a serializer to ClientRect in order to match the specification: - - This avoids breaking content that relies on JSON.stringify() to serialize ClientRect objects. Tests: fast/css/ClientRect-attributes-prototype.html fast/css/ClientRect-serialization.html - CMakeLists.txt: - WebCore.xcodeproj/project.pbxproj: - bindings/js/JSBindingsAllInOne.cpp: - bindings/js/JSClientRectCustom.cpp: Added. (WebCore::JSClientRect::toJSON): - bindings/scripts/CodeGeneratorJS.pm: - dom/ClientRect.idl: LayoutTests: - fast/css/ClientRect-attributes-prototype-expected.txt: Added. - fast/css/ClientRect-attributes-prototype.html: Added. Add layout test to check that ClientRect's properties are on the prototype. - fast/css/ClientRect-serialization-expected.txt: Added. - fast/css/ClientRect-serialization.html: Added. Add layout test to check that ClientRect has a serializer. - 2:53 PM Changeset in webkit [203701] by - 8 edits2 adds in trunk Parameters to DOMImplementation.createDocumentType() should be mandatory and non-nullable Reviewed by Ryosuke Niwa. LayoutTests/imported/w3c: Rebaseline a W3C test now that more checks are passing. - web-platform-tests/dom/interfaces-expected.txt: Source/WebCore: Parameters to DOMImplementation.createDocumentType() should be mandatory and non-nullable: Firefox and Chrome both agree with the specification. However, those parameters were nullable and optional in WebKit. Test: fast/dom/DOMImplementation/createDocumentType-parameters.html - dom/DOMImplementation.idl: LayoutTests: - editing/selection/script-tests/DOMSelection-DocumentType.js: - fast/dom/DOMImplementation/createDocumentType-err-expected.txt: - fast/dom/DOMImplementation/script-tests/createDocumentType-err.js: Update existing tests to reflect the behavior change. - fast/dom/DOMImplementation/createDocumentType-parameters-expected.txt: Added. - fast/dom/DOMImplementation/createDocumentType-parameters.html: Added. Add layout test coverage. I have verified that this test passes on both Firefox and Chrome. - 2:38 PM Changeset in webkit [203700] by - 7 edits in trunk/Tools Modern IDB: Make sure IndexedDB works from file:// url documents by default Reviewed by Alex Christensen. Previously, to grant IndexedDB access to file:// urls for testing purposes, we had to call the SPI [WKWebViewConfiguration _setAllowUniversalAccessFromFileURLs:]. As of this is no longer required. Change the relevant API tests to make sure this continues to be no longer required. - TestWebKitAPI/Tests/WebKit2Cocoa/IDBDeleteRecovery.mm: - TestWebKitAPI/Tests/WebKit2Cocoa/IndexedDBDatabaseProcessKill.mm: - TestWebKitAPI/Tests/WebKit2Cocoa/IndexedDBMultiProcess.mm: - TestWebKitAPI/Tests/WebKit2Cocoa/IndexedDBPersistence.mm: - TestWebKitAPI/Tests/WebKit2Cocoa/StoreBlobThenDelete.mm: - TestWebKitAPI/Tests/WebKit2Cocoa/WebProcessKillIDBCleanup.mm: - 2:03 PM Changeset in webkit [203699] by - 7 edits in trunk/Source/JavaScriptCore AssemblyHelpers should own all of the cell allocation methods Reviewed by Saam Barati. Prior to this change we had some code in DFGSpeculativeJIT.h and some code in JIT.h that did cell allocation. This change moves all of that code into AssemblyHelpers.h. - dfg/DFGSpeculativeJIT.h: (JSC::DFG::SpeculativeJIT::emitAllocateJSCell): (JSC::DFG::SpeculativeJIT::emitAllocateJSObject): (JSC::DFG::SpeculativeJIT::emitAllocateJSObjectWithKnownSize): (JSC::DFG::SpeculativeJIT::emitAllocateVariableSizedJSObject): (JSC::DFG::SpeculativeJIT::emitAllocateDestructibleObject): - jit/AssemblyHelpers.h: (JSC::AssemblyHelpers::emitAllocate): (JSC::AssemblyHelpers::emitAllocateJSCell): (JSC::AssemblyHelpers::emitAllocateJSObject): (JSC::AssemblyHelpers::emitAllocateJSObjectWithKnownSize): (JSC::AssemblyHelpers::emitAllocateVariableSized): (JSC::AssemblyHelpers::emitAllocateVariableSizedJSObject): (JSC::AssemblyHelpers::emitAllocateDestructibleObject): - jit/JIT.h: - jit/JITInlines.h: (JSC::JIT::isOperandConstantChar): (JSC::JIT::emitValueProfilingSite): (JSC::JIT::emitAllocateJSObject): Deleted. - jit/JITOpcodes.cpp: (JSC::JIT::emit_op_new_object): (JSC::JIT::emit_op_create_this): - jit/JITOpcodes32_64.cpp: (JSC::JIT::emit_op_new_object): (JSC::JIT::emit_op_create_this): - 1:51 PM Changeset in webkit [203698] by - 6 edits2 adds in trunk Media controls should not be displayed for a video until it starts playing <rdar://problem/26986673> Reviewed by Beth Dakin. Source/WebCore: For videos that have never played back yet, we should not show media controls. To ensure this behavior, we ensure that the playback behavior restriction is set upon creating the media element. This restriction is then removed when the media element begins to play. Added two new WebKit API tests. - html/HTMLMediaElement.cpp: (WebCore::HTMLMediaElement::HTMLMediaElement): Tools: Verify that multiple videos do or don't show the media controller depending on whether videos are playing. Also tweaks an existing API test (VideoControlsManagerSingleLargeVideo) that was passing because we were always showing media controls for large videos with audio, even if they had not played back yet. This change ensures that large videos with audio show media controls only after they begin to play back, and not by virtue of being large enough for main content. - TestWebKitAPI/TestWebKitAPI.xcodeproj/project.pbxproj: - TestWebKitAPI/Tests/WebKit2Cocoa/VideoControlsManager.mm: (TestWebKitAPI::TEST): - TestWebKitAPI/Tests/WebKit2Cocoa/large-video-with-audio.html: - TestWebKitAPI/Tests/WebKit2Cocoa/large-videos-with-audio-autoplay.html: Added. - TestWebKitAPI/Tests/WebKit2Cocoa/large-videos-with-audio.html: Added. - 1:40 PM Changeset in webkit [203697] by - 8 edits in trunk/Source/JavaScriptCore MathICs should be able to take and dump stats about code size Reviewed by Filip Pizlo. This will make testing changes on MathIC going forward much easier. We will be able to easily see if modifications to MathIC will lead to us generating smaller code. We now only dump average size when we regenerate any MathIC. This works out for large tests/pages, but is not great for testing small programs. We can add more dump points later if we find that we want to dump stats while running small small programs. - bytecode/CodeBlock.cpp: (JSC::CodeBlock::jitSoon): (JSC::CodeBlock::dumpMathICStats): - bytecode/CodeBlock.h: (JSC::CodeBlock::isStrictMode): (JSC::CodeBlock::ecmaMode): - dfg/DFGSpeculativeJIT.cpp: (JSC::DFG::SpeculativeJIT::compileMathIC): - ftl/FTLLowerDFGToB3.cpp: (JSC::FTL::DFG::LowerDFGToB3::compileMathIC): - jit/JITArithmetic.cpp: (JSC::JIT::emitMathICFast): (JSC::JIT::emitMathICSlow): - jit/JITMathIC.h: (JSC::JITMathIC::finalizeInlineCode): (JSC::JITMathIC::codeSize): - jit/JITOperations.cpp: - 1:35 PM Changeset in webkit [203696] by - 16 edits10 moves1 add in trunk Rename SubtleCrypto to WebKitSubtleCrypto <rdar://problem/27483617> Reviewed by Brent Fulgham. Source/WebCore: Tests: crypto/webkitSubtle/gc-2.html crypto/webkitSubtle/gc-3.html crypto/webkitSubtle/gc.html Rename Class SubtleCrypto to WebKitSubtleCrypto, and Crypto.subtle to Crypto.webkitSubtle in order to let the new implementation to reuse the name SubtleCrypto. This renaming should match what our current JSBindings use, and therefore should not introduce any change of behavoir. - CMakeLists.txt: Revise project files for for new file names. - DerivedSources.cpp: - DerivedSources.make: - PlatformEfl.cmake: - PlatformGTK.cmake: - PlatformMac.cmake: - WebCore.xcodeproj/project.pbxproj: Revise project files for for new file names. - bindings/js/JSWebKitSubtleCryptoCustom.cpp: Renamed from Source/WebCore/bindings/js/JSSubtleCryptoCustom.cpp. - crypto/WebKitSubtleCrypto.cpp: Renamed from Source/WebCore/crypto/SubtleCrypto.cpp. - crypto/WebKitSubtleCrypto.h: Renamed from Source/WebCore/crypto/SubtleCrypto.h. - crypto/WebKitSubtleCrypto.idl: Renamed from Source/WebCore/crypto/SubtleCrypto.idl. - page/Crypto.cpp: (WebCore::Crypto::webkitSubtle): (WebCore::Crypto::subtle): Deleted. - page/Crypto.h: - page/Crypto.idl: LayoutTests: Move tests involving crypto.webkitSubtle from crypto/subtle to crypto/webkitSubtle. - crypto/webkitSubtle/gc-2-expected.txt: Renamed from LayoutTests/crypto/subtle/gc-2-expected.txt. - crypto/webkitSubtle/gc-2.html: Renamed from LayoutTests/crypto/subtle/gc-2.html. - crypto/webkitSubtle/gc-3-expected.txt: Renamed from LayoutTests/crypto/subtle/gc-3-expected.txt. - crypto/webkitSubtle/gc-3.html: Renamed from LayoutTests/crypto/subtle/gc-3.html. - crypto/webkitSubtle/gc-expected.txt: Renamed from LayoutTests/crypto/subtle/gc-expected.txt. - crypto/webkitSubtle/gc.html: Renamed from LayoutTests/crypto/subtle/gc.html. - platform/efl/TestExpectations: - platform/gtk/TestExpectations: - platform/ios-simulator-wk1/TestExpectations: - platform/win/TestExpectations: - 12:57 PM Changeset in webkit [203695] by - 3 edits2 moves1 add1 delete in trunk Allow LocalStorage by default for file URLs. Reviewed by Brent Fulgham. Source/WebCore: Test: storage/domstorage/localstorage/file-can-access.html - page/SecurityOrigin.cpp: (WebCore::SecurityOrigin::canAccessStorage): Remove the m_universalAccess check for local URLs. LayoutTests: - storage/domstorage/localstorage/blocked-file-access-expected.txt: Removed. - storage/domstorage/localstorage/file-can-access-expected.txt: Added. - storage/domstorage/localstorage/file-can-access.html: Renamed from LayoutTests/storage/domstorage/localstorage/blocked-file-access.html. - storage/domstorage/localstorage/resources/unblocked-example.html: Renamed from LayoutTests/storage/domstorage/localstorage/resources/blocked-example.html. - 12:15 PM Changeset in webkit [203694] by - 3 edits2 adds in trunk AX: AccessibilityRenderObject is adding duplicated children when CSS first-letter is being used. Reviewed by Chris Fleizach. Source/WebCore: We were adding the same text node twice if CSS first-letter selector was being used. Added a check for the inline continuation so that we only add it once. Test: accessibility/mac/css-first-letter-children.html - accessibility/AccessibilityRenderObject.cpp: (WebCore::firstChildConsideringContinuation): LayoutTests: - accessibility/mac/css-first-letter-children-expected.txt: Added. - accessibility/mac/css-first-letter-children.html: Added. - 12:04 PM Changeset in webkit [203693] by - 17 edits in trunk/Source/JavaScriptCore op_mul/ArithMul(Untyped,Untyped) should be an IC Reviewed by Mark Lam. This patch makes Mul a type based IC in much the same way that we made Add a type-based IC. I implemented Mul in the same way. I abstracted the implementation of the Add IC in the various JITs to allow for it to work over arbitrary IC snippets. This will make adding Div/Sub/Pow in the future easy. This patch also adds a new boolean argument to the various snippet generateFastPath() methods to indicate if we should emit result profiling. I added this because we want this profiling to be emitted for Mul in the baseline, but not in the DFG. We used to indicate this through passing in a nullptr for the ArithProfile, but we no longer do that in the upper JIT tiers. So we are passing an explicit request from the JIT tier about whether or not it's worth it for the IC to emit profiling. We now emit much less code for Mul. Here is some data on the average Mul snippet/IC size: | JetStream | Unity 3D | Old | ~280 bytes | ~280 bytes | New | 210 bytes | 185 bytes | ------------------------------------ - bytecode/CodeBlock.cpp: (JSC::CodeBlock::addJITAddIC): (JSC::CodeBlock::addJITMulIC): (JSC::CodeBlock::findStubInfo): - bytecode/CodeBlock.h: (JSC::CodeBlock::stubInfoBegin): (JSC::CodeBlock::stubInfoEnd): - dfg/DFGSpeculativeJIT.cpp: (JSC::DFG::GPRTemporary::adopt): (JSC::DFG::FPRTemporary::FPRTemporary): (JSC::DFG::SpeculativeJIT::compileValueAdd): (JSC::DFG::SpeculativeJIT::compileMathIC): (JSC::DFG::SpeculativeJIT::compileArithMul): - dfg/DFGSpeculativeJIT.h: (JSC::DFG::SpeculativeJIT::callOperation): (JSC::DFG::GPRTemporary::GPRTemporary): (JSC::DFG::GPRTemporary::operator=): (JSC::DFG::FPRTemporary::~FPRTemporary): (JSC::DFG::FPRTemporary::fpr): - ftl/FTLLowerDFGToB3.cpp: (JSC::FTL::DFG::LowerDFGToB3::compileToThis): (JSC::FTL::DFG::LowerDFGToB3::compileValueAdd): (JSC::FTL::DFG::LowerDFGToB3::compileMathIC): (JSC::FTL::DFG::LowerDFGToB3::compileArithMul): - jit/JIT.h: (JSC::JIT::getSlowCase): -::isLeftOperandValidConstant): (JSC::JITAddGenerator::isRightOperandValidConstant): - jit/JITArithmetic.cpp: (JSC::JIT::emit_op_add): (JSC::JIT::emitSlow_op_add): (JSC::JIT::emitMathICFast): (JSC::JIT::emitMathICSlow): (JSC::JIT::emit_op_mul): (JSC::JIT::emitSlow_op_mul): (JSC::JIT::emit_op_sub): - jit/JITInlines.h: (JSC::JIT::callOperation): - jit/JITMathIC.h: (JSC::JITMathIC::slowPathStartLocation): (JSC::JITMathIC::slowPathCallLocation): (JSC::JITMathIC::isLeftOperandValidConstant): (JSC::JITMathIC::isRightOperandValidConstant): (JSC::JITMathIC::generateInline): (JSC::JITMathIC::generateOutOfLine): - jit/JITMathICForwards.h: - jit/JITMulGenerator.cpp: (JSC::JITMulGenerator::generateInline): (JSC::JITMulGenerator::generateFastPath): - jit/JITMulGenerator.h: (JSC::JITMulGenerator::JITMulGenerator): (JSC::JITMulGenerator::isLeftOperandValidConstant): (JSC::JITMulGenerator::isRightOperandValidConstant): (JSC::JITMulGenerator::didEmitFastPath): Deleted. (JSC::JITMulGenerator::endJumpList): Deleted. (JSC::JITMulGenerator::slowPathJumpList): Deleted. - jit/JITOperations.cpp: - jit/JITOperations.h: - 11:45 AM Changeset in webkit [203692] by - 2 edits in trunk/Source/WebKit2 Fix assertion. - NetworkProcess/cache/NetworkCacheCodersCocoa.cpp: (WebKit::NetworkCache::encodeCertificateChain): - 11:01 AM Changeset in webkit [203691] by - 5 edits2 adds in trunk/Source/WebKit2 Split platform specific parts of NetworkCacheCoders.cpp into separate files Patch by Sam Weinig <[email protected]> on 2016-07-25 Reviewed by Alex Christensen. - NetworkProcess/cache/NetworkCacheCoders.cpp: (WebKit::NetworkCache::Coder<WebCore::CertificateInfo>::encode): Deleted. (WebKit::NetworkCache::Coder<WebCore::CertificateInfo>::decode): Deleted. - NetworkProcess/cache/NetworkCacheCodersCocoa.cpp: Copied from Source/WebKit2/NetworkProcess/cache/NetworkCacheCoders.cpp. (WebKit::NetworkCache::encodeCFData): Moved. (WebKit::NetworkCache::decodeCFData): Moved. (WebKit::NetworkCache::encodeSecTrustRef): Moved. (WebKit::NetworkCache::decodeSecTrustRef): Moved. (WebKit::NetworkCache::encodeCertificateChain): Moved. (WebKit::NetworkCache::decodeCertificateChain): Moved. (WebKit::NetworkCache::Coder<WebCore::CertificateInfo>::encode): Moved. (WebKit::NetworkCache::Coder<WebCore::CertificateInfo>::decode): Moved. - NetworkProcess/cache/NetworkCacheCodersSoup.cpp: Copied from Source/WebKit2/NetworkProcess/cache/NetworkCacheCoders.cpp. (WebKit::NetworkCache::Coder<WebCore::CertificateInfo>::encode): Moved. (WebKit::NetworkCache::Coder<WebCore::CertificateInfo>::decode): Moved. - PlatformGTK.cmake: - PlatformMac.cmake: - WebKit2.xcodeproj/project.pbxproj: Add new files. - 10:49 AM Changeset in webkit [203690] by - 7 edits3 adds in trunk Media controls on apple.com don't disappear when movie finishes playing <rdar://problem/26668526> Reviewed by Darin Adler. Source/WebCore: When a video ends, it should cause media controls to hide. While current logic mostly accounts for this, it does not account for programmatic seeks causing the video to lose its 'ended' status before querying for whether or not to show media controls. Three new API tests: large-video-seek-after-ending.html large-video-hides-controls-after-seek-to-end.html large-video-seek-to-beginning-and-play-after-ending.html - html/HTMLMediaElement.cpp: (WebCore::HTMLMediaElement::mediaPlayerTimeChanged): (WebCore::HTMLMediaElement::setPlaying): - html/MediaElementSession.cpp: (WebCore::MediaElementSession::canControlControlsManager): - html/MediaElementSession.h: Tools: Adds new API tests. Please see WebCore ChangeLog for more details. - TestWebKitAPI/TestWebKitAPI.xcodeproj/project.pbxproj: - TestWebKitAPI/Tests/WebKit2Cocoa/VideoControlsManager.mm: (-[MediaPlaybackMessageHandler initWithWKWebView:finalMessageString:]): (-[MediaPlaybackMessageHandler userContentController:didReceiveScriptMessage:]): (TestWebKitAPI::TEST): (-[DidPlayMessageHandler initWithWKWebView:]): Deleted. (-[DidPlayMessageHandler userContentController:didReceiveScriptMessage:]): Deleted. - TestWebKitAPI/Tests/WebKit2Cocoa/large-video-hides-controls-after-seek-to-end.html: Added. - TestWebKitAPI/Tests/WebKit2Cocoa/large-video-seek-after-ending.html: Added. - TestWebKitAPI/Tests/WebKit2Cocoa/large-video-seek-to-beginning-and-play-after-ending.html: Added. - 10:21 AM Changeset in webkit [203689] by - 5 edits in branches/safari-602-branch/Source Versioning. - 10:13 AM Changeset in webkit [203688] by - 9 edits2 copies in trunk/Source/WebCore Introduce a MathMLOperatorElement class Patch by Frederic Wang <[email protected]> on 2016-07-25 Reviewed by Darin Adler. No new tests, rendering is unchaned. - CMakeLists.txt: Add MathMLOperatorElement to the build file. - WebCore.xcodeproj/project.pbxproj: Ditto. - mathml/MathMLAllInOne.cpp: Ditto. - mathml/MathMLOperatorElement.cpp: New DOM class for <mo> element. (WebCore::MathMLOperatorElement::MathMLOperatorElement): (WebCore::MathMLOperatorElement::create): (WebCore::MathMLOperatorElement::parseAttribute): Handle mo attributes. (WebCore::MathMLOperatorElement::createElementRenderer): Create RenderMathMLOperator. - mathml/MathMLOperatorElement.h: Declare a class deriving from MathMLTextElement. - mathml/MathMLTextElement.cpp: Remove all the RenderMathMLOperator parts. (WebCore::MathMLTextElement::MathMLTextElement): Remove inline keyword so that the class can be overriden. (WebCore::MathMLTextElement::parseAttribute): Remove code handled in MathMLOperatorElement. (WebCore::MathMLTextElement::createElementRenderer): Ditto. - mathml/MathMLTextElement.h: Make class and members overridable. - mathml/mathtags.in: Map mo to MathMLOperatorElement. - rendering/mathml/RenderMathMLOperator.cpp: (WebCore::RenderMathMLOperator::RenderMathMLOperator): Make the constructor take a MathMLOperatorElement. - rendering/mathml/RenderMathMLOperator.h: Ditto. - 9:57 AM Changeset in webkit [203687] by - 4 edits in trunk/Source/WebKit2 [iOS] Make sure we call the ProcessAssertion invalidation handler on the main thread <rdar://problem/27399998> Reviewed by Darin Adler. Based on crash traces, it appears BKSProcessAssertion is calling our invalidation handler on a background thread. This was not anticipated and therefore, this would lead to thread safety issues and crashes. We now make sure to call our invalidation handler on the main thread. We also use a WeakPtr to ensure that the ProcessAssertion is still alive once on the main thread and before calling the invalidation handler. - UIProcess/ProcessAssertion.cpp: (WebKit::ProcessAssertion::ProcessAssertion): - UIProcess/ProcessAssertion.h: (WebKit::ProcessAssertion::ProcessAssertion): (WebKit::ProcessAssertion::createWeakPtr): - UIProcess/ios/ProcessAssertionIOS.mm: (WebKit::ProcessAssertion::ProcessAssertion): (WebKit::ProcessAssertion::markAsInvalidated): - 9:37 AM Changeset in webkit [203686] by - 4 edits in trunk/Source Speed up make process slightly by improving "list of files" idiom Reviewed by Mark Lam. - DerivedSources.make: Change rules that build lists of files to only run when DerivedSources.make has been modified since the last time they were run. Since the list of files are inside this file, this is safe, and this is faster than always comparing and regenerating the file containing the list of files each time. - 9:30 AM Changeset in webkit [203685] by - 2 edits in trunk/Tools Unreviewed, fix test-webkitpy after r203674. - Scripts/webkitpy/port/linux_get_crash_log_unittest.py: (GDBCrashLogGeneratorTest.test_generate_crash_log): - 9:03 AM Changeset in webkit [203684] by - 4 edits2 adds in trunk The web process hangs when computing elements-based snap points for a container with large max scroll offset <rdar://problem/25353661> Reviewed by Simon Fraser. Source/WebCore: Fixes a bug in the computation of axis snap points. The ScrollSnapPoints object, which tracks snap points along a particular axis, has two flags, hasRepeat and usesElements. For elements- based snapping, both flags would be turned on, since StyleBuilderConverter::convertScrollSnapPoints short-circuits for elements-based snapping and does not default usesRepeat to false. To address this, we make ScrollSnapPoints not repeat(100%) by default. Test: css3/scroll-snap/scroll-snap-elements-container-larger-than-children.html - css/StyleBuilderConverter.h: (WebCore::StyleBuilderConverter::convertScrollSnapPoints): Deleted. - rendering/style/StyleScrollSnapPoints.cpp: (WebCore::ScrollSnapPoints::ScrollSnapPoints): LayoutTests: Adds a scroll snap offset computation test case that would have previously caused the web process to hang before this patch. - css3/scroll-snap/scroll-snap-elements-container-larger-than-children-expected.txt: Added. - css3/scroll-snap/scroll-snap-elements-container-larger-than-children.html: Added. - 8:02 AM Changeset in webkit [203683] by - 2 edits in trunk/Source/WebCore REGRESSION(r200931): Invalid cast in highestAncestorToWrapMarkup() Reviewed by Michael Catanzaro. Since r200931 the result of enclosingNodeOfType() in highestAncestorToWrapMarkup() is downcasted to Element, but the result of enclosingNodeOfType() can be a Node that is not an Element, in this case is Text. The cast is not needed at all since that node is passed to editingIgnoresContent() and selectionFromContentsOfNode() and both receive a Node not an Element. - editing/markup.cpp: (WebCore::highestAncestorToWrapMarkup): Remove invalid cast. - 8:01 AM Changeset in webkit [203682] by - 2 edits in trunk/Source/WebCore [Coordinated Graphics] ASSERTION FAILED: m_coordinator->isFlushingLayerChanges() in fast/repaint/animation-after-layer-scroll.html Reviewed by Michael Catanzaro. So, we fixed an assertion in r203663, but now is hitting the next one. As explained in bug #160142, flush compositing state can be triggered in tests by RenderLayerCompositor::layerTreeAsText(), without the coordinator even noticing it, so the assert can be just removed. - platform/graphics/texmap/coordinated/CoordinatedGraphicsLayer.cpp: (WebCore::CoordinatedGraphicsLayer::flushCompositingStateForThisLayerOnly): Remove incorrect assert. - 7:33 AM Changeset in webkit [203681] by - 3 edits in trunk/Source/WebCore EllipsisBox ctor's isVertical parameter should read isHorizontal. Reviewed by Andreas Kling. It indicates whether the ellipsis box is horizontal. (both the callsites and the parent class use isHorizontal) No change in functionality. - rendering/EllipsisBox.cpp: (WebCore::EllipsisBox::EllipsisBox): - rendering/EllipsisBox.h: - 5:25 AM MathML/Early_2016_Refactoring edited by - (diff) - 1:57 AM Changeset in webkit [203680] by - 21 edits1 copy2 moves5 adds in trunk [css-grid] Implement repeat(auto-fit) Reviewed by Darin Adler. Source/WebCore: The auto-fit keyword works exactly as the already implemented auto-fill except that all empty tracks collapse (became 0px). Absolutely positioned items do not participate on the layout of the grid so they are not considered (a grid with only absolutely positioned items is considered an empty grid). Whenever a track collapses the gutters on either side do also collapse. When a collapsed track's gutters collapse, they coincide exactly. If one side of a collapsed track does not have a gutter then collapsing its gutters results in no gutter on either "side" of the collapsed track. In practice this means that is not possible to know the gap between 2 consecutive auto repeat tracks without examining some others whenever there are collapsed tracks. Uncommented the auto-fit cases from Mozilla tests. They have to be adapted as the reftest machinery requires all the content to be rendered in the original 800x600 viewport. Tests: fast/css-grid-layout/grid-auto-fit-columns.html fast/css-grid-layout/grid-auto-fit-rows.html fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-005-part-1.html fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-005-part-2.html - css/CSSComputedStyleDeclaration.cpp: (WebCore::valueForGridTrackList): Use the newly added trackSizesForComputedStyle(). - rendering/RenderGrid.cpp: (WebCore::RenderGrid::computeTrackBasedLogicalHeight): (WebCore::RenderGrid::computeTrackSizesForDirection): (WebCore::RenderGrid::isEmptyAutoRepeatTrack): (WebCore::RenderGrid::gridGapForDirection): Returns the gap directly from the style. (WebCore::RenderGrid::guttersSize): Computes the gap between a startLine and an endLine. This method may need to inspect some other surrounding tracks to compute the gap. ): Returns a Vector with the auto repeat tracks that are going to be collapsed because they're empty. (WebCore::RenderGrid::placeItemsOnGrid): (WebCore::RenderGrid::trackSizesForComputedStyle): Used by ComputedStyle logic to print the size of tracks. Added in order to hide the actual contents of m_columnPositions and m_rowPositions to the outter world. (WebCore::RenderGrid::offsetAndBreadthForPositionedChild): (WebCore::RenderGrid::gridAreaBreadthForChild): (WebCore::RenderGrid::populateGridPositionsForDirection): Added some extra code to compute gaps as they cannot be directly added between tracks in case of having collapsed tracks. (WebCore::RenderGrid::columnAxisOffsetForChild): (WebCore::RenderGrid::rowAxisOffsetForChild): (WebCore::RenderGrid::offsetBetweenTracks): Deleted. - rendering/RenderGrid.h: Made some API private. Added new required methods/attributes. - css/CSSComputedStyleDeclaration.cpp: (WebCore::valueForGridTrackList): - rendering/RenderGrid.cpp: (WebCore::RenderGrid::computeTrackBasedLogicalHeight): (WebCore::RenderGrid::computeTrackSizesForDirection): (WebCore::RenderGrid::hasAutoRepeatEmptyTracks): (WebCore::RenderGrid::isEmptyAutoRepeatTrack): (WebCore::RenderGrid::gridGapForDirection): (WebCore::RenderGrid::guttersSize): ): (WebCore::RenderGrid::placeItemsOnGrid): (WebCore::RenderGrid::trackSizesForComputedStyle): (WebCore::RenderGrid::offsetAndBreadthForPositionedChild): (WebCore::RenderGrid::assumedRowsSizeForOrthogonalChild): (WebCore::RenderGrid::gridAreaBreadthForChild): (WebCore::RenderGrid::populateGridPositionsForDirection): (WebCore::RenderGrid::columnAxisOffsetForChild): (WebCore::RenderGrid::rowAxisOffsetForChild): (WebCore::RenderGrid::offsetBetweenTracks): Deleted. - rendering/RenderGrid.h: LayoutTests: Uncommented the auto-fit cases. Split the Mozilla's 005 test in two because it was not possible to fit all the content in a viewport without scrollbars. - fast/css-grid-layout/grid-auto-fit-columns-expected.txt: Added. - fast/css-grid-layout/grid-auto-fit-columns.html: Added. - fast/css-grid-layout/grid-auto-fit-rows-expected.txt: Added. - fast/css-grid-layout/grid-auto-fit-rows.html: Added. - fast/css-grid-layout/grid-element-auto-repeat-get-set-expected.txt: - fast/css-grid-layout/grid-element-auto-repeat-get-set.html: - fast/css-grid-layout/grid-only-abspos-item-computed-style-crash-expected.txt: - fast/css-grid-layout/grid-only-abspos-item-computed-style-crash.html: - fast/css-grid-layout/grid-positioned-items-padding-expected.txt: - fast/css-grid-layout/grid-positioned-items-padding.html: - fast/css-grid-layout/grid-template-columns-rows-computed-style-gaps-content-alignment-expected.txt: - fast/css-grid-layout/grid-template-columns-rows-computed-style-gaps-content-alignment.html: - fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-001-expected.html: - fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-001.html: - fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-002-expected.html: - fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-002.html: - fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-003-expected.html: - fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-003.html: - fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-004-expected.html: - fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-004.html: - fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-005-part-1-expected.html: Renamed from LayoutTests/fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-005-expected.html. - fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-005-part-1.html: Copied from LayoutTests/fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-005.html. - fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-005-part-2-expected.html: Added. - fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-005-part-2.html: Renamed from LayoutTests/fast/css-grid-layout/mozilla/grid-repeat-auto-fill-fit-005.html. - 12:21 AM MathML/Early_2016_Refactoring edited by - (diff) - 12:18 AM Changeset in webkit [203679] by - 12 edits in trunk/Source/WebCore Move parsing of display, displaystyle and mathvariant attributes into MathML element classes Patch by Frederic Wang <[email protected]> on 2016-07-24 Reviewed by Brent Fulgham. No new tests, already covered by existing tests. - mathml/MathMLElement.cpp: (WebCore::MathMLElement::parseMathVariantAttribute): Move helper function to parse the mathvariant attribute. (WebCore::MathMLElement::getSpecifiedDisplayStyle): Helper function to set the displaystyle value from the attribute specified on the MathML element. (WebCore::MathMLElement::getSpecifiedMathVariant): Helper function to set the mathvariant value from the attribute specified on the MathML element. - mathml/MathMLElement.h: Move the enum for mathvariant values and declare new members. (WebCore::MathMLElement::acceptsDisplayStyleAttribute): Indicate whether the element accepts displaystyle attribute (false for most of them). (WebCore::MathMLElement::acceptsMathVariantAttribute): Indicate whether the element accepts mathvariant attribute (false for most of them). - mathml/MathMLInlineContainerElement.cpp: (WebCore::MathMLInlineContainerElement::acceptsDisplayStyleAttribute): Add mstyle and mtable to the list of elements accepting the displaystyle attribute. (WebCore::MathMLInlineContainerElement::acceptsMathVariantAttribute): Add mstyle to the list of elements accepting the mathvariant attribute. (WebCore::MathMLInlineContainerElement::parseAttribute): Mark displaystyle and mathvariant dirty if necessary. Also use the new accepts*Attribute function. - mathml/MathMLInlineContainerElement.h: Declare overridden accepts*Attribute members. - mathml/MathMLMathElement.cpp: (WebCore::MathMLMathElement::getSpecifiedDisplayStyle): Override acceptsDisplayStyleAttribute so that the display attribute is also used to set the default value if the displaystyle attribute is absent. (WebCore::MathMLMathElement::parseAttribute): Mark displaystyle and mathvariant dirty if necessary. We directly MathMLElement::parseAttribute to avoid duplicate work. - mathml/MathMLMathElement.h: Add the math tag to the list of elements accepting the displaystyle and mathvariant attributes. Declare overridden getSpecifiedDisplayStyle. - mathml/MathMLTextElement.cpp: (WebCore::MathMLTextElement::parseAttribute): Mark mathvariant as dirty. - mathml/MathMLTextElement.h: Add token elements to the list of elements accepting the mathvariant attribute. - rendering/mathml/MathMLStyle.cpp: (WebCore::MathMLStyle::updateStyleIfNeeded): Use the new MathMLElement::MathVariant enum. (WebCore::MathMLStyle::resolveMathMLStyle): We no longer parse the display value to initialize the default value on the math tag, because this is handled in getSpecifiedDisplayStyle. In general, we also just call getSpecifiedDisplayStyle and getSpecifiedMathVariant on the MathML elements instead of parsing the displaystyle and mathvariant attributes here. (WebCore::MathMLStyle::parseMathVariant): Deleted. This is moved into MathMLElement. - rendering/mathml/MathMLStyle.h: Use the new MathMLElement::MathVariant enum. - rendering/mathml/RenderMathMLToken.cpp: Ditto. (WebCore::mathVariant): Ditto. (WebCore::RenderMathMLToken::updateMathVariantGlyph): Ditto. - 12:10 AM Changeset in webkit [203678] by - 3 edits in trunk/Source/WebCore Unreviewed. Remove unneeded header includes from CoordinatedGraphicsLayer. Not only thjey are not needed, they are a layer violation, CoordinatedGraphicsLayer shouldn't know anything about Page, Frame and FrameView. - platform/graphics/texmap/coordinated/CoordinatedGraphicsLayer.cpp: - platform/graphics/texmap/coordinated/CoordinatedGraphicsLayer.h: Jul 24, 2016: - 11:34 PM Changeset in webkit [203677] by - 1 edit1 add in trunk/Tools Unreviewed, forgot to commit this file in r203674. - Scripts/process-linux-coredump: Added. (main): - 11:33 PM Changeset in webkit [203676] by - 3 edits in trunk/Source/WebKit2 [GTK][Threaded Compositor] ASSERTION FAILED: !!handle !!m_nativeSurfaceHandle with several layout tests Reviewed by Michael Catanzaro. We have a message to set the native surface handle and another one for destroying it, the former is a normal message while the latter is sync. This assertion happens if the web view is realized before the web process is launched. This is the sequence: 1.- DrawingAreaProxyImpl sends SetNativeSurfaceHandleForCompositing message to the web process, since the process hasn't been launched yet, the message is queued. 2.- Web process is launched and queued messages are now sent to the web process. 3.- The page is closed right after the web process is launched, and DrawingAreaProxyImpl sends DestroyNativeSurfaceHandleForCompositing to the web process. 4.- The web process processes incoming messages, and DestroyNativeSurfaceHandleForCompositing is processed before SetNativeSurfaceHandleForCompositing because it's sync. 5.- The web process processes SetNativeSurfaceHandleForCompositing message. This is not only producing the assertion, it's also setting a handle for a X window already destroyed in the UI process, so this could be producing the X errors we have seen in other tests. So, we need to make sure SetNativeSurfaceHandleForCompositing and DestroyNativeSurfaceHandleForCompositing are handled in order by the web process. We could make SetNativeSurfaceHandleForCompositing sync as well, but sync messages are just ignored when sent before the web process has been launched (only normal messages are queued for obvious reasons). The other option is sending the SetNativeSurfaceHandleForCompositing message with the IPC::DispatchMessageEvenWhenWaitingForSyncReply flag. In this case the message is queued and dispatched on process launch, but it's dispatched before other messages also queued without that flag, like CreateWebPage. Since there's no WebPage the web process doesn't find a valid message receiver for it and it's discarded. We need to ensure the DrawinArea object has been created before sending the SetNativeSurfaceHandleForCompositing with the PC::DispatchMessageEvenWhenWaitingForSyncReply flag. - UIProcess/DrawingAreaProxyImpl.cpp: (WebKit::DrawingAreaProxyImpl::didUpdateBackingStoreState): If we have received the first update and there's a SetNativeSurfaceHandleForCompositing message pending, send it. (WebKit::DrawingAreaProxyImpl::setNativeSurfaceHandleForCompositing): Do not send the message before the first update is received. (WebKit::DrawingAreaProxyImpl::destroyNativeSurfaceHandleForCompositing): If there was a SetNativeSurfaceHandleForCompositing message pending, just ignore this destroy since the web process never received the handle. - UIProcess/DrawingAreaProxyImpl.h: - 11:28 PM Changeset in webkit [203675] by - 27 edits3 copies1 delete in trunk [Fetch API] Request should be created with any HeadersInit data Patch by Youenn Fablet <[email protected]> on 2016-07-24. - 11:07 PM Changeset in webkit [203674] by - 16 edits in trunk Improve GDB backtrace generation for GTK/EFL Reviewed by Carlos Garcia Campos. Source/WebKit2: Move the Web, Database and Network ProcessIdentifier functions to the cross-platform WKContext and WKPage implementations. - UIProcess/API/C/WKContext.cpp: (WKContextGetNetworkProcessIdentifier): (WKContextGetDatabaseProcessIdentifier): - UIProcess/API/C/WKContextPrivate.h: - UIProcess/API/C/WKPage.cpp: (WKPageGetProcessIdentifier): - UIProcess/API/C/WKPagePrivate.h: - UIProcess/API/C/mac/WKContextPrivateMac.h: - UIProcess/API/C/mac/WKContextPrivateMac.mm: - UIProcess/API/C/mac/WKPagePrivateMac.h: - UIProcess/API/C/mac/WKPagePrivateMac.mm: Tools: The PID of the crashed process is now correctly supplied to the crash log reporter. The kernel core_pattern needs to be updated after this change to something like: echo "|/home/phil/WebKit/Tools/Scripts/process-linux-coredump /tmp/core-pid_%p.dump" > /proc/sys/kernel/core_pattern - Scripts/process-linux-coredump: Added. (main): Minimal python script reading coredump data on stdin and writing it to a file in /tmp/. - Scripts/webkitpy/port/efl.py: (EflPort._get_crash_log): Supply path of the process-linux-coredump script. - Scripts/webkitpy/port/gtk.py: (GtkPort._get_crash_log): Ditto. - Scripts/webkitpy/port/linux_get_crash_log.py: (GDBCrashLogGenerator.init): New argument for supplying the path of a coredump processor script. (GDBCrashLogGenerator.generate_crash_log): Update error message, the core_pattern should now be set to pipe coredumps to a script. (GDBCrashLogGenerator): Deleted. - Scripts/webkitpy/port/linux_get_crash_log_unittest.py: (GDBCrashLogGeneratorTest.test_generate_crash_log): Update test expectations. - WebKitTestRunner/TestController.cpp: (WTR::TestController::networkProcessDidCrash): Supply PID of crash process. (WTR::TestController::databaseProcessDidCrash): Ditto. (WTR::TestController::processDidCrash): Ditto. - 8:42 PM Changeset in webkit [203673] by - 2 edits in trunk/Source/WebInspectorUI Web Inspector: Filtering is broken in the Overview timeline view <rdar://problem/27517481> Reviewed by Joseph Pecoraro. - UserInterface/Views/SourceCodeTimelineTimelineDataGridNode.js: (WebInspector.SourceCodeTimelineTimelineDataGridNode.prototype.filterableDataForColumn): Non-resource nodes should be filtered based on their display name. - 6:14 PM Changeset in webkit [203672] by - 3 edits in branches/safari-602-branch/LayoutTests Merge r203665. rdar://problem/27453479 - 5:48 PM Changeset in webkit [203671] by - 3 edits in trunk/Source/WebKit2 Add specialization for encoding/decoding WebCore::CertificateInfos in the Network Cache <rdar://problem/27409315> Reviewed by Chris Dumez. - NetworkProcess/cache/NetworkCacheCoders.cpp: (WebKit::NetworkCache::encodeCFData): (WebKit::NetworkCache::decodeCFData): (WebKit::NetworkCache::encodeSecTrustRef): (WebKit::NetworkCache::decodeSecTrustRef): (WebKit::NetworkCache::encodeCertificateChain): (WebKit::NetworkCache::decodeCertificateChain): (WebKit::NetworkCache::Coder<WebCore::CertificateInfo>::encode): (WebKit::NetworkCache::Coder<WebCore::CertificateInfo>::decode): - NetworkProcess/cache/NetworkCacheStorage.h: (WebKit::NetworkCache::Storage::version): Bump the version and lastStableVersion to account for the format change. - 1:33 PM Changeset in webkit [203670] by - 33 edits2 adds in trunk B3 should support multiple entrypoints Reviewed by Saam Barati. Source/JavaScriptCore: This teaches B3 how to compile procedures with multiple entrypoints in the best way ever. Multiple entrypoints are useful. We could use them to reduce the cost of compiling OSR entrypoints. We could use them to implement better try/catch. Multiple entrypoints are hard to support. All of the code that assumed that the root block is the entrypoint would have to be changed. Transformations like moveConstants() would have to do crazy things if the existence of multiple entrypoints prevented it from finding a single common dominator. Therefore, we want to add multiple entrypoints without actually teaching the compiler that there is such a thing. That's sort of what this change does. This adds a new opcode to both B3 and Air called EntrySwitch. It's a terminal that takes one or more successors and no value children. The number of successors must match Procedure::numEntrypoints(), which could be arbitrarily large. The semantics of EntrySwitch are: - Each of the entrypoints sets a hidden Entry variable to that entrypoint's index and jumps to the procedure's root block. - An EntrySwitch is a switch statement over this hidden Entry variable. The way that we actually implement this is that Air has a very late phase - after all register and stack layout - that clones all code where the Entry variable is live; i.e all code in the closure over predecessors of all blocks that do EntrySwitch. Usually, you would use this by creating an EntrySwitch in the root block, but you don't have to do that. Just remember that the code before EntrySwitch gets cloned for each entrypoint. We allow cloning of an arbitrarily large amount of code because restricting it, and so restricing the placement of EntrySwitches, would be unelegant. It would be hard to preserve this invariant. For example we wouldn't be able to lower any value before an EntrySwitch to a control flow diamond. This patch gives us an easy-to-use way to use B3 to compile code with multiple entrypoints. Inside the compiler, only code that runs very late in Air has to know about this feature. We get the best of both worlds! Also, I finally got rid of the requirement that you explicitly cast BasicBlock* to FrequentedBlock. I can no longer remember why I thought that was a good idea. Removing it doesn't cause any problems and it makes code easier to write. - CMakeLists.txt: - JavaScriptCore.xcodeproj/project.pbxproj: - b3/B3BasicBlockUtils.h: (JSC::B3::updatePredecessorsAfter): (JSC::B3::clearPredecessors): (JSC::B3::recomputePredecessors): - b3/B3FrequencyClass.h: (JSC::B3::maxFrequency): - b3/B3Generate.h: - b3/B3LowerToAir.cpp: (JSC::B3::Air::LowerToAir::lower): - b3/B3MoveConstants.cpp: - b3/B3Opcode.cpp: (WTF::printInternal): - b3/B3Opcode.h: - b3/B3Procedure.cpp: (JSC::B3::Procedure::isFastConstant): (JSC::B3::Procedure::entrypointLabel): (JSC::B3::Procedure::addDataSection): - b3/B3Procedure.h: (JSC::B3::Procedure::numEntrypoints): (JSC::B3::Procedure::setNumEntrypoints): (JSC::B3::Procedure::setLastPhaseName): - b3/B3Validate.cpp: - b3/B3Value.cpp: (JSC::B3::Value::effects): (JSC::B3::Value::typeFor): - b3/B3Value.h: - b3/air/AirCode.cpp: (JSC::B3::Air::Code::cCallSpecial): (JSC::B3::Air::Code::isEntrypoint): (JSC::B3::Air::Code::resetReachability): (JSC::B3::Air::Code::dump): - b3/air/AirCode.h: (JSC::B3::Air::Code::setFrameSize): (JSC::B3::Air::Code::numEntrypoints): (JSC::B3::Air::Code::entrypoints): (JSC::B3::Air::Code::entrypoint): (JSC::B3::Air::Code::setEntrypoints): (JSC::B3::Air::Code::entrypointLabel): (JSC::B3::Air::Code::setEntrypointLabels): (JSC::B3::Air::Code::calleeSaveRegisters): - b3/air/AirCustom.h: (JSC::B3::Air::PatchCustom::isTerminal): (JSC::B3::Air::PatchCustom::hasNonArgEffects): (JSC::B3::Air::PatchCustom::hasNonArgNonControlEffects): (JSC::B3::Air::PatchCustom::generate): (JSC::B3::Air::CommonCustomBase::hasNonArgEffects): (JSC::B3::Air::CCallCustom::forEachArg): (JSC::B3::Air::ColdCCallCustom::forEachArg): (JSC::B3::Air::ShuffleCustom::forEachArg): (JSC::B3::Air::EntrySwitchCustom::forEachArg): (JSC::B3::Air::EntrySwitchCustom::isValidFormStatic): (JSC::B3::Air::EntrySwitchCustom::isValidForm): (JSC::B3::Air::EntrySwitchCustom::admitsStack): (JSC::B3::Air::EntrySwitchCustom::isTerminal): (JSC::B3::Air::EntrySwitchCustom::hasNonArgNonControlEffects): (JSC::B3::Air::EntrySwitchCustom::generate): - b3/air/AirGenerate.cpp: (JSC::B3::Air::prepareForGeneration): (JSC::B3::Air::generate): - b3/air/AirLowerEntrySwitch.cpp: Added. (JSC::B3::Air::lowerEntrySwitch): - b3/air/AirLowerEntrySwitch.h: Added. - b3/air/AirOpcode.opcodes: - b3/air/AirOptimizeBlockOrder.cpp: (JSC::B3::Air::blocksInOptimizedOrder): - b3/air/AirSpecial.cpp: (JSC::B3::Air::Special::isTerminal): (JSC::B3::Air::Special::hasNonArgEffects): (JSC::B3::Air::Special::hasNonArgNonControlEffects): - b3/air/AirSpecial.h: - b3/air/AirValidate.cpp: - b3/air/opcode_generator.rb: - b3/testb3.cpp: Source/WTF: - wtf/GraphNodeWorklist.h: Expose some handy functionality. (WTF::GraphNodeWorklist::pop): (WTF::GraphNodeWorklist::saw): (WTF::GraphNodeWorklist::seen): - wtf/VectorTraits.h: Fix a bug! Otherwise filling a vector of byte-sized enum classes doesn't work. Websites/webkit.org: Update some statements about ControlValue (which doesn't exist anymore) and add a blurb about EntrySwitch. - docs/b3/index.html: - docs/b3/intermediate-representation.html: - 12:53 PM Changeset in webkit [203669] by - 3 edits2 adds in trunk AX: Video Controls: Volume cannot be adjusted using VO. Reviewed by Dean Jackson. Source/WebCore: The volume slider in video tag had 0.01 step which caused the screen reader adjusting it slowly. Changed the step to 0.05 and added the aria-valuetext attribute to the slider, so that the value is spoken in percentage. Test: accessibility/mac/video-volume-slider-accessibility.html - Modules/mediacontrols/mediaControlsApple.js: (Controller.prototype.createControls): (Controller.prototype.handleVolumeSliderInput): (Controller.prototype.updateVolume): LayoutTests: - accessibility/mac/video-volume-slider-accessibility-expected.txt: Added. - accessibility/mac/video-volume-slider-accessibility.html: Added. - 11:59 AM Changeset in webkit [203668] by - 3 edits in trunk/Source/JavaScriptCore Unreviewed, fix broken test. I don't know why I goofed this up without seeing it before landing. - b3/air/AirOpcode.opcodes: - b3/testb3.cpp: (JSC::B3::run): - 10:47 AM Changeset in webkit [203667] by - 3 edits2 adds in trunk REGRESSION (r203106): Crash in WebCore::MathMLElement::parseMathMLLength() <> <rdar://problem/27506489> Reviewed by Chris Dumez. Source/WebCore: Test: mathml/mpadded-crash.html - mathml/MathMLElement.cpp: (WebCore::skipLeadingAndTrailingWhitespace): Change to take StringView parameter instead of String to avoid creating a temporary String that's released on return. LayoutTests: - mathml/mpadded-crash-expected.txt: Added. - mathml/mpadded-crash.html: Added. - 10:36 AM Changeset in webkit [203666] by - 11! -::castToType): (JSC::B3::Air::Arg::asNumber): -): (JSC::B3::compileAndRun): (JSC::B3::lowerToAirForTesting): (JSC::B3::testSomeEarlyRegister): (JSC::B3::testBranchBitAndImmFusion): (JSC::B3::zero): (JSC::B3::run): - 10:25 AM Changeset in webkit [203665] by - 3 edits in trunk/LayoutTests Test gardening after r203626. <rdar://problem/27453479> Unreviewed. - platform/ios-simulator/editing/deleting/delete-emoji-expected.txt: - platform/mac-yosemite/editing/deleting/delete-emoji-expected.txt: - 8:53 AM Changeset in webkit [203664] by - 3 edits in trunk/Source/JavaScriptCore Unreviewed, update the exponentiation expression error message Follow up patch for r203499. - parser/Parser.cpp: (JSC::Parser<LexerType>::parseBinaryExpression): - tests/stress/pow-expects-update-expression-on-lhs.js: (throw.new.Error): - 6:08 AM Changeset in webkit [203663] by - 3 edits in trunk/Source/WebCore [Coordinated Graphics] ASSERTION FAILED: !m_flushingLayers in fast/repaint/animation-after-layer-scroll.html Patch by Carlos Garcia Campos <[email protected]> on 2016-07-24 Reviewed by Michael Catanzaro. This only happens in layout tests, because it happens when RenderLayerCompositor::layerTreeAsText() is called. The thing is that CoordinatedGraphicsLayer::flushCompositingState() calls notifyFlushRequired() that checks if the coordinator is flusing layers and if not it calls RenderLayerCompositor::notifyFlushRequired() and returns early. This normally works because the coodinator is the one starting the layer flush, so that when RenderLayerCompositor::flushPendingLayerChanges() is called the coordinator is always flusing layers. But RenderLayerCompositor::layerTreeAsText() calls RenderLayerCompositor::flushPendingLayerChanges() directly, so at that moment the coordinator is not flusing layers, what causes that CoordinatedGraphicsLayer::flushCompositingState() ends up calling RenderLayerCompositor::notifyFlushRequired() that schedules a new flush while flusing layers causing the assertion. CoordinatedGraphicsLayer::flushCompositingState() is always called from CompositingCoordinator::flushPendingLayerChanges() or RenderLayerCompositor::flushPendingLayerChanges() so we never need to call RenderLayerCompositor::notifyFlushRequired() from there. - platform/graphics/texmap/coordinated/CoordinatedGraphicsLayer.cpp: (WebCore::CoordinatedGraphicsLayer::notifyFlushRequired): This is void now since the return value is not checked anywhere. (WebCore::CoordinatedGraphicsLayer::flushCompositingState): Remove the call to notifyFlushRequired(). - platform/graphics/texmap/coordinated/CoordinatedGraphicsLayer.h: - 6:04 AM Changeset in webkit [203662] by - 3 edits in trunk/LayoutTests [GTK] Layout test security/contentSecurityPolicy/plugins-types-allows-quicktime-plugin-replacement.html timing out Unreviewed, skip the tests. - platform/efl/TestExpectations: - platform/gtk/TestExpectations: - 4:10 AM Changeset in webkit [203661] by - 3 edits2 deletes in trunk/Source Adding a new WebCore JavaScript built-in source file does not trigger rebuild of WebCoreJSBuiltins* Reviewed by Youenn Fablet. Source/JavaScriptCore: - make-generated-sources.sh: Removed. Was unused. Source/WebCore: - DerivedSources.make: Added a missing dependency so the rule that builds WebCore_BUILTINS_WRAPPERS kicks in when the list of WebCore_BUILTINS_SOURCES is modified. Also added another missing dependency so that changes to the JavaScript built-ins Python scripts will also trigger WebCore_BUILTINS_WRAPPERS. - make-generated-sources.sh: Removed. Was unused.):
http://trac.webkit.org/timeline?from=2016-07-27T08%3A50%3A15-07%3A00&precision=second
CC-MAIN-2020-34
en
refinedweb