text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Sauce Labs Sauce labs is a cloud provider. It requires a monthly or yearly subscription, but offers free plans for open source projects. Start by signing up for an account at saucelabs.com, then set the following environment variables: - SAUCE_USERNAME - Your sauce labs username. - SAUCE_ACCESS_KEY - Your sauce labs access key. To do this locally, you can create a .env file in your project's root directory: SAUCE_USERNAME={your sauce username} SAUCE_ACCESS_KEY={your sauce access key} Then you can test using code like: import assert from 'assert'; import cabbie from 'cabbie-sync'; // connect to saucelabs, adding {debug: true} makes cabbie log each method call. const driver = cabbie('saucelabs', (); }
https://cabbiejs.org/getting-started/saucelabs/
CC-MAIN-2020-34
en
refinedweb
- Objects in XML: The SOAP Data Model As you saw in Chapter 2, XML has an extremely rich structureand the possible contents of an XML data model, which include mixed content, substitution groups, and many other concepts, are a lot more complex than the data/objects in most modern programming languages. This means that there isn't always an easy way to map any given XML Schema into familiar structures such as classes in Java. The SOAP authors recognized this problem, so (knowing that programmers would like to send Java/C++/VB objects in SOAP envelopes) they introduced two concepts: the SOAP data model and the SOAP encoding. The data model is an abstract representation of data structures such as you might find in Java or C#, and the encoding is a set of rules to map that data model into XML so you can send it in SOAP messages. Object Graphs The SOAP data model g is about representing graphs of nodes, each of which may be connected via directional edges to other nodes. The nodes are values, and the edges are labels. Figure 3.6 shows a simple example: the data model for a Product in SkatesTown's database, which you saw earlier. Figure 3.6 An example SOAP data model In Java, the object representing this structure might look like this: class Product { String description; String sku; double unitPrice; String name; String type; int numInStock; } Nodes may have outgoing edges, in which case they're known as compound values, or only incoming edges, in which case they're simple values. All the nodes around the edge of the example are simple values. The one in the middle is a compound value. When the edges coming out of a compound value node have names, we say the node represents a structure. The edge names (also known as accessors) are the equivalent of field names in Java, each one pointing to another node which contains the value of the field. The node in the middle is our Product reference, and it has an outgoing edge for each field of the structure. When a node has outgoing edges that are only distinguished by position (the first edge, the second edge, and so on), the node represents an array. A given compound value node may represent either a structure or an array, but not both. Sometimes it's important for a data model to refer to the same value more than oncein that case, you'll see a node with more than one incoming edge (see Figure 3.7). These values are called multireference values, or multirefs. Figure 3.7 Multireference values The model in this example shows that someone named Joe has a sister named Cheryl, and they both share a pet named Fido. Because the two pet edges both point at the same node, we know it's exactly the same dog, not two different dogs who happen to share the name Fido. With this simple set of concepts, you can represent most common programming language constructs in languages like C#, JavaScript, Perl, or Java. Of course, the data model isn't very useful until you can read and write it in SOAP messages. The SOAP Encoding When you want to take a SOAP data model and write it out as XML (typically in a SOAP message), you use the SOAP encoding g. Like most things in the Web services world, the SOAP encoding has a URI to identify it, which for SOAP 1.2 is. When serializing XML using the encoding rules, it's strongly recommended that processors use the special encodingStyle attribute (in the SOAP envelope namespace) to indicate that SOAP encoding is in use, by using this URI as the value for the attribute. This attribute can appear on headers or their children, bodies or their children, and any child of the Detail element in a fault. When a processor sees this attribute on an element, it knows that the element and all its children follow the encoding rules. SOAP 1.1 Difference: encodingStyle In SOAP 1.1, the encodingStyle attribute could appear anywhere in the message, including on the SOAP envelope elements (Body, Header, Envelope). In SOAP 1.2, it may only appear in the three places mentioned in the text. The encoding is straightforward: it says when writing out a data model, each outgoing edge becomes an XML element, which contains either a text value (if the edge points to a terminal node) or further subelements (if the edge points to a node which itself has outgoing edges). The earlier product example would look something like this: <product soapenv: <sku>947-TI</sku> <name>Titanium Glider</name> <type>skateboard</type> <desc>Street-style titanium skateboard.</desc> <price>129.00</price> <inStock>36</inStock> </product> If you want to encode a graph of objects that might contain multirefs, you can't write the data in the straightforward way we've been using, since you'll have one of two problems: Either you'll lose the information that two or more encoded nodes are identical, or (in the case of circular references) you'll get into an infinite regress. Here's an example: If the structure from Figure 3.7 included an edge called owner back from the pet to the person, we might see a structure like the one in Figure 3.8. If we tried to encode this with a naïve system that simply followed edges and turned them into elements, we might get something like this: <person soapenv: <name>Joe</name> <pet> <name>Fido</name> <owner> <name>Joe</name> <pet> --uh oh! stack overflow on the way!-- Figure 3.8 An object graph with a loop Luckily the SOAP encoding has a way to deal with this situation: multiref encoding. When you encode an object that you want to refer to elsewhere, you use an ID attribute to give it an anchor. Then, instead of directly encoding the data for a second reference to that object, you can encode a reference to the already-serialized object using the ref attribute. Here's the previous example using multirefs: <person id="1" soapenv: <name>Joe</name> <pet id="2"> <name>Fido</name> <owner ref="#1"/> <!-- refer to the person --> </pet> </person> Much nicer. Notice that in this example you see an id of 2 on Fido, even though nothing in this serialization refers to him. This is a common pattern that saves time on processors while they serialize object graphs. If they only put IDs on objects that were referred to multiple times, they would need to walk the entire graph of objects before writing any XML in order to figure that out. Instead, many serializers always put an ID on any object (any nonsimple value) that might potentially be referenced later. If there is no further reference, then you've serialized an extra few bytesno big deal. If there is, you can notice that the object has been written before and write out a ref attribute instead of reserializing it. SOAP 1.1 Differences: Multirefs The href attribute that was used to point to the data in SOAP 1.1 has changed to ref in SOAP 1.2. Multirefs in SOAP 1.1 must be serialized as independent elements, which means as immediate children of the SOAP:Body element. This means that when you receive a SOAP body, it may have multiref serializations either before or after the real body element (the one you care about). Here's an example: <soap:Envelope xmlns: <soap:Body> <!-- Here is the multiref --> <multiRef id="obj0" soapenc:root="0" xsi:type="myNS:Part" soapenv: <sku>SJ-47</sku> </multiRef> <!-- Here is the method element --> <myMultirefMethod soapenc: <arg href="#obj0"/> </myMultirefMethod> <!-- The multiref could also have appeared here --> </soap:Body> </soap:Envelope> This is the reason for the SOAP 1.1 root attribute (which you can see in the example). Multiref serializations typically have the root attribute set to 0; the real body element has a root="1" attribute, meaning it's the root of the serialization tree of the SOAP data model. When serializing a SOAP message 1.1, most processors place the multiref serializations after the main body element; this makes it much easier for the serialization code to do its work. Each time they encounter a new object to serialize, they automatically encode a forward reference instead (keeping track of which IDs go with which objects), just in case the object was referred to again later in the serialization. Then, after the end of the main body element, they write out all the object serializations in a row. This means that all objects are written as multirefs whenever multirefs are enabled, which can be expensive (especially if there aren't many multiple references). SOAP 1.2 fixes this problem by allowing inline multirefs. When serializing a data model, a SOAP 1.2 engine is allowed to put an ID attribute on an inline serialization, like this: <SOAP:Body> <method> <arg1 id="1" xsi:Foo</arg1> <arg2 href="#1"/> </method> </SOAP:Body> Now, making a serialized object available for multireferencing is as easy as dropping an id attribute on it. Also, this approach removes the need for the root attribute, which is no longer present in SOAP 1.2. Encoding Arrays The XML encoding for an array in the SOAP object model looks like this: <myArray soapenc: <item>Huey</item> <item>Duey</item> <item>Louie</item> </myArray> This represents an array of three strings. The itemType attribute on the array element tells us what kind of things are inside, and the arraySize attribute tells us how many of them to expect. The name of the elements inside the array (item in this example) doesn't matter to SOAP processors, since the items in an array are only distinguishable by position. This means that the ordering of items in the XML encoding is important. The arraySize attribute defaults to "*," a special value indicating an unbounded array (just like [] in Javaan int[] is an unbounded array of ints). Multidimensional arrays are supported by listing each dimension in the arraySize attribute, separated by spaces. So, a 2x2 array has an arraySize of "2 x 2." You can use the special "*" value to make one dimension of a multidimensional array unbounded, but it may only be the first dimension. In other words, arraySize="* 3 4" is OK, but arraySize="3 * 4" isn't. Multidimensional arrays are serialized as a single list of items, in row-major order (across each row and then down). For this two-dimensional array of size 2x2 0 1 Northwest Northeast Southwest Southeast the serialization would look like this: <myArray soapenc: <item>Northwest</item> <item>Northeast</item> <item>Southwest</item> <item>Southeast</item> </myArray> SOAP 1.1 Differences: Arrays One big difference between the SOAP 1.1 and SOAP 1.2 array encodings is that in SOAP 1.1, the dimensionality and the type of the array are conflated into a single value (arrayType), which the processor needs to parse into component pieces. Here are some 1.1 examples: In SOAP 1.2, the itemType attribute contains only the types of the array elements. The dimensions are now in a separate arraySize attribute, and multidimensionality has been simplified. SOAP 1.1 also supports sparse arrays (arrays with missing values, mostly used for certain kinds of database updates) and partially transmitted arrays (arrays that are encoded starting at an offset from the beginning of the array). To support sparse arrays, each item within an array encoding can optionally have a position attribute, which indicates the item's position in the array, counting from zero. Here's an example: <myArray soapenc: <item soapenc:I'm the second element</item> </myArray> This would represent an array that has no first value, the passed string as the second element, and no third element. The same value can be encoded as a partially transmitted array by using the offset attribute, which indicates the index at which the encoded array begins: <myArray soapenc: <item>I'm the second element</item> </myArray> Due to several factors, including not much uptake in usage and interoperability problems when they were used, these complex array encodings were removed from the SOAP 1.2 version. Encoding-Specific Faults SOAP 1.2 defines some fault codes specifically for encoding problems. If you use the encoding (which you probably will if you use the RPC conventions, described in the next section), you might run into the faults described in the following list. These all are subcodes to the code env:Sender, since they all relate to problems with the sender's data serialization. These faults aren't guaranteed to be sentthey're recommended, rather than mandated. Since these faults typically indicate problems with the encoding system in a SOAP toolkit, rather than with user code, you likely won't need to deal with them directly unless you're building a SOAP implementation yourself: MissingIDGenerated when a ref attribute in the received message doesn't correspond to any of the id attributes in the message DuplicateIDGenerated when more than one element in the message has the same id attribute value UntypedValueOptional; indicates that the type of node in the received message couldn't be determined by the receiver
https://www.informit.com/articles/article.aspx?p=327825&seqNum=12
CC-MAIN-2020-34
en
refinedweb
How to: Share Editors Between Multiple XtraGrid Controls - 2 minutes to read The following example shows how to share a single repository item between two Grid Controls. For this purpose, an external PersistentRepository component must be used. First, add a specific repository item to the persistent repository and customize it depending upon your needs. Then link the persistent repository to the ExternalRepository properties of the required grid controls. After that, assign the repository item to the required grid columns via the ColumnEdit property. In this example we create a repository item corresponding to an ImageComboBoxEdit editor. using DevExpress.XtraEditors.Repository; //Add a repository item corresponding to a combo box editor to the persistent repository RepositoryItemComboBox riCombo = new RepositoryItemComboBox(); riCombo.Items.AddRange(new string[] {"Cash", "Visa", "Master", "Am.Express" }); persistentRepository1.Items.Add(riCombo); //Link the persistent repository to two grid controls gridControl1.ExternalRepository = persistentRepository1; gridControl2.ExternalRepository = persistentRepository1; //Now you can define the repository item as an inplace editor of columns for the two grids (gridControl1.MainView as GridView).Columns["PaymentType"].ColumnEdit = riCombo; (gridControl2.MainView as GridView).Columns["PaymentType"].ColumnEdit = riCombo; Feedback
https://docs.devexpress.com/WindowsForms/9511/controls-and-libraries/editors-and-simple-controls/examples/how-to-share-editors-between-multiple-xtragrid-controls
CC-MAIN-2020-34
en
refinedweb
Important: Please read the Qt Code of Conduct - QtCreator project with folder named new I noticed a strange behaviour with qmake, using Qtcreator. I have a project with many files allocated in different folders. One of these folders is named "new". The strange thing is, every time I edit some file located in this folder, the whole project is recompiled, instead of recompiling only the edited file (.cpp file). Changing other files on other folders, the behaviour is correct. Renaming the new folder, solve the problem. A non-Qt project does't have this issue. Actually I am using Qt5.7.1 with MinGW 5.3.0 on windows7. It is very easy to reproduce. Create a new Qt project. Add a new file and put it in a folder named new. After compiling the whole project, try to change the new added file. is it a known problem? Thank you for any hint - mrjj Lifetime Qt Champion last edited by mrjj Hi It's not something I have seen reported before. Also, i tried to reproduce in a new project using your step by step but it only compiles test.cpp in the new folder. Could you perhaps post your .pro file? Thanks for the replay. This is my .pro file without any change, copied and pasted. #------------------------------------------------- # # Project created by QtCreator 2019-03-19T15:43:18 # #------------------------------------------------- QT += core gui greaterThan(QT_MAJOR_VERSION, 4): QT += widgets TARGET = prova_new \ new/dummy.cpp HEADERS += \ mainwindow.h \ new/dummy.h FORMS += \ mainwindow.ui # Default rules for deployment. qnx: target.path = /tmp/$${TARGET}/bin else: unix:!android: target.path = /opt/$${TARGET}/bin !isEmpty(target.path): INSTALLS += target - mrjj Lifetime Qt Champion last edited by Hi Super. Well, it looks exactly like my test one. What do you have in dummy.h / cpp ? Also are they include in mainwindow or main ? it could be a bug in Qt5.7.1 as im testing with Qt5.12 You must use such older version ? In dummy.h and dummy.cpp, (nothig special) #ifndef DUMMY_H #define DUMMY_H class Dummy { public: Dummy(); }; #endif // DUMMY_H #include "dummy.h" Dummy::Dummy() { } main.cpp #include "mainwindow.h" #include <QApplication> int main(int argc, char *argv[]) { QApplication a(argc, argv); MainWindow w; w.show(); return a.exec(); } My project is an application that we use at work and many users have Windows XP. Unfortunately, Qt5.12 is not compatible with Windows XP. - mrjj Lifetime Qt Champion last edited by Hi Ahh, the good old Windows XP. Not as stoned dead as they want to believe :) I would check to see if it has been reported. Your pro file look pretty normal. Its only when folder is named new ? If I rename new folder in new_old, for example, the problem disappears. I also tried, Qt5.8, already installed in my system, same problem. Maybe tomorow, I will try Qt5.12 and let you know.
https://forum.qt.io/topic/100890/qtcreator-project-with-folder-named-new
CC-MAIN-2020-34
en
refinedweb
tinylog alternatives and similar libraries Based on the "Logging" category kibana9.7 10.0 L3 tinylog VS kibanaAnalyzes and visualizes log files. Some features require payment. logstash9.5 9.1 L4 tinylog VS logstashTool for managing log files. graylog8.7 9.8 L4 tinylog VS graylogOpen-source aggregator suited for extended role and permission management. Logback7.9 5.0 L4 tinylog VS LogbackRobust logging library with interesting configuration options via Groovy. DistributedLog7.6 1.8 L1 tinylog VS DistributedLogA high performance replicated log service. SLF4J7.3 6.7 L4 tinylog VS SLF4JAbstraction layer which is to be used with an implementation. Apache Log4j 26.9 9.1 L3 tinylog VS Apache Log4j 2Complete rewrite with a powerful plugin and configuration architecture. Logbook5.2 8.2 L5 tinylog VS LogbookExtensible, open-source library for HTTP request and response logging. Tracer3.3 8.4 L5 tinylog VS TracerCall tracing and log correlation in distributed systems. Herald2.6 0.5 tinylog tinylog or a related project? Popular Comparisons README tinylog 2 Example import org.tinylog.Logger; public class Application { public static void main(String[] args) { Logger.info("Hello World!"); } } Support More information about tinylog including a detailed user manual and the Javadoc documentation can be found on. On GitHub, issues and pull requests are always welcome :) Build tinylog tinylog requires Maven 3.5 and JDK 9 for building. Newer JDKs cannot compile legacy code for older Java versions, and older JDKs cannot compile new features for the latest Java versions. OpenJDK 9 is still available on java.net and Oracle JDK 9 on oracle.com. Build command: mvn clean install A new folder "target" with Javadoc documentation and all JARs will be created in the root directory. The generated JARs contain Java 6 byte code and are compatible with any JRE 7 and higher as well as with Android API level 1 andinylog README section above are relevant to that project's source code only.
https://java.libhunt.com/tinylog-alternatives
CC-MAIN-2020-34
en
refinedweb
Technical Support On-Line Manuals RL-ARM User's Guide (MDK v4) #include <rl_usb.h> U32 usbh_msc_get_last_error ( U8 ctrl, U8 dev_idx ); The usbh_msc_get_last_error function returns the last error that happened on requested mass storage device. The argument ctrl is the index of USB Host Controller. The argument dev_idx is the index of device instance. The usbh_msc_get_last_error function is part of the RL-USB-Host software stack. The usbh_msc_get_last_error function returns one of the following manifest constants. usbh_msc_read,.
https://www.keil.com/support/man/docs/rlarm/rlarm_usbh_msc_get_last_error.htm
CC-MAIN-2020-34
en
refinedweb
Put a word on a stream #include <wchar.h> int putw( int w, FILE *stream ); libc Use the -l c option to qcc to link against this library. This library is usually included automatically.. Legacy Unix Because of possible differences in word length and byte ordering, files written using putw() are machine-dependent, and might not be read correctly using getw() on a different processor. errno, ferror(), fopen(), fputc(), fputchar(), fputs(), getw(), putchar(), putchar_unlocked(), putc_unlocked(), puts()
http://www.qnx.com/developers/docs/6.5.0/topic/com.qnx.doc.neutrino_lib_ref/p/putw.html
CC-MAIN-2020-34
en
refinedweb
Important: Please read the Qt Code of Conduct - QLabel setText() cannot handle change of text length I am calling setText() to change the text of a Qlabel when a user presses a button. The text is just an integer that is incremented by one. Everything goes fine until 9 than it goes to 1 as it cannot show the 0 of the 10. What code should I add? def update_amount_of_faces(self): self.label3.setText(str(len(self.a_list))) #List has length of 10 text shown is 1 @FreeMake okay since you did not give a simple example program to test your function here is one and using Python 3.7 / PyQt5 on Win10 this works perfectly fine. I would suggest checking the values you think you ought to be getting to make sure they are what you are getting. Also QLabel does not have an easy way (that I could see) of restricting the length other than somehow making the label container so small that it will only show the first value in which case perhaps check the contents of your Label after modification to see what it actually contains. If it contains '10' when only showing '1' this means somehow your display area is really small. from sys import exit as sysExit from PyQt5.QtCore import * from PyQt5.QtGui import * from PyQt5.QtWidgets import * class CenterPane(QWidget): def __init__(self): QWidget.__init__(self) self.MyButtn = QPushButton('Update', self) self.MyButtn.clicked.connect(self.on_click) self.MyTList = QLineEdit('A, B, C, D, E', self) # Simply type the following into the Text Box: , F, G, H, I, J # press the Update button self.label3 = QLabel('0', self) vbox = QVBoxLayout(self) vbox.addWidget(self.MyButtn) vbox.addWidget(self.MyTList) vbox.addWidget(self.label3) self.setLayout(vbox) @pyqtSlot() def on_click(self): Value = self.MyTList.text() self.a_list = Value.split(',') print("List:",self.a_list) print("Count:",len(self.a_list)) self.update_amount_of_faces() def update_amount_of_faces(self): self.label3.setText(str(len(self.a_list))) #List has length of 10 text shown is 1 class MainWindow(QMainWindow): def __init__(self, parent=None): super(MainWindow, self).__init__(parent) Left = 200 Top = 200 Width = 250 Height = 100 self.setWindowTitle('Main Window') self.setGeometry(Left, Top, Width, Height) self.setCentralWidget(CenterPane()) if __name__ == "__main__": mainApp = QApplication([]) mainGUI = MainWindow() mainGUI.show() sysExit(mainApp.exec_()) This is a sizeHint/layout problem. You are not allowing your widget to have enough space to display more than a single digit. Set your minimum size hint a bit larger and try it again. @Kent-Dorfman I am not sure how you can make that claim without making a major assumption as to what @FreeMake did as they did not denote having changed their layout and/or even used sizeHint both of which would be relatively significant changes as opposed to the simple usage which is what was presented. Not saying your claim is incorrect because it is the only thing I can think of as well but it was not mentioned thus I cannot safely make that assumption and that is why I presented some code to show it working properly so that @FreeMake could try and discern where there problem resides. I make that assertion because that is the only thing it could be...simply writing text to a label is a no-brainer. If it is truncating the text then the layout information and/or widget geometry rules are at fault...assuming self.a_list has the value that he thinks it does @Kent-Dorfman right "assuming that the data is correct" which btw we cannot assume at this point as that has not been ascertained. Further if one considers that changing a layout and/or using sizeHint are not what most folks asking this kind of question would generally do your assumption falls even shorter. Now this is not to say (as I stated) that your assumption is incorrect all I am saying is that it is not a good assumption and your off handed answer could actually lead to more confusion as the individual may not even know what those are or how they work. Basically meet the individual where they seem to be at, not where you are and try to ascertain exactly what you can and cannot from the question as stated -- do not read into it. If you need the question clarified ask for that because making logical leaps can as I stated lead to more confusion and that is not what I am thinking you are trying to do. I hope that clarifies what I mean. @Denni said in QLabel setText() cannot handle change of text length: Further if one considers that changing a layout and/or using sizeHint are not what most folks asking this kind of question would generally do your assumption falls even shorter. whatevah! Hi all, apologies, I was really tired at the moment of writing and should have added a minimal example. I checked the length of my list and it is really 10. @Denni I cannot run your example as I am using the python version that came with FreeCAD which does not have the PyQt5 implemented. The window that I have seems to have a lot of space for an extra digit, where should I put the size hint? However here is a minimal example, you can click add faces to see the counter increment: from PySide import QtGui, QtCore # UI Class definitions class SelectFastenerFaces(QtGui.QDialog): """""" def __init__(self): super(SelectFastenerFaces, self).__init__() self.initUI() def initUI(self): self.selected_faces = [] self.result = "Cancelled" # create our window # define window xLoc,yLoc,xDim,yDim self.setGeometry(150, 150, 400, 350) self.setWindowTitle("Select fastener geometry of visible object") self.setWindowFlags(QtCore.Qt.WindowStaysOnTopHint) # create some Labels self.label1 = QtGui.QLabel("Select the fastener faces", self) self.label1.setFont('Courier') # set to a non-proportional font self.label1.move(0, 0) self.label2 = QtGui.QLabel("Number of selected faces: ", self) self.label2.setFont('Courier') # set to a non-proportional font self.label2.move(0, 20) self.label3 = QtGui.QLabel("0", self) self.label3.setFont('Courier') # set to a non-proportional font self.label3.move(200, 20) # add faces button addfacesButton = QtGui.QPushButton('Add Faces', self) addfacesButton.clicked.connect(self.onaddfaces) addfacesButton.setAutoDefault(True) addfacesButton.move(260, 140) # clear selection button clearButton = QtGui.QPushButton('Clear', self) clearButton.clicked.connect(self.onClear) clearButton.setAutoDefault(True) clearButton.move(150, 140) # cancel button cancelButton = QtGui.QPushButton('Cancel', self) cancelButton.clicked.connect(self.onCancel) cancelButton.setAutoDefault(True) cancelButton.move(150, 280) # OK button okButton = QtGui.QPushButton('OK', self) okButton.clicked.connect(self.onOk) okButton.move(260, 280) # now make the window visible self.show() def onaddfaces(self): self.selected_faces.append("face") # Update the faces self.update_amount_of_faces() def update_amount_of_faces(self): print(str(len(self.selected_faces))) self.label3.setText(str(len(self.selected_faces))) def onClear(self): self.selected_faces = [] self.update_amount_of_faces() def onCancel(self): self.result = "Cancelled" self.close() def onOk(self): self.result = "OK" self.close() # Class definitions # Function definitions # code *********************************************************************************** def main(): """Asks the users to select faces from an object 'obj', returns the selected faces in a list.""" # Constant definitions userCancelled = "Cancelled" userOK = "OK" form = SelectFastenerFaces() form.exec_() if form.result == userCancelled: return None # steps to handle user clicking Cancel if form.result == userOK: # steps to handle user clicking OK return form.selected_faces if __name__ == '__main__': main() Okay @FreeMake first pyside means your still using Qt4 which was deprecated in 2015 further pyside is no longer supported as well since it uses python 2.7 (or there abouts which is being deprecated next year). Now you can get both pyside2 (which is not needed as pyqt5 is directly supported within the latest version of python upon doing a pip install of pyqt5) and the latest version of python 3.7 for free so I would suggest uninstalling that old stuff and upgrading to the current stuff. That being said I copied your code and after tweaking it a little got it to run in python 3.7 pyqt5 and reproduced your issue (and its not due to changing a layout and/or using sizeHint). Your issue resides in line 32 (or close to that) where you declare: self.label3 = QLabel("0", self) The "0" is being used as a Mask I extended it to "00" and could then see 2 digit numbers but it did not show 3 digit numbers. My suggestion is to use 2 labels side by side. The first label holding the text you want and the second label holding nothing or "0" initially -- BUT then -- fully re-populate with the new value you want to show each time. Or figure out how to use that QLabel so it does what you are wanting it to do. Note the former solution is what I have used in other languages because there is really no need to continue adding text to the end of an exiting static label for it is a lot simpler (and easier for the computer) to completely overwrite a 2nd label that is used only for that purpose. P.S. In all future questions be sure to add the version of the languages you are using as well as the OS for instance I am using python 3.7 pyqt5 on Win10 this helps to inform anyone attempting to help you to compare pears to pears because you rarely get the proper results by comparing pears with avocados even if avocados are sometimes referred to as alligator pears
https://forum.qt.io/topic/103923/qlabel-settext-cannot-handle-change-of-text-length
CC-MAIN-2020-34
en
refinedweb
May 07, 2008 02:55 PM|sjnaughton|LINK you could use the technique from this post Removing Columns From Gridview in List.aspx Page to filter columns by your own attributes. ASP.NET Dynamic Data May 07, 2008 06:30 PM|davidebb|LINK Hi Steve, I'm not sure I quite understand what you're asking. It is possible to create new metadata attributes that your custom code can look for and act upon. But maybe that's not what you meant? thanks, David May 07, 2008 06:38 PM|sjnaughton|LINK Yes thats what I meant. Then when you use GridView1.ColumnsGenerator =new FilterDataSource(table); as described in another post you could deal with things like: Read/Write for Admin role Read Only for User role Hidden for another role keeping the logic and the model seperate. May 07, 2008 06:49 PM|davidebb|LINK Yes, you absolutely could do this. At some point, we had considered creating such security related attributes ourselves, but concluded that we could not come up with something that would please everyone. But the system works in a way that you are encouraged to create your own custom attributes that fit your scenario. In your generator, you can call MetaColumn.GetAttributes to get all the attributes for the column. David May 08, 2008 03:15 AM|sjnaughton|LINK So all I need to do is create my own metadata class add the attribute to the Metadata class and then call MetaColumn.GetAttributes and it will return a collection of the attributes? ASP.NET Dynamic Data May 08, 2008 05:24 AM|sjnaughton|LINK Hi David I've tried that here is my Attribute class: using System; [AttributeUsage(AttributeTargets.Class | AttributeTargets.Property, AllowMultiple = true)] public class PermissionsAttribute : System.Attribute { public PermissionsAttribute(Permissions permission, String role) { this._permission = permission; this._role = role; } private String _role; public String Role { get { return this._role; } set { this._role = value; } } private Permissions _permission; public Permissions Permission { get { return this._permission; } set { this._permission = value; } } public enum Permissions { ReadWrite, ReadOnly, Hidden } } Here is My Metadata class using System; using System.ComponentModel.DataAnnotations; //using NotAClue; [MetadataType(typeof(OrderMetadata))] public partial class Order { } [Permissions(PermissionsAttribute.Permissions.ReadOnly, "User")] [Permissions(PermissionsAttribute.Permissions.ReadWrite, "Admin")] public class OrderMetadata { [Permissions(PermissionsAttribute.Permissions.Hidden, "*")] [Permissions(PermissionsAttribute.Permissions.ReadWrite, "Admin")] public Object OrderDate { get; set; } }The problem I'm having is that if I include two attibute on the class (see above) I get the error below: The main type 'Order' already contains at least one attribute of type 'PermissionsAttribute'. And if I have two attributes on the property (see above) only the first on shows in the collection. ASP.NET Dynamic Data May 08, 2008 08:16 AM|tlanier|LINK Maybe you could use an attribute per role. [UserPermissions(ReadOnly)] [AdminPermissions(ReadWrite)] [DefaultPermission(Hidden)] public Object OrderDate { get; set; } Maybe you would define a default permissions to be set unless otherwise overridden. Tommy May 08, 2008 09:07 AM|sjnaughton|LINK Hi Tommy that would be a workaround but I wanted somthing a bit more generic that would fit in to any pattern you wanted. Also there may be more groups than just Admin and User level of permissions e.g. Accounts, Sales, Production etc. ASP.NET Dynamic Data May 08, 2008 11:07 AM|davidebb|LINK DynamicData is going through TypeDescriptor to get the attribute (this allows extensible attribute providers), and unfortunately it does have this restriction that it only supports one attribute of a given type. How about reversing Tommy's idea and having one attribute type per permission instead of per role? e.g. ReadOnlyPermission(), ReadWritePermission(). You can then make those attribute contructors take an arbitrary list of roles (by using 'params object[] roles' as the last argument). Then you can write: [ReadOnlyPermission("user1", "user2", "user3")] [ReadWritePermission("admin1", "admin2")] David May 08, 2008 12:33 PM|sjnaughton|LINK Thanks David that sounds like it will do the same. Will this restriction always apply of one attribute of a given type? May 08, 2008 12:57 PM|sjnaughton|LINK Here are my finished attributes clsses using System; [AttributeUsage(AttributeTargets.Class | AttributeTargets.Property, AllowMultiple = false)] public class ReadWritePermissionAttribute : System.Attribute { private String[] _roles; public ReadWritePermissionAttribute(params String[] roles) { this._roles = roles; } } [AttributeUsage(AttributeTargets.Class | AttributeTargets.Property, AllowMultiple = false)] public class ReadOnlyPermissionAttribute : System.Attribute { private String[] _roles; public ReadOnlyPermissionAttribute(params String[] roles) { this._roles = roles; } } [AttributeUsage(AttributeTargets.Class | AttributeTargets.Property, AllowMultiple = false)] public class HiddenPermissionAttribute : System.Attribute { private String[] _roles; public HiddenPermissionAttribute(params String[] roles) { this._roles = roles; } } Some Sample Metadata (Northwind) [MetadataType(typeof(OrderMetadata))] public partial class Order { } [ReadWritePermission("Admin")] [HiddenPermission("Production")] public class OrderMetadata { [ReadWritePermission("Admin")] [ReadOnlyPermission("User","Accounts","Sales")] public Object OrderDate { get; set; } } And finally I can access the attribute via: _table.Attributes; _table.columns[0].column.Attributes;I could then iterate through the attribute collections returned turing fields or tables on or off etc. ASP.NET Dynamic Data May 08, 2008 01:16 PM|tlanier|LINK Steve, Using your previous example, I'm having trouble detecting the custom attribute. The below code does not work. public ICollection GenerateFields(Control control) { List oFields = new List(); foreach (MetaColumn col in _table.Columns) { PermissionsAttribute a = new PermissionsAttribute(PermissionsAttribute.Permissions.ReadOnly, "Admin"); if (col.Attributes.Contains(a)) continue; // do anything to show it's working DynamicField f = new DynamicField(); f.DataField = col.Name; oFields.Add(f); } return oFields; } ---------------------------- In the partial class: [Permissions(PermissionsAttribute.Permissions.ReadOnly, "Admin")] public object SerialNo { get { return _SerialNo; } set { _SerialNo = value; } } May 08, 2008 01:28 PM|davidebb|LINK Tommy, you can't just call Attribute.Contains(a), as this will look if that exact instance of the attribute is in the collection (which it will never be). You need to search for Attributes of that type and see if you find the one you care about. You could write some simple heplers to make this easier. David May 08, 2008 02:45 PM|tlanier|LINK If I do something silly like below, it shows the colum has 4 attributes, none of which are the custom one. I think I'm looking in the wrong place. Attribute: System.SerializableAttribute Attribute: System.Runtime.CompilerServices.TypeDependencyAttribute Attribute: System.Data.Linq.Mapping.ColumnAttribute Attribute: System.Runtime.InteropServices.ComVisibleAttribute foreach (MetaColumn col in _table.Columns) { if (col.Name == "SerialNo") { string s = ""; foreach (Attribute a in col.Attributes) { Type t = a.GetType(); s = s + "Attribute: " + a.ToString() + " "; } } } May 08, 2008 02:46 PM|davidebb|LINK Good news, it turns out there is an easy way to get multiple attributes of the same type working. Just add this to your custom attribute class:public override object TypeId { get { return this; } } There is a little bug in the base class (Attribute) implementation of this method which cause duplicates to be removed. Please confirm that this works for you. This will not fix the case where you try to put multiple attributes on the class, as that is a different bug. The good news is that we can fix that as well, though that one will take a Dynamic Data update. thanks, David May 08, 2008 03:52 PM|davidebb|LINK It's normal to see all those other random attributes, but you should see yours as well. What about if you add standard attributes like UIHint or DisplayName on that same column? Do you get those? David May 08, 2008 04:02 PM|sjnaughton|LINK davidebbpublic override object TypeId { get { return this; } } TypeId is that replaced by your own type name or as is? May 08, 2008 05:20 PM|sjnaughton|LINK That seems to be it David. And for those watching Metadata: [PermissionsAttribute(PermissionsAttribute.Permissions.ReadOnly, "Admin")] [PermissionsAttribute(PermissionsAttribute.Permissions.ReadWrite, "User")] [PermissionsAttribute(PermissionsAttribute.Permissions.Hidden, "Sales")] [UIHint("Date")] [DisplayName("Order Date")] [Description("Its the Date you ordered it numpty")] public Object OrderDate { get; set; } Attibute Class: using System; [AttributeUsage(AttributeTargets.Class | AttributeTargets.Property, AllowMultiple = true)] public class PermissionsAttribute : System.Attribute { public override object TypeId { get { return this; } } public PermissionsAttribute(Permissions permission, String role) { this._permission = permission; this._role = role; } private String _role; public String Role { get { return this._role; } set { this._role = value; } } private Permissions _permission; public Permissions Permission { get { return this._permission; } set { this._permission = value; } } public enum Permissions { ReadWrite, ReadOnly, Hidden } } FieldGenerator class and helper class: using System; using System.Collections; using System.Collections.Generic; using System.Web.DynamicData; using System.Web.UI; public class FilteredFieldsManager : IAutoFieldGenerator { protected MetaTable _table; public FilteredFieldsManager(MetaTable table) { _table = table; } public ICollection GenerateFields(Control control) { List oFields = new List (); // where do I put this test I think in the page? //System.ComponentModel.AttributeCollection a1 = _table.Attributes; foreach (MetaColumn column in _table.Columns) { // drop out if the name starts with "ship" or == "customer" var Roles = column.Attributes.GetPermissionsAttributes(); if (!column.Scaffold) continue; DynamicField f = new DynamicField() { DataField = column.Name }; oFields.Add(f); } return oFields; } } public static class FilteredFieldsManagerHelper { public static Dictionary GetPermissionsAttributes(this System.ComponentModel.AttributeCollection attributes) { Dictionary permissions = new Dictionary string>(); foreach (Attribute attribute in attributes) { if (attribute.GetType() == typeof(PermissionsAttribute)) { permissions.Add(((PermissionsAttribute)attribute).Permission, ((PermissionsAttribute)attribute).Role); } } return permissions; } }I'm getting dictionary of permission role pairs backvar Roles = column.Attributes.GetPermissionsAttributes();from this line which I could use to filter which columns to show.Now I just need to decide how to deal with readonly columns.[:D] ASP.NET Dynamic Data May 08, 2008 06:28 PM|sjnaughton|LINK Thanks David we can call that one done then. May 08, 2008 06:55 PM|sjnaughton|LINK I'll do that David [:D] May 09, 2008 08:41 AM|tlanier|LINK Steve, Nice work! For my application I think I'm going to try to simply it a little using David's suggestion. A field will default to ReadWrite unless the ReadOnly Attribute is used: [ReadOnly("Role1", "Role2", etc.) A field will default to visible unless the Hidden Attribute is used: [Hidden("Role1", "Role2", etc.)] I'm assuming you saw the previous message on how to implement read only fields. Tommy May 09, 2008 09:27 AM|sjnaughton|LINK I found it easier to search for one attribute type and then dig out the permissions: public static class FilteredFieldsManagerHelper { public static List<PermissionsAttribute.Permissions> GetPermissionsAttributes(this System.ComponentModel.AttributeCollection attributes, String role) { List<PermissionsAttribute.Permissions> permissions = new List<PermissionsAttribute.Permissions>(); foreach (Attribute attribute in attributes) { if (attribute.GetType() == typeof(PermissionsAttribute)) { if(((PermissionsAttribute)attribute).Role == role) permissions.Add(((PermissionsAttribute)attribute).Permission); } } return permissions; } } in new heloper class you pass the role you want (you have that already) and it diggs out the permission into a List<PermissionsAttribute.Permissions>with List<PermissionsAttribute.Permissions>roles = column.Attributes.GetPermissionsAttributes("User"); and then you can foreach through it. ASP.NET Dynamic Data May 10, 2008 06:45 AM|sjnaughton|LINK Hi David you said: davidebbThis will not fix the case where you try to put multiple attributes on the class, as that is a different bug. The good news is that we can fix that as well, though that one will take a Dynamic Data update. Will this update be in the next DynamicData update or a future update? ASP.NET Dynamic Data Metadata May 10, 2008 10:19 AM|davidebb|LINK DynamicData update next week. But actually, I think you can avoid the bug trivially: put the class attributes on the main entity partial class rather than the 'buddy' metadata class. They should get picked up from there. David May 12, 2008 08:12 AM|tlanier|LINK Steve, This is the method that was previously shown to implement ReadOnly fields. public ICollection GenerateFields(Control control) { List oFields = new List(); foreach (MetaColumn col in _table.Columns) { if (!col.Scaffold) continue; if (col.Name == "HiddenFieldName") continue; DynamicField f; if (col.Name == "ReadOnlyFieldName") { f = new DynamicReadonlyField(); } else { f = new DynamicField(); } f.DataField = col.Name; oFields.Add(f); } return oFields; } .......DynamicData;; cell.Controls.Add(control); } else { base.InitializeCell(cell, cellType, rowState, rowIndex); } } } Tommy May 12, 2008 09:38 AM|sjnaughton|LINK Hi Tommy I'm not sure what in: public class DynamicReadonlyField : DynamicField you are doing that changed the way it is displayed. (I'm probably just being a bit stupid) ASP.NET Dynamic Data DynamicField May 12, 2008 10:00 AM|tlanier|LINK Here's David's original post: I think the reason it works is the overridden class DynamicReadonlyField is not doing something the the original DynamicField class is doing. Tommy 32 replies Last post May 12, 2008 10:08 AM by sjnaughton
https://forums.asp.net/p/1258077/2344827.aspx?Re+Will+it+be+possible+or+is+it+already+possible+to+extend+the+metadata
CC-MAIN-2020-34
en
refinedweb
Multi URL performance test with App Service Thursday, June 2, 2016 Multi URL tests are now available in production. Thursday, June 2, 2016 Multi URL tests are now available in production. Thursday, May 12, 2016 Notification Hubs recently enabled namespace-level tiers so customers can allocate resources tailored to each namespace’s expected traffic and usage patterns. Tuesday, May 10, 2016 We are transitioning from Azure Mobile Services to Azure App Service. Monday, April 25, 2016 As of today, it is now possible to migration your existing Application Insights mobile app to HockeyApp with a single click. Tuesday, April 19, 2016 The v2.1.0 release of the Server SDK adds support for SQLite on the server. This provides a new data store that can be used for a number of important scenarios. Thursday, April 14, 2016 Notification Hubs' per message telemetry feature now supports scheduled send and the allowed device expiry (time to live) is extended to infinity. Friday, April 1, 2016 The MyDriving Azure IoT and Mobile sample application enables you to record trips in your car using the MyDriving mobile application and off the shelf OBD devices. Thursday, March 31, 2016 Developing cloud applications presents a unique opportunity for businesses to reach new markets and customers that span the globe. Tuesday, March 29, 2016 We are very excited to announce some key features to increase the reach of our product. Tuesday, March 29, 2016 Today we are pleased to announce Parse Server on Azure Managed Services.
http://azure.microsoft.com/en-us/blog/topics/mobile/?Page=4
CC-MAIN-2020-34
en
refinedweb
Making your first figure¶ Welcome to PyGMT! Here we’ll cover some of basic concepts, like creating simple figures and naming conventions. All modules and figure generation is accessible from the pygmt top level package: import pygmt Creating figures¶ All figure generation in PyGMT is handled by the pygmt.Figure class. Start a new figure by creating an instance of this class: fig = pygmt.Figure() Add elements to the figure using its methods. For example, let’s start a map with an automatic frame and ticks around a given longitude and latitude bound, set the projection to Mercator ( M), and the figure width to 8 inches: fig.basemap(region=[-90, -70, 0, 20], projection="M8i", frame=True) Now we can add coastlines using pygmt.Figure.coast to this map using the default resolution, line width, and color: To see the figure, call pygmt.Figure.show: Out: <IPython.core.display.Image object> You can also set the map region, projection, and frame type directly in other methods without calling gmt.Figure.basemap: fig = pygmt.Figure() fig.coast(shorelines=True, region=[-90, -70, 0, 20], projection="M8i", frame=True) fig.show() Out: <IPython.core.display.Image object> Saving figures¶ Use the method pygmt.Figure.savefig to save your figure to a file. The figure format is inferred from the extension. Note for experienced GMT users¶ You’ll probably have noticed several things that are different from classic command-line GMT. Many of these changes reflect the new GMT modern execution mode that will be part of the future 6.0 release. A few are PyGMT exclusive (like the savefig method). The name of method is coastinstead of pscoast. As a general rule, all ps*modules had their psprefix removed. The exceptions are: psxywhich is now plot, psxyzwhich is now plot3d, and psscalewhich is now colorbar. The arguments don’t use the GMT 1-letter syntax (R, J, B, etc). We use longer aliases for these arguments and have some Python exclusive names. The mapping between the GMT arguments and their Python counterparts should be straight forward. Arguments like regioncan take lists as well as strings like 1/2/3/4. If a GMT argument has no options (like -Binstead of -Baf), use a Truein Python. An empty string would also be acceptable. For repeated arguments, such as -B+Loleron -Bxaf -By+lm, provide a list: frame=["+Loleron", "xaf", "y+lm"]. There is no output redirecting to a PostScript file. The figure is generated in the background and will only be shown or saved when you ask for it. Total running time of the script: ( 0 minutes 1.880 seconds) Gallery generated by Sphinx-Gallery
https://www.pygmt.org/latest/tutorials/first-figure.html
CC-MAIN-2020-34
en
refinedweb
Type: Posts; User: GCDEF As I said earlier, the problem isn't likely in the resource file, it's where the toolbar is getting created. If you're using MFC, lookin in Mainfrm.cpp. If you're not, look into whatever framework... It's probably in your MainFrm.cpp code. Look for an array called nButtons, and a call to m_wndToolBar.SetButtons(). If you don't see anything there, look for anything in MainFrm.cpp that has to to... That doesn't make much sense either. Can you explain in more detail what you're trying to do? If you're not debugging the third-party app, why not just debug yours? Why the need to attach to anything? Are you trying to debug a third party tool? You can't do that unless you have a debug version of it including their pdb file. Why debug in release mode? Did you compile and link to include debug info? Is there a reason you're using attach rather than starting the app in the debugger? Thanks. I don't know how it's being used. It seems to be a boost file. It's a large project I need to make a small change to. I'm going to have to wait till the guy that wrote it comes back from... I really don't know what it's supposed to be doing, but changing it to == gave me way more errors. That line makes no sense to me, but as I said, I really haven't used templates, so I don't even... Not my code and I haven't used templates much. These two lines template <class U> static yes_type test(U&, decltype(U(source<U>()))* = 0); give the error ... Not every browser maintains history. Perhaps you could do something with EnumWindows. Targetver.h contains one line #include <SDKDDKVer.h> Look for a difference in stdafx.h between the projects if it can't find that. Also, the part of the resource file that describes an edit control created in the resource editor. You really need to use the resource editor. You're developing a really bad habit. I can't tell you anything else at this point. Maybe zip up the smallest project you can so we can take a look. I don't see that behavior. You can turn WS_EX_CLIENTEDGE off in the resource editor, but it shouldn't be on anyway. WS_BORDER and WS_VISIBLE should be all you need. What version of Visual Studio... Now look at the numbers you're passing in. Do they make sense? If you use the dialog editor, you don't need to call any kind of create. And again, you don't need CreateEx. I only mentioned it... Let's start simple. What are the arguments the CRect constructor takes? FWIW, the resource editor is much easier than doing it this way No clue how to fix it? Look at the parameters you need to pass in to a CRect constructor and see if the ones you're passing in make sense. You need to figure that one out yourself. I gave a good... It's not displaying because his CRect arguments are wrong. The bottom is higher than the top. Look at the order of parameters for the CRect constructor. WS_EX_CLIENTEDGE will give you the 3D look you don't want. I only mentioned CreateEx earlier because you were using WS_EX_CLIENTEDGE... Is there some reason you're not using the resource editor? It's an extended style so you'll need to use CreateEx Right click on the edit control and select properties This thread is from 2013 Maybe it's just me, but a CDialog having a CFormView as a member seems kind of wonky to me. I don't know if it has anything to do with your problem though. Why did you set it up that.
https://forums.codeguru.com/search.php?s=df09612032aad4dc90e41bed57ca0c26&searchid=20838045
CC-MAIN-2020-34
en
refinedweb
Administrator’s Guide Version 1.6 Table of Contents Getting Started ...................................................................................................................................... 5 Authentication ................................................................................................................................... 18 Streaming ........................................................................................................................................... 59 Maintenance ........................................................................................................................................ 80 Logging ............................................................................................................................................ 84 Glossary .............................................................................................................................................. 99 2 Notices © Exterity Limited 2003-2017 This document contains information that is protected by copyright. Reproduction, adaptation, or translation without prior permission is prohibited, except as under the copyright laws. Document Reference 1300-0090-0001 Exterity Limited, St David’s House, St David’s Drive, Dalgety Bay, Fife, KY11 9NB, Scotland, UK Trademarks © Exterity Ltd 2017. All rights reserved. Exterity, the Exterity logo, AvediaServer, AvediaStream, ArtioPortal, AvediaPlayer and ArtioSign are registered trademarks or trademarks of Exterity Ltd. All other trademarks and logos are property of their respective owners. Exterity tries to ensure that all information in this document is correct but does not accept liability for any error or omission. Information and specifications are subject to change without prior notice. Disclaimer The information contained in this document is subject to change without notice. EXTERITY LIMITED MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THIS MATERIAL, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. Exterity Limited shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material. 3 Getting Started Warranty A copy of the specific warranty terms applicable to your Exterity products and replacement parts can be obtained from Exterity. To request more information or parts, email: [email protected] Safety Notices Before installing and operating these products, please read the safety information contained in this guide. Audience This documentation is intended for use by systems integrators or systems administrators who are installing and setting up Exterity products. It assumes that readers are familiar with installing and configuring network-based products. Scope This edition of the documentation refers to version 1.6 of the g44xx TVgateway firmware. All Exterity AvediaStream g44xx TVgateway products are covered. 4 Getting Started TVgateways are network devices that receive digital TV channels and make them available as MPEG transport streams over an IP network. Each channel is automatically announced on the network using the information contained in the digital broadcast for easy viewing on Exterity Receivers and desktop clients. For the purposes of this documentation, “TVgateway” refers to a single TVgateway blade in an AvediaStream chassis. Each blade is a separate entity and is configured and managed independently from any other blades in the chassis. The following TVgateways are currently available and supported by this firmware version: These devices are operated and managed largely in the same way. Each blade has its own unique features and management requirements which are identified and highlighted in the document. TVgateways with CAM slots have the ability to descramble channels; to do this, a CAM and subscription card from the package provider are required. If you are descrambling content using a TVgateway with CAM, ensure that you have the appropriate authority/rights to distribute the descrambled content on the network. 5 Getting Started Product Overview Configuration Overview This section contains a brief overview of the steps required to install and configure the TVgateway. Before using the TVgateway, it must be powered on and connected to the network and input signal source. This process is described in the installation guide for the chassis and also in Physical Interfaces. By default, the TVgateway requires a DHCP server to be available on the network to assign it an IP address. There are two methods of assigning a static IP address to the TVgateway, if required: • Temporarily set up a DHCP server on an isolated network. Once an IP address has been assigned to the TVgateway, you can configure a static IP address using the Web Management Interface. For more information, see IP Address Configuration. • Use the Admin Interface to configure the IP address. For more information, see Admin Interface. Provide a name (and location) for the device so you can easily identify it in the future. You can do this using the Web Management Interface or the AvediaServer Director Application. Scan the source to discover details of the available channels. This is done on a per-tuner basis and is described in Scanning DVB-T/T2 & DVB-C/C2 Channels and Scanning DVB-S/S2 Satellite Channels. Once you have successfully scanned for channels, select the multiplex containing the channels you want to stream. This is also carried out on a per-tuner basis. This is described in Channel Configuration. Once you have successfully scanned for channels and selected a suitable multiplex, select the channels you want to stream, again on a per-tuner basis. This is described in Channel Configuration. You are now ready to stream channels onto the network. See Streaming for details of this final step. Network Considerations The TVgateway transmits audio/video using IP multicast. In order for this to work satisfactorily, it is vital that the network switches are multicast-enabled in order to prevent unwanted flooding of traffic on the network. For these purposes, “Multicast-enabled” is understood to mean that all network switches carry out IGMP snooping, and one switch must function as the IGMP querier. 6 Getting Started For more information, please see the 'Network Requirements' knowledgebase article. Channel Announcements The TVgateway announces the list of channels it is streaming using the Session Announcement Protocol (SAP). This enables Exterity Receivers/desktop clients and third-party equipment to automatically discover and connect to the available channels on the network. Content Protection The Exterity content protection solution is designed to meet the Digital Rights Management (DRM) requirements of content owners and broadcasters. The AvediaStream g4415-sm, AvediaStream g4415-lk and AvediaStream g4415-vx form part of this content solution, as detailed below. • If a CAM on a g4415-sm, g4415-lk or g4415-vx has been used to decrypt content, it never streams that content in the clear. However, it can stream FTA channels which have never been encrypted. • All other AvediaStream g44xx TVgateways with CAM slots can stream de-scrambled content in the clear. For example, the g4412 can decrypt content using a CAM and then stream it on the LAN in an unencrypted format. For more information on how to set up and use SecureMedia encryption technology, please see “Configuration of Exterity SecureMedia broadcast channel encryption with AvediaStream g4400 series TVgateways”, available on the Exterity website. 7 Getting Started Samsung LYNK DRM technology forms part of the content protection solution, and is integrated into the AvediaStream g4415-lk TVgateway. The g4415-lk first decrypts the selected channels using a CAM, then uses Samsung LYNK DRM technology to re-encrypt the content prior to streaming onto the LAN. The AvediaStream g4415-lk contains a Samsung LYNK DRM license which ensures that video content is always delivered securely throughout the IPTV system. This is illustrated below. The AvediaStream g4415-vx contains a Verimatrix license which ensures that video content is always delivered securely throughout the IPTV system. This is illustrated below. For more information on how to set up and use Verimatrix encryption technology, please see “Connecting AvediaStream g4415-vx TVgateways to a Verimatix Server”, available on the Exterity website. 8 Management Interfaces • Admin Interface • AvediaServer Director It can also be managed by third party applications using SNMP or using Exterity’s proprietary Terminal Control Interface (TCI). 1. Enter the IP address of the TVgateway directly into your browser, or click the TVgateway’s name in the AvediaServer Director application. 2. When prompted, enter the username and password. The default login details are: Login window You can also change the admin password using the Admin Interface. 9 Management Interfaces 4. Use this menu to navigate through the pages, changing settings as required. Click Apply on each page to save your changes. For security reasons, we recommend that you change the administrator password as soon as possible. Please see Authentication for details of how to do this. Admin Interface In certain circumstances it may not be possible to manage the TVgateway via the Web Management Interface. For these situations, a text-based Admin Interface is provided, which is available via the serial interface (marked ‘ADM’ on the chassis front panel) or via SSH. See Serial Interface Connection for details of how to connect to the serial admin port. Once connected, log in using username admin and password as for the Web Interface (default labrador). 10 Management Interfaces Provides access to the device log file, the DVB card status and the CAM Show Diagnostics Set Network Config Allows the administrator to set the IP address of the device. Allows the administrator to change the admin password for the Admin Set Admin Password and Web Interfaces. Return to factory Allows the administrator to set all configuration to factory defaults. defaults 11 Management Interfaces AvediaServer Director The AvediaServer Director is used for device discovery and management and is an integral part of the AvediaServer platform. The Director uses SNMP to manage a subset of device functionality and can also be used to open the Web Management interface of the TVgateway. AvediaServer Director You can carry out the following actions on the TVgateway using the Director: • Export Config – Export the current configuration for archiving or applying to another device. • Set TFTP Server – Specify the IP address of the TFTP server to be used. • Set Syslog Server – Specify the IP address of the Syslog server to be used. 1. Open the AvediaServer Web Interface and start the AvediaServer Director application. 2. Select TVgateway from the Device drop-down list to display only TVgateways, and use the column sort functions to help locate the TVgateway you want to configure. 3. Click the required TVgateway Name hyperlink to launch the Web Interface login window. 12 General Device Management This section describes how to manage attributes of the TVgateway not associated with IPTV streaming. All procedures described in this section assume that you are running the Web Management Interface as described in Management Interfaces. • Device Naming • Network Configuration • Authentication • Software Version: The version of software (often known as firmware) running on this device. • Description: A detailed version description identifying when the software was built. • Hardware Type: This identifies the exact type of hardware in the device. • Date: The configured NTP server and time zone are used to generate the displayed date and time. (If no NTP server is present, the TVgateway’s internal clock is used, starting on Jan 1 1970 (Linux Epoch).) • Secure Hardware: Shows hardware has security and tamper proofing features required for video content protection. • License: A comma-separated list of feature licenses that have been deployed on this device. 13 General Device Management Device Naming You can assign a name and location to the TVgateway which can help identify it in a management application, such as AvediaServer Director. 1. Click General. 2. Enter a name and location as required in the Name and Location fields, then click Apply. You can also configure the name and location using the Name and Location actions in the AvediaServer Director application. 14 General Device Management Network Configuration This section describes TVgateway options relating to network connections. These options are all available from the Network page. IP Address Configuration You can configure the TVgateway to obtain an IP address automatically using DHCP, or you can specify static addressing information, i.e. IP address, subnet mask, default gateway and DNS server. An IP addressing change may take a short time to come into effect. The device starts using the new IP address automatically - no reboot is necessary. 1. Click Network. 2. In the IP Address Configuration section, select DHCP (Automatic) from the IP Address Settings drop-down list and click Apply. 15 General Device Management 1. Click Network. 2. In the IP Address Configuration section, select Static (use below) from the IP Address Settings drop-down list. 3. Specify values for IP Address, Subnet Mask, Default Gateway and DNS Server, then click Apply. The two LEDs on the front of the AvediaStream chassis indicate the type and status of the link. See Network Port Status for more information. It is important to ensure that the TVgateway settings match the settings on the switch port to which the TVgateway is connected. If this is not the case, it can result in dropped packets causing breakup of audio/video. In practice this means the TVgateway and the connected network switch should be configured for operation as follows: • Auto-negotiation enabled on both the TVgateway and the connected network switch. or • Auto-Negotiation disabled on both the TVgateway and the connected network switch, and a fixed setting of 100FD (Full Duplex) on both the switch and the TVgateway. 1. Click Network. 2. In the Network Port Configuration section, choose Auto-negotiate or 100Mb/s Full as required from the drop-down list, then click Apply. If the TVgateway has been inserted into an AvediaStream c1210 chassis, a separate drop- down list is available for each Ethernet interface. 16 General Device Management The statistics at the bottom of the Network page indicate which Ethernet interface is active. Below, the higher transmit stats for Ethernet A indicate that it is currently in use: The Primary Interface defines which Ethernet interface is used for transmitting data if both interfaces are available. By default, Ethernet A is the primary interface. To change this, select Ethernet B in the drop-down list and click Apply. Information on Ethernet port usage and switching is also recorded in the log file. Please see Logging. 17 General Device Management Authentication Admin Password You can control access to the web management interface and admin interface by changing the password. This option is available from the Authentication page in the Web Interface. 1. Click Authentication. Password Requirements Administrator passwords: • numbers 18 General Device Management SNMP SNMP is used by management applications such as the AvediaServer Director application to manage a subset of the TVgateway functions and configuration. It is possible to completely disable the use of SNMP; however, if you disable SNMP on the TVgateway, management applications such as the AvediaServer Director will not be able to communicate with it. Device discovery and configuration change traps are still sent even when SNMP control is disabled. 1. Click Authentication. 2. Enter the required read/write and read-only community strings in the appropriate boxes. 3. Click Apply. 1. Click Authentication. 2. Check or uncheck the Enable SNMP Agent box as required (default: checked – enabled). 3. Click Apply. 19 Physical Interfaces This section describes the connections required to connect the following TVgateways to the terrestrial/satellite source: Overview AvediaStream TVgateways can operate in any of the following chassis: • AvediaStream c1101 • AvediaStream c1103 The blade input signal interfaces are on the rear panel, while the edge connector enables access to the network and admin ports via the chassis front panel connections. The installation guide for each chassis describes the connection of the blades to the power supply, the network and to a PC via serial connection. 20 Physical Interfaces Please disconnect all RF cables from the blade before inserting or removing from a chassis. The g4412. 21 Physical Interfaces The g4415-sm. 22 Physical Interfaces The g4415-l. 23 Physical Interfaces The g4415-v. 24 Physical Interfaces The g4418 is shown below and has eight tuners, Tuner A to H, connected to the antennae by eight female F-type connector inputs. Please disconnect all RF cables from the blade before inserting or removing from a chassis. Connect the satellite dish LNB or multiswitch to the selected TVgateway tuner input using the F-type connector. The satellite dish should be installed by a professional installer, ensuring that the signal levels conform to the requirements listed in Recommended Signal Levels. Connect the AvediaStream g4418 to a multiswitch rather than directly to an LNB if the LNB draws more than 100 mA from the TVgateway. Failure to do this may result in power to the LNB being switched off. 25 Physical Interfaces Please disconnect all RF cables from the blade before inserting or removing from a chassis. Connect the antenna feed to the TVgateway tuner input F-type connector. The antenna should be installed by a professional installer, ensuring that the signal levels conform to the requirements listed in Recommended Signal Levels. 26 Physical Interfaces Please disconnect all RF cables from the blade before inserting or removing from a chassis. The g4442, shown below has two tuners, connected to the antennae by one female F-type connector input. Connect the antenna feed to the TVgateway tuner input using the F-type connector. The antenna should be installed by a professional installer, ensuring that the signal levels conform to the requirements listed in Recommended Signal Levels. 27 Physical Interfaces The g4448, shown below has eight tuners, Tuner A to H, connected to the antennae by two female F-type connector inputs. Input 1 feeds tuners A-D, and input 2 feeds tuners E-H. Please disconnect all RF cables from the blade before inserting or removing from a chassis. 28 Scanning Overview DVB-S/S2, DVB-T/T2 and DVB-C/C2 signals consist of television and radio channels collected into bundles called multiplexes. Each multiplex is transmitted on a separate frequency, or for satellite (DVB-S/S2) a combination of frequency and signal polarization. A TVgateway, in common with all RF receivers, must tune to the transmission frequency to access the channels in a multiplex. Each tuner in a TVgateway can tune to one frequency and therefore stream all the channels transmitted on that particular frequency. TVgateways are supplied with transmitter files for many commonly used satellite transponders, terrestrial transmitters and cable sources. These files contain the required tuning parameters such as frequency, polarization, modulation schemes, symbol rate, and error correction information. Tune the TVgateway by selecting the source for the relevant multiplex and initiating a scan. When tuned, the required channels can be selected and subsequently streamed to the IP network. Scan Resources contains some useful tips on how to find information which will help you decide what to scan. To use the advanced scanning procedure you must know the frequencies, polarization, symbol rate and delivery system information for the satellite you intend to use. The process of tuning the TVgateway to the required transmitter source, selecting channels, and streaming them onto the IP network follows the logical process shown below. 29 Scanning Overview • The process of selecting the RF signal source, tuning the TVgateway tuner to specified frequencies and reviewing the channel contents of a multiplex. Refer to the following sections for the relevant signal source: • To understand the results of a scan and manage the TVgateway transmitter files, refer to Interpreting Scan Results. • To enable specific channels from the selected multiplex, configure or change channel parameter details, refer to Channel Configuration. • To set up the parameters to announce and stream the selected channels onto the IP network, refer to Streaming. 30 Scanning DVB-T/T2 & DVB-C/C2 Channels In order to successfully receive and stream channels, the input signal level and quality must meet the requirements specified in Recommended Signal Levels. There are three types of scan: • Frequency Range Scans enable you to scan the complete list of frequencies listed in a transmitter file. • Basic Scans provide the ability to individually scan any frequency listed in a transmitter file for a specific transmitter. • Advanced Scans provide the ability to tune to a multiplex not listed in the transmitter files. This version of firmware does not have in-built transmitter files for DVB-C/C2. If you cannot find a suitable configuration file, you can add additional files using the Transmitter Files import function on the Resources page. See Transmitter File Format for file format information, and Managing Transmitter Files. 31 Scanning DVB-T/T2 & DVB-C/C2. 32 Scanning DVB-T/T2 & DVB-C/C2 Channels 1. Click Scan in the required Tuner menu to display the relevant Scan page. 3. From the Transmitter file drop-down list, select the file you want to use. 4. From the Frequency drop-down list, select the frequency/UHF channel number you want to scan. The scan starts and a progress bar is shown before results are displayed on the screen. For more information, refer to Interpreting Scan Results. 33 Scanning DVB-T/T2 & DVB-C/C2 Channels You must use the advanced scan method for DVB-C/C2 signals. 3. Enter the frequency, making sure to select the correct frequency units from the drop-down list. 4. Select a value from the Bandwidth drop-down list. The default is 8 MHz. The DVB-T/T2 option scans for both DVB-T and DVB-T2 multiplexes. The scan starts and a progress bar is shown before results are displayed on the screen. For more information, refer to Interpreting Scan Results. 34 Scanning DVB-S/S2 Satellite Channels In order to successfully receive and stream channels, the input signal level and quality must meet the requirements specified in Recommended Signal Levels. There are three types of scan: • Basic Scan • Advanced Scan If you cannot find a suitable configuration file, you can add additional files using the Transmitter Files import function on the Resources page. See Transmitter File Format for file format information, and Managing Transmitter Files. 35 Scanning DVB-S/S2 Satellite. 36 Scanning DVB-S/S2 Satellite Channels If you cannot find a suitable configuration you can add additional transmitter files using the Transmitter Files import function on the Resources page (see Managing Transmitter Files). See Transmitter File Format for file format information. 1. Click Scan in the required Tuner menu to display the Tuner Scan page. 37 Scanning DVB-S/S2 Satellite Channels 2. Select the required version from the DiSEqC drop-down list. If None is selected, no further DiSEqC configuration is required. 3. Select the required DiSEqC switch position from the Committed Switch drop-down list. For detailed configuration of the DiSEqC settings refer to Configuring the DiSEqC Switch Position. 6. Select the required RF frequency and polarization from the Scan Frequency drop-down list. 7. Select the installed LNB type from the LNB Type drop-down list. (The default is Universal, the most commonly used.) Tool-tips show the Local Oscillator (LO) frequency used for each LNB type when you place your cursor over each LNB Type in the drop-down list 38 Scanning DVB-S/S2 Satellite Channels The scan starts and a progress bar is shown before results are displayed on the screen. For more information, refer to Interpreting Scan Results. 2. Select the DiSEqC switch position from the DiSEqC drop-down menu. (For more information, refer to Configuring the DiSEqC Switch Position. 39 Scanning DVB-S/S2 Satellite Channels 4. Enter the frequency in the Frequency field, making sure to select the correct units from the drop-down list. 6. Enter the symbol rate in the Symbol Rate field, making sure to select the correct units from the drop-down list. 8. Select the LNB type. If required select Manual and configure as described in Specifying LNB Parameters. The scan starts and a progress bar is shown before results are displayed on the screen. For more information, refer to Interpreting Scan Results. If the satellite equipment is connected to the TVgateway through a DiSEqC switch, it is necessary to configure the required input prior to starting a scan. To do this use the DiSEqC settings on the Scan page. Configuration of the DiSEqC settings is common to all types of satellite scan described above. If the satellite equipment is not connected through a DiSEqC switch, the DiSEqC version described in the following procedure should be left at the default value of ‘None’. 40 Scanning DVB-S/S2 Satellite Channels 1.2 Adds to 1.1 the ability to steer a motorized dish to a stored position number. 1.1 + Goto X Adds to 1.1 the ability to steer a motorized dish to a satellite at a particular longitude. DiSEqC 2.x switches are backwards-compatible with DiSEqC 1.x satellite receivers. The TVgateway can therefore operate with DiSEqC 2.x switches. Motorized dishes require some time to move to a new position therefore more than 30 seconds may elapse before a scan starts if one of the motorized position options is selected. To configure the DiSEqC settings before starting a scan, follow the instructions below. 2. Select a switch (A, B, C, D or None) from the DiSEqC committed switch drop-down menu. The DiSEqC switch inputs may be numbered rather than lettered. In this case, position A would correspond to the lowest numbered position. For example, if the switch is labeled with positions 0 – 3, position A corresponds to position 0, position B to position 1, and so on. 3. Select a switch (1 to 16 or None) from the DiSEqC uncommitted switch drop-down menu. 2. Select committed and non-committed switch positions as for DiSEqC 1.0 and 1.1, if required. 3. Enter the geographic coordinates of the satellite dish location in the Ground Station fields. 41 Scanning DVB-S/S2 Satellite Channels The geographic coordinates are required in order for the TVgateway to calculate the correct angle offsets for the dish. It is up to the administrator to make sure that it is possible to receive the signal from the required satellite from this location and using this dish. 2. Select committed and non-committed switch positions as for DiSEqC 1.0 and 1.1, if required. 3. Enter the required position number as specified by the satellite installer in the Stored Position # field. 10.6 >11.7 Standard 10.75 — DBS 11.25 — C-Band 5.15 — If a different type of LNB is used, you can manually configure the Local Oscillator frequency. For a Universal LNB, the TVgateway selects the LO frequency by disabling/enabling a 22kHz tone to the LNB for transmission frequencies below/above 11.7GHz respectively. Again, this can be manually configured. 1. Select the LNB in use from the LNB Type drop-down list. To specify the LNB local oscillator frequency: 2. Specify Off (low band) or On (high band) to specify the use of the 22kHz tone from the 22kHz tone drop-down list. 3. If the configuration is complete, click Start Scan. Refer to Interpreting Scan Results to review the Multiplex and Channels Lists. 42 Interpreting Scan Results The scan process produces the details of the discovered multiplex(es), including the frequency and other details used in scanning. If the scan of a frequency was successful, the mux and channel information is listed. If unsuccessful, a Scan Status of “No Lock” is displayed. • Tuner Locked – The TVgateway tuner has found and is locked to the specified scan frequency. • Scan Complete – The TVgateway has completed the scan of the specified frequency and the multiplex and channel details are listed. • No Lock – The TVgateway tuner has been unable to locate a signal at the specified frequency. • Scan timed out – No data has been received from the tuner. Examples of successful satellite frequency/polarization and terrestrial scans are shown below: 43 Interpreting Scan Results Summary information about the scanned multiplex and its channel content is displayed as shown above. New Multiplexes The New Multiplexes section displays the following information: Mux Number On completion of a successful scan the detected multiplex is assigned a value by the TVgateway (in sequence) and added to the list on the Multiplexes page. Parameters Displays the multiplex frequency and delivery system polarization (if applicable), and symbol rate. Quality Green or orange indicate that the signal is good enough to stream; red indicates that the signal is not strong enough. 44 Interpreting Scan Results New Channels The channel content for the New Multiplex is listed and the following information is displayed: Scrambled: 45 Interpreting Scan Results You can upload additional configuration files for additional satellites to the TVgateway. These files may be supplied to you by your Exterity reseller. The format of these files is shown in Transmitter File Format. The TVgateway uses TFTP to acquire transmitter files, so the new transmitter file must be hosted on a TFTP server for the TVgateway to be able to download it. For more information, refer to Specifying the TFTP Server’s Address. The transmitter files are managed from the Resources page on the Web Management Interface. 46 Interpreting Scan Results 2. Ensure that the transmitter file is hosted correctly in the root directory of the TFTP server. 3. Click Resources. 47 Interpreting Scan Results 4. Ensure that the correct TFTP Server address is shown. This is configured on the Maintenance page (see Specifying the TFTP Server’s Address for more details). 6. Click Import. The file is retrieved from the TFTP server and is available for use on the Scan page on completion of the upload. 1. Click Resources. 3. Click Delete. 1. Click Resources. 48 Channel Configuration A successful scan results in a list of one or more multiplexes, and lists of channels for each scanned multiplex. From these lists you can select the channels to be streamed onto the network. Use the Channels page to view all the channels. By default, channels are ordered by multiplex. To re-order the table, click any of the column headings. Any multiplex may contain a mix of TV, radio, and data channels. (Data channels do not carry normal audio-video streams but are typically used as control channels.) A tuner tunes to a specified frequency and can therefore stream all the channels in the multiplex at that frequency. You can change the announced Channel Names and Numbers. More advanced channel editing allows you to enable/disable discrete elements. For example you can choose to enable or disable subtitles if they are a discrete part of the channel stream. Refer to Advanced Channel Configuration for more information. Selecting a Multiplex Once you have identified the multiplex containing the channels you want to stream, you must select it as the active multiplex so that the TVgateway tunes to the correct frequency. To select a multiplex: 2. Click Delete. 49 Channel Configuration 2. Select the View check boxes for the types of channels required. For example, click TV and Radio to list only TV and radio channels. Click the Active Mux only check box to list only the channels on the selected multiplex. The View check box selection is applied and saved in your browser; no configuration changes are applied to the TVgateway. Click the headings to sort the list and help you to find the specific channels you want to stream. For example, click Name to sort the channels in alphabetical order. The information displayed is described below. Enable Click the check box to enable streaming of the selected channel. Note: To stream the specified channel you must set the respective multiplex as the active mux. Mux The reference number of the multiplex containing this channel. Click Mux to order the channels by multiplex number. 50 Channel Configuration Num The channel number is displayed in the channel list on the Status page. This is the number displayed by AvediaPlayer Receivers, AvediaPlayer/ArtioPortal Desktop clients, and the AvediaServer Channel Monitor application, and can be configured as required. The Channel number field may be pre-populated by the scan. Name The channel name. This is the name displayed by AvediaPlayer Receivers, AvediaPlayer/ArtioPortal Desktop clients and the AvediaServer Channel Monitor application. The default name is that applied by the broadcaster. To change this name, click the name and edit the text field. Groups Within the Exterity IPTV system a simple but powerful mechanism called groups is used to filter access to content available to receivers or computer based clients on the network. For example, you can configure a group of sports channels and a group of children’s channels. Channels are assigned to group(s) by Exterity Encoders and TVgateways. The name is included as part of the SAP announcements and the groups mechanism allows Exterity Receivers and AvediaServer/ArtioPortal Desktop clients to list only channels in a particular group or groups. The default value is “all”, meaning that the channel is a member of all groups. To change the group membership for a channel, click the group name and edit the text field. Where required, enter more than one group name(s) in a comma- separated list. Note: Valid characters are: A-Z (upper case alphabet), a-z (lower case alphabet), 0- 9, and _ (underscore) CA Indicates whether or not the channel is scrambled (encrypted). FTA indicates Free To Air with no restriction on streaming. Scrambled indicates that the channel’s content is protected: • g4415-vx - indicates that a CAM module, access card and Verimatrix VCAS server are required. DRM (g4415-sm, g4415-lk and g4415-vx only) In order to stream a channel which is scrambled, select the required option from the drop-down list for the channels you want to encrypt and stream. 51 Channel Configuration • If you are using SecureMedia with the g4415-sm, these bands match those set up on the SecureMedia Broadcast Director, and are available only if the TVgateway has been registered with the Broadcast Director. • If you are using Samsung LYNK DRM technology with the g4415-lk, select Lynk™ DRM. • If you are using Verimatrix technology with the g4415-vx, select VCAS™ IPTV. Decrypt (g4412 and g4442 only) Select Decrypt to decrypt the encrypted channel using the CAM and stream it in the clear. Edit Channel The Edit Channel window allows you to configure advanced channel configuration settings such as more than one destination address and PID filtering. Refer to Advanced Channel Configuration for more information. Selecting Channels To select channels: 2. Click the View: Active Mux only check box to list only the channels on the selected multiplex: 52 Channel Configuration 3. Click the Enabled only check box to display only the channels you have selected for streaming. 4. If required, change the Name and Number of any channel using the Name and Num fields, and enter the group membership name(s) in the Groups field. Please note that commas (,) should not be used in any field as they interfere with SAP announcements. To stream the selected channels onto the IPTV network refer to Streaming. Refer to Advanced Channel Configuration below for more information about advanced channel configuration such as PID filtering. • Manually configure parameters such as the multicast address. Each channel selected for streaming can be individually configured. • Create and configure duplicate channels, allowing you to stream a multi-language channel as discrete single language channels, for example. Specifying the channel content and meta data makes use of the Service Information (SI) and Program Specific Information (PSI) tables. When making changes in the Edit Channel window, click OK to close the window, then click Apply on the Channel page to save your settings. 53 Channel Configuration Alternatively, you can manually set a multicast address for each channel, which overrides the automatic setting. If you do this, ensure the address you specify for each channel is unique for the network. If you specify a multicast address without entering a port number, the default tuner port (set using Stream > Default Port) is used. You can manually configure a different multicast address, or one or more unicast addresses. When streaming to unicast addresses, you may also want to disable SAP announcement of the channel. To globally disable SAP announcements, deselect the SAP Service check box on the Services page. 1. Configure each channel required for streaming as described in Selecting Channels for Streaming. 2. Click for the channel you want to configure to open the Edit Channel window. 3. To disable SAP announcement of the channel, deselect the SAP Announce check box. 4. Specify the Destination Addresses. If left blank, the default multicast address and port are used. 54 Channel Configuration Please be aware that entering multiple addresses creates multiple streams which increases bandwidth usage. 1. Configure each channel required for streaming as described in Selecting Channels for Streaming. 2. Click for the channel you want to configure to open the Edit Channel window. The TVgateway automatically enables all video, audio, subtitle/closed caption and Teletext elements. 3. Click the Enabled check boxes to enable each channel element you want to include in the stream. 4. If you require a particular number for the PID, enter this in the Mapped PID box. This is then used instead of the default PID for that element. 55 Channel Configuration Including additional service information may be useful for processes subsequently applied to the channel after it is streamed. For example, if the channel is to be decrypted by an IPTV set-top box or player client, the CAT is most likely required. 1. Configure each channel required for streaming as described in Selecting Channels for Streaming. 2. Click for the channel you want to configure to open the Edit Channel window. 3. Click the check box for the additional SI table(s) you want to include in the stream. (For example, CAT and NIT.) Tip: To specify additional tables not available from the check boxes, you can enter the decimal value of the required table(s). Enter the table number and/or ranges separated by commas. For example 4,7,100-104: 5. Click OK to close the window, then click Apply to save your changes. 1. Configure each channel required for streaming as described in Selecting Channels for Streaming. 2. Click for the channel you want to configure to open the Edit Channel window. 56 Channel Configuration Copying a channel 3. Click Create Copy. The duplicate channel is added to the channel list. 4. Re-name the duplicate channel by clicking in the name field and editing the name, for example Euronews. Click Apply to save the new name. Renaming a channel 6. Deselect the Audio PIDs you do not want to include in this channel. 8. Continue the process of copying the channel, re-naming and enabling the required content until you have configured all the required channels. Duplicate channels can be deleted when required. You cannot delete the source channel listing. 2. Click Delete. 57 Channel Configuration 58 Streaming Once you have scanned for channels and selected those required, you can stream the channels onto the IP network. Streaming is configured for each individual tuner. In most circumstances the default settings are suitable, but you can manually configure parameters such as the base IP address used for the multicast address assignment. AvediaStream TVgateways can be configured to concurrently access multiple high bit rate channels, and are capable of streaming up to 500Mbps onto the network. Ensure your network architecture and devices are capable of handling these high data rates, and that the network is correctly multicast enabled before starting to stream channels. Refer to the Transmit % Utilisation on the Network page to determine how much of the capacity of the Ethernet interface is being used for streaming TV channels. You should also take the number of channels into account and ensure that you do not exceed the number of multicast groups the network can handle. For example, lower end switches and routers may only support 255 different multicast groups. • Starting/Stopping Streaming 59 Streaming Stream page 2. Click the Stream on Boot check box to disable or enable the stream on boot function. 3. Click Apply. Starting/Stopping Streaming Start/stop control of streaming is applied on a per-tuner basis. When the TVgateway has started streaming, the complete list of channels streaming is shown on the Status page. To control streaming on a tuner: 2. Click Start or Stop to start or stop all channels streamed from the tuner. If a tuner is already streaming when you apply changes to the list of channels, you do not need to manually stop and restart channel streaming as the changes are made dynamically. However, if the tuner is not streaming, you must manually restart streaming after making changes. 60 Streaming Parameter Description Num The channel number as advertised in SAP announcements and displayed in the channel list on Exterity Receivers and clients. Name The channel name as advertised in SAP announcements and displayed in the channel list on Exterity Receivers and clients. SAP Indicates the SAP announcement state of the channel. Note that the SAP setting on the Services page can globally enable/disable SAP announcements. If SAP is disabled, the SAP column is empty. 61 Streaming Parameter Description SM Band (g4415-sm, g4415-lk and g4415-vx only) Indicates which form of DRM has been selected on the Channels page. This can be LYNK™ DRM, a SecureMedia band or VCAS™ IPTV. The SecureMedia bands match those set up on the SecureMedia Broadcast Director, and are available only if the TVgateway has been registered with the Broadcast Director. Please see Selecting Channels for information on how to change the channel name and number and to configure group membership. Stream Configuration This section explains how to apply stream settings. These mainly relate to the way the stream is transmitted on the network, such as the Transport Protocol. The following settings are configured per tuner, on the Stream page: The following settings are configured for the TVgateway as a whole, on the Network page: • Specifying IP TOS/Diffserv The following setting is configured for the TVgateway as a whole, on the Services page: 3. Click Apply. 62 Streaming Automatically assigned multicast addresses can be manually overridden on the Channels page. Using the default base address 239.192.0.0 as an example, these automatic addresses will occupy the following address ranges (where ‘y’ represents the last octet of the TVgateway’s IP address): Tuner From To A 239.192.0.y 239.192.63.y B 239.192.64.y 239.192.127.y C 239.192.128.y 239.192.191.y D 239.192.192.y 239.192.255.y E 239.193.0.y 239.193.63.y F 239.193.64.y 239.193.127.y G 239.193.128.y 239.193.191.y H 239.193.192.y 239.193.255.y If a multiplex carries more than 64 channels the additional channels will require manual configuration. 2. Enter the required base address, remembering that only the first 15 bits are relevant (refer to Assigning Multicast Addresses). 3. Click Apply. 63 Streaming The port number for each channel can be manually overridden on the Channels page. 2. Enter the new value in the Default Port field and click Apply. These settings are ignored if you have specified destinations in the Channels page. Refer to Advanced Channel Configuration for more information. Stream on Boot When this option is selected, the TVgateway automatically starts to stream on startup (assuming it has been previously configured to do so, and the RF feed is connected) and restarts the channel streams after an event such as a power outage. Deselect this if you do not want the streams to start immediately on boot. Stream on boot is enabled by default. 1. Click Stream. 3. Click Apply. This TTL value applies only to channel streams. The TTL for SAP announcements can also be configured using a hidden configuration option. Details are available on request. 1. Click Network. 3. Click Apply. 64 Streaming Specifying IP TOS/Diffserv You can set the value of the TOS byte in the IP header. By default, the stream is sent with an IP TOS value of 0. Note that the value can be set between 0 and 255. To configure only a Differentiated Services Code Point (DSCP), only the upper six bits are required, with the two lower Explicit Congestion Notification bits (ECN) set to zero. 0 1 2 3 4 5 6 7 Decimal Value 0 0 0 0 0 1 0 0 4 For example, as shown here, to specify a DSCP (decimal) value of 1 you must left shift the binary value by 2 bits and enter a value of 4 in the IP TOS/Diffserv entry field. Refer to RFC 2474 for more detailed information. To specify the IP TOS/Diffserv: 1. Click Network. 3. Click Apply. 1. Click Network. 3. Click Apply. To do this: 1. Click Services. 65 Streaming • If you have one AvediaServer, enter its IP address in the EPG Server field. • If you have more than one AvediaServer and would like to transmit EPG data to them all, multicast is required. Choose an available multicast IP address and specify this in the EPG Server field. (You also need to configure the AvediaServer EPG application to listen on this multicast address). 4. Click Apply. 2. Click SecureMedia. 3. Enter the IP address and registration password of the SecureMedia Server. (Default is securemedia). 66 Streaming TVgateway registered 5. Confirm the status is as expected. Any Registration message other than ‘OK’ indicates the process was not successful. Check all IP addresses and ensure the TVgateway has a SecureMedia license. 3. Enable the channels you want to stream and select the associated SecureMedia band from the SM Band drop-down list, then click Apply. 67 Streaming Configuration During the VCAS server setup, you must set up a VPN. Instructions on how to do this can be found in the Connecting AvediaStream g4415-vx TVgateways to a Verimatix Server Configuration Guide. 1. First set up the VPN on the VCAS server (some of the information required for the TVgateway setup is generated during this procedure). 2. Click Verimatrix on the TVgateway's web interface to display the following page: 68 Streaming 69 Streaming • VCAS ECMG Port: If the Verimatrix VCAS server’s ECMG is using a port other than the default (12704), specify it here. • VCAS VECMG Channel: VECMG Channel number as shown in the VCAS Administrative Interface under “Streams”. 5. During VPN installation, certain files are created which contain certificate/key information required to finish configuring the TVgateway: • VCAS tunnel CA certificate: Paste in from the ca.crt file generated during the VPN setup. For security reasons these certificates/keys are no longer visible after they have been applied. "(Present)" indicates that a certificate has been applied. If the field is blank, this means that no key or certificate is present. 70 Status Monitoring This section explains how to check the operating status of the TVgateway. It contains the following sections: • Warning Messages For details of each item, please see About the TVgateway. This information is useful for identifying the software and hardware revisions in use on this device. If contacting technical support regarding a problem with the device, it can be useful to provide all this information. 71 Status Monitoring • Main Menu On the left side of the Status page operational status is summarized for each tuner using simple traffic light indicators in the tuner name label. The indicators are visible at all times. Refer to Understanding the Traffic Light Indicators for more information. 72 Status Monitoring • Tuner Status The Tuner Status section of the page displays more details about each tuner, including indicators for streaming status and signal, signal strength and quality information, and summary information about the selected multiplex and each channel currently streaming. Refer to Understanding the Traffic Light Indicators for more information about the traffic light indicators. • Rear Panel Each tuner has its own LED on the rear panel, which provides information about the TVgateway’s operational status. Refer to Understanding the Traffic Light Indicators for more information. Warning Messages CPU and Temperature Status These details are found on the Status page. The TVgateway alerts you if there are changes in CPU fan activity, for example, if the fan speed drops, or if the temperature starts to rise. The uptime is also indicated. This is the length of time since the device was restarted. 73 Status Monitoring If a low fan speed warning is displayed, please contact Exterity support as the CPU could start to overheat if the fan stops spinning. If the CPU's core temperature gets too hot the unit automatically stops streaming and waits for user intervention. If the Board Temperature exceeds 60°C, you should stop using the unit immediately. Connect the AvediaStream g4418 to a multiswitch rather than directly to an LNB if the LNB draws more than 100 mA from the TVgateway. Failure to do this may result in an Over Current warning (), and power to the LNB being switched off. The AvediaStream g4410, g4412, g4415-sm or g4415-lk can be connected to either a multiswitch or LNB. If a scan is attempted but fails because of over-current, a warning is also provided on the Scan page: 74 Status Monitoring 75 Status Monitoring Multiplex Information The Multiplex section shows the following details about the multiplex you have selected to be active on the Tuner Multiplex pages: • Displays the number (internal ID) of the multiplex. For example, here Multiplex 1 (#1) has been selected for Tuner A. • Displays the RF input signal frequency (and polarization for satellite inputs). For example, here Tuner A is tuned to a signal at 522 MHz with horizontal polarization. • Displays details taken from the selected transmitter file. For example, here Tuner A is using the transmitter file for Central Scotland. • Signal Strength – Indicates the power level of the RF input signal. Generally, the stronger the signal, the better. • Signal Quality – The average number of received bit errors that have been successfully corrected. This number will vary slightly over time. Note that the signal quality is an instantaneous measurement and may fluctuate. Error Statistics Uncorrectable Blocks Most RF signals contain errors. The tuners frequently correct these errors automatically. Some of these errors are not correctable and are reported as uncorrectable blocks under the heading UCB Errs. This is the most important metric for determining the quality of RF signal to the TVgateway. If this value is steadily increasing, the signal is almost certainly not of good enough quality and results in a poor TV picture. Continuity Errors DVB programs are transmitted as MPEG Transport Streams. Transport Stream packets contain a continuity counter which allows stream integrity to be checked. Any missing packet is reported as a continuity error under the heading Cont Err. If the continuity error count is steadily increasing it will probably result in a poor TV picture and usually indicates that the signal is not of good enough quality. 76 Status Monitoring If the continuity error count is increasing while the UCB error count remains static, this may indicate that the TVgateway is close to its performance limit. The Error Counter Reset button allows you to reset the error counts to 0. This can be useful to see if a change you have made to rectify a problem (such as a signal quality issue) has been successful. To view details of network port utilization, click Network. The Ethernet interface statistics are shown along the bottom of the page. If the TVgateway has been inserted into a c1210 chassis, and both Ethernet interfaces have been connected, Ethernet statistics are shown for both interfaces. • Transmit % Utilization– Indicates how much of the capacity of the Ethernet interface is being used for streaming TV channels. Best practice suggests that you do not exceed 80% capacity in normal usage, and you may have to adjust the number of channels being streamed to maintain this. • Transmit Errors – (Errors, Dropped, Collisions) If being recorded in any volume, these may indicate that the interface capacity has been reached, or may indicate a mismatch in Ethernet settings between the TVgateway and the network switch (e.g. auto-negotiation settings mismatch). Any transmit errors may adversely affect the quality of TV picture at the endpoints. • Receive % Utilization – Indicates how much traffic the TVgateway is receiving from the network. Under normal circumstances this should be 0%. If this is non-zero, this may indicate that the network is not correctly multicast enabled, resulting in the TVgateway receiving multicast traffic from other streaming devices. Even if the Receive Utilization % displays 0%, some non-streaming traffic is being constantly received. This may be traffic: 77 Status Monitoring CAM Menu Navigate the CAM menu system and view the details available by clicking the information labels (such as Consultation) and Back. To use the admin interface: 2. Select option 1>Show Diagnostics (enter ‘1’) to display the diagnostics list as shown below: 78 Status Monitoring The CAM menu is type specific. No other details are included here. 79 Maintenance • Upgrading Firmware • Logging 1. Click Maintenance. 2. Enter the IP address of the TFTP server in the TFTP Server field, then click Apply. 1. Click Maintenance. 2. Enter the required IP address in the SNMP Trap Manager field and click Apply. 80 Maintenance If no time server is present, the TVgateway’s internal clock is used, which starts at Jan 1 1970 (Linux Epoch). The TVgateway can be configured with a Time Server IP address in one of two ways: • Manual configuration A manually configured time server overrides a time server provided by the DHCP Server. 1. Click Maintenance. 2. Enter the IP address or the host name of the time server in the Time Server field, and click Apply. If already configured, the IP address is displayed. Before specifying the time zone used by the TVgateway, it is important to ensure that the base time of the TVgateway is correct. You can do this by specifying a time server address. 1. Click Maintenance. 81 Maintenance 3. Click Apply. The TVgateway must be rebooted for any time zone changes to apply. Please note that if you do not select a time zone, the default time zone of UTC is used. 82 Maintenance Upgrading Firmware By upgrading firmware regularly, you can ensure that you are always using the most recent version. As the new firmware is uploaded using TFTP, you must first ensure that the TVgateway is using the correct TFTP server address (see Specifying the TFTP Server’s Address). 1. Click Maintenance. 2. Ensure that you have specified the IP address of your TFTP server and that it is running and configured correctly. 3. Ensure that the following firmware file is hosted in the root directory of the TFTP server: gateway_4g.srec 4. Ensure that the Firmware filename field shows the file indicated above (or matches the name of the firmware file if this is different). 6. The firmware is downloaded from the TFTP server. This process will take several minutes. When resetting to factory default settings, all previously saved settings are lost. IP addressing is returned to DHCP. 1. Click Maintenance. All configuration settings, including device-specific settings (IP address, name and location) are saved when exported. When a saved configuration file is imported, all settings except the IP address, name and location are imported. 83 Maintenance 1. Ensure that the TFTP server is running and is correctly configured (see Specifying the TFTP Server’s Address). 2. Click Maintenance. 3. Enter a name for your configuration archive in the Export filename field. 1. Ensure that the TFTP server is running and that the configuration file is hosted in the root directory of the TFTP server. 2. Click Maintenance. 3. Enter the name of the configuration file in the Import Filename field. The configuration file is downloaded from the TFTP server and the TVgateway reboots. Logging The TVgateway saves historical information about internal events within the device to its log file. This can be useful when troubleshooting problems with the device. All log information up to the selected level is automatically saved locally and can be viewed in the Web Interface. Use of a time server ensures all devices in your IPTV system are synchronized. The TVgateway uses NTP and the configured time zone to maintain an accurate time on the device, useful when examining log files as each log message has an accurate timestamp. Logging is configured in the Web Interface Logging page as shown below. Emergency (level 0) The highest priority, usually reserved for catastrophic failures and reboot notices. Info (level 6) The lowest priority that you would normally log, and purely informational in nature. Debug (level 7) The lowest priority, and normally not logged except for messages from the kernel. 84 Maintenance Under normal circumstances, the log level should be set to 5 or 6. Level 7 should ideally only be used for diagnostics, as it logs all device activity. The default logging level is 4. Local Logging You can view the log file in the Web Interface or download it to your computer. All log information up to the selected level is automatically saved locally. The log is stored in memory and is lost if the TVgateway is rebooted or powered down. As TVgateway memory capacity is limited, older log information is overwritten. To configure local logging: 1. Click Logging. 3. Select a logging level from the Local logging level drop-down list and click Apply. 1. Click Logging, and click Show Log to display the log in a browser window. 1. Click Logging. 2. Click Download log to download the log file to the configured download folder on your local computer. The downloaded log file can be more easily viewed with an application which understands Unix line endings. For example, on Windows®, Wordpad is preferable to Notepad. Remote Logging To send device log information to a remote server, you need to install a syslog server application on the remote server. Then set up the remote logging function on the TVgateway as described below. 1. Click Logging, then select Local and Remote from the Logging drop-down list. 2. In the Syslog server field, enter the IP address or host name of the syslog server where the log files are to be sent. 3. In the Syslog port field, enter the port number on the syslog server, then click Apply. The default value is 514. 85 Serial Interface Connection The serial port provides access to a small subset of device functionality. For example, you can configure an IP address using a terminal program session, such as PuTTY or HyperTerminal. See Management Interfaces for more information. • Cabling • Adaptor Wiring • Opening a Session Cabling To connect to the serial interface use the female DB-9 to RJ45 adaptor or the USB – RJ45 serial cable shown below. The female DB-9 connector should be plugged into the serial port on a PC. A straight-through network cable should be used between the RJ45 socket on the adaptor and the admin port on the device. Although the cable fits, the admin port should not be connected to the Ethernet port on a PC. Adaptor Wiring If you do not have an adaptor you can make one using the details shown below. 86 Serial Interface Connection 2 TxD 8 3 RxD 2 5 GND 4 Opening a Session 1. Open a terminal program such as PuTTY or HyperTerminal. • Data bits: 8 • Parity: none • Stop bits: 1 The program should now connect and present a login prompt when you press the Return key. 87 Recommended Signal Levels The digital satellite, terrestrial and cable AvediaStream TVgateways require good quality signals at their inputs. The recommended signal levels are specified below. 88 Recommended Signal Levels • Code Rate: • QPSK: 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 5/6, 8/9, 9/10 89 Scan Resources This page lists useful sources of information which may help you scan for the required channels. • Satellite • Terrestrial Satellite The websites listed below contain details of major satellites broadcasting DVB-S/S2 signals. Channels are listed on a per-frequency basis, and maps are available detailing satellite coverage. • • Terrestrial DVB-T broadcasts are country-specific, and within a country there will be multiple transmitters, each broadcasting on a different set of frequencies. To find out the transmission frequencies for your location you may need to get this information from the relevant national broadcasting authority. • Parameter Meaning 90 Scan Resources Example Multicast addresses are generated using the values below: BBBBBBBB.BBBBBBT.TTNNNNNN.IIIIIIII where: Octets • Octet 1 is the first octet of the base address – fixed at 239 (decimal). • Octet 2 is the sum of the second octet of the base address – 192 (decimal) (B)+ the first bit of the three bits required to specify the source tuner (T) in binary: • Tuner A = 000 • Tuner B = 001 • Tuner C = 010 • Tuner D = 011 • Tuner E = 100 • Tuner F = 101 • Tuner G = 110 • Tuner H = 111 For example: • Octet 3 is determined by the TVgateway – it combines the two remaining bits of the source tuner (T) with 6 bits used to define the program index to get the 8 bits. 91 Scan Resources If a multiplex has more than 63 channels, you should manually assign multicast addresses to these channels to avoid conflicts. A 0 - 63 B 64-127 C 128 - 191 D 192 - 255 E 0 - 63 F 64-127 G 128 - 191 H 192 - 255 Transmitter files are supplied as text files, with the first two lines being description and date parameters; see the example below. The table below details all parameters used in transmitter files and their meanings, and the Examples below provides examples of these for satellite, terrestrial, and cable transmitter types. 3 Frequency Hz 1 = QAM 16 2 = QAM 32 92 Scan Resources 3 = QAM 64 4 = QAM 128 5 = QAM 256 6 = Auto 7 = 8 VSB 8 = 16 VSB 9 = 8PSK 5 Bandwidth Hz 8 Symbol Rate Hz 9 FEC 1 - 1/2 2 - 2/3 3 - 3/4 4 - 4/5 5 - 5/6 7 - 7/8 8 - 8/9 9 - Auto 10 - 3/5 11 - 9/10 12 - 2/5 13 - 1/4 14 - 1/3 2 - J.83/B 3 - DVB-T 5 - DVB-S 6 - DVB-S2 93 Scan Resources 11 - ATSC 16 - DVB-T2 19 - DVB-C2 240 - DVB-T/T2 118 - Vertical Examples The following examples are described below: date: 1404294928 • Polarization: Horizontal • FEC: 5/6 • Modulation: QPSK • Bandwidth: 8Mhz 94 Scan Resources 95 Safety Notices Before installing and operating these products, please read the safety information in this documentation. EU and Others Safety Information This section contains important safety information. If you are unsure about any of the information in the section, please contact Exter. 7. Do not block any ventilation openings. Install in accordance with the instructions contained in this manual. 96 Safety Notices.. To reduce the risk of fire or electric shock, do not expose this apparatus to rain or moisture. EU and Others Do not proceed beyond a Warning notice until you have understood the hazardous conditions and have taken appropriate steps. Safety Information Warning:There are no user serviceable parts inside any Exterity product. To prevent electric shock or fire hazard, do not remove cover. Refer service to qualified service personnel. For 230/240 volt operation, be sure to use a harmonized grounded 3 conductor cord, rated 6 Amp minimum. Use a suitable cord for connection to the equipment and terminating in an IEC. This equipment relies upon a safety earth for operation, ensure that you always use a power cord with appropriate earth and that the inlet to which is inserted also has the appropriate earth. If in any doubt about the earth provision in your building consult a qualified electrician. Use only the dedicated power supply or cord supplied for your device. 97 Safety Notices Exterity products use ventilation holes for cooling. None of the ventilation holes should be blocked. Keep all materials at least 5cm away from all the ventilation holes. The operating conditions of the product should be 0°C – 40°C with a Relative Humidity of 5 – 95%. The product should not be operated outside of these conditions. There are no user-serviceable parts inside these products. Any servicing, adjustment, maintenance, or repair must only be performed by service-trained personnel. 98 Glossary A AAC: Advanced Audio Coding - a standard for carrying digital audio. AC-3: An audio compression scheme, also known as Dolby Digital. AES: Advanced Encryption Standard ATSC: A set of standards developed by the Advanced Television Systems Committee for digital television transmission over terrestrial, cable, and satellite networks. AV: Audio/Video B BAT: Found in a Transport Stream, the Bouquet Association Table describes a group of services presented as though they are on the same transport stream. Board: The printed circuit board within the unit. C CAT: Found in a Transport Stream, the Conditional Access Table controls the scrambling of a service. It associates one or more CA systems with their EMM (Entitlement Management Message) stream. CDN: Content Delivery Network CEC: Consumer Electronics Control Component Video: A video signal which has been split into two or more physically separate channels. Exterity encoders accept YPbPr component video which is output by many DVD and Blu-ray players and set-top boxes. Composite video: A type of analog video signal where the luminance, chrominance and sync signals are all carried on a single cable. This is often referred to as CVBS. CVBS: Composite Video, Blanking and Sync. An analog video transmission that carries standard definition video typically at 480i or 576i resolution. D Data channel: Data channels do not carry normal audio-video streams but are typically used as control channels under the DSM-CC protocol (part 6 of the MPEG-2 standard). DHCP: Dynamic Host Configuration Protocol is a protocol used to allocate an IP configuration to devices on an IP network. DiSEqC: Digital Satellite Equipment Control is a communication protocol for use between a satellite receiver and a multi-dish switch, which enables the satellite receiver to choose between multiple satellites attached to that switch. It also enables the steering of a motorized dish. DNS: Domain Name Server is used to resolve device names to IP addresses. DVB: Digital Video Broadcasting is a suite of internationally accepted open standards for digital television transmission over terrestrial, cable, and satellite networks. DVB-C/C2: Digital Video Broadcasting standard for Cable delivery. DVB-S/S2: Digital Video Broadcasting standards for Satellite delivery DVB-T/T2: Digital Video Broadcasting standards for Terrestrial delivery. 99 Glossary E ED: Enhanced Definition video, 525p 60Hz and 625p 50Hz. EDID: Extended Display Identification Data EIT: Found in a Transport Stream, the Event Information Table provides information to enable construction of Program Guides. EPG: Electronic Program Guide F FEC: Forward Error Correction FTA: Free to Air FTP: File Transfer Protocol is used to transfer files across a network. H H.264: A standard for video compression, also known as MPEG-4 Part 10 and MPEG-4 AVC (Advanced Video Coding). HD: High Definition video, 720p, 1080i and 1080p (1080p capability only on specified products). HDAV: A digital interface for connection to devices equipped with DVI (DVI-to-HDMI adaptor required) and HDMI connections. HDCP: High-bandwidth Digital Content Protection is designed to prevent copying of digital audio and video content passing across Digital Visual Interface (DVI) and High-Definition Multimedia Interface (HDMI) connections. HDCP Professional: A form of HDCPv2.2 Digital Content Protection. The e3635 HDCP Professional encoder supports up to 1000 clients using HDCP Professional. HDCPv2.2: A form of HDCP Digital Content Protection. The e3635 HDCP 32 encoder supports up to 32 clients using HDCPv2.2. HDMI™: High-Definition Multimedia Interface, a compact interface for transmission of uncompressed digital audio and video content. HLS: An HTTP-based media streaming communications protocol. I IGMP: Internet Group Management Protocol is a protocol used to manage multicast traffic on an IP network. Input: Physical interface on Exterity equipment that receives audio/video from a source. IP: Internet Protocol, a protocol used for communicating data across a network using the Internet Protocol Suite, also referred to as TCP/IP. IP TOS: The Type of Service (TOS) field is a six-bit Differentiated Services Code Point (DSCP) field and a two-bit Explicit Congestion Notification field. L LDAP: Lightweight Directory Access Protocol LNB: Low Noise Block is the receiving device mounted on a dish for satellite TV reception. M MDIX: Medium Dependent Interface Crossover. An Ethernet connection whose transmit and receive pins are crossed over, allowing connection to MDI ports (i.e. PCs or workstations) with standard straight-through cabling. 100 Glossary N NFS: Network File System NIT: Found in a Transport Stream, the Network Information Table provides information about the physical organization of the multiplexes and the network. NTP: Network Time Protocol, used for synchronizing the clocks of computer systems. P PAT: Found in a Transport Stream, the Program Association Table lists all the services found in a transport stream. The PAT is always on PID 0. PID: Found in a Transport Stream, the Packet ID identifies a particular stream of data (e.g. video, audio, etc) within an MPEG Transport Stream. PMS: Property Management System PMT: Found in a Transport Stream, the Program Map Table identifies all the Elementary Streams within a service. POE: Power Over Ethernet R RGBHV: Red (R), Green (G), Blue (B) Component analog video signal with horizontal (H) and vertical (V) synchronization, all on separate lines. It is most commonly used in the VGA connection for computer monitors RGBS: A specific type of RGB where the video sync signal is carried on a fourth cable. RTP: Real-time Transport Protocol, a protocol used to carry real time data on an IP network. RTSP: Real Time Streaming Protocol S SAP: Session Announcement Protocol is a protocol used to advertise the presence of multicast sessions on an IP network. SD: Standard Definition video, 525i 60Hz and 625i 50Hz SDI: Serial Digital Interface. Typically used for transmission of uncompressed, unencrypted digital video signals SDT: Found in a Transport Stream, the Service Description Table provides the name and other information such as languages about the service SFTP: SSH FTP. A network protocol that provides file access, file transfer, and file management over any reliable data stream SNMP: Simple Network Management Protocol Source: A device that can provide an audio/video input to the TVgateway. SSH: Secure Shell SSM: Source Specific Multicast SVC: Scaleable Video Coding, part of the H.264/MPEG-4 AVC video compression standard. Syslog: A protocol for forwarding log message in an IP network 101 Glossary T Telnet: Telnet is a network protocol that enables one computer to communicate with another over an IP network. TFTP: Trivial File Transfer Protocol, a simple file transfer protocol used on IP networks. Transmitter file: A transmitter file typically lists all the frequencies available for transmission in a particular country or geographic region. U UDP: User Datagram Protocol is a transport protocol in the TCP/IP suite, which provides a connectionless transport mechanism with low overhead. Unit: Exterity product, for example, an AvediaStream unit containing a printed circuit board. V VoD: Video on Demand X XML: Extensible Markup Language Y YPbPr: A type of Component analog video signal consisting of a colorless Component (luminance), combined with two color-carrying components (chrominance). This is commonly referred to simply as “Component”. 102
https://de.scribd.com/document/407883405/AvediaStream-g44xx-Gateway-1-6-pdf
CC-MAIN-2019-35
en
refinedweb
Remote vanilla PDB (over TCP sockets) *done right*: no extras, proper handling around connection failures and CI. Based on `pdbx <>`_. Project description Remote vanilla PDB (over TCP sockets) done right: no extras, proper handling around connection failures and CI. Based on pdbx. - Free software: BSD 2-Clause License Installation pip install remote-pdb Usage To open a remote PDB on first available port: from remote_pdb import set_trace set_trace() # you'll see the port number in the logs To use some specific host/port: from remote_pdb import RemotePdb RemotePdb('127.0.0.1', 4444).set_trace() To connect just run telnet 127.0.0.1 4444. When you are finished debugging, either exit the debugger, or press Control-], then Control-d. Alternately, one can connect with NetCat: nc -C 127.0.0.1 4444 or Socat: socat readline tcp:127.0.0.1:4444 (for line editing and history support). When finished debugging, either exit the debugger, or press Control-c. Integration with breakpoint() in Python 3.7+ If you are using Python 3.7 one can use the new breakpoint() built in to invoke remote PDB. In this case the following environment variable must be set: PYTHONBREAKPOINT=remote_pdb.set_trace The debugger can then be invoked as follows, without any imports: breakpoint() As the breakpoint() function does not take any arguments, environment variables can be used to specify the host and port that the server should listen to. For example, to run script.py in such a way as to make telnet 127.0.0.1 4444 the correct way of connecting, one would run: PYTHONBREAKPOINT=remote_pdb.set_trace REMOTE_PDB_HOST=127.0.0.1 REMOTE_PDB_PORT=4444 python script.py If REMOTE_PDB_HOST is omitted then a default value of 127.0.0.1 will be used. If REMOTE_PDB_PORT is omitted then the first available port will be used. The connection information will be logged to the console, as with calls to remote_pdb.set_trace(). To quiet the output, set REMOTE_PDB_QUIET=1, this will prevent RemotePdb from producing any output – you’ll probably want to specify REMOTE_PDB_PORT as well since the randomized port won’t be printed. Note about OS X In certain scenarios (backgrounded processes) OS X will prevent readline to be imported (and readline is a dependency of pdb). A workaround (run this early): import signal signal.signal(signal.SIGTTOU, signal.SIG_IGN) See #9 and cpython#14892. Requirements Python 2.6, 2.7, 3.2, 3.3 and PyPy are supported. Changelog 2.0.0 (2019-07-31) 1.3.0 (2019-03-13) - Documented support for Python 3.7’s breakpoint(). - Added support for setting the socket listening host/port through the REMOTE_PDB_HOST/REMOTE_PDB_PORT environment variables. Contributed by Matthew Wilkes in #14. - Removed use of rw file wrappers around sockets (turns out socket’s makefile is very buggy in Python 3.6 and later - output is discarded). Contributed in #13. 1.2.0 (2015-09-26) - Always print/log listening address. 1.1.3 (2015-07-06) - Corrected the default frame tracing starts from. 1.1.2 (2015-07-06) - Small readme update. 1.1.1 (2015-07-06) - Remove bogus remote_pdb console script. 1.1.0 (2015-06-21) - Fixed buffering issues when running on Python 3 and Windows. 1.0.0 (2015-06-15) - Added support for PDB++. 0.2.1 (2014-03-07) - First release on PyPI. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/remote-pdb/
CC-MAIN-2019-35
en
refinedweb
Now that we have a lot of books in our library system, it would be great if we could quickly filter the books based on their category. HTML supplies a nifty builtin tag, called multi select, that will let us display a couple options. Our librarians can then select one category, or a couple of categories, and we’ll filter the books displayed on the page. This is going to require a number of additions to our existing app. We’ll create a new controller that returns a list of books in HTML that will be sent over the wire. We’ll have to setup a route to handle the search request. Since our stimulus controller will use the fetch api, and we’re sending filter parameters, it seemed like this action would work best as a POST end point. I’m going to work from my previous tutorial, which you can find on Github. Here’s our setup. Refactor the Books index action Our books_controller.rb now gets an extra list of data. This is leveraging the pluck operation, which only returns the values in a single column as an array. Then we use the ruby uniq method to give us all the unique values. These will appear in the select field as selectable values. class BooksController < ApplicationController def index @categories = Book.all.order(:category).pluck(:category).uniq @books = Book.all end end We will refactor our index.html.erb to include the book table as a reusable fragment, and the select tag. <h1>All Books</h1> <div data- <div class="left-col"> <h2>Filter categories:</h2> <select name="categories" multiple <%= @categories.each do |category| %> <option value="<%= category %>"><%= category.humanize %></option> <% end %> </select> </div> <div class="right-col" data- <%= render partial: 'book_list', locals: { books: @books} %> </div> </div> And our fragment, _book_list.html.erb: <table> <thead> <tr> <th>Title</th> <th>Author</th> <th>Publisher</th> <th>Category</th> </tr> </thead> <tbody> <% books.each do |book| %> <tr> <td><%= book.title %></td> <td><%= book.author %></td> <td><%= book.publisher %></td> <td><%= book.category %></td> </tr> <% end %> </tbody> </table> Add our new Books Filter Controller We’ll need to add the route for our new controller: Rails.application.routes.draw do resources :books post 'books_filter', action: :index, controller: 'books_filter' end And our new controller, books_filter_controller.rb: class BooksFilterController < ApplicationController def index @books = Book.where(category: params[:categories]).order(:category) end end And finally, the view that will be sent back to our Stimulus filter controller, books_filter/index.html.erb. It’s going to reuse the previous books_list.html.erb fragment, so that any changes we make to that one file will be propagated throughout our app. <%= render partial: 'books/book_list', locals: { books: @books} %> Using Stimulus to Wrap It All Together Let’s create our stimulus controller that will handle the selection changes, load the new filtered list of books, and change the pages html. import { Controller } from "stimulus" export default class extends Controller { static targets = [ "books" ] change(event) { fetch(this.data.get("url"), { method: 'POST', body: JSON.stringify( { categories: [...event.target.selectedOptions].map(option => option.value)}), credentials: "include", dataType: 'script', headers: { "X-CSRF-Token": getMetaValue("csrf-token"), "Content-Type": "application/json" }, }) .then(response => response.text()) .then(html => { this.booksTarget.innerHTML = html }) } } function getMetaValue(name) { const element = document.head.querySelector(`meta[name="${name}"]`) return element.getAttribute("content") } When the select element changes, our controller fetches data from a url we’ve set as a data attribute of the controller. We post the categories to our book_filter index action, which takes the tags, and returns our filtered table. The html table is replaced with the new data. Now you have a controller that retrieves data from an API and replaces html on your page. And it let’s you select multiple categories, so you can easily find your favorite scientific treatise and horror book. Want To Learn More? Try out some more of my Stimulus.js Tutorials. One Comment […] Stimulus.js Tutorial – Using multi select to pare down a large set of data […]
https://johnbeatty.co/2018/10/04/stimulus-js-tutorial-using-multi-select-to-pare-down-a-large-set-of-data/
CC-MAIN-2019-35
en
refinedweb
Repairs triangular meshes Project description Python/Cython wrapper of Marco Attene’s wonderful, award-winning MeshFix software. This module brings the speed of C++ with the portability and ease of installation of Python.. Installation pip install pymeshfix git clone cd pymeshfix pip install . Dependencies Requires numpy and pyvista Examples Test installation with the following from Python: from pymeshfix import examples # Test of pymeshfix without VTK module examples.native() # Performs same mesh repair while leveraging VTK's plotting/mesh loading examples.with_vtk() Easy Example This example uses the Cython wrapper directly. No bells or whistles here: from pymeshfix import _meshfix # Read mesh from infile and output cleaned mesh to outfile _meshfix.clean_from_file(infile, outfile) This example assumes the user has vertex and faces arrays in Python. from pymeshfix import _meshfix # Generate vertex and face arrays of cleaned mesh # where v and f are numpy arrays or python lists vclean, fclean = _meshfix.clean_from_arrays(v, f) Complete Examples with and without VTK One of the main reasons to bring MeshFix to Python is to allow the library to communicate to other python programs without having to use the hard drive. Therefore, this example assumes that you have a mesh within memory and wish to repair it using MeshFix. import pymeshfix # Create object from vertex and face arrays meshfix = pymeshfix.MeshFix(v, f) # Plot input meshfix.plot() # Repair input mesh meshfix.repair() # Access the repaired mesh with vtk mesh = meshfix.mesh # Or, access the resulting arrays directly from the object meshfix.v # numpy np.float array meshfix.f # numpy np.int32 array # View the repaired mesh (requires vtkInterface) meshfix.plot() # Save the mesh meshfix.write('out.ply') Alternatively, the user could use the Cython wrapper of MeshFix directly if vtk is unavailable or they wish to have more control over the cleaning algorithm. from pymeshfix import _meshfix # Create TMesh object tin = _meshfix.PyTMesh() tin.LoadFile(infile) # tin.load_array(v, f) # or read arrays from memory # Attempt to join nearby components # tin.join_closest_components() # Fill holes tin.fill_small_boundaries() print('There are {:d} boundaries'.format(tin.boundaries()) # Clean (removes self intersections) tin.clean(max_iters=10, inner_loops=3) # Check mesh for holes again print('There are {:d} boundaries'.format(tin.boundaries()) # Clean again if necessary... # Output mesh tin.save_file(outfile) # or return numpy arrays vclean, fclean = tin.return_arrays() Algorithm and Citation Policy To better understand how the algorithm works, please refer to the following paper: M. Attene. A lightweight approach to repairing digitized polygon meshes. The Visual Computer, 2010. (c) Springer. DOI: 10.1007/s00371-010-0416-3 This software is based on ideas published therein. If you use MeshFix for research purposes you should cite the above paper in your published results. MeshFix cannot be used for commercial purposes without a proper licensing contract. MeshFix is Copyright(C) 2010: IMATI-GE / CNR This program is dual-licensed as follows: (1) You may use MeshFix as free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version. In this case the program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License () for more details. (2) You may use MeshFix as part of a commercial software. In this case a proper agreement must be reached with the Authors and with IMATI-GE/CNR based on a proper licensing contract. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pymeshfix/
CC-MAIN-2019-35
en
refinedweb
hu_KDFDerive() Derives a value of the requested length based on shared secret information, suitable for use as a key value. Synopsis: #include "hukdf.h" int hu_KDFDerive(int algid, size_t secretLen, const unsigned char *sharedSecret, size_t addInfoLen, const unsigned char *addInfo, size_t keyLen, unsigned char *keyValue, sb_GlobalCtx sbCtx) Since: BlackBerry 10.0.0 Arguments: - algid A KDF algorithm. The acceptable values are one of the HU_KDF_* macros. - secretLen The length (in bytes) of the shared secret data. - sharedSecret The shared secret data. - addInfoLen The length (in bytes) of the additional information. (Optional) - addInfo Additional information. (Optional - set to NULL if not used.) - keyLen The length (in bytes) of the key buffer. - keyValue The key buffer. - sbCtx A global context. Library:libhuapi (For the qcc command, use the -l huapi option to link against this library) Description: Additional shared information may also be given. For the IEEE KDF1 algorithm, the requested length must be the underlying digest algorithm's output length. When the NIST Alternative 1 KDF is used, the underlying hash algorithm must be registered. If not, a not supported error for the hash algorithm will be returned. Returns: - SB_ERR_KDF_BAD_ALGORITHM The KDF algorithm identifier is invalid. - SB_ERR_NULL_INPUT_BUF The shared secret value is NULL. - SB_ERR_BAD_INPUT_BUF_LEN The length of the shared secret length is invalid. - SB_ERR_NULL_ADDINFO The additional information value is NULL. - SB_ERR_NULL_OUTPUT_BUF The key buffer is NULL. - SB_ERR_BAD_OUTPUT_BUF_LEN The length of the key buffer length is invalid. - SB_SUCCESS Success. Last modified: 2014-05-14 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
https://developer.blackberry.com/native/reference/core/com.qnx.doc.crypto.lib_ref/topic/hu_KDFDerive.html
CC-MAIN-2019-35
en
refinedweb
Yup, it does. Nice detective work! On Mon, Jul 13, 2015, 8:58 AM Britton Smith [email protected] wrote: Your tip led me to the right answer. The call to parallel_objects was happening in the derived quantity, where each processor is being made into its own comm where it is rank 0. The issue is that they then try to identify fields and incorrectly think of themselves as rank 0 for choosing which grids to look at. If I simply as ds.index right after creating the dataset, the problem goes away. This should probably just be added to the bottom of the __init__ for EnzoDatasetInMemory. Does that sound right? Britton On Mon, Jul 13, 2015 at 2:38 PM, Matthew Turk [email protected] wrote: That sounds like a new communicator got pushed to the top of the stack when it should not have been, perhaps in a rogue parallel_objects call. On Mon, Jul 13, 2015, 8:35 AM Britton Smith [email protected] wrote: Hi again, Maybe this is a clue. In _generate_random_grids, self.comm.rank is 0 for all processors, which would explain why N-1 cores are trying to get grids that don't belong to them. Interestingly, mylog.info prints out the correct rank for each of them. Britton On Mon, Jul 13, 2015 at 2:21 PM, Britton Smith [email protected] wrote: Hi Matt, Thanks for your help. Adjust by grid._id_offset did not work, but I can that what is happening is that all processors are trying to call _read_field_names using grid 1, when only processor 0 owns that grid. I will look into why now, but if you have any intuition where to check next, that would be awesome. Thanks, Britton On Mon, Jul 13, 2015 at 1:51 PM, Matthew Turk [email protected] wrote: Hi Britton, What looks suspicious to me is the way it's using grid.id. This might lead to an off-by-one error. Can you try it with grid.id-grid._id_offset and see if that clears it up? On Mon, Jul 13, 2015 at 7:42 AM, Britton Smith [email protected] wrote: Hi all, I've recently been trying to use yt's inline analysis functionality with Enzo and am having some difficultly getting it to work in parallel. I am using the development tip of yt. In serial, everything works fine, but in parallel, I get the following error: It seems that the issue is that yt is not correctly identifying which grids are available on a given processory for the EnzoDatasetInMemory object. Does anyone have an idea of how to fix this? Has anyone else seen this? For reference, my user_script is just this: import yt from yt.frontends.enzo.api import EnzoDatasetInMemory def main(): ds = EnzoDatasetInMemory() ad = ds.all_data() print ad.quantities.total_quantity("cell_mass") Thanks for any help, Britton
https://mail.python.org/archives/list/[email protected]/message/6XE4C3N5VUUGGFSAEIY6SFXTZE4T4YQ7/
CC-MAIN-2019-35
en
refinedweb
If you’ve played around with Python and you’re confident writing code, then you may be ready to step it up and try developing your own Python-powered website! While Python is well-known for its capabilities in data analysis and machine learning, it is also used to run some of the largest websites on the internet, including Reddit, Netflix, Pinterest, and Instagram. Choosing a Framework: Django or Flask? If you want to try out some Python web development, you first have to choose between two frameworks: Flask or Django. They both do the same thing – they convert your Python code into a fully-functioning web server. Flask is a minimalist version in which everything is an optional extra you can choose, whereas Django comes with everything already “plugged in” and has pre-defined libraries, techniques, and technologies that you use out-of-the-box. Flask is good for beginners and for quickly building small/medium applications. Django is arguably better for large enterprise applications, but also requires a deeper understanding. For this tutorial, we’ll start with Flask, but you can always hop over to Django in the future once you’re comfortable with the basics. Installing Flask As with most libraries in Python, installation is a dream – just install it with pip: $ pip install Flask That’s it! Onto the next section. Client and Server Before we begin developing our first Flask web app, we need to clear up the roles of the client and the server. The client is a web browser, like Google Chrome or Firefox. When you type in a web address and press enter, you’ll send a request to a server that will “serve” the website you want. The server will respond to your request with the website (in the form of HTML, CSS, and JavaScript) and then render this request in your web browser. HTML defines the structure of the website, CSS gives it styling, and JavaScript is the logic that executes in your browser. Flask is a server-side framework or “back-end” as it’s often called. You program the bit of code that receives a request, builds a web page, and then responds with the necessary HTML (and optional CSS and JavaScript). The discipline of client-side development or “front-end” is concerned with how the website behaves in the browser after it is received from the server. We will not focus on this as client-side development is pretty much only done in JavaScript. Your First Flask Web App Let’s begin with the time-honored tradition of a “Hello World” application. This example is a slightly modified version of the Hello World app in the Flask documentation. from flask import Flask # Import Flask app = Flask(__name__) # Create a Flask "app" instance @app.route("/") # Instructs that the default url of should trigger this code @app.route("/index") # Instructs that the url of should also trigger this code def index(): return "<h1>Hello World!</h1>" # The HTML to return to the browser if __name__ == '__main__': # Startup stuff app.run(host='127.0.0.1', port=5000) # Specify the port and url Run the code and then go to Google Chrome (or your browser of choice) and type in “”, without the quotes. You should see a delightful “Hello World” appear, welcoming you to the world of web development. A Scalable Structure for Flask Web Apps While having all of the code in a single file is a quick and exciting introduction to Flask, you will probably want to split your code into separate files to make it more manageable. To help you in this, I’ve created a simple structure you can use for your Flask applications. Head over to GitHub to grab the source code for yourself. This structure has a “start.py” script that you run to fire-up your webserver. All of the Python code is inside a folder called “code”. This folder is where your “routes.py” file lives. It contains the definitions for requesting web URLs and how your app responds when they are. If you have any Python logic such as calculating stats or machine learning models, then these should also go inside the “code” folder. The “templates” folder houses the HTML files that will be returned by the server. They’re called templates because you can optionally insert values to change the text and graphics at runtime. The “static” folder contains code like CSS and Javascript that does not change at runtime. Setting Up Deployment With what we’ve discussed so far, you’ll be able to create a web app that runs on your computer, but if you try and access your website from another computer, you’ll run into difficulties. This is the topic of deployment. First, we need to adjust the port on which your website presents itself. We want to set the host to “host=0.0.0.0” and the port to “port=80”. Setting the host and port makes your website visible on the IP address of your host computer. For example, if you run your Flask web app on a Raspberry Pi with an IP address of 192.168.1.10, then you can navigate to that IP address in the web browser of a separate computer, and you’ll see your web page served up for you. This is a great foundation for IoT projects and live sensor dashboards. We define that “port=80” as this is the default port for communication over the HTTP protocol. When you request a URL in your browser over HTTP, it knows to go to port 80, so you don’t have to state it explicitly in the web address. For all of this to work, you’ll have to be on the same network as the computer running your web server, and you’ll need to make sure that the firewall on your host computer (if you have one) is opened up to requests on port 80. Next Steps and Going Further If you want to take a deep-dive into Flask web development, then I strongly recommend the Flask Mega Tutorial by Miguel Grinberg. It’s very detailed and is an invaluable source for learning Flask.
https://maker.pro/custom/tutorial/an-introduction-to-the-python-flask-framework
CC-MAIN-2019-35
en
refinedweb
ninth version ZAGREB 2010 No part of this publication should be copied or edited without permission from the author. LIST OF ABBREVIATIONS Abkh. = Abkhaz adm. = admirative ADV = adverbial advers. = adversative Adyg. = Adyghean af. = affirmative ant. = anterior assoc.= associative, associative plural caus. = causative cond. = conditional conj. = conjunctivity dir. = directional (directional prefix) ERG = ergative evid. = evidential fut. = future fut.II = future II ger. = gerund imp. = imperative impf. = imperfect inf.= infinitive INST = instrumental inter. = interrogative intrans. = intransitive invol. = involuntative Kab. = Kabardian neg. = negation NOM = nominative opt. = optative part. = participle perm. = permissive pl. = plural plup. = pluperfect poss. = possessive pot. = potential pref. = prefix pres. = present pret. = preterite quot.part. = quotative particle rec. = reciprocal refl. = reflexivity rel. = relative particle rec. = reciprocal prefix Rus. = Russian PREFACE This grammar should be used with some caution, not only because it was written by a linguist who is far from being a fluent speaker of Kabardian. It is largely compilatory in nature, and many examples were drawn from the existing works on Kabardian by M. L. Abitov, Mukhadin Kumakhov, and others. However, I have also excerpted and analyzed many sentences from the literature, especially from the Nart corpus (Nrtxar, 1951, Nrtxar, 2001), and some examples were elicited from native speakers. Although I have relied heavily on the published scholarly works on Kabardian, my interpretations of the data are sometimes very different from those in the available literature. I have tried to approach the Kabardian language from a typological point of view, comparing its linguistic features, that may appear strange to speakers of Indo-European languages, to similar features found in other languages of the world. Although primarily designed for linguists, I hope that at least parts of this overview of Kabardian grammar may be of some use to laymen. If it succeeds in attracting at least a few people to the study of Kabardian, this grammar will have served its purpose. Apart from John Colarusso's grammar (1992) and his recently published grammatical sketch (2006), and the largely outdated monograph by Aert Kuipers (1960), this is, to my knowledge, the only general overview of the structure of Kabardian available in English. In contrast to these three works, which were composed as a result of field work with native speakers from the Kabardian diaspora, this grammar attempts to describe the standard Kabardian language used in the Kabardino-Balkar Republic of the Russian Federation. This grammar is a result of my long-standing endeavor to learn this exciting and fascinating, though incredibly difficult language. In a world in which a language dies out every fortnight, the linguist's task is at least to describe the small languages threatened by extinction. Although the statistics on the number of speakers of Kabardian does not lead one to think that Kabardian is in immediate danger of extinction, especially if compared with other small Caucasian languages in Russia, sociolinguistic data show that the number of native speakers is decreasing among the younger generations; it seems that it is especially in the diaspora that Kabardian is facing extinction. As R. M. W. Dixon wrote, anyone who calls themselves a linguist should assume the task of saving at least one endangered language from oblivion. This work is my response to this greatest challenge that linguists, as well as other people who care about the preservation of linguistic diversity, are facing today. Finally, I would like to thank Lemma Maremukova and Alim Shomahua for their help and for the examples they provided as native speakers of Kabardian. Globalization, which is partly responsible for the mass extinction of languages, has, on the other hand, opened some, until recently unimaginable, possibilities for the investigation of languages over large distances, for "field work" via Internet. F''axwa. INTRODUCTION The Kabardian language is a member of the Abhkaz-Adyghean (Northwest Caucasian) language family. 1 Together with the closely related Adyghean language Kabardian constitutes the Adyghean branch of this family, while Abhkaz and Abaza constitute the other branch (these are also considered to be dialects of the same language by some linguists). The third, transitional branch was formed by the recently extinct Ubykh 2: Proto-Abkhaz-Adyghean Abkh.-Abaz. Adyghean-Kabardian The frequent common name for Adygheans and Kabardians is Circassians. The names Kabardian and Circassian are alloethnonyms 3. The Adygheans and the Kabardians call themselves da, and their language dabz. Their languages are mutually quite intelligible, and most Adygheans and Kabardians consider themselves members of the same nation, with a common history and a common set of social institutions and customs (da xbza) 4. The Kabardians are the easternmost Abkhaz-Adyghean people. Their country is bordered by Ossetia to the south, by Chechnia and Ingushetia to the east, and by the The NW Caucasian languages may be affiliated with the NE Caucasian (Nakh-Dagestanian) languages, but this hypothesis is still unproven sensu stricto (but see, e. g., Dumzil 1933, Abdokov 1981, 1983). Some linguists connect them to the extinct Hattic language of Anatolia (cp. Chirikba 1996, Braun 1994). In my opinion, the evidence suffices to show areal and typological, but not necessarily genetic links between Hattic and NW Caucasian. 2 It seems that Ubykh was dialectally closer to the Adyghean languages than to the Abkhaz-Abaza languages (Kumaxov 1976). However, Chirikba (1996) rejects this, and proposes an Ubykh-Abkhazian node. 3 The ethnonym Kabardians (Rus. kabardncy) is of unknown origin (Kabardians derive it from the name of one ancient chief, Kabarda Tambiev), while the ethnonym Circassians (Rus. erksy, older erkasy) has two etymologies; some relate it to the Greek name Kerktai for one of the ancient peoples on the east coast of the Black Sea (e. g. Der Kleine Pauly, s. v.), and others derive it from the Ossetian crgs, originating from a Scythian word *arkas "nobleman" (e.g. M. Vasmer, Russisches etymologisches Wrterbuch, s. v.). The name kasog, pl. kasozi "Circassians" is found from the 10th century in Old Russian, and most linguists relate it to the Ossetian ksg "Circassian" (according to Vasmer this name is also related to the Scythian word *arkas "nobleman"). The resemblance with the ancient inhabitants of Northern Anatolia, Kaskas, is probably accidental. Finally, the name by which Circassians are called by the Abkhazians, -zxwa, has been compared with Gr. Zgoi, Zikkho, which designated a people on the NE Caucasus in the 1st century AD. This could, perhaps, be related to Kabardian c'xw "man" (Chirikba 1996: 3). 4 In the Soviet age, in accordance with the "divide and rule" principle, Circassians in the KarachayCherkess Autonomous Region of Russia were also set apart as a distinct ethnic group, but they consider themselves descendants of immigrant Kabardians. Their literary language is close to standard Kabardian, though it does have some characteristics which link it to Adyghean (cf. Kumaxova 1972: 22-23). 1 Abazinia region to the west. The Abkhaz-Adyghean languages used to be spoken along the entire eastern coast of the Black Sea, from the Kuban River (Kabardian Ps) almost as far as the town of Batumi, and in the interior all the way to the Terek River 5. The Kabardians became a distinct ethnic group in the Middle Ages. They were one of the dominant peoples to the north of the Caucasus, and they established diplomatic relations with the Muscovite kingdom as early as the 15th century. Emperor Ivan the Terrible married the Kabardian princess Goshenay, christened as Maria Temriukovna. In the course of the next couple of centuries a few important Russian noblemen and army leaders were of Kabardian origin. Slave trade in the Islamic world brought numerous Circassians into various countries of the Near East, and it is believed that the Mameluke dynasty, which ruled Egypt from 1379 to 1516, was of Circassian origin. Unlike the Adygheans and the West Circassians, whose society mostly remained organized into large families and clans/tribes, the Kabardians have developed a feudal social organization with princes (warq), noblemen (p) and serfs/commoners (wna?wt). Part of the nobility converted to Orthodoxy during the 16th century, and in the course of the 16th and 17th centuries Islam spread into Kabardia. The majority of the population, however, remained loyal to pagan traditions, still alive in the Kabardian folklore. Islam was not solidified until the 19th century wars with the Russians, and a part of the Kabardian people (speakers of the Mozdok dialect) remained true to Orthodoxy. After the Russian conquest of Caucasus in 1864 the Adygheans became isolated in the north (around the city of Maykop), and the area where all the other Abkhaz-Adyghean languages used to be spoken has decreased due to Russian immigration, and due to the exodus of almost all Ubykhs and of many Circassians into the Ottoman Empire 6. There are more than 400 000 speakers of Kabardian living in the Kabardino-Balkar Republic and the neighbouring areas. More than 90% of ethnic Kabardians use Kabardian as their mother-tongue, but almost all of them are bilingual and speak Russian as well. Kabardians are today an absolute majority in the Kabardino-Balkar Republic of the Russian Federation, with 55.3% of the population according to the 2002 census. Other important ethnic groups are Turkic Balkars, with around 11% of the population, and Russians, whose number is decreasing (according to the 2002 census they constituted around 25% of the population). The number of Kabardian speakers abroad is unknown, but it is believed that a significant number of them still live in Jordan, Turkey and Syria, where they emigrated after the Russian conquest of Caucasus in 1864. It is believed that around 400 000 Kabardians and Adygheans were then exiled, while their descendants went through a partial linguistic assimilation in their new countries. Today there are around 200 000 ethnic Kabardians in Turkey and around 30 000 in Syria 7, but it is not known how many of them still speak Kabardian. Part of the Syrian Kabardians emigrated to the USA after the Israeli occupation of the Golan Heights (1967), and settled as a relatively compact group in New Jersey. Most The original homeland of the Abkhaz-Adyghean languages must have comprised the Black Sea coastal area as well, because common words for "sea" (Ubykh wa, Adyghean x, Kabardian x), for "big sea fish" (Abkhaz a-ps, Ubykh psa, Adyghean pca, Kabardian bdza), etc. can be reconstructed (see Klimov 1986: 52). 6 A part of Kabardians and other West Caucasian refugees ended up in Kosovo, where their language survived until recently in two villages, cf. zbek 1986. It appears that all of the remaining Kosovo Circassians were resettled in Russia a few years ago. 7 Kabardian is also preserved in a few villages in Israel, and until recently there was a primary school in Kabardian in one of these villages. 5 speakers of Kabardian in Jordan are centered around Amman, where there is a private school with classes held in Kabardian. In central Turkey Kabardians and other Circassians live around the cities of Samsun, Amasya and Sivas. While the use of Kabardian (and other Circassian idioms) was persecuted under Atatrk, the situation has become a bit better recently. Today Circassian culture associations are being founded in Turkey as well, and their language is making a humble appearance in the media (especially the Internet). Turkish television recently started broadcasting shows in Kabardian and Adyghean. From the typological point of view, Kabardian shares many common features with other Abkhaz-Adyghean languages: a complex system of consonants (though simpler than in Ubykh, for example), an extremely simple vowel system, a complex prefixation system and the S(ubject) O(bject) V(erb) order of syntactic constituents. There are, however, some typological differences between Abkhaz-Abaza and Kabardino-Adyghean. Unlike Abkhaz-Abaza, the Adyghean languages do not have grammatical gender, but they do have cases. Adpositional phrases are expressed as in the Indo-European languages, and not according to the HM (head marking) pattern 8, as in Abkhaz-Abaza. This means that a Kabardian postpositional phrase consists of the postposition and the governed noun only, without any person/gender affixes on the postposition (as, for example, in Abkhaz). The verbal system, however, is in some respects even more complicated than in Abkhaz-Abaza. Kabardian was a non-written language until the beginning of the twentieth century, though there were attempts to write it down using an adapted Arabic script. Up until the 20th century Classical Arabic was the language of literacy throughout the Caucasus. Special alphabets for Kabardian, based on Arabic and the Russian Cyrillic, were developed by the Kabardian scholar Shora Nogma (1801-1844), who is also the author of the first Kabardian-Russian dictionary (which was not published until 1956). However, these alphabets have not persisted, and neither have the Arabic and Latin alphabets developed by a Turkish doctor of Kabardian origin, Muhamed Pegatluxov (1909-10) 9. The Latin script was adapted for Kabardian in 1923 by M. Xuranov in Soviet Russia, and in 1924 the first Kabardian periodical began to be published in Latin script. Classes in primary schools have been held in Kabardian since 1923. In 1936 the Latin alphabet was replaced by an adapted Russian Cyrillic, still used as the Kabardian alphabet. The last reform of the Kabardian Cyrillic was in 1939. There are some attempts today to reintroduce the Latin script, especially with the Kabardian diaspora in Turkey, where the Latin alphabet is used. These attempts, however, have not taken hold in Kabardia. To abandon the Cyrillic script would mean to give up the literary tradition which has been developing for some seventy years now. Standard Kabardian is based on the Baksan dialect, spoken in Great Kabardia, which today constitutes a significant part of the Kabardino-Balkar Republic in the Russian 8 For the term HM (head marking), introduced by Johanna Nichols, and for other commonplace terms of linguistic typology, see Matasovi 2001. 9 On the beginnings of literacy in Kabardian see Kumaxova 1972: 18-21. The fate of the Latin alphabet adapted for Circassian by G. Kube Shaban is also interesting. Shaban was a Circassian scholar who was taken prisoner near Dravograd (on the Slovenian-Austrian border) as a soldier of the Wehrmacht, but he ran away from the British camp and settled in Syria, where he developed educational institutions for Circassians in the 50-ies (zbek 1982). However, the regime of the Baath party abolished all cultural institutions of Circassians in Syria in the 1960-ies, so that Kube Shabans alphabet was also abandoned. Federation (west of the Terek River). There are also the Besleney dialect (also called Besney, spoken in the Karachay-Cherkess Republic of the Russian Federation and in the Krasnodar area), the Mozdok dialect (spoken in the north of North Ossetia, where some Kabardians are believed to have emigrated some time before the 16th century), and the Kuban dialect (spoken in the territory of the Republic of Adyghea in the Russian Federation) 10. All dialects are mutually intelligibile 11, and Besleney differs most from the other dialects, being, in a sense, transitional between Eastern Circassian (Kabardian proper) and Western Circassian (Adyghean, or Adyghe, divided into Bzhedhugh, Temirgoy, Abadzekh, and Shapsugh dialects). Besleney is spoken in the region from which the majority of Kabardians are believed to have emigrated, probably in the 13-14th centuries, to Great Kabarda. Along with Russian and Balkar, Kabardian is one of the official languages of the Kabardino-Balkar Republic of the Russian Federation. In the first four grades of primary school in the Kabardino-Balkar Republic classes are held in Kabardian, and there is a Kabardian Department at the University of Nalchik (the capital of Kabardia). Literature and the publishing industry in Kabardian are poorly developed, but there is a huge corpus of oral literature, with the mythological Nart Epic standing out (Colarusso 2002). There are a few weeklies and the daily Mayak ("Lighthouse") published in Kabardian. The official daily newspaper Ada psa ("Adyghean Word") is available on the Internet (http:). Note also the monthly magazine Psyna "Source" (). Radio Free Europe () broadcasts news in Kabardian on the "listen on demand" principle. Speakers of the Kuban dialect are trilingual, they speak Adyghean along with Russian and Kabardian (Kumaxova 1972). They are rather recent immigrants into the region. 11 For an overview of Kabardian dialects, see Kumaxov (ed.) 1969. 10 PHONOLOGY Kabardian has one of the most complex phonological systems of all the languages in the world. In native words there are only two vowels and around fifty consonants (depending on the dialect). The vowel a can be both short and long (ie. a and ) 12. VOWELS a -short -long The vowel o appears in loan-words; the diphthong aw is pronounced as in some surroundings, the diphthong y as , the diphthong w as and the diphthong ay as . Alternative accounts of Kabardian phonology posit two short vowels ( and a) and five long vowels (, , , , ). Only the vowel can occur in the word-initial position in native words 13. dental palatal velar 12 k'w The difference between a and is not only in their length, but also in their quality, though phonetic descriptions differ. In the pronunciation of my informants, is a low open vowel, while a is a central open vowel (as in the phonological description by Kumaxov (ed.) 2006). Kuipers (1960) thinks that is not a distinct phoneme, but rather a phonological sequence of the short a and the consonant h in all positions except at the beginning of a word, where it can be analyzed as ha. Kuiperss analysis, though disputed, has the advantage of enabling us to formulate a simple rule according to which all Kabardian words start with a consonant, since and a can never occur word-initially. In the speech of many Kabardians the initial is, indeed, realized with a "prosthetic" h-. 13 Aert Kuipers (1960, 1968) tried to eliminate the phonological opposition between the vowels a and as well, claiming that it is actually a feature of "openness" which should be ascribed to consonants (like palatalization, glottalization and labialization). In Kuipers's analysis the opposition between pa and p in Kabardian is not an opposition between two vowels, but rather between an "open" (pa) and a "closed" (p) consonant (p). This would make Kabardian the only language in the world without the opposition between vowels and consonants, but most Caucasiologists do not accept this analysis by Kuipers (for a critical review see, e. g., Halle 1970, Kumaxov 1973, Anderson 1991). x xw uvular q qw w ? h ?w laryngeal According to some authors 14 labiovelars (kw, gw, k'w) are actually labialized uvulars, while the point of articulation of uvulars is even deeper in the pharynx (they represent pharyngeal consonants 15). The dialect described in J. Colarusso's grammar (1992) has pharyngeal fricatives as well; in the standard language described by this grammar they have, as far as I was able to determine from the examples, become velar fricatives. The voiceless laryngeal fricative h has its voiced pair in the standard speech of the older generation, which penetrated the language mostly through Arabic loanwords, e. g. Hazb "torment"; the Kabardian Cyrilic does not have a distinct symbol for this segment, which becomes h in the speech of the younger generation and is written with the digraph x. In the speech of many Kabardians from the diaspora (especially from Turkey) 16 some oppositions, still preserved in Kabardia, have been lost, such as the one between and (Turkish Kabardian has got only ). The pronunciation of the stops which are described here as voiced and voiceless varies from speaker to speaker (apparently, this has nothing to do with the dialect, but rather with cross-linguistic interference). Some speakers pronounce voiceless stops as voiceless aspirated stops (ph, th, kh); these speakers sometimes unvoice voiced stops (i. e. instead of b, d, g they pronounce p, t, k). Only the glottalized stops are consistently ejective with all speakers, regardless of the dialect. Laterals l, , and ' are actually lateral fricatives: l is voiced, voiceless, and ' glottalized. The fact that it has lateral fricatives without having the lateral resonant [l] (except in loan-words) makes Kabardian typologically unique. The presence of glottalized fricatives ', ' and f is also typologically rare. Besides Kabardian, segments such as these are found only in some American Indian languages (of the Salishan and the Na-Dene language families) and in some dialects of Abkhaz. As in other Caucasian languages, the consonant r can never occur at the beginning of a word, except in recent borrowings; older borrowings receive an unetymological prosthesis, e. g. wrs "Russian". Among the velar stops, Kabardian does not have the segment k (except in loanwords); it has only the labiovelar kw, gw and k'w. The segments transcribed in this grammar as , d and ' are, according to some descriptions, palatalized velars (ky, gy and k'y) 17. This would make Kabardian a typologically unique language, having 14 15 E. g. Kumaxova 1972. E. g. according to Kumaxov (ed.) 2006: 51. 16 See Gordon & Applebaum 2006: 162. 17 According to Kumaxova (1972) in the contemporary standard pronunciation these segments are palatal affricates, but in the older and the dialectal pronunciation they are palatalized velars. Turkish 10 palatalized and labialized velars without having the ''unmarked'', regular velars. (This is exactly the kind of system that some linguists ascribe to the Proto-Indo-European language). Voiceless stops are assimilated to the stops and fricatives that follow them with respect to the features of voice and glottalization: sa z-l "I painted" < *sa s-l (cf. sa saw "I saw") wa paw "you saw" < *wa b-aw (cf. wa bl "you painted") da t'' "we did" (in writing ) < *da d-' (cf. da daw'a "we do") Two vowels cannot occur next to each other; at a morpheme boundary where the first morpheme ends and the second one begins with a vowel, the two vowels merge, whereby the lower vowel is always stronger (i. e. * -a merge as a, *a- as ): sk'w "I went" < *s-k'wa-- sh "I carried it" < *s-h-- Morpheme-final can be deleted in (underlyingly) polysyllabic words, but the exact rules are complex, and the deletion appears to be optional in some cases (for details see Colarusso 1992: 43ff.): hn "carry" but s-aw-h "I carry" < *sawh "horse" but z- "one horse" < *z The vowel is preserved word-finally after y and w, when it merges with the glide and is pronounced as [i:] viz. [u:], e. g. patmy "although" [patmi:], dadw "cat" [gyad(d)u:]. Unaccented vowels in open syllables are shortened (i. e. becomes a): x ma "foreign" vs. xam' "foreigner" Likewise, accented vowels in open syllables are lengthened (a becomes ): d xa "beautiful" vs. daxa "excessively beautiful" APOPHONY (ABLAUT) Like the Semitic, Kartvelian, and the older Indo-European languages, the AbkhazAdyghean languages have morphologically regular vowel alternations (apophony, Kabardian most certainly has palatalized velars, which must be an archaism with regard to the innovative standard (in Kabardia), in which these segments have become affricates. 11 Ablaut) 18. Vowel alternations in Kabardian are most frequently used with verbs, especially to express the category of transitivity/intransitivity. The most common vowel alternations are: 1. a - : this apophony pattern is used for the opposition between transitive and intransitive verbs, e. g. dan "to sew (intrans.)" - dn "to sew (trans.)", txan "to write (intrans.)" - txn "to write (trans.)", xan "mow (intrans.)" - xn "mow (trans.)"; in some verbs of movement, the root-final vowel a also characterizes movement towards the subject (the so called "illative verbs"), while the vowel characterizes movement away from the subject (the so-called "elative verbs"), cf. badaatan "fly towards" vs. badaatn "fly away from". Finally, this apophony pattern serves to distinguish cardinal from adverbial numbers, e. g. "three" - a "thrice". 2. - 0: this pattern is used to distinguish the personal prefixes cross-referencing lowest ranking macrorole arguments (Undergoers, with the ''full-grade'', ) from the prefixes cross-referencing Actors and Obliques (with the ''zero-grade'', 0): s-b-d-aw-va 1sg.-2sg.-conj.-pres.-to plow "I plow together with you" intransitive verb with the prefix s- for the 1st person sg. as the single core macrorole argument. b-d-z-aw-va 2sg-conj.-1sg.-pres.-to plow "I plow (it) together with you" transitive verb with the prefix z- < *s- for the 1st person sg. Actor 3. a - 0. This apophony pattern is merely a special type of the alternation between a and ( is usually dropped in the word-final position). It is used to distinguish between the forms of the illative and elative verbs, e. g. y- "take out!" y-a "bring in!", and it also appears in different forms of transitive and intransitive verbs, e. g. m-da "he is sewing (intrans.)" - ya-d "he is sewing it (trans.)". It is also used to distinguish personal prefixes indexing Obliques (non-macrorole core arguments, including the causees of causative verbs) from those indexing Actors and Undergoers, cf. y-xw-va-z-a--- "I made you carry him for them", where -va- indexes the 2pl. causee argument, and y-xwa-z-v-a--- "you (pl.) made me carry him for them". STRESS In Kabardian the last syllable carries the stress, except for words ending in a, in which the second-to-last syllable is stressed. Grammatical suffixes are mostly unstressed. Apophony patterns in the Abkhaz-Adyghean languages are typologically particularly similar to those in Proto-Kartvelian (Kumaxov 1971: 202). For a general overview of apophony in the Adyghean languages see Kumaxov 1981: 228 ff. 18 12 The following words are thus stressed in this way: ztan "give presents", d ta "sword", but d tam "with the sword", pa "girl", but paxar "girls". We can 'a formulate the rule: the syllable before the last root consonant carries the stress. However, some verbal suffixes attract the stress, e. g. the preterite suffix -- and the future suffix -nw-, so these forms, although suffixed, are end-stressed, cp. yaa w-s-w-- 2sg.-1sg.-see-pret.-af. "I saw you" I s-k'wa-nw- 1sg.-go-fut.-af. "I will go" SYLLABLE Unlike the neighbouring Kartvelian languages, the Abkhaz-Adyghean languages do not have complex consonant clusters in the onset of the syllable; the structure of most syllables is C(C)V(C), and most consonant clusters consist of a stop and a fricative, e.g. t+h: tha "God" b+w: bw "nine" p+': p' "ten", etc. There are also consonant clusters consisting of two stops, e. g. in the word pqaw "pillar". Some rare clusters consist of three consonants, e. g. in the verb ptn "to boil", or in the noun bzw "sparrow". Consonant clusters in Kabardian are predominantly regressive, i.e. the point of articulation of the first element is closer to the lips than that of the second element. Consonant clusters in which the first element is a labial consonant are especially frequent, e. g. p "prince, nobleman", pssa "story", xbza "custom", bl "seven", etc. Roots are mainly monosyllabic, e. g. fz "woman", t- "give", z- "one", k'wa- "go". Bisyllabic roots, which typically end in a vowel (?an earlier suffix), are less frequent, e. g. pa "girl", mza "moon", etc. Syllables are normally closed in the middle of a word. Many speakers have a geminate pronunciation of consonants preceded by an open syllable in the middle of a word, which results in the canonical syllable structure, i. e. instead of pssa "story" they pronounce psssa, instead of dda "very" they say ddda (Colarusso 1992: 15); if the long vowel -- is phonologically analyzed as -ah-, as is the habit of some linguists, 13 then the rule is that all syllables in the middle of a word are closed. This type of restriction on the syllable structure is typologically very rare in the world's languages. 14 ORTHOGRAPHY The Russian Cyrillic alphabet, used as the Kabardian script since 1936, contains the following graphemes 19: I. consonants stops voic. unvoic. glott. b p I p' v d t I t' dz c I c' z l d I ' f s I ' r I f' n affricates voic. unvoic. glott. fricatives voic. unvoic. glott. m resonants I ' x xw gw kw I k'w w w h q qw q' q'w I I ? ?w The grapheme <> denotes the uvular character of the consonants q, qw, q', q'w, , w, and w, and there is a special grapheme used to mark voicelessness of uvulars (hence Rules for the transliteration of the Kabardian Cyrillic applied in this grammar are basically the same as the standard principles of transliteration for the Caucasian languages written in the Cyrillic script, proposed by J. Gippert in his work Caucasian Alphabet Systems Based upon the Cyrillic Script (). Some minor deviations from Gippert's system in this grammar should, however, be brought to the reader's attention: 1) glottalized consonants are written as C', and not as C; 2) labialized consonants are written as Cw, and not as Co; 3) the Cyrillic is transliterated as y, and not as j. 4) palatalized fricatives are written as , , and not as , z. 5) the Cyrillic letters , are transliterated as dz, d, instead of , '. 19 15 <> = q, < > = qw). The notation of palatal consonants is inconsistent: < , > denote d (gy), (ky), but <I> is ' (k'y). Although the Kabardian orthography is phonological, the notation of some phonological changes is inconsistent 20, e. g. the shortening of the long which occurs in compounds, cf. xdaxa' "fruit" (the first part of the compound xda "garden" has a long , but the pronunciation in the compound is /xadaxa'/). Some authors (e. g. M. A. Kumaxov) use and for the palatalized and , instead of the standard , , since that is how these consonants are denoted in the closely related Adyghean language. However, despite certain efforts to make them more alike (e. g. the 1970 proposition for a common orthography for all Adyghean languages), the Adyghean and the Kabardian orthographies are still quite different 21. II. semi-vowels: = y; = w III. vowels: = ; = a; = The Kabardian Cyrillic has some other graphemes for vowels, but these graphemes always denote diphthongs and triphthongs: = y = y = aw, wa y= w = yw = ay, ya The grapheme y thus has a double value: it can denote the semi-vowel w or the phonemic sequence (diphthong) w. Cf. Kumaxova 1972: 46. A few years ago a group of the most distinguished Adyghean and Kabardian linguists put forward a proposal for the creation of the common Adyghean-Kabardian orthography (see Kumaxov (ed.) 2006, I: 40 ff.). Although this proposal received the support of the parliament of the Kabardino-Balkar Republic, at the moment I am writing this its future is still uncertain. 21 20 16 MORPHOLOGY Kabardian is a polysynthetic language which has a very large number of morphemes compared to the number of words in a sentence. Nouns can take a relatively small number of different forms, but the verbal complex typically contains a large number of affixes for a host of grammatical categories. In Kabardian the morphemes combine within a word according to the agglutinative principle: each grammatical morpheme expresses only one grammatical category. The exception is the category of person, which is always fused with the category of number in the case of verbs and pronouns: the form da, for example, denotes that a pronoun is in the first person and that it is plural, and it is not possible to divide this form into two morphemes (one for the first person and one for plural). Likewise, the category of definiteness is to a large extent fused with the category of case. Most of Kabardian morphemes consist of only one segment and a vowel (i.e., the structure is CV) 22; this results in large number of homonyms; e.g. can mean "brother", "horse", "to milk" and "to take out", c'a means "name" and "louse", dza means "tooth" and "army", x is "sea" as well as "six", etc. Bisyllabic and polysyllabic roots are mostly borrowings, e. g. nwka "science" (from Russian), haw "air" (from Persian), lh "god" (from Arabic), nq "glass" (from a Turkic language), etc. NOMINAL INFLECTION Nominal categories are: definiteness, number and case. Of all the Abkhaz-Adyghean languages only Abkhaz and Abaza have the category of gender; Kabardian shows no trace of this category. If we consider proclitic possessive pronouns to be possessive prefixes (see below), then possession should also be included in the morphological categories of nouns. NUMBER There are two numbers singular and plural; the plural suffix is -xa: 'la "young man": 'laxar "young men"; wna "house": wnaxar "houses". The use of the suffix xa is optional for many nouns, i. e. the suffix is used only when the speaker wants to emphasise that the noun is plural. This is why forms such as sby "child/children" c'xw "man/men" and fz "woman/women" are inherently neutral with respect to the category of number. These nouns can be construed with both singular and plural forms of verbs: fz-m ay?a "the woman is speaking": fz-m y?a "the women are speaking" c'xwm ya' "the man is working": c'xwm y' "the men are working" 22 Three quarters of all morphemes have this structure according to Kuipers (1960). 17 Similarly, nouns neutral with respect to number can be construed with singular and plural possessive pronouns: c'xwm ypsaw'a "a man's life": c'xwm ypsaw'a "men's life" 23. The postposition sma is used to pluralise personal names: Dwdr sma "Dudar and others". This is the so-called "associative plural", which exists, e. g., in Japanese and Hungarian: Iaa , A, , a, a e xay aa I ?whamxwa y hagw-m Maztha, m, Thaalad, Sawzra, ap sma Pstha Elbrus 3sg.poss. top-ERG M. A. T. S. . assoc. P. w w day -zax a-s--xa-w Sna-x fa y-?a-t at dir.-meet-sit-pret.-pl.-ger. sana-drink 3pl.-have-impf. "On the top of Uesh'hemakhue (Elbrus) Mazatha, Amish, Thagoledzh, Sozrash, Hlapsh and others were meeting with Psatha (god of life) and having "the drinking of sana" (drink of the gods)". Nouns which denote substance and collective nouns have no plural: 'lawla "the youth", a "milk". CASE Unlike Abkhaz and Abaza, the Adyghean languages (Kabardian and Adyghe) and Ubykh have cases marked by suffixes on nouns, adjectives and pronouns 24. The cases are: nominative (-r), ergative (-m), instrumental (-'a) and adverbial (-wa). Core cases, which express basic syntactic relations within a sentence, are nominative and ergative, and peripheral cases are instrumental and adverbial. NOM a dtar ERG a dtam INST aI / aI dtam'a / dta'a ADV ay dtawa The instrumental case has the definite (dtam'a) and the indefinite form (dtam'a). Definite forms consist of the ergative marker (-m-) and the suffix for the instrumental (-'a). On this subject see Kumaxov 1971: 7 ff. By all accounts, the case system in the Adyghean-Ubykh languages is an innovation; the ProtoAbkhaz-Adyghean had no cases (Kumaxov 1976, 1989). 24 23 18 The nominative is the case of the nominal predicate: Ia e m 'la-r- yampyawn-r that young man-NOM-af. hero-NOM "that young man is the champion" The nominative is the case of the intransitive subject and the transitive object, i. e. the case of the verb argument which is the lowest ranking macrorole (see below): Ia 'la-r y-aw-da boy-NOM 3sg.-pres.-study "the boy studies" x a sa tx-r q'a-s-t-- I book-NOM dir-1sg.-take-pret.-af. "I took the book" The ergative is, basically, the general oblique case used for all other grammatical functions; it is the case of the transitive subject: ye x ea stwdyant-m tx-r ya-d-- student-ERG book-NOM. 3sg.-study-pret.-af. "the student studied the book" The ergative can also correspond to the dative case in the European languages; it marks the recipient of the verbs of giving, and other oblique arguments: a I x e w c'x -m tx-r m fz-m y-r-ya-t this man-ERG book-NOM this woman-ERG 3sg.-3sg.-3sg.-give "this man gives the book to this woman" I II '-m psaan-r f'af'-t old man-ERG. speech-NOM. like-impf. "the old man liked to speak", cp. Croatian, for example, which has the dative case: starcu se svialo govoriti (lit. "to-the-man it pleased to speak") The ergative is also the case which marks the goal of the verbs of movement (like the Latin accusative of the goal): y y a wa-ry wy -r a-m you-and 2sg.poss. horse-NOM barn-ERG "and you take your horse to the barn" a take to-imp. 19 The ergative can correspond to the locative case in those European languages which have it, indicating a spatial or temporal location: xa I dy xda-m ?wha yt- 2pl. poss. garden-ERG poppy be located-af. "there is poppy in our garden (poppy grows in our garden)" Croatian: "u naem je vrtu mak (u naem vrtu raste mak)", with vrt in the locative sg. I aa Sa sy nb-m ?ada s-w-- I 1sg.poss. life-ERG a lot 1sg.-see-pret.-af. "I have seen a lot in my life" Croatian: "Ja sam u svojem ivotu mnogo vidio", with ivot in the locative sg. In some constructions, the ergative can correspond to the English possessive genitive or prepositional phrase: a x a x Z mza-m xawa-nw-r z mxwa-m xawa-rt 1 month-ERG grow-inf.-NOM 1 day-ERG grow-impf. "He grew a one month's growth in one day" / "In one day he grew as much as is usually grown in one month" Thus, ergative functions as both the case of the Agent and as a "general oblique case" covering all other functions of oblique and non-macrorole core arguments, but nonarguments (adjuncts) can also be in the ergative. 25 The other two cases, as a rule, are reserved for non-arguments in the clause, i. e., for the adjuncts. Nouns and adjectives in the adverbial case (Rus. obstojatel'stvennyj pade) usually correspond to adverbs in the European languages, i. e. they indicate the circumstances under which the action is performed: x y -xa-r str-w tree-pl.-NOM row-ADV "They planted the trees in rows" xa xas-- to plant-pret.-af. The adverbial can correspond to the genitive in the European languages: y g faww-w z kylawgram-m sugar-ADV 1 kilogram-ERG "I bought 1 kg of sugar " a q'a-s-xw-- dir.-1sg.-to be involved in shopping-pret.-af. The adverbial can be the case of the nominal predicate, corresponding to the instrumental in Slavic: 25 20 a ee ay a a Taymbawlayt p-m y wsa-w r-t . T. prince-ERG 3sg.poss. servant-ADV it.be-ipf. "aga Taymbawlayt was the prince's servant" Interestingly, in the language of the epic poetry, the adverbial can correspond to the vocative case 26, i. e. it is used for addressing individuals: oe Sawsrq'wa-w sy-naf S.-ADV 1sg.poss.-light "O Sosruko, my light!" The instrumental mostly corresponds to the Slavic instrumental, i. e. it expresses the instrument with which the action is performed (including means of locomotion), cf. -m-'a m-k'wa "he rides the horse", literally "he goes with the horse", or q'arand'a tx n "to write with a pen"; however, the Kabardian instrumental has other functions as well, e. g. it can express various circumstances of the action, as well as the path (but not direction) with verbs of movement, and the duration of an action: a aI aa r mxwa-'a m-la he (NOM) day-INST 3sg.-to work "he works by day" I I maz-'a k'wa-n forest-INST to go-inf. "to go through the forest" I I I a-y- mxw-y--'a w-q'-ya-s-a'-ry night-rel.-3 day-rel.-3-INST 2sg.-dir.-3sg.-1sg.-lead.through-and sy ha'a w-q'-y-s--- my guest.house 2sg.-dir.-3sg.-1sg.-lead-pret.-af. "I was leading you three days and nights, and I led you into my guest-house" Occasionally, the Instrumental can also express the actor (in some participial constructions): I I Ia w w ? ax -r sar-'a '- wn-q'm job-NOM I-INST do-pret.(part.) become-neg. Kumaxov (ed.) 2006: 369 calls this "the vocative case", but this is clearly just another use of the adverbial. 26 21 "I cannot do this job" (lit. "This job does not become done by me") Personal names normally do not differentiate cases (at least not NOM and ERG), but family names do 27; this is related to the fact that nominative and ergative endings express not only case, but also definiteness. Also, nouns (personal names) in the "associative plural" (see above) show no case differentiaton: e aI Maryan sma m-k'wa M. assoc.pl. 3pl.-go "Maryan and the others are going" e a Maryan sma s-aw-w M. assoc.pl. 1sg.-pres.-to see "I see Maryan and the others" In addressing people, nouns referring to them show no case differentiation, i.e. the bare stem is used (similarly as the Indo-European "vocative"): a, II I Nna, st m p'-'-r z-'-s-r mother what this 2sg.-do-pres. part.-dir.-sit-NOM "Mother, what is it that you're doing now?" aI t'a ydy? now Demonstrative pronouns differentiate cases, but personal pronouns of the 1st and 2nd person have only got the peripheral cases (adverbial and instrumental), and not the core cases (ergative and nominative). This agrees entirely with Michael Silverstein's hierarchy28, according to which the most common case marking pattern in ergative languages is the one in which 1st and 2nd person pronouns do not differentiate core cases, while nominals and groups lower on the "animacy hierarchy" do (cf. the inverse pattern of case differentiation in the accusative languages, e. g. in English, where the nominative and the accusative are differentiated on the 1st person pronoun, but not on nouns). Since the category of case (especially of primary cases) is connected with the category of definiteness, and syntactical relations within a sentence are expressed by a system of personal prefixes on the verb (see below), there is some uncertainty over the rules of case assignment with some speakers, especially in the case of complex syntactic structures (just as there is often some uncertainty over the rules of the use of articles with speakers of languages which have the definite article). See Kumaxov et alii 1996. This feature excludes Kabardian from the typological universal according to which languages that distinguish cases on 3rd person pronouns always distinguish cases on personal names as well (but not vice versa). 28 See e. g. Dixon 1994. 27 22 DEFINITENESS Definiteness is clearly differentiated only in the core cases, i.e. in the nominative and the ergative: the endings -r and -m are added only when the noun is definite; indefinite nouns receive no ending 29: a eI pa-m m-r ya-'a girl-ERG it-Nom. 3sg.-know "the girl knows it" a eI pa m-r ya-'a girl it-NOM 3sg.-know "a girl knows it" With some nouns, whose meaning is inherently definite (e. g. mza "moon", nsp "happiness", personal names), the nominative/definiteness suffix is optional: () a da-(r) s-aw-w sun-(NOM.) 1sg.-pres.-see "I see the sun" Other cases are not used to differentiate definite and indefinite forms of nouns, and the opposition definite/indefinite does not exist in the plural either (see Kumaxov et alii 1996). However, if a noun in the instrumental is definite, the ergative marker -mis added before the instrumental ending -'a: I I sa m-r sa-m-'a s-aw-' I it-NOM knife-ERG.-INST. 1sg.-pres.-do "I do it with the knife" The ergative marker m- probably developed from the demonstrative pronoun (cf. maw "this"), which had been "petrified" in the "definite instrumental" before the instrumental ending. ADJECTIVES Adjectives are divided into two categories in Kabardian: qualitative and relational adjectives. Qualitative adjectives typically follow the noun they modify: wna yn "big house" (wna "house"), pa dxa "beautiful girl" (pa "girl"). Occasionaly they may also precede the noun, e.g. xma wna "foreign house" (wna "house"). Adjectives are declined like nouns, but they show no number and case agreement. If See Kumaxov 1972, where the grammaticalization of the definiteness marker -r- is discussed (from the ending for the formation of participles, it seems). On the category of definiteness in the Adyghean languages see also Kumaxov & Vamling 2006: 22-24. 29 23 the noun is modified by a qualitative adjective, only the adjective receives the endings for case and number: y x wna xw-xa-r house white-pl.-Nom. "white houses" If a qualitative adjective precedes the head noun, it is not declined; it may be modified by the adverbial suffix -w (e.g. mva-w "made of stone", fa-wa "made of leather"): axy e a dxa-w pawrtfaylr sa q'a-s-axw-- pretty-ADV wallet I dir.-1sg.-buy-pret.-af. "I bought a pretty wallet / a pretty wallet I bought". Qualitative adjectives mostly have analytical comparison: dxa "beautiful", na dxa "more beautiful", dda dxa "the most beautiful" (or "very beautiful"). The morpheme na is sometimes merged with the adjective into a compound, cf. na--'a "the youngest" ('a "young"). There are also suffixes which express the elative superlative: -a, -?wa, -bza, -ps, -ay, but this seems to belong to the domain of word formation rather than morphology, cf. ?af'-a "the sweetest" (?af' "sweet"), 'h?wa "the longest" ('h "long"), p-bza "very red" (p "red"), etc. Adding the suffix -?wa to the comparative form gives the adjective a diminutive meaning, e. g. na xwba-?wa "somewhat warmer" (cp. xwba "warm"), na ?af' -?wa "somewhat sweeter" (?af' "sweet") 30. The circumfix xwa-...-fa has a similar function, e.g. xwa-dayl-fa "somewhat foolish". Adjectives can be reduplicated, whereby the first stem receives the suffix -ra "and", and the second the adverbial suffix -w(a). Such reduplicated adjectives have intensive meaning, e.g. bw-ra b w-w "extremely broad", p-ra-pw "extremely red", f'yay-ra-f'yay-wa "extremely dirty". Relational adjectives precede the head noun and they take no case and number endings; they can be formed by adding the relative particle -y to nominal and adjectival stems, e.g. mawdryay "other", dwaphryay "evening": e a nawbara-y mxwa today's day "today" Some nouns, ordinal numbers and Russian loans (nouns) can also function as relative adjectives, e.g. nawna "scientific", ha "head", ypa "first". 30 24 Some adjectival meanings are expressed by suffixes: the suffix -xwa means "great", cf. dwnay-xwa "great world", whe suffix -na means "being without, -less". It can often be translated as the English preposition without, but its adjectival status can be shown by the fact that nouns to which it is added can get the affirmative marker - to build a static verb (see below): sa s-da-na na-na- (I 1sg.-father-without motherwithout-af.) "I am without father and mother" = "I am fatherless and motherless". PERSONAL AND DEMONSTRATIVE PRONOUNS Kabardian personal pronouns are indeclinable. The personal pronouns of the first and second person are similar to, and presumably represent the origin of, the person markers on the verb. There is no trace of the inclusive/exclusive opposition in pronouns, which exists in some NE Caucasian languages. sg. 1. sa 2. wa 3. r pl. da fa xar The pronouns of the 1st and 2nd person also have longer forms sara, wara, dara, fara, which are used as stems to which verbal suffixes can be added: ye a q'araway-r sar-q'm K.-NOM I-neg. "I am not Karashawey (a Nart hero)" Third person pronouns are also used as demonstrative pronouns; Kabardian does not distinguish between "he" and "this, that". The pronominal declension is somewhat different from the nominal one: 1sg. Nom. sa Erg. Inst. Adv. sa 1pl. da da 3sg. r b 3pl. xar bxam I bxam'a xarw In the first and second person singular the nominative form is always the same as the ergative form, which means that pronouns do not have the ergative alignment, as, for example, in Dyirbal. Unlike in Dyirbal, however, the clause alignment of personal 25 pronouns in Kabardian is neutral, rather than accusative. The third person pronoun is formed with the stems -, m - and maw-. It can appear without the nominative -r (which also expresses definiteness of personal pronouns): a a sa rsskaz-m I that story-ERG "I read that story" ea s-ya-d-- 1sg.-3sg.-read-pret.-af. The difference in the usage of pronominal stems -, m - and maw- is not entirely clear, but - is the basic pronoun used in anaphora (reference to what has already been mentioned in the discourse), while m- and maw- are in opposition with respect to the degree of distance from the speaker: m- refers to a closer object (or person), and maw- to a more distant one. In the 3rd person plural Ergative, two different sets of forms exist: the basic stem can be extended with the pronominal Ergative ending, but it also occurs without it: xam = bxam mxam = mbxam mawxam = mawbxam There appears to be no difference in meaning, but the longer forms are somewhat more common in the texts. The stems which are used in the formation of demonstrative pronouns also serve to form pronominal prefixes, which are used instead of demonstrative pronouns: m- "this" maw- "that" x maw--xa-r those-tree-pl.-NOM "those trees" These prefixes can also be used as independent words, and they are declined like personal pronouns, e. g. NOM sg. m-r, maw-r, ERG sg. m-b, maw-b, etc. In addition to the pronominal case ending, third person personal/demonstrative pronouns can get the ergative ending used for nouns as well, which then results in double case marking (Kumaxov et alii 1996): a () e ya -b-m (m-b-m) bdaay-r q'-y-wbd-- he-ERG-ERG (this-ERG-ERG) fish-NOM dir.-3sg.-to catch-pret.-af. "He (this one) caught the fish" 26 In a larger sense, the category of demonstrative pronouns would also include pxwada "such, such as this" (from - and pxwada "similar"), mpxwada "such, such as that", mawpxwada "such, such as that". As a rule, these words occur in the attributive position, in front of the noun they modify, cf. pxwada c'xw "such a man". POSSESSIVE PRONOUNS Invariable possessive pronouns have only one form and they precede the noun they refer to: 1. sy 2. wy 3. y dy fy y There is also the relative possessive pronoun zy "whose", and the 3rd person attributive possessive pronouns yay "his", yy "their". The attributive possessives must be preceded by a head nominal in the ergative: b yay "his, that which belongs to him", 'm yay "old man's, that which belongs to the old man". Possessive pronouns are clitics, and they should perhaps be thought of as prefixes which express possession. Sometimes they are written as one word with the word they refer to (ie. with the possessum), cf. sy "my cow". There seems to be a lot of uncertainty in the Kabardian orthography over whether possessive pronouns should be written separately or as one word with the possessum. The relative-possessive pronoun zy "whose" always precedes the noun it refers to: zy r "whose horse". It is declined as the personal pronouns: NOM zy-r, ERG zym, INST zyr'a, etc. In addition to the basic (clitic) possessive pronouns there are also emphatic possessive pronouns, formed by reduplication: ssay "my", wway "your", dday "our", ffay "your", yy "their". Unlike the clitic possessive pronouns, these can be inflected for case (e. g. NOM sysayr, ERG sysaym, etc.). INTERROGATIVE PRONOUNS Although it does not distinguish animacy in other pronouns, Kabardian, like most of the world's languages, distinguishes the animate and inanimate forms of interrogative pronouns: x xat "who" st "what" 27 Interrogative pronouns are normally not inflected for case, though there is a growing tendency in the spoken language to use the case endings -m (ERG), -r (NOM), and -w (ADV) with the pronoun st 31: st-w xx "What was he elected for?" The interrogative possessive pronouns do not exist, but are rather replaced by the interrogative xat "who" and the possessive pronoun 3sg. y, e.g. x a y I? xat y da-m wna-r y-'-ra? who 3sg.poss. father-ERG house-NOM 3sg.-do-inter. "Whose father is building the house?" Other interrogative words are: dana "where", stw "why", dawa "how", dpa "how much", dpa "when", datxana "which". THE EMPHATIC PRONOUN The emphatic pronoun is yaz "personally, himself". It emphasises the verb's subject and stresses it as the topic of the sentence (theme). It is declined as a noun: NOM yaz-r, ERG yaz-m, etc. e a yaz-r m- personally-NOM 3sg.-to cry "he himself cries" ("It is he who cries") e I a yaz-m '-r y-v-- personally-ERG ground-NOM 3sg.-to plow-pret.-af. "they personally plowed the ground"/"he personally plowed the ground" In the following passage one can see how yaz is used to shift the topic back to the name Dlstan which had already been introduced earlier in the discourse: a y axa a X Ia. "a a, a a" aI I. E a axa ay a Dlstan y-pw Badaxw y dxa-r Nrt Xakw-m ?w-t. D. 3sg.poss.-daughter B 3sg.poss. beauty-NOM N land-ERG be.heard-ant.pret. "Mxwa-m da-, a-m mza-" --?a-rt Badaxw ha'a. day-ERG sun-af. night-ERG moon-af. pref.-3pl.-say-ipf. B about Yaz Dlstan-y y pw-m y dxa-m himself D.-and 3sg.poss. daughter-ERG 3sg.poss. beauty-ERG 31 28 y-ry-gwxwa---wa z-y-apa-rt 3sg.-3sg.-rejoice-back-pret.-ger. refl.-3sg.-boast -impf. "The beauty of Dilahstan's daughter Badah was heard in the Land of the Narts. 'She is the Sun by day, she is the Moon at night' -they used to say about Badah. Dilahstan himself, having rejoiced at his daughter's beauty, boasted (about it)". QUANTIFIERS Quantifiers differ from adjectives and pronouns in their morphological and syntactic features. For example, the quantifier q'as "every" is not inflected for case (this is what differentiates it from adjectives), and it follows the noun it modifies (this is what differentiates it from pronouns): I aI c'xw q'as m-k'wa man every 3sg.-go "every man walks" The quantifier gwar "some" syntactically behaves similarly as q'as; it can be used together with the number "one" (z) which precedes the noun it modifies: I z ' gwar "a man", "some man" Aside from these, there is also the quantifier psaw "all, every"; its meaning is inherently plural, and it can be marked for case, cf. ' psawr "all men". Perhaps the words za'a "whole" and ha "every" should also be thought of as quantifiers. 29 INVARIABLE WORDS NUMERALS Cardinal numbers: z I t'w I p' txw 1 2 3 4 5 x bl y bw I p' 6 7 8 9 10 a 100 Numerals sometimes merge with the noun which they precede, e. g. z "one horse", but z am "one cow". In the first example, the morpheme final of "horse" had been deleted, and the numeral received the stress; in the second example, the morpheme final -a- of a "cow" was preserved, together with its stress. Numerals can also merge with the noun they follow using the relative conjunction/particle -y-. They can also take case endings: a aI mz-y-bw-ra mxw-y-bw-'a month-rel.-9-and day-rel.-9-INST "In nine days and nine months" Kabardian has the decimal counting system; numerals above ten are formed with the stem p'- "ten", the suffix -k'w- (probably the root of the verb kwn "to go over (a distance), to transverse") and ones, e. g. p'k'wz "eleven", p'k'wt' "twelve", pk'w "thirteen", etc. The tens are formed on a decimal base, with the numeral "ten" reduced to -': t'wa' "twenty", a "thirty", p'' "forty", txw' "fifty", x' "sixty", etc. There are traces of the vigesimal system, manifested in the formation of tens as products of multiplication of the number twenty, t'wa': t'wa'-y-t' "forty", t'wa-y- "sixty", etc. In the standard language, these vigesimal formations are notably archaic, but they are alive in some dialects (e.g. Besleney) and in Adyghe. When counting above twenty, the counted noun (or noun phrase) is normally repeated before both constituent parts of the complex number: I aI I c'xw a'-ra c'xw-y--ra man thirty-and man-suf.-three-and "thirty three men" Ordinal numbers: yp 1. yaxna 6. 30 2. 3. 4. 5. 7. 8. 9. 10. Ordinal numbers behave like relational adjectives, so they can take the suffix -ray (used for the formation of adjectives): yatxwnaray "fifth" etc. Adverbial numerals are formed from cardinal numbers by apophony, e. g. za "once", a "thrice", but they can also be formed by the prefix (or infix?)32 -r- and reduplication of the root of a cardinal number: z-r-z "once", p'-r-p' "ten times". Distributive numerals are formed from cardinal numbers with the suffix - na: t'wna "a half", na "a third", etc. Note also yz "one of two" and ztxwx (one-five-six) "about five or six". ADVERBS Adverbs are formed from adjectives by adding the suffixes -w, -wa, -ra: I ?ay "bad" - I ?ayw "badly"; xwb "quick" - xwbw "quickly", I f' "good" - I f'wa "well". ba "many, plentitude" - bara "much, very" The suffix -wa is identical to the suffix for the adverbial case (see above). The possessive prefix y- can be added to nouns to form adverb-like expressions (or "relational nouns"?) with directional meaning: ha "head" - yha "up, upwards" ba "hoof" - yba "down, downwards" Nouns in the instrumental case (in -'a) can also function as adverbs: mxwa "day": I mxwa'a "by day"; a "flight": I a'a "in flight" Some adverbs are formed with both the possessive prefix y- and the suffix -'a: ha "top, head": I yha'a "on top" (lit. "on his head"). There are also underived adverbs: nawba "today", pday "tomorrow", dwsa "yesterday", naba "tonight", dda "very much, just", wayblama "very much", mb "here": y (e) I st wa mb (daym) -p-'a-r 32 The morpheme -r- can be analysed as an infix which is inserted between the reduplicated root syllable and the root, if we think of reduplication as a kind of modification of the root (and not a special form of prefixation). 31 what you here dir.-2sg.-do-NOM "What are you doing here?" The category of adverbs might also include invariable expressions such as q?a "please", f'axwa "thank you", xat y'ara "maybe" ("who knows?"), etc. POSTPOSITIONS Kabardian uses postpositions instead of prepositions. Postpositions are words which determine the grammatical relations of the nouns that precede them: naw "after", p'awnda "until", (y) day "at, in", ha'a "because, after", ypam "in front of", y'am "at the end, after", nam' "except", f'a'() "except", xwada "like", pp''a "because, due to", zaxwakwa "between", y gww "about", nas "to, up to, until", "from", ndara "since", yp'a'a "instead of", da'a "behind". a adwa nawm "after lunch" a I adwa p'awnda "until lunch, before lunch" I padyay p'awnda "until tomorrow" (I) m ypam('a) "in front of the horse" I war ha'a "because of you" I 'lam y gww "about the boy" y m xwada "like his horse" II Yamnya f'a'a "except Yaminezh" q'lam nas "to the city" y e Ia day ?-- Mwhamad y q'wa-m M. poss.3sg. brother-ERG at be-pret.-af. "Muhamad was at his brother's" y I I aIa st wy ?waxw-m ha'a --?--r what poss.2sg. work.-ERG about dir.3pl.-say-pret.-inter. "What did they say about your work?" As the preceding examples illustrate, postpositions govern the ergative case of nouns. Some of them govern the possessive pronouns, rather than personal pronouns, e.g. sy w "after me", wy day "to you", (wa) wy p'a'a "instead of you", etc. Others govern the personal pronouns (cf. war ha'a "because of you", sar f'a' "except myself"). The majority of postpositions are derived from nouns, especially nouns denoting body parts, cf. ha "head", pa "nose", 'a "tail". Some postpositions can be inflected, e. g. day has the full case paradigm (NOM day, ERG daym, INSTR day'a, ADV daywa), and some, but not all, can be construed with possessive prefixes (e. g. y 32 gww "about (it/him)" 33. This means that many Kabardian postpositions are quite like relational nouns in languages such as Tibetan. Instead of local adpositions, Kabardian often uses directional (local) prefixes on the verb; the English sentence "the student is sitting on the chair" corresponds to the Kabardian sentence waynk-r ant-m tay-s- (student-NOM chair-ERG dir.-to sitaf.), where the equivalent of the English preposition on is the Kabardian verbal prefix tay- (on local prefixes see below). PARTICLES, CONJUNCTIONS AND INTERJECTIONS There are relatively few particles in Kabardian; these are the most frequently used ones: hawa "no"; I nt'a "yes" mys "here!" mda "there!, look!" I p'ara (interrogative particle); it is always placed at the end of a sentence and expresses a slight doubt: y Iy I wa p-'a-wa p'ara you 2sg.-know-ger. inter.particle "do you (really) know?" The other interrogative particle is a (also placed at the end of a sentence): a aI -r q'-k'wa-ma a he-NOM dir.-go-cond. inter.particle "Will he come?" The particle ayry is used as a quotation mark; it is usually best left untranslated: ye, e eya a ha w-naay, ayry ya-wp-- Bdnawq'wa why 2sg.-be.sad quot.part. 3sg.-ask-pret.-af. B "Why are you sad, asked Badinoko" Conjunctions are clitics, so they are mostly written as one word with the words they conjoin, e. g. -ra "and", -y "and", but there are also conjunctions which occur as separate words: y'y "and", wa "but", t'a "but", wayblama "even, but", ya...ya "either...or", hama "or". 33 33 The copulative conjunction -ra, -ry is repeated after each conjoined word within a noun phrase (NP): x I Tx-ra ?ana-ra "A book and a table" The conjunction -ry is placed after the verb in a sentence: " I" I e I Ia "M-r st a'awan" y?a-ry Satanyay y thak'wma-r mva-m ?wyh-- this-ERG what wonder said-and S. poss.3sg. ear-NOM rock-ERG place-pret.-af. "'What kind of wonder is this?' said Satanaya and placed her ear on the rock." The most common interjections are n "oh", waxw "ouch", ?wa "oh", wa "hey", yraby "hey!", ma "here!" (used while giving something away) 34 VERBS Cette singularit (ergatif) tient, en gros, ce que, l o nous pensons "je vois le livre", les Caucasiens pensent quelque chose comme "-moi le-livre (il-m')est-en-vue" (G. Dumzil, cit. in Paris 1969: 159). Kabardian verbal morphology is extremely complex. Prefixes and suffixes are used to express different verbal categories, and there is also apophony (regular root vowel alternation). The verb does not have the category of voice (it does not distinguish active and passive), 34 but it does have the categories of transitivity, person, number, tense, mood, causative, two types of applicatives (version/benefactive (Rus. versija) and conjunctivity/comitative (Rus. sojuznost')), reflexivity, reciprocity, involuntative, and evidentiality. Active and stative verbs are distinguished systematically, and many of the mentioned categories do not apply to stative verbs. THE VERBAL COMPLEX The verbal complex consists of a number of prefixes, the root, and a number of suffixes: P1...Pn -R-S1..Sn The prefix positions can be seen in the following matrix: 1. dir. 2. reflexive/reciprocal 3. version 4. conjunctivity 5. pot. 6. neg. 7. caus. invol. absolutive oblique agent - person markers In the non-third persons, the dynamic present tense marker -aw- is added between the positions 5 and 6, cf., e.g., q'-z-aw--k'wa "I make him come". As can be gathered from the scheme above, the personal prefixes can be inserted at several points in the prefix chain, but two fixed rules apply: firstly, the prefix for the absolutive argument (the "lowest ranking macrorole", see below) precedes all other prefixes, and secondly, the prefix referring to the agent (if there is one) is closest to the verbal root. The picture above is further complicated by the fact that certain local prefixes, e.g. xa- "in", da- "in", etc. (see below) can be inserted in the verbal complex between the prefix slots 4 and 5; moreover, the factitive prefix w- can be inserted immediately before the root. However, we leave these prefixes out of the matrix scheme, because they belong to the domain of word formation more than to morphology. 34 Cp. Giev 1985: 41-57, where arguments to the contrary are disputed. 35 The suffix positions: 1. intransitivity 2. tense 3. mood potential evidential 4. negation interrogativity We shall first deal with the prefixal verbal morphology, and then with the suffixal morphology. VERBAL NEGATION The negation of the verb is expressed with the suffix -q'm (for finite forms) and the prefix m- (for non-finite forms; this prefix immediately precedes the root, or the causative prefix): I s-k'wa-r-q'm 1sg.-go-pres.-af.-neg. "I am not going" yaay x w-m-la-w p-x-r haram- 2sg.-neg.work-ger. 2sg.-eat-NOM sin-af. "It is a sin to eat not working" ("It is a sin if you eat, and not work") The imperative is, according to this criterion, included in non-finite forms: ye s-w-m-ay 1sg.-2sg.-neg.-lament "don't lament me" The prefixal negation can occur in some finite forms, but this usually happens in fixed expressions and proverbs: , I tha, s-m-'a god 1sg.-neg.-know "by god, I don't know" The two verbal negations differ in scope: the prefixed -m- is the narrow scope negation, with thescope just over the verbal nucleus, while the suffixed negation -q'm negates the whole sentence (including the embedded participles, infinitives, and/or gerunds). 36 The other NW Caucasian languages also have prefixal negation with the infinite verbal forms, and suffixal negation with the finite forms. PERSON Kabardian distinguishes three persons singular and plural. Verbal person markers indicate the person of the subject of an intransitive verb / object of a transitive verb (the person which is in the nominative in the case of nouns), the person of the subject of a transitive verb (the person which is in the ergative in the case of nouns), and of the indirect object (the person which, in the case of nouns, is in the ergative in its function of dative, or some other oblique case): a) markers of the person which is in the nominative: sg. s w0// 0- /ma-/mpl. d f0// 0-/ma-/m- 1. 2. 3. The prefix ma- is typically used in the present tense, with intransitive verbs which have only one expressed argument (Rus. odnolinye neperexodnye glagoly), while intransitive verbs with two expressed arguments take the prefix 0- for the person in the nominative. If the verb has a monosyllabic root that ends in -a, the vowel of the 3rd person prefix is lengthened, hence mk'wa "he goes" (from kwa-n), but ma-dagw "he is playing" (from dagw-n). This is in accordance with the phonological rule of lengthening of accented vowels in open syllables (see above). Intransitive verbs with a preverb do not have the prefix ma- in the present tense, cp. m-da(r) "(s)he is sewing", but q'-aw-k'wa "(s)he is coming" (where q'- is a directional preverb, and -awis a present tense marker of dynamic verbs). b) markers of the person which is in the ergative (person of the transitive subject and person of the indirect object): 1. 2. 3. / -s-/-z/ -w-/-b/ -y()-/ -r(-) -d -f -y-xa- (> -y-) In the 3rd person singular the prefix -r- denotes the indirect object (usually the Recipient) 35: s-r-y-t 1sg.-3sg.-3sg.-give "He gives me to him" The usual explanation is that the marker -r- is a result of dissimilation in a sequence of two semivowels -y-...-y- > -y-...-r-; this can be formulated as a synchronic phonological rule, so in most grammars it can be found that the marker for the 3rd person indirect object is -y-, like for the direct object (see Hewitt 2005: 102). 35 37 Personal prefixes indexing Obliques (non-macrorole core arguments, including the causees of causative verbs) are also distinguished from those indexing Actors and Undergoers by ablaut; they regularly have the same form as the markers of the transitive subject, but the vowel is -a- rather than --: a y-xw-va-z-a--- 3pl.-ver.-2pl.-1sg.-caus.-carry-pret.-af. "I made you (pl.) carry him for them" In the preceding example -va- indexes the 2pl. causee argument. Note that it differs from the form of the prefix for the causer (-v-) in the following example: a y-xwa-z-v-a--- 3pl.-ver.-1sg.-2pl.-caus.-carry-pret.-af. "You (pl.) made me carry him for them". The prefix indexing the recipient also has the form marked by a-vocalism: Sy -r q'-za-f-t- my horse-NOM dir.-1sg.-2pl.-give-back "Give me back my horse!" In the 3rd person plural the suffix -xa is usually only added if the verb's subject is not expressed, and if the subject is not placed immediately before the verb 36: xar yayd- = yayd-xa- "they studied" The order of personal markers is always (in terms of traditional grammatical relations): direct object / subject of intrans. verb indirect object subject of trans. verb S/O IO A ( y) a yea (sa wa) b w-ay-s-t-- I you he-ERG 2sg.-3sg.-1sg.-to give-pret.-af. "I gave you to him" (a ) y yea (b sa) wa w-q'-z-ya-t-- (he-ERG I) you 2sg.-dir.-1sg.-3sg.-give-pret.-af. "He gave you to me" Forms with the plural suffix -xa- on the verb are characteristic for the contemporary literary language. 36 38 This schema shows that the verbal agreement system in Kabardian is ergative just like the case system, since the subject of an intransitive verb is treated in the same way as the direct object (S/O), while there is a different set of personal prefixes used for the subject of a transitive verb. With intransitive verbs the third position (A) is, of course, not realized. INDEFINITE PERSON The suffix -?a- denotes the "indefinite person", i.e. that the verb's subject or object is indefinite (it is translated as "somebody"); this suffix is used only when the verb is in the third person: IaI q'a-k'w--?a- dir.-go-pret.-suf.-af. "Somebody came" yI d-za-p-nw-?a 1pl.-part.-watch-fut.-suf. "Are we going to see somebody?" The above examples lead to the conclusion that the suffix -?a- indicates only the person of the nominative argument (i.e. of the intransitive subject or object, the lowest ranking macrorole). It appears to be possible to use it with other arguments as well in participial constructions (Kumaxov & Vamling 1998: 68-69). A different way of expressing the "indefinite person" is to use the second person subject prefix, which is interpreted as referring to indefinite prson. This is possible in proverbs and statements of general truth: ye tn- w-ya-p-n easy-af. 2sg.-3sg.-see-inf. "It is easy to see him", lit. "It is easy for you to see him" The second person prefix with indefinite reference is added to the infinitive (or "masdar") and the predicate must be an adjective such as gww "difficult", tn "easy", dawa "good", halmat "interesting", etc. TRANSITIVITY Verb valency is the number of arguments needed to complete the meaning of the verb in question. Verbs can be avalent (e. g. it is raining this verb is in English syntactically monovalent, but semantically avalent, since no thematic role is assigned to "it"), monovalent (e. g. I am sitting), bivalent (e. g. I am hitting an enemy), trivalent 39 (e. g. I am giving a book to a friend), possibly also quadrivalent (e. g. I am buying a book from a friend for twenty pounds). Verb valency is a semantic concept, realized in syntax through the category of transitivity. In most languages, bi- and trivalent verbs are realized as transitive verbs, i. e. verbs which have a compulsory nominal complement (direct object), possibly two complements (direct and indirect object). Arguments of bivalent verbs express different thematic roles according to the types of meaning they express. For example, verbs of giving (to give, to donate) always distinguish between the sender ("the person who is giving"), the theme ("the thing which is being given") and the recipient ("the person to whom something is being given"), and verbs of seeing distinguish between the thematic roles of the stimulus ("what is being seen") and the experiencer ("the person who is seeing"). Thematic roles can be grouped into macroroles with common semantic-syntactic features. We can distinguish between two macroroles: Actor and Undergoer. The Actor is always the thematic role closer to the left edge of the following hierarchy, while the Undergoer is always close to the right edge of the hierarchy 37:. Finally, the argument of a stative verb would be the traditional subject of verbs such as to lie, to sit, to exist, etc. The macroroles Actor and Undergoer of the action are, in a sense, the semantic correlates of the traditional syntactic-semantic concepts of ''subject'' and ''object'', which cannot be uniformly defined in all the languages of the world 38. Some Kabardian bivalent verbs can appear in their transitive and intransitive form, and many bivalent verbs can only be construed as intransitive (Rus. dvuxlinye neperexodnye glagoly). The way in which transitive and intransitive verbs differ in Kabardian in terms of the number of arguments, i. e. nominal complements to the verb meaning is typologically very interesting. Some linguists, e. g. Georgij Klimov (1986: 51), claim that a large majority of verbs in the Abkhaz-Adyghean languages are intransitive, precisely because they can be used with only one argument as complement, without breaking any syntactical rules. According to this criterion verbs meaning "to hit", "to catch", "to eat", "to kiss", "to lick", "to wait", "to move", "to call", "to do", "to ask", "to want", "to hunt", etc. are also intransitive in the AbkhazThe hierarchy was adapted from Van Valin and LaPolla 1997. In informal terms, the actor is the most "active" of the arguments of a particular verb, while the undergoer is the least active argument. 38 About this see e. g. Matasovi 2005, Klimov (ed.) 1978: 59. 37 40 Adyghean languages. Klimov uses the term ''diffuse'' or ''labile'' verbs for those verbs which can be used both in a transitive and an intransitive construction; this category comprises verbs meaning "to sow", "to graze", "to plow", "to knit", "to embroider", "to weave", etc. 39. These seem to be mostly verbs the first argument of which (the agent) is always a human being or a person, while the second argument (the patient) is inanimate. Sometimes the only difference between transitive and intransitive verbs is in different root vocalism (Ablaut); transitive forms end in , and intransitive forms in -a: d-n "to sew (something)" - da-n "to be involved in sewing", tx-n "to write (something)", txan "to be involved in writing", -n "to avoid", a-n "to run away", tn "to give, to give presents" and tan "to give, to give presents", xn "to eat (something)" and xan "to eat", than "to wash (something)" and thaan "to wash", xn "to reap (something)" and xan "to reap", pn "to collect (something)" and pan "to collect", 'n "to do" and 'an "to know", 'n "to kill" and 'an "to die" 40. Transitive verbs can be derived from intransitive ones using some suffixes and prefixes, e.g. the suffix -h-, cf. q'afa-n "to dance" (intransitive), q'af-h-n "to dance (a dance around something)" (transitive). Sometimes the difference is purely lexical, e.g. the verbs h-n "to carry" and '-n "to do" are always transitive. If we assume that the basic form of the verb is the one with final stem morpheme -a-, while the form with the morpheme -- is derived, then a large majority of Kabardian languages are intransitive. With some exceptions, Kabardian is a language without (underived) transitive verbs. Intransitive verbs with two arguments often express the fact that the Undergoer is not entirely affected by the action, i. e., the fact that the action is not being performed completely; in terms of Role and Reference Grammar, these verbs express activities, but not accomplishments (active accomplishments): x ha-m q'wpxa-r y-dzaq'a dog-ERG bone-NOM 3sg-bite "the dog is biting the bone (to the marrow, completely)" x w ha-r q' pxa-m y-aw-dzaq'a dog-NOM bone-ERG 3sg.-pres.-bite "the dog is gnawing, nibbling at the bone" Ia a 'la-r m-da boy-NOM 3sg.-read "the boy is reading" intransitive verb with 1 argument 39 According to Kumaxov (1971), in the closely related Adyghean language the number of "labile" verbs is significantly greater than in Kabardian. 40 Kuipers (1960) considers the opposition between a and in verbs a part of the wider system of "introvert" forms (with a) and "extrovert" forms (with ) in Kabardian, where a and are not morphemes for "introvertedness/extrovertedness", but the realization of the feature of "openness", which, according to Kuipers, is parallel to the phonological features such as palatalization, glottalization, etc. 41 Ia x 'la-r tx-m y-aw-da boy-NOM book-ERG 3sg.-pres.-read "the boy is reading the book" - intransitive verb with 2 arguments Ia x e 'la-m tx-r ya-d boy-ERG book-NOM 3sg.-read- transitive verb "the boy is reading the book (to the end), young man reads through the book" 41 r mtxa "he is writing" (intransitive) / b txm yatx() "he is writing a letter" (transitive) pa-r pa-m y-xwa "the carpenter is arranging the boards " (intransitive) / pa-m pa-r y-xwa "the carpenter is arranging the boards" (transitive); in the second sentence it is implied that the action will be performed completely, i. e. that the verbal action will be finalized (there is no such implication in the first sentence). Some linguists (Catford 1975, Van Valin & LaPolla 1997: 124) refer to the intransitive construction as the antipassive. The antipassive is a category which exists in many ergative languages (Dyirbal, Chukchi, etc.). The verb becomes intransitive in the antipassive, and the only compulsory argument of such verbs is the doer of the action, which is marked for the same case as the subject of an intransitive verb and the object of a transitive verb in an active (ie. not antipassive) construction. This case is usually called the absolutive, but in Kabardian it is traditionally referred to as the nominative. The patient can either be left out in the antipassive construction, or it can appear in an oblique case. Equating the Kabardian ''bipersonal'' intransitive construction with the antipassive is not correct 42; the affix -(a)w- is not the antipassive marker, as Catford explains it, but the present prefix which is added in the 3rd person to intransitive verbs only, and in the 1sta and 2nd person to all verbs. Monovalent intransitive verbs with a preverb have this prefix as well, and these verbs cannot appear in an antipassive construction, e. g. n-aw-k'wa "he goes (this way)" (dir.-pres.-to go). In works on Kabardian there is quite a lot of confusion regarding this problem (the conditions under which the prefix (a)w- appears are not entirely transparent), but it is clear that some verbs are always either transitive or intransitive, i. e. that the difference is lexical with some verbs (which we wouldn't expect if the intransitive construction was actually the antipassive). The antipassive is usually characteristic for most transitive verbs, similarly as most transitive verbs can form the passive in the nominative-accusative languages. Aside from all this, the antipassive is always a derived, marked construction in the ergative languages, while the intransitive construction in the Abkhaz-Adyghean languages is just as unmarked (underived) as the transitive one. A) Transitive verbs 41 42 My informants tell me that this sentence can also mean "the young man is studying the book". About this see also Hewitt 1982 and Kumakhov & Vamling 2006: 13 ff. 42 Transitive verbs can take markers for all persons, except for the 3rd person direct object (this marker is the ''zero-morpheme'', the prefiks 0-). The order of personal markers is: direct object-(indirect object)-subject: yxa w-s-tx-- you-I-write down-pret.-af. "I wrote you down" y yaa sa wa w-s-w-- I you 2sg.-1sg.-see-pret.-af. "I saw you" yea w-ya-s-t-- you-he-I-give-pret.-af. "I gave you to him" ea (0-)ya-s-t-- (0-)3sg.-I-to give-pret.-af. "I gave it to him" With transitive verbs the subject takes the ergative case, and the object the nominative case. In RRG terms we would say that in constructions with transitive verbs the nominative case is assigned to the lowest ranking macrorole, while all other arguments are assigned the ergative case. Also, the order of personal prefixes can be expressed like this 43: I: lowest ranking macrorole; II: non-macrorole core argument; III: other macrorole (with transitive verbs this will always be the Actor). B) Intransitive verbs The order of personal markers with intransitive verbs is: subject (of an intransitive verb) indirect object; the subject is always the semantic agent (Actor): y s-w-aw-p I-you-pres.-watch "I am watching you" a ax pa-r dna-xa-m q'-y-da girl-NOM shirt-pl.-ERG dir.-3-pl.-sew (intrans.) "The girl is involved in the sewing of shirts" For the RRG terminology see Van Valin & LaPolla 1997; for the overview of verbal morphosyntax in Kabardian in RRG see Matasovi 2006. 43 43 With intransitive verbs the subject is assigned the nominative case, and the object the ergative case (in its dative function): ye x stwdyant-r tx-m student-NOM book-ERG "The student is reading the book" y-aw-da 3sg.-pres.-read ea sa kynaw-m s-ya-p-- I cinema-ERG 1sg.-3sg.-to watch-pret.-af. "I watched the cinema" (= "I was in the cinema") In RRG terms, the case assignment rule is completely identical for transitive and intransitive verbs: the lowest ranking macrorole is assigned the nominative case, while all other verb arguments (in this case the indirect object) are assigned the ergative case. Also, the order of verbal prefixes is the same as with transitive verbs: I: the lowest-ranking macrorole (with intransitive verbs this is also the only macrorole); II: non-macrorole core argument; III: other macrorole (this position is not realized with intransitive verbs, since they only have one macrorole). Verbs with the inverse (dative) construction are also intransitive; these are verbs which express belonging or a mental state, the only macrorole of which is the patient (Undergoer), assigned the Nominative case: I a I '-m a-r y-?a- old man-ERG. money-NOM. 3sg.-hold-af. "The old man has money" I II '-m psaan-r f'f'-t old man-ERG to speak-inf.-NOM like-impf. "The old man liked to speak" The inverse construction corresponds to Latin constructions of the type mihi est "it is to me", mihi placet "it is pleasing to me, I like". From the point of view of the abovementioned case assignment rules these verbs present no problem, because their only (and thus also the lowest ranking) macrorole is marked for the Nominative case. If a transitive verb has two complements (i.e. if it is a trivalent verb), only the lowest ranking macrorole (Undergoer) is in the Nominative: I gwp-m '-r group-ERG old man-NOM a Ia thamda y-'-- thamada 3pl.-make-pret.-af. 44 "The group made the old man thamada (commander of the feast)" in this sentence the noun thamda cannot be marked for the Nominative (i.e. it cannot appear in the form *thamda-r) 44. The object (i.e. the second argument, the Undergoer) of transitive verbs can be omitted; it is expressed by a personal prefix, which, in the case of a third person object, is the ''zero-morpheme (0-): aa 0-s-w-- 3sg.-1sg.-see-pret.-af. "I saw (it)" a 0-s-t--- 3sg.-1sg.-give-back-pret.-af. "I gave (it) back" Note that many, perhaps most bivalent verbs are intransitive in Kabardian: Ix aax '-xa-r m-p-xa man-pl.-NOM 3sg.pres.-watch-pl. "People are watching" a sa s-aw-p 1sg. 1sg.-pres.-watch "I am watching" a a sa b s--p-xwaz-- I there 1sg.-dir.-2sg.-meet-pret.-af. "I met you there" Some intransitive verbs have an "integrated" marker for the 3rd person object; they are "bipersonal" (Rus. dvuxlinye) 45, but their indirect object (oblique argument) is always in the 3rd person singular. The verb sn "to swim" is of this type: s-ya-s-- "I swam", w-ya-s--, "you swam", ya-s-- "he swam", d-ya-s-- "we swam", f-ya-s-- "you swam", ya-s-- "they swam". It seems that yw'n "to kill" behaves in the same way (in opposition to the transitive w'n). Finally, some verbal personal prefixes are different for transitive and intransitive verbs (see above): Kumaxov 1971: 68. With some of these verbs ya- has become part of the stem, ie. only etymologically is it a personal prefix, cf. Kumaxov 1973a. 45 44 45 eI ya-k'w "he goes (through something), he transverses" - transitive aI m-k'wa "he goes" -intransitive LABILE (DIFFUSE) VERBS Labile (or "diffuse") verbs are typically bivalent, but they can be used both transitively and intransitively: a a r m-va "he plows" (intrans.) / a I e b 'r ya-va "he plows the ground" (transitive) a aI r m-?wa "he threshes " (intrans.) / a eI b gwadz-r ya-?wa "he threshes wheat" (transitive) These verbs are relatively rare in Kabardian, but their number is significantly greater in the closely related Adyghean language 46. From works on Kabardian (and based on my own questioning of native speakers) it is unclear whether two lexical units should be distinguished in the case diffuse verbs (two verbs differing with respect to transitivity), or whether it is just one lexical unit (one verb with two uses / constructions). CAUSATIVE Verbs receive an additional argument in the causative construction, i.e. their valence is increased by one. All Kabardian verbs can form the causative, including intransitives, transitives, and ditransitives. The causative prefix is a-. I I I w w k' a-n "to go": m-k' a "he goes": ya--k'wa "he sends him" = "makes him go". The causative prefix a- / - turns intransitive verbs into transitive verbs: Ia aI 'la-r gwbwa-m m-k'wa boy-NOM. field-ERG 3sg.-go "The boy goes into the field" a Ia aI na-m 'la-r gwbwa-m y--k'wa mother-ERG boy-NOM field-ERG 3sg-caus.-go "The mother sends the boy to the field" y ()a swp-r (q'a-)v-- soup-NOM (dir.)-to cook-pret.-af. "The soup was boiling (it was cooking)" 46 46 Ia II y aa 'la c'k'w-m swp-r q'-y--v-- boy little-ERG soup-NOM dir.-3sg.-caus.-to cook-pret.-af. "The boy was cooking soup" Causative can also be built from reflexive verb forms, e.g. zaawan "make someone hit himself". Like, e.g.. Turkish, but unlike many languages, Kabardian allows "double causatives", i.e. the causative suffix can be added to a transitive verb that has already been derived by causativization: thus the causative -va-n "make boil, cook" can be causativized to a--van "make someone cook", taking three arguments: a I... a a a Nbaw-m q'z-ytxw y-?a-t-y... friend-ERG goose-five 3sg.-have-impf.-and y na-m his mother-ERG y-r-y-a--va-r-y 3sg.-3sg.-3sg.-caus.-caus.-boil-pres.-and p-m xw-y-h-- lord-ERG ver.-3sg.-bring-pret.-af. "(His) friend had five geese... and he made his mother cook them, and he brought them to the lord" Cf. also an "burn" (intransitive): a-an "burn" (transitive): a-a-an "make someone burn". Case assignment with causative verbs is typologically very unusual 47. The case of the arguments in a causative construction is not determined by that verb, but by the verb from which the causative verb is derived. If this verb is intransitive and has only one argument, its only argument will be marked for the nominative, while the causer will be marked for the ergative (as the oblique argument), as in the previous example. If, on the other hand, the original verb is intransitive and has an indirect object (oblique argument), the only macrorole ("subject") of the original verb will be marked for the nominative (yadk'war "student" in the following example): eaI eaI y a yaadk'wa-m yadk'wa-r wsa-m q'-r-y-a-d-- teacher-ERG student-NOM poem-ERG dir.-3sg.-3sg.-caus.-to read-pret.-af. "The teacher encouraged the student to read the poem" Information on this is given according to Kumaxov (ed.) 2006: 436 and according to the examples obtained from my informants. 47 47 Finally, if the causative verb is derived from a transitive verb, the lowest-ranking macrorole of this (original) verb will be in the nominative, and the other macrorole in the ergative; the causer is again in the ergative: I Ia a '-m 'la-m dabz-r y-r-y-a-h-- old man-ERG boy-ERG girl-NOM 3sg.-3sg.-3.sg.-caus.-to carry-pret.-af. "The old man made the boy carry the girl" I Ia ya '-m 'la-m pa-r y-r-y-a-q'wt-- old man-ERG boy-ERG tree-NOM 3sg.-3sg.-3sg.-caus.-cut-pret.-af. "The old man made the boy cut the tree" Of course, all of the nominal arguments can be left unexpressed, and proper nouns and indefinite NPs do not receive case marking: ye aI xa Q'arawyay y ha'a-m-ra y -m-ra y-a-x-- Q. 3sg.poss. guest-ERG-and 3sg.poss. horse-ERG-and 3sg.-caus.-eat-pret.-af. "Karaavey fed his guest and his horse" (in this sentence the name Q'arawyay would be in the ergative as the causer, the undergoer of the underived verb, i. e. the food, which is unexpressed, would be in the nominative, and the only case-marked nouns (ha'a and ) are in the ergative as the indirect objects viz. non-macrorole core arguments). These unusual rules of case assignment with causative verbs are related to the rules of case assignment in subordinate clauses (see below), where the case of the nouns in the main clause depends on the role of these nouns in the subordinate clause. Since causers are agents, the causative verb receives a personal prefix for the causer which takes the position of the prefix for the agent / subject of a transitive verb (immediately before the causative prefix), and the noun denoting the causer is in the ergative; the agent of the underived verb is reduced to the status of oblique argument / indirect object. The causative verb can thus take up to four personal markers 48 (for the causer, the subject, the object and the indirect object): I xx a ax '-m fz-m tx-xa-r pa-m y-r-ry--t-xa man woman books girl 3sg.-3sg.-3sg.-caus.-give-3pl. "The man makes the woman give the books to the girl" y ax ya sa wa b-xa-m s-ra-w-z-a-t-- I you he-pl.-ERG 1sg.-3pl.-2sg-1sg.-caus.-give-pret.-af. "I made you give me to them" 48 My informants warn me that examples like these are slightly unnatural, fabricated. 48 The order of personal prefixes is basically the same as with normal transitive verbs (see above), except for the fact that there is an extra position, the one for the causer immediately before the causative prefix 49. According to agirov (1977: 124) and Kumaxov (1989: 218), the causative prefix a(also Adyghe a-) is cognate with the Ubykh causative prefix a-, (for plural objects only) and with the Abkhaz causative prefix r- (the sound correspondence is regular). This would mean that the causative formation is inherited from Proto-NWC. INVOLUNTATIVE A verb in the category of involuntative indicates an action which is done unintentionally. The Russian term is kategorija neproizvol'nosti, cf. Klimov 1986: 45. In the involuntative verbs take the prefix ?a'a-: a a ha-m ba-r y-thal-- dog-ERG fox-NOM 3sg.-kill-pret.-af. "The dog killed the fox" a IIa ham bar ?a'athalh "The dog slaughtered the fox (unintentionally)" a IIa ha-r ba-m ?a'athalh "The fox (unintentionally) slaughtered the dog" I yIa 'la-m dw-r y-w'-- boy-ERG thief-NOM 3sg.-kill-pret.-af. "The young man killed the thief" Ia IIyIa 'la-m dw-r ?a'a-w' "The young man (unintentionally) killed the thief" IIIa s-?a'a-k'wad-- 1sg.-invol.-disappear-pret.-af. 49 Dixon (2000: 49) includes Kabardian in his typology of causatives, claiming that it belongs to a small group of languages in which the causee in a causative derived from a transitive verb retains its Amarking (marking of agents of transitive verbs). As a similar case he adduces an isolate, Trumai (Brasil), in which both the causer and the causee take the ergative marking in a causative construction. However, what is special about Kabardian is that, in causatives built from intransitives, the same thing happens: the original "subject" retains its subject properties, getting the nominative case and not being indexed on the verb. There are other languages in which subjects retain some subject properties in causatives, e.g. Japanese (reflexive binding) and Qiang (case marking). 49 "This accidentally disappeared on me" (Rus. to u menja nevol'no propalo) y yIIyIa wa w-s-?a'a-w'-- 2sg. 2sg.-1sg.-invol.-kill-pret.-af. "I accidentally killed you" As can be seen from the previous example (the order of personal prefixes is patientagent), a transitive verb does not become intransitive in the involuntative, i. e. the action of the verb still ''affects'' its object 50. In Kabardian grammars I find no examples of the involuntative construction with causative verbs. Although causativity seems to presuppose that the first argument of the verb is a conscious instigator of the action (the agent), my informants say that the following sentence is possible: Ia I IIyIa 'la-m '-m dw-r ?a'-y-a-w'-- boy-ERG old.man-ERG thief-NOM invol.-3sg.-caus.-kill-pret.-af. "The boy made the old man accidentally kill the thief" I found the following example in the biography of abagy Kazanoko (Nal'ik 1984): aI I II IIaa bwy-p'-r zady-ry c' c'k'w-r bee-keeper-4 together.rise-and he.goat small-NOM dw-m q'-?a'-a-xw-- wolf-ERG dir.-invol.3pl.-caus.-drop(?)-pret.-af. "Four bee-keepers rose together and made the wolf (unintentionally) drop the little goat" Note that the prefix -?aa- modifies the action of the original actor (the wolf), which is the derived causee, rather than the action of the derived actor (the four beekeepers). It appears that the involuntative cannot be used with stative verbs, such as taysn "sit": Ia e 'la-r ant-m tay-s- boy-NOM chair-ERG dir.-sit-af. Pace Abitov (ed.) 1957: 93, Hewitt 2004: 183. Moreover, the case marking on the arguments remains as in the non-involuntative construction. Prefixes with the similar function to the Kabardian involuntative exist in Abkhaz, but also in Georgian (Hewitt 2004: 183). 50 50 "The boy sits on the chair" but: *lam antm ?a'atays "the boy accidentally sits on the chair"; rather, one must use the following construction with the negated verb xwyayn "want": Ia ey e 'la-r ant-m m-xway-wa tay-s- boy-NOM chair-ERG neg.-want-ger. dir.-sit-af. The verb containing the involuntative prefix can be used in polite questions, and the prefix is best rendered as "perhaps, by chance": IIay I? q'-f-?a'a-m-aw--wa p'ara? horse dir.-2pl.-invol.-neg.-see-pret.-ger. inter. "Haven't you seen a horse, by chance?" The origin of the involuntative prefix is an incorporated syntagm which includes the noun ?a "hand" and the participle 'a "doing" (to do something unintentionally is ''to do something using the hand, and not the mind''). A similar, but etymologically unrelated, "involuntative" prefix exists in Abkhaz (-ama-). FACTITIVE Adding the prefix w- to a nominal stem forms verbs the meaning of which is ''to make something become or have the quality of what the nominal stem expresses", e.g. wf'ayn "to pollute, to make dirty" from f'ay "dirty", or wq'abzn "to clean", from q'bza "clean": a yIea sbyy-m dna-r y-w-f'yay-- kid-ERG shirt-NOM 3sg.-fact.-dirty-pret.-af. "The kid made the shirt dirty" As the case marking on argument shows, the verbs containing the factitive prefix are transitive, just like the causative verbs. In a sense, the factitive is just a special type of denominative causative. The factitive prefix immediately precedes the verbal root. It can be freely combined with the causative prefix, which it follows, cf. e.g. b "soft", wabn "to make soft, soften", yaawabn "make someone soften (something). 51 The division into dynamic and stative verbs does not coincide with the division into transitive and intransitive verbs. Both transitive and intransitive verbs can be either dynamic or static. Dynamic intransitive verbs express action, activity; they are morphologically marked by the prefix -aw- in the present tense. Intransitive dynamic verbs have the prefix ma-(m-) in the 3rd person singular present. Here are some examples of dynamic verbs: s-aw-xda-r "I mock", w-z-aw-h "I carry you", II d-awp''a-r "we hurry", f-aw-la-r "you work", I m-k'wa-r "he goes" Stative verbs express a state, or the result of an action. They are often derived from nouns. They do not have the facultative suffix r in the present, but the affirmative suffix - is compulsory; in the present they do not have the prefix -aw- like dynamic verbs: sa s-- I 1sg.-lie-af. "I am lying" a -r t- he-NOM stand-af. "He is standing" s- "(he) is sitting (on a horse)", "he is riding", cf. - "horse", sn "to sit" All stative verbs are intransitive, except for the verb ?n "to hold". It seems that every noun can be used as a stative verb, i.e. it can be turned into an intransitive verb by adding the suffix - (for affirmative forms): e sa s-prawfayssawr- I 1sg.-professor-af. "I am a professor" Moreover, even adpositions can be turned into (stative) verbs by adding the affirmative suffix -: ay y zwa naw- war after-af. "It was (the time) after the war" 52 APPLICATIVES Kabardian has two sets of applicative prefixes. Applicatives are usually defined as constructions in which the number of object arguments selected by the predicate is increased by one with respect to the basic construction. The object of the original construction is usually demoted to the status of the oblique argument, and the applied argument takes at least some of the properties of the object, cf. the English opposition between Jane baked a cake and Jane baked John a cake, where John is put in the first post-verbal position otherwise reserved for direct objects 51. However, in contradistinction to the applicative construction in most other languages, both Kabardian applicatives do not affect the choice of the object/undergoer. According to Peterson (2007) the benefactive and the comitative functions of the applicative construction are the most common ones cross-linguistically. We have both of them in Kabardian. I. VERSION (BENEFACTIVE/MALEFACTIVE) The prefix xwa-/-xw- indicates version, i.e. for whose benefit the action is performed; it could also be called a benefactive 52: xa p-xwa-s-tx-- 2sg.-ver.-1sg.-to write-pret.-af. "I wrote for you" The prefix -xw- is placed immediately after the prefix for the person for whose benefit the action is performed: Ia s-p-xwa-k'w-- 1sg.-2sg.-ver.-to go-pret.-af. "I went for you (on your behalf)" ya I, x w wy-na-r g f'a-n-, pysmaw q'-xwa-p-tx-ma your-mother-NOM be glad-fut.-af. letter dir.-vers.-2sg.-to write-cond. "Your mother will be glad if you write her a letter." There is also the malefactive (adversative) prefix f'-/f'a-, which seems to be parallel to the version prefix -xw-, but it indicates to whose detriment (or against whose will) the action is performed 53: Note that English does not have any applicative morphology, and that the applied argument does not take all of the object properties, e.g. it cannot be passivized. 52 Applicatives (version prefixes) exist in the other NW Caucasian languages. Hewitt (2004: 134f.) calls the prefixes expressing version in NW Caucasian "relational particles" (cp. Abkhaz -z()- which corresponds to Kab. -xw -) to distinguish them from version prefixes in Kartvelian, where a somewhat more complex system exists. 53 Kumaxov 1971: 276. Cf. the similar "adversative" prefix ca- in Abkhaz. 51 53 yaIIa w--f'-da-k'w-- 2sg.-3pl.-advers.-conj.-go-pret.-af. "You went with them against their will" yIIa w-s-f'-da-k'w-- 2sg.-1sg.-advers.-conj.-go-pret.-af. "You went with them against my will" xIx Ixa Ixa xa-z-a-p' -xa-r maz-m dir.-1sg.-caus.-graze.a.night-pl.-NOM wood-ERG s-f'-xa-ada----y q'-s-xwaxw--r-q'm 1sg.-advers.-dir.-run-back.-pret.-af.-and dir.-1sg.-drive.out-back-pres.-neg. "The horses that I herded at night ran away on me into the wood and I can't drive them out again". IIa -r wagw-m -s-f'a-k'wad-- horse-NOM road-ERG dir.-1sg.-advers.-pret.-af. "The horse disappeared to me on the road, I lost my horse along the road" The category of version in Kabardian should not be confused with the typologically similar applicative construction, which involves the adding of an argument to the core of the clause and increasing the transitivity of a verb. In Kabardian, adding the version prefix -xw- and the adversative prefix -f'- does not affect the transitivity of a verb. The applicative can be freely combined with the causative: II tha-m c'k'w-r q'-p-xw-y-a-w god-ERG this little-NOM dir.-2sg.-ver.-3sg.-caus.-grow/become "May God raise this little one for you!" a sy da-r q'-p-xw-aw--na my gold-NOM dir.-2sg.-ver.-pres.-caus.-remain "I am leaving you my gold" (= "I am making my gold remain for you") 54 II. CONJUNCTIVITY (COMITATIVE) The prefix expressing conjunctivity (Rus. sojuznost') -da-/-d- indicates that the subject is performing the action together with somebody else 54: s-da-k'w-- "I went with him" : s-k'w "I went" 1sg.-conj.-go-pret.-af. da-s-h-- "I carried (it) with him" : sh "I carried (it)" conj.-1sg.-carry-pret.-af. a dabz-r y-na-m d-aw-la girl-NOM 3sg.poss.-mother-ERG conj.-pres.-work "The girl works with her mother" I aIx ea '-m ha'a-xa-m xw y-d-ya-f-- old.man-ERG guest-pl.-ERG sour.milk 3pl.-conj.-3sg.-drink-pret.-af. "The old man drank sour milk with the guests" Note that ha'axa "guests" is in the Ergative in the preceding example, which shows that the applied argument has the status of the oblique, rather than direct object/undergoer. Compare also the Ergative case of the applied NP in the following example: a a I II -r y nbaw c'k'w gwar-m mxwa gwar-m day some-ERG he-NOM 3sg.poss. friend small some-ERG 'an da-dagw-rt-y 'an conj.-play-impf.-and "And one day he played 'an (a game with sheep bones) with his little friend" The conjunctivity prefix follows the person marker it refers to, and it also follows the person marker expressing the argument marked with the Nominative ("the lowest ranking macrorole"); stating this rule in terms of the traditional "Subject" would be confusing, since we would have to say that -da-/-d- precedes the subject of transitive verbs, and follows the subject of intransitives: x b-d-z-aw-x 2sg.-conj.-1sg.-pres.-eat A genetically cognate comitative/conjunctivity prefix exists in the other NW Caucasian languages, cf. Ubykh dz-, Abkhaz and Abaza c()-. Abkhaz has another applicative marker, la-, which has instrumental function (Hewitt 2004: 134). 54 55 "I am eating this with you" (transitive verb) a s-b-d-aw-la 1sg.-2sg.-conj.-pres.-work "I am working with you" (intransitive verb) With transitive verbs, adding a conjunctive prefix can refer not only to the conjunction of actors, but also of undergoers (Kumaxov et alii 2006: 250): e Ia x qwyay-m 'qwa da-x cheese-ERG meat conj.-eat "Eat meat with cheese" a e ex a Hasan sy nrtxw qap-r yazm yay-xa-m d-y-ha-- H. poss.1sg. corn bag-NOM himself his-pl.-ERG conj.-3sg.-grind-pret.-af. "Hasan ground my bag of corn together with his own" Note that the added (applied) argument in the examples above is in the Ergative (in its oblique function). This shows that the added argument is not the object/undergoer, but oblique. According to my informants, the applied argument has to be in the Ergative even if it is indefinite: Ia I aa 'la dabz '-m d-y-w-- boy girl old.man-ERG conj.-3sg.-see-pret.-af. "A boy saw a girl with an old man" Just as with the category of version (see above), the category of conjunctivity involves the adding of another person marker to the verb, so from a typological point of view this looks like the comitative applicatives found, e.g., in Haka-Lai, a Tibeto-Burman language (Peterson 2007). However, the difference lies in the fact that the adding of the conjunctivity prefix does not affect the transitivity of a verb, as is clear from case marking and the shape of the person markers. A related conjunctivity (comitative) prefix exists in Abkhaz (-c()-). The conjunctivity/comitative applicative construction should be distinguished from the incorporation of the adverbial prefix -zda-, -zada- "together". In Russian, this is sometimes referred to as the category of "togetherness" (sovmestnost'). The adding of this stem to the verbal matrix does not involve adding any personal prefixes: y a wara sara d-zad-aw-la I you 2pl.-together-pres.-work "You and I work together" e a y a, 56 eI ay Zagwarm m day nrt w gwp Once H. to Nart rider group zayk'wa zd--a-nw raid together-3pl.-lead-inf. "Once, a group of Nart riders came to Himish, to take him on a raid (together with them)" q'-dh--, dir.-come-pret.-af. RECIPROCITY The verb in the reciprocal form expresses that its two core arguments (the Actor and the Undergoer) act on each other simultaneously. The reciprocal prefix is za- (for intransitive verbs), and zar- (for transitive verbs): I za-gwr?wa-n "to arrange between each other" a zar-w-n "to see each other" a d-zar-wat-- 1pl.-rec.-meet-pret.-af. "We met each other" The core arguments of the verb in the reciprocal form must be in the ergative case, to which the conjunctive suffix -ra "and" is attached: I Iay a '-m-ra y q'wa-m-ra kwad 'wa zar-aw--q'm old.man-ERG-and 3sg.poss. son-ERG-and long doing rec.-see-pret.-neg. "The old man and his son have not seen each other for a long time" Of course, personal pronouns in the 1st and 2nd person are not case marked, but they also receive the conjunctive -ra: Iy a Fara dara kwad m'aw d-zar-w-n- you we long not.doing 1pl.-rec.-see-fut.-af. "We will see each other shortly" Perhaps under the influence of the Russian reciprocal construction (drug-druga), Kabardian has also developed the construction with the "reciprocal pronouns" zdryay ("one-other"): ae Iy, Ia a 57 Z-m dryay-m z--y-a-pk'w-w-ra, one-ERG other-ERG refl.-dir-3sg.-caus.-avoid-ger.-and ?wha-m zarh-- za-y--r hill-ERG meet-pret.-af. brother-suff.-3-NOM "And, after avoiding one another, the three brothers met on the hill" REFLEXIVITY Kabardian does not have reflexive pronouns; reflexivity is expressed by the verbal prefix za-/z-/z-, which indicates that the subject of the action is the same as the object; from the historical point of view, this is the same prefix as the basic reciprocal prefix. Reciprocity and reflexivity are in many languages semantically and morphologically related, cf. the Croatian verbs tui se (= to hit oneself or to hit each other), gledati se (= to look at oneself or to look at each other). The reflexive prefix follows the prefix for the subject of an intransitive verb (the lowest ranking macrorole, see above) and precedes the prefix for the subject of a transitive verb (the other macrorole): oI s-z-aw-wp'- 1sg.-refl.-pres.-ask-back "I ask myself" (intransitive verb) yoI w-z-aw-wp'- 2sg.-refl.-pres.-ask-back "You ask yourself" aI z-z-aw-tha' refl.-1sg.-pres.-wash "I wash myself" (transitive verb) a z-b-aw-xwpa refl.-2sg.-pres.-dress "You dress yourself" (transitive verb) The reflexive marker on the subordinated verb must be controlled by the subject of that verb, not the subject of the verb in the main clause: da wa z-b-wa-nw we you(SG) REFL-2SG-CAUS-relax-INF d-xwyay- 1PL-want-AFF 58 We want you to hit yourself The preceding example cannot be taken to mean *We want you to hit us, with the subject of the main clause (da) as the controller. It is typologically somewhat unusual that, in the case of transitive verbs, the reflexive affix precedes the personal affix for the constituent which has to be coreferent with it. The reflexive prefix can occur with the infinitive as well: x ps-m z-q'-xw-xa-dza-n water-ERG refl.-dir.-ver.-dir.-throw-inf. "to throw oneself into the water for him" The reflexive prefix is often combined with the suffix -(a)-, meaning "back". The details of the use of this suffix should be further examined, since it appears to be obligatory with intransitive bivalent verbs. The following examples are obtained from my informants: Ia II Ia 'la c'k'w-m z-y-'--- boy little-ERG refl.-3sg.-kill-back-pret.-af. "The little boy killed himself" (transitive verb) Ia II ya 'la c'k'w-r za-wa--- boy little-NOM refl.-hit-back-pret.-af. "The little boy hit himself" (intransitive verb) As can be seen from the examples, the reflexive construction of the verb does not change the valency of the verb (this can be seen by looking at the order of personal prefixes and the case assignment in the sentences above). Aside from this, it can be seen that, in a reflexive construction, the subject of an intransitive verb (to hit, wan) is treated in the same way as the subject of a transitive verb (to kill, 'n), i.e. that Kabardian syntax is nominative-accusative according to this criterion. Note the following pair of sentences with causative verbs, which point to the rules governing the use of --: a Ia ya pa-m 'la-r z-r-y-a-w-- girl-ERG boy-NOM refl.-3sg.-3sg.-caus.-hit-pret.-af. "The girl made the boy hit her" (litterally "herself", i.e. the girl) a Ia ya pa-m 'la-r z-r-y-a-wa--- girl-ERG boy-NOM refl.-3sg.-3sg.-caus.-hit-back.-pret.-af. "The girl made the boy hit himself" 59 The suffix -- "again, back", which we could refer to as "repetitive", can also appear without the reflexive prefix; it can often be translated as "again": a ya y w Ada apq'--ry apq' wrda --nw- Adyghean people-old-and people strong become-back-fut.-af. "And the old Adyghean people will become strong again". Besides temporal, the suffix -- also has directional (spatial) meaning, signifying the reverse direction of the action. Thus, while k'wan means "to go", k'wan means "to return", while tn is "to give", tn is "to give back", etc. When added to adjectival stems, it can also mean "even", e.g. ba is "a lot, many", naba is "more", and naba is "even more". In some cases, the suffix - can indicate that the action is performed again, but not by the same subject; in a Kabardian folk-story about the hero Ashamaz, we find a sentence in which his friend asks him to avenge his father: I I Wy da-r q'a-z-w'--r w'- your father-ABS DIR-PART-kill-PRET-ABS kill-back Kill the one who had killed your father! From the descriptive point of view, it can be said that the suffix - indicates that the lowest Macrorole argument of the verb (in traditional terms its intransitive subject or direct object) is doubly affected by the action: with nonreflexives, this may mean either that the action is performed twice (again) on (or by) that argument, or that the action is directed back at it. With reflexive intransitives, it also means that the lowest macrorole argument is doubly affected: once as the instigator of the action, and again as its undergoer. There is no special possessive reflexive. Rather, the usual possessive pronouns are used: , I sy mal zpxwxr maz pa-m q'-tay-z-n-t 1sg.poss. sheep 5-6 woodmeadow-ERG dir.-dir.-1sg.-leave-plup. "I had left my five or six sheep on a meadow in the wood" DEONTIC MODALITY The potential prefix -xwa-/xw- and/or the suffix -f()- express deontic modality, i.e. whether the subject is capable of doing the action expressed by the verb or not: yy w-s-xw-h-nw- 2sg.-1sg.-pot.-carry-fut.-af. 60 "I will be able to carry you" The prefix -xw- is placed immediately after the personal prefix for the agent, the potential doer of the action. It seems to be added only to transitive verbs, and in origin it is probably identical to the "version" marker (benefactive) -xw- (Hewitt 2004: 135; see above). The suffix -f- is added both to transitive and intransitive verbs. It is not entirely clear whether these are variants of the same morpheme (-f-/-xw-) which can be both a suffix and a prefix, or whether they are two different morphemes. Klimov (1986: 45) claims that this is only one morpheme which can be either a suffix or a prefix, and he cites it as -xwa- in Kabardian, -fa- in Adyghean, which is in keeping with the rule according to which the Common Adyghean *xw results in f in Adyghean. However, the suffix -f- is found in Kabardian texts as well, cf. dabza'a sawpsaaf "I speak Kabardian" (i. e. "I can speak Kabardian"); the potential prefix occurs more often with negative and interrogative forms, while the suffix is tied to affirmative forms of the verb. In any case, the potential should be distinguished from the so-called "hypothetical mood", which can be included in the category of evidentiality (see below). Potential differs from the proper verbal moods in that it is negated by the suffix -q'm, rather than with the prefix -m-, i.e. it is a finite verbal form: a II sy Dta Kwaba-m q'-f'a'-f-n-q'm 1sg.poss. sword gate-ERG dir.-pass-pot.-fut.-neg. "He will not be able to pass my 'Sword-Gate'" , Wazrmad dy m-wsa-ma, q'-t-xwa-h-n-q'm W. 1pl.poss. neg.-companion-cond. dir.-1pl.-pot.-carry.away-fut.-neg. "If Wazirmad is not our companion, we will not be able to kidnap her (sc. Satanay)" An interesting feature of the potential prefix is that it reduces the transitivity of the verb, i.e. it turns transitive verbs into intransitive. This is in keeping with the relation between transitivity and the "affectedness of the object", i.e. the patient: in the potential, the patient is not affected by the action, so the verb has to be intransitive, cf. the following two examples (Kumaxov, ed. 2006: 257) 55: ye w-ya-s-t-r-q'm 2sg.-3sg.-1sg.-to give-pres.-neg. "I don't give you to him" (the verb is transitive, so the prefix for the doer of the action, 1sg., is placed next to the verbal root) ye w-s-xw-ya-t-r-q'm 2sg.-1sg.-pot.-3sg.-to give-pres.-neg. "I cannot give you to him" (the verb is intransitive, so the order of the prefixes for 1sg. and 3sg. is reversed) This correlation between (at least some) potentials and intransitives seems to be an areal feature in the Caucasus. Cp. Hewitt 2004: 181ff. for similar examples from Mingrelian, Ingush, Khinalug, and Abkhaz. 55 61 However, the arguments of the verb in the potential form receive the same case endings as in the corresponding indicative 56: I ex smada-m m?arsa-r ya-x (note the 3sg. "transitive subject" prefix ya-) sick.man-ERG apple-NOM 3sg.-eat "The sick man is eating the apple" I x smada-m m?arsa-r xw-aw-x (note the lack of the 3sg. prefix) sick.man-ERG apple-NOM pot.-pres.-eat "The sick man can eat the apple" This can be accounted for if the potential construction is actually of the "inverse-type" (see above), i.e. if the preceding example should be rendered as "it is possible to the sick man to eat the apple". Unlike the potential prefix -xw-, the potential suffix -f- is freely combined with the version prefix -xwa-: I ? St p-xwa-s-'a-f-n ydry? what 2sg.-ver.-1sg.-do-pot.-inf. more "What more can I do for you?" PERSONAL AND DIRECTIONAL PREFIXES The use of directional prefixes is compulsory with many verbs for certain persons and tenses; the use of these prefixes is quite idiomatic, and it seems that each verb has its own pattern 57, cf. the intransitive verb an "to wait": s-n-aw-w-a "I wait for you" 1sg.-dir.-pres.-2sg.-to wait s-v-aw-a "I wait for you (pl.)" 1sg.-2pl.-pres.-to wait s-aw-a "I wait for him/I wait for them" w-q'-s-aw-a "you wait for me" 2sg.-dir.-1sg.-pres.-to wait 56 57 s-ya--- "I waited for him/for them" w-q'-za--- "you waited for me" 62 w-q'-d-aw-a "you wait for us" w-aw-a "you wait for him/for them" q'-z-aw-a "he waits for me" d-n-aw-w-a "we wait for you" 1pl.-dir.-pres.-2sg.-to wait d-aw-a "we wait for him/them" q'-z-aw-a "they wait for me" etc. w-q'-da--- "you waited for us" w-ya--- "you waited for him/them" q'-za--- "he waited for me" d-n-aw--- "we waited for you" d-y--- "we waited for him/them" q'-za--- "they waited for me" Some linguists believe that the use of the directional prefix q'- with polyvalent intransitive verbs depends on the person hierarchy (see below). TENSES Kabardian has a complex system of verbal tenses. It distinguishes the basic dimensions of the present, future and past, and, within the past, two degrees of remoteness: the preterite and the imperfect denote an action which happened in the more recent past, while the pluperfect denotes an event in the distant past. The category of tense is mostly expressed by suffixation (though there are also verbal prefixes in the present tense): present: prefixes ma- (m-), -aw- and the facultative suffix -r for dynamic verbs, without markers for stative verbs preterite: suffix - imperfect: suffix -(r)t for dynamic verbs and -m for stative verbs 58 anterior preterite: suffix --t pluperfect: suffix - anterior pluperfect: suffix -t categorical future: suffix -n factual future: suffix -nw future II: suffix -nwt The terminology for Kabardian verbal tenses differs greatly depending on the author; Kumaxov and Vamling (1996: 39 ff.) refer to the anterior preterite as the "perfect II", and to the preterite as the "perfect". The same authors mention also forms with the suffix , which they call "aorist", but these forms seem to be quite rare in texts; cp. also Abitov 1957: 120f. 58 63 In all verbal tenses there are special negative forms, expressed by the suffix -q'm; in the present of dynamic verbs the prefixes ma-, aw- disappear in the negative form, and the suffix -r becomes compulsory, cp. the following examples: 1. Intransitive monovalent dynamic verb k'wan: I() : I() s-aw-k'wa(r) : ma-k'wa(r) 1sg.-pres.-go 3sg.pres.-go "I go" "He goes" : I : s-k'wa-r-k'm 1sg.-go-pres.-neg. "I don't go" : I : k'wa-r-q'm go-pres.-neg. "He doesn't go" 2. Intransitive stative verb tn "stand" s-t- "I stand" : : t- : "He stands" : s-t-q'm : "I don't stand" : t-q'm "He doesn't stand" 3. Intransitive bivalent (dynamic) verb an "wait" () s-aw-a(r) "I wait (for him)" () y-aw-a(r) s-a-r-q'm y-a-r-q'm "He waits (for him)" "I don't wait (for him)" "He doesn't wait" 4. Transitive (bivalent dynamic) verb dn "sew" () s-aw-d(r) "I sew it" e ya-d-r "He sews it" s-d-r-q'm "I don't sew it" e ya-d-r-q'm "He doesn't sew it" The meaning of anterior verbal tenses is not entirely clear. These are the anterior pluperfect and preterite, and, because of the way it is formed, the future II as well. According to reference books, anterior tenses indicate an action which lasted for some time in the past, and forms in anterior tenses are glossed by adding the adverb "then" (Rus. togd), e. g. k'w "he went" in contrast to k'wt "he went then". Based on examples and the interviews with my informants, I find it most likely that the suffix -t- used in anterior tenses expresses definiteness, i.e. that a verb in an anterior tense indicates an action which was performed at a definite time in the past 59. This can be seen in the following sentence: ax II a, a a a a a Nrt-xa-r m 'p'a-m N-pl.-NOM that land-ERG 59 zamn a-m bz-t There do not seem to be any clear parallels to this kind of tense system in Comrie's cross-linguistic survey (Comrie 1985). 64 that time far-ERG perform.deeds-ant.pret. "The Narts lived in that land, (and) Sosruko's sword performed feats then, long time ago". The use of the anterior preterite in the preceding example is consistent with the use of the adverbial expression zamn am "at that time, long ago". Similarly, the use of the preterite is incompatible (or nearly so) with temporal adverbs such as dwsa "yesterday", which specify the exact time when the action was performed. With such adverbs the anterior preterite must be used: a a Ia / w d sa q'la-m s-k'w-t / yesterday city-ERG 1sg.-go-ant. pret. "I went to the city yesterday" *Ia *s-k'w-- 1sg.-go-pret.-af. The imperfect is, unlike the preterite, used for an action which lasted for some time or was repeated in the past. In narratives this tense alternates with the preterite, which in most cases indicates a one-off action, or an action which is not implied to have lasted for some time or to have been repeated in the past, e.g: . e y Ia da-s-t. Satanyay wna-m q'-'h-- Sawsrq'wa agw-m S. fireplace-ERG dir.-sit-impf. S. house-ERG dir.-enter-pret.-af. "Sosruko was sitting (impf.) by the fireplace. Satanaya entered (pret.) the house" Interestingly, the imperfect is compatible with temporal adverbs specifying the time when the action was performed: a a I w s-k'wa-rt d sa q'la-m yesterday city-ERG 1sg.-go-impf. "I was going to the city yesterday" The opposition between the imperfect and the preterite can easily be seen in the following paragraph: 65 "On the top of Uahamaxwa (Mt. Elbrus) Mazatha, Amish, Thagoled, Sozrash, Hlapsh and others were sitting together with Psatha and marking (y?at, impf.) the drinking of sana (drink of the gods). And so every year these gods organized (y't, impf.) the drinking of sana. And the one who was (taytmy, impf.) manliest on earth, he was brought over (yarty, impf.) and was given to drink (yrfart, impf.) from a horn filled with sana, as a favour to the thirsty little men on earth. The Narts esteemed (yap'art, impf.) highly the man who drank with the gods. And many years passed (yak'wa', pret.) in that way. At the celestial drinking of sana, Psatha, who personally sat as thamada (commander of the feast) got up and said (y?, pret.)." In this paragraph we can see how a sequence of events repeated in the past and expressed by the imperfect was interrupted by the event referred to by the commencing story, which is expressed by the preterite. The pluperfect generally expresses an action performed a long time ago, in the distant past: y yI aIa ax dy w q'-y-wa-nw-r zar-w'--nw- 1pl.poss. after dir.-3sg.-become-inf.-ABS recip.-kill-back-fut.-aff. q'-d---?a-- dir.-1pl.-pref.-3pl.-say-plup.-aff. dy da--xa-m 1pl. father-old-pl.-ERG "Our forefathers said to us long time ago that the ones who will exist after us would kill each other" In vivid narration the present tense can also be used to express a past action: a y . Ia Ia a, aI. 66 Bly ma-w swan-am y mxwq'wa-r. Z-p'--m-y adult 3sg.pres.-become S-ERG 3sg.poss. stepson-NOM part.-raise-pret.-ERG-and y-?-t pw bly-y, z-aw-gwk'wa. 3sg.-have-ant.pret. daughter adult-and refl.-pres.-fall.in.love "The Svan's stepson grows up; those who had raised him had a grown-up daughter, and they fall in love" (note that Swan here refers to a member of a Kartvelian people, the Svans) The difference between the categorical and the factual future is not entirely clear to me. Some sources say that the categorical future expresses an intention to perform the action, while the factual future expresses the speaker's certainty that the action will be performed. According to my informant, the natural way to say "I shall go to the city" is q'lam s-k'wa-nw- (city-ERG 1sg.-go-factual fut.-af.), whereas q'lam s-k'wa-n- (with the categorical future suffix -n-) would be used only if the subject will go to the city under a certain condition. However, from the passages such as the following one it would appear that the categorical future does not refer to any particular time when the action will be performed, while this specification is necessary with the factual future. If so, the opposition between the categorical and the factual future would correspond to the opposition between the preterite and the anterior preterite: ax xay ax yy I ay, a aaIy: "yy Iy " I. I xa : "a , a x I" aI a x a Nrt-xa-m xbza-w y--xa-t za-zawa-nw byy-m nart-pl.-ERG custom-ADV 3sg.-3pl.-in.be-impf. rec.-fight-fut. enemy-ERG p'aa y-r-t-w, br-y y-r--'a-w: "d-va-zawa-nw date 3sg.3pl.give-ger. message-and 3sg.-3pl.-caus.know-ger. 1pl.-2pl.-fight-fut. d-na-k'wa-nw- m-pxwada zamn-m" - --?ara. rha'a 1pl.-dir.-go-fut.-af. this-like time-ERG dir.-3pl.-say but byy-m xbza-r y-q'wta-ry: "Nrtpq' t-q'wta-n-, enemy-ERG custom-NOM 3sg.-break-and Nart.race 1pl.-break-fut.af. nrt xakw t-wn'a-n-" --?a-ry nrt xakw-m q'-y-h-- Nart land 1pl.-seize-fut.-af. dir.-3pl.-and Nart land-ERG dir.-3sg.-carry-pret.-af. "The old Narts had the custom to give the enemy the date, to send him the message that they would come to fight: "We will come to fight at that time", they used to say. However, the enemy broke the custom: "We will come to fight the race of the Narts (eventually), we will seize the land of the Narts", they used to say when they came to the land of the Narts." 67 In the preceding passage, apparently, the Narts used the factual future to give the exact time when they would come to fight, while their enemies just indicated that they would come to fight, without stating exactly when. The opposition clearly seems to be in the definiteness of time reference. Some authors refer to the future II as conditional. It is formed by adding the suffix -t to the factual future form. It seems that forms with the nt suffix, which are sometimes set apart as a distinct verbal mood (the subjunctive), can also be included in this category, cf. s-k'wa-nt "I would go" (see below). II , . y t'sp'a q'--m-xwta-ma, paa-nwta-q'm 3sg.poss. weak.spot dir.-3pl.-neg.-discover-cond. overcome.-fut.II-neg. "If they would not find his weak spot, they would not overcome him" Here are the selected paradigms of the verbal tenses: PRESENT A) dynamic intransitive verb k'wan "to go" 1. s-aw-k'wa(r) "I go" s-k'wa-r-q'm "I don't go" 2. w-aw-k'wa(r) "you go" 3. m-k'wa(r) "he goes" 1. d-a-k'wa(r) "we go" 2. f-a-k'wa(r) "you go" 3. m-k'wa-xa-r "they go" B) static intransitive verb -sn "to sit" 1. s--s- "I sit" 2. w--s- "you sit" 3. -s- "he sits" 1. d--s- "we sit" 2. f--s- "you sit" 3. -s- "they sit" C) dynamic intransitive verb psaan "to converse" sawpsa "I converse" wawpsa "you converse" mpsa "he converses" dawpsa "we converse" fawpsa "you converse" mpsa (mapsaxar) "they converse" D) transitive verb hn "to carry": s-aw-h "I carry him"/"I carry them" w-z-aw-h "I carry you" f-z-aw-h "I carry you (pl.)" 68 w-aw-h "you carry him" /"you carry them" s-b-aw-h "you carry me" d-b-aw-h "you carry us" ya-h "he carries him" / "he carries them" s-ya-h "he carries me" d-ya-h "he carries us" w-ya-h "he carries you" f-ya-h "he carries you (pl.)" f-d-aw-h"we carry you (pl.)" f-aw-h "you carry him" / "you carry them" s-v-aw-h "you (pl.) carry me" d-v-aw-h "you (pl.) carry us" y--h "they carry him" / "they carry them" s--h "they carry me" d--h "they carry us" w--h "they carry you" f--h "they carry you (pl.)" PRETERITE s-k'w- "I went" w- k'w- "you went" k'w- "he went" ss "I was sitting" ws "you were sitting" s "they were sitting" ds "we were sitting" fs "you were sitting" s "they were sitting" sa txm syad "I read a book" wa txm wyad "you read a book" r txm yad "he read a book" da txm dyad "we read a book" fa txm fyad "you read a book" xar txm yad "they read a book" sh "I carried him" / "I carried them" wsh "I carried you" fsh "I carried you (pl.)" ph "you carried him" / "you carried them" sph "you carried me" dph "you carried us" yh "he carried him" / "he carried them" syh "he carried me" dyh "he carried us" wyh "he carried you" fyh "he carried you (pl.)" th "we carried him" / "we carried them" 69 wth "we carried you" fth "we carried you (pl.)" fh "you (pl.) carried him" / you carried them" sfh "you (pl.) carried me" dfh "you (pl.) carried us" yh "they carried him" / "they carried them" sh "they carried me" dh "they carried us" wh "they carried you" fh "they carried you (pl.)" IMPERFECT s- k'wa -(r)t "I was going" w-k'wa(r)t "you were going" ya-k'wa(r)t "he was going" ANTERIOR PRETERITE s- k'w-t "(then) I went" PLUPERFECT s-k'wa-- "I went a long time ago" ANTERIOR PLUPERFECT s-k'wa-t "(then) I went a long time ago" CATEGORICAL FUTURE s- k'wa-n- "I will go" FACTUAL FUTURE s- k'wa-nw- "I will go, I am about to go" ( is the affirmative suffix) FUTURE II s- k'wa-nwt "I was about to go / I would go" INTERROGATIVE The interrogative is sometimes referred to as the question mood. It uses the same type of suffixal formation as verbal moods. Like verbal moods, the interrogative is a nonfinite verbal form (it takes the prefixal negation -m-) and it cannot be combined with the affirmative suffix -. However, considering the function of this category, it is better to think of it as a form of expressing the illocutionary force; the interrogative suffixes bring into question the content of the predicate, ie. the verb. The interrogative suffixes are -ra, -q'a, -wy: yx w-txa-ra 2sg.-write-inter. 70 "Are you writing?" (interrogative) II s-f-?w'a-n-q'a 1sg.-2pl.-meet-fut.-inter. "Will I meet you?" (interrogative) 60 The suffix -q'a can also be used in exclamations: aIxa, yI ! w g v-'axmy, za w-q'-y'-n-q'a wa-m! soon-late once 2sg.-dir.-exit-fut.-inter. hole-ERG "Sooner or later, you will exit that hole!" The interrogative has no suffix in the preterite and in the future, but the affirmative suffix is not used, and the intonation of the sentence serves as another indicator of interrogativity: aaxa f---tx- 2pl.-3pl.-caus.-write-pret. "They made you write (it)?" Iy d-f-xwa-k'wa-nw 1pl.-2pl.-ver.-go-fut. "Are we going to go for you?" The suffix -ra can be used twice in disjunctive questions: I I yII z yas-'a s-y-a-ha'a-f-n-ra s-y-m-a-ha'a-f-n-ra? 1 year-INST 1sg-3sg.-caus.-guest-pot.-fut.-ra 1sg.-3sg.-neg.-caus.-guest.-pot.-fut.-ra "Will he be able to receive me as a guest for a year or will he not?" Interrogativity can also be expressed with interrogative particles, e. g. the particles p'ara, ha "why", etc. They can be freely combined with the interrogative suffixes: a yea ? ha w-z-tay-s- mva-r q'a-bana-ra? why this(NOM) 2sg.-part.-dir.-sit-pret. rock-NOM dir.-leave-inter. "Why are you leaving this rock you were sitting on?" MOODS 60 In the interrogative formed with the suffix -q'a it is assumed that the answer will be affirmative (Kumaxov & Vamling 1998: 53). 71 Kabardian verbal moods are: indicative, imperative, admirative, optative, conditional and permissive. A) Indicative The indicative is the unmarked verbal mood. It has the suffixes - (for affirmative) and -q'm (for negation). B) Imperative The imperative is the bare stem (without any suffixes): la! "paint!" (lan "to paint") a! "lead!" (an "to lead") tx! "write!" (txn "to write") If the lexical verb contains directional prefixes, these remain in the imperative: I mda q'-k'wa "come here!" here dir.-go The third person singular imperative receives the personal prefix: yaI ee y-w-'a taylayfawn-r q'a-z-gwpss--m 3sg.-factitive-life telephone-NOM dir.-part.-invent-pret.-ERG "May live the one who invented the telephone!" The imperative is also used in the 2nd person plural, with the regular person prefix: e a eyI! fy Satanyay gwa f-ya-wp'! poss.2pl. S. lady 2pl.-3sg.-ask "Ask (pl.) your (pl.) Lady Satanay!" Instead of the 1st person plural imperative, the causative of the 2nd person singular or plural imperative is used, with the 1st person plural as the causer: d-v-a-tx (1pl.2pl.-caus.-write) "let's write". This is typologically completely parallel to the English imperative construction (let us write): I Wazrmad wsa d-v-a-' W. companion 1pl.-2pl.-caus.-do "Let us make Wazirmad our companion!" The negation in the imperative is the prefix -m-, as if it were a non-finite form: yI 72 w-m-k'wa 2sg.-neg.-go "don't go" The imperative can be formed from verbal stems containing preffixes for version or conjunctivity: a! "run!" s-xwa-a "run for me" s-xw-da-a "run for me with him!" The imperative can be reinforced by adding the suffix -t: xa "eat!" vs. xa-t "come on, eat!" ! x-m f-xa-pa-t sea-ERG 2pl.-dir.-look-imp. "Come on, look into the sea!" C) Admirative The admirative mood is formed with the suffix the suffix -y. It is used to express the speaker's admiration or the unexpectedness of the performing of the action expressed by the verb; few languages known to me have such a verbal mood, but it does exist, e. g., in Albanian: a sa nawba z ma s-aw---y I today 1 bear 1sg.-see-pret.-af.-adm. "Why, I saw a bear today!" The admirative suffix -y can also have an interrogative sense and imply that the speaker does not approve of the action expressed by the verb. D) Optative The optative is formed with the suffixes -ara(t), -rat and -'at, as well as the prefix -r-ay- (where -ay- is the petrified 3 sg. person marker) expresses a wish for an action to be performed. A morphologically formed optative as a verbal mood is very rare among the languages of Eurasia, but most Caucasian languages have this verbal mood 61. a a() -r q'a-s-ara(t) he-NOM dir.-come-opt. "Oh if he would come!" 61 According to the data in WALS, a morphologically formed optative must be an areal feature of languages spoken in the Caucasus; this doesnt refer only to the indigenous ("Caucasian") languages, but also to languages belonging to other families (Turkic, Iranian) which are spoken there. 73 ay m-r zy xw-r q'a-w---wa s-aw-arat he-NOM whose thigh-bone dir.-become-back-pret.-ger. 1sg.-see-opt. "May I see resurrected the one whose thigh-bone this is" yx exI wax q'yax-'at rain fall-optative "Oh if it would rain!" eI y-ray-'-f 3sg.-opt.-do-pot. "May he manage to do it" There is also an optative prefix w-, apparently identical with the 2nd person prefix; however, the optative formed with this prefix does not distinguish between the 2nd and the 3rd person, cf. w-k'wa "may he go", or "may you go" (Kumaxov 1989: 201). Besides that, a wish can also be expressed with the "optative particle" py(y), as in the greeting wpsaw py "may you be healthy". E) Conditional The conditional has the suffixes -m(a) and -am(a). It expresses the fact that the action is performed under a certain condition. A Kabardian verb in the conditional can be equivalent to an entire conditional clause in English: aa d-f-w--ma 1pl.-2pl.-see-pret.-cond. "If you saw us" Iy ye, yeI f'wa w-yada-ma, wacyanka-f' q'a-p-h-n- well 2sg.-study-cond. grade-good dir.-2sg.-get-fut.-af. "If you study well (hard), you will get a good grade" y I I, y I I thwrmba xw q'-y-'-ma foam white dir.-3sg.-appear-cond. s-q'-aw-k'wa-, 1sg.-dir.-pres.-go-back thwrmba xw q'-y-m-'-ma s-q'a-k'wa--r-q'm foam white dir.-3sg.-neg.-appear-cond. 1sg.-dir.-go-back-pres.-neg. "If a white foam appears, I am coming back, if a white foam does not appear, I am not coming back" 74 z- wa-t-t-nt, q'-t-xwa-b-wat--ma 1-horse-2sg.-1pl.-give-fut.II dir.-2pl.-ver.-2sg.-find-again-cond. "We would give you a horse if you found it for us" The suffix -ama is apparently added to the imperfect -t-; the complex suffix -tama- is used in irreal conditional clauses: a aI Ia, ax y -b s- aq'wa-m mf'a 'a-m-n-t-ama, this-ERG alot-pret. leg-ERG fire dir.-neg.-catch.fire-impf.-cond. ba-mta-xa-r y-s-nwta-q'm bee-hive-pl.-NOM 3sg.-burn-fut.II-neg. "If the leg alloted to him did not catch fire, the bee-hives would not have burned down" (in spite of its weirdness, the translation is correct; in the story from which this example is taken, "he" is the bee-keeper who was "alotted" one leg of a goat, and this leg caused the fire that burned down the beehives). As can be seen from the preceding example, the future II is used in the main clause when there is an irreal (counterfactual) conditional in the dependent clause. F) Permissive The permissive mood has the suffix -m(), -my. It expreses that the action is performed in spite of some fact or circumstance. It is translated into European languages with permissive clauses containing conjunctions such as although. I Ia I I w fa-'a 'al-a-my g -'a '- skin-INST boy-af.-perm. heart-INST man-af. "Although by skin (=judging by the skin) he is a boy, by heart he is a man". Some authors include the subjunctive in the list of verbal moods 62. The subjunctive is expressed by the suffix -nt; forms with this suffix seem to have a conditional meaning, i. e. they express that the action is performed under a condition, e. g. s-k'wa-nt "I would go", but in some contexts they also appear to express the possibility that the action is performed, as in the following example: I ax? st y-'a--nt Nrt-xa-m? what 3pl.-do-back-fut. II N.-pl.-ERG "What could the Narts do?" (asked as a rhetorical question) I, I 62 75 wsa s-p-'-ta-ma, s-na-k'wa-nt companion 1sg.-2sg.-make-impf.-cond. 1sg.-dir.-go-fut. II "If you would make me your companion, I would go". This is presumably the same form referred to as the future II in this grammar (see above). EVIDENTIALITY The basic evidentiality suffix is -an-. It is used to express that the action is probably happening (or that it has happened, or that it will happen), but that this was not evidenced by the speaker 63: a Ia -r q'a-k'wa---an- he-NOM dir.-go-back-pret.-evid.-af. "He probably came back" (but I did not see this) Instead of the category of evidentiality, Kabardian grammars talk about a special "hypothetical mood", Rus. predpoloitel'noe naklonenie. However, it can be shown that this is not a sub-category of mood; evidentiality is a category used to express the source of information on the basis of which the assertion is made. This category exists in many languages, and it is morphologically realized in Turkish, for example. The evidential suffix is actually an agglutination of the pluperfect suffix -a- and the future suffix -n. It often happens that affixes used as tense markers become grammaticalized as evidentiality markers and/or epistemic modality markers (cf. the English will have been in evidential expressions such as It will have been him, or Croatian future tense marker bit e in the evidential phrase Bit e da je doao "He must have come, I guess he came"). As a confirmation that the "hypothetical mood" does not belong to the same category as other verbal moods we can use the fact that, unlike the affixes for true verbal moods, the evidentiality affix can be combined with the indicative/affirmative suffix -, cf. Ia k'w--an- "he probably went" in opposition to k'w-- "he went". The suffix -'a "maybe" can also be used together with the evidential suffix -an, cf. IaI k'w--an-'a ma-w "maybe he went" (ma-w is the 3rd p. sg. present of the verb "to become"). Besides the synthetic evidential construction, there is the analytic construction with the auxiliary verb wn (used in the future) and the (participial) verbal base: It is not quite certain whether the source of information (evidentiality), or rather the uncertainty of the speaker (epistemic modality) is the primary function of this suffix. My informants tend to translate sentences with the suffix -an- using the Russian expression skoree vsego "most probably". 63 76 Ia f-k'w- w-n- 2pl.-go-pret. be-fut.af. "you probably went" ' dagw w-n- old.man deaf be-fut.af. "The old man is probably deaf" DEVERBAL NOMINALS Kabardian has three classes of deverbal nominals: the infinitive (a kind of verbal noun), the participle (a kind of verbal adjective), and the gerund (a verbal adverbial, with many features of participles in other languages; some linguists would call it a converb). I. INFINITIVE The lexical form of verbs is the infinitive, which ends in -n. The infinitive is actually a verbal noun which can be inflected for case, e. g. txan "to write" has the forms txanr (NOM), txanm (ERG), txanm'a (INST) and txanw (ADV). Also, personal prefixes can be added to the infinitive form, cf. forms of the verb laan "to work": 1sg. s-laan 2sg. y w-laan 3sg. laan 1pl. d-laan 2pl. f-laan 3pl. laan The personal prefixes are sometimes optional, especially in obligatory control constructions, when one argument of the infinitive is obligatorily co-referent with one argument of the matrix verb: Ia sa 'a-z-dz-- I dir.-1sg.-begin-pret.-af. "I started to go" ()I (s)-k'wa-n 1sg.-go-inf. However, the personal prefixes cannot be omitted when there is no necessary coreference between the arguments of the infinitive and of the matrix verb: a I s-k'wa-n sa sy-gw-- I 1sg.poss.-think-pret.-af. 1sg.-go-inf. "I intended to go, I thought about going". 77 In the preceding example the personal prefix s- cannot be omitted, because the verb gwan does not have obligatory control. Stative verbs can be formed from nouns and adjectives by adding the infinitive suffix: "man" : -n "to be a man"; f'c'a "black" : f'c'a-n "to be black". In some constructions (especially in subordinate clauses), the infinitive takes the suffix -w as well (identical to the adverbial suffix), and thus becomes formally identical to the future suffix (-nw) 64: a eIa y IIy sa -b ay?-- wna-m 'a-m-'-nw I he-ERG tell-pret.-af. house-ERG dir.-neg.-go-inf. "I told him not to go out of the house" For each infinitive construction (and each verb) it is necessary to learn whether the infinitive takes the suffix -n or -nw. The rule is that, if there is no personal prefix on the infinitive, the only possible infinitive form is the one with the suffix -n. Some authors distinguish verbal nouns or "masdar" from the infinitive. The verbal noun has the same ending as the infinitive (-n), but, unlike the infinitive, it can have possessive forms 65: txan-r "reading", sy-txan-r "my reading". Also, just as any other noun, the verbal noun can be modified by an adjective: Y y I yxa Wa wy dn 'h-r b-wx-- you your sewing long-NOM 2sg.-finish-pret.-af. "You have finished your long sewing" Due to lack of more detailed research we cannot be entirely certain whether it is legitimate to distinguish between infinitives and verbal nouns. II. PARTICIPLES According to grammar text-books participles have the subject, object, instrumental and adverbial form. These forms of the participle correspond to nominal cases, but the affixes for different forms/cases are not entirely equal to the ones in the nominal declension 66. The subject form takes the prefix z()- if it expresses a transitive action; if the action is intransitive, there is no prefix, and the participle is thus the same as the bare stem of the verb: This type of infinitive can also be called the supine. Kumaxov 1989: 279. In Kumaxov (ed.) 2006, I: 324 it is claimed that only the masdar (verbal noun) is inflected for case, while the infinitive has no case forms. 66 The morphology and syntax of participles are the weakest point of Kabardian grammars; cf. Kumaxov 1989: 254 ff. 65 64 78 z-txr "writing it" - ya-z-tr "giving it to him" - lar "working" - txar "writing" (-r is the nominative ending). The object form takes the prefix za-, z- if the participle refers to the indirect object; if not, there is no prefix: za-pr "who he is looking at", z-xwa-q'war "who he is going for", s-txr "which I am writing". What this actually means is that the prefix za-/z- is used when the participle refers to the noun phrase which is marked (or would be marked) by the ergative case, and not by the nominative 67. Participles referring to the nominative noun phrase do not have the prefix z-/za-: I e -b y-a-r "the one whom he is leading" : -b '-r ya-a "he leads the old man" he-ERG 3sg.-to lead-NOM he-ERG old man-NOM 3sg.-to lead sa -r z-xwa-s-a-r I he-NOM part.-ver.-1sg.-to lead-NOM "The one who I am leading (him) for" I sa -r '-m xw-z-aw-a I he-NOM old man-ERG ver.-1sg.-pres.-to lead "I lead him for the old man" In accordance with our schema of case assignment in Kabardian (see above), we can say that the prefix z-/za- indicates that the participle does not refer to the argument which is the lowest ranking macrorole (ie. that it refers to the argument which is not the lowest one in the Actor-Undergoer hierarchy). Since the lowest ranking macrorole in Kabardian, as an ergative language, is equivalent to the traditional notion of the subject, we can give a somewhat simplified statement saying that the prefix z-/zaindicates that the participle does not refer to the "subject" of the sentence. Traditional grammars say that the subject participle form is conjugated according to the person of the object, and the object form according to the person of the subject; what this really means is that the personal prefix on the participle with the z-/zaprefix expresses the argument which represents the lowest ranking macrorole in the verb's logical structure, while the personal prefix on the participle without the z-/zaprefix expresses the argument which is not the lowest ranking macrorole (which is not the "subject", in the sense in which we talk about the subject in Kabardian): s-z-txr "that is writing me down, writing me down"; w-z-txr "that is writing you down"; s-txr "which I am writing"; p-txr "which you are writing" (< *w-txr). 67 79 The participle can be inflected for all persons except for the person of the lowest ranking macrorole (the Undergoer) and for the person indexed by the participial prefix z-. Participles can also contain personal markers of conjunctivity and version: e d-ya-a-r conj.-3sg.-wait-NOM "who is waiting for him/her together with him/her" I xwa-k'wa-r vers.-go-NOM "who is going for him/on his behalf" The participle prefix has the form za- rather than z- when the participle refers to the oblique argument (non-macrorole core argument) of an intransitive verb, e.g. za-da-r "who he/she is calling" (from yadan "call"). The so-called "instrumental" participle form is formed with the prefix zar()-, zarawhich contains the prefix za-: zar-lar "with which you do"; zar-ya-dar f'wa "it is well the way he reads/studies" (Kumaxov 1984: 142). The instrumental form of the participle often behaves as a general-purpose complementizer/subordinator (see below). It can sometimes be translated as "when", "how", or "as", cp. the title Sawsrk'wa y dta -r ap zar-y-'--r (S. poss.3sg. sword-NOM L. part.-3sg.-dopret.-NOM) "How/when Lapsh made Sosruko's sword". This form of the participle can also be added to nominal stems in order to make them suitable for complementation: y a Iy wara sbyy-r q'a-wr-t zar-da-r y-m-'a-w thus child-NOM dir.-grow-impf. part.-Adygh-NOM 3sg.-neg.-know-ger. "Thus the child was growing, without knowing that it was an Adygh (Circassian)" Syntactically, participles behave as qualitative adjectives (they are inflected for case and they are placed after the noun they refer to): ax sbyy-r z--xa-r y child-NOM part.pref.-caus.-feed-NOM poss.3sg. "The one who feeds a baby is its mother" (a proverb) aa na- mother-af. Participles are inflected for tense, but they do not have forms for all tenses. The verb txa-n "to write" has the forms for the active present participle txar "writing, that writes", the preterite participle txr and the future participle txanwr. Participles may receive case affixes, but this is mostly optional: 80 () I, II() a w w w Z- at(-r) ma-g f'a-ry, z-f'a-k' ad(-r) m- part.-find-(NOM) 3sg.-rejoice-and part.-advers.-lose-(NOM) 3sg.-cry "He who finds (it), rejoices, he who loses (it) - cries" (a proverb) There is no correlation between the case ending and the syntactic role of the participle. In the examples above, the participle refers to the actor of a transitive verb (with suppressed object), but it can still be in the nominative. The syntactic role of the participle is indicated only by the presence or absence of the prefix z- (above), or by directional prefixes z()da- (with telic meaning) and (z-)- (with locative/temporal meaning). Take, for example, the following participles: I zda-k'wa-r part.-dir.-go-NOM "where he is going to" a -la-r dir.-work-NOM "where he is working" I -?a-m dir.-talk-ERG "where (people) talk" ea e, I a nbaw dya-t-y, friend to-impf.-and "It is to his friend that he set out, and when he got there, he entered the guest-house" The presence of the case endings -r, -m may indicate definiteness of the argument referred to by the participle. The exact conditions on their use are unknown. Negation of the participle is expressed by the prefix m-: m-txa "that isn't writing", s-z-m-w "that isn't seeing me". Cf. the opposition between the finite negation (-q'm) and the participial one 68: 68 The difference between these two types of negation is used as the basis for the differentiation of finite and non-finite forms in Kabardian (Kumaxov & Vamling 1995: 6). Non-finite forms can only be used in sentences in which they are dependent on finite forms. The only exception to this thesis are imperatives and interrogative constructions, which do not depend on finite forms and they do have the prefixed negation m- like non-finite forms. 81 y yI, I wa w-m-k'wa-ma, sa-ry s-k'wa-r-q'm you 2sg.-neg.-go-cond. I-and 1sg.-go-pres.-af.-neg. "If you don't go, I won't go either" Participles can be construed with the auxiliary verb wn "be, become": I I Ia ?waxw-r sar-'a '- wn-q'm job-NOM I-INST do-pret.(part.) become-neg. "I cannot do this job" (lit. "This job does not become done by me") III. VERBAL ADVERBS (GERUNDS) Verbal adverbs (or gerunds) are formed from verbal roots using the same suffixes (-w(), -wa, -wra, -ra, -'ara) as in the formation of regular adverbs from nouns and adjectives (see above). The particularity of Kabardian verbal adverbs is that they can be inflected for person, and they also distinguish tenses, mood and transitivity/intransitivity. The transitive verbal adverb yad-aw "reading", for example, is inflected in the following way: sg. ey yey ey s-yadaw w-yadaw yadaw pl. ey ey ey, exy d-yadaw f-yadaw yadaw, yada-xa-w In the preterite the suffix -- is added, so the forms are syadw, wyadw, etc. These finite forms of verbal adverbs are equivalent to entire subordinate clauses, so syadw would be translated as "when I was reading", fyadw "when you were reading", etc. ay a Ps-r t--wa ml dfa- river-NOM freeze-pret.-ger. ice smooth-af. "Since the river froze, the ice is smooth" Ia a Iy Sa s-'--q'm r I 1sg.-know-pret.-neg. he-NOM q'a-k'wa-wa dir.-to go-ger. 82 "I didn't know he had come" Iy aI y, Ie a I xIa T'w-ry mf'a-m bada-s-wra, z dap 'ayay-ry two-and fire-ERG dir.-sit-ger. one burning.coal fly.off-and ysp-m y dwarf-ERG his dna kwa'-r shirt lap-NOM pxys'-- burn.through-pret.-af. "As the two (riders) were sitting by the fire, a burning coal flew off (it) and burned through the dwarf's shirt in his lap" DIRECTIONALS The prefix q'a- can be roughly translated as "this way, hither", and the prefix n(a)- as "that way, thither", but their use is quite idiomatic. Their position in the verbal complex is immediately after the first personal prefix, or they come first if the personal prefix is 0- (in the 3 person): I 0-q'a-k'wa 3sg.-this way-pres.-go "He is coming this way" a y e Ia -r wy day 0-na-k'w-- he-NOM 2sg.-poss. to 3-thither-go-pret.-af. "He came towards you (that way)" In some combinations of personal markers these prefixes do not occur, in others they are compulsory 69: oa s-na-w--- "I waited for you", but *s-w()--- 1sg.-thither-2sg.-wait-pret.-af. a s-v--- "I waited for you (pl.)", but *s-n()-v--- 1sg.-2pl.-wait-pret.-af. y q'-d-aw-wa "he is hitting us", but *daw-wa Kumaxov 1971: 253. It seems that the use of directionals depends on the "person hierarchy" (see below). 69 83 hither-1pl.-pres.-hit Colarusso (1992: 92-94) calls these prefixes "horizon of interest", which doesn't mean much. It seems that they function in the same way as directional affixes, which exist in many languages (cf. German hin-, her-, auf-, etc.), indicating the direction in which the action is performed. Some of them are so frequent (e. g. the prefix q'a-) that they must belong to verbal morphology, while others modify only some verbal roots and should therefore be included in the chapter on word formation (see below). There is no clear borderline between these two groups of prefixes. According to Colarusso (1991), there are also preverbs which indicate the manner in which the action is performed, or the state (consistency) of the subject, e. g. -xa- "as mass", -d- "as liquid": xa ps-r 0-q'-xa--- water-NOM 3sg.-hither-as.mass-flow-pret.-af. "The water flowed out" (if it was thrown out of the bucket, as mass) a ps-r 0-q'-d--- water-NOM 3sg.-hither-as.liquid-to flow-pret.-af. "The water flowed out" (if it leaked out through a hole or a pipe) Neither texts nor my informants enabled me to ascertain the existence of these preverbs. The nearest equivalents in the standard language are the directional preverbs da- and xa-, which both denote that the action is performed in some container; it appears, however, that the difference between them lies in the nature of the container: for da-, the container must be empty, while xa- refers to a container that is represented as some kind of mass, or substance. The prefix da- indicates that the action (or, more frequently, state) of the verb is being performed in a certain area, or (empty) container: x tx-r kaf-m da-- book-NOM vessel-ERG da-lie-af. "The book is lying in the vessel" IaI pa-r p'nt'a-m da-dza-n wood-NOM garden-ERG dir.-throw-inf. "to throw wood into the garden" The prefix xa- (x-) denotes the location in some container (conceived as substance), or the orientation of the action towards the interior: ps-m x xa-dza-n 84 water-ERG dir.-throw-inf. "to throw into water" The prefix - indicates the place of the action (usually the place from which the action is performed), e. g. -dzn "to throw off, to throw down from some surface" (cp. dzn "throw"), -n "to descend from" (cp. n "run"), -n "lie on something", -wn "to see something somewhere": a I, aIy y Zamn-r k'wa-rt, Wazrmas-y k'wa-w maz-m -psaw-rt time-NOM go-impf. W-and hunt-ger. wood-ERG dir.-live-impf. "Time was passing, and Wazirmes was living in the wood (and) hunting" The prefix - can also have temporal meaning; participles prefixed with - can be translated as temporal clauses introduced by "when", e.g. -k'w--m "when he went/had gone". The prefix tay- indicates movement onto, or away from some surface, e. g. tay-dzn "throw onto": x e tx-r stawl-m tay-dza-n book-NOM table-ERG dir.-throw-inf. "to throw the book on the table" The prefix 'a- indicates the location under something or inside something (conceptualized as being under some cover), e. g. 'a-dzn "to throw something under something", 'a-n "to run under something", 'a-atn "to fly away from under something": y I Ia wna-m 'a-ha-ry t's-- room-ERG dir.-carry-and sit-pret.-af. "He came into the room and sat (down)" -r bwan'a-m 'a-t- horse-NOM cave-ERG in-sit-af. "The horse is in the cave" The prefix bla- denotes an action by, or past a particular reference point, e. g. bla-an "to run past": y yIa q'a-wv?-- w-r kwaba-m bla--ry horseman-NOM gate-ERG dir.-run-and dir.-stop-pret.-af. "The horseman run past the gate and stopped" 85 The prefix f'a- denotes the falling movement from the surface of something, or the "hanging" position of some object, e. g. f'a-n "jump, fall off": Ia ar-r gwam-m f'a--- wheel-NOM axle-ERG dir.-run-pret.-af. "The wheel fell off the axle" The prefix p- denotes action which is taking place at the end, or edge of something, e. g. p-sn "sit at the edge", p-n "run off from the edge of something", pdzn "throw off from the edge", etc. blatayq'ad xapdanaf'a- 'a- Besides these basic directional and locative prefixes, there are also many secondary prefixes, mostly derived from nouns, often nouns denoting body parts: 1. bada- "towards, away from" (cf. ba "breast"): atan: "fly" vs. badaatan "fly towards" 2. ?w- "near, next to, away from" (cf. ?w "mouth"): atan "fly" vs. ?watn "fly away from" (note that the verbal root also changes its vocalism in derivation) 3. bwr- "sideways" (cf. bw "hip"): xwan "chase, drive" vs. bwrxwan "drive sideways" 4. 'ar- "on(to) the edge of, on(to) the top of" (cf. 'a "tail, end"): an "lead" vs. 'aran "lead to the top, or slope of" 5. axa- "in front of" (cf. a "mouth"): xwan "drive" : axaxwan "drive towards, drive near to" 86 CLASS A - intransitive monovalent verbs Structure of the verbal complex: Subject-V (= the single macrorole - V) a) k'wa-n "to go" (dynamic verb) I. Present 1. sg. s-aw-k'wa "I go" 2. sg. w-aw-k'wa "you go" 3. sg. m-k'wa "he/she/it goes" 1. pl. d-aw-k'wa "we go" 70 87 2. pl. f-aw-k'wa "you go" 3. pl. m-k'wa-(xa) "they go" Cp. '-r m-k'wa "the man goes" II. Preterite 1. sg. sk'w "I went" 2. sg. wk'w "you went" 3. sg. k'w "he/she/it went" 1. pl. dk'w "we went" 2. pl. fk'w "you went" 3. pl. k'w "they went" III. Future 1. sg. sk'wan "I will go" 2. sg. wk'wan "you will go" 3. sg. k'wan "he/she/it will go" 1. pl. dk'wan "we will go" 2. pl. fk'wan "you will go" 3. pl. k'wan "they will go" b) sn "sit" (static verb) I. Present 1. sg. ss 2. sg. ws 3. sg. s 1. pl. ds 2. pl. fs 3. pl. s II Preterite ss ws s ds fs s III. Future ssn wsn sn dsn fsn sn CLASS B - intransitive bivalent verbs Structure of the verbal complex: Subject-Object-V (= the single macrorole - nonmacrorole core argument - V) wa-n "to hit"; an "to wait for" I. Present s-b-aw-wa "I hit you (sg.)" s-f-aw-wa "I hit you (pl.)" s-aw-wa (sawwa) "I hit him/her" s-y-wa "I hit them" w-q'a-s-aw-wa "you hit me" 88 q'a-s-aw-wa "he/she hits me" q'a-s-aw-wa-xa "they hit me" y-aw-wa "he/she hits him" y-wa "he/she hits them" y-aw-wa-xa "they hit him" y-wa-xa "they hit them" '-r q'a-s-aw-wa "the man is hitting me"; -m s -aw-wa "I am hitting a horse" (nominative construction) II. Preterite s-n-aw--- "I waited for you (sg.)" s-va--- "I waited for you (pl.)" s-ya--- "I waited for him" s-ya--- "I waited for them" w-q'-za--- "You (sg.) waited for me" w-q'-da--- "You waited for us" w-ya--- "You waited for him" w-ya--- "You waited for them" w-q'-za--- "You waited for me" w-q'-da--- "You waited for us" q'-za--- "He waited for me" q'-wa--- "He waited for you (sg.)" q'-va--- "He waited for you (pl.)" ya--- "He waited for him" ya--- "He waited for them" d-n-wa--- "We waited for you (sg.)" d-va--- "We waited for you (pl.)" d-ya--- "We waited for him" d-ya--- "We waited for them" f-q'-za--- "You (pl.) waited for me" f-q'-da--- "You (pl.) waited for us" f-ya--- "You (pl.) waited for him" f-ya--- "You (pl.) waited for them" q'-za--- "They waited for me" q'-da--- "They waited for us" q'-wa--- "They waited for you (sg.)" q'-va--- "They waited for you (pl.)" ya--- "They waited for him" ya--- "They waited for them" CLASS C - transitive bivalent verbs Structure of the verbal complex: Object-Subject -V (= the lowest ranking macrorole, Undergoer - the other macrorole, Actor - V) w-n "to see" w-z-aw-w "I see you" 89 s-aw-w "I see him" s-aw-w-xa "I see them" s-b-aw-w < *s-w-aw-w "you (sg.) see me" w-aw-w "you (sg.) see him" s-ya-w "he/she sees me" w-ya- w "he/she sees you (sg.)" w-d-aw-w "we see you (sg.)" f-d-aw-w "we see you (pl.)" d--w "we see them" s-v-aw-w "you (pl.) see me" s-v-aw-w "you (pl.) see me" f-aw-w "you (pl.) see him" d-v-aw-w "you (pl.) see us" f-aw-w-(xa) "you (pl.) see them" ya-w "he/she sees him" ya-w-(xa) "he/she sees them" s--w "they see me" w--w "they see you" d--w "they see us" f--w "they see you (pl.)" y--w-(xa) "they see them" y--w "they see him" '-m syw "the man sees me" -r saww "I see the horse" According to C. Paris, verbs of this class do not take the prefix -(a)w- in the 3rd person (Actor) present tense, cf. ya-w-wa "he is hitting him" (B) in contrast with ya-lw "he sees him" (C). CLASS D transitive trivalent verbs Structure of the verbal complex: Object-Indirect Object-Subject-V (= the lowest ranking macrorole, Undergoer - non-macrorole core argument - the other macrorole, Actor) t-n "to give" w-y-s-t [wzot] "I give you to him" w-y-s-t [wazot] "I give you to them" q'-w-s-t [q'zot]"I give him to you" q'-w-s-t-xa "I give them to you" w-q'a-s-y-t "he gives you to me" w-q'a-s--t "they give you to me" s-r-y-t [sareyt] "he gives me to him" s--ry-t "he gives me to them" y-r-y-t [ireyt] "he gives him to him" y--ry-t [yareyt] "he gives him to them" 90 y-r-y-t-xa "he gives them to him" y-r--t (yrat) "they give him to him" tx'-r q'a-w-s-t (q'wzot) "I give you the letter"; -c'xw-m w-y-s-t (wzot) "I give you to this man" CLASS E - causatives (valency increases by one in relation to the basic verb; transitive construction) Structure of the verbal complex: (Object-Indirect Object)-Subject-Causer-V t-n "to give"; k'wa-n "to go"; the causative prefix is ay-r-t "he gives it to him" : y-r-s-a-t "I make him give it to him" y-r-y-a-t [irreyt] "he makes him give it to him" y-r--a-t [irrt] "they make him give it to him" w-s-a-k'wa [wzok'wa] "I make you go" = "I send them" s-a-k'wa [sok'wa] "I make him go" s-a-k'wa-xa [so-k'waxa] "I make them go" CLASS F verbs derived with some prefixes, e. g. tay- "on"; intransitive verbs Structure of the verbal complex: Subject-Object-Pref.-V fa-n "to fall" s-q'a-p-tay-fa (p < w) "I fall on you" s-tay-fa "I fall on him" s--tay-fa "I fall on them" q'a-p-tay-fa "he falls on you" q'a-p-tay-fa-x(a) "they fall on you" tay-fa "he falls on him" nnaw-r q'-tay-fa "the child falls on him" '-m s-tay-fa "I fall on the man" CLASS G verbs derived with some prefixes which are placed between two personal markers, e. g. p- "all the way, completely"; transitive verbs. Structure of the verbal complex: Object-Pref.-Subject-V wp''-n "to cut" w-p-s-wp'' "I cut you all the way" 91 p-s-wp'' "I cut him all the way" p-s-wp''-xa "I cut them all the way" s-p--wp'' "they cut me all the way" p--qp''-xa "they cut them all the way" -r p-s-wp'' "I cut the man"; -m s-p-y-wp'' "the man cuts me" CLASS H verbs derived with some directional/local prefixes, e. g. ty- (tay-) "on"; transitive verbs. Structure of the verbal complex: Object-Subject-Pref.-V x-n "to lift" w-q'a-t-tay-s-x(') [wq'ttezox''] "I lift you from us" w-q'a-tay-s-x "I lift you from him" w--q'a-tay-s-x "I lift you from them" s-p-tr-ay-x "he lifts me from you" ha-r q'a-p-tay-s-x "I lift the dog from you" nf-m w-q'-tay-s-x "I lift you from the rock" 92 WORD FORMATION In Kabardian words can be formed by derivation (adding suffixes and prefixes), but also by combining lexical morphemes into compounds. COMPOUNDS Like other Abkhaz-Adyghean languages, Kabardian forms words of a more complex, abstract meaning by joining two or more (usually monosyllabic) words of a simpler, concrete meaning. Compounds with nouns denoting body parts and organs such as "heart" are especially common. Guessing the meaning of a compound is quite frequently not a simple task: na-f "eye-rotten" = "blind" pa-s-a "nose-sit-on" = "early" na-p'c' "eye-lie" = "false" na-ps "eye-water" = "tear" na-f' "eye-good" = "goodness" bza-gw "tongue-heart" = "tongue" (as an organ of speech) mf'a-gw "fire-heart" = "train" ?a-pa "hand-nose" = "finger (on the hand)" ha-dza "barley-tooth" = "grain" thak'wma-'h "ear-long" = "rabbit" '-la "new-meat" = "young man, boy" da-xw "together-be born" = "brother (with respect to sister)" dw- "thief-old" = "wolf" da-na "father-mother" = "parents" '-fz "man-old-woman-old" = "grandparents" faw-w "honey-salt" = "sugar" maz-dad "forest-hen" = "pheasant" x-qa "sea-pig" = "dolphin" wna-c'a "house-name" = "surname" xa-wa "eat-time" = "lunch" -da "earth-grease" = "petroleum" a-b "night-summit" = "deep night" a-ps "milk-water" = "sap (of plants)" hada-ma "corpse-smell" = "smell of a corpse" -dw "horse-thief" = "horse-thief" dw- "thief-old" = "wolf" '-k'wa "man-go" = "messenger" faw-w "honey-salt" = "sugar" da-dz "bean-throw" = "fortune-teller" As can be seen from the examples, there are compounds in which both parts are nouns (da-na "parents"), compounds in which nouns are combined with adjectives (na-f "blind") and compounds in which nominal words or adpositions are combined with verbs (pa-s-a "early"). In most cases, the meaning of the compound can be both 93 nominal and adjectival, which is a consequence of a poor syntactical differentiation between nouns and adjectives in Kabardian. In the examples above only two words were joined into a compound, but many Kabardian compounds consist of more than two parts. Compounding is almost a recursive process in Kabardian; using the elements ' "man", "old", f' "good", xwa "big" and k'wa "to go" the following compounds can be formed 71: '- "old man" '-k'wa "messenger" '-f' "good man, good-natured man" '--f' "good old man" '-k'wa-f' "good messenger" '-k'wa--f' "good old messenger" '--f'-xwa "big good old man" When a noun is modified in a double possessive relation (according to the formula X of Y of Z), the first possessive relation is expressed with a compound, e. g. A ay Ada- y q'rw-r Adyghean-blood poss. power-NOM "The power of Adyghean blood" Some compounds retain two accents. They are often built with rhyming morphemes (German Reimbildungen), or they contain fully reduplicated morphemes. Such compounds usually have intensive or copulative meaning (the Sanskrit dvandva-type): yaxa-yafa "eating-drinking" = "a feast" pq'na-pq'naw "in little pieces" natx-patx "beautiful" (of a girl) q'aa-naa "here and there, in a zigzag manner" awa-p'awa "jumping, bouncing" NOMINAL SUFFIXES -ay (suffix for the formation of tree names): day "walnut tree": da "walnut"; ay "oak": "tree" - (suffix denoting place/dwelling): ha "dog house": ha "dog"; a "barn": "horse" -yay (diminutive suffix): dadyay "chicken" : dad "hen" -a (suffix for abstract nouns): 'a "manhood, manliness" : ' "man" -k'wa (suffix for names of professions): txk'wa "writer": txan "to write" 71 94 -w (suffix for nouns denoting participants of an action or members of a group): q'waw "fellow-villager": q'wa "village", laaw "co-worker, colleague": laan "to work". -fa (suffix meaning "a kind of"): wzfa "a kind of disease": wz "disease". Nouns with this suffix are probably originally nominal compounds with the noun fa "skin". VERB FORMATION BY PREFIXING Kabardian verbs are often formed with prefixes of nominal origin. Many such prefixes (preverbs) are derived from nouns denoting body parts, and they usually add spatial meaning to the verb's original meaning (see the section on directionality): na-k'wa-n "to go from there" (cf. na "eye", k'wan "to go") da--n "to lie in something" (cf. n "to lie") -?an "to be in something": r qlam -?-- "he was in town" (cf. ?an "to be, to have") In the case of Kabardian local prefixes it is difficult to decide whether they belong to word formation or to the verb morphology. They express meanings which are in English and other European languages usually expressed by local prepositions, cf. the following examples: y y a bzw-r wna-m bla-at-- sparrow-NOM house-ERG by-fly-pret.-af. "The sparrow flew past the house" (the prefix bla- denotes movement past or by something) eI w -m st tray-'a tree-ERG hoar-frost on-do "The hoar-frost covers the tree" (the prefix tr(ay)- denotes movement onto the surface of something) However, some local prefixes can correspond to Croatian verbal prefixes: aa Ia w q' dma-r -m rgwarw gwa-'a--- branch-NOM tree-ERG again at-to go-back-pret.-af. "The branch adhered (in growing) to the tree again" Croatian: "grana je opet prirasla stablu" (the prefix gwa- denotes connecting with something, cf. gw "heart") e ay e 95 Byard -m zapaw tay-s- B. horse-ERG well on-sit-af. "Berd sits on the horse well (correctly)" (= "Berd rides well") From the typological point of view, local prefixes of the Kabardian verb are not that unusual, since these kind of prefixes exist in European languages as well, cf. the almost synonymous expressions in Croatian skoiti preko ograde (''to jump over the fence'', with a preposition) and preskoiti ogradu (''to jump the fence'', with a local prefix on the verb). However, though both these strategies of expressing spatial relationships exist in Kabardian, verbal prefixes are much more frequent in this language than are local postpositions. VERBAL SUFFIXES A) Several suffixes affect the valence of verbs: The suffix -'- is used to turn intransitive monovalent verbs into intransitive bivalent verbs: 'an "to die": y-'-'-n "to die of something" Suffixes -'(a) and -x() also affect the valence of a verb, but not its transitivity: k'wan "to go": eII ya-k'wa-'a-n 3sg.-go-suff.-inf. "to approach something" an "to run": a a x -r -b y-aw-a-x 3sg.-NOM 3sg.-ERG 3sg.-pres.-run-suff. "he runs away from this" (intransitive) Both of the aforementioned suffixes (a) and x additionally seem to have directional meaning: yaa-n "run" : yaa-'a-n "run towards (someone or something)"; hn "carry": ya-ha-x-n "carry down". B) Other suffixes have adverbial meaning, and can perhaps be treated as incorporated adverbs: The suffix -xw('a) is added to a participial form of the verb to express that the action of the verb is simultaneous with the action of the finite verb (Abitov (ed.) 1957: 99): y II, e 96 wa-r p-'at-xw'a, pa-m z-ya-a-psaxw axe-NOM 2sg.-lift-suff. wood-ERG refl.-3sg.-caus.-relax "While you're lifting the axe, the wood is relaxing" (a proverb) e w m-psaa-x dayla-r-y gwbza- neg.-speak-suff. fool-NOM-and smart-af. "A fool is also smart while he is not speaking" (a proverb) y, a w w s-hawq' a-x , sy -r q'wad-- 1sg.-sleep-suff. my horse-NOM disappear-pret.-af. "While I was sleeping, my horse disappeared" I, y, I Nt'a, tna-m-ra mal-m-ra yes calf-ERG-and sheep-ERG-and q'a-v-wat--xw dir.-2pl.-find-back-until s--s-n-q'm sa-ry ?waxw-na-w I-and work-without-ADV 1sg.-dir.-sit-fut.-neg. "Yes, and until you find the calf and the sheep again, we will not sit idly" As the last two examples show, the action of both the finite verb and the participle can be be either punctual or durative. Accordingly, the suffix of simultaneity can sometimes be translated as "while", and sometimes as "until". The suffix -'a is used to indicate that the action of the verb has been already completed; it can usually be translated as "already" (Abitov (ed.) 1957: 117): ye y yxIa y wytayl-m ynstytwt-r q'-wx--'a- our teacher-ERG university-NOM dir.-finish-pret.-suff.-af. "Our teacher has already finished university" The suffix -pa- has perfectivizing meaning; it seems to indicate that the action has been fully accomplished: laa-n "work" : laa-pa-n "accomplish"; xa-n "eat" : xa-pa-n "eat up" The suffixes -(a)- and -q'wa mean something like "too much, excessively": xa-n "eat": xa--an "eat too much, eat excessively" psaa-n "talk": psaaq'wa-n "talk too much" The suffix -xxa- is best translated as "at all"; it reinforces the negation: 97 s-k'wa-n-q'm "I will not go": s-k'wa-xxa-n-q'm "I will absolutely not go, "I will not go at all" The suffix -x(a)- means "already": s-hazr- "I am ready, I am prepared": s-hazr-xa- "I am already prepared" 98 SYNTAX NOUN PHRASES (NP) Possessive constructions follow the HM (head-marking) pattern. "A man's house" is thus literally "A man his-house": I e ?ana-m y-taypwa table-ERG 3sg.poss.-cover "the cover of the table, tablecloth" ha-m y-pa-r dog-ERG poss.3sg.-nose-NOM "dog's nose, dog nose" In the contemporary standard language the possession marker is sometimes written separately, as an independent word: a e- aaa Nl Q'abardyay-Baq'ar-m y q'l-ha- Nalchik Kabardino-Balkaria-ERG poss.3sg. city-head-af. "Nalchik is the capital city of Kabardino-Balkaria" Kabardian, unlike Abkhaz and Adyghean, does not distinguish alienable and inalienable possession, but there are traces of this opposition in the Besleney dialect of Kabardian 72. Demonstrative pronouns precede the noun they refer to, and sometimes they merge with it as prefixes (see above). They can be separated from the noun by a participle, which is the equivalent of a relative clause in English: a e Iay m fa q'a-f-h- amad-r Daba yaz Thaalad xw-y-'--wa this you dir.-2pl.-bring-pret. scythe-NOM D. personally T.ver.-3sg.-make-pret.-ger. "This scythe you brought was made by Daba personally for Thagoled" A possessive pronoun can occur between a demonstrative pronoun and a noun: m sy sd-m this 1st.poss. anvil-ERG "this anvil of mine", lit. "this my anvil" See Kumaxov 1984: 87-93, Balkarov 1959. It seems that Kabardian had the (Common Adyghean) opposition between alienable and inalienable possession, but it lost it. 72 99 Qualitative adjectives (which can be used as stative verbs) follow the head noun, while relational adjectives (usually nouns used attributively) precede it: a ax : pa dxa : girl beautiful "beautiful girl" y pa wna wood house "wooden house" ADJECTIVE PHRASES Adjectives can be heads of nominal complements, which regularly follow them: a pa nq'- yz xw wood glass-old full sour.milk "A wooden glass full of sour milk" I found no examples of the predicative use of adjective phrases. SYNTACTIC STRUCTURE OF THE SENTENCE Kabardian distinguishes three constructions 73: nominative, ergative and indefinite. In the nominative construction the subject (the only macrorole argument) is in the nominative and the verb is in the intransitive form. If there is an (indirect) object (ie. if the verb is semantically bivalent), the second argument is in the ergative: e ax a Satanyay dxa-r tad-- S. beautiful-NOM get up-pret.-af. "Beautiful Satanaya got up" ye x waynyk-r tx-m y-aw-da student-NOM book-ERG 3sg.-pres.-read "The student is reading a book" In the ergative construction the subject (the highest ranking macrorole argument) is in the ergative, and the verb is transitive. The direct object is in the nominative: x ax aa yn-xa-m nrt-xa-r q'--awz-- I.-pl.-ERG Nart-pl.-NOM dir.-3pl.-crush-pret.-af. "The Ini (giants) crushed the Narts" The so-called "dative" or "inverse" construction (Kardanov 1957) is actually a nominative construction. 73 100 The causative verb is always transitive, so the ergative construction is used with a causative verb: I aI fz-m '-r y--k'wa woman-ERG man-NOM 3sg-caus.-go "The woman sends a man" In the indefinite construction the subject and the object have no case endings. This construction is common in proverbs, in the oral tradition; the verb's arguments are indefinite: Iaa ma dw f'a-balca- bear wolf advers.-hairy-af. "To the bear the wolf is hairy" (a proverb) The verb is stative, and thus intransitive, in this construction. NOMINAL SENTENCE Kabardian has no copula, the nominal predicate is juxtaposed to the subject: I A sy c'-r Alym 1sg.-poss. name-NOM A. "My name is Alim" Adjectives and common nouns in a sentence with a nominal predicate take the affirmative suffix (thus becoming stative verbs): a Mza-r yz- moon-NOM full-af. "The moon is full" M-r maz- this-NOM forest-af. "This is a forest" EQUI-NP DELETION In a coordinated construction, when two verbs share the same argument, this argument can be omitted if the agent is the first argument (agent) of a transitive verb or the only argument of an intransitive verb (ie. the "subject" in the same sense as in English): 101 I eaa I a '-m fz-r q'-ya-w-- y'y q'a--- man-ERG woman-NOM dir.-3sg.-see-pret.-af. and dir.-go-pret.-af. "The man saw the woman and left" Ia Ia 'la-m dabz-r y-w-ry k'wa-- young.man-ERG girl-NOM 3sg.-see-and leave-pret.-af. "The young man saw the girl and left" Ia II , a w 'la c'k' -r q'a-s-ry, dabz-r q'-y-w-- boy little-NOM dir.-come-and girl-NOM dir.-3sg.-see-pret.-af. "The boy came and saw the girl" Ia II II Iy a a 'la c'k'w-m dabz c'k'w-m q'a-k'wa-nw psa y-r-y-t-- boy little-ERG girl little-ERG dir.-come-fut. word 3sg-3sg-3sg-give-pret-af. "The boy promised the girl he would come" (lit. "gave the girl his word he would come"). This shows that Kabardian is not a syntactically ergative language, such as, e. g., Dyirbal or Chukchi. As can be seen from the examples above, when two verbs differing in transitivity are coordinated, the shared subject is in the case assigned to it by the nearest verb (the ergative if this is the transitive verb, the nominative if this is the intransitive verb). However, there seem to be cases when the shared argument is in the ergative case, although the intransitive verb is closer to the shared argument 74. This matter requires further research. SUBORDINATION Most structures, which are equivalent to subordinate sentences in the European languages, are in Kabardian and other West Caucasian expressed by special verbal forms. These are typically infinitives, participles and gerunds: a a I Ia -r -b q'--xwa-k'w--m y?-- 3sg.-NOM 3sg.-ERG dir.-dir.-ver.-go-pret.-ERG say-pret.-af. "When he approached her, he spoke" aax a ha-r z----xa-m -aw-bna dog-NOM pref.-dir.-3pl.-caus.-eat-ERG dir.-pres.-to bark "The dog barks where he is not fed (where they do not feed him)" (a proverb) 74 102 Yaz Yamnay '-r himself Y. earth-NOM try-sa-nw dir.-sow-inf. "Yamine himself is plowing the ground (in order to) sow the seeds of Thagaled" I I? s-zar-thak'wma a-ry dana q'--f-'- 1sg.-part.-ear slow-and how dir.-dir.-2pl.-know-pret. "But how did you know my hearing was bad (lit. that I had slow ear)?" Infinite verbal forms may be modified by adverbial suffixes (see above) with spatial or temporal meaning: , mbdyay -t Sa q'a-z-aza-xw, I dir.-1sg.-return-until here dir.-sit "Sit here until I return!" An infinitive may be marked with instrumental case in the subordinate clause: I yI () Wa ?ha-na w-w-n-'a s-aw-na(r) you lot-without 2sg.-become-inf.-instr. 1sg.-pres.-be.afraid "I am afraid that you will be without a lot (inheritance)" A subordinate structure can also be expressed by a verbal noun (infinitive, or "masdar" according to some linguists) and a possessive pronoun (or prefix) denoting the subject: yxa x da d-wx-- dy-tx-n-r we 1pl.-finish-pret.-af. 1pl.poss.-write-inf.-NOM "We finished writing" or "We stopped writing" , ax ey w Sawsrq'wa-r y-aw-a, Badax dxa-m ya-pa-nw S. 3sg.-pres.-set.out B. beautiful-ERG 3sg.-see-inf. "Sosruko sets out to see beautiful Badah" With many verbs the person of one argument in the subordinate clause is necessarily the same as the person of one argument in the main clause (the so-called control constructions): ya-va Thaalad y lpa-w 3sg.-plow T. 3sg.poss. seed-ADV 103 I ay Ia dabz-m dagw k'wa-n psaw '-y-dz-- girl-ERG dance go-inf. early dir.-3sg.-throw-pret.-af. "The girl started going to dances early" In the previous example, the verb in the subordinate clause k'wan has got the same subject as the verb in the main clause 'adzan ("to start"). Which form the linked verb will take depends mostly on the type of matrix verb it is associated with. As a rule, verbs having obligatory control (i. e. verbs with obligatory co-reference between one argument of the matrix verb and one argument of the linked verb) take the infinitive, while other verbs take either the participle or the gerund (most can take both of these forms). In subordinate structures 75 the subordinated verb can carry the personal prefixes and the reflexive prefix: Ia x y 'la-m tx-r y-h-nw boy-ERG book-NOM 3sg.-to carry-inf. "The boy wanted to carry the book" ea xway-- want-pret.-af. Ia -y-?-- dir.-3sg.-say-pret.af. Ia II II yy w w 'la c'k' -m dabz c'k' -r za-wa--nw boy little-ERG girl little-NOM refl.-hit-back-inf. "The boy told the little girl to hit herself" a x y a sa -b tx q'-z-y-t-nw s-q'-y-a-w-- I he-ERG book dir.-1sg.-3sg.-give-fut. 1sg.-dir.-3sg.-caus.-hope-pret.-af. "He promised me he would give me the book." a I y ya Wa s-w-m f' ddaw s-q'-w-aw-w 2sg. 1sg.-see-ERG good much 1sg.-dir.-2sg.-pres.-see" "I see that you love me very much" (lit. "I see that you are the one who sees me well very much") The use of personal prefixes on infinitives and gerunds is sometimes optional. As can be seen from the preceding examples, in subordinate structures the main verb comes after the subordinate verb; this is in keeping with the general principle of Kabardian syntax, according to which the head of a construction is placed after the dependent: The problem is that the difference between finite and non-finite forms in Kabardian cannot be easily defined and compared to the difference in Indo-European languages. Traditionally, some forms that can have personal endings (e. g. participles) are considered to be non-finite in Kabardian, and the form of the negation serves to distinguish finite from non-finite forms (Kumaxov & Vamling 1995); the negation m- characterizes the non-finite forms, and the negation -q'm the finite forms. 75 104 sa I Constructions in which the subordinate clause is placed after the main clause are also possible, but they are marked: Ia Ia I 'la-m y-'-t dabz-r q'-zar-k'wa-n-r boy-ERG 3sg.-know-ant.pret. girl-NOM dir.-refl.-go-inf.-NOM "The boy knew that the girl would come." Many permutations of the word order are possible, but the subordinated structure cannot be "interrupted" by the main verb. There are also structures with subordinators, but they are stylistically marked and they seem to be developing under the influence of Russian (Kumaxov 1989: 348). Sentences with the complex conjunction stw p'am, st ha'a pp'ama76 "because, since" are of that type: yI a ay I, y a Iax Ia a I. Ydpstw'a r pxwadaw nam q'?wrydzarq'm, stwa pama 'laxam y' wa ?aq'm "For now it is not that important, since these young men haven't done much yet". Note also that the conditional sentences can be construed with the conjunction tma "if", rather than with the conditional mood of the verb (see above); the conjunction tma is originally the verb tn "be, find oneself" in the conditional mood: I yy I x, 76 It seems that these conjunctions are calques of the Russian poetomu, potomu to (see Kumaxov 1984: 150). 105 Maw Badnawq'wa y b'-r '-m q'-xa-f, this B. 3sg.poss. spear-shaft-NOM ground-ERG dir.-pull out-pot. Badaxw w-ry-psaw-w tma B. 2sg.-3sg.-woo-ger. if "You can (surely) pull out Badinoqo's spear-shaft from the ground, if you are wooing Badah" There are a few subordinators that developed from postpositions governing participles or infinitives. The subordinator ndara "since" is combined with the instrumental form of the participle, e.g. zar-k'wa ndara "since (the time that) he went". The temporal subordinator y pa "before" is actually composed of y "its" and pa "nose, front part"; the same syntagm can be used as a spatial postposition ("in front of"). yI, y x p-d-wp'-n--y, ps-m xa-d-dza-n stick dir.-1pl.-cut.off--fut-af.-and water-ERG dir.-1pl.-throw-inf. y pa d-wbara-n- its front 1pl.-beat-fut.-af. "We'll cut off a stick and beat him before we throw him into water" CASE ASSIGNMENT IN SUBORDINATE CLAUSES In complex sentences in which the verb of the main clause shares one of the arguments with the subordinate verb, this argument can be omitted in the subordinate clause, in accordance with the rule that Kabardian is not syntactically ergative (see above): Ia y ea 'la-m dabz-r y-w-nw xway-- boy-ERG girl-NOM 3sg.-see-fut. want.pret.-af. "The boy wanted to see the girl." Ia x y 'la-m tx-r y-h-nw boy-ERG book-NOM 3sg.-carry-fut. "The boy wanted to carry the book." ea xway-- want-pret.-af. In these examples the main verb is intransitive (xwayn "to want"). However, nouns denoting the agent take the ergative suffix, and nouns denoting the patient of the action of the main verb are in the nominative. The reason for this is that case 106 assignment in the main clause in Kabardian can be determined by the role which the argument of the verb of the main clause has in the subordinate clause; if the shared argument of the main and the subordinate clause is the doer of the action (or the highest ranking macrorole) of a transitive verb 77 in the subordinate clause, then this argument is marked by the ergative case, even though the verb in the main sentence is intransitive. If, on the other hand, this argument is the patient or the only argument of an intransitive verb in the subordinate clause (e. g. yawan "to hit"), it will be marked by the nominative case: Ia eyy ea 'la-r dabz-m ya-wa-nw xway-t boy-NOM girl-ERG 3.sg.-hit-fut. want-ant.pret. "The boy wanted to hit the girl." The actual rules for case assignment in subordinate control constructions are more complex and cannot be fully explained here, since they partly depend on the information structure of the sentence (i.e. on the relation between the topic and the focus), and on the word order in the sentence (see Kumaxov & Vamling 1996 and Matasovi 2007). It seems that in the speech of younger speakers (perhaps under the influence of Russian?) constructions in which the verb of the subordinate clause assigns the case to the argument which it shares with the verb in the main clause are becoming increasingly rare. MODAL VERBS Modal verbs such as a'n, xwzaf'a'n "be able, can", bawrn "must" are used as matrix verbs taking linked clauses as complements; their complements can be infinitives or verbal nouns (masdar), but, as a rule, not gerunds or participles (Kumaxov & Vamling 1998. 265ff.): Ia a y I sa s-a'-- wna-r s-'-n I 1sg.-can-pret.-af. that house-NOM 1sg.-do-inf. "I was able to build that house" IIy y I y-'n-r da t-xwzaf'a'-nw-q'm wna-m we 1pl.-can-fut.-neg. house-ERG 3sg.poss.-making-NOM "We will not be able to build the house" Note that the possessive prefix on 'n shows that it is a (verbal) noun; the noun wna "house" is in the ergative, which is the default case in the possessive noun phrase, and 'n is in the nominative case because the matrix verb is transitive. In the sentence 'la-m tx-r y -h-nw xway-- the verb hn "to carry" is transitive, which can be seen by the order of personal prefixes, cf. e. g. w-z-aw-h-r "I carry you" (2sg.-1sg.-pres.-to carryaf.). 77 107 The "debitative modal" xwyayn is not inflected for person; it should be understood as meaning "it is necessary that X", taking whole clauses as complements. In this way it is differentiated from the verb xwyayn "want", which has the full set of personal prefixes, but also takes clausal complements (in obligatory control constructions): III I a I x e yas-m y k'wac'-'a a qasxw-'a sy year-ERG 3sg.poss. duration-INST night every-INST 1sg.poss. -m horse-ERG z maq'w ?ata-ra 1 hay stack-and y-x-n 3sg.-eat-inf. xwyay- must-af. "During the year, my horse must eat one stack of hay and one measure of corn every night". I a xeI e xwyay-r Dypa'a far- la-m y xayy'a w-n from.now.on 2pl.-af. village-ERG their judge become-inf. must-NOM "From now on, it is you who must become judges of the village" PHASAL VERBS Like modal verbs, phasal verbs also take clausal complements, and require coreference between the shared arguments (the actor of the matrix verb must be coreferent with the subject of the linked, embedded verb): ay, a a , yxa x y Iy e x wara, nq'a mza-r q'-y-ha-ry, wdz-r q'a-'-w -xwaya-m but May month-NOM dir.-3sg.-come-and grass-NOM dir.-grow.-ger. dir.-begin-grow "But the month of May came, and the grass began to grow" sa s-wx-- sy-tx-r I 1sg.-finish-pret.-af. poss.1sg.-book-NOM "I finished writing my book" s-tx-n 1sg.-write-inf. REPORTED SPEECH Clauses containing reported speech are embedded in the main clause: 108 a "y y" "a eaI, -b "dwa w--t" he-ERG how 2sg.-dir.-stand "He asked me 'how are you?'" Ia e" eI q'-z--y-?-- dir.-1sg.-dir.-3sg.-say-pret.-af. "dta s-xw-ya-a-', s-xw-ya-wat" q'-z-ay?a sword 1sg.-ver.-3sg.-caus.-make horse 1sg.-ver.3sg.-find dir.-1sg.-say "Have a sword made for me, find a horse for me he tells me." / "He tells me to make him a sword, to find him a horse." Reported speech can also be expressed by a subordinate construction with a participle or a gerund: aI Ia I w -y-?-- [maz zar-k'wa-r] k' a-m hunter-ERG pref.-3sg.-say-pret.-af. [forest part.-go-NOM] "The hunter said he was going to the forest." Ia ay w w w fz-m q'-y-? ax -- [y-p -r la-w] woman-ERG dir.-3sg.-say-pret.-af. 3sg.poss.-daughter-NOM work-ger. "The woman said her daughter was working." The difference between subordinating reported speech by means of a participle and a gerund seems to lie in the level of commitment to the truthfulness of the speech. The use of gerund seems to imply less commitment by the speaker (Jakovlev 1948: 52f.): a Iay y Ia w wa q'-z-a-p-?-- -r q'a-k' -wa zar-t-r he-NOM dir.-come-ger. part.-be-NOM you dir.-1sg.-pref.-2sg.-say-pret.-af. "You told me that he came" a Iay y y Ia -r q'a-k'w-wa t-w wa q'-z-a-p-?-- he-NOM dir.-come-ger. be-ger. you dir-1sg.-pref.-2sg.-say-pret.-af. "You told me that he came (but this need not be so)" AGREEMENT There is no category of gender, and no number and definiteness agreement within the noun phrase (NP), as was shown in the chapter on nouns. Verbs agree in person with the subject, object, and indirect object (if we can talk about person agreement on the verb), and agreement in number is very limited. The verbal suffix for the plural of the subject can be left out if the subject is placed immediately before the verb: Ix aI(x) 109 '-xa-r m-k'wa-(xa) man-pl-NOM 3sg-go-(pl.) "People go" According to C. Paris (1969: 161), the suffix for the plural of the subject is compulsory only if the subject is separated from the verb by other words. This is more or less confirmed by the examples I was able to elicit. Transitive verbs agree in person and number with the subject, i. e. with the doer of the action (marked for the ergative): ax ee yx Nrt-xa-m y yahayafar y-wx-t N.-pl.-ERG. 3pl.poss. peace 3pl.begin-ant.pret. "The Narts restored peace" NEGATIVE CONCORD Kabardian is a language with negative concord. If there is a negated verb in the sentence, the negative (and not the indefinite) pronoun is used, as in Croatian, for example: Iy aI Sawsrq'wa zry -y-m-?a-w m-k'wa S. nothing dir.-3sg.-neg.-say-ger. 3sg.pres.-to go "Sosruko goes without saying anything" Croatian: Sosruko ide nita ne govorei Note that there is no negative concord in (Standard) English: Sosruko goes without saying anything/*nothing. PRO-DROP Since the information about the grammatical relations within a sentence is codified in the verbal complex, all other syntactical elements can be left out. So instead of sa r zaza "I filled it" one can say just zaza (where 0- is the prefix for 3sg., z- the prefix for 1sg. (< s), and the verb is azaan "to fill"). Compare also: sa mva s-aw-dz "I throw a rock" : s-aw-dz-r "I throw it" I rock (3sg.)-1sg.-pres.-throw (3sg.)-1sg.-throw-af. 110 RELATIVE CLAUSES In Kabardian, the translational equivalents of relative clauses are usually expressed by participial constructions (in square brackets): a y xy ea -r [maw t stawra-m] a-xaa-nw xway-t he-NOM near-by stand(part.) guard-ERG dir.-throw oneself-inf. try-ant.pret. "He tried to throw himself on the guard who was standing near-by."; a e aI ay [-r it-NOM nrt-' Nart-hero a a x x , z-a-yay-f nrt-r] nrt xsa-m x--a-rt , part.-caus-move-pot. Nart-NOM Nart council-ERG dir.-3pl.-lead-impf. w--w y-b-rt become-pret.-ger. 3pl.-consider-impf. "The Nart who was able to move it (sc. Hlapsh's rock) they used to take to the Nart council (and) they considered him to have become Nart hero." y ax aa E Ia [Thaalad xw lpaw Nrt-xa q'--r-y-t--r] T millet seed N-pl. dir.-pl.-3sg.-3sg.-give-pret.-NOM Yamna y-f'-y-h-- Y. 3pl.-advers.-3sg.-carry-pret.-af. "The millet seed, that Thagalad gave the Narts, Yamina stole (it) from them." The head of the relative clause usually follows it (exx. 1, 2), but it can also be inserted into it (3). There are no real relative pronouns; however, (under the influence of Russian?) interrogative pronouns can be used with a relative function: x a, a x xat m-a-m-y, -r xa-r-q'm who no-work-ERG-and this-NOM eat-pres.-neg. "Who doesn't work, doesn't eat" (a proverb) COORDINATION Coordinated clauses are linked asyndetically by clitics/suffixes (e. g. ry "and", see above): a -r I I y?a-ry na-'a-r eyIa ap ya-wp'-- 111 that-NOM say-and the youngest-NOM "The youngest one said that and asked Hlapsh" 3sg.-ask-pret.-af. Most likely under the influence of Russian, conjunctions which are separate, independent words have also developed, e. g. wa "but", ya "or", tma "if": a ea ay Ia sa -r q'-ya-z-d-t wa q'a-k'w--q'm I he-NOM dir.-3sg.-1sg.-invite-ant.pret. but dir.-come-pret.-neg. "I invited him, but he didn't come" e yI e yI ya w-'-n ya w-'a-n or 2sg.man-inf. or 2sg.-die-inf. "Either be a man, or die" (a proverb) THE ORDER OF SYNTACTIC ELEMENTS Like most Caucasian languages 78, Kabardian is basically an SOV language, though other (stylistically marked) word orders appear as well: a a eyya Sawsrq'wa wagwna bzda-m tayww-- S. journey bad-ERG set off-pret.-af. "Sosruko set off for his difficult journey " wagwna bzdam tayww Sawsrq'wa a Iax a b sa 'la-xa-m s--y-xwaz-- there 1sg. boy-pl.-ERG 1sg.-dir.-3pl.-meet-pret.-af. "I met the boys there" If the object of this sentence is in focus (i.e. the stress is on boys), the word order changes: Iax a xa sa 'la-xa-m b s-y-xa--xwaz-- "I met the boys there" (pay attention also to the change in the order of the deictic marker and the person marker -y-xa-). Also, if the subject of a transitive verb denoting an action is inanimate, and the object animate, the unmarked word order is OSV: Ia 'la-r 78 xa ps-m y-txal-- 112 boy-NOM water-ERG 3sg.-strangle-pret.-af. "The boy drowned" (literally: "the water strangled the young man") The same OSV order obtains in embedded, subordinate clauses, with infinite verbal forms: I Iy a Dad dad'a y-a-?w--w f-aw-- chicken egg 3sg.-caus.-smart-back-ger. 2pl.-see-pret.-af. "You saw how the egg makes the chicken smart" Interrogative pronouns and other interrogative words stand in the place of the constituent which they substitute (i. e. Kabardian is a language of the Wh-in-situ type) 79: x xya xat-m -r q'a-z-x-w- who-ERG meat-NOM dir.-refl.-eat-inter.-pret. "Who ate the meat?" I exya '-m st-r q'a-y-x-w- man-ERG what-NOM dir.-3sg.-eat-inter.-pret. "What did the man eat?" The order of the arguments in front of the verb is the mirror image of the order of personal prefixes in the verbal complex in a transitive construction; in an intransitive construction the order of the arguments is the same as the order of personal prefixes: y yy wa sa w-q'a-z-aw-wa you I 2sg.-dir.-1sg.-pres.-hit "You hit me" (intransitive construction) y ya sa wa w-s-w I you 2sg.-1sg.-see "I see you" (transitive construction) The rule for the relation between verbal arguments and person markers with transitive verbs can be represented in this way: According to Kumaxov (ed.) 2006, I: 496 the unmarked position of question words is at the beginning of the sentence, e. g. Dpa w-q'a-k'wa--nw "When will you be back?". 79 113 verbal complex TOPICALIZATION/FOCALIZATION The relation between new and old information in the sentence is expressed syntactically in Kabardian, i.e. by the order of syntactic categories in the sentence. Focalization is a process by which the new, unexpected information in the sentence (rhema, what is in focus) is emphasised. The focalized element usually comes at the beginning of the sentence: x Ia y xat y-'- wna-r who 3sg.-do-pret. house-NOM "Who built the house?" aI Ia y p'a-m y-'-- wna-r carpenter-ERG 3sg.-do-pret.-af. house-NOM "The carpenter built the house." In the previous example the word answering the question "who" is in focus, the noun p'a. The SVO order at the same time denotes that the topic of the sentence is at the end (the noun wna) 80. If the question is "what did the carpenter do?", i.e. if wna "house" is not the topic of the sentence, then the noun wna will not be at the end of the sentence, but in front of the verb (i.e., we have the unmarked SOV order): aI I st p'a-m y-'--r what carpenter-ERG 3sg.-do-pret.-NOM "What did the carpenter do?" aI y Ia p'a-m wna y-'-- carpenter-ERG house 3sg.-do-pret.-af. "The carpenter built a house." Wh-words, which are focal as a rule, must be placed before the verb: 81 x 80 81 See Kumaxov & Vamling 2006: 107 ff. See Kumaxov & Vamling 2006: 89. 114 xat y-'-ra wna-r? who 3sg.-do-inter. house-NOM "Who is building the house?" *wnar xat y'ra? *wnar y'ra xat? *y'ra xat wnar? *y'ra wnar xat? The general rule for topicalization/focalization seems to be the following: The focalized element ("rhema") must be placed in front of the verb. The focalized element may be sentence-final, but then it has to be marked by the copula/affirmative marker -: a x ya -b tx-r z-r-y-t--r Mwrt- 3sg.-ERG book-NOM part.-3sg.-3sg.-give-pret.-NOM Murat-af. "To Murat did he give the book", or "It was Murat that he gave the book to". e aa x w m-r yaz-r q'-x --r fy xakw-r- this-NOM himself-NOM dir.-be.born-pret.-NOM your(pl.) country-NOM-af. "The place where he himself was born is your country" x y psa-r z-xa--r y -r- 3sg. soul-NOM part.-dir.-lie-NOM 3sg.poss. horse-NOM-af. "That in which his soul lies is his horse" Aside from the copula/affirmative marker , the suffixes -t (for imperfect), -q'a, -ra (interrogative suffixes) can also occur as focus markers: eaI x e w yaadk' a-q'a tx-r fz-m ya-z-t--r teacher-focus(inter.) book-NOM woman-ERG 3sg.-part.-give-pret.-NOM "The teacher gave the book to the woman" ("It was the teacher that gave the book to the woman") In all focalization constructions the main verb is replaced by the participle. These constructions are typologically similar to the Insular Celtic constructions in which the copula is used for focalization, or to French constructions of the type c'est X qui... 115 116 These are expressions appropriate for men, but not for women: I, I, I, yI ?aw, ?a, ?aw, wa? (these have a similar function as verbal crutches in the language of women) x I txa saw?wa "I swear to god"; txa y c''a saw?a "I swear by god's name" Wallahy "god, by god". Aside from the special characteristics of the idioms used by men and women, there are also special varieties of Kabardian used, for example, by hunters, or young people when conversing without the presence of older people. Some topics are considered inappropriate in the conversation between male speakers (e.g. talking about women and children). Due to a pronounced code of honour insults are not taken lightly, so that verbal communication outside of the family is conducted very cautiously, in order not to offend the person you are talking to; the order of speaking is strictly fixed (young people always speak after older people). On the whole, communication in Kabardian leaves an impression of laconic expression and restraint. 117 THE LEXICON The core layer of the Kabardian lexicon was inherited from the Proto-AbkhazAdyghean language; words belonging to this layer are mostly included in the core lexicon. These are nouns denoting body parts (ie. gw "heart" = Abkhaz a-gw, na "eye" = Abkh. a-la, fa "skin" = Abkh. a-cwa), kin terms (na "mother" = Abkh. an, da "father" = Ubykh tw, q'wa "son" = Ubykh. qwa), and some basic verbs (e. g. 'an "to know" = Abkh. a-c'ara) and adjectives (e. g. "old" = Abkh. a-w), etc. Culturally and historically important are common nouns belonging to the sphere of flora and fauna, e. g. the nouns denoting bear, fox, dog, cow, pig, fish, bee, millet, nut, and plum, as well as the names of the metals copper, gold, and tin. Words common to the Adyghean-Kabardian branch of the Abkhaz-Adyghean languages represent the next layer of the lexicon. Among them there is an especially large number of words belonging to the semantic spheres of agriculture 84 (e. g. Adyghean and Kabardian van "to plow", Adyg. cwbza, Kab. vbdza "plow", Adyg. and Kab. ha "barley", Adyg. ma, Kab. ma "millet (Panicum tiliaceum)", Adyg. kawc, Kab. gwadz "wheat"). The terminology from the sphere of farm animal breeding is also common, especially for the breeding of horses (), cf. Kabardian and Adyghean ar "stirrup", xk'wa "foal", Adyghean k'a, Kabardian ?a "little foal", Adyg. fra, Kab. xwra "a breed of thoroughbred Adyghean horses", etc. Loan-words from Turkish and Turkic languages very frequently belong to the sphere of trade, economy and technology, cf. sawm "ruble", myn "a thousand", stw "shop", tawp "cannon", wn "kettle", bb "duck", bwr "black pepper", barq' "flag".Many Farsisms (words of Persian origin) have entered Kabardian through Turkic languages, e. g. dyn "faith", bazar "market", pth "emperor", haw "air", etc. Aside from these recent borrowings, there are also old Iranian loan-words in Kabardian, which could have been borrowed from Scythian or Alanic (the ancestor language of the today's Ossetian) in the prehistoric period. Many such words were borrowed into other Caucasian languages; for example, Iranian *pasu "sheep" (Cf. Skr. pu, Lat. pecu) was borrowed into Abkhaz with the meaning "sheep" and into Georgian as pasi "price"; the same meaning is found in Kabardian wsa "price" 85. A typologically similar semantic development ("sheep" > "property" > "money") has been recorded in other languages, for example in Latin in the relation between pecu "sheep" and pecnia "money". Some Kabardian words are almost certainly (Indo-)Iranianisms, but because of the shortness of attested forms we cannot be entirely sure, e. g. a "hundred" (Avestan satm), a "goat" (Vedic aja-); some words might be even older Indo-European loan-words, e. g. k'rw "crane", (cf. Latin grs, Armenian krunk, Lithuanian gerv, etc.). A younger layer of loan-words are also Arabic loan-words, which penetrated Kabardian mostly through the language of the Kur'an. They belong to the religious and the ethical-philosophical sphere of the lexicon, e. g. lh "god, Alah", anat "heaven", gwanh "sin", shat "hour", sbr "quiet, serene", mhana "meaning, 84 85 118 sense", q'l "reason, mind", br "news", a "doubt", tzr "punishment", barat "abundance", nsp "happiness", nalt "curse, damnation", zamn "time", sabap "benefit", dwnyay "world", etc. These words are quite numerous in Kabardian and most of them are not perceived as borrowings any longer. Arabic roots occur in some compounds containing native elements, cp. e.g. swrat "picture": swrattayx "photographer" (cp. Kab. tayxn "take off, take away"). The name of Kabardia's capital, Nalchik (Kab. Nlk) contains the stem nl "horse-shoe", which comes from Arabic (na`l). Finally, the chronologically last layer or borrowings are Russian loan-words, which flooded the Kabardian language in the 20th century 86. Russian loan-words cut across all spheres of the lexicon except the core lexicon; an especially large number of them belong to the scientific-technological terminology and the administration terminology, e. g. nwka "science", myna "automobile", smawlayt "aeroplane", rayspwblyka "republic", raydaktawr "editor". It is interesting, however, that the borrowing of suffixes for the formation of abstract nouns did not occur, for example the Russian suffix -cija (> Kabardian -ca); this suffix occurs in Kabardian in words such as rayzawlywca "resolution", rayvawlywca "revolution", mayxanyzaca "mechanization", but it doesn't occur in any word with a Kabardian root. Unlike a few suffixes borrowed from Turkish (e. g. the suffixes -ly, -l < -li, cf. wwr-l "good, benevolent"), the Russian suffixes cannot be added to Kabardian roots, i.e. they haven't become productive in Kabardian 87. Aside from direct borrowings, there are also many Russian calques in Kabardian, e. g. txayda "reader" (Rus. itatel'), sbaza "hoover" (Rus. pylesos), '?a "refrigerator" (Rus. xolodil'nik), bzaana "linguistics" (Rus. jayzkoznanie), etc. Although Russianisms are in Kabardian often pronounced quite differently than in Russian, the official orthography (especially after World War II) in most cases prescribes an identical way of writing them as in Russian. In older Kabardian books the name "Russia" will be found as rsay, but today it is written Rawssyya (in Cyrillic Poccue), and the noun "bank", which is pronounced with the glottalized k' (bnk'), is written, like in Russian, bnk (in Cyrillic ). The noun meaning "newspaper" was written at first as k'zayt, but today, under the influence of Russian (gazeta), it is written gazet (in Cyrillic ). Anglicisms, which have lately been penetrating all the languages of the world, enter the Kabardian standard language via Russian, e. g. kawmpyawtayr "computer", yntayrnayt "Internet", byznays "business", etc 88. It is interesting to note that Sh. Nogma's "Kabardian dictionary", compiled in the first half of the 19th century, contains only 2,5 % of words borrowed from Russian (Apaev 2000: 234). 87 Kumaxova 1972. 88 For a general survey of Kabardian lexicology and lexicography see Apaev 2000. 86 119 TEXTS 1. A Very Simple and Instructive Text about Rabbits (Source: Gwwat, L. et alii Adabza, El'brus, Nal'ik 1984). Thak'wma' h. Rabbit (rabbits) Thak'wma' h-r maz-m -aw-psaw. rabbit-NOM forest-ERG dir.-pres.-to live r p'awra m-a. he-NOM fast 3sg.pres.-run Thak'wma'h-m y r-xa-r a-'a rabbit-ERG young-pl.-NOM milk-INSTR ya-a-xa. Thak'wma' h-m wdz 3sg.-caus.-eat rabbit-ERG grass ay-x, -r ya-w b pr 3sg-to eat wood-NOM 3sg.-gnaw he-ERG hay-stack f' -wa ya-w . Thma'h-r well-ADV 3sg.-see rabbit-NOM amxwam wa-, 'mxwam in the summer grey in the winter w x -. white-af. 120 Nrt w g wp zayk'wa k'wan-w ya---t. wagw zd-tay-t-m, wya bzda q'--t-awwa. Nrt-xa-r za-'ath--wa wagw-ham tay-t-w, Sawsrq'wa q'--'as--. "Mf'a w-y-?a, Sawsrq'wa? '?a-m d-ya-s!" "Sa q'a-z-a-zaxw, f-q'-s-papa," -ya-?ary Sawsrq'wa y Tayay -m z-r-ya-dz, Harama-?wha d-aw-yay-ry z-yaph, ' wna-m z ana- q'--ya-w, ?wwa ha--t-w. Sawsrq'wa anam nasra da-pama, mf'a-m q'-ya-waa'-wa Yn na zq' wa q'-ya-w. Sawsrq'wa p'nt'-am w -wa da-p'. Mf'a-m b ada - Y nm yap'ary z w w padz'a q'-y-p at--. Padz'a-m q'px a dapr Ynm y nam y-xw--. Vocabulary: badan "lie next to" bzda "bad" ana "tower" dap "hot coals" dap'an "jump in" dapan "look in" dan "run up the hill" wagw "road" wna "territory" Harama name of mythological mountain k'wan "go" mf'a "fire" Nrt "Nart" (hero of old times) na "eye" nasn "come to" na "middle part of the face" Padz'a "burning log" pnt'a "gate" q'aazan "return" q'apwatan "catch, get" q''asn "follow, go after" q'papan "wait for" q'taywan "happen, occur" w "horseman" ?a "coldness, cold" hatn "stand above stg." ' "land, earth" taytn "find oneself, be" Tayay name of Sosruko's horse wya "cold" yap'an "jump through, jump over" 121 PART II 122 Translitterated: Wyyy, Wyyy, pna - y Sawsrq'wa y fa - y dary araw Wayy, z mxwa gwarty Y Twayayry Wayy, thak'wma llaw, yaz Sawsrq'wary y m yalalaxw p'nt'am q'dawha. Vocabulary: da - day fa - 1. Kabardian national dress; 2. form, appearance gwar - some lla - weak, shabby mxwa - day pna - ballad pnt'a - gate q'dahan - bring in(to), get in - horse thak'wma -ear Twayay - name of Sosruko's horse wyyy - Hey! yalalaxn - hang yaz - himself y - they say (particle) araan - burn, be hot - old 123 4. Kabardian proverbs (Source: Adabza psa, Nal'ik 1999). 1.Ya w'n ya w'an. 2. 'awa tay'm wy warad yarq'm. 3. Fz bzada ha'a maxa. 4. Fz bda yl' halal. 5. q'l zy?am an y?a. 6. L'ar lm tarq'm. 7. dam y na mwamy wra p'stara wyaxn. 8. C'xwr l'ama, y c'ar q'awnary, vr l'ama, y far q'awna. 9. Wy q'ma t'aw q'wmx, wy psa t'aw wm?a. 10. har psawma p?a 'arq'm. 11. 'anm 'a xa. Vocabulary: 1. ' "man"; 'an "die" 2. warad "song"; en "become weary, become tired" 3. fz "woman"; bzada "bad"; ha'a "guest"; xan "eat" 4. bda "strong"; halal "what is desirable" 5. q'l "mind, wisdom"; an "character" 6. 'a "manliness"; l "death"; tan "fear" 7. da "Adygh; Circassian"; mwa "poor"; w "salt"; p'sta "pasta (Circassian dish) 8. c'xw "person"; c'a "name"; v "ox"; fa "skin"; q'anan "remain" 9. q'ma "dagger"; q'axn "cut"; psa "word"; ?an "say, utter" 10. ha "head"; psawn "live"; p?a "hat" 11. xan "lie (in something)" 124 125 Alchiki (the Russian term for Kabardian 'an) is a traditional game played with sheep, or cattle bones. It is widespread among many peoples of Central Asia and the Caucasus, and it occurs in many variants. The rules always involve trying to get as many alchikis (bones) as you can, at the expense of your opponent. 126 REFERENCES Abdokov, A. I. Vvedenie v sravnitel'no-istorieskuju morfologiju abxazsko-adygskix i naxsko-dagestanskix jazykov, Kabardino-balkarskij Gosudarstvennyj Universitet, Nal'ik 1981. Abdokov, A. I. O zvukovyx i slovarnyx sootvetstvijax severokavkazskix jazykov, El'brus, Nal'ik 1983. Abitov, M. L. et alii Grammatika kabardino-erkesskogo literaturnogo jazyka, AN SSSR, Moskva 1957. Abyt', M. L. et alii Slovar' kabardino-erkesskogo jazyka, Diroga, Moskva 1999. Alparslan, O. & Dumzil, G. "Le parler besney de zennun ky", Journal Asiatique 1963: 337-382. Anderson, J. "Kabardian disemvowelled, again", Studia Linguistica 45/1991: 18-48. Apaev, M. L. Sovremennyj kabardino-erkesskij jazyk. Leksikologija. Leksikografija, l'brus, Nal'ik 2000. Balkarov, B. H. Jazyk besleneevcev, Kabardino-balkarskoe kninoe izdatel'stvo, Nal'ik 1959. Balkarov, B. H. "O astjax rei v kabardinskom jazyke", in: Voprosy sostavlenija opisatel'nyx grammatik, Izdatel'stvo AN SSSR, Moskva 1961: 113-122. Balkarov, B. H. Vvedenie v abxazo-adygskoe jazykoznanie, Nal'ik 1979. Bersirov, B. M. "Jugoslavskie adygi i osobennosti ix rei", Annual of IberoCaucasian Linguistics, 8/1981: 116-127. Bganokov, B. G. Adygskij tiket, El'brus, Nal'ik 1978. Braun, Ja. "Xattskij i abxazo-adigskij", Rocznyk Orientalistyczny 49.1/1994: 15-23. Catford, I. "Ergativity in Caucasian Languages", North Eastern Linguistic Society Papers 6/1975: 37-48. Chirikba, V. A. Common West Caucasian, Research School CNWS, Leiden 1996. Choi, J. D. "An Acoustic Study of Kabardian Vowels", Journal of the International Phonetic Association 21/1991: 4-12. Colarusso, J. A grammar of the Kabardian Language, University of Calgary Press, Calgary 1992. Colarusso, J. "Proto-Northwest Caucasian, or How to Crack a very hard Nut," JIES 22,1-2/1994: 1-35. Colarusso, J. "Phyletic Links between Proto-Indo-European and Proto-Northwest Caucasian", The Journal of Indo-European Studies 25, 1-2/1997: 119-151. Colarusso, J. Nart Sagas from the Caucasus, Princeton University Press, Princeton 2002. Colarusso, J. Kabardian (East Circassian), Lincom Europa, Munich 2006. Comrie, B. Tense, CUP, Cambridge 1985. ern, V. "Verb Class System in Circassian. An Attempt of Classification of Circassian Verbal Forms", Archiv Orientaln 36/1968: 200-212. Dixon, R. M. W. Ergativity, CUP, Cambridge 1994. Dixon, R. M. W. "A Typology of Causatives: Form, Syntax, and Animacy", in: R. M. W. Dixon and A. Aikhenvald (eds.), Changing Valency, C.U.P., Cambridge 2000: 30-83. Dumzil, G. Introduction a la grammaire compare des langues caucasiennes du nord, Champion, Paris 1933. 127 Giev, N. T. Voprosy rgativnogo stroja adygskix jazykov, Adygejskoe otdelenie Krasnodarskogo kninogo izdatel'stva, Majkop 1985. Gordon, M. & A. Applebaum, "Phonetic Structures in Turkish Kabardian", Journal of the International Phonetic Association 36(2) 2006: 159-186. Greenfield, E. R. "Language of Dissent: Language, Ethnic Identity, and Bilingual Education Policy in the North Caucasus", Gjaurgiev, X. Z. & X. X. Sukunov, kol'nyj russko-kabardinskij slovar', Nart, Nal'ik 1991. Halle, M. "Is Kabardian a Vowel-less Language?" International Journal of Language and Philosophy 6/1970: 95. Hewitt, G. "Antipassive and 'labile' constructions in North Caucasian", General Linguistics 22/1982: 158-171. Hewitt, G. "Northwest Caucasian", Lingua 115/2005: 91-145. Hewitt, G. Introduction to the Study of the Languages of the Caucasus, Lincom, Munich 2004. Hewitt, G. (ed.) The Indigenous Languages of the Caucasus: the North West Caucasian Languages, London 1989. Jakovlev, N. F. Grammatika literaturnogo kabardino-erkesskogo jazyka, AN SSSR, Moscow 1948. Kardanov, V. M. "Grammatieskij oerk kabardinskogo jazyka", in: M. L. Apaev et alii, Kabardinsko-russkij slovar', Gosudarstvennoe izdatel'stvo inostrannyx i nacional'nyx slovarej, Moskva 1957: 489-576. Kardanov, V. M. Glagol'noe skazuemoe v kabardinskom jazyke, Kabardinskobalkarskoe kninoe izdatel'stvo, Nal'ik 1957. Keenan E. & B. Comrie, "Noun phrase accessibility and universal grammar", Linguistic Inquiry 8/1977: 63-99. Klimov, G. A. (ed.) Strukturnye obnosti kavkazskix jazykov, Nauka, Moskva 1978. Klimov, G. A. Vvedenie v kavkazskoe jazykoznanie, Nauka, Moskva 1986. Kuipers, A. H. Phoneme and morpheme in Kabardian, Mouton, The Hague 1960. Kuipers, A. H. "The Circassian nominal Paradigm: a Contribution to Case-theory", Lingua XI, 1962: 231-248. Kuipers, A. H. "Unique types and typological universals", in: Pratidnam. Festschrift F. B. J. Kuiper, Mouton, The Hague 1968: 68-88. Kumaxov, M. A. Slovoizmenenie adygskix jazykov, Nauka, Moskva 1971. Kumaxov, M. A. "Kategorija opredelennosti-neopredelennosti v adygskix jazykax", Trudy Tbilisskogo universiteta, V. 3 (142), 1972: 119-128. Kumaxov, M. A. "Teorija monovokalizma i zapadnokavkazskie jazyki", Voprosy jazykoznanija 4/1973: 54-67. Kumaxov, M. A. "Uerbnost' neperexodnyx paradigm v adygskix jazykax", Iberijsko-kavkazskoe jazykoznanie 18/1973a: 127-132. Kumaxov, M. A. "Teorija genealogieskogo dreva i zapadnokavkazskie jazyki", Voprosy jazykoznanija 3/1976: 47-57. Kumaxov, M. A. Sravnitel'no-istorieskaja fonetika adygskix jazykov, Moskva 1981. Kumaxov, M. A. Oerki obego i kavkazskogo jazykoznanija, l'brus, Nal'ik1984. Kumaxov, M. A. Sravnitel'no-istorieskaja grammatika adygskix (erkesskix) jazykov, Nauka, Moskva 1989. Kumaxov, M. A. & Kumaxova, Z. Ju. Jazyk adygejskogo fol'klora, Nauka, Moskva 1982. 128 Kumaxov, M. A. et alii, "Ergative case in the Circassian languages", Lund University Department of Linguistics Working Papers 45(1996): 93-111. Kumaxov, M. A. & K. Vamling, "On Root and Subordinate Clause Structure in Kabardian", Lund University Working Papers in Linguistics 44/1995: 91-110. Kumaxov, M. A. & K. Vamling, Dopolnitel'nye konstrukcii v kabardinskom jazyke, The Lund University Press, Lund 1998. Kumaxov, M. A. & K. Vamling, rgativnost' v erkesskix jazykax, Malm University, Malm 2006. Kumaxov, M. A. (ed.) Oerki kabardino-erkesskoj dialektologii, lbrus, Nal' ik 1969. Kumaxov, M. A. (ed.) Kabardino-erkesskij jazyk (I-II), Izdatel'skij centr l'-Fa, Nal'ik 2006. Kumaxova, Z. Ju. Razvitie adygskix literaturnyx jazykov, Nauka, Moskva 1972. Mafedzev, S. Adyg xabz. Adygi. Obiai. Tradicii, Izdatel'skij centr l'-Fa, Nal'ik 2000. Matasovi, R. Uvod u poredbenu lingvistiku, MH, Zagreb 2001. Matasovi, R. Jezina raznolikost svijeta, Algoritam, Zagreb 2005. Matasovi, R. "Transitivity in Kabardian", in: R. D. Van Valin Jr. (ed.), Investigations of the Syntax-Semantics-Pragmatics Interface, John Benjamins, Amsterdam 2008: 59-74. Matasovi, R. "The "Dependent First" Syntactic Patterns in Kabardian and other Caucasian Languages", paper from the "Conference on the Languages of the Caucasus" held at the Max-Planck Institut fr Evolutionre Anthropologie in Leipzig, December 2007. Nrtxar. Kabardej pos. Nal'ik 1951. Nrtxar. Psaryay 'wxam y brxar. l'brus, Nal'ik 2001. zbek, B. Die tscherkessischen Nartensagen, Esprint-Verlag, Heidelberg 1982. zbek, B. Erzhlungen der letzten Tscherkessen auf dem Amselfeld, Etnographie der Tscherkessen 4, Bonn 1986. Paris, C. "Indices personnels intraverbaux et syntaxe de la phrase minimale dans les langues du Caucase du nord-ouest", Bulletin de la Socit de linguistique de Paris 64/1969: 104-183. Paris, C. Systme phonologique et phnomnes phontiques dans le parler besney de Zennun Ky (Tcherkesse oriental), Klincksieck, Paris 1974. Peterson, D. A. Applicative Constructions, O.U.P, Oxford 2007. Smeets, R. Studies in West Circassian Phonology and Morphology, Brill, Leiden 1984. Smeets, R. "The Development of Literary Languages in the Soviet Union; the Case of Circassian", in: I. Fodor & C. Hagge (eds.), Language Reform. History and Future, VI,1990: 513-541. agirov, A. K. "Kabardinskij jazyk", in: V. V. Vinogradov (ed.) Jazyki narodov SSSR, T. IV Iberijsko-kavkazskie jazyki, Nauka, Moskva 1967: 165-183. agirov, A. K. timologieskij slovar' adygskix (erkesskix) jazykov, I, II, Moskva 1977. agirov, A. K. "Kabardinskij jazyk", in: V. N. Jarceva et alii (ed.), Jazyki mira. Kavkazskie jazyki, Academia, Moskva 1998: 103-115. Tuite, K. "The Myth of the Caucasian Sprachbund": The Case of Ergativity", Lingua 108/1999: 1-26. Van Valin, R. Exploring the Syntax-Semantics Interface, CUP, Cambridge 2005. 129 Van Valin, R. & LaPolla, R. Syntax, CUP, Cambridge 1997. WALS = The World Atlas of Linguistic Structures, ed. by M. Haspelmath et alii, CUP, Cambridge 2006. Uduhu, T. . "Preruptivnye smynye soglasnye zapadnyx dialektov adygejskogo jazyka", in: Z. Ju. Kumaxova (ed.) Struktura predloenija v adygejskom jazyke, Adygejskij nauno-issledovatel'skij institut, Maykop 1976: 135-157. Zekox, U. S. "O strukture prostogo predloenija v adygejskom jazyke", in: Z. Ju. Kumaxova (ed.) Struktura predloenija v adygejskom jazyke, Adygejskij nauno-issledovatel'skij institut, Maykop 1976: 3-49. For information on the history of Kabardians and other Adyghean peoples see About the customs, dances and. culture of the Adyghean peoples see For the bibliography of works on Kabardian (in English) see A few texts about Kabardian and in this language are available at: For the transliteration of the Kabardian Cyrillic see J. Gippert, Alphabet Systems Based upon the Cyrillic Script ( The most extensive bibliography of Russian works on Kabardian can be found in the comparative grammar by M. A Kumaxov (Kumaxov 1989) and the monograph on Kabardian, edited by the same author (Kumaxov (ed.) 2006). 130 Kabardian Note: ALUANIAN = Dagestanian languages NAKH = Chechen, Ingush and Bats (Batsbi) 131 132 APPENDIX III A table of phonological correspondences between Kabardian and Adyghean (according to agirov 1977: 25) Kabardian f f' xw b d d dz gw v ' q' q'w Adyghean w 'w f p t d, c kw cw, w , ky, , d , ', k'y q qw Western Adyghean dialects (Shapsugh and Bzhedukh) are the most archaic Circassian dialects with respect to consonantism. They have a fourfold system of stops, distinguishing voiceless aspirated (ph), voiced (b), ejective (p') and voiceless unaspirated, or "preruptive" (p). It seems that Kabardian had such a system still in the beginning of the 19th century, because traces of it can be found in Sh. Nogma's writings (Uduxu 1976). In literary Kabardian, the voiceless unaspirated stops and affricates became voiced, merging with the original voiced series, and creating a number of homonyms, cp. Kab. da 1. "nut", 2. "we" vs. Bzhedukh da "nut", ta "we", or Kabardian dza 1. "army", 2. "tooth" vs. Bzhedukh dza "army", ca "tooth", etc. 133 APPENDIX IV INDEX OF KABARDIAN GRAMMATICAL MORPHEMES aw- present (for dynamic verbs) - demonstrative pronoun ("this/that") - preterite py optative particle -t anterior preterite wa "but" bla- directional ("by") -bza comparative suffix; "very" -'a Instrumental -'a "already" (verbal suffix) -'a "maybe" (verbal suffix) -'ara adverbializer; gerund -'at optative -' valency adding suffix (for intransitives) d- 1st. person plural verbal prefix da- conjunctivity (sojuznost') da- directional ("in") dana "where" day "towards" dpa "how much, how many" dwa "how" dy 1st. person pl. possessive pronoun dda comparative and superlative particle; "very" f- 2nd person pl. verbal prefix -f "potential" -fa "kind of" (nominal suffix) fy 2nd person pl. possessive pronoun f'- adversative f''(a) "except" gwa- directional ("together with") gwar "some" (quantifier) a- causative -a abstract noun formative -an evidential (probability) - pluperfect -t anterior pluperfect hawa "no" -h transitivizing suffix ndara "since the time that" -'(a) valency increasing suffix; adds directional meaning ("towards") -m Ergative (Oblique) case -m imperfect of stative verbs -m(a) conditional -m(y) permissive; "although" ma- (m-) 3 sg. of intransitives 134 maw- demonstrative pronoun ("that") m- negation (for infinite forms) m, m- demonstrative pronoun ("that") -n Infinitive -n categorical future n(a)- directional ("thither") naw "after" na comparative particle -na "without" -nt subjunctive / future II (?) -nw Infinitive -nw factual future -nwt future II (conditional) nt'a "yes" -pa perfectivizing suffix (indicates accomplished action) p'ara interrogative particle psaw "every" (quantifier) -q'a interrogative, exclamatory, and focus marking suffix q'as "every" q'- directional ("hither") -q'm negation (for finite forms) -q'wa suffix indicating excessive action; "too much" -r Nominative (Absolutive) case -r facultative present of dynamic verbs -ra interrogative -ra gerund -ra, -ry conjunction (clitic); "and" -(r)t imperfect of dynamic verbs rya- optative s-/z- 1st person sg. verbal prefix sy 1sg. possessive pronoun sma associative plural st "what" - affirmative -- suffix indicating excessive action; "too much" -a (elative) superlative a interrogative particle - directional; "from the surface of"; "when" tma "if" -ar(at) optative ha'a "after, because of" ha "every" -xwa "great" 'a- directional prefix; "under" -t imperfect of dynamic verbs -t suffix used in reinforcing the imperative -tam(a) irrealis conditional tay- directional; "on" w-/b- 2nd person sg. verbal prefix 135 wy 2sg. possessive pronoun w- factitive -w Adverbial case; gerund; adverbializing suffix xa-/x- directional ("towards the interior") -xa plural -x(a)- "already" xat "who" -xxa- "reinforced negation" -w transitivizing suffix xwa-/xw- version xwada "like" xwa-....-fa "somewhat" (circumfix modifying adjectives) xw- potential -xw('a) suffix expressing simultaneity of the action, "while" xway- debitative modal y-/r- 3rd. person sg. verbal prefix ya "or" yay attributive 3sg. possessive pronoun yy attributive 3pl. possessive pronoun yaz emphatic pronoun; "personally", "himself" y 3pl. possessive pronoun -y admirative y 3sg. possessive pronoun y' y "and" za-/z- participle forming prefix za-/z-/z- reflexive za-/zara- reciprocal zara- "instrumental" participle prefix; subordinating prefix on participles zy relative possessive pronoun; "whose" zda- "together" -() "back, again"; repetitive ayry "quotative particle" -ay diminutive suffix - transitivizing suffix -?a indefinite person marker, "somebody" ?a'a- involuntative -?wa superlative (elative); "diminutive" comparative 136 TABLE OF CONTENTS List of abbreviations......................................................................................................2 ORTHOGRAPHY...................................................................................................15 MORPHOLOGY.....................................................................................................17 Nominal inflection...................................................................................................17 Number.........................................................................................................................17 Case..............................................................................................................................18 Definiteness..................................................................................................................23 Adjectives.....................................................................................................................25 Personal and demonstrative pronouns..........................................................................27 Possessive pronouns.....................................................................................................27 Interrogative pronouns.................................................................................................27 The emphatic pronoun..................................................................................................28 Quantifiers....................................................................................................................29 Invariable words......................................................................................................30 Numerals......................................................................................................................30 Adverbs........................................................................................................................31 Postpositions.................................................................................................................32 Particles, conjunctions and interjections......................................................................33 Verbs...........................................................................................................................35 The verbal complex......................................................................................................35 Verbal negation............................................................................................................36 Person...........................................................................................................................37 Indefinite person...........................................................................................................39 Transitivity...................................................................................................................39 Labile (diffuse) verbs...................................................................................................46 Causative......................................................................................................................46 Involuntative................................................................................................................49 Factitive........................................................................................................................51 137 Active (dynamic) and stative verbs..............................................................................51 Applicatives.................................................................................................................53 I. Version (Benefactive/Malefactive) ..........................................................................53 II. Conjunctivity (Comitative)......................................................................................55 Reciprocity...................................................................................................................57 Reflexivity....................................................................................................................58 Deontic modality..........................................................................................................60 Personal and directional prefixes.................................................................................62 Tenses...........................................................................................................................63 Interrogative.................................................................................................................70 Moods...........................................................................................................................71 Evidentiality.................................................................................................................75 Deverbal nominals.......................................................................................................76 I. Infinitive....................................................................................................................76 II. Participles................................................................................................................78 III. Verbal adverbs (gerunds).......................................................................................81 Directionals..................................................................................................................82 APPENDIX: VERBAL CLASSES AND PARADIGMS............................................87 WORD FORMATION...........................................................................................93 Compounds...................................................................................................................93 Nominal suffixes..........................................................................................................94 Verb formation by prefixing........................................................................................95 Verbal suffixes.............................................................................................................96 SYNTAX...................................................................................................................99 Noun phrases (NP).......................................................................................................99 Adjective phrases.......................................................................................................100 Syntactic structure of the sentence.............................................................................101 Nominal sentence.......................................................................................................101 Equi-NP deletion........................................................................................................101 Subordination.............................................................................................................102 Case assignment in subordinate clauses.....................................................................106 Modal verbs................................................................................................................107 Phasal verbs................................................................................................................108 Reported speech.........................................................................................................108 Agreement..................................................................................................................109 Negative concord........................................................................................................110 Pro-drop......................................................................................................................110 Relative clauses..........................................................................................................111 Coordination...............................................................................................................112 The order of syntactic elements.................................................................................112 Topicalization/focalization.........................................................................................114 TEXTS......................................................................................................................120 REFERENCES.......................................................................................................127 APPENDIX I: LANGUAGE MAP OF THE CAUCASUS.......................................131 APPENDIX II: ADYGH (CIRCASSIAN) TRIBES IN THE 18TH CENTURY......132 APPENDIX III: Phonological correspondences between Kabardian and Adyghean....................................................................................................................133 APPENDIX IV: Index of Kabardian grammatical morphemes.................................134 139
https://www.scribd.com/document/79616556/Kabardian-Grammar
CC-MAIN-2019-35
en
refinedweb
art legacy series 2.13¶ Previous series release notes Next series release notes (art 3) This series contains no new art-specific features with respect to the previous series. It was created to support ROOT 6.16. For the list of specific external product changes, please consult the release notes for 2.13.00. Platform/compiler support changes¶ This series supports C++17 only (the c2 and e17 qualifiers). It also supports the macOS High Sierra and Mojave operating systems in addition to SLF6 and SL7. Breaking changes¶ ROOT 6.16 does not allow explicit template instantiations in classes.h files, required for creating dictionaries. Users should make the following changes: #include "MyType.h" - template class SomeTemplateWith<MyType>;
https://cdcvs.fnal.gov/redmine/projects/art/wiki/Series_213
CC-MAIN-2019-43
en
refinedweb
Drag and Drop Tutorial for macOS The drag-and-drop mechanism has always been an integral part of Macs. Learn how to adopt it in your apps with this drag and drop tutorial for macOS. Version - Other, Other, Other Ever since the invention of the Mac, drag and drop has been a part of the user interface. The quintessential example is in Finder, where you can drag files around to organize things or drop them in the trash. The fun doesn’t stop there. You can drag your latest sunset panorama from Photos and drop it in Messages, or pull a file from Downloads on the dock and drop it right in an email. You get the point, right? It’s cool and an integral part of the macOS experience. Drag and drop has come a long way from its beginnings and now you can drag almost anything anywhere. Try it and you might be pleasantly surprised by the actions and types supported by your favorite apps. In this drag and drop tutorial for macOS, you’ll discover how to add support to your own apps, so users can get the full Mac experience with your app. Along the way, you’ll learn how to: - Implement core drag and drop actions on NSViewsubclasses - Accept data dropped from other applications - Provide data to be dragged into other views of your app - Create custom dragged types Getting Started This project uses Swift 3 and requires, at a minimum, Xcode 8 beta 6. Download the starter project, open it in Xcode and build and run it. Meet the Project App Many kids enjoy playing with stickers and making cool collages with them, so you’re going to build an app that features this experience. You can drag images onto a surface and then you can kick things up a few notches by adding sparkles and unicorns to the view. After all, who doesn’t like unicorns and sparkles? :] To keep you focused on the objective — building out support for dragging and dropping — the starter project comes complete with the views you’ll require. All you need to do is learn about the mechanics of drag and drop. There are three parts to the project window: - The sticker view where you’ll drag images and other things - Two smaller views that you’ll turn into different dragging sources Take a look at the project. You’ll edit four specific files as you work through this tutorial, and they’re in two places: Dragging Destinations and Dragging Sources: In Dragging Destinations: - StickerBoardViewController.swift: the main view controller - DestinationView.swift: view that sits on top of the upper section of the window — it will be the recipient of your drag actions In Dragging Sources: - ImageSourceView.swift: bottom view with the unicorn image that you’ll turn into a dragging source - AppActionSourceView.swift: view that has the label Sparkles — you’ll turn this into another type of dragging source There are some other files in the Drawing and Other Stuff groups that provide helper methods and are essential to the project app, but you won’t need to give them any of your time. Go ahead and explore if you’d like to see how this thing is built! Pasteboards and Dragging Sessions Drag and drop involves a source and a destination. You drag an item from a source, which needs to implement the NSDraggingSource protocol. Then you drop it into a destination, which must implement the NSDraggingDestination protocol in order to accept or reject the items received. NSPasteboard is the class that facilitates the exchange of data. The whole process is known as a dragging session: When you drag something with your mouse, e.g., a file, the following happens: - A dragging session kicks off when you initiate a drag. - Some data bits — often an image and URL — are chosen to represent the information placed on the dragging pasteboard. - You drop that image on a destination, which chooses to reject or accept it and take some action — for instance, move the file to a new folder. - The dragging session concludes. That’s pretty much the gist of it. It’s a pretty simple concept! First up is creating a dragging destination for receiving images from Finder or any other app. Creating a Dragging Destination A dragging destination is a view or window that accepts types of data from the dragging pasteboard. You create a dragging destination by adopting NSDraggingDestination. This diagram shows the anatomy of a dragging session from the point of view of a dragging destination. There are a few steps involved in creating the destination: - When building the view, you have to declare the types that it should receive from any dragging session. - When a dragging image enters the view, you need to implement logic to allow the view to decide whether to use the data as well as let the dragging session know its decision. - When the dragging image lands on the view, you use data from the dragging pasteboard to show it on your view. Time to get down with some codeness! Select DestinationView.swift in the project navigator and locate the following method: func setup() { } Replace it with this: var acceptableTypes: Set<String> { return [NSURLPboardType] } func setup() { register(forDraggedTypes: Array(acceptableTypes)) } This code defines a set with the supported types. In this case, URLs. Then, it calls register(forDraggedTypes:) to accept drags that contain those types. Add the following code into DestinationView to analyze the dragging session data: //1. let filteringOptions = [NSPasteboardURLReadingContentsConformToTypesKey:NSImage.imageTypes()] func shouldAllowDrag(_ draggingInfo: NSDraggingInfo) -> Bool { var canAccept = false //2. let pasteBoard = draggingInfo.draggingPasteboard() //3. if pasteBoard.canReadObject(forClasses: [NSURL.self], options: filteringOptions) { canAccept = true } return canAccept } You’ve done a few things in here: - Created a dictionary to define the desired URL types (images). - Got a reference to the dragging pasteboard from the dragging session info. - Asked pasteboard if it has any URLs and whether those URLs are references to images. If it has images, it accepts the drag. Otherwise, it rejects it. NSDraggingInfo is a protocol that declares methods to supply information about the dragging session. You don’t create them, nor do you store them between events. The system creates the necessary objects during the dragging session. You can use this information to provide feedback to the dragging session when the app receives the image. NSView conforms to NSDraggingDestination, so you need to override the draggingEntered(_:) method by adding this code inside the DestinationView class implementation: //1. var isReceivingDrag = false { didSet { needsDisplay = true } } //2. override func draggingEntered(_ sender: NSDraggingInfo) -> NSDragOperation { let allow = shouldAllowDrag(sender) isReceivingDrag = allow return allow ? .copy : NSDragOperation() } This is what the code above does: - Creates a property named isReceivingDragto track when the dragging session is inside the view and has data that you want. It triggers a redraw on the view each time it is set. - Overrides the draggingEntered(_:), and decides if it accepts the drag operation. In section two, the method needs to return an NSDragOperation. You have probably noticed how the mouse pointer changes depending on the keys you hold down or the destination of a drag. For example, if you hold down Option during a Finder drag, the pointer gains a green + symbol to show you a file copy is about to happen. This value is how you control those pointer changes. In this config, if the dragging pasteboard has an image then it returns .copy to show the user that you’re about to copy the image. Otherwise it returns NSDragOperation() if it doesn’t accept the dragged items. Handling an Exit What enters the view may also exit, so the app needs to react when a dragging session has exited your view without a drop. Add the following code: override func draggingExited(_ sender: NSDraggingInfo?) { isReceivingDrag = false } You’ve overridden draggingExited(_:) and set the isReceivingDrag variable to false. Tell the User What’s Happening You’re almost done with the first stretch of coding! Users love to see a visual cue when something is happening in the background, so the next thing you’ll add is a little drawing code to keep your user in the loop. Still in DestinationView.swift, find draw(:_) and replace it with this. override func draw(_ dirtyRect: NSRect) { if isReceivingDrag { NSColor.selectedControlColor.set() let path = NSBezierPath(rect:bounds) path.lineWidth = Appearance.lineWidth path.stroke() } } This code draws a system-colored border when a valid drag enters the view. Aside from looking sharp, it makes your app consistent with the rest of the system by providing a visual when it accepts a dragged item. Note: Want to know more about custom drawing? Check out our Core Graphics on macOS Tutorial. Build and run then try dragging an image file from Finder to StickerDrag. If you don’t have an image handy, use sample.jpg inside the project folder. You can see that the cursor picks up a + symbol when inside the view and that the view draws a border around it. When you exit the view, the border and + disappears; absolutely nothing happens when you drag anything but an image file. Wrap up the Drag Now, on to the final step for this section: You have to accept the drag, process the data and let the dragging session know that this has occurred. Append the DestinationView class implementation with the following: override func prepareForDragOperation(_ sender: NSDraggingInfo) -> Bool { let allow = shouldAllowDrag(sender) return allow } The system calls the above method when you release the mouse inside the view; it’s the last chance to reject or accept the drag. Returning false will reject it, causing the drag image to slide back to its origination. Returning true means the view accepts the image. When accepted, the system removes the drag image and invokes the next method in the protocol sequence: performDragOperation(_:). Add this method to DestinationView: override func performDragOperation(_ draggingInfo: NSDraggingInfo) -> Bool { //1. isReceivingDrag = false let pasteBoard = draggingInfo.draggingPasteboard() //2. let point = convert(draggingInfo.draggingLocation(), from: nil) //3. if let urls = pasteBoard.readObjects(forClasses: [NSURL.self], options:filteringOptions) as? [URL], urls.count > 0 { delegate?.processImageURLs(urls, center: point) return true } return false } Here’s what you’re doing in there: - Reset isReceivingDragflag to false. - Convert the window-based coordinate to a view-relative coordinate. - Hand off any image URLs to the delegate for processing, and return true— else you reject the drag operation returning false. Note: Feeling extra heroic? If you were to make an animated drop sequence, performDragOperation(:_) would be the best place to start the animation. Congratulations! You’ve just finished the first section and have done all the work DestinationView needs to receive a drag. Use DestinationView’s Data Next up you’ll use the data that DestinationView provides in its delegate. Open StickerBoardViewController.swift and introduce yourself to the class that is the delegate of DestinationView. To use it properly, you need to implement the DestinationViewDelegate method that places the images on the target layer. Find processImage(_:center:) and replace it with this. func processImage(_ image: NSImage, center: NSPoint) { //1. invitationLabel.isHidden = true //2. let constrainedSize = image.aspectFitSizeForMaxDimension(Appearance.maxStickerDimension) //3. let subview = NSImageView(frame:NSRect(x: center.x - constrainedSize.width/2, y: center.y - constrainedSize.height/2, width: constrainedSize.width, height: constrainedSize.height)) subview.image = image targetLayer.addSubview(subview) //4. let maxrotation = CGFloat(arc4random_uniform(Appearance.maxRotation)) - Appearance.rotationOffset subview.frameCenterRotation = maxrotation } This code does the following tricks: - It hides the Drag Images Here label. - It figures out the maximum size for the dropped image while holding the aspect ratio constant. - It constructs a subview with that size, centers it on the drop point and adds it to the view hierarchy. - It randomly rotates the view a little bit for a bit of funkiness. With all that in place, you’re ready to implement the method so it deals with the image URLs that get dragged into the view. Replace processImageURLs(_:center:) method with this: func processImageURLs(_ urls: [URL], center: NSPoint) { for (index,url) in urls.enumerated() { //1. if let image = NSImage(contentsOf:url) { var newCenter = center //2. if index > 0 { newCenter = center.addRandomNoise(Appearance.randomNoise) } //3. processImage(image, center:newCenter) } } } What you’re doing here is: - Creating an image with the contents from the URLs. - If there is more than one image, this offsets the images’ centers a bit to create a layered, randomized effect. - Pass the image and center point to the previous method so it can add the image to the view. Now build and run then drag an image file (or several) to the app window. Drop it! Look at that board of images just waiting to be made fearlessly fanciful. You’re at about the halfway point and have already explored how to make any view a dragging destination and how to compel it to accept a standard dragging type — in this case, an image URL. Creating a Dragging Source You’ve played around with the receiving end, but how about the giving end? In this section, you’ll learn how to supercharge your app with the ability to be the source by letting those unicorns and sparkles break free and bring glee to the users’ images in the right circumstances. All dragging sources must conform to the NSDraggingSource protocol. This MVP (most valuable player) takes the task of placing data (or a promise for that data) for one or more types on the dragging pasteboard. It also supplies a dragging image to represent the data. When the image finally lands on its target, the destination unarchives the data from the pasteboard. Alternatively, the dragging source can fulfil the promise of providing the data. You’ll need to supply the data of two different types: a standard Cocoa type (an image) and custom type that you create. Supplying a Standard Dragging Type The dragging source will be ImageSourceView — the class of the view that has the unicorn. Your objective is simple: get that unicorn onto your collage. The class needs to adopt the necessary protocols NSDraggingSource and NSPasteboardItemDataProvider, so open ImageSourceView.swift and add the following extensions: // MARK: - NSDraggingSource extension ImageSourceView: NSDraggingSource { //1. func draggingSession(_ session: NSDraggingSession, sourceOperationMaskFor context: NSDraggingContext) -> NSDragOperation { return .generic } } // MARK: - NSDraggingSource extension ImageSourceView: NSPasteboardItemDataProvider { //2. func pasteboard(_ pasteboard: NSPasteboard?, item: NSPasteboardItem, provideDataForType type: String) { //TODO: Return image data } } - This method is required by NSDraggingSource. It tells the dragging session what sort of operation you’re attempting when the user drags from the view. In this case it’s a generic operation. - This implements the mandatory NSPasteboardItemDataProvidermethod. More on this soon — for now it’s just a stub. Start a Dragging Session In a real world project, the best moment to initiate a dragging session depends on your UI. With the project app, this particular view you’re working in exists for the sole purpose of dragging, so you’ll start the drag on mouseDown(with:). In other cases, it may be appropriate to start in the mouseDragged(with:) event. Add this method inside the ImageSourceView class implementation: override func mouseDown(with theEvent: NSEvent) { //1. let pasteboardItem = NSPasteboardItem() pasteboardItem.setDataProvider(self, forTypes: [kUTTypeTIFF]) //2. let draggingItem = NSDraggingItem(pasteboardWriter: pasteboardItem) draggingItem.setDraggingFrame(self.bounds, contents:snapshot()) //3. beginDraggingSession(with: [draggingItem], event: theEvent, source: self) } Things get rolling when the system calls mouseDown(with:) when you click on a view. The base implementation does nothing, eliminating the need to call super. The code in the implementation does all of this: - Creates an NSPasteboardItemand sets this class as its data provider. A NSPasteboardItemis the box that carries the info about the item being dragged. The NSPasteboardItemDataProviderprovides data upon request. In this case you’ll supply TIFF data, which is the standard way to carry images around in Cocoa. - Creates a NSDraggingItemand assigns the pasteboard item to it. A dragging item exists to provide the drag image and carry one pasteboard item, but you don’t keep a reference to the item because of its limited lifespan. If needed, the dragging session will recreate this object. snapshot()is one of the helper methods mentioned earlier. It creates an NSImageof an NSView. - Starts the dragging session. Here you trigger the dragging image to start following your mouse until you drop it. Build and run. Try to drag the unicorn onto the top view. An image of the view follows your mouse, but it slides back on mouse up because DestinationView doesn’t accept TIFF data. Take the TIFF In order to accept this data, you need to: - Update the registered types in setup()to accept TIFF data - Update shouldAllowDrag()to accept the TIFF type - Update performDragOperation(_:)to take the image data from the pasteboard Open DestinationView.swift. Replace the following line: var acceptableTypes: Set<String> { return [NSURLPboardType] } With this: var nonURLTypes: Set<String> { return [String(kUTTypeTIFF)] } var acceptableTypes: Set<String> { return nonURLTypes.union([NSURLPboardType]) } You’ve just registered the TIFF type like you did for URLs and created a subset to use next. Next, go to shouldAllowDrag(:_), and add find the return canAccept method. Enter the following just above the return statement: else if let types = pasteBoard.types, nonURLTypes.intersection(types).count > 0 { canAccept = true } Here you’re checking if the nonURLTypes set contains any of the types received from the pasteboard, and if that’s the case, accepts the drag operation. Since you added a TIFF type to that set, the view accepts TIFF data from the pasteboard. Unarchive the Image Data Lastly, update performDragOperation(_:) to unarchive the image data from the pasteboard. This bit is really easy. Cocoa wants you to use pasteboards and provides an NSImage initializer that takes NSPasteboard as a parameter. You’ll find more of these convenience methods in Cocoa when you start exploring drag and drop more. Locate performDragOperation(_:), and add the following code at the end, just above the return sentence return false: else if let image = NSImage(pasteboard: pasteBoard) { delegate?.processImage(image, center: point) return true } This extracts an image from the pasteboard and passes it to the delegate for processing. Build and run, and then drag that unicorn onto the sticker view. You’ll notice that now you get a green + on your cursor. The destination view accepts the image data, but the image still slides back when you drop. Hmmm. What’s missing here? Show me the Image Data! You need to get the dragging source to supply the image data — in other words: fulfil its promise. Open ImageSourceView.swift and replace the contents of pasteboard(_:item:provideDataForType:) with this: //1. if let pasteboard = pasteboard, type == String(kUTTypeTIFF), let image = NSImage(named:"unicorn") { //2. let finalImage = image.tintedImageWithColor(NSColor.randomColor()) //3. let tiffdata = finalImage.tiffRepresentation pasteboard.setData(tiffdata, forType:type) } In this method, the following things are happening: - If the desired data type is kUTTypeTIFF, you load an image named unicorn. - Use one of the supplied helpers to tint the image with a random color. After all, colorful unicorns are more festive than a smattering of all-black unicorns. :] - Transform the image into TIFF data and place it on the pasteboard. Build and run, and drag the unicorn onto the sticker view. It’ll drop and place a colored unicorn on the view. Great! So.many.unicorns! Dragging Custom Types Unicorns are pretty fabulous, but what good are they without magical sparkles? Strangely, there’s no standard Cocoa data type for sparkles. I bet you know what comes next. :] Note: In the last section you supplied a standard data type. You can explore the types for standard data in the API reference. In this section you’ll invent your own data type. These are the tasks on your to-do list: - Create a new dragging source with your custom type. - Update the dragging destination to recognize that type. - Update the view controller to react to that type. Create the Dragging Source Open AppActionSourceView.swift. It’s mostly empty except for this important definition: enum SparkleDrag { static let type = "com.razeware.StickerDrag.AppAction" static let action = "make sparkles" } This defines your custom dragging type and action identifier. Dragging source types must be Uniform Type Identifiers. These are reverse-coded name paths that describe a data type. For example, if you print out the value of kUTTypeTIFF you’ll see that it is the string public.tiff. To avoid a collision with an existing type, you can define the identifier like this: bundle identifier + AppAction. It is an arbitrary value, but you keep it under the private namespace of the application to minimize the risk of using an existing name. If you attempt to construct a NSPasteboardItem with a type that isn’t UTI, the operation will fail. Now you need to make AppActionSourceView adopt NSDraggingSource. Open AppActionSourceView.swift and add the following extension: // MARK: - NSDraggingSource extension AppActionSourceView: NSDraggingSource { func draggingSession(_ session: NSDraggingSession, sourceOperationMaskFor context: NSDraggingContext) -> NSDragOperation { switch(context) { case .outsideApplication: return NSDragOperation() case .withinApplication: return .generic } } } This code block differs from ImageSourceView because you’ll place private data on the pasteboard that has no meaning outside the app. That’s why you’re using the context parameter to return a NSDragOperation() when the mouse is dragged outside your application. You’re already familiar with the next step. You need to override the mouseDown(with:) event to start a dragging session with a pasteboard item. Add the following code into the AppActionSourceView class implementation: override func mouseDown(with theEvent: NSEvent) { let pasteboardItem = NSPasteboardItem() pasteboardItem.setString(SparkleDrag.action, forType: SparkleDrag.type) let draggingItem = NSDraggingItem(pasteboardWriter: pasteboardItem) draggingItem.setDraggingFrame(self.bounds, contents:snapshot()) beginDraggingSession(with: [draggingItem], event: theEvent, source: self) } What did you do in there? You constructed a pasteboard item and placed the data directly inside it for your custom type. In this case, the data is a custom action identifier that the receiving view may use to make a decision. You can see how this differs from ImageSourceView in one way. Instead of deferring data generation to the point when the view accepts the drop with the NSPasteboardItemDataProvider protocol, the dragged data goes directly to the pasteboard. Why would you use the NSPasteboardItemDataProvider protocol? Because you want things to move as fast as possible when you start the drag session in mouseDown(with:). If the data you’re moving takes too long to construct on the pasteboard, it’ll jam up the main thread and frustrate users with a perceptible delay when they start dragging. In this case, you place a small string on the pasteboard so that it can do it right away. Accept the New Type Next, you have to let the destination view accept this new type. By now, you already know how to do it. Open DestinationView.swift and add SparkleDrag.type to the registered types. Replace the following line: var nonURLTypes: Set<String> { return [String(kUTTypeTIFF)] } With this: var nonURLTypes: Set<String> { return [String(kUTTypeTIFF),SparkleDrag.type] } Now SparkleDrags are acceptable! performDragOperation(:_) needs a new else-if clause, so add this code at the end of the method just before return false: else if let types = pasteBoard.types, types.contains(SparkleDrag.type), let action = pasteBoard.string(forType: SparkleDrag.type) { delegate?.processAction(action, center:point) return true } This addition extracts the string from the pasteboard. If it corresponds to your custom type, you pass the action back to the delegate. You’re almost done, you just need to update StickerBoardViewController to deal with the action instruction. Handle the Action Instruction Open StickerBoardViewController.swift and replace processAction(_:center:) with this: func processAction(_ action: String, center: NSPoint) { //1. if action == SparkleDrag.action { invitationLabel.isHidden = true //2. if let image = NSImage(named:"star") { //3. for _ in 1..<Appearance.numStars { //A. let maxSize:CGFloat = Appearance.maxStarSize let sizeChange = CGFloat(arc4random_uniform(Appearance.randonStarSizeChange)) let finalSize = maxSize - sizeChange let newCenter = center.addRandomNoise(Appearance.randomNoiseStar) //B. let imageFrame = NSRect(x: newCenter.x, y: newCenter.y, width: finalSize , height: finalSize) let imageView = NSImageView(frame:imageFrame) //C. let newImage = image.tintedImageWithColor(NSColor.randomColor()) //D. imageView.image = newImage targetLayer.addSubview(imageView) } } } } The above code does the following: - Only responds to the known sparkle action - Loads a star image from the bundle - Makes some copies of the star image and… - Generates some random numbers to alter the star position. - Creates an NSImageViewand sets its frame. - Gives the image a random color -- unless you're going for a David Bowie tribute, black stars are a bit gothic. - Places the image on the view. Build and run. Now you can drag from the sparkles view onto the sticker view to add a spray of stars to your view. Where to go From Here? Congratulations, you created a custom drag and drop interface in your very own app! You can use the Save Image To Desktop button to save your image as a JPG with the name StickerDrag. Maybe take it a step further and tweet it to the team @rwenderlich. Here's the source code for the the completed project. This drag and drop tutorial for macOS covered the basics of the Cocoa drag and drop mechanism, including: - Creating a dragging destination and accepting several different types of data - Using the dragging session lifecycle to provide user feedback of the drag operation - Decoding information from the pasteboard - Creating a dragging source and providing deferred data - Creating a dragging source that provides a custom data type Now you have the knowledge and experience needed to support drag and drop in any macOS app. There's certainly more to learn. You could study up on how to apply effects, such as changing the dragging image during the drag or implementing an animated drop transition, or working with promised files -- Photos is one application that places promised data on the dragging pasteboard. Another interesting topic is how to use drag and drop with NSTableView and NSOutlineView, which work in slightly different ways. Learn about it from the following resources: - Apple's Drag and Drop Programming Topics - Apple's Sandboxing and Security Scoped Data, where you'll find information about how dragging and dropping files works if an application is sandboxed. - CocoaDragAndDrop sample code (in Objective-C) - DragNDropOutlineView sample code (in Objective-C) If you have any questions or comments about this drag and drop tutorial for macOS, please join the discussion below! And remember, sometimes life is a dragging experience, but everything's better with unicorns and sparkles. :]
https://www.raywenderlich.com/1016-drag-and-drop-tutorial-for-macos
CC-MAIN-2019-43
en
refinedweb
Delete Data from Success factor using SAP Cloud Integration This blog describes the steps to be used to deal with delete data from SFSF using SAP Cloud Platfrom Integration(Cloud Integration) . In SAP Cloud Platform Integration we generally use SuccessFactors adapter to communicate with SFSF where Insert, Select, Update, Upsert options are available but no delete option is there to delete the data from OData application. To deal with delete case in Cloud Integration we can use OData type adapter which will delete data from specified MDF (If the MDF supports delete function) from SFSF. To check if the specified MDF (Entity) supports delete operation follow as describes below as well as which are the primary key fields required for delete query. There are two ways to check that, below are those. - Through SFSF Application: By checking the OData API Data Dictionary against the specified Entity At the top right side it will show the supported operations and in below the Business Keys will be shown which are the primary key and mandatory to send in delete query and the attribute Nullable states the fields which can be deleted. - Through EDMX: Choose Adapter type as OData and transport and message protocol as bellows in Cloud Integration. Now go to the Adapter specific and select Model from the Resource Path in the Processing Details and provide the necessary details such as Address, Username(username@companyid) and password and press next button. Now type and search the Entity name and check if Delete operation is available against the Entity and the key fields will be shown in blue selected checkbox. Here if next button is being clicked then it will show the detailed filter configure conditions for the key fields. Where you can select the key fields as well as the filtering condition based on Operator, Input Type value and condition supported by the specified Entity only Operator fields value would be ‘=’or’>’or’<’, means the key fields value can be checked as equals or less than or greater than. This operator value is totally depends on SFSF, how the entity is restricted and being deleted based on value. In my scenario I got the equal operator only, in that case for multiple records deletion we have to manually set the groovy script and the Integration flow (using Splitter) to handle more than one record. Input Type can be set as Text which is used for manual entry like “externalcode=’62’” where business scenario demands deletion case against a particular record. Property and Header are the variables type which can be set as dynamically through groovy where entry format will be like “externalcode=’${property.A}’ or ‘${header.A}’. A is a variable which you can declare dynamically and pass the value of xml data like through groovy script as bellows. import com.sap.gateway.ip.core.customdev.util.Message; import com.sap.it.api.mapping.MappingContext; import java.lang.Object; import java.text.ParsePosition; def Message processData(Message message) { def properties = message.getProperties(); def messageLog = messageLogFactory.getMessageLog(message); def body = message.getBody(java.lang.String) as String; def parsedXml = new XmlSlurper().parseText(body); def a = parsedXml.RootNode.ChildNode ; message.setProperty(“A”,a); return message; } Query to be created in format: Select the adapter as OData and go to Adapter Details and provide Address with /odata/v2 and Credential Name deployed with Basis Authentication in tenant and in the resource path put the Entity name and query in a merged format and put Custom Query Options field as blank. The Resource path and the query format will be as bellows. EntityName(Primarykey1=datetime’${property.A}’,Primarykey2=datetime’${property.B}’,Primarykey3=’${property.C}’) In the above query I am assuming there are 3 key fields available to filter and first two key fields are belongs with DateTime fields. Example: cust_AttendanceRegularizationdetails(externalCode=datetime’${property.EffCode}’,cust_AttendanceRegularization_effectiveStartDate=datetime’${property.StartDate}’,cust_AttendanceRegularization_externalCode=’${property.EXCode}’) In my scenario our objective is to delete the detail attendance data when a new master attendance data is created in SFSF. In our scenario while creating a new master data based on the effective start date, previous months details are also getting copied and pushed into the new master data (By default in SFSF it is creating). To get rid of that scenario we are checking the attendance records created in the current master effective start date and selecting (By Select query) all details which are old than the current effective start date’s month and splitting each records in Cloud Platform Integration through General Splitter and deleting them from the same master attendance table based on the effective start date. Result in SFSF: General Error Handling: - Issue: Bad Request 400: Entity need a key value in DELETE operation Root Cause: This issue comes as the delete query is pushing to SFSF has not been recognized and cannot perform delete operation as the entity’s primary key value is being sent is either blank or multiple values are being sent. Solution: To get rid of that issue use Content Modifier and Script combination in Cloud Integration to print the values passing to the query in Groovy Script if dynamic values are being passed or extracted from a payload by using messageLog.setStringProperty. Also, I would suggest to create the URL first with constant value before make it dynamic and put it into browser with arranging the URL with server details and check if the query is showing exact value or not from that. Note: The query is being checked by URL in web browser will not delete the exact data from database. If the query is correct then it will show you the details of data exist in the Entity. - Issue:uri.UriSyntaxException – Sequential processing failed, Wrong literal format for literal Root Cause: The issue itself is saying that the issue belongs to the URL while creating that. Solution: To get rid of that issue, create the URL first with constant value before make it dynamic and put it into browser with arranging the URL with server details and check if the query is showing exact value or not from that.
https://blogs.sap.com/2017/07/14/cloud-integration-delete-data-from-successfactors-using-sap-cloud-integration/
CC-MAIN-2018-30
en
refinedweb
This article explains the absolute basics of WPF data binding. It shows four different ways how to perform the same simple task. Each iteration moves closer to the most compact, XAML-only implementation possible. This article is for people with no experience in WPF data binding. Programming in WPF involves a lot of data binding. WPF user interfaces typically use much more data binding than most Windows Forms or ASP.NET user interfaces. Most, if not all, data movement in the user interface is accomplished with data binding. This article should help WPF newbies to start thinking in terms of WPF data binding, by showing how to translate a code-only solution into a compact XAML-only solution. This article does not discuss the binding API much. It only discusses what is relevant to the simple example. If you would like to read more about the technical details of WPF data binding, you can read my article about it here. Throughout this article, we will examine several ways to implement the same simple functionality. Our goal is to create a WPF program that allows us to edit a person’s first and last name. The application should also display that person’s name, formatted as <LastName>, <FirstName>. The formatted name should immediately update whenever the first or last name changes. The user interface should look something like this: First, we will not use data binding to implement this. Let’s create a simple class to hold the person’s name: public class Person { public string FirstName { get; set; } public string LastName { get; set; } public string FullName { get { return String.Format("{0}, {1}", this.LastName, this.FirstName); } } } Next, we declare a simple user interface in XAML. These controls will display the three properties of our Person class. They exist in our application’s main Window: <StackPanel> <TextBox x: <TextBox x: <TextBlock x: </StackPanel> Finally, we can write some code in the Window’s code-behind file to manually move the data around as necessary: Person _person; // This method is invoked by the Window's constructor. private void ManuallyMoveData() { _person = new Person { FirstName = "Josh", LastName = "Smith" }; this.firstNameTextBox.Text = _person.FirstName; this.lastNameTextBox.Text = _person.LastName; this.fullNameTextBlock.Text = _person.FullName; this.firstNameTextBox.TextChanged += firstNameTextBox_TextChanged; this.lastNameTextBox.TextChanged += lastNameTextBox_TextChanged; } void lastNameTextBox_TextChanged(object sender, TextChangedEventArgs e) { _person.LastName = this.lastNameTextBox.Text; this.fullNameTextBlock.Text = _person.FullName; } void firstNameTextBox_TextChanged(object sender, TextChangedEventArgs e) { _person.FirstName = this.firstNameTextBox.Text; this.fullNameTextBlock.Text = _person.FullName; } Bugs are born in this type of code, like a swamp. This implementation requires the UI code to keep track of what controls need to be updated when certain property values change. This forces us to duplicate knowledge of the problem domain in our UI code, which is never a good thing. If we were dealing with a more complex problem domain, this type of code can get very ugly very fast. There must be a better way… Using the exact same XAML in our Window, let’s rewrite the code-behind so that the controls are data bound to the Person object. Instead of having the Window’s constructor call the ManuallyMoveData method, as seen before, now it will call this method instead: private void BindInCode() { var person = new Person { FirstName = "Josh", LastName = "Smith" }; Binding b = new Binding(); b.Source = person; b.UpdateSourceTrigger = UpdateSourceTrigger.PropertyChanged; b.Path = new PropertyPath("FirstName"); this.firstNameTextBox.SetBinding(TextBox.TextProperty, b); b = new Binding(); b.Source = person; b.UpdateSourceTrigger = UpdateSourceTrigger.PropertyChanged; b.Path = new PropertyPath("LastName"); this.lastNameTextBox.SetBinding(TextBox.TextProperty, b); b = new Binding(); b.Source = person; b.Path = new PropertyPath("FullName"); this.fullNameTextBlock.SetBinding(TextBlock.TextProperty, b); } In this version, we are no longer directly assigning values to the Text property of a TextBox or TextBlock. Now we are binding those properties on the controls to a property on the Person object. The Binding class is part of WPF, in fact it is a core piece of all WPF data binding. Setting a Binding object’s Source property indicates the data source of the binding (i.e. where the data comes from). Setting the Path property indicates how to get the bound value from the data source. Setting the UpdateSourceTrigger property to ‘ PropertyChanged’ tells the binding to update as you type, instead of waiting for the TextBox to lose focus before updating the data source. This seems all well and good, but there is a problem. If you run the program now, the formatted full name will not update when you edit the first or last name. In the previous version the formatted full name updated because we hooked each TextBox’s TextChanged event and manually pushed the new FullName value into the TextBlock. But now all of those controls are data bound, so we cannot do that. What’s the deal? The WPF data binding system is not magical. It has no way to know that our Person object’s FullName property changes when the FirstName or LastName properties are set. We must let the binding system know that FullName has changed. We can do that by implementing the INotifyPropertyChanged interface on the Person class, as seen below: public class Person : INotifyPropertyChanged { string _firstName; string _lastName; public string FirstName { get { return _firstName; } set { _firstName = value; this.OnPropertyChanged("FirstName"); this.OnPropertyChanged("FullName"); } } public string LastName { get { return _lastName; } set { _lastName = value; this.OnPropertyChanged("LastName"); this.OnPropertyChanged("FullName"); } } public string FullName { get { return String.Format("{0}, {1}", this.LastName, this.FirstName); } } #region INotifyPropertyChanged Members public event PropertyChangedEventHandler PropertyChanged; void OnPropertyChanged(string propName) { if (this.PropertyChanged != null) this.PropertyChanged( this, new PropertyChangedEventArgs(propName)); } #endregion } Notice that the new implementation of Person does not use automatic properties. Since we need to raise the PropertyChanged event when FirstName or LastName is set to a new value, we must use a normal property and field instead. If we run the app now, the formatted full name text updates as we edit the first or last name. This shows that the binding system is listening to the Person object’s new PropertyChanged event. At this point, we have gotten rid of that ugly, bug-prone code in the previous version. Our code-behind has no logic in it that determines when to update which fields. We still have quite a bit of code. It would be better if we could declare the relationships between controls and data in XAML. That would neatly separate the UI layout and configuration away from the application logic. This is especially appealing if you want to use a design tool, such as Microsoft Expression Blend, to create your user interfaces. Now let’s comment out the second version and see how to move all of this binding code into XAML. In the code-behind we will have the Window’s constructor call this method: private void BindInXaml() { base.DataContext = new Person { FirstName = "Josh", LastName = "Smith" }; } The rest of the work is done in XAML. Here is the content of the Window: <StackPanel> <TextBox x: <TextBox.Text> <Binding Path="FirstName" UpdateSourceTrigger="PropertyChanged" /> </TextBox.Text> </TextBox> <TextBox x: <TextBox.Text> <Binding Path="LastName" UpdateSourceTrigger="PropertyChanged" /> </TextBox.Text> </TextBox> <TextBlock x: <TextBlock.Text> <Binding Path="FullName" /> </TextBlock.Text> </TextBlock> </StackPanel> That XAML uses the property-element syntax to establish bindings for each control’s Text property. It looks like we are setting the Text property to a Binding object, but we’re not. Under the covers, the WPF XAML parser interprets that as a way to establish a binding for the Text property. The configuration of each Binding object is identical to the previous version, which was all in code. Running the application at this point shows that the XAML-based bindings work identically to the code-based bindings seen before. Both examples are creating instances of the same class and setting the same properties to the same values. However, this seems like a lot of XAML, especially if you are typing it by hand. It would be nice if there were a less verbose way to create the same bindings… Using the same method in the code-behind as the previous example, and the same Person class, we can drastically reduce the amount of XAML it takes to achieve the same goal. The key here is the fact that the Binding class is actually a markup extension. Markup extensions are like a XAML parlor trick allowing us to create and configure an object in a very compact way. We can use them to create an object within the value of an XML attribute. The XAML of the final version of this program is below: <StackPanel> <TextBox x: <TextBox x: <TextBlock x: </StackPanel> That XAML is almost as short as in the original version. However, in this example there is no plumbing code wiring the controls up to the data source. Most data binding scenarios in WPF use this approach. Using the Binding markup extension feature vastly simplifies your XAML and allows you to spend time working on more important things. There are many ways to hook a user interface up to data. None of them are wrong, and all of them are useful in certain situations. Most of the time WPF developers use data binding via the convenient markup extension syntax. In more complicated, dynamic scenarios, it can be useful to create bindings in code. I hope that this article has shed some light on the topic, so that you can make an informed decision about how you want to get the job done. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/WPF/MovingTowardWpfBinding.aspx
crawl-002
en
refinedweb
How to setup and use KSS on Plone 3.1 Answer to some common questions about the use and setup of KSS for Plone 3.1. 1. Setup disable certain kss features? Products. 1.4 How to disable in-line editing globally?. 2. Migration Plone 3.1 core is completely migrated to using kss.core 1.4. However if you have custom kss code, it may happen that the 3.0 -> 3.1 migration makes your kss fail with a parsing error, if you have not become aware of the deprecation warnings that appear in the kss console log (cyan lines). Fortunately, of all the things we promised we only deprecated the followings: 2.1. Deprecated form() and currentForm() in normal value providers currentForm() You must change rules that use currentForm() in a normal value provider:You must change rules that use currentForm() in a normal value provider: action-server: myServerAction;to: myServerAction-data: currentForm(); action-server: myServerAction currentForm();Or, if you want to keep compatibility with Plone 3.0 (kss 1.2): action-server: myServerAction; myServerAction-kssSubmitForm: currentForm(); form() action-server: myServerAction;to: myServerAction-data: form(); action-server: myServerAction form();Or, if you want to keep compatibility with Plone 3.0 (kss 1.2): action-server: myServerAction; myServerAction-kssSubmitForm: form(); Necessary server side changes the values directly in the method signature, or access them from the form directly. So the following old code: def method(self, data): field1 = data['field1'] field2 = data.get('field2', None) def method(self, field1, field2=None): ...An alternate way is to get them from the request: def method(self): request = self.request field1 = request.form['field1'] field2 = request.form.get('field2', None) 2.1. How to use currentFormVar on multiselect widgets?. Please also have a look at kss.demo, there is an example case. 3. Testing (Selenium) 3.1 Additional package dependencies The following branch of kss.demo must be present to run the Selenium tests When installed into Zope, it needs to have its configure and meta zcml loaded as "slugs". In addition the Zelenium product needs to be installed from: svn://svn.zope.org/repos/main/Zelenium/trunk This howto tells all the instructions for running and creating tests, and it is a very suggested reading. 4. Programming 4. 4. 4. 4. 4.5 What about version numbers of kss? plone.app.kss is released with version number 1.4 in Plone 3. The current development version of plone.app.kss is 1.5, and it will be usable with Plone 3.2. kss.core also followed the same versioning scheme, so 1.4 version is for the 3.1 release, 1.5 will be usable with 3.x. Versioning is summarized int he following table: 4.6 What are the new kss features in this kss.core version? Several syntax improvements, significantly faster page load due to using base2 for css selection, improved demos and testing, refactored code. More be read in the NEWS:txt 4.7 you have one instance that you use for development continously, this may also be a comfortable way of switching. 4. 4.
http://plone.org/documentation/how-to/kss-on-plone-3.1
crawl-002
en
refinedweb
Let's face it Windows Media Player looks sexy but what is more important it sounds a lot better than any other overskinned overpluggined monster out there. At least for me ;) Now it can't open pls files but can play any stream addresses contained in them due to some weird marketing decission. So the sollution is to get url from pls file and pass it to the player as commandline parameter So here is simple utility to open and play internet streams or files contained in popular *.pls playlist format in Windows Media Player by simply clicking on interned radios in web browser. When you click on any radio on internet radio sites like web browser usually asks in which program to open *.pls files. Just browse to pls2wmp.exe utility downloaded from this page (or compile it from provided source code) and mark Allways use check box. But if browser doesn't ask it usually means that pls file is associated with different program. In that case Hold Shift and right click on any downloaded pls file. -> Open with -> Choose Program -> browse for and select pls2wmp.exe -> check Always use selected .... checkbox. And voila all internet radios now open Windows Media Player. Well the code si very simple. It opens pls file and passes first found url/file path to launched windows media player as parameter. To make it little bit less boring it demonstrates how to read and process files without usual check file size -> allocate -> copy memory mantra. Windows does all for us. How it works? Everytime we first time touch the page sized memory (4096 bytes) via pointer returned from MapViewOfFile windows internaly generates exception that allocates page -> copies data from file. But this is all transparent to us so we just read this pointer and let the windows do the dirty job. Another advantage of this approach is that only parts of file that are accessed are allocated/transfered. So offset based operations on multi gigabyte files are extremly fast. I used PAGE_WRITECOPY which means that if we try to modify data windows allocates another temporary memory where he holds just changes without writing changes back to file. But main purpose of this article is to share new way of listening to internet radios on sites like or or also in Windows Media Player.That's it format it the way you like it and enjoy the better sounding music. ;) #include <windows.h> #include <stdio.h> CALLBACK WinMain( HINSTANCE inst, HINSTANCE prev, char* cmd, int show ) { int len = strlen(cmd); if(!len) return -1; cmd++; if(cmd[len-2]=='"') cmd[len-2]=0; HANDLE file = CreateFile(cmd,GENERIC_READ,1,0,OPEN_EXISTING,0,0); if(file==INVALID_HANDLE_VALUE) return -1; HANDLE map = CreateFileMapping(file,0,PAGE_WRITECOPY,0,10,0); if(!map) return -1; char* url = (char*)MapViewOfFile(map,1,0,0,0); if(!url) return -1; url = strstr(url,"ile"); if(!url) return -1; url = strchr(url,'=' ); if(!url) return -1; char* end = strchr(url,'\n' ); if( end) *end = 0; ShellExecute(0,"open","wmplayer",url+1,0,SW_SHOW); UnmapViewOfFile(url); CloseHandle(file); CloseHandle(map); return 0; } General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/audio-video/pls2wmp.aspx
crawl-002
en
refinedweb
Source. In the assembly SourceGrid2.dll are present 2 controls that can be inserted in the Toolbox of Visual Studio and used in any Form: GridVirtual- A grid of virtual cells (ICellVirtual). Grid- A grid of real cells (ICell). There are therefore two fundamental distinctions to do: virtual cells and real cells. Virtual cells are the cells that determine the appearance and the behavior of the cell but don't contain the value, the real cells have the same properties of the virtual cells but contain also the value of the cell, they are therefore associated to a specific position of the grid. Every cells are composed by three fundamental parts: DataModel: The DataModel is the class that manages the value of the cells. Converts the value of the cell in a string for visual representation, create the editor of the cell and validate the inserted values. VisualModel: The VisualModel is the class that draws the cell and contains the visual properties. BehaviorModel: The BehaviorModel is the class that supplies the behavior of the cell. This subdivision grants a great flexibility and reusability of code, save time and supplies a solid base for every type of customizations. For the more common cases there are some classes already arranged and configured, but with little lines of code is possible to create personalized cells (see the next paragraphs for the details). The Grid control is the ideal if you want the greatest flexibility and simplicity but with not many cells. In fact in this control every cells are represented by a .NET class and therefore occupies a specific quantity of resources. Moreover this is the only grid that supports features of RowSpan and ColumnSpan. After to have inserted the control in the form we can begin to write our code to use the grid in the Load event of the form like this: grid1.Redim(2, 2); grid1[0,0] = new SourceGrid2.Cells.Real.Cell("Hello from Cell 0,0"); grid1[1,0] = new SourceGrid2.Cells.Real.Cell("Hello from Cell 1,0"); grid1[0,1] = new SourceGrid2.Cells.Real.Cell("Hello from Cell 0,1"); grid1[1,1] = new SourceGrid2.Cells.Real.Cell("Hello from Cell 1,1"); The previous code creates a table with 2 lines and 2 columns ( Redim method) and populates every positions with a cell. I have used the SourceGrid2.Cells.Real namespace where are present all the real cells. Every cells contains all the necessary display properties, for example to change the background color of the cell we can write: SourceGrid2.Cells.Real.Cell l_Cell = new SourceGrid2.Cells.Real.Cell( "Custom back color"); l_Cell.BackColor = Color.LightGreen; grid1[0,0] = l_Cell; These are the main visual properties of a cell (for an entire list to consult the documentation of the grid): BackColor, ForeColor, Border, Font, TextAlignment, WordWrap ..., Now we try to create an entire grid with headers, automatic sort, resize of the columns with the mouse, string and DateTime editor and a checkbox. grid1.BorderStyle = BorderStyle.FixedSingle; grid1.ColumnsCount = 3; grid1.FixedRows = 1; grid1.Rows.Insert(0); grid1[0,0] = new SourceGrid2.Cells.Real.ColumnHeader("String"); grid1[0,1] = new SourceGrid2.Cells.Real.ColumnHeader("DateTime"); grid1[0,2] = new SourceGrid2.Cells.Real.ColumnHeader("CheckBox"); for (int r = 1; r < 10; r++) { grid1.Rows.Insert(r); grid1[r,0] = new SourceGrid2.Cells.Real.Cell("Hello " + r.ToString(), typeof(string)); grid1[r,1] = new SourceGrid2.Cells.Real.Cell( DateTime.Today, typeof(DateTime)); grid1[r,2] = new SourceGrid2.Cells.Real.CheckBox(true); } grid1.AutoSizeAll(); In the previous code I have set the grid border, the number of columns, the number of fixed rows and created the first header row. For the header I have used a ColumnHeader cell. With a simple cycle for I have then created the other cells using for each column a specific type. The Cell class creates automatically an appropriate editor for the type specified (in this case a TextBox and a DateTimePicker). For the last column I have used a CheckBox cell that allows the display of a checkbox directly on the cell. The form should look equal to the one in the following figure, this example is present also in the project SampleProject with the ZIP file. The GridVirtual control is the ideal when is necessary to visualize a lot of cells and you have already available a structure data like a DataSet, an Array, a document XML or other data structure. This type of grid have the same features of the Grid control except for the automatic sort (this because the grid cannot order automatically any external data structure without copying its content) and the feature of RowSpan and ColumnSpan that allow to span a cell across other adjacent cells. Another disadvantage is that to create a virtual grid is a little difficult. The main concept in a virtual grid is that the cells do not contain the values, but read and write the value in an external data structure. This idea was implemented with an abstract class CellVirtual in which is necessary to redefine the methods GetValue and SetValue. To use the GridVirtual is therefore necessary to create a class that derives from CellVirtual and to personalize the reading using the data source chosen. Usual is better to create also a control that derive from GridVirtual, to have a greater flexibility and a more solid code, overriding the method GetCell. If you prefer you can directly use the GridVirtual control and the event GettingCell. The purpose of the method GetCell and of the event GettingCell is to return, for a given position (row and column), the chosen cell. This allows a large flexibility because you can return for a specific type any ICellVirtual, for example you could return a cell of type header when the row is 0. In the following example I create a virtual grid that reads and writes the values in an array. First I have inserted the control GridVirtual in a form, then I write this code that defines our virtual class: public class CellStringArray : SourceGrid2.Cells.Virtual.CellVirtual { private string[,] m_Array; public CellStringArray(string[,] p_Array):base(typeof(string)) { m_Array = p_Array; } public override object GetValue(SourceGrid2.Position p_Position) { return m_Array[p_Position.Row, p_Position.Column]; } public override void SetValue(SourceGrid2.Position p_Position, object p_Value) { m_Array[p_Position.Row, p_Position.Column] = (string)p_Value; OnValueChanged(new SourceGrid2.PositionEventArgs(p_Position, this)); } } With the previous code I have created a virtual cell with an editor of type string that reads and writes the values in an array specified in the constructor. After the call to the SetValue method we should call the OnValueChanged method to notify the grid to update this cell. In the event Load of the Form I have insert this code: private void frmSample15_Load(object sender, System.EventArgs e) { gridVirtual1.GettingCell += new SourceGrid2.PositionEventHandler( gridVirtual1_GettingCell); gridVirtual1.Redim(1000,1000); string[,] l_Array = new string[gridVirtual1.RowsCount, gridVirtual1.ColumnsCount]; m_CellStringArray = new CellStringArray(l_Array); m_CellStringArray.BindToGrid(gridVirtual1); } I have added an event handler to GettingCell event, created the grid and the array with 1000 rows and 1000 columns, then I have created a new instance of the previous cell and with the method BindToGrid I have linked the cell to the grid. I have created a single cell that will be used for every position of the matrix. Is always necessary to call the method BindToGrid on the cells that we want to use in a virtual grid. In order to finish we should write the method GettingCell and declare the variable for the cell: private CellStringArray m_CellStringArray; private void gridVirtual1_GettingCell(object sender, SourceGrid2.PositionEventArgs e) { e.Cell = m_CellStringArray; } The result should look equal to the one in the following picture, this example is present also in the project SampleProject included in the ZIP file. Namespace: SourceGrid2.VisualModels Every cell have a property VisualModel that returns an interface of type IVisualModel. The cell uses this interface to draw and to customize the visual properties of the cell. The purpose of the VisualModel is to separate the drawing code from the rest of the code and allows to sharing the same visual model between more cells. In fact the same instance of VisualModel can be used on more cells simultaneously optimizing the use of the resources of the system. The default VisualModel classes are read-only, however each VisualModel is provided with a Clone method that allows you to create identical instances of the same model. These are the default VisualModel classes in the namespace SourceGrid2.VisualModels: SourceGrid2.VisualModels.Common- Used for classic cells. In this model you can customize the colors, the font, the borders and a lot other properties. SourceGrid2.VisualModels.CheckBox*- Used for checkbox style cells. The checkbox can be selected, disabled and can contains a caption. SourceGrid2.VisualModels.Header*- Used for header style cells with 3D borders. You can configure the borders to gradually vanish from the color of the border to the color of the cell for a better three-dimensional effect. SourceGrid2.VisualModels.MultiImages- Allows to drawing more then one image in the cell. *The VisualModel marked with an asterisk require a special interface to work correctly, for example the CheckBox model needs a cell that supports the ICellCheckBox interface. Each of these classes contains one or more static properties with some default read-only instances easily useable: SourceGrid2.VisualModels.Common.Default SourceGrid2.VisualModels.Common.LinkStyle SourceGrid2.VisualModels.CheckBox.Default SourceGrid2.VisualModels.CheckBox.MiddleLeftAlign SourceGrid2.VisualModels.Header.Default SourceGrid2.VisualModels.Header.ColumnHeader SourceGrid2.VisualModels.Header.RowHeader This code shows how to assign the same VisualModel to more cells previously created and then change some properties: SourceGrid2.VisualModels.Common l_SharedVisualModel = new SourceGrid2.VisualModels.Common(); grid1[0,0].VisualModel = l_SharedVisualModel; grid1[1,0].VisualModel = l_SharedVisualModel; grid1[2,0].VisualModel = l_SharedVisualModel; l_SharedVisualModel.BackColor = Color.LightGray; Consider also that when you write Cell.BackColor the property calls automatically the BackColor property of the VisualModel associated. To facilitate the utilization of the more common properties if you write Cell.BackColor = Color.Black; the cell automatically clone the current VisualModel , changes the backcolor to the cloned instance and assigns the cloned instance again to the cell. Namespace: SourceGrid2.DataModels To represent the value of a cell in a string format and to supply a cell data editor, is necessary to populate the property DataModel of the cell. If this property is null is not possible to change the value of the cell and the string conversion will be a simple ToString of the value. Usual a DataModel use a TypeConverter of the type asked to manage the necessary conversion, particularly the string conversion (used to represent the cell value). These are the default DataModel classes in the namespace SourceGrid2.DataModels: DataModelBase- Supplies the methods of conversion and allows the alteration of the cell value only by code, does not supply graphic interface. This class is the base of all the other editors and is used also to manage read-only cells but with customized formattings or special editors (for example the CheckBox cell use a read-only editor because the value is changed clicking directly on the checkbox). EditorControlBase- Abstract Class that help to use a control as editor for the cell. EditorTextBox- A TextBox editor. This is one of the more used editor by all types that support string conversion (string, int, double, enum,....) EditorComboBox- A ComboBox editor. EditorDateTime- A DateTimePicker editor. EditorNumericUpDown- A NumericUpDown editor. EditorTextBoxButton- A TextBox editor with a button to open a details mask. EditorUITypeEditor- Supplies the editing of the cell of all types that have an UITypeEditor. A lot of types support this interface: DateTime, Font, a lot of enum, ... A DataModel can be shared between more cells, for example you can use the same DataModel for every cell of a column. To create an editable cell there are 2 possibilities: Utility.CreateDataModel, that returns a DataModelin base to the type specified. grid1[0,0] = new SourceGrid2.Cells.Real.Cell("Hello", typeof(string)); DataModeland then assign it to the cells: SourceGrid2.DataModels.IDataModel l_SharedDataModel = SourceGrid2.Utility.CreateDataModel(typeof(string)); grid1[0,0].DataModel = l_SharedDataModel; grid1[1,0].DataModel = l_SharedDataModel;This method is recommended when you want to use the same editor for more of cells. If you need a greater control on the type of editor or there are special requirements is possible to create manually the editor class. In this case for example I create manually the class EditorTextBox and then I call the property MaxLength and CharacterCasing. SourceGrid2.DataModels.EditorTextBox l_TextBox = new SourceGrid2.DataModels.EditorTextBox(typeof(string)); l_TextBox.MaxLength = 20; l_TextBox.AttachEditorControl(grid1); l_TextBox.GetEditorTextBox(grid1).CharacterCasing = CharacterCasing.Upper; grid1[2,0].DataModel = l_TextBox; Some properties are defined to a DataModel level, while other to an editor control level, in this case the property CharacterCasing is defined to a TextBox control level. To use these properties is necessary therefore to force a linking of the editor to the grid with the method AttachEditorControl and then call the method GetEditorTextBox to returns the instance of the TextBox. This mechanism is also useful for create special editor like the ComboBox editor. To insert a ComboBox you must write this code: SourceGrid2.DataModels.EditorComboBox l_ComboBox = new SourceGrid2.DataModels.EditorComboBox( typeof(string), new string[]{"Hello", "Ciao"}, false); grid1[3,0].DataModel = l_ComboBox; Of course is possible to create custom DataModel editor with custom control or special behaviors. In the following picture it is possible to observe most of the editors available and some options like image properties: Namespace: SourceGrid2.BehaviorModels Every cell have a collection of BehaviorModel that you can read with the Behaviors property. A BehaviorModel is a class that characterizes the behavior of the cell. A model can be shared between more cells and allows a great flexibility and simplicity of any new feature. These are the default classes of type BehaviorModel: SourceGrid2.BehaviorModels.Common- Common behavior of a cell. SourceGrid2.BehaviorModels.Header- Behavior of a header. SourceGrid2.BehaviorModels.RowHeader- Behavior of a row header, with resize feature. SourceGrid2.BehaviorModels.ColumnHeader* - Behavior of a column header, with sort and resize feature. (need ICellSortableHeader) SourceGrid2.BehaviorModels.CheckBox* - Behavior of a CheckBox. (need ICellCheckBox) SourceGrid2.BehaviorModels.Cursor* - Allows to link a cursor to a specific cell. (need ICellCursor) SourceGrid2.BehaviorModels.Button- Behavior of a Button. SourceGrid2.BehaviorModels.Resize- Allows a cell to be resized with the mouse (this model is automatically used by header models). SourceGrid2.BehaviorModels.ToolTipText* - Allows to show a ToolTipText linked to a cell. (need ICellToolTipText) SourceGrid2.BehaviorModels.Unselectable- Blocks a cell to receive the focus. SourceGrid2.BehaviorModels.ContextMenu* - Allows to show a contextmenu linked to a cell. (need a ICellContextMenu) SourceGrid2.BehaviorModels.CustomEvents- Expose a list of events that you can use without deriving from a BehaviorModel. SourceGrid2.BehaviorModels.BindProperty- Allows to link the value of a cell to an external property. SourceGrid2.BehaviorModels.BehaviorModelGroup- Allows to create a BehaviorModelthat automatically calls a list of BehaviorModel, useful when a behavior needs other behaviors to work correctly. *The BehaviorModel marked with an asterisk need special cells to complete their tasks, for example the class CheckBox requires of a cell that supports the interface ICellCheckBox. Every class have some static properties that return a default instance of the class: SourceGrid2.BehaviorModels.Common.Default SourceGrid2.BehaviorModels.Button.Default SourceGrid2.BehaviorModels.CheckBox.Default SourceGrid2.BehaviorModels.ColumnHeader.SortableHeader SourceGrid2.BehaviorModels.ColumnHeader.NotSortableHeader SourceGrid2.BehaviorModels.Cursor.Default SourceGrid2.BehaviorModels.Header.Default SourceGrid2.BehaviorModels.Resize.ResizeHeight SourceGrid2.BehaviorModels.Resize.ResizeWidth SourceGrid2.BehaviorModels.Resize.ResizeBoth SourceGrid2.BehaviorModels.RowHeader.Default SourceGrid2.BehaviorModels.ToolTipText.Default SourceGrid2.BehaviorModels.Unselectable.Default In the following code example I create a BehaviorModel that change the backcolor of the cell when the user moves the mouse over the cell. public class CustomBehavior : SourceGrid2.BehaviorModels.BehaviorModelGroup { public override void OnMouseEnter(SourceGrid2.PositionEventArgs e) { base.OnMouseEnter (e); ((SourceGrid2.Cells.Real.Cell)e.Cell).BackColor = Color.LightGreen; } public override void OnMouseLeave(SourceGrid2.PositionEventArgs e) { base.OnMouseLeave (e); ((SourceGrid2.Cells.Real.Cell)e.Cell).BackColor = Color.White; } } To use this BehaviorModel insert in Load event of a form this code: grid1.Redim(2,2); CustomBehavior l_Behavior = new CustomBehavior(); for (int r = 0; r < grid1.RowsCount; r++) for (int c = 0; c < grid1.ColumnsCount; c++) { grid1[r,c] = new SourceGrid2.Cells.Real.Cell("Hello"); grid1[r,c].Behaviors.Add(l_Behavior); } Namespace: SourceGrid2.Cells These are the default cells available: GridVirtualcontrol, these are all abstract cells and you must derive from these cells to use your custom data source. CellVirtual- Base cell for each other implementation, use for the most common type of virtual cells. Header- A header cell. ColumnHeader- A column header cell. RowHeader- A row header cell. Button- A button cell. CheckBox- A checkbox cell. ComboBox- A combobox cell. Link- A link style cell. Gridcontrol. Cell- Base cell for each other implementation, use for the most common type of real cells. Header- A header cell. ColumnHeader- A column header cell. RowHeader- A row header cell. Button- A button cell. CheckBox- A checkbox cell. ComboBox- A combobox cell. Link- A link style cell. The goal of these classes is to simplify the use of VisualModel, DataModel and BehaviorModel. If we look at the code of any of these classes we can see that these classes use the previous models according to the role of the cell. There are however models that require special interfaces and in this case the cells implement all the required interfaces. This is for example the code of the cell SourceGrid2.Cells.Real.CheckBox: public class CheckBox : Cell, ICellCheckBox { public CheckBox(string p_Caption, bool p_InitialValue) { m_Caption = p_Caption; DataModel = new SourceGrid2.DataModels.DataModelBase(typeof(bool)); VisualModel = SourceGrid2.VisualModels.CheckBox.MiddleLeftAlign; Behaviors.Add(BehaviorModels.CheckBox.Default); Value = p_InitialValue; } public bool Checked { get{return GetCheckedValue(Range.Start);} set{SetCheckedValue(Range.Start, value);} } private string m_Caption; public string Caption { get{return m_Caption;} set{m_Caption = value;} } public virtual bool GetCheckedValue(Position p_Position) { return (bool)GetValue(p_Position); } public virtual void SetCheckedValue( Position p_Position, bool p_bChecked) { if (DataModel!=null && DataModel.EnableEdit) DataModel.SetCellValue(this, p_Position, p_bChecked); } public virtual CheckBoxStatus GetCheckBoxStatus(Position p_Position) { return new CheckBoxStatus(DataModel.EnableEdit, GetCheckedValue(p_Position), m_Caption); } } As you can see the CheckBox class simply use the models SourceGrid2.DataModels.DataModelBase(typeof(bool)), SourceGrid2.VisualModels.CheckBox.MiddleLeftAlign e BehaviorModels.CheckBox.Default implements the ICellCheckBox interface with its method GetCheckBoxStatus. The methods Checked, Caption, GetCheckedValue and SetCheckedValue are methods to simplify the editing of the value of the cell. The main components of a grid are the rows and the columns. To manipulate these informations SourceGrid supplies 2 properties: Rows- Returns a collection of type RowInfoCollectionthat is a strip of classes RowInfo. Columns- Returns a collection of type ColumnInfoCollectionthat is a list of classes ColumnInfo. These are some of the RowInfo class properties: Height, Top, Bottom, Index, Tag. These are instead the properties of the ColumnInfo class: Width, Left, Right, Index, Tag. There are many ways to manipulate rows and columns: grid1.Redim(2,2); grid1.RowsCount = 2; grid1.ColumnsCount = 2; grid1.Rows.Insert(0); grid1.Rows.Insert(1); grid1.Columns.Insert(0); grid1.Columns.Insert(1); These three examples perform all the same task of creating a table with 2 rows and 2 columns. To change the width or the height of a row or a column you can use this code: grid1.Rows[0].Height = 100; grid1.Columns[0].Width = 100; The properties Top, Bottom, Left and Right are automatically calculated using the width and the height of the rows and columns. To manage correctly scrollbars, columns and rows fixed and a lot other details, the grid inside has a panels structure like this: TopLeftPanel- Keeps fixed row and fixed column cells. TopPanel- Keeps fixed rows. LeftPanel- Keeps fixed columns. ScrollablePanel- Keeps all not fixed cells. HScrollBar- Horizontal ScrollBar VScrollBar- Vertical ScrollBar. BottomRightPanel- Panel to manage the small space between the two scrollbars. The mouse and keyboard events can be used with a BehaviorModel or can be connected directly to the grid. All the events are first fired to the panels and then automatically moved to GridVirtual and Grid control. To use these events you can write this code: grid.MouseDown += new System.Windows.Forms.MouseEventHandler( grid_MouseDown); This can be done also with the Visual Studio designer. Look at the example 8 in the project SampleProject for details. The grid has a default ContextMenu that can be customized with the ContextMenuStyle property. It is possible to connect a ContextMenu to the Selection object with the Grid.Selection.ContextMenuItems, that will be used for all selected cells or otherwise you can connect a ContextMenu directly to a specific cell. Look at the example 10 in the project SampleProject for further details. A cell can be selected of can have the focus. Only one cell can have the focus, identified by the FocusCellPosition property of the grid, instead many cells can be selected. A cell is selected when is present in the Selection object of the grid. The cell with the focus receives all of the mouse and keyboard events, while the selected cells can receive actions like the copy/paste. Two of the most used objects in the project SourceGrid are the struct Position and Range. The struct Position identifies a position with a Row and a Column, while the struct Range identifies a group of cells delimited from a start Position and an end Position. To optimize performance of this control use the GridVirtual control when is necessary to visualize a lot of cells and try always to share the models (DataModel, VisualModel, BehaviorModel) between more possible cells. The performance of the grid is quite good even if the drawing code can be still optimized, especially when scrolling. It is possible to consult the project SampleProject for further information on the performance of the grid. In the project SampleProject are present a lot of examples and parts of code that can give ideas or suggestions of how implements custom grid, particularly in the folder Extensions are present some grids that supply functionality like the binding to a DataSet (DataTable), to an Array and to an ArrayList. How to select an entire row: grid1.Rows[1].Select = true; How to select all the cells: grid1.Selection.AddRange(grid1.CompleteRange); How to create an editor with advanced validation rule: grid1[0,0] = new SourceGrid2.Cells.Real.Cell(2, typeof(int)); grid1[0,0].DataModel.MinimumValue = 2; grid1[0,0].DataModel.MaximumValue = 8; grid1[0,0].DataModel.DefaultValue = null; grid1[0,0].DataModel.AllowNull = true; How to create a ComboBox editor to display a value different from the real used value, in this case is displayed a string while the real value is a double. double[] l_RealValues = new double[]{0.0,0.5,1.0}; SourceGrid2.DataModels.EditorComboBox l_EditorCombo = new SourceGrid2.DataModels.EditorComboBox(typeof(double)); l_EditorCombo.StandardValues = l_RealValues; l_EditorCombo.StandardValuesExclusive = true; l_EditorCombo.AllowStringConversion = false; SourceLibrary.ComponentModel.Validator.ValueMapping l_Mapping = new SourceLibrary.ComponentModel.Validator.ValueMapping(); l_Mapping.ValueList = l_RealValues; l_Mapping.DisplayStringList = new string[]{"Zero", "One Half", "One"}; l_Mapping.BindValidator(l_EditorCombo); grid1[0,0] = new SourceGrid2.Cells.Real.Cell(0.5, l_EditorCombo); What SourceGrid can do: TypeConverteror an UITypeEditorassociated. RowSpanand ColumnSpan, to unite more cells. ... and what cannot do It is allowed to change, recompile and to distribute the control SourceGrid for private and commercial use, I ask only to maintain the Copyright notes at the end of the page. I recommend to change the file SourceGrid2.snk with a personalized version to not have problems of compatibility with different versions of the control. Consult MSDN for further information: Version 2 of SourceGrid introduce many changes, and is not possible to list the all. The manner of utilization is very similar, but is not simple to convert a code written with previous versions. These are some suggestions: ICellVirtualinterface, no more with Cellclass. This interface contains only necessary methods and therefore is poorer. For this the code that before was: grid[0,0] = new SourceGrid.Cell("Ciao"); grid[0,0].BackColor = Color.White;now should be transformed in SourceGrid2.Cells.Real.Cell l_Cell = new SourceGrid2.Cells.Real.Cell("Ciao"); l_Cell.BackColor = Color.White; grid[0,0] = l_Cell; Cellclass while now is the interface ICellVirtualand the major change of this class is that does not contain informations about the position (row and column). Celltype now use the Positiontype, is however possible to extract the interface ICellVirtual(and then cast to more specific interfaces) given a struct Positionwith the method Grid.GetCell BorderStylethat is able therefore to eliminate the eventual Panel that before was necessary to introduce a border. BehaviorModel, you can use for example the SourceGrid2.BehaviorModels.CustomEvents. Selectionobject is no more a collection of cells but a collection of Range. Rowsand Columnsobjects the code that first should work on lines and columns now results simpler, besides a lot of methods that before were in the grid now are in the RowInfoCollection, RowInfo, ColumnInfoCollectionor ColumnInfoclasses. CellsContaineris no more present, and even if logically was replaced from the panels, the more commons are linked directly to the grid and therefore the code that before used CellsContainer now could use directly the Gridcontrol. ICellModelnow is the object IDataModel, while the object VisualPropertiesnow became IVisualModel. The previous versions of this control and further information can be founded at SourceGrid - .NET(C#) Grid control News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/grid/csharpgridcontrol.aspx
crawl-002
en
refinedweb
Microsoft just released the first preview of ASP.NET MVC Framework and it's Microsoft's way to develop Web applications based on the model-view-controller architecture. It will not replace the classic ASP.NET WebForms model. For more information about the new Framework, please see the Scott Guthrie's blog message. Since there is just a first preview version available, the final feature list is not complete. At this point of time, there is no built-in solution to validate an HTML form on the client and server-side using a standardized system. The Validator Toolkit for ASP.NET MVC offers a way to do form validation on the client and server-side using validation sets. The toolkit is a new project on Microsoft's Open Source Community site at CodePlex.com. NOTICE: The latest version of the source code and the sample site for the Validator Toolkit can be downloaded from the CodePlex project (source code tab). The screenshots below give you an idea about how the toolkit will work: Here's another screenshot showing how the error messages can be displayed: Now, let's start! As mentioned above, the toolkit uses validation sets as a crucial element. Validation sets are special classes that derive from the ValidationSet base class and define all validation rules (validators) for an HTML form. The toolkit generates client-side JavaScript code based on the defined validators during the view rendering. The client-side uses the very powerful jQuery JavaScript library in conjunction with the jQuery validation plug-in to fulfil the final task of client validation. For more information regarding jQuery and the plug-in, please take a look at the following links: The validation plug-in used by the toolkit is slightly customized to support all the needed behaviour. Besides using the jQuery library for client validation, you can use it for a lot of other stuff, like animations or DOM manipulation. There are also a lot of plug-ins developed that extend the core library. Validation sets are also used to validate forms on the server (based on the Request.Form collection). Before we go ahead, let's have a look at a sample validation set: public class LoginValidationSet : ValidationSet { protected override ValidatorCollection GetValidators() { return new ValidatorCollection ( new ValidateElement("username") { Required = true, MinLength = 5, MaxLength = 30 }, new ValidateElement("password") { Required = true, MinLength = 3, MaxLength = 50 } ); } } This LoginValidationSet class defines the rules for validating a simple login form by overriding the GetValidators method of the base class. The method must return a ValidatorCollection instance with all validators used to validate the HTML form later on. In this case, the username field is required and the input for it must contain at least 5 characters and a maximum of 30 characters. The password field is required too, but within the boundaries of 3 and 50 characters. The order of the defined validators also defines the execution order of validation process. If the toolkit would use custom attributes to set validation rules instead of the method GetValidators, there is no guarantee for the validation process to validate in the same order the attributes are defined, since the Type.GetCustomAttributes method returns the attribute list in alphabetical order. Of course, you can also write your own custom validators or you may use the ValidateScriptMethod validator which allows you to call a specific JavaScript function on the client-side and a method within the validation set class for the server-side. More on that later. Once a validation set class is defined, you attach it to the view and the HTML form processing controller action using the ValidationSet attribute, like this: // // File: LoginController.cs // public class LoginController : Controller { [ControllerAction] public void Login() { RenderView("Login"); } [ControllerAction] [ValidationSet(typeof(LoginValidationSet))] public void Authenticate() { if(this.ValidateForm()) RenderView("Overview"); else RenderView("Login"); } } ... // // File: Login.aspx.cs (CodeBehind) // [ValidationSet(typeof(LoginValidationSet))] public partial class Login : ViewPage { [ValidationSet(typeof(LoginValidationSet))] public partial class Login : ViewPage { } } The controller action Authenticate then calls the ValidateForm method, which uses the ValidationSet attribute to do the server-side form validation, based on the NameValueCollection Request:Form. Within the HTML page, you need to initialize script validation for the login form ( loginForm) as follows: <script type="text/javascript"> $(function(){ updateSettingsForSample1ValidationSet($('#loginForm').validate({rules:{}})); }); </script> In the next step, you define the HTML form as usual: <form id="loginForm" action="/Login/Authenticate" method="post"> Username: <input type="text" id="username" name="username" /><br /> Password: <input type="text" id="password" name="password" /><br /> <input type="submit" value="Login" /> </form> Finally, the script that defines the validation rules must be defined: <% this.RenderValidationSetScripts(); %> You also need to include the jQuery JavaScript library into the form or master page. See the Validator Toolkit sample site for more information. Sample site of the toolkit includes the scripts in the master page: <script type="text/javascript" src="../../Content/jDate.js"></script> <script type="text/javascript" src="../../Content/jQuery.Core.js"></script> <script type="text/javascript" src="../../Content/jQuery.Delegate.js"></script> <script type="text/javascript" src="../../Content/jQuery.Validation.js"></script> That's basically all you need to do - include form validation on the client and server-side. The next section gives you an overview of the standard validators and their usage. The toolkit offers a handful of standard validators out of the box. The following table gives you an overview of the provided validators: There are still some validators missing, e.g. a general regular expression validator or a specific email validator. Each validation set definition class derives from the ValidationSet base class. This base class contains and offers common functionality for the validation process. Let's take a look at the sample below to explain some possibilities the validation set offers to validate complex forms: public class LoginValidationSet : ValidationSet { string Username = ""; string Password = ""; protected override ValidatorCollection GetValidators() { return new ValidatorCollection ( new ValidateElement("username") { Required = true, MinLength = 5, MaxLength = 30 }, new ValidateElement("password") { Required = true, MinLength = 3, MaxLength = 50 }, new ValidateScriptMethod("username", "validateUsername") ); } protected bool ValidateUsername() { // DO HERE SOME VALIDATION AND RETURN RESULT AS BOOLEAN VALUE return true; } protected override void OnValidate() { if(Username.StartsWith("billy") && Password.StartsWith("gat")) throw new ValidatorException("username", "The username/password combination "); } } Creating non-public instance member fields of type String like the Username or Password fields, allows the base class to populate those fields with the accompanying input field values. This is a simple way to access input values without checking the underlying values collection (e.g. Request.Form). The ValidateScriptMethod validator defines a JavaScript function ( validateUsername) to call during the client-side validation. This function must be defined or included in the HTML page. The ValidationSet base class checks during the server-side validation if the current validation set class contains a case-insensitive method named validateUsername. Once all validators defined with the GetValidators method are called during the validation process, the base class will call an overall method called OnValidate. By overriding this method you can do some final validation. If you want to throw an exception you need to throw the ValidatorException with the field name and message as parameters. Using the described techniques, the possibilities of the jQuery library and the custom validators (see below), you can validate most complex forms quite efficiently. The next section will describe ways to localize the messages. It's easy to localize error messages of the toolkit using the standard folder. If the default settings are not changed, the default error message for each validator is stored in the ValidationSet.resx file (in the folder App_GlobalResources). The naming convention for the resource key is as follows: <VALIDATORNAME>_DefaultErrorMessage. To change the default name of the ValidationSet.resx resource file, a derived validation set class can set the name by using the static field DefaultMessageResourceName or by adding the MessageResourceName attribute to the class. It is also possible to combine the techniques and use more than one resource file. The sample site within the Validator Toolkit contains usage examples of localized error messages and field names. The localization is simple and straightforward. Creating custom validators is quite simple, but requires some basic knowledge of the jQuery JavaScript library and the validation plug-in if the validator wants to support client validation. The sample site includes a custom validator called ValidateBuga, which checks the input value against the constant string buga. Each validator derives from the Validator class, which provides a couple of virtual methods a custom validator must override: Here is the source code of the ValidateBuga sample validator: public class ValidateBuga : Validator { public ValidateBuga(string elementsToValidate) : base(elementsToValidate) { } public override ValidatorMethodData GetClientMethodData() { return new ValidatorMethodData( "buga", "function(value,element,parameters){return value=='buga';}", "$.format('" + ErrorMessageFormat + "')" ); } public override string GetClientRule(string element) { return "buga:true"; } public override string GetClientMessage(string element) { return string.Format("buga:'{0}'", GetLocalizedErrorMessage(element)) .Replace("'", "\'"); } protected override void Validate(string element) { if(Values.ContainsKey(element) == false || (Values[element] ?? string.Empty).Trim() != "buga") InsertError(element); } protected override string GetDefaultErrorMessageFormat() { return "The field {0} must contain the value \"buga\""; } } Another way to use custom validation is by using the validator ValidateScriptMethod. It allows to call a JavaScript function on the client-side and a specific method of the validation set class on the server-side. The custom validation is part of the validation set class and is not usable in multiple validation sets. The Validator Toolkit is a simple and easy way to add client-side and server-side HTML form validation to the new ASP.NET MVC Framework. It is easy to extend the toolkit by adding custom validators. You may use the toolkit until Microsoft provides a solution for HTML form validation. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/aspnet/MvcValidatorToolkit.aspx
crawl-002
en
refinedweb
Common Plone programming recipes Basics of accessing and modifying objects programmatically using Python, Plone's programming language. Introduction. Creating and cloning items This part tells how to put in some data to your portal Creating content objects You can create Plone content objects using the parent folder's invokeFactory method. All folderish content types have this method. # Create folder called employees # title = an example of optional initial values folder = self.portal.invokeFactory("Folder", id="employees", title="Employees") When objects are created from the Plone user interface, a method called PortalFolder.createObject is invoked. createObject gives an object a temporary id and sets the object id from the object title the first time the object is saved. Bypassings invokeFactory security and type checksinvokeFactory method checks that the object is allowed to create within the context. The rules are for the end user in mind. Sometimes you want to create objects without security intervention - for example, workaround the global_allow flag in portal types. There is a method in CMFPlone.utils for such use cases. from Products.CMFPlone.utils import _createObjectByType # invokeFactory barks about global_allow flag # We bypass invokeFactory checks by calling _createObjectByType #self.invokeFactory(CheckoutTool.meta_type, CheckoutTool.id) _createObjectByType(CheckoutTool.meta_type, self, CheckoutTool.id) Creating Zope objectsPlone is not the only application running on Zope. There are a lot of pure products out there. Creating Zope objects differ somewhat of creating Plone objects. Usually Zope 2.x objects have a special manage_add method which must be imported and called to create the object. Here is an example: # Create MySQL connection using ZMySQLDA product # self = portal root here from Products.ZMySQLDA.DA import manage_addZMySQLConnection if not "mysql_connection" in self.objectIds(): manage_addZMySQLConnection(self, "mysql_connection", "MySQL connection", "toholampi@localhost root", check=True) Cloning content objectsCopying an object is not simple as it sounds. One must always keep in mind what happens to - Unique object ids - Content object references - Search index data # Create a copy of the master object classgroup_folder.manage_pasteObjects(proto_folder.manage_copyObjects([sourceId])) # Correct id and title after succesful cloning classgroup_folder.manage_renameObjects([sourceId], [targetId]) # Correct title object = classgroup_folder[targetId] object.setTitle(targetTitle) # Update navigation tree and search index data object.reindexObject() Throght-the-web content creationPlone uses function called createObject to initialize anedit form for a newly created object. createObject automatically generates default object id and sets it's state to "under creation" (object._at_creation_flag = True). If you wish to invoke createObject from your Python script, below is an example. ## Script (Python) "add_expertise_area" ##title=Add expertise area button handler ##bind container=container ##bind context=context ##bind namespace= ##bind script=script ##bind state=state ##bind subpath=traverse_subpath ##parameters=id='' ## # This script is invoked by a custom save button press. # # It saves the current object, creates a child object # and then opens this child object in edit view. # # Author: Mikko Ohtamaa <[email protected]> res = context.content_edit_impl(state, id) context.plone_log("Got res:" + str(res)) # Add new expertise area and move to its edit form if res.status == "success": # Override default "changes saved" message state.kwargs["portal_status_message"] = u"Please fill in expertise area details" context.plone_log("Got state:" + str(state)) # Created returns URL for the new object # The state object is changed by createObject, so we can discard this information created = context.createObject(type_name = "expertiseArea") Reading and writing field values How to access and change the state of your content Fields and accessors Normally one creates custom content types using Archetypes Schema pattern. Schema is a list of fields and their properties. In these examples, we have declared field called "mySomeField" in the schema of the content. Archetypes generates automatically following setters/getters (generated in Archetypes/ClassGen.py): - accessor (get method) getMySomeField() - mutator (set method) setMySomeField(value) - raw accessor (also known as edit accessor, get method which doesn't perform any character encoding) getMySomeFieldRaw() Automatically generated accessors and mutators can be overriden by field properties. E.g. title accessor is Title() instead of getTitle(). value = myItem.getMagic() myItem.setMagic(newValue) Using Field.get and Field.setSometimes you cannot access accessor and mutator directly, or you want to wrap returned values for e.g. international character encoding. Field get and set functions do this. f = myItem.getField("myFunnyField")See file Archetypes/Field.py for more details. # Same Field instance is shared between all content objects so we need to give # the target content object for Field methods as a parameter oldValue = f.get(myItem) f.set(myItem, newValue) Reindexing For quick object look-ups, Plone maintains search catalogs and indexes. These indexes hold a copy of certain object attributes in search friendly form. In certain situations, when an object is being manipulated, changes are not reflected back to indexes automatically. Thus, the navigation tree, search and some other site functions go out-of-sync and start to behave oddly. Namely, object.setTitle() doesn't change the object title in navtree which is generated from PathIndex. To cure the situation call object.reindexObject() after calling mutators. schema = ATContentTypeSchema.copy() + Schema( # field list ( TextField('someField', widget=StringWidget( label='Here is a text value', description="Try to set this", ), ), IntegerField('someIntegerField') ) class MyType(ATCTContent): """ We declare some custom content as example """ schema = schema typeDescription= 'MyType custom content type' meta_type = 'MyType' archetype_name = 'MyType' # ... # Then later on you can manipulate fields # assuming you have created portal.myfolder.item myItem = portal.myfolder.myitem myItem.setSomeField("Moo was here") # increase counter intFieldValue = myItem.getSomeIntegerField() myItem.setSomeIntegerField(intFieldValue + 1) Getting object type # Getting object type directly from the class descriptor type = object.portal_type Listing") Deleting and renaming content objects Deleting and renaming Plone objects programmatically Deleting content object Use parent_container.manage_delObjects to delete content. With Zope, it's possible to delete items using the del keyword. Do not do this. Using pure del doesn't remove the item from various internal indexes leaving your portal in an inconsistent state. Instead, always use the method manage_delObjects for Plone objects. # delObjects takes a sequence of ids as a paramater self.portal.myfolder.manage_delObjects(["myitem"]) # takes list of item ids that are to be removed as a parameter Renaming content objectsChanging id (URL visible) part: folder.manage_renameObject(id='my_item_id', new_id='my_item_new_id') Changing object title folder.item.setTitle("My new content title") folder.item.reindexObject() Security model This chapter takes an overview for Zope security model and how one sets permission barries in Python code Security model Python Methods are protected by Zope security manager. Each method declares permission required to access the method. When method is called from sandboxed context (HTTP URL, page template or site script) all calls going outside the sandbox are checked by Python security manager. After breaching out from the security sandbox, there are no further automatic security manager call checks, since the security management gives heavy performance overhead for each function call. All methods are usually accessible by normal URL (e.g.) and after penetrating security barries once there are no rechecks for the security. It is very important to define proper permissions for each method which could manipulate or export private information. There are several roles, which have set of permissions. Roles are inherited - a subfolder can have different permission set for the role as the parent folder. Users and groups are given roles. Again, user can have different roles in the different part of the site. Defining security for your methods and modules In Zope, Permissions are normal Python strings. The best practice is to refer to permission names through variables so that typing errors are catched in Python during loading and won't cause problems later on. Below are few common Plone permissions. These are declared in CMFCore product permissions module. Permissions are listed in variable name = variable content pairs. - View = "View" - View item having this permissions i.e. access view related class methods and accessors like getId(), getText() - ModifyPortalContent = 'Modify portal content' - Edit item having this permission i.e. access edit related class methods and mutators like setId(), setText() - AddPortalContent = 'Add portal content' - User can add items to the folder with this permission - ListFolderContents = 'List folder contents' - User is allowed to see what items are inside the folder - AddXXXContent = "Add XXX content" - Each content type has its own permission to control creation of this content type. So, to add one new item you need both AddPortalContent and AddXXXContent permissions. An example how to use permissions definitions from CMFCore: from CMFCore import permissions class REAgent(Member): """ Real-estate agent content type """ security = ClassSecurityInfo() archetype_name = 'REAgent' meta_type = 'REAgent' portal_type = 'REAgent' schema = schema security.declareProtected(permissions.View, "showImage") def showImage(self, blaa): """ This method is protected by View permissions. Everyone must have (inherited) View permission to call this method """ ... Module level functions are declared using Zope's ModuleSecurity and class methods are declared using ClassSecurity. - ClassSecurity.declarePublic("myMethodName") Everything can access this functions - even your web browser - ClassSecurity.declarePrivate("myMethodName") Method is accessible after penetrating security barrier. - ClassSecurity.declareProtected(MY_PERMISSION_STRING_CONSTANT, 'myMethodName'). If user has permissions whose name is declared in MY_PERMISSION_STRING_CONSTANT string, the user can call the method and penetrate the security barrier. For module security, read this tutorial. # Zope imports from AccessControl import ClassSecurityInfo, Unauthorized from Globals import InitializeClass # Plone imports from Products.Archetypes.public import registerType # Local imports from Products.MyProduct.permissions import ADD_ISSUES_PERMISSION class Issue(ATCTContent): """ Usability issue for an application """ security = ClassSecurityInfo() security.declareProtected(ADD_ISSUES_PERMISSION, 'initializeArchetype') def doStuff(self, **kwargs): """ Set voting enabled initially called by the generated addXXX factory in types tool """ ATCTContent.initializeArchetype(self, **kwargs) setattr(self, "enableRatings", True) setattr(self, "enableVoting", True) if(notAllowed == True): raise Unauthorized, "Your user can't do this" security.declarePublic('isVoteable') def isVoteable(self): """ Do not allow voting of sealed item """ workflowTool = getToolByName(self, 'portal_workflow') #print "Is voteable:" + str(workflowTool.getStatusOf("issue_workflow", self)) return (workflowTool.getStatusOf("issue_workflow", self)["review_state"] == "in_progress") registerType(Issue, PROJECTNAME) class MyCustomClass: security = ClassSecurityInfo() security.declarePublic("blaa") def blaa(self): pass InitializeClass(MyCustomClass) Permissions How to manipulate Zope permission in Python code Checking for permissionsHow to manually check if the current user has a permission available in the context. In the example below self is used as context. # Use portal_membership tool for checking permissions mtool = context.portal_membership checkPermission = mtool.checkPermission # checkPermissions returns true if permission is granted if checkPermission('Modify portal content', context): return "you can modify it" # # or... # if not getSecurityManager().checkPermission(MANAGE_USABILITY_ITEMS_PERMISSION, self): raise Unauthorized, "User " + str(getSecurityManager().getUser()) + " doesn't have permission to create Application Folders, missing permission:" + MANAGE_USABILITY_ITEMS_PERMISSION Setting object permissions Normally one should never set context object permissions directly in Plone. The correct approach is to create a workflow which has a state which sets the context object permissions.Normally one should never set context object permissions directly in Plone. The correct approach is to create a workflow which has a state which sets the context object permissions. Zope manages permission internally in form "a permission is allowed for certain roles + acquired roles if acquisition is turned on". Use method manage_permissionZope manages permission internally in form "a permission is allowed for certain roles + acquired roles if acquisition is turned on". Use method manage_permission in AccessControl/Role.py to set permissions. def manage_permission(self, permission_to_manage, roles=[], acquire=0, REQUEST=None): """Change the settings for the given permission. If optional arg acquire is true, then the roles for the permission are acquired, in addition to the ones specified, otherwise the permissions are restricted to only the designated roles. """ An example: # Common Plone permissions have pseudovariable definitions # in this module from Products.CMFCore import permissions def fix_assignment_permissions(context): """ Makes the context available for student and tutor only. """ # View is an one of Plone's own permissions. It defines # who are allowed to view the object. # Roles student, tutor and manager are allowed to view the context. context.manage_permission( permissions.View, roles = ["Student", "Tutor", "Manager"], acquire=False) Testing for securityIn unit tests, PloneTestCase has method setRoles. It sets the active security roles for unit test driver. Always use setRoles in unit testing code, so that the tests will catch security errors too. def testCreateServiceRequest(self): """ Create service request and translate it through all states """ self.setRoles(("Member",)) self.portal.service_requests.invokeFactory("BuyerServiceRequest", id="testRequest") req = self.portal.service_requests.testRequest self.setRoles((REAGENT_ROLE,)) workflowTool = self.portal.portal_workflow workflowTool.doActionFor(req, "pick_request") workflowTool.doActionFor(req, "close_request") self.setRoles(("Manager",)) workflowTool.doActionFor(req, "reopen_request") self.setRoles((REAGENT_ROLE,)) workflowTool.doActionFor(req, "pick_request") workflowTool.doActionFor(req, "close_request") Manipulating permissionsThe easiest way to set custom permission for roles is to do it via workflows. Please refer to this tutorial. Note that workflows cannot user permissions before permissions are declared at portal root level. See Zope/AccessControl/Role.py for methods if you need to do it directly. This might come in handy: # Python imports import types from StringIO import StringIO # Zope imports from AccessControl.Permission import Permission)) Roles Creating roles, dealing with content objects and roles Creating a new roleThe example below creates a role programmatically From PloneInstallation RoleInstaller.py def doInstall(self, context): """Creates the new role @param context: an InstallationContext object """ context.portal._addRole(self.role) context.logInfo("Added role '%s'" % self.role) if self.model: # Copies permissions from an existing role permissions = self._currentPermissions(context, self.model) context.portal.manage_role(self.role, permissions=permissions) context.logInfo("Give permissions of '%s' to '%s'" % (self.model, self.role)) if self.allowed: context.portal.manage_role(self.role, permissions=self.allowed) context.logInfo("Allowed permissions %s to '%s'" % (', '.join(["'" + p + "'" for p in self.allowed]), self.role)) if self.denied: permissions = self._currentPermissions(context, self.role) for p in self.denied: permissions.remove(p) context.portal.manage_role(self.role, permissions=permissions) context.logInfo("Denied permissions %s to '%s'" % (', '.join(["'" + p + "'" for p in self.denied]), self.role)) return Checking a role for the current user in context user.getId() in context.users_with_local_role('Owner') Manipulating roles Adding local rules for a user. The role is effective only in the context and nested child objects. object.manage_addLocalRoles(username, ("My Custom Role",)) Adding a role for a group in context Users Getting logged in user, manipulating user database Getting logged in user Getting the current logged in user and his/her username from Products.CMFCore.utils import getToolByName mt = getToolByName(self, 'portal_membership') if mt.isAnonymousUser(): # the user has not logged in pass else: member = mt.getAuthenticatedMember() username = member.getUserName() Delete usersPlone 2.5 way: try: self.portal.acl_users.source_users.doDeleteUser("hr") except KeyError: # User does not exist pass Get member fullname mt = getToolByName(self, 'portal_membership') member = mt.getAuthenticatedMember() fullname" : member.getProperty('fullname') Get member email address mt = getToolByName(self, 'portal_membership') member = mt.getAuthenticatedMember() fullname" : member.getProperty('fullname') Manipulating user databaseManipulating users depends a bit what kind of user backend you have - Zope internal user database - CMFMember or other product which presents users as site content - External user database through PlonePAS (e.g. LDAP Windows user accounts) def createMember(self, id, pw, email, roles=('Member',)):CMFMember example: pr = self.portal.portal_registration member = pr.addMember(id, pw, roles, properties={ 'username': id, 'email' : email }) return member def createREAgent(self, id): md = self.portal.portal_memberdata tmp_id = id + '_tmp_id' md.invokeFactory(type_name='REAgent', id=tmp_id) return md._getOb(tmp_id) Setting a member property on all members This example shows how to change the editor for all users. Below code is used from an external method, it was placed as 'switchToKupu.py' inside a product's 'Extensions/' directory. This was used to move users from Epoz to Kupu: def switchToKupu(self): out = [] # Collect members pm = self.portal_membership for memberId in pm.listMemberIds(): member = pm.getMemberById(memberId) editor = member.getProperty('wysiwyg_editor', None) if editor == 'Kupu': out.append('%s: Kupu already selected, leaving alone' % memberId) else: member.setMemberProperties({'wysiwyg_editor': 'Kupu'}) out.append('%s: Kupu has been set' % memberId) return "\n".join(out) Workflows How to deal with worklows programmatically Default workflowsFor Plone stock workflow state ids and transition ids see DCWorkflow/Default.py Creating workflows To create or manipulate workflows please refer to this tutorial. Getting workflow state of an objectIf you want to read the workflow state of an object, use the following snippet:) Setting workflow state To set workflow state programmatically, you need to use WorkflowTool portal.invokeFactory("SampleContent", id="sampleProperty")WorkflowTool also can list available actions. Note that there can be several workflows per object. This is important to know when retrieving the current workflow state. workflowTool = getToolByName(context, "portal_workflow") workflowTool.doActionFor(portal.sampleProperty, "submit") Getting installed workflowsGets the list of ids of all installed workflows. Test if there is one particular present. # Get all site workflows ids = workflowTool.getWorkflowIds() self.failUnless("link_workflow" in ids, "Had workflows " + str(ids)) Getting default workflow for a portal type # Get default workflow for the type chain = workflowTool.getChainForPortalType(ExpensiveLink.portal_type) self.failUnless(chain == ("link_workflow",), "Had workflow chain" + str(chain)) Getting workflows for an objectHow.failUnless(len(chain) == 1) # this must must be the workflow name self.failUnless(chain[0] == 'link_workflow', "Had workflow " + str(chain[0])) Important content methods Getting content type, URL, workflow state, etc. Schema and fields Getting Field from content field = content.getField("myFieldName") Getting schema schema = content.getSchema()Iterating through fields for id in schema.keys(): field = schema[id] Item type Getting item type item.getTypeInfo().getId() # return factory type information id, e.g. portal_type attribute URLIf you want to give URLs for your content object in page templates and page scripts url = item.absolute_url()or in page template <a href="#" tal: Creating content types How to create your custom content types for Plone I am not going to repeat everything which can be found from Martin Aspeli's excelent RichDocument tutorial. Views and templates Plone has different sets of views appearing for each content type. The basic views are "view" and "edit". This chapter tells how to add and manipulate views. Changing default view templateThe most common use case is that one wishes to have a customized view template for a new content type. First, change "default_view" attribute in AT class definition. This attribute tells what page template is used to view the content object. class consultantInformation(ATCTFolder): """ """ default_view = "consultant_view" schema = schema Adding dynamic views Dynamic views appear in the object's Display menu. The manager can override default view mode for the object. For example, Plone ships with "Album view" for folders which makes folders behave like photo albums, showing thumbnailed images. - Add your custom template to supplied views of your content class class WorkPackageFolder(ATFolder): """ Contains work package items """ schema = schema filter_content_types = True typeDescription= 'Work package folder' meta_type = 'WorkPackageFolder' archetype_name = 'Work package folder' # Generate user friendly id from item title during creation # Effective only for ATContentTypes based classes _at_rename_from_title = True suppl_views = ('my_template_name',) - Create a custom view template my_template_name.pt. Good starting points are base_view.pt in Archetypes product and various templates in ATContentTypes product. - Create my_template_name.pt.metadata file which will contain user readable label for your view and other information. This must be in the same folder with the view template. [default] title = My view name Overriding default edit templateThe default edit template is called "base_edit.cpt". Here are instructions how you replace it for your AT class to add custom text on the template. - Copy base_edit.cpt and base_edit.metadata to your skins directory - Rename them to specific to your item, e.g. wnc_edit.cpt and wnc_edit.metadata - Edit wnc_edit.cpt. The following exampe adds a new button Add expertise area next to Save. Note that <input> name must be in form of "form.button.xxx". <metal:use_body <metal:block fill-slot="buttons" tal:define="fieldset_index python:fieldsets.index(fieldset); n_fieldsets python:len(fieldsets)"> <input tal:condition="python:fieldset_index > 0" class="context" tabindex="" type="submit" name="form_previous" value="Previous" i18n:attributes="value label_previous;" tal:attributes="tabindex tabindex/next; disabled python:test(isLocked, 'disabled', None);" /> <input tal:condition="python:fieldset_index < n_fieldsets - 1" class="context" tabindex="" type="submit" name="form_next" value="Next" i18n:attributes="value label_next;" tal:attributes="tabindex tabindex/next; disabled python:test(isLocked, 'disabled', None);" /> <input class="context" tabindex="" type="submit" name="form_submit" value="Save" i18n:attributes="value label_save;" tal:attributes="tabindex tabindex/next; disabled python:test(isLocked, 'disabled', None);" /> <input class="context" tabindex="" type="submit" name="add_expertise_area" value="Add expertise area" /> <input class="standalone" tabindex="" type="submit" name="form.button.cancel" value="Cancel" i18n:attributes="value label_cancel;" tal: </metal:block> </metal:use_body> - Add following to your content type class from Products.ATContentTypes.content.base import updateActions, updateAliases class WNCCompany(ATCTContent): """ Consultancy company listing entry """ security = ClassSecurityInfo() # This name appears in the 'add' box archetype_name = 'Consultancy company' meta_type = portal_type = 'WNCCompany' global_allow = True _at_rename_after_creation = True schema = schema # Override default edit view actions = updateActions(ATCTFolder, ( { 'id': 'edit', 'name': 'Edit', 'action': 'string:${object_url}/wnc_edit', 'permissions': (permissions.ModifyPortalContent,), }, )) Page template and widget magic How to perform often requested tricks with Archetypes widgets and page templates Foreword Forgive me if page template formatting become messy when copy-pasting code into Kupu. Constructing a callable macro name dynamically Try this code (also, example in the section below): <tal:block tal: <tal:use-macro metal: </tal:block> Iterating through certain content types in a folder The following macro serves as a base how to iterate certain content types in a folder <html xmlns="" xml: <!-- per_content_type_renderer macro definition List certain content types and calls a target macro for them. Target macro is given as a page template filename which contains the macro. Takes arguments: wanted_item_type: string, content type id. Be careful with space padding in template code. view_macro: which macro to be called, template file basename, file contains 'listing_core' macro Author: Mikko Ohtamaa --> <body> <metal:macro <tal:foldercontents <tal:listing <div tal: <tal:block tal: <tal:activity tal: <tal:use-macro metal: </tal:activity> </tal:block> </div> </tal:listing> </tal:foldercontents> </metal:macro> </body> </html> Rendering widget directly from folder listing This is an often heard request - one wants to render a widget outside Archetypes rendering flow <html xmlns="" xml: <body> <tal:define metal: <!-- Calling Archetypes widget renderer directly for a certain field This snippet is useful if one wants to render AT widgets outside their context object, e.g. in a folder summary view. The following code takes an arbitary AT content object in item_object variable. It checks whether this item object has a field "Staff" and then calls Staff default widget renderer. We need to perform the trick of redeclaring context variable which might confuse some code (e.g. permissions) so be careful. Takes arguments: target_item: Object whose field we are rendering field_name: Field name as a string use_label: True or False whether label should be rendered Author: Mikko Ohtamaa --> <tal:has-field tal: <tal:field-context tal: <div tal: <tal:if_perm <tal:if_use_label tal: <metal:use_label </tal:if_use_label> <metal:use_data </tal:if_perm> </div> </tal:field-context> </tal:has-field> </tal:define> </body> </html> Example how to use the page template above: <html xmlns="" xml: <body> <tal:define metal: <h1>Render item_object.Staff field</h1> <tal:field-render-core tal: <tal:call-renderer metal: </tal:field-render-core> </tal:define> </body> </html> Including a nested context in item The following snippet will include code from an nested item in the context folder. The nested item will be rendered in a custom widget view macro. <!-- VIEW --> <metal:define <tal:no-fees <p tal: Please add Fees page with id "course-fees" to see it here. You can use contents tab/rename button to change the id of the object. This messages is visible for editors only. </p> </tal:no-fees> <tal:has-fees <tal:new-context <metal:body </tal:new-context> </tal:has-fees> </metal:define> Quick installer snippets Each Plone product has quick installer script which prepares the portal for the product. Here are some useful snippets which you can reuse. PloneInstallation I sincerely recommend using PloneInstallation product in quick installer scripts. It has many classes needed not to reinvent the wheel every time one writes a quick installer script. Enabling Large Plone Folders Large Plone Folders (also known as BTreeFolders) use binary trees as the folder index. This makes folder look ups faster on large item counts. By default, creation of Large Plone Folders is disabled. To enable it, run this code # Allow creation of large folders lpf = portal.portal_types.getTypeInfo("Large Plone Folder") lpf.global_allow = True Hiding actionsIf you use quick installer script to customize your Plone site you might want to hide certain actions from the end users # Hide some actions actionsTool = self.portal_actions act = actionsTool.getActionInfo("document_actions/print") act.condition = "python: False" act = actionsTool.getActionInfo("document_actions/sendto") act.condition = "python: False" Removing permissions and preventing anonymous registrationThis code removes the permission to create new users from anonymous visitors. The wanted side effect is that Join link also disappears. # Python imports import types # Zope imports from AccessControl.Permission import Permission # Plone imports from Products.CMFCore.permissions import *)) # Prevent registration at the site removePermissionsFromRole(self, "Anonymous", (AddPortalMember,)) Adding permissions for custom rolesThe following snippets allows you to add permissions for roles # Python imports import types # Zope imports from AccessControl.Permission import Permission # Plone imports from Products.CMFCore.permissions import *)) # Make anonymous link submitting possible addPermissionsForRole(self.link_pool, "Anonymous", (AddPortalContent,)) Adding external method # Add external method # # External methods are Python code which lie in Zope content # structure. Each external method has a module in Extensions folder # and Zope object in Zope. You can call external methods by typing # in URL directly. External methods bypass Zope security mechanism. # # Usually external methods are used for automated maintenance tasks. # # # Following adds a function from poll.py which is in the product's Extensions folder # # self = portal root from config import PROJECTNAME from Products.ExternalMethod.ExternalMethod import manage_addExternalMethod if not "poll_sql" in self.objectIds(): manage_addExternalMethod(self, "poll_sql", "Poll external SQL source", PROJECTNAME + ".poller", "poll_sql") Developer scripts Command line scripts which are useful in product development Unit test runner for WindowsSince Plone 2.5.1 invoking per product unit test modules has become a major pain. NOTE: This is only necessary on MS Windows. Linux, OSX and other *NIX platforms can run the tests normally from the commandline. Here is a short .bat file which allows one to run unit tests within the context of one product. @echo off REM Invoking unit test directly doesn't work anymore on Plone 2.5.1 REM See set PYTHON=d:\python24\python.exe set ZOPE_HOME=F:\workspace\plone-2.5.1\Zope-2.9.6\Zope set INSTANCE_HOME=F:/workspace/plone-2.5.1/instance set SOFTWARE_HOME=%ZOPE_HOME%\lib\python set CONFIG_FILE=%INSTANCE_HOME%\etc\zope.conf set PYTHONPATH=%SOFTWARE_HOME% set TEST_RUN=%ZOPE_HOME%\bin\test.py "%PYTHON%" "%TEST_RUN%" --config-file="%CONFIG_FILE%" --usecompiled -vp --package-path=%INSTANCE_HOME%/Products/lsmintra Products.lsmintra Portal catalog queries Portal catalog provides search indexing information for Plone site. Portal catalog queries are much faster than walking through objects manually, Searching content objects by author and typeThe following snippet will perform a search which returns all items for a certain type and a certain creator. # Search site for a consultant profile whose creator the current # user is # from Products.CMFCore.utils import getToolByName portal_catalog = getToolByName(context, 'portal_catalog') mt = getToolByName(context, 'portal_membership') if mt.isAnonymousUser(): # the user has not logged in return None else: member = mt.getAuthenticatedMember() username = member.getUserName() # Please refer to portal_catalog tool # Zope management interface for default Plone seach indexes query = {} query["Creator"] = username query["Type"] = "Consultant Profile" # Return brain objects for search results brains = portal_catalog.searchResults(**query) context.plone_log("Got results:" + str(brains)) if len(brains) > 0: # Return the real object of the first search hit return brains[0].getObject() else: # Np hits - no profile created yet return None Test existence of index and metadata colums # Test if catalog has a search index if not "getFirmName" in catalog_tool.indexes(): catalog_tool.manage_addIndex("getFirmName", "ZCTextIndex", extra) # Test if catalog has a metadata column if not "getSummary" in catalog_tool.schema(): catalog_tool.manage_addColumn("getSummary") Actions for content objects Actions are state changing triggers users perform on objects. For example, edit object, copy or print are actions. This chapter describes how to add new actions and manipulate existing actions. Adding a tab to content typeTo be done Enabling and disabling actions site wide def disable_actions(portal): """ Remove unneeded Plone actions @param portal Plone instance """ # getActionObject takes parameter category/action id # For ids and categories please refer to portal_actins in ZMI actionInformation = portal.portal_actions.getActionObject("document_actions/rss") # See ActionInformation.py / ActionInformation for available edits actionInformation.edit(visible=False) Enabling and disabling actions for content type A sample code to disable few stock Plone actions. The example product RTFExport available here. from Products.ATContentTypes.content.base import updateActions, updateAliases class Employee(ATCTContent): """ Employee record """ # Add RTF export action icon for the object # Hide properties and sharings tabs # Hide cut and copy actions actions = updateActions(ATCTContent, ( { 'id' : 'export_rtf', 'name' : 'Export as RTF', 'action' : 'string:$object_url/export_rtf', 'permissions' : (View,), 'category' : "document_actions", }, { 'id' : 'metadata', 'visible' : False, }, { 'id' : 'local_roles', 'visible' : False, }, { 'id' : 'sendto', 'visible' : False, }, { 'id' : 'cut', 'visible' : False, }, { 'id' : 'copy', 'visible' : False, }, ) ) Overriding edit actionAn usual use case is using custom edit form. Add the following snippet to your content class definition to use consultant_edit form as the edit form: actions = updateActions(ATCTFolder, ( { 'id': 'edit', 'name': 'Edit', 'action': 'string:${object_url}/consultant_edit', 'permissions': (permissions.ModifyPortalContent,), }, )) Properties Properties are flexible key-value pairs assigned to content types and tools. Properties are passed to child content objects via acquisition. PropertiesProperties are a special kind of acquired values. Properties have automatically generated user interface in Zope Management Interface to deal with them. If you hit any folder or object in ZMI it has properties tab were can fiddle around with these. The most common properties one would want to change are probably left_slots and right_slots which control the appearing of portlets in Plone 2.5.x. (Plone 3.0 has reworked portlet system). Setting propertiesSetting properties to an object causes it to override parent properties in an acquisition chain. Properties must not exist on the object before calling _setProperty. _setProperty takes property type which can be found out on ZMI Properties tab. Overriding portlet settings in a subfolder. No portlets are used for items inside this folder: data_storage._setProperty('left_slots', [], 'lines') data_storage._setProperty('right_slots', [], 'lines') Updating properties Existing properties can be updated with _updateProperty. This only works if properties have been created using set before. Properties must exist on the target object itself, inherit properties are not count in. Updating properties left_slots and right_slots for the portal root: portal._updateProperty("left_slots", ["here/portlet_navigation/macros/portlet","here/portlet_login/macros/portlet"]) portal._updateProperty("right_slots", []) Testing existence of a propertyUse object.hasProperty. The following example code will set or update a property. # The following code will create or update property. # Update default view page template for assigments folder # default_page property tells the custom page # template used to render this particular content object # in view mode if not assignments.hasProperty("default_page"): # Create the property assignments._setProperty("default_page", "") # Override assigments value (old or new created) assignments._updateProperty("default_page", "assigments_view") Site and navigation tree properties There is a special tool portal_properties which manages most of Plone's site wide properties. Please refer to its content by peeking it in ZMI. Example: Modifying a navigation tree behavior self.portal_properties.navtree_properties._updateProperty("topLevel", 1) Portlets How to deal with portlets Activating a custom portletThis is Plone 2.x way. Plone 3.0 has revamped portlet system. The item shows portlets which are defined in left_slots or right_slots properties. # Activate shopping portlet # This can be done for any folder # self = portal root right_slots = self.right_slots new_portlet_macro = "here/portlet_shopper/macros/portlet" if not new_portlet_macro in right_slots: self._updateProperty("right_slots", right_slots + (new_portlet_macro,)) Creating a new content type 1-2-3 The check list what you need to do when you create new content types for Plone. This applies for Plone 2.1.x and still works in Plone 2.5.x. It is encouraged to use newer mechanisms provided by Five subsystem when you start new products from scracth. Please read Martin Aspeli's excellent tutorial about the subject. Archetypes is a Plone subsystem to define new content types. Content types are Python classes which have special attributes described by Archetypes product. The most import of them is schema which defines what fields your content type has. Archetypes reference manual is handy. File system product skeleton You need a file system product where new content types is added. You can take the existing product and rip its flesh away or use examples provided by Archetypes manual. Content type Python module Create a Python module containing your content type class declaration - Create a .py module to the "content" folder of the product. - Add Necessary dependency imports for fields, widgets, parent schema and parent class Example # Plone imports from Products.Archetypes.public import * from Products.ATContentTypes.content.base import ATCTContent from Products.ATContentTypes.content.base import updateActions, updateAliases from Products.ATContentTypes.content.schemata import ATContentTypeSchema from Products.ATContentTypes.content.schemata import finalizeATCTSchema # Local imports from Products.MyCustom.BulletField import BulletField from Products.MyCustom.BulletWidget import BulletWidget from Products.MyCustom.config import * - Create the schema definition for your content type. See Archetypes manual for available fields, widgets and their properties. Example schema=ATFolderSchema.copy() + Schema(( TextField('isItForMe', default='', searchable=True, widget=RichWidget( label='Is it for me?', ) ), TextField('benefits', default='', searchable=True, widget=RichWidget( label='What are the benefits of taking this course?', ) ), LinksField('whatDoILearn', default='', searchable=True, widget=RichWidget(label='What do I learn?', macro="what_do_i_learn_widget.pt") ), )) finalizeATCTSchema(schema) The class definition defines security and general properties of your content type. Example: class CoursePage(ATFolder): """ Courses page content type """ # Internal programming id which Plone uses to refer this type portal_type = meta_type = 'CoursePage' # User readable name in "Add new content" drop down menu archetype_name = 'Course page' # Help text for Add new content drop down menu typeDescription = "A course description telling what students should expect for this course" # Fields used in this content type (as defined above) schema = schema # If true, this content type can be created anywhere at the site global_allow = True # We limit what kind of items are allowed in this folderish content type filter_content_types = True # List of allowed content types allowed_content_types = [ "GeneralLSMPage", "CourseModulePage", "CourseModePage", "GeneralLSMPage2", "CourseFeesModePage" ] You need to use registerType to register your class definition for Zope security manager. # CoursePage is your Python class # PROJECTNAME is the name of your product and it's usually declared in config.py registerType(CoursePage, PROJECTNAME) Initializing the product To make your content type available for Plone - You need to import it during the Plone start-up sequence - You need to run the quick installer script to create persistent portal_type entries for your content type In __init__.py of your product, import the new content type module so that Zope security manager is initialized for it """ Copyright 2006 xxx """ __author__ = 'Mikko Ohtamaa <[email protected]>' __docformat__ = 'epytext' from Products.Archetypes.public import process_types, listTypes from Products.Archetypes.ArchetypeTool import getType from Products.CMFCore.DirectoryView import registerDirectory from Products.CMFCore import utils as CMFCoreUtils from config import * from Permissions import * def initialize(context): """ Registers all classes to Zope @param context Zope/App/ProductContext instance """ # initialize security context from content import MyCustomContentTypeModule # Register our skins directory - this makes it available via portal_skins. registerDirectory(SKINS_DIR, GLOBALS) # helper function to go through all content types in your product content_types, constructors, ftis = process_types( listTypes(PROJECTNAME), PROJECTNAME) # Initialize content types CMFCoreUtils.ContentInit( PROJECTNAME + ' Content', content_types = content_types, permission = PermissionNameToCreateThisContent, extra_constructors = constructors, fti = ftis, ).initialize(context) Then, you need to remember to register types in Extension/Install.py quick installer script. This creates persistent portal_type entries for your content type. Each time you change the general properties of your content type (everything except schema) you need to rerun the quick installer. """ Extension/Install.py quick installer script Your Copyright line here """ __author__ = 'Mikko Ohtamaa <[email protected]>' __docformat__ = 'epytext' # Python imports from cStringIO import StringIO # Plone imports from Products.Archetypes.public import listTypes from Products.Archetypes.Extensions.utils import installTypes, install_subskin # Local imports from Products.MyProduct.Extensions.utils import * from Products.MyProduct.config import * def install(self): """ This is called by Plone quick installer tool @param self portal instance object @return String which is added for the installer log """ out = StringIO() # Register layout files in the portal install_subskin(self, out, GLOBALS) registerStylesheets(self, out, STYLESHEETS) registerScripts(self, out, JAVASCRIPTS) # Register Archetypes types in the portal installTypes(self, out, listTypes(PROJECTNAME), PROJECTNAME) print >> out, "Installation completed." return out.getvalue() def uninstall(self): # TODO out = StringIO()
http://plone.org/documentation/tutorial/manipulating-plone-objects-programmatically/tutorial-all-pages
crawl-002
en
refinedweb
One of the biggest annoyances with Sharepoint 2007 is the quirky things you have to do in order to customize a site. This is especially true when it comes to custom master pages. You create a stunning master page in Designer, configure the site to use it, then load the page and wait to bask in the glory. Lo and Behold! It worked! Job done, go grab a beer ... but you better drink it fast because Sharepoint has a nasty surprise in store for you. That master page only works on the content pages in your site. System pages (i.e. viewlists.aspx) will refuse to use your amazing Master page. All that work is wasted on a half complete user experience. Or is it?Why is it not doing what I tell it to do?This is because those system pages are hard-wired to use a different master page (application.master) . To make matters worse, you only get one application.master for everywhere. You could go modify this file, but be careful: changes to this will affect ALL pages based on that master, everywhere. It's not something that can be customized on a site-by-site basis. To make matters still worse, Microsoft *will* update this file in upcoming patches, so odds are good that it will break on you sometime in the future, and likely with little warning.Ok, so what's the skinny?Create a custom httpModule and install it on your Sharepoint site that remaps which .Master pages your pages use. If you aren't familiar with httpModules, fear not, they are extremely simple.The httpModule sits in the middle of the http request processing stream and can intercept page events when they initialize. The pages load up and say "I'm going to use application.master", to which your module replies "not on my watch, buddy" and not so gently forces the page to use the Master page of your choice.The Gory Details(this assumes that you already have the aforementioned Nifty Master Page created. If not, please search Google for any of the hundreds of tutorials on how to do this) Prepare your Master PagesYou will need two .Master pages. One to replace default.master and the other to replace application.master. It is very important that when you are creating these pages that you include all of the ContentPlaceHolders that exist in the .Master page you are replacing. Throw any ContentPlaceHolders that you are not using in a hidden DIV at the bottom of the page - but before the closing FORM tag (the only exception to this seems to be "PlaceHolderUtilityContent" which goes in after the closing form tag). Once in place, you can use the normal Master Page options in the UI to select the default.master replacement. Second, be sure to remove the Search Control from your Application.Master replacement. The reason for this is that the search box does not normally appear on system pages and will cause an error during rendering. You can probably simplify this a bit by using nested master pages, but I haven't had a chance to look into that yet.Step 1 - Create the httpModuleCreate a new Class Library project in Visual Studio and start with the code below. So simple, even a manager could do it (maybe). Obviously, you will have to change the path to match your environment. Oh, and sign the assembly as well. using 3/1/2007 - UPDATE!In response to comments, I have updated these instructions, in particular, I have added the "Prepare your Master Pages" section that addresses most of the issues encountered in the comments. Also, do not use this method if your sharepoint install has the shared Service Provider (SSP) installed on the same web application as your main sharepoint environment. The system pages used by the SSP do not work properly when their master page is replaced like this. I'm sure there is a logical reason why, I simply haven't had the time to dive into it. Pingback from Dreamrift » Useful SharePoint References Daniel: Any idea why I might be getting "could not load file or assembly" errors no matter what? I have followed your instructions above several times and used sn.exe to dig out the public key of my dll. I have also copied the dll to every bin folder related to WSS I can find, and dragged the dll into the GAC.... *sigh*. I'm lost. Any help would be appreciated. follow-up to my post of despair above ... especially since I did not remember my original fix when I had to install the switcher all over again in a new VM instance. My problem was that I had not renamed "MasterPageHandlerModule" to match the name of my .dll and had been attempting to refer to "MasterPageHandlerModule" rather than "wss_redirect". Jason, Glad to hear you got it working! Corps: Après plusieurs discussions entre collègues et lectures sur le net sur les cas pratiques d'utilisation Hi, I created my own custom application.master page and placed it in the /_catalog/masterpage directory as stated above, however my code continues to break on the following line: page.MasterPageFile = "/_catalogs/masterpage/MyCustom.master"; SharePoint continues to come back with a "file not found" error. I have tried changing the path several times to no avail. What am I doing wrong? By the way my class has been compiled, placed in the GAC, and I have made the proper entries in the application's web.config file. Thank You. Arnie, first off, I'd suggest that you verify that your master page has been properly approved through the Content Approval mechanism. It may also be related to how your site is authenticating. If you have anonymous on a site but require authentication at the root, you might be seeing this as well. You might also want to enable debugging to help ensure that the "file Not Found" error you are seeing is actually where you think it is (). Hi David, I tried your suggestion and I am still getting the same error (File Not Found). Thank You for providing the information on how to turn on Asp.NET debugging for the site, unfortunately it did not provide me with any additional information. I am open to suggestions. My code is as follows: using System; using System.Web; using System.Web.UI; using System.IO; namespace HPI_Http_Intercept { public class MasterPageModule :.MasterPageFile != null) { if (page.MasterPageFile.Contains("application.master")) { page.MasterPageFile = "/_catalogs/masterpage/HPhilips_Application.master"; } } public void Dispose() } } Thank You, Arnie Lugo Nice one dave, are you planning to link Sharepoint with your photography ?? Sorry Siam, you must have me confused with someone else. The extent of my photography skills is creating blurry photos of family vacations. Arnie, the only time I've seen this behavior was on a system that was using Content Management and required full approval for the master pages. Just out of curiousity, what happens when you manually set a site to use your new master page via the SharePoint UI? Hello: This is a very good article, I am a new SharePoint developer and I did the same steps as you mentioned. Created a Class Library Project Use your code Build Dll and copied it at inetput/wwwroot/.../bin when I go back and create a site or anything, it does not change the master page with mine. It does not give any error or anything. What do I have to do to make this http module execute? I dont think this code is being hit when I am browing the SharePoint site. Your help is appreciated James, There are a number of things it could be. First off, is your DLL strong-named. Secondly, did you modify the web.config for the site as mentioned in Step 2? Lastly, are you certain you are changing the right Site? SharePoint will create lots of sites and sometimes it may look like one directory is for the site you are working with but it is actually a different one. Also, have you checked both the Event log and the 12\LOGS folder for errors when loading a page. Both of those may provide additional clues. You were right, I was modifying the wrong webconfig file, thanks for your help. Now atleast I can see that its being called, i do get the following error though: ---------------------------------------------------------------------------------- File Not Found. at Microsoft.SharePoint.ApplicationRuntime.SPRequestModuleData.GetWebPartPageData(HttpContext context, String path, Boolean throwIfFileNotFound)) Troubleshoot issues with Windows SharePoint Services. --------------------------------------------------------------------------------------------- It seems like it can not find the file "_catalogs/masterpage/TestSolution.master"; When I deployed my solution, I can see this master page in master page gallery but I am not able to find any virtual directory _catalogs under any site. How can I tell if my new master page is at its right location i.e. _catalogs/masterpage direcotry. I have a solution, which creates a site definition template and when i use this template I can see that its using my master page and when i browse to another area within same site it was using application.master but now its giving me error. Thanks the catalogs folder should be off of the root of the site collection as '/_catalogs' not a subfolder of a particular web. Also, this folder does not appear in the 'Content and Structure' view but does appear when you are working on the site collection in SharePoint Designer. As long as your page is getting in there and it appears in the Master Page Gallery, that should be good enough. While you are in the Master Page Gallery, make sure that the 'Approval Status' for your new master page is set to 'Approved'. Also, make sure you don't have a Search Control on that Master Page as it will cause errors. That's why I recommend 2 master pages. One for normal content pages and one for system pages (i.e. if you see '_layouts' in the url, that means it is a system page) David, This is a great article. This is the exact problem I have with our portal site AS you mentioned, do not use this method if your sharepoint install has the shared Service Provider (SSP) installed on the same web application as your main sharepoint environment. Most of the companies use small farm for SharePoint. We have one server for sql database and the other server for everything else. Do you have any suggestions on how to make application master page works in this situation (SSP and web application on the same box)? Thanks, Peter Hi David! Great stuff, if I just would get it to work. Have copied the code exactly as it says, Signed the assembly and Build the project. Transferred my .dll to the bin-folder of my site, and altered the web.config (should you use the key that you get when signing the assembly or can I use the original key from the code here?) The error i get in eventviewer after changing the web.config is this: Exception information: Exception type: ConfigurationErrorsException Exception message: Could not load file or assembly 'MasterPageHandlerModule, Version=1.0.0.0, Culture=neutral, PublicKeyToken=6bdb1331dfc11306' or one of its dependencies. The system cannot find the file specified. (C:\Inetpub\wwwroot\wss\VirtualDirectories\80\web.config line 149) (C:\Inetpub\wwwroot\wss\VirtualDirectories\80\web.config line 149) PS. The dll is named MasterPageModule.dll DS What am I doing wrong? Please Help! Regards /Jimmy Hi Jimmy: The httpModule entry in webconfig shold be as follows: <add name=<Any tag name e.g. "Test"> type=<class name, if you have a namepace as well make sure its namespace.classname>, <DLL file name>, Version=<version>, Culture=neutral, PublicKeyToken=<key that you get from DLL u created for httpModule"/> Hi David, I followed your instruction, created a new class library. I copied the new build DLL into the bin folder under the site collection c:\Inetpub\wwwroot\wss\VirtualDirectories\mysitecollectname\bin\ folder instead of c:\Inetpub\wwwroot\wss\VirtualDirectories\80\bin folder. It looks like application.master page changed to my customized master page only for that site collection. My Central Administration Tool and other site collections still use the application.master page. That is great. Now the system pages switched to my customized master page instead of application.master page, but it does not pick the changes in ItemStyle.xsl file I modified. Is there any way I can force the system pages not only using customized master page but also to use ItemStyle.xsl? Thanks a lot David! Realised that my DLL file name was wrong, changed it and the error disappeared, but when I went to a System page I got an Unknown error. After some thinking and looking at the original application.master, I realised that it was a lot diffrent than default.master (which I just took a copy of, deleted the search-field and renamed) Took a copy of the original application.master and changed it instead, and voila! .. Works like a charm now! Pingback from Customizing Application.master « Greg Galipeau’s Weblog We followed everything what you mentioned in yoru document and it worked all great problem is when we click on mysite we get error. it could be bcoz we have SSP on same server,, But is there any way to solve this problem ?? even other sites on same server also have thsi problem too. I followed everything what you mentioned in your document and it worked all great. the problem is when I try to upload documents to document library I got the following error The security validation for this page is invalid. Click Back in your Web browser, refresh the page, and try your operation again. I checked the log file under 12\logs directory, there is no related entry logged there. When I commented out the following line of code in web.config. the problem is gone. <!-- <add name="MasterPageHandler" type="MasterPageModule, SPSMasterPageHandler, Version=1.0.0.0, Culture=neutral, PublicKeyToken=5c0b2f6acfaf6191" /> --> Any idea? Corpo: Olá pessoal, Estou escrevendo este post em resposta a uma série de dúvidas relacionadas ao modelo Thanks for the great tip! I am having one problem however. The master page loads correctly on the pages it is supposed to but when I try to add a column to a list there are errors on the page. The page renders but when I fill out the form and click the 'OK' button to submit nothing happens. Disabling the module in the web.config temporarily removes the issue. David, Great article, just one question...do i need to make a copy of application.master and default.master or do I use my customdefault.master as both? At one point in the article it seems to read as needing 2 different master pages and at another it seems like I am using one master in 2 different locations. Thanks. I have created the master pages and its working fine. Now, I want this to be installed on my site as an feature which can be activated / deactivated by the user. Can it be achievable ? Please reply soon Thanks! For me it works on _Layouts only (not on other application sites like List and Documents etc.) Any ideas? Great solution! Thank you. I have perhaps a novice problem whereby the client-side javascripts within my master page appear not to be loading now that the solution is in place. I use the "src" tag to reference my scripts from file for inclusion. I can embed the script into the master page and they work fine. Any clues why separate "src" might not work? I M gettin this exception in my event log. Always this prob occured very frequently once application Pool recycles. Exception type: ConfigurationErrorsException Exception message: Could not load type 'MasterPageModule'. (E:\Inetpub\wwwroot\wss\VirtualDirectories\myapps.csplc.org80\web.config line 145) Pleas suggest me the Solution I deployed this, never get any errors, so tried debugging and everytime it hits: Page page = HttpContext.Current.CurrentHandler as Page; it shows page being null, and thus never replaces the master page. do you have any idea why it would be showing as null? Thank you:.. Great solution! i do have one issue thou. i have one topsite, and several undersites and i cant figure out any way to modify page.MasterPageFile = "/_catalogs/masterpage/CustomApplication.master"; so that i can have one CustomApplication.master for each undersite. page.MasterPageFile = @"~\customer1\_catalogs\masterpage\CustomApplication.master"; works, but then all customers get the same .master file. any suggestions how to make this link dynamic? Thanks, For the great post... I have one doubt. This application.master works fine for the administration pages in the top level site. But when I go to subsites, the administration pages (i.e. _layouts pages) are again taking the master page as the default one (i.e. application.master). So how to achieve a single custom application.master across the sites? I am having the same error as James did in the past.) This error does not happen on the main site collection myportalsite/.../default.aspx. But my web application has about 20 other site collections in addition to the mail site collection. The error happened on all other site collections. myportalsite/.../default.aspx. "_catalogs/masterpage/MyApplication.master"; works fine for the main site collection, but not working for other site collections within the same application. I have tried to use "/sites/IT/_catalogs/masterpage/MyApplication.master"; but it is still not working Any suggestions? Thanks in advance, Same problem here. This application.master works fine for administration pages in the top level site. But when I go to subsites, the administration pages (i.e. _layouts pages) are again taking the master page as the default one (i.e. application.master).
http://www.sharepointblogs.com/dwise/archive/2007/01/08/one-master-to-rule-them-all-two-actually.aspx
crawl-002
en
refinedweb
. Pingback from Links (9/6/2007) « Steve Pietrek’s SharePoint Stuff Strong-typed provisioning of custom lists in Sharepoint 2007 Great Job Waldek, I've also discussed about an association between a lookup field a a list, via a feature. If you please, you can go there apichot.blogspot.com/.../sharepoint-2007-create-field-lookup-and.html it was a greate post. and it is working perfectly but one problem is when we select multiple value from lookup field the result always defferent .mean some time it take 2 value some time it is taking 3 .and it is taking value on its own. plz tell me why it is happning . and can we do it by using xml file. ok thanks once again for this post. can in the preveous comment ,bymistake i have specify wrong mail id.as ( [email protected]) thecorrect one is [email protected] On there is a DSL addin to Visual Studio to model CAML schema including lookups, but there is little information except screencast. I asked about demo and I'm waiting. Pingback from Creating Lookup Columns Using Features in SharePoint 2007 « Rusty’s SharePoint Blog you can also look at infowise product - Infowise Connected Fields - its also a manipulation of MOSS lookup fields using System; using System.Collections.Generic; using System.Text; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Security; using Microsoft.SharePoint; [assembly: AllowPartiallyTrustedCallers] namespace TestClassLibrary { public class HelloWorldWebPart : WebPart { protected override void Render(HtmlTextWriter writer) { currentWeb.AllowUnsafeUpdates = true; SPWeb currentWeb = SPContext.Current.Web; SPList list = currentWeb.Lists[VivekList]; SPListItem newListItem = list.Items.Add(); newListItem["Title"] = "Root"; newListItem["City"] = "Root"; newListItem.Update(); } } } by this you can access list in sharepoint and insert items in that ..this is very usefull article.. i have't founde this on net so i am posting this..but this will insert item when list will be there. so first need to create a list with two column title and city...and then through C# prgm u can add items programtically Enjoy Sharepoint! Hi Waldek, I am trying your approach but I am having some problems. First of all, when I am creating a site from the site definition wich includes this feature, SPContext.Current is null. I think I have overcome this problem using properties.Feature.Parent to get de Web instead. Second, once I get the web being created, the list collection doesn't have the list that I have to reference in my lookup field. (I guess because my code gets called before the list is created in the feature). What should I do? I don't understand why in your scenario you dont have the same problem. Thank you, Manuel
http://www.sharepointblogs.com/tmt/archive/2007/09/06/sharepoint-programmatically-provisioning-lookup-fields.aspx
crawl-002
en
refinedweb
Firefox 3.1 Alpha 2 Offers Cutting-Edge Web Standards Support Mozilla has released a second Firefox 3.1 alpha preview, incorporating some new under-the-hood improvements like HTML 5 video tag support, more CSS 3 properties and improved performance. Firefox 3.1 alpha 2 is still rough around the edges though and is intended for developers, not everyday users. Alpha 2 also lacks some of the standout features scheduled to land in the final release of Firefox 3.1, like the TraceMonkey JavaScript engine. Among the features that did make the alpha 2 cut, perhaps the most important are the increased CSS 3 and HTML 5 support. The support for HTML 5’s new video tag offers web publishers an easier and better way to embed video in their page, though with Internet Explorer potentially years away from including similar support, you might not want to convert your site just yet. Firefox 3.1 also adds some more CSS 3 properties like border-wrap, text-shadow, box-shadow and several more. With the CSS3 spec still not set in stone, Mozilla has packaged the properties within the -moz-property namespace. So far there’s no specific date for the first beta of Firefox 3.1, but Mozilla’s roadmap calls for beta 1 to arrive in late September or early October 2008. In the mean time you can download and test alpha 2, though for everyday use we recommend sticking with Firefox 3. See Also:
http://www.webmonkey.com/blog/Firefox_3DOT1_Alpha_2_Offers_Cutting-Edge_Web_Standards_Support
crawl-002
en
refinedweb
How to implement OpenID authentication and integrate it with TurboGears Identity Note from the original author of this article: A few days after I wrote this page, I found this article: Seems a better way, although I have not digged into. Just for info. What is OpenID? OpenID is an authentication mechanism favouring single-sign-on on the web. If your website implements OpenID authentication (as a client), your site doesn't need to store passwords and ask for simple registration information of your users. If someone has an OpenID (anybody can get an OpenID for free by registering at OpenID provider sites like), he can directly login through this, and your site can access his information. More information about OpenID can be found at: Integrating OpenID with TurboGears Identity Here we are going to discuss about how to integrate OpenID (client part) with TurboGears identity management. It is easy to integrate OpenID authentication with the identity framework in a TurboGears application with some tricks. But before we understand how the integration would work, we must understand some TurboGears identity basics. Let's understand what exactly happens when we call a controller method requiring authentication. Say you have a controller method like this: @expose() @identity.require(identity.not_anonymous()) def some_url(self, param1, param2) return "Hi " + param1 + param2 To call this, you need to invoke¶m2=y. Now, if you are not authenticated, you will be redirected to the login page, where you give your user name and password. When you press the "Login" button in the login form, what is actually invoked is:¶m2=y&user_name=your_name& password=your_password&login=Login. (all on one line) You might like to have a look at login.kid to have an understanding on this. Now let's talk about an interesting rule TurboGears follows. Whenever TurboGears sees an url having user_name, password and login parameters, it removes these parameters, after using them if needed. So, if you invoke:¶m2=y&user_name=your_name& password=your_password&login=Login (all on one line again) after the authentication taking place, what actually some_url will see is only param1 and param2. And, if the authentication fails, you will be redirected to the login page again. Having understood this, let's now have a minimal understanding how OpenID authentication works in general. Very briefly, OpenID authentication is done in two steps: - After getting the OpenID of the user, you call his OpenID site with his name and some other parameters. Let's name the method to call the OpenID site login_begin. - From the OpenID site, you receive authentication information and additional data. Let's name the method to receive this data login_finish. Start to integrate So, to integrate TurboGears identity and OpenID authentication, we need to do the following: Change the login form to post to login_begin instead of ${previous_url}. Your login.kid will now have: <form action="/login_begin" method="POST"> Introduce previous_url as a hidden field, so that its value is preserved. Add this line to the login form: <input type="hidden" name="previous_url" value="${previous_url}"/> Change the id and name of the user_name field to openid_url: <input type="text" id="openid_url" name="openid_url"/> Change the type of password field to hidden: <input type="hidden" id="password" name="password"/> - Write the method login_begin. - Write the method login_finish. In login finish, if OpenID authentication succeeds, you need to set a random password for the user. You may not be able to digest all this now, until you see the tutorial and read through the source code given below. Tutorial - Creating a TurboGears Application with OpenID support Follow these steps below to have an OpenID enabled TurboGears application. This tutorial uses SQLAlchemy and sqlite. - Install SQLAlchemy and sqlite (with pysqlite) on your machine if not already installed. - Install the python library for OpenID support from here. Download the combo pack - latest version. (This code was tested using Python OpenID 1.2.0 Combo and works well with leading OpenID servers, although I am not aware which specification of OpenID it implements.) Create a TurboGears application by the command: tg-admin quickstart -i -s -t tgbig Specify project name and package name as tgopenid. In root.py of the controllers package, ensure that the User class is imported from model.py by having the line: from tgopenid.model import User For OpenID support, we need some imports and utility functions. These are described below. Have these just above the Root class in root.py: ######################################################### # Added for OpenID support ######################################################### import turbogears from turbogears import flash from pysqlite2 import dbapi2 as sqlite from openid.consumer import consumer from openid.store import sqlstore from openid.cryptutil import randomString from yadis.discover import DiscoveryFailure from urljr.fetchers import HTTPFetchingError # Utility functions def _flatten(dictionary, inner_dict): """ Given a dictionary like this: {'a':1, 'b':2, 'openid': {'i':1, 'j':2}, 'c': 4}, flattens it to have: {'a':1, 'b':2, 'openid.i':1, 'openid.j':2, 'c':4} """ if dictionary.has_key(inner_dict): d = dictionary.pop(inner_dict) for k in d.iterkeys(): dictionary[inner_dict +'.' + k] = d[k] def _prefix_keys(dictionary, prefix): " Prefixes the keys of dictionary with prefix " d = {} for k, v in dictionary.iteritems(): d[prefix + '.' + k] = v return d def _get_openid_store_connection(): """ Returns a connection to the database used by openid library Is it needed to close the connection? If yes, where to close it? """ return sqlite.connect("openid.db") def _get_openid_consumer(): """ Returns an openid consumer object """ from cherrypy import session con = _get_openid_store_connection() store = sqlstore.SQLiteStore(con) session['openid_tray'] = session.get('openid_tray', {}) return consumer.Consumer(session['openid_tray'], store) def _get_previous_url(**kw): """ if kw is something like {'previous_url' : 'some_controller_url', 'openid_url' : 'an_openid.myopenid.com', 'password' : 'some_password', 'login' : 'Login', 'param1' : 'param1' 'param2' : 'param2' } the value returned is:? user_name=an_openid.myopenid.com& password=some_password&login=Login¶m1=param1¶m2=param2 (on a single line) """ kw['user_name'] = kw.pop('openid_url') previous_url = kw.pop('previous_url') return turbogears.url(previous_url, kw) Inside the Root controller class, at the bottom, write the code for login_begin and login_finish as below:@expose() def login_begin(self, **kw): if len(kw['openid_url']) == 0: # openid_url was not provided by the user flash('Please enter your openid url') raise redirect(_get_previous_url(**kw)) oidconsumer = _get_openid_consumer() try: req = oidconsumer.begin(kw['openid_url']) except HTTPFetchingError, exc: flash('HTTPFetchingError retrieving identity URL (%s): %s' \ % (kw['openid_url'], str(exc.why))) raise redirect(_get_previous_url(**kw)) except DiscoveryFailure, exc: flash('DiscoveryFailure Error retrieving identity URL (%s): %s' \ % (kw['openid_url'], str(exc[0]))) raise redirect(_get_previous_url(**kw)) else: if req is None: flash('No OpenID services found for %s' % \ (kw['openid_url'],)) raise redirect(_get_previous_url(**kw)) else: # Add server.webpath variable # in your configuration file for turbogears.url to # produce full complete urls # e.g. server.webpath="" trust_root = turbogears.url('/') return_to = turbogears.url('/login_finish', _prefix_keys(kw, 'app_data')) # As we want also to fetch nickname and email # of the user from the server, # we have added the line below req.addExtensionArg('sreg', 'optional', 'nickname,email') req.addExtensionArg('sreg', 'policy_url', '') redirect_url = req.redirectURL(trust_root, return_to) raise redirect(redirect_url) @expose() def login_finish(self, **kw): """Handle the redirect from the OpenID server. """ app_data = kw.pop('app_data') # As consumer.complete needs a single flattened dictionery, # we have to flatten kw. See flatten's doc string # for what it exactly does _flatten(kw, 'openid') _flatten(kw, 'openid.sreg') oidconsumer = _get_openid_consumer() info = oidconsumer.complete(kw) if info.status == consumer.FAILURE and info.identity_url: # In the case of failure, if info is non-None, it is the # URL that we were verifying. We include it in the error # message to help the user figure out what happened. flash("Verification of %s failed. %s" % \ (info.identity_url, info.message)) raise redirect(_get_previous_url(**app_data)) elif info.status == consumer.SUCCESS: # Success means that the transaction completed without # error. If info is None, it means that the user cancelled # the verification. # This is a successful verification attempt. # identity url may be like # strip it to yourid.myopenid.com user_name = info.identity_url.rstrip('/').rsplit('/', 1)[-1] # get sreg information about the user user_info = info.extensionResponse('sreg') u = User.get_by(user_name=user_name) if u is None: # new user, not found in database u = User(user_name=user_name) if user_info.has_key('email'): u.email_address = user_info['email'] if user_info.has_key('nickname'): u.display_name = user_info['nickname'] u.password = randomString(8, "abcdefghijklmnopqrstuvwxyz0123456789") try: u.flush() except Exception, e: flash('Error saving user: ' + str(e)) raise redirect(turbogears.url('/')) app_data['openid_url'] = user_name app_data['password'] = u.password raise redirect(_get_previous_url(**app_data)) elif info.status == consumer.CANCEL: # cancelled flash('Verification cancelled') raise redirect(turbogears.url('/')) else: # Either we don't understand the code or there is no # openid_url included with the error. Give a generic # failure message. The library should supply debug # information in a log. flash('Verification failed') raise redirect(turbogears.url('/')) To test your program, add a method as below: @expose() @identity.require(identity.not_anonymous()) def whoami(self, **kw): u = identity.current.user return "\nYour openid_url: " + u.user_name + \ "\nYour email_address: " + u.email_address + \ "\nYour nickname: " + u.display_name + \ "\nThe following parameters were supplied by you: " + str(kw) - Change login.kid as discussed in the previous section. - Add session_filter.on = True under the global section in app.cfg. OpenID implementation needs session support. - Add server.webpath="" under the global section in dev.cfg. It is needed to build full urls in login_begin. You need a database, called openid_store for OpenID to run. This typically should be different from your application database. To create an OpenID database, run the createstore.py script given below in the project root directory (wherever you have dev.cfg): # createstore.py from pysqlite2 import dbapi2 as sqlite from openid.store import sqlstore con = sqlite.connect('openid.db') store = sqlstore.SQLiteStore(con) store.createTables() In model.py, increase the size of the user_name field in users_table from 16 to 255: Column('user_name', Unicode(255), unique=True), - Create the database for your application by tg-admin sql create. - Test your application! An obvious test case is to try Notes - The sample application is at tgopenid.tar.gz - If you develop an application using OpenID, it might be time consuming while testing the authentication with a live OpenID server like. To save time, you may like to run server.py at python-openid-x.x.x/examples folder in the python library you downloaded from and run it using the command python server.py --port 7999. Then, while logging in from your application, you can use OpenID url as. References - - - - - - Past comments: localhost 2007-05-28 08:13:34 Attached a tg-admin command 'createopenidstore' (loooking for a file configured as 'openid_store', and if this does not exist runs the command given as createstore.py in 10. above).
http://docs.turbogears.org/1.0/RoughDocs/OpenIDWithIdentity
crawl-002
en
refinedweb
" Issue #96 is now available at: This issue marks a change in hosting provisions. The Linux Gazette is no longer associated with SSC (publishers of the Linux Journal). The staff (all volunteers) of the Linux Gazette reached a consensus to make new arrangements and we are all now at the linuxgazette.net site. For details about these changes please read Ben Okopnik's lead editorial: ... and for an historical perspective Rick Moen has but together a retrospective: I, Jim Dennis, will be continuing on in my role as curmudgeon coach to "The Answer Gang" while Mike Orr and Heather Stern continue to do most of the work. One note which might cause some confusion: SSC Inc. has implemented a CMS-driven (content management system) which they are still calling The Linux Gazette ( ). We consider this to be a fork in the project. While we appreciate SSC's support over the last several years, and we wish them well in their endeavors; we felt that the CMS-based approach detracted from Linux Gazette's fundamental nature as an online periodical which could continue to be easily mirrored, packaged, translated, indexed, and incorporated into the Linux Documentation Project (LDP) as it has been since before its sponsorship by SSC. Also please note that Heather Stern and I (Jim Dennis) have no editorial control over the CMS-driven site, despite the appearance of our names under the "Staff" heading at the bottom of linuxgazette.COM's index page. While we're honored for the recognition of our past efforts, we hope this won't cause any confusion for the readership of these two respective sites. -- Jim Dennis, The Linux Gazette "Answer Gang" Linux Gazette is Published under the OPL This is quite disturbing, IMO Posted Nov 2, 2003 17:36 UTC (Sun) by TheOneKEA (subscriber, #615) [Link] Thank God for the OPL, and the Debian LG mirror packages. The Linux Gazette forks Posted Nov 2, 2003 22:04 UTC (Sun) by wolfrider (guest, #3105) [Link] Misleading headline Posted Nov 3, 2003 14:21 UTC (Mon) by dave (guest, #7) [Link] Now, how SSC responds, will be another matter. It looks to me like the LG folks just want to continue doing what they have been doing, but SSC wants to take this in the wrong direction. SSC could be hard-nosed with the domain and cause a fork, but I just can't see it happening. Hopefully they will give the linuxgazette.com domain to the editors and that'll be that. If they don't do that, then maybe we'll have a fork. Dave Posted Nov 3, 2003 14:56 UTC (Mon) by drathos (guest, #6454) [Link] One note which might cause some confusion: SSC Inc. has implemented a CMS-driven (content management system) which they are still calling The Linux Gazette ( ). We consider this to be a fork in the project. Posted Nov 4, 2003 2:29 UTC (Tue) by dave (guest, #7) [Link] Posted Nov 3, 2003 15:17 UTC (Mon) by AnswerGuy (subscriber, #1256) [Link] I don't believe the headline was misleading. We (at the LinuxGazette.NET) team are treating it as a fork because they are continuing to control part of the traditional LG namespace (the .com domain) and run some other sort of content on it. We also would like to see the linuxgazette.com domain ceded to us and would be absolutely delighted to provide prominent links back to SSC's own domains. We generally feel that the CMS-driven site is not a "gazette" and that it should be given a name that more accurately reflects its nature and mission. (We won't presume to suggest any such names, either). :) Ultimately the readership will choose. A large part of our readership is indirect (through the many mirror sites, including the translated versions, Debian packages, Palm PDB forms, etc). Because our old site no longer provides content which is amenable to these forms of distribution we want everyone to know where the traditional LG content can be found. None of us wants this to be another "open source group airs its laundry" bickerfest. We're happy to link back to SSC's tine in the fork and would appreciate if they'd do our (mutual) readership the same courtesy. Readers who wish to express their opinions are encouraged to be polite and to copy both of our sites: [email protected] (the new site for the old team) and (hunts at the old SSC site carrying the new CMS stuff) ... well I guess you'll have to post any feedback to them through their CMS engine at:, JimD Posted Nov 4, 2003 15:03 UTC (Tue) by ekj (guest, #1524) [Link] Posted Nov 4, 2003 19:32 UTC (Tue) by Frodo (guest, #16508) [Link] SSC is censoring its forums; claiming trademark Posted Nov 6, 2003 0:19 UTC (Thu) by rickmoen (subscriber, #6943) [Link] SSC just now summarily deleted the approx. 1/2 to 1/3 of all posts to its Web forum that in any way mentioned the departure of Linux Gazette to. There is no explanation of this in the forum proper, but SSC publisher Phil Hughes has addressed it his Publisher's Corner weblog. In it, he says SSC has been "advised" (impliedly, by legal counsel) to delete all posts that mention the editors' new site, as trespassing against what he calls "our Linux Gazette trademark". As part of the discussion process that went into the staff's (unanimous) decision to leave, the editors attempted due diligence to prevent transgressing anyone's copyrights and trademarks. We found that no trademarks for Linux Gazette had ever been registered — and moreover that, by law, trademark coverage is available only for marks used on an ongoing basis for branding purposes on products sold in commerce. LG has always been explictly non-commercial. Remember, too, that the Gazette had already established its name and identity on its own, long before accepting SSC's kind offer of help in 1996. So I am mystified about the trademark claim, and cannot help thinking it inherently bogus. At the minimum, it will take a great deal of explaining. Also, isn't he implicitly claiming ownership of our volunteer Linux-community magazine's name as a commercial property? Is this going to become the new pattern for SSC's approach to volunteer-run, non-commercial Linux community projects? Offer to help them out, and then convert them against the volunteers' wishes to commercial corporate assets? I hope Mr. Hughes's advisors can convince him that the above honestly is not the precedent he wants for his company's relations with the volunteer Linux community, and will help him turn back from this course of action, before the damage can no longer be repaired. We've been trying to take the high road, and hope everyone else will, too. Last, my thanks to LWN for offering this feedback forum, and for carrying the story. Censorship Posted Nov 7, 2003 7:54 UTC (Fri) by alrac (guest, #16623) [Link] And SSC is using the Linux Gazette name and motto- "Making linux a little more fun" I think this is a bad situation, and SSC looks like the bad guys. Linux is a registered trademark of Linus Torvalds
http://lwn.net/Articles/56377/
crawl-002
en
refinedweb
Section (3) rtime Name rtime — get time from a remote machine Synopsis #include <rpc/auth_des.h>−>tv_sec. In case of error −1 rpc/auth_des.h < >instead of < >−>h_addr, hent−>h_length); ret = rtime(&name, &time1, use_tcp ? NULL : &timeout); if (ret < 0) perror(rtime error); else { time_t t = time1.tv_sec; printf(%s , ctime(&t)); } exit(EXIT_SUCCESS); }
https://manpages.net/detail.php?name=rtime
CC-MAIN-2022-21
en
refinedweb
Simple C# Threading and Thread Safety A few days ago I compared and contrasted Asynchronous and Parallel Programming. Today I would like to walk you through a very simple threading example in C#, and why the concept of “thread safety” is important to know before you start writing parallel applications. Since we already know what exactly parallel programming is, and how it is different from asynchronous calls we can go ahead and drill down to some more aspects of parallel programming. So here is an application that will start a new thread from the main, and both threads will call the same method: using System; using System.Threading; class Threading101 { static bool fire; static void Main() { new Thread(FireMethod).Start(); // Call FireMethod() on new thread FireMethod(); // Call FireMethod() on main thread } static void FireMethod() { if (!fire) { Console.WriteLine("Method Fired"); fire = true; } } } Now at first glance you might say that we will only see “Method Fired” shown on screen, however when we run the program we will see this output: Well we have obviously called the method on both threads, but we got an undesired output! This example shows the clear issues you will encounter when working with threads in C#. This method is defined as thread unsafe and does not work correctly (in its current state) when used in threading applications. So what can we do about this? Well we need some method of preventing a thread from entering a method when another thread is entering a critical part of the method. So we just have to update our code to account for some type of locking functionality and then re-work our application: using System; using System.Threading; class Threading101 { static bool fire; static readonly object locker = new object(); static void Main() { new Thread(FireMethod).Start(); // Call FireMethod() on new thread FireMethod(); // Call FireMethod() on main thread } static void FireMethod() { // Use Monitor.TryEnter to attempt an exclusive lock. // Returns true if lock is gained if (Monitor.TryEnter(locker)) { lock (locker) { if (!fire) { Console.WriteLine("Method Fired"); fire = true; } } } else { Console.WriteLine("Method is in use by another thread!"); } } } Running the code above now produces a better threading result: Now that looks better! We made sure that only a single thread could enter a critical section of the code, and prevent other threads from stepping in. We first use Monitor.TryEnter(locker) to check for a lock, and if there is no lock we step in and run our code. If there is already a lock, we have added the logic to print that result to screen. Pretty simple huh? So this little app spawns an extra thread, and both threads fire the same method. However, the variable is only changed once, and the message from the method is only printed once. The first snippet is a perfect example of a method that is not thread safe, and the second one is a great way to protect that same method.
https://urda.com/blog/2010/10/06/simple-c-threading-thread-safety
CC-MAIN-2022-21
en
refinedweb
The following code should build just fine on modern versions of swift-corelibs-xctest: import XCTest class PassingTestCase: XCTestCase { static var allTests: [(String, PassingTestCase -> () throws -> Void)] { return [ ("test_passes", test_passes), ] } func test_passes() { XCTAssert(true) } } XCTMain([ testCase(PassingTestCase.allTests), ]) However, it currently fails with: /swift-execution/code-tmp.swift:3:7: error: type 'PassingTestCase' does not conform to protocol 'XCTestCaseProvider' class PassingTestCase: XCTestCase { ^ XCTestCaseProvider hasn't been included in XCTest in any snapshot since March 1, so I assume the Sandbox is running fairly old version of XCTest. This seems odd, though--the Sandbox uses Swift 3, but swift-corelibs-xctest wasn't migrated to Swift 3 until *after* the commit that removed `XCTestCaseProvider`. So what's the deal? Which version of swift-corelibs-xctest is being used on the Sandbox? Is it a fork? Answer by Karl_Weinmeister (1) | May 18, 2016 at 01:54 PM Hi @modocache, with the latest Swift snapshot available on the Swift Sandbox, you will now see: Test Case 'PassingTestCase.test_passes' started. Test Case 'PassingTestCase.test_passes' passed (0.0 seconds). Executed 1 test, with 0 failures (0 unexpected) in 0.0 (0.0) seconds Total executed 1 test, with 0 failures (0 unexpected) in 0.0 (0.0) seconds 57 people are following this question. Why is there an error 404 message when logging on to swift sandbox through github? 2 Answers A weird result while getting address from array elements in swift sandbox 1 Answer simple program iin Swift Sandbox crashes 2 Answers Getting Error while running HelloWorld Program in IBM Swift Sandbox 1 Answer What is the mechanism for reporting bugs in the IBM Swift Sandbox? 1 Answer
https://developer.ibm.com/answers/questions/261810/$%7BprofileUser.profileUrl%7D/
CC-MAIN-2019-43
en
refinedweb
The CWARN.DTOR.NONVIRT.NOTEMPTY checker flags classes in which the class declares virtual member functions inherited from its base class, but its destructor isn't virtual or empty. When an object of a class derived from a base class is deleted through a pointer to the base class, the destructor of the derived class isn't executed, and members of the derived class aren't disposed of correctly. This situation can lead to leaked memory and unreleased resources. It's important to provide a virtual destructor, even if it will be empty, in a class that contains a virtual method and some member data that must be properly disposed of, implicitly or explicitly. Derived classes will inherit from the base class, and if the base class destructor isn't defined as virtual, memory won't be deallocated properly. In a hierarchy of classes declaring or overriding virtual functions, the virtual destructor has to be defined only once, for the root class of the hierarchy. 1 #include <iostream> 2 using namespace std; 3 4 class A 5 { 6 public: 7 8 ~A() {std::cout << "I am A" << std::endl;} 9 virtual void f1(); 10 }; 11 12 class B : public A 13 { 14 public: 15 16 ~B() {cout << "I am B" << endl;} 17 virtual void f1(); 18 }; In this code example, the non-virtual destructor in class A means that Klocwork flags line 16.
https://docs.roguewave.com/en/klocwork/current/cwarn.dtor.nonvirt.notempty
CC-MAIN-2019-43
en
refinedweb
On 05/21/2015 02:06 AM, Artyom Tarasenko wrote: > Hi Richard, > > looking at target-sparc/cpu.h and target-sparc/ldst_helper.c I have an > impression, that 2 mmu modes are not enough for sparc (32) machines: > they have 4 types of accesses: the combination of user/privileged and > data/code. Data vs code doesn't need separate mmu modes. Just different methods of access. That said, sparc64 has 6 modes... > Also afaics cpu_ldu{b,w,l,q}_code uses the currently selected MMU mode. > if this is correct, the current implementation of ASI 0x9 ( /* > Supervisor code access */) in target-sparc/ldst_helper.c is imprecise, > it would use the current mmu translation which is not necessarily > privileged. On sparc32, we are guaranteed to be privileged, and there's a check for that in the translator. #ifndef TARGET_SPARC64 if (IS_IMM) goto illegal_insn; if (!supervisor(dc)) goto priv_insn; #endif On sparc64, there are two modes higher than kernel: nucleus and hypervisor. For these, the access is being done with the wrong mode. Further, there's no check in helper_ld_asi for permissions. The double-bug means there isn't currently a hole in user accessing supervisor code, but to fix one bug requires that we fix the other. > Also I wonder how to implement a user_code access (ASI 0x8). Do I have > to add more NB_MMU_MODES? No, you just need to use the right function. In this case helper_ld*_cmmu, which includes an mmu_idx parameter, performs a read with "code" or execute permissions rather than "data" or read permissions. This whole area could stand to be totally re-written, btw. Especially to support the sparcv9 immediate asi with simple loads from non-default modes, the byte-swapping asis, and the fpu data movement asis. r~
https://lists.gnu.org/archive/html/qemu-devel/2015-05/msg04348.html
CC-MAIN-2019-43
en
refinedweb
So far in this series we’ve seen elliptic curves from many perspectives, including the elementary, algebraic, and programmatic ones. We implemented finite field arithmetic and connected it to our elliptic curve code. So we’re in a perfect position to feast on the main course: how do we use elliptic curves to actually do cryptography? History As the reader has heard countless times in this series, an elliptic curve is a geometric object whose points have a surprising and well-defined notion of addition. That you can add some points on some elliptic curves was a well-known technique since antiquity, discovered by Diophantus. It was not until the mid 19th century that the general question of whether addition always makes sense was answered by Karl Weierstrass. In 1908 Henri Poincaré asked about how one might go about classifying the structure of elliptic curves, and it was not until 1922 that Louis Mordell proved the fundamental theorem of elliptic curves, classifying their algebraic structure for most important fields. While mathematicians have always been interested in elliptic curves (there is currently a million dollar prize out for a solution to one problem about them), its use in cryptography was not suggested until 1985. Two prominent researchers independently proposed it: Neal Koblitz at the University of Washington, and Victor Miller who was at IBM Research at the time. Their proposal was solid from the start, but elliptic curves didn’t gain traction in practice until around 2005. More recently, the NSA was revealed to have planted vulnerable national standards for elliptic curve cryptography so they could have backdoor access. You can see a proof and implementation of the backdoor at Aris Adamantiadis’s blog. For now we’ll focus on the cryptographic protocols themselves. The Discrete Logarithm Problem Koblitz and Miller had insights aplenty, but the central observation in all of this is the following. Adding is easy on elliptic curves, but undoing addition seems hard. What I mean by this is usually called the discrete logarithm problem. Here’s a formal definition. Recall that an additive group is just a set of things that have a well-defined addition operation, and the that notation means ( times). Definition: Let be an additive group, and let be elements of so that for some integer . The discrete logarithm problem asks one to find when given and . I like to give super formal definitions first, so let’s do a comparison. For integers this problem is very easy. If you give me 12 and 4185072, I can take a few seconds and compute that using the elementary-school division algorithm (in the above notation, , and ). The division algorithm for integers is efficient, and so it gives us a nice solution to the discrete logarithm problem for the additive group of integers . The reason we use the word “logarithm” is because if your group operation is multiplication instead of addition, you’re tasked with solving the equation for . With real numbers you’d take a logarithm of both sides, hence the name. Just in case you were wondering, we can also solve the multiplicative logarithm problem efficiently for rational numbers (and hence for integers) using the square-and-multiply algorithm. Just square until doing so would make you bigger than , then multiply by until you hit . But integers are way nicer than they need to be. They are selflessly well-ordered. They give us division for free. It’s a computational charity! What happens when we move to settings where we don’t have a division algorithm? In mathematical lingo: we’re really interested in the case when is just a group, and doesn’t have additional structure. The less structure we have, the harder it should be to solve problems like the discrete logarithm. Elliptic curves are an excellent example of such a group. There is no sensible ordering for points on an elliptic curve, and we don’t know how to do division efficiently. The best we can do is add to itself over and over until we hit , and it could easily happen that (as a number) is exponentially larger than the number of bits in and . What we really want is a polynomial time algorithm for solving discrete logarithms. Since we can take multiples of a point very fast using the double-and-add algorithm from our previous post, if there is no polynomial time algorithm for the discrete logarithm problem then “taking multiples” fills the role of a theoretical one-way function, and as we’ll see this opens the door for secure communication. Here’s the formal statement of the discrete logarithm problem for elliptic curves. Problem: Let be an elliptic curve over a finite field . Let be points on such that for some integer . Let denote the number of bits needed to describe the point . We wish to find an algorithm which determines and has runtime polynomial in . If we want to allow randomness, we can require the algorithm to find the correct with probability at least 2/3. So this problem seems hard. And when mathematicians and computer scientists try to solve a problem for many years and they can’t, the cryptographers get excited. They start to wonder: under the assumption that the problem has no efficient solution, can we use that as the foundation for a secure communication protocol? The Diffie-Hellman Protocol and Problem Let’s spend the rest of this post on the simplest example of a cryptographic protocol based on elliptic curves: the Diffie-Hellman key exchange. A lot of cryptographic techniques are based on two individuals sharing a secret string, and using that string as the key to encrypt and decrypt their messages. In fact, if you have enough secret shared information, and you only use it once, you can have provably unbreakable encryption! We’ll cover this idea in a future series on the theory of cryptography (it’s called a one-time pad, and it’s not all that complicated). All we need now is motivation to get a shared secret. Because what if your two individuals have never met before and they want to generate such a shared secret? Worse, what if their only method of communication is being monitored by nefarious foes? Can they possibly exchange public information and use it to construct a shared piece of secret information? Miraculously, the answer is yes, and one way to do it is with the Diffie-Hellman protocol. Rather than explain it abstractly let’s just jump right in and implement it with elliptic curves. As hinted by the discrete logarithm problem, we only really have one tool here: taking multiples of a point. So say we’ve chosen a curve and a point on that curve . Then we can take some secret integer , and publish and for the world to see. If the discrete logarithm problem is truly hard, then we can rest assured that nobody will be able to discover . How can we use this to established a shared secret? This is where Diffie-Hellman comes in. Take our two would-be communicators, Alice and Bob. Alice and Bob each pick a binary string called a secret key, which in interpreted as a number in this protocol. Let’s call Alice’s secret key and Bob’s , and note that they don’t have to be the same. As the name “secret key” suggests, the secret keys are held secret. Moreover, we’ll assume that everything else in this protocol, including all data sent between the two parties, is public. So Alice and Bob agree ahead of time on a public elliptic curve and a public point on . We’ll sometimes call this point the base point for the protocol. Bob can cunningly do the following trick: take his secret key and send to Alice. Equally slick Alice computes and sends that to Bob. Now Alice, having , computes . And Bob, since he has , can compute . But since addition is commutative in elliptic curve groups, we know . The secret piece of shared information can be anything derived from this new point, for example its -coordinate. If we want to talk about security, we have to describe what is public and what the attacker is trying to determine. In this case the public information consists of the points . What is the attacker trying to figure out? Well she really wants to eavesdrop on their subsequent conversation, that is, the stuff that encrypt with their new shared secret . So the attacker wants find out . And we’ll call this the Diffie-Hellman problem. Diffie-Hellman Problem: Suppose you fix an elliptic curve over a finite field , and you’re given four points and for some unknown integers . Determine if in polynomial time (in the lengths of ). On one hand, if we had an efficient solution to the discrete logarithm problem, we could easily use that to solve the Diffie-Hellman problem because we could compute and them quickly compute and check if it’s . In other words discrete log is at least as hard as this problem. On the other hand nobody knows if you can do this without solving the discrete logarithm problem. Moreover, we’re making this problem as easy as we reasonably can because we don’t require you to be able to compute . Even if some prankster gave you a candidate for , all you have to do is check if it’s correct. One could imagine some test that rules out all fakes but still doesn’t allow us to compute the true point, which would be one way to solve this problem without being able to solve discrete log. So this is our hardness assumption: assuming this problem has no efficient solution then no attacker, even with really lucky guesses, can feasibly determine Alice and Bob’s shared secret. Python Implementation The Diffie-Hellman protocol is just as easy to implement as you would expect. Here’s some Python code that does the trick. Note that all the code produced in the making of this post is available on this blog’s Github page. def sendDH(privateKey, generator, sendFunction): return sendFunction(privateKey * generator) def receiveDH(privateKey, receiveFunction): return privateKey * receiveFunction() And using our code from the previous posts in this series we can run it on a small test. import os def generateSecretKey(numBits): return int.from_bytes(os.urandom(numBits // 8), byteorder='big') if __name__ == "__main__": F = FiniteField(3851, 1) curve = EllipticCurve(a=F(324), b=F(1287)) basePoint = Point(curve, F(920), F(303)) aliceSecretKey = generateSecretKey(8) bobSecretKey = generateSecretKey(8) alicePublicKey = sendDH(aliceSecretKey, basePoint, lambda x:x) bobPublicKey = sendDH(bobSecretKey, basePoint, lambda x:x) sharedSecret1 = receiveDH(bobSecretKey, lambda: alicePublicKey) sharedSecret2 = receiveDH(aliceSecretKey, lambda: bobPublicKey) print('Shared secret is %s == %s' % (sharedSecret1, sharedSecret2)) Pythons os module allows us to access the operating system’s random number generator (which is supposed to be cryptographically secure) via the function urandom, which accepts as input the number of bytes you wish to generate, and produces as output a Python bytestring object that we then convert to an integer. Our simplistic (and totally insecure!) protocol uses the elliptic curve defined by over the finite field . We pick the base point , and call the relevant functions with placeholders for actual network transmission functions. There is one issue we have to note. Say we fix our base point . Since an elliptic curve over a finite field can only have finitely many points (since the field only has finitely many possible pairs of numbers), it will eventually happen that is the ideal point. Recall that the smallest value of for which is called the order of . And so when we’re generating secret keys, we have to pick them to be smaller than the order of the base point. Viewed from the other angle, we want to pick to have large order, so that we can pick large and difficult-to-guess secret keys. In fact, no matter what integer you use for the secret key it will be equivalent to some secret key that’s less than the order of . So if an attacker could guess the smaller secret key he wouldn’t need to know your larger key. The base point we picked in the example above happens to have order 1964, so an 8-bit key is well within the bounds. A real industry-strength elliptic curve (say, Curve25519 or the curves used in the NIST standards*) is designed to avoid these problems. The order of the base point used in the Diffie-Hellman protocol for Curve25519 has gargantuan order (like ). So 256-bit keys can easily be used. I’m brushing some important details under the rug, because the key as an actual string is derived from 256 pseudorandom bits in a highly nontrivial way. So there we have it: a simple cryptographic protocol based on elliptic curves. While we didn’t experiment with a truly secure elliptic curve in this example, we’ll eventually extend our work to include Curve25519. But before we do that we want to explore some of the other algorithms based on elliptic curves, including random number generation and factoring. Why do we use elliptic curves for this? Why not do something like RSA and do multiplication (and exponentiation) modulo some large prime? Well, it turns out that algorithmic techniques are getting better and better at solving the discrete logarithm problem for integers mod , leading some to claim that RSA is dead. But even if we will never find a genuinely efficient algorithm (polynomial time is good, but might not be good enough), these techniques have made it clear that the key size required to maintain high security in RSA-type protocols needs to be really big. Like 4096 bits. But for elliptic curves we can get away with 256-bit keys. The reason for this is essentially mathematical: addition on elliptic curves is not as well understood as multiplication is for integers, and the more complex structure of the group makes it seem inherently more difficult. So until some powerful general attacks are found, it seems that we can get away with higher security on elliptic curves with smaller key sizes. I mentioned that the particular elliptic curve we chose was insecure, and this raises the natural question: what makes an elliptic curve/field/basepoint combination secure or insecure? There are a few mathematical pitfalls (including certain attacks we won’t address), but one major non-mathematical problem is called a side-channel attack. A side channel attack against a cryptographic protocol is one that gains additional information about users’ secret information by monitoring side-effects of the physical implementation of the algorithm. The problem is that different operations, doubling a point and adding two different points, have very different algorithms. As a result, they take different amounts of time to complete and they require differing amounts of power. Both of these can be used to reveal information about the secret keys. Despite the different algorithms for arithmetic on Weierstrass normal form curves, one can still implement them to be secure. Naively, one might pad the two subroutines with additional (useless) operations so that they have more similar time/power signatures, but I imagine there are better methods available. But much of what makes a curve’s domain parameters mathematically secure or insecure is still unknown. There are a handful of known attacks against very specific families of parameters, and so cryptography experts simply avoid these as they are discovered. Here is a short list of pitfalls, and links to overviews: - Make sure the order of your basepoint has a short facorization (e.g., is or for some prime ). Otherwise you risk attacks based on the Chinese Remainder Theorem, the most prominent of which is called Pohlig-Hellman. - Make sure your curve is not supersingular. If it is you can reduce the discrete logarithm problem to one in a different and much simpler group. - If your curve is defined over , make sure the number of points on is not equal to . Such a curve is called prime-field anomalous, and its discrete logarithm problem can be reduced to the (additive) version on integers. - Don’t pick a small underlying field like for small . General-purpose attacks can be sped up significantly against such fields. - If you use the field , ensure that is prime. Many believe that if has small divisors, attacks based on some very complicated algebraic geometry can be used to solve the discrete logarithm problem more efficiently than any general-purpose method. This gives evidence that being composite at all is dangerous, so we might as well make it prime. This is a sublist of the list provided on page 28 of this white paper. The interesting thing is that there is little about the algorithm and protocol that is vulnerable. Almost all of the vulnerabilities come from using bad curves, bad fields, or a bad basepoint. Since the known attacks work on a pretty small subset of parameters, one potentially secure technique is to just generate a random curve and a random point on that curve! But apparently all respected national agencies will refuse to call your algorithm “standards compliant” if you do this. Next time we’ll continue implementing cryptographic protocols, including the more general public-key message sending and signing protocols. Until then! Your Problem statement has misformated latex: “Let be points” It’s weird because that’s the correct format (as you just saw). It’s fixed now. TeX in WordPress is such a hassle. Reblogged this on a version of mine and commented: Diffie Hellman key agreement scheme with python.
https://jeremykun.com/2014/03/31/elliptic-curve-diffie-hellman/?shared=email&msg=fail
CC-MAIN-2020-45
en
refinedweb
ethereum-contractsethereum-contracts Ethereum contracts wrapper which makes it easier to deploy contracts to the blockchain and invoke their methods. Features: - Automatically converts method arguments and return values according to types in contract ABI. - Auto-fetches transaction receipts for sendTransactioncalls. Promise-ified asynchronous interface for easy use. - Errors are gracefully handled - Customizable logging, can be turned on/off at runtime - Works with any web3 instance - No dependencies - works in Node, Electron apps and browsers - Automated tests InstallationInstallation $ npm install ethereum-contracts UsageUsage Basic contract deployment: import Web3 from 'web3'; import solc from 'solc'; import { ContractFactory } from 'ethereum-contracts'; const web3 = new Web3(/* connect to running Geth node */); // create a new factory const factory = new ContractFactory({ web3: web3, /* Account from which to make transactions */ account: web3.eth.coinbase, /* Default gas to use for any transaction */ gas: 500000 }); // compile our contract const soliditySrc = readFile(...); constant contractData = Object.values(solc.compile(soliditySrc, 1).contracts).pop(); // get Contract instance const contract = factory.make({ contract: contractData, }); // Deploy it! contract.deploy() .then((contractInstance) => { // deployed ok! }) .catch(console.error); The deploy() method returns a Promise which resolves to an instance of ContractInstance ( require('ethereum-contracts').ContractInstance) representing an instance of the contract at its deployed address. This instance exposes an API which by which you can methods within the deployed contract. Note: If you get an error stating that your account is locked then you may need to unlock it first using web3.personal.unlockAccount(). Invoking contract methods locallyInvoking contract methods locally Suppose we have a simple contract code: contract Local { function getOne() returns (uint8, string) { return (123, "ok"); } } We can call getOne() on the local blockchain without having to send out a transaction: console.log( contractInstance.localCall('getOne') ); /* [ 123, "ok" ] */ Invoking contract methods via TransactionInvoking contract methods via Transaction Let's say our contract is: contract Counter { uint8 val; function increment(){ val += 1; } } We can invoke increment() by sending a transaction to the blockchain, which returns a Promise: contractInstance.sendCall('increment') .then((txReceipt) => { // do something }) .catch(console.error); The txReceipt object returned above is the result of the call to web3.eth.getTransactionReceipt() for the corresponding transaction. Passing in method argumentsPassing in method arguments Let's say we our contract is: contract Counter { uint8 val; string label; function increment(uint8 amount, string newLabel) { val += amount; label = newLabel; } function isBigger(uint8 check) returns (bool) { return (check > val) ? true : false; } } We can pass in arguments for both local calls and transaction calls as key-value pairs (i.e. Object): // local let result = contractInstance.localCall('isBigger', { check: 5 }) // transaction contractInstance.sendCall('increment', { amount: 10, newLabel: 'hello world' }); Override account and gasOverride account and gas Whether deploying a contract or calling a method via transaction, the gas value and account from which the transaction is sent can be overridden on a per-call basis: import { Contract } from 'ethereum-contracts'; contract = new Contract({ web3: web3, contract: contractData, account: web3.eth.coinbase, gas: 500000 }) contract.deploy({}, { /* override account */ account: '0xaa1a6e3e6ef20068f7f8d8c835d2d22fd5116444', }) .then((contractInstance) => { return contractInstance.sendCall('increment', {}, { /* override gas */ gas: 100000 }); }) Browser usageBrowser usage If you are not using a packaging manager and are instead importing ethereumContracts.js directly then the class is exposed on the global object as EthereumContracts. Thus, in the browser window context you would use it like this: const contractFactory = new window.EthereumContracts.ContractFactory({ web3: web3, account: web3.eth.coinbase, gas: 500000 }); Type conversionsType conversions When passing in method arguments the wrapper will try to type-cast each argument to the required target type as defined in the contract ABI. Specifically, here is what it does for each type: int/uint, int8/uint8, ..., int256/uint256- input argument is converted to a number and then checked to ensure it is within the accepted range of numbers for the given type's boundaries. Note that Dateinstances get auto-converted to their millisecond representations. string- input argument is converted to a string. bool- if input argument is 0, false, "false", or ""it is passed on as falseelse it is passed on as true. address- if input argument is a number it is converted to a hex representation with enough padding to ensure it is a valid address. Otherwise it is string-ified and checked using web3.isAddress(). byte, bytes, bytes, ..., bytes32- input argument is converted to hex using web3.toHex(). For return values, the logic just ensures that int/uint values are returned as actual numbers and not BigNumber instances (as is usually returned by web3). Example Let's say our contructor has: contract Test { constructor(uint256 val, bool flag, address addr) {} } If we deploy with the following arguments... this.contract.deploy({ val: new Date(2016,0,1,5,30,22), flag: 'false', addr: 234234 }); ...the actual values passed to the constructor will be: (1451597422000, false, '0x00000000000000000000000000000000000392fa') DevelopmentDevelopment To build and run the tests: $ npm install $ npm test To run tests with coverage: $ npm run test-coverage ContributionsContributions Contributions welcome - see CONTRIBUTING.md LicenseLicense MIT - see LICENSE.md
https://libraries.io/npm/ethereum-contracts
CC-MAIN-2020-45
en
refinedweb
A module to mock window.localStorage and window.sessionStorage in Jest Available items The developer of this repository has not created any items for sale yet. Need a bug fixed? Help with integration? A different license? Create a request here: Use this module with Jest to run web tests that rely on localstorageand / or sessionStoragewhere you want a working localStorage API with mocked functions. This module has no runtime dependencies so your project won't pull in additional module dependencies by using this. This should only be installed as a development dependency ( devDependencies) as it is only designed for testing. The module is transpiled via babel to support the current active Node LTS version (6.11.3). yarn: yarn add --dev jest-localstorage-mock npm: npm i --save-dev jest-localstorage-mock The simplest setup is to use the module system, you may also choose to create a setup file if needed. In your package.jsonunder the jestconfiguration section create a setupFilesarray and add jest-localstorage-mockto the array. { "jest": { "setupFiles": ["jest-localstorage-mock"] } } If you already have a setupFilesattribute you can also append jest-localstorage-mockto the array. { "jest": { "setupFiles": ["./__setups__/other.js", "jest-localstorage-mock"] } } Alternatively you can create a new setup file which then requires this module or add the requirestatement to an existing setup file. __setups__/localstorage.js import 'jest-localstorage-mock'; // or require('jest-localstorage-mock'); Add that file to your setupFilesarray: "jest": { "setupFiles": [ "./__setups__/localstorage.js" ] } For a create-react-app project you can replace the suggested mock with this at the beginning of the existing src/setupTests.jsfile: require('jest-localstorage-mock'); By including this in your Jest setup you'll allow tests that expect a localStorageand sessionStorageobject to continue to run. The module can also allow you to use the mocks provided to check that your localStorage is being used as expected. The __STORE__attribute of localStorage.__STORE__or sessionStorage.__STORE__is made available for you to directly access the storage object if needed. Check that your localStoragecalls were made when they were supposed to. test('should save to localStorage', () => { const KEY = 'foo', VALUE = 'bar'; dispatch(action.update(KEY, VALUE)); expect(localStorage.setItem).toHaveBeenLastCalledWith(KEY, VALUE); expect(localStorage.__STORE__[KEY]).toBe(VALUE); expect(Object.keys(localStorage.__STORE__).length).toBe(1); }); Check that your sessionStorageis empty, examples work with either localStorageor sessionStorage. test('should have cleared the sessionStorage', () => { dispatch(action.reset()); expect(sessionStorage.clear).toHaveBeenCalledTimes(1); expect(sessionStorage.__STORE__).toEqual({}); // check store values expect(sessionStorage.length).toBe(0); // or check length }); Check that localStoragecalls were not made when they shouldn't have been. test('should not have saved to localStorage', () => { const KEY = 'foo', VALUE = 'bar'; dispatch(action.notIdempotent(KEY, VALUE)); expect(localStorage.setItem).not.toHaveBeenLastCalledWith(KEY, VALUE); expect(Object.keys(localStorage.__STORE__).length).toBe(0); }); Reset your localStoragedata and mocks before each test to prevent leaking. beforeEach(() => { // values stored in tests will also be available in other tests unless you run localStorage.clear(); // or directly reset the storage localStorage.__STORE__ = {}; // you could also reset all mocks, but this could impact your other mocks jest.resetAllMocks(); // or individually reset a mock used localStorage.setItem.mockClear(); }); test('should not impact the next test', () => { const KEY = 'foo', VALUE = 'bar'; dispatch(action.update(KEY, VALUE)); expect(localStorage.setItem).toHaveBeenLastCalledWith(KEY, VALUE); expect(localStorage.STORE[KEY]).toBe(VALUE); expect(Object.keys(localStorage.STORE).length).toBe(1); }); test('should not be impacted by the previous test', () => { const KEY = 'baz', VALUE = 'zab'; dispatch(action.update(KEY, VALUE)); expect(localStorage.setItem).toHaveBeenLastCalledWith(KEY, VALUE); expect(localStorage.STORE[KEY]).toBe(VALUE); expect(Object.keys(localStorage.STORE).length).toBe(1); }); See the contributing guide for details on how you can contribute.
https://xscode.com/clarkbw/jest-localstorage-mock
CC-MAIN-2020-45
en
refinedweb
Available items The developer of this repository has not created any items for sale yet. Need a bug fixed? Help with integration? A different license? Create a request here: This repo contains the image definitions for the components of the cluster logging stack as well as tools for building and deploying them. The cluster logging subsystem consists of multiple components abbreviated as the "EFK" stack: Elasticsearch, Fluentd, Kibana. The primary features this integration provides: * Multitenant support to isolate logs from various project namespaces * Openshift OAuth2 integration * Historical log discovery and visualization * Log aggregation of pod and node logs Information to build the images from github source using an OKD deployment is found here. See the quickstart guide to deploy cluster logging. Please check the release notes for deprecated features or breaking changes . The cluster logging subsystem consists of multiple components commonly abbreviated as the "ELK" stack (though modified here to be the "EFK" stack). Elasticsearch is a Lucene-based indexing object store into which logs are fed. Logs for node services and all containers in the cluster are fed into one deployed cluster. The Elasticsearch cluster should be deployed with redundancy and persistent storage for scale and high availability. Fluentd is responsible for gathering log entries from nodes, enriching them with metadata, and feeding them into Elasticsearch. Kibana presents a web UI for browsing and visualizing logs in Elasticsearch. In order to authenticate the Kibana user against OpenShift's Oauth2, a proxy is required that runs in front of Kibana. Curator allows the admin to remove old indices from Elasticsearch on a per-project basis. The cluster-logging-operator orchestrates the deployment of the cluster logging stack including: resource definitions, key/cert generation, component start and stop order. Determining the health of an EFK deployment and if it is running can be assessed by running the check-EFK-running.shand check-logs.she2e tests. Additionally, see Checking EFK Health Any issues against the origin stack can be filed at. Please include as many details as possible in order to assist us in resolving the issue. Use curl to grab the tarball from github: curl -s -L | tar -C hack/vendor/olm-test-script --strip-components=1 -x -z -f -for example: curl -s -L | tar -C hack/vendor/olm-test-script --strip-components=1 -x -z -f - To contribute to the development of origin-aggregated-logging, see REVIEW.md
https://xscode.com/openshift/origin-aggregated-logging
CC-MAIN-2020-45
en
refinedweb
DASH! micropython test.py no module named 'OmegaExpansion' - LightSwitch last edited by root@Omega-C592:~# micropython test.py Traceback (most recent call last): File "test.py", line 1, in <module> ImportError: no module named 'OmegaExpansion' but... root@Omega-C592:~# python test.py -0.795 works 4.0. I'm using the dash, so I want to use the lv_micropython lib to create the gui. I've had several issues so far and this is just one of them I've created a posting for. The other one is here: What do I have to do to get the microphython to recognize the OmegaExpansion package that is present? I already also did this instruction: opkg install micropython-lib --nodeps The code for test.py is below from OmegaExpansion import AdcExp class Test: def init(self): self.adc = AdcExp.AdcExp(address=0x48); print(self.adc.read_voltage(0)); Test();
https://community.onion.io/topic/4286/dash-micropython-test-py-no-module-named-omegaexpansion
CC-MAIN-2020-45
en
refinedweb
path to the StreamingAssets folder (Read Only). Use the StreamingAssets folder to store Assets.. Use the UnityWebRequest class to access the Assets. using UnityEngine; using System.IO; using UnityEngine.Video; // Application-streamingAssetsPath example. // // Play a video and let the user stop/start it. // The video location is StreamingAssets. The video is // played on the camera background. public class Example : MonoBehaviour { private UnityEngine.Video.VideoPlayer videoPlayer; private string status; void Start() { GameObject cam = GameObject.Find("Main Camera"); videoPlayer = cam.AddComponent<UnityEngine.Video.VideoPlayer>(); // Obtain the location of the video clip. videoPlayer.url = Path.Combine(Application.streamingAssetsPath, "SampleVideo_1280x720_5mb.mp4"); // Restart from beginning when done. videoPlayer.isLooping = true; // Do not show the video until the user needs it. videoPlayer.Pause(); status = "Press to play"; } void OnGUI() { GUIStyle buttonWidth = new GUIStyle(GUI.skin.GetStyle("button")); buttonWidth.fontSize = 18 * (Screen.width / 800); if (GUI.Button(new Rect(Screen.width / 16, Screen.height / 16, Screen.width / 3, Screen.height / 8), status, buttonWidth)) { if (videoPlayer.isPlaying) { videoPlayer.Pause(); status = "Press to play"; } else { videoPlayer.Play(); status = "Press to pause"; } } } }
https://docs.unity3d.com/ScriptReference/Application-streamingAssetsPath.html
CC-MAIN-2020-45
en
refinedweb
Yes, I’m still on the Windows 8.1 preview even though Windows 8.1 has gone out of the door and is generally available as per posts below; Windows 8.1 now available! Now ready for you- Windows 8.1 and the Windows Store Visual Studio 2013 released to web! I just need to get around to installing it Meanwhile, I wanted to experiment a little more with what I started on in my previous post Windows 8.1 Preview- Multiple Windows and specifically I wanted to see what the experience was like for an application that had multiple windows on a machine where a user may have multiple monitors with different display settings and, further, might drag windows from one monitor to another. I’d been kind of curious as to how that would work and especially in the case where those windows contained content that had loaded resources using the standard mechanisms – how dynamic are those mechanisms with respect to display settings changing while the app is running? Experimenting with DisplayInformation I knocked together a sort of skeleton which would allow me to create 2 windows. It took about 5 minutes to add a button to the first screen; and stuck a little bit of code behind that to create my secondary window; public sealed partial class MainPage : Page { public MainPage() { this.InitializeComponent(); } async void OnSecondWindow(object sender, RoutedEventArgs e) { if (!this.secondWindowShown) { var view = CoreApplication.CreateNewView(); var viewId = 0; await view.Dispatcher.RunAsync( CoreDispatcherPriority.Normal, () => { viewId = ApplicationView.GetForCurrentView().Id; Window w = Window.Current; Frame f = new Frame(); w.Content = f; f.Navigate(typeof(SecondWindow)); }); await ApplicationViewSwitcher.TryShowAsStandaloneAsync(viewId, ViewSizePreference.UseHalf, ApplicationView.GetForCurrentView().Id, ViewSizePreference.UseHalf); this.secondWindowShown = true; } } bool secondWindowShown; } which is just a bit of XAML to set up a view model and display some information from it; <Page x: <Page.DataContext> <local:WindowDetailsViewModel /> </Page.DataContext> <Grid Background="Gray"> "/> </Grid.RowDefinitions> <TextBlock Text="Resolution Scale" /> <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. </Grid> </Page> and then there’s just a simple view model class which that second window instantiates declaratively (i.e. default constructor time) via its XAML as you can see up there in the Page.DataContext setter on line 8 above. That view model class is largely just surfacing up the DisplayInformation class from WinRT; class WindowDetailsViewModel : ViewModelBase { public WindowDetailsViewModel() { DisplayInformation di = DisplayInformation.GetForCurrentView(); di.DpiChanged += OnDpiChanged; di.OrientationChanged += OnOrientationChanged; } public DisplayInformation DisplayInformation { get { return (DisplayInformation.GetForCurrentView()); } } void OnOrientationChanged(DisplayInformation sender, object args) { this.ChangeDisplayInformation(); } void OnDpiChanged(DisplayInformation sender, object args) { this.ChangeDisplayInformation(); } void ChangeDisplayInformation() { base.OnPropertyChanged("DisplayInformation"); } } and that ViewModelBase class just implements IPropertyChangedNotification so I won’t include that here. I thought I’d try out that code across the system that I have set up here. I have a Dell XPS 12 which has a 12” monitor (I think – the name would suggest so but I haven’t measured it) running at 1920×1080 and, over DisplayPort, I have a Dell U2412M monitor which is 24” and is running at 1920×1200. You’d have to assume that there were differences in pixel density across those 2 screens. So, I ran up the app and pressed the button to create the second window side-by-side with the first window on my 12” laptop; So, that looks like I have a 175ish DPI screen and it’s in Landscape (it is) and it would usually be landscape (it would, it’s a laptop) and Windows would apply 140% scaling to this UI. Fair enough. I dragged this window to my second monitor, crossed my fingers and waited to see if things updated. They did; and so on this monitor, the DPI is closer to the regular 96 and Windows would apply 100% scaling to this UI. Clearly, at an API level then Windows “knows” that my window has moved monitors and it fires an event such that my app code can pick up that event and do “something”. What I wondered next was whether Windows could do anything to automatically resolve resources based on this knowledge. Experimenting with Scaling I’ll admit that I find I have to think hard about scaling. I find it easier to think in terms of logical pixels being 1/96th of an inch and then thinking about designing in terms of logical pixels. For instance, if I temporarily change my secondary page to be a Grid which is defined as 96×96; <Grid Width="96" Height="96" Background="Red"> Then when I run that on my 12” screen with high DPI, it presents a rectangle that’s around 1 inch square and if I drag the window to my 24” screen with lower DPI it presents a rectangle that around 1 inch square and that, I guess, is the point Similarly, if I use a piece of text; <TextBlock FontSize="96" Text="Hello World" Foreground="White"/> then that looks pretty much the same size on my 2 monitors as I drag the window backwards and forwards. The problem is that some resources (particularly) images just aren’t going to play ball nicely and that’s why Windows has its scaling system to provide alternate images based on scale factors. I went out and find an image of the Surface 2 on the web that was 485px by 323px (not a great size – should really be a multiple of 5 pixels) and dropped it into my UI with a simple image tag; <Image Width="485" Height="323" Stretch="Fill" Source="Assets/surface2.jpg" /> but I also found an image that was twice the size in terms of pixels and I attributed the 2 images so that I could tell them apart; and added them to my project using the regular naming convention; Now, I may have gone a little over the top in terms of the size of my 140% image but I ran up the app and, sure enough on my 24” inch monitor the second window displayed; and on my 12” monitor the second window displayed; and so the resource-resolution system is clearing listening to the right events and is refreshing (in this case) my image for me as it realises that the surrounding graphic capabilities have changed as the window moved from one monitor to another – very neat. Experimenting with Data-Binding That made me wonder. What if the picture to display was actually hidden away inside of a value that was data-bound. As a simple example, what if I had this “ViewModel”; class ViewModel { public string ImageUrl { get { return ("Assets/surface2.png"); } } } and I change my UI such that it sets up one of those objects as its DataContext and then binds to it; <Page x: <Page.DataContext> <local:ViewModel /> </Page.DataContext> <Grid> <Image Width="485" Height="323" Stretch="Fill" Source="{Binding ImageUrl}" /> then does that still work? Does my image still resolve to a different image as I move the window from my 12” screen to my 24” screen? Not surprisingly, yes, it does Local/Web Images Naturally, if you’re loading up images that your app has stored in local storage (i.e. local/roaming/temp storage hanging off ApplicationData.Current) then the system isn’t magically going to swap those images around for you as and when the Window transitions from one monitor to another and the DPI changes. As an aside, one of the things that I’m not 100% sure on is whether there is a way to use the resource system to resolve images in data folders using the same naming system that it uses when loading up packaged resources. For example, if I have a path like ms-appdata:///local/surface2.jpg then can I get the system to resolve that for me to ms-appdata:///local/surface2.scale-100.jpg and follow the same naming convention as for resources? I’m not sure whether I can but I’ll return to that if I plug that gap in my knowledge. Equally, if you’ve loaded images from the web then the system is not usually going to be able to magically pick up display changes and go back to the web for “the right image”. I think in both of those cases, you’re going to have to do some work yourself and make use of the DisplayInformation.ResolutionScale property to figure out what images to load or, if you’re in the JavaScript world, you can use something like the CSS media-resolution feature in order to specify different background-image values for various media queries and hit the problem that way. App Resource Packages and Bundles This is one I haven’t tried yet. I wrote a little blog post about Windows 8.1–Resource Packages and App Bundles where I took a basic look at how for Windows 8.1 it’s possible to build resource packages – e.g. it’s possible to build a package of resources that only apply to the 140% scale so that all those 140% scale images are in that package and that package would not need downloading onto a system running at 100% scale. So…what happens in the circumstance where a user has a system with a DPI that would resolve to 100% scale but then plugs in a monitor that resolves to 140% or 180% scale and then drags a window from their 100% scale monitor to their 140% scale monitor? Does the system go back to the Store and magically get the additional resources required and, if so, how does that affect the running app package? As far as I know, the answer is “yes, the system does do that” but given that the Windows 8.1 Store only opened yesterday I’ve not tried it yet but I’ll update the post as/when I have any more detail. Update: I checked on what happens in the above situation where the app has downloaded its “100%” resources and then one of its windows is dragged to a monitor which would require “140%” resources. In this case, this will prompt the system to realise that it needs to get hold of those “140%” resources the next time it services that app. That would mean that the user might get a slightly different experience when they first run the app and it is running with “100%” resources versus a later run of the app where it has been serviced and has downloaded those “140%” resources and so can now use them.
https://mtaulty.com/2013/10/18/m_14980/
CC-MAIN-2020-45
en
refinedweb
Hi, I want to use LPC54605J256BD100 microcontroller in an applicaion where a pulseon an external pin will trigger an ADC and ADC should sample 12 channels on that single pulse input and give output at the end of the sampling of all the 12 channels. I was going through the user manual of LPC546xx but couldn't find appropriate description for the required functionality. I have follwoing queries regarding Sequencing feature of ADC in LPC546xx: 1) In busrt mode, a) Conversion will start after trigger only right? b) After one sequence(i.e 12 channels sampled), does it stop sampling or initiates a new conversion sequence without any trigger? 2) What is the use of start bit in SEQA_CTRL register? 3) How to acheive the required functionality for my application? 4) Any example code would be helpful Thanks in advance, Prasanna Hi, Prasanna, Regarding your question, pls refer to the following: Q1) In busrt mode, a) Conversion will start after trigger only right? b) After one sequence(i.e 12 channels sampled), does it stop sampling or initiates a new conversion sequence without any trigger? >>>>>Burst mode means that the ADC will convert continuously once the ADC is triggered, it do not stop sampling when all 12 channels are sampled. For your application, DO NOT use burst mode. Q 2) What is the use of start bit in SEQA_CTRL register? >>>>>>The Start bit in SEQA_CTRL register is just used in Software Trigger mode, whne the bit is set by software, ADC convert will be triggered once. Q3) How to acheive the required functionality for my application? >>>>>>>Your application requires that the ADC converter is triggered by external signal from GPIO pins, once ADC is triggered, the ADC will sample all 12 channels, then stop sampling, wait for next external trigger signal. I suggest you use the example code in the LPC SDK package: In the case, I think you can modify the void ADC_Configuration(void), change it from software triggering mode to pin hardware triggering mode. void ADC_Configuration(void) { adc_config_t adcConfigStruct; adc_conv_seq_config_t adcConvSeqConfigStruct; /* Configure the converter. */ #if defined(FSL_FEATURE_ADC_HAS_CTRL_ASYNMODE) & FSL_FEATURE_ADC_HAS_CTRL_ASYNMODE adcConfigStruct.clockMode = kADC_ClockSynchronousMode; /* Using sync clock source. */ #endif /* FSL_FEATURE_ADC_HAS_CTRL_ASYNMODE */ adcConfigStruct.clockDividerNumber = DEMO_ADC_CLOCK_DIVIDER; #if defined(FSL_FEATURE_ADC_HAS_CTRL_RESOL) & FSL_FEATURE_ADC_HAS_CTRL_RESOL adcConfigStruct.resolution = kADC_Resolution12bit; #endif /* FSL_FEATURE_ADC_HAS_CTRL_RESOL */ #if defined(FSL_FEATURE_ADC_HAS_CTRL_BYPASSCAL) & FSL_FEATURE_ADC_HAS_CTRL_BYPASSCAL adcConfigStruct.enableBypassCalibration = false; #endif /* FSL_FEATURE_ADC_HAS_CTRL_BYPASSCAL */ #if defined(FSL_FEATURE_ADC_HAS_CTRL_TSAMP) & FSL_FEATURE_ADC_HAS_CTRL_TSAMP adcConfigStruct.sampleTimeNumber = 0U; #endif /* FSL_FEATURE_ADC_HAS_CTRL_TSAMP */ #if defined(FSL_FEATURE_ADC_HAS_CTRL_LPWRMODE) & FSL_FEATURE_ADC_HAS_CTRL_LPWRMODE adcConfigStruct.enableLowPowerMode = false; #endif /* FSL_FEATURE_ADC_HAS_CTRL_LPWRMODE */ #if defined(FSL_FEATURE_ADC_HAS_TRIM_REG) & FSL_FEATURE_ADC_HAS_TRIM_REG adcConfigStruct.voltageRange = kADC_HighVoltageRange; #endif /* FSL_FEATURE_ADC_HAS_TRIM_REG */ ADC_Init(DEMO_ADC_BASE, &adcConfigStruct); #if !(defined(FSL_FEATURE_ADC_HAS_NO_INSEL) && FSL_FEATURE_ADC_HAS_NO_INSEL) /* Use the temperature sensor input to channel 0. */ ADC_EnableTemperatureSensor(DEMO_ADC_BASE, true); #endif /* FSL_FEATURE_ADC_HAS_NO_INSEL. */ /* Enable channel DEMO_ADC_SAMPLE_CHANNEL_NUMBER's conversion in Sequence A. */ adcConvSeqConfigStruct.channelMask = (0xFFFU << DEMO_ADC_SAMPLE_CHANNEL_NUMBER); /* Includes channel DEMO_ADC_SAMPLE_CHANNEL_NUMBER. Rong modified*/ adcConvSeqConfigStruct.triggerMask = 1U; //select PIN INT0 as trigger source Rong Wrote adcConvSeqConfigStruct.triggerPolarity = kADC_TriggerPolarityPositiveEdge; adcConvSeqConfigStruct.enableSingleStep = false; adcConvSeqConfigStruct.enableSyncBypass = false; adcConvSeqConfigStruct.interruptMode = kADC_InterruptForEachSequence; ADC_SetConvSeqAConfig(DEMO_ADC_BASE, &adcConvSeqConfigStruct); ADC_EnableConvSeqA(DEMO_ADC_BASE, true); /* Enable the conversion sequence A. */ /* Clear the result register. */ ADC_DoSoftwareTriggerConvSeqA(DEMO_ADC_BASE); while (!ADC_GetChannelConversionResult(DEMO_ADC_BASE, DEMO_ADC_SAMPLE_CHANNEL_NUMBER, &gAdcResultInfoStruct)) { } ADC_GetConvSeqAGlobalConversionResult(DEMO_ADC_BASE, &gAdcResultInfoStruct); } You also need to write the PINTSEL[0] regisyter to select the GPIO pin which can trigger ADC, even you have to configure the pin feature in the IOCON module for the pin. Q4) Any example code would be helpful >>>>>>>>>>>Pls use the ADC example in the SDK package. You can download SDK package from the link: Welcome | MCUXpresso SDK Builder Hope it can help you BR XiangJun Rong Hi XiangJun, Thanks for your valuable input!!! Can you please elaborate your answer for question no. 3. I am not able to find that feature in user manual. Thanks and best regards, Prasanna Hi, Prasanna, The ADC supports external pin trigger, pls refer to the following section. For the GINT0/1 module, pls refer to the section: Chapter 14: LPC546xx Group GPIO input interrupt (GINT0/1) Hope it can help you BR XiangJun Rong Hi XiangJun, I need calrity on the feature that provides interrupt after completion of the sampling of all the cahnnels in the sequence. The manual provides details about only single select and burst mode which are not applicable to my application. I want to know the feature that provides this sampling of all channels on 1 trigger and gives an interrupt after completion of sequence. Thanks, Prasanna Hi, Prasanna, You require the feature that provides interrupt after completion of the sampling of all the cahnnels in the sequence with ONE trigger, okay, this is the bit configuration in SEQA_CTRL/SEQB_CTRL BURST bit: 0 SINGLESTEP bit: 0 MODE bit: 1 Hope it can help you BR XiangJun Rong Hi XiangJun, Thanks for your guidance. In short SINGLESTEP bit controls the sequencing option. If SINGLESTEP = 1, only one channel will be sampled on trigger and if SINGLESTEP = 0, all channels will be sampled on trigger right?. Thanks, Prasanna Hi, Prasanne, Yes, you are right, if SINGLESTEP = 0, all channels will be sampled on trigger. BR Xiangjun Rong Hi XiangJun, Thank you for solving this query. I have one more query regarding ADC. I couldn't find the effective number of bits in the user manual and data sheet. Could you please help me find it. Thanks, Prasanna Hi, Prasanna, We have not tested the ENOB(Effective Number of Bits) spec for the ADC of LPC546xx, I am sorry. BR XiangJun Rong
https://community.nxp.com/t5/LPC-Microcontrollers/ADC-Sequencing-in-LPC546xx/m-p/885544
CC-MAIN-2020-45
en
refinedweb
Hy, I want to run a yolo v3 tutorial on my jetson nano (jetpack 4.3) and created a virtual environment where i want to save and run the code, install the needed packages and so on… When I try to install tensorflow (following this guide) I got this information Requirement already satisfied This is what I did in detail: python3 -m virtualenv -p python3 aiguyyolotest1 source aiguyyolotest1/bin/activate sudo apt-get update sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran After the last command i got the information that I already use the newest versions. 0 updated, 0 new installed, 0 to delete and 3 not updated. The next steps: pip3 install --upgrade pip (Successfully installed pip-20.2.3) sudo apt-get install python3-pip After that command again i got the information that I already use the newest version. 0 updated, 0 new installed, 0 to delete and 3 not updated. The next steps: sudo pip3 install -U pip testresources setuptools: pip in /usr/local/lib/python3.6/dist-packages (20.2.3) Requirement already up-to-date: testresources in /usr/local/lib/python3.6/dist-packages (2.0.1) Requirement already up-to-date: setuptools in /usr/local/lib/python3.6/dist-packages (50.3.0) Requirement already satisfied, skipping upgrade: pbr>=1.8 in /usr/local/lib/python3.6/dist-packages (from testresources) (5.4.4) sudo pip3 install -U numpy==1.16.1 future==0.18.2 mock==3.0.5 h5py==2.10.0 keras_preprocessing==1.1.1 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11: numpy==1.16.1 in /usr/local/lib/python3.6/dist-packages (1.16.1) Requirement already up-to-date: future==0.18.2 in /usr/local/lib/python3.6/dist-packages (0.18.2) Requirement already up-to-date: mock==3.0.5 in /usr/local/lib/python3.6/dist-packages (3.0.5) Requirement already up-to-date: h5py==2.10.0 in /usr/local/lib/python3.6/dist-packages (2.10.0) Requirement already up-to-date: keras_preprocessing==1.1.1 in /usr/local/lib/python3.6/dist-packages (1.1.1) Requirement already up-to-date: keras_applications==1.0.8 in /usr/local/lib/python3.6/dist-packages (1.0.8) Requirement already up-to-date: gast==0.2.2 in /usr/local/lib/python3.6/dist-packages (0.2.2) Requirement already up-to-date: futures in /usr/local/lib/python3.6/dist-packages (3.1.1) Requirement already up-to-date: protobuf in /usr/local/lib/python3.6/dist-packages (3.13.0) Requirement already up-to-date: pybind11 in /usr/local/lib/python3.6/dist-packages (2.5.0) Requirement already satisfied, skipping upgrade: six in /usr/local/lib/python3.6/dist-packages (from mock==3.0.5) (1.14.0) Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf) (50.3.0) sudo pip3 install --pre --extra-index-url tensorflow. Looking in indexes:, Requirement already satisfied: tensorflow in /usr/local/lib/python3.6/dist-packages (2.1.0+nv20.3.tf2) Requirement already satisfied: protobuf>=3.8.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (3.13.0) Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.1.0) Requirement already satisfied: keras-applications>=1.0.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.0.8) Requirement already satisfied: six>=1.12.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.14.0) Requirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (0.2.0) Requirement already satisfied: gast==0.2.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (0.2.2) Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (0.9.0) Requirement already satisfied: numpy<2.0,>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.16.1) Requirement already satisfied: tensorflow-estimator<2.2.0,>=2.1.0rc0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (2.1.0) Requirement already satisfied: scipy==1.4.1; python_version >= “3” in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.4.1) Requirement already satisfied: tensorboard<2.2.0,>=2.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (2.1.1) Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.12.1) Requirement already satisfied: wheel>=0.26; python_version >= “3” in /usr/lib/python3/dist-packages (from tensorflow) (0.30.0) Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (3.2.0) Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (0.8.1) Requirement already satisfied: keras-preprocessing>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.1.1) Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.28.1) Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.8.0->tensorflow) (50.3.0) Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.8->tensorflow) (2.10.0) Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.2.0,>=2.1.0->tensorflow) (0.4.1) Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.2.0,>=2.1.0->tensorflow) (1.0.1) Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.2.0,>=2.1.0->tensorflow) (1.13.1) Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.2.0,>=2.1.0->tensorflow) (3.2.1) Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.2.0,>=2.1.0->tensorflow) (2.23.0) Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.2.0,>=2.1.0->tensorflow) (1.3.0) Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.2.0,>=2.1.0->tensorflow) (4.0.0) Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.2.0,>=2.1.0->tensorflow) (0.2.8) Requirement already satisfied: rsa<4.1,>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.2.0,>=2.1.0->tensorflow) (4.0) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/lib/python3/dist-packages (from requests<3,>=2.21.0->tensorboard<2.2.0,>=2.1.0->tensorflow) (1.22) Requirement already satisfied: idna<3,>=2.5 in /usr/lib/python3/dist-packages (from requests<3,>=2.21.0->tensorboard<2.2.0,>=2.1.0->tensorflow) (2.6) Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests<3,>=2.21.0->tensorboard<2.2.0,>=2.1.0->tensorflow) (2018.1.18) Requirement already satisfied: chardet<4,>=3.0.2 in /usr/lib/python3/dist-packages (from requests<3,>=2.21.0->tensorboard<2.2.0,>=2.1.0->tensorflow) (3.0.4) Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.2.0,>=2.1.0->tensorflow) (3.1.0) Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.6/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<2.2.0,>=2.1.0->tensorflow) (0.4.8) Now I tried to run tensorflow in different python versions: First try: python Python 3.6.9 (default, Jul 17 2020, 12:50:27) [GCC 8.4.0] on linux Type “help”, “copyright”, “credits” or “license” for more information. import tensorflow Traceback (most recent call last): File “”, line 1, in ModuleNotFoundError: No module named ‘tensorflow’ Second try: python2.7 Python 2.7.17 (default, Jul 20 2020, 15:37:01) [GCC 7.5.0] on linux2 Type “help”, “copyright”, “credits” or “license” for more information. import numpy import tensorflow Traceback (most recent call last): File “”, line 1, in ImportError: No module named tensorflow Third try: python3.7 Python 3.7.5 (default, Nov 7 2019, 10:50:52) [GCC 8.3.0] on linux Type “help”, “copyright”, “credits” or “license” for more information. import tensorflow Traceback (most recent call last): File “”, line 1, in ModuleNotFoundError: No module named ‘tensorflow’ When I run tensorflow within python 3.6.9 in my home directory (not inside a virtual environment) everthing works fine. Can somebody help me with this? I have no idea what to do. BR chris
https://forums.developer.nvidia.com/t/importerror-no-module-named-tensorflow-but-requirement-already-satisfied/154251/7
CC-MAIN-2020-45
en
refinedweb
selinux_getenforcemode(3) SELinux API documentationselinux_getenforcemode(3) selinux_getenforcemode - get the enforcing state of SELinux #include <selinux/selinux.h> int selinux_getenforcemode(int *enforce); selinux_getenforcemode() Reads the contents of the /etc/selinux/config file to determine how the system was setup to run SELinux. Sets the value of enforce to 1 if SELinux should be run in enforcing mode. Sets the value of enforce to 0 if SELinux should be run in permissive mode. Sets the value of enforce to -1 if SELinux should be disabled. 25 May 2004 selinux_getenforcemode(3) Pages that refer to this page: config(5), selinux_config(5)
https://man7.org/linux/man-pages/man3/selinux_getenforcemode.3.html
CC-MAIN-2020-45
en
refinedweb
Containers Set up soft multi-tenancy with Kiosk on Amazon Elastic Kubernetes Service Introduction Achieving complete isolation between multiple tenants running in the same Kubernetes cluster is impossible today. The reason is because Kubernetes was designed to have a single control plane per cluster and all the tenants running in the cluster, share the same control plane. Hosting multiple tenants in a single cluster brings some advantages, the main ones being: efficient resource utilization and sharing, reduced cost, and reduced configuration overhead. However, a multi-tenant Kubernetes setup creates some special challenges when it comes to resource sharing and security. Let’s understand these better. In a shared cluster, one of the goals is for each tenant to get a fair share of the available resources to match its requirements. A possible side effect that needs to be mitigated in this case is the noisy neighbor effect, by ensuring the right level of resource isolation among tenants. The second challenge, the main one, is security. Isolation between tenants is mandatory in order to avoid malicious tenants compromising others. Depending on the security level implemented by the isolation mechanisms, the industry divides the shared tenancy models into hard and soft multi-tenancy. Multi-tenancy Hard multi-tenancy implies no trust between tenants and one tenant cannot access anything from others. This approach fits, for example, to service providers that host multiple tenants, which are unknown to each other and the main focus for this setup is to completely isolate the business among tenants. In the open-source community, there is ongoing work to solve this challenge, but this approach is not widely used across production workloads yet. On the other end of the spectrum, is soft multi-tenancy. This implies a trust relationship between tenants, which could be part of the same organization or team, and the main focus in this approach is not the security isolation but the fair utilization of resources among tenants. There are a few initiatives in the open-source community to implement soft multi-tenancy and one of them is Kiosk. Kiosk is an open source framework for implementing soft multi-tenancy in a Kubernetes cluster. In this post, you will see a step-by-step guide to implement it in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Initial setup Before proceeding with the setup, make sure you fulfill the following pre-requisites: - Log in to your AWS account. - Create an Amazon EKS cluster in the AWS Management Console. - Connect to the Amazon EKS cluster from the local machine. - Install kubectl on the local machine. This is the command line tool for controlling Kubernetes clusters. - Install helm version 3 on the local machine. This is a package manager for Kubernetes. - Kiosk requires Kubernetes version 1.14 and higher. Walkthrough In order to demonstrate how to set up Kiosk on Amazon EKS the following architecture will be deployed, depicting a single Kubernetes cluster shared between two tenants: a Node.js application and a Redis data store. Before starting with the setup, here are some of the basic building blocks of Kiosk: - Cluster Admin – has administrator permissions to perform any operation across the cluster. - Account – resource associated to a tenant. This is defined and managed by the cluster admin. - Account User – can be a Kubernetes user, group, or service account. This is managed by the Cluster Admin and can be associated to multiple accounts. - Space – is a virtual representation of a regular Kubernetes namespace and can belong to a single account. - Account Quota – defines cluster-wide aggregated limits for an account. - Template – is used to initialize a space with a set of Kubernetes resources. A template is enforced through account configurations. This is defined and managed by the cluster admin. - TemplateInstance – is an actual instance of a template when it is applied to a space. This contains information about the template and parameters used to instantiate it. Account, space, account quota, template, and template instance are custom resources created in the cluster when the kiosk chart is installed. Granular permissions can be added to these resources, and this enables tenant isolation. Install Kiosk 1. Verify that you can view the worker nodes in the node group of the EKS cluster. The EKS cluster used in this guide consists of 3 x m5.large (2 vCPU and 8 GiB) instances. $kubectl get nodes NAME STATUS ROLES AGE VERSION ip-192-168-xxx-xxx.ec2.internal Ready <none> 48m v1.16.8-eks-e16311 ip-192-168-xxx-xxx.ec2.internal Ready <none> 48m v1.16.8-eks-e16311 ip-192-168-xxx-xxx.ec2.internal Ready <none> 48m v1.16.8-eks-e16311 2. Create a dedicated namespace and install Kiosk using helm. Helm is a package manager for Kubernetes. $kubectl create namespace kiosk $helm install kiosk --repo kiosk --namespace kiosk --atomic This step creates a pod in the kiosk namespace. Create users You will create two IAM (Identity and Access Management) users dev and dba, each managing a separate tenant. Because EKS supports integration of Kubernetes RBAC (Role-Based Access Control) with the IAM service through the AWS IAM Authentication for Kubernetes, the next step is to add RBAC access to the two users. 1. Create the users dev and dba by following these steps. Because the IAM service is used for authentication only, you don’t need to grant any permissions during the user creation. Permissions for each user in the Kubernetes cluster will be granted through the RBAC mechanism in the next steps. The IAM user that created the EKS cluster in the initial setup phase is granted automatically administrator permissions for the cluster so you will use it as a cluster admin in this guide. If IAM access keys have not been created already for the cluster admin, then follow these steps to do so and include them under the kube-cluster-admin named profile in the credentials file as described here. Note: all commands in this guide will be executed as a cluster admin unless explicitly stated in the kubectl command. To use the cluster admin IAM credentials, override the AWS_PROFILE environment variable. Linux or macOS $export AWS_PROFILE=kube-cluster-admin Windows 2. Add RBAC access to the two users by updating the aws-auth ConfigMap. $kubectl edit configmap aws-auth -n kube-system 3. Add the two users under data.mapUsers. The user ARN can be copied from the IAM console. Note: the IAM entity that creates the cluster is automatically granted system:masters permissions in the cluster’s RBAC configuration. Users dev and dba will have read-only permissions by default, as they haven’t been added to any group. Impersonate users Kubernetes allows a user to act as another user when running kubectl commands from the command line, through user impersonation. To do this, the impersonating user must have the permission to perform the impersonate action on the type of attribute being impersonated, in this case user. As the cluster admin has system:masters permissions by default, it can impersonate users dev and dba. To impersonate a user, use --as=<username> flag in the kubectl command. Create a Kiosk account for each tenant 1. Create a definition file for the Node.js application’s account. An account defines subjects, which are the account users that can access the account. Account users can be a Kubernetes user, group, or service account. In this case, the account user is dev, which has been previously added to the aws-auth ConfigMap. 2. Create the account $kubectl apply -f node-account.yml 3. Repeat the step for the Redis application. Update the metadata.name to redis-account and spec.subjects[0].name to dba. 4. View the created accounts as a cluster admin. $kubectl get accounts 5. View the created accounts as user dev. You will have access to view only the accounts associated to this user. $kubectl get accounts --as=dev Create a Kiosk space for each account 1. By default, only the cluster admin can create spaces. In order to allow an account user to create spaces, create a Kubernetes RBAC ClusterRoleBinding. Let’s allow users dev and dba to create spaces in their accounts. Note: kiosk-edit is a ClusterRole created when the chart for kiosk was installed in the cluster and allows create, update, and delete actions on space resources by the subjects included in the ClusterRoleBinding configuration. The full configuration of the kiosk-edit role can be seen by running: $kubectl get clusterroles kiosk-edit -n kiosk -o yaml 2. Create the ClusterRoleBinding as a cluster admin. $kubectl apply -f cluster-role-binding.yml 3. Create a space for the Node.js application. First, create the definition file. 4. Impersonate user dev to create the space for the node-account. $kubectl apply -f node-space.yml --as=dev 5. Repeat the step for the Redis application. Update metadata.name to redis-space and spec.account to redis-account. 6. Note that trying to create the space in the redis-account as user dev will result in an error. 7. View the current spaces as a cluster admin. You will see both node-space and redis-space. $kubectl get spaces 8. View the current spaces as user dev. Note that you only have access to the spaces owned by user dev, which in this case is node-space belonging to node-account. $kubectl get spaces --as=dev 9. Spaces are a virtual representation of Kubernetes namespaces, so the same syntax can be used in the command line. For example, to list all pods in a space: $kubectl get pods -n redis-space Apply restrictions on the Kiosk accounts Limit the number of spaces per account 1. Limit the number of spaces that can be associated to an account. Let’s update the definition file node-account.yml and add the space limit. 2. Apply the changes to the node-account as a cluster admin. $kubectl apply -f node-account.yml Now the node-account can only have two spaces. When attempting to create a third space, an error will be thrown. 3. Apply the same limit for the second account by updating the definition file redis-account.yml. Apply account quotas to existing accounts 1. Define the limits of compute resources for an account by defining account quotas. Create the quota definition file. 2. Create the account quota as a cluster admin. $kubectl apply -f node-quota.yml AccountQuotas are very similar to the Kubernetes resource quotas by restricting the same resources, with the added benefit that the restrictions apply across all spaces of the account unlike the resource quotas, which apply to a single namespace. 3. AccountQuotas can be created by cluster admins only. Trying to create an account quota as an account user, results in an error. $kubectl apply -f node-quota.yml –-as=dev User "dev" cannot get resource "accountquotas" in API group "config.kiosk.sh" at the cluster scope 4. View the account quotas across the cluster as a cluster admin. $kubectl get accountquotas Create templates for spaces 1. A template in kiosk serves as a blueprint for a space. Templates are defined and managed by cluster admins by default. 2. Let’s create a template to limit every container deployed in this space to be assigned CPU request of 500 milli CPUs and a CPU limit of 1 CPU. 3. Create the template as a cluster admin. $kubectl apply -f template-definition.yml 4. By default, templates are optional. In order to enforce the space creation to follow the template rules, this needs to be added in the account configuration. Let’s update the redis-account to follow the template when spaces are created within the account. 5. Apply the changes to the account as a cluster admin. $kubectl apply -f redis-account.yml 6. Let’s test this by creating a space within the redis-account. 7. Create the space as a cluster admin. $kubectl apply -f redis-mandatory-space.yml 8. Once the space is created, you can view that the LimitRange resource has been created. $kubectl get limitrange -n redis-mandatory-space 9. For each space created from a template, a template instance is created. Template instances can be used to track resources created from templates. View the instances in the new space. $kubectl get templateinstances -n redis-mandatory-space 10. To test that the template is enforced, you can deploy a test pod in the new space and verify if the limit ranges are applied. $kubectl run nginx --image nginx -n redis-mandatory-space --restart=Never 11. Check the pod configuration and verify the resource limits applied. $kubectl describe pod nginx -n redis-mandatory-space 12. Delete the pod to continue the setup. $kubectl delete pod nginx -n redis-mandatory-space Deploy applications in the two accounts 1. Because an account quota has been created for node-account, the required compute resources need to be specified in the definition file of the deployment. $kubectl apply -f node-deployment.yml -n node-space --as=dev 2. Deploy the Redis data store in the second account as user dba. $kubectl create deploy redis --image redis -n redis-space --as=dba Verify account isolation 1. Check the resources accessible by account user dev. $kubectl get all -n node-space --as=dev 2. Check if the account user dev has access to any resources in the redis-space. You will get plenty of errors. $kubectl get all -n redis-space --as=dev Verify access between tenants 1. Verify the pod in the node-account. Note the name of the pod. $kubectl get pods -n node-space --as=dev 2. Verify the pod in the redis-account. Note the IP address of the pod. $kubectl get pods -n redis-space -o wide --as=dba 3. Test the connection between the two pods. $kubectl exec -n node-space <pod-name> --as=dev -- ping <ip-address> You can see that the tenants are accessible across spaces. For more strict security controls, the native Kubernetes Network Policies and Pod Security Policies can be leveraged. Cleanup Remove the Amazon Elastic Kubernetes Service cluster to avoid further costs. Conclusion. In this post, you have seen how easy it is to set up soft multi-tenancy in a single Kubernetes cluster with Kiosk and the added benefits over the native Kubernetes functionality. You achieved resource isolation across the cluster through account quotas and implemented security boundaries through primitives like accounts, account users, and spaces.
https://aws.amazon.com/blogs/containers/set-up-soft-multi-tenancy-with-kiosk-on-amazon-elastic-kubernetes-service/
CC-MAIN-2020-45
en
refinedweb
I have been using the Policy Injection Application Block which is new to Enterprise Library 3.0 and continue to be awed by what it and other Aspect-Oriented Programming solutions bring to the development experience. There is something very elegant about moving cross-cutting concerns like logging, caching, exception handling, validation, etc. from code and into meta-data exposed as either declarative attributes or configuration information. It not only removes the clutter to help expose the true intention of our code, but also makes maintaining the code a lot simpler. As with anything you can overuse a good thing to cause an utter maintenance nightmare, but if used appropriately you can gain some cleanliness in your code and improve maintenance. The Caching Experience In my web applications, I like to cache the results of expensive calls and use that stale data when appropriate. Not all information needs to be accessible in real-time and can certainly be updated every X minutes without dependencies. This certainly helps with the scalability of your web application. When such simple caching needs are appropriate, the code often follows this simple yet repetitive procedure: public class Service : IService { public Data GetData() { // Pre-Processing Are Results in Cache? If yes, return cached results. // Real-time Expensive Processing Nope. Get data from datastore. // Post-Processing Cache for X minutes and Return Results. } } The above is intentionally generic, but notice all the clutter with logic about caching. Wouldn't it be nice to just declare the caching via meta-data either through attributes or configuration such that all that exists is just the intention revealing call to the service? I think so... It makes so much more sense to me to cache behind-the-scenes for such simple caching cases and perhaps just have an attribute that tips us off that caching is indeed happening when we are looking at the code. public class Service : IService { [Cache for 30 minutes] public Data GetData() { Get data from datastore. } } This is exactly what the Policy Injection Application Block provides us as long as we adhere to a few rules. It allows us to avoid this repetitive code for such simple needs and allow us to better understand and maintain our code when the needs of the application change. It essentially intercepts the method call and does the pre-processing and post-processing on our behalf and allows us to just express the code in its purest form without cluttering it with cross-cutting concerns. The rules are relatively simple. You have to create your services using a factory as opposed to creating them explicitly. Under the covers the Policy Injection Application Block may pass you a proxy class if it finds that you want it to intercept some method calls and do some pre- and post- processing: IService service = PolicyInjection.Create<Service,IService>(); Using attributes, you can simply express where interception needs to occur by adding an attribute ( CachingCallHandler Attribute for caching ) to a method on the Service Class where you want it to cache on your behalf. Specify the time in hours, minutes, and seconds. public class Service : IService { // hours, minutes, seconds [CachingCallHandler(0, 30, 0)] public IList GetData() { Get data from datastore. } } Note that the attribute is purely optional and you can specify the caching via a configuration file which gives you a bit more maintenance flexibility without touching code. The lack of an attribute could also be confusing because it is not obvious that caching is actually occuring. Policy Injection Application Block Tutorials I talk more about the Policy Injection Application Block in various tutorials: Please check out all my Enterprise Library 3.0 Tutorials. by
http://codebetter.com/blogs/david.hayden/archive/2007/04/20/simplify-caching-using-the-policy-injection-application-block-aspect-oriented-programming-opportunities.aspx
crawl-001
en
refinedweb
I’ve heard and read a lot of comments over the months about why can’t we build applications for the 1.1 framework using the VS 2005 IDE? Simple. VS 2005 only targets the 2.0 framework. However, it appears as though making the IDE target multiple versions of the framework is something they are taking seriously and working on. One of the Visual Studio developers, John Rivard, has started blogging as of yesterday, and he starts off wonderfully by explaining Why Visual Studio targets only one version of the .NET Framework. No need for me to elaborate any more on the subject, John does a wonderful job of explaining. Monday, December 05, 2005 (PST) - The REST of Visual Studio 2005 Speaker: Raymond Lewallen Location: Lunch: Oklahoma City OSU/OKC Student Center Dinner: Oklahoma City Downtown Library TOO MUCH MATERIAL, TOO LITTLE TIME… For the first time we are actually planning 2 separate presentations for our lunch and dinner meetings. In the wake of the launch of VS 2005, SQL 2005, Biz Talk 2006, and, of course, the .NET Framework 2.0, there is just too much material to cover for just one short meeting. This month we plan to maximize our opportunity to bring you the most of this new material with different presentations for each session. Following the November 17th, highly attended and outstanding MSDN Event, “The Best of Visual Studio 2005 Launch Event” here in OKC, our own Raymond Lewallen will bring you “The Rest of Visual Studio 2005” Lunch topic: Visual Studio 2005 Launch Event – The Rest of Visual Studio 2005 SQLCLR – How, When and Why Raymond will talk about common uses for SQLCLR and when using the SQLCLR is appropriate, as well as walk through Visual Basic code examples on creating and using objects within the SQLCLR. Raymond will also discuss creating web services from directly within Sql Server. Evening topic: Visual Studio 2005 Launch Event – The Rest of Visual Studio 2005 VB 2005 Language Enhancements Raymond will explore many of the new enhancements to the Visual Basic 2005 language and go through examples on using them to demonstrate how they work, such as Generics, Using, TryCast, IsNot and the new My namespace to name a few. For the last 11 years, Raymond Lewallen has worked with more programming languages than he can recall, but currently specializes in VB and C# as well as Sql Server. Working primarily in the public sector during his career, Raymond has designed and built several very high profile enterprise level applications for all levels of the government, and currently works on a project for the Federal Aviation Administration as a contractor employed by L-3 Communications, Titan Group. As we all know, the big product launch party/event/whatever is happening in San Francisco today. Here is a friend of mine, also president of my fan club, who is attending. I sure hope he doesn’t get beat up wearing this T-Shirt to the event. So I come in this morning and start catching up on the 339 unread rss feed posts in my reader. After 10 minutes, I’ve already come across 4 great developers who have uncovered bugs in the VS 2005 RTM :| I myself haven't gotten very deep into RTM yet because I've been so busy doing other stuff, but somebody even mentioned that they thought Beta 2 was more stable than RTM. I'd hate to go look at the MSDN Feedback for VS2005 at the end of this week. "Yes, we realize that is a bug. It will be fixed in the next major release." Ouch. Wesner Moise Frans Bouma Roy Osherove Anatoly Lubarsky Rick Strahl No doubt that as I continue catching up, this list is going to increase, but hopefully not significantly. We’ll see. I'll continue to add to this list, so check back periodically. November 7, 2005 Group Meeting - INETA Speaker - Josh Holmes Topic: Seamless Integration Through Visual Studio Tools for Office(VSTO) This presentation goes a little deeper because it assumes that you have seen VSTO before and really focuses on behaving properly in Office Speaker:Josh Holmes For the past eight years, Josh Holmes, an MVP and a SRT Solutions’ principal, has worked with a diverse client set, ranging from large Fortune 500 firms to smaller-sized companies. Specializing in mobility solutions, Josh has developed server, web, desktop, handheld and Pocket PC applications.Josh is an accomplished trainer with a deep and varied knowledge base. He has developed software technology courses in the areas of .NET, ASP.NET, XML SPY, XML, VB.NET, C#, Compact Framework and Advanced Compact Framework. Utilizing his unique combination of theoretical knowledge and hands on experience, Josh combines the abstract with real life, practical day-to-day applications.A frequent speaker and lead panelist at national and international software development conferences focusing on emerging technologies, software design and development, Josh also serves as an officer for several non-profit technology organizations including Volunteer IT, Great Lakes Area .NET Users Group and the Ann Arbor Computing
http://codebetter.com/blogs/raymond.lewallen/archive/2005/11.aspx
crawl-001
en
refinedweb
This appendix compares Rational's Ada 83 compiler, Apex for DEC Alpha AXP OSF/1 (DIGITAL UNIX) and DEC Ada on DIGITAL UNIX systems. It also includes differences in the implementation or interpretation of the Ada standard. Apex also provides an Ada 95 compiler validated to the ACVC 2.1 suite. Apex is an integrated development environment for Ada and mixed-language applications. Apex includes a mature, production-quality, optimizing Ada compiler. As a development environment, Apex provides integrated configuration management, source code browsing and automated build control. Rational Software also provides a range of layered products to work with Apex, such as Rational Rose for Visual Modeling, TestMate for test management, and Ada Analyzer for static analysis of source code. The development environment for DEC Ada is provided by the DECset tools. Rational Software can be reached on the World Wide Web at. This appendix discusses the following topics: The following sections highlight differences in types, representations of types, operations, alignment, and related topics. D.1.1 Integer Types Both Apex and DEC Ada provide the following integer types: Table D-1 lists the predefined integer types provided by Apex and DEC Ada, with first and last values. DEC Ada has defined the following additional integer types: Apex has defined the INTEGER_[8,16,32,64] types in the package Interfaces. The following types are also defined there: UNSIGNED_[8,16,32,64] D.1.2 Floating-Point Numbers and Representations Apex provides the following floating-point types: In the package Interfaces Apex defines the following additional floating point types: The predefined attributes and their values that yield the characteristics of each floating-point type are described in the compiler reference guide for Apex. This manual is delivered online in HTML format with the Apex product. DEC Ada on DIGITAL UNIX implements the following floating-point numbers: Table D-2 lists the floating-point types declared in the package STANDARD and their default representations. The predefined attributes that yield the characteristics of each floating-point type are described in the DEC Ada Language Reference Manual. Values of these attributes for the DEC Ada floating-point data representations are listed in Appendix F of the DEC Ada Language Reference Manual. The DEC Ada run-time reference manuals also give information on the internal representation of the DEC Ada floating-point types. On DIGITAL UNIX, DEC Ada provides the pragma FLOAT_REPRESENTATION, which acts as a program library switch to allow control over the internal representation chosen for the predefined floating-point types declared in the packages STANDARD and SYSTEM. On DIGITAL UNIX, the value of this pragma must be IEEE_FLOAT. D.1.3 Record Representation Clause Maximum Alignment On Apex implementations, the record representation clause maximum alignment is 16. On DEC Ada implementations, the maximum alignment is 23 or 8. D.1.4 Record and Array Component Alignment For Apex, all noncomposite components are aligned on natural boundaries, unless overridden with record representation clauses. On Apex, if representation clauses are used and the component does not start on a storage unit boundary, then it must be possible to store the component in a register with one move instruction. On DEC Ada, all noncomposite components are aligned on natural boundaries (unless otherwise specified with the pragma COMPONENT_ALIGNMENT). For example, 1-byte components are aligned on byte boundaries, 2-byte components on 2-byte boundaries, 4-byte components on 4-byte boundaries, and so on. The Alpha hardware runs more efficiently with naturally aligned data. On DIGITAL UNIX systems, DEC Ada allows the simple expression in an alignment clause to have a value between 20 and 216 (inclusive). In other words, the simple expression must be an integer in the range 1 .. 512, 1 .. 65536, or 1 .. 8 that is also a power of 2. The allocations then occur at addresses that are a multiple of the simple expression (a value of 2 aligns the data on a 2-byte boundary, a value of 4 aligns the data on a 4-byte boundary, and so on). D.1.5 Type DURATION The type DURATION has different ranges on Apex and DEC Ada. Table D-3 shows these ranges as well as other attributes of the type DURATION and their values on the two platforms. The following list shows implementation-defined type information for Apex: Representation clauses are based on the target machine's word, byte, and bit order numbering. Apex is consistent with machine architecture manuals for both "big-endian" and "little-endian" machines. Bits within a STORAGE_UNIT are numbered according to the target machine manuals. It is not necessary for a user to understand the default layout for records and other aggregates because fine control over the layout is obtained by the use of record representation clauses. It is possible to align fields correctly with structures and other aggregates from other languages by specifying the location of each element explicitly. The FIRST_BIT and LAST_BIT attributes can be used to construct bit manipulation code applicable to differently bit-numbered systems. The only restriction on record representation clauses is that if a component does not start and end on a storage unit boundary, it must be possible to get it into a register with one move instruction. The size of object modules is aligned. It is assumed that "mod 2" is a worst case restriction, assuming that even the C compiler aligns to a 2-byte boundary. The alignment clause portion of a record representation must be a power of 2. The alignment is obeyed for all allocations of the record type with the following exceptions: For these two exceptions, the maximum alignment obeyed is the default stack and heap alignment. If a record is given a representation clause but no alignment clause, the compiler assumes that the record may be arbitrarily aligned (at an arbitrary bit offset within another structure, for example). D.2.1 ADDRESS Attribute---Apex Implementations The ADDRESS attribute is supported for the following entities: If the prefix of an address attribute is an object that is not aligned on a storage unit boundary, the attribute yields the address of the storage unit containing the first bit of the object. This is consistent with the definition of the FIRST_BIT attribute. D.2.2 Restrictions on Unchecked Type Conversions Both Apex and DEC Ada implementations provide both UNCHECKED_DEALLOCATION and UNCHECKED_CONVERSION. Apex supports the generic function UNCHECKED_CONVERSION with the following restrictions on the class of types involved: If the size of the source differs from the size of the target subtype, a warning is issued by the compiler and results may be unpredictable. Any object allocated can be deallocated. Currently, Apex performs no checks on release objects. However, when an object is deallocated, its access variable is set to null. Subsequent deallocations using the null access variable are ignored. DEC Ada supports the generic function UNCHECKED_CONVERSION with the following restrictions on the class of types involved: When the target type is a type with discriminants, the value resulting from a call of the conversion function resulting from an instantiation of UNCHECKED_CONVERSION is checked to ensure that the discriminants satisfy the constraints of the actual subtype. If the size of the source value is greater than the size of the target subtype, the high order of bits of the value is ignored (truncated). If the size of the source value is less than the size of the target subtype, the value is extended with zero bits to form the result value. D.2.3 Additional Representation Clause Information---Apex Implementations Apex supports the following: The representation clauses allowed in Apex are length, enumeration, record representation, and address clauses. The representation clauses allowed in DEC Ada are length, enumeration, record representation, and address clauses. In DEC Ada, a representation clause is not allowed for: In Apex, an array dope vector is a sequence of triples (3 x 64-bit words, 192 bits) containing the size in bytes (size in bits if the array is packed) of the subarray for that dimension, the value of 'FIRST and the value of 'LAST. An array subtype is completely static if its bounds are all static and its component subtype is static sized. The dope vector for a completely static array subtype is initialized statically. All other dope vectors are initialized by generated inline code. Dope vectors are allocated in different ways as shown in the following table: In DEC Ada, dope vectors are special descriptors that are used in some cases to pass record and array parameters between Ada subprograms or to return record and array function results. They are never used in calls to and from subprograms that are specified in an import, export, or INTERFACE pragma. DEC Ada uses two kinds of dope vectors: For more information on DEC Ada dope vectors, see the DEC Ada Run--Time Reference Manual for DEC OSF/1 Systems. D.4 Parameter Passing Small results are returned in registers. Large results with known targets are passed by reference. Large results of anonymous target and known size are passed by reference to a temporary created on the caller's stack. Large results of anonymous target and unknown size are returned by copying the value down from a temporary in the callee so the space used by the temporary can be reclaimed. Apex passes up to six parameters in registers, the remaining parameters are passed on the stack. The MACHINE_CODE package requires the usage of parameters as operands be consistent with the type of operand expected by the MACHINE_CODE instruction, given that the parameters can be in registers or on the stack. In DEC Ada when importing or exporting routines from other languages or when exporting Ada subprogram, you can explicitly specify the passing mechanisms for one or more parameters or function results. Before deciding to explicitly specify the passing mechanisms, the compiler compilation notes can be used to determine which default mechanisms the compiler chooses for certain parameters or function results. Once the parameter-passing mechanisms are explicitly specified, the MECHANISM option can be used in DEC Ada import or export pragmas to specify one of two values for each parameter. Similarly, the RESULT_MECHANISM option can be used to specify one of the same two values for each function result. The two mechanisms are as follows: The package STANDARD is fully described in the Reference Manual for the Ada Programming Language (ANSI/MIL-STD-1815A-1983) and the the implementation of the package in DEC Ada is fully described in DEC Ada Language Reference Manual. For a discussion of the predefined types in this appendix, see Section D.1. Section D.1. The differences between implementations of the package STANDARD on Apex and on DEC Ada are shown in the following list: Table D-4 compares the sizes of integer and floating-point types between the package STANDARD on Apex and on DEC Ada. The following list compares the differences between implementations of the package SYSTEM on Apex and on DEC Ada: On DEC Ada, the package SYSTEM has the following functions: The Apex and DEC Ada compilers provide additional constant declarations in the predefined package SYSTEM as shown in Table D-5. Neither Apex nor DEC Ada allow the recompilation of the package SYSTEM. Instead, DEC Ada provides several pragmas (SYSTEM_NAME, STORAGE_UNIT, and MEMORY_SIZE) to modify values in the package SYSTEM. In Apex, the pragmas SYSTEM_NAME, STORAGE_UNIT, and MEMORY_SIZE are recognized by the implementation but have no effect. D.7 Tasking and Task-Related Features The concepts particularly relevant to a comparison of tasking on Apex and on DEC Ada for DIGITAL UNIX systems are discussed in the following sections. For detailed information on concepts related to tasking in DEC Ada, see the DEC Ada Language Reference Manual and the relevant run-time reference manual. D.7.1 Implementation of Tasks in DEC Ada for DIGITAL UNIX Systems DEC Ada tasks on DIGITAL UNIX systems run in the context of threads that are created and managed by the DIGITAL UNIX kernel. DEC Ada tasking support is based on DECthreads, an implementation of the proposed POSIX standard for threads. For more information, see the DEC Ada Run--Time Reference Manual for DEC OSF/1 Systems. Apex is available in Threaded and Non-Threaded versions. The Threaded version uses DECthreads. Apex Ada tasks can either be managed within a single context or mapped to DECthreads, by selecting the appropriate predefined library. D.7.2 Task-Related Pragmas Apex supplies the following task-related pragmas: DEC Ada supplies the pragma TASK_STORAGE, which allows the specification of the size of the guard area for a task stack. (The guard area forms an area of memory that has no read or write access and thus helps in the detection of stack overflow.) On DIGITAL UNIX systems, if the pragma TASK_STORAGE specifies a value of zero, a minimal guard area is created. In the absence of a pragma TASK_STORAGE, a default guard area is created. Both Apex and DEC Ada supply the pragma SUPPRESS. For more information on these pragmas, see Table D-6. D.7.3 Scheduling and Task Priority On DIGITAL UNIX systems, the default strategy is that tasks of equal priority take turns at the processor (round-robin task scheduling). A task is run for a certain period of time, then placed at the rear of the ready queue for that priority. DEC Ada provides implementation-defined pragma TIME_ SLICE, which can be used to enable or disable round-robin scheduling of tasks with the same priority. DEC Ada task priorities can be changed dynamically at run time to values of the subtype PRIORITY, as well as to the values of DIGITAL UNIX real-time and system priorities. See the relevant DEC Ada run-time reference manual for information on using the pragma PRIORITY to control DEC Ada task scheduling. See the DEC Ada Run-Time Reference Manual for DEC OSF/1 Systems or the specification of the DEC Ada package SET_TASK_PRIORITY for more information on dynamically changing task priorities on DIGITAL UNIX systems. Apex uses round-robin scheduling with tasks of equal priority. D.7.4 External Interrupts Apex allows task entries to be associated with DIGITAL UNIX signals. DIGITAL UNIX handles all interrupts and faults initially and returns control to the user program as a signal. The Apex Runtime System (RTS) sets up the following signal handlers: On DEC Ada, external interrupts can be associated with task entries. D.8 Pragmas and Pragma-Related Features Both DEC Ada and Apex supply all language-defined pragmas as specified by the Ada standard. These pragmas are as follows: Apex and DEC Ada restrict the predefined language pragmas INLINE and INTERFACE. For more information, see Section D.8.1. Table D-6 summarizes the differences between pragmas supplied by Apex and pragmas supplied by DEC Ada. These differences can affect applications being ported between Apex and DEC Ada. Table D-6 is not intended to provide a complete discussion of DEC Ada and Apex pragmas. In particular, DEC Ada pragmas not available on the DIGITAL UNIX platform are not mentioned. This pragma is essentially always in effect for DIGITAL implementations. It is recognized by Rational implementations but has no effect in the current release. Allowed languages are ASM, C, ADA, FORTRAN, PASCAL, or UNCHECKED. Use C when calling from C++. This pragma has an effect only when the calling conventions of the foreign language differ from those of Ada. This pragma is analogous to the DEC Ada EXPORT pragmas (EXPORT_ OBJECT, EXPORT_FUNCTION, and EXPORT_PROCEDURE) for objects and subprograms. Apex implementations add that recursive calls can be expanded with the pragma up to the maximum depth of 4. Restrictions for both Apex and DEC Ada implementations. DEC Ada implementations sometimes decide to inline implicitly. This pragma also suppresses the generation of a callable version of the routine that saves code space. The DEC Ada compiler uses internal heuristics to optimize code. DEC Ada recognizes ADA, C, FORTRAN, and BLISS. Both compilers implement the pragma INTERFACE_NAME. DEC Ada also provides several import pragmas. See Restrictions on the Pragma INTERFACE for information on restrictions for this pragma. Analogous to the pragmas IMPORT_ FUNCTION and IMPORT_PROCEDURE but INTERFACE_NAME has fewer parameters. Also analogous to IMPORT_OBJECT. Differences between implementations in string parameter interpretation. DEC Ada does not evaluate the second parameter. To pass options to the linker, DEC Ada supports the -L ldflags option, the LDFLAGS environment variable, and the aimport command. In Apex, this pragma is recognized but has no effect in the current release. Both implementations have restrictions on what is considered packable. In Apex, the pragma identifies a scalar or access variable that might be read or written by different tasks. The Ada optimizer does not attempt to optimize away or move any reads or writes to this variable. In DEC Ada, there are restric- tions on floating-point types. On Apex, this pragma is recognized but has no effect. The implementation does not allow the package SYSTEM to be modified by means of pragmas. However, the same effect can be achieved by recompiling the package SYSTEM with altered values. On DEC Ada, this pragma allows only 8 (bits). In DEC Ada, this pragma specifies that all run-time checks are suppressed. In Apex, guarantees that loads and stores to the named object are performed as expected after optimization. The object declaration and the pragma must both occur (in this order) immediately within the same declarative part or package specification.
http://h71000.www7.hp.com/commercial/ada/doc/compare_pro_013.html
crawl-001
en
refinedweb
1992 These header lines are sent by the client in a HTTP protocol transaction. All lines are RFC822 format headers. The list of headers is terminated by an empty line. From Accept Accept-Encoding Accept-Language User-Agent Referer Authorization Charge-To If-Modified-Since Pragma. The set given may of course vary from request to request from the same user. This field may be wrapped onto several lines according to RCFC822, and also more than one occurence of the field is allowed with the signifiance being the same as if all the entries has been in one field. The format of each entry in the list is (/ meaning "or") <field> = Accept: <entry> *[ , <entry> ] <entry> = <content type> *[ ; <param> ] <param> = <attr> = <float> <attr> = q / mxs / mxb <float> = <ANSI-C floating point text represntation> See the appendix on the negotiation algorithm as a function and penalty model. Note that a semicolon has a higher precedence than a comma in this syntax, to conform to MIME use. If no Accept: field is present, then it is assumed that text/plain and text/html are accepted. Accept: text/plain, text/html Accept: text/x-dvi; q=.8; mxb=100000; mxt=5.0, text/x-c In order to save time, and also allow clients to receive content types of which they may not be aware, an asterisk "*" may be used in place of either the second half of the content-type value, or both halves. This only applies to the Accept: filed, and not to the content-type field of course. Accept: *.*, q=0.1 Accept: audio/*, q=0.2 Accept: audio/basic q=1 may be interpreted as "if you have basic audio, send it; otherwise send me some other audio, or failing that, just give me what you've got.". These parameters are to be specified when types are registered.. @@ TBS. Sugestions include the following. Please feed back any references to existing improved abreviations for these: Similar to Accept, but lists the Content-Encoding types which are acceptable in the response. <field> = Accept-Encoding: <entry> *[ , <entry> ] <entry> = <content transfer encoding> *[ , <param> ] Accept-Encoding: x-compress; x-zip Similar to Accept, but lists the Language values which are preferable in the response. A response in an unspecifies language is not illegal. See also: Language . Language coding TBS. (ISO standard xxxx.) This line if present gives the software program used by the original client. This is for statistical purposes and the tracing of protocol violations. It should be included. The first white space delimited word must be the software product name, with an optional slash and version designator. Other products which form part of the user agent may be put as separate words. <field> = User-Agent: <product>+ <product> = <word> [/<version>] <version> = <word> User-Agent: LII-Cello/1.0 libwww/2.5 This optional header field. If a partial URI is given, then it should be parsed relative to the URI of the object of the request. Referer: If this line is present it contains authorization information. The format is To Be Specified (TBS). The format of this field is in extensible form. The first word is a specification of the authorisation system in use. Specification for current one implemented by AL Sep 1993. People at NCSA are designing a PGP/PEM based protection system. Authorization: user fred:mypassword The scheme name is "user". The second word is a user name (typically derived from a USER environment variable or prompted for), with an optional password separated by a colon (as in the URL syntax for FTP). Without a password, this povides very low level security. With the password, it provides a low-level security as used by unmodified FTP, Telnet, etc. Authorization: kerberos kerberosauthenticationsparameters The format of the kerberosauthenticationsparameters is to be specified. This line if present contains account information for the costs of the application of the method requested. The format is TBS. The format of this field must be in extensible form. The first word starts with a specification of the namespace in which the account is . (This is similar to extensible URL definition.) No namespaces are currently defined. Namespaces will be registered with the registration authority . The format of the rest of the line is a function of the charging system, but it is recommended that this include a maximum cost whose payment is authorized by the client for this transaction, and a cost unit. This request header is used with GET method to make it conditional: if the requested document has not changed since the time specified in this field the document will not be sent, but instead a Not Modified 304 reply. Format of this field is the same as for Date:. Syntax is the same as for other multiple-value fields in HTTP, like the Accept: field, namely, a comma-separated list of entries, for which the optional parameters are separated by semicolons. Pragma directives should be understood by servers to which they are relevant, e.g. a proxy server; currently only one pragma is defined: no-cache Pragmas should be passed through by proxies even though they might have significance to the proxy itself. This is necessary in cases when the request has to go through many proxies, and the pragma should affect all of them.
http://www.w3.org/hypertext/WWW/Protocols/HTTP/HTRQ_Headers.html
crawl-001
en
refinedweb
++./07-0331 = WG21 N2461. [Moved to DR at October 2007 meeting.].) Proposed resolution (October, 2007): This issue is resolved by the adoption of paper J16/07-0030 = WG21 N2170. [Voted into WP at April, 2007 meeting.] Section 1.3.11 . James Widman: Names don't participate in overload resolution; name lookup is separate from overload resolution. However, the word “signature” is not used in clause 13 [over]. It is used in linkage and declaration matching (e.g., 14.5.6.1 [temp.over.link]). This suggests that the name and scope of the function should be part of its signature. Proposed resolution (October, 2006): Replace 1.3.11 [defns.signature] with the following: the name and the parameter-type-list (8.3.5 [dcl.fct]) of a function, as well as the class or namespace of which it is a member. If a function or function template is a class member its signature additionally includes the cv-qualifiers specified or deduced). [Note: Signatures are used as a basis for name-mangling and linking. —end note] Delete paragraph 3 and replace the first sentence of 14.5.6.1 [temp.over.link] as follows: The signature of a function template specialization consists of the signature of the function template and of the actual template arguments (whether explicitly specified or deduced). The signature of a function template consists of its function signature, its return type and its template parameter listis defined in 1.3.11 [defns.signature]. The names of the template parameters are significant... (See also issue 537.) [Voted into WP at April, 2007 meeting.] The standard defines “signature” in two places: 1.3.11 [defns.signature] and 14.5.11 [defns.signature] words “the information about a function that participates in overload resolution” isn't quite right either. Perhaps, “the information about a function that distinguishes it in a set of overloaded functions?” Eric Gufford: In 1.3.11 [defns.signature] the definition states that “Function signatures do not include return type, because that does not participate in overload resolution,” while 14.5. James Widman: The problem is that (a) if you say the return type is part of the signature of a non-template function, then you have overloading but not overload resolution on return types (i.e., what we have now with function templates). I don't think anyone wants to make the language uglier in that way. And (b) if you say that the return type is not part of the signature of a function template, you will break code. Given those alternatives, it's probably best to maintain the status quo (which the implementors appear to have rendered faithfully). Proposed resolution (September, 2006): This issue is resolved by the resolution of issue 357. [Voted into WP at April, 2006 meeting.] The standard uses “most derived object” in some places (for example, 1.3.]] [Voted into WP at April 2005 meeting.] (10/00): Replace the two cited sentences from 10.2 [class.member.lookup] paragraph 2 with the following:. Replace the examples in 10.2 [class.member.lookup] paragraph 3 with the following: struct A { int x(int); static int y(int); }; struct V { int z(int); }; struct B: A, virtual V { using A::x; float x(float); using A::y; static float y(float); using V::z; float z(float); }; struct C: B, A, virtual V { }; void f(C* c) { c->x(3); // ambiguous -- more than one sub-object A c->y(3); // not ambiguous c->z(3); // not ambiguous } Notes from 04/01 meeting: The following example should be accepted but is rejected by the wording above: struct A { static void f(); }; struct B1: virtual A { using A::f; }; struct B2: virtual A { using A::f; }; struct C: B1, B2 { }; void g() { C::f(); // OK, calls A::f() } Notes from 10/01 meeting (Jason Merrill): The example in the issues list: struct A { int x(int); }; struct B: A { using A::x; float x(float); }; int f(B* b) { b->x(3); // ambiguous }Is broken under the existing wording: ....Since the two x's are considered to be "from" different objects, looking up x produces a set including declarations "from" different objects, and the program is ill-formed. Clearly this is wrong. The problem with the existing wording is that it fails to consider lookup context. The first proposed solution:.breaks this testcase: struct A { static void f(); }; struct B1: virtual A { using A::f; }; struct B2: virtual A { using A::f; }; struct C: B1, B2 { }; void g() { C::f(); // OK, calls A::f() }because it considers the lookup context, but not the definition context; under this definition of "from", the two declarations found are the using-declarations, which are "from" B1 and B2. The solution is to separate the notions of lookup and definition context. I have taken an algorithmic approach to describing the strategy. Incidentally, the earlier proposal allows one base to have a superset of the declarations in another base; that was an extension, and my proposal does not do that. One algorithmic benefit of this limitation is to simplify the case of a virtual base being hidden along one arm and not another ("domination"); if we allowed supersets, we would need to remember which subobjects had which declarations, while under the following resolution we need only keep two lists, of subobjects and declarations. Proposed resolution (October 2002): Replace 10.2 [class.member.lookup] paragraph 2 with: The following steps define the result of name lookup for a member name f in a class scope C. members they designate, and type declarations (including injected-class-names) are replaced by the types they designate. S(f,C) is calculated as follows. If C contains a declaration of the name f, the declaration set contains every declaration of f in C (excluding bases), the subobject set contains C itself, and calculation is complete. Otherwise, S(f,C) is initially empty. If C has base classes, calculate the lookup set for f in each direct base class subj), S(f,C) is unchanged and the merge is complete. Conversely, if each of the subobject members of S(f,C) is a base class subobject of at least one of the subobject members of S(f,Bi),, consider each declaration d in the set, where d is a member of class A. If d is a nonstatic member, compare the A base class subobjects of the subobject members of S(f,Bi) and S(f,C). If they do not match, the merge is ambiguous, as in the previous step. [Note: It is not necessary to remember which A subobject each member comes from, since using-declarations don't disambiguate. ] - subobjects of D are also base subobjects of E, so S(x,D) is discarded in the first merge step. --end example] Turn 10.2 [class.member.lookup] paragraphs 5 and 6 into notes. Notes from October 2003 meeting: Mike Miller raised some new issues in N1543, and we adjusted the proposed resolution as indicated in that paper. Further information from Mike Miller (January 2004): Unfortunately, I've become aware of a minor glitch in the proposed resolution for issue 39 in N1543, so I'd like to suggest a change that we can discuss in Sydney. A brief review and background of the problem: the major change we agreed on in Kona was to remove detection of multiple-subobject ambiguity from class lookup (10.2 [class.member.lookup]) and instead handle it as part of the class member access expression. It was pointed out in Kona that 11.2 [class.access.base]/5 has this effect:. After the meeting, however, I realized that this requirement is not sufficient to handle all the cases. Consider, for instance, struct B { int i; }; struct I1: B { }; struct I2: B { }; struct D: I1, I2 { void f() { i = 0; // not ill-formed per 11.2p5 } }; Here, both the object expression ("this") and the naming class are "D", so the reference to "i" satisfies the requirement in 11.2 [class.access.base]/5, even though it involves a multiple-subobject ambiguity. In order to address this problem, I proposed in N1543 to add a paragraph following 5.2.5 [expr.ref]/4: If E2 is a non-static data member or a non-static member function, the program is ill-formed if the class of E1 cannot be unambiguously converted (10.2) to the class of which E2 is directly a member. That's not quite right. It does diagnose the case above as written; however, it breaks the case where qualification is used to circumvent the ambiguity: struct D2: I1, I2 { void f() { I2::i = 0; // ill-formed per proposal } }; In my proposed wording, the class of "this" can't be converted to "B" (the qualifier is ignored), so the access is ill-formed. Oops. I think the following is a correct formulation, so the proposed resolution we discuss in Sydney should contain the following paragraph instead of the one in N1543: If E2 is a nonstatic data member or a non-static member function, the program is ill-formed if the naming class (11.2) of E2 cannot be unambiguously converted (10.2) to the class of which E2 is directly a member. This reformulation also has the advantage of pointing readers to 11.2 [class.access.base], where the the convertibility requirement from the class of E1 to the naming class is located and which might otherwise be overlooked. Notes from the March 2004 meeting: We discussed this further and agreed with these latest recommendations. Mike Miller has produced a paper N1626 that gives just the final collected set of changes. (This resolution also resolves isssue 306.) [Voted into WP at April 2005 meeting.] Is the following well-formed? struct A { struct B { }; }; struct C : public A, public A::B { B *p; };The lookup of B finds both the struct B in A and the injected B from the A::B base class. Are they the same thing? Does the standard say so? What if a struct is found along one path and a typedef to that struct is found along another path? That should probably be valid, but does the standard say so? This is resolved by issue 39 February 2004: Moved back to "Review" status because issue 39 was moved back to "Review". .]: A:? Proposed resolution (04/01): The resolution for this issue is contained in the resolution for issue 45. [Voted into WP at the October, 2006 meeting.]. [Moved to DR at 4/01 meeting.].]. [Voted into WP at October 2004 meeting.] We consider it not unreasonable to do the following class A { protected: void g(); }; class B : public A { public: using A::g; // B::g is a public synonym for A::g }; class C: public A { void foo(); }; void C::foo() { B b; b.g(); } However the EDG front-end does not like and gives the error #410-D: protected function "A::g" is not accessible through a "B" pointer or object b.g(); ^ Steve Adamczyk: The error in this case is due to 11.5 [class.protected] of the standard, which is an additional check on top of the other access checking. When that section says "a protected nonstatic member function ... of a base class" it doesn't indicate whether the fact that there is a using-declaration is relevant. I'd say the current wording taken at face value would suggest that the error is correct -- the function is protected, even if the using-declaration for it makes it accessible as a public function. But I'm quite sure the wording in 11.5 [class.protected] was written before using-declarations were invented and has not been reviewed since for consistency with that addition. Notes from April 2003 meeting: We agreed that the example should be allowed. Proposed resolution (April 2003, revised October 2003): Change 11.5 [class.protected] paragraph 1 from When a friend or a member function of a derived class references a protected nonstatic member function or protected nonstatic data member of a base class, an access check applies in addition to those described earlier in clause 11 [class.access]. [Footnote: This additional check does not apply to other members, e.g. static data members or enumerator member constants.] Except when forming a pointer to member (5.3.1 [expr.unary.op]), the access must be through a pointer to, reference to, or object of the derived class itself (or any class derived from that class (5.2.5 [expr.ref]). If the access is to form a pointer to member, the nested-name-specifier shall name the derived class (or any class derived from that class). to An additional access check beyond those described earlier in clause 11 [class.access] is applied when a nonstatic data member or nonstatic member function is a protected member of its naming class (11.2 [class.access.base]). [Footnote: This additional check does not apply to other members, e.g., static data members or enumerator member constants.] As described earlier, access to a protected member is granted because the reference occurs in a friend or member of some class C. If the access is to form a pointer to member (5.3.1 [expr.unary.op]), the nested-name-specifier shall name C or a class derived from C. All other accesses involve a (possibly implicit) object expression (5.2.5 [expr.ref]). In this case, the class of the object expression shall be C or a class derived from C..]. Proposed resolution (04/01): The resolution for this issue is incorporated into the resolution for issue 45. [Moved to DR at 4/01 meeting.]. : created in the default argument expressions are destroyed immediately after return from the constructor. [Voted into WP at April 2005 meeting.] Section 12.2 [class.temporary] paragraph 2, abridged: X f(X); void g() { X a; a = f(a); } a=f(a) requires a temporary for either the argument a or the result of f(a) to avoid undesired aliasing of a. The note seems to imply that an implementation is allowed to omit copying "a" to f's formal argument, or to omit using a temporary for the return value of f. I don't find that license in normative text. Function f returns an X by value, and in the expression the value is assigned (not copy-constructed) to "a". I don't see how that temporary can be omitted. (See also 12.8 [class.copy] p 15) Since "a" is an lvalue and not a temporary, I don't see how copying "a" to f's formal parameter can be avoided. Am I missing something, or is 12.2 [class.temporary] p 2 misleading?: A full-expression is an expression that is not a subexpression of another expression. If a language construct is defined to produce an implicit call of a function, a use of the language construct is considered to be an expression for the purposes of this definition. Conversions applied to the result of an expression in order to satisfy the requirements of the language construct in which the expression appears are also considered to be part of the full-expression. .] There seems to be a typo in 12.2 [class.temporary]/5, which says "The temporary to which the reference is bound or the temporary that is the complete object TO a subobject OF which the TEMPORARY is bound persists for the lifetime of the reference except as specified below." I think this should be "The temporary to which the reference is bound or the temporary that is the complete object OF a subobject TO which the REFERENCE is bound persists for the lifetime of the reference except as specified below." I used upper-case letters for the parts I think need to be changed.(a. [Voted into WP at October 2004 meeting.] Normally reference semantics allow incomplete types in certain contexts, but isn't this: class A; A& operator<<(A& a, const char* msg); void foo(A& a) { a << "Hello"; } required to be diagnosed because of the op<<? The reason being that the class may actually have an op<<(const char *) in it. What is it? un- or ill-something? Diagnosable? No problem at all? Steve Adamczyk: I don't know of any requirement in the standard that the class be complete. There is a rule that will instantiate a class template in order to be able to see whether it has any operators. But I wouldn't think one wants to outlaw the above example merely because the user might have an operator<< in the class; if he doesn't, he would not be pleased that the above is considered invalid. Mike Miller: Hmm, interesting question. My initial reaction is that it just uses ::operator<<; any A::operator<< simply won't be considered in overload resolution. I can't find anything in the Standard that would say any different. The closest analogy to this situation, I'd guess, would be deleting a pointer to an incomplete class; 5.3.5 [expr.delete] paragraph 5 says that that's undefined behavior if the complete type has a non-trivial destructor or an operator delete. However, I tend to think that that's because it deals with storage and resource management, not just because it might have called a different function. Generally, overload resolution that goes one way when it might have gone another with more declarations in scope is considered to be not an error, cf 7.3.3 [namespace.udecl] paragraph 9, 14.6.3 [temp.nondep] paragraph 1, etc. So my bottom line take on it would be that it's okay, it's up to the programmer to ensure that all necessary declarations are in scope for overload resolution. Worst case, it would be like the operator delete in an incomplete class -- undefined behavior, and thus not required to be diagnosed. 13.3.1.2 [over.match.oper] paragraph 3, bullet 1, says, "If T1 is a class type, the set of member candidates is the result of the qualified lookup of T1::operator@ (13.3.1.1.1 [over.call.func])." Obviously, that lookup is not possible if T1 is incomplete. Should 13.3.1.2 [over.match.oper] paragraph 3, bullet 1, say "complete class type"? Or does the inability to perform the lookup mean that the program is ill-formed? 3.2 [basic.def.odr] paragraph 4 doesn't apply, I don't think, because you don't know whether you'll be applying a class member access operator until you know whether the operator involved is a member or not. Notes from October 2003 meeting: We noticed that the title of this issue did not match the body. We checked the original source and then corrected the title (so it no longer mentions templates). We decided that this is similar to other cases like deleting a pointer to an incomplete class, and it should not be necessary to have a complete class. There is no undefined behavior. Proposed Resolution (October 2003): Change the first bullet of 13.3.1.2 [over.match.oper] paragraph 3 to read: If T1 is a complete class type, the set of member candidates is the result of the qualified lookup of T1::operator@ (13.3.1.1.1 [over.call.func]); otherwise, the set of member candidates is empty. [Moved to DR at October 2002 meeting.]..
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2447.html
crawl-001
en
refinedweb
Euruko 2006: Day Two Sun Nov 05 10:18:59 CET 2006 The first talk on Saturday was about Ruby and JavaScript by Sven C. Koehler, who noticed he used less and less Rails, but more and more JavaScript when developing websites. He explained he likes using JavaScript, since he can use DHTML (e.g. animations), AJAX and it also saves the server CPU, since it works on the client side (I’m not really buying that ;-)). He doesn’t want to use RHTML since it’s too obtrusive (cf. JSP), difficult to separate mentally and he prefers prefers CGI-like HTML generation. Also, he doesn’t like RJS because of the “you don’t need to know JavaScript”-mentality and since JavaScript code gets slow quickly with growing size. Before digging deeper, he warned about a few caveats: JavaScript can be taken seriously, it’s a lot unlike Java, it’s not class-based object-oriented, and it’s difficult at times to combine libraries. He showed some idioms in JavaScript (together with Prototype), about using the Ruby-like enumerable or doing default parameters for methods. Next, he also listed some bad features of JavaScript, notably the lack of namespaces, having no nice debugger (it slowly gets better), and having only a single thread in the browser. Then, he went to show some Prototype niceties ( Try.these, Builder.node and using JSON). At the end, he presented a new “Web 2.0” web site he has been developing called MySit.es, best quote during the demo: “It takes a while, because it’s JavaScript”. The whole user-interface of the web site is purely based on JavaScript. Next in line was a talk about ebXML by Sacha Schlegl, which started with the explanation of open standards and why they are useful, about the work of OASIS, where ebXML is from, and that open source and open standards are a good match. ebXML is an XML format for the next generation of e-business, but I didn’t learn a lot more, except that, according to him, it should be EBxml, since the “electronic business”-part is the important one, not the XML. After this, we got into Ruby code again in the talk CodeGolfing with Ruby by Jannis Harder, which was written over-night, but pretty good and interesting nevertheless. Jannis started by showing off a golfed “paint my numbers” solver, and then tried to justify code-golfing: his first point was to learn more about obscure features of Ruby, then he stumbled, and, ehm, ehm, it’s to have fun! He went on with the problem of adding numbers in a path traversing a triangle, and polled the audience whether it was “shorter to code” if he went from top-to-bottom or bottom-to-top. Top-to-bottom actually turned out to be a lot shorted, especially since you can read the input and compute at the same time. Then, he showed some golfing tricks, e.g. using *x=1 instead of x=[1], using map instead of each or Array#* instead of join. He also listed a few other techniques, such as heavy use of side-effects, reuse of return values, use of $...-variables and multiple assignments. After showing us some more examples of golfed code, he ended his presentation referring to, where one can participate in golfing contests. The next talk was about Transparent Archiving by Kero Van Gelder, which is a kind of lightweight API for assignment that can keep more than one timestamp of values and archive automatically to files (that is, some light way of persistence). He wanted the archives to be human readable, editable and not being too harsh on flash memory. Following good manners as a software engineer, it’s of course developed using TDD (he gave a quick introduction to it). After an overview of its usage, he showed tests and how the implementation worked. After this, I presented Sublanguages. You actually can read the presentation for yourself on my talks page. Enjoy! I had to leave Euruko early just after my presentation, so I missed the talks on CRM and Rails, Patching Ruby, Pay4Code and Mongrel, which I really would have liked to see… but then I wouldn’t get the train home. I hope someone else blogged about it, else, you got to wait until the videos get uploaded. Euruko 2006 was big fun for me, and I’d like to thank again the organizers, Ruby Central, O’Reilly for the book each speaker got for free, and everyone attending for the great time I had. See you all next Euruko! NP: Pearl Jam—Nothingman
http://chneukirchen.org/blog/archive/2006/11/euruko-2006-day-two.html
crawl-001
en
refinedweb
Yahoo! UI Library (YUI) Plugin This plugin provides integration with the Yahoo! UI Library. When installing the plugin, it downloads and installs automatically the latest YUI 2.5.1 distribution in your application, and registers itself to be used with the adapative AJAX tags. It also contains two helper tags to easily include additional YUI javascript and css files as well. Installation To install the YUI plugin type this command from your project's root folder: grails install-plugin yui The complete YUI distribution is downloaded and installed under your project's \web-app\js\yui\2.5.1 folder. Usage To use Grails' adaptive AJAX support just add the folowing line in the head section: <g:javascript If you want to include additional YUI javascript and css files include them using: <yui:javascript <yui:javascript // version to be used in case multiple version installed <yui:stylesheet Refer to the Ajax section of the Grails reference documentation for usage. Overriding default javascript files By default only yahoo-dom-event.js and connection-min.js are included when using <g:javascript. Adding additional libraries to the default list can be done in a BootStrap class: import org.codehaus.groovy.grails.plugins.web.taglib.JavascriptTagLib class BootStrap { def init = { servletContext -> JavascriptTagLib.LIBRARY_MAPPINGS.yui += ["yui/2.5.1/calendar/calendar-min", "yui/2.5.1/container/container-min"] } def destroy = { } } It's also possible to replace all default included javascript libraries. For example if you want to use the YUI debug javascript files in development mode: import grails.util.GrailsUtil import org.codehaus.groovy.grails.plugins.web.taglib.JavascriptTagLib class BootStrap { def init = { servletContext -> if (GrailsUtil.isDevelopmentEnv()) { JavascriptTagLib.LIBRARY_MAPPINGS.yui = ["yui/2.5.1/yahoo/yahoo-debug", "yui/2.5.1/dom/dom-debug", "yui/2.5.1/event/event-debug", "yui/2.5.1/connection/connection-debug"] } } def destroy = { } } Serving YUI Files from Yahoo! Servers It's also possible to serve the javascript from the Yahoo! servers. First delete the yui folder from \web-app\js after installing the plugin. Then, in a BootStrap class, override the mapping which contains the javascript files to include by default: import org.codehaus.groovy.grails.plugins.web.taglib.JavascriptTagLib class BootStrap { def init = { servletContext -> JavascriptTagLib.LIBRARY_MAPPINGS.yui = [] } def destroy = { } } The only drawback is that you need to include the references to the javascript files manually. Probably the easiest way is to do this in the main.gsp layout. Upgrading If you want to upgrade: - Delete the plugin from the project's \plugins folder - (Optional) Delete the previous YUI version folder from \web-app\js\yui - Re-install the plugin by executing grails install-plugin yui Plugin version history
http://www.grails.org/YUI+Plugin
crawl-001
en
refinedweb
This is not my usual kind of subject. Most of my post are on the diversity of share worthy tidbits I encounter while building apps. Test Driven Design is quite a hot topic here on Codebetter. So far I had nothing to add to the great writings of Jeremy et al. But a discussion on a post by Jay on Javascript struck me. It gave the impression TDD as something too big or complicated to get started with. In this post I will describe some of my experiences with TDD. It's a very simple story on a very simple problem but it makes clear how TDD is also for me an easy way to tackle problems. This story deals with the Vecozo web service. This secured web service offers a way to check Dutch medical insurance data. The number of pages in the documentation is gigantic but there is no clear example, not even a hint, how to use the service from code. I will use TDD to explore the service and will end with the base for a simple API. The Vecozo package contains: What is lacking is the way to combine all of this to make successful requests. This is where TDD comes in. I am using nunit the mother of all unit test frameworks. My tests are located in a new class library project added to my solution. The test project references the nunit framework and the application project. In the tests I can call all public members of the application. The test code has a reference to the application I'm building but the application itself does not reference anything extra; I do not force my customer into automated testing. There are several tools available to actually run the tests, both command line and GUI versions. Resharper has a nice testrunner which integrates into Visual Studio. As an alternative Nunit contains one which looks and behaves just the same. The test have to be public classes of the new lib. The test runner instantiates objects of Classes decorated with the TextFixture attribute. Public methods in the classes, decorated with the Test attribute, contain the test code to be executed. What code the test will execute depends. A way to start could be to take existing code and automatically test metrics to check the behavior of that code. A test passes when it runs without throwing an exception. To harden a test you can conditionally raise (assertion) exceptions, the nunit Assert class has loads of usable members to do that. Designing and coding these kinds of test is not always easy. What metrics can/should you test ? What algorithm you need to assert that the test passes ? The methodology of Test Driven Development approaches the matter the other way round. The first line of code is the test itself. The implementation of the application is code built to pass those tests. Quite an eye-opener on where to start is Jimmy Nilson's book on Domain Driven Design I read last summer. At first read it chatters away, writing almost trivial code. In tests. But building test upon test a quite good application emerges. Let's give this a try on the Vecozo web service. I start writing out questions in code on what I want to build. The first question: Written out as test [TestFixture] public class VecozoWebServiceTests { [Test] public void CanCreateWrapper() { VecozoLib.ServiceWrapper o = null; Assert.IsNotNull(o); } } For the test code to build I need to write the ServiceWrapper class. namespace VecozoLib public class ServiceWrapper The code builds but the test fails. For it to pass the wrapper object has to be properly instantiated. [Test] public void CanCreateWrapper() VecozoLib.ServiceWrapper; o = new ServiceWrapper(); Assert.IsNotNull(o); This test may seem quite trivial. But it does assert that a ServiceWrapper object can be instantiated, it's constructor did not raise an exception. When refactoring the test will become even more valuable. When the constructor is refactored into a factory, the (slightly rewritten) test will assert the factory does create objects. Here's the unit testing mantra: Red, Green, Refactor. Write a test which fails, write the code to make it pass and safely refactor the code to the evolving needs. For the latter the tests will guard the code to keep working Now the first test reads green and we have a starting place. The object is going to invoke the web service. I do have an url for that so I can add a web reference. VS generates a proxy to the web service. The service appears to have one web method which does all the work. The next question is: I don't care yet whether I'm invoking it correctly, I just want to see whether the web service accepts my requests and returns something. The simplest request sends an empty list to validate. This is implemented in a new method of the service wrapper. public class ServiceWrapper public object InvokeVecozoService() vz3738 ci = new vz3738(); Console.WriteLine(ci.Url); ControleerInput requestMessage = new ControleerInput(); requestMessage.Verzekerden = new AanvraagCovType[0]; object responseMessage = ci.controleer(requestMessage); return responseMessage; Vz3738 is the webservice, the webmethod is named Controleer. Note that the method writes the url of the service to the console. All the test does is call this method. public void CanInvokeService() VecozoLib.ServiceWrapper sw = new ServiceWrapper(); object response = sw.InvokeVecozoService(); Assert.IsNotNull(response); The test fails on an exception. The nice thing is that the test runner catches the output. The url is there and also a stack trace. The message is loud and clear : no Access (Geen toegang). It's time to do something with my Vecozo certificate. Next question: I want the serviceWrapper to expose the Vecozo certificate. Written out in a test [Test()] public void ValidCertificateAvaliable() System.Security.Cryptography.X509Certificates.X509Certificate cert = sw.VecozoCertificate; Assert.IsTrue(DateTime.Parse(cert.GetExpirationDateString()) > DateTime.Now); Unless a valid Vecozo certificate is installed the test will fail. The implementation of the code to get the certificate is pretty straightforward. public X509Certificate VecozoCertificate get X509Store certStore = new X509Store(StoreName.My, StoreLocation.CurrentUser); certStore.Open(OpenFlags.ReadOnly); X509Certificate2Collection certs = certStore.Certificates.Find(X509FindType.FindByIssuerName, "Vecozo", false); if (certs.Count == 1) return certs[0]; else return null; } Now the test shows I do I have a certificate. But what to do with it? Sign the message? Encrypt it ? There are a lot of possibilities. The Vecozo docs are blank on this. Googling around I found the simplest thing to do with a certificate is just pass it to the service in the ChildCertificates collection of the proxy. Let's try this. public object InvokeVecozoService() vz3738 ci = new vz3738(); Console.WriteLine(ci.Url); ci.ClientCertificates.Add(VecozoCertificate); ControleerInput requestMessage = new ControleerInput(); requestMessage.Verzekerden = new AanvraagCovType[0]; object responseMessage = ci.controleer(requestMessage); return responseMessage; This simplest thing is exactly what this service demands. When the certification test passes now the connection test also passes. And now I start communicating with the service. The unit test are a great status monitor; they test the status of the certificate and the availability of the service in a single click. Now my tests can jump right into testing the domain specific features of the service itself. The Assert class of the nUnit framework has loads and loads of (static) methods to compare expected values with the actual values returned from the service. There are many ways to request data from the Vecozo web service. The documentation does give a good overview. In the next example I am requesting the data of one person based on birth date and BSN (you can compare that to a SSN). In case you are going to work with the actual web service check the documentation on this. In case you are just interested in TDD it suffices to say that the method returns some domain specific data. Note that the actual invocation of the web service has already been refactored into a (private) method. The earlier test will make sure I have done that correctly. public RetourinfoCovType PersonData(DateTime birthDate, int bsnNummer) requestMessage.Verzekerden = new AanvraagCovType[1]; requestMessage.Verzekerden[0] = new AanvraagCovType(); requestMessage.Verzekerden[0].PeildatumVerzekering = DateTime.Now; requestMessage.Verzekerden[0].SoortVerzekering = 92; requestMessage.Verzekerden[0].GeboorteDatum = birthDate; requestMessage.Verzekerden[0].BSN = bsnNummer; requestMessage.Verzekerden[0].BSNSpecified = true; return getResponseMessage(requestMessage)[0]; A simple straightforward test against the test data available to check the domain specific functionality of the code: public void IsDeceased() VecozoLib.VecozoService.RetourinfoCovType data = sw.PersonData(DateTime.Parse("1-5-1945"), 141919723); Assert.AreEqual(0, data.ResultaatcodeCOV); The assertion compares the result with 0. The test will fail. Again the output clearly tells why the test failed. The return code was not 0 but 6420. The documentation lists all return codes, where 6420 stands for deceased. Which matches the description of the test data on this "person". To make my code better maintainable I've written out a descriptive enumeration of these return codes. public enum ResultaatRaadpleging : int OK = 0, NietGeauthoriseerdBijAangegevenVerzekeraar = 6400, …… BSNvakerGevonden = 6419, PersoonIsOverleden = 6420 Using that to make the test pass: Assert.AreEqual(VecozoLib.ResultaatRaadpleging.PersoonIsOverleden, (VecozoLib.ResultaatRaadpleging) data.ResultaatcodeCOV); Here I am writing tests which describe the actual desired domain specific functionality of my app. The code I am writing to fulfill the test is code which also fulfills my customers demands. Adding all desired functionality will result in a lot of refactoring. Adding the functionality test by test will make sure that no existing code breaks in the process. So far the TDD gospel in my words. I hope to have made clear that using TDD as a methodology is nothing mysterious, just an easy way to gain and keep control over the app you are building. Nothing more, nothing less. PingBack from Yesterday Jeffrey published his 500th post without even mentioning the milestone. Scott was aware
http://codebetter.com/blogs/peter.van.ooijen/archive/2007/03/02/TDD_3A00_-Test-driving-the-Vecozo-webservice.aspx
crawl-001
en
refinedweb
Feeds Plugin Author: Marc Palmer (marc at anyware.co.uk) A plugin that renders RSS/Atom feeds, or any other formats supported by the ROME API.. Using the dynamic render method and the Feed Builder DSL class YourController { def feed = { render(feedType:"rss", feedVersion:"2.0") { title = "My test feed" link = "" description = "The funky Grails news feed" Article.list().each() { article -> entry(article.title) { link = "{article.id}" article.content // return the content } } } } } The feedType parameter is required. The feedVersion is optional and will default to 2.0 for RSS and 1.0 for Atom if not supplied. Current tested feedType values are "rss" and "atom", but any ROME supported feed type should work. Builder semantics Some There are some smarts, namely: - Common properties of entry nodes you may want to set: Common properties of content nodes you may want to set: Enclosures and descriptions are currently not directly supported by the builder but can be constructed using the ROME API directly. Examples Setting root node (feed) properties directly using FeedBuilder. def builder = new FeedBuilder() builder.feed { title = "My = "" publishDate = new Date() //content here } entry("Article 2") { link = "" publishDate = new Date() //content here } entry(title:"Title here", link:"") { publishDate = new Date() //content here } Using the taglib There"/>
http://www.grails.org/Feeds+Plugin
crawl-001
en
refinedweb
JA-SIG CAS Client Plugin for Grails Introduction This plugin provides a simple way of enabling JA-SIG CAS client integration. Current release version is 1.0 and it is strongly recommended to upgrade to this version. It uses CAS Java Client, which is a less complicated client library with fewer dependencies compared to another Java client called CAS Client for Java. Although the latter is developed with Spring Framework in mind, I personally think it is too heavy to be). Prerequisite The plugin is developed with Grails As a Grails plugin, it couldn't be easier to install it. 'grails install-plugins' should be able to take care of that from either locally downloaded file or remote plugin repository. grails list-plugins should give you a list of available plugins. grails install-plugin cas-client [current-release-version] should install this plugin from remote repository. grails install-plugin /path/to/local/cas/cliet/plugin.zip will install from a locally downloaded file. Please click here to download the latest version, in case the 'grails list-plugins' or 'grails install-plugin' is not responding. Usage As mentioned above, the plugin requires some extra work in Config.groovy to make CAS server and local parameters known to the plugin. So in application's Config.groovy you may add some lines of code below, // Last but not least, there are parameters to control the plugin instead of the behaviour of underlying CAS client library: Examples Simple Solution for Authorization First of all, as users of JA-SIG CAS may already know, CAS is only for authentication but not for authorization (up to 2.x). Therefore authorization is down to users to implement, if necessary. A common pattern is to use the interceptors available with Grails, For instance, you can have a controller AccessController like this, with this plugin installed import edu.yale.its.tp.cas.client.filter.CASFilter class AccessController { def beforeInterceptor = [action: this.&check] def check() { def username = session?.getAttribute(CASFilter.CAS_FILTER_USER) // check username and return a boolean accordingly. // ... } } Please note it looks different from the example controller showed in Grails user guide for before interceptors because: - To take the above example further, a simpler solution is available, In Config.groovy you can have a list of users allowed to access your application, users = ['foo', 'bar'] and then the 'check' method above will look like, def check() { def username = session?.getAttribute(CASFilter.CAS_FILTER_USER)?.toLowerCase() return username in grailsApplication.config.users } That could be helpful if your application has a simple authorization need. Handling Large Number of Protected URLs The plugin can handle a list of urls using 'cas.urlPattern' but it can be quite messy to list every one of them. Thanks to this mailing list post, you can group all the urls to be protected by the plugin using Grails URL mappings feature and then feed the url pattern after mapping to the plugin. Misc. CHANGELOG TODO - upgrade to coming new release of the cas client library which contains a simple authorization utility that can replace the trick mentioned in the example above. Also pls submit your suggestions to Grails users list. Suggestions or Comments Please feel free to submit them in Grails users mailing list. Alternatively you can contact the author of the plugin directly, Chen Wang, contact at chenwang dot org
http://www.grails.org/CAS+Client+Plugin
crawl-001
en
refinedweb
The Audit Logging plugin can add Hibernate Events based Audit Logging to a Grails project and it can also add support to domain models for hooking into the hibernate. Usage You can use the grails-audit-logging plugin in several ways. First, in a domain class... static auditable = true enables audit logging using the introduced domain class AuditLogEvent which will record insert, update, and delete events. Update events will be logged in detail with the property name and the old and new values. Additionally you may use the optional event handlers.] } }) }//*/ } Alternately you may choose to disable the audit logging and only use the event handlers. You would do this by specifying: static auditable = [handlersOnly:true] ... with handlersOnly:true specified no AuditLogEvents will be persisted to the database and only the event handlers will be called. As of version 0) in Config.groovy add these lines: auditLog { actorKey = 'userPrincipal.name' } ...or alternately... auditLog { actorKey = 'userPrincipal.id' } ... if you prefer to log the user's id. If you are using a custom authentication system in your controller that puts the user data into the session you can set up the actorKey to work with this data instead... In Config.groovy auditLog { actorKey = 'session.username' } ... or if your user is similar to the simple authentication user object in the tutorials... in Config.groovy auditLog { actorKey = 'session.user.name' } ... finally if you are using a system such as CAS you can specify the CAS user attribute using a special configuration property to get the CAS user name. In Config.groovy just add the following lines to the top of the file: import edu.yale.its.tp.cas.client.filter.CASFilter auditLog { username = CASFilter.CAS_FILTER_USER } ... and the audit_log table will have a record of which user and what controller triggered the hibernate event.
http://www.grails.org/Grails+Audit+Logging+Plugin
crawl-001
en
refinedweb
GWT Plugin. Required Grails 0.5.6+ Installation How to use it When starting with GWT, the first thing you need to do is create a module. This packages a bunch of client-side code into a single unit. Creating a module grails create-gwt-module <module> The above command will generate a module file and a corresponding client class under your project's src/java directory. If the name of the module includes a package (recommended), then the files are created in the appropriate directory path. For example: grails create-gwt-module org.example.MyApp will create the files - src/java/org/example/MyApp.gwt.xml, and - src/java/org/example/client/MyApp.java Creating a host page Once GWT has something called hosted mode that allows you to test and debug your web interface from a custom browser. This is also available from the plugin. Just run this command: grails run-gwt-client This will launch the custom browser and point it at your running web application (it would be a good idea to run grails run-appPC Almost property, as is done with the Remoting plugin and others: class MyService { static expose = [ 'gwt:example.client' ] List listUsers() { ... } ... } The format of the GWT expose entry Note If you modify either file yourself, then the plugin will no longer update them automatically. This is to ensure that any changes that you make are preserved, such as adding '@gwt.typeArgs' javadoc annotations. When accessing the service from your client GWT code, use the URL described in this example: G and MapTypeArg. These can be used to specify both argument types and return types like so: import org.codehaus.groovy.grails.plugins.gwt.annotation.CollectionTypeArg import org.codehaus.groovy.grails.plugins.gwt.annotation.MapTypeArg class) { ... } ... } Note At the time of writing, the annotations will not work with method arguments. Hopefully this will be rectified for the Grails 1.0 release. Compiling the GWT modules The plugin will automatically compile your GWT modules the first time that you perform a normal Grails compile, for example via grails compile
http://www.grails.org/GWT+Plugin
crawl-001
en
refinedweb
) C# Wish --or-- How I Love to Hate Thee: Property I really want this syntax in C# (informal BNF): getset-declaration := <access-specifier><other-decorations><type><getset-token><publishedname> <initializer>; | <access-specifier><other-decorations><type><getsettoken><publishedname>(privatename) <initializer>; access-specifier := public | protected | private getset-token := getter | setter | getset published-name := identifier private-name := identifier What this means is that the declaration: public int getset Age(_age) = 0; is syntactically equivalent to: private int _age = 0; public int Value { get { return _age; } set { _age = value; } } and the declaration: public string getter Name = "Atalasoft"; is syntactically equivalent to: private string _Name = "Atalasoft"; public string Name { get { return _Name; } } Admittedly, this is syntactic sugar, but it does a really nice thing for me as a programmer: it takes a common syntactic pattern and turns it into something that is easy to work with and will reduce errors. Specifically, if I need to change the type of this property, it happens in exactly one place instead of two. If I have to change the name, I also get a benefit of only really having to change it in one place. The bad news is that the IL for a typical setter doesn't get inlined in the release build. The good news is that the JIT compiler does that for you (but only in release). Still, shouldn't this type of trivial property be auto-inlined within an assembly? At the very least, it makes writing the JIT easier. Here's a chunk of code that makes an object and sets a property within it: MyProperties props = new MyProperties(); props.Foo = 45; Here's the IL: // construct the object IL_0000: newobj instance void PropertyAnalysis.MyProperties::.ctor() // pop the stack store it in local variable 0 (ie, props) IL_0005: stloc.0 // push local variable 0 on the stack IL_0006: ldloc.0 // push a 45 IL_0007: ldc.i4.s 45 // call the setter IL_0009: callvirt instance void PropertyAnalysis.MyProperties::set_Foo(int32) Here's the JIT compiled version of that code in release mode: // push the class pointer 00000000 mov ecx,0A053D0h // call the object allocator 00000005 call FDBD1F98 // this looks like it's part of System.Object 0000000a mov dword ptr [eax+4],0 // inlined field initializer for the class variable 00000011 mov dword ptr [eax+8],0 // set the field to 45 (2d in hex) 00000018 mov dword ptr [eax+8],2Dh Other than setting the field twice in a row (a peep-hole optimizer should do this for you), it's not bad code, but I can't imagine how hard it had to work to get here. If the IL had been inlined, this should've been an easier task. Easier JIT compiling means a less-complicated compiler, which typically means fewer bugs. It also means that in a pure interpretive environment (ie, you're not running a JIT compiler because, say, you don't have the resources), it will run faster anyway. Another benefit of the inlining is that it becomes easier to do static analysis of this code. At a minimum, this object should be allocated on the stack and not from the heap since there are no references to it when the function exits. An extreme optimizer should pretty much no-op this whole thing, since it doesn't actually do anything, and that's determinable at compile time. You laugh, but I've seen the output of really heavyweight optimizers for C that do that level of work. Why shouldn't I expect that standard in other production tools? Now, I really like the notion of properties. It's a fairly nice way to express the set/get pattern and to be able to insulate calling code from the internal details, and to make the code look good. The problem I have with properties is the same problem I have with operator overload and some other OOP features: they give the illusion of being cheap, when in reality they can be very expensive. In the debug build of the typical property, the cost of property access is pretty dang high - even when you're using a release build of an assembly from within a debug project, since the JIT is affected by your project's debug/release settings, and not by those of external assemblies being JIT compiled. You hope that the property you're accessing will be cheap, but there's no way to tell from your code. I was adding a feature to some code that queried a capability of a COM object and reported it back to the user. It was about as straightforward as you'd like it to be: public bool SupportsSpecialOutput { get { int settingsFlags = myComObject.QuerySettings(); return (settingsFlags & OutputCapabilities.SpecialMask) != 0; } } It turned out that QuerySettings() was very expensive - on the order of a second. Had I left this code as is, clients would be horrified by the cost. Another thing to keep in mind is that access to a property or class may have unintended side-effects. My favorite war story had to do with a C++ class that allocated memory in its constructor. It was being used like this: void InteruptServiceRoutine() { SomeVeryBadClass myVariable; myVariable.PerformSomeImportantTask(); } the problem was that the allocation in the default constructor was being performed transparently and once in a while it would destroy the heap since allocation/freeing from the general heap is verbotten at interrupt time. Since this would only happen if there was an incomplete heap operation in progress, it was a "once in a blue moon" bug, which quickly turned into a nightmare to track down. Share this post: | bookmark it! | digg it! | reddit! | kick it! | live it! Published Thursday, August 31, 2006 1:57 PM by Steve Hawley Anonymous comments are disabled
http://www.atalasoft.com/cs/blogs/stevehawley/archive/2006/08/31/10751.aspx
crawl-001
en
refinedweb
New submission from Virgil Dupras <hsoft at hardcoded.net>: Currently, there is no (documented) way to easily extract arguments in an argparse Namespace as a dictionary. This way, it would me easy to interface a function taking a lot of kwargs like this: >>> args = parser.parse_args() >>> my_function(**dict(args)) There's "_get_kwargs()" but it's a private undocumented method. I guess that making it public would be problematic because of the namespace pollution that would occur. That's why I'm proposing to make it iterable. If it isn't rejected, I'd gladly work on the required patch. ---------- components: Library (Lib) keywords: easy messages: 127582 nosy: vdupras priority: normal severity: normal status: open title: Iterable argparse Namespace type: feature request versions: Python 3.3 _______________________________________ Python tracker <report at bugs.python.org> <> _______________________________________
https://mail.python.org/pipermail/new-bugs-announce/2011-January/009904.html
CC-MAIN-2016-50
en
refinedweb
For my Open Source project, I want to expose a stateless session bean (SLSB) as web service using JBoss 3.2 and Axis 1.1. Now I’m wondering whether to use JAX-B or Castor for my document-style encoding, since the information I found in “Create Web services using Apache Axis and Castor” on IBM’s developerWorks may be a bit outdated already. Any thoughts on this? By the way, this document clearly describes Axis 1.1 and JBoss 3.2.x configuration, exactly for this purpose (exposing a SLSB as WS using Axis and JBoss). Pingback: Bad Credit Car Loans Now Pingback: Business Credit Card Pingback: Tubas xmlbeans is a superb API as long as you have a schema not a DTD for your document, which obviously is your case, too. Unfortunately, I cannot comment upon its integration with Axis, but I think you’ll probably have the same namespace ‘issues’ as the one related to Castor, mentioned by Dan. No problems with xmlbeans, I think (I haven’t got experience). But if there are any arguments in favour of it (such as ease of use :-), I’ll surely consider it an option. From my experience with Castor and Axis, I would say that it works pretty well. IBM’s excellent article provides a solid roadmap for implementing DOC/Literal. My process for managing changes in the interface (new fields exposed in the service) is pretty simple. I make a change in the schema, run an ant target to generate binding classes, merge generated classes with my code base, recompile and deploy. Issues to watch out for include: – namespaces in the schema: I found that the xml returned from the service contained namespace prefixes for every attribute and element! I ended up removing namespace settings from within the generated classes. – Axis bug fixes: There were a few classes in Axis that had bugs relating to Castor. I grabbed the fixes from cvs because they weren’t included in the standard Axis distribution (can’t remember what classes were fixed). I’m sure there are better ways to implement now. I haven’t played with JAX-B so I don’t know how easy it is. Good Luck! What’s the problem with xmlbeans ?
https://technology.amis.nl/2004/08/20/use-jax-b-or-castor-for/
CC-MAIN-2016-50
en
refinedweb
import java.lang.annotation.Retention; import java.lang.annotation.Target; The presence of this annotation on a field of a class annotated with instructs the system to inject a value into this property as described in section JSF.5.3 of the spec prose document in the ManagedBean <managed-property> subsection. The time of instantiation is dictated by the value of the attributes on the usage of ManagedBean and by the application logic itself. The value of the attribute may be a literal value(). public @interface ManagedProperty {ManagedProperty {
http://grepcode.com/file/[email protected][email protected]@javax$faces$bean$ManagedProperty.java
CC-MAIN-2016-50
en
refinedweb
Dancer::Plugin - Extending Dancer's DSL with plugins version 1.9999_01. Allows the plugin to define a keyword that will be exported to the caller's namespace. The first argument is the symbol name, the second one the coderef to execute when the symbol is called. The coderef receives as its first argument the Dancer::Core::DSL object. Any Dancer keyword wrapped by the plugin should be called with the $dsl object like the following: sub { my $dsl = shift; my @args = @_; $dsl->some_dancer_thing; ... }; As an optional third argument, it's possible to give a hash ref to register in order to set some options. The option is_global (boolean) is used to declare a global/non keyword (by default all keywords are global). A non global keyword must be called from within a route handler (eg: session or param) whereas a global one can be called frome everywhere (eg: dancer_version or setting). register my_symbol_to_export => sub { # ... some code }, { is_global => 1} ; A Dancer plugin must end with this statement. This lets the plugin register all the symbols defined with register as exported symbols. Since version 2, Dancer requires any plugin to declare explicitly which version of the core it supports. This is done for safer upgrade of major versions and allow Dancer 2 to detect legacy plugins that have not been ported to the new core. To do so, the plugin must list the major versions of the core it supports in an arrayref, like the following: # For instance, if the plugin works with Dancer 1 and 2: register_plugin for_versions => [ 1, 2 ]; # Or if it only works for 2: register_plugin for_versions => [ 2 ]; If the for_versions option is omitted, it dfaults to [ 1 ] meaning the plugin was written for Dancer 1 and has not been ported to Dancer 2. This is a rather violent convention but will help a lot the migration of the ecosystem. Simple method to retrieve the parameters or arguments passed to a plugin-defined keyword. Although not relevant for Dancer 1 only, or Dancer 2 only, plugins, it is useful for universal plugins. register foo => sub { my ($self, @args) = plugin_args(@_); ... } Note that Dancer 1 will return undef as the object reference. If plugin_setting is called inside a plugin, the appropriate configuration will be returned. The plugin_name should be the name of the package, or, if the plugin name is under the Dancer::Plugin:: namespace (which is recommended), the remaining part of the plugin name. Configuration for plugin should be structured like this in the config.yml of the application: plugins: plugin_name: key: value Enclose the remaining part in quotes if it contains ::, e.g. for Dancer::Plugin::Foo::Bar, use: plugins: "Foo::Bar": key: value Allows a plugin to delcare a list of supported hooks. Any hook declared like so can be executed by the plugin with execute_hook. register_hook 'foo'; register_hook 'foo', 'bar', 'baz'; Allows a plugin to execute the hooks attached at the given position execute_hook 'some_hook'; Arguments can be passed which will be received by handlers attached to that hook: execute_hook 'some_hook', $some_args, ... ; The hook must have been registered by the plugin first, with register_hook. The following code is a dummy plugin that provides a keyword 'block_links_from'. package Dancer::Plugin::LinkBlocker; use Dancer::Plugin; register block_links_from => sub { my $dsl = shift; my $conf = plugin_setting(); my $re = join ('|', @{$conf->{hosts}}); $dsl->before( sub { if ($dsl->request->referer && $dsl->request->referer =~ /$re/) { $dsl->status(403) || $conf->{http_code}; } }); }; register_plugin for_versions => [ 2 ] ; 1; And in your application: package My::Webapp; use Dancer; use Dancer::Plugin::LinkBlocker; block_links_from; # this is exported by the plugin Dancer Core Developers This software is copyright (c) 2012 by Alexis Sukrieh. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
http://search.cpan.org/~sukria/Dancer-1.9999_01/lib/Dancer/Plugin.pm
CC-MAIN-2016-50
en
refinedweb
This is the XML Schema Scripting module for XHTML $Id: xhtml-script-1.xsd,v 1.2 2009/11/18 18:25:53 smccarro Exp $ Scripting * script, noscript This module declares element types and attributes used to provide support for executable scripts as well as an alternate content container where scripts are not supported. This import brings in the XML namespace attributes The module itself does not provide the schemaLocation and expects the driver schema to provide the actual SchemaLocation.
http://www.w3.org/MarkUp/2009/ED-xhtml-modularization-20091118/SCHEMA/xhtml-script-1.xsd
CC-MAIN-2016-50
en
refinedweb
I have a controller that I want to use to generate reports from, my first report works fine it's simply a sorted output of my CSEmployee table. However my second report isn't giving me what I would expect. Each CS_Employee can have multiple time_entries in the TimeEntry table associated with it. I have indexed the table and created a has_many\belongs_to relationship and all seems to work just fine in my other views. My second report needs to show all associated time_entries for each CS_Employee for a given time period. Here is what I have so far: def cs_employees @cs_employees = CsEmployee.all @cs_employee = @cs_employees.order(:cs_name) respond_to do |format| format.html format.csv { send_data @cs_employee.to_csv } format.xlsx #{ send_data @cs_employee.to_csv(col_sep: "\t") } end def schedules @start_date = params[:start_date] || 2.weeks.ago @end_date = params[:end_date] || Date.today @cs_employee = CSEmployee.all @cs_employees.each do |cs_employee| @cs_employee = cs_employee.find(params[:cs_employee_id]) @time_entries = @cs_employee.time_entries.find(params[:id]) @time_entries.each do |time_entry| tschedules = time_entry.where(:date => (params[:start_date]).to_date..(params[:end_date]).to_date) @schedules += tschedules if tschedules end end end You do not seem to be populating your @cs_employees variable when you call @cs_employees.each Also, there is way too much business logic in your controller. Move it to your model or a concern.
https://codedump.io/share/pCcOmTTVDrat/1/ruby-rails-loop-in-controller
CC-MAIN-2016-50
en
refinedweb
java.lang.Object com.bea.netuix.laf.AbstractLookAndFeelComponentcom.bea.netuix.laf.AbstractLookAndFeelComponent com.bea.netuix.laf.Skincom.bea.netuix.laf.Skin public class Skin Skin provides a read-only interface for retrieving information related to the current Look and Feel skin configuration. Skin objects are valid only as long as the related PortalLookAndFeel object is valid. PortalLookAndFeel.getSkin() public String getName() LookAndFeelComponent public String getPath() skinPathattribute of the lookAndFeelcontrol, or via the skinPathparameter of the reinitmethod. public String getFullPath() public String getChromosome() public List<String> getAvailableChromosomes() public HtmlPathResolver getHtmlPathResolver() HtmlPathResolverrelated to this skin. The object returned from this method shares the same validity constraint as the Skinobject; it is valid only as long as the related PortalLookAndFeel public String resolveImagePath(String uri) throws IllegalStateException imagesconfiguration. It is assumed that the specified uri is a partial image path, i.e. "button.jpg" or "images/corner.gif". NOTE: Calls to PortalLookAndFeel.reinit()will cause all previously resolved image paths to become invalid. LIFECYCLE RESTRICTION: This method can only be called during the render lifecycle; calls prior to the render lifecycle will cause an IllegalStateException. uri- parital image path to resolve IllegalStateException- If called during an inappropriate lifecycle. PortalLookAndFeel.reinit(String, String, String, String, String, String), "Skin configuration element <images>"
http://docs.oracle.com/cd/E26806_01/wlp.1034/e14255/com/bea/netuix/laf/Skin.html
CC-MAIN-2016-50
en
refinedweb
JBoss AS 7.1.1.Final "Brontes" is a maintence update to JBoss AS 7.1.0.Final. AS 7.1 series a significant step forward in application server technology. It builds upon the exceptionally lightweight AS 7 architecture, and is a certified implementation of the Java Enterprise Edition 6 Full Profile specification. In addition AS 7.1 offers greatly improved security, enhanced management features, and advanced clustering capabilities. "Brontes" eliminated 259 issues reported since the release, including a number of small enhancements, primarily in the management area. Please see the AS 7.1.0.Final Release Notes for more details on the functionality in the 7.1.x series. Detailed Release Notes Release Notes - Application Server 7 - Version 7.1.1.Final Bug - [AS7-1412] - ServerEnvironment accepts invalid or non-existent directory configurations - [AS7-1946] - Supported operation names not documented - [AS7-2407] - Cannot connect to domain controller - more detail needed - [AS7-2467] - TimeoutException on node joining - [AS7-2650] - ConnectionFactory.newInstance() does not provide default impl - [AS7-2834] - No config XSLT transform for basic integration tests - [AS7-2842] - TS: integration-tests.sh/bat should ignore failures in smoke tests and continue running - [AS7-2847] - TS: Clustering tests cannot be run for single test with -Dtest=X - [AS7-3027] - NPE when calling operations on inactive modcluster subsystem - [AS7-3099] - <distributable/> application with SFSBs fail to deploy in default configuration (standalone.xml) - [AS7-3111] - @ArquillianResource InitialContext injection does not work in @RunAsClient annotated tests (remote JNDI) - [AS7-3158] - Requests around cluster membership change with REPL+SYNC cache could take up to 17 seconds to respond - [AS7-3180] - Clean shutdown often results in "Problems unmarshalling remote command from byte buffer" - [AS7-3190] - Poor error message when attempting to undeploy something that isn't deployed - [AS7-3340] - NPE during stop of service jboss.jacorb.poa-service.rootpoa - [AS7-3389] - jython-standalone-2.5.2.jar contains jline classes, also contained in jline module - [AS7-3390] - Some org.apache.commons.collections classes in both commons-beanutils-1.8.0.jar , commons-collections-3.2.1.jar - [AS7-3495] - Unexpected auth dir in standalone/tmp when -Djboss.server.temp.dir is used - [AS7-3499] - JDBC driver undeployed while dependent application is still shutting down - [AS7-3513] - Make sure appclient can use EJB clustering - [AS7-3521] - @ArquillianResource ManagementClient injection does not work in 'manual' container mode - [AS7-3534] - TS: Fix datasource info test - check whether it's a module or a deployment. - [AS7-3549] - Make default-cache attribute of cache-container configurable - [AS7-3577] - Blocked Thread when Remote Client Terminates - [AS7-3596] - JPA default datasource cannot be unset - [AS7-3620] - mod_cluster registers ROOT context - [AS7-3628] - Enabling a server doesn't update the server selection model - [AS7-3639] - Adding deployments scanner fails - [AS7-3640] - Server Instance List not properly initialized when linked form external context - [AS7-3642] - Bean validation integration - [AS7-3652] - JCA/Endpoint isn't used - [AS7-3654] - Handling of slot attribute is not done correctly - [AS7-3672] - Security Module options are editable inline - [AS7-3677] - Server results on domain ops are reported as children of the response's "result" - [AS7-3678] - Can not determine the local host in a multi host domain under /host - [AS7-3705] - Provide write access to default-cache-container and default-stack - [AS7-3708] - ejb-jar.xml does not understand tag <concurrent-method> for singleton session type - [AS7-3709] - Annotation @DependsOn does not understand definition of jar file like "name.jar#singletonBeanName" - [AS7-3722] - org.jboss.as.server.deployment.DeploymentUnitProcessingException: JBAS011093: Could not load component class org.springframework.social.facebook.web.FacebookInitTag for spring-social-facebook 1.0.1 - [AS7-3729] - Admin console login fails on latest Chrome - [AS7-3733] - AWT AppContext / EventQueue ClassLoader Memory Leaks - [AS7-3738] - Errors in resource adapters subsystem checking - [AS7-3741] - AS 7.01CR1B (jbosss-as-cmp) does not compile using eclipse - [AS7-3747] - Canceling the login prompt doesn't remove the loading indicator. - [AS7-3749] - Remote EJB call does not work for method defined as asynchronous with void return type - [AS7-3754] - COntent Repository not crash safe - [AS7-3756] - RESTEasy: Deployment fails if there is more than one EJB-JAR in the EAR - [AS7-3760] - Resource roots added via jboss-deployment-structure.xml to a deployment do not get marked as module roots - [AS7-3766] - CLI: Exception in tab completition of operation headers - [AS7-3773] - A security realm definition should not require an authentication mechanism to be defined if 'truststore' is defined. - [AS7-3786] - Some non-Arquillian TestCases are run against hard-coded 127.0.0.1 - [AS7-3788] - :write-attribute(name=excluded-contexts.. doesn't work in modcluster - [AS7-3791] - Cannot add/remove stack, protocols - [AS7-3792] - Remote EJB client still throws exception rather than blocking - [AS7-3795] - Error loading XA datasource metrics - [AS7-3796] - Remote EJB client seems to be leaking messages MAX_OUTBOUND_MESSAGES (ChannelBusyException: Too many open outbound writes) - [AS7-3798] - JGroups subsystem tcp stack should set NAKACK.use_mcast_xmit to false - [AS7-3803] - Setting a blank locale on the settings dialog gives a blank screen - [AS7-3810] - Specifying the HTTPS port for the HTTP Interface but not specifying the keystore results in a NPE - [AS7-3814] - CommandContext.buildRequest(line) throws OperationFormatException for wrong reason - [AS7-3818] - equals()/hashCode() is wrong for component view descriptions - [AS7-3819] - AS testsuite hangs when set to IPv6 - [AS7-3820] - Deployment fail if destination server is not available at startup of client server using "EJB invocations from a remote server instance" - [AS7-3821] - CLI: deploy and batch command handlers do not add headers - [AS7-3824] - Security domain ignored for resource adapters - [AS7-3827] - NotSerializableException, StateTransferInProgressException, TimeoutException while other node in the cluster crashed - [AS7-3828] - EJB client tries to invoke EJBs after application was undeployed - [AS7-3830] - Remote JNDI call does not bind stateful beans - [AS7-3832] - Component start services do not have a dependency on the components View services - [AS7-3835] - Allowed index attribute on caches need to be all caps - [AS7-3845] - generateStandaloneJdrReport test case fails on Windows - [AS7-3846] - Distributed cache attributes don't show up in read-resource-description - [AS7-3849] - Operation with outcome 'failure' don't set the exit code to indicate failure - [AS7-3852] - Crashed host-controller not unregistered from domain-controller - [AS7-3855] - Do not call @PostConstruct multiple times on @Startup @Singleton bean. - [AS7-3856] - Host Controller path resources do not result in runtime services - [AS7-3862] - Configuration of statement tracking fails, according to XSD - [AS7-3865] - CLI: data-source, jms-queue, jms-topic commands do not support --headers argument - [AS7-3866] - Incomplete headers should not be accepted - [AS7-3869] - NullPointerException with SSL and modcluster - [AS7-3870] - Stateful beans should enlist the extended persistence context into the JTA independently of EntityManager invocation - [AS7-3872] - The package sun.util.locale is missing when you use the class java.util.Locale in a JSP file. - [AS7-3873] - Profile specific views not refreshed when changing profile - [AS7-3882] - Configuration for Jacorb does not use jboss.bind.address config property - [AS7-3883] - Remove security/auditing/form-auth.war - [AS7-3884] - Load testing application with Jmeter & jboss remoting throws exception saying "Too many channels open" - [AS7-3885] - add-user.sh doesn't correctly handle property files where final line does not terminate with a new line - [AS7-3886] - start-servers failes when there are no servers defined - [AS7-3889] - Unable to pass smoke tests part of testsuite when specify -Dnode0=<something> - [AS7-3892] - Integration with SpringServletContainerInitializer incorrect - [AS7-3893] - org.jboss.as.clustering.infinispan should not have dependency on org.hibernate.infinispan - [AS7-3895] - Race condition exists between Infinispan cache container and optional MBeanServer service dependency - [AS7-3902] - NPE in JGroups on shutting down the server - [AS7-3906] - "JBAS010242: Partition web message wrapper does not contain Object[] object!" on node rejoining the cluster - [AS7-3911] - Out-of-date standalone-full-ha.xml shipped in AS 7.1.0.Final - [AS7-3913] - NullPointerException during startup if using --cached-dc when deployments are present - [AS7-3916] - Concurrent access leads to empty domain.xml - [AS7-3917] - Eviction gets enabled even when strategy==NONE w/Infinispan 5.1.2/5.2.x - [AS7-3923] - Data source with security domain integration does not work - [AS7-3924] - mod_cluster should use reasonable default load provider - [AS7-3925] - Management - :reload operation changes java.naming.factory.url.pkgs system property - [AS7-3926] - Failures during the domain boot lead to an "empty" domain.xml - [AS7-3927] - Creating RA fails w/o meaningful error message - [AS7-3931] - After removing datasource param, accessing Runtime DS subsystem view in Domain causes No handler for read-resource at address error - [AS7-3932] - DataSource connection property problems - [AS7-3933] - modcluster subsystem doesn't properly encode expression values - [AS7-3939] - Remote client is able to access bean with Local interface - [AS7-3941] - Resource adapters subsystem parser errors. - [AS7-3945] - Fail to enable DS - [AS7-3949] - Admin console: invalid parameter value being specified when creating datasources - [AS7-3951] - operations w/o parameters but with headers fail validation - [AS7-3952] - ClusteredBeanDeploymentTestCase is not executed - [AS7-3953] - maxHttpHeaderSize - [AS7-3959] - SerializingCloner can't clone primitive data types - [AS7-3960] - Unable to set datasource attribute flush-strategy via CLI - [AS7-3966] - In domain mode, stop-server command on server-group fails - [AS7-3973] - Yet another place where hardcoded IPv4 address scheme is used - [AS7-3975] - EJB client invocations sometimes hang indefinitely - [AS7-3978] - SecuritySubsystemParser not writing elements in order - [AS7-3980] - Default subsystem not loaded - [AS7-3982] - Fix ballroom tests - [AS7-3991] - bin/standalone.sh --help says '----server-config=<config> ...' - [AS7-3993] - Fix up generation of IPv6-ready configs for IPv6 testing - [AS7-3994] - CLONE - Can't connect CLI to server on IPv6 address - [AS7-3996] - CLI: deployment crashes console with --header={rollout id=XXX} parameter - [AS7-4002] - Some of subsytem configurations statically refer to localhost - [AS7-4005] - mod_cluster error on boot "ERROR [org.jboss.as.modcluster] (MSC service thread 1-5) JBAS011703: Mod_cluster requires Advertise but Multicast interface is not available" - [AS7-4012] - xts-environment.url in standalone-xts.xml cannot take an expression - [AS7-4020] - add-jvm-option operation does not work - [AS7-4025] - resource-description for subsystem=jgroups not complete - [AS7-4026] - Hang in RemoteProxyController when remote disconnects - [AS7-4028] - Creating new Periodic File Rotating Handler allows to enter a wrong suffix in domain mode - [AS7-4032] - Managed server can't be :stop :removed within a batch - [AS7-4033] - Clustering test failures with IPv6 - [AS7-4036] - JCA subsystem doesn't unregister <short-running-threads> and <long-running-threads> workmanager subelements after removing - [AS7-4038] - After overriding -Duser.dir property in domain mode, jboss doesn't start - [AS7-4056] - JBoss Remote Naming is leaking threads - [AS7-4060] - Remote Naming throws org.jboss.remoting3.NotOpenException: Writes closed - [AS7-4061] - CLONE - String configuration parameters for HornetQ address cannot be set via CLI - [AS7-4069] - Custom bootstrap-context element, added to JCA subsystem by CLI or DMR API disappears after server reload - [AS7-4073] - can not :write-attribute expression values - [AS7-4077] - Unexpected attribute 'ca-certificate-file' encountered - [AS7-4086] - domain.sh fails to start processes in cygwin - [AS7-4087] - SimpleSecurityManager and the JBossCachedAuthenticationManager link - [AS7-4089] - JACC Web permission mapping creating wrong permissions - [AS7-4092] - Remote EJB UserTransaction fails if no EJB invocations occur between begin() and commit/rollback - [AS7-4093] - Intermittent failure in AsyncMethodTestCase.testAsyncDescriptor() - [AS7-4095] - mod_cluster AdvertiseListenerImpl does not take multicast-port setting into account - [AS7-4098] - mod_cluster subsystem does not take proxy-list attribute into account - [AS7-4099] - Clustered EJB invocations for server to server communication fail on secured servers - [AS7-4105] - The default scan-interval (=0) of deployment-scanner did not work - [AS7-4110] - The node name can not set proper via <server name=> attribute - [AS7-4117] - Intermittent failure in StrictMaxUnitTestCase.testMultiThread() - [AS7-4122] - Adding a deployment scanner fails with NPE - [AS7-4125] - ServerEnvironment.getFilesFromProperties reads path.separator from the wrong place Component Upgrade - [AS7-3625] - Upgrade JGroups to 3.0.5.Final - [AS7-3644] - IronJacamar 1.0.9.Final - [AS7-3745] - Update to classfilewriter 1.0.1 - [AS7-3797] - Upgrade to JBossWS 4.0.2.GA - [AS7-3802] - Update jboss-ejb-client to 1.0.3.Final - [AS7-3876] - Upgrade JBoss Marshalling - [AS7-3891] - Need a new version of jboss-remote-naming that builds with jboss-remoting3 3.2.2.GA - [AS7-3907] - Upgrade Infinispan to 5.1.2.FINAL - [AS7-3943] - Upgrade JGroups to 3.0.6.Final - [AS7-3946] - Upgrade to JBoss Metadata 7.0.1.Final Enhancement - [AS7-1751] - infinispan setup to support environment variables within the standalone.xml file - [AS7-2264] - Log the admin console port on startup - [AS7-3646] - After a server restart the long running task checks every server - [AS7-3655] - Create separate Service<GlobalConfiguration> to allow modules to attach a specific ClassLoader to a cache manager - [AS7-3718] - Copy cause exception message to DeploymentException: "Could not deploy to container". - [AS7-3767] - EE subsystem module must not have a hard dependency on javax.xml.ws.api - [AS7-3785] - Complete jgroups DMR descriptions - [AS7-3850] - Convert web subsystem to ResourceDefinition - [AS7-3851] - make Coyote display its IPv6 bind address in a human readable format - [AS7-3898] - Improve the processing time of a request with a large number of parameters on JBossAS7.1. Feature Request - [AS7-1545] - AS build with --debug fails on OOM - [AS7-1758] - CLI usability: operations need convenient help option too - [AS7-1779] - Manage infinispan through console - [AS7-1893] - JDBC cache store configuration improvements - [AS7-2172] - Provide the ability to view OS system properties within the console - [AS7-2190] - Manage jgroups configuration through the console - [AS7-2327] - Document how to disable the console interfaces on hosts which do not function as a domain controller - [AS7-2342] - Multiple Log Directories in one tree - [AS7-2536] - CLI Configurability - [AS7-2750] - Manage mail subsystem through console - [AS7-2929] - Add support for unscheduled write-behind cache stores to Infinispan subsytem - [AS7-2991] - Gzip compression support - [AS7-3607] - Native connector shoudl support the java.net.preferIPv4Stack system property - [AS7-3648] - WARNING from CommandAwareRpcDispatcher: Channel Muxer already has a default up handler installed but now it is being overridden - [AS7-3673] - Provide integration with zanata (translations) - [AS7-3675] - Expose HostControllerEnvironment via management API - [AS7-3692] - Management: jgroups add stack operation does not work - [AS7-3727] - Arq: Make mgmt address and port available in the @Deployment method. - [AS7-3770] - Expose external mechanism to determine master status of SingletonService - [AS7-3806] - Generate standalone and domain configurations from templates - [AS7-3844] - Add ability to disable / enable deployments - [AS7-3848] - @Clustered @Stateless SLSB should send topology information to the client - [AS7-3853] - Delete build/src/main/resources/configuration/retired-once-stable - [AS7-3905] - Allow security domain configurations to reference a deployment login module - [AS7-3918] - TS Clustering: remove manual / managed containers - [AS7-3965] - Make modcluster subsystem attributes to support ${} property notation - [AS7-3969] - Support Infinispan async marshalling - [AS7-3985] - Pooling strategy and configuration information needed at runtime - [AS7-4040] - Add AJP connector to default HA configuration - [AS7-4058] - Investigate use of OperationContext.completeStep() from within - [AS7-4079] - ModClusterSubsystemTestCase.java subsystem.xml needs extending - [AS7-4113] - jboss-cli.sh: give exit code != 0 on connect failure Library Upgrade - [AS7-3667] - Upgrade to JSF 2.1.7 - [AS7-3989] - Upgrade PicketLink to 2.0.2.Final - [AS7-4037] - Upgrade PicketBox to 4.0.7.Final - [AS7-4074] - Upgrade commons-beanutils to 1.8.3 - [AS7-4076] - Upgrade Google Guava to 11.0.2 Patch Quality Risk Task - [AS7-500] - Mark managed domain servers as requiring restart when they fail to apply or miss domain operations - [AS7-1220] - Add native management (ModelControllerClient) API docs to the Admin Guide - [AS7-2548] - Merge the sasl child of the remoting subsystem with any configuration generated from the realm selection. - [AS7-2653] - Intermittent failures in ServerManagementTestCase - [AS7-2734] - Support standard JAXR client config options - [AS7-3238] - Convert CMP subsystem to i18n logging and exceptions messages as per ANDIAMO-2 - [AS7-3241] - Convert jaxr subsystem to i18n logging and exceptions messages as per ANDIAMO-2 - [AS7-3423] - persistence providers deployed with applications should be available via javax.persistence.Persistence api - [AS7-3590] - Deprecate OperationContext.getRootResource(); provide variant to get a read only view of an addressed branch of the Resource tree - [AS7-3633] - The 'other' security domain has reverted to use the UsersRoleLoginModule - [AS7-3688] - Update EAP look&feel - [AS7-3721] - adjust CLI API for public use - [AS7-3769] - Register the whoami operation for domain mode. - [AS7-3771] - If slave HC fails to connect to remote DC due to SSL handlshake failure it should terminate quickly - [AS7-3815] - add a convenience CommandContext.connectController() method w/o arguments - [AS7-3823] - Remove JBossWS JNDI hacks not needed anymore - [AS7-3829] - Remove JUnit from the AS installation tree - [AS7-3868] - Incorprate GUI translations - [AS7-3897] - Remove legacy demos - [AS7-3899] - Upgrade to Remoting JMX 1.0.1.Final - [AS7-3921] - Upgrade to Remoting JMX 1.0.2.Final - [AS7-3935] - Synchronize translations - [AS7-3942] - In add-user.sh the 'What type of user do you want to add?' prompt should not keep looping on bad input. - [AS7-3944] - Update to Console 1.1.0-Final - [AS7-3964] - Running IPv6 clustering tests results in [UDP] failed sending message to cluster (69 bytes): java.lang.Exception: dest=/ff0e:0:0:0:0:0:e600:4:45688 (72 bytes), cause: java.io.IOException: Network is unreachable - [AS7-4003] - add a command to echo DMR requests for commands and operations - [AS7-4006] - TS clustering: failures when testing with IPv6 - [AS7-4008] - Don't enable remote access to JMX by default in domain mode - [AS7-4014] - Somehow a new line character is regularly being appended to base64 encoded passwords - trim whitespace from decoded value. - [AS7-4111] - Update management API versions Sub-task - [AS7-2228] - TS: IPv6 testing issues (tracking) - [AS7-2270] - TS: Define testsuite acceptance test (a testsuite for the testsuite) - [AS7-2451] - TS: Print help banner on `mvn install` without params. - [AS7-2541] - TS: Clustering tests - figure out how they will be run - [AS7-2827] - TS: Configurable timeouts - [AS7-2996] - TS: -Djboss.dist=... has no effect, /build/target/jboss-as* is always used - [AS7-3547] - Unable to add infinispan caches in admin console - [AS7-3702] - TS: Make container JVM --enable-assertions configurable. - [AS7-3763] - Replace hardcoded URLS and ports in clustering tests - [AS7-3778] - TS: Fix XSLT transformations (namespaces problem) - [AS7-3813] - TS: parametrize surefire test.redirectTestOutputToFile property - [AS7-3840] - TS: Pass node0, node1 to <ant ...> since inheritAll="true" doesn't work. - [AS7-3841] - TS: Add $managementAddress and $managementPort to all arquillian.xml's. - [AS7-3843] - TS: Script to check missed tests.
https://developer.jboss.org/wiki/AS711FinalReleaseNotes
CC-MAIN-2016-50
en
refinedweb
Content-type: text/html packetfilter - Ethernet packet filter options PACKETFILTER The packet filter pseudo-device driver provides a raw interface to Ethernets and similar network data link layers. Packets received that are not used by the kernel (for example, to support the IP and DECnet protocol families) are available through this mechanism. The packet filter driver is kernel-resident code provided by the Tru64 UNIX operating system. The driver appears to applications as a set of character special files, one for each open packet filter application. (Throughout this reference page, the word file refers to such a character special file.) To include packet filter support in your kernel, you must include the following option in your configuration file: options PACKETFILTER You must then reconfigure and rebuild your kernel using the doconfig command. For more information see the System Administration. You create the minor device files with the MAKEDEV(8) script using these commands: # cd /dev # MAKEDEV pfilt A single call to MAKEDEV with an argument of pfilt creates 64 character special files in /dev/pf, which are named pfiltnnn, where nnn is the unit number. Successive calls to MAKEDEV with arguments of pfilt1, pfilt2, and pfilt3 make additional sets of 64 sequentially numbered packet filters to a maximum of 256. The maximum number of packet filter special files is limited to 256, which is the maximum number of minor device numbers allowed for each major device number. (See MAKEDEV(8) for more information on making system special files.) For opening these special files, the operating system provides the pfopen(3) library routine. For more information, see pfopen(3). Associated with each open instance of a packet filter special file is a user-settable packet filter ``program'' that is used to select which incoming packets are delivered by that packet filter special file. Whenever a packet is received from the net, the packet filter driver successively applies the filter programs of each of the open packet filter files to the packet, until one filter program ``accepts'' the packet. When a filter accepts the packet, it is placed on the packet input queue of the associated special file. If no filters accept the packet, it is discarded. The format of a packet filter is described later. Reads from these files return the next packet from a queue of packets that have matched the filter. If the read operation specifies insufficient buffer space to store the entire packet, the packet is truncated and the trailing contents lost. Writes to these files transmit packets on the network, with each write operation generating exactly one packet. The packet filter supports a variety of different Ethernet data-link levels: The packet filters treat the entire packet, including headers, as uninterpreted data. The user must supply the headers for transmitted packets (although the system makes sure that the source address is correct) and the headers of received packets are delivered to the user. The packet filter mechanism does not know anything about the data portion of the packets it sends and receives. In addition to the FIONREAD ioctl request (described in the tty(7) reference page), the application can apply several special ioctl requests to an open packet filter file. The calls are divided into five categories: packet-filter specifying, packet handling, device configuration, administrative, and miscellaneous. The Tru64 UNIX packet filter also supports most of the BSD Packet Filter (BPF) ioctl commands. This provides nearly complete source-level compatibility with existing BPF application code. The BPF packet filter format is quite different from the format described in this reference page and may be far more efficient or flexible for many applications. For more information on the BSD Packet Filter Extensions, see bpf(7). struct enfilter { u_char enf_Priority; u_char enf_FilterLen; u_short enf_Filter[ENMAXFILTERS]; }; A packet filter consists of a priority, the filter command list length (in shortwords), and the filter command list itself. Each filter command list specifies a sequence of actions that operate on an internal stack. Each shortword of the command list specifies an action and a binary operator. ((ENF_PUSHWORD+3) | ENF_EQ) The binary operator, which can be one of the following, operates on the top two elements of the stack and replaces them with its result: Use the short-circuit operators whenever possible, to reduce the amount of time spent evaluating filters. When you use them, you should also arrange the order of the tests so that the filter will succeed or fail as soon as possible. For example, checking a word in an address field of an Ethernet packet is more likely to indicate failure than the Ethernet type field. The special action ENF_NOPUSH and the special operator ENF_NOP can be used to only perform the binary operation or to only push a value on the stack. Because both are defined to be zero, specifying only an action actually specifies the action followed by ENF_NOP, and specifying only an operation actually specifies ENF_NOPUSH followed by the operation. After executing the filter command list, a nonzero value (true) left on top of the stack (or an empty stack) causes the incoming packet to be accepted for the corresponding packet filter file and a zero value (false) causes the packet to be passed through the next packet filter.. To resolve problems with overlapping or conflicting packet filters, the filters for each open packet filter file are ordered by the driver according to their priority (lowest priority is 0, highest is 255). When processing incoming packets, filters are applied according to their priority (from highest to lowest) and for identical priority values according to their relative ``busyness'' (the filter that has previously matched the most packets is checked first), until one or more filters accept the packet or all filters reject it and it is discarded. Normally once a packet is delivered to a filter, it is not presented to any other filters. However, if the packet is accepted by a filter in nonexclusive mode (ENNONEXCL set using EIOCMBIS, described in the following section), the packet is passed along to lower-priority filters and may be delivered more than once. The use of nonexclusive filters imposes an additional cost on the system, because it increases the average number of filters applied to each packet. The packet filter for a packet filter file is initialized with length 0 at priority 0 by open(2), and hence, by default, accepts all packets in which no higher-priority filter is interested. Priorities should be assigned so that, in general, the more packets a filter is expected to match, the higher its priority. This prevents a lot of checking of packets against filters that are unlikely to match them. The filter in this example accepts incoming RARP (Reverse Address Resolution Protocol) broadcast packets. The filter first checks the Ethernet type of the packet. If it is not a RARP (Reverse ARP) packet, it is discarded. Then, the RARP type field is checked for a reverse request (type 3), followed by a check for a broadcast destination address. Note that the packet type field is checked before the destination address, because the total number of broadcast packets on the network is larger than the number of RARP packets. Thus, the filter is ordered with a minimum amount of processing overhead. struct enfilter f; buildfilter() { f.enf_Priority = 36; /* anything > 2 should work */ f.enf_FilterLen = 0; /* packet type is last short in header */ f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHWORD + 6; f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHLIT; f.enf_Filter[f.enf_FilterLen++] = 0x3580; f.enf_Filter[f.enf_FilterLen++] = ENF_CAND; /* Ethernet type == 0x8035 (RARP) */ f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHWORD + 10; f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHLIT; f.enf_Filter[f.enf_FilterLen++] = 0x0300; f.enf_Filter[f.enf_FilterLen++] = ENF_CAND; /* reverse request type = 0003 */ f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHWORD + 0; +_EQ; /* dest addr = FF-FF */ return; } Note that shortwords, such as the packet type field, are in network byte-order. The literals you compare them to may have to be byte-swapped on machines like the VAX. By taking advantage of the ability to specify both an action and operation in each word of the command list, you could abbreviate the filter to the following: struct enfilter f; buildfilter() { f.enf_Priority = 36; /* anything > 2 should work */ f.enf_FilterLen = 0; f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHWORD + 6; f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHLIT | ENF_CAND; f.enf_Filter[f.enf_FilterLen++] = 0x3580; /* Ethernet type == 0x8035 (RARP) */ f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHWORD + 10; f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHLIT | ENF_CAND; f.enf_Filter[f.enf_FilterLen++] = 0x0300; /* reverse request type = 0003 */ f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHWORD + 0; f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHFFFF | ENF_CAND; /* dest addr = FF-FF */ f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHWORD + 1; f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHFFFF | ENF_CAND; /* dest addr = FF-FF */ f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHWORD + 2; f.enf_Filter[f.enf_FilterLen++] = ENF_PUSHFFFF | ENF_EQ; /* dest addr = FF-FF */ return; } ioctl(fildes, code, bits) u_short *bits; In these calls, bits is a bitmask specifying which bits to set or clear. The applicable codes are: The bits are: struct enstamp { u_short ens_stamplen; u_short ens_flags; u_short ens_count; u_short ens_dropped; u_int ens_ifoverflows; struct timevalens_tstamp; }; The fields are: If the buffer returned by a batched read(2) contains more than one packet, the offset from the beginning of the buffer at which each enstamp structure begins is an integer multiple of the word-size of the processor. For example, on a VAX, each enstamp is aligned on a longword boundary (provided that the buffer address passed to the read(2) system call is aligned). The alignment (in units of bytes) is given by the constant ENALIGNMENT, defined in <net/pfilt.h>. If you have an integer x, you can use the macro ENALIGNx to get the least integer that is a multiple of ENALIGNMENT and not less than x. For example, this code fragment reads and processes one batch: char *buffer = &(BigBuffer[0]); int buflen; int pktlen, stamplen; struct enstamp *stamp; buflen = read(f, buffer, sizeof(BigBuffer)); while (buflen > 0) { stamp = (struct enstamp *)buffer; pktlen = stamp->ens_count; stamplen = stamp->ens_stamplen; ProcessPacket(&(buffer[stamplen]), pktlen); /* your code here */ */ } If a buffer filled by a batched read contains more than one packet, the final packet is never truncated. If, however, the entire buffer is not big enough to contain a single packet, the packet will be truncated; this is also true for unbatched reads. Therefore, the buffer passed to the read(2) system call should always be big enough to hold the largest possible packet plus an enstamp structure. (See the EIOCDEVP ioctl request later in this reference page for information on how to determine the maximum packet size. See also the EIOCTRUNCATE ioctl request for an example that delivers only the desired number of bytes of a packet.) Normally, a packet filter application blocks in the read system call until a received packet is available for reading. There are several ways to avoid blocking indefinitely: an application can use the select(2) system call, it can set a ``timeout'' for the packet filter file, or it can request the delivery of a signal (see sigvec(2)) when a packet matches the filter. ioctl(fildes, EIOCSETW, maxwaitingp) u_int *maxwaitingp; The argument maxwaitingp points to an integer containing the input queue size to be set. If this is greater than the maximum allowable size (see EIOCMAXBACKLOG later), it is set to the maximum. If it is zero, it is set to a default value. ioctl(fildes, EIOCFLUSH, 0) ioctl(fildes, EIOCTRUNCATE, truncationp) u_int *truncationp; The argument truncationp points to an integer specifying the truncation length, in bytes. Packets shorter than this length are passed intact. This example, a revision of the previous example, illustrates the use of EIOCTRUNCATE, which causes the packet filter to deliver only the first n bytes of a packet, not the entire packet. char *buffer = &(BigBuffer[0]); int buflen; int pktlen, stamplen; struct enstamp *stamp; int truncation = SIZE_OF_INTERESTING_PART_OF_PACKET; if (ioctl(f, EIOCTRUNCATE, &truncation) < 0) exit(1); while (1) { buflen = read(f, buffer, sizeof(BigBuffer)); while (buflen > 0) { stamp = (struct enstamp *)buffer; pktlen = stamp->ens_count; /* ens_count is untruncated length */ stamplen = stamp->ens_stamplen; ProcessPacket(&(buffer[stamplen]), pktlen); /* your code here */ if (pktlen > truncation) /* truncated portion not in buffer */ pktlen = truncation; */ } } Two calls control the timeout mechanism; they are of the following form: #include <net/time.h> ioctl(fildes, code, tvp) struct timeval *tvp; The tvp argument is the address of a struct timeval containing the timeout interval (this is a relative value, not an absolute time). The codes are: ioctl(fildes, code, signp) u_int *signp; The argument signp is a pointer to an integer containing the number of the signal to be sent when an input packet arrives. The applicable codes are: #include <net/socket.h> #include <net/if.h> ioctl(fildes, EIOCIFNAME, ifr) struct ifreq *ifr; The interface name (for example, ``ln0'') is returned in ifr->ifr_name; other fields of the struct ifreq are not set. ioctl(fildes, EIOCSETIF, ifr) struct ifreq *ifr; The interface name should be passed ifr->ifr_name; other fields of the struct ifreq are ignored. The name provided may be one of the actual interface names, such as ``ln0'' or ``xna1'', or it may be a pseudo-interface name of the form ``pfn'', used to specify the nth interface attached to the system. For example, ``pf0'' specifies the first interface. This is useful for applications that do not know the names of specific interfaces. Pseudo-interface names are never returned by EIOCIFNAME. ioctl(fildes, EIOCDEVP, param) struct endevp *param; The endevp structure is defined in <net/pfilt.h> as: struct endevp { u_char end_dev_type; u_char end_addr_len; u_short end_hdr_len; u_short end_MTU; u_char end_addr[EN_MAX_ADDR_LEN]; u_char end_broadaddr[EN_MAX_ADDR_LEN]; }; The fields are: Specifies the device type: ENDT_10MB or ENDT_FDDI. (ENDT_3MB and ENDT_BS3MB are defined but no longer supported.) ioctl(fildes, EIOCMAXBACKLOG, maxbacklogp) int *maxbacklogp; The argument maxbacklogp points to an integer containing the maximum value. (If maxbacklogp points to an integer containing a negative value, it is replaced with the current backlog value, and no action is taken.) ioctl(fildes, EIOCALLOWPROMISC, allowp) int *allowp; The argument allowp points to an integer containing a Boolean value (nonzero means promiscuous mode is set automatically). (If allowp points to an integer containing a negative value, it is replaced with the current Boolean value, and no action is taken.) ioctl(fildes, EIOCALLOWCOPYALL, allowp) int *allowp; The argument allowp points to an integer containing a Boolean value (nonzero means copy-all mode is set automatically). (If allowp points to an integer containing a negative value, it is replaced with the current Boolean value, and no action is taken.) ioctl(fildes, EIOCMFREE, mfree) int *mfree; #include <sys/types.h> #include <net/pfilt.h> ioctl(fildes, code, param) struct eniocb *param; The structure eniocb is defined in <net/pfilt.h> as: struct eniocb { u_char en_addr; u_char en_maxfilters; u_char en_maxwaiting; u_char en_maxpriority; int en_rtout; }; The applicable codes are: Fetch the parameters for this file. Set the parameters for this file. All the fields, which are described later, except en_rtout, are read-only. No longer maintained; use EIOCDEVP. The maximum length of a filter command list; see EIOCSETF. The maximum number of packets that can be queued for reading on the packet filter file; use EIOCMAXBACKLOG. The highest allowable filter priority; see EIOCSETF. The number of clock ticks to wait before timing out on a read request and returning a zero length. If zero, reads block indefinitely until a packet arrives. If negative, read requests return a zero length immediately if there are no packets in the input queue. Initialized to zero by open(2), indicating no timeout. (Use EIOCSRTIMEOUT and EIOCGRTIMEOUT.) A previous restriction against accessing data words past approximately the first hundred bytes in a packet has been removed. However, it becomes slightly more costly to examine words that are not near the beginning of the packet. Because packets are streams of bytes, yet the filters operate on short words, and standard network byte order is usually opposite from little-endian byte-order, the relational operators ENF_LT, ENF_LE, ENF_GT, and ENF_GE are not particularly useful. If this becomes a severe problem, a byte-swapping operator could be added. Packet filter special files. Example packet filter program. Commands: ifconfig(8), MAKEDEV(8), nfswatch(8), pfconfig(8), pfstat(1), tcpdump(8) Files: bpf(7), fta(7), fza(7), ln(7), tty(7), xna(7) Routines: pfopen(3) delim off
http://backdrift.org/man/tru64/man7/packetfilter.7.html
CC-MAIN-2016-50
en
refinedweb
Using the Struts Framework to Develop a Message Board--Part 2: Developing the Model for the Message Board ␡ - Using the Struts Framework to Develop a Message Board--Part 2: Developing the Model for the Message Board - MessageBoard In this seven-part series, Java expert Maneesh Sahu explores how to use Apache Software Foundation's Struts framework to develop a Web-based message board.From the author of As described in the previous article, ActionForm classes that are conformant with the Struts framework need to be developed for every entity involved in the application. Identity The Identity class represents the user who’s involved in the application. When the user has been identified with a name and email, this identity is used when the user creates new messages. Identity is a simplistic class containing input/output properties for each attribute, as shown in Listing 1. It implements ActionForm as necessitated by the Struts framework. Listing 1 Identity.java—Class Representing the User import org.apache.struts.action.ActionForm; public class Identity implements ActionForm { protected String name; protected String email; public void setName(String name) { this.name = name; } public void setEmail(String email) { this.email = email; } public String getName() { return name; } public String getEmail() { return email; } public String toString() { return ("Name: " + getName() + " Email: " + getEmail()); } }
http://www.informit.com/articles/article.aspx?p=19554&amp;seqNum=2
CC-MAIN-2016-50
en
refinedweb
Simple Library for writing CGI programs. See for the CGI specification. This version of the library is for systems with version 2.0 or greater of the network package. This includes GHC 6.6 and later. For older systems, see Based on the original Haskell binding for CGI: Original Version by Erik Meijer mailto:[email protected]. Further hacked on by Sven Panne mailto:[email protected]. Further hacking by Andy Gill mailto:[email protected]. A new, hopefully more flexible, interface and support for file uploads by Bjorn Bringert mailto:[email protected]. Here is a simple example, including error handling (not that there is much that can go wrong with Hello World): import Network.CGI cgiMain :: CGI CGIResult cgiMain = output "Hello World!" main :: IO () main = runCGI (handleErrors cgiMain) Catches any exception thrown by the given CGI action, returns an error page with a 500 Internal Server Error, showing the exception information, and logs the error. Typical usage: cgiMain :: CGI CGIResult cgiMain = ... main :: IO () main = runCGI (handleErrors cgiMain) Add a response header. Example: setHeader "Content-type" "text/plain" Get the value of an input variable, for example from a form. If the variable has multiple values, the first one is returned. Example: query <- getInput "query" Get all the values of an input variable, for example from a form. This can be used to get all the values from form controls which allow multiple values to be selected. Example: vals <- getMultiInput "my_checkboxes" Get the value of a CGI environment variable. Example: remoteAddr <- getVar "REMOTE_ADDR" The.
http://hackage.haskell.org/package/cgi-3001.1.4/docs/Network-CGI.html
CC-MAIN-2016-50
en
refinedweb
[This article was contributed by the SQL Azure team.] SQL Azure currently supports 1 GB and 10 GB databases, and on June 28th, 2010 there will be 50 GB support. If you want to store larger amounts of data in SQL Azure you can divide your tables across multiple SQL Azure databases. This article will discuss how to use a data access layer to join two tables on different SQL Azure databases using LINQ. This technique horizontal partitions your data in SQL Azure. In our version of horizontal partitioning, every table exists in all the databases in the partition set. We are using a hash base partitioning schema in this example – hashing on the primary key of the row. The middle layer determines which database to write each row based on the primary key of the data being written. This allows us to evenly divide the data across all the databases, regardless of individual table growth. The data access knows how to find the data based on the primary key, and combines the results to return one result set to the caller. This is considered hash based partitioning. There is another style of horizontal portioning that is range based. If you are using integers as primary keys you can implement your middle layer to fill databases in consecutive order, as the primary key grows. You can read more about the different types of partitioning here. Performance Gain. Considerations When horizontal partitioning your database you lose some of the features of having all the data in a single database. Some considerations when using this technique include: - Foreign keys across databases are not supported. In other words, a primary key in a lookup table in one database cannot be referenced by a foreign key in a table on another database. This is a similar restriction to SQL Server’s cross database support for foreign keys. - You cannot have transactions that span two databases, even if you are using Microsoft Distributed Transaction Manager on the client side. This means that you cannot rollback an insert on one database, if an insert on another database fails. This restriction can be mitigated through client side coding – you need to catch exceptions and execute “undo” scripts against the successfully completed statements. - All the primary keys need to be uniqueidentifier. This allows us to guarantee the uniqueness of the primary key in the middle layer. - The example code shown below doesn’t allow you to dynamically change the number of databases that are in the partition set. The number of databases is hard coded in the SqlAzureHelper class in the ConnectionStringNames property. - Importing data from SQL Server to a horizontally partitioned database requires that you move each row one at a time emulating the hashing of the primary keys like the code below. The Code The code will show you how to make multiple simultaneous requests to SQL Azure and combine the results to take advantage of those resources. Before you read this post you should familiarize yourself with our previous article about using Uniqueidentifier and Clustered Indexes and Connections and SQL Azure. In order to accomplish horizontal partitioning, we are using the same SQLAzureHelper class as we used in the vertical partitioning blog post. The code has these goals: - Use forward only cursors to maximize performance. - Combine multiple responses into a complete response using Linq. - Only access one database for primary key requests. - Evenly divide row data across all the databases. Accounts Table The example table I am using is named Accounts and has a primary key of a uniqueidentifier, with a clustered index built on the Date column. I created it with this script on two databases: CREATE TABLE [dbo].[Accounts]( [Id] [uniqueidentifier] NOT NULL, [Name] [nvarchar](max) NULL, [Date] [datetime] NULL, CONSTRAINT [PK_Accounts] PRIMARY KEY NONCLUSTERED ( [Id] ASC ) ) ALTER TABLE [dbo].[Accounts] ADD CONSTRAINT [DF__Accounts__Date__7C8480AE] DEFAULT (getdate()) FOR [Date] CREATE CLUSTERED INDEX [idxDate] ON [dbo].[Accounts] ( [Date] ASC ) Partitioning by Primary Key Because the table we are partitioning is using uniqueidentifiers as its primary key, and these are generated by Guid.NewGuid(), we have a fairly random primary key. The primary key is hashed to figure out which database contains the row for that key. The code looks like this: /// <summary> /// Names of the Databases In Horizontal Partition /// </summary> public static String[] ConnectionStringNames = { "Database001", "Database002" }; /// <summary> /// Connections Strings In the Horizontal Partition /// </summary> /// <returns></returns> public static IEnumerable<String> ConnectionStrings() { foreach (String connectionStringName in ConnectionStringNames) yield return ConfigurationManager. ConnectionStrings[connectionStringName].ConnectionString; } /// <summary> /// Return the Index to the Database For the Primary Key /// </summary> /// <param name="primaryKey"></param> /// <returns></returns> private static int DatabaseIndex(Guid primaryKey) { return (BitConverter.ToInt32(primaryKey.ToByteArray(), 0)); } /// <summary> /// Returns the Connectiong String Name for the Primary Key /// </summary> /// <param name="primaryKey"></param> /// <returns></returns> private static String ConnectionStringName(Guid primaryKey) { return (ConnectionStringNames[DatabaseIndex(primaryKey) % ConnectionStringNames.Length]); } /// <summary> /// Returns the Connection String For the Primary Key /// </summary> /// <param name="primaryKey"></param> /// <returns></returns> public static String ConnectionString(Guid primaryKey) { return (ConfigurationManager.ConnectionStrings[ConnectionStringName(primaryKey)] .ConnectionString); } Notice that the ConnectionString() method returns the connection string to use when referencing the primary key. Configure the connection strings in the .config file for your application or web site. The array of databases referenced by the .config file is contained in the ConnectionStrings property. Fetching A Single Row I am building on top of the SQLAzureHelper class defined in the blog post. The idea behind the class is to have a multipurpose access layer to connect to SQL Azure. Within the SQLAzureHelper class, the code to find an account name in the example databases using the primary key looks like this: static String AccountName(Guid id) { var accountDataReader = SQLAzureHelper.ExecuteReader( SQLAzureHelper.ConnectionString(id), sqlConnection => { String sql = @"SELECT [Name] FROM [Accounts] WHERE Id = @Id"; SqlCommand sqlCommand = new SqlCommand(sql, sqlConnection); sqlCommand.Parameters.AddWithValue("@Id", id); return (sqlCommand.ExecuteReader()); }); return ((from row in accountDataReader select (string)row["Name"]). FirstOrDefault()); } Notice that we use the primary key to calculate the connection string and the as well as a parameter to the SqlCommand. Inserting a Single Row When inserting a single row, we need to know the primary key before connecting to the database. In order to accomplish this we call Guid.NewGuid() in the C# code, instead of NewID() on the SQL Azure. The code using the ExecutionContext class looks like this: static Guid InsertAccount(String name) { Guid id = Guid.NewGuid(); SQLAzureHelper.ExecuteNonQuery( SQLAzureHelper.ConnectionString(id), sqlConnection => { String sql = @"INSERT INTO [Accounts] ([Id], [Name]) VALUES (@Id, @Name)"; SqlCommand sqlCommand = new SqlCommand(sql, sqlConnection); sqlCommand.Parameters.AddWithValue("@Name", name); sqlCommand.Parameters.AddWithValue("@Id", id); sqlCommand.ExecuteNonQuery(); }); return (id); } Summary In part three, I will show how to fetch a result set that is merged from multiple responses and how to insert multiple rows into the partitioned tables, including some interesting multi-threaded aspects of calling many SQL Azure databases at the same time. Do you have questions, concerns, comments? Post them below and we will try to address them.
https://azure.microsoft.com/es-es/blog/sql-azure-horizontal-partitioning-part-2/
CC-MAIN-2017-26
en
refinedweb
PathCombine function Concatenates two strings that represent properly formed paths into one path; also concatenates any relative path elements. Syntax Parameters - pszPathOut [out] Type: LPTSTR A pointer to a buffer that, when this function returns successfully, receives the combined path string. You must set the size of this buffer to MAX_PATH to ensure that it is large enough to hold the returned string. - pszPathIn [in, optional] Type: LPCTSTR A pointer to a null-terminated string of maximum length MAX_PATH that contains the first path. This value can be NULL. - pszMore [in] Type: LPCTSTR A pointer to a null-terminated string of maximum length MAX_PATH that contains the second path. This value can be NULL. Return value Type: LPTSTR A pointer to a buffer that, when this function returns successfully, receives the concatenated path string. This is the same string pointed to by pszPathOut. If this function does not return successfully, this value is NULL. Remarks The directory path should be in the form of A:,B:, ..., Z:. The file path should be in a correct form that represents the file name part of the path. If the directory path ends with a backslash, the backslash will be maintained. Note that while lpszDir and lpszFile are both optional parameters, they cannot both be NULL. Examples #include <windows.h> #include <iostream.h> #include "Shlwapi.h" void main( void ) { // Buffer to hold combined path. char buffer_1[MAX_PATH] = ""; char *lpStr1; lpStr1 = buffer_1; // String for balance of path name. char buffer_2[ ] = "One\\Two\\Three"; char *lpStr2; lpStr2 = buffer_2; // String for directory name. char buffer_3[ ] = "C:"; char *lpStr3; lpStr3 = buffer_3; cout << "The file path to be combined is " << lpStr2 << endl; cout << "The directory name path is " << lpStr3 << endl; cout << "The combined path is " << PathCombine(lpStr1,lpStr3,lpStr2) << endl; } ------------ INPUT: ------------ Path for directory part: "C:" Path for file part: "One\Two\Three" ------------ OUTPUT: ------------ The file path to be combined is One\Two\Three The directory name path is C: The combined path is C:\One\Two\Three Requirements
https://msdn.microsoft.com/en-us/library/windows/desktop/bb773571.aspx
CC-MAIN-2017-26
en
refinedweb
raw JSON response (a js.Proxy object): import "package:js/js.dart" as js; import "dart:async"; import "package:jsonp/jsonp.dart" as jsonp; // In this example the returned json data would be: // { "data": "some text" } Future<js.Proxy> result = jsonp.fetch( uri: "?" ); result.then((js.Proxy proxy) { print(proxy.data); // It is important to release the data! js.release(proxy); });; // The js library can make unit testing difficult, you can just // use var as the js.Proxy object in your method. static fromProxy(var proxy) { this.data = proxy.data; } } jsonp.fetch( uri: "?", Type: ExampleData ) .then((ExampleData object) => print(object.data)); Many Requests The fetchMany and dispose. The persistent stream takes up resources. If you no longer need it then you should use the dispose method to release the associated resources. Any unfinished requests will not be handled. By default the Stream provides js.Proxy objects: Stream<js.Proxy> object_stream = jsonp.fetchMany( "object", uri: "?" ); // The uri is optional when making a fetchMany request // as you may just want to configure the Stream Stream<js.Proxy> user_stream = jsonp.fetchMany("user"); object_stream.forEach( (js.Proxy data) => print("Received object!") ); user_stream.forEach( (js.Proxy data) => print("Received user!") ); // You just need to refer to the stream by name to make further requests jsonp.fetchMany( "object", uri: "?" ); jsonp.fetchMany( "object", uri: "?" ); jsonp.fetchMany( "object", uri: "?" ); jsonp.fetchMany( "user", uri: "?" );.
https://www.dartdocs.org/documentation/jsonp/0.0.5/index.html
CC-MAIN-2017-26
en
refinedweb
> >On Tue, Feb 14, 2006 at 12:31:45PM -0800, Dain Sundstrom wrote: > >>I'm getting an IllegalAccessError when using fastclass to invoke a > >>method on an instance where the method is inherited from a parent > >>class. I've reproduced the bug. It is caused because the method is actually defined in a non-public class in another package: the public class org.springframework.ejb.support.AbstractStatelessSessionBean extends the non-public class org.springframework.ejb.support.AbstractSessionBean. IMHO this is kind of weird on Spring's part, but nonetheless it has exposed a bug in FastClass, which is calling invokevirtual on the ancestor class instead of the derived class. The workaround, as you have discovered, is to redefine the method in the derived class, even if it is just to call super.ejbRemove(). Some other parts of CVS HEAD are in transition now (the MethodInterceptor startup optimizations) so even after I fix this bug you'll have to wait a little while. I'll let you know when there is a new version to test. Thanks, Chris
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200602.mbox/%[email protected]%3E
CC-MAIN-2017-26
en
refinedweb
On Wed, Dec 12, 2007 at 07:20:01PM -0500, David Nusinow wrote: > I use mutt but I don't have the library of scripts that I'm sure many DD's > have to deal with debian-specific stuff, although I'd love one. Would > anyone besides me be interested in pooling mutt scripts useful for DD's on > the wiki or something? I believe this kind of stuff should go into devscripts (maybe not in /usr/bin/ if people fear polluting that namespace to much, but still into devscripts). Please file wishlist bugreports against it, with pointers to the relevant
https://lists.debian.org/debian-devel/2007/12/msg00367.html
CC-MAIN-2017-26
en
refinedweb
I'm trying out this Python API for google trends. it created a pandas dataframe where the first column is the date and the others the keywords in kw_list, the values representing how much people search for the keywords. This is my code from pytrends.request import TrendReq # Login to Google. Only need to run this once, the rest of requests will use the same session. pytrend = TrendReq() # Create payload and capture API tokens. pytrend.build_payload(kw_list=['adele', 'wat']) interest_over_time_df = pytrend.interest_over_time() c = print(interest_over_time_df.iloc[-1]['adele']) print(c) This outputs '5'. However, the type is Nonetype, so I can't compare this value to other values. How can I get the output as an integer?
http://www.developersite.org/1000-9083-python
CC-MAIN-2018-22
en
refinedweb
I am trying to implement DBFlow for the first time and I think I might just not get it. I am not an advanced Android developer, but I have created a few apps. In the past, I would just create a "database" object that extends SQLiteOpenHelper, then override the callback methods. In onCreate, once all of the tables have been created, I would populate any lookup data with a hard-coded SQL string: db.execSQL(Interface.INSERT_SQL_STRING); onUpgrade() onDowngrade() onCreate(db);): FlowManager.init(new FlowConfig.Builder(this).build()); FlowManager.getDatabase(BracketsDatabase.NAME).getWritableDatabase(); In the Migration Class, you override the migrate() and then you can use the Transaction manager to initialize lookup data or other initial database content. Migration Class: @Migration(version = 0, database = BracketsDatabase.class) public class DatabaseInit extends BaseMigration { private static final String TAG = "classTag"; @Override public void migrate(DatabaseWrapper database) { Log.d(TAG, "Init Data..."); populateMethodOne(); populateMethodTwo(); populateMethodThree(); Log.d(TAG, "Data Initialized"); } To populate the data, use your models to create the records and the Transaction Manager to save the models via FlowManager.getDatabase(AppDatabase.class).getTransactionManager() .getSaveQueue().addAll(models);
https://codedump.io/share/KmSLON9xTUv2/1/what-is-the-correct-way-to-initialize-data-in-a-lookup-table-using-dbflow
CC-MAIN-2018-22
en
refinedweb
Hello everyone. I'm wondering if there's an option to delete all but 1 extra lines on the bottom of a file when saving. I know there is an option "ensure newline on save" and i have that enabled. This works fine - I get 1 newline when saving. However, if there is multiple newlines already on the bottom, and then I save, there will still be multiple newlines. I only want one. Is there such an option? Thanks in advance. trim_trailing_white_space_on_save? What that does is it trims trailing whitespace. For example, if i haveint main(){ return 0; //4 white spaces after the semicolon here} and i save it with trim_trailing_white_space_on_save enabled, it will only remove the 4 whitespaces after the smicolon, not any extra blank lines on the bottom. Oh right, sorry. Didn't have my laptop at hand to check. If you search packagecontrol for "whitespace" there are a bunch of options that should work. there's no setting for it, but it's super simple to write a plugin to do it: import sublime import sublime_plugin class DeleteExtraTrailingLines(sublime_plugin.EventListener): def on_pre_save(self, view): view.run_command('trim_trailing_extra_lines') class TrimTrailingExtraLinesCommand(sublime_plugin.TextCommand): def run(self, edit): match = self.view.find(r'(?<=\n)\s+\z', 0) if match != (-1, -1): self.view.erase(edit, match) Packages/User/ trim_extra_trailing_lines.py @nexisn4 Install my plugin Single Trailing New Line which does exactly what you want. You can set it up to work with all files or those open in buffers with specific syntaxes. @kingkeith @mattst Great work. Not only is it implemented, but we have lite and deluxe versions to choose from!
https://forum.sublimetext.com/t/sublime-text-3-option-to-delete-all-but-1-extra-lines-on-bottom-of-a-file-upon-save/29658
CC-MAIN-2018-22
en
refinedweb
.RELEASE). -.3.2). Apache HttpClient is an optional dependency (but recommended) dependency of Spring Social. If it is present, Spring Social will use it as a HTTP client. If not, Spring social will use the standard Java SE components. - Spring Social (version 1.1.0.RELEASE). - The config module contains the code used to parse XML configuration files using the Spring Social XML namespace. It also adds support for Java Configuration of Spring Social. -.RELEASE) is an extension to Spring Social and it provides Facebook integration. - Spring Social Twitter (version 1.1.0.RELEASE) is an extension to Social Social which provides Twitter integration. The relevant part of the pom.xml file looks as follows: <!-- Spring Security --> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-core</artifactId> <version>3.2.0.RELEASE</version> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-config</artifactId> <version>3.2.0.RELEASE</version> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-taglibs</artifactId> <version>3.2.0.RELEASE</version> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-web</artifactId> <version>3.2.0.RELEASE</version> </dependency> <!-- Use Apache HttpClient as HTTP Client --> <dependency> <groupId>org.apache.httpcomponents</groupId> <artifactId>httpclient</artifactId> <version>4.3.2</version> </dependency> <!-- Spring Social --> <dependency> <groupId>org.springframework.social</groupId> <artifactId>spring-social-config</artifactId> <version>1.1.0.RELEASE</version> </dependency> <dependency> <groupId>org.springframework.social</groupId> <artifactId>spring-social-core</artifactId> <version>1.1.0.RELEASE</version> </dependency> <dependency> <groupId>org.springframework.social</groupId> <artifactId>spring-social-security</artifactId> <version>1.1.0.RELEASE</version> </dependency> <dependency> <groupId>org.springframework.social</groupId> <artifactId>spring-social-web</artifactId> <version>1.1.0.RELEASE</version> </dependency> <!-- Spring Social Facebook --> <dependency> <groupId>org.springframework.social</groupId> <artifactId>spring-social-facebook</artifactId> <version>1.1.0.RELEASE</version> </dependency> <!-- Spring Social Twitter --> <dependency> <groupId>org.springframework.social</groupId> <artifactId>spring-social-twitter</artifactId> <version>1.1.0.RELEASE</version> </dependency> You might also want to read the following documents which give you more information about the dependencies of the frameworks discussed in this blog post (Spring Security and Spring Social): MVC Test. configureup/**", "/user/register/**" ).permitAll() //The rest of the our application is protected. .antMatchers("/**").hasRole("USER") //Adds the SocialAuthenticationFilter to Spring Security's filter chain. .and() .apply(new SpringSocialConfigurer()); } @Override protected void configureSources. - Configure character encoding filter. - Configure the Spring Security filter chain. - Configure Sitemesh. - login functions to our example application. As always, the example application of this blog post is available at Github. Looking forward to second part of tutorial… I start writing it tomorrow. I think that I can publish it next week. great post, helped me very much. I’m waiting for the next. obrigado I am happy to hear that I could help you out. Petri, I have made a pause in a Spring article writing, but you inspired me to return to this =) Hi Alexey, It is good to hear from you! Also, continue writing Spring articles. :) Great, I’m from Brazil and this post helped to understand spring’s configuration. congratulations Thank you! I appreciate your kind words. Hi Petri, nice detail articular. i am struggling to get this working, appreciate you help. 1. i could not working, hence i configured below in xml then, at deployment it failed with below error exception Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name ‘socialAuthenticationFilter’ defined in ServletContext resource [/WEB-INF/spring/LBSWeb/security-config.xml]: Unsatisfied dependency expressed through constructor argument with index 2 of type [org.springframework.social.connect.UsersConnectionRepository]: Could not convert constructor argume nt value of type [com.sun.proxy.$Proxy198] to required type [org.springframework.social.connect.UsersConnectionRepository]: Failed to convert value of type ‘com.sun.proxy.$Proxy198 implementing org.springframework.social.connect.ConnectionRepository,java.io.Serializable,org.springframework. aop.scope.ScopedObject,org.springframework.aop.framework.AopInfrastructureBean, org.springframework.aop.Spring Proxy,org.springframework.aop.framework.Advised’ to required type ‘org.springframework.social.connect.UsersConnectionRepository’; nested exception is java.lang.IllegalStateException: Cannot convert value of type [com.sun.proxy.$Proxy198 implementing org.springframework.social.connect.ConnectionRepository,java.io.Serializable, org.springframework.aop.scope.ScopedObject,org.springframework.aop.framework. AopInfrastructureBean, org.springframework.aop.SpringProxy,org.springframework.aop.framework.Advised] to required type [org.springframework.social.connect.UsersConnectionRepository]: no matching editors or conversion strategy found at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray (ConstructorResolver.java:702) hope you can help me out. Hi Sam, It seems that Wordpress ate your XML configuration. However, it seems that Spring cannot convert the constructor argument with index 2 to required type ( UsersConnectionRepository). It is kind of hard to figure out what could be wrong without seeing the XML configuration file. Can you paste it to Pastebin? Also, have you compared the XML configuration of the example application to your application context configuration? The configuration files which are relevant to you are: exampleApplicationContext-social.xml and exampleApplicationContext-security.xml. Thank you for the quick revert. i posted the xml on Pastebin ‘ SamJay – spring social xml configuration issue’. i am using spring.social.version>1.1.0.M4 actually i tried to make the configuration work as give by you. but for some reason social:jdbc-connection-repository did not work or recongnized, it failed to compiler with nodefine bean expectation for usersConnectionRepository, that’s why i switched to xml configuration at first place. i understand there are 2 place where usersConnectionRepository has being used (socialAuthenticationFilter and socialAuthenticationProvider ), i get the exception with socialAuthenticationProvider . thank you. For some reason I cannot find the XML from Pastebin by using the search term: ‘SamJay – spring social xml configuration issue’. Could you provide a direct link to it? By the way, this example assumes that you use Spring Social version 1.1.0.BUILD-SNAPHOT. The reason for this is that some classes which makes the configuration a lot simpler are not available in the version 1.1.0.M4. here is the link Petri. i did try with the as is configuration given by you but it still does not pick the social:jdbc-connection-repository so its failed at deployment to JBoss. thanks. The reason why you cannot use Java configuration if you deploy to JBoss is that JBoss doesn’t support Spring Java configuration yet. Have you tried to deploy the application to Tomcat 7? It could be useful because this way you could figure out if this is a JBoss related problem. I noticed that you don’t set the value of the indexattribute when you use the constructor-argelement. Have you tried to set it? Also, some of your configuration files use the old Spring versions (3.1). You should probably update them to the latest versions. Have you tried to update your Spring Social version to 1.1.0.BUILD-SNAPSHOT? If you would do that, you should be able to use my versions of the XML configuration files. This would make your application context configuration files easier to read and less brittle. thanks, i am not using Java configuration at all only old faction xml, and adding constructor-arg element also made no difference. i will deploy the app to the tomcat to eliminate the server. below has simile issue being discussed, but could not really help. thanks will write to you with update. After I read that discussion, I realized that is probably an AOP related issue. I noticed a small difference between your XML configuration file and a file mentioned in that thread. Have you tried to declare the JdbcUsersConnectionRepositorybean as follows: thanks, i tried that, but still get the same error, one other think i tried to get the Spring Social version 1.1.0.BUILD-SNAPHOT from., but it also failed downloading the jar’s. dependencies> org.springframework.social spring-social 1.1.0.BUILD-SNAPSHOT spring-snapshots Spring Snapshots true Hi, You can get the correct dependencies from the Spring snapshot repository by following these steps: Check the POM file of the example application for more details about this. Also, It seems that the required modules are found from the snapshot repository. Perhaps the download failed because of a network issue or something. Hi Petri, deployment failed on tomcat 7 as well with same exception. I was expecting that. The problem is related to Spring AOP and not to the used server. I noticed that you answered to this thread. Let’s hope that you get an answer soon (I want to hear what the problem was)! Will keep you posted as soon as i get a answer. on a different note can you please enplane the below please. with the ConnectController, called back (redirect) into your app: GET /connect/facebook?code=xxx, which ends up with page not found. how should i capture the call back here and seamlessly integrate with the app If you want to redirect the user a specific page after the connection has been created, you should override the connectionStatusRedirect() method of the ConnectionController class. Hi do u know the reason for this error please ‘state’ parameter doesn’t match.). Redirecting to facebook connection status page. I haven’t run into this problem but I found some additional information it. You might want to check out the Github issue titled: Facebook connection infinite redirect loop workaround. Hi Petri, I have added /web-inf/jsp/js/app.js and web-inf/jsp/js/controller.js and updated layout.jsp with below includes. <script type="text/javascript" src="”> getting below errors in javascript console Refused to execute script from ‘’ because its MIME type (‘text/html’) is not executable, and strict MIME type checking is enabled. login:1 Refused to execute script from ‘’ because its MIME type (‘text/html’) is not executable, and strict MIME type checking is enabled. login:1 do i need to update webapp? any pointers?? sridhar Hi Sai, The problem is that tou put your Javascript files to WEB-INF/jsp/js/ directory, and servlet containers do not serve any content put to the WEB-INF directory. You can solve this by moving your Javascript files to the src/main/webapp directory. If you use the same approach which I used in the example application, you should move the app.js and controller.js files to the src/main/webapp/static/js/app directory and add the following code to the layout.jsp file: I hope that this answered to your question. I think there is a dependency missing: spring-social-config You are right! Thanks for pointing this out. It seems that the spring-social-config module is a transitive dependency but I agree that it might be better to explicitly specify it (at least in this case). I will update the blog post and the example application. oops, forgot to say first, awesome article, thanks a lot for sharing :) Thanks! I appreciate your kind words. Hi petri, i’am newbie in spring. How to add your example source to my project in eclipse? Thank you Hi Davi, I haven’t been using Eclipse for several years but let’s see if I can help you out. Which Eclipse version are you using? hi Petri, What are you using for this project? I use IntelliJ Idea, but the example application uses Maven. In other words, you can compile / package / run it without using an IDE. All you have to do is to clone it from the Github repository and you are ready to go (if you have installed JDK + Maven). Thank you for your reply. I’m using eclipse kepler. It seems that you should be able to do this by navigating to: File -> Import -> Browse to general -> Maven Projects This wiki page has more information about this (including screenshots). I hope that this solved your problem. Great, thank you. Great tutorial petri. uhmm. It’s should not be this complicated. I agree. It will be interesting to see if Spring Boot will be integrated with Spring Social. New to this , I am getting the following error when I import the code to the eclipse Error loading property file ‘/Users/akumar/Documents/development/tracks/git/spring-social-examples/sign-in/spring-mvc-normal/profiles/dev/socialConfig.properties’ (org.apache.maven.plugins:maven-resources-plugin:2.6:testResources:default-testResources:process-test-resources) You need to create the socialConfig.propertiesfile yourself. This file contains the configuration of your Facebook and Twitter applications. See the README of the example application for more details about this. Hello, im having huge problems adopting new facebook api to our application. Before i knew ill have to add it, i’ve created normal spring security besed User management. But now i have to add facebook. With new Social Security have been added XML based configuration with UserIdSource etc. But i’ve no idea how to use it. Could you be so nice and also create tutorial for XML based configuration that can be adopted to already existing spring security projects :( ? Huge thx for all help. Hi, Have you checked out the XML configuration of the example application? It should help you get started if you want to configure your application by using XML. I was planning to describe it in this blog post as well but I noticed that the blog post would probably be too long. That is why I decided to skip it and provided a link to the XML configuration instead. If you cannot solve problem by reading the configuration files, let me know and I will write a brief description about the required steps. Hi Petri, I have managed to get the FB logging to work. Now i see that data are being populated to ‘userconnection’ table through ConnectController and when i disconnected from the service, the data from the table get deleted as well (hope the expected behavior). My query is: I have a table called ‘user’ which maintains the application form logging users information’s and authenticates via spring-security. What i want to figure out is, i would like to sync userconnection data which maintains FB user data with ‘user’ table which maintain the form application local user accounts. So in a situation where a client logging with a FB, i should be able to create an account in the site as well and pass that information (ex: a passwrod) via a mail. So that user has the ability to either use FB or site account. Can you please help me to understand am i thinking on the right direction? And what are the steps that i should do to achieve this. Thanks Sam Hi Sam, If your application uses email as username and you get the email address of the user from Facebook, you can create a row to the usertable when a user signs in by using Facebook. The way I see this, you have two options: The first option provides a better user experience. The problem is that you cannot use it if your application has to support social sign in providers which don’t return the email address of the user or if you don’t use email address as the username. The second option is easier to implement but it can be annoying because often users expect that they don’t have to enter password information if they use social sign in. I hope that this answered to your question. If you need more advice about this, don’t hesitate to ask additional questions. HiPetri, Thank you very much for the detail explanation & ill go through the links you provided and get back to you on the outcome. I do want to support FB, Twitter, Googal+, hence need to check whether email is being returned by those services. but my current implementation does not use email as the username, yet i am able to get the username with below. Regarding the second point: i am not clear on this, what do you mean is; at the end of authentication success, inject a page to capture a password, is it? another query that i came across is, once the FB authentication is successful, default behaviors is, the flow returns to the facebookConnected.jsp. what is the configuration (bean) to allow the flow to be continued to the application since the user is now authenticated ? Thanks Saranga Hi Sam, First, I am not sure if you have tried the example application of this blog post but its registration flow goes like this: What I meant was that you could ask the password information from users who are using social sign in. If you want to know more about the registration flow of the example application, you should read my blog post titled Adding Social Sign In to a Spring MVC Web Application: Registration and Login. Second, have you integrated Spring Social with Spring Security or are you using only Spring Social? If you have integrated Spring Social with Spring Security, you can configure the AuthenticationSuccessHandlerbean. If you are using Java configuration, you should take a look at this StackOverflow question. On the other hand, if you are using only Spring Social, you could try to set the url by calling the setPostSignInUrl()method of the ProviderSignInControllerclass. I haven’t tested this though so I am not sure if this is the right way to handle this. Hi Petri, i am using the Spring Social with Spring Security and i have gone though your example which user xml, configuration. i have posted my 2 xml file here for your reference. (social-security xmls). hope it will make sense. i am going through the clinks that you are given. i am stuck on what to do with when the authentication call back return to facebookConnected.jsp. i guess, i want to capture the callback and take the control to spring-security and let the application work flow proceed. as you can see, i have used default spring provided controller, i guess i need to overwrite this and configure a way to let the flow run through the application flow. thank you very much for your help. thanks saranga Hi Sam, I just realized something. Do you use the url /auth/[provider] (in other words, for Facebook this url would be /auth/facebook) to start the social sign in flow or do you use the url /connect/[provider]? If you only want to authenticate the user, you should use the first url (/auth/[provider]) because requests send to that url are processed by the SocialAuthenticationFilter. I took a very quick look at your configuration files and they seemed to be fine. I want to make a full example integrating spring social and spring security using MongoDB , i need some examples , links or tuorials that help me to achieve that. i don’t know the needed changes to make in order to use mongodb instead of mysql because the problem that i faced is that Spring Social project already provides a jdbc based connection repository implementation to persist user connection data into a relational database. i don’t know if this is only used by relational databases :( Hi Moussi, The JdbcUsersConnectionRepositoryclass persist connection information to a relational database. If you want to save this information to MongoDB, you have implement the UsersConnectionRepositoryinterface. I have never done this but I found a blog post titled Customize Spring Social Connect Framework For MongoDB (unfortunately this blog post has been removed). It explains how you can persist connection information to MongoDB. You can get the example application of this blog post from Github. I hope this helps you to achieve your goal! I am looking forward on using this solution for testing in my environment. As i have had no contact yet with sitemesh, here’s my question. How would i do sth. like this: I think this work should be done in ExampleApplicationConfig, but i am stuck with this. Is there some easy solution to add things like ? Hi there, forget about my last post. I made a small change in “ExampleApplicationConfig” on setting up the sitemesh FilterRegistration.Dynamic sitemesh = servletContext.addFilter(“sitemesh”, new TagBundlerFilterForSite()); sitemesh.addMappingForUrlPatterns(dispatcherTypes, true, “*.jsp”); While adding the new Class to the same package: public class TagBundlerFilterForSite extends ConfigurableSiteMeshFilter { public TagBundlerFilterForSite(){ this.applyCustomConfiguration(new SiteMeshFilterBuilder()); } @Override protected void applyCustomConfiguration(SiteMeshFilterBuilder builder) { builder.addTagRuleBundle (new DivExtractingTagRuleBundle ()); } } I can now do this: Template: JSP-Page:Some more data for me this really helps to use my template well, maybe u or others can use that. I know the setup on the constructor is a bad thing, but as i really tried to get this working, i was happy for now. If there is a better solution, let me know ! :) It is great to hear that you were able to solve your problem! Also, thanks for coming back and posting your solution to my blog. Now I know where to look if I need similar functionality (I have never needed it yet). Heh, This is really an awesome post.This will help me a lot. Can you please mail me the zip file of the complete code? I tried to copy the code and and run, but it’s not working. Am trying to remove errors since last 3 days, but not able to do so. Please help me out. Mail me as soon as possible. You can get the code from Github (you can either clone the repository or download the code as a Zip file). Remember to read the README as well. Hi petri, I have one doubt.How to set the anonymous user for authentication without xml configuration? Hi, If you are talking about the anonymous authentication support of Spring Security, it should be enabled by default. The default role of the anonymous user is ROLE_ANONYMOUS, and you can use this role when you specify the authorization rules of your application. Unfortunately I don’t how you can customize the anonymous authentication support without using XML configuration. :( Hello! I’m trying to follow this tutorial, but I have a problem downloading the dependencies. Can you help me out? Thanks The following artifacts could not be resolved: org.springframework.social:spring-social-config:jar:1.1.0.BUILD-SNAPSHOT, org.springframework.social:spring-social-core:jar:1.1.0.BUILD-SNAPSHOT, org.springframework.social:spring-social-security:jar:1.1.0.BUILD-SNAPSHOT, org.springframework.social:spring-social-web:jar:1.1.0.BUILD-SNAPSHOT, org.springframework.social:spring-social-facebook:jar:1.1.0.BUILD-SNAPSHOT: Failure to find org.springframework.social:spring-social-config:jar:1.1.0.BUILD-SNAPSHOT in was cached in the local repository, resolution will not be reattempted until the update interval of spring-milestone has elapsed or updates are forced Hi Roman, It seems that I forgot to mention that you have to add the Spring Snapshot repository to the pom.xml file (see the pom.xml file of the example application). Also, you might want to use version 1.1.0.RC1 because it was released this week (I will update the example during the weekend). I hope that this answered to your question. If not, let me know! Please let me know if I’m misunderstanding, but it appears that this application permits a user to associate exactly one social-media account–of any type–with their application account, so that a user can’t associate both Facebook and Twitter accounts simultaneously. It appears that the SocialUserDetails interface has a massive flaw in that its getUserId() method takes no parameter specifying *which* social service for which we’re looking up the user’s identity. Did I overlook some out-of-band information to the persistence layer about which social network is being talked about (such as an injectable thread-local holder), or does is this entire setup limited to a single social-media association per user? You are right. If a user creates a user account by using Facebook, he cannot sign in by using Twitter (or associate a Twitter account with his local user account). However, it is possible to support multiple social sign in providers as well. I haven’t done this myself but I think that you can do this by following these steps: Userentity and add support for multiple social sign in providers (this is required only if you want to store this information). UserIdSourceinterface and create a userId which contains the information required to identify the local user and the used social sign in provider. SocialUserDetailsServiceinterface and ensure that it can handle the “new” userId (you have to parse the local username from userId and load the correct user). ProviderSignInUtilsclass. If you have any further questions, don’t hesitate to ask them. Petri, I was looking around on the web to find out how to change the scope when doing a Facebook login authorization. I was looking to make the change in the configuration rather having to send a post with a hidden variable on the social login button. I see in SocialContext that we are adding the Facebook connection factory and it has a method to set the scope. I changed the scope and it does not change the scope on the authorization. Do you know how to change this at the configuration level? There is an “Authorization scope” section that explains how it is done but not at the configuration level. Have you done this before? I was able to verify that setting the scope by calling the setScope(String scope)method of the OAuth2ConnectionFactoryclass ( FacebookConnectionFactoryextends this class) doesn’t seem to do anything. Unfortunately I cannot provide an answer right away, but I promise that I will take a closer look at this because the Javadoc of that method suggests that it should work. I will let you know if I find a solution. I looked around tonight and did not find a way to set the scope but I did find that OAuth2AuthenticationService.defaultScope is really what is being used when it adds the scope to the URL. If you don’t pass a scope as a hidden variable it will use the defaultScope. Thanks again for always being so helpful. Petri, Have you found a work around for this? I have not. I tried looking through the source code and kept getting a little lost and did not find a path to set the scope. Hi, Actually I did found something from the web: It seems that if you want to set the default scope, you have to use the FacebookAuthenticationServiceclass. The SecurityEnabledConnectionFactoryConfigurerobject given as a method parameter to the addConnectionFactories()method of the SocialConfigurerinterface creates the required authentication service objects BUT it doesn’t set the default scope. I assume that if you want to set the default scope, you have remove the @EnableSocialannotation from the SocialContextclass and create all required beans manually. I can take a closer look at this tomorrow. I checked this out and I noticed one problem: I should create a ConnectionRepositorybean but I am unable to do it because the ConnectionRepositoryclass is package-private (and I don’t want to move the SocialContextclass to the same package). You could of course configure Spring Social by using XML configuration but if you don’t want to do that, you should create a new issue to the Spring Social project. Petri, Thanks again. I created a ticket. With your help I hope they have enough information to find the bug. I am a little lost. You are welcome! Let’s hope that the Spring Social team can solve this soon. great post, helped me very much. I am happy to hear that. Hi Petri, I’m trying to implement spring-social-facebook in my application, however I’m stuck in the JdbcUsersConnectionRepository part. I would like to have my own UsersConnectionRepository using Hibernate but without JPA. Thanks much in advance The JdbcUsersConnectionRepositoryclass uses the JDBC API. In other words, you can use it in an application which uses the “native” Hibernate API. You can of course create your own implementation by implementing the UsersConnectionRepositoryinterface but I am not sure if it is worth the effort. Did you mean that you want to create a custom UserDetailsServicewhich uses Hibernate instead of Spring Data JPA? HI Petri At the end of this tutorial I have an error with the servletContext.addServlet, servletContext.addFilter and servletContext.addListener … I´m workinh with Eclipse and the message that appear is “The method addListener(ContextLoaderListener) is undefined for the type ServletContext”. The solution tath Eclipse suggest me is “Add Cast to servletContext” What can I do? Thanks much in advance You need to use the Servlet API 3.0. You can get the full list of required dependencies by reading the pom.xmlfile of the example application. Hi Petri, Thank you very much for the detailed tutorials! I have a question that is similar to the one from Ademir. I would like to integrate spring social into my project however I don’t need any of the spring social persistence stuff and just seems to conflict with my application. All I really need is Spring Social’s Facebook methods. Is this possible to simplify the setup in this way? Any help would be greatly appreciated! Hi, Do you want that your application is able to read information from Facebook and write information to Facebook (and that is it)? If that is the case, you should read a tutorial titled Accessing Facebook Data. It should help you to get started. If that guide doesn’t solve your problem, let me know! hello my friend, i have a trouble to run this sample, in “” class on line 68 we have .apply(new SpringSocialConfigurer()) but i get can’t resolve method ! i’m sure that i provide all maven dependencies correctly, then i tried to upgrade the “spring.security.version” to 3.2.4.RELEASE but the problem remains. whats the problem ? thanks. Are you getting a compilation error or a runtime error? Also, if you get a runtime error, it would be useful to see the stacktrace. hello again, when i start packaging app by using maven directly, i get rid of my first question, because that was just an IDE wrong alert. but now i have another problem after returning from facebook auth, the page redirected into and i an error 404 will raise. of course i can’t find any controller matching /signin url why!? whats the problem you think ? thank you Hi, Petri can you help me on my question ? thanks Hi, Yes, I was wondering if your previous problem was an IDE alert because I used to see a similar alert in IntelliJ Idea. However, it disappeared after I updated it to a newer version. I assume that your problem occurs when a registered user tries to log in to your application (because of the url). If this isn’t the case, let me know. Anyway, you can configure the url in which the user is redirected after a successful login by following the instructions which I gave in this comment. Let me know if this doesn’t solve your problem. here is my changes in http:/localhost:8080/login i click on Sign in With facebook then i redirected to facebook and have a successful login but it still redirected to http:/localhost:8080/signin#_=_ here is statcktrace: 0][route: {s}->https:/graph.facebook.com:443][total kept alive: 0; route allocated: 1 of 5; total allocated: 1 of 10] DEBUG – MainClientExec – Opening connection {s}->https:/graph.facebook.com:443 DEBUG – ttpClientConnectionManager – Connecting to graph.facebook.com/173.252.112.23 allocated: 0 of 10] sorry Petri, can i ask what url you expected to called after return from facebook ? which controller must catch the request and how can i get auth_token to get all friends of logged in user ? i have so many quastions, but at first i need to run application properly, i googling so much for some other examples but yours is best article ever. thanks again. No problem. I happy to help but I am on summer holiday at the moment so my response time might be a bit longer than in a normal situation. Anyway, if you implement the registration and login functions as described in this blog post, the only urls you should care about are: I have never experienced a situation where the user would have been redirected to the ‘/signin’ url, so I am not sure how you can solve this problem (The log you added to your comment doesn’t reveal anything unusual). I think that the easiest to way to solve this problem is to compare the configuration of your application with the configuration of my example application. Unfortunately it is impossible to figure out the root cause without seeing the source code of your application. About your second question: I have never used Spring Social Facebook for accessing user’s Facebook data (such as list of friends), so I don’t know how you can do it. I think that your best bet is to read this guide which explains how you can create a simple web application that reads data from Facebook. excuse me Petri, i found that my server doesn’t have an access to graph.facebook.com:443 this make’s the problem, after i resolve this issue now have a null pointer exception at org.springframework.social.security.SocialAuthenticationFilter.doAuthentication(SocialAuthenticationFilter.java:301) Authentication success = getAuthenticationManager().authenticate(token); and getAuthenticationManager(). return null value ! do you have any suggestion ? thanks you for your replies, Have you configured the AuthenticationManagerbean? You can do this by overriding the protected void configure(AuthenticationManagerBuilder auth)method of the WebSecurityConfigurerAdapterin the SecurityContextclass (assuming that your configuration is similar than mine). finally, all thing worked together successfully. thanks for all of your advises. have a good holidays. Thanks! It is good to hear that you were able to solve your problem. Hi in RepositoryUserDetailService I am getting error “The method getBuilder() is undefined for the type ExampleUserDetails”…Please Help anyone.. Some of the methods of the ExampleUserDetailsclass were left out from this blog post because I wanted to shorten the code listings. The getBuilder()method was one of them. You can get the source code of the ExampleUserDetailsclass from Github. hello Peter.. thank your for this article.. can u create a video tutorial for this article..? Hi, That is a great idea. I will add this to my Trello board, and I will record the video tutorial in the future. I’m using your example and my question is where the method is implemented public User findByEmail(String email);?? i dont see (interface UserRepository.class) The findByEmail(String email)is a custom query method which is found from the UserRepositoryinterface. This has been explained in the section titled ‘Implementing the UserDetailsService interface’. You can also get the source code of the UserRepositoryinterface from Github. yes, but I do not see where is the implementation of the method, see the calling but where is the implementation? I really doubt that userRepository class implements the interface for the method to work findByEmail sorry for my English thanks!! The example application uses Spring Data JPA which provides a proxy implementations for each interface which extends the Repositoryinterface. That is why there is no need to implement the UserRepositoryinterface. Hi Petri, I in the process of implemented Spring Security & Spring Social for the website and also would like to allow for the iOS and Android apps to connect via the social as well. Idea i had in mind is to have the centralised api expose in web end and let it handle the social signup/sing in process where mobile end only connect to this api. Can you please help me on modelling such and how should i go on about this. Thanks Hi Sam, I have never done this myself (surprising but it is the truth). However, I found an answer to StackOverflow question titled ‘Integrate app with spring social website’ which looks quite promising. The idea is that you have to first implement the social sign in by using the native APIs (Android, iOS) and then provide the required credentials to your backend and log the user in. I am planning to extend my Spring Social tutorial to cover REST APIs in the future. I will probably take a closer look at this when I do that. Thanks Petri. Look forward to it. Hi Petri, Did you have a chance to add REST API support in this example? Hi Armen, No. I will update my Spring Data JPA and Spring MVC Test tutorials before I will update this tutorial. If everything goes as planned, I will do it at November or most likely at December. Actually I asked this from the creator of Spring Social because one of my readers asked the same question, and he said that at the moment the best way to do this is to do a normal GET request and let the backend handle the authentication dance. Hi Petri, Thank you for your quick response. I tried working with maven but facing below issue. Can you please help me to look into this issue. [INFO] ———————————————————————— [INFO] BUILD FAILURE [INFO] ———————————————————————— [INFO] Total time: 4.906 s [INFO] Finished at: 2014-07-09T15:47:17+05:30 [INFO] Final Memory: 13M/154M [INFO] ———————————————————————— [ERROR] Failed to execute goal org.apache.maven.plugins:maven-resources-plugin:2.6:resources (defau lt-resources) on project spring-mvc-normal: Error loading property file ‘F:\workspace\login\profile s\dev\socialConfig.properties’ -> [Help 1] [ERROR] Hi Amit, the Maven build is a bit different than the setup described in this blog post. It expects to find socialConfig.propertiesfile from the profiles/dev/directory. This properties file contains the configuration of your Facebook and Twitter application (app id and app secret). Also, the example application doesn’t contain this file. This means that you have to create it yourself.The README of the example application explains how you can set up the example application before you can run it. I hope this answered to your question. If not, feel free to ask more questions! I deployed the project but not able to access registration page and also authentication page is not available. can you please provide me complete setup of this project to learn more. Did you follow the steps described in the README of the example application? If you did follow them, could you check the log and paste the relevant part of it here (also, if you see any stack traces, add them here as well)? I followed the steps as you have mentioned and getting the login page. But when i tried to login my url is this “” and after submitting the URL it redirects to “”. So, I feel the problem is redirecting the URL. It’s not redirecting properly. Also, After login with facebook the url redirects to “” and it shows 404 error. Please let me know if i have to update anything more in configuration. The reason for this is that the actionattribute of the login form ignores the context path. You can fix this by replacing the formtag with this one: I made the fix to the example application. Thank you for reporting this bug. Another reader had the same problem, and the reason for this behavior was that his server didn’t have access to the Facebook Graph API. Thank you Petri. It works…. cheers :) I tried as mentioned above but still it redirects to same url. I pasted my console log. Please have a look and help me to fix this issue. DEBUG – ttpClientConnectionManager – Connection leased: [id: 0] allo cated: 0 of 10]: 1]-1: Shutdown connection DEBUG – MainClientExec – Connection discarded DEBUG – anagedHttpClientConnection – http-outgoing-1: Close connection DEBUG – ttpClientConnectionManager – Connection released: [id: 1][route: {s}->https :/graph.facebook.com:443][total kept alive: 0; route allocated: 0 of 5; total allo cated: 0 of 10] Your log file looks pretty similar than farziny’s log. His problem was that his server could not access the Facebook Graph API. I am not exactly sure what he meant by that, but I assumed that either his FB application was not configured properly, or the app secret and app key were not correct. When you created the FB Application for your web application, did you enable the Facebook login? Hi Petri, As, you said its blocking the facebook graph api, you were correct. I fixed the problem and working fine everything. You rocks dude… Trying to implement to get the post feed of user and friend list also. May be your help required in future again. So, thanks a lot in advance. Hi Amit, Petri, Even I am getting the same error you specified earlier. After the Facebook login, the control is not returning back to login page/signUp page. Do you know how to resolve this/ Enable the Facebook graph API on the server? DEBUG – headers – http-outgoing-1 << Access-Control-Allow-Origin: * DEBUG – headers – http-outgoing-1 << X-FB-Rev: 1653508 DEBUG – headers – http-outgoing-1 << ETag: "02e90b73697f1bf84bb1c08a06c30817978e2ff1" DEBUG – headers – http-outgoing-1 << Pragma: no-cache DEBUG – headers – http-outgoing-1 << Cache-Control: private, no-cache, no-store, must-revalidate DEBUG – headers – http-outgoing-1 << Facebook-API-Version: v2.0 DEBUG – headers – http-outgoing-1 << Expires: Sat, 01 Jan 2000 00:00:00 GMT DEBUG – headers – http-outgoing-1 << X-FB-Debug: KntgReJ8rZbpdGdWOho0pLPgYBPEpFQei1a+jQNDJJBs+qoE6Sx9pBiHGMk0MsA5NEv6oa0uEv5ABrrVqMwgJg== DEBUG – headers – http-outgoing-1 << Date: Sun, 22 Mar 2015 18:07:53 GMT DEBUG – headers – http-outgoing-1 << Connection: keep-alive DEBUG – headers – http-outgoing-1 < can be kept alive indefinitely DEBUG – ttpClientConnectionManager – Connection released: [id: 1][route: {s}-> kept alive: 1; route allocated: 1 of 5; total allocated: 1 of 10] Hi Petri, I am facing a problem where the Facebook sign-in happens but after that the control is not returning to my applicaiton. It is simply waiting for localhost. I am simply running this code as is without any modification( except creating a new socialConfig.properties ) what basically happens after the facebook login? Does it check for the user existance? I am asking this because I havent configured the Database. Hi, The registration and login process is explained in the second part of my Spring Social tutorial. Does your problem occur when the user tries to create a user account by using social sign in (i.e. he clicks the social sign in link for the first time)? Hi Petri, Nice article!! I have followed exactly the same steps as described here and can go to facebook login page from my apps login page (by clicking on facebook link).After fb login , the app lands back to the apps login page with the url appended with ‘#_=_’ e.g. ‘’. Actually I expected the registration page will be displayed instead.SocialUserService::loadUserByUserId() is not getting called as I put some sop there. Any hints? Best regards, Pradeep my own mistake, forgot adding /usr/register twith permitAll() in security context. Can see the register form now. It is good to hear that you were able to solve your problem! Well the user registers now,I mean registration screen comes and user details are saved to my Candidate table but UserConnection table has no entries of this new user.So looks like some configuration error still there regarding SocialUserService since it’s loadUserByUserId() is still not called. My securitycontext.xml is: and social.xml is My databasecontext.xml has the rest: 3 ? request.getRequestURI().split(‘/’)[3] : ‘guest’}” /> Any hints for this problem? Wordpress ate your XML configuration but I happen to have an idea what might be the problem. You have to persist the user’s connection to the UserConnectiontable after the registration has been successful (and the user uses social sign in). You can do this by following the instructions given in the second part of this tutorial. However, please note that the static handlePostSignUpmethod of the ProviderSignInUtilsclass has been deprecated. You should use the doPostSignUp()method instead. I hope that this solves your problem. That was a perfect hint!! Actually I have commented the call to ProviderSignInUtils.handlePostSignUp() thinking its not useful in my case. Thanks a lot :) You are welcome. It is good to hear that you were able to solve your problem. This article is not simple. It’s difficult to understand so please make simple example Unfortunately I have no idea how I could create a simpler example application since I think that the example application is already pretty simple. However, if you let me know which parts of this blog post were hard to understand, I can try to make them easier to understand. HI Petri, Logout is not working for facebook. It just redirecting to login page. How can I do logout in facebook. And for twitter I’m getting this error. org.springframework.web.client.HttpClientErrorException: 406 Not Acceptable org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:91) Do you mean that log out function of my example application is not working, or are you trying to follow the instructions given in this blog post and cannot get the log out to work in your application? Also, do you mean that after the user clicks the log out link, the user is still logged in to the application and can access protected resources (such as the front page of my example application)? yes, when I do click on logout button it’s redirecting to login page. but when I try to open facebook.com it’s directly shows me home page.(It’s not asking me use name and password again) That means logut is not working properly. As far as I know, Spring Social doesn’t support this. However, there are a couple of workarounds which are described in the following forum discussions: I hope that this answered to your question. Also, remember that you have to use similar workarounds for other social sign in providers as well.) Hi Petri, Find the solution. I was not setting callback URL in twitter that was the problem. But when I do login in twitter and when it’s successful it’s redirecting me to registration form. Why is so? do I need to change my callback URL? Great! It is good to hear that you were able to solve your problem. The user is redirected to the registration page because either the user account or the persisted connection is not found from the database. This can be a bit confusing so I will explain it here: When a user “returns” to the example application after clicking a social sign button, Spring social tries to find the persisted connection from the UserConnectiontable. If the connection is not found, the user is forwarded to the registration page because the application assumes that the user doesn’t have a user account yet. On the other hand, if the connection is found, Spring Social tries to find the correct user account by using the value of the userIdcolumn (the example application uses email address as user id). If the user is not found, the user is forwarded to the registration page. I hope that this answered to your question. By the way, if you want to get more information about the social sign in flow, you should read the second part of this tutorial. Thanks Petri. Thank you for help. Project is now working properly. :) You are welcome. :) Hello, I have a problem with facebook when I turn on HTTPS for /**. When user is redirected back from Facebook site after successful login, and after granting permissions for my app. It goes back to my SocialAuthenticationFilter and attemptAuthentication method. Everythink is ok for http, but with https this method is called one more time, and the user is already authenticated (in attemptAuthService()) so it tries to addConnection but token.principal is null so entire method returns null. In the end the AuthenticationServiceException(“authentication failed”) is thrown and user is redirected to defaultFailureUrl. I use XML version of your config. I tried to force http for /auth/** and it WORKS, but i don’t think it’s safe to transfer tokens on unsecured channel. Don’t know what to do :( When the user is redirected back to your web application, is the user redirected by using HTTP or HTTPS? The reason why I ask this is that I am using Spring Social in a web application that is served by using HTTPS, and the Facebook login is working correctly. I checked the Facebook application configuration of that web application and I noticed that I had entered the website url by using HTTPS (e.g. instead of). If I would be you, I would check the website url setting found from the Facebook application configuration and ensure that the website url setting has the ‘https’ prefix instead of ‘http’. Let me know if this solves your problem. Aww… I have finally figured it out. The problem was in my social configuration. I have added authenticationSuccessHandler (SavedRequestAwareAuthenticationSuccessHandler) with useReferer=true to my socialAuthenticationFilter. I have done that because I have bootstrap modal dialog with login form on every page and I wanted to redirect user to the same page after authentication. I had totally forgot about that. It is good to hear that you were able to solve your problem! Everytime a sql-query is placed it results in a 404 Error. For example I can load the main page without problems. But if I put User test = userRepository.findByEmail(“test”) somewhere in the code i get the 404 Error again. Also everytime I try to do something like login -> 404. From the logs i see the sql query which works fine if I copy it into phpmyadmin. Except for this logging I se nothing. If I place a logging before and after the query I only see the first one. I guess the query somehow crashes and produces a 404. I know 404 means not found but this does not make any sense. What can I do? Is there a way to turn up more loggings? I use Glassfish with the source provided from github. I have never tried to run this with Glassfish but I can try to figure out what is going on. Which version of Glassfish are you using? Is it possible to override the /auth/{providerId} path? I need to do this because I have multiple security chains and in one of them the social login is meant to do something a bit different and also direct you to somewhere a bit different to. Hi Ricardo, There is a way to override the url that is processed by the social authentication filter, but you have to make your own copies of the SpringSocialConfigurerand SocialAuthenticationFilterclasses. I know that this is an ugly solution but unfortunately it seems that there is no other way to do this (yet?). The source code of the CustomSocialAuthenticationFilterclass looks as follows (check the comments for more details): The source code of the CustomSpringSocialConfigurerclass looks as follows: After you have created these classes, you need configure your application to use them (modify either the exampleApplicationContext-security.xml file or the SecurityContextclass). I haven't compiled these classes yet but you probably get the idea anyway. Also, I remember that I removed some irrelevant code to make the code sample more clear. This code must be present in your CustomSocialAuthenticationFilterand CustomSpringSocialConfigurerclasses. I hope that this answered to your question. Hi Petri, I am using your demo. I have rename one table named “userconnection” to “mg_userconnection” . I have changed table named in script. But when I am redirecting to register page after social login it’s throws error like this. “bad SQL grammar [select userId from UserConnection where providerId = ? and providerUserId = ?]; nested exception”. My question is how can I rename that table name? Thanks. Hi Naitik, You can configure the table prefix by using setTablePrefix()method of the JdbcUsersConnectionRepositoryclass. The SocialContextclass implements the getUsersConnectionRepository(ConnectionFactoryLocator connectionFactoryLocator)method of the SocialConfigurerclass. You can make the required modifications to this method. After you are done, the getUsersConnectionRepository(ConnectionFactoryLocator connectionFactoryLocator)method looks as follows: Now Spring Social is using the table mg_UserConnectioninstead of the UserConnectiontable. At the moment it is not possible to configure the “whole” name of this database table. In other words, if your database uses case sensitive table names, you cannot transform the name of created database table into lowercase. Hi Petri, with a little bit of customization I’ve this scenario: when a new user registered (not social) displays the phrase “You cannot create an user account because you are already logged in.”. but if I try to go to the home (index.jsp) I get an internal server error, maybe because the context is wrong. here the code of home button: <a href="">home</a> but if I use different path (/upload for example) all works fine (I’ve a controller) all works fine if I logout and login again whit the login form… the problem is for new registration only. can you help me? Hi Tiziano, I tested this out by following these steps: The result is that I can see the home page in my browser. My home link looks as follows: However, this assumes that the application uses context path ‘/’. If you want to use another context path, you should create your link by using the following code: Did I miss something? If so, let me know :) Thanks Petri, I forgot to tell you that I already use the contextPath tag. sorry… I obtain a generic error (500 error). with debug I see that creation time in Object principal (cast to UserDetails) is null inside database until I logout and login again. something miss in registration process? Did you remember to log the user in after the registration is completed? You should take a look at the controller method that processes the request created when the user’s submits the registration form. If you are logging the user in after registration, could you add the stacktrace found from the log file here? It is kind of hard to say what is going on without seeing any code or a log. Hi Petri, Thanks for posting this tutorial this is really good. I used your xml configuration with web.xml . it is unable to locate the bean of userConnectionRepository and connectionFactoryLocator. Below are the error what i am getting Cannot resolve reference to bean ‘usersConnectionRepository’ while setting constructor argument; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named ‘usersConnectionRepository’ is defined. Thank you for pointing this problem out. I just noticed that the XML configuration is still using Spring profiles (I removed these profiles from the Java config). If you want to use the XML configuration, you should either <beans profile="application">element and leave the other elements intact. I will remove the Spring profiles from the XML configuration when I have got time to do it. Again, thank you for pointing this out! thanks Petri for your great tutorial and quick response. I have one clarification that how user/provider is binded with registeration. i want to just persist the data and allow the user what should i do. More clear I dont want the registeration form in the case of social sign in. Hi, The second part of this tutorial explains how you can create a registration form that supports “normal” registration and registration via social sign in. If you have any other questions about the registration process, leave a comment to that blog post. thanks ,very usefully You are welcome! I am happy to hear that this blog post was useful to you. Hi, I cloned the repo from and executed the below command: mvn clean install It failed by giving the below error: [ERROR] Failed to execute goal org.apache.maven.plugins:maven-resources-plugin:2.6:resources (default-resources) on project spring -mvc-normal: Error loading property file ‘D:\Personal\BACKUP\Non-Cloud\Projects\workspace\petrikainulainen-spring-social-examples- master\sign-in\spring-mvc-normal\profiles\dev\socialConfig.properties’ -> [Help 1] [ERROR] Can you help? Thanks. Hi, you need to create a Facebook and a Twitter application before you can run the example application. Also, you need to create the socialConfig.propertiesfile and configure the used social sign in providers in that file. The README of the example application provides more details about these steps. Hi Petri, Thanks for your prompt response. I finished first two parts of this tutorial and sincerely thank you for all your efforts in creating a top-quality example app. I am new to Java & Spring development and learned a lot by going through these two articles and the source code. I am able to run the app by “mvn jetty:run -P dev” now after reading the README file but facing few issues in which I need your help. I plan to create a tomcat-deployable WAR file so I did: mvn war:war -P dev, and came up with a WAR. Now things don’t work as they were working earlier: 1. Once I Log in using my FB account, the app shows me a Welcome page. When I press the Logout button on that page, it takes me to instead of the logout success URL:. 2. Also, if I type again after logging out, I am able to see the Welcome page. That means user is still logged in my app as well as in FB. 3. Both problems 1 & 2 are there when I create a new normal user (without FB etc). 4. Create user account also doesn’t work now. It comes to and displays a blank page. I believe either the WAR is not getting created properly or Tomcat is not getting configured. Can you suggest what could the problem here? I found this link “” but not sure whether it is relevant to my problem. Thanks once again for writing such a wonderful tutorial. I think that these problems are caused by the fact that the application assumes that it is run by using context path ‘/’. It seems that you are running it by using the context path ‘/spring-mvc-normal/’. Am I right? I thought that I already fixed this, but it seems that I didn’t fix it after all. Thank you for pointing this out! I will fix the JSP pages and commit my changes to Github. Update: I fixed the JSP pages. You can get the working example application from Github. Thanks Petri, you nailed down the problem. After pulling your last commit, I was able to run the app perfectly fine from Tomcat. BTW, I want to know whether there was a mistake in my running the app. Is it not the right way to launch when one would deploy the app in production environment? You are welcome. The problem was that I didn’t think that it should be possible to run this application by using other context paths than ‘/’. In other words, it was a bug in the application (my mistake, not yours). Hi Petri, I am facing one strange problem now. When I try to login / register for the first time, I don’t see the “Create user account” link at the top right corner of the page. Once I log in successfully, I don’t see the log out button anymore. I re-used your JSP pages as they are in my test app. But looks like something is missing. What do I need to effectively use “sec:authorize” in my JSP pages? Thanks for your help. Hi, The reference manual of Spring Security states that: Do you configure your application by using XML configuration or Java configuration? If you use Java configuration, you should check the source code of the SecurityContextclass. If you use XML configuration, you should check the exampleApplicationContext-security.xmlfile. Hi Petri, I have matched the XML and JSP files many times but there is no change except the package names which are as per my application. During my debugging, what I realized is that none of the spring security tags present in the layout.jsp file is working. I put many debug prints in layout.jsp but nothing came up on the screen. If I do the same in other JSP files,they all show up. Does this ring any bell? Where have I screwed up? Sorry but since I am on the learning path, hence troubling you too much. Thanks. Hi Sona, Have you declared the Spring Security taglib in every JSP file that uses it? Don’t worry about this. I am happy to help. Hi Petri, Finally, months after reporting this issue, I found the root cause and believe me, it is one of most silly solutions I have found for my issues. Problem is in the filter ordering in my web.xml file. I wasn’t aware that ordering plays a critical role and here sitemesh and spring security were not in order. Following thread from SO helped me in identifying this: Thanks for all your support in debugging this :) Hi Sona, You are are welcome (although I think that was not able to provide you a lot of help on this matter). In any case, it is great to hear that you finally found the reason for your problem and were able solve it! Hi Petri, I am very new to spring security at all and I have implemented spring security with openID authentication(Login through gmail) now I am trying spring facebook integration. For this, I have written custom class which is generic for all i.e. simple security,openId Auth and spring facebook as follows:- public class UserAuthenticationFilter implements UserDetailsService,AuthenticationUserDetailsService,SocialUserDetailsService { public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException, DataAccessException { } public UserDetails loadUserDetails(OpenidAuthenticationToken token) throws UsernameNotFoundException, DataAccessException { } public SocialUserDetails loadUserByUserId(String username) throws UsernameNotFoundException, DataAccessException { } in this way, in above filter, I have overridden the required methods.But I am not able to execute the loadIUserByUserId method at all. I have added following code in security.xml:- and in jsp code is as follows <a href="”> Login with facebook Is it necessary to create the facebook app? Can u plz tell that Is It a correct way to implement this. plz give me the solution to get success on spring facebook login and plz suggest what implemtation is remaining . I am stuck on this. Thanks in advance Hi, Yes. You cannot implement a Facebook login without it because you need a valid app id and app secret. If you haven’t created a Facebook application for your application, this is the reason why you your Facebook login is not working. Unfortunately Wordpress “ate” your XML configuration so I cannot say if there is something wrong with it. However, you can take a look at the exampleApplicationContext-security.xmlfile, and compare it to your security.xmlfile. I hope that this answered to your question. Hi Petri, Thanks for reply, Actually I have same xml configuaration as ur exampleApplicationContext-security.xml and exampleApplicationContext-social.xml. And I have already done data source and transactionManger like configuration in application context.xml. But I have some doubts and issues as follows:- 1)I am getting this error in exampleApplicationContext-security.xml Line 77 in XML document from ServletContext resource [/WEB-INF/application-security.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 77; columnNumber: 65; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element ‘constructor-arg’ 2) We don’t have created any object then why are we using the “usersConnectionRepository” this value as ref in security.xml plz help me to make this integration working. I am not getting clicked anyway here. Thanks a lot petri…… Hi, it seems that the problem is that you haven’t configured the UsersConnectionRepositorybean. This bean is required because Spring Social persists the user’s connection to a social service provider. My example application uses the JdbcUsersConnectionRepositoryclass that persists connection data to a relational database. The section 2.3. Persisting Connections of the Spring Social reference manual provides more information about this. Hi Petri, Can u plz tell me that how to integrate facebook login with localhost application. Plz help me. I am using gwt with spring and hibernate. My generated local URL is What is the correct way to implement this? Thanks Thanks in advance Hi, I have never used GWT (Google Web Toolkit?), and unfortunately I have no idea if it is possible to integrate Spring Social with it. :( If you want to use Spring Social, you should use Spring MVC and follow the instructions given in my Spring Social tutorial. Excellent post for Spring MVC. How can we make it work for Spring MVC without ViewModel, but plain REST services that return JSON? In that case, the user probably uses some javascript based SPA that already authenticates with the SocialSignInProvider outside the MVC application and already has a token. Thank you for your kind words. I really appreciate them. Unfortunately I have never added social sign in to a Spring powered REST API. In other words, I don’t know how you can use it with Spring MVC if your application provides only a REST API. I have been planning to write a separate tutorial about this for a long time. I guess now is a good time to actually do it. Petri, One of the finest tutorial available on the internet. I searched a lot for spring social sign in with Google+. Your example also implements facebook and twitter Could you please guide or leave comment on how can I implement Google+ sign in using spring MVC? Again many thanks. Thank you for your kind words. I really appreciate them. Unfortunately I have never done this myself :( However, I am going to update my Spring Social tutorial when I have got some free time (I don’t know the exact schedule). I will add the Google sign in to my example application and add a new part to my Spring Social tutorial. Great tutorial ! I was able to integrate Facebook, LinkedIn and Google+ Sign-In and was able to post link on Facebook & LinkedIn. But wondering which API of google to use for the same. Any help please ? Also I noticed, when logging out, data from UserConnection is not getting deleted. Is it right behavior or I missed something in configuration? I noticed , record gets deleted when used POST /connect/facebook with delete hidden parameter like below <form action="” method=”post”> Spring Social Showcase is connected to your Facebook account. Click the button if you wish to disconnect. Disconnect Thank you for your kind words. I really appreciate them! There is a community project called Spring Social Google, but I haven’t tried it out yet. If you want to give it a shot, you can get started by readings its reference manual. This is normal. The data found from the UserConnectiontable links the information received from the social sign in provider with the local user account. If you delete this link, the application cannot log you in (unless you create a new user account). Hi Petri, I have implemented spring social.When I clicked on the”login with facebook” button then it is get redirected to the facebook login page successfully but now after login the facebook ,its get redirected to signup url means it is not finding the user in database. 1) I have registered the user in my application and able to login alsoby using simple username and password without using the facebook login. 2)But when I login through the facebook with the same user then why it is no finding that user and it is get redirected to signup url. First it was giving the errror as “userconnection table not exit” now I have created the userconnection table in my database. I have the following xml configuration :- ——— xml configuration—— —– End of xml configuration—– In jsp page, I have used the code like:- These are my all changes. Apart from this,I couldnt have implemented anything else. So please, what should I do here for the proper working of the applicaiton, my facebook login user should redirect properly to my application and data should be inserted by connect controller in “userconnection” table. I am stuck over here please highligh what should I implement now at this stage. Many Many thanks in advance… The reason for this is that the user who tries to log in by using a social sign in provider is not the “same” user than the user who created a user account by using the “normal” registration. Because the user created a user account by using the “normal” registration, he/she cannot sign in by using a social sign in provider and expect the application to identify his/her existing user account. The application cannot do this because the UserConnectiontable is empty. Also, you cannot just insert a row to this table (unless the user is creating a user account by using social sign in) because you need the information returned by the social sign in provider. If you need to support only Facebook, you can try to fix this by making the following changes to the controller method that renders the sign up view: UserConnectiontable, and log the user in. Also, you have to configure Spring Social to request the user’s email address when the user starts the social sign in flow by using Facebook. This approach has two limitations: I hope that this answered to your question. Thank u so much Petri. As u suggested I tried a lot to create user connection but not work. Is there any url(signup url) through which user connection is created automatically. Thanks.. My example application has no such url, but you can persist the user’s connection by invoking the doPostSignUp()method of the ProviderSignInUtilsclass. Hi Petri, I tried to use the example of Maven but appeared this error in building:cmd.exe /X /C “”C:\Program Files\Java\jdk1.8.0_05\jre\bin\java” -javaagent:C:\Users\Victor\.m2\repository\org\jacoco\org.jacoco.agent.6.3.201306030806\org.jacoco.agent-0.6.3.201306030806-runtime.jar=destfile=C:\Users\Victor\Documents\spring-social-examples\sign-in\spring-mvc-normal\target\coverage-reports\jacoco-ut.exec -jar C:\Users\Victor\Documents\spring-social-examples\sign-in\spring-mvc-normal\target\surefire\surefirebooter5500528348155860158.jar C:\Users\Victor\Documents\spring-social-examples\sign-in\spring-mvc-normal\target\surefire\surefire5217935832005982224tmp C:\Users\Victor\Documents\spring-social-examples\sign-in\spring-mvc-normal\target\surefire\surefire_07294595199859845725tmp” -> [Help 1] Can you help me? Thanks I removed a few lines from this comment since they were irrelevant – Petri Hi, I noticed that you are using Java 8. I assume that the JaCoCo Maven plugin 0.6.3.201306030806 is not compatible with Java 8. If you don’t need to code coverage reports, you can simply remove the plugin configuration. If you want to generate the code coverage reports, you should update its version to 0.7.2.201409121644. Let me know if this solved your problem. Hi Petri, Good morning !!! I made a small application going through your fantastic tutorials and tried hosting it using Apache & Tomcat on a professional Hosting Server. However, I am not able to access the site without having the app name in the URL. I always have to do:. I followed many SO posts and tutorials but no where I could find some one who is having the same environment. So finally thought of asking you as how it should be done. Details of what I have tried are captured here: Please see if you can help. Thanks once again for all your help. Hi, The easiest way to solve to solve your problem is to change the name of deployed war file to ROOT.war. When you deploy a war file called ROOT.war to Tomcat, the context path of that application is ‘/’. This means that you can access your application by using the url. Hi Petri, Thanks for the suggestion. I tried that and it worked like a charm for my localhost but not for the production app running at my hosting machine. When I tried mydomainname.com, it gave me: Index of / [ICO] Name Last modified Size Description [IMG] favicon.ico 2015-02-26 15:54 822 [TXT] index.html.backup 2014-12-20 10:49 603 I checked both tomcat server.xml files and didn’t find any difference. Is there something else which I am missing? It’s kind of hard to say what could be wrong, but the first thing that came my mind was that your Apache might not be configured properly (it seems that is not “forwarding” requests to Tomcat). Are you using an AJP connector? Hi, My project run well based on Spring + Spring Security + JSF + WebFlow . Can I integrate Spring Social to my project without Spring Controler. Thanks in advance Hi, Because I don’t have any experience from JSF, I cannot answer to your question. However, it seems that using Spring Social without Spring MVC requires a lot of work. Hi Petri, You have the best example on spring social. am trying to add ‘login with facebook’ on my website where form based login is already implemented. You have merged both form based and social login into one and made confusing. Is there a way you can create an example with ‘login with facebook’ only based on xml configuraiton? Hi Shashi, Actually the login functions are not merged. The form login is processed by Spring Security and the social login is processed by Spring Social. However, if a user tries to sign in by using a social sign in provider and he doesn’t have an user account, he has to register one by using the registration form. This process is explained in the next part of this tutorial. You can of course skip the registration phase and create a new user account if the user account isn’t found from the database. If you want to do this, you need to modify the SignUpControllerclass. You need to read the user’s information from the request and use this information to create a new user account. The example application has XML configuration files as well. By the way, I will update my Spring Social tutorial later this year. I will address your concerns when I do that. For example, I plan to make this tutorial less confusing :) Thank you for the feedback! Thanks for a wonder post on this! It helped me understand how spring social works quicker than the official spring social post. :) I wanted to add spring social feature to an Appfuse app (uses spring security) that I had been playing with. Following your 2 posts, I was trying to integrate step by step and trying to see the effect of each step. But I ran into a problem right immediately : WARN [qtp937612-50] PageNotFound.noHandlerFound(1118) | No mapping found for HTTP request with URI [/app/auth/facebook] in DispatcherServlet with name ‘dispatcher’ Can you help explain where is the “/auth/facebook” request mapped? Is this done by simply declaring a connectController bean in your SocialContext class (which I have copied)? @Bean public ConnectController connectController(ConnectionFactoryLocator connectionFactoryLocator, ConnectionRepository connectionRepository) { return new ConnectController(connectionFactoryLocator, connectionRepository); } Thank you for your kind words. I really appreciate them! The SocialAuthenticationFilterclass processes requests send to the url: ‘/auth/facebook’. The SpringSocialConfigurerclass creates and configures this filter (check out the SecurityContextclass for more details). Thanks for this project, it is very good to understand social core.I only have question, there is any possibility to adapt this project in mobile or rest api ? I have problem with redirecting in process /signup for REST &( angularApp and mobileApp) ? I have never done this myself, but I think that it is doable. I tried to find information from Google, but I couldn’t find anything interesting. My next step was to ask help from my Twitter followers. I hope that someone can shed some light on this problem. Update: It seems that your only option is to do a normal GET request and let the backend handle the authentication dance. And how to send response to android or rest api after this dance ? Hi Wojciech and Petri, did you fix this problem i have the same many thanks! Hi Mova, Unfortunately your best option is to do a normal GET request and let the backend handle the authentication dance. However, that doesn’t solve your problem because you still need to respond to the client after the dance is complete. I have one idea that might help you to do this: The easiest way to solve this to create two controller methods which handle failed sign in attempt and successful sign in. Implement these controller methods by setting the preferred HTTP response code to the HTTP response. After you have implemented these controller methods, you have to configure Spring Social to redirect the request to the correct url. If you use Java configuration, you can do this by setting the values of the postLoginUrland postFailureUrlfields of the SpringSocialConfigurerclass. If you use XML configuration, you can do this by setting the values of the postLoginUrland postFailureUrlfields of the SocialAuthenticationFilterfilter class. If you don’t have any idea how to do this, check out the XML configuration file of my example application. It should help you to get started. Did this answer to your comment? hi petri, I tried the first part of this tutorial using xml configurations, however i am getting the following error.. No qualifying bean of type [com.project.service.UserRepository] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {} could you please help me out in this, it seems to be an autowiring or dependency injection issue also is there any implementation of UserRepository that we have to write. thanks, shashwat Update: I removed the stack trace since the error message gave enough information about the problem – Petri Hi Shashwat, The example application uses Spring Data JPA, and the UserRepositoryis the repository that is used to manage user information. It seems that you have changed the package hierarchy of the project because old package of the UserRepositoryinterface was: net.petrikainulainen.spring.social.signinmvc.user.repository. This is why Spring Data JPA cannot create the UserRepositorybean. You can solve this problem by configuring Spring Data JPA to use the correct package. You can do this by changing the value of the jpa:repositorieselement’s base-packageattribute. You can find the persistence layer configuration from the exampleApplicationContext-persistence.xml file. Also, if you are not familiar with Spring Data JPA, you should take a look at my Spring Data JPA tutorial. If have any further questions, don’t hesitate to ask them. Hey Petri, thanks for a quick revert. So I get the issue, you are using jpa repository. But the main problem i m facing is that i am using hibernate in my whole application and it would be difficult to migrate the whole app to spring data jpa to just implement a login functionality. Could you please help me with the changes that i will have to do in case i have to implement it in hibernate. I m attaching below my dao setup below which uses a jndi resource. Any help would be appreciated. com.myproject.pojo.* org.hibernate.dialect.MySQL5InnoDBDialect false false true true true false false org.joda.time.contrib.hibernate.PersistentDateTime java:comp/env/jdbc/myproject regards, shashwat Hi, It is actually quite easy to get rid of Spring Data JPA. You need to create a Hibernate repository class that is used to handle user information (or you can use an existing repository) and ensure that the repository implements these methods: User findByEmail(String email)method returns the user whose email address is given as a method parameter. If no use is found, it returns null. This method is invoked when the user logs in to the system or creates a new user account. User save(User saved)method saves the information of the Userobject given as a method parameter and returns the saved object. This method is invoked when a user creates a new user account. If you have any other questions, feel free to ask them. thanks petri, got it working, will be moving to part 2 of this tutorial, will let you know if i get stuck anywhere. Hi Shashwat, You are welcome! It is nice to hear that you were able to solve your problem. Hi Petri, I have integrated spring security with spring social using your example code , configuration is same as you provided in your github code.However after calling /auth/twitter it goes to twitter sigin page and after that it authenticates and redirects back to my login page.What can be the reason for this? I am using the latest version of spring, spring-security and spring-social. Hi Viral, I have a couple of ideas: First, check that the callback url of your Twitter application is correct. Second, Does this happen when the user clicks the “Sign in With Twitter” link for the first time? If so, one possible reason for this is that Spring Social cannot find the persisted connection from the UserConnectiontable. When this happens, it will redirect user to the sign up page (see this discussion). Also, if the user account is not found the database, the user is redirected to the sign up page. Do you use your login page as the sign up page? Hi Petri , Thanks for Your help. It worked ,I was Sign in With Twitter for the first time .Also I had changed the spring social version to 1.1.2 Release ,ProviderSignInUtils class is getting error in 1.1.2 .It is working fine in 1.1.0 Release . It seems that the Spring Social team did some major changes to the ProviderSignInUtilsclass between Spring Social 1.1.0 and 1.1.2. Anyway, another reader had the same problem, and I posted the solution here. Are you planning to update the tutorial to version 2.0.x of spring.social.facebook? There are big changes there, for example, there is no default constructor for ProviderSignInUtils. I am currently struggling with updating my code to version 2.0.x. I would love to do it right now, but I am afraid that I won’t be able to do it until next year because I need to update two older tutorials before I can move on to this one. I took a quick look at the Javadoc of the ProviderSignInUtilsclass and noticed that it has a constructor which takes ConnectionFactoryLocatorand UsersConnectionRepositoryobjects as constructor arguments. You could simply inject these beans to the component that uses the ProviderSignInUtilsclass and pass them as constructor arguments when you create a new ProviderSignInUtilsobject. If you have any other questions, don’t hesitate to ask them! Hi Petri Kainulainen, How can I workout with mongoDB having this set of example, please help me out.. Hi Vinodh, Take a look at this comment. hi Petri, I am getting error in POM.xml when I import your project in Eclipse Mars IDE: Plugin execution not covered by lifecycle configuration: org.jacoco:jacoco-maven-plugin:0.6.3.201306030806:prepare- agent (execution: pre-unit-test, phase: initialise) regards, Laxman Hi, You can fix this by removing the JaCoCo Maven Plugin from the pom.xml file. for those who is also struggle about the Auth setScope issue mentioned at Pls update spring-social-config to later version (1.1.2.REALEASE), you should be fine. Hi, Thank you for sharing! This is a quite common use case, and that is why it is great to find out how I can solve this problem. Hi Petri, I have few questions regarding this tutorial. I am trying to have only ‘login with facebook’ as the way users can signIn to the application. Questions: 1. When would user get a session Id, if user tries to login with facebook? 2. Spring Social document says to have a schema for UserConnections table. So when does Spring adds a particular entry (user) to this table ? Hi Jay, The user gets a session id when he/she opens the first page of your application. Spring Security can create a new session for him/her after login if you have specified it to do so. You need to insert a new row into this table by using the ProviderSignInUtilsclass. If you want get more information about this, check out the next part of my Spring Social tutorial. Hi Petri, I have deployed the project into amazon aws but it is not working fine. sometimes not able to get repsonse from auth/facebook. getting exception “Page has too many redirection” but all the functionalities are working fine in local tomcat. Need suggestion.. I think that this is an AWS specific problem. Unfortunately I don’t know what the problem is :( Dear my friends, I need sample project that use spring MVC framework + restful and use OAuth 2 and use MYSQL database if some body can help me please email to me,thank you in advance. Hi, I am not sure if you can find one example that fulfills your requirements. However, if you are willing to do part of the work yourself, you can take a look at these tutorials: I hope that this helps. Dear Petri, when i want to run this project with wildfly i have this error can you help me. org.springframework.social.config.annotation.SocialConfiguration.connectionFactoryLocator()] threw exception; nested exception is java.lang.IllegalArgumentException: Circular placeholder reference ‘twitter.consumer.key’ in property definitions Update: I removed the irrelevant part of the stack trace – Petri Hi, How did you create the war file? The reason why I ask this is that Maven should replace the property placeholders found from the properties file with the values found from the profile specific configuration file. However, the error message of the thrown IllegalArgumentExceptionstates that this did not happen. Dear Petri, How should i create the war file i run it in Intellij Idea and make the artifact and configure it but this error happened when i select each of AuthServer and RestServer. ERROR [org.springframework.web.servlet.DispatcherServlet] (MSC service thread 1-3) Context initialization failed: java.lang.IllegalArgumentException at org.springframework.asm.ClassReader.(Unknown Source) [spring-core-3.2.1.RELEASE.jar:3.2.1.RELEASE] at org.springframework.asm.ClassReader.(Unknown Source) [spring-core-3.2.1.RELEASE.jar:3.2.1.RELEASE] at org.springframework.asm.ClassReader.(Unknown Source) [spring-core-3.2.1.RELEASE.jar:3.2.1.RELEASE] Dear Petri, Can you tell me how to run this project in Intellij Idea and with wildfly application server with screen shot,thank you in advance. I don’t know how you can run this application with IntelliJ Idea and Wildfly because I have never used Wildfly. However, you should be able to run it as long as you have: Also, keep in mind that you have to create the deployed .war file by running the package Maven goal by using the dev Maven profile. You can configure IntelliJ Idea to invoke a Maven goal before it starts the application server. Hi Petri, Can you tell me this project how to work with Twitter and Facebook. and i want work this with linkedin and google too. Hi Morteza, The example application should work with Facebook and Twitter as long as you have configured it properly (check my previous comment). These projects provide support for Google and LinkedIn: The example application doesn’t support them at the moment. This means that you have to make some minor configuration changes to the application. These changes are explained in the reference manuals of these projects. Also, you need to add the sign in links to the login page and add the new providers to the SocialMediaServiceenum. Dear Petri, I have this error when click the link of facebook (Invalid App ID: foo) and this error when i click the link of twitter org.springframework.web.client.HttpClientErrorException: 401 Authorization Required org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:91) org.springframework.web.client.RestTemplate.handleResponseError(RestTemplate.java:576) org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:532) org.springframework.web.client.RestTemplate.execute(RestTemplate.java:504) org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:449) org.springframework.social.oauth1.OAuth1Template.exchangeForToken(OAuth1Template.java:187) org.springframework.social.oauth1.OAuth1Template.fetchRequestToken(OAuth1Template.java:115) what i have to do and i configure the Twitter as you said. Did you create your own Facebook and Twitter applications? The error messages suggest that the configuration file (socialConfig.properties) which should contain the real API credentials provided by Facebook and Twitter contains my placeholders (foo and bar). Yes,I create my own Twitter applications and above error has happened. Did you configure the domains that are allowed to access the application? how to configure domainds? Hi, Unfortunately I don’t remember the details anymore, but if I remember correctly, you should be able to configure the domains in the settings of your Twitter and Facebook applications. Hi, I am P V REDDY.I am Senior Software Engineer.What I have to in our Project is I need authenticate Either Email Id or Phone Number with OTP or 3 Questions and Answers before Login into the Web application in Spring MVC . I need some reference and also I need some api’s Hi, I have never used OTP in my Spring applications, but I found a library that adds OTP support to Spring Security. I think that you should take a look at it. Dear Petri, When i use Google+ and configure with this application and click the button of Google+ redirect me to the page with this error: 400. That’s an error. Error: invalid_request Missing required parameter: scope Learn more Request Details That’s all we know. Hi, It seems that you need to specify the value of the scopeparameter. Take a look at this StackOverflow answer. It explains how you can the add the scopeparameter in your sign in form. Hi Petri, I am trying to use social login in my current Spring project. I have added the dependencies, but when I build the project, I am not able to see the /connect url mapped logs in the console. Unfortunately it’s impossible to say what is going on without running your code. Do have a sample project that reproduces this problem? Superb Post. Thanks a lot You are welcome. Hi I imported the code from Github. And i had maven clean install. But when i run the code using this path url in the browser… I get 404 Error like this …….. What is the reason ? HTTP ERROR 404 Problem accessing /spring-mvc-normal/login. Reason: NOT_FOUND The 404 error means that the page is not found. Are you running the web application by using Jetty Maven plugin or do you use some other servlet container? Yes , I am running the web application using Jetty server Hmm. I have to admit that I don’t know what is wrong :( By the way, are you using Java 8? If I remember correctly, the example application doesn’t work with Java 8 because it uses a Spring version that doesn’t support Java 8.
https://www.petrikainulainen.net/programming/spring-framework/adding-social-sign-in-to-a-spring-mvc-web-application-configuration/
CC-MAIN-2018-22
en
refinedweb
In our current project we are using Castle Windsor to configure WCF on a .NET 4 Application. As the data we are moving back and forth may grow we would like to have some sort of compression (Espessially from client to server). Searching for this on google brought two solutions: Use the built in compression support of .net 45 (By now we are not able to move to .net 45 because we officially need to support Windows XP) or use the example encoder provided by microsoft (). Since the custom encoder option doesn't look that bad, I would like to give it a try until we can move to .net 45. The only thing I really don't know is whether it is possible to configure this scenario using Castle Windsor. Any ideas on this? Thanks Markus after some research of the source code of the involved WCF classes, we were able to find a pretty good solution for our needs: We are currently using the WSHttpBinding for our bindings. This has a method CreateBindingElements which returns a collection of the used binding elements and can be overridden. So we just derived from the WsHttpBinding class and injected the GZipMessageEncoding into the collection before returning it. public class GZipWSHttpBinding : WSHttpBinding { public override BindingElementCollection CreateBindingElements() { BindingElementCollection bec = base.CreateBindingElements(); int index = bec.Count - 1; while (index >= 0 && !(bec[index] is MessageEncodingBindingElement)) index--; if (index >= 0) { GZipMessageEncodingBindingElement gZipCompression = new GZipMessageEncodingBindingElement(bec[index] as MessageEncodingBindingElement); bec[index] = gZipCompression; } return bec; } } The only thing left is using this class instead of the WsHttpBinding class in the Windsor installer, and gzip compression is in place. Best Regards Markus
http://m.dlxedu.com/m/askdetail/3/c5c64269cb525038e68a52874858aa90.html
CC-MAIN-2018-22
en
refinedweb
Stop the PCM playback channel and discard the contents of its queue (plugin-aware) #include <sys/asoundlib.h> int snd_pcm_plugin_playback_drain( snd_pcm_t *handle ); The snd_pcm_plugin_playback_drain() function stops the PCM playback channel associated with handle and causes it to discard all audio data in its buffers.. This function is the plugin-aware version of snd_pcm_playback_drain() . It functions exactly the same way. However, make sure that you don't mix and match plugin- and nonplugin-aware functions in your application, or you may get undefined behavior and misleading results.
http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.neutrino.audio/topic/libs/snd_pcm_plugin_playback_drain.html
CC-MAIN-2018-22
en
refinedweb
It's not the same without you Join the community to find out what other Atlassian users are discussing, debating and creating. I am trying to clone an issue to another project and also set the Flagged custom field to "Impediment" on the cloned issue: import com.atlassian.jira.component.ComponentAccessor def customFieldManager = ComponentAccessor.getCustomFieldManager() def cf = customFieldManager.getCustomFieldObjectByName("Flagged") def fieldConfig = cf.getRelevantConfig(issue) def optionsManager = ComponentAccessor.getOptionsManager() def value = optionsManager.getOptions(fieldConfig)?.find { it.toString() == 'Impediment' } issue.setCustomFieldValue(cf, value) When I execute this code, I see an exception in the JIRA logs: 2016-03-29 15:42:34,210 http-bio-8080-exec-13 ERROR matt_shelton 942x5658x1 iod9tz 0:0:0:0:0:0:0:1 /secure/QuickCreateIssue.jspa [scriptrunner.jira.workflow.ScriptWorkflowFunction] Script function failed on issue: CAPD-31, actionId: 1, file: null com.atlassian.jira.exception.Create:757) at com.atlassian.jira.issue.managers.DefaultIssueManager.createIssue(DefaultIssueManager.java:645) at com.atlassian.jira.issue.managers.DefaultIssueManager.createIssueObject(DefaultIssueManager.java:770) at com.atlassian.jira.issue.IssueManager$createIssueObject$0.call(Unknown Source) at com.onresolve.scriptrunner.canned.jira.utils.AbstractCloneIssue.doScript(AbstractCloneIssue.groovy:96):10) at com.onresolve.scriptrunner.canned.jira.workflow.postfunctions.CloneIssue.super$3$doScript(CloneIssue.groovy) at com.onresolve.scriptrunner.canned.jira.workflow.postfunctions.CloneIssue.doScript(CloneIssue.groovy:85):923) at com.atlassian.jira.issue.managers.DefaultIssueManager.createIssue(DefaultIssueManager.java:746) ... 8 more Caused by: java.lang.ClassCastException: com.atlassian.jira.issue.customfields.option.LazyLoadedOption cannot be cast to java.util.Collection at com.atlassian.jira.issue.customfields.impl.AbstractMultiCFType.createValue(AbstractMultiCFType.java:41) at com.atlassian.jira.issue.fields.CustomFieldImpl.createValue(CustomFieldImpl.java:854) at com.atlassian.jira.workflow.function.issue.IssueCreateFunction.execute(IssueCreateFunction.java:88) at com.opensymphony.workflow.AbstractWorkflow.executeFunction(AbstractWorkflow.java:1050) at com.opensymphony.workflow.AbstractWorkflow.transitionWorkflow(AbstractWorkflow.java:1446) at com.opensymphony.workflow.AbstractWorkflow.initialize(AbstractWorkflow.java:615) at com.atlassian.jira.workflow.OSWorkflowManager.createIssue(OSWorkflowManager.java:886) ... 9 more Based on the exception, I suspect the issue here is how I'm populating value, but I don't know how else to get it. The line in question was based on this answer, but I may be misunderstanding it. Thanks! The error is here: Caused by: java.lang.ClassCastException: com.atlassian.jira.issue.customfields.option.LazyLoadedOption cannot be cast to java.util.Collection This is because Flagged is (normally) a check box field and takes a Collection of Option objects. So changing the last line to: issue.setCustomFieldValue(cf, [value]) should work. Attempting to get value this way also does not work, despite this being, I think, the more direct route: def value = optionsManager.getOptions(fieldConfig).getOptionById.
https://community.atlassian.com/t5/Jira-questions/How-to-set-flag-on-linked-issue-during-scriptrunner-clone-issue/qaq-p/118570
CC-MAIN-2018-22
en
refinedweb
[Solved] Focus, child items, and the end of my wits Hey guys, I've got an example here where I'm trying to find the "best" way to do this. I've got a QML window with 2 elements: a text box and a listview. You guessed it, it's a search window. I can easily assign focus programatically to 'scope' by setting scope.focus = true. The problem occurs when I try and figure out a decent way for setting the right focus when clicking on an element in the listview. The only way I can think of this happening in the delegate is by doing some parent.parent.parent.parent gibberish but that is very fragile as it depends on the hierarchy and I'm trying to create a re-usable ListView for my app. Can anyone think of a pattern that would help me accomplish what I'm trying to do here? Thanks! @ import QtQuick 2.1 import QtQuick.Controls 1.0 import QtQuick.Layouts 1.0 import QtGraphicalEffects 1.0 ApplicationWindow { width: 600 height: 300 id: win ColumnLayout { anchors.fill: parent Text { Layout.fillWidth: true text: scope.focus ? "ListView has focus" : "ListView doesn't have focus" } TextInput { Layout.fillWidth: true focus: true } /* This is defined somewhere else, independent of the focus scope below */ Component { id: myDelegate Text { text: modelData ? modelData.name + ' (' + modelData.age + ')' : '' width: parent ? parent.width : 0 height: 32 MouseArea { anchors.fill: parent onClicked: { /* how best to set the rectangle enclosing the focus scope to have focus? */ /* This is the effect I want, but without referring to "scope" directly */ /* scope.focus = true */ /* Another way */ var scopehack = parent.parent.parent.parent.parent.parent.parent.parent.parent.parent console.log(scope,scopehack) scopehack.focus = true } } } } /* Actually a Flipable */ Rectangle { id: theflipable color: "#AABBCCDD" Layout.fillWidth: true Layout.fillHeight: true FocusScope { id: scope anchors.fill: parent focus: true Rectangle { anchors.fill: parent color: "transparent" radius: 3 RectangularGlow { anchors.fill: parent visible: scope.activeFocus glowRadius: 10 spread: 0.2 color: "red" cornerRadius: 13 } ScrollView { anchors.fill: parent /* The following two properties allow us to use keyboard navigation within the ListView. See */ flickableItem.interactive: true focus: true ListView { anchors.fill: parent boundsBehavior: Flickable.StopAtBounds clip: true focus: true model: ListModel{} delegate: Loader { width: parent.width sourceComponent: myDelegate property variant modelData: model } highlightFollowsCurrentItem: true highlight: Rectangle { width: parent ? parent.width : 0 color: "#3465A4" } highlightMoveDuration: 250 Component.onCompleted: { for(var ii = 0; ii < 250; ++ii) { model.append({'age':ii,'name':'Bopinder ' + ii}) } } } } } } } } } @ what is the problem with setting the focus via id.focus = true or id.forceActiveFocus()? I don't understand why you don't like that, still better than parent.parent.parent.parent.parent.focus = true I guess :D Edit: if you don't know the item id at that point why not setting it from the outside to a custom property or something? @ property Item focusTarget @ and than from outside just set it to some id you want to get focus when you click on the delegate @ yourComp.focusTarget = scope @ Because the problem I have is something like this: @ MyTopLevelWindow { ColumnLayout { SearchBar {} RectangularGlow{...visible: searchResults.activeFocus } SearchResults{ id: searchResults} } @ The actual ListView is way down, deep somewhere in the guts of SearchResults because I have a particular implementation I want to use. The focus is assigned to that but the rectangular glow effect is on the searchResults container. Does that make sense? It looks like ListView.forceActiveFocus does what I need. Gonna do some testing. see my edit, maybe that helps or I still don't know what the problem is the Item ids are usually accessible from everywhere unless its a separate qml file but then you can simple use an property alias or what i suggested maybe? The key points to getting this was: Understanding that FocusScope is a chain going up ancestry I needed to was to forceActiveFocus on the child I wanted to get the active focus so it would bubble up to the parent that eventually had the RectangularGlow attached. Simply setting child.focus = true was not sufficient. Thanks for the quick help Xander84!
https://forum.qt.io/topic/39494/solved-focus-child-items-and-the-end-of-my-wits/3
CC-MAIN-2018-22
en
refinedweb
I am not sure if this question has been answered before, but are you able to / how do you write a python script that can make use of the sikuli find / click / doubleClick functions in another python app? Just curious. Or perhaps via Jython or Java... Ross Question information - Language: - English Edit question - Status: - Expired - For: - Sikuli Edit question - Assignee: - No assignee Edit question - Last query: - 2010-05-12 - Last reply: - 2010-05-28 Thanks for the speedy reply. I had seen your question/response while searching but wasn't sure if it was the best approach. Just to confirm, these steps are so you can write a java program that includes Jython + Sikuli so that it can run Sikuli functions? I should have more time to work on this tomorrow. My supervisor is interested in using Sikuli to do JUnit testing. Hmm maybe a Sikuli IDE plugin to Eclipse would be in order some time here. That would be Snazzy. I am happy to see OpenCV being abstracted and made more usable for a specific task. Great work Sikuli devs! As far as I understood, you have to write something with python syntax, that is afterwards run by the jython environment (python on java), so you can use the Sikuli defined classes and methods in your program directly without using the Sikuli IDE. Alright, I wrote a simple Java based test using code scraped from the Sikuli source code (ScriptRunner. "Exception in thread "main" Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named python" ------- Steps taken so far. Verified Sikuli can be run via command line java -cp sikuli-script.jar org.python. >>> from sikuli.Sikuli import * ScreenMatchProxy loaded. VDictProxy loaded. ScreenMatchProxy loaded. sikuli-script.jar is part of my build path VM argument is in place -Dpython. ------- Jython project, sikuli-script.jar in build path, VM argument set import os from edu.mit. wd = os.getcwd() print wd setBundlePath(wd) s.click( type("notepad" + Key.ENTER) wait("127369045 click(" type("Hello World") I get the following error in console "C:\Users\ Traceback (most recent call last): File "C:\Users\ setBundlePa NameError: name 'setBundlePath' is not defined" if I change the import to just from edu.mit.csail.uid import * I do get Win32Util loaded. ScreenMatchProxy loaded. VDictProxy loaded. ScreenMatchProxy loaded. before the script dies on the setBundlePath() (if that is removed it dies on the click()) ------- import java.io.File; import java.io. import java.util.Arrays; import java.util.Iterator; import java.util. import org.python. public class MagicClass { private static LinkedList<String> _headers; public static void main(String[] args){ String dotSikuliPath = "C:\\Users\ String[] h = new String[]{ "from python. }; _headers = new LinkedList< try{ runPython( }catch( System. } } public static void runPython(String dotSikuliPath) throws IOException{ File pyFile = getPyFrom( } private static File getPyFrom(String dotSikuliPath) throws IOException{ String name = new File(dotSikuliP String prefix = name.substring(0, name.lastIndexO return new File(dotSikuliPath + "/"+ prefix + ".py"); } } I notice that you are using the old code.. The new sikuli package name should be sikuli.Sikuli in Jython. The old python. Sorry I should have mentioned that I did try from sikuli.Sikuli import * and got Traceback (most recent call last): File "C:\Users\ from sikuli.Sikuli import * ImportError: No module named sikuli This is what eclipse is using to execute the jython script "C:\Program Files (x86)\Java\ "C:/jython2. Files (x86)\Java\ "-Dpython. (x86)\Sikuli\ Files (x86)\Java\ org.python. C:\Users\ If you can run "java -cp sikuli-script.jar org.python. That was the key I needed for the python approach I will continue working on getting the Java to work. For everyone else who has this same question, if you are trying to run Sikuli inside of Eclipse using PyDev you need to set up a new Interpreter that points to sikuli-script.jar NOT your jython installation. After the upgrade to 1.0, I start having this exception (with the modification of import sikuli.Sikuli.* ): Exception in thread "main" Traceback (most recent call last): File "<string>", line 1, in <module> File "../../ initSikuli() File "../../ __main__.SCREEN = Screen() File "../../ r = JScreen( at edu.mit. at edu.mit. at edu.mit. at edu.mit. at edu.mit. at sun.reflect. at sun.reflect. at sun.reflect. at java.lang. at org.python. java.lang. @libo: What platform are you running? I guess you don't remove old 0.9 files before install 0.10. Chang, I was running on Mac 10.6 platform. I do not remove my old 0.9, just rename them so that I can roll back when needed, why is that a problem? Command line mode states which library u want to use This question was expired because it remained in the 'Open' state without activity for the last 15 days. I got the following information from the developers (I myself did not try this until now ;-) --------------- home=sikuli- script. jar as a parameter of launching java edu.mit. csail.uid. Sikuli import *" in your python code. (Note that the package name will be changed in 0.10.) (1) The steps to use Sikuli.py without Sikuli IDE: 1. add sikuli-script.jar into Java's CLASSPATH 2. add -Dpython. 3. add "from python. (2) Sikuli need some opencv libraries in tmplib/. Try to add tmplib/ to %PATH% before running Jython. (3) In theory, you can import the other xxx.sikuli/xxx.py and call its functions. A tricky step is that you need to call setBundlePath() before using the image files inside the other .sikuli bundle, otherwise Sikuli's functions doesn't know where to find the images. We will try to make this easier in the future versions. ----------------- Hope this helps a little down the road. Feedback about your experiences and findings are very much appreciated.
https://answers.launchpad.net/sikuli/+question/108782
CC-MAIN-2018-22
en
refinedweb
Apple’s open-source CommonCrypto isn’t shabby for anyone looking to implement encryption in their app, but it isn’t very “Swifty” to use. Luckily, Danny Keogan wrote a nice wrapper called IDZSwiftCommonCrypto, which renders Swift encryption a much friendlier beast. Introduction (0:00) In this post, I’m going to discuss a wrapper I wrote around CommonCrypto called IDZSwiftCommonCrypto, which makes it a lot Swiftier to use. Upfront, I’ll mention that you can find all the below code on GitHub, and if you have any questions about it, you can find me @iOSDevZone on Twitter. For a quick outline, this post will cover: - Intro to CommonCrypto - How to Access CommonCrypto in Swift - IDZSwiftCommonCrypto Design Goals - IDZSwiftCommonCrypto API - Some words of caution - Other Libraries/Projects - Summary What is CommonCrypto? (1:26) CommonCrypto is Apple’s Open Source cryptography library that is included in iOS & OS X. You can find it at opensource.apple.com. When you’re choosing a crypto library, it’s important to choose one that’s open source, because otherwise you don’t really know what’s going on in there. CommonCrypto is a C library, so that makes it a little bit unpalatable to use in Swift. It is part of System.framework and, unfortunately, it’s not directly accessible by default in Swift. But we can work around that. It provides a number of features: message digests, hash-based message authentication codes, cryptors (which are basically a catch-all for encryptors and decryptors), and then a couple of utility routines, such as key derivation routines and random number generation routines. API Design Goals (2:23) When designing my API, I had several goals in mind. First of all, I wanted it to be implementation independent. I didn’t want to be tied into CommonCrypto because there are certain things it can’t do, and certain things that other libraries like OpenSSL does do. The user-facing API of it doesn’t actually bleed through any of CommonCrypto; it’s pretty much independent. There is also an IDZSwiftOpenSSL, but that’s not quite ready for prime time yet. I also wanted to make the layer as thin as possible to make it Swifty, while avoiding the introduction of any security issues. It’s extremely easy to introduce security problems if you’re meddling with a crypto library. Finally, I wanted it be easy to use. There are a lot of inconsistencies in Apple’s CommonCrypto API, and even in C, it’s not that pleasant to use. IDZSwiftCommonCrypto API (3:32) The first problem I ran up against when trying to use CommonCrypto was that Apple does not provide a module map. I kept getting a “No such module” error. You might say “it’s in the system.framework module!”, but somehow they don’t export the right symbols. After a bit of digging about, I found that the solution to this was to create a fake module map for the CommonCrypto library. module CommonCrypto [system] { header "/Applications/Xcode-7.0-7A218.app/Contents/Developer/Platforms/MacOSX.platform/ Developer/SDKs/MacOSX10.11.sdk/usr/include/CommonCrypto/CommonCrypto.h" header "/Applications/Xcode-7.0-7A218.app/Contents/Developer/Platforms/MacOSX.platform/ Developer/SDKs/MacOSX10.11.sdk/usr/include/CommonCrypto/CommonRandom.h" export * } This turns out to be a pain when you’re sending out a library because the header entries require absolute library names. The example here is from my Mac and you can see that I put my Xcode in a non-standard place because, like every Swift developer, I have three or four different versions going at the same time. This is basically something that you want to write a script to do. Originally, I wrote a Bash script. There exists a newer version, as I’ve added support for tvOS and watchOS, that’s all written in Swift. Once you’ve got access to the various routines, we can look and find out what they are, what we can do with them, and how I wrapped them up in Swift. Message Digests (4:52) Basically, this is a cryptographic hash function. It takes a message in and produces digests on the other side. It has a few of properties that make it a cryptographic hash function: - Given measy to calculate h - Given hdifficult to find m - Given m1difficult to find m2such that hash(m1) == hash(m2) - Difficult to find pair ( m1, m2) such that hash(m1) == hash(m2) So, given a message, it should be quick and easy to calculate the hash.h. Also, given the hash.h, it should be difficult (or “infeasible”) to find “m”, the message. Furthermore, if you have a message, m1, it should be extremely difficult to find another message m2 that hashes to the same value. The final thing is that it should be very difficult to come up with two messages, m1 and m2, that hash to the same value. There are fancy names for all of those, like “preimage resistance” and “second preimage resistance,” but that’s the gist of what they are. CommonDigest.h (6:04) If we were trying to do this in C, what would this look like? Here is a quick excerpt from CommonDigest.h: Get more development news like this public func CC_MD2_Init(c: UnsafeMutablePointer<CC_MD2_CTX>) -> Int32 public func CC_MD2_Update(c: UnsafeMutablePointer<CC_MD2_CTX>, _ data: UnsafePointer<Void>, _ len: CC_LONG) -> Int32 public func CC_MD2_Final(md: UnsafeMutablePointer<UInt8>, _ c: UnsafeMutablePointer<CC_MD2_CTX>) -> Int32 There’s an initialization routine that initializes some context, there’s an update which takes in a buffer and updates the calculation, and then there’s a final routine, which gives you the actual digest. Notice that the context is the first argument of the first two routines and the last one in the third. In general, you’ll see that they’re completely inconsistent about where they put their arguments. For digest, though, they’re particularly bad because this is for an old digest function called MD2, which you probably shouldn’t use anymore. Of course, they add more for MD4 and for MD5. Eventually, you’ve got eight algorithms, each with three different routines, and it’s a mess. We can definitely do better when we’re making this Swifty. We don’t want to bring this in all as it is. Simplify the Types (7:04) One thing that I found that Swift is doing for me is that I think a lot more about types. If we step back from those functions a bit, we can see that there were basically three functions: typealias Initializer = (Context) -> (Int32) typealias Updater = (Context, Buffer, CC_LONG) -> (Int32) typealias Finalizer = (Digest, Context) -> (Int32) And if we simplify the types, we can see that this is what they look like. This just gives a much clearer view of what’s going on: class DigestEngineCC<C> : DigestEngine { typealias Context = UnsafeMutablePointer<C> typealias Buffer = UnsafePointer<Void> typealias Digest = UnsafeMutablePointer<UInt8> typealias Initializer = (Context) -> (Int32) typealias Updater = (Context, Buffer, CC_LONG) -> (Int32) typealias Finalizer = (Digest, Context) -> (Int32) /* . . . */ init(initializer : Initializer, updater : Updater, finalizer : Finalizer, length : Int32) func update(buffer: Buffer, _ byteCount: CC_LONG) func final() -> [UInt8] } We have to peel it back a little bit, of course, because there are those nasty pointers and such. Then, if we wrap that up in a class, this all becomes rather nice. This is only parameterized by the context structure of the individual algorithm. Otherwise, we can abstract away all that complexity. If we create one of these for each of the algorithms, we’re then going to be able to use it. protocol DigestEngine { func update(buffer: UnsafePointer<Void>, _ byteCount: CC_LONG) func final() -> [UInt8] } And if we create a protocol, as I have called DigestEngine, then the rest of the code can speak without having to worry about generics or the particular algorithm to our engine. This gave me the first view of my Digest API. The Digest API 1.2 and 2.0 (8:08) public class Digest { public enum Algorithm { case MD2, MD4, MD5, SHA1, SHA224, SHA256, SHA384, SHA512 } public init(algorithm: Algorithm) public func update(data : NSData) -> Digest? public func update(byteArray : [UInt8]) -> Digest? public func update(string : String) -> Digest? public func final() -> [UInt8] var engine: DigestEngine } I think you’ll agree, it looks a little bit nicer than the C version. Basically, the init routine populates the engine based on what algorithm you pass in. From then on, it’s able to talk to the digest engine protocol and it doesn’t care what algorithm you’re using. Above is the version as it was in Swift 1.2, but we can actually tidy things up a little bit more, because in Swift 2.0, they introduced protocol extensions. If we define a protocol Updateable, we can then factor out all that code that was talking about updating based on different types. public protocol Updateable { var status : Status { get } func update(buffer : UnsafePointer<Void>, _ byteCount : size_t) -> Self? } extension Updateable { public func update(data: NSData) -> Self? { update(data.bytes, size_t(data.length)) return self.status == Status.Success ? self : nil } public func update(byteArray : [UInt8]) -> Self? { update(byteArray, size_t(byteArray.count)) return self.status == Status.Success ? self : nil } public func update(string: String) -> Self? { update(string, size_t(string.lengthOfBytesUsingEncoding(NSUTF8StringEncoding))) return self.status == Status.Success ? self : nil } } You have all these different types, and it becomes annoying to use it if you’ve got some stuff that NSData, some stuff that’s UInt arrays, or some stuff that’s string. This just makes it a little bit nicer. With the protocols extensions, I don’t have to repeat this code in every single class, which is what I had to do in Swift 1.2. So win for Swift 2.0. public class Digest : Updateable { public enum Algorithm { case MD2, MD4, MD5, SHA1, SHA224, SHA256, SHA384, SHA512 } public init(algorithm: Algorithm) public func update(buffer: UnsafePointer<Void>, _ byteCount: size_t) -> Self? public func final() -> [UInt8] } Using Digest (9:34) What does it look like now if I want to calculate a digest? let m1 = "The quick brown fox jumps over the lazy dog." let sha1 = Digest(algorithm: .SHA1).update(m1)?.final() let d = Digest(algorithm: .SHA1) d.update(m1) let sha1 = d.final() It’s fairly straightforward. The top version shows if you were just trying to calculate on a short buffer what the digest is. You can use optional chaining to put it all on one line. For the lower view, if you were perhaps trying to generate a digest over something that was coming in off the network, although I only call update once, you would call update as each block comes in and update it and eventually check that your digest is okay at the end. So those are message digests; they’re great for storing passwords in a database, as long as you add Salt to it, and they’re great for testing whether something has changed. If you use Git, of course, you’re familiar with this. One thing that they’re not good for, though, is that they can’t detect if somebody else has intercepted the message and modified it. All you need in order to calculate the digest is the message and knowledge of the algorithm. If you want protection, then you need to use hash-based message authentication codes, or HMAC. HMAC (10:51) Basically, an HMAC takes a message digest and permutes it with a key, such that to calculate the HMAC, you must also be in possession of the key. To verify it, you also have to be in possession of the key. That in itself causes a little bit of a problem, because you have to make sure that the keys are securely transmitted. Luckily, there are key exchange protocols. (Key exchange protocols fall outside the scope of this present post, but if you’re interested in cryptography, these are amazing to look into. The most common one is called Diffie-Hellman key exchange, and it’s absolutely mind-blowing.) A key exchange protocol the additional ability to verify that the message is at it was intended by the person who had control of the key. If you have a good trust that the key belongs to the person, or that only you and the other person can share the key, then you can know that the message came from them and it is the message that they sent. Alright, let’s have a look at the header file: public func CCHmacInit(ctx: UnsafeMutablePointer<CCHmacContext>, _ algorithm: CCHmacAlgorithm, _ key: UnsafePointer<Void>, _ keyLength: Int) public func CCHmacUpdate(ctx: UnsafeMutablePointer<CCHmacContext>, _ data: UnsafePointer<Void>, _ dataLength: Int) public func CCHmacFinal(ctx: UnsafeMutablePointer<CCHmacContext>, _ macOut: UnsafeMutablePointer<Void>) The interesting thing here is that it’s completely inconsistent with the other API. In fact, it looks a little bit more like the message digest API. In particular, there’s this algorithm argument, so it looks like it should be much easier to Swiftify. However, there’s one problem: when Swift imports constants from C, it doesn’t do it as literals, and RawRepresentable enums can’t handle non-literals. Luckily, because Swift enums can have methods attached to them, we can just solve it. It’s not that pretty, but if you just ignore that, you can bury all the complexity down a level. public class HMAC { public enum Algorithm { case MD5, SHA1, SHA224, SHA256, SHA384, SHA512 } static let fromNative : [CCHmacAlgorithm: Algorithm] = [ CCHmacAlgorithm(kCCHmacAlgSHA1):.SHA1, CCHmacAlgorithm(kCCHmacAlgSHA1):.MD5, CCHmacAlgorithm(kCCHmacAlgSHA256):.SHA256, CCHmacAlgorithm(kCCHmacAlgSHA384):.SHA384, CCHmacAlgorithm(kCCHmacAlgSHA512):.SHA512, CCHmacAlgorithm(kCCHmacAlgSHA224):.SHA224 ] func nativeValue() -> CCHmacAlgorithm { switch self { case .SHA1: return CCHmacAlgorithm(kCCHmacAlgSHA1) case .MD5: return CCHmacAlgorithm(kCCHmacAlgMD5) case .SHA224: return CCHmacAlgorithm(kCCHmacAlgSHA224) case .SHA256: return CCHmacAlgorithm(kCCHmacAlgSHA256) case .SHA384: return CCHmacAlgorithm(kCCHmacAlgSHA384) case .SHA512: return CCHmacAlgorithm(kCCHmacAlgSHA512) } } static func fromNativeValue(nativeAlg : CCHmacAlgorithm) -> Algorithm? { return fromNative[nativeAlg] } } Once you do that, then the HMAC API looks like this below. As you can see, it’s beginning to look extremely similar to the digest one. public class HMAC : Updateable { public enum Algorithm { case MD5, SHA1, SHA224, SHA256, SHA384, SHA512 } public init(algorithm: Algorithm, key: NSData) public init(algorithm: Algorithm, key: [UInt8]) public init(algorithm: Algorithm, key: String) public func update(buffer: UnsafePointer<Void>, _ byteCount: size_t) -> Self? public func final() -> [UInt8] } The only bit that makes it a little bit different is that I’ve given some convenience initializers so that you don’t have to cast stuff if your key which happens to be coming from Objective-C as NSData, or if you want to use a string as your key. Using HMAC (13:41) Using it is fairly straightforward. In fact, using it is exactly the same as the message digest, except there’s the additional initialization parameter of the key: let key = arrayFromHexString("408d94384216f890ff7a0c3528e8bed1e0b01621") let m1 = "The quick brown fox jumps over the lazy dog." let hmac1 = HMAC(algorithm: .SHA1, key: key).update(m1)?.final() let hmac2 = HMAC(algorithm: .SHA1, key: key) hmac2.update(m1) let hmac2final = hmac2.final() Hopefully you agree that this is a bit better than the C interface. Cryptor (13:58) When you think of cryptography, you generally don’t think much of message digests and message authentication code. Most people think of sending secret messages with keys, and all that sort of thing. The class that looks after that part is the cryptor classes. Adopting CommonCrypto’s terminology, they’re just using cryptor to encompass a decryptor or an encryptor. When sending a message with cryptography, we start with a sender has a message and a key. Similar to the message authentication codes, both the sender and the receiver have to have a shared key that’s shared by some means (we assume it’s by a key exchange protocol, so any eavesdropper can see and it’s not a problem). For some modes of operation, there will be also an intialization vector, which is just a random block used to start off the transmission. The key, the plaintext, and the initialization vector go into the encryption. Out the other side comes the encrypted message, or ciphertext, and you transmit the initialization vector and the ciphertext. On the receiving side, the receiver passes in the key, the initialization vector, and the ciphertext and if everything works out well, they get the plaintext back. As I mentioned, an eavesdropper can see the initialization vector and it wouldn’t matter. The initialization vector serves two purposes. The first one is that, in certain modes, the ciphertext for the current block depends on the previous block. Obviously, that begs the question of what the first block uses, since there’s no previous block. It uses the initialization vector. However, because it’s random, it also serves another purpose: if all your messages were to start off with the same thing, then an attacker might notice the first block is always the same ciphertext. Whereas, if you choose your initialization vector randomly, they won’t be able to notice that. Cryptor in CommonCrypto (16:25) public func CCCryptorCreate(op: CCOperation, _ alg: CCAlgorithm, _ options: CCOptions, _ key: UnsafePointer<Void>, _ keyLength: Int, _ iv: UnsafePointer<Void>, _ cryptorRef: UnsafeMutablePointer<CCCryptorRef>) -> CCCryptorStatus public func CCCryptorUpdate(cryptorRef: CCCryptorRef, _ dataIn: UnsafePointer<Void>, _ dataInLength: Int, _ dataOut: UnsafeMutablePointer<Void>, _ dataOutAvailable: Int, _ dataOutMoved: UnsafeMutablePointer<Int>) -> CCCryptorStatus public func CCCryptorFinal(cryptorRef: CCCryptorRef, _ dataOut: UnsafeMutablePointer<Void>, _ dataOutAvailable: Int, _ dataOutMoved: UnsafeMutablePointer<Int>) -> CCCryptorStatus public func CCCryptorRelease(cryptorRef: CCCryptorRef) -> CCCryptorStatus It’s a little more complicated now, because when we create the cryptor, not only is there a key as there was with the HMAC, but we also have the initialization vector. We have to tell it whether we’re encrypting or decrypting. Also, unlike HMACs and digests, we’re not just trying to calculate a single answer here; as we feed in data, we’re going to be getting data out, so each of the updates now has both input buffers and output buffers. Then, when we get to the end, there could still be some stuff in the buffers, so we get some additional data out at the end. The other thing to note is that there’s also this CCCryptorRelease. To make sure that we clean up all the resources and everything, we have to have to Swift deinit that calls this to make sure that everything’s looked after correctly. Swift 2.0 OptionSetType (17:34) Now for the options parameter. In C, this is a set of flags that can be Bitwise OR’d together. In Swift 1.2, this was a nightmare to deal with because you might think that you could use enums, but you can’t Bitwise OR together enums. (There was a way of doing it, but it took endless amounts of code to get it to work correctly.) I’m not going to go through the Swift 1.2 version here, it was so horrible, but in Swift 2.0, there’s OptionSetType which makes it really easy to do. public struct Options : OptionSetType { public typealias RawValue = Int public let rawValue: RawValue public init(rawValue: RawValue) { self.rawValue = rawValue } public init(_ rawValue: RawValue) { self.init(rawValue: rawValue) } public static let None = Options(rawValue: 0) public static let PKCS7Padding = Options(rawValue:kCCOptionPKCS7Padding) public static let ECBMode = Options(rawValue:kCCOptionECBMode) } At the bottom, those are the three options, and that’s all the code you have to do to bring them in. When you call it from Swift, though, you don’t Bitwise OR them together. Instead, you actually create an array containing the flags that you need. Now, the padding flag. All the currently implemented cryptography algorithms in CommonCrypto are what are called “block-based algorithms”, which means that you pass in a block of a particular size and you get a block out. If you pass in less than that block, it can’t produce anything, and if your input’s not an integral number of blocks long, obviously the final block’s going to get truncated. To work around that, there’s a particular sort of padding that’s also designed so that it doesn’t leak too much information about how long your message is. Unless you’re coding to a particular protocol and you know that you’re going to have an integral number of block lengths, you probably want to specify this flag. The other flag is the complete opposite. There are two modes that CommonCrypto or cryptors can operate in. The first one (the default) is called Chain Block Cipher. That’s the one that uses the initialization vector I mentioned previously, where the ciphertext of the current block depends not only on the current plaintext, but also on the ciphertext of the previous block. Why is this important? Suppose I have a highly secret message, and I’m going to encrypt it. If I use Chain Block Cipher, it will end up essentially being garbled white noise, and will be unintelliglble. If I’d specified Electronic CodeBook Mode, it would come out as readable. In Electronic CodeBook Mode, if you put in the same plaintext, you’re going to get the same ciphertext out for a given key. So, although it’s taking blocks of pixels and encrypting them, lots of the blocks are the same, and there’s enough information leakage that you can make out what the original was. The StreamCryptor API (20:56) Okay, well if we put it all together then, what does the StreamCryptor look like? public class StreamCryptor { public enum Operation { case Encrypt, Decrypt } public enum Algorithm { case AES, DES, TripleDES, CAST, RC2, Blowfish } public struct Options : OptionSetType { static public let None, PKCS7Padding, ECBMode: Options } public convenience init(operation: Operation, algorithm: Algorithm, options: Options, key: [UInt8], iv: [UInt8]) public convenience init(operation: Operation, algorithm: Algorithm, options: Options, key: String, iv: String) public func update(dataIn: NSData, inout byteArrayOut: [UInt8]) -> (Int, Status) public func update(byteArrayIn: [UInt8], inout byteArrayOut: [UInt8]) -> (Int, Status) public func update(stringIn: String, inout byteArrayOut: [UInt8]) -> (Int, Status) public func final(inout byteArrayOut: [UInt8]) -> (Int, IDZSwiftCommonCrypto.Status) } This class is quite complicated, and it has to be that way. First of all, you have to pass in all the parameters for the initialization, and then also, as you pass in each block, you’ve got to get a block out as well. If it happens that you’re not getting data nicely aligned, it’ll correctly handle it as long as you specified the padding. It’ll apply the padding in the final block and make sure that everything gets through correctly. You’ll notice as well in this one that I’m using the Swift idea of being able to return a tuple. There are two things that you might care about here: there’s a Status because unlike the message digest and HMACs, according to the documentation, there are cases under which this can fail. I’ve never actually seen it do so, but I suppose we should put it in there. The other thing is that it may not produce a full block, depending on what’s going on. The other return value is a count of how much ciphertext was produced in that iteration. So, if you’re decrypting data coming over the network or if it’s a big file, this is the interface you want to use. If you’re only doing a small block of data, I also provided a simpler version: public class Cryptor : StreamCryptor, Updateable { internal var accumulator: [UInt8] public func final() -> [UInt8]? public func update(buffer: UnsafePointer<Void>, _ byteCount: Int) -> Self? } You can see that cryptor, if it’s short, is actually pretty straightforward. Using Cryptor (22:56) var aesKey1Bytes = arrayFromHexString("2b7e151628aed2a6abf7158809cf4f3c") var aesIV = arrayFromHexString("deadfacedeadfacedeadfacedeadface") let encryptor = Cryptor(operation: .Encrypt, algorithm: .AES, options: [.PKCS7Padding], key: aesKey1Bytes, iv: aesIV) let ciphertext = encryptor.update(m1)?.final() let decryptor = Cryptor(operation:.Decrypt, algorithm: .AES, options: [.PKCS7Padding], key: aesKey1Bytes, iv: aesIV) let plaintext = decryptor.update(ciphertext!)?.final() This is how you would encrypt to a ciphertext and then decrypt the ciphertext back to plaintext. The plaintext, in this case, will be an array of bytes, so you’ll have to do a little bit more munging to get it into a string to prove that it is equal to what you originally passed in. The PBKDF API (23:16) It might be tempting when you’re trying to come up with a key for something, particularly if you’re just encrypting a file to disk, to use a pass phrase and to use the ASCII encoding of the pass phrase or something like that. In the past, that may well have been done, but with modern computers, you probably don’t want to do that because they can guess too many thousands or millions of keys a second. A better way to do it is to use a password-based key derivation function, or PBKDF. public class PBKDF { public enum PseudoRandomAlgorithm { case SHA1, SHA224, SHA256, SHA384, SHA512 } public class func calibrate(passwordLength: Int, saltLength: Int, algorithm: PseudoRandomAlgorithm, derivedKeyLength: Int, msec : UInt32) -> UInt public class func deriveKey(password : String, salt : String, prf: PseudoRandomAlgorithm, rounds: uint, derivedKeyLength: UInt) -> [UInt8] } This is a function specifically designed to be expensive to compute. Normally, they have a balance between computationally expensive and memory expensive so that it slows down ability to see what the key that came out from that was. I’m not going to bother looking at the header file here, but these are basically Swifty translations of the two functions related to this functionality. The first one is calibrate: given a specific password length, a specific Salt length, a desired derived key length, and the amount of time that you’re willing to wait on a device, it tells you how many rounds of the function you can run. Basically, the higher the number of rounds, the longer it’s going to take an attacker to try and guess any one password. Now, you’re probably not going to do this as part of an app, but it’s useful to have it so that when you bring out the next version of your app, you can up the number of rounds if your amount of computational devices has gone up enough, without inconveniencing your user. Once you’ve decided on the number of rounds, you use deriveKey to guess key material from the password. It takes the password itself, the Salt, which is a bit like the initialization vector that I mentioned for the cryptors. Essenntially, Salt is a random buffer that’s used to permute your password. It’s normally stored with the password or transmitted with it, but the main thing is that it means that if two people have the same password, it won’t appear the same. The Random API (26:34) public class Random { public class func generateBytes(byteCount : Int) throws -> [UInt8] } For both Salts and initialization vectors, it’s important that they’re random. It’s also important that these random numbers are of sufficiently high quality, because low quality randomization could mean that an attacker could have an easier time gaining information about your system. Some Cautionary Words (27:15) IDZSwiftCommonCrypto should be considered beta I’ve tested it reasonably well, but I think there are still some problems in a few places that I’m looking for good tests for. If you’re using it, please treat it as a beta. Don’t try to invent your own protocol; there be dragons! Instead, use one of the existing ones. Use a peer-reviewed one. It’s incredibly easy to shoot yourself in the foot with this. Even some of the existing, well-tested, well-reviewed algorithms like SSH have their problems. Many of the implementations of OpenSSH use a limited version of Diffie-Hellman key exchange, and recent papers have suggested that this is exploitable. If professionals in the area can occasionally screw things up then the chances are, if you’re rolling your own, you’re going to make a mess of it. If the stuff you’re dealing with is very sensitive for your users, that could be a disaster. Don’t write your own crypto libraries I should say that didn’t write any crypto algorithms here, I just wrapped it. I’ve said this to few people and they said, “Well, why? I mean, what could possibly go wrong?” Anybody who knows this field will giggle to themselves. Check local jurisdiction for laws governing use/sale/export of products containing/using cryptography There are generally laws governing particularly strong cryptography. If you’re submitting to the U.S. App Store and you’ve crossed a certain threshold, you’ll have to provide the information. You’ll need to deal with the Bureau of Industry and Security to get your export license. Other Projects (32:56) - RNCryptor (Rob Napier) - Supports C++, C#, Java, PHP, Python, Javascript, and Ruby - CryptoSwift (Marcin Krzyżanowski) - Pure Swift Implementation - Crypto (Sam Soffes) - Extensions to NSData and NSString Summary (34:01) - Use module maps to generate Swift prototypes for forgotten functions - Use generic classes to unify related functions and structures - Use protocol extensions to factor out repetitive code - Use protocols to bridge non-generic to generic classes - Use customized enums to work around RawRepresentablelimitations - Use OptionSetTypeto wrap bitwise flags Q&A (35:36) _Q: One thing I’m curious about is when submitting things to the App Store, there’s a “Does your app contain cryptography?” question, but I’ve never been in a position where I’ve needed to check Yes on that. What is that process like, say, here in the United States? How painful is it if you want to get something approved to the App Store that has cryptography in it? Danny: I can only speak to part of that. One of my apps in the App Store does have cryptography in it, but doesn’t have a level of cryptography that requires me to have a registration number. When you click the Yes button, it asks you a number of other questions. In particular, and this is one of the deciding factors, is something like, “Is it only used to keep user information private?” I was using it for protecting user passwords and there was a small amount of cryptography, and I think Diffie-Hellman key exchange as well. Mine was below the threshold, where I simply said “Yes, it’s in there, but it’s below this threshold, and this is what it does.” That was sufficient for the App Store. If that’s not enough, the first stage is you register with the Bureau of Industry and Security, then you have to fill out a SNAP-R application with a whole bunch of information about how strong the cryptography is and what it’s used for. Depending on that, you get either an ERN or a CCATS. I think you have to be doing really high-level stuff to get a CCATS. I haven’t really had to go through the full process. My initial registration took very little time and was very painless. Q: Did you consider writing this as a bunch of extensions for NSData and NSString? It looks like you’ve written it as a standalone. Also, did you consider using sequence type or generator, because it looks like random gives you a sequence of bytes? It might be possible to implement it using a sequence type. Danny: Basically, I was trying to provide building blocks. It is really easy to implement the extensions using Sam Soffes’s library. It’s like three lines of code for each one. The one big reason you may not want to do it, though, is that some of this stuff takes a while, especially if your data is any way big. It can take quite a few seconds to encrypt, so you wouldn’t want that happening on your main thread. You’d have all sorts of synchronization and things, but it would be fairly easy to do. As for the he sequence type thing, you could, but the underlying mechanism expects you to be requesting a specific length, and you normally know a priori what that length is because you’re asking for it for an initialization vector for Salt. You’re not going to really ask for an endless stream of random numbers. However, there would be no reason why you couldn’t build up such a thing quite easily from this. About the content This content has been published here with the express permission of the author.
https://academy.realm.io/posts/danny-keogan-swift-cryptography/
CC-MAIN-2018-22
en
refinedweb
How to Manipulate Strings with Character in C++ An array is a sequence of variables in C++ that shares the same name and that is referenced using an index. The following Concatenate program inputs two strings from the keyboard and concatenates them into a single string: // Concatenate - concatenate two strings // with a " - " in the middle #include <cstdio> #include <cstdlib> #include <iostream> using namespace std; // prototype declarations void concatString(char szTarget[], const char szSource[]); int main(int nNumberofArgs, char* pszArgs[]) { // read first string... char szString1[256]; cout << "Enter string #1:"; cin.getline(szString1, 128); // ...now the second string... char szString2[128]; cout << "Enter string #2:"; cin.getline(szString2, 128); // ...concatenate a " - " onto the first... concatString(szString1, " - "); // ...now add the second string... concatString(szString1, szString2); // ...and display the result cout << "n" << szString1 << endl; // wait until user is ready before terminating program // to allow the user to see the program results cout << "Press Enter to continue..." << endl; cin.ignore(10, 'n'); cin.get(); return 0; } // concatString - concatenate the szSource string // onto the end of the szTarget string void concatString(char szTarget[], const char szSource[]) { // find the end of the first string int targetIndex = 0; while(szTarget[targetIndex]) { targetIndex++; } // tack the second onto the end of the first int sourceIndex = 0; while(szSource[sourceIndex]) { szTarget[targetIndex] = szSource[sourceIndex]; targetIndex++; sourceIndex++; } // tack on the terminating null szTarget[targetIndex] = ' '; } The Concatenate program reads two character strings and appends them together with a ” – “ in the middle. The program begins by reading a string from the keyboard. The program does not use the normal cin >> szString1 for two reasons. First, the cin >> operation stops reading when any type of whitespace is encountered. Characters up to the first whitespace are read, the whitespace character is tossed, and the remaining characters are left in the input hopper for the next cin >> statement. Thus, if you were to enter “the Dog”, szString2 would be filled with “the” and the word “Dog” would be left in the input buffer. The second reason is that the getline() allows the programmer to specify the size of the buffer. The call to getline(szString2, 128) will not read more than 128 bytes no matter how many are input. Instead, the call to getline() inputs an entire line up to but not including the newline at the end. After reading the first string into szString1[], the program appends ” – “ onto the end by calling concatString(). It concatenates the second string by calling concatString() with szString2[]. The concatString() function accepts a target string, szTarget, and a source string, szSource. The function begins by scanning szTarget for the terminating null character, which it stores in targetIndex. The function then enters a second loop in which it copies characters from the szSource into szTarget starting at the terminating null. The final statement in concatString() slaps a terminating null on the completed string. An example output from the program appears as follows: Enter string #1:this is a string Enter string #2:THIS IS A STRING this is a string - THIS IS A STRING Press Enter to continue...
https://www.dummies.com/programming/cpp/how-to-manipulate-strings-with-character-in-c/
CC-MAIN-2019-47
en
refinedweb
Revision history for Perl extension WWW::Google::SiteMap. 1.00 - Version 1.00 Released! - Fixed Zlib detection problem reported by Lance Cleveland. - Check to make sure that the sitemap file was opened correctly, rather than just crashing when we try to write to it, also reported by Lance Cleveland. - Added support for sitemap indexes (see WWW::Google::SiteMap::Index) - Added support for notifying Google when your sitemaps and sitemap indexes are updated (see WWW::Google::SiteMap::Ping). Suggested by Frank Naude. - Fixed a bug in the ISO-8601 time format checking. 0.03 - Changed from XML::Simple to XML::Twig for XML parsing/generating, this means you can now validate your sitemaps with an XML validator. - Fixed some documentation errors, spotted by Ing. Branislav Gerzo 0.02 Fri Jun 3 17:04:57 2005 - Renamed from Google::SiteMap to WWW::Google::SiteMap, shouldn't have created a new top-level namespace in the first place. 0.01 Fri Jun 3 13:35:47 2005 - original version; created by h2xs 1.23 with options -X Google::SiteMap
https://metacpan.org/changes/release/JASONK/WWW-Google-SiteMap-1.00
CC-MAIN-2019-47
en
refinedweb
Is(); } } 6 thoughts on “#BILTNA Wish 5: Isolate with Fade” That is really really cool. I think that this combined with the Interfence Detect tool of Revit would be very useful. Great work Harry. Good tool, BUT what happen when there is some curtain walls in the projects? I get an error, I’m wandering what will happen when pinning curtain panels (which were unpinned and customized) what is the error? Autodesk.Revit.Exceptions.InvalidOperationException: Element cannot be pinner or unpinned at autodesk.revit.db.element.set_Pinned(Boolean lock) at “namespace”.IsolateAndFadeAndLock.Isolate(Icollection ‘\ eIds, doc, uidoc)… <- method I call from the external command Sorry I cannot copy the message from the revit window, maybe I will adjust the code to have a proper TaskDialog The code I posted above does not pin or unpin anything. What is the relevant element? Ops, i forgot that I’ve added the pin property to your code. It is very useful when used with the selection filter: you can work on the isolated element, without warry about selecting oher elements. Sorry about that, but, maybe I found the solution adding the .CanBeLocked method prior to lock them. Give it a try!
https://boostyourbim.wordpress.com/2018/08/10/biltna-wish-5-isolate-with-fade/
CC-MAIN-2019-47
en
refinedweb
#include <ctype.h> int isxdigit(int c); int isxdigit_l(int c, locale_t locale);. The following sections are informative. The Base Definitions volume of POSIX.1-2008, Chapter 7, Locale, <ctype.h> Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see .
https://man.linuxreviews.org/man3p/isxdigit.3p.html
CC-MAIN-2019-47
en
refinedweb
In this series, we’re going to create a carpooling app with React Native. This will be a two-part series showing you how to create a full-stack React Native app which uses PHP as the back-end. The first part covers the following: - Setting up a Pusher app - Setting up a Google project - Setting up Laradock - Creating the server component - Exposing the server using ngrok While the second part will cover the following: - Creating the app - Running the app I’ve previously written a similar tutorial: Build a ride hailing app with React Native. The main difference between the two is that the first one shows how to build an app similar to the following: The main idea of the above apps is to provide a ride-hailing service to users. This is traditionally called “Ridesharing”. While this tutorial will show you how to build an app similar to these: The main idea of the above apps is for users to share their ride with people who are going the same route as them. This is traditionally called “Carpooling”. Though there’s a couple of differences between traditional carpooling apps and the app that we’re going to build: - The person sharing the ride doesn’t necessarily own the vehicle. This means that they can leave the vehicle at an earlier time than the person they picked up. The only rule is that the person who shared the ride needs to still be in the vehicle until they pick up the other person. - The person sharing the ride can only pick up one person. “One person” doesn’t necessarily equate to a physical person. There can be two or more, but the idea is that once the person has accepted another user to share a ride with, then they can no longer accept a new request from other users. Prerequisites This tutorial requires the following to be already set up on your machine: - React Native development environment - the series assumes that you already have set up all the software needed to create and run React Native apps. The series will show you how to create the app for both Android and iOS devices. We will use the command to create a React Native project. You can either have both Android Studio and Xcode set up on your machine or just one of them. Additionally, you can set up Genymotion so you can easily change your in-app location. Be sure to check out the setup instructions if you haven’t setup your machine already. react-native init - Docker and Docker Compose - the series assumes that you already have Docker and Docker Compose running on your machine. We will be using those to easily setup a server with all the software that we need. This also assures that we both have the same environment. - Git - used for cloning repos. Knowing the basics of creating a React Native app is required. This means you have to know how to run the app on an emulator or your device. You should also have a good grasp of basic React concepts such as props, refs, state, and the component lifecycle. Knowledge of Docker is required. You should know how to setup Docker on your operating system and setup a containers from scratch. Note that Docker has poor support for Windows 7 and 8. So if you’re using any of those systems, you might have difficulty in following this tutorial. Knowledge of the following will be helpful, but not required. I’ll try to cover as much detail as I can, so readers with zero knowledge of the following will still be able to follow along: Lastly, the tutorial assumes that you know your way around the operating system that you’re using. Knowing how to install new software, execute commands in the terminal is required. What we’ll be building Before we proceed, it’s important to know what exactly we’ll be building. The app will have two modes: - sharing - this allows the user to share their ride so that others can make a request to ride with them. For the rest of the series, I’ll be referring to the users who uses this feature as the “rider”. - hiking - this allows the user to make a request to ride with someone. I’ll be referring to these users as “hikers”. Below is the entire flow of the app. I’m using Genymotion emulator for the user that plays the rider, and iPhone for the hiker. This is so I can emulate a moving vehicle by using Genymotion’s GPS emulation tool: I can simply click around the map so that React Native’s Geolocation is triggered. This then allows me to use Pusher Channels to send a message to the hiker so that they’re informed of the rider’s current location. Now, let’s proceed with the app flow: 1. First, the rider enters their username and clicks Share a ride: 2. Rider types in where they want to go and selects it from the drop-down. Google Places Autocomplete makes this feature work: 3. After selecting a place, the app plots the most desirable route from the origin to the destination. The red marker being the origin, and the blue one being the destination: If the rider wants to pick another place, they can click on the Reset button. This will empty the text field for entering the place as well as remove the markers and the route from the map. 4. At this point, the rider clicks on the Share Ride button. This triggers a request to the server which then saves all the relevant data to an Elasticsearch index. This allows hikers to search for them later on. To keep the route information updated, we use React Native’s Geolocation feature to watch the rider’s current location. Every time their location changes, the Elasticsearch index is also updated: 5. Now let’s take a look at the hiker’s side of things. First, the hiker enters their username and clicks on Hitch a ride: 6. Next, the hiker searches for their destination. To keep things simple, let’s pick the same place where the rider is going: 7. Once again, the app plots the most desirable route from the hiker’s origin to their destination: 8. The hiker then clicks on the Search Ride button. At this point, the app makes a request to the server to look for riders matching the route added by the hiker. The rider should now receive the request. Pusher Channels makes this feature work: 9. Once the rider accepts the request, the hiker receives an alert that the rider accepted their request: 10. At this point, the hiker’s map will show rider’s current location. React Native’s Geolocation feature and Pusher Channels make this work: At the same time, the rider’s map will show their current location on the map. This is where you can use Genymotion’s GPS emulation tool to update the rider’s location: 11. Once the rider is near the hiker, both users will receive a notification informing them that they’re already near each other: 12. Once they are within 20 meters of each other, the app’s UI resets and it goes back to the login screen: We will use the following technologies to build the app: - Elasticsearch - for saving and searching for routes. - Pusher Channels - for establishing realtime communication between the rider and the hiker so they are kept updated where each other is. - PHP - for saving and searching documents from the Elasticsearch index. - Google Maps - for showing maps inside the app. - Google Places Autocomplete - for searching for places. - Google Directions API - for getting the directions between the origin and the destination of the riders and hikers. - Geometry Library Google Maps API V3 - for determining whether a specific coordinate lies within a set of coordinates. The full source code of the app is available on this Github repo. Setting up a Pusher app We’ll need to create a Pusher app to use Pusher Channels. Start by creating a Pusher account if you haven’t done so already. Once you have an account, go to your dashboard and click on Channels apps on the left side of the screen, then click on Create Channels apps. Enter the name of your app and select a desirable cluster, preferably one that’s nearest to your current location: Once the app is created, click on the App Settings tab and enable client events: This will allow us to trigger events right from the app itself. That way, the only thing that we need to do on the server is to authenticate requests. Don’t forget to click on Update once you’re done. The API keys which we’ll be using later are on the App keys tab. Setting up a Google project We will be using three of Google’s services to build this app: - Google Maps - Google Places - Google Directions This requires us to create a Google project at console.developers.google.com so we can use those services. On your dashboard, click on the Select a project dropdown then click on Create project. Enter the name of the project and click Create: Once the project is created, click on Library on the left side. Look for the following APIs and enable them: - Maps SDK for Android - Maps SDK for iOS - note that if you don’t enable this, and followed the installation instructions for iOS, Apple Maps will be used instead. - Places SDK for Android - Places SDK for iOS - Directions API - Geocoding API Once those are enabled, click on the Credentials menu on the left side, then click on the Create credentials button and select API key: That will generate an API key which allows you to use the services mentioned above. Take note of the key as we will be using it later. You can choose to restrict access so not just anybody can use your key once they get access to it. To avoid problems while developing the app, I recommend to just leave it for now. Setting up Laradock Laradock is a full PHP development environment for Docker. It allows us to easily set up the development server. Go through the following steps to setup Laradock: Configuring the environment - Clone the official repo ( ). This will create a git clone --branch v7.0.0 directory. Note that in the command above we’re cloning a specific release tag (v7.0.0). This is to make sure we’re both using the same version of Laradock. This helps you avoid issues that has to do with different configuration and software versions installed by Laradock. You can choose to clone the most recent version, but you’ll have to handle the compatibility issues on your own. laradock - Navigate inside the directory and create a copy of the sample laradock file. .env - Open the file on your text editor and replace the existing config with the following. This is the directory where your projects are saved. Go ahead and create a .env folder outside the laradock-projects folder. Then inside the laradock , create a new folder named laradock-projects . This is where we will add the server code: ridesharer APP_CODE_PATH_HOST=../laradock-projects This is the Elasticsearch port configuration. The one below is actually the default one so in most cases, you don’t really need to change anything. But if you have a different configuration, or if you want to use a different port because an existing application is already using these ports then this is a good place to change them: ELASTICSEARCH_HOST_HTTP_PORT=9200 ELASTICSEARCH_HOST_TRANSPORT_PORT=9300 This is the path where the Apache site configuration is located. We will be updating it at a later step. This is just to let you know that this is where it’s located: APACHE_SITES_PATH=./apache2/sites Adding a virtual host - Open the file and add a new virtual host (you can also replace the existing one if you’re not using it): laradock/apache2/sites/default.apache.conf <VirtualHost *:80> ServerName ridesharer.loc DocumentRoot /var/www/ridesharer Options Indexes FollowSymLinks <Directory "/var/www/ridesharer"> AllowOverride All <IfVersion < 2.4> Allow from all </IfVersion> <IfVersion >= 2.4> Require all granted </IfVersion> </Directory> </VirtualHost> The code above tells Apache to serve the files inside the directory whendirectory when /var/www/ridesharer is accessed on the browser. If the directory hasis accessed on the browser. If the directory has file in it, then it will get served by default (if the filename is not specified).file in it, then it will get served by default (if the filename is not specified). index.php The directory maps to the application directory you’ve specified earlier on thedirectory maps to the application directory you’ve specified earlier on the /var/www file:file: .env APP_CODE_PATH_HOST=../laradock-projects This means that is equivalent tois equivalent to /var/www/ridesharer .. /laradock-projects/ridesharer This is why we’ve created a folder inside thefolder inside the ridesharer directory earlier. Which means that any file you create inside thedirectory earlier. Which means that any file you create inside the laradock-projects folder will get served.folder will get served. ridesharer - Update the operating system’s file to point out hosts to ridesharer.loc : localhost 127.0.0.1 ridesharer.loc This tells the browser to not go looking anywhere else on the internet when is accessed. Instead, it will just look in the localhost.is accessed. Instead, it will just look in the localhost. Configuring Elasticsearch Open the file and search forfile and search for docker-compose.yml . This will show you the Elasticsearch configuration:. This will show you the Elasticsearch configuration: ElasticSearch Container ### ElasticSearch ######################################## elasticsearch: build: ./elasticsearch volumes: - elasticsearch:/usr/share/elasticsearch/data environment: - cluster.name=laradock-cluster - bootstrap.memory_lock=true - "ES_JAVA_OPTS=-Xms512m -Xmx512m" ulimits: memlock: soft: -1 hard: -1 ports: - "${ELASTICSEARCH_HOST_HTTP_PORT}:9200" - "${ELASTICSEARCH_HOST_TRANSPORT_PORT}:9300" depends_on: - php-fpm networks: - frontend - backend Under the environment, add the following: - xpack.security.enabled=false So it should look like this: environment: - cluster.name=laradock-cluster - bootstrap.memory_lock=true - xpack.security.enabled=false - "ES_JAVA_OPTS=-Xms512m -Xmx512m" This disables the need to authenticate when connecting to Elasticsearch. You can choose to enable it later so that not just anyone can have access to the Elasticsearch index. But to avoid problems with authentication while we’re developing, we’ll disable it for now. Bringing up the container Navigate inside the directory and bring up the container with Docker Compose:directory and bring up the container with Docker Compose: laradock docker-compose up -d apache2 php-fpm elasticsearch workspace This will install and setup Apache, PHP, and Elasticsearch on the container. There’s also a workspace so you can log in to the container. This allows you to install packages using Composer. This process should take a while depending on your internet connection. Troubleshooting Laradock issues If you’re having problems completing this step, it is most likely a port issue. That is, another process is already using the port that the containers wants to use. The quickest way to deal with a port issue is to change the default ports that Apache and Elasticsearch are using (or whatever port is already occupied by another process). Open the file inside thefile inside the .env folder and make the following changes:folder and make the following changes: laradock For Apache, replace the values for either oror APACHE_HOST_HTTPS_PORT (or both):(or both): APACHE_PHP_UPSTREAM_PORT # APACHE_HOST_HTTPS_PORT=443 APACHE_HOST_HTTPS_PORT=445 # APACHE_PHP_UPSTREAM_PORT=9000 APACHE_PHP_UPSTREAM_PORT=9001 For Elasticsearch: # ELASTICSEARCH_HOST_HTTP_PORT=9200 ELASTICSEARCH_HOST_HTTP_PORT=9211 # ELASTICSEARCH_HOST_TRANSPORT_PORT=9300 ELASTICSEARCH_HOST_TRANSPORT_PORT=9311 It’s a good practice to comment out the default config so you know which one’s you’re replacing. If the issue you’re having isn’t a port issue, then you can visit Laradock’s issues page and search for the issue you’re having. Creating the server component Installing the Dependencies Once all the software is installed in the container, Docker will automatically bring it up. This allows you to login to the container. You can do that by executing the following command while inside the directory:directory: laradock docker-compose exec --user=laradock workspace bash Once you’re inside, navigate inside the folder and create afolder and create a ridesharer file:file: composer.json { "require": { "alexpechkarev/geometry-library": "1.0", "elasticsearch/elasticsearch": "^6.0", "pusher/pusher-php-server": "^3.0", "vlucas/phpdotenv": "^2.4" } } Save the file and execute . This will install the following packages:. This will install the following packages: composer install - as mentioned earlier, this allows us to determine whether a specific coordinate lies within a set of coordinates. We will be using this library to determine if the directions returned by the Google Directions API covers the hiker’s pick-up location (origin). geometry-library - this library allows us to query the Elasticsearch index so we can add, search, update, or delete documents. elasticsearch - this is the official Pusher PHP library for communicating with Pusher’s server. We will be using it to authenticate requests coming from the app. pusher-php-server - for loading environment variables from vlucas/phpdotenv files. The .env file is where we put the Elasticsearch, Google, and Pusher config. .env Adding environment variables Inside the `laradock-projects/ridesharer` directory, create a `.env` file and add the following: PUSHER_APP_ID="YOUR PUSHER APP ID" PUSHER_APP_KEY="YOUR PUSHER APP KEY" PUSHER_APP_SECRET="YOUR PUSHER APP SECRET" PUSHER_APP_CLUSTER="YOUR PUSHER APP CLUSTER" GOOGLE_API_KEY="YOUR GOOGLE API KEY" ELASTICSEARCH_HOST="elasticsearch" This file is where you will put the keys and configuration options that we will be using for the server. Loader file Since the majority of the files we will be creating will use either the configuration from the file or connect to the Elasticsearch server, we will be using this file to do those task for us. That way, we simply need to include this file on each of the files instead of repeating the same code.file or connect to the Elasticsearch server, we will be using this file to do those task for us. That way, we simply need to include this file on each of the files instead of repeating the same code. .env Start by importing the class to the current scope. This allows us to use theclass to the current scope. This allows us to use the Elasticsearch\ClientBuilder class without having to refer to its namespaceclass without having to refer to its namespace ClientBuilder everytime we need to use it:everytime we need to use it: Elasticsearch // laradock-projects/ridesharer/loader.php use Elasticsearch\ClientBuilder; Include the vendor autoload file. This allows us to include all the packages that we installed earlier: require 'vendor/autoload.php'; Load the file:file: .env $dotenv = new Dotenv\Dotenv(__DIR__); $dotenv->load(); $elasticsearch_host = getenv('ELASTICSEARCH_HOST'); // get the elasticsearch config After that, connect to Elasticsearch: $hosts = [ [ 'host' => $elasticsearch_host ] ]; $client = ClientBuilder::create()->setHosts($hosts)->build(); Setting the type mapping Since we will be working with coordinates in this app, we need to tell Elasticsearch which of the fields we will be using are coordinates. That way, we can query them later using functions which are specifically created to query geo-point data. This is done through a process called Mapping Start by including the loader file: <?php // laradock-projects/ridesharer/set-map.php require 'loader.php'; Next, we can now proceed with specifying the actual map. Note that an error might occur (for example, the index has already been created, or one of the datatypes we specified isn’t recognized by Elasticsearch) so we’re wrapping everything in a . This allows us to “catch” the error and present it in a friendly manner:. This allows us to “catch” the error and present it in a friendly manner: try..catch try { $indexParams['index'] = 'places'; // the name of the index $myTypeMapping = [ '_source' => [ 'enabled' => true ], 'properties' => [ 'from_coords' => [ 'type' => 'geo_point' ], 'to_coords' => [ 'type' => 'geo_point' ], 'current_coords' => [ 'type' => 'geo_point' ], 'from_bounds.top_left.coords' => [ 'type' => 'geo_point' ], 'from_bounds.bottom_right.coords' => [ 'type' => 'geo_point' ], 'to_bounds.top_left.coords' => [ 'type' => 'geo_point' ], 'to_bounds.bottom_right.coords' => [ 'type' => 'geo_point' ] ] ]; // next: add code for adding the map } catch(\Exception $e) { echo 'err: ' . $e->getMessage(); } Breaking down the code above, we first specify the name of the index we want to use. This shouldn’t already exist within Elasticsearch. If you’re coming from an RDBMS background, an index is synonymous to a database: $indexParams['index'] = 'places'; For the actual type mapping, we only need to specify two properties: andand _source .. properties allows us to specify whether to enable returning of the source when getting documents. In Elasticsearch, theallows us to specify whether to enable returning of the source when getting documents. In Elasticsearch, the _source contains the fields (and their values) that we’ve indexed.contains the fields (and their values) that we’ve indexed. _source In a real-world app, you don’t really want this option to be enabled as it will affect the search performance. We’re only enabling it so that we don’t have to perform an additional step to fetch the source whenever where querying the index: '_source' => [ 'enabled' => true ], The other property that we need to specify is: properties 'from_coords' => [ 'type' => 'geo_point' ], If the field that you want to work with is located deep within other fields, then you use the dot notation to specify the parent: 'from_bounds.top_left.coords' => [ 'type' => 'geo_point' ] Lastly, add the code for creating the index with the map that we specified: $indexParams\['body'\]['mappings']['location'] = $myTypeMapping; // specify the map $response = $client->indices()->create($indexParams); // create the index print_r($response); // print the response Access on your browser and it should print out a success response.on your browser and it should print out a success response. Note that if you have another local development environment that’s currently running, it might be the one that takes priority instead of Laradock. So be sure to disable them if you can’t access the URL above. Creating users When someone uses the app, they need to login first. If the username they used doesn’t already exist then it’s created. Start by getting the data passed from the app, in PHP this is commonly done by extracting the field name from the global variable. But in this case, we’re using the PHP input stream to read the rawglobal variable. But in this case, we’re using the PHP input stream to read the raw data from the request body. This is because this is how Axios (the library that we’ll be using in the app later on) submits the data when sending requests to the server:data from the request body. This is because this is how Axios (the library that we’ll be using in the app later on) submits the data when sending requests to the server: <?php // laradock-projects/ridesharer/create-user.php require 'loader.php'; $data = json_decode(file_get_contents("php://input"), true); $username = $data['username']; // get the value from the username field Construct the parameters to be supplied to Elasticsearch. This includes the and theand the index . You can think of the. You can think of the type as the table or collection that you want to query.as the table or collection that you want to query. type $params = [ 'index' => 'places', // the index 'type' => 'users' // the table or collection ]; Specify the query. In this case, we’re telling Elasticsearch to look for an exact match for the username supplied: $params['body']['query']['match']['username'] = $username; // look for the username specified Execute the search query, if it doesn’t return any “hits” then we create a new user using the username that was supplied: try { $search_response = $client->search($params); // execute the search query if($search_response\['hits'\]['total'] == 0){ // if the username doesn't already exist // create the user $index_response = $client->index([ 'index' => 'places', 'type' => 'users', 'id' => $username, 'body' => [ 'username' => $username ] ]); } echo 'ok'; } catch(\Exception $e) { echo 'err: ' . $e->getMessage(); } Saving routes Whenever a rider shares a ride, the following information needs to be stored in the index: - username - origin - destination - origin coordinates - destination coordinates - the steps from the origin to destination Start by getting the data submitted from the app: <?php // laradock-projects/ridesharer/save-route.php require 'loader.php'; $google_api_key = getenv('GOOGLE_API_KEY'); $data = json_decode(file_get_contents("php://input"), true); $start_location = $data['start_location']; // an array containing the coordinates (latitude and longitude) of the rider's origin $end_location = $data['end_location']; // the coordinates of the rider's destination $username = $data['username']; // the rider's username $from = $data['from']; // the descriptive name of the rider's origin $to = $data['to']; // the descriptive name of the rider's destination $id = generateRandomString(); // unique ID used for identifying the document Make a request to the Google Directions API using the function. Thefunction. The file_get_contents() endpoint expects theendpoint expects the directions andand origin to be passed as a query parameter. These two contains the latitude and longitude value pairs (separated by a comma). We simply pass the values supplied from the app.to be passed as a query parameter. These two contains the latitude and longitude value pairs (separated by a comma). We simply pass the values supplied from the app. destination The function returns a JSON string so we use thefunction returns a JSON string so we use the file_get_contents() function to convert it to an array. Specifyingfunction to convert it to an array. Specifying json_decode() as the second argument tells PHP to convert it to an array instead of an object (when the second argument is omitted or set toas the second argument tells PHP to convert it to an array instead of an object (when the second argument is omitted or set to true ):): false $steps_data = []; $contents = file_get_contents("{$start_location['latitude']},{$start_location['longitude']}&destination={$end_location['latitude']},{$end_location['longitude']}&key={$google_api_key}"); $directions_data = json_decode($contents, true); Loop through the array of steps and construct an array ( ) that only contains the data that we want to store. In this case, it’s only the latitude and longitude values for each of the steps:) that only contains the data that we want to store. In this case, it’s only the latitude and longitude values for each of the steps: $steps_data if(!empty($directions_data['routes'])){ $steps = $directions_data['routes'][0]['legs'][0]['steps']; foreach($steps as $step){ $steps_data[] = [ 'lat' => $step['start_location']['lat'], 'lng' => $step['start_location']['lng'] ]; $steps_data[] = [ 'lat' => $step['end_location']['lat'], 'lng' => $step['end_location']['lng'] ]; } } Next, construct the data that we’ll save to the Elasticsearch index: if(!empty($steps_data)){ $params = [ 'index' => 'places', 'type' => 'location', 'id' => $id, 'body' => [ 'username' => $username, 'from' => $from, 'to' => $to, 'from_coords' => [ // geo-point values needs to have lat and lon 'lat' => $start_location['latitude'], 'lon' => $start_location['longitude'], ], 'current_coords' => [ 'lat' => $start_location['latitude'], 'lon' => $start_location['longitude'], ], 'to_coords' => [ 'lat' => $end_location['latitude'], 'lon' => $end_location['longitude'], ], 'steps' => $steps_data ] ]; } Make the request to index the data: try{ $response = $client->index($params); $response_data = json_encode([ 'id' => $id ]); echo $response_data; }catch(\Exception $e){ echo 'err: ' . $e->getMessage(); } Here’s the function for generating a unique ID:; } Searching routes When a hiker searches for a ride, a request is made to this file. This expects the origin and destination of the hiker to be passed in the request body. That way, we can make a request to the Google Directions API with those data: <?php // /laradock-projects/ridesharer/search-routes.php require 'loader.php'; $google_api_key = getenv('GOOGLE_API_KEY'); $params['index'] = 'places'; $params['type'] = 'location'; $data = json_decode(file_get_contents("php://input"), true); // the hiker's origin coordinates $hiker_origin_lat = $data['origin']['latitude']; $hiker_origin_lon = $data['origin']['longitude']; // the hiker's destination coordinates $hiker_dest_lat = $data['dest']['latitude']; $hiker_dest_lon = $data['dest']['longitude']; $hiker_directions_contents = file_get_contents("{$hiker_origin_lat},{$hiker_origin_lon}&destination={$hiker_dest_lat},{$hiker_dest_lon}&key={$google_api_key}"); $hiker_directions_data = json_decode($hiker_directions_contents, true); Store the hiker’s steps into an array. We will be using it later to determine whether the hiker and the rider have the same route. Note that we’re only storing the for the first step. This is because thefor the first step. This is because the start_location of all the succeeding steps overlaps with theof all the succeeding steps overlaps with the start_location of the step that follows:of the step that follows: end_location $hikers_steps = []; $steps = $hiker_directions_data['routes'][0]['legs'][0]['steps']; // extract the steps foreach($steps as $index => $s){ if($index == 0){ $hikers_steps[] = [ 'lat' => $s['start_location']['lat'], 'lng' => $s['start_location']['lng'] ]; } $hikers_steps[] = [ 'lat' => $s['end_location']['lat'], 'lng' => $s['end_location']['lng'] ]; } Next, we construct the query to be sent to Elasticsearch. Here we use a function calledfunction called decay to assign a score to each of the routes that are currently saved in the index. This score is then used to determine the order in which the results are returned, or whether they will be returned at all.to assign a score to each of the routes that are currently saved in the index. This score is then used to determine the order in which the results are returned, or whether they will be returned at all. gauss Specifying the means all the documents which don’t meet the supplied score won’t be returned in the response. In the code below, we’re querying for documents which are up to five kilometers away from the origin. But once the documents have ameans all the documents which don’t meet the supplied score won’t be returned in the response. In the code below, we’re querying for documents which are up to five kilometers away from the origin. But once the documents have a min_score which are not within 100 meters, the score assigned to them is halved:which are not within 100 meters, the score assigned to them is halved: current_coords $params['body'] = [ "min_score" => 0.5, // the minimum score for the function to return the record 'query' => [ 'function_score' => [ 'gauss' => [ 'current_coords' => [ "origin" => ["lat" => $hiker_origin_lat, "lon" => $hiker_origin_lon], // where to begin the search "offset" => "100m", // only select documents that are up to 100 meters away from the origin "scale" => "5km" // (offset + scale = 5,100 meters) any document which are not within the 100 meter offset but are still within 5,100 meters gets a score of 0.5 ] ] ] ] ]; If you want to know more about how the function works, check this article out: The Closer, The Better. Next, construct the coordinates for the hiker’s origin and destination. We will use this to compute the distance between the hiker’s origin and destination, as well as the hiker’s origin and the rider’s destination. We will need these values later on to determine whether the routes returned from the query matches the hiker’s route: $hikers_origin = ['lat' => $hiker_origin_lat, 'lng' => $hiker_origin_lon]; $hikers_dest = ['lat' => $hiker_dest_lat, 'lng' => $hiker_dest_lon]; Send the request and loop through all the results: try { $response = $client->search($params); if(!empty($response['hits']) && $response['hits']['total'] > 0){ foreach($response['hits']['hits'] as $hit){ $source = $hit['_source']; $riders_steps = $source['steps']; $current_coords = $source['current_coords']; $to_coords = $source['to_coords']; $riders_origin = [ 'lat' => $current_coords['lat'], 'lng' => $current_coords['lon'] ]; $riders_dest = [ 'lat' => $to_coords['lat'], 'lng' => $to_coords['lon'] ]; // check whether the rider's route matches the hiker's route if(isCoordsOnPath($hiker_origin_lat, $hiker_origin_lon, $riders_steps) && canDropoff($hikers_origin, $hikers_dest, $riders_origin, $riders_dest, $hikers_steps, $riders_steps)){ // the rider's username, origin and destination $rider_details = [ 'username' => $source['username'], 'from' => $source['from'], 'to' => $source['to'] ]; echo json_encode($rider_details); // respond with the first match break; // break out from the loop } } } } catch(\Exception $e) { echo 'err: ' . $e->getMessage(); } The function uses thefunction uses the isCoordsOnPath() function from thefunction from the isLocationOnPath() library. This accepts the following arguments:library. This accepts the following arguments: php-geometry - An array containing the latitude and longitude of the coordinate we want to check. - An array of arrays containing the latitude and longitude of each of the steps. - The tolerance value in degrees. This is useful if the place specified isn’t near a road. Here, I’ve used a high value to cover for most cases. As long as the hiker’s origin is somewhat near to a road, then it should be fine. function isCoordsOnPath($lat, $lon, $path) { $response = \GeometryLibrary\PolyUtil::isLocationOnPath(['lat' => $lat, 'lng' => $lon], $path, 350); return $response; } The function determines whether the rider and the hiker are both treading the same route. This accepts the following arguments:function determines whether the rider and the hiker are both treading the same route. This accepts the following arguments: canDropoff() - the coordinates of the hiker’s origin. $hikers_origin - the coordinates of the hiker’s destination. $hikers_dest - the coordinates of the rider’s origin. $riders_origin - the coordinates of the rider’s destination. $riders_destination - an array containing the hiker’s steps. $hikers_steps - an array containing the rider’s steps. $riders_steps The way this function works is that it first determines who leaves the vehicle last: the rider or the hiker. The app works with the assumption that the rider has to ride the vehicle first, and that they should pick up the hiker before they get to leave the vehicle. Otherwise, the hiker won’t be able to track where the vehicle is. This means that there are only two possible scenarios when it comes to the order of leaving the vehicle: - rider rides vehicle → rider picks up hiker → rider leaves the vehicle → hiker leaves the vehicle - rider rides vehicle → rider picks up hiker → hiker leaves the vehicle → rider leaves the vehicle The tracking starts once the rider picks up the hiker. So we measure the distance between the hiker’s origin and their destination, as well as the hiker’s origin and the rider’s destination. This then allows us to determine who will leave the vehicle last by comparing the distance between the two. Once we know the order in which the two users leaves the vehicle, we can now use the function to determine if the destination of the person who will leave the vehicle first is within the route of the person who will leave the vehicle last:function to determine if the destination of the person who will leave the vehicle first is within the route of the person who will leave the vehicle last: isCoordsOnPath() function canDropoff($hikers_origin, $hikers_dest, $riders_origin, $riders_dest, $hikers_steps, $riders_steps) { // get the distance from the hiker's origin to the hiker's destination $hiker_origin_to_hiker_dest = \GeometryLibrary\SphericalUtil::computeDistanceBetween($hikers_origin, $hikers_dest); // get the distance from the hiker's origin to the rider's destination $hiker_origin_to_rider_dest = \GeometryLibrary\SphericalUtil::computeDistanceBetween($hikers_origin, $riders_dest); $is_on_path = false; // whether the rider and hiker is on the same path or not if($hiker_origin_to_hiker_dest > $hiker_origin_to_rider_dest){ // hiker leaves the vehicle last // if the rider's destination is within the routes covered by the hiker $is_on_path = isCoordsOnPath($riders_dest['lat'], $riders_dest['lng'], $hikers_steps); }else if($hiker_origin_to_rider_dest > $hiker_origin_to_hiker_dest){ // rider leaves the vehicle last // if hiker's destination is within the routes covered by the rider $is_on_path = isCoordsOnPath($hikers_dest['lat'], $hikers_dest['lng'], $riders_steps); }else{ // if the rider and hiker are both going the same place // check whether either of the conditions above returns true $is_on_path = isCoordsOnPath($hikers_dest['lat'], $hikers_dest['lng'], $riders_steps) || isCoordsOnPath($riders_dest['lat'], $riders_dest['lng'], $hikers_steps); } return $is_on_path; } Update route Every time the location changes, the app makes a request to this file. The app sends the unique ID that the server responded with when the route was created. This allows us to fetch the existing document from the index. We then update the source with the new coordinates: <?php // laradock-projects/ridesharer/update-route.php require 'loader.php'; $data = json_decode(file_get_contents("php://input"), true); // get the request body and convert it to an array $params['index'] = 'places'; $params['type'] = 'location'; $params['id'] = $data['id']; // the id submitted from the app // the latitude and longitude values submitted from the app $lat = $data['lat']; $lon = $data['lon']; $result = $client->get($params); // get the document based on the id used as the parameter $result['_source']['current_coords'] = [ // update the current coordinates with the latitude and longitude values submitted from the app 'lat' => $lat, 'lon' => $lon ]; $params['body']['doc'] = $result['_source']; // replace the source with the updated data $result = $client->update($params); // update the document echo json_encode($result); Delete route Once the rider accepts a request from the hiker, the app makes a request to this file so that the existing route will be deleted. We need to do this because we don’t want other hikers to make another request to the same rider (remember the 1:1 ratio of the rider to hiker?). Also, note that we’re using the rider’s to query the index. We haven’t really put any security measures to only allow a username to be used on a single app instance, but this tells us that a user can only save one route at a time:to query the index. We haven’t really put any security measures to only allow a username to be used on a single app instance, but this tells us that a user can only save one route at a time: username <?php // laradock-projects/ridesharer/delete-route.php require 'loader.php'; $data = json_decode(file_get_contents("php://input"), true); $params['index'] = 'places'; $params['type'] = 'location'; $params['body']['query']['match']['username'] = $data['username']; // find the rider's username $result = $client->search($params); // search the index $id = $result['hits']['hits'][0]['_id']; // only get the first result unset($params['body']); $params['id'] = $id; $result = $client->delete($params); echo json_encode($result); Delete index Deleting the index ( ) isn’t really required for the app to work. Though it will be useful when testing the app. This allows you to reset the Elasticsearch index so you can control the results that are returned when you search for riders:) isn’t really required for the app to work. Though it will be useful when testing the app. This allows you to reset the Elasticsearch index so you can control the results that are returned when you search for riders: delete-index.php <?php // laradock-projects/ridesharer/delete-index.php require 'loader.php'; try { $params = ['index' => 'places']; $response = $client->indices()->delete($params); print_r($response); } catch(\Exception $e) { echo 'err: ' . $e->getMessage(); } Authenticating requests Below is the code for authenticating requests so that Pusher will allow the user to use the Channels service. This requires the keys from the App keys tab earlier. Be sure to replace the placeholders with your keys: <?php // laradock-projects/ridesharer/pusher-auth.php require 'vendor/autoload.php'; // load the .env file located on the same directory as this file $dotenv = new Dotenv\Dotenv(__DIR__); $dotenv->load(); // get the individual config from the .env file. This should be the same as the one's you have on the .env file $app_id = getenv('PUSHER_APP_ID'); $app_key = getenv('PUSHER_APP_KEY'); $app_secret = getenv('PUSHER_APP_SECRET'); $app_cluster = getenv('PUSHER_APP_CLUSTER'); Set the content type to as this is what the Pusher client expects in the client side:as this is what the Pusher client expects in the client side: application/json header('Content-Type: application/json'); Connect to the Pusher app using the keys and options. The options include the cluster where the app is running from, and whether to encrypt the connection or not: $options = ['cluster' => $app_cluster, 'encrypted' => true]; $pusher = new Pusher\Pusher($app_key, $app_secret, $app_id, $options); Lastly, get the data sent by the Pusher client and use it as an argument for the method. This method returns the success token required by the Pusher client:method. This method returns the success token required by the Pusher client: socket_auth() $channel = $_POST['channel_name']; $socket_id = $_POST['socket_id']; echo $pusher->socket_auth($channel, $socket_id); As you can see, we didn’t really apply any form of authentication in the code above. In a real-world scenario, you want to have some form of authentication before returning the success token. This can be a unique ID that’s only assigned to the users of your app, it can also be a key which is then decrypted to come up with a token used for authenticating the request. This unique ID or key is sent from the client side so the server can verify it. You can test if the server is working by accessing any of the files you created earlier. Exposing the server with ngrok So that you can access the virtual host from the app, you need to setup ngrok. This allows you to expose your virtual host to the internet.from the app, you need to setup ngrok. This allows you to expose your virtual host to the internet. - Go to your dashboard and download ngrok. - Unzip the archive. - Authenticate ngrok using your auth token ( ) .\ngrok authtoken YOUR_AUTH_TOKEN - Expose the virtual host: ngrok http -host-header=ridesharer.loc 80 This will give you an output similar to the following: Copy the HTTPS URL as that’s what we’re going to use in the app later on. Conclusion That’s it! In this tutorial, we’ve set up the server to be used by the app. Specifically, you’ve learned the following: - How to setup and use Laradock. - How to use PHP to index, search and delete Elasticsearch documents. - How to use the Google Directions API to get the directions between two coordinates. - How to use ngrok to expose your virtual host. You can find the code used in this tutorial on this Github repo. In the second part of this series, we’ll be covering how to create the actual app. Originally published on the Pusher tutorial hub.
https://hackernoon.com/create-a-carpooling-app-with-react-native-part-1-setting-up-the-server-sw193djk
CC-MAIN-2019-47
en
refinedweb
Most professional software developers understand the academic definitions of coupling, cohesion, and encapsulation. However, many developers do not understand how to achieve the benefits of low coupling, high cohesion and strong encapsulation, as outlined in this article. Fortunately, others have created. Have you ever played Jenga? It’s that game of wooden blocks that are stacked on top of each other in rows of three. In Jenga you try to push or pull a block out of the stack and place it on top of the stack without knocking the stack over. The player that causes the stack to fall loses. Have you ever thought you were playing a game of Jenga when you were writing or debugging software? For example, you may need to change one field on one screen. You study the stack of code, you look for that little space of light peaking between the classes, and you make the change in the one place that you thought would be safe. Unfortunately, you didn’t realize that the code you were changing was referenced in several critical processes through some strange levels of indirection. The resulting crash of the software stack has left you the loser, cleaning up the mess with your boss breathing down your neck about the customer being upset about their lost data, and … How many times have you been there? I can’t even begin to count how often it’s happened to me. Software development does not have to be like a game of Jenga. There is good news, though: software development does not have to be like a game of Jenga. In fact, software development should not be like any game where there are winners and losers. What you want, instead, is a sustainable pace of software development where everyone wins. You want to ensure that you don’t overwork the developers, that you don’t pressure managers to say “just get it done,” and the customer gets the software they want in a timeframe they agree to. A Sustainable Pace When trying to set a sustainable pace in any endeavor, you first need to understand how far you need to go. You also need to know how fast you need to get there. For example, if you want to run a 50-meter dash, you should run as fast as you possibly can and then push yourself to try to run faster. However, if you want to run an 800-meter race you should set a somewhat slower pace. When you start talking about significant distances like a marathon, the pace becomes significantly slower. For such a distance, you want to set a pace that you can maintain throughout the race. In software development, you can think of the pace you need to run as the timeline of the project combined with the expected features and functionality. In software development, you can think of the pace you need to run as the timeline of the project combined with the expected features and functionality. For shorter timeframes, you need to run faster. However, you also have to consider how much functionality you can reasonably add to your system, given a short timeframe. If you only need a few features and you need to get it done quickly, you may need to sprint toward the goal. If your customer expects you to cram too many features into a short timeframe, you have a higher likelihood of burning out. Imagine trying to sprint for the duration of a marathon, or even a 1600-meter race. The probability of sustaining that pace for that distance is going to approach zero as you continue moving forward. You need to work with your management, your team, and your customers to set the expectations of how fast you can run for a given period. Assuming that you have a reasonable number of features for a given timeframe, you now have to set the pace for feature development in your daily activities. Object-oriented software development has various principles, patterns and practices that help you achieve the sustainable pace you need. The principles include coupling-the extent to which the parts of the system rely on each other; cohesion-the extent to which the parts of a system work together for a single purpose; and encapsulation-the extent to which the details of implementation are hidden. Built on top of these principles are various design and implementation patterns such as strategy, command, bridge, etc. When you combine these principles, patterns and practices they will help you to create systems that are well factored, easy to understand, and easy to change. The Object-Oriented Principles Ideally, you want to ensure that your systems have low coupling and high cohesion. These two principles help you to create the building blocks of a software system. You also want to ensure that you have self-contained building blocks-that is, they are well encapsulated. You don’t want the concerns of one building block leaking into other blocks. If you create building blocks that have the correct size and shape, you can put them together in meaningful ways to solve your problems. Often it seems that developers only discuss these principles in academic settings. Most universities with degrees that cover software development provide at least a cursory introduction to them. However, many software developers seem to miss their correct usage, causing more problems than they solve. Indeed, developers can very easily misapply these principles in the wrong place at the wrong time. To avoid this situation, you need to understand how coupling, cohesion, and encapsulation correctly play into developing software solutions. Low Coupling Coupling in software development is defined as the degree to which a module, class, or other construct, is tied directly to others. For example, the degree of coupling between two classes can be seen as how dependent one class is on the other. If a class is very tightly coupled (or, has high coupling) to one or more classes, then you must use all of the classes that are coupled when you want to use only one of them. You can reduce coupling by defining standard connections or interfaces between two pieces of a system. For example, a key and a lock have a defined interface between them. The key has a certain pattern of ridges and valleys, and the lock has a certain pattern of pins and springs. When you place the right key in the lock, it pushes the pins into a position that allows the mechanism to lock or unlock. If you place the wrong key into the lock, the pins will not move into the correct position and the mechanism won’t move. In software development, developers also work with standard connections and interface definitions. Object-oriented languages such as C++, C#, Java, and Visual Basic have some constructs that allow you to define those interfaces implicitly and explicitly. Whether it’s a class’ public methods and properties, an abstract base class, an explicit interface or other form of abstraction, these constructs allow you to define common interaction points between parts of your system. Without abstraction to decouple these common points of interaction, you are left with pieces of the system that must know about each other directly. Basically, this means you are stuck with a key that is welded directly to the pins of a lock, preventing you from removing the key, and compromising the security of that lock. Imagine that you are working with the structure in Figure 1. The software works fine, at the moment, and you can fix bugs when you need to. Then your boss hands you a new requirement and you realize that the module highlighted in red can handle most of the requirement. You would like to reuse that module but you don’t need any of the surrounding modules for this new feature. When you try to pull out the module in red, though, you quickly realize that you’ll have to bring several more modules with it due to the high coupling between this module and the ones surrounding it. Now imagine a class that has zero coupling. That is, the class depends on nothing else and nothing else depends on it. What benefit does that offer? For one, you can use that class anywhere you want without having to worry about dependencies coming along with it. However, you essentially have a useless class. With zero coupling in the class, you won’t be able to get any information into or out of it. Try to create a class in .NET that does not rely on anything-not an integer, not a string, not a Console or Application static reference; not even the implied object inheritance of every construct in .NET. Go ahead… try it… see how useful that is in your system. Coupling is not inherently evil. If you don’t have some amount of coupling, your software will not do anything for you. Coupling is not inherently evil. If you don’t have some amount of coupling, your software will not do anything for you. You need well-known points of interaction between the pieces of your system. Without them, you cannot create a system of parts that can work together, so you should not strive to eliminate coupling entirely. Rather, your goal in software development should be to attain the correct level of coupling while creating a system that is functional, understandable (readable by humans), and maintainable. High Cohesion Cohesion is the extent to which two or more parts of a system are related and how they work together to create something more valuable than the individual parts. Think of the old adage, “The whole is greater than the sum of the parts.” People seek high cohesion in sports teams, for example. They want to have a team of basketball players that know how and when to pass the ball, and how and when to score. Everyone expects the individual players to play together as a team to increase the chances of the team winning the game. Companies also seek cohesion in their project teams at work. They put developers and user interface designers together with business analysts and database administrators, along with other roles and responsibilities. The intent of creating teams of cross-functional skill sets is to use the strengths of each team member to counter the weaknesses of others. You likely also look for cohesion in the technology you are using and the software that you are writing. You probably want a database system that connects easily to your programming language of choice. You also want a user interface technology that makes it easy to wire up the business logic and data access. Cohesion is all around. You only need to recognize it for what it is. In software systems, developers talk about high-level concerns and low-level implementation details. This scale of concern can help you understand the many perspectives of cohesion within your software. How well do the lines of code in a method or function work together to create a sense of purpose? How well do the methods and properties of a class work together to define a class and its purpose? How well do the classes fit together to create modules? How well do the modules work together to create the larger architecture of the system? Understanding the perspective that you are dealing with at any given time will help you understand the extent to which the pieces are cohesive. Understanding the perspective that you are dealing with at any given time will help you understand the extent to which the pieces are cohesive. Examine the puzzle-picture of my son in Figure 2. If you separate all of the individual pieces, what do you have? You have a series of pieces that provide very limited value on their own. The intrinsic value of an individual piece is only that it can be combined with other pieces. I’m not interested in playing with a single piece, though. I want to have enough pieces to complete the puzzle in question. I want a highly cohesive system of pieces. A complete puzzle has much more value than the all of the individual pieces. Knowing that the puzzle pieces should create a picture of my son also provides a higher level of value to me-more value than any other random puzzle. A puzzle where all of the pieces are black, or a puzzle that shows a picture of a field, will not inspire the feelings of love in me the way a picture of my son will. This desire to complete the picture of my son provides motivation to not only put the puzzle together, but to put the right pieces in the right places. If cohesive systems-software, puzzles, or otherwise-have multiple parts coming together to create more value than then individual parts, it stands to reason that you cannot create a highly cohesive system out of large, all-in-one pieces. For example, a software system cannot be cohesive if it is made up of excessively large “god” classes. These types of classes tend to have too many concerns within them to create any cohesion outside of themselves. A single class that does all of the actions in the following bullet list has far too many concerns to be cohesive with anything else. - Load data from a file or database - Process the data into a structure - Display the data to the user - Obtain input from the user on what to do with the data - Perform the actions requested by the user - Persist the changes back to the file or database The processes listed here are very common. Many software systems use some form of this basic workflow. However, including all of these processes in a single class would make it difficult to create a cohesive system. You might see cohesion between high-level processes but you lose the ability to create cohesion at lower levels such as methods and classes. To create a more cohesive system from the higher and lower level perspectives in this example, you can break out the various needs into separate classes. You might separate the user interface needs, the data loading and saving needs, and the processing of the data. Having these smaller parts, each with their own value in the overall process, gives you a much more cohesive system from the various perspectives. Encapsulation Most developers define encapsulation as information hiding. However, this definition often leads to an incomplete understanding and implementation of encapsulation. It seems that many people see the word information and think data or properties. Based on this they try to make public properties that wrap around private fields, or make properties private instead of public. This perspective, unfortunately, misses out on a tremendous opportunity that encapsulation provides: to hide not just data, but process and logic as well. Strong encapsulation is evidenced by the ability for a developer to use a given class or module by its interface, alone. The developer does not, and should not, need to know the implementation specifics of the class or module. Figure 3 represents a well-encapsulated process where the calling objects, represented by blue boxes, do not have to know about the implementation detail of the red box. The red box should be free to change its implementation without fear of breaking the code that is calling it, so long as the public interface or the semantics of that interface do not change. Encapsulation helps reduce duplication of data and processes in your system. Whether you have a business process, a single point of common data, or a technical or infrastructure process, you should have one and only one implementation to represent the item in question.. In situations where you need to use a process in more than one location, proper encapsulation combined with low coupling will help to ensure that you have a part that can create cohesion in the system. For example,. The saw has encapsulated the process of causing the blade to turn through a simple, public interface-the handle with the trigger. Additionally, the saw itself contains other forms of encapsulation such as the connection points between the saw blade and the motor. This allows you to replace the saw blade without having to reconstruct the motor, the trigger mechanism, or any other part of the saw. Object-Oriented Principles in Our Day to Day Jobs Developers hear about these and other principles of object-oriented development fairly regularly during their professional career. These real-world discussions often center around how the principles would be “nice to achieve,” though, relegating them to the realms of the ivory tower academic. When it comes to the everyday work of software development, it seems that most developers either don’t understand how to get to these principles or don’t think it’s possible in a reasonable time frame. However, these principles are not just for the academics. Developers should apply them to their development efforts. The question should change from “Can you apply the principles, here?” to “How do you correctly apply the principles, here?” S.O.L.I.D. Stepping Stones When you start asking the question of how, it’s a little like looking at a marathon race and wondering how you end up at the finish line. Obviously, for a marathon you arrive at the finish line by running one step at a time. Software development lets you move one step at a time toward your object-oriented goals, as well. The steps are composed of additional principles and implementation goals, such as those outlined in the SOLID acronym: - S: Single Responsibility Principle (SRP) - O: Open-Closed Principle (OCP) - L: Liskov Substitution Principle (LSP) - I: Interface Segregation Principle (ISP) - D: Dependency Inversion Principle (DIP) Originally compiled by Robert C. Martin in the 1990s, these principles provide a clear pathway for moving from tightly coupled code with poor cohesion and little encapsulation to the desired results of loosely coupled code, operating very cohesively and encapsulating the real needs of the business appropriately. The Single Responsibility Principle says that classes, modules, etc., should have one and only one reason to change. This helps to drive cohesion into a system and can be used as a measure of coupling as well. The Open-Closed Principle indicates how a system can be extended by modifying the behavior of individual classes or modules, without having to modify the class or module itself. This helps you create well-encapsulated, highly cohesive systems. The Liskov Substitution Principle also helps with encapsulation and cohesion. This principle says that you should not violate the intent or semantics of the abstraction that you are inheriting from or implementing. The Interface Segregation Principle helps to make your system easy to understand and use. It says that you should not force a client to depend on an interface (API) that the client does not need. This helps you develop well-encapsulated, cohesive set of parts. The Dependency Inversion Principle helps you to understand how to correctly bind your system together. It tells you to have your implementation detail depend on the higher-level policy abstractions, and not the other way around. This helps you to move toward a system that is coupled correctly, and directly influences that system’s encapsulation and cohesion. Throughout the rest of this article, I will walk through a scenario of creating a software system. You will see how the five SOLID principles can help you to achieve strong encapsulation, high cohesion, and low coupling. You will see how you can start with a 50-meter “get it done now” dash, and end with a long term marathon of updates to the system’s functionality. Setting the Pace: A 50-Meter Dash To help understand how you can achieve the goal of an object-oriented system through the use of the SOLID principles, I’ll walk you through a simple scenario, a solution, and the resulting expectations. Scenario: Email an Error Log One day at the office, your manager walks into your cube and looks like his hair is on fire. He informs you that his manager, the CTO, just got off the phone with a very irate customer. Apparently, one of your company’s hosted applications is throwing exceptions and preventing the customer from being able to complete their work. The CTO has informed your manager that he needs immediate knowledge of the exceptions being thrown from this system, and personally wants to see an email in his inbox for every exception thrown, until the system is fixed. Your manager, worried about keeping his job, now wants you to create a quick-and-dirty application that allows a network operations person to send the contents of a log file to the CTO. Furthermore, this thing has to be out the door and in the hands of the network operations person before lunch-a couple of hours from now. Using a running analogy, you are now engaged in a 50-meter dash. You need to crank this code out and deliver it as quickly as possible. Solution: Select a File and Send It A few hours after that conversation with your manager, you have produced a very simple system that allows the user to select a file and send the contents of that file via email (Figure 4). The implementation of this application is very crude by your own standards: you coded the entire application in the form’s code, did no official testing, and did the bare minimum of exception handling (Listing 1). However, you got the job done. New Expectation: All Errors Emailed A week after you wrote that quick-and-dirty email sending application, your boss is back in your cube to talk about it again. This time, he informs you that your application was a smashing success and the CTO has mandated that all systems send error log emails to a special email address for collecting them. The CTO wants you, specifically, to handle this since your original application was received so well. Resetting the Pace As your first assignment, after hearing about this new mandate from the CTO, you want to figure out what log files the network operations personnel will need to send, and how they want to facilitate this. After some discussion with the operations group lead, you have agreed to add two new aspects of functionality of the system: - The operations people want an API to code against for some of their automation scripts. - They need to parse the contents of an XML file to make it a little more human-readable. You have also negotiated a slightly better timeframe with the network operations people than your manager gave you for the original application. They have agreed to a delivery date of close-of-business, tomorrow. With this new deadline and the new requirements in mind, you decide to settle in for a slightly longer race than the original 50-meter dash. The code you started with was sufficient at the time, but now you need to enhance and extend it. You could consider this a 100- or possibly a 400-meter race at this point. The good news is that you know how to set your pace according to the situation you find yourself in. Single Responsibility Principle The Single Responsibility Principle says that a class should have one, and only one, reason to change. This may seem counter-intuitive at first. Wouldn’t it be easier to say that a class should only have one reason to exist? Actually, no-one reason to exist could very easily be taken to an extreme that would cause more harm than good. If you take it to that extreme and build classes that have one reason to exist, you may end up with only one method per class. This would cause a large sprawl of classes for even the most simple of processes, causing the system to be difficult to understand and difficult to change. When the business perception and context has changed, then you have a reason to change the class. The reason that a class should have one reason to change, instead of one reason to exist, is the business context in which you are building the system. Even if two concepts are logically different, the business context in which they are needed may necessitate them becoming one and the same. The key point of deciding when a class should change is not based on a purely logical separation of concepts, but rather the business’s perception of the concept. When the business perception and context has changed, then you have a reason to change the class. To understand what responsibilities a single class should have, you need to first understand what concept should be encapsulated by that class and where you expect the implementation details of that concept to change. Consider an engine in a car, for example. Do you care about the inner working of the engine? Do you care that you have a specific size of piston, camshaft, fuel injector, etc? Or, do you only care that the engine operates as expected when you get in the car? The answer, of course, depends entirely on the context in which you need to use the engine. If you are a mechanic working in an auto shop, you probably care about the inner workings of the engine. You need to know the specific model, the various part sizes, and other specifications of the engine. If you don’t have this information available, you likely cannot service the engine appropriately. However, if you are an average everyday person that only needs transportation from point A to point B, you will likely not need that level of information. The notion of the individual pistons, spark plugs, pulleys, belts, etc., is almost meaningless to you. You only care that the car you are driving has an engine and that it performs correctly. The engine example drives straight to the heart of the Single Responsibility Principle. The contexts of driving the car vs. servicing the engine provide two different notions of what should and should not be a single concept-a reason for change. In the context of servicing the engine, every individual part needs to be separate. You need to code them as single classes and ensure they are all up to their individual specifications. In the context of driving a car, though, the engine is a single concept that does not need to be broken down any further. You would likely have a single class called Engine, in this case. In either case, the context has determined what the appropriate separation of responsibilities is. Separating the Email Application After some quick analysis of your existing application’s code, you decide that the new requirements are really two distinct points of change. Following the Single Responsibility Principle, these two points show you where you need to separate the existing code into multiple classes (Figure 6). A new EmailSender object will provide the ability for the network operations personnel to have an API to code against. Additionally, separating out the format reading from the form is necessary to allow the form or the API to read the file format. To simplify the API that the network operations people need, you decide to put the file reading code into the email sender (Listing 2). This will provide a simple enough interface and let you get the functionality out the door in a timely manner. In the interest of time and not neglecting your other responsibilities, you decide to go ahead and create a single FormatReader class to handle both of the file formats. This code only needs to know if the contents are valid XML. A quick hack to load the contents into an XmlDocument should be sufficient for this small application. string messageBody; try { XmlDocument xmlDoc = new XmlDocument(); xmlDoc.LoadXml(fileContents); messageBody = xmlDoc. SelectSingleNode("//email/body") .InnerText; } catch (Exception) { messageBody = fileContents; } return messageBody; The lesson to remember in this release of the application is that the Single Responsibility Principle is driven by the needs of the business to allow change. “A single reason to change” helps you understand which logically separate concepts should be grouped together by considering the business concept and context, instead of the technical concept alone. Extensibility and Coming Requirements A few days after delivering the API set to your network operations department, your manager is back in your cube with good news-the operations personnel love what you have done for them. The API you delivered was very simple and they were able to get the email process up and running in no time at all. With this success in mind, your manager has been trying to drum up additional uses for your new application. In this effort, he has heard some rumbling about needing to send log messages from more than flat files or XML files. He expects the official request for features to come in soon, and wants you to get a head start on being able to extend the application in this manner. Given the operations group capabilities-they write some code, though it is usually some sort of scripting-you decide that they should be able to extend the supported file formats whenever they need to. After a quick discussion with the operations personnel, they agree and appreciate your confidence in their abilities. From that discussion and the direction from your manager, you decide to move forward on the ability to add new file formats as needed. Open-Closed Principle The Open-Closed Principle says that a class should be open for extension, but closed for modification. In other words, you should be able to easily change the behavior of the class in question without having to modify it. The next time you are at a hardware store, look at the power tools. You will notice that there are a wide range of saw blades that can attach to a single saw. One blade compared to another may not look very different at first, but a closer inspection may reveal some significant differences. Some blades are constructed with different metals, the number of teeth or edges may vary, and the material that is used for the teeth is often designed for special purposes. No matter what the difference, though, if you are comparing two blades that attach to the same type of saw, they will have one thing in common: how they attach to the saw-the interface between the saw and the blade. The individual differences of the blades are what make each type of blade unique. One blade may cut through wood extremely quickly, but leave the edges rough. Another blade may cut wood more slowly and leave the edges smooth. Still others may be suited for cutting metal or other materials. The wide variety of blades, combined with the common method of attaching them to the saw, allows you to change the behavior of the saw without having to modify the mechanical portion of the saw. So, how do you allow a class’s behavior to be modified without actually modifying the class? The answer is surprisingly simple and there are several methods for doing this. Have you ever implemented an interface in a class and then passed an instance of that class into another object? Perhaps you implemented the IPrincipal interface for custom security needs. Or, you may have written your own interface such as the classic example of IAnimal, and implemented a Cat and a Dog object from this interface. The ubiquitous nature of explicit interfaces in .NET, as well as abstract base classes, delegates, and other forms of abstraction, all provide different ways of allowing custom behavior to be supplied to existing classes and modules. You can use design patterns such as Strategy, Template, State, and others to facilitate the behavioral changes through the use of such abstractions. There are still other patterns and abstractions, and other methods of injecting behavior and altering the class at runtime. Chances are, if you have written an application that required even a small amount of flexibility, you have either provided a custom behavior implementation to an existing class, or have written a class that required a custom behavior to be supplied. Restructuring for Open-Closed Given the need for multiple, unknown file types to be parsed, you decide to supply an interface that can be implemented by any number of objects, from any number of third parties, including the network operations personnel. In addition to the actual file parsing, you will need the interface to tell you whether or not the specific implementation can handle the current file contents. Your resulting application structure looks more like Figure 7, with the IFileFormatReader interface defined as follows: public interface IFileFormatReader { bool CanHandle(string fileContents); string GetMessageBody(string fileContents); } Since you know that there are multiple file formats being read now, you also decide to move the existing code that reads the flat file and XML file formats into two separate objects. The flat file reader can handle any non-binary log file, so you decide that this handler does not need to determine if it can handle the file contents sent to it. It only needs to say that it can handle the format, and then send the original content back out. You rewrite the implementation of the flat file format reader as follows: class FlatFileFormatReader: IFileFormatReader { public bool CanHandle(string fileContents) { return true; } public string GetMessageBody( string fileContents) { return fileContents; } } The XML file format reader will contain a check to see if the XML is valid. The GetMessageBody method will then parse the XML for the content, as shown in Listing 3. Next, you want to introduce the FileReaderService class. This will use the various IFileFormatReader implementations and is where the behavioral change will occur when the various format readers are supplied. To support an unknown number of file format readers, you decide to store the list of registered format readers in a simple collection: IList<IFileFormatReader> _formatReaders = new List<IFileFormatReader>(); public void RegisterFormatReader( IFileFormatReader fileFormatReader) { _formatReaders.Add(fileFormatReader); } The RegisterFormatReader method allows any code that calls the FileReaderService API to register as many format readers as they need. Then, when a file needs to be parsed, a call to a GetMessageBody method is made, passing in the contents of the file as a string. This method runs through the list of registered format readers and checks to see if the current one can handle the format. If it can, it calls the GetMessageBody method of the reader and returns the data. public string GetMessageBody(string fileContents) { string messageBody = string.Empty; foreach(IFileFormatReader formatReader in _formatReaders) { if (formatReader.CanHandle(fileContents)) { messageBody = formatReader .GetMessageBody(fileContents); break; } } return messageBody; } At this point, if there is no registered reader that can handle the file contents, an empty string is returned. You realize that you need to add a default file reader. The intention is to ensure that all log files are handled, regardless of the content. If a file can’t be handled by any other reader, you will want to return all of the content through the flat file format reader. By adding a separate RegisterDefaultFileReader method, you can ensure that only one default exists. Listing 4 shows the resulting GetMessageBody implementation. Finally, you need to update the usage of the FormatReader object in your EmailSender. You need to register both the XML file format reader and the flat file format reader in the constructor of the email sender class. private readonly FileReaderService _fileReaderService = new FileReaderService(); public EmailSender() { _fileReaderService.RegisterFormatReader( new XmlFormatReader()); _fileReaderService. RegisterDefaultFormatReader( new FlatFileFormatReader()); } Happy Consumers and More Requirements A few days after releasing this version of the application and API, you hear that the operations group loves your IFileFormatReader and the extensibility it brings to the table. They have successfully implemented several format readers and are planning on more. A short time later, a new request comes in that you were not expecting. One system the operations group must support logs all of its errors to a database, not a text file. Moreover, according the operations personnel, they cannot write code that hits the database in question. Apparently, that’s “above their pay grade.” They need someone on the development staff to do it, and are asking for your help. The most challenging part of this new requirement is the CTO being involved, again. Due to the high visibility of this project and the potential for lost revenue if errors are not proactively corrected, he wants your application updated to support reading from the database, immediately. According to your manager, when the CTO says “immediately” he usually means before the end of the day. It’s only a few hours before the day ends and you’ve been running on very little sleep for the last few days, but you think you can bang out a working version and get it to the operations group in time to make the CTO happy. Liskov Substitution Principle The Liskov Substitution Principle says that an object inheriting from a base class, interface, or other abstraction must be semantically substitutable for the original abstraction. Even if the original abstraction is poorly named, the intent of that abstraction should not be changed by the specific implementations. This requires a solid understanding of the context in which the interface was meant to be used. To illustrate what a semantic violation may look like in code, consider a square and a rectangle, as shown in Figure 8. If you are concerned with calculating the area of a resulting rectangle, you will need a height, a width and an area method that returns the resulting calculation. public class Rectangle { public virtual int Height { get; set; } public virtual int Width { get; set; } public int Area() { return Height * Width; } } In geometry, you know that all squares are rectangles. You also know that not all rectangles are squares. Since a square “is a” rectangle, though, it seems intuitive that you could create a rectangle base class and have square inherit from that. But what happens when you try to change the height or width of a square? The height and width must be the same or you no longer have a square. If you try to inherit from rectangle to create a square, you end up changing the semantics of height and width to account for this. public class Square : Rectangle { public override int Height { get { return base.Height; } set { base.Height = value; base.Width = value; } } public override int Width { get { return base.Width; } set { base.Width = value; base.Height = value; } } } What happens when you use a rectangle base class and assert the area of that rectangle? If you expect the rectangle’s area to be 20, you can set the rectangle’s height to 5 and width to 4. This will give you the result you expect. Rectangle rectangle = new Rectangle(); rectangle.Height = 4; rectangle.Width = 5; AssertTheArea(rectangle); private void AssertTheArea(Rectangle rectangle) { int expectedArea = 20; int actualArea = rectangle.Area(); Debug.Assert(expectedArea == actualArea); } What if you decide to pass a square into the AssertTheArea method, though? The method expects to find an area of 20. Let’s try to set the square’s height to 5. You know that this will also set the square’s width to 5. When you pass that square into the method, what happens? Rectangle square = new Square(); square.Height = 5; AssertTheArea(square); private void AssertTheArea(Rectangle rectangle) { int expectedArea = 20; int actualArea = rectangle.Area(); Debug.Assert(expectedArea == actualArea); } You get the wrong result because 5 x 5 is 25, not 20. That is too high, so now try a height of 4 instead. You know that 4 x 4 is 16. Unfortunately, that’s too low. So the question is, “how can you get 20 out of multiplying two integers?” The answer is: you can’t. The square-rectangle issue illustrates a violation of the Liskov Substitution Principle. You clearly have the wrong abstraction to represent both a square and a rectangle for this scenario. This is evidenced by the square overriding the height and width properties of the rectangle, and changing the expected behavior of a rectangle. What, then, would a correct abstraction be? In this case, you may want to use a simple Shape abstraction and only provide an Area method, as shown in Listing 5. Each specific implementation-square and rectangle-would then provide their own data and implementation for area allowing you to create additional shapes such as circles, triangles, and others that you don’t yet need. By limiting the abstraction to only what is common among all of the shapes, and ensuring that no shape has a different meaning for “area” you can help prevent LSP violations. A Quick-and-Dirty Database Reader After fuelling up with another energy drink and shaking off the sleep you so desperately want, you dive into the code for the database reader. Given the short time frame, you decide to take a shortcut and not introduce a new abstraction or a new method to the API. Rather, you decide to hard code the behavior of reading from the database into the application, facilitated by the use of a special file format reader. You know it’s not the brightest moment in your career, but you just want to get it out the door and go home for the night. What should have been an 800-meter race has now become a 50-meter dash in your mind. Listing 6 shows the result. You deliver the working code on time, and manage to make it home before falling asleep while driving. Overall, you consider it to be a successful day. The following week, you hear word that the network operations personnel liked the ability to read from a database. In fact, they liked it so much that they told another department about this feature. What they didn’t know, though, was that the code you delivered was written for one very specific database and didn’t actually read the connection string from a file. You knew that the security guys would have your head if you stored the real connection information in a plain text file, so you hard coded it into the file reader service. You created the “server=” content of the file as a placeholder to let you know that you should use the database connection reader. So, when the network operations personnel gave your code to the other department, everyone started wondering why the other department was now reading log files from the network operations center. All eyes are now looking squarely at you. Revisiting the Database Reader All of the eyes looking squarely at you were from your friends in the company, fortunately. After explaining the stress and sleeplessness that undermined your ability to code that day, they all laughed and asked when you would have a new version ready for them. Remembering that sprinting out of the gate during a marathon race is likely to cause the same problems again, you inform them that you’ll need a day or two to get the situation sorted out correctly. There’s no immediate need or CTO putting on the pressure at this point, so everyone agrees to the general timeline and waits patiently while you work. After a quick discussion with some coworkers, you realize that you had changed the semantics of the file format reader interface and introduced behavior that was incompatible. After a little more discussion, you end up with the design represented by Figure 9, and the change turns out to be fairly simple. By introducing a separate database reader service, you can remove the type-checking code from the file reader service. By introducing a separate database reader service, you can remove the type-checking code from the file reader service. You can set up the database reader to read the required connection string from the company standard storage for sensitive data. That decision makes the people in network operations, security, and the other department that wants to use the code, happy. Next, you update the UI to include a “Send From Database” button as shown in Figure 10. This button calls into the same email sender object that you’ve been using as the public API. However, the email sender now has a ReadFromDatabase method along with a ReadFromFile method. This keeps the public API centralized while still providing the functionality that the various departments need. public class EmailSender { public void ReadFile() { /* ... */ } public void SendEmail() { /* ... */ } public void ReadDatabase() { /* ... */ } } With this newly structured system in place, you deliver the solution to both of the waiting departments. Your friends are happy to hear that you’ve been getting more sleep and that the application they’ve been waiting for is “finally done” -a day earlier than promised. Still More Use for Your Application Shortly after delivering the updated version of the application with the database reading capabilities, another department gets wind of it and they want to use the API. After a quick conversation with them to find out if your application is what they really need, you deliver the working bits. A day later, one of the developers from that department stops by your cube with a confused look on his face. After some quick chat, you realize that he’s confused by the email sender object. It seems that he doesn’t understand why there’s a “read from database” and “read from file” method on an object that is supposed to send email. Interface Segregation Principle The Interface Segregation Principle says that a client (the calling class or module) should not be forced to depend on an interface that it does not need. If there are multiple concerns represented by an interface, or the methods and properties are unclear, then it becomes difficult to know which methods should be called when. Therefore, you should separate the interface into logical pieces, based on the needs of the consumers. To a certain extent, ISP can be considered a sub-set, or more specific form of the Single Responsibility Principle. The perspective shift of ISP, though, examines the public API for a given class or module. Looking at the IHaveALotOfResponsibilities class in Figure 11, you can see not only a set of methods that you should break into multiple classes for the sake of Single Responsibility, but a very fat interface that may confuse any of the calling clients. If I want to use IHaveALotOfResponsibilities, do I need to call the “Do…” methods? If so, do I need to call them before the “Some…” methods? Or can I just call the “Some…” methods? Or ??? Rather than forcing a developer to know which methods they should call to facilitate the functionality, you should provide a separated set of interfaces that encapsulate the processes in question, independently. This helps to prevent confusion and also helps to cut down on semantic coupling-the idea that a developer has to know the specific implementation of the class to use it correctly. Violations of ISP are not just found in software, though. Most professional workers in modern society have worked in an office at one time or another. Depending on the size of the office-how many people are working there-you will typically find one or more, very large, all-in-one business machines. These machines print, copy, fax, scan & email, and other functions. Think back to the last time you had to use one of these large, multi-function machines. They typically have control panels that include 15 to 20 buttons, an LCD display of some sort, and various other forms of input. The number of functions combined with the number of input options often creates a very frustrating user experience. What buttons or controls do you need to operate the machine? How can you ensure that it is going to create a photocopy of a document instead of scanning and emailing it? Do you need to “clear” the current settings? Do you need to type in a number of copies now, or scan the document first and then tell it how many copies to print? What about the brightness, contrast, or paper size of your copy? Did the machine remember the settings from the last user, who wanted to scan a document and email it to someone? The number of options is overwhelming. The instructions are difficult to follow and you’re not always sure that it did what you want. The worst outcome is when an error messages pops up after performing what you believe is the correct sequence of steps. What does “PC LOAD LETTER” mean, anyway? The large number of capabilities and options that these all-in-one copiers provide may offer some advantages to an office environment. They provide a relatively low cost, high quality, document-centric set of solutions. Unfortunately, they typically come at the cost of a confusing interface. (I, for one, have spent a good number of hours trying to remember how to scan and email a document to myself vs. making a photocopy on the machines in my office.) If manufacturers want to have machines with such a large feature set providing value to so many different users, they should look for ways to separate the interface into each feature so that the user is not led down the wrong path, or led down the right path at the wrong time. Splitting Up the Email Sender API After the discussion about your email sender’s API being rolled up into a single class, and the realization that not everyone needs to read from a database or a file, you decide to separate out the objects as shown in Figure 12. When you dive into the code again, you realize that you don’t have to make many changes to your system. The database reader service and the file reader service are both objects on their own, already. The real change that you need to make is to not call them directly from the email sender. The resulting EmailSender class is significantly smaller. Additionally, you were able to make the message body a parameter of the SendEmail method. This allows for any client of the email sender to provide any message body they need, regardless of the source. public class EmailSender { public void SendEmail(string messageBody) { SmtpClient client = // new ... client.Credentials = // new ... MailAddress from = // new ... MailAddress to = // new ... MailMessage msg = // new ... msg.Body = messageBody; msg.Subject = // ... client.Send(msg); } } The file reader service and database reader service classes needed no changes this time around. You simply moved their calls into the code behind of the form for your application (Listing 7). However, the other departments that are using the API need to know that they are responsible for calling the database reader and file reader, directly. After some discussion with the other departments, there was a little bit of grumbling about how they thought the new API set was more difficult to use. They agreed that a small set of documentation on how to use the API would be sufficient for their needs, though. And honestly, you feel a little more secure knowing that you will have some documentation for the growing API. Trying to Settle in for a Marathon After delivering this version of the system, including the new documentation that you wrote for the API set, you decide to step back for a minute and take stock of your system. What you see is rather surprising at this point. You have a growing number of classes and a lot of functionality compared to the original starting point of reading a flat text file and emailing it. Given the attention that you’ve received for this work and the amount of functionality that you have built in, you decide to ask your manager for some official project support. Rather than working this code base on the side of your other responsibilities, you would like to have a dedicated project for this system. This would help provide long-term support for what is now a mission critical part of the company. Your manager, having received nothing but praise for your hard work and dedication from everyone-including the CTO-happily says yes. He then asks for a timeline to complete the next version. You let him know that you’ll need to think about that for a bit, but off-hand, you expect the code to change when new features are needed, primarily. On your way back to your desk, another coworker stops you and wants to discuss this project. He’s heard a lot about what you have been doing and likes the direction that you have been taking the code. After a few minutes of discussion, your coworker lets you know that he wants to use this project, but isn’t interested in the current format readers or email sender. He’s mostly interested in the general process and wants to know if he can reuse various parts of the system without having to bring all of your specific implementations along. The current objects and interfaces provide most of what your coworker wants. However, you realize that the big picture of the process is still hard coded and tightly coupled in the current application. Your coworker mentions an idea that revolves around a higher-level process providing an abstraction that lower-level details can implement. This piques your interest and you sit down at your desk, ready to tackle this next challenge.. You would not have a usable set of classes if you had zero coupling.. public interface IMessageInfoRetriever { string GetMessageBody(); } public class FileReaderService : IMessageInfoRetriever { public string GetMessageBody() {/* ... */} } public class DatabaseReaderService : IMessageInfoRetriever { public string GetMessageBody() {/* ... */} } This interface allows you to provide any implementation you need to the processing service. You then set your eyes on the email service, which is currently directly coupled to the processing service. A simple IEmailService interface solves that, though. Figure 17 shows the resulting structure. Passing the message info retriever and email service interfaces into the processing service ensures that you have an instance of whatever class implements those interfaces, without having to know about the specific instance types. You can see the end result of this endeavor in Listing 8. Another Day, Another Delivery You coworker meets the delivery of this version with great enthusiasm. He’s excited about the possibilities of using the processing service and providing his own email service implementation, format readers, etc. It seems that you have made yet another successful delivery of this ever-evolving solution and you gladly accept your coworker’s thanks. Heading back to your desk, you realize how high your confidence in this code base, and your ability to continue improving it, actually is. This leads you to wonder what the next change request will be and what new principles and practices you can assimilate into your skill set. It is of little concern, though. Whatever the task, for whoever makes the request, you are certain of your ability to meet their needs. Summary of Your SOLID Race When you first set out on the quest to satisfy your CTO, you had no idea that the resulting race would change so many times or so rapidly. You started at a pace to win a 50-meter dash. Soon after, you slowed the pace down for a 400-meter race. You then continued to change the pace as you continued to add features and functionality, realizing the need to restructure the application for long-term maintenance and enhancements. The journey, now at the point where you are settled in for the marathon of a long-term project, has brought you from a system represented by Figure 18, to a system represented by Figure 19. The difference between these two structures is staggering and looking at them side by side for a moment, you wonder what you’ve actually gained with this sprawl of objects and interfaces. You quickly dismiss the sense of uncertainty, though, as you remember the numerous advantages. Benefits of the New System Structure With a set of classes that are small and each providing a valuable function, you are able to construct a larger system that has more value than just the individual parts. It’s like working with a box of LEGO building blocks. Each individual block may be valuable, but they can create something much more valuable when stacked together correctly. This system is also easily dissectible, and the various parts are easily replaceable. You can change out the specific implementations of the email service, the message info retrieval, and other parts of the system. You can accomplish all of these changes without worrying about adversely affecting the parts of the system that you are not touching. The ability to segment the system, by both horizontal layers and vertical slices, gives you the ability to focus in to a single point in the system. The ability to segment the system, by both horizontal layers and vertical slices, gives you the ability to focus in to a single point in the system. This allows you to assign various parts of the system to different team members, with more confidence of the system not falling apart like a game of Jenga. Achieving Low Coupling By abstracting many of the implementation needs into various interfaces and introducing the concepts of OCP and DIP, you can create a system that has very low coupling. You can take many of these individual pieces out of this system with little to no spaghetti mess trailing after them. Separating the various concerns into the various object implementations also helps to ensure that you can change the system’s behavior as needed, with very little modification to the overall system-just update the one piece that contains the behavior in question. Achieving High Cohesion You can get higher cohesion with a combination of low coupling and SRP-you can stack a lot of small pieces together like building blocks to create something larger and more complex. Any of these individual pieces may not represent much functionality or behavior, but then, an individual puzzle piece isn’t much fun to use without the other pieces. Once separated, DIP allows you to tie the various blocks back together by depending on an abstraction and allowing that abstraction to be fulfilled with different implementations. This creates a system that is much greater than the mere sum of its parts. Achieving Strong Encapsulation LSP, DIP, and SRP all work hand in hand to create encapsulation. You can encapsulate the behavioral implementations in many individual objects, preventing them from leaking into each other. You can ensure that the dependencies on those behaviors are encapsulated behind an appropriate interface. At the same time, you do the necessary due-diligence to ensure that you aren’t violating any of the individual abstraction’s semantics or purpose, according to LSP. This helps ensure that you can properly replace the implementation as needed. A New Utility Belt In the end, the stepping stones of the SOLID principles offer new insight into object-oriented software development. The principles that you once thought were for the academics are now a set of tools that you can readily grasp.
https://www.codemag.com/article/1001061
CC-MAIN-2019-47
en
refinedweb
TL;DR: In this 2-part tutorial series, we'll learn how to build an application that secures a Node back end and an Angular front end with Auth0 authentication. Our server and app will also authenticate a Firebase Cloud Firestore database with custom tokens so that users can leave realtime comments in a secure manner after logging in with Auth0. The Angular application code can be found at the angular-firebase GitHub repo and the Node API can be found in the firebase-auth0-nodeserver repo. Firebase and Auth0 Firebase is a mobile and web application development platform. Firebase was acquired by Google in 2014, and continues to be developed under the Google umbrella. Firebase provides NoSQL databases (RTDB, or Realtime Database and Cloud Firestore, in beta at the time of writing) hosted in the cloud and connected using web sockets to provide realtime capabilities to apps. Auth0 is a cloud-based platform that provides authentication and authorization as a service. As an authentication provider, Auth0 enables developers to easily implement and customize login and authorization security for their apps. Choosing Auth0 + Firebase Authentication If you're already familiar with Firebase's offerings, you might be asking: why would we implement Auth0 with custom tokens in Firebase instead of sticking with Firebase's built-in authentication by itself? Firstly, there is an important distinction to make here. Using Auth0 to secure Firebase does not mean you are not using Firebase auth. Firebase has a custom authentication approach that allows developers to integrate their preferred identity solution with Firebase auth. This approach enables developers to implement Firebase auth so that it functions seamlessly with proprietary systems or other authentication providers. There are many potential reasons we might want to integrate Auth0 with Firebase authentication. Alternatively, there are scenarios where using basic Firebase auth by itself could suffice. Let's explore. You can use Firebase's built-in authentication by itself if you: - Only want to authenticate Firebase RTDB or Firestore and have no need to authenticate additional back ends - Only need a small handful of login options and do not need enterprise identity providers, integration with your own user storage databases, etc. - Do not need extensive user management, profile enrichment, etc. and are comfortable managing users strictly through an API - Have no need to customize authentication flows - Do not need to adhere to compliance regulations regarding the storage of user data You should consider Auth0 with a custom Firebase token if you: - Already have Auth0 implemented and want to add realtime capabilities to your app - Need to easily use issued tokens to secure a back end that is not provided by Firebase - Need to integrate social identity providers beyond just Google, Facebook, Twitter, and GitHub - Need to integrate enterprise identity providers, such as Active Directory, LDAP, ADFS, SAMLP, etc. - Need a customized authentication flow - Need robust user management with APIs and an admin-friendly dashboard - Want to be able to dynamically enrich user profiles - Want features like customizable passwordless login, multifactor authentication, breached password security, anomaly detection, etc. - Must adhere to compliance regulations such as HIPAA, GDPR, SOC2, etc. Essentially, Firebase's basic authentication providers should suffice if you have a very simple app with bare-bones authentication needs and are only using Firebase databases. However, should you need more than that, Firebase offers a great way to use their services with other authentication solutions. This is a much more realistic scenario that many developers will be faced with, so we'll explore it in detail here. "If you need more than simple login with Firebase, using custom tokens with an IDaaS authentication provider like Auth0 is a great option." What We'll Build We're going to build a Node.js API secured with Auth0 that mints custom Firebase tokens and also returns data on ten different dog breeds. We'll also build an Angular front end app called "Popular Dogs" that displays information about the ten most popular dogs in 2016, ranked by public popularity by the American Kennel Club (AKC). Our app will be secured by Auth0, call the Node API to fetch dog data, and call the API to acquire Firebase tokens to authorize users to add and delete comments in realtime with Cloud Firestore. The app will use shared modules as well as implement lazy loading. To implement the app, you will need the following: - Angular CLI - A free Auth0 account with an Application and an API configured - A free Firebase project with a service account Let's get started! Angular CLI Make sure you have Node.js with NPM installed on your local machine. Run the following command to install the Angular CLI globally: $ npm install -g @angular/cli@latest We will generate our Angular app and nearly all of its architecture using the CLI. Auth0 Application and API You'll need an Auth0 account to manage authentication. You can sign up for a free account here. Next, set up an Auth0 Application and API so Auth0 can interface with the Angular app and Node API. Set Up an Auth0 Application - Go to your Auth0 Dashboard and click the "New Application" button. - Name your new app (something like Angular Firebase) and select "Single Page Web Applications". - In the Settings for your new Auth0 application app, add the Allowed Callback URLs. - Enable the toggle for Use Auth0 instead of the IdP to do Single Sign On. - At the bottom of the Settings section, uses username/password database, Facebook, Google, and Twitter. Note: For production, make sure you set up your own social keys and do not leave social connections set to use Auth0 dev keys. Set Up an Auth0 API - Go to APIs in your Auth0 dashboard and click on the "Create API" button. Enter a name for the API, such as Firebase Dogs API. Set the Identifier to your API endpoint URL. In this tutorial, our API identifier is. The Signing Algorithm should be "RS256". - You can consult the Node.js example under the Quick Start tab in your new API's settings. In the next steps, we'll implement our Node API in this fashion using Express, express-jwt, and jwks-rsa. We're now ready to implement Auth0 authentication on both our Angular client and Node back end API. Firebase Project with Service Account Next you will need a free Firebase project. Create a Firebase Project - Go to the Firebase Console and sign in with your Google account. - Click on Add Project. - In the dialog that pops up, give your project a name (such as Angular Firebase Auth0). A project ID will be generated based on the name you chose. You can then select your country/region. - Click the "Create Project" button. Generate an Admin SDK Key In order to mint custom Firebase tokens, you'll need access to the Firebase Admin SDK. To obtain access, you must create a service account in your new Firebase project. Click on the gear wheel icon next to your Project Overview in the Firebase console sidebar and select Project Settings from the menu that appears: In the settings view, click the Service Accounts tab. The Firebase Admin SDK UI will appear, showing a configuration code snippet. Node.js is selected by default. This is the technology we want, and we will implement it in our Node API. Click on the "Generate New Private Key" button. A dialog will appear warning you to store your private key confidentially. We will take care never to check this key into a public repository. Click on the "Generate Key" button to download the key as a .json file. We will add this file to our Node API shortly. Node API The completed Node.js API for this tutorial can be found at the firebase-auth0-nodeserver GitHub repo. Let's learn how to build this API. Node API File Structure We'll want to set up the following file structure: firebase-auth0-nodeserver/ |--firebase/ |--.gitignore |--<your-firebase-admin-sdk-key>.json |--.gitignore |--config.js |--dogs.json |--package.json |--routes.js |--server.js You can generate the necessary folders and files with the command line like so: $ mkdir firebase-auth0-nodeserver $ cd firebase-auth0-nodeserver $ mkdir firebase $ touch firebase/.gitignore $ touch .gitignore $ touch config.js $ touch dogs.json $ touch package.json $ touch routes.js $ touch server.js Firebase Admin SDK Key and Git Ignore Now move the Firebase Admin SDK .json key file you downloaded earlier into the firebase folder. We will take care to make sure the folder is checked in, but its contents are never pushed to a repo using the firebase/.gitignore like so: # firebase/.gitignore * */ !.gitignore This .gitignore configuration ensures that Git will ignore any files and folders inside the firebase directory except for the .gitignore file itself. This allows us to commit an (essentially) empty folder. Our .json Firebase Admin SDK key can live in this folder and we won't have to worry about gitignoring it by filename. Note: This is particularly useful if we have the project pulled down on multiple machines and have different keys (with different filenames) generated. Next let's add the code for the root directory's .gitignore: # .gitignore config.js node_modules Dogs JSON Data Next we'll add the data for ten dog breeds. For brevity, you can simply copy and paste this data into your dogs.json file. Dependencies Let's add our package.json file like so: { "name": "firebase-auth0-nodeserver", "version": "0.1.0", "description": "Node.js server that authenticates with an Auth0 access token and returns a Firebase auth token.", "repository": "", "main": "server.js", "scripts": { "start": "node server" }, "author": "Auth0", "license": "MIT", "dependencies": {}, "devDependencies": {} } We'll install the dependencies with the command line and latest versions will be saved automatically to the package.json file: $ npm install --save body-parser cors express express-jwt jwks-rsa firebase-admin We'll need body-parser, cors, and express to serve our API endpoints. Authentication will rely on express-jwt and jwks-rsa, while Firebase token minting is implemented with the firebase-admin SDK (which we'll have access to using the key we generated). Configuration In the config.js file, add the following code and replace the placeholder values with your own settings: // config.js module.exports = { AUTH0_DOMAIN: '<Auth0 Domain>', // e.g., you.auth0.com AUTH0_API_AUDIENCE: '<Auth0 API Audience>', // e.g., FIREBASE_KEY: './firebase/<Firebase JSON>', // e.g., your-project-firebase-adminsdk-xxxxx-xxxxxxxxxx.json FIREBASE_DB: '<Firebase Database URL>' // e.g., }; Server With our data, configuration, and dependencies in place, we can now implement our Node server. Open the server.js file and add: // server.js // Modules const express = require('express'); const bodyParser = require('body-parser'); const cors = require('cors'); // App const app = express(); app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: false })); app.use(cors()); // Set port const port = process.env.PORT || '1337'; app.set('port', port); // Routes require('./routes')(app); // Server app.listen(port, () => console.log(`Server running on localhost:${port}`)); This will launch our Node server with Express at. Note: Notice that this is the API identifier we set up in Auth0. API Routes Next open the routes.js file. This is where we'll define our API endpoints, secure them, and mint custom Firebase tokens. Add the following code: // routes.js // Dependencies const jwt = require('express-jwt'); const jwks = require('jwks-rsa'); const firebaseAdmin = require('firebase-admin'); // Config const config = require('./config'); module.exports = function(app) { // Auth0 athentication' }); // Initialize Firebase Admin with service account const serviceAccount = require(config.FIREBASE_KEY); firebaseAdmin.initializeApp({ credential: firebaseAdmin.credential.cert(serviceAccount), databaseURL: config.FIREBASE_DB }); // GET object containing Firebase custom token app.get('/auth/firebase', jwtCheck, (req, res) => { // Create UID from authenticated Auth0 user const uid = req.user.sub; // Mint token using Firebase Admin SDK firebaseAdmin.auth().createCustomToken(uid) .then(customToken => // Response must be an object or Firebase errors res.json({firebaseToken: customToken}) ) .catch(err => res.status(500).send({ message: 'Something went wrong acquiring a Firebase token.', error: err }) ); }); // Set up dogs JSON data for API const dogs = require('./dogs.json'); const getDogsBasic = () => { const dogsBasicArr = dogs.map(dog => { return { rank: dog.rank, breed: dog.breed, image: dog.image } }); return dogsBasicArr; } // GET dogs (public) app.get('/api/dogs', (req, res) => { res.send(getDogsBasic()); }); // GET dog details by rank (private) app.get('/api/dog/:rank', jwtCheck, (req, res) => { const rank = req.params.rank * 1; const thisDog = dogs.find(dog => dog.rank === rank); res.send(thisDog); }); }; At a high level, our routes file does the following: - Sets up authentication checking to ensure that only logged in users can access routes with jwtCheckmiddleware - Initializes the Firebase Admin SDK with the private key generated from the Firebase project service account - Provides a secure GETendpoint that returns a custom Firebase token - Provides a public GET* endpoint that returns a short version of the dogs data - Provides a secure GET* endpoint that returns a specific dog's detailed data, requested by rank *Endpoints use variations of the same base dataset to simulate a more complex API. You can read the code comments for more detail. Serve the API You can serve the Node API by running: $ node server The API will then be available at. Note: If you try to access secure routes in the browser, you should receive a 401 Unauthorizederror. That's it for our server! Keep the API running so that it will be accessible to the Angular app, which we'll set up next. Set Up Angular App Now it's time to create our Angular app and set up some additional dependencies. Create New Angular App You should have already installed the Angular CLI earlier. We can now use the CLI to generate our project and its architecture. To create a new app, choose a containing folder and then run the following command: $ ng new angular-firebase --routing --skip-tests The --routing flag generates an app with a routing module and --skip-tests generates the root component with no .spec.ts file. Note: For brevity, we are not going to cover testing in this article. If you'd like to learn more about testing in Angular, check out the tutorial's conclusion for more resources. Install Front End Dependencies Now let's install our front end dependencies: $ cd angular-firebase $ npm install --save auth0-js@latest firebase@latest @angular/fire@latest We will need the auth0-js library to implement Auth0 authentication in our Angular app. We'll also need the firebase JS SDK and the angularfire2 Angular Firebase library (which was moved to the @angular/fire namespace in version 5) to implement our realtime comments with Firebase. Add Bootstrap CSS To simplify styling, we'll add the Bootstrap CSS CDN link to the <head> of our index.html file like so: <!-- src/index.html --> ... <head> ... <title>Top 10 Dogs<> ... Serve the Angular App You can serve the Angular app with the following command: $ ng serve The app will run in the browser at. Angular App Architecture We're going to use the Angular CLI to generate the complete architecture for our app up front. This way, we can make sure that our modules are functioning properly before we implement our logic and templates. Our app is going to use a modular approach with lazy loading. The sample app in this tutorial is small, but we want to build it in a scalable, real-world manner. "The sample app in this tutorial is small, but we want to build it in a scalable, real-world manner: modular with lazy loading." Root Module The root module has already been created when the Angular app was generated with the ng new command. The root module lives at src/app/app.module.ts. Any components we generate in our Angular app without another module's subdirectory specified will be automatically imported and declared in our root module. Let's generate a component with the CLI now: # create CallbackComponent: $ ng g component callback --is --it --flat --no-spec This command is composed of the following: ng g component: generates a callbackcomponent file with: --isinline styles --itinline template --flatno containing folder --no-specno .spectest file We'll use the callback component to handle redirection after the user logs into our application. It's a very simple component. Note: gis a shortcut for generate. We could also use cas a shortcut for component, making this command ng g c. However, this tutorial will not use shortcuts for the type of files generated, in the interest of clarity. Core Module Architecture Next we'll create the CoreModule and its components and services. This is a shared module. From the root of your Angular project folder, run the following CLI commands. Make sure you run the ng g module core command first, like so: # create Core module: $ ng g module core # create API service with no .spec file: $ ng g service core/api --no-spec # create HeaderComponent with inline styles, no .spec file, and export in module: $ ng g component core/header --is --no-spec --export=true # create LoadingComponent with inline styles, inline template, no folder, no .spec file, and export in module: $ ng g component core/loading --is --it --flat --no-spec --export=true # create ErrorComponent with inline styles, inline template, no folder, no .spec file, and export in module: $ ng g component core/error --is --it --flat --no-spec --export=true # create Dog type interface: $ ng g interface core/dog # create DogDetail type interface: $ ng g interface core/dog-detail Creating the module first ensures that components created in that module's folder will then be imported and declared automatically in that parent module instead of the app's root module. Note: If you wish to use a shared module's components in another module, you need to exportthe components as well as declare them. We can do this automatically with the CLI using the --export=trueflag. This is the basic architecture for the shared core services, components, and models that our app will need access to. Auth Module Architecture Next we'll create our AuthModule. Execute the following CLI commands (again, making sure to generate the module first): # create Auth module: $ ng g module auth # create AuthService with no .spec file: $ ng g service auth/auth --no-spec # create Auth route guard with no .spec file: $ ng g guard auth/auth --no-spec Our Auth module supplies the service and route guard we need to manage authentication, but does not have any components. This is also a shared module. Dogs Module Architecture Our app's homepage will be provided by the DogsModule. This will be the list of ten most popular dogs in 2016 as ranked by the AKC. Use the following CLI commands to generate the structure for this lazy-loaded page module: # create Dogs module: $ ng g module dogs # create DogsComponent with inline styles and no .spec file: $ ng g component dogs/dogs --is --no-spec Dog Module Architecture Our app will also have detail pages for each dog listed in the Dogs component so that users can learn more about each breed. Use the following CLI commands to generate the structure for the lazy-loaded DogModule: # create Dog module: $ ng g module dog # create DogComponent with inline styles and no .spec file: $ ng g component dog/dog --is --no-spec Finally, we need to implement the architecture necessary for our Firebase realtime comments. Use the following CLI commands to generate the structure for the # create Comments module: $ ng g module comments # create Comment model class: $ ng g class comments/comment # create CommentsComponent with no .spec file: $ ng g component comments/comments --no-spec --export=true # create CommentFormComponent with inline styles and no .spec file: $ ng g component comments/comments/comment-form --is --no-spec Environment Configuration Let's add our configuration information for Auth0 and Firebase to our Angular front end. Open the environment.ts file and add: // src/environments/environment.ts const FB_PROJECT_ID = '<FIREBASE_PROJECT_ID>'; export const environment = { production: false, auth: { clientId: '<AUTH0_CLIENT_ID>', clientDomain: '<AUTH0_DOMAIN>', // e.g., you.auth0.com audience: '<AUTH0_API_AUDIENCE>', // e.g., redirect: '', scope: 'openid profile email' }, firebase: { apiKey: '<FIREBASE_API_KEY>', authDomain: `${FB_PROJECT_ID}.firebaseapp.com`, databaseURL: `{FB_PROJECT_ID}.firebaseio.com`, projectId: FB_PROJECT_ID, storageBucket: `${FB_PROJECT_ID}.appspot.com`, messagingSenderId: '<FIREBASE_MESSAGING_SENDER_ID>' }, apiRoot: '<API URL>' // e.g., (DO include trailing slash) }; Replace placeholders in <angle brackets> with your appropriate Auth0, Firebase, and API information. You can find your Auth0 configuration in your Auth0 Dashboard in the settings for the application and API you created for this tutorial. You can find your Firebase configuration in the Firebase Console Project Overview after clicking the large icon labeled Add Firebase to your web app, as shown below: Add Loading Image The last thing we'll do before we begin implementing functionality in our Angular app is add a loading image. Create the following folder: src/assets/images. Then save this loading SVG image into that folder: Implement Shared Modules Let's set up our modules. We'll import the shared modules ( CoreModule and AuthModule) in our root AppModule. Core Module First we'll implement our CoreModule. Open the core.module.ts file and update to the following code: // src/app/core/core.module.ts import { NgModule, ModuleWithProviders } from '@angular/core'; import { CommonModule } from '@angular/common'; import { HttpClientModule } from '@angular/common/http'; import { FormsModule } from '@angular/forms'; import { RouterModule } from '@angular/router'; import { Title } from '@angular/platform-browser'; import { DatePipe } from '@angular/common'; import { HeaderComponent } from './header/header.component'; import { ApiService } from './api.service'; import { LoadingComponent } from './loading.component'; import { ErrorComponent } from './error.component'; @NgModule({ imports: [ CommonModule, RouterModule, HttpClientModule, // AuthModule is a sibling and can use this without us exporting it FormsModule ], declarations: [ HeaderComponent, LoadingComponent, ErrorComponent ], exports: [ FormsModule, // Export FormsModule so CommentsModule can use it HeaderComponent, LoadingComponent, ErrorComponent ] }) export class CoreModule { static forRoot(): ModuleWithProviders { return { ngModule: CoreModule, providers: [ Title, DatePipe, ApiService ] }; } } Since this is a shared module, we'll import the other modules, services, and components that we'll need access to throughout our app. Note: The CommonModuleis imported in all modules that are not the root module. In our imports array, we'll add any modules that may be needed by services or components in the CoreModule, or that need to be available to other modules in our app. The CLI should have automatically added any generated components to the declarations array. The exports array should contain any modules or components that we want to make available to other modules. Note that we have imported ModuleWithProviders from @angular/core. Using this module, we can create a forRoot() method that can be called on import in the root app.module.ts when CoreModule is imported. This way, we can ensure that any services we add to a providers array returned by the forRoot() method remain singletons in our application. In this manner, we can avoid unintentional multiple instances if other modules in our app also need to import the CoreModule. Auth Module Next let's add some code to our AuthModule in the auth.module.ts file: // src/app/auth/auth.module.ts import { NgModule, ModuleWithProviders } from '@angular/core'; import { CommonModule } from '@angular/common'; import { AuthService } from './auth.service'; import { AuthGuard } from './auth.guard'; import { AngularFireAuthModule } from '@angular/fire/auth'; @NgModule({ imports: [ CommonModule, AngularFireAuthModule ] }) export class AuthModule { static forRoot(): ModuleWithProviders { return { ngModule: AuthModule, providers: [ AuthService, AuthGuard ] }; } } We'll import ModuleWithProviders to implement a forRoot() method like we did with our CoreModule. Then we'll import our AuthService and AuthGuard. We also need to import AngularFireAuthModule from @angular/fire/auth so we can secure our Firebase connections in our AuthService. The service and guard should then be returned in the providers array in the forRoot() method. Open the // src/app/comments/comments.module.ts import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; import { CoreModule } from '../core/core.module'; import { environment } from './../../environments/environment'; import { AngularFireModule } from '@angular/fire'; import { AngularFirestoreModule } from '@angular/fire/firestore'; import { CommentsComponent } from './comments/comments.component'; import { CommentFormComponent } from './comments/comment-form/comment-form.component'; @NgModule({ imports: [ CommonModule, CoreModule, // Access FormsModule, Loading, and Error components AngularFireModule.initializeApp(environment.firebase), AngularFirestoreModule ], declarations: [ CommentsComponent, CommentFormComponent ], exports: [ CommentsComponent ] }) export class CommentsModule { } We'll need to import the CoreModule so we can utilize its exported FormsModule, ErrorComponent. We also need to access our configuration from the environment.ts file. Comments use Firebase's Cloud Firestore database, so let's import the AngularFireModule and AngularFirestoreModule as well as our two components: CommentFormComponent. When we add AngularFireModule to the @NgModule's imports array, we'll call its initializeApp() method, passing in our Firebase configuration. Both of our components should already be in the declarations array, and the exports array so that other components from other modules can use it. Note: We don't need to export The CommentsModule does not provide any services, so there's no need to implement a forRoot() method. App Module Now that our CoreModule, AuthModule, and CommentsModule have been implemented, we need to import them in our root module, the AppModule located in the app.module.ts file: // src/app/app.module.ts import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { AppRoutingModule } from './app-routing.module'; import { CoreModule } from './core/core.module'; import { AuthModule } from './auth/auth.module'; import { CommentsModule } from './comments/comments.module'; import { AppComponent } from './app.component'; import { CallbackComponent } from './callback.component'; @NgModule({ declarations: [ AppComponent, CallbackComponent ], imports: [ BrowserModule, AppRoutingModule, CoreModule.forRoot(), AuthModule.forRoot(), CommentsModule ], bootstrap: [AppComponent] }) export class AppModule { } The AppComponent and CallbackComponent have already been added automatically by the CLI. When we add our CoreModule and AuthModule to the imports array, we'll call the forRoot() method to ensure no extra instances are created for their services. The CommentsModule doesn't provide any services, so this is not a concern for that module. Implement Routing and Lazy Loaded Modules We have two modules that require routing: the DogsModule for the main listing of dogs, and the DogModule, which contains the component showing a dog breed's detail page. App Routing First let's implement our app's routing. Open the app-routing.module.ts file and add this code: // src/app/app-routing.module.ts import { NgModule } from '@angular/core'; import { Routes, RouterModule } from '@angular/router'; import { CallbackComponent } from './callback.component'; import { AuthGuard } from './auth/auth.guard'; const routes: Routes = [ { path: '', loadChildren: './dogs/dogs.module#DogsModule', pathMatch: 'full' }, { path: 'dog', loadChildren: './dog/dog.module#DogModule', canActivate: [ AuthGuard ] }, { path: 'callback', component: CallbackComponent } ]; @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule] }) export class AppRoutingModule { } We'll import our CallbackComponent and AuthGuard. The remaining routes will be string references to modules rather than imported components using the loadChildren property. We will set the default '' path to load route children from the DogsModule, and the 'dog' path to load route children from the DogModule. The 'dog' path should also be protected by the AuthGuard, which we declare using the canActivate property. This can hold an array of route guards should we require more than one. Finally, the 'callback' route should simply point to the CallbackComponent. Dogs Module Let's add some code to the dogs.module.ts file: // src/app/dogs/dogs.module.ts import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; import { Routes, RouterModule } from '@angular/router'; import { CoreModule } from '../core/core.module'; import { CommentsModule } from '../comments/comments.module'; import { DogsComponent } from './dogs/dogs.component'; const DOGS_ROUTES: Routes = [ { path: '', component: DogsComponent } ]; @NgModule({ imports: [ CommonModule, CoreModule, RouterModule.forChild(DOGS_ROUTES), CommentsModule ], declarations: [ DogsComponent ] }) export class DogsModule { } We'll import Routes and RouterModule in addition to our CoreModule and This module has a child route, so we'll create a constant that contains an array to hold our route object. The only child route we'll need inherits the '' path from app-routing.module.ts, so its path should also be ''. It will load the DogsComponent. In our imports array, we'll pass our DOGS_ROUTES constant to the RouterModule's forChild() method. Dog Module The DogModule works similarly to the DogsModule above. Open dog.module.ts and add the following: // src/app/dog/dog.module.ts import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; import { Routes, RouterModule } from '@angular/router'; import { CoreModule } from '../core/core.module'; import { DogComponent } from './dog/dog.component'; const DOG_ROUTES: Routes = [ { path: ':rank', component: DogComponent } ]; @NgModule({ imports: [ CommonModule, CoreModule, RouterModule.forChild(DOG_ROUTES) ], declarations: [ DogComponent ] }) export class DogModule { } One difference between this module and the DogsModule is that our DOG_ROUTES has a path of :rank. This way, the route for any specific dog's details is passed as a URL segment matching the dog's rank in our list of top ten dog breeds, like so: Another difference is that we will not import the CommentsModule. However, we could add comments to dog details in the future if we wished. Our app's architecture and routing are now complete! The app should successfully compile and display in the browser, with lazy loading functioning properly to load shared code and the code for the specific route requested. We're now ready to implement our application's logic. The loading and error components are basic, core UI elements that can be used in many different places in our app. Let's set them up now. The LoadingComponent should simply show a loading image. (Recall that we already saved one when we set up the architecture of our app.) However, it should be capable of displaying the image large and centered, or small and inline. Open the // src/app/core/loading.component.ts import { Component, Input } from '@angular/core'; @Component({ selector: 'app-loading', template: ` <div [ngClass]="{'inline': inline, 'text-center': !inline, 'py-2': !inline }"> <img src="/assets/images/loading.svg"> </div> `, styles: [` .inline { display: inline-block; } img { height: 80px; width: 80px; } .inline img { height: 24px; width: 24px; } `] }) export class LoadingComponent { @Input() inline: boolean; } Using the @Input() decorator, we can pass information into the component from its parent, telling it whether we should display the component inline or not. We'll use the NgClass directive ( [ngClass]) in our template to conditionally add the appropriate styles for the display we want. Displaying this component in another template will look like this: <!-- Large, full width, centered: --> <app-loading></app-loading> <!-- Inline: --> <app-loading</app-loading> Error Component Next let's quickly implement our ErrorComponent. This component will display a simple error message if shown. Open the error.component.ts file and add: // src/app/core/error.component.ts import { Component } from '@angular/core'; @Component({ selector: 'app-error', template: ` <p class="alert alert-danger"> <strong>Error:</strong> There was an error retrieving data. </p> ` }) export class ErrorComponent { } Authentication Logic Now let's implement the code necessary to get our AuthModule's features working. We'll need the authentication service in order to build out the header in the CoreModule, so it makes sense to start here. We've already installed the necessary dependencies (Auth0 and FirebaseAuth), so let's begin. Authentication Service Before we write any code, we'll determine what the requirements are for this service. We need to: - Create a login()method that will allow users to authenticate using Auth0 - If user was prompted to log in by attempting to access a protected route, make sure they can be redirected to that route after successful authentication - Get the user's profile information and set up their session - Establish a way for the app to know whether the user is logged in or not - Request a Firebase custom token from the API with authorization from the Auth0 access token - If successful in acquiring a Firebase token, sign into Firebase using the returned token and establish a way for the app to know whether the user is logged into Firebase or not - Custom tokens minted by Firebase expire after an hour, so we should set up a way to automatically renew tokens that expire - Create a logout()method to clear session and sign out of Firebase Open the auth.service.ts file that we generated earlier. For tutorial brevity, please check out the full code in the GitHub repo's auth.service.ts file here. There's a lot going on, so let's go through it step by step. First, as always, we'll import our dependencies. This includes our environment configuration we set up earlier to provide our Auth0, Firebase, and API settings, as well as auth0 and firebase libraries, AngularFireAuth, HttpClient to call the API to get a custom Firebase token, and the necessary RxJS imports. You can refer to the code comments for descriptions of the private and public members of our AuthService class. Next is our constructor function, where we'll make Router, AngularFireAuth, and HttpClient available for use in our class. The login() method looks like this: login(redirect?: string) { // Set redirect after login const _redirect = redirect ? redirect : this.router.url; localStorage.setItem('auth_redirect', _redirect); // Auth0 authorize request this._auth0.authorize(); } If a redirect URL segment is passed into the method, we'll save it in local storage. If no redirect is passed, we'll simply store the current URL. We'll then use the _auth0 instance we created in our members and call Auth0's authorize() method to go to the Auth0 login page so our user can authenticate. The next three methods are handleLoginCallback(), getUserInfo(), and _setSession(): handleLoginCallback() { this.loading = true; // When Auth0 hash parsed, get profile this._auth0.parseHash((err, authResult) => { if (authResult && authResult.accessToken) { window.location.hash = ''; // Store access token this.accessToken = authResult.accessToken; // Get user info: set up session, get Firebase token this.getUserInfo(authResult); } else if (err) { this.router.navigate(['/']); this.loading = false; console.error(`Error authenticating: ${err.error}`); } }); } getUserInfo(authResult) { // Use access token to retrieve user's profile and set session this._auth0.client.userInfo(this.accessToken, (err, profile) => { if (profile) { this._setSession(authResult, profile); } else if (err) { console.warn(`Error retrieving profile: ${err.error}`); } }); } private _setSession(authResult, profile) { // Set tokens and expiration in localStorage const expiresAt = JSON.stringify((authResult.expiresIn * 1000) + Date.now()); localStorage.setItem('expires_at', expiresAt); this.userProfile = profile; // Session set; set loggedIn and loading this.loggedIn = true; this.loading = false; // Get Firebase token this._getFirebaseToken(); // Redirect to desired route this.router.navigateByUrl(localStorage.getItem('auth_redirect')); These methods are fairly self-explanatory: they use Auth0 methods parseHash() and userInfo() to extract authentication results and get the user's profile. We'll also set our service's properties to store necessary state (such as whether the user's authentication state is loading and if they're logged in or not), handle errors, save data to our service and local storage, and redirect to the appropriate route. We are also going to use the authentication result's access token to authorize an HTTP request to our API to get a Firebase token. This is done with the _getFirebaseToken() and _firebaseAuth() methods: private _getFirebaseToken() { // Prompt for login if no access token if (!this.accessToken) { this.login(); } const getToken$ = () => { return this.http .get(`${environment.apiRoot}auth/firebase`, { headers: new HttpHeaders().set('Authorization', `Bearer ${this.accessToken}`) }); }; this.firebaseSub = getToken$().subscribe( res => this._firebaseAuth(res), err => console.error(`An error occurred fetching Firebase token: ${err.message}`) ); } private _firebaseAuth(tokenObj) { this.afAuth.auth.signInWithCustomToken(tokenObj.firebaseToken) .then(res => { this.loggedInFirebase = true; // Schedule token renewal this.scheduleFirebaseRenewal(); console.log('Successfully authenticated with Firebase!'); }) .catch(err => { const errorCode = err.code; const errorMessage = err.message; console.error(`${errorCode} Could not log into Firebase: ${errorMessage}`); this.loggedInFirebase = false; }); } We'll create a getToken$ observable from the GET request to our API's /auth/firebase endpoint and subscribe to it. If successful, we'll pass the returned object with the custom Firebase token to the _firebaseAuth() method, which will authenticate with Firebase using Firebase's signInWithCustomToken() method. This method returns a promise, and when the promise is resolved, we can tell our app that Firebase login was successful. We can also schedule Firebase token renewal (we'll look at this shortly). We'll handle any errors appropriately. Our custom Firebase token will expire in 3600 seconds (1 hour). This is only half as long as our default Auth0 access token lifetime (which is 7200 seconds, or 2 hours). To avoid having our users lose access to Firebase unexpectedly in the middle of a session, we'll set up automatic Firebase token renewal with two methods: scheduleFirebaseRenewal() and unscheduleFirebaseRenewal(). Note: You can also implement automatic session renewal with Auth0 in a similar manner using the checkSession()method. In addition, you could use checkSession()to restore an unexpired authentication session in the constructor if a user navigates away from the app and then returns later. We won't cover that in this tutorial, but this is something you should try on your own! scheduleFirebaseRenewal() { // If user isn't authenticated, check for Firebase subscription // and unsubscribe, then return (don't schedule renewal) if (!this.loggedInFirebase) { if (this.firebaseSub) { this.firebaseSub.unsubscribe(); } return; } // Unsubscribe from previous expiration observable this.unscheduleFirebaseRenewal(); // Create and subscribe to expiration observable // Custom Firebase tokens minted by Firebase // expire after 3600 seconds (1 hour) const expiresAt = new Date().getTime() + (3600 * 1000); const expiresIn$ = Observable.of(expiresAt) .pipe( mergeMap( expires => { const now = Date.now(); // Use timer to track delay until expiration // to run the refresh at the proper time return Observable.timer(Math.max(1, expires - now)); } ) ); this.refreshFirebaseSub = expiresIn$ .subscribe( () => { console.log('Firebase token expired; fetching a new one'); this._getFirebaseToken(); } ); } unscheduleFirebaseRenewal() { if (this.refreshFirebaseSub) { this.refreshFirebaseSub.unsubscribe(); } } To schedule automatic token renewal, we'll create a timer observable that counts down to the token's expiration time. We can subscribe to the expiresIn$ observable and then call our _getFirebaseToken() method again to acquire a new token. The signInWithCustomToken() angularfire2 auth method returns a promise. When the promise resolves, scheduleFirebaseRenewal() is called, which in turn ensures that the token will continue to be renewed as long as the user is logged into our app. We'll also need to be able to unsubscribe from token renewal, so we'll create a method for that as well. Finally, the last two methods in our authentication service are logout() and tokenValid(): logout() { // Ensure all auth items removed localStorage.removeItem('expires_at'); localStorage.removeItem('auth_redirect'); this.accessToken = undefined; this.userProfile = undefined; this.loggedIn = false; // Sign out of Firebase this.loggedInFirebase = false; this.afAuth.auth.signOut(); // Return to homepage this.router.navigate(['/']); } get tokenValid(): boolean { // Check if current time is past access token's expiration const expiresAt = JSON.parse(localStorage.getItem('expires_at')); return Date.now() < expiresAt; } The logout() method removes all session information from local storage and from our service, signs out of Firebase Auth, and redirects the user back to the homepage (the only public route in our app). The tokenValid accessor method checks whether the Auth0 access token is expired or not by comparing its expiration to the current datetime. This can be useful for determining if the user needs a new access token; we won't cover that in this tutorial, but you may want to explore Auth0 session renewal further on your own. That's it for our AuthService! Callback Component Recall that we created a CallbackComponent in our root module. In addition, we set our environment's Auth0 redirect to the callback component's route. That means that when the user logs in with Auth0, they will return to our app at the /callback route with the authentication hash appended to the URI. We created our AuthService with methods to handle authentication and set sessions, but currently these methods aren't being called from anywhere. The callback component is the appropriate place for this code to execute. Open the callback.component.ts file and add: // src/app/callback.component.ts import { Component, OnInit } from '@angular/core'; import { AuthService } from './auth/auth.service'; @Component({ selector: 'app-callback', template: ` <app-loading></app-loading> ` }) export class CallbackComponent implements OnInit { constructor(private auth: AuthService) { } ngOnInit() { this.auth.handleLoginCallback(); } } All our callback component needs to do is show the AuthService's handleAuth() method executes. The handleLoginCallback() method will parse the authentication hash, get the user's profile info, set their session, and redirect to the appropriate route in the app. Auth Guard Now that we've implemented the authentication service, we have access to the properties and methods necessary to effectively use authentication state throughout our Angular application. Let's use this logic to implement our AuthGuard for protecting routes. Using the Angular CLI should have generated some helpful boilerplate code, and we only have to make a few minor changes to ensure that our guarded routes are only accessible to authenticated users. Note: It's important to note that route guards on their own do not confer sufficient security. You should always secure your API endpoints, as we have done in this tutorial, and never rely solely on the client side to authorize access to protected data. Open the auth.guard.ts file and make the following changes: // src/app/auth/auth.guard.ts import { Injectable } from '@angular/core'; import { CanActivate, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router'; import { Observable } from 'rxjs'; import { AuthService } from './auth.service'; @Injectable() export class AuthGuard implements CanActivate { constructor(private auth: AuthService) { } canActivate( next: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<boolean> | Promise<boolean> | boolean { if (this.auth.loggedIn) { return true; } else { // Send guarded route to redirect after logging in this.auth.login(state.url); return false; } } } We'll import AuthService add a constructor() function to make the service available in our route guard. The canActivate() method should return true if conditions are met to grant access to a route, and false if not. In our case, the user should be able to access the guarded route if they are authenticated. The loggedIn property from our AuthService provides this information. If the user does not have a valid token, we'll prompt them to log in. We want them to be redirected back to the guarded route after they authenticate, so we'll call the login() method and pass the guarded route ( state.url) as the redirect parameter. Note: Remember that we set up our entire app's architecture and routing earlier. We already added AuthGuardto our dog details route, so it should be protected now that we've implemented the guard. Core Logic The last thing we'll do in this section of our tutorial is build out the remaining components and services that belong to our CoreModule. We've already taken care of the ErrorComponent, so let's move on to the header. Header Component The header will use methods and logic from our authentication service to show login and logout buttons as well as display the user's name and picture if they're authenticated. Open the header.component.ts file and add: // src/app/core/header/header.component.ts import { Component } from '@angular/core'; import { AuthService } from '../../auth/auth.service'; @Component({ selector: 'app-header', templateUrl: './header.component.html', styles: [` img { border-radius: 100px; width: 30px; } .loading { line-height: 31px; } .home-link { color: #212529; } .home-link:hover { text-decoration: none; } `] }) export class HeaderComponent { constructor(public auth: AuthService) {} } We'll add a few simple styles and import our AuthService to make its members publicly available to our header component's template. Next open the header.component.html file and add: <!-- src/app/core/header/header.component.html --> <nav class="nav justify-content-between mt-2 mx-2 mb-3"> <div class="d-flex align-items-center"> <strong class="mr-1"><a routerLink="/" class="home-link">Popular Dogs ❤</a></strong> </div> <div class="ml-3"> <small * Logging in... </small> <ng-template [ngIf]="!auth.loading"> <button *Log In</button> <span * <img [src]="auth.userProfile.picture"> <small>{{ auth.userProfile.name }}</small> <button class="btn btn-danger btn-sm" (click)="auth.logout()">Log Out</button> </span> </ng-template> </div> </nav> The header now shows: - The name of our app ("Popular Dogs") with a link to the /route - A login button if the user is not authenticated - A "Logging in..." message if the user is currently authenticating - The user's picture, name, and a logout button if the user is authenticated Now that we have our header component built, we need to display it in our app. Open the app.component.html file and add: <!-- src/app/app.component.html --> <app-header></app-header> <div class="container"> <router-outlet></router-outlet> </div> The header component will now be displayed in our app with the current routed component showing beneath it. Check it out in the browser and try logging in! Dog and DogDetail Models Let's implement our dog.ts and dog-detail.ts interfaces. These are models that specify types for the shape of values that we'll use in our app. Using models ensures that our data has the structure that we expect. We'll start with the dog.ts interface: // src/app/core/dog.ts export interface Dog { breed: string; rank: number; image: string; } Next let's implement the dog-detail.ts interface: // src/app/core/dog-detail.ts export interface DogDetail { breed: string; rank: number; description: string; personality: string; energy: string; group: string; image: string; link: string; } API Service With our Node API and models in place, we're ready to implement the service that will call our API in the Angular front end. Open the api.service.ts file and add this code: // src/app/core/api.service.ts import { Injectable } from '@angular/core'; import { HttpClient, HttpHeaders, HttpErrorResponse } from '@angular/common/http'; import { environment } from './../../environments/environment'; import { AuthService } from './../auth/auth.service'; import { Observable, throwError } from 'rxjs'; import { catchError } from 'rxjs/operators'; import { Dog } from './../core/dog'; import { DogDetail } from './../core/dog-detail'; @Injectable() export class ApiService { private _API = `${environment.apiRoot}api`; constructor( private http: HttpClient, private auth: AuthService) { } getDogs$(): Observable<Dog[]> { return this.http .get<Dog[]>(`${this._API}/dogs`) .pipe( catchError((err, caught) => this._onError(err, caught)) ); } getDogByRank$(rank: number): Observable<DogDetail> { return this.http .get<DogDetail>(`${this._API}/dog/${rank}`, { headers: new HttpHeaders().set('Authorization', `Bearer ${this.auth.accessToken}`) }) .pipe( catchError((err, caught) => this._onError(err, caught)) ); } private _onError(err, caught) { let errorMsg = 'Error: Unable to complete request.'; if (err instanceof HttpErrorResponse) { errorMsg = err.message; if (err.status === 401 || errorMsg.indexOf('No JWT') > -1 || errorMsg.indexOf('Unauthorized') > -1) { this.auth.login(); } } return throwError(errorMsg); } } We'll add the necessary imports to handle HTTP in Angular along with the environment configuration, AuthService, RxJS imports, and Dog and DogDetail models we just created. We'll set up private members for the _API and to store the _accessToken, then make the HttpClient and AuthService available privately to our API service. Our API methods will return observables that emit one value when the API is either called successfully or an error is thrown. The getDogs$() stream returns an observable with an array of objects that are Dog-shaped. The getDogByRank$(rank) stream requires a numeric rank to be passed in, and will then call the API to retrieve the requested Dog's data. This API call will send an Authorization header containing the authenticated user's access token. Finally, we'll create an error handler that checks for errors and assesses if the user is not authenticated and prompts for login if so. The observable will then terminate with an error. Note: We are using arrow functions to pass parameters to our handler functions for RxJS pipeable operators (such as catchError). This is done to preserve the scope of the thiskeyword (see the "No separate this" section of the MDN arrow functions documentation). Next Steps We've already accomplished a lot in the first part of our tutorial series. In the next part, we'll finish our Popular Dogs application. In the meantime, here are some additional resources that you may want to check out: Angular Testing Resources If you're interested in learning more about testing in Angular, which we did not cover in this tutorial, please check out some of the following resources: - Angular - Testing - Angular Testing In Depth: Services - Angular Testing In Depth: HTTP Services - Angular Testing In Depth: Components - How to correctly test Angular 4 application with Auth0 integration Additional Resources You can find more resources on Firebase, Auth0, and Angular here: - Firebase documentation - Cloud Firestore documentation - angularfire2 documentation - Auth0 documentation - Auth0 pricing and features - Angular documentation - Angular CLI - Angular Cheatsheet In the next installment of our Auth0 + Firebase + Angular tutorial, we'll display data from our dogs API and learn how to set up and implement realtime comments with Firebase! Check out How to Authenticate Firebase and Angular with Auth0: Part 2 - Async and Realtime now. Auth0 Docs Implement Authentication in Minutes
https://auth0.com/blog/how-to-authenticate-firebase-and-angular-with-auth0-part-1/
CC-MAIN-2019-47
en
refinedweb
GREPPER SEARCH WRITEUPS DOCS INSTALL GREPPER All Languages >> PHP >> laravel run schedule locally “laravel run schedule locally” Code Answer’s laravel run schedule locally php by Excited Echidna on Oct 05 2021 Comment 8 php artisan schedule:work Source: laravel.com run schedule laravel php by Light Lynx on Nov 27 2021 Comment 1 //for running tasks locally php artisan schedule:work //for running task once php artisan command:name laravel run schedule only on production php by Thoughtless Tortoise on Jul 29 2021 Comment 1 if (App::environment('production')) { $schedule->command('file:generate') ->daily(); //run your commands here } Source: stackoverflow.com Add a Grepper Answer Answers related to “laravel run schedule locally” laravel schedule run laravel scheduler every 10 minutes scheduling in laravel in custom cron laravel scheduler every 2 hours run laravel cron job on cpanel test laravel scheduler laravel queue work schedule cpanel laravel schedule kernel code sample laravel task scheduler error Queries related to “laravel run schedule locally” laravel scheduler laravel task scheduling laravel job scheduler laravel run schedule automatically laravel scheduler example laravel create schedule command run schedule laravel php artisan schedule:work php artisan schedule laravel scheduler cron laravel 8 scheduler schedule laravel example laravel run scheduler php artisan schedule work php artisan schedule list laravel task scheduling example laravel schedule job example laravel schedule post laravel scheduler run laravel task schedule laravel run schedule locally task scheduling laravel laravel task scheduling 5 minutes queue create scheduler in laravel laravel scheduler daily laravel scheduler run command cron job schedule laravel laravel schedule run every second laravel schedule controller method how to run laravel scheduler on server job scheduler in laravel laravel schedule run only one time task schedule laravel laravel schedule cron laravel 7 scheduler laravel schedule time how to run schedule in laravel run schedule in laravel laravel scheduler command laravel console command schedule laravel schedule hourlyat laravel cron job scheduling tutorial task scheduling in laravel 8 laravel schedule list laravel scheduler stop laravel command run schedule laravel queue task scheduler how to use task scheduler in laravel schedule list laravel laravel scheduler job laravel schedule a command stop schedule laravel laravel console schedule schedule laravel command artisan schedule schedule:run laravel scheduler run in laravel schedule a command laravel laravel scheduler run once laravel schedule command daily laravel schedule call controller function how does laravel scheduler work Run the task on a custom cron schedule laravel laravel schedule:work Task Scheduling with Cron Job in Laravel 8 laravel command create schedule laravel command schedule example laravel schedule function laravel run task schedule scedule work laravel laravel schedule output scheduler run laravel laravel schedule after laravel task scheduling cron laravel run schedule command manually laravel schedule tutorial laravel schedule event laravel schedule weekly laravel scheduler between command schedule laravel scheduler laravel jobs seconds command to run scheduler laravel 9 task scheduler laravel in cmd laravel scheduler every min schedule_work linux laravel php laravel run scheduler php artisan schedule:run only once laravel schedule command every day laravel scheduler job function how to run laravel scheduler in cpanel laravel scheduling commands creating a new schedule laravel cron job scheduler in laravel laravel scheduler work daily laravel scheduler on window server laravel start schedule how to schedule job in laravel for particular date cron laravel schedule not working locally laravel scheduler on server laravel scheduler blade tutorial laravel 8 scheduler not running laravel schedule multiple commands laravel schedule in job laravel schedule task create command laravel schedule:run output laravel scheduler artisan command laravel scheduler in linux laravel run schedule localy result a schedule task laravel laravel command schedule weekdays task schedulers laravell schedule task run laravel laravel demo to create and run task scheduler waht is laravel scheduler task scheduler in laravel 8 short-schedule laravel laravel schedule $schedule->job laravel laravel artisan scheduler run specific job how to task scheduler job in laravel schedule laravel server schedule laravel component creating scheduler laravel scheduler in laravel 8 laravel set scheduler delaying time run command laravel schedule laravel task scheduler without cron how to run laravel scheduler how to use schedule laravel task scheduling laravel 8 task scheduling laravel tutorial laravel scheduling system laravel task scheduling tutorial queue laravel restart schedule laravel add command without schedule if schedule run i want make something after laravel laravel schedule location run task scheduler from command line laravel every second in laravel schedule laravel monthly schedule can I run laravel schedule on local schedule run at jan 1st laravel cron job schedule laravel create create task scheduler for laravel create a task scheduler for laravel scheduling laravel exmaple every hour schedule in laravel schedule command laravel run only one time how to run scheduleer in spwcific days laravel how to set laravel scheduler in server how to stop laravel scheduler scheduler run command every second laravel insert data from scheduler laravel run schedule laravel command one tine everyHour laravel schedule schedule daemon laravel schedule command with laravel 8 history of schedule laravel how to check scheduler running or not in laravel how to create laravel scheduler without cron job scheduling laravel package php artisan schedule:work for production running schedule periodically in laravel run only one schedule laravel task scheduling laravel run task once task scheduling with cron job in laravel schedule time laravel stop schedule command laravel schedule:run laravel >> run scheduler laravel in live server run scheduled task laravel 7 run specific schedule command laravel start a laravel scheduler command local make schedule on in local project laravel scheduler in laravel web server schedule laravel list example Write the scheduler code in laravel schedule laravel command on specific date make and run schedular laravel schedule laravel command every second schedule run laravel server schedule laravel commands weekly on a specific date and time schedule laravel on local schedule post laravel script weekly scheduling laravel schedule php artisan serve schedule laravel saturday job schedule in laravel laravel scheduler:work where are the command outputs laravel schedule laravel task scheduler daily laravel terminate schedule laravel watch scheduler listen for schedule in php artisan php artisan check scheduler php artisan show schedule by name php artisan show schedule status laravel scheduler night laravel scheduler ON SERV how to run laravel scheduler during tests laravel scheduler run jobs on sequence laravel scheduler run now laravel scheduler task bangla how to add command to schedule laravel how often should u run laravel schedule php scheduler events in laravel schedule laravel database schedule lavarel task schedule laravel docs schedule time laravel without cron schedule with option laravel scheduleler laravel scheduler by year laravel scheduler laravel hot to configure schedule jobs example in laravel time zone scheduler laravel run laravel scheduler every 30 seconds task scheduler with jobs laravel task scheduler laravel using controller schedule job once laravel schedule command only ones in laravel schedule cron calling twice in laravel schedule every second laravel laravel scheduler job tutorial laravel kenel run schedule twice laravel schedule turn off listed command laravel how to find schedule list laravel schedule run once a day laravel cron job for scheduler laravel query schedule laravel run scheduler in or ouside of docker laravel schedule calendar app laravel cron operation with scheduler example laravel 8 scheduler on linux laravel $schedule create insert data laravel scheduler run only the first cron job laravel scheduler run every day server laravel scheduler run continuously laravel scheduler not working locally laravel 9 task scheduler laravel scheduler in production laravel commands schedule calling multiple laravel scheduler everyTenMinutes laravel schedule name laravel artisan run schedule:run automaticcaly laravel schedule short laravel scheduler by itsolution stuff laravel 8 schedule command not working is schedule command run in laravel laravel scheduler cron job tutorial how to run scheduler in laravel on local server laravel schedule command after another one laravel schedule command every six hours laravel schedule command ovh laravel run schedule command in start of week laravel schedule cron time laravel run job on schedule laravel How to stop php artisan schedule:run laravel schedule every other day laravel create schedule job command laravel schedule command in ec2 laravel how to create schedule laravel jobs scheduling laravel local task scheduler laravel Module schedule call laravel php artisan schedule:run options laravel how does schedule:run work laravel run schedule in cron laravel running scheduled command dynamically laravel schedule a one time job laravel schedule between laravel how quick is the scheduler laravel command schedule run in cpanel laravel 9 schedule laravel artisan command schedule laravel artisan scheduler list laravel call schedule laravel command schedule when condition laravel command without schedule laravel create command to schedule jobs laravel creating schedule laravel daily schedule run time ? laravel does not work $schedule execute command laravel 8 scheduling laravel schedule run vs in background laravel schedule runs only one time laravel schedule task to run function laravel schedule work on server laravel schedule:run automatic laravel schedulee:run Laravel scheduler before call laravel scheduler command in localhost laravel scheduler keeps running laravel scheduler not running automatically on local server laravel schedule run on weekend laravel schedule command when event happens laravel schedule daemon laravel schedule exec artisan command laravel schedule function tutorial laravel schedule info laravel schedule run on boot laravel schedule run now laravel schedule run in specific time laravel schedule not running at list time laravel schedule production laravel schedule on boot laravel scheduler run command once cron scheduler run without command laravel daily fixed time in laravel task schedule run deployer scheduler laravel does laravel scheduled commandfs run in the console force php artisan schedule:run how can I store schedule result in database laravel how does schedule:work laravel work how run scheduler from within laravel how to auto run schedule in laravel local cron job how to do schedule in laravel creating a schedule in laravel $schedule laravel daile add a command to laravel scheduler artisan call schedule:run auto schedule laravel 8 in localhost call laravel module using schedule create schedule system in laravel command get run schedule laravel command scheduling laravel create a new task scheduler in laravel using command create command laravel scheduler how to implement artisan schedule run laravel job scheduling in laravel example how to task scheduler job in laravel every second how to use laravel scheduler outside laravel project how we run command in schedule laravel in 1 second to time running schedule:run laravel larave php schedule laravel $schedule->call laravel 5 task schedule laravel 6 schedule job example laravel 7 task scheduling laravel 8 jobs scheduling how to make schedule in laravel how to perform schedule in laravel how to run a scheduled task laravel in terminal how to run laravel 6 schedule continuously how to run laravel scheduler on local server server how to run laravel task schedulers on live server how to run scheduler in laravel in hosting project how to schedule laravel dailyat laravel how to run task scheduler laravel using php artisan command how to run scheduler in laravel and php cli how to run schedule job laravel laravel scheduler run cmd command run laravel schedule on background laravel task scheduling management system run laravel scheduler locally laravel run controller with scheduler laravel register scheduler command laravel php laravel scheduler hangs php artisan terminate schedule run run scheduler laravel linux production server run scheduled taks laravel run schedule laravel linux run schedule on production laravel run schedule on local machine laravel php artisan schedule:work with docker Laravel: schedule larvel run scheduler look schedule command laravel php artisan cron run schedule output laravel scheduler response Laravel's scheduler php artisan schedule run stop laravel task scheduling using admin panel php artisan schedule run vs schedule work php artisan schedule run powershell command php artisan schedule run --high laravel trigger scheduler run scheduler on sql server laravel run view thorough scheduler in laracvel laravel scheduler test local running laravel schedule in controller run task scheduler laravel on linux laravel scheduler setup and command laravel start short schedule running schedule laravel on localhost running same command twice laravel scheduler laravel scheduler timing run scheduler only particular date laravel laravel scheduler update db run task scheduler and job laravel run serve command with scheduler in laravel laravel scheduler run local laravel log file run specific schedule in laravel laravel scheduler run moring laravel scheduler run vs scheduler work laravel stop scheduled command run task schedule laravel run scheduler only particular date laravel one time sceduler in laravel task scheduling command laravel task scheduling laravel 8 tutorial offline scheduling laravel how to stop laravel scheduling everysecond task scheduling in laravel 8 example task scheduling just runnning one time in laravel how to stop scheduling in laravel schedule not running on production laravel site run artisan command in task scheduling laravel scheduled task command laravel send output of a schedule laravel laravel schedule run command after another command command to run scheduled task in laravel php artisan schedule run production how to do scheduling run always in laravel scheduling tasks laravel laracast aravel Task Scheduling set laravel task scheduling cpanel scheduling system in laravel how to schedule a process for later in laravel how to run a scheduled task laravel scheduling laravel 8 laravel schedule async how to get the output of schedule in a file laravel laravel task scheduling event how to schedule commands in larave; laravel scheduler restart laravel task scheduling daily at "12.00" Laravel Run Task Manually laravel task scheduling local laravel kill scheduled command laravel task scheduling make status inactiive laravel task scheduling linux laravel task scheduling host laravel task scheduling for specific database day record everminute schedule laravel laravel scheduling run controller laravel schedule exec command from command line laravel command schedule retry laravel scheduling a task laravel scheduled task s laravel scheduling tasks laravel task scheduling bloog laravel task scheduling auto confirm laravel task scheduling 5 minutes laravel task for events and scheduling related Laravel Set Dynamic Task Scheduling laravel task scheduling minimum time limit schedule dailyon laravel laravel add task to a scheduling controller laravel task scheduling with when method laravel task scheduling with different queue name $schedule->exec('php artisan') schedule command run at last day of month laravel schedule command laravel parameter task scheduling trong laravel laravel Scheduling larvel scheduling make custom tasks scheduling in laravel - command laravel 7 task scheduling or carbon laravel command classes task scheduling $commands = []; laravel cron job task scheduling sheard hosting laravel task scheduling on server laravel task scheduling on queue laravel task scheduling on live server looping a schedule task laravel print schedule result laravel automatically update db laravel scheduler laravel Task Scheduling v8 laravel conjob scheduling run scheduler only particuler date laravel run laravel scheduler on ubuntu restart schedule laravel laravel schedule php artisan schedule run laravel schedule command laravel command schedule scheduler in laravel schedule in laravel laravel scheduler every second laravel scheduling laravel schedule run laravel scheduler without cron schedule run command in laravel laravel run schedule manually laravel run schedule how to run scheduler in laravel laravel schedule jobs artisan run schedule schedule job laravel scheduling in laravel laravel check if schedule is running laravel schedular laravel release schedule laravel jobs schedule laravel run schedule command laravel scheduler tutorial laravel task scheduling tutorial schedule task in laravel laravel scheduler every hour laravel schedule every monday laravel schedule daily at schedule laravel 8 laravel make scheduler laravel scheduler not running automatically task schedule in laravel 8 laravel schedule log laravel schedule:run how to run schedule in laravel excipt sunday schedule command in laravel laravel scheduler cron example task scheduler in laravel run laravel scheduler schedular in laravel laravel daily schedule job scheduling in laravel laravel schedule work laravel task scheduler example laravel scheduler not running laravel 7 scheduler example schedule commands laravel laravel 8 schedule schedule daily laravel laravel schedule call laravel short schedule using task scheduler laravel schedule run in background schedule cron job laravel laravel commands schedule laravel schedule command every minute schedule call function laravel what is scheduler in laravel laravel schedule exec() laravel schedule command monthly laravel schedule run not working schedule event laravel schedule in laravel example laravel schedule dailyAt laravel schedule description laravel schedule when() how to build schedule with laravel output of a schedule task laravel how to create schedule command in laravel laravel scheduler windows laravel 5.6 how to make php artisan schedule:run laravel cron job schedule laravel 7 schedule changes laravel 8 cron job task scheduling laravel 8 job scheduling scheduler in laravel tutorial php artisan run scheduler php artisan schedule stop laravel php artisan schedule run laravel database scheduler php artisan run:schedule laravel scheduler dailyat not working laravel schedule job run immediately php artisan schedule:run specific ron Scheduling Jobs Laravel scheduling task laravel starting the scheduler laravel how to run schedule laravel Laravel task scheduler running outdated code laravel schedule set delaying time laravel scheduler run always cron job laravel schedule laravel job schedule laravel schedule specific time laravel scheduling jobs laravel 8 scheduler run how to schedule laravel scheduler from window server laravel scheduler running twice crate schedule laravel how to make schedule in laravel 8 laravel add command to schedule laravel schedule run vs work laravel schedule stops working laravel schedule delaying laravel time schedule laravel scheduler dailyAt scheduler run only one date and time laravel laravel create schedule task task scheduler laravel documentation how to schedule run laravel 8 schedule task laravel method laravel cron task scheduling laravel deployer scheduler laravel event scheduling laravel how to schedule a command schedule a task using laravel $schedule- call laravel laravel 8 artisan schedule schedule in laravel 8 schedule laravel in docker laravel 9 task scheduling laravel generate schedule from start date to end date run one schedule laravel laravel one scheduler run run laravel scheduler command run laravel scheduler on cpanel run a specific schedule laravel schedule run artisan laravel schedule every day laravel task scheduling controller schedule cron jobs laravel make schedule command laravel difference between php artisan schedule:work & php artisan schedule:run schedule tasks in laravel laravel 8 Scheduling Shell Commands laravel schedule tasks php artisan Scheduler:run how to use laravel command in scheduled task laravel schedule on specific date dd in scheduler command laravel scheduling jobs laravell cron job vs scheduler in laravel scheduler on production laravel create scheduler command in laravel console not running on schedule:run in laravel does laravel scheduler use cron how to scheduler in laravel how to run laravel scheduler task in window run schedule automatically in laravel local how to debug in scheduler in laravel schedule command laravel from database how to write function in scheduler in laravel how to put a schedule post on laravel 8 schedule laravel at exec schedule automatically in laravel schedule:run laravel schedule command on specific date laravel how many scheduler we can create in laravel 8 schedule command on start laravel launch schedule job laravel short laravel task scheduler php artisan schedule run laravel 7 task scheduling in laravel 9 schedule work in live laravel schedule something in laravel Task Scheduling with Cron Job in Laravel 5.8 task scheduler laravel stop run scheduler every second laravel starting the scheduler laravel 6 start schedule cron laravel php laravel scheduler job locally php artisan to run scheduler schedule:run + laravel scheduling system laravel windows laravel schedule schedule laravel exemple "run task scheduler during tests laravel" add laravel schedule to server artisan schedule stop schedule laravel local using schedule in laravel schedule laravel getting old code use laravel schedule when does laravel daily schedule run scheduling work in laravel schedule on server laravel $schedule laravel ->twiceDaily laravel tasks schedule in future laravel see stuff to run in schedule:run laravel set scheduler time manually laravel task scheduler i commands schedule in laravel Laravel: schedule a task with Mode; make scheduler command in laravel php artisan schedule run every second check schedule task laravel run how to make schedule:run from controler laravel laravel scheduler not running automatically lically laravel scheduler once every secod laravel scheduler run command production how to create schedule laravel laravel scheduler saturday only once laravel scheduler to change status daily laravel scheduler to run every 5 seconds laravel scheduler two times how to run schedule localy laravel schedule laravel devopsschool task schedule work in server for laravel schedule run on development laravel schedule watch laravel scheduled in laravel run daily scheduler and jobs in laravel stop schedule task laravel scheduler larvel register command in schedule in laravel with minutes response in scheduler laravel run scheduler on custon time laravel running a scheduler every wednesday in laravel task scheduler laravel is not running schedule command dailyat laravel schedule command with option laravel schedule dailyat function laravel schedule job example in laravel 8 laravel scheduler friday only once laravel custom command schedule laravel error runing artisan:schedule laravel schedule run, initiates only one task laravel schedule run a file laravel schedule one time laravel run all schedule laravel running scheduler in crontab laravel schedule job once laravel create schedule table laravel start schedule from controller laravel $schedule->call(function () laravel scheduler run on specific date and time laravel 8 schedule job make laravel create model scheduler Laravel scheduler is not running automatically laravel add artisan schedule to path laravel schedule:run after pulling laravel schedule end time laravel scheduler before after laravel schedule not in console laravel schedule posts laravel 8 schedule run on boot laravel schedule job or command laravel 7 scheduler locally in one second 2 time run schedule laravel how to stop schedule run in laravel laravel schedule js laravel schedule cronjob run temporarly from console laravel run schedule on sunday laravel schedule command twice weekly laravel run schedule automatically windows laravel schedule daily at work every minute laravel run command schedule laravel schedule every one month laravel schedule everyminute laravel schedule job everysecond schedule call and schedule command laravel laravel how to run schedule ont time event laravel jon schedule laravel local task scheduler fails laravel package add to schedule laravel ru schedule laravel run linux command with task scheduler laravel run schedule on sunday midnight laravel scedule laravel schedule a task laravel schedule call when laravel getlist of schedule and run a particular schedule laravel 8 task scheduling manager package laravel add scheduler from controller laravel artisan get scheduler time laravel artisan scheduler run specific job once laravel call scheduler inside job file laravel execute shell command in task scheduler laravel console schedule each day laravel create new schedule laravel cron job scheduling tutorial itstaff laravel database-schedule laravel schedule comand run automatically laravel schedule:run in background laravel schedule run weekly laravel schedule setup laravel schedule wehn laravel schedule() laravel schedule:run not working laravel scheduler 8 Laravel scheduler call laravel scheduler demo laravel scheduler like node jd laravel scheduler programatically laravel schedule command not working in local laravel schedule command with options laravel schedule daily at time laravel schedule finish laravel schedule in package laravel schedule insert data daily laravel schedule run after command laravel schedule run not running automatic laravel schedule run command laravel schedule report result in terminal laravel schedule post system laravel 8 task scheduling force run task schedule laravel custom scheduling laravel daily schedule package in laravel difference between schedule run and call in laravel everyminit run laravel scheduler how execute laravel schedule command controler how can we call schedule in laravel after date how is laravel schedule jobs run in server how run scheduler from within laravel while serving how to call your task scheduling without php artisan command how to enable laravel scheduler how to laravel scheduler $schedule->command in laravel add php artisan schedule work artisan run schedule' automatically run scheduler in laravel cannot run laravel scheduler command checking if laravel schedule:work is running command schedule laravel; commands and schedule in laravel create a scheduler laravel create job scheduling in laravel cron how to make a schedule run on cpanel for laravel how to start short scheduler in laravel how to use a laravel scheduler how to use schedular in laravel 8 import Schedule laravel is task scheduler running on local laravel how to set up server for laravel scheduler laravel $schedule->command artisan laravel 6 cron job manually run schedule laravel 6 scheduler laravel 8 cron job task scheduling example how to start scheduler laravrl how to run particular scheduler command in laravel how to put a schedule store on laravel 8 how to run a scheduler in one time in laravel how to run laravel schedule i how to run laravel scheduler on windoes how to run schedul in laravel how to see scheduler response in laravel how to schedule a task to run everyday laravel how to run scheduler in laravel daily how to run scheduler in larave without command php artisan schedule:work how to run schedule inside call laravel set report when schedule run in laravel run schedule crontab laravel run laravel scheduler on background run laravel schedule command on server run artisan command in scheduler laravel refresh schedule laravel run schedule from terminal in laravel run schedule:work with serve command laravel run scheduler laravel 7 in local run scheduled commnads laravel run schedule tasks laravel run schedule on perticler date time laravel run schedule laravel ubuntu laravel task scheduling crontab php artisan run command schedule lavavel schedule command php artisan run schedule laravel make laravel scheduler example make schedule run automatically laravel laravel use schedule inside schedule php artisan schedule:run in background php artisan schedule work vs run php artisan schedule run showing callback php artisan schedule run only once php artisan schedule --help laravel Task Scheduling crud run task scheduler laravel on server run two commands in laravel scheduler laravel scheduler tasks laravel scheduleworkj laravel set up schedule laravel scheduler times laravel scheduling job running schedule command in the background laravel running laravel scheduler programmatically running laravel scheduler jobs on server laravel Scheduling Shell Commands $schedule->exec( laravel start scheduler dcocker laravel task scheduling cpanel run short scheduler laravel terminal run specific php artisan schedule:work run scheduler using artisan laravel task scheduler run commande laravel scheduler run with scheduler laravel stop scheduler laravel task scheduling cron job hostinger scedule laravel how to scedule in laravel use task schedule in laravel controller task scheduling format time japan laravel task scheduling in larave task scheduling in laravel 8 blog php artisan make command scheduled task task scheduling in laravel to perform action after certain time automatically Task scheduling in laravel package development exemplo Scheduling laravel run laravel scheduler in docker command line run scheduled task in laravel every minutes task scheduling in laravel dynamic schedule time manage laravel debugging laravel scheduling task php artisan schedule run on production run task scheduling in cpanel laravel how to run scheduled task on local laravel artisan run scheduled tasks how to run artisan schedule in production Queues - Task Scheduling laravel scheduling laravel school time table how to run a scheduled task in laravel php laravel run task Task Scheduling laravel works in production mode? laravel scadule laravel task scheduling equivalent model not working laravel task scheduling everymonth laravel task scheduling datetime how to start scheduler in laravel vapor laravel task scheduling for calling controller fucntion laravel task scheduling minimum time laravel task scheduling management system package laravel task scheduling list all upcoming tasks laravel project with event scheduling laravel Task Scheduling from controller how to stop schedule woring in command class in larave laravel scheduling event macro laravel scheduling not working laravel schedule exec command from command line Laravel Scheduling call controller laravel schedule debug laravel schedule countdonw if statement on schedule laravel laravel task scheduling block until one is completed laravel task scheduling at laravel scheduling tasks event laravel schedule:work is not command laravel scheduling worked on everysecond laravel task scheduling monitor laravel vapor task scheduling schedule everymunite in laravel laravel task scheduling with user id laravel automatic scheduling laravel task scheduling with chain schedule laravel day saturday task scheduling with php artisan larave docs task scheduling laravel scheduling task tutorial itstaff tasks scheduling laravel automated test task scheduling laravel schedule parallel laravel laravel task scheduling mutiple time php artisan stop schedule laravel task scheduling on run once laravel task scheduling only run once laravel task scheduling not working on cpanel laravel task scheduling not support equivalent model reset schedule laravel laravel command schedule medium Artisan::call('schedule:run') laravel task scheduling test check laravel comand schedule laravel task scheduling start laravel schedule monthly on schedule laravel task scheduler laravel scheduler laravel laravel task scheduler php artisan schedule:run laravel schedule job laravel calendar scheduler schedule run laravel schedule command laravel run scheduler laravel schedule task laravel laravel schedule not running laravel cron schedule php artisan schedule run command laravel schedule task laravel cron scheduler laravel scheduler cron job artisan schedule:run php artisan run schedule scheduling laravel laravel schedule every second laravel schedule command not working laravel short schedule laravel schedule daily laravel stop schedule schedule jobs in laravel how to run schedule command in laravel laravel dynamic scheduler task scheduler laravel 8 laravel schedule commands laravel run schedule on server run laravel scheduler locally laravel schedule every hour laravel start scheduler artisan schedule run run schedule laravel command scheduler laravel 8 run scheduled task laravel laravel scheduler every day task scheduling in laravel how to schedule any task in laravel laravel 8 schedule example laravel if application is from command schedule short schedule laravel run schedule command laravel schedule artisan command laravel artisan schedule in controller laravel schedule stop loop schedule command laravel laravel run schedule->command manually job scheduler laravel laravel create schedule laravel job scheduling laravel schedule run controller method schedule a job laravel run laravel scheduler on windows laravel run schedule only on production how to schedule command in laravel laravel schedule calendar laravel scheduler monthly on Laravel Scheduler jobs vs commands laravel tasl scheduling laravel schedule daily time laravel turn off scheduler what is schedule in laravel laravel schedule when laravel schedule exec laravel schedule a job Laravel Cron Jobs Scheduling laravel run scheduler locally Running The Scheduler laravel laravel sceduler laravel task scheduler in seconds cron schedule laravel php artisan schedule:list execute laravel scheduler laravel 5 scheduler schedule jobs laravel laravel schedule command parameters laravel - creating custom commands & scheduling tasks laravel scheduler docs laravel scheduler date laravel schedule make command scheduler work laravel laravel scheduler targetting a controller laravel schedule at specific date php artisan schedule:run not working laravel 8 laravel schedule command vs job laravel schedule deployment laravel schedule methods laravel schedule once create schedule in laravel event schedule laravel difference between php artisan schedule:work & php artisan schedule:run laravel check schedule list laravel stop schedule task laravel stop task scheduler laravel scheduler package how to create scheduler in laravel php artisan make schedule how to implement scheduler in laravel laravel schedule in seconds laravel schedule call vs command why laravel scheduler not running automatically working with laravel schedule laravel scheduler execute how to run schedule tasks in laravel schedule laravel run command schedule class laravel schedule something laravel how to setup scheduler in laravel schedule work laravel laravel docker scheduler laravel call scheduler automatically schedule orders on laravel schedule a task for one execution laravel laravel $schedule- run job daily schedule monthly laravel laravel 8 schedule run after processing laravel 8 task scheduling tutorial run task scheduler laravel schedule laravel serwer windows schedule task laravel laravel task scheduler anil scheduler command laravel laravel trigger scheduler local laravel job scheduler command how to run task scheduler laravel php artisan schedule:work run in background laravel schedule call info laravel debug scheduler laravel schedule cron job laravel windows task scheduler laravel scheduler run every week job scheduling laravel how to stop specific schedule task in laravel laravel run schedule work command laravel scheduler doesnt run locally laravel schedule artisan command dailyat schedule laravel example run schedule automatically in laravel scheduler run command laravel schedule something that run automatically in laravel schedule laravel 6 configure task scheduler laravel schedule commands in laravel how to run a scheduler in laravel as run a project how to run schedule method daily laravel scheduling a command in laravel how to start scheduler in server laravel scheduler run job laravel implementing task scheduler in laravell laravel8 artisan schedule:run schedule file in laravel execute job laravel from scheduler failed schedule laravel for each time in laravel task scheduler how to call a controller everyday in schedule laravel How to create a task scheduler website laravel starting tha scheduler laravel run code using laravel schedule run task schedule laravel on server php artisan schedule run 1 task scheduling laravel for laravel schedule command run laravel scheduler command on production laravel time schedule laravel 8 schedule:run vs schedule:work laravel run scheduler laravel each minute run scheduler manually laravel schedule:work on production laravel task schedule on server laravel test laravel scheduler locally run schedule laravel for a particular job windows laravel schedule run Schedule laravel second schedule laravel only running once local make schedule command in laravel automatic schedule not working laravel 8 windows 10 laravel schedule run schedule run laravel docker container to run the scheduler php artisan command locally using command to run schedule laravel set task scheduler to run every min laravel what after schedule:run+laravel what schedule run deos in laravel scheduler run laraavel laravel scheduler with job tutorial where are the laravel job scheduler located when to use laravel scheduler laravel task scheduler restart laravel use dynamic cron schedule latavel scheduler command options php artisan all schedule command php artisan schedule::run laravel scheduler weeklyon laravel scheduler monthly once how to run schedule command automatically in laravel laravel scheduler post laravel scheduler run every 30 seconds does schedule scheduler run daily from today laravel laravel scheduler sundays only once how to check if scheduler is running laravel how set schedule in laravel work update schedule laravel local schedule to every second laravel schedule laravel six daily task schedule with docker laravel\ schedule time laravel seconds task scheduler demo in laravel scheduleer laravel scheduler before laravel show response in scheduler laravel schedule laravel 8ù schedule artisan php run auto schedule laravel run task schedule job in laravel in windows task scheduler laravel windows task scheduler laravel automatically on server local schedule command laravel on server schedule controller laravel schedule db update laravel task scheduler in larave laragon task schedule laravel laravel documentation task scheduler laravel get data from database in scheduler laravel schedule run without any command laravel name schedule task laravel que scheduler laravel run schedule cron now laravel schedule a function in my controller laravel schedule call class scheduler list laravel larave add schedule job laravel 6 scheduler list laravel scheduler run on production server laravel 8 scheduler job laravel 8 task schedule laravel 9 scheduler every second laravel automatic schedule without cron laravel command schedule does is construct every time? laravel schedule call model function in controller laravel command run schedule daily laravel schedule not running job laravel schedule refresh laravel schedule without cron laravel scheduler controller example laravel 7 run schedule how to trigger schedule laravel how to run shedule work for laravel 6 laravel console run schedule laravel schedule after execution laravel schedule command not running at cron laravel run schedule job after 6 hours in production server laravel schedule cron daily laravel create a command to run schedule laravel How to stop php artisan schedule:work laravel enable schedule task run laravel schedule free script laravel schedule command not working automatically laravel run artisan command schedule laravel job sample with scheduling laravel keep running schedule laravel make a scheduler laravel php artisan schedule work laravel run jobs from scheduler laravel run schedular from cron job laravel running scheduled command callback laravel schdule laravel schedule artisan call laravel schedule check laravel schedule command from controller method laravel 9 create command task scheduler Laravel Advanced - Task Scheduling - CRON Job laravel artisan run schedule:work automatically laravel bus schedule laravel check command scheduler laravel command scheduling laravel create a new scheduler laravel create scheduled task laravel cron schedule use system time laravel defin command and schedule "laravel" "task scheduling" "database" laravel schedule run task once laravel schedule runs multiple laravel schedule shell command laravel schedule work command cpanel laravel schedule:run example laravel schedule run once a day full update laravel scheduler after laravel scheduler calling funtion and use when laravel scheduler event at laravel scheduler not runnig laravel schedule run permanently laravel schedule lo laravel schedule commands commands laravel schedule event to date laravel schedule function for date time laravel schedule in php laravel schedule make schedule a task laravel in server laravel schedule run no scheduled laravel schedule run a file from terminal laravel schedule report result laravel schedule output add time laravel 8 schedule run but not excecute cron laravel schedule custom scheduling laravel with due date deploy scheduler laravel docker scheduler laravel Execution #2 output laravel scheduler what is it? cron job time schedule format laravel how does laravel schedule works how run schedule in laravel local how schedule works laravel how to debug scheduler laravel cron job vs task scheduling laravel 8 check scheduler is running laravel * * * * * php artisan schedule:run apply new code schedule php laravel artisan schedule:start bus schedule laravel check the outout of command running via schedule laravel class schedule laravel command scheduler laravel crear scheduling job laravel create a scheduler run every second in laravel create new schedule laravel laravel 8 schedule run how to stop laravel scheduler in terminal how to use a laravel scheduler in laravel 8 how to use task scheduler laravel on server in 1 second to time runned schedule:run laravel job scheduler laravel automatically laravel create task schedule laravel $schedule->exec inside function handle() laravel 6 init schedule laravel 6 scheduler example laravel 8 execute manually schedule command how to set scheduled commands time in laravel how to motitor Schedule in laravel how to run a schedule command in laravel how to run command in a schedule task in laravel how to run laravel schedule in laravel 6 how to run laravel scheduler task windows how to set schedule job in laravel How to Schedule tasks with cron jobs using Laravel Scheduler how to run task scheduler on particular date in laravell how to run schedule command in laravel in the backgroud how to run schedule laravel commands how to run schedule commands in laravel larvel manually call scheduler run php artisan schedule command in controller run laravel scheduler localy run schedule in laravel cron response when schedule laravel production run schedule laravel run schedule laravel into live server run scheduler on particular date time laravel run scheduler in server in laravel run schedule using link laravel laravel task scheduling error on stop laravel task scheduling every second php laravel local scheduler long running task laravel laravel scheduler run controller function load laravel package schedule make a schedule in laravel make schedule in laravel new code schedule php laravel not apply php artisan schedule:work on linux Php artisan schedule:run command php artisan schedule run vs work laravel turn of scheduler php artisan schedule run command php 8 php artisan run scheduler laravel manually run laravel scheduler running laravel artisan schedule in controller laravel scheduler that runs every day laravel scoyu laravel see schedule laravel scheduler specific date schedual laravel running scheduler jobs laravel in cpanel laravel scheduling every 30 min laravel scheduler with serve laravel scheduler with no command laravel scheduler w laravel scheduler work run scheduler with serve laravel laravel scheduler run hourlyAt run scheduler with run server laravel laravel task scheduler run commande artisan laravel task scheduler after laravel scheduler set cron locall laravel task schedule kill laravel stop task scheduler with artisan command weeklyat laravel schedule task scheduling laravel run once task scheduling fixed schedule in laravel stop schedule php artisan schdule laravel how to set up a time for a scheduling platform laravel task scheduling in laravel dev.to how to stop task scheduling in laravel running laravel schedule on aws run schedule on production laravel ubuntu "laravel" "task scheduling" "dynamic" fire scheduled command earlyer laravel laravel command schedule if production do task scheduling in every 10 seconds in lara php artisan schedule:run only ones run scheduled task laravel locally \Artisan::call('schedule:run'); config laravel scheduler in server how to run particular scheduler command of laravel in cpanel screen laravel task scheduling scheduling site with laravel php artisan scheduling scheduling laravel jobs to execute from database laravel schedule for production only laravel task scheduling equivalent model methos not working laravel scheduler shell_exec laravel scheduler run in midlenight laravel task scheduling east african time laravel task scheduling daily at 12.00 laravel schedule command not working in local + laravel 8 how to stop scheduler task laravel laravel keep running schedule:run in mac os everyMinute laravel schedule laravel run schedule automatically digitalocean laravel task scheduling in ubuntu server laravel task scheduling for specific day laravel task scheduling call function laravel consolekernel schedule laravel schedule seconds laravel scheduling envoy laravel scheduling application laravel schedule dailyat not working laravel scheduling step by step laravel task scheduling 8 laravel task scheduling block laragon schedule laravel task scheduling 4 time a day laravel schedule specific day laravel scheduling tools task scheduling simple project laravel laravel 9 task scheduling step by step laravel test task scheduling what is scheduling in laravel laravel automatic start scheduling aravel scheduler what is laravel task scheduling task scheduling to change the value in DB laravel task scheduling vs event laravel Laravel - Creating Custom Commands & Scheduling Tasks free schedule:work time out laravel turn of scheduling in laravel laravel task scheduling windows server laravel documentation task scheduling laravel task scheduling on windows laravel task scheduling on queue name laravel dynamic task scheduling laravel task scheduling not working laravel Task Scheduling not run laravel command files task scheduling $commands = []; laravel task scheduling when schedular stop laravel laravel task scheduling remove run scheduled laravel Laravel Task Scheduling set to run every minute but it run only once .
https://www.codegrepper.com/code-examples/php/laravel+run+schedule+locally
CC-MAIN-2022-40
en
refinedweb
Human readable captcha for z3cform Project description Introduction collective.z3cform.norobots provides a “human” captcha widget based on a list of questions/answers. This captcha can be used as a plone.app.discussiom captcha plugin. The widget is based on z3c.form.TextWidget. Requirements - tested with Plone 4.0 as a plugin for plone.app.discussion, should work with Plone 3 - plone.app.z3cform Installation Just a simple easy_install collective.z3cform.norobots is enough. Alternatively, buildout users can install collective.z3cform.norobots as part of a specific project’s buildout, by having a buildout configuration such as: [buildout] ... eggs = collective.z3cform.norobots ... [instance] ... zcml = collective.z3cform.norobots In portal_setup, apply the profile collective.z3cform.norobots:default. Add a new question In the Plone Property Sheet “norobots_properties” (portal_properties/norobots_properties), add a new property: Name: The question id (ex: "question4") Value: your_question::the_answer (ex: "What is 10 + 12 ?::22") Type: string Usage You can use this widget setting the “widgetFactory” property of a form field: from zope import interface, schema from z3c.form import interfaces, form, field, button, validator from plone.app.z3cform.layout import wrap_form from collective.z3cform.norobots.i18n import MessageFactory as _ from collective.z3cform.norobots.widget import NorobotsFieldWidget from collective.z3cform.norobots.validator import NorobotsValidator class INorobotsForm(interface.Interface): norobots = schema.TextLine(title=_(u'Are you a human ?'), description=_(u'In order to avoid spam, please answer the question below.'), required=True) class NorobotsForm(form.Form): fields = field.Fields(INorobotsForm) fields['norobots'].widgetFactory = NorobotsFieldWidget # wrap the form with plone.app.z3cform's Form wrapper NorobotsFormView = wrap_form(NorobotsForm) # Register Norobots validator for the correponding field in the IContactInfo interface validator.WidgetValidatorDiscriminators(NorobotsValidator, field=INorobotsForm['norobots']) for more information see contact_info.py in the plone_forms directory in the package. Credits Changelog 1.1 - 2010-09-15 Support for using as a plone.app.discussiom captcha plugin (Plone 4) [Petri Savolainen] Finnish translations [Petri Savolainen] 1.0 Initial release [Sylvain Boureliou] Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/collective.z3cform.norobots/1.1/
CC-MAIN-2022-40
en
refinedweb
Introduction Multithreading Interview Questions and Answers C++ Before we go into the threading and related concepts regarding the interview, I would like to give a brief idea about how things work before the title of the article i.e. Multithreading starts to play its part. Let’s quickly see the details There are 3 types of computer language - Low level (machine level) - The middle level (Assembly level) and - High level (like C++, JAVA, COBOL etc) These high-level languages (in our case we will consider C++) interact with the machines with the use of programs (which has codes built into them). A translator helps to translate the details to machine language (0’s and 1’s) just like a tour guide that translates one language to another. Now once this exchange of information takes place between man (via codes) and a machine, concepts like threading and process come into the picture. These details we will discuss via question and answer for “C++ threading interview questions”. Now, if you are looking for a job that is related to Multithreading C++ then you need to prepare for the 2021 Multithreading Interview Questions C++. It is true that every interview is different as per the different job profiles. Here, we have prepared the important Multithreading Interview Questions and answers C++ which will help you get success in your interview. In this 2021 Multithreading Interview Questions C++ article, we shall present 10 most important and frequently asked Multithreading Interview Questions C++. These interview questions are divided into two parts are as follows: Part 1 – Multithreading Interview Questions C++ (Basic) This first part covers basic Multithreading Interview Questions C++. Q1. What is multithreading? Answer: The thread is a sequence of execution; it can also be referred to as a feature of OS (operating system). Let’s understand the above sentence in a simpler way. For any action taken by a user on the system, there must be a process to complete that action (as asked by a user). Every process must have at least one thread associated with it. The OS built in every system is responsible to allocate the processing time for every thread. So Multithreading is a more specialized way of multitasking. This behavior allows the user to perform multiple tasks simultaneously. Q2. Come up with every detail that you know regarding the Process? Answer: Let’s see what a process exactly is, Mr. A logs into the system and wants to see the dashboard of his business. In order to view his business dashboard on the system, he will navigate to the dashboard section. In doing so he generates a process that is handled by the respective system OS. The OS will allocate memory for the process and also the OS will make sure that the memory of one process is not accessible by other processes. Hence, we can say that a process is nothing but a program in execution. Layout (components) of the process – A process has different stages which can also be referred to as process life cycle – - Start - Ready - Running - Waiting - Terminated or Exit Let us move to the next Multithreading Interview Questions C++. Q3. Highlight some of the advantages of thread with its types? Answer: In general, there are 2 types of thread - UI thread – these are used to create UI components. Eg. Message box pops-out to display any information to the user. - Worker Thread – no message pump is included in it Advantages – - Minimizes the context switch time - Boost communications - Easy to create and connect the threads - Threads usage makes the process more concurrent Q4. Why do we need more than one thread? Answer: This is the common Multithreading Interview Questions C++ asked in an interview. As we know that there must be at least one thread associated with every process. Talking of more threads to a single process has multiple benefits. - UI interface – The first and foremost reason is to have a great UI with great user experience. Multi-threading concept help in doing so. - Multi-tasking – with having more threads one can do more things simultaneously. - Usability – different components of the system might be using different components at a given point of time. Here multi-threading can be a time saver. Q5. What are the ways to create a thread in C++? Answer: There are 4 ways of doing this which are as follows – - Thread creation using the function pointer - Thread creating using the function object - Thread creating using lambda - Thread creation using the member function Q6. How to launch a thread using function objects and function pointer? Answer: Using function object – class fn_object_class { void operator () (params) { } Std:: thread thread_object(fn_class_object(), params) Using function pointer – Void foo(param) { } std::thread thread_obj (foo, params); Q7. What kind of issue do you find in this code? Answer: The code is given to candidate – #include <iostream> int main(int argc, char **argv) { const int & r1 = 100; int v = 200; int &r2 = v; int & r3 = 200; return 0; } The rvalue must be a variable. The issue is there in the initialization of r3. Part 2 – Multithreading Interview Questions C++ (Advanced) Let us now have a look at the advanced Multithreading Interview Questions and Answers C++. Q8. Brief me about the available models in Multithreading? Answer: Many to many relations Many to One relation One to one relation Q9. Name the design pattern for the thread? Answer: Some popular test cases in the current IT industry Thread Pool (Boos and Worker) Peer (Work Crew) Pipeline Let us move to the next Multithreading Interview Questions C++. Q10. Define busy waiting and how it can be avoided? Answer: When a thread is waiting for another thread with the use of an active looping structure, that doesn’t do anything is known as the busy-waiting state. This cane avoided using mutexes. Q11. What do you understand by priority inversion terminology? Answer: A higher priority thread must wait behind a lower priority thread in a case where the lower priority threads hold a lock-in which a higher priority thread is waiting. This is a case of priority inversion. Q12. Is there any difference between a user-level and Kernel-level threads? Answer: This is the most popular Multithreading Interview Questions C++ asked in an interview. Yes, there are some crucial differences between the two. They are listed below. Q13. Name the functions that are used to create the threads? Answer: An afxbeginthread function is used to create the threads (both kinds). Thread creation is done in two modes – one that starts executing it and another that gets created in the suspended mode which can be resumed later. Q14. What are the 6 synchronizations primitive available in Multithreading? Answer: They are as follows – - Mutex - Join - Condition Variable - Barriers - Spin Lock - Semaphore Recommended Articles This has been a guide to the list of Multithreading Interview Questions and answers C++ so that the candidate can crackdown these Multithreading Interview Questions C++ easily. Here in this post, we have studied top Multithreading Interview Questions C++ which is often asked in interviews. You may also look at the following articles to learn more –
https://www.educba.com/multithreading-interview-questions-c-plus-plus/
CC-MAIN-2022-40
en
refinedweb
I have the following code: def fetch_into (uri, name) http = Net::HTTP.new(uri.host, uri.port) req = Net::HTTP::Get.new(uri.path) req.basic_auth(USERNAME, PASSWORD) start_time = Time.now.to_f File.open(name, “w”) do |f| print " - fetching #{name}" http.request(req) do |result| f.write(result.body) f.close() elapsed = Time.new.to_f - start_time bps = (result.body.length / elapsed) / 1024 printf “, at %7.2f kbps\n”, bps end end end this is run in a very simple loop that doesn’t do anything that requires much CPU. the files downloaded are about 10Mb and since the connection is not that fast (about 15Mbit/sec) I would expect this to consume little CPU, but in fact it gobbles up CPU. on a 2Ghz AMD it eats 65% CPU on average (the job runs for hours on end). where are the cycles going? I assumed it would be a somewhat suboptimal way of doing it since there might be some buffer resizing in there, but not that badly. anyone care to shed some light on this? (I would assume that there is a way of performing an http request in a way where you can read chunks of the response body at a time?) -Bjørn
https://www.ruby-forum.com/t/net-http-performance-question/74531
CC-MAIN-2022-40
en
refinedweb