Polymer only works with recent browsers; if you can't see the demos, update your browser.
The sortable table has a few standard requirements:
- JSON input,
- Columns can be statically configured (renamed, reordered, and hidden), and
- Rows can be sorted by clicking on the column headers.
Consider a use-case of finding the maximum value contained within various columns: Ostrich is a reporting library on the JVM that gathers performance statistics and query execution times, making them available as JSON. It is a realistic input, and a simple sortable table provides a measurable benefit to the user.
Using the Ostrich JSON dataset as reference, we should create another array to define the order, title, and any other properties for the columns we wish to appear in the rendered table:
var columns = [
  {name: 'name', title: 'service call'},
  {name: 'average'},
  {name: 'count'},
  {name: 'maximum'},
  {name: 'minimum'},
  {name: 'p50'},
  {name: 'p90'},
  {name: 'p95'},
  {name: 'p99'},
  {name: 'p999'},
  {name: 'p9999'},
  {name: 'sum'}
];
In addition to letting us add column-specific properties, specifying the columns data separately from the input data reduces restrictions on the data format. The data dataset can now contain arbitrary JSON elements; we will pick and choose what is displayed in our data table, simply omitting missing or additional fields.
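For instance (with values made up purely for illustration), a data array in the Ostrich style might look like this; the extra handler field is never rendered because no column references it, and the missing p9999 in the second row simply yields an empty cell:

var data = [
  {name: 'db.query', average: 12, count: 1024, maximum: 310, minimum: 1,
   p50: 9, p90: 31, p95: 60, p99: 180, p999: 290, p9999: 305, sum: 12698,
   handler: 'internal'}, // no 'handler' column is defined, so it is ignored
  {name: 'cache.get', average: 1, count: 52311, maximum: 14, minimum: 0,
   p50: 1, p90: 2, p95: 3, p99: 7, p999: 12, sum: 41849} // 'p9999' omitted
];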
Web Components fully encapsulate all functionality behind an interface of HTML attributes. We have a specification for the data source and the displayed columns, and we will also expose the currently sorted column and its direction. The custom element interface for our new simple-sortable-table element is:
<simple-sortable-table
  data="{{data}}"
  columns="{{columns}}"
  sortColumn="{{sortColumn}}"
  sortDescending="{{sortDescending}}">
</simple-sortable-table>
The Polymer element definition mirrors the above by exposing the same set of attributes. The internal logic of the Web Component will take advantage of Templates to perform two tasks: create column headers, and create rows.
<polymer-element name="simple-sortable-table"
                 attributes="data columns sortColumn sortDescending">
  <template>
    <table>
      <tr>
        <!--TODO: template to create column headers-->
      </tr>
      <!--TODO: template to create rows-->
      <!--TODO: nested template to create cells-->
    </table>
  </template>
  <script>
    Polymer('simple-sortable-table', {
      data: [],
      columns: [],
      sortColumn: null,
      sortDescending: false
    });
  </script>
</polymer-element>
Both tasks will iterate over the columns array, as both need to correctly filter and order the displayed columns. Starting with the first task: column headers require a few features. They must capture click events, display sort status, and show a column title. Column header click events will be handled by a new function called changeSort: when a user clicks on a header it will be called, determine which column was clicked, and then update the sort settings. Since the sort variables sortColumn and sortDescending are bound to the Polymer element, updating either will automatically re-render the entire table with the proper sort.
Because user-defined parameters cannot be sent to event handlers, we cannot send changeSort the clicked column as an argument. However, each event handler is passed the source DOM element, so as long as the source element was rendered using a Polymer template it will expose a model property containing a reference to its template's bound data model. If the element's template was bound using a repeat, the model property will be a reference to the specific item of the collection corresponding to this element; in our case, the item in the columns array.
changeSort: function(e, p, o) {
  var clickedSortColumn = o.templateInstance_.model.column.name;
  if (clickedSortColumn == this.sortColumn) {
    // column already sorted, reverse sort
    this.sortDescending = !this.sortDescending;
  } else {
    this.sortColumn = clickedSortColumn;
  }
}
Using overline and underline to indicate descending and ascending sorting respectively, the column header template is:

<template repeat="{{column in columns}}">
  <th on-click="{{changeSort}}"
      style="text-decoration: {{sortColumn == column.name ? (sortDescending ? 'overline' : 'underline') : 'none'}}">
    {{!(column.title) ? column.name : column.title}}
  </th>
</template>
The actual row sorting will be performed using a Polymer Filter applied to the row template data. For an introduction to Polymer Filters, I have written an article: Polymer Data-Binding Filters.
PolymerExpressions.prototype.sortByKey = function(array, key, desc) {
  return array.sort(function(a, b) {
    var x = a[key];
    var y = b[key];
    if (typeof x == "string") {
      x = x.toLowerCase();
      y = y.toLowerCase();
    }
    if (desc) {
      return ((x < y) ? 1 : ((x > y) ? -1 : 0));
    } else {
      return ((x < y) ? -1 : ((x > y) ? 1 : 0));
    }
  });
};
The data for the table is JSON, meaning each row of data is a JSON object; we can reference a particular cell of the row by column name. This is much cleaner than dealing with a numerical index, since the column's displayed order may be different from its order within the row data.
<template repeat="{{ row in data | sortByKey(sortColumn, sortDescending) }}">
  <tr>
    <template repeat="{{column in columns}}">
      <td>{{row[column.name]}}</td>
    </template>
  </tr>
</template>
We have defined all the code for a sortable table; the only task left is to style it. Alternating row background color is both appealing and simple using the nth-of-type CSS selector. More advanced conditional formatting will hopefully be the subject of a future article, although it is quite straightforward to implement by adding an additional formatting function property to the columns array, as the sketch below shows.
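As a rough sketch of that idea (this is not code from the original article; the formatter property and the applyFormatter filter are names invented here for illustration), each column definition could carry an optional formatting function, and a small filter in the style of sortByKey above could apply it per cell:

var columns = [
  {name: 'name', title: 'service call'},
  // hypothetical conditional formatting: mark unusually slow calls
  {name: 'maximum', formatter: function(value, row) {
    return value > 1000 ? value + ' (slow)' : value;
  }}
];

// Falls back to the raw value when a column defines no formatter.
PolymerExpressions.prototype.applyFormatter = function(value, column, row) {
  return (column && column.formatter) ? column.formatter(value, row) : value;
};

The cell template would then become <td>{{ row[column.name] | applyFormatter(column, row) }}</td>.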
The following live demo takes advantage of columns data binding and the window.resize event to show/hide a number of columns based on the user's screen resolution (resize your browser to try it out); a sketch of the idea follows.
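A minimal sketch of how such a demo could be wired up (the breakpoints and variable names here are assumptions, not the demo's actual code): because columns is data-bound, assigning a trimmed copy of the full column list re-renders the table automatically.

var allColumns = columns.slice(); // the full column definitions from above
var table = document.querySelector('simple-sortable-table');

function updateVisibleColumns() {
  // hypothetical breakpoints: show fewer columns on narrower screens
  var max = window.innerWidth < 600 ? 3 :
            window.innerWidth < 1000 ? 6 :
            allColumns.length;
  table.columns = allColumns.slice(0, max);
}

window.addEventListener('resize', updateVisibleColumns);
updateVisibleColumns();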
Full Source: sortable-table-polymer-web-components.html, simple-sortable-table.html
Very nice!
Have any example in Dart lang?
Thanks!
Hi, I’ve updated your example to Polymer 1.0 and added some tests.
Regards.
I need some sample test cases for the above table. For example, test cases like verifying the title of the table, performing the "click" actions, etc.
procmgr_daemon()
Run a process in the background
Synopsis:
#include <sys/procmgr.h>

int procmgr_daemon( int status,
                    unsigned flags );
Since:
BlackBerry 10.0.0
Arguments:
- status
- The status that you want to return to the parent process.
- flags
- The flags currently defined (in <sys/procmgr.h>) are:
- PROCMGR_DAEMON_NOCHDIR — unless this flag is set, procmgr_daemon() changes the current working directory to the root " / ".
- PROCMGR_DAEMON_NOCLOSE — unless this flag is set, procmgr_daemon() closes all file descriptors other than standard input, standard output and standard error.
- PROCMGR_DAEMON_NODEVNULL — unless this flag is set, procmgr_daemon() redirects standard input, standard output and standard error to /dev/null.
- PROCMGR_DAEMON_KEEPUMASK — unless this flag is set, procmgr_daemon() sets the umask to 0 (zero).
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The procmgr_daemon() function lets programs detach themselves from the controlling terminal and run in the background as system daemons. It also puts the caller into session 1.
The argument status is returned to the parent process as if exit() were called; the returned value is normally EXIT_SUCCESS.
The data in the siginfo_t structure for the SIGCHLD signal that the parent receives isn't useful in this case.
Returns:
A nonnegative integer, or -1 if an error occurs.
Classification:
Last modified: 2014-06-24
Organizer of the contact between SwTextNodes and grammar checker. More...
#include <IGrammarContact.hxx>
Organizer of the contact between SwTextNodes and grammar checker.
Definition at line 29 of file IGrammarContact.hxx.
Definition at line 57 of file IGrammarContact.hxx.
finishGrammarCheck() has to be called when grammar checking has been completed for a text node.

If this text node has not been hidden by the current proxy list, it will be repainted. Otherwise the proxy list replaces the old list and the repaint will be triggered by a timer.
Implemented in SwGrammarContact.
Referenced by finishGrammarCheck().
getGrammarCheck() checks if the given text node is blocked by the current cursor. If not, the normal markup list is returned; if blocked, it will return a markup list "proxy".
Implemented in SwGrammarContact.
Referenced by SwXTextMarkup::commitMultiTextMarkup(), SwXTextMarkup::commitStringMarkup(), and lcl_SetWrong().
Updating the cursor position reacts to a change of the current input cursor. As long as the cursor is inside a paragraph, grammar checking does not show new grammar faults.

When the cursor leaves the paragraph, these faults are shown.
Implemented in SwGrammarContact.
Referenced by SwCursorShell::UpdateCursorPos(). | https://docs.libreoffice.org/sw/html/classIGrammarContact.html | CC-MAIN-2019-35 | en | refinedweb |
LETTERS

Accepting Their 'Fate'

In the article "Transgenders and the Mainstream" (EPW, 28 November 2015), G Karunanithi has referred to the efforts to ensure civil rights for transgenders in Tamil Nadu and pointed out that it was the joint efforts of the state government and the transgender community that had led to some commendable results. The author has also mentioned how the transgenders have made efforts to move out of the lesbian, gay and bisexual fold and generate an independent identity for themselves.

In West Bengal, a majority of members of the transgender community get their livelihood through begging or performing rituals, particularly in houses where a new baby is born. They live in groups under a guardian, the senior-most member called barama. Some people make fun of the transgenders but overall there is a good-natured rapport between the latter and society in general. Some of the orthodox Hindus think that the transgenders are thus due to their karma in a previous life. The transgenders themselves believe that they are under a curse and that they are suffering divine punishment. Some of them mentioned that mainstream society treated them well. Some of the transgenders, commonly referred to as hijras, earn their living through sex work. The West Bengal government has done very little for this community except for some healthcare measures. However, there is little effort from any quarter to organise them for their rights and there is also no data on their exact number in the state.

Harasankar Adhikari
Kolkata

Employment, Not Pity

Prime Minister Narendra Modi in his "Mann ki Baat" programme on 29 November urged people to change their mindset regarding the disabled population. It is true that the disabled are usually viewed as individuals deserving pity. Rabindranath Tagore used to say that being pitied is the worst form of ignominy. Those who pity the disabled no doubt revel in self-glorification. But the disabled population wants to engage in productive activity and to be looked upon as an asset in whatever form instead of subsisting with a begging bowl and seeking alms and charity. Thus, the mindset of "others" as well as "us" needs to change in right earnest. I have heard of welfare organisations that exploit the cause of the disabled by indulging in rackets. The Prime Minister can think of many ways to ensure that the disabled are able to become self-reliant and contribute to society. The disabled can be involved in running services under the government's many social welfare programmes that will mean employment and income generation for them.

Samit Kar
Kolkata

Past Blank Spaces That Spoke

Kudos to the newspapers of Nagaland for standing up against censorship imposed by any section of the society, politicians, the army, or militant groups. Freedom of the media and free speech is essential to the growth of any society and any attempt to curb the voice of the people should be resisted.

However, the editorial, "When Blank Spaces Speak" (EPW, 21 November 2015), needs a correction. It is not the first time Nagaland newspapers used blank space to protest. If I am not mistaken, the first journalist/newspaper in North East India to use a blank space to protest was the well-known journalist Al Ngullie in 2008. Ngullie used a blank space for his regular column in Morung Express to protest the violence between Naga insurgent groups that led to the killing of innocent civilians. For the second time, in 2010, he blanked out his column, titled "United Colors of Nagas," also in the Morung Express. I am unclear about the reason for the second one, but it was related to political events of the time.

Kilenlemti
(From comment on EPW website)

Army Atrocities in Meghalaya

Coordination of Democratic Rights Organisations (CDRO) strongly condemns the continuing army atrocities in the name of combing operations in Meghalaya's Garo Hills. While Meghalaya is not notified as a "disturbed area," and the writ of the army does not run large there, the Armed Forces (Special Powers) Act (AFSPA) allows army personnel located in neighbouring states to conduct operations within 20 kilometres of the state border. On 25 November 2015, the Gurkha Regiment stationed in Assam's Rangjuli town shot dead two unarmed civilians (Alphus Momin and S D Marak), a few kilometres from Kharkutta Bazaar in the Garo Hills area in Meghalaya. The incident occurred at around 8:40 pm, when Alphus Momin, a schoolteacher, and S D Marak, a vendor, were on their way home in Rajasimla village on a motorbike. Earlier, in March 2015, two daily-wage workers, Selba Sangma and Jekke Arengh, were shot dead by the Dogra Regiment in the same area. In a bid to cover up its "mistake" and to pass off the killings as an "encounter," the army planted two country-made pistols near the bodies. The matter came to light as two other men were also intercepted in the same incident and were taken into custody. The government then ordered a probe into the killings.

Army lawlessness is inherent in areas under the AFSPA, as this law allows impunity and immunity to men in uniform. However, even within the next-to-negligible safeguards available in the act, the army is supposed to inform the local police and seek its help before conducting operations. In the November incident, not only did the army not inform the local police station, it did not even bother to take the victims to a nearby hospital. Instead, it abandoned the bodies on the roadside. It was the local police who arrived at the spot after hearing the gunshots, and they took the two men to Kharkutta Primary Health Centre, where they were declared "brought dead." The police initially registered a first information report (FIR) against unknown people, but growing public protest prompted the army to acknowledge its "mistake." On 26 November, the army submitted an FIR claiming that the incident occurred as the duo had not adhered to the security instructions and refused to comply with their instructions at a mobile check-post. Despite the army's "face-saving" denial, the magnitude of the protests, including that by the local member of legislative assembly, compelled the Garo Hills deputy commissioner to promise a magisterial inquiry into the incident.

Given the prevailing army lawlessness, it is imperative to question the Meghalaya High Court's decision to ask the centre to impose AFSPA in the Garo Hills. Making it a case of deteriorating law and order, a three-member bench, including the chief justice, on 4 November 2015, cited 87 instances of kidnapping and ransom demands by rebel groups and pointed out that even the chief justice and other judges were receiving "veiled" threats and that they would have to face "dire consequences" after retirement. Instead of upholding the rights and liberties of individuals, the court damaged its reputation by acting on its own fears, as two of the judges are said to retire in February 2016. Besides, in asking for the imposition of the AFSPA, the court also violated the constitutional arrangement of separation of powers by acting on its own authority. It is quite another matter that the centre decided not to heed the high court, and the state government has decided to file an affidavit in the next hearing. However, it is to be noted that the centre has promised the state government paramilitary troops for the Garo Hills. So, even while the centre and the state government are opposed to the imposition of AFSPA, there is a consensus on the issue of legal immunity, as prosecution of paramilitary forces also requires official sanction.

Why did the Meghalaya High Court not acknowledge the issue of army abuse, and why has the state government demanded more paramilitary forces from the centre? Why is the army or the paramilitary the answer to the intransigent problems of the Garo Hills? It is known that the nexus between the coal mafia, sections of rebel groups, and the political establishment has thrived in the Garo Hills area. In such a scenario, where there is no effort made towards a political resolution to the problem of the Garo Hills conflict, the issue of army lawlessness acquires greater significance and magnitude. As the above two incidents illustrate, the legal impunity and immunity that the army enjoys makes it the most dangerous adversary in conflict areas. Not only are civilian lives least cared for—and are indeed under constant threat—the absence of routine checks and balances that are stipulated in normal law allows untrammelled powers to men in uniform. The army and paramilitary operations in the Garo Hills must immediately end. We, at the CDRO, demand:

(1) Stringent action against army personnel guilty of murdering Alphus Momin and S D Marak.
(2) Information on action taken against guilty personnel involved in the murder of Selba Sangma and Jekke Arengh.
(3) Compensation to the families of the deceased who have been gunned down by the army.
(4) Immediate halt to any paramilitary deployment in the area.

C Chandrasekhar (CLC, Andhra Pradesh), Paramjeet Singh (PUDR, Delhi), Pritpal Singh (AFDR, Punjab), Phulendro Konsam (COHR, Manipur) and Tapas Chakraborty (APDR, West Bengal) (Coordinators of CDRO).

Constituent Organisations: Association for Democratic Rights, Punjab; Association for Protection of Democratic Rights, West Bengal; Bandi Mukti Morcha, West Bengal; Campaign for Peace & Democracy in Manipur, Delhi; Civil Liberties Committee, Andhra Pradesh; Committee for Protection of Democratic Rights, Mumbai; Coordination for Human Rights, Manipur; Human Rights Forum, Andhra Pradesh; Jharkhand Council for Democratic Rights, Jharkhand; Manab Adhikar Sangram Samiti, Assam; Naga Peoples Movement for Human Rights; Organisation for Protection of Democratic Rights, Andhra Pradesh; Peoples' Committee for Human Rights, Jammu and Kashmir; Peoples Democratic Forum, Karnataka; Peoples Union for Democratic Rights, Delhi; Peoples Union for Civil Rights, Haryana.

Web Exclusives

The following articles have been published in the past week in the Web Exclusives section of the EPW website. They have not been published in the print edition.
(1) Change or Stability: Singapore General Elections 2015 — Subrata K Mitra, Rajeev Ranjan Chaturvedy
(2) Sidelining the Most Vulnerable: UN Climate Change Conference Paris 2015 — Darryl D'Monte
(3) Student Protests in South Africa — Dominic Brown
Articles posted before 28 November 2015 remain available in the Web Exclusives section.
EDITORIALS

The run-up to the 10th ministerial meeting of the World Trade Organization (WTO) in Nairobi later this month has been anything but constructive. Kenya's capital will see a battle that will decide whether the WTO can provide a few minimally credible developmental outcomes for its large membership of developing and the poorest countries. More likely, to suit the interests of the advanced countries the Doha Development Agenda (DDA) will be all but buried.

Attempts to finalise the Nairobi ministerial statement are currently mired in unbridgeable differences on several points, especially the future of the unresolved issues of the long-running DDA. A questionable package of outcomes that sidesteps the core issue of trade-distorting domestic subsidies in agriculture is being pursued to suit only one country, the United States (US). At the heart of the divide is that the Triad of the three largest advanced economies—the US, European Union (EU) and Japan—seems determined to empty the Doha Round of all content. In 2001, immediately after the 9/11 terrorist attacks, the Triad launched the DDA negotiations in the face of intense opposition from developing countries. To make the round acceptable, the group agreed to address the fundamental inequities of global trade arising from the previous Uruguay Round agreement. After 14 years of spasmodic negotiations, the Triad and some other developed countries now feel that the Doha Round is a costly undertaking because of the reforms that would be required in agriculture, especially for the US. Washington has passed a new farm bill that takes farm support to well beyond the Doha ceiling of $14.5 billion. Moreover, the Triad has already managed to pocket a binding WTO trade facilitation agreement (TFA) without having to pay anything for it. The TFA is the jewel in the crown that it fought so hard to retain in the DDA negotiations after it was initially stamped out by the developing countries at the Cancun ministerial meeting in 2003.

Against this backdrop, a small package of deliverables on not very important issues that would be agreed to in Nairobi was forced on the larger membership through a top-down approach. The Triad, along with Australia, Canada and Brazil, among others, has also ruled out any agreement in Nairobi on two major demands of the G-33 group of developing countries. The G-33 group, which includes Indonesia, India, China, Kenya and 43 other developing countries, has all along demanded a special safeguard mechanism (SSM) and a permanent solution for public stockholding programmes of food security for resource-poor farmers in the developing countries. To worsen matters, the developing countries will be asked to make a trade-off between continuation of the DDA in some form and the continuation of decades-long negotiating approaches, such as special and differential treatment and less-than-full reciprocity. If the latter were to be given up, India, China and South Africa will have to take almost identical commitments as the advanced economies to reduce agricultural tariffs to applied levels and subsidies; the US will otherwise not agree to return to the table and negotiate on agriculture.

In short, when trade ministers from the developing world congregate at Nairobi they will be fighting with their backs to the wall to preserve the Doha Round in its original form, which they initially opposed but then into which they invested so much of their negotiating capital. The moot issue is why many developing countries, particularly India, allowed things to come to such a pass, knowing well that losing leverage in a mercantile negotiating framework can have costly consequences. Since their failure to ram through a package at the Cancun ministerial meeting in 2003, the US and other members of the Triad have constantly weighed their gains in a multilateral round with benefits from bilateral/regional free trade agreements (FTAs). At the WTO, even as these countries agreed on the July 2004 framework agreement, the 2005 Hong Kong ministerial declaration, and the 2013 Bali agreement, all negotiated to reach a comprehensive package, they were simultaneously going ahead with concluding FTAs. But the turning point in the Doha Round negotiations was in 2008. That was the year when the US decided to set its sights on the Trans-Pacific Partnership (TPP) Agreement because at the TPP it would not have to address its trade-distorting agricultural subsidies.

Despite these changes occurring outside the WTO, the developing countries went on offering concessions at Geneva in the hope that they would secure gains in other areas. As late as 2013, the Triad, led by the US, forced the TFA on the developing countries, which did not ask for, nor were given, anything in return. At Bali in 2013, India agreed to the TFA in return for a weak and economically insignificant undertaking on public stockholding programmes for food security. Then, Prime Minister Narendra Modi had a chance to reverse that outcome but went ahead and agreed during a visit to the US in September 2014 to sign on to the TFA without securing a cast-iron agreement on public stockholding programmes for food security.

In short, the developing world now finds that it has surrendered much of the leverage it had and will be unable to reverse things at Nairobi, except by naming and shaming the US for having effectively killed the Doha Round. If India and other developing countries fail to do even that, they would forgo whatever limited chance they have of securing a permanent solution for public stockholding programmes for food security, the SSM, and a credible result on cotton.
It speaks volumes about the priorities in our discourse on economic policy matters that the public comments on the Seventh Pay Commission (SPC) have centred on the fiscal implications of the recommendations. What will be the additional "burden" on the Government of India? What impact will it have on the fiscal deficit? True, the pay commission's terms of reference have to deal with recommending the levels of emoluments for the Government of India's 3.3 million personnel as well as the large population of pensioners. However, the publication of this once-in-a-decade report was an opportunity for public debate on the role of the government in providing public services, the financial cost of doing so, the accountability of the government servant and the performance of personnel at different levels and in different areas. That, unfortunately, has not happened.

For the record, with the central government accepting the recommendations of the SPC and deciding to implement them from January 2016, its total expenditure in 2016–17 will go up by Rs 68,400 crore as additional outlay on salaries and allowances and by Rs 33,700 crore on pensions, or by a total of Rs 1,02,100 crore, with an overall one-off increase of 23.55% over the business-as-usual projections. The SPC had its eye on the fiscal impact, for the additional expenditure will be equivalent to 0.65% of gross domestic product (GDP), compared to the higher increase of 0.77% of GDP that followed from the acceptance of the Sixth Pay Commission a decade ago. Implementation of the Sixth Pay Commission's recommendations had led to a substantial rise in emoluments at many levels, and with their adoption by state governments, para-state organisations and even educational institutions, the overall financial impact was considerable. The financial impact this time will be less but the implications for the state governments are yet to be worked out.

Since the focus for the past two decades has been on how to contain government expenditure, the outcome naturally has been on reducing the staff strength of the Government of India in the aggregate. According to the SPC, sanctioned staff strength reached a peak of 41.76 lakh in 1994 and declined to 38.90 lakh in 2014 (though the fall seems to have been largely on account of the corporatisation of BSNL). There is also an increasing unwillingness to fill up posts: 14% of the sanctioned posts had not been filled in 2006, 17% in 2010 and 18% in 2014. The central government is working towards further reducing staff strength and simultaneously increasing the use of contract labour.

But what does that mean for the range and quality of services that the government is to offer? It is instructive that according to the SPC report, in the United States there are 668 civilian federal government employees for every 1,00,000 of the population, while there are only 139 civilian central government employees per 1,00,000 in India. It is also instructive that the one ministry which has seen an increase in personnel in recent years is the Ministry of Home Affairs, a reflection of the growing size of the security forces directly under this ministry.

In the obsession to control government expenditure, we tend to lose sight of the fact that the state in India—at the state and central levels—is actually providing too little of public services, to expand which one would need larger personnel strength. The issue does not receive attention or favour because the citizen regularly encounters a particularly ugly face of the government: unresponsive, unaccountable and corrupt. This is true of everyone from the lowly official in a zilla land records office who harasses a farmer seeking authentication of his ownership of land to a senior official in charge of regulations denying a firm the clearances due to it.

Over decades, the face of the Indian "babu" has turned from being a "public" servant to one who works for "private" benefits, if not in pecuniary terms then to enjoy the luxury of permanent employment with a load of benefits without in any way providing public service. Yet, this broad-brush characterisation of government servants overlooks the commitment and dedication shown by many lakhs of government servants, from the worker in the primary healthcare centre to the doctor in public hospitals, who struggle without adequate funds to provide essential services, though many of them could earn far more in the private sector. A good part of the responsibility for government servants having such a poor reputation in the public eye is their own. And this has played into the hands of policymakers who seek to strip the state in India even more of its public functions and hand them over to the private sector.

It is time, therefore, the public discourse on the government servant shifts from just her emoluments to the role she plays in providing essential services, her accountability to the citizen, the need to increase the number of public servants like her who are too few in important areas (especially in social services) and of course to a discussion of the salary that she needs in order to live a decent life.

When a 21-year-old commuter lost his grip on the pole at the train door and fell in Mumbai a few days ago, his tragic fate was no different from that of so many others. But it was the video clip of the fall that went viral on the internet that shook even those who are used to regularly reading reports of such falls. It brought home forcefully the reality of the horrifying numbers of deaths due to falls from overcrowded trains, crashes at unmanned level crossings and people crossing rail tracks where foot overbridges are either non-existent, too crowded or at an inconvenient distance from the roads that lead to the station. Mumbai, with its extensive suburban railway network that every day runs 2,905 services carrying 7.5 million commuters, reports the highest number of deaths due to people falling out of trains.

The response from Railway Minister Suresh Prabhu was predictable; he ordered the formation of a committee. Yet, in 2012 a high-level committee chaired by Anil Kakodkar gave a detailed report on rail safety along with suggestions to the Railways. Prabhu had asked the Railway Board in June this year to submit an implementation plan of the recommendations of the Kakodkar report. What happened to that?

In fact, the Kakodkar Committee's report is said to be unacceptable because it identified the root causes of the weak safety record of the Railways and showed the enormity of the challenge. It pointed to poor infrastructure, inadequate resources and lack of empowerment at the functional level. It observed that safety margins had been narrowed and infrastructure maintenance neglected because financially the Indian Railways were on "the brink of collapse."

It recommended that a statutory Railway Safety Authority be set up with an oversight on safety on the operational mode rather than the Railway Board holding all the strings as at present. It also suggested that the Research Design and Standards Organisation, the top technical wing of the Railways, be restructured, that an Advanced Signalling System (like the European Train Control System) be adopted for the entire trunk route length of 19,000 km within five years, and that all level crossings (manned and unmanned) be closed down. (As of this year the Railways still have 11,563 unmanned level crossings.) Implementing all these recommendations would cost around Rs 1,00,000 crore over a five-year period. Perhaps the mammoth amount of funding sought for a bullet train from Ahmedabad to Mumbai could be spent more fruitfully to strengthen the safety and reliability of the existing rail network—a true modernisation of the network.

At railway stations in the urban network, the problem is compounded by the absence of timely assistance. The importance of emergency care—the "golden hour"—seems to be an alien concept for everyone concerned, including station masters and railway staff. Getting the injured to a hospital or even giving immediate basic treatment until then is often done by untrained fellow-commuters. Time passes as officials and police argue over "jurisdiction." In one particularly horrifying case in Mumbai in January 2014, the severed left hand of 16-year-old Monica More was carried by two fellow commuters in a borrowed piece of cloth, while they tied her other near-severed arm with their handkerchiefs. She was taken to a hospital in an autorickshaw because there was no emergency care or ambulance available at one of Mumbai's busiest stations. She lost both her arms.

Basic measures must be in place, and officials and staff must be properly trained and made aware of the immediate steps to be taken after an accident. Instead of addressing such a need, railway authorities issue "appeals" to passengers to not hang out of trains, to avoid train rooftops and rail tracks. Although this is excellent advice, passengers take such risks on trains not out of choice but because they are helpless when faced with congested networks and decrepit infrastructure. To make matters worse, railway authorities refuse to classify many such cases as accidents to avoid accepting liability and paying compensation.

According to the reply to a right to information question, in the last decade 25,722 passengers fell from trains on Mumbai's suburban network. Of these, 6,989 died. Across the entire country 14,973 deaths occurred on railway tracks in 2011; the number was 16,336 in 2012, and increased to 19,997 in 2013. According to the ministry, until October 2014, 18,735 died in falls from trains, "trespassing," accidents and suicides.

Ironically, the Parliamentary Standing Committee on Railways had earlier this year said that appointing committees to look into various facets of safety and then not implementing their recommendations was a waste of public money. To now announce another committee to look into the safety of passengers, instead of implementing known measures, is a mockery beyond words.
From 50 Years Ago

Vol XVII, No 49, December 4, 1965

WEEKLY NOTES

The Missing Millionaires

That the grinding poverty of the country's teeming millions has a contagious quality has been suspected for some time. It is difficult to uncover rich taxpayers. In the income-tax assessment year 1962–63 (which roughly corresponds to fiscal year 1961–62), only 76 individuals (26 of them salary earners) and 15 Hindu undivided families had assessed incomes of over Rs 5 lakhs each. Less than 3,000 individuals (including about 1,000 salary earners) and 300 HUFs earned more than Rs 1 lakh in that year. There has been little or no change in these numbers over the last decade or so...

The Hindu undivided family remains the favoured child of the tax system. The number of such families assessed to income tax has remained fairly constant over the last several years. How far this is a result of the high exemption limit and legal partitions following the birth of each son in business families cannot be judged from the scanty data available. From plain observation, there is no doubt that the HUF is the most commonly used 'legitimate' device for tax avoidance which further reinforces the traditional desire for male progeny. Fear of the broader legal and religious complications has kept Government away from tackling this issue. That is no excuse for its neglect by academic economists.
MARGIN SPEAK

At the 10th ministerial meeting of the World Trade Organization (WTO) to be held at Nairobi, Kenya, from 15–18 December, the discourse on higher education being a public/merit/private good and the covert/overt preparation of the government over the past two decades to withdraw from higher education will finally come to an end. The conclusion of the current Doha round of negotiations, which started in 2001 but could not be completed because of the concerted resistance of the least developed and developing countries, has been planned for this meeting. A special meeting of the General Council of WTO was held in November 2014 at Geneva, which decided upon the process of suppression of resistance and finalised the "work programme" to conclude the negotiations in Nairobi. Once completed, it will have ruinous consequences for people in poor countries as a variety of goods and services would suddenly be pushed beyond their reach.

For India, the consequences are not going to be any less severe. Its offer of market access in higher education made in August 2005 at Hong Kong will become an irrevocable commitment once the Doha round is concluded. While this offer was made by the Congress-led United Progressive Alliance government, the current Bharatiya Janata Party-led National Democratic Alliance will pride itself on its consummation—again exposing the essentially anti-people character of these parties. The implications of this imminent disaster have not yet dawned on people commensurately despite a countrywide agitation under the aegis of the All India Forum for Right to Education (AIFRTE)—a federated body of hundreds of organisations and activists in the country, floated in 2009 to demand free and equitable education to all from kindergarten to post-graduation.

Spurious Economics

The protagonists of marketisation of higher education confuse people by saying that higher education is not a public good. Public good in economics is narrowly determined on dual criteria: non-rivalrous and non-excludable, meaning one person's use of the good does not diminish another person's use of it and no person can be prevented from using the good, respectively. Placing public goods in the market defeats these dual criteria. Therefore, such goods are supposed to be provided by non-profit organisations and government. Classically, lighthouses and national defence exemplify public goods in economics. However, economics also notes that such examples are scarce in practice and most goods fulfil one criterion or the other, or sometimes are public goods and sometimes not. Often it is public perspective that makes a good public or private. For example, an official portrait of Henry VIII in the National Portrait Gallery in London is seen as a public good, but the painting of Mona Lisa in the Louvre is not. Many an item or service classically considered a public good has been deftly turned into a private good and brought into the realm of the market—for example, the conversion of public roads to toll roads in recent times.

Why speak about higher education? Even elementary education and primary health services can be termed private goods with this argument. It is a pure neo-liberal ploy to commodify everything, including water and air, to be marketed for profits. In the public perspective, education in general, and higher education in particular, fulfils four major functions: the development …

… non-excludability, higher education will lean heavily towards being a public good rather than a private good. For instance, new knowledge is built upon the old knowledge; Einstein's theories would not be possible without the Newtonian base. The motivational argument given for restricting research as a private good through intellectual property rights (IPRs) by the neo-liberalists is myopic. In the long run, the non-generation of theoretical knowledge or its confinement to elite networks through artificial devices like IPRs or other WTO mechanisms is bound to adversely affect the pace of new knowledge production—and thereby the human future.

Then there is a perennial argument—from Macaulay's times—about the lack of resources for education. The National Knowledge Commission has predicted that India needs an investment of about $190 billion to achieve the target of 30% Gross Enrolment Ratio (GER) in higher education by 2020 and expectedly advised to meet it through foreign direct investment (FDI) as the government lacks resources. For just one year, 2012–13, the tax revenue foregone by the government to companies worked out to Rs 5,73,627 crore (in excess of $100 billion by the then exchange rate). Government has been gifting such amounts every year to the corporate sector, the sum of which even for a few years would be in multiples of this requirement!

Huge Profit Potential

These arguments are just a cover for the naked interests of global capital in the largest market of higher education in the world, with over 234 million individuals in the 15–24 age group, equal to the US population (FICCI 2011). This market of over $65 billion a year, growing at a compound annual growth rate (CAGR) of over 18%, comprises 59.7% of the largely price-inelastic education market. It is rightly considered the "sunrise sector" for investment. India's online education market alone, in which the US has evinced huge interest, is expected to touch $40 billion by 2017. An RNCOS (a market research firm) report, "Booming Distance Education Market Outlook 2018," expects the distance education market in India to grow at a CAGR of around 34% during 2013–14 to 2017–18.

The preparation for handing over this sector has been afoot right since 1986, when the New Education Policy allowed private investment into higher education, leading to the mushrooming of private shops in the garb of educational institutes selling much-demanded professional education. They have since grown into veritable empires. After India formally embraced neo-liberal reforms in 1991, there have been concerted attempts through committee after committee. These attempts culminated in the formation of the Mukesh Ambani–Kumar Mangalam Birla Committee, which created "A Policy Framework for Reforms in Education" in April 2000 to stress that higher education be left to market forces. Although the government has since scaled up self-financing through a substantial raise in fees, it was not politically feasible to completely dismantle state financing of higher education. These moves, however, certainly prepared grounds for the offer of higher education to the WTO in 2005.

UPA II had tried to clear all hurdles in committing higher education to the General Agreement on Trade in Services (GATS) through various (six) bills, including the Higher Education and Research Bill, which advocated complete abolition of bodies such as the University Grants Commission, Medical Council of India, All India Council for Technical Education and National Council for Teacher Education. The government, however, failed to get these bills passed in the Rajya Sabha. Then the government resorted to its pet ploy of bypassing Parliament in launching the Rashtriya Uchchatar Shiksha Abhiyan (RUSA) in September 2013, to change the structure of higher education, undermining the UGC and promoting public–private partnership (PPP). The choice-based credit system and common syllabus were some of the initiatives to facilitate prospective foreign players.

Successive governments adopted a competitive strategy in improving the statistics of higher education and in the process winked at its rapidly falling standards. Even the few markers of quality education in India, the Indian Institutes of Technology and Indian Institutes of Management, were not spared and were multiplied without consideration for infrastructure and faculty. Such competitive strategies did raise the GER to 17%–18%. But that is still far below the world average of 26% and that of other emerging economies such as China (26.7%), Brazil (36%) and Russia (76%). In the context of India's superpower ambition, it is utterly dismal. This gap indicates huge investment and profit opportunity for global capital.

Irrevocable Consequences

One may cynically think the private sector's share in higher education in India is already among the highest in the world: 64% of all educational institutions being private. India already allows 100% FDI in the education sector, the inflows exceeding $1,171.10 million from April 2000. There are 631 foreign universities/institutions operating in the country, mostly with a concept of "twinning" (joint ventures and academic collaboration with Indian universities), according to the Association of Indian Universities. Historically, higher education in India has been starved of financial support: public expenditure on it being $406 per student, less than even developing countries like Malaysia ($11,790), Brazil ($3,986), Indonesia ($666) and the Philippines ($625). Quality-wise, higher education in the country is going nowhere; our best institutions rank below 240 in global rankings. So, what more harm can there be if higher education goes under GATS? The brief answer is that the awkward "for profit" clause in the current policy would go away; all future policies of India in respect of higher education shall be annually reviewed by the Trade Policy Review Mechanism (TPRM), one of WTO's legal instruments, and the changes suggested by it shall have to be abided by—an outright infringement on the freedom and sovereignty of India—and of course, the reservations and other concessions for the Scheduled Castes and the Other Backward Classes will go. Higher education aimed at producing inert feed for the corporate sector shall become a tradeable service which will have to be bought by students as consumers.

Narendra Modi's claims of creating better opportunities for Indian youth get thoroughly exposed when he sets out to shut them out permanently from access to higher education at Nairobi. For the Congress it was expedient; for him it is ideological. The Brahminical supremacist ideology of his Parivar perfectly resonates with social Darwinist neo-liberalism in dispossessing the majority of people of whatever little they have and putting a handful of elites to lord over them. For those handfuls, a GATS regime in higher education may still mean inexhaustible opportunity, but for the multitude of masses it means the death knell. How on earth would they, who are supposed to be subsisting on Rs 20 a day, whose calorie intake has already dipped to a worrisome level, who are in a state of permanent famine, afford the market price for higher education?

Anand Teltumbde ([email protected]) is a writer and civil rights activist with the Committee for the Protection of Democratic Rights, Mumbai.

Reference

Federation of Indian Chambers of Commerce and Industry (FICCI) (2011): "Private Sector Participation in Indian Higher Education," FICCI Higher Education Summit.
COMMENTARY
Great Indian Gas Robbery ONGC and RIL stated that around nine
billion cubic metres (bcm) of natural gas
may have flown out from ONGC’s block in
the KG basin to RIL’s adjoining reservoir.
Paranjoy Guha Thakurta It was claimed that RIL had drawn 58.67
bcm from the wells up to 31 March 2015,
I
An independent consultant in an interim report has upheld the contention of the public sector Oil and Natural Gas Corporation that gas from one of its undersea wells in the Krishna–Godavari basin was consciously and systematically pilfered by a company controlled by Reliance Industries. Why did the Directorate-General of Hydrocarbons allow this to happen and why did the Government of India not protect the interests of a premier public sector undertaking?

This article was written before the final report was submitted to the government on 30 November 2015.

Paranjoy Guha Thakurta ([email protected]) is the lead author of Gas Wars: Crony Capitalism and the Ambanis, published in April 2014. He is a journalist, educator, documentary film-maker and, of late, also a publisher.

It is a dispute without any precedent, at least not in this country. India’s largest public sector company and the biggest producer of oil and gas, the Oil and Natural Gas Corporation (ONGC), has accused the country’s biggest privately-owned company, Reliance Industries Limited (RIL), of stealing gas from one of its reservoirs located beneath the ocean bed in the Bay of Bengal, off the coast of Andhra Pradesh along the basin of the Krishna and Godavari rivers. What is worse, the Ministry of Petroleum and Natural Gas (MoPNG) in the Government of India has been accused of being complicit in the alleged theft.

The dispute between ONGC and RIL is more than two years old. After months of legal wrangling, the warring companies agreed on an independent consulting firm based in the United States (US) which would give its technical findings in the dispute. This consultant, DeGolyer and MacNaughton (D&M), based in Dallas, Texas, submitted an interim report on 9 October which stated that natural gas worth $1.7 billion, or over Rs 11,000 crore, had been extracted by RIL in an unauthorised manner from an area on the ocean bed where gas extraction was supposed to be controlled by ONGC.

Earlier, in May 2014, ONGC had alleged in the Delhi High Court that gas worth almost $5 billion, or around Rs 30,000 crore, had been stolen by RIL in violation of the production sharing contract that the company had signed with the Government of India, represented by the MoPNG. While the last has not yet been heard about this dispute, it is the biggest one of its kind in India and an important link in a long series of controversies relating to the Reliance Group’s operations to extract gas in the Krishna–Godavari (KG) basin.

The interim report of D&M on the technical aspects of the dispute estimated the volume of gas produced from RIL’s block, of which around 9 bcm, or 15%, may have belonged to ONGC. This gas, at $4.2 per million British thermal units (mBtu), was worth more than Rs 11,000 crore. D&M submitted the interim report to all involved in the dispute, that is, the Directorate-General of Hydrocarbons (DGH), the regulatory authority which also acts as the technical wing of the MoPNG, as well as to the two companies for their comments before compilation of the final report. The two companies, which are supposed to work under the “supervision” of the DGH, had appointed D&M to “establish the continuity of reservoirs across the ONGC and RIL offshore deep water blocks/areas in (the) KG Basin.”

Findings of the Report

In its 553-page report, the US consultant has stated that reservoirs KG-DWN-98/2 (KG-D5) and the Godavari Producing Mining Lease (PML) are connected with the Dhirubhai-1 and Dhirubhai-3 (D1 and D3) fields located in the KG-DWN-98/3 (KG-D6) block of RIL. (The blocks where RIL operates have been named after the founder of the Reliance group, Dhirubhai Ambani.) According to a detailed article put out by the Press Trust of India (PTI) on 22 November, the D&M report stated:

As of 31 March 2015, the FFRM (Full Filled Reservoir Model) estimated a gas migration of approximately 11.122 billion cubic metres from the Godavari-PML and KG-DWN-98/2 contract areas to KG-DWN-98/3.

The US consulting firm is of the view that there exists a single large gas reservoir several metres below the ocean bed that extends from Godavari-PML and KG-D5 to KG-D6. Of the 58.68 bcm of gas produced by RIL from the KG-D6 block from 1 April 2009 over the following six years, 49.69 bcm belonged to RIL while 8.981 bcm could have come from the side where ONGC is supposed to operate.
At the then officially administered price of natural gas of $4.2 per mBtu, the total value of the gas belonging to ONGC which RIL has extracted has been estimated at $1.7 billion, or Rs 11,055 crore, at the then prevailing exchange rates.
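These headline figures can be roughly cross-checked. The sketch below assumes typical pipeline-quality gas of about 1,000 Btu per cubic foot (roughly 0.035 mBtu per cubic metre) and an exchange rate of about Rs 65 to the dollar, the prevailing level in 2015; neither conversion factor comes from the D&M report itself.

# Rough sanity check of the interim report's valuation (illustrative only).
migrated_bcm = 11.122                # gas migration per the D&M interim report
mbtu_per_m3 = 35.31 * 1000 / 1e6     # ~0.0353 mBtu per cubic metre (assumed calorific value)
price_usd_per_mbtu = 4.2             # administered gas price cited in the article
value_usd = migrated_bcm * 1e9 * mbtu_per_m3 * price_usd_per_mbtu
value_crore = value_usd * 65 / 1e7   # assumed Rs 65/$; 1 crore = 10 million rupees
print(f"${value_usd/1e9:.2f} billion, about Rs {value_crore:,.0f} crore")
# Prints roughly $1.65 billion, about Rs 10,722 crore, in line with the
# article's figures of $1.7 billion or Rs 11,055 crore.

The small residual difference would be explained by the exact calorific value and exchange rate used, neither of which is stated in the article.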
The dispute began in July 2013 when, suspecting reservoir connectivity, ONGC wrote to the DGH seeking data on the adjoining RIL block, KG-D6. ONGC claimed that RIL had deliberately drilled wells close to the common boundary of the blocks and that some gas it pumped out was from its adjoining block. RIL, on the other hand, maintained that it has “scrupulously followed every aspect of the production sharing contract (PSC) and has confined its petroleum operations within the (boundaries of its) KG-D6 block.”

The US-based independent consulting firm clearly thought otherwise. D&M estimated that ONGC’s Godavari-PML had 14.209 bcm of gross in-place reserves and KG-D5 had another 11.856 bcm. RIL’s D1 and D3 fields held 80.697 bcm of gross in-place reserves. Of these reserves, 12.80 bcm of Godavari-PML, 8.01 bcm of KG-D5 and 75.33 bcm of KG-D6 are connected, the report stated, adding that an estimated 11.89 bcm of gas from ONGC blocks would have “migrated” to KG-D6 by 1 January 2017 and this volume would go up to 12.71 bcm by 1 May 2019. Importantly, after such a high volume of “migration” of gas, it would no longer be economically viable for ONGC to develop the particular undersea fields.

Background

ONGC believes that the KT-1/D-1 gas find in its Krishna Godavari block KG-D5 and the G-4 Pliocene gas find in the Godavari block extend outside the block boundaries into KG-D6. According to ONGC, RIL’s D6-A5, D6-A9 and D6-A13 wells that were drilled close to the block boundary may be draining gas from the G-4 field, while the D6-B8 well may be sucking out gas from the DWN-D-1 field of the KG-DWN-98/2 block. While RIL started production in April 2009, ONGC is yet to finalise an investment plan for its fields.

After ONGC informed the regulator in the MoPNG, the DGH, that it apprehended that its gas was being taken by RIL in September 2013, representatives of both companies met and agreed to exchange data. Their next meeting, in December, ran into heavy weather. ONGC claimed channel connectivity; RIL disputed it. However, both companies agreed to continue exchanging more data. In April 2014, even as the general elections were underway, ONGC, RIL and the DGH all agreed to hire an independent consultant to sort out the claims and counterclaims. Then, suddenly (and to some, unexpectedly) ONGC chose to act against RIL by filing a writ petition in the Delhi High Court against the private company as well as the government.

ONGC in Court

The petition initially failed to attract much attention in the media as it was filed a day before the results of the Lok Sabha elections were announced on 16 May 2014, when political news was dominating the public discourse. In its petition, ONGC named the MoPNG and the DGH as respondents on the ground that these government bodies had failed to be vigilant in taking precautionary measures, which had led to the company losing huge sums of money. ONGC claimed in its writ petition:

Pertinently, four wells have been drilled by Respondent No 3 (RIL) within distances ranging within 50 m (metres) to about 350 m from the blocks of (the) petitioner (ONGC) and wells have been so drilled and constructed that there is a pre-planned and calculated slant/angular incline towards the gas reserves of (the) petitioner with a clear idea to tap the same.

According to ONGC, its nomination block, Godavari PML (G4), and discovery block, KG-D5, under the New Exploration Licensing Policy (NELP)-1 are contiguous to the RIL-operated NELP-1 block KG-D6. The public sector undertaking (PSU) had said that it wanted a “truly independent” agency to examine its contention that the Mukesh Ambani-led RIL may have drawn natural gas worth up to Rs 30,000 crore from ONGC’s fields adjacent to the ones in the KG-D6 block where the contracting company controlled by RIL operates.

ONGC sought compensation from RIL if the allegations of theft were found to be true. Its counsel Dushyant Dave said in court that ONGC could seek over Rs 25,000 crore by way of compensation. ONGC argued in its petition that the management committee, which had two government representatives and was in the possession of data from RIL, should not have cleared the private company’s plans to drill wells so close to the ONGC blocks. The PSU has also accused the government and RIL of not having followed the mechanism internationally accepted for joint development of contiguous oil and gas fields or reservoirs, clearly provided in the PSC signed in April 2000 between the MoPNG and RIL. Since the blocks were adjacent to each other, under the provisions of the PSC, they should have been jointly developed by the two companies, it was argued.

Why did ONGC’s decision to move court on 15 May 2014 surprise so many? The company was under pressure from the very beginning not to act in the manner that it did. The outgoing minister of petroleum and natural gas in the United Progressive Alliance (UPA) government at that time was Veerappa Moily. He shot off a note to the then Secretary in the MoPNG, Saurabh Chandra, calling for an enquiry as to why ONGC had dared to initiate legal action against its biggest shareholder. (Roughly 70% of ONGC’s shares are owned by the Union government through the MoPNG.)

What is especially noteworthy is the fact that Moily’s note, dated 22 May 2014, was written almost a week after the UPA government was voted out of power in the general elections and just four days before Narendra Modi was sworn in as Prime Minister on 26 May. In a statement dated 15 May, RIL said that all its operations had been undertaken in accordance with the PSC and the development plan approved by the management committee, which had government representatives holding veto powers. It added that all well locations and their profiles had been specifically reviewed and approved by the committee. Moreover, RIL said that there had been “constructive engagement” with ONGC on sharing of data and on appointing an “independent third party expert” at a meeting held on 9 May 2014.
In its reply to ONGC’s suit that was filed on 28 May, RIL said that, according to the understanding it had reached on 9 May, ONGC was to circulate a draft of the “enquiry” to be sent to the four agencies shortlisted to RIL and the DGH. The four names of the firms were to be finalised on 23 May and, thereafter, the enquiry report would be sent to the expert agencies for their responses. Thus, RIL stated that any inference prior to such assessment was mere “speculation” and commencement of legal proceedings “unwarranted.”

On 20 May 2014, in a news report put out by the PTI, the Chairman and Managing Director of ONGC, Dinesh K Sarraf, gave the reason for the first time as to why his company had decided to file a lawsuit against RIL:

The matter (of RIL allegedly drawing gas from ONGC blocks) was brought to the notice of our board (in March). The board was of the view that we need to protect our commercial interest at all costs. If that requires any legal recourse, we will take that.

It is reliably learnt that at least two independent directors on the board of ONGC were keen that the company seek legal avenues to redress its grievances and protect its commercial interests. These members have since ceased to be members of the board as their terms have ended.

As for Sarraf himself, he had replaced Sudhir Vasudeva as the head of ONGC on 26 February 2014, after a proposal to give an extension to Vasudeva, that had been moved by Moily, was rejected by the Appointments Committee of the Cabinet headed by the then Prime Minister Manmohan Singh. Moily’s attempt to grant Vasudeva an extension of term had also been opposed by the then spokesperson of the Bharatiya Janata Party, Nirmala Sitharaman (who is now Union Minister of State for Industry and Commerce), and a former Member of Parliament belonging to the Communist Party of India, Gurudas Dasgupta (who is one of the petitioners in a public interest litigation (PIL) against RIL that is being heard by the Supreme Court).

Ministry Opposition

The MoPNG and DGH filed a counter-affidavit in the dispute in August 2014 claiming that the allegations of ONGC were “frivolous,” that the issues raised by the PSU had been looked into, and sought dismissal of the petition. The 70% owner of ONGC was, in effect, stating that its offspring should not have gone to court alleging theft of gas by RIL.

The MoPNG said that ONGC had not raised any “issue on connectivity of reservoirs and channels” when the mining lease of the G4 block was granted to it six years earlier in 2008, nor when production of gas from the KG-D6 block by RIL started in April 2009.

The ministry’s submission said that ONGC “woke up from (its) slumber only in July 2013, when it requested the government to provide the G&G (geological and geophysical) data and that too to analyse the continuity of the pool.” The MoPNG said ONGC’s writ petition had become “infructuous pursuant to the appointment of the independent agency,” that is, D&M.

That D&M would be chosen as the independent agency to resolve the dispute between ONGC and RIL had been first suggested in a 16 July 2014 report in the Hindu Business Line. The two companies and the government agreed that D&M would be appointed as the independent agency to investigate possible reservoir connectivity across undersea gas blocks. Thereafter, from 25 September 2014 onwards, both ONGC and RIL began sharing data with D&M.

On 26 November, the current Minister of State for Petroleum and Natural Gas, Dharmendra Pradhan, told Parliament that D&M would submit a report by June 2015 on whether a company controlled by RIL “stole” natural gas from the wells where ONGC is contracted to operate in the KG basin, as alleged by the government-owned company. The minister said that the two companies under the “supervision” of the DGH in the MoPNG should have appointed a “third party” or an independent agency “earlier” to “establish the continuity of reservoirs across the ONGC and RIL offshore deep water blocks/areas in KG Basin.”

Questions to the Government

After D&M submitted its interim report on the dispute, from 9 October this year onwards, various reports have appeared in the media summarising the key findings of the independent consultant which had, by and large, supported the contention of ONGC that RIL had drawn gas owned by it in an unauthorised manner. On 9 October itself, E A S Sarma, former Secretary to the Government of India,
who is a petitioner in a PIL case relating to RIL, sent a letter to K D Tripathi, Secretary, MoPNG, raising a number of questions, which have been paraphrased and summarised here.

Was RIL aware that the gas field where it is licensed and contracted to operate is not just contiguous but also connected to the field licensed to ONGC? If yes, did RIL disclose this to the MoPNG in accordance with the provisions of Article 12 of the PSC signed by it with the Government of India? What was the role played by the “regulatory authority,” the DGH, in monitoring the extraction of gas in the areas licensed to RIL and ONGC? Why did the DGH and the ministry not initiate action under Article 12 of the PSC for “joint management” of the gas field, as is the global best practice in this sector?

Did the government deliberately drag its feet before taking cognisance of ONGC’s complaint, resulting in a breach of Rule 4 of the Petroleum and Natural Gas Rules of 1959 (which defines the rationale for grant of a licence), Article 3 of the PSC (on delineation of the licensed area) and Article 30.3 of the PSC (on deliberate non-disclosure or false disclosure by the contractor leading to a show-cause notice for cancellation of the contract)? Did the government’s alleged inaction leave ONGC no choice but to seek judicial intervention over the head of its 70% shareholder, that is, the Government of India?

The former bureaucrat claimed that this dispute should be looked at against a backdrop of allegations of collusion between particular government functionaries and RIL on a variety of issues, including fixation of administered prices, besides claims of excessive capital expenditure or “gold-plating” and over-invoicing of equipment imports, as had been highlighted in a report of the CAG presented in Parliament in September 2011. Sarma wondered if the government’s inaction permitted RIL to retain large areas for exploration and extraction in violation of Article 4 of the PSC, thus “forcing” ONGC to “share” RIL’s surplus infrastructure. Under Article 12 of the Constitution, being a PSU, ONGC is supposed to be an arm of the government.

“It is ironic that the government had even gone to the extent of changing the independent directors of ONGC for reasons best known to it,” Sarma stated, adding that an earlier report submitted to the MoPNG in 2011 by an independent oil and gas reservoir expert, P Gopalakrishnan, had claimed that expeditious and excessive extraction of gas had led to a “permanent loss” of reserves in the KG basin and had also caused land subsidence. “Who will pay for this?” Sarma wondered.

Enforcement of Contracts

Sarma is correctly of the view that management and enforcement of contracts are crucial to good governance in any sector, including the oil and gas exploration industry, where the natural resources extracted are not just high in value but also critical to the country’s energy security. A flawed and inadequate PSC between RIL and the MoPNG has been greatly responsible for many of the problems that have been encountered during the exploration and extraction of gas from the KG basin. In the case of the alleged theft, the management committee, which included representatives of the ministry, apparently acquiesced in whatever RIL did, and the contractual provisions for joint management of the gas fields and imposition of penalties were never invoked. This, Sarma points out, does not augur well for a country that is aggressively inviting foreign investments, including investments in the oil and gas industry.

It should also be noted that government-owned companies like ONGC are expected to function independently and safeguard the interests of the shareholders, which include the people of India. The two really “independent” former directors of ONGC persuaded the corporation to approach the Delhi High Court but the ministry under Moily tried to prevent this from happening—it is truly ironic that the government, as the major shareholder of ONGC, should actively work against its interests and try and cause harm to itself.

What is likely to happen from here onwards? After considering the views of RIL, ONGC and the DGH, D&M will be presenting its final report. It is not known how soon this will take place. The US consulting firm submitted its interim report three months behind schedule. While quantifying the volume and value of the alleged theft or misappropriation, it is unlikely that D&M will suggest an amount that should be paid in the form of “compensation” or “penalty” by RIL to ONGC, that is, assuming that the position of the consultant in its final report will not vary significantly from the position it took in its interim report.

After the final report is submitted, what are the legal options before the aggrieved parties, whoever they may be? RIL, ONGC, the DGH or the MoPNG, singly or in combination, could go back to the Delhi High Court seeking to redress their respective grievances, if any. The Supreme Court could also be petitioned. The parties may also choose to go through a process of arbitration instead of going to court. Time alone will tell what will transpire. The story of the great Indian gas robbery is far from over.

Postscript

On 2 December 2015, newspapers reported that the final report had been submitted by D&M. According to the details given in the news, it appears that the final report is very similar, if not identical, to the interim report discussed here.

Bibliography

Guha Thakurta, Paranjoy and Jyotirmoy Chaudhuri (2014): “The Rs 30,000 Crore Fight over Gas,” /the-rs-30,000-crore-fight-over-gas/20141205.htm, 5 December.

Hindu Business Line (2014): “US Consultant to Verify ONGC’s Claim on Krishna–Godavari Gas,” .com/economy/us-consultant-to-verify-ongcs-claim-on-krishnagodavari-gas/article6218096.ece, 16 July.

Press Trust of India (2014): “ONGC-RIL Dispute: Global Independent Consultant to Be Appointed, Says Report,” 22 June, com/news/corporates/article-ongc-ril-dispute-global-independent-consultant-to-be-appointed-says-report-559306.

— (2015): “D&M Submits Draft Findings on Reliance Industries, ONGC Gas Issue: Report,” 12 October, rates/article-d-m-submits-draft-findings-on-reliance-industries-ongc-gas-issue-report-1231260.

— (2015): “ONGC Has No Claims in KG Gas Row, Says Reliance Industries,” com/news/industries/article-ongc-has-no-claims-in-kg-gas-row-says-reliance-industries-1233052, 16 October.

— (2015): “Rs 11,000-Crore ONGC Gas Shifted to Reliance Industries Fields: D&M,” article-rs-11-000-crore-ongc-gas-shifted-to-reliance-industries-fields-d-m-1246122, 22 November.
The data from the Rapid Survey on Children conducted in 2013–14, released after an inexplicable delay and still in a summary fashion, show some, but patchy, progress between 2005–06 and 2013–14 in maternal and child health indicators. A preliminary analysis indicates that in those areas where special efforts were made, such as in increasing institutional delivery and expanding immunisation coverage, some results are seen. This calls for greater investments in health and nutrition within a more comprehensive approach.

Dipa Sinha ([email protected]) teaches Economics at the School of Liberal Studies, Ambedkar University, Delhi.

The data of the Rapid Survey on Children 2013–14 (RSoC), conducted jointly by UNICEF and the Ministry of Women and Child Development (MWCD), was finally released in July this year after much controversy and speculation on why it was not being made public. This is the first nationally representative data set on a number of health and nutrition indicators that is available after the National Family Health Survey-3 (NFHS-3), which was conducted in 2005–06. While information from other sources, such as microstudies, the programmatic Health Management Information System (HMIS) and the Annual Health Survey (AHS) (not for all states though), indicated some trends in health indicators, what was missing was comparable data that could be used not just to analyse the trends but also to evaluate what caused these changes.

While economic growth rates accelerated after 2005–06, this period also saw a number of interventions by the central government in relation to health and nutrition, including the introduction of the National Rural Health Mission (NRHM) and the Janani Suraksha Yojana (JSY), and the expansion of the Integrated Child Development Services (ICDS). A proper assessment of their impact can be possible with the availability of a recent and comparable data set, ideally available at the individual/household level. The RSoC data released as of now are only the fact sheets giving all-India and state-level averages for some indicators, and therefore this kind of detailed analysis is not yet possible. However, these do provide some information to get a sense of the trends in this period.

Maternal Health

Improving maternal health has been one of the main objectives of the NRHM (GoI 2005). While we know that India has failed to meet the Millennium Development Goal on maternal mortality, the RSoC data do show a doubling of the proportion of births taking place in a medical facility as well as an increase in the births assisted by health professionals. Such an increase has been attributed by other studies to the combined effect of the cash incentives under the JSY, expansion of primary healthcare (PHC) services, availability of ambulance services, etc. However, studies have also raised questions on the quality of care available in these institutions and the fact that although there has been significant progress in delivery care, this does not seem to be reflected adequately in the outcome indicators related to maternal mortality and morbidity (Rai and Singh 2012; Lim et al 2010; Kumar and Dansereau 2014).

Further, Figure 1 also shows that the increase in the coverage of antenatal care (ANC) services has not been as much as that in delivery services. The percentage of women making ANC visits three or more times (as recommended) has gone up from 52% to only 63%, and a similar percentage of women has reported having an ANC in the first trimester. Therefore, a third of pregnant women in the country are still not even getting the basic recommended ANC. This also points to the question of whether the single-minded focus on enhancing institutional deliveries has taken attention away from other essential interventions for maternal health. Similarly, postnatal care (PNC) in the RSoC data does not show much change, with only 39% of women receiving PNC within 48 hours of discharge/delivery (37% in NFHS-3). The first two days after delivery are a critical period for mothers and check-ups during this time are important to prevent maternal mortality.

According to the RSoC data, of the mothers who were aware of the JSY and Janani Shishu Suraksha Karyakram (JSSK) schemes, 47% availed of the JSY but far fewer availed of any benefits of the JSSK.
[Figure 1: Trends in Maternal Health Indicators — bar chart comparing NFHS-1 (1991–92), NFHS-2 (1998–99), NFHS-3 (2005–06) and RSoC (2013–14) on four indicators: three or more ANC visits, ANC in the first trimester, birth in a medical facility, and birth assisted by a health professional.]

While the JSY provides for a cash incentive for institutional delivery, the JSSK provides for cashless treatment for all services related to maternal and neonatal health.1

Overall, as far as maternal health indicators go, the RSoC data suggest that much more needs to be done to enhance access to comprehensive services for pregnant and lactating women. Although there are some improvements in access to care in terms of women delivering in institutions and/or being assisted by a health professional during delivery, there are large gaps in terms of the antenatal and postnatal care being received.

Further, based on the preliminary information available in the fact sheets, the RSoC data show that the inequities in terms of wealth/income and caste groups remain. For instance, while the percentage of births taking place in an institution is 93% for the highest wealth quintile, it is 61% among the lowest wealth quintile. The corresponding figures are 80% and 44% for women receiving three or more ANCs, and 49% and 23% for receiving PNC within two days of delivery.

Child Health and Nutrition

In relation to child health and nutrition as well, the RSoC results present a mixed bag. As far as child nutrition indicators go, there definitely seems to be a faster rate of progress compared to earlier. There was hardly any reduction in child malnutrition (for children under three years) between NFHS-2 (1998–99) and NFHS-3 (2005–06) (43% underweight in NFHS-3 compared to 40% in NFHS-2). However, the recent RSoC data (2013–14) seem to show greater improvement, with the prevalence of underweight among children under five years of age decreasing from 43% to 29%. The data of NFHS-2 are not directly comparable with RSoC because NFHS-2 collected anthropometric data only for children under three years of age while RSoC reports malnutrition data for children under five years of age. Once the detailed data of the RSoC are available, it will be possible to look at data only for children under three for comparison with NFHS-2.

[Figure 2: Trends in Child Malnutrition (0–59 Months) — NFHS-3 versus RSoC: stunting 48% to 39%, wasting 20% to 15%, underweight 43% to 29%.]

The data also show an improvement in breastfeeding indicators, which directly influence both child mortality as well as nutrition. According to RSoC data, 45% of children were breastfed within 24 hours after birth and 65% of children aged 0–5 months were exclusively breastfed (25% and 47% respectively, under NFHS-3). However, as far as complementary feeding goes, there is not as much of a change. In fact, there seems to be a decline, with RSoC showing only 50% of children aged 6–8 months being fed complementary foods compared to 56% in NFHS-3, and further only 20% of children aged 6–23 months meeting minimum dietary diversity compared to 35% earlier. While these are worrying figures, once again a detailed analysis is only possible when further data from the RSoC are made available.

Immunisation coverage has gone up since NFHS-3, with 65% of children in the age group of 12–23 months being fully immunised compared to 44% earlier. Immunisation is also another aspect which showed stagnation in the earlier surveys, and so it is a positive development that there now seems to be an improvement.

[Figure 3: Full Immunisation Coverage (%) — NFHS-1: 35, NFHS-2: 42, NFHS-3: 44, RSoC: 65.]

State-level Trends

All past surveys have shown large state-wise variations in these indicators related to child health and nutrition. While a detailed state-level analysis is not possible here given the limits of space, some basic findings are presented. Since there are so many indicators, we use a simple index of child health to compare the rankings of different states. A similar index, called the ABC index (Achievements of Babies and Children), has been used in the past in the FOCUS report (CIRCUS 2006) as well as Khera and Dreze (2012). The index of child health2 is a simple average of the normalised values of four indicators—percentage of children who are fully immunised, percentage of births taking place with the assistance of a health professional, percentage of children who are not underweight, and percentage of children who survive up to the age of five years. The index lies between 0 and 1, with higher values indicating better status of child health. All these indicators are available from the NFHS-3 and RSoC.
The RSoC does not have the under-five mortality rate, which has been taken from the Sample Registration System data for 2013 (SRS 2013). This data is presented in Table 1.

Table 1: Index of Child Health (2005–06, 2013–14)

S No | State | % of Children Who Survive to Age 5 (05–06 / 13–14) | % of Children Who Are Fully Immunised (05–06 / 13–14) | % of Children Who Are Not Underweight (05–06 / 13–14) | % Deliveries Assisted by Health Personnel (05–06 / 13–14) | Index of Child Health (05–06 / 13–14)
1 | Andhra Pradesh | 93.7 / 95.9 | 46 / 74.1 | 67.5 / 77.7 | 74.9 / 93.3 | 0.55 / 0.71
2 | Assam | 91.5 / 92.7 | 31.4 / 55.3 | 63.6 / 77.8 | 31 / 74.9 | 0.24 / 0.33
3 | Bihar | 91.5 / 94.6 | 32.8 / 60.4 | 44.1 / 61.5 | 29.3 / 68.4 | 0.11 / 0.25
4 | Chhattisgarh | 90.9 / 94.7 | 48.7 / 67.2 | 52.9 / 66.1 | 41.6 / 64.2 | 0.26 / 0.32
5 | Gujarat | 93.9 / 95.5 | 45.2 / 56.2 | 55.4 / 66.4 | 63 / 89.6 | 0.43 / 0.44
6 | Haryana | 94.8 / 95.5 | 65.3 / 70.7 | 60.4 / 77.3 | 48.9 / 78.6 | 0.53 / 0.58
7 | Himachal Pradesh | 95.8 / 95.9 | 74.2 / 80.2 | 63.5 / 80.5 | 47.8 / 71.6 | 0.62 / 0.64
8 | Jammu and Kashmir | 94.9 / 96 | 66.7 / 59 | 74.4 / 84.6 | 56.5 / 74.9 | 0.66 / 0.56
9 | Jharkhand | 90.7 / 95.2 | 34.2 / 64.9 | 43.5 / 57.9 | 27.8 / 61 | 0.08 / 0.23
10 | Karnataka | 94.5 / 96.5 | 55 / 79.4 | 62.4 / 71.1 | 69.7 / 92.6 | 0.56 / 0.71
11 | Kerala | 98.4 / 98.8 | 75.3 / 83 | 77.1 / 81.5 | 99.4 / 99.5 | 0.98 / 0.97
12 | Madhya Pradesh | 90.6 / 93.1 | 40.3 / 53.5 | 40 / 63.9 | 32.7 / 79 | 0.10 / 0.23
13 | Maharashtra | 95.3 / 97.4 | 58.8 / 77.4 | 63 / 74.8 | 68.7 / 93 | 0.61 / 0.77
14 | Odisha | 90.9 / 93.4 | 51.8 / 62 | 59.3 / 65.6 | 44 / 83.7 | 0.33 / 0.35
15 | Punjab | 94.8 / 96.9 | 60.1 / 78.6 | 75.1 / 84 | 68.2 / 85.4 | 0.68 / 0.79
16 | Rajasthan | 91.5 / 94.3 | 26.5 / 60.7 | 60.1 / 68.5 | 41 / 85.8 | 0.23 / 0.42
17 | Tamil Nadu | 96.4 / 97.7 | 80.9 / 76.3 | 70.2 / 76.7 | 90.6 / 99.5 | 0.86 / 0.83
18 | Uttar Pradesh | 90.4 / 93.6 | 23 / 47 | 57.6 / 65.7 | 27.2 / 65.1 | 0.12 / 0.14
19 | West Bengal | 94 / 96.5 | 64.3 / 75.2 | 61.3 / 70 | 47.6 / 78.9 | 0.50 / 0.58
— | India | 92.6 / 95.1 | 43.5 / 81.1 | 57.5 / 70.6 | 46.6 / 81.1 | 0.34 / 0.47

Notes: The index of child health is an unweighted average of the normalised values of columns 3 to 6. To arrive at the index, the indicators have been normalised using the procedure applied by the United Nations Development Programme (UNDP) for the Human Development Index (HDI), namely, Yi = (Xi − Xmin) / (Xmax − Xmin), where Yi is the normalised indicator for state i, Xi is the corresponding pre-normalisation figure, and Xmax and Xmin are the maximum and minimum values of the same indicator across all states. The normalised indicator varies between 0 and 1 for all states, with 0 being the worst and 1 being the best. A simple average of the normalised values for the four indicators is the index of child health.
Age groups: “12–23 months” for immunisation; “below 5 years” for nutrition.
All data for 2005–06 are from NFHS-3. Data for all indicators for 2013–14 are from RSoC, except for children who survive to age 5, which is from SRS (2013).
The absolute values of the index are strictly not comparable over the two periods because of the normalisation applied. Inferences can, however, be obtained on the basis of the ranking of states.
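To make the construction of the index concrete, the following sketch recomputes the 2013–14 index for three states from the Table 1 values using the HDI-style normalisation described in the notes; the maximum and minimum values are read off the 19 states listed in Table 1 (for instance, 92.7 is Assam’s survival figure and 47 is Uttar Pradesh’s immunisation figure).

# Recomputing the 2013-14 index of child health for three states (Table 1).
# Indicator order: survive to age 5, fully immunised, not underweight,
# delivery assisted by health personnel.
data = {
    "Kerala":        (98.8, 83.0, 81.5, 99.5),
    "Maharashtra":   (97.4, 77.4, 74.8, 93.0),
    "Uttar Pradesh": (93.6, 47.0, 65.7, 65.1),
}
mins = (92.7, 47.0, 57.9, 61.0)   # minima across the 19 states in Table 1
maxs = (98.8, 83.0, 84.6, 99.5)   # maxima across the 19 states in Table 1

for state, values in data.items():
    normalised = [(x - lo) / (hi - lo) for x, lo, hi in zip(values, mins, maxs)]
    index = sum(normalised) / len(normalised)
    print(f"{state}: {index:.2f}")
# Prints Kerala: 0.97, Maharashtra: 0.77, Uttar Pradesh: 0.14,
# matching the last column of Table 1.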
At both time points, Kerala, Tamil Nadu, Punjab and Maharashtra are amongst the best performers. This is borne out by other studies as well. Amongst the poorly performing states are the north Indian states of Chhattisgarh, Bihar, Jharkhand, Madhya Pradesh and Uttar Pradesh. While these states have been ranked at the bottom on indicators of health and nutrition for a long time, what the RSoC data show is that most of these states show some advance, although still far behind the levels of Kerala or Tamil Nadu.3 Uttar Pradesh, however, is a cause for concern as not only does it have the worst index for child health, there is also relatively slow improvement in the years since 2005. While it is beyond the scope of this article to analyse the reasons for these regional differences, what is clear is that the regional patterns in health and nutrition outcomes remain largely unchanged, with some states being much behind others. What are also required are studies to understand what worked in the states that achieved some success.4

The RSoC does collect some data on access to anganwadi centres and their services. Once again, with the limited data available, it is difficult to make useful comparisons. A cursory look does show some expansion in the outreach of the ICDS. For example, 49% of children under three years are reported to be availing of supplementary nutrition in RSoC compared to 32.5% in NFHS-3 (44% and 33% respectively for children in the age group of 3–6 years). However, unit-level data is necessary to make any further sense of how such an expansion could have affected nutrition outcomes.

Concluding Remarks

One of the main issues related to health and nutrition data in India is the lack of regular monitoring data that is available at a disaggregated level, that can be used for analysis not just for research purposes but also to inform policy and programmes. After a long gap of eight years, the RSoC data is now available, making some of this analysis possible. However, a number of issues remain regarding consistency of sampling and definitions across different surveys, which makes it difficult to study trends over a long period of time.5 In fact, what we need is data that is disaggregated even further, at least to the district level. For this, the District Level Household Survey (DLHS) or the AHS, both of which provide district-level data but for different sets of states, need to be combined so that we have a nationally comparable and representative data set. Moreover, until the NFHS-4 comes, which has been long delayed, the RSoC can provide a valuable source of data provided that further details and the unit data are released soon.

Based on the limited data available, this article looks at some main indicators of maternal and child health, and nutrition. What we find is that while there are certainly some advances made in terms of these indicators, the outcomes are at best patchy, with many areas showing stagnation. A preliminary look seems to indicate that in those areas where special efforts were made, such as increasing institutional delivery and expanding immunisation coverage, some results are seen. This calls for greater investments in health and nutrition with a more comprehensive approach addressing various aspects together. In the current context, where the central government in the name of decentralisation is withdrawing from its responsibility in many of these issues, there is a need to rethink whether that is a wise strategy. What is also worrying is that some of the crucial central interventions on nutrition and health have seen a massive cut in expenditure after the Fourteenth Finance Commission’s recommendations. Some states remain far behind and need all the support that they can possibly get and, overall, while we might be moving ahead, India still has large gaps to fill as far as providing universal health and nutrition services is concerned.
The recent floods in Chennai are a fallout of real estate riding roughshod over the city’s waterbodies. Facilitated by an administration that tweaked and modified building rules and urban plans, the real estate boom has consumed the city’s lakes, ponds, tanks and large marshlands.

[This article was written before the rains and floodings restarted on Sunday, 29 November 2015.]

Satyarupa Shekhar ([email protected]) and Madonna Thomas are with the Citizen Consumer and Civic Action Group, Chennai.

Chennai remains unprepared to combat rains every year. This, despite various citizens’ groups calling for the need to abide by planning rules and regulations for the past 15 years. The current floods in Chennai are a wake-up call to everyone to think about how the city has developed without any reflection on the implications of violating the urban ecology, including our rivers, lakes, wetlands and open spaces.

Tamil Nadu experiences severe water shortages and water stagnation/flooding every year. The recent policy focus in the state has been on groundwater recharge through rainwater harvesting, but in practice, the public water utility has been acquiring “water fields”—public agricultural lands in the peripheries of Chennai—where the levels and quality of groundwater are amenable to cater to the city’s burgeoning water demands. Simultaneously, poor planning practices and lax enforcement of building rules have resulted in the majority of the city’s lakes and ponds being built over, obstructing its natural hydrology.

The state’s approach to city governance is exemplified by the case of the Adyar Poonga, an eco-park built on fragile estuarine lands of the Adyar creek. In 1993, a group of civil society organisations comprising the Citizen Consumer and Civic Action Group (CAG), Exnora and the Environment Society of Madras filed a case in the Madras High Court to restrain Tamil Nadu from building activity and housing projects. The petition sought to protect five major lakes in Ambattur, Kakkalur, Nolambur and Chitlapakkam from being converted into residential sites and an Ambedkar Memorial. Unfortunately, the court ruled that the government could use 1.5 acres—rather than the original 45 acres—for the memorial to be set up. Through the 1990s and 2000s, this fragile estuarine area was overrun with extensive construction that included the Leela Palace hotel and several high-end residential and commercial buildings. In 1995, M A M Ramasamy, a real estate baron, sought permissions to construct multistorey buildings on a portion of the estuarine lands.
In 1996, the Chennai Metropolitan Development Authority (CMDA), the city’s apex planning agency, gave the necessary planning permission after the builder had paid the requisite fees and transferred 2,321 square metres of land under the open space reservation (OSR) rule to the Corporation of Chennai. The Corporation of Chennai was to give the building permission but objected that the proposed building violated the Coastal Regulation Zone (CRZ) prescriptions of no construction within 500 metres of the high tide line. However, Ramasamy furnished evidence that the site was located 720 metres away, well beyond the high tide line, and also got the Indian Institute of Technology Madras (IIT Madras) to state that a public road was already in existence between the creek and the building site. In 1997, Ramasamy petitioned the Madras High Court to mandate the Corporation of Chennai (CoC) to give the necessary permission and received a favourable response. The CoC, having already sought and acquired the OSR lands and unable to contest the planning permission given by the CMDA, was compelled to give its permission.

The CAG challenged this order in 1997 on grounds of public interest but lost on account of the area “booming with developmental activities and several constructions have taken place therein in the form of construction of residential quarters for ministers and other government officials, including the construction of residential quarters for the members of the Legislative Assembly, etc.” Though the government has repeatedly removed slums and informal settlements from the areas adjoining the river under the guise of safeguarding them, it has also frequently allocated land and built low income housing in large marshlands and natural catchment areas in the city, such as Semmenchery, amplifying the vulnerabilities of the urban poor. All this while, the state—both the judiciary and the executive—has abetted and even partaken in the acquisition and degradation of wetlands and waterbodies.

Making It Easy

Several citizen groups have also been criticising such dilution of planning rules and guidelines. CAG had challenged the Tamil Nadu government’s decision to regularise building violations in the Madras High Court in 1987. In 1998, the Government of Tamil Nadu introduced Section 113A, an amendment to the Town and Country Planning (T&CP) Act 1971, and framed the relevant rules under Government Order (GO) 190 to regularise illegal constructions till 1999. The state also extended regularisation schemes in 2000, 2001 and 2002. CAG had challenged Section 113A in 1999 and each of the subsequent regularisation schemes in 2000, 2001 and 2002, respectively. In 2006, Justice A P Shah held that the government could regularise violations till 22 February 1999 and directed that a monitoring committee be set up within the CMDA to frame the guidelines and penalties for this process. However, in 2007 the government proposed further amendments to the T&CP Act 1971 to allow for regularisations till 1 July 2007. When this was challenged in 2007, the high court ruled that the government could take actions for the purpose of administration, but that it could only do so by framing proper rules and guidelines. As a result, the Justice Mohan Committee was set up on 1 June 2007 to look into the regularisation process till 2007 and its recommendations were ratified as guidelines and rules in GO 234 and GO 235, respectively. The recommendations under GO 234 were rejected by the high court because they were too liberal.

Fourteen years after the government had promised the high court in 1999 that it would enforce the rules, it amended the T&CP Act with Section 113C, allowing for further exemptions, and set up yet another committee, the Justice Rajeswaran Committee (JRC), to frame the rules and guidelines.
The JRC recommendations are far more liberal than even the provisions of GO 234 of the T&CP Act that had been rejected by the Madras High Court. For example, where GO 234 prohibited all developments in the Aquifer Recharge Area and the Red Hills Catchment Area, the JRC allows developments with negligible safeguards. Similarly, where GO 234 stated that no buildings with any encroachment, including aerial encroachments, on to waterbodies shall be considered for exemption, the JRC has permitted developments on sites within 15 m from the waterbody, subject to conditions imposed by the Public Works Department/Executive Authority. The JRC also states that “in cases where the construction is made in the land use zoning which is incompatible to the land use, the applicant cannot make any additional construction in future and has to give an undertaking to that effect.”

By going well beyond the terms of Section 113A, ostensibly to provide remedial procedures that extend well beyond CMDA parameters, the JRC recommendations seek to obfuscate the issue of penalising violators and making the violating buildings follow the CMDA’s planning norms. Through the setting up of scrutiny and core committees with specific empanelled professionals, the JRC seeks to shift accountability from the promoters and owners of these violations. It is evident that the current floods in Chennai are not a natural disaster but can be attributed almost entirely to unrestrained construction and repeated regularisations of violations. Continuing on our current path will not only lavishly reward lawbreakers but is a foot in the door for those who desire to make a case for future transgressions, even if the recommendations made by various committees emphatically state that they are only for stipulated periods.

Another Institutional Mechanism

Today, we have the chief minister of Tamil Nadu stating that officials are doing the best they can in the face of a natural disaster. It is easy to attribute the devastation from unexpected flooding to the results of nature and climate change when in fact it is a result of poor planning and infrastructure. In Chennai, as in several cities across the country, we are experiencing the wanton destruction of our natural buffer zones—rivers, creeks, estuaries, marshlands, lakes—in the name of urban renewal and environmental conservation. The Tamil Nadu government created the Chennai Rivers Restoration Trust (CRRT), earlier called the Adyar Poonga Trust, to implement the Adyar eco-park and the Cooum restoration projects. But in reality, this is yet another institutional mechanism that is facilitating the development of transportation and other infrastructure along the rivers. There have been frequent statements about the threat to the rivers’ sustainability posed by sewage outflows and this has been used to facilitate further evictions. However, CAG has made note of several instances where large drains are emptying sewage and industrial effluents into the Cooum River that cannot possibly have been generated by the slum dwellers living close to the banks. Yet, we see evictions underway without any action on the real polluters. The current rains and floods have, ironically, strengthened the government’s argument for the need to protect slum dwellers, but where they will be moved remains to be seen.

The Kosathalayar River basin joins Pulicat Lake, the Madhavaram–Manali wetlands and the Puzhal, Korattur and Retteri lakes before draining into the sea at the Ennore creek. The CMDA classified a large portion of this area as a “Special and Hazardous Industrial Area” in the Master Plan–2026, and the Ennore creek that used to be home to sprawling mangroves is fast disappearing, with soil dredged from the sea being dumped there. The Kodungaiyur dump site in the Madhavaram–Manali wetlands is one of two municipal landfills that service the city. The Velachery and Pallikaranai marshlands are a part of the Kovalam basin, the southern-most of the four river basins of the city. Today, the slightest rains cause flooding and water stagnation in Velachery, home to the city’s largest mall, several other commercial and residential buildings, and also the site where low income communities were allocated land. The Pallikaranai marshlands, once a site for beautiful migratory birds, are now home to the second of the two landfills in the city, where the garbage is rapidly leaching into the water and killing the delicate ecosystem.

These are all human-made disasters and we need to take drastic steps to immediately arrest and reverse these developments. It is critical that we have high quality data and knowledge of our urban ecology and built drainage networks in the public domain, the lack of which has crippled the impact of citizens and activists in the city. One immediate need for a map of the current floods would be to identify the most vulnerable neighbourhoods to sharpen the government’s response, particularly for the urban poor. By adding information about the contours and elevation of the city, we can create zones of risk from future instances of flooding and the resulting potential vulnerabilities.

We would also use such a map to assess the extent of damage to life and property, and to monitor if the government’s current relief and response efforts are appropriate. Identifying the extent to which the state has built low income housing in floodplains and catchment areas would be a powerful tool to challenge such an approach that places the urban poor in situations that amplify their vulnerabilities. Such a map can also be layered with information about other public infrastructure, such as primary healthcare centres, dispensaries, public toilets, the storm water drain network and municipal landfills, to enable analyses of their quality and adequacy. Mapping information on the extent and nature of violations and encroachments, and the ways in which violators compromise the public health, safety and convenience of other residents of the city, would make a compelling case for the city’s planning and monitoring authorities to enforce building norms, impose penalties on violators and reclaim ecologically valuable areas. But most importantly, it is critical that such maps and data are in the public domain so that citizens are better able to challenge governments and hold public officials to account.
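As a minimal sketch of the kind of layering described above, the following is purely illustrative: the neighbourhood records, field names and scoring weights are invented for the example, since no such public dataset for Chennai is cited here.

# Illustrative only: combining hypothetical map layers (flood depth,
# elevation, presence of state-built low income housing) to rank
# neighbourhood vulnerability. All figures below are made up.
neighbourhoods = [
    {"name": "Velachery",   "elevation_m": 5.0,  "flood_depth_m": 1.2, "low_income_housing": True},
    {"name": "Semmenchery", "elevation_m": 4.0,  "flood_depth_m": 0.9, "low_income_housing": True},
    {"name": "Anna Nagar",  "elevation_m": 12.0, "flood_depth_m": 0.1, "low_income_housing": False},
]

def vulnerability_score(n):
    # Deeper flooding and lower elevation raise the score; the housing
    # layer flags where relief should be prioritised for the urban poor.
    score = 2.0 * n["flood_depth_m"] - 0.1 * n["elevation_m"]
    if n["low_income_housing"]:
        score += 1.0
    return score

for n in sorted(neighbourhoods, key=vulnerability_score, reverse=True):
    print(f"{n['name']}: {vulnerability_score(n):.2f}")

Such a ranking only becomes meaningful when the underlying layers (flood extent, contours, housing and drainage) are published as open data, which is the article's central demand.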
There are no public procurement programmes for cancer on the lines of those that exist for AIDS or tuberculosis. It is worth considering whether it is feasible to institute a drug procurement programme based on international/national competitive bidding or shopping, like those already in place in the National AIDS Control Organisation. If patients in developed countries are finding it difficult to survive the astronomical prices of cancer drugs, a developing country like India, with a large part of its population below the poverty line or among the middle class, is even worse affected in the battle against the disease.

Vasudha Wattal ([email protected]) is a researcher at the Indian Council for Research on International Economic Relations, New Delhi.

The move to bring about some changes to the National List of Essential Medicines (NLEM) has been much in discussion at the Indian price regulator’s office, the National Pharmaceutical Pricing Authority (NPPA). The recommendation has now been approved by the authority and found its way to the Department of Pharmaceuticals earlier in March 2015. The suggested changes include the addition of 12 cancer drugs and the deletion of three medications which are believed to be not in much use for cancer treatment in India. This recommendation, if accepted, will automatically bring all these drugs under price control. While it is not the first time that cytotoxic drugs are being considered for inclusion in the NLEM, there seem to be two things about this issue that need greater clarity. One is the involvement of the price regulatory body rather than the health ministry in a matter that concerns the identification of medicines essential for the Indian population, and the other revolves around understanding the need for price control at all in the case of cancer drugs.

Essentiality of Cancer

Essential medicines, as defined in the preamble of the NLEM 2011, are those that satisfy the priority needs of the majority of the population while addressing the disease burden specific to the country. The list’s primary purpose lies in ensuring rational use of medicines, bearing in mind three factors: cost, safety and efficacy. Given that cancer accounts for 6.7% of total deaths in India,1 it commands significant attention in terms of its contribution to the disease burden. Of these, oral and prostate cancer among men, and cervical and breast cancer among women, are the rapidly growing concerns in India. Furthermore, delayed diagnoses and inadequate or suboptimum treatment, especially when the patient is unable to access or complete the therapy, lead to poor cancer survival (Mallath et al 2014).

So, while one may not question the essentiality of cancer medicines as such, the question of who bears the responsibility for updating the list of essential medicines may in fact be subject to some scrutiny. Towards revising the NLEM 2003, the Ministry of Health and Family Welfare set up a core committee comprising ministry officials, officials from the Central Drugs Standard Control Organisation (CDSCO) as well as the Indian Pharmacopoeia Commission, and senior medical practitioners. This committee updated the list after a series of national consultations with specialists and thereby came up with the NLEM 2011. A core committee has since been reconvened under the chairmanship of the Director General of the Indian Council of Medical Research to revise and update the NLEM 2011. While this committee is yet to submit its suggestions, the NPPA, on the directive of the government, has come up with a list of its own recommendations. However, it should be noted that the list of 12 drugs that has been considered by the NPPA is not its own suggestion but is said to be based on the recommendations of experts from a reputed oncology centre in Mumbai, the Tata Memorial Centre. Regardless, the involvement of two ministries has created some confusion as to whether price control is based on the NLEM or the NLEM is decided as per the need for price control.

Price Control and Competition

Rarely has a debate on the pricing policy for pharmaceuticals ever died down without a fight. The argument most strongly voiced by critics of price regulation policies is that they tend to stifle innovation and that prices should be left to free market adjustments. Now, a free and competitive market presupposes consumer sovereignty, advance price information and price competition, factors which are often missing in the healthcare sector, and this is a major reason why governments end up regulating prices for hospitals, physicians and drugs (Hsiao 1995).
The market for cancer drugs, specifically, has never been governed by free market forces, and the reasons for this are manifold. In cancer therapy, each drug has an effective monopoly by itself. This does not necessarily hold only when a drug is patented. In the treatment of a largely incurable condition such as cancer, each drug ends up being used at some point during the course of treatment. Consider, for example, that there are four drugs to treat a particular type of cancer. Here, unlike in other disease conditions, it is not possible to pick the most cost-effective medication out of the lot. It is very likely that the doctor will use each of them at some time (Siddiqui and Rajkumar 2012). It is also not possible to induce competition at a later stage through molecules in the same pharmacological class or even clinically interchangeable drugs belonging to different classes. The reason for this lies not only in the fact that competition among drugs approved for the same cancer indication is hardly ever based on price (Kantarjian and Rajkumar 2015), but also in that among cancer drugs there are no “substitutes,” only replacements. Therefore, as the newer versions display enhanced overall survival or improved progression-free survival, they tend to replace the now obsolete older versions. These and other factors have raised concerns even in the US about the rising cost of cancer drugs. While the US does not have any price control mechanisms, several European countries, such as France, Germany and the UK, do, and hence are able to maintain relatively lower prices.

The problem is complicated further when cancer patients stick to more expensive innovator/original brands rather than switching to available cheaper generics, even when some of the original brands are astronomically priced and may increase the overall lifespan only by a few days or weeks. This makes patients especially vulnerable to high costs in countries like India with little or no health insurance.2

It may also be argued whether these reasons are sufficient to warrant price control over such drugs that are manufactured after expending years and billions of dollars on clinical research. It is indeed true that firms need to recoup the costs and also generate funds for further investment in research and development. However, one may question whether the high prices are truly reflective of the clinical benefit that these drugs provide. Recognising this concern, in the UK, the National Institute for Health and Care Excellence (NICE) evaluates a new drug based on such parameters and then takes a decision on whether it should be made available in the country. Even in the US, drugs that offer modest benefit and an uncertain chance of overall survival, combined with extremely high costs, come across as a challenge while expending public funds (Hillner and Smith 2009).

In addition, there is also the matter of just how high prices should really be in order to reflect the cost of innovation. Various pricing methods to achieve a “reasonable” level of profit have been suggested by economists such as Peter Arno and Alan Garber which, while keeping intact some of the incentives for developing new drugs, would limit the distortion resulting from market pricing. However, making these calculations itself runs into numerous practical bottlenecks, and increases the risk of pushing away investors and thereby drug availability (Maitland 2002).

To Control or Not

Regulation of pharmaceutical prices is not specific to India alone, and in several Organisation for Economic Co-operation and Development (OECD) countries where various forms of pricing policies exist, the pharmaceutical companies are willing to negotiate a lower price to gain entry to these markets.3 This suggests that there may not be an immediate threat to availability from instituting lower prices, given that there is scope for earnings to be made from the sheer size of the market.

But in the present scenario, accessibility does seem like a potential concern. According to the Economic Survey of India 2014–15, India’s per capita net national income is Rs 88,533, while the highest cost of treatment amongst the 12 drugs recommended by the NPPA is Rs 8,00,000 (for Trastuzumab, used for the treatment of breast cancer).4 Thus, the cost of one drug alone is nearly 10 times the earnings of an average individual in a given year. This figure is yet to include the cost of cancer diagnostics and radiation therapy, which itself runs into lakhs of rupees.

So while deliberating on the inclusion of life-saving drugs in the list of those which are price controlled, let us look at the other purposes that the NLEM could meet. The NLEM document clearly outlines the potential uses that this list could be put to, including those of a guidance document for hospital drug policies, procurement and supply of medicines in the public sector, reimbursement of medical expenses and medical donations. There are, to my knowledge, no public procurement programmes for cancer on the lines of those that exist for AIDS or tuberculosis. It is worth considering here whether it is feasible to institute a drug procurement programme based on international/national competitive bidding (ICB/NCB) or shopping, like those already in place in the National AIDS Control Organisation (NACO). If patients in developed countries are finding it difficult to survive these astronomical prices, a developing country like India, with a large part of its population below the poverty line or among the middle class, is even worse affected in the battle against cancer. Thus, having a mechanism for dealing with this deadly disease should be given due consideration. As Edmund Burke put it, “What is the use of discussing a man’s abstract right to food and medicine? The question is upon the method of procuring and administering them.”5

Notes

1 IHME 2013 as cited in Bloom et al (2014).
2 As per World Bank statistics, in India, the out-of-pocket health expenditure as a percentage of private expenditure on health has remained in the range of 80%–90% for more than 10 years and as of 2013 it stands at 85.9%. For details, see XPD.OOPC.ZS/countries/1W-IN?display=default.
3 This argument has also been used to encourage value-based pricing policies for cancer drugs in the US, given that its market size is fairly large among all OECD countries. See Siddiqui and Rajkumar (2012: 940–41).
4 See NPPA order order/om19-78-13-21-11-14.pdf.
5 As quoted in Maitland (2002).

References

Bloom, D E, Elizabeth T Cafiero-Fonseca, Vanessa Candeias, Eli Adashi, Lakshmi Reddy Bloom …

Hillner and Smith (2009): “… Effectiveness: A Case Study in the Challenges Associated With 21st Century Cancer Drug Pricing,” Journal of Clinical Oncology, Vol 27, No 13, pp 2111–13.

Mallath, Mohandas K, David G Taylor, Rajendra A Badwe, Goura K Rath, V Shanta, C S Pramesh and Richard Sullivan (2014): “The Growing Burden of Cancer in India: Epidemiology and Social Con…
Lauren Gurfein, Eva Jané-Llopis, Alyssa Lubet,
Elizabeth Mitgang, Jennifer Carroll O’Brien Hsiao, William C (1995): “Abnormal Economics in text,” The Lancet Oncology, Vol 15, No 6, e205–e212.
and Akshar Saxena (2014): “Economics of Non- the Health Sector,” Health Policy, Vol 32, No 1, Maitland, Ian (2002): “Priceless Goods: How Should
Communicable Diseases in India: The Costs Life-saving Drugs Be Priced?,” Business Ethics
pp 125–39.
and Returns on Investment of Interventions to Quarterly, Vol 12, No 4, pp 451–80.
Promote Healthy Living and Prevent, Treat, Kantarijan, Hagop and S Vincent Rajkumar (2015): Siddiqui, Mustaqeem and S Vincent Rajkumar
and Manage NCDs,” World Economic Forum, “Why Are Cancer Drugs So Expensive in the (2012): “The High Cost of Cancer Drugs and
Harvard School of Public Health. United States and What Are the Solutions?,” What We Can Do About It,” Mayo Clinic Pro-
Hillner, Bruce E and Thomas J Smith (2009): “Effi- Mayo Clinic Proceedings, 90(4), 500–04. 10. 1016 ceedings, Vol 87, No 10, pp 935–43, doi:10.
cacy Does Not Necessarily Translate to Cost /j.mayocp.2015.01.014. 1016/j.mayocp. 2012.07.007.
[...] instrumentalised and language deemed transactional.

Lata Mani ([email protected]) writes from Bengaluru.

In the contemporary period the language of power and that of critique is shaped by an instrumental conception. Both bring a forensic sensibility to their tasks, marshalling facts, contesting their veracity and significance, arguing over details. The form of engagement resembles a joust. Arguments collide, at times fragment each other, but most often follow parallel trajectories with no hope of convergence. The discursive temperature tends to be hot and the rhythm of the prose urgent and pointed. We cannot be surprised if this context favours a retreading of normative ground over a re-visioning of possibilities, the didactic certainty of judgment over the exploratory sensibility of literature and poetry.

When critique is enmeshed in the discursive practices of the ruling paradigm, its ability "to bring an idea to life" is deeply compromised. Politics is not solely a contest over access to power, decision-making, and legitimate social authority. It is equally a practice and a space for imagining, reimagining, how we might live with each other and with the rest of the phenomenal world. When conceived as a struggle for a form of life, for an [...] primarily cast their critique within the dialectic of subjugation and resistance, with the affective dimension being comprised of a potent mix of rage, nostalgia, fear and a modicum of hope. But there is more, much more to be said; and prior to that, even more to be noticed and restored to the centre of our consciousness.

One of the ruses of power is to pretend as if that which it desires already exists and, if it does not already exist, will do so given time. A particular idea of the future dominates; and the present is deemed no more than a staging ground for its emergence. From this perspective, the past is rubble, the present inconsequential, and the future the only thing that matters. But, however much it may be wished away, it is the present in which we live. It is in the present that the past is lived and relived, imagined and reimagined, pilloried and embraced. And it is on the present that the future is sought to be imposed. Attention to the present thus becomes critical.

What is the role of the arts and the humanities in this context? It is in part to make palpable as experience those abstractions that shape the ruling paradigm and, in enabling us to feel their implications and effects, lead us towards understanding what the abstraction serves to occlude, mask or distort. Put another way, it is to represent the density, particularity and rich complexity of lived experience and in that process unsettle the ability of an abstraction to continue to make sense. I would argue that it is at this epistemological level that we need to intervene; calling into question not only the so-called facts claimed as true by current thinking but countering its assumptions with an altogether different imagination. The word theory comes from theoria, meaning the act of observation. What is it that is right here and which we fail to see? And what textual, aural or visual forms might enable a different quality of attentiveness?

Experiment I: The Video-poem

A video-poem is neither filmed poetry nor poetry on camera. It is an effort to remake the relations between image, text and sound. According to Tom Conyves (2012),

As one word, it indicates that a fusion of the visual, the verbal and the audible has occurred, resulting in a new, different form of poetic experience. As one word, it recognises that a century of experiments with poetry in film and video…is the narrative of a gradual movement from the tenuous, anxious relationship of image and text to their rare but perceptible synthesis, i e, from poetry films to film poems to poetry videos to videopoetry.

Nocturne I and Nocturne II were collaborations with Nicolás Grandi (Grandi and Mani 2013). Grandi's interest in refreshing the image in an era of visual excess intersected with my interest in the potency of words as a non-transparent medium capable of evoking at once mystery, surprise, clarity and complexity. This magical propensity of words to make new meaning had ceded ground to neo-liberalism's drive to literalise language, to fix meaning in pursuit of globalised, frictionless communication. How might one experiment with the image–text relationship to restore to both the ability to convey mystery, surprise, clarity and complexity?

There was, additionally, a shared interest in sound. Indeed, the idea for the video-poem emerged from the everyday sounds of the night: the symphony of crickets, frogs, the hoot of the owl, the rustling of leaves, the footfalls of humans, snippets of conversation trailing in the wind. These sounds are as intrinsic to urban life as the resonant echo of a pressure horn or vehicles accelerating, decelerating and squealing to a halt. Urban nature is teeming, not merely endangered. Life springs forth in empty plots, abandoned lots, even in manicured parks; from every crack and crevice in the pavement. Yet the imagined soundscape of the city, or of nightlife as we tend to think of it, hardly ever summons these facts. Grandi and I set out to explore these dimensions of the city at night.

It was important to us to offer an experience of integrated plurality: not simply a multiplicity of elements or stimuli (the promise of urbanism) or their manipulation through technological means (as with commercial entertainment). Without disavowing the constructed nature of the narrative, we sought to build into it something of the expansive sensuousness of the natural world and the difficulty of taking hold of it cognitively. And to do this while respecting the ability of a viewer to navigate the shifting thresholds of the known, the unknowable and that which is yet to be discovered. Image, text and sound were crafted to this end.

Nocturne I addressed urban nature while Nocturne II took as its object the built environment. In both cases a single line of text was written once the visual assembly was almost complete. In Nocturne I, "Every tree a forest" sought to extend the visual play of shadow and light, mystery and illumination. The words appear one by one, part-way through the video-poem and form a full sentence only towards the end. By contrast, "Immanence is Plenitude" in Nocturne II arrives at the very beginning as a declaration, and fragments of it repeat throughout the video-poem—"im tude," "nence is," "ma ple," et cetera. The words flash like neon lights insistently challenging the conflation of plenitude with consumption in the imagination of urban life.

In using a telephoto lens to carve into the dark of the night, Grandi works with an idea that is especially relevant to the argument of this essay. The circle of confusion is the region in which objects gradually go from being out of focus, to being in sharp focus, to once again moving out of focus. Playing within the circle enables one to experiment with the way light-rays transform objects, at once revealing things hitherto not visible and remaking what we see. It is a fertile metaphor for understanding interpretation not as surgical dissection and conclusive revelation, but as the art of exploring the perceptual–conceptual depth of field. We open ourselves to observing how playing with focal depth illumines objects and phenomena in new ways, enabling us to travel from familiar zones of clarity to those spaces contiguous to them that are currently blurred so far as we are concerned. It illustrates how altering the angle of perception not only remakes what we see, but also the place from which we see and by extension the mode of our witnessing.

Signs of existence pulse around us. Whether it is aesthetic or ritual practice, forms of sociality or pleasure, much of what neo-liberal ideology codes as passé, backward or else dismisses as local, incidental, irrelevant, insignificant, residual or in need of being cleared away, is not just here but is thriving. Paying attention to this fact, to these practices, we can trace at a micro-level some of the profound cultural transformations that are even now underway. In doing so, we can introduce tonalities not audible in the clash and clang of macro policy debates.

Experiment II: The Multi-genre Collection

The video-poems extend preoccupations I had been pursuing through the multi-genre monograph, in which a broad thematic concern was sought to be addressed by interweaving analytic prose with poetry, the observational with the sociocultural (Mani 2009, 2013). The intent was to move towards a more exploratory orientation, away from critique-as-wrecking-ball. The impulse grew from a realisation that many dimensions of existence fell outside the restricted, and restricting, purview of dominant narratives. An extract.

The Room:
Brother and sister live atop a flight of stairs that is in complete disrepair. A 10-foot-by-10-foot room doubles as a living-cum-sleeping area. To its right is a tiny enclosure that is the kitchen. Neither bathroom nor toilet are in evidence; they are presumably to be found off the dark corridor to the right of the entry door and shared with others on the same floor. It is hard to believe that their grandfather once owned many of the properties on this narrow street that abuts Avenue Road.
The room is gaily cluttered. Photographs, calendars and wall hangings festoon every inch of available space. Elders, youngsters, gods, goddesses, certificates and plaques are nailed to the wall or neatly placed on the shelves that line it. A rope is strung across the room like an aerial bridge. On it hangs what is left of an old curtain made of wooden beads and clothes in need of airing. Beneath it, a fish tank filled with utilitarian and ornamental items serves as a curio cabinet. It would seem that every object ever bought or received has been retained and given a place.
Although the room is small and the objects many, one does not feel overwhelmed. A quiet dignity pervades the space. The objects seem to exist in and as themselves. They do not appear to carry the burden of family history or memory. They are not a mirror in which the past is sought or in which the present is reflected. They are found objects in the journey of life whose value is quietly acknowledged in their retention. Over time they have become integral to the lives that unfold in their midst: keepsakes that bear witness and offer a kind of joyful, silent companionship.
It is this relationship with objects that offers a clue to their arrangement. A laissez-faire approach is in evidence. Metal, plastic, cloth, wooden and paper items spanning four decades of production mingle in the fish tank. The television is perched between a statue and a stainless steel container. Its screen is visible only up close, being partially obscured by the curtain drooping from the clothesline. An economy of space may be said to be at work. But it cannot by itself make sense of the artful jumble of things. The aesthetic expresses a way of relating to artefacts in which the value of things has not been reduced to their function or to their sociocultural significance. This explains why there is no attempt to group items according to any consistent logic or to showcase a few so as to tacitly reflect some hierarchy, whether about the relative value of things or the status and imagined trajectory of those who possess them. Thus it is that the past does not hang heavily over the room and the future is nowhere to be found.
Not burdened by the weight of social attribution, the things in the room can be as they are. Our interest is evoked but without the accompanying anxiety that by and large mediates our current relationship to things in which humans and their belongings seemingly exist to prop each other up. These are simply objects one has gathered along the way and that one has chosen to keep. Owner and object are thus free to be in the present, as is the visitor. So it is that we feel spaciousness amidst the clutter. And the freedom to encounter the scene without past or future remaking what is before us (Mani 2013: 29–31).

The Room is one of a set of interwoven pieces about Avenue Road, the main artery of Bengaluru's wholesale district. In 2009, when the street was under threat of being widened I spent extensive time in its environs. (The matter remains undecided.) The proposal had divided the community though it seemed that, on the whole, opposition to widening Avenue Road outstripped support of it. Activists surveyed the area, estimated how many buildings would be destroyed, the numbers of workers and families that would be affected and the practical difficulties of widening a narrow, densely populated thoroughfare. Alternatives to widening Avenue Road were also mapped out in detail.

Discussions with traders and people on the street were animated. Those in favour of widening the road generally spoke in abstract terms: of the project heralding the future, of the pointlessness of clinging to the past, of youth and people with money increasingly preferring malls to traditional shopping areas like Avenue Road, the importance of accepting compensation while it was on offer. Those against road widening spoke concretely: of the rupture of lives, the termination of decades-long relationships, the historical significance of the street and its by-lanes, its energy, its diverse faces (a wholesale market by day, food gully at night). An entire physical and social ecology was evoked. In the media-led civil society debate on road widening, the latter position was dubbed as nothing other than nostalgia for a world already on life-support. Activists adroitly argued against this view, pointing to rights violated, livelihoods lost, the impracticality of resettlement, etc.

For me, the perspective of the pro-development brigade forcefully raised the question of genre. Its trading in future-oriented abstractions depended on a discursive sleight of hand: a dismissal of the living present as a dead or dying past. If that which overwhelmingly predominates and is indisputably alive is brushed away as marginal and insignificant, what did it suggest about the kinds of stories that needed to be told? What might this context call for in terms of narrative form?

I felt that there was an analytical argument to be made addressing the multidimensionality of issues involved. But there was also an ethos to be conjured: dispassionately, non-polemically, evocatively. And given the discursive violence of the arguments for road widening, it was critical to attend to language and rhetoric. I also felt that, as with other issues related to "the development imperative," it was important to proceed by means eccentric to the circuits along which the debate was unfolding. To travel directly on its pathways was to risk being swept into a centripetal vortex, to be sucked in and drowned out even before one could be heard.

It was in response to this intuition that Avenue Road Suite came to be written as it was. It comprises six short pieces: two descriptive–analytical texts, two observational accounts, one anecdote and a rumination on the words, "street" and "road." The descriptive–analytical texts serve as bookends. "Every Aspect a World unto Itself" is a broad strokes sketch of Avenue Road and of the dynamic of accommodative mutuality and systemic indifference that organises life on the street. This theme is then extended to nature, in contemplating a tree growing on a building façade, picked up again in an anecdote about a stationary store and thematised in the final piece, "The Ideal of a Global City." "The Room" reproduced above and placed between "The Tree" and "The Stationary Store" ponders life with objects as it is currently lived and doubles as an oblique critique of how consumerism remakes it into a fraught relationship. "A Street Is Not a Road," enfolds critique of the disruptive impact of road widening into a semantic consideration of both terms as well as their synonyms. The concluding text pulls together recurring themes into a brief critique of the notion of a global city.

Each piece treats a particular aspect of life on Avenue Road. Elements from one piece reappear in others, adding texture, at times a different inflection. The six texts can stand alone but it is when read together as they are intended to be that a broader picture emerges. The rhetorical strategy is to render fact as description. This choice reflects an interest in "signs of life" to paraphrase Foucault; and relatedly in representations of such signs that prompt a consideration of new or else neglected facts. The same impulse led to crafting these essays (as also the monograph) so that each chapter retains a degree of autonomy. Each piece is not merely an illustrative example or object lesson, subsidiary to a linear argument which is presented to the reader as a fait accompli. It is at once its own "sign of life" and also gives life—breath, flesh and blood—to the broader argument of which it is a part. Interleaving pieces in this way enables one to alternately explore part and whole, the whole in part, the part in whole; to try and intuit the complex, non-linear, non-reductive relations between them.

In Conclusion

The two experiments I present here grew from my sense that the arts and the humanities could serve to aerate our language, our approach to critique. Critical political discourse in India has been largely shaped by the social sciences. However, the instrumental thinking of this period and its bequest, the transactional nature of communication, compel us to integrate into our practice a fresh reconsideration of language and of forms of representation, concerns core to the arts and the humanities. Reflecting on the terms of our discourse—the worlds they open and those they quietly shut—is one way to deepen our rhetoric and nuance our understanding.

The language of social science and that of political discourse has lost the capacity to move us. We can knowledgeably debate poverty without our bodies and hearts physically sensing the true meaning of hunger. We can discuss the right to pollute of the developing world as though it were without material consequences for the region in whose defence the principle is being invoked. Abstractions battle each other at an altitude apparently remote from the realities they claim to represent.

The arts and the humanities can breathe new life into sociocultural critique, broadening the scope of our inquiry and pluralising the genres through which it is expressed. Recasting argument as a polyphonic form can bring alive what is left out or is in the shadows of the dominant framework, giving it another valence and making it matter in a different way. It enables one to pause on questions that may seem tangential but are, in fact, intimately related to the issues being contested. The quotidian and the seemingly trivial throw their own light on the so-called large predicaments addressed in policy discussions. Poetry, art, literature and creative non-fiction can enliven our language, our understanding of the present, and reanimate our imagination of the future.

References
Conyves, Tom (2012): com/2012/10/13/videopoetry-a-manifesto-by-tom-konyves/.
Grandi, Nicolás and Lata Mani (2013): Nocturne I; Nocturne II, https://vimeo.com/63462567. Republished by The Continental Review, Spring 2015, thecontinentalreview.com/nicol-grandi/.
Mani, Lata (2009): Sacred Secular: Contemplative Cultural Critique, New Delhi: Routledge.
— (2013): The Integral Nature of Things: Critical Reflections on the Present, New Delhi: Routledge.
Rabinow, Paul (ed) (1997): Ethics, Subjectivity and Truth: The Essential Works of Michel Foucault 1954–1984, Vol 1, New York: The New Press.
In the aftermath of the terrorist attacks on Paris, a popular left wing argument highlighted the culpability of imperialism in fuelling this violence. This form of anti-imperialism ends up denying historical agency to Muslims, and people in the postcolonial countries as a whole, and often becomes an excuse for Islamic fundamentalism. This article argues for a politics which escapes this trap of speaking a truth that is also a lie.

Tabish Khair ([email protected]) teaches at the School of Communication and Culture–English, Aarhus University, Denmark.

There was the usual bandying around of accusations after the latest Islamist carnage in Paris on the night of 13 November 2015, when terrorists wreaked havoc in the city, killing around 130 innocent people, mostly left-leaning music lovers who traditionally stand up for refugees. Fingers of accusation were pointed at refugees and immigrants in some European quarters (though there were at least as many expressions of solidarity).

Some sweeping Islamophobic generalisations were inevitably made about Muslims, (in)tolerance and (un)democracy, which must be sweet music to the ears of the Islamic State of Iraq and Syria (ISIS), as they share an identical understanding of Islam. Valid points were taken by lazy politicians on the right and fashioned into a club to clout Muslims, immigrants, multiculturalism, etc.

The Half-Truth

But these are not my concern in this article: I have written about these matters in the past, and there are (thankfully!) many Europeans and Americans who regularly counter such xenophobic libel.

I am more concerned with some well-meaning answers, which have now been offered so often that they have become lame excuses. Take this Facebook posting, widely circulated over the next few days:

People killed in Paris: 120. People killed in Syria: 1,15,000. People killed in Afghanistan: 7,46,976. People killed in Pakistan: 95,000 since 9/11. People killed in Iraq: US killed half a million innocent people...

This was a well-meaning, factual posting; as far as I could see, it had been put out (at least in the earliest rendition I discovered) by a (white) American critic of American policies, and it included the heart-rending plea: "Don't mistake national or political problems for religious." [...] like this. If an American posts these facts, he is issuing a valid critique of United States' (US) foreign policies and a defence of Muslims at the same time. But when a Muslim reposts it, he might be sending out another signal, inadvertently.

Because the fact remains that all those tens of thousands listed as killed in Syria, Iraq, Pakistan, Afghanistan (we can add to the list)—as against a "paltry" 120 in Paris—were not mostly killed by the US or even by Western nations. Less than a 10th of the casualties in all these nations—except Afghanistan—may be attributed to direct US or European action. All the rest have been killed by other, yes, Muslims. A lot of them have been killed by various Islamist groups. To jam all these numbers together and shove them in an envelope under the door of the White House seems to be a mistake—or dishonest.

I say this and, immediately, a certain kind of Muslim—often quite left-leaning—stands up and starts quoting political tracts about colonialism and imperialism. By this account, in different ways, Muslims have been and are continuously manipulated by the US or, if the man is truly "left-leaning," imperial Western capitalist powers.

This account makes Muslims sound like zombies, totally incapable of thinking for themselves and organising creatively: all they can do is be clubbed by "Western" imperial powers, after which they grunt, get up painfully and lumber, like zombies, after the nearest "Western" power, only to get clubbed down again. Centuries of this, and Muslims do not seem to have learnt. They stupidly let themselves be exploited and manoeuvred time and again! Aha, actually, these devious evil "Western" powers do not even need to do the clubbing themselves anymore: it looks like they can just programme various Muslim zombie-groups to kill one another.

There are good grounds for a critique of colonialism—something mainstream Europe has seldom faced up to—and imperialism. But, in some contexts, it is also just a weak excuse. If you look at the ground realities, it gets weaker. Take Iraq and Syria today: I would be the last person to deny that vested interests in the US initially allowed ISIS to thrive, to oppose Iranian influence in the region, and marginalise Russia. But (even if we momentarily look away from the ISIS claim to be "Islamic") can the role of countries like Saudi Arabia and Turkey be considered negligible?

If one wants to look at the problem honestly, one has to concede that the mess in Iraq and Syria today is the result of the mutual rivalries of Saudi Arabia, Iran, Turkey and other countries in the region—and the US, being a global superpower, is inevitably involved. Yes, the US looks after its own interests, and as is the case with all countries, half the time these are short-term interests with long-term drawbacks. Why should the US play Mother Teresa to a bunch of squabbling Third World nations anyway? Every time Muslims put the blame for their problems on the US or Europe or the past, they speak the truth, but only a half-truth—and hence they also utter a lie.

There is a trend in the tradition that I still feel closest to—Marxist criticism—that encourages Muslims (and postcolonial thinkers in general) to fall into this trap, the trap of speaking a truth that is also a lie. Let us look at a neutral description by John S Saul (2006: 1–2), an admirable intellectual who has dedicated his life to the cause of global justice and Third World nations:

[I]t is impossible to understate the significance of the economic breakthrough that occurred with the rise of capitalism in western Europe between the 15th and 19th centuries. It is, of course, particularly pertinent here to note what Europe did with the economic strength which the vagaries of history rewarded it: in fact, Europe chose to accelerate a process of world conquest that had begun with the exploits of Spain and Portugal in the very earliest days of mercantile capitalism's dawn and that continued unabated as stronger, more fully realized capitalist economies emerged […]. To make a long story short, the rest of the world was subordinated to the economic requirements of expanding European economic and military might […]. A global hierarchy was thus formed, in geographical, class and racial terms that would have a profound, even crippling, effect on the economic and social prospects of the vast majority of the world's population.

As was the case with the Facebook posting I looked at earlier, what John Saul says is legitimate and factual, and it behoves him, as a white Canadian, to have the self-awareness, intellectual integrity and sociocultural distance to assume such a stand. But the moment a position like this is automatically repeated by a Third World speaker, it changes shape—its truth becomes grained with many easy fibs too, and the exploitation of the non-West becomes a mechanical, almost non-agential matter. It starts smelling rotten, like an excuse.

Internal Failures

For one, unperceptively, it filters away any true cognisance of internal non-Western colonial and precolonial failures—caste exploitation, tribalism, educational deterioration, gender status issues, feudal structures, etc—with the sweeping broom of European capitalist colonisation. Now I know that things were not as bad as later European colonisers sometimes led us to believe (early European travellers were often much more positive—actually, our myths of precolonial Golden Ages partly come to us from European sources too), but that does not mean that things were hunky-dory either, before the evil European stepped in.

Also, let us face it, "Capitalism" did not descend on Europe as a boon from Jehovah. It needed centuries of slow accumulation of goods, information, knowledge, freedoms, skills and, above all, a critical attitude. Some of it came from the Arabs during the early Renaissance: it was accepted, resisted, discussed, employed, changed and developed in due course. Muslims—and Hindus too—regularly forget that Christianity itself came to Europe from another part of the world; it was negotiated and moulded in ways that refashioned it as a faith, slowly replacing (among other things) the centrality of a jealous patriarchal god with the notion of an all-loving, all-forgiving, almost effeminate Jesus.

The sheer adaptability of Europe tends to be forgotten, as well as elements, in the 18th and 19th centuries, that reinforced critical thinking and, within regions, a degree of egalitarianism. Rousseau could be openly atheistic in the 18th century; Bertrand Russell could write "Why I Am Not a Christian" in 1957. Is it wrong or racist to ask: how many Muslim intellectuals will be allowed to do something similar even today in any Muslim nation? How many religious Muslims will be willing to stand up for Muslims who do so?

What all this highlights is a fact that Karl Marx saw very clearly and leftists have lost sight of in recent years: the bourgeoisie. For Marx the bourgeoisie was a "revolutionary class." It was due to the creative, questing, critical spirit of the "European" bourgeoisie that, as Marx puts it in The Communist Manifesto, "[a]ll that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life, and his relations with his kind." Marx's quarrel was not with this spirit, but with its material limitations and growing contradictions.

On the other hand, what we have in most Muslim nations (and increasingly in the putative "Hindu" nation of India these days) is a "radicalism" that believes in curbing the critical thinking of those few members of the postcolonial bourgeoisie who dare to think for themselves.

In West Asia, this is compounded by the fact that the bourgeoisie is largely missing (and in this sense the Free Syrian Army has no chance to stand up on its own against Islamists, and never did, despite Western enthusiasm). The easy, complicit wealth brought in by oil hides this by enabling an artificial economic betterment. But the bourgeoisie is not simply a matter of economic capital; it is also a matter of cultural and educational capital, entrepreneurship, personal freedoms and, what is indivisible from them, critical thinking. It is in the latter sense that Marx considered the bourgeoisie a revolutionary class; when the bourgeoisie gets reduced to basically its economic advantages, it ceases to be revolutionary.

Unfortunately, the emulative paradigm in West Asia seems to be Saudi Arabia, a country where a tribal chieftainship has been turned into a powerful oil-rich monarchy, sustained with the help of a very narrow interpretation of Islam, which is now spawning even narrower radical "protest" groups, such as the Al Qaeda and the ISIS.

Interrogating Islam

Let us stop then, in the light of what has been said, and look at another predictable reaction to the Paris atrocity: religious Muslims pointed out that acts of terrorism have nothing to do with Islam. Yes, they were right—but were they only right? Is it not time for them to ask themselves some other questions too?

I will frame some questions for them: Is there really no connection at all between fundamentalist interpretations of Islam, steeped in intolerance of "deviance" within the flock, and the act of fanatics who shoot and kill others in the name of Islam? Is there no connection at all between extremist, intolerant Islamism and the insistence of ordinary religious Muslims on regulating the dress of women, the behaviour of men, and so many other matters? I find it increasingly difficult to see how peaceful fundamentalist Muslims, who are convinced that non-practising Muslims will eventually burn in hell, are very different from, say, the ISIS, which is simply not as eternally patient as them.

Notice, what I say here of religious Muslims can also be said of some Hindus in India. But my concern here is Muslims. This is something my leftist friends in India or Europe often do not understand. Despite being atheists, they are willing to speak up for the rights of fundamentalist Muslims—because they believe in difference. They do not want to be intolerant about tolerance; they do not wish to be fanatical about secularism. I can see their logic. I agree with their logic to a large degree. But it also worries me; surely, one can claim the right to differ only if one allows others the right to differ too? Fundamentalist Islam—like any other kind of religious fundamentalism (such as Christian fundamentalism in the US), like Nazism and Hindutva—does not do this, though religious Muslims might. My last novel, How to Fight Islamist Terror from the Missionary Position, was, among other things, an attempt to show how a religious Muslim ought not to be confused with a fundamentalist, let alone a terrorist.

There are many ways of not just being Muslim but also of being religious in Islam. But this is also a fact: religious Muslims tend to kowtow to fundamentalists because they cannot fully sanction these myriad differences among themselves. They can live them at times, but almost never legislate them.

One wonders whether many orthodox Muslims are not living a lie in their ordinary, domestic lives. Can you really justify the differential treatment of women, of wives and daughters, nieces and aunts, and claim that it has nothing to do with structures of power? Take the insistence of orthodox Muslims on covering "their" women head to foot, while they themselves flaunt their ill-clipped beards. In all possible terms of honest thought, this represents an inequality, an injustice—it can be accepted as a religious practice only with the assumption (implicit or explicit) that all humans are not equal. Here, I use "equal" in its basic democratic and secular sense—that, despite differences in ability, all humans need to be treated equally and be given the same opportunity to be human.

Choice and Proscription

When I objected to the veil once in public, a female Palestinian poet, dressed in a low-cut frock, responded to me with these words: "The veil is a personal choice, like the bikini. Why doesn't France want to ban the bikini, which is just as derogatory to women?" Sounds convincing, does it not? Let me assure you, it is not. No Western country makes bikinis the prescribed dress for women (who can dress in a myriad ways, like men) even on a beach, while many Muslim countries and even societies insist on women being veiled in public.

As such, any Muslim who says that a veil is just a personal choice is lying to you or herself—as long as there are Islamic countries where the veil is not just a choice. If such a Muslim is seriously concerned about personal choices, he or she should work to make the veil only a personal choice in Muslim countries, societies and thinking—after which I would be willing to not just accept it but even celebrate it as a personal choice in a secular, democratic nation.

At the core of all such responses is the little domestic lie, and I am increasingly convinced that this germ of a lie is embedded in the duplicitous relations that sustain orthodox—and even plainly religious—gender relations in most Islamic societies. This is a serious drawback of many contemporary Muslim societies, and just because European racists use it as a club against Muslims does not mean that we need to dismiss it as valid self-criticism. To this is added (as among many in Hindutva circles too) an obsession with the "evil" of others and the past, which not just reduces one's awareness of present possibilities but also creates a self-fuelling circle of resentment and grievance that finds its ultimate—and ultimately nihilistic—expression in the suicide bomber.

True, it is a sad world where all lives are not yet equivalent, where some can be killed without mourning because, as Judith Butler laments, they have already been filed as "dead." And yet, our initial responsibility, in a pragmatic sense, is to our families and friends: when our Third World societies fail, our towns collapse, it is our failure, not that of Europe or America, no matter what their vested interests. We cannot put the blame on others or (Listen, O Bhakts!) on the past.

Reference
Saul, John S (2006): Development after Globalization: Theory and Practice for an Embattled South in a New Imperial Age, New Delhi: Three Essays Collective.
Downscaling of Economic System

Nandan Nawn

Degrowth: A Vocabulary for a New Era edited by Giacomo D'Alisa, Federico Demaria and Giorgos Kallis, New York and London: Routledge, 2015; pp xxii+220, Rs 2,600.

The book under review is an ensemble of a variety of "keywords" that are deployed for constructing "counter-hegemonic narratives" of economic growth. These alternatives represent a corpus, deliberately termed degrowth—instead of a-growth.1 Degrowth: A Vocabulary for a New Era is more of an overview; it is less of an encyclopaedia, and certainly not a dictionary. It tries to explain and interrelate the concepts used in degrowth literature. The central connecting thread in degrowth literature holds economic growth responsible for stagnation, impoverishment, inequality, socioecological disaster, pollution and alienation from means of livelihood—or, in short, the econo-socio-ecological crisis faced by humanity. One of its most prominent interpretations, from ecological economists, calls for downscaling the economic system.

Question of Economic Growth

The concept of economic growth has been debated over the past two and a half centuries. The debate has not just been about economic aspects but has also covered social, ecological, environmental and political thought. Several questions have been asked. What is the purpose of economic growth? How to increase its rate? How to sustain it? Are there limits to it? Different strands of thought within degrowth, by definition, reject the very validity of these questions. They, instead, explore "radical and critical" alternatives to growth.

As Fabrice Flipo and François Schneider, founders and members of the "Research and Degrowth" collective, put it in the Foreword of this volume, "building a society based on the frugality, sharing and conviviality in the West has 'economic degrowth' as its fulcrum." In the process, thinkers and practitioners alike, as the contributors in the book, challenge, contest and critique the hegemony of economics as a discipline, policy science and even as a system of thought. This contest takes place at conceptual, methodical and theoretical levels.

For instance, in the very first sentence, the book declares that "[d]egrowth is a rejection of the illusion of growth and a call to repoliticise the public debate colonised by the idiom of economism." Elsewhere, the neoclassical variety is critiqued for its "narrow vision." The heterodox stream is also criticised for its long-held beliefs in demand stimulus or tax reforms. One can safely conclude that appropriateness, applicability or action vis-à-vis degrowth, by whichever name it is called, is restricted to societies that do not face a demand constraint. In fact, the editors restrict the controversial matter of the applicability of degrowth in the Global South to just one paragraph.

The book is divided into four parts besides a meticulous introduction to degrowth and an epilogue. The first part, titled "Lines of Thought," captures the various lines or schools of thought that have influenced degrowth, traversing from the conceptual and theoretical, such as bioeconomics and steady state economics, to processes and practices like critique of development and anti-utilitarianism. The second part, titled "The Core," deliberates on a variety of concepts and systems, such as entropy and capitalism, to processes and schools of thought, such as depoliticisation and neo-Malthusians, vis-à-vis various strands within the degrowth movement. Part three, titled "The Action," deals with the diverse concepts, principles, slogans, and practices around which the degrowth movement has thrived, such as disobedience, cooperatives, work-sharing, and post-normal science. The final part, titled "Alliances," explores the possibilities of linking with other similar counter-hegemonic positions, practices, and movements around the world, such as feminist economics, economy of permanence, buen vivir (good living) and ubuntu.

The more than 50 contributors to this volume, besides the three editors, belong to different disciplines, schools of thought and walks of life, with more than one-fourth connected to the Institute of Environmental Science and Technology (ICTA), Autonomous University of Barcelona. Except for the very few from the Global South and southern hemisphere, the contributors belong to the Global North, if not predominantly Europe.

What Is Degrowth?

Multiple interpretations of degrowth are spread across the length and breadth of the book, which the editors consider a strength. Degrowth "expresses an aspiration" which cannot be captured in a single sentence, like freedom or justice (p xxi); it is a "frame" for the convergence of different lines of thought, practices, imaginations (p 4); it "signals a radical critique of society" (p xxv), a "revolutionary idea" (p xxv) and "a deliberatively subversive slogan" (p 5); it is a "desired direction where societies will use fewer natural resources apart from organising and living differently" (p 3); it imagines a society with a smaller and different metabolism to serve new functions (p 4); it "is an expression of Gandhian economic thought in the West [...] from an Indian perspective" (p 207). There are several other explanations.

The editors put the most idealist pronouncement:

In a degrowth society everything will be different: different activities, different forms and uses of energy, different relations, different gender roles, different allocations of time between paid and non-paid work, different relations with the non-human world (p 4).

Keywords

The justifiably lengthy introduction links degrowth with the other "keywords." Most of the entries in the first two parts also make an attempt to link the "keyword" in question with degrowth. Some do it remarkably, like bioeconomics, critiques of development, currents of environmentalism, societal metabolism, political ecology, capitalism, care, dépense, depoliticisation, gross domestic product (GDP), happiness, decolonisation of imaginary, Jevons' paradox, neo-Malthusians, simplicity, and social limits of growth. Some others fail on this count, like environmental justice, steady state economics, autonomy. Some, like anti-utilitarianism, commodification, commodity frontiers, commons, conviviality, dematerialisation, entropy and emergy, establish a stronger relation with growth instead of degrowth. Some, like pedagogy of disaster and peak oil, do neither.

The introduction traces the history of degrowth as a term from 1972, the year of the much-debated Limits to Growth, the Club of Rome report (Meadows et al 1972). In contrast to the focus on resource limits during its first phase in the 1970s, the second phase from 2001 concentrated on critiquing the myths of win-win solutions offered by "sustainable development." Five elements of degrowth literature are discussed at length. The first, The Limits to Growth, captures not just the biophysical elements, but also the unjust elements of growth vis-à-vis gender and indigenous peoples, apart from its inability to generate happiness.

The next section, "Degrowth and Autonomy," emphasises the importance of tools that are "understandable, manageable and controllable" by their users, apart from collective self-limitations. The third section, "Degrowth as Repoliticisation," addresses the damage that the myth of sustainable development has created by reducing the "core contemporary dilemma" to just a search for technocratic solutions.

Though the volume does emphasise that degrowth "signifies a transition beyond capitalism" (p 11) and rejects a "greening" of growth or green capitalism, the entries on "anti-utilitarianism and capitalism" rightly represent the unease that degrowth protagonists face on "whether expansion is a necessary or contingent (hence modifiable) feature of capitalism" (p 60).

The book's final section, "The Degrowth Transition," provides a commentary on various grass-roots practices (back-to-the-landers), "welfare institutions" (job guarantee, work sharing), and alternative institutions for money and credit (community currencies, cooperatives). It also discusses the debate over the politics and political strategies in degrowth literature on bringing about the alternative institutions instilled with "values of degrowth," which are expected to replace the "current institutions of capitalism" (p 14). This is rather strange, given the tense relationship between degrowth and capitalism. Replacement of institutions that represent capitalism calls for replacing capitalism itself—it is a historically specific mode of economic and social organisation that drives on the logic of accumulation or expanded reproduction. Pitching degrowth, in its present state and form, as a replacement is premature, if not overly ambitious. Perhaps the editors are aware of the limitation, and place the "Degrowth Vocabulary" as the "raw material" for "new imaginaries."

The entries in this well-designed collection are not meant to be encyclopaedic. Indeed, most of them are not even introductory. Some provide a "deeper" take—in particular, those on care, capitalism, conviviality, critiques of development, depoliticisation, decolonisation of imaginary, political ecology, societal metabolism, and social limits of growth. The editors' brief to the authors was to write "as simply as possible" (p xxi) so that lack of knowledge of the previous debate and terminology does not stop anyone from reading an entry. But the editors also asked the authors to write without compromising on rigour. The end result is mixed: some of the entries are exhaustive yet concise enough to capture the essential elements within the limited space, while some others lack depth and imagination, if not correct understanding of the terms of reference. Entries on anti-utilitarianism, bioeconomics, commodity frontiers, commons, conviviality, dematerialisation, dépense, entropy, emergy, GDP, neo-Malthusians, peak oil, and simplicity—besides those mentioned earlier in this paragraph—are explained in a comprehensive manner. But the volume is out of depth on environmental justice, currents of environmentalism, steady state economics, autonomy, happiness and Jevons' paradox.

Conclusions

Much to its credit, this volume introduces a host of non-English literature (and authors) to an English-speaking audience. While the translation is not always perfect, the amount of labour involved is quite apparent. Kudos to the painstaking work of translators and editors, both on the publication and the academic fronts.

One may bring two matters to the attention of the editors. First, a growing movement which aspires to be counter-hegemonic may deliberate scrupulously before considering anything connected with Brahminism even at the conceptual level, as the entry on anti-utilitarianism does. Second, the entry on GDP mentions that in 1991 gross national product was "quietly replaced" by GDP—one wonders by whom, why and how.

To conclude, this volume has brought together a valuable collection of entries against the most important "keywords," knowledge of which is essential to understand degrowth. This is especially so because degrowth is sweeping a large number of European countries and making its presence felt in Latin America. In India, as in many other locations, degrowth is still a "missile word." Quite obviously the debate is yet to pick up in the academic arena, leave alone the policy space. The only event in India so far has been a symposium, "Growth, Green Growth or Degrowth? New Critical Directions for India's Sustainability," organised by the Indian Society for Ecological Economics, National Institute of Science, Technology and Development Studies and TERI University in September 2014 at the India Habitat Centre, New Delhi. The organisers are bringing the papers together through an edited volume (Gerber and Raina forthcoming). Hope it can provide the necessary impetus to an Indian variant of degrowth. One can hardly disagree with the authors of the Foreword that one may "[l]ike or hate the term degrowth, [but] [...] can't deny that it opens up all sorts of debates that were previously closed."

Nandan Nawn (nandan.nawn@teriuniversity.ac.in) is with the Department of Policy Studies, TERI University, New Delhi, and a member of the Indian Society for Ecological Economics.

Note
1 See van den Bergh and Kallis (2012) for a comparison between degrowth and a-growth.

References
Gerber, Julien-François and Rajeswari Raina (eds) (forthcoming): Post-growth Thinking in India, New Delhi: Orient BlackSwan.
Meadows, Donella H, Dennis L Meadows, Jorgen Randers and William W Behrens III (1972): Limits to Growth, New York: New American Library.
van den Bergh, Jeroen C J M and Giorgos Kallis (2012): "Growth, A-Growth or Degrowth to Stay within Planetary Boundaries?," Journal of Economic Issues, Vol XLVI, No 4, pp 909–19.
Muslim Cosmopolitanism in the Age of Empire by Seema Alavi, Cambridge, Massachusetts and London: Harvard University Press, 2015; pp xiii + 490, Rs 1,495.

Did imperial frontiers have the solidity that countries have in our times? In an age when a "passport" was merely used to regulate the flow of people who were going to "pass through a port," such solidity was not even conceived of, let alone strived at. Hence, for all practical purposes, till the introduction of passport regimes on a global scale towards the end of World War I, it was perfectly possible for people to move across kingdoms and empires in a manner that shows frontiers to have been fairly porous.

What did the people who actually travelled across such imperial frontiers, and operated in the interstices of the empires, make of what they were doing? Were they still trapped by their experiences in their countries of origin, and did they continue to engage with preoccupations generated by these even after physically relocating themselves? Or were they able to transcend the parochial confines and think of a larger "ecumene"? The book under review is an engaging attempt to address these basic questions in the context of Indian Muslims who have lived in a Muslim ecumene (that is, an inhabited space) which operated on the interstices of the Raj and the Ottomans. The principal protagonists—Sayyid Fadl, Siddiq Hasan Khan and Maulanas Rahmatullah Kairanvi, Jafar Thanesari and Imdadullah Makki—came from different parts of India and traversed great distances in South Asia and West Asia. All of them were under the scanner of the Raj on account of their reformist views about Islam, which made the British paint them with a broad brush as "Wahhabis."

Alavi weaves a fascinating tapestry with the lives of fugitive mullahs and runaways as they negotiated imperial fault lines and borders. She argues that her protagonists were neither quite seditious Wahhabis, nor were they simple loyal subjects of the Raj. Alavi contends forcefully that her protagonists, like innumerable other Muslims from the Indian subcontinent, were not reconciled to colonial domination. They were, instead, determined to test the limits of the Raj by availing of the networks of movement across the regions of South Asia and West Asia that were flourishing in the 19th century with better communications put in place, ironically, by the colonial rulers—working out, as it were, an "imperium" of their own.

Muslim Ecumene

Sayyid Fadl carved out a political niche on the Arab coast of the Gulf making use of his official contacts both in India and in the Ottoman court at Istanbul. Maulana Kairanvi, a rebel of 1857 who smuggled himself out to Mecca, remained associated with the reformist circles in the Madrasa Saulatiya in the Ottoman empire even as he fed into the seminary at Deoband in 1866 in the realm of the Raj. Maulana Imdadullah, too, fled India for Mecca after falling foul of the colonial establishment and from there continued to contribute to the Indian Muslim intellectual discourse, generating religious literature in both Arabic and Urdu. Nawab Siddiq Hasan Khan, a penniless Naqshbandi scholar of the Ahl-e Hadith movement who became the consort of the Begum of Bhopal, caused much anxiety among successive British residents with his advocacy of a unified Muslim Ummah (community), despite being located physically right in the middle of the Indian subcontinent. Jafar Thanesari, a disciple (and later a biographer) of the rebel Sayyid Ahmad Barelvi, was yet another baghi of 1857 and of the border areas who spent 18 years of his life in the penal colony of the Andamans before establishing himself as a visionary of the Muslim world.

Alavi argues that in their different ways these protagonists were giving shape to the vision of a Muslim world, a "Muslim ecumene" which rose above the boundaries of their country of origin, and shaped the "cosmopolitanism" that characterised Muslim culture till the early years of the 20th century.

The feature common to such a diverse range of people was their cultivation of a sociocultural space which defied any easy labelling by the Raj—the British Indian documents label them as "Indian Arabs," for in colonial eyes they were not quite the one, nor entirely the other. This refers to a protracted cultural shift that characterised the Indian Muslim experience, which after almost six centuries of a Persianised high culture was from the early 19th century gradually making room for one that was relatively more Arabicised. Alavi does not concern herself with the decline of the former, but chooses in the main to provide her readers with a flavour of the latter—connecting the shift with the struggle between the forces of (respectively) traditionalism and reformism that began in the world of Indian Islam in the wake of British ascendancy. What is distinctive about this particular treatment is that the author situates the experience not merely on the personal matrix of the protagonist, or even on the political matrix of resistance to the colonial order—she goes beyond both to plot the shift on the still larger matrix of a "Muslim space" that was essentially international (an anachronism, for the discourse of "nations" had not quite made it into the non-European world), and more importantly, how it developed in the interstices of two empires, one ascendant and the other in terminal decline.

The most detailed, and therefore fascinating, sections of the book deal with the material and political context of the rise of the "Muslim ecumene." My personal favourite is the small section, tucked quietly away within the discussion of reformism, dealing with the story of the 19th century arms trade in the Persian Gulf and its neighbourhood. The story of how European arms land at Muscat and then get smuggled through the southern reaches of the kingdom of Persia and through Afghanistan, finally to land up in the frontier regions of British India, assumes a veritably lyrical quality. There are several other such subplots built into the various sections of the book, each of which warrants more detailed treatment. So much so, that it would seem that the various chapters of the book were once conceived of as stand-alone accounts that were later woven into a larger story.

Cultural Imagination

It is important, however, to raise a few questions that a reader needs to be mindful about while reading this work. Does the 19th century Indian Muslim cultural imagination actually qualify as cosmopolitanism? Sure, Sayyid Fadl was as much at home in Moplah Malabar as on the Arabian shores or even in Istanbul, being steeped as much in Arab culture as he was in that of Malabar. Indeed, Imdadullah Makki and Maulana Kairanvi made their mark more in Mecca by feeding into the Hejazi discourse on Islam than they had ever done in India—but it is difficult to make the claim that any of Alavi's five protagonists (barring, perhaps, Sayyid Fadl) was actually able to rise above the parochial limits of their South Asian origin. If Fadl, Makki and Kairanvi negotiated the Ottoman and Gulf political spaces well, it was more because of the Ottomans' willingness to use them better on account of their connections with the Raj, rather than on account of the sheer depth of their learning of Islam, which had supposedly taken them to Mecca (after all, contrary to what Alavi suggests, unlike Cairo, Damascus and even Baghdad, Mecca has never been a major centre of learning in the entire history of Islam).

Thanesari and Siddiq Hasan did not travel even as much as the other three. While they all speak of a Muslim world, their cultural imagination seems tethered more closely to their own South Asian experiences than a cosmopolitan imagination should. One wonders, therefore, whether "Muslim transnationalism" would not have been a better label to describe such protagonists.

Conclusions

A story, after all, turns on the protagonists the storyteller chooses to make use of. Alavi confines herself only to Indian Sunnis, some of whom successfully negotiated the porosity of imperial frontiers to make it to the Arab lands. There is no mention of others—from India as much as elsewhere—who negotiated the very same interstices with similar and perhaps greater success, because their cultural space was Arabicised. Thus, the Shi'i networks of Awadh, Hyderabad and the kingdom of Persia did not even merit much of Alavi's attention, which is a pity, because it leaves out people like Jamal al-din Asadabadi (al-Afghani), whom Alavi refers to almost as an aside. Jamal al-din was perhaps the poster boy of 19th century Muslim cosmopolitanism—travelling as he did between Qajar Persia, British India, Afghanistan and the Ottoman empire—urging people to unite in defence of Muslim cultural values, transcending the divide between Sunni and Shi'i, and that between Turk, Arab, Persian, Afghan and Indian. By deliberately choosing all the protagonists of an Arabicised Muslim cultural ecumene from India, Alavi has left her highly impressive account of "Muslim cosmopolitanism" looking like "Sunni–Indian transnationalism"—and these are not quite the same.

Finally, a word on transliteration. The generally established rules of transliteration maintain that the Persian "-i" is a suffix, and the Arabic "al-" is a prefix—thus, Tarjuman-i Wahhabiya, not Tarjuman-i-Wahhabiya; Abd al-Hamid, not Abd-al Hamid. Transliteration in Urdu abides by the same rules. The book, however, uses both the "-i" and "al-" inconsistently as prefix, suffix or both. Such glaring mistakes are very unusual in a publishing house as illustrious as Harvard University Press.

Kingshuk Chatterjee ([email protected]) is with the Department of History, Calcutta University, Kolkata.
INSIGHT
India and Pakistan are parties to the Geneva Conventions, which are the keystones of International Humanitarian Law. However, notwithstanding the IHL, whenever both belligerents engage in ceasefire violations, indiscriminate firing and shelling across the Line of Control and international border, the civilians residing in these areas are subject to fearsome violence. This study points out that the escalation of violence along the Indo–Pak border has enormous physical, economic and psychosocial ramifications on the lives of civilians in these areas.

Meha Dixit ([email protected]) has taught at Kashmir University and worked with Amnesty International, and Sameer Yasir ([email protected]) teaches at the Centre for International Relations, Islamic University of Science and Technology, Kashmir.

As tensions between India and Pakistan rage, civilians residing along the disputed Line of Control (LoC) and working international border (IB) continue to experience the fury of mortar shells. The escalation of violence along the border has enormous physical, economic and psychosocial ramifications on the lives of civilians in these areas. In the northernmost Indian state of Jammu and Kashmir (J&K), the worst-affected areas include Balakot, Sabjiyan, Mandi, Bhimber Gali (BG) and Krishna Ghati (KG) along the LoC, which lie in Poonch District; and Akhnoor, Suchetgarh, R S Pura, Arnia (all these sectors are in Jammu District), Samba and Kathua Districts along the IB.

For thousands of years, various cultures across the world have developed principles aimed at protecting "unarmed populations from violence at the hands of the armed." Since the Fourth Geneva Convention of 1949, such efforts have come under the rubric of the "Protection of Civilians" (POC) (Breakey 2012: 40). Since the last decade or more, POC has been endorsed in a series of reports by the United Nations (UN) Secretary-General to the Security Council, certain United Nations Security Council (UNSC) resolutions and at least eight presidential statements. POC has also been incorporated in a number of UNSC mandates (Francis and Sampford 2012: 2).

In addition, "as part of these initiatives, the UN bodies have sought to entrench the POC in conflict in the obligations of parties under international humanitarian, human rights and refugee law." The UN bodies have repeatedly urged states which are not a party to the key treaties of international humanitarian, human rights and refugee law to ratify them. Once ratified, all states are urged "to take steps to implement these instruments within their jurisdictions through appropriate legislative, judicial and administrative measures."

The protection of the civilian population is an essential component of IHL: civilians and all those not participating in the fighting must not be attacked and must be spared and protected.1 India and Pakistan are both parties to the Geneva Conventions. Therefore, both countries must respect the provisions of the IHL contained in these conventions. While Pakistan has signed, but not ratified, the 1977 Additional Protocol of these conventions, which strengthens the protection of victims of international armed conflict, India has neither signed nor ratified it. Here it may be noted that considering the universality of the Geneva Conventions, "their general principles, although not all the detailed rules implementing these principles," have now become customary law binding on non-parties (Solf 1986: 124). Therefore, according to one of the key principles of customary IHL, parties to an armed conflict must make a distinction between the civilian population and combatants and between civilian objects and military objectives.

Background

India and Pakistan have fought three wars over the disputed region of Kashmir, where a deadly insurgency has left thousands of civilians dead. Both India and Pakistan have managed the conflict, instead of resolving it. The Kashmir issue has always been a major stumbling block whenever the two estranged neighbours have made attempts towards peace. In July this year, after months of political deadlock, both countries issued a joint statement at Ufa, Russia on the sidelines of the Shanghai Cooperation Organisation Summit (Hindu 2015). The joint statement included certain actions such as dialogue between the armies of India and Pakistan, a meeting between top security advisers to discuss terrorism, mechanisms to facilitate religious tourism, the release of fishermen in each other's custody, and discussions to expedite the 2008 Mumbai terror-attack trial. "Both leaders condemned terrorism in all its forms and agreed to cooperate with each other to eliminate this menace from South Asia" (Hindu 2015).

At Ufa, the Prime Minister of India also accepted an invitation by the Prime Minister of Pakistan to attend the South Asian Association for Regional Cooperation (SAARC) summit to be held in 2016 in Islamabad. However, the joint statement did not make any mention of Kashmir. Reportedly, this fuelled resentment within many sections in Pakistan. Soon, on 16 July, tensions escalated between the two countries as both made competing claims of violations of the 2003 ceasefire. On 26 November 2003, both India and Pakistan had agreed to a ceasefire in the first formal truce between the two armed forces since the inception of militancy in J&K. Guns along the LoC, IB and in Siachen Glacier fell silent the next day. But the calm was broken in September 2013, when an exchange of gunfire occurred between the forces of the two countries (Yasir 2014).

While the IB is an internationally recognised boundary that separates the states of India from the provinces of Pakistan, the LoC is the de facto border established after what is called the first war over Kashmir between India and Pakistan in 1947, following a tribal invasion by Pakistan. While India would like to formalise this status quo, Pakistan does not accept this plan since it wants greater control over the region (BBC 2015). In the aftermath of the conflict, it was called the Ceasefire Line; however, it was renamed the "Line of Control" following the 1972 Simla Agreement.

As a result of the increasing tensions along the border and political disagreements between India and Pakistan, the national security advisor (NSA) level talks, which were scheduled for 23–24 August 2015 in New Delhi, were cancelled. According to Pakistan, these talks

…cannot be held on the basis of the precondition set by India. The latter had stressed that Pakistan should not have a meeting with All Parties Hurriyat Conference (APHC) leaders from Kashmir and that the agenda of the talks between India's National Security Adviser and his Pakistani counterpart should not extend beyond terrorism. In early August 2014, the Government of India had suspended the dialogue process after Pakistani High Commissioner, Abdul Basit, turned down the Indian government's demand to refrain from meeting separatist leaders from Kashmir (Jeelani 2015).

Amid the border tensions between India and Pakistan, the civilians on both sides of the LoC and IB are at the receiving end. Towards the end of August it was reported that three civilians were killed and 17 injured in R S Pura, Jammu due to shelling by Pakistani forces. A Border Security Force (BSF) official had stated that India retaliated in equal measure. "Pakistan Rangers resorted to unprovoked firing late last night initially with small arms and later fired mortar shells at BSF posts and civilian areas in the R S Pura sector. The BSF fired back," the official said. Reportedly, eight civilians were killed and more than 46 injured due to shelling by the Indian forces on the Pakistani side (Upadhyay and Ahmad 2015). Further, at the Balakot sector, in Basoni village located along the LoC, six civilians were killed within just two days, on 15 and 16 August. This included the sarpanch of the village, a woman, two teenagers, and a 10-year old boy.

IHL and Protection of Civilians

The Geneva Conventions of 1949 and their additional protocols form the cornerstone of IHL, which seeks to regulate armed conflict and protect the civilian population. However, "the first systematic codification of the restraints on the methods and means of warfare" was the Instructions for the Government of the Armies of the United States in the Field, prepared by Francis Lieber in 1863 during the American Civil War (Solf 1986: 121).

As far as the regulation of the means and methods of warfare in treaty law is concerned, it dates back to the 1868 St Petersburg Declaration, the 1899 and 1907 Hague Conventions and the 1925 Geneva Gas Protocol.2 The Lieber Instructions, or the Lieber Code, were used as the primary basis for the development of the Hague Conventions of 1899 and 1907, which in turn influenced later developments (Doswald-Beck and Vité 1993).

Some provisions relating to "the protection of populations against the consequences of war and their protection in occupied territories" are included in the regulations pertaining to the laws and customs of war on land, annexed to the 1899 and 1907 Hague Conventions.3 Although Article 25 of the Hague Regulations prohibits "the attack or bombardment, by whatever means, of towns, villages, dwellings, or buildings which are undefended," and is based on the principle of distinction between civilians and combatants, the regulations do not as such state that the parties to the armed conflict must make a distinction between civilians and combatants (Henckaerts et al 2005: 3).

During World War I, the Hague Conventions proved to be inadequate considering the dangers emanating from air warfare and the problems pertaining to "the treatment of civilians in enemy territory and in occupied territories." The International Conferences of the Red Cross held in the 1920s took the initial steps towards laying down additional or supplementary rules for the protection of civilians during war.4 These efforts, after a great deal of struggle, culminated in the Geneva Convention of 1949, which provides protection to civilians, including in occupied territory.5

The Geneva Conventions are contained in four international treaties and their additional protocols. The conventions seek to regulate armed conflict and protect the civilian population. While the first Geneva Convention of 1864 protects wounded and sick soldiers on land during war, the second convention of 1906 protects wounded, sick and shipwrecked military personnel at sea during war. Further, the third convention of 1929 pertains to prisoners of war, while the fourth convention of 1949 provides protection to civilians, including in occupied territory.

The 1977 Additional Protocol I of the fourth convention strengthens the protection of victims of international armed conflict, while Additional Protocol II strengthens the protection of victims of non-international armed conflict. They further place limits on the way conflicts are fought.6

The Geneva Conventions which were adopted prior to 1949 were concerned merely with combatants, not with civilians. The experience of World War II demonstrated the catastrophic outcome of the absence of a convention for the protection of civilians during war. The 1949 Convention took account of the disastrous experiences of World War II. The convention comprises 159 articles. It contains a short section regarding "the general protection of populations against certain consequences of war."7 However, laws governing the conduct of hostilities in the Geneva Conventions still dated back to the 1907 Hague Conventions. Military aviation did not even exist when the Hague Conventions were negotiated. These laws were updated by the 1977 Additional Protocols of the 1949 Geneva Conventions.

Application of IHL

The sources of the law of warfare (Law of Hague) and of humanitarian law (Law of Geneva) are both customary and codified in treaties (Gardam 1993: 3). In international law, a treaty is usually defined as an agreement entered into by states and international organisations. There are significant obstacles to applying the treaties to current armed conflicts. Treaties are applicable only to the states that have ratified them. This implies that different treaties of IHL are applicable to "different armed conflicts depending on which treaties the states involved have ratified." While almost all states have ratified the four Geneva Conventions of 1949, the 1977 Additional Protocol I has not yet achieved universal adherence. Since the protocol applies "only between parties to a conflict that have ratified it," its effectiveness today is limited because a number of states that "have been involved in international armed conflicts are not a party to it."8

Besides treaty law, customary international law (CIL), which is the other primary form of international law, is characteristically defined as a "general and consistent practice of states followed by them from a sense of legal obligation" (Goldsmith and Posner 1999: 1113). And customary international humanitarian law "is the basic standard of conduct in armed conflict accepted by the world community." It is universally applicable regardless of the application of treaty law and is based on widespread and almost uniform state practice regarded as law.9 Malcolm MacLaren and Felix Schwendimann note:

Customary law may 'intervene' for the sake of the rule of law in armed conflict where States (or non-state actors qua definitione) are not party to the relevant treaty, or where the States are party but the customary provision is more extensive in its coverage than the conventional.

They further argue that the custom is binding in both cases. Therefore, custom should always be consulted while researching the relevant law in IHL (MacLaren and Schwendimann 2005: 1220).

Solf notes that considering the universality of the Geneva Conventions, it may be said that "their general principles, although not all the detailed rules implementing these principles," have now become customary law binding on non-parties (1986: 124). Moreover, the status of the Geneva Conventions as customary law has been established by the International Court of Justice and is rarely contested (Meron 2000: 80).

Further, the Martens Clause "safeguards customary law and supports the argument that what is not prohibited by treaty may not necessarily be lawful." It is applicable to all parts of IHL, not merely to belligerent occupation (Meron 2000: 87–88). As it first appeared in the Preamble to the 1899 Hague Convention, the Martens Clause states:

Until a more complete code of the laws of war is issued, the high contracting parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience (Shearer 2001).

The 1907 Hague version was somewhat different; "populations" was replaced by "inhabitants," the older term "law of nations" was substituted for "international law" and "requirements" was replaced by "dictates." Even though both the 1899 and the 1907 versions mention "laws of humanity," it has become a common practice to refer to them as "principles of humanity" (Meron 2000: 79).

The International Court of Justice, in its advisory opinion, acknowledged the relevance of "the Martens Clause to considering the legality of means and methods of warfare and of particular weapons"; however, it did not resolve the principal controversies regarding its interpretation (Meron 2000: 79). Nevertheless, it is generally agreed that the clause signifies, at the minimum, that "the adoption of a treaty regulating particular aspects" of the law of warfare "does not deprive the affected persons of the protection of those norms of customary humanitarian law" that were not incorporated in the codification (Meron 2000: 87).

Impact on Civilians

The indiscriminate firing and shelling by the Indian and Pakistani forces across the LoC and IB is in violation of the IHL, which seeks to regulate armed conflict and protect the civilian population. The physical, economic, and psychosocial impact of the border violence on the civilians along the border villages in Jammu and Kashmir is discussed here.

Physical and Economic Impact: IHL forbids all methods and means of warfare which:

Fail to discriminate between those taking part in the fighting and those, such as civilians, who are not, the purpose being to protect the civilian population, individual civilians and civilian property; cause superfluous injury or unnecessary suffering; cause severe or long-term damage to the environment.10

The indiscriminate firing by both the Indian and Pakistani security forces poses a huge threat to the lives and property of the civilians residing on both sides of the border. It frequently causes "superfluous injury or unnecessary suffering" and "severe or long-term damage to the environment." As far as the healthcare provisions for the people during peacetime and situations of armed conflict are concerned:

In all circumstances, in times of peace and during conflict, States have an obligation to maintain a functioning health-care system. Similar provisions exist in IHL that require States to provide food and medical supplies to the population... Though both IHL and IHRL allow States to predicate their obligations on the resources available to them, a lack of resources does not justify inaction. Even in cases where resources are extremely limited, States should adopt low-cost programmes that target the most disadvantaged and marginalized members of the population.11

According to the locals and journalists who were interviewed along the LoC and IB, despite the continuous threat to the lives of civilians residing in the border areas, there are hardly any adequate medical facilities. Tarachand from Sidherwan village in the Akhnoor sector along the IB said:

There is one dispensary between four villages and that too closes at four in the evening. And the closest hospital is in Akhnoor which is around 15 kms away from our village. During the shelling, the dispensary remains open till late. However, it lacks efficient doctors.12

Those who get injured during the shelling often have to travel to the town or city for treatment. There are hardly any ambulances and even during emergencies, there is a scarcity of ambulances and doctors. Further, during border tensions, the civilians are frequently confronted with the issue of food and water scarcity. Vijay Bharadwaj, a local journalist from R S Pura, who has been relentlessly highlighting the issues which confront the villagers along the IB, noted:

During firing and shelling, people who flee their villages to take shelter in the safer areas often have to live without food for days. Even those who continue to stay in their villages are often deprived of food since they are unable to go out to collect firewood for cooking.13

Besides, the locals in the border villages noted that the state rarely offers adequate compensation to those families who have lost a member, and also to those who have been injured during the cross-border shelling. Roshan Lal from Flaura village in Suchetgarh stated:

In 2014, my leg was severely injured during the shelling. After I was injured, my nephew took me to R S Pura and I was admitted at the Bakshinagar hospital for 25 days where my leg was operated upon. The state provided me Rs 53,000 in compensation. After the operation, I was not given medicines in time. I was not recovering. Therefore, I went to a private hospital in Amritsar where I stayed for around 35 days and ended up spending over Rs 3 lakh on my treatment. I had to mortgage my land to pay the bills. A certain amount of money was provided by my relatives.14

In addition, some locals from Sidherwan noted that if a person is injured during the shelling, the state usually provides her/him merely Rs 500. Further, in the border villages, there is an invariable threat of the civilians' cattle being killed or injured and/or farms or homes being destroyed during the cross-border firing. However, during the field research in the border villages, the locals noted that there is a lack of adequate facilities for the treatment of injured cattle, and sufficient compensation to those whose farms and/or homes have been destroyed or whose cattle have been killed or injured is hardly ever provided. Sunil, from Sidherwan village, said:

If someone's farm in the village is destroyed during the shelling, she/he is just provided with around Rs 2,000 per acre to cultivate the land again.15

However, a critical issue is that due to recurrent firing and shelling, the land may become infertile or unproductive. Yet, no insurance is offered to the farmers for the farms in these border villages. Besides, most people in the villages along the IB and LoC are dependent on agriculture and cattle-rearing, and during border tensions they cannot get to their farms or have to migrate away from them, which are their only source of livelihood.

Psychosocial Impact: The constant physical threat to their lives and limbs or the loss of farms and cattle is likely to resound in the psyche of the civilians residing in the border areas. The upshot of the physical and economic threat is the undesirable psychosocial consequences for the border people. The approach termed "psychosocial" in relation to armed conflict is summarised in the 1997 Cape Town Principles and is explained in the following manner:

The term 'psychosocial' underscores the close relationship between the psychological and social effects of armed conflict, the one type of effect continually influencing the other. 'Psychological effects' are defined as those experiences that affect emotions, behaviour, thoughts, memory and learning ability and the perception and understanding of a given situation. 'Social effects' are defined as the effects that the various experiences of war (including death, separation, estrangement and other losses) have on people, in that these effects change them and alter their relationships with others. 'Social effects' may also include economic factors. Many individuals and families become destitute because of the material and economic devastation of war, losing their social status and place in their familiar social network.16

Yet, the psychological health of the civilians in border areas is sidelined and there are no facilities in these villages or even in nearby towns which would take account of the psychosocial needs of these people. It may be noted that where there are barely sufficient medical facilities for the physical health or treatment of the border people, even the idea of human and other resources for their psychosocial health would seem far-fetched.

Further, the threat of being displaced or actual displacement is a major cause of concern which deeply affects the civilians in the border areas at the psychosocial level. The recent escalation of border tensions between the two estranged neighbours has rendered a large number of civilians in the border areas homeless or internally displaced. While refugees are people who have crossed an international boundary and "are at risk or have been victims of persecution in their country of origin," internally displaced persons (IDPs) have not crossed an international boundary, but have, for whatever reason, fled their homes.17

There is no convention for IDPs comparable to the 1951 Refugee Convention. Nevertheless, the IHL offers them protection in situations of armed conflict. "Under IHL, people are protected from and during displacement as civilians, provided they do not take a direct part in hostilities." Several rules of IHL offer protection to the civilian population, and their violation often is a root cause of displacement. For instance, the IHL prohibits attacks by parties to an armed conflict on civilians as well as civilian objects. It further forbids indiscriminate methods of warfare that may have adverse consequences for civilians.18

The indiscriminate firing and shelling across the border by both India and Pakistan have led to the internal displacement of civilians on both sides. Further, during the field research conducted by the authors along the LoC and IB, it became apparent that the supply of relief materials for the IDPs by the state is rarely adequate. During the cross-border firing, the Indian government usually provides emergency shelter to the civilians in schools and government buildings in the safer areas. However, these shelters are usually insufficient. Along the LoC and IB, the civilians are demanding underground bunkers which can be used by them for shelter during the firing and shelling in their villages. In February 2015, a proposal was sent by the J&K government to the centre for setting up 20,125 community bunkers at an estimated cost of Rs 1,006.25 crore in 448 villages close to the LoC in Kashmir and the IB near Jammu. Mufti Mohammad Sayeed, the Chief Minister of J&K, told the state assembly that the proposal would cover a population of 4,02,455 close to the border areas in the districts of Jammu, Kathua, Samba, Rajouri and Poonch.19 However, a number of villages in these areas still lack underground bunkers.

Irfan Khan, son of the slain sarpanch, Karamatullah Khan, from Balakot along the LoC said:

There were bunkers in the border areas which villagers had constructed in the early 1990s when the firing between India and Pakistan was a daily occurrence. However, during the time when peace prevailed in the region, these bunkers were not used much for nearly 10 years. Most of these bunkers were, in time, filled with mud and other things. But the bunkers here are as important as water, air and food. The government should provide each village with fresh bunkers. If that happens many lives would be saved.20

Further, Sunil from Sidherwan, which is located along the IB, highlighted the issues concerning underground bunkers in his village:

Between two–three villages including Sidherwan, there is only one bunker around one km from Sidherwan; however, that too is usually filled with water especially during the monsoons.21

In August this year, at the height of border tensions in Poonch, when one of the authors interviewed Pawan Kotwal, Divisional Commissioner, Jammu, regarding the state government's proposal on bunkers for the civilians in border areas, he explained:

The Central government recently approved a pilot project of Rs 60 lakh to set up underground bunkers in Jammu. We are already working on this project, apart from the state government's proposal for setting up of 20,125 bunkers in the villages close to the LoC in Kashmir and International Border near Jammu. Presently there are no bunkers in the Poonch region. I am sure if these bunkers are constructed in different areas, in the times of crisis they will save lives.22

Another cause of concern is that a number of people along the border live in kaccha huts instead of concrete houses, which are more prone to being damaged during the cross-border shelling. There is also an invariable fear among the people in these areas of the splinters of mortar shells entering their huts, which may not just damage the huts, but also injure or kill those residing in them. Nazia, on her way to Drati from the burial of her relative who was killed at Basoni in Balakot, pointed out:

Most people in my village (Drati) live in kaccha huts and are always worried about shells landing inside them.23

Further, the tensions along the border erode the social fabric of life. It may be noted that due to the firing and shelling, as civilians in border areas get displaced or become homeless, it not just has huge economic repercussions, but psychosocial consequences as well. Since the people in these areas may have to live away from their homes, their community life is disrupted. Even those who continue to stay in their villages are frequently unable to take part in communal gatherings or religious celebrations or activities. The disruption of life severs family and community ties.

In addition, most locals who were interviewed during the field research said that the state has not yet provided them land for emergency shelter which it had promised. The villagers from Sidherwan noted that the government had promised them land in safer areas for emergency shelter. However, no one has received it so far. Unlike the people of Jorafarm in Suchetgarh, who do not have farms and are willing to settle in safer areas, most people along the IB and LoC who were interviewed depend on agriculture and are merely demanding emergency shelter in safer areas. This is because the latter cannot leave their farms and settle elsewhere permanently.

Impact on Children and Young People: IHL offers general protection for children as persons not taking part in hostilities, and special protection as persons who are particularly vulnerable.

During international armed conflicts, children come into the category of those protected by the Fourth Geneva Convention relative to the protection of civilian persons in time of war. By virtue of this, they benefit in particular from all the provisions relative to the treatment of protected persons, which state the basic principle of humane treatment, including respect of life and physical and moral integrity, and forbidding, inter alia, coercion, corporal punishments, torture, collective penalties and reprisals (Plattner 1984).

Although the Fourth Geneva Convention comprises a number of provisions in favour of children, the principle on which the rules pertaining to children are based is not stated explicitly in this particular Convention (Plattner 1984). Protocol I attempts to fill this gap through Article 77, which states that

Children shall be the object of special respect and shall be protected against any form of indecent assault. The Parties to the conflict shall provide them with the care and aid they require, whether because of their age or for any other reason.24

Even though India and Pakistan have not ratified Additional Protocol I, they must respect the IHL, which offers general protection for children as persons not taking part in hostilities, and special protection as persons who are particularly vulnerable.

During the border tensions between the two neighbours, children have always been adversely affected. They are often killed or injured. As mentioned previously, within two days (15 and 16 August) in Balakot, Poonch, six civilians were killed. Among them were two teenagers and a 10-year old boy. In addition, the border tensions lead to the disruption of education. As a result of the firing and shelling, children may even have to spend months without schooling. Bilash Sharma from Sidherwan, who is currently pursuing graduation, said:

In September 2014, due to continuous shelling, we had to leave the village. Schools were shut for around 20 days. We were provided government accommodation in a school in the safer area.25

Besides education, children during the cross-border shelling are unable to engage in recreational activities. Further, due to the invariable fear of the border violence, children, in particular, may even be more vulnerable to psychological problems. Jyoti, who is married in Gakhriyal village in Akhnoor, noted:

Whenever there is firing in Gakhriyal, it is difficult to explain to the children. They get extremely scared. A few days back (in early
SPECIAL ARTICLE

Economic and social inequality is a major problem, implicated in poverty, ill health and exploitation. Inequality has increased in many countries since the 1980s and it is also widely seen as unfair, yet action against it has been sporadic and often ineffective. To better understand why inequality has persisted, it is useful to look at tactics that reduce public outrage over it. These include covering up the existence and impacts of inequality, denigrating those who are less well-off, explaining the existence of inequality as natural, necessary or beneficial, using official channels to justify inequality, threatening those who challenge it and rewarding those who defend it. Each of these tactics can be countered, resulting in a set of options for those pursuing a fairer world.

We thank Shaazka Beyerle, Danny Dorling and Stellan Vinthagen for valuable comments on a draft of this article.

Susan Engel ([email protected]) and Brian Martin ([email protected]) are in the School of Humanities and Social Inquiry, University of Wollongong, Australia.

Inequality in various realms—economic, political and social—appears to be an enduring feature of human societies. However, many challenges have been made to extreme forms of inequality: for example, democratisation movements have challenged dictatorships and various forms of political exclusion; labour movements have campaigned against economic inequality, making the case for a living wage and social protection; and social exclusion of various groups is widely castigated as prejudice. While all forms of inequality have persisted, what is notable is that economic inequality has increased and, according to many analysts, become much more extreme within countries through processes of corporate globalisation (Cammack 2009; Piketty 2014).

Most humans have a well-developed sense of fairness (Moore 1978). Haidt (2012) argues that fairness is one of the fundamental moral foundations deriving from humans' evolutionary past. It is found in people of all political persuasions, and is especially important for those on the left. On an informal level, many parents observe that their children compete for their attention and resent being treated unequally. In workplaces, grievances develop when workers are rewarded differently when doing the same work. Yet, despite this sensitivity to fairness, wide-scale economic inequality in contemporary societies has persisted and sometimes increased.

Governments are often seen as the means for redressing unfairness: they have the capacity to redistribute income and wealth through policies of taxation, investment and welfare. Despite the efforts of reformers, though, the divergence between the wealthy and the impoverished has continued within and between countries. A whole range of data has come out over the past few years to support this claim (Inequality.org 2015; OECD 2011; Piketty 2014), the most recent being an Oxfam report that by 2016 over half the world's wealth will be owned by just the richest 1% of the world's population. The trend to increasing inequality is clear in the report: "In 2010, it took 388 billionaires to equal the wealth of the bottom half of the world's population; by 2014, the figure had fallen to just 80 billionaires" (Hardoon 2015: 3).1 Although there are periodic expressions of concern and impressive-sounding policy statements, political concern about inequality is seldom as great as for economic growth, terrorism, crime and a host of other topics. Indeed, until the rise of the global justice movement and the Occupy movement, inequality was not a serious agenda item for most governments.

To better understand how economic inequality has been marginalised in public discourse and thinking, it is useful to look at tactics of outrage management (Martin 2007). When a powerful group does something that might be perceived as unjust, it can use several types of tactics to reduce public outrage, with the key ones being to: (i) cover up the action, (ii) devalue the target, (iii) reinterpret the action by lying, minimising consequences, blaming others or reframing, (iv) use official channels that give an appearance of justice, and (v) intimidate or reward people involved.

A good example of how these tactics are employed is in cases of torture, which is widely condemned but, nevertheless, often tolerated and rarely prosecuted. Individuals and governments implicated in torture hide their activities, denigrate victims as terrorists, criminals or subversives, lie about the extent of torture, minimise the impact of it, blame individuals for abuses, reframe torture as "abuse" or define it away (as in the case of waterboarding), use courts or investigations to whitewash actions, threaten victims with reprisals if they speak out, and reward compliant officials with jobs and promotions (Martin and Wright 2003). The same sorts of tactics are found in a wide variety of injustices, including censorship (Jansen and Martin 2003), sexual harassment (McDonald et al 2010), corporate crimes such as Bhopal (Engel and Martin 2006), and genocide (Martin 2009). Therefore it is plausible that similar tactics serve to reduce people's concerns about inequality.

Tactics used to reduce public outrage are most apparent in sudden injustices, such as police beatings and massacres of protesters. In the aftermath of the exposure in 2004 of the torture and abuse of prisoners at Abu Ghraib by United States (US) prison guards, outrage was expressed throughout the world, and the methods of devaluation, reinterpretation and official channels were obvious (Gray and Martin 2007). Inequality is different in that it is an ongoing process, with few sudden events to trigger an increase in concern: it is a "slow injustice" (Martin 2006). Therefore, tactics to reduce outrage are likely to be more routine and institutionalised.

Usually, tactics to reduce concern about inequality are used in an intuitive way rather than as part of a conscious strategy or conspiracy to subordinate the poor. Most perpetrators of violent and cruel acts believe they are justified in what they do, or do not think their actions are all that important (Baumeister 1997), and undoubtedly those acting in ways that foster inequality feel similarly. Furthermore, perceptions are shaped by self-interest, with research supporting Lord Acton's classic saying that "Power tends to corrupt and absolute power corrupts absolutely" (Kipnis 1976; Robertson 2012). As well, a small percentage of people display antisocial personality traits, having concern only for themselves and not others; some of these individuals rise to positions of power within hierarchical systems (Babiak and Hare 2006; Berke 1988). For these reasons, it is not surprising that some people want greater inequality and believe it is good.

Rather than try to determine people's motivations, our aim here is to illustrate the tactics that reduce outrage over inequality. This involves noting methods well known to informed observers but seldom combined into a tactical or strategic analysis. We use examples from different parts of the world, as our goal is to demonstrate the plausibility of the crucial role of tactics rather than provide exhaustive proof. Following this, the next step in the analysis is to point to counter-tactics that increase concern about inequality. These are (Martin 2007): (i) exposure of inequality, (ii) validation of those who are most oppressed or marginalised, (iii) interpreting inequality as a form of injustice, (iv) avoiding official channels but instead mobilising support, and (v) resisting intimidation and rewards. To illustrate these counter-tactics, we use several examples, with special attention to the Occupy movement.

Because tactics to reduce or increase outrage are found in so many different sorts of injustices, there is potentially much to learn by comparing tactics used, or not used, in different issues. In undertaking an analysis of tactics used in relation to inequality, there is much to learn from the dynamics of outrage over torture, massacres and other injustices.

Cover-up

If people are not aware of an issue, they will not be concerned by it. Even if they know it exists, the issue may be disguised or covered over in various ways, so it is less likely to be noticed. Nearly everyone knows about the existence of inequality, but in various ways its visibility is reduced, thereby reducing awareness and the likelihood of action against it.

One way to reduce awareness is physical separation. This is most obvious in residential stratification by income, with rich people likely to live in exclusive areas. The former system of apartheid in South Africa involved separate facilities for blacks and whites. However, formal apartheid is far more likely to create a backlash than a seemingly natural separation, thus this is not a common tactic. Poverty in the midst of affluence is sometimes accepted as normal, but for some it can be disturbing. Beggars and homeless people are usually absent from wealthy areas; sometimes governments instruct police to force them out of their usual areas, which serves to reduce the visibility of poverty.

Cover-up of inequality is partly about hiding poverty but more about minimising understanding of the wealth of the rich. A number of surveys have shown that people significantly underestimate income and wealth differentials in their country and would prefer a more equal wealth distribution than the one that they incorrectly think is the case (Norton and Ariely 2011; Norton et al 2014).

In many parts of the world, including India, the wealthy and the impoverished live in clear view of each other: there is little attempt to hide inequality. This suggests that cover-up, as part of the toolkit working for inequality, is not as common as a lack of interest in the topic or as reinterpreting inequality as due to the supposed intelligence and hard work of the wealthy or devaluing the poor, where the poor are said to be responsible for their situation because of their laziness, lack of smarts or because they are simply "Shameless"—as the British TV series puts it. These methods are described in the next two sections.

Devaluation

One of the most potent ways to reduce outrage over injustice is to discredit those at the receiving end. Therefore, attempts may be made to lower the status of victims of injustice, thereby diminishing concern about the injustice itself. Poor people are regularly blamed for their own misfortune, a process called "blaming the victim" (Ryan 1972). They are castigated as being lazy, cheating, unclean, drug-dependent, criminal and in other ways unworthy. The basic idea is that success in the contest for riches is due to the characteristics of the competitors, and those who are poor are failures in every way.

Blaming the victim is aided by a psychological process called belief in a just world (Lerner 1980). Some people believe the world is fair and, when confronted with evidence to the contrary, maintain their belief by saying people are responsible for their own misfortune. Those who are unemployed are blamed for not finding jobs even when unemployment is structural, with dozens of applicants for every vacancy. People who are highly committed to just-world beliefs are more likely to blame poor people for their poverty. This belief is common across developed and developing countries and it results in a poverty/shame nexus (Walker 2014; Chase and Bantebya-Kyomuhendo 2015). Policies and public commentators push poor people to feel ashamed, and some of them take this on and devalue themselves. Shame has traditionally been seen as a useful mechanism for social cohesion and control, yet the negative impacts of it on the poor have received little attention. Walker (2014: 40) has shown how its "psychological consequences can be severe; … low self-esteem, depression, anxiety, eating disorder symptoms, post-traumatic stress disorder, and suicidal ideation have all been associated with shame..."

Shame on its own is damaging enough but when it becomes part of government policy it becomes stigmatising and even more damaging to the poor. Inducing shame has long been a feature of many social welfare programmes; its use has increased in recent years with measures like income quarantining. An even more disturbing trend is the deliberate use of shaming and stigmatisation in order to get people to construct their own latrines (Engel and Susilo 2014).

Walker (2014: 44) also provides evidence that the shame associated with poverty has grown with globalisation since the 1980s. The corollary to the increased emphasis on individualism and consumerism of the past decades may well be greater shame and stigma for those who have not succeeded. The counterpoint to devaluation of the poor is glorification of the wealthy, and this has also increased since the 1980s. Individual success stories are regularly presented as moral lessons in the virtues of hard work and enterprise. Similarly, successful companies are presented as models, with their methods emulated, even though their success may only be short term and due in large part to luck (Rosenzweig 2007; Taleb 2001).

In situations where inequality is stark, devaluation of those who are disadvantaged is a key method of reducing outrage, with poverty-related shame now the "cement" in the structures that maintain inequality and perpetuate poverty (Walker 2014: 191). Those who are ashamed of their poverty are less likely to confront the wealthy, thereby contributing to de facto cover-up.

Reinterpretation

Reinterpretation involves explaining why inequality is acceptable, necessary, natural or beneficial. This has a stronger cognitive component than devaluation. Four techniques of reinterpretation are lying, minimising, blaming and framing.

Lying in this context involves giving false or deceptive information about the extent, consequences or responsibility for inequality. For example, it might be claimed that unemployment payments damage the initiative and prospects of the unemployed, when the evidence says otherwise. People can lie to themselves in a process called self-deception (Trivers 2011), so those who provide false information may also be deceiving themselves.

There is a close connection between the techniques of cover-up and reinterpretation by lying. In cover-up, people do not know a problem exists; in reinterpretation by lying, they are given false information about it. Consider the well-known fact that the per-capita gross domestic product in India is much less than in Britain. What is little known, at least outside India, is that prior to the British conquest of India, living standards of working people in the two countries were similar, and that a significant part of their subsequent divergence can be attributed to colonial exploitation (Davis 2001). Yet many internet sources on British imperialism in India start with its supposed benefits, with students asked to weigh up the positives and negatives as if the railways could make up for the between 12 and 29 million deaths during the late Victorian era famines that can be largely attributed to British policy (Davis 2001). The broader point here is that the current level of inequality is known but the historical processes leading to it are seldom understood or publicised.

"Minimising" means suggesting that the scale or consequences of inequality are not as serious as they actually are. An example is the prominence of the $1.25 a day measure of global poverty. Using this measure, the World Bank could claim in 2014 that the number of people living in poverty declined from 1.93 billion in 1981 to 1.2 billion in 2011; looked at as a proportion of the world's (growing) population, the fall sounds large. The $1.25 a day measure was calculated by taking the median of the 10 lowest poverty lines across the globe in 1985; it only allows people a very frugal existence and results in a shortened life expectancy. The measure significantly understates the level of global poverty, even considered in absolute (not relative) terms. To achieve a reasonable life expectancy and the associated quality of life, Edward (2006) calculated that the associated income was closer to $3 a day. When we look at the poverty figures using even a $2 a day calculation, there has been less progress in poverty reduction—in 1981 there were 2.59 billion in that category and for 2011 the estimate was 2.2 billion. The consequences of the $1.25 a day calculation cascade: if you are above it, you are no longer regarded as absolutely poor. The Economist (2008, 2009) is very fond of labelling those with incomes above $2 a day as the new middle class!

At yet another level, the very focus on poverty, not inequality, is a way to minimise concern about inequality because empowering the poor has, over the past few decades, not been regarded as being in any way linked to the power or wealth of the rich (Freeland 2012; Marcuse 2015). When inequality is the focus, the rich come under scrutiny, as the Occupy movement showed. Here we see how different tactics converge, as this is an issue of both minimising inequality and reframing it, discussed shortly.
As well as blaming the victim (a type of devaluation), it is also possible to blame others. For example, governments can blame greedy corporations and corporations can blame ineffective governments, or they can blame previous governments or individual politicians. In the case of inequality, the “blame” is often put on supposedly natural socio-economic processes. For example, the Economist (2015) attributes a large part of the recent upsurge in US inequality to an escalation in assortative mating, which it describes as “clever, successful men marry clever, successful women” rather than as, say, people marrying in their own class. Again, this blame also converges into the area of framing, which is the most potent reinterpretation technique. It is a process of seeing and presenting inequality in a way that makes it seem acceptable or natural. Historically, religious doctrines or local philosophies were a major force in framing inequality. Confucius said that when a country is “well governed, poverty is something to be ashamed of” (cited in Walker 2014: 5). In India, the Vedic civilisation attributed inequality and poverty to people’s actions in their previous lives, and the later development of karma encouraged acceptance of existing circumstances. The Christian tradition started out promoting poverty as the way to salvation, but this did not last long and the more pertinent legacy is the distinction between the deserving and the undeserving poor. These traditions too often counsel acceptance of one’s situation, say that poverty is natural, or promise that things will be different in a future life. While such doctrines can provide peace of mind for individuals, they also reduce the incentive to question or challenge inequality.

Since the late 1970s, neo-liberal ideas have actively promoted inequality as a natural state. As one of the founding fathers of neo-liberalism, Friedrich von Hayek (2006/1960: 76), said:

It has been the fashion in modern times to minimize the importance of congenital differences between men and to ascribe all the important differences to the influence of environment. However important the latter may be, we must not overlook the fact that individuals are very different from the outset. The importance of individual differences would hardly be less if all people were brought up in very similar environments. As a statement of fact, it just is not true that ‘all men are born equal.’

Neo-liberalism promotes belief in meritocracy, in which people rise in the system according to their talents. This can serve to justify inequality, because it assumes that social systems are hierarchical and that divergences in outcomes are natural. The ways that people at the top of hierarchical systems use their power to reward themselves are obscured. Equally, stigmatisation of the poor has grown as this approach attributes poverty to failure, laziness, lack of intelligence and so on.

Academics present many arguments to justify inequality, for example arguing that talented people need to be amply rewarded so they will undertake important jobs, that low wages lead to higher employment, that prejudice is natural and greed is good (Dorling 2010).

Official Channels

Various formal processes in society give a figurative stamp of approval for inequality. The most important is schooling, which is a system that reproduces and legitimates social stratification. Those who do better at school obtain high grades and more advanced degrees, which may be prerequisites for certain types of jobs. Not having a diploma, degree or sufficiently high grades can be a rationale for denying a person a job, even when the credential or grades are irrelevant to the work (Collins 1979; Dore 1976). The education system seems to offer a justification for inequality, even though there is no necessity that those with degrees should receive higher income.

Welfare agencies, providing various payments and services for those in need, are usually highly bureaucratic, with many complex rules concerning who is entitled to what. Applying these rules according to rigid formulas helps legitimate the social location of those served: if a person or family is ruled ineligible for a payment for unemployment or disability, this serves as a type of official statement that they do not deserve any more.

The legal system regularly makes rulings that reinforce the legitimacy of inequality. Those who are seriously disadvantaged are more commonly subject to attention from the police and courts, whereas high-level crimes, such as massive corruption or production of dangerous products, are seldom prosecuted. In the aftermath of the Bhopal disaster, the company responsible, Union Carbide, was able to escape with minimal penalties. The various court cases on behalf of victims of the disaster led to pitiful levels of compensation, yet gave a stamp of legitimacy to the outcome (Engel and Martin 2006).

Official channels are rule-based systems that promise to provide justice. Yet these systems are themselves biased in ways that make them tools for the rich and powerful. As writer Anatole France famously commented, “In its majestic equality, the law forbids rich and poor alike to sleep under bridges, beg in the streets and steal loaves of bread.” When the rules are biased or applied in a biased fashion, they give the appearance of fairness without the substance.

Intimidation and Rewards

Some of those who challenge inequality are met with reprisals, including harassment, job loss and assault. When workers, especially low-paid workers, organise to demand better wages and conditions, they are sometimes met with harsh opposition. Union organisers are special targets. The US, the most unequal industrialised country, is noted for employer campaigns against unions.

Whistle-blowers—employees who speak out in the public interest—are often subject to reprisals (Miceli et al 2008). These include whistle-blowers in government and corporations who expose corruption at high levels, for example, tax avoidance by wealthy people, pay-offs to government officials who give favoured deals to corporate friends, or even just the packages obtained by those with high incomes. The Occupy movement, which burst into public consciousness in 2011, was essentially a protest against inequality. In some countries, Occupy protesters were subject to attacks by police.

Intimidation can serve to discourage challenges to inequality; a parallel tactic is offering rewards to those who serve to protect or justify inequality. One example is corrupt union officials,
who connive with employers to keep a poorly paid workforce quiescent. Leaders of left-wing political parties, who say they support a fairer society, can be lured by the privileges of office, and become far more conservative when they are elected. There is a long history of progressive parties and politicians failing to live up to the expectations of their followers (Boggs 1986; Miliband 1969).

These five sorts of tactics to reduce outrage over inequality often overlap. For example, elections are an official channel that often promises more than it delivers, while rewards for party leaders serve to buy off challengers. The value in classifying tactics is to clarify and group the great variety of methods used, and to show more clearly pathways for taking action to oppose inequality. Each of the five types of tactics reducing outrage over inequality can be countered by corresponding tactics to increase outrage. We now turn to examples of these outrage-increasing tactics, with special attention to the Occupy movement (Gitlin 2012; Graeber 2013; Sitrin and Azzellini 2014).

Exposure

If cover-up is a key method for reducing outrage, then the obvious counter-tactic is exposure of the injustice. This is indeed the method used by many who seek social justice: social problems are documented and publicised.

In some workplaces, the salaries of top management are not disclosed, and furthermore are disguised through such techniques as providing share options. When salaries are publicised and compared to those of low-level workers, this can cause outrage, which is even greater when top management is involved in corrupt activities.

The Occupy movement has served as a method of exposing inequality; indeed, its most lasting legacy may be putting inequality on the public agenda. Through public protests and through the memorable attention to a division in society between the wealthiest 1% and the other 99%, the movement has drawn attention to economic inequality. A related campaign has been exposure of tax minimisation and avoidance strategies by multinational corporations or the super-wealthy, for example by the Tax Justice Network.

Validation

The technique of devaluation reduces outrage; the countervailing technique is validation of those who are the targets or victims of unjust actions and systems. Validation can be promoted by treating poor and disadvantaged people with respect, by associating them with positive symbols and values, and through their own dignified and courageous behaviour.

A classic validation technique is organised action, taken in a resolute manner. When lowly paid workers join rallies, strikes and boycotts, and present themselves as worthy of respect, they are more likely to be treated seriously. Validation also occurs by association with valued individuals, organisations and symbols. When prominent people—respected politicians, religious leaders or celebrities—speak on behalf of those who are disadvantaged, this contributes to greater respect; even better is personal involvement with those who are otherwise stigmatised. Dalit groups in India have reframed cultural beliefs that traditionally worked to oppress them as untouchables, to create new identities. The belief that they were the earliest inhabitants of India has been used to develop a “Dalitology” that validates their existence rather than undermines it. This has developed along with a range of Dalit literature that resists the inevitability of discrimination against Dalits (Nimbalkar 2006). When wealthy, prominent business persons, such as Warren Buffett and George Soros, speak out against inequality, this has an extra impact because they have nothing to gain financially from measures for social justice. While it is important for oppressed people to take stands on their own behalf, forming alliances with those in other parts of society is vital.

Interpretation

Given the various methods of reinterpretation—lying, minimising, blaming and framing—the counter-tactic is to interpret inequality as unjust and harmful. The inherent unfairness of extreme inequality needs to be highlighted, as well as the impacts of inequality.

In recent years, one of the most powerful analyses of the harmful impacts of inequality has been the book The Spirit Level (Wilkinson and Pickett 2009). The authors document that societies with greater economic inequality are worse off in various ways, such as having greater crime and suicide rates.2 Their focus is on the damaging psychosocial impacts of inequality on society in general. There is also a growing body of research specifically on its impacts on the wealthy, which shows that wealth blunts the parts of the brain linked to empathy, and that the rich are more likely to violate road rules, cheat to achieve financial benefits and even shoplift (for a review of research, see Lewis 2014).

Others have argued that inequality can lead to slower economic growth, or even stagnation (Acemoglu and Robinson 2012; Ostry et al 2014), countering the usual trickle-down argument. Various authors have documented the huge influence of powerful industries—energy, pharmaceuticals, transport—on government policy, so much so that governments are often tools of special interests rather than serving the public interest (Stiglitz 2012).

Much of the intellectual debate over inequality occurs in academic journals and books, but this has spilled over into public discourse, in part due to the influence of the social justice movement and the Occupy movement. Thomas Piketty’s 2014 book Capital in the Twenty-First Century achieved bestseller status, something that would have been unlikely without the increased public discussion of inequality. Very importantly, Piketty and his colleagues have provided strong data demonstrating the rise in inequality since the 1980s, and have refocused debate about state revenue away from cutting social security, health and education benefits and services and towards the income side of the state ledger, in particular the method and amount of taxation of wealth and income.

Another contributor to the public debate is research on happiness, in the field of positive psychology. Among the
well-established findings are that greater income can improve happiness among those in poverty, but the benefits of added income are much more limited for those with a reasonable income (Frey et al 2008). Furthermore, happiness can be reliably improved through such non-materialistic practices as building personal relationships, expressing gratitude and helping others (Lyubomirsky 2008). Positive psychology can be used as a warrant for changing society to foster community and egalitarianism rather than competitive materialism.

Mobilisation, Not Official Channels

The most counter-intuitive aspect of the tactics for outrage management is that official channels such as courts may not be the solution but in many cases actually reduce outrage and hence discourage popular action. This is because official channels give the appearance of justice but, when used to challenge powerful groups, seldom the substance. Petitions, appeals to authorities, interventions by international bodies, formal investigations, courts, politicians or elections can sometimes be effective roads to reform, but to increase outrage over injustice, it is better to avoid relying on them. Although many who work in official bodies have the best of intentions and do everything they can to serve the population, they are constrained by narrow mandates, bureaucratic requirements, limited resources, and the possibility of losing their jobs should they become too activist.

Insights from many decades of social movements show that direct action can offer better prospects for change. The labour movement in the late 19th and early 20th centuries was instrumental in improving workers’ rights in the West. The Indian independence movement saw Gandhi write letters to the Viceroy as a formality, not in any expectation that formal appeals would be successful. He launched direct action campaigns, such as the Salt Satyagraha in 1930, which changed consciousness across the country: people became energised rather than resigned (Weber 1997).

Similarly, social justice campaigners have had the greatest impact through organised mass action. The 1999 global justice protests in Seattle stimulated similar protests in many cities across the globe. Likewise, Occupy Wall Street set an example followed elsewhere in the US and the world.

Two main sorts of direct action are relevant here: to challenge inequality and to promote equality. The Occupy movement largely focused on increasing awareness of and concern about inequality, though it has also run a range of positive initiatives, in the tradition of Gandhi’s constructive programme, to create the skills, resources and vision of a more equal society.

One example of an equality-promoting initiative is free software, cooperatively produced: it undercuts the intellectual monopolies that serve the powerful software companies, and by offering a positive alternative promotes greater access to a range of capacities. More generally, peer-to-peer alternatives in several fields can expand the commons—in energy, transport, information, creative works—and potentially undermine market capitalism (Rifkin 2014). In Greece, in the wake of the crisis, people set up successful solidarity health centres, food centres and cooperatives in their hundreds, which inspired, and served as one support base of, the newly elected Syriza-led government (Henley 2015).

Resistance to Intimidation and Rewards

The tactic of intimidation discourages expressions of concern, while rewards buy off dissent. To increase outrage and action against inequality, both intimidation and rewards need to be resisted. This is apparent in the courageous efforts of Occupy activists.

Resistance is also important in other arenas, in small and large ways. It can involve workers with access to information about corruption and harsh treatment of disadvantaged groups having the courage and the skills to collect documents and make them available to activists. It can involve journalists writing stories exposing obscene behaviour by the wealthy and telling about courageous campaigners for social justice. It can involve individuals quietly engaging with friends and colleagues to shift attitudes concerning inequality.

Conclusions

Inequality is linked to considerable poverty, ill health and suffering, yet is entrenched in many countries. Although many people consider extremes of inequality to be undesirable, public concern only occasionally reaches critical mass. Indeed, according to Piketty’s (2014) analysis, it took the combination of the devastation of two world wars, high post-war population growth and the active labour movement to achieve the significant reductions in inequality that occurred between the start of the 20th century and the 1960s. To better understand the dynamics of concern about inequality, it is useful to examine tactics that reduce or increase public outrage. Defenders of inequality can use tactics of cover-up, devaluation, reinterpretation, official channels, intimidation and rewards to reduce outrage; challengers can use corresponding counter-tactics.

One implication of this analysis is that supporters of social justice need to give attention to the full range of tactics. It is not enough to assume that evidence and logic will, on their own, stimulate action, given that existing perceptions and beliefs work to hide inequality and the desire to believe in a just world promotes the corollary belief that the poor are responsible for their poverty. Equally, it is important to understand the role of official channels, including formal inquiries, government agencies, and elections. Many people assume that official channels are always the appropriate avenue for seeking justice and that, as long as officials or politicians are committed to doing something, nothing further is required. However, the lesson from many other injustices, from sexual harassment to massacres of peaceful protesters, is that when powerful perpetrators are involved, official channels sometimes give only an appearance of justice.

Many people put their trust in progressive governments to counter the inequality spawned by unbridled markets, but over the past several decades this trust has been broken repeatedly. Despite this, citizens often look to governments as the main solution, rather than as part of the problem. The analysis of the outrage-reducing role of official channels suggests it is more productive to pursue methods that directly tackle problems, rather than relying on those in positions
of power to act on their behalf. When social problems are highly entrenched, it is to be expected that formal processes have become implicated in the problems, either contributing to them or serving as escape valves.

The Occupy movement, an aspect of the global justice movement, has put inequality on the agenda, so that mainstream media and politicians now take the issue seriously. However, there are strong forces working against any systemic approach to it, starting with inertia and governments prioritising economic growth over equality or sustainability. So, it would be easy for inequality to slip out of public consciousness, as governments raise the alarm about other issues, such as terrorism. Generation of public outrage is part of the process in addressing poverty and disadvantage, and in promoting social justice; it needs to be accompanied by long-term efforts towards different ways of organising society.
Notes

1 There has been a debate between economists over the last decade about whether the gap between countries is increasing or decreasing. Those arguing that a decrease has occurred rely on very specific sets of income groupings, ways of measuring inequality and timeframes. For the key contributions, see Seligson and Passé-Smith (2014). We take the position of Passé-Smith in this volume, that there is an absolute gap between rich and poor countries and that for the most part it has been widening over the past few decades, though looking at relative gaps shows a slightly rosier picture. The most recent data regarding global inequality too highlight the illusory nature of the claims that inequality is decreasing.

2 The concern that The Spirit Level’s findings may impact public debate about inequality is demonstrated by the number of books and websites that appeared attempting to discredit its findings.

References

Acemoglu, Daron and James A Robinson (2012): Why Nations Fail: The Origins of Power, Prosperity and Poverty, New York: Crown Business.
Babiak, Paul and Robert D Hare (2006): Snakes in Suits: When Psychopaths Go to Work, New York: HarperCollins.
Baumeister, Roy F (1997): Evil: Inside Human Violence and Cruelty, New York: Freeman.
Berke, Joseph H (1988): The Tyranny of Malice: Exploring the Dark Side of Character and Culture, New York: Summit Books.
Boggs, Carl (1986): Social Movements and Political Power: Emerging Forms of Radicalism in the West, Philadelphia: Temple University Press.
Cammack, Paul (2009): “Why Are Some People Better Off Than Others?” Global Politics: A New Introduction, Jenny Edkins and Maja Zehfuss (eds), London: Routledge, pp 249–319.
Chase, Elaine and Grace Bantebya-Kyomuhendo (eds) (2015): Poverty and Shame: Global Experiences, Oxford: Oxford University Press.
Collins, Randall (1979): The Credential Society: An Historical Sociology of Education and Stratification, New York: Academic Press.
Davis, Mike (2001): Late Victorian Holocausts: El Niño Famines and the Making of the Third World, London: Verso.
Dore, Ronald (1976): The Diploma Disease: Education, Qualification and Development, London: Allen and Unwin.
Dorling, Daniel (2010): Injustice: Why Social Inequality Persists, Bristol: Policy Press.
Economist (2008): “The In-betweeners—Economics Focus,” 2 February, p 84.
— (2009): “Notions of Shopkeepers,” 14 February, p 13.
— (2015): “America’s New Aristocracy,” 24 January, p 9.
Edward, Peter (2006): “The Ethical Poverty Line: A Moral Quantification of Absolute Poverty,” Third World Quarterly, 27(2): 377–93.
Engel, Susan and Brian Martin (2006): “Union Carbide and James Hardie: Lessons in Politics and Power,” Global Society: Journal of Interdisciplinary International Relations, 20(4): 475–90.
Engel, Susan and Anggun Susilo (2014): “Shaming and Sanitation in Indonesia—A Return to Colonial Public Health Practices?” Development and Change, 45(1): 157–78.
Freeland, Chrystia (2012): Plutocrats: The Rise of the New Global Super-Rich and the Fall of Everyone Else, New York: Penguin.
Frey, Bruno S, in collaboration with Alois Stutzer, Matthias Benz, Stephan Meier, Simon Luechinger and Christine Benesch (2008): Happiness: A Revolution in Economics, Cambridge, MA: MIT Press.
Gitlin, Todd (2012): Occupy Nation: The Roots, the Spirit, and the Promise of Occupy Wall Street, New York: HarperCollins.
Graeber, David (2013): The Democracy Project: A History, a Crisis, a Movement, London: Allen Lane.
Gray, Truda and Brian Martin (2007): “Abu Ghraib,” Justice Ignited, Brian Martin (ed), Lanham, MD: Rowman and Littlefield, pp 129–41.
Haidt, Jonathan (2012): The Righteous Mind: Why Good People Are Divided by Politics and Religion, New York: Pantheon.
Hardoon, Deborah (2015): Wealth: Having It All and Wanting More, Oxford: Oxfam International, viewed on 6 July 2015, …fam.org.uk/publications/wealth-having-it-all-and-wanting-more-338125.
Hayek, Friedrich A von (2006/1960): The Constitution of Liberty, Oxford: Routledge Classics.
Henley, Jon (2015): “Greece’s Solidarity Movement: ‘It’s a Whole New Model—and It’s Working,’” Guardian, 24 January, viewed on 6 July 2015, …/23/greece-solidarity-movement-cooperatives-syriza.
Inequality.org (2015): “Global Inequality,” viewed on 6 July 2015.
Jansen, Sue Curry and Brian Martin (2003): “Making Censorship Backfire,” Counterpoise, 7(3): 5–15.
Kipnis, David (1976): The Powerholders, Chicago: University of Chicago Press.
Lerner, Melvin J (1980): The Belief in a Just World: A Fundamental Delusion, New York: Plenum.
Lewis, Michael (2014): “Extreme Wealth Is Bad for Everyone—Especially the Wealthy,” New Republic, 12 November, viewed on 6 July 2015, http://…aires-book-review-money-cant-buy-happiness.
Lyubomirsky, Sonja (2007): The How of Happiness, New York: Penguin.
Marcuse, Peter (2015): “Poverty or Inequality: Does It Matter?” Inequality.org, 28 January, viewed on 6 July 2015, …nomic-language-matters/.
Martin, Brian (2006): “Slow Injustice,” Social Alternatives, 26(4): 5–9.
— (2007): Justice Ignited: The Dynamics of Backfire, Lanham, MD: Rowman & Littlefield.
— (2009): “Managing Outrage over Genocide: Case Study Rwanda,” Global Change, Peace & Security, 21(3): 275–90.
Martin, Brian and Steve Wright (2003): “Countershock: Mobilizing Resistance to Electroshock Weapons,” Medicine, Conflict and Survival, 19(3): 205–22.
McDonald, Paula, Tina Graham and Brian Martin (2010): “Outrage Management in Cases of Sexual Harassment as Revealed in Judicial Decisions,” Psychology of Women Quarterly, 34: 165–80.
Miceli, Marcia P, Janet P Near and Terry Morehead Dworkin (2008): Whistle-blowing in Organisations, New York: Routledge.
Miliband, Ralph (1969): The State in Capitalist Society, London: Weidenfeld and Nicolson.
Moore, Jr, Barrington (1978): Injustice: The Social Bases of Obedience and Revolt, London: Macmillan.
Nimbalkar, Waman (2006): Dalit Literature: Nature and Role, Nagpur: Prabodhan Prakashan.
Norton, Michael I and Dan Ariely (2011): “Building a Better America—One Wealth Quintile at a Time,” Perspectives on Psychological Science, 6(1): 9–12.
Norton, Michael I, David T Neal, Cassandra L Govan, Dan Ariely and Elise Holland (2014): “The Not-So-Common-Wealth of Australia: Evidence for a Cross-Cultural Desire for a More Equal Distribution of Wealth,” Analyses of Social Issues and Public Policy, 14(1): 339–51.
OECD (2011): Divided We Stand: Why Inequality Keeps Rising, Paris: OECD.
Ostry, Jonathan D, Andrew Berg and Charalambos G Tsangarides (2014): Redistribution, Inequality, and Growth, IMF Staff Discussion Note, February, SDN/14/02.
Piketty, Thomas (2014): Capital in the Twenty-First Century, Cambridge, MA: Harvard University Press.
Rifkin, Jeremy (2014): The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism, New York: Palgrave Macmillan.
Robertson, Ian (2012): The Winner Effect: How Power Affects Your Brain, London: Bloomsbury.
Rosenzweig, Phil (2007): The Halo Effect … and the Eight Other Business Delusions that Deceive Managers, New York: Free Press.
Ryan, William (1972): Blaming the Victim, New York: Vintage.
Seligson, Mitchell A and John T Passé-Smith (eds) (2014): Development and Underdevelopment: The Political Economy of Global Inequality, 5th ed, Boulder, CO: Lynne Rienner.
Sitrin, Marina and Dario Azzellini (2014): They Can’t Represent Us! Reinventing Democracy from Greece to Occupy, London: Verso.
Stiglitz, Joseph (2012): The Price of Inequality: How Today’s Divided Society Endangers Our Future, New York: Norton.
Taleb, Nassim Nicholas (2001): Fooled by Randomness: The Hidden Role of Chance in the Markets and in Life, New York: Texere.
Trivers, Robert (2011): The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life, New York: Basic Books.
Walker, Robert (2014): The Shame of Poverty, Oxford: Oxford University Press.
Weber, Thomas (1997): On the Salt March: The Historiography of Gandhi’s March to Dandi, New Delhi: HarperCollins.
Wilkinson, Richard and Kate Pickett (2009): The Spirit Level: Why More Equal Societies Almost Always Do Better, London: Allen Lane.
Radhika Khosla, Srihari Dukkipati, Navroz K Dubash, Ashok Sreenivas, Brett Cohen

India faces multiple and simultaneous economic, social and environmental challenges. While there has been conceptual progress towards harnessing their synergies, there are limited methodologies available for operationalising a multiple objective framework for development and climate policy. This paper proposes a “multi-criteria decision analysis” approach to this problem, using illustrative examples from the cooking and buildings sectors. An MCDA approach enables policy processes that are analytically rigorous, participative and transparent, which are required to address India’s complex energy and climate challenges.

The authors are grateful to Veena Joshi and V V N Kishore for their valuable inputs and to the participants of the MCDA workshop held at the Centre for Policy Research in May 2015 for their feedback. Responsibility for all errors rests with the authors.

Radhika Khosla ([email protected]) and Navroz K Dubash ([email protected]) are at the Centre for Policy Research, New Delhi. Srihari Dukkipati ([email protected]) and Ashok Sreenivas (ashok@prayaspune.org) are at the Prayas (Energy) Group, Pune. Brett Cohen ([email protected]) is a Researcher in the Energy Research Centre, University of Cape Town.

India faces a challenging decade ahead in energy and climate policymaking. The problems are multiple: sputtering fossil fuel production capabilities; limited access to electricity and modern cooking fuels for the poorest; rising fuel imports in an unstable global energy context; continued electricity pricing and governance challenges leading to costly deficits or surplus supply; and, not least, growing environmental contestation around land, water and air. But all is not bleak: growing energy efficiency programmes; integrated urbanisation and transport policy discussions; inroads to enhancing energy access and security; and bold renewable energy initiatives, even if not fully conceptualised, suggest the promise of transformation. However one adds the scorecard, there is no doubt that energy decision-making is ever more complex and interconnected.

The domestic energy policy context is made further challenging by the overlay of global climate negotiations. The Paris 2015 climate conference required every country to submit its intended climate contribution. India’s international pledge, submitted in early October 2015, includes a reduction of emissions intensity by 33%–35% from 2005 levels, and an increase of the share of non-fossil fuel-based electricity to 40% of total capacity. This pledge has significant domestic energy implications, since energy accounts for 77% of India’s greenhouse gas (GHG) emissions (WRI 2014). In short, India’s energy future requires addressing multiple and simultaneous challenges that together suggest great complexity.

Historically, the country’s policymaking has adopted a rather straightforward supply orientation: can past trends in energy supply be reproduced and enhanced? Although recently this has been leavened by welcome attention to the demand side, the discussions typically occur in silos around energy-based ministries, which obscure linkages across sub-sectors or larger strategic considerations. Perhaps most problematic, social questions around energy, such as access to energy, distribution of consumption, and environmental impacts, have been excluded or at most received lip-service treatment. A recent review of national modelling studies shows that these questions often do not even get asked by studies of India’s energy future (Dubash et al 2015). The overall result is a number of disconnects: between domestic and foreign policy debates, where climate policy is often treated as a foreign policy issue, and between energy and climate policy, although in practice climate policy should be built around a sensible and well-informed energy policy.
At the same time, the consideration of the multiple dimensions of development is, formally at least, already enshrined in Indian policy. The National Action Plan on Climate Change calls for a “co-benefits” approach where the climate implications of development policies are explicitly considered. The Twelfth Five Year Plan also discusses how to implement co-benefits in the context of national energy planning. While the language of co-benefits emerged in the context of the climate debate, in the larger context of energy policy it is more usefully referred to as assessment of multiple objectives, which does not require declaring one objective as primary.

This increasing policy attention to linkages between sustainable development and climate considerations—expressed as co-benefits or multiple objectives—is backed by a growing research base. Global models provide strong evidence of substantial complementarities between climate mitigation, reduced air pollution and energy security outcomes in the South Asian region (Rao et al 2015). Indian studies, on the other hand, have paid limited attention to such linkages, but a few track achievements ex post of the multiple objectives of energy policy (Dubash et al 2015). Clearly, the idea of energy policy as serving a range of economic, social and environmental objectives simultaneously is taking hold. At the same time, while the multiple objectives approach has won broad acceptance, there are few efforts, so far, to operationalise it.

This paper presents one approach, based on “multi-criteria decision analysis” or MCDA, which is a well-established framework in a range of decision-making arenas, to operationalise the idea of co-benefits. The paper builds on a slew of recent studies and particularly deepens early work done by some of us in the context of India’s low carbon expert group (Dubash et al 2013). We enhance our earlier efforts by providing a clear methodological framework to consider the relationships between multiple objectives, the tools to simultaneously deal with quantitative and qualitative information, and those to aggregate and prioritise policy objectives based on different stakeholder opinions. These characteristics make MCDA deeply salient to energy policy, and allow policymaking to take into account complexities, while maintaining rigour and potentially avoiding the paralysis that complexity can bring.

To explain these points more clearly and intuitively, we apply an MCDA approach illustratively to two cases in this paper: access to modern cooking fuels and building energy efficiency. We envision the approach laid out here providing a starting point for more transparent, analytically rigorous and inclusive policymaking processes around energy and climate change. Notably, however, it could also be used for a much wider range of applications, including adaptation through the process of state action plans, as well as for other questions of social policy. The critical message, however, is that this approach is not proposed as a single decision-making tool to be used by policymakers in isolation. Rather, it provides a framework for structured discussion, which can inform policy trade-offs, design and implementation. In the remainder of the paper we introduce MCDA approaches, describe their existing applications to climate and development policy, develop one specific approach and apply it to the cases of the cooking and buildings sectors, and offer some concluding observations.

2 Insights from MCDA Approaches for Policy

A growing number of global studies address the complex challenge of linking climate and development in a multiple objectives framework (Ürge-Vorsatz et al 2014; UNDP 2011; Angelou and Bhatia 2014). For instance, the Asian Co-Benefits Partnership (2014) highlights possible entry points to explicitly integrate climate and development into decision-making (IGES 2014). Co-benefits analysis to indicate synergies and optimise trade-offs has also been undertaken in the context of the Clean Development Mechanism (Sun et al 2010: 78; TERI 2012: 148). Other studies inform discussion of Low Emission Development Strategies (LEDS), which help prioritise actions based on their economic, social and environmental impacts (Cox et al 2014). The most ambitious effort to develop a multiple objective-based analysis framework for climate policy is attempted by the United Nations Environment Programme (UNEP 2011; Ürge-Vorsatz et al 2014). Several of these studies draw on MCDA to simultaneously examine policy options against multiple objectives.

Drawing on the literature, this paper develops a specific variant of MCDA approaches that offers a number of advantages when applied to Indian energy policy. It requires policymakers to explicitly state, upfront, the goals which the policy would seek to maximise. In the cooking and buildings cases which will be discussed, the economic, social, environmental and institutional objectives were explicitly laid out at the start of decision analysis. The approach also encourages consideration of factors that are often ignored, such as household drudgery in the cooking sector. And it requires identifying relative weights for the stated policy goals: for example, in the environmental case, the relative importance of minimising household air pollution versus reducing GHG emissions. This attention enhances the transparency of the process and the effectiveness of the final decision.

A second advantage is that MCDA offers tools for incorporating both quantitative and qualitative information with equal rigour. In contrast with other approaches, such as cost-benefit analysis, MCDA explicitly allows for the use of qualitative information which is often hard to analyse but nonetheless crucial to consider. The underlying argument is that all objectives need to be considered, not only those that are quantifiable. For example, Indian policymaking is frequently hindered by implementation challenges arising from vested interests or limited bureaucratic capability, but because these are hard to quantify they are left out of policy analysis.
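The weighting and normalisation machinery described in the last two paragraphs can be made concrete in a few lines of code. The sketch below is ours, not the paper's: the criterion names, the qualitative scale and all numbers are invented placeholders, and min-max normalisation is only one of several scaling choices an analyst might make.

# A minimal sketch of MCDA scoring, assuming invented criteria and weights.
# Qualitative ratings are mapped to a 0-1 scale; quantitative values are
# min-max normalised so that 1.0 is always the best outcome; an option's
# overall score is the weighted sum of its normalised criterion scores.

QUALITATIVE_SCALE = {"low": 1.0, "medium": 0.5, "high": 0.0}  # lower resistance scores better

def normalise(value, worst, best):
    """Min-max normalise a quantitative value onto a 0-1 'best is 1' scale."""
    if worst == best:
        return 1.0
    return (value - worst) / (best - worst)

def aggregate(scores, weights):
    """Weighted sum of normalised criterion scores for one policy option."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[name] * scores[name] for name in weights)

# Hypothetical normalised scores for a single policy option.
scores = {
    "subsidy_burden": normalise(1.2, worst=2.0, best=0.5),  # trillion Rs
    "ghg_emissions": normalise(300, worst=400, best=150),   # MT CO2-e
    "ex_ante_resistance": QUALITATIVE_SCALE["medium"],      # expert rating
}
weights = {"subsidy_burden": 0.4, "ghg_emissions": 0.3, "ex_ante_resistance": 0.3}
print(f"overall score: {aggregate(scores, weights):.2f}")

Varying the weights dictionary, which in a real application would come from stakeholder elicitation rather than being fixed by the analyst, is also the natural entry point for the sensitivity analysis discussed later in the paper.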
Third, given the careful consideration of qualitative information and subjective weighting of policy goals, MCDA approaches are necessarily underpinned by an early and continuous involvement of stakeholders. These include technical experts, policymakers, industry, end-users and civil society.
For example, for policies providing access to modern cooking fuels, it is important to understand the preferences of the cook stove users themselves. This broadening of the information base beyond experts to include relevant stakeholders likely adds to the complexity of the process, but certainly enhances buy-in and enriches the analytical base by providing new insights—for example, cultural concerns around adopting different cooking solutions.

Last, the process of deliberation and repeated iteration with stakeholders improves the sectoral knowledge base and fills information gaps. For example, policy analysis for the buildings sector requires gathering data on a range of issues, from the upfront investment needed for efficiency to the local pollution reduced from lower diesel generator use.

While traditionally MCDA has been used for discrete decisions, such as choosing between power plant sites, its application is not as well established for policy analysis, where discrete options are harder to identify. However, its benefits reinforce its emerging international potential: in South Africa, the Mitigation Potential Analysis used social, environmental and macroeconomic criteria to assess a variety of GHG mitigation options (DEA 2014); and in Chile, stakeholder inputs were used to identify the most important co-benefits of mitigation actions and associated implementation conditions (MAPS 2015).

The approach developed here draws on these international experiences and extends the few other efforts to operationalise multiple objectives for Indian energy decisions. The latter include an early framework for multi-criteria analysis (Dubash et al 2013), energy dashboards (Sreenivas and Iyer 2015; SSEF 2015; Narula et al 2015), sectoral analysis of the cooking sector (Jain et al 2015), and state-specific studies using the framework of sustainable development and green growth (GGGI 2014). Adaptation work is also beginning to engage stakeholders to deliberate multiple objectives. The MCDA approach described in the next section focuses on energy-related policy issues, and can be extended to resilience and adaptation, as well as social issues.

3 Description of an MCDA Approach

We discuss the key steps of an MCDA approach in this section. Our focus is less on methodological details (which are laid out in accompanying appendices) and more on the reasoning and thought process. Each subsection describes one step of the methodology by presenting a rationale for it, the process to be adopted, and expected outcomes.

For ease of exposition we use two case studies, of the cooking and buildings sectors, to illustrate the approach. Both carry significant development implications and are currently understudied. The cooking sector is important because over 86% of rural Indian households, representing over 700 million people, used solid fuels for cooking (Census of India 2011). The adverse health effects of traditional, open-stove cooking with biomass are well documented and lead to an estimated 1 million premature deaths annually in India (Smith et al 2014). In this context, India is committed to a transition to clean cooking fuels under the UN Sustainable Development Goals, and the cost and climate implications of such a transition need to be understood.

Buildings, on the other hand, represent the rapid urban transformation taking place. Buildings consume more than a third of the economy’s electricity, and it is expected that two-thirds of India’s 2030 building stock is yet to be built (Kumar et al 2010). Unlike traditional pathways to meeting energy goals, energy efficiency in the built environment offers multiple benefits that go beyond energy savings. The additional benefits include carbon mitigation, improved energy security, job creation, and better socio-environmental outcomes. However, if efficiency is not addressed, an estimated 1.2 gigatons of CO2 emissions will be locked in as India’s building energy demand increases fivefold over 2005 levels by mid-century (Ürge-Vorsatz et al 2012).

We apply the proposed approach to these two cases as an illustration of MCDA’s potential utility to Indian policymaking. The outcomes presented here are preliminary, notably because we relied on limited expert input and not on full stakeholder workshops. Hence, less salient than the final numerical results is the underlying thought process, method and approach. The input data for the cases, and part of the methodology in the buildings case, draw on NITI Aayog’s India Energy Security Scenarios (IESS), a bottom-up energy accounting model (IESS 2015). This comprehensive database provides a useful starting point to undertake sectoral multi-objective analysis, as attempted here.

For both case studies, we define a set of national priorities and preferences, drawn from our understanding of the public discourse around Indian energy policy. In a formal decision-making context these objectives would ideally reflect clear political choices to guide energy and climate policy, while in a multi-stakeholder context, they would be arrived at through consultation and discussion. We refer to these national priorities as “branch”-level objectives (as opposed to specific objectives which we later refer to as “leaves”). Here we use four branch-level objectives:
• Economic: Economic considerations are fundamental to policymaking. India is in the midst of an urban, demographic and infrastructure transformation whose success rests on the economy’s ability to grow, create jobs and secure its energy future.
• Social: It is important that the poorest and most vulnerable gain substantially from development policies that reduce poverty and inequality, improve access to quality and affordable goods and services, and also act as an engine for further development (Dubash et al 2013).
• Environmental: Development policies have environmental implications, which can have repercussions for human health and quality of life. Negative impacts need to be minimised locally, such as air pollution, and globally, as in the case of GHG emissions.
• Institutional: Ease of implementation is often neglected during policy evaluation, either from oversight or because analysis is difficult. However, robust policy assessment should account for implementation challenges, ex ante and ex post.
An MCDA approach provides a structured way to explicitly consider these objectives. Below are its detailed steps.1

Key Steps of a Policy Relevant MCDA Approach
Step 1: Define the problem. Identify the policy question’s scope and time horizon by bringing all stakeholders on board at the start.
Step 2: Identify policy objectives and specific metrics for assessment. Understand national priorities and stakeholder needs.
Step 3: Identify policy alternatives to evaluate. Consider a range of alternative policy options and the metrics for their success.
Step 4: Analyse the alternatives. Identify data gaps and provide a transparent analytical basis for discussions.
Step 5: Elicit stakeholder preferences and normalise quantitative and qualitative information. Integrate qualitative and quantitative information.
Step 6: Aggregate through weights and compare consequences. Capture the relative importance of policy objectives.
Step 7: Sensitivity analysis. Test the robustness of the inputs and the process.
Step 8: Choose the preferred policy alternative. Implement the preferred alternative and evaluate results to feed back into the policymaking process.

Step 1: Define the Problem

Step 1, to carefully define the problem, serves many purposes: it ensures that the most relevant policy question is asked, that efforts are appropriately directed, and that a greater range of answers can be considered. This first step should be undertaken with stakeholder input, and requires specifying the scope and time horizon of the decision question, both of which are central to articulating a clear decision problem.

The scope frames the larger policy problem: this includes identifying its jurisdiction, technological choices, and institutional arrangements. The impact of varying the question’s scope is illustrated by our two cases. In the buildings example, one alternative is for the problem to be posed at the national level and to compare the benefits from the full range of efficiency measures between the commercial and residential sectors. Or, the scope can be narrowed to examine the benefits in either the commercial or the residential sector. Similarly, the problem’s technological scope can be varied: different efficiency measures, such as an efficient building envelope versus efficient appliances, can be assessed; or, the focus can be on only one technology option that has a major impact. If the technological scope is limited to one efficiency measure, variability can be introduced by broadening the institutional focus through different policy instruments, all of which promote the same technology.

Since the purpose of this paper is to bring forth the different applications of an MCDA approach, we structure the questions with differing scope for the two case studies. In the buildings sector we focus on residential buildings, as 85%–90% of the new construction expected by 2030 will be for residential purposes, resulting in a sharp rise in the associated energy demand (GBPN 2014). Further, we consider one technology—an energy efficient building envelope—since 70% of savings can be achieved by the envelope itself (GBPN 2014). The variation in the policy options is obtained from alternative institutional choices. The final policy problem is defined as: which policy options provide maximum benefits from India’s residential real estate transformation, through new building envelope efficiency?

In the cooking case, by contrast, we ask: which policy options promote access to various modern cooking fuels for rural households, in the context of achieving developmental goals in a climate-constrained world? Here, the problem’s scope incorporates a broader set of technologies by highlighting the choice between alternative modern cooking fuels, all with similar institutional choices. It also signals attention to the sustainable development context: issues such as drudgery, household air pollution and their adverse impacts on health and well-being form the context within which the analysis is undertaken. The sector is also relevant from a climate point of view, as the use of modern fuels such as liquefied petroleum gas (LPG) and electricity leads to increased GHG emissions, while traditional cook stoves lead to high levels of black carbon emissions. The focus is on rural households, where the energy access problem is acute and for which various central and state modern fuel programmes exist.

The second necessary parameter of problem structuring is defining the time horizon. Policy impacts can be evaluated over the short, medium or long term, and either measured in a particular target year or aggregated over years. A shorter time frame allows for more accurate cost calculations, without assumptions of cost trajectories over the long term. On the flip side, a longer time horizon can widen possible policy choices, as there is time for institutional capacity and technology choices to expand. Also, while measurement of impacts in a particular year provides straightforward comparisons with the targets set for that year, cumulative impacts can provide insight into the path taken to get there. We illustrate the use of different time scales as well as point and cumulative impacts through our case studies. The buildings case examines policy impacts in 2022 and the cooking case, by contrast, looks at cumulative impacts of policies over the period 2013–32.
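One way to record the outputs of this step is as a small structured record per case. The sketch below is illustrative only: the field names are our own shorthand, while the question text, scopes and horizons are taken from the two cases just described.

# A hypothetical record of Step 1's outputs (scope and time horizon),
# instantiated for the paper's two illustrative cases.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProblemDefinition:
    question: str          # the policy question being asked
    jurisdiction: str      # geographic/sectoral scope
    technology_scope: str  # technologies under consideration
    horizon: str           # target year or period of evaluation
    cumulative: bool       # point-in-time vs aggregated impacts

buildings = ProblemDefinition(
    question=("Which policy options provide maximum benefits from India's "
              "residential real estate transformation, through new building "
              "envelope efficiency?"),
    jurisdiction="national, residential buildings",
    technology_scope="energy efficient building envelope",
    horizon="2022",
    cumulative=False,  # impacts measured in the target year
)

cooking = ProblemDefinition(
    question=("Which policy options promote access to modern cooking fuels "
              "for rural households, in the context of achieving "
              "developmental goals in a climate-constrained world?"),
    jurisdiction="rural households",
    technology_scope="alternative modern cooking fuels",
    horizon="2013-32",
    cumulative=True,   # impacts aggregated over the period
)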
Step 2: Select Specific Policy Objectives and Metrics for Assessment

After defining the policy problem, the next step is to flesh out the policy objectives. The overarching “branch”-level objectives have been discussed earlier: economic, social, environmental and institutional. Step 2 requires identifying the next level of specific policy objectives, or “leaves,” within these branch-level objectives.

The full objectives hierarchy is identified in three consecutive sub-steps, which result in the outcomes illustrated in Figures 1 and 2 for the two cases. While our case studies use the two branch- and leaf-level tiers, in principle the objectives can be structured into a hierarchy with as many levels of detail as required. An alternative option is to structure a “flat” hierarchy where all the objectives are considered at the same level, although this is not explored further here.
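To make the branch-and-leaf structure concrete, the sketch below encodes a two-tier hierarchy in code. It is our illustration rather than the paper's implementation: the branch names are the four objectives above, but the example leaves, units and all weights are invented for demonstration.

# An illustrative two-tier objectives hierarchy: branches hold leaves,
# and a leaf's effective weight is its branch weight times its
# within-branch weight. Leaf names and all weights are placeholders.
from dataclasses import dataclass, field

@dataclass
class Leaf:
    name: str
    unit: str      # e.g. "trillion Rs", or "qualitative"
    weight: float  # relative weight within its branch

@dataclass
class Branch:
    name: str
    weight: float  # relative weight among branches
    leaves: list = field(default_factory=list)

hierarchy = [
    Branch("economic", 0.25, [Leaf("import bill due to cooking fuels", "trillion Rs", 0.5),
                              Leaf("subsidy burden", "trillion Rs", 0.5)]),
    Branch("social", 0.25, [Leaf("capital expenditure per household", "thousand Rs/HH", 0.5),
                            Leaf("time spent collecting firewood", "hrs/week/HH", 0.5)]),
    Branch("environmental", 0.25, [Leaf("cumulative GHG emissions", "MT CO2-e", 1.0)]),
    Branch("institutional", 0.25, [Leaf("ex ante resistance", "qualitative", 1.0)]),
]

for branch in hierarchy:
    for leaf in branch.leaves:
        print(f"{branch.name}/{leaf.name}: effective weight {branch.weight * leaf.weight:.3f}")

A flat hierarchy, by contrast, would simply be the list of leaves with directly elicited weights; the tiers matter mainly for how weights are elicited and rolled up.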
Figure 1: Multiple Objectives and Policy Alternatives for the Cooking Sector Study

Policy question: Which policy options promote access to modern cooking fuels for rural households, in the context of achieving developmental goals in a climate-constrained world? Time scale: 2013–2032.

Criteria:
• Cumulative import bill due to cooking fuels (trillion Rs)
• Cumulative subsidy burden for promoting clean cooking (trillion Rs)
• Proxy scale: weighted sum of households (HHs) using traditional options and improved cook stoves (million HHs)
• Cumulative CO2-equivalent GHG emissions from cooking (MT CO2-e emissions)
• Average cumulative capital expenditure incurred by households (thousand Rs/HH)
• Average cumulative recurring expenditure incurred by households (thousand Rs/HH)
• Time spent collecting firewood (Hrs/week/HH)
• Ex ante resistance to proposed policy instruments (Qualitative)
• Ex post transactional costs, leakages, lack of institutional/entrepreneurial capacity (Qualitative)

Policy alternatives: The policy options consist of using different instruments such as subsidies, incentives, market creation, and availability of finance, to promote specific technology choices. They are: LPG; compressed biogas; electricity; improved cook stoves. Business-as-usual or the reference case is included.

Figure 2: Multiple Objectives and Policy Alternatives for the Buildings Sector Study

Policy question: Which policy options provide maximum benefits from India’s residential real estate transformation, through new building envelope efficiency? Time scale: in 2022.

Criteria:
• Annual electricity saved from efficient construction (TWh)
• Annual diesel savings on the import bill from reduced generator use (kilotons)
• Annual direct jobs created from efficiency installations (Qualitative)
• Annual particulate emissions saved from reduced diesel generator use (metric tons)
• Annual CO2-e equivalent GHG emissions saved from buildings (MT CO2-e emissions)
• Incremental cost to end user to buy an efficient home, spread over full building population (Rs)
• Annual recurring expenditure to end user, over the full building population (Rs)
• Ex ante resistance to proposed policy instruments (Qualitative)
• Ex post transactional costs, lack of institutional frameworks and capacity (Qualitative)

Policy alternatives: The policy options use different institutional instruments to scale residential buildings envelope efficiency. They are: mandatory building codes; financial incentives to end users to buy efficient homes; financial and administrative incentives to real estate developers; and building ratings. Business-as-usual or the reference case is included.
objectives should be informed by political choices (such as policy or legal documents) and ideally be reinforced through stakeholder input from policy experts to ensure that they capture the current multiple and simultaneous demands of development. If needed, it is also possible to refine the branch-level objectives. For example, in some contexts it might be useful to explicitly include energy security as an objective. If such modification is made, however, it is important that the branch-level objectives transcend particular sectoral interests. That is, energy security could feasibly be included, but the decision should not be driven by the resultant implications for any one specific sector or policy.

The second sub-step requires identifying the next level of detail of the objectives, or the "leaves" within each branch. For example, in the buildings case, the branch-level objective of minimising social costs involves leaf-level objectives of affordability (based on upfront costs, which tend to be high) and recurrent expenditure from the use of energy efficiency measures (which tends to be low). Splitting affordability into these two subcategories captures two related, but distinct, elements of affordability.

The leaf-level objectives also need to be relevant to the particular policy problem being considered. Returning to the buildings case, we considered, but ultimately rejected, including a leaf-level category for indoor occupant comfort, even though it is valued socially. This is because the framing of the policy problem (in Step 1) focuses on a single technology (the buildings envelope), as a result of which all policy options, in spite of their different institutional choices, will result in the same level of occupant comfort. If the question was structured to allow for multiple technologies, then different policies could result in varying occupant comfort levels, which would have made it an important leaf-level objective.

The third sub-step is to convert the leaf-level objectives to specific criteria to assess the policy question. The criteria can be either quantitative or qualitative, as decided during stakeholder consultations. For example, the environmental branch for both cases includes a leaf-level objective of minimising GHG emissions, measured by estimating the CO2 equivalent emissions from the respective sectors. The institutional branch, on the other hand, has leaf-level objectives of political economy and transaction costs, both of which are qualitatively determined. Political economy captures the possibility of ex ante challenges to implementing a policy, in the form of interests who mobilise for or against a policy. Transaction costs capture elements salient to policy implementation ex post, which include capacity and skills required, scope for rent seeking, and the availability of specialised institutions.

There is a further important consideration when selecting objectives. In order to subsequently assess trade-offs across them, MCDA approaches require that the leaf-level objectives are "preferentially independent."2 Put simply, this means that a judgment about how a policy option does in one leaf can be made without a priori knowledge about how the same policy fares in any other leaf (Basson 2004). For example, in the cooking case, the two leaf-objectives of minimising the subsidy burden and minimising household fuel cost are preferentially independent because evaluating a policy against one leaf-objective requires no knowledge of how the same policy does in the other leaf.
Step 3: Select Policy Options to Evaluate

The policy options to evaluate are selected after determining the objectives hierarchy (Step 2). This sequence allows for a greater range of options to be considered, with input from relevant stakeholders who are asked to identify wide-ranging policy options.

Since the policy problem for the cooking case is framed around alternative fuels, each policy option represents the promotion of a particular fuel or technology choice, through sets of policy instruments. For each option, it is assumed that best efforts will be made to increase adoption of clean cooking fuels by overcoming technological, economic and capacity challenges and through creation of new markets if needed.3 The policy options considered for the cooking case are:
• To promote LPG as a cooking fuel by increasing rural LPG availability and affordability;
• To promote biogas by enabling an efficient feedstock market, encouraging entrepreneurial activity in biogas bottling operations and improving affordability through subsidies;
• To promote electricity for induction-based cooking through improved rural electricity access, combined with quality day and evening supply, and affordable tariffs; and
• To promote improved cook stove adoption through availability of clean burning, efficient and user-friendly cook stoves, and a diverse sustainable feedstock (fuel pellet and wood chip) market.

In the buildings case, since the question's scope requires all policy alternatives to promote a single technology, each policy considered has a different institutional focus. These are:
• To develop and adopt a mandatory energy code for new residential buildings;
• To provide financial incentives to consumers who buy efficient homes, to absorb the higher upfront costs;
• To provide administrative and financial incentives to real estate developers of efficient homes, such as lower interest rate loans, increased floor–area ratio, and expedited processing; and
• To promote a voluntary rating system for efficient homes to motivate end users and developers to put a premium on energy efficiency.

The business-as-usual or reference case is considered in both case studies to benchmark against the current scenario.

Both sets of policy options were chosen through an iterative process alongside defining the decision question in Step 1. In practice, it is not uncommon to return to the first step and refine the decision problem in light of the policy options to evaluate. For example, for cooking the available policy options were spread across technologies and policy instruments. However, given the limited understanding of the trade-offs among the different technology choices, it was decided to focus on policy options that vary only by technology. The building energy policy context, on the other hand, is constrained by
serious data gaps, making it evident at the outset that results would be more rigorous if the question in Step 1 assessed a single technology choice with institutional variability among the policy options. Iterations of this nature between clarifying the decision problem and the policy options allow decision makers to be guided by what is practically useful, as opposed to being bound to a theoretical methodology.

Step 4: Analyse the Policy Options

The next step is to assess each policy option along each objective. Depending on the objective, policies can either be assessed quantitatively (e g, quantum of CO2-e reductions) or qualitatively (e g, institutional objectives). This equal emphasis on quantitative and qualitative metrics is important as policy decisions often have informal implications which cannot be immediately reduced to a number. Step 4 and the subsequent steps on normalising and weighting are the most technical, and below we only allude to the method to provide some intuitive understanding of the approach.

A visual assessment of the different policy options, per objective, is possible by creating a matrix with the policy options as rows and the leaf-level objectives as columns. Each cell of the matrix represents a policy's score for a particular leaf. We use the cooking case to illustrate the methodology for calculating the quantitative and qualitative cells within the matrix.

In some cases, a quantitative criterion is simply assessed using available data and literature. For example, GHG emissions from cooking for each fuel are derived from a combination of the annual average useful energy requirement for cooking per household, fuel calorific value, stove efficiency, and the fuel emissions factor. In other cases, a leaf-level objective that is difficult to measure could be quantified using a proxy. For example, health impacts of household air pollution are difficult to measure as they depend on often unknown factors such as the habitation type or the provisions for ventilation. Hence, we use a proxy scale by considering the number of households exposed to pollution, which is calculated as a weighted sum of the number of households using traditional and improved cook stoves, with higher weight for households with traditional stoves.4
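As a rough illustration of this proxy, the weighted sum can be written out as below. The paper states only that traditional stoves receive a higher weight (and modern fuels zero weight, see note 4); the specific weights and household counts in this sketch are invented placeholders.

# Sketch of the household air pollution proxy: a weighted sum of households
# using traditional and improved cook stoves. The weights are placeholders;
# the study specifies only that traditional stoves weigh more and that
# modern fuels contribute nothing.
def pollution_proxy(hh_traditional, hh_improved,
                    w_traditional=1.0, w_improved=0.5):
    return w_traditional * hh_traditional + w_improved * hh_improved

# Illustrative figures only: 120 million traditional, 30 million improved.
print(pollution_proxy(120e6, 30e6))  # prints 135000000.0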
Qualitative criteria, which entail value judgments and cannot be easily calculated, require a constructed scale that allows systematic scoring based on judgment. We construct a scale in the cooking case for the institutional leaf-level objectives of political economy (ex ante resistance) and transactional costs (ex post implementation costs). Specifically, a constructed scale of three levels (low, medium and high) is used. Scoring on this scale requires thinking through, assessing and providing rationale for the scores. For example, promoting LPG requires improving rural LPG adoption through subsidies, increased rural dealerships and improved cylinder availability. We argue there would be minimal ex ante resistance to such a policy because a large number of voters would benefit, and hence we assign a "low" score for political economy, implying low resistance. On the other hand, given smaller habitations and lower rural population density, costs for transportation, operating dealerships and bottling plants would be high (World LP Gas Association 2005). A "high" score (implying hard to implement) is thus assigned for transactional costs. Appendix 1 shows the analysis matrix for the social, economic and environmental branches in the cooking case (Table A1, p 58), and the institutional branch in the buildings case (Table A2, p 58).
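The analysis matrix can be pictured as a small table of raw scores in mixed units, with qualitative cells carrying constructed-scale judgments. The Python sketch below is a hypothetical fragment with invented values, shown only to fix ideas; the study's actual matrices are in Appendix 1.

# Sketch of a Step 4 analysis matrix: policy options as rows, leaf-level
# criteria as columns. Quantitative cells are in their own units; qualitative
# cells use the constructed low/medium/high scale. All values are invented.
analysis_matrix = {
    "LPG": {
        "ghg_mt_co2e": 520.0,            # quantitative, MT CO2-e
        "capex_thousand_rs_per_hh": 6.0, # quantitative, thousand Rs/HH
        "ex_ante_resistance": "low",     # constructed scale
        "transaction_costs": "high",
    },
    "improved cook stoves": {
        "ghg_mt_co2e": 560.0,
        "capex_thousand_rs_per_hh": 1.5,
        "ex_ante_resistance": "low",
        "transaction_costs": "medium",
    },
    "reference": {
        "ghg_mt_co2e": 610.0,
        "capex_thousand_rs_per_hh": 0.5,
        "ex_ante_resistance": "low",
        "transaction_costs": "low",
    },
}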
Step 5: Normalising Quantitative and Qualitative Information

The matrix created in Step 4 makes explicit the quantitative and qualitative scores of different policy options across leaf-level objectives, in their respective units. Any assessment of trade-offs and synergies, however, requires the scores to be brought to a common scale, or normalised. Moreover, the common scale cannot be assumed as linear but rather must reflect the preferences of stakeholders. The next step of the MCDA approach discussed in this paper uses "value function" analysis to achieve both these goals. Other MCDA approaches can use different methodologies for this step.

The different quantitative and qualitative policy scores at the leaf-level are mapped on to a common 0-100 scale by creating value functions. Technical details of arriving at a value function are given in Appendix 2 (pp 58–59), where we illustrate the process with an example from the cooking case. The process of producing value functions is designed to account for differing stakeholder preferences regarding the additional benefits from the policy at different levels.5 This differing value to stakeholders, of marginal benefits at the lower end of the scale vs the higher end of the scale, determines whether the scale is linear or not—it is linear if the marginal benefits at all levels are the same, and non-linear if they are not.

At the end of this step, all scores (e g, the qualitative "high/medium/low" scores and the quantitative scores in their respective units) are mapped, and translated, to values between 0 and 100. These values make leaf-level objectives comparable and possible to aggregate.
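A value function of this kind can be sketched as a piecewise-linear map from a criterion's raw range onto the common 0-100 scale, with qualitative scores handled by a simple lookup. The sketch below is ours, not the study's: the breakpoints are invented, whereas in the approach they would emerge from stakeholder consultation (the study's worked example is in Appendix 2). The Rs 100-500 savings range echoes the example in note 5.

# Sketch of Step 5 normalisation: a piecewise-linear value function mapping
# a raw score onto 0-100. Unequal segment slopes encode non-linear
# stakeholder preferences. Breakpoints below are placeholders.
def value_function(x, points):
    # points: list of (raw_score, value) breakpoints, sorted by raw_score.
    xs = [p[0] for p in points]
    vs = [p[1] for p in points]
    if x <= xs[0]:
        return vs[0]
    if x >= xs[-1]:
        return vs[-1]
    for (x0, v0), (x1, v1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return v0 + (v1 - v0) * (x - x0) / (x1 - x0)

# A concave scale: the first Rs 200 of savings is valued more than the last.
savings_vf = [(100, 0), (300, 70), (500, 100)]
print(value_function(200, savings_vf))  # prints 35.0
print(value_function(400, savings_vf))  # prints 85.0

# Constructed qualitative scales map through a lookup (here "low" resistance
# is best and therefore scores 100).
QUALITATIVE_VALUES = {"low": 100, "medium": 50, "high": 0}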
Working through the value function exercise facilitates greater understanding about the decision problem, its challenges, and mutual learning about the preferences of those involved. It rests heavily on consultations, and often brings forth the competing perceptions of relevant stakeholders. Ways of dealing with differing stakeholder perceptions are discussed at the end of this section.

Step 6: Aggregation through Weights

Value functions provide a normalised score for each policy option across all the leaf-level objectives. The next step of decision-making is to aggregate these value scores to capture how a policy does at the branch level. In order to aggregate, however, the relative importance or weight of each leaf-level objective needs to be deliberatively determined. In other words, one cannot assume, for instance, that the gains to stakeholders from minimising household or local air pollution are valued equivalently to the gains from minimising global GHG emissions. Answering difficult questions about which objectives stakeholders value most is central to weighting. For example, in the cooking case, is minimising upfront expenditure more valued than
minimising a recurring expenditure, and how do these compare with minimising drudgery? These trade-offs are often made implicitly by policymakers and may not accurately reflect stakeholder perceptions. As in previous steps, weighting requires facilitation across stakeholders as different groups could rank objectives differently and/or be willing to trade them off differently.
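Mechanically, the aggregation itself is straightforward once the weights are deliberated: a branch score is the weighted sum of the normalised leaf values. A minimal Python sketch, with invented weights and values:

# Sketch of Step 6 aggregation (weights and values invented): a branch-level
# score is the weighted sum of normalised (0-100) leaf values, with the leaf
# weights within a branch summing to one.
def branch_score(leaf_values, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[leaf] * leaf_values[leaf] for leaf in weights)

social_weights = {"upfront": 0.4, "recurring": 0.4, "drudgery": 0.2}
lpg_social_values = {"upfront": 60, "recurring": 40, "drudgery": 90}
print(branch_score(lpg_social_values, social_weights))  # prints 58.0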
One technique to determine the relative importance of leaf-level objectives is trade-off weighting (Basson 2004). Its first

inputs need to be interrogated and the process repeated. For example, changing the trade-offs between recurring expenses, upfront expenses and drudgery time within a reasonable range does not change the final order of the cooking policy options on the social branch, suggesting that the ranking is fairly robust.
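The robustness check described here amounts to re-ranking the options under perturbed weights and seeing whether the ordering survives. A minimal sketch, with invented scores and weight ranges:

# Sketch of a sensitivity check (all numbers invented): vary leaf weights
# within a reasonable range and collect the resulting rankings; a single
# surviving ordering suggests the result is robust to the weighting.
import itertools

def rank(options, weights):
    score = lambda vals: sum(weights[k] * vals[k] for k in weights)
    return tuple(sorted(options, key=lambda name: score(options[name]),
                        reverse=True))

options = {
    "LPG":       {"upfront": 60, "recurring": 40, "drudgery": 90},
    "biogas":    {"upfront": 50, "recurring": 70, "drudgery": 80},
    "reference": {"upfront": 90, "recurring": 30, "drudgery": 10},
}

rankings = set()
for w_up, w_rec in itertools.product([0.3, 0.4, 0.5], [0.2, 0.3, 0.4]):
    weights = {"upfront": w_up, "recurring": w_rec,
               "drudgery": round(1 - w_up - w_rec, 10)}
    rankings.add(rank(options, weights))
print(rankings)  # one element here would indicate a robust ranking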
Step 8: Choosing the Preferred Policy Option

The above steps lead to an evaluation of each policy option across each objective, and make explicit the complementarities and trade-offs between objectives.

The preliminary results for the two case studies are shown in Figures 3 and 4. For the cooking case (Figure 3), all policy options do well in comparison with the reference case on the social branch-level objective. This is primarily due to the increased subsidy to clean burning fuels or technologies resulting in reduced costs and drudgery. The options promoting modern cooking fuels do better environmentally as they reduce household air pollution and marginally lower GHG emissions.6 For institutional and economic objectives, however, the reference case does better since it is a path of least institutional resistance and requires minimal additional subsidies, making the trade-offs primarily with respect to both these branches. In essence, the analysis concludes that policies pushing modern fuels achieve better social and environmental outcomes but require institutional and financial commitment.

Figure 3: Illustrative MCDA Results for the Cooking Sector Study
[Radar chart of branch-level scores on a 0–100 scale across the social, environmental, institutional and economic branches]

The buildings case results are presented in Figure 4. The policy option targeting end-users scores well on the economic, social and environmental branch objectives, but with significant institutional challenges. As end user incentives are targeted to home owners who would invest in horizontal construction (as opposed to the real estate developer incentives, which are more geared towards high-rise construction), the results suggest that in the short term, horizontal construction offers more opportunities from energy efficiency than high-rise buildings. The trade-offs which emerge are mainly institutional and often social. For instance, while the building codes policy scores highly on most fronts, unless the institutional issues of ineffective code compliance structures and inadequate technical capacity are addressed separately, the option is not feasible. For the social objective, higher upfront costs make efficiency adoption
difficult, except when end user financial incentives are provided. Ratings and the reference case perform poorly on most branches, but with least institutional resistance as they require little change from the status quo.

Dealing with Differing Preferences amongst Stakeholders

The approach presented in this paper assumes a relatively homogeneous stakeholder group that will be able, albeit with some negotiation, to reach consensus on all aspects of the decision cycle: from determining the objectives, to the shape of value functions and weighting for scoring policy options. If, however, no clear winners or losers emerge from the policy options because of conflicting stakeholder views, the approach can be used to facilitate further deliberation on the trade-offs and ways to improve the policy options. For instance, a potential option can be identified, ranked second or third by each group, that will be acceptable to everyone. Where there is potential, compensation can also be given to parties to overcome a blockage. This ability to interrogate the transparent decision process is one of the prime advantages of MCDA techniques.

4 Conclusions

Development policymaking, which incorporates energy and climate considerations, is a complex undertaking. It involves multiple objectives and various actors with differing agendas. The MCDA approach proposed in this paper offers a potentially useful way to work within this complexity, requiring decision-makers to ask policy relevant questions and identify complementarities and trade-offs. At the same time, MCDA approaches can be perceived as complicated and are not trivial to implement. Our intent is to put forward a multi-criteria approach less as a rigid decision tool, and more as a framework to facilitate structured discussion.

This intent is motivated by the need for rigorous judgment embedded within a process of transparent discussion to overcome the pathologies in our current decision-making processes. For instance, policy decisions routinely involve implicit trade-offs as a default, which are not articulated either in the decision process or the outcome. The recent target of increasing domestic coal production from 600 MT to 1,000 MT by 2019 is a case in point. While accelerating growth rates of domestic production can increase energy security and perhaps provide cheaper electricity in the short to medium term, this is only one aspect of the necessary policy context. The local environmental consequences of coal use on air pollution and water stress should be equally presented as outcomes of the policy decision. Another example is India's stated co-benefits basis for climate policy, which is conceptually promising but not yet backed by an explicit methodology. The absence of the latter opens the country up to questions of credibility and locks us into long-term energy decisions that are not informed by comprehensive analysis.

MCDA approaches do not provide an easy answer to these complex issues. However, they offer a way to focus on a good process as the starting point for a good answer, and refine understanding over time starting from our current benchmark. If MCDA approaches are to be taken forward in policymaking, they raise a few considerations. The first is the need to involve stakeholders from the start with a commitment to deliberation. This can require working against current policymaking processes which may not foster engagement across groups with differing agendas. Second, executing a MCDA approach requires time, capacity and resources. Often it is data intensive, requiring extensive input from decision analysts and stakeholders. As a starting point, the approach could be led by policymakers, think tanks, universities or civil society groups. A ratcheting strategy can be used to introduce MCDA principles into policymaking, such as starting with an identification of all stakeholder groups and explicitly using the information gathered in the discussions for decision-making. Subsequently, more structure can be introduced to the process by moving towards explicit identification of objectives, then gradually towards value function and weighting exercises. An identification of enabling conditions and supporting tools (for example, the IESS) will also be needed to deliver credible results. A MCDA expert can also be brought into the process to assist with technicalities.

Irrespective of the details of how the approach is operationalised, MCDA fosters more transparent policymaking about underlying assumptions, sensitivities, and trails of argument that lead to a particular result. This emphasis on communication and audit trails regarding decisions can benefit our status quo and is relevant across timescales. In the immediate climate context, it would strengthen coherence between India's domestic and international position on climate change, which rests on the principle of not compromising development objectives. Further, it can be employed to distinguish between additional climate actions that India could undertake with external aid, which fall outside the scope of co-benefits. In the longer term, it can be used for other opportune planning purposes and gradually be introduced into other spheres of policymaking, such as health and education, amongst others.

Ultimately, successful implementation of the approach will likely generate evidence to build capacity within and outside the government to have a more open, considered, and involved approach to policymaking. Such a robust policy-planning framework can allow for India's energy and climate actions to be compatible with its broader social, economic and environmental goals.
Notes
1 Note that these steps need not be linear, and there could be iteration between some steps.
2 Preferential independence is not the same as mathematical independence.
3 Promotion of one particular fuel does not imply negative growth in the adoption of the other clean cooking options.
4 As modern fuels do not lead to household air pollution, they have zero weight in the proxy scale calculation, and hence do not contribute to the final sum. The greater the proxy score, the greater the impact of household air pollution.
5 That is, if the range of possible savings is Rs 100 to Rs 500, stakeholders need to determine whether increasing savings of Rs 200 from Rs 100 to Rs 300 is more, or less, or as valuable as increasing savings of Rs 200 from Rs 300 to Rs 500.
6 GHG emissions are lower in these cases due to reduced black carbon emissions.
7 Subsidies in the reference scenario taper off over the 20-year period starting from the current
Inshah Malik
This paper aims to interpret construction of the self and struggles of nationhood of some Muslim women in Kashmir's resistance movement against Indian control, focusing on the phase of the armed struggle in the 1980s. It argues that they have been continually refashioning their notions of self and notions of just and free political community, and have cast themselves in religious–cultural terms to suit the needs of the movement. Muslim women with an active role in the armed struggle underwent a process of self-constitution in the processes of engagement with their immediate social and political context. There are women with a Muslim identity, who may or may not be practising Muslims when they intervene in political action. Yet, they were invariably cast in religious–cultural terms, forgetting that they have challenged both the Indian state and its patriarchy of militarism, alongside that within their own community.

Inshah Malik ([email protected]) is a PhD candidate at the School of International Studies, Jawaharlal Nehru University, New Delhi.

This paper is drawn from my doctoral work focusing on the question of Muslim women's agency in political struggles, which involved extensive fieldwork in Indian-administered Kashmir. Meeting young and old pro-freedom activists and recording impressions of their journey of self-construction was a substantial part of this endeavour.1 The movement, pitted against Indian control in Kashmir, has seen many transformations, including a shift from violent to non-violent resistance. Three phases worth mentioning are the Plebiscite Front movement (1960s), the armed struggle (1980s), and the Quit Kashmir movement (2008 onwards). The tensions between the "West" and "Islam" over the female body and rights are replicated in a similar fashion between Kashmir and India. With the military occupation of Kashmir and unacknowledged Islamophobia in modern India as a result of its partition history, the question of Muslim women is often knitted in with narratives of "victimhood" and "lack" of agency.

This paper will focus on the armed struggle phase in the 1980s. It aims to interpret self-construction and struggles over nationhood by Muslim women (with regard to more general political questions as well as questions of gender justice).

The Armed Resistance

The 1980s were cataclysmic for a pre-industrial Muslim society, materially affluent due to the implementation of fair distribution of wealth under the "land to tillers" policy (Dar 2010). This material homogeneity of society forced to the centre stage the question of the undetermined political status of Kashmir, which served as a banner for people from different religious backgrounds (Bose 2005). The armed struggle in Kashmir is not groups of armed men functioning on the fringe, but is embedded in the social fabric, and refashioned and reproduced for the performance of collective political will. It is a phenomenon marked by mass-level production of resistance through social and political processes (Ganguly 1996). The Islamic Students' League (ISL) was formed to spearhead political mobilisation and cultural resistance. Students played a fundamental role in creating this mass political culture and also became a force for consistent resistance. In order to create a lasting impact in society, women's participation was considered fundamental.

Women's organisations with social/religious roles existed much before the mass resistance culture came into being, but, in this phase, women carved out stronger political roles for themselves. It is in this phase that the Dukhtaran-e-Millat
(DeM; Daughters of the Nation) turned from propagating religious education to employing religion for political change.2 In Anantnag, the Women's Welfare Organisation, working on social issues, decided to wind up in the wake of the armed struggle. Some of its members, together with members of the ISL, formed the Muslim Khawateen Markaz (MKM; Muslim Women's Centre).3 It functioned as an autonomous parallel women's organisation to the Jammu Kashmir Liberation Front in 1989. The Jammu Kashmir Mass Movement (JKMM), led by a woman, is an example of a political organisation where women were vested with decision-making powers and leadership roles.4 It is an interesting exception to the norm of male leadership, and it continues to exist.

Muslim Women in Kashmir's Armed Struggle

Muslim women in Kashmir are seen either as victims caught amidst a violent conflict, or somehow abandoning the cause of women's rights by embracing the cause of "separatism," or helping their "violent" men. The position of considering Kashmiri women as mere victims fails to acknowledge the existence of the power hierarchy between armed Indian militarism and Kashmiri men. For instance, Ayesha Ray (2009) notes that Muslim women's groups like the DeM and MKM were fighting to mobilise women in their support, but offering sanctuary to militants—the very same men who were the cause of their (the women's) suffering in society. Ray largely ignores the oppression of the Kashmiri community under continual militarisation, much like how Western white feminism ignored the question of "race" for black women. Some Indian feminist work on Kashmir often furthers the Indian occupation in a unique manner by not addressing the political question that is the one primarily affecting the Kashmiri Muslim women.

At the level of activism, the Centre for Policy Analysis (CPA), a non-governmental organisation, tried to express solidarity with Kashmiri women in their public meeting held on 12 January 2012 at their office in New Delhi, by again emphasising narratives of female victimhood without any comment on the political status of Kashmir. The MKM press statement issued on 30 October 2012, against leaflets published by the CPA, reflects on this invisible power hierarchy that existed between MKM and CPA members:

The issue of women's rights in Kashmir does not appear separate from the larger political crises of the state. Human Rights violations at the hands of Indian army are often a result of the resistance Kashmiri women have shown to the Indian rule. To call it 'women fighting for better governance' is an act of deceit by intellectual members of the organisation (Center for Policy Analysis). As much as nationalism is criticized for not allowing equal partnership for women still Indian women themselves have gone through the process of nation building and it gives them no right to deny it to the women of Kashmir. We resist all methods of backchannel policies that tend to depoliticize us from the central important question of Kashmir's right to self-determination.
—Anjum Zamaruda Habib (Kashmir Observer 2012)

There is a tension between Kashmiri women's self-expression and Indian women's intervention. Indian feminism often disregards Kashmiri women's allegiance to the azadi (freedom) movement, while Kashmiri women, in turn, abhor the nationalism in the Indian feminist project. This tension signifies the power hierarchy between the Indian state and the Kashmiri people. Though neither Kashmiri women's self-expression nor Indian feminism are monolithic, some patterns are nevertheless visible.

Against this background, it is imperative to investigate women's self-construction, as pointed out in the mosque-going women of Egypt (Mahmood 2005), and their notions of nationhood during the 1980s armed struggle in Kashmir. The era of the 1980s is a phase laden with questions of identity: self, womanhood, being Muslim, or Kashmiri. These interactions or conversations of the 1980s have opened up a public space for women and also a scope for the formation of their agency.

Self-construction

The complex social process of self-construction is fundamental in understanding both the individual and collective actions in a given context. The notions of alternative selfhood exercised by Kashmiri women are often contextual, experiential, and reflect active choices. The respondents of this study took several routes to engage with the world and shaped their identities through the actions they decided to take after certain political awareness.

Aasiya Andrabi, leader of the DeM, understands her self-formation through a moral, spiritual journey initiated as a result of running into a certain low in her life. In 1981, Andrabi finished her bachelor's degree in biochemistry from Kashmir University. With a hitch in her career plans, she began to contemplate better ways of being productive in society. First, Islamic identity, and then politics became her goals of self-construction. Not only was it an alternative self-construction, but also an alternative approach to all aspects of life.5

Many respondents termed themselves constant witnesses of oppression, or what they described as zolm (tyranny). Early in their lives, they were exposed to bitter realities of social/political discrimination and violence.

In the late 1980s, Jamia Masjid, in Nowhatta, was the site of daily protests, with sloganeering and stone pelting as the means of registering dissent. It had become a site for children to witness or participate; while boys were often beaten up, the girls cheered on their brothers for their defiance. The first chairperson of the MKM, Bhaktawar Rahim, relates the story of a traumatic event.6 After the elections in 1987 were "rigged," the protests had intensified across Kashmir. One day, when her brother was visiting a doctor regarding his eye infection, he suddenly went missing. Bhaktawar's family searched everywhere in vain. Finally, they were informed that he had ended up at a notorious interrogation centre named "Red 16." He was only 11 years old and had suffered brutal third degree torture, after which leading a normal life became impossible. The memory of such an incident proved to be a turning point for Bhaktawar.

Certain others grew up with stories and experiences handed down to them in the form of anecdotes, consequences of violent or oppressive pasts, from their elders, family, or friends. These
stories laid the foundation of their interactions with the world. Farida Dar, the leader of the JKMM, grew up knowing that her uncle had been banished from Kashmir unfairly in 1947, and he suffered just to catch a glimpse of his family again. Although Farida had never met him, she was pained by his troubled story.7

The central identity of "victimhood" has been instrumental in cementing the way participants of this study chose to constitute themselves. Yasmin Raja (MKM) was raised by her family in Jammu and attended a school in Khati Talao.8 A recurring incident at school led her to think deeply about the situation of Muslims in Jammu. Her teacher often experienced blackouts during classes. Yasmin learnt from her classmates that in the 1947 Jammu massacre the teacher's entire family was killed by mobs led by Hindu political groups. This experience of Yasmin's teacher and political lessons from her father, a former member of the Plebiscite Front, were instrumental in shaping her mind. For her, the ignored massacre of Muslims in Jammu was proof enough that India was not a place for Muslims to live. She decided to meet the pro-freedom leaders in Kashmir.

Religious motivation is often behind women's participation, even though they may disagree with the politics employed. Khadija Begum (Shobei Khawateen, ISL) is a devout Muslim and a believer in taking a stand against oppression, no matter who the oppressor is.9 She believes that helping others and keeping them out of trouble constitutes a large part of her Muslim identity. In the 1980s, she had young children and lived in Batmalun, an area rife with sentiments of azadi. Many militants were from that area, and when a fight ensued between them and the army, they would be on the run, often through the lanes and alleys around her house. She was aware that helping and opening doors to them meant being prepared for suffering, yet she did so, simply because she empathised with their mothers and felt their anguish.

Islamist Intervention

The women's organisational work has broadly employed two specific kinds of interventions in the armed struggle for self-determination: Islamist intervention and Kashmiri nationalist intervention. The women struggling in this phase for freedom make important social, political and gender interventions as central to the overall political questions.

Islamism (a set of ideologies that propagate Islam as a guide for social, political and personal life) has been a rising force in the region since the Iranian revolution in 1979. Among the active women's organisations, the DeM took a strong Islamist position; the demand for an Islamic state and uniting of the Muslim world found prominence in their rhetoric. The leader of the DeM, Aasiya Andrabi, believes that politics and religion cannot be separated from each other. She argues that Islam without politics is nothing; it is a mere set of beliefs. For her, Islam is an absolute way of life. It covers all aspects of life, whether political, personal, or social.

Andrabi made her political opinions public in 1983, but her engagement with religion was never bereft of political understanding of the Kashmir problem. The DeM's political work sets about questioning the meanings assigned to terms such as equality, modernity, secularism, or politics itself. Andrabi, while contemplating the idea of women's participation in politics, points out that the politics played over the Kashmir issue is steeped in hypocrisy, which is not permissible for men or women. It is not about acquiring power, and Sharia law does not permit women to become rulers because men are the protectors and guardians of women.

The DeM does not believe in secularism or in the Western sense of equality between the sexes. In Islam, women and men are different but equal in their rights. They have different roles and duties. In some situations women have a better position, in some other situations men are better placed, Andrabi argues. Furthermore, she points out, man's physical capacity makes him more suited to the responsibility of ruling, and men are equipped to deal with crises better. Women, on the other hand, are often short-tempered, according to Andrabi: "I know this from personal experience, if I could have my way, I would have divorced my husband hundred times a day."

The DeM's alternative conceptualisation of Western ideals of equality and women's right to political action are all rooted in Islam. Andrabi argues it to be her Islamic right and duty to plunge into political action and also enable other women to do so. She believes that whenever a referendum is granted to the Kashmiri people, women are not permitted to just obey their husbands, brothers, or fathers. Women must have their own opinion and should be involved enough to form this opinion. To substantiate her view, she uses a Quranic story: the story of an ancient Egyptian Pharaoh, who, according to the Quran, had declared himself God and asked his subjects to accept him as one. His wife Asiya became the only person in his kingdom to reject his boastfulness and arrogance.

Andrabi argues that in this story it is clear that women are to use their wisdom and exercise their own will in all matters. She further comments that Asiya, the wife of the Pharaoh, was murdered for her rebellion, and it has a lesson for all Muslim women, that power cannot force us into submission. She encourages women to take their independence seriously, to contemplate on their political and moral positions. She comments on her political position by connecting it to the story of Asiya in the Quran:

She [Asiya the wife of Pharaoh] rejected the god of her time and I, reject the god(s) of my time, whether Manmohan Singh or Omar Abdullah. I reject their treacherous oppressive rule in Kashmir. For this they want to jail me, torture me or whatever, I offer myself up. Nobody can tell me, you are a woman and woman has to be submissive. A woman's submission, like man, is ultimately to God.10

Political Awakening

As a student in Government Women's College, Anantnag, Andrabi witnessed a full-fledged raid conducted by the Indian Army to nab the student dissenters, right from their classrooms and hostels. The Indian Army operated with extreme high-handedness and many protestors were beaten up, including women students. After the raid was over, an impromptu protest was initiated by Andrabi herself. She travelled from
Anantnag to Lal Chowk and addressed a huge gathering while elaborating on the violence she witnessed. The scene became intense and slogans condemning the army's behaviour reverberated everywhere.

This incident provided Andrabi assurance to live a life based on the values of Islam and teach them to her fellow women. She realised Islam was the only road to liberation for women. She started a darsgah (religious teaching centre) from an old mud house in Sazgarpour. She educated herself in networks of the Jamaat-e-Islami, but disagreed with them over several political stances, one of them being their admiration for Sheikh Abdullah, despite his compromised position with India over the Kashmir issue. Soon, her single darsgah sprouted into a chain, spreading across Kashmir. These centres imparted Islamic education to women exclusively, reading Kashmir's political situation through the prism of religion. The movement grew as a platform for lower- and middle-class Kashmiri women.

Rising to Prominence

A cultural programme entitled "Jashn-e-Kashmir" (celebration of Kashmir) was organised by the Ministry of Culture in New Delhi; they flew in Kashmiri women to perform folk dances. This move created resentment among darsgah members as they began to contemplate the commodification of women's bodies. They published a pamphlet entitled "A Message to the Daughters of Fatima" as a criticism of this act of the government. The pamphlet raised several issues, such as "purpose of women's creation," "exploitation of women," "women's gender roles of pleasing opposite sex," and "Indian state's role." The pamphlet created an alternative goal of "women's piety" as real salvation for women. Fatima Zahra, the Prophet's daughter, became a pious role model as opposed to the depiction of women in Bollywood. Andrabi and her band of women received applause for their work and several media outlets referred to them as "Dukhtaran-e-Millat." This name was gladly accepted by the women because it came to represent their own sentiments towards Millat (nation of Islam). They resented the idea of territorial nationalism and believed it to be the West employing the sword of nationalism to divide the people with artificial boundaries, where none existed before. As a result of their views finding public audience, the DeM began its media cell. Andrabi's lectures were recorded and sold as cassettes and CDs in all markets in Kashmir. ISL members coordinated the sale and distribution.

With massive inroads into the cultural and social fabric of Kashmir, Islamism overall came to be recognised as a menace and the government imposed a blanket ban on the darsgahs run by the Jamaat-e-Islami. Though the Jamaat did not protest the ban, the DeM began a march from their office in Sazgarpour to Haval. The women members of the organisation broke open the "locked down" Jamaat-e-Islami darsgah in protest and then proceeded to the Main Square in Lal Chowk. This event appeared in the media with headlines such as "Dukhtaran-e-Millat (Daughters of Nation) came on the roads to defy the government ban on darsgah." It was a turning point for the DeM, since after this it came directly under the security scanner.

Kashmiri society stems from interactions between several philosophical movements of Hinduism, Buddhism and Islam, and draws on several humanistic ideas and philosophies. After the advent of Islam in the region in and around the 14th century through the saints of Central Asia, this interaction created a common shared culture of living for all. In modern times, however, Kashmiri society is fragmented, with elites among Muslims who distinguish themselves from the rest as more pious or educated and enforce exclusionary practices of endogamy. Revivalist Islamism came to reject this view as heretical, with roots in the Hindu caste system, and not in Islam.

Andrabi comes from a Sayed family, and early on, she came to reject the view that somehow Sayeds are better than the rest.11 The normative teachings of her family did not appeal to her much. Sayed women were not to involve themselves in public life; they were not expected to befriend people from the lower sections of society or people in particular professions, due to their nobility. Though most of her cousins and family lived a very modern life, she had to struggle to live the version of Islam she believed to be the right one. She befriended the daughter of a milkman and struggled to maintain their friendship, much to her parents' dismay. Islamism appealed to her, naturally, because, she argues, quoting a verse from the Quran, the Sharia does not accept any such hierarchy: "We created you into nations and tribes, so you recognise each other, not that you despise each other, verily, best among you are the pious" (49: 13). The members of the DeM, therefore, found a refuge from social hierarchies, of classes and familial discrimination, and began to propagate cross-sectional marriages among Muslims.

Addressing Patriarchy

The DeM's radical stance also reflects in its rejection of the cultural patriarchy of Kashmiri society. The women of the DeM realise that Kashmiri society is a male-dominated one, which is structured in a manner that controls women's liberty and restricts their choices. The cultural ideals of womanhood, whether through Habba Khatoon or Lal Ded, teach women to persevere and accept their fate and construct themselves only in relation to their husbands, fathers, and sons. But the DeM believes these ideals are formed to restrict women's self-actualisation. In Islam, women are not subject to men; women are subject to God, just like men. This is why, after marriage, women do not take their husband's name; woman is not a gift to be transacted between men, she is born in her own right, they argue.

The DeM's religious understanding was pitched against not just the sociocultural understanding of religion, but also the tradition of religion and its dominant male power. Imparting religious education to women was an extremely cumbersome process replete with obstacles and hindrances. One of the earliest challenges was to convince the traditional religious complex about women's rights. On many occasions Andrabi and her women were faced with situations where men or the clergy opposed their religiosity in different ways. Andrabi had
to suffer tremendous confrontation from male scholars even to the point of ridicule, but she consistently focused on the women's rights position in Islam. She defended women's participation in religious education through Sharia. She believed Islamic law has obliged women to be educated and to be active in educating others.

In 1983, Andrabi held an ijtimah (religious conference) at a mosque in Nawab Bazer near Hilalia Darsgah. At the mosque, the Imam Mufti Rashid had locked the mosque before Andrabi's arrival, even though he had agreed to provide a space to women for a religious conference. Having been informed about the Mufti's decision to not allow women into the mosque since it was against Islam, Andrabi decided to engage the Mufti in a religious duel and sought his explanation as per the Quran and Sunna. He appeared to be very stubborn and adamant. What transpired between the Mufti and Andrabi is of fundamental importance to the discussion of patriarchy.

Andrabi refuted the Mufti's arguments about the domestic space being the woman's place. She argued that Allah (God) has indeed put the responsibility of religious duty on both men and women equally and has obliged women to carry out the duty of educating others. She cited the example of the Prophet's wife Ayesha as the biggest religious scholar after the Prophet, and her contribution to Islam. She argued, through examples of Muslim women in Islamic history, that no such domestic restriction was actually a part of Islamic tradition. The Mufti had to admit that his religious interpretation was unsubstantiated by the tradition of Islam. In this manner, the DeM was able to open some mosque spaces to the women in Kashmir.

The DeM started an intensive campaign against the social evils of dowry, exploitation of women in workplaces, and economic exploitation in domestic spaces. Door-to-door campaigns were designed to introduce Sharia law and women's rights to the Kashmiri people. Dowry is an anti-Islamic act and detested in the sight of God, and women's earnings belong to them and should not be snatched from them. The places women work at and jobs women do must ensure their honour and respectability; professions that harm women's honour or jeopardise their social position are also against the Sharia. The DeM empathised with women through their journey towards piety, but recognised a paradox. Andrabi argues that in many cases of oppression against women reported to her organisation, she found the perpetrators were often other women. Therefore, in order to change women's situation, women have to often stand against women when it is the women who help preserve a male-dominated system. This is why Islam is an alternative, she argued.

Kashmiri Nationalist Interventions and Shobei Khawateen

Kashmiri nationalism has remained a well-known political undercurrent in Kashmir's right to self-determination movement. The ideas of alternate nationhood are both romantic and diverse, ranging from ideas of a pan-Islamic empire to spiritual Kashmiri nationalism. However, the 1980s saw a split in the shared Kashmiri cultural identity between Pandits and Muslims, towards more religion-based identity politics. Kashmiri nationalism became dominated by Muslim spiritualism, with an approach of coexistence. The former Kashmiri cultural identity known as "Kashmiriyat" became a tool of propaganda for the Indian state. Kashmiriyat is understood to be an ideal shared culture that was destroyed due to the advent of the armed struggle. This narrative has been criticised by the later movements of Islamism and Muslim spiritual nationalism for its silence over hierarchies and unequal power distribution between different ethnic groups.

The popularity of Islamist interventions endures because of their radical stances on questions of societal oppression and hierarchy, while Kashmiriyat fails to acknowledge internal hierarchies and pertinent questions, not sufficiently recognising social oppression as underpinning the political question. Furthermore, through the co-option of the National Conference into the Indian nationalist project in Kashmir, any references to this culture of Kashmiriyat are read as an appreciation of the Indian project.

The sense of Kashmiri nationalism, which later came to dominate the self-determination movement in the 1980s, was employed by two major women's organisations: Shobei Khawateen (ISL Women's Wing), and the MKM.

The ISL began a radical politicisation project in 1983, of which women were also a part. The ISL quickly became a powerful platform for the youngsters in Kashmir. Women were driven to it for the political issues it addressed, and because it allowed them a respite from otherwise traditionally gendered roles. The ISL's political work entailed "awakening" the Kashmiri people, emissaries going door to door and from village to village speaking against the tyranny and oppression of the time. Rahma Khan is one of the earliest women recruits of the ISL movement, who took the ISL's mission beyond its time into the armed struggle later.12 The ISL became very popular among students and entered political discussions. One day, as the ISL announced its idea of throwing open its space for women too, Rahma jumped at the opportunity when her brother mentioned it to her. Commenting on her decision to join the ISL, Rahma says:

Despite being an uneducated person, the attraction of the activity, the feeling to be able to contribute to the cause was incredible, so when my brother mentioned about women's wing, I jumped at the opportunity.

Initially, the recruitment of women was informal, engaging them as sisters and mothers to support the politicisation process. This space allowed the women to later craft a more predominant position for themselves in the movement. The ISL was a response to the hanging of Maqbool Bhat in 1983 in Tihar Jail, New Delhi, and many of these women assisted their brothers and friends to march forward, finding a lot more purpose in the movement and carving a space for their own selves. The ISL conducted Milad (Prophet's birthday) marches and innumerable women joined these marches. Yasin Malik, Ashfaq Majeed Wani, Hameed Sheikh and many others made active contributions as prodigies. In 1987, the decisive election year when Muslim United Front members were incarcerated, women of the ISL independently held their first convention at Hotel Taj. The ISL had politically backed the women's own
initiatives. At the convention, the Indian state's suppression of Kashmir's liberation movement was discussed. For such an endeavour, emanating from the need of working for collective political will, the women expected no reward.

The women spent their own money and contributed from their homes, investing in the organisation of the freedom movement. They went from one place to another, and held conventions and seminars deliberating on the need to rise up against the Indian rule in Kashmir. To work in a political struggle, for women, was actually a preliminary step towards experiencing freedom themselves. The ideals of revolutionary women, Leila Khaled from Palestine or Zainab Al-Ghazali from Egypt, were promising to the women of the ISL. They drew lessons from their lives and contemplated over the issues of freedom and solidarity among themselves. The movement gradually grew in all parts of Kashmir, and centres were established in villages and districts in the north and south of the Kashmir Valley. The women's wing remained open to the counsel of male leadership and accepted their ideas in strengthening plans, strategies and execution of the political agenda set by the women.

The ISL women did not employ literal Islam in their activism, except for the humanistic messages it had to offer for fighting the oppression. It was a space open to all: religious women, secular women, as well as non-Muslim women. The question of women's rights and protection from violence became central to the debates in the ISL. Members began to argue about the safety of women from all strata of society, including women from far-flung villages, who live under the shadow of army control. One of the important things that was anticipated was to rescue Kashmir from Indian occupation, which could generally improve living conditions for all, including women, and provide an environment to think about strengthening the moral foundations of women's rights.

Women raised funds on their own for sustaining and furthering the agenda of self-determination for Kashmir, with an added emphasis on women's rights. One of the early challenges was to reach out to women and increase their base, so the ISL decided to provide economic help and domestic counsel to its members. Female members could look up to the ISL for any help concerning their personal issues, children's education, monetary help and medical assistance.

For participating in the events, members were paid daily expenses and provided help with logistics to ensure nothing would keep them from participating, since their families would not support these expenses. Money was raised amongst members and interested donors in order for the movement to continue. The whole movement gave tremendous vigour, feeling of sacrifice, and confidence to believe they could change the fate of their entire people through constant struggle and devotion.

In the 1980s, when photocopiers were still a distant dream, women sat in groups for nights on end to write posters and make copies of the literature to pass it on to members and followers. Posters of Zainab Al-Ghazali or Leila Khaled were the highlights of such night groups. These collectives would meet at different places, mostly at homes. Hundreds of copies were made by women overnight, as Shareefe remembers: "Our hands would pain or feel numb from so much writing."13

Against Patriarchy

The women of the ISL were going against the tide and convention, and as a result had to face a plethora of problems of sexism and patriarchy. Women's contribution was viewed as unnecessary. People used religion to question women's public role. One of my respondents, Maryam Rehman,14 faced pressures first-hand to cover herself in a burqa. At some point, she was not even taken seriously because she did not adhere to the patriarchal perceptions of morality. Maryam fought such pressures and defended her stance on religion by pointing out that religious duty cannot be obligated on someone by force. Islam is clear about not using any force.

Women's participation was frowned upon and invited taunts that would label their contribution as some kind of competition with men. They were often reminded of their "inferiority" or gendered roles as a better means of living their life. Despite such negative experiences, the women of the ISL continued working, often collaboratively with other women's organisations, especially the DeM. In their personal lives too, women had to face huge problems.

Masarat Malik was a young member of the MKM, and her relatives often said that women should have nothing to do with politics.15 In her defence, she often used examples of Fatima Jinnah or Benazir Bhutto from Pakistan politics to make a case for her own participation in the movement. With time, her resolve became stronger to the point where she believes that no revolution can be successful without equal participation from all sections of the society. This is the sense of equality that led her to carve out a role for herself in the movement, and not because she wanted to continue believing in the socially enforced idea of subordination.

Conclusions

Self-constitution is derived from the processes of engagement with one's immediate social and political context. Women in the resistance have chosen the path of self-actualisation and morality for performance of agency in this phase of struggle. Women have also inherited the idea of struggle from their families and engaged with them throughout their childhood.

The term "Muslim woman" entails a complex set of meanings. Women are in a relationship with Islam in diverse ways in the resistance against occupation and militarism. In this process of resistance, they draw on Islam either as Islamists or as Islamic/Muslim feminists. But there are also politically active women bearing a Muslim community identity, who
nevertheless may not be practising Muslims when they inter- notions of self and notions of struggle for political freedom,
vene in political action. Though their sources of resistance are drawing on elements from within their culture. Their invest-
multiple, Muslim women in politics are almost always under- ment in cultural/religious meaning is also shaped by resist-
stood solely in religious–cultural terms, forgetting that, in all ance to dominant Islamophobic assault. Thus, the struggles
these cases, women have challenged both the Indian state/ for religious equality and gender equality have played a
occupation and the patriarchy of their own community. significant role in women’s participation in the larger strug-
Hence, like women elsewhere, they have been refashioning gle for freedom.
Notes
1 Interviews were conducted by the author during February–June 2013. The real identities of some of the respondents are protected upon their request.
2 The DeM is a women's exclusive Islamist organisation running as a parallel authority to the All Party Hurriyat Conference (APHC).
3 This article refers to the MKM prior to its split (unless specified), which only happened when its chairperson, Anjum Zamaruda Habib, was released from jail in 2007. The other faction of the MKM is headed by Yasmin Raja, and that faction is a member of the Hurriyat Conference (M) under the leadership of Mirwaiz Umar Farooq. See "About Us," J&K Muslim Khawateen Markaz.
4 The APHC faction (G) under the aegis of Syed Ali Shah Geelani has 11 executive members, and two of them are women representatives: Anjum Zamaruda Habib of the MKM(G), and JKMM patron Fareeda Behanji. The JKMM is the only organisation of both men and women headed by a woman.
5 Interview of Aasiya Andrabi with the author conducted in June 2013.
6 Interview conducted by the author at Bhaktawar's residence in Batmalun on 16 June 2013.
7 Interview conducted by the author at Farida's residence in Novgam on 21 June 2013.
8 Interview conducted by the author at the MKM's office in Raj Bagh, Srinagar on 26 June 2013.
9 Interview conducted by the author at Khadija's residence in Batmalun on 30 June 2013.
10 Story of Pharaoh and his wife Asiya mentioned in the Quran, Chapter 66. Interview of Aasiya Andrabi with the author conducted in June 2013.
11 Interview of Aasiya Andrabi with the author conducted in June 2013.
12 Rahma Khan was interviewed on 12 March 2013 at her residence in Batmalun.
13 Shareefe was interviewed on 15 April 2013 at her residence in Bemun.
14 Interview conducted at Maryam's residence in Naetpour on 17 April 2013.
15 Interview conducted with the author at Masarat's residence in Kanitar on 12 February 2013.

References
"About Us," J&K Muslim Khawateen Markaz, viewed on 12 October 2015, …wateenmarkaz.com/content/about-us.
Bose, S (2005): Kashmir: Roots of Conflict, Paths to Peace, Cambridge, MA: Harvard University Press.
Dar, Hamidullah (2010): "The Fall of the Feudals?" Kashmir Life, 13 May, viewed on 18 June 2010, …feudals-444/.
Ganguly, S (1996): "Explaining the Kashmir Insurgency: Political Mobilization and Institutional Decay," International Security, Vol 21, No 2, pp 76–107.
Kashmir Observer (2012): "Delhi Groups Don't Represent Our Women: MKM," 31 October, viewed on 10 November 2012, kashmirobserver.net/news/top-news/delhi-groups-dont-represent-our-women-mkm.
Mahmood, S (2005): Politics of Piety: The Islamic Revival and the Feminist Subject, Princeton, NJ: Princeton University Press.
Ray, A (2009): "Kashmiri Women and the Politics of Identity," paper presented at the SHUR Final Conference on Human Rights and Civil Society, Luiss University, Rome, Italy, 4–5 June.
China's One Belt One Road: An Indian Perspective

Geethanjali Nataraj, Richa Sekhani

Geethanjali Nataraj (geethanjali@orfonline.org) and Richa Sekhani (richa.sekhani@orfonline.org) are with the Observer Research Foundation, New Delhi.

The One Belt One Road initiative is the centrepiece of China's foreign policy and domestic economic strategy. It aims to rejuvenate ancient trade routes—Silk Routes—which will open up markets within and beyond the region. India has so far been suspicious of the strategic implications of this initiative. If India sheds its inhibitions and participates actively in its implementation, it stands to gain substantially in terms of trade.

The growth of China has been remarkable since it undertook reforms in 1978. It is currently the second largest economy in the world, having overtaken Japan. In order to sustain this development, the concept of the "Silk Road" was proposed. The renewed initiative of the belt and the road is proposed to cope with the profound changes and challenges that emerge in the course of development. The grandiose idea is rooted in history, with the new "Silk Road Economic Belt" and the 21st century Maritime Silk Road (MSR) which earlier linked the major civilisations in Asia, Europe and Africa for many years.

According to the official document titled "Vision and Actions on Jointly Building Silk Road Economic Belt and 21st Century Maritime Silk Road," the project aims to create an open, inclusive and balanced regional economic cooperation grouping with a common ideology that benefits all the countries involved in the initiative. The vision reflects the demand from relevant countries to open up infrastructure bottlenecks, and to improve connectivity with large markets in Asia and Europe, as well as the need for China's own development and security.

To achieve its objective, the new Silk Road Economic Belt will link China to Europe, cutting through mountainous regions in Central Asia, while the MSR links China's ports with the African coast and then pushes up through the Suez Canal into the Mediterranean Sea (Minnick 2015). The MSR will extend from Quanzhou province in China, heading south to the Malacca Strait; from Kuala Lumpur it will head to Kolkata, crossing the northern Indian Ocean to Nairobi, Kenya. Therefore it offers a tremendous opportunity to connect resource- and commodity-rich West and Central Asia to emerging South and South-east Asian countries along the road, which have a huge potential consumer market. To facilitate this development, China has set up a $40 billion Silk Road Fund.

Conception of One Belt One Road
Much of China's logic on the project is based on geopolitics and on the export of its huge infrastructure-building capacities, and therefore Chinese President Xi Jinping has made the programme a centrepiece of both his foreign policy and domestic economic strategy. The One Belt One Road (OBOR) was proposed by Xi Jinping during his visit to Indonesia in October 2013. The project is comprehensive and multifaceted, and seeks to establish China not only as an Asia-Pacific power, but as a global one.

For decades, China's opening-up policy has favoured the development of east China and the coastal areas, while west China and the inland areas, limited by their geographical location, resources and development foundation, have remained relatively less developed. The OBOR strategy contributes to the establishment of "one body two wings" of the new pattern of comprehensive opening up (Hucheng 2014). Through this initiative, China hopes to develop and modernise its landlocked and underdeveloped southern and western provinces, to enable them to access the markets of South-east Asia and West Asia, thus shaping China's regional periphery by exercising economic, cultural and political influence. Further, the Chinese leadership is facing difficulties in managing the transition to a "new normal" of slower and more sustainable economic growth because of the property market challenges, excess capacity in industry, debt burden and financial risks in the Chinese economy. In fact, excess capacity in Chinese factories is a serious problem. It is expected that by promoting investments in the course of implementation of OBOR projects, new opportunities and markets would be created for Chinese firms, which would have a multiplier impact on the production of goods and services domestically, thereby creating more jobs and higher incomes for the Chinese populace. Given its huge foreign exchange reserves, totalling about $4 trillion, China is in need of avenues to invest so as to earn a reasonable return on the same.

Among all the driving factors, the strategic rationale for initiating the OBOR is of utmost importance. The project clearly reflects the deepening of Chinese interests in strategically important regions to its west, for instance, the Persian Gulf. Many observers are of the view that this new initiative by China is a response to the much-hyped "pivot to Asia" by the United States (US) (Leverett, Leverett and Bingbing 2015). According to a few experts, the launch of this project, if handled proficiently, will act as a non-military catalyst that will accelerate the relative decline of US power over the Persian Gulf and will ensure a more balanced distribution of geopolitical influence in this region, which is seen to be strategically vital.

Financial integration is another important factor driving the implementation of OBOR. This project will help the internationalisation of the Yuan and encourage Chinese companies to issue Yuan bonds to fund projects for the OBOR initiative. As more and more trade gets channelised through the route, the demand for Chinese currency will increase. This will further help increase its weightage in the International Monetary Fund and special drawing rights. Also, most of the projects (in their initial phases at least) would be financed by Chinese financial institutions like the China Investment Corporation, the China Development Bank, etc, and China-dominated institutions like the Asian Infrastructure Investment Bank (AIIB) and the BRICS New Development Bank (Brazil, Russia, India, China, South Africa [BRICS]). Some observers believe that this would help China in the faster internationalisation of her currency, the Renminbi. Thus it is quite apparent that China has a grand vision in promoting OBOR; a vision which seeks a greater role (both political and economic) in the international community.

The Economics
Economic and trade cooperation is the foundation of the construction of OBOR. The Chinese officials use three keywords to define the new project: "connection," "inheritance" and "record," as the project is an important component of the "Chinese Dream," which extends both in space and time. With 58 countries involved, the OBOR accounts for an economic aggregation of $21 trillion, with a 29% share in global trade.1

Unlike the traditional Silk Road, which ensured the exchange of goods and technology, the New Silk Road also plans to link policies, infrastructure, trade, finance and people. Figure 1 presents the regions covered along the route and Table 1 presents percentage shares in world population and world gross domestic product (GDP).

[Figure 1: China's Proposed Silk Roads — a map tracing the Silk Road Economic Belt, the Maritime Silk Road and the China–Pakistan Economic Corridor from China through Central Asia, South Asia and the Indian Ocean to Europe and East Africa. Source: Xinhua, Council on Foreign Relations.]

Table 1: List of Regions along the OBOR and Their % Share in World Population and GDP*
Region                          Share in World Population (%)    Share in World GDP (%)
North and East Asia                        21                          20.1
Central Asia                               1.4                          0.7
West Asia                                  5                            6.26
South-east Asia                            9                            0.99
South Asia                                 23                           6.36
Eastern Europe                             1.4                          1.5
Southern and Western Europe                2.3                          8.37
East Africa                                2                            6.35
Total                                      65                          50.6
* Percentage shares in population and gross domestic product (GDP) are calculated by the authors; data on GDP and population were taken from the World Bank.

Implications for Those Involved
OBOR provides a platform to expand trade volumes between China and member countries. Presently, trade between China and other partner countries along the road boasts of a solid foundation. For most of the countries involved in the project, China is their largest trade partner, largest export market and the main source of investment. Trade and foreign direct investments (FDI) between China and other countries, over the past 10 years, have had an annual average growth of 19% and 46%, respectively. According to the Fidelity Worldwide Investment Report (2015: 3), China's trade value with the OBOR countries reached almost Renminbi 7 trillion in 2014, accounting for 25% of total foreign trade value, while the combined weightage of trade with the US, Eurozone and Japan was around 34%. Considering that China has maintained strong trade and economic cooperation with the countries involved in the project, this new initiative will boost economic cooperation, which will ensure regional integration. According to the Chinese President, the annual trade with the countries involved in the project would surpass $2.5 trillion in a decade.

Chinese agriculture and mining are the two key industries which are expected to benefit, as the route will encourage mineral exploration. The OBOR initiative will also help China to identify new growth drivers for imports and exports, and hence diversify China's trading profile, leading to trade creation. Through the OBOR, China is planning to encourage competitive industries to reap the advantage of high-end technology and increase overseas investment. This will further assist in the exploration of resources, which will improve China's supply of energy resources. Under this initiative, China plans to build both hard and soft infrastructure from the Indo–Pacific to Africa to improve relations on both economic and political fronts.

China, however, has to be careful while formulating its plan. The route runs in three directions—east, west and south—and hence needs to be clearly differentiated.

The benefits of the project are not limited to China alone; it gives tremendous opportunities to its members to boost and revitalise trade with other countries, and to seek out new markets and better accessibility. With connectivity improving, the OBOR-covered countries are more likely to gain a larger share among Chinese trading partners. Being the final destination of the New Silk Road, Europe is also an important region for China from an economic viewpoint. Through better connectivity, OBOR may promote reconciliation between the European Union and Russia. It will also provide Europe a platform to balance its transatlantic relationship. There will be a greater chance for Europe to cooperate with the markets of West Africa, the Indian Ocean, and Central Asia.

OBOR will also connect resource- and commodity-rich West and Central Asia to emerging South and South-east Asian countries which have a huge potential consumer market. South-east Asia, though rich in resources, suffers from an infrastructure deficit and low levels of industrial development. The project has the potential to address this gap and promote development in the region. For countries like Cambodia and Laos, the OBOR project could be a game changer.

Further, the large-scale investment needed to build OBOR might encourage Chinese steel makers to build more capacity in South-east Asia, West Asia and African countries by setting up integrated steel mills with nearby iron ore mines. Chinese cement industries will benefit in the long term, as the demand from the Association of Southeast Asian Nations (ASEAN) and Central Asian countries will increase because of infrastructure development. This could also encourage the overseas expansion of Chinese cement industries in these regions. Additionally, freight movement by road will increase through multimodal connectivity. Overall, the countries are expected to gain as OBOR will encourage demand, the burgeoning of new industries and the creation of trade.

India and OBOR
According to various experts from different countries, from the east coast of Africa to north-east Asia, India's role in the belt and road initiative (BRI) has been acknowledged and is seen to be essential. The Indian Ocean is vital for pursuing the economic and strategic interests of China. However, unlike most of the ASEAN and South Asian countries, who have welcomed the idea of the BRI, India has not. For India, the proposal to build the BRI is vague and does not give surety as to how serious Beijing is about opening up trade and cultural exchanges along the Himalayan barrier. The project has several implications for India.

Impact on Security: India, in order to balance China's north–south connectivity to South-east Asia, has been promoting east–west connectivity through Myanmar, Thailand and Vietnam. India is concerned about the Bangladesh–China–India–Myanmar (BCIM) economic corridor, which links Yunnan with the north-east of India.

Through the OBOR, China is countering the strategies of India and is promoting its greater presence in the north-eastern regions of India, a part of which China claims as its own territory. These, along with China's plan to supply eight Type 039A submarines to Pakistan, have made India anxious about China's policy of a "balanced" South Asia. With China's aid to Pakistan and the launch of the BRI, such submarines will be more than doubled. India, on the other hand, has only 13 ageing conventional submarines, which could result in an India–China arms race and geopolitical rivalry in the Indian Ocean region.

Further, the China–Pakistan Economic Corridor (CPEC), which is a part of the BRI, passes through Pakistan Occupied Kashmir (POK). According to the document released at the Boao Forum Conference in March 2015, the creation of maritime facilities with China's aid will carry an obligation for the host country to serve Chinese interests, including strategic interests (Rajan 2015). This is worrisome for India, as the Chinese will eventually increase their military presence in the Indian Ocean and reshape the economic arrangements in the region. Further, the railway route planned under the corridor is expected to link Pakistan and China via POK, which will be of strategic importance in the event of conflicts with India, and will facilitate China in supplying missiles and spare parts to Pakistan. This might have serious consequences for India's power to negotiate with China on the territory of Ladakh and further cause tension on the border.

Impact on Trade: The Silk Road Economic Corridor initiative is similar to that of the BCIM and the CPEC. India has a direct and indirect presence in all the three economic corridors. The BCIM gives India a greater presence in the region, as it is a formal member. India can pursue its soft power in the Silk Road Economic Corridor, which goes beyond economic and political development. The CPEC would need to link to the larger Indian market in order to reach its full economic potential. This corridor will open up the flow of trade between India and Pakistan, which presently has to be routed through third countries instead of moving directly. Further, India does not enjoy much leverage to guide ocean trade markets despite having proximity to the sea and a strong navy. Through the OBOR project, India will get access to more business in an environment which will promote business-friendly reforms.

Although China is the largest trade partner of most of the countries involved in the BRI, India is also a significant trading partner, especially with African, South Asian and South-east Asian countries. India will face economic consequences once the BRI is launched. Port development in Myanmar, Bangladesh, Sri Lanka, Maldives and Pakistan, which are incorporated in the BRI, has the potential to change the bilateral equation of India further to its disadvantage (Sibal 2014). This is because it favours China's trade flows through the Indian Ocean. It will also lead to trade diversion of Indian goods and services. China and India export some similar sets of goods to countries like Thailand, Myanmar and Cambodia in the South-east Asia region, Sri Lanka, Pakistan and Nepal in South Asia, and a few countries in Western Europe and Central Asia. Once the OBOR is built, countries might divert their trade from India to China because of easy access to Chinese goods and currency exchange.

Why Should India Join: China has a tradition of using the "chequebook" policy against India. And under the MSR, China is developing ports in Bangladesh, Sri Lanka and Pakistan, and is trying to enlarge its sphere of influence using its economic might in the Bay of Bengal and the Arabian Sea. Thus the MSR is nothing but an economic disguise for the "string of pearls" theory. China is investing huge amounts in India's immediate neighbours, and these countries tend to use the "China card" against India, which is to try to play with the India–China mistrust in order to further their development and economic agenda. More South-east Asian nations coming under China's sphere of influence would result in a serious setback to India's traditional concept of the subcontinent as its privileged sphere.

Further, the project, though informal at present, offers an alternative against the US-led Trans–Pacific Partnership (TPP) in the Asia–Pacific and the Transatlantic Trade and Investment Partnership between the European Union and the US. These mega free trade agreements, through their policies and rules of global trade, particularly where multilateral-level consensus is more necessary, will make it difficult for the government to regulate the market and will have economic implications for India's trade.

Moreover, India and China are members of the BRICS Bank, which aims to offer financial support for infrastructure projects and sustainable development. By refusing to be a member of the BRI, India's infrastructure needs may get neglected. This may further interfere with the economic cooperation among the BRICS countries and may cause conflict.

Once the issue relating to strategic and economic implications is judiciously analysed, India could benefit from partnering in the BRI. From a strategic perspective, India's involvement in OBOR will help the country better implement and integrate its "spice route" and the "Mausam project." Besides the tangible benefits of physical connectivity, the integration of these projects will also invigorate a climate of mutual trust, stability and prosperity between member countries. Additionally, India could also expedite progress on the Chabahar port on the Iranian coast, which will give India access to Afghanistan and Central Asia. This would enable India to be a major player in the overland Silk Route.

India's participation in OBOR will give it a new start and a new bright spot in India–China cooperation, as it will foster policy coordination, increase trade and investment, ensure people-to-people connect and, most importantly, integrate the financial system. For India, the MSR could prove to be a boon and help enhance its regional and bilateral cooperation. India does not have the same economic might as China has, but investing in neighbouring littoral countries will help in reducing China's sphere of influence to some extent.

Challenges
The grandiose plan of OBOR has been painted as everything from a response to home-grown economic problems to a masterful reshaping of the regional economy. However, the complete realisation of the project as estimated will take about 35 years, which will mark the centenary of the foundation of the People's Republic of China in 2049. Though the project has been met with scepticism from its neighbours, it has huge potential across regions.

However, the plan is yet to finalise its strategic vision. The success of the project depends on addressing both the internal and external challenges being faced by the Chinese economy.

The Chinese are expecting quick results. As the project involves large-scale infrastructure development, the plan needs to be given at least a 10-year time frame for success, which means that expectations should be revised. Since China is planning continuous investment in infrastructure in countries that are less developed and unstable, there is a potential for a debt crisis and limited returns. Moreover, China is presently grappling with its own economic issues, and the slowdown can also have implications for its OBOR strategy. Therefore, serious planning would be essential.

China has allocated $40 billion to its Silk Road Fund and established a $100 billion AIIB. According to a few analysts, the actual funds needed for the plan might exceed three or four times the amount allocated. The additional requirement will have to be met either by the issuance of special bonds or by low-cost finance from the China Development Bank. China has to be vigilant of the financial challenges, or else the ambitious project could end up as an expensive boondoggle.

OBOR has also received criticism and scepticism from many member countries, particularly ASEAN. They see this project as an attempt by China to dominate its neighbouring region, and it therefore faces coordination problems. Further, the regional and territorial disputes of China can interfere with the project. Additionally, Chinese failure in considering regional politics and the non-interference policy can expose the project to political risks from both local opposition and competing regional powers. China's OBOR dream can also get affected because of the presence of underdeveloped and immature markets along the route. Terrorism can further add to the risk. The potential of conflicts and geopolitical tension with the US and the unbalanced trade relations between China and Russia can further act as hurdles. India may also challenge OBOR, as the initiative is seen as more of a threat to the country than an opportunity. Therefore, it is adequate planning and coordination between the member countries that will be required for the successful implementation of the OBOR initiative.

Conclusions
In sum, the OBOR initiative is a centrepiece of China's foreign policy and domestic economic strategies. It is aimed at rejuvenating two ancient trade routes and opening up markets within and beyond the region. In order to make OBOR successful, China is keen to offer more economic and financial assistance to countries on the route and beyond through a connectivity programme, technical exchanges and by building infrastructure. China has already started taking several initiatives by investing in infrastructure projects and seeking a comprehensive engagement with member countries. However, better planning will be essential.

From an Indian perspective, it is apparent that the OBOR initiative of China will seriously hamper India's efforts at increasing its share in global trade and commerce if India chooses to stay out. Not only is India likely to lose existing and prospective markets, but it may also see its share in global capital inflows come down. In such a situation, it becomes imperative for policymakers in India to plan out strategies that not only mitigate the adverse consequences of OBOR, but also enable the country to reap benefits from the same.

China has repeatedly reached out to India and other countries of the region to partner in the implementation of the OBOR. India should therefore not miss the bus, and should strive to gain maximum economic and geopolitical advantage out of the corridor. To begin with, India should seek to add more Indian nodes to the existing plan of the OBOR. The belt is planned to pass through POK. India should seek to make the route pass through the Indian portion of the region of Kashmir. This would help her not only economically revive Kashmir, but also ease tensions and build mutual trust on both sides of the border. Similarly, along the maritime route, India should attempt to make the route pass through another port like Kochi or Mumbai. Such a measure would augment India's own infrastructure development efforts by attracting foreign capital and technical expertise.

It is evident that India needs to actively engage in the development of the project, right from the beginning. She needs to heed China's call to participate in the project geographically, politically and financially. The OBOR provides India a perfect opportunity to attract foreign capital to develop a significant proportion of her requirements. Simultaneously, India has the opportunity to tap new markets and make good economic returns by investing in infrastructure and industrial corridors along the OBOR. India can also enhance her international prestige by playing the role of an international mediator between an aggressive China and a suspicious West. India thus has a lot to gain from OBOR, provided it sheds its inhibitions about the same and participates actively in its implementation.

Note
1 The list of the countries involved can be found in "The Silk Road Economic Belt and the 21st Century Maritime Silk Road," May 2015, published by the Fung Business Intelligence Centre.

References
Fidelity Worldwide Investment Report (2015): "One Belt, One Road: Building Links, Strengthening Influence," March.
Hucheng, G (2014): "Deepen Economic, Trade Cooperation, Co-create New Brilliancy," Ministry of Commerce, People's Republic of China, 4 July.
Leverett, F, H Mann Leverett and Wu Bingbing (2015): "China Looks West: What Is at Stake in Beijing's 'New Silk Road' Project," The World Financial Review, 25 January.
Minnick, W (2015): "China's 'One Belt, One Road' Strategy," Defence News, 12 April.
Rajan, S (2015): "China: President Xi Jinping's South Asia Policy—Implications for India," South Asia Analysis Group, 27 April.
Sibal, K (2014): "Silk Route to Tie India in Knots," India Today, 25 February.
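A side note on Table 1 above: its footnote says the regional shares were calculated by the authors, with population and GDP data taken from the World Bank. As an editorial illustration only (not the authors' code, data or region definitions), the short Python sketch below shows that arithmetic: a region's share is simply its aggregate divided by the world total, times 100. All figures and region groupings here are hypothetical placeholders.

    # Illustrative sketch of the share calculation described in the note
    # to Table 1. The numbers below are made-up placeholders, not World
    # Bank data; the region groupings are assumptions for illustration.

    world_population = 7.3e9   # hypothetical world total (persons)
    world_gdp = 78.0e12        # hypothetical world total (US$)

    regions = {
        # region: (population, GDP in US$), both hypothetical
        "Central Asia": (0.10e9, 0.55e12),
        "South Asia": (1.68e9, 4.96e12),
    }

    def world_share(value, world_total):
        """Return a value's percentage share of the world total."""
        return 100.0 * value / world_total

    for name, (pop, gdp) in regions.items():
        print(f"{name}: {world_share(pop, world_population):.1f}% of world population, "
              f"{world_share(gdp, world_gdp):.2f}% of world GDP")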
'Outsider, We Love and Fear You': Dialogue with the Nicobarese

Ajay Saini

The Nicobarese are not against "outsiders" and have a long tradition of embracing different cultures. They respect religious heterogeneity, and even though the Hindu population in Kamorta and Katchal is small, there are numerous Hindu temples around, to which the Nicobarese have never objected. Their opposition to marriage alliances with outsiders is not religiously motivated but provoked by the undue advantage that the non-Nicobarese have taken by encroaching on their land and also disrupting the hitherto harmonious socio-economic dynamics. A response to Swapan K Biswas's article "Interreligious Marriage in Nicobar Islands: Opportunities and Challenges" (EPW, 30 May 2015).

The author acknowledges the guidance received from S Parasuraman.
Ajay Saini ([email protected]) teaches at the Tata Institute of Social Sciences, Mumbai.

In his article, "Interreligious Marriage in Nicobar Islands: Opportunities and Challenges" (EPW, 30 May 2015), Swapan K Biswas touched upon a sensitive issue and aptly argued that conversions and interfaith marriages were common among the Nicobarese. Of late, the tribal leadership of Nancowry has opposed out-group marriage alliances, for which the author gave two reasons: the power tussle between the Muslim and Christian Nicobarese, and reservation politics.

The precipitating factors for opposing marriages with outsiders, which the author has entirely missed, are intricately linked with a gradual shift in the outsider's identity—from a munificent patron of the community to a menacing agent. Since the issue is critical and the Nicobarese secular identity is at stake, it needs to be analysed in the context of the larger sociocultural and economic dynamics that have recently compounded the situation in Nancowry.

(Re)Setting the Context
Biswas's article appears promising and well contextualised in the beginning. However, as it progresses, it proffers a fractured analysis; and in the end, reduces the most pressing issue in the Nicobar to petty communal and reservation politics. Disconnected from field realities, the article fails to analyse the fluid intra- and inter-community relations in Nancowry. It seems to be based on the author's casual observations, which he also implicitly conveys at three instances: "my colleague...revealed," "according to some scholars who are observing…" and "some scholars attribute…" On top of factual inconsistencies and skewed analysis, the article also appears distasteful given its racist remarks. While alluding to the indigenes of the Andaman Islands, the author twice uses a racist term, "primitive," that was abandoned a decade ago. In 2006, the Indian government replaced the term "primitive tribal group" with "particularly vulnerable tribal group." Terms like "savage," "stone age" and "primitive" have been used since the colonial era, and they reinforce the belief that the indigenes are backward people. Governments use such terms as a pretext for forced development, whereby the indigenes are alienated from their land and resources. Therefore, by labelling the indigenes of the Andaman as "primitive," the author reinforces the abovementioned discourse that disempowers the indigenes.

The article also has multiple factual errors. The author's claim that the "Britishers permanently occupied both the Andaman and Nicobar Islands in 1858 with a plan to start a penal settlement to imprison Indian prisoners who revolted against them in 1857," is wrong. In fact, it was only in 1868 that the Nicobar became a part of British India. The Danish first occupied the Nicobar on 12 December 1755, renamed it New Denmark, but abandoned it in April 1759. Later, Austria also occupied and abandoned it. The British first occupied the Nicobar in 1807, but abandoned it in 1814. Until 1868, the Danish enjoyed sovereign rights over the Nicobar, which were sold to Britain on 16 October 1868 (Cahoon 2015).

The construction of a penal settlement on Kamorta Island, wherein 262 prisoners were incarcerated in 1869, and which got abandoned in 1888, was part of a strategic manoeuvre. After the withdrawal of the Danish, the Nicobar became infamous for piracy for the next two decades, where foreign pirates captured or damaged 26 passing vessels. The British colonised the Nicobar to control piracy and to curb the rival naval power in the countries of the immediate east (Singh 1978; Tamta 1992). Contrary to the author's claim, the French never "occupied" the Nicobar, though the French missionaries had some unsuccessful conversion stints in the islands.
The Nicobarese opposition to marriages with outsiders needs to be analysed against the backdrop of their changing relations. The Nicobarese have always lived in isolation, with intermittent cross-cultural contacts. Until the advent of the missionaries, they had remained almost immune to change. Since the 15th century, Portuguese, French, Danish and Italian missionaries attempted conversions among the Nicobarese. However, it was only during the British regime and after World War II that the community embraced Christianity.

With their conversion to Christianity (98%) and Islam (2%), the indigenes imbibed new values that ushered in sociocultural change among them. While all the indigenes converted to organised religions, some also continued observing their animist rituals. With no conflict of religious faith, the Nicobarese society became a unique blend of Christianity, Islam and animism. Since the Nicobarese had the freedom to choose or reject the cultural traits of their newly adopted religions, the process of change among them was not disruptive.

As the British were not interested in generating revenue from the Nicobar, the indigenes were never coerced to pay tax. With the opening of missionary schools and gradual awareness among the Nicobarese, the exploitative nature of their trade relations with the outsiders changed. The sociocultural change gained momentum after the independence of India. In 1956, the Andaman and Nicobar Islands (Protection of Aboriginal Tribes) Regulation (ANPATR) was promulgated, which recognised the Nicobar as a tribal reserve and proscribed outsiders' entry.

With the development of public infrastructure in the islands, the Nicobarese received rudimentary amenities, such as healthcare, schooling, electricity, clean water and so on. The introduction of modern horticulture practices and cooperative movements streamlined the Nicobarese livelihood. While the rest of the indigenes experienced depopulation in the colonial and the postcolonial epochs, the Nicobarese prospered. The community perceived the outsider as a munificent patron, and inter-island, inter-religion and out-group marriages remained common in the Nicobar.

Munificent to Menacing
Post independence, the outsider's identity among the indigenes has gradually shifted from being a munificent patron to a menacing agent. In the Nicobar, the Nicobarese tuhets (extended families) exercise traditional ownership of land. On the request of the government, the community donated some land for administrative purposes. With the setting up of the governmental apparatus in Nancowry, a large number of outsiders were deployed in the islands. These people constructed their hutments in Kamorta and gradually encroached upon the nearby land. Even after retirement, a large number of them have not left the tribal reserve, and their illegal settlement has caused land encroachment issues in the islands.

The most sensitive encroachments are related to a piece of land that the community allotted in goodwill to government employees for religious purposes. With the consent of Rani Lachmi, the deputy commissioner, vide order no 4/26/41/B-1, dated 4 January 1954, allotted five acres of land to government employees at Kamorta for constructing a temple and raising a park.1 The temple management committee used 200 square metres (approximately) of land for temple construction, while the rest was used for the construction of shops and residential buildings that were rented to outsiders.

Since such land usage violated the terms of the land allotment order and the ANPATR, the Nicobarese took up the matter with the administration. However, in July 1996, the assistant commissioner, Nancowry, passed a contentious order and permitted 34 outsiders to reside and carry on business in Kamorta.2 Many outsiders took advantage of the order and migrated to the Nicobar.

The influx of outsiders in the tribal reserve has led to economic exploitation and sociocultural rupture among the Nicobarese. With a rise in the crime rate, especially ANPATR violations, the peaceful Nicobarese society has come under immense pressure. With the help of local activists, the Nicobarese filed a public interest litigation (PIL) in the high court against encroachments. Vide its order, dated 12 December 2002, the high court directed the Lt Governor to implement the provisions of ANPATR.3 In pursuance of the same, the deputy commissioner issued an order (No 433) on 13 October 2004 for the repatriation of outsiders. However, the tsunami of December 2004 pre-empted the repatriation process, and ever since then the issue has been pending.

With the inundation of large tracts of land and the destruction of traditional livelihoods post tsunami, the encroachment issue has become critical. Other islands of Nancowry, especially Katchal, also grapple with land encroachments. The Nicobarese argue that unsolicited outsiders come to the islands to exploit the community, and some of them have married Nicobarese girls only to stay and establish business in the islands.

Conclusions
The Nicobarese, per se, have no aversion to out-groups. In fact, it is the only indigenous community in the islands which has embraced different cultures. The indigenes respect religious heterogeneity and participate in Eid, Christmas and Durga puja celebrations in the islands. Even though the Hindu population in Kamorta and Katchal is small, there are numerous Hindu temples around, to which the Nicobarese have never objected. Rather, they have cooperated in the construction of temples by offering land. Therefore, the Nicobarese opposition to marriage alliances with outsiders is not religiously motivated. As evidenced in this discussion, the changing socio-economic dynamics are at the core of the problem.

The tribal leadership is so anxious about the "colonisation of central Nicobar by outsiders" that it even requested the administration to stop all the development activities in the islands for a year and use the same resources to repatriate the unsolicited outsiders.4 In its effort to solve the encroachment issue, the community has tried everything that it could: requested the outsiders, approached the administration and petitioned the high court. However, justice has always eluded it. In its frantic effort to shield the community from further disintegration, the tribal leadership has opposed marriages with outsiders as a last resort.

In retrospect, I am reminded of my dialogue with a Nicobarese captain, who, on being asked about his opinion of the outsider, paused for a moment and replied rather philosophically, "Outsider, we love and fear them." The Nicobarese love outsiders, as they have brought happiness and prosperity to the community. Now they also fear them, since the outsiders are stealing the same from the community.

Notes
1 The government servants could only get 2.64 acres of land in 1969.
2 A document, "Use of Land Allotted to Sri Sri Radha Krishna Temple Complex at Kamorta," dated 6 June 2007; accessed from the office of the assistant commissioner, Nancowry.
3 F No 39-292/2003-revenue.
4 A letter, ABAVP/ANI2007/01; accessed from the office of the Nancowry Tribal Council.

References
Cahoon, Ben (2015): "Provinces of British India," Worldstatesmen.org, viewed on 6 July 2015, …es.htm#Andaman.
Singh, Iqbal N (1978): The Andaman Story, New Delhi: Vikas Publishing House.
Tamta, B R (1992): Andaman and Nicobar Islands, New Delhi: National Book Trust.
Merchandise Trade October 2015
                   October 2015   Over Month   Over Year   April–October
                   ($ bn)         (%)          (%)         (2015–16 over 2014–15) (%)
Exports            21.4           -2.3         -17.5       -17.6
Imports            31.1           -3.7         -21.2       -15.2
Trade Deficit       9.8           -6.8         -28.1        -9.9
Data is provisional; Source: Ministry of Commerce and Industry.

[Chart: Movement of WPI Sub-indices, April 2014–October 2015, year-on-year in %. In October 2015 (provisional): Manufactured Products -1.7%, Primary Articles -0.4%, Fuel and Power -16.3%.]

[Chart: Trade Deficits, April 2014–October 2015, in $ billion, showing the oil and non-oil trade deficits (-$4.4 bn and -$5.4 bn in October 2015) and the total trade deficit (-$9.8 bn). Oil refers to crude petroleum and petroleum products, while non-oil refers to all other commodities.]

Trends in WPI and Its Components October 2015* (%)
                        Weights   Over Month   Over Year   Financial Year (Averages)
                                                           2012–13   2013–14   2014–15
All commodities         100       0.1           -3.8       7.4       6.0        2.0
Primary articles        20.1      0.0           -0.4       9.8       9.8        3.0
  Food articles         14.3      0.3            2.4       9.9       12.8       6.1
Fuel and power          14.9      0.5          -16.3       10.3      10.2      -0.9
Manufactured products   65.0      0.0           -1.7       5.4       3.0        2.4
* Data is provisional; Base: 2004–05=100; Source: Ministry of Commerce and Industry.

[Chart: Movement of CPI Inflation, April 2014–October 2015, year-on-year in %. In October 2015 (provisional): Rural 5.5%, Urban 4.3%, CPI (Combined) 5.0%. Source: Central Statistics Office (CSO); Base: 2012=100.]

[Chart: Movement of Components of IIP Growth, April 2014–September 2015, year-on-year in %. In September 2015 (quick estimates): Electricity 11.4%, Manufacturing 2.6%, Mining 3.0%. Base: 2004–05=100.]

Inflation in CPI and Its Components October 2015* (%)
                        Weights   Latest Month Index   Over Month   Over Year   Financial Year (Avgs)
                                                                                2013–14   2014–15
CPI combined            100       126.1                0.6          5.0         9.5       5.9
  Consumer food         39.1      132.4                0.8          5.3         11.3      6.4
  Miscellaneous         28.3      117.9                0.3          3.5         6.8       4.6
CPI: Occupation-wise (September 2015*)
Industrial workers (2001=100)          266              0.8          5.1         9.7       6.3
Agricultural labourers (1986–87=100)   839              0.8          3.5         11.6      6.6
# Aug 2015; * Provisional; Source: CSO (rural & urban); Labour Bureau (IW and AL). Linking factor used to compute inflation for 2013–14.

Growth in Eight Core Industries October 2015* (%)
                              Weights   Over Month   Over Year   Financial Year (Avgs)
                                                                 2013–14   2014–15
General index#                100       0.8           3.6        -0.1       2.8
Infrastructure industries     37.9      5.2           3.2         4.2       4.2
  Coal                        4.4       17.6          6.3         1.3       8.5
  Crude oil                   5.2       3.6          -2.1        -0.2      -0.9
  Natural gas                 1.7       1.6          -1.8       -13.0      -5.1
  Petroleum refinery products 5.9       1.3          -4.4         1.5       0.3
  Fertilisers                 1.3       2.3          16.2         1.5      -0.1
  Steel                       6.7       6.0          -1.2        11.5       3.5
  Cement                      2.4       5.9          11.7         3.1       5.6
  Electricity                 10.3      3.4           8.8         6.0       8.2
# September 2015; * Data is provisional; Base: 2004–05=100; Source: CSO and Ministry of Commerce and Industry.

Capital Markets
Columns: 27 November 2015 | Month Ago | Year Ago | Financial Year So Far (Trough, Peak) | 2014–15 (Trough, Peak) | End of Financial Year (2012–13, 2013–14, 2014–15)
S&P BSE SENSEX (Base: 1978–79=100)   26128 (-8.1) | 27253 | 28439 (39.3) | 24894, 29044 | 22277, 29682 | 18836 (8.2), 22386 (18.8), 27957 (24.9)
S&P BSE-100 (Base: 1983–84=100)       8079 (-5.5) |  8355 |  8547 (40.9) |  7687,  8980 |  6680,  9107 |  5679 (-38.0), 6707 (18.1), 8607 (28.3)
S&P BSE-200 (1989–90=100)             3365 (-3.1) |  3467 |  3472 (43.5) |  3193,  3691 |  2678,  3723 |  2288 (6.0), 2681 (17.2), 3538 (31.9)
CNX Nifty (Base: 3 Nov 1995=1000)     7943 (-6.5) |  8233 |  8494 (40.2) |  7559,  8834 |  6653,  8996 |  5683 (7.3), 6704 (18.0), 8491 (26.7)
Net FII investment in equities (US $ Million)*   165894 (2.5) | 166886 | 161901 (12.8) | - | - | 136304 (23.4), 149745 (9.9), 168116 (12.3)
* = Cumulative total since November 1992 until period end | Figures in brackets are percentage variations over the specified or over the comparable period of the previous year | (-) = not relevant | -: not available | NS = new series | PE = provisional estimates
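The "over month" and "over year" columns in the tables above are plain percentage changes between index values. As an editorial illustration only, the Python sketch below reproduces that arithmetic for the CPI combined index, whose October 2015 value (126.1) appears in the table; the month-ago and year-ago index values used here are back-derived assumptions, not published figures.

    # Percentage-change arithmetic behind the "over month"/"over year"
    # columns. Only the current index value (126.1) comes from the table;
    # the comparison values are assumed for illustration.

    def pct_change(current, previous):
        """Percentage change of `current` over `previous`."""
        return 100.0 * (current - previous) / previous

    cpi_current = 126.1     # CPI combined, October 2015 (from the table)
    cpi_month_ago = 125.4   # assumed September 2015 value
    cpi_year_ago = 120.1    # assumed October 2014 value

    print(f"Over month: {pct_change(cpi_current, cpi_month_ago):+.1f}%")  # about +0.6%
    print(f"Over year:  {pct_change(cpi_current, cpi_year_ago):+.1f}%")   # about +5.0%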
Listening Hard

Amidst growing Islamophobia in Europe, the only hope is in educating your own and "the other," in discovering the rich past of coexistence, in dialogue, and exploring common identities.

Pawan Bali

Hatidza Mehmedovic walks through a sea of white tombstones and raises her hands in prayer. This is her pilgrimage—her own Mecca, she says. Mehmedovic recounts the dead—her husband, eldest son Azmir, younger son Almir, her two brothers and over 50 other family members—all killed in one single night. Twenty years later, this is what remains—the silent, symmetrical tombstones, and the haunting memories of the 1995 Srebrenica genocide.

Mehmedovic's story is one of the stoic moments in the documentary Journey into Europe (youtube.com/watch?v=ZleegkA-r4w) that underline the need to understand the complex relationship of the Muslim world in Europe. The 120-minute film by Muslim scholar and former diplomat, Ambassador Akbar Ahmed, seeks to explore several layers of this Muslim–European identity. The film digs out the rich past of Islam in Europe, it unveils the challenges of the present, and in some heart-warming moments, it offers glimpses of hope that humanity will prevail.

The debates of the present have to be rooted in the past. In parts of Europe, the past of Islamic civilisation has been glorious, but often forgotten. In the film, Jose Antonio Nieto, the mayor of Cordoba, remembers the richness of Andalusia in Southern Spain. "During the 10th century, under the Muslim rule in Andalusia, Cordoba was one of the greatest cities in the world. In Cordoba, we owe our character, our culture to the Muslims," he says.

During the rule of the Caliphate, Cordoba's main library boasted of over 4,00,000 manuscripts, and the period was known for some of the famous scholars and inventors like Ibn Rushd, Ibn Firnas and Maimonides.

The footprints of this Islamic civilisation are scattered across Spain and Sicily—in architecture, culture and daily habits: in food, like the couscous sold on the streets of Palermo in Sicily; in the Sicilian dialect, which is peppered with Arabic words; in music and dance, where the "ole" in flamenco is derived from the expressions of "Allah." In Palermo, the Monreale Cathedral and the Palatine chapel are standing examples of this mélange of influences. Their structures have been inspired by Latin and Roman elements and Arabic arches.

Nasser David Khalili, founder and director of the Maimonides Foundation in the UK, says the contributions of Islamic civilisation have been critical during the times Europe and the West were going through the "dark ages." "From the 9th and 10th century onwards, the Muslims translated Greek and Roman books. Through that translation, mathematics, medicine and understanding of food and health came to the West," he says.

Bashir Mann, the first Muslim elected official in the UK, describes this as a full circle of civilisation. "Europe learned everything from Muslims, and now it is the other way round. The people from Muslim countries come to Europe to learn the same thing they taught them," he adds.

Samia Hathroub, a lawyer and French social activist, says extremism is growing in France. "Most of the youth come from dislocated immigrant families; some of them begin as drug dealers, and go to jails, where they end up being radicalized." The isolation of the Muslim community adds to the problem. The rise of extremism reflects poorly on Muslim leaders in Europe who have failed their younger generation, but also on the state's ineffectual integration policies. The colonial empires and the immigrants from the countries they ruled have failed to find a common identity to hold on to. France makes a clear distinction between "originally French and French from immigrant background." In Britain, being British is not being English. In Denmark, the immigrants are not seen as true Danes. Immigrant histories have failed to find space in school textbooks and, consequently, in national identities.

In Europe, if there are concerns about the radicalisation of young Muslims, there is also a surge in right-wing sentiment. Marine Le Pen's far-right anti-immigrant party is fast gaining popularity in France. Britain's answer to the Tea Party, the United Kingdom Independence Party (UKIP), swept the local council polls in 2014, with its anti-immigration sloganeering. A far-right group, Britain First, has launched a fight to "take the country back." Britain First volunteers patrol Muslim-dominated areas with heavily-armed military vehicles, distribute Bibles at mosques and organise frequent anti-mosque protests. Jim Dowson, its founder, says the future of Britain would be "war" between the Islamic world and the originally British.

Europe's growing Islamophobia is also reflected in the rounds of discussions on EuroArabia—an assumption that countries like France would be Islamic Republics in 39 years. Jean-Luc Marret at the Foundation for Strategic Research in Paris dismisses it. "Europe will not collapse under the weight of Muslims. Islam can provide many things to Europe but Europe can provide many positive things to Islam. Like in France, we are producing a new form of Islam connected to Western moderation and innovation and this could be the 'new Andalusia'," he says.

Amidst all this, there are signs of normality. For instance, take sports, where over 45 Muslim players are a part of the
Economic & Political Weekly EPW DECEMBER 5, 2015 vol l no 49 77
POSTSCRIPT
CULTURES | EDUCATION
Doing Science
The absence of affordable, high-quality books
and magazines on science, especially in regional
languages, could imperil the existence of a
democratic, rational society.
Prashant Singh
T
he year was 2009. A faculty member—probably Ajit
Mohan Srivastava from the Institute of Physics,
Bhubaneshwar—was delivering a lecture on relativity, in
Hindi, at the Department of Physics of the University of Alla-
habad. Earlier, it had been announced on the campus notice-
boards that the professor’s talk would be in Hindi. On the day
of the talk, a theatre of modest size was seated to full capacity
with students crowding the gate and the stairs leading to the
benches higher up. I’ve never seen such an impressively large
gathering of eager listeners for a scientific talk—unless they
were rounded up and herded in by the institute’s director or
other concerned organisers.
Normally, for such a lecture, it’s difficult to find enough
people to fill up the couple of benches in the front row. The
rest of the hall is usually conspicuous by its emptiness. But not
so in the case of Professor Srivastava. Such was the power of
the medium. Hari Prakash of the Physics Department of
Allahabad University congratulated the speaker for such a
successful talk. In reply, Srivastava narrated his own experi-
ences of the difficult time he had with English when he was
Books relating studying as an undergraduate in
to science which the same department. That was what
have the capability prompted him to deliver the talk in
Hindi so that students, particularly
of firing the
those who had just begun their
imagination are
degree courses, could understand
beyond the reach of the subject better. That was probably
most students, for the single-most important factor
both financial and that attracted students to flock to
linguistic reasons the lecture hall.
English is one of the major problems confronting any
student of the Uttar Pradesh education board who first en-
counters the Lecture Hall at the undergraduate level. It is
generally believed that the natural sciences do not require
much understanding of language, whereas the social sci-
ences do. That argument is fine as long as we remain con-
fined to books directly relevant to courses that mostly have
mathematical expressions. But these books hardly fire the
imagination. They don’t inspire. They can’t inspire. Books
relating to science which have the capability of firing the
imagination are beyond the reach of most students, for both
financial and linguistic reasons.
And yet, at the same time, we have religious/revisionist
organisations reaching out far and wide through the use of
all available media platforms, including television, print and
mobile phones (probably the most effective). For people doing
78 DECEMBER 5, 2015 vol l no 49 EPW Economic & Political Weekly
POSTSCRIPT
EDUCATION | TRAVEL
A
s the airplane hurriedly descends, bidding adieu to is a place that witnessed the passionate encounters between
the august company of light-as-air clouds, all that I Shiva and Devi Sati. As the Sanskrit word for desire is “kama,”
see from my window is a vast expanse of land under the place was named Kamakhya, the home of the “bleeding
tea cultivation and an elephantine forest cover spreading far goddess.” In the month of Ashaad (June), the goddess is be-
and wide. I instantly conclude that this place is principally lieved to bleed within the confines of the “Garvagriha” or inner
representational of only one colour, and that is green. chamber of the temple. During this period, the Brahmaputra
Assam is undoubtedly the most vivacious of the eight states River near the temple magically turns crimson. The temple
that embrace “North East” India. Assam is synonymous with then remains closed for three days and the “holy water” is
overwhelming natural beauty, teeming wildlife, pristine water distributed among the devotees of Kamakhya Devi.
bodies, impeccable tea gardens, and affectionate, loving locals. Rationalists pooh-pooh the phenomenon, saying there is
It wouldn’t be an exaggeration to call Assam a diminutive no scientific proof that it is blood that turns the river red.
sample of the entire country, just like a trailer to a block- Some locals say that the resident priests pour vermilion into
buster movie. the waters during the “auspicious” period. Since menstrua-
Beyond tourism, the state of Assam is also a symbolic pin- tion has long been metaphorically representative of a
nacle of religious heritage. Like all god-fearing Indians, I decide woman’s power to bring forth life, the deity and the temple
to kick-start my journey to Assam by paying obeisance at the of Kamakhya is a testimony to the “shakti” or power within
ancient Kamakhya temple located in the city of Guwahati. every woman.
My cab driver, a philosophical soul, tells me that legend has
it that anyone visiting Guwahati who gives this temple a miss Chitvan Singh Dhillon ([email protected]) is a Chandigarh-based freelance
is forced to revisit it thrice in his lifetime. Well, thank god I did journalist.
A
khtari Begum and Amrita Pritam are just two of the Suman—to recreate the life and times of poet and iconoclast
iconic women of the late 1950s and 1960s whom we Amrita Pritam and her friend, the lyricist Sahir Ludhianvi.
are beginning to rediscover and comprehend. While This production, Ek Mulaqat, is part of a completely new
one reached the heights of performance in Hindustani classi- brand of theatre from Mumbai, a revisiting of the city’s contribu-
cal music, the other was a doyen of Punjabi literature. The tion to the cultural history of India. The production has another
layers of social prejudice and ridicule that they faced and the companion piece by the same team of actor and director that
very singular lives they led, are now the subject Finally, it looks looks at the tempestuous relationship between the
matter of two new plays that seek to position them like Mumbai is maverick movie director Guru Dutt and his compan-
in the tumultuous times they lived in, and exam- beginning to ion, Geeta Bali. Finally, it looks like Mumbai is begin-
ine how their questioning of gender stereotypes ning to celebrate its own complex past, and happily
celebrate its own
broke many barriers, forever. complex past, and rediscover its talents in the film industry, without
Among others of the same generation, Amrita happily rediscover any of the conventional filmi trappings. How pro-
Sher Gill, painter extraordinaire, daughter of a fessional and personal lives intersect and feed off
its talents in the
Hungarian opera singer and a Sikh father, who was one another is the subject of both these plays.
film industry,
a Persian and Sanskrit scholar, was another trail- In the fading blue twilight of her verandah,
without any of the
blazer. She used her western academic training in middle-aged Amrita Pritam has locked out her
conventional filmi
art to re-present the Indian peasant. She also led a young lover and companion, Imroz. Is it to revisit
trappings those tender moments of intellectual camaraderie,
personal life of unfettered freedom. And then
there is Ismat Chughtai, recently re-discovered for many of us by bordering on love, that she felt for Sahir Ludhianvi? A trunk
Naseeruddin Shah’s mounting of her short stories. Another call from Bombay interrupts her reverie...Sahir is no more.
woman, another iconoclast, like her contemporary, Manto. And in the course of a single night, the director bring us the
Coming from an extremely modest Muslim background, the tender yet tragic tale of these two cultural giants.
legendary ghazal and thumri singer Akhtari Begum was one Loveleen Mishra also played Amrita Pritam in a production
of the daughters of a second wife of a rich lawyer. Rita Gan- directed by M S Sathyu a few years ago, but without any of
guly’s fine production on her, mounted earlier this year in the unstated eloquence and longing of Ek Mulaqat.
Delhi, walked us through her life. From her very early years, Deepti Naval breathes life into the passionate Punjabi poetry
Akhtari Begum proved to be a rebel who would live life on her of Amrita Pritam, her face transparent with emotion, hanging
own terms. The end of the dissolute onto every word of the prolific Urdu poet.
maharaja lifestyle that supported many LAST LINES He is a loner, unable to commit himself
artists like Akhtari Begum in Bhopal, to Amrita or any other woman. Shekhar
Lucknow, Hyderabad and Rampur Suman essays the part well, his perfect
comes alive in this solo show. For hers Urdu a foil to Deepti’s perfect Punjabi.
was a life lived to the fullest. Question- This is a production solely reliant on the
ing, mocking and satirising the feudal craft and delivery of language, though
order, patriarchy and power, Begum the director attempts to hold our inter-
Akhtar was a truly emancipated woman est visually with stills and projections.
in a pre-independence era. Two productions, two images of a
Rita Ganguly, now 75 years old and world gone by, two women, one always
energetic enough to mesmerise us labelled a tawaif (prostitute), the other
with a two-and-a-half-hour long perfor- considered “loose” and immoral be-
mance, has had an amazing career cause of her live-in relationship. Don’t
herself. Born into a Bengali family and miss them if they come to your city.
steeped in that culture, she became Feisal Alkazi ([email protected]) is a theatre
the first female Kathakali performer [email protected] director, author and educationist based in New Delhi.
80 DECEMBER 5, 2015 vol l no 49 EPW Economic & Political Weekly
APPOINTMENTS/PROGRAMMES/ANNOUNCEMENTS ADVERTISEMENTS
Authors:
Maithreyi Krishnaraj • Maria Mies • Bina Agarwal • Prem Chowdhry • Ujvala Rajadhyaksha, Swati Smita • Joan P Mencher, K Saradamoni • Devaki
Jain • Indira Hirway • Deepita Chakravarty, Ishita Chakravarty • Uma Kothari • J Jeyaranjan, Padmini Swaminathan • Meena Gopal • Millie Nihila •
Forum against Oppression of Women • Srilatha Batliwala • Miriam Sharma, Urmila Vanjani • J Jeyaranjan
&OLPDWH&KDQJH&RQIHUHQFH3DULV
7KH)UHQFKZKRKDYHPDGHLQWHUQDWLRQDOGLSORPDF\DILQHDUWERZOHGDJRRJO\RIVRUWV
E\UHYHUVLQJWKHRUGHUWKDWKDVEHHQSUDFWLVHGLQDOOSUHYLRXV8QLWHG1DWLRQV81FOLPDWH
DQGRWKHUGHYHORSPHQWVXPPLWV,QVWHDGRIDVNLQJZRUOGOHDGHUVWRDUULYHDWWKHHQGRIWKH
FRQFODYHWRVLJQWKHDJUHHPHQWWKUDVKHGRXWE\FRXQWULHV¶QHJRWLDWRUVWKH\LQYLWHGWKHP
WRWKHILUVWGD\
7KHLGHDZDVWRPDNHWKHEURDGEUXVKVWURNHVRIZKDWLVOLNHO\WREHFDOOHGWKH³3DULV
DFFRUG´DIDLWDFFRPSOLDQGFRPPLWSROLWLFLDQVDQGEXUHDXFUDWVRQO\WRGRWWKH³L´VDQG
FURVVWKH³W´V:KHWKHUWKLVIHLQWZRUNVZLOORQO\EHNQRZQE\WKHIROORZLQJWZRZHHNHQGV
²WKLVILUVWZKHQWKHEDVLFGUDIWLVLVVXHGDQGWKHVHFRQGZKHQWKHILQDOGRFXPHQWLV
VLJQHGDQGVHDOHG
,QWHUQDWLRQDO6RODU$OOLDQFH
7KHUHLVQRTXHVWLRQWKDW3ULPH0LQLVWHU1DUHQGUD0RGLUHJLVWHUHGDFRXSRIKLVRZQE\
ODXQFKLQJWKH,QWHUQDWLRQDO6RODU$OOLDQFHRQWKHYHU\ILUVWGD\7KHIDFWWKDWLWZDV
LQWURGXFHGZLWK3UHVLGHQW)UDQFRLV+ROODQGHXQGHUVFRUHGWKHXQLYHUVDOHQGRUVHPHQWWKDW
WKHLQLWLDWLYHDFKLHYHG
$VLQPDQ\VXFKJUDQGSURQRXQFHPHQWVOLNHWKHSOHGJHVRIIRUHLJQGLUHFWLQYHVWPHQWWKDW
0RGLREWDLQVRQHDFKRIKLVYLVLWVDEURDGWKHUHFDQEHDKXJHJDSEHWZHHQSURPLVHDQG
SHUIRUPDQFH7KH$OOLDQFHLVH[SHFWLQJLQWLPHWRUDLVHPLOOLRQIURPPHPEHUVKLSIHHV
IURPWKHFRXQWULHVWKDWDUHSRWHQWLDOPHPEHUV²PRVWEHWZHHQWKH7URSLFVZKLFKDUH
ZHOOHQGRZHGZLWKODQGDQGVXQOLJKWEXWWRRSRRUWRH[SORLWWKHP
,WZLOODOVRWDSIXQGVIURPELODWHUDODQGPXOWLODWHUDODJHQFLHV%XWWKLVLVLQWKHUHDOPRI
VSHFXODWLRQ,QGHHGWKHJRYHUQPHQWPD\KDYHJRWFDUULHGDZD\E\WKHHXSKRULDRIWKH
ODXQFKFRQVLGHULQJWKDWLWH[SHFWVWRUDLVHELOOLRQE\²WLPHVZKDWWKH81
H[SHFWVULFKFRXQWULHVZLOOSD\GHYHORSLQJFRXQWULHVWRFRSHZLWKFOLPDWHFKDQJHE\
$FFRUGLQJWRWKH3HQDQJEDVHG7KLUG:RUOG1HWZRUNZLGHO\FRQVLGHUHGWREHWKH*OREDO
6RXWK¶VPRVWRXWVSRNHQQHJRWLDWRULQWKHFOLPDWHWDONV,QGLDKDVEHHQSXOOLQJLWVZHLJKWLQ
WKHRQJRLQJQHJRWLDWLRQVWDNLQJSODFHEHKLQGFORVHGGRRUV$QWLFLSDWLQJWKLV866HFUHWDU\
RI6WDWH-RKQ.HUU\UHIHUUHGWR,QGLD¶VSRVLWLRQDV³DFKDOOHQJH´DIHZGD\VEHIRUH3DULV
VXPPLW
3DULV$FFRUG1RW%LQGLQJ
7KHSUREOHPZLWKWKHRXWFRPHRIWKH3DULVDFFRUGLVWKDWLWZLOOEHHQWLUHO\YROXQWDU\
OHDYLQJHDFKFRXQWU\WRSOHGJHLWVFRPPLWPHQWWRPLWLJDWHWKHLPSDFWVRIFOLPDWHFKDQJH
7KHDFFRUGZRXOGQRWEHELQGLQJRQDQ\FRXQWU\7KLVWXUQVDEOLQGH\HWRWKHFDWDVWURSKLF
LPSDFWVRIJOREDOZDUPLQJVRPHRIZKLFKDUHDOUHDG\EHLQJIHOWDFURVVWKHSODQHW
&KHQQDLQRWEHLQJDQH[FHSWLRQ
7KH86DQGZKDWDUHNQRZQDVWKH³XPEUHOODFRXQWULHV´FRQVLVWLQJRI&DQDGDDQG
$XVWUDOLDDPRQJRWKHUVLVGHDGVHWDJDLQVWDQ\DWWHPSWWRPDNHHPLVVLRQFXWV
FRPSXOVRU\7KLVLVZK\WKH86UHIXVHGWRVLJQWKH.\RWR3URWRFROXQGHUWKH81
)UDPHZRUN&RQYHQWLRQRQ&OLPDWH&KDQJH81)&&&LQ7KLVZDVWKHRQO\HDUOLHU
DJUHHPHQWVLJQHGE\PRVW81PHPEHUVWKDWFRPSHOOHGLQGXVWULDOFRXQWULHVKHOG
UHVSRQVLEOHIRUFUHDWLQJWKHSUREOHPRIJOREDOZDUPLQJLQWKHILUVWSODFHWRFXWWKH
HPLVVLRQVRIJUHHQKRXVHJDVHVDQGSD\DILQDQFLDOSHQDOW\IRUIDLOLQJWRGRVR
6SHDNLQJDWWKLVVXPPLW3UHVLGHQW%DUDFN2EDPDPDGHQRVHFUHWRIKLVSRVLWLRQZKHQKH
VDLG³7DUJHWVDUHQRWVHWIRUHDFKRIXVEXWE\HDFKRIXV´$WEHVWRQO\WKHSURFHGXUHV
IRUPRQLWRULQJRIVXFKYROXQWDU\FRPSOLDQFHPD\EHPDGHPDQGDWRU\7KHVHSURFHGXUHV
ZLOOFRYHUDFWLRQVWKDWDUH³FRPSDUDEOHPHDVXUDEOHUHSRUWDEOHDQGYHULILDEOH´7KH
VSHFXODWLRQDW3DULVLVWKDWWKLVZLOOEHGRQHHYHU\ILYH\HDUV,WDOVRPDWWHUVZKHUHWKH
VHFUHWDULDWRIVXFKDPRQLWRULQJPHFKDQLVPLVKRXVHG,ILWLVDWWKH81)&&&VHFUHWDULDW
LQ%RQQQHJRWLDWRUVFDQEHUHDVRQDEO\FHUWDLQWKDWWKHSURFHVVZLOOEHIDLUDQG
WUDQVSDUHQW%XWDQHZDJHQF\DWDGLIIHUHQWYHQXHFDQSURYHDPDMRUREVWDFOH$SRVVLEOH
SUHFHGHQWLQDQRWKHUFRQWH[WLVWKHGLVFRPILWXUHZLWKWKH³FRQGLWLRQDOLWLHV´LPSRVHGE\WKH
86EDVHG,QWHUQDWLRQDO0RQHWDU\)XQGEHIRUHGLVEXUVLQJORDQVWRGHYHORSLQJFRXQWULHVLQ
WKHSDVW
3HULOVRI6HOIGLIIHUHQWLDWLRQ
1H[WRQO\WRWKHIDFWWKDWWKHDFFRUGZLOOQRWEHELQGLQJLVWKHDWWHPSWE\WKH8PEUHOOD
*URXSDQGVRPHRWKHUULFKFRXQWULHVWRUHPRYHWKHGLVWLQFWLRQLQWKHQHJRWLDWLRQVEHWZHHQ
GHYHORSHGDQGGHYHORSLQJFRXQWULHV7KLVKDVEHHQDFRUQHUVWRQHRIWKH81)&&&
SRVLWLRQIURPWKH(DUWK6XPPLWLQ5LRGH-DQHLURLQRQZDUGV2QWKHFRQWUDU\
FRXQWULHVDUHQRZEHLQJDVNHGWR³VHOIGLIIHUHQWLDWH´
(YHQEHIRUH&KLQDEHFDPHWKHELJJHVWHPLWWHURIJUHHQKRXVHJDVHVLQWKHZRUOGLQ
WKH86KDVEHHQWDUJHWLQJLW,QGLDKDVDOVREHHQWDUUHGZLWKWKHVDPHEUXVKIRUEHLQJD
PDMRUFRQWULEXWRUWRFOLPDWHFKDQJHDQGPXVWWKHUHIRUHDOVRFXWLWVHPLVVLRQV)RUPHU86
3UHVLGHQW*HRUJH:%XVK-UVDLGDVPXFK,WZDVWRNHHSVXFKPRYHVDWDUPV¶OHQJWKWKDW
JDYHULVHWRWKHDOOLDQFHRIHPHUJLQJHFRQRPLHVNQRZQDV%$6,&²FRQVLVWLQJRI%UD]LO
6RXWK$IULFD,QGLDDQG&KLQD1RZWKH86LVWDONLQJDERXWWKH³HYROYLQJHFRQRPLF
FLUFXPVWDQFHVDQGHPHUJLQJWUHQGV´LQVXFKFRXQWULHVZKLFKGLVWLQJXLVKWKHPIURPRWKHU
GHYHORSLQJFRXQWULHVDQGKDYHWKHUHIRUHWRWDNHRQFRPPLWPHQWVRIWKHLURZQ
%$6,&KDVQRWH[DFWO\FURZQHGLWVHOIZLWKJORU\,WZDVLQDKXGGOHOLWHUDOO\DWPLGQLJKWRQ
WKHFORVLQJGD\RIWKHDERUWLYH&RSHQKDJHQVXPPLWLQ,WDFTXLHVFHGWRSUHVVXUH
IURP2EDPDZKRJDWHFUDVKHGWKHLUFRQFODYHDQGSUHVVXULVHGWKHPWRVLJQDQDFFRUG
7KLVZDVYHU\IDUUHPRYHGIURPDWUHDW\ELQGLQJRQDOOVLJQDWRULHV+RZHYHUWKH\ZLOOQRZ
KDYHWRVWDQGXSWRWKHSUHVVXUHLQ3DULVWRWDNHRQELJJHUFXWV
0LQLVFXOH$LGIRU0RVW9XOQHUDEOH&RXQWULHV
$WKLUGSUREOHPLQ3DULVLVEOXUULQJWKHGLVWLQFWLRQEHWZHHQGHYHORSHGDQGGHYHORSLQJ
QDWLRQVE\VLQJOLQJRXWWKHOHDVWGHYHORSHGRUPRVWYXOQHUDEOHFRXQWULHVIRUDLG7\SLFDOO\
WKHVHDUHWKHVPDOOLVODQGVWDWHVDQGFRXQWULHVOLNH%DQJODGHVKZKLFKDUHORZO\LQJDQG
DOVRKDYHDORQJFRDVWH[SRVLQJWKHPWRJUHDWHUGDQJHURIIORRGLQJZLWKRFHDQOHYHOULVH
$VOHZRIIXQGVZHUHDQQRXQFHGRQWKHRSHQLQJGD\EXWWKHDPRXQWVZHUHTXLWHVPDOO
DQGPHUHO\ORRNHGWRPRUHFRQWULEXWLRQVLQIXWXUH
7KXVWKH%UHDNWKURXJK(QHUJ\&RDOLWLRQZDVDQQRXQFHGRQWKHRSHQLQJGD\FRPSULVLQJ
RIWKHZRUOG¶VELJJHVWLQYHVWRUVLQFOXGLQJ0DUN=XFNHUEHUJ%LOO*DWHV-DFN0D
0XNHVK$PEDQL5DWDQ7DWDDQGWKH86EDVHG9LQRG.KRVOD8QGHU0LVVLRQ,QQRYDWLRQ
2EDPDZLOOGRXEOHWKH86FRQWULEXWLRQWRPRUHUHVHDUFKDQGGHYHORSPHQWRQFOHDQ
HQHUJ\
$3DULV3DFWRQ:DWHUDQG&OLPDWH&KDQJH$GDSWDWLRQZLOOGLVEXUVHPLOOLRQWRKHOSWKH
ZDWHUV\VWHPVRIGHYHORSLQJFRXQWULHVGHYHORSUHVLOLHQFHKRSLQJWRUDLVHELOOLRQ
XOWLPDWHO\7KLVLQFOXGHGDILQDQFLDOFRPPLWPHQWE\,QGLDWREXLOGVXFKUHVLOLHQFHWKURXJK
LPSURYHGJURXQGZDWHUPDQDJHPHQW$QRWKHUPLOOLRQDLPHGDWGHYHORSLQJFRXQWULHV
ZDVODXQFKHGE\*HUPDQ\1RUZD\6ZHGHQDQG6ZLW]HUODQG7KLVLVVXSSRVHGWRFUHDWH
DQHZFODVVRIORZFDUERQDVVHWVZLWKGLVEXUVHPHQWVRIPLOOLRQWREHJLQZLWKODWHUWR
OHYHUDJHELOOLRQLQOHQGLQJE\WKH:RUOG%DQN
&RPEDWWLQJ2(&'5HSRUW
$W3DULVWKH,QGLDQFULWLTXHRIWKH2UJDQLVDWLRQIRU(FRQRPLF&RRSHUDWLRQDQG
'HYHORSPHQW2(&'KDVJRWDORWRIWUDFWLRQ
,QDUHFHQWUHSRUWWKH3DULVEDVHGRUJDQLVDWLRQKDVVWDWHGWKDWULFKFRXQWULHVKDG
GHOLYHUHGDVPXFKDVELOOLRQWRGHYHORSLQJFRXQWULHVWRKHOSWKHPFRPEDWFOLPDWH
FKDQJH7KLVLVZHOORYHUKDOIRIZKDWZDVSURPLVHGE\DQGGRHVQRWVTXDUHZLWKWKH
IDFWVRQWKHJURXQG)UHQFK)RUHLJQ0LQLVWHU/DXUHQW)DELXVZHQWVRIDUDVWRVXJJHVWWKDW
FRQVLGHUDEOHSURJUHVVKDGEHHQPDGHDQGRQO\WKHUHPDLQLQJRGGELOOLRQKDGWREH
UDLVHG
,QDFRXQWHUUHSRUWWLWOHG³6RPH&UHGLEOH)DFWV1HHGHG´E\WKH&OLPDWH&KDQJH)LQDQFH
8QLWLQ,QGLD¶V)LQDQFH0LQLVWU\RIILFLDOVDOOHJHGWKDW³1XPEHUVZHUHGHULYHGRQVHOI
UHSRUWHGEDVLVIURPVHOILQWHUHVWHGSOD\HUVDQGRSHQWRµJDPLQJ¶DQGH[DJJHUDWLRQ«$W
WKLVWLPHWKHDFWXDOFURVVERUGHUIORZVIURPVSHFLDOFOLPDWHIXQGVVLQFHWKHLULQFHSWLRQ
DUHVRPHELOOLRQ«DPRXQWLQJWRµJUHHQZDVKLQJ¶RIILQDQFH´
:KLOHWKHDXWKRUVVDLGWKDWWKLVZDVQRWDQRIILFLDOSDSHUDQGGLGQRWUHIOHFWWKHYLHZVRI
WKHJRYHUQPHQWLWLVFOHDUWKDW,QGLD¶VQHZIRXQGFRQILGHQFHRQWKHJOREDOVWDJH
HPEROGHQHGWKHDXWKRULWLHVWRWDNHDFOHDUVWDQGDJDLQVWWKLVGLVWRUWLRQRIIDFWV
*ULP&RQFOXVLRQIRU,QGLD
7RFRLQFLGHZLWK3DULVVXPPLWWKUHHH[SHUWVIURP,QGLDQ,QVWLWXWHRI0DQDJHPHQW,,0
$KPHGDEDG,QGLDQ,QVWLWXWHRI7HFKQRORJ\,,7*DQGKLQDJDUDQGWKH'HOKLEDVHG&RXQFLO
RQ(QHUJ\(QYLURQPHQWDQG:DWHU&((:UHOHDVHGDQLQGHSHQGHQWVWXG\WRVKRZWKDW
,QGLDZRXOGQHHGWULOOLRQLQ\HDUVWRDGGUHVVWKHDGYHUVHLPSDFWVRIFOLPDWH
FKDQJH$VPDQ\DVPLOOLRQSHRSOHOLYLQJDFURVVQHDUO\GLVWULFWVDUHFXUUHQWO\
H[SHULHQFLQJVLJQLILFDQWLQFUHDVHVLQDQQXDOPHDQWHPSHUDWXUHJRLQJEH\RQGWKH&
ZDUPLQJWKDWVFLHQWLVWVVD\LVWKHWLSSLQJSRLQWEH\RQGZKLFKWKHUHFDQEHGLVDVWURXV
LPSDFWV
2EVHUYHUVEHOLHYHWKDW,QGLDDVWKHZRUOG¶VWKLUGELJJHVWHPLWWHUPD\ZHOOEHWDUJHWHGIRU
FRQWLQXLQJWRGHPRQVWUDWHLWVLQGHSHQGHQWDQGSULQFLSOHGVWDQFHDW3DULV7KLVLVLQVKDUS
FRQWUDVWZLWK&KLQDZKLFKKDVNHSWRXWRIWKHOLPHOLJKW7KH\SUHGLFWWKDW&KLQDPD\
FKRRVHDWWKHIDJHQGRIWKHVXPPLWWRFORVHUDQNVZLWKWKH86OHDYLQJ,QGLDWRIHQGIRU
LWVHOI | https://fr.scribd.com/document/386808454/epw-dec-15 | CC-MAIN-2019-35 | en | refinedweb |
Class ACTION_PLUGIN This is the parent class from where any action plugin class must derive. More...
#include <class_action_plugin.h>
Class ACTION_PLUGIN This is the parent class from where any action plugin class must derive.
Definition at line 40 of file class_action_plugin.h.
Definition at line 49 of file class_action_plugin.h.
Definition at line 33 of file class_action_plugin.cpp.
Function GetCategoryName.
Implemented in PYTHON_ACTION_PLUGIN.
Function GetDescription.
Implemented in PYTHON_ACTION_PLUGIN.
Function GetName.
Implemented in PYTHON_ACTION_PLUGIN.
Referenced by ACTION_PLUGINS::GetAction(), and ACTION_PLUGINS::register_action().
Function GetObject This method gets the pointer to the object from where this action constructs.
Implemented in PYTHON_ACTION_PLUGIN.
Referenced by ACTION_PLUGINS::deregister_object().
Function register_action It's the standard method of a "ACTION_PLUGIN" to register itself into the ACTION_PLUGINS singleton manager.
Definition at line 38 of file class_action_plugin.cpp.
References ACTION_PLUGINS::register_action().
Referenced by PYTHON_ACTION_PLUGINS::register_action().
Function Run This method the the action.
Implemented in PYTHON_ACTION_PLUGIN.
Definition at line 46 of file class_action_plugin.h. | http://docs.kicad-pcb.org/doxygen/classACTION__PLUGIN.html | CC-MAIN-2017-26 | en | refinedweb |
A brief tour of OTB Applications¶
Introduction¶
OTB ships with more than 90 ready to use applications for remote sensing tasks. They usually expose existing processing functions from the underlying C++ library, or compose them into high level pipelines. OTB applications allow to:
- combine together two or more functions from the Orfeo Toolbox,
- provide a nice high level interface to handle: parameters, input data, output data and communication with the user.
OTB applications can be launched in different ways, and accessed from different entry points. The framework can be extended, but Orfeo Toolbox ships with the following:
- A command-line launcher, to call applications from the terminal,
- A graphical launcher, with an auto-generated QT interface, providing ergonomic parameters setting, display of documentation, and progress reporting,
- A SWIG interface, which means that any application can be loaded set-up and executed into a high-level language such as Python or Java for instance.
- QGIS plugin built on top of the SWIG/Python interface is available with seamless integration within QGIS.
The OTB Applications are now rich of more than 90 tools, which are listed in the applications reference documentation, presented in chapter [chap:apprefdoc], page.
Running the applications¶
Common framework¶
All standard applications share the same implementation and expose
automatically generated interfaces.
Thus, the command-line interface is prefixed by
otbcli_, while the Qt interface is prefixed by
otbgui_. For instance, calling
otbcli_Convert will launch the
command-line interface of the Convert application, while
otbgui_Convert will launch its GUI.
Using the command-line launcher¶
The command-line application launcher allows to load an application
plugin, to set its parameters, and execute it using the command line.
Launching the
otbApplicationLauncherCommandLine without argument
results in the following help to be displayed:
$ otbApplicationLauncherCommandLine Usage : ./otbApplicationLauncherCommandLine module_name [MODULEPATH] [arguments]
The
module_name parameter corresponds to the application name. The
[MODULEPATH] argument is optional and allows to pass to the launcher
a path where the shared library (or plugin) corresponding to
module_name is.
It is also possible to set this path with the environment variable
OTB_APPLICATION_PATH, making the
[MODULEPATH] optional. This
variable is checked by default when no
[MODULEPATH] argument is
given. When using multiple paths in
OTB_APPLICATION_PATH, one must
make sure to use the standard path separator of the target system, which
is
: on Unix, and
; on Windows.
An error in the application name (i.e. in parameter
module_name)
will make the
otbApplicationLauncherCommandLine lists the name of
all applications found in the available path (either
[MODULEPATH]
and/or
OTB_APPLICATION_PATH).
To ease the use of the applications, and try avoiding extensive
environment customization, ready-to-use scripts are provided by the OTB
installation to launch each application, and takes care of adding the
standard application installation path to the
OTB_APPLICATION_PATH
environment variable.
These scripts are named
otbcli_<ApplicationName> and do not need any
path settings. For example you can start the Orthorectification
application with the script called
otbcli_Orthorectification.
Launching an application with no or incomplete parameters will make the launcher display a summary of the parameters, indicating the mandatory parameters missing to allow for application execution. Here is an example with the OrthoRectification application:
$ otbcli_OrthoRectification ERROR: Waiting for at least one parameter... ====================== HELP CONTEXT ====================== NAME: OrthoRectification DESCRIPTION: This application allows to ortho-rectify optical images from supported sensors. EXAMPLE OF USE: otbcli_OrthoRectification -io.in QB_TOULOUSE_MUL_Extract_500_500.tif -io.out QB_Toulouse_ortho.tif DOCUMENTATION: ======================= PARAMETERS ======================= -progress <boolean> Report progress MISSING -io.in <string> Input Image MISSING -io.out <string> [pixel] Output Image [pixel=uint8/int8/uint16/int16/uint32/int32/float/double] -map <string> Output Map Projection [utm/lambert2/lambert93/transmercator/wgs/epsg] MISSING -map.utm.zone <int32> Zone number -map.utm.northhem <boolean> Northern Hemisphere -map.transmercator.falseeasting <float> False easting -map.transmercator.falsenorthing <float> False northing -map.transmercator.scale <float> Scale factor -map.epsg.code <int32> EPSG Code -outputs.mode <string> Parameters estimation modes [auto/autosize/autospacing] MISSING -outputs.ulx <float> Upper Left X MISSING -outputs.uly <float> Upper Left Y MISSING -outputs.sizex <int32> Size X MISSING -outputs.sizey <int32> Size Y MISSING -outputs.spacingx <float> Pixel Size X MISSING -outputs.spacingy <float> Pixel Size Y -outputs.isotropic <boolean> Force isotropic spacing by default -elev.dem <string> DEM directory -elev.geoid <string> Geoid File -elev.default <float> Average Elevation -interpolator <string> Interpolation [nn/linear/bco] -interpolator.bco.radius <int32> Radius for bicubic interpolation -opt.rpc <int32> RPC modeling (points per axis) -opt.ram <int32> Available memory for processing (in MB) -opt.gridspacing <float> Resampling grid spacing
For a detailed description of the application behaviour and parameters,
please check the application reference documentation presented
chapter [chap:apprefdoc], page or follow the
DOCUMENTATION
hyperlink provided in
otbApplicationLauncherCommandLine output.
Parameters are passed to the application using the parameter key (which
might include one or several
. character), prefixed by a
-.
Command-line examples are provided in chapter [chap:apprefdoc], page .
Using the GUI launcher¶
The graphical interface for the applications provides a useful interactive user interface to set the parameters, choose files, and monitor the execution progress.
This launcher needs the same two arguments as the command line launcher :
$ otbApplicationLauncherQt module_name [MODULEPATH]
The application paths can be set with the
OTB_APPLICATION_PATH
environment variable, as for the command line launcher. Also, as for the
command-line application, a more simple script is generated and
installed by OTB to ease the configuration of the module path : to
launch the graphical user interface, one will start the
otbgui_Rescale script.
The resulting graphical application displays a window with several tabs:
- Parameters is where you set the parameters and execute the application.
- Logs is where you see the output given by the application during its execution.
- Progress is where you see a progress bar of the execution (not available for all applications).
- Documentation is where you find a summary of the application documentation.
In this interface, every optional parameter has a check box that you have to tick if you want to set a value and use this parameter. The mandatory parameters cannot be unchecked.
The interface of the application is shown here as an example.
Using the Python interface¶
The applications can also be accessed from Python, through a module
named
otbApplication. However, there are technical requirements to use it.
If you use OTB through standalone packages, you should use the supplied
environment script
otbenv to properly setup variables such as
PYTHONPATH and
OTB_APPLICATION_PATH (on Unix systems, don’t forget to
source the script). In other cases, you should set these variables depending on
your configuration.
On Unix systems, it is typically available in the
/usr/lib/otb/python
directory. Depending on how you installed OTB, you may need to configure the
environment variable
PYTHONPATH to include this directory so that the module
becomes available from Python.
On Windows, you can install the
otb-python package, and the module
will be available from an OSGeo4W shell automatically.
As for the command line and GUI launchers, the path to the application
modules needs to be properly set with the
OTB_APPLICATION_PATH
environment variable. The standard location on Unix systems is
/usr/lib/otb/applications. On Windows, the applications are
available in the
otb-bin OSGeo4W package, and the environment is
configured automatically so you don’t need to tweak
OTB_APPLICATION_PATH.
In the
otbApplication module, two main classes can be manipulated :
Registry, which provides access to the list of available applications, and can create applications
Application, the base class for all applications. This allows to interact with an application instance created by the
Registry
Here is one example of how to use Python to run the
Smoothing
application, changing the algorithm at each iteration.
# Example on the use of the Smoothing application # # We will use sys.argv to retrieve arguments from the command line. # Here, the script will accept an image file as first argument, # and the basename of the output files, without extension. from sys import argv # The python module providing access to OTB applications is otbApplication import otbApplication # otbApplication.Registry can tell you what application are available print "Available applications : " print str( otbApplication.Registry.GetAvailableApplications() ) # Let's create the application with codename "Smoothing" app = otbApplication.Registry.CreateApplication("Smoothing") # We print the keys of all its parameter print app.GetParametersKeys() # First, we set the input image filename app.SetParameterString("in", argv[1]) # The smoothing algorithm can be set with the "type" parameter key # and can take 3 values : 'mean', 'gaussian', 'anidif' for type in ['mean', 'gaussian', 'anidif']: print 'Running with ' + type + ' smoothing type' # Here we configure the smoothing algorithm app.SetParameterString("type", type) # Set the output filename, using the algorithm to differentiate the outputs app.SetParameterString("out", argv[2] + type + ".tif") # This will execute the application and save the output file app.ExecuteAndWriteOutput()
Using OTB from QGIS¶
The processing toolbox¶
OTB applications are available from QGIS. Use them from the processing
toolbox, which is accessible with Processing
Toolbox. Switch to “advanced interface” in the bottom of the application
widget and OTB applications will be there.
Using a custom OTB¶
If QGIS cannot find OTB, the “applications folder” and “binaries folder”
can be set from the settings in the Processing
Settings
“service provider”.
On some versions of QGIS, if an existing OTB installation is found, the textfield settings will not be shown. To use a custom OTB instead of the existing one, you will need to replace the otbcli, otbgui and library files in QGIS installation directly.
Advanced applications capabilities¶
Load/Save OTB-Applications parameters from/to file¶
Since OTB 3.20, OTB applications parameters can be export/import to/from an XML file using inxml/outxml parameters. Those parameters are available in all applications.
An example is worth a thousand words
otbcli_BandMath -il input_image_1 input_image_2 -exp "abs(im1b1 - im2b1)" -out output_image -outxml saved_applications_parameters.xml
Then, you can run the applications with the same parameters using the output XML file previously saved. For this, you have to use the inxml parameter:
otbcli_BandMath -inxml saved_applications_parameters.xml
Note that you can also overload parameters from command line at the same time
otbcli_BandMath -inxml saved_applications_parameters.xml -exp "(im1b1 - im2b1)"
In this case it will use as mathematical expression “(im1b1 - im2b1)” instead of “abs(im1b1 - im2b1)”.
Finally, you can also launch applications directly from the command-line launcher executable using the inxml parameter without having to declare the application name. Use in this case:
otbApplicationLauncherCommandLine -inxml saved_applications_parameters.xml
It will retrieve the application name and related parameters from the input XML file and launch in this case the BandMath applications.
In-memory connection between applications¶
Applications are often use as parts of larger processing chains. Chaining applications currently requires to write/read back images between applications, resulting in heavy I/O operations and a significant amount of time dedicated to writing temporary files.
Since OTB 5.8, it is possible to connect an output image parameter from one application to the input image parameter of the next parameter. This results in the wiring of the internal ITK/OTB pipelines together, allowing to perform image streaming between the applications. There is therefore no more writing of temporary images. The last application of the processing chain is responsible for writing the final result images.
In-memory connection between applications is available both at the C++ API level and using the python bindings to the application presented in the Using the Python interface section.
Here is a Python code sample connecting several applications together:
import otbApplication as otb app1 = otb.Registry.CreateApplication("Smoothing") app2 = otb.Registry.CreateApplication("Smoothing") app3 = otb.Registry.CreateApplication("Smoothing") app4 = otb.Registry.CreateApplication("ConcatenateImages") app1.IN = argv[1] app1.Execute() # Connection between app1.out and app2.in app2.SetParameterInputImage("in",app1.GetParameterOutputImage("out")) # Execute call is mandatory to wire the pipeline and expose the # application output. It does not write image app2.Execute() app3.IN = argv[1] # Execute call is mandatory to wire the pipeline and expose the # application output. It does not write image app3.Execute() # Connection between app2.out, app3.out and app4.il using images list app4.AddImageToParameterInputImageList("il",app2.GetParameterOutputImage("out")); app4.AddImageToParameterInputImageList("il",app3.GetParameterOutputImage("out")); app4.OUT = argv[2] # Call to ExecuteAndWriteOutput() both wires the pipeline and # actually writes the output, only necessary for last application of # the chain. app4.ExecuteAndWriteOutput()
Note: Streaming will only work properly if the application internal implementation does not break it, for instance by using an internal writer to write intermediate data. In this case, execution should still be correct, but some intermediate data will be read or written.
Parallel execution with MPI¶
Provided that Orfeo ToolBox has been built with MPI and SPTW modules
activated, it is possible to use MPI for massive parallel computation
and writing of an output image. A simple call to
mpirun before the
command-line activates this behaviour, with the following logic. MPI
writing is only triggered if:
- OTB is built with MPI and SPTW,
- The number of MPI processes is greater than 1,
- The output filename is
.tifor
.vrt
In this case, the output image will be divided into several tiles
according to the number of MPI processes specified to the
mpirun
command, and all tiles will be computed in parallel.
If the output filename extension is
.tif, tiles will be written in
parallel to a single Tiff file using SPTW (Simple Parallel Tiff Writer).
If the output filename extension is
.vrt, each tile will be
written to a separate Tiff file, and a global VRT file will be written.
Here is an example of MPI call on a cluster:
$ mpirun -np $nb_procs --hostfile $PBS_NODEFILE \ otbcli_BundleToPerfectSensor \ -inp $ROOT/IMG_PHR1A_P_001/IMG_PHR1A_P_201605260427149_ORT_1792732101-001_R1C1.JP2 \ -inxs $ROOT/IMG_PHR1A_MS_002/IMG_PHR1A_MS_201605260427149_ORT_1792732101-002_R1C1.JP2 \ -out $ROOT/pxs.tif uint16 -ram 1024 ------------ JOB INFO 1043196.tu-adm01 ------------- JOBID : 1043196.tu-adm01 USER : michelj GROUP : ctsiap JOB NAME : OTB_mpi SESSION : 631249 RES REQSTED : mem=1575000mb,ncpus=560,place=free,walltime=04:00:00 RES USED : cpupercent=1553,cput=00:56:12,mem=4784872kb,ncpus=560,vmem=18558416kb, walltime=00:04:35 BILLING : 42:46:40 (ncpus x walltime) QUEUE : t72h ACCOUNT : null JOB EXIT CODE : 0 ------------ END JOB INFO 1043196.tu-adm01 ---------
One can see that the registration and pan-sharpening of the panchromatic and multi-spectral bands of a Pleiades image has bee split among 560 cpus and took only 56 seconds.
Note that this MPI parallel invocation of applications is only available for command-line calls to OTB applications, and only for images output parameters. | https://www.orfeo-toolbox.org/CookBook/OTB-Applications.html | CC-MAIN-2017-26 | en | refinedweb |
hi here is my code
when i type in i.e. 10 and 10 theres no output likewhen i type in i.e. 10 and 10 theres no output likeCode:#include<stdio.h> int calc(int a, int b); int main () { int first, second, answer; printf("pick two numbers: \n"); scanf("%d", &first); scanf("%d", &second); answer = calc(first, second); if (calc(first, second) == 1) return 1; else printf("the result is: %d", answer); return 0; } int calc(int a, int b) { if (b == 0) { printf("you cannot divide with zero"); return 1; } else return (a/b); }
"the result is blabla". why that ?
and when i try to divide with zero the message
"you cannot divide with zero" is displayed twice. i dont really know whats wrong with my code.
i hope that someone can help me
thanks threadhead | https://cboard.cprogramming.com/c-programming/31606-functions-return-value.html | CC-MAIN-2017-26 | en | refinedweb |
Integration with an:
public class MyReactApplication extends Application implements ReactApplication {...}
Override the required methods
getUseDeveloperSupport,
getPackages and
getReactNativeHost:
public class MyReactApplication extends Application implements ReactApplication {
@Override
public void onCreate() {
super.onCreate();
SoLoader.init(this, false);
}
private final ReactNativeHost mReactNativeHost = new ReactNativeHost(this) {
@Override
public boolean getUseDeveloperSupport() {
return BuildConfig.DEBUG;
}
protected List<ReactPackage> getPackages() {
List<ReactPackage> packages = new PackageList(this).getPackages();
// Packages that cannot be autolinked yet can be added manually here
return packages;
}
};
@Override
public ReactNativeHost getReactNativeHost() {
return mReactNativeHost;
}
}
If you are using Android Studio, use Alt + Enter to add all missing imports in your class. Alternatively these are the required imports to include manually:
import android.app.Application;
import com.facebook.react.PackageList;
import com.facebook.react.ReactApplication;
import com.facebook.react.ReactNativeHost;
import com.facebook.react.ReactPackage;
import com.facebook.soloader.SoLoader;
import java.util.List;.
<FrameLayout
android:id="@+id/reactNativeFragment"
android:layout_width="match_parent"
android::
<Button
android:layout_margin="10dp"
android:id="@+id/button"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:
Now in your Activity class e.g.
MainActivity.java you need to add an OnClickListener for the button, instantiate your ReactFragment and add it to the frame layout.
Add the button field to the top of your Activity:
private Button mButton;
Update your Activity's onCreate method as follows:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main_activity);
mButton = findViewById(R.id.button);
mButton.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
Fragment reactNativeFragment = new ReactFragment.Builder()
.setComponentName("HelloWorld")
.setLaunchOptions(getLaunchOptions("test message"))
.build();
getSupportFragmentManager()
.beginTransaction()
.add(R.id.reactNativeFragment, reactNativeFragment)
.commit();
}
});
}.
private Bundle getLaunchOptions(String message) {
Bundle initialProperties = new Bundle();
initialProperties.putString("message", message);
return initialProperties;
}
Add all missing imports in your Activity class. Be careful to use your package’s BuildConfig and not the one from the facebook package! Alternatively these are the required imports to include manually:
import android.app.Application;
import com.facebook.react.ReactApplication;
import com.facebook.react.ReactNativeHost;
import com.facebook.react.ReactPackage;
import com.facebook.react.shell.MainReactPackage;
import com.facebook.soloader.SoLoader; | http://reactnative.dev/docs/0.65/integration-with-android-fragment | CC-MAIN-2022-27 | en | refinedweb |
Adding Redux With NgRx/store to Angular 2 — Part 2 (Testing Reducers)
In this post, I'm sharing my insights on achieving using ngrx/store, working with more than one reducer in Angular 2, and testing reducers as well.
Join the DZone community and get the full member experience.Join For Free
In my recent article about adding redux with ngrx/store to Angular 2, I showed a nice example of integrating this awesome state management library to my open source project Echoes Player. Since then, I really wanted to integrate YouTube player into this Angular 2 version. In this post, I'm sharing my insights on achieving using ngrx/store, working with more than one reducer in Angular 2, and testing reducers as well.
Creating a YouTube Player Reducer
First, I defined and created a reducer for the YouTube player in Echoes. This approach of defining first the reducer helps me to design what data the app needs for this feature and how I'd like to use it.
At first, I defined the actions for this player's reducer: }
Similar to the previously "videos" reducer (from my last article about ngrx/store), I defined a reducer for the player. It is a pure function that expects to get a state object and an Action object. The action object will always include an "action.type" of this action. It can also include an "action.payload" if the action is supposed to pass data.
For better readability and perhaps easier maintenance, I like to keep the creation of a new state in small functions, which I can test as well. Those are the "playVideo" and "toggleVisibility" functions. Remember, a reducer should return a new state and should mutate the old state object.
All in all, the "player" reducer function can also be tested (which is described later in this article): } }
Testing Reducers in ngrx/store and Angular 2
I've written before that I like to write tests. Testing reducers turned out to be quite simple—a reducer is a function that gets an input and should always return an output. Let's see how we can test the new player reducer.
First, we need to setup the relevant testing utils that we're going to use—using jasmine for testing:); }); });
Connecting the Reducer to a Component
Now, we need to use this reducer in Echoes Player. For that, I created a YouTube player component. It should play a YouTube media when it's picked and display the played media title in the bottom bar.
The youtube-player component, registers to the youtube-player store in the constructor function and updates its player property whenever an action of this reducer is performed. This action lets the player display the title of the currently played media:
import { Component, EventEmitter, Input, Output, ChangeDetectionStrategy } from 'angular2/core'; import { NgModel, NgClass, AsyncPipe } from 'angular2(); } }
Notice how I use the "subscribe" method (this is RxJS method) in order to register to a change in the player store (which will eventually) will render the media title to the YouTube player template. Within this callback function, I can also instruct the player to either play/pause/queue—however, currently, the "player" store structure doesn't have a property for "player.state"—I'm still not sure that this is the correct way to achieve this and still investigating this practice. If you have any idea/suggestion—please let me know (in this article comments, the contact page or the GitHub repository).
The "playerService" is a service to interact with the YouTube player instance (3rd party module). In order to instruct the player to play a certain media from the video thumbs list, the "youtube-videos" component invokes the "playVideo" method of this service. This method ("playVideo") also dispatches the action "PLAY" and updates the "player" state:
file: src/app/youtube-videos/youtube-videos.ts }); }
Using this method, I'm just updating the current state of the player—indicating the media that is playing at the moment. I'm still looking for a way to dispatch a "PLAY_MEDIA" action, which will eventually, invoke the 3rd party YouTube player module to play the expected media that is sent as a payload in this action.
Final Thoughts
Here is a final screenshot of the player playing media and displaying it's title:
There's still a lot more to RxJS that can be explored. I just touched the surface of it in this post.
As always, this post's specific code is available on GitHub, the rest of the up-to-date code of Echoes Player is on the master branch.
Published at DZone with permission of Oren Farhi. See the original article here.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/adding-redux-with-ngrxstore-to-angular2-part-2-tes | CC-MAIN-2022-27 | en | refinedweb |
Quicksort on a Singly Linked List
Introduction
Sorting plays a significant role, especially when giving a coding contest or appearing in an interview. Whether it’s an array or linked list, some problems can be solved in less time if the data or elements are sorted because you know finding the solution is not enough; you should also optimize it if possible.
In this blog, we will cover how Quicksort works on a Linked list.
What is a linked list?
It is a commonly used data structure that contains a linear collection of data whose order is not defined by their physical address in the memory. In this, each node (element) points towards the other. And depending on which way they point, different types of linked lists are created.
The different types of a linked list are:
a) Singly Linked List: Node points towards the next node.
b) Doubly Linked List: Node points towards nodes on either side.
c) Circular Linked List: Last node points towards the first node, creating a circular path.
Note: Any node achieves the “pointing” by storing the node it wants to point towards.
In this session, we will talk about Quicksort on a Singly Linked List. Hence you must be able to recreate a diagram about singly linked list in your head, which looks like follows:
Quick Sort Algorithm
As the name suggests, Quicksort is a sorting algorithm that is one of the most popular and highly efficient sorting algorithms. It is faster compared to other algorithms as it uses the concept of divide and conquer, i.e., it continuously divides the given list into two parts, hence sending lower value items on one side and higher values on the other (depending on the need of the user).
Working of Quicksort Algorithm
Let’s understand the algorithm of Quicksort on a Singly Linked list first:
Let’s understand the Quicksort pseudo-code:
- We are given an input array
- Choose pivot, here we are choosing the mid element as our pivot
- Now we’ll partition the array as per the pivot
- Keep a partitioned index say p and then initialize it to -1
- Iterate through every element present in the array except the pivot
- If found an element that is less than the pivot element then increment p and swap the elements present at index p with the element at index i.
- Once traversing all the elements, swap pivot with element present at p+1 as this will be the same position as in the sorted array
- Now return the pivot index
- Once partitioned, now make 2 calls on quicksort
- One from beg to p-1
- Other from p+1 to n-1
Quicksort Algorithm:-
E.g. In this given image below we can see how Quicksort works on an array:-
Source: Quick sort
In the given image above we can see we chose element 5 as a pivot and we will start by keeping two pointers approach with the first element of the array as left and the last element of the array as right. And we’ll start comparing since the algorithm is based on if the element on the left side of the pivot is greater than the right side of the element then we’ll swap till the time we reach the pivot element.
Implementation of Quicksort on a Singly Linked List in Java
public class QuickSortLinkedList { static class Node { int data; Node next; Node(int d) { this.data = d; this.next = null; } } Node head; void addNode(int data) { if (head == null) { head = new Node(data); return; } Node curr = head; while (curr.next != null) curr = curr.next; Node newNode = new Node(data); curr.next = newNode; } void printList(Node n) { while (n != null) { System.out.print(n.data); System.out.print(" "); n = n.next; } } // Initiate the first and the last node without breaking any links in the whole linked list. Node partitionLast(Node start, Node end) { if (start == end || start == null || end == null) return start; Node pivot_prev = start; Node curr = start; int pivot = end.data; // Iterate till pen-ultimate node, since the last node is the PIVOT while (start != end) { if (start.data < pivot) { pivot_prev = curr; int temp = curr.data; curr.data = start.data; start.data = temp; curr = curr.next; } start = start.next; } // swap whichever is following suitable index and pivot int temp = curr.data; curr.data = pivot; end.data = temp; return pivot_prev; } void sort(Node start, Node end) { if(start == null || start == end|| start == end.next ) return; // split list and partition recurse Node pivot_prev = partitionLast(start, end); sort(start, pivot_prev); // If PIVOT = START, we pick from next of PIVOT. if (pivot_prev != null && pivot_prev == start) sort(pivot_prev.next, end); // If PIVOT is still in between the list, start from next to pivot since we have pivot_prev, so we move two nodes. else if (pivot_prev != null && pivot_prev.next != null) sort(pivot_prev.next.next, end); } // Main Driver Code public static void main(String[] args) { QuickSortLinkedList list = new QuickSortLinkedList(); list.addNode(10); list.addNode(1); list.addNode(2); list.addNode(8); list.addNode(5); Node n = list.head; while (n.next != null) n = n.next; System.out.println("Linked List before sorting"); list.printList(list.head); list.sort(list.head, n); System.out.println("\nLinked List after sorting"); list.printList(list.head); } }
Output:
Time Complexity: O(N^2) in the worst case, where N is the number of elements in the linked list and O(N*logN) in the average case.
Auxiliary Space: O(1), as we are not using any additional space.
FAQs
- What is the stopping condition of the quick sort algorithm?
Like the Quicksort() function, the Partition() function takes an array and its size. In this function, we first check that the array size is larger than 1; this is the stopping condition since an array of size 1 is, by definition, sorted.
- How does quicksort for linked lists work?
Quicksort algorithm is a divide and conquer algorithm; it divides the list into smaller sublists, then takes a pivot element and sorts it into higher and lower groups and then nests the quick sort into newly formed groups till the goal is achieved.
Key Takeaways
In this article, we discussed quicksort on a singly linked list with all crucial aspects. We discussed the quick sort algorithm in detail and implemented the quick sort in JAVA. If you want to learn more about sorting algorithms, check the blog we have covered before on Sorting in DataStructure.
If you are a beginner in coding and want to learn DSA, you can look for our guided path for DSA, which is free! | https://www.codingninjas.com/codestudio/library/quicksort-on-a-singly-linked-list | CC-MAIN-2022-27 | en | refinedweb |
As a simple example to back up PJE's explanation, consider:

1. encodings becomes a namespace package
2. …
Cheers, Nick.
--
Sent from my phone, thus the relative brevity :)

On May 22, 2012 4:10 AM, "PJ Eby" <[email protected]> wrote:
On Mon, May 21, 2012 at 9:55 AM, Guido van Rossum <[email protected]> wrote:
To do that, you just assign to __path__, the same as now, ala __path__ = pkgutil.extend_path(). The auto-updating is in the initially-assigned __path__ object, not the module object or some sort of generalized magic.
I'd like to hear more about this from Philip -- is that feature
actually widely used?
Well, it's built into setuptools, so yes. ;-) It gets used any time a dynamically specified dependency is used that might contain a namespace package. This means, for example, that every setup script out there using "setup.py test", every project using certain paste.deploy features... it's really difficult to spell out the scope of things that are using this, in the context of setuptools and distribute, because there are an immense number of ways to indirectly rely on it.
This doesn't mean that the feature can't continue to be implemented inside setuptools' dynamic dependency system, but the code to do it in setuptools is MUCH more complicated than the PEP 420 code, and doesn't work if you manually add something to sys.path without asking setuptools to do it. It's also somewhat timing-sensitive, depending on when and whether you import 'site' and pkg_resources, and whether you are mixing eggs and non-eggs in your namespace packages.
In short, the implementation is a huge mess that the PEP 420 approach would vastly simplify.
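(For comparison, the setuptools-era declaration that each distribution shipping a portion of a namespace must carry looks like this; again, `foo` is illustrative, and the comment paraphrases the bookkeeping described above rather than quoting setuptools itself:)

```python
# foo/__init__.py -- the setuptools-era namespace declaration.
# declare_namespace() registers 'foo' with pkg_resources, which then
# tries to keep foo.__path__ up to date -- but only for path entries
# that pass through pkg_resources' own machinery, which is where the
# timing sensitivity and the "huge mess" come from.
__import__('pkg_resources').declare_namespace(__name__)
```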
But... that wasn't the original reason why I proposed it. The original reason was simply that it makes namespace packages act more like the equivalents do in other languages. While being able to override __path__ can be considered a feature of Python, its being static by default is NOT a feature, in the same way that *requiring* an __init__.py is not really a feature.
The principle of least surprise says (at least IMO) that if you add a directory to sys.path, you should be able to import stuff from it. That whether it works depends on whether or not you already imported part of a namespace package earlier is both surprising and confusing. (More on this below.)
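(A hypothetical session makes the timing dependency concrete; `ns`, its portions, and the directory are all invented for illustration:)

```python
import sys

import ns.part1              # 'ns' is a hypothetical namespace package;
                             # under static (pre-PEP-420) behaviour this
                             # first import freezes ns.__path__

sys.path.append('/opt/plugins')   # hypothetical dir containing ns/part2/

import ns.part2   # static __path__: ImportError, because ns.__path__
                  # was computed before '/opt/plugins' was on sys.path.
                  # With PEP 420's dynamic __path__, the new entry is
                  # seen and the import succeeds.
```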
> What would a package have to do if the feature didn't exist?
Continue to depend on setuptools to do it for them, or use some hypothetical update API... but that's not really the right question. ;-)
The right question is, what happens to package *users* if the feature didn't exist?
And the answer to that question is, "you must call this hypothetical update API *every time* you change sys.path, because otherwise your imports might break, depending on whether or not some other package imported something from a namespace before you changed sys.path".
And of course, you also need to make sure that any third-party code you use does this too, if it adds something to sys.path for you.
And if you're writing cross-Python-version code, you need to check to make sure whether the API is actually available.
And if you're someone helping Python newbies, you need to add this to your list of debugging questions for import-related problems.
And remember: if you forget to do this, it might not break now. It'll break later, when you add that other plugin or update that random module that dynamically decides to import something that just happens to be in a namespace package, so be prepared for it to break your application in the field, when an end-user is using it with a collection of plugins that you haven't tested together, or in the same import sequence...
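(Concretely, the extra step looks something like this with today's pkg_resources -- fixup_namespace_packages() is a real pkg_resources function, though the directory name is made up:)

```python
import sys
import pkg_resources

plugin_dir = '/opt/extra-plugins'     # hypothetical new path entry
sys.path.append(plugin_dir)

# Without this extra call, portions of already-imported namespace
# packages living under plugin_dir stay invisible -- the
# forget-it-and-break-later failure mode described above.
pkg_resources.fixup_namespace_packages(plugin_dir)
```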
The people using setuptools won't have these problems, but *new* Python users will, as people begin using a PEP 420 that lacks this feature.
The key scope question, I think, is: "How often do programs change sys.path at runtime, and what have they imported up to that point?" (Because for the other part of the scope, I think it's a fairly safe bet that namespace packages are going to become even *more* popular than they are now, once PEP 420 is in place.)
But the key API/usability question is: "What's the One Obvious Way to add/change what's importable?"
And I believe the answer to that question is, "change sys.path", not "change sys.path, and then import some other module to call another API to say, 'yes, I really *meant* to update sys.path, thank you very much.'"
(Especially since NOT requiring that extra API isn't going to break any existing code.)
> I'd really much rather not have this feature, which reeks of too much magic to me. (An area where Philip and I often disagree. :-)
My take on it is that it only SEEMS like magic, because we're used to static __path__. But other languages don't have per-package __path__ in the first place, so there's nothing to "automatically update", and so it's not magic at all that other subpackages/modules can be found when the system path changes!
So, under the PEP 420 approach, it's *static* __path__ that's really the weird special case, and should be considered so. (After all, __path__ is and was primarily an implementation optimization and compatibility hack, rather than a user-facing "feature" of the import system.)
For example, when *would* you want to explicitly spell out a namespace package __path__, and restrict it from seeing sys.path changes? I've not seen *anybody* ask for this feature in the context of setuptools; it's only ever been bug reports about when the more complicated implementation fails to detect an update.
So, to wrap up:
- The primary rationale for the feature is that "least surprise" for a new user to Python is that adding to sys.path should allow importing a portion of a namespace, whether or not you've already imported some other thing in that namespace. Symmetry with other languages and with other Python features (e.g. changing the working directory in an interactive interpreter) suggests it, and the removal of a similar timing dependency from PEP 402 (preventing direct import of a namespace-only package unless you imported a subpackage first) suggests that the same type of timing dependency should be removed here, too. (Note, for example, that I may not know that importing baz.spam indirectly causes some part of foo.wiz to be imported, and that if I then add another directory to sys.path containing a foo.* portion, my code will *no longer work* when I try to import foo.ham. This is much more "magical" behavior, in least-surprise terms!)
- The constraints on sys.path and package __path__ objects can and should be removed, by making the dynamic path objects refer to a module and attribute, instead of directly referencing parent __path__ objects. Code that currently manipulates __path__ will not break, because such code will not be using PEP 420 namespace packages anyway (and so, __path__ will be a list). (Even so, the most common __path__ manipulation idiom is "__path__ = pkgutil.extend_path(...)" anyway!)
- Namespace packages are a widely used feature of setuptools, and AFAIK nobody has *ever* asked to stop dynamic additions to namespace __path__, but a wide assortment of things people do with setuptools rely on dynamic additions under the hood. Providing the feature in PEP 420 gives a migration path away from setuptools, at least for this one feature. (Specifically, it does away with the need to use declare_namespace(), and the need to do all sys.path manipulation via setuptools' requirements API.)
- Self-contained (__init__.py packages) and fixed __path__ lists can and should be considered the "magic" or "special case" parts of importing in Python 3, even though we're accustomed to them being central import concepts in Python 2. Modules and namespace packages can and should be the default case from an instructional POV, and sys.path updating should reflect this. (That is, future tutorials should introduce modules, then namespace packages, and finally self-contained packages with __init__ and __path__, because the *idea* of a namespace package doesn't depend on __path__ existing in the first place; it's essentially only a historical accident that self-contained packages were implemented in Python first.)
PixelAspectRatio
GifOptions.PixelAspectRatio property
Gets or sets the GIF pixel aspect ratio.
public byte PixelAspectRatio { get; set; }
Property Value
The GIF pixel aspect ratio.
Remarks

In the GIF89a format the pixel aspect ratio is stored as a single byte. A value of 0 means that no aspect ratio information is given; for non-zero values the aspect ratio is computed as (value + 15) / 64, which covers ratios from roughly 1:4 to 4:1 in increments of 1/64.
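A minimal usage sketch (the file names are invented, and the Image.Load/Save pattern is assumed from the wider Aspose.CAD API):

using (var image = Aspose.CAD.Image.Load("drawing.dwg"))
{
    var options = new Aspose.CAD.ImageOptions.GifOptions();
    options.PixelAspectRatio = 0;   // 0 = store no aspect ratio information
    image.Save("drawing.gif", options);
}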
See Also
- class GifOptions
- namespace Aspose.CAD.ImageOptions
- assembly Aspose.CAD | https://reference.aspose.com/cad/net/aspose.cad.imageoptions/gifoptions/pixelaspectratio/ | CC-MAIN-2022-27 | en | refinedweb |
@4geit/rct-account-store
reusable account store package
Demo
A live storybook is available to see what the store looks like @
Installation
- A recommended way to install @4geit/rct-account-store is through npm package manager using the following command:
npm i @4geit/rct-account-store --save
Or use yarn using the following command:
yarn add @4geit/rct-account-store
- Depending on where you want to use the store, you will need to import the class instance accountStore in the JSX code as follows:
import React, { Component } from 'react'
import { inject } from 'mobx-react'
// ...
@inject('accountStore')
class App extends Component {
  handleClick() {
    this.props.accountStore.setVar1('dummy value')
  }
  render() {
    return (
      <div className="App">
        <button onClick={ this.handleClick.bind(this) }>Click here</button>
      </div>
    )
  }
}
If you are willing to use the class instance inside another store class, then you can just import the instance as follows:
import { action } from 'mobx'
import accountStore from '@4geit/rct-account-store'

class DummyStore {
  @action doSomething() {
    accountStore.setVar1('dummy value')
  }
}
- If you want to use the store class in the storybook, add accountStore within stories/index.js by first importing it:
import accountStore from '@4geit/rct-account-store'
and then within the stores array variable add accountStore at the end of the list.
- If you want to use the store class in your project, add accountStore within src/index.js by first importing it:
import accountStore from '@4geit/rct-account-store'
and then within the stores array variable add accountStore at the end of the list.
URLVariables class is for representing variables of HTTP.
#include <URLVariables.hpp>
URLVariables class is for representing variables of HTTP.
URLVariables class allows you to transfer variables between an application and server. When transfering, URLVariables will be converted to a URI string.
Use URLVariables objects with methods of the HTTPLoader class.
Definition at line 28 of file URLVariables.hpp.
Default Constructor.
Definition at line 41 of file URLVariables.hpp.
Constructor by a string representing encoded properties.
Converts the variable string to properties of the specified URLVariables object.
Definition at line 53 of file URLVariables.hpp.
References decode(), samchon::WeakString::find(), samchon::WeakString::split(), and samchon::WeakString::substr().
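A short sketch using only the members documented on this page (the query string is invented):

#include <URLVariables.hpp>
#include <string>

using namespace samchon::library;

void example()
{
    // constructor parsing an encoded variable string (documented above)
    URLVariables vars("id=samchon&age=28");

    // re-serialize the properties following the URI format
    std::string str = vars.toString();
}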
Encode a string into a valid URI.
Encodes a string to follow URI standard format.
Definition at line 79 of file URLVariables.hpp.
Referenced by samchon::library::HTTPLoader::load(), and toString().
Decode a URI string.
Decodes a URI string to its original form.
Definition at line 116 of file URLVariables.hpp.
Referenced by URLVariables().
Get the string representing URLVariables.
Returns a string object representing URLVariables following the URI standard format.
Definition at line 177 of file URLVariables.hpp. | http://samchon.github.io/framework/api/cpp/dc/d30/classsamchon_1_1library_1_1URLVariables.html | CC-MAIN-2022-27 | en | refinedweb |
OCC implementation of IMesher interface.
#include <IVtkOCC_ShapeMesher.hxx>
OCC implementation of IMesher interface.
The mesher produces shape data using an implementation of the IShapeData interface for VTK; the result can then be retrieved from that implementation as a vtkPolyData, as sketched below:
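A sketch of that flow (type and method names follow the IVtk naming pattern used in this reference and should be checked against the actual headers):

// aShapeImpl is an IVtkOCC_Shape handle wrapping a TopoDS_Shape,
// created elsewhere
IVtkVTK_ShapeData::Handle aDataImpl = new IVtkVTK_ShapeData();
IVtk_IShapeMesher::Handle aMesher = new IVtkOCC_ShapeMesher();
aMesher->Build (aShapeImpl, aDataImpl);
// hand the result to the VTK pipeline
vtkPolyData* aPolyData = aDataImpl->getVtkPolyData();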
Then the resulting vtkPolyData can be used for initialization of VTK pipeline.
Main constructor.
Destructor.
Returns absolute deflection used by this algorithm. This value is calculated on the basis of the shape's bounding box. Zero might be returned in case if the underlying OCCT shape is empty or invalid. Thus check the returned value before passing it to OCCT meshing algorithms!
Returns deviation angle used by this algorithm. This is the maximum allowed angle between the normals to the curve/surface and the normals to polyline/faceted representation.
Returns relative deviation coefficient used by this algorithm.
Executes the mesh generation algorithms. To be defined in implementation class.
Implements IVtk_IShapeMesher. | https://dev.opencascade.org/doc/refman/html/class_i_vtk_o_c_c___shape_mesher.html | CC-MAIN-2022-27 | en | refinedweb |
IBlockEvent
This interface is extended by all Events that can deal with blocks in the world.
Importing the class
It might be required to import the class to avoid errors.
import crafttweaker.event.IBlockEvent;
Extending IEventPositionable
This interface extends IEventPositionable, which means that all functionality that IEventPositionable offers is also present in IBlockEvent.
ZenGetters
This article serves as an introduction to fp-ts coming from someone knowledgeable in JS. I was inspired to write this up to help some team members familiarise themselves with an existing codebase using fp-ts.
Hindley-Milner type signatures
Hindley-Milner type signatures represent the shape of a function. This is important because it will be one of the first things you look at to understand what a function does. So for example:
const add1 = (num:number) => num + 1
This will have the type signature of number -> number, as it will take in a number and return a number.
Now what about functions that take multiple arguments, for example a function add that adds two numbers together? Generally, functional programmers prefer functions to have one argument. So the type signature of add will look like number -> number -> number and the implementation will look like this:
const add = (num1: number) => (num2: number) => num1 + num2
Breaking this down, we don't have a function that takes in two numbers and adds them; we have a function that takes in a number and returns another function that takes in a number and finally adds them both together.
In the fp-ts documentation, we read the typescript signature to tell us what a function does. So for example the trimLeft function in the string package has a signature of
export declare const trimLeft: (s: string) => string
which tells us that it is a function that takes in a string and returns a string.
Higher kinded types
Similar to how higher order functions like map require a function to be passed in, a higher kinded type is a type that requires another type to be passed in. For example,
let list: Array<string>;
Array is a higher kinded type that requires another type, string, to be passed in. If we left it out, TypeScript would complain and ask "an array of what?". An example of this in fp-ts is the flatten function from array.
export declare const flatten: <A>(mma: A[][]) => A[]
What this says is that the function requires another type A, and it flattens arrays of arrays of A into arrays of A.
However, arrays are not the only higher kinded types around. I like to think of higher kinded types as containers that help abstract away some concept. For example, arrays abstract away the concept of iteration and Options abstract away the concept of null.
Option
Options are a higher kinded type that abstract away the concept of null. Although it requires some understanding to use and some plumbing to get it all set up, my promise to you is that if you start using Options, your code will be more reliable and readable.
Options are containers for optional values.
type Option<A> = None | Some<A>
At any one time, an option is either a None representing null or a Some<A> representing some value of type A.
If you have a function that returns an Option, for example head:
export declare const head: <A>(as: A[]) => Option<A>
By seeing that the function returns an Option, you know that by calling head, you may not get a value. However, by wrapping this concept up in an Option, you only need to deal with the null case when you unwrap it.
So how do you write your own function that returns an Option? If you are instantiating your own Options, you will need to look under the constructors part of the documentation. For example,
import { some, none, Option } from "fp-ts/lib/Option";

const some1 = (s: string): Option<number> => s === 'one' ? some(1) : none;
However, to extract out the value inside an Option, you will need to use one of the destructor methods. For example, the fold function in Option is a destructor.
export declare const fold: <A, B>(onNone: Lazy<B>, onSome: (a: A) => B) => (ma: Option<A>) => B
This type signature is a little complicated so let's break it down.
- fold: <A, B>...: This function has two type parameters, A and B.
- ...(onNone: Lazy<B>,...: This takes in an onNone function that returns a value of type B.
- ..., onSome: (a: A) => B)...: This also takes in an onSome function that takes in a value of type A and returns a value of type B.
- ... => (ma: Option<A>)...: This expects an Option of type A to be passed in.
- ... => B: After all arguments are passed in, this will return a value of type B.
Putting all this together, if we wanted to use our some1 function from earlier and print "success 1" if the value was "one" and otherwise print "failed", it would look like this:
import { some, none, fold, Option } from "fp-ts/lib/Option";

const some1 = (s: string): Option<number> => s === 'one' ? some(1) : none;

const print = (opt: Option<number>): string => {
  const onNone = () => "failed";
  const onSome = (a: number) => `success ${a}`;
  return fold(onNone, onSome)(opt);
}

console.log(print(some1("one")));
console.log(print(some1("not one")));
Now we know how to create an Option as well as extract out a value from an Option, however we are missing what in my opinion is the exciting part of Options which is the ability to transform them. Options are Functors which is a fancy way of saying that you can map them. In the documentation, you can see that Option has a Functor instance and a corresponding map instance operation.
What this means is that you can transform Options using regular functions. For example, if you wanted to write a function that adds one to an Option<number>, it would look like so:
import { map, Option } from "fp-ts/lib/Option";

const add1 = (num: number) => num + 1;

const add1Option = (optNum: Option<number>): Option<number> => map(add1)(optNum);
Now we know how to create options, transform them via map functions and use destructors to extract out the value from them whilst referring to the documentation each step of the way.
"Proxy by Name", a new feature in JNBridgePro 7.3
We’ve added a new feature to JNBridgePro 7.3 that we’re calling “proxy by name”. Proxy by name automatically maps the names of methods and constructor parameters from the underlying Java or .NET code into the generated proxies. It’s been in the top-requested-feature list for some time now, but until recently the .NET and Java APIs that were available haven’t been good enough for us to do a good job on this feature. We’re happy to now be able to make it available.
Proxying parameter names is powerful because it means that the native parameter names will appear in tool tips and IntelliSense in Visual Studio, Eclipse, and other IDEs, rather than a meaningless alias. This helps document the APIs, and presents the information right where it’s used: in the code editor. Previously, the tool tip and IntelliSense pop-ups contained placeholder names like p1, p2, etc., which provided no documentation value.
How it works: Java calling .NET
Here’s how “proxy by name” works. Let’s start in the Java-to-.NET direction. Assume we have a C# class:
public class DotNetClass
{
    public DotNetClass(string myStringParam, int myIntParam, string my2ndStringParam)
    {
    }

    public static void myMethod(float thisIsAFloatParam, long thisIsALongParam)
    {
    }
}
Let’s proxy this in the usual way, then start creating a Java project that uses the proxy jar file.
Note that the code completion pop-ups now show parameter names in the proxied methods and constructors. This wasn’t available in 7.2.
After the code is entered, the parameter names in the proxies are still available by hovering the cursor over the method name:
How it works: .NET calling Java
What about .NET-to-Java? Here, things are a little different. Let's start with a Java class similar to the .NET class we've been using:
public class JavaClass
{
    public JavaClass(String myStringParam, int myIntParam, String my2ndStringParam)
    {
    }

    public static void myMethod(float thisIsAFloatParam, long thisIsALongParam)
    {
    }
}
You’ll need to compile your code using Java 8 or later, and to target Java 8 binaries. You’ll also need to make sure that you’ve told the compiler to save the parameter metadata. If you’re using Eclipse, for example, your project should have these compilation settings:
Alternatively, if you’re using the command line, use the -parameters option:
javac -cp classpath_here -parameters classes_to_compile
Again, generate the proxies the usual way, and reference them in a Visual Studio project.
When you enter the names of methods and constructors, or hover the mouse over completed code, IntelliSense and tool tips work just as expected, and include the names of the proxied parameters:
We think you’ll find this new feature useful, and expect it’ll help speed up your development efforts. Let us know what you think! | https://jnbridge.com/blog/proxy-by-name-a-new-feature-in-jnbridgepro-7-3 | CC-MAIN-2018-34 | en | refinedweb |
Before the article begins, just a quick word about me and C#. I don't work in C#, I don't like C# (because I know too little about it), I don't plan to start working with C# for extended periods of time any time soon. That said, this is an article about customizing installations of Windows services, in C#!
You all probably think I'm a nut case. I submitted an article to CodeProject last week with the exact same title and topic as this one. Only I wrote it in VB.NET. And someone asked me whether I'll be posting a C# version. And I said, I would. But like I said, I am not a C# programmer. The two things that the initial article meant to do (mentioned in Introduction below) works in this C# version too. You might think the code looks all crappy and I don't know the C# variable declaration conventions. Well, it does and I don't. But like I said, it works.
The addition of service descriptions and setting a service to interact with the desktop are the only things I tested, as these were the only aims of the original article. The modAPI file has a lot of additional code that you can play with and experiment to your hearts desire. All in C#, of course.
The article below is a copy of the original, with the exception that everything VB has been changed to C#.
(The walkthrough covers VB and C#!)
void ProjectInstaller_AfterInstall(object sender,
System.Configuration.Install.InstallEventArgs e)
{
//Our code goes in this event because it is the only one that will do
//a proper job of letting the user know that an error has occurred,
//if one indeed occurs. Installation will be rolled back
//if an error occurs.
int iSCManagerHandle = 0;
int iSCManagerLockHandle = 0;
int iServiceHandle = 0;
bool bChangeServiceConfig = false;
bool bChangeServiceConfig2 = false;
modAPI.SERVICE_DESCRIPTION ServiceDescription;
modAPI.SERVICE_FAILURE_ACTIONS ServiceFailureActions;
modAPI.SC_ACTION[] ScActions = new modAPI.SC_ACTION[3];
//There should be one element for each action.
//The Services snap-in shows 3 possible actions.
bool bCloseService = false;
bool bUnlockSCManager = false;
bool bCloseSCManager = false;
IntPtr iScActionsPointer = new IntPtr();
try
{
//Obtain a handle to the Service Control Manager,
//with appropriate rights.
//This handle is used to open the relevant service.
iSCManagerHandle = modAPI.OpenSCManagerA(null, null,
modAPI.ServiceControlManagerType.SC_MANAGER_ALL_ACCESS);
//Check that it's open. If not throw an exception.
if (iSCManagerHandle < 1)
{
throw new Exception("Unable to open the Services Manager.");
}
//Lock the Service Control Manager database.
iSCManagerLockHandle = modAPI.LockServiceDatabase(iSCManagerHandle);
//Check that it's locked. If not throw an exception.
if (iSCManagerLockHandle < 1)
{
throw new Exception("Unable to lock the Services Manager.");
}
//Obtain a handle to the relevant service, with appropriate rights.
//This handle is sent along to change the settings. The second parameter
//should contain the name you assign to the service.
iServiceHandle = modAPI.OpenServiceA(iSCManagerHandle, "C#ServiceTest",
modAPI.ACCESS_TYPE.SERVICE_ALL_ACCESS);
//Check that it's open. If not throw an exception.
if (iServiceHandle < 1)
{
throw new Exception("Unable to open the Service for modification.");
}
//Change the service so that it can interact with the desktop.
bChangeServiceConfig = modAPI.ChangeServiceConfigA(iServiceHandle,
modAPI.ServiceType.SERVICE_WIN32_OWN_PROCESS |
modAPI.ServiceType.SERVICE_INTERACTIVE_PROCESS,
modAPI.SERVICE_NO_CHANGE, modAPI.SERVICE_NO_CHANGE,
null, null, 0, null, null, null, null);
//If the call is unsuccessful, throw an exception.
if (bChangeServiceConfig==false)
{
throw new Exception("Unable to change the Service settings.");
}
//Set the service description. ServiceDescription should be
//assigned the desired text before this call.
bChangeServiceConfig2 = modAPI.ChangeServiceConfig2A(iServiceHandle,
modAPI.InfoLevel.SERVICE_CONFIG_DESCRIPTION, ref ServiceDescription);
//If the update of the description is unsuccessful it is up to you to
//throw an exception or not. The fact that the description did not update
//should not impact the functionality of your service.
if (bChangeServiceConfig2==false)
{
throw new Exception("Unable to set the Service description.");
}
//Set the service failure actions.
bChangeServiceConfig2 = modAPI.ChangeServiceConfig2A(iServiceHandle,
modAPI.InfoLevel.SERVICE_CONFIG_FAILURE_ACTIONS,
ref ServiceFailureActions);
//If the update of the failure actions
//are unsuccessful it is up to you to
//throw an exception or not. The fact that
//the failure actions did not update
//should not impact the functionality of your service.
if (bChangeServiceConfig2==false)
{
throw new Exception("Unable to set the Service Failure Actions.");
}
}
catch(Exception ex)
{
//Throw the exception again so the installer can get to it
throw new Exception(ex.Message);
}
finally
{
//Close the handles if they are open.
Marshal.FreeHGlobal(iScActionsPointer);
if (iServiceHandle > 0)
{
bCloseService = modAPI.CloseServiceHandle(iServiceHandle);
}
if (iSCManagerLockHandle > 0)
{
bUnlockSCManager = modAPI.UnlockServiceDatabase(iSCManagerLockHandle);
}
if (iSCManagerHandle != 0)
{
bCloseSCManager = modAPI.CloseServiceHandle(iSCManagerHandle);
}
}
//When installation is done go check out your
//handy work using Computer Management!
}
That's about it then. Like I mentioned before, the modAPI module can be used to change other service settings as well. It will take some experimenting and some playing on your part, but that's what makes this fun!
Enjoy!
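As a postscript, a reader-contributed alternative (apparently from the article's discussion) reaches the same "interact with desktop" setting through WMI's Win32_Service class rather than the raw service APIs: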
using System.Management;
static public void ServiceDesktopPermission( string serviceName )
{
try
{
// This was added to circumvent the problem of ODBC drivers or other applications
// displaying a password dialog or some dialog that is displayed but doesn't show
// up due to that fact that "Allow service to interact with desktop" is not checked
// and therefore stopping the service from functioning as it is waiting for the dialog
// to be dismissed.
ConnectionOptions coOptions = new ConnectionOptions();
coOptions.Impersonation = ImpersonationLevel.Impersonate;
// CIMV2 is a namespace that contains all of the core OS and hardware classes.
// CIM (Common Information Model) which is an industry standard for describing
// data about applications and devices so that administrators and software
// management programs can control applications and devices on different
// platforms in the same way, ensuring interoperability across a network.
ManagementScope mgmtScope = new System.Management.ManagementScope(@"root\CIMV2", coOptions);
mgmtScope.Connect();
ManagementObject wmiService;
wmiService = new ManagementObject("Win32_Service.Name='" + serviceName + "'");
ManagementBaseObject InParam = wmiService.GetMethodParameters("Change");
InParam["DesktopInteract"] = true;
wmiService.InvokeMethod("Change", InParam, null);
}
catch
{
//TODO: Log this error
}
}
}
Introduction: This tutorial explains the in-built functional interface Supplier<T> introduced in Java 8. It explains, with the help of examples, how the Supplier interface is to be used via its get() method.
What is java.util.function.Supplier:
Supplier<T> is an in-built functional interface introduced in Java 8, in the java.util.function package. Supplier can be used in all contexts where there is no input but an output is expected. Since Supplier is a functional interface, it can be used as the assignment target for a lambda expression or a method reference.
Function Descriptor of Supplier<T>: Supplier's function descriptor is () -> T. This means that there is no input in the lambda definition and the returned output is an object of type T. To understand function descriptors in detail you can refer to the function descriptor tutorial.
Advantage of predefined java.util.function.Supplier: In all scenarios where there is no input to an operation and it is expected to return an output, the in-built functional interface Supplier<T> can be used without the need to define a new functional interface every time.
java.util.function.Supplier source code
@FunctionalInterface
public interface Supplier<T> {
    /**
     * Gets a result.
     * @return a result
     */
    T get();
}
Supplier<T>'s source code:
- Supplier has been defined with the generic type T, which is the same type that its get() method returns as output.
- The get() method is the primary abstract method of the Supplier functional interface, its function descriptor being () -> T. I.e. the get() method takes no input and returns an output of type T. I will explain usage of get() with a detailed example in the next section.
- All lambda definitions for Supplier must be written in accordance with the get() method's signature, and conversely all lambdas with the same signature as that of get() are candidates for assignment to an instance of the Supplier interface.
Usage of the get() method of Supplier:
To understand the get() method, let's take a look at SupplierFunctionExample's code below, after which I have explained in detail how the code works.
//SupplierFunctionExample.java
import java.util.Date;
import java.util.function.Supplier;

public class SupplierFunctionExample {

    public static void main(String args[]) {
        //Supplier instance with lambda expression
        Supplier<String> helloStrSupplier = () -> new String("Hello");
        String helloStr = helloStrSupplier.get();
        System.out.println("String in helloStr is->" + helloStr + "<-");

        //Supplier instance using method reference to default constructor
        Supplier<String> emptyStrSupplier = String::new;
        String emptyStr = emptyStrSupplier.get();
        System.out.println("String in emptyStr is->" + emptyStr + "<-");

        //Supplier instance using method reference to a static method
        Supplier<Date> dateSupplier = SupplierFunctionExample::getSystemDate;
        Date systemDate = dateSupplier.get();
        System.out.println("systemDate->" + systemDate);
    }

    public static Date getSystemDate() {
        return new Date();
    }
}
String in helloStr is->Hello<-
String in emptyStr is-><-
systemDate->Wed Dec 16 19:18:15 IST 2015
SupplierFunctionExample is my class with 2 methods: main() & getSystemDate(). getSystemDate() is a static method which simply returns the current system date and does not take any input. The method signature matches the function descriptor of Supplier, i.e. () -> T.
- In the main() method I have shown how to instantiate a Supplier interface instance in the following 3 ways:
- Using a Lambda Expression: I have defined a lambda expression which takes no input and returns a new String object with value set to "Hello". This lambda I have assigned to a Supplier<String> instance named helloStrSupplier. Invoking the functional method get() on helloStrSupplier gives us a String helloStr, which is then printed to show that it indeed contains the value "Hello".
- Using a Method Reference to the default constructor of String: A method reference to the default constructor of String is used to create a Supplier<String> instance named emptyStrSupplier. emptyStrSupplier is then used to create a String object named emptyStr using the get() method. emptyStr's value is then printed to show that it is empty, as defined.
- Using a Method Reference to getSystemDate(): A method reference to the getSystemDate() method of the SupplierFunctionExample class is used to create a Supplier<Date> instance named dateSupplier. dateSupplier is then used to create a Date object named systemDate by invoking the get() method on it. systemDate's value is then printed; the value printed was of 16-Dec when I ran this example.
Summary: In this tutorial we looked at what the Supplier<T> in-built interface defined in Java 8 is and what its main advantage is. We then looked at how to use the Supplier<T> interface via its get() method, with an example.
> So how can I process an incoming XML, without worrying about whether
> it's specified a namespace or not?

To a namespace aware processor, the name of an element includes its namespace, so this question is like saying: how do I match on an element without worrying about its name? The answer (in both cases) is to use *.

match="*[local-name()='foo']"

will match anything called foo in any namespace (or no namespace), or

match="foo|xx:foo"

will match foo in no namespace or (just) the namespace to which you have assigned the prefix xx:.

David
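For concreteness, a minimal stylesheet sketch showing the first pattern in place (the namespace URI bound to xx: is invented):

<xsl:stylesheet xmlns:
                xmlns:
                version="1.0">

  <!-- matches foo in any namespace, or in none -->
  <xsl:template match="*[local-name()='foo']">
    <xsl:message>matched a foo</xsl:message>
  </xsl:template>

</xsl:stylesheet>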
[meta] Ensure Geo replication can handle GitLab.com-scale update load
So far, our testing of Geo at GitLab.com scale has focused on backfill and basic correctness - can we successfully replicate all instance state on a static primary to a static secondary? When we create/update/delete a single repository/file/record, does the secondary work?
We need to get some assurance that Geo's replication architecture will handle updates at scale.
Investigating current replication requirements
(For almost every number mentioned, I think we should be interested in both average and peak rates. We might also want some measure of variance on the average, so we can get a feel for how "spiky" the demand is.)
Events communicated by the Geo event logs ("repository" normally means "project"):
- Repository updated ("git push", creating commits in UI/API)
- Repository deleted (UI/API action, namespace removal sends an event per project)
- Repository renamed (UI/API action, namespace removal sends an event per project)
- List of selective sync namespaces changed (ignore)
- Repository created (UI/API action) (why do we need this?)
- Repository migrated to hashed storage (V1 or V2) (ignore)
- LFS object deleted
- CI artifact deleted
We don't send events for these actions:
- LFS object added
- CI artifact added
- Upload added
- Upload removed
However, we may have an interest in these anyway, as backfill causes them to be replicated by the secondary.
Numbers we want to collect for GitLab.com
Event log depends on postgresql replication.
- Rate of data transfer for current postgresql replication
- Replication lag for current pg replication
- Rate of git push (+ UI/API) actions
- Rate of data transfer for git push (+ UI/API) actions
- Rate of project creations, renames and deletions
Numbers we may not want (if we depend on object storage, we might be able to ignore them)
- Rate of LFS uploads
- Rate of data transfer for LFS uploads
- Peak rate of LFS object deletions
- This happens in bulk via RemoveUnreferencedLfsObjectsWorker
- Rate of artifact uploads
- Rate of data transfer for artifact uploads
- Rate of artifacts removals
- Some (most?) removals are in bulk via ExpireBuildArtifactsWorker and ExpireBuildInstanceArtifactsWorker, so we may only want peak rates here.
- Rate of uploads
- Rate of data transfer for uploads
- Rate of upload removals
Adding these events to the log cursor will increase postgresql replication load, but hopefully this will be marginal compared to the rest of the database. Once we have numbers, we can make an estimate.
Once backfill is complete, we can (naively) assume that git data replication load for each secondary will exactly match the git push load on the primary.
It's a reasonable first-order approximation, and is more likely to over-state the load than under-state it.
Investigating current replication capacity
This is a fairly exploratory issue. We need to start with a primary and a fully-replicated secondary, apply a sustained period of database and filesystem writes to the primary (creating new issues, uploading files and LFS objects, renaming and updating repositories and wikis, etc), and observe the replication process in action.
We need to either shadow GitLab.com traffic (I'm not sure this is possible), or get some numbers from GitLab.com to tell us what rate and mix of updates we should be sending to our testbed primary and generate the load ourselves with, e.g., #3117 (comment 47093268)
Some important questions:
- How does postgresql replication lag change? What rate of sustained database updates can we maintain before we start falling behind? Can we add hardware to scale this?
- The Geo log cursor operates by adding and removing events to various tables on the primary. Does this generate substantial additional load on postgresql? How many events/second can we enqueue before we start to affect postgresql replication negatively?
- The Geo log cursor is a daemon on the secondary that processes those enqueued events. How many events/second can it handle without falling behind? What are its resource requirements while doing so? Can it keep up with ordinary and exceptional GitLab.com traffic?
- The secondary is notified of changes to repositories on the primary, and it enqueues an unbounded number of git fetch operations in response to log cursor events. Is this sustainable at GitLab.com scale? Should we apply concurrency limits?
- Once the updates to the primary have finished and the secondary claims to be synchronized again, is the secondary actually in a consistent state? Have unexpected race conditions removed or broken repositories or files? Did any events get missed? etc.
I reckon this could do with a GCP Migration label /cc @andrewn | https://gitlab.com/gitlab-org/gitlab-ee/issues/4030 | CC-MAIN-2018-34 | en | refinedweb |
Walkthrough: Outlining
You can implement language-based features such as outlining by defining the kinds of text regions you want to expand or collapse. You can define regions in the context of a language service, or you can define your own file name extension and content type and apply the region definition to only that type, or you can apply the region definitions to an existing content type (such as "text"). This walkthrough shows how to define and display outlining regions.
Starting in Visual Studio 2015, you do not install the Visual Studio SDK from the download center. It is included as an optional feature in Visual Studio setup. You can also install the VS SDK later on. For more information, see Installing the Visual Studio SDK.
To create a MEF project
Create a VSIX project. Name the solution OutlineRegionTest.
Add an Editor Classifier item template to the project. For more information, see Creating an Extension with an Editor Item Template.
Delete the existing class files.
Outlining regions are marked by a kind of tag (OutliningRegionTag). This tag provides the standard outlining behavior. The outlined region can be expanded or collapsed. The outlined region is marked by a PLUS SIGN if it is collapsed or a MINUS SIGN if it is expanded, and the expanded region is demarcated by a vertical line.
The following steps show how to define a tagger that creates outlining regions for all the regions that are delimited by "[" and "]".
To implement an outlining tagger
Add a class file and name it OutliningTagger.
Import the following namespaces.
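A set of namespace imports sufficient for the code that follows (a sketch; adjust to your project references):

using System;
using System.Collections.Generic;
using System.Linq;
using System.ComponentModel.Composition;
using Microsoft.VisualStudio.Text;
using Microsoft.VisualStudio.Text.Tagging;
using Microsoft.VisualStudio.Utilities;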
Create a class named OutliningTagger, and have it implement ITagger<T>:
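A minimal declaration consistent with the steps below (the members from the following steps go inside the class body):

internal sealed class OutliningTagger : ITagger<IOutliningRegionTag>
{
    // fields, constructor and methods from the following steps go here
}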
Add some fields to track the text buffer and snapshot and to accumulate the sets of lines that should be tagged as outlining regions. This code includes a list of Region objects (to be defined later) that represent the outlining regions.
string startHide = "[";          //the characters that start the outlining region
string endHide = "]";            //the characters that end the outlining region
string ellipsis = "...";         //the characters that are displayed when the region is collapsed
string hoverText = "hover text"; //the contents of the tooltip for the collapsed span
ITextBuffer buffer;
ITextSnapshot snapshot;
List<Region> regions;
Add a tagger constructor that initializes the fields, parses the buffer, and adds an event handler to the Changed event.
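A constructor sketch consistent with the fields above:

public OutliningTagger(ITextBuffer buffer)
{
    this.buffer = buffer;
    this.snapshot = buffer.CurrentSnapshot;
    this.regions = new List<Region>();
    this.ReParse();
    this.buffer.Changed += BufferChanged;
}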
Implement the GetTags method, which instantiates the tag spans. This example assumes that the spans in the NormalizedSpanCollection passed in to the method are contiguous, although this may not always be the case. This method instantiates a new tag span for each of the outlining regions.
public IEnumerable<ITagSpan<IOutliningRegionTag>> GetTags(NormalizedSnapshotSpanCollection spans)
{
    if (spans.Count == 0)
        yield break;
    List<Region> currentRegions = this.regions;
    ITextSnapshot currentSnapshot = this.snapshot;
    SnapshotSpan entire = new SnapshotSpan(spans[0].Start, spans[spans.Count - 1].End)
        .TranslateTo(currentSnapshot, SpanTrackingMode.EdgeExclusive);
    int startLineNumber = entire.Start.GetContainingLine().LineNumber;
    int endLineNumber = entire.End.GetContainingLine().LineNumber;
    foreach (var region in currentRegions)
    {
        if (region.StartLine <= endLineNumber && region.EndLine >= startLineNumber)
        {
            var startLine = currentSnapshot.GetLineFromLineNumber(region.StartLine);
            var endLine = currentSnapshot.GetLineFromLineNumber(region.EndLine);

            //the region starts at the beginning of the "[", and goes until the *end*
            //of the line that contains the "]".
            yield return new TagSpan<IOutliningRegionTag>(
                new SnapshotSpan(startLine.Start + region.StartOffset, endLine.End),
                new OutliningRegionTag(false, false, ellipsis, hoverText));
        }
    }
}
Declare a TagsChanged event handler.
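The ITagger<T> interface requires this event; its declaration:

public event EventHandler<SnapshotSpanEventArgs> TagsChanged;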
Add a BufferChanged event handler that responds to Changed events by parsing the text buffer.
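A handler sketch that defers to ReParse, ignoring stale snapshots:

void BufferChanged(object sender, TextContentChangedEventArgs e)
{
    // If this isn't the most up-to-date version of the buffer,
    // ignore it for now (another change event is coming).
    if (e.After != buffer.CurrentSnapshot)
        return;
    this.ReParse();
}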
Add a method that parses the buffer. The example given here is for illustration only. It synchronously parses the buffer into nested outlining regions.
void ReParse()
{
    ITextSnapshot newSnapshot = buffer.CurrentSnapshot;
    List<Region> newRegions = new List<Region>();

    //keep the current (deepest) partial region, which will have
    //references to any parent partial regions.
    PartialRegion currentRegion = null;

    foreach (var line in newSnapshot.Lines)
    {
        int regionStart = -1;
        string text = line.GetText();

        //lines that contain a "[" denote the start of a new region.
        if ((regionStart = text.IndexOf(startHide, StringComparison.Ordinal)) != -1)
        {
            int currentLevel = (currentRegion != null) ? currentRegion.Level : 1;
            int newLevel;
            if (!TryGetLevel(text, regionStart, out newLevel))
                newLevel = currentLevel + 1;

            //levels are the same and we have an existing region;
            //end the current region and start the next
            if (currentLevel == newLevel && currentRegion != null)
            {
                newRegions.Add(new Region()
                {
                    Level = currentRegion.Level,
                    StartLine = currentRegion.StartLine,
                    StartOffset = currentRegion.StartOffset,
                    EndLine = line.LineNumber
                });

                currentRegion = new PartialRegion()
                {
                    Level = newLevel,
                    StartLine = line.LineNumber,
                    StartOffset = regionStart,
                    PartialParent = currentRegion.PartialParent
                };
            }
            //this is a new (sub)region
            else
            {
                currentRegion = new PartialRegion()
                {
                    Level = newLevel,
                    StartLine = line.LineNumber,
                    StartOffset = regionStart,
                    PartialParent = currentRegion
                };
            }
        }
        //lines that contain "]" denote the end of a region
        else if ((regionStart = text.IndexOf(endHide, StringComparison.Ordinal)) != -1)
        {
            int currentLevel = (currentRegion != null) ? currentRegion.Level : 1;
            int closingLevel;
            if (!TryGetLevel(text, regionStart, out closingLevel))
                closingLevel = currentLevel;

            //the regions match
            if (currentRegion != null && currentLevel == closingLevel)
            {
                newRegions.Add(new Region()
                {
                    Level = currentLevel,
                    StartLine = currentRegion.StartLine,
                    StartOffset = currentRegion.StartOffset,
                    EndLine = line.LineNumber
                });

                currentRegion = currentRegion.PartialParent;
            }
        }
    }

    //determine the changed span, and send a changed event with the new spans
    List<Span> oldSpans = new List<Span>(this.regions.Select(r =>
        AsSnapshotSpan(r, this.snapshot)
            .TranslateTo(newSnapshot, SpanTrackingMode.EdgeExclusive)
            .Span));
    List<Span> newSpans = new List<Span>(newRegions.Select(r =>
        AsSnapshotSpan(r, newSnapshot).Span));

    NormalizedSpanCollection oldSpanCollection = new NormalizedSpanCollection(oldSpans);
    NormalizedSpanCollection newSpanCollection = new NormalizedSpanCollection(newSpans);

    //the changed regions are regions that appear in one set or the other, but not both.
    NormalizedSpanCollection removed = NormalizedSpanCollection.Difference(oldSpanCollection, newSpanCollection);

    int changeStart = int.MaxValue;
    int changeEnd = -1;

    if (removed.Count > 0)
    {
        changeStart = removed[0].Start;
        changeEnd = removed[removed.Count - 1].End;
    }

    if (newSpans.Count > 0)
    {
        changeStart = Math.Min(changeStart, newSpans[0].Start);
        changeEnd = Math.Max(changeEnd, newSpans[newSpans.Count - 1].End);
    }

    this.snapshot = newSnapshot;
    this.regions = newRegions;

    if (changeStart <= changeEnd)
    {
        ITextSnapshot snap = this.snapshot;
        if (this.TagsChanged != null)
            this.TagsChanged(this, new SnapshotSpanEventArgs(
                new SnapshotSpan(this.snapshot, Span.FromBounds(changeStart, changeEnd))));
    }
}
The following helper method gets an integer that represents the level of the outlining, such that 1 is the leftmost brace pair.
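A sketch consistent with how ReParse calls it (it reads an optional numeric level written immediately after the brace):

static bool TryGetLevel(string text, int startIndex, out int level)
{
    level = -1;
    if (text.Length > startIndex + 3)
    {
        if (int.TryParse(text.Substring(startIndex + 1), out level))
            return true;
    }
    return false;
}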
The following helper method translates a Region (defined later in this topic) into a SnapshotSpan.
static SnapshotSpan AsSnapshotSpan(Region region, ITextSnapshot snapshot)
{
    var startLine = snapshot.GetLineFromLineNumber(region.StartLine);
    var endLine = (region.StartLine == region.EndLine) ? startLine
        : snapshot.GetLineFromLineNumber(region.EndLine);
    return new SnapshotSpan(startLine.Start + region.StartOffset, endLine.End);
}
The following code is for illustration only. It defines a PartialRegion class that contains the line number and offset of the start of an outlining region, and also a reference to the parent region (if any). This enables the parser to set up nested outlining regions. A derived Region class contains a reference to the line number of the end of an outlining region.
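Definitions consistent with their use in ReParse:

class PartialRegion
{
    public int StartLine { get; set; }
    public int StartOffset { get; set; }
    public int Level { get; set; }
    public PartialRegion PartialParent { get; set; }
}

class Region : PartialRegion
{
    public int EndLine { get; set; }
}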
You must export a tagger provider for your tagger. The tagger provider creates an OutliningTagger for a buffer of the "text" content type, or else returns an OutliningTagger if the buffer already has one.
To implement a tagger provider
Create a class named OutliningTaggerProvider that implements ITaggerProvider, and export it with the ContentType and TagType attributes.
Implement the CreateTagger<T> method by adding an OutliningTagger to the properties of the buffer.
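A provider sketch consistent with the two steps above:

[Export(typeof(ITaggerProvider))]
[TagType(typeof(IOutliningRegionTag))]
[ContentType("text")]
internal sealed class OutliningTaggerProvider : ITaggerProvider
{
    public ITagger<T> CreateTagger<T>(ITextBuffer buffer) where T : ITag
    {
        // Create a single tagger per buffer, cached in its property bag.
        Func<ITagger<T>> sc = delegate() { return new OutliningTagger(buffer) as ITagger<T>; };
        return buffer.Properties.GetOrCreateSingletonProperty<ITagger<T>>(sc);
    }
}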
To test this code, build the OutlineRegionTest solution and run it in the experimental instance.
To build and test the OutlineRegionTest solution
Build the solution.
When you run this project in the debugger, a second instance of Visual Studio is instantiated.
Create a text file. Type some text that includes both the opening brace and the closing brace.
There should be an outlining region that includes both braces. You should be able to click the MINUS SIGN to the left of the open brace to collapse the outlining region. When the region is collapsed, the ellipsis symbol (...) should appear to the left of the collapsed region, and a popup containing the text hover text should appear when you move the pointer over the ellipsis.
See also: Walkthrough: Linking a Content Type to a File Name Extension
Proposed Features/amenity=reception desk
Rejection
On its second presentation for voting, the proposal was rejected, with 15 votes in favour and 7 opposed (68% approval; present rules require 75% for formal approval). Most of those opposed point to the use of 'desk' in the name/value and suggest the use of 'point' in its place. There is also one vote pointing to the use of reception_area.
A total of 22 people voted, compared to some 38 the first time. Does a more controversial proposal attract more participation? I shall put up proposals for reception, reception_point and reception_area and see what happens.
Proposal
A reception desk provides a place where people (visitors, patients, or clients) arrive to be greeted, have any information recorded, and have the relevant person contacted; the visitors, patients, or clients are then sent on to the relevant person or place.
How to Map
Map the desk itself - not the waiting/reception area nor the building.
The desk may also be mapped as a member of a relation: when a relation is declared, include the node/way/area that carries the reception_desk tag as one member of the relation. (An example node combining the tags below is given after the tag list.)
Additional Tags
- operator=*
- phone=*
- opening_hours=*
- name=*
- level=* for multi-level indoor tagging
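As an illustration, a reception desk mapped as a node might carry tags like these (OSM XML; the id, coordinates and values are invented):

<node id="-1" lat="-33.8675" lon="151.2070">
  <tag k="amenity" v="reception_desk"/>
  <tag k="name" v="Main Reception"/>
  <tag k="operator" v="Example Hospital"/>
  <tag k="opening_hours" v="Mo-Fr 08:00-17:00"/>
  <tag k="phone" v="+61 2 0000 0000"/>
  <tag k="level" v="0"/>
</node>

The same tags apply unchanged if the desk is drawn as a small closed way (area) rather than a node.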
Rendering
As a desk, represented as a line, with a receptionist (represented as a head-and-shoulders figure) over it, in the amenity colour of brown.
----
The above represents what should appear on the resulting wiki page. Below is an explanation of the need for this tag and of why this method is used.
Rationale (Verbose Explanation)
It is particularly useful to know the location of the reception desk when it is located away from the typical place (near a front entry) or where there is only one amongst a number of large buildings. The tag was first seen as a suggested extension for camp sites, and is thought to have wider application to offices, hotels, hospitals and educational features.
What key?
- tourism=reception_desk
The tourism key is used for places and features of interest to tourists, but not every reception desk serves a tourist feature. For example, one office reception desk is some 600 m from the 'front entry' and is hard to find. It is not a tourist site in any way.
- office=reception_desk
The office key is used for "a place predominantly selling services". A reception desk does not commonly sell a service; it usually directs to a service.
- amenity=reception_desk
The amenity key is used for "an assortment of community facilities". You could view the reception_desk as similar to a toilet or telephone (both key:amenity); they are present on tourist sites, businesses/offices and educational institutions. They provide a needed facility to tourists and locals alike.
Of these 3 keys, amenity is the 'best' key for reception_desk.
What value? (Why desk?)
Reception by itself could be confused with GPS reception, radio or TV reception, or a wedding reception. In order to distinguish it, the word 'desk' was added. I've never encountered this kind of reception that did not have a desk, usually with a telephone, these days with a computer, and possibly a sign-in book (for occupational health and safety: to record who enters and whether they leave, so that people can be accounted for in any evacuation). So: reception_desk.
Association with parent
The reception_desk would service some other feature (the 'parent': an office, camp_site etc.) and should be associated with it. The two may share tags, the operator and/or name being the same for both. The relation between the reception_desk and its parent feature may be indicated by:
- the reception_desk being enclosed by the parent feature.
- the proximity of the reception_desk and the parent feature.
- A site relation (a sketch is given below). See site
This is something that applies to some features and could be addressed by a wiki page.
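A sketch of such a site relation (the ids are placeholders; member roles are left empty, as role conventions for site relations vary):

<relation id="-10">
  <tag k="type" v="site"/>
  <tag k="name" v="Example Office Campus"/>
  <member type="node" ref="-1" role=""/>  <!-- the reception desk -->
  <member type="way" ref="-2" role=""/>   <!-- the parent feature -->
</relation>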
Indoor Mapping?
Many of the present tags in use will need to be considered by any indoor mapping system. The addition of this tag does not add any complexity to that; it is similar to the tags toilet and telephone, and to all the key:office and key:shop values. A solution for those tags should also work for reception_desk. I see no reason why this tag has to be 'special' in some way for indoor tagging.
See the indoor mapping wiki page.
Possibly the most relevant is the level=* tag.
This is something that could be addressed by a wiki page, associated with the 'Indoor_Mapping' page above.
Relations
This can be an element of a relation, but not a tag on a relation (like a name, operator or contact could be). The relation has a declaration of "type="; in that position you cannot have "reception_desk", as it would not identify the location of the reception_desk. The above is true for all amenity values.
What relations would it be used with? Those that link it to the thing it services (its 'parent').
Examples
Any hotel will have a reception desk, as do many office, industrial, camp and caravan sites.
From Taginfo value=reception there are over 900 instances; most of those follow the suggested extended tags for camp sites. Many of the rest are names for buildings, probably where the mapper wants to identify where the reception desk is.
Presently (April 2015) on the OSM database there are
- 14 amenity=reception_desk (this proposal)
related or possibly related
- 557 camp_site=reception (previous proposal related to camp sites)
- 295 name=Reception
- 26 name=reception
- 11 name=Main Reception
- 11 office=reception
- 105 amenity=reception_area
At least some of these show a need for the tagging of a 'reception desk'.

Please refrain from publicly commenting on other people's votes, no matter what the comment is; voters are entitled to their ideas. Discussion should have settled any issues, and the proposal should have adequately described the feature. Further discussion here is too late.
Previous Voting
Summary: 21 for, 17 against.
Most objections were to the key amenity. Some were about 'indoor mapping'. One was about 'desk'. Another was about how to relate it to the parent feature. And another said it needs more time. I have tried to address these objections by the explanations above.
Previous votes have been moved to the discussion page to avoid confusion with the new process here.
New voting
Vote start 2 June 2015, possible end 16 June 2015.
I approve this proposal. This is a needed tag, and amenity is just fine. I wish it did not say "desk", as it's not always a "desk", but that's not a fatal problem. Brycenesbitt (talk) 23:43, 1 June 2015 (UTC)
I approve this proposal. When I vist Very large corporate campuses and government facilities to see a worker - I have to go to a special place called the reception desk. I have visited more reception desks at government facilities and high security places to gain entry to fix computers - many more than the concierge at a hotel. People who say this is tourism are very narrow minded and very wrong - Or didn't bother to read a wiki page they just voted on (which is my bet). Go to SAIC or General Atomics in San Diego and try to find their reception entrance and reception desk just by looking at a satellite image. it is impossible, and non-obvious from the ground. How about where to check in when visiting Apple? Where do you go at the infinite loop campus to check-in to see Mr Cook? When visiting a Factory with a wide open gate and 50 buildings - Where do visitiors check-in to see the plant manager? All of these are a non-tourism function - it is merely an amenity. *every single place* I would wish to tag a reception desk is in a non-tourist, commercial or industrial (or military) facility. People voting otherwise simply have no life experience outside of hotels - and should be more imaginative. Javbw (talk) 01:21, 2 June 2015 (UTC)
I approve this proposal. Polarbear w (talk) 07:46, 2 June 2015 (UTC)
I approve this proposal. --Peter Mead (talk) 08:36, 2 June 2015 (UTC)
I approve this proposal. I think micromapping in general is the way to go for OSM and this is a part of it. -- Kocio (talk) 09:04, 2 June 2015 (UTC)
I oppose this proposal. I was in favour of it until I noticed the instruction "map the desk itself". In my opinion, focussing on the piece of furniture rather than the facility as a whole is a really strange decision. --Tordanik 11:59, 16 June 2015 (UTC)
- Selected the desk as a fixed point that can be easily located on the ground. It may not be central to an area, but it is THE place you go to.
I approve this proposal. Dr Centerline (talk) 15:45, 2 June 2015 (UTC)
I approve this proposal. --Kotya (talk) 22:34, 3 June 2015 (UTC)
I approve this proposal. Jojo4u (talk) 09:20, 5 June 2015 (UTC)
I approve this proposal. --Mapper999 (talk) 14:54, 6 June 2015 (UTC)
I oppose this proposal. Too vague in between tourism, office etc. and needed only in very special cases. Vademecum (talk) 18:42, 10 June 2015 (UTC)
- The definitions for tourism, office etc. come from the OSM wiki, you find them vague? They are broad definitions so that all cases of tourism, office etc fit under those definition .. but other cases don't.
- You say this is "needed only in special cases" yet you oppose the inclusion? Police Stations too are only needed in special cases! If it is needed or of use then it should be included. Warin61 (talk) 00:40, 11 June 2015 (UTC)
- Don't twist my words. First it's "needed only in very special cases" and therefore the definition is too vague and not exact and tries to combine cases which don't belong together. Vademecum (talk) 18:11, 17 June 2015 (UTC)
I approve this proposal. Waldhans (talk) 07:29, 16 June 2015 (UTC)
I oppose this proposal. I had voted yes, but I'd prefer a different tag like amenity=reception_area which what I noticed now, outnumbers the desk tag in actual use, and which can be more universally applied. This should be a tag about a feature (the spot/area where you should arrive, and where you typically get help, I don't support a tag that is about a piece of furniture --Dieterdreist (talk) 12:34, 16 June 2015 (UTC)
- One problem with 'area' is that a large area could be tagged, what is needed is the smaller area where you get help, not a waiting area.
- Another problem with reception_area is that the term is used for satellite reception areas and TV reception areas. In other words, it is used for other things that I'd rather not have confused with this tag.
- The amenity=reception_area has no documentation. Further they are all nodes .. no areas (ways) at all. One of them is a building .. on a node.
- Amenity has a few tags about 'furniture' - bench, waste_bin etc. I see nothing wrong with tagging furniture that is of use, and is there on the 'ground'. Warin61 (talk) 00:56, 19 June 2015 (UTC)
I oppose this proposal. This proposal is even worse than the previous version. The amenity tag should be for mapping facilities of use to the general public, not for mapping furniture. It is not useful to just map a 'desk', it is useful to map a reception facility. Which might be desk, or it might be several desks (plus chairs and other furniture), or a distinct reception building, or a booth, or a phone or screen etc. So all of those should be mapped as a reception. The "How to map" section is ridiculous micromapping. Why just map the outline of the desk, and not the chair next to it? Instead you should map the whole reception area or building etc. --Vclaw (talk) 10:56, 16 June 2015 (UTC)
- The problem with 'area' is that a large area could be tagged, what is needed is the smaller area where you get help, not a waiting area.
- Amenity has a few tags about 'furniture' - bench, waste_bin etc. I see nothing wrong with tagging furniture that is of use, and is there on the 'ground'.
- Mapping a chair that may be easily moved has not been proposed. Warin61 (talk) 00:56, 19 June 2015 (UTC)
I approve this proposal. In my opinion, the reception is always at the main entrance, even in buildings with tourism=information. The tag amenity=porters_lodge would be more important for companies. On campsites, a node amenity=reception would be good. A reception desk is always a matter of indoor tagging. RoGer6 (talk) 16:34, 16 June 2015 (UTC)
- I'm still not satisfied. Nevertheless, I agree to it for now. However, I will not use the tag. RoGer6 (talk) 07:33, 21 June 2015 (UTC)
- I know of two places where the reception is a long way from the 'main entrance' in Australia. Someone else knows of a camp site reception that is a long way from the camp site (Africa I think). So the reception is not always at the main entrance! Warin61 (talk) 00:56, 19 June 2015 (UTC)
- I have never ever come across a 'porters lodge' in any company, hotel, motel, camp site or office. They do have reception desks. If you want to tag a 'porters lodge', why don't you propose it? Porter = a person who carries luggage. A porters lodge may be found in university accommodation, not something the general public comes across frequently? According to taginfo there are no uses of porters lodge; there are 16 police transporters and 1 court reporter ... but no porters. Warin61 (talk) 02:06, 19 June 2015 (UTC)
I approve this proposal. This is needed in hospitals and companies (entrance=main does not work there). I added many in the last weeks. -- Zuse
I oppose this proposal. .. with the same reason as Dieterdreist and Vclaw. --Foxxi59 (talk) 04:31, 21 June 2015 (UTC)
I approve this proposal. —M!dgard [ talk | projects | current proposal ] 15:36, 21 June 2015 (UTC)
I approve this proposal. Janko (talk) 16:01, 21 June 2015 (UTC)
I oppose this proposal. Althio (talk) 20:52, 21 June 2015 (UTC) The idea is good, but the emphasis on 'desk' in the key/value and description is rather annoying. Following other votes and comments from Brycenesbitt, Tordanik, Dieterdreist, Vclaw. I would prefer a more generic value (say amenity=reception_point) with potential subtagging and optional mention of furniture and desks. Along the lines of (please mix and match):
- amenity=reception_point +[reception_point=customer] +[furniture=booth]
- amenity=reception_point +[reception_point=visitor] +[furniture=desk]
- amenity=reception_point +[reception_point=delivery] +[furniture=door_phone]
I oppose this proposal. Skippern (talk) 03:11, 22 June 2015 (UTC) I objected in the first round because of amenity=*, but I am also against the use of desk. Agree with Althio that reception_point is better, but it should be put in another namespace than amenity. This feature has to do with micro-mapping and indoor mapping, and can take any form, not only a desk. I have seen anything from a small hatch in a wall, to a window, to a large waiting room, to a signed point, to a telephone, so calling it a desk will eventually lead to tags such as reception_room, reception_window, reception_phone, etc. Rename it to reception_point and if needed specify the design and nature of the reception with different sub-tags. TL;DR rename reception_desk to reception_point, no to amenity=
I approve this proposal. Warin61 (talk) 10:25, 23 June 2015 (UTC) | https://wiki.openstreetmap.org/wiki/Proposed_Features/amenity%3Dreception_desk | CC-MAIN-2018-34 | en | refinedweb |
The states of the thread of execution. The getState() method. Example
Contents
- 1. The states of threads of execution and their representation in Java. The Thread.State enumeration
- 2. The getState() method. General form
- 3. An example that demonstrates the definition of states of threads of execution
- Related topics
1. The states of threads of execution and their representation in Java. The Thread.State enumeration
Once created, a thread of execution can be in one of several states. In Java, the states of threads of execution are defined by predefined constants from the State enumeration of the Thread class. Below is a description of these states:
- BLOCKED - The thread has suspended execution because it is waiting to acquire a lock
- NEW - The thread has been created, but it has not yet started its execution
- RUNNABLE - The thread is currently executing or will start executing when it gains access to the CPU
- TIMED_WAITING - The thread has suspended execution for a specified period of time, for example after calling sleep(), or wait() and join() with a timeout
- WAITING - The thread has suspended execution while it waits for some action (calling the wait() or join() methods without a specified timeout)
- TERMINATED - The thread has completed its execution
Figure 1 depicts the possible changes and directions of thread states.
Figure 1. Thread states diagram
2. The getState() method. General form
The getState() method of the Thread class is used to get the state of a thread of execution. The general form of the method is as follows:
public Thread.State getState()
where:
- Thread.State is an enumeration type that defines the possible states of a thread of execution: NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED.
If a class implements a thread of execution, then in this class you can get the state of the thread approximately as follows:
class MyThread implements Runnable {
    // Reference to the thread
    Thread thr;

    // Class constructor
    MyThread() {
        ...
        // Create a thread
        thr = new Thread(this);
        ...
    }

    // Thread code
    public void run() {
        // Get the state of the thread
        Thread.State ts;
        ts = thr.getState();

        // Handle the thread state
        if (ts == Thread.State.BLOCKED) {
            ...
        } else if (ts == Thread.State.NEW) {
            ...
        } else if (ts == Thread.State.RUNNABLE) {
            ...
        } else if (ts == Thread.State.TIMED_WAITING) {
            ...
        } else {
            // Process the state Thread.State.WAITING
            ...
        }
    }
}
3. An example that demonstrates the definition of states of threads of execution
The example defines the states of the main and child threads. The state value is displayed in the static State method of the ProcessState class.
// A class containing a static method that handles thread state
class ProcessState {
    public static String State(Thread.State ts) {
        if (ts == Thread.State.BLOCKED)
            return "BLOCKED";
        if (ts == Thread.State.NEW)
            return "NEW";
        if (ts == Thread.State.RUNNABLE)
            return "RUNNABLE";
        if (ts == Thread.State.TIMED_WAITING)
            return "TIMED_WAITING";
        return "WAITING";
    }
}

// The class that encapsulates the thread of execution
class MyThread implements Runnable {
    Thread t;

    // Constructor
    MyThread(String threadName) {
        // Create a thread named threadName
        t = new Thread(this, threadName);

        // The thread has not been started yet; display the state of the thread
        Thread.State ts = t.getState();
        System.out.println("State of MyThread in constructor: " + ProcessState.State(ts));

        // Start the thread for execution
        t.start();
    }

    // Thread execution code
    public void run() {
        Thread.State ts = t.getState();
        System.out.println("State of MyThread in run() method: " + ProcessState.State(ts));
    }
}

public class ThreadState {
    public static void main(String[] args) {
        // 1. Determine the state of a child thread
        MyThread mt = new MyThread("mt"); // Create and start a thread
        try {
            mt.t.join();
            System.out.println("State after join(): " + ProcessState.State(mt.t.getState()));
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        // 2. Determine the state of the main thread
        Thread thr = Thread.currentThread();
        try {
            Thread.sleep(2000);
            System.out.println("Main thread after sleep(): " + ProcessState.State(thr.getState()));
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
The result of running the program:
State of MyThread in constructor: NEW
State of MyThread in run() method: RUNNABLE
State after join(): WAITING
Main thread after sleep(): RUNNABLE
Related topics
- Multitasking. Threads of execution. Basic concepts
- Java language tools for working with threads of execution. Thread class. Runnable interface. Main thread of execution. Creating a child thread
- Methods of the Thread class: getName(), start(), run(), sleep(). Examples
- Methods of the Thread class: join(), isAlive(), getPriority(), setPriority(). Examples
- Synchronization. Monitor. General concepts. The synchronized keyword
- Interaction between threads. Methods wait(), notify(), notifyAll(). Examples
WordPress: How to disable a plugin on all pages except for a specific one
A few days ago we were struggling to find a way to limit the amount of plugins that load at any point on a WordPress website. We noticed that several plugins enqueue their scripts and their styles in all requests to the website even if they are actually used on a single page only. This issue was important to address as it was making the whole server slower by giving it extra requests from the client that would never provide any actual benefit to the user.
Initially, we tried to selectively enable those plugins on their respective pages, but we did not get it right and things would load out of order and break. Instead of following the 'enable when needed' methodology, we decided to follow the 'disable unless needed' methodology, which seemed simpler at the time.
Our changes involved adding the following code to the functions.php file of our child theme.
//Register a filter at the correct event
add_filter( 'option_active_plugins', 'bf_plugin_control' );

function bf_plugin_control($plugins) {
  // If we are in the admin area do not touch anything
  if (is_admin()) {
    return $plugins;
  }

  // Check if we are at the expected page, if not remove the plugin from the active plugins list
  if (is_page("csv-to-kml-cell-site-map") === FALSE) {
    // Finding the plugin in the active plugins list
    $key = array_search( 'csv-kml/index.php', $plugins );
    if ( false !== $key ) {
      // Removing the plugin and dequeuing its scripts
      unset( $plugins[$key] );
      wp_dequeue_script( 'bf_csv_kml_script' );
    }
  }

  if (is_page("random-password-generator") === FALSE) {
    $key = array_search( 'bytefreaks-password-generator/passwordGenerator.php', $plugins );
    if ( false !== $key ) {
      unset( $plugins[$key] );
    }
  }

  if (is_page("xml-tree-visualizer") === FALSE) {
    $key = array_search( 'xmltree/xml-tree.php', $plugins );
    if ( false !== $key ) {
      unset( $plugins[$key] );
      wp_dequeue_script( 'bf_xml_namespace' );
      wp_dequeue_style( 'bf_xml_namespace' );
    }
  }

  return $plugins;
}
One day, we will clean the above code to make it tidy and reusable.. one day, that day is not today.
What the code above does is the following:
- Using is_admin, it checks whether the Dashboard or the administration panel is being displayed; in that case, it does not make any changes.
- With is_page, it additionally checks whether the request is for one of the specified pages and disables the plugin if the check fails.
- The PHP function array_search checks whether our plugin file is expected to be executed (all files in $plugins are the plugin files that are expected to be executed).
- wp_dequeue_script and wp_dequeue_style remove the previously enqueued scripts and styles of the plugin, as long as you know the handles (or namespaces) of the enqueued items.
To get the handles (namespaces), we went through the plugin code and found all instances of wp_enqueue_script and wp_enqueue_style.
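For reference, a plugin typically registers its assets with calls like the following sketch (the file paths here are hypothetical); the first argument of each call is the handle that must be passed to the dequeue functions:

// Hypothetical example of a plugin enqueuing its assets;
// the first argument is the handle used by wp_dequeue_script()/wp_dequeue_style().
wp_enqueue_script( 'bf_csv_kml_script', plugins_url( 'js/script.js', __FILE__ ) );
wp_enqueue_style( 'bf_xml_namespace', plugins_url( 'css/style.css', __FILE__ ) );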
Please note that several small plugins do not have additional items in queue so no further action is needed. | https://bytefreaks.net/2019/05 | CC-MAIN-2022-27 | en | refinedweb |
#include <CGAL/Shape_detection_3/Region_growing.h>
A shape detection algorithm using a region growing [1].
An
Iterator_range with a bidirectional iterator with value type
std::size_t as indices into the input data that has not been assigned to a shape.
As this range class has no
size() method, the method
Efficient_RANSAC::number_of_unassigned_points() is provided.
Registers in the detection engine the shape type ShapeType that must inherit from Shape_base. For example, for registering a plane as detectable shape you should call region_growing.add_shape_factory< Shape_detection_3::Plane<Traits> >();. Note that if your call is within a template, you should add the template keyword just before add_shape_factory: region_growing.template add_shape_factory< Shape_detection_3::Plane<Traits> >();.
Removes all detected shapes. All internal structures are cleaned, including formerly detected shapes; thus iterators and ranges retrieved through shapes(), planes() and indices_of_unassigned_points() are invalidated.
Performs the shape detection. Returns true if shape types have been registered and input data has been set; otherwise, false is returned.
Returns an Iterator_range with a bidirectional iterator with value type boost::shared_ptr<Shape> over the detected shapes in the order of detection. This is similar to planes(), except that it returns the planes with the abstract type Shape.
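As an illustration only (not part of the reference above), the following sketch shows how these methods fit together, assuming a point cloud with normals and the typedef conventions used in the CGAL shape-detection examples:

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Point_with_normal_3.h>
#include <CGAL/property_map.h>
#include <CGAL/Shape_detection_3.h>
#include <iostream>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef CGAL::Point_with_normal_3<Kernel>                   Point_with_normal;
typedef std::vector<Point_with_normal>                      Pwn_vector;
typedef CGAL::Identity_property_map<Point_with_normal>      Point_map;
typedef CGAL::Normal_of_point_with_normal_map<Kernel>       Normal_map;
typedef CGAL::Shape_detection_3::Shape_detection_traits<Kernel, Pwn_vector, Point_map, Normal_map> Traits;
typedef CGAL::Shape_detection_3::Region_growing<Traits>     Region_growing;
typedef CGAL::Shape_detection_3::Plane<Traits>              Plane;

int main() {
  Pwn_vector points; // assumed to be filled with points and their normals
  Region_growing region_growing;
  region_growing.set_input(points);
  // Register planes as a detectable shape type
  region_growing.add_shape_factory<Plane>();
  // detect() returns false if no shape type or no input data was set
  if (region_growing.detect()) {
    // Iterate over the detected shapes in the order of detection
    for (const auto& shape : region_growing.shapes())
      std::cout << shape->info() << std::endl;
    std::cout << region_growing.number_of_unassigned_points()
              << " points were not assigned to a shape." << std::endl;
  }
  return 0;
}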
Here are the highlights of what’s new and improved in 7.16. For detailed information about this release, check out the release notes.
Upgrade Assistant for 8.x
Upgrade Assistant is your one-stop shop to help you prepare for upgrading to 8.x. Review and address Elasticsearch and Kibana deprecation issues, analyze Elasticsearch deprecation logs, migrate system indices, and back up your data before upgrading, all from this app.
Unified integrations view
All ingest options for Elastic have been moved to a single Integrations view. This provides a more consistent experience for onboarding to Elastic and increases the discoverability of integrations. All entry points for adding integrations now route to this page.
Reference lines in Lens
Reference lines are now available in Lens to help you easily identify important values in your visualizations. Create reference lines with static values, dynamic data using Elasticsearch Quick Functions, or define with a custom Formula. Reference lines can come from separate index patterns, such as a goal dataset that is independent of the source data.
With reference lines, you can:
- Track metrics against goals, warning zones, and more.
- Add display options, such as color, icons, and labels.
- Apply color to the area above or below the reference line.
Enhancements to visualization editors
Kibana offers even more ways to work with your visualizations:
- Apply custom field formats in TSVB. Take advantage of the field formatters from your index pattern in TSVB—or override the format for a specific visualization.
- Filter in TSVB. You always had the ability to ignore global filters in TSVB layers, and now you can also change them. This makes it easier to explore your data in TSVB without having to edit the filters for each series.
- View data and requests in Lens. View the data in visualizations and the requests that collected the data right in the Lens editor.
- View requests in Console. View the requests that collect the data in visualizations in Console.
- Auto fit rows to content. Automatically resize Aggregation-based data table rows so that text and images are fully visible.
New and updated connectors in Alerting
Alerting has grown its collection of connectors with the addition of the ServiceNow ITOM connector, which allows for easy integration with ServiceNow event management. In addition, Kibana provides a more efficient integration for ServiceNow ITSM and SecOps connectors with certified applications on the ServiceNow store. Also added is the ability to authenticate the email connector with OAuth 2.0 Client Credentials for the MS Exchange email service.
Rule duration on display
In Rules and Connectors, the Rules view now includes the rule duration field, which shows how long a rule is taking to complete execution. This helps you identify rules that run longer than you anticipate.
You can observe the duration for the last 30 executions on the rules detail page.
Osquery Manager now generally available
With the GA release, Osquery Manager is now fully supported and available for use in production. In addition, the 7.16 release offers the following new capabilities:
- Customize the osquery configuration.
- Map saved query results to ECS.
- Test out queries when editing saved queries.
- Map static values to ECS.
- Schedule query packs for one or more agent policies.
- Set custom namespace values for the integration.
- Query three new Kubernetes tables.
For more information, refer to Osquery.
Transform health alerting rules
[beta] This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. A new rule type notifies you when continuous transforms experience operational issues. It enables you to detect when a transform stops indexing data or is in a failed state. For more details, refer to Generating alerts for transforms. | https://www.elastic.co/guide/en/kibana/7.16/whats-new.html?elektra=stack-and-cloud-7-16-blog | CC-MAIN-2022-27 | en | refinedweb |
FindString
Finds, but doesn't select, the first string that contains the specified prefix in the list box of a combo box.
int FindString( int nStartAfter, LPCSTR lpcsz )
nStartAfter: Contains the zero-based index of the item before the first item to be searched. When the search reaches the bottom of the list box, it continues from the top of the list box back to the item specified by nStartAfter. If -1, the entire list box is searched from the beginning.
lpcsz: Points to the null-terminated string that contains the prefix to search for. The search is case independent, so this string can contain any combination of uppercase and lowercase letters.
The zero-based index of the matching item, or CB_ERR if the search was unsuccessful.
EX1
#include <..\OriginLab\DialogEx.h>
#define IDC_COMBO1 1001
void ComboBox_FindString_ex1(Dialog& MyDlg)
{
ComboBox m_cmbBox;
m_cmbBox = MyDlg.GetItem(IDC_COMBO1);
int nCount = m_cmbBox.GetCount();
int nRet = m_cmbBox.FindString(0, "Red");
}
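As documented above, passing -1 for nStartAfter searches the entire list from the beginning; a small variation on the example:

int nRet2 = m_cmbBox.FindString(-1, "Red");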
Control.h | https://cloud.originlab.com/doc/OriginC/ref/ComboBox-FindString | CC-MAIN-2022-27 | en | refinedweb |
To format a string in Python by wrapping (line breaking) and truncating (abbreviating) it at an arbitrary number of characters, use the textwrap module of the standard library.
The following information is provided here.
- Wrapping a string (line breaking): wrap(), fill()
- Truncating strings (abbreviating): shorten()
- The TextWrapper object
If you want to write long strings on multiple lines in the code instead of in the output, see the following article.
- Related Articles:Writing long strings of text on multiple lines in Python
Wrapping a string (line breaking): wrap(), fill()
With the wrap() function of the textwrap module, you can get a list of lines, broken at word boundaries, that fit within an arbitrary number of characters.
Specify the number of characters for the second argument width. The default is width=70.
import textwrap s = "Python can be easy to pick up whether you're a first time programmer or you're experienced with other languages" s_wrap_list = textwrap.wrap(s, 40) print(s_wrap_list) # ['Python can be easy to pick up whether', "you're a first time programmer or you're", 'experienced with other languages']
Using the obtained list, you can get a single string joined with newline characters as follows:
'\n'.join(list)
print('\n'.join(s_wrap_list)) # Python can be easy to pick up whether # you're a first time programmer or you're # experienced with other languages
The fill() function returns a single newline-joined string instead of a list. It is the same as executing the following code on the result of wrap(), as in the example above.
'\n'.join(list)
This is more convenient when you don't need a list but want to output a fixed-width string to a terminal, etc.
print(textwrap.fill(s, 40)) # Python can be easy to pick up whether # you're a first time programmer or you're # experienced with other languages
If the argument max_line is specified, the number of lines after it will be omitted.
print(textwrap.wrap(s, 40, max_lines=2)) # ['Python can be easy to pick up whether', "you're a first time programmer or [...]"] print(textwrap.fill(s, 40, max_lines=2)) # Python can be easy to pick up whether # you're a first time programmer or [...]
If omitted, the following string will be output at the end by default.
' [...]'
It can be replaced by any string with the argument placeholder.
print(textwrap.fill(s, 40, max_lines=2, placeholder=' ~')) # Python can be easy to pick up whether # you're a first time programmer or ~
You can also specify a string to be added to the beginning of the first line with the argument initial_indent. This can be used when you want to indent the beginning of a paragraph.
print(textwrap.fill(s, 40, max_lines=2, placeholder=' ~', initial_indent=' ')) # Python can be easy to pick up whether # you're a first time programmer or ~
Be careful with full-width and half-width characters.
textwrap controls line length by the number of characters, not by display width; both single-byte (half-width) and double-byte (full-width) characters count as one character.
s = '文字文字文字文字文字文字12345,67890, 文字文字文字abcde' print(textwrap.fill(s, 12)) # 文字文字文字文字文字文字 # 12345,67890, # 文字文字文字abcde
If you want to wrap text that mixes full-width characters (such as kanji) and half-width characters at a fixed display width, you have to account for the character width yourself.
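One rough approach, which is not part of textwrap, is to count East Asian full-width and wide characters as two columns using the standard unicodedata module; the display_width() helper below is our own illustration:

import unicodedata

def display_width(s):
    # Count full-width ('F') and wide ('W') characters as two columns
    return sum(2 if unicodedata.east_asian_width(c) in ('F', 'W') else 1 for c in s)

print(display_width('文字abc'))
# 7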
Truncating strings (abbreviating): shorten()
If you want to truncate and omit strings, use the function shorten() in the textwrap module.
The string is abbreviated at word boundaries to fit within the specified number of characters, including the placeholder string that indicates the omission. The string indicating the omission can be set with the argument placeholder, which defaults to the following.
' [...]'
s = 'Python is powerful' print(textwrap.shorten(s, 12)) # Python [...] print(textwrap.shorten(s, 12, placeholder=' ~')) # Python is ~
However, Japanese strings, for example, cannot be abbreviated well because they cannot be divided into words.
s = 'Pythonについて。Pythonは汎用のプログラミング言語である。' print(textwrap.shorten(s, 20)) # [...]
If you want to abbreviate by considering only the number of characters instead of word units, it can be easily achieved as follows.
s_short = s[:12] + '...' print(s_short) # Pythonについて。P...
TextWrapper object
If you are going to call wrap() or fill() many times with the same configuration, it is efficient to create a TextWrapper object.
wrapper = textwrap.TextWrapper(width=30, max_lines=3, placeholder=' ~', initial_indent=' ') s = "Python can be easy to pick up whether you're a first time programmer or you're experienced with other languages" print(wrapper.wrap(s)) # [' Python can be easy to pick', "up whether you're a first time", "programmer or you're ~"] print(wrapper.fill(s)) # Python can be easy to pick # up whether you're a first time # programmer or you're ~
The same settings can be reused. | https://from-locals.com/python-textwrap-wrap-fill-shorten/ | CC-MAIN-2022-27 | en | refinedweb |
Acquire and Prepare the Ingredients - Your Data
In this chapter, we will cover:
- Working with data
- Reading data from CSV files
- Reading XML data
- Reading JSON data
- Reading data from fixed-width formatted files
- Reading data from R files and R libraries
- Removing cases with missing values
- Replacing missing values with the mean
- Removing duplicate cases
- Rescaling a variable to specified min-max range
- Normalizing or standardizing data in a data frame
- Binning numerical data
- Creating dummies for categorical variables
- Handling missing data
- Correcting data
- Imputing data
- Detecting outliers
Introduction
Data is everywhere, and the amount of digital data in existence is growing rapidly; it is projected to reach 180 zettabytes by 2025. Data Science is a field that tries to extract insights and meaningful information from structured and unstructured data through various stages, such as asking questions, getting the data, exploring the data, modeling the data, and communicating the results.
Data scientists or analysts often need to load or collect data from various resources having different input formats into R. Although R has its own native data format, data usually exists in text formats, such as Comma Separated Values (CSV), JavaScript Object Notation (JSON), and Extensible Markup Language (XML). This chapter provides recipes to load such data into your R system for processing.
Raw, real-world datasets are often messy with missing values, unusable format, and outliers. Very rarely can we start analyzing data immediately after loading it. Often, we will need to preprocess the data to clean, impute, wrangle, and transform it before embarking on analysis. This chapter provides recipes for some common cleaning, missing value imputation, outlier detection, and preprocessing steps.
Working with data
In the wild, datasets come in many different formats, but each computer program expects your data to be organized in a well-defined structure.
As a result, every data science project begins with the same tasks: gather the data, view the data, clean the data, correct or change the layout of the data to make it tidy, handle missing values and outliers from the data, model the data, and evaluate the data.
With R, you can do everything from collecting your data (from the web or a database) to cleaning it, transforming it, visualizing it, modelling it, and running statistical tests on it.
Reading data from CSV files
CSV formats are best used to represent sets or sequences of records in which each record has an identical list of fields. This corresponds to a single relation in a relational database, or to data (though not calculations) in a typical spreadsheet.
Getting ready
If you have not already downloaded the files for this chapter, do it now and ensure that the auto-mpg.csv file is in your R working directory.
How to do it...
Reading data from .csv files can be done using the following commands:
- Read the data from auto-mpg.csv, which includes a header row:
> auto <- read.csv("auto-mpg.csv", header=TRUE, sep = ",")
- Verify the results:
> names(auto)
How it works...
The read.csv() function creates a data frame from the data in the .csv file. If we pass header=TRUE, then the function uses the very first row to name the variables in the resulting data frame:
> names(auto)
[1] "No" "mpg" "cylinders"
[4] "displacement" "horsepower" "weight"
[7] "acceleration" "model_year" "car_name"
The header and sep parameters allow us to specify whether the .csv file has headers and the character used in the file to separate fields. The header=TRUE and sep="," parameters are the defaults for the read.csv() function; we can omit these in the code example.
There's more...
The read.csv() function is a specialized form of read.table(). The latter uses whitespace as the default field separator. We will discuss a few important optional arguments to these functions.
Handling different column delimiters
In regions where a comma is used as the decimal separator, the .csv files use ";" as the field delimiter. While dealing with such data files, use read.csv2() to load data into R.
Alternatively, you can use the read.csv("<file name>", sep=";", dec=",") command.
Use sep="\t" for tab-delimited files.
Handling column headers/variable names
If your data file does not have column headers, set header=FALSE.
The auto-mpg-noheader.csv file does not include a header row. The first command in the following snippet reads this file. In this case, R assigns default variable names V1, V2, and so on.
> auto <- read.csv("auto-mpg-noheader.csv", header=FALSE)
> head(auto,2)
V1 V2 V3 V4 V5 V6 V7 V8 V9
1 1 28 4 140 90 2264 15.5 71 chevrolet vega 2300
2 2 19 3 70 97 2330 13.5 72 mazda rx2 coupe
If your file does not have a header row, and you omit the header=FALSE optional argument, the read.csv() function uses the first row for variable names and ends up constructing variable names by adding X to the actual data values in the first row. Note the meaningless variable names in the following fragment:
> auto <- read.csv("auto-mpg-noheader.csv")
> head(auto,2)
X1 X28 X4 X140 X90 X2264 X15.5 X71 chevrolet.vega.2300
1 2 19 3 70 97 2330 13.5 72 mazda rx2 coupe
2 3 36 4 107 75 2205 14.5 82 honda accord
We can use the optional col.names argument to specify the column names. If col.names is given explicitly, the names in the header row are ignored, even if header=TRUE is specified:
> auto <- read.csv("auto-mpg-noheader.csv", header=FALSE, col.names = c("No", "mpg", "cyl", "dis","hp", "wt", "acc", "year", "car_name"))
> head(auto,2)
No mpg cyl dis hp wt acc year car_name
1 1 28 4 140 90 2264 15.5 71 chevrolet vega 2300
2 2 19 3 70 97 2330 13.5 72 mazda rx2 coupe
Handling missing values
When reading data from text files, R treats blanks in numerical variables as NA (signifying missing data). By default, it reads blanks in categorical attributes just as blanks and not as NA. To treat blanks as NA for categorical and character variables, set na.strings="":
> auto <- read.csv("auto-mpg.csv", na.strings="")
If the data file uses a specified string (such as "N/A" or "NA" for example) to indicate the missing values, you can specify that string as the na.strings argument, as in na.strings= "N/A" or na.strings = "NA".
Reading strings as characters and not as factors
By default, R treats strings as factors (categorical variables). In some situations, you may want to leave them as character strings. Use stringsAsFactors=FALSE to achieve this:
> auto <- read.csv("auto-mpg.csv",stringsAsFactors=FALSE)
However, to selectively treat variables as characters, you can load the file with the defaults (that is, read all strings as factors) and then use as.character() to convert the requisite factor variables to characters.
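For example, using the same auto-mpg.csv data, the following sketch loads with the defaults and then converts just the car_name column back to character:

> auto <- read.csv("auto-mpg.csv")
> auto$car_name <- as.character(auto$car_name)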
Reading data directly from a website
If the data file is available on the web, you can load it directly into R, instead of downloading and saving it locally before loading it into R:
> dat <- read.csv("")
Reading XML data
You may sometimes need to extract data from websites. Many providers also supply data in XML and JSON formats. In this recipe, we learn about reading XML data.
Getting ready
Make sure you have downloaded the files for this chapters and the files cd_catalog.xml and WorldPopulation-wiki.htm are in working directory of R. If the XML package is not already installed in your R environment, install the package now, as follows:
> install.packages("XML")
How to do it...
XML data can be read by following these steps:
- Load the library and initialize:
> library(XML)
> url <- "cd_catalog.xml"
- Parse the XML file and get the root node:
> xmldoc <- xmlParse(url)
> rootNode <- xmlRoot(xmldoc)
> rootNode[1]
- Extract the XML data:
> data <- xmlSApply(rootNode,function(x) xmlSApply(x, xmlValue))
- Convert the extracted data into a data frame:
> cd.catalog <- data.frame(t(data),row.names=NULL)
- Verify the results:
> cd.catalog[1:2,]
How it works...
The xmlParse function returns an object of the XMLInternalDocument class, which is a C-level internal data structure.
The xmlRoot() function gets access to the root node and its elements. Let us check the first element of the root node:
> rootNode[1]
$CD
<CD>
<TITLE>Empire Burlesque</TITLE>
<ARTIST>Bob Dylan</ARTIST>
<COUNTRY>USA</COUNTRY>
<COMPANY>Columbia</COMPANY>
<PRICE>10.90</PRICE>
<YEAR>1985</YEAR>
</CD>
attr(,"class")
[1] "XMLInternalNodeList" "XMLNodeList"
To extract data from the root node, we use the xmlSApply() function iteratively over all the children of the root node. The xmlSApply function returns a matrix.
To convert the preceding matrix into a data frame, we transpose the matrix using the t() function and then extract the first two rows from the cd.catalog data frame:
> cd.catalog[1:2,]
TITLE ARTIST COUNTRY COMPANY PRICE YEAR
1 Empire Burlesque Bob Dylan USA Columbia 10.90 1985
2 Hide your heart Bonnie Tyler UK CBS Records 9.90 1988
There's more...
XML data can be deeply nested and hence can become complex to extract. Knowledge of XPath is helpful to access specific XML tags. R provides several functions, such as xpathSApply and getNodeSet, to locate specific elements.
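For example, with the xmldoc object parsed earlier in this recipe, an XPath expression can pull out just the CD titles (a small sketch):

> titles <- xpathSApply(xmldoc, "//CD/TITLE", xmlValue)
> titles[1:2]
[1] "Empire Burlesque" "Hide your heart"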
Extracting HTML table data from a web page
Though it is possible to treat HTML data as a specialized form of XML, R provides specific functions to extract data from HTML tables, as follows:
> url <- "WorldPopulation-wiki.htm"
> tables <- readHTMLTable(url)
> world.pop <- tables[[6]]
The readHTMLTable() function parses the web page and returns a list of all the tables that are found on the page. For tables that have an id attribute, the function uses the id attribute as the name of that list element.
We are interested in extracting the "10 most populous countries" table, which is the sixth table on the page, so we use tables[[6]].
Extracting a single HTML table from a web page
A single table can be extracted using the following command:
> table <- readHTMLTable(url,which=5)
Specify which to get data from a specific table. R returns a data frame.
Reading JSON data
Several RESTful web services return data in JSON format, in some ways simpler and more efficient than XML. This recipe shows you how to read JSON data.
Getting ready
If the jsonlite package is not already installed in your R environment, install it now. Also ensure that the students.json and student-courses.json files from this chapter's code files are in your R working directory.
How to do it...
Once the files are ready, load the jsonlite package and read the files as follows:
- Load the library:
> library(jsonlite)
- Load the JSON data from the files:
> dat.1 <- fromJSON("students.json")
> dat.2 <- fromJSON("student-courses.json")
- Load the JSON document from the web:
> url <- ""
> jsonDoc <- fromJSON(url)
- Extract the data into data frames:
> dat <- jsonDoc$list$resources$resource$fields
> dat.1 <- jsonDoc$list$resources$resource$fields
> dat.2 <- jsonDoc$list$resources$resource$fields
- Verify the results:
> dat[1:2,]
> dat.1[1:3,]
> dat.2[,c(1,2,4:5)]
How it works...
The jsonlite package provides two key functions: fromJSON and toJSON.
The fromJSON function can load data either directly from a file or from a web page, as the preceding steps 2 and 3 show. If you get errors in downloading content directly from the web, install and load the httr package.
Depending on the structure of the JSON document, loading the data can vary in complexity.
If given a URL, the fromJSON function returns a list object. In the preceding list, in step 4, we see how to extract the enclosed data frame.
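The package also provides the reverse operation, toJSON(); for example, to serialize one of the data frames created above back to JSON (a quick sketch):

> toJSON(dat.1, pretty = TRUE)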
Reading data from fixed-width formatted files
In fixed-width formatted files, columns have fixed widths; if a data element does not use up the entire allotted column width, then the element is padded with spaces to make up the specified width. To read fixed-width text files, specify the columns either by column widths or by starting positions.
Getting ready
Download the files for this chapter and store the student-fwf.txt file in your R working directory.
How to do it...
Read the fixed-width formatted file as follows:
> student <- read.fwf("student-fwf.txt", widths=c(4,15,20,15,4), col.names=c("id","name","email","major","year"))
How it works...
In the student-fwf.txt file, the first column occupies 4 character positions, the second 15, and so on. The c(4,15,20,15,4) expression specifies the widths of the 5 columns in the data file.
We can use the optional col.names argument to supply our own variable names.
There's more...
The read.fwf() function has several optional arguments that come in handy. We discuss a few of these, as follows:
Files with headers
Files with headers use the following command:
> student <- read.fwf("student-fwf-header.txt", widths=c(4,15,20,15,4), header=TRUE, sep="\t", skip=2)
If header=TRUE, the first row of the file is interpreted as having the column headers. Column headers, if present, need to be separated by the specified sep argument. The sep argument only applies to the header row.
The skip argument denotes the number of lines to skip; in this recipe, the first two lines are skipped.
Excluding columns from data
To exclude a column, make the column width negative. Thus, to exclude the email column, we will specify its width as -20 and also remove the column name from the col.names vector, as follows:
> student <- read.fwf("student-fwf.txt",widths=c(4,15,-20,15,4), col.names=c("id","name","major","year"))
Reading data from R files and R libraries
During data analysis, you will create several R objects. You can save these in the native R data format and retrieve them later as needed.
Getting ready
First, create and save the R objects interactively, as shown in the following code. Make sure you have write access to the R working directory.
> customer <- c("John", "Peter", "Jane")
> orderdate <- as.Date(c('2014-10-1','2014-1-2','2014-7-6'))
> orderamount <- c(280, 100.50, 40.25)
> order <- data.frame(customer,orderdate,orderamount)
> names <- c("John", "Joan")
> save(order, names, file="test.Rdata")
> saveRDS(order,file="order.rds")
> remove(order)
In the preceding code, after the objects are saved, the remove() function deletes the order object from the current session.
How to do it...
To be able to read data from R files and libraries, follow these steps:
- Load data from the R data files into memory:
> load("test.Rdata")
> ord <- readRDS("order.rds")
- The datasets package is loaded in the R environment by default and contains the iris and cars datasets. To load these datasets data into memory, use the following code:
> data(iris)
> data(list(cars,iris))
The first command loads only the iris dataset, and the second loads both the cars and iris datasets.
How it works...
The save() function saves the serialized version of the objects supplied as arguments along with the object name. The subsequent load() function restores the saved objects, with the same object names that they were saved with, to the global environment by default. If there are existing objects with the same names in that environment, they will be replaced without any warnings.
The saveRDS() function saves only one object. It saves the serialized version of the object and not the object name. Hence, with the readRDS() function, the saved object can be restored into a variable with a different name from when it was saved.
There's more...
The preceding recipe has shown you how to read saved R objects. We see more options in this section.
Saving all objects in a session
The following command can be used to save all objects:
> save.image(file = "all.RData")
Saving objects selectively in a session
To save objects selectively, use the following commands:
> odd <- c(1,3,5,7)
> even <- c(2,4,6,8)
> save(list=c("odd","even"),file="OddEven.Rdata")
The list argument specifies a character vector containing the names of the objects to be saved. Subsequently, loading data from the OddEven.Rdata file creates both odd and even objects. The saveRDS() function can save only one object at a time.
Attaching/detaching R data files to an environment
While load() restores the saved objects directly into the global environment, you can instead attach an R data file as a separate entry on R's search path; this makes the saved objects visible without overwriting same-named objects in the global environment.
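A minimal sketch using the test.Rdata file created earlier:

> attach("test.Rdata")
> search()                    # shows "file:test.Rdata" on the search path
> order                       # objects in the file are now visible
> detach("file:test.Rdata")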
Listing all datasets in loaded packages
All the loaded packages can be listed using the following command:
> data()
Removing cases with missing values
Datasets come with varying amounts of missing data. When we have abundant data, we sometimes (not always) want to eliminate the cases that have missing values for one or more variables. This recipe applies when we want to eliminate cases that have any missing values, as well as when we want to selectively eliminate cases that have missing values for a specific variable alone.
Getting ready
Download the missing-data.csv file from the code files for this chapter to your R working directory. Read the data from the missing-data.csv file, while taking care to identify the string used in the input file for missing values. In our file, missing values are shown with empty strings:
> dat <- read.csv("missing-data.csv", na.strings="")
How to do it...
To get a data frame that has only the cases with no missing values for any variable, use the na.omit() function:
> dat.cleaned <- na.omit(dat)
Now dat.cleaned contains only those cases from dat that have no missing values in any of the variables.
How it works...
The na.omit() function internally uses the is.na() function, that allows us to find whether its argument is NA. When applied to a single value, it returns a Boolean value. When applied to a collection, it returns a vector:
> is.na(dat[4,2])
[1] TRUE
> is.na(dat$Income)
[1] FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE
[10] FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE
[19] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
There's more...
You will sometimes need to do more than just eliminate the cases with any missing values. We discuss some options in this section.
Eliminating cases with NA for selected variables
We might sometimes want to selectively eliminate cases that have NA only for a specific variable. The example data frame has two missing values for Income. To get a data frame with only these two cases removed, use:
> dat.income.cleaned <- dat[!is.na(dat$Income),]
> nrow(dat.income.cleaned)
[1] 25
Finding cases that have no missing values
The complete.cases() function takes a data frame or table as its argument and returns a Boolean vector with TRUE for rows that have no missing values, and FALSE otherwise:
> complete.cases(dat)
[1] TRUE TRUE TRUE FALSE TRUE FALSE TRUE TRUE TRUE
[10] TRUE TRUE TRUE FALSE TRUE TRUE TRUE FALSE TRUE
[19] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
Rows 4, 6, 13, and 17 have at least one missing value. Instead of using the na.omit() function, we can do the following as well:
> dat.cleaned <- dat[complete.cases(dat),]
> nrow(dat.cleaned)
[1] 23
Converting specific values to NA
Sometimes, we might know that a specific value in a data frame actually means that the data was not available. For example, in the dat data frame, a value of 0 for Income probably means that the data is missing. We can convert these to NA by a simple assignment:
> dat$Income[dat$Income==0] <- NA
Excluding NA values from computations
Many R functions return NA when some parts of the data they work on are NA. For example, computing the mean or sd on a vector with at least one NA value returns NA as the result. To remove NA from consideration, use the na.rm parameter:
> mean(dat$Income)
[1] NA
> mean(dat$Income, na.rm = TRUE)
[1] 65763.64
Replacing missing values with the mean
When you disregard cases with any missing variables, you lose useful information that the non-missing values in that case convey. You may sometimes want to impute reasonable values (those that will not skew the results of analysis very much) for the missing values.
Getting ready
Download the missing-data.csv file and store it in your R environment's working directory.
How to do it...
Read data and replace missing values:
> dat <- read.csv("missing-data.csv", na.strings = "")
> dat$Income.imp.mean <- ifelse(is.na(dat$Income), mean(dat$Income, na.rm=TRUE), dat$Income)
After this, all the NA values for Income will be the mean value prior to imputation.
How it works...
The preceding ifelse() function returns the imputed mean value if its first argument is NA. Otherwise, it returns the first argument.
There's more...
You cannot impute the mean when a categorical variable has missing values, so you need a different approach. Even for numeric variables, we might sometimes not want to impute the mean for missing values. We discuss an often-used approach here.
Imputing random values sampled from non-missing values
If you want to impute random values sampled from the non-missing values of the variable, you can use the following two functions:
rand.impute <- function(a) {
missing <- is.na(a)
n.missing <- sum(missing)
a.obs <- a[!missing]
imputed <- a
imputed[missing] <- sample (a.obs, n.missing, replace=TRUE)
return (imputed)
}
random.impute.data.frame <- function(dat, cols) {
nms <- names(dat)
for(col in cols) {
name <- paste(nms[col],".imputed", sep = "")
dat[name] <- rand.impute(dat[,col])
}
dat
}
With these two functions in place, you can use the following to impute random values for both Income and Phone_type:
> dat <- read.csv("missing-data.csv", na.strings="")
> random.impute.data.frame(dat, c(1,2))
Removing duplicate cases
We sometimes end up with duplicate cases in our datasets and want to retain only one among them.
Getting ready
Create a sample data frame:
> salary <- c(20000, 30000, 25000, 40000, 30000, 34000, 30000)
> family.size <- c(4,3,2,2,3,4,3)
> car <- c("Luxury", "Compact", "Midsize", "Luxury", "Compact", "Compact", "Compact")
> prospect <- data.frame(salary, family.size, car)
How to do it...
The unique() function can do the job. It takes a vector or data frame as an argument and returns an object of the same type as its argument, but with duplicates removed.
Remove duplicates to get unique values:
> prospect.cleaned <- unique(prospect)
> nrow(prospect)
[1] 7
> nrow(prospect.cleaned)
[1] 5
How it works...
The unique() function takes a vector or data frame as an argument and returns a similar object with the duplicate eliminated. It returns the non-duplicated cases as is. For repeated cases, the unique() function includes one copy in the returned result.
There's more...
Sometimes we just want to identify the duplicated values without necessarily removing them.
Identifying duplicates without deleting them
For this, use the duplicated() function:
> duplicated(prospect)
[1] FALSE FALSE FALSE FALSE TRUE FALSE TRUE
From the data, we know that cases 2, 5, and 7 are duplicates. Note that only cases 5 and 7 are shown as duplicates. In the first occurrence, case 2 is not flagged as a duplicate.
To list the duplicate cases, use the following code:
> prospect[duplicated(prospect), ]
salary family.size car
5 30000 3 Compact
7 30000 3 Compact
Rescaling a variable to specified min-max range
Distance computations play a big role in many data analytics techniques. We know that variables with higher values tend to dominate distance computations and you may want to rescale the values to be in the range of 0 - 1.
Getting ready
Install the scales package and read the data-conversion.csv file from the book's data for this chapter into your R environment's working directory:
> install.packages("scales")
> library(scales)
> students <- read.csv("data-conversion.csv")
How to do it...
To rescale the Income variable to the range [0,1], use the following code snippet:
> students$Income.rescaled <- rescale(students$Income)
How it works...
By default, the rescale() function makes the lowest value(s) zero and the highest value(s) one. It rescales all the other values proportionately. The following two expressions provide identical results:
> rescale(students$Income)
> (students$Income - min(students$Income)) / (max(students$Income) - min(students$Income))
To rescale a different range than [0,1], use the to argument. The following snippet rescales students$Income to the range (0,100):
> rescale(students$Income, to = c(1, 100))
There's more...
When using distance-based techniques, you may need to rescale several variables. You may find it tedious to scale one variable at a time.
Rescaling many variables at once
Use the following function to rescale variables:
rescale.many <- function(dat, column.nos) {
nms <- names(dat)
for(col in column.nos) {
name <- paste(nms[col],".rescaled", sep = "")
dat[name] <- rescale(dat[,col])
}
cat(paste("Rescaled ", length(column.nos), " variable(s)n"))
dat
}
With the preceding function defined, we can do the following to rescale the first and fourth variables in the data frame:
> rescale.many(students, c(1,4))
See also
- The Normalizing or standardizing data in a data frame recipe in this chapter.
Normalizing or standardizing data in a data frame
Distance computations play a big role in many data analytics techniques. We know that variables with higher values tend to dominate distance computations and you may want to use the standardized (or z) values.
Getting ready
Download the BostonHousing.csv data file and store it in your R environment's working directory. Then read the data:
> housing <- read.csv("BostonHousing.csv")
How to do it...
To standardize all the variables in a data frame containing only numeric variables, use:
> housing.z <- scale(housing)
You can only use the scale() function on data frames that contain all numeric variables. Otherwise, you will get an error.
How it works...
When invoked in the preceding example, the scale() function computes the standard z score for each value (ignoring NAs) of each variable. That is, from each value it subtracts the mean and divides the result by the standard deviation of the associated variable.
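As a quick check, you can verify this equivalence for a single variable; with no NAs in the data, the comparison should return TRUE:

> all.equal(as.numeric(housing.z[, "CRIM"]), (housing$CRIM - mean(housing$CRIM)) / sd(housing$CRIM))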
The scale() function takes two optional arguments, center and scale, whose default values are TRUE. Their effects are as follows:
- center=TRUE, scale=TRUE (the default): subtracts the mean and divides by the standard deviation
- center=TRUE, scale=FALSE: only subtracts the mean
- center=FALSE, scale=TRUE: divides each value by the root-mean-square of the variable
- center=FALSE, scale=FALSE: returns the values unchanged
There's more...
When using distance-based techniques, you may need to rescale several variables. You may find it tedious to standardize one variable at a time.
Standardizing several variables simultaneously
If you have a data frame with some numeric and some non-numeric variables, or want to standardize only some of the variables in a fully numeric data frame, then you can either handle each variable separately, which would be cumbersome, or use a function such as the following to handle a subset of variables:
scale.many <- function(dat, column.nos) {
nms <- names(dat)
for(col in column.nos) {
name <- paste(nms[col],".z", sep = "")
dat[name] <- scale(dat[,col])
}
cat(paste("Scaled ", length(column.nos), " variable(s)n"))
dat
}
With this function, you can now do things like:
> housing <- read.csv("BostonHousing.csv")
> housing <- scale.many(housing, c(1,3,5:7))
This will add the z values for variables 1, 3, 5, 6, and 7, with .z appended to the original column names:
> names(housing)
[1] "CRIM" "ZN" "INDUS" "CHAS" "NOX" "RM"
[7] "AGE" "DIS" "RAD" "TAX" "PTRATIO" "B"
[13] "LSTAT" "MEDV" "CRIM.z" "INDUS.z" "NOX.z" "RM.z"
[19] "AGE.z"
See also
- The Rescaling a variable to specified min-max range recipe in this chapter.
Binning numerical data
Sometimes, we need to convert numerical data to categorical data or a factor. For example, Naive Bayes classification requires all variables (independent and dependent) to be categorical. In other situations, we may want to apply a classification method to a problem where the dependent variable is numeric but needs to be categorical.
Getting ready
From the code files for this chapter, store the data-conversion.csv file in the working directory of your R environment. Then read the data:
> students <- read.csv("data-conversion.csv")
How to do it...
Income is a numeric variable, and you may want to create a categorical variable from it by creating bins. Suppose you want to label incomes of $10,000 or below as Low, incomes between $10,000 and $31,000 as Medium, and the rest as High. We can do the following:
- Create a vector of break points:
> b <- c(-Inf, 10000, 31000, Inf)
- Create a vector of names for break points:
> names <- c("Low", "Medium", "High")
- Cut the vector using the break points:
> students$Income.cat <- cut(students$Income, breaks = b, labels = names)
> students
Age State Gender Height Income Income.cat
1 23 NJ F 61 5000 Low
2 13 NY M 55 1000 Low
3 36 NJ M 66 3000 Low
4 31 VA F 64 4000 Low
5 58 NY F 70 30000 Medium
6 29 TX F 63 10000 Low
7 39 NJ M 67 50000 High
8 50 VA M 70 55000 High
9 23 TX F 61 2000 Low
10 36 VA M 66 20000 Medium
How it works...
The cut() function uses the ranges implied by the breaks argument to infer the bins, and names them according to the strings provided in the labels argument. In our example, the function places incomes less than or equal to 10,000 in the first bin, incomes greater than 10,000 and less than or equal to 31,000 in the second bin, and incomes greater than 31,000 in the third bin. In other words, the first number in the interval is not included but the second one is. The number of bins will be one less than the number of elements in breaks. The strings in names become the factor levels of the bins.
If we leave out names, cut() uses the numbers in the second argument to construct interval names, as you can see here:
> b <- c(-Inf, 10000, 31000, Inf)
> students$Income.cat1 <- cut(students$Income, breaks = b)
> students
Age State Gender Height Income Income.cat Income.cat1
1 23 NJ F 61 5000 Low (-Inf,1e+04]
2 13 NY M 55 1000 Low (-Inf,1e+04]
3 36 NJ M 66 3000 Low (-Inf,1e+04]
4 31 VA F 64 4000 Low (-Inf,1e+04]
5 58 NY F 70 30000 Medium (1e+04,3.1e+04]
6 29 TX F 63 10000 Low (-Inf,1e+04]
7 39 NJ M 67 50000 High (3.1e+04, Inf]
8 50 VA M 70 55000 High (3.1e+04, Inf]
9 23 TX F 61 2000 Low (-Inf,1e+04]
10 36 VA M 66 20000 Medium (1e+04,3.1e+04]
There's more...
You might not always be in a position to identify the breaks manually and may instead want to rely on R to do this automatically.
Creating a specified number of intervals automatically
Rather than determining the breaks and hence the intervals manually, as mentioned earlier, we can specify the number of bins we want, say n, and let the cut() function handle the rest automatically. In this case, cut() creates n intervals of approximately equal width, as follows:
> students$Income.cat2 <- cut(students$Income, breaks = 4, labels = c("Level1", "Level2", "Level3","Level4"))
Creating dummies for categorical variables
In situations where we have categorical variables (factors) but need to use them in analytical methods that require numbers (for example, K nearest neighbors (KNN), Linear Regression), we need to create dummy variables.
Getting ready
Read the data-conversion.csv file and store it in the working directory of your R environment. Install the dummies package. Then read the data:
> install.packages("dummies")
> library(dummies)
> students <- read.csv("data-conversion.csv")
How to do it...
Create dummies for all factors in the data frame:
> students.new <- dummy.data.frame(students, sep = ".")
> names(students.new)
[1] "Age" "State.NJ" "State.NY" "State.TX" "State.VA"
[6] "Gender.F" "Gender.M" "Height" "Income"
The students.new data frame now contains all the original variables and the newly added dummy variables. The dummy.data.frame() function has created dummy variables for all four levels of State and two levels of Gender factors. However, we will generally omit one of the dummy variables for State and one for Gender when we use machine learning techniques.
We can use the optional argument all = FALSE to specify that the resulting data frame should contain only the generated dummy variables and none of the original variables.
How it works...
The dummy.data.frame() function creates dummies for all the factors in the data frame supplied. Internally, it uses another dummy() function which creates dummy variables for a single factor. The dummy() function creates one new variable for every level of the factor for which we are creating dummies. It appends the variable name with the factor level name to generate names for the dummy variables. We can use the sep argument to specify the character that separates them; an empty string is the default:
> dummy(students$State, sep = ".")
State.NJ State.NY State.TX State.VA
[1,] 1 0 0 0
[2,] 0 1 0 0
[3,] 1 0 0 0
[4,] 0 0 0 1
[5,] 0 1 0 0
[6,] 0 0 1 0
[7,] 1 0 0 0
[8,] 0 0 0 1
[9,] 0 0 1 0
[10,] 0 0 0 1
There's more...
In situations where a data frame has several factors, and you plan on using only a subset of them, you create dummies only for the chosen subset.
Choosing which variables to create dummies for
To create a dummy only for one variable or a subset of variables, we can use the names argument to specify the column names of the variables we want dummies for:
> students.new1 <- dummy.data.frame(students, names = c("State","Gender") , sep = ".")
Handling missing data
In most real-world problems, data is likely to be incomplete because of incorrect data entry, faulty equipment, or improperly coded data. In R, missing values are represented by the symbol NA (not available) and are considered to be the first obstacle in predictive modeling. So, it's always a good idea to check for missing data in a dataset before proceeding for further predictive analysis. This recipe shows you how to handle missing data.
Getting ready
R provides three simple ways to handle missing values:
- Deleting the observations.
- Deleting the variables.
- Replacing the values with mean, median, or mode.
Install the package in your R environment as follows:
> install.packages("Hmisc")
If you have not already downloaded the files for this chapter, do it now and ensure that the housing-with-missing-value.csv file is in your R working directory.
How to do it...
Once the files are ready, load the Hmisc package and read the files as follows:
- Load the CSV data from the files:
> housing.dat <- read.csv("housing-with-missing-value.csv",header = TRUE, stringsAsFactors = FALSE)
- Check summary of the dataset:
> summary(housing.dat)
The output would be as follows:
- Delete the missing observations from the dataset, removing all NAs with list-wise deletion:
> housing.dat.1 <- na.omit(housing.dat)
Remove rows with NAs in any column except certain excluded ones (here, NAs in the rad column are kept):
> drop_na <- c("rad")
> housing.dat.2 <- housing.dat[complete.cases(housing.dat[, !(names(housing.dat) %in% drop_na)]), ]
- Finally, verify the dataset with summary statistics:
> summary(housing.dat.1$rad)
Min. 1st Qu. Median Mean 3rd Qu. Max.
1.000 4.000 5.000 9.599 24.000 24.000
> summary(housing.dat.1$ptratio)
Min. 1st Qu. Median Mean 3rd Qu. Max.
12.60 17.40 19.10 18.47 20.20 22.00
> summary(housing.dat.2$rad)
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
1.000 4.000 5.000 9.599 24.000 24.000 35
> summary(housing.dat.2$ptratio)
Min. 1st Qu. Median Mean 3rd Qu. Max.
12.60 17.40 19.10 18.47 20.20 22.00
- Delete the variables that have the most missing observations:
# Deleting a single column containing many NAs
> housing.dat.3 <- housing.dat
> housing.dat.3$rad <- NULL
#Deleting multiple columns containing NAs:
> drops <- c("ptratio","rad")
> housing.dat.4 <- housing.dat[ , !(names(housing.dat) %in% drops)]
Finally, verify the dataset with summary statistics:
> summary(housing.dat.4)
- Load the library:
> library(Hmisc)
- Replace the missing values with mean, median, or mode:
#replace with mean
> housing.dat$ptratio <- impute(housing.dat$ptratio, mean)
> housing.dat$rad <- impute(housing.dat$rad, mean)
#replace with median
> housing.dat$ptratio <- impute(housing.dat$ptratio, median)
> housing.dat$rad <- impute(housing.dat$rad, median)
#replace with mode/constant value
> housing.dat$ptratio <- impute(housing.dat$ptratio, 18)
> housing.dat$rad <- impute(housing.dat$rad, 6)
Finally, verify the dataset with summary statistics:
> summary(housing.dat)
How it works...
When you have large numbers of observations in your dataset and all the classes to be predicted are sufficiently represented by the data points, then deleting missing observations would not introduce bias or disproportionality of output classes.
In the housing.dat dataset, we saw from the summary statistics that the dataset has two columns, ptratio and rad, with missing values.
The na.omit() function lets you remove all the missing values from all the columns of your dataset, whereas the complete.cases() function lets you remove the missing values from some particular column/columns.
Sometimes, particular variable/variables might have more missing values than the rest of the variables in the dataset. Then it is better to remove that variable unless it is a really important predictor that makes a lot of business sense. Assigning NULL to a variable is an easy way of removing it from the dataset.
In both cases, handling missing values through deletion reduces the total number of observations (rows) in the dataset. Instead of removing observations or dropping a variable with many missing values, we can replace the missing values with the mean, median, or mode; this is a crude but common way of treating missing values. Depending on the context, such as when the variation is low or the variable has little leverage over the response/target, such a naive approximation is acceptable and can give satisfactory results. The impute() function in the Hmisc library provides an easy way to replace a missing value with the mean, median, or mode (a constant).
There's more...
Sometimes it is better to understand the pattern of missingness in the dataset through visualization before making further decisions about eliminating or imputing the missing values.
Understanding missing data pattern
Let us use the md.pattern() function from the mice package to get a better understanding of the pattern of missing data.
> library(mice)
> md.pattern(housing.dat)
We can see from the output that 466 samples are complete and 40 samples are missing only the ptratio value.
Next, we will visualize the housing data to understand the missing information, using the aggr() function from the VIM package:
> library(VIM)
> aggr_plot <- aggr(housing.dat, col=c('blue','red'), numbers=TRUE, sortVars=TRUE, labels=names(housing.dat), cex.axis=.7, gap=3, ylab=c("Histogram of missing data","Pattern"))
We can see from the plot that almost 92.1% of the samples are complete and only 7.9% are missing the ptratio value.
Correcting data
In practice, raw data is rarely tidy, and is much harder to work with as a result. It is often said that 80 percent of data analysis is spent on the process of cleaning and correcting the data.
In this recipe, you will learn the best way to correctly layout your data to serve two major purposes:
- Making data suitable for software processing, whether for mathematical functions, visualization, or other processing
- Revealing information and insights
Getting ready
Download the files for this chapter and store the USArrests.csv file in your R working directory. You should also install and load the tidyr package, and read the data, as follows:
> install.packages("tidyr")
> library(tidyr)
> crimeData <- read.csv("USArrests.csv",stringsAsFactors = FALSE)
How to do it...
Follow these steps to correct the data from your dataset.
- View some records of the dataset:
> View(crimeData)
- Add the column name state in the dataset:
> crimeData <- cbind(state = rownames(crimeData), crimeData)
- Gather all the variables between Murder and UrbanPop:
> crimeData.1 <- gather(crimeData,
key = "crime_type",
value = "arrest_estimate",
Murder:UrbanPop)
> crimeData.1
- Gather all the columns except the column state:
> crimeData.2 <- gather(crimeData,
key = "crime_type",
value = "arrest_estimate",
-state)
> crimeData.2
- Gather only the Murder and Assault columns:
> crimeData.3 <- gather(crimeData,
key = "crime_type",
value = "arrest_estimate",
Murder, Assault)
> crimeData.3
- Spread crimeData.2 to turn a pair of key:value (crime_type:arrest_estimate) columns into a set of tidy columns:
> crimeData.4 <- spread(crimeData.2,
key = "crime_type",
value = "arrest_estimate"
)
> crimeData.4
How it works...
Correct data format is crucial for facilitating the tasks of data analysis, including data manipulation, modeling, and visualization. The tidy data arranges values so that the relationships in the data parallel the structure of the data frame. Every tidy dataset is based on two basic principles:
- Each variable is saved in its own column
- Each observation is saved in its own row
In the crimeData dataframe, the row names were states, hence we used the function cbind() to add a column named state in the dataframe. The function gather() collapses multiple columns into key-value pairs. It makes wide data longer. The gather() function basically takes four arguments, data (dataframe), key (column name representing new variable), value (column name representing variable values), and names of the columns to gather (or not gather).
In the crimeData.2 data, all column names (except state) were collapsed into a single key column crime_type and their values were put into a value column arrest_estimate.
And, in the crimeData.3 data, the two columns Murder and Assault were collapsed and the remaining columns (state, UrbanPop, and Rape) were duplicated.
The function spread() does the reverse of gather(). It takes two columns (key and value) and spreads them into multiple columns. It makes long data wider. The spread() function takes three arguments in general, data (dataframe), key (column values to convert to multiple columns), and value (single column value to convert to multiple columns' values).
There's more...
Beside the spread() and gather() functions, there are two more important functions in the tidyr package that help to make data tidy.
Combining multiple columns to single columns
The unite() function takes multiple columns and pastes them together into one column:
> crimeData.5 <- unite(crimeData,
col = "Murder_Assault",
Murder, Assault,
sep = "_")
> crimeData.5
We combine the columns Murder and Assault from the crimeData data frame to generate a new column Murder_Assault, with the values separated by _.
Splitting single column to multiple columns
The separate() function is the reverse of unite(). It takes values inside a single character column and separates them into multiple columns:
> crimeData.6 <- separate(crimeData.5,
col = "Murder_Assault",
into = c("Murder", "Assault"),
sep = "_")
> crimeData.6
Imputing data
Missing values are considered the first obstacle in data analysis and predictive modeling. In most statistical analysis methods, list-wise deletion is the default method used to handle missing values, as shown in the earlier recipe. However, these methods are often not good enough: deletion can lead to information loss, and replacement with a simple mean or median does not take into account the uncertainty in the missing values.
Hence, this recipe will show you the multivariate imputation techniques to handle missing values using prediction.
Getting ready
Make sure that the housing-with-missing-value.csv file from the code files of this chapter is in your R working directory.
You should also install the mice package using the following command:
> install.packages("mice")
> library(mice)
> housingData <- read.csv("housing-with-missing-value.csv",header = TRUE, stringsAsFactors = FALSE)
How to do it...
Follow these steps to impute data:
- Perform multivariate imputation:
#imputing only two columns having missing values
> columns=c("ptratio","rad")
> imputed_Data <- mice(housingData[,names(housingData) %in% columns], m=5, maxit = 50, method = 'pmm', seed = 500)
> summary(imputed_Data)
- Generate complete data:
> completeData <- complete(imputed_Data)
- Replace the columns with missing values in the housing dataset with the imputed values:
> housingData$ptratio <- completeData$ptratio
> housingData$rad <- completeData$rad
- Check for missing values:
> anyNA(housingData)
How it works...
As we already know from our earlier recipe, the housing-with-missing-value.csv dataset contains two columns, ptratio and rad, with missing values.
The mice library in R uses a predictive approach and assumes that the missing data is Missing at Random (MAR), and creates multivariate imputations via chained equations to take care of uncertainty in the missing values. It implements the imputation in just two steps: using mice() to build the model and complete() to generate the completed data.
The mice() function takes the following parameters:
- m: It refers to the number of imputed datasets it creates internally. Default is five.
- maxit: It refers to the number of iterations taken to impute the missing values.
- method: It refers to the method used in imputation. The default imputation method (when no argument is specified) depends on the measurement level of the target column and is specified by the defaultMethod argument, where defaultMethod = c("pmm", "logreg", "polyreg", "polr"):
- pmm: Predictive mean matching (numeric column).
- logreg: Logistic regression (factor column, two levels).
- polyreg: Polytomous logistic regression (factor column, greater than or equal to two levels).
- polr: Proportional odds model (ordered column, greater than or equal to two levels).
We have used predictive mean matching (pmm) for this recipe to impute the missing values in the dataset.
The anyNA() function returns a Boolean value to indicate the presence or absence of missing values (NA) in the dataset.
There's more...
Previously, we used the impute() function from the Hmisc library to simply impute the missing value using defined statistical methods (mean, median, and mode). However, Hmisc also has the aregImpute() function that allows mean imputation using additive regression, bootstrapping, and predictive mean matching:
> impute_arg <- aregImpute(~ ptratio + rad , data = housingData, n.impute = 5)
> impute_arg
aregImpute() automatically identifies the variable type and treats it accordingly; the n.impute parameter indicates the number of multiple imputations, where five is recommended.
The output of impute_arg shows R² values for predicted missing values. The higher the value, the better the values predicted.
Check imputed variable values using the following command:
> impute_arg$imputed$rad
Detecting outliers
Outliers in data can distort predictions and affect accuracy if you don't detect and handle them appropriately, especially in the data preprocessing stage.
So, identifying extreme values is important, as they can introduce drastic bias into the analytic pipeline and affect predictions. In this recipe, we will discuss ways to detect outliers and how to handle them.
Getting ready
Download the files for this chapter and store the ozone.csv file in your R working directory. Read the file using the read.csv() command and save it in a variable:
> ozoneData <- read.csv("ozone.csv", stringsAsFactors=FALSE)
How to do it...
Perform the following steps to detect outliers in the dataset:
- Detect outliers in the univariate continuous variable:
> outlier_values <- boxplot.stats(ozoneData$pressure_height)$out
> boxplot(ozoneData$pressure_height, main="Pressure Height", boxwex=0.1)
> mtext(paste("Outliers: ", paste(outlier_values, collapse=", ")), cex=0.6)
The output would be the following screenshot:
- Detect outliers in bivariate categorical variables:
> boxplot(ozone_reading ~ Month, data=ozoneData, main="Ozone reading across months")
The output would be the following screenshot:
How it works...
The most commonly used method to detect outliers is visualization of the data, through boxplot, histogram, or scatterplot.
The boxplot.stats()$out function fetches the values of data points that lie beyond the extremes of the whiskers. The boxwex attribute is a scale factor that is applied to all the boxes; it improves the appearance of the plot by making the boxes narrower. The mtext() function places a text outside the plot area, but within the plot window.
In the case of continuous variables, outliers are those observations that lie outside 1.5 * IQR, where the Inter Quartile Range (IQR) is the difference between the 75th and 25th percentiles. The outliers in continuous variables show up as dots outside the whiskers of the boxplot.
In the case of bivariate categorical variables, a clear pattern is noticeable, and the change in the level of the boxes suggests that Month has an impact on ozone_reading. The outliers in the respective categorical levels show up as dots outside the whiskers of the boxplot.
There's more...
Detecting and handling outliers depends mostly on your application. Once you have identified the outliers and you have decided to make amends as per the nature of the problem, you may consider one of the following approaches.
Treating the outliers with mean/median imputation
We can handle outliers with mean or median imputation by replacing observations lower than the 5th percentile with the mean and those higher than the 95th percentile with the median. We could equally use a single statistic, the mean or the median, to impute outliers in both directions:
> impute_outliers <- function(x,removeNA = TRUE){
quantiles <- quantile( x, c(.05, .95 ),na.rm = removeNA )
x[ x < quantiles[1] ] <- mean(x,na.rm = removeNA )
x[ x > quantiles[2] ] <- median(x,na.rm = removeNA )
x
}
> imputed_data <- impute_outliers(ozoneData$pressure_height)
Validate the imputed data through visualization:
> par(mfrow = c(1, 2))
> boxplot(ozoneData$pressure_height, main="Pressure Height having Outliers", boxwex=0.3)
> boxplot(imputed_data, main="Pressure Height with imputed data", boxwex=0.3)
The output would be the following screenshot:
Handling extreme values with capping
To handle extreme values that lie outside the 1.5 * IQR (Inter Quartile Range) limits, we can cap them by replacing observations below the lower limit with the value of the 5th percentile, and those above the upper limit with the value of the 95th percentile, as shown in the following code:
> replace_outliers <- function(x, removeNA = TRUE) {
pressure_height <- x
qnt <- quantile(pressure_height, probs=c(.25, .75), na.rm = removeNA)
caps <- quantile(pressure_height, probs=c(.05, .95), na.rm = removeNA)
H <- 1.5 * IQR(pressure_height, na.rm = removeNA)
pressure_height[pressure_height < (qnt[1] - H)] <- caps[1]
pressure_height[pressure_height > (qnt[2] + H)] <- caps[2]
pressure_height
}
> capped_pressure_height <- replace_outliers(ozoneData$pressure_height)
Validate the capped variable capped_pressure_height through visualization:
> par(mfrow = c(1, 2))
> boxplot(ozoneData$pressure_height, main="Pressure Height with Outliers", boxwex=0.1)
> boxplot(capped_pressure_height, main="Pressure Height without Outliers", boxwex=0.1)
The output would be the following screenshot:
Transforming and binning values
Sometimes, transforming variables can also eliminate outliers. The natural log or square root of a value reduces the variation caused by extreme values. Some predictive analytics algorithms, such as decision trees, inherently deal with outliers by using binning techniques (a form of variable transformation).
Outlier detection with LOF
Local Outlier Factor, or LOF, is an algorithm implemented in the DMwR package for identifying density-based local outliers by comparing the local density of a point with that of its neighbors.
Now we will calculate the local outlier factors using the LOF algorithm with k neighbors:
> install.packages("DMwR")
> library(DMwR)
> outlier.scores <- lofactor(ozoneData, k=3)
Finally, we will output the top five outliers by sorting the outlier scores calculated above:
> outliers <- order(outlier.scores, decreasing=T)[1:5]
> print(outliers)
Hello,
I want to use an EditSlider in my script to control a weight in the range (0.00 - 1.00) or other values. I searched the SDK and found "GeDialog.AddEditSlider(id, flags, initw=80, inith=0)". How can I change the range and get a float value, just like with "user data"?
Thank you!
Hi @mike, first of all I would like to remind you to read and use the Q&A functionality.
Regarding your question, here is an example of a basic GeDialog which makes use of SetFloat to define a range.
import c4d

class MonDlg(c4d.gui.GeDialog):
    idSlider = 1000
    idButton = 1001

    # Create the Layout
    def CreateLayout(self):
        self.AddEditSlider(self.idSlider, c4d.BFH_SCALE | c4d.BFV_SCALE, initw=100, inith=20)
        self.AddButton(self.idButton, c4d.BFH_SCALE | c4d.BFV_SCALE, initw=100, inith=20, name='Get Value')
        return True

    # Called after CreateLayout
    def InitValues(self):
        self.SetFloat(self.idSlider, 0.25, min=0.0, max=1.0, step=0.01, min2=0.0, max2=0.0)
        return True

    # Called for each interaction from a widget
    def Command(self, id, msg):
        if id == self.idButton:
            print self.GetFloat(self.idSlider)
        return True

def main():
    dlg = MonDlg()
    dlg.Open(c4d.DLG_TYPE_MODAL)

if __name__ == '__main__':
    main()
If you have any questions please let me know.
Cheers,
Maxime!
@m_adam Thank you for your help!
Hi @mike, if the previous post solves your issue, please mark it as the correct answer. This will switch the topic to solved. To do so, please read Q&A functionality.
Of course, if you didn't test my previous post, or have follow-up questions, do not mark it as solved, and take as much time as you need to ask us. But if there is nothing more to add, please mark your topic as solved.
Cheers,
Maxime | https://plugincafe.maxon.net/topic/11076/python-script-gui-use-editslider-get-float/1 | CC-MAIN-2022-27 | en | refinedweb |
#include <CGAL/envelope_3.h>
Inherits from CGAL::Arrangement_2<EnvTraits>.

The class template Envelope_diagram_2 represents the minimization diagram that corresponds to the lower envelope of a set of curves, or the maximization diagram that corresponds to their upper envelope. It is parameterized by a traits class that must be a model of the EnvelopeTraits_3 concept, and is basically a planar arrangement of x-monotone curves, as defined by this traits class. These x-monotone curves are the projections of boundary curves of xy-monotone surfaces, or the intersection curves between such surfaces, onto the xy-plane. Thus, it is possible to traverse the envelope diagram using the methods inherited from the Arrangement_2 class.

The envelope diagram extends the arrangement features (namely the vertices, halfedges, and faces) such that each feature stores a container of originators, namely the xy-monotone surfaces (instances of the type EnvTraits::Xy_monotone_surface_3) that induce the lower envelope (or the upper envelope, in the case of a maximization diagram) over this feature. The envelope diagram provides access methods to these originators.

The diagram also defines an iterator type for the xy-monotone surfaces that induce a diagram feature. Its value-type is EnvTraits::Xy_monotone_surface_3.
Understanding K-means Clustering in Machine Learning
Table of Contents
- What is Unsupervised Learning?
- What is Clustering?
- K- Means Clustering Algorithm
- Algorithmic Steps
- Sample Data Set Explaining K-means Clustering
- K-means Clustering Algorithm Code in Python
- K-means Clustering Algorithm Code in R
- Challenges of the K-means Clustering Algorithm
- Applications of K-means Clustering Algorithm
- Advantages of K-means Clustering Algorithm
- Disadvantages of K-means Clustering Algorithm
- Conclusion
Before diving straight into the algorithm, let us get some background. K-means clustering is a machine learning algorithm. Machine learning algorithms are broadly categorized as supervised and unsupervised. Unsupervised learning is further classified into transformations of the dataset and clustering. Clustering itself comes in several types, and K-means is a partitioning (centroid-based) clustering method.
Let us have an overview of these concepts before beginning to study the algorithm in detail.
What is Unsupervised Learning?
The machine is trained on unlabelled data and, without any guidance, must discover hidden patterns in the data. Unsupervised learning algorithms can perform complex tasks, but their results can be less predictable than those of supervised methods. Unsupervised methods allow finding features that are useful for categorization purposes, and unknown patterns in the data can be discovered. The problems of unsupervised learning are categorized as clustering and association problems.
To know more about unsupervised learning in detail visit here and check here the difference between unsupervised learning and supervised learning.
Let us now see what is clustering:
What is Clustering?
Let's consider a dataset of points:
We assume that it's possible to find a criterion (not unique) so that each sample can be associated with a specific group:
Conventionally, each group is called a cluster, and the process of finding the grouping function G is called clustering. Clustering is an important tool for finding structure or patterns in a collection of unlabelled data. Clustering algorithms process the data and discover natural clusters (groups) if they are present in the data. It is up to the user to adjust the number of clusters an algorithm should recognize, as the algorithm gives the power to modify the granularity of the grouping.
There are various types of clustering you can use:
- Partitioning: The data is organised such that a single data point can be part of one cluster only.
- Agglomerative: Every data point starts as its own cluster, and clusters are merged iteratively.
- Overlapping: Fuzzy sets are used, so each data point may belong to two or more clusters with different degrees of membership.
- Probabilistic: A probability distribution is used to form the clusters.
- Hierarchical: This algorithm builds a hierarchy of clusters. It begins with each data point assigned to a cluster of its own, then repeatedly merges the two closest clusters; the algorithm ends when just a single cluster is left.
- K-means Clustering: Here, K refers to the number of clusters. The required number of clusters is selected, and the data points are grouped into k clusters. A bigger k means smaller groups with more granularity, whereas a smaller k means bigger groups with less granularity.
Let us now study the k-means clustering algorithm in detail:
K- Means Clustering Algorithm
The k-means algorithm starts from an initial condition in which the number of clusters k is fixed and k initial centroids, or means, are assigned.
Then the distance between each sample and each centroid is computed, and the sample is assigned to the cluster where the distance is minimum. This approach is often called minimizing the inertia of the clusters, which is defined as follows:
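The formula itself appears only as an image in the original; in standard notation (ours), the inertia is the within-cluster sum of squared distances:

$$ SS_{W} = \sum_{j=1}^{k} \sum_{x_i \in C_j} \lVert x_i - \mu_j \rVert^2 $$

where C_j is the j-th cluster and \mu_j is its centroid.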
The process is iterative, once all the samples have been processed, a new set of centroids K is computed and all the distances are recomputed. The algorithm stops when the desired tolerance is reached, or in other words, when the centroids become stable and, therefore, the inertia is minimized.
Algorithmic Steps
Let X = {x1, x2, x3, ..., xn} be the set of data points and μ = {μ1, μ2, μ3, ..., μc} be the cluster centres.
- Select “C” cluster centres randomly.
- Calculate the distance between each data point and cluster centres.
- The data point with minimum distance from the cluster centre is assigned to the cluster centre.
- Recalculate the new cluster centre with the formula:
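The original shows this formula as an image; written out (in our notation), the update for the ith centre is:

$$ \mu_i = \frac{1}{c_i} \sum_{j=1}^{c_i} x_j $$

with the sum running over the data points currently assigned to the ith cluster,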
where ci represents the number of data points in the ith cluster
- Recalculate the distance between the newly obtained cluster centres and the data points.
- Stop if no data point was reassigned, else repeat from step 3.
Sample Data Set Explaining K-means Clustering
Consider a simple example with a dummy dataset:
from sklearn.datasets import make_blobs
nb_samples = 1000
X, _ = make_blobs(n_samples=nb_samples, n_features=2, centers=3, cluster_std=1.5)
In our example, we have three clusters with bidimensional features and a partial overlap due to the standard deviation of each blob. We haven't used the label variable (assigned to _) here, as we only want to generate a set of locally coherent points to try our algorithm:
In this case, we expect k-means to separate the three groups with minimum error in the X-region bounded between [-5, 0]. Hence, keeping the default values we get:
from sklearn.cluster import KMeans
>>> km = KMeans(n_clusters=3)
>>> km.fit(X)
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
n_clusters=3, n_init=10, n_jobs=1, precompute_distances='auto', random_state=None, tol=0.0001, verbose=0)
>>> print(km.cluster_centers_)
[[ 1.39014517, 1.38533993]
[ 9.78473454, 6.1946332 ]
[-5.47807472, 3.73913652]]
Replotting the data with three different markers, we verify that k-means successfully separated the data.
In this case, the separation is easy because k-means is based on Euclidean distance, which is radial, and so the clusters are expected to be convex. If this condition does not hold, the problem cannot be solved using this algorithm. Mostly, k-means can produce good results even if the convexity is not fully guaranteed, but there are several situations when the expected clustering is impossible, and letting k-means find the centroids can lead to wrong solutions.
Let us also consider the case of concentric circles; scikit-learn provides a built-in function to generate such datasets:
from sklearn.datasets import make_circles
>>> nb_samples = 1000
>>> X, Y = make_circles(n_samples=nb_samples, noise=0.05)
The plot for concentric circles is shown:
Here we have an internal cluster (blue triangle markers) and an external one (red dot markers). Such sets are not convex, and so it’s impossible for k-means to separate them correctly.
Suppose, we apply the algorithm to two clusters:
>>> km = KMeans(n_clusters=2)
>>> km.fit(X)
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
n_clusters=2, n_init=10, n_jobs=1, precompute_distances='auto', random_state=None, tol=0.0001, verbose=0)
We get the separation as shown:
As expected, k-means converges on the two centroids in the middle of the two half-circles, and the resulting clustering is quite different.
K-means Clustering Algorithm Code in Python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

df = pd.DataFrame({
    'x': [12, 20, 28, 18, 29, 33, 24, 45, 45, 52, 51, 52, 55, 53, 55, 61, 64, 69, 72],
    'y': [39, 36, 30, 52, 54, 46, 55, 59, 63, 70, 66, 63, 58, 23, 14, 8, 19, 7, 24]
})

kmeans = KMeans(n_clusters=3)
kmeans.fit(df)

labels = kmeans.predict(df)
centroids = kmeans.cluster_centers_

fig = plt.figure(figsize=(5, 5))
colmap = {1: 'r', 2: 'g', 3: 'b'}  # one color per cluster
colors = [colmap[x + 1] for x in labels]
plt.scatter(df['x'], df['y'], color=colors, alpha=0.5, edgecolor='k')
for idx, centroid in enumerate(centroids):
    plt.scatter(*centroid, color=colmap[idx + 1])
plt.xlim(0, 80)
plt.ylim(0, 80)
plt.show()
K-means Clustering Algorithm Code in R
# K-Means Algorithm

# function distance
dist <- function(x, y)
{
  d <- sqrt(sum((x - y)**2))
}

createMeanMatrix <- function(d)
{
  matrix(d, ncol=2, byrow=TRUE)
}

# compute euclidean distance
euclid <- function(a, b)
{
  d <- sqrt(a**2 + b**2)
}

euclid2 <- function(a)
{
  d <- sqrt(sum(a**2))
}

# compute difference between new means and old means
delta <- function(oldMeans, newMeans)
{
  a <- newMeans - oldMeans
  max(euclid(a[, 1], a[, 2]))
}

Clustering <- function(m, means)
{
  clusters = c()
  n <- nrow(m)
  for (i in 1:n)
  {
    distances = c()
    k <- nrow(means)
    for (j in 1:k)
    {
      di <- m[i, ] - means[j, ]
      ds <- euclid2(di)
      distances <- c(distances, ds)
    }
    minDist <- min(distances)
    cl <- match(minDist, distances)
    clusters <- c(clusters, cl)
  }
  return (clusters)
}

UpdateMeans <- function(m, cl, k)
{
  means <- c()
  for (c in 1:k)
  {
    # get the points of cluster c
    group <- which(cl == c)
    # compute the mean point of all points in cluster c
    mt1 <- mean(m[group, 1])
    mt2 <- mean(m[group, 2])
    vMean <- c(mt1, mt2)
    means <- c(means, vMean)
  }
  means <- createMeanMatrix(means)
  return (means)
}

myKmeans <- function(m, k, max)
{
  # initialization for k means: the k first points in the list
  x <- m[, 1]
  y <- m[, 2]
  d = matrix(data=NA, ncol=0, nrow=0)
  for (i in 1:k)
    d <- c(d, c(x[i], y[i]))
  init <- matrix(d, ncol=2, byrow=TRUE)

  dev.new()
  plotTitle <- paste("K-Means Clustering K = ", k)
  plot(m, xlim=c(1, max), ylim=c(1, max), xlab="X", ylab="Y", pch=20, main=plotTitle)
  par(new=T)
  plot(init, pch=2, xlim=c(1, max), ylim=c(1, max), xlab="X", ylab="Y")
  par(new=T)

  oldMeans <- init
  cl <- Clustering(m, oldMeans)
  means <- UpdateMeans(m, cl, k)
  thr <- delta(oldMeans, means)
  itr <- 1
  while (thr > threshold)
  {
    cl <- Clustering(m, means)
    oldMeans <- means
    means <- UpdateMeans(m, cl, k)
    thr <- delta(oldMeans, means)
    itr <- itr + 1
  }

  for (km in 1:k)
  {
    group <- which(cl == km)
    plot(m[group, ], axes=F, col=km, xlim=c(1, max), ylim=c(1, max), pch=20, xlab="X", ylab="Y")
    par(new=T)
  }
  plot(means, axes=F, pch=8, col=15, xlim=c(1, max), ylim=c(1, max), xlab="X", ylab="Y")
  par(new=T)
  dev.off()
} # end function myKmeans

# Driver code (placed after the function definitions so the script
# can be sourced top to bottom)
#k=3 # the number of K
max = 5000       # the maximum number for generating random points
n = 100          # the number of points
maxIter = 10     # maximum number of iterations
threshold = 0.1  # difference of old means and new means

# Randomly generate points in the form of (x,y)
x <- sample(1:max, n)
y <- sample(1:max, n)

# put points into a matrix
z <- c(x, y)
m = matrix(z, ncol=2)

ks <- c(1, 2, 4, 8, 10, 15, 20) # different Ks
for (k in ks)
  myKmeans(m, k, max)
Challenges of the K-means Clustering Algorithm
1. Different Cluster Size
The common challenge that the algorithm faces is different cluster sizes.
Let us understand this with an example:
Consider an original set of points as shown below:
In the original plot, the rightmost and leftmost clusters are smaller than the central cluster; on applying k-means clustering to these points, the result would be as shown:
2. Different Density of Data Points
Other challenges of the algorithm arise when the densities of the original points are different.
Consider again, a set of original points as shown:
In the plot above, the points in the blue and green clusters are closely packed, whereas the points in the red cluster are spread out. On applying k-means clustering to these points, we get the clusters shown below.
We see that the compact points have been assigned to a single cluster, whereas the points that were spread out, and previously in the same cluster, are now assigned to different clusters.
A solution could be to use a higher number of clusters, for example k = 10 instead of three, thus leading to the formation of more meaningful clusters.
Applications of K-means Clustering Algorithm
1. Document Classification
This is a very standard classification problem, and this algorithm is well suited to solving it. Documents are clustered into multiple categories based on tags, content, and topics of the document.
2. Customer Segmentation
The clustering technique segments customers based on purchase history, interests, or activity monitoring, thus helping marketers to improve their customer base and work on target areas. The classification helps the company target specific clusters of customers.
3. Insurance Fraud Detection
It is possible to isolate new claims by utilizing past historical data on fraudulent claims. Based on historical data, clusters can be formed indicating fraudulent claims.
4. Call Record Data Analysis
CDR is the information captured by telecom companies and is used to understand customer segments with respect to their usage over hours.
The information collected via calls, SMS, and the internet provides greater insight into customer needs when used together with customer demographics.
5. Cyber Profiling Criminals
The idea of cyber profiling is derived from criminal profiling; data is collected from individuals and groups to identify significant correlations.
Cyber profiling provides the investigation division with information to classify the types of criminals at the crime scene.
Advantages of K-means Clustering Algorithm
- Easy to comprehend.
- Robust and fast algorithm.
- Efficient algorithm with the complexity O(tknd) where:
- t: number of iterations.
- k: number of centroids (clusters).
- n: number of objects.
- d: dimension of each object.
Disadvantages of K-means Clustering Algorithm
- The algorithm requires the Apriori specification of the number of cluster centres.
- K-means cannot resolve that there are two clusters when the two clusters highly overlap.
- The algorithm is not invariant to non-linear transformations, i.e., different representations of data reveal different results.
- Euclidean distance measures can unequally weigh underlying factors.
- The algorithm fails for categorical data and is applicable only when the mean is defined.
- Unable to handle noisy data and outliers.
- The algorithm fails for a non-linear data set.
Conclusion
That brings us to the end of the unsupervised learning algorithm, k-means clustering. We have studied unsupervised learning, a type of machine learning in which machines are trained using unlabelled data. Furthermore, we discussed clustering, which in simpler words is the process of dividing datasets into groups consisting of similar data points. It has various uses, popular ones being Amazon's recommendation system and Netflix's movie recommendations. Moving on, we learned about our main topic, the k-means clustering algorithm and its algorithmic steps, and understood it using a dummy dataset. We also implemented the algorithm using code in Python and R. Lastly, we studied the challenges of the algorithm, followed by its applications, advantages, and disadvantages.
You may take an overview of more machine learning algorithms here.
Was this information helpful to you to understand this algorithm? Let us know your feedback!
Developer Guide: Custom API-Endpoints
Starting with Icinga PowerShell Framework v1.1.0, plenty of features and functionality have been added for shipping data by using a REST-API. This Developer Guide will describe how to write custom API endpoints by using the PowerShell Framework v1.1.0 and the Icinga PowerShell REST-Api. In this example we will write a custom endpoint that simply provides a file list for a specific folder.
File Structure
Like plugins, API endpoints can contain plenty of different files to keep the code clean. To ensure each module is identical and easier to maintain for users, we would advise the following file structure:
module
|_ apiendpoint.psd1
|_ apiendpoint.psm1
|_ lib
   |_ function1.psm1
   |_ function2.psm1
This will ensure these functions can be called separately from the endpoint itself and make re-using them a lot easier. In addition, it will help other developers to build dependencies based on your module and allow an easier re-usage of already existing components.
Additional required files within the lib folder can be included by using the NestedModules array within your psd1 file. This will ensure these files are automatically loaded once a new PowerShell session is started.
Creating A New Module
The best approach for creating a custom API endpoint is by creating an independent module which is installed in your PowerShell modules directly. This will ensure you are not overwriting your custom data with possible other module updates.
Developer Tools
To get started easier, you can run this command to create the new module:
New-IcingaForWindowsComponent -Name 'apitutorial' -ComponentType 'apiendpoint';
If you wish to create the module manually, please read on.
Manual Creation
In this guide, we will assume the name of the module is icinga-powershell-apitutorial.
At first we will have to create a new module. Navigate to the PowerShell modules folder the Framework itself is installed to. In this tutorial we will assume the location is C:\Program Files\WindowsPowerShell\Modules.
Now create a new folder with the name icinga-powershell-apitutorial and navigate into it.
As we require a psm1 file which contains our code, we will create a new file with the name icinga-powershell-apitutorial.psm1. This will allow the PowerShell autoloader to load the module automatically.
Note: It could be possible, depending on your execution policies, that your module is not loaded properly. If this is the case, you can try to unblock the file by opening PowerShell and using the Unblock-File cmdlet:

Unblock-File -Path 'C:\Program Files\WindowsPowerShell\Modules\icinga-powershell-apitutorial\icinga-powershell-apitutorial.psm1'
Testing The Module
Once the module files are created and unblocked, we can start testing if the autoloader is properly working and our module is detected.
For this, open the file icinga-powershell-apitutorial.psm1 in your preferred editor and add the following code snippet:

function Test-MyIcingaAPITutorialCommand()
{
    Write-Host 'Module was loaded';
}
Now open a new PowerShell terminal, or write powershell into an already open PowerShell prompt, and execute the command Test-MyIcingaAPITutorialCommand.
If everything went properly, you should now read the output Module was loaded in your prompt. If not, you can try to import the module by using
Import-Module 'C:\Program Files\WindowsPowerShell\Modules\icinga-powershell-apitutorial\icinga-powershell-apitutorial.psm1';
inside your console prompt. After that, try again to execute the command Test-MyIcingaAPITutorialCommand and check whether it works this time. If not, check the naming of your module to ensure the folder name and the .psm1 file name are identical.
Once this is working, we can remove the function again as we no longer require it.
Create A New API-Endpoint
Once everything is working properly, we can create the starting function we will later use to execute our API endpoint.
First, we create a new folder lib inside our module folder, and inside it the file Invoke-IcingaAPITutorialRESTCall.psm1. For naming guidelines we have to use Invoke-Icinga{0}RESTCall; replace {0} with a unique name that briefly describes what your module is doing. The user will not need to use this function directly; it is only required internally, and the naming gives a better overview of which functions provide REST endpoints.
So let's get started with the function:

function Invoke-IcingaAPITutorialRESTCall()
{
    # Our code belongs here
}
Basic API Architecture
A developer using the REST-Api integration does not have to worry about anything regarding header parsing, URL encoding, or similar. All data is parsed by the Icinga PowerShell REST-Api and passed to our function.
Our API endpoint will be called via a namespace alias, referring to the actual function executing the code.
Writing Our Base-Skeleton
For our API endpoint we will start with a param() block to receive the arguments passed to our endpoint. This signature is standardized and has to be followed; otherwise the integration might not work.
function Invoke-IcingaAPITutorialRESTCall()
{
    # Create our arguments the REST-Api daemon will use to parse the request
    param (
        [Hashtable]$Request    = @{ },
        [Hashtable]$Connection = @{ },
        [string]$ApiVersion    = $null
    );
}
Request Argument
The request argument provides a hashtable with all parsed content of the request to later work with. The following elements are available by default:
Method
The HTTP method being used for the request, like GET, DELETE, and so on.
RequestPath
The request path is split into two hashtable entries: FullPath and PathArray. This tells you exactly which URL the user specified and allows you to build proper handling for different entry points of your endpoint.
For the path array, on index 0 you will always find the version and on index 1 your endpoint alias. Following this, possible additional path extensions in your module will always start on index 2.
Header
A hashtable containing all headers sent by the client. If you require your client to send additional headers for certain tasks to work, you can use this to check whether the header is set with the correct value:

Name                      Value
----                      -----
Upgrade-Insecure-Requests 1
User-Agent                Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36
Accept                    text/html,application/json
Host                      example.com:5668
Sec-Fetch-Dest            document
Accept-Language           de,en-US;q=0.9,en;q=0.8
Connection                keep-alive
Accept-Encoding           gzip, deflate, br
Sec-Fetch-Mode            navigate
sec-ch-ua-mobile          ?0
X-CustomHeader            Custom Content
RequestArguments
Of course we will also handle possible request arguments. These could be used either for filtering or to modify the returned content depending on the input. An example request could look like this:

/v1/apitutorial?include=*psm1&exclude=*api*

Name    Value
----    -----
include {*psm1}
exclude {*api*}
Body
The content sent by the client, in case a method is used that sends data.
Note: The Body argument is only available in case data is sent. If the client is using a method that sends no body, such as GET, this entry is not present.
FullRequest
This argument contains the full request string for possible troubleshooting and debugging.
/v1/apitutorial?include=*psm1&exclude=*api*
ContentLength
This only applies to requests that can send data as a body and tells you how much data was sent. This value is moved from the header to this location for easier access.
Connection Argument
This argument contains the connection details of the client, including the TCP stream object. You only require this for sending data back to the client or for troubleshooting. In general, you only have to pass this object to other functions without modifying it.
Sending Data to the Client
Now we are basically ready to process data. To do so, we will fetch the current folder content of our PowerShell modules directory with Get-ChildItem and send this content to our client. For sending data to the client, we can use Send-IcingaTCPClientMessage. This cmdlet takes a Message argument, created with New-IcingaTCPClientRESTMessage, which itself contains the HTTPResponse and our ContentBody. In addition, we also have to specify the Stream to write to. The stream object is part of our Connection argument.
All content will be sent JSON-encoded, so please ensure you are using a data type which is convertible by ConvertTo-Json.

function Invoke-IcingaAPITutorialRESTCall()
{
    # Create our arguments the REST-Api daemon will use to parse the request
    param (
        [Hashtable]$Request    = @{ },
        [Hashtable]$Connection = @{ },
        [string]$ApiVersion    = $null
    );

    # Fetch all file names within our module directory. We filter this to ensure we
    # do not have to handle all PSObjects, as our client message functionality will
    # try to resolve them. This could end up in an almost infinite loop
    $Content = Get-ChildItem -Path 'C:\Program Files\WindowsPowerShell\Modules\' -Recurse | Select-Object 'Name', 'FullName';

    # Send the response to the client as 200 "Ok" with our directory body
    Send-IcingaTCPClientMessage -Message (
        New-IcingaTCPClientRESTMessage `
            -HTTPResponse ($IcingaHTTPEnums.HTTPResponseType.Ok) `
            -ContentBody $Content
    ) -Stream $Connection.Stream;
}
Registering API-Endpoints
Now that we have written a basic function to fetch folder content and send it back to our client, we have to register our cmdlet as an endpoint. For this we will open our icinga-powershell-apitutorial.psm1 and add a namespace function which has to follow this naming guideline:

Register-IcingaRESTAPIEndpoint{0}

Replace {0} with the name you have chosen for your Invoke-Icinga{0}RESTCall. Once the REST-Api daemon is loaded, all functions within this namespace are executed. The function has to return a hashtable with an Alias, referring to the URL part the user has to enter, and a Command, being executed for this alias.

function Register-IcingaRESTAPIEndpointAPITutorial()
{
    return @{
        'Alias'   = 'apitutorial';
        'Command' = 'Invoke-IcingaAPITutorialRESTCall';
    };
}
If our module is providing different endpoints, you will have to create multiple register functions. To keep the API clean, however, and to prevent conflicts, we advise you to provide only one endpoint and handle all other tasks within this endpoint.
As everything is now ready, we can restart our Icinga PowerShell Framework service by using
Restart-IcingaWindowsService;
and access our API endpoint by browsing to our API location (in our example we assume you use 5668 as the default port):
[ { "Name": "icinga-powershell-apitutorial", "FullName": "C:\\Program Files\\WindowsPowerShell\\Modules\\icinga-powershell-apitutorial" }, { "Name": "icinga-powershell-framework", "FullName": "C:\\Program Files\\WindowsPowerShell\\Modules\\icinga-powershell-framework" }, { "Name": "icinga-powershell-inventory", "FullName": "C:\\Program Files\\WindowsPowerShell\\Modules\\icinga-powershell-inventory" }, { "Name": "icinga-powershell-plugins", "FullName": "C:\\Program Files\\WindowsPowerShell\\Modules\\icinga-powershell-plugins" }, { "Name": "icinga-powershell-restapi", "FullName": "C:\\Program Files\\WindowsPowerShell\\Modules\\icinga-powershell-restapi" }, ... ]
Conclusion
This is a basic tutorial on how to write custom API endpoints and make them available in your environment. Of course, you can now start to filter requests depending on the URL the user added, the headers used, or other input like the body. All data sent by the client is accessible to developers for writing their own extensions and modules.
Refactor
tech/software engineering
Day 1:
Creating instances: If an instance requires a lot of code to be created, do not create it using the constructor; constructors should be short. Create it with a creation method, which is a static method that returns an instance of the class.
When there are multiple constructors that overlap each other, we should write a general-purpose constructor. The general constructor should be called in the other constructors by writing "this(...)" in their first line.
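A small Java sketch of both ideas; the Order class and its fields are invented for illustration:

public class Order {
    private final String customer;
    private final int quantity;
    private final double discount;

    // General-purpose constructor: the only one doing real work.
    public Order(String customer, int quantity, double discount) {
        this.customer = customer;
        this.quantity = quantity;
        this.discount = discount;
    }

    // Overlapping constructor delegates via this(...) in its first line.
    public Order(String customer, int quantity) {
        this(customer, quantity, 0.0);
    }

    // Creation method: keeps complex set-up code out of the constructors.
    public static Order rushOrder(String customer) {
        int quantity = 1;        // imagine this took many steps to compute
        double discount = 0.05;
        return new Order(customer, quantity, discount);
    }
}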
Extracting methods: Local variables sometimes serve the same purpose as an extracted function. The key factor in deciding whether to use a local variable or extract a function is which would make the code more readable. Local variables should be final, since it is not wise to change the value of a local variable. If you need to calculate a value in several steps, create a temporary variable for each step, so that every variable name is meaningful for the value assigned to it.
Day 2:
Use exceptions in the default branch of a switch to make sure the arguments are legal:

switch (a) {
    case ...
    case ...
    default:
        throw new IllegalArgumentException("Invalid ...");
}
Use Java reflection to implement the Factory design pattern:

return (Customer) Class.forName(name).newInstance();

Use Java reflection to implement the Singleton design pattern when you have multiple singleton classes:

Class[] params = new Class[]{String.class, Integer.class};
Method method = Class.forName(singleton).getMethod(methodName, params);
method.invoke(null, new Object[]{"", 0});
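A compilable sketch of the reflective factory idea; the class names are placeholders, and getDeclaredConstructor().newInstance() is used in place of the deprecated Class.newInstance():

interface Customer {
    String greet();
}

class RegularCustomer implements Customer {
    public String greet() { return "Hello"; }
}

public class ReflectiveFactory {
    // Instantiate a Customer subtype from its class name.
    static Customer create(String className) throws Exception {
        return (Customer) Class.forName(className)
                               .getDeclaredConstructor()
                               .newInstance();
    }

    public static void main(String[] args) throws Exception {
        Customer c = create("RegularCustomer");
        System.out.println(c.greet());
    }
}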
Day 3:
We can use the Strategy pattern to extract a part of a class. Suppose we have different salary strategies for different employees, and several types of employees may share the same salary strategy. The subclasses of Employee should not just override a calculateSalary() function; instead, Employee should have a member of type PayType, which is an interface, and different classes implementing that interface provide the different salary calculations.
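A minimal Java sketch of that Employee/PayType arrangement; the concrete pay rules are invented:

interface PayType {
    double calculateSalary(double base);
}

class MonthlyPay implements PayType {
    public double calculateSalary(double base) { return base; }
}

class CommissionedPay implements PayType {
    public double calculateSalary(double base) { return base * 1.1; }
}

class Employee {
    private final PayType payType;  // strategy object, not a subclass override

    Employee(PayType payType) { this.payType = payType; }

    double salary(double base) {
        return payType.calculateSalary(base);
    }
}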
The Template pattern is used to simplify the code when several classes share common operations, but some of them may omit some steps of those operations.
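A minimal Java sketch with an invented Report example: the superclass fixes the order of the steps, and a subclass may leave an optional step empty:

abstract class Report {
    // Template method: fixes the order of the steps.
    final void print() {
        header();
        body();
        footer();
    }

    void header() { System.out.println("=== report ==="); }
    abstract void body();
    void footer() { }  // optional step, may be omitted by subclasses
}

class SalesReport extends Report {
    void body() { System.out.println("sales figures"); }
    // no footer override: this subclass skips that step
}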
If we have an instance of a subclass and call a superclass function g(), and g() calls an overridable function f(), the f() that runs is the one in the subclass, not the one in the superclass. But if f is not a function but a field (say, int f), the superclass function g() can never access the int f of the subclass: fields are not polymorphic.
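A small Java program demonstrating both behaviors (invented names):

class Base {
    int f = 1;

    int f() { return 1; }

    void g() {
        System.out.println(f());  // dynamic dispatch: prints 2 for a Derived instance
        System.out.println(f);    // field access is static: always prints Base's 1
    }
}

class Derived extends Base {
    int f = 2;

    @Override
    int f() { return 2; }
}

public class DispatchDemo {
    public static void main(String[] args) {
        new Derived().g();  // prints 2, then 1
    }
}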
Day 4:
Composite pattern: There should be an abstract class as the superclass for both the leaf nodes and the composite (internal) nodes of the tree.
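A minimal Java sketch, using an invented size() operation:

import java.util.ArrayList;
import java.util.List;

abstract class Node {
    abstract int size();  // operation shared by leaves and composites
}

class Leaf extends Node {
    int size() { return 1; }
}

class Composite extends Node {
    private final List<Node> children = new ArrayList<>();

    void add(Node child) { children.add(child); }

    int size() {
        int total = 0;
        for (Node child : children) {
            total += child.size();  // recurse uniformly over the tree
        }
        return total;
    }
}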
Builder Pattern: There is a Sandwich class with many attributes. The Builder has a set of methods, each of which sets one of the attributes. Builder is an abstract class, and there are different kinds of builders extending it; each kind of builder is used to make a specific kind of sandwich by implementing the methods that set specific values for the sandwich attributes. There should also be an Artist class which calls the Builder's methods in order to make the sandwich.
The difference between Builder and Template is as follows: builders are subclasses of the sandwich builder that set different values on the Sandwich, whereas templates need a bunch of subclasses of Sandwich, each of which overrides Sandwich's functions to set the values. In one word, Builder is for more complicated objects.
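A minimal Java sketch of the Sandwich example; the attributes and the concrete builder are invented:

class Sandwich {
    String bread;
    String filling;
}

abstract class SandwichBuilder {
    protected final Sandwich sandwich = new Sandwich();

    abstract void buildBread();
    abstract void buildFilling();

    Sandwich get() { return sandwich; }
}

class VeggieBuilder extends SandwichBuilder {
    void buildBread()   { sandwich.bread = "rye"; }
    void buildFilling() { sandwich.filling = "grilled vegetables"; }
}

// The "Artist" (director): drives any builder through the same steps.
class Artist {
    Sandwich make(SandwichBuilder builder) {
        builder.buildBread();
        builder.buildFilling();
        return builder.get();
    }
}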
There is also a way to use the Builder pattern to build a Composite structure.
Using the Builder pattern is also a way to decouple unit tests from the constructors of classes. The most typical case is this: one class represents one row of a database table, and its constructor takes a parameter for each column of the row. The database is subject to change, so the constructor is subject to change. If we use the constructor in unit tests and then want to change the database schema, the change is hard to make, because we need to update all the unit tests that used that class. Instead of using the constructor directly, we should use the Builder pattern.
The builder pattern can also return "this" from every set operation. This is also known as the step builder.
This tool will grow a polygon selection based on "Edge Angle", i.e. the angle between the normals of two polygons sharing a particular edge.
In other words: you've got selected polygon(s). Adjacent polygons will be added to the selection (the selection will be grown) if the edge angle between polygons is in some range (set in the UI). This growing proceeds until there are no valid polygons to grow into.
A similar tool called "Select by angle" can be found in 3DSMAX.
Features:
- Handy interface
- Fully interactive. Tweak controls and immediately see results in viewports.
- Implementation of 3DSMAX "Select by angle"
Installation:
Put this script in your PYTHONPATH directory.
In Maya, run this script by typing in "Python" tab of Script Editor or make a shelf button containing:
import fx_growSelectionByEdgeAngle
fx_growSelectionByEdgeAngle.run()
Please use the Feature Requests to give me ideas.
Please use the Support Forum if you have any questions or problems.
Please rate and review in the Review section. | https://ec2-34-231-130-161.compute-1.amazonaws.com/maya/script/fx-grow-selection-by-edge-angle-for-maya | CC-MAIN-2022-27 | en | refinedweb |
Hello,
I followed the directions in the installation manual. Then I followed tutorials from other sites. All in hopes of getting three.js properly up and running.
I am using Atom to code and the atom-live-server to host.
What worked:
- creating a folder for my project
- creating a “js” folder and putting the three.js file inside
- creating an “index.html” file with script tags to the three.js file and my scene.js file
- creating a scene.js file with the first example (solid rotating green cube) from the manual
What didn’t work:
- I installed three in my project folder via npm
- I deleted the script tag references to three.js
- I added type = “module” to the script tag referencing my scene.js
- I added “import * as THREE from ‘three’;” to my scene.js file
- The example code does not work anymore.
This is my code:
index.html
<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>My first three.js app</title> <style> body { margin: 0; } </style> </head> <body> <script type = "module" src="scene.js"></script> </body> </html>
scene.js
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );

const renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );

const geometry = new THREE.BoxGeometry();
const material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
const cube = new THREE.Mesh( geometry, material );
scene.add( cube );

camera.position.z = 5;

const animate = function () {
    requestAnimationFrame( animate );
    cube.rotation.x += 0.01;
    cube.rotation.y += 0.01;
    renderer.render( scene, camera );
};

animate();
This is my folder structure:
project/ --node_modules/ ----three/ ----.package-lock.json --index.html --package-lock.json --package.json --scene.js
Can anyone help me?
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

hi,!!!)

- --michael

P.S.: Those packages have been built against:

- -- System Information:
Debian Release: 3.1
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: i386 (i686)
Kernel: Linux 2.4.18-686
Locale: LANG=en_US, LC_CTYPE=en_US

Versions of packages libxml-libxml-perl depends on:
ii  libc6                       2.3.2.ds1-16
ii  libxml-libxml-common-perl   0.13-4
ii  libxml-namespacesupport-per 1.08-3
ii  libxml-sax-perl             0.12-4
ii  libxml2                     2.6.11-3
ii  perl                        5.8.4-2.3
ii  perl-base [perlapi-5.8.4]   5.8.4-2.3
ii  zlib1g                      1:1.2.1.1-7

Versions of packages libxml-libxslt-perl depends on:
ii  libc6                       2.3.2.ds1-16
ii  libxml-libxml-perl          1.58-1
ii  libxml2                     2.6.11-3
ii  libxslt1.1                  1.1.8-4
ii  perl                        5.8.4-2.3
ii  perl-base [perlapi-5.8.4]   5.8.4-2.3
ii  zlib1g                      1:1.2.1.1-7

- --
IT Services
University of Innsbruck
063A F25E B064 A98F A479 1690 78CD D023 5E2A 6688

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (MingW32)

iD8DBQFBe3EHeM3QI14qZogRAispAKD4zpgExUy6RByX3X4YXeyYJJy/XACfWsQo
LJj+vstMd0GCVDpwX6P8mHU=
=9tDG
-----END PGP SIGNATURE-----
Standard C++ Library Copyright 1998, Rogue Wave Software, Inc.
remove_if - Moves desired elements to the front of a container, and returns an iterator that describes where the sequence of desired elements ends.
#include <algorithm>

template <class ForwardIterator, class Predicate>
ForwardIterator remove_if(ForwardIterator first, ForwardIterator last,
                          Predicate pred);
The remove_if algorithm eliminates all the elements referred to by iterator i in the range [first, last) for which the following corresponding condition holds: pred(*i) == true. remove_if returns an iterator that points to the end of the resulting range. remove_if is stable, which means that the relative order of the elements that are not removed is the same as their relative order in the original range. For example, suppose remove_if is used to remove the even numbers from the following sequence: 123456789. Applying the remove_if algorithm results in the following sequence: 13579|XXXX. The vertical bar represents the position of the iterator returned by remove_if. Note that the elements to the left of the vertical bar are the original sequence with the even numbers removed. The elements to the right of the bar are simply the untouched original members of the original sequence.
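A minimal, self-contained sketch of the usual erase-remove idiom built on remove_if (C++11 lambda syntax is used here for brevity; with a 1998-era compiler you would pass a function or functor instead):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4, 5, 6, 7, 8, 9};
    // Move the odd numbers to the front; 'tail' marks the end of the kept range.
    std::vector<int>::iterator tail =
        std::remove_if(v.begin(), v.end(), [](int n) { return n % 2 == 0; });
    // [v.begin(), tail) now holds 1 3 5 7 9; erase the leftover tail elements.
    v.erase(tail, v.end());
    for (int n : v) std::cout << n << ' ';  // prints: 1 3 5 7 9
    return 0;
}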
remove, remove_copy, remove_copy_if | http://docs.oracle.com/cd/E19205-01/820-4180/man3c++/remove_if.3.html
Spaceships, Elvis, and Groovy inject
When I first started learning Groovy, I took to collect pretty quickly. The current trend of adopting "functional programming" practices works well in Groovy, though the names of the methods are somewhat surprising. For example, collect is like map, findAll is the equivalent of filter, and inject is the proposed replacement for reduce (or whatever the similar process is for your favorite language).
As a trivial example, in Groovy you can write:
(1..20).collect { it * 2 }       // double them all
       .findAll { it % 3 == 0 }  // find the doubles divisible by 3
       .sum()                    // add them up
which is a functional style, even though Groovy is an object-oriented language.
Notice, however, that I didn't use inject. That's not an accident, of course. For years, while I "got" collect and findAll, I never found a usage for inject that couldn't be done in an easier way. The inject version of the above example would be:
(1..20).collect { it * 2 }                  // double them all
       .findAll { it % 3 == 0 }             // find the doubles divisible by 3
       .inject(0) { acc, val -> acc + val } // add them up
That seems like a lot of work compared to the sum method, especially when I always had trouble remembering exactly what the arguments to the closure meant.
That changed recently. One of the examples I use when teaching Groovy to Java developers is do some basic sorting. I like that example, because it shows not only how easy it is to replace anonymous inner classes with closures, but also because it shows how much the Groovy JDK simplifies coding.
As a preface to my example, consider making an ArrayList of strings:
def strings = 'this is a list of strings'.split()
assert strings.class == java.lang.String[]
The split method splits the string at spaces by default, and returns, sadly, a string array. What I want is a List, and converting an array into a List is a special blend of Java awkwardness and verbosity, though the code isn't too bad once you've seen it.
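For comparison, here is a sketch of the Java way (my own illustration, not from the original post, and assuming the usual java.util imports): Arrays.asList gives only a fixed-size view backed by the array, so you copy it into an ArrayList if you want a real, mutable List.

// import java.util.*;
List<String> strings = new ArrayList<>(
    Arrays.asList("this is a list of strings".split(" ")));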
The conversion is trivial in Groovy, however.
List strings = 'this is a list of strings'.split()
assert strings.class == java.util.ArrayList
Just replace def with the datatype you want, and Groovy will do its best to do the conversion for you. 🙂
To sort a list, Java has the various static sort methods in the java.util.Collections class. The sort method without a comparator does the natural, alphabetical (more properly, lexicographical, where capital letters come before lowercase letters) sort.
List strings = 'this is a list of strings'.split()
Collections.sort(strings) // natural sort (alphabetical)
assert strings == ['a', 'is', 'list', 'of', 'strings', 'this']
This sorts the strings in place (a destructive sort) and returns void, so to see the actual sort you have to print it.
How do you test this? I wrote an assert that hardwired the results, because I knew what they had to be. That's hardly generalizable, however, and this is where inject comes in.
Have you ever looked at the definition of inject in the GroovyDocs? Here it is, from the class org.codehaus.groovy.runtime.DefaultGroovyMethods.
public static <E, T, U extends T, V extends T> T inject(E[] self, U initialValue, @ClosureParams(value=FromString.class, options="U,E") Closure<V> closure)
Iterates through the given array, passing in the initial value to the closure along with the first item. The result is passed back (injected) into the closure along with the second item. The new result is injected back into the closure along with the third item and so on until all elements of the array have been used. Also known as foldLeft in functional parlance.
Parameters:
self – an Object[]
initialValue – some initial value
closure – a closure
Returns:
the result of the last closure call
You would be forgiven for being seriously confused right now. I've been using Groovy since about 2007, and if I didn't already know what inject did, I'd be seriously confused, too.
The DefaultGroovyMethods class contains lots of methods that are added to the library at runtime via Groovy metaprogramming. In this case, the inject method is added to the collection (the first argument above). The second argument to inject is an initial value. An initial value to what, you say? The third argument to inject is a closure, and it takes two arguments, and the second argument to inject is the initial value of the first argument of the closure. The subsequent values to that argument are the result of the closure. The second argument to the closure is each element of the collection, in turn.
I expect that almost nobody actually read that last paragraph. Or, more likely, you started it and abandoned it somewhere in the middle. I can hardly blame you.
As usual, the indefatigable Mr. Haki comes to the rescue. Here is one of his examples:
(1..4).inject(0) { result, i ->
    println "$result + $i = ${result + i}"
    result + i
}
and the output is:
0 + 1 = 1
1 + 2 = 3
3 + 3 = 6
6 + 4 = 10
The value of result starts at the initial value (here, 0), and is assigned the result of each execution of the closure.
That’s the sort (no pun intended) of example I’d seen before, and because I always associated
inject with accumulators, I never actually needed it. After all, the Groovy JDK adds a
sum method already.
The key is to note that the value of “result” is actually whatever is returned from the closure (Mr. Haki’s second example illustrates this beautifully — seriously, go read his post). This actually makes it easy to use to test the sort.
Try this out for size:
List strings = 'this is a list of strings'.split()
Collections.sort(strings)
strings.inject('') { prev, curr ->
    println "prev: $prev, curr: $curr"
    assert prev <= curr
    curr // value of 'prev' during next iteration
}
The result is:
prev: , curr: a
prev: a, curr: is
prev: is, curr: list
prev: list, curr: of
prev: of, curr: strings
prev: strings, curr: this
Since the closure returns the current value, that becomes the value of prev during the next iteration.
Actually, this can be simplified too. As of Groovy 1.8, there's now an inject method that leaves out the initialization value. When you call it, the first two elements of the collection become the two arguments of the closure on the first iteration. In other words, now I can do this:
List strings = 'this is a list of strings'.split()
Collections.sort(strings)
strings.inject { prev, curr ->
    println "prev: $prev, curr: $curr"
    assert prev <= curr
    curr // value of 'prev' during next iteration
}
The output now is:
prev: a, curr: is
prev: is, curr: list
prev: list, curr: of
prev: of, curr: strings
prev: strings, curr: this
Sweet. Now I have a real, live use case for inject. 🙂 I promised you more, however. The title of this post refers to Elvis and spaceships, too.
Assume you want to sort the strings by length. In Java, when you can't modify the class to be sorted (String), you use the two-argument sort method in Collections. The second argument is of type java.util.Comparator, which gives rise to the dreaded anonymous inner class monster:
List strings = 'this is a list of strings'.split()
Collections.sort(strings, new Comparator<String>() { // R: Holy anonymous inner class, Batman!
    int compare(String s1, String s2) {              // B: Yes, Robin, with generics and everything.
        s1.size() <=> s2.size()                      // R: Gosh, gee, and a spaceship!
    }
})
assert strings == ['a', 'is', 'of', 'this', 'list', 'strings']
assert strings*.size() == [1, 2, 2, 4, 4, 7] // R: Holy spread-dot operator, too!
                                             // B: Dude, seriously, get a grip.
The spaceship operator returns -1, 0, or 1 when the left side is less than, equal to, or greater than the right side. It’s like a comparator, except the values are fixed to -1, 0, and 1.
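A quick illustration of those fixed values (my own asserts, not from the original post; with integers, Integer.compareTo already yields exactly -1, 0, or 1):

assert (3 <=> 5) == -1
assert (5 <=> 5) == 0
assert (7 <=> 5) == 1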
The nice thing about the spread-dot operator here is that it doesn’t care whether the resulting strings are also alphabetical or not. The fact that equals length strings were also sorted alphabetically is just a side effect of the algorithm.
One of the common idioms in the Groovy JDK is to take static methods in Java and make them instance methods in Groovy. Here, the sort method is a static method in the Collections class. The Groovy JDK makes it an instance method in Collection (singular).
List strings = 'this is a list of strings'.split()
strings.sort { s1, s2 ->
    s1.size() <=> s2.size() // R: Holy closure coercion, Batman!
                            // B: No, the instance method takes a closure.
                            //    Seriously, did you remember your ADHD meds today?
}
assert strings*.size() == [1, 2, 2, 4, 4, 7]
The sort method in Groovy is now an instance method, which takes a one- or two-argument closure. The two-argument variety is implemented like a traditional comparator, meaning you return negative, zero, or positive as usual.
Even better, the one-argument version of sort says, in effect, transform each element into a number and Groovy will sort the numbers and use that as a way to sort the collection.
List strings = 'this is a list of strings'.split()
strings.sort { it.size() }                   // R: Everything is awesome!
assert strings*.size() == [1, 2, 2, 4, 4, 7] // B: Shut up kid, or you'll find yourself floating home.
Here’s the best part. What if you want to sort by length, and then sort equal lengths reverse alphabetically? (I’ll use reverse alpha because the length sort also did alphabetical by accident).
Now I can use Elvis and spaceships together:
List strings = 'this is a list of strings'.split()
strings.sort { s1, s2 -> s1.size() <=> s2.size() ?: s2 <=> s1 }
The Elvis operator says if the result is true by the Groovy Truth, use it, else use a default. Here it means do the length comparison and if the result is non-zero, we’re good. Otherwise do the (reverse) alphabetical comparison.
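Outside of comparators, the same operator is handy for defaults. A one-line sketch of my own (userInput is a hypothetical variable, not from the original post):

def name = userInput ?: 'anonymous' // used when userInput is null, empty, or otherwise Groovy-false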
R: So that’s Elvis being carried back to his home planet by two tandem spaceships, right? Meaning it’s the fat Elvis from the 70s and not the thin Elvis from the 50s?
B: BAM! POW! OOF!
Here, finally, is a test based on inject for that sort:
strings.inject { prev, curr ->
    assert prev.size() <= curr.size()
    if (prev.size() == curr.size()) {
        assert prev >= curr
    }
    curr
}
There you have it: Elvis, spaceships, and inject all in a dozen lines of code. Now you can add that to your Groovy utility belt.
(B: POW! BAM! OOF!)
https://kousenit.org/2014/10/
I will be using "K8s" and "Kubernetes" interchangeably throughout this article.
The installation itself is very straightforward, and please feel free to ping me if you are stuck somewhere.
So, what we will learn:
- Explore Secrets in the K8s ecosystem
- Understand the concept behind Secrets
- Play with Secrets with real-world use cases
Well, the rest of this article is organized as follows:
- Introduction
- Overview
- Secrets creation
- Secrets usability
- As volumes
- As environment variables
- Conclusion
Introduction
Secrets are designed to store and handle sensitive information that may be needed by internal or external resources from the standpoint of pods, images, and containers: for instance credentials, passwords, tokens, keys, ssh certificates, etc., needed or used by APIs, endpoints, servers, databases, and so on. In fact, Secrets provide not only a flexible way of managing sensitive data but, most importantly, Secrets manage such information in a safer manner than incorporating it in plain old text inside containers or pods.
Overview and concept map
Roughly speaking, Secrets is an object that contains a small amount of sensitive data such as passwords, keys and tokens etc. A brief concept map for Secrets inside K8s ecosystem can be drawn like below:
Mainly, pods are part of a namespace, which is in turn part of a cluster node. Containers belonging to a pod might share mounted volumes; these containers operate on the Secrets objects to interact with internal or external systems. To make this happen, the pods must reference the needed secrets. There are mainly three ways of doing so: the first one by using volumes, the second one by using environment variables, and the last one through kubelet.
Regarding the secret object itself, we can distinguish between two types: user secrets and system secrets. For instance, K8s creates its own secrets automatically for accessing the K8s API server (the main entry point for managing the cluster under K8s), and all user-created pods are, behind the scenes, overridden to use the built-in secrets. Let's check if there are any system secrets in my environment before creating any secret object. To do so, we can use the K8s kubectl get command, which is kubectl get secrets
➜ ~ kubectl get secrets
NAME TYPE DATA AGE
default-token-xny9c kubernetes.io/service-account-token 3 13d
tls-certs Opaque 4 13d
I can see that a service account token, for instance, is already created: default-token-xny9c, which is a built-in secret.
Secrets creation
Let’s assume that a pods need access to redis database, mainly a username and password which they are stored in files for instance ./username.txt and ./password.txt. Please note that for simplicity reason, I'll be using the same files in my demonstration for the rest of this article. So, let’s create and put some faked data into these two files:
➜ ~ echo -n 'zombie' > ./username.txt
➜ ~ echo -n '1f2d1e2e67df' > ./password.txt
➜ ~ cat username.txt
zombie% ➜ ~ cat password.txt
1f2d1e2e67df
In fact, there are two ways of creating a secret in K8s: the first one by using the command kubectl create secret, and the second one manually from a spec file; either JSON or YAML data serialisation is allowed.
Creating secret object using the command line
In order to create a secret object we use the command like so:
➜ ~ kubectl create secret generic db-zombie-pass --from-file=./username.txt --from-file=./password.txt
secret "db-zombie-pass" created
Once again to check the create secrets:
➜ ~ kubectl get secrets
NAME TYPE DATA AGE
db-zombie-pass Opaque 2 27m
default-token-xny9c kubernetes.io/service-account-token 3 14d
tls-certs Opaque 4 13d
Now that we have created our first secret object, let’s describe it using kubectl describe command:
➜ ~ kubectl describe secret db-zombie-pass
Name: db-zombie-pass
Namespace: default
Labels:
Annotations:
Type: Opaque
Data
====
password.txt: 12 bytes
username.txt: 6 bytes
Please note that the last command shows the files bundled in our secret object but not the content itself, which is hugely important as it prevents the secret from being exposed to other users of the K8s environment.
Creating a secret object manually using a spec file
Creating a secret object manually can be done using a spec file, with either JSON or YAML data serialisation. Secret values must be encoded as base64 strings. Therefore, in order to create a secret object from a spec file, the user first needs to encode the secret values as illustrated below:
➜ ~ echo -n 'zombie' | base64
em9tYmll
➜ ~ echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm
Second, open up your favourite editor and edit the secret file as follows, let’s call it my-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: em9tYmll
  password: MWYyZDFlMmU2N2Rm
Then we can create the secret object from the spec file by running the command below:
➜ ~ vim my-secret.yaml
➜ ~ kubectl create -f ./my-secret.yaml
secret "mysecret" created
Well, I saw that kubectl describe does not display the content of the secret object, but what if someone wants to check this content? We can use the command kubectl get secret by providing the secret object name. For instance, for our first created secret db-zombie-pass, we can check the content like this:
➜ ~ kubectl get secret db-zombie-pass -o yaml
apiVersion: v1
data:
  password.txt: MWYyZDFlMmU2N2Rm
  username.txt: em9tYmll
kind: Secret
metadata:
  creationTimestamp: 2016-11-30T17:07:17Z
  name: db-zombie-pass
  namespace: default
  resourceVersion: "364840"
  selfLink: /api/v1/namespaces/default/secrets/db-zombie-pass
  uid: 72b890fd-b71f-11e6-84fe-2aa787ee170e
type: Opaque
You might remember I used "zombie" as the username, but I'm getting "em9tYmll" ... any idea? Base64 string encoding, as mentioned above. Therefore, in order to check the values, we must decode them the same way we encoded them before. For instance, let's decode the username:
➜ ~ echo 'em9tYmll' | base64 --decode
zombie%
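As a shortcut, you can also pull and decode a single key in one line. The jsonpath expression below is my own illustration (note the escaped dot in the key name, since the key itself contains a dot):

➜ ~ kubectl get secret db-zombie-pass -o jsonpath='{.data.username\.txt}' | base64 --decode
zombie%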
Secrets usability
Well, as mentioned above, Secrets can be used either as mounted volumes or as environment variables, which are the most common ways of consuming secrets in the K8s ecosystem and the ones we will describe in the current article.
As mounted volume:
- First of all, we need to create a secret as described above.
- Second, pod spec need to be modified to add a volume under the volumes array by specifying
the field secret.secretName to refer the name of the secret object.
- Third, we must affect the secret volume to each container in the pod under
containers[].volumeMounts[] and we must specify also both containers[].volumeMounts[].readOnly = true so that the volume can be in mode read-only and the folder path of the mounted volume in containers[].volumeMounts[].mountPath
A final example of such setting using YAML spec looks like below:
apiVersion: v1
kind: Pod
metadata:
  name: "mypod"
  namespace: "production"
spec:
  containers:
    - name: mypod
      image: "redis"
      volumeMounts:
        - name: foo
          mountPath: "/etc/baz"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: "mysecret"
Notes:
- If there are several containers that need secret data, each of them must specify volumeMounts
- It's possible to bundle many files in one secret object, or to use many secrets in one pod's spec file
- It’s also possible to use different keys within different files’ path, this concept is known as secret's keys projection. Now the username will stored under /etc/baz/specific- path/username instead of /etc/baz/username (see shell snippet below).
- Please note that password is not projected and therefore cannot be used. The rule is: once the items array is specified, only the specified keys from the secret will be available to the pod and its underlying containers
- If a specified key does not exist in the secret object the volume will never be created
…
volumes:
  - name: foo
    secret:
      secretName: "mysecret"
      items:
        - key: "username"
          path: "specific-path/username"
As environment variables
Like for mounted volumes, we must make a small change to the pod's spec file to be able to use secrets as env variables inside the pod and its underlying containers, by adding an env tag as illustrated below. Let's call this spec file redis-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
Once the pod is created, the env variables SECRET_USERNAME and SECRET_PASSWORD will be available inside the pod and ready to use.
In the shell snippet below, I'll create the pod from the spec file above and then exec into the pod to check the two env variables using the command kubectl exec name_of_the_pod -i -t -- sh.
➜ ~ kubectl create -f redis-pod.yaml
pod "secret-env-pod" created
➜ ~ kubectl exec secret-env-pod -i -t -- sh
# echo $SECRET_USERNAME
zombie
# echo $SECRET_PASSWORD
1f2d1e2e67df
#
Please note that pod creation might take a little bit of time; therefore, you might need to wait a little before being able to exec into the pod, so be patient. The status of a specific pod can be checked by running the kubectl get command like below:
➜ ~ kubectl get pods secret-env-pod
NAME READY STATUS RESTARTS AGE
secret-env-pod 1/1 Running 0 5m
Use cases
In terms of use cases, I will provide two frequently used ones in the devops ecosystem: ssh certificates and on-the-fly credentials secrets.
Ssh
One of the real use cases of secrets in the K8s ecosystem is handling ssh public and private keys. To illustrate this, I'm going to generate an ssh RSA key, let's say for Gitlab, and after that I'm going to create a secret object to store the private key as well as the public one:
➜ ~ ssh-keygen -t rsa -b 4096 -C "[email protected]"
Generating public/private rsa key pair.
Enter file in which to save the key (.ssh/id_rsa): gitlab_rsa
Enter passphrase (empty for no passphrase):
…
➜ ~ kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=gitlab_rsa --from-file=ssh-publickey=gitlab_rsa.pub
secret "ssh-key-secret" created
This secret can then be used like below under the volumes array in the pod's spec file:
spec:
  containers:
    - name: mypod
      image: "redis"
      volumeMounts:
        - name: foo
          mountPath: "/etc/ssh-secret-vol"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: "ssh-key-secret"
Once the volume is mounted, the files /etc/ssh-secret-vol/ssh-privatekey and /etc/ssh-secret-vol/ssh-publickey will be available (the file names come from the secret's data keys, not from the original gitlab_rsa file names).
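To verify from inside a running pod (assuming the pod above is named mypod), something like the following should list both key files:

➜ ~ kubectl exec mypod -- ls /etc/ssh-secret-vol
ssh-privatekey
ssh-publickey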
On-the-fly credentials
Sometimes the user may need, let's say, username and password credentials to perform some task, for example debugging, database inspection, etc. This can be achieved using the --from-literal argument like below:
➜ ~ kubectl create secret generic debugger-secret --from-literal=username=debugger --from-literal=password=super-strong-pwd
secret "debugger-secret" created
Conclusion
I would like to conclude this article by saying that the kubectl API is really very well designed, which makes it simple and especially easy to use. For instance, even though I did not mention how to manually delete a secret object, the user can guess it from the commands used above, such as kubectl get pods name_of_the_pod or kubectl create …: it is kubectl delete secret name_of_the_secret.
According to the official documentation, K8s brings more security precautions with secret objects under the hood: for instance, avoiding writing secrets to disk as much as possible, sending a secret only to the pods requiring it, protecting secret transfer with an internal SSL/TLS channel, and ensuring the update of secrets mounted as volumes or as environment variables whenever the associated secrets have been updated.
Filed Under: CONTAINERS, TRENDING | http://linoxide.com/containers/create-use-kubernetes-secrets/
public class IncrementTest extends LongRunningTestBase<IncrementTestState> {
  private static final int BATCH_SIZE = 100;
  public static final int SUM_BATCH = (BATCH_SIZE * (BATCH_SIZE - 1)) / 2;

  @Override
  public void deploy() throws Exception {
    deployApplication(getLongRunningNamespace(), IncrementApp.class);
  }

  @Override
  public void start() throws Exception {
    getApplicationManager().getFlowManager(IncrementApp.IncrementFlow.NAME).start();
  }

  @Override
  public void stop() throws Exception {
    FlowManager flowManager = getApplicationManager().getFlowManager(IncrementApp.IncrementFlow.NAME);
    flowManager.stop();
    flowManager.waitForStatus(false);
  }

  private ApplicationManager getApplicationManager() throws Exception {
    return getApplicationManager(Id.Application.from(Id.Namespace.DEFAULT, IncrementApp.NAME));
  }

  @Override
  public IncrementTestState getInitialState() {
    return new IncrementTestState(0, 0);
  }

  @Override
  public void awaitOperations(IncrementTestState state) throws Exception {
    // just wait until a particular number of events are processed
    Tasks.waitFor(state.getNumEvents(), new Callable<Long>() {
      @Override
      public Long call() throws Exception {
        DatasetId regularTableId = new DatasetId(getLongRunningNamespace().getId(), IncrementApp.REGULAR_TABLE);
        KeyValueTable regularTable = getKVTableDataset(regularTableId).get();
        return readLong(regularTable.read(IncrementApp.NUM_KEY));
      }
    }, 5, TimeUnit.MINUTES, 10, TimeUnit.SECONDS);
  }

  @Override
  public void verifyRuns(IncrementTestState state) throws Exception {
    DatasetId readlessTableId = new DatasetId(getLongRunningNamespace().getId(), IncrementApp.READLESS_TABLE);
    KeyValueTable readlessTable = getKVTableDataset(readlessTableId).get();
    long readlessSum = readLong(readlessTable.read(IncrementApp.SUM_KEY));
    long readlessNum = readLong(readlessTable.read(IncrementApp.NUM_KEY));
    Assert.assertEquals(state.getSumEvents(), readlessSum);
    Assert.assertEquals(state.getNumEvents(), readlessNum);

    DatasetId regularTableId = new DatasetId(getLongRunningNamespace().getId(), IncrementApp.REGULAR_TABLE);
    KeyValueTable regularTable = getKVTableDataset(regularTableId).get();
    long regularSum = readLong(regularTable.read(IncrementApp.SUM_KEY));
    long regularNum = readLong(regularTable.read(IncrementApp.NUM_KEY));
    Assert.assertEquals(state.getSumEvents(), regularSum);
    Assert.assertEquals(state.getNumEvents(), regularNum);
  }

  @Override
  public IncrementTestState runOperations(IncrementTestState state) throws Exception {
    StreamClient streamClient = getStreamClient();
    LOG.info("Writing {} events in one batch", BATCH_SIZE);
    StringWriter writer = new StringWriter();
    for (int i = 0; i < BATCH_SIZE; i++) {
      writer.write(String.format("%010d", i));
      writer.write("\n");
    }
    streamClient.sendBatch(Id.Stream.from(getLongRunningNamespace(), IncrementApp.INT_STREAM), "text/plain",
                           ByteStreams.newInputStreamSupplier(writer.toString().getBytes(Charsets.UTF_8)));
    long newSum = state.getSumEvents() + SUM_BATCH;
    return new IncrementTestState(newSum, state.getNumEvents() + BATCH_SIZE);
  }

  private long readLong(byte[] bytes) {
    return bytes == null ? 0 : Bytes.toLong(bytes);
  }
}
https://dzone.com/articles/long-running-tests-on-cdap
Ticket #1604 (closed enhancement: fixed)
[PATCH] Improved import list for SQLAlchemy model template
Description
The model template currently imports from SQLAlchemy like that:
from sqlalchemy import (Table, Column, String, DateTime, Date, Integer, DECIMAL, Unicode, ForeignKey, and_, or_)
That list is pretty arbitrary. Why is DECIMAL imported, but not the other type aliases? Why are and_ and or_ imported, but not not_? The following three options would be more reasonable:
- use from sqlalchemy import * (this provides a quite reasonable set of names)
- import only the names that are really used in the standard model.py
- import a more complete list explicitly
I have added a patch providing the third option.
Also, the patch changes the multi-line import to use Python 2.3 syntax.
Attachments
Change History
Changed 9 years ago by chrisz
- attachment model_for_sa.patch
added
comment:1 Changed 9 years ago by chrisz
- Status changed from new to closed
- Resolution set to fixed
comment:2 Changed 9 years ago by chrisz
- Status changed from closed to reopened
- Resolution fixed deleted
Sorry, I meant solved with option 1. Reopened this ticket since the issue of "import *" is just discussed on the mailing list.
Better SQLAlchemy imports in model template (this is for 1.1) | http://trac.turbogears.org/ticket/1604
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
[Serializable]
public class ConfigFormData
{
public Point Location { get; set; }
public Size Size { get; set; }
}
[Serializable]
public class ConfigData
{
public Dictionary<string, ConfigFormData> Forms { get; set; }
}
ConfigData data = new ConfigData();
data.Forms = new Dictionary<string, ConfigFormData>();
data.Forms.Add(Name, new ConfigFormData(){Location = Location, Size = Size});
string json = fastJSON.JSON.Instance.ToJSON(data);
ConfigData data2;
data2 = fastJSON.JSON.Instance.ToObject<ConfigData>(json);
{
"$types" : {
"WindowsFormsApplication2.Form1+ConfigData, WindowsFormsApplication2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" : "1",
"WindowsFormsApplication2.Form1+ConfigFormData, WindowsFormsApplication2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" : "2",
"System.Drawing.Point, System.Drawing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" : "3",
"System.Drawing.Size, System.Drawing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" : "4"
},
"$type" : "1",
"Forms" : {
"Form1" : {
"$type" : "2",
"Location" : {
"$type" : "3",
"X" : 147,
"Y" : 147
},
"Size" : {
"$type" : "4",
"Width" : 384,
"Height" : 356
}
}
}
}
Message: The object must implement IConvertible.
Stack:
at System.Convert.ChangeType(Object value, Type conversionType, IFormatProvider provider)
at fastJSON.JSON.ChangeType(Object value, Type conversionType)
at fastJSON.JSON.ParseDictionary(Dictionary`2 d, Dictionary`2 globaltypes, Type type, Object input)
at fastJSON.JSON.CreateStringKeyDictionary(Dictionary`2 reader, Type pt, Type[] types, Dictionary`2 globalTypes)
at fastJSON.JSON.ParseDictionary(Dictionary`2 d, Dictionary`2 globaltypes, Type type, Object input)
at fastJSON.JSON.ToObject(String json, Type type)
at fastJSON.JSON.ToObject[T](String json)
at WindowsFormsApplication2.Form1.button1_Click(Object sender, EventArgs e) in D:\tmp\WindowsFormsApplication2\Form1.cs:line 46
Size
http://www.codeproject.com/script/Articles/ArticleVersion.aspx?aid=159450&av=726125&fid=1610850&df=90&mpp=10&sort=Position&spc=None&tid=4485023
Opened 10 years ago
Closed 10 years ago
#4581 closed (invalid)
Typo - oldforms should be newforms
Description
in the section describing the old form framework it refers to the two ways of setting up forms .. old and new way
It appears as the sample is wrong using:
from django import oldforms as forms # new
where further down is referred to as:
from django import newforms as forms # new
Change History (1)
comment:1 Changed 10 years ago by
(Fixed description formatting.)
I can't see where the second comment appears -- searching for "# new" only shows on occurrence in that file. However, the first comment is using the word "new" to contrast with the previous line (which uses the word "old"). It is explaining the difference between the old and new way of using existing (old-) forms.
There isn't any problem here, it's just a matter of context and I think that is clear to the reader (or should be after a short think). | https://code.djangoproject.com/ticket/4581
pfm_get_event_encoding man page
pfm_get_event_encoding — get raw event encoding
Synopsis
#include <perfmon/pfmlib.h>

int pfm_get_event_encoding(const char *str, int dfl_plm, char **fstr,
                           int *idx, uint64_t **codes, int *count);
Description
This function is used to retrieve the raw event encoding corresponding to the event string in str. The string may contain unit masks and modifiers. The default privilege level mask is passed in dfl_plm. It may be used depending on the event.
This function is deprecated. It is superseded by pfm_get_os_event_encoding(), with the OS set to PFM_OS_NONE. The encoding is then retrieved through the pfm_pmu_encode_arg_t structure.
The following example illustrates the transition:
uint64_t *codes = NULL;
int i, count = 0;

ret = pfm_get_event_encoding("RETIRED_INSTRUCTIONS", PFM_PLM3, NULL, NULL, &codes, &count);
if (ret != PFM_SUCCESS)
   err(1, "cannot get encoding %s", pfm_strerror(ret));

for (i = 0; i < count; i++)
   printf("codes[%d]=0x%"PRIx64"\n", i, codes[i]);

free(codes);
is equivalent to:
pfm_pmu_encode_arg_t arg;
int i;

memset(&arg, 0, sizeof(arg));
arg.size = sizeof(arg);

ret = pfm_get_os_event_encoding("RETIRED_INSTRUCTIONS", PFM_PLM3, PFM_OS_NONE, &arg);
if (ret != PFM_SUCCESS)
   err(1, "cannot get encoding %s", pfm_strerror(ret));

for (i = 0; i < arg.count; i++)
   printf("count[%d]=0x%"PRIx64"\n", i, arg.codes[i]);

free(arg.codes);

The encoding may take several 64-bit integers. The function can use the array passed in codes if the number of entries passed in count is big enough. However, if both *codes is NULL and count is 0, the function allocates the memory necessary to store the encoding. It is up to the caller to eventually free the memory. The number of 64-bit entries in codes is reflected in *count upon return, regardless of whether codes was allocated or used as is. If the number of 64-bit integers is greater than one, then the order in which each component is returned is PMU-model specific. Refer to the PMU-specific man page.

The raw encoding means the encoding as mandated by the underlying PMU model. It may not be directly suitable to pass to a kernel API. You may want to use API-specific library calls to ensure the correct encoding is passed.

If fstr is not NULL, it will point to the fully qualified event string upon successful return. The string contains the event name, any umask set, and the value of all the modifiers. It reflects what the encoding will actually measure. The function allocates the memory to store the string. The caller must eventually free the string.

Here is an example of how this function could be used:

#include <inttypes.h>
#include <err.h>
#include <perfmon/pfmlib.h>

int main(int argc, char **argv)
{
   uint64_t *codes = NULL;
   int count = 0;
   int i, ret;

   ret = pfm_initialize();
   if (ret != PFM_SUCCESS)
      err(1, "cannot initialize library %s", pfm_strerror(ret));

   ret = pfm_get_event_encoding("RETIRED_INSTRUCTIONS", PFM_PLM3, NULL, NULL, &codes, &count);
   if (ret != PFM_SUCCESS)
      err(1, "cannot get encoding %s", pfm_strerror(ret));

   for (i = 0; i < count; i++)
      printf("codes[%d]=0x%"PRIx64"\n", i, codes[i]);

   free(codes);
   return 0;
}
Return
The function returns in *codes the encoding of the event and in *count the number of 64-bit integers needed to support that encoding.
See Also
pfm_get_os_event_encoding(3) | https://www.mankier.com/3/pfm_get_event_encoding
Details
- Reviewers
-
- Commits
- rG3cfeaa4d2c17: [yaml2obj] Move core yaml2obj code into lib and include for use in unit tests
rL368119: [yaml2obj] Move core yaml2obj code into lib and include for use in unit tests
rL368021: [yaml2obj] Move core yaml2obj code into lib and include for use in unit tests
rGc22d9666fc3e: [yaml2obj] Move core yaml2obj code into lib and include for use in unit tests
Diff Detail
Event Timeline
Thanks for taking this on. I look forward to being able to use this in lldb tests.
I'm not an owner here, but the main question I have is about the library-readiness of the code you're moving. I see it's doing things like spewing errors to stderr and even calling exit(), neither of which is a very nice thing to do for a library (even if it's just a "test" library). Do you have any plans for addressing that?
I suggested to Alex offline that he not try to do any more than the bare minimum to get this moved over. Certainly more work needs doing to it, but I think that can be done at a later point rather than upfront when moving it, given how useful it will be to have when working on the libObject code if nothing else.
Changed return of convertYAML to Error. Changed name of files from yaml2X to XEmitter. Wrapped unexported functions in anonymous namespace. clang-format.
git clang-format does not understand moved files it turns out, so it formatted things that I didn't touch. I figure if this is going to happen it might as well be now, though. I can change this back though.
I'd like to have a function which takes a (yaml) string and gives raw bytes in return. So, that would be slightly less than what you're doing in the unit test here (as you're also re-parsing those bytes into an object file), but I am guessing you're going to need the re-parsing bits for the work you're doing anyway.
Also, due to how lldb's object parsers work, I'll need to save that stream of bytes into a file, but that is something that can be easily handled on the lldb side too.
As for object types, my main interest is ELF files, as we already have unit tests using those (by shelling out to yaml2obj). However, having COFF and MachO support would be nice too, and if it's available people might be inclined to use it. Minidump is interesting too, but I already have a mechanism for using that.
I don't know if that answers your question. If it doesn't, you'll have to ask me something more specific. :)
I think it's fine to clang format the whole thing (since git clang-format isn't smart enough), but ideally it'd be a separate patch that lands first, so that this just shows the changes required for moving directories
Sorry, this patch broke tests; I reverted it in r368035.
I fixed those here rG9eee4254796df1a34a0452fa91e8ce4e38b6a5bb. Could you re-land or do I need to do it? | https://reviews.llvm.org/D65255?id=211998
Hey guys! I am having problems with moving platforms in my 2D game and I can't figure out what is causing the issue.
I have a gameobject that is moved via script. It contains the actual platform. The platform itself has a BoxCollider2D, a Rigidbody2D (set to isKinematic) and a script attached to it. Objects that land on top of it are parented to the parent of the platform. All things are doing what I want them to do. If the player jumps onto the platform he is parented to the platform container; the same goes for boxes. However, if the player moves on the platform, the movement is slower than usual. The player is moved using
float move = Input.GetAxis ("Horizontal");
myRigidbody2D.velocity = new Vector2(move*maxSpeed, ySpeed);
If I Debug.Log the velocity, the numbers are the same whether I walk on the platform or on normal ground, but on the platform the player is only about half as fast. How the hell is that possible? What is going on?
------------------ EDIT -----------------------
So I am still going mad about this issue. I have found a way to move my objects with the platform without parenting them, because I read on several threads that parenting to moving platforms is bad practice and can cause problems. The strange phenomena are still happening though. If you want to look into any of my code I will gladly share it. At the moment my next guesses for a potential origin of the problem are:
a) The player movement is manipulated in the FixedUpdate of the characterController script (with Input.GetAxis and then setting the velocity) and in the Update function of my platform script (with transform.Translate). However, when I parented the player in my old approach, I didn't manipulate the movement anywhere but in the player's FixedUpdate. In my platform script I only set a parent for it.
b) To detect whether or not the player is standing on the platform I am using OnCollisionEnter2D and OnCollisionExit2D. I am not sure how and why this would cause such behaviour, but I guess I'll change this anyway since they do not seem to be called reliably when the player hits the platform, and I hope I can achieve a better result using raycasts.
I have already invested an incredible amount of time to get this fixed (around 20 hours of coding, testing, etc.), so I hope I can get this fixed soon. Any help is highly appreciated.
If you are using rigidbodies you shouldn't have to parent the object to the platform, but you do need to make the platform move and accelerate smoothly.
If I understand you correctly, you suggest that I accelerate the platform at a speed where the player never loses contact with the platform (since the reason why I started parenting was that the player moved differently than the platform, resulting in the player losing contact with the platform and falling back on it when moving down, and the player kinda "vibrating" when moving up). However, since I have different objects on the platform that can have different mass, they are moving at different speeds. Also I don't know which speed the platform will have in the end - it might be kinda fast. Won't this lead to problems when trying your approach? Also the player could jump while the platform is accelerating...
Answer by DavidWatts · Sep 08, 2016 at 09:21 PM
Here is a script I wrote while trying this myself a while back; just attach it to the player. Hope this helps.
using UnityEngine;
using System.Collections;
public class SimpleCharacterController : MonoBehaviour {
Transform lastParent;
CharacterController character;
// Update is called once per frame
float cosSlopeLimit;
Vector3 platformVelocity;
void Start () {
character = GetComponent<CharacterController>();
lastParent = transform.parent;
cosSlopeLimit = Mathf.Cos( character.slopeLimit * Mathf.Deg2Rad );
}
void LateUpdate () {
if( transform.parent != lastParent && character.collisionFlags == CollisionFlags.None )
transform.SetParent( lastParent, true );
}
void OnControllerColliderHit ( ControllerColliderHit hit ) {
if( hit.gameObject.GetComponent<MovingPlatform>() ) {
if( hit.normal.y >= cosSlopeLimit )
transform.parent = hit.transform;
else if( transform.parent != lastParent )
transform.parent = lastParent;
}
else if( transform.parent != lastParent )
transform.parent = lastParent;
}
}
Thank you for your answer. As it seems you are using a CharacterController on your character, which I don't have, so I can't attach it to my player since I don't have "collisionFlags" etc. defined. However, I found it interesting that you set the parent in LateUpdate instead of Update. I tried it out but it didn't seem to make a difference. I have now started to completely rewrite my character script and I won't make it physics based again. It's pretty sad that Unity does not have a documented solution for something so basic as moving platforms imo.
https://answers.unity.com/questions/1238599/weird-player-movement-when-parented-to-moving-plat.html
Starting Panda3D
Creating a New Panda3D Application
To start Panda3D, create a text file and save it with a .cxx extension. Any text editor will work. Enter the following text into your C++ file:
#include "pandaFramework.h" #include "pandaSystem.h" int main(int argc, char *argv[]) { // Open a new window framework PandaFramework framework; framework.open_framework(argc, argv); // Set the window title and open the window framework.set_window_title("My Panda3D Window"); WindowFramework *window = framework.open_window(); // Here is room for your own code // Do the main loop, equal to run() in python framework.main_loop(); framework.close_framework(); return (0); }
For information about the Window Framework to open a window, click here.
pandaFramework.h and pandaSystem.h load most of the Panda3D modules.
The main_loop() subroutine runs the Panda3D main loop; as the code comment notes, it is the equivalent of run() in Python.
Running the Program
The steps required to build and run your program were already explained in a previous page.
If Panda3D has been installed properly, a gray window titled My Panda3D Window will appear when you run your program. There is nothing we can do with this window, but that will change shortly. | https://docs.panda3d.org/1.10/cpp/introduction/tutorial/starting-panda3d
I'm trying to write an Overpass query to get all ways (filtered with a specific tag) which aren't included in a relation (with a specific tag).
Here is my query (edited after maxerickson's answer):
// Collect all ways with piste:type=nordic and store the result in a variable .all
way({{bbox}})["piste:type"="nordic"]->.all;
// Select all relations, where one of the ways in variable .all is a member
rel["piste:type"="nordic"](bw.all);
// ...and for those relations find all related way members
way(r);
// Calculate the set difference (._ contains all ways which are members of a relation)
( .all; - ._; );
// return the result including meta data
out meta;
I followed this example; it's pretty close to what I want to do but for nodes, so I just changed a few things to get ways instead of nodes.
Unfortunately my query doesn't return anything. Do you see anything wrong in my query?
Edit after some debugging:
Here is a way that the query should return (it has a piste:type=nordic tag and isn't part of a piste:type=nordic relation):
Thanks!
asked 12 Apr '19, 14:17 by billux; edited 13 Apr '19, 13:46
You need something like
area[name="Sainte-Adèle"]->.searchArea;
way(area.searchArea)["piste:type"="nordic"]->.all;
to even have anything in .all.
The area query operates on OSM tags, so you have to search based on that. You could use around with a distance and a point (either an OSM node or directly specify lat/lon) if that better matches your intent.
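For example, a hypothetical around-based variant of the first statement (radius in meters, coordinates roughly near Sainte-Adèle; adjust both to your area):

way(around:5000,45.95,-74.13)["piste:type"="nordic"]->.all;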
answered 13 Apr '19, 03:28 by maxerickson
You're right, actually I wanted to use {{bbox}} first, but replaced it with area when I posted my question. I have updated my question with a link to the Overpass query and more information after trying to debug the query.
I'm not sure why, but naming the second result set helps:
That's strange, I don't understand why ._ can't be used in that case. Anyway the query works as expected now. Thanks!
The .all in the difference statement has ._ as result set. The next statement ._ picks that up. Thus, you are subtracting the content of .all from itself. I'm sorry that the syntax is misleading in this case.
https://help.openstreetmap.org/questions/68772/overpass-get-all-ways-which-arent-part-of-a-specific-relation
If you are in a situation where you want to upload a file to an FTP server, or delete, rename, or copy some files on an FTP server in your Qt programs, there are no definite choices anywhere. At least that is the case with Qt 5.5. So here it is: I use this library when I need FTP access in Qt for Windows. It uses the Windows API, therefore you won't be able to use this on Linux or macOS. Download it from the link provided below (you may have to register at codeproject.com) and follow the steps to be able to use it in your Qt programs.
1. Download FTP Client class files from here:
2. Extract all downloaded files to a folder named FtpClient under your project folder.
3. Add the headers and sources to your project by adding the following lines in your Qt PRO file:
SOURCES += FtpClient/BlockingSocket.cpp \ FtpClient/FTPClient.cpp \ FtpClient/FTPDataTypes.cpp \ FtpClient/FTPFileStatus.cpp \ FtpClient/FTPListParse.cpp HEADERS += FtpClient/BlockingSocket.h \ FtpClient/Definements.h \ FtpClient/FTPClient.h \ FtpClient/FTPDataTypes.h \ FtpClient/FTPFileStatus.h \ FtpClient/FTPListParse.h \ FtpClient/smart_ptr.h
4. Add Windows Sockets (Winsock) library to your project by adding the following line to your Qt PRO file:
LIBS += -lWs2_32
5. Include the main client class using the following line:
#include "FtpClient/FTPClient.h"
6. I would suggest you take a look at the simple examples in the article provided for FtpClient. You can modify and use them in Qt with very minor changes, but anyway, here is an example of uploading a file to an FTP server. First it tries to delete the existing file. (The same applies to download, rename, or any other function.)
nsFTP::CFTPClient ftpClient;
nsFTP::CLogonInfo logonInfo(host_string.toStdWString(), 21,
                            user_string.toStdWString(), pass_string.toStdWString());

// connect to server
if (!ftpClient.Login(logonInfo))
{
    QMessageBox::critical(this, "Error", "Can't login!");
    return;
}

// do file operations
if (!ftpClient.Delete(remote_file.toStdWString()))
{
    QMessageBox::warning(this, "Warning", "Can't delete remote file!");
}

if (!ftpClient.UploadFile(QDir().toNativeSeparators(remote_dir).toStdWString(),
                          remote_file.toStdWString()))
{
    QMessageBox::critical(this, "Error", "Can't upload!");
}

// disconnect
ftpClient.Logout();

QMessageBox::information(this, "Info", "Finished!");
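Downloading works much the same way within the same session. A sketch only: I'm assuming a DownloadFile method mirroring UploadFile, so check FTPClient.h for the exact name and argument order in your copy of the library. local_file is a QString you define yourself.

// Hypothetical download call; verify the method name and argument order in FTPClient.h
if (!ftpClient.DownloadFile(remote_file.toStdWString(),
                            QDir().toNativeSeparators(local_file).toStdWString()))
{
    QMessageBox::critical(this, "Error", "Can't download!");
}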
Good luck! | https://amin-ahmadi.com/2015/11/02/how-to-use-ftp-in-qt-for-windows/
Sample app to print loaded modules.
Here’s a simple C# sample tool that runs an app and prints the modules loaded.
It’s effectively a highly simplified debugger and uses the ICorDebug debugging APIs as exposed by MDbg.
Here’s the code. [update: 1/26/05: updated code for final 2005 release of MDbg] You can create a a new console app, paste it in, and compile it against the MDbg object model (ref to Mdbgeng.dll, corapi.dll, corapi2.dll from the MDbg sample.):
// Simple harness to dump Module load events.
using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.Samples.Debugging.MdbgEngine;

namespace Stepper
{
    class Program
    {
        [MTAThread] // MDbg is MTA threaded
        static void Main(string[] args)
        {
            if (args == null || args.Length != 1)
            {
                Console.WriteLine("Usage: PrintMods <filename>");
                Console.WriteLine("  Will run <filename> and print all modules loaded.");
                return;
            }
            string nameApplication = args[0];
            Console.WriteLine("Run '{0}' and print loaded modules", nameApplication);

            MDbgEngine debugger = new MDbgEngine();
            debugger.Options.CreateProcessWithNewConsole = true;

            // Specify which debug events we want to receive.
            // The underlying ICorDebug API will stop on all debug events.
            // The MDbgProcess object implements all of these callbacks, but only stops on a set of them
            // based off the Options settings.
            // See CorProcess.DispatchEvent and MDbgProcess.InitDebuggerCallbacks for more details.
            debugger.Options.StopOnModuleLoad = true;

            // Launch the debuggee.
            MDbgProcess proc = debugger.CreateProcess(nameApplication, "", DebugModeFlag.Debug, null);

            while (proc.IsAlive)
            {
                // Let the debuggee run and wait until it hits a debug event.
                proc.Go().WaitOne();
                object o = proc.StopReason;

                // Process is now stopped. proc.StopReason tells us why we stopped.
                // The process is also safe for inspection.
                ModuleLoadedStopReason mr = o as ModuleLoadedStopReason;
                if (mr != null)
                {
                    Console.WriteLine("Module loaded:" + mr.Module.CorModule.Name);
                }
            }
            Console.WriteLine("Done!");
        } // end main
    }
}
Sample output:
Run 'c:\dev\misc\hello\hello.exe' and print all loaded modules.
Module loaded:C:\WINDOWS\assembly\GAC_32\mscorlib\2.0.3600.0__b77a5c561934e089\mscorlib.dll
Module loaded:c:\dev\misc\hello\hello.exe
Module loaded:c:\dev\misc\hello\b.dll
Done!
A managed debugger gets a stream of debug events from the managed debuggee. Debug events include notifications such as Thread created / exited, module load / unload / process exit / breakpoint hit, etc. The debuggee is stopped after each debug event until the debugger continues it. The debugger can inspect it (run callstacks, view variables, etc) during this window.
Here’s the stack:
- ICorDebug: (lowest level) unmanaged com classic debugging API
- CorApi layer: complete managed wrapper around ICorDebug.
- MDbg object model layer: additional logic and synchronization on top of CorApi layer. This includes MdbgEngine and MdbgProcess objects.
- Application: (highest level) consumes MDbg layer to do intelligent things.
At the ICorDebug level, all debug events are dispatched via the ICorDebugManagedCallback interface from another thread. MDbg implements that interface in managed code, and then has glue code to convert that into managed events (see Debugger.cs).
The MDbg layer may hide some debug events and just uses them for internal processing and not propagate them up to the application. The application can set various knobs and register for various hooks to control the event processing. (We don’t have a good uniform general purpose way to do this in the current MDbg sample, although this is improved for our beta 2 drop).
How does the code work?
This tool debugs an app, but just sniffs for the module load debug events. You could think of it as a very simplified or highly specialized debugger.
The application calls debugger.Options.StopOnModuleLoad to tell the MDbg layer to not hide module load events.
The proc.Go().WaitOne() call runs the debuggee until it hits a debug event. proc.Go() actually resumes the debuggee and returns a waithandle that gets signaled once a debug event is hit. This synchronization is all handled in the MDbg layer.
The MDbg layer stores the debug event in the proc.StopReason property. If it’s a module load reason, then we print it out. We could also do further inspection, such as printing the callstack.
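A sketch of such inspection; the member names here (Threads.Active, Frames, Function.FullName) follow my reading of the MDbg sample's object model and should be verified against the drop you have:

// print the active thread's callstack while the process is stopped
foreach (MDbgFrame frame in proc.Threads.Active.Frames)
{
    Console.WriteLine("  " + frame.Function.FullName);
}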
We do all this in a loop while the process is still alive. We check the IsAlive property, but there’s also an exit process debug event we could have sniffed for.
Other possibilities:
There are other ways to print the loaded modules.
1) The profiling APIs overlap the debugging APIs here and offer module inspection notifications.
2) Or one could try using the native debugging APIs, since many managed modules are also native modules. This is dangerous because it builds on the faulty assumption that managed modules are always built on top of native modules. That assumption breaks down in some cases (e.g., in-memory modules).
3) This sample consumes the MDbg object model. A harness could also be written against the other layers, such as the CorApi layer or the ICorDebug COM-classic interfaces directly. Those harnesses would look broadly similar to this one.
4) Instead of writing a dedicated harness, use a debugger script to MDbg.exe. This solution is potentially much more scalable because it can leverage MDbg functionality for future features. For example, with this harness, you can’t stop the harness at a particular module load and do intensive process inspection. | https://docs.microsoft.com/en-us/archive/blogs/jmstall/sample-app-to-print-loaded-modules | CC-MAIN-2020-10 | en | refinedweb |
This post is in response to a reader comment about the practice of using test setup methods: shared initialization logic executed by the test runner prior to each test. Here are the main points that I'm writing this to address:
- Why do I dislike code duplication in tests?
- Extracting logic to a common setup method creates performance problems.
- The indirection in pulling part of the setup out of a test method hurts readability.
Back Story and Explanations
I wrote this post a little under two years ago. It’s germane here because I explain in it my progression from not using the setup method to doing so. For those not explicitly familiar with the concept and perhaps following along with my Chess TDD series, what I’m referring to is a common pattern (or anti-pattern, I suppose, depending on your opinion) in unit testing across many languages and frameworks. Most test runners allow you to specify a unique method that will be called before each test is run and also after each test is run. In the case of my series, using C# and MSTest, the setup happens on a per class basis. Here’s some code to clarify. First, the class under test:
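(A minimal sketch, consistent with the surrounding prose; the 9.8 figure is implied by the ninety-eight-after-ten-seconds test referenced later.)

public class Gravity
{
    public double GetVelocityAfter(int seconds)
    {
        return 9.8 * seconds; // Earth's gravity, hard-coded (for now)
    }
}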
And, now, the plain, “before,” test class:
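(Again a sketch: three tests, each newing up its own method-scoped instance; only the NinetyEight test name is attested in the prose, the others are hypothetical.)

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class GravityTest
{
    [TestMethod]
    public void Returns_Zero_After_0_Seconds()
    {
        var gravity = new Gravity();
        Assert.AreEqual(0, gravity.GetVelocityAfter(0), 0.001);
    }

    [TestMethod]
    public void Returns_NinetyEight_After_10_Seconds()
    {
        var gravity = new Gravity();
        Assert.AreEqual(98, gravity.GetVelocityAfter(10), 0.001);
    }

    [TestMethod]
    public void Returns_NineHundredEighty_After_100_Seconds()
    {
        var gravity = new Gravity();
        Assert.AreEqual(980, gravity.GetVelocityAfter(100), 0.001);
    }
}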
And finally, what MS Test allows me to do:
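(A sketch of the setup-method version; the _gravity field and the BeforeEachTest/TestInitialize names come from the explanation that follows.)

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class GravityTest
{
    private Gravity _gravity;

    [TestInitialize]
    public void BeforeEachTest()
    {
        _gravity = new Gravity(); // runs before each test method
    }

    [TestMethod]
    public void Returns_Zero_After_0_Seconds()
    {
        Assert.AreEqual(0, _gravity.GetVelocityAfter(0), 0.001);
    }

    [TestMethod]
    public void Returns_NinetyEight_After_10_Seconds()
    {
        Assert.AreEqual(98, _gravity.GetVelocityAfter(10), 0.001);
    }

    [TestMethod]
    public void Returns_NineHundredEighty_After_100_Seconds()
    {
        Assert.AreEqual(980, _gravity.GetVelocityAfter(100), 0.001);
    }
}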
Okay, so what’s going on here? Well, the way that MS Test works is that for each test method, it creates a new instance of the test class and executes the method. So, in the case of our original gravity test class, three instances are created. There is no instance state whatsoever, so this really doesn’t matter, but nonetheless, it’s what happens. In the second version, this actually does matter. Three instances get created, and for each of those instances, the _gravity instance is initialized. The “BeforeEachTest” method, by virtue of its “TestInitialize” attribute, is invoked prior to the test method that will be executed as part of that test.
Conceptually, both pieces of code have the same effect. Three instances are created, three Gravity objects are instantiated, three invocations of GetVelocityAfter occur, and three asserts are executed. The difference here is that there are three gravity variables of method-level scope in the first case, and one instance variable of class scope in the second case.
Okay, so why do I choose the second over the first? Well, as I explained in my old post, I didn’t, initially. For a long, long time, I preferred the first option with no test setup, but I eventually came to prefer the second. I mention this strictly to say that I’ve seen merits of both approaches and that this is actually something to which I’ve given a lot of thought. It’s not to engage the subtle but infuriating logical fallacy in which an opponent in an argument says something like “I used to think like you, but I came to realize I was wrong,” thus in one fell swoop establishing himself as more of an authority and invalidating your position without ever making so much as a single cogent point. And, in the end, it’s really a matter of personal preference.
Why I Don’t Like Duplication
So, before I move on to a bit more rigor with an explanation of how I philosophically approach structuring my unit tests, let me address the first point from the comment about why I don’t like duplication. I think perhaps the most powerful thing I can do is provide an example using this code I already have here. And this example isn’t just a “here’s something annoying that could happen,” but rather the actual reason I eventually got away from the “complete setup and teardown in each test” approach. Let’s say that I like my Gravity class, but what I don’t like is the fact that I’ve hard-coded Earth’s gravity into the GetVelocityAfter method (and, for physics buffs out there, I realize that I should, technically, use the mass of the falling object, gravitational constant, and the distance from center of mass, but that’s the benefit of being a professional programmer and not a physicist — I can fudge it). So, let’s change it to this, instead:
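(A sketch; the property name on Planet is hypothetical.)

public class Gravity
{
    private readonly Planet _planet;

    public Gravity(Planet planet)
    {
        _planet = planet;
    }

    public double GetVelocityAfter(int seconds)
    {
        return _planet.GravitationalAcceleration * seconds; // property name is hypothetical
    }
}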
Now, I have some test refactoring to do. With the first approach, I have six things to do. I have to declare a new Planet instance with Earth’s specifications, and then I have to pass that to the Gravity constructor. And then, I have to do that exact same thing twice more. With the second approach, I declare the new Planet instance and add it to the Gravity constructor once and I’m done. For three tests, no big deal. But how about thirty? Ugh.
I suppose you could do some find and replace magic to make it less of a chore, but that can get surprisingly onerous as well. After all, perhaps you've named the instance variable different things in different places. Perhaps you've placed the instantiation logic in a different order or interleaving in some of your tests. These things and more can happen, and you have an error-prone chore on your hands in your tests the same way that you do in production code when you have duplicate/copy-paste logic. And, I don't think you'd find too many people that would stump for duplication in production code as a good thing. I guess by my way of thinking, test code shouldn't be a second-class citizen.
But, again, this is a matter of taste and preference. Here’s an interesting stack overflow question that addresses the tradeoff I’m discussing here. In the accepted answer, spiv says:
Duplicated code is a smell in unit test code just as much as in other code. If you have duplicated code in tests, it makes it harder to refactor the implementation code because you have a disproportionate number of tests to update. Tests should help you refactor with confidence, rather than be a large burden that impedes your work on the code being tested.
If the duplication is in fixture set up, consider making more use of the setUp method or providing more (or more flexible) Creation Methods.
He goes on to stress that readability is important, but he takes a stance more like mine, which is a hard-line one against duplication. The highest voted answer, however, says this:

[…] complex that you need to write unit-test-tests. However, eliminating duplication is usually a good thing, as long as it doesn't obscure anything. Just make sure you don't go past the point of diminishing returns.
In reading this, it seems that the two answers agree on the idea that readability is important and duplication is sub-optimal, but where they differ is that one seems to lean toward, “avoiding duplication is most important even if readability must suffer” and the other leans toward, “readability is most important, so if the only way to get there is duplication, then so be it.” (I’m not trying to put words in either poster’s mouth — just offering my take on their attitudes)
But I think, “why choose?” I’m of the opinion that if redundancy is aiding in readability then some kind of local maximum has been hit and its time to revisit some broader assumptions or at least some conventions. I mean why would redundant information ever be clearer? At best, I think that redundancy is an instructional device used for emphasis. I mean, think of reading a pamphlet on driving safety. It probably tells you to wear your seatbelt 20 or 30 times. This is to beat you over the head with it — not make it a better read.
So, in the end, my approach is one designed to avoid duplication without sacrificing readability. But, of course, readability is somewhat subjective, so it’s really your decision whether or not I succeed as much as it is mine. But, here’s what I do.
My Test Structure
Let’s amend the Gravity class slightly to have it use an IPlanet interface instead of a Planet and also to barf if passed a null planet (more on that shortly):
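(A sketch; the null check follows the "barf if passed a null planet" remark, and the IPlanet member name is hypothetical.)

using System;

public interface IPlanet
{
    double GravitationalAcceleration { get; } // member name is hypothetical
}

public class Gravity
{
    private readonly IPlanet _planet;

    public Gravity(IPlanet planet)
    {
        if (planet == null)
            throw new ArgumentNullException("planet");
        _planet = planet;
    }

    public double GetVelocityAfter(int seconds)
    {
        return _planet.GravitationalAcceleration * seconds;
    }
}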
Let’s then take a look at how I would structure my test class:
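(A sketch of the structure the bullets below describe; Moq stands in here for whatever isolation framework was actually used.)

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class GravityTest
{
    [TestClass]
    public class GetVelocityAfter // nested class named for the method under test
    {
        private Gravity Target { get; set; }

        private Mock<IPlanet> Planet { get; set; }

        [TestInitialize]
        public void BeforeEachTest()
        {
            // only the minimum required to satisfy instance preconditions
            Planet = new Mock<IPlanet>();
            Target = new Gravity(Planet.Object);
        }

        [TestMethod, ExpectedException(typeof(ArgumentNullException))]
        public void Constructor_Throws_On_Null_Planet()
        {
            new Gravity(null);
        }

        [TestMethod]
        public void Returns_NinetyEight_After_10_Seconds_On_Earth()
        {
            Planet.Setup(p => p.GravitationalAcceleration).Returns(9.8); // arrange the mock to behave like Earth

            Assert.AreEqual(98, Target.GetVelocityAfter(10), 0.001);
        }

        [TestMethod]
        public void Returns_Zero_After_0_Seconds_On_Earth()
        {
            Planet.Setup(p => p.GravitationalAcceleration).Returns(9.8); // the duplicate setup the prose laments

            Assert.AreEqual(0, Target.GetVelocityAfter(0), 0.001);
        }
    }
}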
Alright, there’s a lot for me to comment on here, so I’ll highlight some things to pay attention to:
- Instance of class under test is named “Target” to be clear what is being tested at all times.
- Planet instance is just named “Planet” (rather than “MockPlanet”) because this seems to read better to me.
- Nested class has name of method being tested (eliminates duplication and brings focus only to what it does).
- Test initialize method does only the minimum required to create an instance that meets instance preconditions.
- There’s not much going on in the setup method — just instantiating the class fields that any method can modify.
- The second two test methods still have some duplication (duplicate setup).
Now that you’ve processed these, let me extrapolate a bit to my general approach to test readability:
- The class nesting convention helps me keep test names as short as possible while retaining descriptiveness.
- Only precondition-satisfying logic goes in the initialize method (e.g. instantiating the class under test (CUT) and passing a constructor parameter that won’t crash it)
- Setting up state in the class under test and arranging the mock objects is left to the test methods in the context of “Given” for those asserts (e.g. in the second test method, the “Given” is “On_Earth” so it’s up to that test method to arrange the mock planet to behave like Earth).
- I use dependency injection extensively and avoid global state like the plague, so class under test and its depdenencies are all you’ll need and all you’ll see.
- Once you’re used to the Target/Mocks convention, it’s (in my opinion) as readable, if not more so, than a method variable. As a plus, you can always identify the CUT at a glance in my test code. To a lesser degree, this is true of mocks and constants (in C#, these have Pascal Casing)
I suppose (with a sigh) that I’m not quite 100% on the maturity model of eliminating duplication since I don’t currently see a duplication-eliminating alternative to setting up Earth Gravity in two of the test methods. I think it’s not appropriate to pull that into the test initialize since it isn’t universal, and adding a method call would just make the duplication more terse and add needless indirection. To be clear, I think I’m failing rather than my methodology is failing — there’s probably a good approach that I simply have not yet figured out. Another important point here is that sometimes the duplication smell is less about how you structure your tests and more about which tests you write. What I mean is… is that “Returns_NinetyEight_After_10_Seconds_On_Earth” test even necessary? The duplicate setup and similar outcome is, perhaps, telling me that it’s not… But, I digress.
Addressing the Original Concerns
So, after a meandering tour through my approach to unit testing, I’ll address the second and third original points, having already addressed the question of why I don’t like duplication. Regarding performance, the additional lines of code that are executed with my approach are generally quite minimal and sometimes none (such as a case where there are no constructor-injected dependencies or when every unit test needs a mock). I suspect that if you examined code bases in which I’ve written extensive tests and did a time trial of the test suite my way versus with the instantiation logic inlined into each test, the difference would not be significant. Anecdotally, I have not worked in a code base in a long, long time where test suite execution time was a problem and, even thinking back to code bases where test suite performance was a problem, this was caused by other people writing sketchy unit tests involving file I/O, web service calls, and other no-nos. It’s not to say that you couldn’t shave some time off, but I think the pickings would be pretty lean, at least with my approach of minimal setup overhead.
Regarding readability, as I’ve outlined above, I certainly take steps to address readability. Quite frankly, readability is probably the thing most frequently on my mind when I’m programming. When it comes to test readability, there’s going to be inevitable disagreement according to personal tastes. Is my “Target” and initialization convention more readable than an inline method variable also with the convention of being named “Target”? Wow — eye of the beholder. I think so because I think of the instantiation line as distracting noise. You may not, valuing the clarity of seeing the instantiation right there in the method. Truth is with something like that, what’s readable to you is probably mainly a question of what you’re used to.
But one thing that I think quite strongly about test readability is that it is most heavily tied in with compactness of the test methods. When I pick up a book about TDD or see some kind of “how to” instructional about unit tests, the tests always seem to be about three lines long or so. Arrange, act, assert. Setup, poke, verify (as I said in that old post of mine). Whatever — point is, the pattern is clear. Establish a precondition, run an experiment, measure the outcome. As the setup grows, the test’s readability diminishes very quickly. I try to create designs that are minimal, decoupled, compatible with the Single Responsibility Principle, and intuitive. When I do this, test methods tend to remain compact. Eliminating the instantiation line from every test method, to me, is another way of ruthlessly throttling the non-essential logic inside those test methods, so that what’s actually being tested holds center stage.
So, in the end, I don’t know that I’ve made a persuasive case — that’s also “eye of the beholder.” But I do feel as though I’ve done a relatively thorough job of explaining my rationale. And, my mind is always open to being changed. There was a time when I had no test setup method at all, so it’s not like it would be unfamiliar ground to return there.
I have recently been experimenting a lot with test readability. I’ve always been a fan of test setups like you are defending here, but I’ve taken it a step further recently. One of the things that bothered me when refactoring other people’s code is I would end up spending a long time understanding the context behind setups and verifications in the tests I caused to fail. For example a test for a class involved in communication may be verifying that a connection is closed under certain conditions, which would involve clearing up resources and notifying other layers in the…
Do you have a gist or a link to an example I could see on github? I’m not sure I’m picturing this correctly. Sounds to me as though you’re talking about having a series of asserts in a method that you extract that verifies preconditions of setup…?
In my very rambling way I was just saying I prefer to split down my tests further into other methods to try and give more description to the test for people who are perhaps unfamiliar with the class and its context.
I took the approach with your example above here:
Apologies if it doesn’t compile! It is only a simple example, and the benefit would be greater if there were more than one verify for the behaviour (i.e. the overall behaviour of getting a velocity wasn’t just described by one method call), but hopefully you get the gist!
Perfectly clear, and that makes sense. I’ve experimented with approaches like that too, trying to hit on the right combination of readability and discoverability for others. What you did there looks pretty readable to me (though I’ve heard people in the past grumble about asserts failing when they aren’t in the test method, that’s not really a complaint that I have, personally).
My concern with this approach is that you are lumping together Act & Assert. The person reading this test now needs to go look at AssertVelocityAfterNumberOfSeconds to know *how* it’s getting the velocity after N seconds. I commented on the gist with my suggestion. I’m a big fan of more readable assertion libraries. However, I think you have a great point about complex assertions. If you need to verify a bunch of things to ensure the connection was closed, then it’s great to have a method that *only* asserts those things. I would just be very careful not to conflate…
Thanks for the link. On a quick glance it looks promising! I also agree the more readable asserts are nice. I’ve given some thought to your comment and I must admit I’m unsure as to which approach I prefer. On the one hand I can see the benefit of having the action on the SUT visible in the actual test method. I would say though that once you have looked into the AssertVelocityAfterNumberOfSeconds method once, it could make reading multiple similar tests easier. I guess the main reason I lumped together the act and assert was to remove duplication and…
[…] In Defense of Test Setup Methods – Erik Dietrich […]
I am really interested by your points: in fact I’m so interested that I once played with peg.js (a grammar parser in javascript) to write a test skeleton setup for .Net following your and Phil Haack’s recommended organization for tests. Realistically in VS I have a collection of snippets (tSetup, tShould, tThrows…) that do the same work, but this was more of an exercise when I started TDD-ing heavily and wanted to add tests to already existing classes. I don’t know what you think about it but I would not hesitate to push setting up the Earth gravity in…
I think the “very expressive” descriptor captures what my criteria for such a thing would be as well. I’ve sort of moved away from having auxiliary methods in test classes, but that wasn’t a hard/fast rule I made for myself. It’s just happened organically. So, it’s not as though I’d be opposed to the practice if it were readable. It certainly wouldn’t faze me if I were reviewing someone’s code and saw this, as long as the intention was clear.
I very much agree with your points here and try to stress that test code needs to be kept clean every bit as much as production code if it’s to remain in a usable state. Regarding naming and structuring the tests, I might try the nested class approach sometime, but I think you might benefit from naming your original test class better. I like this naming structure, because it lets each test be read like a sentence in the test runner, making it very clear what is being tested.
It looks like a *very* appealing way to structure tests off the cuff. I think I’ve probably never considered something like this before out of the herd mentality of “Foo’s test class is named FooTest and that’s that.” Interestingly, before my .NET heyday when I did a lot of Java, I don’t think I used to sweat the 1 to 1 ratio of FooTest to Foo. But I guess that’s sort of been ingrained in me over the last several years. However, when I actually think about it, the only real reasoning for this that I can cough up is…
I’ve been using it and recommending it for years (even before the blog post). I haven’t seen a downside (the FooTests convention is pretty worthless, IMO). Yes, I will use folders/namespaces to organize as needed. Usually I will start by making my test project folders mirror my SUT’s folders. So for a DDD Core project’s tests I might have folders in both the Core project and the Test project for Model and Services, for instance. In an MVC test project, I might have a folder for Controllers, etc. If a particular class has so many tests-per-method that it is creating…
Everything you’re describing here makes a lot of sense to me. I’m having the same aha! feeling that I’ve had in the past when I’ve first been exposed to something that I came to value a lot. Definitely going to have to give this approach a try, particularly since I can’t think of anything I gain out of the Foo/FooTest convention at all besides it being what I’m used to.
This looks cool, but do you find yourself duplicating your setups a lot? And does that cause issues when refactoring?
Not that I’ve found. I promote following PDD: Pain Driven Development. If it’s causing you pain, do something about it. If duplication in setup between classes (which I assume is what you’re talking about) was causing me pain during refactoring of the SUT, I would simply refactor the tests (perhaps to share a base class or a common helper class or utility method) to remove the duplication.
Of course! I should probably avoid writing comments first thing in the morning… Couldn’t agree more about using pain to drive development. I prefer the naming with your way of doing things; it makes everything much clearer. One of the things I get out of keeping a 1:1 ratio is it becomes quite apparent when a class has too many responsibilities, either through a horrid setup or just sheer number of tests. This can be very useful for getting a grasp of any potential design issues when reviewing code. I suppose your technique could also highlight quite nicely when a…
I agree with your points as well. It has always seemed to me like arguing for redundant test setup within each test method seems to be eschewing Structured Programming for some reason, I suppose simply because you’re in the context of a unit test, a reason which doesn’t make any sense to me. Had one coworker argue for setups in test methods because “you can see it right there,” but imagine if the entirety of the production code were designed under a similar philosophy! Also, I’ve read plenty of people argue for invoking a common CreateTarget() factory method from within…
I think I can put myself in the position of the commenter fairly easily if he’s had a code base inflicted on him with a ton of setup, and especially *conditional* setup in test initialize methods (if we’re going to be testing X, then do this, otherwise, do this). I’ve actually seen people do some pretty ugly stuff in there, which I think colored my original hesitation to adopt them. But what you’re saying resonates with me on every point. I hesitate to conclude “well, this is test code, so the rules are different.” I’ve also heard people argue “I…
Thanks for your extended reply! I like this approach, and I enjoyed reading your thoughts about it. The amount of extraction seems a bit saner than some of the stuff I’ve seen 😉 In a sense it’s not so much different from how I used to do it in Ruby with RSpec. Target is called subject then. The main difference is that in this solution everything is always setup, whereas in RSpec it normally is lazy (so equivalent to having a method create the object). When doing proper unit tests this indeed does not matter much, but I’ve seen a…
I’ve been dabbling a bit with Ruby on Rails of late, but not really enough that I can speak intelligently about test setups and idiomatic approaches in that language and framework, unfortunately. I’ll have to check back when my Ruby cred is higher. I do write integration tests, and I’ll split these loosely into two categories: tests that involve multiple components together in the application and tests that include externalities (GUI, database, files, etc). For the former, I’ll follow a similar pattern to the one outlined here, but for the latter, I’ll typically make use of an actual console application…
[…] other day, in response to a comment, I made a post about test initialize methods and Steve Smith commented and suggested an approach he described in detail in this post on his […] | https://daedtech.com/in-defense-of-test-setup-methods/ | CC-MAIN-2019-22 | en | refinedweb |
tanmay_das left a reply on Laravel Event Is Not Broadcasting On Production Server
For anyone having the same issue:
I changed host to the production URL:
Here is the updated config:
'pusher' => [
    'driver' => 'pusher',
    'key' => env('PUSHER_APP_KEY'),
    'secret' => env('PUSHER_APP_SECRET'),
    'app_id' => env('PUSHER_APP_ID'),
    'options' => [
        'cluster' => env('PUSHER_APP_CLUSTER'),
        'encrypted' => true,
        'host' => env('PUSHER_HOST'),
        'port' => 6001,
        'scheme' => env('PUSHER_SCHEME')
    ],
],
And in the `.env` file:

PUSHER_HOST=example.com

Remember to exclude http/https from the host: it's not `https://example.com`, it's `example.com`.
tanmay_das left a reply on Laravel Event Is Not Broadcasting On Production Server
What's interesting is that the Presence channel is working. Is the Presence channel not bound by the SSL constraints?
tanmay_das left a reply on Laravel Event Is Not Broadcasting On Production Server
Yes I also think that it has something to do with the SSL. Here is my updated pusher config:
'pusher' => [
    'driver' => 'pusher',
    'key' => env('PUSHER_APP_KEY'),
    'secret' => env('PUSHER_APP_SECRET'),
    'app_id' => env('PUSHER_APP_ID'),
    'options' => [
        'cluster' => env('PUSHER_APP_CLUSTER'),
        'encrypted' => true,
        'host' => '127.0.0.1',
        'port' => 6001,
        'scheme' => env('PUSHER_SCHEME'),
        'curl_options' => [
            CURLOPT_SSL_VERIFYHOST => 0,
            CURLOPT_SSL_VERIFYPEER => 0,
        ],
    ],
],
I am also referencing `local_cert` and `local_pk` from my .env file like this:

'local_cert' => env('LOCAL_CERT', null),
'local_pk' => env('LOCAL_PK', null),
In `.env`:

LOCAL_CERT=/etc/nginx/ssl/mydomain.com/123456/server.crt
LOCAL_PK=/etc/nginx/ssl/mydomain.com/123456/server.key
tanmay_das started a new conversation Laravel Event Is Not Broadcasting On Production Server
I am using the laravel-websockets package for some real-time features, with redis as the queue connection. I have an event `TestUpdated` which I am broadcasting on the `test.{id}` private channel. The event gets fired and caught by the client properly when I am on my local machine. But on the production server, a `BroadcastException` is thrown:
Illuminate\Broadcasting\BroadcastException in /home/forge/mydomain.com/vendor/laravel/framework/src/Illuminate/Broadcasting/Broadcasters/PusherBroadcaster.php:117
Horizon dashboard also exposes the event data:

{
    event: {
        test: {
            class: "App\Test",
            id: 1,
            relations: [],
            connection: "mysql"
        },
        socket: null
    },
    connection: null,
    queue: null,
    chainConnection: null,
    chainQueue: null,
    delay: null,
    chained: []
}
Fragment of my `websockets.php` config file:

'apps' => [
    [
        'id' => env('PUSHER_APP_ID'),
        'name' => env('APP_NAME'),
        'key' => env('PUSHER_APP_KEY'),
        'secret' => env('PUSHER_APP_SECRET'),
        'enable_client_messages' => true,
        'enable_statistics' => false,
    ],
],
My observations:

- The production site is served over https
- I am using an arbitrary pusher app id, key and secret (someId, someKey and someSecret)

My client-side config:

window.Echo = new Echo({
    authEndpoint: 'my/endpoint',
    broadcaster: 'pusher',
    key: 'someKey',
    wsHost: process.env.NODE_ENV == 'development' ? window.location.hostname : 'mydomain.com',
    wsPort: 6001,
    wssPort: 6001,
    disableStats: true,
    encrypted: process.env.NODE_ENV == 'development' ? false : true
});
Config from broadcasting.php:

'pusher' => [
    'driver' => 'pusher',
    'key' => env('PUSHER_APP_KEY'),
    'secret' => env('PUSHER_APP_SECRET'),
    'app_id' => env('PUSHER_APP_ID'),
    'options' => [
        'cluster' => env('PUSHER_APP_CLUSTER'),
        'encrypted' => true,
        'host' => '127.0.0.1',
        'port' => 6001,
        'scheme' => env('PUSHER_SCHEME')
    ],
],
How do I fix this?
tanmay_das started a new conversation Laravel Notification Is Broadcasted When The Queue Connection Is Set To Database, But Not When Set To Redis
I am facing this weird issue. I have a notification `App\Notifications\SomethingHappened`.

I am using the laravel-websockets package. I have my queue connection set to database, and my notification class, which implements the `ShouldQueue` interface, has the following `to*` methods: `toArray()`, `toMail()` and another custom channel.

Also, the `via()` method has these channels: `database`, `broadcast` and, conditionally (if the user has an email), `mail`.
My broadcast route in `channels.php` looks like this:

Broadcast::channel('App.User.{id}', function ($user, $id) {
    return (int) $user->id === (int) $id;
});
I do not have a `toBroadcast()` method, since the documentation suggests that:

The toArray method is also used by the broadcast channel to determine which data to broadcast to your JavaScript client.
When I have my queue connection set to `database`, the notification is broadcasted to the client properly. Here is the log:

[2019-04-18 09:29:42][1] Processing: App\Notifications\SomethingHappened
[2019-04-18 09:29:46][1] Processed: App\Notifications\SomethingHappened
[2019-04-18 09:29:46][2] Processing: App\Notifications\SomethingHappened
[2019-04-18 09:29:47][2] Processed: App\Notifications\SomethingHappened
[2019-04-18 09:29:47][3] Processing: App\Notifications\SomethingHappened
[2019-04-18 09:29:47][3] Processed: App\Notifications\SomethingHappened
[2019-04-18 09:29:47][4] Processing: App\Notifications\SomethingHappened
[2019-04-18 09:29:47][4] Processed: App\Notifications\SomethingHappened
[2019-04-18 09:29:47][5] Processing: Illuminate\Notifications\Events\BroadcastNotificationCreated
[2019-04-18 09:29:47][5] Processed: Illuminate\Notifications\Events\BroadcastNotificationCreated
Notice the last process. But when I set my queue connection to `redis`, this is what I get:

[2019-04-18 09:24:25][3] Processing: App\Notifications\SomethingHappened
[2019-04-18 09:24:25][3] Processed: App\Notifications\SomethingHappened
[2019-04-18 09:24:25][4] Processing: App\Notifications\SomethingHappened
[2019-04-18 09:24:26][4] Processed: App\Notifications\SomethingHappened
[2019-04-18 09:24:25][1] Processing: App\Notifications\SomethingHappened
[2019-04-18 09:24:25][2] Processing: App\Notifications\SomethingHappened
[2019-04-18 09:24:27][2] Processed: App\Notifications\SomethingHappened
[2019-04-18 09:24:28][1] Processed: App\Notifications\SomethingHappened
What mistake did I make?
tanmay_das left a reply on Laravel Echo Presence Channel Not Working
No one? :(
tanmay_das started a new conversation Laravel Echo Presence Channel Not Working
I have this simple presence channel:

Broadcast::channel('test', function ($user) {
    return ['id' => $user->id, 'name' => $user->name];
});
I am trying to join the channel from the client side. In my vue component's `mounted()` hook, I am doing the following:

window.Echo.join('test')
    .here(users => console.log(users));
In the `destroyed()` hook, I am doing the following:

window.Echo.leave('test');
But nothing is logged in the console when I navigate to that component. What am I missing?

Public and Private channels are working fine, though. I do not have any event associated with the presence channel. I am using the laravel-websockets package. Feel free to ask me for more info.
tanmay_das left a reply on SPA Redirection In Laravel + Vue
at some point the user comes back on your return_url, with some payment ID in the GET params
The return url I have set is the backend url, because the gateway POSTs the payment ID and status etc. to my server. So, I am guessing that in my controller's method for the `api/success` route, I would have to perform a `return redirect('...')` to the front-end?
tanmay_das started a new conversation SPA Redirection In Laravel + Vue
I have the following situation: I have an SPA; the front-end is built with Vue and the back-end with Laravel. Now I have a problem. I need to:

1. Redirect the user to the payment gateway, posting the required postdata
2. Handle the postdata that is sent back by the gateway to my server (back-end)
My front-end and back-end are running on different origins. I am using axios to perform http requests in the front-end.
My problem is in task #1. I cannot perform the redirection using a form and hidden inputs, because the gateway url requires store_id and store_password params which I cannot put in a form. Also, `window.location` is not very helpful since I cannot POST data using it.
How can I handle this?
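A sketch of the server-side hand-off discussed in the reply above; the route, field names and SPA URL here are all hypothetical:

// routes/api.php -- the gateway POSTs its result to this return_url
Route::post('success', 'PaymentController@success');

// app/Http/Controllers/PaymentController.php
public function success(Request $request)
{
    // read whatever the gateway sent (field names are hypothetical)
    $paymentId = $request->input('payment_id');
    $status = $request->input('status');

    // ... verify and store the payment server-side ...

    // then send the browser back to the SPA
    return redirect('https://frontend.example.com/payment/result?status='.$status);
}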
tanmay_das started a new conversation How To Get LocalStorage Working In Vue Testing
I am trying to test vue components. I have a vue single file component which uses `vuex`. My states are stored in `store.js`, which makes use of `localStorage`. However, when I run `npm test` I get an error that reads:
WEBPACK Compiled successfully in 9416ms
MOCHA Testing...
RUNTIME EXCEPTION Exception occurred while loading your tests
ReferenceError: localStorage is not defined
Tools I am using for testing: @vue/test-utils, expect, jsdom, jsdom-global, mocha, mocha-webpack.

How I run the tests:
"test": "mocha-webpack --webpack-config node_modules/laravel-mix/setup/webpack.config.js --require tests/JavaScript/setup.js tests/JavaScript/**/*.spec.js"
A sample test, `order.spec.js`:

require('../../resources/assets/js/store/store');
require('../../resources/assets/js/app');

import { mount } from '@vue/test-utils';
import Order from '../../resources/assets/js/views/order/List.vue';
import expect from 'expect';

describe('Order', () => {
    it('has alert hidden by default', () => {
        let wrapper = mount(Order);
        expect(wrapper.vm.alert).toBe(false);
    });
});
In the `setup.js` file I am loading jsdom like this:

require('jsdom-global')();
How do I fix this?
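One common fix (not from this thread) is to stub localStorage in setup.js before the store is loaded; a minimal sketch:

// tests/JavaScript/setup.js
require('jsdom-global')();

// older jsdom versions don't implement localStorage; stub a minimal one
global.localStorage = {
    _store: {},
    getItem(key) { return key in this._store ? this._store[key] : null; },
    setItem(key, value) { this._store[key] = String(value); },
    removeItem(key) { delete this._store[key]; },
    clear() { this._store = {}; }
};
if (typeof window !== 'undefined') window.localStorage = global.localStorage;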
tanmay_das left a reply on What To Do When Paginator Is Not Allowed On A Collection?
@Vilfago Thanks for your reply. I tried the second option, but it only returns two rows. Should I call it on `Tag` or `PostTag`? If I call it on `PostTag`, `whereHas('posts')` isn't going to work, because there is no relation called `posts` in `PostTag`. I really wish I could tackle this in an Eloquent-friendly way, rather than a Collection-manipulating way :(
tanmay_das started a new conversation What To Do When Paginator Is Not Allowed On A Collection?
I have two entities: `Post` (posts) and `Tag` (tags). They are in a many-to-many relationship, so I have a pivot table called `PostTag` (post_tag). I want to list all the tags [including a) pivot data and b) post title] which belong to those posts whose author is the logged in user. So I did something like this:
$tags = collect();
$posts = Post::where('user_id', auth()->id())->with('tags')->get();

$posts->each(function($post, $key) use ($tags){
    $post->tags->each(function($tag, $key) use ($tags, $post) {
        $tag->post_title = $post->title;
        $tags->push($tag);
    });
});

return $tags;
However, I also need to paginate the result. So I attempted to return this instead:
return $tags->paginate(10);
But `paginate` is not a method of `Collection` (maybe of `Builder`).
The relationship methods are:

// Post.php
public function tags()
{
    return $this->belongsToMany(Tag::class)->withPivot('updated_at');
}

// Tag.php
public function posts()
{
    return $this->belongsToMany(Post::class);
}
I have a feeling that there must be some easier way of doing it which I may not know:

PostTag::someQueryThatFetchesThoseTagsWithPostTitle(); // If I could do something like this, paginate() would have been available
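One Eloquent-friendly direction (a sketch, not from the thread, assuming the belongsToMany relations shown above):

// start from the Tag query builder so paginate() is available
$tags = Tag::whereHas('posts', function ($query) {
        $query->where('user_id', auth()->id());
    })
    ->with(['posts' => function ($query) {
        $query->where('user_id', auth()->id());
    }])
    ->paginate(10);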
tanmay_das left a reply on Accessing User Roles From Vue In An SPA
Consider this scenario:
<p v-if="Laravel.user.roles.includes('admin')">
    This is a very sensitive info and only the admin is meant to see it
</p>
<p v-else>
    You are not an admin
</p>

What if a user opens up the console and runs `Laravel.user.roles.push('admin')`?
How did/would you tackle that?
tanmay_das left a reply on Touching Parent Timestamp In A Polymorphic Relation
tanmay_das left a reply on Laravel Global Query Scope's WithoutGlobalScope() Not Returning Desired Records
Turns out when I use the full class path, then it works:
App\MyModel::withoutGlobalScope('App\Scopes\ArchiveScope')->get();
Previously I have been using this:
App\MyModel::withoutGlobalScope(ArchiveScope::class)->get();
...as the documentation suggested.
I am using PHP 7.2.5.
tanmay_das left a reply on Laravel Global Query Scope's WithoutGlobalScope() Not Returning Desired Records
Hi @bobbybouwmann Thanks for your reply.
I followed the documentation step by step. Here is my scope that resides in app/Scopes directory:
<?php

namespace App\Scopes;

use Illuminate\Database\Eloquent\Scope;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Builder;

class ArchiveScope implements Scope
{
    /**
     * Apply the scope to a given Eloquent query builder.
     *
     * @param  \Illuminate\Database\Eloquent\Builder  $builder
     * @param  \Illuminate\Database\Eloquent\Model  $model
     * @return void
     */
    public function apply(Builder $builder, Model $model)
    {
        $builder->where('archived_at', '=', NULL);
    }
}
And this is how I am using it inside my model:

/**
 * The "booting" method of the model.
 *
 * @return void
 */
protected static function boot()
{
    parent::boot();

    static::addGlobalScope(new ArchiveScope);
}
Also, I am using soft deletion for that model. Is there a possibility of a conflict with Laravel's scope for soft deletion?
tanmay_das started a new conversation Laravel Global Query Scope's WithoutGlobalScope() Not Returning Desired Records
I have a global query scope called ArchiveScope that mimics the functionality of Soft Deletion. The apply method of that scope looks like this:

public function apply(Builder $builder, Model $model)
{
    $builder->where('archived_at', '=', NULL);
}
So when I use `MyModel::all()`, it returns all the rows that do not have a timestamp (i.e. NULL). But when I want to fetch all the records (including archived ones), I still get the same result. I am running this statement in tinker:
App\MyModel::withoutGlobalScope(ArchiveScope::class)->get();
Strangely, when I use `withoutGlobalScopes()` instead of `withoutGlobalScope(ArchiveScope::class)`, then I get all the records:
App\MyModel::withoutGlobalScopes()->get();
tanmay_das left a reply on DOMException: Invalid Header Name.
Having the same issue here. Let me know if you've solved it.
tanmay_das left a reply on Confusions About Laravel Storage Directory And Its Symlink
Anything that's visible in the browser, shouldn't it be considered public? And if that's the case, shouldn't logo, banner etc. be considered public too?
tanmay_das left a reply on Confusions About Laravel Storage Directory And Its Symlink
@mhankins It's not working. Maybe because it's pointing to `/media/tanmay/3806FF1D11D87FF6/code/myapp/storage/app/private/logo.png`, a path relative to my entire disk (`media/tanmay/...`), not to my application.
And I just realized it will never get past the `storage/app` directory, since the symlink in the public directory is always pointing to `storage/app/public/`.

The only solution I can see is to put my private assets in the public directory :(
tanmay_das started a new conversation Confusions About Laravel Storage Directory And Its Symlink
Okay, this is what I understand about the laravel storage so far:

All my public assets, things like profile pictures or any user-generated files, should reside in `myapp/storage/app/public`, and things that are not user-generated but application-specific, such as logo, banner etc., should reside outside the public directory, for instance in `myapp/storage/app/private`.
We create a symlink within the `myapp/public/` directory that points to `myapp/storage/app/public/`, and we access our public assets by `asset('storage/avatar.png')`; it loads the avatar from `storage/app/public/avatar.png`.
But how do I access my private assets? Say I want to load my `logo.png` file which is in `storage/app/private/logo.png`; how do I access this path?
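The usual pattern (a sketch, not from the thread) is to serve private files through a route or controller; the route name here is hypothetical:

// routes/web.php -- gate this with whatever auth middleware fits
Route::get('/assets/logo', function () {
    // streams a file that is NOT under public/, so no symlink is needed
    return response()->file(storage_path('app/private/logo.png'));
});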
tanmay_das left a reply on Image Validation Rules Example
Are you sure you didn't add it to a different form? Because 3 days ago, I had exactly the same issue and it was caused by the misplacement of the enctype attribute:
tanmay_das left a reply on Image Validation Rules Example
In your opening `<form>` tag, add the `enctype="multipart/form-data"` attribute.
tanmay_das left a reply on Route Exists But Page Not Found
Is it because of `->firstOrFail()`?
tanmay_das started a new conversation Route Exists But Page Not Found
I have a route:
Route::get('generateresult', '[email protected]');
public function gen()
{
    $examinees = \App\Examinee::all();

    foreach ($examinees as $examinee) {
        $result = new Result();
        $result->name = $examinee->user->name;
        $result->email = $examinee->user->email;
        $result->institution = $examinee->institution;
        $result->answered = count($examinee->user->submittedAnswers()->get());
        $result->correct = $examinee->user->get_correct_answers($examinee->user->id);
        $result->submitted_at = $examinee->user->submittedAnswers()->firstOrFail()->created_at;
        $result->save();
    }
}
When I hit the generateresult route, I get a page-not-found error. But if I return a string from the `gen()` method, the string is displayed properly. What went wrong?
tanmay_das started a new conversation How To Add UTC +6:30 To A Carbon Instance
I have a couple of rows in a table with a `created_at` timestamp. I want to show the value of the `created_at` field in my view:

{{ $model->created_at }}
But this displays the timestamp in the default timezone it was saved with. How can I adjust that timezone just for display, as if it was recorded with UTC +6:30?
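One way (a sketch, not from the thread): shift the Carbon instance for display only, since DateTimeZone accepts offset strings like '+06:30':

{{-- copy() avoids mutating the model's attribute --}}
{{ $model->created_at->copy()->timezone('+06:30') }}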
tanmay_das left a reply on How To Determine From And To In A Messaging Application
@mstnorris Thanks for the tip, but setting or retrieving is not what I needed. I am looking for a way to "distinguish" sender from recipient and vice versa. What would be a common approach to determine to whom I am sending the message?
Currently this is what I am doing:

1. I set up a `Route::post` like this: `/message/{to}`
2. A user opens a message thread (within which he can also reply). I set the `from` to whoever the currently logged in user is, by calling `auth()->id()`
3. I fetch the row of the last message. If the id of the currently logged in user is found in either the `to` or `from` field, I set the value of the opposite column as the recipient (`to`)
Is this a good approach?
tanmay_das left a reply on Laravel Multi File Validation Fails
tanmay_das started a new conversation Laravel Multi File Validation Fails
I have a multiple file input field:
<input type="file" id="documents" name="documents[]" multiple>
In ProjectRequest:

$rules = [
    'documents.*' => 'mimes:doc,pdf,jpeg,bmp,png|max:20000',
];

return $rules;
public function store(ProjectRequest $request)
{
    $project = Project::create([
        /* key => value pairs removed to keep the question clean */
    ]);

    foreach ($request->documents as $document) {
        $filename = $document->store('documents');

        Document::create([
            'project_id' => $project->id,
            'filepath' => $filename
        ]);
    }

    return redirect()->back();
}
But when I try to upload a png or pdf I get the following validation error:
The documents.0 must be a file of type: doc, pdf, jpeg, bmp, png.
tanmay_das started a new conversation How To Determine From And To In A Messaging Application
My message table looks like this: `message(id, from, to, body, is_seen)`. Here, both `from` and `to` are ids from the `user` table. I can always determine the `from` id by calling `auth()->id()`, but how do I fill up the `to` column?
tanmay_das left a reply on Whoops! Class App\Http\Controllers\Type Does Not Exist
It looks like your editor is trying to be ultra-smart and putting the `Type $var = null` placeholder in each of the methods.

Change this:

class PagesController extends Controller
{
    public function getIndex(Type $var = null)
    {
        return view('pages.welcome');
    }

    public function getAbout(Type $var = null)
    {
        return view('pages.about');
    }

    public function getContact(Type $var = null)
    {
        return view('pages.contact');
    }
}
To this:

class PagesController extends Controller
{
    public function getIndex()
    {
        return view('pages.welcome');
    }

    public function getAbout()
    {
        return view('pages.about');
    }

    public function getContact()
    {
        return view('pages.contact');
    }
}
tanmay_das started a new conversation Laravel Shared Hosting Deployment
I have a copy of my project in my office pc and I pushed the project to a bitbucket git repo. I have been instructed to deploy the project as soon as I go home. I came to home and:
1. Cloned the repo to my home pc. It didn't have the vendor directory, as it is ignored by the gitignore file
2. A quick google search suggested I run `composer install`, but it didn't work because of unmet requirements
3. So I ran `composer update` instead and it regenerated the vendor directory. Meanwhile, I already uploaded everything else except the `vendor` directory
4. gitignore also ignores the .env file, so I duplicated .env.example, renamed it to .env, generated a key using `php artisan key:generate`, and uploaded the file to the server

I am currently at this stage, about to upload the vendor directory. It contains a lot of files and will consume a huge amount of time. If anyone could confirm that all the above steps were ok and I did not ruin anything, I would upload the vendor directory.
tanmay_das left a reply on Blade Nested Section Produces Misformatted DOM
tanmay_das left a reply on Can Anyone Please Tell Me The Font Name
In headings, they are using Roboto. In paragraphs, Open Sans Light.
tanmay_das started a new conversation Blade Nested Section Produces Misformatted DOM
Here is the content of my home.blade.php file:
@extends('layouts.master')

@section('content')
    @extends('partials.sidebar')

    @section('pagecontent')
        This is home
    @endsection
@endsection
layouts/master.blade.php contains the main layout, which has the typical `<html><head><body>` structure. In its `<body>`, I am yielding to a section called `content`:

<!DOCTYPE html>
<html>
<body>
    <div id="app">
        @yield('content')
    </div>
</body>
</html>
and in my `partials/sidebar.blade.php`, I am yielding to a section called `pagecontent`:

<div id="page-content-wrapper">
    <div class="container-fluid">
        @yield('pagecontent')
    </div>
</div>
So I would naturally expect a DOM like this:
<!DOCTYPE html>
<html>
<body>
    <div id="app">
        <!-- @section('content') -->
        <div id="page-content-wrapper">
            <div class="container-fluid">
                This is home
                <!-- @section('pagecontent') -->
            </div>
        </div>
    </div>
</body>
</html>
Unfortunately, that's not the DOM my blade views are rendering. My sidebar partial doesn't get injected inside the master layout; instead, it is appended to the DOM as a sibling of the entire document:

<div id="page-content-wrapper">
    <div class="container-fluid">
        This is home
        <!-- @section('pagecontent') -->
    </div>
</div>
<!DOCTYPE html>
<html>
<body>
    <div id="app">
        <!-- @section('content') -->
    </div>
</body>
</html>
How can I fix this?
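One common fix (a sketch, not from the thread): Blade doesn't support an @extends nested inside a section, so chain the layouts instead:

{{-- partials/sidebar.blade.php: a mid-level layout that extends master --}}
@extends('layouts.master')

@section('content')
    <div id="page-content-wrapper">
        <div class="container-fluid">
            @yield('pagecontent')
        </div>
    </div>
@endsection

{{-- home.blade.php: extend the sidebar layout instead --}}
@extends('partials.sidebar')

@section('pagecontent')
    This is home
@endsection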
tanmay_das left a reply on Implementing Reset Password Feature Without Make:auth
tanmay_das left a reply on Implementing Reset Password Feature Without Make:auth
I am not manually creating the link; it gets generated by the ResetPassword notification:

public function toMail($notifiable)
{
    return (new MailMessage)
        ->line('You are receiving this email because we received a password reset request for your account.')
        ->action('Reset Password', url(config('app.url').route('password.reset', $this->token, false)))
        ->line('If you did not request a password reset, no further action is required.');
}
tanmay_das left a reply on Implementing Reset Password Feature Without Make:auth
But why is laravel generating the link with a query string? It doesn't happen with make:auth. I just tested: with make:auth, the link is generated as `reset/$token` and not as `reset?$token`.
tanmay_das left a reply on Implementing Reset Password Feature Without Make:auth
Interestingly, when I change my reset link from the query-string form (`reset?$token`) to the path form (`reset/$token`), the reset view loads! What should I do?
tanmay_das left a reply on Implementing Reset Password Feature Without Make:auth
I believe the source of the problem is in this route:

Route::get('/password/reset/{token}', 'ResetPasswordController@showResetForm')->name('password.request');

Somehow it's failing to receive the `$token`, hence the redirection. As mentioned in the `ResetsPasswords` trait:

/**
 * Display the password reset view for the given token.
 *
 * If no token is present, display the link request form.
 */
tanmay_das left a reply on Implementing Reset Password Feature Without Make:auth
Something went wrong. I can send the reset email, but for some reason when I click on the reset password link, I get redirected back to the `showLinkRequestForm()` method. Here is what I have done so far:

My routes:

Route::get('/password/reset', 'ForgotPasswordController@showLinkRequestForm')->name('password.email');
Route::post('/password/email', 'ForgotPasswordController@sendResetLinkEmail');
Route::get('/password/reset/{token}', 'ResetPasswordController@showResetForm')->name('password.request');
Route::post('/password/reset', 'ResetPasswordController@reset')->name('password.reset');
ForgotPasswordController.php:

<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use Illuminate\Foundation\Auth\SendsPasswordResetEmails;

class ForgotPasswordController extends Controller
{
    use SendsPasswordResetEmails;

    /**
     * Create a new controller instance.
     *
     * @return void
     */
    public function __construct()
    {
        $this->middleware('guest');
    }

    public function showLinkRequestForm()
    {
        return view('password.email');
    }
}
ResetPasswordController.php:
<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use Illuminate\Foundation\Auth\ResetsPasswords;
use Illuminate\Http\Request;

class ResetPasswordController extends Controller
{
    use ResetsPasswords;

    /**
     * Where to redirect users after resetting their password.
     *
     * @var string
     */
    protected $redirectTo = '/';

    /**
     * Create a new controller instance.
     *
     * @return void
     */
    public function __construct()
    {
        $this->middleware('guest');
    }

    public function showResetForm(Request $request, $token = null)
    {
        return view('password.reset')->with(
            ['token' => $token, 'email' => $request->email]
        );
    }
}
password.email view (this one loads perfectly):

@extends('layouts.master')

@section('content')
<div class="col-sm-8 offset-sm-2">
    <h1>Forgot Password</h1>
    <form method="POST" action="/password/email">
        {{ csrf_field() }}
        <div class="form-group">
            <label for="email">E-mail</label>
            <input type="email" class="form-control" id="email" name="email" placeholder="Enter email">
        </div>
        <button type="submit" class="btn btn-primary">Send Password Reset Link</button>
    </form>
</div>
@endsection
password.reset view (this one does not):

@extends('layouts.master')

@section('content')
<div class="container">
    <div class="row">
        <div class="col-md-8 col-md-offset-2">
            <div class="panel panel-default">
                <div class="panel-heading">Reset Password</div>
                <div class="panel-body">
                    <form class="form-horizontal" method="POST" action="{{ route('password.request') }}">
                        {{ csrf_field() }}

                        <input type="hidden" name="token" value="{{ $token }}">

                        <div class="form-group{{ $errors->has('email') ? ' has-error' : '' }}">
                            <label for="email" class="col-md-4 control-label">E-Mail Address</label>
                            <div class="col-md-6">
                                <input id="email" type="email" class="form-control" name="email" value="{{ $email or old('email') }}" required autofocus>
                                @if ($errors->has('email'))
                                    <span class="help-block">
                                        <strong>{{ $errors->first('email') }}</strong>
                                    </span>
                                @endif
                            </div>
                        </div>

                        <div class="form-group{{ $errors->has('password') ? ' has-error' : '' }}">
                            <label for="password" class="col-md-4 control-label">Password</label>
                            <div class="col-md-6">
                                <input id="password" type="password" class="form-control" name="password" required>
                                @if ($errors->has('password'))
                                    <span class="help-block">
                                        <strong>{{ $errors->first('password') }}</strong>
                                    </span>
                                @endif
                            </div>
                        </div>

                        <div class="form-group{{ $errors->has('password_confirmation') ? ' has-error' : '' }}">
                            <label for="password-confirm" class="col-md-4 control-label">Confirm Password</label>
                            <div class="col-md-6">
                                <input id="password-confirm" type="password" class="form-control" name="password_confirmation" required>
                                @if ($errors->has('password_confirmation'))
                                    <span class="help-block">
                                        <strong>{{ $errors->first('password_confirmation') }}</strong>
                                    </span>
                                @endif
                            </div>
                        </div>

                        <div class="form-group">
                            <div class="col-md-6 col-md-offset-4">
                                <button type="submit" class="btn btn-primary">
                                    Reset Password
                                </button>
                            </div>
                        </div>
                    </form>
                </div>
            </div>
        </div>
    </div>
</div>
@endsection
tanmay_das left a reply on Implementing Reset Password Feature Without Make:auth
@Snapey There are two controllers for password reset: 1. ForgotPasswordController 2. ResetPasswordController.

If I copy these two controllers into my current project and add the necessary routes like below, is that going to take care of the entire resetting process?

Route::get('password/reset', 'ForgotPasswordController@showLinkRequestForm');
Route::post('password/email', 'ForgotPasswordController@sendResetLinkEmail');
Route::get('password/reset/{token}', 'ResetPasswordController@showResetForm');
Route::post('password/reset', 'ResetPasswordController@reset');
The `ForgotPasswordController` uses the `SendsPasswordResetEmails` trait, in which the `showLinkRequestForm()` method returns a view that Laravel auto-generates during the make:auth process. Can I override that method to return my own view? And the `ResetPasswordController` uses the `ResetsPasswords` trait, where `showResetForm()` also returns a Laravel-generated view. I will have to override it too. Am I on the right track?
tanmay_das left a reply on Implementing Reset Password Feature Without Make:auth
tanmay_das left a reply on Implementing Reset Password Feature Without Make:auth
tanmay_das left a reply on Problem With Find Out Category Name | Laravel 55
Yes, you can. But let me explain a few things first: When you execute
$categories = Category::where('id', $post->category_id)->orderBy('category_name')->get();
you always get a collection of one result, because your where clause will never return more than one row, since there is one row associated with one unique id. You can think of what you are doing now as using a bucket for only one fish.
If you expect only one fish (row), using a bucket (collection) is overkill. Instead, what you could do is use your hand (methods like `first()` and `find()`):
You can achieve this in one of the two ways:
$category = Category::where('id', $post->category_id)->first();
Or:
$category = Category::find($post->category_id);
Then you can throw your fish in blade like this:
{{ $category->category_name }}
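Side note (not from the original reply): `find()` returns `null` when no row matches, so if a missing category should produce a 404 page, `findOrFail()` is an option:

```
// Throws ModelNotFoundException (rendered as a 404) when the id is missing
$category = Category::findOrFail($post->category_id);
```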
tanmay_das left a reply on Problem With Find Out Category Name | Laravel 55
@NoneNameDeveloper Take a look at your variable name. It's in plural: `$categories`. So you are fetching a 'collection' of categories, not 'objects'. You will have to iterate over the `$categories` collection like this:
```
@foreach($categories as $category)
    {{ $category->category_name }}
@endforeach
```
tanmay_das left a reply on Problem With Find Out Category Name | Laravel 55
It's either `categories_name` or `category_name`. Assuming your column name is `category_name` and not `categories_name`, you should change this:

```
{{ $categories->categories_name }}
```

to this:

```
{{ $categories->category_name }}
```
tanmay_das started a new conversation Implementing Reset Password Feature Without Make:auth
I am following this series and I have set up the registration and login features from scratch, without using make:auth, as shown in episode 18 and episode 19.

The make:auth command generates controllers and views for password reset, but how do I implement it now from scratch? At this point I don't have any controller or view set up for resetting passwords.
tanmay_das left a reply on IDE For Laravel
On Sublime, install the Laravel Blade Highlighter package.
On PhpStorm you can customize it yourself from Settings->Editor->Color Scheme->Blade | https://laracasts.com/@tanmay_das | CC-MAIN-2019-22 | en | refinedweb |
Repeater QML Element
The Repeater element allows you to repeat an Item-based component using a model. More...
Properties

- count : int
- delegate : Component
- model : any

Signals

- onItemAdded(int index, Item item)
- onItemRemoved(int index, Item item)

Methods

- Item itemAt(int index)
Detailed Description
The Repeater element is used to create a large number of similar items. Like other view elements, a Repeater has a model and a delegate: for each entry in the model, the delegate is instantiated in a context seeded with data from the model. A Repeater item is usually enclosed in a positioner element such as Row or Column to visually position the multiple delegate items created by the Repeater.
The following Repeater creates three instances of a Rectangle item within a Row:
import QtQuick 1.0

Row {
    Rectangle { width: 10; height: 20; color: "red" }
    Repeater {
        model: 3
        Rectangle { width: 20; height: 20; border.width: 1; color: "yellow" }
    }
    Rectangle { width: 10; height: 20; color: "green" }
}

A Repeater's model can be any of the supported data models, and the delegate can access its index within the repeater as well as the model data relevant to it. Note that a Repeater can only repeat Item-derived elements:
Item { //XXX does not work! Can't repeat QtObject as it doesn't derive from Item. Repeater { model: 10 QtObject {} } }
Property Documentation
count : int

This property holds the number of items in the repeater.

onItemAdded(int index, Item item)

This handler is called when an item is added to the repeater. The index parameter holds the index at which the item was inserted, and the item parameter holds the Item that was added.

onItemRemoved(int index, Item item)

This handler is called when an item is removed from the repeater. The index parameter holds the index at which the item was removed, and the item parameter holds the Item that was removed.

Both signals were introduced in QtQuick 1.1.
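For example, handlers for these signals could be wired up like this (an illustrative sketch, not from the original page):

Repeater {
    model: 3
    Rectangle { width: 20; height: 20 }
    // index and item are the signal's parameters
    onItemAdded: console.log("delegate created at", index, item)
    onItemRemoved: console.log("delegate removed at", index)
}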
| https://doc.qt.io/archives/qt-4.8/qml-repeater.html | CC-MAIN-2019-22 | en | refinedweb |
The most powerful way to customize the user-facing content of your helpdesk is to use Deskpro’s Email templates system.
You can edit templates to change the content of the automatic emails and other user-facing text your helpdesk generates. You can also add custom email templates.
Templates can include phrases: short, re-usable pieces of text, used to store something like the user greeting at the beginning of an email, or the name of a section of the portal.
A major advantage of storing text using phrases is that you can include the same phrase in many templates. Deskpro’s multi-language support is also based around phrases.
You can access Useful template variables from a template, enabling you to retrieve information about the user who is logged in, the ticket they’re currently viewing, etc., and display that back to the user.
You’ll need a basic understanding of HTML to edit templates. If you want to customize the portal design, you’ll need a more advanced knowledge of HTML/CSS.
Here’s an example of how to customize helpdesk content by editing a template and using phrases and variables.
Suppose you decide you want to edit the content of the automatic email users receive when they register an account. By default the email looks something like this:
As you’d expect, “Example User” is automatically replaced with the name of the user who’s receiving the email. We’ll see how this happens later.
Go to Tickets > Email Templates to find the corresponding template. Look under User Email Templates and you will see the Welcome Email template.
Click on the template to open the template editor window.
You’ll see that the template contains code for the Email Subject and Email Body.
Let’s look at the Email Body:
{{ phrase('user.emails.greeting') }}
<br /><br />
{{ phrase('user.emails.register-welcome') }}<br/>
<a href="{{ url_full('user') }}">{{ url_full('user') }}</a>
{% if not person.is_agent_confirmed %}
<br /><br />
{{ phrase('user.emails.register-agent-validation') }}
{% endif %}
You probably recognize that some of this is HTML markup: <br /> for line breaks and <a href> to make a link.
The other parts, such as {{ phrase('user.emails.greeting') }} and {% if not person.is_agent_confirmed %} ... {% endif %}, are the template syntax.
{{ phrase('X') }} is the syntax to include a phrase in the email.
You can look up phrases by going to Settings > Languages. Under Installed Languages, click on your default language, then Edit Phrases.
Under User Interface Phrases, click on Emails to see email phrases. The default content for user.emails.greeting is:
Dear {{to_name}},
When the email is sent, the content of the phrase is inserted into the email. Since the phrase contains the {{to_name}} variable, the template system replaces this with the variable content, which is the name of the user.
If the email is being sent in another language, the translated version of user.emails.greeting from the corresponding language pack gets inserted instead; for example, Spanish-speaking users would see the Spanish translation of the greeting.
Let’s suppose your marketing department has decided that user emails should start with “Hello ...” rather than “Dear ...”
You could replace {{ phrase('user.emails.greeting') }} with Hello {{to_name}}, but that would only change the greeting for this email type, and it wouldn't be translated into other languages.
A better solution is to go to Setup > Languages and use Edit Phrases to enter and save a custom version of the phrase.
All the other email templates which use the phrase will now use the custom version. You can also use Edit Phrases to enter a custom translation for any other languages you have installed.
Sometimes, instead of changing the existing phrases on your helpdesk, you may need to create a new phrase.
To do this, go to Setup > Languages and click on All Custom Phrases then click Add Custom Phrase button.
You will be prompted to choose a name for the phrase (custom phrase names are always prefixed with custom.).
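A template can then reference the new phrase just like a built-in one, for example (with a hypothetical phrase name):

{{ phrase('custom.marketing-greeting') }}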
Note that the Default and Translation sections for a custom phrase are non-functional.
If your helpdesk has multiple languages installed, to translate your custom phrase, you must create it in each language you have, making sure to use the same name for each version. See How do I translate a custom phrase? for details.
Warning
You can’t include variables directly in custom phrases. You must use the method described in Variables in custom phrases. | https://support.deskpro.com/en/guides/admin-guide/editing-templates/introducing-templates | CC-MAIN-2019-22 | en | refinedweb |
NAME

log1p, log1pf, log1pl - logarithm of 1 plus argument
SYNOPSIS
#include <math.h>
double log1p(double x);
float log1pf(float x);
long double log1pl(long double x);

Link with -lm.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
log1p():
    _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L
        || /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE

log1pf(), log1pl():
    _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L
        || /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
DESCRIPTION

These functions return a value equivalent to

    log(1 + x)

The result is computed in a way that is accurate even if the value of x is near zero.
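The difference matters for small arguments. For example (an illustrative program, not part of the original page; compile with -lm):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1e-17;
    /* 1.0 + 1e-17 rounds to 1.0 in double precision, so log() returns 0 */
    printf("log(1 + x) = %.17g\n", log(1.0 + x));
    /* log1p() avoids the rounding and returns approximately 1e-17 */
    printf("log1p(x)   = %.17g\n", log1p(x));
    return 0;
}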
RETURN VALUE

On success, these functions return the natural logarithm of (1 + x).

ATTRIBUTES

For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO

C99, POSIX.1-2001, POSIX.1-2008.
BUGS

Before version 2.22, the glibc implementation did not set errno to EDOM when a domain error occurred.
Before version 2.22, the glibc implementation did not set errno to ERANGE when a range error occurred. | https://manpages.debian.org/buster-backports/manpages-dev/log1p.3.en.html | CC-MAIN-2021-04 | en | refinedweb |
Code-Splitting
Bundling

Most React apps will have their files "bundled" using tools like Webpack, Rollup, or Browserify. Bundling is the process of following imported files and merging them into a single file: a "bundle". This bundle can then be included on a webpage to load an entire app at once.

To avoid winding up with a large bundle, you can "split" your bundle and lazy-load just the things that are currently needed by the user. The best way to introduce code-splitting into your app is through the dynamic import() syntax:

import("./math").then(math => {
  console.log(math.add(16, 26));
});

React.lazy

The React.lazy function lets you render a dynamic import as a regular component:

const OtherComponent = React.lazy(() => import('./OtherComponent'));

The lazy component should then be rendered inside a Suspense component, which lets you show some fallback content (such as a loading indicator) while waiting for the lazy component to load:
import React, { Suspense } from 'react';

const OtherComponent = React.lazy(() => import('./OtherComponent'));

function MyComponent() {
  return (
    <div>
      <Suspense fallback={<div>Loading...</div>}>
        <OtherComponent />
      </Suspense>
    </div>
  );
}
You can even wrap multiple lazy components with a single Suspense component:

import React, { Suspense } from 'react';

const OtherComponent = React.lazy(() => import('./OtherComponent'));
const AnotherComponent = React.lazy(() => import('./AnotherComponent'));

function MyComponent() {
  return (
    <div>
      <Suspense fallback={<div>Loading...</div>}>
        <section>
          <OtherComponent />
          <AnotherComponent />
        </section>
      </Suspense>
    </div>
  );
}
Error boundaries

If the other module fails to load (for example, due to network failure), it will trigger an error. You can handle these errors with Error Boundaries. Once you've created your Error Boundary, you can use it anywhere above your lazy components to display an error state when there's a network error:

import React, { Suspense } from 'react';
import MyErrorBoundary from './MyErrorBoundary';

const OtherComponent = React.lazy(() => import('./OtherComponent'));

const MyComponent = () => (
  <div>
    <MyErrorBoundary>
      <Suspense fallback={<div>Loading...</div>}>
        <OtherComponent />
      </Suspense>
    </MyErrorBoundary>
  </div>
);

Route-based code splitting

Deciding where in your app to introduce code splitting can be a bit tricky. A good place to start is with routes. Here's an example of how to set up route-based code splitting using a library like React Router with React.lazy:
import React, { Suspense, lazy } from 'react';
import { BrowserRouter as Router, Route, Switch } from 'react-router-dom';

const Home = lazy(() => import('./routes/Home'));
const About = lazy(() => import('./routes/About'));

const App = () => (
  <Router>
    <Suspense fallback={<div>Loading...</div>}>
      <Switch>
        <Route exact path="/" component={Home}/>
        <Route path="/about" component={About}/>
      </Switch>
    </Suspense>
  </Router>
);
Named exports

React.lazy currently only supports default exports. If the module you want to import uses named exports, you can create an intermediate module that re-exports it as the default. This ensures that tree shaking keeps working and that you don't pull in unused components:

// ManyComponents.js
export const MyComponent = /* ... */;
export const MyUnrelatedComponent = /* ... */;

// MyComponent.js
export { MyComponent as default } from "./ManyComponents.js";

// MyApp.js
import React, { lazy } from 'react';
const MyComponent = lazy(() => import("./MyComponent.js")); | https://ml.reactjs.org/docs/code-splitting.html | CC-MAIN-2021-04 | en | refinedweb |
Talk:Key:contact
- Key:phone
- Key:url and Key:website (see also Proposed_features/External_links)
User:Emka 19:10, 10 June 2009
- Yes. I use 'phone' rather than 'contact:phone' because that's what most other mappers do. This is no coincidence. It's simpler. I generally disapprove of most proposals to introduce "namespaces". While they have the feel of something nice and rational and organised, that comes at a price. Tags are supposed to be simple. We're not developing a programming language here. New mappers have to learn to type these things in and remember them. The 'contact:' prefix offers very little real benefit but makes a tag much less simple.
- (Copied from my diary I realised I should post this here)
- -- Harry Wood 10:21, 25 August 2011 (BST)
Types of phone numbers
What do you think about tagging various types of phone numbers? Like mobile phone (or cell phone), fixed phone or sip phone for example. I thought:
- contact:phone:fixed=1234
- contact:phone:mobile=1234
- contact:phone:[email protected]
--Dirk86 15:12, 4 March 2010 (UTC)
- I like it, especially mobile phones are in wide use and are often given together with a fixed phone number.--Scai 09:20, 22 September 2010 (BST)
- Drowning in colon characters. How about good old simple tags like phone=* ...and maybe mobilephone=* ? -- Harry Wood 11:38, 22 September 2010 (BST)
More than one phone number
When a business has more than one phone number, what's the best way to capture this?
- adding a contact:phone for each number
- including all phone numbers in contact:phone
Mafeu 12:58, 26 October 2010 (BST)
- you can't add several identical tags, so probably something like contact:phone=phone1;phone2;phone3 --Richlv 19:45, 17 December 2010 (UTC)
Webcam?
A webcam shall be a means to contact somebody? What are you planning, jumping up and down in front of the webcam, hoping to catch some attention? Lulu-Ann
- I confirm. Webcams are normally not a communication channel. --CMartin (talk) 17:37, 26 August 2014 (UTC)
additional forms of communication
i think we could add some more forms of communication:
- contact:skype
- contact:twitter
- contact:facebook
what's your opinion?
- I agree. Gallaecio 19:52, 27 July 2011 (BST)
- I certainly agree for the Facebook page. Must have. These are now often used instead of a Website home page. --Neil Dewhurst, Lyon France (talk) 08:31, 18 September 2013 (UTC)
Contact:name?
Many small shops are known by their owner's name; I think contact:name would be the best way to tag this. --BáthoryPéter 11:14, 28 July 2011 (BST)
Deprecate this tag family
Stats (taginfo) and usage (editors presets) show that the old tags without the prefix "contact:" remain more popular even after two years of coexistence. After a discussion on the tagging list, I suggest to deprecate this tag serie on the wiki and recommand the old but simple keys.--Pieren 22:13, 1 May 2012 (BST)
- AGREE! for the reasons given in the discussion above -- Harry Wood 01:41, 12 September 2012 (BST)
- I think it is useful to maintain the namespace set aside, but NOT for general mapping use; rather as a reserved namespace to support content transformation, that is "contact:phone" as a synonym for "phone" for data conversion purposes but not for user mapping purposes. To this end, suggest creating a bot to revise instances of "contact:phone" to "phone" where "phone" is not currently in use, and to highlight where both are present for manual resolution. This would also mean retiring this page and creating a few wiki redirects. --Ceyockey (talk) 15:39, 3 April 2015 (UTC)
- So what was decided? Did this even go through a voting process or so? Dhiegov (talk) 12:10, 10 February 2019 (UTC)
- The situation is largely unchanged: Keys without a contact prefix remain far more commonly used (and are being added in greater numbers, too, so it's not just existing tagging), but there are mappers who really like the contact prefix and would oppose a deprecation. As far as I know, no one has attempted to put the issue to a vote. --Tordanik 20:52, 14 February 2019 (UTC)
- I made the first step and explicitly described at Wiki page that alternative is considered as preferable by mappers Mateusz Konieczny (talk) 12:57, 19 September 2019 (UTC)
format inconsistency with the key “phone”
Review websites
I have seen a few mappers adding contact:[ yelp | tripadvisor | foursquare ] to businesses. IMHO these are not means of contact, instead these are review website. While I personally think that we do not need them in OSM at all, they certainly do not belong in the contact:* namespace. --Polarbear w (talk) 21:03, 15 September 2017 (UTC)
- In a discussion on the Tagging list, adding those websites was discouraged by most contributors. They are not designed as means of contact to the businesses, and focus on individual reviews. They are seen as an instrument of search engine optimizers and spammers. --Polarbear w (talk) 19:50, 9 October 2017 (UTC)
- Personally, I'd put 99% of the Facebook links I've seen in that category also. Even if you can technically contact the business through Facebook, its main purpose is to make the business look overly good, and they use a lot of the same SEO/spam tactics. Would something like Yelp qualify also? --Adamant1 (talk) 17:31, 24 January 2019 (UTC)
contact:website vs website
Is there any difference in meaning between contact:website and website tag? Mateusz Konieczny (talk) 18:52, 9 October 2017 (UTC)
- Not in my opinion. It was just the attempt by some mappers to group "phone, fax, email, website" under a common key prefix. --Polarbear w (talk) 19:37, 9 October 2017 (UTC)
- It might be splitting hairs, but I think there is a difference. To me, visiting a website doesn't qualify as "contact" any more than it would if I stand outside of a business and look at the hours on their door. (Maybe that's more to do with it being a bad tag though, and not them having different meanings per se.) --Adamant1 (talk) 17:26, 24 January 2019 (UTC)
- Totally agree with Adamant1. --The knife (talk) 22:23, 30 July 2019 (UTC)
- I also agree that contact:website makes tagging more complex, because it alludes to a differentiation between websites which include contact possibilities and those that don't. IMHO we should discourage the use of contact:website for this reason. --Dieterdreist (talk) 15:16, 19 September 2019 (UTC)
I've seen a few people around who have deleted the older, more widely accepted tags like phone=* and replaced them with these. It wasn't on a mass scale or anything (one person in particular did it a lot), but there should still be something on this page about how it's inappropriate to replace the old tags with these ones, as they can coexist. Hopefully it will help the situation at least a little. I don't think the banner at the top is sufficient. --Adamant1 (talk) 07:46, 28 December 2018 (UTC)
- While it's unfortunate that we have two synonymous keys and I wish we could just finally decide which set of keys to use (it's been 10 years!), deleting them like that isn't really acceptable and a recipe for edit wars. Feel free to add something to the page. --Tordanik 20:14, 14 January 2019 (UTC)
contact:youtube?
Is it just me, or is it quite absurd to describe it as a contact method? Mateusz Konieczny (talk) 13:22, 19 May 2020 (UTC)
- Just ignore it Mateusz, the "contact"-prefix is not about sense. Sooner or later it will disappear, if we simply do not use it. --Dieterdreist (talk) 14:42, 19 May 2020 (UTC)
- contact:website in theory can work and link to a contact form - though it is basically never used in that way Mateusz Konieczny (talk) 16:05, 19 May 2020 (UTC)
- It still makes sense in this respect. The extent and nature of contact:*=* isn't well-defined. You can comment on videos, reply to Community posts, (these 2 alone already make contact:youtube=* more directly contact=*-fiting than most contact:website=* tags you observe) and there will be a list of email address and links in the About section. Keys like contact:webcam=* (cf Talk:Key:contact#Webcam.3F could be worse than contact:website=* - there's usually not even any contact channel listed. We simply need to clarify how to use contact:*=*, *:website=* and *:url=*, etc. -- Kovposch (talk) 10:34, 20 May 2020 (UTC)
Addresses?
These tags IMHO are not about Addresses, let’s remove the group or find a better one.—-Dieterdreist (talk) 17:42, 1 October 2020 (UTC)
emergency phone number
Is there any way to include emergency numbers? Eg. the opening_hours of a vet are Mo-Fr 08:00-16:00, but in case of emergency you can call under phone number X. (Mabye contact:emergencyphone ?) --TBKMrt (talk) 07:45, 2 December 2020 (UTC)
- I am not aware of any. There is emergency_telephone_code=* which has 90% values of "112" and might possibly apply, although the term "code" seems strange? Also it is not documented in the wiki. There is some usage of emergency:phone=* (4191 uses, undocumented but possibly suitable) and much less of emergency_phone=* (337 instances, undocumented, but from the values it looks as if it could be suitable for your scope). From this short lookup it seems emergency:phone=* is the best available option. The contact prefix should be avoided unless you want to make everyone's life harder by using multiple keys with the same meaning. For completeness, there is also "emergency:contact:phone=*" with 115 uses and contact:phone:emergency with 106 uses, i.e. both combined are at 5% of emergency:phone=*. --Dieterdreist (talk) 09:03, 2 December 2020 (UTC)
- @Dieterdreist: I really did not expect an answer that quickly so thanks for the quick reply!
- @ emergency_telephone_code: I also think that "code" does not really sound as if it would fit the rest of contact:*
- @ emergency:phone: I saw that one, but I honestly don't really like it since it's awfully close to emergency=phone and this is something completelly else. I thought about using the emergency key in general, but the key as such seems to be more in use for public emergencies (eg. fire_hydrant, life_ring, phone, siren).
- Because of the existing mixes of contact, phone and emergency I would tend to use emergency_phone as the key, even if it does not have a huge amount of uses. The reason is simply that its name is short and descriptive, and it would fit the rest of the contact key. So phone = contact:phone and emergency_phone = contact:emergency_phone. But after all I don't really mind the way it is included.
- Usecase would be this vet that has a public phone number for cases of emergency.
- --TBKMrt (talk) 16:20, 4 December 2020 (UTC)
- I don't know if this would help in any way. I checked/searched taginfo. If you want to use a mix of contact & phone & emergency, I found two schemes which people have started to use:
- => 115x emergency:contact:phone:
- => 106x contact:phone:emergency:
- Additionally, you could look at the values which are used for both versions and check the taginfo chronology tab to see if there is organic growth. --MalgiK (talk) 17:00, 4 December 2020 (UTC)
The world is governed by chance. Randomness stalks us every day of our lives.
– Paul Auster
Random numbers are all around us in the world of data science. Every so often I need to quickly draw up some random numbers to run a thought experiment, or to demonstrate a concept to an audience but without having to download big datasets.
From creating dummy data to shuffling the data for training and testing purposes or initializing weights of a neural network, we generate random numbers all the time in Python. You’ll love this concept once you have the hang of it after this article.
In my opinion, generating random numbers is a must-know topic for anyone in data science. I’ll guide you through the entire random number generation process in Python here and also demonstrate it using different techniques.
New to Python? These two free courses will get you started:
Table of Contents
- Random Library
- Seeding Random Numbers
- Generating Random Numbers in a Range
- uniform()
- randint()
- Picking Up Randomly From a List
- Shuffling a List
- Generating Random Numbers According to Distributions
- gauss()
- expovariate()
Generating Random Numbers in Python using the Random Library
Here’s the good news – there are various ways of generating random numbers in Python. The easiest method is using the random module. It is a built-in module in Python and requires no installation. This module uses a pseudo-random number generator (PRNG) known as Mersenne Twister for generating random numbers.
A pseudo-random number generator is a deterministic random number generator. It does not generate truly random numbers. It takes a number as an input and generates a random number for it.
Note: Do not use the random module for generating random numbers for security purposes. For security and cryptographic uses, you can use the secrets module which uses true-random number generators (TRNG).
Seeding Random Numbers
As we discussed in the above section, the random module takes a number as input and generates a random number for it. This initial value is known as a seed, and the procedure is known as seeding.
The numbers we generate through pseudo-random number generators are deterministic. This means they can be replicated using the same seed.
Let’s understand it with an example:
import random print('Random Number 1=>',random.random()) print('Random Number 2=>',random.random())
Here, I am using the random() function which generates a random number in the range [0.0, 1.0]. Notice here that I haven’t mentioned the value of the seed. By default, the current system time in milliseconds is used as a seed. Let’s take a look at the output.
Both numbers are different because of the change in time during execution from the first statement to the second statement. Let’s see what happens if we seed the generators with the same value:
random.seed(42) print('Random Number 1=>',random.random()) random.seed(42) print('Random Number 2=>',random.random())
We get the same numbers here. That’s why pseudo-random number generators are deterministic and not used in security purposes because anyone with the seed can generate the same random number.
Generating Random Numbers in a Range
So far, we know about creating random numbers in the range [0.0, 1.0]. But what if we have to create a number in a range other than this?
One way is to multiply and add numbers to the number returned by the random() function. For example, random.random() * 3 + 2 will return numbers in the range [2.0, 5.0]. However, this is more of a workaround than a straightforward solution.
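As a quick illustration of the scaling trick:

# random() is in [0.0, 1.0), so * 3 + 2 lands in [2.0, 5.0)
print('Scaled random number=>', random.random() * 3 + 2)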
Don’t worry! The random module has got your back here. It provides uniform() and randint() functions that we can use for this purpose. Let’s understand them one by one.
uniform()
The uniform() function of the random module takes starting and ending values of a range as arguments and returns a floating-point random number in the range [starting, ending]:
print('Random Number in range(2,8)=>', random.uniform(2,8))
randint()
This function is similar to the uniform() function. The only difference is that the uniform() function returns floating-point random numbers, and the randint() function returns an integer. It also returns the number in the range [starting, ending]:
print('Random Number in a range(2,8)=>', random.randint(2,8))
Picking Up Randomly From a List
choice() & choices() are the two functions provided by the random module that we can use for randomly selecting values from a list. Both of these functions take a list as an argument and randomly select a value(s) from it. Can you guess what the difference between choice() and choices() is?
choice() only picks a single value from a list whereas choices() picks multiple values from a list with replacement. One fantastic thing about these functions is that they work on a list containing strings too. Let’s see them in action:
a=[5, 9, 20, 10, 2, 8] print('Randomly picked number=>',random.choice(a)) print('Randomly picked number=>',random.choices(a,k=3))
As you can see, choice() returned a single value from a and choices() returned three values from a. Here, k is the length of the list returned by choices().
One more thing you can notice in the responses returned by choices() is that, in this draw, each value occurs only once. You can increase the probability of a value being picked by passing a list of weights to the choices() function. So, let's increase the probability of 10 to as much as thrice that of the others and see the results:
for _ in range(5): print('Randomly picked number=>',random.choices(a,weights=[1,1,1,3,1,1],k=3))
Here, we can see that 10 occurred in every draw from the list. There also exists a sample() function in the random module that works similarly to the choices() function but takes random samples from a list without replacement.
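For example (continuing with the list a defined above):

# sample() draws k distinct items, i.e. no value is picked twice
print('Random sample=>', random.sample(a, k=3))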
Shuffling a List
Let’s say we don’t want to pick values from a list but you just want to reorder them. We can do this using the shuffle() function from the random module. This shuffle() function takes the list as an argument and shuffles the list in-place:
print('Original list=>',a) random.shuffle(a) print('Shuffled list=>',a)
Note: The shuffle() function does not return a list.
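As an aside (not from the original article), if you need a shuffled copy while keeping the original list intact, sampling the whole list is a common idiom:

# sample() returns a new list, so the original order of a is preserved
shuffled_copy = random.sample(a, k=len(a))
print('Shuffled copy=>', shuffled_copy)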
Generating Random Numbers According to Distributions
One more amazing feature of the random module is that it allows us to generate random numbers based on different probability distributions. There are various functions like gauss(), expovariate(), etc. which help us in doing this.
If you are not familiar with probability distributions, then I highly recommend reading this article: 6 Common Probability Distributions every data science professional should know.
gauss()
Let’s start with the most common probability distribution, i.e., normal distribution. gauss() is a function of the random module used for generating random numbers according to a normal distribution. It takes mean and standard deviation as an argument and returns a random number:
for _ in range(5): print(random.gauss(0,1))
Here, I plotted 1000 random numbers generated by the gauss() function for mean equal to 0 and standard deviation as 1. You can see above that all the points are spread around the mean and they are not widely spread since the standard deviation is 1.
expovariate()
Exponential distribution is another very common probability distribution that you’ll encounter. The expovariate() function is used for getting a random number according to the exponential distribution. It takes the value of lambda as an argument and returns a value from 0 to positive infinity if lambda is positive, and from negative infinity to 0 if lambda is negative:
print('Random number from exponential distribution=>',random.expovariate(10))
End Notes
I often use random numbers for creating dummy datasets and for random sampling. I’d love to know how you use random numbers in your projects so comment down below with your thoughts and share them with the community.
If you found this article informative, then please share it with your friends and comment below your queries and feedback. I have listed some amazing articles related to Python and data science below for your reference:
- What are Lambda Functions? A Quick Guide to Lambda Functions in Python
- Learn How to use the Transform Function in Pandas (with Python code)
- How to use loc and iloc for Selecting Data in Pandas (with Python code!)
| https://www.analyticsvidhya.com/blog/2020/04/how-to-generate-random-numbers-in-python/ | CC-MAIN-2021-04 | en | refinedweb |
Suppose we have one undirected, connected graph with N nodes these nodes are labeled as 0, 1, 2, ..., N-1. graph length will be N, and j is not same as i is in the list graph[i] exactly once, if and only if nodes i and j are connected. We have to find the length of the shortest path that visits every node. We can start and stop at any node, we can revisit nodes multiple times, and we can reuse edges.
So, if the input is like [[1],[0,2,4],[1,3,4],[2],[1,2]], then the output will be 4. Here, one possible path is [0,1,4,2,3].
To solve this, we will follow these steps −
Define one queue
n := size of graph
req := 2^n - 1
Define one map
for initialize i := 0, when i < n, update (increase i by 1), do −
insert {0 OR (2^i), i} into q
if n is same as 1, then −
return 0
for initialize lvl := 1, when not q is empty, update (increase lvl by 1), do −
sz := size of q
while sz is non-zero, decrease sz by 1 in each iteration, do −
Define an array curr = front element of q
delete element from q
for initialize i := 0, when i < size of graph[curr[1]], update (increase i by 1), do
u := graph[curr[1], i]
newMask := (curr[0] OR 2^u)
if newMask is same as req, then −
return lvl
if call count(newMask) of visited[u], then −
Ignore following part, skip to the next iteration
insert newMask into visited[u]
insert {newMask, u} into q
return -1
Let us see the following implementation to get better understanding −
#include <bits/stdc++.h>
using namespace std;

class Solution {
public:
   int shortestPathLength(vector<vector<int>>& graph) {
      queue<vector<int>> q;
      int n = graph.size();
      int req = (1 << n) - 1;                // bitmask with all n nodes visited
      map<int, set<int>> visited;            // node -> set of masks already seen
      for (int i = 0; i < n; i++) {
         q.push({ 0 | (1 << i), i });        // start a BFS from every node
      }
      if (n == 1)
         return 0;
      for (int lvl = 1; !q.empty(); lvl++) { // BFS level = path length so far
         int sz = q.size();
         while (sz--) {
            vector<int> curr = q.front();
            q.pop();
            for (int i = 0; i < (int)graph[curr[1]].size(); i++) {
               int u = graph[curr[1]][i];
               int newMask = (curr[0] | (1 << u));
               if (newMask == req)
                  return lvl;
               if (visited[u].count(newMask))
                  continue;
               visited[u].insert(newMask);
               q.push({ newMask, u });
            }
         }
      }
      return -1;
   }
};

int main() {
   Solution ob;
   vector<vector<int>> v = {{1},{0,2,4},{1,3,4},{2},{1,2}};
   cout << (ob.shortestPathLength(v));
}
{{1},{0,2,4},{1,3,4},{2},{1,2}}
4 | https://www.tutorialspoint.com/shortest-path-visiting-all-nodes-in-cplusplus | CC-MAIN-2021-04 | en | refinedweb |
Release notes
Red Hat Advanced Cluster Management for Kubernetes Release notes
Abstract
Chapter 1. Red Hat Advanced Cluster Management for Kubernetes Release notes
1.1. What’s new in Red Hat Advanced Cluster Management for Kubernetes
Red Hat Advanced Cluster Management for Kubernetes is now a generally available product. See what is available in version 2.0.
Red Hat Advanced Cluster Management for Kubernetes provides visibility of your entire Kubernetes domain with built-in governance, cluster lifecycle management, and application lifecycle management.
- Get an overview of Red Hat Advanced Cluster Management for Kubernetes from Welcome to Red Hat Advanced Cluster Management for Kubernetes.
- See the Multicluster architecture topic to learn more about major components of the product.
- The Getting started guide references common tasks that get you started, as well as the Troubleshooting guide.
1.1.1. Installation
With operator-based installation, you can install a Red Hat OpenShift Container Platform cluster on a configured cloud provider, such as Amazon Web Services, in less than 10 minutes. See Installing while connected online for more information.
1.1.2. Cluster management
- Create clusters on various Kubernetes service providers. You can provision and manage Red Hat OpenShift Container Platform clusters on selected Kubernetes cloud service providers. See Creating a cluster with Red Hat Advanced Cluster Management for Kubernetes for more information.
- Import existing Kubernetes clusters. Import your existing Kubernetes clusters that are hosted on popular cloud service providers, or on private clouds to manage your clusters conveniently in one place. See Importing a target managed cluster to the hub cluster for more information.
- Manage all of your Red Hat OpenShift Container Platform cluster upgrades in one interface. You can upgrade imported and provisioned Red Hat OpenShift Container Platform clusters either individually or in groups by using the console.
1.1.3. Application management
Deploy and maintain business applications distributed across your clusters. This is accomplished through subscription-based automation.
You can also view the complete picture of your applications and their resource statuses from the topology page in the console.
- Subscriptions are Kubernetes resources that serve as sets of definitions for identifying Kubernetes resources (in GitHub, Objectstores, or hub deployables) and Helm charts within channels by using annotations, labels, and versions.
- Application resources are used to group and view the components across your applications.
- Placement rules define where and how your applications are subscribed. Use placement rules to help you facilitate multicluster deployments.
- Channel resources define the source you subscribe to for your application components (Git, Objectstore, Helm repository, or templates (deployables) on the hub).
For more information, see Managing applications.
1.1.4. Security and compliance
Red Hat Advanced Cluster Management for Kubernetes supports several roles and uses Kubernetes authorization mechanisms. For more information, see Role-based access control.
Use the product governance framework to enhance the security for your managed clusters. With the Governance and risk dashboard, you can view and manage the number of security risks and policy violations in your clusters and applications.
Create custom policy controllers to report and validate the compliance of your policies on your cluster. Enable and manage the following policy controllers that are installed by default:
See Governance and risk to learn more about the dashboard and the policy framework.
As you create policies, use the policy element templates to describe how your resource is defined. For more information about the policy elements, see Manage security policies.
1.2. Errata updates
By default, Errata updates are automatically applied. See Upgrading by using the operator for more information.
Important: For reference, Errata links and GitHub numbers might be added to the content and used internally. Links that require access might not be available for the user.
1.2.1. Errata 2.0.7
The Red Hat Advanced Cluster Management for Kubernetes Errata 2.0.7 resolved identified security CVEs.
1.2.2. Errata 2.0.6
View a summarized list of Red Hat Advanced Cluster Management for Kubernetes Errata 2.0.6 updates:
- Fixed an issue that cluster destroy on Google Cloud Platform was not cleaning up all Service Accounts. (GitHub 5948)
- Fixed an issue that caused a temporary error on the create resources page after you detach a managed cluster. (GitHub 6299)
- Fixed an issue that prevented the complete destroying or detaching of a Microsoft Azure managed cluster after the addition of the cluster failed. (GitHub 6353)
- Fixed an issue that caused bare metal clusters to fail to upgrade to 2.1.0 due to memory errors. (GitHub 6898) (Bugzilla 1895799)
- Corrected a PATH error when starting a new Visual Web Terminal session. (GitHub 6928)
- Resolved an issue with the subscription timewindow function that sometimes prevented it from transitioning to and from blocking and unblocking at the scheduled times. (GitHub 7337)
1.2.3. Errata 2.0.5
View a summarized list of Red Hat Advanced Cluster Management for Kubernetes Errata 2.0.5 updates:
1.2.4. Errata 2.0.4
View a summarized list of Red Hat Advanced Cluster Management for Kubernetes Errata 2.0.4 updates:
- Increased the default memory for the search-operator pod for upgrade. (1882748)
- Provided a solution for the search pod collector to prevent crashes. (1883694)
- Provided a solution for a problem with provisioned Bare Metal clusters remaining in the Pending import state. (1860233)
- Added viewer restrictions for the ManagedClusterAction resource. (GitHub 5843)
- Enhanced certificate refresh process for agents. (GitHub 4914)
1.2.5. Errata 2.0.3
View a summarized list of Red Hat Advanced Cluster Management for Kubernetes Errata 2.0.3 updates:
- Added upgrade and install improvements and fixes.
- Resolved resource leaks in open-cluster-management that created system instability.
- Improved bare metal workload messaging since worker nodes are not required.
- Fixed bare metal provider connection edit function, along with other bare metal usability issues.
- Resolved a webhook validation error that caused uninstall failure.
- Fixed a Klusterlet search pods crash.
- Added policy improvements.
In the Console, fixed the following inconsistencies and added the following improvements:
- Fixed instability in Application overview page applications list.
- Resolved Governance and risk page failing if a policy annotation is missing.
- Fixed Topology inconsistencies for policy violations.
- Fixed refresh settings on Policy violation pages.
- Fixed subscriptions that were propagated, but failing in the console.
- Added scroll to cloud providers list to show Bare metal option.
- Enabled DNS VIP field in bare metal cluster create console.
1.2.6. Errata 2.0.2
Errata 2.0.2 resolves a rare problem that caused some managed cluster imports to fail after upgrading from version 2.0.0 to version 2.0.1. You must upgrade to Errata 2.0.1 before upgrading to Errata 2.0.2.
1.2.7. Errata 2.0.1
View a summarized list of Red Hat Advanced Cluster Management for Kubernetes Errata 2.0.1 updates.
- The cluster import process was improved.
- Upgraded the oc and kubectl CLIs to the latest versions for the Visual Web Terminal.
- Administrator (admin) role access to the pod logs of managed clusters is fixed.
- The product uninstallation process was improved.
- Added a label for Bare metal to the Cloud field options list on the Importing a cluster page.
- The default Network type when you create a cluster is updated from OpenShiftSDN to OVNKubernetes.
- Subscriptions support kustomization.yaml files that contain an inline patch where the patch content inside the file is a single string.
- Improved how cloud providers manage sensitive data.
- Removed DNS virtual IP parameter from the create cluster flow.
- Overview page does not become blank when clusters are detached.
1.3. Known issues
Review the known issues for Red Hat Advanced Cluster Management for Kubernetes. The following list contains known issues for this release, or known issues that continued from the previous release.
1.3.1. Installation known issues
1.3.1.1. OpenShift Container Platform cluster upgrade failed status
When an OpenShift Container Platform cluster is in the upgrade stage, the cluster pods are restarted and the cluster might remain in an upgrade failed status for 1-5 minutes. This behavior is expected and resolves after a few minutes.
1.3.1.2. Certificate manager must not exist during an installation
Certificate manager must not exist on a cluster when you install Red Hat Advanced Cluster Management for Kubernetes.
When certificate manager already exists on the cluster, Red Hat Advanced Cluster Management for Kubernetes installation fails.
To resolve this issue, verify if the certificate manager is present in your cluster by running the following command:
kubectl get crd | grep certificates.certmanager
1.3.2. Web console known issues
1.3.2.1. LDAP user names are case-sensitive
LDAP user names are case-sensitive. You must use the name exactly the way it is configured in your LDAP directory.
1.3.2.2. Console features might not display in Firefox earlier versions
The product supports Mozilla Firefox 74.0 or the latest version that is available for Linux, macOS, and Windows. Upgrade to the latest version for the best console compatibility.
1.3.2.3. Unable to search using values with empty spaces
From the console and Visual Web Terminal, users are unable to search for values that contain an empty space.
1.3.2.4. At logout user kubeadmin gets extra browser tab with blank page
When you are logged in as kubeadmin and you click the Log out option in the drop-down menu, the console returns to the login screen, but a browser tab opens with a /logout URL. The page is blank and you can close the tab without impact to your console.
1.3.3. Cluster management known issues
1.3.3.1. Console might report managed cluster policy inconsistency
After a cluster is imported, log in to the imported cluster and make sure all pods that are deployed by the Klusterlet are running. Otherwise, you might see inconsistent data in the console.
For example, if a policy controller is not running, you might not get the same results of violations on the Governance and risk page and the Cluster status.
For instance, you might see 0 violations listed in the Overview status, but you might have 12 violations reported on the Governance and risk page.
In this case, inconsistency between the pages represents a disconnection between the policy-controller-addon on managed clusters and the policy controller on the hub cluster. Additionally, the managed cluster might not have enough resources to run all the Klusterlet components.
As a result, the policy was not propagated to managed cluster, or the violation was not reported back from managed clusters.
1.3.3.2. Importing clusters might require two attempts
When you import a cluster that was previously managed and detached by a Red Hat Advanced Cluster Management hub cluster, the import process might fail the first time. The cluster status is pending import. Run the command again, and the import should be successful.
1.3.3.3. Klusterlet runs on a detached cluster
If you detach an online cluster immediately after it was attached, the Klusterlet starts to run on the detached cluster before the manifestwork syncs. Removal of the managed cluster from the hub cluster does not uninstall the Klusterlet. Complete the following steps to fix the issue:
- Download the cleanup-managed-cluster script from the deploy Git repository.
Run the cleanup-managed-cluster.sh script by entering the following command:
./cleanup-managed-cluster.sh
1.3.3.4. Importing certain versions of IBM Red Hat OpenShift Kubernetes Service clusters is not supported
You cannot import IBM Red Hat OpenShift Kubernetes Service version 3.11 clusters. Later versions of IBM OpenShift Kubernetes Service are supported.
1.3.3.5. Detaching OpenShift Container Platform 3.11 does not remove the open-cluster-manangement-agent
When you detach managed clusters on OpenShift Container Platform 3.11, the open-cluster-management-agent namespace is not automatically deleted. Manually remove the namespace by running the following command:
oc delete ns open-cluster-management-agent
1.3.3.6. Automatic secret updates for provisioned clusters is not supported
When you change your cloud provider access key, the provisioned cluster access key is not updated in the namespace. Run the following command for your cloud provider to update the access key:
Amazon Web Services (AWS)
oc patch secret {CLUSTER-NAME}-aws-creds -n {CLUSTER-NAME} --type json -p='[{"op": "add", "path": "/stringData", "value":{"aws_access_key_id": "{YOUR-NEW-ACCESS-KEY-ID}","aws_secret_access_key":"{YOUR-NEW-aws_secret_access_key}"} }]'
Google Cloud Platform (GCP)
oc set data secret/{CLUSTER-NAME}-gcp-creds -n {CLUSTER-NAME} --from-file=osServiceAccount.json=$HOME/.gcp/osServiceAccount.json
Microsoft Azure
oc set data secret/{CLUSTER-NAME}-azure-creds -n {CLUSTER-NAME} --from-file=osServiceAccount.json=$HOME/.azure/osServiceAccount.json
1.3.3.7. Resources remain after you detach an offline managed cluster
When you detach a managed cluster that is in an offline state, there are some resources that cannot be removed from managed cluster. Complete the following steps to remove the additional resources:
- Make sure you have the oc command line interface configured.
- Make sure you have KUBECONFIG configured on your managed cluster.
- If you run oc get ns | grep open-cluster-management-agent, you should see two namespaces:
open-cluster-management-agent         Active   10m
open-cluster-management-agent-addon   Active   10m
- Download the cleanup-managed-cluster script from the deploy Git repository.
Run the cleanup-managed-cluster.sh script by entering the following command:
./cleanup-managed-cluster.sh
Run the following command to ensure that both namespaces are removed:
oc get ns | grep open-cluster-management-agent
1.3.3.8. Cannot run management ingress as non-root user

You must be logged in as root to run the management-ingress service.
1.3.3.9. Node information from the managed cluster cannot be viewed in search
Search maps RBAC for resources in the hub cluster. Depending on user RBAC settings for Red Hat Advanced Cluster Management, users might not see node data from the managed cluster. Results from search might be different from what is displayed on the Nodes page for a cluster.
1.3.4. Application management known issues
1.3.4.1. YAML manifest cannot create multiple resources

The managedclusteraction resource doesn't support multiple resources. You cannot apply a YAML manifest with multiple resources from the console create resources feature.
1.3.4.2. Console pipeline cards might display different data
Search results for your pipeline return an accurate number of resources, but that number might be different in the pipeline card because the card displays resources not yet used by an application.
For instance, after you search for kind:channel, you might see you have 10 channels, but the pipeline card on the console might represent only 5 channels that are used.
1.3.4.3. Namespace channel subscription remains in failed state
When you subscribe to a namespace channel and the subscription remains in FAILED state after you fixed other associated resources such as channel, secret, configmap, or placement rule, the namespace subscription is not continuously reconciled.

To force the subscription to reconcile again and get out of the FAILED state, complete the following steps:
- Log in to your hub cluster.
- Manually add a label to the subscription using the following command:
oc label subscriptions.apps.open-cluster-management.io the_subscription_name reconcile=true
1.3.4.4. Deployable resources in a namespace channel
You need to manually create deployable resources within the channel namespace.
To create deployable resources correctly, add the following two required labels to the deployable so that the subscription controller can identify which deployable resources are added:
labels:
  apps.open-cluster-management.io/channel: <channel name>
  apps.open-cluster-management.io/channel-type: Namespace
Don’t specify template namespace in each deployable
spec.template.metadata.namespace.
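Putting those rules together, a deployable in a namespace channel could look roughly like this (a sketch; the names, the ConfigMap payload, and the exact apiVersion are illustrative assumptions, not taken from this guide):

apiVersion: apps.open-cluster-management.io/v1
kind: Deployable
metadata:
  name: example-configmap          # illustrative name
  namespace: my-channel-ns         # must be the channel namespace
  labels:
    apps.open-cluster-management.io/channel: my-channel
    apps.open-cluster-management.io/channel-type: Namespace
spec:
  template:                        # note: no metadata.namespace here
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example-config
    data:
      key: value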
For the namespace type channel and subscription, all the deployable templates are deployed to the subscription namespace on managed clusters. As a result, those deployable templates that are defined outside of the subscription namespace are skipped.
See Creating and managing channels for more information.
1.3.4.5. Edit role for application error
A user performing in an Editor role should only have read or update authority on an application, but erroneously the editor can also create and delete an application. Red Hat OpenShift Operator Lifecycle Manager default settings change the setting for the product. To work around the issue, see the following procedure:
- Run oc edit clusterrole applications.app.k8s.io-v1beta1-edit -o yaml to open the application edit cluster role.
- Remove create and delete from the verbs list.
- Save the change.
1.3.4.6. Edit role for placement rule error
A user performing in an Editor role should only have read or update authority on a placement rule, but erroneously the editor can also create and delete, as well. Red Hat OpenShift Operator Lifecycle Manager default settings change the setting for the product. To work around the issue, see the following procedure:
- Run oc edit clusterrole placementrules.apps.open-cluster-management.io-v1-edit to open the placement rule edit cluster role.
- Remove create and delete from the verbs list.
- Save the change.
1.3.4.7. Application not deployed after an updated placement rule
If applications are not deploying after an update to a placement rule, verify that the klusterlet-addon-appmgr pod is running. The klusterlet-addon-appmgr is the subscription container that needs to run on endpoint clusters.
You can run oc get pods -n open-cluster-management-agent-addon to verify.
You can also search for kind:pod cluster:yourcluster in the console and see if the klusterlet-addon-appmgr is running.
If you cannot verify, attempt to import the cluster again and verify again.
1.3.4.8. Subscription operator does not create an SCC
Learn about OpenShift Container Platform SCC at Managing Security Context Constraints (SCC), which is an additional configuration required on the managed cluster.
Different deployments have different security contexts and different service accounts. The subscription operator cannot create an SCC automatically. Administrators control permissions for pods. A Security Context Constraints (SCC) CR is required to enable appropriate permissions for the relative service accounts to create pods in the non-default namespace:
To manually create an SCC CR in your namespace, complete the following:
Find the service account that is defined in the deployments. For example, see the following nginx deployments:

nginx-ingress-52edb
nginx-ingress-52edb-backend
Create an SCC CR in your namespace to assign the required permissions to the service account or accounts. See the following example, where kind: SecurityContextConstraints is added:
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
kind: SecurityContextConstraints
metadata:
  name: ingress-nginx
  namespace: ns-sub-1
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities: null
fsGroup:
  type: RunAsAny
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
users:
- system:serviceaccount:my-operator:nginx-ingress-52edb
- system:serviceaccount:my-operator:nginx-ingress-52edb-backend
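You can then confirm the SCC exists and lists the expected service accounts (a quick check using standard oc usage; this step is not part of the original procedure):

oc get scc ingress-nginx -o yaml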
1.3.4.9. Application channels in unique namespaces
Creating more than one channel in the same namespace can cause errors with the hub cluster. For instance, the namespace charts-v1 is used by the installer as a Helm type channel, so do not create any additional channels in charts-v1.
It is best practice to create each channel in a unique namespace. However, a Git channel can share a namespace with another type of channel including Git, Helm, Kubernetes Namespace, and Object store.
1.3.5. Security known issues
1.3.5.1. Internal error 500 during login to the console
When Red Hat Advanced Cluster Management for Kubernetes is installed and the OpenShift Container Platform is customized with a custom ingress certificate, a 500 Internal Error message appears. You are unable to access the console because the OpenShift Container Platform certificate is not included in the Red Hat Advanced Cluster Management for Kubernetes management ingress. Add the OpenShift Container Platform certificate by completing the following steps:
Create a ConfigMap that includes the certificate authority used to sign the new certificate. Your ConfigMap must be identical to the one you created in the openshift-config namespace. Run the following command:
oc create configmap custom-ca \
  --from-file=ca-bundle.crt=</path/to/example-ca.crt> \
  -n open-cluster-management
Edit your multiclusterhub YAML file by running the following command:
oc edit multiclusterhub multiclusterhub
- Update the spec section by editing the parameter value for customCAConfigmap. The parameter might resemble the following content:
customCAConfigmap: custom-ca
After you complete the steps, wait a few minutes for the changes to propagate to the charts and log in again. The OpenShift Container Platform certificate is added.
1.3.5.2. Cluster name is not listed in the policy detail panel
All cluster violations from specific policies are listed in the policy detail panel. If a user does not have role access to a cluster, the cluster name is not visible. The cluster name is displayed with the following symbol:
-
1.3.5.3. Empty status in policies
The policies that are applied to the cluster are considered NonCompliant when clusters are not running. When you view violation details, the status parameter is empty.
1.3.5.4. Placement rule and policy binding empty
After creating or modifying a policy, the placement rule and the policy binding might be empty in the policy details of the Red Hat Advanced Cluster Management console. This is generally because the policy is disabled, or there was some other updates made to the policy. Ensure that the settings are set correctly for the policy in the YAML view.
1.3.5.5. Recovering cert-manager after removing the helm release
If you remove the cert-manager and cert-manager-webhook helm releases, the Helm releases are triggered to automatically redeploy the charts and generate a new certificate. The new certificate must be synced to the other helm charts that create other Red Hat Advanced Cluster Management components. To recover the certificate components from the hub cluster, complete the following steps:
Remove the helm releases for cert-manager by running the following commands:
oc delete helmrelease cert-manager-5ffd5
oc delete helmrelease cert-manager-webhook-5ca82
- Verify that the helm release is recreated and the pods are running.
Make sure the certificate is generated by running the following command:
oc get certificates.certmanager.k8s.io
You might receive the following response:
NAME                 READY   SECRET               AGE   EXPIRATION
multicloud-ca-cert   True    multicloud-ca-cert   61m   2025-09-27T17:10:47Z
- Update the other components with this certificate by downloading and running the generate-update-issuer-cert-manifest.sh script.
- Verify that all of the secrets from oc get certificates.certmanager.k8s.io have the ready state True.
1.4. Red Hat Advanced Cluster Management for Kubernetes platform considerations for GDPR readiness
1.4.1. Notice
This document is intended to help you in your preparations for General Data Protection Regulation (GDPR) readiness. It provides information about features of the Red Hat Advanced Cluster Management for Kubernetes platform clusters and systems. Red Hat does not provide legal, accounting, or auditing advice or represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation.
1.4.2. Table of Contents
1.4.3. GDPR
General Data Protection Regulation (GDPR) has been adopted by the European Union ("EU") and applies from May 25, 2018.
1.4.3.1. Why is GDPR important?
GDPR establishes a stronger data protection regulatory framework for processing personal data of individuals. GDPR brings:
- New and enhanced rights for individuals
- Widened definition of personal data
- New obligations for processors
- Potential for significant financial penalties for non-compliance
- Compulsory data breach notification
1.4.3.2. Read more about GDPR
1.4.4. Product Configuration for GDPR
The following sections describe aspects of data management within the Red Hat Advanced Cluster Management for Kubernetes platform and provide information on capabilities to help clients with GDPR requirements.
1.4.5. Data Life Cycle
Red Hat Advanced Cluster Management for Kubernetes is an application platform for developing and managing on-premises, containerized applications. It is an integrated environment for managing containers that includes the container orchestrator Kubernetes, cluster lifecycle, application lifecycle, and security frameworks (governance, risk, and compliance).
As such, the Red Hat Advanced Cluster Management for Kubernetes platform deals primarily with technical data that is related to the configuration and management of the platform, some of which might be subject to GDPR. The Red Hat Advanced Cluster Management for Kubernetes platform also deals with information about users who manage the platform. This data will be described throughout this document for the awareness of clients responsible for meeting GDPR requirements.
This data is persisted on the platform on local or remote file systems as configuration files or in databases. Applications that are developed to run on the Red Hat Advanced Cluster Management for Kubernetes platform might deal with other forms of personal data subject to GDPR. The mechanisms that are used to protect and manage platform data are also available to applications that run on the platform. Additional mechanisms might be required to manage and protect personal data that is collected by applications run on the Red Hat Advanced Cluster Management for Kubernetes platform.
To best understand the Red Hat Advanced Cluster Management for Kubernetes platform and its data flows, you must understand how Kubernetes, Docker, and Operators work. These open source components are fundamental to the Red Hat Advanced Cluster Management for Kubernetes platform. You use Kubernetes deployments to place instances of applications, which are built into Operators that reference Docker images. The Operators contain the details about your application, and the Docker images contain all the software packages that your applications need to run.
1.4.5.1. What types of data flow through the Red Hat Advanced Cluster Management for Kubernetes platform?
The Red Hat Advanced Cluster Management for Kubernetes platform deals primarily with technical data that is related to the configuration and management of the platform, some of which might be subject to GDPR. The platform also deals with information about users who manage the platform. Applications that run on the platform might introduce other categories of personal data unknown to the platform.
Information on how this technical data is collected/created, stored, accessed, secured, logged, and deleted is described in later sections of this document.
1.4.5.2. Personal data used for online contact
Customers can submit online comments/feedback/requests for information in a variety of ways, primarily:
- The public Slack community if there is a Slack channel
- The public comments or tickets on the product documentation
- The public conversations in a technical community
Typically, only the client name and email address are used, to enable personal replies for the subject of the contact, and the use of personal data conforms to the Red Hat Online Privacy Statement.
1.4.6. Data Collection
The Red Hat Advanced Cluster Management for Kubernetes platform does not collect sensitive personal data. It does create and manage technical data, such as an administrator user ID and password, service user IDs and passwords, IP addresses, and Kubernetes node names, which might be considered personal data. The Red Hat Advanced Cluster Management for Kubernetes platform also deals with information about users who manage the platform. All such information is only accessible by the system administrator through a management console with role-based access control or by the system administrator through login to a Red Hat Advanced Cluster Management for Kubernetes platform node.
Applications that run on the Red Hat Advanced Cluster Management for Kubernetes platform might collect personal data.
When you assess the use of the Red Hat Advanced Cluster Management for Kubernetes platform running containerized applications and your need to meet the requirements of GDPR, you must consider the types of personal data that are collected by the application and aspects of how that data is managed, such as:
- How is the data protected as it flows to and from the application? Is the data encrypted in transit?
- How is the data stored by the application? Is the data encrypted at rest?
- How are credentials that are used to access the application collected and stored?
- How are credentials that are used by the application to access data sources collected and stored?
- How is data collected by the application removed as needed?
This is not a definitive list of the types of data that are collected by the Red Hat Advanced Cluster Management for Kubernetes platform. It is provided as an example for consideration. If you have any questions about the types of data, contact Red Hat.
1.4.7. Data storage
The Red Hat Advanced Cluster Management for Kubernetes platform persists technical data that is related to configuration and management of the platform in stateful stores on local or remote file systems as configuration files or in databases. Consideration must be given to securing all data at rest. The Red Hat Advanced Cluster Management for Kubernetes platform supports encryption of data at rest in stateful stores that use dm-crypt.
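As a sketch of what this looks like at the operating-system level, a block device can be prepared with dm-crypt by using the cryptsetup utility; the device and volume names below are placeholders, not product-specific values:
cryptsetup luksFormat /dev/sdX
cryptsetup luksOpen /dev/sdX encrypted_volume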
The following items highlight the areas where data is stored, which you might want to consider for GDPR.
- Platform Configuration Data: The Red Hat Advanced Cluster Management for Kubernetes platform configuration can be customized by updating a configuration YAML file with properties for general settings, Kubernetes, logs, network, Docker, and other settings. This data is used as input to the Red Hat Advanced Cluster Management for Kubernetes platform installer for deploying one or more nodes. The properties also include an administrator user ID and password that are used for bootstrap.
- Kubernetes Configuration Data: Kubernetes cluster state data is stored in a distributed key-value store, etcd.
- User Authentication Data, including User IDs and passwords: User ID and password management are handled through a client enterprise LDAP directory. Users and groups that are defined in LDAP can be added to Red Hat Advanced Cluster Management for Kubernetes platform teams and assigned access roles. Red Hat Advanced Cluster Management for Kubernetes platform stores the email address and user ID from LDAP, but does not store the password. Red Hat Advanced Cluster Management for Kubernetes platform stores the group name and, upon login, caches the available groups to which a user belongs. Group membership is not persisted in any long-term way. Securing user and group data at rest in the enterprise LDAP must be considered. Red Hat Advanced Cluster Management for Kubernetes platform also includes an authentication service, OpenID Connect (OIDC), that interacts with the enterprise directory and maintains access tokens. This service uses MongoDB as a backing store.
- Service authentication data, including user IDs and passwords: Credentials that are used by Red Hat Advanced Cluster Management for Kubernetes platform components for inter-component access are defined as Kubernetes Secrets. All Kubernetes resource definitions are persisted in the etcd key-value data store. Initial credentials values are defined in the platform configuration data as Kubernetes Secret configuration YAML files. For more information, see Managing secrets.
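As an illustration of such a Secret configuration YAML file, a minimal sketch follows; the name and values are assumptions for illustration only, not actual product credentials:
apiVersion: v1
kind: Secret
metadata:
  name: example-service-credentials
type: Opaque
stringData:
  username: example-user
  password: example-password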
1.4.8. Data access
Red Hat Advanced Cluster Management for Kubernetes platform data can be accessed through the following defined set of product interfaces.
- Web user interface (the console)
- Kubernetes kubectl CLI
- Red Hat Advanced Cluster Management for Kubernetes CLI
- oc CLI
These interfaces are designed to allow you to make administrative changes to your Red Hat Advanced Cluster Management for Kubernetes cluster. Administration access to Red Hat Advanced Cluster Management for Kubernetes can be secured and involves three logical, ordered stages when a request is made: authentication, role-mapping, and authorization.
1.4.8.1. Authentication
The Red Hat Advanced Cluster Management for Kubernetes platform authentication manager accepts user credentials from the console and forwards the credentials to the backend OIDC provider, which validates the user credentials against the enterprise directory. The OIDC provider then returns an authentication cookie (auth-cookie) with the content of a JSON Web Token (JWT) to the authentication manager. The JWT token persists information such as the user ID and email address, in addition to group membership at the time of the authentication request. This authentication cookie is then sent back to the console. The cookie is refreshed during the session. It is valid for 12 hours after you sign out of the console or close your web browser.
For all subsequent authentication requests made from the console, the front-end NGINX server decodes the available authentication cookie in the request and validates the request by calling the authentication manager.
The Red Hat Advanced Cluster Management for Kubernetes platform CLI requires the user to provide credentials to log in.
The kubectl and oc CLIs also require credentials to access the cluster. These credentials can be obtained from the management console and expire after 12 hours. Access through service accounts is supported.
1.4.8.2. Role Mapping
Red Hat Advanced Cluster Management for Kubernetes platform supports role-based access control (RBAC). In the role mapping stage, the user name that is provided in the authentication stage is mapped to a user or group role. The roles are used when authorizing which administrative activities can be carried out by the authenticated user.
1.4.8.4. Pod Security
Pod security policies are used to set up cluster-level control over what a pod can do or what it can access.
1.4.9. Data Processing
Users of Red Hat Advanced Cluster Management for Kubernetes can control the way that technical data that is related to configuration and management is processed and secured through system configuration.
Role-based access control (RBAC) controls what data and functions can be accessed by users.
Data-in-transit is protected by using TLS. HTTPS (TLS underlying) is used for secure data transfer between user client and back end services. Users can specify the root certificate to use during installation.
Data-at-rest protection is supported by using dm-crypt to encrypt data.
These same platform mechanisms that are used to manage and secure Red Hat Advanced Cluster Management for Kubernetes platform technical data can be used to manage and secure personal data for user-developed or user-provided applications. Clients can develop their own capabilities to implement further controls.
1.4.10. Data Deletion
Red Hat Advanced Cluster Management for Kubernetes platform provides commands, application programming interfaces (APIs), and user interface actions to delete data that is created or collected by the product. These functions enable users to delete technical data, such as service user IDs and passwords, IP addresses, Kubernetes node names, or any other platform configuration data, as well as information about users who manage the platform.
Areas of Red Hat Advanced Cluster Management for Kubernetes platform to consider for support of data deletion:
- All technical data that is related to platform configuration can be deleted through the management console or the Kubernetes kubectl API.
Areas of Red Hat Advanced Cluster Management for Kubernetes platform to consider for support of account data deletion:
- All technical data that is related to platform configuration can be deleted through the Red Hat Advanced Cluster Management for Kubernetes or the Kubernetes kubectl API.
Functionality to remove user ID and password data that is managed through an enterprise LDAP directory is provided by the LDAP product used with the Red Hat Advanced Cluster Management for Kubernetes platform.
1.4.11. Capability for Restricting Use of Personal Data
Using the facilities summarized in this document, Red Hat Advanced Cluster Management for Kubernetes platform enables an end user to restrict usage of any technical data within the platform that is considered personal data.
Under GDPR, users have rights to access, modify, and restrict processing. Refer to other sections of this document to control the following:
Right to access
- Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to provide individuals access to their data.
- Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to provide individuals information about what data Red Hat Advanced Cluster Management for Kubernetes platform holds about the individual.
Right to modify
- Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to allow an individual to modify or correct their data.
- Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to correct an individual’s data for them.
Right to restrict processing
- Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to stop processing an individual’s data.
1.4.12. Appendix
The Red Hat Advanced Cluster Management for Kubernetes platform deals primarily with technical data that is related to the configuration and management of the platform, some of which might be subject to GDPR. The platform also deals with information about users who manage the platform. Applications that run on the platform might introduce other categories of personal data that are unknown to the platform.
This appendix includes details on data that is logged by the platform services. | https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.0/html-single/release_notes/index | CC-MAIN-2021-04 | en | refinedweb |
Zed_Oud
@cook.
Zed_Oud
The HTML I'm using works everywhere, but not when loaded through an extension. That's my whole goal: I am trying to replace webbrowser.open("local file") for use in an extension. I haven't tried to load a local image using my HTML doc, I'll try that out.
Here is an example of my HTML when I point it at ""
<html>
<body bgcolor="#000000">
<img src="" alt="" ><br><br>
<img src="" alt="" ><br><br>
<img src="" alt="" ><br><br>
<img src="" alt="" >
</body>
</html>
Here is my full code (cleaned and formatted, but just as dysfunctional when used as an extension):
# coding: utf-8
import appex
from urllib2 import urlopen
import os, console, requests, urlparse

def write_text(name, text, writ='w'):
    with open(name, writ) as o:
        o.write(text)

def img_page(file_list, link_list=None):
    if link_list is None:
        link_list = file_list
    links = zip(file_list, link_list)
    x = '<br><br>\n'.join(['<img src="{0}" alt="{1}" >'.format(a, b) for a, b in links])
    y = """<html>
<body bgcolor="#000000">
{0}
</body>
</html>
""".format(x)
    return y

def view_doc(text):
    import ui
    w = ui.WebView()
    w.scales_page_to_fit = False
    w.load_html(text)
    w.present()

def open_file(file_path):
    import ui
    file_path = os.path.abspath(file_path)
    file_path = urlparse.urljoin('file://', os.path.abspath(file_path))
    #v = ui.View()
    #file_path = ''
    wv = ui.WebView()
    #v.add_subview(wv)
    wv.load_url(file_path)
    #v.frame = (0,0,320,568)
    #wv.frame = (0,0,320,568)
    #v.present()
    wv.present()

def view_temp_index(file_url_list):
    temp_fn = '__temp.html'
    write_text(temp_fn, img_page(file_url_list))
    open_file(temp_fn)

def get_Pic_Links_Content(content, url=None):
    from bs4 import BeautifulSoup as bs
    if url is None:
        url = ''  # 'http://'
    s = bs(content)
    p = s.findAll('img')
    pics = []
    for x in p:
        y = urlparse.urljoin(url, x['src'])
        if y not in pics:
            pics.append(y)
    return pics

def get_Pic_Links(url):
    r = requests.get(url)
    #print 'viewing pics from url:', r.url
    return get_Pic_Links_Content(r.content, url)

def pick(url):
    choice = console.alert('View:', 'Pick where to view source:', 'Make File', 'View Directly', 'Console')
    pics = get_Pic_Links(url)
    if choice == 1:
        view_temp_index(pics)
    elif choice == 2:
        view_doc(img_page(pics))
    else:
        print '\n'.join(pics)

def main():
    if not appex.is_running_extension():
        print '\nRunning using test data...'
        url = ''
    else:
        url = appex.get_url()
    if url:
        pick(url)
    else:
        print 'No input URL found.'

if __name__ == '__main__':
    main()
Zed_Oud
Correct me if I'm wrong, but does the memory limit while running Python as an appex extension prevent us from loading things with WebView.load_url or WebView.load_html?
I've managed to get a local html file to load, but it will not populate its img tags; images will not load and leave the default blank bar/box (<img src="">). The same html file will load perfectly using ui.webview or webbrowser when NOT running from an extension (the html doc will work anywhere and everywhere else).
import ui

def view(text):
    v = ui.View()
    wv = ui.WebView()
    v.add_subview(wv)
    wv.load_html(text)
    v.frame = (0, 0, 320, 568)
    wv.frame = (0, 0, 320, 568)
    v.present()
There's a cleaned up example of the function called to run my html doc as a string, though I've also tried running as a local file. | https://forum.omz-software.com/user/zed_oud/posts | CC-MAIN-2021-04 | en | refinedweb |
Leo Famulari <address@hidden> writes:

> The header in question, 'stubs.h', looks like this:
>
> ------
> #if !defined __x86_64__
> # include <gnu/stubs-32.h>
> #endif
> #if defined __x86_64__ && defined __LP64__
> # include <gnu/stubs-64.h>
> #endif
> #if defined __x86_64__ && defined __ILP32__
> # include <gnu/stubs-x32.h>
> #endif
> ------
>
> When I build for i686-linux, it works as expected.
>
> Any advice? I'm not really sure what's going on here.

I don't know why it fails, but it works if you give it a newer GCC such as the one on 'core-updates'.

I tried it with 4.0.0, but got stuck on two test failures. Hopefully 4.1.0 is easier to debug... :-)
| https://lists.gnu.org/archive/html/bug-guix/2019-08/msg00005.html | CC-MAIN-2021-04 | en | refinedweb
Introduction: Raspberry Pi and Wiimote Controlled Robot Arm
Step 1: Components
Some programming experience would be nice, but it isn't required. I'll try to keep everything simple. (Knowing me things might not go according to plan)
Since I'm doing this on a raspberry pi, I will explain everything the way I got the arm to run on it. Almost everything is well documented for Windows and Mac, and only a google search away, so it shouldn't be much of a problem.
Step 3: A Much Better Controller
I couldn't think of any controller greater and more fun to use than a game console controller. I used a wiimote because I actually have one, and it is really easy to program. With that said, I'm sure other controllers would work just as well, and maybe they are easier to use (I don't have any others to try, so I don't know).
The wiimote uses bluetooth to connect, so you will need a bluetooth dongle if your computer doesn't have it built in. I'm using a cirago bluetooth/wifi dongle to connect; there are plenty of tutorials on installing the stuff needed to get bluetooth running on the raspberry pi. I installed bluez through the terminal, but I'll assume that you have bluetooth fully functioning.
We need one more download to connect to the wiimote. Pop open that terminal and type: sudo apt-get install python-cwiid
You can see a list of the bluetooth devices by typing: hcitool scan
Press one and two on the wiimote to set it in a discovery mode. Then the wiimote will pop up with its address and the name Nintendo will be there somewhere.
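As a side note, once you know the address, cwiid can also connect to it directly instead of waiting for discovery; this short sketch uses a placeholder address in place of the one reported by hcitool scan:

import cwiid
wm = cwiid.Wiimote('00:1F:32:AB:CD:EF')  # replace with your wiimote's address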
We are now ready to begin using the wiimote.
Step 4: Establishing a Connection
Someone super awesome reverse engineered the usb protocol for the robot arm. They posted all of their work here:
Another really cool person came up with the python code for the arm and they were nice enough to post it in the Magpi, here is the link to that (Page 14 I believe):
My plan was to merge this program with one that reads the wiimote sensors and buttons.
For our program we need to do several things.
Connect to the robot arm
Connect to the wiimote
Tell the wiimote what it needs to do when each button is pressed, and then do it.
We first import all of the functions we need to establish a connection:
import usb.core, usb.util, cwiid, time
Then we connect to the arm with
while (Arm == None):
    Arm = usb.core.find(idVendor=0x1267, idProduct=0x0000)
Next we define a function that lets us control the arm
def ArmMove(Duration, ArmCmd):
    Arm.ctrl_transfer(0x40, 6, 0x100, 0, ArmCmd, 1000)
    time.sleep(Duration)
    ArmCmd = [0, 0, 0]
    Arm.ctrl_transfer(0x40, 6, 0x100, 0, ArmCmd, 1000)
Each arm command uses a byte of info that is sent through the usb to the controller on the arm. Our software just manipulates the info that the arm receives.
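For example, once ArmMove is defined, rotating the base clockwise for one second looks like this (the command bytes come from the table in the full listing below):

ArmMove(1, [0, 1, 0])  # rotate base clockwise for 1 second, LED off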
Once the program connects to the arm, it needs to connect with the wiimote. Pressing both 1 and 2 at the same time sets the wiimote into pairing mode so that it can connect through bluetooth.
Wii = None
while (Wii == None):
    try:
        Wii = cwiid.Wiimote()
    except RuntimeError:
        print 'Error connecting to the wiimote, press 1 and 2'
Step 5: The Code
I followed some of the websites listed above to get an idea of how the code works, but all of the code in my program is my own work. I hope I don't scare anyone away with it. 75% of it is mostly comments and the small portion remaining is actual code. My hope is that people will be able to understand it and make it their own. I'm sure that I did some over kill in it too and that it could be simplified quite a bit, but there are many ways to skin a cat.
You can control the arm with or without the nunchuk. I could never find a program that used all of the stuff in the nunchuk (the joystick, the buttons and the accelerometer) so I wanted to make sure that there was a program that had everything in it and was easy enough to understand so that people could do what ever they wanted with the wiimote and nunchuk. To accomplish this I made sure that every button or sensor or accessory was used so that people can customize it for their own purposes. For this reason, some of the code may seem redundant, but there is a reason behind it. The only things that I didn't use were the speaker (no one can get it to work yet) and the IR sensor.
Feel free to take this code and use it any way you want!
# +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
# |T|A|Y|L|O|R| |B|O|A|R|D|M|A|N| | | |R|P|I| |A|R|M|
# +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

'''First we need to import some files (These files contain all the commands needed for our program)
We have usb.core and usb.util - these are used to control the usb port for our arm
Next we have cwiid which communicates with the wiimote
And we have the time library which allows us to slow or pause things'''
import usb.core, usb.util, cwiid, time

#Give our robot arm an easy name so that we only need to specify all the junk required for the usb connection once
print 'Make sure the arm is ready to go.'
print ''
Armc = 1750
Arm = None
while (Arm == None):
    #This connects to the usb
    Arm = usb.core.find(idVendor=0x1267, idProduct=0x0000)
    #This will wait for a second, and then if the program could not connect, it tells us and tries again
    Armc = Armc + 1
    if (Armc == 2000):
        print 'Could not connect to Arm, double check its connections.'
        print 'Program will continue when connection is established...'
        print ' '
        Armc = Armc/2000
    continue

#Set up our arm transfer protocol through the usb and define a Value we can change to control the arm
Duration = 1
ArmLight = 0
#Create delay variable that we can use (Seconds)
Delay = .1
Counter = 9999

def ArmMove(Duration, ArmCmd):
    #Start Movement
    Arm.ctrl_transfer(0x40, 6, 0x100, 0, ArmCmd, 1000)
    time.sleep(Duration)
    #Stop Movement
    ArmCmd = [0, 0, ArmLight]
    Arm.ctrl_transfer(0x40, 6, 0x100, 0, ArmCmd, 1000)

#Establish a connection with the wiimote
print 'Connected to arm successfully.'
print ' '
print 'Press 1 and 2 on the wiimote at the same time.'
#Connect to mote and if it doesn't connect then it tells us and tries again
time.sleep(3)
print ''
print 'Establishing Connection... 5'
time.sleep(1)
print 'Establishing Connection... 4'
time.sleep(1)
print 'Establishing Connection... 3'
Wii = None
while (Wii == None):
    try:
        Wii = cwiid.Wiimote()
    except RuntimeError:
        print 'Error connecting to the wiimote, press 1 and 2.'
        print 'Establishing Connection... 2'
        time.sleep(1)
        print 'Establishing Connection... 1'
        time.sleep(1)
print ''

#Once a connection has been established with the two devices the rest of the program will continue; otherwise, it will keep on trying to connect to the two devices
#Rumble to indicate connection and turn on the LED
Wii.rumble = 1 #1 = on, 0 = off
print 'Connection Established.'
print 'Press any button to continue...'
print ''
''' Each number turns on different leds on the wiimote
ex) if Wii.led = 1, then LED 1 is on
2 = LED 2 3 = LED 3 4 = LED 4
5 = LED 1, 3 6 = LED 2, 3 7 = LED 1,2,3
8 = LED 4 9 = LED 1, 4 10 = LED 2,4
11 = LED 1,2,4 12 = LED 3,4 13 = LED 1,3,4
14 = LED 2,3,4 15 = LED 1,2,3,4
It counts up in binary to 15'''
time.sleep(1)
Wii.rumble = 0
Wii.led = 15

# Set it so that we can tell when and what buttons are pushed, and make it so that the accelerometer input can be read
Wii.rpt_mode = cwiid.RPT_BTN | cwiid.RPT_ACC | cwiid.RPT_EXT
Wii.state

while True:
    #This deals with the accelerometer
    '''create a variable containing the x accelerometer value
    (changes if mote is turned or flicked left or right)
    flat or upside down = 120, if turned: 90 degrees cc = 95, 90 degrees c = 145'''
    Accx = (Wii.state['acc'][cwiid.X])
    '''create a variable containing the y accelerometer value
    (changes when mote is pointed or flicked up or down)
    flat = 120, IR pointing up = 95, IR pointing down = 145'''
    Accy = (Wii.state['acc'][cwiid.Y])
    '''create a variable containing the z accelerometer value
    (Changes with the motes rotation, or when pulled back or flicked up/down)
    flat = 145, 90 degrees cc or c, or 90 degrees up and down = 120, upside down = 95'''
    Accz = (Wii.state['acc'][cwiid.Z])

    #This deals with the buttons, we tell every button what we want it to do
    buttons = Wii.state['buttons']

    #Get battery life (as a percent of 100):
    #Just delete the number sign in front
    #print Wii.state['battery']*100/cwiid.BATTERY_MAX

    # If the home button is pressed then rumble and quit, plus close program
    if (buttons & cwiid.BTN_HOME):
        print ''
        print 'Closing Connection...'
        ArmLight = 0
        ArmMove(.1, [0, 0, 0])
        Wii.rumble = 1
        time.sleep(.5)
        Wii.rumble = 0
        Wii.led = 0
        exit(Wii)

    ''' Arm Commands Defined by ArmMove are
    [0,1,0] Rotate Base Clockwise
    [0,2,0] Rotate Base C-Clockwise
    [64,0,0] Shoulder Up
    [128,0,0] Shoulder Down
    [16,0,0] Elbow Up
    [32,0,0] Elbow Down
    [4,0,0] Wrist Up
    [8,0,0] Wrist Down
    [2,0,0] Grip Open
    [1,0,0] Grip Close
    [0,0,1] Light On
    [0,0,0] Light Off
    ex) ArmMove(Duration in seconds,[0,0,0])
    This example would stop all movement and turn off the LED'''

    #Check to see if other buttons are pressed
    if (buttons & cwiid.BTN_A):
        print 'A pressed'
        time.sleep(Delay)
        ArmMove(.1, [1, 0, ArmLight])
    if (buttons & cwiid.BTN_B):
        print 'B pressed'
        time.sleep(Delay)
        ArmMove(.1, [2, 0, ArmLight])
    if (buttons & cwiid.BTN_1):
        print '1 pressed'
        ArmMove(.1, [16, 0, ArmLight])
    if (buttons & cwiid.BTN_2):
        print '2 pressed'
        ArmMove(.1, [32, 0, ArmLight])
    if (buttons & cwiid.BTN_MINUS):
        print 'Minus pressed'
        ArmMove(.1, [8, 0, ArmLight])
    if (buttons & cwiid.BTN_PLUS):
        print 'Plus pressed'
        ArmMove(.1, [4, 0, ArmLight])
    if (buttons & cwiid.BTN_UP):
        print 'Up pressed'
        ArmMove(.1, [64, 0, ArmLight])
    if (buttons & cwiid.BTN_DOWN):
        print 'Down pressed'
        ArmMove(.1, [128, 0, ArmLight])
    if (buttons & cwiid.BTN_LEFT):
        print 'Left pressed'
        ArmMove(.1, [0, 2, ArmLight])
    if (buttons & cwiid.BTN_RIGHT):
        print 'Right pressed'
        ArmMove(.1, [0, 1, ArmLight])

    #Here we handle the nunchuk, along with the joystick and the buttons
    while(1):
        if Wii.state.has_key('nunchuk'):
            try:
                #Here is the data for the nunchuk stick:
                #X axis: LeftMax = 25, Middle = 125, RightMax = 225
                NunchukStickX = (Wii.state['nunchuk']['stick'][cwiid.X])
                #Y axis: DownMax = 30, Middle = 125, UpMax = 225
                NunchukStickY = (Wii.state['nunchuk']['stick'][cwiid.Y])
                #The 'NunchukStickX' and the 'NunchukStickY' variables now store the stick values

                #Here we take care of all of our data for the accelerometer
                #The nunchuk has an accelerometer that records in a similar manner to the wiimote, but the number range is different
                #The X range is: 70 if tilted 90 degrees to the left and 175 if tilted 90 degrees to the right
                NAccx = Wii.state['nunchuk']['acc'][cwiid.X]
                #The Y range is: 70 if tilted 90 degrees down (the buttons pointing down), and 175 if tilted 90 degrees up (buttons pointing up)
                NAccy = Wii.state['nunchuk']['acc'][cwiid.Y]
                #I still don't understand the z axis completely (on the wiimote and nunchuk), but as far as I can tell its main change comes from directly pulling up the mote without tilting it
                NAccz = Wii.state['nunchuk']['acc'][cwiid.Z]

                #Make it so that we can control the arm with the joystick
                if (NunchukStickX < 60):
                    ArmMove(.1, [0, 2, ArmLight])
                    print 'Moving Left'
                if (NunchukStickX > 190):
                    ArmMove(.1, [0, 1, ArmLight])
                    print 'Moving Right'
                if (NunchukStickY < 60):
                    ArmMove(.1, [128, 0, ArmLight])
                    print 'Moving Down'
                if (NunchukStickY > 190):
                    ArmMove(.1, [64, 0, ArmLight])
                    print 'Moving Up'

                #Make it so that we can control the arm with tilt functions
                #Left to Right
                if (Accx < 100 and NAccx < 90):
                    ArmMove(.1, [0, 2, ArmLight])
                    print 'Moving Left'
                if (Accx > 135 and NAccx > 150):
                    ArmMove(.1, [0, 1, ArmLight])
                    print 'Moving Right'
                #Up and Down
                if (Accy < 100 and NAccy < 90):
                    ArmMove(.1, [64, 0, 0])
                    print 'Moving Up'
                if (Accy > 135 and NAccy > 150):
                    ArmMove(.1, [128, 0, 0])
                    print 'Moving Down'

                #Here we create a variable to store the nunchuck button data
                #0 = no buttons pressed
                #1 = Z is pressed
                #2 = C is pressed
                #3 = Both C and Z are pressed
                ChukBtn = Wii.state['nunchuk']['buttons']
                if (ChukBtn == 1):
                    print 'Z pressed'
                    ArmLight = 0
                    ArmMove(.1, [0, 0, ArmLight])
                if (ChukBtn == 2):
                    print 'C pressed'
                    ArmLight = 1
                    ArmMove(.1, [0, 0, ArmLight])
                #If both are pressed the led blinks
                if (ChukBtn == 3):
                    print 'C and Z pressed'
                    ArmMove(.1, [0, 0, 0])
                    time.sleep(.25)
                    ArmMove(.1, [0, 0, 1])
                    time.sleep(.25)
                    ArmMove(.1, [0, 0, 0])
                    time.sleep(.25)
                    ArmMove(.1, [0, 0, 1])
                    time.sleep(.25)
                    ArmMove(.1, [0, 0, 0])
                    time.sleep(.25)
                    ArmMove(.1, [0, 0, 1])
                    time.sleep(.25)
                    ArmMove(.1, [0, 0, 0])

                #Any other actions that require the use of the nunchuk in any way must be put here for the error handling to function properly
                break
            #This part down below is the part that tells us if no nunchuk is connected to the wiimote
            except KeyError:
                print 'No nunchuk detected.'
        else:
            if (ArmLight == 0):
                if (Accz > 179 or Accz < 50):
                    ArmLight = 1
                    ArmMove(.1, [0, 0, ArmLight])
                    time.sleep(.5)
            elif (ArmLight == 1):
                if (Accz > 179 or Accz < 50):
                    ArmLight = 0
                    ArmMove(.1, [0, 0, ArmLight])
                    time.sleep(.5)
            if (Counter == 10000):
                print 'No nunchuk detected.'
                Counter = Counter/10000
                break
            Counter = Counter + 1
            break
Copy the code into a python editor and then save it out. If the editor is still open then you can run the program with F5. You can also run it by opening a terminal, navigating to the location of the program and then typing: sudo python filename.py
Just replace filename with the actual name of the file (you still have to copy the code into the editor and then save it).
The photo shows: sudo Nunchuk_W_Arm.py, but I forgot to add the python in there. It's supposed to be: sudo python Nunchuk_W_Arm.py
I'm now working on a version of the program that has an actual interface with buttons to control the arm and stuff that displays the wiimote goodies. The wiimote can still be used to control the arm; it's just a more visual program.
Step 6: Controlling the Robot
There are lots of things to control on the robot so I tried to keep everything organized. Pictures speak louder than words.
The nunchuk doesn't need to be connected to run everything, I just wanted to keep everything flexible. The only difference between using and not using the nunchuk are: with the nunchuk attached flicking the remote will not toggle the light on and off, the joystick adds another way to rotate the arm and move the base, C and Z are used to turn on and off the light, tilting both the nunchuk and the wiimote will rotate/control the base.
6 Discussions
2 years ago
you're my savior
4 years ago
Fantastic post!
Last week my son and I put this together with:
Raspberry PI 3 / OWI 535 / USB Intf / WiiMote + Nun. We are total newbies to Python scripting.
The RaspPi 3 has IDLE 2 and IDLE 3. The script only works with IDLE 2. Also, we had to change the print statements from ' ' to (" "). We also had to change our accelerometer values.
We are running into a bit of a challenge: the Wiimote works WITHOUT the Nunchuck. However, when it is plugged in, only the Nunchuck works. The Wiimote no longer works; buttons, accelerometer, etc.
Not sure if it is the function of the script or the Wiimote/Nunchuck. I don't understand how the 'While True:' and the 'while(1)' statements work.
4 years ago
the second step in installing pyusb is sudo apt install pyusb
5 years ago on Introduction
I have put everything together and the arm and wiimote connect, but I cannot get the code to run. Many syntax errors. Is it possible to have you e-mail the code to me as a ".py"?
6 years ago on Introduction
could this be replicated with another Bluetooth controller such as the ps3 controller?
6 years ago on Introduction
I want to make this using the same arm, but I'm going to put it on an RC car frame. Do you think I can use the other nunchuk to control the car? | https://www.instructables.com/Raspberry-Pi-and-Wiimote-controlled-Robot-Arm/ | CC-MAIN-2021-04 | en | refinedweb
Dependency Injector
A dependency injection system for Flutter that automatically calls cancel, close and dispose methods, if any.
See example for details, and run it!
- Services
- Injectors
- Testing
Getting Started
Configure the services you want to use in your application.
final services = [
  Transient(() => SomeService()),
  Singleton(() => Repository()),
];
Place the RootInjector with your services list in the root of your application, and wrap the widget where you want to inject the service in the Injector.
void main() {
  runApp(RootInjector(
    services: services,
    child: Injector(() => MyApp()),
  ));
}
Inject your service using the inject property.
class MyApp extends StatelessWidget {
  MyApp({Key key}) : super(key: key);

  final SomeService someService = inject();
  ...
}
Complete example.
import 'package:flutter/material.dart';
import 'package:dependency_injector/dependency_injector.dart';

class SomeService {
  final Repository repository = inject();
}

class Repository {
  String getData() => 'Hello world.';
}

final services = [
  Transient(() => SomeService()),
  Singleton(() => Repository()),
];

void main() {
  runApp(RootInjector(
    services: services,
    child: Injector(() => MyApp()),
  ));
}

class MyApp extends StatelessWidget {
  MyApp({Key key}) : super(key: key);

  final SomeService someService = inject();

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      theme: ThemeData(
        primarySwatch: Colors.blue,
        visualDensity: VisualDensity.adaptivePlatformDensity,
      ),
      home: Scaffold(
        body: Center(child: Text(someService.repository.getData())),
      ),
    );
  }
}
Services
There are three types of services: Singleton, Scoped, and Transient. They all use lazy initialization.
Singleton
They are created only once on the first injection and exist as long as the ancestor RootInjector exists. Singleton services can contain other singleton or transient services, but they must not contain any scoped services as descendants. Singleton services never call cancel, close and dispose methods.
Singleton(() => SomeService()),
Scoped
Scoped services are created once for their parent Injector and exist as long as this injector exists; when they are injected again into descendants down the tree, the already created instance is taken. They can contain any other types of services as descendants. Scoped services call cancel, close and dispose methods when their parent Injector (for which they were created) is removed from the tree.
Scoped(() => SomeService()),
Transient
When transient services are used directly in the tree, or as descendants of other transient services that are used directly in the tree, they are created once for their parent Injector and exist as long as this injector exists; when they are injected again into other injectors that are descendants down the tree, a new instance will be created.
If they are descendants of singleton or scoped services, then their life cycle is the same as for singleton or scoped services. Transient services call cancel, close and dispose methods when their parent Injector (for which they were created) is removed from the tree, or according to the life cycle of singleton or scoped services if they are descendants of them.
Transient(() => SomeService()),
Parameters
All three types of services have parameterized versions: ParameterizedSingleton, ParameterizedScoped, and ParameterizedTransient. Their life cycle is the same as that of the regular versions. For example, if you have injected the same ParameterizedTransient twice into the same Injector, then the second injection will take an existing instance.
ParameterizedTransient<SomeService, String>(
  (p) => SomeService(p),
),
Dispose
Scoped and Transient services (Transient only if they are not descendants of singletons) call cancel, close and dispose methods by default when their life cycle ends. They call these methods only if these methods do not have any parameters. If you need to call them with parameters, you have to provide a custom disposer, which overrides the default disposer.
Scoped<SomeService>(
  () => SomeService(),
  disposer: (instance) async {
    instance.dispose('some data');
  },
),
If you just want to disable the default disposer, set useDefaultDisposer to false.
Transient(
  () => SomeService(),
  useDefaultDisposer: false,
),
Keys
When configuring services, you cannot use the same type more than once.
final services = [
  Transient(() => 1),
  Singleton(() => 2), // There will be an error.
];
In these cases you have to use a key!
class ServiceKey extends ServiceKeyBase {
  const ServiceKey._(String name) : super(name);

  static const someKey = ServiceKey._('someKey');
}

final services = [
  Transient(() => 1),
  Singleton(() => 2, key: ServiceKey.someKey), // Ok.
];
And then another service of this type will be configured.
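When you inject, passing the key selects the keyed registration; this short sketch assumes the services list and ServiceKey defined above:

final int first = inject();                          // 1, from the unkeyed Transient
final int second = inject(key: ServiceKey.someKey);  // 2, from the keyed Singleton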
Injectors
There are two types of injectors, RootInjector and Injector, and the inject property.
RootInjector
Used to configure services. It should only be used once at the root of your application.
void main() {
  runApp(RootInjector(
    services: services,
    child: MyApp(),
  ));
}
Injector
Used as a wrapper for the widget into which you will inject dependencies. You should consider the Injector and its widget as a whole.
Injector(() => SomeWidget())
Never put any logic like conditional statements or expressions inside an injector's builder method. In some cases, this can cause errors, because the injector creates all the instances that you inject only once!
Injector(() => value == 7 ? SomeWidget1() : SomeWidget2()) // Bad.
If you need do this use this pattern.
value == 7 ? Injector(() => SomeWidget1()) : Injector(() => SomeWidget2()) // Good.
Or with keys.
value == 7 ? Injector(() => SomeWidget2(), key: ValueKey(1)) : Injector(() => SomeWidget2(), key: ValueKey(2)) // Good.
Inject
This is the property you use for dependency injection. It is available only while the builder method is running.
For widgets:
class SomeWidget extends StatelessWidget {
  SomeWidget({Key key}) : super(key: key);

  final SomeService someService = inject();
  ...
}

...

Injector(() => SomeWidget())
class SomeWidget extends StatelessWidget {
  const SomeWidget(this.someService, {Key key}) : super(key: key);

  final SomeService someService;
  ...
}

...

Injector(() => SomeWidget(inject()))
For services:
class SomeService1 {
  final SomeService2 someService2 = inject();
}

final services = [
  Singleton(() => SomeService1()),
];
class SomeService1 {
  SomeService1(this.someService2);

  final SomeService2 someService2;
}

final services = [
  Singleton(() => SomeService1(inject())),
];
When you need to inject a service with a key:
class SomeService1 {
  final SomeService2 someService21 = inject();
  final SomeService2 someService22 = inject(key: ServiceKey.someKey);
}
When you need to inject a service with parameters:
class SomeService1 {
  final SomeService2 someService2 = inject(parameters: 'Some parameter');
}
If you want to get another instance of the same service, you can use ServiceIndex. All services have a zero index by default.
class SomeService1 {
  final SomeService2 someService21 = inject(parameters: 1); // someService21.value == 1
  final SomeService2 someService22 = inject(parameters: 2); // someService22.value == 1
  final SomeService2 someService23 = inject(
    parameters: 3,
    index: ServiceIndex.zero,
  ); // someService23.value == 1
  final SomeService2 someService24 = inject(
    parameters: 4,
    index: ServiceIndex.one,
  ); // someService24.value == 4
}
Testing
For unit tests you should use RootInjectorForTest to make the inject property available.
void main() {
  RootInjectorForTest(services);

  test('Some Unit Test', () {
    final SomeService someService = inject();
    ...
  });
}
For widget tests, do this:
void main() {
  testWidgets('Some Widget Test', (tester) async {
    await tester.pumpWidget(
      RootInjector(
        services: services,
        child: MaterialApp(
          home: Scaffold(
            body: Injector(() => SomeWidget()),
          ),
        ),
      ),
    );
    ...
  });
}
If you need to replace a service with a mock, you can use the replaceWithMock method.
class MockSomeService extends Mock implements SomeService {}

void main() {
  final newServices = replaceWithMock(
    services: services,
    mockServices: [
      ParameterizedScoped<SomeService, int>((p) {
        final mockSomeService = MockSomeService();
        when(mockSomeService.value).thenReturn(100500);
        return mockSomeService;
      }),
    ],
  );
  ...
}
| https://pub.dev/documentation/dependency_injector/latest/ | CC-MAIN-2021-04 | en | refinedweb
One of the primary goals of Spring as a container is to eliminate the typical singletons and ad hoc factories that most applications end up using for access to objects.
That said, there is no question that in a number of applications, a small amount of glue code that is container aware is often needed to kick things off at some point of execution, typically obtaining one or more configured objects from the Spring container and starting the execution of a chain of related actions. This glue code may come from Spring itself (as in the case of the request handler mechanism in the Spring Web MVC layer), or in the form of application code.
One of the main cases where application code may need to be aware of the Spring container is when the Spring container is not itself responsible for creating objects that then need to work with other objects from the Spring container. In the better (ideal) variant of this scenario, the other entity creating objects can at least be made to pass along an instance of the Spring container to the newly created object. For example, in Spring's Quartz scheduler integration, the scheduler and Quartz jobs are configured in the application context. While it is the Quartz scheduler, and not Spring itself, that actually creates new Jobs, it is easy to at least pass a reference to the application context, as part of the Job data, to the newly created job. So the Job does have to work with the container but doesn't have to worry about how to get the container.
However, consider the case of EJBs, which are created by the EJB container. There is simply no way to force the EJB container to somehow provide a newly created EJB with a reference to an existing Spring application context or bean factory. One option is for each EJB instance to create its own application context instance, and this is the default behavior of the Spring EJB base classes, as described previously. However, this is often not going to be an ideal solution. It is problematic when there are resources in the Spring container that have a relatively expensive (in terms of time) startup cost. Consider for example a Hibernate SessionFactory, which has to enumerate and initialize a number of class mappings. It is also problematic when the resources in the Spring container start using up significant amounts of memory. While EJBs are pooled by the container so Spring containers would not be continuously created in normal usage, it's clear that a solution for shared usage of a Spring container is needed.
Any scenario that has the same constraints as EJBs can also benefit from shared access to a Spring container. Spring provides a generic bean factory–accessing interface called BeanFactoryLocator:
public interface BeanFactoryLocator {

    /**
     * Use the BeanFactory (or derived class such as ApplicationContext) specified
     * by the factoryKey parameter. The definition is possibly loaded/created as
     * needed.
     * @param factoryKey a resource name specifying which BeanFactory the
     * BeanFactoryLocator should return for usage. The actual meaning of the resource
     * name is specific to the actual implementation of BeanFactoryLocator.
     * @return the BeanFactory instance, wrapped as a BeanFactoryReference object
     * @throws BeansException if there is an error loading or accessing the
     * BeanFactory
     */
    BeanFactoryReference useBeanFactory(String factoryKey) throws BeansException;
}
BeanFactoryLocator in and of itself does not imply singleton access to anything, but Spring does include a couple of almost identical “keyed” singleton implementations of BeanFactoryLocator, called ContextSingletonBeanFactoryLocator and SingletonBeanFactoryLocator.
Let's look briefly at how these locators are used. A locator instance is obtained through a static getInstance(<resource name>) method, where the resource name selects the XML definition file (or files) that make up the keyed bag of contexts. Calling locator.useBeanFactory(<id>) then loads, if necessary, and returns the bean factory or application context defined under that ID. Aliases are supported, so one locator.useBeanFactory(<id>) can resolve to the same thing as another locator.useBeanFactory(<id>) with a different ID. For more information on how this works, and to get a better overall picture of these classes, please see the JavaDocs for ContextSingletonBeanFactoryLocator and SingletonBeanFactoryLocator.
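As a minimal usage sketch, with the selector and key values taken from the web-app example later in this section:

BeanFactoryLocator locator =
    ContextSingletonBeanFactoryLocator.getInstance("classpath*:beanRefContext.xml");
BeanFactoryReference ref = locator.useBeanFactory("servicelayer-context");
BeanFactory factory = ref.getFactory();
// ... look up and use beans from the shared factory ...
ref.release(); // release the reference so the definition can be cleaned up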
In Chapter 4, we examined how Spring's ContextLoader class, triggered by the ContextLoaderListener or ContextLoaderServlet, can be used to load an application context for the web-app. You may want to review that section of Chapter 4 at this time.
Especially when creating a J2EE application, with multiple web-apps (and possibly EJBs), it is often desirable to define a shared parent application context to one or more web-app application contexts. In this setup, all service layer code can move into the shared context, with the web-app contexts having only bean definitions appropriate to the actual web view layer. Note that this does potentially affect how you will package your Spring-based app because the Spring framework and all classes that are used across web-apps will have to live in a classloader shared by the web applications, such as any global EJB classloader or an application server classloader. Most J2EE appservers do support a number of class-loader setup variations, allowing this configuration to work.
It is relatively trivial to subclass the existing ContextLoader class so that, using ContextSingletonBeanFactoryLocator, it triggers the loading of a parent context that is shared with any other web-app inside the same J2EE app that is configured the same way.
Let's look at the necessary code to customize ContextLoader:
public class SharedParentLoadingContextLoader
        extends org.springframework.web.context.ContextLoader {

    // --- statics
    protected static final Log log = LogFactory.getLog(ContextLoader.class);

    /** servlet param specifying locator factory selector */
    public static final String LOCATOR_FACTORY_SELECTOR = "locatorFactorySelector";

    /** servlet param specifying the key to look up parent context from locator */
    public static final String BEAN_FACTORY_LOCATOR_FACTORY_KEY = "parentContextKey";

    // --- attributes
    protected BeanFactoryReference _beanFactoryRef = null;

    /**
     * Overrides method from superclass to implement loading of parent context
     */
    protected ApplicationContext loadParentContext(ServletContext servletContext)
            throws BeansException {
        ApplicationContext parentContext = null;
        String locatorFactorySelector = servletContext
                .getInitParameter(LOCATOR_FACTORY_SELECTOR);
        String parentContextKey = servletContext
                .getInitParameter(BEAN_FACTORY_LOCATOR_FACTORY_KEY);
        try {
            if (locatorFactorySelector != null) {
                BeanFactoryLocator bfr = ContextSingletonBeanFactoryLocator
                        .getInstance(locatorFactorySelector);
                log.info("Getting parent context definition: using parent context key of '"
                        + parentContextKey + "' with BeanFactoryLocator");
                _beanFactoryRef = bfr.useBeanFactory(parentContextKey);
                parentContext = (ApplicationContext) _beanFactoryRef.getFactory();
            }
        }
        catch (BeansException ex) {
            throw ex;
        }
        return parentContext;
    }

    /**
     * Close the root web application context for the given servlet context.
     *
     * @param servletContext the current servlet context
     */
    public void closeContext(ServletContext servletContext)
            throws ApplicationContextException {
        servletContext.log("Closing root WebApplicationContext");
        WebApplicationContext wac = WebApplicationContextUtils
                .getRequiredWebApplicationContext(servletContext);
        ApplicationContext parent = wac.getParent();
        try {
            if (wac instanceof ConfigurableApplicationContext) {
                ((ConfigurableApplicationContext) wac).close();
            }
        }
        finally {
            if (parent != null && _beanFactoryRef != null) {
                _beanFactoryRef.release();
            }
        }
    }
}
The normal ContextLoader implementation already provides template methods, which subclasses may use to load a parent context to the web-app context, so all we are doing here is hooking into those methods to load the shared parent via ContextSingletonBeanFactoryLocator.
We also need a specialized version of ContextLoaderListener to call our variant of ContextLoader:
public class ContextLoaderListener
        extends org.springframework.web.context.ContextLoaderListener {

    protected org.springframework.web.context.ContextLoader createContextLoader() {
        return new SharedParentLoadingContextLoader();
    }
}
Finally, we modify the normal web-app web.xml configuration file to add parameters for the parent context:
<web-app>
  <context-param>
    <param-name>locatorFactorySelector</param-name>
    <param-value>classpath*:beanRefContext.xml</param-value>
  </context-param>
  <context-param>
    <param-name>parentContextKey</param-name>
    <param-value>servicelayer-context</param-value>
  </context-param>
  ...
</web-app>
For the ContextSingletonBeanFactoryLocator.getInstance(String selector) method call, we are specifying a value of classpath*:beanRefContext.xml. We are also specifying that inside of the context bag defined by beanRefContext.xml, we are interested in using the context with the ID of servicelayer-context.
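A minimal sketch of what this beanRefContext.xml might contain follows; the file name of the inner service layer context definition is an assumption for illustration:

<beans>
  <!-- the bean ID is the key that is passed to useBeanFactory() -->
  <bean id="servicelayer-context"
        class="org.springframework.context.support.ClassPathXmlApplicationContext">
    <constructor-arg>
      <list>
        <value>servicelayer-applicationContext.xml</value>
      </list>
    </constructor-arg>
  </bean>
</beans>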
This beanRefContext.xml definition is loaded through the ContextSingletonBeanFactoryLocator mechanism, as described in a previous section. | https://flylib.com/books/en/1.382.1.93/1/ | CC-MAIN-2021-04 | en | refinedweb
In most distributed applications, it's of uttermost importance to look at the application's lifecycle right from the beginning. You might have to ensure that your already deployed clients will keep working, even when your server is available in newer versions and will be providing more functionality.
Generally speaking, .NET Remoting supports the base .NET versioning services, which also implies that you have to use strong names for versioning of CAOs or serializable objects, for example. Nevertheless, in details the means of lifecycle management differ quite heavily between .NET Remoting and common .NET versioning and also differ between the various types of remoteable objects.
As SAOs are instantiated on demand by the server itself, there is no direct way of managing their lifecycle. The client cannot specify to which version of a given SAO a call should be placed. The only means of supporting different versions of a SAO is to provide different URLs for them. In this case, you would have to tell your users about the new URL in other ways, as no direct support for versioning is provided in the framework.
Depending on your general architecture, you may want to place SAOs in a different assembly or have them in two strong named assemblies that differ only in the version number. In the remoting configuration file, you can specify which version of a SAO is published using which URL.
The .NET Framework can resolve assemblies in two different ways: by assembly name, in which case the DLL has to be in the application's directory (xcopy deployed); or by a strong name used when the assembly is installed in the Global Assembly Cache (GAC).
A strong name consists of the assembly's name, version, culture information, and a fingerprint from the publisher's public/private key pair. This scheme is used to identify an assembly "without doubt," because even though another person could possibly create an assembly having the same name, version, and culture information, only the owner of the correct key pair can sign the assembly and provide the correct fingerprint.
To generate a key pair to later sign your assemblies with, you have to use sn.exe with the following syntax:
sn.exe -k <keyfile>
For example, to create a key pair that will be stored in the file mykey.key, you can run sn.exe as shown in Figure 6-9.
Figure 6-9: Running sn.exe to generate a key pair
When you want to generate a strong named assembly, you have to put some attributes in your source files (or update them when using VS.NET, which already includes those attributes in the file AssemblyInfo.cs, which is by default added to every project):
using System.Reflection;
using System.Runtime.CompilerServices;

[assembly: AssemblyCulture("")]
[assembly: AssemblyVersion("1.0.0.1")]
[assembly: AssemblyDelaySign(false)]
[assembly: AssemblyKeyFile("mykey.key")]
As the AssemblyVersion attribute defaults to "1.0.*" in VS .NET, you'll have to change this to allow for definite assignment of version numbers for your components. Make sure, though, to change it whenever you distribute a new version of your DLL.
The attribute AssemblyKeyFile has to point to the file generated by sn.exe. When using Visual Studio .NET you have to place it in the directory that contains your project file (<project>.csproj for C# projects).
Upon compilation of this project, no matter whether you're using VS .NET or the command-line compilers, the keyfile will be used to sign the assembly, and you'll end up with a strong named assembly that can be installed in the GAC.
To manipulate the contents of the GAC, you can either use Explorer to drag and drop your assemblies to %WINDOWS%\Assembly or use GacUtil from the .NET Framework SDK. Here are the parameters you'll use most: /i <assembly> installs an assembly in the GAC, /u <assembly name> removes an assembly from the GAC, and /l [<assembly name>] lists the contents of the cache.
Lifecycle management for a SAO becomes an issue as soon as you change some of its behavior and want currently available clients that use the older version to continue working.
In the following example, I show you how to create a SAO that's placed in a strong named assembly. You then install the assembly in the GAC and host the SAO in IIS. The implementation of the first Version 1.0.0.1, shown in Listing 6-7, returns a string that later shows you which version of the SAO has been called.
Listing 6-7: Version 1.0.0.1 of the Server
using System;
using System.Runtime.Remoting.Lifetime;
using System.Runtime.Remoting;
using System.Reflection;
using System.Runtime.CompilerServices;

[assembly: AssemblyCulture("")]
[assembly: AssemblyVersion("1.0.0.1")]
[assembly: AssemblyDelaySign(false)]
[assembly: AssemblyKeyFile("mykey.key")]

namespace VersionedSAO
{
    public class SomeSAO : MarshalByRefObject
    {
        public String getSAOVersion()
        {
            return "Called Version 1.0.0.1 SAO";
        }
    }
}
After compilation, you have to put the assembly in the GAC using gacutil.exe /i as shown in Figure 6-10.
Figure 6-10: Registering the first version in the GAC
This DLL does not have to be placed in the bin/ subdirectory of the IIS virtual directory but is instead loaded directly from the GAC. You therefore have to put the complete strong name in web.config.
You can use gacutil.exe /l <assemblyname> to get the strong name for the given assembly as is shown in Figure 6-11.
Figure 6-11: Displaying the strong name for an assembly
When editing web.config, you have to put the assembly's strong name in the type attribute of the <wellknown> entry:
<configuration>
  <system.runtime.remoting>
    <application>
      <service>
        <wellknown mode="Singleton"
                   type="VersionedSAO.SomeSAO, VersionedSAO, Version=1.0.0.1,Culture=neutral,PublicKeyToken=84d24a897bf5808f"
                   objectUri="MySAO.soap" />
      </service>
    </application>
  </system.runtime.remoting>
</configuration>
For the implementation of the client, you can extract the metadata using SoapSuds:
soapsuds -ia:VersionedSAO -nowp -oa:generated_meta_V1_0_0_1.dll
In the following example, I show you the implementation of a basic client that contacts the SAO and requests version information using the getSAOVersion() method. After setting a reference to generated_meta_V1_0_0_1.dll, you can compile the source code shown in Listing 6-8.
Listing 6-8: Version 1.0.0.1 of the Client Application
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Lifetime;
using System.Threading;
using VersionedSAO; // from generated_meta_xxx.dll

namespace Client {
   class Client {
      static void Main(string[] args) {
         String filename = "client.exe.config";
         RemotingConfiguration.Configure(filename);
         SomeSAO obj = new SomeSAO();
         String result = obj.getSAOVersion();
         Console.WriteLine("Result: {0}", result);
         Console.WriteLine("Finished ... press <return> to exit");
         Console.ReadLine();
      }
   }
}
As the metadata assembly (generated_meta_V1_0_0_1.dll) does not have to be accessed using its strong name, the configuration file for the client looks quite similar to the previous examples:
<configuration>
  <system.runtime.remoting>
    <application>
      <client>
        <wellknown type="VersionedSAO.SomeSAO, generated_meta_V1_0_0_1"
                   url="" />
      </client>
    </application>
  </system.runtime.remoting>
</configuration>
When this client is started, you will see the output shown in Figure 6-12.
Figure 6-12: Output of the client using the v1.0.0.1 SAO
Assume you now want to improve the server with the implementation of additional application requirements that might break your existing clients. To allow them to continue working correctly, you will have to let the clients choose which version of the SAO they want to access.
In the new server's implementation, shown in Listing 6-9, you first have to change the AssemblyVersion attribute to reflect the new version number, and you will also want to change the server's only method to return a different result than that of the v1.0.0.1 server.
Listing 6-9: The New Version 2.0.0.1 of the Server
using System;
using System.Runtime.Remoting.Lifetime;
using System.Runtime.Remoting;
using System.Reflection;
using System.Runtime.CompilerServices;

[assembly: AssemblyCulture("")] // default
[assembly: AssemblyVersion("2.0.0.1")]
[assembly: AssemblyDelaySign(false)]
[assembly: AssemblyKeyFile("mykey.key")]

namespace VersionedSAO {
   public class SomeSAO : MarshalByRefObject {
      public String getSAOVersion() {
         return "Called Version 2.0.0.1 SAO";
      }
   }
}
After compiling and installing the assembly in the GAC using GacUtil, you can list the contents of the assembly cache as shown in Figure 6-13.
Figure 6-13: GAC contents after installing the second assembly
To allow a client to connect to either the old or the new assembly, you have to include a new <wellknown> entry in web.config that also points to the newly created SAO and uses a different URL:
<configuration>
  <system.runtime.remoting>
    <application>
      <service>
        <wellknown mode="Singleton"
           type="VersionedSAO.SomeSAO, VersionedSAO, Version=1.0.0.1, Culture=neutral, PublicKeyToken=84d24a897bf5808f"
           objectUri="MySAO.soap" />
        <wellknown mode="Singleton"
           type="VersionedSAO.SomeSAO, VersionedSAO, Version=2.0.0.1, Culture=neutral, PublicKeyToken=84d24a897bf5808f"
           objectUri="MySAO_V2.soap" />
      </service>
    </application>
  </system.runtime.remoting>
</configuration>
To allow a client application to access the second version of the SAO, you again have to generate the necessary metadata using SoapSuds:
soapsuds -ia:VersionedSAO -nowp -oa:generated_meta_V2_0_0_1.dll
After adding the reference to the newly generated metadata assembly, you also have to change the client-side configuration file to point to the new URL:
<configuration>
  <system.runtime.remoting>
    <application>
      <client>
        <wellknown type="VersionedSAO.SomeSAO, generated_meta_V2_0_0_1"
                   url="" />
      </client>
    </application>
  </system.runtime.remoting>
</configuration>
You can now start both the new and the old client to get the outputs shown in Figure 6-14 for the first version and in Figure 6-15 for the second.
Figure 6-14: Version 1 client running
Figure 6-15: Version 2 client running
Both clients run side by side at the same time, accessing the same physical server. You can also see that no change was needed to the first client, which is the primary prerequisite for consistent lifecycle management.
Now that you know about the lifecycle management issues with SAOs, I have to tell you that versioning of CAOs is completely different. But first, let's start with a more general look at the creation of Client Activated Objects.
When a CAO is instantiated by the client (using the new operator or Activator.CreateInstance()), a ConstructionCallMessage is sent to the server. In this message, the client passes the name of the object it wants created to the server-side process, along with the strong name (if available) of the assembly in which the server-side object is located. This version information is stored in the [SoapType()] attribute of the SoapSuds-generated assembly; SoapSuds adds it automatically whenever the assembly passed to it with the -ia parameter is strong named.
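As a sketch of what both activation styles look like on the client, consider the following; the URL and port are placeholders, the config file is assumed to contain a matching <activated> entry, and SomeCAO is assumed to come from a SoapSuds-generated metadata assembly:

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Activation;
using Server; // SoapSuds-generated metadata

class CaoClient {
   static void Main() {
      // Variant 1: the new operator, after registering the CAO from a config file.
      RemotingConfiguration.Configure("client.exe.config");
      SomeCAO cao1 = new SomeCAO();

      // Variant 2: explicit activation via Activator.CreateInstance().
      // The UrlAttribute is what triggers the ConstructionCallMessage to the server.
      SomeCAO cao2 = (SomeCAO) Activator.CreateInstance(
         typeof(SomeCAO), null,
         new object[] { new UrlAttribute("http://localhost:5555/SomeServer") });

      cao1.doSomething();
      cao2.doSomething();
   }
}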
Let's have a look at the C# source shown in Listing 6-10, which is generated by soapsuds -ia:Server -nowp -gc from a simplistic CAO. I've inserted several line breaks to enhance its readability:
Listing 6-10: The SoapSuds-Generated Nonwrapped Proxy's Source
using System;
using System.Runtime.Remoting.Messaging;
using System.Runtime.Remoting.Metadata;
using System.Runtime.Remoting.Metadata.W3cXsd2001;

namespace Server {

   [Serializable,
    SoapType(SoapOptions=SoapOption.Option1|
       SoapOption.AlwaysIncludeTypes|SoapOption.XsdString|
       SoapOption.EmbedAll,
    XmlNamespace="http://schemas.microsoft.com/clr/nsassem/Server/Server%2C%20Version%3D2.0.0.1%2C%20Culture%3Dneutral%2C%20PublicKeyToken%3D84d24a897bf5808f",
    XmlTypeNamespace="http://schemas.microsoft.com/clr/nsassem/Server/Server%2C%20Version%3D2.0.0.1%2C%20Culture%3Dneutral%2C%20PublicKeyToken%3D84d24a897bf5808f")]
   public class SomeCAO : System.MarshalByRefObject {
      [SoapMethod(SoapAction=
         "http://schemas.microsoft.com/clr/nsassem/Server.SomeCAO/Server#doSomething")]
      public void doSomething() {
         return;
      }
   }
}
The strings in the XmlNamespace and XmlTypeNamespace attributes are URL-encoded variants of the standard version information. In plain text, they read as follows (omitting the base namespace):

Server, Version=2.0.0.1, Culture=neutral, PublicKeyToken=84d24a897bf5808f

Doesn't look so scary anymore, does it? In fact, this is the common .NET representation of a strong name, as seen before.
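This display name is exactly what you can hand to Assembly.Load() to request that specific version; a minimal sketch, assuming the assembly has been installed in the GAC as described earlier:

using System;
using System.Reflection;

class StrongNameCheck {
   static void Main() {
      // Throws an exception if this exact version cannot be resolved.
      Assembly asm = Assembly.Load(
         "Server, Version=2.0.0.1, Culture=neutral, PublicKeyToken=84d24a897bf5808f");
      Console.WriteLine(asm.FullName);
   }
}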
What you can see now is that this proxy assembly will reference a server-side object called Server.SomeCAO, which is located in the assembly Server with the strong name shown previously. Whenever a client creates a remote instance of this CAO, the server will try to instantiate the exact version of this Type.
When the requested version is not available, the server simply takes the highest version of the specified assembly. When Versions 1.0.1.0 and 2.0.0.1 are available in the GAC and Version 1.0.0.1 is requested, the server will choose 2.0.0.1 to instantiate the requested object, even though the major version numbers differ.
To emulate the standard behavior for resolving assembly versions, or to redirect to a completely different version, you can use the assemblyBinding entry in the application's configuration file:
<configuration>
  <system.runtime.remoting>
    <application name="SomeServer">
      <channels>
        <channel ref="http" port="5555" />
      </channels>
      <service>
        <activated type="Server.SomeCAO, Server" />
      </service>
    </application>
  </system.runtime.remoting>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="server"
                          publicKeyToken="84d24a897bf5808f"
                          culture="neutral" />
        <bindingRedirect oldVersion="1.0.0.1"
                         newVersion="1.0.1.1" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
In this case, the server will take any requests for Version 1.0.0.1 and use Version 1.0.1.1 instead. Remember that this only works when the assembly is registered in the GAC, and that you have to run soapsuds -ia:<assembly> -nowp -oa:<meta.dll> for each server-side version, because the version information embedded in the [SoapType()] attribute determines which version a client requests.
Because a [Serializable] object is marshaled by value and its data is passed as a copy, its versioning behavior differs again from that of SAOs and CAOs. First, let's have another look at the transfer format of the Customer object (not the complete message) from a server similar to the one in the first example of Chapter 1:
<a1:Customer xmlns:a1="http://schemas.microsoft.com/clr/nsassem/VersionedSerializableObjects/VersionedSerializableObjects%2C%20Version%3D1.0.0.1%2C%20Culture%3Dneutral%2C%20PublicKeyToken%3D84d24a897bf5808f">
  <FirstName>John</FirstName>
  <LastName>Doe</LastName>
  <DateOfBirth>1950-12-12T00:00:00.0000000+01:00</DateOfBirth>
</a1:Customer>
As you can see here, the complete namespace information, including the assembly's strong name, is sent over the wire. When the client that fetched this Customer object using a statement like Customer cust = CustomerManager.getCustomer(42) does not have access to this exact version of the assembly, a SerializationException ("Parse Error, no assembly associated with Xml key") is thrown.
To enable a "one-way relaxed" versioning schema, you can include the attribute includeVersions = "false" in the formatter's configuration entry as shown here:
<configuration>
  <system.runtime.remoting>
    <application name="SomeServer">
      <channels>
        <channel ref="http" port="5555">
          <serverProviders>
            <formatter ref="soap" includeVersions="false" />
          </serverProviders>
        </channel>
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>
After this change, the server will return a different serialized form of the object, which does not contain the assembly's strong name.
The newly returned Customer object's data will look like this:
<a1:Customer xmlns:a1="http://schemas.microsoft.com/clr/nsassem/VersionedSerializableObjects/VersionedSerializableObjects">
  <FirstName>John</FirstName>
  <LastName>Doe</LastName>
  <DateOfBirth>1950-12-12T00:00:00.0000000+01:00</DateOfBirth>
</a1:Customer>
This last step, however, has not yet solved all issues with versioned [Serializable] objects. Let's get back to the original motivation for versioning: functionality is added to an application, and you want the currently deployed clients to keep working. This leads to the question of what happens when you add another property to the shared assembly on either the client or the server side (in the example, I'll add public String Title). The Customer class now looks like this:
[Serializable]
public class Customer {
   public String FirstName;
   public String LastName;
   public DateTime DateOfBirth;
   public String Title; // new!
}
When the new Customer object (let's call it Version 2.0.0.1 or just Version 2 for short) is available at the client, and the old object (Version 1, without the Title property) at the server, the client is able to complete the call to Customer cust = CustomerManager.getCustomer(42). The client simply ignores the fact that the server did not send a value for the Customer object's Title property.
It won't work the other way around, though. When the server has Version 2 of the Customer object and the client only has Version 1, a SerializationException ("Member name 'VersionedSerializableObjects.Customer Title' not found") is thrown when the client tries to interpret the server's response. This is exactly what you wanted to avoid. To work around these limitations, you have to look at the ISerializable interface, which allows you to specify custom serialization methods:
public interface ISerializable {
   void GetObjectData(SerializationInfo info, StreamingContext context);
}
When implementing ISerializable, you simply have to call the SerializationInfo object's AddValue() method for each field you want to include in the serialized form of the current object.
To serialize the Customer object's properties from Version 1 of the preceding example (without the Title property), you can do the following:
public void GetObjectData(SerializationInfo info, StreamingContext context) {
   info.AddValue("FirstName", FirstName);
   info.AddValue("LastName", LastName);
   info.AddValue("DateOfBirth", DateOfBirth);
}
In addition to this implementation of GetObjectData(), you have to provide a special constructor for your object that takes a SerializationInfo and a StreamingContext object as parameters:
public Customer(SerializationInfo info, StreamingContext context) {
   FirstName = info.GetString("FirstName");
   LastName = info.GetString("LastName");
   DateOfBirth = info.GetDateTime("DateOfBirth");
}
This constructor is called whenever a stream that contains a Customer object is about to be deserialized.
Listing 6-11 shows Version 1 of the Customer object, now implemented using the ISerializable interface.
Listing 6-11: The First Version of the Serializable Object
using System;
using System.Runtime.Serialization;

namespace VersionedSerializableObjects {
   [Serializable]
   public class Customer : ISerializable {
      public String FirstName;
      public String LastName;
      public DateTime DateOfBirth;

      public Customer(SerializationInfo info, StreamingContext context) {
         FirstName = info.GetString("FirstName");
         LastName = info.GetString("LastName");
         DateOfBirth = info.GetDateTime("DateOfBirth");
      }

      public void GetObjectData(SerializationInfo info, StreamingContext context) {
         info.AddValue("FirstName", FirstName);
         info.AddValue("LastName", LastName);
         info.AddValue("DateOfBirth", DateOfBirth);
      }
   }
}
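You can watch both halves of the contract fire by round-tripping this Version 1 Customer through a formatter locally. This is only a sketch: it assumes a parameterless constructor is added to the class (Listing 6-11 defines only the serialization constructor), and it requires a reference to System.Runtime.Serialization.Formatters.Soap.dll:

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Soap;
using VersionedSerializableObjects;

class RoundTripDemo {
   static void Main() {
      Customer cust = new Customer();
      cust.FirstName = "John";
      cust.LastName = "Doe";
      cust.DateOfBirth = new DateTime(1950, 12, 12);

      SoapFormatter fmt = new SoapFormatter();
      MemoryStream ms = new MemoryStream();
      fmt.Serialize(ms, cust);                        // invokes GetObjectData()
      ms.Position = 0;
      Customer copy = (Customer) fmt.Deserialize(ms); // invokes the special constructor
      Console.WriteLine("{0} {1}", copy.FirstName, copy.LastName);
   }
}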
When the fields of this object have to be extended to include a Title property, as in the preceding example, you have to adapt GetObjectData() and the special constructor.
In the constructor, you have to enclose the access to the newly added property in a try/catch block. This enables you to react to a missing value, which might occur when the remote application is still working with Version 1 of the object.
In Listing 6-12 the value of the Customer object's Title property is set to "n/a" when the SerializationInfo object does not contain this property in serialized form.
Listing 6-12: Manual Serialization Allows More Sophisticated Versioning
using System;
using System.Runtime.Serialization;

namespace VersionedSerializableObjects {
   [Serializable]
   public class Customer : ISerializable {
      public String FirstName;
      public String LastName;
      public DateTime DateOfBirth;
      public String Title;

      public Customer(SerializationInfo info, StreamingContext context) {
         FirstName = info.GetString("FirstName");
         LastName = info.GetString("LastName");
         DateOfBirth = info.GetDateTime("DateOfBirth");
         try {
            Title = info.GetString("Title");
         } catch (Exception) {
            // Value missing: the sender is still using Version 1 of the object.
            Title = "n/a";
         }
      }

      public void GetObjectData(SerializationInfo info, StreamingContext context) {
         info.AddValue("FirstName", FirstName);
         info.AddValue("LastName", LastName);
         info.AddValue("DateOfBirth", DateOfBirth);
         info.AddValue("Title", Title);
      }
   }
}
Using this serialization technique will ensure that you can match server and client versions without breaking any existing applications.
[1] The * in this case means that this part of the version number is assigned automatically.
Eclipse IDE Pocket Guide Ed Burnette Beijing • Cambridge • Farnham • Köln • Paris • Sebastopol • Taipei • Tokyo Eclipse IDE Pocket Guide by Ed Burnette Copyright © 2005 [email protected]eilly.com. Editor: Production Editor: Cover Designer: Interior Designer: Brett McLaughlin Marlowe Shaeffer Ellie Volckhausen David Futato Printing History: August 2005: First Edition. Nutshell Handbook, the Nutshell Handbook logo, and the O’Reilly logo are registered trademarks of O’Reilly Media, Inc. The Pocket Guide series designations, Eclipse IDE Pocket Guide, the images of ornate butterflyfish,. 0-596-10065-5 [C] [3/06] Contents Part I. Introduction What Is Eclipse? 1 Conventions Used in This Book 2 System Requirements 2 Downloading Eclipse 3 Installing Eclipse 3, 2, 1, Launch! Specify a Workspace 3 4 4 Exploring Eclipse 4 Getting Upgrades 5 Moving On 6 Part II. Workbench 101 Views 8 Editors 9 Menus 10 v Toolbars and Coolbars 12 Perspectives 13 Rearranging Views and Editors 14 Maximizing and Minimizing 16 Part III. Java Done Quick Creating a Project 18 Creating a Package 20 Creating a Class 21 Entering Code 21 Running the Program 23 Part IV. Debugging Running the Debugger 25 Setting Breakpoints 25 Single Stepping 28 Looking at Variables 28 Changing Code on the Fly 30 Part V. Unit Testing with JUnit A Simple Factorial Demo 32 Creating Test Cases 33 vi | Contents Running Tests 34 Test First 36 Part VI. Tips and Tricks Code Assist 38 Templates 39 Automatic Typing 40 Refactoring 41 Hover Help 42 Hyperlinks 43 Quick Fixes 43 Searching 44 Scrapbook Pages 46 Java Build Path 47 Launch Configurations 48 Part VII. Views Breakpoints View 50 Console View 52 Debug View 53 Declaration View 54 Display View 54 Contents | vii Error Log View 55 Expressions View 56 Hierarchy View 58 Javadoc View 60 JUnit View 60 Navigator View 61 Outline View 62 Package Explorer View 62 Problems View 64 Search View 65 Tasks View 66 Variables View 67 Part VIII. Short Takes CVS 68 Ant 69 Web Tools Platform 70 Testing and Performance 70 Visual Editor 71 C/C++ Development 71 AspectJ 71 viii | Contents Plug-in Development 72 Rich Client Platform 73 Standard Widget Toolkit 73 Part IX. Help and Community Online Help Getting Help Help Topics 75 75 76 Eclipse Web Site 76 Community Web Sites 78 Reporting Bugs New Account Searching Adding an Entry 79 80 80 80 Newsgroups 81 Mailing Lists 82 Conclusion 82 Appendix. Commands 83 Index 113 Contents | ix PART PARTI. 1 Conventions Used in This Book Italic. System Requirements. Table 1. System requirements for Eclipse Requirement Minimum Recommended Java version 1.4.0 5.0 or greater Memory 512 MB 1 GB or more Free disk space 300 MB 1 GB or more Processor speed 800 Mhz 1.5 Ghz or faster 2 | Part I: Introduction In order to unpack Eclipse’s download package, you will need a standard archive program. Some versions of Windows have one built in; for other versions, you can use a program such as WinZip (). The other platforms come with an archive program preinstalled. TIP In the interests of space and simplicity, the rest of this book will focus on the Windows version of Eclipse. Other platforms will be very similar, although you may notice slight platform-specific differences. Downloading Eclipse. TIP You may see other download packages such as Runtime, JDT, and RCP on the download page. You don’t need those. Just get the one package called Eclipse SDK. Installing Eclipse First, install Java if you haven’t already. Then download the Eclipse SDK to a temporary directory. 
Use your archive program to unpack Eclipse into a permanent directory. There are no setup programs and no registry values to deal with. Installing Eclipse | 3. 3, 2, 1, Launch! You are now ready to launch Eclipse. Inside the eclipse directory, you’ll find a launcher program for the IDE called, strangely enough, eclipse (or eclipse.exe). Invoke that program to bring up the IDE. TIP On Windows, you may find it convenient to create a desktop shortcut to launch Eclipse. Specify a Workspace. Exploring Eclipse When Eclipse starts up, you will be greeted with the Welcome screen (see Figure 1). This screen provides an introduction for new users who don’t have the benefit of a pocket guide to Eclipse; for now you can skip over it by closing the 4 | Part I: Introduction Welcome view (click on the close icon—the × next to the word “Welcome”). You can always come back to the Welcome screen later by selecting Welcome from the Help menu. Figure 1. The Welcome screen allows you to explore introductory material, including examples and tutorials. Getting Upgrades. TIP A clean install is especially important if you want to use beta versions of Eclipse (called Stable or Milestone builds on the download page). Milestone builds are sometimes buggy, so you may need to temporarily go back and run your previous version. Getting Upgrades | 5? TIP Any additional plug-ins you have installed for Eclipse will need to be reinstalled at this point unless you keep them in an extension location separate from the Eclipse SDK. Moving On. 6 | Part I: Introduction PART PARTII:II Workbench 101 Eclipse’s main window, called the workbench, is built with a few common user interface elements (see Figure 2). Learn how to use them and you can get the most out of the IDE. The two most important elements are views and editors. If you’re already familiar with the Eclipse workbench, you can skim this section or skip to Part III to start programming. 3 4 5 1 6 4 6 2 1 Editor 2 Fast views 3 Menu bar 4 Tool bars 5 Perspectives 6 Views Figure 2. The Eclipse workbench is made up of views, editors, and other elements. 7 Views A view is a window that lets you examine something, such as a list of files in your project. Eclipse comes with dozens of different views; see Table 2 for a partial list. These views are covered in more detail in Part VII. Table 2. Commonly used Eclipse views View name Description Package Explorer Shows all your projects, Java packages, and files. Hierarchy Displays the class and interface relationships for the selected object. Outline Displays the structure of the currently open file. Problems Shows compiler errors and warnings in your code. Console Displays the output of your program. Javadoc Shows the description (from comments) of the selected object. Declaration Shows the source code where the selected object is declared. To open a view, select Window ➝ Show View. The most commonly used views are listed in that menu. To see the full list, select Other.... Most views have a titlebar that includes the icon and name for the view, a close icon, a toolbar, and an area for the content (see Figure 3 for an example showing the Outline view). Note that if the view is too narrow, the toolbar will be pushed to the next line. To discover what all the buttons do, move your mouse over a button, and a little window called a tool tip will appear that describes the item. Figure 3. Views usually have titles, toolbars, and a content area. Let the mouse pointer hover over an item to bring up a description. 
8 | Part II: Workbench 101 Multiple views can be stacked together in the same rectangular area. The titlebar will show a tab for each view, but only one view can be active at a time. Click on a tab to bring its view to the front. If the window is too narrow to show all the titles, a chevron menu will appear (see Figure 4; the number below the >> shows how many views are hidden). Click on the chevron menu to list the hidden views. Figure 4. Views can be stacked on top of one another. If space is short, some may be hidden in a chevron menu. Editors An editor in Eclipse is just like any other editor—it lets you modify and save files. What sets editors in Eclipse apart is their built-in language-specific knowledge. In particular, the Java editor completely understands Java syntax; as you type, the editor can provide assistance such as underlining syntax errors and suggesting valid method and variable names (see Figure 5). Most of your time will be spent in the Java editor, but there are also editors for text, properties, and other types of files. Editors share many characteristics with views. But unlike views, editors don’t have toolbars, and you will usually have more than one of the same type of editor open (for example, several Java editors). Also, you can save or revert an editor’s contents, but not a view’s. An asterisk in the editor’s titlebar indicates that the editor has unsaved data. Select File ➝ Save or press Ctrl+S to write your changes to disk. Editors | 9 Figure 5. The Java editor provides typing assistance and immediate error detection. Menus Eclipse is filled with menus, yet it’s not always obvious how to access them. So, let’s take a quick tour. The most prominent one is the main menu across the top of the Eclipse window. Click on a menu item to activate it or press Alt and the shortcut key for the menu (for example Alt+F for the File menu). Some views have view menus that open when you click on the downward-pointing triangle icon near the upper right of the view (see Figure 6 for an example). Figure 6. If you see a triangle in the toolbar, click on it for more options. Another menu is hidden in the titlebar under the icon to the left of the title. Right-click on the icon to access the system menu; this allows you to close the view or editor, move it around, and so forth. The system menu is shown in Figure 7. 10 | Part II: Workbench 101 Figure 7. Right-click on the icon to the left of the title to get the system menu. TIP Most commands in Eclipse can be performed in several different ways. For example, to close a view you can either use the system menu or click on the close icon. Use whichever way is most convenient for you. Finally, you can right-click on any item in the content area to bring up the context menu (see Figure 8). Notice the keyboard shortcuts listed to the right of the menu description. These shortcuts can be used instead of the menu to execute a particular command. For example, instead of right-clicking on main and selecting Open Type Hierarchy, you can just select main and press the F4 key. TIP Starting in Eclipse 3.1, you can press Ctrl+Shift+L to see a list of the current key definitions. To change them, go to Window ➝ Preferences ➝ General ➝ Keys. By using key definitions and shortcuts, you can work in Eclipse without touching the mouse at all. Menus | 11 Figure 8. Right-click in the content area for the context menu. 
Toolbars and Coolbars A toolbar is a set of buttons (and sometimes other controls) that perform commonly used actions when you click on them. Usually toolbars appear near the top of the window that contains them. A collection of toolbars is called a coolbar (see Figure 9). Figure 9. A coolbar is made up of toolbars. You reorder the individual toolbars by clicking and dragging the separators between them. TIP Most Eclipse documentation uses the term toolbar to refer to both toolbars and coolbars, so the rest of this book will do the same unless it’s necessary to make a special distinction between the two. In the “Views” section, you saw some examples of toolbars that were part of views. The toolbar at the top of the Workbench window is called the main toolbar (seen back in 12 | Part II: Workbench 101 Figure 2). As you edit different files, the main toolbar will change to show tools that apply to the current editor. Perspectives A perspective is a set of views, editors, and toolbars, along with their arrangement on your desktop. Think of a perspective as a way of looking at your work that is optimized for a specific kind of task, such as writing programs. As you perform a task, you may rearrange windows, open new views, and so on. Your arrangement is saved under the current perspective. The next time you have to perform the same kind of task, simply switch to that perspective, and Eclipse will put everything back the way you left it. To switch perspectives, select Window ➝ Open Perspective or click on the Open Perspective icon (to the right of the main toolbar). This will bring up a list of the most commonly used perspectives; select Other... to see the full list. Eclipse comes with several perspectives already defined; these are shown in Table 3. Table 3. Built-in Eclipse perspectives Perspective Purpose Resource Arrange your files and projects. Java Develop programs in the Java language. Debug Diagnose and debug problems that occur at runtime. Java Browsing Explore your code in a Smalltalk-like environment. Java Type Hierarchy Explore your code based on class relationships. Plug-in Development Create add-ins to Eclipse. CVS Repository Exploring Browse a source code repository, including its files and revision history. Team Synchronizing Merge changes you’ve made with those of your teammates. Perspectives | 13 Each perspective has a set of views associated with it that are open by default. For example, the Java perspective starts with the Package Explorer view open. If you don’t like the default, close any views you don’t want and open others with Window ➝ Show View. TIP Sometimes Eclipse will offer to switch perspectives for you. For example, if you’re in the Resource perspective and create a Java project, it will ask if you’d like to switch to the Java perspective. Usually the best thing is to answer Yes and have it remember your decision so it won’t ask you again. Perspectives are there for your convenience. Feel free to customize them all you want. To restore a perspective to its factory default, select Window ➝ Reset Perspective. To save your perspective under a different name, select Window ➝ Save Perspective As.... The new perspective will show up in the Window ➝ Open Perspective ➝ Other... menu. Rearranging Views and Editors Views and editors can be shown side by side or stacked on top of other views and editors. To move a view or editor, simply click on its titlebar and drag it to a new location (see Figure 10). 
The only restrictions are that editors have to stay in their own rectangular area, and they can’t be mixed with views. However, you can arrange the views around the editors, and you can even drag views outside of the main Eclipse window (these are called tear-off views). You can also collapse a view to an icon on the edge of the window (this is called a fast view). Pay close attention to the changing cursor as you drag a window; the cursor shape indicates where the window will end up when you let go of the mouse button. Table 4 shows the cursor shapes and what they mean. 14 | Part II: Workbench 101 Figure 10. You can see how the Package Explorer is dragged from a tab into the bottom of the window. Table 4. Cursor shapes while dragging views and editors Cursor shape Final position of the view/editor being dragged Above the window under the cursor Below the window under the cursor To the left of the window under the cursor To the right of the window under the cursor On top of a stack of windows under the cursor In the fast view area (it will slide out as needed or when manually clicked) Outside the main window Rearranging Views and Editors | 15 TIP By dragging editors, you can show two files side by side. Starting in Eclipse 3.1, you can also edit two portions of the same file by using the Window ➝ New Editor command. To change the relative size of side-by-side views or editors, move the mouse cursor to the thin dividing line between two of them. The cursor shape will change, indicating you can move that divider by clicking it and dragging it to the desired location. Maximizing and Minimizing Sometimes you need to focus temporarily on a single view or editor. For example, you might want to hide all the views and use the whole Eclipse window to look at one large file in the editor. You could resize the editor manually by dragging its edges, but an easier way is to maximize the editor. Double-click on the view or editor’s titlebar (or click on the maximize icon) to make it expand; double-click again (or use the restore icon) to restore the window to its original size. When a window is maximized, you won’t be able to see any of the other views or editors outside of the current stack. As an alternative, you can temporarily shrink the other stacks of windows by clicking on the minimize icon (next to the maximize icon at the top of the view or editor). This hides the content area, showing only the titlebar. It works best on horizontal views and editors. 16 | Part II: Workbench 101 TIP Remember, you can save your favorite window arrangements as named perspectives. You could spend hours exploring all the options to customize your Eclipse workbench, but that’s not what you’re here for, right? Part III will get you started with Java development in Eclipse. Maximizing and Minimizing | 17 PART III PART III: Java Done Quick Get your stopwatch ready because we’re going to create and run some simple Java code as quickly as possible. Ready... set...go! Creating a Project ➝ New ➝ Project... and then double-click Java Project. This opens the New Java Project wizard (see Figure 11). For “Project name,” type in something original like Hello. Under “Project layout,” enable the “Create separate source and output folders” option. TIP As a best practice, always use separate directories for the source and output folders. 18 Figure 11. The New Java Project wizard configures a new directory for your code.). 
Creating a Project | 19 After a moment, you should see your new empty project in the Package Explorer view (see Figure 12). Figure 12. A new Java project is born. Creating a Package A Java package is a standard way to organize your classes into separate namespaces. Although you can create classes without packages, doing so is considered bad programming practice. To create a new package, select File ➝ New ➝ Package or click on the New Package icon in the main toolbar ( ). Enter the package name as org.eclipseguide and click Finish. You can see the results in the Package Explorer, as shown in Figure 13. Figure 13. The project has grown a package. 20 | Part III: Java Done Quick TIP If you looked at the project on disk, you would see the Hello directory, a src directory under that, org under that, and eclipseguide under that. A compact form is shown in the Package Explorer as a convenience. Creating a Class With the org.eclipseguide package highlighted, select File ➝ New ➝ Class or click on the New Java Class icon ( ). Enter the name of the class, starting with a capital letter. For this example, enter Hello. Under the section of the dialog that asks which method stubs you would like to create, select the option to create public static void main(String[] args). Leave the rest of the options set to their default values and click Finish. Eclipse will generate the code for the class for you (this generated class is shown in Figure 14), and open the Java editor with your new class in view. TIP Whenever Eclipse generates code, it inserts TODO comments to indicate the places you need to edit. Every place in the code that has a TODO comment is listed in the Tasks view (see Part VII). Entering Code You could run the program now, but it wouldn’t be very interesting. So, let’s add a few lines to print something out. Start by deleting the generated comment that says: // TODO Auto-generated method stub Entering Code | 21 Figure 14. Now the package has a file in it. You can further expand the file to see its classes. Then replace it with this code: for (int i = 0; i < 10; i++) { System.out.println( "Hello, world " + i); } When you’re done, the Java editor should look similar to Figure 15. Figure 15. This is 10 times better than the usual “Hello, world” program. 22 | Part III: Java Done Quick The editor looks innocent enough, but through its clever use of colors and annotations, the window is quietly conveying a great deal of information. A large number of options to control this information can be found under Window ➝ Preferences ➝ Java ➝ Editor. TIP Press Ctrl+Shift+F (or select Source ➝ Format) to reformat your code and fix any indentation and spacing problems. Do this early and often. If you’d like, you can customize the formatting rules in the Java preferences. Running the Program Press Ctrl+S (or select File ➝ Save) to write the code to disk and compile it. In the Package Explorer, right-click on Hello.java and select Run As ➝ Java Application. The program will run, and the Console view will open to show the output (see Figure 16). Figure 16. Isn’t this exciting? Running the Program | 23 That’s it! You’ve written, compiled, and run your first program in Eclipse in just a few minutes. Now, try it again and see if you can do it in under a minute. My record is 35 seconds. Go ahead, I’ll wait. TIP After you have run the program once, you can press Ctrl+F11 (Run ➝ Run Last Launched) or click on the Run icon in the toolbar ( ) to run it again. 
Now that you’re ready to write the next killer app, what’s the rest of the book for? Part IV will introduce you to your new best pal, the Java debugger. If your programs never have any bugs (ahem), you can skip ahead to Part V to learn about unit testing or Part VI to pick up a few tips about using the IDE. 24 | Part III: Java Done Quick PART PARTIV: IV Debugging Let’s face it: all but the most trivial programs have bugs in them. Eclipse provides a powerful debugger to help you find and eliminate those bugs quickly. This part of the book will give you a head start in understanding how to use the Eclipse debugger. Running the Debugger Running your program under the control of the debugger is similar to running it normally. Right-click on the file containing your main method (Hello.java) and select Debug As ➝ Java Application. Or, if you have run or debugged the program before, just press F11 (or select Run ➝ Debug Last Launched), or click on the Debug button ( ) in the main toolbar. Go ahead and try that now. What happened? The program ran to completion and sent its output to the Console view just as if you had run the class normally. You have to set a breakpoint to actually take advantage of the debugger. Setting Breakpoints A breakpoint is a marker you place on a line of code where you want the debugger to pause execution. To set one, doubleclick in the gutter area to the left of the source line. For this 25 example, we want to stop on the System.out.println( ) call, so double-click in the gutter next to that line. A breakpoint indicator will appear, as shown in Figure 17. Figure 17. Set a breakpoint by double-clicking to the left of the source line. Now, press F11 and Eclipse will run your program again in debug mode. The breakpoint indicator will change when the class is loaded, and the debugger will stop at the line where you added the breakpoint. TIP One of the nice things about breakpoints in Eclipse is that they stay with the line even if the line number changes (e.g., due to code being added or removed above it). When the breakpoint is reached and the program stops, you’ll notice several things. First, Eclipse will switch to the Debug perspective. If you see a dialog asking to confirm the perspective switch, select “Remember my decision” and click Yes. TIP Using one perspective for coding and another for debugging is optional, but some people like being able to customize their window arrangement for each task. You can disable this switching in the Run/Debug preferences (Window ➝ Preferences ➝ Run/Debug). 26 | Part IV: Debugging Next, several new views will open—most importantly, the Debug view (see Figure 18). This view lets you control all the threads of execution of all the programs being debugged. Finally, the line of code where you put the breakpoint will be highlighted to indicate which line will be executed next. Figure 18. The Debug view lets you control and monitor execution of multiple programs and threads. To continue running after a breakpoint, click on the Resume button in the Debug view’s toolbar ( ) or press F8 (Run ➝ Resume). Execution will continue until the next breakpoint is hit or the program terminates. TIP If your program is in a long-running loop, click on the Suspend button ( ) or select Run ➝ Suspend to make it stop. Or, just add a new breakpoint at any time—the program does not have to be stopped. You can see a list of all your breakpoints in the Breakpoints view. 
Here you can enable and disable breakpoints, make them conditional on certain program values, or set exception breakpoints (i.e., to stop when a Java exception is thrown). Setting Breakpoints | 27 Single Stepping Like most debuggers, the one provided by the Eclipse IDE lets you step line by line through your program with one of two commands: step into ( ; F5; or Run ➝ Step Into) and step over ( ; F6; or Run ➝ Step Over). The difference between the two is apparent when the current line is a method call. If you step into the current line, the debugger will go to the first line of the method. If you step over the current line, the debugger will run the method and stop on the next line. Try stepping now, by running until your breakpoint is hit and then pressing F6 several times in a row. Watch the highlight bar move around as the current line changes. If you step into a method call and then change your mind, execute the step return command ( ; F7; or Run ➝ Step Return). This lets the program run until the current method returns. The debugger will stop at the line following the line that called the method. Looking at Variables The Eclipse IDE provides many different ways to examine and modify your program state. For example, as you single step, you may have noticed that the Variables window shows the current value of all the local variables, parameters, and fields that are currently visible (see Figure 19). You can quickly identify which variables are changing because Eclipse draws them in a different color. If any of the variables are nonprimitives (objects or arrays), you can expand them to look at the individual elements. To change the value of a variable, first select it in the Variables view. This will make its current value appear in the bottom half of the window, where you can change it. Save the new value by pressing Ctrl+S (or right-click and select Assign Value). 28 | Part IV: Debugging Figure 19. The Variables view shows all the values in scope. Changes since the last step or resume are highlighted in red. TIP When you are coding, try to use the smallest possible scope for your local variables. For example, instead of declaring all your variables at the top of a function, declare them inside the statement blocks (curly braces) where they are actually used. Besides being a good programming practice, this will limit the number of items displayed in the Variables view. Another way to see the value of a particular variable is to move your cursor over it in the source editor. After a short pause, a tool tip window will appear with the value. See Figure 20 for an example. Figure 20. Hover the mouse over a variable in the Java editor to see its current value. What if you need to see the value of a Java expression? No problem: just use the mouse or keyboard to select the expression in the editor, then press Ctrl+Shift+D (or right-click and Looking at Variables | 29 select Display). Eclipse will evaluate the expression (including any side effects) and show the results in a pop-up window (see Figure 21). The expression can be as simple or as complicated as you like, as long as it’s valid. Figure 21. Select an expression and press Ctrl+Shift+D to evaluate it. For compound objects like class instances, you may want to try the Inspect command (Ctrl+Shift+I, or right-click and select Inspect) instead of Display. This will let you expand items and collapse members as in the Variables view. 
Changing Code on the Fly Eclipse blurs the line between editing and debugging by letting you modify a running program. You don’t have to stop the program—just edit and save it. If possible, Eclipse will compile just the class that was modified and insert it into the running process. This handy feature is called hot code replace. TIP If you modify a method that the program is currently executing, the debugger will have to drop to the previous frame and begin that method again from its first line. This doesn’t work on the main( ) method because there is no caller. 30 | Part IV: Debugging Some kinds of changes can be made on the fly and some cannot. Simple things (like fixing an expression formula, changing comments, adding new local variables, adding new statements to an existing method, etc.) should work fine. If for some reason execution cannot continue, you will get an error dialog with the option to continue without making the change, terminate the program, or terminate and restart it from the beginning. TIP Hot code replace requires special support from the Java virtual machine that is not present in all versions of Java. It’s known to work in Sun’s Java Version 1.4.2 and later, but not all vendors support it. If your Java version does not support it, you’ll get an error dialog when you try to save. The debugger has so many features that it’s impossible to cover them all here. Part VI covers more advanced topics that impact running and debugging your program, especially in the “Launch Configurations” section. But in your first pass through this book, you may want to continue with Part V, which covers unit testing. Later, you can go to Part VII to find out what all those buttons in the Debug and Breakpoint views do. The Eclipse online help is also a good resource for information on running and debugging. See the following sections in the User’s Guide (Help ➝ Help Contents ➝ Java Development User Guide): • Concepts ➝ Debugger • Tasks ➝ Running and debugging Changing Code on the Fly | 31 PART V PART V: Unit Testing with JUnit JUnit is a regression testing framework written by Kent Beck and Erich Gamma. Since Erich is the project leader for Eclipse’s Java toolkit, it’s only natural that JUnit is well integrated into the IDE. A Simple Factorial Demo To try out unit testing in Eclipse, first create a project called Factorial containing a class called Factorial. Inside that class, create a factorial( ) method as follows: public class Factorial { public static double factorial(int x) { if (x == 0) return 1.0; return x + factorial(x - 1); } } TIP If you notice the nasty little error in this code, ignore it for now. That’s part of the demonstration! 32 Creating Test Cases To test this class, you’ll need to create a test case for it. A test case is a class that extends the JUnit TestCase class and contains test methods that exercise your code. To create a test case, right-click on Factorial.java in the Package Explorer and select New ➝ JUnit Test Case. TIP If you get a dialog offering to add the JUnit library to the build path, select Yes. A dialog window will come up with the name of the test case (FactorialTest) already filled in, along with the name of the class being tested. Click Next to show the Test Methods dialog, select the factorial(int) method, and click Finish to generate the test case. 
Eclipse will then generate some code for you, similar to the following: public class FactorialTest extends TestCase { public void testFactorial( ) { } } Now, all you need to do is supply the contents of the testFactorial( ) method. JUnit provides a number of static methods that you call in your tests to make assertions about your program’s behavior. See Table 5 for a list. Table 5. JUnit assertion methods Method Description assertEquals( ) assertNotEquals( ) See if two objects or primitives have the same value. assertSame( ) assertNotSame( ) See if two objects are the same object. assertTrue() assertFalse( ) Test a Boolean expression. assertNull( ) assertNotNull( ) Test for a null object. Creating Test Cases | 33 To test the factorial( ) method, call the method with a few sample values and make sure it returns the right results. Now, insert a blank line and press Ctrl+Space (this brings up the code assist feature, which is discussed in Part VI); you will discover that JUnit supplies a version of assertEquals( ) that takes three arguments. The first two are the values to compare, the last is a “fuzz factor;” assertEquals( ) will fail if the difference between the supplied values is greater than the fuzz factor. Supply the value you expect the method to return as the first parameter; use the method call itself as the second. For example, public void testFactorial( ) { assertEquals(1.0, Factorial.factorial(0), 0.0); assertEquals(1.0, Factorial.factorial(1), 0.0); assertEquals(120.0, Factorial.factorial(5), 0.0); } Feel free to insert a few more assertions in this method or add additional test methods. You can also override the setUp( ) and tearDown( ) methods, respectively, to create and destroy any resources needed by each test, such as a network connection or file handle. TIP All test methods must start with the word “test” so JUnit can figure out which methods to run. JUnit will ignore any methods in the test class that it doesn’t recognize. Running Tests To run the test case, right-click on FactorialTest.java and select Run As ➝ JUnit Test. The JUnit view appears, and your tests are off and running. In this case, a red progress bar 34 | Part V: Unit Testing with JUnit and a special icon next to the view title indicate that something went wrong (see Figure 22). Figure 22. The JUnit view shows a summary of the last test run. If you double-click on the test class or method name in the Failures list, Eclipse will open that test in the editor. Doubleclick on a line in the Failure Trace to go to a specific line number. TIP The best practice if a test fails is to set a breakpoint on the failing line and then use the debugger to diagnose the problem. Just select Debug instead of Run to run the debugger. When you examine the test, you can see that the factorial function is not being calculated correctly, due to an error in the formula. To correct the error, replace the + with a *: return x * factorial(x - 1); Running Tests | 35 Now, rerun your tests (Ctrl+F11). You shouldn’t see any failures; instead, you should see a green bar, indicating success. Test First Having a good suite of tests is important—so important, that many developers advocate writing the tests for new code before a single line of the code itself! This is called test driven development, or TDD for short. Such tests represent the requirements that your code must satisfy in order to be considered correct. 
To see how Eclipse makes TDD simple, keep the unit test you just created, but delete the Factorial.java file (select it in the Package Explorer and press Delete). The editor for the FactorialTest class will shown an error immediately because the Factorial class is not defined anymore. This simulates the state you would be in if you had written your test class first. Put the text cursor on the first line that has an error and press Ctrl+1 (Edit ➝ Quick Fix). Select the “Create class ‘Factorial’” option and press Enter. When the New Java Class dialog appears, press Enter to accept the defaults. Now, go back to the FactorialTest editor and note that the compiler complains that there is no factorial(int) method. Press Ctrl+1 to create one. Unfortunately, the current version of Eclipse is not always smart enough to figure out the right return type, so you may need to change the generated return type to be a double. Use a dummy return value (0.0) for now. At this point, Factorial. java should look something like this: public static double factorial(int i) { return 0.0; } 36 | Part V: Unit Testing with JUnit TIP Of course, this is not the right way to calculate a factorial, but all you want to do at this point is get the program to compile again. Now you have a test case, a little bit of code, and no errors— so try running the tests. Unsurprisingly, they fail. At this point in actual TDD, you would go back to the code being tested and fix it so that it passes the tests, then add another test, make that work, and repeat the process until done. Compare this technique with what most people typically do. They write a bunch of code first, then write a trivial little test program to exercise that code (maybe something with a main( ) method and a few println( ) statements). Once that test is working, they throw the test away and assume their class will never break again. Don’t ever throw tests away! Nurture them, slowly add to them, and run them often, preferably as part of an automated build and test system. Techniques even exist to create unit tests for user interfaces. TIP When you get a bug report from your users, your first impulse may be to fix the bug. Instead, stop and write a unit test that fails because of the bug. Then, change the code so the test works. This ensures your fix actually solves the problem and helps improve your tests over time. The JUnit view is covered in more detail in Part VII. If you want to learn more about unit testing best practices, see: JUnit home page Resource for test driven development Test First | 37 PART VI PART VI: Tips and Tricks The Eclipse IDE has an incredibly rich set of features, but many of them are hidden from view. With a little digging, you can discover its secrets and get the most out of the environment. This part of the book gets you started with several useful but less visible features. Code Assist ➝ Content Assist). This feature 38 Figure 23. Code assist tells you what comes next and displays any Javadoc (if the source is available). is fully configurable in the Java editor preferences (Window ➝ Preferences ➝ Java ➝ Editor). Templates Eclipse provides a shorthand way of entering text called templates. For example, in the Java editor, if you type for and press Ctrl+Space, the code assist window will pop up as before, but this time it will display a few templates that start with the word “for” (see Figure 24). Figure 24. Editor templates are shorthand for entering boilerplate text (e.g., for loops). 
Selecting the first one will cause code similar to this to appear in the editor: for (int i = 0; i < array.length; i++) { } Templates | 39 The cursor highlights the first variable i. If you start typing, all three occurrences of that variable will be modified. Pressing Tab will cause the variable array to be selected; pressing Tab again will put the cursor on the blank line between the braces so you can supply the body of the loop. TIP If you try this, you may see different variable names. Eclipse guesses which variables to use based on the surrounding code. For a list of all predefined templates, and to create your own or export them to an XML file, see Window ➝ Preferences ➝ Java ➝ Editor ➝ Templates. Automatic Typing Closely related to code assist is a feature called automatic typing. If you’re following along with the earlier example shown in Figure 23, the text cursor should be positioned after System.out. Type .println( (that is, period, println, opening parenthesis). The Java editor will type the closing parenthesis for you automatically. Now, type a double quote, and the closing quote appears. Type in some text and then press the Tab key. Tab advances to the next valid place for input, which is after the closing quote. Hit Tab again, and the cursor advances to the end. Type a semicolon to finish the statement. TIP Code assist and automatic typing take a little getting used to. At first you may be tempted to turn them off, but I suggest you give it time and try to learn to work with them. After a while, you’ll wonder how you ever got by without the extra support. 40 | Part VI: Tips and Tricks Refactoring. If you’ve ever tried this, you know it’s usually a bad idea. Your simple search-and-replace operation can change more than just the variable you intended, even with a clever substitution string. Plus, if you need to change multiple files, you’ll have to go to a scripting language such as Perl. Here’s how it works in Eclipse. To rename a symbol (i.e., a class, method, variable, etc.), select it in the editor and press Alt+Shift+R (Refactor ➝ Rename). Type in the new name and press Enter to perform the change. Done! If you like, you can select the Preview button before performing the changes; this will show you what the modified source will look like (see Figure 25). You can also undo the refactoring (Ctrl+Z or Edit ➝ Undo) if you change your mind. Here’s another handy refactoring supported by Eclipse: to move a class from one package to another, simply go to the Package Explorer view and drag the file to where you want it. Eclipse will take care of changing the package statement in the file and in all the other class files that refer to it. Neat, huh? Eclipse implements over a dozen different types of refactorings, and more are being added all the time. See the Java Development User Guide (Window ➝ Help Contents ➝ Java Refactoring | 41 Figure 25. You can preview the changes that any of Eclipse’s refactorings would make. Development User Guide) under Reference ➝ Refactoring for more information. Hover Help You’ve seen that code assist is a good way to explore an unfamiliar API. Another useful tool is hover help. To use hover help, simply move the mouse cursor over a symbol you want to know more about and pause for a moment. For example, try hovering over println in System.out.println. A little pop-up window will appear, giving you a short description of the method. For best results, you need access to the source code of the symbol you are examining. 
Hover Help

You've seen that code assist is a good way to explore an unfamiliar API. Another useful tool is hover help. To use hover help, simply move the mouse cursor over a symbol you want to know more about and pause for a moment. For example, try hovering over println in System.out.println. A little pop-up window will appear, giving you a short description of the method.

For best results, you need access to the source code of the symbol you are examining. For Java library methods, the source comes with the JDK (J2SE SDK) package. Eclipse can usually figure out how to find this source code on its own, but see Window ➝ Preferences ➝ Java ➝ Installed JREs to configure the JDK's location.

If you are using code from a third-party JAR file, the source is often provided in a separate file or a subdirectory. You can tell Eclipse about this location by right-clicking on the JAR file in the Package Explorer and selecting Properties ➝ Java Source Attachment. If you don't have the source code, but you have the API documentation (Javadoc) in HTML form, select the symbol you want information on and press Shift+F2 (Navigate ➝ Open External Javadoc). To make this work, you have to configure the Javadoc URL in the properties for the JAR file: right-click on the JAR file and select Properties ➝ Javadoc Location.

Hyperlinks

Did you know there is a web browser built into the Java editor? Well, there is—sort of. The editor lets you navigate around your program as if it were a web site. Hold down the Ctrl key and move your mouse through your source code. An underline will appear to indicate hyperlinked symbols. You can leave the mouse cursor over the symbol to see its definition, or click on it to open the declaration in the editor.

Like a browser, Eclipse maintains a history of all the pages you've visited. Use the Back command (Alt+Left or Navigate ➝ Back) to go to the previous location, and use Forward (Alt+Right or Navigate ➝ Forward) to go to the next one.

Quick Fixes

Whenever you make a syntax error in your program, Eclipse's background compiler detects it immediately and draws an error indicator (affectionately known as the red squiggle) under the offending code. In addition to simply detecting the problem, Eclipse can usually offer an automatic program correction, called a quick fix.

For example, try misspelling the System.out method println as printline. Press Ctrl+1 (Edit ➝ Quick Fix) to see several possible fixes. One of them will be Change to println(..). Press the down arrow to see a preview of each proposed change; press Enter to accept the one you want.

The Quick Fix command can also make suggestions for small source transformations on lines that don't have errors. For example, if you have code like this:

    if (!(hail || thunder))

and you select the text !(hail || thunder) and press Ctrl+1, Eclipse will suggest some possible transformations, such as "Push negation down." Choosing that particular option would change the code to:

    if (!hail && !thunder)

Searching

The Eclipse IDE provides dozens of different ways to locate things. Eclipse breaks these up into two major categories:

Find — Look for something in the current file.
Search — Look for something in multiple files.

The Find command (Ctrl+F or Edit ➝ Find/Replace) is just a run-of-the-mill text locator like you would see in any editor. You can look for plain strings or full regular expressions, and you can optionally substitute the text you find with other text. The shortcut to find the next occurrence is Ctrl+K. A handy variant on Find is incremental find, a feature borrowed from the Emacs editor. Press Ctrl+J (Edit ➝ Incremental Find Next) and start typing the text you're looking for. The selection will move to the next occurrence as you type.

Searches are much more interesting. To start with, Eclipse supports locating strings and regular expressions in many files at once.
You can search the entire workspace, just the current project, or any subset (called a working set) that you define. To do this kind of search, select Search ➝ File....

Eclipse can also do a full language-aware search. Since Eclipse has its own built-in Java compiler, it understands the difference between, say, a method named fact and a field named fact, or even between two methods that have the same names but take different parameters, such as fact(int) and fact(double). This kind of search is available by selecting Search ➝ Java....

These searches and more are accessible through the Search dialog (Ctrl+H or Search ➝ Search). The most common variations also have direct menus or shortcuts of their own. For example, to find all references to a symbol, select the symbol and press Ctrl+Shift+G (or Search ➝ References ➝ Workspace). To find the symbol's declaration, press Ctrl+G (Search ➝ Declarations ➝ Workspace). To find only those places where the symbol is modified, try Search ➝ Write Access ➝ Workspace.

TIP: Current versions of Eclipse don't allow you to perform searches on arbitrary files in the filesystem, but you can use an advanced option under File ➝ New ➝ Folder to link outside directories into your workspace and then search them.

All search results will appear, naturally enough, in the Search view. See Part VII for more details on that view.

Scrapbook Pages

A scrapbook page is a way to create and test snippets of code without all the trappings of normal Java code. In some ways, it's like working in a scripting language, but you have the full expressiveness of Java in addition to being able to make calls into any of your code or any of the system libraries.

To create a scrapbook page, select File ➝ New ➝ Other... ➝ Java ➝ Java Run/Debug ➝ Scrapbook Page. Enter the name of the page—for example, test—and click Finish (or just press Enter). A new editor page will open for test.jpage.

In the blank scrapbook page, try typing in an expression like 123/456, press Ctrl+A to select the expression, and press Ctrl+Shift+D (Run ➝ Display) to run it and display the result. (The answer in this case is (int) 0 because both numbers are integers and the result was truncated.) Note that the result is selected, so you can copy it quickly (or press Backspace to remove it from the page).

Next, try entering Math.PI and displaying its result. This works because the scrapbook page already has all the system libraries imported, including the Math class. If you need a particular import, you can bring up the context menu and select Set Imports....

Let's try something a little more complicated. Type in this snippet of code:

    double d = 3.14;
    System.out.println(d);

Now select the snippet and press Ctrl+U (Run ➝ Execute) to execute it. The output will appear in the Console window. Execute is exactly like Display except that Execute doesn't show the return value (if any). You can execute loops or even call methods in your regular programs from the scrapbook page. This is useful for trying out new ideas or just for simple debugging.
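As a sketch of calling your own code, assuming the Factorial class from Part V lives in the same project (and its package, if any, has been added via Set Imports...), you could type this line into the scrapbook page:

    Factorial.factorial(5)   // select this line and press Ctrl+Shift+D; expect (double) 120.0

This kind of throwaway experiment complements, but doesn't replace, a real unit test.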
Java Build Path

If you've done any Java programming before, you're familiar with the Java classpath—a list of directories and JAR files containing Java classes that make up the program. Usually this is controlled by an environment variable (CLASSPATH) or a command-line option (-cp). In Eclipse, classpath details are a little more complicated. The first thing to realize is that Eclipse doesn't use the CLASSPATH environment variable. It understands and controls the location of all classes itself. Additionally, Eclipse makes a distinction between runtime and build (compile) time. In Eclipse terminology, classpath refers only to the runtime class list, while build path refers to the compile-time list. These two paths may be different, but, by default, they will both be set to the list you specify in the build path.

To see the build path, right-click on your project and select Properties ➝ Java Build Path. A dialog will appear, with the tabs described in Table 6.

Table 6. Java Build Path tabs

Source — Tell the Java compiler where your source code is located. Each source directory is the root of a package tree. You can also control where generated output files (such as .class files) go.
Projects — Make the current project depend on other projects. Classes in the other projects will be recognized at build time and runtime. The other projects do not have to be built into a JAR file before referring to them in Eclipse; this cuts down on development time.
Libraries — Pull in code that is not in Eclipse projects, such as JAR files. See Table 7 for the kinds of locations you can access.
Order and Export — If other projects are dependent on this one, expose (or don't expose) symbols in the current project to the other projects.

In addition to going through the Java Build Path dialog, you can right-click on directories and JAR files in the Package Explorer view and select commands under the Build Path menu to add and remove items from the build path.

The Libraries tab is very flexible about the locations it allows you to specify for JARs and class files. Other features in Eclipse use similar lists, so if you understand this tab, it will help you understand those features as well. Table 7 explains the buttons on the Libraries tab.

Table 7. JAR and class locations in the Java Build Path

Add JARs... — Specify JAR files in the workspace (this project or other projects).
Add External JARs... — Specify full pathnames for JAR files outside the workspace (not recommended for team projects).
Add Variable... — Use a symbolic variable name (like JRE_LIB or ECLIPSE_HOME) to refer to a JAR file outside the workspace.
Add Library... — Refer to a directory outside the workspace containing several JAR files.
Add Class Folder... — Refer to a workspace directory containing individual class files.

Launch Configurations

How do you specify command-line parameters to your program or change the Java VM options that are used to invoke your program? Every time you select Run As ➝ Java Application on a new class that has a main() method, Eclipse creates a launch configuration for you. A launch configuration is the set of all the options used to run your program. To change those options, select Run ➝ Run... and locate your configuration in the dialog. Click on the configuration to see all the options in a series of tabbed pages on the righthand side of the window (the tabs are described in Table 8). You can also create new configurations in this dialog.

Table 8. Launch configuration tabs

Main — Specify the project and the name of the Main class.
Arguments — Set the program arguments, the Java VM arguments, and the working directory in which to start the program.
JRE — Specify the version of Java used to run the program (this can be different than the one used to compile it).
Classpath — Set the list of JARs and classes available at runtime.
Source — Locate the source code inside or outside the workspace.
Environment — Pass environment variables to the program.
Common — Miscellaneous options.
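To see where the Arguments tab's settings end up, here is a minimal sketch (the class name and the greeting property are made up for illustration). Program arguments arrive in main()'s args array, while VM arguments of the form -Dname=value become system properties:

    public class ArgsDemo {
        public static void main(String[] args) {
            // program arguments, as entered on the Arguments tab
            for (String arg : args) {
                System.out.println("arg: " + arg);
            }
            // a VM argument such as -Dgreeting=hello shows up here
            System.out.println("greeting = " + System.getProperty("greeting"));
        }
    }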
Many more features of Eclipse are waiting to be discovered, and new ones are added in each release. The "Tips and Tricks" section of the online help (Help ➝ Tips and Tricks) is a good place to look for the kinds of little nuggets that can save you time or let you do something new. You can also find a useful command and keyboard shortcut listing in the Appendix.

Part VII: Views

Eclipse has so many different views and toolbars that it's easy to get overwhelmed trying to decipher them all. Consider this part of the book to be your own personal secret decoder ring.

Breakpoints View

The Breakpoints view (in the Debug perspective) shows a list of all the breakpoints you have set in your projects. Use it to enable and disable breakpoints, edit their properties, and set exception breakpoints (which trigger a stop when a Java exception occurs). Table 9 lists the commands on the Breakpoints view toolbar.

Table 9. Breakpoints view toolbar

Remove the selected breakpoint(s).
Remove all breakpoints in all projects.
Show/hide breakpoints not valid in the selected remote debug target (toggle).
Edit the source code at the breakpoint.
Temporarily disable all breakpoints (toggle).
Expand the breakpoint tree.
Collapse the breakpoint tree.
When the program stops, highlight the breakpoint that caused it to stop (toggle).
Create a breakpoint for a Java exception.

Double-click on a breakpoint to edit the code at that line. To fine-tune when the breakpoint will be triggered, right-click on the breakpoint and select Properties. Table 10 shows some of the properties you can set. The exact options that appear will vary depending on the breakpoint's type.

Table 10. Breakpoint properties

Enabled — Indicates whether the breakpoint is currently in effect.
Hit Count — Specifies how many times the breakpoint must be hit before the program stops.
Condition — Stops only when the expression is true or changes value.
Suspend Policy — Pauses the whole program or just a single thread.
Filtering — Limits the breakpoint's effect to the given thread(s).

In the Eclipse Java development environment, an expression is anything you can put on the righthand side of a Java assignment statement. This can include ordinary variables, fields, method calls, arithmetic formulae, and so forth. A conditional breakpoint is a breakpoint that doesn't stop every time. For example, if you're debugging a crash that occurs on the 100th time through a loop, you could put a breakpoint at the top of the loop and use a conditional expression like i == 99, or you could specify a hit count of 100—whichever is more convenient.
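As a sketch of that scenario (the class is hypothetical), the loop below throws on its 100th pass. A breakpoint on the marked line with the condition i == 99, or with a hit count of 100, stops the program just before the crash so you can inspect its state:

    public class LoopCrash {
        public static void main(String[] args) {
            int[] results = new int[200];
            for (int i = 0; i < results.length; i++) {
                // set a breakpoint here; condition: i == 99 (or hit count 100)
                results[i] = 1000 / (99 - i);  // ArithmeticException when i == 99
            }
        }
    }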
Console View

The Console view displays the output of programs that are run under the control of Eclipse. Use it to view standard output or error output from your Java programs, or from Ant, CVS, or any other external program launched from Eclipse. You can also type into the Console view to provide standard input.

The Console view is closely tied to the Debug view. It keeps a separate page for each program listed in the Debug view, whether or not the program is currently running. Table 11 shows the commands on the Console view's toolbar.

Table 11. Console view toolbar

Terminate the current program.
Remove all record of previously terminated programs.
Clear all the lines in the current console page.
Keep the view from scrolling as new lines are added to the end (toggle).
Prevent the view from automatically switching to other pages (toggle).
Switch to an existing console page.
Open a new console page (for example, to see CVS output).

TIP: If your program prints a stack traceback, the Console view turns each line into a hyperlink. Click on a link to go to the location indicated in the traceback.

Options for the Console view can be found under Window ➝ Preferences ➝ Run/Debug ➝ Console.

Debug View

The Debug view (in the Debug perspective) lists all programs that were launched by Eclipse. Use it to pause program execution, view tracebacks, and locate the cause of deadlocks (more on this shortly). Table 12 shows the commands on the Debug view's toolbar.

Table 12. Debug view toolbar

Continue running a program or thread that was previously paused.
Pause the current program or thread.
Terminate the current program.
Disconnect from a remote debugger.
Remove all record of previously terminated programs.
Single step into method calls.
Single step over method calls.
Continue execution until the current method returns.
Rewind execution to the beginning of the selected stack frame (requires VM support).
Enable/disable step filters (toggle).

Step filters prevent you from having to stop in classes, packages, initializers, or constructors that you don't find interesting. The list of filters is configured in Window ➝ Preferences ➝ Java ➝ Debug ➝ Step Filtering.

One option in the Debug view menu deserves a special mention: Show Monitors. Monitors are Java thread synchronization points. Deadlocks occur when one thread is waiting on a monitor that will never be released. When you turn on the Show Monitors option, the Debug view will display a list of monitors owned or waited on by each thread. Any deadlocks will be highlighted.

Declaration View

The Declaration view (in the Java perspective) shows the Java source code that defined the current selection. Use this view to see the declaration of types and members as you move around your code, without having to switch editors. The toolbar for the Declaration view contains the single icon shown in Table 13.

Table 13. Declaration view toolbar

Open an editor on the input source code.

TIP: The declaration can also be seen by holding down the Ctrl key and hovering the mouse pointer over the type or member in the Java editor.

Display View

The Display view (in the Debug perspective) shows expression results in an unstructured format. Use it as a temporary work area in which to place expressions and calculate their values. Table 14 shows the commands on the Display view's toolbar.

Table 14. Display view toolbar

Inspect the selected expression.
Display the selected expression.
Evaluate the selected expression.
Erase everything in the Display view.

There are four different ways to evaluate expressions in the Eclipse debugger:

Inspect (Ctrl+Shift+I or Run ➝ Inspect) — Show the value of an expression in an expandable tree format. Optionally, copy it into the Expressions view. The value is never recalculated.
Display (Ctrl+Shift+D or Run ➝ Display) — Show the value of an expression in a simple string format. Optionally, copy it into the Display view. The value is never recalculated.
Execute (Ctrl+U or Run ➝ Execute) — Evaluate the expression but don't show its value.
Watch (Run ➝ Watch) — Copy an expression into the Expressions view. Its value is recalculated every time you do a Step or Resume command.

For example, in the Java editor, you could highlight an expression such as array[i-1] and press Ctrl+Shift+D. A pop-up window appears, showing the current value of that array element. Press Ctrl+Shift+D again and the expression is copied to the Display view. If this view looks familiar to you, that's because it's essentially an unnamed scrapbook page.

TIP: See the "Scrapbook Pages" section in Part VI for more information on scrapbook pages.

Error Log View

The Error Log view is not included by default in any perspective, but you can open it with Window ➝ Show View ➝ Error Log. Use it to view internal Eclipse errors and stack dumps when reporting problems to the developers. It can also display warnings and informational messages logged by Eclipse plug-ins. Table 15 shows the commands on the Error Log view's toolbar.

Table 15. Error Log view toolbar

Export the error log to another file.
Import the error log from another file.
Clear the view without modifying the logfile.
Clear the view and erase the logfile.
Open the logfile in an external text editor.
Reload the view with the contents of the logfile.

TIP: See "Reporting Bugs" in Part IX for instructions on how to report problems in Eclipse.

Expressions View

The Expressions view (in the Debug perspective) shows a list of expressions and their values in the debugger. Use it to examine program states persistently as you step through your code, and to set breakpoints when fields are accessed or modified. This view is similar to the Variables view (described later in Part VII) except that the Expressions view shows only expressions that you have explicitly added. Table 16 describes the Expressions view's toolbar.

Table 16. Expressions view toolbar

Show full type names (toggle).
Show logical structure (toggle).
Collapse all the expanded trees in the view.
Remove the current expression from the view.
Remove all expressions in the view.

There are three ways of looking at any expression in the Eclipse IDE; this is true for both the Expressions view and the Variables view:

Literal mode — The fields, and nothing but the fields.
Logical mode — The way you normally think about the object.
Details pane — The string representation (as returned by the toString() method).

Consider a java.util.LinkedList object. If you look at it literally (as in Figure 26), you'll see it contains some internal data structures, such as the number of items and a reference to the first item. But if you look at it logically (Figure 27), it simply contains a list of objects.

Figure 26. Literal mode shows an object's internal data structures.
Figure 27. Logical mode shows what the object really means.

Additionally, the Expressions and Variables views support an optional text area called the Details pane. This pane shows the string representation of the selected item (see Figure 28). Use the view menu to arrange the panes horizontally or vertically, or to disable the Details pane altogether.

Figure 28. The Details pane shows an object's string representation.

TIP: You can create your own ways of looking at expressions by defining new Logical Structures and Detail Formatters in the debugger preferences (Window ➝ Preferences ➝ Java ➝ Debug).
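A quick way to try all three modes is to debug a few lines like the following sketch, stop on the println, and select list in the Variables view. Literal mode shows the internal size and node fields, logical mode shows the two elements, and the Details pane shows the toString() result, [alpha, beta]:

    import java.util.LinkedList;

    public class DetailsDemo {
        public static void main(String[] args) {
            LinkedList<String> list = new LinkedList<String>();
            list.add("alpha");
            list.add("beta");
            System.out.println(list);  // set a breakpoint here and inspect 'list'
        }
    }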
Hierarchy View

The Hierarchy view (in the Java perspective) shows the supertypes and subtypes for the selected Java object. Use it to explore the type hierarchy, fields, and methods for a class or interface by selecting the type in the Java editor or Package Explorer view and pressing F4 (Navigate ➝ Open Type Hierarchy).

The Hierarchy view has two panes, each with its own toolbar. The top pane is the Type Hierarchy tree (see Table 17), which lists the object's supertypes and subtypes. The optional bottom pane is the Member list (Table 18). It shows fields and methods. Double-click on any type or member to edit its source code.

Table 17. Type Hierarchy toolbar

Show the type hierarchy from object down.
Show the supertype hierarchy from the current type up.
Show the subtype hierarchy from the current type down.
View a previous type in the history.

Table 18. Member list toolbar

Lock the member list and show inherited members in the Type Hierarchy pane (toggle).
Show all inherited members (toggle).
Sort members by defining type (toggle).
Show/hide fields (toggle).
Show/hide statics (toggle).
Show/hide nonpublic members (toggle).

TIP: Press Ctrl+T in the Java editor to show the type hierarchy in a searchable pop-up window.

Javadoc View

The Javadoc view (in the Java perspective) shows Java documentation from comments at the definition of the current selection. Use it if you need a larger, permanent version of the pop-up window you get when you hover the mouse pointer over a type or member in the Java editor. The toolbar for the Javadoc view contains the single icon shown in Table 19.

Table 19. Javadoc view toolbar

Open the input source code.

Like Hover Help, the Javadoc view requires access to the source code.

TIP: See "Hover Help" in Part VI for more information.

JUnit View

The JUnit view (in the Java perspective) shows the progress and results of JUnit tests. Use it to see what tests failed and why (see Part V for instructions on how to run unit tests).

The JUnit view has two panes, each with its own toolbar. The JUnit tests pane (see Table 20 for toolbar commands) lists the tests that failed (or a hierarchy of all tests). When you select a failed test in this pane, the Failure trace pane (see Table 21 for toolbar commands) shows a traceback pinpointing where the failure occurred. Double-click on any test name, class name, or traceback line to edit the source code at that point.

Table 20. JUnit tests toolbar

Go to the next failed test.
Go to the previous failed test.
Stop the current test run.
Rerun all tests.
Rerun just the tests that failed.
Keep the test list from scrolling.

Table 21. Failure trace toolbar

Filter unwanted stack frames from failure tracebacks.
Compare the expected and actual values on a JUnit assertion (string values only).

Use the JUnit preference page (Window ➝ Preferences ➝ Java ➝ JUnit) to configure the list of stack frames to filter out.
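For instance, a failing assertion in a JUnit 3-style test like this sketch (the names are assumed from the Factorial example in Part V) is what populates the Failure trace pane; double-clicking the assertEquals line in the traceback jumps straight to it:

    import junit.framework.TestCase;

    public class FactorialTest extends TestCase {
        public void testFactorialOfFive() {
            // the third argument is the tolerance for comparing doubles
            assertEquals(120.0, Factorial.factorial(5), 0.0001);
        }
    }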
Navigator View

The Navigator view (in the Resource perspective) shows all projects in the workspace as they exist on disk. Use it to see the literal directories and files. Contrast this with the Package Explorer view, which shows a Java-centric representation. Table 22 describes the Navigator view's toolbar.

Table 22. Navigator view toolbar

Go back in the navigator history.
Go forward in the navigator history.
Go up to the parent directory.
Collapse all the expanded trees in this view.
Link selections with the editor.

Right-click on a directory and select Go Into to focus on that directory. Then you can use the Back, Forward, and Up toolbar buttons to move around in the tree.

Outline View

The Outline view (in the Java and Debug perspectives) shows a tree representation of the resource being edited. Use it to quickly find the major elements of your class and study the overall API you have designed. In order for the outline to appear, the current editor must support it. Table 23 describes the Outline view's toolbar.

Table 23. Outline toolbar

Sort members alphabetically (toggle).
Show/hide fields (toggle).
Show/hide statics (toggle).
Show/hide nonpublic members (toggle).
Show/hide local types (toggle).

TIP: Press Ctrl+O in the Java editor to show the outline in a searchable pop-up window.

Package Explorer View

The Package Explorer view (in the Java perspective) shows all projects in the workspace using logical Java groupings. Use it as your primary window into the world of your Java source code. Table 24 shows the Package Explorer view's toolbar.

Table 24. Package Explorer toolbar

Go back in the Package Explorer history.
Go forward in the Package Explorer history.
Go up to the parent directory.
Collapse all the expanded trees in this view.
Link selections with the editor.

The Package Explorer view is a much more powerful version of the Navigator view, custom tailored for Java development. The main difference is that the Package Explorer understands Java source directories and packages. For example, suppose your project has a package named a.b.c. You will see the package a.b.c in the Package Explorer view, while in the Navigator view, you will see the directory tree (a containing b containing c).

Views such as the Package Explorer support thousands of icon variations made from combining base icons (for simple objects like packages and files) with decorator icons, also known as decorations (for errors, warnings, accessibility, etc.). Tables 25 and 26 show a few of the common icons you should become familiar with.

Table 25. Common base icons

Project
Source folder
Plain folder
Java library
Java package
Java file
Scrapbook page
Class file
JAR file
Plain file
Java class
Java interface
Public method
Private method
Protected method
Public field
Private field
Protected field

Table 26. Common decorations

Error
Warning
Version controlled
Inherited
Deprecated
Abstract
Constructor
Final
Java related
Static

Problems View

The Problems view (in the Java perspective) shows all the errors and warnings in the workspace. Double-click on a line in this view to jump directly to the offending source line. Table 27 describes the Problems view toolbar.

Table 27. Problems view toolbar

Delete the selected problem(s).
Filter out some problems.

TIP: You'll often want to use a filter to see just the problems for the current project or perhaps just the currently selected resource.

Right-click on a problem to see a context menu. One of the options there is Quick Fix (Ctrl+1). Use this to quickly repair common errors.

TIP: See "Quick Fixes" in Part VI for more information.

Search View

The Search view (in the Java perspective) shows the results of any search operation. Use it to filter and select just the matches you're interested in.
Table 28 describes the Search view toolbar.

Table 28. Search view toolbar

Go to next match.
Go to previous match.
Remove selected match(es) from the view.
Remove all matches from the view.
Expand the search tree.
Collapse the search tree.
Stop a running search.
Go back to a previous search in the history.
Group by project.
Group by package.
Group by file.
Group by type.

The Search view can show its results in either flat mode (a plain listing) or hierarchical mode (an expandable tree). The grouping actions in the toolbar are only available in hierarchical mode. Use the View menu to change modes.

Tasks View

The Tasks view (in the Java perspective) lists all the markers placed in your source code. Markers are reminders that you or Eclipse add to the code to indicate something that needs your attention later. They can be added manually (Edit ➝ Add Bookmark... or Edit ➝ Add Task...), but more commonly the compiler adds them when it encounters a special comment in your code like this:

    // TODO: Revisit this later

The comment strings TODO, FIXME, and XXX are recognized by default. Add any others that you commonly use in your code to Window ➝ Preferences ➝ Java ➝ Compiler ➝ Task Tags. Table 29 describes the Tasks view toolbar.

Table 29. Tasks view toolbar

Create a new task.
Delete the selected task(s).
Filter out some tasks.

Variables View

The Variables view (in the Debug perspective) shows all the parameters and local variables in scope during a debugging session. Use this view to keep an eye on your program's state as you step through it. The Variables view toolbar is described in Table 30.

Table 30. Variables view toolbar

Show full type names (toggle).
Show logical structure (toggle).
Collapse all the expanded trees in the view.

TIP: If you're currently stopped in a nonstatic method, the first item in the Variables view will be this. Expand it to see your instance variables.

Part VIII: Short Takes

This pocket guide wouldn't fit in your pocket if it described every nuance of Eclipse in detail. However, I want to briefly mention a few more of Eclipse's notable features in this part of the book. Some of these are built into the Eclipse SDK; some are plug-ins that you need to download and install yourself.

TIP: The packaging of Eclipse is constantly evolving, so by the time you read this, you may be able to find downloads that combine parts of the Eclipse SDK with plug-ins for a specific task—for example, web development. In addition, you can find hundreds of plug-ins that extend Eclipse by searching the community web sites listed in Part IX.

To find out more about any of these features, see the online help topic or web sites listed in the following sections. Note that when you install a plug-in, it will often add a new section to the Help Contents that explains how to use it.

CVS

CVS is a popular source management system for projects and teams of any size. You use a CVS repository to hold the evolving versions of your code, tools, scripts, documentation, and so forth. The Eclipse IDE comes with excellent CVS integration—which makes sense, as CVS is currently used in the development of all Eclipse projects.

Use the CVS Repository Exploring perspective to see the contents of a CVS repository. There you can define the server location, and view or check out (make a local copy of) the code.
Eclipse provides a variety of options to keep your local copy up to date with repository changes, including additional views in the Team Synchronizing perspective. A terrific compare and merge utility (one of my favorite features in Eclipse) makes handling conflicts easy.

A history of all changes for a specific file (resource) can be seen in the CVS Resource History view. Double-click on a line in this view to open an editor on that revision, or select two revisions, right-click, and select the Compare command to see their differences. Another useful CVS command is Show Annotation. This lets you scroll through a particular file and see who touched each line, when, and why.

Online help: Help ➝ Help Contents ➝ Workbench User Guide ➝ Concepts ➝ Team programming with CVS

Ant

Ant is the Java-based successor to the venerable make tool. You can use Ant for automating almost any development task, from compiling to testing to packaging and deployment. Eclipse can import and export Ant-based projects, edit Ant files, and run Ant tasks manually or as part of a build process. Next to the Java editor, the Ant editor is one of the most advanced editors available in the IDE, with support for code assist, outlining, and formatting.

Online help: Help ➝ Help Contents ➝ Workbench User Guide ➝ Concepts ➝ External tools ➝ Ant support

Web Tools Platform

Do you write web pages, edit XML, develop Java servlets, or dream about EJB? Then the Web Tools Platform (WTP) project is for you. This is a separate download that—when installed—integrates into your Eclipse SDK installation. There are two parts to the WTP: web standard tools (covering HTML, XML, XSD, etc.) and Java standard tools (for JSPs, EJBs, and so forth). This project supports web service development, server management, debugging code on the server, and more.

Testing and Performance

The Test and Performance Tools Platform (TPTP—who makes up these acronyms?) provides tools and technologies that bring together traditional profiling, monitoring, tracing, and testing. For example, you can use it to correlate CPU usage on one machine with events logged by another.

Visual Editor

The Visual Editor project lets you create graphical user interfaces for your programs. It supports round-tripping, which means you can edit your interface in visual mode (using drag-and-drop), switch to source mode to make a few changes, switch back, and continue seamlessly.

C/C++ Development

Java isn't the only language that the Eclipse IDE supports. The C/C++ Development Toolkit (CDT) comes with everything you need for C/C++ development except the tool chain itself (i.e., the compiler, linker, and debugger). CDT works with a variety of tools from various embedded systems vendors; for ordinary desktop applications, you can download and use the free gcc compiler and gdb debugger from the GNU project.

AspectJ

The Eclipse project is the home of AspectJ, an aspect-oriented extension to the Java language, along with the AspectJ Development Toolkit (AJDT), which integrates the language into the Eclipse IDE. AspectJ provides clean modularization of crosscutting concerns such as error checking, monitoring, and logging.
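As a flavor of the language (a minimal sketch; the Account class and its package are hypothetical), an aspect can add logging to every public call on a class without touching the class itself:

    public aspect TraceLogging {
        // matches any public method call on com.example.Account
        pointcut tracedCall(): call(public * com.example.Account.*(..));

        before(): tracedCall() {
            System.out.println("Entering " + thisJoinPoint);
        }
    }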
A related project, the Concern Manipulation Environment (CME), aims to bring some elements of aspect programming to pure Java.

Plug-in Development

Under the covers, Eclipse is a completely modular system with dozens—if not hundreds—of plug-ins working together on top of a small dynamic runtime. Each plug-in defines public extension points, which are like the sockets on a power strip. Other plug-ins contribute extensions that, well, plug into those sockets. Thus the system organically grows functionality as more plug-ins are added. At the same time, the runtime is scalable, so you never have to worry about blowing a fuse.

The Plug-in Development Environment (PDE) bundled with the Eclipse SDK lets you define your own plug-ins in order to extend Eclipse. PDE supports defining and using extension points, debugging your plug-ins, packaging, and more.

TIP: The source code for Eclipse is freely available; in fact, it's bundled with the SDK package you installed. This is a great resource for learning plug-in programming. File ➝ Import ➝ External Plug-ins and Fragments brings parts of the code into your workspace.

Online help: Help ➝ Help Contents ➝ Platform Plug-in Developer Guide

Rich Client Platform

Because of the flexible open source license under which Eclipse is released, you can use Eclipse code and technologies in your own programs, even if they are not open source. A subset of the Eclipse SDK called the Rich Client Platform (RCP) provides basic functionality common to most desktop applications, such as windowing and menu support, online help, user preferences, and more. By building your own custom application on top of this framework, you can cut the development time of your projects significantly.

Since Eclipse technology is all based on plug-ins, the PDE is used to write RCP programs. You can brand your applications with custom icons, window titles, and a splash screen, and you can deploy them via traditional zip files, professional installers, or even JNLP. A number of templates and tutorials are available.

Online help: Help ➝ Help Contents ➝ Platform Plug-in Developer Guide ➝ Building a Rich Client Platform application

Standard Widget Toolkit

The Eclipse user interface is written in Java using the Standard Widget Toolkit (SWT). SWT uses the native facilities of your operating system to achieve high performance and fidelity indistinguishable from that of C-based applications. You can use the same toolkit for your own applications.
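A minimal SWT program follows the classic pattern below—create a Display and a Shell, then pump events until the window closes (the class name and window title are made up for illustration):

    import org.eclipse.swt.widgets.Display;
    import org.eclipse.swt.widgets.Shell;

    public class HelloSWT {
        public static void main(String[] args) {
            Display display = new Display();   // connects to the native window system
            Shell shell = new Shell(display);  // a top-level window
            shell.setText("Hello, SWT");
            shell.open();
            while (!shell.isDisposed()) {      // the event loop
                if (!display.readAndDispatch()) {
                    display.sleep();
                }
            }
            display.dispose();                 // release native resources
        }
    }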
Online Help Eclipse provides an extensible online help system with details about the version of Eclipse you’re using and any plug-ins you have installed. It can be searched and viewed in several different ways. Getting Help The most common way to view online help is to select Help ➝ Help Contents. A separate Help window will open, showing several help topics. Expand the topics to hone in on the information you need, or enter a keyword in the Search field at the top of the window. Another way to get help is with dynamic help. To use dynamic help, simply press F1 (or select Help ➝ Dynamic Help) and an embedded Help view will appear. As your focus changes to different views and editors, the Help content is updated to show help for what you are doing at the 75 moment. Select Help ➝ Search Help... to find help topics relevant to the view you’re currently in. Help Topics If you install the Eclipse SDK as detailed in Part I, you will find the following topics listed in the Help contents: Workbench User Guide Contains information on how to use the IDE in general, independent of your programming language. Java Development User Guide Discusses how to use the Java language support (editors, views, etc.) provided by Eclipse. Platform Plug-in Developer Guide Covers the concepts and programming interfaces used to write Eclipse plug-ins. JDT Plug-in Developer Guide Covers writing plug-ins specifically for the Java Development Tools. PDE Guide Describes how to use the plug-in development environment included in the Eclipse SDK. TIP Depending on your options, some of these topics may be hidden. Click the Show All Topics button to see them all. Eclipse Web Site The official Eclipse web site,, is your best source of information on Eclipse: the platform, the IDE, and the community. The design of this site may change over time, but as of this writing, the major sections are: 76 | Part IX: Help and Community About us Learn about the Eclipse project, how it got started, who is involved in it, how the governance works, legal questions, logo programs, and so forth. Projects Eclipse development is split into top-level projects, subprojects, and components. On the Projects page, you can see how all this is organized. Drill down to get to FAQs, documentation, source code, etc. Download This area should be familiar from Part I. It’s where you’ll find the latest prebuilt versions of Eclipse. Articles The articles section is full of technical information for developers using or extending Eclipse. Consider writing an article yourself to add to the community knowledge base. Newsgroups The main user forums are found here (see the “Newsgroups” section, later in this chapter). Community This is where you’ll find out about conferences, user groups, web sites, books, courses, free and commercial plug-ins, awards, and much more. Search Locate any page at eclipse.org, including newsgroup and mailing list archives. Bugs Find or report bugs and enhancement requests. Eclipse Web Site | 77 Community Web Sites Many individuals and companies have created web sites to address particular needs of the community. Here are a few of the most popular ones. More can be found in the Community Resources area of the eclipse.org web site. EclipseZone () An online community by and for Eclipse users everywhere. Planet Eclipse () Planet Eclipse is a window into the world, work, and lives of Eclipse users and contributors. Plug-ins Registry () This is a nonprofit registry of Eclipse plug-ins, created and maintained by Eclipse users. 
Eclipse Plugin Central — This site offers a plug-in directory, reviews, ratings, news, forums, and listings for products and services.
Eclipse Wiki — This user-editable web site has FAQs, tips, tricks, and other useful information.
IBM AlphaWorks — Part of IBM's emerging technologies web site, this is dedicated to Eclipse and WebSphere-related projects and plug-ins.
IBM developerWorks — developerWorks hosts a variety of tutorials, articles, and related information on Eclipse and other open source projects.
Apache — Apache software is used throughout Eclipse, and the two projects collaborate in many areas.
When reporting a bug, supply the steps that someone else will need to follow to reproduce the problem. 80 | Part IX: Help and Community Often when there’s a bug in Eclipse, the system will record an event in the Eclipse error log. This record contains important information that can help the developers diagnose the problem. Locate the event in the Error Log view (discussed in Part VII) and paste it at the end of the Description field. Click the Commit button to complete the report. At the time of this writing, I’ve personally entered 367 bug reports, including 106 enhancement requests; 268 of these entries have been resolved. In addition, I’m cc’d on 437 bugs and have commented on 659. While you might not become that involved, I challenge you to play your part in improving Eclipse. Newsgroups Eclipse user forums are hosted on eclipse.org using ordinary newsgroups. All newsgroup content is protected by a password in order to control spam. To get the password, go to the Eclipse home page and select the “newsgroups/user forum” link; you should see a link to request a password. Submit your information and the password will be mailed to you. Although there is a web-based interface for the forums, the best way to participate is to use a rich client news reader, such as Thunderbird (). Enter the news server name (news.eclipse.org), the userid, and the password in the appropriate place for your reader. Here are a few newsgroups that I recommend you start with: eclipse.newcomer Ask questions about downloading, installing, and getting started with Eclipse in this newsgroup. eclipse.platform Come here to participate in technical discussions about how to use or extend Eclipse. Newsgroups | 81 eclipse.platform.jdt This group is for technical discussions about how to use the Java Development Tools. eclipse.foundation This forum is for general discussions pertaining to the Eclipse Foundation and its communities and governance. eclipse.commercial This group is intended to allow commercial vendors to post product releases and information about commercial products based on Eclipse. Mailing Lists For the most part, mailing lists at eclipse.org are intended for use by developers working on day-to-day development of Eclipse itself. The development mailing lists are the way design and implementation issues are discussed and decisions voted on by the committers (developers who’ve earned write access to the source repository). Anyone can listen in, but questions and discussions about using Eclipse and Eclipse-based tools or developing plug-ins should be posted to one of the newsgroups listed previously. Conclusion Eclipse is not just an IDE for Java developers, though that’s how most people are introduced to it. Eclipse technology is used by everyone from office secretaries running custom RCP applications to NASA scientists planning Mars Rover missions (seriously!). From the hobbyist to the professional, from casual users to committers, Eclipse appeals to all of us for different reasons, but we’re all part of the community, and we all have something important to contribute. See you online. 82 | Part IX: Help and Community APPENDIX Commands ➝ Preferences ➝ General ➝ Keys). TIP Press Ctrl+Shift+L (Help ➝ Key Assist...) to see a quick list of the currently defined keys. This appendix lists most of the commands available in Eclipse along with their key bindings and menu paths (if any). Commands are organized into categories such as Edit and File, just as you would see them listed in the Keys Preferences. 
Within each category, the commands are listed in alphabetical order. The format used is: Command [Default key bindings] Main menu path Some commands can be accessed by two or more equivalent key sequences. For example, the Copy command’s key bindings are listed as “Ctrl+C | Ctrl+Insert.” The vertical bar indicates that either Ctrl+C or Ctrl+Insert will work. 83 Edit Commands Other bindings are actually composed of two keys pressed in sequence. For example, the key binding for “Quick Assist Rename in file” is shown as “Ctrl+2, R.” The comma indicates you should press Ctrl+2, release, and then press the R key. TIP It sounds more complicated than it really is. If you press the first key of a multikey sequence and pause, a window will appear to remind you what to press next. In the interest of space, only key bindings for the default configuration on Windows are listed. Keys for other platforms are similar, and you should be able to infer these for yourself. An Emacs-like configuration is also selectable from the Keys Preferences. Someone has even written a plug-in that supports vi-style keystrokes (search for it on the plug-in sites listed in the “Community Web Sites” section in Part IX). Edit Commands Add Bookmark [No key binding] Edit ➝ Add Bookmark... Add Task [No key binding] Edit ➝ Add Task... Content Assist [Ctrl+Space] Edit ➝ Content Assist Context Information [Ctrl+Shift+Space] Edit ➝ Parameter Hints Copy [Ctrl+C | Ctrl+Insert] Edit ➝ Copy 84 | Commands Edit Commands Cut [Ctrl+X | Shift+Delete] Edit ➝ Cut Delete [Delete] Edit ➝ Delete Find and Replace [Ctrl+F] Edit ➝ Find/Replace... Find Next [Ctrl+K] Edit ➝ Find Next Find Previous [Ctrl+Shift+K] Edit ➝ Find Previous Incremental Find [Ctrl+J] Edit ➝ Incremental Find Next Incremental Find Reverse [Ctrl+Shift+J] Edit ➝ Incremental Find Previous Paste [Ctrl+V | Shift+Insert] Edit ➝ Paste Quick Diff Toggle [Ctrl+Shift+Q] (No menu) Quick Fix [Ctrl+1] Edit ➝ Quick Fix Redo [Ctrl+Y] Edit ➝ Redo Restore Last Selection [Alt+Shift+Down] Edit ➝ Expand Selection To ➝ Restore Last Selection Revert Line [No key binding] (No menu) Edit Commands | 85 Edit Commands Revert Lines [No key binding] (No menu) Revert to Saved [No key binding] File ➝ Revert Select All [Ctrl+A] Edit ➝ Select All Select Enclosing Element [Alt+Shift+Up] Edit ➝ Expand Selection To ➝ Enclosing Element Select Next Element [Alt+Shift+Right] Edit ➝ Expand Selection To ➝ Next Element Select Previous Element [Alt+Shift+Left] Edit ➝ Expand Selection To ➝ Previous Element Shift Left [No key binding] Source ➝ Shift Left Shift Right [No key binding] Source ➝ Shift Right Show Line Numbers [No key binding] (No menu) Show Tooltip Description [F2] Edit ➝ Show Tooltip Description Toggle Insert Mode [Ctrl+Shift+Insert] Edit ➝ Smart Insert Mode Undo [Ctrl+Z] Edit ➝ Undo Word Completion [Alt+/] Edit ➝ Word Completion 86 | Commands File Commands File Commands Close [Ctrl+F4 | Ctrl+W] File ➝ Close Close All [Ctrl+Shift+F4 | Ctrl+Shift+W] File ➝ Close All Convert Line Delimiters to Mac OS 9 [No key binding] File ➝ Convert Line Delimiters To ➝ Mac OS 9 Convert Line Delimiters to Unix [No key binding] File ➝ Convert Line Delimiters To ➝ Unix Convert Line Delimiters to Windows [No key binding] File ➝ Convert Line Delimiters To ➝ Windows Exit [No key binding] File ➝ Exit Export [No key binding] File ➝ Export... Import [No key binding] File ➝ Import... Move [No key binding] File ➝ Move... New [Ctrl+N] File ➝ New ➝ Other... New menu [Alt+Shift+N] File ➝ New Open File... 
[No key binding] File ➝ Open File... File Commands | 87 Help Commands Open Workspace [No key binding] File ➝ Switch Workspace... Print [Ctrl+P] File ➝ Print... Properties [Alt+Enter] File ➝ Properties Refresh [F5] File ➝ Refresh Remove Trailing Whitespace [No key binding] (No menu) Rename [F2] File ➝ Rename... Revert [No key binding] File ➝ Revert Save [Ctrl+S] File ➝ Save Save All [Ctrl+Shift+S] File ➝ Save All Save As [No key binding] File ➝ Save As... Help Commands About [No key binding] Help ➝ About Dynamic Help [F1] Help ➝ Dynamic Help 88 | Commands Navigate Commands Help Contents [No key binding] Help ➝ Help Contents Help Search [No key binding] Help ➝ Search Help... Tips and Tricks [No key binding] Help ➝ Tips and Tricks... Welcome [No key binding] Help ➝ Welcome... Navigate Commands Back [No key binding] Navigate ➝ Go To ➝ Back Backward History [Alt+Left] Navigate ➝ Back Forward [No key binding] Navigate ➝ Go To ➝ Forward Forward History [Alt+Right] Navigate ➝ Forward Go Into [No key binding] Navigate ➝ Go Into Go to Line [Ctrl+L] Navigate ➝ Go to Line... Go to Matching Bracket [Ctrl+Shift+P] Navigate ➝ Go To ➝ Matching Bracket Go to Next Member [Ctrl+Shift+Down] Navigate ➝ Go To ➝ Next Member Navigate Commands | 89 Navigate Commands Go to Package [No key binding] Navigate ➝ Go To ➝ Package... Go to Previous Member [Ctrl+Shift+Up] Navigate ➝ Go To ➝ Previous Member Go to Resource [No key binding] Navigate ➝ Go To ➝ Resource... Go to Type [No key binding] Navigate ➝ Go To ➝ Type... Last Edit Location [Ctrl+Q] Navigate ➝ Last Edit Location Next [Ctrl+.] Navigate ➝ Next Open Call Hierarchy [Ctrl+Alt+H] Navigate ➝ Open Call Hierarchy Open Declaration [F3] Navigate ➝ Open Declaration Open External Javadoc [Shift+F2] Navigate ➝ Open External Javadoc Open Resource [Ctrl+Shift+R] Navigate ➝ Open Resource... Open Structure [Ctrl+F3] (No menu) Open Super Implementation [No key binding] Navigate ➝ Open Super Implementation Open Type [Ctrl+Shift+T] Navigate ➝ Open Type... 90 | Commands Perspective Commands Open Type Hierarchy [F4] Navigate ➝ Open Type Hierarchy Open Type in Hierarchy [Ctrl+Shift+H] Navigate ➝ Open Type in Hierarchy... Previous [Ctrl+,] Navigate ➝ Previous Quick Hierarchy [Ctrl+T] Navigate ➝ Quick Type Hierarchy Quick Outline [Ctrl+O] Navigate ➝ Quick Outline Show in Menu [Alt+Shift+W] Navigate ➝ Show In Show in Package Explorer [No key binding] Navigate ➝ Show In ➝ Package Explorer Up [No key binding] Navigate ➝ Go To ➝ Up One [Level Perspective Commands CVS Repository Exploring [No key binding] Window ➝ Open Perspective ➝ Other... ➝ CVS Repository Exploring Debug [No key binding] Window ➝ Open Perspective ➝ Debug Java [No key binding] Window ➝ Open Perspective ➝ Java Perspective Commands | 91 Project Commands Java Browsing [No key binding] Window ➝ Open Perspective ➝ Java Browsing Java Type Hierarchy [No key binding] Window ➝ Open Perspective ➝ Other... ➝ Java Type Hierarchy Team Synchronizing [No key binding] Window ➝ Open Perspective ➝ Other... ➝ Team Synchronizing Project Commands Build All [Ctrl+B] Project ➝ Build All Build Clean [No key binding] Project ➝ Clean... Build Project [No key binding] Project ➝ Build Project Close Project [No key binding] Project ➝ Close Project Generate Javadoc [No key binding] Project ➝ Generate Javadoc... 
Open Project [No key binding] Project ➝ Open Project Properties [No key binding] Project ➝ Properties Rebuild All [No key binding] (No menu) 92 | Commands Refactor Commands Rebuild Project [No key binding] (No menu) Repeat Working Set Build [No key binding] (No menu) Refactor Commands Change Method Signature [Alt+Shift+C] Refactor ➝ Change Method Signature... Convert Anonymous Class to Nested [No key binding] Refactor ➝ Convert Anonymous Class to Nested... Convert Local Variable to Field [Alt+Shift+F] Refactor ➝ Convert Local Variable to Field... Encapsulate Field [No key binding] Refactor ➝ Encapsulate Field... Extract Constant [No key binding] Refactor ➝ Extract Constant... Extract Interface [No key binding] Refactor ➝ Extract Interface... Extract Local Variable [Alt+Shift+L] Refactor ➝ Extract Local Variable... Extract Method [Alt+Shift+M] Refactor ➝ Extract Method... Generalize Type [No key binding] Refactor ➝ Generalize Type... Infer Generic Type Arguments [No key binding] Refactor ➝ Infer Generic Type Arguments... Refactor Commands | 93 Run/Debug Commands Inline [Alt+Shift+I] Refactor ➝ Inline... Introduce Factory [No key binding] Refactor ➝ Introduce Factory... Introduce Parameter [No key binding] Refactor ➝ Introduce Parameter... Move - Refactoring [Alt+Shift+V] Refactor ➝ Move... Move Member Type to New File [No key binding] Refactor ➝ Move Member Type to New File... Pull Up [No key binding] Refactor ➝ Pull Up... Push Down [No key binding] Refactor ➝ Push Down... Rename - Refactoring [Alt+Shift+R] Refactor ➝ Rename... Show Refactor Quick Menu [Alt+Shift+T] (No menu) Use Supertype Where Possible [No key binding] Refactor ➝ Use Supertype Where Possible Run/Debug Commands Add Class Load Breakpoint [No key binding] Run ➝ Add Class Load Breakpoint... Add Java Exception Breakpoint [No key binding] Run ➝ Add Java Exception Breakpoint... 94 | Commands Run/Debug Commands Debug Ant Build [Alt+Shift+D, Q] Run ➝ Debug... Debug Eclipse Application [Alt+Shift+D, E] Run ➝ Debug... Debug Java Applet [Alt+Shift+D, A] Run ➝ Debug... Debug Java Application [Alt+Shift+D, J] Run ➝ Debug... Debug JUnit Plug-in Test [Alt+Shift+D, P] Run ➝ Debug... Debug JUnit Test [Alt+Shift+D, T] Run ➝ Debug... Debug Last Launched [F11] Run ➝ Debug Last Launched Debug SWT Application [Alt+Shift+D, S] Run ➝ Debug... Debug... [No key binding] Run ➝ Debug... Display [Ctrl+Shift+D] Run ➝ Display EOF [Ctrl+Z] (No menu) (Console view only) Execute [Ctrl+U] Run ➝ Execute Run/Debug Commands | 95 Run/Debug Commands External Tools... [No key binding] Run ➝ External Tools ➝ External Tools... Inspect [Ctrl+Shift+I] Run ➝ Inspect Profile Last Launched [No key binding] Run ➝ Profile Last Launched Profile... [No key binding] Run ➝ Profile... Remove All Breakpoints [No key binding] Run ➝ Remove All Breakpoints Resume [F8] Run ➝ Resume Run Ant Build [Alt+Shift+X, Q] Run ➝ Run... Run Eclipse Application [Alt+Shift+X, E] Run ➝ Run... Run Java Applet [Alt+Shift+X, A] Run ➝ Run... Run Java Application [Alt+Shift+X, J] Run ➝ Run... Run JUnit Plug-in Test [Alt+Shift+X, P] Run ➝ Run... Run JUnit Test [Alt+Shift+X, T] Run ➝ Run... Run Last Launched [Ctrl+F11] Run ➝ Run Last Launched 96 | Commands Run/Debug Commands Run Last Launched External Tool [No key binding] (No menu) Run SWT Application [Alt+Shift+X, S] Run ➝ Run... Run to Line [Ctrl+R] Run ➝ Run to Line Run... [No key binding] Run ➝ Run... 
Skip All Breakpoints [No key binding] Run ➝ Skip All Breakpoints Step Into [F5] Run ➝ Step Into Step Into Selection [Ctrl+F5] Run ➝ Step Into Selection Step Over [F6] Run ➝ Step Over Step Return [F7] Run ➝ Step Return Suspend [No key binding] Run ➝ Suspend Terminate [No key binding] Run ➝ Terminate Terminate and Relaunch [No key binding] (No menu) Toggle Line Breakpoint [Ctrl+Shift+B] Run ➝ Toggle Line Breakpoint Run/Debug Commands | 97 Search Commands Toggle Method Breakpoint [No key binding] Run ➝ Toggle Method Breakpoint Toggle Step Filters [Shift+F5] Run ➝ Use Step Filters Toggle Watchpoint [No key binding] Run ➝ Toggle Watchpoint Search Commands Declaration in Hierarchy [No key binding] Search ➝ Declarations ➝ Hierarchy Declaration in Project [No key binding] Search ➝ Declarations ➝ Project Declaration in Working Set [No key binding] Search ➝ Declarations ➝ Working Set... Declaration in Workspace [Ctrl+G] Search ➝ Declarations ➝ Workspace File Search [No key binding] Search ➝ File... Implementors in Project [No key binding] Search ➝ Implementors ➝ Project Implementors in Working Set [No key binding] Search ➝ Implementors ➝ Working Set... Implementors in Workspace [No key binding] Search ➝ Implementors ➝ Workspace Open Search Dialog [Ctrl+H] Search ➝ Search... 98 | Commands Search Commands Read Access in Hierarchy [No key binding] Search ➝ Read Access ➝ Hierarchy Read Access in Project [No key binding] Search ➝ Read Access ➝ Project Read Access in Working Set [No key binding] Search ➝ Read Access ➝ Working Set... Read Access in Workspace [No key binding] Search ➝ Read Access ➝ Workspace References in Hierarchy [No key binding] Search ➝ References ➝ Hierarchy References in Project [No key binding] Search ➝ References ➝ Project References in Working Set [No key binding] Search ➝ References ➝ Working Set... References in Workspace [Ctrl+Shift+G] Search ➝ References ➝ Workspace Referring Tests [No key binding] Search ➝ Referring Tests... Search All Occurrences in File [No key binding] Search ➝ Occurrences in File ➝ Identifier Search Exception Occurrences in File [No key binding] Search ➝ Occurrences in File ➝ Throwing Exception Search Implement Occurrences in File [No key binding] Search ➝ Occurrences in File ➝ Implementing Methods Show Occurrences in File Quick Menu [Ctrl+Shift+U] (No menu) Search Commands | 99 Source Commands Write Access in Hierarchy [No key binding] Search ➝ Write Access ➝ Hierarchy Write Access in Project [No key binding] Search ➝ Write Access ➝ Project Write Access in Working Set [No key binding] Search ➝ Write Access ➝ Working Set... Write Access in Workspace [No key binding] Search ➝ Write Access ➝ Workspace Source Commands Add Block Comment [Ctrl+Shift+/] Source ➝ Add Block Comment Add Constructors from Superclass [No key binding] Source ➝ Add Constructors from Superclass... Add Import [Ctrl+Shift+M] Source ➝ Add Import Add Javadoc Comment [Alt+Shift+J] Source ➝ Add Comment Comment [No key binding] (No menu) Externalize Strings [No key binding] Source ➝ Externalize Strings... Find Strings to Externalize [No key binding] Source ➝ Find Strings to Externalize... Format [Ctrl+Shift+F] Source ➝ Format 100 | Commands Source Commands Format Element [No key binding] Source ➝ Format Element Generate Constructor using Fields [No key binding] Source ➝ Generate Constructor using Fields... Generate Delegate Methods [No key binding] Source ➝ Generate Delegate Methods... Generate Getters and Setters [No key binding] Source ➝ Generate Getters and Setters... 
Indent Line [Ctrl+I] Source ➝ Correct Indentation Organize Imports [Ctrl+Shift+O] Source ➝ Organize Imports Override/Implement Methods [No key binding] Source ➝ Override/Implement Methods... Quick Assist - Assign parameter to field [No key binding] (No menu) Quick Assist - Assign to field [Ctrl+2, F] (No menu) Quick Assist - Assign to local variable [Ctrl+2, L] (No menu) Quick Assist - Rename in file [Ctrl+2, R] (No menu) Quick Assist - Replace statement with block [No key binding] (No menu) Quick Fix - Add cast [No key binding] (No menu) Source Commands | 101 Source Commands Quick Fix - Add import [No key binding] (No menu) Quick Fix - Add non-NLS tag [No key binding] (No menu) Quick Fix - Add throws declaration [No key binding] (No menu) Quick Fix - Change to static access [No key binding] (No menu) Quick Fix - Qualify field access [No key binding] (No menu) Remove Block Comment [Ctrl+Shift+\] Source ➝ Remove Block Comment Remove Occurrence Annotations [Alt+Shift+U] (No menu) Show Source Quick Menu [Alt+Shift+S] (No menu) Sort Members [No key binding] Source ➝ Sort Members Surround with try/catch Block [No key binding] Source ➝ Surround with try/catch Block Toggle Comment [Ctrl+/ | Ctrl+7 | Ctrl+Shift+C] Source ➝ Toggle Comment Toggle Mark Occurrences [Alt+Shift+O] (No menu) Uncomment [No key binding] (No menu) 102 | Commands Text-Editing Commands Text-Editing Commands Clear Mark [No key binding] (No menu) Collapse [Ctrl+Numpad_Subtract] (No menu) Copy Lines [Ctrl+Alt+Down] (No menu) Cut Line [No key binding] (No menu) Cut to Beginning of Line [No key binding] (No menu) Cut to End of Line [No key binding] (No menu) Delete Line [Ctrl+D] (No menu) Delete Next [Delete] (No menu) Delete Next Word [Ctrl+Delete] (No menu) Delete Previous [No key binding] (No menu) Delete Previous Word [Ctrl+Backspace] (No menu) Delete to Beginning of Line [No key binding] (No menu) Text-Editing Commands | 103 Text-Editing Commands Delete to End of Line [Ctrl+Shift+Delete] (No menu) Duplicate Lines [Ctrl+Alt+Up] (No menu) Expand [Ctrl+Numpad_Add] (No menu) Expand All [Ctrl+Numpad_Multiply] (No menu) Insert Line Above Current Line [Ctrl+Shift+Enter] (No menu) Insert Line Below Current Line [Shift+Enter] (No menu) Line Down [Down] (No menu) Line End [End] (No menu) Line Start [Home] (No menu) Line Up [Up] (No menu) Move Lines Down [Alt+Down] (No menu) Move Lines Up [Alt+Up] (No menu) Next Column [No key binding] (No menu) 104 | Commands Text-Editing Commands Next Word [Ctrl+Right] (No menu) Page Down [Page Down] (No menu) Page Up [Page Up] (No menu) Previous Column [No key binding] (No menu) Previous Word [Ctrl+Left] (No menu) Scroll Line Down [Ctrl+Down] (No menu) Scroll Line Up [Ctrl+Up] (No menu) Select Line Down [Shift+Down] (No menu) Select Line End [Shift+End] (No menu) Select Line Start [Shift+Home] (No menu) Select Line Up [Shift+Up] (No menu) Select Next Column [No key binding] (No menu) Select Next Word [Ctrl+Shift+Right] (No menu) Text-Editing Commands | 105 Text-Editing Commands Select Page Down [Shift+Page Down] (No menu) Select Page Up [Shift+Page Up] (No menu) Select Previous Column [No key binding] (No menu) Select Previous Word [Ctrl+Shift+Left] (No menu) Select Text End [Ctrl+Shift+End] (No menu) Select Text Start [Ctrl+Shift+Home] (No menu) Select Window End [No key binding] (No menu) Select Window Start [No key binding] (No menu) Set Mark [No key binding] (No menu) Swap Mark [No key binding] (No menu) Text End [Ctrl+End] (No menu) Text Start [Ctrl+Home] (No menu) To Lower Case [Ctrl+Shift+Y] 
(No menu) 106 | Commands View Commands To Upper Case [Ctrl+Shift+X] (No menu) Toggle Folding [Ctrl+Numpad_Divide] (No menu) Toggle Overwrite [Insert] (No menu) Window End [No key binding] (No menu) Window Start [No key binding] (No menu) View Commands Ant [No key binding] Window ➝ Show View ➝ Ant Breakpoints [Alt+Shift+Q, B] Window ➝ Show View ➝ Breakpoints Cheat Sheets [Alt+Shift+Q, H] Window ➝ Show View ➝ Other... ➝ Cheat Sheets ➝ Cheat Sheets Classic Search [No key binding] Window ➝ Show View ➝ Other... ➝ Basic ➝ Classic Search Console [Alt+Shift+Q, C] Window ➝ Show View ➝ Console CVS Annotate [No key binding] Window ➝ Show View ➝ Other... ➝ CVS ➝ CVS Annotate View Commands | 107 View Commands CVS Editors [No key binding] Window ➝ Show View ➝ Other... ➝ CVS ➝ CVS Editors CVS Repositories [No key binding] Window ➝ Show View ➝ Other... ➝ CVS ➝ CVS Repositories CVS Resource History [No key binding] Window ➝ Show View ➝ Other... ➝ CVS ➝ CVS Resource History Debug [No key binding] Window ➝ Show View ➝ Debug Display [No key binding] Window ➝ Show View ➝ Display Error Log [No key binding] Window ➝ Show View ➝ Error Log Expressions [No key binding] Window ➝ Show View ➝ Expressions Java Call Hierarchy [No key binding] Window ➝ Show View ➝ Other... ➝ Java... ➝ Call Hierarchy Java Declaration [Alt+Shift+Q, D] Window ➝ Show View ➝ Declaration Java Members [No key binding] Window ➝ Show View ➝ Other... ➝ Java Browsing ➝ Members Java Package Explorer [Alt+Shift+Q, P] Window ➝ Show View ➝ Package Explorer Java Packages [No key binding] Window ➝ Show View ➝ Other... ➝ Java Browsing ➝ Packages 108 | Commands View Commands Java Projects [No key binding] Window ➝ Show View ➝ Other... ➝ Java Browsing ➝ Projects Java Type Hierarchy [Alt+Shift+Q, T] Window ➝ Show View ➝ Hierarchy Java Types [No key binding] Window ➝ Show View ➝ Other... ➝ Java Browsing ➝ Types Javadoc [Alt+Shift+Q, J] Window ➝ Show View ➝ Javadoc JUnit [No key binding] Window ➝ Show View ➝ Other... ➝ Java ➝ JUnit Memory [No key binding] Window ➝ Show View ➝ Other... ➝ Debug ➝ Memory Outline [Alt+Shift+Q, O] Window ➝ Show View ➝ Outline Plug-in Dependencies [No key binding] Window ➝ Show View ➝ Other... ➝ PDE ➝ Plug-in Dependencies Plug-in Registry [No key binding] Window ➝ Show View ➝ Other... ➝ PDE Runtime ➝ Registry Plug-ins [No key binding] Window ➝ Show View ➝ Other... ➝ PDE ➝ Plug-ins Problems [Alt+Shift+Q, X] Window ➝ Show View ➝ Problems Registers [No key binding] Window ➝ Show View ➝ Other... ➝ Debug ➝ Registers View Commands | 109 Window Commands Search [Alt+Shift+Q, S] Window ➝ Show View ➝ Search Synchronize [Alt+Shift+Q, Y] Window ➝ Show View ➝ Other... ➝ Team ➝ Synchronize Variables [Alt+Shift+Q, V] Window ➝ Show View ➝ Variables Window Commands Activate Editor [F12] Window ➝ Navigation ➝ Activate Editor Close All Perspectives [No key binding] Window ➝ Close All Perspectives Close Perspective [No key binding] Window ➝ Close Perspective Customize Perspective [No key binding] Window ➝ Customize Perspective... 
Hide Editors [No key binding] (No menu) Lock the Toolbars [No key binding] (No menu) Maximize Active View or Editor [Ctrl+M] Window ➝ Navigation ➝ Maximize Active View or Editor Minimize Active View or Editor [No key binding] Window ➝ Navigation ➝ Minimize Active View or Editor 110 | Commands Window Commands New Editor [No key binding] Window ➝ New Editor New Window [No key binding] Window ➝ New Window Next Editor [Ctrl+F6] Window ➝ Navigation ➝ Next Editor Next Perspective [Ctrl+F8] Window ➝ Navigation ➝ Next Perspective Next View [Ctrl+F7] Window ➝ Navigation ➝ Next View Open Editor Drop Down [Ctrl+E] Window ➝ Navigation ➝ Switch to Editor... Pin Editor [No key binding] (Available on editor system menu) Preferences [No key binding] Window ➝ Preferences... Previous Editor [Ctrl+Shift+F6] Window ➝ Navigation ➝ Previous Editor Previous Perspective [Ctrl+Shift+F8] Window ➝ Navigation ➝ Previous Perspective Previous View [Ctrl+Shift+F7] Window ➝ Navigation ➝ Previous View Reset Perspective [No key binding] Window ➝ Reset Perspective Window Commands | 111 Window Commands Save Perspective As [No key binding] Window ➝ Save Perspective As... Show Key Assist [Ctrl+Shift+L] Help ➝ Key Assist... Show Ruler Context Menu [Ctrl+F10] (No menu) Show Selected Element Only [No key binding] (No menu) Show System Menu [Alt+–] Window ➝ Navigation ➝ Show System Menu Show View Menu [Ctrl+F10] Window ➝ Navigation ➝ Show View Menu Switch to Editor [Ctrl+Shift+E] Window ➝ Navigation ➝ Switch to Editor... 112 | Commands Chapter 1 Index A AJDT (AspectJ Development Toolkit), 71 AlphaWorks web site (IBM), 78 Ant, using with Eclipse, 69 Apache web site, 78 AspectJ Development Toolkit (AJDT), 71 assertion methods, 33 automatic typing, 40 B base icons, 63 Beck, Kent, 32 breakpoints Expressions view and, 56–58 setting, 25–27 Breakpoints view, 27, 50 bugs, reporting, 79–81 Bugzilla tracking system, 79 Build Path, Java, 47 C C/C++ Development Toolkit (CDT), 71 chevron menus, 9 classes, creating, 21 CLASSPATH environment variable, 47 clean installs, 5 code changing on the fly, 30 entering, 21–23 code assist feature, 34, 38 commands, 83–112 compiling code and running programs, 23 Concern Manipulation Environment (CME) project, 71 conditional breakpoints, 51 Confirm Perspective Switch dialog, 19 Console view, 8, 52 Content Assist command, 38 context menus, 11 coolbars in Eclipse, 12 cursor shapes, 14 CVS repositories, 69 CVS Repository Exploring perspective, 13 We’d like to hear your suggestions for improving our indexes. 
Send email to [email protected] 113 CVS Resource History view, 69 CVS source management system, 68 D deadlocks and Show Monitors option, 53 Debug perspective, 13, 26 Breakpoints view in, 50 Debug view in, 53 Display view in, 54 Expressions view in, 56–58 Outline view in, 62 Variables view in, 67 Debug view, 27, 53 debugging Eclipse, 25–31 Declaration view, 8, 54 decorator icons, 63 Detail Formatters, defining new, 58 Details pane, looking at expressions using, 57 developerWorks web site (IBM), 78 Display command, 55 Display view, 54 documentation, showing, with Javadoc view, 60 downloading Eclipse, 3 dynamic help, 75 E Eclipse commands, 83–112 debugging, 25–31 downloading, 3 installing, 3 official web site, 76 system requirements for, 2 upgrading, 5 Eclipse wiki web site, 78 114 | EclipseZone web site, 78 eclipse.commercial newsgroup, 82 eclipse.foundation newsgroup, 82 eclipse.newcomer newsgroup, 81 eclipse.platform newsgroup, 81 Edit commands, 84–86 editors in Eclipse, 9 maximizing/minimizing, 16 rearranging, 14–16 enhancements, suggesting, 79–81 error indicators in code, 43 Error Log view, 55 reporting bugs and, 81 errors Problems view and, 64 viewing internal errors, using Error Log view, 55 exception breakpoints, 50 setting, 27 Execute command, 55 expressions breakpoints and, 51 evaluating in debugger, 55 Expressions view, 56–58 F Failure trace pane, 60 fast views, 14 File commands, 87 Find and Replace command, 44 flat mode, showing search results in, 66 Format command, 23 G Gamma, Erich, 32 Index V413HAV H Help commands, 88 help resources for Eclipse, 75–82 help topics in Eclipse SDK, 76 Help window, 75 hierarchical mode, showing search results in, 66 Hierarchy view, 8, 58 hot code replace feature, 30 Hover Help, 42 Javadoc view and, 60 hyperlinks, 43 Java Type Hierarchy perspective, 13 Javadoc view, 8, 60 JDT Plug-in Developer Guide, 76 JFace framework, used with SWT, 74 JUnit, 32–37 JUnit view, 34, 60 K key bindings for commands, 83 L I IBM AlphaWorks web site, 78 IBM developerWorks web site, 78 icon variations, 63 Incremental Find command, 44 Inspect command, 30, 55 installing Eclipse, 3 launch configurations, 48 launching Eclipse, 4 Literal mode, looking at expressions using, 57 Logical mode, looking at expressions using, 57 Logical Structures, defining new, 58 J M JAR files Hover Help and, 43 Java Build Path and, 47 Java Browsing perspective, 13 Java Build Path, 47 Java Development User Guide, 76 Java perspective, 13 Declaration view in, 54 Hierarchy view in, 58 Javadoc view in, 60 JUnit view in, 60 Outline view in, 62 Package Explorer view in, 62–64 Problems view in, 64 Search view in, 65 Tasks view in, 66 mailing lists for Eclipse development issues, 82 main menu, 10 main toolbar, 12 markers in source code, listing in Tasks view, 66 maximizing views/editors, 16 Member list pane, 59 menus in Eclipse, 10 Milestone builds, 5 minimizing views/editors, 16 monitors and deadlocks, 53 N Navigate commands, 89–91 Navigator view, 61 newsgroups about Eclipse, 81 Index | 115 O O’Reilly Open Source web site, 79 online help system, 31, 75 Outline view, 8, 62 P Package Explorer view, 8, 62–64 creating packages, 20 Java Build Path and, 48 PDE (Plug-in Development Environment), 72 PDE Guide, 76 Perspective commands, 91 perspectives in Eclipse, 13 Planet Eclipse web site, 78 Platform Plug-in Developer Guide, 76 Plug-in Development Environment (PDE), 72 Plug-in Development perspective, 13 Plug-ins Registry web site, 78 Problems view, 8, 64 Project commands, 92 projects, creating, 18 properties of 
breakpoints, 51 Q Quick Fix command, 36, 44 R RCP (Rich Client Platform), 73 rearranging views/editors, 14–16 red squiggles (error indicators) in code, 43 Refactor - Java commands, 93 refactoring, 41 116 | reformatting code, 23 regression testing framework (JUnit), 32–37 Resource perspective, 13 Navigator view in, 61 Rich Client Platform (RCP), 73 round-tripping, supported by Visual Editor project, 71 Run Last Launched command, 24 Run/Debug commands, 26, 94–98 running programs, 23 changing code on the fly, 30 S scrapbook pages, 46 Display view and, 55 Search command, 45 Search commands, 98–100 Search view, 65 setUp(), 34 Show Monitors option in Debug view menu, 53 Source commands, 100–102 Source Forge web site, 79 stable builds, 5 stack tracebacks and Console views, 52 Standard Widget Toolkit (SWT), 73 Step Into command, 28 Step Over command, 28 Step Return command, 28 subtypes/supertypes of objects, showing, 58 syntax errors, quick fixes for, 43 system menus, 10 system requirements for Eclipse, 2 Index T U Tasks view, 66 TDD (test driven development), 36 Team Synchronizing perspective, 13, 69 tearDown(), 34 tear-off views, 14 templates, entering text using, 39 Test and Performance Tools Platform (TPTP) project, 70 test cases creating, 33 running, 34–36 test driven development (TDD), 36 Text editing commands, 103–107 text, entering, using templates, 39 Tips and Tricks command, 49 TODO comments, 21 toolbars in Eclipse, 12 tool tip windows, 8 TPTP (Test and Performance Tools Platform) project, 70 Type Hierarchy tree pane, 59 unit assertion methods, 33 unit testing with JUnit, 32–37 unpacking Eclipse, 3 upgrading Eclipse, 5 user forums about Eclipse, 81 V values of variables, viewing, 28 Variables view, 28–30, 67 View commands, 107–110 view menus, 10 views in Eclipse, 8, 50–67 maximizing/minimizing, 16 rearranging, 14–16 Visual Editor project, 71 W warnings and Problems view, 64 Watch command, 55 web sites community resources, 78 official Eclipse site, 76 Web Tools Platform (WTP) project, 70 Welcome screen, 4 Window commands, 110–112 Workbench User Guide, 76 workbench, overview of, 7–17 workspace, specifying, 4 WTP (Web Tools Platform) project, 70 Index | 117
* Your assessment is very important for improving the work of artificial intelligence, which forms the content of this project | https://manualzz.com/doc/6919787/-user-manual- | CC-MAIN-2021-04 | en | refinedweb |
Classification by Voting Feature Intervals in Python
Project description
VFI
VFI - Voting Feature Intervals is a supervised classification model similar to Naive Bayes. Constructs intervals around each class for each feature. Class counts are recorded for each interval on each feature and the classification is performed using a voting scheme.
Based on the paper: G. Demiroz, A. Guvenir: Classification by voting feature intervals. In: 9th European Conference on Machine Learning, 85-92, 1997.01.
Documentation is available on ReadTheDocs at
How to use VFI
The vfi package inherits from sklearn classes, and thus drops in neatly next to other sklearn classifiers with an identical calling API. Similarly it supports input in a variety of formats: an array (or pandas dataframe) of shape (num_samples x num_features).
import vfi from sklearn.datasets import load_iris data, target = load_iris(return_X_y=True) model = vfi.VFI() model.fit(data, target)
Installing
PyPI install, presuming you have an up to date pip:
pip install vfi
If pip is having difficulties pulling the dependencies then we’d suggest to first upgrade pip to at least version 10 and try again:
pip install --upgrade pip pip install vfi
Otherwise install the dependencies manually using anaconda followed by pulling vfi from pip:
conda install numpy scipy conda install scikit-learn pip install vfi
For a manual install of the latest code directly from GitHub:
pip install --upgrade git+
Alternatively download the package, install requirements, and manually run the installer:
wget unzip master.zip rm master.zip cd vfi-master pip install -r requirements.txt python setup.py install
Running the Tests
The package tests can be run after installation using the command:
pytest vfi --cov
Python Version
The vfi package supports only Python 3..
Licensing
The vfi package is MIT licensed. Enjoy.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/vfi/ | CC-MAIN-2021-04 | en | refinedweb |
Flickr API for Windows Phone 7 – Part 4 – Activity – User Photos
Join the DZone community and get the full member experience.Join For Free
In this article I am going to show how to use the flickr.activity.userPhotos method. Now you might ask – there are a couple of tens of Flickr API methods out there – will I cover each one in a separate article? The answer is no – I will focus on the main methods. Once you get the idea behind the kind of functionality the API offers, you will be able to code the rest of your way through it.
So what’s up with flickr.activity.userActivity? Basically, this is the method that will pull up all the recent activity on your photos. For example, if someone commented on one of your photos, it will let you know that there was a comment posted, will show you the user who posted it, the date and the comment contents. In case there is no activity (therefore, no comments or other kinds of activity), it will return a JSON-formatted document (remember, I am using JSON in this series?) that contains no data that would interest you.
So what I did first here is I added an Activity class to the Core folder – that way, I can concentrate the activity category of the Flickr API in one place.
Now, I decided that the delegate I was using before might be handy in other classes as well, so all I did is take its declaration and put it inside the FlickrWP7.Core namespace:
The Activity class itself is quite simple and resembles the Authentication class if you look at GetUserPhotos and the call structure:
public class Activity { private HelperDelegate helperDelegateInstance; public string LastActivityResult { get; set; } public void GetUserPhotos(string apiKey, string signature, string authToken, HelperDelegate helperDelegate, string page = "", string timeframe = "", string resultsPerPage = "" ) { (page != "") URL += "&page=" + page; if (timeframe != "") URL += "&timeframe=" + timeframe; if (resultsPerPage != "") URL += "&per_page=" + resultsPerPage; client.DownloadStringAsync(new Uri(URL)); } void client_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e) { LastActivityResult = e.Result; helperDelegateInstance(); } }
I have the LastActivityResult property - it is the place where the raw JSON data will be stored for the last call from the activity set. You might ask - why is this class not static? It is a tricky situation here – there is also flickr.activity.userComments that is a member of the activity category – therefore, I might want to preserve one’s state and get another activity – so I will declare two separate instances for each one instead of having two separate properties, where one of them will never be used. And since I am pulling the same raw JSON data, it is alright for now to store it in a single property. Eventually, if you decide, for example, to deserialize the data, you might want to introduce separate properties for each.
Now here you might have another question – why pull raw data instead of having a property that will be a custom class representing the item hierarchy? The answer is simple – the item hierarchy might change. Therefore, if I hardcode it here, an exception will be thrown each time the specification changes and the application is not able to retrieve the needed data. Raw JSON can be parsed directly in the application, outside the action method and if Flickr decides to modify the returned result, I will only have to modify the parsing in the app and not the core engine.
You can see that GetUserPhotos accepts three optional parameters – page, timeframe and resultsPerPage. These are documented in the call specification and the user might or might not need them. But in case he decides to specify them, the URL will be built accordingly.
To experiment with this class, I created a test page in my Windows Phone 7 application. Its XAML markup looks like this:
<phone:PhoneApplicationPage x: <Button x: <Button x: <TextBlock TextWrapping="Wrap" Height="666" HorizontalAlignment="Left" Margin="12,12,0,0" Name="txtLog" VerticalAlignment="Top" Width="456" /> </Grid> </phone:PhoneApplicationPage>
It is a simple testing “console”:
I am testing the functionality of my methods in the following manner:
private void btnFullToken_Click(object sender, RoutedEventArgs e) { param.Add("api_key", apiKey); param.Add("method", "flickr.auth.getFullToken"); param.Add("mini_token", miniToken); param.Add("format", "json"); Core.Authentication.GetSignature(param, “SECRET", () => { txtLog.Text += "Getting full token..."; Core.Authentication.GetFullToken( miniToken, Core.Authentication.Signature, apiKey, () => { txtLog.Text += "\nFull token generated\n" + Core.Authentication.FullToken; }); }); } private void btnUserPhotos_Click(object sender, RoutedEventArgs e) { param["method"] = "flickr.activity.userPhotos"; param.Remove("mini_token"); param.Add("auth_token", Core.Authentication.FullToken); param.Add("timeframe", "100d"); Core.Authentication.GetSignature(param, "SECRET", () => { txtLog.Text += "\nGetting list of photos..."; Core.Activity activity = new Core.Activity(); activity.GetUserPhotos(apiKey, Core.Authentication.Signature,Core.Authentication.FullToken, () => { txtLog.Text += "\n" + activity.LastActivityResult; },"","100d"); }); }
What I am doing here is simply getting the signatures for each method (according to the parameters specified) and then executing those methods (some of them are introduced via a delegate - triggered only when the actual method signature is ready).
NOTE: The signature is tied to the parameters used. Therefore, if your signature is generated with a specific parameter present and you miss it in the method call, the call will fail. Same applies if you use a parameter in a call but didn’t use it in the signature. In most cases, when you get an error code 96 – Invalid signature, the problem is somewhere with the parameters used.
apiKey and miniToken are fields that are publicly accessible in the class and should represent your unique authentication identifiers.
As you’ve probably noticed, the GetUserPhotos method should be executed after GetFullToken, since the full authentication token is needed to get the recent activity.
If you want to experiment directly with my existing solution, download the updated version (if you follow the series) here. Don't forget that you need the helper hashing service for it to work.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/flickr-api-and-windows-phone-7 | CC-MAIN-2021-04 | en | refinedweb |
The Inevitability of Russiagate
Jim Comey’s dramatic testimony last week significantly ratcheted up the intensity of the greasefire engulfing Donald J. Trump, whom it still pains me to describe as the President of the United States. Yesterday’s tap dance recital by Confederate General Jefferson Beauregard Sessions III, and the astonishing rumors that Trump is contemplating firing special counsel Robert Mueller have only added fuel to those noxious flames.
For those who dislike Trump but have been skeptical of any skullduggery with Russia, the shift to obstruction of justice as the likely grounds on which Trump will find the locks changed on the Lincoln Bedroom is very welcome. “The coverup is always worse than the crime” as the cliché goes, although in this case the potential crime — conspiring with a foreign power to throw a presidential election — is actually a fuckload worse than any coverup. Regardless, Trump is tailor-made to create more problems for himself with his predilection for Mob-like tactics to intimidate investigators and squash an honest inquiry. Even if there ultimately proves to be no there there on Russia (and that’s a big “if”), Trump is creating reasons to justify his removal with an almost kamikaze-like determination.
So for that very reason we have to ask: WHY IS HE DOING THAT? Why take such extreme measures to block an investigation at every turn — and at such risk to his presidency — if the allegations regarding Russia are false? It certainly does not convince anyone that he has nothing to hide, not even those predisposed to give him the benefit of the doubt (a group largely limited to Klan rallies and sexual predator chat rooms).
Some on the left — notably Glenn Greenwald — have scorned the progressive fixation on possible Trump collusion with Russia as wishful thinking, a left wing indulgence in tinfoil-hat conspiracy theory more characteristic of the right wing lunatic fringe, and a waste of valuable energy better spent fighting the loathsome Trump agenda. In its most critical version, Russiagate is a liberal analogue to birtherism, a handhold for an enraged opposition party desperate for a reason to declare a hated presidency illegitimate.
(The analogy is imperfect at best, of course. Birtherism was a racist fantasy without the slightest basis in reality. Russiagate is at least plausible — highly plausible, in fact — even if it is eventually disproven. We shall see. But the right’s unconvincing attempt to depict it as a “fairytale” smacks of a carefully coordinated media strategy, to include a directive to use that term, judging by the suspicious frequency with which it pops out of the mouths of Trump apologists.)
But I do understand the criticism. It’s almost too much to hope that this horrific administration did something so criminal, so self-destructive, so blatantly treasonous that it would bring about its own downfall. But the other equally believable way of looking it this phenomenon is that the two threads are inherently connected.?
So in that sense Russiagate is not an aberration or the fulfillment of liberal magical thinking at all, but the logical conclusion of a leader and an administration this abominable. Admittedly, the scope and scale of the crimes of which Team Trump is accused are so outlandish that they would embarrass the worst airport spy novelist. But there you have it.
RUSSIAN ROULETTE WITH SIX CHAMBERS FILLED
So let’s stop for a moment to take a quick survey of what we know about Russiagate thus far. Obviously, our information is very very incomplete. I remain confident that the truth will come out as result of Bob Mueller’s inquiry — unless Trump fires him — along with the efforts of the Senate Intelligence Committee, and to a lesser extent its counterpart in the House (compromised by its chairman, the oleaginous Trump toady Devin Nunes), and we may yet see an independent commission as well. What Congress does about the conclusions those entities come to is another matter. But even the incomplete, raw facts we already know are rather damning when viewed by anyone with a shred of objectivity.
The Russians interfered with the 2016 presidential election with the express purpose of helping Donald Trump win. That is not in dispute by any serious observer. Trump himself actively encouraged Russia to hack into the computers of his Democratic rival, which it did. Unwittingly or not, Trump also personally helped spread disinformation — “fake news” — that had been generated by Russia to hurt Hillary Clinton. And both during the campaign and in the transition period, Trump associates had improper contacts with Russian officials, including intelligence officers. All seventeen US intelligence agencies concurred on the issue of Russian interference, which was corroborated by independent reporting by the most respected journalistic organizations in the country, as well as allied intelligence agencies who were the first to warn the US government of what was going on. Only Trump’s most fanatic followers believe otherwise, and of course Trump himself, who evidently is so insecure about the legitimacy of his presidency that he lives in dread fear of anything that suggests he did not win with a North Korean-like 100% of the vote.
None of that looks good for Trump. And that stuff doesn’t even rise to the level of active collusion, which would be an actual act of treason. So at a bare minimum one might be justifiably outraged at Trump’s relationship with Russia even without believing he or his people are outright traitors.
But do we think Trump and his people actually even further? Again, let’s look at the record. Cui bono, as they say. Who benefits?
The Trump administration’s eagerness to do favors for Russia while getting nothing in return (that we know of) is eyebrow-raising to say the least. Among the gifts: lifting sanctions imposed by the Obama administration, prevailing on the GOP to change its platform on Ukraine and Crimea, and returning to the Kremlin a pair of mansions in Long Island — openly known to be spy facilities — that Obama took away in retaliation for Russian misbehavior. The capper — thus far — has been Trump’s jawdropping decision to hand over to Moscow top secret compartmentalized information passed to the US by Israel, without Tel Aviv’s consent or foreknowledge, not to mention that of anyone in the US intelligence community. That unfathomable action may well have been a function of Trump’s well-known eagerness to brag and impress, rather than of any duties as a Russian stooge. But it speaks to his level of comfort with the Kremlin and his ignorance both of diplomacy and the basics of handling classified material, to say nothing of general idiocy and unfitness for office.
Trump’s behavior during the recent NATO summit, in which he excoriated our oldest and staunchest allies while refusing to reaffirm Article 5 mandating collective defense was a wet dream for Putin. As many noted, Trump may or may not be a Russian asset, but in Brussels he behaved exactly as the Kremlin would have wanted a Russian asset to behave. In shaking confidence in a mutual defense pact that has kept Europe secure for more than seventy years, Trump’s performance could not have better served Russian interests if the Kremlin itself had scripted it. Hmmmm.
Of course, an affinity for Russia is pervasive in Trumpworld. Trump’s former campaign manager Paul Manafort was a paid flack for Russian political interests, which was why he was forced to resign. Steve Bannon and the so-called “alt-right” (let’s just call them what they are: neo-Nazi white supremacists) are deeply enamored of Russia for their own twisted quasi-eugenic reasons. And Trump himself famously has never had a bad word to say about Vladimir Putin: this from a man who has picked fights with the Pope, a Gold Star familiy, beauty queens, Meryl Streep, the cast of Hamilton, and the prime minister of Australia, just to name a few. Yes, it could be that Trump merely admires a preening bully like Putin, which would be of a piece with Trump’s own self-image and man-crushes on various other so-called strongmen, from Duterte to Kim Jong-un to the Saudi royal family. But the weirdness, consistency, and intensity of his Russophilia is highly suspect. It’s hard to believe that there aren’t more concrete motives in play.
WHAT’S MY MOTIVATION?
So what can we conclude from all this? Again, lawyers, investigators, and Congressmen will deliver the evidence, but as private citizens we are within our rights to speculate.
The most extreme and baroque scenario, of course, is that Trump is being blackmailed by the Kremlin and as a result is their clandestine agent. (Not very clandestine, actually, but that’s the idea.) The possibility that the Kremlin has compromising salacious information on Trump as alleged in the Steele Dossier (one of my favorite Ludlum novels) seems farfetched, although Trump’s adolescent fixation on his sexual escapades does not help his argument. Apparently in his many meetings and conversations with Comey, Trump was far more agitated about the alleged “golden shower” tape than anything else.
What is not farfetched at all is the possibility that Trump’s business interests are heavily entangled with the octopus of Russian organized crime, government, and security services (which for all practical purposes are merely separate tentacles of the same rapacious beast), incentivizing him to act favorably toward Moscow without being an actual controlled “asset” in the strict sense of the word. Of course, since Trump won’t release his taxes — and the Republican Party and rank-and-file are acting like that’s acceptable — we don’t know. Perhaps the emoluments suit recently filed by the Attorneys General of Maryland and the District of Columbia will force his taxes to light.
Trump has claimed he has no business ties to Russia, which we know to be patently false. His own sons have bragged about all the money the Trump family businesses get from Russia. Again, tax returns would be helpful in sorting out truth from Pinocchio-isms, which is precisely why Trump won’t release them.
Rachel Maddow has extensively documented Trump’s involvement in real estate sales tied to his massive debt to Deutsche Bank, which extends to laundering illicit Russian money through a sketchy Caymans Island bank — is there another kind? — run by associates of Putin (which is to say, by Putin). One of the chief officers of that bank — and this is almost beyond belief — is the man who is now the United States Secretary of Treasury under Trump, Wilbur Ross. In normal times that would be a front page international scandal, but in the current climate it’s just Tuesday.
So short of water sports with Russian hookers and/or a Manchurian candidate brainwashing, the most plausible scenario seems to be that Trump simply does not want to piss off people who have great financial leverage over him, or through whom he makes a lot of money , or both. Not very titillating, but very very believable. And that is the most charitable interpretation that the facts allow. For Trump, it only gets worse from there.
WHAT ARE THEY TRYING TO HIDE?
Perhaps the most damning and suspicious point of all is this simple question: If all of the Trump team’s contacts with the Russians were innocent, why do the White House and members of Trump’s inner circle keep lying about those contacts? That dog quite plainly does not hunt. Which brings us back to the original question. Why so desperately try to dodge and undermine the Russiagate investigation unless there is something incriminating to hide?
Jeff Sessions lied under oath, claiming he had never had any contacts with the Russians as a Trump surrogate, then was exposed as having had at least two clandestine meetings with Russian ambassador Sergei Kislyak, the Kremlin’s top spy in the US. Mike Flynn and Jared Kushner similarly failed to disclose such contacts with Russian officials. Flynn also failed to mention that he was a paid agent of a foreign power — Turkey — and had even intervened on Ankara’s behalf to halt long-planned US military operations against ISIS that the Turks opposed. (This from a retired three-star general and career intelligence officer who during the campaign self-righteously railed over Hillary Clinton’s possible carelessness with classified material, memorably leading bloodthirsty chants of “Lock her up.”) Kushner floated a proposal to the Russians so startling that even they were caught off guard: that the Trump team use Russia’s own secure secret communications network for a backchannel to the Kremlin to prevent US intelligence from listening in. Kislyak, Lavrov, & Co. didn’t realize that they would soon be getting top secret compartmentalized information handed to them on a silver platter from the President himself during a face to face meeting in the Oval Office.
It is hard to believe that Sessions, Flynn, and especially a callow neophyte like Kushner undertook those actions on their own initiative and without Trump’s knowledge. It’s far more likely that they did so at his direction. Obviously, that is an explosive conclusion and one that Mueller and the other prosecutors will have to prove, if they can. But purely as a matter of common sense, it is difficult to believe that Trump was not involved. Why has Trump been so desperate to stop the investigation into Michael Flynn’s actions, to the point of sacking the director of the FBI over it? Is it just because he is so loyal to Flynn, a man he also summarily fired? Uh, maybe. But far more likely is the simplest and most obvious explanation of all: Because he ordered Flynn to take those actions.
“I’M AS SORRY AS YOU ARE, DMITRI”
Needless to say, there is some irony in Americans expressing shock and outrage at Russian meddling in our election, given the long history of American meddling in foreign elections (and by “meddling” I’m including covert CIA attempts to overthrow foreign governments by force). Governments try to influence foreign elections all the time, sometimes in benign ways and sometimes more maliciously. We don’t have to like it or tolerate it, but it’s naïve to be shocked by it.
What is genuinely outrageous, however, is the idea that American citizens would collaborate with such efforts, or condone others doing so, which is what the overwhelming majority of Republicans are brazenly doing. Polls show that tribalism in America is so extreme at the moment — at least on the right — that few Republican voters say they would be bothered even if hard evidence emerged that Trump did in fact conspire with the Kremlin.
Let’s stop and take that in a moment. Wow.
The reasons given are usually on the order of “Ah, all politicians do that sort of thing,” or “Hillary’s done/would do worse,” or “Whatever it took to keep Hillary out of office, I’m fine with it.” Such thinking does not deserve to be dignified with a response, but you can imagine for yourself what those same voters would likely have said if the roles were reversed and Barack Obama or Hillary Clinton were suspected of conspiring with Vladimir Putin to throw the election. Hell, the Tea Party wanted to lynch Barack just for putting his feet up on his desk. (OK, to be fair, they wanted to lynch him because he’s black. But they got pretty upset about the desk thing.)
In his testimony, Jim Comey made plain that Russia executed a shocking, extensive, and well-planned act of war against the United States and other Western democracies and will continue to do so. To much less public fanfare, former Director of National Intelligence James Clapper recently testified that the possibility of Trump/Russia collusion dwarfs Watergate, making it arguably the worst scandal in American history. The Russian effort represents a far more serious threat to American sovereignty and democracy than ISIS. But we have been conditioned to freak out over “terrorism,” especially when carried out by brown people of a different religion, to the point where it even beats out decades of ingrained Russophobia. (A Russophobia that, historically, was led by the Republican Party.)
Trump himself has shown zero interest in investigating Russian interference in the election — not even lip service. On the contrary, in fact: Trump bragged of shutting down the investigation, both to NBC’s Lester Holt on national television, and more shockingly, to the Russian ambassador and foreign minister face to face in the Oval Office. (Come on, guy, at least try to act innocent.) After Comey’s testimony, MSNBC anchor and former Bush White House communications chief Nicole Wallace sagely pointed out that Donald Trump spoke with Jim Comey in person or by phone NINE times in the four months. Obama spoke with Comey only twice in three YEARS. Not ONCE in any of those nine conversations did the President of the United States Donald Trump appear concerned about such Russian action, or even inquire about the progress of the investigation into it. Does that sound like the behavior of a man who really wants to get to the bottom of any such interference….or for that matter, the behavior of a man who is supposed to be in charge of the security and defense of the United States of America?
So yes, the perfect, almost mathematical symmetry of Russiagate is almost too good to be true. But it only makes sense..
Inshallah.
Nixon/Trump mashup illustration; artist unknown | https://edwardsrobt.medium.com/the-inevitability-of-russiagate-b8edd892c1f | CC-MAIN-2021-04 | en | refinedweb |
Opened 8 years ago
Closed 8 years ago
#14015 closed enhancement (fixed)
Affine and Euclidean groups
Description (last modified by )
This ticket implements basic affine groups and Euclidean groups:
sage: G = AffineGroup(3, QQ) sage: g = G.random_element(); g [ 2 -1/2 0] [ 0] x|-> [ 1 -1 -1] x + [-32] [ 0 -2 -2] [1/3] sage: g*g [ 7/2 -1/2 1/2] [ 16] x|-> [ 1 5/2 3] x + [ -1/3] [ -2 6 6] [191/3] sage: g*g.inverse() [1 0 0] [0] x|-> [0 1 0] x + [0] [0 0 1] [0]
Apply:
Attachments (2)
Change History (10)
Changed 8 years ago by
comment:1 Changed 8 years ago by
comment:2 Changed 8 years ago by
- Dependencies set to #14040, #14014
- Status changed from needs_review to needs_work
Changed 8 years ago by
comment:3 Changed 8 years ago by
- Description modified (diff)
- Reviewers set to Travis Scrimshaw
- Status changed from needs_work to needs_review
Hey Volker,
Here's a rebased version with my review changes. I've removed the need for the
*_generic classes and made the docstrings be at the class level so that they are visible using introspection. I've also added a method to get the lifted matrix space (representation of affine transformations as linear transformations) as
linear_space(), and made a few docstring tweaks. If you're happy with my changes, you can set this to positive review.
Best,
Travis
For patchbot:
Apply: trac_14015-affine_groups-ts.patch
comment:4 Changed 8 years ago by
The
_generic suffix was there so that we can later also wrap GAP's affine groups (especially for finite fields)
comment:5 Changed 8 years ago by
You can have the GAP's affine groups and have the
__classcall__() return that class (see
sage.combinat.partition.Partitions or
sage.combinat.tableau.Tableaux as more complete/complicated examples). IMO this is cleaner since the we the class doesn't have any extra qualifiers, the (single) entry point matches the (base) class, and the classes have the correct naming scheme. Thus it is still extendable.
If the input format needs to be changed and exposed to the global namespace, you can implement a
__classcall__() on the GAP wrapper parent (and likely the input will still need to standardized).
comment:6 Changed 8 years ago by
And you need some way to circumvent the enforced argument normalization for internal use where you know that the arguments don't have to be normalized. In terms of complexity / lines of code, I think its pretty much a draw. Which is to say, you end up using a lot of complicated machinery for no real advantage. And it gets even more complicated if you start deriving the class. And it breaks the symmetry between different implementations. If I had seen a real advantage with the
__classcall__ mechanism then I would have used it myself.
comment:7 Changed 8 years ago by
- Milestone changed from sage-5.10 to sage-5.11
- Reviewers changed from Travis Scrimshaw to Travis Scrimshaw, Volker Braun
- Status changed from needs_review to positive_review
comment:8 Changed 8 years ago by
- Merged in set to sage-5.11.beta1
- Resolution set to fixed
- Status changed from positive_review to closed
Initial patch | https://trac.sagemath.org/ticket/14015 | CC-MAIN-2021-04 | en | refinedweb |
Docker is a computer program build to allow an easier process of creating and running applications by using containers. Have you often used Docker? If so you may have come across a shell message after logging into a docker container with an intention to edit a text file.
However, don't worry as Senior Software Engineer Maciek Opała is here with possible solutions! Check them out and we hope they help you with your editing needs.
'bash: <EDITOR_NAME>: command not found'— if you’ve ever encountered this shell message after logging into a docker container with an intention to edit a text file, this is a post you should read.
What’s the problem?
Docker is meant to be lightweight (doing one job and doing it well), hence docker containers are trimmed to a bare minimum — they have only necessary packages installed to play the required role in a given project ecosystem. From this point of view, having any editor installed is pointless and introduces needless complication. So if you prepared a 'Dockerfile', built an image and after running a container you need to edit a file you may get surprised:
~/ docker run -it openjdk:11 bash root@d0fb3a0b527c:/# vi Lol.java bash: vi: command not found root@d0fb3a0b527c:/#
What are the possible solutions?
#1 Use volume
Let’s use the following 'Dockerfile':
FROM openjdk:11 WORKDIR "/app"
Now, build an image with:
docker build -t lol .
And finally run the container with a 'volume' attached (a 'volume' can be also created with a 'docker volume create' command):
docker run --rm -it --name=lol -v $PWD/app-vol:/app lol bash
'$PWD/app-vol' folder will be created automatically if it does not exist. Now if you try to list all the files in the '/app' directory you will get an empty result:
~/ docker run --rm -it --name=lol -v $PWD/app-vol:/app lol bash root@4b72fbabb0af:/app# ls
Navigate to the '$PWD/app-vol' directory from another terminal and create a 'Lol.java' file. If you try to list files once again in the container being run you’ll see that newly-created 'Lol.java' file is there:
root@4b72fbabb0af:/app# ls Lol.java root@4b72fbabb0af:/app# cat Lol.java public class Lol { } root@4b72fbabb0af:/app#
As you can see 'cat' command works, so you can at least view the file’s content.
#2 Install the editor
If using a 'volume' is not an option you can install the editor you need to use in a running container. Run the container first (this time mounting a 'volume' is not necessary):
docker run --rm -it --name=lol lol bash
And then install the editor:
root@4b72fbabb0af:/app# apt-get update root@4b72fbabb0af:/app# apt-get -y install vim
Installing a package in a running container is something that should be done incidentally. If you do it repeatedly, it’s a better idea to add the required package to the 'Dockerfile':
FROM openjdk:11 RUN ["apt-get", "update"] RUN ["apt-get", "-y", "install", "vim"] WORKDIR "/app"
It seems that vim-tiny is a light-weight alternative, hence a better choice for an editor in a docker container.
#3 Copy file into a running docker container
Let’s run a container with no editor installed ('Dockerfile' from #1):
docker run --rm -it --name=lol lol bash
(again, no 'volume' needed). If you try to 'ls' files in '/app' folder you’ll get an empty result. This time we will use docker tools to copy the file to the running container. So, on the host machine create the 'Lol.java' file and use the following command to copy the file:
docker cp Lol.java lol:/app
where 'lol' represents the container name. Instead of the name of the container its 'ID' may be also used when copying a file. Files cannot be copied directly between containers. So, if there’s a need to copy a file from one container to an another one, the host machine must be involved.
Another, quite similar option, is to use the 'docker exec' command combined with 'cat'. The following command will also copy the 'Lol.java' file to the running container:
docker exec -i lol sh -c 'cat > /app/Lol.java' < Lol.java
Where '/app/Lol.java' represents a file in a docker container whereas 'Lol.java' is an existing file on the host.
#4 Use linux tools
No favourite (or even any) editor installed on the docker container? No problem! Other linux tools like 'sed', 'awk', 'echo', 'cat', 'cut' are on board and will come to the rescue. With some of them like 'sed' or 'awk' you can edit a file in place. Other, like 'echo', 'cat', 'cut' combined with powerful stream redirection can be used to create and then edit files. As you’ve already seen in the previous examples these tools can be combined with the 'docker exec' command which makes them even more robust.
#5 Use vim (or other editor) remote
IMPORTANT: this idea is bad for many reasons (like running multiple processes in a docker container or enabling others to ssh into a running container via exposed port number 22). I’m showing it rather as a curiosity than something you should use in a day-to-day work. Let’s have a look a the 'Dockerfile', since it has changed a bit:
FROM openjdk:11 RUN ["apt-get", "update"] RUN ["apt-get", "install", "-y", "openssh-server"] RUN mkdir /var/run/sshd RUN echo 'root:lollol0' | chpasswd RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config RUN ["/etc/init.d/ssh", "start"] EXPOSE 22 WORKDIR "/app" CMD ["/usr/sbin/sshd", "-D"]
This time, since 'scp' will be used for remote edit, we need to install 'openssh-server', expose a port, and finally start it. After building the container with the following command:
docker build -t lol .
run it with the following command:
docker run --rm -p 2222:22 -d --name=lol lol
Now, when the container is running you can edit the 'Lol.java' file with the following command:
vim scp://root@localhost:2222//app/Lol.java
After connect confirmation and entering the password 'vi' opens and the file can be edited. Because of this issue, run ':set bt=acwrite' in 'vi' screen and go ahead with file edition. After you finished, save and exit 'vi', confirm with root’s password and you’re done. Now, run:
docker exec -it lol cat /app/Lol.java
to confirm that the file was in fact created and saved.
Why do I need this?
Actually, you do not, Docker containers are meant to be immutable units of work, devoted to running a single, particular process. Images should be built and run without any further intervention. What’s more, when you edit a file in a running docker container you need to ensure that all the processes that depend on the edited file have been notified about the change. If they’re not configured for redeployment on a configuration change, they need to be restarted manually.
Editing files in a docker container might be useful only during development. When you don’t want or even need to build an image, run it and verify it the change introduced has taken the desired effect every single time you add or remove something in 'Dockerfile'. This way you can save some time, but after it’s done, the redundant packages added to an image should be removed.
This article was written by Maciek Opała and posted originally on SoftwareMill Blog. | https://www.signifytechnology.com/blog/2018/12/editing-files-in-a-docker-container-by-maciek-opala | CC-MAIN-2021-04 | en | refinedweb |
I’m fascinated by the myth of the Lisp genius, the eccentric programmer who accomplishes super-human feats writing Lisp. I’m not saying that such geniuses don’t exist; they do. Here I’m using “myth” in the sense of a story with archetypical characters that fuels the imagination. I’m thinking myth in the sense of Joseph Campbell, not Mythbusters.
Richard Stallman is a good example of the Lisp genius. He’s a very strange man, amazingly talented, and a sort of tragic hero. Plus he has the hair and beard to fit the wizard archetype.
Let’s assume that Lisp geniuses are rare enough to inspire awe but not so rare that we can’t talk about them collectively. Maybe in the one-in-a-million range. What lessons can we draw from Lisp geniuses?
One conclusion would be that if you write Lisp, you too will have super-human programming ability. Or maybe, even if Lisp won't take you from mediocrity to genius level, it will still make you much more productive.
Another possibility is that super-programmers are attracted to Lisp. That’s the position taken in The Bipolar Lisp Programmer. In that case, lesser programmers turning to Lisp in hopes of becoming super productive may be engaging in a bit of cargo cult thinking.
I find the latter more plausible, that exceptional programmers are often attracted to Lisp. It may be that Lisp helps very talented programmers accomplish more. Lisp imposes almost no structure, and that could be attractive to highly creative people. More typical programmers might benefit from languages that provide more structure.
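As a minimal illustration of what "almost no structure" means in practice — my own sketch in Common Lisp, not anything from the post (the variable name *expr* is invented for the example) — a Lisp program is itself just a nested list, so nothing stops a programmer from taking code apart and reassembling it like any other data:

(defparameter *expr* '(+ 1 2 3))    ; a list that happens to be code
(first *expr*)                      ; => +  (the operator is just the first element)
(eval (cons '* (rest *expr*)))      ; => 6  (rebuild the list with * and run it)

That freedom is exactly the sort of thing that could appeal to highly creative programmers and disorient everyone else.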
Programming languages do make a difference in productivity for particular tasks. There are reasons why different tasks are commonly done in different kinds of languages. But I believe talent makes even more of a difference, especially in the extremes. If one person does a job in half the time of another, maybe it can be attributed to their choice of programming languages. If one does it in 1% of the time of another, it’s probably a matter of talent.
There are genius programmers who write Lisp, and Lisp may suit them well. But these same folks would also be able to accomplish amazing things in other languages. I think of Donald Knuth writing TeX in Pascal, and a very conservative least-common-denominator subset of Pascal at that. He may have been able to develop TeX faster using a more powerful language, but perhaps not much faster.
42 thoughts on “The myth of the Lisp genius”
When I was in grad school we used Lisp-Stat, and I think we may have been the only stat program to do so. For me, it’s 100% the syntax. Something about the exclusive use of parentheses makes programming in Lisp so much more enjoyable. It’s very weird. And I’m not even a very good Lisp programmer!
I see how Lisp could be a good fit for statistics.
R is said to be Lisp-like at its core, though most R code I’ve seen resembles FORTRAN far more than Lisp. But that says more about how the language is used than how it was designed.
My guess is that 90% of the market for statistical software is people who apply statistics but don’t have much mathematical background and who would find functional programming unnatural.
I’ve been thinking similar things the last few years about the Haskell community. Lots of people doing interesting things with Haskell, but, when you really look at it, it’s partly because they are all very logical thinkers and, essentially, very good programmers in general. Certainly people moving across to Haskell with relatively little experience in other programming in general can be seen to be suffering and the learning curve is steep enough that they don’t make it to the first ledge.
However, there is also something to be said for the way a language makes you think about problems. Periodically, when stuck on some task, thinking about “how would I do this in Haskell” or “how would I do this in C” has helped unblock the thought processes. When I code in Scheme, it’s kind of the same: what are my data structures and how can I transform them? As opposed to what are my loops and branch points and where can I store some scratch data?
Learning LISP is valuable because it provides a completely different perspective on what a programming language is. Coming from a procedural background, a language like LISP will completely change the way you program and think about programs. Learning LISP isn't about being super productive – it's about broadening your perspective and becoming a better programmer regardless of the language. Plus, LISP is a joy to program in; whether you're getting things done faster or better isn't the point. No other language lets you play at programming the way that LISP does.
@kotfic
John,
Could we write a follow-up article, called “The Myth of the Math Genius,” in which you note that Newton, Boole and others were extremely productive in their math capabilities despite the lack of good math symbolism? And, thus, good math symbols are not really necessary?
One of the hallmarks of a great programmer is that they (get to) choose their tools with care. Lisps, with their simple syntax, code-as-data approach, functional stance, the REPL, and macros, can be incredible intelligence multipliers. That being said, the really great Lisp programmers I have known personally have also been great at choosing other languages as needed.
One of the interesting things about Clojure, Scala, and F# (and perhaps others; these are things I’ve played with) is that they take some or all of Lisp’s advantages and marry them to big libraries and run them on the common runtime engines (JVM, CLR). In theory, this takes advantage of the strengths of Lisps and the strengths of standard procedural languages. In practice, it’s a little more complicated than that, of course.
A pity so few other programming languages have runtime-malleable code. S-Exps and their more imperative cousins, ASTs, both offer the higher-level "code modifying code" construct, and that meta-abstraction, I think, reflects a huge part of LISP's real cachet.
John, this is well written and I agree with you. The archetypal myth in the programming world is that there is some magic talisman out there, either a programming language, a methodology (whether agile or not), or whatnot, that will give even the most worthless programmer superhuman talent. What we find is that these tools are “talent amplifiers.” If you have the talent necessary to master them, they can definitely make you more productive. But if you have no talent, you’ll still write poor code.
I put Lisp in that category. If somebody takes the time to study Lisp, they’ll find that just about every other tool is somehow a faint reflection of it. Example: I was reading a site the other day that was espousing declarative metaprogramming. But the author had gone off and created his own metaprogramming language interpreter, etc. The fundamental idea, metaprogramming, was wonderful and it had clearly made the author more productive in his work. But I found myself shaking my head, saying, “He would have saved so much time if he had done this in Lisp.”
tl;dr: giving someone a chisel doesn't make them Michelangelo.
There are some perfectly sensible remarks interspersed with some very strange ones! Most puzzling is the question of what exactly this post is about: you start by claiming to use “myth” in the Campbell sense, and then look at actual programmers. Well, which are you interested in?
“Lisp imposes almost no structure, and that could be attractive to highly creative people. More typical programmers might benefit from languages that provide more structure.”
I’ve heard this a lot, but it does not match my experience at all. Programmers who make a mess in Lisp also make a mess in Java or C or Python or anything else. If this was true, wouldn’t you expect Haskell and Ada to be more popular? Javascript has more structure than Lisp but still much less than most other languages, yet even mediocre programmers tend to be far more productive in Javascript than those other more structured languages.
[The commenter quotes the post's claim that it is hard to attribute very large productivity differences to the programming language; the quoted passage is garbled in the source.]
Why is this “hard”? If they wrote in assembly and he wrote in C we’d easily attribute it to the language. If they wrote in C and he wrote in Python we’d also find it easy. Why is it hard to believe that moving up the abstraction continuum even further wouldn’t yield more productivity gains?
If you really want to see why, just get the source code to a good Lisp program, and try porting it to some other language. When you end up with 10-100x more code, you’ll see why. (Are you really going to re-implement macros, multimethods, a condition system, special variables, etc.? Or do you know some clever way to achieve the same power without using abstraction?)
“If one person does a job in half the time of another, maybe it can be attributed to their choice of programming languages. If one does it in 1% of the time of another, it’s probably a matter of talent.”
That sounds exactly backwards to me. My friend can run a marathon in half the time as me — skill. The only way he could run it even 10x as fast is by taking a car — better technology. Programming isn’t the same as running, but across every field, using a better technology is the only reliable way I’ve seen to get better than 10x improvement.
I like your point about Donald Knuth writing TeX in a conservative subset of Pascal!
Lisp is the most powerful programming language that exists. Lisp is also a way of thinking. It allows you to create very complicated programs that can't be written easily in other languages. But these programs must be written by very talented programmers who can think about very difficult problems. If you think about easy programs, then it doesn't matter which language a programmer picks; good programmers always do it right, but the Lisp one will be 10 times smaller.
Genius or language is a false dichotomy. I like your post but here’s another, Deconstructing Genius, that addresses what sets mathematicians apart and goes a bit deeper into the personality characteristics necessary for world-class success.
I think Larry Wall said Lisp programmers moved to Perl because they got tired of their source looking like a bowl of cold oatmeal with a bunch of fingernail parings in it. Of course, people also said C was popular because it allowed lower case characters.
And ADA, if I recall correctly, was unpopular even despite being a requirement in all US Government software. To use C you had to get special approval with justification. I think pointing out the lack of an ADA compiler for the target hardware was usually sufficient. That said, I liked the strong typing in ADA but never tried writing a line of it.
I recall Lisp-Stat and X-Lisp-Stat were very popular among statisticians because they were free and powerful, especially the graphics capabilities. R was maybe a gleam in some folks’ eyes but certainly not ready for prime time. Now that R is mature and available, I imagine a mass migration has happened. X-Lisp-Stat was also the language of choice for Jan DeLeeuw, an early promoter of reproducible computing in statistics.
1. Do you know lisp?
2. Have you used lisp continuously for at least three months?
3. Have you worked with other lispers and deployed a large lisp project?
Hey, I have a blog post I'd like you to read. I titled it, "Flying looks pretty easy, what's up with the payscales of pilots?".
Too many assumptions, “I think”, etc.
In the days when people programmed on Lisp Machines where I worked, I was surrounded by programmers of amazing caliber. I sometimes thought that if only there were more Lisp Machines, there would be more smart people. 🙂
No language is good for everything, but as formerly one of those “LISP genius” programmers (moved on for some time, but definitely was), I:
(a) don’t need to prove it. I know what I know;
(b) definitely know we/they exist
(c) know exactly what it is about LISP that made it possible to do some rather incredible things “easily” (relatively)
(d) know those principles so well, the language no longer matters – as long as it is a reasonably featured language, I wouldn’t hesitate to try anything that was formerly a “LISP only” project
After being immersed in the LISP "wa" for a long time, to the extent of having multiple times created entire LISP development environments (usually based in an initial assembly-language interpreter, and then extended massively in LISP itself), one's method of understanding problems and their solutions is forever changed. I even was a developer of one of the most advanced LISP machines commercially offered.
It is this change in the way of thinking about problems that makes the “genius” aspect occur, assuming one has the personal bent for it. And it isn’t then limited to LISP which, for many reasons, is not a practical language for commercial delivery in many fields.
OK, ’nuff said.
Most programming languages are Turing-complete; anyone with enough ingenuity and time can accomplish the same task with Lisp as they can with assembly.
Verbosity distinguishes programming languages. Because Lisp requires less typing than other languages, Lisp affords programmers the mental space to write more complex programs. That's why AI is typically done in Lisp.
Java has been compared to the Catholic Church: change must come down from the Pope. To change Java, one must 1) convince the Sun/Oracle developers to adopt the change 2) wait for the change to be applied to Java 3) wait for the next stable Java release 4) wait for users to update their Java version. The process takes years.
In contrast, Lisp programmers can bend Lisp to their needs in minutes by using macros. You can define your own constructs (e.g. custom if-then's and loops). You can implement a special object system, or try out different threading models. In Java, or any other monolithic language, you have to work with what they give you.
There is some truth in a statement like “Lispers are geniuses who program circles around lesser folk”: Lisp does, in fact, enable programmers to do just that. Also, the mindset of functional programming can greatly improve the quality of code produced in any language: there is far too much ad hoc code that takes no input, manipulates global variables, and prints the results out rather than taking input, manipulating local variables, and returning output. That ad hoc code is USELESS.
There is also some truth that Lisp demands a higher quality programmer. IQ aside, many programmers are taught to think that programming must be done imperatively (C-style). They’re basically using high level assembly, for all the convenience of their chosen languages. And so they write new languages to add power to the old ones: Groovy, Scala, BeanShell.
Greenspun's Tenth Rule is "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp." The desire for more powerful languages has led to the creation of engines for scripting languages. It's why web browsers have JavaScript, why video games use Lua, and why Windows has a half dozen application languages: the underlying languages are terrible, and good developers learn to work around them.
You don’t have to be a genius to learn Lisp or to use functional programming, but there is a correlation between Lispers and Computer Science education, interest in theory, and natural curiosity. A child can learn BASIC, Lispers tend to have PhD’s.
I started programming in SAS-a high-level, specialized language for processing datasets.
SAS has a macro language-not in the sense of Lisp of course, but still a useful macro language.
For the work I do, the difference between a programmer who uses the macro language and who doesn’t is easily 5x if not more. The reasons are that
1. You use much less code,
2. Your code becomes massively reusable,
3. You can do things that are hard or nearly impossible in the base language
1 and 2 mean you write programs about 2x faster but spend less than 1/10 the time maintaining them. And 3 is just out of this world.
So, I would be genuinely surprised if a genius programmer wasn't at least 10x as productive in Lisp as in C, and 100x really wouldn't surprise me at all.
Winner of the last Google AI contest used LISP.
Have you looked at the code for TeX? Although it was written explicitly to be readable, and it is, it takes control of the smallest details, albeit in a painstaking and methodical way. I think that it would be difficult to argue that TeX could have been written by a non-genius. Although not a direct comparison with lisp, Doug McIlroy’s response to Knuth’s Literate word counting program shows the advantage of flexible tools. Admittedly, McIlroy is also a genius, but given the tools available at the time your average UNIX programmer would have a good chance of coming up with McIlroy’s solution, but very few would come up with Knuth’s.
@scotty
Why was he the only one using Lisp among the top 100?
It is pretty safe to make the claim about Lisp that it is a programming language. Anything beyond that and you are on the hook for believing what you hear lol.
One problem with Lisp is its name 🙂 I mean, would you want to use a programming language named after a speech impediment? How about Limp? Or Stutter?
Then again, Microsoft got pretty far despite its name 🙂
Yes, I think that’s broadly right. Although I’ve never met anyone who claimed to be a genius because they use Lisp, like you describe.
The missing feature between most Lisp and non-Lisp languages is macros. Macros, programmed right, make a big difference to productivity in my opinion. It’s not a myth and it’s not genius. It makes a difference, but not a huge one.
Lesser programmers turning to Lisp (or Scheme) will benefit from Lisp simply because it encourages a functional programming style. In the presence of sufficient computing resources a functional programming style leads to cleaner and more parallelizable code. Transplant that back into the language you came from and you’re more productive.
I think a lot of it depends on the programmer and the project. A really skilled programmer who is deeply familiar with Lisp and can think in macros can save a lot of time — maybe not moving 100 times as fast, but much more than ten — if and only if the project is big enough and complex enough to gain from that sort of treatment. Lisp comes with some startup costs in terms of initial effort, and you have to balance what you gain from Lisp against the "import whizbang" effortlessness of simple tasks in a language like Python.
I’m not, by the way, the programmer I describe above. I’m a fairly able intermediate Lisper; when I realize I need a macro, I can figure out how to write one, but it breaks my flow, because it’s not a fully-integrated part of my programming technique. I think a lot of coders never get past that stage in Lisp, and so they don’t see the advantages you describe.
I also think you’re being too skeptical of the gains the right language for the job offers. I wrote a toy Scheme interpreter in Python, and it took an hour and a half. I started to do the same in Object Pascal (which, admittedly, I don’t know well), and it was poised to take me a couple weeks of part-time work before I decided not to bother.
They should let the Lisp people compete in Top Coder. The C++ programmers kick everyone else’s butts in those competitions.
I love Lisp (and Prolog), and agree with Peter Norvig that many of the "patterns" in C++/Java are just warmed-over Lisp. But the programmers are another story. I don't think most academics have ever seen really talented programmers. I worked at Carnegie Mellon, then at Bell Labs, and even published books on programming languages (Prolog-like) and type-theory a la ML, but I'd never met a great programmer until I moved to a 200-person software company.
Again, I urge you to check out the Top Coder competitions. I used them to bone up my programming skill from that of an academic to that of a professional programmer. And whatever you do, don’t bet against the game coders who eat, breathe and think C++.
Somehow my post didn’t get logged. It happens: Made using a smartphone.
Anyway, there are a lot of success stories using LISP. But there's a lot of LISP to be found tucked away in other places, such as the statistical programming language, R. Incidentally, Smalltalk is every bit as powerful as LISP and is the language I prefer, although I would never move from R because of its wealth of statistical and numerical packages.
One area I do not know which appears to have some significant computational legs is the world of OCAML. They seem to work at levels of abstraction well beyond that of LISP or Scheme or Smalltalk.
Several months ago, I attended an “Alternative Languages” group meeting–so all of us were open to the semantics and uses of other languages–and, at the end of the meeting, the conversation turned to languages used in various classes. I cannot remember the exact words, but I remember the gist very well:
“In one class, we were required to use C, so that those who weren’t familiar with Lisp wouldn’t have an advantage over everyone else. We were only able to cover one or two topics.” said one person.
“Yeah, in another class, we used Lisp, and I was amazed by what we were able to cover! We covered a lot of stuff.”
The thing I took from this is that Lisp really lets you do amazing things, in a way that a language like C cannot.
lisp is a really good language for learning and understanding the essence of computation, and the lisp geniuses of legend are just the people who stuck with lisp the longest and learned the most. the purity and consistency of lisp allow an exercise of power that invites working on harder problems, leaving your brain in a better condition than before.
i only really caught up to my friends and classmates, who have all programmed computers for much longer than i have, after learning lisp and using what i had learned from it in other languages.
i’m still just a novice, but sicp and paip have changed my life for the better.
When I posted 7 months or so ago, I should have done two things: quote Dijkstra on LISP, and straighten out the record on R, part of which I helped confuse by being incomplete.
First, Dijkstra:
"LISP has jokingly been described as 'the most intelligent way to misuse a computer'. I think that description a great compliment because it transmits the full flavor of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts."
— Edsger Dijkstra, CACM, 15:10
Second, if R is written like FORTRAN, it isn’t really R-ish, as intended. R is a strongly twisted dialect of Scheme, a LISP cousin, but it has many conveniences (purists would call them warts) for supporting numerical computation, calling code written in other languages, data structures making it amenable to statistical application (tables, data frames, and factors, as well as lists, vectors, and matrices), deferred evaluation, smart initial values for function parameters, and sparse matrix structures. The quintessential way of cooking R code is to decompose your problem in terms of LISP-like mapping operations, e.g.,
sapply(X=x, FUN=function (y) PointInPolygon(y, P))
or similar ones using mapply, cumsum, and the like. Admittedly this is not always possible. Also, some things which are natural in LISP, and efficient when using a good LISP implementation, are inefficient and not preferred in R. The notable one is that building data structures up as you go incurs inefficiency, so the normative way is to preallocate an object all at once and fill it in. That's not a very LISPish thing to do.
By the way, Paul Graham groks LISP, bigtime.
Wasn’t it Peter Norvig that said he never expected programmers to be a as productive in C++ as Lisp until he started working at Google?
Fundamentally, Lisp is a nuts-and-bolts language with familiar things in it made up out of bits. You can do everyday, stupid things in it, in stupid ways, like in other languages. It has strings, integers, lists, vectors, structures. Programs made of step-by-step statements and loops, variables that can be assigned and even goto. You can write “Fortran” in Lisp if you are so inclined.
Lisp is better organized than other languages, and that will show more and more the more you master the language. Features that appear strange to newcomers will, in time, show themselves to be well-designed, and in some cases to actually be the best possible technical choice.
It takes about a year of earnest "Lisping" to begin to "get" it.
One thing is: to become good at Lisp, you have to be the type of person who can understand pointers. Although you don't have to chase memory leaks or segfaults in Lisp, the pointer semantics is there. If you've been trying to program in C and things like "int **" confuse the heck out of you, you will probably not go that far in Lisp. Although Lisp doesn't have "int **" type declarations, it does have pointer-based structures. Lisp programs frequently make use of notions of different kinds of equality between objects: are two values actually pointers to the same object, or to distinct objects that are equivalent in some way? Lisp is not a refuge for programmers who don't grok referential semantics.
It is completely wrong, though, that Lisp is a language for the lone genius, because that implies it is difficult. Rather, Lisp can turn a good programmer into a genius. Lisp has the tools in it, left behind by great programmers, to enable other programmers to be great. Lisp removes some of those barriers from your path which have nothing to do with lack of talent; the rest is up to you.
For instance, it does not take a lot of work in Lisp to intercept and augment the compilation process: write code that puts code together, which is then compiled. Stuff like that *sounds* like it requires a genius programmer, but it doesn’t require a genius programmer in Lisp. Programmers deserve to have that kind of access to the programming language.
The bar is not so low that every dummy can do that, but it’s not so high either. If you give people the API, they can use it. If you don’t give them the API, then they have to be geniuses to make everything from scratch.
Say, if you’re a very sharp C++ programmer, if you spend a little time with Lisp, you will be amplified into a genius.
I think someone who spent a lot of time with C and C++ for many years will “get” a lot of things in Lisp sooner. Especially someone with some background in writing compilers, and also who has meta-programmed using code generation techniques, or used a lot of templates, or very complicated abuses of the C preprocessor. For programmers like that, Lisp is like going to heaven after a lifetime of hardship.
I think it should be assumed that Lisp programmers are average programmers and start from there.
Thank you for the articles, very useful! I started learning LISP more than a year ago, but gave it up soon. I had a lot of problems with choosing a framework and with its installation. Recently I read your guide, which helped me to set up emacs with slime correctly!
#include <wx/richtext/richtextbuffer.h>
A class representing a rich text object's borders.
Default constructor.
Applies borders to this object, but not if the same as compareWith.

Partial equality test: if weakTest is true, attributes of this object do not have to be present if those attributes of borders are present; if weakTest is false, the function will fail if an attribute is present in borders but not in this object.
Returns the bottom border.
Returns the left border.
Returns the right border.
Returns the top border.
Returns true if at least one border is valid.
Equality operator.
Removes the specified attributes from this object.
Resets all borders.
Sets colour of all borders.
Sets the colour for all borders.
Sets the style of all borders.
Sets the width of all borders.
Sets the width of all borders. | https://docs.wxwidgets.org/trunk/classwx_text_attr_borders.html | CC-MAIN-2018-51 | en | refinedweb |
Introduction
This section describes how to work with the SQL Meta Model from Business Foundation (BF). For information on how to work with database records refer to Working with SQL Records. All classes are available in the Mediachase.BusinessFoundation.Data.Sql and Mediachase.BusinessFoundation.Data.Sql.Management namespaces.
Initialization
An SqlContext object represents a unique entry point to the SQL Meta Model. When you create an instance of SqlContext, the SQL Meta Model is loaded and all properties are set to their initial values. Then you should initialize the SqlContext.Current static property to declare the SqlContext in the current thread; the SqlContext will then be available in the current thread from the SqlContext.Current static property.

Database class

A Database object represents an SQL database. The database is used to get tables and relationships, create tables or relationships, drop tables or relationships, and create stored procedures for a table.

Table class

A Table object represents an SQL user table, system table, or view. A Table is used to get columns and relationships, add columns, and remove columns. The following example traces only user tables:
Use the Database.DropTable method to remove a table definition and all data, indexes, constraints, and permission specifications for that table. When a table is dropped, rules or defaults on it lose their binding, and any constraints associated with it are automatically dropped. If you re-create a table, you must rebind the appropriate rules and defaults, and add all necessary constraints.
Example: Find table by name and drop:
// Find table
Table table = SqlContext.Current.Database.Tables["Table_1"];
// Drop table
SqlContext.Current.Database.DropTable(table);
Column class
A Column object represents an SQL column. A Relationship object represents an SQL relationship between two tables, and it is used to create these types of relationships.
Get collection of relationships
The collection of relationships is available using the Table.GetRelationships method. It returns an array of Relationship objects that are related to the table. Use the Relationship object to drop a relationship.
Example: Find a relationship and drop it:
// Step 5. Drop RelationShip
SqlContext.Current.Database.DropRelation(relationship);
Table index
A TableIndex object represents an SQL index.
Transactions
An SqlTransactionScope object represents an SQL transaction. Upon instantiating it, a transaction begins; disposing the scope performs the final transaction commit or rollback.
Example: Create a new table inside a transaction.
stat - get file status
#include <sys/stat.h>

int stat(const char *restrict path, struct stat *restrict buf);
Upon successful completion, 0 shall be returned. Otherwise, -1 shall be returned and errno set to indicate the error... | http://pubs.opengroup.org/onlinepubs/000095399/functions/stat.html | CC-MAIN-2018-51 | en | refinedweb |
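A minimal usage sketch (mine, not part of the original page; the path is just an example):

#include <sys/stat.h>
#include <stdio.h>

int main(void)
{
    struct stat sb;

    if (stat("/etc/passwd", &sb) == -1) {
        perror("stat");
        return 1;
    }

    printf("size: %lld bytes\n", (long long)sb.st_size);
    return 0;
}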
How did you connect the two Arduinos? Post a wiring diagram.

If I understood you correctly, you're trying to emulate the RHS2116 with a second Arduino. As the slave code is incomplete (it won't compile), I don't know what exactly your problem is.

One problem you probably might run into is that the SS pin has to be put in output mode; this is missing in your code (master).

Remove the delayMicroseconds() call; it's not necessary and simply slows down your sketch.
#include <SPI.h>

byte num;

void setup()
{
  pinMode(MISO, OUTPUT);
  pinMode(MOSI, INPUT);
  pinMode(SCK, INPUT);
  pinMode(SS, INPUT);
  SPI.setClockDivider(SPI_CLOCK_DIV16);
  // Set SPI Control Register to make arduino as slave.
  SPCR |= _BV(SPE);
  SPCR &= ~_BV(MSTR);
  SPCR |= _BV(SPIE);
  Serial.begin(9600);
  num = 0;
}

ISR(SPI_STC_vect) {
  switch (SPDR) {
    case B11110000 : SPDR = B10100000; break;
    case B00001111 : SPDR = B00001010; break;
    default : SPDR = num;
  }
}

void loop()
{
  num += 5;
  if (num > 255) {
    num = 0;
  }
  delay(1000);
}
return[0] = SPI.transfer(data[0]);
return[1] = SPI.transfer(data[1]);
return[2] = SPI.transfer(data[2]);
return[3] = SPI.transfer(data[3]);
byte rv = SPI.transfer(cmd);
I get the impression that you think of SPI as a faster version of the standard (UART) serial interface. It isn't. If you call

byte rv = SPI.transfer(cmd);

the rv byte is not the response of the slave to the cmd byte sent; it's the value that is received at the same time as the cmd byte is sent. So bit 7 of cmd is sent while bit 7 of rv is received, and so on. That's why you need the delayMicroseconds() call (although 5µs would probably be enough to react to the received value and fill the data register).
Then, how can I get 32 bits data of RHS2116 with Arduino Uno? Its register is only 8 bits...
By calling SPI.transfer() four times in a sequence. You might have to send the command byte first, so you end up with 5 calls of SPI.transfer() for one 32-bit value from the device.
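To illustrate the idea, here is a rough C sketch (mine, not from the thread; spi_transfer() is a hypothetical stand-in for Arduino's SPI.transfer()):

#include <stdint.h>

uint8_t spi_transfer(uint8_t out); /* stand-in for SPI.transfer() */

uint32_t read_register_32(uint8_t cmd)
{
    uint32_t value = 0;
    spi_transfer(cmd);                        /* call 1: clock out the command byte */
    value |= (uint32_t)spi_transfer(0) << 24; /* calls 2-5: clock out dummy bytes   */
    value |= (uint32_t)spi_transfer(0) << 16; /* while collecting the reply         */
    value |= (uint32_t)spi_transfer(0) << 8;
    value |= (uint32_t)spi_transfer(0);
    return value;
}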
import com.sleepycat.db.*; import java.io.FileNotFoundException;
public void remove(String file, String database, int flags) throws DbException, FileNotFoundException;
The Db.remove interface removes the database specified by the file and database arguments. If no database is specified, the underlying file represented by file is removed, incidentally removing all databases that it contained.
Applications should not remove databases that are currently in use. If an underlying file is being removed and logging is currently enabled in the database environment, no database in the file may be open when the Db.remove method is called. In particular, some architectures do not permit the removal of files with open handles. On these architectures, attempts to remove databases that are currently in use by any thread of control in the system will fail.
The flags parameter is currently unused, and must be set to 0.
After Db.remove has been called, regardless of its return, the Db handle may not be accessed again.
The Db.remove method throws an exception that encapsulates a non-zero error value on failure.
The Db.remove method may fail and throw an exception encapsulating a non-zero error for the following conditions:
If the file or directory does not exist, the Db.remove method will fail and throw a FileNotFoundException exception.
The Db.remove method may fail and throw an exception for errors specified for other Berkeley DB and C library or system methods. If a catastrophic error has occurred, the Db.remove method may fail and throw a DbRunRecoveryException, in which case all subsequent Berkeley DB calls will fail in the same way. | http://doc.gnu-darwin.org/api_java/db_remove.html | CC-MAIN-2018-51 | en | refinedweb |
This component runs custom Lua script code, implementing the behaviour of the entity in the game world. More...
#include "CompScript.hpp").BaseT.
Returns a color that the Map Editor can use to render the representation of this component's entity.
The Map Editor may use the color of an entity's first component as returned by this method to render the visual representation of the entity.
Reimplemented from cf::GameSys::ComponentBaseT.
Returns the name of this component.
Reimplemented from cf::GameSys::ComponentBaseT..
This function is used for posting an event of the given type.
It's the twin of PostEvent(lua_State* LuaState) below, but it also allows C++ code to post events. It is assumed that in the script (e.g. "HumanPlayer.lua"), the script method InitEventTypes() has been called.
from IPython.display import YouTubeVideo, Image, HTML
YouTubeVideo('0Q14rHLvMco')
For some reason I've been thinking a lot about LOST lately--thinking about it enough that I rewatched the pilot a few nights ago. I got to thinking: how have all of the actors fared in their post-LOST careers? Despite it's trials and tribulations, did acting on LOST give a sense of purpose, just like Jack felt with the Island? Is it time yet for a career revitalizing LOST reboot?
Normally these questions are relegated to some very simple slide show listicle. However, we don't have to settle for that! We've got data! We can perform a far more interesting analysis than googling "Matthew Fox."
I scraped all of this data from IMDB, following the process below:
Minor Roles: To eliminate minor roles, I only counted roles where that actor appeared on the main cast list for that movie/tv. For example: Jorge Garcia was in two episodes of How I Met Your Mother, but doesn't appear on the main HIMYM IMDB cast page.
Year info: For TV shows, the year included is the year that TV show premiered. It's not the year in which an actor might have appeared on the show. For example: everyone who appeared in LOST will have a year of 2004, regardless of when they actually started on the show. Actors this impacts:
Language: I'll use actors to refer to both actors and actresses throughout this exploration. I'll use the term media to refer to the general collection of TV or Movies.
import pandas as pd
import numpy as np   # needed for np.size in the pivot table below
%matplotlib inline
we_have_to_go_back = pd.read_csv('./data/LOST_clean.csv')
print "Total rows:", len(we_have_to_go_back)
we_have_to_go_back.head()
Total rows: 353
We have 353 total rows listing the actor, the title of the media, the IMDB score, the year that media first aired, and the type of media. Let's first take a look at what different types of media we're working with.
we_have_to_go_back['type'].value_counts()
Movie 177 TV Movie 79 TV Series 55 Other/Unknown 23 Video Game 19 dtype: int64
We're only going to include the data from Television or Film, and exclude Other/Unknown and Video Game.
big_and_small_screen = we_have_to_go_back[(we_have_to_go_back['type'] == 'TV Series') | (we_have_to_go_back['type'] == 'Movie') | (we_have_to_go_back['type'] == 'TV Movie')]
Now that we've got a clean dataset, let's get a little more information about the scores. LOST's IMDB score is an 8.5, but we have no context to understand whether that's high or low. (Sidebar: Here is a good analysis of the distribution of all IMDB scores)
We've also got to remove the duplicates for this next step. LOST is listed 15 times (once for each actor) hence the spike around 8.5. We'll assume a duplicate is an item with the same title and score.
Let's look at the distribution with a histogram, and also print out some summary statistics.
big_and_small_screen.drop_duplicates(['title','score'])['score'].hist(bins=16)
big_and_small_screen.drop_duplicates(['title','score'])['score'].describe()
count 296.000000 mean 6.399324 std 1.059773 min 2.900000 25% 5.800000 50% 6.500000 75% 7.100000 max 9.000000 dtype: float64
Comparing LOST's 8.5 score to these numbers shows us a few things:
Also notice the top scored media for any actor is 9.0. Out of curiosity, let's take a look at the top 5 scored items in our dataset:
big_and_small_screen.sort('score', ascending=0).head(5)
Don't tell Terry O'Quinn what he can't do, because he can clearly star in a highly rated 1989 TV Movie.
YouTubeVideo('arMtFxv7jlw')
big_and_small_screen.ix[big_and_small_screen.groupby('actor')['score'].idxmax()]
Of the 15 of the most frequent actors on LOST, only 2 of them have ever had a major role in something that has a score higher than LOST. Note that Person of Interest for Michael Emerson is rated the same as LOST, so we're excluding him from the club.
We can also explore how many appearances each actor has had before and after LOST. In order to do that, we'll flag every entry as post-LOST if it started after 2004, then count the number of titles that come before or after.
# side note: not happy with this code... there must be a better way.
big_and_small_screen['post_lost'] = big_and_small_screen['start_year'] > 2004
before_and_after = pd.pivot_table(big_and_small_screen, columns=['post_lost'],
                                  values=['start_year'], index=['actor'],
                                  aggfunc=np.size).reset_index()
before_and_after['more_after_lost'] = (before_and_after['start_year'][True] - before_and_after['start_year'][False] > 0)
before_and_after
9 out of 15 actors had more major roles after 2004. This is a pretty naive comparison, though, since a recurring role on a TV show is only going to count for 1, while an actor who chooses to go to the big screen is going to have multiple movies they're starring in. It also doesn't take into account things like Terry O'Quinn's massive 61 roles before LOST.
On that note, let's see if there's a difference in what type of media the actors starred in before and after LOST. We'll count the number of Movies, TV, or TV Movies to each actors name before and after LOST, then see which of those categories is the highest.
pre_LOST_roles = big_and_small_screen[big_and_small_screen['post_lost'] == False]
actor_type_counts = pre_LOST_roles.groupby(['actor','type']).size().reset_index()
actor_type_counts.columns = ['actor','type','occurrences']
actor_type_counts.ix[actor_type_counts.groupby('actor')['occurrences'].idxmax()]
Note that these also include starring in LOST itself. Henry Ian Cusick is in good company with Terry O'Quinn as a major TV Movie actor! Alright!
Let's quick tally tally up the types:
actor_type_counts.ix[actor_type_counts.groupby('actor')['occurrences'].idxmax()]['type'].value_counts()
Movie 9 TV Series 4 TV Movie 2 dtype: int64
post_LOST_roles = big_and_small_screen[big_and_small_screen['post_lost'] == True]
actor_type_counts = post_LOST_roles.groupby(['actor','type']).size().reset_index()
actor_type_counts.columns = ['actor','type','occurrences']
actor_type_counts.ix[actor_type_counts.groupby('actor')['occurrences'].idxmax()]
Everyone but Terry O'Quinn seemed to go to the big screen after LOST.
Our quick, rudimentary analysis gave us some wonderful insight into the acting lives of 15 actors from LOST. Here's what we've learned: | http://nbviewer.jupyter.org/github/pmbaumgartner/LOST/blob/master/WE%20HAVE%20TO%20GO%20BACK.ipynb | CC-MAIN-2018-51 | en | refinedweb |
Subject: Re: [boost] [config] msvc-14 config changes heads up
From: Stephan T. Lavavej (stl_at_[hidden])
Date: 2016-07-05 20:42:42
[Stefan Seefeld]
>.
There is no room for interpretation here. N4594 3.4.2 [basic.lookup.argdep]/2.1:
"If T is a fundamental type, its associated sets of namespaces and classes are both empty."
Note that the Boost mailing list really isn't an appropriate place to discuss 18-year-old C++ Core Language design decisions.
STL
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2016/07/230395.php | CC-MAIN-2020-34 | en | refinedweb |
honey the codewitch wrote: a fair amount of confusion
honey the codewitch wrote:I don't care what pronouns you use for me. Use whatever makes the most sense to you.
my gender is bees.
Quote: "Hey honey, take a walk on the wild side"
honey the codewitch wrote:my gender is bees.
def printGlobal():
    print(str(extra))

extra = 35
printGlobal()  # prints 35

extra = "Python are stupid."

class Arsinine:
    def __init__(self):
        print(extra)

a = Arsinine()  # prints Python are stupid.
extra
raddevus wrote:843 People Upvoted this Comment
ZurdoDev wrote:844 now.
raddevus wrote:I'm putting you in a special box.
ZurdoDev wrote:But in seriousness, there are times when globals make sense.
0x01AA wrote:No
0x01AA wrote:but also injection is a kind of global behavior in the broadest sense
Greg Utas wrote:Global constants, yes.
Global variables, no.
I know why use interface and wrapper.
But I'm confused about how to name the wrapper class... ("Who is the wrapper?" I realize I don't understand this well...)
public Interface A {
    void method();
}

public Class B implements A {
    void method() {
        doSomething();
    }
}
- Wrapper Class is a class, so B is the wrapper.
- We usually see (or think?) a.method(), not b.method(), so A is the wrapper.
A is an interface.

B is a concrete implementation of an Interface. Nothing else can be said about them from the code you provided.
A Wrapper "wraps" the functionality of another class or API by adding or simplifying the functionality of the wrapped class/API. For example, the Primitive Wrappers from Java add useful methods like doubleValue and compareTo to Java primitives.
You're thinking of Polymorphism.
That's what allows us to say things like:
A a = new B();
B b = new B();
List<A> stuffs = new ArrayList<A>();
stuffs.add(b);
Side note: Interface and Class are not allowed to be capitalized in Java. Your declarations should be like so:
public interface A {
    // methods
}

public class B implements A {
    // methods
}
Last time on Friday Q&A, we discussed how to implement NSMutableArray. Today, I'll repeat the same exercise with NSMutableDictionary and build an implementation of it from scratch.
Concepts
Like NSMutableArray, NSMutableDictionary is a class cluster. Any subclass must implement these primitive methods:
- (NSUInteger)count;
- (id)objectForKey:(id)aKey;
- (NSEnumerator *)keyEnumerator;
- (void)removeObjectForKey:(id)aKey;
- (void)setObject:(id)anObject forKey:(id)aKey;
Most of these should be pretty obvious. One which may not be is -keyEnumerator. Some of the NSDictionary APIs implemented on top of these methods need to be able to examine all of the dictionary's keys. For example, allKeysForObject: would be implemented by iterating over keyEnumerator, using objectForKey: to get the corresponding object, and seeing if it matched the one passed in. Requiring an NSArray or NSSet would be limiting, so instead it simply requires returning some object which can be enumerated to find all of the keys.
NSCopying,
hash, and
isEqual:. A binary tree is another common way to implement key-value mappings, but with no comparison method available, it doesn't work too well here. It would be possible to implement the binary tree based on the
hash value, but implementing a hash table is easier and more likely to match the actual implementation of
NSMutableDictionary.
For those of you who've forgotten your college data structures course, let's talk about just what a hash table is before we go off and implement one.
First, we need a hash function. A hash function is something that takes an object and produces an integer where, for two equal values x and y, hash(x) = hash(y). In other words, when two objects have the same value, they have the same hash. If the two objects have different values, you prefer that they have different hashes, but this is not a requirement. A good hash function will try to make that happen if it's at all possible, because it makes hash tables run faster. However, return 0; is a perfectly valid hash function, just slow. In Cocoa, the hash method on NSObject provides this hash function. Subclasses which have more complicated equality semantics will also override hash to match.
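As an illustration (my sketch, not part of the article), a class with value-based equality overrides both methods together; note that to serve as a dictionary key it would also need to adopt NSCopying:

// Hypothetical value class; a and b are assumed to be integer properties.
@implementation Pair

- (BOOL)isEqual: (id)other
{
    if(![other isKindOfClass: [Pair class]])
        return NO;
    Pair *pair = other;
    return [pair a] == _a && [pair b] == _b;
}

- (NSUInteger)hash
{
    // Equal pairs must hash equally; mixing the fields merely helps
    // spread unequal pairs across the table.
    return _a * 31 + _b;
}

@end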
With the hash function in hand, the next step is to construct a table which uses the hash function to splat keys into it in a random-looking fashion. The table can just be a C array. We choose an index into the table based on the hash function, usually by taking the modulus of the hash with the table length. That index of the table is then used to store the key/value pair.
A major question at this point is how to handle collisions. A collision can happen for two different objects with the same hash, or just for two different objects which happen to be assigned to the same index in the table but have different hashes. There are a lot of different ways to handle this problem, but a common and easy one is to have each table entry actually hold the head of a linked list rather than a single key-value pair.
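Schematically, the resulting structure looks something like this (my illustration, not from the article):

/* A table of size 4 holding three key-value pairs:

   index 0: NULL
   index 1: [ "cat" : 5 ] -> [ "dog" : 7 ] -> NULL   ("cat" and "dog" collided)
   index 2: NULL
   index 3: [ "axolotl" : 2 ] -> NULL
*/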
Thus, to look up a key in the table, we use the hash function to find the index, then search the linked list for that key. To set a new key-value pair, we just add the new pair to the linked list at that index.
An interesting aspect of this approach is that resizing the table becomes somewhat optional and arbitrary. A small table can still hold a large number of objects, because they'll all end up in the table's linked lists. However, performance suffers greatly when the linked lists become long. For optimal performance, you want to grow the table as the number of key-value pairs grows, but it's not strictly necessary.
In order to make the implementation of the dictionary easier, I split the implementation into two separate classes. MAFixedMutableDictionary implements the above approach using a fixed-size table, whose size is specified at initialization time. MAMutableDictionary is then a small wrapper around MAFixedMutableDictionary which creates a new, larger one and copies the key-value pairs across each time the size exceeds a certain threshold.
Code
Like before, the code that we'll discuss today is available on GitHub.
Implementation
The first thing we'll implement is the linked list node, which I call _MAMutableDictionaryBucket. This is a really simple class whose task is to hold a key, a value, and a pointer to the next bucket. Here is the entire implementation:
@interface _MAMutableDictionaryBucket : NSObject

@property (nonatomic, copy) id key;
@property (nonatomic, retain) id obj;
@property (nonatomic, retain) _MAMutableDictionaryBucket *next;

@end

@implementation _MAMutableDictionaryBucket

- (void)dealloc
{
    [_key release];
    [_obj release];
    [_next release];
    [super dealloc];
}

@end
Note that I'm taking advantage of the new default @synthesize stuff, which is why there is no visible implementation of those three @property declarations. Also note that the key property is declared as copy, which trivially implements the NSMutableDictionary semantic of copying its keys.
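To see why the copy matters, consider this usage sketch (mine, not from the article; dict is assumed to be an instance of the dictionary we're building). Without copying, mutating a key after insertion would leave the stored key's hash out of sync with its bucket:

NSMutableString *key = [NSMutableString stringWithString: @"name"];
[dict setObject: @"value" forKey: key];

[key appendString: @"-mutated"]; // mutate the caller's string

// Because the bucket copied the key at insertion time, the entry is
// still filed under the original string:
[dict objectForKey: @"name"]; // returns @"value"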
I also have a second helper class. This one is an NSEnumerator subclass which wraps a block that acts as the enumerator. I use this to make it easier to implement the keyEnumerator method. Here's the code for this class:
@interface _MABlockEnumerator : NSEnumerator
{
    id (^_block)(void);
}

- (id)initWithBlock: (id (^)(void))block;

@end

@implementation _MABlockEnumerator

- (id)initWithBlock: (id (^)(void))block
{
    if((self = [self init]))
        _block = [block copy];
    return self;
}

- (void)dealloc
{
    [_block release];
    [super dealloc];
}

- (id)nextObject
{
    return _block();
}

@end
That's it for helper classes. Next, let's look at the instance variables for MAFixedMutableDictionary:
@implementation MAFixedMutableDictionary
{
    NSUInteger _count;
    NSUInteger _size;
    _MAMutableDictionaryBucket **_array;
}
This should all be pretty straightforward. _count stores the number of objects currently in the table. _size stores the size of the table, and _array actually is the table, implemented as an array of pointers to buckets, acting as the linked list heads. The initializer is likewise simple, setting _size, allocating _array, and leaving _count at 0:
- (id)initWithSize: (NSUInteger)size
{
    if((self = [super init]))
    {
        _size = size;
        _array = calloc(size, sizeof(*_array));
    }
    return self;
}
dealloc is just the opposite, with the minor addition of needing to iterate through the table and free all of the buckets it contains. Since each bucket manages its next pointer's memory, there's no need to iterate through the list and manually free all of the nodes, as you may have done in some far-off data structures class. Instead, this happens automatically as a consequence of the standard memory management implemented in _MAMutableDictionaryBucket:
- (void)dealloc
{
    for(NSUInteger i = 0; i < _size; i++)
        [_array[i] release];
    free(_array);
    [super dealloc];
}
The count method is really simple, since it just needs to return the instance variable:
- (NSUInteger)count
{
    return _count;
}
Lookup
Now the code starts to get more complex. Next up is objectForKey:. The implementation starts off by calculating the index of the appropriate bucket by taking the hash of the key and using modulus to get it within the size of the table:
- (id)objectForKey: (id)key
{
    NSUInteger bucketIndex = [key hash] % _size;
Next, we retrieve the bucket at that index:
    _MAMutableDictionaryBucket *bucket = _array[bucketIndex];
Now, we want to loop over every element in the linked list that begins with bucket and search for key. We loop while bucket is non-nil, and return the appropriate object if we find a matching key:
    while(bucket)
    {
        if([[bucket key] isEqual: key])
            return [bucket obj];
Otherwise, move to the next bucket and keep looking:
        bucket = [bucket next];
    }
Finally, if no matching bucket was found, return nil:
    return nil;
}
Enumeration
The strategy for keyEnumerator is to iterate over the table. For each linked list we find in the table, we iterate over its nodes, then resume iterating the table. We'll use _MABlockEnumerator to implement this enumeration strategy with a block.

This block needs to be able to keep track of the current table index it's examining as well as the current list node. Since these need to persist between calls to the block and need to be modified in the block, they're implemented as __block local variables outside the block:
- (NSEnumerator *)keyEnumerator
{
    __block NSUInteger index = -1;
    __block _MAMutableDictionaryBucket *bucket = nil;
These are initialized with values that make the enumerator start looking at the very beginning of the table. With those in place, we can implement the enumerator block:
    NSEnumerator *e = [[_MABlockEnumerator alloc] initWithBlock: ^{
The bucket variable holds the bucket that was current on the last call to the block. Thus, the first thing the block does is move to the next bucket:
        bucket = [bucket next];
If we've fallen off the end of the list, then we need to move through the table and find the next list. We loop through the table searching for an index where _array does not contain nil. If we run off the end of the table while searching, then there are no more elements to enumerate, so we return nil:
        while(!bucket)
        {
            index++;
            if(index >= _size)
                return (id)nil;
            bucket = _array[index];
        }
If we avoid returning nil, then bucket now contains the next key-value pair to enumerate. We simply return the key from that bucket:
        return [bucket key];
With the enumeration block complete, all that remains is to return it:
    }];
    return [e autorelease];
}
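Since NSEnumerator supports fast enumeration, callers can now walk the keys with an ordinary for-in loop. A quick usage sketch (mine, not from the article):

for(id key in [dict keyEnumerator])
    NSLog(@"%@ = %@", key, [dict objectForKey: key]);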
Insertion and Removal
The next method to implement is removeObjectForKey:. This works much like objectForKey:, except that once it finds the appropriate bucket, it removes that bucket from the list rather than simply returning its object. Thus, it starts out the same, by finding the appropriate table index and the bucket contained there:
- (void)removeObjectForKey: (id)key
{
    NSUInteger bucketIndex = [key hash] % _size;
    _MAMutableDictionaryBucket *previousBucket = nil;
    _MAMutableDictionaryBucket *bucket = _array[bucketIndex];
However, because it needs to remove the bucket it finds from the list, the loop is a little more complex. The previousBucket variable here will track the bucket before the current one. That bucket's next property needs to be re-pointed when removing the current bucket, and there's no easy way to move backwards in the list, so we keep track of it separately. We initialize it with nil, which we'll use to handle the special case where the bucket to remove is at the head of the list.

We now loop through the list and check for a matching key:
    while(bucket)
    {
        if([[bucket key] isEqual: key])
        {
Once a matching bucket is found, there are two cases to handle. The first is the special case where the bucket is at the very beginning of the list. For this case, we need to set _array[bucketIndex] to point to the next bucket, thus removing the bucket we found from the list, and add the appropriate retain and release calls to manage the memory:
            if(previousBucket == nil)
            {
                _MAMutableDictionaryBucket *nextBucket = [[bucket next] retain];
                [_array[bucketIndex] release];
                _array[bucketIndex] = nextBucket;
            }
Otherwise, we just set previousBucket's next property to the current bucket's next, which cuts the current bucket out of the list. All memory management in this case is handled automatically by the bucket's @property code:
            else
            {
                [previousBucket setNext: [bucket next]];
            }
In both cases, once the appropriate bucket is removed from the linked list, we simply decrement _count to take into account the fact that one less entry is present, then return from the method:
            _count--;
            return;
        }
If no bucket is found, we just keep searching. We advance previousBucket to bucket and then advance bucket to the next bucket:
        previousBucket = bucket;
        bucket = [bucket next];
    }
}
And that's it. In the event that no such key exists, the loop just falls off the end of the linked list and the method returns, having done nothing.
Finally, we come to setObject:forKey:. The first thing we'll do is set up a new bucket for the new key-value pair:
- (void)setObject: (id)obj forKey: (id)key
{
    _MAMutableDictionaryBucket *newBucket = [[_MAMutableDictionaryBucket alloc] init];
    [newBucket setKey: key];
    [newBucket setObj: obj];
Next, since this method is supposed to replace any existing object that exists in the dictionary for key, we'll remove the previous object, if any:
    [self removeObjectForKey: key];
Finally, we insert the new bucket into the array by computing the index, setting the next property to the current bucket at that index, and then setting the array to contain the new bucket:
    NSUInteger bucketIndex = [key hash] % _size;
    [newBucket setNext: _array[bucketIndex]];
    [_array[bucketIndex] release];
    _array[bucketIndex] = newBucket;
All that's left now is to increment _count and return:
    _count++;
}
There's a subtle trick in the order of the above code. It's possible that either obj or key are only being kept alive by a strong reference from this dictionary instance. For example, one might write something like:
[dict setObject: [dict objectForKey: key] forKey: key];
Although this particular example is pointless, it's legal, and you can run into equivalent situations that are much less obvious. If the first thing we did was removeObjectForKey:, as might seem natural, we risk invalidating either obj or key and causing the rest of the code to crash. Instead, the first thing we do is create newBucket and set its key and obj properties. This retains key and obj and ensures that they stay alive even after removeObjectForKey: runs.
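For illustration, here is what the risky ordering would look like (my sketch, not from the article):

// BROKEN: if the dictionary holds the last strong reference to key,
// this removal could deallocate key before the lines below use it.
[self removeObjectForKey: key];

_MAMutableDictionaryBucket *newBucket = [[_MAMutableDictionaryBucket alloc] init];
[newBucket setKey: key]; // key may already be deallocated here
[newBucket setObj: obj]; // likewise for obj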
Resizing
With MAFixedMutableDictionary complete, we now just implement MAMutableDictionary on top of it. This is simply a wrapper which keeps track of the current inner MAFixedMutableDictionary, as well as its table size:
@implementation MAMutableDictionary { NSUInteger _size; MAFixedMutableDictionary *_fixedDict; }
This dictionary will create a new, larger dictionary whenever the current one gets too full. How full is too full? In hash table terms, the ratio of the number of hash table entries to the size of the hash table is called the load factor. As we discussed before,
MAFixedMutableDictionary will keep working no matter how full it gets, it simply becomes slower and slower. Just when to resize the dictionary is not entirely clear. It's a classic time/space tradeoff, where resizing the table at a lower load factor keeps the table faster at the cost of more wasted space, and waiting for a higher load factor makes the table slower but wastes less space. In theory, a good hash function should keep the table fast up to about a
0.7 load factor, so that's where we'll resize it. Here are a pair of constants that define the load factor where the table will resize:
static const NSUInteger kMaxLoadFactorNumerator = 7; static const NSUInteger kMaxLoadFactorDenominator = 10;
On to the code. The first method is the initializer. We implement
initWithCapacity:, as it's a funnel method that other
NSMutableDictionary methods will call through. We use the capacity as the initial size for the underlying fixed dictionary, with
4 as an arbitrary minimum:
- (id)initWithCapacity: (NSUInteger)capacity { capacity = MAX(capacity, 4); if((self = [super init])) { _size = capacity; _fixedDict = [[MAFixedMutableDictionary alloc] initWithSize: _size]; } return self; }
For
dealloc, all it needs to do is release the underlying fixed dictionary:
- (void)dealloc { [_fixedDict release]; [super dealloc]; }
Similarly, most of the primitive methods are simply wrappers around the
_fixedDict:
- (NSUInteger)count { return [_fixedDict count]; } - (id)objectForKey: (id)key { return [_fixedDict objectForKey: key]; } - (NSEnumerator *)keyEnumerator { return [_fixedDict keyEnumerator]; } - (void)removeObjectForKey: (id)key { [_fixedDict removeObjectForKey: key]; }
The only really interesting method is
setObject:forKey:. It, too, calls through to
_fixedDict first:
- (void)setObject: (id)obj forKey:(id)key { [_fixedDict setObject: obj forKey: key];
However, this is also where it will reallocate the underlying storage if necessary. The first thing it does is see if the current load factor has exceeded the maximum allowed:
if(kMaxLoadFactorDenominator * [_fixedDict count] / _size > kMaxLoadFactorNumerator) {
If it does, the first thing it does is create a new
MAFixedMutableDictionary with a larger size. This size is determined by doubling the previous size, although many other strategies could be used instead:
NSUInteger newSize = _size * 2; MAFixedMutableDictionary *newDict = [[MAFixedMutableDictionary alloc] initWithSize: newSize];
With the new fixed-size dictionary created, the next thing to do is to copy all of the key-value pairs across from the old one:
for(id key in _fixedDict) [newDict setObject: [_fixedDict objectForKey: key] forKey: key];
With everything copied over, all that remains is to release the old dictionary and reassign the instance variables:
[_fixedDict release]; _size = newSize; _fixedDict = newDict; } }
That's it! With this simple wrapper, we now have a resizing hash table implementation of
NSMutableDictionary.
In a real implementation, we would probably want to do a similar resizing operation in the opposite direction in
removeObjectForKey:, to prevent too much space from being wasted if the dictionary is emptied. However, for this example implementation, I left it out, as it's even more optional than increasing the size as more data is added.
Sets
I won't be showing an implementation of
NSMutableSet, but from this discussion it should be pretty clear how we could go about creating one. Conceptually, a set is basically just a dictionary with no objects, only keys. The above code could easily be used to implement
NSMutableSet by simply treating each set member as a key, and having the object be a placeholder. Better would be to remove the objects from the code altogether, and do the same basic operations on the key alone, thus wasting a little less space.
Performance
If the hash function is well behaved, and keys are very likely to be assigned to different table indexes, the dictionary will be fast, assuming that
hash itself is fast. Examining, adding, or deleting a single entry in the table gives \(\mathrm{O(1)}\) behavior for these operations in that case.
In the worst case, every key has the same table index, as would happen if the hash function were implemented as
return 0;. In this case, the hash table degenerates into a single linked list. All operations on this linked list are \(\mathrm{O(n)}\), giving similar performance to an array.
In most real-world cases, the hash function will behave well enough to achieve \(\mathrm{O(1)}\) performance on most data. When writing Cocoa code, we can usually assume that this is the case for our
NSMutableDictionary objects.
There is an interesting class of denial of service attacks based on this. Even with a well behaved hash function, it's often easy to find distinct values with the same hash or table index. By feeding specially crafted data to a program which assumes its hash tables are always fast, it's possible to cause that program to spend a great deal of time fiddling with its hash tables and even overload the server where the program is running. For more information on these, see the paper Denial of Service via Algorithmic Complexity Attacks.
Implications
This implementation can clarify some weird behaviors that may occur with
NSMutableDictionary and
NSMutableSet when they're misused.
Apple strongly cautions against putting mutable objects in sets or using them as dictionary keys, and
NSMutableDictionary copies its keys in an attempt to avoid that. Now that we have some simple code to look at, it should become pretty clear as to why this is a problem. If a key's value changes, its hash is also likely to change. When the hash changes, the table index is likely to change as well, but the
_MAMutableDictionaryBucket object stays where it was.
The net result is that the key-value pair becomes invisible to
objectForKey: and similar. However, it will still show up in
count and
keyEnumerator. If you use
keyEnumerator, you may encounter a bizarre situation where it returns a key which the dictionary then claims it doesn't contain! Furthermore, if you use
setObject:forKey:, you'll likely end up with two copies of the key in the dictionary. When enumerating over the dictionary, the key will show up twice, but both will appear to have the same object, with the other object inaccessible. When the dictionary resizes, one of the entries will disappear, and which one "wins" will be essentially random.
Similar things happen if you have a bad implementation of
hash, where two equal objects have different hash values. This can easily happen if you subclass
NSObject, reimplement
isEqual:, but don't reimplement
hash. In cases like this, looking up a key-value pair will appear to succeed or fail essentially at random, depending on whether you get lucky and hit the same table index or not. Every time the table resizes, entries will get shuffled around and previous successes may start failing, and vice versa. You may once again end up with duplicate entries in the table, and they may or may not disappear as the table resizes.
Conclusion
We've now seen examples of how to implement all three main Cocoa collection classes,
NSMutableArray,
NSMutableDictionary, and
NSMutableSet (as a minor variant of
NSMutableDictionary). While the real implementations are undoubtedly different from what we see here, these examples come close enough to have a good understanding of how this stuff works on the inside.
That's it for today. Come back next time for another exciting journey to the dark interior of the programming soul. Until then, since Friday Q&A is driven by reader suggestions for topics, please send in your ideas!
return 0;is a perfectly valid hash function, just slow. In Cocoa, the
hashmethod on
NSObjectprovides this hash function.
My initial reading of this was that NSObject's hash function was
return 0;. Maybe it'd be clearer if you mentioned something about the object's address?
[previousBucket setNext: [bucket next]];
if the setter is written in certain legitimate ways, eg as shown in:
if (newValue != _var) {
[_var release];
_var = [newValue retain];
}
If the setter releases its _next variable before retaining the input parameter, then since [bucket next] has not retained it, it will be a dangling pointer.
In this case you're probably safe since the setter is synthesized and Apple's code will be appropriately defensive, but still it's worth either pointing out the danger or defending against it with your own retain/release, or implementing your own setter with defined safe semantics.
[newValue retain];
[_var release];
_var = newValue;
It also makes me glad that ARC saves us from having to worry about stuff like that.
Add your thoughts, post a comment:
Spam and off-topic posts will be deleted without notice. Culprits may be publicly humiliated at my sole discretion. | https://www.mikeash.com/pyblog/friday-qa-2012-03-16-lets-build-nsmutabledictionary.html | CC-MAIN-2017-34 | en | refinedweb |
Next article: Friday Q&A 2011-05-20: The Inner Life of Zombies
Previous article: Friday Q&A Falls Behind
Tags: blocks fridayqna libffi trampoline
It's a week late, but it's finally time for the latest edition of Friday Q&A. About a year ago, I wrote about converting blocks into function pointers by building code at runtime. This was an interesting exercise, but ultimately impractical due to various limitations. In the meantime, I wrote MABlockClosure, a more robust and usable way of doing the same thing, but I never posted about it. Landon Fuller suggest I discuss how it works, and so that is what I will talk about today.
Recap
Blocks are an extremely useful language feature for two reasons: they allow writing anonymous functions inlined in other code, and they can capture context from the enclosing scope by referring to local variables from that scope. Among other things, this makes callback patterns much simpler. Instead of this:
struct CallbackContext { NSString *title; int value; }; static void MyCallback(id result, void *contextVoid) { struct CallbackContext *context = contextVoid; // use result, context->title, and context->value } struct CallbackContext ctx; ctx.title = [self title]; ctx.value = [self value]; CallAPIWithCallback(workToDo, MyCallback, &ctx;);
CallAPIWithCallbackBlock(workToDo, ^(id result) { // use result, [self title], [self value] });
The problem is that not all callbacks-based APIs have versions that take blocks. What
MABlockClosure and my older experimental trampoline code allow is converting a block to a function pointer that can be passed to one of these APIs. For example, if
CallAPIWithCallbackBlock didn't exist,
MABlockClosure allows writing code that's nearly as nice:
CallAPIWithCallback(workToDo, BlockFptrAuto(^(id result) { // use result, [self title], [self value] }));
Blocks ABI
Blocks compile down to a function and a couple of structs. The function holds the code, and the structs hold information about the block, including the captured context. The function contains an implicit argument, much like the
self argument to Objective-C methods, which points to the block structure. The block above translates to something like this:
void BlockImpl(struct BlockStruct *block, id info) { // code goes here }
My original attempt used a small bit of assembly code for the trampoline. This code tried to shift the arguments in a general fashion, and then insert the pointer at the front. Unfortunately, this really can't be done by the same code for all cases, so it ended up with a lot of irritating restrictions.
At the time, this was about the best that could be done. Fortunately, Apple later added type metadata to blocks. As long as you're using a compiler that's recent enough to generate this metadata (any recent
clang will do), this can be used to generate intelligent trampolines which do the appropriate argument manipulation.
libffi
Although the block type metadata provides all of the necessary information needed to perform the necessary argument transformation, it's still an extremely complicated undertaking. The exact nature of what needs to be done depends heavily on the function call ABI of the particular architecture the code is running on, and the particular argument types present.
If I had to do all of this myself, I never would have been able to put in the enormous effort required. The good news is that there is a library already built which knows how to handle all of this for a whole bunch of different architectures:
libffi.
libffi provides two major facilities. It's best known for the ability to call into an arbitrary function with arbitrary arguments whose types aren't known until runtime. A lesser-known facility provides what is essentially the opposite: it allows creating "closures" which are runtime-generated functions which capture arbitrary arguments whose types aren't known until runtime.
The latter is what we need to generate the trampoline function for the block. This captures the arguments in a form that can be manipulated from C code. That code can then manipulate the arguments as needed and use the former facility to call the block's implementation pointer.
Support Structures
The layout of a block structure is not in any published header. However, since these structures are baked into executables when they're compiled, we can safely extract them from the specification and rely on that to match.
These are the structures in question:
struct BlockDescriptor { unsigned long reserved; unsigned long size; void *rest[1]; }; struct Block { void *isa; int flags; int reserved; void *invoke; struct BlockDescriptor *descriptor; };
static void *BlockImpl(id block) { return ((struct Block *)block)->invoke; }
flagsfield which indicates various properties about the block. One of the flags indicates whether the type signature is present, which we check to ensure that the code fails early and obviously if it's not there. Another flag indicates whether the block contains a copy and dispose callback. If it does, then the location of the type signature information moves within the block descriptor struct. Here's the code for properly extracting the type signature:
static const char *BlockSig(id blockObj) { struct Block *block = (void *)blockObj; struct BlockDescriptor *descriptor = block->descriptor; int copyDisposeFlag = 1 << 25; int signatureFlag = 1 << 30; assert(block->flags & signatureFlag); int index = 0; if(block->flags & copyDisposeFlag) index += 2; return descriptor->rest[index]; }
Most of the code and data structures are encapsulated in a class called
MABlockClosure.
A lot of the necessary
libffi data structures have to be created dynamically depending on the type signature. Manually managing that memory gets irritating. Since their lifetime is tied to the life of the closure object itself, the simplest way to deal with this is to track allocations in the object. To do this, I have an
NSMutableArray. When I need to allocate memory, I create an
NSMutableData of the appropriate size, add it to this array, and then return its
mutableBytes pointer. This array is the class's first instance variable:
@interface MABlockClosure : NSObject { NSMutableArray *_allocations;
libffistores function types in a struct called
ffi_cif. I don't know what the
cifpart stands for, but this struct basically just holds an array of argument types, plus a return type. The class needs two of these: one for the function and one for the block. Although these two are similar, they aren't identical, and it's easier to just have two than try to reuse one. It's also useful to know how many arguments there are in total when doing the argument shifting, so that is also stored in an instance variable:
ffi_cif _closureCIF; ffi_cif _innerCIF; int _closureArgCount;
ffi_closurestructure, a pointer to the actual function pointer that this provides, and a pointer to the block that this whole thing is intended for:
ffi_closure *_closure; void *_closureFptr; id _block; }
- (id)initWithBlock: (id)block; - (void *)fptr; @end
-fptrmethod is just an accessor:
- (void *)fptr { return _closureFptr; }
_allocationsivar, assigns the block, and allocates a closure. It then fills out the
ffi_cifstructures to match the block's type signature. Finally, it initializes the
libfficlosure:
- (id)initWithBlock: (id)block { if((self = [self init])) { _allocations = [[NSMutableArray alloc] init]; _block = block; _closure = AllocateClosure(&_closureFptr); [self _prepClosureCIF]; [self _prepInnerCIF]; [self _prepClosure]; } return self; }
libffihas changed how it deals with closures over time. Originally, closures had to be allocated by the calling code. This chunk of memory was then passed to
libffiwhich did its thing. Afterwards, the client had to mark that code as executable. The version of
libffiwhich ships with Mac OS X works this way.
Newer versions of
libffi encapsulate all of this in calls to allocate, prepare, and deallocate closures. This is what you'll get if you build
libffi from source, and it's what you can get on iOS.
MABlockClosure is built to handle both ways.
The
AllocateClosure function uses conditional compilation to decide which technique to use. If
USE_LIBFFI_CLOSURE_ALLOC is set, it just calls through to libffi. Otherwise, it allocates the memory using
mmap, which ensures that the memory is properly aligned and can later be marked executable. Here's what that function looks like:
static void *AllocateClosure(void **codePtr) { #if USE_LIBFFI_CLOSURE_ALLOC return ffi_closure_alloc(sizeof(ffi_closure), codePtr); #else ffi_closure *closure = mmap(NULL, sizeof(ffi_closure), PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE, -1, 0); if(closure == (void *)-1) { perror("mmap"); return NULL; } *codePtr = closure; return closure; #endif }
libffior
munmapdepending on which mode it's operating in:
static void DeallocateClosure(void *closure) { #if USE_LIBFFI_CLOSURE_ALLOC ffi_closure_free(closure); #else munmap(closure, sizeof(ffi_closure)); #endif }
After allocating the closure,
-initWithBlock:then prepares the CIF structs which hold the type information for
libffi. The type information can be obtained from the block using the
BlockSighelper function shown earlier. However, this type information is in Objective-C
@encodeformat. Converting from one to the other is not entirely trivial.
The two
prep methods called by
-initWithBlock: just call through to a single common method with slightly different arguments:
- (void)_prepClosureCIF { _closureArgCount = [self _prepCIF: &_closureCIF withEncodeString: BlockSig(_block) skipArg: YES]; } - (void)_prepInnerCIF { [self _prepCIF: &_innerCIF withEncodeString: BlockSig(_block) skipArg: NO]; }
skipArgargument. This tells the method whether to skip over the first argument to the function. When generating the block's type signature, all arguments are included. When generating the closure's type signature, the first argument is skipped, and the rest are included.
The
-_prepCIF:withEncodeString:skipArg: method in turn calls through to another method which does the real work of the conversion of the
@encode string to an array of
ffi_type. It then skips over the first argument if needed, and calls
ffi_prep_cif to fill out the
ffi_cif struct:
- (int)_prepCIF: (ffi_cif *)cif withEncodeString: (const char *)str skipArg: (BOOL)skip { int argCount; ffi_type **argTypes = [self _argsWithEncodeString: str getCount: &argCount;]; if(skip) { argTypes++; argCount--; } ffi_status status = ffi_prep_cif(cif, FFI_DEFAULT_ABI, argCount, [self _ffiArgForEncode: str], argTypes); if(status != FFI_OK) { NSLog(@"Got result %ld from ffi_prep_cif", (long)status); abort(); } return argCount; }
@encodeParsing
Objective-C
@encodestrings are not very fun to work with. They are essentially a single character which indicates a primitive, or some special notation to indicate structs. In the case of method signatures, the signature string is basically just a sequence of these
@encodetypes concatenated together. The first one indicates the return type, and the rest indicate the arguments. Block signatures follow this same format.
Foundation provides a handy function called
NSGetSizeAndAlignment which helps a great deal when parsing these strings. When passed an
@encode string, it returns the size and alignment of the first type in the string, and returns a pointer to the next type. In theory, we can iterate through the types in a block signature by just calling this function in a loop.
In practice, there's a complication. For reasons I have never discovered, method signatures (and thus block signatures) have numbers in between the individual type encodings.
NSGetSizeAndAlignment is clueless about these, so it needs a bit of help to correctly parse one of these strings. I wrote a small helper function which calls
NSGetSizeAndAlignment and then skips over any digits it finds after the type string:
static const char *SizeAndAlignment(const char *str, NSUInteger *sizep, NSUInteger *alignp, int *len) { const char *out = NSGetSizeAndAlignment(str, sizep, alignp); if(len) *len = out - str; while(isdigit(*out)) out++; return out; }
libffistructures:
static int ArgCount(const char *str) { int argcount = -1; // return type is the first one while(str && *str) { str = SizeAndAlignment(str, NULL, NULL, NULL); argcount++; } return argcount; }
The
-_argsWithEncodeString:getCount:method parses an
@encodestring and returns an array of
ffi_type *. It uses another method,
-_ffiArgForEncode:, to do the final conversion of a single
@encodetype to an
ffi_type *. The first thing it does is use the
ArgCounthelper function to figure out how many types will be present, and then allocates an array of the appropriate size:
- (ffi_type **)_argsWithEncodeString: (const char *)str getCount: (int *)outCount { int argCount = ArgCount(str); ffi_type **argTypes = [self _allocate: argCount * sizeof(*argTypes)];
SizeAndAlignmentto iterate through all of the types in the string. For all of the argument types, it uses the
-_ffiArgForEncode:method, the final piece in our puzzle, to create an individual
ffi_type *and put it in the array:
int i = -1; while(str && *str) { const char *next = SizeAndAlignment(str, NULL, NULL, NULL); if(i >= 0) argTypes[i] = [self _ffiArgForEncode: str]; i++; str = next; }
outCountand returns the argument types:
*outCount = argCount; return argTypes; }
-_ffiArgForEncode:, the final piece of the puzzle. Here is the very beginning of it:
- (ffi_type *)_ffiArgForEncode: (const char *)str {
@encodestring to an
ffi_type *. To convert primitives, I use a simple lookup table approach. I build a table of every C primitive type I can think of, and the corresponding
ffi_type *.
libffi differentiates integer types by size, and has no direct equivalent to
int or
long. To help me convert between the two, I built some macros. (It turns out that
libffi built some macros for this as well. There are
#defines like
ffi_type_sint which map to the correct base
ffi_type. I didn't know about these when I wrote the code, so my method is slightly more roundabout than it needs to be.)
As I mentioned earlier, primitives are represented as single characters in an
@encode. To avoid hardcoding any of those character values, I use an expression like
@encode(type)[0] to get that single character. If this equals
str[0], then that's the primitive type encoded by the string.
My macro for signed integers first performs this check to see if the types match. If they do, it then uses
sizeof(type) to figure out how big the integer type in question is and return the appropriate
ffi_type * to match. Here's what the macro looks like:
#define SINT(type) do { \ if(str[0] == @encode(type)[0]) \ { \ if(sizeof(type) == 1) \ return &ffi;_type_sint8; \ else if(sizeof(type) == 2) \ return &ffi;_type_sint16; \ else if(sizeof(type) == 4) \ return &ffi;_type_sint32; \ else if(sizeof(type) == 8) \ return &ffi;_type_sint64; \ else \ { \ NSLog(@"Unknown size for type %s", #type); \ abort(); \ } \ } \ } while(0)
#define UINT(type) do { \ if(str[0] == @encode(type)[0]) \ { \ if(sizeof(type) == 1) \ return &ffi;_type_uint8; \ else if(sizeof(type) == 2) \ return &ffi;_type_uint16; \ else if(sizeof(type) == 4) \ return &ffi;_type_uint32; \ else if(sizeof(type) == 8) \ return &ffi;_type_uint64; \ else \ { \ NSLog(@"Unknown size for type %s", #type); \ abort(); \ } \ } \ } while(0)
ffi_types are mixed, but better safe than sorry in this case.
To round out the integer macros, I have a quick one which takes an integer type and then generates code to check for both signed and unsigned variants:
#define INT(type) do { \ SINT(type); \ UINT(unsigned type); \ } while(0)
ffi_types are named in the form
ffi_type_TYPE, where
TYPEis something close to the name in C. To aid in mapping other primitives, I made a macro to do the
@encodecheck and then return the specified pre-made
ffi_type:
#define COND(type, name) do { \ if(str[0] == @encode(type)[0]) \ return &ffi_type_ ## name; \ } while(0)
@encodestrings but which are all represented and passed in exactly the same way at the machine level. To make this a bit shorter, I wrote a short macro to check for all of the various pointer types:
#define PTR(type) COND(type, pointer)
In theory, it would be possible to support arbitrary structs by parsing the struct in the
@encode string and building up the appropriate
ffi_type to match. In practice, this is difficult and error-prone. The
@encode format is not very friendly at all. To handle most cases, there are only a small number of structs that need to be translated. These structs can be detected with a simple string compare without parsing the
@encode string, and then a simple hardcoded list of types provided to
libffi. While this won't handle all cases, by bailing out early if an unknown struct is discovered and making it easy to add new ones, this enables the programmer to quickly fix any deficiences which may be encountered.
One last macro handles structs. It takes a struct type and a list of corresponding
ffi_types. If the
@encode matches, it creates an
ffi_type for the struct, fills out the elements from the arguments given, and returns it:
#define STRUCT(structType, ...) do { \ if(strncmp(str, @encode(structType), strlen(@encode(structType))) == 0) \ { \ ffi_type *elementsLocal[] = { __VA_ARGS__, NULL }; \ ffi_type **elements = [self _allocate: sizeof(elementsLocal)]; \ memcpy(elements, elementsLocal, sizeof(elementsLocal)); \ \ ffi_type *structType = [self _allocate: sizeof(*structType)]; \ structType->type = FFI_TYPE_STRUCT; \ structType->elements = elements; \ return structType; \ } \ } while(0)
_Booltype. Also note the special handling for
char, since a plain, unqualified
charcan be either signed or unsigned:
SINT(_Bool); SINT(signed char); UINT(unsigned char); INT(short); INT(int); INT(long); INT(long long);
@encodedoes not discriminate between pointer types other than a few different kinds. The
void *case handles almost everything, and the other cases pick up the special ones:
PTR(id); PTR(Class); PTR(SEL); PTR(void *); PTR(char *); PTR(void (*)(void));
void, all of which have corresponding
libffitypes:
COND(float, float); COND(double, double); COND(void, void);
void.
That takes care of primitives. Now it's time for structs. I only handle
CGRect,
CGPoint,
CGSize, and their NS equivalents. Others could easily be added if necessary.
These structs all have elements of type
CGFloat. The type of
CGFloat can either be
float or
double depending on the platform. The first thing to do, then, is to figure out which one it is, and grab the corresponding
ffi_type:
ffi_type *CGFloatFFI = sizeof(CGFloat) == sizeof(float) ? &ffi;_type_float : &ffi;_type_double;
STRUCT(CGRect, CGFloatFFI, CGFloatFFI, CGFloatFFI, CGFloatFFI); STRUCT(CGPoint, CGFloatFFI, CGFloatFFI); STRUCT(CGSize, CGFloatFFI, CGFloatFFI);
#if !TARGET_OS_IPHONE STRUCT(NSRect, CGFloatFFI, CGFloatFFI, CGFloatFFI, CGFloatFFI); STRUCT(NSPoint, CGFloatFFI, CGFloatFFI); STRUCT(NSSize, CGFloatFFI, CGFloatFFI); #endif
ffi_type *in the event of a match. If execution reaches this far, then there were no matches. Since it's best to find out about an omission as quickly as possible, the end of the code simply logs an error and aborts:
NSLog(@"Unknown encode string %s", str); abort(); }
If you're still with me, then good news: the hard parts are done! All that remains is to use these
libffitype structures to build the closure.
When a closure is prepared, it takes three important pieces of data. One is the type information that all of the previous code worked so hard to build. One is a C function which receives the arguments in
libffi format. The last one is a context pointer which is passed into that C function. This context pointer is what allows all of the magic to happen. It allows the function to determine which instance of
MABlockClosure the call is associated with, and call through to the associated block.
Like with closure allocation and deallocation, how the closure is prepared depends on which mode
libffi is operating in. If
libffi is managing its own closure allocation, then it's just a single call to prepare the closure. Otherwise, there's a different call to set it up, and then a call to
mprotect is required to mark the memory as executable. Here's what the
-_prepClosure method looks like:
- (void)_prepClosure { #if USE_LIBFFI_CLOSURE_ALLOC ffi_status status = ffi_prep_closure_loc(_closure, &_closureCIF, BlockClosure, self, _closureFptr); if(status != FFI_OK) { NSLog(@"ffi_prep_closure returned %d", (int)status); abort(); } #else ffi_status status = ffi_prep_closure(_closure, &_closureCIF, BlockClosure, self); if(status != FFI_OK) { NSLog(@"ffi_prep_closure returned %d", (int)status); abort(); } if(mprotect(_closure, sizeof(_closure), PROT_READ | PROT_EXEC) == -1) { perror("mprotect"); abort(); } #endif }
BlockClosurefunction is what handles calls to the closure. It receives the
ffi_cif *associated with the closure, a place to put a return value, an array of arguments, and a context pointer:
static void BlockClosure(ffi_cif *cif, void *ret, void **args, void *userdata) { MABlockClosure *self = userdata;
MABlockClosureinstance, it can take advantage of all of the data that was previously constructed for the block. The first thing to do is to construct a new arguments array that can hold one more argument. The block goes into the first argument, and then the other arguments are copied in, shifted down by one:
int count = self->_closureArgCount; void **innerArgs = malloc((count + 1) * sizeof(*innerArgs)); innerArgs[0] = &self-;>_block; memcpy(innerArgs + 1, args, count * sizeof(*args));
ffi_callis used to call the block's implementation pointer. It requires a type signature, which we already generated previously. It requires a function pointer, which the
BlockImplhelper function can provide. It requires a place to put the return value, for which we can just pass
ret, since the return value should simply pass through. Finally, it requires an array of arguments, which we just built up:
ffi_call(&self-;>_innerCIF, BlockImpl(self->_block), ret, innerArgs);
free(innerArgs); }
MABlockClosureis now fully functional.
Convenience Functions
Using
MABlockClosure directly is slightly inconvenient. I built two convenience functions to make this a bit easier. The
BlockFptr function creates an
MABlockClosure instance as an associated object on the block itself. This ensures that the function pointer remains valid for as long as the block is valid:
void *BlockFptr(id block) { @synchronized(block) { MABlockClosure *closure = objc_getAssociatedObject(block, BlockFptr); if(!closure) { closure = [[MABlockClosure alloc] initWithBlock: block]; objc_setAssociatedObject(block, BlockFptr, closure, OBJC_ASSOCIATION_RETAIN); [closure release]; // retained by the associated object assignment } return [closure fptr]; } }
BlockFptrAutofunction which copies the block onto the heap, then returns the appropriate function pointer for that:
void *BlockFptrAuto(id block) { return BlockFptr([[block copy] autorelease]); }
int x = 42; void (*fptr)(void) = BlockFptrAuto(^{ NSLog(@"%d", x); }); fptr(); // prints 42!
libffiis an extremely useful library when dealing with low-level function calls where you don't know everything about them in advance. It's especially useful when coupled with Objective-C's runtime type information. The biggest hurdle is converting between the two ways of representing type information. The code presented here shows how that can be done without too much pain, and also demonstrates how to use the facilities provided by
libffito get work done.
That wraps up this week's (late) Friday Q&A. Come back in two weeks for the next installment. Until then, as always, keep sending me your ideas for topics to cover here.
A consequence of this is that the metadata isn’t generated by apple-gcc.
Add your thoughts, post a comment:
Spam and off-topic posts will be deleted without notice. Culprits may be publicly humiliated at my sole discretion. | https://www.mikeash.com/pyblog/friday-qa-2011-05-06-a-tour-of-mablockclosure.html | CC-MAIN-2017-34 | en | refinedweb |
Below are three functions that calculates a users holiday cost. The user is encouraged to enter details of his holiday which are then passed off into the functions as arguments.
def hotel_cost(days):
days = 140*days
return days
"""This function returns the cost of the hotel. It takes a user inputed argument, multiples it by 140 and returns it as the total cost of the hotel"""
def plane_ride_cost(city):
if city=="Charlotte":
return 183
elif city =="Tampa":
return 220
elif city== "Pittsburgh":
return 222
elif city=="Los Angeles":
return 475
"""this function returns the cost of a plane ticket to the users selected city"""
def rental_car_cost(days):
rental_car_cost=40*days
if days >=7:
rental_car_cost -= 50
elif days >=3:
rental_car_cost -= 20
return rental_car_cost
"""this function calculates car rental cost"""
user_days=raw_input("how many days would you be staying in the hotel?") """user to enter a city from one of the above choices"""
user_city=raw_input("what city would you be visiting?") """user to enter number of days intended for holiday"""
print hotel_cost(user_days)
print plane_ride_cost(user_city)
print rental_car_cost(user_days)
You need to convert the output of
raw_input to
int. This should work:
user_days=int(raw_input("how many days would you be staying in the hotel?")) """user to enter a city from one of the above choices"""
Note that if user enters anything but a number, this will raise an error. | https://codedump.io/share/iErdccGklO2E/1/why-can39t-i-pass-this-int-type-variable-as-an-argument-in-python | CC-MAIN-2017-34 | en | refinedweb |
I need to write a C++ code that will ask the user to enter either 1 or 2. If 1 is entered the program must count all even numbers from 0 to 100. (I got that part done) If 2 is entered you are suppose to prompt the user to enter their first, last, student id, and then print that information back for the user to see, if neither 1 or 2 is chosen, the program is to exit.
My problem is when i select 2, it will only let me enter the first name then it fills out the rest by itself. What did I do wrong?
#include <iostream> using namespace std; int main() { int c,x=0; float nm1, nm2, sid; cout<<"Please enter either the number 1 or the number 2: "<<endl; cin>>c; if (c == 1) { while (x <= 100) { cout << x << endl; x = x + 2; } } else if (c == 2) { cout<<"Please enter your first name: "; cin>>nm1; cout<<"Please enter your last name: "; cin>>nm2; cout<<"Please enter your student id "; cin>>sid; cout<<"Your name is: "<< nm1<<nm2<< "and your student ID is: "<<sid<<endl; } else cout<<"you fail at following instructions, please try again"<<endl; system("pause"); } | https://www.daniweb.com/programming/software-development/threads/315131/please-help-easy-problem | CC-MAIN-2017-34 | en | refinedweb |
AI Repositories¶
AI Servers are clients that reside on a server rather than on the end users’ client machines. An AI server (usually called AI Repository) is used to create distributed objects which are usually used to handle game logic that should not be run by end user clients.
In networked games, most of the game logic should be handled by the server. Clients shouldn’t be trusted as it’s not possible to ensure that they haven’t been compromised in one way or another.
Similar to the server repositories, for AI repositories most of the low-level
networking code is neatly hidden by Panda3D which makes setting up a basic AI
server rather simple too. Though rather than having a dedicated AIRepository
class, we have to use the
ClientRepository as, as stated before, the
AI repository is nothing else than a client.
ClientRepository.__init__( self, dcFileNames = dcFileNames, dcSuffix = 'AI', threadedNet = True)
The setup is quite similar to the one of a normal client repository which we
will take a look at in the next sections. The main difference is that for an AI
repository we pass the dcSuffix = ‘AI’ to the
ClientRepository
initialization.
This makes sure that the correct definitions of the DC definition file will be
used. Another method that should be specifically defined in an AI Repository is
the following.
def deallocateChannel(self, doID): print("Client left us: ", doID)
This function will be called whenever a client has disconnected and gives us the chance to react to its disconnection. | https://docs.panda3d.org/1.10/python/programming/networking/distributed/ai-repositories | CC-MAIN-2021-17 | en | refinedweb |
.
in code blocks
on windoes where i compile it
this is what i get
"helloworld -).
but output screen still blink after writing these lines....
well i don't get what you mean by:
Second, add the following code at the end of the main() function (right before the return statement):
[1cin.clear();
2cin.ignore(255, 'n');
3cin.get();].
i tried to put is just right before return0;
just what ever but it shows bunch of error
and when i press ctrl and F5 together it will stay but when I click debugging it closes immediately
I see the responses here started back in 2007. It is now 2015. That's eight years... so where's the book?
:)
Thanks for the great tut.
I'd love to write a book, but life has not permitted me the luxury of time as of yet. Pity, as it would be fun." .
Would you say this tutorial would be improved if it was explicit in naming, e.g., instead of using using namespace std; using the "std::" reference? I'm just wondering because I'm being told it is OK, but better to future-proof and avoid any inherited name collisions....
Sorry but I have no idea where I'm meant to put:
cin.clear();
cin.ignore(255, '.
Can I buy these tutorials on CD or something?
Thanks for having these tutorials! I like it ;)
Nope, not at this time, unfortunately. At some point I'd like to do a book, but that's far in the future. Sorry :(
No I am not using:
cin.clear();
cin.ignore(255, 'n'); :)
I don't even know what Deviant is. :)
If your program automatically closes after running it, then add those lines and it won't any more. They should work regardless of compiler, IDE, OS, etc... 'run without debugging' but it's 'start debugging'. 'Run without debugging' has become ctrl+F5 instead.
So someone pressing F5, as is recommended in the tutorial, will start debugging, in which case, for me at least, it doesn't work. However, choosing 'run without debugging' from the debug menu does help.
Apparantly someone at miscrosoft thought that it would be more userfriendly to have debugging accessed more quickly...
I hope this helps., 'n').
I find some of these very helpful..thanks.
If you're using Dev-C++, you can put on the end of main function (before return 0;) system("PAUSE"); to don't close console window immediately. This command pause the program and wait to keypress. But I think that it works only in MS Windows. 'a' character, but "bcde" are still left in the input stream. Consequently, when the code gets to cin >> chIgnore, it reads the waiting 'b'.
In order to get the screen to pause before exiting the program, does the code:
have any advantages over just using
I have so much to learn. :).
Name (required)
Website
Save my name, email, and website in this browser for the next time I comment. | https://www.learncpp.com/cpp-tutorial/a-few-common-cpp-problems/comment-page-1/ | CC-MAIN-2021-17 | en | refinedweb |
This Tutorial will Explain the Various Methods to Print Elements of an Array in Java. Methods Explained are – Arrays.toString, For Loop, For Each Loop, & DeepToString:
In our previous tutorial, we discussed the creation of Array Initialization. To begin with, we declare instantiate and initialize the array. Once we do that, we process the array elements. After this, we need to print the output which consists of array elements.
What You Will Learn:
Methods To Print An Array In Java
There are various methods to print the array elements. We can convert the array to a string and print that string. We can also use the loops to iterate through the array and print element one by one.
Let’s explore the description of these methods.
#1) Arrays.toString
This is the method to print Java array elements without using a loop. The method ‘toString’ belong to Arrays class of ‘java.util’ package.
The method ‘toString’ converts the array (passed as an argument to it) to the string representation. You can then directly print the string representation of the array.
The program below implements the toString method to print the array.
import java.util.Arrays; public class Main { public static void main(String[] args) { //array of strings String[] str_array = {"one","two","three","four","five"}; System.out.println("Array elements printed with toString:"); //convert array to string with Arrays.toString System.out.println(Arrays.toString(str_array)); } }
Output:
As you can see, its just a line of code that can print the entire array.
#2) Using For Loop
This is by far the most basic method to print or traverse through the array in all programming languages. Whenever a programmer is asked to print the array, the first thing that the programmer will do is start writing a loop. You can use for loop to access array elements.
Following is the program that demonstrates the usage of for loop in Java.
public class Main { public static void main(String[] args) { Integer[] myArray = {10,20,30,40,50}; System.out.println("The elements in the array are:"); for(int i =0; i<5;i++) //iterate through every array element System.out.print(myArray[i] + " "); //print the array element } }
Output:
The ‘for’ loop iterates through every element in Java and hence you should know when to stop. Therefore to access array elements using for loop, you should provide it with a counter that will tell how many times it has to iterate. The best counter is the size of the array (given by length property).
#3) Using For-Each Loop
You can also use the forEach loop of Java to access array elements. The implementation is similar to for loop in which we traverse through each array element but the syntax for forEach loop is a little different.
Let us implement a program.
public class Main { public static void main(String[] args) { Integer myArray[]={10,20,30,40,50}; System.out.println("The elements in the array are:"); for(Integer i:myArray) //for each loop to print array elements System.out.print(i + " "); } }
Output:
When you use forEach, unlike for loop you don’t need a counter. This loop iterates through all the elements in the array until it reaches the end of the array and accesses each element. The ‘forEach’ loop is specifically used for accessing array elements.
We have visited almost all the methods that are used to print arrays. These methods work for one-dimensional arrays. When it comes to printing multi-dimensional arrays, as we have to print those arrays in a row by column fashion, we need to slightly modify our previous approaches.
We will discuss more on that in our tutorial on a two-dimensional array.
#4) DeepToString
‘deepToString’ that is used to print two-dimensional arrays is similar to the ‘toString’ method which we discussed earlier. This is because if you just use ‘toString’, as the structure is array inside the array for multidimensional arrays; it will just print the addresses of the elements.
Hence we use the ‘deepToString’ function of Arrays class to print the multi-dimensional array elements.
The following program will show the ‘deepToString’ method.
import java.util.Arrays; public class Main { public static void main(String[] args) { //2D array of 3x3 dimensions int[][] array_2d = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}; System.out.println("Two-dimensional Array is as follows:"); System.out.println(Arrays.deepToString(array_2d)); //convert 2d array to string and display } }
Output:
We will discuss some more methods of printing multidimensional arrays in our tutorial on multidimensional arrays.
Frequently Asked Questions
Q #1) Explain the toString method.
Answer: ‘toString()’ method is used to convert any entity passed to it to a string representation. The entity can be a variable, an array, a list, etc.
Q #2) What is the Arrays.toString in Java?
Answer:‘toString ()’ method returns the string representation of the array that is passed to it as an argument. The elements of the array are enclosed in a square ([]) bracket when displayed using the ‘toString()’ method.
Q #3) Do Arrays have a toString method?
Answer: There is no direct ‘toString’ method that you can use on an array variable. But the class ‘Arrays’ from ‘java.util’ package has a ‘toString’ method that takes the array variable as an argument and converts it to a string representation.
Q #4) What is ‘fill’ in Java?
Answer: The fill () method is used to fill the specified value to each element of the array. This method is a part of the java.util.Arrays class.
Q #5) Which technique/loop in Java specifically works with Arrays?
Answer: The ‘for-each’ construct or enhanced for loop is a loop that specifically works with arrays. As you can see, it is used to iterate over each element in the array.
Conclusion
In this tutorial, we explained the methods that we can use to print arrays. Mostly we employ loops to traverse and print the array elements one by one. In most cases, we need to know when to stop while using loops.
ForEach construct of Java is specifically used to traverse the object collection including arrays. We have also seen the toString method of Arrays class that converts the array into a string representation and we can directly display the string.
This tutorial was for printing a one-dimensional array. We also discussed a method of printing multi-dimensional arrays. We will discuss the other methods or variations of existing methods when we take up the topic of multi-dimensional arrays in the latter part of this series. | https://www.softwaretestinghelp.com/java/print-elements-of-java-array/ | CC-MAIN-2021-17 | en | refinedweb |
2¢ in Java type system enhancement
@see
- JSR 269: Pluggable Annotation Processing API
- JSR 308: Annotations on Java Types
- Checkers framework
Annotations were shown up first time in Java 5. They are providing meta information to compile and runtime. In Java 8 annotation support was extended, so now you can put annotation almost everywhere in your code.
This feature can help to declare constraints on existing types
Let’s try!
I expected that replacing the following code
void sendTopSecret(String secretMsg){ notSecuredClient.send(secretMsg); }
with
void sendTopSecret(@Encrypted String secretMsg){ notSecuredClient.send(secretMsg); }
will fail compilation because of sendTopSecret gets an unannotated string.
But the compilation passes with no errors
It’s happen becauss the compiler can verify the only syntax of annotation, not the semantics. It can’t distinguish between @Encrypted String secretMsg and String secretMsg
We need to implement processor to verify compiled code for every type of checkers we want to use
Build workflow will change from:
Source ↦ Compiler ↦ (if passes) Executable
To:
Source ↦ Compiler ↦ (if passes) Executable ↦ (optional) Type checkers
This way of verification allows us to create a stronger type system without changing java lang itself Despite bit limited and more complex way these features can help a lot to produce more sophisticated and error-prone code
See example below
1) Create an annotation
package com.tikalk.simple.annotation.processing.demo; import java.lang.annotation.ElementType; import java.lang.annotation.Retention; import java.lang.annotation.RetentionPolicy; import java.lang.annotation.Target; @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD) public @interface TestAnotation {}
And annotation processor
An annotation processor in our case will take compiled byte code as input and validate it .
package com.tikalk.simple.annotation.processing.demo; import javax.annotation.processing.AbstractProcessor; import javax.annotation.processing.RoundEnvironment; import javax.lang.model.SourceVersion; import javax.lang.model.element.Element; import javax.lang.model.element.TypeElement; import java.util.Arrays; import java.util.HashSet; import java.util.Set; public class TestAnotationProcessor extends AbstractProcessor { @Override public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment env) { // implement processor logic here return false; } @Override public SourceVersion getSupportedSourceVersion() { return SourceVersion.latestSupported(); } @Override public Set<String> getSupportedAnnotationTypes() { return new HashSet<>(Arrays.asList(TestAnotation.class.getName())); } }
2) Put file
javax.annotation.processing.Processor in
resources/META-INF/services/
And write down all your processors there
3) add compiler argument in pom.xml
<configuration> …. <compilerArgument>-proc:none</compilerArgument> ….. </configuration>
All that we should do in client code is to add dependency and on the build time the processor will be invoked and do the verification logic
Good news: we have not implemented it by ourselves Checkers framework has an implementation of many kinds of checkers already. If you will want to you can implement custom checker by yourself it’s quite easy
See example checkers framework demo
Checkers framework integrates with build tools :
- Maven
- Gradle
- Ant and with IDEs
- IntelliJ
- Eclipse
- Netbeans
Or just use javac with “-processor”
Summary:
This approach can guarantee the absence of errors and reduce the number of runtime checks Documentation and maintainability are improving as nice side effect.
Backdraws is time to set up the specification and write types. Also, it produces false positive (can be suppressed)
We will contact you as soon as possible. | https://www.tikalk.com/posts/2018/11/02/2-in-java-type-system-enhancement/ | CC-MAIN-2021-17 | en | refinedweb |
Subject: [Boost-build] changing file output suffixes
From: McLemon, Sean (Sean.McLemon_at_[hidden])
Date: 2009-06-26 05:58:52
Hi,
(Originally posted in Boost-users, apologies for the repetition :)). Our
toolchain generates files ending in ".doj" for objects, so I've been
trying to configure my custom toolset module to use this. My original
module is based on acc.jam but qcc.jam had something that looked like it
would do the trick, so I lifted that and chucked into my module:
import type ;
type.set-generated-target-suffix OBJ : <toolset>bfin : doj ;
type.set-generated-target-suffix STATIC_LIB : <toolset>bfin : dlb ;
However my build of Boost now fails, with a fairly long error tracing:
smclemo_at_edin-angus /usr/src/boost_1_39_0
$ bjam toolset=bfin
WARNING: No python installation configured and autoconfiguration
failed. See
for configuration instructions or pass --without-python to
suppress this message and silently skip all Boost.Python
targets
Building C++ Boost.
After the build, the headers will be located at
C:\cygwin\usr\src\boost_1_39_0
The libraries will be located at
C:\cygwin\usr\src\boost_1_39_0\stage\lib
Use 'bjam install --prefix=<path>' if you wish to install headers and
libraries to a different location and remove the source tree.
C:/cygwin/usr/src/boost_1_39_0/tools/build/v2/build\property.jam:613: in
find-replace from module object(property-map)@1
error: Ambiguous key <target-type>OBJ <asynch-exceptions>off
<conditional>@Jamfile</C:/cygwin/usr/src/boost_1_39_0>%Jamfile</C:/cygwi
n/usr/src/boost_1_39_0>.handle-static-runtime <debug-symbols>on
<define>BOOST_ALL_NO_LIB=1 <define>BOOST_DATE_TIME_STATIC_LINK
<define>DATE_TIME_INLINE <exception-handling>on <extern-c-nothrow>off
<hardcode-dll-paths>true <host-os>windows <include>. <inlining>off
<install-dependencies>off <link>static <main-target-type>LIB
<optimization>off <os>NT <preserve-test-targets>on <profiling>off
<python-debugging>off <python>2.5 <rtti>on <runtime-debugging>on
<runtime-link>shared <stdlib>native <suppress-import-lib>false
<symlink-location>project-relative
<tag>@Jamfile</C:/cygwin/usr/src/boost_1_39_0>%Jamfile</C:/cygwin/usr/sr
c/boost_1_39_0>.tag <target-os>windows <target>object(file-target)@429
<threadapi>win32 <threading>multi <toolset-bfin:version>8.0.7.1
<toolset>bfin <user-interface>console <variant>debug
<warnings-as-errors>off <warnings>on
<snip>
I'm guessing I'm doing something probably a bit stupid or missing
something (I'm still at the fiddling around stage yet, so this is very
likely). I've attached the module (bfin.jam) - hopefully someone is able
to point me in the right direction.
Thanks,
- Sean
Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/boost-build/2009/06/22043.php | CC-MAIN-2021-17 | en | refinedweb |
Hi Jeffrey,
Did you follow these steps from the documentation:.
Thx and greetings
Nicola
Thanks Nicola.
How do I "enable" Extended Tracking in the Inspector? Is it adding a component named "Extended Image Tracking Controller Script" (screenshot included)? I don't see a size or element field.
Do I keep the object tracker script enabled in addition to the Extended Image Tracking Controller?
Thanks!
Hi Jeffrey,
thanks for pointing that out. The extended tracking toggle in the inspector is indeed missing.
There is a workaround though. You can just set this setting via script like this:
... using Wikitude; public class SomeClass : MonoBehaviour { public ObjectTrackable trackable; void Start() { trackable.ExtendedTracking = true; trackable.TargetsForExtendedTracking = new string[1]{"*"}; } ...
I hope the workaround fits your needs. Meanwhile, we will fix the issue and release the fix in a future update.
Kind regards,
Gökhan
That's perfect, thanks Gökhan.
Jeffrey Robbins
Hello,
I am having difficulty with the Object Tracker pre fab in Unity.
I would like to enable extended tracking so that the augmentation is still visible in the previous pose. My scene is in a long subterranean alley. The marker will be visible when a user points the app at the building facade. As he/she walks down the alley, the complete marker will be lost quickly and I wish for them to retain partial views of the initial augment.
But I don't see extended tracking as an option in the inspector.
I used this tutorial as the basis for my scene.
I am using Unity 2019.3.9fl
I am using Wikitude_Unity_9-2-0_9-2-0_2020_07_08_12_55_55 SDK
I am using | https://support.wikitude.com/support/discussions/topics/5000095103 | CC-MAIN-2021-17 | en | refinedweb |
Qt JNI Messenger
Demonstrates communication between Java code and QML or C++ using NJI calls.
This example demonstrates how to add a custom Java class to an Android application, and how to both call it from C++ and call C++ functions from Java using the JNI convenience APIs in the Qt Android Extras module. The application UI is created by using Qt Quick.
When clicking the send button, a message will be sent from QML to Java class though the C++ class and a log of that is shown in the screen view. Logs also can be seen from the Android logcat of the messages being exchanged, which would be similar to:
I System.out: This is printed from JAVA, message is: QML sending to Java: Hello from QML D libjnimessenger_armeabi-v7a.so: qml: QML received a message: Hello from JAVA!
Running the Example
To run the example from Qt Creator, open the Welcome mode and select the example from Examples. For more information, visit Building and Running an Example.
Calling Java Methods from C++ Code
We define a custom Java class called
JniMessenger in the JniMessenger.java file:
package org.qtproject.example.jnimessenger;

public class JniMessenger
{
    private static native void callFromJava(String message);

    public JniMessenger() {}

    public static void printFromJava(String message)
    {
        System.out.println("This is printed from JAVA, message is: " + message);
        callFromJava("Hello from JAVA!");
    }
}
Note: The custom Java class can extend other classes like QtActivity, Activity or any other Java class.
In the jnimessenger.cpp file, we call the function
printFromJava(String message) by first creating a
QAndroidJniObject for the Java String that we want to send and then invoking a JNI call with
callStaticMethod<>() while providing the method signature:
void JniMessenger::printFromJava(const QString &message)
{
    QAndroidJniObject javaMessage = QAndroidJniObject::fromString(message);
    QAndroidJniObject::callStaticMethod<void>("org/qtproject/example/jnimessenger/JniMessenger",
                                              "printFromJava",
                                              "(Ljava/lang/String;)V",
                                              javaMessage.object<jstring>());
}
That call will then execute the following on the Java side, which prints the message to
System.out:
public static void printFromJava(String message)
{
    System.out.println("This is printed from JAVA, message is: " + message);
    callFromJava("Hello from JAVA!");
}
Calling QML/C++ Functions from Java Code
Directly after that, our native function
callFromJava(String message) will be called, which is then handled on the C++ side. Note that this method has to be declared as
native at the top of the Java class as:
private static native void callFromJava(String message);
To be able to call C++ functions from Java, in our jnimessenger.cpp file, we need to register those functions using
RegisterNatives() as follows:
JNINativeMethod methods[] {{"callFromJava", "(Ljava/lang/String;)V", reinterpret_cast<void *>(callFromJava)}};
(See Java Native Methods for more details).
We would need to register the functions' signatures in
methods[]; each entry holds the method's name as it appears in the Java class, its JNI signature (parameter and return types), and the pointer to the C++ function.
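The table above only declares the native methods; it still has to be handed to the JNI environment. A minimal sketch of that wiring is shown below; the helper name registerNativeMethods and the exact call site are assumptions, while QAndroidJniObject, QAndroidJniEnvironment, and RegisterNatives are real Qt Android Extras / JNI APIs:

// Sketch: hand the methods table to JNI so Java can resolve callFromJava.
static void registerNativeMethods()
{
    JNINativeMethod methods[] {{"callFromJava", "(Ljava/lang/String;)V",
                                reinterpret_cast<void *>(callFromJava)}};

    QAndroidJniObject javaClass("org/qtproject/example/jnimessenger/JniMessenger");
    QAndroidJniEnvironment env;
    jclass objectClass = env->GetObjectClass(javaClass.object<jobject>());
    env->RegisterNatives(objectClass, methods, sizeof(methods) / sizeof(methods[0]));
    env->DeleteLocalRef(objectClass);
}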
This ensures that our C++ function is available from within the Java call. Now, that function could simply print the message it received from Java to the debug log, but we want to forward the received message to the QML components so that it gets displayed in our text view, so we get:
static void callFromJava(JNIEnv *env, jobject /*thiz*/, jstring value)
{
    emit JniMessenger::instance()->messageFromJava(env->GetStringUTFChars(value, nullptr));
}
Now, we need to implement the necessary
Connections in the QML code to receive the message from C++, which we would print into the
Text view with the id
messengerLog:
Connections {
    target: JniMessenger
    function onMessageFromJava(message) {
        var output = qsTr("QML received a message: %1").arg(message)
        print(output)
        messengerLog.text += "\n" + output
    }
}
Example project @ code.qt.io
See also Qt for Android and Qt Android Extras. | https://doc.qt.io/qt-5/qtandroidextras-jnimessenger-example.html | CC-MAIN-2021-17 | en | refinedweb |
Can somone help me understand this
I have given up trying to understand how this even works and can't find anything to help me understand. Can anyone help me step by step please 😔 Does the isEqual() method store the value in a field or something? I'm not getting how test() returns the boolean. How is it able to see what's passed to the isEqual method? 🤯
the value passed to Predicate.isEqual is stored inside the returned object (assigned to 'str')...
Predicate.isEqual is a constructor (or calls a constructor) and returns a new object which holds the string passed as an argument, and (at least) one method 'test'... inside the 'test' method of the Predicate.isEqual object, the value passed as an argument is available, so it's checked against the 'test' argument... the boolean is returned by the 'test' method ^^
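To make that concrete, here is a small runnable illustration. The class name Demo and the sample strings are invented for this example, but Predicate.isEqual and test are the real java.util.function API:

import java.util.function.Predicate;

public class Demo {
    public static void main(String[] args) {
        // "Java" is captured inside the object returned by isEqual...
        Predicate<String> str = Predicate.isEqual("Java");

        // ...so test() can compare its argument against that stored value.
        System.out.println(str.test("Java"));   // true
        System.out.println(str.test("Kotlin")); // false
    }
}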
Simple way to think about it. When you call the isEqual(var) method, it returns the following:
s -> var.equals(s)
This lambda expression is the body for test(s):
boolean test(s) {
    return var.equals(s)   // var here is the string you passed to the isEqual method
}
visph woah I think my brain died 😆 is it possible to create an example code that does this so I can see how it works? In my mind I'm thinking: how is test(), the abstract method of the interface, getting hold of the argument passed to the static isEqual method? Are there fields involved in the interface? Because I can't see any.
visph is the anonymous class the final class that implemented isEqual? Or did test become isEqual?
there's no 'anonymous' classes... at most hidden classes ^^ the final class is the one used to build the object returned by Predicate.isEqual, so the object has a 'test' method implemented...
visph Are you able to create an example code? I'm so confused because I thought there was an inner anonymous object created which was stored in the str variable.
OOP:
class = blueprint
object = instance of class
the class is stored outside of instances (but is linked to and shared between instance objects)
visph I know the difference. I know that you can't create an instance of an interface, which made me think that an anonymous inner object is created and stored in the str variable.
str stores a Predicate object with a 'test' method which uses the value passed into the Predicate object: there's no need to have a hidden inner object, but it could be implemented in that way (even if there's no real reason to do so) ^^
visph so when I called the static isEqual on the Predicate interface, did that implement the test method? I'm just trying to figure out how test was able to compare both objects...
public interface Predicate<T> {
    boolean test(T t);

    static <T> Predicate<T> isEqual(Object target) {
        return object -> target.equals(object);
    }
}

class IsItJava implements Predicate<String> {
    public boolean test(String s) {
        return "Java".equals(s);
    }
}

public class Program {
    public static void main(String[] args) {
        // Predicate<String> str = object -> "Java".equals(object);
        IsItJava str = new IsItJava();
        System.out.println(str.test("Java"));
    }
}
Very simple: you just create a Predicate object and give it the value. The object remembers the value you passed when creating it. When you invoke the test method, you are using a method of that object which you created before. The test method will do the job for you, so you can define the target value once and test against it as many times as you need.
It compares the string you passed with the one captured by the function you called.
| https://www.sololearn.com/Discuss/2715087/can-somone-help-me-understand-this/ | CC-MAIN-2021-17 | en | refinedweb |
There are a few ways to create scrollable lists in React Native. Two of the common options available in the React Native core are
ScrollView and
FlatList components. Each has its strength, and in this tutorial, we'll dive deep to create a search bar with
FlatList component.
The final result you are going to achieve at the end of this tutorial is shown below.
Table of contents
- Getting started
- What is FlatList?
- Basic usage of a FlatList component
- Fetching data from Remote API in a FlatList
- Adding a custom Separator to FlatList component
- Adding a Search bar
- Run the app
- Add clear button to input text field
- Conclusion
Getting started
For the demo we are going to create in this tutorial, I am going to use Expo. You are free to choose and use anything between an Expo CLI or a
react-native-cli.
To start, let's generate a React Native app using Expo CLI and then install the required dependency to have a charming UI for the app. Open up a terminal window and run the following commands in the order they are mentioned.
expo init searchbarFlatList
cd searchbarFlatList
yarn add @ui-kitten/components @eva-design/eva lodash.filter
expo install react-native-svg
Note: The dependency
react-native-svg is required as a peer dependency for the UI Kitten library.
UI Kitten is now ready to use. To check that everything has installed correctly, let's modify
App.js with the following snippet:
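The snippet itself did not survive extraction; the following is a minimal sketch of what it plausibly contained, inferred from the imports and props used later in this tutorial (the Layout usage and the HOME text are assumptions):

import React from 'react'
import { ApplicationProvider, Layout, Text } from '@ui-kitten/components'
import { mapping, light as lightTheme } from '@eva-design/eva'

// A placeholder screen, just to confirm UI Kitten renders.
const HomeScreen = () => (
  <Layout style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
    <Text category='h1'>HOME</Text>
  </Layout>
)

const App = () => (
  <ApplicationProvider mapping={mapping} theme={lightTheme}>
    <HomeScreen />
  </ApplicationProvider>
)

export default App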
The
ApplicationProvider accepts two props:
mapping and
theme.
To run this demo, open up the terminal window and execute the following command.
expo start
I am using an iOS simulator for the demo. Here is the output of the above code snippet.
What is FlatList?
The component
FlatList is an efficient way to create scrolling data lists in a React Native app. It has a simple API to work with and displays a large amount of information.
By default, you can just pass in an array of data and this component will do its work. You usually do not have to take care of formatting the data.
Basic usage of a FlatList component
A FlatList component has three primary props:
- data: the array of data that the list renders.
- renderItem: takes an individual item from the data array and renders it on the UI.
- keyExtractor: tells the list to use the unique identifier or id for an individual element.
To understand this, let's build a mock array of data and use
FlatList to display it on our demo app. To start, import the following statements in
App.js file.
import React from 'react' import { FlatList, View, Text } from 'react-native'
Then, create an array of mock data.
const mockData = [
  { id: '1', text: 'Expo 💙' },
  { id: '2', text: 'is' },
  { id: '3', text: 'Awesome!' }
]
Now, modify the
HomeScreen component with the following snippet:
const HomeScreen = () => (
  <View style={{ flex: 1, paddingHorizontal: 20, paddingVertical: 20, marginTop: 40 }}>
    <FlatList
      data={mockData}
      keyExtractor={item => item.id}
      renderItem={({ item }) => (
        <Text style={{ fontSize: 22 }}>
          {item.id} - {item.text}
        </Text>
      )}
    />
  </View>
)
If the Expo cli command to run the development server is still running, you are going to get the following result.
Fetching data from Remote API in a FlatList
You can even play around with it. Try to fetch data from a real-time remote API and display them in the list instead of mock data.
For a start, you can use a public API URL such as Randomuser.me API. The result we're hoping to obtain at the end of this section is displayed below.
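For orientation, each element of the results array from this API carries many fields; the ones this tutorial reads look roughly like this (the values are illustrative placeholders, not real API output):

{
  "results": [
    {
      "name": { "first": "Jane", "last": "Doe" },
      "email": "jane.doe@example.com",
      "picture": { "thumbnail": "https://randomuser.me/api/portraits/thumb/women/1.jpg" }
    }
  ]
}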
Open
App.js and add a state object with some properties to keep track of data from the Random User API. Also, do not forget to modify the import statements.
// modify the import statements as below
import React from 'react'
import { FlatList, View, ActivityIndicator, TouchableOpacity } from 'react-native'
import { ApplicationProvider, Text, Avatar } from '@ui-kitten/components'
import { mapping, light as lightTheme } from '@eva-design/eva'

// add a state object to the HomeScreen component
class HomeScreen extends React.Component {
  state = {
    loading: false,
    data: [],
    page: 1,
    seed: 1,
    error: null
  }

  // ... rest of the code
}
With the HTTP request to the API URL, let us fetch the first 20 results for now. Create a handler method called
makeRemoteRequest that uses JavaScript's
fetch(url) where
url is the API request. It will fetch the results in JSON format. In case of a successful response from the API, the loading indicator (which we're going to add later) will be false.
Also, using the lifecycle method
componentDidMount, you can render the list of random users at the initial render of the
HomeScreen component.
componentDidMount() {
  this.makeRemoteRequest()
}

makeRemoteRequest = () => {
  const { page, seed } = this.state
  const url = `https://randomuser.me/api/?seed=${seed}&page=${page}&results=20`
  this.setState({ loading: true })

  fetch(url)
    .then(res => res.json())
    .then(res => {
      this.setState({
        data: page === 1 ? res.results : [...this.state.data, ...res.results],
        error: res.error || null,
        loading: false
      })
    })
    .catch(error => {
      this.setState({ error, loading: false })
    })
}
Next, add a
renderFooter handler method that is going to display a loading indicator based on the value from the state object. This indicator is shown while the list of data is still being fetched. When the value of
this.state.loading is true, using the
ActivityIndicator component from react-native, a loading indicator is shown on the UI screen.
renderFooter = () => {
  if (!this.state.loading) return null

  return (
    <View style={{ paddingVertical: 20, borderTopWidth: 1, borderColor: '#CED0CE' }}>
      <ActivityIndicator animating size='large' />
    </View>
  )
}
Here is the output you are going to get when the loading indicator is shown.
Adding a custom Separator to FlatList component
Previously, you learned about the three most important props in the FlatList component. It is so flexible that it comes with extra props to render different components to make the UI pleasing to the user. One such prop is called
ItemSeparatorComponent. You can add your own styling with custom JSX.
To do so, add another handler method called
renderSeparator. It consists of rendering a
View with some styling.
renderSeparator = () => {
  return (
    <View
      style={{
        height: 1,
        width: '86%',
        backgroundColor: '#CED0CE',
        marginLeft: '5%'
      }}
    />
  )
}
This completes all of the handler methods currently required. Now, let's replace the previous
FlatList component in
App.js with the following snippet.
A list of user names is going to be rendered, with each individual item representing a user. When pressed, it shows an alert message for now, but in a real app it would go on to display the complete user profile or the user's contact details.
The individual items in the list are going to be separated by the
renderSeparator method, and each item is going to display a user image which is composed of the
Avatar component from
@ui-kitten/components. The data comes from the state object.
<FlatList
  data={this.state.data}
  renderItem={({ item }) => (
    <TouchableOpacity onPress={() => alert('Item pressed!')}>
      <View style={{ flexDirection: 'row', padding: 16, alignItems: 'center' }}>
        <Avatar
          source={{ uri: item.picture.thumbnail }}
          size='giant'
          style={{ marginRight: 16 }}
        />
        <Text category='s1' style={{ color: '#000' }}>{`${item.name.first} ${item.name.last}`}</Text>
      </View>
    </TouchableOpacity>
  )}
  keyExtractor={item => item.email}
  ItemSeparatorComponent={this.renderSeparator}
  ListFooterComponent={this.renderFooter}
/>
From the above snippet, you can also notice that the loading indicator handler method
renderFooter() is also used as the value of a prop called
ListFooterComponent.
You can also use this prop to render other information at the bottom of all the items in the list. One example is to fetch more items in the list and show the loading indicator when the request is made.
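The tutorial stops short of implementing that, but a short sketch of the usual pattern might look like the following; handleLoadMore is a made-up name, and it relies on makeRemoteRequest appending results when page > 1, as in the snippet above:

// Hypothetical infinite-scroll handler: bump the page, then re-fetch.
handleLoadMore = () => {
  this.setState(
    prevState => ({ page: prevState.page + 1 }),
    () => this.makeRemoteRequest()
  )
}

// Wire it into the FlatList alongside the other props:
<FlatList
  // ...same props as above
  onEndReached={this.handleLoadMore}
  onEndReachedThreshold={0.5}
/>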
Here is the output so far.
Adding a Search bar
To create a search bar on top of the FlatList, you need a component that scrolls away when the list is scrolled. One possible solution is to create a custom Search bar component and render it as the value of
ListHeaderComponent prop in a FlatList.
Open
App.js and add the following prop to the list.
<FlatList
  // rest of the props remain same
  ListHeaderComponent={this.renderHeader}
/>
The search bar component is going to be an input field that can take the user's name from the end-user. To build one, let us start by modifying the import statements as below.
import filter from 'lodash.filter'
import { ApplicationProvider, Text, Avatar, Input } from '@ui-kitten/components'
Next, modify the
state object and add the following variables to it. The
query is going to hold the search term when the input is provided. The
fullData is a temporary array that a handler method will use to filter the user's name on the basis of a query.
state = {
  // add the following
  query: '',
  fullData: []
}
Since you are already storing the
results fetched from the remote API in the state variable
data, let us do the same for
fullData as well. Add the following inside the makeRemoteRequest handler method:

fetch(url)
  .then(res => res.json())
  .then(res => {
    this.setState({
      data: page === 1 ? res.results : [...this.state.data, ...res.results],
      error: res.error || null,
      loading: false,
      // ---- ADD THIS ----
      fullData: res.results
    })
  })
  .catch(error => {
    this.setState({ error, loading: false })
  })
Next, add the handler method.
handleSearch = text => {
  const formattedQuery = text.toLowerCase()
  const data = filter(this.state.fullData, user => {
    return this.contains(user, formattedQuery)
  })
  this.setState({ data, query: text })
}
The
contains handler method is going to look for the query. It accepts two parameters: the user object (whose first name, last name, and email it checks) and the lowercased query from
handleSearch().
contains = ({ name, email }, query) => {
  const { first, last } = name

  if (first.includes(query) || last.includes(query) || email.includes(query)) {
    return true
  }

  return false
}
Lastly, add
renderHeader to render the search bar on the UI.
renderHeader = () => (
  <View
    style={{
      backgroundColor: '#fff',
      padding: 10,
      alignItems: 'center',
      justifyContent: 'center'
    }}>
    <Input
      autoCapitalize='none'
      autoCorrect={false}
      onChangeText={this.handleSearch}
      status='info'
      placeholder='Search'
      style={{
        borderRadius: 25,
        borderColor: '#333',
        backgroundColor: '#fff'
      }}
      textStyle={{ color: '#000' }}
    />
  </View>
)
That's it to add a search bar to the FlatList component.
Run the app
To run the app, make sure the
expo start command is running. Next, go to Expo client and you are going to be prompted by the following screen:
Next, try to add a user name from the list being rendered.
Add clear button to input text field
The last thing I want to emphasize is that even when using a custom UI component from a UI library such as UI Kitten, you can still use the general
TextInputProps from the React Native core as well. A few examples are props such as
autoCapitalize and
autoCorrect.
Let us add another prop called
clearButtonMode that allows the input field to have a clear button appear on the right side. Add the prop to the
Input inside
renderHeader().
<Input
  // rest of the props remain same
  clearButtonMode='always' // shows the clear button whenever text is present (iOS only)
/>
Now go back to the Expo client and see it in action.
Conclusion
The screen implemented in this demo is from one of the templates from Crowdbotics' react-native collection.
We use UI Kitten for our latest template libraries. Find more about how to create custom screens like this from our open source project here.
You can also find the source code from this tutorial at this Github repo. | https://blog.crowdbotics.com/add-search-bar-flatlist-react-native-apps/ | CC-MAIN-2021-17 | en | refinedweb |