Dataset columns: markdown (string, 0 to 37k chars), code (string, 1 to 33.3k chars), path (string, 8 to 215 chars), repo_name (string, 6 to 77 chars), license (categorical, 15 values).
Scraping the Government Response
url='https://publications.parliament.uk/pa/cm201617/cmselect/cmwomeq/963/96302.htm' #Inconsistency across different reports in terms of presentation, linking to evidence
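The cell above only records the report URL. As a rough sketch of the scraping step it belongs to (assuming the standard `requests` and `BeautifulSoup` libraries, which are not shown being imported in this excerpt), fetching the page and listing its hyperlinks, e.g. to locate links to evidence, could look like this:

```python
import requests
from bs4 import BeautifulSoup

# Fetch the committee report page defined in `url` above.
response = requests.get(url)
response.raise_for_status()

# Parse the HTML and list hyperlinks, e.g. to find links to evidence documents.
soup = BeautifulSoup(response.text, 'html.parser')
for link in soup.find_all('a', href=True):
    print(link['href'])
```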
notebooks/Committee Reports.ipynb
psychemedia/parlihacks
mit
The first line is what's new here: in it we ask the user for input (their name) and store the input they entered in a variable named `name`. The moment Python reaches `input()`, it pauses everything until it receives input from the user. It then "replaces" `input()` with the input it received. For example, if I entered *Moishalah* as the input, what actually runs is effectively the following lines (compare with the code above):
name = "Moishalah"
message = "Hello, " + name + "!"
print(message)
week01/5_Input_and_Casting.ipynb
PythonFreeCourse/Notebooks
mit
Exercise

Write code that asks for three pieces of input: a first name, a last name, and a date of birth. The code should display a friendly greeting to the user. For example, for the inputs `Israel`, `Cohen`, `22/07/1992`, it would display:

Hi Israel Cohen! Your birthday is on 22/07/1992.

(A possible solution sketch appears after the type examples below.)

Converting values

Anyone who remembers the previous lessons well surely knows that every value we write has a kind (or "type").
type(5)
type(1.5)
type('Hello')
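As referenced in the exercise above, here is one possible solution sketch (the variable names are my own choice, not prescribed by the course):

```python
first_name = input("What is your first name? ")
last_name = input("What is your last name? ")
birthday = input("What is your date of birth? ")

# Build the greeting by concatenating the three inputs.
print("Hi " + first_name + " " + last_name + "! Your birthday is on " + birthday + ".")
```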
week01/5_Input_and_Casting.ipynb
PythonFreeCourse/Notebooks
mit
If you feel you have really managed to forget, it is worth taking a peek at <a href="3_Types.ipynb">chapter 3</a>, which teaches about types.

If we play around a bit with `input()`, we will quickly discover that sometimes the input from the user does not reach us exactly the way we would like. Let's look at an example:
moshe_apples = input("How many apples does Moshe have? ")
orly_apples = input("How many apples does Orly have? ")
apples_together = moshe_apples + orly_apples
print("Together, they have " + apples_together + " apples!")
week01/5_Input_and_Casting.ipynb
PythonFreeCourse/Notebooks
mit
Exercise (important: solve before you continue!)

Feed the data in items 1 through 5 below into Moshe and Orly's apples program above, and try to understand what happened.

1. Moshe has `0` apples and Orly has `5` apples.
2. Moshe has `2` apples and Orly has `3` apples.
3. Moshe has `-15` apples and Orly has `2` apples.
4. Moshe has `2` apples and Orly has `-15` apples.
5. Moshe has `nananana` apples and Orly has `batman!` apples.

So what happened in the exercise?

Even though we wanted to treat the input as a numeric value (`int`), Python decided to treat it as a string (`str`), and therefore concatenated strings instead of adding numbers. From this we learn a very important rule that, if we remember it well, will save us a lot of trouble down the road:

Warning: when we receive input using `input()`, the value we get will always be of type string.

Note that trying to perform operations between different types (such as a string and an integer) may cause errors in the upcoming exercises. Try, for example, running the following code:
moshe_apples = input("How many apples does Moshe have? ")
moshe_apples = moshe_apples + 1  # Give Moshe a single apple
print(moshe_apples)
week01/5_Input_and_Casting.ipynb
PythonFreeCourse/Notebooks
mit
Type conversion (Casting)

We poured a liter of water into a bowl that held 5 ice cubes. How much is in it now? It is very hard to answer that question because it is phrased poorly and mixes things of different kinds. For exactly the same reason, Python struggles with the code above. We could freeze the water and measure how much ice is in the bowl, or melt the ice and measure how much water it holds. In Python, we will have to decide what we want to do, and convert the values we are working with to the appropriate types before performing the operation.

Recall that the type of any input we receive via `input()` will always be a string (`str`):
color = input("What is your favorite color? ")
print("The type of the input " + color + " is...")
type(color)

age = input("What is your age? ")
print("The type of the input " + age + " is...")
type(age)
week01/5_Input_and_Casting.ipynb
PythonFreeCourse/Notebooks
mit
As we recall, as long as our input is of type string, operations such as adding it to a number will fail. We therefore need to make sure both values are of the same kind by converting one of them from one type to another. The process of turning a value into a different type is called type conversion, or Casting / Type Conversion. Let's revisit Moshe's apple problem from the previous section:
moshe_apples = input("How many apples does Moshe have? ")
moshe_apples = moshe_apples + 1  # Give Moshe a single apple
print(moshe_apples)
week01/5_Input_and_Casting.ipynb
PythonFreeCourse/Notebooks
mit
It looks like the code will not work, since there is no way to add a string (Moshe's apple count from the user's input) to a number (the 1 we want to add). Since the goal is to add 1 apple to some number of apples, we will choose to convert `moshe_apples` to an integer (`int`) rather than a string (`str`). We do it like this:
moshe_apples = input("How many apples does Moshe have? ")
moshe_apples = int(moshe_apples)  # <--- Casting
moshe_apples = moshe_apples + 1
print(moshe_apples)
week01/5_Input_and_Casting.ipynb
PythonFreeCourse/Notebooks
mit
How fun: we converted Moshe's apple count to an integer value (line 2), and now the code works! Note that if we now want to print the number of apples next to a sentence that says "Moshe has X apples", we may run into a problem. The sentence we want to print is a `str`, while the apple count we computed and will try to concatenate to it is an `int`. See how this affects the program:
moshe_apples = input("How many apples does Moshe have? ")
moshe_apples = int(moshe_apples)  # <--- Casting
moshe_apples = moshe_apples + 1
print("Moshe has " + moshe_apples + " apples")
week01/5_Input_and_Casting.ipynb
PythonFreeCourse/Notebooks
mit
Python warned us that there is a problem here: in the last line, it cannot concatenate the apple count with the strings on either side of it. What's the solution? If you said to convert Moshe's apple count to a string, that will indeed work. We do it like this:
moshe_apples = input("How many apples does Moshe have? ")
moshe_apples = int(moshe_apples)  # <--- Casting to int
moshe_apples = moshe_apples + 1
moshe_apples = str(moshe_apples)  # <--- Casting to str
print("Moshe has " + moshe_apples + " apples")
week01/5_Input_and_Casting.ipynb
PythonFreeCourse/Notebooks
mit
Data First we load and pre-process data.
# load data
data_path = os.path.abspath(
    os.path.join(
        os.path.pardir,
        'utilities_and_data',
        'diabetes.csv'
    )
)
data = pd.read_csv(data_path)

# print some basic info()
data.info()

# preview some first rows
data.head()

# some summary
data.describe()
demos_pystan/diabetes.ipynb
tsivula/becs-114.1311
gpl-3.0
Preprocess data
# modify the data column names slightly for easier typing
# rename DiabetesPedigreeFunction to dpf
data.rename(columns={'DiabetesPedigreeFunction': 'dpf'}, inplace=True)
# make lower
data.rename(columns=lambda old_name: old_name.lower(), inplace=True)

# removing those observation rows with 0 in selected variables
normed_predictors = [
    'glucose', 'bloodpressure', 'skinthickness', 'insulin', 'bmi'
]
data = data[(data[normed_predictors] != 0).all(axis=1)]

# scale the covariates for easier comparison of coefficient posteriors
# N.B. int columns turn into floats
data.iloc[:,:-1] -= data.iloc[:,:-1].mean()
data.iloc[:,:-1] /= data.iloc[:,:-1].std()

# preview some first rows again
data.head()

# preparing the inputs
X = data.iloc[:,:-1]
y = data.iloc[:,-1]

# get shape into variables
n, p = X.shape
print('number of observations = {}'.format(n))
print('number of predictors = {}'.format(p))
demos_pystan/diabetes.ipynb
tsivula/becs-114.1311
gpl-3.0
Stan model code for logistic regression Logistic regression with Student's $t$ prior as discussed above.
with open('logistic_t.stan') as file:
    print(file.read())

model = stan_utility.compile_model('logistic_t.stan')
demos_pystan/diabetes.ipynb
tsivula/becs-114.1311
gpl-3.0
Set priors and sample from the posterior Here we'll use a Student t prior with 7 degrees of freedom and a scale of 2.5, which, as discussed above, is a reasonable default prior when coefficients should be close to zero but have some chance of being large. PyStan returns the posterior distribution for the parameters describing the uncertainty related to unknown parameter values.
data1 = dict(
    n=n,
    d=p,
    X=X,
    y=y,
    p_alpha_df=7,
    p_alpha_loc=0,
    p_alpha_scale=2.5,
    p_beta_df=7,
    p_beta_loc=0,
    p_beta_scale=2.5
)
fit1 = model.sampling(data=data1, seed=74749)
samples1 = fit1.extract(permuted=True)
demos_pystan/diabetes.ipynb
tsivula/becs-114.1311
gpl-3.0
Inspect the resulting posterior Check n_effs and Rhats
# print summary of selected variables
# use pandas data frame for layout
summary = fit1.summary(pars=['alpha', 'beta'])
pd.DataFrame(
    summary['summary'],
    index=summary['summary_rownames'],
    columns=summary['summary_colnames']
)
demos_pystan/diabetes.ipynb
tsivula/becs-114.1311
gpl-3.0
n_effs are high and Rhats < 1.1, which is good. Next we check divergences, E-BFMI, and treedepth exceedances, as explained in the Robust Statistical Workflow with PyStan case study by Michael Betancourt.
stan_utility.check_treedepth(fit1)
stan_utility.check_energy(fit1)
stan_utility.check_div(fit1)
demos_pystan/diabetes.ipynb
tsivula/becs-114.1311
gpl-3.0
Everything is fine based on these diagnostics and we can proceed with our analysis. Visualise the marginal posterior distributions of each parameter
# plot histograms
fig, axes = plot_tools.hist_multi_sharex(
    [samples1['alpha']] + [sample for sample in samples1['beta'].T],
    rowlabels=['intercept'] + list(X.columns),
    n_bins=60,
    x_lines=0,
    figsize=(7, 10)
)
demos_pystan/diabetes.ipynb
tsivula/becs-114.1311
gpl-3.0
We can use Pareto smoothed importance sampling leave-one-out cross-validation to estimate the predictive performance.
loo1, loos1, ks1 = psis.psisloo(samples1['log_lik'])
loo1_se = np.sqrt(np.var(loos1, ddof=1)*n)
print('elpd_loo: {:.4} (SE {:.3})'.format(loo1, loo1_se))

# check the number of large (> 0.5) Pareto k estimates
np.sum(ks1 > 0.5)
demos_pystan/diabetes.ipynb
tsivula/becs-114.1311
gpl-3.0
Alternative horseshoe prior on weights. In this example, with $n \gg p$, we don't expect much difference from using a different prior; the horseshoe prior is usually more useful when $n < p$. The global scale parameter for the horseshoe prior is chosen as recommended by Juho Piironen and Aki Vehtari (2017), "On the Hyperprior Choice for the Global Shrinkage Parameter in the Horseshoe Prior", Journal of Machine Learning Research: Workshop and Conference Proceedings (AISTATS 2017 Proceedings), arXiv preprint arXiv:1610.05559.
with open('logistic_hs.stan') as file:
    print(file.read())

model = stan_utility.compile_model('logistic_hs.stan')

p0 = 2  # prior guess for the number of relevant variables
tau0 = p0 / (p - p0) * 1 / np.sqrt(n)
data2 = dict(
    n=n,
    d=p,
    X=X,
    y=y,
    p_alpha_df=7,
    p_alpha_loc=0,
    p_alpha_scale=2.5,
    p_beta_df=1,
    p_beta_global_df=1,
    p_beta_global_scale=tau0
)
fit2 = model.sampling(data=data2, seed=74749)
samples2 = fit2.extract(permuted=True)
demos_pystan/diabetes.ipynb
tsivula/becs-114.1311
gpl-3.0
We see that the horseshoe prior has shrunk the posterior distribution of irrelevant features closer to zero, without affecting the posterior distribution of the relevant features.
# print summary of selected variables
# use pandas data frame for layout
summary = fit2.summary(pars=['alpha', 'beta'])
pd.DataFrame(
    summary['summary'],
    index=summary['summary_rownames'],
    columns=summary['summary_colnames']
)

# plot histograms
fig, axes = plot_tools.hist_multi_sharex(
    [samples2['alpha']] + [sample for sample in samples2['beta'].T],
    rowlabels=['intercept'] + list(X.columns),
    n_bins=60,
    x_lines=0,
    figsize=(7, 10)
)
demos_pystan/diabetes.ipynb
tsivula/becs-114.1311
gpl-3.0
We also compute LOO for the model with the horseshoe prior. The expected log predictive density is higher, but not significantly so. This is not surprising, as this is an easy dataset with $n \gg p$.
loo2, loos2, ks2 = psis.psisloo(samples2['log_lik'])
loo2_se = np.sqrt(np.var(loos2, ddof=1)*n)
print('elpd_loo: {:.4} (SE {:.3})'.format(loo2, loo2_se))

# check the number of large (> 0.5) Pareto k estimates
np.sum(ks2 > 0.5)

elpd_diff = loos2 - loos1
elpd_diff_se = np.sqrt(np.var(elpd_diff, ddof=1)*n)
elpd_diff = np.sum(elpd_diff)
print('elpd_diff: {:.4} (SE {:.3})'.format(elpd_diff, elpd_diff_se))
demos_pystan/diabetes.ipynb
tsivula/becs-114.1311
gpl-3.0
Numerically Integrating Newton's Second Law

There are many times in physics when you want to know the solution to a differential equation that you can't solve analytically. This comes up in fields ranging from astrophysics to biophysics to particle physics. Once you change from finding exact solutions to numerical solutions, all sorts of nuanced difficulties can arise. We'll explore a few examples in this workbook.

The Simple Harmonic Oscillator

Let's start with a system where we know the answer so that we'll have something concrete to compare against. For a simple harmonic oscillator we know that the acceleration comes from a spring force: $$\ddot x=-\tfrac{k}{m}x.$$ We know that the solution to this differential equation is: $$x(t) = A\sin\!\left(\sqrt{\tfrac{k}{m}}\,t\right).$$ Let's work on integrating it numerically. The simplest way of integrating this equation is to use a "delta" approximation for the derivatives. $$\tfrac{\Delta v}{\Delta t}=-\tfrac{k}{m}x\implies\Delta v=-\tfrac{k}{m}x\Delta t = a\Delta t$$ $$\tfrac{\Delta x}{\Delta t}=v\implies\Delta x=v\Delta t$$ Combining these, we can work out that one way of integrating this equation to find $x(t)$ is (Problem 2.1 on the worksheet): $$v_{t+\Delta t}=v_{t}+a\,\Delta t=v_{t}-\tfrac{k}{m}x_{t}\Delta t$$ $$x_{t+\Delta t}=x_{t}+v_{t}\Delta t.$$ Let's set this up!
# Input Values
time = 40.
delta_t = .1
time_steps = int(time/delta_t)

# Create arrays for storing variables
x = np.zeros(time_steps)
v = np.zeros(time_steps)
t = np.zeros(time_steps)

# Set initial values to "stretched"
x[0] = 1.
v[0] = 0.
t[0] = 0.
omega = .75

# Create function to calculate acceleration
def accel(x, omegaIn):
    return -omegaIn**2*x

# Solve the equation
for tt in range(time_steps-1):
    v[tt+1] = v[tt]+accel(x[tt],omega)*delta_t
    x[tt+1] = x[tt]+v[tt]*delta_t
    t[tt+1] = t[tt]+delta_t

# Find exact answers
xExact = x[0]*np.cos(omega*t)
vExact = -x[0]*omega*np.sin(omega*t)
integration/Numerical Integration.ipynb
JesseLivezey/science-programming
mit
Plots
plt.plot(t, x, 'ro', label='x(t)')
plt.plot(t, v, 'bo', label='v(t)')
plt.legend()
plt.ylim((-1.5, 1.5))
plt.title('Numerical x(t) and v(t)')

plt.figure()
plt.plot(t, xExact, 'ro', label='x(t)')
plt.plot(t, vExact, 'bo', label='v(t)')
plt.legend()
plt.ylim((-1.5, 1.5))
plt.title('Exact x(t) and v(t)')
integration/Numerical Integration.ipynb
JesseLivezey/science-programming
mit
Try a few different values of delta_t. What happens as you make delta_t larger? One subtle problem with the method we are using above is that it may not be conserving energy. You can see this happening as the amplitude grows over time. Let's try creating a quick "hack" to fix this. Copy the position and velocity code from above. After each update, rescale the velocity so that energy is conserved.
# Code goes here:
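A minimal sketch of the suggested hack, assuming the same variables as the integration cell above (the exact rescaling scheme is my own choice): after each step, rescale the speed so that the total energy per unit mass, $E = \tfrac{1}{2}v^2 + \tfrac{1}{2}\omega^2 x^2$, returns to its initial value.

```python
import numpy as np

# Same setup as the earlier integration cell.
time, delta_t = 40., .1
time_steps = int(time / delta_t)
x, v, t = np.zeros(time_steps), np.zeros(time_steps), np.zeros(time_steps)
x[0], v[0], omega = 1., 0., .75

def accel(x, omegaIn):
    return -omegaIn**2 * x

E0 = 0.5 * v[0]**2 + 0.5 * omega**2 * x[0]**2  # initial energy per unit mass

for tt in range(time_steps - 1):
    v[tt+1] = v[tt] + accel(x[tt], omega) * delta_t
    x[tt+1] = x[tt] + v[tt] * delta_t
    t[tt+1] = t[tt] + delta_t
    # Rescale the speed so the total energy returns to E0 (the "hack").
    ke_target = E0 - 0.5 * omega**2 * x[tt+1]**2
    if ke_target > 0 and v[tt+1] != 0:
        v[tt+1] = np.sign(v[tt+1]) * np.sqrt(2 * ke_target)
```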
integration/Numerical Integration.ipynb
JesseLivezey/science-programming
mit
What about friction? How could you incorporate a drag force into this program? You can assume the drag force is proportional to velocity: $$F_\text{drag} = -b v$$ Copy your code from above and add in a drag term. Do the resulting plots make sense?
# Code goes here:

# Plots go here:
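One way to add the drag term (a sketch; the damping coefficient $b/m$ below is an arbitrary illustrative value) is to fold $-bv$ into the acceleration function:

```python
import numpy as np
from matplotlib import pyplot as plt

time, delta_t = 40., .1
time_steps = int(time / delta_t)
x, v, t = np.zeros(time_steps), np.zeros(time_steps), np.zeros(time_steps)
x[0], omega = 1., .75
b_over_m = 0.2  # drag coefficient per unit mass (arbitrary choice)

def accel_drag(x, v, omegaIn):
    # Spring force plus a drag force proportional to velocity.
    return -omegaIn**2 * x - b_over_m * v

for tt in range(time_steps - 1):
    v[tt+1] = v[tt] + accel_drag(x[tt], v[tt], omega) * delta_t
    x[tt+1] = x[tt] + v[tt] * delta_t
    t[tt+1] = t[tt] + delta_t

# The oscillation should now decay toward zero.
plt.plot(t, x, 'ro', label='x(t)')
plt.plot(t, v, 'bo', label='v(t)')
plt.legend()
plt.title('Damped oscillator')
```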
integration/Numerical Integration.ipynb
JesseLivezey/science-programming
mit
Load the run metadata. You can also just look through the directory, but this index file is convenient if (as we do here) you only want to download some of the files.
run_info = pd.read_csv(os.path.join(scratch_dir, 'index.tsv'), sep='\t')

# Filter to SQuAD 2.0 runs from either 2M MultiBERTs or the original BERT checkpoint ("public").
mask = run_info.task == "v2.0"
mask &= (run_info.n_steps == "2M") | (run_info.release == 'public')
run_info = run_info[mask]
run_info

# Download all prediction files
for fname in tqdm(run_info.file):
  !curl $preds_root/$fname -o $scratch_dir/$fname --create-dirs --silent

!ls $scratch_dir/v2.0
language/multiberts/multi_vs_original.ipynb
google-research/language
apache-2.0
Now we should have everything in our scratch directory, and can load individual predictions. SQuAD has a monolithic eval script that isn't easily compatible with a bootstrap procedure (among other things, it parses a lot of JSON, and you don't want to do that in the inner loop!). Ultimately, though, it relies on computing some point-wise scores (exact-match $\in {0,1}$ and F1 $\in [0,1]$) and averaging these across examples. For efficiency, we'll pre-compute these before running our bootstrap.
# Import the SQuAD 2.0 eval script; we'll use some functions from this below.
import sys
sys.path.append(scratch_dir)
import evaluate_squad2 as squad_eval

# Load dataset
with open(os.path.join(scratch_dir, 'dev-v2.0.json')) as fd:
  dataset = json.load(fd)['data']
language/multiberts/multi_vs_original.ipynb
google-research/language
apache-2.0
The official script supports thresholding for no-answer, but the default settings ignore this and treat only predictions of the empty string ("") as no-answer. So, we can score on exact_raw and f1_raw directly.
exact_scores = {}  # filename -> qid -> score
f1_scores = {}     # filename -> qid -> score
for fname in tqdm(run_info.file):
  with open(os.path.join(scratch_dir, fname)) as fd:
    preds = json.load(fd)
  exact_raw, f1_raw = squad_eval.get_raw_scores(dataset, preds)
  exact_scores[fname] = exact_raw
  f1_scores[fname] = f1_raw

def dict_of_dicts_to_matrix(dd):
  """Convert a scores dict-of-dicts to a dense matrix.

  Outer keys assumed to be rows, inner keys are columns (e.g. example IDs).
  Uses pandas to ensure that different rows are correctly aligned.

  Args:
    dd: map of row -> column -> value

  Returns:
    np.ndarray of shape [num_rows, num_columns]
  """
  # Use pandas to ensure keys are correctly aligned.
  df = pd.DataFrame(dd).transpose()
  return df.values

exact_scores = dict_of_dicts_to_matrix(exact_scores)
f1_scores = dict_of_dicts_to_matrix(f1_scores)
exact_scores.shape
language/multiberts/multi_vs_original.ipynb
google-research/language
apache-2.0
Run multibootstrap base (L) is the original BERT checkpoint, expt (L') is MultiBERTs with 2M steps. Since we pre-computed the pointwise exact match and F1 scores for each run and each example, we can just pass dummy labels and use a simple average over predictions as our scoring function.
import multibootstrap

num_bootstrap_samples = 1000

selected_runs = run_info.copy()
selected_runs['seed'] = selected_runs['pretrain_id']
selected_runs['intervention'] = (selected_runs['release'] == 'multiberts')

# Dummy labels
dummy_labels = np.zeros_like(exact_scores[0])  # [num_examples]
score_fn = lambda y_true, y_pred: np.mean(y_pred)

# Targets; run once for each.
targets = {'exact': exact_scores, 'f1': f1_scores}

stats = {}
for name, preds in targets.items():
  print(f"Metric: {name:s}")
  samples = multibootstrap.multibootstrap(selected_runs, preds, dummy_labels, score_fn,
                                          nboot=num_bootstrap_samples,
                                          paired_seeds=False,
                                          progress_indicator=tqdm)
  stats[name] = multibootstrap.report_ci(samples, c=0.95)
  print("")

pd.concat({k: pd.DataFrame(v) for k, v in stats.items()}).transpose()
language/multiberts/multi_vs_original.ipynb
google-research/language
apache-2.0
TacticToolkit Introduction

TacticToolkit is a codebase to assist with machine learning and natural language processing. We build on top of sklearn, tensorflow, keras, nltk, spaCy and other popular libraries. The TacticToolkit will help throughout, from data acquisition to preprocessing to training to inference.

| Modules | Description |
|---------------|------------------------------------------------------|
| corpus | Load and work with text corpora |
| data | Data generation and common data functions |
| plotting | Predefined and customizable plots |
| preprocessing | Transform and clean data in preparation for training |
| sandbox | Newer experimental features and references |
| text | Text manipulation and processing |
# until we can install, add parent dir to path so ttk is found
import sys
sys.path.insert(0, '..')

# basic imports
import pandas as pd
import numpy as np
import re

import matplotlib
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
import matplotlib.pyplot as plt
examples/2017-09-11_TacticToolkit_Intro.ipynb
tacticsiege/TacticToolkit
mit
Let's start with some text

The ttk.text module includes classes and functions to make working with text easier. These are meant to supplement existing nltk and spaCy text processing, and often work in conjunction with these libraries. Below is an overview of some of the major components. We'll explore these objects with some simple text now.

| Class | Purpose |
|-----------------|------------------------------------------------------------------------|
| Normalizer | Normalizes text by formatting, stemming and substitution |
| Tokenizer | High level tokenizer, provides word, sentence and paragraph tokenizers |
# simple text normalization

# apply individually

# apply to sentences

# simple text tokenization

# harder text tokenization

# sentence tokenization

# paragraph tokenization
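The ttk calls for these placeholders are not shown in this excerpt, so as a stand-in, here is roughly what the word- and sentence-tokenization steps look like with nltk, which ttk is described as supplementing (the sample sentence is my own):

```python
import nltk
nltk.download('punkt', quiet=True)
from nltk.tokenize import word_tokenize, sent_tokenize

text = "Dr. Smith bought 2 dogs. They weren't cheap, but they're great!"

# word tokenization (splits contractions and punctuation)
print(word_tokenize(text))

# sentence tokenization (handles abbreviations such as "Dr.")
print(sent_tokenize(text))
```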
examples/2017-09-11_TacticToolkit_Intro.ipynb
tacticsiege/TacticToolkit
mit
Corpii? Corpuses? Corpora!

The ttk.corpus module builds on the nltk.corpus model, adding new corpus readers and corpus processing objects. It also includes loading functions for the corpora included with ttk, which will download the content from github as needed. We'll use the Dated Headline corpus included with ttk. This corpus was created using ttk, and is maintained in a complementary github project, TacticCorpora (https://github.com/tacticsiege/TacticCorpora). First, a quick look at the corpus module's major classes and functions.

| Class | Purpose |
|------------------------------|----------------------------------------------------------------------------------|
| CategorizedDatedCorpusReader | Extends nltk's CategorizedPlainTextCorpusReader to include a second category, Date |
| CategorizedDatedCorpusReporter | Summarizes corpora. Filterable, and output can be str, list or DataFrame |

| Function | Purpose |
|--------------------------------------|-------------------------------------------------------------------------|
| load_headline_corpus(with_date=True) | Loads Categorized or CategorizedDated CorpusReader from headline data |
from ttk.corpus import load_headline_corpus

# load the dated corpus.
# This will attempt to download the corpus from github if it is not present locally.
corpus = load_headline_corpus(verbose=True)

# inspect categories
print (len(corpus.categories()), 'categories')
for cat in corpus.categories():
    print (cat)

# all main corpus methods allow lists of categories and dates as filters
d = '2017-08-22'
print (len(corpus.categories(dates=[d])), 'categories')
for cat in corpus.categories(dates=[d]):
    print (cat)

# use the Corpus Reporters to get summary reports
from ttk.corpus import CategorizedDatedCorpusReporter
reporter = CategorizedDatedCorpusReporter()

# summarize categories
print (reporter.category_summary(corpus))

# reporters can return str, list or dataframe
for s in reporter.date_summary(corpus,
                               dates=['2017-08-17', '2017-08-18', '2017-08-19',],
                               output='list'):
    print (s)

cat_frame = reporter.category_summary(corpus,
                                      categories=['BBC', 'CNBC', 'CNN', 'NPR',],
                                      output='dataframe')
cat_frame.head()
examples/2017-09-11_TacticToolkit_Intro.ipynb
tacticsiege/TacticToolkit
mit
Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive. Fill in the missing code below so that the function will make this prediction. Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
def predictions_1(data):
    """ Model with one feature:
            - Predict a passenger survived if they are female. """

    predictions = []
    for _, passenger in data.iterrows():
        # Predict survival (1) for female passengers, otherwise 0
        predictions.append(1 if passenger['Sex'] == 'female' else 0)

    # Return our predictions
    return pd.Series(predictions)

# Make the predictions
predictions = predictions_1(data)
P0:_Titanic_Survival/Titanic_Survival_Exploration.ipynb
parambharat/ML-Programs
mit
Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive. Fill in the missing code below so that the function will make this prediction. Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
def predictions_2(data):
    """ Model with two features:
            - Predict a passenger survived if they are female.
            - Predict a passenger survived if they are male and younger than 10. """

    predictions = []
    for _, passenger in data.iterrows():
        # Predict survival for females and for males younger than 10
        if passenger["Sex"] == 'female' or (passenger["Sex"] == 'male' and passenger["Age"] < 10):
            predictions.append(1)
        else:
            predictions.append(0)

    # Return our predictions
    return pd.Series(predictions)

# Make the predictions
predictions = predictions_2(data)
P0:_Titanic_Survival/Titanic_Survival_Exploration.ipynb
parambharat/ML-Programs
mit
After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction. Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model. Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
def predictions_3(data):
    """ Model with multiple features. Makes a prediction with an accuracy of at least 80%. """

    predictions = []
    for _, passenger in data.iterrows():
        # Predict survival for 1st/2nd class passengers who are female or younger than 15
        if passenger["Pclass"] < 3 and (passenger['Sex'] == 'female' or passenger['Age'] < 15):
            predictions.append(1)
        else:
            predictions.append(0)

    # Return our predictions
    return pd.Series(predictions)

# Make the predictions
predictions = predictions_3(data)
P0:_Titanic_Survival/Titanic_Survival_Exploration.ipynb
parambharat/ML-Programs
mit
Sorting in reverse order

To sort in reverse order, you can pass the reverse parameter.
a = [5, 3, -2, 9, 1]
a.sort(reverse=True)
print(a)
crash-course/builtin-sort.ipynb
citxx/sis-python
mit
Sorting by key

Sorting by key lets you order a list not by the value of each element itself, but by something else.
# Strings are normally sorted in alphabetical order
a = ["bee", "all", "accessibility", "zen", "treasure"]
a.sort()
print(a)

# Using a key, you can sort by something else, for example by length
a = ["bee", "all", "accessibility", "zen", "treasure"]
a.sort(key=len)
print(a)
crash-course/builtin-sort.ipynb
citxx/sis-python
mit
ะ’ ะบะฐั‡ะตัั‚ะฒะต ะฟะฐั€ะฐะผะตั‚ั€ะฐ key ะผะพะถะฝะพ ัƒะบะฐะทั‹ะฒะฐั‚ัŒ ะฝะต ั‚ะพะปัŒะบะพ ะฒัั‚ั€ะพะตะฝะฝั‹ะต ั„ัƒะฝะบั†ะธะธ, ะฝะพ ะธ ัะฐะผะพัั‚ะพัั‚ะตะปัŒะฝะพ ะพะฟั€ะตะดะตะปั‘ะฝะฝั‹ะต. ะขะฐะบะฐั ั„ัƒะฝะบั†ะธั ะดะพะปะถะฝะฐ ะฟั€ะธะฝะธะผะฐั‚ัŒ ะพะดะธะฝ ะฐั€ะณัƒะผะตะฝั‚, ัะปะตะผะตะฝั‚ ัะฟะธัะบะฐ, ะธ ะฒะพะทั€ะฐั‰ะฐั‚ัŒ ะทะฝะฐั‡ะตะฝะธะต, ะฟะพ ะบะพั‚ะพั€ะพะผัƒ ะฝะฐะดะพ ัะพั€ั‚ะธั€ะพะฒะฐั‚ัŒ.
# Sort by the remainder of division by 10
def mod(x):
    return x % 10

a = [1, 15, 143, 8, 0, 5, 17, 48]
a.sort(key=mod)
print(a)

# Lists are normally sorted first by the first element, then by the second, and so on
a = [[4, 3], [1, 5], [2, 15], [1, 6], [2, 9], [4, 1]]
a.sort()
print(a)

# And this is how to sort by the first element in ascending order,
# breaking ties by the second element in descending order
def my_key(x):
    return x[0], -x[1]

a = [[4, 3], [1, 5], [2, 15], [1, 6], [2, 9], [4, 1]]
a.sort(key=my_key)
print(a)
crash-course/builtin-sort.ipynb
citxx/sis-python
mit
Pressure Drop calculations Collecting K-values of fittings, connections, etc...
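The code cell that follows is an empty placeholder. As a sketch of how collected K-values are typically combined (the fittings, K-values, and flow conditions below are illustrative assumptions, not measurements from this test stand), the minor-loss pressure drop is $\Delta P = \left(\sum K\right)\tfrac{1}{2}\rho v^2$:

```python
# Illustrative K-values for some fittings (assumed, not measured for this stand)
k_values = {
    'ball valve (open)': 0.05,
    '90-degree elbow': 0.75,
    'tee (run)': 0.4,
    'sudden contraction': 0.5,
}

rho = 1000.0  # fluid density, kg/m^3 (water assumed)
v = 3.0       # flow velocity, m/s (assumed)

k_total = sum(k_values.values())
delta_p = k_total * 0.5 * rho * v**2  # minor-loss pressure drop, Pa

print('Total K = {:.2f}'.format(k_total))
print('Pressure drop = {:.1f} Pa ({:.4f} bar)'.format(delta_p, delta_p / 1e5))
```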
""" """
Archive/Analysis/pressure-reservoir-notebook.ipynb
psas/liquid-engine-test-stand
gpl-2.0
Template representation variant 1: bundles for each workflow step (characterized by output, activity, and agent with relationships)
- every activity uses information from a global provenance log file (used relationship)
- every activity updates parts of a global provenance log file (wasGeneratedBy relationship)

NB: this does not produce valid ProvTemplates, as multiple bundles are used.
# generate prov_template options and print provn representation
gen_bundles(workflow_dict, prov_doc01)
print(prov_doc01.get_provn())

%matplotlib inline
prov_doc01.plot()
prov_doc01.serialize('data-ingest1.rdf', format='rdf')
prov_templates/Data_ingest_use_case_templates.ipynb
stephank16/enes_graph_use_case
gpl-3.0
Template representation variant 2: workflow steps without bundles; workflow steps are chained (output is input to next step).
nodes = in_bundles(workflow_dict, prov_doc2)
chain_bundles(nodes)
print(prov_doc02.get_provn())

%matplotlib inline
prov_doc02.plot()

from prov.dot import prov_to_dot
dot = prov_to_dot(prov_doc02)
prov_doc02.serialize('ingest-prov-version2.rdf', format='rdf')
dot.write_png('ingest-prov-version2.png')
prov_templates/Data_ingest_use_case_templates.ipynb
stephank16/enes_graph_use_case
gpl-3.0
Template representation variant 3: workflow steps without bundles; workflow steps are chained (output is input to next step); generation of a global workflow representation is added.
gnodes = in_bundles(workflow_dict, prov_doc3)
chain_hist_bundles(gnodes, prov_doc3)
print(prov_doc03.get_provn())

dot = prov_to_dot(prov_doc03)
dot.write_png('ingest-prov-version3.png')

%matplotlib inline
prov_doc03.plot()
prov_doc03.serialize('data-ingest3.rdf', format='rdf')

# ------------------ to be removed --------------------------------------
# generate prov_template options and print provn representation
gen_bundles(workflow_dict, prov_doc1)
print(prov_doc1.get_provn())

nodes = in_bundles(workflow_dict, prov_doc2)
chain_bundles(nodes)
print(prov_doc2.get_provn())

gnodes = in_bundles(workflow_dict, prov_doc3)
chain_hist_bundles(gnodes, prov_doc3)
print(prov_doc3.get_provn())

%matplotlib inline
prov_doc1.plot()
prov_doc2.plot()
prov_doc3.plot()
prov_templates/Data_ingest_use_case_templates.ipynb
stephank16/enes_graph_use_case
gpl-3.0
The BERT tokenizer To fine tune a pre-trained model you need to be sure that you're using exactly the same tokenization, vocabulary, and index mapping as you used during training. The BERT tokenizer used in this tutorial is written in pure Python (It's not built out of TensorFlow ops). So you can't just plug it into your model as a keras.layer like you can with preprocessing.TextVectorization. The following code rebuilds the tokenizer that was used by the base model:
# Set up tokenizer to generate Tensorflow dataset
tokenizer = # TODO 1: Your code goes here

print("Vocab size:", len(tokenizer.vocab))
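One way to fill in TODO 1, sketched under the assumption that earlier cells in this lab set `gs_folder_bert` to the pretrained checkpoint directory and that the TF Model Garden's `official.nlp.bert.tokenization` module is available (as in the upstream fine-tuning tutorial):

```python
# Hypothetical solution for TODO 1 -- assumes `gs_folder_bert` points at the
# pretrained BERT checkpoint directory set up earlier in the lab.
import os
from official.nlp import bert
import official.nlp.bert.tokenization

tokenizer = bert.tokenization.FullTokenizer(
    vocab_file=os.path.join(gs_folder_bert, "vocab.txt"),
    do_lower_case=True)

print("Vocab size:", len(tokenizer.vocab))
```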
courses/machine_learning/deepdive2/text_classification/labs/fine_tune_bert.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Each subset of the data has been converted to a dictionary of features, and a set of labels. Each feature in the input dictionary has the same shape, and the number of labels should match:
# Print the key value and shapes
for key, value in glue_train.items():
  # TODO 2: Your code goes here

print(f'glue_train_labels shape: {glue_train_labels.shape}')
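A sketch of what TODO 2 asks for, printing each feature's name and shape:

```python
# Hypothetical solution for TODO 2
for key, value in glue_train.items():
  print(f'glue_train["{key}"] shape: {value.shape}')

print(f'glue_train_labels shape: {glue_train_labels.shape}')
```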
courses/machine_learning/deepdive2/text_classification/labs/fine_tune_bert.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Note: The pretrained TransformerEncoder is also available on TensorFlow Hub. See the Hub appendix for details. Set up the optimizer BERT adopts the Adam optimizer with weight decay (aka "AdamW"). It also employs a learning rate schedule that firstly warms up from 0 and then decays to 0.
# Set up epochs and steps
epochs = 3
batch_size = 32
eval_batch_size = 32

train_data_size = len(glue_train_labels)
steps_per_epoch = int(train_data_size / batch_size)
num_train_steps = steps_per_epoch * epochs
warmup_steps = int(epochs * train_data_size * 0.1 / batch_size)

# creates an optimizer with learning rate schedule
optimizer = # TODO 3: Your code goes here
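For TODO 3, the Model Garden's AdamW-with-warmup helper is the natural fit; a sketch, assuming `official.nlp.optimization` is available in this lab's environment and using the common 2e-5 fine-tuning learning rate:

```python
# Hypothetical solution for TODO 3
from official import nlp
import official.nlp.optimization

optimizer = nlp.optimization.create_optimizer(
    2e-5, num_train_steps=num_train_steps, num_warmup_steps=warmup_steps)
```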
courses/machine_learning/deepdive2/text_classification/labs/fine_tune_bert.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
To see an example of how to customize the optimizer and it's schedule, see the Optimizer schedule appendix. Train the model The metric is accuracy and we use sparse categorical cross-entropy as loss.
metrics = [tf.keras.metrics.SparseCategoricalAccuracy('accuracy', dtype=tf.float32)]
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

bert_classifier.compile(
    optimizer=optimizer,
    loss=loss,
    metrics=metrics)

# Train the model
bert_classifier.fit(# TODO 4: Your code goes here)
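A sketch of the TODO 4 call, assuming `glue_validation` and `glue_validation_labels` were produced alongside the training split earlier in the lab:

```python
# Hypothetical solution for TODO 4
bert_classifier.fit(
    glue_train, glue_train_labels,
    validation_data=(glue_validation, glue_validation_labels),
    batch_size=32,
    epochs=epochs)
```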
courses/machine_learning/deepdive2/text_classification/labs/fine_tune_bert.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Then apply the transformation to generate new TFRecord files.
# Set up output of training and evaluation Tensorflow dataset
train_data_output_path = "./mrpc_train.tf_record"
eval_data_output_path = "./mrpc_eval.tf_record"

max_seq_length = 128
batch_size = 32
eval_batch_size = 32

# Generate and save training data into a tf record file
input_meta_data = (# TODO 5: Your code goes here)
courses/machine_learning/deepdive2/text_classification/labs/fine_tune_bert.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Measuring chromatin fluorescence Goal: we want to quantify the amount of a particular protein (red fluorescence) localized on the centromeres (green) versus the rest of the chromosome (blue). The main challenge here is the uneven illumination, which makes isolating the chromosomes a struggle.
import numpy as np
from matplotlib import cm, pyplot as plt
import skdemo

plt.rcParams['image.cmap'] = 'cubehelix'
plt.rcParams['image.interpolation'] = 'none'

from skimage import io
image = io.imread('images/chromosomes.tif')
skdemo.imshow_with_histogram(image)
scikit_image/lectures/solutions/adv0_chromosomes.ipynb
M-R-Houghton/euroscipy_2015
mit
But getting the chromosomes is not so easy:
from skimage.filters import threshold_otsu  # needed for the global threshold below

# `chromosomes` refers to the chromosome channel defined in a cell not shown here.
chromosomes_binary = chromosomes > threshold_otsu(chromosomes)
skdemo.imshow_all(chromosomes, chromosomes_binary)
scikit_image/lectures/solutions/adv0_chromosomes.ipynb
M-R-Houghton/euroscipy_2015
mit
Not only is the uneven illumination a problem, but there seem to be some artifacts due to the illumination pattern! Exercise: Can you think of a way to fix this? (Hint: in addition to everything you've learned so far, check out skimage.morphology.remove_small_objects)
from skimage.morphology import (opening, selem, remove_small_objects)

d = selem.diamond(radius=4)
chr0 = opening(chromosomes_adapt, d)
chr1 = remove_small_objects(chr0.astype(bool), 256)

images = [chromosomes, chromosomes_adapt, chr0, chr1]
titles = ['original', 'adaptive threshold', 'opening', 'small objects removed']
skdemo.imshow_all(*images, titles=titles, shape=(2, 2))
scikit_image/lectures/solutions/adv0_chromosomes.ipynb
M-R-Houghton/euroscipy_2015
mit
Now that we have the centromeres and the chromosomes, it's time to do the science: get the distribution of intensities in the red channel using both centromere and chromosome locations.
# Replace "None" below with the right expressions!
centromere_intensities = protein[centromeres_binary]
chromosome_intensities = protein[chr1]

all_intensities = np.concatenate((centromere_intensities, chromosome_intensities))
minint = np.min(all_intensities)
maxint = np.max(all_intensities)
bins = np.linspace(minint, maxint, 100)

plt.hist(centromere_intensities, bins=bins, color='blue', alpha=0.5, label='centromeres')
plt.hist(chromosome_intensities, bins=bins, color='orange', alpha=0.5, label='chromosomes')
plt.legend(loc='upper right')
plt.show()
scikit_image/lectures/solutions/adv0_chromosomes.ipynb
M-R-Houghton/euroscipy_2015
mit
Restore BF Reviews and Ratings
cmd = "SELECT review_rating, review_text FROM bf_reviews"
bfdf = pd.read_sql_query(cmd, engine)
print(len(bfdf))
bfdf.head(5)
code/test_rating_classifiers.ipynb
mattgiguere/doglodge
mit
Now limit the reviews used in training to only reviews with more than 350 characters.
bfdfl = bfdf[bfdf['review_text'].str.len() > 350].copy()
len(bfdfl)
code/test_rating_classifiers.ipynb
mattgiguere/doglodge
mit
Create Training and Testing Data
train_data = bfdfl['review_text'].values[:750]
y_train = bfdfl['review_rating'].values[:750]

t0 = time()
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english')
X_train = vectorizer.fit_transform(train_data)
duration = time() - t0
print('vectorized in {:.2f} seconds.'.format(duration))
print(X_train.shape)

test_data = bfdfl['review_text'].values[750:]
t0 = time()
X_test = vectorizer.transform(test_data)
duration = time() - t0
print('transformed test data in {:.2f} seconds.'.format(duration))

feature_names = np.asarray(vectorizer.get_feature_names())
len(feature_names)
feature_names[:5]

y_test = bfdfl['review_rating'].values[750:]
code/test_rating_classifiers.ipynb
mattgiguere/doglodge
mit
Now Test Several Classifiers
def benchmark(clf, pos_label=None): print('_' * 80) print("Training: ") print(clf) t0 = time() clf.fit(X_train, y_train) train_time = time() - t0 print("train time: %0.3fs" % train_time) t0 = time() pred = clf.predict(X_test) test_time = time() - t0 print("test time: %0.3fs" % test_time) score = metrics.f1_score(y_test, pred, pos_label=pos_label) print("f1-score: %0.3f" % score) if hasattr(clf, 'coef_'): print("dimensionality: %d" % clf.coef_.shape[1]) print("density: %f" % density(clf.coef_)) # if opts.print_top10 and feature_names is not None: # print("top 10 keywords per class:") # for i, category in enumerate(categories): # top10 = np.argsort(clf.coef_[i])[-10:] # print(trim("%s: %s" # % (category, " ".join(feature_names[top10])))) print() # if opts.print_report: # print("classification report:") # print(metrics.classification_report(y_test, pred, # target_names=categories)) # if opts.print_cm: # print("confusion matrix:") # print(metrics.confusion_matrix(y_test, pred)) print() clf_descr = str(clf).split('(')[0] return clf_descr, score, train_time, test_time, pred results = [] for clf, name in ( (RidgeClassifier(tol=1e-2, solver="lsqr"), "Ridge Classifier"), (Perceptron(n_iter=50), "Perceptron"), (PassiveAggressiveClassifier(n_iter=50), "Passive-Aggressive"), (KNeighborsClassifier(n_neighbors=10), "kNN"), (RandomForestClassifier(n_estimators=20), 'RandomForest')): print('=' * 80) print(name) results.append(benchmark(clf)) for penalty in ["l2", "l1"]: print('=' * 80) print("%s penalty" % penalty.upper()) # Train Liblinear model results.append(benchmark(LinearSVC(loss='l2', penalty=penalty, dual=False, tol=1e-3))) # Train SGD model results.append(benchmark(SGDClassifier(alpha=.0001, n_iter=50, penalty=penalty))) # Train SGD with Elastic Net penalty print('=' * 80) print("Elastic-Net penalty") results.append(benchmark(SGDClassifier(alpha=.0001, n_iter=50, penalty="elasticnet"))) # Train NearestCentroid without threshold print('=' * 80) print("NearestCentroid (aka Rocchio classifier)") results.append(benchmark(NearestCentroid())) # Train sparse Naive Bayes classifiers print('=' * 80) print("Naive Bayes") results.append(benchmark(MultinomialNB(alpha=.01))) results.append(benchmark(BernoulliNB(alpha=.01))) class L1LinearSVC(LinearSVC): def fit(self, X, y): # The smaller C, the stronger the regularization. # The more regularization, the more sparsity. self.transformer_ = LinearSVC(penalty="l1", dual=False, tol=1e-3) X = self.transformer_.fit_transform(X, y) return LinearSVC.fit(self, X, y) def predict(self, X): X = self.transformer_.transform(X) return LinearSVC.predict(self, X) print('=' * 80) print("LinearSVC with L1-based feature selection") results.append(benchmark(L1LinearSVC()))
code/test_rating_classifiers.ipynb
mattgiguere/doglodge
mit
Plot Results
indices = np.arange(len(results)) results = [[x[i] for x in results] for i in range(5)] font = {'family' : 'normal', 'weight' : 'bold', 'size' : 16} plt.rc('font', **font) plt.rcParams['figure.figsize'] = 12.94, 8 clf_names, score, training_time, test_time, pred = results training_time = np.array(training_time) / np.max(training_time) test_time = np.array(test_time) / np.max(test_time) #plt.figure(figsize=(12, 8)) plt.title("Score") plt.barh(indices, score, .2, label="score", color='#982023') plt.barh(indices + .3, training_time, .2, label="training time", color='#46959E') plt.barh(indices + .6, test_time, .2, label="test time", color='#C7B077') plt.yticks(()) plt.legend(loc='best') plt.subplots_adjust(left=.25) plt.subplots_adjust(top=.95) plt.subplots_adjust(bottom=.05) plt.ylim(0, 14) print(indices) for i, c in zip(indices, clf_names): plt.text(-0.025, i, c, horizontalalignment='right') clf_names[0] = 'Ridge' clf_names[2] = 'PassAggress' clf_names[3] = 'KNN' clf_names[4] = 'RandomForest' clf_names[5] = 'LinearSVC L2' clf_names[6] = 'SGDC SVM L2' clf_names[7] = 'LinearSVC L1' clf_names[8] = 'SGDC L1' clf_names[9] = 'SGDC ElNet' clf_names[13] = 'LinearSVC L1FS' fig, ax = plt.subplots(1, 1) clf_names, score, training_time, test_time, pred = results ax.plot(indices, score, '-o', label="score", color='#982023') ax.plot(indices, training_time, '-o', label="training time (s)", color='#46959E') ax.plot(indices, test_time, '-o', label="test time (s)", color='#C7B077') #labels = [item.get_text() for item in ax.get_xticklabels()] labels = clf_names ax.xaxis.set_ticks(np.arange(np.min(indices), np.max(indices)+1, 1)) ax.set_xticklabels(clf_names, rotation='70', horizontalalignment='right') ax.set_xlim([-1, 14]) ax.set_ylim([0, 1]) ax.legend(loc='best') plt.subplots_adjust(left=0.05, bottom=0.3, top=.98) plt.savefig('ratingClassifierScores.png', dpi=144) for name, scr in zip(clf_names, score): print('{}: {:.3f}'.format(name, scr))
code/test_rating_classifiers.ipynb
mattgiguere/doglodge
mit
Now Plot The Predicted Rating as a Function of the Given Rating for the BF Test Data
fig, ax = plt.subplots(1, 1) ax.plot(y_test + 0.1*np.random.random(len(y_test)) - 0.05, pred[0] + 0.1*np.random.random(len(y_test)) - 0.05, '.') ax.set_xlim([0, 6]) ax.set_ylim([0, 6]) ax.set_xlabel('Given Rating') ax.set_ylabel('Predicted Rating') ms = np.zeros((5, 5)) for row in range(5): for col in range(5): #print('row {}, col {}'.format(row, col)) ms[row, col] = len(np.where((y_test == col+1) & (pred[0] == row+1))[0]) ms logms = 5*np.log(ms+1) logms fig, ax = plt.subplots(1, 1) for row in range(5): for col in range(5): ax.plot(col+1, row+1, 'o', ms=logms[row, col], color='#83A7C8', alpha=0.5) ax.set_xlim([0, 6]) ax.set_ylim([0, 6]) ax.set_xlabel('Given Rating') ax.set_ylabel('Predicted Rating') #plt.savefig('Predicted_Vs_Given_Bubbles.png', dpi=144) for idx, prediction in enumerate(pred): print(idx, pearsonr(y_test, prediction)) fig, ax = plt.subplots(1, 1) ax.hist(y_test, bins=range(1, 7), align='left', color='#83A7C8', alpha=0.25, label='Given') ax.hist(pred[10], bins=range(1, 7), align='left', color='#BA4C37', alpha=0.25, label='Predicted') #ax.set_xlim([0, 6]) ax.xaxis.set_ticks([1, 2, 3, 4, 5]) ax.set_xlabel('Rating') ax.set_ylabel('Number of Reviews') ax.legend(loc='best') #plt.savefig('PredictedGivenDist.png', dpi=144)
code/test_rating_classifiers.ipynb
mattgiguere/doglodge
mit
confusion matrix
from sklearn import metrics

def plot_confusion_matrix(y_pred, y, normalize=False, cmap=plt.cm.binary):
    cm = metrics.confusion_matrix(y, y_pred)
    cm = np.flipud(cm)
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    plt.imshow(cm, cmap=cmap, interpolation='nearest')
    plt.colorbar()
    plt.xticks(np.arange(0, 5), np.arange(1, 6))
    plt.yticks(np.arange(0, 5), np.arange(1, 6)[::-1])
    plt.xlabel('bringfido.com rating (true rating)')
    plt.ylabel('predicted rating')

print("classification accuracy:", metrics.accuracy_score(y_test, pred[10]))
plot_confusion_matrix(y_test, pred[10], normalize=True, cmap=plt.cm.Blues)
#plt.savefig('rating_confusion_matrix.png', dpi=144)

clf = NearestCentroid()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

cens = clf.centroids_
clf.get_params()
words = vectorizer.get_feature_names()
len(words)
cens.shape
code/test_rating_classifiers.ipynb
mattgiguere/doglodge
mit
Which features/words have the highest weight towards rating 1? Unnormalized centroid
wgtarr = cens[4,:]
ratwords = np.argsort(wgtarr).tolist()[::-1]
for i in range(20):
    print(wgtarr[ratwords[i]], words[ratwords[i]], ratwords[i])

cens[:, 1148]
code/test_rating_classifiers.ipynb
mattgiguere/doglodge
mit
Normalized centroid First compute the total for each feature across all ratings (1 to 5)
cen_tot = np.sum(cens, axis=0)
cen_tot.shape

wgtarr = cens[4,:]/cen_tot
words[np.argsort(wgtarr)[0]]

ratwords = np.argsort(wgtarr).tolist()[::-1]
for i in range(20):
    print(wgtarr[ratwords[i]], words[ratwords[i]])
code/test_rating_classifiers.ipynb
mattgiguere/doglodge
mit
Expanding the Model to 3-grams
t0 = time() vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english', ngram_range=(1, 3)) X_train = vectorizer.fit_transform(train_data) duration = time() - t0 print('vectorized in {:.2f} seconds.'.format(duration)) print(X_train.shape) t0 = time() X_test = vectorizer.transform(test_data) duration = time() - t0 print('transformed test data in {:.2f} seconds.'.format(duration)) results = [] for clf, name in ( (RidgeClassifier(tol=1e-2, solver="lsqr"), "Ridge Classifier"), (Perceptron(n_iter=50), "Perceptron"), (PassiveAggressiveClassifier(n_iter=50), "Passive-Aggressive"), (KNeighborsClassifier(n_neighbors=10), "kNN"), (RandomForestClassifier(n_estimators=20), 'RandomForest')): print('=' * 80) print(name) results.append(benchmark(clf)) for penalty in ["l2", "l1"]: print('=' * 80) print("%s penalty" % penalty.upper()) # Train Liblinear model results.append(benchmark(LinearSVC(loss='l2', penalty=penalty, dual=False, tol=1e-3))) # Train SGD model results.append(benchmark(SGDClassifier(alpha=.0001, n_iter=50, penalty=penalty))) # Train SGD with Elastic Net penalty print('=' * 80) print("Elastic-Net penalty") results.append(benchmark(SGDClassifier(alpha=.0001, n_iter=50, penalty="elasticnet"))) # Train NearestCentroid without threshold print('=' * 80) print("NearestCentroid (aka Rocchio classifier)") results.append(benchmark(NearestCentroid())) # Train sparse Naive Bayes classifiers print('=' * 80) print("Naive Bayes") results.append(benchmark(MultinomialNB(alpha=.01))) results.append(benchmark(BernoulliNB(alpha=.01))) class L1LinearSVC(LinearSVC): def fit(self, X, y): # The smaller C, the stronger the regularization. # The more regularization, the more sparsity. self.transformer_ = LinearSVC(penalty="l1", dual=False, tol=1e-3) X = self.transformer_.fit_transform(X, y) return LinearSVC.fit(self, X, y) def predict(self, X): X = self.transformer_.transform(X) return LinearSVC.predict(self, X) print('=' * 80) print("LinearSVC with L1-based feature selection") results.append(benchmark(L1LinearSVC())) indices = np.arange(len(results)) results = [[x[i] for x in results] for i in range(5)] fig, ax = plt.subplots(1, 1) clf_names, score, training_time, test_time, pred = results ax.plot(indices, score, '-o', label="score", color='#982023') ax.plot(indices, training_time, '-o', label="training time (s)", color='#46959E') ax.plot(indices, test_time, '-o', label="test time (s)", color='#C7B077') #labels = [item.get_text() for item in ax.get_xticklabels()] labels = clf_names ax.xaxis.set_ticks(np.arange(np.min(indices), np.max(indices)+1, 1)) ax.set_xticklabels(clf_names, rotation='70', horizontalalignment='right') ax.set_xlim([-1, 14]) ax.set_ylim([0, 1]) ax.legend(loc='best') plt.subplots_adjust(left=0.05, bottom=0.3, top=.98) #plt.savefig('ratingClassifierScores.png', dpi=144)
code/test_rating_classifiers.ipynb
mattgiguere/doglodge
mit
Expanding the Model to 2-grams
t0 = time() vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english', ngram_range=(1, 2)) X_train = vectorizer.fit_transform(train_data) duration = time() - t0 print('vectorized in {:.2f} seconds.'.format(duration)) print(X_train.shape) t0 = time() X_test = vectorizer.transform(test_data) duration = time() - t0 print('transformed test data in {:.2f} seconds.'.format(duration)) results = [] for clf, name in ( (RidgeClassifier(tol=1e-2, solver="lsqr"), "Ridge Classifier"), (Perceptron(n_iter=50), "Perceptron"), (PassiveAggressiveClassifier(n_iter=50), "Passive-Aggressive"), (KNeighborsClassifier(n_neighbors=10), "kNN"), (RandomForestClassifier(n_estimators=20), 'RandomForest'), (LinearSVC(loss='l2', penalty="L2",dual=False, tol=1e-3), "LinearSVC L2"), (SGDClassifier(alpha=.0001, n_iter=50, penalty="L2"), "SGDC SVM L2"), (LinearSVC(loss='l2', penalty="L1",dual=False, tol=1e-3), "LinearSVC L1"), (SGDClassifier(alpha=.0001, n_iter=50, penalty="L1"), "SGDC SVM L1"), (SGDClassifier(alpha=.0001, n_iter=50, penalty="elasticnet"), "Elastic Net"), (NearestCentroid(), "Nearest Centroid"), (MultinomialNB(alpha=.01), "MultinomialNB"), (BernoulliNB(alpha=.01), "BernouliNB")): print('=' * 80) print(name) results.append(benchmark(clf)) class L1LinearSVC(LinearSVC): def fit(self, X, y): # The smaller C, the stronger the regularization. # The more regularization, the more sparsity. self.transformer_ = LinearSVC(penalty="l1", dual=False, tol=1e-3) X = self.transformer_.fit_transform(X, y) return LinearSVC.fit(self, X, y) def predict(self, X): X = self.transformer_.transform(X) return LinearSVC.predict(self, X) print('=' * 80) print("LinearSVC with L1-based feature selection") results.append(benchmark(L1LinearSVC())) indices = np.arange(len(results)) results = [[x[i] for x in results] for i in range(5)] fig, ax = plt.subplots(1, 1) clf_names, score, training_time, test_time, pred = results ax.plot(indices, score, '-o', label="score", color='#982023') ax.plot(indices, training_time, '-o', label="training time (s)", color='#46959E') ax.plot(indices, test_time, '-o', label="test time (s)", color='#C7B077') #labels = [item.get_text() for item in ax.get_xticklabels()] labels = clf_names ax.xaxis.set_ticks(np.arange(np.min(indices), np.max(indices)+1, 1)) ax.set_xticklabels(clf_names, rotation='70', horizontalalignment='right') ax.set_xlim([-1, 14]) ax.set_ylim([0, 1]) ax.legend(loc='best') plt.subplots_adjust(left=0.05, bottom=0.3, top=.98) #plt.savefig('ratingClassifierScores.png', dpi=144) for name, scr in zip(clf_names, score): print('{}: {:.3f}'.format(name, scr))
code/test_rating_classifiers.ipynb
mattgiguere/doglodge
mit
Conclusions The 1-gram model worked just as well as the 3-gram model. To reduce complexity, I will therefore use the 1-gram model. Out of the models tested, the NearestCentroid performed the best, so I will use that for classification.
engine = cadb.connect_aws_db(write_unicode=True) city = 'palo_alto' cmd = 'select h.hotel_id, h.business_id, count(*) as count from ' cmd += 'ta_reviews r inner join ta_hotels h on r.business_id = ' cmd += 'h.business_id where h.hotel_city = "' cmd += (' ').join(city.split('_'))+'" ' cmd += 'GROUP BY r.business_id' cmd pd.read_sql_query(cmd, engine) cmd = 'select distinct r.business_id from ' cmd += 'ta_reviews r inner join ta_hotels h on r.business_id = ' cmd += 'h.business_id where h.hotel_city = "' cmd += (' ').join(city.split('_'))+'" ' cmd [int(bid[0]) for bid in pd.read_sql_query(cmd, engine).values] bids = [1, 2, 5, 10, 20, 54325] if 3 not in bids: print('it is clear!') else: print('already exists') np.where((y_test == 5) & (pred[10] == 1)) len(test_data) test_data[47] test_data[354] np.where((y_test == 1) & (pred[10] == 5)) np.where((y_test == 5) & (pred[10] == 5)) test_data[4]
code/test_rating_classifiers.ipynb
mattgiguere/doglodge
mit
Step 1: define a molecule

Here, we use LiH in the sto3g basis with the PySCF driver as an example. The molecule object records the information from the PySCF driver.
# using driver to get fermionic Hamiltonian
# PySCF example
cfg_mgr = ConfigurationManager()
pyscf_cfg = OrderedDict([('atom', 'Li .0 .0 .0; H .0 .0 1.6'),
                         ('unit', 'Angstrom'),
                         ('charge', 0),
                         ('spin', 0),
                         ('basis', 'sto3g')])
section = {}
section['properties'] = pyscf_cfg
driver = cfg_mgr.get_driver_instance('PYSCF')
molecule = driver.run(section)
qiskit/aqua/chemistry/advanced_howto.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Step 2: Prepare the qubit Hamiltonian

Here, we set up the orbitals to be frozen and removed in order to reduce the problem size when mapping to the qubit Hamiltonian. We also define the mapping type for the qubit Hamiltonian; for the parity mapping in particular, we can reduce the problem size further.
# please be aware that the idx here with respective to original idx
freeze_list = [0]
remove_list = [-3, -2]  # negative number denotes the reverse order
map_type = 'parity'

h1 = molecule._one_body_integrals
h2 = molecule._two_body_integrals
nuclear_repulsion_energy = molecule._nuclear_repulsion_energy

num_particles = molecule._num_alpha + molecule._num_beta
num_spin_orbitals = molecule._num_orbitals * 2
print("HF energy: {}".format(molecule._hf_energy - molecule._nuclear_repulsion_energy))
print("# of electrons: {}".format(num_particles))
print("# of spin orbitals: {}".format(num_spin_orbitals))

# prepare full idx of freeze_list and remove_list
# convert all negative idx to positive
remove_list = [x % molecule._num_orbitals for x in remove_list]
freeze_list = [x % molecule._num_orbitals for x in freeze_list]
# update the idx in remove_list of the idx after frozen, since the idx of orbitals are changed after freezing
remove_list = [x - len(freeze_list) for x in remove_list]
remove_list += [x + molecule._num_orbitals - len(freeze_list) for x in remove_list]
freeze_list += [x + molecule._num_orbitals for x in freeze_list]

# prepare fermionic hamiltonian with orbital freezing and eliminating, and then map to qubit hamiltonian
# and if PARITY mapping is selected, reduction qubits
energy_shift = 0.0
qubit_reduction = True if map_type == 'parity' else False

ferOp = FermionicOperator(h1=h1, h2=h2)
if len(freeze_list) > 0:
    ferOp, energy_shift = ferOp.fermion_mode_freezing(freeze_list)
    num_spin_orbitals -= len(freeze_list)
    num_particles -= len(freeze_list)
if len(remove_list) > 0:
    ferOp = ferOp.fermion_mode_elimination(remove_list)
    num_spin_orbitals -= len(remove_list)

qubitOp = ferOp.mapping(map_type=map_type, threshold=0.00000001)
qubitOp = qubitOp.two_qubit_reduced_operator(num_particles) if qubit_reduction else qubitOp
qubitOp.chop(10**-10)
qiskit/aqua/chemistry/advanced_howto.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
We use the classical eigen decomposition to get the smallest eigenvalue as a reference.
# Using exact eigensolver to get the smallest eigenvalue
exact_eigensolver = get_algorithm_instance('ExactEigensolver')
exact_eigensolver.init_args(qubitOp, k=1)
ret = exact_eigensolver.run()
print('The computed energy is: {:.12f}'.format(ret['eigvals'][0].real))
print('The total ground state energy is: {:.12f}'.format(ret['eigvals'][0].real + energy_shift + nuclear_repulsion_energy))
qiskit/aqua/chemistry/advanced_howto.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Step 3: Initialize and configure the dynamically loaded instances

To run VQE with the UCCSD variational form, we require:
- the VQE algorithm
- a classical optimizer
- the UCCSD variational form
- an initial state prepared as the Hartree-Fock state

[Optional] Set up a token to run the experiment on a real device

If you would like to run the experiment on a real device, you need to set up your account first.

Note: if you have not stored your token yet, use IBMQ.save_accounts() to store it first.
from qiskit import IBMQ
IBMQ.load_accounts()

# setup COBYLA optimizer
max_eval = 200
cobyla = get_optimizer_instance('COBYLA')
cobyla.set_options(maxiter=max_eval)

# setup HartreeFock state
HF_state = get_initial_state_instance('HartreeFock')
HF_state.init_args(qubitOp.num_qubits, num_spin_orbitals, map_type,
                   qubit_reduction, num_particles)

# setup UCCSD variational form
var_form = get_variational_form_instance('UCCSD')
var_form.init_args(qubitOp.num_qubits, depth=1,
                   num_orbitals=num_spin_orbitals, num_particles=num_particles,
                   active_occupied=[0], active_unoccupied=[0, 1],
                   initial_state=HF_state, qubit_mapping=map_type,
                   two_qubit_reduction=qubit_reduction, num_time_slices=1)

# setup VQE
vqe_algorithm = get_algorithm_instance('VQE')
vqe_algorithm.setup_quantum_backend(backend='statevector_simulator')
vqe_algorithm.init_args(qubitOp, 'matrix', var_form, cobyla)
qiskit/aqua/chemistry/advanced_howto.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Step 4: Run algorithm and retrieve the results

The smallest eigenvalue is stored in the first entry of the eigvals key.
results = vqe_algorithm.run()
print('The computed ground state energy is: {:.12f}'.format(results['eigvals'][0]))
print('The total ground state energy is: {:.12f}'.format(results['eigvals'][0] + energy_shift + nuclear_repulsion_energy))
print("Parameters: {}".format(results['opt_params']))
qiskit/aqua/chemistry/advanced_howto.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
The radius of a cell that has been growing for the last $t$ minutes is $1.05^t \times 10^{-3}\,\mathrm{cm}$. Thus, for a cell to form a detectable cluster, it should grow for at least

$$\frac{\log(10)}{\log(1.05)} = 47.19 \text{ minutes.}$$

The total time available for growth is $60$ minutes, so the maximum wait time before growth starts is $60 - 47.19 = 12.81$ minutes.
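A quick numeric check of that threshold, as a sketch using only the standard library (it assumes the detectable radius is $10^{-2}$ cm, which is what the $\log(10)$ factor implies):

```python
from math import log

# minutes of growth needed for a detectable colony, assuming the detectable
# radius is 1e-2 cm so that 1.05**t * 1e-3 >= 1e-2, i.e. t >= log(10)/log(1.05)
t_grow = log(10) / log(1.05)
print(t_grow)       # ~47.19 minutes
print(60 - t_grow)  # ~12.81 minutes of maximum allowable waiting time
```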
delta = (1 - exp(-12.81/20))
print(delta)
2015_Fall/MATH-578B/Homework4/Homework4.ipynb
saketkc/hatex
mit
$\delta = p(\text{wait time} \leq 12.81 \text{ minutes}) = 1-\exp(-12.81/20) = 0.473$

So a ball is called white when both criteria hold:
- it gets mutated (PPP), and
- it has a waiting time of less than 12.81 minutes.

We therefore have a new $PPP(\rho \mu \delta)$, and

$$\frac{17}{\rho A} = \exp(-\rho \mu \delta A) \times \frac{(\rho \mu \delta A)^{17}}{17!}$$

$$\ln\left(\frac{17}{\rho A}\right) = -\rho \mu \delta A + 17 \ln(\rho \mu \delta A) - \ln(17!)$$
from sympy import solve, Eq, symbols
from mpmath import log as mpl
from math import factorial

mu = symbols('mu')
lhs = log(17/100.0, e)
rhs = Eq(-rho*mu*delta*A + 17*(rho*mu*delta*A - 1) - mpl(factorial(17)))
s = solve(rhs, mu)
print(s)
2015_Fall/MATH-578B/Homework4/Homework4.ipynb
saketkc/hatex
mit
Thus, the approximated value is $\mu = 0.0667$ for the given dataset.

Problem 2
%pylab inline
import matplotlib.pyplot as plt

N = 10**6
N_t = 10**6
mu = 10**-6
s = 0.001
generations = 8000
mu_t = N_t*mu

from scipy.stats import poisson, binom
import numpy as np


def run_generations(distribution):
    mutations = []
    all_mutations = []
    for t in range(generations):
        # of mutations in generation t
        offspring_mutations = []
        for mutation in mutations:
            # an individual carrying n mutations leaves behind on average (1 - s)^n copies of each of her genes
            if distribution == 'poisson':
                mutated_copies = np.sum(poisson.rvs(1-s, size=mutation))
            else:
                p = (1-s)/2
                mutated_copies = np.sum(binom.rvs(2, p, size=mutation))
            offspring_mutations.append(mutated_copies)
        M_t = poisson.rvs(mu_t, size=1)[0]
        offspring_mutations.append(M_t)
        ## Done with this generation
        mutations = offspring_mutations
        all_mutations.append(mutations)
    return all_mutations
2015_Fall/MATH-578B/Homework4/Homework4.ipynb
saketkc/hatex
mit
Poisson
pylab.rcParams['figure.figsize'] = (16.0, 12.0)

all_mutations = run_generations('poisson')

plt.plot(range(1, generations+1), [np.mean(x) for x in all_mutations])
plt.title('Average distinct mutation per generations')

plt.plot(range(1, generations+1), [np.sum(x) for x in all_mutations])
plt.title('Total mutation per generations')

plt.hist([np.max(x) for x in all_mutations], 50)
plt.title('Most common mutation')

plt.hist([np.mean(x) for x in all_mutations], 50)
plt.title('Distinct mutation')
2015_Fall/MATH-578B/Homework4/Homework4.ipynb
saketkc/hatex
mit
Poisson Results
mu = 10**-6
s = 0.001
N = 10**6
theoretical_tot_mut = mu*N/s
print(theoretical_tot_mut)

print('Average total mutations per generation: {}'.format(np.mean([np.sum(x) for x in all_mutations])))
print('Average distinct mutations per generation: {}'.format(np.mean([len(x) for x in all_mutations])))
print('Theoretical total mutations per generation: {}'.format(theoretical_tot_mut))
2015_Fall/MATH-578B/Homework4/Homework4.ipynb
saketkc/hatex
mit
Binomial
pylab.rcParams['figure.figsize'] = (16.0, 12.0)

all_mutations = run_generations('binomial')

plt.plot(range(1, generations+1), [np.mean(x) for x in all_mutations])
plt.title('Average distinct mutation per generations')

plt.plot(range(1, generations+1), [np.sum(x) for x in all_mutations])
plt.title('Total mutation per generations')

plt.hist([np.max(x) for x in all_mutations], 50)
plt.title('Most common mutation')

plt.hist([np.mean(x) for x in all_mutations], 50)
plt.title('Distinct mutation')
2015_Fall/MATH-578B/Homework4/Homework4.ipynb
saketkc/hatex
mit
Binomial results
mu = 10**-6
s = 0.001
N = 10**6
theoretical_tot_mut = mu*N/s
print(theoretical_tot_mut)

print('Average total mutations per generation: {}'.format(np.mean([np.sum(x) for x in all_mutations])))
print('Average distinct mutations per generation: {}'.format(np.mean([len(x) for x in all_mutations])))
print('Theoretical total mutations per generation: {}'.format(theoretical_tot_mut))
2015_Fall/MATH-578B/Homework4/Homework4.ipynb
saketkc/hatex
mit
Using if-elif for discrete classification

This problem will use elif statements, the useful sibling of if and else; elif extends your if statements to test multiple situations. In our current situation, where each star will have exactly one spectral type, elif will really come through to make our if statements more efficient. Unlike using many if statements, elif only executes if no previous statements have been deemed True. This is nice, especially if we can anticipate which scenarios are most probable.

Let's start with a simple classification problem to get into the mindset of if-elif-else logic. We want to classify whether a random number n is between 0 and 100, 101 and 149, or 150 and infinity. This could be useful if, for example, we wanted to classify a person's IQ score.

Fill in the if-elif-else statements below so that our number, n, will be classified. Use a print() statement to print out n and its classification (make sure you are descriptive!). You can use the following template:

print(n, 'your description here')
# Fill in the parentheses. Don't forget indentation!
n = random_number(50, 250)  # this should be given!

if (n <= 100):
    print(n, 'is less than or equal to 100.')
elif (100 < n <= 150):
    print(n, 'is between 100 and 150.')
else:
    print(n, 'is greater than or equal to 150.')
notebooks/Lectures2021/Lecture1/L1_challenge_problem_stars_instructor.ipynb
astroumd/GradMap
gpl-3.0
Test your statement a few times so that you see if it works for various numbers. Every time you run the cell, a new random number will be chosen, but you can also set n yourself to make sure that the code works correctly: just place a comment (#) before the random_number() function. Make sure to also test the boundary numbers, as they may act odd if there is a precarious <=.

The loop

We have a list of stellar classifications above. Our new classifier will be a lot like the number classifier, but you will need to use the stellar classification boundaries in Wikipedia's table instead of our previous boundaries. Another thing you will need to do is make a loop so that each star in temp is classified within one cell! You can do this with a while-loop, using a dummy index that goes up to len(temp). Construct a loop such that, for each temperature in temp, you print out the star's temperature and classification.
i = 0
end = len(temp)

# Define your loop here
while i < end:
    if temp[i] < 3700:
        print('Star', temp[i], 'K is type', 'M')
    elif 3700 <= temp[i] < 5200:
        print('Star', temp[i], 'K is type', 'K')
    elif 5200 <= temp[i] < 6000:
        print('Star', temp[i], 'K is type', 'G')
    elif 6000 <= temp[i] < 7500:
        print('Star', temp[i], 'K is type', 'F')
    elif 7500 <= temp[i] < 10000:
        print('Star', temp[i], 'K is type', 'A')
    elif 10000 <= temp[i] < 30000:
        print('Star', temp[i], 'K is type', 'B')
    else:  # Greater than 30000
        print('Star', temp[i], 'K is type', 'O')
    i = i + 1
notebooks/Lectures2021/Lecture1/L1_challenge_problem_stars_instructor.ipynb
astroumd/GradMap
gpl-3.0
Scrape data

The file wowwiki_pages_current.xml is a database dump of the WoW Wikia, containing the current version of every page on the Wikia. This amounts to ~500 MB of uncompressed text! To process this, we will use the lxml library. A typical page we are interested in looks something like this:

```xml
<page>
    <title>Rat</title>
    <ns>0</ns>
    <id>15369</id>
    <sha1>1q38rt4376m9s74uwwslwuea2yy16or</sha1>
    <revision>
      <id>2594586</id>
      <timestamp>2012-08-25T17:06:39Z</timestamp>
      <contributor>
        <username>MarkvA</username>
        <id>1171508</id>
      </contributor>
      <minor />
      <comment>adding ID to infobox, replaced: ...snip...</comment>
      <text xml:space="preserve" bytes="1491">{{npcbox
|name=Rat|id=4075
|image=Rat.png
|level=1
|race=Rat
|creature=Critter
|faction=Combat
|aggro={{aggro|0|0}}
|health=8
|location=All around [[Azeroth (world)|Azeroth]]
}}
'''Rats''' are small [[critter]]s, found in many places, including [[Booty Bay]], [[The Barrens]], and the [[Deeprun Tram]] ....snip...
      </text>
    </revision>
  </page>
```

We can see that the information we care about is inside of a <text> element, contained in an npcbox, following the pattern of attribute=value. In this case, name=Rat, level=1, health=8. Because the file is pretty large, we will use the lxml.etree.iterparse() method to examine each individual page as it is parsed, extracting relevant character information if it is present, and discarding each element when we are done with it (to save memory). Our strategy will be to process any <text> element we come across using the regular expressions library (regular expressions are a powerful tool for processing textual data).
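Before running the full parse, here is a small sketch (not from the original notebook) of how those regular expressions behave on a simplified npcbox string; the sample text is made up for illustration, and real pages are messier.

```python
import re

# a simplified npcbox snippet, one attribute per line (illustrative only)
sample = "{{npcbox\n|name=Rat\n|level=1\n|health=8\n}}"

name = re.search(r'name ?= ?(.+)(\||\n)', sample).group(1)
level = re.search(r'level ?= ?(.+)(\n|\|)', sample).group(1)
health = re.search(r'health ?= ?([\d,]+)', sample).group(1)

print(name, level, health)  # Rat 1 8
```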
MAXPRINT = 100
MAXPROCESS = 1e7
numprocessed = 0

names = []
levels = []
healths = []

itertree = etree.iterparse('wowwiki_pages_current.xml')
for event, element in itertree:
    if numprocessed > MAXPROCESS:
        raise Exception('Maximum number of records processed')
        break
    numprocessed += 1

    # if we are currently looking at the text of an article **and** there's a health value, let's process it
    if element.tag.endswith('text') and element.text and 'health' in element.text:
        # this set of regular expressions will try to extract the name, health, and level of the NPC
        name_re = re.search('name ?= ?(.+)(\||\n)', element.text)
        health_re = re.search('health ?= ?([\d,]+)', element.text)
        level_re = re.search('level ?= ?(.+)(\n|\|)', element.text)

        health = int(health_re.group(1).replace(',', '')) if health_re else None
        try:
            level = int(level_re.group(1))
        except:
            level = None
        name = name_re.group(1) if name_re else None

        if name and health:
            names.append(name)
            levels.append(level)
            healths.append(health)

    element.clear()  # get rid of the current element, we're done with it
WoW_NPC_analysis.ipynb
SnoopJeDi/WoWNPCs
mit
Data cleaning and exploratory analysis

Now we have a set of lists (names, levels, healths) that contain information about every NPC we were able to find. We can look at this data using the numpy library to find out what the data look like, using the techniques of exploratory data analysis.
# convert the lists we've built into numpy arrays. `None` entries will be mapped to NaNs
names = np.array(names)
lvls = np.array(levels, dtype=np.float64)
hps = np.array(healths, dtype=np.float64)

num_NPCs = len(names)
min_lvl, max_lvl = lvls.min(), lvls.max()
min_hp, max_hp = hps.min(), hps.max()

print(
    "Number of entries: {}\n"
    "Min/max level: {}, {}\n"
    "Min/max health: {}, {}".format(num_NPCs, min_lvl, max_lvl, min_hp, max_hp)
)
WoW_NPC_analysis.ipynb
SnoopJeDi/WoWNPCs
mit
We can see we've successfully extracted over 12,000 NPCs, but there are some NaNs in the levels! Let's look at these...
names[np.isnan(lvls)][:5]
WoW_NPC_analysis.ipynb
SnoopJeDi/WoWNPCs
mit
Looking at the page for Vol'jin, we see that his level is encoded on the page as ?? Boss, which can't be converted to an integer. World of Warcraft has a lot of these, and for a more detailed analysis, we could isolate these entries by assigning them a special value during parsing, but in this first pass at the data, we will simply discard them by getting rid of all NaNs. The numpy.isfinite() function will help us select only entries in the lvls array that are finite (i.e. not NaN).
idx = np.isfinite(lvls)

num_NPCs = len(names[idx])
min_lvl, max_lvl = lvls[idx].min(), lvls[idx].max()
min_hp, max_hp = hps[idx].min(), hps[idx].max()

print(
    "Number of entries: {}\n"
    "Min/max level: {}, {}\n"
    "Min/max health: {}, {}".format(num_NPCs, min_lvl, max_lvl, min_hp, max_hp)
)
WoW_NPC_analysis.ipynb
SnoopJeDi/WoWNPCs
mit
Ah, there we go! Knowing that the maximum player level (as of this writing) in WoW is 100, we can surmise that there are still some NPCs in this dataset that represent very-high-level bosses/etc., which may skew the statistics of hitpoints. We can set our cutoff at this point to only consider NPCs that are directly comparable to player characters. Since it appears there is a large range of HP values, we will look at the logarithm (np.log10) of the HPs. (N.B. a numpy trick we're using here: idx is a boolean array, but calling np.sum() forces a typecast (False -> 0, True -> 1))
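A tiny illustration of that boolean-to-integer typecast, independent of the WoW data:

```python
import numpy as np

levels_demo = np.array([3, 150, 42, 7])
mask = levels_demo <= 100   # boolean array: [ True False  True  True]
print(mask.sum())           # 3 -- True is counted as 1, False as 0
```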
LEVEL_CUTOFF = 100

hps = np.log10(healths)
# cast to float so that None entries become NaN and np.isfinite works
lvls = np.array(levels, dtype=np.float64)

idx = np.isfinite(lvls)
lvls = lvls[idx]
hps = hps[idx]
print("Number of NPCs with level > 100: %d" % (lvls > 100).sum())

idx = (lvls <= LEVEL_CUTOFF)
print("Number of NPCs with finite level < 100: %d\n" % (idx.sum()))
lvls = lvls[idx]
hps = hps[idx]

num_NPCs = lvls.size
min_lvl, max_lvl = lvls.min(), lvls.max()
min_hp, max_hp = hps.min(), hps.max()

print(
    "Number of entries: {}\n"
    "Min/max level: {}, {}\n"
    "Min/max log10(health): {}, {}".format(num_NPCs, min_lvl, max_lvl, min_hp, max_hp)
)
WoW_NPC_analysis.ipynb
SnoopJeDi/WoWNPCs
mit
Visualizing the data

We could continue to explore these data using text printouts of statistical information (mean, median, etc.), but with a dataset this large, visualization becomes a very powerful tool. We will use the matplotlib library (and the seaborn library that wraps it) to generate a hexplot (and marginal distributions) of the data. (N.B. the use of the inferno colormap here! There are a lot of good reasons to be particular about your choice of colormaps.)
ax = sns.jointplot(lvls, hps,
                   stat_func=None,
                   kind='hex',
                   xlim=(0, LEVEL_CUTOFF),
                   bins='log',
                   color='r',
                   cmap='inferno_r')
ax.fig.suptitle('HP vs Lvl of WoW NPCS (N={N})'.format(N=lvls.size), y=1.02)
ax.ax_joint.set_xlabel('NPC level')
ax.ax_joint.set_ylabel(r'$log_{10}(\mathrm{HP})$')

cax = ax.fig.add_axes([0.98, 0.1, 0.05, 0.65])
plt.colorbar(cax=cax)
cax.set_title('$log_{10}$(count)', x=1.5)
WoW_NPC_analysis.ipynb
SnoopJeDi/WoWNPCs
mit
In this notebook, we introduce some basic uses of the agavepy Python library for interacting with the Agave Platform science-as-a-service APIs. The examples primarily draw from the apps service, but the concepts introduced are broadly applicable to all Agave services. In subsequent notebooks, we'll take deeper dives into specific topics such as using agavepy to launch and monitor an Agave job. For more information about Agave, please see the developer site: http://agaveapi.co/ The agavepy library provides a high-level Python binding to the Agave API. The first step is to import the Agave class:
from agavepy.agave import Agave
import json
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
Before we can interact with Agave, we need to instantiate a client. Typically, we would use the constructor and pass in our credentials (OAuth client key and secret as well as our username and password) together with configuration data for our "tenant", the organization within Agave we wish to interact with.
import os

agave_cache_dir = os.environ.get('AGAVE_CACHE_DIR')
ag_token_cache = json.loads(open(agave_cache_dir + '/current').read())
print(ag_token_cache)

AGAVE_APP_NAME = "funwave-tvd-nectar" + os.environ['AGAVE_USERNAME']

ag = Agave(token=ag_token_cache['access_token'],
           refresh_token=ag_token_cache['refresh_token'],
           api_key=ag_token_cache['apikey'],
           api_secret=ag_token_cache['apisecret'],
           api_server=ag_token_cache['baseurl'],
           client_name=AGAVE_APP_NAME,
           verify=False)

# alternative: construct the client from username/password credentials instead of a cached token
# ag = Agave(api_server=ag_token_cache['baseurl'], api_key=ag_token_cache['apikey'],
#            api_secret=ag_token_cache['apisecret'], verify=False,
#            username=ag_token_cache['username'], password=os.environ.get('AGAVE_PASSWORD'))
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
The agavepy library's Agave class also provides a restore() method for reconstituting previous OAuth sessions. Previous sessions are read from and written to a cache file, /etc/.agpy, so that OAuth sessions persist across iPython sessions. When you authenticated to JupyterHub, the OAuth login was written to the .agpy file. We can therefore use the restore method to create an OAuth client without needing to pass any credentials: Note that the restore method can take arguments (such as client_name) so that you can restore/manage multiple OAuth sessions. When first getting started on the hub, there is only one session in the cache file, so no arguments are required. If we ever want to inspect the OAuth session being used by our client, we have a few methods available to us. First, we can print the token_info dictionary on the token object:
ag.token.token_info

ag.token.refresh()

ag.token.token_info
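For completeness, the restore() call described in the paragraph above might look like the sketch below. It is not shown in the original cells; the client_name value is only a placeholder, and the exact keyword arguments may differ between agavepy versions.

```python
# reconstitute the cached OAuth session (reads the .agpy cache file)
ag = Agave.restore()

# if several sessions are cached, one can be selected by name (placeholder name)
ag = Agave.restore(client_name='my_session')
```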
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
This shows us both the access and refresh tokens being used. We can also see the end user profile associated with these tokens:
ag.profiles.get()
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
Finally, we can inspect the ag object directly for attributes like api_key, api_secret, api_server, etc.
print(ag.api_key, ag.api_secret, ag.api_server)
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
We are now ready to interact with Agave services using our agavepy client. We can take a quick look at the available top-level methods of our client:
dir(ag)
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
We see there is a top-level method for each of the core science APIs in agave. We will focus on the apps service since it is of broad interest, but much of what we illustrate is generally applicable to all Agave core science APIs. We can browse a specific collection using the list() method. For example, let's see what apps are available to us:
ag.apps.list()
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
What we see in the output above is a Python list representing the JSON object returned from Agave's apps service. It is a list of objects, each of which represents a single app. Let's capture the first app object and inspect it. To do that we can use normal Python list notation:
app = ag.apps.list()[0]
print(type(app))
app
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
We see that the app object is of type agavepy.agave.AttrDict. That's a Python dictionary with some customizations to provide convenience features such as using dot notation for keys/attributes. For example, we see that the app object has an 'id' key. We can access it directly using dot notation:
app.id
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
Equivalently, we can use normal Python dictionary syntax:
app['id']
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
In Agave, the app id is the unique identifier for the application. We'll come back to that in a minute. For now, just know that this example is very typical of responses from agavepy: in general the JSON response object is represented by lists of AttrDicts. Stepping back for a second, let's explore the apps collection a bit. We can always get a list of operations available for a collection by using the dir(-) method:
dir(ag.apps)
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
Also notice that we have tab-completion on these operations. So, if we start typing "ag.apps.l" and then hit tab, Jupyter provides a select box of operations beginning with "l". Try putting the following cell in focus and then hitting the tab key (but don't actually hit enter or try to execute the cell; otherwise you'll get an exception because there's no method called "l"):
ag.apps.l
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
If we would like to get details about a specific object for which we know the unique id, in general we use the get method, passing in the id for the object. Here, we will use an app id we found from the ag.apps.list command.
ag.apps.get(appId=app.id)
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
Whoa, that's a lot of information. We aren't going to give a comprehensive introduction to Agave applications in this notebook. Instead we refer you to the official Agave app tutorial on the website: http://agaveapi.co/documentation/tutorials/app-management-tutorial/ However, we will point out a couple of important points. Let's capture that response in an object called full_app:
full_app = ag.apps.get(appId=app.id)
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
Complex sub-objects of the application such as application inputs and parameters come back as top level attributes. and are represented as lists. The individual elements of the list are represented as AttrDicts. We can see this by exploring our full_app's inputs:
print(type(full_app.inputs))
print(type(full_app.inputs[0]))
full_app.inputs[0]
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
Then, if we want the input id, we can use dot notation or dictionary notation just as before:
full_app.inputs[0].id
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause
You now have the ability to fully explore individual Agave objects returned from agavepy, but what about searching for objects? The Agave platform provides a powerful search feature across most services, and agavepy supports that as well. Every retrieval operation in agavepy (for example, apps.list) supports a "search" argument. The syntax for the search argument is identical to that described in the Agave documentation: it uses a dot notation combining search terms, values and (optional) operators. The search object itself should be a python dictionary with strings for the keys and values. Formally, each key:value pair in the dictionary adheres to the following form: $$term.operator:value$$ The operator is optional and defaults to equality ('eq'). For example, the following search filters the list of all apps down to just those with the id attribute equal to our app.id:
ag.apps.list(search={'id': app.id})
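As a further illustration, a filter with an explicit operator might look like the sketch below. This is a hedged example: it assumes the tenant supports the 'like' wildcard operator and that the apps expose a name field, so adjust the term and pattern to your own data.

```python
# wildcard match on the app name, following the term.operator:value pattern
ag.apps.list(search={'name.like': '*sim*'})
```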
content/notebooks/Python SDK.ipynb
agaveapi/SC17-container-tutorial
bsd-3-clause