Dataset columns:
content: string (85 to 101k characters)
title: string (0 to 150 characters)
question: string (15 to 48k characters)
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: string (35 to 137 characters)
Q: Average of an uarray in python uncertainties My problem: I have an array of ufloats (i.e. a uarray) in Python's uncertainties package. Every value in the array has its own error, and I need a function that gives me the average of the array, taking into account both the error I get when calculating the mean of the nominal values and the influence of the values' own errors. I have a uarray: 2 +/- 1 3 +/- 2 4 +/- 3 and need a function that gives me an average value of the array. Thanks A: Assuming Gaussian statistics, the uncertainties stem from Gaussian parent distributions. In such a case, it is standard to weight the measurements (nominal values) by the inverse variance. Applying this to the general weighted average gives $$ \frac{\sum_i w_i x_i}{\sum_i w_i} = \frac{\sum_i x_i/\sigma_i^2}{\sum_i 1/\sigma_i^2} $$. One need only perform good ol' error propagation on this to get the uncertainty of the weighted average as $$ \sqrt{\frac{1}{\sum_i 1/\sigma_i^2}} $$ I don't have an n-length formula on hand to do this syntactically, but here's how one could get the weighted average and its uncertainty in a simple case: a = un.ufloat(5, 2) b = un.ufloat(8, 4) wavg = un.ufloat((a.n/a.s**2 + b.n/b.s**2)/(1/a.s**2 + 1/b.s**2), np.sqrt(2/(1/a.s**2 + 1/b.s**2))) print(wavg) >>> 5.6+/-2.5298221281347035 As one would expect, the result tends more towards the value with the smaller uncertainty. This is good since a smaller uncertainty in a measurement implies that its associated nominal value is closer to the true value in the parent distribution than those with larger uncertainties. A: Unless I'm missing something, you could calculate the sum divided by the length of the array: from uncertainties import unumpy, ufloat import numpy as np arr = np.array([ufloat(2, 1), ufloat(3, 2), ufloat(4, 3)]) print(sum(arr)/len(arr)) # 3.0+/-1.2 You can also define it like this: arr1 = unumpy.uarray([2, 3, 4], [1, 2, 3]) print(sum(arr1)/len(arr1)) # 3.0+/-1.2 uncertainties takes care of the rest. A: I used Captain Morgan's answer to serve up some sweet Python code for a project and discovered that it needed a little extra ingredient: import numpy as np import uncertainties as un from uncertainties import unumpy as unp from uncertainties import ufloat epsilon = unp.nominal_values(values).mean()/(1e12) wavg = ufloat(sum([v.n/(v.s**2+epsilon) for v in values])/sum([1/(v.s**2+epsilon) for v in values]), np.sqrt(len(values)/sum([1/(v.s**2+epsilon) for v in values]))) if wavg.s <= np.sqrt(epsilon): wavg = ufloat(wavg.n, 0.0) Without that little something (epsilon) we'd get div/0 errors from observations recorded with zero uncertainty. A: If you already have a .csv file which stores variables in 'mean+/-sted' format, you could try the code below; it works for me. import pandas as pd from uncertainties import ufloat_fromstr df = pd.read_csv(r'Z:\compare\SL2P_PAR.csv') for i in range(len(df.uncertainty)): df.loc[i, 'mean'] = ufloat_fromstr(df['uncertainty'][i]).n df.loc[i, 'sted'] = ufloat_fromstr(df['uncertainty'][i]).s
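The accepted answer stops at the two-value case and notes it has no n-length formula on hand. A sketch of one way to generalize it to a whole uarray, assuming the uncertainties and numpy packages and using the textbook inverse-variance result for the propagated error (the answer's two-value snippet scales by the number of points instead, which gives a larger uncertainty):

import numpy as np
from uncertainties import unumpy, ufloat

def weighted_mean(uarr):
    # Pull out nominal values and standard deviations element-wise.
    noms = unumpy.nominal_values(uarr)
    sigmas = unumpy.std_devs(uarr)
    weights = 1.0 / sigmas**2                    # inverse-variance weights
    mean = np.sum(weights * noms) / np.sum(weights)
    err = np.sqrt(1.0 / np.sum(weights))         # propagated error of the weighted mean
    return ufloat(mean, err)

arr = unumpy.uarray([2, 3, 4], [1, 2, 3])
print(weighted_mean(arr))                        # pulled towards the 2 +/- 1 entry

Zero uncertainties would make the weights blow up, so they would need something like the epsilon guard from the third answer.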
Average of an uarray in python uncertainties
My problem: I have an array of ufloats (i.e. a uarray) in Python's uncertainties package. Every value in the array has its own error, and I need a function that gives me the average of the array, taking into account both the error I get when calculating the mean of the nominal values and the influence of the values' own errors. I have a uarray: 2 +/- 1 3 +/- 2 4 +/- 3 and need a function that gives me an average value of the array. Thanks
[ "Assuming Gaussian statistics, the uncertainties stem from Gaussian parent distributions. In such a case, it is standard to weight the measurements (nominal values) by the inverse variance. This application to the general weighted average gives,\n$$ \\frac{\\sum_i w_i x_i}{\\sum_i w_i} = \\frac{\\sum_i x_i/\\sigma_i^2}{\\sum_i 1/\\sigma_i^2} $$.\n\nOne need only perform good 'ol error propagation on this to get an uncertainty of the weighted average as,\n$$ \\sqrt{\\sum_i \\frac{1}{1/\\sum_i \\sigma_i^2}} $$\n\nI don't have an n-length formula to do this syntactically speaking on hand, but here's how one could get the weighted average and its uncertainty in a simple case:\n a = un.ufloat(5, 2)\n b = un.ufloat(8, 4)\n wavg = un.ufloat((a.n/a.s**2 + b.n/b.s**2)/(1/a.s**2 + 1/b.s**2), \n np.sqrt(2/(1/a.s**2 + 1/b.s**2)))\n print(wavg)\n >>> 5.6+/-2.5298221281347035\n\nAs one would expect, the result tends more-so towards the value with the smaller uncertainty. This is good since a smaller uncertainty in a measurement implies that its associated nominal value is closer to the true value in the parent distribution than those with larger uncertainties.\n", "Unless I'm missing something, you could calculate the sum divided by the length of the array:\nfrom uncertainties import unumpy, ufloat\nimport numpy as np\narr = np.array([ufloat(2, 1), ufloat(3, 2), ufloat(4,3)])\nprint(sum(arr)/len(arr))\n# 3.0+/-1.2\n\nYou can also define it like this:\narr1 = unumpy.uarray([2, 3, 4], [1, 2, 3])\nprint(sum(arr1)/len(arr1))\n# 3.0+/-1.2\n\nuncertainties takes care of the rest.\n", "I used Captain Morgan's answer to serve up some sweet Python code for a project and discovered that it needed a little extra ingredient:\n import uncertainties as un\n from un.unumpy import unp\n epsilon = unp.nominal_values(values).mean()/(1e12)\n wavg = ufloat(sum([v.n/(v.s**2+epsilon) for v in values])/sum([1/(v.s**2+epsilon) for v in values]), \n np.sqrt(len(values)/sum([1/(v.s**2+epsilon) for v in values])))\n if wavg.s <= np.sqrt(epsilon):\n wavg = ufloat(wavg.n, 0.0)\n\nWithout that little something (epsilon) we'd get div/0 errors from observations recorded with zero uncertainty.\n", "If you already have a .csv file which stores variables in 'mean+/-sted' format, you could try the code below; it works for me.\nfrom uncertainties import ufloat_fromstr\ndf=pd.read_csv('Z:\\compare\\SL2P_PAR.csv')\nfor i in range(len(df.uncertainty)):\ndf['mean'] = ufloat_fromstr(df['uncertainty'][I]).n\ndf['sted'] = ufloat_fromstr(df['uncertainty'][I]).s\n\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "arrays", "python", "uncertainty" ]
stackoverflow_0043637370_arrays_python_uncertainty.txt
Q: Regex match 3 digits followed by 8 digits or just 8 digits, but do not match 3 digits if it is after decimal I have a string like this: '''82290574 BB BBBBB 3.0 195.00 3.75 0.00 0.00 0.00 85.00 113.75 21.61 135.36 220811 11.08.2022 00.000 82290600 BB BBBBB 2.5 375.00 3.13 0.00 0.00 0.00 225.00 153.13 29.09 182.22 220811 11.08.2022 00.000 82290633 BB BBBBB 36.0 270.00 45.00 0.00 0.00 0.00 122.04 192.96 36.66 229.62 220812 12.08.2022 04.110 123 12345678''' I need to change my regex: \b(?!\.\d{3})(\d{7})(\d)\s*(\d{8})?\b to match: (82290574, 82290600, 82290633, 123 12345678) Right now it matches (82290574, 82290600, 82290633, 12345678). The tricky part is that I cannot find a way so that it would not end up in (82290574, 000 82290600, 000 82290633, 123 12345678), like with: \b(\d{3})?\s*(\d{7})(\d)\s*(\d{8})?\b. I tried looking into negative lookback, but I ended up with \b(?<![.\d])\d+(?![.\d])(\d{3})?\s*(\d{7})(\d)\s*(\d{8})?\b which only matches (123 12345678) A: With your one example, this works: s = '''82290574 BB BBBBB 3.0 195.00 3.75 0.00 0.00 0.00 85.00 113.75 21.61 135.36 220811 11.08.2022 00.000 82290600 BB BBBBB 2.5 375.00 3.13 0.00 0.00 0.00 225.00 153.13 29.09 182.22 220811 11.08.2022 00.000 82290633 BB BBBBB 36.0 270.00 45.00 0.00 0.00 0.00 122.04 192.96 36.66 229.62 220812 12.08.2022 04.110 123 12345678''' print(re.findall(r'\b(?:\d{3} )?\d{8}\b', s)) ['82290574', '82290600', '82290633', '123 12345678'] It works even without the word breaks in this case.
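For reference, a self-contained check of the accepted pattern (the sample string below is a trimmed stand-in for the question's data), plus a guess at a variant that tolerates any whitespace separator while still refusing a 3-digit group that trails a decimal point:

import re

s = "220811 11.08.2022 00.000\n82290600 BB BBBBB 2.5\n123 12345678"

# Accepted answer's pattern: an optional "3 digits + space" prefix, then 8 digits.
print(re.findall(r'\b(?:\d{3} )?\d{8}\b', s))
# ['82290600', '123 12345678']

# If the separator may be any whitespace, \s can replace the literal space;
# the lookbehind keeps a "000" out when it directly follows a decimal point.
print(re.findall(r'\b(?<!\.)(?:\d{3}\s)?\d{8}\b', s))
# ['82290600', '123 12345678']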
Regex match 3 digits followed by 8 digits or just 8 digits, but do not match 3 digits if it is after decimal
I have a string like this: '''82290574 BB BBBBB 3.0 195.00 3.75 0.00 0.00 0.00 85.00 113.75 21.61 135.36 220811 11.08.2022 00.000 82290600 BB BBBBB 2.5 375.00 3.13 0.00 0.00 0.00 225.00 153.13 29.09 182.22 220811 11.08.2022 00.000 82290633 BB BBBBB 36.0 270.00 45.00 0.00 0.00 0.00 122.04 192.96 36.66 229.62 220812 12.08.2022 04.110 123 12345678''' I need to change my regex: \b(?!\.\d{3})(\d{7})(\d)\s*(\d{8})?\b to match: (82290574, 82290600, 82290633, 123 12345678) Right now it matches (82290574, 82290600, 82290633, 12345678). The tricky part is that I cannot find a way so that it would not end up in (82290574, 000 82290600, 000 82290633, 123 12345678), like with: \b(\d{3})?\s*(\d{7})(\d)\s*(\d{8})?\b. I tried looking into negative lookback, but I ended up with \b(?<![.\d])\d+(?![.\d])(\d{3})?\s*(\d{7})(\d)\s*(\d{8})?\b which only matches (123 12345678)
[ "With your one example, this works:\ns = '''82290574 BB BBBBB 3.0 195.00 3.75 0.00 0.00 0.00 85.00 113.75 21.61 135.36 220811 11.08.2022 00.000\n82290600 BB BBBBB 2.5 375.00 3.13 0.00 0.00 0.00 225.00 153.13 29.09 182.22 220811 11.08.2022 00.000\n82290633 BB BBBBB 36.0 270.00 45.00 0.00 0.00 0.00 122.04 192.96 36.66 229.62 220812 12.08.2022 04.110\n123 12345678'''\n\nprint(re.findall(r'\\b(?:\\d{3} )?\\d{8}\\b', s))\n\n['82290574', '82290600', '82290633', '123 12345678']\n\nIt works even without the word breaks in this case.\n" ]
[ 1 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074607791_python_regex.txt
Q: SQLAlchemy: How to change a MySQL server system variable using SQLAlchemy? I want to set the general_log and general_log_file variables using SQLAlchemy, is there a way to do this? I've been Googling around and can't find anything on the topic. A: You can execute any raw SQL query you need (of course you have to get appropriate rights in the session). To change a variable, run something like this: # change variable name and values to what you need connection.execute("SET SESSION query_cache_type = OFF") A: As mentioned previously, you could use the following code to set a variable using the raw Connection object. connection.execute("SET SESSION query_cache_type = OFF") If you have a Session object, you can retrieve the underlying Connection object using the Session.connection() function. So your code could look as follows. session.connection().execute("SET SESSION query_cache_type = OFF")
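A caveat for newer installs: on SQLAlchemy 1.4+ in 2.0-style usage (and on 2.0 itself), execute() no longer accepts a bare SQL string, so the statements above need to be wrapped in text(). A sketch, with the connection URL as a placeholder, and noting that general_log and general_log_file are global variables in MySQL, so the account needs the corresponding privilege:

from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:password@localhost/mydb")  # placeholder URL

with engine.connect() as connection:
    # Both variables are GLOBAL-scope in MySQL (SUPER / SYSTEM_VARIABLES_ADMIN needed).
    connection.execute(text("SET GLOBAL general_log_file = '/tmp/mysql_general.log'"))
    connection.execute(text("SET GLOBAL general_log = 'ON'"))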
SQLAlchemy: How to change a MySQL server system variable using SQLAlchemy?
I want to set the general_log and general_log_file variables using SQLAlchemy, is there a way to do this? I've been Googling around and can't find anything on the topic.
[ "You can execute any raw SQL query which you need (of course you have to get appropriate rights in the session). To change a variable run something like this:\n# change variable name and values to what you need\nconnection.execute(\"SET SESSION query_cache_type = OFF\")\n\n", "As mentioned previously, you could use the following code the set a variable using the raw Connection object.\nconnection.execute(\"SET SESSION query_cache_type = OFF\")\n\nIf you have a Session object, you can retrieve the underlying Connection object using the Session.connection() function.\nSo your code could look as follows.\nsession.connection().execute(\"SET SESSION query_cache_type = OFF\")\n\n" ]
[ 5, 0 ]
[]
[]
[ "mysql", "python", "sqlalchemy" ]
stackoverflow_0030926998_mysql_python_sqlalchemy.txt
Q: Python 3.11 worse optimized than 3.10? I run this simple loop with Python 3.10.7 and 3.11.0 on Windows 10. import time a = 'a' start = time.time() for _ in range(1000000): a += 'a' end = time.time() print(a[:5], (end-start) * 1000) The older version executes in 187ms, Python 3.11 needs about 17000ms. Does 3.10 realize that only the first 5 chars of a are needed, whereas 3.11 executes the whole loop? I confirmed this performance difference on godbolt. A: TL;DR: you should not use such a loop in any performance-critical code; use ''.join instead. The inefficient execution appears to be related to a regression during the bytecode generation in CPython 3.11 (and missing optimizations during the evaluation of the binary add operation on Unicode strings). General guidelines This is an antipattern. You should not write such code if you want it to be fast. This is described in PEP-8: Code should be written in a way that does not disadvantage other implementations of Python (PyPy, Jython, IronPython, Cython, Psyco, and such). For example, do not rely on CPython’s efficient implementation of in-place string concatenation for statements in the form a += b or a = a + b. This optimization is fragile even in CPython (it only works for some types) and isn’t present at all in implementations that don’t use refcounting. In performance sensitive parts of the library, the ''.join() form should be used instead. This will ensure that concatenation occurs in linear time across various implementations. Indeed, other implementations like PyPy do not perform an efficient in-place string concatenation, for example. A new bigger string is created for every iteration (since strings are immutable, the previous one may be referenced, and PyPy uses a garbage collector rather than reference counting). This results in a quadratic runtime as opposed to a linear runtime in CPython (at least in past implementations). Deep Analysis I can reproduce the problem on Windows 10 between the embedded (64-bit x86-64) version of CPython 3.10.8 and the one of 3.11.0: Timings: - CPython 3.10.8: 146.4 ms - CPython 3.11.0: 15186.8 ms It turns out the code has not particularly changed between CPython 3.10 and 3.11 when it comes to Unicode string appending. See for example PyUnicode_Append: 3.10 and 3.11. A low-level profiling analysis shows that nearly all the time is spent in one unnamed function call of another unnamed function called by PyUnicode_Concat (which is also left unmodified between CPython 3.10.8 and 3.11.0). This slow unnamed function contains a pretty small set of assembly instructions and nearly all the time is spent in one unique x86-64 assembly instruction: rep movsb byte ptr [rdi], byte ptr [rsi]. This instruction is basically meant to copy a buffer pointed to by the rsi register to a buffer pointed to by the rdi register (the processor copies rcx bytes from the source buffer to the destination buffer, decrementing the rcx register for each byte until it reaches 0). This information shows that the unnamed function is actually memcpy of the standard MSVC C runtime (i.e. the CRT), which appears to be called by _copy_characters, itself called by _PyUnicode_FastCopyCharacters of PyUnicode_Concat (all of these functions still belong to the same file). However, these CPython functions are still left unmodified between CPython 3.10.8 and 3.11.0.
The non-negligible time spent in malloc/free (about 0.3 seconds) seems to indicate that a lot of new string objects are created -- certainly at least 1 per iteration -- matching with the call to PyUnicode_New in the code of PyUnicode_Concat. All of this indicates that a new bigger string is created and copied as specified above. The thing is, calling PyUnicode_Concat is certainly the root of the performance issue here, and I think CPython 3.10.8 is faster because it certainly calls PyUnicode_Append instead. Both calls are directly performed by the main big interpreter evaluation loop and this loop is driven by the generated bytecode. It turns out that the generated bytecode is different between the two versions and it is the root of the performance issue. Indeed, CPython 3.10 generates an INPLACE_ADD bytecode instruction while CPython 3.11 generates a BINARY_OP bytecode instruction. Here is the bytecode for the loops in the two versions: CPython 3.10 loop: >> 28 FOR_ITER 6 (to 42) 30 STORE_NAME 4 (_) 6 32 LOAD_NAME 1 (a) 34 LOAD_CONST 2 ('a') 36 INPLACE_ADD <---------- 38 STORE_NAME 1 (a) 40 JUMP_ABSOLUTE 14 (to 28) CPython 3.11 loop: >> 66 FOR_ITER 7 (to 82) 68 STORE_NAME 4 (_) 6 70 LOAD_NAME 1 (a) 72 LOAD_CONST 2 ('a') 74 BINARY_OP 13 (+=) <---------- 78 STORE_NAME 1 (a) 80 JUMP_BACKWARD 8 (to 66) This change appears to come from this issue. The code of the main interpreter loop (see ceval.c) is different between the two CPython versions. Here is the code executed by the two versions: // In CPython 3.10.8 case TARGET(INPLACE_ADD): { PyObject *right = POP(); PyObject *left = TOP(); PyObject *sum; if (PyUnicode_CheckExact(left) && PyUnicode_CheckExact(right)) { sum = unicode_concatenate(tstate, left, right, f, next_instr); // <----- /* unicode_concatenate consumed the ref to left */ } else { sum = PyNumber_InPlaceAdd(left, right); Py_DECREF(left); } Py_DECREF(right); SET_TOP(sum); if (sum == NULL) goto error; DISPATCH(); } //---------------------------------------------------------------------------- // In CPython 3.11.0 TARGET(BINARY_OP_ADD_UNICODE) { assert(cframe.use_tracing == 0); PyObject *left = SECOND(); PyObject *right = TOP(); DEOPT_IF(!PyUnicode_CheckExact(left), BINARY_OP); DEOPT_IF(Py_TYPE(right) != Py_TYPE(left), BINARY_OP); STAT_INC(BINARY_OP, hit); PyObject *res = PyUnicode_Concat(left, right); // <----- STACK_SHRINK(1); SET_TOP(res); _Py_DECREF_SPECIALIZED(left, _PyUnicode_ExactDealloc); _Py_DECREF_SPECIALIZED(right, _PyUnicode_ExactDealloc); if (TOP() == NULL) { goto error; } JUMPBY(INLINE_CACHE_ENTRIES_BINARY_OP); DISPATCH(); } Note that unicode_concatenate calls PyUnicode_Append (and does some reference counting checks before). In the end, CPython 3.10.8 calls PyUnicode_Append, which is fast (in-place), and CPython 3.11.0 calls PyUnicode_Concat, which is slow (out-of-place). It clearly looks like a regression to me. People in the comments reported having no performance issue on Linux. However, experimental tests show that a BINARY_OP instruction is also generated on Linux, and so far I cannot find any Linux-specific optimization regarding string concatenation. Thus, the difference between the platforms is pretty surprising. Update: towards a fix I have opened an issue about this, available here. One should note that putting the code in a function is significantly faster due to the variable being local (as pointed out by @Dennis in the comments). Related posts: How slow is Python's string concatenation vs. str.join? Python string 'join' is faster (?) than '+', but what's wrong here?
Python string concatenation in for-loop in-place?
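For completeness, a sketch of the ''.join() form that PEP 8 recommends for the question's loop; it stays linear on every interpreter and sidesteps the 3.11 regression entirely (timings will of course vary by machine):

import time

start = time.time()
a = ''.join('a' for _ in range(1000000))   # or simply 'a' * 1000000
end = time.time()
print(a[:5], (end - start) * 1000)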
Python 3.11 worse optimized than 3.10?
I run this simple loop with Python 3.10.7 and 3.11.0 on Windows 10. import time a = 'a' start = time.time() for _ in range(1000000): a += 'a' end = time.time() print(a[:5], (end-start) * 1000) The older version executes in 187ms, Python 3.11 needs about 17000ms. Does 3.10 realize that only the first 5 chars of a are needed, whereas 3.11 executes the whole loop? I confirmed this performance difference on godbolt.
[ "TL;DR: you should not use such a loop in any performance critical code but ''.join instead. The inefficient execution appears to be related to a regression during the bytecode generation in CPython 3.11 (and missing optimizations during the evaluation of binary add operation on Unicode strings).\n\nGeneral guidelines\nThis is an antipattern. You should not write such a code if you want this to be fast. This is described in PEP-8:\n\nCode should be written in a way that does not disadvantage other implementations of Python (PyPy, Jython, IronPython, Cython, Psyco, and such). \nFor example, do not rely on CPython’s efficient implementation of in-place string concatenation for statements in the form a += b or a = a + b. This optimization is fragile even in CPython (it only works for some types) and isn’t present at all in implementations that don’t use refcounting. In performance sensitive parts of the library, the ''.join() form should be used instead. This will ensure that concatenation occurs in linear time across various implementations.\n\nIndeed, other implementations like PyPy does not perform an efficient in-place string concatenation for example. A new bigger string is created for every iteration (since strings are immutable, the previous one may be referenced and PyPy does not use a reference counting but a garbage collector). This results in a quadratic runtime as opposed to a linear runtime in CPython (at least in past implementation).\n\nDeep Analysis\nI can reproduce the problem on Windows 10 between the embedded (64-bit x86-64) version of CPython 3.10.8 and the one of 3.11.0:\nTimings:\n - CPython 3.10.8: 146.4 ms\n - CPython 3.11.0: 15186.8 ms\n\nIt turns out the code has not particularly changed between CPython 3.10 and 3.11 when it comes to Unicode string appending. See for example PyUnicode_Append: 3.10 and 3.11.\nA low-level profiling analysis shows that nearly all the time is spent in one unnamed function call of another unnamed function called by PyUnicode_Concat (which is also left unmodified between CPython 3.10.8 and 3.11.0). This slow unnamed function contains a pretty small set of assembly instructions and nearly all the time is spent in one unique x86-64 assembly instruction: rep movsb byte ptr [rdi], byte ptr [rsi]. This instruction is basically meant to copy a buffer pointed by the rsi register to a buffer pointed by the rdi register (the processor copy rcx bytes for the source buffer to the destination buffer and decrement the rcx register for each byte until it reach 0). This information shows that the unnamed function is actually memcpy of the standard MSVC C runtime (ie. CRT) which appears to be called by _copy_characters itself called by _PyUnicode_FastCopyCharacters of PyUnicode_Concat (all the functions are still belonging to the same file). However, these CPython functions are still left unmodified between CPython 3.10.8 and 3.11.0. The non-negligible time spent in malloc/free (about 0.3 seconds) seems to indicate that a lot of new string objects are created -- certainly at least 1 per iteration -- matching with the call to PyUnicode_New in the code of PyUnicode_Concat. All of this indicates that a new bigger string is created and copied as specified above.\nThe thing is calling PyUnicode_Concat is certainly the root of the performance issue here and I think CPython 3.10.8 is faster because it certainly calls PyUnicode_Append instead. 
Both calls are directly performed by the main big interpreter evaluation loop and this loop is driven by the generated bytecode.\nIt turns out that the generated bytecode is different between the two version and it is the root of the performance issue. Indeed, CPython 3.10 generates an INPLACE_ADD bytecode instruction while CPython 3.11 generates a BINARY_OP bytecode instruction. Here is the bytecode for the loops in the two versions:\nCPython 3.10 loop:\n\n >> 28 FOR_ITER 6 (to 42)\n 30 STORE_NAME 4 (_)\n 6 32 LOAD_NAME 1 (a)\n 34 LOAD_CONST 2 ('a')\n 36 INPLACE_ADD <----------\n 38 STORE_NAME 1 (a)\n 40 JUMP_ABSOLUTE 14 (to 28)\n\nCPython 3.11 loop:\n\n >> 66 FOR_ITER 7 (to 82)\n 68 STORE_NAME 4 (_)\n 6 70 LOAD_NAME 1 (a)\n 72 LOAD_CONST 2 ('a')\n 74 BINARY_OP 13 (+=) <----------\n 78 STORE_NAME 1 (a)\n 80 JUMP_BACKWARD 8 (to 66)\n\nThis changes appears to come from this issue. The code of the main interpreter loop (see ceval.c) is different between the two CPython version. Here are the code executed by the two versions:\n // In CPython 3.10.8\n case TARGET(INPLACE_ADD): {\n PyObject *right = POP();\n PyObject *left = TOP();\n PyObject *sum;\n if (PyUnicode_CheckExact(left) && PyUnicode_CheckExact(right)) {\n sum = unicode_concatenate(tstate, left, right, f, next_instr); // <-----\n /* unicode_concatenate consumed the ref to left */\n }\n else {\n sum = PyNumber_InPlaceAdd(left, right);\n Py_DECREF(left);\n }\n Py_DECREF(right);\n SET_TOP(sum);\n if (sum == NULL)\n goto error;\n DISPATCH();\n }\n\n//----------------------------------------------------------------------------\n\n // In CPython 3.11.0\n TARGET(BINARY_OP_ADD_UNICODE) {\n assert(cframe.use_tracing == 0);\n PyObject *left = SECOND();\n PyObject *right = TOP();\n DEOPT_IF(!PyUnicode_CheckExact(left), BINARY_OP);\n DEOPT_IF(Py_TYPE(right) != Py_TYPE(left), BINARY_OP);\n STAT_INC(BINARY_OP, hit);\n PyObject *res = PyUnicode_Concat(left, right); // <-----\n STACK_SHRINK(1);\n SET_TOP(res);\n _Py_DECREF_SPECIALIZED(left, _PyUnicode_ExactDealloc);\n _Py_DECREF_SPECIALIZED(right, _PyUnicode_ExactDealloc);\n if (TOP() == NULL) {\n goto error;\n }\n JUMPBY(INLINE_CACHE_ENTRIES_BINARY_OP);\n DISPATCH();\n }\n\nNote that unicode_concatenate calls PyUnicode_Append (and do some reference counting checks before). In the end, CPython 3.10.8 calls PyUnicode_Append which is fast (in-place) and CPython 3.11.0 calls PyUnicode_Concat which is slow (out-of-place). It clearly looks like a regression to me.\nPeople in the comments reported having no performance issue on Linux. However, experimental tests shows a BINARY_OP instruction is also generated on Linux, and I cannot find so far any Linux-specific optimization regarding string concatenation. Thus, the difference between the platforms is pretty surprising.\n\nUpdate: towards a fix\nI have opened an issue about this available here. One should not that putting the code in a function is significantly faster due to the variable being local (as pointed out by @Dennis in the comments).\n\nRelated posts:\n\nHow slow is Python's string concatenation vs. str.join?\nPython string 'join' is faster (?) than '+', but what's wrong here?\nPython string concatenation in for-loop in-place?\n\n" ]
[ 16 ]
[]
[]
[ "optimization", "performance", "python", "python_3.10", "python_3.11" ]
stackoverflow_0074605279_optimization_performance_python_python_3.10_python_3.11.txt
Q: Operation returned an invalid status 'unauthorized' when deploying Python app I am following this guide to deploy python app on Azure. https://learn.microsoft.com/en-us/azure/devops/pipelines/ecosystems/python-webapp?view=azure-devops I was successfully able to clone the repo and authenticated through Github in Azure Portal Shell. But I got above error when I tried to deploy the app using the following command. az webapp up -n PythonFlaskAppExampleApp A: We need to make sure that all the configuration has been done when building the Python web app. Below are a few key points to look into: For a Flask app, App Service looks for the app entry file, and the startup command depends on its name: If application.py gunicorn --bind=0.0.0.0 --timeout 600 application:app If app.py gunicorn --bind=0.0.0.0 --timeout 600 app:app Check for customized build automations in MS Docs Also, below are a few predefined configs which need to be set; check them one by one based on your requirements: az group create -n <some_name> --location westus az appservice plan create --name <some_name> -g <some_name> --sku s1 --location westus --is-linux az webapp create -g <some_name> -n <some_globaly_unique_name> --plan <some_name> --runtime "PYTHON|3.7" az webapp config appsettings set -g <some_name> -n <some_globaly_unique_name> --settings WEBSITE_RUN_FROM_PACKAGE="1" az webapp config appsettings set -g <some_name> -n <some_globaly_unique_name> --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true az webapp restart -n <some_globaly_unique_name> -g <some_name> git clone https://github.com/Azure-Samples/python-docs-hello-world cd .\python-docs-hello-world\ Compress-Archive -Path * -DestinationPath package.zip az webapp deployment source config-zip -n <some_globaly_unique_name> -g <some_name> --src .\package.zip Note: Check for your current versions and replace them accordingly. A: For some reason, I had to manually create the App Service from the Azure Portal UI, as the az webapp up -n <your-appservice> command didn't work. It deployed the app successfully after creating it. A: Try running az account set -s <Subscription ID> first. Even if you have already logged in with the Azure CLI, that still may not pick a subscription for billing. You have to set that first. This is what solved the problem for me. The selected answer did it through the web portal, which does pick a subscription for you.
Operation returned an invalid status 'unauthorized' when deploying Python app
I am following this guide to deploy python app on Azure. https://learn.microsoft.com/en-us/azure/devops/pipelines/ecosystems/python-webapp?view=azure-devops I was successfully able to clone the repo and authenticated through Github in Azure Portal Shell. But I got above error when I tried to deploy the app using the following command. az webapp up -n PythonFlaskAppExampleApp
[ "We need to make sure that all the configurations has been done which building python webapp.\nbelow are few key points where we need to look into:\n\nFor FlaskApp App service looks for App.py which has below content:\n\n\n If application.py\ngunicorn --bind=0.0.0.0 --timeout 600 application:app\n If app.py\ngunicorn --bind=0.0.0.0 --timeout 600 app:app\n\n\n\nCheck for customized build automations in MS Docs\n\nAlso below are few predefined configs which needs to be set, check them one by one based on your requirement:\n az group create -n <some_name> --location westus\n az appservice plan create --name <some_name> -g <some_name> --sku s1 --location westus --is-linux\n az webapp create -g <some_name> -n <some_globaly_unique_name> --plan <some_name> --runtime \"PYTHON|3.7\"\n az webapp config appsettings set -g <some_name> -n <some_globaly_unique_name> --settings WEBSITE_RUN_FROM_PACKAGE=\"1\"\n az webapp config appsettings set -g <some_name> -n <some_globaly_unique_name> --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true\n az webapp restart -n <some_globaly_unique_name> -g <some_name>\n\n git clone https://github.com/Azure-Samples/python-docs-hello-world\n cd .\\python-docs-hello-world\\\n Compress-Archive -Path * -DestinationPath package.zip\n az webapp deployment source config-zip -n <some_globaly_unique_name> -g <some_name> --src .\\package.zip\n\nNote: Check for your current versions and replace them accordingly.\n", "For some reason, I had to manually create app-service from Azure Portal UI, as az webapp up -n <your-appservice> command didn't work. It deployed app successfully after creating it.\n", "Try running az account set -s <Subscription ID> first.\nEven if you already logged in with the Azure CLI, that still may not pick a subscription for billing. You have to set that first. This is what solved the problem for me. The selected answer did it through the web portal, which does pick a subscription for you.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "azure", "azure_web_app_service", "python" ]
stackoverflow_0070015081_azure_azure_web_app_service_python.txt
Q: Turtle is moving slow I created this program but I don't understand why it takes so long to draw the 2 hills of the heart. I could reduce the numbers at my_turtle_cursor.speed(5) in line 117 and in line 128 as I wanted but there was no change. When I changed the numbers in my turtle cursor.speed(1) on line 142, the speed changed as well. import turtle my_turtle_cursor = turtle.Turtle() my_turtle_screen = turtle.Screen() def pause(): my_turtle_cursor.speed(2) for i in range(100): my_turtle_cursor.left(90) def write_Willst_inside_heart(): my_turtle_cursor.penup() my_turtle_cursor.goto(-240, 15) my_turtle_cursor.pencolor("#FFFFFF") my_turtle_cursor.write("Willst", font=("Helvetica", 24, "bold")) def write_du_inside_heart(): my_turtle_cursor.penup() my_turtle_cursor.goto(-140, 15) my_turtle_cursor.pencolor("#FFFFFF") my_turtle_cursor.write("du", font=("Helvetica", 24, "bold")) def write_meine_inside_heart(): my_turtle_cursor.penup() my_turtle_cursor.goto(-90, 15) my_turtle_cursor.pencolor("#FFFFFF") my_turtle_cursor.write("meine", font=("Helvetica", 24, "bold")) def write_Freundin_inside_heart(): my_turtle_cursor.penup() my_turtle_cursor.goto(10, 15) my_turtle_cursor.pencolor("#FFFFFF") my_turtle_cursor.write("Freundin", font=("Helvetica", 24, "bold")) def write_sein_inside_heart(): my_turtle_cursor.penup() my_turtle_cursor.goto(160, 15) my_turtle_cursor.pencolor("#FFFFFF") my_turtle_cursor.write("sein?", font=("Helvetica", 24, "bold")) def draw_complete_heart(): my_turtle_cursor.fillcolor("#FF0000") my_turtle_cursor.begin_fill() my_turtle_cursor.left(140) my_turtle_cursor.forward(294) draw_left_curve_of_heart() my_turtle_cursor.right(190) draw_right_curve_of_heart() my_turtle_cursor.forward(294) my_turtle_cursor.end_fill() def draw_left_curve_of_heart(): my_turtle_cursor.speed(5) # war eigentlich 50 for i in range(450): my_turtle_cursor.right(0.5) my_turtle_cursor.forward(1.2) def draw_right_curve_of_heart(): my_turtle_cursor.speed(5) # war eigentlich 50 for i in range(450): my_turtle_cursor.right(0.5) my_turtle_cursor.forward(1.2) my_turtle_cursor.penup() my_turtle_cursor.goto(0, -200) my_turtle_cursor.pendown() my_turtle_cursor.speed(1) # war eigentlich 50 draw_complete_heart() write_Willst_inside_heart() write_du_inside_heart() write_meine_inside_heart() write_Freundin_inside_heart() write_sein_inside_heart() turtle.done() A: We can get the performance up, without tracer(), by removing the intentional slowdowns (speed(5)) and drawing less precisely: from turtle import Screen, Turtle MESSAGE_FONT = ('Helvetica', 28, 'bold') def write_inside_heart(): turtle.penup() turtle.goto(0, 15) turtle.pencolor('white') turtle.write("Willst du meine Freundin sein?", align='center', font=MESSAGE_FONT) def draw_complete_heart(): turtle.fillcolor('red') turtle.left(140) turtle.begin_fill() turtle.forward(297.5) draw_left_curve_of_heart() turtle.left(170) draw_right_curve_of_heart() turtle.forward(297.5) turtle.end_fill() def draw_left_curve_of_heart(): for _ in range(90): turtle.right(2.5) turtle.forward(6) def draw_right_curve_of_heart(): for _ in range(90): turtle.forward(6) turtle.right(2.5) screen = Screen() turtle = Turtle() turtle.speed('fastest') turtle.penup() turtle.sety(-200) turtle.pendown() draw_complete_heart() write_inside_heart() turtle.hideturtle() screen.exitonclick() We can do better speed-wise, and possibly recapture some precision, by using turtle's circle() method with an extent argument instead of drawing the arcs ourselves: def draw_complete_heart(): turtle.fillcolor('red') 
turtle.left(140) turtle.begin_fill() turtle.forward(288.8) turtle.circle(-135, extent=225) turtle.left(170) turtle.circle(-135, extent=225) turtle.forward(288.8) turtle.end_fill()
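The answer above mentions getting the speed up "without tracer()"; for reference, the tracer-based route also works with the original drawing code unchanged. A sketch (the heart-drawing calls are elided here):

from turtle import Screen, Turtle

screen = Screen()
screen.tracer(0)            # disable per-step animation entirely

turtle = Turtle()
turtle.penup()
turtle.sety(-200)
turtle.pendown()

# ... draw the heart exactly as in the original code ...

screen.update()             # render the finished drawing in one pass
screen.exitonclick()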
Turtle is moving slow
I created this program but I don't understand why it takes so long to draw the 2 hills of the heart. I could reduce the numbers at my_turtle_cursor.speed(5) in line 117 and in line 128 as I wanted but there was no change. When I changed the numbers in my turtle cursor.speed(1) on line 142, the speed changed as well. import turtle my_turtle_cursor = turtle.Turtle() my_turtle_screen = turtle.Screen() def pause(): my_turtle_cursor.speed(2) for i in range(100): my_turtle_cursor.left(90) def write_Willst_inside_heart(): my_turtle_cursor.penup() my_turtle_cursor.goto(-240, 15) my_turtle_cursor.pencolor("#FFFFFF") my_turtle_cursor.write("Willst", font=("Helvetica", 24, "bold")) def write_du_inside_heart(): my_turtle_cursor.penup() my_turtle_cursor.goto(-140, 15) my_turtle_cursor.pencolor("#FFFFFF") my_turtle_cursor.write("du", font=("Helvetica", 24, "bold")) def write_meine_inside_heart(): my_turtle_cursor.penup() my_turtle_cursor.goto(-90, 15) my_turtle_cursor.pencolor("#FFFFFF") my_turtle_cursor.write("meine", font=("Helvetica", 24, "bold")) def write_Freundin_inside_heart(): my_turtle_cursor.penup() my_turtle_cursor.goto(10, 15) my_turtle_cursor.pencolor("#FFFFFF") my_turtle_cursor.write("Freundin", font=("Helvetica", 24, "bold")) def write_sein_inside_heart(): my_turtle_cursor.penup() my_turtle_cursor.goto(160, 15) my_turtle_cursor.pencolor("#FFFFFF") my_turtle_cursor.write("sein?", font=("Helvetica", 24, "bold")) def draw_complete_heart(): my_turtle_cursor.fillcolor("#FF0000") my_turtle_cursor.begin_fill() my_turtle_cursor.left(140) my_turtle_cursor.forward(294) draw_left_curve_of_heart() my_turtle_cursor.right(190) draw_right_curve_of_heart() my_turtle_cursor.forward(294) my_turtle_cursor.end_fill() def draw_left_curve_of_heart(): my_turtle_cursor.speed(5) # war eigentlich 50 for i in range(450): my_turtle_cursor.right(0.5) my_turtle_cursor.forward(1.2) def draw_right_curve_of_heart(): my_turtle_cursor.speed(5) # war eigentlich 50 for i in range(450): my_turtle_cursor.right(0.5) my_turtle_cursor.forward(1.2) my_turtle_cursor.penup() my_turtle_cursor.goto(0, -200) my_turtle_cursor.pendown() my_turtle_cursor.speed(1) # war eigentlich 50 draw_complete_heart() write_Willst_inside_heart() write_du_inside_heart() write_meine_inside_heart() write_Freundin_inside_heart() write_sein_inside_heart() turtle.done()
[ "We can get the performance up, without tracer(), by removing the intentional slowdowns (speed(5)) and drawing less precisely:\nfrom turtle import Screen, Turtle\n\nMESSAGE_FONT = ('Helvetica', 28, 'bold')\n\ndef write_inside_heart():\n turtle.penup()\n turtle.goto(0, 15)\n turtle.pencolor('white')\n turtle.write(\"Willst du meine Freundin sein?\", align='center', font=MESSAGE_FONT)\n\ndef draw_complete_heart():\n turtle.fillcolor('red')\n turtle.left(140)\n\n turtle.begin_fill()\n\n turtle.forward(297.5)\n draw_left_curve_of_heart()\n\n turtle.left(170)\n\n draw_right_curve_of_heart()\n turtle.forward(297.5)\n\n turtle.end_fill()\n\ndef draw_left_curve_of_heart():\n for _ in range(90):\n turtle.right(2.5)\n turtle.forward(6)\n\ndef draw_right_curve_of_heart():\n for _ in range(90):\n turtle.forward(6)\n turtle.right(2.5)\n\nscreen = Screen()\n\nturtle = Turtle()\nturtle.speed('fastest')\n\nturtle.penup()\nturtle.sety(-200)\nturtle.pendown()\n\ndraw_complete_heart()\nwrite_inside_heart()\n\nturtle.hideturtle()\nscreen.exitonclick()\n\nWe can do better speed-wise, and possibly recapture some precision, by using turtle's circle() method with an extent argument instead of drawing the arcs ourselves:\ndef draw_complete_heart():\n turtle.fillcolor('red')\n turtle.left(140)\n\n turtle.begin_fill()\n\n turtle.forward(288.8)\n turtle.circle(-135, extent=225)\n\n turtle.left(170)\n\n turtle.circle(-135, extent=225)\n turtle.forward(288.8)\n\n turtle.end_fill()\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_turtle", "turtle_graphics" ]
stackoverflow_0074606163_python_python_turtle_turtle_graphics.txt
Q: How to make Popen() understand UTF-8 properly? This is my code in Python: [...] proc = Popen(path, stdin=stdin, stdout=PIPE, stderr=PIPE) result = [x for x in proc.stdout.readlines()] result = ''.join(result); Everything works fine, when it's ASCII. When I'm receiving UTF-8 text in stdout the result is unpredictable. In most cases the output is damaged. What is wrong here? Btw, maybe this code should be optimized somehow? A: Have you tried decoding your string, and then combining your UTF-8 strings together? In Python 2.4+ (at least), this can be achieved with result = [x.decode('utf8') for x in proc.stdout.readlines()] The important point is that your lines x are sequences of bytes that must be interpreted as representing characters. The decode() method performs this interpretation (here, the bytes are assumed to be in the UTF-8 encoding): x.decode('utf8') is of type unicode, which you can think of as "string of characters" (which is different from "string of numbers between 0 and 255 [bytes]"). A: I run into the same issue when using LogPipe. I solved this by specifying additional arguments encoding='utf-8', errors='ignore' to fdopen(). # https://codereview.stackexchange.com/questions/6567/redirecting-subprocesses-output-stdout-and-stderr-to-the-logging-module class LogPipe(threading.Thread): def __init__(self): """Setup the object with a logger and a loglevel and start the thread """ threading.Thread.__init__(self) self.daemon = False # self.level = level self.fdRead, self.fdWrite = os.pipe() self.pipeReader = os.fdopen(self.fdRead, encoding='utf-8', errors='ignore') # set utf-8 encoding and just ignore illegal character self.start() def fileno(self): """Return the write file descriptor of the pipe """ return self.fdWrite def run(self): """Run the thread, logging everything. """ for line in iter(self.pipeReader.readline, ''): # vlogger.log(self.level, line.strip('\n')) vlogger.debug(line.strip('\n')) self.pipeReader.close() def close(self): """Close the write end of the pipe. """ os.close(self.fdWrite) A: Short answer Set enviornment variable PYTHONIOENCODING, and set the encoding in Popen: #tst1.py import subprocess import sys, os #print(sys.stdout.encoding) #output: utf-8 this default for interactive console os.environ['PYTHONIOENCODING'] = 'utf-8' p = subprocess.Popen(['python', 'tst2.py'], encoding='utf-8', stdout=subprocess.PIPE, stderr=subprocess.PIPE) #print(p.stdout) #output: <_io.TextIOWrapper name=3 encoding='utf-8'> #print(p.stdout.encoding, ' ', p.stderr.encoding) #ouput: utf-8 utf-8 outs, errors = p.communicate() print(outs, errors) where tst1.py, runs another python script tst2.py, like: #tst2.py import sys print(sys.stdout.encoding) #output: utf-8 print('\u2e85') #a chinese char Long Answer Using PIPE, indicates that a pipe to the standard stream should be opened. A pipe, is a unidirectional data channel that can be used for interprocess communication. Pipes deal with binary, and are agnostic to the encoding. Applications on each side of the pipe should have consensus on the text encoding , if it is text (read more). So firstly, stdout of tst2.py should have utf-8 encoding, otherwise it raises error: UnicodeEncodeError: 'charmap' codec can't encode character '\u2e85' in position 0: character maps to <undefined> The streams sys.stdout and sys.stderr are regular text files like those returned by the open() function. On Windows, non-character devices such as pipes and disk files use the system locale encoding (i.e. an ANSI codepage like CP1252). 
Under all platforms, you can override the character encoding by setting the PYTHONIOENCODING environment variable before running the interpreter. Secondly, tst1.py should know how to read from pipe, thus the encoding='utf-8' in Popen. More Details With python 3.6+, following PEP 528, the default encoding of the interactive console in Windows is utf-8 (it can be changed by setting both PYTHONIOENCODING and PYTHONLEGACYWINDOWSSTDIO). But this does not apply to pipes and redirecting.
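On Python 3 the decoding can also be handed to subprocess itself instead of decoding each line by hand; a minimal sketch (the command is a placeholder):

from subprocess import Popen, PIPE, run

# Popen decodes for you when given an encoding (Python 3.6+).
proc = Popen(["some_command"], stdout=PIPE, stderr=PIPE, encoding="utf-8")
out, err = proc.communicate()
print(out)

# Or, with the higher-level API (Python 3.7+):
result = run(["some_command"], capture_output=True, encoding="utf-8")
print(result.stdout)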
How to make Popen() understand UTF-8 properly?
This is my code in Python: [...] proc = Popen(path, stdin=stdin, stdout=PIPE, stderr=PIPE) result = [x for x in proc.stdout.readlines()] result = ''.join(result); Everything works fine, when it's ASCII. When I'm receiving UTF-8 text in stdout the result is unpredictable. In most cases the output is damaged. What is wrong here? Btw, maybe this code should be optimized somehow?
[ "Have you tried decoding your string, and then combining your UTF-8 strings together? In Python 2.4+ (at least), this can be achieved with\nresult = [x.decode('utf8') for x in proc.stdout.readlines()]\n\nThe important point is that your lines x are sequences of bytes that must be interpreted as representing characters. The decode() method performs this interpretation (here, the bytes are assumed to be in the UTF-8 encoding): x.decode('utf8') is of type unicode, which you can think of as \"string of characters\" (which is different from \"string of numbers between 0 and 255 [bytes]\").\n", "I run into the same issue when using LogPipe. \nI solved this by specifying additional arguments encoding='utf-8', errors='ignore' to fdopen().\n# https://codereview.stackexchange.com/questions/6567/redirecting-subprocesses-output-stdout-and-stderr-to-the-logging-module\nclass LogPipe(threading.Thread):\n def __init__(self):\n \"\"\"Setup the object with a logger and a loglevel\n and start the thread\n \"\"\"\n threading.Thread.__init__(self)\n self.daemon = False\n # self.level = level\n self.fdRead, self.fdWrite = os.pipe()\n self.pipeReader = os.fdopen(self.fdRead, encoding='utf-8', errors='ignore') # set utf-8 encoding and just ignore illegal character\n self.start()\n\n def fileno(self):\n \"\"\"Return the write file descriptor of the pipe\n \"\"\"\n return self.fdWrite\n\n def run(self):\n \"\"\"Run the thread, logging everything.\n \"\"\"\n for line in iter(self.pipeReader.readline, ''):\n # vlogger.log(self.level, line.strip('\\n'))\n vlogger.debug(line.strip('\\n'))\n\n self.pipeReader.close()\n\n def close(self):\n \"\"\"Close the write end of the pipe.\n \"\"\"\n os.close(self.fdWrite)\n\n", "Short answer\nSet enviornment variable PYTHONIOENCODING, and set the encoding in Popen:\n#tst1.py\nimport subprocess\nimport sys, os\n\n#print(sys.stdout.encoding) #output: utf-8 this default for interactive console\nos.environ['PYTHONIOENCODING'] = 'utf-8'\np = subprocess.Popen(['python', 'tst2.py'], encoding='utf-8', stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n#print(p.stdout) #output: <_io.TextIOWrapper name=3 encoding='utf-8'>\n#print(p.stdout.encoding, ' ', p.stderr.encoding) #ouput: utf-8 utf-8\nouts, errors = p.communicate()\nprint(outs, errors)\n\nwhere tst1.py, runs another python script tst2.py, like:\n#tst2.py\nimport sys\n\nprint(sys.stdout.encoding) #output: utf-8\nprint('\\u2e85') #a chinese char\n\nLong Answer\nUsing PIPE, indicates that a pipe to the standard stream should be opened. A pipe, is a unidirectional data channel that can be used for interprocess communication. Pipes deal with binary, and are agnostic to the encoding. Applications on each side of the pipe should have consensus on the text encoding , if it is text (read more).\nSo firstly, stdout of tst2.py should have utf-8 encoding, otherwise it raises error:\nUnicodeEncodeError: 'charmap' codec can't encode character '\\u2e85' in position 0: character maps to <undefined>\n\nThe streams sys.stdout and sys.stderr are regular text files like those returned by the open() function. On Windows, non-character devices such as pipes and disk files use the system locale encoding (i.e. an ANSI codepage like CP1252). 
Under all platforms, you can override the character encoding by setting the PYTHONIOENCODING environment variable before running the interpreter.\nSecondly, tst1.py should know how to read from pipe, thus the encoding='utf-8' in Popen.\nMore Details\nWith python 3.6+, following PEP 528, the default encoding of the interactive console in Windows is utf-8 (it can be changed by setting both PYTHONIOENCODING and PYTHONLEGACYWINDOWSSTDIO). But this does not apply to pipes and redirecting.\n" ]
[ 6, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0003927151_python.txt
Q: tight_layout cannot make axes height small enough to accommodate all axes decorations Is not the first time I encounter this problem and my usual workaround is to explicitly define the figure size and avoid using tight_layout all along (see second code example). However, I find this solution not practical and I would simply like to have the figure getting automatically resized according to its content, taking into consideration also the labels, axis ticks, axis labels, etc. etc.. Below is an example of what I get using tight_layout (Just mimicking a qqplot) import numpy as np import pandas as pd import matplotlib.pyplot as plt #mock data df = pd.DataFrame(np.random.randn(500, 200), columns=['var_0', 'var_1','var_2', 'var_3', 'var_4', 'var_5', 'var_6', 'var_7','var_8','var_9','var_10', 'var_11', 'var_12', 'var_13', 'var_14', 'var_15','var_16','var_17', 'var_18','var_19', 'var_20', 'var_21', 'var_22', 'var_23','var_24', 'var_25', 'var_26', 'var_27', 'var_28', 'var_29', 'var_30', 'var_31', 'var_32', 'var_33', 'var_34', 'var_35', 'var_36', 'var_37', 'var_38', 'var_39', 'var_40', 'var_41', 'var_42', 'var_43', 'var_44', 'var_45', 'var_46', 'var_47', 'var_48', 'var_49', 'var_50', 'var_51', 'var_52', 'var_53', 'var_54', 'var_55', 'var_56', 'var_57', 'var_58', 'var_59', 'var_60', 'var_61', 'var_62', 'var_63', 'var_64', 'var_65', 'var_66', 'var_67', 'var_68', 'var_69', 'var_70', 'var_71', 'var_72', 'var_73', 'var_74', 'var_75', 'var_76', 'var_77', 'var_78', 'var_79', 'var_80', 'var_81', 'var_82', 'var_83', 'var_84', 'var_85', 'var_86', 'var_87', 'var_88', 'var_89', 'var_90', 'var_91', 'var_92', 'var_93', 'var_94', 'var_95', 'var_96', 'var_97', 'var_98', 'var_99', 'var_100', 'var_101', 'var_102', 'var_103', 'var_104', 'var_105', 'var_106', 'var_107', 'var_108', 'var_109', 'var_110', 'var_111', 'var_112', 'var_113', 'var_114', 'var_115', 'var_116', 'var_117', 'var_118', 'var_119', 'var_120', 'var_121', 'var_122', 'var_123', 'var_124', 'var_125', 'var_126', 'var_127', 'var_128', 'var_129', 'var_130', 'var_131', 'var_132', 'var_133', 'var_134', 'var_135', 'var_136', 'var_137', 'var_138', 'var_139', 'var_140', 'var_141', 'var_142', 'var_143', 'var_144', 'var_145', 'var_146', 'var_147', 'var_148', 'var_149', 'var_150', 'var_151', 'var_152', 'var_153', 'var_154', 'var_155', 'var_156', 'var_157', 'var_158', 'var_159', 'var_160', 'var_161', 'var_162', 'var_163', 'var_164', 'var_165', 'var_166', 'var_167', 'var_168', 'var_169', 'var_170', 'var_171', 'var_172', 'var_173', 'var_174', 'var_175', 'var_176', 'var_177', 'var_178', 'var_179', 'var_180', 'var_181', 'var_182', 'var_183', 'var_184', 'var_185', 'var_186', 'var_187', 'var_188', 'var_189', 'var_190', 'var_191', 'var_192', 'var_193', 'var_194', 'var_195', 'var_196', 'var_197', 'var_198', 'var_199']) y=np.random.randint(0,2, (500,1)) #something to plot dfquantiles1=df[y==1].quantile(np.linspace(0,1,101)) dfquantiles0=df[y==0].quantile(np.linspace(0,1,101)) fig,ax = plt.subplots(figsize=(12,12)) for i, var in enumerate(df.iloc[:,:100]): plt.subplot(25, 4, i+1) ax=plt.gca() ax.set_title(var) plt.plot(dfquantiles0.loc[:][var],dfquantiles0.loc[:][var]) plt.plot(dfquantiles0.loc[:][var],dfquantiles1.loc[:][var]) plt.tight_layout() The following code produces the figure as I would like to get it. 
fig,ax = plt.subplots(25,4,figsize=(14,98)) for i, var in enumerate(df.iloc[:,:100]): plt.subplot(25, 4, i+1) ax=plt.gca() ax.set_title(var) ax.plot(dfquantiles0.loc[:][var],dfquantiles0.loc[:][var]) ax.plot(dfquantiles0.loc[:][var],dfquantiles1.loc[:][var]) But it has a main drawbacks: I have to explicitly define the figure size (this is mostly with trial and error and repeated generation of the figure, which is time consuming). A: Why don't you just auto-scale the figsize argument? from math import ceil N=100 fig, axs = plt.subplots( ncols=4, nrows=ceil(N/4), layout='constrained', figsize=(3.5 * 4, 3.5 * ceil(N/4)) ) for (i, var), ax in zip(enumerate(df.iloc[:,:N]), axs.flat): ax.set_title(var) ax.axis('equal') ax.plot(dfquantiles0.loc[:][var], dfquantiles0.loc[:][var]) ax.plot(dfquantiles0.loc[:][var], dfquantiles1.loc[:][var]) I slightly changed your plotting routine to be a bit more straight forward. You might swap ceil(N/4) by -(-N//4) if you don't like importing from math.
tight_layout cannot make axes height small enough to accommodate all axes decorations
Is not the first time I encounter this problem and my usual workaround is to explicitly define the figure size and avoid using tight_layout all along (see second code example). However, I find this solution not practical and I would simply like to have the figure getting automatically resized according to its content, taking into consideration also the labels, axis ticks, axis labels, etc. etc.. Below is an example of what I get using tight_layout (Just mimicking a qqplot) import numpy as np import pandas as pd import matplotlib.pyplot as plt #mock data df = pd.DataFrame(np.random.randn(500, 200), columns=['var_0', 'var_1','var_2', 'var_3', 'var_4', 'var_5', 'var_6', 'var_7','var_8','var_9','var_10', 'var_11', 'var_12', 'var_13', 'var_14', 'var_15','var_16','var_17', 'var_18','var_19', 'var_20', 'var_21', 'var_22', 'var_23','var_24', 'var_25', 'var_26', 'var_27', 'var_28', 'var_29', 'var_30', 'var_31', 'var_32', 'var_33', 'var_34', 'var_35', 'var_36', 'var_37', 'var_38', 'var_39', 'var_40', 'var_41', 'var_42', 'var_43', 'var_44', 'var_45', 'var_46', 'var_47', 'var_48', 'var_49', 'var_50', 'var_51', 'var_52', 'var_53', 'var_54', 'var_55', 'var_56', 'var_57', 'var_58', 'var_59', 'var_60', 'var_61', 'var_62', 'var_63', 'var_64', 'var_65', 'var_66', 'var_67', 'var_68', 'var_69', 'var_70', 'var_71', 'var_72', 'var_73', 'var_74', 'var_75', 'var_76', 'var_77', 'var_78', 'var_79', 'var_80', 'var_81', 'var_82', 'var_83', 'var_84', 'var_85', 'var_86', 'var_87', 'var_88', 'var_89', 'var_90', 'var_91', 'var_92', 'var_93', 'var_94', 'var_95', 'var_96', 'var_97', 'var_98', 'var_99', 'var_100', 'var_101', 'var_102', 'var_103', 'var_104', 'var_105', 'var_106', 'var_107', 'var_108', 'var_109', 'var_110', 'var_111', 'var_112', 'var_113', 'var_114', 'var_115', 'var_116', 'var_117', 'var_118', 'var_119', 'var_120', 'var_121', 'var_122', 'var_123', 'var_124', 'var_125', 'var_126', 'var_127', 'var_128', 'var_129', 'var_130', 'var_131', 'var_132', 'var_133', 'var_134', 'var_135', 'var_136', 'var_137', 'var_138', 'var_139', 'var_140', 'var_141', 'var_142', 'var_143', 'var_144', 'var_145', 'var_146', 'var_147', 'var_148', 'var_149', 'var_150', 'var_151', 'var_152', 'var_153', 'var_154', 'var_155', 'var_156', 'var_157', 'var_158', 'var_159', 'var_160', 'var_161', 'var_162', 'var_163', 'var_164', 'var_165', 'var_166', 'var_167', 'var_168', 'var_169', 'var_170', 'var_171', 'var_172', 'var_173', 'var_174', 'var_175', 'var_176', 'var_177', 'var_178', 'var_179', 'var_180', 'var_181', 'var_182', 'var_183', 'var_184', 'var_185', 'var_186', 'var_187', 'var_188', 'var_189', 'var_190', 'var_191', 'var_192', 'var_193', 'var_194', 'var_195', 'var_196', 'var_197', 'var_198', 'var_199']) y=np.random.randint(0,2, (500,1)) #something to plot dfquantiles1=df[y==1].quantile(np.linspace(0,1,101)) dfquantiles0=df[y==0].quantile(np.linspace(0,1,101)) fig,ax = plt.subplots(figsize=(12,12)) for i, var in enumerate(df.iloc[:,:100]): plt.subplot(25, 4, i+1) ax=plt.gca() ax.set_title(var) plt.plot(dfquantiles0.loc[:][var],dfquantiles0.loc[:][var]) plt.plot(dfquantiles0.loc[:][var],dfquantiles1.loc[:][var]) plt.tight_layout() The following code produces the figure as I would like to get it. 
fig,ax = plt.subplots(25,4,figsize=(14,98)) for i, var in enumerate(df.iloc[:,:100]): plt.subplot(25, 4, i+1) ax=plt.gca() ax.set_title(var) ax.plot(dfquantiles0.loc[:][var],dfquantiles0.loc[:][var]) ax.plot(dfquantiles0.loc[:][var],dfquantiles1.loc[:][var]) But it has a main drawbacks: I have to explicitly define the figure size (this is mostly with trial and error and repeated generation of the figure, which is time consuming).
[ "Why don't you just auto-scale the figsize argument?\nfrom math import ceil\n\nN=100\n\nfig, axs = plt.subplots( ncols=4, nrows=ceil(N/4), layout='constrained',\n figsize=(3.5 * 4, 3.5 * ceil(N/4)) )\n\nfor (i, var), ax in zip(enumerate(df.iloc[:,:N]), axs.flat):\n ax.set_title(var)\n ax.axis('equal')\n ax.plot(dfquantiles0.loc[:][var], dfquantiles0.loc[:][var])\n ax.plot(dfquantiles0.loc[:][var], dfquantiles1.loc[:][var])\n\nI slightly changed your plotting routine to be a bit more straight forward.\nYou might swap ceil(N/4) by -(-N//4) if you don't like importing from math.\n\n" ]
[ 1 ]
[]
[]
[ "autoresize", "matplotlib", "python" ]
stackoverflow_0055475035_autoresize_matplotlib_python.txt
Q: Program returning dictionary with coins for change not recognizing 0.01//0.01 as being 1.0 This code outputs a dictionary containing the number of coins of each type necessary to reach a certain change, while using the least coins possible. def change(money): res = {} coin = 2.0 while coin>=0.01: parcel = money // coin res[coin] = int(parcel) money -= parcel * coin if coin not in (0.5, 0.05): coin = coin/2 else: if coin == 0.5: coin = 0.2 else: coin = 0.02 return res when executing the function for 7.71 it returns the following: {2.0: 3, 1.0: 1, 0.5: 1, 0.2: 1, 0.1: 0, 0.05: 0, 0.02: 0, 0.01: 0} how come it uses 0 coins of 0.01? A: As others have stated, it's because of floating-point rounding errors. A standard workaround is to represent monetary amounts as integer numbers of the smallest unit (usually the cent) instead of as a float number of dollars/euros/pounds/whatever. Here's an alternative implementation of your function that uses integer arithmetic internally, and only uses float for the input and final output (dict keys). It also specifies the denominations in a list “constant” instead of in the code. This lets you easily adapt it to other coinage systems, like the US dollar's [100, 25, 10, 5, 1]. (We technically have a half-dollar in circulation, but it's rarely used.) # Assuming you're using Euro coins here. DENOMINATIONS = [200, 100, 50, 20, 10, 5, 2, 1] def change(money): res = {} cents = round(money * 100) for coin in DENOMINATIONS: parcel, cents = divmod(cents, coin) res[coin / 100] = parcel return res Sample function call: >>> change(7.71) {2.0: 3, 1.0: 1, 0.5: 1, 0.2: 1, 0.1: 0, 0.05: 0, 0.02: 0, 0.01: 1} A: The problem is in float rounding. E. g. 0.71 - 0.5 = 0.20999999999999996. Use round function. def change(money): res = {} coin = 2.0 while coin>=0.01: parcel = money // coin res[coin] = int(parcel) print(parcel, money, coin, parcel*coin) money = round(money - parcel * coin, 2) if coin not in (0.5, 0.05): coin = coin/2 else: if coin == 0.5: coin = 0.2 else: coin = 0.02 return res print(change(7.71)) Result {2.0: 3, 1.0: 1, 0.5: 1, 0.2: 1, 0.1: 0, 0.05: 0, 0.02: 0, 0.01: 1}
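A third option, shown here as a sketch, is to keep the arithmetic exact with the decimal module rather than scaling to integer cents; Decimal(str(money)) avoids inheriting the binary-float error in the first place:

from decimal import Decimal

DENOMINATIONS = [Decimal(c) for c in
                 ("2.00", "1.00", "0.50", "0.20", "0.10", "0.05", "0.02", "0.01")]

def change(money):
    remaining = Decimal(str(money))          # exact representation of "7.71"
    res = {}
    for coin in DENOMINATIONS:
        count, remaining = divmod(remaining, coin)
        res[float(coin)] = int(count)
    return res

print(change(7.71))
# {2.0: 3, 1.0: 1, 0.5: 1, 0.2: 1, 0.1: 0, 0.05: 0, 0.02: 0, 0.01: 1}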
Program returning dictionary with coins for change not recognizing 0.01//0.01 as being 1.0
This code outputs a dictionary containing the number of coins of each type necessary to reach a certain change, while using the least coins possible.
def change(money):
    res = {}
    coin = 2.0
    while coin >= 0.01:
        parcel = money // coin
        res[coin] = int(parcel)
        money -= parcel * coin
        if coin not in (0.5, 0.05):
            coin = coin/2
        else:
            if coin == 0.5:
                coin = 0.2
            else:
                coin = 0.02
    return res
when executing the function for 7.71 it returns the following:
{2.0: 3, 1.0: 1, 0.5: 1, 0.2: 1, 0.1: 0, 0.05: 0, 0.02: 0, 0.01: 0}
how come it uses 0 coins of 0.01?
[ "As others have stated, it's because of floating-point rounding errors.\nA standard workaround is to represent monetary amounts as integer numbers of the smallest unit (usually the cent) instead of as a float number of dollars/euros/pounds/whatever.\nHere's an alternative implementation of your function that uses integer arithmetic internally, and only uses float for the input and final output (dict keys).\nIt also specifies the denominations in a list “constant” instead of in the code. This lets you easily adapt it to other coinage systems, like the US dollar's [100, 25, 10, 5, 1]. (We technically have a half-dollar in circulation, but it's rarely used.)\n# Assuming you're using Euro coins here.\nDENOMINATIONS = [200, 100, 50, 20, 10, 5, 2, 1]\n\ndef change(money):\n res = {}\n cents = round(money * 100)\n for coin in DENOMINATIONS:\n parcel, cents = divmod(cents, coin)\n res[coin / 100] = parcel\n return res\n\nSample function call:\n>>> change(7.71)\n{2.0: 3, 1.0: 1, 0.5: 1, 0.2: 1, 0.1: 0, 0.05: 0, 0.02: 0, 0.01: 1}\n\n", "The problem is in float rounding. E. g. 0.71 - 0.5 = 0.20999999999999996.\nUse round function.\ndef change(money):\n res = {}\n coin = 2.0\n while coin>=0.01:\n parcel = money // coin\n res[coin] = int(parcel)\n print(parcel, money, coin, parcel*coin)\n money = round(money - parcel * coin, 2)\n if coin not in (0.5, 0.05):\n coin = coin/2\n else:\n if coin == 0.5:\n coin = 0.2\n else:\n coin = 0.02\n return res\n\nprint(change(7.71))\n\nResult\n{2.0: 3, 1.0: 1, 0.5: 1, 0.2: 1, 0.1: 0, 0.05: 0, 0.02: 0, 0.01: 1}\n\n" ]
[ 3, 0 ]
[]
[]
[ "dictionary", "floating_point", "function", "math", "python" ]
stackoverflow_0074604355_dictionary_floating_point_function_math_python.txt
Q: Why do I get "ModuleNotFoundError: No module named 'pyperclip'" despite installing it with pip? I get a "module not found" error when using idle while trying to import pyperclip. In command terminal as administrator tried to install pyperclip using: pip install pyperclip Output was: Requirement already satisfied: pyperclip in c:\users\ john smith\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (1.8.2) I previously had anaconda navigator and jupyter notebook. I could import pyperclip in jupyter notebook. I deleted anaconda to try and see if it was because I had pyperclip installed in only location, but it did not solve the problem. So where can I go from here? edit : I uninstalled python 3.7, as i had both 3.7 32 bit and 3.9 64 bit installed, i ran the command : pip install pyperclip again in command output : Requirement already satisfied: pyperclip in c:\users\ john smith\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (1.8.2) module still not found edit: problem solved went into the script for 3.9 and opened a cmd terminal installed pip from there thank you for your responses A: It's possible that you're running two version of Python on your machine. Ex. a 2.7 and a 3.10. If you're on Windows, you can run the command py -0p to list all your python versions and their paths. If you're looking to install pyperclip for a 3+ version of Python, you might want to use pip3 to install it.
Why do I get "ModuleNotFoundError: No module named 'pyperclip'" despite installing it with pip?
I get a "module not found" error when using idle while trying to import pyperclip. In command terminal as administrator tried to install pyperclip using: pip install pyperclip Output was: Requirement already satisfied: pyperclip in c:\users\ john smith\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (1.8.2) I previously had anaconda navigator and jupyter notebook. I could import pyperclip in jupyter notebook. I deleted anaconda to try and see if it was because I had pyperclip installed in only location, but it did not solve the problem. So where can I go from here? edit : I uninstalled python 3.7, as i had both 3.7 32 bit and 3.9 64 bit installed, i ran the command : pip install pyperclip again in command output : Requirement already satisfied: pyperclip in c:\users\ john smith\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (1.8.2) module still not found edit: problem solved went into the script for 3.9 and opened a cmd terminal installed pip from there thank you for your responses
[ "It's possible that you're running two version of Python on your machine. Ex. a 2.7 and a 3.10. If you're on Windows, you can run the command py -0p to list all your python versions and their paths.\nIf you're looking to install pyperclip for a 3+ version of Python, you might want to use pip3 to install it.\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074607321_python_python_3.x.txt
Q: How to position suptitle I'm trying to adjust a suptitle above a multi-panel figure and am having trouble figuring out how to adjust the figsize and subsequently position the suptitle. The problem is that calling plt.suptitle("my title", y=...) to adjust the position of the suptitle also adjusts the figure dimensions. A few questions: where does suptitle(..., y=1.1) actually put the title? As far as I can tell, the documentation for the y parameter of suptitle points to matplotlib.text.Text, but I don't know what figure coordinates mean when you have multiple subplots. what is the effect on figure size when specifying y to suptitle? how do I manually adjust figure size and spacing (subplots_adjust?) to add a figure title per panel and a suptitle for the entire figure, maintaining the size of each ax in the figure? An example: data = np.random.random(size=100) f, a = plt.subplots(2, 2, figsize=(10, 5)) a[0,0].plot(data) a[0,0].set_title("this is a really long title\n"*2) a[0,1].plot(data) a[1,1].plot(data) plt.suptitle("a big long suptitle that runs into the title\n"*2, y=1.05); Obviously I can tweak y each time I make a figure, but I need a solution that generally works without manual intervention. I've tried both constrained layout and tight layout; neither works reliably with figures of any complexity. A: 1. What do figure coordinates mean? Figure coordinates go 0 to 1, where (0,0) is the lower left corner and (1,1) is the upper right corner. A coordinate of y=1.05 is hence slightly outside the figure. 2. what is the effect on figure size when specifying y to suptitle? Specifying y to suptitle has no effect whatsoever on the figure size. 3a. How do I manually adjust figure size and spacing to add a figure title per panel and a suptitle for the entire figure? First, one would not add an additional linebreak. I.e. if you want to have 2 lines, don't use 3 linebreaks (\n). Then one can adjust the subplot parameters as desired to leave space for the titles. E.g. fig.subplots_adjust(top=0.8) and use a y <= 1 for the title to be inside the figure. import matplotlib.pyplot as plt import numpy as np data = np.random.random(size=100) fig, axes = plt.subplots(2, 2, figsize=(10, 5)) fig.subplots_adjust(top=0.8) axes[0,0].plot(data) axes[0,0].set_title("\n".join(["this is a really long title"]*2)) axes[0,1].plot(data) axes[1,1].plot(data) fig.suptitle("\n".join(["a big long suptitle that runs into the title"]*2), y=0.98) plt.show() 3b. ... while maintaining the size of each ax in the figure? Maintaining the size of the axes and still have enough space for the titles is only possible by changing the overall figure size. This could look as follows, where we define a function make_space_above which takes the array of axes as input, as well as the newly desired top margin in units of inches. 
So for example, you come to the conclusion that you need 1 inch of margin on top to host your titles: import matplotlib.pyplot as plt import numpy as np data = np.random.random(size=100) fig, axes = plt.subplots(2, 2, figsize=(10, 5), squeeze = False) axes[0,0].plot(data) axes[0,0].set_title("\n".join(["this is a really long title"]*2)) axes[0,1].plot(data) axes[1,1].plot(data) fig.suptitle("\n".join(["a big long suptitle that runs into the title"]*2), y=0.98) def make_space_above(axes, topmargin=1): """ increase figure size to make topmargin (in inches) space for titles, without changing the axes sizes""" fig = axes.flatten()[0].figure s = fig.subplotpars w, h = fig.get_size_inches() figh = h - (1-s.top)*h + topmargin fig.subplots_adjust(bottom=s.bottom*h/figh, top=1-topmargin/figh) fig.set_figheight(figh) make_space_above(axes, topmargin=1) plt.show() (left: without calling make_space_above; right: with call to make_space_above(axes, topmargin=1)) A: Short Answer For those coming from Google for adjusting the title position on a scatter matrix, you can simply set the y parameter to a value slightly lower than 1: plt.suptitle('My Title', y=0.92) A: ... or use constrained_layout: import matplotlib.pyplot as plt import numpy as np data = np.random.random(size=100) f, a = plt.subplots(2, 2, figsize=(10, 5), constrained_layout=True) a[0,0].plot(data) a[0,0].set_title("this is a really long title\n"*2) a[0,1].plot(data) a[1,1].plot(data) plt.suptitle("a big long suptitle that runs into the title\n"*2); A: A bit of a hacky solution, but if your plots only have 1 column, perhaps consider just add the main title to the title of the first plot, like so: ax[0].set_title("Main Title\
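One more common workaround, not covered in the answers above, is to keep tight_layout but hand it a rect that reserves a strip at the top for the suptitle. This is only a sketch reusing the question's data; the value 0.92 is an assumed fraction you may need to adjust for taller titles:

import matplotlib.pyplot as plt
import numpy as np

data = np.random.random(size=100)
fig, a = plt.subplots(2, 2, figsize=(10, 5))
a[0, 0].plot(data)
a[0, 0].set_title("this is a really long title\n" * 2)
a[0, 1].plot(data)
a[1, 1].plot(data)
fig.suptitle("a big long suptitle that runs into the title\n" * 2)
fig.tight_layout(rect=[0, 0, 1, 0.92])   # keep the axes below y=0.92, leaving room on top
plt.show()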
How to position suptitle
I'm trying to adjust a suptitle above a multi-panel figure and am having trouble figuring out how to adjust the figsize and subsequently position the suptitle. The problem is that calling plt.suptitle("my title", y=...) to adjust the position of the suptitle also adjusts the figure dimensions. A few questions: where does suptitle(..., y=1.1) actually put the title? As far as I can tell, the documentation for the y parameter of suptitle points to matplotlib.text.Text, but I don't know what figure coordinates mean when you have multiple subplots. what is the effect on figure size when specifying y to suptitle? how do I manually adjust figure size and spacing (subplots_adjust?) to add a figure title per panel and a suptitle for the entire figure, maintaining the size of each ax in the figure? An example: data = np.random.random(size=100) f, a = plt.subplots(2, 2, figsize=(10, 5)) a[0,0].plot(data) a[0,0].set_title("this is a really long title\n"*2) a[0,1].plot(data) a[1,1].plot(data) plt.suptitle("a big long suptitle that runs into the title\n"*2, y=1.05); Obviously I can tweak y each time I make a figure, but I need a solution that generally works without manual intervention. I've tried both constrained layout and tight layout; neither works reliably with figures of any complexity.
[ "1. What do figure coordinates mean?\nFigure coordinates go 0 to 1, where (0,0) is the lower left corner and (1,1) is the upper right corner. A coordinate of y=1.05 is hence slightly outside the figure.\n\n2. what is the effect on figure size when specifying y to suptitle?\nSpecifying y to suptitle has no effect whatsoever on the figure size.\n3a. How do I manually adjust figure size and spacing to add a figure title per panel and a suptitle for the entire figure?\nFirst, one would not add an additional linebreak. I.e. if you want to have 2 lines, don't use 3 linebreaks (\\n). Then one can adjust the subplot parameters as desired to leave space for the titles. E.g. fig.subplots_adjust(top=0.8) and use a y <= 1 for the title to be inside the figure.\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndata = np.random.random(size=100)\nfig, axes = plt.subplots(2, 2, figsize=(10, 5))\nfig.subplots_adjust(top=0.8)\n\naxes[0,0].plot(data)\naxes[0,0].set_title(\"\\n\".join([\"this is a really long title\"]*2))\naxes[0,1].plot(data)\naxes[1,1].plot(data)\n\nfig.suptitle(\"\\n\".join([\"a big long suptitle that runs into the title\"]*2), y=0.98)\n\nplt.show()\n\n\n3b. ... while maintaining the size of each ax in the figure?\nMaintaining the size of the axes and still have enough space for the titles is only possible by changing the overall figure size.\nThis could look as follows, where we define a function make_space_above which takes the array of axes as input, as well as the newly desired top margin in units of inches. So for example, you come to the conclusion that you need 1 inch of margin on top to host your titles:\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndata = np.random.random(size=100)\nfig, axes = plt.subplots(2, 2, figsize=(10, 5), squeeze = False)\n\naxes[0,0].plot(data)\naxes[0,0].set_title(\"\\n\".join([\"this is a really long title\"]*2))\naxes[0,1].plot(data)\naxes[1,1].plot(data)\n\nfig.suptitle(\"\\n\".join([\"a big long suptitle that runs into the title\"]*2), y=0.98)\n\n\ndef make_space_above(axes, topmargin=1):\n \"\"\" increase figure size to make topmargin (in inches) space for \n titles, without changing the axes sizes\"\"\"\n fig = axes.flatten()[0].figure\n s = fig.subplotpars\n w, h = fig.get_size_inches()\n\n figh = h - (1-s.top)*h + topmargin\n fig.subplots_adjust(bottom=s.bottom*h/figh, top=1-topmargin/figh)\n fig.set_figheight(figh)\n\n\nmake_space_above(axes, topmargin=1) \n\nplt.show()\n\n\n(left: without calling make_space_above; right: with call to make_space_above(axes, topmargin=1))\n", "Short Answer\nFor those coming from Google for adjusting the title position on a scatter matrix, you can simply set the y parameter to a value slightly lower than 1:\nplt.suptitle('My Title', y=0.92)\n", "... or use constrained_layout:\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndata = np.random.random(size=100)\nf, a = plt.subplots(2, 2, figsize=(10, 5), constrained_layout=True)\n\na[0,0].plot(data)\na[0,0].set_title(\"this is a really long title\\n\"*2)\na[0,1].plot(data)\na[1,1].plot(data)\n\nplt.suptitle(\"a big long suptitle that runs into the title\\n\"*2);\n\n\n", "A bit of a hacky solution, but if your plots only have 1 column, perhaps consider just add the main title to the title of the first plot, like so:\nax[0].set_title(\"Main Title\\\n\n" ]
[ 93, 53, 12, 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0055767312_matplotlib_python.txt
Q: Modified assignment problem (more tasks than agents) Assume that N is the number of agents and M is the number of tasks. The number of tasks is greater than number of agents, i.e. M > N. Each agent must have at least one task. Given rectangular matrix of costs, find the optimal solution (i.e. assign each task to exactly one agent so each agent has at least one task and the cost is minimized). What effective algorithm could solve this problem? I've tried to implement naive recursive algorithm with memorization, but it is too slow for values of M over 1000. I know about Hungarian method, but I wasn't able to use the algorithm with my constraint (each agent must have at least one task). A: This can be formulated as a Minimum Cost Maximum Flow Problem. Start with a sink. It connects to the tasks along channels of cost 0 and flow 1. Each task connects to the agents along channels of costs from your matrix and flow 1. Each agent connects to the sink with a channel of flow 1 and cost 0. Agents are also connected to a pool with channels of flow M-N and high cost. And the pool is connected to the sink with a channel of flow M-N and similarly high cost. The maximum flow with minimum cost will have a flow of M from the source to the tasks. It will then have a flow of M from the source to the agents. A flow of N will take the cheap and narrow pipes to the sink directly from the agents. And the remaining M-N will take the expensive route to the pool, and from the pool to the sink. Because the flow from agents to pool and back again is expensive, there will be only a minimum of flow to the pool, and no flow from the pool back to the agents. Therefore the maximum flow will be an answer to your problem with each agent getting the flow from at least one task. A: Here is a simple optimization model: # sets i : tasks j : agents # model min sum((i,j), c[i,j]*x[i,j]) sum(j, x[i,j]) = 1 ∀i "assign each task to an agent" sum(i, x[i,j]) >= 1 ∀j "each agent needs to do at least one task" x[i,j] ∈ {0,1} These models tend to solve very easily. Here is a small test script: import cvxpy as cp import numpy as np NTASK = 1000 NAGENT = 200 # random data np.random.seed(123) c = np.random.uniform(1,100,(NTASK,NAGENT)) # model x = cp.Variable((NTASK,NAGENT),boolean=True) prob = cp.Problem(cp.Minimize(cp.sum(cp.multiply(c,x))), [cp.sum(x,axis=1) == 1, cp.sum(x,axis=0) >= 1]) # solve and print results res = prob.solve(verbose=True) print(prob.status) print(prob.value) # print(x.value) If you prefer, this model can be solved as an LP by relaxing the x variable to x[i,j] ∈ [0,1].
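For completeness, the flow construction described in the first answer can be written down with networkx. The sketch below is not code from the answer: node names, the toy cost matrix, and the assumption that costs are (or have been scaled to) non-negative integers are all illustrative choices.

import networkx as nx
import numpy as np

def min_cost_assignment(cost):
    # cost[i, j] = integer cost of giving task i to agent j (M tasks x N agents)
    M, N = cost.shape
    G = nx.DiGraph()
    G.add_node('src', demand=-M)          # M units of flow enter the network
    G.add_node('snk', demand=M)           # and all of them must reach the sink
    for i in range(M):
        G.add_edge('src', f't{i}', capacity=1, weight=0)
        for j in range(N):
            G.add_edge(f't{i}', f'a{j}', capacity=1, weight=int(cost[i, j]))
    for j in range(N):
        G.add_edge(f'a{j}', 'snk', capacity=1, weight=0)        # one "free" slot per agent
        G.add_edge(f'a{j}', 'pool', capacity=M - N, weight=0)   # overflow route
    # pool->snk can carry only M-N units, so all N direct agent->snk edges must
    # saturate, i.e. every agent ends up with at least one task.
    G.add_edge('pool', 'snk', capacity=M - N, weight=0)
    flow = nx.min_cost_flow(G)
    return [(i, j) for i in range(M) for j in range(N) if flow[f't{i}'][f'a{j}'] > 0]

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2],
                 [4, 2, 4]])              # 4 tasks, 3 agents (toy example)
print(min_cost_assignment(cost))          # list of (task, agent) pairs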
Modified assignment problem (more tasks than agents)
Assume that N is the number of agents and M is the number of tasks. The number of tasks is greater than the number of agents, i.e. M > N. Each agent must have at least one task. Given a rectangular matrix of costs, find the optimal solution (i.e. assign each task to exactly one agent so that each agent has at least one task and the total cost is minimized). What efficient algorithm could solve this problem? I've tried to implement a naive recursive algorithm with memoization, but it is too slow for values of M over 1000. I know about the Hungarian method, but I wasn't able to use that algorithm with my constraint (each agent must have at least one task).
[ "This can be formulated as a Minimum Cost Maximum Flow Problem.\nStart with a sink. It connects to the tasks along channels of cost 0 and flow 1. Each task connects to the agents along channels of costs from your matrix and flow 1. Each agent connects to the sink with a channel of flow 1 and cost 0. Agents are also connected to a pool with channels of flow M-N and high cost. And the pool is connected to the sink with a channel of flow M-N and similarly high cost.\nThe maximum flow with minimum cost will have a flow of M from the source to the tasks. It will then have a flow of M from the source to the agents. A flow of N will take the cheap and narrow pipes to the sink directly from the agents. And the remaining M-N will take the expensive route to the pool, and from the pool to the sink. Because the flow from agents to pool and back again is expensive, there will be only a minimum of flow to the pool, and no flow from the pool back to the agents.\nTherefore the maximum flow will be an answer to your problem with each agent getting the flow from at least one task.\n", "Here is a simple optimization model:\n# sets\ni : tasks\nj : agents\n\n# model\nmin sum((i,j), c[i,j]*x[i,j])\nsum(j, x[i,j]) = 1 ∀i \"assign each task to an agent\"\nsum(i, x[i,j]) >= 1 ∀j \"each agent needs to do at least one task\"\nx[i,j] ∈ {0,1}\n\nThese models tend to solve very easily.\nHere is a small test script:\nimport cvxpy as cp\nimport numpy as np\n\nNTASK = 1000\nNAGENT = 200\n\n# random data\nnp.random.seed(123)\nc = np.random.uniform(1,100,(NTASK,NAGENT))\n\n# model\nx = cp.Variable((NTASK,NAGENT),boolean=True)\nprob = cp.Problem(cp.Minimize(cp.sum(cp.multiply(c,x))),\n [cp.sum(x,axis=1) == 1, \n cp.sum(x,axis=0) >= 1])\n\n# solve and print results\nres = prob.solve(verbose=True)\nprint(prob.status)\nprint(prob.value)\n# print(x.value) \n\nIf you prefer, this model can be solved as an LP by relaxing the x variable to x[i,j] ∈ [0,1].\n" ]
[ 3, 2 ]
[]
[]
[ "algorithm", "computer_science", "optimization", "python" ]
stackoverflow_0074606127_algorithm_computer_science_optimization_python.txt
Q: Kivy: How To Limit Simultaneous Touches to One? I am using Kivy 1.9.1 and Python 3.5.2. My application breaks when more than one touch event is fired before the first has finished processing. I'm looking for some way to either restrict the number of touch events at a given time to one (something like the max_pointers attribute in the HTML5 engine Phaser) or to filter the touch events and only process the first. As far as I'm aware, the touch event doesn't hold this information about itself. I don't believe the specifics of my code are relevant, but better safe than sorry: def on_touch_down(self, touch): x, y = touch.x, touch.y x -= self.img_board.pos[0] y -= self.img_board.pos[1] if not (0 <= x <= self.img_board.size[0] and 0 <= y <= self.img_board.size[1]): touch.ud['piece'] = None return file = int(x / self.square_size) rank = int(y / self.square_size) self.select((file, rank)) piece = Board.instance.at_square(self.selected) touch.ud['piece'] = piece if piece: self.canvas.remove(piece.rect) self.canvas.after.add(piece.rect) def on_touch_move(self, touch): piece = touch.ud['piece'] if piece: if Board.instance.to_move == piece.color: piece.rect.pos = (touch.x - piece.rect.size[0]/2, touch.y - piece.rect.size[1]/2) def on_touch_up(self, touch): piece = touch.ud['piece'] if piece: self.canvas.after.remove(piece.rect) self.canvas.add(piece.rect) if Board.instance.to_move != piece.color: return x, y = touch.x, touch.y x -= self.img_board.pos[0] y -= self.img_board.pos[1] if not (0 <= x <= self.img_board.size[0] and 0 <= y <= self.img_board.size[1]): self.update_geometry() return dest = Board.an_from_tup( (int(x / self.square_size), int(y / self.square_size)) ) if dest in piece.get_valid_moves(): Board.instance.move(piece.position,dest) self.select(dest) self.update_geometry() A: I was able to implement this by creating a TouchHandler widget: class TouchHandler(Widget): """ Non-display widget to handle touch order """ instance = None def __init__(self): super().__init__() TouchHandler.instance = self self.active = None Then overriding Window's on_motion method prior to calling MyApp().run(): if __name__ == '__main__': # Ignore new touches before first touch has resolved TouchHandler() def on_motion(self, etype, me): if 'pos' in me.profile: if etype == 'begin': if not TouchHandler.instance.active: TouchHandler.instance.active = me Window.bind(on_motion=on_motion) Window.size = (500,500) MyApp().run() For this to work, you need to manage your touch events very carefully and reset TouchHandler.instance.active to None when you're done with the touch in the on_touch_up method for any widget that will use the touch: def on_touch_up(self, touch): if touch != TouchHandler.instance.active: return True # Signals Kivy to kill the event piece = touch.ud['piece'] if piece: # ... self.update_geometry() TouchHandler.instance.active = None A: This is an answer, but I'm kind of afraid of it, I'm pretty sure it's bad programming practice but: I created a global variable with touch_counter = 0, then in on_touch_down I referenced the global variable and then += 1 it, and in on_touch_up I -= 1 it. In on_touch_move I check if the touch_counter == 1 and then performed what I wanted it to do with only one touch. This works with multitouch_sim, I realize this might mess up with actually multitouch devices, but so far so good.
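A lighter-weight variant of the same idea as the answers above, sketched here with made-up class and attribute names: remember the one touch you accept on the widget itself and swallow every other touch until it is released.

from kivy.uix.widget import Widget

class SingleTouchWidget(Widget):
    """Processes at most one touch at a time; extra touches are consumed."""
    active_touch = None

    def on_touch_down(self, touch):
        if self.active_touch is not None:
            return True                      # swallow concurrent touches
        self.active_touch = touch
        return super().on_touch_down(touch)

    def on_touch_move(self, touch):
        if touch is not self.active_touch:
            return True
        return super().on_touch_move(touch)

    def on_touch_up(self, touch):
        if touch is not self.active_touch:
            return True
        self.active_touch = None
        return super().on_touch_up(touch)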
Kivy: How To Limit Simultaneous Touches to One?
I am using Kivy 1.9.1 and Python 3.5.2. My application breaks when more than one touch event is fired before the first has finished processing. I'm looking for some way to either restrict the number of touch events at a given time to one (something like the max_pointers attribute in the HTML5 engine Phaser) or to filter the touch events and only process the first. As far as I'm aware, the touch event doesn't hold this information about itself. I don't believe the specifics of my code are relevant, but better safe than sorry: def on_touch_down(self, touch): x, y = touch.x, touch.y x -= self.img_board.pos[0] y -= self.img_board.pos[1] if not (0 <= x <= self.img_board.size[0] and 0 <= y <= self.img_board.size[1]): touch.ud['piece'] = None return file = int(x / self.square_size) rank = int(y / self.square_size) self.select((file, rank)) piece = Board.instance.at_square(self.selected) touch.ud['piece'] = piece if piece: self.canvas.remove(piece.rect) self.canvas.after.add(piece.rect) def on_touch_move(self, touch): piece = touch.ud['piece'] if piece: if Board.instance.to_move == piece.color: piece.rect.pos = (touch.x - piece.rect.size[0]/2, touch.y - piece.rect.size[1]/2) def on_touch_up(self, touch): piece = touch.ud['piece'] if piece: self.canvas.after.remove(piece.rect) self.canvas.add(piece.rect) if Board.instance.to_move != piece.color: return x, y = touch.x, touch.y x -= self.img_board.pos[0] y -= self.img_board.pos[1] if not (0 <= x <= self.img_board.size[0] and 0 <= y <= self.img_board.size[1]): self.update_geometry() return dest = Board.an_from_tup( (int(x / self.square_size), int(y / self.square_size)) ) if dest in piece.get_valid_moves(): Board.instance.move(piece.position,dest) self.select(dest) self.update_geometry()
[ "I was able to implement this by creating a TouchHandler widget:\nclass TouchHandler(Widget):\n \"\"\" Non-display widget to handle touch order \"\"\"\n instance = None\n\n def __init__(self):\n super().__init__()\n TouchHandler.instance = self\n self.active = None\n\nThen overriding Window's on_motion method prior to calling MyApp().run():\nif __name__ == '__main__':\n # Ignore new touches before first touch has resolved\n TouchHandler()\n def on_motion(self, etype, me):\n if 'pos' in me.profile:\n if etype == 'begin':\n if not TouchHandler.instance.active:\n TouchHandler.instance.active = me\n\n Window.bind(on_motion=on_motion)\n\n Window.size = (500,500)\n MyApp().run()\n\nFor this to work, you need to manage your touch events very carefully and reset TouchHandler.instance.active to None when you're done with the touch in the on_touch_up method for any widget that will use the touch:\ndef on_touch_up(self, touch):\n if touch != TouchHandler.instance.active:\n return True # Signals Kivy to kill the event\n piece = touch.ud['piece']\n if piece:\n # ...\n self.update_geometry()\n TouchHandler.instance.active = None\n\n", "This is an answer, but I'm kind of afraid of it, I'm pretty sure it's bad programming practice but:\nI created a global variable with touch_counter = 0, then in on_touch_down I referenced the global variable and then += 1 it, and in on_touch_up I -= 1 it. In on_touch_move I check if the touch_counter == 1 and then performed what I wanted it to do with only one touch.\nThis works with multitouch_sim, I realize this might mess up with actually multitouch devices, but so far so good.\n" ]
[ 0, 0 ]
[]
[]
[ "kivy", "python", "python_3.x" ]
stackoverflow_0042234075_kivy_python_python_3.x.txt
Q: how to change a date format 01oct2012 to 01-10-2012 I have a weird date format in my data. I currently have 01oct2012 and I would like 01-10-2012. Can someone help me change it! Thank you! I tried using the pd.to_datetime, but I dont think I have the right code. A: As mentioned above this question has many answers. Taking just 1 date as an example: ts = pd.to_datetime('01oct2012') produces the timestamp: Timestamp('2012-10-01 00:00:00') To convert to the desired format: ts.strftime('%d-%m-%Y') yields: '01-10-2012'
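If the whole column needs converting rather than a single value, the same two steps (pd.to_datetime followed by strftime) can be applied column-wise. This sketch assumes a column named 'date' whose values all look like 01oct2012:

import pandas as pd

df = pd.DataFrame({'date': ['01oct2012', '15nov2013']})        # toy column
df['date'] = pd.to_datetime(df['date']).dt.strftime('%d-%m-%Y')
print(df['date'].tolist())                                     # ['01-10-2012', '15-11-2013']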
how to change a date format 01oct2012 to 01-10-2012
I have a weird date format in my data. I currently have 01oct2012 and I would like 01-10-2012. Can someone help me change it? Thank you! I tried using pd.to_datetime, but I don't think I have the right code.
[ "As mentioned above this question has many answers. Taking just 1 date as an example:\nts = pd.to_datetime('01oct2012') produces the timestamp:\nTimestamp('2012-10-01 00:00:00')\nTo convert to the desired format:\nts.strftime('%d-%m-%Y') \n\nyields:\n'01-10-2012'\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074606928_pandas_python.txt
Q: how to create score.py file for azure ml I am new to Azure ML and trying to deploy my model into Azure. My trained model is of classification in which text data is being first processed, then encoded using BERT model and then trained using catBoost. I have already registered my model; however, I am bit confused with the scoring.py script. This is what I using, but not working: import json import joblib import numpy as np import os # Called when the service is loaded def init(): global model # Get the path to the registered model file and load it model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'nlp_cla.pkl') model = joblib.load(model_path) # Called when a request is received def run(raw_data): # Get the input data as a numpy array data = np.array(json.loads(raw_data)['data']) # Get a prediction from the model predictions = model.predict(data) # Return the predictions as any JSON serializable format return predictions.tolist() How I configure my this script so I could deploy on azure? A: You can start debugging from the visual studio code as shown here and deployment sample with score.py. %%writefile source_directory/x/y/score.py import joblib import json import numpy as np import os from inference_schema.schema_decorators import input_schema, output_schema from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType def init(): global model # AZUREML_MODEL_DIR is an environment variable created during deployment. Join this path with the filename of the model file. # It holds the path to the directory that contains the deployed model (./azureml-models/$MODEL_NAME/$VERSION) # If there are multiple models, this value is the path to the directory containing all deployed models (./azureml-models) model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl') # Deserialize the model file back into a sklearn model. model = joblib.load(model_path) global name # Note here, the entire source directory from inference config gets added into image. # Below is an example of how you can use any extra files in image. with open('./source_directory/extradata.json') as json_file: data = json.load(json_file) name = data["people"][0]["name"] input_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]]) output_sample = np.array([3726.995]) @input_schema('data', NumpyParameterType(input_sample)) @output_schema(NumpyParameterType(output_sample)) def run(data): try: result = model.predict(data) # You can return any JSON-serializable object. return "Hello " + name + " here is your result = " + str(result) except Exception as e: error = str(e) return error
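Once the entry script loads and predicts correctly, deployment itself is a separate step. Below is only a rough sketch of that step with the v1 azureml-core SDK; the workspace config, the environment.yml file, and the names "nlp-env", "nlp-model" and "nlp-endpoint" are placeholders, not values from the question:

from azureml.core import Environment, Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()                                   # reads config.json
env = Environment.from_conda_specification("nlp-env", "environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)

model = Model(ws, name="nlp-model")                            # previously registered model
service = Model.deploy(ws, "nlp-endpoint", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)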
how to create score.py file for azure ml
I am new to Azure ML and trying to deploy my model to Azure. My trained model is a classification model in which text data is first preprocessed, then encoded with a BERT model, and then trained with CatBoost. I have already registered my model; however, I am a bit confused about the scoring.py script. This is what I am using, but it is not working:
import json
import joblib
import numpy as np
import os

# Called when the service is loaded
def init():
    global model
    # Get the path to the registered model file and load it
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'nlp_cla.pkl')
    model = joblib.load(model_path)

# Called when a request is received
def run(raw_data):
    # Get the input data as a numpy array
    data = np.array(json.loads(raw_data)['data'])
    # Get a prediction from the model
    predictions = model.predict(data)
    # Return the predictions as any JSON serializable format
    return predictions.tolist()
How do I configure this script so I can deploy it to Azure?
[ "You can start debugging from the visual studio code as shown here and deployment sample with score.py.\n%%writefile source_directory/x/y/score.py\nimport joblib\nimport json\nimport numpy as np\nimport os\n\nfrom inference_schema.schema_decorators import input_schema, output_schema\nfrom inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType\n\ndef init():\n global model\n # AZUREML_MODEL_DIR is an environment variable created during deployment. Join this path with the filename of the model file.\n # It holds the path to the directory that contains the deployed model (./azureml-models/$MODEL_NAME/$VERSION)\n # If there are multiple models, this value is the path to the directory containing all deployed models (./azureml-models)\n model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n # Deserialize the model file back into a sklearn model.\n model = joblib.load(model_path)\n\n global name\n # Note here, the entire source directory from inference config gets added into image.\n # Below is an example of how you can use any extra files in image.\n with open('./source_directory/extradata.json') as json_file:\n data = json.load(json_file)\n name = data[\"people\"][0][\"name\"]\n\ninput_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])\noutput_sample = np.array([3726.995])\n\n@input_schema('data', NumpyParameterType(input_sample))\n@output_schema(NumpyParameterType(output_sample))\ndef run(data):\n try:\n result = model.predict(data)\n # You can return any JSON-serializable object.\n return \"Hello \" + name + \" here is your result = \" + str(result)\n except Exception as e:\n error = str(e)\n return error\n\n" ]
[ 0 ]
[]
[]
[ "azure_machine_learning_service", "azure_machine_learning_studio", "azure_machine_learning_workbench", "python", "scoring" ]
stackoverflow_0074576907_azure_machine_learning_service_azure_machine_learning_studio_azure_machine_learning_workbench_python_scoring.txt
Q: Python popen() - communicate( str.encode(encoding="utf-8", errors="ignore") ) crashes Using Python 3.4.3 on Windows. My script runs a little java program in console, and should get the ouput: import subprocess p1 = subprocess.Popen([ ... ], stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True) out, err = p1.communicate(str.encode("utf-8")) This leads to a normal 'UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 135: character maps to < undefined>'. Now I want to ignore errors: out, err = p1.communicate(str.encode(encoding="utf-8", errors="ignore")) This leads to a more interesting error I found no help for using google: TypeError: descriptor 'encode' of 'str' object needs an argument So it seems that python does not even know anymore what the arguments for str.encode(...) are. The same also applies when you leave out the errors part. A: universal_newlines=True enables text mode. Combined with stdout=PIPE, it forces decoding of the child process' output using locale.getpreferredencoding(False) that is not utf-8 on Windows. That is why you see UnicodeDecodeError. To read the subprocess' output using utf-8 encoding, drop universal_newlines=True: #!/usr/bin/env python3 from subprocess import Popen, PIPE with Popen(r'C:\path\to\program.exe "arg 1" "arg 2"', stdout=PIPE, stderr=PIPE) as p: output, errors = p.communicate() lines = output.decode('utf-8').splitlines() str.encode("utf-8") is equivalent to "utf-8".encode(). There is no point to pass it to .communicate() unless you set stdin=PIPE and the child process expects b'utf-8' bytestring as an input. str.encode(encoding="utf-8", errors="ignore) has the form klass.method(**kwargs). .encode() method expects self (a string object) that is why you see TypeError. >>> str.encode("abc", encoding="utf-8", errors="ignore") #XXX don't do it b'abc' >>> "abc".encode(encoding="utf-8", errors="ignore") b'abc' Do not use klass.method(obj) instead of obj.method() without a good reason. A: You are not supposed to call .encode() on the class itself. What you probably want to do is something like p1.communicate("FOOBAR".encode("utf-8")) The error message you're getting means that the encode() function has nothing to encode, since you called it on the class, rather than on an instance (that would then be passed as the self parameter to encode()). A: If using Popen to run another Python script, then see my answer. The short answer is set the enviornment variable PYTHONIOENCODING, and set encoding='utf-8' in Popen.
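On Python 3.6 and later there is also a shorter route than decoding by hand: Popen/run accept encoding and errors directly, so the pipes are opened in text mode with the codec you choose instead of the locale default (the question's 3.4.3 would still need the manual-decode approach above). A sketch, where the command line is only a placeholder:

import subprocess

result = subprocess.run(
    ["java", "-jar", "program.jar"],        # placeholder for the real command
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    encoding="utf-8", errors="ignore",      # replaces universal_newlines=True
)
print(result.stdout.splitlines())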
Python popen() - communicate( str.encode(encoding="utf-8", errors="ignore") ) crashes
Using Python 3.4.3 on Windows. My script runs a little Java program in the console and should capture its output:
import subprocess

p1 = subprocess.Popen([ ... ], stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
out, err = p1.communicate(str.encode("utf-8"))
This leads to a normal 'UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 135: character maps to <undefined>'. Now I want to ignore errors:
out, err = p1.communicate(str.encode(encoding="utf-8", errors="ignore"))
This leads to a more interesting error for which Google was no help:
TypeError: descriptor 'encode' of 'str' object needs an argument
So it seems that Python no longer even knows what the arguments for str.encode(...) are. The same also applies when the errors part is left out.
[ "universal_newlines=True enables text mode. Combined with stdout=PIPE, it forces decoding of the child process' output using locale.getpreferredencoding(False) that is not utf-8 on Windows. That is why you see UnicodeDecodeError.\nTo read the subprocess' output using utf-8 encoding, drop universal_newlines=True:\n#!/usr/bin/env python3\nfrom subprocess import Popen, PIPE\n\nwith Popen(r'C:\\path\\to\\program.exe \"arg 1\" \"arg 2\"',\n stdout=PIPE, stderr=PIPE) as p:\n output, errors = p.communicate()\nlines = output.decode('utf-8').splitlines()\n\n\nstr.encode(\"utf-8\") is equivalent to \"utf-8\".encode(). There is no point to pass it to .communicate() unless you set stdin=PIPE and the child process expects b'utf-8' bytestring as an input.\nstr.encode(encoding=\"utf-8\", errors=\"ignore) has the form klass.method(**kwargs). .encode() method expects self (a string object) that is why you see TypeError.\n>>> str.encode(\"abc\", encoding=\"utf-8\", errors=\"ignore\") #XXX don't do it\nb'abc'\n>>> \"abc\".encode(encoding=\"utf-8\", errors=\"ignore\")\nb'abc'\n\nDo not use klass.method(obj) instead of obj.method() without a good reason.\n", "You are not supposed to call .encode() on the class itself. What you probably want to do is something like\np1.communicate(\"FOOBAR\".encode(\"utf-8\"))\n\nThe error message you're getting means that the encode() function has nothing to encode, since you called it on the class, rather than on an instance (that would then be passed as the self parameter to encode()).\n", "If using Popen to run another Python script, then see my answer.\nThe short answer is set the enviornment variable PYTHONIOENCODING, and set encoding='utf-8' in Popen.\n" ]
[ 22, 2, 0 ]
[]
[]
[ "encoding", "popen", "python", "python_3.x", "subprocess" ]
stackoverflow_0033283603_encoding_popen_python_python_3.x_subprocess.txt
Q: Trouble Using Feature Columns in TensorFlow I am using the HR Analytics: Employee Promotion Dataset from the following link: https://www.kaggle.com/datasets/arashnic/hr-ana My goal is to train a tensor flow neural network using this dataset in order to classify whether an employee will be promoted or not. Many of the columns in the dataset are... numerical values: ['no_of_trainings', 'age', 'previous_year_rating', 'length_of_service', 'awards_won?', 'avg_training_score'] while the rest are... categorical values: ['department', 'region', 'education', 'gender', 'recruitment_channel']. I am looking for a good way to transform a dataframe that has categorical and numerical columns into a dataset that can effectively train a neural network. I stumbled upon this documentation for feature columns and it seems like it should give me what I want https://www.tensorflow.org/tutorials/structured_data/feature_columns, but I am getting an error message and am not sure why. Below is the code I have tried so far with the error message. The directory "data" stores two csv files (train and test) for the training and testing data pre-made from the kaggle link. # Import training data from kaggle download "data" df = pd.read_csv('data/train.csv') # Removing NaN from dataset df['previous_year_rating'] = df['previous_year_rating'].fillna(0) df['education'] = df['education'].fillna('None') #Define variable names for numerical and categorical data categorical_vars = ['department', 'region', 'education', 'gender', 'recruitment_channel'] numeric_vars = ['no_of_trainings', 'age', 'previous_year_rating', 'length_of_service', 'awards_won?', 'avg_training_score'] # Create Feature Columns feature_columns = [] # Categorical variables for var in categorical_vars: categorical_column = tf.feature_column.categorical_column_with_vocabulary_list( var, df[var].unique()) indicator_column = tf.feature_column.indicator_column(categorical_column) feature_columns.append(indicator_column) # Numeric variables for var in numeric_vars: feature_columns.append(tf.feature_column.numeric_column(var)) # A utility method to create a tf.data dataset from a Pandas Dataframe def df_to_dataset(df): y = df.pop('is_promoted') ds = tf.data.Dataset.from_tensor_slices((dict(df), y)) return ds # Create dataset from dataframe using df_to_datas ds = df_to_dataset(df) # Set random seed tf.random.set_seed(42) # Create Model model = tf.keras.Sequential([ tf.keras.layers.DenseFeatures(feature_columns), tf.keras.layers.Dense(1, activation='relu'), tf.keras.layers.Dense(1, activation='sigmoid') ]) # Compile model.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy(), metrics=['accuracy']) # Train model.fit(ds, epochs=10) Entire Error Message: ValueError Traceback (most recent call last) Cell In [150], line 16 11 model.compile(optimizer='adam', 12 loss=tf.keras.losses.BinaryCrossentropy(), 13 metrics=['accuracy']) 15 # Train ---> 16 model.fit(train_ds, 17 epochs=10) File ~\miniconda3\envs\py310\lib\site-packages\keras\utils\traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs) 67 filtered_tb = _process_traceback_frames(e.__traceback__) 68 # To get the full stack trace, call: 69 # `tf.debugging.disable_traceback_filtering()` ---> 70 raise e.with_traceback(filtered_tb) from None 71 finally: 72 del filtered_tb File ~\AppData\Local\Temp\__autograph_generated_file3hf_ppno.py:15, in outer_factory.<locals>.inner_factory.<locals>.tf__train_function(iterator) 13 try: 14 do_return = True ---> 15 retval_ = 
ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope) 16 except: 17 do_return = False ValueError: in user code: File "C:\Users\mjadw\miniconda3\envs\py310\lib\site-packages\keras\engine\training.py", line 1249, in train_function * return step_function(self, iterator) File "C:\Users\mjadw\miniconda3\envs\py310\lib\site-packages\keras\engine\training.py", line 1233, in step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) File "C:\Users\mjadw\miniconda3\envs\py310\lib\site-packages\keras\engine\training.py", line 1222, in run_step ** outputs = model.train_step(data) File "C:\Users\mjadw\miniconda3\envs\py310\lib\site-packages\keras\engine\training.py", line 1023, in train_step y_pred = self(x, training=True) File "C:\Users\mjadw\miniconda3\envs\py310\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler raise e.with_traceback(filtered_tb) from None ValueError: Exception encountered when calling layer 'dense_features_2' (type DenseFeatures). Feature (key: age) cannot have rank 0. Given: Tensor("IteratorGetNext:0", shape=(), dtype=int64) Call arguments received by layer 'dense_features_2' (type DenseFeatures): • features={'employee_id': 'tf.Tensor(shape=(), dtype=int64)', 'department': 'tf.Tensor(shape=(), dtype=string)', 'region': 'tf.Tensor(shape=(), dtype=string)', 'education': 'tf.Tensor(shape=(), dtype=string)', 'gender': 'tf.Tensor(shape=(), dtype=string)', 'recruitment_channel': 'tf.Tensor(shape=(), dtype=string)', 'no_of_trainings': 'tf.Tensor(shape=(), dtype=int64)', 'age': 'tf.Tensor(shape=(), dtype=int64)', 'previous_year_rating': 'tf.Tensor(shape=(), dtype=float32)', 'length_of_service': 'tf.Tensor(shape=(), dtype=int64)', 'awards_won?': 'tf.Tensor(shape=(), dtype=int64)', 'avg_training_score': 'tf.Tensor(shape=(), dtype=int64)'} • cols_to_output_tensors=None • training=True Any idea what is going wrong here? Or any suggestions for a better way to do this? Essentially I am trying to combine numerical and categorical columns into a single dataset. I do not want to just transform categorical columns into integers with LabelEncoder() for example. A: Solution: the first dimension of each tensorflow dataset element is supposed to be the batch_size. To do this all you have to do is slightly modify the df_to_dataset method: # A utility method to create a tf.data dataset from a Pandas Dataframe def df_to_dataset(df, batch_size=32): y = df.pop('is_promoted') ds = tf.data.Dataset.from_tensor_slices((dict(df), y)) ds = ds.batch(batch_size) return ds Also a '?' cannot be in the name of any of your columns since this is incompatible with tf datasets. You can change the name of this column with the pd.rename() method.
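Putting the pieces from the answer together, the remaining call sites might look like the sketch below. It reuses the names from the question; dropping employee_id is optional, the batch size of 32 is arbitrary, and after the rename the feature-column loop has to be re-run with the updated numeric_vars list:

# rename the column whose name contains '?', drop the id column, then batch
df = df.drop(columns=['employee_id']).rename(columns={'awards_won?': 'awards_won'})
numeric_vars = ['no_of_trainings', 'age', 'previous_year_rating',
                'length_of_service', 'awards_won', 'avg_training_score']
# ... rebuild feature_columns with the updated numeric_vars here ...

ds = df_to_dataset(df, batch_size=32)   # the batched version from the answer
model.fit(ds, epochs=10)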
Trouble Using Feature Columns in TensorFlow
I am using the HR Analytics: Employee Promotion Dataset from the following link: https://www.kaggle.com/datasets/arashnic/hr-ana My goal is to train a tensor flow neural network using this dataset in order to classify whether an employee will be promoted or not. Many of the columns in the dataset are... numerical values: ['no_of_trainings', 'age', 'previous_year_rating', 'length_of_service', 'awards_won?', 'avg_training_score'] while the rest are... categorical values: ['department', 'region', 'education', 'gender', 'recruitment_channel']. I am looking for a good way to transform a dataframe that has categorical and numerical columns into a dataset that can effectively train a neural network. I stumbled upon this documentation for feature columns and it seems like it should give me what I want https://www.tensorflow.org/tutorials/structured_data/feature_columns, but I am getting an error message and am not sure why. Below is the code I have tried so far with the error message. The directory "data" stores two csv files (train and test) for the training and testing data pre-made from the kaggle link. # Import training data from kaggle download "data" df = pd.read_csv('data/train.csv') # Removing NaN from dataset df['previous_year_rating'] = df['previous_year_rating'].fillna(0) df['education'] = df['education'].fillna('None') #Define variable names for numerical and categorical data categorical_vars = ['department', 'region', 'education', 'gender', 'recruitment_channel'] numeric_vars = ['no_of_trainings', 'age', 'previous_year_rating', 'length_of_service', 'awards_won?', 'avg_training_score'] # Create Feature Columns feature_columns = [] # Categorical variables for var in categorical_vars: categorical_column = tf.feature_column.categorical_column_with_vocabulary_list( var, df[var].unique()) indicator_column = tf.feature_column.indicator_column(categorical_column) feature_columns.append(indicator_column) # Numeric variables for var in numeric_vars: feature_columns.append(tf.feature_column.numeric_column(var)) # A utility method to create a tf.data dataset from a Pandas Dataframe def df_to_dataset(df): y = df.pop('is_promoted') ds = tf.data.Dataset.from_tensor_slices((dict(df), y)) return ds # Create dataset from dataframe using df_to_datas ds = df_to_dataset(df) # Set random seed tf.random.set_seed(42) # Create Model model = tf.keras.Sequential([ tf.keras.layers.DenseFeatures(feature_columns), tf.keras.layers.Dense(1, activation='relu'), tf.keras.layers.Dense(1, activation='sigmoid') ]) # Compile model.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy(), metrics=['accuracy']) # Train model.fit(ds, epochs=10) Entire Error Message: ValueError Traceback (most recent call last) Cell In [150], line 16 11 model.compile(optimizer='adam', 12 loss=tf.keras.losses.BinaryCrossentropy(), 13 metrics=['accuracy']) 15 # Train ---> 16 model.fit(train_ds, 17 epochs=10) File ~\miniconda3\envs\py310\lib\site-packages\keras\utils\traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs) 67 filtered_tb = _process_traceback_frames(e.__traceback__) 68 # To get the full stack trace, call: 69 # `tf.debugging.disable_traceback_filtering()` ---> 70 raise e.with_traceback(filtered_tb) from None 71 finally: 72 del filtered_tb File ~\AppData\Local\Temp\__autograph_generated_file3hf_ppno.py:15, in outer_factory.<locals>.inner_factory.<locals>.tf__train_function(iterator) 13 try: 14 do_return = True ---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), 
ag__.ld(iterator)), None, fscope) 16 except: 17 do_return = False ValueError: in user code: File "C:\Users\mjadw\miniconda3\envs\py310\lib\site-packages\keras\engine\training.py", line 1249, in train_function * return step_function(self, iterator) File "C:\Users\mjadw\miniconda3\envs\py310\lib\site-packages\keras\engine\training.py", line 1233, in step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) File "C:\Users\mjadw\miniconda3\envs\py310\lib\site-packages\keras\engine\training.py", line 1222, in run_step ** outputs = model.train_step(data) File "C:\Users\mjadw\miniconda3\envs\py310\lib\site-packages\keras\engine\training.py", line 1023, in train_step y_pred = self(x, training=True) File "C:\Users\mjadw\miniconda3\envs\py310\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler raise e.with_traceback(filtered_tb) from None ValueError: Exception encountered when calling layer 'dense_features_2' (type DenseFeatures). Feature (key: age) cannot have rank 0. Given: Tensor("IteratorGetNext:0", shape=(), dtype=int64) Call arguments received by layer 'dense_features_2' (type DenseFeatures): • features={'employee_id': 'tf.Tensor(shape=(), dtype=int64)', 'department': 'tf.Tensor(shape=(), dtype=string)', 'region': 'tf.Tensor(shape=(), dtype=string)', 'education': 'tf.Tensor(shape=(), dtype=string)', 'gender': 'tf.Tensor(shape=(), dtype=string)', 'recruitment_channel': 'tf.Tensor(shape=(), dtype=string)', 'no_of_trainings': 'tf.Tensor(shape=(), dtype=int64)', 'age': 'tf.Tensor(shape=(), dtype=int64)', 'previous_year_rating': 'tf.Tensor(shape=(), dtype=float32)', 'length_of_service': 'tf.Tensor(shape=(), dtype=int64)', 'awards_won?': 'tf.Tensor(shape=(), dtype=int64)', 'avg_training_score': 'tf.Tensor(shape=(), dtype=int64)'} • cols_to_output_tensors=None • training=True Any idea what is going wrong here? Or any suggestions for a better way to do this? Essentially I am trying to combine numerical and categorical columns into a single dataset. I do not want to just transform categorical columns into integers with LabelEncoder() for example.
[ "Solution: the first dimension of each tensorflow dataset element is supposed to be the batch_size. To do this all you have to do is slightly modify the df_to_dataset method:\n# A utility method to create a tf.data dataset from a Pandas Dataframe\ndef df_to_dataset(df, batch_size=32):\n y = df.pop('is_promoted')\n ds = tf.data.Dataset.from_tensor_slices((dict(df), y))\n ds = ds.batch(batch_size)\n return ds\n\nAlso a '?' cannot be in the name of any of your columns since this is incompatible with tf datasets. You can change the name of this column with the pd.rename() method.\n" ]
[ 0 ]
[]
[]
[ "pandas", "python", "scikit_learn", "tensorflow" ]
stackoverflow_0074607599_pandas_python_scikit_learn_tensorflow.txt
Q: Diagonalizing an unitary matrix with numpy doesn't yield orthonormal eigenvectors I'm trying to diagonalize an unitary matrix using numpy, in particular the numpy.linalg.eig function. Since the matrix is unitary, its eigenvectors are supposed to form an orthonormal basis. However, it seems that this is not the case: import numpy as np from qiskit.circuit.library import QFT from qiskit.quantum_info import Operator op = Operator(QFT(num_qubits=4, do_swaps=True)).data eigvals, eigvecs = np.linalg.eig(op) # Compute the eigenvalues eigvecs = eigvecs.T # Since the eigenvectors are arranged in columns lambda_i = eigvals[2] lambda_j = eigvals[-1] v_i = eigvecs[2].reshape((-1, 1)) v_j = eigvecs[-1].reshape((-1, 1)) print(np.linalg.norm(op @ v_i - lambda_i * v_i)) # Should be close to 0 by definition, actually yields 6.706985734026871e-16, which is fine print(np.linalg.norm(op @ v_j - lambda_j * v_j)) # Should be close to 0 by definition, actually yields 8.151878100248519e-16, which is fine print(v_j.T.conj() @ v_i) # Should be close to zero since the basis is supposed to be orthonormal but actually yields array([[-0.15147621-0.06735767j]]), which is not fine print(np.linalg.norm(op.T.conj() @ op - np.eye(16))) # Should be around zero if and only if op.data is unitary, actually yields 2.3337334181537826e-15, which is ine I've tested it with the qiskit library, however my question is purely numpy-related. If required, the same matrix can be loaded using the following .npy file: b"\x93NUMPY\x01\x00v\x00{'descr': '<c16', 'fortran_order': False, 'shape': (16, 16), } \n\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd?`\xa9\xae\xa6\xe2}\xb8?\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?c\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?^\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd?\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?A\x8d2\xcfk\x90\xcd\xbfc\xa9\xae\xa6\xe2}\xb8?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd\xbf`\xa9\xae\xa6\xe2}\xb8\xbf\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbfc\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf^\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd\xbf\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfA\x8d2\xcfk\x90\xcd?c\xa9\xae\xa6\xe2}\xb8\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\xf
d\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00a\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd?\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?D\x8d2\xcfk\x90\xcd\xbf[\xa9\xae\xa6\xe2}\xb8\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbfC\x8d2\xcfk\x90\xcd?c\xa9\xae\xa6\xe2}\xb8\xbf\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?Z\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00a\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd\xbf\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfD\x8d2\xcfk\x90\xcd?[\xa9\xae\xa6\xe2}\xb8?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?C\x8d2\xcfk\x90\xcd\xbfc\xa9\xae\xa6\xe2}\xb8?\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbfZ\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00^\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd?\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbfC\x8d2\xcfk\x90\xcd?d\xa9\xae\xa6\xe2}\xb8\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?C\x8d2\xcfk\x90\xcd\xbf]\xa9\xae\xa6\xe2}\xb8\xbf\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfd\xa9\xae\xa6\xe2}\xb8?A\x8d2\xcfk\x90\xcd?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00^\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd\xbf\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?C\x8d2\xcfk\x90\xcd\xbfd\xa9\xae\xa6\xe2}\xb8?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbfC\x8d2\xcfk\x90\xcd?]\xa9\xae\xa6\xe2}\xb8?\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?d\xa9\xae\xa6\xe2}\xb8\xbfA\x8d2\xcfk\x90\xcd\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff
\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd\xbfc\xa9\xae\xa6\xe2}\xb8?\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfZ\xa9\xae\xa6\xe2}\xb8\xbfD\x8d2\xcfk\x90\xcd?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbfd\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd?\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbfC\x8d2\xcfk\x90\xcd?Y\xa9\xae\xa6\xe2}\xb8?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd?c\xa9\xae\xa6\xe2}\xb8\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?Z\xa9\xae\xa6\xe2}\xb8?D\x8d2\xcfk\x90\xcd\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?d\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd\xbf\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?C\x8d2\xcfk\x90\xcd\xbfY\xa9\xae\xa6\xe2}\xb8\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd\xbf`\xa9\xae\xa6\xe2}\xb8\xbf\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?c\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?^\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd\xbf\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?A\x8d2\xcfk\x90\xcd?c\xa9\xae\xa6\xe2}\xb8\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd?`\xa9\xae\xa6\xe2}\xb8?\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbfc\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf^\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd?\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfA\x8d2\xcfk\x90\xcd\xbfc\xa9\xae\xa6\xe2}\xb8?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff
\xcf?\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00a\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?D\x8d2\xcfk\x90\xcd?[\xa9\xae\xa6\xe2}\xb8?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbfC\x8d2\xcfk\x90\xcd\xbfc\xa9\xae\xa6\xe2}\xb8?\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?Z\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00a\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd?\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfD\x8d2\xcfk\x90\xcd\xbf[\xa9\xae\xa6\xe2}\xb8\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?C\x8d2\xcfk\x90\xcd?c\xa9\xae\xa6\xe2}\xb8\xbf\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbfZ\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00^\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd\xbf\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbfC\x8d2\xcfk\x90\xcd\xbfd\xa9\xae\xa6\xe2}\xb8?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?C\x8d2\xcfk\x90\xcd?]\xa9\xae\xa6\xe2}\xb8?\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfd\xa9\xae\xa6\xe2}\xb8\xbfA\x8d2\xcfk\x90\xcd\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00^\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd?\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?C\x8d2\xcfk\x90\xcd?d\xa9\xae\xa6\xe2}\xb8\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbfC\x8d2\xcfk\x90\xcd\xbf]\xa9\xae\xa6\xe2}\xb8\xbf\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?d\xa9\xae\xa6\xe2}\xb8?A\x8d2\xcfk\x90\xcd?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xcb;\x7ff\x9e\
xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd?c\xa9\xae\xa6\xe2}\xb8\xbf\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfZ\xa9\xae\xa6\xe2}\xb8?D\x8d2\xcfk\x90\xcd\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbfd\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd\xbf\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbfC\x8d2\xcfk\x90\xcd\xbfY\xa9\xae\xa6\xe2}\xb8\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd\xbfc\xa9\xae\xa6\xe2}\xb8?\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?Z\xa9\xae\xa6\xe2}\xb8\xbfD\x8d2\xcfk\x90\xcd?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?d\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd?\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?C\x8d2\xcfk\x90\xcd?Y\xa9\xae\xa6\xe2}\xb8?" The behavior seems to occur only for purely complex (that is, with a nil real part) eigenvalues. For most of the other eigenvectors, the scalar product does yield 0, but I was able to remplicate this behavior on others. I don't think it is due to some numerical approximation since the final scalar product is quite large. I don't think it is due to some conjugation that I would have forgot. As a sanity check, I've ensured that v_i and v_j are indeed eigenvectors of op, which means they are correct, and which means they should be orthogonal. A: The error in the reasoning comes from this part: As a sanity check, I've ensured that v_i and v_j are indeed eigenvectors of op, which means they are correct, and which means they should be orthogonal. If v_i and v_j are associated to the same eigenvalue lambda, then (v_i+v_j)/sqrt(2) is an eigenvector associated to lambda and which isn't orthogonal to v_i or v_j. In order to ensure that the eigenvectors are orthogonal, simply perform: eigvecs = np.linalg.qr(eigvecs)[0] eigvecs will now contain orthonormal eigenvectors of op.
Diagonalizing a unitary matrix with numpy doesn't yield orthonormal eigenvectors
I'm trying to diagonalize an unitary matrix using numpy, in particular the numpy.linalg.eig function. Since the matrix is unitary, its eigenvectors are supposed to form an orthonormal basis. However, it seems that this is not the case: import numpy as np from qiskit.circuit.library import QFT from qiskit.quantum_info import Operator op = Operator(QFT(num_qubits=4, do_swaps=True)).data eigvals, eigvecs = np.linalg.eig(op) # Compute the eigenvalues eigvecs = eigvecs.T # Since the eigenvectors are arranged in columns lambda_i = eigvals[2] lambda_j = eigvals[-1] v_i = eigvecs[2].reshape((-1, 1)) v_j = eigvecs[-1].reshape((-1, 1)) print(np.linalg.norm(op @ v_i - lambda_i * v_i)) # Should be close to 0 by definition, actually yields 6.706985734026871e-16, which is fine print(np.linalg.norm(op @ v_j - lambda_j * v_j)) # Should be close to 0 by definition, actually yields 8.151878100248519e-16, which is fine print(v_j.T.conj() @ v_i) # Should be close to zero since the basis is supposed to be orthonormal but actually yields array([[-0.15147621-0.06735767j]]), which is not fine print(np.linalg.norm(op.T.conj() @ op - np.eye(16))) # Should be around zero if and only if op.data is unitary, actually yields 2.3337334181537826e-15, which is ine I've tested it with the qiskit library, however my question is purely numpy-related. If required, the same matrix can be loaded using the following .npy file: b"\x93NUMPY\x01\x00v\x00{'descr': '<c16', 'fortran_order': False, 'shape': (16, 16), } \n\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd?`\xa9\xae\xa6\xe2}\xb8?\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?c\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?^\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd?\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?A\x8d2\xcfk\x90\xcd\xbfc\xa9\xae\xa6\xe2}\xb8?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd\xbf`\xa9\xae\xa6\xe2}\xb8\xbf\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbfc\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf^\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd\xbf\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfA\x8d2\xcfk\x90\xcd?c\xa9\xae\xa6\xe2}\xb8\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6\x
bf\xca;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00a\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd?\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?D\x8d2\xcfk\x90\xcd\xbf[\xa9\xae\xa6\xe2}\xb8\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbfC\x8d2\xcfk\x90\xcd?c\xa9\xae\xa6\xe2}\xb8\xbf\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?Z\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00a\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd\xbf\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfD\x8d2\xcfk\x90\xcd?[\xa9\xae\xa6\xe2}\xb8?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?C\x8d2\xcfk\x90\xcd\xbfc\xa9\xae\xa6\xe2}\xb8?\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbfZ\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00^\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd?\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbfC\x8d2\xcfk\x90\xcd?d\xa9\xae\xa6\xe2}\xb8\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?C\x8d2\xcfk\x90\xcd\xbf]\xa9\xae\xa6\xe2}\xb8\xbf\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfd\xa9\xae\xa6\xe2}\xb8?A\x8d2\xcfk\x90\xcd?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00^\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd\xbf\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?C\x8d2\xcfk\x90\xcd\xbfd\xa9\xae\xa6\xe2}\xb8?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbfC\x8d2\xcfk\x90\xcd?]\xa9\xae\xa6\xe2}\xb8?\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?d\xa9\xae\xa6\xe2}\xb8\xbfA\x8d2\xcfk\x90\xcd\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0
\xc6?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd\xbfc\xa9\xae\xa6\xe2}\xb8?\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfZ\xa9\xae\xa6\xe2}\xb8\xbfD\x8d2\xcfk\x90\xcd?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbfd\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd?\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbfC\x8d2\xcfk\x90\xcd?Y\xa9\xae\xa6\xe2}\xb8?\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd?c\xa9\xae\xa6\xe2}\xb8\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?Z\xa9\xae\xa6\xe2}\xb8?D\x8d2\xcfk\x90\xcd\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?d\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd\xbf\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?C\x8d2\xcfk\x90\xcd\xbfY\xa9\xae\xa6\xe2}\xb8\xbf\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd\xbf`\xa9\xae\xa6\xe2}\xb8\xbf\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?c\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?^\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd\xbf\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?A\x8d2\xcfk\x90\xcd?c\xa9\xae\xa6\xe2}\xb8\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd?`\xa9\xae\xa6\xe2}\xb8?\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbfc\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf^\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd?\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfA\x8d2\xcfk\x90\xcd\xbfc\xa9\xae\xa6\xe2}\xb8?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf\xb
f\x00\x00\x00\x00\x00\x00\x00\x00\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00a\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd\xbf\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?D\x8d2\xcfk\x90\xcd?[\xa9\xae\xa6\xe2}\xb8?\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbfC\x8d2\xcfk\x90\xcd\xbfc\xa9\xae\xa6\xe2}\xb8?\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?Z\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00a\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd?\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfD\x8d2\xcfk\x90\xcd\xbf[\xa9\xae\xa6\xe2}\xb8\xbf\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?C\x8d2\xcfk\x90\xcd?c\xa9\xae\xa6\xe2}\xb8\xbf\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbfZ\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00^\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd\xbf\xcb;\x7ff\x9e\xa0\xc6\xbf\xca;\x7ff\x9e\xa0\xc6\xbfC\x8d2\xcfk\x90\xcd\xbfd\xa9\xae\xa6\xe2}\xb8?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?C\x8d2\xcfk\x90\xcd?]\xa9\xae\xa6\xe2}\xb8?\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfd\xa9\xae\xa6\xe2}\xb8\xbfA\x8d2\xcfk\x90\xcd\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00^\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd?\xcb;\x7ff\x9e\xa0\xc6?\xca;\x7ff\x9e\xa0\xc6?C\x8d2\xcfk\x90\xcd?d\xa9\xae\xa6\xe2}\xb8\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbfC\x8d2\xcfk\x90\xcd\xbf]\xa9\xae\xa6\xe2}\xb8\xbf\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?d\xa9\xae\xa6\xe2}\xb8?A\x8d2\xcfk\x90\xcd?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbf\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00\xc9;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?\xfd\xff\xff\xff\xff\xff\xcf?\x00\x00\x00\x00\x00\x00\
x00\x00C\x8d2\xcfk\x90\xcd?c\xa9\xae\xa6\xe2}\xb8\xbf\xca;\x7ff\x9e\xa0\xc6?\xcb;\x7ff\x9e\xa0\xc6\xbfZ\xa9\xae\xa6\xe2}\xb8?D\x8d2\xcfk\x90\xcd\xbf\x05\\\x143&\xa6q\xbc\xfd\xff\xff\xff\xff\xff\xcf\xbfd\xa9\xae\xa6\xe2}\xb8\xbfC\x8d2\xcfk\x90\xcd\xbf\xcb;\x7ff\x9e\xa0\xc6\xbf\xc9;\x7ff\x9e\xa0\xc6\xbfC\x8d2\xcfk\x90\xcd\xbfY\xa9\xae\xa6\xe2}\xb8\xbf\xfd\xff\xff\xff\xff\xff\xcf\xbf\x00\x00\x00\x00\x00\x00\x00\x00C\x8d2\xcfk\x90\xcd\xbfc\xa9\xae\xa6\xe2}\xb8?\xca;\x7ff\x9e\xa0\xc6\xbf\xcb;\x7ff\x9e\xa0\xc6?Z\xa9\xae\xa6\xe2}\xb8\xbfD\x8d2\xcfk\x90\xcd?\x05\\\x143&\xa6q<\xfd\xff\xff\xff\xff\xff\xcf?d\xa9\xae\xa6\xe2}\xb8?C\x8d2\xcfk\x90\xcd?\xcb;\x7ff\x9e\xa0\xc6?\xc9;\x7ff\x9e\xa0\xc6?C\x8d2\xcfk\x90\xcd?Y\xa9\xae\xa6\xe2}\xb8?" The behavior seems to occur only for purely complex (that is, with a nil real part) eigenvalues. For most of the other eigenvectors, the scalar product does yield 0, but I was able to remplicate this behavior on others. I don't think it is due to some numerical approximation since the final scalar product is quite large. I don't think it is due to some conjugation that I would have forgot. As a sanity check, I've ensured that v_i and v_j are indeed eigenvectors of op, which means they are correct, and which means they should be orthogonal.
[ "The error in the reasoning comes from this part:\n\nAs a sanity check, I've ensured that v_i and v_j are indeed eigenvectors of op, which means they are correct, and which means they should be orthogonal.\n\nIf v_i and v_j are associated to the same eigenvalue lambda, then (v_i+v_j)/sqrt(2) is an eigenvector associated to lambda and which isn't orthogonal to v_i or v_j.\nIn order to ensure that the eigenvectors are orthogonal, simply perform:\neigvecs = np.linalg.qr(eigvecs)[0]\n\neigvecs will now contain orthonormal eigenvectors of op.\n" ]
[ 2 ]
[]
[]
[ "linear_algebra", "numpy", "python" ]
stackoverflow_0074607636_linear_algebra_numpy_python.txt
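A minimal sketch of the QR fix described in the answer above. It uses a plain 16x16 DFT matrix as a stand-in for the 4-qubit QFT operator so it runs without qiskit; the matrix construction and variable names here are illustrative assumptions, not taken from the original post:

import numpy as np

n = 16
omega = np.exp(2j * np.pi / n)
# Unitary DFT matrix; with do_swaps=True the 4-qubit QFT is essentially this matrix.
op = np.array([[omega ** (j * k) for k in range(n)] for j in range(n)]) / np.sqrt(n)

eigvals, eigvecs = np.linalg.eig(op)

# The DFT has repeated eigenvalues (1, -1, i, -i), so eig() is free to return
# non-orthogonal vectors within each degenerate eigenspace.
print(np.linalg.norm(eigvecs.conj().T @ eigvecs - np.eye(n)))  # typically far from 0

# QR re-orthonormalizes the columns; because a unitary (normal) matrix has
# mutually orthogonal eigenspaces, the Q columns are still eigenvectors,
# up to floating-point error.
q, _ = np.linalg.qr(eigvecs)
print(np.linalg.norm(q.conj().T @ q - np.eye(n)))  # ~0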
Q: How to correctly use typing for a sort key function For this sort key method: def srt(self,r:Optional[Colour])->Optional[int]: if not isinstance(r,Colour): return None return r.index used in a later method in the same class (NOT Colour): items=sorted(self.__dict__.values(),key=self.srt) results in this error from mypy: error: Argument "key" to "sorted" has incompatible type "Callable[[Optional[Colour]], Optional[int]]"; expected "Callable[[Any], SupportsLessThan]" [arg-type] I can make it shut up by either forcing srt() to the signature it wants, or by doing a type: ignore on the call to sorted. But I'd rather do it right. I'm clearly doing something wrong, but what? Even if there is no good way to do this, I want to know why mypy is complaining. A: You are passing a function that returns Optional[int] but a function is required that returns SupportsLessThan. Optional[int] does not support "less than", because None cannot be compared to any int. What does support "less than" is just int. So you could change the return type of your function to int and make it return some integer in the special case that r is not a Colour, instead of None. Which integer it should return in this case depends on what you are trying to achieve. Possibly you could return -1 in case you want any non-Colour values to be sorted before any Colour values, since that would be smaller than any value of r.index (at least that's what I expect from something called "index").
How to correctly use typing for a sort key function
For this sort key method: def srt(self,r:Optional[Colour])->Optional[int]: if not isinstance(r,Colour): return None return r.index used in a later method in the same class (NOT Colour): items=sorted(self.__dict__.values(),key=self.srt) results in this error from mypy: error: Argument "key" to "sorted" has incompatible type "Callable[[Optional[Colour]], Optional[int]]"; expected "Callable[[Any], SupportsLessThan]" [arg-type] I can make it shut up by either forcing srt() to the signature it wants, or by doing a type: ignore on the call to sorted. But I'd rather do it right. I'm clearly doing something wrong, but what? Even if there is no good way to do this, I want to know why mypy is complaining.
[ "You are passing a function that returns Optional[int] but a function is required that returns SupportsLessThan. Optional[int] does not support \"less than\", because None cannot be compared to any int.\nWhat does support \"less than\" is just int. So you could change the return type of your function to int and make it return some integer in the special case that r is not a Colour, instead of None.\nWhich integer it should return in this case depends on what you are trying to achieve. Possibly you could return -1 in case you want any non-Colour values to be sorted before any Colour values, since that would be smaller than any value of r.index (at least that's what I expect from something called \"index\").\n" ]
[ 2 ]
[]
[]
[ "python", "python_3.x", "sorting", "typing" ]
stackoverflow_0074608046_python_python_3.x_sorting_typing.txt
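A self-contained sketch of the fix the answer suggests: the key function returns a plain int (with -1 as a sentinel for non-Colour values), so its type is Callable[[Any], int] and satisfies SupportsLessThan. The Colour and Palette classes below are stand-ins invented for illustration, not the original poster's code:

from typing import Any

class Colour:
    def __init__(self, index: int) -> None:
        self.index = index

class Palette:
    def __init__(self) -> None:
        self.red = Colour(2)
        self.green = Colour(0)
        self.note = "not a colour"  # a non-Colour attribute mixed in with the rest

    def srt(self, r: Any) -> int:
        # Returning -1 instead of None keeps the return type a plain int,
        # so mypy accepts it as a sort key; non-Colour values simply sort
        # before every real Colour index.
        if not isinstance(r, Colour):
            return -1
        return r.index

    def ordered(self) -> list:
        return sorted(self.__dict__.values(), key=self.srt)

p = Palette()
print([getattr(v, "index", v) for v in p.ordered()])
# ['not a colour', 0, 2]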
Q: How would I visually graph 2 string type data using python? I am rather new to coding, and tutorial hell has started to show its toll. I need help to graph data that are both strings. I have attempted transforming the data using matplotlib and pandas. However, I don't seem to be able to graph them, as the ones I have used require int type data. I have managed to group the data using df.groupby(['type', 'url']).sum() My current goal is to get the sum (how many are in each type) of each group and graph them. Dataset link below Kaggle - Malicious Links Edit: Had an Image here. Made it into a code block instead: df = pd.read_csv('/content/malicious_phish.csv') df <output: csv contents> df.shape <output: 651191, 2> df.groupby(['type', 'url']).sum() <output: corrupted text in a table> Not sure if this is any better. I have tried using len() and .sum() or .count(). I have started to read into the matplotlib and pandas libraries for functions and tools to use, and hopefully resolve this problem. A: from collections import Counter Counter(df['Wafer']) To plot the dict result, the following link is helpful https://stackoverflow.com/a/52572237/16353662.
How would I visually graph 2 string type data using python?
I am rather new to coding, and tutorial hell has started to show its toll. I need help to graph data that are both strings. I have attempted transforming the data using matplotlib and pandas. However, I don't seem to be able to graph them, as the ones I have used require int type data. I have managed to group the data using df.groupby(['type', 'url']).sum() My current goal is to get the sum (how many are in each type) of each group and graph them. Dataset link below Kaggle - Malicious Links Edit: Had an Image here. Made it into a code block instead: df = pd.read_csv('/content/malicious_phish.csv') df <output: csv contents> df.shape <output: 651191, 2> df.groupby(['type', 'url']).sum() <output: corrupted text in a table> Not sure if this is any better. I have tried using len() and .sum() or .count(). I have started to read into the matplotlib and pandas libraries for functions and tools to use, and hopefully resolve this problem.
[ "from collections import Counter\nCounter(df['Wafer'])\n\nTo plot the dict result, the following link is helpful https://stackoverflow.com/a/52572237/16353662.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "google_colaboratory", "python" ]
stackoverflow_0074607897_dataframe_google_colaboratory_python.txt
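A small sketch of the counting-then-plotting step discussed above. The 'type' column name mirrors the Kaggle malicious-links dataset from the question, but the inline DataFrame is made-up sample data so the snippet runs without downloading the CSV:

import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "url": ["a.com", "b.com", "c.com", "d.com", "e.com", "f.com"],
    "type": ["benign", "phishing", "benign", "defacement", "benign", "phishing"],
})

# value_counts() converts the string column into integer counts per category,
# which is exactly the numeric data a bar chart needs.
counts = df["type"].value_counts()
print(counts)

counts.plot(kind="bar", title="Rows per type")
plt.xlabel("type")
plt.ylabel("count")
plt.tight_layout()
plt.show()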
Q: How to create an array of NA or Null values in Python? This is easy to do in R and I am wondering if it is straight forward in Python and I am just missing something, but how do you create a vector of NaN values and Null values in Python? I am trying to do this using the np.full function. R Code: vec <- vector("character", 15) vec[1:15] <- NA vec Python Code unknowns = np.full(shape = 5, fill_value = ???, dtype = 'str') '''test if fill value worked or not''' random.seed(1177) categories = np.random.choice(['web', 'software', 'hardware', 'biotech'], size = 15, replace = True) categories = np.concatenate([categories, unknowns]) example = pd.DataFrame(data = {'categories': categories}) example['transformed'] = [ x if pd.isna(x) == False else 'unknown' for x in example['categories']] print(example['transformed'].value_counts()) This should lead to 5 counts of unknown in the value counts total. Ideally I would like to know how to write this fill_value for NaN and Null and know whether it differs for variable types. I have tried np.nan with and without the string data type. I have tried None and Null with and without quotes. I cannot think of anything else to try and I am starting to wonder if it is possible. Thank you in advance and I apologize if this question is already addressed and for my lack of knowledge in this area. A: you could use either None or np.nan to create an array of just missing values in Python like so: np.full(shape=5, fill_value=None) np.full(shape=5, fill_value=np.nan) back to your example, this works just fine: import numpy as np import pandas as pd unknowns = np.full(shape=5, fill_value=None) categories = np.random.choice(['web', 'software', 'hardware', 'biotech'], size = 15, replace = True) categories = np.concatenate([categories, unknowns]) example = pd.DataFrame(data = {'categories': categories}) example['transformed'] = [ x if pd.isna(x) == False else 'unknown' for x in example['categories']] print(example['transformed'].value_counts()) Lastly, this line is inefficient. example['transformed'] = [ x if pd.isna(x) == False else 'unknown' for x in example['categories']] You do want to avoid loops & list comprehensions when using pandas on large data, this is going to run much faster: example['transformed'] = example.categories.apply(lambda s: s if s else 'unknown') A: There is a typing problem here. If you're working in numpy, vectors are typed after being initialized. Assigning a np.nan value to a vector initialized with strings will try to coalesce back into a string: import numpy as np v1 = np.array(['a', 'b', 'c']) v1[0] = np.nan # v1 = array(['n', 'b', 'c'], dtype='<U1') v2 = np.array(['ab', 'cd', 'ef']) v2[0] = np.nan # v2 = array(['na', 'cd', 'ef'], dtype='<U2') v3 = np.array(['abc', 'def', 'ghi']) v3[0] = np.nan # v3 = array(['nan', 'def', 'ghi'], dtype='<U3') However, if you're working with pandas in the second half of the question, there's a separate way for handling missing data: import pandas as pd df = pd.DataFrame({"x": [pd.NA, "Hello", "World"]}) A: A simple way to create an empty Series in pandas: s = pd.Series(index=range(15)) Output: 0 NaN 1 NaN 2 NaN 3 NaN 4 NaN 5 NaN 6 NaN 7 NaN 8 NaN 9 NaN 10 NaN 11 NaN 12 NaN 13 NaN 14 NaN dtype: float64 Or, with a string dtype: s = pd.Series(index=range(15), dtype='string') Output: 0 <NA> 1 <NA> 2 <NA> 3 <NA> 4 <NA> 5 <NA> 6 <NA> 7 <NA> 8 <NA> 9 <NA> 10 <NA> 11 <NA> 12 <NA> 13 <NA> 14 <NA> dtype: string
How to create an array of NA or Null values in Python?
This is easy to do in R and I am wondering if it is straight forward in Python and I am just missing something, but how do you create a vector of NaN values and Null values in Python? I am trying to do this using the np.full function. R Code: vec <- vector("character", 15) vec[1:15] <- NA vec Python Code unknowns = np.full(shape = 5, fill_value = ???, dtype = 'str') '''test if fill value worked or not''' random.seed(1177) categories = np.random.choice(['web', 'software', 'hardware', 'biotech'], size = 15, replace = True) categories = np.concatenate([categories, unknowns]) example = pd.DataFrame(data = {'categories': categories}) example['transformed'] = [ x if pd.isna(x) == False else 'unknown' for x in example['categories']] print(example['transformed'].value_counts()) This should lead to 5 counts of unknown in the value counts total. Ideally I would like to know how to write this fill_value for NaN and Null and know whether it differs for variable types. I have tried np.nan with and without the string data type. I have tried None and Null with and without quotes. I cannot think of anything else to try and I am starting to wonder if it is possible. Thank you in advance and I apologize if this question is already addressed and for my lack of knowledge in this area.
[ "you could use either None or np.nan to create an array of just missing values in Python like so:\nnp.full(shape=5, fill_value=None)\nnp.full(shape=5, fill_value=np.nan)\n\nback to your example, this works just fine:\nimport numpy as np\nimport pandas as pd\n\nunknowns = np.full(shape=5, fill_value=None)\ncategories = np.random.choice(['web', 'software', 'hardware', 'biotech'], size = 15, replace = True)\ncategories = np.concatenate([categories, unknowns])\nexample = pd.DataFrame(data = {'categories': categories})\nexample['transformed'] = [ x if pd.isna(x) == False else 'unknown' for x in example['categories']]\n\nprint(example['transformed'].value_counts())\n\nLastly, this line is inefficient.\nexample['transformed'] = [ x if pd.isna(x) == False else 'unknown' for x in example['categories']]\nYou do want to avoid loops & list comprehensions when using pandas\non large data, this is going to run much faster:\nexample['transformed'] = example.categories.apply(lambda s: s if s else 'unknown')\n", "There is a typing problem here.\nIf you're working in numpy, vectors are typed after being initialized. Assigning a np.nan value to a vector initialized with strings will try to coalesce back into a string:\nimport numpy as np\n\nv1 = np.array(['a', 'b', 'c'])\nv1[0] = np.nan\n# v1 = array(['n', 'b', 'c'], dtype='<U1')\n\nv2 = np.array(['ab', 'cd', 'ef'])\nv2[0] = np.nan\n# v2 = array(['na', 'cd', 'ef'], dtype='<U2')\n\nv3 = np.array(['abc', 'def', 'ghi'])\nv3[0] = np.nan\n# v3 = array(['nan', 'def', 'ghi'], dtype='<U3')\n\nHowever, if you're working with pandas in the second half of the question, there's a separate way for handling missing data:\nimport pandas as pd\n\ndf = pd.DataFrame({\"x\": [pd.NA, \"Hello\", \"World\"]})\n\n", "A simple way to create an empty Series in pandas:\ns = pd.Series(index=range(15))\n\nOutput:\n0 NaN\n1 NaN\n2 NaN\n3 NaN\n4 NaN\n5 NaN\n6 NaN\n7 NaN\n8 NaN\n9 NaN\n10 NaN\n11 NaN\n12 NaN\n13 NaN\n14 NaN\ndtype: float64\n\nOr, with a string dtype:\ns = pd.Series(index=range(15), dtype='string')\n\nOutput:\n0 <NA>\n1 <NA>\n2 <NA>\n3 <NA>\n4 <NA>\n5 <NA>\n6 <NA>\n7 <NA>\n8 <NA>\n9 <NA>\n10 <NA>\n11 <NA>\n12 <NA>\n13 <NA>\n14 <NA>\ndtype: string\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074607741_numpy_python.txt
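Tying the answers together, a short sketch of the main options and the dtypes they produce; the array length and variable names are arbitrary:

import numpy as np
import pandas as pd

a = np.full(shape=5, fill_value=np.nan)        # float64 array of NaN
b = np.full(shape=5, fill_value=None)          # object array of None
s = pd.Series(index=range(5), dtype="string")  # pandas nullable string Series of <NA>

print(a.dtype, b.dtype, s.dtype)  # float64 object string
print(pd.isna(a).all(), pd.isna(b).all(), s.isna().all())  # True True True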
Q: Infinite looping Issue Python. Can't quit game I made a code for Blackjack in Python and whenever I run blackjack_game(deck) saying no to the 'Play Again' input should quit the game but it doesn't. Funds going zero and below should also trigger the game to quit but it doesn't. This is what it looks like: import random import os # The Card class definition class Card: def __init__(self, suit, value, card_value): # Suit of the Card like Spades and Clubs self.suit = suit # Representing Value of the Card like A for Ace, K for King self.value = value # Score Value for the Card like 10 for King self.card_value = card_value # Clear the terminal def clear(): os.system("clear") # Print player stats def print_stats(player_name, funds, wins, losses, ties, blackjacks, busts): print('Player: ', player_name) print('Funds: $', funds) print(f'Wins: {wins} Losses: {losses} Ties: {ties} Blackjacks: {blackjacks} Busts: {busts}') # Function to print the cards def print_cards(cards, hidden): s = "" for card in cards: s = s + "\t ________________" if hidden: s += "\t ________________" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| |" print(s) s = "" for card in cards: if card.value == '10': s = s + "\t| {} |".format(card.value) else: s = s + "\t| {} |".format(card.value) if hidden: s += "\t| |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * * |" print(s) s = "" for card in cards: s = s + "\t| {} |".format(card.suit) if hidden: s += "\t| * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| |" print(s) s = "" for card in cards: if card.value == '10': s = s + "\t| {} |".format(card.value) else: s = s + "\t| {} |".format(card.value) if hidden: s += "\t| * |" print(s) s = "" for card in cards: s = s + "\t|________________|" if hidden: s += "\t|________________|" print(s) print() # Function for a game of blackjack def blackjack_game(deck): end_game = False play_again = 'Y' # Player name player_name = str(input('Enter player name: ')) # Intro print('Lets have a fun game of Blackjack, ', player_name) # Cards for both dealer and player player_cards = [] dealer_cards = [] # Scores for both dealer and player player_score = 0 dealer_score = 0 # Player stats funds = 100 wins = 0 losses = 0 ties = 0 blackjacks = 0 busts = 0 bet = 0 clear() # Current Stats Display print_stats(player_name, funds, wins, losses, ties, blackjacks, busts) # Bets while play_again == 'Y': while end_game == False: while funds > 0: while bet == 0: bet = int(input('Enter bet amount: ')) if bet > funds: print('Insufficient funds') bet = 0 # Initial dealing for player and dealer while len(player_cards) < 2: # Randomly dealing a card player_card = random.choice(deck) player_cards.append(player_card) deck.remove(player_card) # Updating the player score player_score += player_card.card_value # In case both the cards are Ace, make the first ace value as 1 if len(player_cards) == 2: if player_cards[0].card_value == 11 and player_cards[1].card_value == 11: player_cards[0].card_value = 1 player_score -= 10 # Print player cards 
and score print("PLAYER CARDS: ") print_cards(player_cards, False) print("PLAYER SCORE = ", player_score) input() # Randomly dealing a card dealer_card = random.choice(deck) dealer_cards.append(dealer_card) deck.remove(dealer_card) # Updating the dealer score dealer_score += dealer_card.card_value # Print dealer cards and score, keeping in mind to hide the second card and score print("DEALER CARDS: ") if len(dealer_cards) == 1: print_cards(dealer_cards, False) print("DEALER SCORE = ", dealer_score) else: print_cards(dealer_cards[:-1], True) print("DEALER SCORE = ", dealer_score - dealer_cards[-1].card_value) # In case both the cards are Ace, make the second ace value as 1 if len(dealer_cards) == 2: if dealer_cards[0].card_value == 11 and dealer_cards[1].card_value == 11: dealer_cards[1].card_value = 1 dealer_score -= 10 input() clear() # Print dealer and player cards print("DEALER CARDS: ") print_cards(dealer_cards[:-1], True) print("DEALER SCORE = ", dealer_score - dealer_cards[-1].card_value) print() print("PLAYER CARDS: ") print_cards(player_cards, False) print("PLAYER SCORE = ", player_score) # Managing the player moves while player_score < 21: choice = input("Enter H to Hit or S to Stand : ") # Sanity checks for player's choice if len(choice) != 1 or (choice.upper() != 'H' and choice.upper() != 'S'): clear() print("Wrong choice!! Try Again") # If player decides to HIT if choice.upper() == 'H': # Dealing a new card player_card = random.choice(deck) player_cards.append(player_card) deck.remove(player_card) # Updating player score player_score += player_card.card_value # Updating player score in case player's card have ace in them c = 0 while player_score > 21 and c < len(player_cards): if player_cards[c].card_value == 11: player_cards[c].card_value = 1 player_score -= 10 c += 1 else: c += 1 clear() # Print player and dealer cards print("DEALER CARDS: ") print_cards(dealer_cards[:-1], True) print("DEALER SCORE = ", dealer_score - dealer_cards[-1].card_value) print() print("PLAYER CARDS: ") print_cards(player_cards, False) print("PLAYER SCORE = ", player_score) # If player decides to Stand if choice.upper() == 'S': break clear() # Print player and dealer cards print("PLAYER CARDS: ") print_cards(player_cards, False) print("PLAYER SCORE = ", player_score) print() print("DEALER IS REVEALING THE CARDS....") print("DEALER CARDS: ") print_cards(dealer_cards, False) print("DEALER SCORE = ", dealer_score) # Check if player has a Blackjack if player_score == 21: print("PLAYER HAS A BLACKJACK") blackjacks += 1 # Check if player busts if player_score > 21: print("PLAYER BUSTED!!!") busts += 1 print("DEALER WINS!!!") losses += 1 funds -= bet bet = 0 print_stats(player_name, funds, wins, losses, ties, blackjacks, busts) end_choice = input('Play again(Y/N)?: ') play_again = end_choice.upper() player_cards = [] dealer_cards = [] player_score = 0 dealer_score = 0 end_game = True input() # Managing the dealer moves while dealer_score < 17: clear() print("DEALER DECIDES TO HIT.....") # Dealing card for dealer dealer_card = random.choice(deck) dealer_cards.append(dealer_card) deck.remove(dealer_card) # Updating the dealer's score dealer_score += dealer_card.card_value # Updating player score in case player's card have ace in them c = 0 while dealer_score > 21 and c < len(dealer_cards): if dealer_cards[c].card_value == 11: dealer_cards[c].card_value = 1 dealer_score -= 10 c += 1 else: c += 1 # print player and dealer cards print("PLAYER CARDS: ") print_cards(player_cards, False) print("PLAYER SCORE = ", 
player_score) print() print("DEALER CARDS: ") print_cards(dealer_cards, False) print("DEALER SCORE = ", dealer_score) input() # TIE Game if dealer_score == player_score: print("TIE GAME!!!!") ties += 1 bet = 0 print_stats(player_name, funds, wins, losses, ties, blackjacks, busts) end_choice = input('Play again(Y/N)?: ') play_again = end_choice.upper() player_cards = [] dealer_cards = [] player_score = 0 dealer_score = 0 end_game = True # Dealer busts elif dealer_score > 21: print("DEALER BUSTED!!! YOU WIN!!!") wins += 1 funds += bet bet = 0 print_stats(player_name, funds, wins, losses, ties, blackjacks, busts) end_choice = input('Play again(Y/N)?: ') play_again = end_choice.upper() player_cards = [] dealer_cards = [] player_score = 0 dealer_score = 0 end_game = True # Dealer gets a blackjack elif dealer_score == 21: print("DEALER HAS A BLACKJACK!!! PLAYER LOSES") losses += 1 funds -= bet bet = 0 print_stats(player_name, funds, wins, losses, ties, blackjacks, busts) end_choice = input('Play again(Y/N)?: ') play_again = end_choice.upper() player_cards = [] dealer_cards = [] player_score = 0 dealer_score = 0 end_game = True # Player Wins elif player_score < 21 and player_score > dealer_score: print("PLAYER WINS!!!") wins += 1 funds += bet bet = 0 print_stats(player_name, funds, wins, losses, ties, blackjacks, busts) end_choice = input('Play again(Y/N)?: ') play_again = end_choice.upper() player_cards = [] dealer_cards = [] player_score = 0 dealer_score = 0 end_game = True # Dealer Wins else: print("DEALER WINS!!!") losses += 1 funds -= bet bet = 0 print_stats(player_name, funds, wins, losses, ties, blackjacks, busts) end_choice = input('Play again(Y/N)?: ') play_again = end_choice.upper() player_cards = [] dealer_cards = [] player_score = 0 dealer_score = 0 end_game = True quit() if __name__ == '__main__': # The type of suit suits = ["Spades", "Hearts", "Clubs", "Diamonds"] # The suit value suits_values = {"Spades":"\u2664", "Hearts":"\u2661", "Clubs": "\u2667", "Diamonds": "\u2662"} # The type of card cards = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"] # The card value cards_values = {"A": 11, "2":2, "3":3, "4":4, "5":5, "6":6, "7":7, "8":8, "9":9, "10":10, "J":10, "Q":10, "K":10} # The deck of cards deck = [] # Loop for every type of suit for suit in suits: # Loop for every type of card in a suit for card in cards: # Adding card to the deck deck.append(Card(suits_values[suit], card, cards_values[card])) I added a quit() that should trigger should 'while play_again == 'Y':' no longer be true. This should have quit the game and stopped it from running. Instead it prompts the user again for a betting amount, acting as if I chose 'Y' instead. I also tried removing: play_again = 'Y' and replacing this code block: end_choice = input('Play again(Y/N)?: ') play_again = end_choice.upper() with this: end_choice = input('Play again(Y/N)?: ') if end_choice.upper() == 'N': exit() But it still wouldn't quit and stayed as an infinite loop. Help me please, I've been stuck with this issue all day. 
A: Can you please try by replacing from this: # Bets while play_again == 'Y': while end_game == False: while funds > 0: while bet == 0: bet = int(input('Enter bet amount: ')) if bet > funds: print('Insufficient funds') bet = 0 With # Bets while play_again == 'Y': while end_game == False: if play_again != 'Y': break while funds > 0: if play_again != 'Y' or end_game: break while bet == 0: bet = int(input('Enter bet amount: ')) if bet > funds: print('Insufficient funds') bet = 0 I believe we don't even need to add end_game as it is unnecessary in current scope. These are nested while loop and parent while loop will not break until child breaks. so we need to add extra condition in child while loop which break it on the basis of parent check. you can read this post 5 Ways To Break Out of Nested Loops in Python A: So the issue (to Usman's point) really comes from the nesting levels. Basically for the below logic, it will only check if play_again is still True once end_game is False, which it will only check once funds is no longer > 0. while play_again == 'Y': while end_game == False: while funds > 0: while bet == 0: ... play_again = input("Play again?") Your code would actually work fine if you had the proper indentation (end of loop being within the "play_again" or "end_game" levels), but unless there's a really good reason, it's typically better to combine conditionals rather than nesting them in multiple while loops that are easy to lose track of. Consider instead: while (play_again == "Y") and (funds > 0): if bet == 0: bet = int(input('Enter bet amount: ')) if bet < self.funds: print("Insufficient funds") ... Other Notes Based on the coding style it seems like you're at the start of your wonderful journey into Python, and I wanted to pay forward the help I got starting out by giving some (hopefully) useful tips in case this wasn't part of a structured course. I wound up slapping together an MVP for how this might look below (can build further), please read through the notes though additionally: Break apart large functions! It'll make your life a lot easier, this looks to be maybe an initial project and you get a better knack for where to break things up as you move along, but a single function being ~300 lines is just going to make you miserable. General rule of thumb to have maybe 40-50 lines max unless there's a really good reason. Utilize classes more (especially if you're trying to be OOP!), a "BlackjackGame" class could help keep methods and attributes organized vs having to set them only once per iteration, and helps keep track of variables across games (i.e. total value). Type hinting to make your life easier. Depending on where you are in your learning path this might be a bit later on, but it makes things infinitely easier for working with Intellisense hints (see below using Pylance in VSCode). You could also just create a Deck class itself. Sample import itertools import random import os from typing import List, Literal def clear(): os.system("clear") class BlackjackRound: def __init__(self, deck): self.player_hand: Hand = Hand() self.dealer_hand: Hand = Hand() self.deck: List[Card] = deck ## Draw top cards for hand in [self.player_hand, self.dealer_hand]: for _ in range(2): self.deck = hand.draw(self.deck) self._print_stats() while self.player_hand.value < 21: choice = input("Enter H to Hit or S to Stand : ") if len(choice) != 1 or (choice.upper() != 'H' and choice.upper() != 'S'): clear() print("Wrong choice!! 
Try Again") continue if choice.upper() == 'H': self.deck = self.player_hand.draw(self.deck) if choice.upper() == 'S': break self._print_stats() ## Resolve Dealer Hand self._print_stats(mask_dealer=False) while self.dealer_hand.value < 17: print("Dealer draws...") self.deck = self.dealer_hand.draw(self.deck) self._print_stats(mask_dealer=False) ##For dramatic tension if self.dealer_hand.value < 17: input("Enter to continue...") def _print_stats(self, mask_dealer=True): print("You're hand") self.player_hand.print_cards() print(f"PLAYER SCORE = {self.player_hand.value}") print("Dealer showing...") if mask_dealer: self.dealer_hand.print_cards(hidden=True, n_cards=1) print(f"DEALER SCORE = {self.dealer_hand.cards[0].card_value}") else: self.dealer_hand.print_cards() print(f"DEALER SCORE = {self.dealer_hand.value}") # The Card class definition class Card: def __init__(self, suit, value, card_value, suit_format): # Suit of the Card like Spades and Clubs self.suit = suit # Representing Value of the Card like A for Ace, K for King self.value = value # Score Value for the Card like 10 for King self.card_value = card_value self.suit_format = suit_format #How should Card be printed? def __repr__(self): return str(self.__dict__) class Hand: def __init__(self): self.cards: List[Card] = [] self.value = 0 def draw(self, deck: List[Card]): player_card = random.choice(deck) self.cards.append(player_card) deck.remove(player_card) self.value += player_card.card_value self.revalue_ace() return deck def revalue_ace(self): ##Sum greater than 21, if (self.value > 21) and ("A" in [card.value for card in self.cards]): swap_card = list(filter(lambda x: x.value == "A", self.cards))[0] self.cards[self.cards.index(swap_card)].card_value = 1 self.value-=10 else: pass def print_cards(self, hidden: bool=False, n_cards=None): s = "" cards = self.cards if n_cards is None else self.cards[0:n_cards] for card in cards: s = s + "\t ________________" if hidden: s += "\t ________________" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| |" print(s) s = "" for card in cards: if card.value == '10': s = s + "\t| {} |".format(card.value) else: s = s + "\t| {} |".format(card.value) if hidden: s += "\t| |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * * |" print(s) s = "" for card in cards: s = s + "\t| {} |".format(card.suit_format) if hidden: s += "\t| * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| |" print(s) s = "" for card in cards: if card.value == '10': s = s + "\t| {} |".format(card.value) else: s = s + "\t| {} |".format(card.value) if hidden: s += "\t| * |" print(s) s = "" for card in cards: s = s + "\t|________________|" if hidden: s += "\t|________________|" print(s) print() class BlackjackGame: def __init__(self): ##play_again can only be Y/N self.play_again: Literal["Y", "N"] = "Y" self.player_name: str = input("Player name?: ") self.funds = 100 self.wins = 0 self.losses = 0 self.ties = 0 self.blackjacks = 0 self.busts = 0 def _print_stats(self): print(f"Current funds: {self.funds}") 
def play_game(self): self._print_stats self.shuffle_deck() bet = 0 while (self.play_again == "Y") and (self.funds > 0): if bet == 0: bet = int(input('Enter bet amount: ')) if bet < self.funds: print("Insufficient funds") continue #Restart at the top of the key round = BlackjackRound(self.deck) if round.player_hand.value == 21: print("Blackjack!") self.blackjacks+=1 self.funds+=bet elif round.player_hand.value > 21: print("BUST") self.busts+=1 self.funds-=bet elif round.dealer_hand.value > 21: print("DEALER BUST!") self.wins +=1 self.funds+=bet elif round.player_hand.value > round.dealer_hand.value: print("Win!") self.wins+=1 self.funds+=bet elif round.player_hand.value < round.dealer_hand.value: print("Lose!") self.losses+=1 self.funds-=bet else: print("Tie!") self.ties+=1 ## Error handling for exit bet=0 ##Update deck from round self.deck = round.deck while True: self.play_again = input('Play again(Y/N)?: ') if self.play_again == "N": print("So long!") quit() elif self.funds == 0: print("Too bad, you're broke!") quit() elif self.play_again == "Y": print("Around we go!") print("Try again plz") def shuffle_deck(self, suits = {"Spades":"\u2664", "Hearts":"\u2661", "Clubs": "\u2667", "Diamonds": "\u2662"}, cards = {"A": 11, "2":2, "3":3, "4":4, "5":5, "6":6, "7":7, "8":8, "9":9, "10":10, "J":10, "Q":10, "K":10} ): """ Create a deck of 52 cards """ self.deck = [Card(combo[0], combo[1], cards[combo[1]], suits[combo[0]]) for combo in itertools.product(suits.keys(), cards.keys())] if __name__ == '__main__': bg = BlackjackGame() bg.play_game()
Infinite looping Issue Python. Can't quit game
I made a code for Blackjack in Python and whenever I run blackjack_game(deck) saying no to the 'Play Again' input should quit the game but it doesn't. Funds going zero and below should also trigger the game to quit but it doesn't. This is what it looks like: import random import os # The Card class definition class Card: def __init__(self, suit, value, card_value): # Suit of the Card like Spades and Clubs self.suit = suit # Representing Value of the Card like A for Ace, K for King self.value = value # Score Value for the Card like 10 for King self.card_value = card_value # Clear the terminal def clear(): os.system("clear") # Print player stats def print_stats(player_name, funds, wins, losses, ties, blackjacks, busts): print('Player: ', player_name) print('Funds: $', funds) print(f'Wins: {wins} Losses: {losses} Ties: {ties} Blackjacks: {blackjacks} Busts: {busts}') # Function to print the cards def print_cards(cards, hidden): s = "" for card in cards: s = s + "\t ________________" if hidden: s += "\t ________________" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| |" print(s) s = "" for card in cards: if card.value == '10': s = s + "\t| {} |".format(card.value) else: s = s + "\t| {} |".format(card.value) if hidden: s += "\t| |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * * |" print(s) s = "" for card in cards: s = s + "\t| {} |".format(card.suit) if hidden: s += "\t| * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| * |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| |" print(s) s = "" for card in cards: s = s + "\t| |" if hidden: s += "\t| |" print(s) s = "" for card in cards: if card.value == '10': s = s + "\t| {} |".format(card.value) else: s = s + "\t| {} |".format(card.value) if hidden: s += "\t| * |" print(s) s = "" for card in cards: s = s + "\t|________________|" if hidden: s += "\t|________________|" print(s) print() # Function for a game of blackjack def blackjack_game(deck): end_game = False play_again = 'Y' # Player name player_name = str(input('Enter player name: ')) # Intro print('Lets have a fun game of Blackjack, ', player_name) # Cards for both dealer and player player_cards = [] dealer_cards = [] # Scores for both dealer and player player_score = 0 dealer_score = 0 # Player stats funds = 100 wins = 0 losses = 0 ties = 0 blackjacks = 0 busts = 0 bet = 0 clear() # Current Stats Display print_stats(player_name, funds, wins, losses, ties, blackjacks, busts) # Bets while play_again == 'Y': while end_game == False: while funds > 0: while bet == 0: bet = int(input('Enter bet amount: ')) if bet > funds: print('Insufficient funds') bet = 0 # Initial dealing for player and dealer while len(player_cards) < 2: # Randomly dealing a card player_card = random.choice(deck) player_cards.append(player_card) deck.remove(player_card) # Updating the player score player_score += player_card.card_value # In case both the cards are Ace, make the first ace value as 1 if len(player_cards) == 2: if player_cards[0].card_value == 11 and player_cards[1].card_value == 11: player_cards[0].card_value = 1 player_score -= 10 # Print player cards and score print("PLAYER CARDS: ") 
print_cards(player_cards, False) print("PLAYER SCORE = ", player_score) input() # Randomly dealing a card dealer_card = random.choice(deck) dealer_cards.append(dealer_card) deck.remove(dealer_card) # Updating the dealer score dealer_score += dealer_card.card_value # Print dealer cards and score, keeping in mind to hide the second card and score print("DEALER CARDS: ") if len(dealer_cards) == 1: print_cards(dealer_cards, False) print("DEALER SCORE = ", dealer_score) else: print_cards(dealer_cards[:-1], True) print("DEALER SCORE = ", dealer_score - dealer_cards[-1].card_value) # In case both the cards are Ace, make the second ace value as 1 if len(dealer_cards) == 2: if dealer_cards[0].card_value == 11 and dealer_cards[1].card_value == 11: dealer_cards[1].card_value = 1 dealer_score -= 10 input() clear() # Print dealer and player cards print("DEALER CARDS: ") print_cards(dealer_cards[:-1], True) print("DEALER SCORE = ", dealer_score - dealer_cards[-1].card_value) print() print("PLAYER CARDS: ") print_cards(player_cards, False) print("PLAYER SCORE = ", player_score) # Managing the player moves while player_score < 21: choice = input("Enter H to Hit or S to Stand : ") # Sanity checks for player's choice if len(choice) != 1 or (choice.upper() != 'H' and choice.upper() != 'S'): clear() print("Wrong choice!! Try Again") # If player decides to HIT if choice.upper() == 'H': # Dealing a new card player_card = random.choice(deck) player_cards.append(player_card) deck.remove(player_card) # Updating player score player_score += player_card.card_value # Updating player score in case player's card have ace in them c = 0 while player_score > 21 and c < len(player_cards): if player_cards[c].card_value == 11: player_cards[c].card_value = 1 player_score -= 10 c += 1 else: c += 1 clear() # Print player and dealer cards print("DEALER CARDS: ") print_cards(dealer_cards[:-1], True) print("DEALER SCORE = ", dealer_score - dealer_cards[-1].card_value) print() print("PLAYER CARDS: ") print_cards(player_cards, False) print("PLAYER SCORE = ", player_score) # If player decides to Stand if choice.upper() == 'S': break clear() # Print player and dealer cards print("PLAYER CARDS: ") print_cards(player_cards, False) print("PLAYER SCORE = ", player_score) print() print("DEALER IS REVEALING THE CARDS....") print("DEALER CARDS: ") print_cards(dealer_cards, False) print("DEALER SCORE = ", dealer_score) # Check if player has a Blackjack if player_score == 21: print("PLAYER HAS A BLACKJACK") blackjacks += 1 # Check if player busts if player_score > 21: print("PLAYER BUSTED!!!") busts += 1 print("DEALER WINS!!!") losses += 1 funds -= bet bet = 0 print_stats(player_name, funds, wins, losses, ties, blackjacks, busts) end_choice = input('Play again(Y/N)?: ') play_again = end_choice.upper() player_cards = [] dealer_cards = [] player_score = 0 dealer_score = 0 end_game = True input() # Managing the dealer moves while dealer_score < 17: clear() print("DEALER DECIDES TO HIT.....") # Dealing card for dealer dealer_card = random.choice(deck) dealer_cards.append(dealer_card) deck.remove(dealer_card) # Updating the dealer's score dealer_score += dealer_card.card_value # Updating player score in case player's card have ace in them c = 0 while dealer_score > 21 and c < len(dealer_cards): if dealer_cards[c].card_value == 11: dealer_cards[c].card_value = 1 dealer_score -= 10 c += 1 else: c += 1 # print player and dealer cards print("PLAYER CARDS: ") print_cards(player_cards, False) print("PLAYER SCORE = ", player_score) print() print("DEALER 
CARDS: ") print_cards(dealer_cards, False) print("DEALER SCORE = ", dealer_score) input() # TIE Game if dealer_score == player_score: print("TIE GAME!!!!") ties += 1 bet = 0 print_stats(player_name, funds, wins, losses, ties, blackjacks, busts) end_choice = input('Play again(Y/N)?: ') play_again = end_choice.upper() player_cards = [] dealer_cards = [] player_score = 0 dealer_score = 0 end_game = True # Dealer busts elif dealer_score > 21: print("DEALER BUSTED!!! YOU WIN!!!") wins += 1 funds += bet bet = 0 print_stats(player_name, funds, wins, losses, ties, blackjacks, busts) end_choice = input('Play again(Y/N)?: ') play_again = end_choice.upper() player_cards = [] dealer_cards = [] player_score = 0 dealer_score = 0 end_game = True # Dealer gets a blackjack elif dealer_score == 21: print("DEALER HAS A BLACKJACK!!! PLAYER LOSES") losses += 1 funds -= bet bet = 0 print_stats(player_name, funds, wins, losses, ties, blackjacks, busts) end_choice = input('Play again(Y/N)?: ') play_again = end_choice.upper() player_cards = [] dealer_cards = [] player_score = 0 dealer_score = 0 end_game = True # Player Wins elif player_score < 21 and player_score > dealer_score: print("PLAYER WINS!!!") wins += 1 funds += bet bet = 0 print_stats(player_name, funds, wins, losses, ties, blackjacks, busts) end_choice = input('Play again(Y/N)?: ') play_again = end_choice.upper() player_cards = [] dealer_cards = [] player_score = 0 dealer_score = 0 end_game = True # Dealer Wins else: print("DEALER WINS!!!") losses += 1 funds -= bet bet = 0 print_stats(player_name, funds, wins, losses, ties, blackjacks, busts) end_choice = input('Play again(Y/N)?: ') play_again = end_choice.upper() player_cards = [] dealer_cards = [] player_score = 0 dealer_score = 0 end_game = True quit() if __name__ == '__main__': # The type of suit suits = ["Spades", "Hearts", "Clubs", "Diamonds"] # The suit value suits_values = {"Spades":"\u2664", "Hearts":"\u2661", "Clubs": "\u2667", "Diamonds": "\u2662"} # The type of card cards = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"] # The card value cards_values = {"A": 11, "2":2, "3":3, "4":4, "5":5, "6":6, "7":7, "8":8, "9":9, "10":10, "J":10, "Q":10, "K":10} # The deck of cards deck = [] # Loop for every type of suit for suit in suits: # Loop for every type of card in a suit for card in cards: # Adding card to the deck deck.append(Card(suits_values[suit], card, cards_values[card])) I added a quit() that should trigger should 'while play_again == 'Y':' no longer be true. This should have quit the game and stopped it from running. Instead it prompts the user again for a betting amount, acting as if I chose 'Y' instead. I also tried removing: play_again = 'Y' and replacing this code block: end_choice = input('Play again(Y/N)?: ') play_again = end_choice.upper() with this: end_choice = input('Play again(Y/N)?: ') if end_choice.upper() == 'N': exit() But it still wouldn't quit and stayed as an infinite loop. Help me please, I've been stuck with this issue all day.
[ "Can you please try by replacing from this:\n # Bets\n while play_again == 'Y':\n while end_game == False:\n while funds > 0:\n while bet == 0:\n bet = int(input('Enter bet amount: '))\n\n if bet > funds:\n print('Insufficient funds')\n bet = 0\n\nWith\n # Bets\n while play_again == 'Y':\n while end_game == False:\n if play_again != 'Y':\n break\n while funds > 0:\n if play_again != 'Y' or end_game:\n break\n while bet == 0:\n bet = int(input('Enter bet amount: '))\n\n if bet > funds:\n print('Insufficient funds')\n bet = 0\n\n\nI believe we don't even need to add end_game as it is unnecessary in current scope.\nThese are nested while loop and parent while loop will not break until child breaks. so we need to add extra condition in child while loop which break it on the basis of parent check.\nyou can read this post 5 Ways To Break Out of Nested Loops in Python\n", "So the issue (to Usman's point) really comes from the nesting levels. Basically for the below logic, it will only check if play_again is still True once end_game is False, which it will only check once funds is no longer > 0.\nwhile play_again == 'Y':\n while end_game == False:\n while funds > 0:\n while bet == 0:\n ...\n play_again = input(\"Play again?\")\n\nYour code would actually work fine if you had the proper indentation (end of loop being within the \"play_again\" or \"end_game\" levels), but unless there's a really good reason, it's typically better to combine conditionals rather than nesting them in multiple while loops that are easy to lose track of. Consider instead:\nwhile (play_again == \"Y\") and (funds > 0):\n if bet == 0:\n bet = int(input('Enter bet amount: '))\n if bet < self.funds:\n print(\"Insufficient funds\")\n ...\n\nOther Notes\nBased on the coding style it seems like you're at the start of your wonderful journey into Python, and I wanted to pay forward the help I got starting out by giving some (hopefully) useful tips in case this wasn't part of a structured course. I wound up slapping together an MVP for how this might look below (can build further), please read through the notes though additionally:\n\nBreak apart large functions! It'll make your life a lot easier, this looks to be maybe an initial project and you get a better knack for where to break things up as you move along, but a single function being ~300 lines is just going to make you miserable. General rule of thumb to have maybe 40-50 lines max unless there's a really good reason.\nUtilize classes more (especially if you're trying to be OOP!), a \"BlackjackGame\" class could help keep methods and attributes organized vs having to set them only once per iteration, and helps keep track of variables across games (i.e. total value).\nType hinting to make your life easier. Depending on where you are in your learning path this might be a bit later on, but it makes things infinitely easier for working with Intellisense hints (see below using Pylance in VSCode). 
You could also just create a Deck class itself.\n\n\nSample\nimport itertools\nimport random\nimport os\nfrom typing import List, Literal\n\ndef clear():\n os.system(\"clear\")\n\nclass BlackjackRound:\n def __init__(self, deck):\n self.player_hand: Hand = Hand()\n self.dealer_hand: Hand = Hand()\n self.deck: List[Card] = deck\n \n ## Draw top cards\n for hand in [self.player_hand, self.dealer_hand]:\n for _ in range(2):\n self.deck = hand.draw(self.deck)\n \n self._print_stats()\n while self.player_hand.value < 21:\n choice = input(\"Enter H to Hit or S to Stand : \")\n if len(choice) != 1 or (choice.upper() != 'H' and choice.upper() != 'S'):\n clear()\n print(\"Wrong choice!! Try Again\")\n continue\n if choice.upper() == 'H':\n self.deck = self.player_hand.draw(self.deck)\n if choice.upper() == 'S':\n break\n self._print_stats()\n ## Resolve Dealer Hand\n self._print_stats(mask_dealer=False)\n while self.dealer_hand.value < 17:\n print(\"Dealer draws...\")\n self.deck = self.dealer_hand.draw(self.deck)\n self._print_stats(mask_dealer=False)\n ##For dramatic tension\n if self.dealer_hand.value < 17:\n input(\"Enter to continue...\")\n\n def _print_stats(self, mask_dealer=True):\n print(\"You're hand\")\n self.player_hand.print_cards()\n print(f\"PLAYER SCORE = {self.player_hand.value}\")\n\n print(\"Dealer showing...\")\n if mask_dealer:\n self.dealer_hand.print_cards(hidden=True, n_cards=1)\n print(f\"DEALER SCORE = {self.dealer_hand.cards[0].card_value}\")\n else:\n self.dealer_hand.print_cards()\n print(f\"DEALER SCORE = {self.dealer_hand.value}\")\n\n\n# The Card class definition\nclass Card:\n def __init__(self, suit, value, card_value, suit_format): \n # Suit of the Card like Spades and Clubs\n self.suit = suit\n # Representing Value of the Card like A for Ace, K for King\n self.value = value\n # Score Value for the Card like 10 for King\n self.card_value = card_value\n self.suit_format = suit_format\n #How should Card be printed?\n def __repr__(self):\n return str(self.__dict__)\n\nclass Hand:\n def __init__(self):\n self.cards: List[Card] = []\n self.value = 0\n \n def draw(self, deck: List[Card]):\n player_card = random.choice(deck)\n self.cards.append(player_card)\n deck.remove(player_card)\n self.value += player_card.card_value\n self.revalue_ace()\n return deck\n \n def revalue_ace(self):\n ##Sum greater than 21, \n if (self.value > 21) and (\"A\" in [card.value for card in self.cards]):\n swap_card = list(filter(lambda x: x.value == \"A\", self.cards))[0]\n self.cards[self.cards.index(swap_card)].card_value = 1\n self.value-=10\n else:\n pass\n \n def print_cards(self, hidden: bool=False, n_cards=None):\n s = \"\"\n cards = self.cards if n_cards is None else self.cards[0:n_cards]\n for card in cards:\n s = s + \"\\t ________________\"\n if hidden:\n s += \"\\t ________________\"\n print(s)\n \n \n s = \"\"\n for card in cards:\n s = s + \"\\t| |\"\n if hidden:\n s += \"\\t| |\" \n print(s)\n \n s = \"\"\n for card in cards:\n if card.value == '10':\n s = s + \"\\t| {} |\".format(card.value)\n else:\n s = s + \"\\t| {} |\".format(card.value) \n if hidden:\n s += \"\\t| |\" \n print(s)\n \n s = \"\"\n for card in cards:\n s = s + \"\\t| |\"\n if hidden:\n s += \"\\t| * * |\"\n print(s) \n \n s = \"\"\n for card in cards:\n s = s + \"\\t| |\"\n if hidden:\n s += \"\\t| * * |\"\n print(s) \n \n s = \"\"\n for card in cards:\n s = s + \"\\t| |\"\n if hidden:\n s += \"\\t| * * |\"\n print(s) \n \n s = \"\"\n for card in cards:\n s = s + \"\\t| |\"\n if hidden:\n s += \"\\t| * * 
|\"\n print(s) \n \n s = \"\"\n for card in cards:\n s = s + \"\\t| {} |\".format(card.suit_format)\n if hidden:\n s += \"\\t| * |\"\n print(s) \n \n s = \"\"\n for card in cards:\n s = s + \"\\t| |\"\n if hidden:\n s += \"\\t| * |\"\n print(s) \n \n s = \"\"\n for card in cards:\n s = s + \"\\t| |\"\n if hidden:\n s += \"\\t| * |\"\n print(s)\n \n s = \"\"\n for card in cards:\n s = s + \"\\t| |\"\n if hidden:\n s += \"\\t| |\"\n print(s)\n \n s = \"\"\n for card in cards:\n s = s + \"\\t| |\"\n if hidden:\n s += \"\\t| |\"\n print(s) \n \n s = \"\"\n for card in cards:\n if card.value == '10':\n s = s + \"\\t| {} |\".format(card.value)\n else:\n s = s + \"\\t| {} |\".format(card.value)\n if hidden:\n s += \"\\t| * |\" \n print(s) \n \n s = \"\"\n for card in cards:\n s = s + \"\\t|________________|\"\n if hidden:\n s += \"\\t|________________|\"\n print(s) \n \n print()\n \n\n\nclass BlackjackGame:\n def __init__(self):\n ##play_again can only be Y/N\n self.play_again: Literal[\"Y\", \"N\"] = \"Y\"\n self.player_name: str = input(\"Player name?: \")\n self.funds = 100\n self.wins = 0\n self.losses = 0\n self.ties = 0\n self.blackjacks = 0\n self.busts = 0\n \n def _print_stats(self):\n print(f\"Current funds: {self.funds}\")\n def play_game(self):\n self._print_stats\n self.shuffle_deck()\n bet = 0\n while (self.play_again == \"Y\") and (self.funds > 0):\n if bet == 0:\n bet = int(input('Enter bet amount: '))\n if bet < self.funds:\n print(\"Insufficient funds\")\n continue #Restart at the top of the key\n round = BlackjackRound(self.deck)\n if round.player_hand.value == 21:\n print(\"Blackjack!\")\n self.blackjacks+=1\n self.funds+=bet\n elif round.player_hand.value > 21:\n print(\"BUST\")\n self.busts+=1\n self.funds-=bet\n elif round.dealer_hand.value > 21:\n print(\"DEALER BUST!\")\n self.wins +=1\n self.funds+=bet\n elif round.player_hand.value > round.dealer_hand.value:\n print(\"Win!\")\n self.wins+=1\n self.funds+=bet\n elif round.player_hand.value < round.dealer_hand.value:\n print(\"Lose!\")\n self.losses+=1\n self.funds-=bet\n else:\n print(\"Tie!\")\n self.ties+=1\n ## Error handling for exit\n bet=0\n ##Update deck from round\n self.deck = round.deck\n while True:\n self.play_again = input('Play again(Y/N)?: ')\n if self.play_again == \"N\":\n print(\"So long!\")\n quit()\n elif self.funds == 0:\n print(\"Too bad, you're broke!\")\n quit()\n elif self.play_again == \"Y\":\n print(\"Around we go!\")\n print(\"Try again plz\")\n \n def shuffle_deck(self,\n suits = {\"Spades\":\"\\u2664\", \"Hearts\":\"\\u2661\", \"Clubs\": \"\\u2667\", \"Diamonds\": \"\\u2662\"},\n cards = {\"A\": 11, \"2\":2, \"3\":3, \"4\":4, \"5\":5, \"6\":6, \"7\":7, \"8\":8, \"9\":9, \"10\":10, \"J\":10, \"Q\":10, \"K\":10}\n ):\n \"\"\"\n Create a deck of 52 cards\n \"\"\"\n self.deck = [Card(combo[0], combo[1], cards[combo[1]], suits[combo[0]]) for combo in itertools.product(suits.keys(), cards.keys())]\n\n\n \nif __name__ == '__main__':\n bg = BlackjackGame()\n bg.play_game()\n\n" ]
[ 0, 0 ]
[]
[]
[ "blackjack", "oop", "python" ]
stackoverflow_0074574860_blackjack_oop_python.txt
Q: How do I push to a nested array in a PyMongo database? I have a MongoDB database with the following structure (simplified for the question's sake): User: "id": int "aquarium": Aquarium[] Aquarium: "name": str "fish": Fish[] I have access to: The database, which contains a list of objects of type User, which in turn have their own Aquarium objects (users_db) The unique ID of the target User, which is supposed to be the subject of the operation (id) The unique name of the Aquarium, which is supposed to be the subject of the operation (aquarium_name) A Fish type object (obj) My purpose is to push the Fish type object (refered to as "obj" in the code) into the target Aquarium's fish array. So far I have attempted to achieve this with the following code: users_db.find_one_and_update ( { "_id": ObjectId(str(id)), "aquarium.name": aquarium_name }, { "$push": {"aquarium.fish": obj} } ) This was, however, unsuccessful. The following error was returned: I have reviewed numerous other question, such as this one, however I could not find a question which simultaneously demands a query dependent on both the inner and outer layer and insertion into the inner layer at the same time. It is hard for me to tell whether the issue comes from an invalid query or an invalid update operation, therefore I am unsure of which direction to go from this point. Does anybody know what could be the cause of this? I'd appreciate any help. A: You get the mentioned problem as you are trying to add an object into the fish array which is a nested array (aquarium is also an array). You need $ operator after aquarium. Aims to update the first matched aquarium array. MongoDB query db.collection.update({ "_id": ObjectId("5a934e000102030405000000"), "aquarium.name": "Aqua A" }, { "$push": { "aquarium.$.fish": { name: "Tortoise", color: "Green" } } }) Demo @ Mongo Playground users_db.find_one_and_update ( { "_id": ObjectId(str(id)), "aquarium.name": aquarium_name }, { "$push": {"aquarium.$.fish": obj} } )
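A self-contained sketch of the accepted positional-$ update, assuming a local MongoDB instance; the connection string, database name, and collection name are placeholders, and the document/field layout is the one from the question.
from bson import ObjectId
from pymongo import MongoClient

# Placeholders: adjust the connection string and database/collection names.
client = MongoClient("mongodb://localhost:27017")
users_db = client["mydb"]["users"]

def add_fish(user_id: str, aquarium_name: str, fish: dict):
    # "aquarium.$" refers to the first aquarium element matched by "aquarium.name",
    # so the new fish is pushed into that aquarium's nested "fish" array.
    return users_db.find_one_and_update(
        {"_id": ObjectId(user_id), "aquarium.name": aquarium_name},
        {"$push": {"aquarium.$.fish": fish}},
    )

add_fish("5a934e000102030405000000", "Aqua A", {"name": "Tortoise", "color": "Green"})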
How do I push to a nested array in a PyMongo database?
I have a MongoDB database with the following structure (simplified for the question's sake): User: "id": int "aquarium": Aquarium[] Aquarium: "name": str "fish": Fish[] I have access to: The database, which contains a list of objects of type User, which in turn have their own Aquarium objects (users_db) The unique ID of the target User, which is supposed to be the subject of the operation (id) The unique name of the Aquarium, which is supposed to be the subject of the operation (aquarium_name) A Fish type object (obj) My purpose is to push the Fish type object (refered to as "obj" in the code) into the target Aquarium's fish array. So far I have attempted to achieve this with the following code: users_db.find_one_and_update ( { "_id": ObjectId(str(id)), "aquarium.name": aquarium_name }, { "$push": {"aquarium.fish": obj} } ) This was, however, unsuccessful. The following error was returned: I have reviewed numerous other question, such as this one, however I could not find a question which simultaneously demands a query dependent on both the inner and outer layer and insertion into the inner layer at the same time. It is hard for me to tell whether the issue comes from an invalid query or an invalid update operation, therefore I am unsure of which direction to go from this point. Does anybody know what could be the cause of this? I'd appreciate any help.
[ "You get the mentioned problem as you are trying to add an object into the fish array which is a nested array (aquarium is also an array).\nYou need $ operator after aquarium. Aims to update the first matched aquarium array.\n\nMongoDB query\n\ndb.collection.update({\n \"_id\": ObjectId(\"5a934e000102030405000000\"),\n \"aquarium.name\": \"Aqua A\"\n},\n{\n \"$push\": {\n \"aquarium.$.fish\": {\n name: \"Tortoise\",\n color: \"Green\"\n }\n }\n})\n\nDemo @ Mongo Playground\nusers_db.find_one_and_update\n (\n {\n \"_id\": ObjectId(str(id)),\n \"aquarium.name\": aquarium_name\n }, \n {\n \"$push\": {\"aquarium.$.fish\": obj}\n }\n )\n\n" ]
[ 1 ]
[]
[]
[ "mongodb", "pymongo", "python", "rest" ]
stackoverflow_0074607734_mongodb_pymongo_python_rest.txt
Q: module 'pandas._typing' has no attribute 'FilePathOrBuffer' When trying to import a darts attribute using the line from darts import TimeSeries, concatenate I get the error message AttributeError: module 'pandas._typing' has no attribute 'FilePathOrBuffer' Is there a fix for this? Versions of what I'm using: pandas 1.5.2 Windows 10 64-bit, ver 21H2 Python 3.9.13 darts ver 0.16.0 A: Like yourself, running Windows 10 64-bit, using a sufficiently up to date copy of Python 3.9 (I'm using 3.9.13), if I do the following: virtualenv test_env testenv\Scripts\activate pip install pandas pip install darts This installs a large number of dependencies for darts and takes several minutes to complete, without errors or warnings. The installed version of pandas after this is 1.5.2 and the installed version of darts is 0.22.0. (it's unclear why yours would be 0.16.0) I can then run the following using Python: from darts import TimeSeries, concatenate And I get no error message. The error you're reporting cannot be reproduced using current and up to date versions of the packages you mentioned. Why exactly do you have a reliance upon darts version 0.16.0 specifically?
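If it helps to confirm what the failing interpreter actually sees, a small check like the following (assuming the distribution is installed under the name "darts", as in the answer's pip install darts) prints the interpreter path and the installed pandas/darts versions:
import sys
from importlib.metadata import version

print(sys.executable)              # which Python is running this code
print("pandas:", version("pandas"))
print("darts:", version("darts"))  # assumes the pip distribution name "darts"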
module 'pandas._typing' has no attribute 'FilePathOrBuffer'
When trying to import a darts attribute using the line from darts import TimeSeries, concatenate I get the error message AttributeError: module 'pandas._typing' has no attribute 'FilePathOrBuffer' Is there a fix for this? Versions of what I'm using: pandas 1.5.2 Windows 10 64-bit, ver 21H2 Python 3.9.13 darts ver 0.16.0
[ "Like yourself, running Windows 10 64-bit, using a sufficiently up to date copy of Python 3.9 (I'm using 3.9.13), if I do the following:\nvirtualenv test_env\ntestenv\\Scripts\\activate\npip install pandas\npip install darts\n\nThis installs a large number of dependencies for darts and takes several minutes to complete, without errors or warnings.\nThe installed version of pandas after this is 1.5.2 and the installed version of darts is 0.22.0. (it's unclear why yours would be 0.16.0)\nI can then run the following using Python:\nfrom darts import TimeSeries, concatenate\n\nAnd I get no error message. The error you're reporting cannot be reproduced using current and up to date versions of the packages you mentioned.\nWhy exactly do you have a reliance upon darts version 0.16.0 specifically?\n" ]
[ 0 ]
[]
[]
[ "libraries", "python" ]
stackoverflow_0074607309_libraries_python.txt
Q: How to match one to one data-point based on some conditions Let's say I have a dataframe that looks like this df = pd.DataFrame(columns=['ID', 'job', 'eligible', "date"]) df['ID'] = ['1', '2', '3', '4', '5', '6', '7', '8'] df['job'] = ['waitress', 'doctor', 'benevolent', 'nurse', 'hairstylist', 'banker', 'waitress', 'waitress'] df['eligible'] = [No, Yes, No, Yes, No, No, No, No] df['date'] = ['1.1.2016', '31.12.2015', '1.1.2016', '31.12.2015', '1.1.2016', '31.12.2015', '1.1.2015', '1.1.2015'] df["date"] = pd.to_datetime(df["date"]) I would like to pair the data with matched job, eligibility and mismatched year (2015 with 2016). And this only with a one-to-one matching, which means a part of the data may have several match or not at all. If there are several, the matched pair will be chosen at random. Therefore, I would like to have an outcome which looks like this: df_paired = (columns=['ID', 'job', 'eligible', "paired_ID"]) df['ID'] = ['1'] df['paired_ID'] = ['8'] df['job'] = ['waitress'] df['eligible'] = [No] I tried a lot of solutions but the main problem was the one-to-one matching, to get unique match even tough there could be several match for a single observation... A: This solution uses the groupby method to find rows which share job and eligible values. Subgroups which share the same year are then identified. A random index is selected to choose the ID to assign for the paired_ID. import numpy as np import pandas as pd df = pd.DataFrame(columns=['ID', 'job', 'eligible', "date"]) df['ID'] = ['1', '2', '3', '4', '5', '6', '7', '8'] df['job'] = ['waitress', 'doctor', 'benevolent', 'nurse', 'hairstylist', 'banker', 'waitress', 'waitress'] df['eligible'] = ['No', 'Yes', 'No', 'Yes', 'No', 'No', 'No', 'No'] df['date'] = ['1.1.2016', '31.12.2015', '1.1.2016', '31.12.2015', '1.1.2016', '31.12.2015', '1.1.2015', '1.1.2015'] df["date"] = pd.to_datetime(df["date"]) # Add column df['paired_ID'] = None for _, matches in df.groupby(['job', 'eligible']): if (len(matches) > 1): year_groups = matches.groupby(by=matches['date'].dt.year) if (len(year_groups) > 1): years = tuple(year_groups.groups.keys()) for y in years: pair_y = np.random.choice(tuple([x for x in years if x != y])) for index in year_groups.groups[y]: paired_index = np.random.choice(year_groups.groups[pair_y]) paired_ID = int(df['ID'].iloc[paired_index]) df['paired_ID'].iloc[index] = paired_ID
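Since the question specifically asks for one-to-one matches (each row used at most once), here is an additional hedged sketch, separate from the answer above, that shuffles the 2015 and 2016 rows inside each (job, eligible) group and pairs them positionally, so no ID can appear in two pairs; unmatched rows are simply dropped.
import pandas as pd

df = pd.DataFrame({
    "ID": ["1", "2", "3", "4", "5", "6", "7", "8"],
    "job": ["waitress", "doctor", "benevolent", "nurse", "hairstylist", "banker", "waitress", "waitress"],
    "eligible": ["No", "Yes", "No", "Yes", "No", "No", "No", "No"],
    "date": pd.to_datetime(["1.1.2016", "31.12.2015", "1.1.2016", "31.12.2015",
                            "1.1.2016", "31.12.2015", "1.1.2015", "1.1.2015"], dayfirst=True),
})

def pair_one_to_one(data: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    pairs = []
    for (job, eligible), grp in data.groupby(["job", "eligible"]):
        # shuffle each year's rows, then zip them so every ID is used at most once
        g2015 = grp[grp["date"].dt.year == 2015].sample(frac=1, random_state=seed)
        g2016 = grp[grp["date"].dt.year == 2016].sample(frac=1, random_state=seed)
        for a, b in zip(g2016.itertuples(), g2015.itertuples()):
            pairs.append({"ID": a.ID, "job": job, "eligible": eligible, "paired_ID": b.ID})
    return pd.DataFrame(pairs)

print(pair_one_to_one(df))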
How to match one to one data-point based on some conditions
Let's say I have a dataframe that looks like this df = pd.DataFrame(columns=['ID', 'job', 'eligible', "date"]) df['ID'] = ['1', '2', '3', '4', '5', '6', '7', '8'] df['job'] = ['waitress', 'doctor', 'benevolent', 'nurse', 'hairstylist', 'banker', 'waitress', 'waitress'] df['eligible'] = [No, Yes, No, Yes, No, No, No, No] df['date'] = ['1.1.2016', '31.12.2015', '1.1.2016', '31.12.2015', '1.1.2016', '31.12.2015', '1.1.2015', '1.1.2015'] df["date"] = pd.to_datetime(df["date"]) I would like to pair the data with matched job, eligibility and mismatched year (2015 with 2016). And this only with a one-to-one matching, which means a part of the data may have several match or not at all. If there are several, the matched pair will be chosen at random. Therefore, I would like to have an outcome which looks like this: df_paired = (columns=['ID', 'job', 'eligible', "paired_ID"]) df['ID'] = ['1'] df['paired_ID'] = ['8'] df['job'] = ['waitress'] df['eligible'] = [No] I tried a lot of solutions but the main problem was the one-to-one matching, to get unique match even tough there could be several match for a single observation...
[ "This solution uses the groupby method to find rows which share job and eligible values. Subgroups which share the same year are then identified. A random index is selected to choose the ID to assign for the paired_ID.\nimport numpy as np\nimport pandas as pd\n\ndf = pd.DataFrame(columns=['ID', 'job', 'eligible', \"date\"])\ndf['ID'] = ['1', '2', '3', '4', '5', '6', '7', '8']\ndf['job'] = ['waitress', 'doctor', 'benevolent', 'nurse', 'hairstylist', 'banker', 'waitress', 'waitress']\ndf['eligible'] = ['No', 'Yes', 'No', 'Yes', 'No', 'No', 'No', 'No']\ndf['date'] = ['1.1.2016', '31.12.2015', '1.1.2016', '31.12.2015', '1.1.2016', '31.12.2015', '1.1.2015', '1.1.2015']\n\ndf[\"date\"] = pd.to_datetime(df[\"date\"])\n\n# Add column\ndf['paired_ID'] = None\n\nfor _, matches in df.groupby(['job', 'eligible']):\n if (len(matches) > 1):\n year_groups = matches.groupby(by=matches['date'].dt.year)\n if (len(year_groups) > 1):\n years = tuple(year_groups.groups.keys())\n for y in years:\n pair_y = np.random.choice(tuple([x for x in years if x != y]))\n for index in year_groups.groups[y]:\n paired_index = np.random.choice(year_groups.groups[pair_y])\n paired_ID = int(df['ID'].iloc[paired_index])\n df['paired_ID'].iloc[index] = paired_ID\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074607527_pandas_python.txt
Q: Function in DolphinDB similar to numpy.clip() Function numpy.clip() is used to clip(limit) the values in an array. Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of [2, 6] is specified, values smaller than 2 become 2, and values larger than 6 become 6. import numpy as np    in_array = [1, 2, 3, 4, 5, 6, 7, 8 ] print ("Input array:", in_array)    out_array = np.clip(in_array, a_min = 2, a_max = 6) print ("Output array:", out_array) Output: Input array: [1, 2, 3, 4, 5, 6, 7, 8] Output array: [2 2 3 4 5 6 6 6] A: Function winsorize may work, to some extent, if you can specify the percentages to cut on each side of the vector. Or, you can use the user-defined function as follows with iif: def clip(x,minValue,maxValue): iif(x>maxValue,maxValue,iif(x<minValue,minValue,x)) in = [1, 2, 3, 4, 5, 6, 7, 8 ] clip(in,2,6) Output: offset 0 1 2 3 4 5 6 7 0 2 2 3 4 5 6 6 6
Function in DolphinDB similar to numpy.clip()
Function numpy.clip() is used to clip(limit) the values in an array. Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of [2, 6] is specified, values smaller than 2 become 2, and values larger than 6 become 6. import numpy as np    in_array = [1, 2, 3, 4, 5, 6, 7, 8 ] print ("Input array:", in_array)    out_array = np.clip(in_array, a_min = 2, a_max = 6) print ("Output array:", out_array) Output: Input array: [1, 2, 3, 4, 5, 6, 7, 8] Output array: [2 2 3 4 5 6 6 6]
[ "Function winsorize may work, to some extent, if you can specify the percentages to cut on each side of the vector.\nOr, you can use the user-defined function as follows with iif:\ndef clip(x,minValue,maxValue): iif(x>maxValue,maxValue,iif(x<minValue,minValue,x))\nin = [1, 2, 3, 4, 5, 6, 7, 8 ] \nclip(in,2,6)\n\nOutput:\noffset 0 1 2 3 4 5 6 7\n0 2 2 3 4 5 6 6 6\n\n" ]
[ 0 ]
[]
[]
[ "dolphindb", "numpy", "python" ]
stackoverflow_0074595836_dolphindb_numpy_python.txt
Q: Python Pandas Pivot Table issue with margins : got "Grouper for 'something' not 1-dimensional" I'm working with a df and a simple pivot table my purpose is to add the margins. Everything works fine until I add the arg "margins=True". here is my code : df1=pd.DataFrame({'brand':['A','A','A','A','A','B','A','B','A','B','B','A','A'], 'type':['C','C','C','C','C','C','C','C','D','D','C','C','C'], 'Year':[2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022], 'Month':[9,9,9,8,9,9,9,9,8,10,9,10,10]}) table_1 = pd.pivot_table(df1, values = 'type', index = ['brand','type'], columns = ['Year','Month'], aggfunc = {'type':len}, fill_value = '0', margins=True) print(table_1) And I got the error : "ValueError: Grouper for 'type' not 1-dimensional" Do you have any ideas to make it works ? Thank you I tried to change the parameters for aggfunc, I don't see what I'm missing here... Without the margins the outcome is fine. I just need the sum for each rows and columns which is what margins should do... A: I think it doesn't like that you use both type as index and value. A workaround would be to use a dummy column: table_1 = pd.pivot_table(df1.assign(val=1), values='val', index=['brand','type'], columns=['Year','Month'], aggfunc={'val':len}, fill_value=0, margins=True) print(table_1) Output: Year 2022 All Month 8 9 10 brand type A C 1 5 2 8 D 1 0 0 1 B C 0 3 0 3 D 0 0 1 1 All 2 8 3 13 NB. Use fill_value=0 to avoid having an object dtype.
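For pure counts such as this one, pd.crosstab is a possible alternative to the dummy-column workaround: it takes the grouping Series directly and supports margins=True without needing a values column. A sketch using the question's df1:
import pandas as pd

df1 = pd.DataFrame({'brand': ['A','A','A','A','A','B','A','B','A','B','B','A','A'],
                    'type': ['C','C','C','C','C','C','C','C','D','D','C','C','C'],
                    'Year': [2022]*13,
                    'Month': [9,9,9,8,9,9,9,9,8,10,9,10,10]})

# crosstab counts the rows per (brand, type) x (Year, Month) cell and
# margins=True appends the "All" row/column totals.
table_1 = pd.crosstab(index=[df1['brand'], df1['type']],
                      columns=[df1['Year'], df1['Month']],
                      margins=True)
print(table_1)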
Python Pandas Pivot Table issue with margins : got "Grouper for 'something' not 1-dimensional"
I'm working with a df and a simple pivot table my purpose is to add the margins. Everything works fine until I add the arg "margins=True". here is my code : df1=pd.DataFrame({'brand':['A','A','A','A','A','B','A','B','A','B','B','A','A'], 'type':['C','C','C','C','C','C','C','C','D','D','C','C','C'], 'Year':[2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022], 'Month':[9,9,9,8,9,9,9,9,8,10,9,10,10]}) table_1 = pd.pivot_table(df1, values = 'type', index = ['brand','type'], columns = ['Year','Month'], aggfunc = {'type':len}, fill_value = '0', margins=True) print(table_1) And I got the error : "ValueError: Grouper for 'type' not 1-dimensional" Do you have any ideas to make it works ? Thank you I tried to change the parameters for aggfunc, I don't see what I'm missing here... Without the margins the outcome is fine. I just need the sum for each rows and columns which is what margins should do...
[ "I think it doesn't like that you use both type as index and value. A workaround would be to use a dummy column:\ntable_1 = pd.pivot_table(df1.assign(val=1), \n values='val', index=['brand','type'],\n columns=['Year','Month'], aggfunc={'val':len},\n fill_value=0, margins=True)\n\nprint(table_1)\n\nOutput:\nYear 2022 All\nMonth 8 9 10 \nbrand type \nA C 1 5 2 8\n D 1 0 0 1\nB C 0 3 0 3\n D 0 0 1 1\nAll 2 8 3 13\n\nNB. Use fill_value=0 to avoid having an object dtype.\n" ]
[ 1 ]
[]
[]
[ "margins", "pandas", "pivot_table", "python" ]
stackoverflow_0074607735_margins_pandas_pivot_table_python.txt
Q: Django, pandas, excel: uploading files, parsing them with pandas in Django I have a big command-line script for parsing data in Excel (with pandas) and I want to wrap it with Django. I've tried both uploading files thru request.FILES and pandas, but get stuck on uploading file and, for example, saving it (not necessarily but just to check the upload for now). Haven't had any problems with other apps on Django which didn't require uploading and parsing anything external and thought that would be much easier..:) I've also tried Redirecting, doesn't really work, the only redirect which is actually happening is action in the form tag.. Here are the code snippets: views.py: def uploads(request): if request.method == 'POST': form = DocumentForm(request.POST, request.FILES) if form.is_valid(): excel_file = request.FILES['document'] excel_file.save() return render(request, 'index.html') else: form = DocumentForm() return render(request, 'index.html', {'form': form}) models.py class Document(models.Model): document = models.FileField(upload_to="files/") forms.py: class DocumentForm(forms.ModelForm): class Meta: model = Document fields = ('document', ) index.html: <form action="{% url 'reports'%}" method="post" enctype="multipart/form-data" > {% csrf_token %} <span> Upload .xlsx file <input type="file" name="document" /> </span> <button type="submit"> SUBMIT </button> </form> A: I think you must save the form's content actually: form.save()
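A sketch of how the view could both save the upload and hand the same file to pandas; it assumes the Document form/model and index.html from the question, that openpyxl is installed for .xlsx files, and run_report is a hypothetical stand-in for the asker's command-line parsing code.
import pandas as pd
from django.shortcuts import render

from .forms import DocumentForm

def run_report(df: pd.DataFrame) -> dict:
    # hypothetical wrapper around the existing command-line parsing script
    return {"rows": len(df), "columns": list(df.columns)}

def uploads(request):
    context = {"form": DocumentForm()}
    if request.method == "POST":
        form = DocumentForm(request.POST, request.FILES)
        if form.is_valid():
            uploaded = request.FILES["document"]
            df = pd.read_excel(uploaded)   # pandas reads the in-memory upload directly
            uploaded.seek(0)               # rewind so the full file is stored by save()
            form.save()                    # writes the file under MEDIA_ROOT/files/
            context["report"] = run_report(df)
        context["form"] = form
    return render(request, "index.html", context)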
Django, pandas, excel: uploading files, parsing them with pandas in Django
I have a big command-line script for parsing data in Excel (with pandas) and I want to wrap it with Django. I've tried both uploading files thru request.FILES and pandas, but get stuck on uploading file and, for example, saving it (not necessarily but just to check the upload for now). Haven't had any problems with other apps on Django which didn't require uploading and parsing anything external and thought that would be much easier..:) I've also tried Redirecting, doesn't really work, the only redirect which is actually happening is action in the form tag.. Here are the code snippets: views.py: def uploads(request): if request.method == 'POST': form = DocumentForm(request.POST, request.FILES) if form.is_valid(): excel_file = request.FILES['document'] excel_file.save() return render(request, 'index.html') else: form = DocumentForm() return render(request, 'index.html', {'form': form}) models.py class Document(models.Model): document = models.FileField(upload_to="files/") forms.py: class DocumentForm(forms.ModelForm): class Meta: model = Document fields = ('document', ) index.html: <form action="{% url 'reports'%}" method="post" enctype="multipart/form-data" > {% csrf_token %} <span> Upload .xlsx file <input type="file" name="document" /> </span> <button type="submit"> SUBMIT </button> </form>
[ "I think you must save the form's content actually:\nform.save()\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_views", "pandas", "python" ]
stackoverflow_0074608110_django_django_views_pandas_python.txt
Q: How can I write columns 'name' attribute to excel in Pandas? df=pd.DataFrame({'A':[1,2,3],'B':[4,5,6]}) df.index.name='class1' df.columns.name='class2' df.to_excel('...') The index name attribute 'class1' can be written normally, but the columns 'name' attribute 'class2' can't. Please note that I am not talking about the column names 'A' and 'B'. How can I write it? A: Based on the information you have provided, you should try setting the column names with df.columns = ['column name 1', 'column name 2'] and so on before you export. If that answer does not help, could you provide some of the data and commands you are putting in along with the output you are getting? That could be helpful in resolving this.
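The answer above is about the column labels rather than the columns' name attribute; as far as I can tell, to_excel has nowhere to put a single-level columns name, so one possible workaround is to stack the columns into the index, which turns both axis names ('class1' and 'class2') into ordinary header cells:
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
df.index.name = 'class1'
df.columns.name = 'class2'

# stack() moves the column labels into a second index level named 'class2';
# reset_index() then yields plain class1 / class2 / value columns that
# to_excel writes as a normal header row.
long_form = df.stack().reset_index(name='value')
print(long_form)
long_form.to_excel('out.xlsx', index=False)   # requires openpyxl for .xlsx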
How can I write columns 'name' attribute to excel in Pandas?
df=pd.DataFrame({'A':[1,2,3],'B':[4,5,6]}) df.index.name='class1' df.columns.name='class2' df.to_excel('...') The index name attribute 'class1' can be written normally, but the columns 'name' attribute 'class2' can't. Please note that I am not talking about the column names 'A' and 'B'. How can I write it?
[ "Based on the information you have provided, you should try setting the column names with df.columns = ['column name 1', 'column name 2'] and so on before you export. If that answer does not help, could you provide some of the data and commands you are putting in along with the output you are getting? That could be helpful in resolving this.\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074608194_pandas_python.txt
Q: Import "parselmouth.praat" could not be resolved I'm trying to install praat-parselmouth everything is fine when I use Jupyter Notebook. But when I tried to import this package on the VsCode I got the below error. I've checked the Python interpreter and it's the same as the installed Python directory. How can I solve this? A: The solution is given in the link you pasted. Sometimes on Windows, the installation works, but importing Parselmouth fails with an error message saying . This error is cause by some missing system files, but can luckily be solved quite easily by installing the “Microsoft Visual C++ Redistributable for Visual Studio 2017”. For a 64-bit Python installation: https://aka.ms/vs/16/release/VC_redist.x64.exe For a 32-bit Python installation: https://aka.ms/vs/16/release/VC_redist.x86.exe
Import "parselmouth.praat" could not be resolved
I'm trying to install praat-parselmouth; everything is fine when I use Jupyter Notebook. But when I tried to import this package in VS Code I got the below error. I've checked the Python interpreter and it's the same as the installed Python directory. How can I solve this?
[ "The solution is given in the link you pasted.\nSometimes on Windows, the installation works, but importing Parselmouth fails with an error message saying . This error is cause by some missing system files, but can luckily be solved quite easily by installing the “Microsoft Visual C++ Redistributable for Visual Studio 2017”.\nFor a 64-bit Python installation: https://aka.ms/vs/16/release/VC_redist.x64.exe\nFor a 32-bit Python installation: https://aka.ms/vs/16/release/VC_redist.x86.exe\n" ]
[ 0 ]
[]
[]
[ "python", "visual_studio_code" ]
stackoverflow_0074601422_python_visual_studio_code.txt
Q: "No artists with labels found to put in legend." error when changing the legend size in pyplot I want to make my legend size bigger in Pyplot. I used this answer to do that. Here is my code. import matplotlib.pyplot as plt import seaborn as sns sns.set_style("whitegrid") plt.rcParams['figure.figsize'] = [15, 7] lst = [1,2,3,4,5,6,7,8,9,8,7,6,5,4,3,4,5,6] plt.plot(lst) plt.legend(fontsize="x-large") # Here I make it bigger but doesn't work plt.legend(["This is my legend"]) plt.ylabel('some numbers') plt.show() I get this error and I don't know what is wrong. I don't understand what "Artist" means here. No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument. A: "Artist" is a term from matplotlib: https://matplotlib.org/stable/tutorials/intermediate/artists.html Presumably the error message means that there are no items in the legend whose font size can be changed. Maybe pass the fontsize argument to the same plt.legend call in which you create the legend, or call plt.legend(fontsize=...) after you have created the legend.
"No artists with labels found to put in legend." error when changing the legend size in pyplot
I want to make my legend size bigger in Pyplot. I used this answer to do that. Here is my code. import matplotlib.pyplot as plt import seaborn as sns sns.set_style("whitegrid") plt.rcParams['figure.figsize'] = [15, 7] lst = [1,2,3,4,5,6,7,8,9,8,7,6,5,4,3,4,5,6] plt.plot(lst) plt.legend(fontsize="x-large") # Here I make it bigger but doesn't work plt.legend(["This is my legend"]) plt.ylabel('some numbers') plt.show() I get this error and I don't know what is wrong. I don't understand what "Artist" means here. No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
[ "\"Artist\" is a term from matplotlib:\nhttps://matplotlib.org/stable/tutorials/intermediate/artists.html\nPresumably the error message means that there are no items in the legend whose font size can be changed.\nMaybe pass the fontsize argument to the same plt.legend call in which you create the legend, or call plt.legend(fontsize=...) after you have created the legend.\n" ]
[ 1 ]
[]
[]
[ "legend", "matplotlib", "python" ]
stackoverflow_0074608230_legend_matplotlib_python.txt
Q: Statsmodel Multiple Linear Regression Error - Python I am running (what I think is) as fairly straightforward multiple linear regression model fit using Stats model. My code is as follows: y = 'EXITS|20:00:00' all_columns = "+".join(y_2015piv.columns - ['EXITS|20:00:00']) reg_formula = "y~" + all_columns lm= smf.ols(formula=reg_formula, data=y_2015piv).fit() Because I have about 30 factor variables I'm creating the formula using Python string manipulation. "y" is as presented above. all_columns is the dataframe y_2015piv columns without "y". This is all_columns: DAY_Fri+DAY_Mon+DAY_Sat+DAY_Sun+DAY_Thu+DAY_Tue+DAY_Wed+ENTRIES|00:00:00+ENTRIES|04:00:00+ENTRIES|08:00:00+ENTRIES|12:00:00+ENTRIES|16:00:00+ENTRIES|20:00:00+EXITS|00:00:00+EXITS|04:00:00+EXITS|08:00:00+EXITS|12:00:00+EXITS|16:00:00+MONTH_Apr+MONTH_Aug+MONTH_Dec+MONTH_Feb+MONTH_Jan+MONTH_Jul+MONTH_Jun+MONTH_Mar+MONTH_May+MONTH_Nov+MONTH_Oct+MONTH_Sep The values in the dataframe are continuous numerical variables and 0/1 dummy variables. When I try and fit the model I get this error: PatsyError: numbers besides '0' and '1' are only allowed with ** y~DAY_Fri+DAY_Mon+DAY_Sat+DAY_Sun+DAY_Thu+DAY_Tue+DAY_Wed+ENTRIES|00:00:00+ENTRIES|04:00:00+ENTRIES|08:00:00+ENTRIES|12:00:00+ENTRIES|16:00:00+ENTRIES|20:00:00+EXITS|00:00:00+EXITS|04:00:00+EXITS|08:00:00+EXITS|12:00:00+EXITS|16:00:00+MONTH_Apr+MONTH_Aug+MONTH_Dec+MONTH_Feb+MONTH_Jan+MONTH_Jul+MONTH_Jun+MONTH_Mar+MONTH_May+MONTH_Nov+MONTH_Oct+MONTH_Sep There is nothing on line that addresses what this could be. Any help appreciated. By the way, when I fit this model in Scikit-learn it works fine. So I figure the data is in order. Thanks in advance. A: The first error that I got was this: PatsyError: numbers besides '0' and '1' are only allowed with ** Temp ~ MEI+ CO2+ CH4+ N2O+ CFC-11+ CFC-12+ TSI+ Aerosols ^^ According to this link: http://patsy.readthedocs.io/en/latest/builtins-reference.html#patsy.builtins.Q you can use Q("var") in the formula to get rid of the error. I was getting the same error but it was solved. linMod = smf.ols('Temp ~ MEI+ CO2+ CH4+ N2O+ Q("CFC-11")+ Q("CFC-12")+ TSI+ Aerosols',data = trainingSet).fit() this is the solved line of code. I had tried linMod = smf.ols('Temp ~ MEI+ CO2+ CH4+ N2O+ Q("CFC-11 + CFC-12")+ TSI+ Aerosols',data = trainingSet).fit() but this did not work. It seems that when using formula, the numbers and variables happen to have certain meaning that does not let the use of certain names. in my case error was: PatsyError: Error evaluating factor: NameError: no data named 'CFC-11+ CFC-12' found Temp ~ MEI+ CO2+ CH4+ N2O+ Q("CFC-11+ CFC-12")+ TSI+ Aerosols ^^^^^^^^^^^^^^^^^^^ A: patsy is handling the formula parsing and is parsing the string and interpreting it as formula with the given syntax. So some elements in the string are not allowed because they are part of the formula syntax. To keep them as names, patsy also has a code for taking the names as literal text Q which should work in this case http://patsy.readthedocs.io/en/latest/builtins-reference.html#patsy.builtins.Q Otherwise, if you already have the full design matrix with all the dummy variables, then there is no reason to go through the formula interface. Using the direct interface with pandas DataFrames or numpy arrays: sm.OLS(y, x) will ignore any names of DataFrame columns except for using it as strings in the summary table. 
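Because the asker's column names contain '|' and ':' characters that patsy parses as operators, every term needs Q() quoting; a hedged sketch that builds such a formula programmatically (toy data and shortened column names, not the asker's dataset):
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(50, 3)),
                  columns=["EXITS|20:00:00", "ENTRIES|08:00:00", "DAY_Mon"])

response = "EXITS|20:00:00"
predictors = [c for c in df.columns if c != response]

# Q("...") makes patsy treat the quoted string as a literal column name, so
# '|' and ':' are not interpreted as formula operators.
formula = 'Q("{}") ~ '.format(response) + " + ".join('Q("{}")'.format(c) for c in predictors)
lm = smf.ols(formula=formula, data=df).fit()
print(lm.summary())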
Variable/column names are also used as one way of defining restrictions for t_test but those go also through patsy and I am not sure it works with special characters in the names. A: Error: Temp ~ MEI+ CO2+ CH4+ N2O+ Q("CFC-11+ CFC-12")+ TSI+ Aerosols Answer: Temp ~ MEI+ CO2+ CH4+ N2O+ CFC_11+ CFC_12+ TSI+ Aerosols. You need to remove the symbols like minus or hyphen ('-'), small brackets from the column names. In this way you can solve the problem. df = pd.read_csv(filepath) col = [] for i in df.columns: i = i.replace('-','_') i = i.replace('(','_') i = i.replace(')','_') col.append(i) df.columns = columns
Statsmodel Multiple Linear Regression Error - Python
I am running (what I think is) as fairly straightforward multiple linear regression model fit using Stats model. My code is as follows: y = 'EXITS|20:00:00' all_columns = "+".join(y_2015piv.columns - ['EXITS|20:00:00']) reg_formula = "y~" + all_columns lm= smf.ols(formula=reg_formula, data=y_2015piv).fit() Because I have about 30 factor variables I'm creating the formula using Python string manipulation. "y" is as presented above. all_columns is the dataframe y_2015piv columns without "y". This is all_columns: DAY_Fri+DAY_Mon+DAY_Sat+DAY_Sun+DAY_Thu+DAY_Tue+DAY_Wed+ENTRIES|00:00:00+ENTRIES|04:00:00+ENTRIES|08:00:00+ENTRIES|12:00:00+ENTRIES|16:00:00+ENTRIES|20:00:00+EXITS|00:00:00+EXITS|04:00:00+EXITS|08:00:00+EXITS|12:00:00+EXITS|16:00:00+MONTH_Apr+MONTH_Aug+MONTH_Dec+MONTH_Feb+MONTH_Jan+MONTH_Jul+MONTH_Jun+MONTH_Mar+MONTH_May+MONTH_Nov+MONTH_Oct+MONTH_Sep The values in the dataframe are continuous numerical variables and 0/1 dummy variables. When I try and fit the model I get this error: PatsyError: numbers besides '0' and '1' are only allowed with ** y~DAY_Fri+DAY_Mon+DAY_Sat+DAY_Sun+DAY_Thu+DAY_Tue+DAY_Wed+ENTRIES|00:00:00+ENTRIES|04:00:00+ENTRIES|08:00:00+ENTRIES|12:00:00+ENTRIES|16:00:00+ENTRIES|20:00:00+EXITS|00:00:00+EXITS|04:00:00+EXITS|08:00:00+EXITS|12:00:00+EXITS|16:00:00+MONTH_Apr+MONTH_Aug+MONTH_Dec+MONTH_Feb+MONTH_Jan+MONTH_Jul+MONTH_Jun+MONTH_Mar+MONTH_May+MONTH_Nov+MONTH_Oct+MONTH_Sep There is nothing on line that addresses what this could be. Any help appreciated. By the way, when I fit this model in Scikit-learn it works fine. So I figure the data is in order. Thanks in advance.
[ "The first error that I got was this:\nPatsyError: numbers besides '0' and '1' are only allowed with **\nTemp ~ MEI+ CO2+ CH4+ N2O+ CFC-11+ CFC-12+ TSI+ Aerosols\n ^^\n\nAccording to this link: http://patsy.readthedocs.io/en/latest/builtins-reference.html#patsy.builtins.Q\nyou can use Q(\"var\") in the formula to get rid of the error. I was getting the same error but it was solved.\nlinMod = smf.ols('Temp ~ MEI+ CO2+ CH4+ N2O+ Q(\"CFC-11\")+ Q(\"CFC-12\")+ TSI+ Aerosols',data = trainingSet).fit()\n\nthis is the solved line of code. I had tried\nlinMod = smf.ols('Temp ~ MEI+ CO2+ CH4+ N2O+ Q(\"CFC-11 + CFC-12\")+ TSI+ Aerosols',data = trainingSet).fit()\n\nbut this did not work. It seems that when using formula, the numbers and variables happen to have certain meaning that does not let the use of certain names. in my case error was:\nPatsyError: Error evaluating factor: NameError: no data named 'CFC-11+ CFC-12' found\nTemp ~ MEI+ CO2+ CH4+ N2O+ Q(\"CFC-11+ CFC-12\")+ TSI+ Aerosols\n ^^^^^^^^^^^^^^^^^^^\n\n", "patsy is handling the formula parsing and is parsing the string and interpreting it as formula with the given syntax. So some elements in the string are not allowed because they are part of the formula syntax. To keep them as names, patsy also has a code for taking the names as literal text Q which should work in this case\nhttp://patsy.readthedocs.io/en/latest/builtins-reference.html#patsy.builtins.Q\nOtherwise, if you already have the full design matrix with all the dummy variables, then there is no reason to go through the formula interface. Using the direct interface with pandas DataFrames or numpy arrays:\nsm.OLS(y, x) \nwill ignore any names of DataFrame columns except for using it as strings in the summary table. \nVariable/column names are also used as one way of defining restrictions for t_test but those go also through patsy and I am not sure it works with special characters in the names.\n", "Error: Temp ~ MEI+ CO2+ CH4+ N2O+ Q(\"CFC-11+ CFC-12\")+ TSI+ Aerosols\nAnswer: Temp ~ MEI+ CO2+ CH4+ N2O+ CFC_11+ CFC_12+ TSI+ Aerosols.\nYou need to remove the symbols like minus or hyphen ('-'), small brackets from the column names. In this way you can solve the problem.\n df = pd.read_csv(filepath)\n col = []\n for i in df.columns:\n i = i.replace('-','_')\n i = i.replace('(','_')\n i = i.replace(')','_')\n col.append(i)\n df.columns = columns\n\n" ]
[ 8, 2, 0 ]
[]
[]
[ "linear_regression", "python", "statsmodels" ]
stackoverflow_0037356559_linear_regression_python_statsmodels.txt
Q: How to get a list of built-in modules in python? I would like to get a list of names of built-in modules in python such that I can test the popularity of function's naming conventions (underline, CamelCase or mixedCase). I know there is a Global Module Index but I am wondering if there is a list of strings, which is easier to use :) Update: len(dir(__builtins__)) = 145 len(stdlib_list("2.7")) = 430 help('modules') = 508 # counting manually the output A: The compiled-in module names are in sys.builtin_module_names. For all importable modules, see pkgutil.iter_modules. Run these in a clean virtualenv to get (almost) only the modules that come with Python itself. Note that a “popularity poll” will necessarily include modules that use old, discouraged naming conventions because they were written before today's guidelines were put in place, and can't change because need to be backwards compatible. It might be useful for something, but not for answering best-practice questions such as “How should I name my functions?”. For that, see the PEP8, the Python style guide, especially the “Naming Conventions” section. A: How about this? Though, this gets a list of built-in functions and variables rather than modules... dir(__builtins__) help('modules') will give you a list of all modules, according to How can I get a list of locally installed Python modules?. Not a list of strings, though. A: Now there is a 3rd party package for this. It scrapes the TOC of the Standard Library page in the official Python docs and builds a list. You can install it using pip pip install stdlib_list and got get a list of libraries In [12]: from stdlib_list import stdlib_list In [13]: libraries = stdlib_list("3.5") In [14]: libraries[4:12] Out[14]: ['abc', 'aifc', 'argparse', 'array', 'ast', 'asynchat', 'asyncio', 'asyncore'] You can find source code here. 
A: >>>dir (__builtins__) or >>>help (__builtins__) A: From the CPython`s docs: All known built-in modules are listed in sys.builtin_module_names Names of modules in sys.builtin_module_names is actual only for used a Python interpreter: A tuple of strings giving the names of all modules that are compiled into this Python interpreter Each built-in module use the special loader while importing: BuiltinImporter In [65]: import itertools, sys, gc In [66]: itertools.__loader__, sys.__loader__, gc.__loader__ Out[66]: (_frozen_importlib.BuiltinImporter, _frozen_importlib.BuiltinImporter, _frozen_importlib.BuiltinImporter) In the Python 3 the number of built-in modules has slightly increased $ python2.7 -c "import sys; print('Count built-in modules: %d' %len(sys.builtin_module_names)); print(sys.builtin_module_names)" Count built-in modules: 51 ('__builtin__', '__main__', '_ast', '_bisect', '_codecs', '_collections', '_functools', '_heapq', '_io', '_locale', '_md5', '_random', '_sha', '_sha256', '_sha512', '_socket', '_sre', '_struct', '_symtable', '_warnings', '_weakref', 'array', 'binascii', 'cPickle', 'cStringIO', 'cmath', 'datetime', 'errno', 'exceptions', 'fcntl', 'gc', 'grp', 'imp', 'itertools', 'marshal', 'math', 'operator', 'posix', 'pwd', 'select', 'signal', 'spwd', 'strop', 'sys', 'syslog', 'thread', 'time', 'unicodedata', 'xxsubtype', 'zipimport', 'zlib') $ python3.4 -c "import sys; print('Count built-in modules: %d' %len(sys.builtin_module_names)); print(sys.builtin_module_names)" Count built-in modules: 54 ('_ast', '_bisect', '_codecs', '_collections', '_datetime', '_elementtree', '_functools', '_heapq', '_imp', '_io', '_locale', '_md5', '_operator', '_pickle', '_posixsubprocess', '_random', '_sha1', '_sha256', '_sha512', '_socket', '_sre', '_stat', '_string', '_struct', '_symtable', '_thread', '_tracemalloc', '_warnings', '_weakref', 'array', 'atexit', 'binascii', 'builtins', 'errno', 'faulthandler', 'fcntl', 'gc', 'grp', 'itertools', 'marshal', 'math', 'posix', 'pwd', 'pyexpat', 'select', 'signal', 'spwd', 'sys', 'syslog', 'time', 'unicodedata', 'xxsubtype', 'zipimport', 'zlib') As the CPython is implemented (primary) on the C programming language, so it is not easy to find it, as example location the Python`s module sys (based on this answer): $ locate sysmodule | grep python /usr/include/python2.7/sysmodule.h /usr/include/python3.4m/sysmodule.h /usr/local/include/python3.5m/sysmodule.h More information about getting an information about all available modules is the CPython, look in my answer here. A: It can be done using the given block of code below and it is the most effective way as per me. import sys a = sys.builtin_module_names print(a) The last line to be included if you want to print them. Here, a is a tuple and so it can access all the functionalities of a tuple. You can have a look at sys.builtin_module_names for further help https://docs.python.org/3/library/sys.html A: I was working on a similar problem when I learned that 'builtin' means something like "there is no source file associated with this object". Here's a solution based on checking the /lib and /Dlls folders manually. The use of "unique" may be redundant; it's there because I'm not sure if DLLs is strictly for packages which come with python/it was useful for a different problem I was trying to solve (finding out the requirements for a source package). 
from typing import Generator, Iterable import os, re, sys def unique(iterable:Iterable, yielded:list=None) -> Generator: """ Iterate over unique elements of an iterable examples: >>> [*unique('abbcccdddd')] ['a', 'b', 'c', 'd'] >>> [*unique('abcd')] ['a', 'b', 'c', 'd'] """ yielded = yielded if not isinstance(yielded, type(None)) else [] for i in iterable: if not i in yielded: yield i yielded.append(i) def native_modules() -> Generator: """ Every module found: under your installation's /lib and /DLLs directories; excuding any under /lib/site-packages/* in sys.builtin_module_names """ omitables = 'site-packages __pycache__ .pyo .ico .dll .pdb'.split() path = lambda folder: os.path.join(sys.exec_prefix, folder) tidy = lambda pth: os.path.split(os.path.splitext(pth)[0])[-1] # separate the name from the path and extension, if present discriminant = lambda pth: not any(re.match(i, pth) for i in map(re.escape, omitables)) lib = map(tidy, filter(discriminant, os.listdir(path('lib')))) dlls = map(tidy, filter(discriminant, os.listdir(path('DLLs')))) yielded = [] yield from yielded yield from unique(sys.builtin_module_names, yielded) yield from unique(lib, yielded) yield from unique(dlls, yielded) A: All the commands like dir(builtins) or help(builtins) or sys.builtin_module_names, are either missing several core inbuilt package names or giving thousands of lines of verbose output. >>> help('modules') The best command to show all the installed modules is help('modules'). It gives you name of all the installed modules in python without missing anything. Incase you need names of inbuilt modules only, create a temporary fresh venev and give the command there. It might be bit slow but is the only way by far in my experience to list every single package. A: In Linux, you can get a dump of all the built-in modules that came with the fresh install with the following commands sudo apt-get install python3-virtualenv tmpDir=`mktemp -d` python3 -m virtualenv $tmpDir source /tmp/virtualenv/bin/activate python help('modules') Example Execution Here's an example execution of the above commands in Debian 11 with python 3.9 user@disp643:~$ sudo apt-get install python3-virtualenv Reading package lists... Done Building dependency tree... Done Reading state information... Done python3-virtualenv is already the newest version (20.4.0+ds-2+deb11u1). The following packages were automatically installed and are no longer required: ethtool libbotan-2-17 libtspi1 linux-image-5.10.0-10-amd64 linux-image-5.10.0-13-amd64 linux-image-5.10.0-14-amd64 linux-image-5.10.0-15-amd64 linux-image-5.10.0-16-amd64 linux-image-5.10.0-17-amd64 net-tools sse3-support Use 'sudo apt autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded. 
user@disp643:~$ user@disp643:~$ tmpDir=`mktemp -d` user@disp643:~$ user@disp643:~$ python3 -m virtualenv $tmpDir created virtual environment CPython3.9.2.final.0-64 in 120ms creator CPython3Posix(dest=/tmp/tmp.qQsKZHGZqk, clear=False, no_vcs_ignore=False, global=False) seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/user/.local/share/virtualenv) added seed packages: pip==20.3.4, pkg_resources==0.0.0, setuptools==44.1.1, wheel==0.34.2 activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator user@disp643:~$ source /tmp/virtualenv/bin/activate (virtualenv) user@disp643:~$ python Python 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] on linux Type "help", "copyright", "credits" or "license" for more information. >>> help('modules') Please wait a moment while I gather a list of all available modules... __future__ _tracemalloc graphlib retrying _abc _uuid grp rlcompleter _aix_support _virtualenv gzip runpy _ast _warnings hashlib sched _asyncio _weakref heapq secrets _bisect _weakrefset hmac select _blake2 _xxsubinterpreters html selectors _bootlocale _xxtestfuzz html5lib setuptools _bootsubprocess _zoneinfo http shelve _bz2 abc idna shlex _codecs aifc imaplib shutil _codecs_cn antigravity imghdr signal _codecs_hk appdirs imp site _codecs_iso2022 argparse importlib sitecustomize _codecs_jp array inspect smtpd _codecs_kr ast io smtplib _codecs_tw asynchat ipaddr sndhdr _collections asyncio ipaddress socket _collections_abc asyncore itertools socketserver _compat_pickle atexit json spwd _compression audioop keyword sqlite3 _contextvars base64 lib2to3 sre_compile _crypt bdb linecache sre_constants _csv binascii locale sre_parse _ctypes binhex logging ssl _ctypes_test bisect lzma stat _curses builtins mailbox statistics _curses_panel bz2 mailcap string _datetime cProfile marshal stringprep _dbm calendar math struct _decimal certifi mimetypes subprocess _elementtree cgi mmap sunau _functools cgitb modulefinder symbol _hashlib chunk msgpack symtable _heapq cmath multiprocessing sys _imp cmd netrc sysconfig _io code nis syslog _json codecs nntplib tabnanny _locale codeop ntpath tarfile _lsprof collections nturl2path telnetlib _lzma colorama numbers tempfile _markupbase colorsys opcode termios _md5 compileall operator test _multibytecodec concurrent optparse textwrap _multiprocessing configparser os this _opcode contextlib ossaudiodev threading _operator contextlib2 packaging time _osx_support contextvars parser timeit _peg_parser copy pathlib tkinter _pickle copyreg pdb token _posixshmem crypt pep517 tokenize _posixsubprocess csv pickle toml _py_abc ctypes pickletools trace _pydecimal curses pip traceback _pyio dataclasses pipes tracemalloc _queue datetime pkg_resources tty _random dbm pkgutil turtle _sha1 decimal platform types _sha256 difflib plistlib typing _sha3 dis poplib unicodedata _sha512 distlib posix unittest _signal distutils posixpath urllib _sitebuiltins doctest pprint urllib3 _socket easy_install profile uu _sqlite3 email progress uuid _sre encodings pstats venv _ssl enum pty warnings _stat errno pwd wave _statistics faulthandler py_compile weakref _string fcntl pyclbr webbrowser _strptime filecmp pydoc wheel _struct fileinput pydoc_data wsgiref _symtable fnmatch pyexpat xdrlib _sysconfigdata__linux_x86_64-linux-gnu formatter pyparsing xml _sysconfigdata__x86_64-linux-gnu fractions queue xmlrpc _testbuffer ftplib quopri xxlimited _testcapi functools random xxsubtype 
_testimportmultiple gc re zipapp _testinternalcapi genericpath readline zipfile _testmultiphase getopt reprlib zipimport _thread getpass requests zlib _threading_local gettext resolvelib zoneinfo _tkinter glob resource Enter any module name to get more help. Or, type "modules spam" to search for modules whose name or summary contain the string "spam". >>>
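For a quick programmatic cross-check of the approaches above, here is a small sketch; it assumes Python 3.10 or newer, where sys.stdlib_module_names lists the whole standard library by name (older versions only have sys.builtin_module_names and pkgutil):

import sys
import pkgutil

# modules compiled directly into this interpreter
compiled_in = set(sys.builtin_module_names)

# the full standard library by name (Python 3.10+ only)
stdlib = set(sys.stdlib_module_names)

# everything importable right now: stdlib plus site-packages
importable = {info.name for info in pkgutil.iter_modules()}

print("compiled in:", len(compiled_in))
print("stdlib:     ", len(stdlib))
print("importable: ", len(importable))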
How to get a list of built-in modules in python?
I would like to get a list of names of built-in modules in Python so that I can test the popularity of functions' naming conventions (underscore, CamelCase or mixedCase). I know there is a Global Module Index but I am wondering if there is a list of strings, which is easier to use :) Update: len(dir(__builtins__)) = 145 len(stdlib_list("2.7")) = 430 help('modules') = 508 # counting manually the output
[ "The compiled-in module names are in sys.builtin_module_names. For all importable modules, see pkgutil.iter_modules.\nRun these in a clean virtualenv to get (almost) only the modules that come with Python itself.\n\nNote that a “popularity poll” will necessarily include modules that use old, discouraged naming conventions because they were written before today's guidelines were put in place, and can't change because need to be backwards compatible. It might be useful for something, but not for answering best-practice questions such as “How should I name my functions?”. For that, see the PEP8, the Python style guide, especially the “Naming Conventions” section.\n", "How about this? Though, this gets a list of built-in functions and variables rather than modules...\ndir(__builtins__)\n\nhelp('modules') will give you a list of all modules, according to How can I get a list of locally installed Python modules?. Not a list of strings, though.\n", "Now there is a 3rd party package for this. It scrapes the TOC of the Standard Library page in the official Python docs and builds a list.\nYou can install it using pip\npip install stdlib_list\n\nand got get a list of libraries\nIn [12]: from stdlib_list import stdlib_list\n\nIn [13]: libraries = stdlib_list(\"3.5\")\n\nIn [14]: libraries[4:12]\nOut[14]: ['abc', 'aifc', 'argparse', 'array', 'ast', 'asynchat', 'asyncio', 'asyncore']\n\nYou can find source code here.\n", ">>>dir (__builtins__)\nor\n>>>help (__builtins__)\n", "From the CPython`s docs:\n\nAll known built-in modules are listed in sys.builtin_module_names\n\nNames of modules in sys.builtin_module_names is actual only for used a Python interpreter:\n\nA tuple of strings giving the names of all modules that are compiled into this Python interpreter\n\nEach built-in module use the special loader while importing: BuiltinImporter\nIn [65]: import itertools, sys, gc\n\nIn [66]: itertools.__loader__, sys.__loader__, gc.__loader__\nOut[66]: \n(_frozen_importlib.BuiltinImporter,\n _frozen_importlib.BuiltinImporter,\n _frozen_importlib.BuiltinImporter)\n\nIn the Python 3 the number of built-in modules has slightly increased \n$ python2.7 -c \"import sys; print('Count built-in modules: %d' %len(sys.builtin_module_names)); print(sys.builtin_module_names)\"\nCount built-in modules: 51\n('__builtin__', '__main__', '_ast', '_bisect', '_codecs', '_collections', '_functools', '_heapq', '_io', '_locale', '_md5', '_random', '_sha', '_sha256', '_sha512', '_socket', '_sre', '_struct', '_symtable', '_warnings', '_weakref', 'array', 'binascii', 'cPickle', 'cStringIO', 'cmath', 'datetime', 'errno', 'exceptions', 'fcntl', 'gc', 'grp', 'imp', 'itertools', 'marshal', 'math', 'operator', 'posix', 'pwd', 'select', 'signal', 'spwd', 'strop', 'sys', 'syslog', 'thread', 'time', 'unicodedata', 'xxsubtype', 'zipimport', 'zlib')\n$ python3.4 -c \"import sys; print('Count built-in modules: %d' %len(sys.builtin_module_names)); print(sys.builtin_module_names)\"\nCount built-in modules: 54\n('_ast', '_bisect', '_codecs', '_collections', '_datetime', '_elementtree', '_functools', '_heapq', '_imp', '_io', '_locale', '_md5', '_operator', '_pickle', '_posixsubprocess', '_random', '_sha1', '_sha256', '_sha512', '_socket', '_sre', '_stat', '_string', '_struct', '_symtable', '_thread', '_tracemalloc', '_warnings', '_weakref', 'array', 'atexit', 'binascii', 'builtins', 'errno', 'faulthandler', 'fcntl', 'gc', 'grp', 'itertools', 'marshal', 'math', 'posix', 'pwd', 'pyexpat', 'select', 'signal', 'spwd', 'sys', 'syslog', 'time', 
'unicodedata', 'xxsubtype', 'zipimport', 'zlib')\n\nAs the CPython is implemented (primary) on the C programming language, so it is not easy to find it, as example location the Python`s module sys (based on this answer):\n$ locate sysmodule | grep python\n/usr/include/python2.7/sysmodule.h\n/usr/include/python3.4m/sysmodule.h\n/usr/local/include/python3.5m/sysmodule.h\n\n\nMore information about getting an information about all available modules is the CPython, look in my answer here.\n", "It can be done using the given block of code below and it is the most effective way as per me.\nimport sys\na = sys.builtin_module_names\nprint(a)\n\nThe last line to be included if you want to print them.\nHere, a is a tuple and so it can access all the functionalities of a tuple.\nYou can have a look at sys.builtin_module_names for further help\nhttps://docs.python.org/3/library/sys.html\n", "I was working on a similar problem when I learned that 'builtin' means something like \"there is no source file associated with this object\".\nHere's a solution based on checking the /lib and /Dlls folders manually.\nThe use of \"unique\" may be redundant; it's there because I'm not sure if DLLs is strictly for packages which come with python/it was useful for a different problem I was trying to solve (finding out the requirements for a source package).\nfrom typing import Generator, Iterable\nimport os, re, sys\n\ndef unique(iterable:Iterable, yielded:list=None) -> Generator:\n \"\"\"\n Iterate over unique elements of an iterable\n examples:\n >>> [*unique('abbcccdddd')]\n ['a', 'b', 'c', 'd']\n >>> [*unique('abcd')]\n ['a', 'b', 'c', 'd']\n \"\"\"\n yielded = yielded if not isinstance(yielded, type(None)) else []\n for i in iterable:\n if not i in yielded:\n yield i\n yielded.append(i)\n\ndef native_modules() -> Generator:\n \"\"\"\n Every module found:\n under your installation's /lib and /DLLs directories; excuding any under /lib/site-packages/*\n in sys.builtin_module_names\n \"\"\"\n omitables = 'site-packages __pycache__ .pyo .ico .dll .pdb'.split()\n \n path = lambda folder: os.path.join(sys.exec_prefix, folder)\n tidy = lambda pth: os.path.split(os.path.splitext(pth)[0])[-1] # separate the name from the path and extension, if present\n discriminant = lambda pth: not any(re.match(i, pth) for i in map(re.escape, omitables))\n \n lib = map(tidy, filter(discriminant, os.listdir(path('lib'))))\n dlls = map(tidy, filter(discriminant, os.listdir(path('DLLs'))))\n\n yielded = []\n yield from yielded\n yield from unique(sys.builtin_module_names, yielded)\n yield from unique(lib, yielded)\n yield from unique(dlls, yielded)\n\n", "All the commands like dir(builtins) or help(builtins) or sys.builtin_module_names, are either missing several core inbuilt package names or giving thousands of lines of verbose output.\n>>> help('modules')\nThe best command to show all the installed modules is help('modules'). It gives you name of all the installed modules in python without missing anything.\nIncase you need names of inbuilt modules only, create a temporary fresh venev and give the command there. 
It might be bit slow but is the only way by far in my experience to list every single package.\n", "In Linux, you can get a dump of all the built-in modules that came with the fresh install with the following commands\nsudo apt-get install python3-virtualenv\n\ntmpDir=`mktemp -d`\n\npython3 -m virtualenv $tmpDir\nsource /tmp/virtualenv/bin/activate\npython\n\nhelp('modules')\n\nExample Execution\nHere's an example execution of the above commands in Debian 11 with python 3.9\nuser@disp643:~$ sudo apt-get install python3-virtualenv\nReading package lists... Done\nBuilding dependency tree... Done\nReading state information... Done\npython3-virtualenv is already the newest version (20.4.0+ds-2+deb11u1).\nThe following packages were automatically installed and are no longer required:\n ethtool libbotan-2-17 libtspi1 linux-image-5.10.0-10-amd64\n linux-image-5.10.0-13-amd64 linux-image-5.10.0-14-amd64\n linux-image-5.10.0-15-amd64 linux-image-5.10.0-16-amd64\n linux-image-5.10.0-17-amd64 net-tools sse3-support\nUse 'sudo apt autoremove' to remove them.\n0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.\nuser@disp643:~$ \n\nuser@disp643:~$ tmpDir=`mktemp -d`\nuser@disp643:~$ \n\nuser@disp643:~$ python3 -m virtualenv $tmpDir\ncreated virtual environment CPython3.9.2.final.0-64 in 120ms\n creator CPython3Posix(dest=/tmp/tmp.qQsKZHGZqk, clear=False, no_vcs_ignore=False, global=False)\n seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/user/.local/share/virtualenv)\n added seed packages: pip==20.3.4, pkg_resources==0.0.0, setuptools==44.1.1, wheel==0.34.2\n activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator\nuser@disp643:~$ source /tmp/virtualenv/bin/activate\n(virtualenv) user@disp643:~$ python\nPython 3.9.2 (default, Feb 28 2021, 17:03:44) \n[GCC 10.2.1 20210110] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> help('modules')\n\nPlease wait a moment while I gather a list of all available modules...\n\n__future__ _tracemalloc graphlib retrying\n_abc _uuid grp rlcompleter\n_aix_support _virtualenv gzip runpy\n_ast _warnings hashlib sched\n_asyncio _weakref heapq secrets\n_bisect _weakrefset hmac select\n_blake2 _xxsubinterpreters html selectors\n_bootlocale _xxtestfuzz html5lib setuptools\n_bootsubprocess _zoneinfo http shelve\n_bz2 abc idna shlex\n_codecs aifc imaplib shutil\n_codecs_cn antigravity imghdr signal\n_codecs_hk appdirs imp site\n_codecs_iso2022 argparse importlib sitecustomize\n_codecs_jp array inspect smtpd\n_codecs_kr ast io smtplib\n_codecs_tw asynchat ipaddr sndhdr\n_collections asyncio ipaddress socket\n_collections_abc asyncore itertools socketserver\n_compat_pickle atexit json spwd\n_compression audioop keyword sqlite3\n_contextvars base64 lib2to3 sre_compile\n_crypt bdb linecache sre_constants\n_csv binascii locale sre_parse\n_ctypes binhex logging ssl\n_ctypes_test bisect lzma stat\n_curses builtins mailbox statistics\n_curses_panel bz2 mailcap string\n_datetime cProfile marshal stringprep\n_dbm calendar math struct\n_decimal certifi mimetypes subprocess\n_elementtree cgi mmap sunau\n_functools cgitb modulefinder symbol\n_hashlib chunk msgpack symtable\n_heapq cmath multiprocessing sys\n_imp cmd netrc sysconfig\n_io code nis syslog\n_json codecs nntplib tabnanny\n_locale codeop ntpath tarfile\n_lsprof collections nturl2path telnetlib\n_lzma colorama numbers tempfile\n_markupbase colorsys opcode termios\n_md5 
compileall operator test\n_multibytecodec concurrent optparse textwrap\n_multiprocessing configparser os this\n_opcode contextlib ossaudiodev threading\n_operator contextlib2 packaging time\n_osx_support contextvars parser timeit\n_peg_parser copy pathlib tkinter\n_pickle copyreg pdb token\n_posixshmem crypt pep517 tokenize\n_posixsubprocess csv pickle toml\n_py_abc ctypes pickletools trace\n_pydecimal curses pip traceback\n_pyio dataclasses pipes tracemalloc\n_queue datetime pkg_resources tty\n_random dbm pkgutil turtle\n_sha1 decimal platform types\n_sha256 difflib plistlib typing\n_sha3 dis poplib unicodedata\n_sha512 distlib posix unittest\n_signal distutils posixpath urllib\n_sitebuiltins doctest pprint urllib3\n_socket easy_install profile uu\n_sqlite3 email progress uuid\n_sre encodings pstats venv\n_ssl enum pty warnings\n_stat errno pwd wave\n_statistics faulthandler py_compile weakref\n_string fcntl pyclbr webbrowser\n_strptime filecmp pydoc wheel\n_struct fileinput pydoc_data wsgiref\n_symtable fnmatch pyexpat xdrlib\n_sysconfigdata__linux_x86_64-linux-gnu formatter pyparsing xml\n_sysconfigdata__x86_64-linux-gnu fractions queue xmlrpc\n_testbuffer ftplib quopri xxlimited\n_testcapi functools random xxsubtype\n_testimportmultiple gc re zipapp\n_testinternalcapi genericpath readline zipfile\n_testmultiphase getopt reprlib zipimport\n_thread getpass requests zlib\n_threading_local gettext resolvelib zoneinfo\n_tkinter glob resource \n\nEnter any module name to get more help. Or, type \"modules spam\" to search\nfor modules whose name or summary contain the string \"spam\".\n\n>>> \n\n" ]
[ 56, 24, 15, 7, 5, 2, 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0008370206_python.txt
Q: how to parse "Id"= "name" with python I'm trying to scrap data from a website which has the below element data. I need the Name, Roll No, Marks and Result status. I'm using this code student_data = soup.findAll('div', attrs= {'class':'fontLight'}) and for store in student_data: name = store.h5.name.text i get error 'NoneType' object has no attribute 'name' However. If i try for store in student_data: name = store.h5.text I get only the result status which is the last string. 'Eligible for Admission, And if I append this for store in student_data: name = store.h5.text candidate_name.append(name) I get all the data together which I'm unable seperate. ['muhammad bilal', '346010', '193', 'Eligible for Admission'] Is there any way I can get these details separate? please [<div class="row"> <div class="col-sm-2"> </div> <div class="col-sm-8"> <h3 class="fontBold">Results 2022 - Result Details</h3> <img src="/Documents/Others/Rects.png" style="margin-left:-30px;margin-top:-10px;height:30px;width:420px"/> </div> <div class="col-sm-2"> </div> </div>, <div class="row"> <div class="col-sm-2"> </div> <div class="col-sm-5"> <h3 class="fontMedium" style="color:#29a2d8">Student Details</h3> <div style="display: flex;"> <div class="fontBold" style="flex: 1;"> <h5>Name</h5> </div> <div class="fontLight" style="flex: 1;"> <h5 id="name">muhammad bilal</h5> </div> </div> <div style="display: flex;"> <div class="fontBold" style="flex: 1;"> <h5> Student Rollno </h5> </div> <div class="fontLight" style="flex: 1;"> <h5 id="rollno">346010</h5> </div> </div> </div> <div class="col-sm-3"> <h3 class="fontMedium" style="color:#29a2d8">Result</h3> <div style="display: flex;"> <div class="fontBold" style="flex: 1.3;"> <h5>Obtained Marks</h5> </div> <div class="fontLight" style="flex: 1;"> <h5 class="fontMedium" id="omarks">193</h5> </div> </div> <div style="display: flex;"> <div class="fontBold" style="flex: 1.3;"> <h5>Result Status</h5> </div> <div class="fontLight" style="flex: 1;"> <h5 id="" style="color:green"><b>Eligible for Admission</b></h5> </div> A: You can check on the id and value like this: for store in student_data: id = store.h5.attrs['id'] value = store.h5.text print(id, value) Then you can use if statements to check the id's. For example, for store in student_data: id = store.h5.attrs['id'] value = store.h5.text if id == "name": print(value) Does this answer your question?
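For reference, here is a minimal sketch of the id-based lookup suggested above; the html_text string is a trimmed stand-in for the page in the question, and the selector for the status line is an assumption based on it being the only h5 wrapped in a b tag:

from bs4 import BeautifulSoup

html_text = """
<div class="fontLight"><h5 id="name">muhammad bilal</h5></div>
<div class="fontLight"><h5 id="rollno">346010</h5></div>
<div class="fontLight"><h5 class="fontMedium" id="omarks">193</h5></div>
<div class="fontLight"><h5 id="" style="color:green"><b>Eligible for Admission</b></h5></div>
"""

soup = BeautifulSoup(html_text, "html.parser")

name = soup.find("h5", id="name").get_text(strip=True)
rollno = soup.find("h5", id="rollno").get_text(strip=True)
marks = soup.find("h5", id="omarks").get_text(strip=True)
# the result-status <h5> has an empty id, so grab it through its <b> child instead
status = soup.select_one("div.fontLight h5 b").get_text(strip=True)

print(name, rollno, marks, status)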
how to parse "Id"= "name" with python
I'm trying to scrap data from a website which has the below element data. I need the Name, Roll No, Marks and Result status. I'm using this code student_data = soup.findAll('div', attrs= {'class':'fontLight'}) and for store in student_data: name = store.h5.name.text i get error 'NoneType' object has no attribute 'name' However. If i try for store in student_data: name = store.h5.text I get only the result status which is the last string. 'Eligible for Admission, And if I append this for store in student_data: name = store.h5.text candidate_name.append(name) I get all the data together which I'm unable seperate. ['muhammad bilal', '346010', '193', 'Eligible for Admission'] Is there any way I can get these details separate? please [<div class="row"> <div class="col-sm-2"> </div> <div class="col-sm-8"> <h3 class="fontBold">Results 2022 - Result Details</h3> <img src="/Documents/Others/Rects.png" style="margin-left:-30px;margin-top:-10px;height:30px;width:420px"/> </div> <div class="col-sm-2"> </div> </div>, <div class="row"> <div class="col-sm-2"> </div> <div class="col-sm-5"> <h3 class="fontMedium" style="color:#29a2d8">Student Details</h3> <div style="display: flex;"> <div class="fontBold" style="flex: 1;"> <h5>Name</h5> </div> <div class="fontLight" style="flex: 1;"> <h5 id="name">muhammad bilal</h5> </div> </div> <div style="display: flex;"> <div class="fontBold" style="flex: 1;"> <h5> Student Rollno </h5> </div> <div class="fontLight" style="flex: 1;"> <h5 id="rollno">346010</h5> </div> </div> </div> <div class="col-sm-3"> <h3 class="fontMedium" style="color:#29a2d8">Result</h3> <div style="display: flex;"> <div class="fontBold" style="flex: 1.3;"> <h5>Obtained Marks</h5> </div> <div class="fontLight" style="flex: 1;"> <h5 class="fontMedium" id="omarks">193</h5> </div> </div> <div style="display: flex;"> <div class="fontBold" style="flex: 1.3;"> <h5>Result Status</h5> </div> <div class="fontLight" style="flex: 1;"> <h5 id="" style="color:green"><b>Eligible for Admission</b></h5> </div>
[ "You can check on the id and value like this:\nfor store in student_data:\n id = store.h5.attrs['id']\n value = store.h5.text\n print(id, value)\n\nThen you can use if statements to check the id's. For example,\nfor store in student_data:\n id = store.h5.attrs['id']\n value = store.h5.text\n if id == \"name\":\n print(value)\n\nDoes this answer your question?\n" ]
[ 1 ]
[]
[]
[ "element", "html", "parsing", "python", "web_scraping" ]
stackoverflow_0074607489_element_html_parsing_python_web_scraping.txt
Q: Functions code running, but dont know how to call a certain function This code is running, but not properly analyzing budget, prompting, or returning values correctly. I am having trouble writing the correct functions. When I run it, it prompts the user for their budget, and amount spent, then does nothing month = 0 months = 0 def DescribeProgram(): print("""\ This program uses a for loop to monitor your budget. The program will prompt you to enter your budget, and amount spent for a certain month and calculate if your were under or over budget. You will have the option of choosing how many months you would like to monitor.\n""") def GetMonths(): Months = input("Enter the number of months you want to analyze") return Months def GetMonthBudgetandSpent(month): Mobudget = input("Enter the budget you have for the month") MoSpent = input("Enter the amount you spent this month") def AnalyzeBudget(months): for month in range(1,months+1): print("\nMonth",month,":") print("=======") MoBudget,MoSpent = GetMonthBudgetandSpent(month) def main(): DescribeProgram() months = GetMonths() AnalyzeBudget(months) main() A: def DescribeProgram(): print("""\ This program uses a for loop to monitor your budget. The program will prompt you to enter your budget, and amount spent for a certain month and calculate if your were under or over budget. You will have the option of choosing how many months you would like to monitor.\n""") def GetMonths(): Months = int(input("Enter the number of months you want to analyze")) return Months def GetMonthBudgetandSpent(month): Mobudget = input("Enter the budget you have for the month") MoSpent = input("Enter the amount you spent this month") return Mobudget, MoSpent def AnalyzeBudget(months): for month in range(1,months+1): print(f"\nMonth {month}:") print("=======") MoBudget,MoSpent = GetMonthBudgetandSpent(month) def main(): DescribeProgram() months = GetMonths() AnalyzeBudget(months) main() Only if MoBudget and MoSpent is at this level of indentation will the code work as of what I felt the code is for which is: - Display what this program does - Get no. of months to analyse budget - For each month, take user input of budget and amount spend You've not written any code to analyse the budget?
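To make the missing analysis step concrete, here is a hedged sketch; the float conversion and the exact over/under wording are assumptions, since the question does not show the intended output:

def get_month_budget_and_spent(month):
    budget = float(input(f"Month {month}: enter the budget for the month: "))
    spent = float(input(f"Month {month}: enter the amount you spent: "))
    return budget, spent

def analyze_budget(months):
    for month in range(1, months + 1):
        budget, spent = get_month_budget_and_spent(month)
        diff = budget - spent
        if diff >= 0:
            print(f"Month {month}: under budget by {diff:.2f}")
        else:
            print(f"Month {month}: over budget by {-diff:.2f}")

def main():
    # input() returns a string, so convert it before using it in range()
    months = int(input("Enter the number of months you want to analyze: "))
    analyze_budget(months)

main()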
Functions code running, but don't know how to call a certain function
This code is running, but not properly analyzing budget, prompting, or returning values correctly. I am having trouble writing the correct functions. When I run it, it prompts the user for their budget, and amount spent, then does nothing month = 0 months = 0 def DescribeProgram(): print("""\ This program uses a for loop to monitor your budget. The program will prompt you to enter your budget, and amount spent for a certain month and calculate if your were under or over budget. You will have the option of choosing how many months you would like to monitor.\n""") def GetMonths(): Months = input("Enter the number of months you want to analyze") return Months def GetMonthBudgetandSpent(month): Mobudget = input("Enter the budget you have for the month") MoSpent = input("Enter the amount you spent this month") def AnalyzeBudget(months): for month in range(1,months+1): print("\nMonth",month,":") print("=======") MoBudget,MoSpent = GetMonthBudgetandSpent(month) def main(): DescribeProgram() months = GetMonths() AnalyzeBudget(months) main()
[ "def DescribeProgram():\n \n print(\"\"\"\\\nThis program uses a for loop to monitor your budget.\nThe program will prompt you to enter your budget, and amount spent\nfor a certain month and calculate if your were under or over budget.\nYou will have the option of choosing how many months you would like to\nmonitor.\\n\"\"\")\n\n\ndef GetMonths():\n Months = int(input(\"Enter the number of months you want to analyze\")) \n return Months\n\ndef GetMonthBudgetandSpent(month):\n Mobudget = input(\"Enter the budget you have for the month\")\n MoSpent = input(\"Enter the amount you spent this month\")\n return Mobudget, MoSpent\n\ndef AnalyzeBudget(months):\n for month in range(1,months+1):\n print(f\"\\nMonth {month}:\")\n print(\"=======\")\n MoBudget,MoSpent = GetMonthBudgetandSpent(month)\n\ndef main():\n DescribeProgram()\n months = GetMonths()\n AnalyzeBudget(months)\n\nmain()\n\nOnly if MoBudget and MoSpent is at this level of indentation will the code work as of what I felt the code is for which is:\n- Display what this program does\n- Get no. of months to analyse budget\n- For each month, take user input of budget and amount spend\n\nYou've not written any code to analyse the budget?\n" ]
[ 0 ]
[]
[]
[ "for_loop", "function", "loops", "python", "while_loop" ]
stackoverflow_0074607852_for_loop_function_loops_python_while_loop.txt
Q: How to create a list containing an arithmetic progression? Here's an example of what I'm trying to achieve: What I'm tring to do is make the sum of a starting number X, and sum it by Y, and with each sum, add the numbers to a previously empty list: lst = [] i = -0.5 tot = 0.025 while i <= 100: tot = tot + i i = i + 1 a = tot print("value: ",tot) print(a) lst.append(a) print(lst) Though I'm unable to keep them as individual numbers, and they just get clumped together. A: This is really about applying an incrementing multiplier to Y, so it is more suitably implemented by iterating over a range of multipliers. To produce 4 items, for example: i = -0.5 tot = 0.025 lst = [i + tot * m for m in range(4)]
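Two equivalent ways to build the progression, shown side by side as a sketch; the start and step come from the question, while the count of 4 terms is just an assumption for illustration:

import numpy as np

start, step, count = -0.5, 0.025, 4

# plain-Python loop, appending one term at a time
lst = []
value = start
for _ in range(count):
    lst.append(value)
    value += step
print(lst)   # -0.5, -0.475, -0.45, -0.425 (up to float rounding)

# the same progression with numpy
arr = start + step * np.arange(count)
print(arr)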
How to create a list containing an arithmetic progression?
Here's an example of what I'm trying to achieve: what I'm trying to do is take a starting number X, repeatedly add Y to it, and with each addition append the result to an initially empty list: lst = [] i = -0.5 tot = 0.025 while i <= 100: tot = tot + i i = i + 1 a = tot print("value: ",tot) print(a) lst.append(a) print(lst) Though I'm unable to keep them as individual numbers, and they just get clumped together.
[ "This is really about applying an incrementing multiplier to Y, so it is more suitably implemented by iterating over a range of multipliers.\nTo produce 4 items, for example:\ni = -0.5\ntot = 0.025\nlst = [i + tot * m for m in range(4)]\n\n" ]
[ 0 ]
[]
[]
[ "append", "list", "python" ]
stackoverflow_0074608361_append_list_python.txt
Q: How to listen event with Python threading I'm trying to implement a multi thread program in python and am having troubles. I try to design a program, when the program (main thread) receives a specific command, The counter of the program will return to the previous number and continue counting down. The following is the code I tried to write: import threading import time count = 0 preEvent = threading.Event() runEvent = threading.Event() runEvent.set() preEvent.clear() def pre(): global count while True: if event.isSet(): count -= 1 event.clear() def doSomething(): global count while True: # if not runEvent.isSet(): # runEvent.wait() print(count) count += 1 time.sleep(1) def main(): t1 = threading.Thread(target=pre, args=()) t2 = threading.Thread(target=doSomething, args=()) t1.start() t2.start() command = input("input command") while command != 'quit': command = input("input command") if command == 'pre': preEvent.set() main() But I encountered a few problems How to block t1 simultaneously while I inputting a specific command How to start from the beginning when t1 is restored instead of starting from the blocked point Regarding question 1, I tried adding a condition check before the print(count) command, but if I enter the "pre" command when the program outputs count, the program will still perform count+=1 After that, the program will go back to check the conditions and block t1, but this does not achieve the synchronization effect I wanted. Is there a way to achieve this goal? Question 2 is similar to question 1. I hope that whenever I enter the command, my program will output like this. 1 2 3 4 pre 3 4 ... But if t1 is blocked after finishing the print(count) instruction, when the event is cleared, t1 will continue to execute from count+=1 So the output will become the following 1 2 3 4 pre 4 5 ... I tried to find information on the Internet, but I never knew how to add keywords. Is there a method or library that can achieve this function? I have tried my best to describe my problem, but it may not be good enough. If I have any questions about my problem, I can add more explanation. Thank you all for your patience to read my question A: The threading module has a wait method that will block until notify or notify_all is called. This should accomplish what you're looking for in your first question. For question 2 you can either define a function that handles the exit case, or just recreate a thread to start form the beginning. A: Simply it can work like this, maybe helps. from threading import Thread, Event count = 0 counter = Event() def pre(): global count counter.wait() count -= 1 print(count) def main(): global count print(count) while True: command = input("Enter 1 to increase, Enter 2 to decrease :") if command == "1": t1 = Thread(target=pre, args=()) t1.start() counter.set() else : count += 1 print(count) main()
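Here is a hedged end-to-end sketch of the Event pattern from the answers above; the count -= 2 step (so the next print shows the previous number again) and the quit handling are assumptions about the behaviour the question describes:

import threading
import time

count = 0
pre_event = threading.Event()
stop_event = threading.Event()

def worker():
    global count
    while not stop_event.is_set():
        if pre_event.is_set():
            count -= 2        # step back so the next print repeats the previous number
            pre_event.clear()
        print(count)
        count += 1
        time.sleep(1)

t = threading.Thread(target=worker, daemon=True)
t.start()

while True:
    command = input("input command: ")
    if command == "pre":
        pre_event.set()
    elif command == "quit":
        stop_event.set()
        break

t.join()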
How to listen for an event with Python threading
I'm trying to implement a multi thread program in python and am having troubles. I try to design a program, when the program (main thread) receives a specific command, The counter of the program will return to the previous number and continue counting down. The following is the code I tried to write: import threading import time count = 0 preEvent = threading.Event() runEvent = threading.Event() runEvent.set() preEvent.clear() def pre(): global count while True: if event.isSet(): count -= 1 event.clear() def doSomething(): global count while True: # if not runEvent.isSet(): # runEvent.wait() print(count) count += 1 time.sleep(1) def main(): t1 = threading.Thread(target=pre, args=()) t2 = threading.Thread(target=doSomething, args=()) t1.start() t2.start() command = input("input command") while command != 'quit': command = input("input command") if command == 'pre': preEvent.set() main() But I encountered a few problems How to block t1 simultaneously while I inputting a specific command How to start from the beginning when t1 is restored instead of starting from the blocked point Regarding question 1, I tried adding a condition check before the print(count) command, but if I enter the "pre" command when the program outputs count, the program will still perform count+=1 After that, the program will go back to check the conditions and block t1, but this does not achieve the synchronization effect I wanted. Is there a way to achieve this goal? Question 2 is similar to question 1. I hope that whenever I enter the command, my program will output like this. 1 2 3 4 pre 3 4 ... But if t1 is blocked after finishing the print(count) instruction, when the event is cleared, t1 will continue to execute from count+=1 So the output will become the following 1 2 3 4 pre 4 5 ... I tried to find information on the Internet, but I never knew how to add keywords. Is there a method or library that can achieve this function? I have tried my best to describe my problem, but it may not be good enough. If I have any questions about my problem, I can add more explanation. Thank you all for your patience to read my question
[ "The threading module has a wait method that will block until notify or notify_all is called. This should accomplish what you're looking for in your first question. For question 2 you can either define a function that handles the exit case, or just recreate a thread to start form the beginning.\n", "Simply it can work like this, maybe helps.\nfrom threading import Thread, Event\n\ncount = 0\ncounter = Event()\n\ndef pre():\n\n global count\n counter.wait()\n count -= 1\n print(count)\n\ndef main():\n\n global count\n print(count)\n while True:\n command = input(\"Enter 1 to increase, Enter 2 to decrease :\")\n if command == \"1\":\n t1 = Thread(target=pre, args=())\n t1.start()\n counter.set()\n else :\n count += 1\n print(count)\n\n\nmain()\n\n" ]
[ 0, 0 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0064474965_multithreading_python.txt
Q: Split strings from nested list by delimiter I'm trying to go from a pandas dataframe series of which the elements are strings with the following format: 'x ± a' 'y ± b' 'z ± c' I'd like to extract the numbers x,y,z from this series into a numpy array: X = np.array([x,y,z]) How can I do this? I tried to turn the series into a nested list, but then I am stuck on how to apply the .split('±')[0] method to this nested list. A: You can use: a = pd.to_numeric(df['col'].str.split('±').str[0], errors='coerce').to_numpy() Or (more efficient): a = pd.to_numeric(df['col'].str.extract('(.*)\s*±', expand=False), errors='coerce').to_numpy() Example output: array([10. , 20. , 33.3]) Used input: df = pd.DataFrame({'col': ['10 ± 1', '20 ± 2', '33.3 ± 3']})
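If the uncertainty part is needed as well, a sketch along these lines splits both sides of the ± into separate numeric arrays (the column name 'col' and the sample data are assumptions, as above):

import pandas as pd

df = pd.DataFrame({'col': ['10 ± 1', '20 ± 2', '33.3 ± 3']})

parts = df['col'].str.split('±', expand=True)
values = parts[0].astype(float).to_numpy()
errors = parts[1].astype(float).to_numpy()

print(values)   # [10.  20.  33.3]
print(errors)   # [1. 2. 3.]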
Split strings from nested list by delimiter
I'm trying to go from a pandas dataframe series of which the elements are strings with the following format: 'x ± a' 'y ± b' 'z ± c' I'd like to extract the numbers x,y,z from this series into a numpy array: X = np.array([x,y,z]) How can I do this? I tried to turn the series into a nested list, but then I am stuck on how to apply the .split('±')[0] method to this nested list.
[ "You can use:\na = pd.to_numeric(df['col'].str.split('±').str[0], errors='coerce').to_numpy()\n\nOr (more efficient):\na = pd.to_numeric(df['col'].str.extract('(.*)\\s*±', expand=False), errors='coerce').to_numpy()\n\nExample output:\narray([10. , 20. , 33.3])\n\nUsed input:\ndf = pd.DataFrame({'col': ['10 ± 1', '20 ± 2', '33.3 ± 3']})\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074608353_pandas_python.txt
Q: \django_school\\manage.py': [Errno 2] No such file or directory I downloaded pip and django with python -m pip install django, I also downloaded crycrs_forms but when I click to python manage.py runserver it crashes and doesn't run: How can I fix it? A: For better location of packages and files and better management, you should use virtual environments. First create a folder (Django) and open it in vscode. Then use the following command in the terminal to create a new virtual environment (.venv) python -m venv .venv After the command is executed, select the virtual environment interpreter in the select interpreter panel Create a new terminal activation environment Install django using the command in the new terminal python -m pip install django Create a Django project django-admin startproject web_project . Create an empty development database python manage.py migrate To verify the Django project, make sure your virtual environment is activated, then start Django's development server using the command python manage.py runserver
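The [Errno 2] message simply means Python cannot find manage.py in the directory the command is run from. A tiny diagnostic sketch (manage.py is the standard file created by startproject) makes that easy to confirm before retrying the steps above:

import os

print("current directory:", os.getcwd())
print("manage.py here?   ", os.path.exists("manage.py"))
# if this prints False, cd into the folder that startproject created before
# running `python manage.py runserver` again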
\django_school\\manage.py': [Errno 2] No such file or directory
I downloaded pip and Django with python -m pip install django, and I also downloaded crycrs_forms, but when I run python manage.py runserver it crashes and doesn't run. How can I fix it?
[ "For better location of packages and files and better management, you should use virtual environments.\n\nFirst create a folder (Django) and open it in vscode.\n\nThen use the following command in the terminal to create a new virtual environment (.venv)\npython -m venv .venv\n\n\n\nAfter the command is executed, select the virtual environment interpreter in the select interpreter panel\n\n\nCreate a new terminal activation environment\n\n\nInstall django using the command in the new terminal\npython -m pip install django\n\n\n\nCreate a Django project\ndjango-admin startproject web_project .\n\n\n\nCreate an empty development database\npython manage.py migrate\n\n\n\nTo verify the Django project, make sure your virtual environment is activated, then start Django's development server using the command\npython manage.py runserver\n\n\n\n\n" ]
[ 0 ]
[]
[]
[ "django", "python", "visual_studio_code" ]
stackoverflow_0074601110_django_python_visual_studio_code.txt
Q: Writing a for loop in Python checking for divisibility I am trying to write a program in Python with turtle graphics to create a game. The game is built on blocks hovering in the air, where there is space between some of these blocks. The goal is to move a ball from the left side to the right without falling through the gaps. I am experiencing some trouble with writing a for loop, that's counting from 0-20 (the amount of blocks) and if the number in the loop is divisible by 2 or 5 it will draw a block. (If not there will be a gap between the previous and next block) Here is my code for the for loop: def draw_blocks(): for x in range(0, 21): if x % 5 == 0 or x % 2 == 0: draw_square(0, 0) #starting index draw_square(square_size, 0) draw_square(square_size * 2, 0) draw_square(square_size * 3, 0) draw_square(square_size * 4, 0) draw_square(square_size * 5, 0) draw_square(square_size * 6, 0) draw_square(square_size * 7, 0) draw_square(square_size * 8, 0) draw_square(square_size * 9, 0) draw_square(square_size * 10, 0) draw_square(square_size * 11, 0) draw_square(square_size * 12, 0) draw_square(square_size * 13, 0) draw_square(square_size * 14, 0) draw_square(square_size * 15, 0) draw_square(square_size * 16, 0) draw_square(square_size * 17, 0) draw_square(square_size * 18, 0) draw_square(square_size * 19, 0) draw_square(square_size * 20, 0) The draw_square function is defined as below: def draw_square(x, y): turtle.tracer(0) turtle.speed(0) turtle.hideturtle() turtle.fillcolor("pink") turtle.penup() turtle.goto(x, y) turtle.pendown() turtle.begin_fill() turtle.goto(x+square_size, y) turtle.goto(x+square_size, y-square_size) turtle.goto(x, y-square_size) turtle.goto(x, y) turtle.end_fill() (Where square_size is a global variable with the value = 40.) I'm thinking there is some issue with not defining each of draw_square statements in the for loop? Because right now I am only getting 20 blocks next to each other without any gaps. Thankful for any help :) A: It looks like you created draw_blocks incorrectly for what you are trying to do. If you want it to only draw blocks when x is divisible by 2 or 5, you need to draw the block at the position of x times square_size. I've modified your draw_blocks function below, which fixes the issue: def draw_blocks(): for x in range(0, 21): if x % 5 == 0 or x % 2 == 0: draw_square(square_size*x, 0)
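A slightly extended sketch of the same loop that also remembers the x-ranges of the gaps, which the ball-movement code will eventually need; square_size = 40 as in the question, and the fill colour handling is simplified:

import turtle

square_size = 40

def draw_square(x, y):
    turtle.penup()
    turtle.goto(x, y)
    turtle.pendown()
    turtle.begin_fill()
    for dx, dy in ((square_size, 0), (0, -square_size), (-square_size, 0), (0, square_size)):
        turtle.goto(turtle.xcor() + dx, turtle.ycor() + dy)
    turtle.end_fill()

gaps = []
for x in range(21):
    if x % 5 == 0 or x % 2 == 0:
        draw_square(x * square_size, 0)
    else:
        gaps.append((x * square_size, (x + 1) * square_size))

print(gaps)   # x-ranges with no block underneath
turtle.done()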
Writing a for loop in Python checking for divisibility
I am trying to write a program in Python with turtle graphics to create a game. The game is built on blocks hovering in the air, where there is space between some of these blocks. The goal is to move a ball from the left side to the right without falling through the gaps. I am experiencing some trouble with writing a for loop, that's counting from 0-20 (the amount of blocks) and if the number in the loop is divisible by 2 or 5 it will draw a block. (If not there will be a gap between the previous and next block) Here is my code for the for loop: def draw_blocks(): for x in range(0, 21): if x % 5 == 0 or x % 2 == 0: draw_square(0, 0) #starting index draw_square(square_size, 0) draw_square(square_size * 2, 0) draw_square(square_size * 3, 0) draw_square(square_size * 4, 0) draw_square(square_size * 5, 0) draw_square(square_size * 6, 0) draw_square(square_size * 7, 0) draw_square(square_size * 8, 0) draw_square(square_size * 9, 0) draw_square(square_size * 10, 0) draw_square(square_size * 11, 0) draw_square(square_size * 12, 0) draw_square(square_size * 13, 0) draw_square(square_size * 14, 0) draw_square(square_size * 15, 0) draw_square(square_size * 16, 0) draw_square(square_size * 17, 0) draw_square(square_size * 18, 0) draw_square(square_size * 19, 0) draw_square(square_size * 20, 0) The draw_square function is defined as below: def draw_square(x, y): turtle.tracer(0) turtle.speed(0) turtle.hideturtle() turtle.fillcolor("pink") turtle.penup() turtle.goto(x, y) turtle.pendown() turtle.begin_fill() turtle.goto(x+square_size, y) turtle.goto(x+square_size, y-square_size) turtle.goto(x, y-square_size) turtle.goto(x, y) turtle.end_fill() (Where square_size is a global variable with the value = 40.) I'm thinking there is some issue with not defining each of draw_square statements in the for loop? Because right now I am only getting 20 blocks next to each other without any gaps. Thankful for any help :)
[ "It looks like you created draw_blocks incorrectly for what you are trying to do.\nIf you want it to only draw blocks when x is divisible by 2 or 5, you need to draw the block at the position of x times square_size. I've modified your draw_blocks function below, which fixes the issue:\ndef draw_blocks():\n for x in range(0, 21):\n if x % 5 == 0 or x % 2 == 0:\n draw_square(square_size*x, 0)\n\n" ]
[ 0 ]
[]
[]
[ "for_loop", "python" ]
stackoverflow_0074606508_for_loop_python.txt
Q: comparing two sets and finding equation about them python I have two sets list1 = {1,2,3,4,5,6,7,8,9,10} list2 = {10,20,30,40,50,60,70,80,90,100} I want python to see if there is a relation between each number and if the relation is the same for each number(for this example it would be the same for each number and the relation is *10)or if it is not it would print that they do not have a relation A: If you use sets, you cannot define a 1-to-1 relationship, as those are unordered. If you have lists, you could use: list1 = [1,2,3,4,5,6,7,8,9,10] list2 = [10,20,30,40,50,60,70,80,90,100] ratios = [b/a for a,b in zip(list1, list2)] Output: [10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0] Or using a set comprehension: S = {round(b/a, 3) for a,b in zip(list1, list2)} # {10.0} # check there is only one possibility if len(S) != 1: print('there is not a unique ratio')
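Wrapping the same idea in a small function gives the "relation or no relation" behaviour the question asks for; note this works on lists, since sets are unordered and cannot pair elements one-to-one:

def common_ratio(xs, ys):
    ratios = {round(b / a, 9) for a, b in zip(xs, ys)}
    return ratios.pop() if len(ratios) == 1 else None

list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
list2 = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

ratio = common_ratio(list1, list2)
if ratio is None:
    print("they do not have a relation")
else:
    print(f"every element of list2 is list1 * {ratio}")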
Comparing two sets and finding the relation between them in Python
I have two sets list1 = {1,2,3,4,5,6,7,8,9,10} list2 = {10,20,30,40,50,60,70,80,90,100} I want Python to check if there is a relation between each pair of numbers and whether the relation is the same for every pair (for this example it is the same for each pair and the relation is *10); if it is not, it should print that they do not have a relation.
[ "If you use sets, you cannot define a 1-to-1 relationship, as those are unordered.\nIf you have lists, you could use:\nlist1 = [1,2,3,4,5,6,7,8,9,10]\nlist2 = [10,20,30,40,50,60,70,80,90,100]\n\nratios = [b/a for a,b in zip(list1, list2)]\n\nOutput: [10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0]\nOr using a set comprehension:\nS = {round(b/a, 3) for a,b in zip(list1, list2)}\n# {10.0}\n\n# check there is only one possibility\nif len(S) != 1:\n print('there is not a unique ratio')\n\n" ]
[ 0 ]
[]
[]
[ "compare", "list", "python" ]
stackoverflow_0074608447_compare_list_python.txt
Q: How can I create four columns from this 2d list I'm new into python and pandas and I'm having hard time transforming this 2d list into four independent columns, I'm getting two columns with more than one data for every record. May be I created it the wrong way? I don't know. Please help me out, I'm trying to make results look like this: These are the columns that I'm looking for And This is what I have done THANK YOU VERY MUCH!! A: import pandas as pd results = [[('three', 'beer'), ('zero', 'wine')], [('one', 'beer'), ('two', 'wine')]] Player1Qty = [] Player1type = [] Player2Qty = [] Player2type = [] for tmp in results: Player1Qty.append(tmp[0][0]) Player1type.append(tmp[0][1]) Player2Qty.append(tmp[1][0]) Player2type.append(tmp[1][1]) pd.DataFrame({'Player1Qty': Player1Qty, 'Player1type': Player1type, 'Player2Qty': Player2Qty, 'Player2type': Player2type}) A: try this: import numpy as np import pandas as pd data = [ [('one', 'apple'), ('three', 'orange') ], [('two', 'pear'), ('four', 'banana') ] ] df = pd.DataFrame(data) out = pd.DataFrame([*df.apply(np.hstack, axis=1)]) cols = [f'Player{i//2 + 1}{"Qty" if i % 2 else "type"}' for i in out.columns] out.columns = cols print(out) >>> Player1type Player1Qty Player2type Player2Qty 0 one apple three orange 1 two pear four banana
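A compact alternative sketch: flatten each row of tuples and let the DataFrame constructor do the rest (the sample data mirrors the answers above, and the column names are assumptions):

import pandas as pd

results = [[('three', 'beer'), ('zero', 'wine')], [('one', 'beer'), ('two', 'wine')]]

flat = [tuple(item for pair in row for item in pair) for row in results]
df = pd.DataFrame(flat, columns=['Player1Qty', 'Player1Type', 'Player2Qty', 'Player2Type'])
print(df)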
How can I create four columns from this 2d list
I'm new to Python and pandas and I'm having a hard time transforming this 2d list into four independent columns; I'm getting two columns with more than one value for every record. Maybe I created it the wrong way? I don't know. Please help me out. I'm trying to make the results look like this: these are the columns that I'm looking for, and this is what I have done. Thank you very much!!
[ "import pandas as pd\nresults = [[('three', 'beer'), ('zero', 'wine')], [('one', 'beer'), ('two', 'wine')]]\nPlayer1Qty = []\nPlayer1type = []\nPlayer2Qty = []\nPlayer2type = []\nfor tmp in results:\n Player1Qty.append(tmp[0][0])\n Player1type.append(tmp[0][1])\n Player2Qty.append(tmp[1][0])\n Player2type.append(tmp[1][1])\npd.DataFrame({'Player1Qty': Player1Qty, 'Player1type': Player1type, 'Player2Qty': Player2Qty, 'Player2type': Player2type})\n\n", "try this:\nimport numpy as np\nimport pandas as pd\n\n\ndata = [\n [('one', 'apple'), ('three', 'orange') ],\n [('two', 'pear'), ('four', 'banana') ]\n]\ndf = pd.DataFrame(data)\nout = pd.DataFrame([*df.apply(np.hstack, axis=1)])\ncols = [f'Player{i//2 + 1}{\"Qty\" if i % 2 else \"type\"}' for i in out.columns]\nout.columns = cols\nprint(out)\n>>>\n\n Player1type Player1Qty Player2type Player2Qty\n0 one apple three orange\n1 two pear four banana\n\n" ]
[ 0, 0 ]
[]
[]
[ "arraylist", "dataframe", "pandas", "python" ]
stackoverflow_0074608320_arraylist_dataframe_pandas_python.txt
Q: Stuck on making a RSS feed discord bot I'm new to this so please cut me some slack. I'm trying to get RSS feed news into my Discord guild but I hit a bump. I'm using import request to get the feed but I don't know how to have the bot output it in my channel. This is what I have in my bot file to run my bot: import hikari bot = hikari.GatewayBot(token='') bot.run() and this is what I have in another file to get the feed: import requests x = requests.get('http://feeds.marketwatch.com/marketwatch/topstories/') print(x.text) I'm just trying to combine this together or maybe have another way introduced to me. Thank you A: If you're trying to output to a specific channel, you need something like this: async def on_ready(): channel = client.get_channel(channel ID here) await channel.send(x.link) As for the x.link part, that comes with feedparser, which you can get with import feedparser. You would then get the feed with the following: x = feedparser.parse('rss link here') This is what I've been using so far, I hope it's of some help.
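For the feed half of the problem, a hedged feedparser sketch like this turns the RSS XML into a plain message string; sending that string to a channel is then done with whichever bot library is in use (hikari, discord.py, etc.):

import feedparser

feed = feedparser.parse("http://feeds.marketwatch.com/marketwatch/topstories/")

# take the first few entries and format them into one message
lines = [f"{entry.title} - {entry.link}" for entry in feed.entries[:5]]
message = "\n".join(lines)
print(message)   # this is the string the bot would post to a channel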
Stuck on making a RSS feed discord bot
I'm new to this so please cut me some slack. I'm trying to get RSS feed news into my Discord guild but I hit a bump. I'm using import requests to get the feed but I don't know how to have the bot output it in my channel. This is what I have in my bot file to run my bot: import hikari bot = hikari.GatewayBot(token='') bot.run() and this is what I have in another file to get the feed: import requests x = requests.get('http://feeds.marketwatch.com/marketwatch/topstories/') print(x.text) I'm just trying to combine these together or maybe have another way introduced to me. Thank you
[ "If you're trying to output to a specific channel, you need something like this:\nasync def on_ready():\n channel = client.get_channel(channel ID here)\n await channel.send(x.link)\n\nAs for the x.link part, that comes with feedparser, which you can get with import feedparser. You would then get the feed with the following:\nx = feedparser.parse('rss link here')\n\nThis is what I've been using so far, I hope it's of some help.\n" ]
[ 0 ]
[]
[]
[ "discord", "python", "rss" ]
stackoverflow_0072509695_discord_python_rss.txt
Q: subprocess popen + curl + binary data The following statement works as expected: os.system("curl --data-binary \@"+input_file_path+" -o "+ file_name +" localhost:30") But when trying it with subprocess.popen: Popen(['curl','--data-binary','\@'+input_file_path, '-o', file_name,'localhost:30'], stdout=PIPE).communicate()[0] Curl seems to hang up(logs into endless loop), like if the input file is not passed to it(which is mandatory for localhost:30 to function properly)... Any ideas? A: how about using a library instead of calling system's curl? A: You could try using the original string in subprocess.Popen with the additional keyword argument to Popen of shell=True: subprocess.Popen("curl --data-binary \@"+input_file_path+" -o "+ file_name +" localhost:30", stdout=subprocess.PIPE, shell=True) A: How about using requests library Python POST binary data Or yet another Check out this link for binary (image file) case How to download image using requests
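A requests-only sketch that replaces the curl call entirely; input_file_path, file_name and the localhost:30 endpoint come from the question, and the small payload file is created only so the example is self-contained:

import requests

input_file_path = "payload.bin"
file_name = "response.out"

# create a small file to upload, purely for the sake of the example
with open(input_file_path, "wb") as f:
    f.write(b"hello")

# equivalent of `curl --data-binary @file`: stream the raw bytes as the request body
with open(input_file_path, "rb") as f:
    resp = requests.post("http://localhost:30", data=f)

# equivalent of curl's -o option
with open(file_name, "wb") as out:
    out.write(resp.content)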
subprocess popen + curl + binary data
The following statement works as expected: os.system("curl --data-binary \@"+input_file_path+" -o "+ file_name +" localhost:30") But when trying it with subprocess.popen: Popen(['curl','--data-binary','\@'+input_file_path, '-o', file_name,'localhost:30'], stdout=PIPE).communicate()[0] Curl seems to hang up(logs into endless loop), like if the input file is not passed to it(which is mandatory for localhost:30 to function properly)... Any ideas?
[ "how about using a library instead of calling system's curl? \n", "You could try using the original string in subprocess.Popen with the additional keyword argument to Popen of shell=True:\nsubprocess.Popen(\"curl --data-binary \\@\"+input_file_path+\" -o \"+ file_name +\" localhost:30\",\n stdout=subprocess.PIPE,\n shell=True)\n\n", "How about using requests library\nPython POST binary data\nOr yet another\nCheck out this link for binary (image file) case\nHow to download image using requests\n" ]
[ 3, 2, 0 ]
[]
[]
[ "curl", "popen", "python" ]
stackoverflow_0002061420_curl_popen_python.txt
Q: How can I apply maths to a pandas dataframe comparing 2 specific row and column indexes I have this dataframe import pandas as pd import numpy as np np.random.seed(2022) # make example data close = np.sin(range(610)) + 10 high = close + np.random.rand(*close.shape) open = high - np.random.rand(*close.shape) low = high - 3 close[2] += 100 dates = pd.date_range(end='2022-06-30', periods=len(close)) # insert into pd.dataframe df = pd.DataFrame(index=dates, data=np.array([open, high, low, close]).T, columns=['Open', 'High', 'Low', 'Close']) print(df) Output Open High Low Close 2020-10-29 9.557631 10.009359 7.009359 10.000000 2020-10-30 10.794789 11.340529 8.340529 10.841471 2020-10-31 10.631242 11.022681 8.022681 110.909297 2020-11-01 9.639562 10.191094 7.191094 10.141120 2020-11-02 9.835697 9.928605 6.928605 9.243198 ... ... ... ... ... 2022-06-26 10.738942 11.167593 8.167593 10.970521 2022-06-27 10.031187 10.868859 7.868859 10.321565 2022-06-28 9.991932 10.271633 7.271633 9.376964 2022-06-29 9.069759 9.684232 6.684232 9.005179 2022-06-30 9.479291 10.300242 7.300242 9.548028 The goal here is to compare a specific value in the dataframe, to another value in the dataframe. Edit: I now know many different ways to achieve this however I have re-written the question so it is more clear for future readers what the original goal was. For example: Check when the value at 'open' column is less than the value at close column. One solution for this is using itertuples, I have written an answer below explaining the solution A: The first step you want to do can be done by df.loc["A", "High"] > df.loc["C", "Low"]. To apply this to all rows you could do something like below: for i in range(2, len(df)): print(df["High"][i-2] > df["Low"][i]) I'm sure there are better ways to do it, but this would work. A: you can use shift operation on column to shift the rows up/down `df['High'] > df['Low'].shift(-2)` To elaborate what's going on, run below commands df = pd.DataFrame(np.random.randn(5,4), list('ABCDE'), ['Open', 'High', 'Low', 'Close']) df['Low_shiftup'] = df['Low'].shift(-2) df.head() df['High'] > df['Low_shiftup'] A: As I explained in the question I have now found multiple solutions for this problem. One being itertuples. Here is how to use itertuples to solve the problem. 
First, create the dataframe import pandas as pd import numpy as np np.random.seed(2022) # make example data close = np.sin(range(610)) + 10 high = close + np.random.rand(*close.shape) open = high - np.random.rand(*close.shape) low = high - 3 close[2] += 100 dates = pd.date_range(end='2022-06-30', periods=len(close)) # insert into pd.dataframe df = pd.DataFrame(index=dates, data=np.array([open, high, low, close]).T, columns=['Open', 'High', 'Low', 'Close']) print(df) Now we use itertuples to iterate over the rows of the dataframe for row in df.itertuples(): o = row.Open for r in df.itertuples(): c = r.Close if o < c: print('O is less than C') else: print('O is greater than C') This will find all instances of when the open price is less than the close price This can be expanded on to check other conditions within the same loop just by adding more variables and more if statements, and also using enumerate to check positioning For example: for idx, row in enumerate(df.itertuples()): o = row.Open h = row.High for i, r in enumerate(df.itertuples()): c = r.Close l = r.Low if (i > idx) & ((h - 2) > l): if o < c: print('O is less than C') else: print('O is greater than C') else: continue The above code uses enumerate to add a counter to each loop. The additional if statement will only check if 'o < c' in rows which the loop counter for 'c' is greater than the loop counter for 'o'. As you can see any value in the dataframe can be compared to another using the correct if statements.
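For comparison, the same checks can be written without any explicit loops; this sketch is meant to run right after the df construction above, and the "two rows ahead" offset is just one example of the positional conditions discussed:

# per-row: did the price close above the open?
open_lt_close = df['Open'] < df['Close']

# compare High - 2 in each row with the Low two rows later
high_vs_later_low = (df['High'] - 2) > df['Low'].shift(-2)

print(open_lt_close.sum(), "rows closed above the open")
print(high_vs_later_low.sum(), "rows where High - 2 beats the Low two rows ahead")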
How can I apply maths to a pandas dataframe comparing 2 specific row and column indexes
I have this dataframe import pandas as pd import numpy as np np.random.seed(2022) # make example data close = np.sin(range(610)) + 10 high = close + np.random.rand(*close.shape) open = high - np.random.rand(*close.shape) low = high - 3 close[2] += 100 dates = pd.date_range(end='2022-06-30', periods=len(close)) # insert into pd.dataframe df = pd.DataFrame(index=dates, data=np.array([open, high, low, close]).T, columns=['Open', 'High', 'Low', 'Close']) print(df) Output Open High Low Close 2020-10-29 9.557631 10.009359 7.009359 10.000000 2020-10-30 10.794789 11.340529 8.340529 10.841471 2020-10-31 10.631242 11.022681 8.022681 110.909297 2020-11-01 9.639562 10.191094 7.191094 10.141120 2020-11-02 9.835697 9.928605 6.928605 9.243198 ... ... ... ... ... 2022-06-26 10.738942 11.167593 8.167593 10.970521 2022-06-27 10.031187 10.868859 7.868859 10.321565 2022-06-28 9.991932 10.271633 7.271633 9.376964 2022-06-29 9.069759 9.684232 6.684232 9.005179 2022-06-30 9.479291 10.300242 7.300242 9.548028 The goal here is to compare a specific value in the dataframe, to another value in the dataframe. Edit: I now know many different ways to achieve this however I have re-written the question so it is more clear for future readers what the original goal was. For example: Check when the value at 'open' column is less than the value at close column. One solution for this is using itertuples, I have written an answer below explaining the solution
[ "The first step you want to do can be done by df.loc[\"A\", \"High\"] > df.loc[\"C\", \"Low\"]. To apply this to all rows you could do something like below:\nfor i in range(2, len(df)):\n print(df[\"High\"][i-2] > df[\"Low\"][i])\n\nI'm sure there are better ways to do it, but this would work.\n", "you can use shift operation on column to shift the rows up/down\n`df['High'] > df['Low'].shift(-2)`\n\nTo elaborate what's going on, run below commands\ndf = pd.DataFrame(np.random.randn(5,4), list('ABCDE'), ['Open', 'High', 'Low', 'Close'])\ndf['Low_shiftup'] = df['Low'].shift(-2)\ndf.head()\ndf['High'] > df['Low_shiftup']\n\n", "As I explained in the question I have now found multiple solutions for this problem. One being itertuples.\nHere is how to use itertuples to solve the problem.\nFirst, create the dataframe\nimport pandas as pd\nimport numpy as np\nnp.random.seed(2022)\n\n# make example data\nclose = np.sin(range(610)) + 10\nhigh = close + np.random.rand(*close.shape)\nopen = high - np.random.rand(*close.shape)\nlow = high - 3\nclose[2] += 100\ndates = pd.date_range(end='2022-06-30', periods=len(close))\n\n# insert into pd.dataframe\ndf = pd.DataFrame(index=dates, data=np.array([open, high, low, close]).T, columns=['Open', 'High', 'Low', 'Close'])\nprint(df)\n\nNow we use itertuples to iterate over the rows of the dataframe\nfor row in df.itertuples():\n o = row.Open\n for r in df.itertuples():\n c = r.Close\n if o < c:\n print('O is less than C')\n else:\n print('O is greater than C')\n\nThis will find all instances of when the open price is less than the close price\nThis can be expanded on to check other conditions within the same loop just by adding more variables and more if statements, and also using enumerate to check positioning\nFor example:\nfor idx, row in enumerate(df.itertuples()):\n o = row.Open\n h = row.High\n for i, r in enumerate(df.itertuples()):\n c = r.Close\n l = r.Low\n if (i > idx) & ((h - 2) > l):\n if o < c:\n print('O is less than C')\n else:\n print('O is greater than C')\n else:\n continue\n\nThe above code uses enumerate to add a counter to each loop. The additional if statement will only check if 'o < c' in rows which the loop counter for 'c' is greater than the loop counter for 'o'.\nAs you can see any value in the dataframe can be compared to another using the correct if statements.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074093315_dataframe_pandas_python.txt
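A supplementary sketch for the entry above (not from the original thread): the itertuples double loop compares every Open to every later Close, which NumPy broadcasting can do without an explicit Python loop. This assumes df is the frame built in the question.

import numpy as np

open_vals = df['Open'].to_numpy()
close_vals = df['Close'].to_numpy()

# mask[i, j] is True where row i's Open is below row j's Close
mask = open_vals[:, None] < close_vals[None, :]

# keep only pairs where j comes after i, mirroring the enumerate check
later = np.triu(np.ones(mask.shape, dtype=bool), k=1)
print((mask & later).sum())   # count of (earlier Open, later Close) pairs with Open < Close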
Q: How to dynamically name dataframes? Suppose I have a dataframe as follows: s = df.head().to_dict() print(s) {'BoP transfers': {1998: 12.346282212735618, 1999: 19.06438060024298, 2000: 18.24888031473687, 2001: 24.860019912667006, 2002: 32.38242225822908}, 'Current balance': {1998: -6.7953, 1999: -2.9895, 2000: -3.9694, 2001: 1.1716, 2002: 5.7433}, 'Domestic demand': {1998: 106.8610389799729, 1999: 104.70302507466538, 2000: 104.59254229534136, 2001: 103.83532232336977, 2002: 102.81709401489702}, 'Effective exchange rate': {1998: 88.134, 1999: 95.6425, 2000: 99.927725, 2001: 101.92745, 2002: 107.85565}, 'RoR (foreign liabilities)': {1998: 0.0433, 1999: 0.0437, 2000: 0.0542, 2001: 0.0539, 2002: 0.0474}, 'Gross foreign assets': {1998: 19.720897432405103, 1999: 22.66200738564236, 2000: 25.18270679890144, 2001: 30.394226651732836, 2002: 37.26477320359688}, 'Gross domestic income': {1998: 104.9037939043707, 1999: 103.15361867816479, 2000: 103.06777792080423, 2001: 102.85886528974339, 2002: 102.28518242008846}, 'Gross foreign liabilities': {1998: 60.59784839338306, 1999: 61.03308220978983, 2000: 64.01438055825233, 2001: 67.07798172469921, 2002: 70.16108592109364}, 'Inflation rate': {1998: 52.6613, 1999: 19.3349, 2000: 16.0798, 2001: 15.076, 2002: 17.236}, 'Credit': {1998: 0.20269913592846378, 1999: 0.2154280880177353, 2000: 0.282948948505006, 2001: 0.3954812893893278, 2002: 0.3578263032373988}} which can be converted back to its original form using: df = pd.DataFrame.from_dict(s) I am interested in dynamically naming dataframes such that, for instance: dim = df.shape[1] counter1 = 0 counter2 = 1 while(counter1 <= dim): df_str(counter2) = df.iloc[:, counter1: (counter1 + 3)] counter1 = counter1 + 3 counter2 = counter2 + 1 Obviously this code is wrong and won't work. But essentially I need to end up with df_1, df_2, df_3 and so on, depending on the number of columns of the dataframe. I have read in many posts that this is bad practice and that this can be accomplished using dictionaries; but this is not clear to me. Some guidance is very much appreciated. A: i = 0 res = [] while i < df.shape[1]: res.append(df.iloc[:, i: (i + 3)]) i = i + 3 print(res[0]) print(res[1]) print(res[2])
How to dynamically name dataframes?
Suppose I have a dataframe as follows: s = df.head().to_dict() print(s) {'BoP transfers': {1998: 12.346282212735618, 1999: 19.06438060024298, 2000: 18.24888031473687, 2001: 24.860019912667006, 2002: 32.38242225822908}, 'Current balance': {1998: -6.7953, 1999: -2.9895, 2000: -3.9694, 2001: 1.1716, 2002: 5.7433}, 'Domestic demand': {1998: 106.8610389799729, 1999: 104.70302507466538, 2000: 104.59254229534136, 2001: 103.83532232336977, 2002: 102.81709401489702}, 'Effective exchange rate': {1998: 88.134, 1999: 95.6425, 2000: 99.927725, 2001: 101.92745, 2002: 107.85565}, 'RoR (foreign liabilities)': {1998: 0.0433, 1999: 0.0437, 2000: 0.0542, 2001: 0.0539, 2002: 0.0474}, 'Gross foreign assets': {1998: 19.720897432405103, 1999: 22.66200738564236, 2000: 25.18270679890144, 2001: 30.394226651732836, 2002: 37.26477320359688}, 'Gross domestic income': {1998: 104.9037939043707, 1999: 103.15361867816479, 2000: 103.06777792080423, 2001: 102.85886528974339, 2002: 102.28518242008846}, 'Gross foreign liabilities': {1998: 60.59784839338306, 1999: 61.03308220978983, 2000: 64.01438055825233, 2001: 67.07798172469921, 2002: 70.16108592109364}, 'Inflation rate': {1998: 52.6613, 1999: 19.3349, 2000: 16.0798, 2001: 15.076, 2002: 17.236}, 'Credit': {1998: 0.20269913592846378, 1999: 0.2154280880177353, 2000: 0.282948948505006, 2001: 0.3954812893893278, 2002: 0.3578263032373988}} which can be converted back to its original form using: df = pd.DataFrame.from_dict(s) I am interested in dynamically naming dataframes such that, for instance: dim = df.shape[1] counter1 = 0 counter2 = 1 while(counter1 <= dim): df_str(counter2) = df.iloc[:, counter1: (counter1 + 3)] counter1 = counter1 + 3 counter2 = counter2 + 1 Obviously this code is wrong and won't work. But essentially I need to end up with df_1, df_2, df_3 and so on, depending on the number of columns of the dataframe. I have read in many posts that this is bad practice and that this can be accomplished using dictionaries; but this is not clear to me. Some guidance is very much appreciated.
[ "i = 0\nres = []\nwhile i < df.shape[1]:\n res.append(df.iloc[:, i: (i + 3)])\n i = i + 3\nprint(res[0])\nprint(res[1])\nprint(res[2])\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python", "slice" ]
stackoverflow_0074608497_dataframe_pandas_python_slice.txt
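As a hedged sketch of the dictionary approach the question mentions (keys such as 'df_1' are illustrative only), the numbered frames can live in a dict instead of separate variables:

chunks = {f'df_{i // 3 + 1}': df.iloc[:, i:i + 3] for i in range(0, df.shape[1], 3)}
print(chunks['df_1'])   # first block of up to three columns

Each value is a slice of the original frame, so this behaves like the res list in the answer while keeping the df_1, df_2, ... labels the question asked about.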
Q: Split one column of a dataframe with xy coordinates into two columns, without a delimiter (every 2 values) I have a .csv with xy coordinates, which then I read with pandas. The problem is that the .csv has the data in only one column (here, the 1st value is an X value, the 2nd value is a Y value, the 3rd value is an X value, and so on) as shown here. This CSV is read with pandas, and the resulting dataframe is in the same format as shown here. The dataframe which I want to get is like X | Y 1) 792.0 | 610.0 2) 786.0 | 602.0 3) ... | ... The problem is that the dataframe/csv doesn't have a delimiter, like ','. I want to split the one and only column into two columns (called X and Y), with every two values. A: Assuming the number of values is even, you can use: out = pd.DataFrame(df.iloc[:,0].to_numpy().reshape(-1,2), columns=['X', 'Y']) Output: X Y 0 792.0 610.0 1 786.0 602.0
Split one column of a dataframe with xy coordinates into two columns, without a delimiter (every 2 values)
I have a .csv with xy coordinates, which then I read with pandas. The problem is that the .csv has the data in only one column (here, the 1st value is an X value, the 2nd value is a Y value, the 3rd value is an X value, and so on) as shown here. This CSV is read with pandas, and the resulting dataframe is in the same format as shown here. The dataframe which I want to get is like X | Y 1) 792.0 | 610.0 2) 786.0 | 602.0 3) ... | ... The problem is that the dataframe/csv doesn't have a delimiter, like ','. I want to split the one and only column into two columns (called X and Y), with every two values.
[ "Assuming the number of values is even, you can use:\nout = pd.DataFrame(df.iloc[:,0].to_numpy().reshape(-1,2), columns=['X', 'Y'])\n\nOutput:\n X Y\n0 792.0 610.0\n1 786.0 602.0\n\n" ]
[ 0 ]
[]
[]
[ "csv", "multiple_columns", "pandas", "python", "split" ]
stackoverflow_0074608524_csv_multiple_columns_pandas_python_split.txt
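A small alternative sketch for the entry above, assuming the single column really alternates x, y, x, y, ... with an even number of values: positional slicing with a step of 2 avoids the reshape.

import pandas as pd

vals = df.iloc[:, 0]
out = pd.DataFrame({'X': vals.iloc[::2].to_numpy(),
                    'Y': vals.iloc[1::2].to_numpy()})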
Q: Removing self-intersection from invalid polygon using python? I need to union polygons of the shapefile using python. https://i.stack.imgur.com/qY6gD.png There are some self-intersection inside polygon and my python code always results in error. import geopandas as gpd from shapely.geometry import Polygon from shapely.validation import make_valid from shapely.ops import cascaded_union from shapely.validation import explain_validity pz32 = gpd.read_file("B://_Shp_robocze//test//test.shp") shp = gpd.geoseries.GeoSeries([geom for geom in pz32.unary_union.geoms]) shp.geometry = shp.apply(lambda row: make_valid(row.geometry) if not row.geometry.is_valid else row.geometry, axis=1) print(shp) Errors that console printing TopologyException: side location conflict at 389425.99965310335 578312.21068473824. This can occur if the input geometry is invalid. Traceback (most recent call last): File "C:\...\PycharmProjects\pythonProject\main.py", line 8, in <module> shp = gpd.geoseries.GeoSeries([geom for geom in pz32.unary_union.geoms]) File "C:\...\PycharmProjects\pythonProject\venv\lib\site-packages\geopandas\base.py", line 800, in unary_union return self.geometry.values.unary_union() File "C:\...\PycharmProjects\pythonProject\venv\lib\site-packages\geopandas\array.py", line 650, in unary_union return vectorized.unary_union(self.data) File "C:\...\PycharmProjects\pythonProject\venv\lib\site-packages\geopandas\_vectorized.py", line 1034, in unary_union return shapely.ops.unary_union(data) File "C:\...\PycharmProjects\pythonProject\venv\lib\site-packages\shapely\ops.py", line 161, in unary_union return geom_factory(lgeos.methods['unary_union'](collection)) File "C:\...\PycharmProjects\pythonProject\venv\lib\site-packages\shapely\geometry\base.py", line 73, in geom_factory raise ValueError("No Shapely geometry can be created from null value") ValueError: No Shapely geometry can be created from null value How to make this geometry valid ? A: This error message indicates the presence of null values: No Shapely geometry can be created from null value You must have a None or NaN in your geometry column. This is different from (and possibly in addition to) any issues you may have with self-intersections. You can search for nulls, e.g. with the following, to diagnose what’s going on: pz32[pz32.isnull()] Or just drop the row: pz32 = pz32[pz32.notnull()] Shapely can’t just create a geometry from nothing, so it’s asking you to decide what to do with the null value. While geopandas can handle null values, you’re forcing the whole GeometryArray to be converted to a single shapely object by taking the unary_union.
Removing self-intersection from invalid polygon using python?
I need to union polygons of the shapefile using python. https://i.stack.imgur.com/qY6gD.png There are some self-intersection inside polygon and my python code always results in error. import geopandas as gpd from shapely.geometry import Polygon from shapely.validation import make_valid from shapely.ops import cascaded_union from shapely.validation import explain_validity pz32 = gpd.read_file("B://_Shp_robocze//test//test.shp") shp = gpd.geoseries.GeoSeries([geom for geom in pz32.unary_union.geoms]) shp.geometry = shp.apply(lambda row: make_valid(row.geometry) if not row.geometry.is_valid else row.geometry, axis=1) print(shp) Errors that console printing TopologyException: side location conflict at 389425.99965310335 578312.21068473824. This can occur if the input geometry is invalid. Traceback (most recent call last): File "C:\...\PycharmProjects\pythonProject\main.py", line 8, in <module> shp = gpd.geoseries.GeoSeries([geom for geom in pz32.unary_union.geoms]) File "C:\...\PycharmProjects\pythonProject\venv\lib\site-packages\geopandas\base.py", line 800, in unary_union return self.geometry.values.unary_union() File "C:\...\PycharmProjects\pythonProject\venv\lib\site-packages\geopandas\array.py", line 650, in unary_union return vectorized.unary_union(self.data) File "C:\...\PycharmProjects\pythonProject\venv\lib\site-packages\geopandas\_vectorized.py", line 1034, in unary_union return shapely.ops.unary_union(data) File "C:\...\PycharmProjects\pythonProject\venv\lib\site-packages\shapely\ops.py", line 161, in unary_union return geom_factory(lgeos.methods['unary_union'](collection)) File "C:\...\PycharmProjects\pythonProject\venv\lib\site-packages\shapely\geometry\base.py", line 73, in geom_factory raise ValueError("No Shapely geometry can be created from null value") ValueError: No Shapely geometry can be created from null value How to make this geometry valid ?
[ "This error message indicates the presence of null values:\nNo Shapely geometry can be created from null value\n\nYou must have a None or NaN in your geometry column. This is different from (and possibly in addition to) any issues you may have with self-intersections.\nYou can search for nulls, e.g. with the following, to diagnose what’s going on:\npz32[pz32.isnull()]\n\nOr just drop the row:\npz32 = pz32[pz32.notnull()]\n\nShapely can’t just create a geometry from nothing, so it’s asking you to decide what to do with the null value. While geopandas can handle null values, you’re forcing the whole GeometryArray to be converted to a single shapely object by taking the unary_union.\n" ]
[ 0 ]
[]
[]
[ "geometry", "geopandas", "gis", "python", "shapely" ]
stackoverflow_0074600665_geometry_geopandas_gis_python_shapely.txt
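A combined sketch for the entry above (an illustration under assumptions, not the poster's exact fix): drop missing geometries first, repair the invalid ones with make_valid, and only then take the union.

from shapely.validation import make_valid

pz32 = pz32[pz32.geometry.notna()].copy()               # remove None/NaN geometries
pz32['geometry'] = pz32.geometry.apply(
    lambda g: make_valid(g) if not g.is_valid else g)   # repair self-intersections
merged = pz32.unary_union                               # union no longer sees invalid input

Note that make_valid can return a GeometryCollection for badly self-intersecting polygons, so the result may need filtering down to polygons afterwards.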
Q: Shampoo cycle loop I am having trouble figuring out how to stop the loop in my code: def shampoo_instructions(num_cycles): for num_cycles in range(1,num_cycles+1): if num_cycles < 1: print 'Too few.' elif num_cycles > 4: print 'Too many.' else: print num_cycles, ': Lather and rinse.' else: print 'Done.' shampoo_instructions(2) My output would be: 1 : Lather and rinse. 2 : Lather and rinse. Done. How can i make it so when shampoo_instructions(6) it just prints "Too many." ? A: Move your range check to be outside your actual looping, eg: def shampoo_instructions(num_cycles): if num_cycles < 1: print 'Too few.' elif num_cyles > 4: print 'Too many.' else: for num_cycles in range(1,num_cycles+1): print num_cycles, 'lather and rinse.' print 'Done' A: found some help from reddit. def shampoo_instructions(num_cycles): if num_cycles <= 0: print("Too few.") elif num_cycles >= 5: print("Too many.") else: for num_cycles in range (1, num_cycles + 1): print(num_cycles,":", "Lather and rinse.") print("Done.") A: ''' Your solution goes here ''' def print_shampoo_instructions(num_cycles): if num_cycles < 1: print('Too few.') elif num_cycles > 4: print('Too many.') else: i = 1 while i <= num_cycles: print(f'{i} : Lather and rinse.') i += 1 print('Done.') user_cycles = int(input()) print_shampoo_instructions(user_cycles)
Shampoo cycle loop
I am having trouble figuring out how to stop the loop in my code: def shampoo_instructions(num_cycles): for num_cycles in range(1,num_cycles+1): if num_cycles < 1: print 'Too few.' elif num_cycles > 4: print 'Too many.' else: print num_cycles, ': Lather and rinse.' else: print 'Done.' shampoo_instructions(2) My output would be: 1 : Lather and rinse. 2 : Lather and rinse. Done. How can i make it so when shampoo_instructions(6) it just prints "Too many." ?
[ "Move your range check to be outside your actual looping, eg:\ndef shampoo_instructions(num_cycles):\n if num_cycles < 1:\n print 'Too few.'\n elif num_cyles > 4:\n print 'Too many.'\n else:\n for num_cycles in range(1,num_cycles+1):\n print num_cycles, 'lather and rinse.'\n print 'Done'\n\n", "found some help from reddit.\ndef shampoo_instructions(num_cycles):\n if num_cycles <= 0:\n print(\"Too few.\")\n elif num_cycles >= 5:\n print(\"Too many.\")\n else:\n for num_cycles in range (1, num_cycles + 1):\n print(num_cycles,\":\", \"Lather and rinse.\")\n print(\"Done.\")\n\n", "''' Your solution goes here '''\ndef print_shampoo_instructions(num_cycles):\n if num_cycles < 1:\n print('Too few.')\n elif num_cycles > 4:\n print('Too many.')\n else:\n i = 1\n while i <= num_cycles:\n print(f'{i} : Lather and rinse.')\n i += 1\n print('Done.')\n\nuser_cycles = int(input())\nprint_shampoo_instructions(user_cycles)\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0026268460_python.txt
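For completeness, a Python 3 sketch of the pattern all three answers share (validate first, then loop); it is an illustration rather than a new answer.

def shampoo_instructions(num_cycles):
    if num_cycles < 1:
        print('Too few.')
    elif num_cycles > 4:
        print('Too many.')
    else:
        for n in range(1, num_cycles + 1):
            print(n, ': Lather and rinse.')
        print('Done.')

shampoo_instructions(6)   # prints only: Too many.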
Q: How do I load a bytes object WAV audio file in torchaudio? I am trying to load a bytes-class object named "audio" to be loaded as a torchaudio object: def convert_audio(audio, target_sr: int = 16000): wav, sr = torchaudio.load(audio) #(...) some other code I cannot find any documentation online with instructions on how to load a bytes audio object inside Torchaudio, it seems to only accept path strings. But I have to save I/O in my application and I cannot write and load .wav files, only handle the audio objects directly. Does anyone have a suggestion in this case? If I use audio directly, I get this error: Exception has occurred: AttributeError (note: full exception trace is shown but execution is paused at: _run_module_as_main) 'bytes' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead. File "/home/felipe/.local/lib/python3.10/site-packages/torch/serialization.py", line 348, in _check_seekable f.seek(f.tell()) With BytesIO: Exception has occurred: UnpicklingError (note: full exception trace is shown but execution is paused at: _run_module_as_main) invalid load key, '\x00'. File "/home/felipe/.local/lib/python3.10/site-packages/torch/serialization.py", line 1002, in _legacy_load magic_number = pickle_module.load(f, **pickle_load_args) File "/home/felipe/.local/lib/python3.10/site-packages/torch/serialization.py", line 795, in load return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) File "/home/felipe/Coding projects/silero/stt.py", line 35, in convert_audio wav,sr = torch.load(io.BytesIO(audio)) File "/home/felipe/Coding projects/silero/stt.py", line 60, in transcribe input = prepare_model_input(convert_audio(audio), File "/home/felipe/Coding projects/silero/psgui.py", line 97, in <module> transcripton = stt.transcribe('en',audio) File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main (Current frame) return _run_code(code, main_globals, None, A: If it's WAV format, torchaudio.load should be able to decode it from file-like object. Your code snippet looks good to me. The following tutorial demonstrates it with different file-like objects. https://pytorch.org/audio/0.13.0/tutorials/audio_io_tutorial.html#loading-from-file-like-object Still, there are many reasons it does not work. For example, is your file-like object's cursor pointing the correct position (the beginning of the audio data)? Does the read method conformant to the io.RawIOBase.read protocol? It's hard to tell without seeing the error stacktrace.
How do I load a bytes object WAV audio file in torchaudio?
I am trying to load a bytes-class object named "audio" to be loaded as a torchaudio object: def convert_audio(audio, target_sr: int = 16000): wav, sr = torchaudio.load(audio) #(...) some other code I cannot find any documentation online with instructions on how to load a bytes audio object inside Torchaudio, it seems to only accept path strings. But I have to save I/O in my application and I cannot write and load .wav files, only handle the audio objects directly. Does anyone have a suggestion in this case? If I use audio directly, I get this error: Exception has occurred: AttributeError (note: full exception trace is shown but execution is paused at: _run_module_as_main) 'bytes' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead. File "/home/felipe/.local/lib/python3.10/site-packages/torch/serialization.py", line 348, in _check_seekable f.seek(f.tell()) With BytesIO: Exception has occurred: UnpicklingError (note: full exception trace is shown but execution is paused at: _run_module_as_main) invalid load key, '\x00'. File "/home/felipe/.local/lib/python3.10/site-packages/torch/serialization.py", line 1002, in _legacy_load magic_number = pickle_module.load(f, **pickle_load_args) File "/home/felipe/.local/lib/python3.10/site-packages/torch/serialization.py", line 795, in load return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) File "/home/felipe/Coding projects/silero/stt.py", line 35, in convert_audio wav,sr = torch.load(io.BytesIO(audio)) File "/home/felipe/Coding projects/silero/stt.py", line 60, in transcribe input = prepare_model_input(convert_audio(audio), File "/home/felipe/Coding projects/silero/psgui.py", line 97, in <module> transcripton = stt.transcribe('en',audio) File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main (Current frame) return _run_code(code, main_globals, None,
[ "If it's WAV format, torchaudio.load should be able to decode it from file-like object. Your code snippet looks good to me.\nThe following tutorial demonstrates it with different file-like objects.\nhttps://pytorch.org/audio/0.13.0/tutorials/audio_io_tutorial.html#loading-from-file-like-object\nStill, there are many reasons it does not work. For example, is your file-like object's cursor pointing the correct position (the beginning of the audio data)? Does the read method conformant to the io.RawIOBase.read protocol?\nIt's hard to tell without seeing the error stacktrace.\n" ]
[ 0 ]
[]
[]
[ "python", "torch", "torchaudio" ]
stackoverflow_0074605909_python_torch_torchaudio.txt
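One detail worth noting for the entry above: the traceback shows torch.load(io.BytesIO(audio)) being called, whereas the decoder for WAV bytes is torchaudio.load, which accepts a file-like object. A hedged sketch (assuming audio holds a complete WAV file and an audio backend such as soundfile is installed):

import io
import torchaudio

def convert_audio(audio, target_sr: int = 16000):
    # torchaudio.load, not torch.load, decodes audio from a seekable buffer
    wav, sr = torchaudio.load(io.BytesIO(audio))
    if sr != target_sr:
        wav = torchaudio.functional.resample(wav, sr, target_sr)
    return wav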
Q: Pandas DataFrame applying user defined function only using last row of data from DataFrame I'm working with a fairly large DataFrame that has multiple columns. It looks something like this: Date Temp Dewpt_Temp Rainfall (cm) Snowfall (cm) 12/16/2021 -1.6 -5.4 0 6.7 12/17/2021 -5.5 -12.4 0 0 .......... .... .......... ............. ............. I have formulas I want to apply to the DataFrame to calculate new variables, those being saturation vapor pressure, vapor pressure, and relative humidity. Here is my code: data = pd.read_csv('file path/weather_data.csv') def new_vars(dataframe): temp = dataframe.Temp dewpt = dataframe.Dewpt_Temp e = 6.11*(10**((7.5*dewpt)/(273.3 + dewpt))) e_s = 6.11*(10**((7.5*temp)/(273.3 + temp))) rh = (e/e_s) * 100 return (e, e_s, rh) new_df = data.apply(lambda x: new_vars(data), axis=1) The code seems to work; however, when I run it, it seems to only compute the new variables using the last row in the DataFrame. The amount of output rows matches what the original DataFrame size is, but the new variable values calculated are all the same for each of the rows, seemingly using only the last row of data from the original DataFrame. Am I missing something that is needed to prevent this from happening? I know there are probably simpler ways of calculating the new variables given its in a DataFrame, but I have more complex equations that I will need to use in the future, so I wanted to get practice using a user defined function. A: Try this: new_df[['e', 'e_s', 'rh']] = data.apply(lambda x: new_vars(x['Temp'], x['Dewpt_Temp']), axis=1) And in the function declaration: def new_vars(temp, dewpt) And delete these two lines: temp = dataframe.Temp dewpt = dataframe.Dewpt_Temp A: Can you try this: new_df = pd.DataFrame() new_df[['e', 'e_s', 'rh']] = df.apply(lambda x: new_vars(x),axis=1) Full code: data = pd.read_csv('file path/weather_data.csv') def new_vars(dataframe): temp = dataframe.Temp dewpt = dataframe.Dewpt_Temp e = 6.11*(10**((7.5*dewpt)/(273.3 + dewpt))) e_s = 6.11*(10**((7.5*temp)/(273.3 + temp))) rh = (e/e_s) * 100 return (e, e_s, rh) new_df = pd.DataFrame() new_df[['e', 'e_s', 'rh']] = df.apply(lambda x: new_vars(x),axis=1) A: You seem to be unclear on what lambda means. It defines a function in terms of the dummy variable given immediately after it. If that is then followed by an expression in terms of that variable after a colon, then you have a function that varies as that variable varies. But if you have a different variable, then the function will always return the expression evaluated for that variable. For instance, lambda x: x*2 defines a function that takes any value and doubles it. But lambda x: y*2 will always return whatever y*2 is, regardless of what x is. If you want to apply new_vars, you can just pass it to apply: new_df = data.apply(new_vars, axis=1)
Pandas DataFrame applying user defined function only using last row of data from DataFrame
I'm working with a fairly large DataFrame that has multiple columns. It looks something like this: Date Temp Dewpt_Temp Rainfall (cm) Snowfall (cm) 12/16/2021 -1.6 -5.4 0 6.7 12/17/2021 -5.5 -12.4 0 0 .......... .... .......... ............. ............. I have formulas I want to apply to the DataFrame to calculate new variables, those being saturation vapor pressure, vapor pressure, and relative humidity. Here is my code: data = pd.read_csv('file path/weather_data.csv') def new_vars(dataframe): temp = dataframe.Temp dewpt = dataframe.Dewpt_Temp e = 6.11*(10**((7.5*dewpt)/(273.3 + dewpt))) e_s = 6.11*(10**((7.5*temp)/(273.3 + temp))) rh = (e/e_s) * 100 return (e, e_s, rh) new_df = data.apply(lambda x: new_vars(data), axis=1) The code seems to work; however, when I run it, it seems to only compute the new variables using the last row in the DataFrame. The amount of output rows matches what the original DataFrame size is, but the new variable values calculated are all the same for each of the rows, seemingly using only the last row of data from the original DataFrame. Am I missing something that is needed to prevent this from happening? I know there are probably simpler ways of calculating the new variables given its in a DataFrame, but I have more complex equations that I will need to use in the future, so I wanted to get practice using a user defined function.
[ "Try this:\nnew_df[['e', 'e_s', 'rh']] = data.apply(lambda x: new_vars(x['Temp'], x['Dewpt_Temp']), axis=1)\n\nAnd in the function declaration:\ndef new_vars(temp, dewpt)\n\nAnd delete these two lines:\n temp = dataframe.Temp\n dewpt = dataframe.Dewpt_Temp\n\n", "Can you try this:\nnew_df = pd.DataFrame()\nnew_df[['e', 'e_s', 'rh']] = df.apply(lambda x: new_vars(x),axis=1)\n\nFull code:\ndata = pd.read_csv('file path/weather_data.csv')\n\ndef new_vars(dataframe):\n temp = dataframe.Temp\n dewpt = dataframe.Dewpt_Temp\n \n e = 6.11*(10**((7.5*dewpt)/(273.3 + dewpt)))\n e_s = 6.11*(10**((7.5*temp)/(273.3 + temp)))\n rh = (e/e_s) * 100\n \n return (e, e_s, rh)\n\nnew_df = pd.DataFrame()\nnew_df[['e', 'e_s', 'rh']] = df.apply(lambda x: new_vars(x),axis=1)\n\n", "You seem to be unclear on what lambda means. It defines a function in terms of the dummy variable given immediately after it. If that is then followed by an expression in terms of that variable after a colon, then you have a function that varies as that variable varies. But if you have a different variable, then the function will always return the expression evaluated for that variable. For instance, lambda x: x*2 defines a function that takes any value and doubles it. But lambda x: y*2 will always return whatever y*2 is, regardless of what x is. If you want to apply new_vars, you can just pass it to apply:\nnew_df = data.apply(new_vars, axis=1)\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "apply", "function", "pandas", "python" ]
stackoverflow_0074606026_apply_function_pandas_python.txt
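A vectorized sketch for the entry above: because the formulas only use arithmetic, they can run on whole columns at once, which sidesteps the apply/lambda pitfalls entirely (this assumes data is the frame read from the CSV).

temp = data['Temp']
dewpt = data['Dewpt_Temp']

e = 6.11 * (10 ** ((7.5 * dewpt) / (273.3 + dewpt)))
e_s = 6.11 * (10 ** ((7.5 * temp) / (273.3 + temp)))

data['e'] = e
data['e_s'] = e_s
data['rh'] = (e / e_s) * 100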
Q: Creating Function to Count Values Based on Datetime I have data that look like this: activity_date company_name new_company_status calling visit quotation po 03/10/2022 ABC Yes Yes No No No 04/10/2022 ABC No No No Yes Yes 05/10/2022 DEF No Yes Yes No No 06/10/2022 XYZ Yes No Yes Yes No 07/10/2022 DEF No No No Yes Yes 08/10/2022 XYZ No Yes No No Yes I want to create a function that will check every same company_name that has at least one new_company_status as a 'Yes' that turn into a 'Yes' calling and count the sum of it even in the different date. I want to create a function that will check every same company_name that has calling as a 'Yes' that also has po as a Yes even in the different date. This is the pseudocode that I created: 1. for every same company name: if 'new_company_status' = 'Yes': check 'activity_date' # for new company status if it is a Yes if 'calling' = 'Yes': check 'activity_date' # for calling if it is a Yes if calling_date >= new_company_date: new_company to call =+ 1 for every same company name: if 'calling' = 'Yes': check 'activity_date' # for calling if it is a Yes if 'visit' = 'No': if 'quotation' = 'No': if 'po' = 'Yes': check 'activity_date' # for po if it is a Yes if po_date >= calling_date: call to po += 1 Expected output: 1 3 How to code the pseudocode into Python? Can anyone help me? Thank u in advance. A: Assuming a pandas dataframe, you can use boolean operations and groupby.any: question 1 # define boolean masks m1 = df['new_company_status'].eq('Yes') m2 = df['calling'].eq('Yes') # count the number of companies # with at least one occurrence of m1 and m2 (m1&m2).groupby(df['company_name']).any().sum() Output: 1 question 2 # counting the companies with at least one calling AND one po as Yes df[['calling', 'po']].eq('Yes').groupby(df['company_name']).any().all(axis=1).sum() Output: 3
Creating Function to Count Values Based on Datetime
I have data that look like this: activity_date company_name new_company_status calling visit quotation po 03/10/2022 ABC Yes Yes No No No 04/10/2022 ABC No No No Yes Yes 05/10/2022 DEF No Yes Yes No No 06/10/2022 XYZ Yes No Yes Yes No 07/10/2022 DEF No No No Yes Yes 08/10/2022 XYZ No Yes No No Yes I want to create a function that will check every same company_name that has at least one new_company_status as a 'Yes' that turn into a 'Yes' calling and count the sum of it even in the different date. I want to create a function that will check every same company_name that has calling as a 'Yes' that also has po as a Yes even in the different date. This is the pseudocode that I created: 1. for every same company name: if 'new_company_status' = 'Yes': check 'activity_date' # for new company status if it is a Yes if 'calling' = 'Yes': check 'activity_date' # for calling if it is a Yes if calling_date >= new_company_date: new_company to call =+ 1 for every same company name: if 'calling' = 'Yes': check 'activity_date' # for calling if it is a Yes if 'visit' = 'No': if 'quotation' = 'No': if 'po' = 'Yes': check 'activity_date' # for po if it is a Yes if po_date >= calling_date: call to po += 1 Expected output: 1 3 How to code the pseudocode into Python? Can anyone help me? Thank u in advance.
[ "Assuming a pandas dataframe, you can use boolean operations and groupby.any:\nquestion 1\n# define boolean masks\nm1 = df['new_company_status'].eq('Yes')\nm2 = df['calling'].eq('Yes')\n\n# count the number of companies\n# with at least one occurrence of m1 and m2\n(m1&m2).groupby(df['company_name']).any().sum()\n\nOutput: 1\nquestion 2\n# counting the companies with at least one calling AND one po as Yes\ndf[['calling', 'po']].eq('Yes').groupby(df['company_name']).any().all(axis=1).sum()\n\nOutput: 3\n" ]
[ 0 ]
[]
[]
[ "count", "datetime", "function", "python" ]
stackoverflow_0074608543_count_datetime_function_python.txt
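A runnable illustration of the answer above on the question's sample rows (the date column is omitted because the masks do not need it):

import pandas as pd

df = pd.DataFrame({
    'company_name': ['ABC', 'ABC', 'DEF', 'XYZ', 'DEF', 'XYZ'],
    'new_company_status': ['Yes', 'No', 'No', 'Yes', 'No', 'No'],
    'calling': ['Yes', 'No', 'Yes', 'No', 'No', 'Yes'],
    'po': ['No', 'Yes', 'No', 'No', 'Yes', 'Yes'],
})

m1 = df['new_company_status'].eq('Yes')
m2 = df['calling'].eq('Yes')
print((m1 & m2).groupby(df['company_name']).any().sum())   # 1

print(df[['calling', 'po']].eq('Yes').groupby(df['company_name']).any().all(axis=1).sum())   # 3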
Q: sum up purchases of current year and month I need to add the purchases of users in a list with the following conditions: "Those customers who have a sum of purchases in the current year of 15,000 or more are entitled to a 20% discount. If they have also spent 3000 or more in the current month, they have an additional 15%." The list is the following: purchases=[{"id": "123", 'price':16000, 'date': "(2022, 3, 12)"},{"id": "123", 'price':4000, 'date': "(2022, 11, 20)"}] I don't know how to add up the purchases in the current year and month, if someone can help :) A: You can get the current year and month from datetime.date.today() (See Datetime current year and month in Python) import datetime today = datetime.date.today() # Current year: today.year # Current month today.month Iterate over each element of purchases. The "date" key in each element tells you the date for that purchase. It's annoying that the value at this key is a string representation of a tuple, but we can parse it using ast.literal_eval. The first element of this tuple is the year for the purchase. The second element is the month. So we need to check if these values match the values for today, and add to the corresponding totals: import ast total_month = 0 total_year = 0 for purchase in purchases: p_date = ast.literal_eval(purchase["date"]) if p_date[0] == today.year: total_year += purchase["price"] if p_date[1] == today.month: total_month += purchase["price"] And after we've looked through all purchases, we can check our criteria: if total_year >= 15_000: print("20% discount") if total_month >= 3000: print("Extra 15% discount")
sum up purchases of current year and month
I need to add the purchases of users in a list with the following conditions: "Those customers who have a sum of purchases in the current year of 15,000 or more are entitled to a 20% discount. If they have also spent 3000 or more in the current month, they have an additional 15%." The list is the following: purchases=[{"id": "123", 'price':16000, 'date': "(2022, 3, 12)"},{"id": "123", 'price':4000, 'date': "(2022, 11, 20)"}] I don't know how to add up the purchases in the current year and month, if someone can help :)
[ "You can get the current year and month from datetime.date.today() (See Datetime current year and month in Python)\nimport datetime\n\ntoday = datetime.date.today()\n\n# Current year: today.year\n# Current month today.month\n\nIterate over each element of purchases. The \"date\" key in each element tells you the date for that purchase. It's annoying that the value at this key is a string representation of a tuple, but we can parse it using ast.literal_eval.\nThe first element of this tuple is the year for the purchase. The second element is the month. So we need to check if these values match the values for today, and add to the corresponding totals:\nimport ast\n\ntotal_month = 0\ntotal_year = 0\n\nfor purchase in purchases:\n p_date = ast.literal_eval(purchase[\"date\"])\n if p_date[0] == today.year:\n total_year += purchase[\"price\"]\n if p_date[1] == today.month:\n total_month += purchase[\"price\"]\n\nAnd after we've looked through all purchases, we can check our criteria:\nif total_year >= 15_000:\n print(\"20% discount\")\n if total_month >= 3000:\n print(\"Extra 15% discount\")\n\n" ]
[ 1 ]
[]
[]
[ "date", "list", "python", "python_3.x" ]
stackoverflow_0074608622_date_list_python_python_3.x.txt
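A pandas-based sketch of the same totals, kept hedged because it assumes purchases is exactly the list shown in the question (string dates parsed with ast.literal_eval, as in the answer above):

import ast
import datetime
import pandas as pd

today = datetime.date.today()
df = pd.DataFrame(purchases)
parsed = df['date'].apply(ast.literal_eval)
years = parsed.apply(lambda t: t[0])
months = parsed.apply(lambda t: t[1])

total_year = df.loc[years == today.year, 'price'].sum()
total_month = df.loc[(years == today.year) & (months == today.month), 'price'].sum()
print(total_year >= 15_000, total_month >= 3000)   # discount conditions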
Q: Accuracy and loss fluctuating in binary classification problem in deep learning I'm currently working on a classification problem for stroke on UNet. The task is based the size of the lesion area(large - 1, small - 0). Note that the labels is actually produce by me(I will try to improve it) so they are not that accurate. When I trained like 20 epochs, my accuracy waved around 0.5 and loss is around 0.6, which basically says my model makes random choices. So what should I do to make my model learning again? Here's the Unet I'm using: `import keras_unet def define_unet(n_filters=neuron, n_layers=4, dropout_rate=0.25): model_unet = keras_unet.models.custom_unet(input_shape=(img_size, img_size, 3), activation='relu', use_batch_norm=True, upsample_mode='deconv', dropout=dropout_rate, dropout_type='spatial', filters=n_filters, num_layers=n_layers, output_activation='linear' ) GAP = keras.layers.GlobalAveragePooling2D()(model_unet.output) outputs = keras.layers.Dense(1,activation = 'sigmoid')(GAP) model_unet = keras.Model(inputs = model_unet.input, outputs = outputs) #bce is just the binary crossentropy model_unet.compile(optimizer=adam, loss=bce_loss,metrics=['accuracy']) model_unet.summary() return model_unet` here's the hyperparameters: `learning_rate = 0.0001 epochs = 20 dropout_rate = 0.2 batch_size = 16 kernel_size = 3 neuron = 8 adam = keras.optimizers.Adam(learning_rate=learning_rate)` My data set contains 1000 images spilt into 80:20 for training and validation and I'm using batch_size = 16. Here's the plot for acc and loss: I've tried to implement a few learning rate and it didn't work:( Thanks in advance for your help!!! Any suggestions would be appreciated. A: there are many variances including samples and models to improve the accuracy of binary class-entropy as a single objective. To improves the accuracy first you need to use the correct measurement, the accuracy matric works correctly when you label with 0 to 1 as float since it is binary cross entropy it may reflect minus number result because that is from the square different of the label values. It is not fluctuating loss and accuracy but you need to use the correct approaches, binary cross-entropy as single is a very fast drive but the return of turn points is deterministic. Sample: You can apply the mean of all output or simply critical points. plt.figure(figsize=(5,2)) plt.title("Actors recognitions") for i in range(len(list_file)): img = tf.keras.preprocessing.image.array_to_img( list_file[i], data_format=None, scale=True ) img_array = tf.keras.preprocessing.image.img_to_array(img) img_array = tf.expand_dims(img_array, 0) predictions = model.predict(img_array) score = tf.nn.softmax(predictions[0]) plt.subplot(5, 2, i + 1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(list_file_actual[i]) if predictions[0] > 0.51 : plt.xlabel(str(list_label_actual[1]) + " score: " + str(predictions[0])) else : plt.xlabel(str(list_label_actual[0]) + " score: " + str(predictions[0])) Sample: You may try to use vectors to improves of the results. 
import os from os.path import exists import tensorflow as tf import tensorflow_io as tfio import matplotlib.pyplot as plt """"""""""""""""""""""""""""""""""""""""""""""""""""""""" [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] None """"""""""""""""""""""""""""""""""""""""""""""""""""""""" physical_devices = tf.config.experimental.list_physical_devices('GPU') assert len(physical_devices) > 0, "Not enough GPU hardware devices available" config = tf.config.experimental.set_memory_growth(physical_devices[0], True) print(physical_devices) print(config) """"""""""""""""""""""""""""""""""""""""""""""""""""""""" Variables """"""""""""""""""""""""""""""""""""""""""""""""""""""""" PATH = os.path.join('F:\\datasets\\downloads\\Actors\\train\\Pikaploy', '*.tif') PATH_2 = os.path.join('F:\\datasets\\downloads\\Actors\\train\\Candidt Kibt', '*.tif') files = tf.data.Dataset.list_files(PATH) files_2 = tf.data.Dataset.list_files(PATH_2) list_file = [] list_file_actual = [] list_label = [] list_label_actual = [ 'Pikaploy', 'Candidt Kibt' ] for file in files.take(5): image = tf.io.read_file( file ) image = tfio.experimental.image.decode_tiff(image, index=0) list_file_actual.append(image) image = tf.image.resize(image, [32,32], method='nearest') list_file.append(image) # list_label.append([0, 0]) list_label.append([0.0]) for file in files_2.take(5): image = tf.io.read_file( file ) image = tfio.experimental.image.decode_tiff(image, index=0) list_file_actual.append(image) image = tf.image.resize(image, [32,32], method='nearest') list_file.append(image) # list_label.append([1, 1]) list_label.append([1.0]) checkpoint_path = "F:\\models\\checkpoint\\" + os.path.basename(__file__).split('.')[0] + "\\TF_DataSets_01.h5" checkpoint_dir = os.path.dirname(checkpoint_path) loggings = "F:\\models\\checkpoint\\" + os.path.basename(__file__).split('.')[0] + "\\loggings.log" if not exists(checkpoint_dir) : os.mkdir(checkpoint_dir) print("Create directory: " + checkpoint_dir) log_dir = checkpoint_dir """"""""""""""""""""""""""""""""""""""""""""""""""""""""" DataSet """"""""""""""""""""""""""""""""""""""""""""""""""""""""" # dataset = tf.data.Dataset.from_tensor_slices((tf.constant(tf.cast(list_file, dtype=tf.int64), shape=(10, 1, 32, 32, 4), dtype=tf.int64),tf.constant(list_label, shape=(10, 1, 2), dtype=tf.int64))) dataset = tf.data.Dataset.from_tensor_slices((tf.constant(tf.cast(list_file, dtype=tf.int64), shape=(10, 1, 32, 32, 4), dtype=tf.int64),tf.constant(list_label, shape=(10, 1, 1), dtype=tf.float32))) """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Model Initialize """"""""""""""""""""""""""""""""""""""""""""""""""""""""" model = tf.keras.models.Sequential([ tf.keras.layers.InputLayer(input_shape=( 32, 32, 4 )), tf.keras.layers.Normalization(mean=3., variance=2.), tf.keras.layers.Normalization(mean=4., variance=6.), tf.keras.layers.Conv2D(32, (3, 3), activation='relu'), tf.keras.layers.MaxPooling2D((2, 2)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Reshape((128, 225)), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96, return_sequences=True, return_state=False)), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96)), tf.keras.layers.Flatten(), tf.keras.layers.Dense(192, activation='relu'), tf.keras.layers.Dense(1, activation='sigmoid'), ]) """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Optimizer """"""""""""""""""""""""""""""""""""""""""""""""""""""""" # optimizer = tf.keras.optimizers.SGD( # learning_rate=0.001, # momentum=0.0, # nesterov=False, 
# name='SGD',# ) optimizer = tf.keras.optimizers.Nadam( learning_rate=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, name='Nadam' ) """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Loss Fn """"""""""""""""""""""""""""""""""""""""""""""""""""""""" lossfn = tf.keras.losses.BinaryCrossentropy( from_logits=False, label_smoothing=0.0, axis=-1, reduction=tf.keras.losses.Reduction.AUTO, name='binary_crossentropy' ) """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Model Summary """"""""""""""""""""""""""""""""""""""""""""""""""""""""" model.compile(optimizer=optimizer, loss=lossfn, metrics=['accuracy']) """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Callback """"""""""""""""""""""""""""""""""""""""""""""""""""""""" class custom_callback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): # if( logs['loss'] <= 0.2 ): # self.model.stop_training = True if( logs['accuracy'] >= 0.95 ): self.model.stop_training = True custom_callback = custom_callback() """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : FileWriter """"""""""""""""""""""""""""""""""""""""""""""""""""""""" if exists(checkpoint_path) : model.load_weights(checkpoint_path) print("model load: " + checkpoint_path) input("Press Any Key!") """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Training """"""""""""""""""""""""""""""""""""""""""""""""""""""""" history = model.fit( dataset, validation_data=dataset, batch_size=10, epochs=10000, callbacks=[custom_callback] ) model.save_weights(checkpoint_path) plt.figure(figsize=(5,2)) plt.title("Actors recognitions") for i in range(len(list_file)): img = tf.keras.preprocessing.image.array_to_img( list_file[i], data_format=None, scale=True ) img_array = tf.keras.preprocessing.image.img_to_array(img) img_array = tf.expand_dims(img_array, 0) predictions = model.predict(img_array) score = tf.nn.softmax(predictions[0]) plt.subplot(5, 2, i + 1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(list_file_actual[i]) if predictions[0] > 0.51 : plt.xlabel(str(list_label_actual[1]) + " score: " + str(predictions[0])) else : plt.xlabel(str(list_label_actual[0]) + " score: " + str(predictions[0])) plt.show() plt.plot(history.history['accuracy'], label='accuracy') plt.plot(history.history['val_accuracy'], label = 'val_accuracy') plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.ylim([0.5, 1]) plt.legend(loc='lower right') plt.show() Output results: Accuracy versus epoaches:
Accuracy and loss fluctuating in binary classification problem in deep learning
I'm currently working on a classification problem for stroke on UNet. The task is based the size of the lesion area(large - 1, small - 0). Note that the labels is actually produce by me(I will try to improve it) so they are not that accurate. When I trained like 20 epochs, my accuracy waved around 0.5 and loss is around 0.6, which basically says my model makes random choices. So what should I do to make my model learning again? Here's the Unet I'm using: `import keras_unet def define_unet(n_filters=neuron, n_layers=4, dropout_rate=0.25): model_unet = keras_unet.models.custom_unet(input_shape=(img_size, img_size, 3), activation='relu', use_batch_norm=True, upsample_mode='deconv', dropout=dropout_rate, dropout_type='spatial', filters=n_filters, num_layers=n_layers, output_activation='linear' ) GAP = keras.layers.GlobalAveragePooling2D()(model_unet.output) outputs = keras.layers.Dense(1,activation = 'sigmoid')(GAP) model_unet = keras.Model(inputs = model_unet.input, outputs = outputs) #bce is just the binary crossentropy model_unet.compile(optimizer=adam, loss=bce_loss,metrics=['accuracy']) model_unet.summary() return model_unet` here's the hyperparameters: `learning_rate = 0.0001 epochs = 20 dropout_rate = 0.2 batch_size = 16 kernel_size = 3 neuron = 8 adam = keras.optimizers.Adam(learning_rate=learning_rate)` My data set contains 1000 images spilt into 80:20 for training and validation and I'm using batch_size = 16. Here's the plot for acc and loss: I've tried to implement a few learning rate and it didn't work:( Thanks in advance for your help!!! Any suggestions would be appreciated.
[ "there are many variances including samples and models to improve the accuracy of binary class-entropy as a single objective. To improves the accuracy first you need to use the correct measurement, the accuracy matric works correctly when you label with 0 to 1 as float since it is binary cross entropy it may reflect minus number result because that is from the square different of the label values.\nIt is not fluctuating loss and accuracy but you need to use the correct approaches, binary cross-entropy as single is a very fast drive but the return of turn points is deterministic.\n\nSample: You can apply the mean of all output or simply critical points.\n\nplt.figure(figsize=(5,2))\nplt.title(\"Actors recognitions\")\nfor i in range(len(list_file)):\n img = tf.keras.preprocessing.image.array_to_img(\n list_file[i],\n data_format=None,\n scale=True\n )\n img_array = tf.keras.preprocessing.image.img_to_array(img)\n img_array = tf.expand_dims(img_array, 0)\n predictions = model.predict(img_array)\n score = tf.nn.softmax(predictions[0])\n \n plt.subplot(5, 2, i + 1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(list_file_actual[i])\n \n if predictions[0] > 0.51 :\n plt.xlabel(str(list_label_actual[1]) + \" score: \" + str(predictions[0]))\n else :\n plt.xlabel(str(list_label_actual[0]) + \" score: \" + str(predictions[0]))\n\n\nSample: You may try to use vectors to improves of the results.\n\nimport os\nfrom os.path import exists\n\nimport tensorflow as tf\nimport tensorflow_io as tfio\n\nimport matplotlib.pyplot as plt\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]\nNone\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nphysical_devices = tf.config.experimental.list_physical_devices('GPU')\nassert len(physical_devices) > 0, \"Not enough GPU hardware devices available\"\nconfig = tf.config.experimental.set_memory_growth(physical_devices[0], True)\nprint(physical_devices)\nprint(config)\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nVariables\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nPATH = os.path.join('F:\\\\datasets\\\\downloads\\\\Actors\\\\train\\\\Pikaploy', '*.tif')\nPATH_2 = os.path.join('F:\\\\datasets\\\\downloads\\\\Actors\\\\train\\\\Candidt Kibt', '*.tif')\nfiles = tf.data.Dataset.list_files(PATH)\nfiles_2 = tf.data.Dataset.list_files(PATH_2)\n\nlist_file = []\nlist_file_actual = []\nlist_label = []\nlist_label_actual = [ 'Pikaploy', 'Candidt Kibt' ]\nfor file in files.take(5):\n image = tf.io.read_file( file )\n image = tfio.experimental.image.decode_tiff(image, index=0)\n list_file_actual.append(image)\n image = tf.image.resize(image, [32,32], method='nearest')\n list_file.append(image)\n # list_label.append([0, 0])\n list_label.append([0.0])\n \nfor file in files_2.take(5):\n image = tf.io.read_file( file )\n image = tfio.experimental.image.decode_tiff(image, index=0)\n list_file_actual.append(image)\n image = tf.image.resize(image, [32,32], method='nearest')\n list_file.append(image)\n # list_label.append([1, 1])\n list_label.append([1.0])\n\ncheckpoint_path = \"F:\\\\models\\\\checkpoint\\\\\" + os.path.basename(__file__).split('.')[0] + \"\\\\TF_DataSets_01.h5\"\ncheckpoint_dir = 
os.path.dirname(checkpoint_path)\nloggings = \"F:\\\\models\\\\checkpoint\\\\\" + os.path.basename(__file__).split('.')[0] + \"\\\\loggings.log\"\n\nif not exists(checkpoint_dir) : \n os.mkdir(checkpoint_dir)\n print(\"Create directory: \" + checkpoint_dir)\n \nlog_dir = checkpoint_dir\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nDataSet\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n# dataset = tf.data.Dataset.from_tensor_slices((tf.constant(tf.cast(list_file, dtype=tf.int64), shape=(10, 1, 32, 32, 4), dtype=tf.int64),tf.constant(list_label, shape=(10, 1, 2), dtype=tf.int64)))\ndataset = tf.data.Dataset.from_tensor_slices((tf.constant(tf.cast(list_file, dtype=tf.int64), shape=(10, 1, 32, 32, 4), dtype=tf.int64),tf.constant(list_label, shape=(10, 1, 1), dtype=tf.float32)))\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Model Initialize\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.InputLayer(input_shape=( 32, 32, 4 )),\n tf.keras.layers.Normalization(mean=3., variance=2.),\n tf.keras.layers.Normalization(mean=4., variance=6.),\n tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),\n tf.keras.layers.MaxPooling2D((2, 2)),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Reshape((128, 225)),\n tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96, return_sequences=True, return_state=False)),\n tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96)),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(192, activation='relu'),\n tf.keras.layers.Dense(1, activation='sigmoid'),\n])\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Optimizer\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n# optimizer = tf.keras.optimizers.SGD(\n # learning_rate=0.001,\n # momentum=0.0,\n # nesterov=False,\n # name='SGD',# )\n\noptimizer = tf.keras.optimizers.Nadam(\n learning_rate=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-07,\n name='Nadam'\n)\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Loss Fn\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\" \nlossfn = tf.keras.losses.BinaryCrossentropy(\n from_logits=False,\n label_smoothing=0.0,\n axis=-1,\n reduction=tf.keras.losses.Reduction.AUTO,\n name='binary_crossentropy'\n)\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Model Summary\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nmodel.compile(optimizer=optimizer, loss=lossfn, metrics=['accuracy'])\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Callback\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nclass custom_callback(tf.keras.callbacks.Callback):\n def on_epoch_end(self, epoch, logs={}):\n # if( logs['loss'] <= 0.2 ):\n # self.model.stop_training = True\n if( logs['accuracy'] >= 0.95 ):\n 
self.model.stop_training = True\n \ncustom_callback = custom_callback()\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: FileWriter\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nif exists(checkpoint_path) :\n model.load_weights(checkpoint_path)\n print(\"model load: \" + checkpoint_path)\n input(\"Press Any Key!\")\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Training\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nhistory = model.fit( dataset, validation_data=dataset, batch_size=10, epochs=10000, callbacks=[custom_callback] )\nmodel.save_weights(checkpoint_path)\n\nplt.figure(figsize=(5,2))\nplt.title(\"Actors recognitions\")\nfor i in range(len(list_file)):\n img = tf.keras.preprocessing.image.array_to_img(\n list_file[i],\n data_format=None,\n scale=True\n )\n img_array = tf.keras.preprocessing.image.img_to_array(img)\n img_array = tf.expand_dims(img_array, 0)\n predictions = model.predict(img_array)\n score = tf.nn.softmax(predictions[0])\n \n plt.subplot(5, 2, i + 1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(list_file_actual[i])\n \n if predictions[0] > 0.51 :\n plt.xlabel(str(list_label_actual[1]) + \" score: \" + str(predictions[0]))\n else :\n plt.xlabel(str(list_label_actual[0]) + \" score: \" + str(predictions[0]))\n\nplt.show()\n\nplt.plot(history.history['accuracy'], label='accuracy')\nplt.plot(history.history['val_accuracy'], label = 'val_accuracy')\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.ylim([0.5, 1])\nplt.legend(loc='lower right')\n\nplt.show()\n\n\nOutput results:\n\n\n\nAccuracy versus epoaches:\n\n\n" ]
[ 0 ]
[]
[]
[ "classification", "deep_learning", "machine_learning", "python", "tensorflow" ]
stackoverflow_0074608283_classification_deep_learning_machine_learning_python_tensorflow.txt
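A short, generic sanity-check sketch for the entry above (the names train_images and train_labels are placeholders, not the poster's pipeline): when a binary classifier sits near 0.5 accuracy, it is worth confirming the labels are balanced and, if they are not, weighting the rare class before changing the architecture.

import numpy as np

labels = np.array(train_labels)                      # hypothetical 0/1 label array
pos = labels.mean()
print('positive fraction:', pos)                     # far from 0.5 suggests imbalance

class_weight = {0: 1.0, 1: (1 - pos) / max(pos, 1e-6)}
model_unet.fit(train_images, labels,
               batch_size=16,
               epochs=20,
               class_weight=class_weight,
               validation_split=0.2)

Lowering the learning rate alone rarely helps if the labels themselves are noisy, which the question acknowledges may be the case here.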
Q: Printing boolean numpy array without separators I would like to print this array: a = np.array([[0, 1, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=bool) as .8.. 8888 .... .... without iterating over each element in a double loop. A terse function like this one: def showGrid(g): print(np.vectorize(lambda x: '8' if x else '.')(g)) but without standard separators: [['.' '8' '.' '.'] ['8' '8' '8' '8'] ['.' '.' '.' '.'] ['.' '.' '.' '.']] I couldn't find a way to make np.set_printoptions drop the standard numpy array formatting separators. Is that possible? If not, pointers to any relevant numpy trickery would be appreciated. A: First, use np.where to optimize your current code, which is the same and faster than the function wrapped with np.vectorize: >>> np.where(a, '8', '.') array([['.', '8', '.', '.'], ['8', '8', '8', '8'], ['.', '.', '.', '.'], ['.', '.', '.', '.']], dtype='<U1') To concatenate the characters in each line, I prefer to use ndarray.view, which will create a view at a very low cost. It treats all characters in each line as a string with a length of 4: >>> np.where(a, '8', '.').view('<U4') array([['.8..'], ['8888'], ['....'], ['....']], dtype='<U4') Then use ndarray.ravel() or ndarray.flat to unpack the flat results into the print function, with the newline character as the separator: >>> print(*np.where(a, '8', '.').view('<U4').flat, sep='\n') .8.. 8888 .... .... Or use str.join to get the complete string: >>> print('\n'.join(np.where(a, '8', '.').view('<U4').flat)) .8.. 8888 .... ....
Printing boolean numpy array without separators
I would like to print this array: a = np.array([[0, 1, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=bool) as .8.. 8888 .... .... without iterating over each element in a double loop. A terse function like this one: def showGrid(g): print(np.vectorize(lambda x: '8' if x else '.')(g)) but without standard separators: [['.' '8' '.' '.'] ['8' '8' '8' '8'] ['.' '.' '.' '.'] ['.' '.' '.' '.']] I couldn't find a way to make np.set_printoptions drop the standard numpy array formatting separators. Is that possible? If not, pointers to any relevant numpy trickery would be appreciated.
[ "First, use np.where to optimize your current code, which is the same and faster than the function wrapped with np.vectorize:\n>>> np.where(a, '8', '.')\narray([['.', '8', '.', '.'],\n ['8', '8', '8', '8'],\n ['.', '.', '.', '.'],\n ['.', '.', '.', '.']], dtype='<U1')\n\nTo concatenate the characters in each line, I prefer to use ndarray.view, which will create a view at a very low cost. It treats all characters in each line as a string with a length of 4:\n>>> np.where(a, '8', '.').view('<U4')\narray([['.8..'],\n ['8888'],\n ['....'],\n ['....']], dtype='<U4')\n\nThen use ndarray.ravel() or ndarray.flat to unpack the flat results into the print function, with the newline character as the separator:\n>>> print(*np.where(a, '8', '.').view('<U4').flat, sep='\\n')\n.8..\n8888\n....\n....\n\nOr use str.join to get the complete string:\n>>> print('\\n'.join(np.where(a, '8', '.').view('<U4').flat))\n.8..\n8888\n....\n....\n\n" ]
[ 3 ]
[]
[]
[ "arrays", "numpy", "pretty_print", "python" ]
stackoverflow_0074608633_arrays_numpy_pretty_print_python.txt
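A slightly shorter variant of the same idea, offered as a sketch: keep the np.where mapping and let str.join glue each row (numpy's single-character strings are str subclasses, so ''.join accepts them directly).

out = np.where(a, '8', '.')
print('\n'.join(map(''.join, out)))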
Q: How to create new column based on average with certain conditions and ignore null in python dataframe? I have 2 tables date James Jamie John Allysia Jean 2022-01-01 NaN 6 5 4 3 2022-01-02 7 6 7 NaN 5 names groupings James guy John guy Jamie girl Allysia girl Jean girl into date James Jamie John Allysia Jean girl guy 2022-01-01 NaN 6 5 4 3 5 5 2022-01-02 7 6 7 NaN 5 5.5 7 threshold= >3 I want to create a new column grouped by guys /girls scores where the score taken is above the threshold and get their mean while ignoring NaN and scores that does not fit the threshold. I do not know on how to replace scores that is below the threshold with nan. I tried doing to do a group by to get them in to a list and create new row with mean. groupingseries = groupings.groupby(['grouping'])['names'].apply(list) for k,s in zip(groupingseries.keys(),groupingseries): try: its='"'+',"'.join(s)+'"' df[k]=df[s].mean() except: print('not in item') Not sure why the results return NaN for girl and guy. Please do help. A: Assuming df and groupings your two input DataFrames: out = df.join(df.groupby(df.columns.map(groupings.set_index('names')['groupings']), axis=1).sum() ) Output: date James Jamie John Allysia Jean girl guy 0 2022-01-01 NaN 6 5 4.0 3 13.0 5.0 1 2022-01-02 7.0 6 7 NaN 5 11.0 14.0
How to create new column based on average with certain conditions and ignore null in python dataframe?
I have 2 tables date James Jamie John Allysia Jean 2022-01-01 NaN 6 5 4 3 2022-01-02 7 6 7 NaN 5 names groupings James guy John guy Jamie girl Allysia girl Jean girl into date James Jamie John Allysia Jean girl guy 2022-01-01 NaN 6 5 4 3 5 5 2022-01-02 7 6 7 NaN 5 5.5 7 threshold= >3 I want to create a new column grouped by guys /girls scores where the score taken is above the threshold and get their mean while ignoring NaN and scores that does not fit the threshold. I do not know on how to replace scores that is below the threshold with nan. I tried doing to do a group by to get them in to a list and create new row with mean. groupingseries = groupings.groupby(['grouping'])['names'].apply(list) for k,s in zip(groupingseries.keys(),groupingseries): try: its='"'+',"'.join(s)+'"' df[k]=df[s].mean() except: print('not in item') Not sure why the results return NaN for girl and guy. Please do help.
[ "Assuming df and groupings your two input DataFrames:\nout = df.join(df.groupby(df.columns.map(groupings.set_index('names')['groupings']),\n axis=1).sum()\n )\n\nOutput:\n date James Jamie John Allysia Jean girl guy\n0 2022-01-01 NaN 6 5 4.0 3 13.0 5.0\n1 2022-01-02 7.0 6 7 NaN 5 11.0 14.0\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074608652_python.txt
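A hedged sketch for the part of the question the answer above leaves open: that answer sums per group and does not apply the > 3 threshold. Assuming the same df and groupings frames, one approach is to mask values at or below the threshold with NaN (so mean skips them) and then average per group label, per row.

import numpy as np
import pandas as pd

df = pd.DataFrame({'date': ['2022-01-01', '2022-01-02'],
                   'James': [np.nan, 7], 'Jamie': [6, 6], 'John': [5, 7],
                   'Allysia': [4, np.nan], 'Jean': [3, 5]})
groupings = pd.DataFrame({'names': ['James', 'John', 'Jamie', 'Allysia', 'Jean'],
                          'groupings': ['guy', 'guy', 'girl', 'girl', 'girl']})

threshold = 3
scores = df.set_index('date')
masked = scores.where(scores > threshold)      # at/below threshold -> NaN, ignored by mean
group_of = groupings.set_index('names')['groupings']
means = masked.T.groupby(group_of).mean().T    # one column per group label ('girl', 'guy')
print(df.join(means, on='date'))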
Q: Split pandas dataframe into groups of 20 and assign column value to each group I have a df as follows. TimeStamp,Value t1,akak t2,bb t3,vvv t5,ff t6,44 t7,99 t8,kfkkf t9,ff t10,oo I want to split df into sizes of 2 rows and assign class as group number. TimeStamp,Value, class t1,akak,c1 t2,bb,c1 t3,vvv,c2 t4,ff,c2 t5,44,c3 t6,99,c3 t7,kfkkf,c4 t8,ff,c4 t9,oo,c5 t10,oo,c5 One approach is to iterate and do it one at a time. Was thinking of there is inbuilt way in pandas to do it A: You could do: df['class'] = [i//2 for i in range(len(df))] But this is a pretty limited answer; you might want to apply a certain value on your other columns to get the group ID, or you may have a specific label in mind to apply for the class column, in which case you could follow up with a map function on the series to turn those numbers into something else. A: You can use this to achieve what you want: df["class"] = [f"c{(i // 2) + 1}" for i in range(df.shape[0])] A: Another possible solution: df['class'] = ['c' + str(1+x) for x in np.repeat(range(int(len(df)/2)), 2)] Output: TimeStamp Value class 0 t1 akak c1 1 t2 bb c1 2 t3 vvv c2 3 t4 ff c2 4 t5 ff c3 5 t6 44 c3 6 t7 99 c4 7 t8 kfkkf c4 8 t9 ff c5 9 t10 oo c5 A: You can vectorize the operation with numpy: import numpy as np df['class'] = np.core.defchararray.add('c', (np.arange(len(df))//2+1).astype(str)) Or, with a Series: df['class'] = pd.Series(np.arange(len(df))//2+1, index=df.index, dtype='string').radd('c') Output: TimeStamp Value class 0 t1 akak c1 1 t2 bb c1 2 t3 vvv c2 3 t4 ff c2 4 t5 ff c3 5 t6 44 c3 6 t7 99 c4 7 t8 kfkkf c4 8 t9 ff c5 9 t10 oo c5 A: try this: df.assign(Class=(df.index//2+1).map('c{}'.format)) >>> TimeStamp Value Class 0 t1 akak c1 1 t2 bb c1 2 t3 vvv c2 3 t5 ff c2 4 t6 44 c3 5 t7 99 c3 6 t8 kfkkf c4 7 t9 ff c4 8 t10 oo c5
Split pandas dataframe into groups of 20 and assign column value to each group
I have a df as follows. TimeStamp,Value t1,akak t2,bb t3,vvv t5,ff t6,44 t7,99 t8,kfkkf t9,ff t10,oo I want to split df into sizes of 2 rows and assign class as group number. TimeStamp,Value, class t1,akak,c1 t2,bb,c1 t3,vvv,c2 t4,ff,c2 t5,44,c3 t6,99,c3 t7,kfkkf,c4 t8,ff,c4 t9,oo,c5 t10,oo,c5 One approach is to iterate and do it one at a time. Was thinking of there is inbuilt way in pandas to do it
[ "You could do:\ndf['class'] = [i//2 for i in range(len(df))]\nBut this is a pretty limited answer; you might want to apply a certain value on your other columns to get the group ID, or you may have a specific label in mind to apply for the class column, in which case you could follow up with a map function on the series to turn those numbers into something else.\n", "You can use this to achieve what you want:\ndf[\"class\"] = [f\"c{(i // 2) + 1}\" for i in range(df.shape[0])]\n\n\n", "Another possible solution:\ndf['class'] = ['c' + str(1+x) for x in np.repeat(range(int(len(df)/2)), 2)]\n\nOutput:\n TimeStamp Value class\n0 t1 akak c1\n1 t2 bb c1\n2 t3 vvv c2\n3 t4 ff c2\n4 t5 ff c3\n5 t6 44 c3\n6 t7 99 c4\n7 t8 kfkkf c4\n8 t9 ff c5\n9 t10 oo c5\n\n", "You can vectorize the operation with numpy:\nimport numpy as np\n\ndf['class'] = np.core.defchararray.add('c', (np.arange(len(df))//2+1).astype(str))\n\nOr, with a Series:\ndf['class'] = pd.Series(np.arange(len(df))//2+1, index=df.index, dtype='string').radd('c')\n\nOutput:\n TimeStamp Value class\n0 t1 akak c1\n1 t2 bb c1\n2 t3 vvv c2\n3 t4 ff c2\n4 t5 ff c3\n5 t6 44 c3\n6 t7 99 c4\n7 t8 kfkkf c4\n8 t9 ff c5\n9 t10 oo c5\n\n", "try this:\ndf.assign(Class=(df.index//2+1).map('c{}'.format))\n>>>\n\nTimeStamp Value Class\n0 t1 akak c1\n1 t2 bb c1\n2 t3 vvv c2\n3 t5 ff c2\n4 t6 44 c3\n5 t7 99 c3\n6 t8 kfkkf c4\n7 t9 ff c4\n8 t10 oo c5\n\n" ]
[ 0, 0, 0, 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074607663_dataframe_pandas_python.txt
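A small sketch generalizing the answers above to any group size (the title mentions groups of 20 while the example uses 2), with an assumed toy frame; only n changes between the two cases.

import numpy as np
import pandas as pd

df = pd.DataFrame({'TimeStamp': [f't{i}' for i in range(1, 11)],
                   'Value': list('abcdefghij')})

n = 2  # rows per group; set n = 20 for the case in the title
df['class'] = [f'c{i // n + 1}' for i in range(len(df))]
print(df)

# If the goal is to process each chunk, group on the same key directly:
for key, chunk in df.groupby(np.arange(len(df)) // n):
    print(key, len(chunk))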
Q: ImportError: cannot import name 'TFTModel' from 'darts.models' I receive the error message "ImportError: cannot import name 'TFTModel' from 'darts.models'" when trying to import the attribute "TFTModel" from darts using the line from darts.models import TFTModel I have tried using "pip install statsforecast==0.6.0" however that leads to me not being able to use darts, since the current darts version isn't compatible with that statsforecast version, and when I try to downgrade the darts version I get an error with pandas that just leads me to upgrade my darts version, which puts me in a loop. The versions I'm using are darts 0.22.0, statsforecast 1.3.2, Windows 10 x64 version 21H2, Python 3.9.13 A: This could possibly mean there is no TFTModel in that version. Have you checked the documentation and other versions? A: I think I figured it out. As seen in the documentation, it says it's in darts.models.forecasting, so try: from darts.models.forecasting import TFTModel
ImportError: cannot import name 'TFTModel' from 'darts.models'
I receive the error message "ImportError: cannot import name 'TFTModel' from 'darts.models'" when trying to import the attribute "TFTModel" from darts using the line from darts.models import TFTModel I have tried using "pip install statsforecast==0.6.0" however that leads to me not being able to use darts, since the current darts version isn't compatible with that statsforecast version, and when I try to downgrade the darts version I get an error with pandas that just leads me to upgrade my darts version, which puts me in a loop. The versions I'm using are darts 0.22.0, statsforecast 1.3.2, Windows 10 x64 version 21H2, Python 3.9.13
[ "This could possibly mean there is no TFTModel in that version. Have you checked documentation and other versions?\n", "I think I figured it out.\nAs seen in the documentation it says it's in darts.models.forcasting, so try:\nfrom darts.models.forcasting import TFTModel\n\n" ]
[ 0, 0 ]
[]
[]
[ "libraries", "python" ]
stackoverflow_0074608718_libraries_python.txt
Q: Return the last not null, nan, and non empty value from a python list How can I return the last not null, not empty, and not nan from a list? if not exists, then return "null" or a customized message! I have tried these pieces of codes and none of them is bullet proof: import numpy as np listF=[0.0,np. NaN,2,0,0.0,""] print([j for j in listF if j][-1])#returns 2 while should retrun 0 listF=[0.0,np. NaN,np. NaN,0,0.0,np. NaN] print([j for j in listF if j][-1])#returns nan while should return 0 listF=[] print([j for j in listF if j][-1])#returns Index out of range listF=[1,np. NaN,np. NaN,0,0.0,np. NaN] print([j for j in listF if j][-1])#returns nan while should return 0 listF=[np. NaN,np. NaN,np. NaN] print([j for j in listF if j][-1])#returns nan while should return "null" A: You can use math.isnan (or numpy.isnan) to check the NA status. Combine it with a generator and next with a default value to handle cases without valid value: from math import isnan def last_valid(lst): return next((x for x in reversed(lst) if x and not isnan(x)), None) # or 'null' last_valid([]) # None last_valid([0.0,np. NaN,2,0,0.0,""]) # 2 last_valid([1,np. NaN,np. NaN,0,0.0,np. NaN]) # 1 last_valid([0.0,np. NaN,np. NaN,0,0.0,np. NaN]) # None accepting 0 as valid: Given your update of the rule (0 was initially described as non-valid), you can convert to string in the first test to consider 0 valid: from math import isnan def last_valid(lst): return next((x for x in reversed(lst) if str(x) and not isnan(x)), '"null"') last_valid([])) # '"null"' last_valid([0.0,np. NaN,2,0,0.0,""]) # 0.0 last_valid([1,np. NaN,np. NaN,0,0.0,np. NaN]) # 0.0 last_valid([0.0,np. NaN,np. NaN,0,0.0,np. NaN]) # 0.0 last_valid([np. NaN,np. NaN,np. NaN]) # '"null"' A: import numpy as np import pandas as pd listF=[0.0,np. NaN,2,0,0.0,""] output = 'empty list' for tmp in listF[::-1]: if not tmp or pd.isna(tmp): continue else: output = tmp break output A: This is the oneliner you almost got. You have to keep in mind that nan is not false when in a bolean expression! But always returns false if compared to itself :) print([i for i in list if i and not i!=i][-1])
Return the last not null, nan, and non empty value from a python list
How can I return the last not null, not empty, and not nan from a list? if not exists, then return "null" or a customized message! I have tried these pieces of codes and none of them is bullet proof: import numpy as np listF=[0.0,np. NaN,2,0,0.0,""] print([j for j in listF if j][-1])#returns 2 while should retrun 0 listF=[0.0,np. NaN,np. NaN,0,0.0,np. NaN] print([j for j in listF if j][-1])#returns nan while should return 0 listF=[] print([j for j in listF if j][-1])#returns Index out of range listF=[1,np. NaN,np. NaN,0,0.0,np. NaN] print([j for j in listF if j][-1])#returns nan while should return 0 listF=[np. NaN,np. NaN,np. NaN] print([j for j in listF if j][-1])#returns nan while should return "null"
[ "You can use math.isnan (or numpy.isnan) to check the NA status. Combine it with a generator and next with a default value to handle cases without valid value:\nfrom math import isnan\n\ndef last_valid(lst):\n return next((x for x in reversed(lst) if x and not isnan(x)), None) # or 'null'\n\nlast_valid([])\n# None\n\nlast_valid([0.0,np. NaN,2,0,0.0,\"\"])\n# 2\n\nlast_valid([1,np. NaN,np. NaN,0,0.0,np. NaN])\n# 1\n\nlast_valid([0.0,np. NaN,np. NaN,0,0.0,np. NaN])\n# None\n\naccepting 0 as valid:\nGiven your update of the rule (0 was initially described as non-valid), you can convert to string in the first test to consider 0 valid:\nfrom math import isnan\ndef last_valid(lst):\n return next((x for x in reversed(lst) if str(x) and not isnan(x)), '\"null\"')\n\nlast_valid([]))\n# '\"null\"'\n\nlast_valid([0.0,np. NaN,2,0,0.0,\"\"])\n# 0.0\n\nlast_valid([1,np. NaN,np. NaN,0,0.0,np. NaN])\n# 0.0\n\nlast_valid([0.0,np. NaN,np. NaN,0,0.0,np. NaN])\n# 0.0\n\nlast_valid([np. NaN,np. NaN,np. NaN])\n# '\"null\"'\n\n", "import numpy as np\nimport pandas as pd\n\nlistF=[0.0,np. NaN,2,0,0.0,\"\"]\n\noutput = 'empty list'\nfor tmp in listF[::-1]:\n if not tmp or pd.isna(tmp):\n continue\n else:\n output = tmp\n break\noutput\n\n", "This is the oneliner you almost got.\nYou have to keep in mind that nan is not false when in a bolean expression!\nBut always returns false if compared to itself :)\nprint([i for i in list if i and not i!=i][-1])\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "arrays", "list", "numpy", "python" ]
stackoverflow_0074608745_arrays_list_numpy_python.txt
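A follow-up sketch, not from the answers above: math.isnan raises TypeError for non-empty strings, so a slightly more defensive variant uses pandas.isna (which also handles None and NaT) while still treating 0 and 0.0 as valid values.

import numpy as np
import pandas as pd

def last_valid(lst, default='"null"'):
    for x in reversed(lst):
        if isinstance(x, str):
            if x:                      # keep non-empty strings, skip ""
                return x
        elif not pd.isna(x):           # skips NaN/None/NaT; keeps 0 and 0.0
            return x
    return default

print(last_valid([0.0, np.nan, 2, 0, 0.0, ""]))   # 0.0
print(last_valid([np.nan, np.nan, np.nan]))       # "null"
print(last_valid([]))                             # "null"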
Q: Pandas code to round values greater than 0.5 to 1 I am having a dataframe column and want to round it. If the value is equal to 0.5 it is getting rounded as 0, but i want it to be 1 if the value is greater than or equal to 0.5. Could someone please help A: As mentionned by @ALoolz in a comment, pandas (and python in general) is using a rounding to minimize bias when summing different elements, called Rounding half to even. As Wikipedi says: Rounding half to even A tie-breaking rule without positive/negative bias and without bias toward/away from zero is round half to even. By this convention, if the fractional part of x is 0.5, then y is the even integer nearest to x. Thus, for example, +23.5 becomes +24, as does +24.5; however, −23.5 becomes −24, as does −24.5. This function minimizes the expected error when summing over rounded figures, even when the inputs are mostly positive or mostly negative, provided they are neither mostly even nor mostly odd. This variant of the round-to-nearest method is also called convergent rounding, statistician's rounding, Dutch rounding, Gaussian rounding, odd–even rounding,[6] or bankers' rounding. This is the default rounding mode used in IEEE 754 operations for results in binary floating-point formats, and the more sophisticated mode[clarification needed] used when rounding to significant figures. By eliminating bias, repeated addition or subtraction of independent numbers, as in a one-dimensional random walk, will give a rounded result with an error that tends to grow in proportion to the square root of the number of operations rather than linearly. However, this rule distorts the distribution by increasing the probability of evens relative to odds. Typically this is less important[citation needed] than the biases that are eliminated by this method. In your case, if you want to use the rounding used "in school" called Rounding half up, you could apply a function that adds 0.5 and then truncate: lambda x: int(x + 0.5) So, you should do when rounding your DataFrame: df_rounding_half_up = df.applymap(lambda x: int(x + 0.5)) ... while Python-default round up to even would be something like: df_rounding_half_to_even = df.applymap(lambda x: int(round(x, 0)))
Pandas code to round values greater than 0.5 to 1
I am having a dataframe column and want to round it. If the value is equal to 0.5 it is getting rounded as 0, but i want it to be 1 if the value is greater than or equal to 0.5. Could someone please help
[ "As mentionned by @ALoolz in a comment, pandas (and python in general) is using a rounding to minimize bias when summing different elements, called Rounding half to even.\nAs Wikipedi says:\n\nRounding half to even\nA tie-breaking rule without positive/negative\nbias and without bias toward/away from zero is round half to even. By\nthis convention, if the fractional part of x is 0.5, then y is the\neven integer nearest to x. Thus, for example, +23.5 becomes +24, as\ndoes +24.5; however, −23.5 becomes −24, as does −24.5. This function\nminimizes the expected error when summing over rounded figures, even\nwhen the inputs are mostly positive or mostly negative, provided they\nare neither mostly even nor mostly odd.\nThis variant of the round-to-nearest method is also called convergent\nrounding, statistician's rounding, Dutch rounding, Gaussian rounding,\nodd–even rounding,[6] or bankers' rounding.\nThis is the default rounding mode used in IEEE 754 operations for\nresults in binary floating-point formats, and the more sophisticated\nmode[clarification needed] used when rounding to significant figures.\nBy eliminating bias, repeated addition or subtraction of independent\nnumbers, as in a one-dimensional random walk, will give a rounded\nresult with an error that tends to grow in proportion to the square\nroot of the number of operations rather than linearly.\nHowever, this rule distorts the distribution by increasing the\nprobability of evens relative to odds. Typically this is less\nimportant[citation needed] than the biases that are eliminated by this\nmethod.\n\nIn your case, if you want to use the rounding used \"in school\" called Rounding half up, you could apply a function that adds 0.5 and then truncate: lambda x: int(x + 0.5)\nSo, you should do when rounding your DataFrame:\ndf_rounding_half_up = df.applymap(lambda x: int(x + 0.5))\n\n... while Python-default round up to even would be something like:\ndf_rounding_half_to_even = df.applymap(lambda x: int(round(x, 0)))\n\n" ]
[ 0 ]
[ "There seems to be an issue in pandas.round function which does not round 0.5 to 1. In that case you could use built-in round with applymap\nimport pandas as pd\nimport numpy as np\n\ndef getRound(x):\n return(round(x))\n\ndf = pd.DataFrame(np.random.random([3, 3]),\ncolumns=['A', 'B', 'C'], index=['first', 'second', 'third'])\n\ndf will look like this\n A B C\nfirst 0.474011 0.082135 0.476545\nsecond 0.313154 0.265458 0.523410\nthird 0.057491 0.141635 0.037582\n\nChange one value to be 0.5\ndf['A'][1]=0.5\n\nApply lambda function\ndf.applymap(getRound)\n\nOutput:\n A B C\nfirst 0.0 0.0 0.0\nsecond 1.0 0.0 1.0\nthird 0.0 0.0 0.0\n\n" ]
[ -2 ]
[ "pandas", "python", "python_3.x" ]
stackoverflow_0054990394_pandas_python_python_3.x.txt
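A short sketch of a vectorized alternative to the applymap approach above, assuming a plain numeric column: floor(x + 0.5) implements "round half up" (every exact .5 moves toward positive infinity), while Series.round keeps Python's round-half-to-even behaviour.

import numpy as np
import pandas as pd

s = pd.Series([0.4, 0.5, 1.5, 2.5, -0.5])

print(s.round(0).tolist())         # [0.0, 0.0, 2.0, 2.0, -0.0]  round half to even
print(np.floor(s + 0.5).tolist())  # [0.0, 1.0, 2.0, 3.0, 0.0]   round half up

# Note: values stored inexactly in binary floating point may not be exactly .5
# even when they print as .5, so either method can still surprise at the margins.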
Q: How to define a matrix by vectorization (without for loop) in numpy? I want to define an NxN matrix A, whose element A(i,j) is sin(i^2 + j). In MATLAB, I can quickly define this matrix by employing vectorization. N = 2000; ii = 1:N; A = sin(ii.^2 + ii'); How to achieve this in Python? Right now, I am using for loops, which are slow. import numpy N = 2000; A = numpy.empty((N, N)); for i in range(1,N+1): for j in range(1,N+1): A[i-1][j-1] = numpy.sin(j + numpy.power(i, 2.)) A: This can be done with the use of numpy which allows for vectorization of arrays and array-like operations. I'm sure this can be reduced in size, but we can create an array in x and y before making a grid out of these points. We then apply the entire grid to the function A=sin(i**2+j). We then get (I assume ii' is the same as the transpose of ii?) import numpy as np # Import library import matplotlib.pyplot as plt # To visualize N = 2000 # NxN x = np.linspace(0,1,N)*np.pi #Define two arrays of length N y = np.linspace(0,1,N)*np.pi X, Y = np.meshgrid(x,y) #Create a NxN grid with arrays A = np.sin(X**2+Y) # Apply grid to function plt.imshow(A.T, origin='lower', aspect='auto') #Show that it has worked plt.show() which gives (updated to match your result in the range [0,pi]) A: Many things you can do in MATLAB have exact translations in numpy. Here's a handy cheat-sheet N = 2000 # remains unchanged ii = np.arange(1, N+1) # In the cheat-sheet section Linear Algebra Equivalents Now, there's a slight difference. In MATLAB, everything is treated as a matrix, so a 1-dimensional array is just a 1xN matrix, so transposing it is done just like any other matrix. In numpy, 1-dimensional arrays can't be transposed, so let's add a dummy dimension to ii: ii = ii[None, :] Now, ii is a 1xN matrix, so we can get A as: A = np.sin(ii**2 + ii.T); Numpy takes care of broadcasting the shapes (1, N) and (N, 1) and gives you a result that is (N, N) Note that a' in MATLAB is the conjugate transpose, which would be a.conj().T in numpy, but since these are all real numbers it makes no difference
How to define a matrix by vectorization (without for loop) in numpy?
I want to define an NxN matrix A, whose element A(i,j) is sin(i^2 + j). In MATLAB, I can quickly define this matrix by employing vectorization. N = 2000; ii = 1:N; A = sin(ii.^2 + ii'); How to achieve this in Python? Right now, I am using for loops, which are slow. import numpy N = 2000; A = numpy.empty((N, N)); for i in range(1,N+1): for j in range(1,N+1): A[i-1][j-1] = numpy.sin(j + numpy.power(i, 2.))
[ "This can be done with the use of numpy which allows for vectorization of arrays and array-like operations. I'm sure this can be reduced in size, but we can create an array in x and y before making a grid out of these points. We then apply the entire grid to the function A=sin(i**2+j). We then get (I assume ii' is the same as the transpose of ii?)\nimport numpy as np # Import library\nimport matplotlib.pyplot as plt # To visualize\nN = 2000 # NxN \nx = np.linspace(0,1,N)*np.pi #Define two arrays of length N\ny = np.linspace(0,1,N)*np.pi\n\nX, Y = np.meshgrid(x,y) #Create a NxN grid with arrays\n\nA = np.sin(X**2+Y) # Apply grid to function\n\nplt.imshow(A.T, origin='lower', aspect='auto') #Show that it has worked\nplt.show()\n\nwhich gives (updated to match your result in the range [0,pi])\n\n", "Many things you can do in MATLAB have exact translations in numpy. Here's a handy cheat-sheet\nN = 2000 # remains unchanged\nii = np.arange(1, N+1) # In the cheat-sheet section Linear Algebra Equivalents \n\nNow, there's a slight difference. In MATLAB, everything is treated as a matrix, so a 1-dimensional array is just a 1xN matrix, so transposing it is done just like any other matrix. In numpy, 1-dimensional arrays can't be transposed, so let's add a dummy dimension to ii:\nii = ii[None, :]\n\nNow, ii is a 1xN matrix, so we can get A as:\nA = np.sin(ii**2 + ii.T);\n\nNumpy takes care of broadcasting the shapes (1, N) and (N, 1) and gives you a result that is (N, N)\n Note that a' in MATLAB is the conjugate transpose, which would be a.conj().T in numpy, but since these are all real numbers it makes no difference\n" ]
[ 3, 3 ]
[]
[]
[ "numpy", "python", "vectorization" ]
stackoverflow_0074608784_numpy_python_vectorization.txt
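A sketch of the literal translation of the loop version (the first answer rescales the inputs to [0, pi], which changes the matrix): with 1-based integer indices, broadcasting a column against a row reproduces A[i-1, j-1] = sin(i**2 + j) with no Python loop.

import numpy as np

N = 2000
i = np.arange(1, N + 1)
A = np.sin(i[:, None] ** 2 + i[None, :])   # (N,1) + (N,) broadcasts to (N,N)

# spot-check against the double-loop definition
assert np.isclose(A[2, 4], np.sin(3 ** 2 + 5))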
Q: Apply Ordinal Encoding to an entire column I'm working with a dataset of movies to run a regression and predict the gross. But because some columns have String values, I'm doing an Ordinal Encoding before I do the split and run the regression. One example is the column containing the movie titles(shown below): Series_Title 1 The Shawshank Redemption 2 The Godfather 3 The Dark Knight ... 1001 The 39 Steps Here's my code to run Ordinal Encoding on these first 3 movies: df2 = pd.DataFrame({'title': ['The Shawshank Redemption','The Godfather','The Dark Knight']}) df2['title'] = df1.Series_Title.astype("category").cat.codes df2 OUTPUT: 0 875 1 786 2 766 Since the Series_Title column has 1,000 values, how can I apply Ordinal Encoding to the entire Series_Title column? A: Example data = {'col': {1: 'A', 2: 'B', 3: 'C', 4:'A'}} df = pd.DataFrame(data) df: col 1 A 2 B 3 C 4 A Ordinal Encoding df['col'] = pd.factorize(df['col'])[0] result(df) col 1 0 2 1 3 2 4 0
Apply Ordinal Encoding to an entire column
I'm working with a dataset of movies to run a regression and predict the gross. But because some columns have String values, I'm doing an Ordinal Encoding before I do the split and run the regression. One example is the column containing the movie titles(shown below): Series_Title 1 The Shawshank Redemption 2 The Godfather 3 The Dark Knight ... 1001 The 39 Steps Here's my code to run Ordinal Encoding on these first 3 movies: df2 = pd.DataFrame({'title': ['The Shawshank Redemption','The Godfather','The Dark Knight']}) df2['title'] = df1.Series_Title.astype("category").cat.codes df2 OUTPUT: 0 875 1 786 2 766 Since the Series_Title column has 1,000 values, how can I apply Ordinal Encoding to the entire Series_Title column?
[ "Example\ndata = {'col': {1: 'A', 2: 'B', 3: 'C', 4:'A'}}\ndf = pd.DataFrame(data)\n\ndf:\n col\n1 A\n2 B\n3 C\n4 A\n\nOrdinal Encoding\ndf['col'] = pd.factorize(df['col'])[0]\n\nresult(df)\n col\n1 0\n2 1\n3 2\n4 0\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "linear_regression", "machine_learning", "pandas", "python" ]
stackoverflow_0074608620_dataframe_linear_regression_machine_learning_pandas_python.txt
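A sketch applying the same idea to the whole column rather than a 3-row copy, plus a scikit-learn alternative; df1 and the Series_Title column are assumptions carried over from the question, and note that ordinal codes impose an arbitrary order on titles, which a linear regression will treat as meaningful.

import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

df1 = pd.DataFrame({'Series_Title': ['The Shawshank Redemption', 'The Godfather',
                                     'The Dark Knight', 'The 39 Steps']})

# pandas: one integer code per unique title, applied to every row of the column
df1['title_code'] = df1['Series_Title'].astype('category').cat.codes

# scikit-learn: keeps the fitted encoder around for new data / inverse_transform
enc = OrdinalEncoder()
df1['title_code_sk'] = enc.fit_transform(df1[['Series_Title']]).ravel()
print(df1)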
Q: Single source of truth for Python project version in presence of pyproject.toml The pyproject.toml specification affords the ability to specify the project version, e.g. [project] name = "foo" version = "0.0.1" However, it is also a common Python idiom to put __version__ = "0.0.1" in foo/__init__.py so that users can query it. Is there a standard way of extracting the version from the pyproject.toml and getting it into the foo/__init__.py? A: There are two approaches you can take here. Keep version in pyproject.toml and get it from the package metadata in the source code. So, in your foo/__init__.py or wherever: from importlib.metadata import version __version__ = version(__package__) importlib.metadata.version is available since Python 3.8. For earlier Python versions, you can do similar with the importlib_metadata backport. Keep the version in the source code and instruct the build system to get it from there. For a setuptools build backend, it looks like this in pyproject.toml: [project] dynamic = ["version"] [tool.setuptools.dynamic] version = {attr = "foo.__version__"} My recommendation is actually () ... neither! Don't keep a __version__ attribute in the source code at all. It's an outdated habit which we can do without these days. Version is already a required field in the package metadata, it's redundant to keep the same string as an attribute in the package/module namespace.
Single source of truth for Python project version in presence of pyproject.toml
The pyproject.toml specification affords the ability to specify the project version, e.g. [project] name = "foo" version = "0.0.1" However, it is also a common Python idiom to put __version__ = "0.0.1" in foo/__init__.py so that users can query it. Is there a standard way of extracting the version from the pyproject.toml and getting it into the foo/__init__.py?
[ "There are two approaches you can take here.\n\nKeep version in pyproject.toml and get it from the package metadata in the source code. So, in your foo/__init__.py or wherever:\n\nfrom importlib.metadata import version\n__version__ = version(__package__)\n\nimportlib.metadata.version is available since Python 3.8. For earlier Python versions, you can do similar with the importlib_metadata backport.\n\nKeep the version in the source code and instruct the build system to get it from there. For a setuptools build backend, it looks like this in pyproject.toml:\n\n[project]\ndynamic = [\"version\"]\n\n[tool.setuptools.dynamic]\nversion = {attr = \"foo.__version__\"}\n\nMy recommendation is actually () ... neither! Don't keep a __version__ attribute in the source code at all. It's an outdated habit which we can do without these days. Version is already a required field in the package metadata, it's redundant to keep the same string as an attribute in the package/module namespace.\n" ]
[ 2 ]
[]
[]
[ "pyproject.toml", "python" ]
stackoverflow_0074608905_pyproject.toml_python.txt
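A common hedged refinement of the first approach above: guard the metadata lookup so the import still works in a source checkout that was never pip-installed. The package name "foo" is the assumption carried over from the question.

# foo/__init__.py
from importlib.metadata import PackageNotFoundError, version

try:
    __version__ = version("foo")       # project name from pyproject.toml
except PackageNotFoundError:           # e.g. running from an uninstalled source tree
    __version__ = "0.0.0+unknown"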
Q: Moving a specific column of a dataframe to first column I have a dataframe as follows: s = df.head().to_dict() print(s) {'BoP transfers': {1998: 12.346282212735618, 1999: 19.06438060024298, 2000: 18.24888031473687, 2001: 24.860019912667006, 2002: 32.38242225822908}, 'Current balance': {1998: -6.7953, 1999: -2.9895, 2000: -3.9694, 2001: 1.1716, 2002: 5.7433}, 'Domestic demand': {1998: 106.8610389799729, 1999: 104.70302507466538, 2000: 104.59254229534136, 2001: 103.83532232336977, 2002: 102.81709401489702}, 'Effective exchange rate': {1998: 88.134, 1999: 95.6425, 2000: 99.927725, 2001: 101.92745, 2002: 107.85565}, 'RoR (foreign liabilities)': {1998: 0.0433, 1999: 0.0437, 2000: 0.0542, 2001: 0.0539, 2002: 0.0474}, 'Gross foreign assets': {1998: 19.720897432405103, 1999: 22.66200738564236, 2000: 25.18270679890144, 2001: 30.394226651732836, 2002: 37.26477320359688}, 'Gross domestic income': {1998: 104.9037939043707, 1999: 103.15361867816479, 2000: 103.06777792080423, 2001: 102.85886528974339, 2002: 102.28518242008846}, 'Gross foreign liabilities': {1998: 60.59784839338306, 1999: 61.03308220978983, 2000: 64.01438055825233, 2001: 67.07798172469921, 2002: 70.16108592109364}, 'Inflation rate': {1998: 52.6613, 1999: 19.3349, 2000: 16.0798, 2001: 15.076, 2002: 17.236}, 'Credit': {1998: 0.20269913592846378, 1999: 0.2154280880177353, 2000: 0.282948948505006, 2001: 0.3954812893893278, 2002: 0.3578263032373988}} which can be converted back to its original form using: df = pd.DataFrame.from_dict(s) Suppose, I want to move the column named "Gross foreign liabilities" to the first column. I know this can be done by reindexing. However, in my case the dataframe has 100 columns. How can I move say a specific column the very beginning? I read about pandas pop() method, but in my framework I get an error. A: Here are a few ways I do it: columns = df.columns.tolist() columns.insert(0, columns.pop(columns.index("Gross foreign liabilities"))) df = df.reindex(columns=columns) OR col = ["Gross foreign liabilities"] df = df[col + [x for x in df.columns if x not in col]] A: You can use pop and insert: name = 'Gross foreign liabilities' df.insert(0, name, df.pop(name)) as a function: def move_first(df, name): df.insert(0, name, df.pop(name)) move_first(df, 'Gross foreign liabilities') Output: Gross foreign liabilities BoP transfers Current balance \ 1998 60.597848 12.346282 -6.7953 1999 61.033082 19.064381 -2.9895 2000 64.014381 18.248880 -3.9694 2001 67.077982 24.860020 1.1716 2002 70.161086 32.382422 5.7433 Domestic demand Effective exchange rate RoR (foreign liabilities) \ 1998 106.861039 88.134000 0.0433 1999 104.703025 95.642500 0.0437 2000 104.592542 99.927725 0.0542 2001 103.835322 101.927450 0.0539 2002 102.817094 107.855650 0.0474 Gross foreign assets Gross domestic income Inflation rate Credit 1998 19.720897 104.903794 52.6613 0.202699 1999 22.662007 103.153619 19.3349 0.215428 2000 25.182707 103.067778 16.0798 0.282949 2001 30.394227 102.858865 15.0760 0.395481 2002 37.264773 102.285182 17.2360 0.357826 A: Example data = {'col1': {0: 0, 1: 2}, 'col2': {0: 1, 1: 3}, 'col3': {0: 2, 1: 4}} df = pd.DataFrame(data) df col1 col2 col3 0 0 1 2 1 2 3 4 Code df.insert(0, 'col3', df.pop('col3')) output(df): col3 col1 col2 0 2 0 1 1 4 2 3 make example more simple plz
Moving a specific column of a dataframe to first column
I have a dataframe as follows: s = df.head().to_dict() print(s) {'BoP transfers': {1998: 12.346282212735618, 1999: 19.06438060024298, 2000: 18.24888031473687, 2001: 24.860019912667006, 2002: 32.38242225822908}, 'Current balance': {1998: -6.7953, 1999: -2.9895, 2000: -3.9694, 2001: 1.1716, 2002: 5.7433}, 'Domestic demand': {1998: 106.8610389799729, 1999: 104.70302507466538, 2000: 104.59254229534136, 2001: 103.83532232336977, 2002: 102.81709401489702}, 'Effective exchange rate': {1998: 88.134, 1999: 95.6425, 2000: 99.927725, 2001: 101.92745, 2002: 107.85565}, 'RoR (foreign liabilities)': {1998: 0.0433, 1999: 0.0437, 2000: 0.0542, 2001: 0.0539, 2002: 0.0474}, 'Gross foreign assets': {1998: 19.720897432405103, 1999: 22.66200738564236, 2000: 25.18270679890144, 2001: 30.394226651732836, 2002: 37.26477320359688}, 'Gross domestic income': {1998: 104.9037939043707, 1999: 103.15361867816479, 2000: 103.06777792080423, 2001: 102.85886528974339, 2002: 102.28518242008846}, 'Gross foreign liabilities': {1998: 60.59784839338306, 1999: 61.03308220978983, 2000: 64.01438055825233, 2001: 67.07798172469921, 2002: 70.16108592109364}, 'Inflation rate': {1998: 52.6613, 1999: 19.3349, 2000: 16.0798, 2001: 15.076, 2002: 17.236}, 'Credit': {1998: 0.20269913592846378, 1999: 0.2154280880177353, 2000: 0.282948948505006, 2001: 0.3954812893893278, 2002: 0.3578263032373988}} which can be converted back to its original form using: df = pd.DataFrame.from_dict(s) Suppose, I want to move the column named "Gross foreign liabilities" to the first column. I know this can be done by reindexing. However, in my case the dataframe has 100 columns. How can I move say a specific column the very beginning? I read about pandas pop() method, but in my framework I get an error.
[ "Here are a few ways I do it:\ncolumns = df.columns.tolist()\ncolumns.insert(0, columns.pop(columns.index(\"Gross foreign liabilities\")))\ndf = df.reindex(columns=columns)\n\nOR\ncol = [\"Gross foreign liabilities\"]\ndf = df[col + [x for x in df.columns if x not in col]]\n\n", "You can use pop and insert:\nname = 'Gross foreign liabilities'\ndf.insert(0, name, df.pop(name))\n\nas a function:\ndef move_first(df, name):\n df.insert(0, name, df.pop(name))\n\nmove_first(df, 'Gross foreign liabilities')\n\nOutput:\n Gross foreign liabilities BoP transfers Current balance \\\n1998 60.597848 12.346282 -6.7953 \n1999 61.033082 19.064381 -2.9895 \n2000 64.014381 18.248880 -3.9694 \n2001 67.077982 24.860020 1.1716 \n2002 70.161086 32.382422 5.7433 \n\n Domestic demand Effective exchange rate RoR (foreign liabilities) \\\n1998 106.861039 88.134000 0.0433 \n1999 104.703025 95.642500 0.0437 \n2000 104.592542 99.927725 0.0542 \n2001 103.835322 101.927450 0.0539 \n2002 102.817094 107.855650 0.0474 \n\n Gross foreign assets Gross domestic income Inflation rate Credit \n1998 19.720897 104.903794 52.6613 0.202699 \n1999 22.662007 103.153619 19.3349 0.215428 \n2000 25.182707 103.067778 16.0798 0.282949 \n2001 30.394227 102.858865 15.0760 0.395481 \n2002 37.264773 102.285182 17.2360 0.357826 \n\n", "Example\ndata = {'col1': {0: 0, 1: 2}, 'col2': {0: 1, 1: 3}, 'col3': {0: 2, 1: 4}}\ndf = pd.DataFrame(data)\n\ndf\n col1 col2 col3\n0 0 1 2\n1 2 3 4\n\nCode\ndf.insert(0, 'col3', df.pop('col3'))\n\noutput(df):\n col3 col1 col2\n0 2 0 1\n1 4 2 3\n\nmake example more simple plz\n" ]
[ 1, 0, 0 ]
[]
[]
[ "columnsorting", "pandas", "python" ]
stackoverflow_0074608944_columnsorting_pandas_python.txt
Q: List All Files in a Folder Sitting in a Data Lake I'm trying to get an inventory of all files in a folder, which has a few sub-folders, all of which sit in a data lake. Here is the code that I'm testing. import sys, os import pandas as pd mylist = [] root = "/mnt/rawdata/parent/" path = os.path.join(root, "targetdirectory") for path, subdirs, files in os.walk(path): for name in files: mylist.append(os.path.join(path, name)) df = pd.DataFrame(mylist) print(df) I also tried the sample code from this link: Python list directory, subdirectory, and files I'm working in Azure Databricks. I'm open to using Scala to do the job. So far, nothing has worked for me. Each time, I keep getting an empty dataframe. I believe this is pretty close, but I must be missing something small. Thoughts? A: Databricks File System (DBFS) is a distributed file system mounted into an Azure Databricks workspace and available on Azure Databricks clusters. If you are using local file API you have to reference the Databricks filesystem. Azure Databricks configures each cluster node with a FUSE mount /dbfs that allows processes running on cluster nodes to read and write to the underlying distributed storage layer with local file APIs (see also the documentation). So in the path /dbfs: has to be included: root = "/dbfs/mnt/rawdata/parent/" That is different then working with the Databricks Filesystem Utility (DBUtils). The file system utilities access Databricks File System, making it easier to use Azure Databricks as a file system: dbutils.fs.ls("/mnt/rawdata/parent/") For larger Data Lakes I can recommend a Scala example in the Knowledge Base. Advantage is that it runs the listing for all child leaves distributed, so will work also for bigger directories. A: I got this to work. from azure.storage.blob import BlockBlobService blob_service = BlockBlobService(account_name='your_account_name', account_key='your_account_key') blobs = [] marker = None while True: batch = blob_service.list_blobs('rawdata', marker=marker) blobs.extend(batch) if not batch.next_marker: break marker = batch.next_marker for blob in blobs: print(blob.name) The only prerequisite is that you need to import azure.storage. So, in the Clusters window, click 'Install-New' -> PyPI > package = 'azure.storage'. Finally, click 'Install'. A: This worked for me - finding all the parquet's in starting from a DBFS path: #------ # find parquet files in subdirectories recursively def find_parquets(dbfs_ls_list): parquet_list = [] if isinstance(dbfs_ls_list, str): # allows for user to start recursion with just a path dbfs_ls_list = dbutils.fs.ls(root_dir) parquet_list += find_parquets(dbfs_ls_list) else: for file_data in dbfs_ls_list: if file_data.size == 0 and file_data.name[-1] == '/': # found subdir new_dbdf_ls_list = dbutils.fs.ls(file_data.path) parquet_list += find_parquets(new_dbdf_ls_list) elif '.parquet' in file_data.name: parquet_list.append(file_data.path) return parquet_list #------ root_dir = 'dbfs:/FileStore/my/parent/folder/' file_list = find_parquets(root_dir)
List All Files in a Folder Sitting in a Data Lake
I'm trying to get an inventory of all files in a folder, which has a few sub-folders, all of which sit in a data lake. Here is the code that I'm testing. import sys, os import pandas as pd mylist = [] root = "/mnt/rawdata/parent/" path = os.path.join(root, "targetdirectory") for path, subdirs, files in os.walk(path): for name in files: mylist.append(os.path.join(path, name)) df = pd.DataFrame(mylist) print(df) I also tried the sample code from this link: Python list directory, subdirectory, and files I'm working in Azure Databricks. I'm open to using Scala to do the job. So far, nothing has worked for me. Each time, I keep getting an empty dataframe. I believe this is pretty close, but I must be missing something small. Thoughts?
[ "Databricks File System (DBFS) is a distributed file system mounted into an Azure Databricks workspace and available on Azure Databricks clusters. If you are using local file API you have to reference the Databricks filesystem. Azure Databricks configures each cluster node with a FUSE mount /dbfs that allows processes running on cluster nodes to read and write to the underlying distributed storage layer with local file APIs (see also the documentation).\nSo in the path /dbfs: has to be included:\nroot = \"/dbfs/mnt/rawdata/parent/\"\n\nThat is different then working with the Databricks Filesystem Utility (DBUtils). The file system utilities access Databricks File System, making it easier to use Azure Databricks as a file system:\ndbutils.fs.ls(\"/mnt/rawdata/parent/\")\n\nFor larger Data Lakes I can recommend a Scala example in the Knowledge Base.\nAdvantage is that it runs the listing for all child leaves distributed, so will work also for bigger directories.\n", "I got this to work.\nfrom azure.storage.blob import BlockBlobService \n\nblob_service = BlockBlobService(account_name='your_account_name', account_key='your_account_key')\n\nblobs = []\nmarker = None\nwhile True:\n batch = blob_service.list_blobs('rawdata', marker=marker)\n blobs.extend(batch)\n if not batch.next_marker:\n break\n marker = batch.next_marker\nfor blob in blobs:\n print(blob.name)\n\nThe only prerequisite is that you need to import azure.storage. So, in the Clusters window, click 'Install-New' -> PyPI > package = 'azure.storage'. Finally, click 'Install'.\n", "This worked for me - finding all the parquet's in starting from a DBFS path:\n#------\n# find parquet files in subdirectories recursively\ndef find_parquets(dbfs_ls_list):\n parquet_list = []\n if isinstance(dbfs_ls_list, str):\n # allows for user to start recursion with just a path\n dbfs_ls_list = dbutils.fs.ls(root_dir)\n parquet_list += find_parquets(dbfs_ls_list)\n else:\n for file_data in dbfs_ls_list:\n if file_data.size == 0 and file_data.name[-1] == '/':\n # found subdir\n new_dbdf_ls_list = dbutils.fs.ls(file_data.path)\n parquet_list += find_parquets(new_dbdf_ls_list)\n elif '.parquet' in file_data.name:\n parquet_list.append(file_data.path)\n return parquet_list\n\n#------\nroot_dir = 'dbfs:/FileStore/my/parent/folder/'\nfile_list = find_parquets(root_dir)\n\n" ]
[ 16, 2, 0 ]
[]
[]
[ "azure_data_lake", "azure_databricks", "databricks", "python", "scala" ]
stackoverflow_0058751144_azure_data_lake_azure_databricks_databricks_python_scala.txt
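A sketch generalizing the parquet-specific recursion above into a plain recursive lister; it assumes it runs inside a Databricks notebook where dbutils is defined, and the suffix filter is optional.

def list_files(path, suffix=None):
    """Recursively list file paths under a DBFS/mounted path via dbutils.fs.ls."""
    found = []
    for entry in dbutils.fs.ls(path):
        if entry.size == 0 and entry.name.endswith('/'):      # sub-directory
            found.extend(list_files(entry.path, suffix))
        elif suffix is None or entry.name.endswith(suffix):
            found.append(entry.path)
    return found

all_files = list_files("/mnt/rawdata/parent/")
csv_files = list_files("/mnt/rawdata/parent/", suffix=".csv")

import pandas as pd
df = pd.DataFrame(all_files, columns=["path"])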
Q: Error while downloading the requirements using pip install (setup command: use_2to3 is invalid.) version pip 21.2.4 python 3.6 The command: pip install -r requirements.txt The content of my requirements.txt: mongoengine==0.19.1 numpy==1.16.2 pylint pandas==1.1.5 fawkes The command is failing with this error ERROR: Command errored out with exit status 1: command: /Users/*/Desktop/ml/*/venv/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/setup.py'"'"'; __file__='"'"'/private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-pip-egg-info-97994d6e cwd: /private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/ Complete output (1 lines): error in mongoengine setup command: use_2to3 is invalid. ---------------------------------------- WARNING: Discarding https://*/pypi/packages/mongoengine-0.19.1.tar.gz#md5=68e613009f6466239158821a102ac084 (from https://*/pypi/simple/mongoengine/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. ERROR: Could not find a version that satisfies the requirement mongoengine==0.19.1 (from versions: 0.15.0, 0.19.1) ERROR: No matching distribution found for mongoengine==0.19.1 A: It looks like setuptools>=58 breaks support for use_2to3: setuptools changelog for v58 So you should update setuptools to setuptools<58 or avoid using packages with use_2to3 in the setup parameters. I was having the same problem, pip==19.3.1 A: I install setuptools==58It worked for me. pip install setuptools==58. The error coming from setuptools==69 that previously run on my device. and Finally saved me setuptools version 58 for this error. A: "pip install setuptools==58" worked for me. The setuptools version was 59 when I upgraded ubuntu to 22.04 and its python 3.10. I started a clean virtual environment for an existings django project. It had just two packages: `pip list Package Version pip 22.0.2 setuptools 59.6.0` Then I downgrade the setuptools to 58 as pip install setuptools==58.0.0. After that the pip install -r requirements.txt has not such error above. A: Upgrading MongoEngine to >= 0.20 would also fix the problem as Python2 support (hence use_2to3) was dropped in 0.20 A: This worked for me. pip install --upgrade pip setuptools==57.5.0 A: I'm working on Windows 11 and these solutions didn't work. I installed pybluez2 instead. Your python version >= 3.9 on Windows. pip install pybluez2
Error while downloading the requirements using pip install (setup command: use_2to3 is invalid.)
version pip 21.2.4 python 3.6 The command: pip install -r requirements.txt The content of my requirements.txt: mongoengine==0.19.1 numpy==1.16.2 pylint pandas==1.1.5 fawkes The command is failing with this error ERROR: Command errored out with exit status 1: command: /Users/*/Desktop/ml/*/venv/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/setup.py'"'"'; __file__='"'"'/private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-pip-egg-info-97994d6e cwd: /private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/ Complete output (1 lines): error in mongoengine setup command: use_2to3 is invalid. ---------------------------------------- WARNING: Discarding https://*/pypi/packages/mongoengine-0.19.1.tar.gz#md5=68e613009f6466239158821a102ac084 (from https://*/pypi/simple/mongoengine/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. ERROR: Could not find a version that satisfies the requirement mongoengine==0.19.1 (from versions: 0.15.0, 0.19.1) ERROR: No matching distribution found for mongoengine==0.19.1
[ "It looks like setuptools>=58 breaks support for use_2to3:\nsetuptools changelog for v58\nSo you should update setuptools to setuptools<58 or avoid using packages with use_2to3 in the setup parameters.\nI was having the same problem, pip==19.3.1\n", "I install setuptools==58It worked for me. pip install setuptools==58. The error coming from setuptools==69 that previously run on my device. and Finally saved me setuptools version 58 for this error.\n", "\"pip install setuptools==58\" worked for me. The setuptools version was 59 when I upgraded ubuntu to 22.04 and its python 3.10. I started a clean virtual environment for an existings django project. It had just two packages:\n`pip list\nPackage Version\n\npip 22.0.2\nsetuptools 59.6.0`\nThen I downgrade the setuptools to 58 as pip install setuptools==58.0.0. After that the pip install -r requirements.txt has not such error above.\n", "Upgrading MongoEngine to >= 0.20 would also fix the problem as Python2 support (hence use_2to3) was dropped in 0.20\n", "This worked for me.\npip install --upgrade pip setuptools==57.5.0 \n\n", "I'm working on Windows 11 and these solutions didn't work. I installed pybluez2 instead. Your python version >= 3.9 on Windows.\npip install pybluez2\n\n" ]
[ 152, 47, 2, 1, 1, 0 ]
[]
[]
[ "pip", "python", "python_3.x", "setuptools" ]
stackoverflow_0069100275_pip_python_python_3.x_setuptools.txt
Q: how can you use formatting like '0.2f' but on a string to left align the text? I am trying to make a receipt-esque output, but the problem that I am running into is that the text (which I want on the right side) is not being aligned and it looks very rough. Is there some function where I can format string literals in the way that you can with decimals like format(var, '0.2f')? For the most part I have done some googling and only found advice for formatting floats and ints. From everything that I have seen I may just have to manually align the strings, but if someone has advice it would be greatly appreciated! A: The method str.ljust sounds like what you are interested in. This will pad out the input string with spaces to the length specified. >>> 'test'.ljust(10) 'test ' And if you wanted it to justify to the right side then you could also use the related function str.rjust.
how can you use formatting like '0.2f' but on a string to left align the text?
I am trying to make a receipt-esque output, but the problem that I am running into is that the text (which I want on the right side) is not being aligned and it looks very rough. Is there some function where I can format string literals in the way that you can with decimals like format(var, '0.2f')? For the most part I have done some googling and only found advice for formatting floats and ints. From everything that I have seen I may just have to manually align the strings, but if someone has advice it would be greatly appreciated!
[ "The method str.ljust sounds like what you are interested in. This will pad out the input string with spaces to the length specified.\n>>> 'test'.ljust(10)\n'test '\n\nAnd if you wanted it to justify to the right side then you could also use the related function str.rjust.\n" ]
[ 0 ]
[]
[]
[ "python", "string", "string_formatting" ]
stackoverflow_0074609059_python_string_string_formatting.txt
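A short sketch tying the two ideas in this question together, with assumed receipt data: the same format-spec mini-language that gives '0.2f' also does alignment, so '<' pads a string on the right and '>' pushes a number to the right edge.

items = [("coffee", 3.5), ("bagel", 2.25), ("orange juice", 4.0)]

for name, price in items:
    # name left-aligned in 20 characters, price right-aligned in 8 with 2 decimals
    print(f"{name:<20}{price:>8.2f}")

# the same line without f-strings
print("coffee".ljust(20) + format(3.5, ">8.2f"))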
Q: How do i create code for a vlookup in python? df Season Date Team Team_Season_Code TS L Opponent Opponent_Season_Code OS 2019 20181109 Abilene_Chr 1_2019 94 Home Arkansas_St 15_2019 73 2019 20181115 Abilene_Chr 1_2019 67 Away Denver 82_2019 61 2019 20181122 Abilene_Chr 1_2019 72 N Elon 70_2019 56 2019 20181123 Abilene_Chr 1_2019 73 Away Pacific 224_2019 71 2019 20181124 Abilene_Chr 1_2019 60 N UC_Riverside 306_2019 48 Overall_Season_Avg Team_Season_Code Team TS OS MOV 15_2019 Arkansas_St 70.909091 65.242424 5.666667 70_2019 Elon 73.636364 71.818182 1.818182 82_2019 Denver 74.03125 72.15625 1.875 224_2019 Pacific 78.333333 76.466667 1.866667 306_2019 UC_Riverside 79.545455 78.060606 1.484848 I have these two dataframes and I want to be able to look up the Opponent_Season_Code from df in Overall_Season_Avg - "Team_Season_Code" and bring back "TS" and "OS" to create a new column in df called "OOS" and "OTS" So a new column for row 1 in df should have Column name OOS with data - 65.24... and Column name OTS with data 70.90... In excel its a simple vlookup but i haven't been able to use the solutions that i have found to the vlookup question on overflow so i decided to post my own question. I will also say that the Overall_Season_Avg dataframe was created through by Overall_Season_Avg = df.groupby(['Team_Season_Code', 'Team']).agg({'TS': np.mean, 'OS': np.mean, 'MOV': np.mean}) A: You can use a merge, after reworking a bit Overall_Season_Avg : df.merge(Overall_Season_Avg .set_index(['Team_Season_Code', 'Team']) [['OS', 'TS']].add_prefix('O'), left_on=['Opponent_Season_Code', 'Opponent'], right_index=True, how='left' ) Output: Season Date Team Team_Season_Code TS L Opponent Opponent_Season_Code OS OOS OTS 0 2019 20181109 Abilene_Chr 1_2019 94 Home Arkansas_St 15_2019 73 65.242424 70.909091 1 2019 20181115 Abilene_Chr 1_2019 67 Away Denver 82_2019 61 72.156250 74.031250 2 2019 20181122 Abilene_Chr 1_2019 72 N Elon 70_2019 56 71.818182 73.636364 3 2019 20181123 Abilene_Chr 1_2019 73 Away Pacific 224_2019 71 76.466667 78.333333 4 2019 20181124 Abilene_Chr 1_2019 60 N UC_Riverside 306_2019 48 78.060606 79.545455 merging only on Opponent_Season_Code/Team_Season_Code: df.merge(Overall_Season_Avg .set_index('Team_Season_Code') [['OS', 'TS']].add_prefix('O'), left_on=['Opponent_Season_Code'], right_index=True, how='left' ) Output: Season Date Team Team_Season_Code TS L Opponent Opponent_Season_Code OS OOS OTS 0 2019 20181109 Abilene_Chr 1_2019 94 Home Arkansas_St 15_2019 73 65.242424 70.909091 1 2019 20181115 Abilene_Chr 1_2019 67 Away Denver 82_2019 61 72.156250 74.031250 2 2019 20181122 Abilene_Chr 1_2019 72 N Elon 70_2019 56 71.818182 73.636364 3 2019 20181123 Abilene_Chr 1_2019 73 Away Pacific 224_2019 71 76.466667 78.333333 4 2019 20181124 Abilene_Chr 1_2019 60 N UC_Riverside 306_2019 48 78.060606 79.545455 A: df.merge(Overall_Season_Avg, on=['Team_Season_Code', 'Team'], how='left') and rename column's names or use transform instead agg when make Overall_Season_Avg. but i don remain transform code becuz you don provide reproducible example make simple and reproducible example plz https://stackoverflow.com/help/minimal-reproducible-example
How do i create code for a vlookup in python?
df Season Date Team Team_Season_Code TS L Opponent Opponent_Season_Code OS 2019 20181109 Abilene_Chr 1_2019 94 Home Arkansas_St 15_2019 73 2019 20181115 Abilene_Chr 1_2019 67 Away Denver 82_2019 61 2019 20181122 Abilene_Chr 1_2019 72 N Elon 70_2019 56 2019 20181123 Abilene_Chr 1_2019 73 Away Pacific 224_2019 71 2019 20181124 Abilene_Chr 1_2019 60 N UC_Riverside 306_2019 48 Overall_Season_Avg Team_Season_Code Team TS OS MOV 15_2019 Arkansas_St 70.909091 65.242424 5.666667 70_2019 Elon 73.636364 71.818182 1.818182 82_2019 Denver 74.03125 72.15625 1.875 224_2019 Pacific 78.333333 76.466667 1.866667 306_2019 UC_Riverside 79.545455 78.060606 1.484848 I have these two dataframes and I want to be able to look up the Opponent_Season_Code from df in Overall_Season_Avg - "Team_Season_Code" and bring back "TS" and "OS" to create a new column in df called "OOS" and "OTS" So a new column for row 1 in df should have Column name OOS with data - 65.24... and Column name OTS with data 70.90... In excel its a simple vlookup but i haven't been able to use the solutions that i have found to the vlookup question on overflow so i decided to post my own question. I will also say that the Overall_Season_Avg dataframe was created through by Overall_Season_Avg = df.groupby(['Team_Season_Code', 'Team']).agg({'TS': np.mean, 'OS': np.mean, 'MOV': np.mean})
[ "You can use a merge, after reworking a bit Overall_Season_Avg :\ndf.merge(Overall_Season_Avg\n .set_index(['Team_Season_Code', 'Team'])\n [['OS', 'TS']].add_prefix('O'),\n left_on=['Opponent_Season_Code', 'Opponent'],\n right_index=True, how='left'\n )\n\nOutput:\n Season Date Team Team_Season_Code TS L Opponent Opponent_Season_Code OS OOS OTS\n0 2019 20181109 Abilene_Chr 1_2019 94 Home Arkansas_St 15_2019 73 65.242424 70.909091\n1 2019 20181115 Abilene_Chr 1_2019 67 Away Denver 82_2019 61 72.156250 74.031250\n2 2019 20181122 Abilene_Chr 1_2019 72 N Elon 70_2019 56 71.818182 73.636364\n3 2019 20181123 Abilene_Chr 1_2019 73 Away Pacific 224_2019 71 76.466667 78.333333\n4 2019 20181124 Abilene_Chr 1_2019 60 N UC_Riverside 306_2019 48 78.060606 79.545455\n\nmerging only on Opponent_Season_Code/Team_Season_Code:\ndf.merge(Overall_Season_Avg\n .set_index('Team_Season_Code')\n [['OS', 'TS']].add_prefix('O'),\n left_on=['Opponent_Season_Code'],\n right_index=True, how='left'\n )\n\nOutput:\n Season Date Team Team_Season_Code TS L Opponent Opponent_Season_Code OS OOS OTS\n0 2019 20181109 Abilene_Chr 1_2019 94 Home Arkansas_St 15_2019 73 65.242424 70.909091\n1 2019 20181115 Abilene_Chr 1_2019 67 Away Denver 82_2019 61 72.156250 74.031250\n2 2019 20181122 Abilene_Chr 1_2019 72 N Elon 70_2019 56 71.818182 73.636364\n3 2019 20181123 Abilene_Chr 1_2019 73 Away Pacific 224_2019 71 76.466667 78.333333\n4 2019 20181124 Abilene_Chr 1_2019 60 N UC_Riverside 306_2019 48 78.060606 79.545455\n\n", "df.merge(Overall_Season_Avg, on=['Team_Season_Code', 'Team'], how='left')\n\nand rename column's names\n\nor use transform instead agg when make Overall_Season_Avg.\nbut i don remain transform code becuz you don provide reproducible example\nmake simple and reproducible example plz\nhttps://stackoverflow.com/help/minimal-reproducible-example\n" ]
[ 2, 0 ]
[]
[]
[ "dataframe", "group_by", "numpy", "pandas", "python" ]
stackoverflow_0074609047_dataframe_group_by_numpy_pandas_python.txt
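A sketch of the most direct VLOOKUP analogue, Series.map against an indexed lookup column, using small stand-in frames; if Overall_Season_Avg still carries the MultiIndex from the groupby, call reset_index() on it before set_index.

import pandas as pd

df = pd.DataFrame({'Team': ['Abilene_Chr'] * 3,
                   'Opponent': ['Arkansas_St', 'Denver', 'Elon'],
                   'Opponent_Season_Code': ['15_2019', '82_2019', '70_2019']})

avg = pd.DataFrame({'Team_Season_Code': ['15_2019', '70_2019', '82_2019'],
                    'TS': [70.909091, 73.636364, 74.031250],
                    'OS': [65.242424, 71.818182, 72.156250]}).set_index('Team_Season_Code')

# VLOOKUP equivalent: look each opponent code up in the indexed table
df['OTS'] = df['Opponent_Season_Code'].map(avg['TS'])
df['OOS'] = df['Opponent_Season_Code'].map(avg['OS'])
print(df)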
Q: how to test spam mail classification user_input = input().split() user_data = [[]] # X_test_encoded = tokenizer.texts_to_sequences(X_test) # X_test_padded = pad_sequences(X_test_encoded, maxlen = max_len) if (model.predict(X_test_padded).all() > 0.5): print(f"[{user_input}] is spam") else: print(f"[{user_input}] is non-spam") from tensorflow.keras.layers import SimpleRNN, Embedding, Dense from tensorflow.keras.models import Sequential embedding_dim = 32 hidden_units = 32 model = Sequential() model.add(Embedding(vocab_size, embedding_dim)) model.add(SimpleRNN(hidden_units)) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc']) history = model.fit(X_train_padded, y_train, epochs=4, batch_size=64, validation_split=0.2) X_test_encoded = tokenizer.texts_to_sequences(X_test) X_test_padded = pad_sequences(X_test_encoded, maxlen = max_len) print("\n Test accuracy: %.4f" % (model.evaluate(X_test_padded, y_test)[1])) epochs = range(1, len(history.history['acc']) + 1) plt.plot(epochs, history.history['loss']) plt.plot(epochs, history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'val'], loc='upper left') plt.show() This is the test code that determines whether an entered email is spam or a normal email. But no matter what text I enter, it is classified as spam. Is there any solution? A: I have trained a similar model based on a movie review dataset (positive or negative) model.predict(X_test_padded).all() > 0.5 The output of the above will always be True, so the if branch will execute every time. Instead use model.predict(X_test_padded) > 0.5 For more details please refer to this gist. Thank You.
how to test spam mail classification
user_input = input().split() user_data = [[]] # X_test_encoded = tokenizer.texts_to_sequences(X_test) # X_test_padded = pad_sequences(X_test_encoded, maxlen = max_len) if (model.predict(X_test_padded).all() > 0.5): print(f"[{user_input}] is spam") else: print(f"[{user_input}] is non-spam") from tensorflow.keras.layers import SimpleRNN, Embedding, Dense from tensorflow.keras.models import Sequential embedding_dim = 32 hidden_units = 32 model = Sequential() model.add(Embedding(vocab_size, embedding_dim)) model.add(SimpleRNN(hidden_units)) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc']) history = model.fit(X_train_padded, y_train, epochs=4, batch_size=64, validation_split=0.2) X_test_encoded = tokenizer.texts_to_sequences(X_test) X_test_padded = pad_sequences(X_test_encoded, maxlen = max_len) print("\n Test accuracy: %.4f" % (model.evaluate(X_test_padded, y_test)[1])) epochs = range(1, len(history.history['acc']) + 1) plt.plot(epochs, history.history['loss']) plt.plot(epochs, history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'val'], loc='upper left') plt.show() This is the test code that determines whether an entered email is spam or a normal email. But no matter what text I enter, it is classified as spam. Is there any solution?
[ "I have trained a similar model based on a movie review dataset(positive or negative)\nmodel.predict(X_test_padded).all() > 0.5\n\nThe out of above will always be True so the if loop will execute all the time.\nInstead use\nmodel.predict(X_test_padded) > 0.5\n\nFor more details please refer to this gist. Thank You.\n" ]
[ 0 ]
[]
[]
[ "python", "tensorflow" ]
stackoverflow_0074350119_python_tensorflow.txt
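A small self-contained sketch of the corrected check from the answer above, wrapped in a function; model, tokenizer and max_len are assumed to be the trained model, fitted Tokenizer and padding length from the question's training code.

from tensorflow.keras.preprocessing.sequence import pad_sequences

def classify_message(text, model, tokenizer, max_len):
    """Classify one raw text string as 'spam' or 'non-spam'."""
    encoded = tokenizer.texts_to_sequences([text])    # list with one integer sequence
    padded = pad_sequences(encoded, maxlen=max_len)   # shape (1, max_len)
    probability = model.predict(padded)[0][0]         # scalar sigmoid output
    # Compare the scalar itself; model.predict(...).all() > 0.5 first collapses the
    # predictions to a boolean, which is why every input looked like spam.
    return "spam" if probability > 0.5 else "non-spam"

# Example use, assuming model, tokenizer and max_len already exist:
# print(classify_message(input(), model, tokenizer, max_len))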
Q: How to obtain Jupyter Notebook's path? Is there a function to obtain a Notebook's path? I've Googled a little on the subject but didn't find a simple way to do it... I want to obtain the Notebook's path so I can then use it elsewhere. This way I could save/use files in the same path as the notebook without worrying about where it got saved. Right now my solution is to put the following code on top but obviously this poses at least the problem of manually having to execute a cell and also if the working directory changes this will stop working. import os current_path = os.getcwd() A: TLDR: You can't It is not possible to consistently get the path of a Jupyter notebook. See ipython issue #10123 for more information. I'll quote Carreau: Here are some reasons why the kernel (in this case IPython): may not be running from single file even if one file, the file may not be a notebook. even if notebook, the notebook may not be on a filesystem. even if on a file system, it may not be on the same machine. even if on the same machine the path to the file may not make sens in the IPython context. even if it make sens the Jupyter Protocol has not been designed to do so. And we have no plan to change this abstraction in short or long term. Your hack works in most cases and is not too bad depending on the situation. A: if you can open it you can use this function 1-open your Jupyter notebook 2- write this function 3-it will print out the path pwd if not navigate to your python installation folder open folder scripts and there you will find it. hope this may help others A: Update: I found this answer which solves the problem. The following command returns the folder path for both *.py and *.ipynb files. import os os.path.abspath("") I have a bit of a work around to solve this issue but I find that it is often helpful to have for non-notebook projects as well. In the same folder as your notebook create a file called "base_fns.py". Inside this file place the following code: import os def get_local_folder(): return os.path.dirname(os.path.realpath(__file__)) Then, you can get the path to the folder containing base_fns using: from base_fns import get_local_folder() rt_fldr = get_local_folder() print(rt_fldr) A few notes: This gives you the absolute path to the folder containing "base_fns.py", not your notebook. If your notebook and base_fns are in the same folder, then the absolute path to the folder for your notebook and base_fns will be the same. If you place base_fns in a different folder, you will need to know the relative path to navigate from the base_fns folder to your notebook folder. If you are working on a larger project with a folder structure, you can place base_fns.py in a known folder and then navigate around to find any other folders/files that you may require. A: use this in cell %%javascript IPython.notebook.kernel.execute('nb_name = "' + IPython.notebook.notebook_name + '"') print(nb_name) A: What I am doing is not really beautifull but quite efficient. On VScode, I right click "Copy Path" on a sub folder in my working folder, in which I have my multiples Jupyter Notebook. I remove the end of the string and I obtain the aboslute path to the folder I after use in one of my jupyter notebook the command: os.chdir(r"path_to_your_folder") and this is it. NB: Since I work collaborately on repo cointaining Jupyter notebooks, this is the only ugly but efficient solution that I found.
How to obtain Jupyter Notebook's path?
Is there a function to obtain a Notebook's path? I've Googled a little on the subject but didn't find a simple way to do it... I want to obtain the Notebook's path so I can then use it elsewhere. This way I could save/use files in the same path as the notebook without worrying about where it got saved. Right now my solution is to put the following code on top but obviously this poses at least the problem of manually having to execute a cell and also if the working directory changes this will stop working. import os current_path = os.getcwd()
[ "TLDR: You can't\nIt is not possible to consistently get the path of a Jupyter notebook. See ipython issue #10123 for more information. I'll quote Carreau:\n\nHere are some reasons why the kernel (in this case IPython):\n\nmay not be running from single file\neven if one file, the file may not be a notebook.\neven if notebook, the notebook may not be on a filesystem.\neven if on a file system, it may not be on the same machine.\neven if on the same machine the path to the file may not make sens in the IPython context.\neven if it make sens the Jupyter Protocol has not been designed to do so. And we have no plan to change this abstraction in short or long term.\n\n\nYour hack works in most cases and is not too bad depending on the situation.\n", "if you can open it \nyou can use this function \n1-open your Jupyter notebook \n2- write this function\n3-it will print out the path\npwd\n\nif not\nnavigate to your python installation folder \nopen folder scripts and there you will find it.\nhope this may help others \n", "Update:\nI found this answer which solves the problem. The following command returns the folder path for both *.py and *.ipynb files.\nimport os\nos.path.abspath(\"\")\n\n\nI have a bit of a work around to solve this issue but I find that it is often helpful to have for non-notebook projects as well. In the same folder as your notebook create a file called \"base_fns.py\". Inside this file place the following code:\nimport os\n\ndef get_local_folder():\n return os.path.dirname(os.path.realpath(__file__))\n\nThen, you can get the path to the folder containing base_fns using:\nfrom base_fns import get_local_folder()\nrt_fldr = get_local_folder()\nprint(rt_fldr)\n\nA few notes:\n\nThis gives you the absolute path to the folder containing \"base_fns.py\", not your notebook. If your notebook and base_fns are in the same folder, then the absolute path to the folder for your notebook and base_fns will be the same.\nIf you place base_fns in a different folder, you will need to know the relative path to navigate from the base_fns folder to your notebook folder.\nIf you are working on a larger project with a folder structure, you can place base_fns.py in a known folder and then navigate around to find any other folders/files that you may require.\n\n", "use this in cell \n%%javascript\nIPython.notebook.kernel.execute('nb_name = \"' + IPython.notebook.notebook_name + '\"')\nprint(nb_name)\n\n", "What I am doing is not really beautifull but quite efficient. On VScode, I right click \"Copy Path\" on a sub folder in my working folder, in which I have my multiples Jupyter Notebook. I remove the end of the string and I obtain the aboslute path to the folder\nI after use in one of my jupyter notebook the command:\nos.chdir(r\"path_to_your_folder\") and this is it.\nNB: Since I work collaborately on repo cointaining Jupyter notebooks, this is the only ugly but efficient solution that I found.\n" ]
[ 35, 4, 3, 1, 0 ]
[ "I know this is an old post, but it seems the path to the notebook can be found using\nos.path.abspath(\"mynotebook.ipynb\")\nIt's hardcoding the name of the notebook, but that should be relatively easy to keep in sync.\n", "You can just use \"pwd\" which stands for print working directory.\nenter image description here\n", "You can right clic in the Jupyter notebook shortcut icon (in my case under Anaconda3 folder) and go to properties.\nThere you will find the full path to Jupyter: D:\\anaconda3\\python.exe d:\\anaconda3\\cwp.py d:\\anaconda3 d:\\anaconda3\\python.exe d:\\anaconda3\\Scripts\\jupyter-notebook-script.py \"%USERPROFILE%/\"\nProperties of Jupyter shortcut:\n\n" ]
[ -1, -4, -4 ]
[ "jupyter", "jupyter_notebook", "python" ]
stackoverflow_0052119454_jupyter_jupyter_notebook_python.txt
Q: How to do the program to replace the sting space with a given character using replace() method? PYTHON PROBLEM I have searched for the answer in many website, quora is example for that. A: Are you sure you even tried? Check the following out. Is this what you want? s='123 we wr 21' s.replace(' ' ,'@') s #123@we@wr@21
How to write a program to replace the spaces in a string with a given character using the replace() method?
Python problem: I have searched for the answer on many websites; Quora is one example.
[ "Are you sure you even tried?\nCheck the following out. Is this what you want?\ns='123 we wr 21'\ns.replace(' ' ,'@')\ns #123@we@wr@21\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074609098_python.txt
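For completeness, str.replace() also takes an optional count argument that limits how many spaces are replaced; a tiny sketch:

s = '123 we wr 21'
print(s.replace(' ', '@'))     # 123@we@wr@21  (every space replaced)
print(s.replace(' ', '@', 1))  # 123@we wr 21  (only the first space replaced)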
Q: from exceptions import PendingDeprecationWarning ModuleNotFoundError: No module named 'exceptions' I am trying to create a word document with Python. I did pip install python-docx in my terminal. My code looks like this: from docx import Document document = Document() document.save('Test.docx') I could not create a new document. What am I missing? The existing answer to install python-docx did not work for me. from exceptions import PendingDeprecationWarning ModuleNotFoundError: No module named 'exceptions' A: you need to install "python-docx" by pip install python-docx A: Try pip freeze and check whether the module name is listed or not. A: Uninstall docx if installed pip uninstall docx Install python-docx pip install python-docx Import docx import docx Import Document from docx from docx import Document
from exceptions import PendingDeprecationWarning ModuleNotFoundError: No module named 'exceptions'
I am trying to create a word document with Python. I did pip install python-docx in my terminal. My code looks like this: from docx import Document document = Document() document.save('Test.docx') I could not create a new document. What am I missing? The existing answer to install python-docx did not work for me. from exceptions import PendingDeprecationWarning ModuleNotFoundError: No module named 'exceptions'
[ "you need to install\n\n\"python-docx\"\n\nby\npip install python-docx \n\n", "Try pip freeze and check whether the module name is listed or not.\n", "\nUninstall docx if installed\npip uninstall docx\n\n\nInstall python-docx\npip install python-docx\n\n\nImport docx\nimport docx\n\n\nImport Document from docx\nfrom docx import Document\n\n\n\n" ]
[ 8, 0, 0 ]
[]
[]
[ "python", "python_docx" ]
stackoverflow_0053019209_python_python_docx.txt
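Once python-docx (rather than docx) is installed, a minimal sketch like this should create and save a document; the heading and paragraph text are arbitrary:

# pip install python-docx   (the import name stays `docx`)
from docx import Document

document = Document()
document.add_heading('Test document', level=1)
document.add_paragraph('Created with python-docx.')
document.save('Test.docx')
print('Saved Test.docx')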
Q: When import docx in python3.3 I have error ImportError: No module named 'exceptions' when I import docx I have this error: File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/docx-0.2.4-py3.3.egg/docx.py", line 30, in <module> from exceptions import PendingDeprecationWarning ImportError: No module named 'exceptions' How to fix this error (python3.3, docx 0.2.4)? A: If you are using python 3x don't do pip install docx instead go for pip install python-docx It is compatible with python 3.x Official Documentation available here: https://pypi.org/project/python-docx/ A: When want to use import docx, be sure to install python-docx, not docx.You can install the module by running pip install python-docx. The installation name docx is for a different module However, when you are going to import the python-docx module, you’ll need to run import docx, not import python-docx. if still you want to use docx module then: First of all, you will need to make sure that the docx module is installed. If not then simply run pip install docx. If it shows '*requirement already satisfied*' then the solution is : Go to the library to find docx.py file, you'll need to go to directory where you installed python then \Lib\site-packages\ and find docx.py file Open docx.py file in text editor and find this code from exceptions import PendingDeprecationWarning Replace the above code with try: from exceptions import PendingDeprecationWarning except ImportError: pass Save the file Now you can run import docx module in Python 3.x without any problem A: Uninstall docx module with pip uninstall docx Download python_docx-0.8.6-py2.py3-none-any.whl file from http://www.lfd.uci.edu/~gohlke/pythonlibs/ Run pip install python_docx-0.8.6-py2.py3-none-any.whl to reinstall docx. This solved the above import error smoothly for me. A: If you're using python 3.x, Make sure you have both python-docx & docx installed. Installing python-docx : pip install python-docx Installing docx : pip install docx A: You may be install docx, not python-docx You can see this for install python-docx http://python-docx.readthedocs.io/en/latest/user/install.html#install A: In Python 3 exceptions module was removed and all standard exceptions were moved to builtin module. Thus meaning that there is no more need to do explicit import of any standard exceptions. copied from A: The problem, as was noted previously in comments, is the docx module was not compatible with Python 3. It was fixed in this pull-request on github: https://github.com/mikemaccana/python-docx/pull/67 Since the exception is now built-in, the solution is to not import it. docx.py @@ -27,7 +27,12 @@ except ImportError: TAGS = {} -from exceptions import PendingDeprecationWarning +# Handle PendingDeprecationWarning causing an ImportError if using Python 3 +try: + from exceptions import PendingDeprecationWarning +except ImportError: + pass + from warnings import warn import logging A: pip install python-docx this worked for me, try installing with admin mode A: I had the same problem, but pip install python-docx worked for me, I'm using python 3.7.1 A: You need to make it work with python3. sudo pip3 install python-docx This installation worked for me in Python3 without any further additions. python3 >> import docx PS: Note that the 'pip install python-docx' or apt-get python3-docx are not useful. 
A: I had the same problem, after installing docx module, got a bunch of errors regarding docx, .oxml and lxml...seems that for my case the package was installed in this folder: C:\Program Files\Python3.7\Lib\site-packages and I moved it one step back, to: C:\Program Files\Python3.7\Lib and this solved the issue. A: Had faced similar issue and found a work around have posted answer aswell under similar question find its link :https://stackoverflow.com/a/74609166/17385292
When import docx in python3.3 I have error ImportError: No module named 'exceptions'
when I import docx I have this error: File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/docx-0.2.4-py3.3.egg/docx.py", line 30, in <module> from exceptions import PendingDeprecationWarning ImportError: No module named 'exceptions' How to fix this error (python3.3, docx 0.2.4)?
[ "If you are using python 3x don't do pip install docx instead go for\npip install python-docx \n\nIt is compatible with python 3.x\nOfficial Documentation available here: https://pypi.org/project/python-docx/\n", "When want to use import docx, be sure to install python-docx, not docx.You can install the module by running pip install python-docx.\nThe installation name docx is for a different module\nHowever, \nwhen you are going to import the python-docx module, \nyou’ll need to run\nimport docx, not import python-docx.\nif still you want to use docx module then:\nFirst of all, you will need to make sure that the docx module is installed.\nIf not then simply run pip install docx. \nIf it shows '*requirement already satisfied*'\nthen the solution is :\n\nGo to the library to find docx.py file,\nyou'll need to go to directory where you installed python then \\Lib\\site-packages\\ and find docx.py file\nOpen docx.py file in text editor and find this code\nfrom exceptions import PendingDeprecationWarning\n\nReplace the above code with\n\ntry:\n from exceptions import PendingDeprecationWarning\nexcept ImportError:\n pass\n\n\nSave the file\nNow you can run import docx module in Python 3.x without any problem\n\n", "\nUninstall docx module with pip uninstall docx\nDownload python_docx-0.8.6-py2.py3-none-any.whl file from http://www.lfd.uci.edu/~gohlke/pythonlibs/\nRun pip install python_docx-0.8.6-py2.py3-none-any.whl to reinstall docx.\n\nThis solved the above import error smoothly for me.\n", "If you're using python 3.x, Make sure you have both python-docx & docx installed.\nInstalling python-docx :\npip install python-docx\n\nInstalling docx :\npip install docx\n\n", "You may be install docx, not python-docx \nYou can see this for install python-docx\nhttp://python-docx.readthedocs.io/en/latest/user/install.html#install\n", "\nIn Python 3 exceptions module was removed and all standard exceptions were moved to builtin module. Thus meaning that there is no more need to do explicit import of any standard exceptions.\n\ncopied from\n", "The problem, as was noted previously in comments, is the docx module was not compatible with Python 3. It was fixed in this pull-request on github: https://github.com/mikemaccana/python-docx/pull/67\nSince the exception is now built-in, the solution is to not import it.\ndocx.py\n@@ -27,7 +27,12 @@\n except ImportError:\n TAGS = {}\n\n-from exceptions import PendingDeprecationWarning\n+# Handle PendingDeprecationWarning causing an ImportError if using Python 3\n+try:\n+ from exceptions import PendingDeprecationWarning\n+except ImportError:\n+ pass\n+\n from warnings import warn\n\n import logging\n\n", "pip install python-docx\nthis worked for me, try installing with admin mode\n", "I had the same problem, but pip install python-docx worked for me, I'm using python 3.7.1\n", "You need to make it work with python3. \n sudo pip3 install python-docx\n\nThis installation worked for me in Python3 without any further additions. \n python3\n >> import docx\n\nPS: Note that the 'pip install python-docx' or apt-get python3-docx are not useful. 
\n", "I had the same problem, after installing docx module, got a bunch of errors regarding docx, .oxml and lxml...seems that for my case the package was installed in this folder:\nC:\\Program Files\\Python3.7\\Lib\\site-packages\nand I moved it one step back, to:\nC:\\Program Files\\Python3.7\\Lib\nand this solved the issue.\n", "Had faced similar issue and found a work around have posted answer aswell under similar question find its link :https://stackoverflow.com/a/74609166/17385292\n" ]
[ 277, 27, 19, 15, 10, 8, 3, 3, 1, 1, 0, 0 ]
[]
[]
[ "python", "python_3.x", "python_docx" ]
stackoverflow_0022765313_python_python_3.x_python_docx.txt
Q: How to fix spaCy en_training incompatible with current spaCy version UserWarning: [W094] Model 'en_training' (0.0.0) specifies an under-constrained spaCy version requirement: >=2.1.4. This can lead to compatibility problems with older versions, or as new spaCy versions are released, because the model may say it's compatible when it's not. Consider changing the "spacy_version" in your meta.json to a version range, with a lower and upper pin. For example: >=3.2.1,<3.3.0 spaCy version 3.2.1 Python version 3.9.7 OS Window A: For spacy v2 models, the under-constrained requirement >=2.1.4 means >=2.1.4,<2.2.0 in effect, and as a result this model will only work with spacy v2.1.x. There is no way to convert a v2 model to v3. You can either use the model with v2.1.x or retrain the model from scratch with your training data. A: pip3 install spacy==2.1.4 This can download required
How to fix spaCy en_training incompatible with current spaCy version
UserWarning: [W094] Model 'en_training' (0.0.0) specifies an under-constrained spaCy version requirement: >=2.1.4. This can lead to compatibility problems with older versions, or as new spaCy versions are released, because the model may say it's compatible when it's not. Consider changing the "spacy_version" in your meta.json to a version range, with a lower and upper pin. For example: >=3.2.1,<3.3.0 spaCy version 3.2.1 Python version 3.9.7 OS Window
[ "For spacy v2 models, the under-constrained requirement >=2.1.4 means >=2.1.4,<2.2.0 in effect, and as a result this model will only work with spacy v2.1.x.\nThere is no way to convert a v2 model to v3. You can either use the model with v2.1.x or retrain the model from scratch with your training data.\n", "pip3 install spacy==2.1.4\n\nThis can download required\n" ]
[ 3, 0 ]
[]
[]
[ "python", "spacy" ]
stackoverflow_0070880056_python_spacy.txt
Q: what is the fgets() equivalent in Python I am currently working with file handling in Python. I have problem in copying the string value of the file. I wanted to copy the string from file and store it to a variable, just like in C example: this is how we do in C FILE *fptr = fopen("read.txt", "r"); fgets(charVar, 100, fptr); where we store the string file to charVar. So is there a fgets() function equivalent to python? A: You can pass the limit argument to readline for a file object which would have similar behavior of stopping on a max character or a newline. Example text file: 01234567890123456789 01234567890123456789 with open("test.txt", "r") as f: while data := f.readline(8): print("line:", data) Outputs: line: 01234567 line: 89012345 line: 6789 line: 01234567 line: 89012345 line: 6789
what is the fgets() equivalent in Python
I am currently working with file handling in Python. I have a problem copying the string value from a file: I want to read a string from the file and store it in a variable, just like in C. For example, this is how we do it in C:
FILE *fptr = fopen("read.txt", "r");
fgets(charVar, 100, fptr);

where we store the string read from the file in charVar. So is there an equivalent of the fgets() function in Python?
[ "You can pass the limit argument to readline for a file object which would have similar behavior of stopping on a max character or a newline. Example text file:\n01234567890123456789\n01234567890123456789\n\nwith open(\"test.txt\", \"r\") as f:\n while data := f.readline(8):\n print(\"line:\", data)\n\nOutputs:\nline: 01234567\nline: 89012345\nline: 6789\n\nline: 01234567\nline: 89012345\nline: 6789\n\n\n" ]
[ 1 ]
[]
[]
[ "c", "python" ]
stackoverflow_0074609187_c_python.txt
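A small helper sketch that mirrors C's fgets() semantics (read at most n-1 characters, stopping after a newline) on top of readline(); the file name is taken from the question:

def fgets(f, n):
    """Rough Python analogue of C fgets(): read at most n-1 characters from f,
    stopping after a newline. Returns '' at end of file (where C returns NULL)."""
    return f.readline(n - 1)

with open("read.txt", "r") as fptr:
    char_var = fgets(fptr, 100)
    print(repr(char_var))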
Q: How do I merge different Dataframes and label each dataframe in the merged dataframe in python? lets say for example we have 2 Dataframes, df1 and df2; df1 = pd.DataFrame({'id': ['A01', 'A02'], 'Name': ['ABC', 'PQR']}) df2 = pd.DataFrame({'id': ['B05', 'B06'], 'Name': ['XYZ', 'TUV']}) I want to merge the two and label each dataframes, so it appears like this. So basically, i want to concatenate two dataframes into a new dataframe and create a third column that labels each of those dataframes. As seen the the table above, you can see that there is a 3rd column named 'class' and the values there are grouping of each dataframes that were merged. The first two above are data for df1 and it was labelled as 1 for all of them. it groups all of them and put them as one. i'm trying to make sure it doesn't appear like this one below; in this case, it's appending for each line.. i prefer to append to the whole DF as single entity as shown in the first table. This is what I have tried; df1['class'] = 1 df2['class'] = 2 df_merge = pd.concat([df1,df2]) and i got result like this But this is not what I was expecting. I am expecting the result to look like this. Grouping each df as one and add the 3rd column. A: You can concat using the keys and names parameters, then reset_index: (pd.concat([df1, df2], keys=[1, 2], names=['class', None]) .reset_index('class') ) Output: class id Name 0 1 A01 ABC 1 1 A02 PQR 0 2 B05 XYZ 1 2 B06 TUV Or without reset_index to get a MultiIndex: pd.concat([df1, df2], keys=[1, 2], names=['class', None]) id Name class 1 0 A01 ABC 1 A02 PQR 2 0 B05 XYZ 1 B06 TUV hiding the "duplicated" class: (pd.concat([df1, df2], keys=[1, 2], names=['class', None]) .reset_index('class') .assign(**{'class': lambda d: d['class'].mask(d['class'].duplicated(), '')}) ) Output: class id Name 0 1 A01 ABC 1 A02 PQR 0 2 B05 XYZ 1 B06 TUV
How do I merge different Dataframes and label each dataframe in the merged dataframe in python?
Let's say, for example, we have 2 DataFrames, df1 and df2:
df1 = pd.DataFrame({'id': ['A01', 'A02'], 'Name': ['ABC', 'PQR']})

df2 = pd.DataFrame({'id': ['B05', 'B06'], 'Name': ['XYZ', 'TUV']})

I want to merge the two and label each DataFrame, so it appears like this. So basically, I want to concatenate the two DataFrames into a new DataFrame and create a third column that labels each of those DataFrames. As seen in the table above, there is a third column named 'class', and its values group the rows of each DataFrame that was merged. The first two rows above are the data for df1 and they were all labelled as 1; it groups all of them and treats them as one.
I'm trying to make sure it doesn't appear like the one below; in that case the label is repeated on every line. I prefer the label to apply to each whole DataFrame as a single entity, as shown in the first table.

This is what I have tried:
df1['class'] = 1
df2['class'] = 2
df_merge = pd.concat([df1,df2])

and I got a result like this. But this is not what I was expecting. I am expecting the result to look like this: grouping each df as one and adding the third column.
[ "You can concat using the keys and names parameters, then reset_index:\n(pd.concat([df1, df2], keys=[1, 2], names=['class', None])\n .reset_index('class')\n)\n\nOutput:\n class id Name\n0 1 A01 ABC\n1 1 A02 PQR\n0 2 B05 XYZ\n1 2 B06 TUV\n\nOr without reset_index to get a MultiIndex:\npd.concat([df1, df2], keys=[1, 2], names=['class', None])\n\n id Name\nclass \n1 0 A01 ABC\n 1 A02 PQR\n2 0 B05 XYZ\n 1 B06 TUV\n\nhiding the \"duplicated\" class:\n(pd.concat([df1, df2], keys=[1, 2], names=['class', None])\n .reset_index('class')\n .assign(**{'class': lambda d: d['class'].mask(d['class'].duplicated(), '')})\n)\n\nOutput:\n class id Name\n0 1 A01 ABC\n1 A02 PQR\n0 2 B05 XYZ\n1 B06 TUV\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "numpy", "pandas", "python" ]
stackoverflow_0074609248_dataframe_numpy_pandas_python.txt
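An equivalent sketch of the keys-based concat from the answer above, written with the mapping form of pd.concat, where the dict keys become the 'class' level directly:

import pandas as pd

df1 = pd.DataFrame({'id': ['A01', 'A02'], 'Name': ['ABC', 'PQR']})
df2 = pd.DataFrame({'id': ['B05', 'B06'], 'Name': ['XYZ', 'TUV']})

# The dict keys (1 and 2) label each source frame, exactly like keys=[1, 2]
out = pd.concat({1: df1, 2: df2}, names=['class', None]).reset_index('class')
print(out)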
Q: Why am I getting this error when in the documentation it says to use it? The problem I am having is that when I try to use the command it gives me the error, TypeError: Embed.init() got an unexpected keyword argument 'value'. It cites line 34 as the issue, does anyone know why this is happening? @client.event async def on_message(message) if "!sell" in message.content: tip = random.randrange(1, 500) k = f"| {message.author.name} " val = random.randrange(1000, 5000) # open entire file for reading and writing with open("income.json", "r+") as f: incomes = json.load(f) # Either add the value to the user's balance # or, if they don't exist yet, create the entry # with the val try: incomes[k] += val + tip except KeyError: incomes[k] = val + tip # seek to beginning and rewrite file f.seek(0) json.dump(incomes, f, indent=4) embedVar = discord.Embed(title=(message.author.name) + " Distributed", value=None, inline=False, color=0xfc3503) embedVar.add_field(name=('﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉'), value=(("You were tipped ")+(str(tip))+(" by the buyer")), inline=False) embedVar.add_field(name=('﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉'), value=(f"You Earned {val} Dollars from distributing")) I have tried changing the keyword value to other ones like description and have also tried changing value to = none but none of this worked. A: Official documentation doesn't include value as a valid parameter https://discordpy.readthedocs.io/en/stable/api.html#embed class discord.Embed(*, colour=None, color=None, title=None, type='rich', url=None, description=None, timestamp=None) You are probably reading outdated documentation somewhere.
Why am I getting this error when in the documentation it says to use it?
The problem I am having is that when I try to use the command it gives me the error, TypeError: Embed.init() got an unexpected keyword argument 'value'. It cites line 34 as the issue, does anyone know why this is happening? @client.event async def on_message(message) if "!sell" in message.content: tip = random.randrange(1, 500) k = f"| {message.author.name} " val = random.randrange(1000, 5000) # open entire file for reading and writing with open("income.json", "r+") as f: incomes = json.load(f) # Either add the value to the user's balance # or, if they don't exist yet, create the entry # with the val try: incomes[k] += val + tip except KeyError: incomes[k] = val + tip # seek to beginning and rewrite file f.seek(0) json.dump(incomes, f, indent=4) embedVar = discord.Embed(title=(message.author.name) + " Distributed", value=None, inline=False, color=0xfc3503) embedVar.add_field(name=('﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉'), value=(("You were tipped ")+(str(tip))+(" by the buyer")), inline=False) embedVar.add_field(name=('﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉'), value=(f"You Earned {val} Dollars from distributing")) I have tried changing the keyword value to other ones like description and have also tried changing value to = none but none of this worked.
[ "Official documentation doesn't include value as a valid parameter\nhttps://discordpy.readthedocs.io/en/stable/api.html#embed\nclass discord.Embed(*, colour=None, color=None, title=None, type='rich', url=None, description=None, timestamp=None)\nYou are probably reading outdated documentation somewhere.\n" ]
[ 2 ]
[]
[]
[ "discord.py", "python" ]
stackoverflow_0074609261_discord.py_python.txt
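A sketch of the embed from the question rebuilt with only keyword arguments that discord.Embed actually accepts (description instead of value/inline in the constructor); it assumes it runs inside the question's on_message handler, where message, tip and val are defined:

embed = discord.Embed(
    title=f"{message.author.name} Distributed",
    color=0xfc3503,   # 'value' and 'inline' are not constructor kwargs
)
embed.add_field(
    name="﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉",
    value=f"You were tipped {tip} by the buyer",
    inline=False,
)
embed.add_field(
    name="﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉﹉",
    value=f"You Earned {val} Dollars from distributing",
)
await message.channel.send(embed=embed)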
Q: using class inheritance on tkinter python please help, I am trying to create a button on a tkinter window with the use of python class inheritance but it does not show the button. I am new to inheritance in python and not sure how do this. please see my code. thanks in advance from tkinter import * class A: def __init__(self): self.root = Tk() self.root.mainloop() class B(A): def __init__(self): super().__init__() self.my_button = Button(self.root,text="test") self.my_button.pack() d = B() A: It is not recommended (or should not) call tkinter mainloop() inside __init__() as mainloop() is a blocking function. Therefore super().__init__() will not return until the root window is closed. Since the button is created after super().__init__(), so the root window is blank. Remove self.root.mainloop() inside A.__init__() and call tkinter mainloop() after you created instance of B(): from tkinter import * class A: def __init__(self): self.root = Tk() # don't call mainloop() here class B(A): def __init__(self): super().__init__() self.my_button = Button(self.root,text="test") self.my_button.pack() d = B() d.root.mainloop() # call mainloop() here
using class inheritance on tkinter python
Please help: I am trying to create a button on a tkinter window using Python class inheritance, but the button does not show. I am new to inheritance in Python and not sure how to do this. Please see my code. Thanks in advance.
from tkinter import *

class A:
    def __init__(self):
        self.root = Tk()
        self.root.mainloop()

class B(A):
    def __init__(self):
        super().__init__()
        self.my_button = Button(self.root,text="test")
        self.my_button.pack()

d = B()
[ "It is not recommended (or should not) call tkinter mainloop() inside __init__() as mainloop() is a blocking function. Therefore super().__init__() will not return until the root window is closed. Since the button is created after super().__init__(), so the root window is blank.\nRemove self.root.mainloop() inside A.__init__() and call tkinter mainloop() after you created instance of B():\nfrom tkinter import *\n\nclass A:\n def __init__(self):\n self.root = Tk()\n # don't call mainloop() here\n\nclass B(A):\n def __init__(self):\n super().__init__()\n self.my_button = Button(self.root,text=\"test\")\n self.my_button.pack()\n\nd = B()\nd.root.mainloop() # call mainloop() here\n\n" ]
[ 1 ]
[]
[]
[ "inheritance", "python", "tkinter", "user_interface" ]
stackoverflow_0074609066_inheritance_python_tkinter_user_interface.txt
Q: Python: How can I implement yield in my recursion? How can I implement yield from in my recursion? I am trying to understand how to implement it but failing: # some data init_parent = [1020253] df = pd.DataFrame({'parent': [1020253, 1020253], 'id': [1101941, 1101945]}) # look for parent child def recur1(df, parents, parentChild=None, step=0): if len(parents) != 0: yield parents, parentChild else: parents = df.loc[df['parent'].isin(parents)][['id', 'parent']] parentChild = parents['parent'].to_numpy() parents = parents['id'].to_numpy() yield from recur1(df=df, parents=parents, parentChild=parentChild, step=step+1) # exec / only printing results atm out = recur1(df, init_parent, step=0) [x for x in out] A: I'd say your biggest issue here is that recur1 isn't always guaranteed to return a generator. For example, suppose your stack calls into the else branch three times before calling into the if branch. In this case, the top three frames would be returning a generator received from the lower frame, but the lowest from would be returned from this: yield parents, parentChild So, then, there is a really simple way you can fix this code to ensure that yield from works. Simply transform your return from a tuple to a generator-compatible type by enclosing it in a list: yield [(parents, parentChild)] Then, when you call yield from recur1(df=df, parents=parents, parentChild=parentChild, step=step+1) you'll always be working with something for which yeild from makes sense.
Python: How can I implement yield in my recursion?
How can I implement yield from in my recursion? I am trying to understand how to implement it but failing: # some data init_parent = [1020253] df = pd.DataFrame({'parent': [1020253, 1020253], 'id': [1101941, 1101945]}) # look for parent child def recur1(df, parents, parentChild=None, step=0): if len(parents) != 0: yield parents, parentChild else: parents = df.loc[df['parent'].isin(parents)][['id', 'parent']] parentChild = parents['parent'].to_numpy() parents = parents['id'].to_numpy() yield from recur1(df=df, parents=parents, parentChild=parentChild, step=step+1) # exec / only printing results atm out = recur1(df, init_parent, step=0) [x for x in out]
[ "I'd say your biggest issue here is that recur1 isn't always guaranteed to return a generator. For example, suppose your stack calls into the else branch three times before calling into the if branch. In this case, the top three frames would be returning a generator received from the lower frame, but the lowest from would be returned from this:\nyield parents, parentChild\n\nSo, then, there is a really simple way you can fix this code to ensure that yield from works. Simply transform your return from a tuple to a generator-compatible type by enclosing it in a list:\nyield [(parents, parentChild)]\n\nThen, when you call yield from recur1(df=df, parents=parents, parentChild=parentChild, step=step+1) you'll always be working with something for which yeild from makes sense.\n" ]
[ 0 ]
[]
[]
[ "pandas", "python", "recursion" ]
stackoverflow_0074608995_pandas_python_recursion.txt
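Leaving the DataFrame details aside, a tiny self-contained sketch of the pattern the answer relies on: in a recursive generator the base case yields a value and the recursive case delegates with yield from, so every call produces a generator:

def countdown(n):
    if n == 0:
        yield n                      # base case: still a generator
    else:
        yield n
        yield from countdown(n - 1)  # recursive case: delegate to the inner generator

print(list(countdown(3)))  # [3, 2, 1, 0]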
Q: Is there anyway to fix 'int' is not subscriptable without changing it to string data type? I'm making a list to store my direction: direction = [(0, +24),(+24,0),(0, -24),(-24,0)] And using that list in this function to determine the next direction the robot will take (or backtrack) def backtrack(self,x,y,direction): x_walls = round(sprite.xcor(), 0) y_walls = round(sprite.ycor(), 0) visited.append((x_walls, y_walls)) for i in range(4): new_direction = (direction + i) % 4 new_x = x + direction[new_direction][0] new_y = y + direction[new_direction][1] if (new_x,new_y) not in visited and sprite.spriteMove(): sprite.backtrack(new_x,new_y,new_direction) sprite.spriteback() sprite.right(90) But when I try to call that function sprite.backtrack(0,0,0) It give me error int is not subscriptable, any tips for this guys? I try to convert the whole list to string, but I need it at integer for the direction formula in the function, so currently I am clueless of what to do next A: I was confusing the object with the list, changing the list to another name then add it to the function make it work new_x = x + sprite.directions[new_direction][0] new_y = y + sprite.directions[new_direction][1]
Is there any way to fix 'int' is not subscriptable without changing it to the string data type?
I'm making a list to store my direction: direction = [(0, +24),(+24,0),(0, -24),(-24,0)] And using that list in this function to determine the next direction the robot will take (or backtrack) def backtrack(self,x,y,direction): x_walls = round(sprite.xcor(), 0) y_walls = round(sprite.ycor(), 0) visited.append((x_walls, y_walls)) for i in range(4): new_direction = (direction + i) % 4 new_x = x + direction[new_direction][0] new_y = y + direction[new_direction][1] if (new_x,new_y) not in visited and sprite.spriteMove(): sprite.backtrack(new_x,new_y,new_direction) sprite.spriteback() sprite.right(90) But when I try to call that function sprite.backtrack(0,0,0) It give me error int is not subscriptable, any tips for this guys? I try to convert the whole list to string, but I need it at integer for the direction formula in the function, so currently I am clueless of what to do next
[ "I was confusing the object with the list, changing the list to another name then add it to the function make it work\nnew_x = x + sprite.directions[new_direction][0]\n new_y = y + sprite.directions[new_direction][1]\n\n" ]
[ 0 ]
[]
[]
[ "function", "list", "python", "python_3.x" ]
stackoverflow_0074609023_function_list_python_python_3.x.txt
Q: How to list distinct values of pyspark dataframe wrt null values in another column i have a pyspark dataframe: rowNum Vehicle Production 1 1234 5678 2 null 1254 3 null 4567 4 null 4567 i want to pick all the distinct values of Production in a list format where Vehicle is null. How to achieve this? result: production list=['1254','4567'] how to achieve this in pyspark dataframe A: I would do something like this: # Using Spark 3.3.0 # Dataset as per the question data = [ [1, '1234', 5678] , [2, 'Null', 1254] , [3, 'Null', 4567] , [4, 'Null', 4567] ] cols = ['rowNum', 'Vehicle', 'Production'] # Creating Dataframe df = spark.createDataFrame(data, cols) # list comprehension to represent the distinct Production values on 'Null' Vehicles list = [p.Production for p in df.select('Production').distinct().where("Vehicle == 'Null'").collect()] list The output I am having is the following:
How to list distinct values of pyspark dataframe wrt null values in another column
I have a PySpark DataFrame:

rowNum  Vehicle  Production
1       1234     5678
2       null     1254
3       null     4567
4       null     4567

I want to pick all the distinct values of Production, as a list, for the rows where Vehicle is null. How can I achieve this with a PySpark DataFrame?
Expected result: production list = ['1254', '4567']
[ "I would do something like this:\n# Using Spark 3.3.0\n\n# Dataset as per the question\ndata = [\n [1, '1234', 5678]\n, [2, 'Null', 1254]\n, [3, 'Null', 4567]\n, [4, 'Null', 4567] \n]\n\ncols = ['rowNum', 'Vehicle', 'Production']\n\n# Creating Dataframe\ndf = spark.createDataFrame(data, cols)\n\n\n# list comprehension to represent the distinct Production values on 'Null' Vehicles\nlist = [p.Production for p in df.select('Production').distinct().where(\"Vehicle == 'Null'\").collect()]\n\nlist\n\nThe output I am having is the following:\n\n" ]
[ 1 ]
[]
[]
[ "pyspark", "python", "python_3.x" ]
stackoverflow_0074608499_pyspark_python_python_3.x.txt
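If the Vehicle column contains real nulls (None) rather than the string 'Null' used in the answer's sample data, the same idea can be written with isNull(); a sketch:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

data = [(1, '1234', 5678), (2, None, 1254), (3, None, 4567), (4, None, 4567)]
df = spark.createDataFrame(data, ['rowNum', 'Vehicle', 'Production'])

# Distinct Production values on rows where Vehicle is genuinely null
production_list = [
    row['Production']
    for row in df.where(F.col('Vehicle').isNull())
                 .select('Production')
                 .distinct()
                 .collect()
]
print(production_list)  # e.g. [1254, 4567] (row order of distinct() is not guaranteed)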
Q: How do I make 2 images appear side by side in Jupyter notebook (iPython)? I want to display 2 PNG images in iPython side by side. My code to do this is: from IPython.display import Image, HTML, display img_A = '\path\to\img_A.png' img_B = '\path\to\img_B.png' display(HTML("<table><tr><td><img src=img_A></td><td><img src=img_B></td></tr></table>")) But it doesn't output the images, and instead only displays placeholders for the 2 images: I tried the following as well: s = """<table> <tr> <th><img src="%s"/></th> <th><img src="%s"/></th> </tr></table>"""%(img_A, img_B) t=HTML(s) display(t) But the result is the same: The images are in the path for sure, because I verified by displaying them in a pop up: plt.imshow(img_A) plt.imshow(img_B) and they do appear in the pop ups. How do I make the 2 images appear side by side in iPython? A: You can try using matplotlib. You can read image to numpy array by using mpimg.imread (documentation) from matplotlib, then you can use subplots (documentation) and for creating two columns for figures and finally imshow (documetation) to display images. import matplotlib.pyplot as plt import matplotlib.image as mpimg from matplotlib import rcParams %matplotlib inline # figure size in inches optional rcParams['figure.figsize'] = 11 ,8 # read images img_A = mpimg.imread('\path\to\img_A.png') img_B = mpimg.imread('\path\to\img_B.png') # display images fig, ax = plt.subplots(1,2) ax[0].imshow(img_A) ax[1].imshow(img_B) A: matplotlib is a very good tool for plotting but I found it very heavy and slow for scenarios where I simply need a fast and easy way to display bigger number of images. To solve this I'm using IPyPlot package: import ipyplot ipyplot.plot_images(images_list, max_images=20, img_width=150) You would get a plot similar to this: A: Kind of late but i found out about this over here You can do it via Hbox. Hbox is a special container where you can add widgets. It aims at providing an efficient way to lay out, align and distribute space among items in a given space. Although its a bit cumbersome to define so many elements just to display 2 images you can add a lot more functionality like sliders, dropdown menus as well as buttons making your Jupiter notebook more interactive. import IPython.display as display import ipywidgets as widgets img1=open('path_to_image','rb').read() wi1 = widgets.Image(value=img1, format='jpg', width=300, height=400) img2=open('path_to_image','rb').read() wi2 = widgets.Image(value=img2, format='jpg', width=300, height=400) a=[wi1,wi2] wid=widgets.HBox(a) display.display(wid) A: This easy solution worked great for me: from IPython.display import Video, Image, HTML, display image_path1 = "/myfolder/my_img1.jpg" image_path2 = "/myfolder/my_img2.jpg" HTML(f""" <div class="row"> <img src={image_path1} style="width:30%"> </img> <img src={image_path1} style="width:53.2%"> </img> </div> """) I put in different widths for if you have a portrait and a landscape picture, but that is up to you depending on the images and the aspect ratios. 
A: For some guys who may want to place an image and a text snippet side by side(adapted from this answer): import base64 from IPython.display import HTML, display b64_img = base64.b64encode(open('./path', 'rb').read()).decode('ascii') txt = "sjfa;\nakd;f\nsdfa" display(HTML(f""" <div class="row"> <img style="float:left;margin-right:30px;" src="data:image/jpeg;base64,{b64_img}" width="400" height="300" /> <div style="font-size:10px;white-space:pre;">{txt}</div> </div> """)) A: It should be like this: from IPython.display import Image, HTML, display img_A = '\path\to\img_A.png' img_B = '\path\to\img_B.png' display(HTML("<table><tr><td><img src={0}></td><td><img src={1}></td></tr></table>".format(image_A,img_B))) You mistook the variable name as variable itself.
How do I make 2 images appear side by side in Jupyter notebook (iPython)?
I want to display 2 PNG images in iPython side by side. My code to do this is: from IPython.display import Image, HTML, display img_A = '\path\to\img_A.png' img_B = '\path\to\img_B.png' display(HTML("<table><tr><td><img src=img_A></td><td><img src=img_B></td></tr></table>")) But it doesn't output the images, and instead only displays placeholders for the 2 images: I tried the following as well: s = """<table> <tr> <th><img src="%s"/></th> <th><img src="%s"/></th> </tr></table>"""%(img_A, img_B) t=HTML(s) display(t) But the result is the same: The images are in the path for sure, because I verified by displaying them in a pop up: plt.imshow(img_A) plt.imshow(img_B) and they do appear in the pop ups. How do I make the 2 images appear side by side in iPython?
[ "You can try using matplotlib. You can read image to numpy array by using mpimg.imread (documentation) from matplotlib, then you can use subplots (documentation) and for creating two columns for figures and finally imshow (documetation) to display images.\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nfrom matplotlib import rcParams\n\n%matplotlib inline\n\n# figure size in inches optional\nrcParams['figure.figsize'] = 11 ,8\n\n# read images\nimg_A = mpimg.imread('\\path\\to\\img_A.png')\nimg_B = mpimg.imread('\\path\\to\\img_B.png')\n\n# display images\nfig, ax = plt.subplots(1,2)\nax[0].imshow(img_A)\nax[1].imshow(img_B)\n\n", "matplotlib is a very good tool for plotting but I found it very heavy and slow for scenarios where I simply need a fast and easy way to display bigger number of images.\nTo solve this I'm using IPyPlot package:\nimport ipyplot\n\nipyplot.plot_images(images_list, max_images=20, img_width=150)\n\nYou would get a plot similar to this:\n\n", "Kind of late but i found out about this over here\nYou can do it via Hbox.\nHbox is a special container where you can add widgets. It aims at providing an efficient way to lay out, align and distribute space among items in a given space. Although its a bit cumbersome to define so many elements just to display 2 images you can add a lot more functionality like sliders, dropdown menus as well as buttons making your Jupiter notebook more interactive.\nimport IPython.display as display\nimport ipywidgets as widgets\n\nimg1=open('path_to_image','rb').read()\nwi1 = widgets.Image(value=img1, format='jpg', width=300, height=400)\nimg2=open('path_to_image','rb').read()\nwi2 = widgets.Image(value=img2, format='jpg', width=300, height=400)\na=[wi1,wi2]\nwid=widgets.HBox(a)\ndisplay.display(wid)\n\n", "This easy solution worked great for me:\nfrom IPython.display import Video, Image, HTML, display\n\nimage_path1 = \"/myfolder/my_img1.jpg\"\nimage_path2 = \"/myfolder/my_img2.jpg\"\n\n\nHTML(f\"\"\"\n <div class=\"row\">\n <img src={image_path1} style=\"width:30%\"> </img>\n <img src={image_path1} style=\"width:53.2%\"> </img>\n </div>\n \"\"\")\n\nI put in different widths for if you have a portrait and a landscape picture, but that is up to you depending on the images and the aspect ratios.\n", "For some guys who may want to place an image and a text snippet side by side(adapted from this answer):\nimport base64\nfrom IPython.display import HTML, display\nb64_img = base64.b64encode(open('./path', 'rb').read()).decode('ascii')\ntxt = \"sjfa;\\nakd;f\\nsdfa\"\ndisplay(HTML(f\"\"\"\n <div class=\"row\">\n <img style=\"float:left;margin-right:30px;\" src=\"data:image/jpeg;base64,{b64_img}\" width=\"400\" height=\"300\" />\n <div style=\"font-size:10px;white-space:pre;\">{txt}</div>\n </div>\n \"\"\"))\n\n", "It should be like this:\nfrom IPython.display import Image, HTML, display\n\nimg_A = '\\path\\to\\img_A.png'\nimg_B = '\\path\\to\\img_B.png'\n\ndisplay(HTML(\"<table><tr><td><img src={0}></td><td><img src={1}></td></tr></table>\".format(image_A,img_B)))\n\nYou mistook the variable name as variable itself.\n" ]
[ 21, 13, 3, 1, 0, 0 ]
[]
[]
[ "image", "ipython", "jupyter_notebook", "python" ]
stackoverflow_0050559000_image_ipython_jupyter_notebook_python.txt
Q: How to pass data from one view to another one in Fastapi? I have a variable set in one view in Fastapi and want to pass it to another one : from fastapi import APIRouter, Request, Response from fastapi.templating import Jinja2Templates templates = Jinja2Templates(directory="templates") router = APIRouter() @router.get("/my-first-view") async def function1(request: Request) -> Response: """Display the home page.""" my_variable = value return templates.TemplateResponse( "home.jinja", context={ "my_variable": my_variable }, ) @router.get("/my-second-view") async def function2(request: Request, my_variable: str) -> Response: """Display the variable processing page.""" return templates.TemplateResponse( "page.jinja" ) Normally, this would come to send my_variable from home.jinja to page.jinja. Thus, in home.jinja I have the following : ... <a href="{{url_for('function2', my_variable=my_variable)}}" title="connect">Connect</a> ... But this is throwing me an error : "starlette.routing.NoMatchFound: No route exists for name \"function2\" and params \"my_variable\".\n". I did some researches but I haven't found something really helpful What is the proper way to do it with Fastapi ? What am I missing ?
How to pass data from one view to another one in Fastapi?
I have a variable set in one view in Fastapi and want to pass it to another one : from fastapi import APIRouter, Request, Response from fastapi.templating import Jinja2Templates templates = Jinja2Templates(directory="templates") router = APIRouter() @router.get("/my-first-view") async def function1(request: Request) -> Response: """Display the home page.""" my_variable = value return templates.TemplateResponse( "home.jinja", context={ "my_variable": my_variable }, ) @router.get("/my-second-view") async def function2(request: Request, my_variable: str) -> Response: """Display the variable processing page.""" return templates.TemplateResponse( "page.jinja" ) Normally, this would come to send my_variable from home.jinja to page.jinja. Thus, in home.jinja I have the following : ... <a href="{{url_for('function2', my_variable=my_variable)}}" title="connect">Connect</a> ... But this is throwing me an error : "starlette.routing.NoMatchFound: No route exists for name \"function2\" and params \"my_variable\".\n". I did some researches but I haven't found something really helpful What is the proper way to do it with Fastapi ? What am I missing ?
[]
[]
[ "Very little context information on this post, so I'll help out with fixes that change as little as possible.\nFirst of all, to fix your code you need a place to save the changes that you've made. A variable that only exists in a function is deleted at the end of that function.\nNow usually you'd give more information regarding your use case, and I'd give you the best choice and why. It could be to store data in a JSON file, or a Database, or an in memory object that would serve the purpose best.\nHere there's no info so we'll just make a variable. Keep in mind that this variable will be reset every time you restart the shell. So if your API is not \"always on\" then this may no work.\nfrom fastapi import APIRouter, Request, Response\nfrom fastapi.templating import Jinja2Templates\n\nVARIABLE = None\nAPI_URL = \"my_domain.com/api\"\n\ntemplates = Jinja2Templates(directory=\"templates\")\nrouter = APIRouter()\n\[email protected](\"/my-first-view\")\nasync def function1(request: Request) -> Response:\n \"\"\"Display the home page.\"\"\"\n my_variable = value\n return templates.TemplateResponse(\n \"home.jinja\",\n context={\n \"complete_url\": f\"{API_URL}/my-second-view?my_variable={VARIABLE}\"\n },\n )\n\[email protected](\"/my-second-view\")\nasync def function2(request: Request, my_variable: str) -> Response:\n \"\"\"Display the variable processing page.\"\"\"\n\n VARIABLE = my_variable\n \n return templates.TemplateResponse(\n \"page.jinja\"\n )\n\nAs far as the template goes, you haven't shared url_for, but I suppose it just creates the path. I'd just make home.jinja as follows.\n<a href=\"{{ complete_url }}\" title=\"connect\">Connect</a>\n\nAlso I've replaced the url_for call with a simple f-string.\nDepending on what you pass in your VARIABLE you may need to encode it.\nFor an example of that, please look at the following thread: How to urlencode a querystring in Python?\n" ]
[ -1 ]
[ "fastapi", "python" ]
stackoverflow_0074609265_fastapi_python.txt
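One common way around the NoMatchFound error, sketched under the assumption that my_variable may appear in the URL: in the Starlette version behind this error, url_for only fills path parameters, so the second route has to declare {my_variable} in its path for url_for('function2', my_variable=...) to resolve. This reuses the router, templates and imports from the question:

@router.get("/my-second-view/{my_variable}")
async def function2(request: Request, my_variable: str) -> Response:
    """Display the variable processing page."""
    return templates.TemplateResponse(
        "page.jinja",
        context={"request": request, "my_variable": my_variable},
    )

# home.jinja can then build the link from the path parameter:
# <a href="{{ url_for('function2', my_variable=my_variable) }}" title="connect">Connect</a>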
Q: Killing a python thread I would like to kill somehow a running thread from my GUI application via setting an event, but I can't use a for loop in my thread so I need some other solution to check the event I have the following situation. In a tkinter gui when I click a button I start a thread and set a global variable. self.thread = StoppableThread(caller=self) self.thread.start() is_running = 1 When I next click the button I check the global variable state and if it is already set I send a stop request: if is_running: is_running = 0 self.thread.stop() This is my thread class: import threading from time import sleep class StoppableThread(threading.Thread): def __init__(self, caller=None): super(StoppableThread, self).__init__() self._stop_event = threading.Event() self.caller = caller def stop(self): self._stop_event.set() def stopped(self): return self._stop_event.is_set() def run(self) -> None: while True: # check for stop if self.stopped(): break for i in range(10): sleep(1) print('Test') print('Worker done') break Everything works if I change the while to a for loop, but because in this point in my business logic I doesn't have anything to loop for I need to check somehow different the state of the self.stopped(). Is there any way to check it in the while loop? Or how can I achive this? I tried to use process instead of thread but it wasnt worked because of an error 'process can't pickle tkinter'. Thank you for any help A: This loop will run forever until you set the flag: def run(self): while not self.stopped(): sleep(1) print('Test') You don't actually need an event. A simple Boolean will do. FOLLOWUP Here's an example based on your code that shows how this works: import threading from time import sleep class StoppableThread(threading.Thread): def __init__(self, caller=None): super(StoppableThread, self).__init__() self._stop_event = False self.caller = caller def stop(self): self._stop_event = True def stopped(self): return self._stop_event def run(self) -> None: while not self.stopped(): sleep(1) print('Test') print("exited") thread = StoppableThread(caller=None) thread.start() sleep(5) thread.stop() sleep(1) print("program ending")
Killing a python thread
I would like to kill somehow a running thread from my GUI application via setting an event, but I can't use a for loop in my thread so I need some other solution to check the event I have the following situation. In a tkinter gui when I click a button I start a thread and set a global variable. self.thread = StoppableThread(caller=self) self.thread.start() is_running = 1 When I next click the button I check the global variable state and if it is already set I send a stop request: if is_running: is_running = 0 self.thread.stop() This is my thread class: import threading from time import sleep class StoppableThread(threading.Thread): def __init__(self, caller=None): super(StoppableThread, self).__init__() self._stop_event = threading.Event() self.caller = caller def stop(self): self._stop_event.set() def stopped(self): return self._stop_event.is_set() def run(self) -> None: while True: # check for stop if self.stopped(): break for i in range(10): sleep(1) print('Test') print('Worker done') break Everything works if I change the while to a for loop, but because in this point in my business logic I doesn't have anything to loop for I need to check somehow different the state of the self.stopped(). Is there any way to check it in the while loop? Or how can I achive this? I tried to use process instead of thread but it wasnt worked because of an error 'process can't pickle tkinter'. Thank you for any help
[ "This loop will run forever until you set the flag:\n def run(self):\n while not self.stopped():\n sleep(1)\n print('Test')\n\nYou don't actually need an event. A simple Boolean will do.\nFOLLOWUP\nHere's an example based on your code that shows how this works:\nimport threading\nfrom time import sleep\n\nclass StoppableThread(threading.Thread):\n def __init__(self, caller=None):\n super(StoppableThread, self).__init__()\n self._stop_event = False\n self.caller = caller\n\n def stop(self):\n self._stop_event = True\n\n def stopped(self):\n return self._stop_event\n\n def run(self) -> None:\n while not self.stopped():\n sleep(1)\n print('Test')\n print(\"exited\")\n\nthread = StoppableThread(caller=None)\nthread.start()\nsleep(5)\nthread.stop()\nsleep(1)\nprint(\"program ending\")\n\n" ]
[ 1 ]
[]
[]
[ "multithreading", "python", "python_3.x", "tkinter" ]
stackoverflow_0074609532_multithreading_python_python_3.x_tkinter.txt
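If you keep the threading.Event from the question instead of a plain boolean, the same loop can use event.wait() as its sleep, which also lets stop() interrupt the one-second pause immediately; a sketch:

import threading
import time

class StoppableThread(threading.Thread):
    def __init__(self, caller=None):
        super().__init__()
        self._stop_event = threading.Event()
        self.caller = caller

    def stop(self):
        self._stop_event.set()

    def run(self):
        # wait() returns False on timeout and True as soon as stop() sets the event
        while not self._stop_event.wait(timeout=1):
            print('Test')
        print('Worker done')

thread = StoppableThread()
thread.start()
time.sleep(5)   # let it run for a few seconds
thread.stop()
thread.join()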
Q: How to use multiple GPUs in pytorch? I use this command to use a GPU. device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") But, I want to use two GPUs in jupyter, like this: device = torch.device("cuda:0,1" if torch.cuda.is_available() else "cpu") A: Assuming that you want to distribute the data across the available GPUs (If you have batch size of 16, and 2 GPUs, you might be looking providing the 8 samples to each of the GPUs), and not really spread out the parts of models across difference GPU's. This can be done as follows: If you want to use all the available GPUs: device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = CreateModel() model= nn.DataParallel(model) model.to(device) If you want to use specific GPUs: (For example, using 2 out of 4 GPUs) device = torch.device("cuda:1,3" if torch.cuda.is_available() else "cpu") ## specify the GPU id's, GPU id's start from 0. model = CreateModel() model= nn.DataParallel(model,device_ids = [1, 3]) model.to(device) To use the specific GPU's by setting OS environment variable: Before executing the program, set CUDA_VISIBLE_DEVICES variable as follows: export CUDA_VISIBLE_DEVICES=1,3 (Assuming you want to select 2nd and 4th GPU) Then, within program, you can just use DataParallel() as though you want to use all the GPUs. (similar to 1st case). Here the GPUs available for the program is restricted by the OS environment variable. device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = CreateModel() model= nn.DataParallel(model) model.to(device) In all of these cases, the data has to be mapped to the device. If X and y are the data: X.to(device) y.to(device) A: Using multi-GPUs is as simply as wrapping a model in DataParallel and increasing the batch size. Check these two tutorials for a quick start: Multi-GPU Examples Data Parallelism A: Another option would be to use some helper libraries for PyTorch: PyTorch Ignite library Distributed GPU training In there there is a concept of context manager for distributed configuration on: nccl - torch native distributed configuration on multiple GPUs xla-tpu - TPUs distributed configuration PyTorch Lightning Multi-GPU training This is of possible the best option IMHO to train on CPU/GPU/TPU without changing your original PyTorch code. Worth cheking Catalyst for similar distributed GPU options. A: When I ran naiveinception_googlenet, the above methods didn't work for me. The following method solved my problem. import os os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" os.environ["CUDA_VISIBLE_DEVICES"] = "0,3" # specify which GPU(s) to be used A: If you want to run your code only on specific GPUs (e.g. only on GPU id 2 and 3), then you can specify that using the CUDA_VISIBLE_DEVICES=2,3 variable when triggering the python code from terminal. CUDA_VISIBLE_DEVICES=2,3 python lstm_demo_example.py --epochs=30 --lr=0.001 and inside the code, leave it as: device = torch.device("cuda" if torch.cuda.is_available() else 'cpu') model = LSTMModel() model = nn.DataParallel(model) model = model.to(device) Source : https://glassboxmedicine.com/2020/03/04/multi-gpu-training-in-pytorch-data-and-model-parallelism/ A: In 2022, PyTorch says: It is recommended to use DistributedDataParallel, instead of this class, to do multi-GPU training, even if there is only a single node. See: Use nn.parallel.DistributedDataParallel instead of multiprocessing or nn.DataParallel and Distributed Data Parallel. 
in https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html#torch.nn.DataParallel Thus, seems that we should use DistributedDataParallel, not DataParallel.
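The last answer recommends DistributedDataParallel over DataParallel but stops short of showing it. Below is a minimal single-node sketch of that route, not taken from any of the answers above: ToyModel, the world size of 2 and the nccl backend are placeholders to swap for your own model and GPU count.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(10, 2)

    def forward(self, x):
        return self.net(x)

def worker(rank, world_size):
    # one process per GPU; rank doubles as the GPU index this process drives
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    model = ToyModel().to(rank)
    ddp_model = DDP(model, device_ids=[rank])

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    # in a real run each process loads its own shard, e.g. via DistributedSampler
    outputs = ddp_model(torch.randn(20, 10).to(rank))
    labels = torch.randn(20, 2).to(rank)
    loss_fn(outputs, labels).backward()
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2  # assumed number of GPUs
    mp.spawn(worker, args=(world_size,), nprocs=world_size, join=True)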
How to use multiple GPUs in pytorch?
I use this command to use a GPU. device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") But, I want to use two GPUs in jupyter, like this: device = torch.device("cuda:0,1" if torch.cuda.is_available() else "cpu")
[ "Assuming that you want to distribute the data across the available GPUs (If you have batch size of 16, and 2 GPUs, you might be looking providing the 8 samples to each of the GPUs), and not really spread out the parts of models across difference GPU's. This can be done as follows:\nIf you want to use all the available GPUs:\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\nmodel = CreateModel()\n\nmodel= nn.DataParallel(model)\nmodel.to(device)\n\nIf you want to use specific GPUs:\n(For example, using 2 out of 4 GPUs)\ndevice = torch.device(\"cuda:1,3\" if torch.cuda.is_available() else \"cpu\") ## specify the GPU id's, GPU id's start from 0.\n\nmodel = CreateModel()\n\nmodel= nn.DataParallel(model,device_ids = [1, 3])\nmodel.to(device)\n\nTo use the specific GPU's by setting OS environment variable:\nBefore executing the program, set CUDA_VISIBLE_DEVICES variable as follows:\nexport CUDA_VISIBLE_DEVICES=1,3 (Assuming you want to select 2nd and 4th GPU)\nThen, within program, you can just use DataParallel() as though you want to use all the GPUs. (similar to 1st case). Here the GPUs available for the program is restricted by the OS environment variable.\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\nmodel = CreateModel()\n\nmodel= nn.DataParallel(model)\nmodel.to(device)\n\nIn all of these cases, the data has to be mapped to the device.\nIf X and y are the data:\nX.to(device)\ny.to(device)\n\n\n", "Using multi-GPUs is as simply as wrapping a model in DataParallel and increasing the batch size. Check these two tutorials for a quick start: \n\nMulti-GPU Examples\nData Parallelism\n\n", "Another option would be to use some helper libraries for PyTorch:\nPyTorch Ignite library Distributed GPU training\nIn there there is a concept of context manager for distributed configuration on:\n\nnccl - torch native distributed configuration on multiple GPUs\nxla-tpu - TPUs distributed configuration\n\nPyTorch Lightning Multi-GPU training\nThis is of possible the best option IMHO to train on CPU/GPU/TPU without changing your original PyTorch code.\nWorth cheking Catalyst for similar distributed GPU options.\n", "When I ran naiveinception_googlenet, the above methods didn't work for me. The following method solved my problem.\nimport os\nos.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0,3\" # specify which GPU(s) to be used\n\n", "If you want to run your code only on specific GPUs (e.g. only on GPU id 2 and 3), then you can specify that using the CUDA_VISIBLE_DEVICES=2,3 variable when triggering the python code from terminal.\nCUDA_VISIBLE_DEVICES=2,3 python lstm_demo_example.py --epochs=30 --lr=0.001\n\nand inside the code, leave it as:\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else 'cpu')\nmodel = LSTMModel()\nmodel = nn.DataParallel(model)\nmodel = model.to(device)\n\nSource : https://glassboxmedicine.com/2020/03/04/multi-gpu-training-in-pytorch-data-and-model-parallelism/\n", "In 2022, PyTorch says:\n\nIt is recommended to use DistributedDataParallel, instead of this class, to do multi-GPU training, even if there is only a single node. See: Use nn.parallel.DistributedDataParallel instead of multiprocessing or nn.DataParallel and Distributed Data Parallel.\n\nin https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html#torch.nn.DataParallel\nThus, seems that we should use DistributedDataParallel, not DataParallel.\n" ]
[ 58, 22, 5, 1, 1, 1 ]
[]
[]
[ "python", "pytorch" ]
stackoverflow_0054216920_python_pytorch.txt
Q: How to make Optuna get replicable results? I'm using optuna to tune LGBM. Random seeds had been set but each time Optuna got different set of best params. Here's my optuna code: def get_hpo_params(opt_X_train, opt_X_val, opt_y_train, opt_y_val, n_trials=180, cat_features=""): def objective(trial): dtrain = lgb.Dataset(opt_X_train, opt_y_train, categorical_feature=cat_features) dval = lgb.Dataset(opt_X_val, opt_y_val, categorical_feature=cat_features) upper = min(32768, int(opt_X_train.shape[0])) params = { "objective": "binary", "metric": "auc", "random_state": 10, "verbosity": -1, "boosting": "gbdt", "num_threads": 4, "num_leaves": trial.suggest_int("num_leaves", 4, 30), "learning_rate": trial.suggest_loguniform("learning_rate", 0.005, 1.0), "bagging_fraction": trial.suggest_uniform("bagging_fraction", 0.1, 1.0), "feature_fraction": trial.suggest_uniform("feature_fraction", 0.1, 1.0), "bagging_freq": trial.suggest_int("bagging_freq", 10, 30), "min_data_in_leaf": trial.suggest_int("min_data_in_leaf", 1000, 3000), "num_iterations": trial.suggest_int("num_iterations", 1000, 3000) } pruning_callback = optuna.integration.LightGBMPruningCallback(trial, "auc") clf = lgb.train( params, dtrain, valid_sets=[dval], verbose_eval=False, callbacks=[pruning_callback] ) y_val_pred = clf.predict(opt_X_val) auc = roc_auc_score(opt_y_val, y_val_pred) return auc start = timeit.default_timer() study = optuna.create_study(direction="maximize", pruner=optuna.pruners.HyperbandPruner(), sampler=optuna.samplers.TPESampler(seed=10), study_name='lgbm_hpo') study.optimize(objective, n_trials=n_trials) print("Number of finished trials: {}".format(len(study.trials))) best_trial = study.best_trial print(f"Best trial performance: {best_trial.value}") stop = timeit.default_timer() print('Time (min): ', (stop - start)/60) return best_trial.params If I restart my ipython notebook kernel, the results will be different. Is there anyway to make optuna output reproducable? A: Ohh, I finally solved the problem. Need to set hash seed on my laptop using this method: https://gerrychain.readthedocs.io/en/latest/topics/reproducibility.html#set-pythonhashseed-0 This solution is mentioned in optuna for reproducing pruning behaviour: https://optuna.readthedocs.io/en/stable/reference/generated/optuna.pruners.HyperbandPruner.html
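Since the fix boils down to seeding both Optuna and Python's hash, here is a small self-contained sketch of a reproducible study. The toy objective and trial count are made up; the key pieces are the seeded TPESampler and launching the process with PYTHONHASHSEED already set, because the hash seed cannot be changed from inside a running notebook.
# start the kernel with the hash seed fixed, e.g.:  PYTHONHASHSEED=0 jupyter notebook
import optuna

def objective(trial):
    # placeholder objective; in the question this would train the LightGBM model
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2

study = optuna.create_study(
    direction="minimize",
    sampler=optuna.samplers.TPESampler(seed=10),   # fixes the suggestion sequence
    pruner=optuna.pruners.HyperbandPruner(),       # its bracket assignment uses Python's hash, hence the env var
)
study.optimize(objective, n_trials=20)
print(study.best_params)
If the objective trains LightGBM, fixing the model's own seed parameters (seed, bagging_seed, feature_fraction_seed, or deterministic=True) removes another source of run-to-run noise.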
How to make Optuna get replicable results?
I'm using optuna to tune LGBM. Random seeds had been set but each time Optuna got different set of best params. Here's my optuna code: def get_hpo_params(opt_X_train, opt_X_val, opt_y_train, opt_y_val, n_trials=180, cat_features=""): def objective(trial): dtrain = lgb.Dataset(opt_X_train, opt_y_train, categorical_feature=cat_features) dval = lgb.Dataset(opt_X_val, opt_y_val, categorical_feature=cat_features) upper = min(32768, int(opt_X_train.shape[0])) params = { "objective": "binary", "metric": "auc", "random_state": 10, "verbosity": -1, "boosting": "gbdt", "num_threads": 4, "num_leaves": trial.suggest_int("num_leaves", 4, 30), "learning_rate": trial.suggest_loguniform("learning_rate", 0.005, 1.0), "bagging_fraction": trial.suggest_uniform("bagging_fraction", 0.1, 1.0), "feature_fraction": trial.suggest_uniform("feature_fraction", 0.1, 1.0), "bagging_freq": trial.suggest_int("bagging_freq", 10, 30), "min_data_in_leaf": trial.suggest_int("min_data_in_leaf", 1000, 3000), "num_iterations": trial.suggest_int("num_iterations", 1000, 3000) } pruning_callback = optuna.integration.LightGBMPruningCallback(trial, "auc") clf = lgb.train( params, dtrain, valid_sets=[dval], verbose_eval=False, callbacks=[pruning_callback] ) y_val_pred = clf.predict(opt_X_val) auc = roc_auc_score(opt_y_val, y_val_pred) return auc start = timeit.default_timer() study = optuna.create_study(direction="maximize", pruner=optuna.pruners.HyperbandPruner(), sampler=optuna.samplers.TPESampler(seed=10), study_name='lgbm_hpo') study.optimize(objective, n_trials=n_trials) print("Number of finished trials: {}".format(len(study.trials))) best_trial = study.best_trial print(f"Best trial performance: {best_trial.value}") stop = timeit.default_timer() print('Time (min): ', (stop - start)/60) return best_trial.params If I restart my ipython notebook kernel, the results will be different. Is there anyway to make optuna output reproducable?
[ "Ohh, I finally solved the problem. Need to set hash seed on my laptop using this method: https://gerrychain.readthedocs.io/en/latest/topics/reproducibility.html#set-pythonhashseed-0\nThis solution is mentioned in optuna for reproducing pruning behaviour: https://optuna.readthedocs.io/en/stable/reference/generated/optuna.pruners.HyperbandPruner.html\n" ]
[ 0 ]
[]
[]
[ "lightgbm", "optuna", "python", "replicate" ]
stackoverflow_0074609322_lightgbm_optuna_python_replicate.txt
Q: Splitting text and numbers and adding a seperator I have a string of the form: Abu Dhabi1.90Morrisville Samp Army1.90 Deccan Gladiators1.40The Chennai Braves2.87 Bangla Tigers1.90Delhi Bulls1.90 New Zealand1.68India2.15 Australia1.09Draw14.00West Indies13.00 Sri Lanka1.51Afghanistan2.50 Tas Tigers1.28South Australia3.50 Is there a regular expression that can be used so that the final output looks like Abu Dhabi , 1.90 ,Morrisville Samp Army,1.90 Deccan Gladiators, 1.40,The Chennai Braves,2.87 Bangla Tigers, 1.90, Delhi Bulls, 1.90 New Zealand, 1.68, India, 2.15 Australia, 1.09, Draw, 14.00, West Indies, 13.00 Sri Lanka, 1.51, Afghanistan, 2.50 Tas Tigers, 1.28, South Australia, 3.50 A: What about using (?<=[\d.])(?=[^\d.\n])|(?<=[^\d.])(?=[\d.]) to detect alternating numbers/non-numbers? text = '''Abu Dhabi1.90Morrisville Samp Army1.90 Deccan Gladiators1.40The Chennai Braves2.87 Bangla Tigers1.90Delhi Bulls1.90 New Zealand1.68India2.15 Australia1.09Draw14.00West Indies13.00 Sri Lanka1.51Afghanistan2.50 Tas Tigers1.28South Australia3.50''' print(re.sub('(?<=[\d.])(?=[^\d.\n])|(?<=[^\d.])(?=[\d.])', ', ', text)) Output: Abu Dhabi, 1.90, Morrisville Samp Army, 1.90 Deccan Gladiators, 1.40, The Chennai Braves, 2.87 Bangla Tigers, 1.90, Delhi Bulls, 1.90 New Zealand, 1.68, India, 2.15 Australia, 1.09, Draw, 14.00, West Indies, 13.00 Sri Lanka, 1.51, Afghanistan, 2.50 Tas Tigers, 1.28, South Australia, 3.50 regex demo A: You can use an alternation pattern to match either consecutive alphabets and spaces followed by a digit, or consecutive digits and dots followed by an alphabet, and substitute the match with itself followed by a comma and a space: import re s = '''Abu Dhabi1.90Morrisville Samp Army1.90 Deccan Gladiators1.40The Chennai Braves2.87 Bangla Tigers1.90Delhi Bulls1.90 New Zealand1.68India2.15 Australia1.09Draw14.00West Indies13.00 Sri Lanka1.51Afghanistan2.50 Tas Tigers1.28South Australia3.50''' print(re.sub(r'([A-Za-z ]+(?=\d)|[\d.]+(?=[A-Za-z]))', r'\1, ', s)) This outputs: Abu Dhabi, 1.90, Morrisville Samp Army, 1.90 Deccan Gladiators, 1.40, The Chennai Braves, 2.87 Bangla Tigers, 1.90, Delhi Bulls, 1.90 New Zealand, 1.68, India, 2.15 Australia, 1.09, Draw, 14.00, West Indies, 13.00 Sri Lanka, 1.51, Afghanistan, 2.50 Tas Tigers, 1.28, South Australia, 3.50 Demo: https://replit.com/@blhsing/NewInternationalGravity
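If the goal is structured fields rather than one comma-joined string, the same lookaround idea can feed re.split directly. This is a sketch building on the first answer, assuming Python 3.7+ (earlier versions refuse zero-width splits); the rows list is only an illustration of the output shape.
import re

text = '''Abu Dhabi1.90Morrisville Samp Army1.90
Deccan Gladiators1.40The Chennai Braves2.87'''

rows = []
for line in text.splitlines():
    # split wherever a digit/dot run meets a non-digit run, the same boundaries as the lookarounds above
    fields = re.split(r'(?<=[\d.])(?=[^\d.])|(?<=[^\d.])(?=[\d.])', line)
    rows.append(fields)

for row in rows:
    print(row)
# ['Abu Dhabi', '1.90', 'Morrisville Samp Army', '1.90']
# ['Deccan Gladiators', '1.40', 'The Chennai Braves', '2.87']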
Splitting text and numbers and adding a separator

I have a string of the form: Abu Dhabi1.90Morrisville Samp Army1.90 Deccan Gladiators1.40The Chennai Braves2.87 Bangla Tigers1.90Delhi Bulls1.90 New Zealand1.68India2.15 Australia1.09Draw14.00West Indies13.00 Sri Lanka1.51Afghanistan2.50 Tas Tigers1.28South Australia3.50 Is there a regular expression that can be used so that the final output looks like Abu Dhabi , 1.90 ,Morrisville Samp Army,1.90 Deccan Gladiators, 1.40,The Chennai Braves,2.87 Bangla Tigers, 1.90, Delhi Bulls, 1.90 New Zealand, 1.68, India, 2.15 Australia, 1.09, Draw, 14.00, West Indies, 13.00 Sri Lanka, 1.51, Afghanistan, 2.50 Tas Tigers, 1.28, South Australia, 3.50
[ "What about using (?<=[\\d.])(?=[^\\d.\\n])|(?<=[^\\d.])(?=[\\d.]) to detect alternating numbers/non-numbers?\ntext = '''Abu Dhabi1.90Morrisville Samp Army1.90\nDeccan Gladiators1.40The Chennai Braves2.87\nBangla Tigers1.90Delhi Bulls1.90\nNew Zealand1.68India2.15\nAustralia1.09Draw14.00West Indies13.00\nSri Lanka1.51Afghanistan2.50\nTas Tigers1.28South Australia3.50'''\n\nprint(re.sub('(?<=[\\d.])(?=[^\\d.\\n])|(?<=[^\\d.])(?=[\\d.])', ', ', text))\n\nOutput:\nAbu Dhabi, 1.90, Morrisville Samp Army, 1.90\nDeccan Gladiators, 1.40, The Chennai Braves, 2.87\nBangla Tigers, 1.90, Delhi Bulls, 1.90\nNew Zealand, 1.68, India, 2.15\nAustralia, 1.09, Draw, 14.00, West Indies, 13.00\nSri Lanka, 1.51, Afghanistan, 2.50\nTas Tigers, 1.28, South Australia, 3.50\n\nregex demo\n", "You can use an alternation pattern to match either consecutive alphabets and spaces followed by a digit, or consecutive digits and dots followed by an alphabet, and substitute the match with itself followed by a comma and a space:\nimport re\n\ns = '''Abu Dhabi1.90Morrisville Samp Army1.90\nDeccan Gladiators1.40The Chennai Braves2.87\nBangla Tigers1.90Delhi Bulls1.90\nNew Zealand1.68India2.15\nAustralia1.09Draw14.00West Indies13.00\nSri Lanka1.51Afghanistan2.50\nTas Tigers1.28South Australia3.50'''\n\nprint(re.sub(r'([A-Za-z ]+(?=\\d)|[\\d.]+(?=[A-Za-z]))', r'\\1, ', s))\n\nThis outputs:\nAbu Dhabi, 1.90, Morrisville Samp Army, 1.90\nDeccan Gladiators, 1.40, The Chennai Braves, 2.87\nBangla Tigers, 1.90, Delhi Bulls, 1.90\nNew Zealand, 1.68, India, 2.15\nAustralia, 1.09, Draw, 14.00, West Indies, 13.00\nSri Lanka, 1.51, Afghanistan, 2.50\nTas Tigers, 1.28, South Australia, 3.50\n\nDemo: https://replit.com/@blhsing/NewInternationalGravity\n" ]
[ 0, 0 ]
[]
[]
[ "data_cleaning", "python", "strsplit", "web_scraping" ]
stackoverflow_0074609093_data_cleaning_python_strsplit_web_scraping.txt
Q: TypeError: iText() missing 2 required positional arguments: 'text' and 'data_frame' this is my function.py def iText(text, data_frame): url = requests.get("http://nlp.cs.aueb.gr/software_and_datasets/lingspam_public.tar.gz") text = url[-1].strip() label = url[-1].strip() data_frame = pd.DataFrame(text, label) return text, data_frame this is my file test_fc.py def test_coba(): text = iText data_frame = iText assert text, data_frame It runs but I'm not sure if it is right. Can anyone help? A: No, it's not right. Your iText function doesn't have any inputs, so it shouldn't have any parameters. Next, in your test function, you need to CALL iText and check what it returns: def iText(): url = requests.get("http://nlp.cs.aueb.gr/software_and_datasets/lingspam_public.tar.gz") text = url[-1].strip() label = url[-1].strip() data_frame = pd.DataFrame(text, label) return text, data_frame def test_coba(): text, data_frame = iText() assert text, data_frame However, if you're fetching a tarball, what do you think url[-1].strip() is going to do?
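For completeness, a sketch of a test that actually exercises the code. The make_frame helper below is a hypothetical stand-in (the posted iText cannot run as written, since a requests Response is not indexable); the point is the pytest pattern of calling the function, unpacking both return values and asserting on each, instead of assert text, data_frame, which only checks text and treats data_frame as the failure message.
import pandas as pd

def make_frame(labels):
    # stand-in for a working iText(): returns the cleaned text plus a DataFrame built from it
    text = [s.strip() for s in labels]
    data_frame = pd.DataFrame({"text": text})
    return text, data_frame

def test_make_frame():
    text, data_frame = make_frame([" spam ", " ham "])
    assert text == ["spam", "ham"]             # check the values explicitly
    assert len(data_frame) == 2
    assert list(data_frame["text"]) == text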
TypeError: iText() missing 2 required positional arguments: 'text' and 'data_frame'
this is my function.py def iText(text, data_frame): url = requests.get("http://nlp.cs.aueb.gr/software_and_datasets/lingspam_public.tar.gz") text = url[-1].strip() label = url[-1].strip() data_frame = pd.DataFrame(text, label) return text, data_frame this is my file test_fc.py def test_coba(): text = iText data_frame = iText assert text, data_frame It runs but I'm not sure if it is right. Can anyone help?
[ "No, it's not right. Your iText function doesn't have any inputs, so it shouldn't have any parameters. Next, in your test function, you need to CALL iText and check what it returns:\ndef iText():\n url = requests.get(\"http://nlp.cs.aueb.gr/software_and_datasets/lingspam_public.tar.gz\")\n text = url[-1].strip() \n label = url[-1].strip() \n data_frame = pd.DataFrame(text, label)\n return text, data_frame\n\ndef test_coba():\n text, data_frame = iText()\n assert text, data_frame\n\nHowever, if you're fetching a tarball, what do you think url[-1].strip() is going to do?\n" ]
[ 0 ]
[]
[]
[ "pytest", "python" ]
stackoverflow_0074609547_pytest_python.txt
Q: Get parameter estimates from logistic regression model using pycaret I am training and tuning a model in pycaret such as: from pycaret.classification import * clf1 = setup(data = train, target = 'target', feature_selection = True, test_data = test, remove_multicollinearity = True, multicollinearity_threshold = 0.4) # create model lr = create_model('lr') # tune model tuned_lr = tune_model(lr) # optimize threshold optimized_lr = optimize_threshold(tuned_lr) I would like to get the parameters estimated for the features in the Logistic Regression, so I could proceed on understanding the effect size of each feature on the target. However, the object optimized_lr has a function optimized_lr.get_params() which returns the hiperparameters of the model, however, I am not quite interested about my tuning decisions, instead, I am very interested by the real parameters of the model, the ones estimated in Logistic Regression. How could I get them using pycaret? (I could easily get those using other packages such as statsmodels, but I want to know in pycaret) A: how about for f, c in zip (optimized_lr.feature_names_in_,tuned.coef_[0]): print(f, c)
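Building on the zip in the answer (which mixes the names optimized_lr and tuned; both should point at the same fitted model), here is a sketch that collects the estimates into a DataFrame. It assumes the tuned estimator is a scikit-learn LogisticRegression exposing coef_ and feature_names_in_ (scikit-learn 1.0+ fitted on a DataFrame), and pandas 1.1+ for the key= argument of sort_values; the toy fit below is only there to make the snippet runnable.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# stand-in for the estimator returned by pycaret; swap in optimized_lr from the question
X = pd.DataFrame({"age": [25, 32, 47, 51], "income": [20, 35, 60, 80]})
y = [0, 0, 1, 1]
optimized_lr = LogisticRegression().fit(X, y)

coefs = pd.DataFrame({
    "feature": optimized_lr.feature_names_in_,
    "coef": optimized_lr.coef_[0],
})
coefs["odds_ratio"] = np.exp(coefs["coef"])  # multiplicative effect per unit change
print(coefs.sort_values("coef", key=abs, ascending=False))
Keep in mind the coefficients refer to pycaret's preprocessed feature space (after feature selection, encoding and any scaling), not necessarily the raw input columns.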
Get parameter estimates from logistic regression model using pycaret
I am training and tuning a model in pycaret such as: from pycaret.classification import * clf1 = setup(data = train, target = 'target', feature_selection = True, test_data = test, remove_multicollinearity = True, multicollinearity_threshold = 0.4) # create model lr = create_model('lr') # tune model tuned_lr = tune_model(lr) # optimize threshold optimized_lr = optimize_threshold(tuned_lr) I would like to get the parameters estimated for the features in the Logistic Regression, so I can understand the effect size of each feature on the target. However, the object optimized_lr has a function optimized_lr.get_params() which returns the hyperparameters of the model. I am not really interested in my tuning decisions; instead, I am interested in the real parameters of the model, the ones estimated by the Logistic Regression. How could I get them using pycaret? (I could easily get those using other packages such as statsmodels, but I want to know how to do it in pycaret.)
[ "how about\nfor f, c in zip (optimized_lr.feature_names_in_,tuned.coef_[0]):\n print(f, c)\n\n" ]
[ 0 ]
[]
[]
[ "logistic_regression", "pycaret", "python", "regression" ]
stackoverflow_0073023479_logistic_regression_pycaret_python_regression.txt
Q: My Python 3 code is kinda confusing; if anyone can help I would like to know. I made a window #imports from tkinter import * import os #Window Creation window = Tk() window.title('Potato Defense') window.configure(width=1500, height=1500) window.configure(bg='lightblue') photo = PhotoImage(file = r"TestButton.png") Button(window, text = 'Click Me !', image = photo).pack(side = TOP) window.mainloop() My code is quite janky, but currently my image location gives an error because I don't know how to get the current location of files. I don't want to hard-code my directory; I want it to work on other PCs. How do I get the current directory?
My Python 3 code is kinda confusing; if anyone can help I would like to know. I made a window
#imports from tkinter import * import os #Window Creation window = Tk() window.title('Potato Defense') window.configure(width=1500, height=1500) window.configure(bg='lightblue') photo = PhotoImage(file = r"TestButton.png") Button(window, text = 'Click Me !', image = photo).pack(side = TOP) window.mainloop() My code is quite janky, but currently my image location gives an error because I don't know how to get the current location of files. I don't want to hard-code my directory; I want it to work on other PCs. How do I get the current directory?
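For reference, a sketch of the usual fix for this question: build the image path from the script's own location, so the program works on any PC as long as TestButton.png ships next to the .py file. It is an illustration only, reusing the widget setup from the question.
import os
from tkinter import Tk, Button, PhotoImage

# folder that contains this script, regardless of where it is launched from
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

window = Tk()
window.title('Potato Defense')
window.configure(width=1500, height=1500, bg='lightblue')

photo = PhotoImage(file=os.path.join(BASE_DIR, 'TestButton.png'))
Button(window, text='Click Me !', image=photo).pack(side='top')
window.mainloop()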
[]
[]
[ "If your code is in the same directory as this file, you can simply do photo = PhotoImage(file = \"./TestButton.png\")\n" ]
[ -2 ]
[ "button", "python", "python_3.x" ]
stackoverflow_0074609638_button_python_python_3.x.txt
Q: How To Automatically Create Objects Of Django Model While Creation Another Model Objects I Want To Create Object For "ProductOut" Model When "CusOrder" Model Is Being Created Here Is My Code class CusOrder(models.Model): cus_name = models.CharField(max_length=100) cus_number = models.CharField(max_length=11) product = models.ManyToManyField(Product) qty = models.IntegerField(default=0) sell_price = models.IntegerField(default=0) def __str__(self): return self.cus_name def save(self,*args,**kwrgs): ProductOut.objects.create( refrence=self.cus_number, stock_out = self.qty, sell_price = self.sell_price, product = self.product.P_name ) super(CusOrder,self).save(*args,**kwrgs) class ProductOut(models.Model): refrence = models.ManyToManyField(CusOrder) stock_out = models.IntegerField(default=0) sell_price = models.IntegerField(default=0) product = models.ForeignKey(Product,on_delete=models.CASCADE) def __str__(self): return self.refrence.cus_number But I am Getting a ValueError which Is '"<CusOrder: Asif>" needs to have a value for field "id" before this many-to-many relationship can be used.' When I want to save a CusOrder Object here Is My Whole class Catagory(models.Model): name = models.CharField(max_length=60) def __str__(self): return self.name class Product(models.Model): P_name = models.CharField(max_length=30) stock_in = models.IntegerField(default=0) unit_price = models.IntegerField(default=0) cata = models.ForeignKey(Catagory, on_delete=models.CASCADE) def __str__(self): return self.P_name class CusOrder(models.Model): cus_name = models.CharField(max_length=100) cus_number = models.CharField(max_length=11) product = models.ManyToManyField(Product) qty = models.IntegerField(default=0) sell_price = models.IntegerField(default=0) def __str__(self): return self.cus_name def save(self,*args,**kwrgs): ProductOut.objects.create( refrence=self.cus_number, stock_out = self.qty, sell_price = self.sell_price, product = self.product.P_name ) super(CusOrder,self).save(*args,**kwrgs) class ProductOut(models.Model): refrence = models.ManyToManyField(CusOrder) stock_out = models.IntegerField(default=0) sell_price = models.IntegerField(default=0) product = models.ForeignKey(Product,on_delete=models.CASCADE) def __str__(self): return self.refrence.cus_number class ReturnedProduct(models.Model): cus_number = models.ForeignKey(CusOrder,on_delete=models.CASCADE) product = models.ForeignKey(Product,on_delete=models.CASCADE) qty = models.IntegerField(default=0) def __str__(self): return self.cus_number.cus_number What Will Be The Right Proccess To Do That. A: refrence should have CusOrder instance and super should call before the creation as CusOrder object should be created first. def save(self,*args,**kwrgs): instance = super(CusOrder,self).save(*args,**kwrgs) p_instance = ProductOut.objects.create( stock_out = self.qty, sell_price = self.sell_price, product = self.product.first() ) p_instance.refrence.add(instance) p_instance.save() Best way is to use Django Signals, In above case specifically use post_save. Django Signals- master pre_save and post_save
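The answer points to Django signals as the better route; below is a hedged sketch of a post_save receiver using the model and field names from the question (including the refrence spelling). The signals.py/apps.py wiring is assumed, and instance.product.first() assumes at least one Product is already attached, which is often not yet true right after the initial save.
# e.g. app/signals.py, imported from AppConfig.ready()
from django.db.models.signals import post_save
from django.dispatch import receiver

from .models import CusOrder, ProductOut

@receiver(post_save, sender=CusOrder)
def create_product_out(sender, instance, created, **kwargs):
    # runs only when a CusOrder row has just been created, so it already has a primary key
    if not created:
        return
    product_out = ProductOut.objects.create(
        stock_out=instance.qty,
        sell_price=instance.sell_price,
        product=instance.product.first(),  # assumption: a Product is already linked
    )
    product_out.refrence.add(instance)
Because many-to-many links are usually added after the initial save, an m2m_changed receiver on CusOrder.product can be a better fit in practice.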
How To Automatically Create Objects Of One Django Model While Creating Another Model's Objects
I Want To Create Object For "ProductOut" Model When "CusOrder" Model Is Being Created Here Is My Code class CusOrder(models.Model): cus_name = models.CharField(max_length=100) cus_number = models.CharField(max_length=11) product = models.ManyToManyField(Product) qty = models.IntegerField(default=0) sell_price = models.IntegerField(default=0) def __str__(self): return self.cus_name def save(self,*args,**kwrgs): ProductOut.objects.create( refrence=self.cus_number, stock_out = self.qty, sell_price = self.sell_price, product = self.product.P_name ) super(CusOrder,self).save(*args,**kwrgs) class ProductOut(models.Model): refrence = models.ManyToManyField(CusOrder) stock_out = models.IntegerField(default=0) sell_price = models.IntegerField(default=0) product = models.ForeignKey(Product,on_delete=models.CASCADE) def __str__(self): return self.refrence.cus_number But I am Getting a ValueError which Is '"<CusOrder: Asif>" needs to have a value for field "id" before this many-to-many relationship can be used.' When I want to save a CusOrder Object here Is My Whole class Catagory(models.Model): name = models.CharField(max_length=60) def __str__(self): return self.name class Product(models.Model): P_name = models.CharField(max_length=30) stock_in = models.IntegerField(default=0) unit_price = models.IntegerField(default=0) cata = models.ForeignKey(Catagory, on_delete=models.CASCADE) def __str__(self): return self.P_name class CusOrder(models.Model): cus_name = models.CharField(max_length=100) cus_number = models.CharField(max_length=11) product = models.ManyToManyField(Product) qty = models.IntegerField(default=0) sell_price = models.IntegerField(default=0) def __str__(self): return self.cus_name def save(self,*args,**kwrgs): ProductOut.objects.create( refrence=self.cus_number, stock_out = self.qty, sell_price = self.sell_price, product = self.product.P_name ) super(CusOrder,self).save(*args,**kwrgs) class ProductOut(models.Model): refrence = models.ManyToManyField(CusOrder) stock_out = models.IntegerField(default=0) sell_price = models.IntegerField(default=0) product = models.ForeignKey(Product,on_delete=models.CASCADE) def __str__(self): return self.refrence.cus_number class ReturnedProduct(models.Model): cus_number = models.ForeignKey(CusOrder,on_delete=models.CASCADE) product = models.ForeignKey(Product,on_delete=models.CASCADE) qty = models.IntegerField(default=0) def __str__(self): return self.cus_number.cus_number What Will Be The Right Proccess To Do That.
[ "refrence should have CusOrder instance and super should call before the creation as CusOrder object should be created first.\n def save(self,*args,**kwrgs):\n instance = super(CusOrder,self).save(*args,**kwrgs)\n p_instance = ProductOut.objects.create(\n stock_out = self.qty,\n sell_price = self.sell_price,\n product = self.product.first()\n )\n p_instance.refrence.add(instance)\n p_instance.save()\n \n\nBest way is to use Django Signals, In above case specifically use post_save. Django Signals- master pre_save and post_save \n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "django_modeltranslation", "python" ]
stackoverflow_0074609592_django_django_models_django_modeltranslation_python.txt
Q: Install nodejs in python slim buster I have a Dockerfile that starts with FROM python:3.7-slim-buster and I want to install node.js and npm in it. How can I install them in this image? A: This should work: FROM python:3.7-slim-buster # setup dependencies RUN apt-get update RUN apt-get install xz-utils RUN apt-get -y install curl # Download latest nodejs binary RUN curl https://nodejs.org/dist/v14.15.4/node-v14.15.4-linux-x64.tar.xz -O # Extract & install RUN tar -xf node-v14.15.4-linux-x64.tar.xz RUN ln -s /node-v14.15.4-linux-x64/bin/node /usr/local/bin/node RUN ln -s /node-v14.15.4-linux-x64/bin/npm /usr/local/bin/npm RUN ln -s /node-v14.15.4-linux-x64/bin/npx /usr/local/bin/npx To run node start it with docker run -it <containerName> /bin/bash Then node, npm and npx are available A: npm globally packages need to add links if you want to use it as command. FROM python:3.7-slim-buster ENV NODE_VERSION 14.15.4 #if build in `china`, debian mirrors, npm registry change to china source ARG AREA=london RUN set -ex \ && if [ 'china' = "$AREA" ] ; then \ sed -i "s@http://deb.debian.org@https://mirrors.aliyun.com@g" /etc/apt/sources.list; \ fi \ && apt-get update \ && apt-get install -y git xz-utils curl \ # install node && curl "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz" -O \ && tar -xf "node-v$NODE_VERSION-linux-x64.tar.xz" \ && ln -s "/node-v$NODE_VERSION-linux-x64/bin/node" /usr/local/bin/node \ && ln -s "/node-v$NODE_VERSION-linux-x64/bin/npm" /usr/local/bin/npm \ && ln -s "/node-v$NODE_VERSION-linux-x64/bin/npx" /usr/local/bin/npx \ # npm install bump, openapi-generator && if [ 'china' = "$AREA" ] ; then \ npm config set registry https://registry.npm.taobao.org/; \ fi \ && npm install -g [email protected] \ && ln -s "/node-v$NODE_VERSION-linux-x64/bin/bump" /usr/local/bin/bump \ # clear && npm cache clean --force \ && rm -rf /var/lib/apt/lists/* \ && rm -f "/node-v$NODE_VERSION-linux-x64.tar.xz" \ && apt-get clean \ && apt-get autoremove A: Just use the official installation for Debian from here : RUN curl -fsSL https://deb.nodesource.com/setup_19.x | bash - &&\ apt-get install -y nodejs
Install nodejs in python slim buster
I have a Dockerfile that starts with FROM python:3.7-slim-buster and I want to install node.js and npm in it. How can I install them in this image?
[ "This should work:\nFROM python:3.7-slim-buster\n\n# setup dependencies\nRUN apt-get update\nRUN apt-get install xz-utils\nRUN apt-get -y install curl\n\n# Download latest nodejs binary\nRUN curl https://nodejs.org/dist/v14.15.4/node-v14.15.4-linux-x64.tar.xz -O\n\n# Extract & install\nRUN tar -xf node-v14.15.4-linux-x64.tar.xz\nRUN ln -s /node-v14.15.4-linux-x64/bin/node /usr/local/bin/node\nRUN ln -s /node-v14.15.4-linux-x64/bin/npm /usr/local/bin/npm\nRUN ln -s /node-v14.15.4-linux-x64/bin/npx /usr/local/bin/npx\n\n\nTo run node start it with docker run -it <containerName> /bin/bash\nThen node, npm and npx are available\n", "npm globally packages need to add links if you want to use it as command.\nFROM python:3.7-slim-buster\nENV NODE_VERSION 14.15.4\n\n#if build in `china`, debian mirrors, npm registry change to china source\nARG AREA=london\n\nRUN set -ex \\\n && if [ 'china' = \"$AREA\" ] ; then \\\n sed -i \"s@http://deb.debian.org@https://mirrors.aliyun.com@g\" /etc/apt/sources.list; \\\n fi \\\n && apt-get update \\\n && apt-get install -y git xz-utils curl \\\n\n # install node\n && curl \"https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz\" -O \\\n && tar -xf \"node-v$NODE_VERSION-linux-x64.tar.xz\" \\\n && ln -s \"/node-v$NODE_VERSION-linux-x64/bin/node\" /usr/local/bin/node \\\n && ln -s \"/node-v$NODE_VERSION-linux-x64/bin/npm\" /usr/local/bin/npm \\\n && ln -s \"/node-v$NODE_VERSION-linux-x64/bin/npx\" /usr/local/bin/npx \\\n\n # npm install bump, openapi-generator\n && if [ 'china' = \"$AREA\" ] ; then \\\n npm config set registry https://registry.npm.taobao.org/; \\\n fi \\\n && npm install -g [email protected] \\\n && ln -s \"/node-v$NODE_VERSION-linux-x64/bin/bump\" /usr/local/bin/bump \\ \n\n # clear\n && npm cache clean --force \\\n && rm -rf /var/lib/apt/lists/* \\\n && rm -f \"/node-v$NODE_VERSION-linux-x64.tar.xz\" \\\n && apt-get clean \\\n && apt-get autoremove\n\n", "Just use the official installation for Debian from here :\nRUN curl -fsSL https://deb.nodesource.com/setup_19.x | bash - &&\\\n apt-get install -y nodejs\n\n" ]
[ 12, 2, 1 ]
[]
[]
[ "docker", "dockerfile", "node.js", "python" ]
stackoverflow_0065706138_docker_dockerfile_node.js_python.txt
Q: Getting HttpErrorResponse error in angular for Delete request I am trying to send a Delete request from my movie.component.ts page to delete a review. But I am getting HttpErrorResponse error. Where am I going wrong? here is a screenshot of the error message I am receiving in the console: HttpErrorResponse movie.component.ts deleteReview(reviewID: any) { this.webService.deleteReview(reviewID).subscribe((res) => { alert('Review deleted'); }); } web.service.ts deleteReview(id: any) { return this.http.delete( 'http://localhost:5000/api/v1.0/movies' + id + '/reviews/' + this.reviewID ); } Here is my back-end Delete method @app.route("/api/v1.0/movies/<string:id>/reviews/<string:reviewID>", methods=["DELETE"]) def delete_review(id, reviewID): movies.update_one( {"_id": ObjectId(id)}, {"$pull": {"reviews": {"_id": ObjectId(reviewID)}}} ) return make_response(jsonify({}), 204) A: Typo in web.service.ts. Forgot a slash after /reviews/. Second a typo on this.reviewIDw. So in your error you see the value of reviewID is undefined. This can bring this CORS error, too. Then the route was not found. Greetings Florian
Getting HttpErrorResponse error in angular for Delete request
I am trying to send a Delete request from my movie.component.ts page to delete a review. But I am getting HttpErrorResponse error. Where am I going wrong? here is a screenshot of the error message I am receiving in the console: HttpErrorResponse movie.component.ts deleteReview(reviewID: any) { this.webService.deleteReview(reviewID).subscribe((res) => { alert('Review deleted'); }); } web.service.ts deleteReview(id: any) { return this.http.delete( 'http://localhost:5000/api/v1.0/movies' + id + '/reviews/' + this.reviewID ); } Here is my back-end Delete method @app.route("/api/v1.0/movies/<string:id>/reviews/<string:reviewID>", methods=["DELETE"]) def delete_review(id, reviewID): movies.update_one( {"_id": ObjectId(id)}, {"$pull": {"reviews": {"_id": ObjectId(reviewID)}}} ) return make_response(jsonify({}), 204)
[ "Typo in web.service.ts. Forgot a slash after /reviews/. Second a typo on this.reviewIDw.\nSo in your error you see the value of reviewID is undefined. This can bring this CORS error, too. Then the route was not found.\nGreetings Florian\n" ]
[ 0 ]
[]
[]
[ "angular", "python", "typescript" ]
stackoverflow_0074607245_angular_python_typescript.txt
Q: Not Able to Run GEM5 with RISC-V: "!seWorkload occurred: Couldn't find appropriate workload object" I am trying to run gem5 with RISC-V. I have the Linux 64-bits cross compiler ready and I have also installed and compiled gem5. I then tried to use the following tutorial to run gem5: https://canvas.kth.se/courses/24933/pages/tutorial-simulating-a-cpu-with-gem5 I wrote a simple Hello World C program and compiled it using the following command: riscv64-unknown-linux-gnu-gcc -c hello.c -static -Wall -O0 -o hello But when I try to run gem5, I get the following error: build/RISCV/sim/process.cc:137: fatal: fatal condition !seWorkload occurred: Couldn't find appropriate workload object. I tried to come over this problem but I could not. I added print statements to the configuration file and realized that the error occurs in the line m5.instantiate() in the configuration file attached below. Does anyone know how to solve this issue? What is an seWorkload and why gem5 considers the object as not appropriate? I am using Ubuntu 22.04. For reference, this is the configuration python file I use for gem5: import m5 from m5.objects import * import sys system = System() system.clk_domain = SrcClockDomain() system.clk_domain.clock = '1GHz' system.clk_domain.voltage_domain = VoltageDomain() system.mem_mode = 'timing' system.mem_ranges = [AddrRange('512MB')] system.cpu = TimingSimpleCPU() system.membus = SystemXBar() system.cpu.icache_port = system.membus.cpu_side_ports system.cpu.dcache_port = system.membus.cpu_side_ports system.mem_ctrl = MemCtrl() system.mem_ctrl.dram = DDR3_1600_8x8() system.mem_ctrl.dram.range = system.mem_ranges[0] system.mem_ctrl.port = system.membus.mem_side_ports # start a process process = Process() # read command line arguments for the path to the executable process.cmd = [str(sys.argv[1])] system.cpu.workload = process system.cpu.createThreads() root = Root(full_system = False, system = system) m5.instantiate() # the error occurs from this line print("Beginning simulation!") exit_event = m5.simulate() print('Exiting @ tick %i because %s' %(m5.curTick(), exit_event.getCause())) A: m5.util.addToPath('../../') is missing. This is used to add the common scripts to the path to your directory from where you are instantiating the simulation.
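On newer gem5 releases this error usually means the System has no SE-mode workload object attached at all, independent of the script path. Below is a hedged sketch of the extra lines, assuming a gem5 v21+ build where SEWorkload is available via the star import of m5.objects; if your build lacks it, stick with the path fix above.
binary = str(sys.argv[1])

# attach an SE-mode workload to the system; this is what the "!seWorkload" check looks for
system.workload = SEWorkload.init_compatible(binary)

process = Process()
process.cmd = [binary]
system.cpu.workload = process
system.cpu.createThreads()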
Not Able to Run GEM5 with RISC-V: "!seWorkload occurred: Couldn't find appropriate workload object"
I am trying to run gem5 with RISC-V. I have the Linux 64-bits cross compiler ready and I have also installed and compiled gem5. I then tried to use the following tutorial to run gem5: https://canvas.kth.se/courses/24933/pages/tutorial-simulating-a-cpu-with-gem5 I wrote a simple Hello World C program and compiled it using the following command: riscv64-unknown-linux-gnu-gcc -c hello.c -static -Wall -O0 -o hello But when I try to run gem5, I get the following error: build/RISCV/sim/process.cc:137: fatal: fatal condition !seWorkload occurred: Couldn't find appropriate workload object. I tried to come over this problem but I could not. I added print statements to the configuration file and realized that the error occurs in the line m5.instantiate() in the configuration file attached below. Does anyone know how to solve this issue? What is an seWorkload and why gem5 considers the object as not appropriate? I am using Ubuntu 22.04. For reference, this is the configuration python file I use for gem5: import m5 from m5.objects import * import sys system = System() system.clk_domain = SrcClockDomain() system.clk_domain.clock = '1GHz' system.clk_domain.voltage_domain = VoltageDomain() system.mem_mode = 'timing' system.mem_ranges = [AddrRange('512MB')] system.cpu = TimingSimpleCPU() system.membus = SystemXBar() system.cpu.icache_port = system.membus.cpu_side_ports system.cpu.dcache_port = system.membus.cpu_side_ports system.mem_ctrl = MemCtrl() system.mem_ctrl.dram = DDR3_1600_8x8() system.mem_ctrl.dram.range = system.mem_ranges[0] system.mem_ctrl.port = system.membus.mem_side_ports # start a process process = Process() # read command line arguments for the path to the executable process.cmd = [str(sys.argv[1])] system.cpu.workload = process system.cpu.createThreads() root = Root(full_system = False, system = system) m5.instantiate() # the error occurs from this line print("Beginning simulation!") exit_event = m5.simulate() print('Exiting @ tick %i because %s' %(m5.curTick(), exit_event.getCause()))
[ "m5.util.addToPath('../../') is missing. This is used to add the common scripts to the path to your directory from where you are instantiating the simulation.\n" ]
[ 0 ]
[]
[]
[ "c++", "gcc", "gem5", "python", "riscv" ]
stackoverflow_0073614148_c++_gcc_gem5_python_riscv.txt
Q: How to reference python package when filename contains a period I am using django and I have a file named models.admin.py and I want to do the following idea in models.py: from "models.admin" import * however, I get a syntax error for having double quotes. But if I just do from models.admin import * then I get "ImportError: No module named admin" Is there any way to import from a python file that has a period in its name? A: Actually, you can import a module with an invalid name. But you'll need to use imp for that, e.g. assuming file is named models.admin.py, you could do import imp with open('models.admin.py', 'rb') as fp: models_admin = imp.load_module( 'models_admin', fp, 'models.admin.py', ('.py', 'rb', imp.PY_SOURCE) ) But read the docs on imp.find_module and imp.load_module before you start using it. A: If you really want to, you can import a module with an unusual filename (e.g., a filename containing a '.' before the '.py') using the imp module: >>> import imp >>> a_b = imp.load_source('a.b', 'a.b.py') >>> a_b.x "I was defined in a.b.py!" However, that's generally a bad idea. It's more likely that you're trying to use packages, in which case you should create a directory named "a", containing a file named "b.py"; and then "import a.b" will load a/b.py. A: The file is called models/admin.py. (Source) That is, it should be called admin.py in a directory called models. Then you can import using from models.admin import *, assuming that it is in your Python path. A: Like below Assume dir structure is like this: C:. │ script.py │ └───Parent └───Child ├───1.1 │ main.py │ └───1.2 **assume you want to import main.py in script.py ** your main.py looks like below def my_function(): print("Hello from a function") your script.py looks like below from os import path import importlib from os.path import dirname import sys import importlib.util def getPath(): # your logic to get to the path return path.join(dirname(__file__),'Parent','Child','1.1','main.py') file_path = getPath() module_name = 'main' spec = importlib.util.spec_from_file_location(module_name, file_path) module = importlib.util.module_from_spec(spec) spec.loader.exec_module(module) #call functions like this module.my_function() Check out this gist A: No, you can't import a python file as a module if its name contains a period (or a question mark, or exclamation mark, etc). A python module's name (not including the .py) must be a valid python name (ie can be used as a variable name). A: In my case, I am using grafanalib, and the filename has to be xx.dashboard.py based on the doc. However, I do want to import this file to simplify the uploading step. I got warning when I use import imp: the imp module is deprecated in favour of importlib and slated for removal in Python 3.12; see the module's documentation for alternative uses Here is the simple demo using importlib and pathlib: foo.bar.py and main.py are in the same foler. # foo.bar.py num = 42 # main.py import importlib.machinery import pathlib module = importlib.machinery.SourceFileLoader( "foo_bar", pathlib.Path(__file__).parent.joinpath("foo.bar.py").resolve().as_posix(), ).load_module() print(module.num) # 42
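Since imp is deprecated, here is a sketch of the importlib.util equivalent for a file literally named models.admin.py. The module name models_admin is arbitrary, and registering it in sys.modules is optional; it just lets other code import it by that name later.
import importlib.util
import sys

spec = importlib.util.spec_from_file_location("models_admin", "models.admin.py")
models_admin = importlib.util.module_from_spec(spec)
sys.modules["models_admin"] = models_admin
spec.loader.exec_module(models_admin)

print(list(vars(models_admin)))  # whatever models.admin.py defined
For the Django case specifically, the models/ package layout from the earlier answer remains the cleaner fix.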
How to reference python package when filename contains a period
I am using django and I have a file named models.admin.py and I want to do the following idea in models.py: from "models.admin" import * however, I get a syntax error for having double quotes. But if I just do from models.admin import * then I get "ImportError: No module named admin" Is there any way to import from a python file that has a period in its name?
[ "Actually, you can import a module with an invalid name. But you'll need to use imp for that, e.g. assuming file is named models.admin.py, you could do\nimport imp\nwith open('models.admin.py', 'rb') as fp:\n models_admin = imp.load_module(\n 'models_admin', fp, 'models.admin.py',\n ('.py', 'rb', imp.PY_SOURCE)\n )\n\nBut read the docs on imp.find_module and imp.load_module before you start using it.\n", "If you really want to, you can import a module with an unusual filename (e.g., a filename containing a '.' before the '.py') using the imp module:\n>>> import imp\n>>> a_b = imp.load_source('a.b', 'a.b.py')\n>>> a_b.x\n\"I was defined in a.b.py!\"\n\nHowever, that's generally a bad idea. It's more likely that you're trying to use packages, in which case you should create a directory named \"a\", containing a file named \"b.py\"; and then \"import a.b\" will load a/b.py.\n", "The file is called models/admin.py. (Source)\nThat is, it should be called admin.py in a directory called models.\nThen you can import using from models.admin import *, assuming that it is in your Python path.\n", "Like below\nAssume dir structure is like this:\nC:.\n│ script.py\n│ \n└───Parent\n └───Child\n ├───1.1\n │ main.py\n │ \n └───1.2\n\n**assume you want to import main.py in script.py **\nyour main.py looks like below\ndef my_function():\n print(\"Hello from a function\")\n\nyour script.py looks like below\nfrom os import path\nimport importlib\nfrom os.path import dirname\nimport sys\nimport importlib.util\n\n\ndef getPath():\n # your logic to get to the path\n return path.join(dirname(__file__),'Parent','Child','1.1','main.py')\n\nfile_path = getPath() \nmodule_name = 'main'\n\nspec = importlib.util.spec_from_file_location(module_name, file_path)\nmodule = importlib.util.module_from_spec(spec)\nspec.loader.exec_module(module)\n\n#call functions like this\nmodule.my_function()\n\nCheck out this gist\n", "No, you can't import a python file as a module if its name contains a period (or a question mark, or exclamation mark, etc). A python module's name (not including the .py) must be a valid python name (ie can be used as a variable name).\n", "In my case, I am using grafanalib, and the filename has to be xx.dashboard.py based on the doc. However, I do want to import this file to simplify the uploading step.\nI got warning when I use import imp:\n\nthe imp module is deprecated in favour of importlib and slated for removal in Python 3.12; see the module's documentation for alternative uses\n\nHere is the simple demo using importlib and pathlib:\nfoo.bar.py and main.py are in the same foler.\n# foo.bar.py\n\nnum = 42\n\n# main.py\n\nimport importlib.machinery\nimport pathlib\n\nmodule = importlib.machinery.SourceFileLoader(\n \"foo_bar\",\n pathlib.Path(__file__).parent.joinpath(\"foo.bar.py\").resolve().as_posix(),\n).load_module()\nprint(module.num) # 42\n\n" ]
[ 40, 15, 4, 4, 3, 0 ]
[ "You are not referencing files in the import statement, you are referencing modules and packages.\nPlease read the docs, they are very clear on that matter.\nAnyway, since you are using django, the usual approach won't work. If you want to keep models in separate files, rather than in models.py, you have to take extra steps, outlined, for example, here.\nEdit:\nWell, I don't really know what the questioneer means when he mentions admin and whether or not it is related to the admin interface of django. My points still stand.\n" ]
[ -1 ]
[ "import", "module", "package", "python", "python_import" ]
stackoverflow_0001828127_import_module_package_python_python_import.txt
Q: Exporting jupyter notebook to pdf with offline plotly graph; missing graphs I am trying to create pdf export of my lesson plans and I use plotly offline for the graphs. In a MWE below, the plot will display in the Jupyter Notebook but will not show up when I export to pdf. I export using File-->Download as-->PDF via Latex (.pdf). I'd like to make a pdf instead of using html. I understand it might take an extra step to convert an html export to pdf, but I was just wondering if there was a more direct route (a code modification?) that would allow me to export directly through File-->Download as-->PDF via Latex (.pdf) from plotly.offline import download_plotlyjs, init_notebook_mode, iplot init_notebook_mode(connected=True) import plotly.graph_objs as go data = [go.Scatter( x=[1, 2, 3], y=[3, 2, 1]) ] iplot(data) A: You need to specify the appropriate Default Renderer (or Renderers, if you want to visualize it in the Notebook and also when exporting to PDF using the File-->Download as-->PDF via Latex (.pdf) option you mentioned). I have been struggling with this myself for some hours too, but the setup that ended up working for me is the following: import plotly.io as pio pio.renderers.default = "notebook+pdf" # Renderer for Notebook and HTML exports + Renderer for PDF exports # init_notebook_mode(connected=True) # Do not include this line because it may not work Note that you can concatenate as many Renderers as you want using the + symbol, and Plotly will magically know when to use each of them. A: I think because plotly graphs are svg objects and generated by javascript, I dont have export to PDF working in my jupyter notebook, so I was unable to check and confirm my answer. Plotly offline does not have show as image, you can use plotly online to do this, its free to generate graphs, You need to create an online account, also you need to paste the username and API key from plotly website (API key can be found in settings). Note: please check in plotly if the plots are shared in public or private, I am not responsible for your plots becoming public. Anyway this will give you an image output of the graph, and you can export it to PDF Code: import plotly import plotly.graph_objs as go plotly.plotly.sign_in('<<username goes here>>', '<<api key goes here>>') trace = go.Bar(x=[2, 4, 6], y= [10, 12, 15]) data = [trace] layout = go.Layout(title='A Simple Plot', width=800, height=640) fig = go.Figure(data=data, layout=layout) plotly.plotly.image.save_as(fig, filename='a-simple-plot.png') from IPython.display import Image Image('a-simple-plot.png') A: A bit easier solution is to use image_bytes = fig.to_image(format='png') and than display using Image(image_bytes), but firstly you should install orca, and pip3 install psutil requests example: fig = go.Figure() """ here goes some code to draw """ image_bytes = fig.to_image(format='png', , width=1200, height=700, scale=1) # you can use other formats as well (like 'svg','jpeg','pdf') #instead of using fig.show() from IPython.display import Image Image(img_bytes) You can read more here Than you can convert it with File -> Download as -> PDF or using terminal jupyter nbconvert --to pdf <notebook_name>.ipynb A: I don't have a solution for the exact original problem above. However, if you wish to consider the .html format, here is a nice solution that worked for me. After all, my goal was just to share the notebook with people who didn't have jupyter installed. The conversion to .html is performed with plotlyhtmlexporter. 
Here is a piece of code you can copy-paste: pip install plotlyhtmlexporter jupyter nbconvert --to plotlyhtml mynotebook.ipynb You might then want to try and save (print) the .html from your browser as a .pdf, but I think it would be equivalent to just printing the original notebook to .pdf. As I say, in my case, having a jupyter-independent .html file was enough. A: Just use this and download the notebook as PDF via HTML. Also you need to pip install kaleido first. import plotly.io as pio pio.kaleido.scope.default_format = "png"
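A sketch of the kaleido-based route, which has largely replaced the orca and online-account approaches above: render figures as static PNGs so the LaTeX exporter has a real image to embed. It assumes plotly 4.9+ with kaleido installed (pip install -U kaleido); the figure and file name are arbitrary.
import plotly.graph_objects as go
import plotly.io as pio

pio.renderers.default = "png"    # static renderer, so nbconvert sees an image instead of JavaScript

fig = go.Figure(data=[go.Scatter(x=[1, 2, 3], y=[3, 2, 1])])
fig.show()                        # shows a PNG inline in the notebook
fig.write_image("scatter.png")    # or save it to disk explicitly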
Exporting jupyter notebook to pdf with offline plotly graph; missing graphs
I am trying to create pdf export of my lesson plans and I use plotly offline for the graphs. In a MWE below, the plot will display in the Jupyter Notebook but will not show up when I export to pdf. I export using File-->Download as-->PDF via Latex (.pdf). I'd like to make a pdf instead of using html. I understand it might take an extra step to convert an html export to pdf, but I was just wondering if there was a more direct route (a code modification?) that would allow me to export directly through File-->Download as-->PDF via Latex (.pdf) from plotly.offline import download_plotlyjs, init_notebook_mode, iplot init_notebook_mode(connected=True) import plotly.graph_objs as go data = [go.Scatter( x=[1, 2, 3], y=[3, 2, 1]) ] iplot(data)
[ "You need to specify the appropriate Default Renderer (or Renderers, if you want to visualize it in the Notebook and also when exporting to PDF using the File-->Download as-->PDF via Latex (.pdf) option you mentioned).\nI have been struggling with this myself for some hours too, but the setup that ended up working for me is the following:\nimport plotly.io as pio\npio.renderers.default = \"notebook+pdf\" # Renderer for Notebook and HTML exports + Renderer for PDF exports\n# init_notebook_mode(connected=True) # Do not include this line because it may not work\n\nNote that you can concatenate as many Renderers as you want using the + symbol, and Plotly will magically know when to use each of them.\n", "I think because plotly graphs are svg objects and generated by javascript, I dont have export to PDF working in my jupyter notebook, so I was unable to check and confirm my answer.\nPlotly offline does not have show as image, you can use plotly online to do this, its free to generate graphs, \nYou need to create an online account, also you need to paste the username and API key from plotly website (API key can be found in settings).\n\nNote: please check in plotly if the plots are shared in public or private, I am not responsible for your plots becoming public.\n\nAnyway this will give you an image output of the graph, and you can export it to PDF\nCode:\nimport plotly\nimport plotly.graph_objs as go\n\nplotly.plotly.sign_in('<<username goes here>>', '<<api key goes here>>')\ntrace = go.Bar(x=[2, 4, 6], y= [10, 12, 15])\ndata = [trace]\nlayout = go.Layout(title='A Simple Plot', width=800, height=640)\nfig = go.Figure(data=data, layout=layout)\n\nplotly.plotly.image.save_as(fig, filename='a-simple-plot.png')\n\nfrom IPython.display import Image\nImage('a-simple-plot.png')\n\n", "A bit easier solution is to use image_bytes = fig.to_image(format='png') and than display using Image(image_bytes), but firstly you should install orca, and pip3 install psutil requests \nexample: \nfig = go.Figure()\n\"\"\" here goes some code to draw \"\"\"\n\nimage_bytes = fig.to_image(format='png', , width=1200, height=700, scale=1) # you can use other formats as well (like 'svg','jpeg','pdf')\n\n#instead of using fig.show()\nfrom IPython.display import Image\nImage(img_bytes)\n\nYou can read more here\nThan you can convert it with File -> Download as -> PDF or using terminal jupyter nbconvert --to pdf <notebook_name>.ipynb\n", "I don't have a solution for the exact original problem above. However, if you wish to consider the .html format, here is a nice solution that worked for me. After all, my goal was just to share the notebook with people who didn't have jupyter installed.\nThe conversion to .html is performed with plotlyhtmlexporter.\nHere is a piece of code you can copy-paste:\n pip install plotlyhtmlexporter\n jupyter nbconvert --to plotlyhtml mynotebook.ipynb\n\nYou might then want to try and save (print) the .html from your browser as a .pdf, but I think it would be equivalent to just printing the original notebook to .pdf. As I say, in my case, having a jupyter-independent .html file was enough.\n", "Just use this and download the notebook as PDF via HTML. Also you need to pip install kaleido first.\n import plotly.io as pio\n pio.kaleido.scope.default_format = \"png\"\n\n" ]
[ 6, 2, 1, 1, 0 ]
[]
[]
[ "jupyter_notebook", "pdf", "plotly", "python" ]
stackoverflow_0045761893_jupyter_notebook_pdf_plotly_python.txt
Q: Splitting a tensorflow dataset into training, test, and validation sets from keras.preprocessing API I'm new to tensorflow/keras and I have a file structure with 3000 folders containing 200 images each to be loaded in as data. I know that keras.preprocessing.image_dataset_from_directory allows me to load the data and split it into training/validation set as below: val_data = tf.keras.preprocessing.image_dataset_from_directory('etlcdb/ETL9G_IMG/', image_size = (128, 127), validation_split = 0.3, subset = "validation", seed = 1, color_mode = 'grayscale', shuffle = True) Found 607200 files belonging to 3036 classes. Using 182160 files for validation. But then I'm not sure how to further split my validation into a test split while maintaining proper classes. From what I can tell (through the GitHub source code), the take method simply takes the first x elements of the dataset, and skip does the same. I am unsure if this maintains stratification of the data or not, and I'm not quite sure how to return labels from the dataset to test it. Any help would be appreciated. A: I could not find supporting documentation, but I believe image_dataset_from_directory is taking the end portion of the dataset as the validation split. shuffle is now set to True by default, so the dataset is shuffled before training, to avoid using only some classes for the validation split. The split done by image_dataset_from_directory only relates to the training process. If you need a (highly recommended) test split, you should split your data beforehand into training and testing. Then, image_dataset_from_directory will split your training data into training and validation. I usually take a smaller percent (10%) for the in-training validation, and split the original dataset 80% training, 20% testing. With these values, the final splits (from the initial dataset size) are: 80% training: 72% training (used to adjust the weights in the network) 8% in-training validation (used only to check the metrics of the model after each epoch) 20% testing (never seen by the training process at all) There is additional information how to split data in your directories in this question: Keras split train test set when using ImageDataGenerator A: For splitting into train and validation maybe you can do smth like that. The main point is to keep the same seed. train_ds = tf.keras.preprocessing.image_dataset_from_directory( directory, label_mode='categorical', validation_split=0.2, subset="training", seed=1337, color_mode="grayscale", image_size=image_size, batch_size=batch_size, ) val_ds = tf.keras.preprocessing.image_dataset_from_directory( directory, validation_split=0.2, subset="validation", label_mode='categorical', seed=1337, color_mode="grayscale", image_size=image_size, batch_size=batch_size, ) is taken from: https://keras.io/examples/vision/image_classification_from_scratch/ A: You almost got the answer. The key is to use .take() and .skip() to further split the validation set into 2 datasets -- one for validation and the other for test. If I use your example, then you need to execute the following lines of codes. Let's assume that you need 70% for training set, 10% for validation set, and 20% for test set. For the sake of completeness, I am also including the step to generate the training set. Let's also assign a few basic variables that must be same when first splitting the entire data set into training and validation sets. 
seed_train_validation = 1 # Must be same for train_ds and val_ds shuffle_value = True validation_split = 0.3 train_ds = tf.keras.utils.image_dataset_from_directory( directory ='etlcdb/ETL9G_IMG/', image_size = (128, 127), validation_split = validation_split, subset = "training", seed = seed_train_validation, color_mode = 'grayscale', shuffle = shuffle_value) val_ds = tf.keras.utils.image_dataset_from_directory( directory ='etlcdb/ETL9G_IMG/', image_size = (128, 127), validation_split = validation_split, subset = "validation", seed = seed_train_validation, color_mode = 'grayscale', shuffle = shuffle_value) Next, determine how many batches of data are available in the validation set using tf.data.experimental.cardinality, and then move the two-third of them (2/3 of 30% = 20%) to a test set as follows. Note that the default value of batch_size is 32 (re: documentation). val_batches = tf.data.experimental.cardinality(val_ds) test_ds = val_ds.take((2*val_batches) // 3) val_ds = val_ds.skip((2*val_batches) // 3) All the three datasets (train_ds, val_ds, and test_ds) yield batches of images together with labels inferred from the directory structure. So, you are good to go from here.
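A runnable sketch of the take/skip recipe from that last answer, assuming the directory layout from the question ('etlcdb/ETL9G_IMG/', one sub-folder per class) and the default batch size of 32; moving two thirds of the 30% validation batches to a test set gives roughly the 70/10/20 split the answer targets.

import tensorflow as tf

# Keyword arguments shared by the two loader calls; seed and validation_split
# must be identical so the training/validation partition is consistent.
common = dict(
    directory="etlcdb/ETL9G_IMG/",   # path taken from the question
    image_size=(128, 127),
    color_mode="grayscale",
    validation_split=0.3,
    seed=1,
    shuffle=True,
)

train_ds = tf.keras.utils.image_dataset_from_directory(subset="training", **common)
val_ds = tf.keras.utils.image_dataset_from_directory(subset="validation", **common)

# Move two thirds of the 30% validation batches (about 20% of all data) into a test set.
val_batches = int(tf.data.experimental.cardinality(val_ds))  # number of 32-image batches
test_ds = val_ds.take((2 * val_batches) // 3)
val_ds = val_ds.skip((2 * val_batches) // 3)

print("train:", int(tf.data.experimental.cardinality(train_ds)), "batches")
print("val:  ", int(tf.data.experimental.cardinality(val_ds)), "batches")
print("test: ", int(tf.data.experimental.cardinality(test_ds)), "batches")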
Splitting a tensorflow dataset into training, test, and validation sets from keras.preprocessing API
I'm new to tensorflow/keras and I have a file structure with 3000 folders containing 200 images each to be loaded in as data. I know that keras.preprocessing.image_dataset_from_directory allows me to load the data and split it into training/validation set as below: val_data = tf.keras.preprocessing.image_dataset_from_directory('etlcdb/ETL9G_IMG/', image_size = (128, 127), validation_split = 0.3, subset = "validation", seed = 1, color_mode = 'grayscale', shuffle = True) Found 607200 files belonging to 3036 classes. Using 182160 files for validation. But then I'm not sure how to further split my validation into a test split while maintaining proper classes. From what I can tell (through the GitHub source code), the take method simply takes the first x elements of the dataset, and skip does the same. I am unsure if this maintains stratification of the data or not, and I'm not quite sure how to return labels from the dataset to test it. Any help would be appreciated.
[ "I could not find supporting documentation, but I believe image_dataset_from_directory is taking the end portion of the dataset as the validation split. shuffle is now set to True by default, so the dataset is shuffled before training, to avoid using only some classes for the validation split.\nThe split done by image_dataset_from_directory only relates to the training process. If you need a (highly recommended) test split, you should split your data beforehand into training and testing. Then, image_dataset_from_directory will split your training data into training and validation.\nI usually take a smaller percent (10%) for the in-training validation, and split the original dataset 80% training, 20% testing.\nWith these values, the final splits (from the initial dataset size) are:\n\n80% training:\n\n72% training (used to adjust the weights in the network)\n8% in-training validation (used only to check the metrics of the model after each epoch)\n\n\n20% testing (never seen by the training process at all)\n\nThere is additional information how to split data in your directories in this question: Keras split train test set when using ImageDataGenerator\n", "For splitting into train and validation maybe you can do smth like that.\nThe main point is to keep the same seed.\ntrain_ds = tf.keras.preprocessing.image_dataset_from_directory(\n directory,\n label_mode='categorical',\n validation_split=0.2,\n subset=\"training\",\n seed=1337,\n color_mode=\"grayscale\",\n image_size=image_size,\n batch_size=batch_size,\n)\nval_ds = tf.keras.preprocessing.image_dataset_from_directory(\n directory,\n validation_split=0.2,\n subset=\"validation\",\n label_mode='categorical',\n seed=1337,\n color_mode=\"grayscale\",\n image_size=image_size,\n batch_size=batch_size,\n)\n\nis taken from:\nhttps://keras.io/examples/vision/image_classification_from_scratch/\n", "You almost got the answer. The key is to use .take() and .skip() to further split the validation set into 2 datasets -- one for validation and the other for test. If I use your example, then you need to execute the following lines of codes. Let's assume that you need 70% for training set, 10% for validation set, and 20% for test set. For the sake of completeness, I am also including the step to generate the training set. Let's also assign a few basic variables that must be same when first splitting the entire data set into training and validation sets.\nseed_train_validation = 1 # Must be same for train_ds and val_ds\nshuffle_value = True\nvalidation_split = 0.3\n\ntrain_ds = tf.keras.utils.image_dataset_from_directory(\ndirectory ='etlcdb/ETL9G_IMG/',\nimage_size = (128, 127),\nvalidation_split = validation_split,\nsubset = \"training\",\nseed = seed_train_validation,\ncolor_mode = 'grayscale',\nshuffle = shuffle_value)\n\nval_ds = tf.keras.utils.image_dataset_from_directory(\ndirectory ='etlcdb/ETL9G_IMG/',\nimage_size = (128, 127),\nvalidation_split = validation_split,\nsubset = \"validation\",\nseed = seed_train_validation,\ncolor_mode = 'grayscale',\nshuffle = shuffle_value)\n\n\nNext, determine how many batches of data are available in the validation set using tf.data.experimental.cardinality, and then move the two-third of them (2/3 of 30% = 20%) to a test set as follows. 
Note that the default value of batch_size is 32 (re: documentation).\nval_batches = tf.data.experimental.cardinality(val_ds)\ntest_ds = val_ds.take((2*val_batches) // 3)\nval_ds = val_ds.skip((2*val_batches) // 3)\n\nAll the three datasets (train_ds, val_ds, and test_ds) yield batches of images together with labels inferred from the directory structure. So, you are good to go from here.\n" ]
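The question also asks how to get labels back out of the dataset to check whether the splits are balanced; take() and skip() do not stratify, but a rough check is possible by unbatching and tallying the integer labels. A sketch, assuming the val_ds and test_ds built above and the default label_mode='int':

import collections

def label_counts(ds):
    # Unbatch so each element is a single (image, label) pair, then tally the labels.
    counts = collections.Counter()
    for _, label in ds.unbatch():
        counts[int(label.numpy())] += 1
    return counts

val_counts = label_counts(val_ds)
test_counts = label_counts(test_ds)
print("classes seen in val:", len(val_counts), "classes seen in test:", len(test_counts))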
[ 3, 1, 1 ]
[]
[]
[ "keras", "python", "tensorflow" ]
stackoverflow_0066036271_keras_python_tensorflow.txt
Q: python create class with brackets In Python I can create a class as follows: class Car: def __init__(self): pass But I have seen some people create a class with empty brackets, e.g.: class Car(): def __init__(self): pass The second form works, so my question is: which one is the correct (or Pythonic) way of defining a class? I know that in the brackets I can name another class to inherit from, e.g. class Car(Vehicle):, so when a class is created with empty brackets, does that explicitly imply no inheritance? A: In Python 3 you can simply omit the brackets. In other words, class Car is equivalent to class Car(), and the former is the more Pythonic spelling. In Python 2, however, you have to write class Car(object) to get a new-style class.
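A quick check of the equivalence claimed in that answer; on Python 3 both spellings inherit implicitly from object:

class CarNoBrackets:
    def __init__(self):
        pass

class CarEmptyBrackets():
    def __init__(self):
        pass

# Both report object as their only base class on Python 3.
print(CarNoBrackets.__bases__)     # (<class 'object'>,)
print(CarEmptyBrackets.__bases__)  # (<class 'object'>,)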
python create class with brackets
In Python I can create a class as follows: class Car: def __init__(self): pass But I have seen some people create a class with empty brackets, e.g.: class Car(): def __init__(self): pass The second form works, so my question is: which one is the correct (or Pythonic) way of defining a class? I know that in the brackets I can name another class to inherit from, e.g. class Car(Vehicle):, so when a class is created with empty brackets, does that explicitly imply no inheritance?
[ "In python3, you can just ignore the brackets.\nIn other words, class Car is equivalent to class Car() and the former one is more pythonic. However, In python2, you have to use class Car(object) to suit python2's requirement.\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074609819_python_python_3.x.txt
Q: Login Issue "POST" 200 3689 (Python, Django) I am quite new to Django and followed a tutorial to create a website. I'm not able to log in to an account. When I log in with any details (correct or incorrect), my 'login' page just reloads and nothing else happens (The expected result is that I go into a different page when I log in correctly) I am getting "POST /login/ HTTP/1.1" 200 3689 in the terminal. Here's part of the code: (views.py) def loginpage(request): error = "" page = "" if request.method == 'POST': u = request.POST['email'] p = request.POST['password'] user = authenticate(request,username=u,password=p) try: if user is not None: login(request,user) error = "no" g = request.user.groups.all()[0].name if g == 'Doctor': page = 'doctor' d = {'error': error, 'page':page} return render(request,'doctorhome.html',d) elif g == 'Receptionist': page = 'reception' d = {'error': error, 'page':page} return render(request,'receptionhome.html',d) elif g == 'Patient': page = 'patient' d = {'error': error, 'page':page} return render(request,'patienthome.html',d) else: error = "yes" except Exception as e: error = "yes" #print(e) #raise e return render(request,'login.html') Creating an account: def createaccountpage(request): error = "" user="none" if request.method == 'POST': name = request.POST['name'] email = request.POST['email'] password = request.POST['password'] repeatpassword = request.POST['repeatpassword'] gender = request.POST['gender'] phonenumber = request.POST['phonenumber'] address = request.POST['address'] birthdate = request.POST['dateofbirth'] bloodgroup = request.POST['bloodgroup'] try: if password == repeatpassword: Patient.objects.create(name=name,email=email,password=password,gender=gender,phonenumber=phonenumber,address=address,birthdate=birthdate,bloodgroup=bloodgroup) user = User.objects.create_user(name=name,email=email,password=password,username=email) pat_group = Group.objects.get(name='Patient') pat_group.user.set.add(user) user.save() error = "no" else: error = "yes" except Exception as e: error = "yes" print("Erorr:",e) d = {'error' : error} #print(error) return render(request,'createaccount.html',d) #return render(request,'createaccount.html') I have an issue with creating an account as well. Whenever I create an account, the data isn't saved anywhere on the database for some reason. So instead, I manually added my details to the DB and tried logging in with those details but still it's not letting me log in. I also thought the issue could be related to the DB itself (like certain data fields might be missing, I don't think the tutorial said all the data in the DB). Hence, I tried adding some data to it to see if permissions or something would affect anything and help me log in but it did not. I'm now completely stuck and not sure how to proceed. I don't know if it will help but I have added a picture of the Database here I appreciate any kind of advice on how I can fix my issue of not being able to log in correctly. A: You can Create a User like this. Django uses hashing technique to store users' passwords with some salt. If You are creating a user you have to call set_password method to store the password. from django.contrib.auth.models import User user = User() user.username = request.POST.get('username') user.set_password(request.POST.get('password')) user.save() A: First of all, there is no better tutorial than the original one, it will help you understand many concepts that are not quite well explained in most. 
No better place to start and to continue growing by further exploring the documentation. I must say, there are many flaws in your code and project design. Lets start with the database schema you shared. I can see that you don't have a custom user model, instead you have four user models, three of them to represent groups. In your views, use the 'snake_case' name convention, it is a good practice. Also, to handle input data and validation Django Forms are the way to do it. Lastly, it is not necessary to have three different templates to render based on the group. To the code...It is all inside an app called 'core' edit: updated views.py with login required decorator First, Create the Custom User Model: models.py from django.db import models from django.contrib.auth.models import AbstractBaseUser, BaseUserManager, PermissionsMixin # Create your models here. GENDER_CHOICES = [ ('F', 'Female'), ('M', 'Male'), ] BLOOD_GROUP_CHOICES = [ ('A', 'A'), ('B', 'B'), ('O', 'O'), ('AB', 'AB'), ] class UserManager(BaseUserManager): def create_user(self, email, password=None, **kwargs): if not email: raise ValueError('Users must have an email address') user = self.model(email=self.normalize_email(email), **kwargs) user.set_password(password) user.save(using=self._db) return user def create_superuser(self, email, password=None): user = self.create_user(email, password=password) user.is_staff = True user.is_superuser = True user.save(using=self._db) return user class User(AbstractBaseUser, PermissionsMixin): email = models.CharField(max_length=128, unique=True) name = models.CharField(max_length=128) gender = models.CharField(max_length=10, choices=GENDER_CHOICES) phone_number = models.CharField(max_length=128, blank=True) address = models.CharField(max_length=128) birth_date = models.DateField() blood_group = models.CharField(max_length=2, choices=BLOOD_GROUP_CHOICES) is_active = models.BooleanField(default=True) is_staff = models.BooleanField(default=True) objects = UserManager() USERNAME_FIELD = 'email' remember to add the model in settings.py: AUTH_USER_MODEL = 'core.User' forms.py from django import forms from django.core.exceptions import ValidationError from datetime import datetime from django.contrib.auth import get_user_model GENDER_CHOICES = [ ('F', 'Female'), ('M', 'Male'), ] BLOOD_GROUP_CHOICES = [ ('A', 'A'), ('B', 'B'), ('O', 'O'), ('AB', 'AB'), ] class UserRegisterForm(forms.ModelForm): password = forms.CharField(label='Password', widget=forms.PasswordInput) password_confirmation = forms.CharField(label='Password confirmation', widget=forms.PasswordInput) class Meta: model = get_user_model() fields = ('email', 'name', 'gender', 'phone_number', 'address', 'birth_date', 'blood_group') def clean_password_confirmation(self): # Check that the two password entries match password1 = self.cleaned_data.get("password") password2 = self.cleaned_data.get("password_confirmation") if password1 and password2 and password1 != password2: raise ValidationError("Passwords don't match") return password2 def save(self, commit=True): # Save the provided password in hashed format user = super().save(commit=False) user.set_password(self.cleaned_data["password"]) user.last_login = datetime.now() if commit: user.save() return user class UserSignInForm(forms.Form): email = forms.EmailField(required=True) password = forms.CharField(widget=forms.PasswordInput()) def save(self, commit=True): # Save the provided password in hashed format user = super().save(commit=False) user.last_login = datetime.now() if commit: 
user.save() return user Create the views using forms and setup the URLs: views.py from django.shortcuts import render, redirect from django.contrib import messages from django.contrib.auth import authenticate, login, logout from django.contrib.auth.decorators import login_required from core.forms import UserRegisterForm, UserSignInForm # Create your views here. def register(request): if request.method == 'POST': form = UserRegisterForm(request.POST) if form.is_valid(): form.save() messages.success(request, 'Successful registration') return redirect('/user/index/') else: messages.error(request, 'Please fill the fields correctly') else: form = UserRegisterForm() context = { 'form': form } return render(request, 'register.html', context) def user_login(request): if request.method == 'POST': form = UserSignInForm(request.POST) if form.is_valid(): email = form.cleaned_data['email'] password = form.cleaned_data['password'] user = authenticate(request, email=email, password=password) if user is not None: login(request, user) qs = user.groups.values_list('name',flat = True) groups = l_as_list = list(qs) messages.success(request, 'Successfull login') return render(request, 'index.html', {'groups': groups}) else: messages.error(request, 'User not found') else: form = UserSignInForm() context = { 'form': form } return render(request, 'login.html', context) def user_logout(request): logout(request) return redirect('/user/login/') @login_required(login_url='/user/login/') def index(request): return render(request, 'index.html', {}) urls.py from django.urls import path from core import views app_name = 'core' urlpatterns = [ path('index/', views.index, name='index'), path('register/', views.register, name='user-register'), path('login/', views.user_login, name='user-login'), path('logout/', views.user_logout, name='user-logout'), ] Create templates using forms and messages: login.html {% extends 'base.html' %} {% block content %} {% if messages %} <ul class="messages"> {% for message in messages %} <li{% if message.tags %} class="{{ message.tags }}"{% endif %}>{{ message }}</li> {% endfor %} </ul> {% endif %} <form method="POST" action="{% url 'core:user-login' %}"> {% csrf_token %} <table> {{ form }} </table> <input type="submit" value="Submit"> </form> {% endblock %} register.html {% extends 'base.html' %} {% block content %} {% if messages %} <ul class="messages"> {% for message in messages %} <li{% if message.tags %} class="{{ message.tags }}"{% endif %}>{{ message }}</li> {% endfor %} </ul> {% endif %} <form method="post" action="{% url 'core:user-register' %}"> {% csrf_token %} <table> {{ form }} </table> <button type="submit">Register</button> </form> {% endblock %} index.html {% extends 'base.html' %} {% block content %} {% if messages %} <ul class="messages"> {% for message in messages %} <li{% if message.tags %} class="{{ message.tags }}"{% endif %}>{{ message }}</li> {% endfor %} </ul> {% endif %} {% if 'doctor' in groups %} <p>I am a doctor</p> {% elif 'patient' in groups %} <p>I am a patient</p> {% elif 'receptionist' in groups %} <p>I am a recepcionist</p> {% endif %} <a href="{% url 'core:user-logout' %}">Logout</a> {% endblock %}
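A minimal sketch of the pattern both answers point at, using only stock Django APIs: create_user() stores a salted password hash, authenticate() verifies it, and the reverse accessor from Group to users is user_set (the question's pat_group.user.set.add(user) raises AttributeError, which the bare except silently swallows). The model, template, and URL names below are placeholders taken from, or invented for, the question.

from django.contrib.auth import authenticate, login
from django.contrib.auth.models import User, Group
from django.shortcuts import redirect, render

def create_account(request):
    if request.method == "POST":
        email = request.POST["email"]
        password = request.POST["password"]
        # create_user() hashes the password, so authenticate() can verify it later.
        user = User.objects.create_user(username=email, email=email, password=password)
        patient_group, _ = Group.objects.get_or_create(name="Patient")
        patient_group.user_set.add(user)          # correct reverse accessor
        return redirect("loginpage")              # hypothetical URL name
    return render(request, "createaccount.html")

def login_page(request):
    if request.method == "POST":
        user = authenticate(
            request,
            username=request.POST["email"],
            password=request.POST["password"],
        )
        if user is not None:
            login(request, user)
            group = user.groups.first()
            # Map the user's first group onto the matching home template.
            template = {
                "Doctor": "doctorhome.html",
                "Receptionist": "receptionhome.html",
                "Patient": "patienthome.html",
            }.get(group.name if group else "", "login.html")
            return render(request, template, {"error": "no"})
        return render(request, "login.html", {"error": "yes"})
    return render(request, "login.html")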
Login Issue "POST" 200 3689 (Python, Django)
I am quite new to Django and followed a tutorial to create a website. I'm not able to log in to an account. When I log in with any details (correct or incorrect), my 'login' page just reloads and nothing else happens (The expected result is that I go into a different page when I log in correctly) I am getting "POST /login/ HTTP/1.1" 200 3689 in the terminal. Here's part of the code: (views.py) def loginpage(request): error = "" page = "" if request.method == 'POST': u = request.POST['email'] p = request.POST['password'] user = authenticate(request,username=u,password=p) try: if user is not None: login(request,user) error = "no" g = request.user.groups.all()[0].name if g == 'Doctor': page = 'doctor' d = {'error': error, 'page':page} return render(request,'doctorhome.html',d) elif g == 'Receptionist': page = 'reception' d = {'error': error, 'page':page} return render(request,'receptionhome.html',d) elif g == 'Patient': page = 'patient' d = {'error': error, 'page':page} return render(request,'patienthome.html',d) else: error = "yes" except Exception as e: error = "yes" #print(e) #raise e return render(request,'login.html') Creating an account: def createaccountpage(request): error = "" user="none" if request.method == 'POST': name = request.POST['name'] email = request.POST['email'] password = request.POST['password'] repeatpassword = request.POST['repeatpassword'] gender = request.POST['gender'] phonenumber = request.POST['phonenumber'] address = request.POST['address'] birthdate = request.POST['dateofbirth'] bloodgroup = request.POST['bloodgroup'] try: if password == repeatpassword: Patient.objects.create(name=name,email=email,password=password,gender=gender,phonenumber=phonenumber,address=address,birthdate=birthdate,bloodgroup=bloodgroup) user = User.objects.create_user(name=name,email=email,password=password,username=email) pat_group = Group.objects.get(name='Patient') pat_group.user.set.add(user) user.save() error = "no" else: error = "yes" except Exception as e: error = "yes" print("Erorr:",e) d = {'error' : error} #print(error) return render(request,'createaccount.html',d) #return render(request,'createaccount.html') I have an issue with creating an account as well. Whenever I create an account, the data isn't saved anywhere on the database for some reason. So instead, I manually added my details to the DB and tried logging in with those details but still it's not letting me log in. I also thought the issue could be related to the DB itself (like certain data fields might be missing, I don't think the tutorial said all the data in the DB). Hence, I tried adding some data to it to see if permissions or something would affect anything and help me log in but it did not. I'm now completely stuck and not sure how to proceed. I don't know if it will help but I have added a picture of the Database here I appreciate any kind of advice on how I can fix my issue of not being able to log in correctly.
[ "You can Create a User like this. Django uses hashing technique to store users' passwords with some salt. If You are creating a user you have to call set_password method to store the password.\nfrom django.contrib.auth.models import User\n\nuser = User()\nuser.username = request.POST.get('username')\nuser.set_password(request.POST.get('password'))\nuser.save()\n\n", "First of all, there is no better tutorial than the original one, it will help you understand many concepts that are not quite well explained in most. No better place to start and to continue growing by further exploring the documentation.\nI must say, there are many flaws in your code and project design. Lets start with the database schema you shared. I can see that you don't have a custom user model, instead you have four user models, three of them to represent groups.\nIn your views, use the 'snake_case' name convention, it is a good practice. Also, to handle input data and validation Django Forms are the way to do it. Lastly, it is not necessary to have three different templates to render based on the group.\nTo the code...It is all inside an app called 'core'\nedit: updated views.py with login required decorator\nFirst, Create the Custom User Model:\nmodels.py\nfrom django.db import models\nfrom django.contrib.auth.models import AbstractBaseUser, BaseUserManager, PermissionsMixin\n\n# Create your models here.\nGENDER_CHOICES = [\n ('F', 'Female'),\n ('M', 'Male'),\n]\nBLOOD_GROUP_CHOICES = [\n ('A', 'A'),\n ('B', 'B'),\n ('O', 'O'),\n ('AB', 'AB'),\n]\n\nclass UserManager(BaseUserManager):\n def create_user(self, email, password=None, **kwargs):\n if not email:\n raise ValueError('Users must have an email address')\n\n user = self.model(email=self.normalize_email(email), **kwargs)\n user.set_password(password)\n user.save(using=self._db)\n return user \n\n def create_superuser(self, email, password=None):\n user = self.create_user(email, password=password)\n user.is_staff = True\n user.is_superuser = True\n user.save(using=self._db)\n return user\n\nclass User(AbstractBaseUser, PermissionsMixin): \n email = models.CharField(max_length=128, unique=True)\n name = models.CharField(max_length=128)\n gender = models.CharField(max_length=10, choices=GENDER_CHOICES)\n phone_number = models.CharField(max_length=128, blank=True)\n address = models.CharField(max_length=128)\n birth_date = models.DateField()\n blood_group = models.CharField(max_length=2, choices=BLOOD_GROUP_CHOICES)\n is_active = models.BooleanField(default=True)\n is_staff = models.BooleanField(default=True)\n\n objects = UserManager()\n \n USERNAME_FIELD = 'email'\n\nremember to add the model in settings.py:\nAUTH_USER_MODEL = 'core.User'\n\nforms.py\nfrom django import forms\nfrom django.core.exceptions import ValidationError\nfrom datetime import datetime\nfrom django.contrib.auth import get_user_model\n\nGENDER_CHOICES = [\n ('F', 'Female'),\n ('M', 'Male'),\n]\nBLOOD_GROUP_CHOICES = [\n ('A', 'A'),\n ('B', 'B'),\n ('O', 'O'),\n ('AB', 'AB'),\n]\n\nclass UserRegisterForm(forms.ModelForm): \n password = forms.CharField(label='Password', widget=forms.PasswordInput)\n password_confirmation = forms.CharField(label='Password confirmation', widget=forms.PasswordInput)\n\n class Meta:\n model = get_user_model()\n fields = ('email', 'name', 'gender', 'phone_number', 'address', 'birth_date', 'blood_group')\n\n def clean_password_confirmation(self):\n # Check that the two password entries match\n password1 = self.cleaned_data.get(\"password\")\n password2 = 
self.cleaned_data.get(\"password_confirmation\")\n if password1 and password2 and password1 != password2:\n raise ValidationError(\"Passwords don't match\")\n return password2\n \n def save(self, commit=True):\n # Save the provided password in hashed format\n user = super().save(commit=False)\n user.set_password(self.cleaned_data[\"password\"])\n user.last_login = datetime.now()\n if commit:\n user.save()\n return user\n\nclass UserSignInForm(forms.Form):\n email = forms.EmailField(required=True)\n password = forms.CharField(widget=forms.PasswordInput())\n\n def save(self, commit=True):\n # Save the provided password in hashed format\n user = super().save(commit=False)\n user.last_login = datetime.now()\n if commit:\n user.save()\n return user\n\nCreate the views using forms and setup the URLs:\nviews.py\nfrom django.shortcuts import render, redirect\nfrom django.contrib import messages\nfrom django.contrib.auth import authenticate, login, logout\nfrom django.contrib.auth.decorators import login_required\n\nfrom core.forms import UserRegisterForm, UserSignInForm\n\n# Create your views here.\ndef register(request):\n if request.method == 'POST':\n form = UserRegisterForm(request.POST)\n if form.is_valid():\n form.save()\n messages.success(request, 'Successful registration')\n return redirect('/user/index/')\n else:\n messages.error(request, 'Please fill the fields correctly')\n else:\n form = UserRegisterForm()\n\n context = {\n 'form': form\n }\n\n return render(request, 'register.html', context)\n\ndef user_login(request):\n if request.method == 'POST':\n form = UserSignInForm(request.POST)\n if form.is_valid():\n email = form.cleaned_data['email']\n password = form.cleaned_data['password']\n user = authenticate(request, email=email, password=password)\n if user is not None:\n login(request, user)\n qs = user.groups.values_list('name',flat = True)\n groups = l_as_list = list(qs)\n messages.success(request, 'Successfull login')\n return render(request, 'index.html', {'groups': groups})\n else:\n messages.error(request, 'User not found')\n else:\n form = UserSignInForm()\n\n context = {\n 'form': form\n }\n return render(request, 'login.html', context)\n\ndef user_logout(request):\n logout(request)\n return redirect('/user/login/')\n\n@login_required(login_url='/user/login/')\ndef index(request):\n return render(request, 'index.html', {})\n\nurls.py\nfrom django.urls import path\nfrom core import views\n\napp_name = 'core'\n\nurlpatterns = [\n path('index/', views.index, name='index'),\n path('register/', views.register, name='user-register'),\n path('login/', views.user_login, name='user-login'),\n path('logout/', views.user_logout, name='user-logout'),\n]\n\nCreate templates using forms and messages:\nlogin.html\n{% extends 'base.html' %}\n\n{% block content %}\n {% if messages %}\n <ul class=\"messages\">\n {% for message in messages %}\n <li{% if message.tags %} class=\"{{ message.tags }}\"{% endif %}>{{ message }}</li>\n {% endfor %}\n </ul>\n {% endif %}\n\n <form method=\"POST\" action=\"{% url 'core:user-login' %}\">\n {% csrf_token %}\n <table>\n {{ form }}\n </table>\n <input type=\"submit\" value=\"Submit\">\n </form>\n{% endblock %}\n\nregister.html\n{% extends 'base.html' %}\n\n{% block content %}\n {% if messages %}\n <ul class=\"messages\">\n {% for message in messages %}\n <li{% if message.tags %} class=\"{{ message.tags }}\"{% endif %}>{{ message }}</li>\n {% endfor %}\n </ul>\n {% endif %}\n\n <form method=\"post\" action=\"{% url 'core:user-register' %}\">\n {% csrf_token 
%}\n <table>\n {{ form }}\n </table>\n <button type=\"submit\">Register</button>\n </form>\n{% endblock %}\n\nindex.html\n{% extends 'base.html' %}\n\n{% block content %}\n {% if messages %}\n <ul class=\"messages\">\n {% for message in messages %}\n <li{% if message.tags %} class=\"{{ message.tags }}\"{% endif %}>{{ message }}</li>\n {% endfor %}\n </ul>\n {% endif %}\n\n {% if 'doctor' in groups %}\n <p>I am a doctor</p>\n {% elif 'patient' in groups %}\n <p>I am a patient</p>\n {% elif 'receptionist' in groups %}\n <p>I am a recepcionist</p>\n {% endif %}\n <a href=\"{% url 'core:user-logout' %}\">Logout</a>\n{% endblock %}\n\n" ]
[ 0, 0 ]
[]
[]
[ "authentication", "django", "python", "sqlite" ]
stackoverflow_0074606523_authentication_django_python_sqlite.txt
Q: Conversion of df one column values to multiple column values in pandas id date decision 1 2022-11-10 improve 1 2022-11-10 checked 2 2021-09-12 checked 3 2020-08-22 checked 4 2019-11-10 complete 4 2019-11-10 revise Converting above dataframe as id date CR Principal 1 2022-11-10 checked improve 2 2021-09-12 checked NA 3 2020-08-22 checked NA 4 2019-11-10 revise complete A: Use GroupBy.cumcount with ascending=False for counter in descending order and pivoting by 4 columns, then use rename - add keys to dictionary for rename if 3 or 4 duplicated decisions: df = (df.assign(g = df.groupby(['id','date']).cumcount(ascending=False)) .pivot(['id','date'], 'g', 'decision') .reindex(columns=range(4)) .fillna(0) .rename(columns={0:'CR',1:'Principal',2:'final',3:'post final'}) .rename_axis(columns=None) .reset_index()) print (df) id date CR Principal final post final 0 1 2022-11-10 checked improve 0.0 0.0 1 2 2021-09-12 checked 0 0.0 0.0 2 3 2020-08-22 checked 0 0.0 0.0 3 4 2019-11-10 revise complete 0.0 0.0
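A self-contained sketch of the idea in that answer, rebuilt with set_index/unstack (equivalent to the pivot call, but also works on older pandas versions) and keeping only the two columns the expected output needs; the sample frame is reconstructed from the question.

import pandas as pd

df = pd.DataFrame({
    "id":       [1, 1, 2, 3, 4, 4],
    "date":     ["2022-11-10", "2022-11-10", "2021-09-12",
                 "2020-08-22", "2019-11-10", "2019-11-10"],
    "decision": ["improve", "checked", "checked", "checked", "complete", "revise"],
})

out = (
    df.assign(g=df.groupby(["id", "date"]).cumcount(ascending=False))  # 0 = last decision per group
      .set_index(["id", "date", "g"])["decision"]
      .unstack("g")                                   # one column per counter value
      .rename(columns={0: "CR", 1: "Principal"})
      .rename_axis(columns=None)
      .reset_index()
)
print(out)
# CR holds the last decision per (id, date); Principal holds the one before it,
# and is NaN for groups that only have a single decision.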
Conversion of df one column values to multiple column values in pandas
id date decision 1 2022-11-10 improve 1 2022-11-10 checked 2 2021-09-12 checked 3 2020-08-22 checked 4 2019-11-10 complete 4 2019-11-10 revise Converting above dataframe as id date CR Principal 1 2022-11-10 checked improve 2 2021-09-12 checked NA 3 2020-08-22 checked NA 4 2019-11-10 revise complete
[ "Use GroupBy.cumcount with ascending=False for counter in descending order and pivoting by 4 columns, then use rename - add keys to dictionary for rename if 3 or 4 duplicated decisions:\ndf = (df.assign(g = df.groupby(['id','date']).cumcount(ascending=False))\n .pivot(['id','date'], 'g', 'decision')\n .reindex(columns=range(4))\n .fillna(0)\n .rename(columns={0:'CR',1:'Principal',2:'final',3:'post final'})\n .rename_axis(columns=None)\n .reset_index())\nprint (df)\n id date CR Principal final post final\n0 1 2022-11-10 checked improve 0.0 0.0\n1 2 2021-09-12 checked 0 0.0 0.0\n2 3 2020-08-22 checked 0 0.0 0.0\n3 4 2019-11-10 revise complete 0.0 0.0\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "datatables", "pandas", "pivot", "python" ]
stackoverflow_0074609858_dataframe_datatables_pandas_pivot_python.txt