Search is not available for this dataset
id
stringlengths 1
8
| text
stringlengths 72
9.81M
| addition_count
int64 0
10k
| commit_subject
stringlengths 0
3.7k
| deletion_count
int64 0
8.43k
| file_extension
stringlengths 0
32
| lang
stringlengths 1
94
| license
stringclasses 10
values | repo_name
stringlengths 9
59
|
---|---|---|---|---|---|---|---|---|
1900 | <NME> what_does_the_init_script_do.rst
<BEF> What does the init script do?
=============================
.. note::
This script tries to respect your existing environment as much as possible and
avoids the use of sudo except where necessary to install packages via your
system's package manager.
The init script is a one step method of setting up a hitch environment and running
all the tests in a directory. It is intended to be a low friction way of:
* Getting a CI or test driven development environment up and running.
* Rebuilding an environment from scratch that may have been broken.
If you'd prefer instead to perform the steps manually, you can use this document
as a guide.
Note that the first three steps take about 5 minutes and the last step can take
roughly 15 minutes (or longer, sometimes).
1. Installs python, pip, virtualenv, python-dev, automake and libtool (may require sudo)
----------------------------------------------------------------------------------------
Takes approximately: 1 minute
These packages are required for hitch to initialize.
On Ubuntu/Debian::
$ sudo apt-get install -y python python3 python-dev python-setuptools python-virtualenv python3-dev automake libtool
On Fedora/Red Hat/CentOS::
$ sudo yum -y install python python-devel python-setuptools python-virtualenv python-pip python3 python3-devel automake libtool gcc-c++
On Arch::
$ sudo pacman -Sy python python-setuptools python-virtualenv python automake libtool
On Mac OS X::
$ brew install python python3 libtool automake cmake
See also:
* :doc:`/faq/what_does_the_hitch_bootstrap_script_do`
* :doc:`/faq/why_install_hitch_on_the_system_path`
3. Runs "hitch clean" and "hitch init" in the current directory (does not require sudo)
Takes approximately: 5 seconds
This is a small python script with no dependencies that bootstraps your testing
environment and lets you trigger test runs. It installs a single command ('hitch')
on your system's path.
On the Mac the init script will run::
$ pip install --upgrade hitch
On Linux::
$ sudo pip install --upgrade hitch
See also:
* :doc:`/faq/what_does_the_hitch_bootstrap_script_do`
3. Runs "hitch clean", "hitch cleanpkg" and "hitch init" in the current directory (may require sudo)
----------------------------------------------------------------------------------------------------
Takes approximately: 2 minutes
If no ".hitch" directory is already installed then this command does nothing. If a .hitch
directory *is* found, it will remove it::
$ hitch clean
If no "~/.hitchpkg" directory is found, this will also do nothing. If you already used hitch
before you may have packages downloaded into this directory, in which case it will destroy it
so it can be rebuilt::
$ hitch cleanpkg
This builds a .hitch directory in the current directory and installs any more required
system packages via unixpackage. This asks to install system packages specified in
hitch plugins and packages specified in the system.packages file::
$ hitch init
* :doc:`/faq/what_does_hitch_init_do`
4. Run "hitch test ." to run all tests (does not require sudo)
--------------------------------------------------------------
Takes approximately: 15 minutes (subsequent test runs will be quicker)
If there are tests in the directory where the init script is run, it will run all
of them.
During the course of running the tests it will attempt to download and compile
certain pieces of software (e.g. postgres). The software will be installed in the
"~/.hitchpkg" directory. This does not require sudo and it will not interfere
with software you may already have installed.
See also:
* :doc:`why_is_my_test_downloading_and_compiling_software`
* :doc:`why_does_the_first_test_run_take_so_long`
All software installed there can easily be removed by deleting the "~/.hitchpkg"
directory or running the command "hitch cleanpkg".
See also:
* :doc:`how_do_i_uninstall_hitch_completely`
<MSG> DOCS : Added more explanation about what the hitch bootstrap script does.
<DFF> @@ -46,7 +46,6 @@ This is a small python script with zero dependencies.
See also:
* :doc:`/faq/what_does_the_hitch_bootstrap_script_do`
-* :doc:`/faq/why_install_hitch_on_the_system_path`
3. Runs "hitch clean" and "hitch init" in the current directory (does not require sudo)
| 0 | DOCS : Added more explanation about what the hitch bootstrap script does. | 1 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1901 | <NME> commandline.py
<BEF> """High level command line interface to hitch."""
from subprocess import call, PIPE, STDOUT, Popen
from hitch.click import command, group, argument, option
from os import path, makedirs, listdir, kill, remove
from sys import stderr, stdout, exit, modules, argv
from functools import partial, reduce
from hitch import hitchdir, languagestrings
import shutil
import signal
import copy
class CalledProcessError(Exception):
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
def check_output(command, stdout=PIPE, stderr=PIPE):
"""Re-implemented subprocess.check_output since it is not available < python 2.7."""
return Popen(command, stdout=stdout, stderr=stderr).communicate()[0]
def check_call(command, shell=False):
"""Re-implemented subprocess.check_call since it is not available < python 2.7."""
process = Popen(command, shell=shell)
process.communicate()
if process.returncode != 0:
raise CalledProcessError
return
def stop_everything(sig, frame):
"""Exit hitch."""
exit(1)
@group()
def cli():
pass
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
pip_freeze = check_output([pip, "freeze"]).decode('utf8').split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
if not sorted(pip_freeze) == sorted(hitchreqs):
call([pip, "install", "-r", "hitchreqs.txt"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@group()
def cli():
pass
@command()
@option(
'-p', '--python', default=None,
help=languagestrings.SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH
)
@option(
'-v', '--virtualenv', default=None,
help=languagestrings.SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH
)
def init(python, virtualenv):
"""Initialize hitch in this directory."""
if virtualenv is None:
if call(["which", "virtualenv"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_VIRTUALENV_INSTALLED)
stderr.flush()
exit(1)
virtualenv = check_output(["which", "virtualenv"]).decode('utf8').replace("\n", "")
else:
if path.exists(virtualenv):
if python is None:
python = path.join(path.dirname(virtualenv), "python")
else:
stderr.write("{0} not found.\n".format(virtualenv))
if python is None:
if call(["which", "python3"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_PYTHON3_INSTALLED)
stderr.flush()
exit(1)
python3 = check_output(["which", "python3"]).decode('utf8').replace("\n", "")
else:
if path.exists(python):
python3 = python
else:
stderr.write("{0} not found.\n".format(python))
exit(1)
python_version = check_output([python3, "-V"], stderr=STDOUT).decode('utf8')
replacements = ('Python ', ''), ('\n', '')
str_version = reduce(lambda a, kv: a.replace(*kv), replacements, python_version)
tuple_version = tuple([int(x) for x in str_version.split('.')[:2]])
if tuple_version < (3, 3):
check_call([pip, "install", "-U", "pip"])
check_call([pip, "install", "unixpackage", "hitchsystem"])
hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
makedirs(".hitch")
# Store absolute directory in .hitch directory to guard against the directory being moved
hitch_dir = path.abspath(".hitch")
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3
])
check_call([pip, "install", "--upgrade", "pip"])
check_call([pip, "install", "--upgrade", "setuptools"])
check_call([pip, "install", "unixpackage", "hitchsystem"])
installpackages()
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "hitchtest"])
check_call([pip, "install", "hitchquickstart"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
signal.signal(signal.SIGINT, stop_everything)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
# Generic method to run any installed app in the virtualenv whose name starts with hitch*
hitchdir.check_hitch_directory_integrity()
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{0}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
hitchdir.check_hitch_directory_integrity()
pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@command()
def freeze():
"""List installed hitch packages."""
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def freeze():
"""List installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
if path.exists(python_bin):
packages = [
package.replace("hitch", "") for package in listdir(
path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
)
if package.startswith("hitch") and package != "hitch"
]
# Add commands that start with "hitch" to the list of commands available (e.g. hitchtest, hitchsmtp)
for package in packages:
cmd = copy.deepcopy(runpackage)
cmd.name = package
try:
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> FEATURE : Run system package installer after hitch install/hitch upgrade/hitch init.
<DFF> @@ -34,6 +34,14 @@ def stop_everything(sig, frame):
exit(1)
+def installpackages():
+ """Install packages with hitchsystem."""
+ hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
+ signal.signal(signal.SIGINT, signal.SIG_IGN)
+ check_call([hitchsystem, "installpackages", ])
+ signal.signal(signal.SIGINT, stop_everything)
+
+
@group()
def cli():
pass
@@ -103,11 +111,7 @@ def init(python, virtualenv):
check_call([pip, "install", "-U", "pip"])
check_call([pip, "install", "unixpackage", "hitchsystem"])
- hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
-
- signal.signal(signal.SIGINT, signal.SIG_IGN)
- check_call([hitchsystem, "installpackages", ])
- signal.signal(signal.SIGINT, stop_everything)
+ installpackages()
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
@@ -119,9 +123,7 @@ def init(python, virtualenv):
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
- signal.signal(signal.SIGINT, signal.SIG_IGN)
- check_call([hitchsystem, "installpackages", ])
- signal.signal(signal.SIGINT, stop_everything)
+ installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
@@ -199,6 +201,9 @@ def install(package):
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
+ installpackages()
+
+
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
@@ -218,6 +223,9 @@ def upgrade():
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
+ installpackages()
+
+
@command()
def freeze():
"""List installed hitch packages."""
| 16 | FEATURE : Run system package installer after hitch install/hitch upgrade/hitch init. | 8 | .py | py | agpl-3.0 | hitchtest/hitch |
1902 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
exit(1)
if version_info[0] == 2:
if version_info[1] < 6:
stderr.write("The hitch bootstrapper will not run on versions of python below v2.6.\n")
exit(1)
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.1",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> RELEASE : Bumped version.
<DFF> @@ -22,7 +22,7 @@ def read(*parts):
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
- version="0.5.1",
+ version="0.5.2",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
| 1 | RELEASE : Bumped version. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1903 | <NME> generic_service_api.rst
<BEF> Generic Service API
===================
All of the services listed are created using the generic service API. This API lets
you start, monitor and stop any kind of process during a test.
Defining a Service Bundle
-------------------------
To run one or more services together during your tests, you must first define a
:doc:`/glossary/service_bundle` which will run them all together.
.. code-block:: python
self.services['MyService'] = hitchserve.Service(
command=["command", "arg1", "arg2, "arg3"], # Mandatory - command to run the service
log_line_ready_checker=lambda line: line == "READY", # Mandatory - function used to ascertain readiness of the service
directory="/directory/to/run/command/in", # Optional
no_libfaketime=False, # Optional (if set to True, the service is run without libfaketime)
self.services = hitchserve.ServiceBundle(
project_directory=PROJECT_DIRECTORY, # Default directory all of your services are started in
startup_timeout=15.0, # How long to wait for all of the services to startup
.. warning::
Libfaketime sometimes causes buggy and unpredictable behavior in some programs.
If you see problems when running a service, you may need to switch it off with 'no_libfaketime=True'.
Problems have been reported specifically with node.js and Java apps.
Logs
----
Most services output information about what they are doing. In UNIX, there are two
'pipes' known as stdout and stderr where processes can log normal information
and errors.
During normal operation in a test, both of these are logged to the screen, alongside
log_line_ready_checker=lambda line: line == "READY", # Mandatory - function used to ascertain readiness of the service
directory="/directory/to/run/command/in", # Optional
no_libfaketime=False, # Optional (if set to True, the service is run without libfaketime)
env_vars={'A':1, 'B':2}, # Optional (dictionary of environment variables to feed to the service)
needs=[self.services['Django']], # Optional (services to start and wait for before starting this one)
)
Starting a service bundle
-------------------------
Once all of your services are defined, they still aren't started. To start your services
you must call the startup method:
.. code-block:: python
self.services.startup(interactive=False)
In [2]: self.service['Django'].logs.out
[ Prints all of the logs ]
In [1]: self.service['Django'].logs.err
[ Prints all of the logs ]
Hitch also lets you grab a list of log lines encoded as JSON and return them
as a list of dicts/lists. For example::
In [5]: self.service['HitchSMTP'].logs.json()
Out[5]:
[{'contenttype': 'text/plain',
'date': 'Tue, 14 Jul 2015 05:59:44 -0000',
'header_from': 'webmaster@localhost',
'header_from_email': None,
'header_from_name': None,
'header_to': '[email protected]',
'header_to_email': None,
'header_to_name': None,
'links': ['http://127.0.0.1:18080/accounts/confirm-email/oro7rarxl8poqk9moe6jru5do6uoqijlllpcllmuqfaotqpvrdw3wlezsfvdtto4/'],
'multipart': False,
'payload': 'User django at 127.0.0.1:18080 has given this as an email address.\n\nTo confirm this is correct, go to http://127.0.0.1:18080/accounts/confirm-email/oro7rarxl8poqk9moe6jru5do6uoqijlllpcllmuqfaotqpvrdw3wlezsfvdtto4/',
'sent_from': 'webmaster@localhost',
'sent_to': ['[email protected]'],
'subject': '[127.0.0.1:18080] Confirm E-mail Address'},
{'contenttype': 'text/plain',
'date': 'Thu, 13 Aug 2015 13:59:47 -0000',
'header_from': 'noreply@localhost',
'header_from_email': None,
'header_from_name': None,
'header_to': '<[email protected]>',
'header_to_email': '[email protected]',
'header_to_name': '',
'links': [],
'multipart': False,
'payload': 'Remind me about upcoming gig.',
'sent_from': 'noreply@localhost',
'sent_to': ['[email protected]'],
'subject': 'Reminder'}]
This is a useful feature for verifying interactions with mock services went according to plan.
You can also tail the logs until a specific condition is met in a JSON line, for instance::
In [5]: self.services['HitchSMTP'].logs.out.tail.until_json(
lambda email: containing in email['payload'] or containing in email['subject'],
timeout=15,
lines_back=1,
)
[ returns full dict representation of JSON snippet representing email once it has been received ]
Interacting with a Service Bundle: Time Travel
----------------------------------------------
Many bugs and test scenarios often cannot be realistically replicated without
jumping through time.
The example application - a reminders app - is one example. To test that a reminder
is really sent after 30 days, the application must *think* that 30 days have actually
passed.
You can mimic these scenarios for services run using your service bundle by
calling the time_travel API, which can be used like so::
In [1]: self.services.time_travel(days=1)
Time traveling to 23 hours from now
In [2]: self.services.time_travel(hours=25)
Time traveling to 2 days from now
In [3]: self.services.time_travel(minutes=60)
Time traveling to 2 days from now
In [4]: self.services.time_travel(seconds=60)
Time traveling to 2 days from now
In [5]: from datetime import timedelta
In [6]: self.services.time_travel(timedelta=timedelta(hours=1))
Time traveling to 2 days from now
If you forgot where you are, you can get the current (mocked) time via::
In [7]: self.services.now()
Out[7]: datetime.datetime(2015, 7, 19, 16, 21, 33, 703669)
To move to an absolute time::
In [8]: from datetime import datetime
In [9]: self.services.time_travel(datetime=datetime.now())
Time traveling to now
Note that if no_libfaketime is set to True for a service, it will not pick up on the new time.
.. warning::
This feature relies upon a C library called libfaketime.
Libfaketime sometimes causes buggy and unpredictable behavior in some programs (e.g. node.js and Java)
on some platforms.
If you see problems when running a service, you may need to switch it off with 'no_libfaketime=True'.
Some programs will also work fine (e.g. firefox), but they will not pick up on the time being fed
to them.
Libfaketime works well with python and postgresql.
Interacting with a Service Bundle: Connecting to a service's IPython Kernel
---------------------------------------------------------------------------
IPython kernels are a great way of debugging your code. They give you access
to a REPL which you can use to inspect variables and run commands to see their
effect.
With python code, you can invoke a kernel by putting the following line of
code in your application:
.. code-block:: python
import IPython ; IPython.embed_kernel()
Hitch provides a convenience function which you can use to listen to a service's
logs and detect the presence of a recently embedded kernel and then connect
directly to it and launch an interpreter in interactive mode.
.. code-block:: python
def connect_to_kernel(self, service_name):
self.services.connect_to_ipykernel(service_name)
This is a step that can be called just by adding ::
- Connect to kernel: Celery
Note that if you are connecting to a kernel after clicking a button in a web
app, be sure to replace 'click' with the following step::
- Click and dont wait for page load: button-id
The regular click step will wait for the next page to load before continuing,
which will never happen because your app paused on loading it due to the embed_kernel.
Interacting with a Service Bundle: The Process API
--------------------------------------------------
To see a service's process ID::
In [1]: self.services['HitchSMTP'].pid
Out[1]: 43215
To interact with or inspect the service's process::
In [1]: self.services['HitchSMTP'].process.<TAB>
self.services['HitchSMTP'].process.as_dict self.services['HitchSMTP'].process.is_running self.services['HitchSMTP'].process.pid
self.services['HitchSMTP'].process.children self.services['HitchSMTP'].process.kill self.services['HitchSMTP'].process.ppid
self.services['HitchSMTP'].process.cmdline self.services['HitchSMTP'].process.memory_info self.services['HitchSMTP'].process.resume
self.services['HitchSMTP'].process.connections self.services['HitchSMTP'].process.memory_info_ex self.services['HitchSMTP'].process.rlimit
self.services['HitchSMTP'].process.cpu_affinity self.services['HitchSMTP'].process.memory_maps self.services['HitchSMTP'].process.send_signal
self.services['HitchSMTP'].process.cpu_percent self.services['HitchSMTP'].process.memory_percent self.services['HitchSMTP'].process.status
self.services['HitchSMTP'].process.cpu_times self.services['HitchSMTP'].process.name self.services['HitchSMTP'].process.suspend
self.services['HitchSMTP'].process.create_time self.services['HitchSMTP'].process.nice self.services['HitchSMTP'].process.terminal
self.services['HitchSMTP'].process.cwd self.services['HitchSMTP'].process.num_ctx_switches self.services['HitchSMTP'].process.terminate
self.services['HitchSMTP'].process.exe self.services['HitchSMTP'].process.num_fds self.services['HitchSMTP'].process.threads
self.services['HitchSMTP'].process.gids self.services['HitchSMTP'].process.num_threads self.services['HitchSMTP'].process.uids
self.services['HitchSMTP'].process.io_counters self.services['HitchSMTP'].process.open_files self.services['HitchSMTP'].process.username
self.services['HitchSMTP'].process.ionice self.services['HitchSMTP'].process.parent self.services['HitchSMTP'].process.wait
The psutil Process class API can be used to inspect the CPU usage of the service, its memory usage, list open files and much much more.
The full API docs for psutil's Process class are here: https://pythonhosted.org/psutil/#process-class
Interacting with a Service Bundle: Service Sub-commands
-------------------------------------------------------
Many services have special commands which are run during their operation.
For example, Django has the manage command, Redis has redis-cli and
Postgresql has psql.
Hitch provides an API to let you run these commands in the same environment
as the service you are running. This means that they will inherit the same
environment variables and time::
In [1]: self.services['Django'].manage("help").run()
Running Arbitrary Code Before and After Starting a Service
----------------------------------------------------------
Some services can just be started and stopped, but others require special
code to be run before and after. A good example of this is postgresql,
which requires initdb be run before starting the database service, and CREATE
USER / CREATE DATABASE to be run after.
If your service has special requirements like this, you can subclass the
hitchserve Service object and override the setup and poststart
methods:
.. code-block:: python
from hitchserve import Service
import signal
class MyService(Service):
def __init__(self, **kwargs):
kwargs['log_line_ready_checker'] = lambda line: "line in logs that signals readiness" in line
kwargs['command'] = ["start_service_command", "arg1", "arg2", "arg3", ]
super(MyService, self).__init__(**kwargs)
def setup(self):
"""This is where you run all of the code you want run before starting the service."""
pass
def poststart(self):
"""This is where you put all of the code you want run after the service is ready."""
pass
<MSG> DOCS : Updated API docs for plugins.
<DFF> @@ -1,6 +1,10 @@
Generic Service API
===================
+.. note::
+
+ This documentation applies to the latest version of hitchserve: version 0.3.8
+
All of the services listed are created using the generic service API. This API lets
you start, monitor and stop any kind of process during a test.
@@ -12,7 +16,7 @@ To use, define the service after initializing the ServiceBundle object but befor
.. code-block:: python
self.services['MyService'] = hitchserve.Service(
- command=["command", "arg1", "arg2, "arg3"], # Mandatory - command to run the service
+ command=["command", "arg1", "arg2", "arg3"], # Mandatory - command to run the service
log_line_ready_checker=lambda line: line == "READY", # Mandatory - function used to ascertain readiness of the service
directory="/directory/to/run/command/in", # Optional
no_libfaketime=False, # Optional (if set to True, the service is run without libfaketime)
@@ -22,16 +26,15 @@ To use, define the service after initializing the ServiceBundle object but befor
.. warning::
- Libfaketime sometimes causes buggy and unpredictable behavior in some programs.
+ Libfaketime sometimes causes buggy and unpredictable behavior in some programs (e.g. node.js and Java).
If you see problems when running a service, you may need to switch it off with 'no_libfaketime=True'.
- Problems have been reported specifically with node.js and Java apps.
Logs
----
Most services output information about what they are doing. In UNIX, there are two
-'pipes' known as stdout and stderr where processes can log normal information
+'pipes' known as stdout and stderr where processes can log regular information
and errors.
During normal operation in a test, both of these are logged to the screen, alongside
@@ -58,5 +61,87 @@ You can see the stdout and stderr individually, too::
In [2]: self.service['Django'].logs.out
[ Prints all of the logs ]
- In [1]: self.service['Django'].logs.err
+ In [3]: self.service['Django'].logs.err
[ Prints all of the logs ]
+
+You can tail the logs too::
+
+ In [4]: self.service['Django'].logs.tail.follow(lines_back=2)
+ [ Prints logs from two lines before the command starts. ]
+ [ Continues logging in real time until you hit ctrl-C ]
+
+A unique feature of hitchserve is also the ability to parse log lines looking for JSON
+logs and then parse them and present them to you as python list of lists and dicts::
+
+ In [5]: self.service['HitchSMTP'].logs.json()
+ Out[5]:
+ [{'contenttype': 'text/plain',
+ 'date': 'Tue, 14 Jul 2015 05:59:44 -0000',
+ 'header_from': 'webmaster@localhost',
+ 'header_from_email': None,
+ 'header_from_name': None,
+ 'header_to': '[email protected]',
+ 'header_to_email': None,
+ 'header_to_name': None,
+ 'links': ['http://127.0.0.1:18080/accounts/confirm-email/oro7rarxl8poqk9moe6jru5do6uoqijlllpcllmuqfaotqpvrdw3wlezsfvdtto4/'],
+ 'multipart': False,
+ 'payload': 'User django at 127.0.0.1:18080 has given this as an email address.\n\nTo confirm this is correct, go to http://127.0.0.1:18080/accounts/confirm-email/oro7rarxl8poqk9moe6jru5do6uoqijlllpcllmuqfaotqpvrdw3wlezsfvdtto4/',
+ 'sent_from': 'webmaster@localhost',
+ 'sent_to': ['[email protected]'],
+ 'subject': '[127.0.0.1:18080] Confirm E-mail Address'},
+ {'contenttype': 'text/plain',
+ 'date': 'Thu, 13 Aug 2015 13:59:47 -0000',
+ 'header_from': 'noreply@localhost',
+ 'header_from_email': None,
+ 'header_from_name': None,
+ 'header_to': '<[email protected]>',
+ 'header_to_email': '[email protected]',
+ 'header_to_name': '',
+ 'links': [],
+ 'multipart': False,
+ 'payload': 'Remind me about upcoming gig.',
+ 'sent_from': 'noreply@localhost',
+ 'sent_to': ['[email protected]'],
+ 'subject': 'Reminder'}]
+
+This is a useful feature for verifying interactions with mock services.
+
+You can also tail the logs until a specific JSON line is seen::
+
+ In [5]: self.services['HitchSMTP'].logs.out.tail.until_json(
+ lambda email: containing in email['payload'] or containing in email['subject'],
+ timeout=15,
+ lines_back=1,
+ )
+ [ outputs dict representation of line representing email once it has been received ]
+
+
+Process API
+-----------
+
+To see a service's process ID::
+
+ In [1]: self.services['HitchSMTP'].pid
+ Out[1]: 43215
+
+To interact with or inspect the service's process::
+
+ In [1]: self.services['HitchSMTP'].process.<TAB>
+ self.services['HitchSMTP'].process.as_dict self.services['HitchSMTP'].process.is_running self.services['HitchSMTP'].process.pid
+ self.services['HitchSMTP'].process.children self.services['HitchSMTP'].process.kill self.services['HitchSMTP'].process.ppid
+ self.services['HitchSMTP'].process.cmdline self.services['HitchSMTP'].process.memory_info self.services['HitchSMTP'].process.resume
+ self.services['HitchSMTP'].process.connections self.services['HitchSMTP'].process.memory_info_ex self.services['HitchSMTP'].process.rlimit
+ self.services['HitchSMTP'].process.cpu_affinity self.services['HitchSMTP'].process.memory_maps self.services['HitchSMTP'].process.send_signal
+ self.services['HitchSMTP'].process.cpu_percent self.services['HitchSMTP'].process.memory_percent self.services['HitchSMTP'].process.status
+ self.services['HitchSMTP'].process.cpu_times self.services['HitchSMTP'].process.name self.services['HitchSMTP'].process.suspend
+ self.services['HitchSMTP'].process.create_time self.services['HitchSMTP'].process.nice self.services['HitchSMTP'].process.terminal
+ self.services['HitchSMTP'].process.cwd self.services['HitchSMTP'].process.num_ctx_switches self.services['HitchSMTP'].process.terminate
+ self.services['HitchSMTP'].process.exe self.services['HitchSMTP'].process.num_fds self.services['HitchSMTP'].process.threads
+ self.services['HitchSMTP'].process.gids self.services['HitchSMTP'].process.num_threads self.services['HitchSMTP'].process.uids
+ self.services['HitchSMTP'].process.io_counters self.services['HitchSMTP'].process.open_files self.services['HitchSMTP'].process.username
+ self.services['HitchSMTP'].process.ionice self.services['HitchSMTP'].process.parent self.services['HitchSMTP'].process.wait
+
+The psutil Process class API can be used to inspect the CPU usage of the process, memory usage, list open files and much much more is available.
+
+The full API docs for psutil's Process class are here: https://pythonhosted.org/psutil/#process-class
+
| 90 | DOCS : Updated API docs for plugins. | 5 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1904 | <NME> how_does_hitch_compare_to_other_technologies.rst
<BEF> How does Hitch compare to other technologies?
=============================================
Cucumber/Behave/RSpec/Behat/Behave
----------------------------------
Cucumber, RSpec, Behat and Behave and are all keyword driven test automation
frameworks that run automated acceptance tests. They contain an interpreter
for executing high level test cases written in Gherkin.
Hitch follows a similar approach but has its own equivalent to
Gherkin: :doc:`/glossary/hitch_test_description_language`.
Unlike Gherkin it does not use its own syntax - its syntax
is built upon YAML.
Test cases written with Hitch test should usually be less verbose
and more to the point, although still ideally maintaining
readability.
Gherkin example from the Cucumber website (223 characters; English-like):
.. code-block:: gherkin
Feature: Division
In order to avoid silly mistakes
Cashiers must be able to calculate a fraction
Scenario: Regular numbers
* I have entered 3 into the calculator
* I press divide
* I have entered 2 into the calculator
* I press equal
* The result should be 1.5 on the screen
Hitch equivalent (113 characters; not English-like):
.. code-block:: yaml
- name: Division
description: Cashier calculates a fraction
scenario:
- Enter: 3
- Press: divide
- Enter: 2
- Press: equal
- Result: 1.5
Step-to-code regular expression translation is also unnecessary in Hitch
sidestepping `potential traps like this. <https://stackoverflow.com/questions/1186547/regular-expressions-in-cucumber-steps>`_
.. note::
This pitfall is `recognized by Cucumber in issue #1. <https://github.com/cucumber/cucumber/issues/1>`_
The python tool behave gives you `three different parser options <https://pythonhosted.org/behave/tutorial.html#step-parameters>`_
as a way to deal with it. There are other `suggested <http://laxmareddy.com/cucumber-step-definitions-regular-expressions-matching-steps/>`_
`workarounds <http://chrismcmahonsblog.blogspot.sg/2013/09/magic-strings-and-regular-expressions.html>`_ too.
The above three steps are implemented as follows in Hitch:
.. code-block:: python
def enter(self, number):
# code that enters a number
def press(self, key):
# code that presses a key
def result(self, number):
assert displayed_result == number
More complex data can also be cleanly encoded into steps and preconditions. Anything that is valid YAML is allowed.
You can write a complex step like this:
.. code-block:: yaml
- Send mail:
From address: Receiver <[email protected]>
To address: Sender <[email protected]>
Body:
From: Receiver <[email protected]>
To: Sender <[email protected]>
Subject: Test email for "HitchSMTP"
Content: |
http://www.google.com
Another link: http://yahoo.com
Another link: https://www.google.com.sg/?gfe_rd=cr&ei=2X4mVebUFYTDuATVtoHoAQ#q=long+long+long+long+long+long+url
Which would trigger a python method call equivalent to the following:
.. code-block:: python
self.send_mail(
from_address="Receiver <[email protected]>",
to_address="To address: Sender <[email protected]>",
body={
"From" : "Receiver <[email protected]>",
"To" : "Sender <[email protected]>",
"Subject" : "Test email for \"HitchSMTP\""
"Content" : (
"http://www.google.com\n"
"Another link: http://yahoo.com\n"
"Another link: https://www.google.com.sg/?gfe_rd=cr&ei=2X4mVebUFYTDuATVtoHoAQ#q=long+long+long+long+long+long+url"
)
}
)
Where reading the data in the step code :doc:`/glossary/execution_engine` is still straightforward:
.. code-block:: python
self.send_mail(self, from_address, to_address, body)
content = body.get("content")
The above applies to the following packages:
* hitchtest
Hitch also provides plugins to perform many more test and development related tasks, saving on boilerplate (see :doc:`/plugins/index`).
Hitch does *not* provide:
* Bindings to write the execution engine in languages other than python. This is not roadmapped and not possible currently.
* Plugins to easily test other languages and frameworks (e.g. Java, node, Ruby, etc.). This possible but not easy currently and is roadmapped.
Docker/Docker Compose
---------------------
Docker is a lightweight virtualization technology that provides
system :doc:`/glossary/isolation` using cgroups and kernel
namespaces.
Docker can be used to develop software in, test software in and
deploy software in. By running the same container in all three
environments, development and testing can achieve a greater
degree of :doc:`/glossary/test_realism` thus avoiding many
'surprise' production bugs.
Nonetheless, the isolation and realism is not as high as "true
virtualization" (VirtualBox, Xen, VMWare) provided via kernel
emulation.
The same Docker container running on different systems
can (and probably will, for many projects eventually),
exhibit different behavior due to different versions of the
linux kernel or libc in development, testing and production
environments (TODO : verify libc differences??).
Due to the reliance on Linux kernel features for isolation,
docker also does not work on Mac OS X or BSD platforms
without running it in a heavyweight virtual machine.
Hitch can run docker containers, as it can any other
process (a plugin to make this easier is coming soon).
If you deploy docker containers in your production
environment, this is a recommended approach since it
will bring a greater level of :doc:`/glossary/test_realism`.
If you do *not* deploy docker containers in your
production environment, you may want to avoid using
docker for development and test environments.
Hitch achieves a similar, although lower level of
isolation and realism using a different approach:
* :doc:`/glossary/package_isolation`
* :doc:`/glossary/data_isolation`
* :doc:`/glossary/process_isolation`
* :doc:`/glossary/environment_isolation`
You can, for instance, run the exact same database version,
python version and redis version that you do in production
on your development machine.
[ TO DO : docker-compose and starting services bug ]
The above applies to the following packages:
* hitchserve
* hitchtest
* All hitch plugins
.. note::
You can also run hitch *in* docker. It is regularly tested with the latest version.
Built-in Django Testing Framework
---------------------------------
Django already comes with four official classes for unit testing web apps, each of which test at a progressively higher level:
* SimpleTestCase - a low level unit tester for Django views.
* TransactionTestCase - a low level unit tester for Django views which also rolls back the database.
* TestCase - a low level unit tester which performs the above and also loads fixtures and adds django specific assertions.
* LiveServerTestCase - a higher level TransactionTestCase which runs the django web server to allow for the use of selenium.
See : https://docs.djangoproject.com/en/1.8/topics/testing/tools/ for details.
Hitch serves as an effective drop in replacement for all of these. While slower, tests written
using hitch should exhibit a greater degree of :doc:`/glossary/test_realism`, :doc:`/glosary/isolation`
and looser :doc:`/glossary/coupling`.
Practical benefits:
* You can run a celery service alongside the test.
* Hitch test maintains stricter database isolation.
* It runs all services with faketime, allowing you to mock the forward passage of time via your tests.
* Looser coupling means that if you refactor or rewrite your application code, you should only need minimal changes to your tests.
* Hitch tests can more easily be made to be :doc:`/glossary/business_readable`.
Tox, PyEnv and Virtualenv
-------------------------
Tox is a small, popular python framework that can run unit tests in multiple python environments.
It can be used to run unit tests with multiple versions of python if those versions are installed.
PyEnv is a small application which can download and compile specific versions of python and
run them alongside one another.
Virtualenv is a tool for creating a python environment where you can install an isolated
group of packages which you can use to run or test an application that depends upon them.
Hitch can supplant tox for integration tests (See : :doc:`/howto/parameterize_test_cases`).
Hitch *bundles* pyenv and uses it to build a python virtualenv(s) for you.
It does this with two lines of code:
.. code-block:: python
# Define the version of python you want
python_package = PythonPackage(version="3.4.3")
# Installs python 3.4.3 into ~/.hitchpkg (if it isn't already present)
# Creates virtualenv in .hitch folder (if it doesn't already exist)
python_package.build()
# Python virtualenv you can use with your project:
python_package.python == "/path/to/your/project/tests/.hitch/py3.4.3/bin/python"
python_package.pip == "/path/to/your/project/tests/.hitch/py3.4.3/bin/pip"
The above applies to the following packages:
* hitchpython
* python-build
.. note::
Hitch *also* uses virtualenv to isolate *itself* and the code it runs the
:doc:`/glossary/execution_engine` with. This is a virtualenv created with
your system's python 3.
py.test/nose/unittest2
----------------------
py.test, nose, unittest and unittest2 are all unit test frameworks, although they
are often used to write integration tests.
See :doc:`/faq/when_should_i_use_a_unit_test_and_when_should_i_use_an_integration_test`
[ TO DO : parameterization, readability, boilerplate to handle services, isolation features, loosely coupled, muliple services ]
Robot Framework
---------------
[ TO DO ]
Other technologies?
-------------------
If you'd like to see a comparison with other technologies here or would like to correct
something said above, raising a ticket is welcome:
https://github.com/hitchtest/hitch/issues/new
<MSG> DOCS : typo
<DFF> @@ -202,7 +202,7 @@ Django already comes with four official classes for unit testing web apps, each
See : https://docs.djangoproject.com/en/1.8/topics/testing/tools/ for details.
Hitch serves as an effective drop in replacement for all of these. While slower, tests written
-using hitch should exhibit a greater degree of :doc:`/glossary/test_realism`, :doc:`/glosary/isolation`
+using hitch should exhibit a greater degree of :doc:`/glossary/test_realism`, :doc:`/glossary/isolation`
and looser :doc:`/glossary/coupling`.
Practical benefits:
| 1 | DOCS : typo | 1 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1905 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework

-----
## Deprecated. See [seetaresearch/Dragon](http://github.com/seetaresearch/Dragon).
[*Win64-VS2015*](https://pan.baidu.com/s/1c2eX6lq) (OpenBLAS / Protobuf2.6 for VS2015 / CUDNN v7 / Microsoft MPI)
[*Linux64*](https://pan.baidu.com/s/1qXPEOWG) (OpenMPI)
For Windows, ``python27/35/36.lib`` should be copied to ``Dragon/3rdparty/lib``, it depends on the version of Python.
- Run 3rdparty/setup_mpi.sh
```Shell
./setup_mpi.sh
```
- Install
```Shell
sudo cp openmpi/install/bin/mpirun /usr/bin
```
#### Windows:
- We use Microsoft MPI which can perfectly run at lastest Windows10
<MSG> fix the potential crash of DragonBoard
<DFF> @@ -22,7 +22,7 @@
[*Win64-VS2015*](https://pan.baidu.com/s/1c2eX6lq) (OpenBLAS / Protobuf2.6 for VS2015 / CUDNN v7 / Microsoft MPI)
- [*Linux64*](https://pan.baidu.com/s/1qXPEOWG) (OpenMPI)
+ [*Linux64*](https://pan.baidu.com/s/1c2ChKHy) (OpenMPI)
For Windows, ``python27/35/36.lib`` should be copied to ``Dragon/3rdparty/lib``, it depends on the version of Python.
@@ -73,12 +73,12 @@
- Run 3rdparty/setup_mpi.sh
```Shell
- ./setup_mpi.sh
+ bash ./setup_mpi.sh
```
- Install
```Shell
- sudo cp openmpi/install/bin/mpirun /usr/bin
+ sudo cp 3rdparty/openmpi/install/bin/mpirun /usr/bin
```
#### Windows:
- We use Microsoft MPI which can perfectly run at lastest Windows10
| 3 | fix the potential crash of DragonBoard | 3 | .md | md | bsd-2-clause | neopenx/Dragon |
1906 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework

-----
## Deprecated. See [seetaresearch/Dragon](http://github.com/seetaresearch/Dragon).
[*Win64-VS2015*](https://pan.baidu.com/s/1c2eX6lq) (OpenBLAS / Protobuf2.6 for VS2015 / CUDNN v7 / Microsoft MPI)
[*Linux64*](https://pan.baidu.com/s/1qXPEOWG) (OpenMPI)
For Windows, ``python27/35/36.lib`` should be copied to ``Dragon/3rdparty/lib``, it depends on the version of Python.
- Run 3rdparty/setup_mpi.sh
```Shell
./setup_mpi.sh
```
- Install
```Shell
sudo cp openmpi/install/bin/mpirun /usr/bin
```
#### Windows:
- We use Microsoft MPI which can perfectly run at lastest Windows10
<MSG> fix the potential crash of DragonBoard
<DFF> @@ -22,7 +22,7 @@
[*Win64-VS2015*](https://pan.baidu.com/s/1c2eX6lq) (OpenBLAS / Protobuf2.6 for VS2015 / CUDNN v7 / Microsoft MPI)
- [*Linux64*](https://pan.baidu.com/s/1qXPEOWG) (OpenMPI)
+ [*Linux64*](https://pan.baidu.com/s/1c2ChKHy) (OpenMPI)
For Windows, ``python27/35/36.lib`` should be copied to ``Dragon/3rdparty/lib``, it depends on the version of Python.
@@ -73,12 +73,12 @@
- Run 3rdparty/setup_mpi.sh
```Shell
- ./setup_mpi.sh
+ bash ./setup_mpi.sh
```
- Install
```Shell
- sudo cp openmpi/install/bin/mpirun /usr/bin
+ sudo cp 3rdparty/openmpi/install/bin/mpirun /usr/bin
```
#### Windows:
- We use Microsoft MPI which can perfectly run at lastest Windows10
| 3 | fix the potential crash of DragonBoard | 3 | .md | md | bsd-2-clause | neopenx/Dragon |
1907 | <NME> index.rst
<BEF> Hitch
=====
Hitch is a framework for :doc:`/glossary/integration_testing`.
Features
--------
* Runs reliably without modification on Mac OS X, Ubuntu/Debian, Fedora, CentOS and Arch and in Docker.
* Automates its own deployment and does not interfere with your system other than to install packages.
* Provides boilerplate and tools to substantially minimize the problem of :doc:`/glossary/brittle_tests`.
* Readable :doc:`/glossary/hitch_test_description_language` that doesn't require you to write regular expressions.
* Built-in :doc:`/glossary/service_orchestration` library for running groups of services (databases, webservers, microservices) together.
* Built-in :doc:`/glossary/step_library` for common tasks (interacting with browsers, command line & emails).
* Provides a suitable environment for :doc:`/glossary/acceptance_test_driven_development` complete with debugging tools.
.. toctree::
:maxdepth: 2
tutorials/index
glossary/index
.. toctree::
:maxdepth: 2
plugins/index
Documentation
-------------
.. toctree::
:maxdepth: 2
quickstart/index
howto/index
faq/index
api/index
misc/index
See the full :doc:`/glossary/index` here.
<MSG> DOCS : Removed tutorials contents entry.
<DFF> @@ -16,5 +16,4 @@ Contents:
.. toctree::
:maxdepth: 2
- tutorials/index
glossary/index
| 0 | DOCS : Removed tutorials contents entry. | 1 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1908 | <NME> RedisCacheTest.java
<BEF> package org.crazycake.shiro;
import org.apache.shiro.subject.PrincipalCollection;
import org.crazycake.shiro.exception.SerializationException;
import org.crazycake.shiro.serializer.ObjectSerializer;
import org.crazycake.shiro.serializer.StringSerializer;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import java.util.Collection;
import java.util.Iterator;
public class RedisCacheTest {
private RedisManager redisManager;
private RedisCache<String, FakeSession> redisCache;
private String testKey;
private StringSerializer keySerializer;
public class RedisCacheTest {
private IRedisManager redisManager;
private StringSerializer keySerializer = new StringSerializer();
private ObjectSerializer valueSerializer = new ObjectSerializer();
@BeforeEach
public void setUp() {
redisManager = mock(IRedisManager.class);
}
private RedisCache mountRedisCache() {
return new RedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), "employee:", 1, RedisCacheManager.DEFAULT_PRINCIPAL_ID_FIELD_NAME);
}
@Test
public void testInitialize() {
Assertions.assertThrows(IllegalArgumentException.class, () -> new RedisCache<String, String>(null, null, null, "abc:", 1, RedisCacheManager.DEFAULT_PRINCIPAL_ID_FIELD_NAME));
Assertions.assertThrows(IllegalArgumentException.class, () -> new RedisCache<String, String>(new RedisManager(), null, null, "abc:", 1, RedisCacheManager.DEFAULT_PRINCIPAL_ID_FIELD_NAME));
Assertions.assertThrows(IllegalArgumentException.class, () -> new RedisCache<String, String>(new RedisManager(), new StringSerializer(), null, "abc:", 1, RedisCacheManager.DEFAULT_PRINCIPAL_ID_FIELD_NAME));
}
@Test
public void testPut() throws SerializationException {
RedisCache rc = mountRedisCache();
Object value = rc.put("foo", "bar");
assertThat(value, is("bar"));
testValues.add(paulSession);
billySession = new FakeSession(3, "billy");
testValues.add(billySession);
redisManager = mock(RedisManager.class);
when(redisManager.dbSize()).thenReturn(2L);
when(redisManager.get(keySerializer.serialize(testPrefix + testKey))).thenReturn(valueSerializer.serialize(testValue));
when(redisManager.keys(keySerializer.serialize(testPrefix + "*"))).thenReturn(testSet);
}
class Employee {
private int id;
public Employee(int id) {
this.id = id;
}
public int getId() {
return this.id;
}
}
class EmployeePrincipal implements PrincipalCollection {
private Employee primaryPrincipal;
public EmployeePrincipal(int id) {
this.primaryPrincipal = new Employee(id);
}
@Override
public Employee getPrimaryPrincipal() {
return this.primaryPrincipal;
}
@Override
public <T> T oneByType(Class<T> aClass) {
return null;
}
@Override
public <T> Collection<T> byType(Class<T> aClass) {
return null;
}
@Override
public List asList() {
return null;
}
@Override
public Set asSet() {
return null;
}
@Override
public Collection fromRealm(String s) {
return null;
}
@Override
public Set<String> getRealmNames() {
return null;
}
@Override
public boolean isEmpty() {
return false;
}
@Override
public Iterator iterator() {
return null;
}
}
<MSG> rename class name , modify test case
<DFF> @@ -13,7 +13,7 @@ import static org.mockito.Mockito.when;
public class RedisCacheTest {
- private RedisManager redisManager;
+ private RedisSingletonManager redisManager;
private RedisCache<String, FakeSession> redisCache;
private String testKey;
private StringSerializer keySerializer;
@@ -46,7 +46,7 @@ public class RedisCacheTest {
testValues.add(paulSession);
billySession = new FakeSession(3, "billy");
testValues.add(billySession);
- redisManager = mock(RedisManager.class);
+ redisManager = mock(RedisSingletonManager.class);
when(redisManager.dbSize()).thenReturn(2L);
when(redisManager.get(keySerializer.serialize(testPrefix + testKey))).thenReturn(valueSerializer.serialize(testValue));
when(redisManager.keys(keySerializer.serialize(testPrefix + "*"))).thenReturn(testSet);
| 2 | rename class name , modify test case | 2 | .java | java | mit | alexxiyang/shiro-redis |
1909 | <NME> RedisCacheTest.java
<BEF> package org.crazycake.shiro;
import org.apache.commons.lang3.math.NumberUtils;
import org.apache.shiro.subject.PrincipalCollection;
import org.crazycake.shiro.exception.CacheManagerPrincipalIdNotAssignedException;
import org.crazycake.shiro.exception.PrincipalInstanceException;
import org.crazycake.shiro.model.*;
import org.crazycake.shiro.serializer.ObjectSerializer;
import org.crazycake.shiro.serializer.StringSerializer;
import org.junit.Before;
import org.junit.Test;
import java.util.Properties;
import java.util.Set;
import static fixture.TestFixture.turnUserToFakeAuth;
import static org.junit.Assert.fail;
import static fixture.TestFixture.*;
/**
* input key, value (java)
@BeforeEach
public void setUp() {
redisManager = mock(IRedisManager.class);
}
private RedisCache<PrincipalCollection, FakeAuth> redisCache;
private RedisCache<PrincipalCollection, FakeAuth> redisCacheWithPrincipalIdFieldName;
private RedisCache<PrincipalCollection, FakeAuth> redisCacheWithEmptyPrincipalIdFieldName;
private Properties properties = loadProperties("shiro-standalone.ini");
private PrincipalCollection user1;
private PrincipalCollection user2;
private PrincipalCollection user3;
private Set users1_2_3;
private String prefix;
RedisCache rc = mountRedisCache();
Object value = rc.put("foo", "bar");
assertThat(value, is("bar"));
verify(redisManager).set(keySerializer.serialize("employee:foo"), valueSerializer.serialize("bar"), 1);
PrincipalCollection principal = new EmployeePrincipal(3);
rc.put(principal, "account information");
redisCache = scaffoldRedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), prefix, NumberUtils.toInt(properties.getProperty("cacheManager.expire")), RedisCacheManager.DEFAULT_PRINCIPAL_ID_FIELD_NAME);
redisCacheWithPrincipalIdFieldName = scaffoldRedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), prefix, NumberUtils.toInt(properties.getProperty("cacheManager.expire")), properties.getProperty("cacheManager.principalIdFieldName"));
redisCacheWithEmptyPrincipalIdFieldName = scaffoldRedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), prefix, NumberUtils.toInt(properties.getProperty("cacheManager.expire")), "");
user1 = scaffoldAuthKey(scaffoldUser());
user2 = scaffoldAuthKey(scaffoldUser());
user3 = scaffoldAuthKey(scaffoldUser());
users1_2_3 = scaffoldKeys(user1, user2, user3);
}
public int getId() {
return this.id;
}
}
class EmployeePrincipal implements PrincipalCollection {
private Employee primaryPrincipal;
public EmployeePrincipal(int id) {
this.primaryPrincipal = new Employee(id);
}
@Override
public Employee getPrimaryPrincipal() {
return this.primaryPrincipal;
}
@Override
public <T> T oneByType(Class<T> aClass) {
return null;
}
@Override
public <T> Collection<T> byType(Class<T> aClass) {
return null;
}
@Override
public List asList() {
return null;
}
@Override
public Set asSet() {
return null;
}
@Override
public Collection fromRealm(String s) {
return null;
}
FakeAuth fakeAuth = redisCache.get(user1);
assertAuthEquals(fakeAuth, turnUserToFakeAuth((UserInfo)user1.getPrimaryPrincipal()));
}
@Test
public void testSize() throws InterruptedException {
return null;
}
}
<MSG> Add support for strings being in Principal
<DFF> @@ -1,20 +1,24 @@
package org.crazycake.shiro;
+import static fixture.TestFixture.*;
+import static org.junit.Assert.fail;
+
+import java.util.Properties;
+import java.util.Set;
+
import org.apache.commons.lang3.math.NumberUtils;
import org.apache.shiro.subject.PrincipalCollection;
+import org.apache.shiro.subject.SimplePrincipalCollection;
import org.crazycake.shiro.exception.CacheManagerPrincipalIdNotAssignedException;
import org.crazycake.shiro.exception.PrincipalInstanceException;
-import org.crazycake.shiro.model.*;
+import org.crazycake.shiro.model.FakeAuth;
+import org.crazycake.shiro.model.UserInfo;
import org.crazycake.shiro.serializer.ObjectSerializer;
import org.crazycake.shiro.serializer.StringSerializer;
import org.junit.Before;
import org.junit.Test;
-import java.util.Properties;
-import java.util.Set;
-import static fixture.TestFixture.turnUserToFakeAuth;
-import static org.junit.Assert.fail;
-import static fixture.TestFixture.*;
+import com.github.javafaker.Faker;
/**
* input key, value (java)
@@ -25,10 +29,14 @@ public class RedisCacheTest {
private RedisCache<PrincipalCollection, FakeAuth> redisCache;
private RedisCache<PrincipalCollection, FakeAuth> redisCacheWithPrincipalIdFieldName;
private RedisCache<PrincipalCollection, FakeAuth> redisCacheWithEmptyPrincipalIdFieldName;
+ private RedisCache<PrincipalCollection, String> redisCacheWithStrings;
+
private Properties properties = loadProperties("shiro-standalone.ini");
private PrincipalCollection user1;
private PrincipalCollection user2;
private PrincipalCollection user3;
+ private PrincipalCollection user4;
+
private Set users1_2_3;
private String prefix;
@@ -42,9 +50,11 @@ public class RedisCacheTest {
redisCache = scaffoldRedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), prefix, NumberUtils.toInt(properties.getProperty("cacheManager.expire")), RedisCacheManager.DEFAULT_PRINCIPAL_ID_FIELD_NAME);
redisCacheWithPrincipalIdFieldName = scaffoldRedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), prefix, NumberUtils.toInt(properties.getProperty("cacheManager.expire")), properties.getProperty("cacheManager.principalIdFieldName"));
redisCacheWithEmptyPrincipalIdFieldName = scaffoldRedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), prefix, NumberUtils.toInt(properties.getProperty("cacheManager.expire")), "");
+ redisCacheWithStrings = scaffoldRedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), prefix, NumberUtils.toInt(properties.getProperty("cacheManager.expire")), properties.getProperty("cacheManager.principalIdFieldName"));
user1 = scaffoldAuthKey(scaffoldUser());
user2 = scaffoldAuthKey(scaffoldUser());
user3 = scaffoldAuthKey(scaffoldUser());
+ user4 = new SimplePrincipalCollection(Faker.instance().gameOfThrones().character(), Faker.instance().gameOfThrones().city());
users1_2_3 = scaffoldKeys(user1, user2, user3);
}
@@ -94,6 +104,13 @@ public class RedisCacheTest {
FakeAuth fakeAuth = redisCache.get(user1);
assertAuthEquals(fakeAuth, turnUserToFakeAuth((UserInfo)user1.getPrimaryPrincipal()));
}
+
+ @Test
+ public void testPutString() {
+ redisCacheWithStrings.put(user4, user4.getPrimaryPrincipal().toString());
+ String auth = redisCacheWithStrings.get(user4);
+ assertEquals(auth, user4.getPrimaryPrincipal());
+ }
@Test
public void testSize() throws InterruptedException {
| 23 | Add support for strings being in Principal | 6 | .java | java | mit | alexxiyang/shiro-redis |
1910 | <NME> cudnn_device.cc
<BEF> #ifdef WITH_CUDNN
#include "core/types.h"
#include "core/tensor.h"
#include "utils/cudnn_device.h"
namespace dragon {
float CUDNNType<float>::oneval = 1.0;
float CUDNNType<float>::zeroval = 0.0;
const void* CUDNNType<float>::one =
static_cast<void *>(&CUDNNType<float>::oneval);
const void* CUDNNType<float>::zero =
static_cast<void *>(&CUDNNType<float>::zeroval);
double CUDNNType<double>::oneval = 1.0;
double CUDNNType<double>::zeroval = 0.0;
const void* CUDNNType<double>::one =
static_cast<void *>(&CUDNNType<double>::oneval);
const void* CUDNNType<double>::zero =
static_cast<void *>(&CUDNNType<double>::zeroval);
#ifdef WITH_CUDA_FP16
float CUDNNType<float16>::oneval = 1.0;
float CUDNNType<float16>::zeroval = 0.0;
const void* CUDNNType<float16>::one =
static_cast<void*>(&CUDNNType<float16>::oneval);
const void* CUDNNType<float16>::zero =
static_cast<void*>(&CUDNNType<float16>::zeroval);
#endif
template <typename T>
void cudnnSetTensorDesc(cudnnTensorDescriptor_t* desc, const vector<TIndex>& dims) {
int ndim = (int)dims.size();
int* dimA = new int[ndim];
int* strideA = new int[ndim];
TIndex stride = 1;
for (int i = ndim - 1; i >= 0; i--) {
strideA[i] = stride;
dimA[i] = dims[i];
stride *= dimA[i];
}
CUDNN_CHECK(cudnnSetTensorNdDescriptor(*desc, CUDNNType<T>::type, ndim, dimA, strideA));
delete[] dimA;
delete[] strideA;
}
template <typename T>
void cudnnSetTensor4dDesc(cudnnTensorDescriptor_t* desc,
const string& data_format,
const vector<TIndex>& dims) {
if (data_format == "NCHW") {
CUDNN_CHECK(cudnnSetTensor4dDescriptor(*desc, CUDNN_TENSOR_NCHW,
CUDNNType<T>::type,
dims[0],
dims[1],
dims[2],
dims[3]));
} else if (data_format == "NHWC") {
CUDNN_CHECK(cudnnSetTensor4dDescriptor(*desc, CUDNN_TENSOR_NHWC,
CUDNNType<T>::type,
dims[0],
dims[3],
dims[1],
dims[2]));
} else LOG(FATAL) << "Unknown data format: " << data_format;
}
template <typename T>
void cudnnSetTensor5dDesc(cudnnTensorDescriptor_t* desc,
const string& data_format,
const vector<TIndex>& dims) {
if (data_format == "NCHW") {
cudnnSetTensorDesc<T>(desc, dims);
} else if (data_format == "NHWC") {
const int N = (int)dims[0];
const int C = (int)dims[4];
const int H = (int)dims[1];
const int W = (int)dims[2];
const int D = (int)dims[3];
vector<int> fake_dims = { N, C, H, W, D };
vector<int> fake_strides = { H * W * D * C, 1, W * D * C, D * C, C };
CUDNN_CHECK(cudnnSetTensorNdDescriptor(*desc,
CUDNNType<T>::type,
5,
fake_dims.data(),
fake_strides.data()));
} else LOG(FATAL) << "Unknown data format: " << data_format;
}
template <typename T>
void cudnnSetTensor3dDesc(cudnnTensorDescriptor_t* desc,
const string& data_format,
const vector<TIndex>& dims) {
vector<TIndex> fake_dims = dims;
if (data_format == "NCHW") {
// NCH -> NCHXX
fake_dims.push_back(1);
fake_dims.push_back(1);
} else if (data_format == "NHWC") {
// NHC -> NHXXC
fake_dims.insert(fake_dims.begin() + 2, 1);
fake_dims.insert(fake_dims.begin() + 2, 1);
} else LOG(FATAL) << "Unknown data format: " << data_format;
cudnnSetTensor5dDesc<T>(desc, data_format, fake_dims);
}
template <typename T>
void cudnnSetTensorDesc(cudnnTensorDescriptor_t* desc,
const vector<TIndex>& dims,
const vector<TIndex>& strides) {
CHECK_EQ(dims.size(), strides.size());
CHECK(dims.size() >= 3 && dims.size() <= 8);
int ndim = (int)dims.size();
int* dimA = new int[ndim];
int* strideA = new int[ndim];
for (int i = ndim - 1; i >= 0; i--) {
strideA[i] = strides[i];
dimA[i] = dims[i];
}
CUDNN_CHECK(cudnnSetTensorNdDescriptor(*desc, CUDNNType<T>::type, ndim, dimA, strideA));
delete[] dimA;
delete[] strideA;
}
template <typename T>
void cudnnSetTensorDesc(cudnnTensorDescriptor_t* desc, Tensor* tensor) {
// cuDNN requires ndim from 3 to 8
// we fake a reshaped dims to pass check
vector<TIndex> fake_dims(tensor->dims());
if (fake_dims.size() < 3 || fake_dims.size() > 8) {
fake_dims.assign({ 1, 1 });
fake_dims.push_back(tensor->count());
}
cudnnSetTensorDesc<T>(desc, fake_dims);
}
template <typename T>
void cudnnSetTensor4dDesc(cudnnTensorDescriptor_t* desc, const string& data_format, Tensor* tensor) {
CHECK_EQ((int)tensor->ndim(), 4)
<< "\nThe num of dimensions of Tensor(" << tensor->name() << ") "
<< "should be 4, but got " << tensor->ndim() << ".";
cudnnSetTensor4dDesc<T>(desc, data_format, tensor->dims());
}
template <typename T>
void cudnnSetTensor5dDesc(cudnnTensorDescriptor_t* desc, const string& data_format, Tensor* tensor) {
CHECK_EQ((int)tensor->ndim(), 5)
<< "\nThe num of dimensions of Tensor(" << tensor->name() << ") "
<< "should be 5, but got " << tensor->ndim() << ".";
cudnnSetTensor5dDesc<T>(desc, data_format, tensor->dims());
}
template <typename T>
void cudnnSetTensor3dDesc(cudnnTensorDescriptor_t* desc, const string& data_format, Tensor* tensor) {
CHECK_EQ((int)tensor->ndim(), 3)
<< "\nThe num of dimensions of Tensor(" << tensor->name() << ") "
<< "should be 3, but got " << tensor->ndim() << ".";
cudnnSetTensor3dDesc<T>(desc, data_format, tensor->dims());
}
template void cudnnSetTensorDesc<float>(cudnnTensorDescriptor_t*, Tensor*);
template void cudnnSetTensor4dDesc<float>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensor5dDesc<float>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensor3dDesc<float>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensorDesc<float>(cudnnTensorDescriptor_t*, const vector<TIndex>&);
template void cudnnSetTensor4dDesc<float>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor5dDesc<float>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor3dDesc<float>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensorDesc<float>(cudnnTensorDescriptor_t*, const vector<TIndex>&, const vector<TIndex>&);
template void cudnnSetTensorDesc<double>(cudnnTensorDescriptor_t*, Tensor*);
template void cudnnSetTensor4dDesc<double>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensor5dDesc<double>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensor3dDesc<double>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensorDesc<double>(cudnnTensorDescriptor_t*, const vector<TIndex>&);
template void cudnnSetTensor4dDesc<double>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor5dDesc<double>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor3dDesc<double>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensorDesc<double>(cudnnTensorDescriptor_t*, const vector<TIndex>&, const vector<TIndex>&);
#ifdef WITH_CUDA_FP16
template void cudnnSetTensorDesc<float16>(cudnnTensorDescriptor_t*, Tensor*);
template void cudnnSetTensor4dDesc<float16>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensor5dDesc<float16>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensor3dDesc<float16>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensorDesc<float16>(cudnnTensorDescriptor_t*, const vector<TIndex>&);
template void cudnnSetTensor4dDesc<float16>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor5dDesc<float16>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor3dDesc<float16>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensorDesc<float16>(cudnnTensorDescriptor_t*, const vector<TIndex>&, const vector<TIndex>&);
#endif
} // namespace dragon
#endif // WITH_CUDNN
<MSG> Fix/Refactor the GroupConvolution on cuDNN
<DFF> @@ -65,7 +65,35 @@ void cudnnSetTensor4dDesc(cudnnTensorDescriptor_t* desc,
dims[3],
dims[1],
dims[2]));
- } else LOG(FATAL) << "Unknown data format: " << data_format;
+ } else LOG(FATAL) << "Unknown data format: " << data_format;
+}
+
+template <typename T>
+void cudnnSetTensor4dDescWithGroup(cudnnTensorDescriptor_t* desc,
+ const string& data_format,
+ const vector<TIndex>& dims,
+ const TIndex group) {
+ if (data_format == "NCHW") {
+ CUDNN_CHECK(cudnnSetTensor4dDescriptorEx(*desc, CUDNNType<T>::type,
+ dims[0],
+ dims[1] / group,
+ dims[2],
+ dims[3],
+ dims[1] * dims[2] * dims[3],
+ dims[2] * dims[3],
+ dims[3],
+ 1));
+ } else if (data_format == "NHWC") {
+ CUDNN_CHECK(cudnnSetTensor4dDescriptorEx(*desc, CUDNNType<T>::type,
+ dims[0],
+ dims[3] / group,
+ dims[1],
+ dims[2],
+ dims[1] * dims[2] * dims[3],
+ 1,
+ dims[2] * dims[3],
+ dims[3]));
+ } else LOG(FATAL) << "Unknown data format: " << data_format;
}
template <typename T>
@@ -87,7 +115,7 @@ void cudnnSetTensor5dDesc(cudnnTensorDescriptor_t* desc,
5,
fake_dims.data(),
fake_strides.data()));
- } else LOG(FATAL) << "Unknown data format: " << data_format;
+ } else LOG(FATAL) << "Unknown data format: " << data_format;
}
template <typename T>
@@ -169,6 +197,7 @@ template void cudnnSetTensorDesc<float>(cudnnTensorDescriptor_t*, const vector<T
template void cudnnSetTensor4dDesc<float>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor5dDesc<float>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor3dDesc<float>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
+template void cudnnSetTensor4dDescWithGroup<float>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&, const TIndex);
template void cudnnSetTensorDesc<float>(cudnnTensorDescriptor_t*, const vector<TIndex>&, const vector<TIndex>&);
@@ -180,6 +209,7 @@ template void cudnnSetTensorDesc<double>(cudnnTensorDescriptor_t*, const vector<
template void cudnnSetTensor4dDesc<double>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor5dDesc<double>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor3dDesc<double>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
+template void cudnnSetTensor4dDescWithGroup<double>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&, const TIndex);
template void cudnnSetTensorDesc<double>(cudnnTensorDescriptor_t*, const vector<TIndex>&, const vector<TIndex>&);
@@ -192,9 +222,10 @@ template void cudnnSetTensorDesc<float16>(cudnnTensorDescriptor_t*, const vector
template void cudnnSetTensor4dDesc<float16>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor5dDesc<float16>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor3dDesc<float16>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
+template void cudnnSetTensor4dDescWithGroup<float16>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&, const TIndex);
template void cudnnSetTensorDesc<float16>(cudnnTensorDescriptor_t*, const vector<TIndex>&, const vector<TIndex>&);
#endif
} // namespace dragon
-#endif // WITH_CUDNN
\ No newline at end of file
+#endif // WITH_CUDNN
| 34 | Fix/Refactor the GroupConvolution on cuDNN | 3 | .cc | cc | bsd-2-clause | neopenx/Dragon |
1911 | <NME> cudnn_device.cc
<BEF> #ifdef WITH_CUDNN
#include "core/types.h"
#include "core/tensor.h"
#include "utils/cudnn_device.h"
namespace dragon {
float CUDNNType<float>::oneval = 1.0;
float CUDNNType<float>::zeroval = 0.0;
const void* CUDNNType<float>::one =
static_cast<void *>(&CUDNNType<float>::oneval);
const void* CUDNNType<float>::zero =
static_cast<void *>(&CUDNNType<float>::zeroval);
double CUDNNType<double>::oneval = 1.0;
double CUDNNType<double>::zeroval = 0.0;
const void* CUDNNType<double>::one =
static_cast<void *>(&CUDNNType<double>::oneval);
const void* CUDNNType<double>::zero =
static_cast<void *>(&CUDNNType<double>::zeroval);
#ifdef WITH_CUDA_FP16
float CUDNNType<float16>::oneval = 1.0;
float CUDNNType<float16>::zeroval = 0.0;
const void* CUDNNType<float16>::one =
static_cast<void*>(&CUDNNType<float16>::oneval);
const void* CUDNNType<float16>::zero =
static_cast<void*>(&CUDNNType<float16>::zeroval);
#endif
template <typename T>
void cudnnSetTensorDesc(cudnnTensorDescriptor_t* desc, const vector<TIndex>& dims) {
int ndim = (int)dims.size();
int* dimA = new int[ndim];
int* strideA = new int[ndim];
TIndex stride = 1;
for (int i = ndim - 1; i >= 0; i--) {
strideA[i] = stride;
dimA[i] = dims[i];
stride *= dimA[i];
}
CUDNN_CHECK(cudnnSetTensorNdDescriptor(*desc, CUDNNType<T>::type, ndim, dimA, strideA));
delete[] dimA;
delete[] strideA;
}
template <typename T>
void cudnnSetTensor4dDesc(cudnnTensorDescriptor_t* desc,
const string& data_format,
const vector<TIndex>& dims) {
if (data_format == "NCHW") {
CUDNN_CHECK(cudnnSetTensor4dDescriptor(*desc, CUDNN_TENSOR_NCHW,
CUDNNType<T>::type,
dims[0],
dims[1],
dims[2],
dims[3]));
} else if (data_format == "NHWC") {
CUDNN_CHECK(cudnnSetTensor4dDescriptor(*desc, CUDNN_TENSOR_NHWC,
CUDNNType<T>::type,
dims[0],
dims[3],
dims[1],
dims[2]));
} else LOG(FATAL) << "Unknown data format: " << data_format;
}
template <typename T>
void cudnnSetTensor5dDesc(cudnnTensorDescriptor_t* desc,
const string& data_format,
const vector<TIndex>& dims) {
if (data_format == "NCHW") {
cudnnSetTensorDesc<T>(desc, dims);
} else if (data_format == "NHWC") {
const int N = (int)dims[0];
const int C = (int)dims[4];
const int H = (int)dims[1];
const int W = (int)dims[2];
const int D = (int)dims[3];
vector<int> fake_dims = { N, C, H, W, D };
vector<int> fake_strides = { H * W * D * C, 1, W * D * C, D * C, C };
CUDNN_CHECK(cudnnSetTensorNdDescriptor(*desc,
CUDNNType<T>::type,
5,
fake_dims.data(),
fake_strides.data()));
} else LOG(FATAL) << "Unknown data format: " << data_format;
}
template <typename T>
void cudnnSetTensor3dDesc(cudnnTensorDescriptor_t* desc,
const string& data_format,
const vector<TIndex>& dims) {
vector<TIndex> fake_dims = dims;
if (data_format == "NCHW") {
// NCH -> NCHXX
fake_dims.push_back(1);
fake_dims.push_back(1);
} else if (data_format == "NHWC") {
// NHC -> NHXXC
fake_dims.insert(fake_dims.begin() + 2, 1);
fake_dims.insert(fake_dims.begin() + 2, 1);
} else LOG(FATAL) << "Unknown data format: " << data_format;
cudnnSetTensor5dDesc<T>(desc, data_format, fake_dims);
}
template <typename T>
void cudnnSetTensorDesc(cudnnTensorDescriptor_t* desc,
const vector<TIndex>& dims,
const vector<TIndex>& strides) {
CHECK_EQ(dims.size(), strides.size());
CHECK(dims.size() >= 3 && dims.size() <= 8);
int ndim = (int)dims.size();
int* dimA = new int[ndim];
int* strideA = new int[ndim];
for (int i = ndim - 1; i >= 0; i--) {
strideA[i] = strides[i];
dimA[i] = dims[i];
}
CUDNN_CHECK(cudnnSetTensorNdDescriptor(*desc, CUDNNType<T>::type, ndim, dimA, strideA));
delete[] dimA;
delete[] strideA;
}
template <typename T>
void cudnnSetTensorDesc(cudnnTensorDescriptor_t* desc, Tensor* tensor) {
// cuDNN requires ndim from 3 to 8
// we fake a reshaped dims to pass check
vector<TIndex> fake_dims(tensor->dims());
if (fake_dims.size() < 3 || fake_dims.size() > 8) {
fake_dims.assign({ 1, 1 });
fake_dims.push_back(tensor->count());
}
cudnnSetTensorDesc<T>(desc, fake_dims);
}
template <typename T>
void cudnnSetTensor4dDesc(cudnnTensorDescriptor_t* desc, const string& data_format, Tensor* tensor) {
CHECK_EQ((int)tensor->ndim(), 4)
<< "\nThe num of dimensions of Tensor(" << tensor->name() << ") "
<< "should be 4, but got " << tensor->ndim() << ".";
cudnnSetTensor4dDesc<T>(desc, data_format, tensor->dims());
}
template <typename T>
void cudnnSetTensor5dDesc(cudnnTensorDescriptor_t* desc, const string& data_format, Tensor* tensor) {
CHECK_EQ((int)tensor->ndim(), 5)
<< "\nThe num of dimensions of Tensor(" << tensor->name() << ") "
<< "should be 5, but got " << tensor->ndim() << ".";
cudnnSetTensor5dDesc<T>(desc, data_format, tensor->dims());
}
template <typename T>
void cudnnSetTensor3dDesc(cudnnTensorDescriptor_t* desc, const string& data_format, Tensor* tensor) {
CHECK_EQ((int)tensor->ndim(), 3)
<< "\nThe num of dimensions of Tensor(" << tensor->name() << ") "
<< "should be 3, but got " << tensor->ndim() << ".";
cudnnSetTensor3dDesc<T>(desc, data_format, tensor->dims());
}
template void cudnnSetTensorDesc<float>(cudnnTensorDescriptor_t*, Tensor*);
template void cudnnSetTensor4dDesc<float>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensor5dDesc<float>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensor3dDesc<float>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensorDesc<float>(cudnnTensorDescriptor_t*, const vector<TIndex>&);
template void cudnnSetTensor4dDesc<float>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor5dDesc<float>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor3dDesc<float>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensorDesc<float>(cudnnTensorDescriptor_t*, const vector<TIndex>&, const vector<TIndex>&);
template void cudnnSetTensorDesc<double>(cudnnTensorDescriptor_t*, Tensor*);
template void cudnnSetTensor4dDesc<double>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensor5dDesc<double>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensor3dDesc<double>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensorDesc<double>(cudnnTensorDescriptor_t*, const vector<TIndex>&);
template void cudnnSetTensor4dDesc<double>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor5dDesc<double>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor3dDesc<double>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensorDesc<double>(cudnnTensorDescriptor_t*, const vector<TIndex>&, const vector<TIndex>&);
#ifdef WITH_CUDA_FP16
template void cudnnSetTensorDesc<float16>(cudnnTensorDescriptor_t*, Tensor*);
template void cudnnSetTensor4dDesc<float16>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensor5dDesc<float16>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensor3dDesc<float16>(cudnnTensorDescriptor_t*, const string&, Tensor*);
template void cudnnSetTensorDesc<float16>(cudnnTensorDescriptor_t*, const vector<TIndex>&);
template void cudnnSetTensor4dDesc<float16>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor5dDesc<float16>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor3dDesc<float16>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensorDesc<float16>(cudnnTensorDescriptor_t*, const vector<TIndex>&, const vector<TIndex>&);
#endif
} // namespace dragon
#endif // WITH_CUDNN
<MSG> Fix/Refactor the GroupConvolution on cuDNN
<DFF> @@ -65,7 +65,35 @@ void cudnnSetTensor4dDesc(cudnnTensorDescriptor_t* desc,
dims[3],
dims[1],
dims[2]));
- } else LOG(FATAL) << "Unknown data format: " << data_format;
+ } else LOG(FATAL) << "Unknown data format: " << data_format;
+}
+
+template <typename T>
+void cudnnSetTensor4dDescWithGroup(cudnnTensorDescriptor_t* desc,
+ const string& data_format,
+ const vector<TIndex>& dims,
+ const TIndex group) {
+ if (data_format == "NCHW") {
+ CUDNN_CHECK(cudnnSetTensor4dDescriptorEx(*desc, CUDNNType<T>::type,
+ dims[0],
+ dims[1] / group,
+ dims[2],
+ dims[3],
+ dims[1] * dims[2] * dims[3],
+ dims[2] * dims[3],
+ dims[3],
+ 1));
+ } else if (data_format == "NHWC") {
+ CUDNN_CHECK(cudnnSetTensor4dDescriptorEx(*desc, CUDNNType<T>::type,
+ dims[0],
+ dims[3] / group,
+ dims[1],
+ dims[2],
+ dims[1] * dims[2] * dims[3],
+ 1,
+ dims[2] * dims[3],
+ dims[3]));
+ } else LOG(FATAL) << "Unknown data format: " << data_format;
}
template <typename T>
@@ -87,7 +115,7 @@ void cudnnSetTensor5dDesc(cudnnTensorDescriptor_t* desc,
5,
fake_dims.data(),
fake_strides.data()));
- } else LOG(FATAL) << "Unknown data format: " << data_format;
+ } else LOG(FATAL) << "Unknown data format: " << data_format;
}
template <typename T>
@@ -169,6 +197,7 @@ template void cudnnSetTensorDesc<float>(cudnnTensorDescriptor_t*, const vector<T
template void cudnnSetTensor4dDesc<float>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor5dDesc<float>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor3dDesc<float>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
+template void cudnnSetTensor4dDescWithGroup<float>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&, const TIndex);
template void cudnnSetTensorDesc<float>(cudnnTensorDescriptor_t*, const vector<TIndex>&, const vector<TIndex>&);
@@ -180,6 +209,7 @@ template void cudnnSetTensorDesc<double>(cudnnTensorDescriptor_t*, const vector<
template void cudnnSetTensor4dDesc<double>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor5dDesc<double>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor3dDesc<double>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
+template void cudnnSetTensor4dDescWithGroup<double>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&, const TIndex);
template void cudnnSetTensorDesc<double>(cudnnTensorDescriptor_t*, const vector<TIndex>&, const vector<TIndex>&);
@@ -192,9 +222,10 @@ template void cudnnSetTensorDesc<float16>(cudnnTensorDescriptor_t*, const vector
template void cudnnSetTensor4dDesc<float16>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor5dDesc<float16>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
template void cudnnSetTensor3dDesc<float16>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&);
+template void cudnnSetTensor4dDescWithGroup<float16>(cudnnTensorDescriptor_t*, const string&, const vector<TIndex>&, const TIndex);
template void cudnnSetTensorDesc<float16>(cudnnTensorDescriptor_t*, const vector<TIndex>&, const vector<TIndex>&);
#endif
} // namespace dragon
-#endif // WITH_CUDNN
\ No newline at end of file
+#endif // WITH_CUDNN
| 34 | Fix/Refactor the GroupConvolution on cuDNN | 3 | .cc | cc | bsd-2-clause | neopenx/Dragon |
1912 | <NME> commandline.py
<BEF> """High level command line interface to hitch."""
from subprocess import call, check_output, PIPE, CalledProcessError
from click import command, group, argument, option
from sys import stderr, exit, modules, argv
from os import path, makedirs, listdir, getpgrp, killpg
import hitchdir
import shutil
import signal
import copy
class CalledProcessError(Exception):
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
def check_output(command, stdout=PIPE, stderr=PIPE):
"""Re-implemented subprocess.check_output since it is not available < python 2.7."""
return Popen(command, stdout=stdout, stderr=stderr).communicate()[0]
def check_call(command, shell=False):
"""Re-implemented subprocess.check_call since it is not available < python 2.7."""
process = Popen(command, shell=shell)
process.communicate()
if process.returncode != 0:
raise CalledProcessError
return
def stop_everything(sig, frame):
"""Exit hitch."""
exit(1)
def installpackages():
"""Install packages with hitchsystem."""
hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
def update_requirements():
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
stdout.write(languagestrings.UPDATING_REQUIREMENTS)
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
pip_freeze = check_output([pip, "freeze"]).decode('utf8').split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
if not sorted(pip_freeze) == sorted(hitchreqs):
call([pip, "install", "-r", "hitchreqs.txt"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@group()
def cli():
pass
@command()
@option(
'-p', '--python', default=None,
help=languagestrings.SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH
)
@option(
'-v', '--virtualenv', default=None,
help=languagestrings.SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH
)
command = [binfile, ] + argv[2:]
# When receiving a signal, distribute it to the process group
def distribute_signal_to_process_group(signum, frame):
killpg(getpgrp(), signum)
signal.signal(signal.SIGINT, distribute_signal_to_process_group)
signal.signal(signal.SIGTERM, distribute_signal_to_process_group)
signal.signal(signal.SIGHUP, distribute_signal_to_process_group)
signal.signal(signal.SIGQUIT, distribute_signal_to_process_group)
return_code = call(command)
exit(return_code)
@command()
if python is None:
if call(["which", "python3"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_PYTHON3_INSTALLED)
stderr.flush()
exit(1)
python3 = check_output(["which", "python3"]).decode('utf8').replace("\n", "")
else:
if path.exists(python):
python3 = python
else:
stderr.write("{0} not found.\n".format(python))
exit(1)
python_version = check_output([python3, "-V"], stderr=STDOUT).decode('utf8')
replacements = ('Python ', ''), ('\n', '')
str_version = reduce(lambda a, kv: a.replace(*kv), replacements, python_version)
tuple_version = tuple([int(x) for x in str_version.split('.')[:2]])
if tuple_version < (3, 3):
stderr.write(languagestrings.YOU_MUST_HAVE_VERSION_ABOVE_PYTHON33)
exit(1)
if hitchdir.hitch_exists():
hitchdir.check_hitch_directory_integrity()
update_requirements()
exit(0)
makedirs(".hitch")
# Store absolute directory in .hitch directory to guard against the directory being moved
hitch_dir = path.abspath(".hitch")
with open(path.join(hitch_dir, "absdir"), "w") as absdir_handle:
absdir_handle.write(hitch_dir)
pip = path.abspath(path.join(".hitch", "virtualenv", "bin", "pip"))
try:
check_call([
virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3
])
check_call([pip, "install", "--upgrade", "pip"])
check_call([pip, "install", "--upgrade", "setuptools"])
check_call([pip, "install", "unixpackage", "hitchsystem"])
installpackages()
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "hitchtest"])
check_call([pip, "install", "hitchquickstart"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
signal.signal(signal.SIGINT, stop_everything)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
# Generic method to run any installed app in the virtualenv whose name starts with hitch*
hitchdir.check_hitch_directory_integrity()
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{0}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
hitchdir.check_hitch_directory_integrity()
pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
@command()
@argument('package', required=True)
def install(package):
"""Install hitch package."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
package_list = [
p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
if p != "" and "==" in p
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def freeze():
"""List installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
if path.exists(python_bin):
packages = [
package.replace("hitch", "") for package in listdir(
path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
)
if package.startswith("hitch") and package != "hitch"
]
# Add commands that start with "hitch" to the list of commands available (e.g. hitchtest, hitchsmtp)
for package in packages:
cmd = copy.deepcopy(runpackage)
cmd.name = package
try:
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> BUG : Instead of distributing process signal to everything in the process group (including the process running this process), just send it to the child process.
<DFF> @@ -1,8 +1,9 @@
"""High level command line interface to hitch."""
-from subprocess import call, check_output, PIPE, CalledProcessError
+from subprocess import call, check_output, PIPE, CalledProcessError, Popen
from click import command, group, argument, option
from sys import stderr, exit, modules, argv
from os import path, makedirs, listdir, getpgrp, killpg
+from functools import partial
import hitchdir
import shutil
import signal
@@ -76,15 +77,15 @@ def runpackage(arguments):
command = [binfile, ] + argv[2:]
# When receiving a signal, distribute it to the process group
- def distribute_signal_to_process_group(signum, frame):
- killpg(getpgrp(), signum)
-
- signal.signal(signal.SIGINT, distribute_signal_to_process_group)
- signal.signal(signal.SIGTERM, distribute_signal_to_process_group)
- signal.signal(signal.SIGHUP, distribute_signal_to_process_group)
- signal.signal(signal.SIGQUIT, distribute_signal_to_process_group)
-
- return_code = call(command)
+ def forward_signal_to_child(pid, signum, frame):
+ kill(pid, signum)
+
+ process = Popen(command)
+ signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
+ signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
+ signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
+ signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
+ return_code = process.wait()
exit(return_code)
@command()
| 11 | BUG : Instead of distributing process signal to everything in the process group (including the process running this process), just send it to the child process. | 10 | .py | py | agpl-3.0 | hitchtest/hitch |
1913 | <NME> commandline.py
<BEF> """High level command line interface to hitch."""
from subprocess import call, PIPE, STDOUT, Popen
from hitch.click import command, group, argument, option
from os import path, makedirs, listdir, kill, remove
from sys import stderr, stdout, exit, modules, argv
from functools import partial, reduce
from hitch import hitchdir, languagestrings
import shutil
import signal
import copy
class CalledProcessError(Exception):
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
@command()
def init():
"""Initialize hitch in this directory."""
if call(["which", "virtualenv"], stdout=PIPE):
stderr.write("You must have python-virtualenv installed to use hitch.\n")
stderr.flush()
exit(1)
if call(["which", "python3"], stdout=PIPE):
stderr.write("To use Hitch, you must have python 3 installed and available on the system path with the name 'python3'.\n")
stderr.flush()
exit(1)
return
def stop_everything(sig, frame):
"""Exit hitch."""
exit(1)
def installpackages():
"""Install packages with hitchsystem."""
hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
def update_requirements():
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
stdout.write(languagestrings.UPDATING_REQUIREMENTS)
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
pip_freeze = check_output([pip, "freeze"]).decode('utf8').split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
if not sorted(pip_freeze) == sorted(hitchreqs):
call([pip, "install", "-r", "hitchreqs.txt"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@group()
def cli():
pass
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def init(python, virtualenv):
"""Initialize hitch in this directory."""
if virtualenv is None:
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving a signal, distribute it to the process group
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
if python is None:
python = path.join(path.dirname(virtualenv), "python")
else:
stderr.write("{0} not found.\n".format(virtualenv))
if python is None:
if call(["which", "python3"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_PYTHON3_INSTALLED)
stderr.flush()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"])
exit(1)
python_version = check_output([python3, "-V"], stderr=STDOUT).decode('utf8')
replacements = ('Python ', ''), ('\n', '')
str_version = reduce(lambda a, kv: a.replace(*kv), replacements, python_version)
@argument('package', required=True)
def install(package):
"""Install hitch package."""
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"])
hitchdir.check_hitch_directory_integrity()
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@command()
def freeze():
"""List install hitch packages."""
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
signal.signal(signal.SIGINT, stop_everything)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
# Generic method to run any installed app in the virtualenv whose name starts with hitch*
hitchdir.check_hitch_directory_integrity()
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{0}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(clean)
cli.add_command(freeze)
cli.add_command(init)
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
@command()
@argument('package', required=True)
def install(package):
"""Install hitch package."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
package_list = [
p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
if p != "" and "==" in p
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def freeze():
"""List installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
if path.exists(python_bin):
packages = [
package.replace("hitch", "") for package in listdir(
path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
)
if package.startswith("hitch") and package != "hitch"
]
# Add commands that start with "hitch" to the list of commands available (e.g. hitchtest, hitchsmtp)
for package in packages:
cmd = copy.deepcopy(runpackage)
cmd.name = package
try:
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> FEATURE : Added upgrade command to upgrade all requirements in hitchreqs.txt.
<DFF> @@ -17,12 +17,12 @@ def cli():
@command()
def init():
"""Initialize hitch in this directory."""
- if call(["which", "virtualenv"], stdout=PIPE):
+ if call(["which", "virtualenv"], stdout=PIPE, stderr=PIPE):
stderr.write("You must have python-virtualenv installed to use hitch.\n")
stderr.flush()
exit(1)
- if call(["which", "python3"], stdout=PIPE):
+ if call(["which", "python3"], stdout=PIPE, stderr=PIPE):
stderr.write("To use Hitch, you must have python 3 installed and available on the system path with the name 'python3'.\n")
stderr.flush()
exit(1)
@@ -68,6 +68,9 @@ def update_requirements():
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
+def get_pip():
+ """Get the file path to the hitch pip."""
+ return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
@@ -77,7 +80,7 @@ def runpackage(arguments):
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{}".format(argv[1]))
command = [binfile, ] + argv[2:]
- # When receiving a signal, distribute it to the process group
+ # When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
@@ -93,7 +96,7 @@ def runpackage(arguments):
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
- pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
+ pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"])
@@ -105,7 +108,7 @@ def uninstall(package):
@argument('package', required=True)
def install(package):
"""Install hitch package."""
- pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
+ pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"])
@@ -113,9 +116,27 @@ def install(package):
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
+@command()
+def upgrade():
+ """Upgrade all installed hitch packages."""
+ pip = get_pip()
+ package_list = [
+ p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
+ if p != "" and "==" in p
+ ]
+ version_fixed_package_list = [p.split("==")[0] for p in package_list]
+
+ for package in version_fixed_package_list:
+ call([pip, "install", package, "-U", ])
+
+ pip_freeze = check_output([pip, "freeze"])
+
+ with open("hitchreqs.txt", "w") as hitchreqs_handle:
+ hitchreqs_handle.write(pip_freeze)
+
@command()
def freeze():
- """List install hitch packages."""
+ """List installed hitch packages."""
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
@@ -166,6 +187,7 @@ def run():
cli.add_command(install)
cli.add_command(uninstall)
+ cli.add_command(upgrade)
cli.add_command(clean)
cli.add_command(freeze)
cli.add_command(init)
| 28 | FEATURE : Added upgrade command to upgrade all requirements in hitchreqs.txt. | 6 | .py | py | agpl-3.0 | hitchtest/hitch |
1914 | <NME> index.rst
<BEF> Quickstart
==========
Contents:
.. toctree::
:glob:
:maxdepth: 2
*
chose postgres, the latest version of postgres will have been installed in the ~/.hitchpkg
directory and it will be running and accessible.
To exit, simply hit ctrl-D.
This will shut everything down and then quit.
You're now ready to start writing new tests.
Happy testing!
.. note::
Was there anything that went wrong or was confusing? Please tell us! Help with :doc:`/misc/clarifying_documentation`.
Further reading
---------------
* :doc:`/howto/web_applications`
* :doc:`/howto/command_line_applications`
Advanced topics
---------------
* :doc:`/howto/test_driven_development`
* :doc:`/howto/parameterize_test_cases`
* :doc:`/howto/external_apis`
* :doc:`/howto/continuous_integration`
Plugin Documentation
--------------------
.. toctree::
:glob:
:maxdepth: 1
/plugins/*
.. note::
Need tutorials for any other topics? `Please raise a ticket <https://github.com/hitchtest/hitch/issues/new>`_.
<MSG> DOCS : Improvements to the documentation.
<DFF> @@ -1,10 +1,122 @@
-Quickstart
-==========
+1: Creating a skeleton test
+===========================
-Contents:
+This is a basic introduction to getting your first hitch test up and running.
-.. toctree::
- :glob:
- :maxdepth: 2
+Create your test directory
+--------------------------
- *
+Create a directory inside the root of your project to put your tests in. For example::
+
+ ~/yourproject$ mkdir tests
+ ~/yourproject$ cd tests
+ ~/yourproject/tests$
+
+If you already have a tests directory you can call it something else.
+
+
+Create the hitch environment
+----------------------------
+
+If you have hitch installed already, run the following command::
+
+ ~/yourproject/tests$ hitch init
+
+If you don't, run the init script by copying and pasting the following line::
+
+ ~/yourproject/tests$ curl -sSL https://hitchtest.com/init.sh > init.sh ; chmod +x init.sh ; ./init.sh
+
+.. note::
+
+ :doc:`/faq/what_does_the_init_script_do` instead.
+
+Once the installation has completed, it will ask you a few basic questions about your project,
+mostly requiring a yes or no answer and will then generate a skeleton project template for you.
+
+Apart from installing all of the required packages and creating a .hitch directory,
+the following files are created in your tests directory:
+
+* :doc:`glossary/hitchreqs.txt`
+* :doc:`glossary/engine.py`
+* tdd.settings (:doc:`glossary/hitch_settings`)
+* ci.settings
+* all.settings
+* :doc:`stub.test`
+* README.rst
+
+You might want to take a look around these files. They all try to be self-explanatory.
+
+
+Running your first test
+-----------------------
+
+You can now run the stub test. Try running it in test driven development mode::
+
+ $ hitch test stub.test --settings tdd.settings
+
+The first time you run this command it may take a while (up to 25 minutes depending upon what you configured).
+
+Time for coffee?
+
+While you're at it, check out the hitch subreddit and subscribe to the twitter feed!
+
+.. note::
+
+ :doc:`/faq/why_does_the_first_test_run_take_so_long`
+
+
+Back?
+-----
+
+Once the test run is done setting up and running things, if there were no problems, you should see this::
+
+ Python 3.4.3 (default, Jul 28 2015, 18:20:59)
+ Type "copyright", "credits" or "license" for more information.
+
+ IPython 4.0.0 -- An enhanced Interactive Python.
+ ? -> Introduction and overview of IPython's features.
+ %quickref -> Quick reference.
+ help -> Python's own help system.
+ object? -> Details about 'object', use 'object??' for extra details.
+
+
+ SUCCESS
+
+ In [1]:
+
+This is the interactive prompt that appears during the pause step. This is an :doc:`glossary/ipython`
+prompt that can be used to interact with your app, inspect logs and try out test
+steps.
+
+The components you selected during the set up should also be running. For example, if you
+chose postgres, postgres will be running.
+
+To exit, simply hit ctrl-D.
+
+This will shut everything down and then quit.
+
+You're now ready to start writing new tests.
+
+Happy testing!
+
+.. note::
+
+ Was there anything that confused you? Please tell us! Help with :doc:`misc/clarifying_documentation`.
+
+
+Further reading
+---------------
+
+* :doc:`howto/web_applications`
+* :doc:`howto/command_line_applications`
+
+Advanced topics
+---------------
+
+* :doc:`howto/test_driven_development`
+* :doc:`howto/parameterize_test_cases`
+* :doc:`howto/continuous_integration`
+
+.. note::
+
+ Need tutorials for any other topics? `Please raise a ticket <https://github.com/hitchtest/hitch/issues/new>`_.
| 119 | DOCS : Improvements to the documentation. | 7 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1915 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
cacheManager = org.crazycake.shiro.RedisCacheManager
cacheManager.redisManager = $redisManager
#custom your redis key prefix, if you doesn't define this parameter shiro-redis will use 'shiro_redis_session:' as default prefix
shiroCacheManager.keyPrefix = users:security:authz:
securityManager.cacheManager = $cacheManager
```
<MSG> Merge pull request #5 from alex-sherwin/patch-1
fixing typo in example shiro.ini
<DFF> @@ -51,7 +51,7 @@ securityManager.sessionManager = $sessionManager
cacheManager = org.crazycake.shiro.RedisCacheManager
cacheManager.redisManager = $redisManager
#custom your redis key prefix, if you doesn't define this parameter shiro-redis will use 'shiro_redis_session:' as default prefix
-shiroCacheManager.keyPrefix = users:security:authz:
+cacheManager.keyPrefix = users:security:authz:
securityManager.cacheManager = $cacheManager
```
| 1 | Merge pull request #5 from alex-sherwin/patch-1 | 1 | .md | md | mit | alexxiyang/shiro-redis |
1916 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
exit(1)
if version_info[0] == 2:
if version_info[1] < 6:
stderr.write("The hitch bootstrapper will not run on versions of python below v2.6.\n")
exit(1)
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.0",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> RELEASE : Bumped version.
<DFF> @@ -22,7 +22,7 @@ def read(*parts):
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
- version="0.5.0",
+ version="0.5.1",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
| 1 | RELEASE : Bumped version. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1917 | <NME> RedisClusterManager.java
<BEF> ADDFILE
<MSG> add Redis cluster manager
<DFF> @@ -0,0 +1,211 @@
+package org.crazycake.shiro;
+
+import redis.clients.jedis.*;
+
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+public class RedisClusterManager implements IRedisManager {
+
+ protected String ip = "127.0.0.1";
+
+ protected String host = ip + ":" + Protocol.DEFAULT_PORT ;
+
+ protected static final int DEFAULT_EXPIRE = 3600;
+
+ // expire time in seconds
+ protected int expire = DEFAULT_EXPIRE;
+
+ // timeout for jedis try to connect to redis server, not expire time! In milliseconds
+ protected int timeout = Protocol.DEFAULT_TIMEOUT;
+
+ // timeout for jedis try to read data from redis server
+ protected int soTimeout = Protocol.DEFAULT_TIMEOUT;
+
+ // requirepass
+ protected String password;
+
+ // default select database
+ protected int database = Protocol.DEFAULT_DATABASE;
+
+ //scan numbers each time
+ protected int count = 100;
+
+
+ // max attempts to connect to server
+ private int maxAttempts = 3;
+
+ private volatile JedisCluster jedisCluster = null;
+
+ private void init() {
+ synchronized (this) {
+ if (jedisCluster == null) {
+ jedisCluster = new JedisCluster(getHostAndPortSet(), timeout, soTimeout, maxAttempts, password, new JedisPoolConfig());
+ }
+ }
+ }
+
+ private Set<HostAndPort> getHostAndPortSet() {
+ String[] hostAndPortArr = host.split(",");
+ Set<HostAndPort> hostAndPorts = new HashSet<HostAndPort>();
+ for (String hostAndPortStr : hostAndPortArr) {
+ String[] hostAndPort = hostAndPortStr.split(":");
+ hostAndPorts.add(new HostAndPort(hostAndPort[0], Integer.parseInt(hostAndPort[1])));
+ }
+ return hostAndPorts;
+ }
+
+
+ protected JedisCluster getJedisCluster() {
+ if (jedisCluster == null) {
+ init();
+ }
+ return jedisCluster;
+ }
+
+ public byte[] get(byte[] key) {
+ if (key == null) {
+ return null;
+ }
+ return getJedisCluster().get(key);
+ }
+
+ public byte[] set(byte[] key, byte[] value) {
+ if (key == null) {
+ return null;
+ }
+ getJedisCluster().set(key, value);
+ if (this.expire != 0) {
+ getJedisCluster().expire(key, this.expire);
+ }
+ return value;
+ }
+
+ public byte[] set(byte[] key, byte[] value, int expire) {
+ if (key == null) {
+ return null;
+ }
+ getJedisCluster().set(key, value);
+ if (this.expire != 0) {
+ getJedisCluster().expire(key, expire);
+ }
+ return value;
+ }
+
+ public void del(byte[] key) {
+ if (key == null) {
+ return;
+ }
+ getJedisCluster().del(key);
+ }
+
+ public Long dbSize() {
+ Long dbSize = 0L;
+ Map<String, JedisPool> clusterNodes = getJedisCluster().getClusterNodes();
+ for (String k : clusterNodes.keySet()) {
+ JedisPool jp = clusterNodes.get(k);
+ Jedis connection = jp.getResource();
+ try {
+ dbSize += connection.dbSize();
+ } catch (Exception e) {
+ e.printStackTrace();
+ } finally {
+ connection.close();
+ }
+ }
+ return dbSize;
+ }
+
+ public Set<byte[]> keys(byte[] pattern) {
+ Set<byte[]> keys = new HashSet<byte[]>();
+ ScanParams params = new ScanParams();
+ params.count(count);
+ params.match(pattern);
+ byte[] cursor = ScanParams.SCAN_POINTER_START_BINARY;
+ ScanResult<byte[]> scanResult;
+ do {
+ scanResult = getJedisCluster().scan(cursor, params);
+ keys.addAll(scanResult.getResult());
+ cursor = scanResult.getCursorAsBytes();
+ } while (scanResult.getStringCursor().compareTo(ScanParams.SCAN_POINTER_START) > 0);
+
+ return keys;
+ }
+
+ public int getMaxAttempts() {
+ return maxAttempts;
+ }
+
+ public void setMaxAttempts(int maxAttempts) {
+ this.maxAttempts = maxAttempts;
+ }
+
+ public String getIp() {
+ return ip;
+ }
+
+ public void setIp(String ip) {
+ this.ip = ip;
+ }
+
+ public String getHost() {
+ return host;
+ }
+
+ public void setHost(String host) {
+ this.host = host;
+ }
+
+ public int getExpire() {
+ return expire;
+ }
+
+ public void setExpire(int expire) {
+ this.expire = expire;
+ }
+
+ public int getTimeout() {
+ return timeout;
+ }
+
+ public void setTimeout(int timeout) {
+ this.timeout = timeout;
+ }
+
+ public int getSoTimeout() {
+ return soTimeout;
+ }
+
+ public void setSoTimeout(int soTimeout) {
+ this.soTimeout = soTimeout;
+ }
+
+ public String getPassword() {
+ return password;
+ }
+
+ public void setPassword(String password) {
+ this.password = password;
+ }
+
+ public int getDatabase() {
+ return database;
+ }
+
+ public void setDatabase(int database) {
+ this.database = database;
+ }
+
+ public int getCount() {
+ return count;
+ }
+
+ public void setCount(int count) {
+ this.count = count;
+ }
+
+ public void setJedisCluster(JedisCluster jedisCluster) {
+ this.jedisCluster = jedisCluster;
+ }
+}
| 211 | add Redis cluster manager | 0 | .java | java | mit | alexxiyang/shiro-redis |
1918 | <NME> hitchpostgres.rst
<BEF> HitchPostgres
=============
.. note::
This documentation applies to the latest version of hitchpostgres.
HitchPostgres is a :doc:`/glossary/hitch_plugin` created to make testing applications that use Postgresql easier.
It contains:
* A :doc:`/glossary/hitch_package` to download and install postgresql.
* A :doc:`/glossary/service` to set up a test-specific postgresql environment and run postgresql.
Note: the postgresql service destroys and sets up a new database during each test run in order
to provide :doc:`/glossary/isolation` for your tests.
Installation
------------
First, install the the plugin in your tests directory::
$ hitch install hitchpostgres
Set up postgres
---------------
In your test, define the postgres installation you will use, e.g. a system postgres:
.. code-block:: python
import hitchpostgres
postgres_package = hitchpostgres.PostgresPackage(
version="9.3.9" # Optional (default is the latest version of postgres)
)
# Downloads & installs Postgres to ~/.hitchpkg if not already installed by previous test
postgres_package.build()
To use, define the service after initializing the :doc:`/glossary/service_bundle` but before starting it:
.. note::
See also: :doc:`/api/generic_service_api`
.. code-block:: python
# Define a postgresql user for your service to set up
postgres_user = hitchpostgres.PostgresUser("newpguser", "pguserpassword")
# Define a postgresql database for your service to set up
postgres_database = hitchpostgres.PostgresDatabase(
name="databasename", # Mandatory
owner=newpguser, # Mandatory
dump="dumps/yourdump.sql" # Optional (default: create empty database)
)
self.services['Postgres'] = hitchpostgres.PostgresService(
postgres_package=postgres_package, # Mandatory
port=15432, # Optional (default: 15432)
users=[postgres_user, ], # Optional (default: no users)
databases=[postgres_database, ] # Optional (default: no databases)
encoding='UTF-8', # Optional (default: UTF-8)
locale='en_US' # Optional (default: en_US)
pgdata=None, # Optional location for pgdata dir (default: put in .hitch)
)
Interacting with Postgres
-------------------------
Once it is running, you can interact with the service and its databases::
In [1]: self.services['Postgres'].databases[0].psql("-c", "SELECT * FROM yourtable;").run()
[ Prints output ]
In [2]: self.services['Postgres'].databases[0].psql().run()
[ Launches into postgres shell ]
<MSG> DOCS : Substantion documentation update
<DFF> @@ -9,7 +9,7 @@ HitchPostgres is a :doc:`/glossary/hitch_plugin` created to make testing applica
It contains:
-* A :doc:`/glossary/hitch_package` to download and install postgresql.
+* A :doc:`/glossary/hitch_package` to download and install specified version(s) of postgresql.
* A :doc:`/glossary/service` to set up a test-specific postgresql environment and run postgresql.
Note: the postgresql service destroys and sets up a new database during each test run in order
@@ -26,7 +26,7 @@ First, install the the plugin in your tests directory::
Set up postgres
---------------
-In your test, define the postgres installation you will use, e.g. a system postgres:
+In your test, define the version of postgres that you want to test with:
.. code-block:: python
@@ -54,7 +54,7 @@ To use, define the service after initializing the :doc:`/glossary/service_bundle
# Define a postgresql database for your service to set up
postgres_database = hitchpostgres.PostgresDatabase(
name="databasename", # Mandatory
- owner=newpguser, # Mandatory
+ owner=postgres_user, # Mandatory
dump="dumps/yourdump.sql" # Optional (default: create empty database)
)
| 3 | DOCS : Substantion documentation update | 3 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1919 | <NME> vicious_cycle.rst
<BEF> ADDFILE
<MSG> DOCS : Added some terms to the glossary.
<DFF> @@ -0,0 +1,12 @@
+Vicious Cycle
+=============
+
+The terms vicious cycle refer to complex chains of events which reinforce themselves
+through a feedback loop with ultimately detrimental results.
+
+There are two common vicious cycles related to testing and software development. They are:
+
+* :doc:`test_failure_habituation`
+* :doc:`technical_debt`
+
+See also: `Virtuous circle and vicious circle Wikipedia Page <https://en.wikipedia.org/wiki/Virtuous_circle_and_vicious_circle>`_
| 12 | DOCS : Added some terms to the glossary. | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1920 | <NME> shiro-standalone.ini
<BEF> ADDFILE
<MSG> Merge branch 'master' of https://github.com/gjhuai/shiro-redis
<DFF> @@ -0,0 +1,4 @@
+redisManager.host = 127.0.0.1:6379
+redisSessionDAO.expire = 3000
+cacheManager.expire = 3000
+cacheManager.principalIdFieldName = userId
\ No newline at end of file
| 4 | Merge branch 'master' of https://github.com/gjhuai/shiro-redis | 0 | .ini | ini | mit | alexxiyang/shiro-redis |
1921 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.4",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
if version_info[0] == 3:
if version_info[1] < 3:
stderr.write("The hitch bootstrapper will not run on python 3.0.x, 3.1.x or 3.2.x.\n")
exit(1)
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> RELEASE : Bumped release.
<DFF> @@ -13,7 +13,7 @@ def read(*parts):
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
- version="0.4",
+ version="0.4.1",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
| 1 | RELEASE : Bumped release. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1922 | <NME> web_applications.rst
<BEF> How to test web applications
============================
Tutorial coming soon.
- Wait to appear: first friend-link
- Wait to appear: 2nd friend-link
- Wait to appear: Last friend-link
- Wait to appear: First calendar day-31
This is the recommended approach for items which signify certain things that you want to happen.
If, for example, you are testing for the presence of an error message indicating that a user must enter a ZIP code,
the following is a good way of doing it::
- Wait to appear: error-message-zip-code
Waiting for text to appear
--------------------------
Note that waiting for specific text to appear is *not* a good approach for detecting error messages,
or, indeed, any other kind of text which is decided upon by the application. Why? Translations.
If an application is translated and you test the same scenario by checking for IDs, the test will
continue to work. If you just check for the presence of text, it will break.
Nonetheless, waiting for text to appear is often a good way to determine if text entered by the user
in a test shows up in the right place.
Waiting for text to appear also follows the same pattern as above::
- Wait to contain:
item: first username
text: django
- Wait to appear:
item: second username
text: django
- Wait to appear:
item: last username
text: django
- Wait to appear:
item: first user username
text: django
<MSG> DOCS : Fleshed out some how-to's
<DFF> @@ -1,4 +1,70 @@
How to test web applications
============================
-Tutorial coming soon.
+.. note::
+
+ This tutorial assumes that you have the :doc:`glossary/hitch_plugin` :doc:`plugins/hitchselenium`
+ installed and its step library is set up.
+
+ If you followed the quickstart tutorial, this should already be done for you.
+
+.. warning::
+
+ This tutorial is a work in progress. It is not currently complete.
+
+
+Writing a step that clicks on a button or link
+----------------------------------------------
+
+To click on an individual item, you need to use the step "click" like so::
+
+ - Click: register
+
+This is telling hitch to click on an HTML element with the HTML ID "register".
+
+.. note::
+
+ This part is sometimes controversial. If you disagree, read :doc:`faq/why_just_html_ids_and_classes` for the rationale.
+
+Now, there's a good chance that:
+
+* Your HTML element does not have that ID - in which case you should *change the HTML itself* so that it does have that ID.
+* That button has a different HTML ID - in which case you should use that ID instead (bookmark :doc:`/howto/refactoring_your_tests` for later).
+
+
+
+Writing a step that clicks on an item that is part of a group
+-------------------------------------------------------------
+
+Sometimes the thing that you want to click on is part of a group, or a group of groups.
+
+For instance, you may want to click on the first link in a list of links. To do that you use the same step::
+
+ - Click: first friend-link
+
+Here, "friend-link" is an *HTML class*.
+
+As before, if the list of elements do not have a readable HTML class signifying what they are, you should *add* a class in the HTML itself.
+
+Elements can have multiple classes, so if an element already has a class but it does not clearly identify all of the items
+in the list, you should add a class that does.
+
+If you want to click on the 2nd item::
+
+ - Click: 2nd friend-link
+
+Or the last::
+
+ - Click: Last friend link
+
+Or to click on an item that is part of a group which is *also* itself part of a group, you can specify two classes::
+
+ - Click: First calendar day-31
+
+Try to keep the test steps readable by using appropriately named classes where possible.
+
+
+Verifying an element exists on the page - e.g. an error message
+---------------------------------------------------------------
+
+[ TO DO ]
| 67 | DOCS : Fleshed out some how-to's | 1 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1923 | <NME> commandline.py
<BEF> """High level command line interface to hitch."""
from subprocess import call, PIPE, STDOUT, Popen
from hitch.click import command, group, argument, option
from os import path, makedirs, listdir, kill, remove
from sys import stderr, stdout, exit, modules, argv
from functools import partial, reduce
from hitch import hitchdir, languagestrings
import shutil
import signal
import copy
class CalledProcessError(Exception):
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
def check_output(command, stdout=PIPE, stderr=PIPE):
"""Re-implemented subprocess.check_output since it is not available < python 2.7."""
return Popen(command, stdout=stdout, stderr=stderr).communicate()[0]
def check_call(command, shell=False):
"""Re-implemented subprocess.check_call since it is not available < python 2.7."""
process = Popen(command, shell=shell)
process.communicate()
if process.returncode != 0:
raise CalledProcessError
return
def stop_everything(sig, frame):
"""Exit hitch."""
exit(1)
def installpackages():
"""Install packages with hitchsystem."""
hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
def update_requirements():
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
stdout.write(languagestrings.UPDATING_REQUIREMENTS)
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
pip_freeze = check_output([pip, "freeze"]).decode('utf8').split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
if not sorted(pip_freeze) == sorted(hitchreqs):
call([pip, "install", "-r", "hitchreqs.txt"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@group()
def cli():
pass
@command()
@option(
'-p', '--python', default=None,
help=languagestrings.SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH
)
@option(
'-v', '--virtualenv', default=None,
help=languagestrings.SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH
)
def init(python, virtualenv):
"""Initialize hitch in this directory."""
if virtualenv is None:
if call(["which", "virtualenv"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_VIRTUALENV_INSTALLED)
stderr.flush()
exit(1)
virtualenv = check_output(["which", "virtualenv"]).decode('utf8').replace("\n", "")
else:
if path.exists(virtualenv):
if python is None:
python = path.join(path.dirname(virtualenv), "python")
else:
stderr.write("{0} not found.\n".format(virtualenv))
if python is None:
if call(["which", "python3"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_PYTHON3_INSTALLED)
stderr.flush()
exit(1)
python3 = check_output(["which", "python3"]).decode('utf8').replace("\n", "")
else:
if path.exists(python):
python3 = python
else:
stderr.write("{0} not found.\n".format(python))
exit(1)
python_version = check_output([python3, "-V"], stderr=STDOUT).decode('utf8')
replacements = ('Python ', ''), ('\n', '')
str_version = reduce(lambda a, kv: a.replace(*kv), replacements, python_version)
tuple_version = tuple([int(x) for x in str_version.split('.')[:2]])
if tuple_version < (3, 3):
stderr.write(languagestrings.YOU_MUST_HAVE_VERSION_ABOVE_PYTHON33)
exit(1)
if hitchdir.hitch_exists():
hitchdir.check_hitch_directory_integrity()
update_requirements()
exit(0)
makedirs(".hitch")
# Store absolute directory in .hitch directory to guard against the directory being moved
hitch_dir = path.abspath(".hitch")
with open(path.join(hitch_dir, "absdir"), "w") as absdir_handle:
absdir_handle.write(hitch_dir)
pip = path.abspath(path.join(".hitch", "virtualenv", "bin", "pip"))
try:
check_call([
virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3
])
check_call([pip, "install", "--upgrade", "pip"])
check_call([pip, "install", "--upgrade", "setuptools"])
check_call([pip, "install", "unixpackage", "hitchsystem"])
installpackages()
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "hitchtest"])
check_call([pip, "install", "hitchquickstart"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
signal.signal(signal.SIGINT, stop_everything)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
# Generic method to run any installed app in the virtualenv whose name starts with hitch*
hitchdir.check_hitch_directory_integrity()
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{0}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
hitchdir.check_hitch_directory_integrity()
pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
@command()
@argument('package', required=True)
def install(package):
"""Install hitch package."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
package_list = [
p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
if p != "" and "==" in p
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in os.listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(file_or_dir):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
call([pip, "freeze", ])
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> FEATURE : Hooked up cleanpkg command.
<DFF> @@ -242,9 +242,9 @@ def cleanpkg(packages):
if packages is None:
shutil.rmtree(hitchpkg)
else:
- for file_or_dir in os.listdir(hitchpkg):
+ for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
- if path.isdir(file_or_dir):
+ if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
@@ -297,11 +297,13 @@ def run():
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
+ cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
+ cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
| 4 | FEATURE : Hooked up cleanpkg command. | 2 | .py | py | agpl-3.0 | hitchtest/hitch |
1924 | <NME> commandline.py
<BEF> """High level command line interface to hitch."""
from subprocess import call, PIPE, STDOUT, Popen
from hitch.click import command, group, argument, option
from os import path, makedirs, listdir, kill, remove
from sys import stderr, stdout, exit, modules, argv
from functools import partial, reduce
from hitch import hitchdir, languagestrings
import shutil
import signal
import copy
class CalledProcessError(Exception):
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
def check_output(command, stdout=PIPE, stderr=PIPE):
"""Re-implemented subprocess.check_output since it is not available < python 2.7."""
return Popen(command, stdout=stdout, stderr=stderr).communicate()[0]
def check_call(command, shell=False):
"""Re-implemented subprocess.check_call since it is not available < python 2.7."""
process = Popen(command, shell=shell)
process.communicate()
if process.returncode != 0:
raise CalledProcessError
return
def stop_everything(sig, frame):
"""Exit hitch."""
exit(1)
def installpackages():
"""Install packages with hitchsystem."""
hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
def update_requirements():
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
stdout.write(languagestrings.UPDATING_REQUIREMENTS)
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
pip_freeze = check_output([pip, "freeze"]).decode('utf8').split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
if not sorted(pip_freeze) == sorted(hitchreqs):
call([pip, "install", "-r", "hitchreqs.txt"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@group()
def cli():
pass
@command()
@option(
'-p', '--python', default=None,
help=languagestrings.SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH
)
@option(
'-v', '--virtualenv', default=None,
help=languagestrings.SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH
)
def init(python, virtualenv):
"""Initialize hitch in this directory."""
if virtualenv is None:
if call(["which", "virtualenv"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_VIRTUALENV_INSTALLED)
stderr.flush()
exit(1)
virtualenv = check_output(["which", "virtualenv"]).decode('utf8').replace("\n", "")
else:
if path.exists(virtualenv):
if python is None:
python = path.join(path.dirname(virtualenv), "python")
else:
stderr.write("{0} not found.\n".format(virtualenv))
if python is None:
if call(["which", "python3"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_PYTHON3_INSTALLED)
stderr.flush()
exit(1)
python3 = check_output(["which", "python3"]).decode('utf8').replace("\n", "")
else:
if path.exists(python):
python3 = python
else:
stderr.write("{0} not found.\n".format(python))
exit(1)
python_version = check_output([python3, "-V"], stderr=STDOUT).decode('utf8')
replacements = ('Python ', ''), ('\n', '')
str_version = reduce(lambda a, kv: a.replace(*kv), replacements, python_version)
tuple_version = tuple([int(x) for x in str_version.split('.')[:2]])
if tuple_version < (3, 3):
stderr.write(languagestrings.YOU_MUST_HAVE_VERSION_ABOVE_PYTHON33)
exit(1)
if hitchdir.hitch_exists():
update_requirements()
exit(0)
makedirs(".hitch")
# Store absolute directory in .hitch directory to guard against the directory being moved
hitch_dir = path.abspath(".hitch")
with open(path.join(hitch_dir, "absdir"), "w") as absdir_handle:
absdir_handle.write(hitch_dir)
pip = path.abspath(path.join(".hitch", "virtualenv", "bin", "pip"))
try:
check_call([
virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3
])
check_call([pip, "install", "--upgrade", "pip"])
check_call([pip, "install", "--upgrade", "setuptools"])
check_call([pip, "install", "unixpackage", "hitchsystem"])
installpackages()
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "hitchtest"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
# Generic method to run any installed app in the virtualenv whose name starts with hitch*
hitchdir.check_hitch_directory_integrity()
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{0}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
hitchdir.check_hitch_directory_integrity()
pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
@command()
@argument('package', required=True)
def install(package):
"""Install hitch package."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
package_list = [
p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
if p != "" and "==" in p
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def freeze():
"""List installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
if path.exists(python_bin):
packages = [
package.replace("hitch", "") for package in listdir(
path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
)
if package.startswith("hitch") and package != "hitch"
]
# Add commands that start with "hitch" to the list of commands available (e.g. hitchtest, hitchsmtp)
for package in packages:
cmd = copy.deepcopy(runpackage)
cmd.name = package
try:
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> FEATURE : Run hitchquickstart if no existing project detected in directory.
<DFF> @@ -112,6 +112,7 @@ def init(python, virtualenv):
exit(1)
if hitchdir.hitch_exists():
+ hitchdir.check_hitch_directory_integrity()
update_requirements()
exit(0)
@@ -137,12 +138,17 @@ def init(python, virtualenv):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "hitchtest"])
+ check_call([pip, "install", "hitchquickstart"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
+ signal.signal(signal.SIGINT, signal.SIG_IGN)
+ check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
+ signal.signal(signal.SIGINT, stop_everything)
+
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
| 6 | FEATURE : Run hitchquickstart if no existing project detected in directory. | 0 | .py | py | agpl-3.0 | hitchtest/hitch |
1925 | <NME> vagrant.rst
<BEF> ADDFILE
<MSG> DOCS : Added hitchvagrant plugin. Cleaned up part 3 of quickstart.
<DFF> @@ -0,0 +1,39 @@
+Vagrant
+=======
+
+.. note::
+
+ This documentation applies to the latest version of hitchvagrant: version 0.1
+
+Prerequisites
+-------------
+
+First, if it is not already installed, install the hitch vagrant package::
+
+ $ hitch install hitchvagrant
+
+Also ensure that vagrant is installed::
+
+ $ sudo apt-get install vagrant
+
+Usage
+-----
+
+To use, define the service after initializing the ServiceBundle object but before starting it.
+
+Like so:
+
+.. code-block:: python
+
+ # Imports
+ import hitchvagrant
+
+ # Service definition in engine's setUp:
+ self.services['MyVM'] = hitchvagrant.VagrantService(
+ directory="vagrantubuntu/", # Directory containing Vagrantfile (optional)
+ )
+
+Once it is running, you can run ssh commands against the machine::
+
+ In [1]: self.services['MyVM'].ssh("pwd").run()
+ /vagrant
| 39 | DOCS : Added hitchvagrant plugin. Cleaned up part 3 of quickstart. | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1926 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework
Code will be available soon.
<MSG> try it
<DFF> @@ -1,4 +1,179 @@
# Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework
-Code will be available soon.
+### Compile Requirements for C++
+
+0. Google Protocol Buffer
+1. Python (2.7, 64bit) | Anaconda (2.7, 64bit)
+2. CUDA [Optional]
+3. CUDNN [Optional]
+4. OpenMPI [Optional]
+
+-----
+### Runtime Requirements for Python
+
+0. Package: protobuf
+1. Package: lmdb
+
+-----
+### Installation
+1. Clone this repository
+
+2. (Optional) Download and install [CUDA](https://developer.nvidia.com/cuda-toolkit)
+
+ (Optional) Download and install [CUDNN](https://developer.nvidia.com/cudnn)
+
+3. (Optional) Download 3rdparty.zip and unzip to Dragon/3rdparty (Out of source code dir)
+
+ [*Win64*](https://pan.baidu.com/s/1pLmGOLt) (OpenBLAS / Protobuf for VS2013 / CUDNN v6 / Microsoft MPI)
+
+ [*Linux64*](https://pan.baidu.com/s/1qXPEOWG) (OpenMPI)
+
+4. Configure Dragon/CMakeLists.txt
+ - Select optional libraries [CUDA / CUDNN / BLAS / SSE / MPI / MPI_CUDA_AWARE / CUDA_FP16]
+ - Set 3rdparty path (recommend to keep defualt)
+ - Set python path
+ - Set cuda compiling architectures if necessary
+
+5. Environment Variables
+ ### Linux(Only for OpenMPI):
+ - Create dragon.conf
+
+ ```Shell
+ sudo vim /etc/ld.so.conf.d/dragon.conf
+ ```
+
+ - Append 1 line for libraries dir of your 3rdparty, e.g. :
+ - /home/Dragon/3rdparty/lib
+ - rebuild the scaning cache
+
+ ```Shell
+ sudo ldconfig
+ ```
+
+ ### Windows
+ - add binary directionary to system environment variables, e.g. :
+ - PATH=........;C:\Dragon\3rdparty\bin;
+
+
+6. Setup MPI [Optional]
+ #### Linux:
+ - We use OpenMPI which support "cuda-aware-mpi"
+ - See more:
+ - https://devblogs.nvidia.com/parallelforall/introduction-cuda-aware-mpi/
+ - https://www.open-mpi.org/faq/?category=buildcuda
+ - Run 3rdparty/setup_mpi.sh
+
+ ```Shell
+ sudo ./setup_mpi.sh
+ ```
+
+ #### Windows:
+ - We use Microsoft MPI which can perfectly run at lastest Windows10
+ - Microsoft MPI is intergrated into 3rdparty and you should do nothing
+
+7. Compile
+ #### Linux:
+ - Install cmake
+
+ ```Shell
+ sudo apt-get install cmake
+ ```
+ - Make
+
+ ```Shell
+ cd Dragon
+ mkdir build
+ cd build
+ cmake ..
+ make install -j16
+ ```
+
+
+
+ #### Windows:
+ - Install cmake-gui
+ - Mkdir Dragon/build
+ - Configure and generate MSVC project in Dragon/build
+ - Open Dragon/build/Dragon.sln
+ - Compile and generate for "INSTALL" solution
+
+8. Deploy
+
+ ```Shell
+ cp Dragon/libs/libdragon.so Dragon/python
+ cp Dragon/python /usr/lib/python2.7/dist-packages/dragon (For Python)
+ cp Dragon/python ANACONDA_DIR/libs/python2.7/dist-packages/dragon (For Anaconda)
+ ```
+
+----
+
+## Usage
+
+### Import
+
+```Shell
+import dragon
+```
+
+### Virtual DL Frameworks
+
+```Shell
+import dragon.vm.theano as theano
+import dragon.vm.caffe as caffe
+import dragon.vm.tensorflow as tf
+```
+
+### Tutorials
+
+[IPython Notebook] -> (https://github.com/PhyscalX/Tutorials)
+
+We will revise several classical examples, covering both CV, NLP and RL.
+
+### Device
+
+```Shell
+import dragon.config
+dragon.config.EnableCPU()
+dragon.config.EnableCUDA(device_id, use_cudnn=True)
+```
+
+### Automatic Memory Optimization(AMC)
+
+```Shell
+import dragon.config
+dragon.config.SetDebugMode(False)
+```
+
+This option will make all gradients share a global tensor(debugging is intractable).
+
+which prefers a 50% memory-usage and 15% slower solution during training phase.
+
+### Scope
+
+- NameScope
+
+```Shell
+import dragon
+from dragon.core.tensor import Tensor
+with dragon.name_scope(prefix='conv1'):
+ w = Tensor('weight').Variable() # named as conv1/weight
+ b = Tensor('bias').Variable() # named as conv1/bias
+```
+
+- DeviceScope
+
+```Shell
+import dragon
+with dragon.deive_scope(deivce='gpu', id=0, use_cudnn=True):
+ x = ops.Add(a, b) # use /gpu:0 and cuDNN
+```
+
+- PhaseScope
+
+```Shell
+import dragon
+import dragon.vm.theano as theano
+ with dragon.phase_scope(phase='train'):
+ f = theano.function(outputs=y) # force the training phase even without gradients computation
+```
| 176 | try it | 1 | .md | md | bsd-2-clause | neopenx/Dragon |
1927 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework
Code will be available soon.
<MSG> try it
<DFF> @@ -1,4 +1,179 @@
# Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework
-Code will be available soon.
+### Compile Requirements for C++
+
+0. Google Protocol Buffer
+1. Python (2.7, 64bit) | Anaconda (2.7, 64bit)
+2. CUDA [Optional]
+3. CUDNN [Optional]
+4. OpenMPI [Optional]
+
+-----
+### Runtime Requirements for Python
+
+0. Package: protobuf
+1. Package: lmdb
+
+-----
+### Installation
+1. Clone this repository
+
+2. (Optional) Download and install [CUDA](https://developer.nvidia.com/cuda-toolkit)
+
+ (Optional) Download and install [CUDNN](https://developer.nvidia.com/cudnn)
+
+3. (Optional) Download 3rdparty.zip and unzip to Dragon/3rdparty (Out of source code dir)
+
+ [*Win64*](https://pan.baidu.com/s/1pLmGOLt) (OpenBLAS / Protobuf for VS2013 / CUDNN v6 / Microsoft MPI)
+
+ [*Linux64*](https://pan.baidu.com/s/1qXPEOWG) (OpenMPI)
+
+4. Configure Dragon/CMakeLists.txt
+ - Select optional libraries [CUDA / CUDNN / BLAS / SSE / MPI / MPI_CUDA_AWARE / CUDA_FP16]
+ - Set 3rdparty path (recommend to keep defualt)
+ - Set python path
+ - Set cuda compiling architectures if necessary
+
+5. Environment Variables
+ ### Linux(Only for OpenMPI):
+ - Create dragon.conf
+
+ ```Shell
+ sudo vim /etc/ld.so.conf.d/dragon.conf
+ ```
+
+ - Append 1 line for libraries dir of your 3rdparty, e.g. :
+ - /home/Dragon/3rdparty/lib
+ - rebuild the scaning cache
+
+ ```Shell
+ sudo ldconfig
+ ```
+
+ ### Windows
+ - add binary directionary to system environment variables, e.g. :
+ - PATH=........;C:\Dragon\3rdparty\bin;
+
+
+6. Setup MPI [Optional]
+ #### Linux:
+ - We use OpenMPI which support "cuda-aware-mpi"
+ - See more:
+ - https://devblogs.nvidia.com/parallelforall/introduction-cuda-aware-mpi/
+ - https://www.open-mpi.org/faq/?category=buildcuda
+ - Run 3rdparty/setup_mpi.sh
+
+ ```Shell
+ sudo ./setup_mpi.sh
+ ```
+
+ #### Windows:
+ - We use Microsoft MPI which can perfectly run at lastest Windows10
+ - Microsoft MPI is intergrated into 3rdparty and you should do nothing
+
+7. Compile
+ #### Linux:
+ - Install cmake
+
+ ```Shell
+ sudo apt-get install cmake
+ ```
+ - Make
+
+ ```Shell
+ cd Dragon
+ mkdir build
+ cd build
+ cmake ..
+ make install -j16
+ ```
+
+
+
+ #### Windows:
+ - Install cmake-gui
+ - Mkdir Dragon/build
+ - Configure and generate MSVC project in Dragon/build
+ - Open Dragon/build/Dragon.sln
+ - Compile and generate for "INSTALL" solution
+
+8. Deploy
+
+ ```Shell
+ cp Dragon/libs/libdragon.so Dragon/python
+ cp Dragon/python /usr/lib/python2.7/dist-packages/dragon (For Python)
+ cp Dragon/python ANACONDA_DIR/libs/python2.7/dist-packages/dragon (For Anaconda)
+ ```
+
+----
+
+## Usage
+
+### Import
+
+```Shell
+import dragon
+```
+
+### Virtual DL Frameworks
+
+```Shell
+import dragon.vm.theano as theano
+import dragon.vm.caffe as caffe
+import dragon.vm.tensorflow as tf
+```
+
+### Tutorials
+
+[IPython Notebook] -> (https://github.com/PhyscalX/Tutorials)
+
+We will revise several classical examples, covering both CV, NLP and RL.
+
+### Device
+
+```Shell
+import dragon.config
+dragon.config.EnableCPU()
+dragon.config.EnableCUDA(device_id, use_cudnn=True)
+```
+
+### Automatic Memory Optimization(AMC)
+
+```Shell
+import dragon.config
+dragon.config.SetDebugMode(False)
+```
+
+This option will make all gradients share a global tensor(debugging is intractable).
+
+which prefers a 50% memory-usage and 15% slower solution during training phase.
+
+### Scope
+
+- NameScope
+
+```Shell
+import dragon
+from dragon.core.tensor import Tensor
+with dragon.name_scope(prefix='conv1'):
+ w = Tensor('weight').Variable() # named as conv1/weight
+ b = Tensor('bias').Variable() # named as conv1/bias
+```
+
+- DeviceScope
+
+```Shell
+import dragon
+with dragon.deive_scope(deivce='gpu', id=0, use_cudnn=True):
+ x = ops.Add(a, b) # use /gpu:0 and cuDNN
+```
+
+- PhaseScope
+
+```Shell
+import dragon
+import dragon.vm.theano as theano
+ with dragon.phase_scope(phase='train'):
+ f = theano.function(outputs=y) # force the training phase even without gradients computation
+```
| 176 | try it | 1 | .md | md | bsd-2-clause | neopenx/Dragon |
1928 | <NME> engine_api.rst
<BEF> Hitch Engine API
================
The Hitch Engine is a python class which is tasked with executing your tests
and responding to successes and failures.
For a test like this, written in YAML:
.. code-block:: yaml
- name: Example scenario
scenario:
- Do something
- Do something else
The basic Hitch Engine, written in python, would need to look something like this:
.. code-block:: python
import hitchtest
class ExecutionEngine(hitchtest.ExecutionEngine):
def set_up(self):
# set up code
def do_something(self):
# code run when test says "Do something"
def do_something_else(self, with_what):
# code run run when test says "Do something else"
def tear_down(self):
# code that always runs at the end
Step Translation
----------------
Test steps and their arguments are fed to the engine directly as method calls
and arguments. All step names and arguments are first changed into underscore_case.
For example, putting this as a test step:
.. code-block:: yaml
- Do something
Would be equivalent to calling this in your engine:
.. code-block:: python
self.do_something()
This, on the other hand (note the semicolon):
.. code-block:: yaml
- Do something else: value 1
Would be translated into:
.. code-block:: python
self.do_something_else("value 1")
You can include as many arguments as you like in steps like so:
.. code-block:: yaml
- Do complicated thing:
Variable 1: Value 1
Variable 2: 2
If the equivalent were written in python it would look like this:
.. code-block:: python
self.do_complicated_thing(variable_1="Value 1", variable_2="2")
Your steps can also contain arguments that contain lists:
.. code-block:: yaml
- Do another complicated thing:
Variable 1: value 1
Variable 2:
- List item 1
- List item 2
The python equivalent of that would look like this:
.. code-block:: python
self.do_another_complicated_thing(variable_1="value 1", variable_2=["list item 1", "list item 2",])
They can contain dicts (or associative arrays) as well:
.. code-block:: yaml
- A 3rd complicated thing:
Variable 1: value 1
Variable 2:
Dict item 1: val 1
Dict item 2: val 2
Which in python would be equivalent to this:
.. code-block:: python
self.a_3rd_complicated_thing(variable_1="value 1", variable_2={'Dict item 1': 'val 1', 'Dict item 2': 'val 2'})
Careful with semicolons and braces like { and }
-----------------------------------------------
Since the tests are written in YAML with optional Jinja2, braces and
semicolons have special meanings and must be escaped if you want
to use them.
-------------
self.preconditions is a dictionary representation of the YAML snippet in the test being run.
What goes in this snippet is up to you. Anything that is valid YAML and an associative arrays
is allowed.
Example:
.. code-block:: yaml
preconditions:
db_fixtures:
- fixture1.sql
python_version: 2.7.3
This will mean your preconditions variable will be::
In [1]: self.preconditions
Out[1]: {'db_fixtures': ['fixture1.sql'], 'python_version': '2.7.3'}
You can access any properties you set here using python's get method (which
you can also use to program in a sensible default)::
In [1]: self.preconditions.get('db_fixtures', [])
Out[1]: ['fixture1.sql']
If no preconditions are set, self.preconditions will be an empty dict::
In [1]: self.preconditions
Out[1]: {}
Note that while preconditions can contain lists, you can't set preconditions
to be a list.
Tags
----
Tests can also have tags, which let you single out individual tests to run
or to run groups of tests together. Example:
- name: Test with tags
tags:
- registration
- registration
- email
- firefox
scenario:
- Step 1
- Step 2
You can use these tags to run related sets of tests together like so::
$ hitch test . --tags registration
Or, if you want to be more specific, you can list the tags, separated by a comma::
$ hitch test . --tags registration,email,firefox
Description
-----------
You can also include comments in the description property. This where you can
put comments in your tests to help explain to people what your test is doing
and why.
It is ignored by the engine.
.. code-block:: yaml
- name: Test with long description
description: |
This test has a long history behind it. First there was a feature, then
ther was another bug BUG-431, which it was tweaked to accomodate.
It registers, recieves an email and checks the email arrived.
scenario:
- Step 1
- Step 2
db_fixtures:
- fixture1.sql
Stacktrace
Stacktrace
----------
self.stacktrace is an object representation of the stack trace that occurs after a failure
occurs in your test. It is set to None if no error has occurred while running the test.
You can use it to pretty-print a representation of the last error that occurred::
In [1]: print(self.stacktrace.to_template())
[ prints colorized, pretty printed version of the stacktrace ]
You can also use it to *dive into* the specific engine code where the exception occurred,
so that you can check the contents of variables at that point or even re-run the code::
In [1]: self.stacktrace[0].ipython()
Entering /home/user/django-remindme/django-remindme-tests/engine.py at line 122
In [1]: on
Out[1]: 'register'
Settings
--------
Test settings are also available in the test engine, e.g.::
In [1]: self.settings
Out[1]:
{'engine_folder': '/home/user/django-remindme/django-remindme-tests',
'pause_on_failure': True,
'python_version': '2.7.3',
'xvfb': False,
'quiet': False}
To read more about setting settings see :doc:`settings`.
<MSG> DOCS : Update to engine API settings.
<DFF> @@ -123,8 +123,7 @@ Preconditions
-------------
self.preconditions is a dictionary representation of the YAML snippet in the test being run.
-What goes in this snippet is up to you. Anything that is valid YAML and an associative arrays
-is allowed.
+What goes in this snippet is up to you. Anything that is valid YAML is allowed.
Example:
@@ -160,6 +159,8 @@ Tags
Tests can also have tags, which let you single out individual tests to run
or to run groups of tests together. Example:
+.. code-block:: yaml
+
- name: Test with tags
tags:
- registration
@@ -197,9 +198,12 @@ It is ignored by the engine.
It registers, recieves an email and checks the email arrived.
scenario:
- Step 1
- - Step 2
- db_fixtures:
- - fixture1.sql
+ - Step 2: with parameter
+ - Step 3:
+ var 1: 1
+ var 2: 2
+ var 3: 3
+ - Last step
Stacktrace
| 9 | DOCS : Update to engine API settings. | 5 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1929 | <NME> index.rst
<BEF> ADDFILE
<MSG> DOCS : Added index for API folder.
<DFF> @@ -0,0 +1,12 @@
+Hitch API
+=========
+
+Documentation for Hitch core APIs.
+
+Contents:
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+
+ *
| 12 | DOCS : Added index for API folder. | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1930 | <NME> isolation.rst
<BEF> Isolation
=========
Isolation is a property of tests that prevents system state from
'contaminating' the environment which a test runs under. It is a
particular problem when integration testing.
Non-isolated integration tests are often referred to as :doc:`/glossary/brittle_tests`.
Common examples of 'brittle' integration tests suffering from a lack of isolation include:
* Databases that are changed by one test and then re-used in subsequent tests.
* Files which are created by tests and not destroyed which affect the behavior of subsequent tests.
* A system package manager upgrading a crucial piece of software that the test relies upon, breaking it.
* A stray process monopolizing a network port used by the test.
Radical isolation is a primary goal of Hitch. Hitch achieves this by controlling as much
of the environment as is feasible and running a suite of checks for the rest.
* :doc:`/glossary/package_isolation`
* :doc:`/glossary/data_isolation`
* :doc:`/glossary/process_isolation`
* :doc:`/glossary/environment_isolation`
See also:
* :doc:`/glossary/test_realism`
* `Non-determinism by Martin Fowler <http://martinfowler.com/articles/nonDeterminism.html>`_
<MSG> DOCS : Continued overhaul of docs.
<DFF> @@ -3,4 +3,4 @@ Isolation
Isolation is a property of tests that prevents system state from
'contaminating' the environment which a test runs under. It is a
-particular problem when integration testing.
+common problem when integration testing.
| 1 | DOCS : Continued overhaul of docs. | 1 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1931 | <NME> setup.py
<BEF> from distutils.core import setup
import os.path, sys
import shutil
packages = []
def find_packages(root_dir):
filenames = os.listdir(root_dir)
for filename in filenames:
filepath = os.path.join(root_dir, filename)
if os.path.isdir(filepath):
find_packages(filepath)
else:
if filename == '__init__.py':
packages.append(root_dir)
def find_modules():
dragon_c_lib_win32 = '../lib/dragon.dll'
dragon_c_lib_other = '../lib/libdragon.so'
if os.path.exists(dragon_c_lib_win32):
shutil.copy(dragon_c_lib_win32, 'dragon/libdragon.pyd')
elif os.path.exists(dragon_c_lib_other):
shutil.copy(dragon_c_lib_other, 'dragon/libdragon.so')
else:
print('ERROR: Unable to find modules. built Dragon using CMake.')
sys.exit()
def find_resources():
c_lib = ['libdragon.*']
protos = ['protos/*.proto', 'vm/caffe/proto/*.proto']
others = []
return c_lib + protos + others
find_packages('dragon')
find_modules()
setup(name = 'dragon',
version='0.2.1.7',
description = 'Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework',
url='https://github.com/neopenx/Dragon',
author='Ting Pan',
license='BSD 2-Clause',
packages=packages,
package_dir={'dragon': 'dragon'},
package_data={'dragon': find_resources()})
<MSG> Add ClearWorkspace interface
<DFF> @@ -36,7 +36,7 @@ find_packages('dragon')
find_modules()
setup(name = 'dragon',
- version='0.2.1.7',
+ version='0.2.1.8',
description = 'Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework',
url='https://github.com/neopenx/Dragon',
author='Ting Pan',
| 1 | Add ClearWorkspace interface | 1 | .py | py | bsd-2-clause | neopenx/Dragon |
1932 | <NME> setup.py
<BEF> from distutils.core import setup
import os.path, sys
import shutil
packages = []
def find_packages(root_dir):
filenames = os.listdir(root_dir)
for filename in filenames:
filepath = os.path.join(root_dir, filename)
if os.path.isdir(filepath):
find_packages(filepath)
else:
if filename == '__init__.py':
packages.append(root_dir)
def find_modules():
dragon_c_lib_win32 = '../lib/dragon.dll'
dragon_c_lib_other = '../lib/libdragon.so'
if os.path.exists(dragon_c_lib_win32):
shutil.copy(dragon_c_lib_win32, 'dragon/libdragon.pyd')
elif os.path.exists(dragon_c_lib_other):
shutil.copy(dragon_c_lib_other, 'dragon/libdragon.so')
else:
print('ERROR: Unable to find modules. built Dragon using CMake.')
sys.exit()
def find_resources():
c_lib = ['libdragon.*']
protos = ['protos/*.proto', 'vm/caffe/proto/*.proto']
others = []
return c_lib + protos + others
find_packages('dragon')
find_modules()
setup(name = 'dragon',
version='0.2.1.7',
description = 'Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework',
url='https://github.com/neopenx/Dragon',
author='Ting Pan',
license='BSD 2-Clause',
packages=packages,
package_dir={'dragon': 'dragon'},
package_data={'dragon': find_resources()})
<MSG> Add ClearWorkspace interface
<DFF> @@ -36,7 +36,7 @@ find_packages('dragon')
find_modules()
setup(name = 'dragon',
- version='0.2.1.7',
+ version='0.2.1.8',
description = 'Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework',
url='https://github.com/neopenx/Dragon',
author='Ting Pan',
| 1 | Add ClearWorkspace interface | 1 | .py | py | bsd-2-clause | neopenx/Dragon |
1933 | <NME> RedisSentinelManager.java
<BEF> package org.crazycake.shiro;
import org.crazycake.shiro.common.WorkAloneRedisManager;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;
import redis.clients.jedis.Protocol;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
public class RedisSentinelManager extends WorkAloneRedisManager implements IRedisManager {
private static final String DEFAULT_HOST = "127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381";
private String host = DEFAULT_HOST;
private static final String DEFAULT_MASTER_NAME = "mymaster";
private String masterName = DEFAULT_MASTER_NAME;
// timeout for jedis try to connect to redis server, not expire time! In milliseconds
private int timeout = Protocol.DEFAULT_TIMEOUT;
// timeout for jedis try to read data from redis server
private int soTimeout = Protocol.DEFAULT_TIMEOUT;
private int database = Protocol.DEFAULT_DATABASE;
private JedisSentinelPool jedisPool;
@Override
protected Jedis getJedis() {
protected Jedis getJedis() {
if (jedisPool == null) {
init();
}
return jedisPool.getResource();
}
private void init() {
if (jedisPool == null) {
synchronized (RedisSentinelManager.class) {
if (jedisPool == null) {
String[] sentinelHosts = host.split(",\\s*");
Set<String> sentinels = new HashSet<String>();
Collections.addAll(sentinels, sentinelHosts);
jedisPool = new JedisSentinelPool(masterName, sentinels, getJedisPoolConfig(), timeout, soTimeout, password, database);
}
}
}
}
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
public int getTimeout() {
return timeout;
}
public void setTimeout(int timeout) {
this.timeout = timeout;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public int getDatabase() {
return database;
}
public void setDatabase(int database) {
this.database = database;
}
public String getMasterName() {
return masterName;
}
public void setMasterName(String masterName) {
this.masterName = masterName;
}
public int getSoTimeout() {
return soTimeout;
}
public void setSoTimeout(int soTimeout) {
this.soTimeout = soTimeout;
}
public JedisSentinelPool getJedisPool() {
return jedisPool;
}
public void setJedisPool(JedisSentinelPool jedisPool) {
this.jedisPool = jedisPool;
}
}
<MSG> Update RedisSentinelManager.java
<DFF> @@ -26,7 +26,7 @@ public class RedisSentinelManager extends WorkAloneRedisManager implements IRedi
private int database = Protocol.DEFAULT_DATABASE;
- private JedisSentinelPool jedisPool;
+ private volatile JedisSentinelPool jedisPool;
@Override
protected Jedis getJedis() {
| 1 | Update RedisSentinelManager.java | 1 | .java | java | mit | alexxiyang/shiro-redis |
1934 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
#required
securityManager.cacheManager = $cacheManager
```
<MSG> Update README.md
<DFF> @@ -20,3 +20,8 @@ cacheManager.expire=5
#required
securityManager.cacheManager = $cacheManager
```
+
+If you found any bugs
+===========
+
+Please send email to [email protected]
| 5 | Update README.md | 0 | .md | md | mit | alexxiyang/shiro-redis |
1935 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
===========
You can chose these 2 ways to include shiro-redis into your project
1. directly download jar file
Download shiro-redis.jar in bin folder and add it into your classpath.
2. add maven dependency
------------------------------------
<dependency>
<groupId>org.crazycake</groupId>
<MSG> Update README.md
update readme
<DFF> @@ -7,9 +7,9 @@ How to use it?
===========
You can chose these 2 ways to include shiro-redis into your project
-1. directly download jar file
++ directly download jar file
Download shiro-redis.jar in bin folder and add it into your classpath.
-2. add maven dependency
++ add maven dependency
------------------------------------
<dependency>
<groupId>org.crazycake</groupId>
| 2 | Update README.md | 2 | .md | md | mit | alexxiyang/shiro-redis |
1936 | <NME> clarifying_documentation.rst
<BEF> Clarifying documentation
========================
Was there something you were confused about? Does a part of the documentation
not make sense? Is there something you think is missing but should be there?
Is there something you think should be made obvious but you had to dig around
for?
If so, please raise an issue at https://github.com/hitchtest/hitch/issues/new
Thanks!
<MSG> DOCS : Improvement to docs.
<DFF> @@ -1,11 +1,18 @@
Clarifying documentation
========================
-Was there something you were confused about? Does a part of the documentation
-not make sense? Is there something you think is missing but should be there?
-Is there something you think should be made obvious but you had to dig around
-for?
+Was there something you were confused about? Are you having problems with hitch?
-If so, please raise an issue at https://github.com/hitchtest/hitch/issues/new
+If you are finding a test hard to implement or a problem with the framework
+hard to fix, please raise an issue:
-Thanks!
+https://github.com/hitchtest/hitch/issues/new
+
+*Even if you think the problem was probably your fault* we really want to hear
+from you. There's a good chance if you're confused that the documentation or
+even the code is deficient in some way and we'd like to fix that.
+
+Support queries are the lifeblood of this framework. Without them it will
+wither and die.
+
+Thanks for helping out!
| 13 | DOCS : Improvement to docs. | 6 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1937 | <NME> README.md
<BEF> shiro-redis
=============
[](https://travis-ci.org/alexxiyang/shiro-redis)
[](https://maven-badges.herokuapp.com/maven-central/org.crazycake/shiro-redis)
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
# Download
You use either of the following 2 ways to include `shiro-redis` into your project
* use `git clone https://github.com/alexxiyang/shiro-redis.git` to clone project to your local workspace and build jar file by your self
* add maven dependency
```xml
<dependency>
<groupId>org.crazycake</groupId>
<artifactId>shiro-redis</artifactId>
<version>3.3.1</version>
</dependency>
```
> **Note:**
> 3.3.0 is compiled by java11
> 3.3.1 is compiled by java8
## shiro-core/jedis Version Comparison Charts
| shiro-redis | shiro | jedis |
| :----------------:| :-------: | :-------: |
| 3.2.3 | 1.3.2 | 2.9.0 |
| 3.3.0 (java11) | 1.6.0 | 3.3.0 |
| 3.3.1 (java8) | 1.6.0 | 3.3.0 |
# Before use
Here is the first thing you need to know. Shiro-redis needs an id field to identify your authorization object in Redis. So please make sure your principal class has a field which you can get unique id of this object. Please setting this id field name by `cacheManager.principalIdFieldName = <your id field name of principal object>`
For example:
If you create `SimpleAuthenticationInfo` like this:
```java
@Override
protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token) throws AuthenticationException {
UsernamePasswordToken usernamePasswordToken = (UsernamePasswordToken)token;
UserInfo userInfo = new UserInfo();
userInfo.setUsername(usernamePasswordToken.getUsername());
return new SimpleAuthenticationInfo(userInfo, "123456", getName());
}
```
Then the `userInfo` object is your principal object. You need to make sure `UserInfo` has an unique field for Redis to identify it. Take `userId` as an example:
```java
public class UserInfo implements Serializable{
private Integer userId
private String username;
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public Integer getUserId() {
return this.userId;
}
}
```
Put userId as the value of `cacheManager.principalIdFieldName`, like this:
```properties
cacheManager.principalIdFieldName = userId
```
If you're using Spring, the configuration should be
```xml
<property name="principalIdFieldName" value="userId" />
```
Then `shiro-redis` will call `userInfo.getUserId()` to get the id for saving Redis object.
# How to configure ?
You can configure `shiro-redis` either in `shiro.ini` or in `spring-*.xml`
## shiro.ini
Here is the configuration example for shiro.ini.
### Redis Standalone
If you are running Redis in Standalone mode
```properties
[main]
#====================================
# shiro-redis configuration [start]
#====================================
#===================================
# Redis Manager [start]
#===================================
# Create redisManager
redisManager = org.crazycake.shiro.RedisManager
# Redis host. If you don't specify host the default value is 127.0.0.1:6379
redisManager.host = 127.0.0.1:6379
#===================================
# Redis Manager [end]
#===================================
#=========================================
# Redis session DAO [start]
#=========================================
# Create redisSessionDAO
redisSessionDAO = org.crazycake.shiro.RedisSessionDAO
# Use redisManager as cache manager
redisSessionDAO.redisManager = $redisManager
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
sessionManager.sessionDAO = $redisSessionDAO
securityManager.sessionManager = $sessionManager
#=========================================
# Redis session DAO [end]
#=========================================
#==========================================
# Redis cache manager [start]
#==========================================
# Create cacheManager
cacheManager = org.crazycake.shiro.RedisCacheManager
# Principal id field name. The field which you can get unique id to identify this principal.
# For example, if you use UserInfo as Principal class, the id field maybe `id`, `userId`, `email`, etc.
# Remember to add getter to this id field. For example, `getId()`, `getUserId()`, `getEmail()`, etc.
# Default value is id, that means your principal object must has a method called `getId()`
cacheManager.principalIdFieldName = id
# Use redisManager as cache manager
cacheManager.redisManager = $redisManager
securityManager.cacheManager = $cacheManager
#==========================================
# Redis cache manager [end]
#==========================================
#=================================
# shiro-redis configuration [end]
#=================================
```
For complete configurable options list, check [Configurable Options](#configurable-options).
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-tutorial) for you to understand how to configure `shiro-redis` in `shiro.ini`.
### Redis Sentinel
if you're using Redis Sentinel, please replace the `redisManager` configuration of the standalone version into the following:
```properties
#===================================
# Redis Manager [start]
#===================================
# Create redisManager
redisManager = org.crazycake.shiro.RedisSentinelManager
# Sentinel host. If you don't specify host the default value is 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
redisManager.host = 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
# Sentinel master name
redisManager.masterName = mymaster
#===================================
# Redis Manager [end]
#===================================
```
For complete configurable options list, check [Configurable Options](#configurable-options).
### Redis Cluster
If you're using redis cluster, please replace the `redisManager` configuration of the standalone version into the following:
```properties
#===================================
# Redis Manager [start]
#===================================
# Create redisManager
redisManager = org.crazycake.shiro.RedisClusterManager
# Redis host and port list
redisManager.host = 192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005
#===================================
# Redis Manager [end]
#===================================
```
For complete configurable options list, check [Configurable Options](#configurable-options).
## Spring
If you are using Spring
### Redis Standalone
If you are running Redis in Standalone mode
```xml
<!-- shiro-redis configuration [start] -->
<!-- Redis Manager [start] -->
<bean id="redisManager" class="org.crazycake.shiro.RedisManager">
<property name="host" value="127.0.0.1:6379"/>
</bean>
<!-- Redis Manager [end] -->
<!-- Redis session DAO [start] -->
<bean id="redisSessionDAO" class="org.crazycake.shiro.RedisSessionDAO">
<property name="redisManager" ref="redisManager" />
</bean>
<bean id="sessionManager" class="org.apache.shiro.web.session.mgt.DefaultWebSessionManager">
<property name="sessionDAO" ref="redisSessionDAO" />
</bean>
<!-- Redis session DAO [end] -->
<!-- Redis cache manager [start] -->
<bean id="cacheManager" class="org.crazycake.shiro.RedisCacheManager">
<property name="redisManager" ref="redisManager" />
</bean>
<!-- Redis cache manager [end] -->
<bean id="securityManager" class="org.apache.shiro.web.mgt.DefaultWebSecurityManager">
<property name="sessionManager" ref="sessionManager" />
<property name="cacheManager" ref="cacheManager" />
<!-- other configurations -->
<property name="realm" ref="exampleRealm"/>
<property name="rememberMeManager.cipherKey" value="kPH+bIxk5D2deZiIxcaaaA==" />
</bean>
<!-- shiro-redis configuration [end] -->
```
For complete configurable options list, check [Configurable Options](#configurable-options).
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-spring-tutorial) for you to understand how to configure `shiro-redis` in spring configuration file.
### Redis Sentinel
If you use redis sentinel, please replace the `redisManager` configuration of the standalone version into the following:
```xml
<!-- shiro-redis configuration [start] -->
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisSentinelManager">
<property name="host" value="127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381"/>
<property name="masterName" value="mymaster"/>
</bean>
```
For complete configurable options list, check [Configurable Options](#configurable-options).
### Redis Cluster
If you use redis cluster, please replace the `redisManager` configuration of the standalone version into the following:
```xml
<!-- shiro-redis configuration [start] -->
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisClusterManager">
<property name="host" value="192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005"/>
</bean>
```
For complete configurable options list, check [Configurable Options](#configurable-options).
## Serializer
Since redis only accept `byte[]`, there comes a serializer problem.
Shiro-redis is using `StringSerializer` as key serializer and `ObjectSerializer` as value serializer.
You can use your own custom serializer, as long as this custom serializer implements `org.crazycake.shiro.serializer.RedisSerializer`
For example, we can change the charset of keySerializer like this
```properties
# If you want change charset of keySerializer or use your own custom serializer, you need to define serializer first
#
# cacheManagerKeySerializer = org.crazycake.shiro.serializer.StringSerializer
# Supported encodings refer to https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html
# UTF-8, UTF-16, UTF-32, ISO-8859-1, GBK, Big5, etc
#
# cacheManagerKeySerializer.charset = UTF-8
# cacheManager.keySerializer = $cacheManagerKeySerializer
```
These 4 options that you can replace them with your cutom serializers:
- cacheManager.keySerializer
- cacheManager.valueSerializer
- redisSessionDAO.keySerializer
- redisSessionDAO.valueSerializer
## Configurable Options
Here are all the available options you can use in `shiro-redis` configuration file.
### RedisManager
| Title | Default | Description |
| :------------------| :------------------- | :---------------------------|
| host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is `127.0.0.1:6379`. If you run redis in sentinel mode or cluster mode, separate host names with comma, like `127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381` |
| masterName | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
| timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
| soTimeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
| maxAttempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
| password | | Redis password |
| database | `0` | Redis database. Default value is 0 |
| jedisPoolConfig | `new redis.clients.jedis.JedisPoolConfig()` | JedisPoolConfig. You can create your own JedisPoolConfig instance and set attributes as you wish<br>Most of time, you don't need to set jedisPoolConfig<br>Here is an example.<br>`jedisPoolConfig = redis.clients.jedis.JedisPoolConfig`<br>`jedisPoolConfig.testWhileIdle = false`<br>`redisManager.jedisPoolConfig = jedisPoolConfig` |
| count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
| jedisPool | `null` | **Only used for sentinel mode or single mode**<br>You can create your own JedisPool instance and set attributes as you wish |
### RedisSessionDAO
| Title | Default | Description |
| :------------------| :------------------- | :---------------------------|
| redisManager | | RedisManager which you just configured above (Required) |
| expire | `-2` | Redis cache key/value expire time. The expire time is in second.<br>Special values:<br>`-1`: no expire<br>`-2`: the same timeout with session<br>Default value: `-2`<br>**Note**: Make sure expire time is longer than session timeout. |
| keyPrefix | `shiro:session:` | Custom your redis key prefix for session management<br>**Note**: Remember to add colon at the end of prefix. |
| sessionInMemoryTimeout | `1000` | When we do signin, `doReadSession(sessionId)` will be called by shiro about 10 times. So shiro-redis save Session in ThreadLocal to remit this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal. <br>Most of time, you don't need to change it. |
| sessionInMemoryEnabled | `true` | Whether or not enable temporary save session in ThreadLocal |
| keySerializer | `org.crazycake.shiro.serializer.StringSerializer` | The key serializer of cache manager<br>You can change the implement of key serializer or the encoding of StringSerializer.<br>Supported encodings refer to [Supported Encodings](https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html). Such as `UTF-8`, `UTF-16`, `UTF-32`, `ISO-8859-1`, `GBK`, `Big5`, etc<br>For more detail, check [Serializer](#serializer) |
| valueSerializer | `org.crazycake.shiro.serializer.ObjectSerializer` | The value serializer of cache manager<br>You can change the implement of value serializer<br>For more detail, check [Serializer](#serializer) |
### CacheManager
| Title | Default | Description |
| :--------------------| :------------------- | :---------------------------|
| redisManager | | RedisManager which you just configured above (Required) |
| principalIdFieldName | `id` | Principal id field name. The field which you can get unique id to identify this principal.<br>For example, if you use UserInfo as Principal class, the id field maybe `id`, `userId`, `email`, etc.<br>Remember to add getter to this id field. For example, `getId()`, `getUserId(`), `getEmail()`, etc.<br>Default value is `id`, that means your principal object must has a method called `getId()` |
| expire | `1800` | Redis cache key/value expire time. <br>The expire time is in second. |
| keyPrefix | `shiro:cache:` | Custom your redis key prefix for cache management<br>**Note**: Remember to add colon at the end of prefix. |
| keySerializer | `org.crazycake.shiro.serializer.StringSerializer` | The key serializer of cache manager<br>You can change the implement of key serializer or the encoding of StringSerializer.<br>Supported encodings refer to [Supported Encodings](https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html). Such as `UTF-8`, `UTF-16`, `UTF-32`, `ISO-8859-1`, `GBK`, `Big5`, etc<br>For more detail, check [Serializer](#serializer) |
| valueSerializer | `org.crazycake.shiro.serializer.ObjectSerializer` | The value serializer of cache manager<br>You can change the implement of value serializer<br>For more detail, check [Serializer](#serializer) |
# Spring boot starter
Using `Spring-Boot` integration is the easiest way to integrate `shiro-redis` into a Spring-base application.
> Note: `shiro-redis-spring-boot-starter` version `3.2.1` is based on `shiro-spring-boot-web-starter` version `1.4.0-RC2`
First include the `shiro-redis` Spring boot starter dependency in you application classpath
```xml
<dependency>
<groupId>org.crazycake</groupId>
<artifactId>shiro-redis-spring-boot-starter</artifactId>
<version>3.3.1</version>
</dependency>
```
The next step depends on whether you've created your own `SessionManager` or `SessionsSecurityManager`.
## If you haven't created your own `SessionManager` or `SessionsSecurityManager`
If you don't have your own `SessionManager` or `SessionsSecurityManager` in your configuration, `shiro-redis-spring-boot-starter` will create `RedisSessionDAO` and `RedisCacheManager` for you. Then inject them into `SessionManager` and `SessionsSecurityManager` automatically.
So, You are all set. Enjoy it!
## If you have created your own `SessionManager` or `SessionsSecurityManager`
If you have created your own `SessionManager` or `SessionsSecurityManager` like this:
```java
@Bean
public SessionsSecurityManager securityManager(List<Realm> realms) {
DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager(realms);
// other stuff...
return securityManager;
}
```
Then inject `redisSessionDAO` and `redisCacheManager` which created by `shiro-redis-spring-boot-starter` already
```java
@Autowired
RedisSessionDAO redisSessionDAO;
@Autowired
RedisCacheManager redisCacheManager;
```
Inject them into your own `SessionManager` and `SessionsSecurityManager`
```java
@Bean
public SessionManager sessionManager() {
DefaultWebSessionManager sessionManager = new DefaultWebSessionManager();
// inject redisSessionDAO
sessionManager.setSessionDAO(redisSessionDAO);
// other stuff...
return sessionManager;
}
@Bean
public SessionsSecurityManager securityManager(List<Realm> realms, SessionManager sessionManager) {
DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager(realms);
//inject sessionManager
securityManager.setSessionManager(sessionManager);
// inject redisCacheManager
securityManager.setCacheManager(redisCacheManager);
// other stuff...
return securityManager;
}
```
For full example, see [shiro-redis-spring-boot-tutorial](https://github.com/alexxiyang/shiro-redis-spring-boot-tutorial)
### Configuration Properties
Here are all available options you can use in Spring-boot starter configuration
| Title | Default | Description |
| :--------------------------------------------------| :------------------- | :---------------------------|
| shiro-redis.enabled | `true` | Enables shiro-redis’s Spring module |
| shiro-redis.redis-manager.deploy-mode | `standalone` | Redis deploy mode. Options: `standalone`, `sentinel`, 'cluster' |
| shiro-redis.redis-manager.host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is `127.0.0.1:6379`. If you run redis in sentinel mode or cluster mode, separate host names with comma, like `127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381` |
| shiro-redis.redis-manager.master-name | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
| shiro-redis.redis-manager.timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
| shiro-redis.redis-manager.so-timeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
| shiro-redis.redis-manager.max-attempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
| shiro-redis.redis-manager.password | | Redis password |
| shiro-redis.redis-manager.database | `0` | Redis database. Default value is 0 |
| shiro-redis.redis-manager.count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
| shiro-redis.session-dao.expire | `-2` | Redis cache key/value expire time. The expire time is in second.<br>Special values:<br>`-1`: no expire<br>`-2`: the same timeout with session<br>Default value: `-2`<br>**Note**: Make sure expire time is longer than session timeout. |
| shiro-redis.session-dao.key-prefix | `shiro:session:` | Custom your redis key prefix for session management<br>**Note**: Remember to add colon at the end of prefix. |
| shiro-redis.session-dao.session-in-memory-timeout | `1000` | When we do signin, `doReadSession(sessionId)` will be called by shiro about 10 times. So shiro-redis save Session in ThreadLocal to remit this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal. <br>Most of time, you don't need to change it. |
| shiro-redis.session-dao.session-in-memory-enabled | `true` | Whether or not enable temporary save session in ThreadLocal |
| shiro-redis.cache-manager.principal-id-field-name | `id` | Principal id field name. The field which you can get unique id to identify this principal.<br>For example, if you use UserInfo as Principal class, the id field maybe `id`, `userId`, `email`, etc.<br>Remember to add getter to this id field. For example, `getId()`, `getUserId(`), `getEmail()`, etc.<br>Default value is `id`, that means your principal object must has a method called `getId()` |
| shiro-redis.cache-manager.expire | `1800` | Redis cache key/value expire time. <br>The expire time is in second. |
| shiro-redis.cache-manager.key-prefix | `shiro:cache:` | Custom your redis key prefix for cache management<br>**Note**: Remember to add colon at the end of prefix. |
## Working with `spring-boot-devtools`
If you are using `shiro-redis` with `spring-boot-devtools`. Please add this line to `resources/META-INF/spring-devtools.properties` (Create it if there is no this file):
```ini
restart.include.shiro-redis=/shiro-[\\w-\\.]+jar
```
# If you found any bugs
Please create the issue
可以用中文
<MSG> Use jdk 1.8 in 3.3.1
<DFF> @@ -20,8 +20,8 @@ You use either of the following 2 ways to include `shiro-redis` into your projec
</dependency>
```
-> **Note:**
-> 3.3.0 is compiled by java11
+> **Note:**\
+> 3.3.0 is compiled by java11\
> 3.3.1 is compiled by java8
## shiro-core/jedis Version Comparison Charts
| 2 | Use jdk 1.8 in 3.3.1 | 2 | .md | md | mit | alexxiyang/shiro-redis |
1938 | <NME> ProcessAgent.py
<BEF> # --------------------------------------------------------
# GA3C for Dragon
# Copyright(c) 2017 SeetaTech
# Written by Ting Pan
# --------------------------------------------------------
from datetime import datetime
from multiprocessing import Process, Queue, Value
import numpy as np
import time
from Config import Config
from Environment import Environment
class Experience(object):
def __init__(self, state, action, prob, reward, done):
self.state = state
self.action = action
self.prob = prob
self.reward = reward
self.done = done
class ProcessAgent(Process):
def __init__(self, id, prediction_q, training_q, episode_log_q):
super(ProcessAgent, self).__init__()
self.id = id
self.prediction_q = prediction_q
self.training_q = training_q
self.episode_log_q = episode_log_q
self.env = Environment()
self.num_actions = self.env.get_num_actions()
self.actions = np.arange(self.num_actions)
self.discount_factor = Config.DISCOUNT
# one frame at a time
self.wait_q = Queue(maxsize=1)
self.exit_flag = Value('i', 0)
@staticmethod
def _accumulate_rewards(experiences, discount_factor, terminal_reward):
reward_sum = terminal_reward
for t in reversed(range(0, len(experiences) - 1)):
r = np.clip(experiences[t].reward, Config.REWARD_MIN, Config.REWARD_MAX)
reward_sum = discount_factor * reward_sum + r
experiences[t].reward = reward_sum
return experiences[:-1]
def convert_data(self, experiences):
x_ = np.array([exp.state for exp in experiences])
a_ = np.eye(self.num_actions)[np.array([exp.action for exp in experiences])].astype(np.float32)
r_ = np.array([exp.reward for exp in experiences], dtype=np.float32) # R
r_ = r_.reshape((-1, 1))
return x_, r_, a_
def predict(self, state):
# put the state in the prediction q
self.prediction_q.put((self.id, state))
# wait for the prediction to come back
p, v = self.wait_q.get()
return p, v
def select_action(self, prediction):
if Config.PLAY_MODE:
action = np.argmax(prediction)
else:
action = np.random.choice(self.actions, p=prediction)
return action
def run_episode(self):
self.env.reset()
done = False
experiences = []
time_count = 0
reward_sum = 0.0
while not done:
# very first few frames
if self.env.current_state is None:
self.env.step(0) # 0 == NOOP
continue
prediction, value = self.predict(self.env.current_state)
action = self.select_action(prediction)
reward, done = self.env.step(action)
reward_sum += reward
exp = Experience(self.env.previous_state, action, prediction, reward, done)
experiences.append(exp)
if done or time_count == Config.TIME_MAX:
terminal_reward = 0 if done else value
updated_exps = ProcessAgent._accumulate_rewards(experiences, self.discount_factor, terminal_reward)
x_, r_, a_ = self.convert_data(updated_exps)
yield x_, r_, a_, reward_sum
# reset the tmax count
time_count = 0
# keep the last experience for the next batch
experiences = [experiences[-1]]
reward_sum = 0.0
time_count += 1
def run(self):
# randomly sleep up to 1 second. helps agents boot smoothly.
time.sleep(np.random.rand())
np.random.seed(np.int32(time.time() % 1 * 1000 + self.id * 10))
while self.exit_flag.value == 0:
total_reward = 0
total_length = 0
for x_, r_, a_, reward_sum in self.run_episode():
total_reward += reward_sum
total_length += len(r_) + 1 # +1 for last frame that we drop
self.training_q.put((x_, r_, a_))
self.episode_log_q.put((datetime.now(), total_reward, total_length))
<MSG> Mix Static/Dynamic Arguments
<DFF> @@ -84,6 +84,8 @@ class ProcessAgent(Process):
continue
prediction, value = self.predict(self.env.current_state)
+
+
action = self.select_action(prediction)
reward, done = self.env.step(action)
reward_sum += reward
| 2 | Mix Static/Dynamic Arguments | 0 | .py | py | bsd-2-clause | neopenx/Dragon |
1939 | <NME> ProcessAgent.py
<BEF> # --------------------------------------------------------
# GA3C for Dragon
# Copyright(c) 2017 SeetaTech
# Written by Ting Pan
# --------------------------------------------------------
from datetime import datetime
from multiprocessing import Process, Queue, Value
import numpy as np
import time
from Config import Config
from Environment import Environment
class Experience(object):
def __init__(self, state, action, prob, reward, done):
self.state = state
self.action = action
self.prob = prob
self.reward = reward
self.done = done
class ProcessAgent(Process):
def __init__(self, id, prediction_q, training_q, episode_log_q):
super(ProcessAgent, self).__init__()
self.id = id
self.prediction_q = prediction_q
self.training_q = training_q
self.episode_log_q = episode_log_q
self.env = Environment()
self.num_actions = self.env.get_num_actions()
self.actions = np.arange(self.num_actions)
self.discount_factor = Config.DISCOUNT
# one frame at a time
self.wait_q = Queue(maxsize=1)
self.exit_flag = Value('i', 0)
@staticmethod
def _accumulate_rewards(experiences, discount_factor, terminal_reward):
reward_sum = terminal_reward
for t in reversed(range(0, len(experiences) - 1)):
r = np.clip(experiences[t].reward, Config.REWARD_MIN, Config.REWARD_MAX)
reward_sum = discount_factor * reward_sum + r
experiences[t].reward = reward_sum
return experiences[:-1]
def convert_data(self, experiences):
x_ = np.array([exp.state for exp in experiences])
a_ = np.eye(self.num_actions)[np.array([exp.action for exp in experiences])].astype(np.float32)
r_ = np.array([exp.reward for exp in experiences], dtype=np.float32) # R
r_ = r_.reshape((-1, 1))
return x_, r_, a_
def predict(self, state):
# put the state in the prediction q
self.prediction_q.put((self.id, state))
# wait for the prediction to come back
p, v = self.wait_q.get()
return p, v
def select_action(self, prediction):
if Config.PLAY_MODE:
action = np.argmax(prediction)
else:
action = np.random.choice(self.actions, p=prediction)
return action
def run_episode(self):
self.env.reset()
done = False
experiences = []
time_count = 0
reward_sum = 0.0
while not done:
# very first few frames
if self.env.current_state is None:
self.env.step(0) # 0 == NOOP
continue
prediction, value = self.predict(self.env.current_state)
action = self.select_action(prediction)
reward, done = self.env.step(action)
reward_sum += reward
exp = Experience(self.env.previous_state, action, prediction, reward, done)
experiences.append(exp)
if done or time_count == Config.TIME_MAX:
terminal_reward = 0 if done else value
updated_exps = ProcessAgent._accumulate_rewards(experiences, self.discount_factor, terminal_reward)
x_, r_, a_ = self.convert_data(updated_exps)
yield x_, r_, a_, reward_sum
# reset the tmax count
time_count = 0
# keep the last experience for the next batch
experiences = [experiences[-1]]
reward_sum = 0.0
time_count += 1
def run(self):
# randomly sleep up to 1 second. helps agents boot smoothly.
time.sleep(np.random.rand())
np.random.seed(np.int32(time.time() % 1 * 1000 + self.id * 10))
while self.exit_flag.value == 0:
total_reward = 0
total_length = 0
for x_, r_, a_, reward_sum in self.run_episode():
total_reward += reward_sum
total_length += len(r_) + 1 # +1 for last frame that we drop
self.training_q.put((x_, r_, a_))
self.episode_log_q.put((datetime.now(), total_reward, total_length))
<MSG> Mix Static/Dynamic Arguments
<DFF> @@ -84,6 +84,8 @@ class ProcessAgent(Process):
continue
prediction, value = self.predict(self.env.current_state)
+
+
action = self.select_action(prediction)
reward, done = self.env.step(action)
reward_sum += reward
| 2 | Mix Static/Dynamic Arguments | 0 | .py | py | bsd-2-clause | neopenx/Dragon |
1940 | <NME> commandline.py
<BEF> """High level command line interface to hitch."""
from subprocess import call, PIPE, STDOUT, Popen
from hitch.click import command, group, argument, option
from os import path, makedirs, listdir, kill, remove
from sys import stderr, stdout, exit, modules, argv
from functools import partial, reduce
from hitch import hitchdir, languagestrings
import shutil
import signal
import copy
class CalledProcessError(Exception):
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
def check_output(command, stdout=PIPE, stderr=PIPE):
"""Re-implemented subprocess.check_output since it is not available < python 2.7."""
return Popen(command, stdout=stdout, stderr=stderr).communicate()[0]
def check_call(command, shell=False):
"""Re-implemented subprocess.check_call since it is not available < python 2.7."""
process = Popen(command, shell=shell)
process.communicate()
if process.returncode != 0:
raise CalledProcessError
return
def stop_everything(sig, frame):
"""Exit hitch."""
exit(1)
def installpackages():
"""Install packages with hitchsystem."""
hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
def update_requirements():
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
stdout.write(languagestrings.UPDATING_REQUIREMENTS)
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
pip_freeze = check_output([pip, "freeze"]).decode('utf8').split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
if not sorted(pip_freeze) == sorted(hitchreqs):
call([pip, "install", "-r", "hitchreqs.txt"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@group()
def cli():
pass
@command()
@option(
'-p', '--python', default=None,
help=languagestrings.SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH
)
@option(
'-v', '--virtualenv', default=None,
help=languagestrings.SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH
if hitchdir.hitch_exists():
stderr.write(languagestrings.HITCH_ALREADY_INITIALIZED)
stderr.flush()
exit(1)
makedirs(".hitch")
exit(1)
virtualenv = check_output(["which", "virtualenv"]).decode('utf8').replace("\n", "")
else:
if path.exists(virtualenv):
if python is None:
python = path.join(path.dirname(virtualenv), "python")
else:
stderr.write("{0} not found.\n".format(virtualenv))
if python is None:
if call(["which", "python3"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_PYTHON3_INSTALLED)
stderr.flush()
exit(1)
python3 = check_output(["which", "python3"]).decode('utf8').replace("\n", "")
else:
if path.exists(python):
python3 = python
else:
stderr.write("{0} not found.\n".format(python))
exit(1)
python_version = check_output([python3, "-V"], stderr=STDOUT).decode('utf8')
replacements = ('Python ', ''), ('\n', '')
str_version = reduce(lambda a, kv: a.replace(*kv), replacements, python_version)
tuple_version = tuple([int(x) for x in str_version.split('.')[:2]])
if tuple_version < (3, 3):
stderr.write(languagestrings.YOU_MUST_HAVE_VERSION_ABOVE_PYTHON33)
exit(1)
if hitchdir.hitch_exists():
hitchdir.check_hitch_directory_integrity()
update_requirements()
exit(0)
makedirs(".hitch")
# Store absolute directory in .hitch directory to guard against the directory being moved
hitch_dir = path.abspath(".hitch")
with open(path.join(hitch_dir, "absdir"), "w") as absdir_handle:
absdir_handle.write(hitch_dir)
pip = path.abspath(path.join(".hitch", "virtualenv", "bin", "pip"))
try:
check_call([
virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3
])
check_call([pip, "install", "--upgrade", "pip"])
check_call([pip, "install", "--upgrade", "setuptools"])
check_call([pip, "install", "unixpackage", "hitchsystem"])
installpackages()
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "hitchtest"])
check_call([pip, "install", "hitchquickstart"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
signal.signal(signal.SIGINT, stop_everything)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
# Generic method to run any installed app in the virtualenv whose name starts with hitch*
hitchdir.check_hitch_directory_integrity()
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{0}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
hitchdir.check_hitch_directory_integrity()
pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
@command()
@argument('package', required=True)
def install(package):
"""Install hitch package."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
package_list = [
p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
if p != "" and "==" in p
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def freeze():
"""List installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
if path.exists(python_bin):
packages = [
package.replace("hitch", "") for package in listdir(
path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
)
if package.startswith("hitch") and package != "hitch"
]
# Add commands that start with "hitch" to the list of commands available (e.g. hitchtest, hitchsmtp)
for package in packages:
cmd = copy.deepcopy(runpackage)
cmd.name = package
try:
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> FEATURE : Changed exit code to zero when running hitch init in a folder where .hitch directory already exists.
<DFF> @@ -76,7 +76,7 @@ def init(python, virtualenv):
if hitchdir.hitch_exists():
stderr.write(languagestrings.HITCH_ALREADY_INITIALIZED)
stderr.flush()
- exit(1)
+ exit(0)
makedirs(".hitch")
| 1 | FEATURE : Changed exit code to zero when running hitch init in a folder where .hitch directory already exists. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1941 | <NME> RedisManagerTest.java
<BEF> ADDFILE
<MSG> add RedisManager test case
<DFF> @@ -0,0 +1,33 @@
+package org.crazycake.shiro;
+
+import static org.junit.Assert.*;
+
+import org.junit.Before;
+import org.junit.Test;
+
+public class RedisManagerTest {
+
+
+ @Test
+ public void testSet(){
+ RedisManager redisManager = new RedisManager();
+ redisManager.setHost("127.0.0.1");
+ redisManager.setPort(6379);
+ redisManager.setExpire(2);
+ redisManager.setTimeout(0);
+ redisManager.init();
+
+
+ String key = "abc";
+ UserMock u = new UserMock();
+ u.setId("123");
+ u.setLocked(true);
+ u.setPassword("111");
+ u.setSalt("222");
+ u.setUsername("jack");
+
+ redisManager.set(key.getBytes(), SerializeUtils.serialize(u));
+ }
+
+
+}
| 33 | add RedisManager test case | 0 | .java | java | mit | alexxiyang/shiro-redis |
1942 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
===========
You can chose these 2 ways to include shiro-redis into your project
* directly download jar file
Download shiro-redis.jar in bin folder and add it into your classpath.
* add maven dependency
```xml
<MSG> Update README.md
<DFF> @@ -7,8 +7,7 @@ How to use it?
===========
You can chose these 2 ways to include shiro-redis into your project
-* directly download jar file
-Download shiro-redis.jar in bin folder and add it into your classpath.
+* use "git clone https://github.com/alexxiyang/shiro-redis.git" to clone project to your local workspace and build jar file by your self
* add maven dependency
```xml
| 1 | Update README.md | 2 | .md | md | mit | alexxiyang/shiro-redis |
1943 | <NME> op_kernel.cu <BEF> #ifdef WITH_CUDA #include <cmath> #include "core/context_cuda.h" #include "core/tensor.h" #include "utils/cuda_device.h" #include "utils/op_kernel.h" #include "utils/math_functions.h" namespace dragon { namespace kernel { template <typename T> __global__ void _Empty() { } template<> void Empty<float, CUDAContext>() { _Empty<float> << <1, 1 >> >(); CUDA_POST_KERNEL_CHECK; } template<> void Empty<float16, CUDAContext>() { _Empty<float16> << <1, 1 >> >(); CUDA_POST_KERNEL_CHECK; } /******************** activation.dropout ********************/ template<typename T> __global__ void _Dropout(const int count, const uint32_t thresh, const T scale, const T* x, const uint32_t* mask, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = x[idx] * (mask[idx] > thresh) * scale; } } template<> void Dropout<float, CUDAContext>(const int count, float prob, float scale, const float* x, uint32_t* mask, float* y, CUDAContext* context) { uint32_t thresh = static_cast<uint32_t>(UINT_MAX * prob); math::RandomUniform<uint32_t, CUDAContext>(count, float(0), float(UINT_MAX), mask); _Dropout<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, thresh, scale, x, mask, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _DropoutGrad(const int count, const uint32_t thresh, const T scale, const T* dy, const uint32_t* mask, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * (mask[idx] > thresh) * scale; } } template<> void DropoutGrad<float, CUDAContext>(const int count, float prob, float scale, const float* dy, const uint32_t* mask, float* dx) { uint32_t thresh = static_cast<uint32_t>(UINT_MAX * prob); _DropoutGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, thresh, scale, dy, mask, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.prelu ********************/ template <typename T> __global__ void _PRelu(const int count, const int channels, const int dim, const T* x, const T* w, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = (x[idx] > 0) * x[idx] + (x[idx] < 0) * x[idx] * w[0]; } } template <typename T> __global__ void _PReluNCHW(const int count, const int channels, const int dim, const T* x, const T* w, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int c = (idx / dim) % channels; y[idx] = (x[idx] > 0) * x[idx] + (x[idx] < 0) * x[idx] * w[c]; } } template <typename T> __global__ void _PReluNHWC(const int count, const int channels, const int dim, const T* x, const T* w, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int c = idx % channels; y[idx] = (x[idx] > 0) * x[idx] + (x[idx] < 0) * x[idx] * w[c]; } } template<> void PRelu<float, CUDAContext>(const int count, const int channels, const int dim, const bool channel_shared, const string& data_format, const float* x, const float* w, float* y) { if (channel_shared) { _PRelu<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, dim, x, w, y); } else { if (data_format == "NCHW") { _PReluNCHW<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, dim, x, w, y); } else if (data_format == "NHWC") { _PReluNHWC<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, dim, x, w, y); } else LOG(FATAL) << "Unknown data format: " << data_format; } CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _PReluGrad(const int count, const int channels, const int dim, const T* dy, const T* x, const T* w, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * ((x[idx] > 0) + (x[idx] <= 0) * w[0]); } } template <typename T> __global__ void _PReluGradNCHW(const int count, const int channels, const int dim, const T* dy, const T* x, const T* w, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int c = (idx / dim) % channels; dx[idx] = dy[idx] * ((x[idx] > 0) + (x[idx] <= 0) * w[c]); } } template <typename T> __global__ void _PReluGradNHWC(const int count, const int channels, const int dim, const T* dy, const T* x, const T* w, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int c = idx % channels; dx[idx] = dy[idx] * ((x[idx] > 0) + (x[idx] <= 0) * w[c]); } } template<> void PReluGrad<float, CUDAContext>(const int count, const int channels, const int dim, const bool channel_shared, const string& data_format, const float* dy, const float* x, const float* w, float* dx) { if (channel_shared) { _PReluGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, dim, dy, x, w, dx); } else { if (data_format == "NCHW") { _PReluGradNCHW<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, dim, dy, x, w, dx); } else if (data_format == "NHWC") { _PReluGradNHWC<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, dim, dy, x, w, dx); } else LOG(FATAL) << "Unknown data format: " << data_format; } CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _PReluWGradBcast(const int count, const int rows, const int row_offset, const T* dy, const T* x, T* bcast_dw) { CUDA_KERNEL_LOOP(idx, count) { bcast_dw[idx] = dy[idx] * x[idx] * (x[idx] <= 0); for (int n = 1; n < rows; n++) { const int cur_idx = idx + n * row_offset; bcast_dw[idx] += dy[cur_idx] * x[cur_idx] * (x[cur_idx] <= 0); } } } template<> void PReluWGrad<float, CUDAContext>(const int rows, const int row_offset, const int channels, const int dim, const bool channel_shared, const string& data_format, const float* dy, const float* x, const float* multiplier, float* bcast_dw, float* dw) { const int cdim = channels * dim; _PReluWGradBcast<float> << < GET_BLOCKS(cdim), CUDA_NUM_THREADS >> >(cdim, rows, row_offset, dy, x, bcast_dw); CUDA_POST_KERNEL_CHECK; if (channel_shared) { float w_sum = math::Dot<float, CUDAContext>(channels * dim, bcast_dw, multiplier); math::AddScalar<float, CUDAContext>(1, w_sum, dw); } else { if (data_format == "NCHW") { math::Gemv<float, CUDAContext>(CblasNoTrans, channels, dim, 1.0, bcast_dw, multiplier, 1.0, dw); } else if (data_format == "NHWC") { math::Gemv<float, CUDAContext>(CblasTrans, dim, channels, 1.0, bcast_dw, multiplier, 1.0, dw); } else LOG(FATAL) << "Unknown data format: " << data_format; } } /******************** activation.elu ********************/ template <typename T> __global__ void _Elu(const int count, const T* x, const float alpha, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = x[idx] > 0 ? x[idx] : alpha * (std::exp(x[idx]) - 1); } } template<> void Elu<float, CUDAContext>(const int count, const float* x, const float alpha, float* y) { _Elu<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, alpha, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _EluGrad(const int count, const T* dy, const T* y, const float alpha, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * ((y[idx] > 0) + (alpha + y[idx]) * (y[idx] <= 0)); } } template<> void EluGrad<float, CUDAContext>(const int count, const float* dy, const float* y, const float alpha, float* dx) { _EluGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, alpha, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.relu ********************/ template <typename T> __global__ void _Relu(const int count, const T* x, const float slope, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = x[idx] > 0 ? x[idx] : x[idx] * slope; } } template<> void Relu<float, CUDAContext>(const int count, const float* x, const float slope, float* y) { _Relu<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, slope, y); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <typename T> __global__ void _ReluHalf(const int count, const half* x, const float slope, half* y) { const half kSlope = __float2half(slope); const half kZero = __float2half(0.0); CUDA_KERNEL_LOOP(idx, count) { #if __CUDA_ARCH__ >= 530 y[idx] = __hgt(x[idx], kZero) ? x[idx] : __hmul(x[idx], kSlope); #endif } } template<> void Relu<float16, CUDAContext>(const int count, const float16* x, const float slope, float16* y) { _ReluHalf<half> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, reinterpret_cast<const half*>(x), slope, reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif template <typename T> __global__ void _ReluGrad(const int count, const T* dy, const T* y, const float slope, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * ((y[idx] > 0) + slope * (y[idx] <= 0)); } } template<> void ReluGrad<float, CUDAContext>(const int count, const float* dy, const float* y, const float slope, float* dx) { _ReluGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, slope, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.selu ********************/ template <typename T> __global__ void _SElu(const int count, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = x[idx] > 0 ? 1.0507 * x[idx] : 1.7581 * (std::exp(x[idx]) - 1); } } template<> void SElu<float, CUDAContext>(const int count, const float* x, float* y) { _SElu<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SEluGrad(const int count, const T* dy, const T* y, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = y[idx] > 0 ? 1.0507 * dy[idx] : (1.7581 + y[idx]) * dy[idx]; } } template<> void SEluGrad<float, CUDAContext>(const int count, const float* dy, const float* y, float* dx) { _SEluGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.sigmoid ********************/ template <typename T> __device__ T _SigmoidUnit(const T x) { return T(1) / (T(1) + exp(-x)); } template <typename T> __global__ void _Sigmoid(const int n, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, n) { y[idx] = _SigmoidUnit<T>(x[idx]); } } template<> void Sigmoid<float, CUDAContext>(const int count, const float* x, float* y) { _Sigmoid<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SigmoidGrad(const int count, const T* dy, const T* y, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * y[idx] * (1 - y[idx]); } } template<> void SigmoidGrad<float, CUDAContext>(const int count, const float* dy, const float* y, float* dx) { _SigmoidGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.softmax ********************/ template <typename T> __global__ void _SoftmaxMaxClass(const int outer_dim, const int classes, const int inner_dim, const T* x, T* scale) { CUDA_KERNEL_LOOP(idx, outer_dim * inner_dim) { int o_idx = idx / inner_dim; int i_idx = idx % inner_dim; T max_val = -FLT_MAX; for (int c = 0; c < classes; c++) max_val = max(x[(o_idx * classes + c) * inner_dim + i_idx], max_val); scale[idx] = max_val; } } template <typename T> __global__ void _SoftmaxSubtract(const int count, const int classes, const int inner_dim, const T* scale, T* y) { CUDA_KERNEL_LOOP(idx, count) { int o_idx = idx / inner_dim / classes; int i_idx = idx % inner_dim; y[idx] -= scale[o_idx * inner_dim + i_idx]; } } template <typename T> __global__ void _SoftmaxExp(const int count, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = std::exp(y[idx]); } } template <typename T> __global__ void _SoftmaxSumClass(const int outer_dim, const int classes, const int inner_dim, const T* y, T* scale) { CUDA_KERNEL_LOOP(idx, outer_dim * inner_dim) { int o_idx = idx / inner_dim; int i_idx = idx % inner_dim; T sum = 0; for (int c = 0; c < classes; c++) sum += y[(o_idx * classes + c) * inner_dim + i_idx]; scale[idx] = sum; } } template <typename T> __global__ void _SoftmaxDiv(const int count, const int classes, const int inner_dim, const T* scale, T* y) { CUDA_KERNEL_LOOP(idx, count) { int o_idx = idx / inner_dim / classes; int i_idx = idx % inner_dim; y[idx] /= scale[o_idx * inner_dim + i_idx]; } } template<> void Softmax<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* sum_multiplier, const float* x, float* scale, float* y, CUDAContext* context) { const int num_preds = inner_dim * outer_dim; _SoftmaxMaxClass<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(outer_dim, classes, inner_dim, x, scale); _SoftmaxSubtract<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, classes, inner_dim, scale, y); _SoftmaxExp<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, y); _SoftmaxSumClass<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(outer_dim, classes, inner_dim, y, scale); _SoftmaxDiv<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, classes, inner_dim, scale, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SoftmaxDot(const int outer_dim, const int classes, const int inner_dim, const T* dy, const T* y, T* scale) { CUDA_KERNEL_LOOP(idx, outer_dim * inner_dim) { int o_idx = idx / inner_dim; int i_idx = idx % inner_dim; T dot = 0; for (int c = 0; c < classes; c++) dot += (y[(o_idx * classes + c) * inner_dim + i_idx] * dy[(o_idx * classes + c) * inner_dim + i_idx]); scale[idx] = dot; } } template<> void SoftmaxGrad<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* sum_multiplier, const float* dy, const float* y, float* scale, float* dx) { const int num_preds = inner_dim * outer_dim; _SoftmaxDot<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(outer_dim, classes, inner_dim, dy, y, scale); _SoftmaxSubtract<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, classes, inner_dim, scale, dx); math::Mul<float, CUDAContext>(count, dx, y, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.tanh ********************/ template <typename T> __global__ void _Tanh(const int count, const T* x, T* y) { CUDA_KERNEL_LOOP(i, count) { y[i] = std::tanh(x[i]); } } template<> void Tanh<float, CUDAContext>(const int count, const float* x, float* y) { _Tanh<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _TanhGrad(const int count, const T* dy, const T* y, T* dx) { CUDA_KERNEL_LOOP(i, count) { dx[i] = dy[i] * (1 - y[i] * y[i]); } } template<> void TanhGrad<float, CUDAContext>(const int count, const float* dy, const float* y, float* dx) { _TanhGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, dx); CUDA_POST_KERNEL_CHECK; } /******************** arithmetic.bias_add ********************/ template <typename T> __global__ void _BiasAdd_NCHW(const int count, const int dim, const int inner_dim, const T* bias, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int bias_idx = (idx / inner_dim) % dim; y[idx] += bias[bias_idx]; } } template <typename T> __global__ void _BiasAdd_NHWC(const int count, const int dim, const int inner_dim, const T* bias, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] += bias[idx % dim]; } } template<> void BiasAdd<float, CUDAContext>(const int count, const int outer_dim, const int dim, const int inner_dim, const string& data_format, const float* bias, const float* bias_multiplier, float* y) { if (data_format == "NCHW") { _BiasAdd_NCHW<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, inner_dim, bias, y); } else if (data_format == "NHWC") { _BiasAdd_NHWC<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, inner_dim, bias, y); } else LOG(FATAL) << "Unknown data format: " << data_format; } /******************** arithmetic.clip ********************/ template <typename T> __global__ void _Clip(const int count, const T low, const T high, const T* x, T* mask, T* y) { CUDA_KERNEL_LOOP(idx, count) { mask[idx] = 1.0; if (x[idx] > high || x[idx] < low) mask[idx] = 0.0; y[idx] = x[idx] > high ? high : x[idx]; y[idx] = x[idx] < low ? low : x[idx]; } } template <> void Clip<float, CUDAContext>(const int count, const float low, const float high, const float* x, float* mask, float* y) { _Clip<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, low, high, x, mask, y); } /******************** arithmetic.scale ********************/ template <typename T> __global__ void _ScaleWithoutBias(const int n, const T* x, const T* scale, const int scale_dim, const int inner_dim, T* y) { CUDA_KERNEL_LOOP(idx, n) { const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = x[idx] * scale[scale_idx]; } } template <typename T> __global__ void _ScaleWithBias(const int n, const T* x, const T* scale, const T* bias, const int scale_dim, const int inner_dim, T* y) { CUDA_KERNEL_LOOP(idx, n) { const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = x[idx] * scale[scale_idx] + bias[scale_idx]; } } template<> void Scale<float, CUDAContext>(const int axis, Tensor* x, Tensor* gamma, Tensor* beta, Tensor* BMul, Tensor* y) { const int count = x->count(); const int inner_dim = x->count(axis + gamma->ndim()); const int scale_dim = gamma->count(); auto* Xdata = x->data<float, CUDAContext>(); auto* Ydata = y->mutable_data<float, CUDAContext>(); auto* Sdata = gamma->data<float, CUDAContext>(); auto* Bdata = beta != nullptr ? beta->data<float, CUDAContext>() : nullptr; if (Bdata != nullptr) _ScaleWithBias<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, Xdata, Sdata, Bdata, scale_dim, inner_dim, Ydata); else _ScaleWithoutBias<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, Xdata, Sdata, scale_dim, inner_dim, Ydata); } #ifdef WITH_CUDA_FP16 template <typename T> __global__ void _ScaleWithoutBiasHalf(const int n, const half* x, const half* scale, const int scale_dim, const int inner_dim, half* y) { CUDA_KERNEL_LOOP(idx, n) { #if __CUDA_ARCH__ >= 530 const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = __hmul(x[idx], scale[scale_idx]); #endif } } template <typename T> __global__ void _ScaleWithBiasHalf(const int n, const half* x, const half* scale, const half* bias, const int scale_dim, const int inner_dim, half* y) { CUDA_KERNEL_LOOP(idx, n) { #if __CUDA_ARCH__ >= 530 const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = __hadd(__hmul(x[idx], scale[scale_idx]), bias[scale_idx]); #endif } } template<> void Scale<float16, CUDAContext>(const int axis, Tensor* x, Tensor* gamma, Tensor* beta, Tensor* BMul, Tensor* y) { const int count = x->count(); const int inner_dim = x->count(axis + gamma->ndim()); const int scale_dim = gamma->count(); auto* Xdata = x->data<float16, CUDAContext>(); auto* Ydata = y->mutable_data<float16, CUDAContext>(); auto* Sdata = gamma->data<float16, CUDAContext>(); auto* Bdata = beta != nullptr ? beta->data<float16, CUDAContext>() : nullptr; if (Bdata != nullptr) _ScaleWithBiasHalf<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, reinterpret_cast<const half*>(Xdata), reinterpret_cast<const half*>(Sdata), reinterpret_cast<const half*>(Bdata), scale_dim, inner_dim, reinterpret_cast<half*>(Ydata)); else _ScaleWithoutBiasHalf<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, reinterpret_cast<const half*>(Xdata), reinterpret_cast<const half*>(Sdata), scale_dim, inner_dim, reinterpret_cast<half*>(Ydata)); } #endif template <> void ScaleGrad<float, CUDAContext>(const int axis, Tensor* dy, Tensor* gamma, Tensor* dx) { const int count = dx->count(); const int inner_dim = dx->count(axis + gamma->ndim()); const int scale_dim = gamma->count(); auto* dYdata = dy->data<float, CUDAContext>(); auto* dXdata = dx->mutable_data<float, CUDAContext>(); auto* Sdata = gamma->data<float, CUDAContext>(); _ScaleWithoutBias<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dYdata, Sdata, scale_dim, inner_dim, dXdata); } /******************** cast.float2half ********************/ #ifdef WITH_CUDA_FP16 template <typename T> __global__ void _FloatToHalfKernel(const int count, const float* x, half* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = __float2half(x[idx]); } } template <> void Float2Half<float, CUDAContext>(const int count, const float* x, float16* y) { _FloatToHalfKernel<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif /******************** control_flow.compare ********************/ template <typename T> __global__ void _Equal(const int count, const T* a, const T* b, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = fabs(a[idx] - b[idx]) < FLT_EPSILON ? 1.0 : 0.0; } } template <> void Equal<float, CUDAContext>(const int count, const float* a, const float* b, float* y) { _Equal<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, a, b, y); CUDA_POST_KERNEL_CHECK; } /******************** loss.l1_loss ********************/ template <typename T> __global__ void _AbsGrad(const int count, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const T val = dy[idx]; // val > 0: 1 | val == 0: 0 | val < 0: -1 dx[idx] = (val > T(0)) - (val < T(0)); } } template<> void AbsGrad<float, CUDAContext>(const int count, const float* dy, float* dx) { _AbsGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** loss.sigmoid_cross_entropy ********************/ template <typename T> __global__ void _SigmoidCrossEntropy(const int count, const T* x, const T* target, T* loss, T* valid) { CUDA_KERNEL_LOOP(idx, count) { if (target[idx] < 0) { loss[idx] = 0.; valid[idx] = 0.; } else { loss[idx] = std::log(1 + std::exp(x[idx] - 2 * x[idx] * (x[idx] >= 0))) + x[idx] * ((x[idx] >= 0) - target[idx]); valid[idx] = 1.; } } } template <> void SigmoidCrossEntropy<float, CUDAContext>(const int count, const float* x, const float* target, float* loss, float* valid) { _SigmoidCrossEntropy<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, target, loss, valid); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SigmoidCrossEntropyGrad(const int count, const T* x, const T* target, T* dx, T* valid) { CUDA_KERNEL_LOOP(idx, count) { if (target[idx] < 0) { dx[idx] = 0.; valid[idx] = 0.; } else { dx[idx] = 1. / (1. + expf(-x[idx])) - target[idx]; valid[idx] = 1.; } } } template <> void SigmoidCrossEntropyGrad<float, CUDAContext>(const int count, const float* x, const float* target, float* dx, float* valid) { _SigmoidCrossEntropyGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, target, dx, valid); CUDA_POST_KERNEL_CHECK; } /******************** loss.smooth_l1_loss ********************/ template <typename T> __global__ void _SmoothL1(const int count, const float sigma2, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const T val = x[idx]; const T abs_val = abs(val); if (abs_val < 1.0 / sigma2) y[idx] = 0.5 * val * val * sigma2; else y[idx] = abs_val - 0.5 / sigma2; } } template<> void SmoothL1<float, CUDAContext>(const int count, const float sigma2, const float* x, float* y) { _SmoothL1<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, sigma2, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SmoothL1Grad(const int count, const float sigma2, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const T val = dy[idx]; const T abs_val = abs(val); if (abs_val < 1.0 / sigma2) dx[idx] = val * sigma2; // val > 0: 1 | val == 0: 0 | val < 0: -1 else dx[idx] = (val > T(0)) - (val < T(0)); } } template<> void SmoothL1Grad<float, CUDAContext>(const int count, const float sigma2, const float* dy, float* dx) { _SmoothL1Grad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, sigma2, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** loss.softmax_cross_entropy ********************/ template <typename T> __global__ void _SoftmaxCrossEntropy(const int count, const T* prob, const T* target, T* loss) { CUDA_KERNEL_LOOP(idx, count) { loss[idx] = -target[idx] * log(max(prob[idx], FLT_MIN)); } } template <> void SoftmaxCrossEntropy<float, CUDAContext>(const int count, const float* prob, const float* target, float* loss) { _SoftmaxCrossEntropy<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, prob, target, loss); CUDA_POST_KERNEL_CHECK; } /******************** loss.sparse_softmax_cross_entropy ********************/ template <typename T> __global__ void _SparseSoftmaxCrossEntropy(const int count, const T* prob, const T* labels, T* loss, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) { if (label == ignores[k]) { loss[idx] = valid[idx] = 0; break; } } if (k == ignore_num) { loss[idx] = -log(max(prob[(o_idx * classes + label) * inner_dim + i_idx], FLT_MIN)); valid[idx] = 1; } } } template <> void SparseSoftmaxCrossEntropy<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* prob, const float* labels, float* loss, float* valid, Tensor* ignore) { const int* ignores = ignore->count() > 0 ? ignore->data<int, CUDAContext>() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxCrossEntropy<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, prob, labels, loss, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SparseSoftmaxCrossEntropyGrad(const int count, const T* prob, const T* labels, T* dx, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) if (label == ignores[k]) break; if (k != ignore_num) { for (int c = 0; c < classes; c++) dx[(o_idx * classes + c) * inner_dim + i_idx] = 0; valid[idx] = 0; } else { dx[(o_idx * classes + label) * inner_dim + i_idx] -= 1; valid[idx] = 1; } } } template<> void SparseSoftmaxCrossEntropyGrad<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* prob, const float* labels, float* valid, Tensor* ignore, float* dXdata) { const int* ignores = ignore->count() > 0 ? ignore->data <int, CUDAContext >() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxCrossEntropyGrad<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, prob, labels, dXdata, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } /******************** loss.sparse_softmax_focal_loss ********************/ template <typename T> __global__ void _SparseSoftmaxFocalScale(const int count, const float gamma, const T* prob, T* scale) { CUDA_KERNEL_LOOP(idx, count) { scale[idx] = std::pow((1.0f - prob[idx]), gamma); } } template <typename T> __global__ void _SparseSoftmaxFocalLoss(const int count, const float pos_alpha, const float neg_alpha, const int neg_id, T* scale, const T* prob, const T* labels, T* loss, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) { if (label == ignores[k]) { loss[idx] = valid[idx] = 0; break; } } if (k == ignore_num) { const int t_ = (o_idx * classes + label) * inner_dim + i_idx; scale[t_] = label > neg_id ? pos_alpha * scale[t_] : neg_alpha * scale[t_]; loss[idx] = -scale[t_] * std::log(max(prob[t_], FLT_MIN)); valid[idx] = label > neg_id ? 1 : 0; } } } template <> void SparseSoftmaxFocalLoss<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float pos_alpha, const float neg_alpha, const float gamma, const int neg_id, const float* prob, const float* labels, float* scale, float* loss, float* valid, Tensor* ignore) { const int* ignores = ignore->count() > 0 ? ignore->data<int, CUDAContext>() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxFocalScale<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, gamma, prob, scale); _SparseSoftmaxFocalLoss<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, pos_alpha, neg_alpha, neg_id, scale, prob, labels, loss, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SparseSoftmaxFocalLossGrad(const int count, const float gamma, const int neg_id, const float eps, const T* scale, const T* prob, const T* labels, T* dx, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) if (label == ignores[k]) break; if (k != ignore_num) { for (int c = 0; c < classes; c++) dx[(o_idx * classes + c) * inner_dim + i_idx] = 0; valid[idx] = 0; } else { const int t_ = (o_idx * classes + label) * inner_dim + i_idx; T grad = -gamma * (scale[t_] / max((1.0f - prob[t_]), eps)) * std::log(max(prob[t_], FLT_MIN)) * prob[t_] + scale[t_]; for (int c = 0; c < classes; c++) { const int i_ = (o_idx * classes + c) * inner_dim + i_idx; if (c == label) { dx[i_] = grad * (prob[t_] - 1); } else { dx[i_] = grad * prob[i_]; } } valid[idx] = label > neg_id ? 1 : 0; } } } template<> void SparseSoftmaxFocalLossGrad<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float gamma, const int neg_id, const float eps, const float* scale, const float* prob, const float* labels, float* valid, Tensor* ignore, float* dXdata) { const int* ignores = ignore->count() > 0 ? ignore->data <int, CUDAContext >() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxFocalLossGrad<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, gamma, neg_id, eps, scale, prob, labels, dXdata, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } /******************** misc.image_data ********************/ template <typename Tx, typename Ty> __global__ void _ImageData_NCHW(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int w = idx % W; const int h = (idx / W) % H; const int c = (idx / W / H) % C; const int n = idx / W / H / C; Ty raw_value = x[((n * H + h) * W + w) * C + c]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = raw_value; } } template <typename Tx, typename Ty> __global__ void _ImageData_NHWC(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int c = idx % C; Ty raw_value = x[idx]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = raw_value; } } template <typename Tx, typename Ty> __global__ void _ImageDataHalf_NCHW(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int w = idx % W; const int h = (idx / W) % H; const int c = (idx / W / H) % C; const int n = idx / W / H / C; float raw_value = x[((n * H + h) * W + w) * C + c]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = __float2half(raw_value); } } template <typename Tx, typename Ty> __global__ void _ImageDataHalf_NHWC(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int c = idx % C; float raw_value = x[idx]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = __float2half(raw_value); } } template <> void ImageData<float, float, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const float* x, float* y) { if (data_format == "NCHW") { _ImageData_NCHW<float, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else if (data_format == "NHWC") { _ImageData_NHWC<float, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else LOG(FATAL) << "Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } template <> void ImageData<uint8_t, float, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const uint8_t* x, float* y) { if (data_format == "NCHW") { _ImageData_NCHW<uint8_t, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else if (data_format == "NHWC") { _ImageData_NHWC<uint8_t, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else LOG(FATAL) << "Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void ImageData<float, float16, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const float* x, float16* y) { if (data_format == "NCHW") { _ImageDataHalf_NCHW<float, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else if (data_format == "NHWC") { _ImageDataHalf_NHWC<float, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else LOG(FATAL) << "Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } template <> void ImageData<uint8_t, float16, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const uint8_t* x, float16* y) { if (data_format == "NCHW") { _ImageDataHalf_NCHW<uint8_t, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else if (data_format == "NHWC") { _ImageDataHalf_NHWC<uint8_t, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else LOG(FATAL) << "Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } #endif /******************** ndarray.argmax ********************/ template <typename T> __global__ void _Arange(const int count, const int start, const int step, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = start + idx * step; } } template<> void Arange<float, CUDAContext>(const int count, const int start, const int step, float* y) { _Arange<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, start, step, y); CUDA_POST_KERNEL_CHECK; } template<> void Arange<int, CUDAContext>(const int count, const int start, const int step, int* y) { _Arange<int> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, start, step, y); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.argmax ********************/ template <typename T> __global__ void _Argmax(const int count, const int axis_dim, const int inner_dim, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { T max_val = -FLT_MAX; int max_idx = -1; for (int j = 0; j < axis_dim; ++j) { const T val = x[(idx / inner_dim * axis_dim + j) * inner_dim + idx % inner_dim]; if (val > max_val) { max_val = val; max_idx = j; } } y[idx] = max_idx; } } template<> void Argmax<float, CUDAContext>(const int count, const int axis_dim, const int inner_dim, const int top_k, const float* x, float* y) { CHECK_EQ(top_k, 1) << "top_k > 1 is not supported with CUDA"; _Argmax<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, axis_dim, inner_dim, x, y); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.argmin ********************/ template <typename T> __global__ void _Argmin(const int count, const int axis_dim, } } template <> void CanonicalAxis<float, CUDAContext>(const int count, const int dim, float* y) { _CanonicalAxis<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _At(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const T* indices, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int outer_idx = idx / inner_dim / y_slice_dim; const int axis_dim, const int inner_dim, const int top_k, const float* x, float* y) { CHECK_EQ(top_k, 1) << "top_k > 1 is not supported with CUDA"; } } template <> void At<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const float* indices, const float* x, float* y, CUDAContext* context) { _At<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _AtGrad(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const T* indices, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int outer_idx = idx / inner_dim / y_slice_dim; const int y_slice_dim, const int* indices, const float* x, float* y, CUDAContext* context) { _Gather<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, } } template <> void AtGrad<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const float* indices, const float* dy, float* dx, CUDAContext* context) { _AtGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, dy, dx); CUDA_POST_KERNEL_CHECK; } const int slice_idx = idx % inner_dim; const int y_idx_offset = (idx / inner_dim) % y_slice_dim; const int x_idx_offset = indices[y_idx_offset]; const int x_idx = (outer_idx * x_slice_dim + x_idx_offset) * inner_dim + slice_idx; atomicAdd(dx + x_idx, dy[idx]); } } template <> void GatherGrad<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const float* dy, float* dx) { _GatherGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, dy, dx); CUDA_POST_KERNEL_CHECK; } template <> void GatherGrad<int, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const int* dy, int* dx) { _GatherGrad<int> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.concat ********************/ template <typename T> __global__ void _Concat(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = x_concat_dim * inner_dim; const int outer_idx = idx / tmp; const int concat_idx = idx % tmp; const int y_idx = (outer_idx * y_concat_dim + concat_offset) * inner_dim + concat_idx; y[y_idx] = x[idx]; } } template <> void Concat<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float* x, float* y, CUDAContext* context) { _Concat<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, x, y); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void Concat<float16, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float16* x, float16* y, CUDAContext* context) { _Concat<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, reinterpret_cast<const half*>(x), reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif template <typename T> __global__ void _ConcatGrad(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = x_concat_dim * inner_dim; const int outer_idx = idx / tmp; const int concat_idx = idx % tmp; const int y_idx = (outer_idx * y_concat_dim + concat_offset) * inner_dim + concat_idx; dx[idx] = dy[y_idx]; } } template <> void ConcatGrad<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float* dy, float* dx, CUDAContext* context) { _ConcatGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, dy, dx); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void ConcatGrad<float16, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float16* dy, float16* dx, CUDAContext* context) { _ConcatGrad<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, reinterpret_cast<const half*>(dy), reinterpret_cast<half*>(dx)); CUDA_POST_KERNEL_CHECK; } #endif /******************** ndarray.crop ********************/ template<typename T> __global__ void _Crop1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; y[idx] = x[(o * dim + ex_d + start) * inner_dim + i]; } } template<> void Crop1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const float* x, float* y, CUDAContext* context) { _Crop1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, start, x, y); CUDA_POST_KERNEL_CHECK; } template<typename T> __global__ void _Crop1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const int end, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int d = (idx / inner_dim) % dim; const int o = idx / inner_dim / dim; if (d >= start && d < end) dx[idx] = dy[(o * ex_dim + d - start) * inner_dim + i]; } } template<> void Crop1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const int end, const float* dy, float* dx, CUDAContext* context) { _Crop1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, start, end, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.pad ********************/ template <typename T> __global__ void _ConstPad1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T value, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; const int d = ex_d - pad_l; y[idx] = (d < 0 || d >= dim) ? value : x[(o * dim + d) * inner_dim + i]; } } template <> void ConstPad1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float value, const float* x, float* y, CUDAContext* context) { _ConstPad1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, value, x, y); } template <typename T> __global__ void _ReflectPad1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; int d = ex_d - pad_l; d = max(d, -d); d = min(d, 2 * dim - d - 2); y[idx] = x[(o * dim + d) * inner_dim + i]; } } template <> void ReflectPad1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* x, float* y, CUDAContext* context) { _ReflectPad1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, x, y); } template <typename T> __global__ void _EdgePad1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; const int d = min(dim - 1, max(ex_d - pad_l, 0)); y[idx] = x[(o * dim + d) * inner_dim + i]; } } template <> void EdgePad1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* x, float* y, CUDAContext* context) { _EdgePad1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, x, y); } template <typename T> __global__ void _ConstPad1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % dim + pad_l; const int o = idx / inner_dim / dim; dx[idx] = dy[(o * ex_dim + ex_d) * inner_dim + i]; } } template <> void ConstPad1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* dy, float* dx, CUDAContext* context) { _ConstPad1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, dy, dx); } template <typename T> __global__ void _ReflectPad1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; int d = ex_d - pad_l; d = max(d, -d); d = min(d, 2 * dim - d - 2); atomicAdd(&dx[(o * dim + d) * inner_dim + i], dy[idx]); } } template <> void ReflectPad1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* dy, float* dx) { _ReflectPad1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, dy, dx); } template <typename T> __global__ void _EdgePad1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; const int d = min(dim - 1, max(ex_d - pad_l, 0)); atomicAdd(&dx[(o * dim + d) * inner_dim + i], dy[idx]); } } template <> void EdgePad1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* dy, float* dx, CUDAContext* context) { _EdgePad1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, dy, dx); } /******************** ndarray.one_hot ********************/ template <typename T> __global__ void _OneHot(const int count, const int depth, const int on_value, const float* x, float* y) { CUDA_KERNEL_LOOP(idx, count) { const int val = x[idx]; y[idx * depth + val] = on_value; } } template <> void OneHot<float, CUDAContext>(const int count, const int depth, const int on_value, const float* x, float* y) { _OneHot<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, depth, on_value, x, y); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.reduce ********************/ template <typename T> __global__ void _Sum(const int count, const int axis_dim, const int inner_dim, const T* x, float* y) { CUDA_KERNEL_LOOP(idx, count) { T sum_val = 0.0; for (int j = 0; j < axis_dim; j++) sum_val += x[(idx / inner_dim * axis_dim + j) * inner_dim + idx % inner_dim]; y[idx] = sum_val; } } template<> void Sum<float, CUDAContext>(const int count, const int axis_dim, const int inner_dim, const float* x, float* y) { _Sum<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, axis_dim, inner_dim, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SumGrad(const int count, const int axis_dim, const int inner_dim, const T coeff, const T* dy, float* dx) { CUDA_KERNEL_LOOP(idx, count) { for (int j = 0; j < axis_dim; j++) dx[(idx / inner_dim * axis_dim + j) * inner_dim + idx % inner_dim] = dy[idx] * coeff; } } template<> void SumGrad<float, CUDAContext>(const int count, const int axis_dim, const int inner_dim, const float coeff, const float* dy, float* dx) { _SumGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, axis_dim, inner_dim, coeff, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.repeat ********************/ template <typename T> __global__ void _Repeat(const int count, const int inner_dim, const int repeats, const int dim, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % inner_dim; const int b = (idx / inner_dim / repeats) % dim; const int n = idx / inner_dim / repeats / dim; const int x_idx = (n * dim + b) * inner_dim + d; y[idx] = x[x_idx]; } } template <> void Repeat<float, CUDAContext>(const int count, const int outer_dim, const int dim, const int inner_dim, const int repeats, const float* x, float* y, CUDAContext* context) { _Repeat<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, inner_dim, repeats, dim, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _RepeatGrad(const int count, const int inner_dim, const int repeats, const int dim, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % inner_dim; const int b = (idx / inner_dim) % dim; const int n = idx / inner_dim / dim; T gradient = 0; for (int t = 0; t < repeats; t++) gradient += dy[(((n * dim + b) * repeats) + t) * inner_dim + d]; dx[idx] = gradient; } } template <> void RepeatGrad<float, CUDAContext>(const int count, const int outer_dim, const int dim, const int inner_dim, const int repeats, const float* dy, float* dx, CUDAContext* context) { _RepeatGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, inner_dim, repeats, dim, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.slice ********************/ template <typename T> __global__ void _Slice(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = y_slice_dim * inner_dim; const int outer_idx = idx / tmp; const int slice_idx = idx % tmp; const int x_idx = (outer_idx * x_slice_dim + slice_offset) * inner_dim + slice_idx; y[idx] = x[x_idx]; } } template <> void Slice<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const float* x, float* y, CUDAContext* context) { _Slice<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, slice_offset, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SliceGrad(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = y_slice_dim * inner_dim; const int outer_idx = idx / tmp; const int slice_idx = idx % tmp; const int x_idx = (outer_idx * x_slice_dim + slice_offset) * inner_dim + slice_idx; dx[x_idx] = dy[idx]; } } template <> void SliceGrad<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const float* dy, float* dx, CUDAContext* context) { _SliceGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, slice_offset, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.tile ********************/ template <typename T> __global__ void _Tile(const int count, const int ex_inner_dim, const int multiple, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % ex_inner_dim; const int n = idx / ex_inner_dim / multiple; const int x_idx = n * ex_inner_dim + d; y[idx] = x[x_idx]; } } template <> void Tile<float, CUDAContext>(const int count, const int outer_dim, const int ex_inner_dim, const int multiple, const float* x, float* y, CUDAContext* context) { _Tile<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ex_inner_dim, multiple, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _TileGrad(const int count, const int ex_inner_dim, const int multiple, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % ex_inner_dim; const int n = idx / ex_inner_dim; T gradient = 0; for (int t = 0; t < multiple; t++) gradient += dy[(n * multiple + t) * ex_inner_dim + d]; dx[idx] = gradient; } } template <> void TileGrad<float, CUDAContext>(const int count, const int outer_dim, const int ex_inner_dim, const int multiple, const float* dy, float* dx, CUDAContext* context) { _TileGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ex_inner_dim, multiple, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.transpose ********************/ template <typename T> __global__ void _Transpose(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { int x_idx = 0, y_idx = idx; for (int j = 0; j < ndim; ++j) { int k = order[j]; x_idx += (y_idx / new_steps[j]) * old_steps[k]; y_idx %= new_steps[j]; } y[idx] = x[x_idx]; } } template <> void Transpose<float, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float* x, float* y) { _Transpose<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, x, y); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void Transpose<float16, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float16* x, float16* y) { _Transpose<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, reinterpret_cast<const half*>(x), reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif template <typename T> __global__ void _TransposeGrad(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { int x_idx = 0, y_idx = idx; for (int j = 0; j < ndim; ++j) { int k = order[j]; x_idx += (y_idx / new_steps[j]) * old_steps[k]; y_idx %= new_steps[j]; } dx[x_idx] = dy[idx]; } } template <> void TransposeGrad<float, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float* dy, float* dx) { _TransposeGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, dy, dx); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void TransposeGrad<float16, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float16* dy, float16* dx) { _TransposeGrad<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, reinterpret_cast<const half*>(dy), reinterpret_cast<half*>(dx)); CUDA_POST_KERNEL_CHECK; } #endif /******************** recurrent.lstm_uint ********************/ template <typename T> __global__ void _LSTMUnitAct(const int count, const int channels, const int g_offset, const int x_offset, const T* x, T* x_act) { CUDA_KERNEL_LOOP(idx, count) { const int ch_4 = idx % x_offset; if (ch_4 < g_offset) x_act[idx] = _SigmoidUnit<float>(x[idx]); else x_act[idx] = std::tanh(x[idx]); } } template <typename T> __global__ void _LSTMUnit(const int count, const int channels, const int o_offset, const int g_offset, const int x_offset, const T* c_1, T* x_act, const T* cont, T* c, T* h) { CUDA_KERNEL_LOOP(idx, count) { const int n = idx / channels; const int ch = idx % channels; T* x_act_ = x_act + n * x_offset; const T i = x_act_[ch]; if (cont != nullptr && cont[n] != T(1)) x_act_[channels + ch] *= cont[n]; const T f = x_act_[channels + ch]; const T o = x_act_[o_offset + ch]; const T g | 84 | Refactor Shape Module | 50 | .cu | cu | bsd-2-clause | neopenx/Dragon |
1944 | <NME> op_kernel.cu <BEF> #ifdef WITH_CUDA #include <cmath> #include "core/context_cuda.h" #include "core/tensor.h" #include "utils/cuda_device.h" #include "utils/op_kernel.h" #include "utils/math_functions.h" namespace dragon { namespace kernel { template <typename T> __global__ void _Empty() { } template<> void Empty<float, CUDAContext>() { _Empty<float> << <1, 1 >> >(); CUDA_POST_KERNEL_CHECK; } template<> void Empty<float16, CUDAContext>() { _Empty<float16> << <1, 1 >> >(); CUDA_POST_KERNEL_CHECK; } /******************** activation.dropout ********************/ template<typename T> __global__ void _Dropout(const int count, const uint32_t thresh, const T scale, const T* x, const uint32_t* mask, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = x[idx] * (mask[idx] > thresh) * scale; } } template<> void Dropout<float, CUDAContext>(const int count, float prob, float scale, const float* x, uint32_t* mask, float* y, CUDAContext* context) { uint32_t thresh = static_cast<uint32_t>(UINT_MAX * prob); math::RandomUniform<uint32_t, CUDAContext>(count, float(0), float(UINT_MAX), mask); _Dropout<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, thresh, scale, x, mask, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _DropoutGrad(const int count, const uint32_t thresh, const T scale, const T* dy, const uint32_t* mask, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * (mask[idx] > thresh) * scale; } } template<> void DropoutGrad<float, CUDAContext>(const int count, float prob, float scale, const float* dy, const uint32_t* mask, float* dx) { uint32_t thresh = static_cast<uint32_t>(UINT_MAX * prob); _DropoutGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, thresh, scale, dy, mask, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.prelu ********************/ template <typename T> __global__ void _PRelu(const int count, const int channels, const int dim, const T* x, const T* w, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = (x[idx] > 0) * x[idx] + (x[idx] < 0) * x[idx] * w[0]; } } template <typename T> __global__ void _PReluNCHW(const int count, const int channels, const int dim, const T* x, const T* w, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int c = (idx / dim) % channels; y[idx] = (x[idx] > 0) * x[idx] + (x[idx] < 0) * x[idx] * w[c]; } } template <typename T> __global__ void _PReluNHWC(const int count, const int channels, const int dim, const T* x, const T* w, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int c = idx % channels; y[idx] = (x[idx] > 0) * x[idx] + (x[idx] < 0) * x[idx] * w[c]; } } template<> void PRelu<float, CUDAContext>(const int count, const int channels, const int dim, const bool channel_shared, const string& data_format, const float* x, const float* w, float* y) { if (channel_shared) { _PRelu<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, dim, x, w, y); } else { if (data_format == "NCHW") { _PReluNCHW<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, dim, x, w, y); } else if (data_format == "NHWC") { _PReluNHWC<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, dim, x, w, y); } else LOG(FATAL) << "Unknown data format: " << data_format; } CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _PReluGrad(const int count, const int channels, const int dim, const T* dy, const T* x, const T* w, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * ((x[idx] > 0) + (x[idx] <= 0) * w[0]); } } template <typename T> __global__ void _PReluGradNCHW(const int count, const int channels, const int dim, const T* dy, const T* x, const T* w, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int c = (idx / dim) % channels; dx[idx] = dy[idx] * ((x[idx] > 0) + (x[idx] <= 0) * w[c]); } } template <typename T> __global__ void _PReluGradNHWC(const int count, const int channels, const int dim, const T* dy, const T* x, const T* w, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int c = idx % channels; dx[idx] = dy[idx] * ((x[idx] > 0) + (x[idx] <= 0) * w[c]); } } template<> void PReluGrad<float, CUDAContext>(const int count, const int channels, const int dim, const bool channel_shared, const string& data_format, const float* dy, const float* x, const float* w, float* dx) { if (channel_shared) { _PReluGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, dim, dy, x, w, dx); } else { if (data_format == "NCHW") { _PReluGradNCHW<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, dim, dy, x, w, dx); } else if (data_format == "NHWC") { _PReluGradNHWC<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, dim, dy, x, w, dx); } else LOG(FATAL) << "Unknown data format: " << data_format; } CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _PReluWGradBcast(const int count, const int rows, const int row_offset, const T* dy, const T* x, T* bcast_dw) { CUDA_KERNEL_LOOP(idx, count) { bcast_dw[idx] = dy[idx] * x[idx] * (x[idx] <= 0); for (int n = 1; n < rows; n++) { const int cur_idx = idx + n * row_offset; bcast_dw[idx] += dy[cur_idx] * x[cur_idx] * (x[cur_idx] <= 0); } } } template<> void PReluWGrad<float, CUDAContext>(const int rows, const int row_offset, const int channels, const int dim, const bool channel_shared, const string& data_format, const float* dy, const float* x, const float* multiplier, float* bcast_dw, float* dw) { const int cdim = channels * dim; _PReluWGradBcast<float> << < GET_BLOCKS(cdim), CUDA_NUM_THREADS >> >(cdim, rows, row_offset, dy, x, bcast_dw); CUDA_POST_KERNEL_CHECK; if (channel_shared) { float w_sum = math::Dot<float, CUDAContext>(channels * dim, bcast_dw, multiplier); math::AddScalar<float, CUDAContext>(1, w_sum, dw); } else { if (data_format == "NCHW") { math::Gemv<float, CUDAContext>(CblasNoTrans, channels, dim, 1.0, bcast_dw, multiplier, 1.0, dw); } else if (data_format == "NHWC") { math::Gemv<float, CUDAContext>(CblasTrans, dim, channels, 1.0, bcast_dw, multiplier, 1.0, dw); } else LOG(FATAL) << "Unknown data format: " << data_format; } } /******************** activation.elu ********************/ template <typename T> __global__ void _Elu(const int count, const T* x, const float alpha, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = x[idx] > 0 ? x[idx] : alpha * (std::exp(x[idx]) - 1); } } template<> void Elu<float, CUDAContext>(const int count, const float* x, const float alpha, float* y) { _Elu<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, alpha, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _EluGrad(const int count, const T* dy, const T* y, const float alpha, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * ((y[idx] > 0) + (alpha + y[idx]) * (y[idx] <= 0)); } } template<> void EluGrad<float, CUDAContext>(const int count, const float* dy, const float* y, const float alpha, float* dx) { _EluGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, alpha, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.relu ********************/ template <typename T> __global__ void _Relu(const int count, const T* x, const float slope, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = x[idx] > 0 ? x[idx] : x[idx] * slope; } } template<> void Relu<float, CUDAContext>(const int count, const float* x, const float slope, float* y) { _Relu<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, slope, y); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <typename T> __global__ void _ReluHalf(const int count, const half* x, const float slope, half* y) { const half kSlope = __float2half(slope); const half kZero = __float2half(0.0); CUDA_KERNEL_LOOP(idx, count) { #if __CUDA_ARCH__ >= 530 y[idx] = __hgt(x[idx], kZero) ? x[idx] : __hmul(x[idx], kSlope); #endif } } template<> void Relu<float16, CUDAContext>(const int count, const float16* x, const float slope, float16* y) { _ReluHalf<half> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, reinterpret_cast<const half*>(x), slope, reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif template <typename T> __global__ void _ReluGrad(const int count, const T* dy, const T* y, const float slope, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * ((y[idx] > 0) + slope * (y[idx] <= 0)); } } template<> void ReluGrad<float, CUDAContext>(const int count, const float* dy, const float* y, const float slope, float* dx) { _ReluGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, slope, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.selu ********************/ template <typename T> __global__ void _SElu(const int count, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = x[idx] > 0 ? 1.0507 * x[idx] : 1.7581 * (std::exp(x[idx]) - 1); } } template<> void SElu<float, CUDAContext>(const int count, const float* x, float* y) { _SElu<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SEluGrad(const int count, const T* dy, const T* y, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = y[idx] > 0 ? 1.0507 * dy[idx] : (1.7581 + y[idx]) * dy[idx]; } } template<> void SEluGrad<float, CUDAContext>(const int count, const float* dy, const float* y, float* dx) { _SEluGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.sigmoid ********************/ template <typename T> __device__ T _SigmoidUnit(const T x) { return T(1) / (T(1) + exp(-x)); } template <typename T> __global__ void _Sigmoid(const int n, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, n) { y[idx] = _SigmoidUnit<T>(x[idx]); } } template<> void Sigmoid<float, CUDAContext>(const int count, const float* x, float* y) { _Sigmoid<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SigmoidGrad(const int count, const T* dy, const T* y, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * y[idx] * (1 - y[idx]); } } template<> void SigmoidGrad<float, CUDAContext>(const int count, const float* dy, const float* y, float* dx) { _SigmoidGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.softmax ********************/ template <typename T> __global__ void _SoftmaxMaxClass(const int outer_dim, const int classes, const int inner_dim, const T* x, T* scale) { CUDA_KERNEL_LOOP(idx, outer_dim * inner_dim) { int o_idx = idx / inner_dim; int i_idx = idx % inner_dim; T max_val = -FLT_MAX; for (int c = 0; c < classes; c++) max_val = max(x[(o_idx * classes + c) * inner_dim + i_idx], max_val); scale[idx] = max_val; } } template <typename T> __global__ void _SoftmaxSubtract(const int count, const int classes, const int inner_dim, const T* scale, T* y) { CUDA_KERNEL_LOOP(idx, count) { int o_idx = idx / inner_dim / classes; int i_idx = idx % inner_dim; y[idx] -= scale[o_idx * inner_dim + i_idx]; } } template <typename T> __global__ void _SoftmaxExp(const int count, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = std::exp(y[idx]); } } template <typename T> __global__ void _SoftmaxSumClass(const int outer_dim, const int classes, const int inner_dim, const T* y, T* scale) { CUDA_KERNEL_LOOP(idx, outer_dim * inner_dim) { int o_idx = idx / inner_dim; int i_idx = idx % inner_dim; T sum = 0; for (int c = 0; c < classes; c++) sum += y[(o_idx * classes + c) * inner_dim + i_idx]; scale[idx] = sum; } } template <typename T> __global__ void _SoftmaxDiv(const int count, const int classes, const int inner_dim, const T* scale, T* y) { CUDA_KERNEL_LOOP(idx, count) { int o_idx = idx / inner_dim / classes; int i_idx = idx % inner_dim; y[idx] /= scale[o_idx * inner_dim + i_idx]; } } template<> void Softmax<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* sum_multiplier, const float* x, float* scale, float* y, CUDAContext* context) { const int num_preds = inner_dim * outer_dim; _SoftmaxMaxClass<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(outer_dim, classes, inner_dim, x, scale); _SoftmaxSubtract<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, classes, inner_dim, scale, y); _SoftmaxExp<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, y); _SoftmaxSumClass<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(outer_dim, classes, inner_dim, y, scale); _SoftmaxDiv<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, classes, inner_dim, scale, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SoftmaxDot(const int outer_dim, const int classes, const int inner_dim, const T* dy, const T* y, T* scale) { CUDA_KERNEL_LOOP(idx, outer_dim * inner_dim) { int o_idx = idx / inner_dim; int i_idx = idx % inner_dim; T dot = 0; for (int c = 0; c < classes; c++) dot += (y[(o_idx * classes + c) * inner_dim + i_idx] * dy[(o_idx * classes + c) * inner_dim + i_idx]); scale[idx] = dot; } } template<> void SoftmaxGrad<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* sum_multiplier, const float* dy, const float* y, float* scale, float* dx) { const int num_preds = inner_dim * outer_dim; _SoftmaxDot<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(outer_dim, classes, inner_dim, dy, y, scale); _SoftmaxSubtract<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, classes, inner_dim, scale, dx); math::Mul<float, CUDAContext>(count, dx, y, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.tanh ********************/ template <typename T> __global__ void _Tanh(const int count, const T* x, T* y) { CUDA_KERNEL_LOOP(i, count) { y[i] = std::tanh(x[i]); } } template<> void Tanh<float, CUDAContext>(const int count, const float* x, float* y) { _Tanh<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _TanhGrad(const int count, const T* dy, const T* y, T* dx) { CUDA_KERNEL_LOOP(i, count) { dx[i] = dy[i] * (1 - y[i] * y[i]); } } template<> void TanhGrad<float, CUDAContext>(const int count, const float* dy, const float* y, float* dx) { _TanhGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, dx); CUDA_POST_KERNEL_CHECK; } /******************** arithmetic.bias_add ********************/ template <typename T> __global__ void _BiasAdd_NCHW(const int count, const int dim, const int inner_dim, const T* bias, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int bias_idx = (idx / inner_dim) % dim; y[idx] += bias[bias_idx]; } } template <typename T> __global__ void _BiasAdd_NHWC(const int count, const int dim, const int inner_dim, const T* bias, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] += bias[idx % dim]; } } template<> void BiasAdd<float, CUDAContext>(const int count, const int outer_dim, const int dim, const int inner_dim, const string& data_format, const float* bias, const float* bias_multiplier, float* y) { if (data_format == "NCHW") { _BiasAdd_NCHW<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, inner_dim, bias, y); } else if (data_format == "NHWC") { _BiasAdd_NHWC<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, inner_dim, bias, y); } else LOG(FATAL) << "Unknown data format: " << data_format; } /******************** arithmetic.clip ********************/ template <typename T> __global__ void _Clip(const int count, const T low, const T high, const T* x, T* mask, T* y) { CUDA_KERNEL_LOOP(idx, count) { mask[idx] = 1.0; if (x[idx] > high || x[idx] < low) mask[idx] = 0.0; y[idx] = x[idx] > high ? high : x[idx]; y[idx] = x[idx] < low ? low : x[idx]; } } template <> void Clip<float, CUDAContext>(const int count, const float low, const float high, const float* x, float* mask, float* y) { _Clip<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, low, high, x, mask, y); } /******************** arithmetic.scale ********************/ template <typename T> __global__ void _ScaleWithoutBias(const int n, const T* x, const T* scale, const int scale_dim, const int inner_dim, T* y) { CUDA_KERNEL_LOOP(idx, n) { const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = x[idx] * scale[scale_idx]; } } template <typename T> __global__ void _ScaleWithBias(const int n, const T* x, const T* scale, const T* bias, const int scale_dim, const int inner_dim, T* y) { CUDA_KERNEL_LOOP(idx, n) { const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = x[idx] * scale[scale_idx] + bias[scale_idx]; } } template<> void Scale<float, CUDAContext>(const int axis, Tensor* x, Tensor* gamma, Tensor* beta, Tensor* BMul, Tensor* y) { const int count = x->count(); const int inner_dim = x->count(axis + gamma->ndim()); const int scale_dim = gamma->count(); auto* Xdata = x->data<float, CUDAContext>(); auto* Ydata = y->mutable_data<float, CUDAContext>(); auto* Sdata = gamma->data<float, CUDAContext>(); auto* Bdata = beta != nullptr ? beta->data<float, CUDAContext>() : nullptr; if (Bdata != nullptr) _ScaleWithBias<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, Xdata, Sdata, Bdata, scale_dim, inner_dim, Ydata); else _ScaleWithoutBias<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, Xdata, Sdata, scale_dim, inner_dim, Ydata); } #ifdef WITH_CUDA_FP16 template <typename T> __global__ void _ScaleWithoutBiasHalf(const int n, const half* x, const half* scale, const int scale_dim, const int inner_dim, half* y) { CUDA_KERNEL_LOOP(idx, n) { #if __CUDA_ARCH__ >= 530 const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = __hmul(x[idx], scale[scale_idx]); #endif } } template <typename T> __global__ void _ScaleWithBiasHalf(const int n, const half* x, const half* scale, const half* bias, const int scale_dim, const int inner_dim, half* y) { CUDA_KERNEL_LOOP(idx, n) { #if __CUDA_ARCH__ >= 530 const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = __hadd(__hmul(x[idx], scale[scale_idx]), bias[scale_idx]); #endif } } template<> void Scale<float16, CUDAContext>(const int axis, Tensor* x, Tensor* gamma, Tensor* beta, Tensor* BMul, Tensor* y) { const int count = x->count(); const int inner_dim = x->count(axis + gamma->ndim()); const int scale_dim = gamma->count(); auto* Xdata = x->data<float16, CUDAContext>(); auto* Ydata = y->mutable_data<float16, CUDAContext>(); auto* Sdata = gamma->data<float16, CUDAContext>(); auto* Bdata = beta != nullptr ? beta->data<float16, CUDAContext>() : nullptr; if (Bdata != nullptr) _ScaleWithBiasHalf<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, reinterpret_cast<const half*>(Xdata), reinterpret_cast<const half*>(Sdata), reinterpret_cast<const half*>(Bdata), scale_dim, inner_dim, reinterpret_cast<half*>(Ydata)); else _ScaleWithoutBiasHalf<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, reinterpret_cast<const half*>(Xdata), reinterpret_cast<const half*>(Sdata), scale_dim, inner_dim, reinterpret_cast<half*>(Ydata)); } #endif template <> void ScaleGrad<float, CUDAContext>(const int axis, Tensor* dy, Tensor* gamma, Tensor* dx) { const int count = dx->count(); const int inner_dim = dx->count(axis + gamma->ndim()); const int scale_dim = gamma->count(); auto* dYdata = dy->data<float, CUDAContext>(); auto* dXdata = dx->mutable_data<float, CUDAContext>(); auto* Sdata = gamma->data<float, CUDAContext>(); _ScaleWithoutBias<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dYdata, Sdata, scale_dim, inner_dim, dXdata); } /******************** cast.float2half ********************/ #ifdef WITH_CUDA_FP16 template <typename T> __global__ void _FloatToHalfKernel(const int count, const float* x, half* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = __float2half(x[idx]); } } template <> void Float2Half<float, CUDAContext>(const int count, const float* x, float16* y) { _FloatToHalfKernel<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif /******************** control_flow.compare ********************/ template <typename T> __global__ void _Equal(const int count, const T* a, const T* b, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = fabs(a[idx] - b[idx]) < FLT_EPSILON ? 1.0 : 0.0; } } template <> void Equal<float, CUDAContext>(const int count, const float* a, const float* b, float* y) { _Equal<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, a, b, y); CUDA_POST_KERNEL_CHECK; } /******************** loss.l1_loss ********************/ template <typename T> __global__ void _AbsGrad(const int count, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const T val = dy[idx]; // val > 0: 1 | val == 0: 0 | val < 0: -1 dx[idx] = (val > T(0)) - (val < T(0)); } } template<> void AbsGrad<float, CUDAContext>(const int count, const float* dy, float* dx) { _AbsGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** loss.sigmoid_cross_entropy ********************/ template <typename T> __global__ void _SigmoidCrossEntropy(const int count, const T* x, const T* target, T* loss, T* valid) { CUDA_KERNEL_LOOP(idx, count) { if (target[idx] < 0) { loss[idx] = 0.; valid[idx] = 0.; } else { loss[idx] = std::log(1 + std::exp(x[idx] - 2 * x[idx] * (x[idx] >= 0))) + x[idx] * ((x[idx] >= 0) - target[idx]); valid[idx] = 1.; } } } template <> void SigmoidCrossEntropy<float, CUDAContext>(const int count, const float* x, const float* target, float* loss, float* valid) { _SigmoidCrossEntropy<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, target, loss, valid); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SigmoidCrossEntropyGrad(const int count, const T* x, const T* target, T* dx, T* valid) { CUDA_KERNEL_LOOP(idx, count) { if (target[idx] < 0) { dx[idx] = 0.; valid[idx] = 0.; } else { dx[idx] = 1. / (1. + expf(-x[idx])) - target[idx]; valid[idx] = 1.; } } } template <> void SigmoidCrossEntropyGrad<float, CUDAContext>(const int count, const float* x, const float* target, float* dx, float* valid) { _SigmoidCrossEntropyGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, target, dx, valid); CUDA_POST_KERNEL_CHECK; } /******************** loss.smooth_l1_loss ********************/ template <typename T> __global__ void _SmoothL1(const int count, const float sigma2, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const T val = x[idx]; const T abs_val = abs(val); if (abs_val < 1.0 / sigma2) y[idx] = 0.5 * val * val * sigma2; else y[idx] = abs_val - 0.5 / sigma2; } } template<> void SmoothL1<float, CUDAContext>(const int count, const float sigma2, const float* x, float* y) { _SmoothL1<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, sigma2, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SmoothL1Grad(const int count, const float sigma2, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const T val = dy[idx]; const T abs_val = abs(val); if (abs_val < 1.0 / sigma2) dx[idx] = val * sigma2; // val > 0: 1 | val == 0: 0 | val < 0: -1 else dx[idx] = (val > T(0)) - (val < T(0)); } } template<> void SmoothL1Grad<float, CUDAContext>(const int count, const float sigma2, const float* dy, float* dx) { _SmoothL1Grad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, sigma2, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** loss.softmax_cross_entropy ********************/ template <typename T> __global__ void _SoftmaxCrossEntropy(const int count, const T* prob, const T* target, T* loss) { CUDA_KERNEL_LOOP(idx, count) { loss[idx] = -target[idx] * log(max(prob[idx], FLT_MIN)); } } template <> void SoftmaxCrossEntropy<float, CUDAContext>(const int count, const float* prob, const float* target, float* loss) { _SoftmaxCrossEntropy<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, prob, target, loss); CUDA_POST_KERNEL_CHECK; } /******************** loss.sparse_softmax_cross_entropy ********************/ template <typename T> __global__ void _SparseSoftmaxCrossEntropy(const int count, const T* prob, const T* labels, T* loss, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) { if (label == ignores[k]) { loss[idx] = valid[idx] = 0; break; } } if (k == ignore_num) { loss[idx] = -log(max(prob[(o_idx * classes + label) * inner_dim + i_idx], FLT_MIN)); valid[idx] = 1; } } } template <> void SparseSoftmaxCrossEntropy<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* prob, const float* labels, float* loss, float* valid, Tensor* ignore) { const int* ignores = ignore->count() > 0 ? ignore->data<int, CUDAContext>() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxCrossEntropy<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, prob, labels, loss, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SparseSoftmaxCrossEntropyGrad(const int count, const T* prob, const T* labels, T* dx, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) if (label == ignores[k]) break; if (k != ignore_num) { for (int c = 0; c < classes; c++) dx[(o_idx * classes + c) * inner_dim + i_idx] = 0; valid[idx] = 0; } else { dx[(o_idx * classes + label) * inner_dim + i_idx] -= 1; valid[idx] = 1; } } } template<> void SparseSoftmaxCrossEntropyGrad<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* prob, const float* labels, float* valid, Tensor* ignore, float* dXdata) { const int* ignores = ignore->count() > 0 ? ignore->data <int, CUDAContext >() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxCrossEntropyGrad<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, prob, labels, dXdata, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } /******************** loss.sparse_softmax_focal_loss ********************/ template <typename T> __global__ void _SparseSoftmaxFocalScale(const int count, const float gamma, const T* prob, T* scale) { CUDA_KERNEL_LOOP(idx, count) { scale[idx] = std::pow((1.0f - prob[idx]), gamma); } } template <typename T> __global__ void _SparseSoftmaxFocalLoss(const int count, const float pos_alpha, const float neg_alpha, const int neg_id, T* scale, const T* prob, const T* labels, T* loss, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) { if (label == ignores[k]) { loss[idx] = valid[idx] = 0; break; } } if (k == ignore_num) { const int t_ = (o_idx * classes + label) * inner_dim + i_idx; scale[t_] = label > neg_id ? pos_alpha * scale[t_] : neg_alpha * scale[t_]; loss[idx] = -scale[t_] * std::log(max(prob[t_], FLT_MIN)); valid[idx] = label > neg_id ? 1 : 0; } } } template <> void SparseSoftmaxFocalLoss<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float pos_alpha, const float neg_alpha, const float gamma, const int neg_id, const float* prob, const float* labels, float* scale, float* loss, float* valid, Tensor* ignore) { const int* ignores = ignore->count() > 0 ? ignore->data<int, CUDAContext>() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxFocalScale<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, gamma, prob, scale); _SparseSoftmaxFocalLoss<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, pos_alpha, neg_alpha, neg_id, scale, prob, labels, loss, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SparseSoftmaxFocalLossGrad(const int count, const float gamma, const int neg_id, const float eps, const T* scale, const T* prob, const T* labels, T* dx, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) if (label == ignores[k]) break; if (k != ignore_num) { for (int c = 0; c < classes; c++) dx[(o_idx * classes + c) * inner_dim + i_idx] = 0; valid[idx] = 0; } else { const int t_ = (o_idx * classes + label) * inner_dim + i_idx; T grad = -gamma * (scale[t_] / max((1.0f - prob[t_]), eps)) * std::log(max(prob[t_], FLT_MIN)) * prob[t_] + scale[t_]; for (int c = 0; c < classes; c++) { const int i_ = (o_idx * classes + c) * inner_dim + i_idx; if (c == label) { dx[i_] = grad * (prob[t_] - 1); } else { dx[i_] = grad * prob[i_]; } } valid[idx] = label > neg_id ? 1 : 0; } } } template<> void SparseSoftmaxFocalLossGrad<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float gamma, const int neg_id, const float eps, const float* scale, const float* prob, const float* labels, float* valid, Tensor* ignore, float* dXdata) { const int* ignores = ignore->count() > 0 ? ignore->data <int, CUDAContext >() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxFocalLossGrad<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, gamma, neg_id, eps, scale, prob, labels, dXdata, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } /******************** misc.image_data ********************/ template <typename Tx, typename Ty> __global__ void _ImageData_NCHW(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int w = idx % W; const int h = (idx / W) % H; const int c = (idx / W / H) % C; const int n = idx / W / H / C; Ty raw_value = x[((n * H + h) * W + w) * C + c]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = raw_value; } } template <typename Tx, typename Ty> __global__ void _ImageData_NHWC(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int c = idx % C; Ty raw_value = x[idx]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = raw_value; } } template <typename Tx, typename Ty> __global__ void _ImageDataHalf_NCHW(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int w = idx % W; const int h = (idx / W) % H; const int c = (idx / W / H) % C; const int n = idx / W / H / C; float raw_value = x[((n * H + h) * W + w) * C + c]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = __float2half(raw_value); } } template <typename Tx, typename Ty> __global__ void _ImageDataHalf_NHWC(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int c = idx % C; float raw_value = x[idx]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = __float2half(raw_value); } } template <> void ImageData<float, float, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const float* x, float* y) { if (data_format == "NCHW") { _ImageData_NCHW<float, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else if (data_format == "NHWC") { _ImageData_NHWC<float, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else LOG(FATAL) << "Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } template <> void ImageData<uint8_t, float, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const uint8_t* x, float* y) { if (data_format == "NCHW") { _ImageData_NCHW<uint8_t, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else if (data_format == "NHWC") { _ImageData_NHWC<uint8_t, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else LOG(FATAL) << "Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void ImageData<float, float16, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const float* x, float16* y) { if (data_format == "NCHW") { _ImageDataHalf_NCHW<float, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else if (data_format == "NHWC") { _ImageDataHalf_NHWC<float, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else LOG(FATAL) << "Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } template <> void ImageData<uint8_t, float16, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const uint8_t* x, float16* y) { if (data_format == "NCHW") { _ImageDataHalf_NCHW<uint8_t, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else if (data_format == "NHWC") { _ImageDataHalf_NHWC<uint8_t, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else LOG(FATAL) << "Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } #endif /******************** ndarray.argmax ********************/ template <typename T> __global__ void _Arange(const int count, const int start, const int step, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = start + idx * step; } } template<> void Arange<float, CUDAContext>(const int count, const int start, const int step, float* y) { _Arange<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, start, step, y); CUDA_POST_KERNEL_CHECK; } template<> void Arange<int, CUDAContext>(const int count, const int start, const int step, int* y) { _Arange<int> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, start, step, y); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.argmax ********************/ template <typename T> __global__ void _Argmax(const int count, const int axis_dim, const int inner_dim, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { T max_val = -FLT_MAX; int max_idx = -1; for (int j = 0; j < axis_dim; ++j) { const T val = x[(idx / inner_dim * axis_dim + j) * inner_dim + idx % inner_dim]; if (val > max_val) { max_val = val; max_idx = j; } } y[idx] = max_idx; } } template<> void Argmax<float, CUDAContext>(const int count, const int axis_dim, const int inner_dim, const int top_k, const float* x, float* y) { CHECK_EQ(top_k, 1) << "top_k > 1 is not supported with CUDA"; _Argmax<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, axis_dim, inner_dim, x, y); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.argmin ********************/ template <typename T> __global__ void _Argmin(const int count, const int axis_dim, } } template <> void CanonicalAxis<float, CUDAContext>(const int count, const int dim, float* y) { _CanonicalAxis<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _At(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const T* indices, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int outer_idx = idx / inner_dim / y_slice_dim; const int axis_dim, const int inner_dim, const int top_k, const float* x, float* y) { CHECK_EQ(top_k, 1) << "top_k > 1 is not supported with CUDA"; } } template <> void At<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const float* indices, const float* x, float* y, CUDAContext* context) { _At<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _AtGrad(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const T* indices, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int outer_idx = idx / inner_dim / y_slice_dim; const int y_slice_dim, const int* indices, const float* x, float* y, CUDAContext* context) { _Gather<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, } } template <> void AtGrad<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const float* indices, const float* dy, float* dx, CUDAContext* context) { _AtGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, dy, dx); CUDA_POST_KERNEL_CHECK; } const int slice_idx = idx % inner_dim; const int y_idx_offset = (idx / inner_dim) % y_slice_dim; const int x_idx_offset = indices[y_idx_offset]; const int x_idx = (outer_idx * x_slice_dim + x_idx_offset) * inner_dim + slice_idx; atomicAdd(dx + x_idx, dy[idx]); } } template <> void GatherGrad<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const float* dy, float* dx) { _GatherGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, dy, dx); CUDA_POST_KERNEL_CHECK; } template <> void GatherGrad<int, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const int* dy, int* dx) { _GatherGrad<int> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.concat ********************/ template <typename T> __global__ void _Concat(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = x_concat_dim * inner_dim; const int outer_idx = idx / tmp; const int concat_idx = idx % tmp; const int y_idx = (outer_idx * y_concat_dim + concat_offset) * inner_dim + concat_idx; y[y_idx] = x[idx]; } } template <> void Concat<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float* x, float* y, CUDAContext* context) { _Concat<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, x, y); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void Concat<float16, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float16* x, float16* y, CUDAContext* context) { _Concat<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, reinterpret_cast<const half*>(x), reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif template <typename T> __global__ void _ConcatGrad(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = x_concat_dim * inner_dim; const int outer_idx = idx / tmp; const int concat_idx = idx % tmp; const int y_idx = (outer_idx * y_concat_dim + concat_offset) * inner_dim + concat_idx; dx[idx] = dy[y_idx]; } } template <> void ConcatGrad<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float* dy, float* dx, CUDAContext* context) { _ConcatGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, dy, dx); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void ConcatGrad<float16, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float16* dy, float16* dx, CUDAContext* context) { _ConcatGrad<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, reinterpret_cast<const half*>(dy), reinterpret_cast<half*>(dx)); CUDA_POST_KERNEL_CHECK; } #endif /******************** ndarray.crop ********************/ template<typename T> __global__ void _Crop1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; y[idx] = x[(o * dim + ex_d + start) * inner_dim + i]; } } template<> void Crop1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const float* x, float* y, CUDAContext* context) { _Crop1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, start, x, y); CUDA_POST_KERNEL_CHECK; } template<typename T> __global__ void _Crop1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const int end, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int d = (idx / inner_dim) % dim; const int o = idx / inner_dim / dim; if (d >= start && d < end) dx[idx] = dy[(o * ex_dim + d - start) * inner_dim + i]; } } template<> void Crop1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const int end, const float* dy, float* dx, CUDAContext* context) { _Crop1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, start, end, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.pad ********************/ template <typename T> __global__ void _ConstPad1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T value, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; const int d = ex_d - pad_l; y[idx] = (d < 0 || d >= dim) ? value : x[(o * dim + d) * inner_dim + i]; } } template <> void ConstPad1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float value, const float* x, float* y, CUDAContext* context) { _ConstPad1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, value, x, y); } template <typename T> __global__ void _ReflectPad1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; int d = ex_d - pad_l; d = max(d, -d); d = min(d, 2 * dim - d - 2); y[idx] = x[(o * dim + d) * inner_dim + i]; } } template <> void ReflectPad1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* x, float* y, CUDAContext* context) { _ReflectPad1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, x, y); } template <typename T> __global__ void _EdgePad1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; const int d = min(dim - 1, max(ex_d - pad_l, 0)); y[idx] = x[(o * dim + d) * inner_dim + i]; } } template <> void EdgePad1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* x, float* y, CUDAContext* context) { _EdgePad1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, x, y); } template <typename T> __global__ void _ConstPad1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % dim + pad_l; const int o = idx / inner_dim / dim; dx[idx] = dy[(o * ex_dim + ex_d) * inner_dim + i]; } } template <> void ConstPad1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* dy, float* dx, CUDAContext* context) { _ConstPad1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, dy, dx); } template <typename T> __global__ void _ReflectPad1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; int d = ex_d - pad_l; d = max(d, -d); d = min(d, 2 * dim - d - 2); atomicAdd(&dx[(o * dim + d) * inner_dim + i], dy[idx]); } } template <> void ReflectPad1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* dy, float* dx) { _ReflectPad1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, dy, dx); } template <typename T> __global__ void _EdgePad1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; const int d = min(dim - 1, max(ex_d - pad_l, 0)); atomicAdd(&dx[(o * dim + d) * inner_dim + i], dy[idx]); } } template <> void EdgePad1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* dy, float* dx, CUDAContext* context) { _EdgePad1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, dy, dx); } /******************** ndarray.one_hot ********************/ template <typename T> __global__ void _OneHot(const int count, const int depth, const int on_value, const float* x, float* y) { CUDA_KERNEL_LOOP(idx, count) { const int val = x[idx]; y[idx * depth + val] = on_value; } } template <> void OneHot<float, CUDAContext>(const int count, const int depth, const int on_value, const float* x, float* y) { _OneHot<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, depth, on_value, x, y); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.reduce ********************/ template <typename T> __global__ void _Sum(const int count, const int axis_dim, const int inner_dim, const T* x, float* y) { CUDA_KERNEL_LOOP(idx, count) { T sum_val = 0.0; for (int j = 0; j < axis_dim; j++) sum_val += x[(idx / inner_dim * axis_dim + j) * inner_dim + idx % inner_dim]; y[idx] = sum_val; } } template<> void Sum<float, CUDAContext>(const int count, const int axis_dim, const int inner_dim, const float* x, float* y) { _Sum<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, axis_dim, inner_dim, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SumGrad(const int count, const int axis_dim, const int inner_dim, const T coeff, const T* dy, float* dx) { CUDA_KERNEL_LOOP(idx, count) { for (int j = 0; j < axis_dim; j++) dx[(idx / inner_dim * axis_dim + j) * inner_dim + idx % inner_dim] = dy[idx] * coeff; } } template<> void SumGrad<float, CUDAContext>(const int count, const int axis_dim, const int inner_dim, const float coeff, const float* dy, float* dx) { _SumGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, axis_dim, inner_dim, coeff, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.repeat ********************/ template <typename T> __global__ void _Repeat(const int count, const int inner_dim, const int repeats, const int dim, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % inner_dim; const int b = (idx / inner_dim / repeats) % dim; const int n = idx / inner_dim / repeats / dim; const int x_idx = (n * dim + b) * inner_dim + d; y[idx] = x[x_idx]; } } template <> void Repeat<float, CUDAContext>(const int count, const int outer_dim, const int dim, const int inner_dim, const int repeats, const float* x, float* y, CUDAContext* context) { _Repeat<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, inner_dim, repeats, dim, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _RepeatGrad(const int count, const int inner_dim, const int repeats, const int dim, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % inner_dim; const int b = (idx / inner_dim) % dim; const int n = idx / inner_dim / dim; T gradient = 0; for (int t = 0; t < repeats; t++) gradient += dy[(((n * dim + b) * repeats) + t) * inner_dim + d]; dx[idx] = gradient; } } template <> void RepeatGrad<float, CUDAContext>(const int count, const int outer_dim, const int dim, const int inner_dim, const int repeats, const float* dy, float* dx, CUDAContext* context) { _RepeatGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, inner_dim, repeats, dim, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.slice ********************/ template <typename T> __global__ void _Slice(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = y_slice_dim * inner_dim; const int outer_idx = idx / tmp; const int slice_idx = idx % tmp; const int x_idx = (outer_idx * x_slice_dim + slice_offset) * inner_dim + slice_idx; y[idx] = x[x_idx]; } } template <> void Slice<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const float* x, float* y, CUDAContext* context) { _Slice<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, slice_offset, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SliceGrad(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = y_slice_dim * inner_dim; const int outer_idx = idx / tmp; const int slice_idx = idx % tmp; const int x_idx = (outer_idx * x_slice_dim + slice_offset) * inner_dim + slice_idx; dx[x_idx] = dy[idx]; } } template <> void SliceGrad<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const float* dy, float* dx, CUDAContext* context) { _SliceGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, slice_offset, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.tile ********************/ template <typename T> __global__ void _Tile(const int count, const int ex_inner_dim, const int multiple, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % ex_inner_dim; const int n = idx / ex_inner_dim / multiple; const int x_idx = n * ex_inner_dim + d; y[idx] = x[x_idx]; } } template <> void Tile<float, CUDAContext>(const int count, const int outer_dim, const int ex_inner_dim, const int multiple, const float* x, float* y, CUDAContext* context) { _Tile<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ex_inner_dim, multiple, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _TileGrad(const int count, const int ex_inner_dim, const int multiple, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % ex_inner_dim; const int n = idx / ex_inner_dim; T gradient = 0; for (int t = 0; t < multiple; t++) gradient += dy[(n * multiple + t) * ex_inner_dim + d]; dx[idx] = gradient; } } template <> void TileGrad<float, CUDAContext>(const int count, const int outer_dim, const int ex_inner_dim, const int multiple, const float* dy, float* dx, CUDAContext* context) { _TileGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ex_inner_dim, multiple, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.transpose ********************/ template <typename T> __global__ void _Transpose(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { int x_idx = 0, y_idx = idx; for (int j = 0; j < ndim; ++j) { int k = order[j]; x_idx += (y_idx / new_steps[j]) * old_steps[k]; y_idx %= new_steps[j]; } y[idx] = x[x_idx]; } } template <> void Transpose<float, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float* x, float* y) { _Transpose<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, x, y); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void Transpose<float16, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float16* x, float16* y) { _Transpose<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, reinterpret_cast<const half*>(x), reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif template <typename T> __global__ void _TransposeGrad(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { int x_idx = 0, y_idx = idx; for (int j = 0; j < ndim; ++j) { int k = order[j]; x_idx += (y_idx / new_steps[j]) * old_steps[k]; y_idx %= new_steps[j]; } dx[x_idx] = dy[idx]; } } template <> void TransposeGrad<float, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float* dy, float* dx) { _TransposeGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, dy, dx); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void TransposeGrad<float16, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float16* dy, float16* dx) { _TransposeGrad<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, reinterpret_cast<const half*>(dy), reinterpret_cast<half*>(dx)); CUDA_POST_KERNEL_CHECK; } #endif /******************** recurrent.lstm_uint ********************/ template <typename T> __global__ void _LSTMUnitAct(const int count, const int channels, const int g_offset, const int x_offset, const T* x, T* x_act) { CUDA_KERNEL_LOOP(idx, count) { const int ch_4 = idx % x_offset; if (ch_4 < g_offset) x_act[idx] = _SigmoidUnit<float>(x[idx]); else x_act[idx] = std::tanh(x[idx]); } } template <typename T> __global__ void _LSTMUnit(const int count, const int channels, const int o_offset, const int g_offset, const int x_offset, const T* c_1, T* x_act, const T* cont, T* c, T* h) { CUDA_KERNEL_LOOP(idx, count) { const int n = idx / channels; const int ch = idx % channels; T* x_act_ = x_act + n * x_offset; const T i = x_act_[ch]; if (cont != nullptr && cont[n] != T(1)) x_act_[channels + ch] *= cont[n]; const T f = x_act_[channels + ch]; const T o = x_act_[o_offset + ch]; const T g | 84 | Refactor Shape Module | 50 | .cu | cu | bsd-2-clause | neopenx/Dragon |
1945 | <NME> RedisCacheManagerTest.java
<BEF> package org.crazycake.shiro;
import org.apache.shiro.cache.Cache;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.CoreMatchers.is;
import static org.mockito.Mockito.*;
public class RedisCacheManagerTest {
private IRedisManager redisManager;
private RedisCacheManager redisCacheManager;
@BeforeEach
public void setUp() {
redisManager = mock(IRedisManager.class);
}
@Test
public void testInitWithoutSettingRedisManager() {
redisCacheManager = new RedisCacheManager();
Assertions.assertThrows(IllegalArgumentException.class, () -> {
redisCacheManager.getCache("testCache");
Cache cache1 = redisCacheManager.getCache("testCache1");
assertThat(cache,is(cache1));
redisCacheManager.setKeyPrefix("testRedisManager1");
Cache cache2 = redisCacheManager.getCache("testCache2");
assertThat(cache2.getClass().getName(), is("org.crazycake.shiro.RedisCache"));
RedisCache redisCache2 = (RedisCache) cache2;
assertThat(redisCache2.getKeyPrefix(), is("testRedisManager1"));
}
}
assertThat(redisTestCache.getKeyPrefix(), is("testRedisManager1:testCache:"));
assertThat(redisTestCache.getPrincipalIdFieldName(), is("id"));
}
}
<MSG> 1、 在org.crazycake.shiro.RedisCache#put(key,value)时, 允许设置超时时间。
2、 在org.crazycake.shiro.RedisCacheManager#getCache(name)时, 将name扩展至key的前缀keyPrefix, 防止在获取不同name的Cache 时冲突。
<DFF> @@ -26,12 +26,12 @@ public class RedisCacheManagerTest {
Cache cache1 = redisCacheManager.getCache("testCache1");
assertThat(cache,is(cache1));
- redisCacheManager.setKeyPrefix("testRedisManager1");
+ redisCacheManager.setKeyPrefix("testRedisManager1:");
Cache cache2 = redisCacheManager.getCache("testCache2");
assertThat(cache2.getClass().getName(), is("org.crazycake.shiro.RedisCache"));
RedisCache redisCache2 = (RedisCache) cache2;
- assertThat(redisCache2.getKeyPrefix(), is("testRedisManager1"));
+ assertThat(redisCache2.getKeyPrefix(), is("testRedisManager1:testCache2:"));
}
}
| 2 | 1、 在org.crazycake.shiro.RedisCache#put(key,value)时, 允许设置超时时间。 2、 在org.crazycake.shiro.RedisCacheManager#getCache(name)时, 将name扩展至key的前缀keyPrefix, 防止在获取不同name的Cache 时冲突。 | 2 | .java | java | mit | alexxiyang/shiro-redis |
1946 | <NME> engine_api.rst
<BEF> Hitch Engine API
================
The Hitch Engine is a python class which is tasked with executing your tests
and responding to successes and failures.
The basic Hitch Engine looks something like this:
.. code-block:: python
import hitchtest
class ExecutionEngine(hitchtest.ExecutionEngine):
def set_up(self):
# set up code
def do_something(self):
# code run when test says "Do something"
def do_something_else(self, with_what):
# code run run when test says "Do something else"
def tear_down(self):
# code that always runs at the end
For a test like this:
.. code-block:: yaml
- name: Example scenario
scenario:
- Do something
- Do something else
Step Translation
----------------
Test steps and their arguments are fed to the engine directly as method calls
and arguments. All step names and arguments are first changed into underscore_case.
For example, putting this as a test step:
.. code-block:: yaml
- Do something
Would be equivalent to calling this in your engine:
.. code-block:: python
self.do_something()
This, on the other hand (note the semicolon):
.. code-block:: yaml
- Do something else: value 1
Would be translated into:
.. code-block:: python
self.do_something_else("value 1")
You can include as many arguments as you like in steps like so:
.. code-block:: yaml
- Do complicated thing:
Variable 1: Value 1
Variable 2: 2
If the equivalent were written in python it would look like this:
.. code-block:: python
self.do_complicated_thing(variable_1="Value 1", variable_2="2")
Your steps can also contain arguments that contain lists:
.. code-block:: yaml
- Do another complicated thing:
Variable 1: value 1
Variable 2:
- List item 1
- List item 2
The python equivalent of that would look like this:
.. code-block:: python
self.do_another_complicated_thing(variable_1="value 1", variable_2=["list item 1", "list item 2",])
They can contain dicts (or associative arrays) as well:
.. code-block:: yaml
- A 3rd complicated thing:
Variable 1: value 1
Variable 2:
Dict item 1: val 1
Dict item 2: val 2
Which in python would be equivalent to this:
.. code-block:: python
self.a_3rd_complicated_thing(variable_1="value 1", variable_2={'Dict item 1': 'val 1', 'Dict item 2': 'val 2'})
Careful with semicolons and braces like { and }
-----------------------------------------------
Since the tests are written in YAML with optional Jinja2, braces and
semicolons have special meanings and must be escaped if you want
to use them.
Preconditions
-------------
self.preconditions is a dictionary representation of the YAML snippet in the test being run.
What goes in this snippet is up to you. Anything that is valid YAML is allowed.
Example:
.. code-block:: yaml
preconditions:
db_fixtures:
- fixture1.sql
python_version: 2.7.3
This will mean your preconditions variable will be::
In [1]: self.preconditions
Out[1]: {'db_fixtures': ['fixture1.sql'], 'python_version': '2.7.3'}
You can access any properties you set here using python's get method (which
you can also use to program in a sensible default)::
In [1]: self.preconditions.get('db_fixtures', [])
Out[1]: ['fixture1.sql']
If no preconditions are set, self.preconditions will be an empty dict::
In [1]: self.preconditions
Out[1]: {}
Note that while preconditions can contain lists, you can't set preconditions
to be a list.
Tags
----
Tests can also have tags, which let you single out individual tests to run
or to run groups of tests together. Example:
.. code-block:: yaml
- name: Test with tags
tags:
- registration
- email
- firefox
scenario:
- Step 1
- Step 2
You can use these tags to run related sets of tests together like so::
$ hitch test . --tags registration
Or, if you want to be more specific, you can list the tags, separated by a comma::
$ hitch test . --tags registration,email,firefox
Description
-----------
You can also include comments in the description property. This where you can
put comments in your tests to help explain to people what your test is doing
and why.
It is ignored by the engine.
.. code-block:: yaml
- name: Test with long description
description: |
This test has a long history behind it. First there was a feature, then
ther was another bug BUG-431, which it was tweaked to accomodate.
It registers, recieves an email and checks the email arrived.
scenario:
- Step 1
- Step 2: with parameter
- Step 3:
var 1: 1
var 2: 2
var 3: 3
- Last step
Stacktrace
----------
self.stacktrace is an object representation of the stack trace that occurs after a failure
occurs in your test. It is set to None if no error has occurred while running the test.
You can use it to pretty-print a representation of the last error that occurred::
In [1]: print(self.stacktrace.to_template())
[ prints colorized, pretty printed version of the stacktrace ]
You can also use it to *dive into* the specific engine code where the exception occurred,
so that you can check the contents of variables at that point or even re-run the code::
In [1]: self.stacktrace[0].ipython()
Entering /home/user/django-remindme/django-remindme-tests/engine.py at line 122
In [1]: on
Out[1]: 'register'
Settings
--------
Test settings are also available in the test engine, e.g.::
In [1]: self.settings
Out[1]:
{'engine_folder': '/home/user/django-remindme/django-remindme-tests',
'pause_on_failure': True,
'python_version': '2.7.3',
'xvfb': False,
'quiet': False}
To read more about setting settings see :doc:`settings`.
<MSG> DOCS : Tweaked the engine API docs.
<DFF> @@ -4,7 +4,17 @@ Hitch Engine API
The Hitch Engine is a python class which is tasked with executing your tests
and responding to successes and failures.
-The basic Hitch Engine looks something like this:
+For a test like this, written in YAML:
+
+.. code-block:: yaml
+
+ - name: Example scenario
+ scenario:
+ - Do something
+ - Do something else
+
+
+The basic Hitch Engine, written in python, would need to look something like this:
.. code-block:: python
@@ -24,15 +34,6 @@ The basic Hitch Engine looks something like this:
# code that always runs at the end
-For a test like this:
-
-.. code-block:: yaml
-
- - name: Example scenario
- scenario:
- - Do something
- - Do something else
-
Step Translation
----------------
| 11 | DOCS : Tweaked the engine API docs. | 10 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1947 | <NME> RedisSessionDAO.java
<BEF> package org.crazycake.shiro;
import org.apache.shiro.session.Session;
import org.apache.shiro.session.UnknownSessionException;
import org.apache.shiro.session.mgt.eis.AbstractSessionDAO;
import org.crazycake.shiro.common.SessionInMemory;
import org.crazycake.shiro.exception.SerializationException;
import org.crazycake.shiro.serializer.ObjectSerializer;
import org.crazycake.shiro.serializer.RedisSerializer;
import org.crazycake.shiro.serializer.StringSerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.Serializable;
import java.util.*;
/**
* Used for setting/getting authentication information from Redis
*/
private String keyPrefix = DEFAULT_SESSION_KEY_PREFIX;
private RedisSerializer keySerializer = new StringSerializer();
private RedisSerializer valueSerializer = new ObjectSerializer();
@Override
public void update(Session session) throws UnknownSessionException {
* doReadSession be called about 10 times when login.
* Save Session in ThreadLocal to resolve this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal.
* The default value is 1000 milliseconds (1s).
* Most of time, you don't need to change it.
*
* You can turn it off by setting sessionInMemoryEnabled to false
*/
private static final long DEFAULT_SESSION_IN_MEMORY_TIMEOUT = 1000L;
private long sessionInMemoryTimeout = DEFAULT_SESSION_IN_MEMORY_TIMEOUT;
private static final boolean DEFAULT_SESSION_IN_MEMORY_ENABLED = true;
private boolean sessionInMemoryEnabled = DEFAULT_SESSION_IN_MEMORY_ENABLED;
private static ThreadLocal sessionsInThread = new ThreadLocal();
/**
* expire time in seconds.
* NOTE: Please make sure expire is longer than session.getTimeout(),
* otherwise you might need the issue that session in Redis got erased when the Session is still available
*
* DEFAULT_EXPIRE: use the timeout of session instead of setting it by yourself
* NO_EXPIRE: never expire
*/
private static final int DEFAULT_EXPIRE = -2;
private static final int NO_EXPIRE = -1;
private int expire = DEFAULT_EXPIRE;
private static final int MILLISECONDS_IN_A_SECOND = 1000;
/**
* redisManager used for communicate with Redis
*/
private IRedisManager redisManager;
/**
* Serializer of key
*/
private RedisSerializer keySerializer = new StringSerializer();
/**
* Serializer of value
*/
private RedisSerializer valueSerializer = new ObjectSerializer();
/**
* save/update session
* @param session
* @throws UnknownSessionException
*/
@Override
public void update(Session session) throws UnknownSessionException {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
this.saveSession(session);
if (this.sessionInMemoryEnabled) {
this.setSessionToThreadLocal(session.getId(), session);
}
}
private void saveSession(Session session) throws UnknownSessionException {
if (session == null || session.getId() == null) {
logger.error("session or session id is null");
throw new UnknownSessionException("session or session id is null");
}
byte[] key;
byte[] value;
try {
key = keySerializer.serialize(getRedisSessionKey(session.getId()));
value = valueSerializer.serialize(session);
} catch (SerializationException e) {
logger.error("serialize session error. session id=" + session.getId());
throw new UnknownSessionException(e);
return null;
}
Session s = null;
try {
s = (Session)valueSerializer.deserialize(redisManager.get(keySerializer.serialize(getRedisSessionKey(sessionId))));
} catch (SerializationException e) {
logger.error("read session error. settionId=" + sessionId);
}
/**
* delete session
* @param session
*/
@Override
public void delete(Session session) {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
if (session == null || session.getId() == null) {
logger.error("session or session id is null");
return;
}
if (this.sessionInMemoryEnabled) {
this.delSessionFromThreadLocal(session.getId());
}
try {
redisManager.del(keySerializer.serialize(getRedisSessionKey(session.getId())));
} catch (SerializationException e) {
logger.error("delete session error. session id=" + session.getId());
}
}
/**
* get all active sessions
* @return
*/
@Override
public Collection<Session> getActiveSessions() {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
Set<Session> sessions = new HashSet<Session>();
try {
Set<byte[]> keys = redisManager.keys(keySerializer.serialize(this.keyPrefix + "*"));
if (keys != null && keys.size() > 0) {
for (byte[] key:keys) {
Session s = (Session) valueSerializer.deserialize(redisManager.get(key));
sessions.add(s);
}
}
} catch (SerializationException e) {
logger.error("get active sessions error.");
}
return sessions;
}
@Override
protected Serializable doCreate(Session session) {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
if (session == null) {
logger.error("session is null");
throw new UnknownSessionException("session is null");
}
Serializable sessionId = this.generateSessionId(session);
this.assignSessionId(session, sessionId);
this.saveSession(session);
return sessionId;
}
/**
* I change
* @param sessionId
* @return
*/
@Override
protected Session doReadSession(Serializable sessionId) {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
if (sessionId == null) {
logger.warn("session id is null");
return null;
}
if (this.sessionInMemoryEnabled) {
Session session = getSessionFromThreadLocal(sessionId);
if (session != null) {
return session;
}
}
Session session = null;
try {
String sessionRedisKey = getRedisSessionKey(sessionId);
logger.debug("read session: " + sessionRedisKey + " from Redis");
session = (Session) valueSerializer.deserialize(redisManager.get(keySerializer.serialize(sessionRedisKey)));
if (this.sessionInMemoryEnabled) {
setSessionToThreadLocal(sessionId, session);
}
} catch (SerializationException e) {
logger.error("read session error. sessionId: " + sessionId);
}
return session;
}
private void setSessionToThreadLocal(Serializable sessionId, Session session) {
this.initSessionsInThread();
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
sessionMap.put(sessionId, this.createSessionInMemory(session));
}
private void delSessionFromThreadLocal(Serializable sessionId) {
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
if (sessionMap == null) {
return;
}
sessionMap.remove(sessionId);
}
private SessionInMemory createSessionInMemory(Session session) {
SessionInMemory sessionInMemory = new SessionInMemory();
sessionInMemory.setCreateTime(new Date());
sessionInMemory.setSession(session);
return sessionInMemory;
}
private void initSessionsInThread() {
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
if (sessionMap == null) {
sessionMap = new HashMap<Serializable, SessionInMemory>();
sessionsInThread.set(sessionMap);
}
}
private void removeExpiredSessionInMemory() {
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
if (sessionMap == null) {
return;
}
Iterator<Serializable> it = sessionMap.keySet().iterator();
while (it.hasNext()) {
Serializable sessionId = it.next();
SessionInMemory sessionInMemory = sessionMap.get(sessionId);
if (sessionInMemory == null) {
it.remove();
continue;
}
long liveTime = getSessionInMemoryLiveTime(sessionInMemory);
if (liveTime > sessionInMemoryTimeout) {
it.remove();
}
}
if (sessionMap.size() == 0) {
sessionsInThread.remove();
}
}
private Session getSessionFromThreadLocal(Serializable sessionId) {
if (sessionsInThread.get() == null) {
return null;
}
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
SessionInMemory sessionInMemory = sessionMap.get(sessionId);
if (sessionInMemory == null) {
return null;
}
logger.debug("read session from memory");
return sessionInMemory.getSession();
}
private long getSessionInMemoryLiveTime(SessionInMemory sessionInMemory) {
Date now = new Date();
return now.getTime() - sessionInMemory.getCreateTime().getTime();
}
private String getRedisSessionKey(Serializable sessionId) {
return this.keyPrefix + sessionId;
}
public IRedisManager getRedisManager() {
return redisManager;
}
public void setRedisManager(IRedisManager redisManager) {
this.redisManager = redisManager;
}
public String getKeyPrefix() {
return keyPrefix;
}
public void setKeyPrefix(String keyPrefix) {
this.keyPrefix = keyPrefix;
}
public RedisSerializer getKeySerializer() {
return keySerializer;
}
public void setKeySerializer(RedisSerializer keySerializer) {
this.keySerializer = keySerializer;
}
public RedisSerializer getValueSerializer() {
return valueSerializer;
}
public void setValueSerializer(RedisSerializer valueSerializer) {
this.valueSerializer = valueSerializer;
}
public long getSessionInMemoryTimeout() {
return sessionInMemoryTimeout;
}
public void setSessionInMemoryTimeout(long sessionInMemoryTimeout) {
this.sessionInMemoryTimeout = sessionInMemoryTimeout;
}
public int getExpire() {
return expire;
}
public void setExpire(int expire) {
this.expire = expire;
}
public boolean getSessionInMemoryEnabled() {
return sessionInMemoryEnabled;
}
public void setSessionInMemoryEnabled(boolean sessionInMemoryEnabled) {
this.sessionInMemoryEnabled = sessionInMemoryEnabled;
}
public static ThreadLocal getSessionsInThread() {
return sessionsInThread;
}
}
<MSG> 2.4.8
Major changes:
1. Use Threadlocal to cache session in one thread, so that it won't call redis several times when login.
2. Update RedisCache.clear(). Use prefix to clear redis cache, so that it won't clear all data in redis database
3. Fix the problem which auth object cannot be clear after logout.
Minor changes:
1. Update jedis version to 2.8.1
2. Change JedisPool.returnResource to Jedis.close, based on Jedis documentation.
3. Change redis default expire time to 3600 sec(1 hour)
<DFF> @@ -20,6 +20,8 @@ public class RedisSessionDAO extends AbstractSessionDAO {
private String keyPrefix = DEFAULT_SESSION_KEY_PREFIX;
private RedisSerializer keySerializer = new StringSerializer();
private RedisSerializer valueSerializer = new ObjectSerializer();
+
+ private static ThreadLocal threadLocalSession = new ThreadLocal();
@Override
public void update(Session session) throws UnknownSessionException {
@@ -99,8 +101,14 @@ public class RedisSessionDAO extends AbstractSessionDAO {
return null;
}
Session s = null;
+ if (threadLocalSession.get() != null) {
+ s = (Session) threadLocalSession.get();
+ return s;
+ }
+ logger.debug("read session from redis");
try {
s = (Session)valueSerializer.deserialize(redisManager.get(keySerializer.serialize(getRedisSessionKey(sessionId))));
+ // threadLocalSession.set(s);
} catch (SerializationException e) {
logger.error("read session error. settionId=" + sessionId);
}
| 8 | 2.4.8 Major changes: 1. Use Threadlocal to cache session in one thread, so that it won't call redis several times when login. 2. Update RedisCache.clear(). Use prefix to clear redis cache, so that it won't clear all data in redis database 3. Fix the problem which auth object cannot be clear after logout. | 0 | .java | java | mit | alexxiyang/shiro-redis |
1948 | <NME> ROADMAP.rst
<BEF> ADDFILE
<MSG> DOCS : Added roadmap.
<DFF> @@ -0,0 +1,16 @@
+Roadmap
+=======
+
+This is a list of features which are planned for the future, in no particular order:
+
+* Service plugins for lots more popular databases, web frameworks, task queues and more - even in PHP, Java, Ruby, etc. and tutorials to use them.
+* Use of py.test's assert statement.
+* Tests that repeat themselves with % pass/failure and a threshold for passing (default 100%), for those annoying integration tests that only fail sometimes.
+* Configurable mid-steps - running an engine method between each test step - e.g. to pause mid-step when using tests for demonstrations or take screenshots.
+* Tools to let you stop and start services mid-test, e.g. to test how they behave when receiving different UNIX signals, or to mock scenarios where services are restarted.
+* Test tagging
+* Step skipping - cache your test state at a certain point in a test, and run only from that point - to get quicker feedback on long running tests when doing TDD.
+* Bisect - tools to be able to figure out which commit caused a test failure.
+* Mock REST server
+* CPU/Memory/I/O tracking for services.
+* Artefact generation - a seamless way of creating artefacts from tests such as screenshots, CPU/Memory/I/O usage reports, code coverage reports, etc.
| 16 | DOCS : Added roadmap. | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1949 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
===========
You can chose these 2 ways to include shiro-redis into your project
```xml
<dependency>
<artifactId>shiro-redis</artifactId>
<version>2.4.2-RELEASE</version>
</dependency>
```xml
Edit shiro.ini
<MSG> Update README.md
update readme
<DFF> @@ -7,7 +7,9 @@ How to use it?
===========
You can chose these 2 ways to include shiro-redis into your project
-
+* directly download jar file
+Download shiro-redis.jar in bin folder and add it into your classpath.
+* add maven dependency
```xml
<dependency>
@@ -15,7 +17,7 @@ You can chose these 2 ways to include shiro-redis into your project
<artifactId>shiro-redis</artifactId>
<version>2.4.2-RELEASE</version>
</dependency>
-```xml
+```
Edit shiro.ini
| 4 | Update README.md | 2 | .md | md | mit | alexxiyang/shiro-redis |
1950 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
===========
You can choose these 2 ways to include shiro-redis into your project
* compiler jar with source code from https://github.com/alexxiyang/shiro-redis.git
Download shiro-redis.jar in bin folder and add it into your classpath.
* add maven dependency
```xml
<MSG> Merge remote-tracking branch 'origin/master'
Conflicts:
README.md
<DFF> @@ -7,8 +7,7 @@ How to use it?
===========
You can choose these 2 ways to include shiro-redis into your project
-* compiler jar with source code from https://github.com/alexxiyang/shiro-redis.git
-Download shiro-redis.jar in bin folder and add it into your classpath.
+* use "git clone https://github.com/alexxiyang/shiro-redis.git" to clone project to your local workspace and build jar file by your self
* add maven dependency
```xml
| 1 | Merge remote-tracking branch 'origin/master' | 2 | .md | md | mit | alexxiyang/shiro-redis |
1951 | <NME> gradient_maker.py
<BEF> # --------------------------------------------------------------------------------------------------
# Dragon
# Copyright(c) 2017 SeetaTech
# Written by Ting Pan
# --------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from collections import defaultdict
from dragon.import_c_apis import *
import dragon.config as config
import dragon.protos.dragon_pb2 as pb
from dragon.core.utils import MakeOperatorDef
""" parse ops from string """
g_ops, g_inputs, defaults = CreateGradientDefsCC(op_def.SerializeToString(), g_output)
for idx, g_op in enumerate(g_ops):
if sys.version_info >= (3, 0): g_op = g_op.encode()
new_def = pb.OperatorDef()
new_def.ParseFromString(g_op)
_, new_def.name = GetOperatorName()
It relies on the generating rules defined in the C++ backend.
"""
@classmethod
def CreateGradientForOp(cls, forward_op, g_output):
"""Generate the OperatorDef for ``BackwardOp`` by ``ForwardOp``.
Parameters
----------
forward_op : dragon_pb2.OperatorDef
The OperatorDef of ``ForwardOp``.
g_output : list of str
The inputs of ``BackwardOp`` (Precomputed Grads).
Returns
-------
tuple
The OpDef, outputs and defaults of ``BackwardOp``.
References
----------
The wrapper of ``CreateGradientDefsCC``.
"""
g_ops, g_inputs, defaults = \
CreateGradientDefsCC(forward_op.SerializeToString(), g_output)
for idx, g_op in enumerate(g_ops):
new_def = pb.OperatorDef()
new_def.ParseFromString(g_op)
_, new_def.name = GetOperatorName()
g_ops[idx] = new_def
return g_ops, g_inputs, defaults
@classmethod
def CheckMissingGrad(cls, forward_op, inputs_to_grads, blacklist, targets):
"""Check if missing Grads. If True, skip this Op.
Parameters
----------
forward_op : dragon_pb2.OperatorDef
The OperatorDef of ``ForwardOp``.
inputs_to_grads : dict
The dict of <input, g_input>.
blacklist : set of str
The set of ``NoGradient`` tensors.
targets : list of str
The solving targets.
Returns
-------
tuple
The result of checking and generated filling grads.
"""
if forward_op.type in config.NO_GRADIENT_OPERATORS:
for input in forward_op.input: blacklist.add(input)
return (True, None)
# generate virtual grads for targets if necessary
gen_grads = []
for idx, output in enumerate(forward_op.output):
if output not in inputs_to_grads:
if output in targets:
gen_grads.append((output, idx))
inputs_to_grads[output] = output + '_grad'
# check
for output in forward_op.output:
if inputs_to_grads.get(output, None) is None:
# check failed: skip backward
if output in blacklist: return (True, gen_grads)
if len(forward_op.output) == 1: return (True, gen_grads)
# check pass, even if missing some grads
return (False, gen_grads)
@classmethod
def Make(cls, forward_ops, targets):
"""Make ``BackwardOps`` based on ``ForwardOps``.
Parameters
----------
forward_ops : list of dragon_pb2.OperatorDef
The operators of ``ForwardOp``.
targets : list of str
The solving targets.
Returns
-------
tuple
The ``ForwardOps`` and ``BackwardOps``.
See Also
--------
`theano.function(*args, **kwargs)`_ - How to make a graph. [**Theano Style**]
"""
inputs_to_grads = {}
inputs_count = defaultdict(int)
grads_count = defaultdict(int)
all_split_grads = set()
blacklist = set()
backward_ops = []
# PLAY for the forward
for forward_op in forward_ops:
if forward_op.type in config.NO_GRADIENT_OPERATORS: continue
for input in forward_op.input: inputs_count[input] += 1
# PLAY for the backward
for forward_op in forward_ops[::-1]:
is_skip, gen_grads = cls.CheckMissingGrad(forward_op, inputs_to_grads, blacklist, targets)
g_outputs = list(inputs_to_grads.get(name, None) for name in forward_op.output)
g_ops, g_inputs, defaults = cls.CreateGradientForOp(forward_op, g_outputs)
# append ops
if not is_skip:
if len(gen_grads) > 0:
op_inputs = []; op_outputs = []; values = []
for item in gen_grads:
op_inputs.append(item[0])
op_outputs.append(item[0] + '_grad')
values.append(defaults[item[1]])
gen_op = MakeOperatorDef('GradientGenerate', op_inputs, op_outputs,
GetOperatorName()[1], defaults=values)
if forward_op.HasField('device_option'):
gen_op.device_option.CopyFrom(forward_op.device_option)
backward_ops.append(gen_op)
for g_op in g_ops: backward_ops.append(g_op)
# split & gather grads for multi-used input
for g_op in g_ops:
for g_output_idx, g_output in enumerate(g_op.output):
original_idx = -1
for g_input_idx, g_input in enumerate(g_inputs):
if g_output == g_input: original_idx = g_input_idx
if original_idx == -1: continue
original_name = forward_op.input[original_idx]
if inputs_count[original_name] > 1:
# split
split_name = g_output + '_autosplit_%d' % grads_count[g_output]
if not is_skip: all_split_grads.add(split_name)
grads_count[g_output] += 1
# gather
if grads_count[g_output] == inputs_count[original_name]:
split_inputs = []
for idx in range(grads_count[g_output]):
if '%s_autosplit_%d' % (g_output, idx) in all_split_grads:
split_inputs.append('%s_autosplit_%d' % (g_output, idx))
gather_op = MakeOperatorDef('GradientGather', split_inputs, [g_output])
if g_op.HasField('device_option'):
gather_op.device_option.CopyFrom(g_op.device_option)
_, gather_op.name = GetOperatorName()
backward_ops.append(gather_op)
g_op.output[g_output_idx] = split_name
# done
if not is_skip:
for name, grad in zip(forward_op.input, g_inputs):
if grad != '': inputs_to_grads[name] = grad
return forward_ops, backward_ops
<MSG> fix bugs of https://github.com/neopenx/Dragon/issues/6
<DFF> @@ -18,7 +18,6 @@ class GraphGradientMaker(object):
""" parse ops from string """
g_ops, g_inputs, defaults = CreateGradientDefsCC(op_def.SerializeToString(), g_output)
for idx, g_op in enumerate(g_ops):
- if sys.version_info >= (3, 0): g_op = g_op.encode()
new_def = pb.OperatorDef()
new_def.ParseFromString(g_op)
_, new_def.name = GetOperatorName()
| 0 | fix bugs of https://github.com/neopenx/Dragon/issues/6 | 1 | .py | py | bsd-2-clause | neopenx/Dragon |
1952 | <NME> gradient_maker.py
<BEF> # --------------------------------------------------------------------------------------------------
# Dragon
# Copyright(c) 2017 SeetaTech
# Written by Ting Pan
# --------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from collections import defaultdict
from dragon.import_c_apis import *
import dragon.config as config
import dragon.protos.dragon_pb2 as pb
from dragon.core.utils import MakeOperatorDef
""" parse ops from string """
g_ops, g_inputs, defaults = CreateGradientDefsCC(op_def.SerializeToString(), g_output)
for idx, g_op in enumerate(g_ops):
if sys.version_info >= (3, 0): g_op = g_op.encode()
new_def = pb.OperatorDef()
new_def.ParseFromString(g_op)
_, new_def.name = GetOperatorName()
It relies on the generating rules defined in the C++ backend.
"""
@classmethod
def CreateGradientForOp(cls, forward_op, g_output):
"""Generate the OperatorDef for ``BackwardOp`` by ``ForwardOp``.
Parameters
----------
forward_op : dragon_pb2.OperatorDef
The OperatorDef of ``ForwardOp``.
g_output : list of str
The inputs of ``BackwardOp`` (Precomputed Grads).
Returns
-------
tuple
The OpDef, outputs and defaults of ``BackwardOp``.
References
----------
The wrapper of ``CreateGradientDefsCC``.
"""
g_ops, g_inputs, defaults = \
CreateGradientDefsCC(forward_op.SerializeToString(), g_output)
for idx, g_op in enumerate(g_ops):
new_def = pb.OperatorDef()
new_def.ParseFromString(g_op)
_, new_def.name = GetOperatorName()
g_ops[idx] = new_def
return g_ops, g_inputs, defaults
@classmethod
def CheckMissingGrad(cls, forward_op, inputs_to_grads, blacklist, targets):
"""Check if missing Grads. If True, skip this Op.
Parameters
----------
forward_op : dragon_pb2.OperatorDef
The OperatorDef of ``ForwardOp``.
inputs_to_grads : dict
The dict of <input, g_input>.
blacklist : set of str
The set of ``NoGradient`` tensors.
targets : list of str
The solving targets.
Returns
-------
tuple
The result of checking and generated filling grads.
"""
if forward_op.type in config.NO_GRADIENT_OPERATORS:
for input in forward_op.input: blacklist.add(input)
return (True, None)
# generate virtual grads for targets if necessary
gen_grads = []
for idx, output in enumerate(forward_op.output):
if output not in inputs_to_grads:
if output in targets:
gen_grads.append((output, idx))
inputs_to_grads[output] = output + '_grad'
# check
for output in forward_op.output:
if inputs_to_grads.get(output, None) is None:
# check failed: skip backward
if output in blacklist: return (True, gen_grads)
if len(forward_op.output) == 1: return (True, gen_grads)
# check pass, even if missing some grads
return (False, gen_grads)
@classmethod
def Make(cls, forward_ops, targets):
"""Make ``BackwardOps`` based on ``ForwardOps``.
Parameters
----------
forward_ops : list of dragon_pb2.OperatorDef
The operators of ``ForwardOp``.
targets : list of str
The solving targets.
Returns
-------
tuple
The ``ForwardOps`` and ``BackwardOps``.
See Also
--------
`theano.function(*args, **kwargs)`_ - How to make a graph. [**Theano Style**]
"""
inputs_to_grads = {}
inputs_count = defaultdict(int)
grads_count = defaultdict(int)
all_split_grads = set()
blacklist = set()
backward_ops = []
# PLAY for the forward
for forward_op in forward_ops:
if forward_op.type in config.NO_GRADIENT_OPERATORS: continue
for input in forward_op.input: inputs_count[input] += 1
# PLAY for the backward
for forward_op in forward_ops[::-1]:
is_skip, gen_grads = cls.CheckMissingGrad(forward_op, inputs_to_grads, blacklist, targets)
g_outputs = list(inputs_to_grads.get(name, None) for name in forward_op.output)
g_ops, g_inputs, defaults = cls.CreateGradientForOp(forward_op, g_outputs)
# append ops
if not is_skip:
if len(gen_grads) > 0:
op_inputs = []; op_outputs = []; values = []
for item in gen_grads:
op_inputs.append(item[0])
op_outputs.append(item[0] + '_grad')
values.append(defaults[item[1]])
gen_op = MakeOperatorDef('GradientGenerate', op_inputs, op_outputs,
GetOperatorName()[1], defaults=values)
if forward_op.HasField('device_option'):
gen_op.device_option.CopyFrom(forward_op.device_option)
backward_ops.append(gen_op)
for g_op in g_ops: backward_ops.append(g_op)
# split & gather grads for multi-used input
for g_op in g_ops:
for g_output_idx, g_output in enumerate(g_op.output):
original_idx = -1
for g_input_idx, g_input in enumerate(g_inputs):
if g_output == g_input: original_idx = g_input_idx
if original_idx == -1: continue
original_name = forward_op.input[original_idx]
if inputs_count[original_name] > 1:
# split
split_name = g_output + '_autosplit_%d' % grads_count[g_output]
if not is_skip: all_split_grads.add(split_name)
grads_count[g_output] += 1
# gather
if grads_count[g_output] == inputs_count[original_name]:
split_inputs = []
for idx in range(grads_count[g_output]):
if '%s_autosplit_%d' % (g_output, idx) in all_split_grads:
split_inputs.append('%s_autosplit_%d' % (g_output, idx))
gather_op = MakeOperatorDef('GradientGather', split_inputs, [g_output])
if g_op.HasField('device_option'):
gather_op.device_option.CopyFrom(g_op.device_option)
_, gather_op.name = GetOperatorName()
backward_ops.append(gather_op)
g_op.output[g_output_idx] = split_name
# done
if not is_skip:
for name, grad in zip(forward_op.input, g_inputs):
if grad != '': inputs_to_grads[name] = grad
return forward_ops, backward_ops
<MSG> fix bugs of https://github.com/neopenx/Dragon/issues/6
<DFF> @@ -18,7 +18,6 @@ class GraphGradientMaker(object):
""" parse ops from string """
g_ops, g_inputs, defaults = CreateGradientDefsCC(op_def.SerializeToString(), g_output)
for idx, g_op in enumerate(g_ops):
- if sys.version_info >= (3, 0): g_op = g_op.encode()
new_def = pb.OperatorDef()
new_def.ParseFromString(g_op)
_, new_def.name = GetOperatorName()
| 0 | fix bugs of https://github.com/neopenx/Dragon/issues/6 | 1 | .py | py | bsd-2-clause | neopenx/Dragon |
1953 | <NME> README.rst
<BEF> Hitch
=====
.. image:: https://badges.gitter.im/Join%20Chat.svg
:alt: Join the chat at https://gitter.im/hitchtest/hitch
:target: https://gitter.im/hitchtest/hitch?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
Hitch is a UNIX-based testing framework for writing integration tests with an emphasis on:
* Minimizing and eliminating `brittle tests <https://hitchtest.readthedocs.org/en/latest/glossary/brittle_tests.html>`_
* `Test readability <https://hitchtest.readthedocs.org/en/latest/glossary/test_readability.html>`_
* `Loose coupling <https://hitchtest.readthedocs.org/en/latest/glossary/loose_coupling.html>`_
* `Test realism <https://hitchtest.readthedocs.org/en/latest/glossary/test_realism.html>`_
* Tests that `fail fast <https://hitchtest.readthedocs.org/en/latest/glossary/fail_fast.html>`_ and `fail clearly <https://hitchtest.readthedocs.org/en/latest/glossary/fail_clearly.html>`_
Available plugins
-----------------
Hitch comes with a variety of plugins to aid you to realistically testing various
kinds of software, components and scenarios, including:
* `Python <https://hitchtest.readthedocs.org/en/latest/plugins/hitchpython.html>`_ (includes Django and Celery service definitions)
* `Postgresql <https://hitchtest.readthedocs.org/en/latest/plugins/hitchpostgres.html>`_
* `Redis <https://hitchtest.readthedocs.org/en/latest/plugins/hitchredis.html>`_
* `Web apps (using selenium) <https://hitchtest.readthedocs.org/en/latest/plugins/hitchselenium.html>`_
* Command line apps (using pexpect)
* `Cron <https://hitchtest.readthedocs.org/en/latest/plugins/hitchcron.html>`_
* MySQL
* RabbitMQ
* Elastic Search
`Plugin documentation <https://hitchtest.readthedocs.org/en/latest/plugins/>`_
Getting started
---------------
See the `quickstart tutorial <https://hitchtest.readthedocs.org/en/latest/quickstart/index.html>`_ on how to
get started testing an existing project.
Also check out `cookiecutter-django <https://github.com/pydanny/cookiecutter-django>`_
if you want to start a new Django project with tests.
Status
------
Hitch is currently in beta.
It is regression tested on:
* Operating Systems : Mac OS X Yosemite, Ubuntu, Debian, Fedora and Arch Linux.
* Python versions : 3.5.0, 3.4.3, 3.4.0 and 3.3.0 `(what about python 2?) <https://hitchtest.readthedocs.org/en/latest/faq/what_about_python2.html>`_
It does not currently work on Windows.
See `tested on <https://hitchtest.readthedocs.org/en/latest/misc/tested_on.html>`_ for more details on
how the framework is tested (with itself, naturally).
Contents of this project
------------------------
This project contains:
* The code for the bootstrapper script
* Documentation for the whole project (`hosted at readthedocs <https://hitchtest.readthedocs.org/en/latest/>`_)
* Code for other components is at: https://github.com/hitchtest/
.. _HitchSelenium: https://github.com/hitchtest/hitchselenium
.. _HitchRedis: https://github.com/hitchtest/hitchredis
.. _HitchDjango: https://github.com/hitchtest/hitchdjango
.. _HitchCelery: https://github.com/hitchtest/hitchcelery
.. _pipsi: https://github.com/mitsuhiko/pipsi
<MSG> DOCS : Missed a link from README.
<DFF> @@ -116,6 +116,7 @@ tested on Ubuntu and Mac OS X. Currently, hitchserve will not run on Windows.
.. _HitchSelenium: https://github.com/hitchtest/hitchselenium
.. _HitchRedis: https://github.com/hitchtest/hitchredis
.. _HitchDjango: https://github.com/hitchtest/hitchdjango
+.. _HitchPostgres: https://github.com/hitchtest/hitchpostgres
.. _HitchCelery: https://github.com/hitchtest/hitchcelery
.. _pipsi: https://github.com/mitsuhiko/pipsi
| 1 | DOCS : Missed a link from README. | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1954 | <NME> RedisSessionDAOTest.java
<BEF> package org.crazycake.shiro;
import org.apache.shiro.session.Session;
import org.apache.shiro.session.UnknownSessionException;
import org.crazycake.shiro.model.FakeSession;
import org.crazycake.shiro.serializer.StringSerializer;
import org.junit.Before;
import org.junit.Test;
import java.util.*;
import static fixture.TestFixture.scaffoldStandaloneRedisManager;
import static org.junit.Assert.fail;
import static fixture.TestFixture.*;
public class RedisSessionDAOTest {
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.CoreMatchers.*;
public class RedisSessionDAOTest {
private IRedisManager redisManager;
private StringSerializer keySerializer = new StringSerializer();
private ObjectSerializer valueSerializer = new ObjectSerializer();
@BeforeEach
public void setUp() {
redisManager = mock(IRedisManager.class);
}
private RedisSessionDAO mountRedisSessionDAO(Integer expire) {
RedisSessionDAO redisSessionDAO = new RedisSessionDAO();
if (expire != null) {
redisSessionDAO.setExpire(expire);
}
redisSessionDAO.setKeyPrefix("student:");
redisSessionDAO.setRedisManager(redisManager);
return redisSessionDAO;
}
@Test
public void testUpdate() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(null);
StudentSession session = new StudentSession(99, 2000);
sessionDAO.update(session);
verify(redisManager).set(keySerializer.serialize("student:99"), valueSerializer.serialize(session), 2);
}
@Test
public void testUpdateByCustomExpire() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(3);
StudentSession session = new StudentSession(98, 2000);
sessionDAO.update(session);
verify(redisManager).set(keySerializer.serialize("student:98"), valueSerializer.serialize(session), 3);
}
@Test
public void testUpdateByNoExpire() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(-1);
StudentSession session = new StudentSession(97, 2000);
sessionDAO.update(session);
verify(redisManager).set(keySerializer.serialize("student:97"), valueSerializer.serialize(session), -1);
}
@Test
public void testDelete() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(null);
StudentSession session = new StudentSession(96, 1000);
sessionDAO.delete(session);
verify(redisManager).del(keySerializer.serialize("student:96"));
}
@Test
public void testGetActiveSessions() throws SerializationException {
Set<byte[]> mockKeys = new HashSet<byte[]>();
mockKeys.add(keySerializer.serialize("student:1"));
mockKeys.add(keySerializer.serialize("student:2"));
when(redisManager.keys(keySerializer.serialize("student:*"))).thenReturn(mockKeys);
StudentSession mockSession1 = new StudentSession(1, 2000);
StudentSession mockSession2 = new StudentSession(2, 2000);
when(redisManager.get(keySerializer.serialize("student:1"))).thenReturn(valueSerializer.serialize(mockSession1));
when(redisManager.get(keySerializer.serialize("student:2"))).thenReturn(valueSerializer.serialize(mockSession2));
RedisSessionDAO sessionDAO = mountRedisSessionDAO(null);
assertThat(sessionDAO.getActiveSessions().size(), is(2));
}
}
class StudentSession implements Session, Serializable {
private Integer id;
private long timeout;
public StudentSession(Integer id, long timeout) {
this.id = id;
this.timeout = timeout;
}
@Override
public Serializable getId() {
return id;
}
@Override
public Date getStartTimestamp() {
return null;
}
@Override
public Date getLastAccessTime() {
return null;
}
@Override
public long getTimeout() throws InvalidSessionException {
return timeout;
}
@Override
Collection<Session> activeSessions = redisSessionDAO.getActiveSessions();
assertEquals(activeSessions.size(), 2);
}
}
}
@Override
public Collection<Object> getAttributeKeys() throws InvalidSessionException {
return null;
}
@Override
public Object getAttribute(Object o) throws InvalidSessionException {
return null;
}
@Override
public void setAttribute(Object o, Object o1) throws InvalidSessionException {
}
@Override
public Object removeAttribute(Object o) throws InvalidSessionException {
return null;
}
}
<MSG> Remove expired sessionInMap when there is new session in sessionInMap
<DFF> @@ -2,14 +2,18 @@ package org.crazycake.shiro;
import org.apache.shiro.session.Session;
import org.apache.shiro.session.UnknownSessionException;
+import org.crazycake.shiro.exception.SerializationException;
import org.crazycake.shiro.model.FakeSession;
import org.crazycake.shiro.serializer.StringSerializer;
import org.junit.Before;
import org.junit.Test;
-import java.util.*;
-import static fixture.TestFixture.scaffoldStandaloneRedisManager;
-import static org.junit.Assert.fail;
+
+import java.io.Serializable;
+import java.util.Collection;
+import java.util.Map;
+
import static fixture.TestFixture.*;
+import static org.junit.Assert.fail;
public class RedisSessionDAOTest {
@@ -118,4 +122,16 @@ public class RedisSessionDAOTest {
Collection<Session> activeSessions = redisSessionDAO.getActiveSessions();
assertEquals(activeSessions.size(), 2);
}
+
+ @Test
+ public void testRemoveExpiredSessionInMemory() throws InterruptedException, SerializationException {
+ redisSessionDAO.setSessionInMemoryTimeout(500L);
+ redisSessionDAO.doCreate(session1);
+ redisSessionDAO.doReadSession(session1.getId());
+ Thread.sleep(1000);
+ redisSessionDAO.doCreate(session2);
+ redisSessionDAO.doReadSession(session2.getId());
+ Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) redisSessionDAO.getSessionsInThread().get();
+ assertEquals(sessionMap.size(), 1);
+ }
}
| 19 | Remove expired sessionInMap when there is new session in sessionInMap | 3 | .java | java | mit | alexxiyang/shiro-redis |
1955 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
securityManager.cacheManager = $cacheManager
```
If you found any bugs
===========
<MSG> try to remember
<DFF> @@ -23,7 +23,6 @@ cacheManager.expire=5
securityManager.cacheManager = $cacheManager
```
-
If you found any bugs
===========
| 0 | try to remember | 1 | .md | md | mit | alexxiyang/shiro-redis |
1956 | <NME> README.md
<BEF> shiro-redis
===========
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
<MSG> Update README.md
<DFF> @@ -2,3 +2,19 @@ shiro-redis
===========
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
+
+How to use it?
+===========
+
+edit in shiro.ini
+
+#required
+cacheManager = org.yqr.shiro.RedisCacheManager
+#optional if you don't specify host the default value is 127.0.0.1
+cacheManager.host=127.0.0.1
+#optional , default value: 6379
+cacheManager.port=6379
+#optional, default value:0 .The expire time is in second
+cacheManager.expire=5
+#required
+securityManager.cacheManager = $cacheManager
| 16 | Update README.md | 0 | .md | md | mit | alexxiyang/shiro-redis |
1957 | <NME> RedisCacheManager.java
<BEF> package org.yqr.shiro;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.apache.shiro.cache.CacheManager;
import org.crazycake.shiro.serializer.ObjectSerializer;
import org.crazycake.shiro.serializer.RedisSerializer;
import org.crazycake.shiro.serializer.StringSerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
public class RedisCacheManager implements CacheManager {
private final Logger logger = LoggerFactory.getLogger(RedisCacheManager.class);
// fast lookup by name map
private final ConcurrentMap<String, Cache> caches = new ConcurrentHashMap<>();
private RedisSerializer keySerializer = new StringSerializer();
private RedisSerializer valueSerializer = new ObjectSerializer();
private IRedisManager redisManager;
// expire time in seconds
public static final int DEFAULT_EXPIRE = 1800;
private int expire = DEFAULT_EXPIRE;
/**
* The Redis key prefix for caches
*/
public static final String DEFAULT_CACHE_KEY_PREFIX = "shiro:cache:";
private String keyPrefix = DEFAULT_CACHE_KEY_PREFIX;
public static final String DEFAULT_PRINCIPAL_ID_FIELD_NAME = "id";
private String principalIdFieldName = DEFAULT_PRINCIPAL_ID_FIELD_NAME;
@Override
public <K, V> Cache<K, V> getCache(String name) throws CacheException {
logger.debug("get cache, name=" + name);
Cache<K, V> cache = caches.get(name);
if (cache == null) {
cache = new RedisCache<K, V>(redisManager, keySerializer, valueSerializer, keyPrefix + name + ":", expire, principalIdFieldName);
caches.put(name, cache);
}
return cache;
}
public IRedisManager getRedisManager() {
return redisManager;
}
public void setRedisManager(IRedisManager redisManager) {
this.redisManager = redisManager;
}
public String getKeyPrefix() {
return keyPrefix;
}
public void setKeyPrefix(String keyPrefix) {
this.keyPrefix = keyPrefix;
}
public RedisSerializer getKeySerializer() {
return keySerializer;
}
public void setKeySerializer(RedisSerializer keySerializer) {
this.keySerializer = keySerializer;
}
public RedisSerializer getValueSerializer() {
return valueSerializer;
}
public void setValueSerializer(RedisSerializer valueSerializer) {
this.valueSerializer = valueSerializer;
}
public int getExpire() {
return expire;
}
public void setExpire(int expire) {
this.expire = expire;
}
public String getPrincipalIdFieldName() {
return principalIdFieldName;
}
public void setPrincipalIdFieldName(String principalIdFieldName) {
this.principalIdFieldName = principalIdFieldName;
}
}
<MSG> chang group id
<DFF> @@ -1,4 +1,4 @@
-package org.yqr.shiro;
+package org.crazycake.shiro;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
| 1 | chang group id | 1 | .java | java | mit | alexxiyang/shiro-redis |
1958 | <NME> shiro.ini
<BEF> ADDFILE
<MSG> Merge pull request #1 from theChefArchitect/master
Jedis upgrade & custom Redis key prefixes
<DFF> @@ -0,0 +1,46 @@
+# =============================================================================
+# Tutorial INI configuration
+#
+# Usernames/passwords are based on the classic Mel Brooks' film "Spaceballs" :)
+# =============================================================================
+
+# -----------------------------------------------------------------------------
+# Users and their (optional) assigned roles
+# username = password, role1, role2, ..., roleN
+# -----------------------------------------------------------------------------
+[users]
+root = secret, admin
+guest = guest, guest
+presidentskroob = 12345, president
+darkhelmet = ludicrousspeed, darklord, schwartz
+lonestarr = vespa, goodguy, schwartz
+
+# -----------------------------------------------------------------------------
+# Roles with assigned permissions
+# roleName = perm1, perm2, ..., permN
+# -----------------------------------------------------------------------------
+[roles]
+admin = *
+schwartz = lightsaber:*
+goodguy = winnebago:drive:eagle5
+
+[main]
+
+redisManager = org.crazycake.shiro.RedisManager
+redisManager.host = localhost
+redisManager.port = 6379
+
+shiroCacheManager = org.crazycake.shiro.RedisCacheManager
+shiroCacheManager.redisManager = $redisManager
+shiroCacheManager.keyPrefix = users:security:authz:
+
+sessionDAO = org.crazycake.shiro.RedisSessionDAO
+sessionDAO.redisManager = $redisManager
+sessionDAO.keyPrefix = users:security:sessions:
+
+sessionManager = org.apache.shiro.session.mgt.DefaultSessionManager
+sessionManager.sessionDAO = $sessionDAO
+
+# Use the configured native session manager:
+securityManager.sessionManager = $sessionManager
+securityManager.cacheManager = $shiroCacheManager
| 46 | Merge pull request #1 from theChefArchitect/master | 0 | .ini | ini | mit | alexxiyang/shiro-redis |
1959 | <NME> commandline.py
<BEF> """Command line interface to hitch."""
from click import command, group, argument, option
from subprocess import call, check_output, PIPE
from os import path, makedirs
from sys import stderr, exit
import hitchdir
import shutil
import signal
@group()
def cli():
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
def check_output(command, stdout=PIPE, stderr=PIPE):
"""Re-implemented subprocess.check_output since it is not available < python 2.7."""
return Popen(command, stdout=stdout, stderr=stderr).communicate()[0]
def check_call(command, shell=False):
"""Re-implemented subprocess.check_call since it is not available < python 2.7."""
process = Popen(command, shell=shell)
process.communicate()
if process.returncode != 0:
raise CalledProcessError
return
def stop_everything(sig, frame):
"""Exit hitch."""
exit(1)
if path.exists("hitchreqs.txt"):
call([pip, "install", "-r", "hitchreqs.txt"])
else:
call([pip, "install", "hitch"])
call([pip, "install", "hitchserve"])
pip_freeze = check_output([pip, "freeze"])
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
# TODO: Repeat and get % pass/fail.
@command()
@argument('filename')
@option('-y', '--yaml', is_flag=True, help='Output the YAML test (for debugging).')
@option('-s', '--settings', default=None, help="Load settings from file.")
@option('-e', '--extra', default=None, help="""Load extra vars on command line as JSON (e.g. --extra '{"postgres_version": "3.5.5"}'""")
def test(filename, yaml, settings, extra):
"""Run test"""
if filename.endswith(".yml") and path.exists(filename):
python = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "test")
command = [python, path.abspath(filename), ]
if yaml:
command = command + ['--yaml', ]
if settings is not None:
command = command + ['--settings', settings, ]
if extra is not None:
command = command + ['--extra', extra, ]
return_code = call(command)
exit(return_code)
else:
stderr.write("I didn't understand {}\n".format(filename))
stderr.flush()
exit(1)
@command()
@argument('package', required=True)
'-p', '--python', default=None,
help=languagestrings.SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH
)
@option(
'-v', '--virtualenv', default=None,
help=languagestrings.SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH
)
def init(python, virtualenv):
"""Initialize hitch in this directory."""
if virtualenv is None:
if call(["which", "virtualenv"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_VIRTUALENV_INSTALLED)
stderr.flush()
exit(1)
virtualenv = check_output(["which", "virtualenv"]).decode('utf8').replace("\n", "")
else:
if path.exists(virtualenv):
if python is None:
python = path.join(path.dirname(virtualenv), "python")
else:
stderr.write("{0} not found.\n".format(virtualenv))
if python is None:
if call(["which", "python3"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_PYTHON3_INSTALLED)
stderr.flush()
exit(1)
python3 = check_output(["which", "python3"]).decode('utf8').replace("\n", "")
else:
if path.exists(python):
python3 = python
else:
stderr.write("{0} not found.\n".format(python))
exit(1)
python_version = check_output([python3, "-V"], stderr=STDOUT).decode('utf8')
replacements = ('Python ', ''), ('\n', '')
signal.signal(signal.SIGINT, signal.SIG_IGN)
signal.signal(signal.SIGTERM, signal.SIG_IGN)
cli.add_command(init)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(test)
cli.add_command(clean)
cli.add_command(freeze)
cli()
if __name__ == '__main__':
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
signal.signal(signal.SIGINT, stop_everything)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
# Generic method to run any installed app in the virtualenv whose name starts with hitch*
hitchdir.check_hitch_directory_integrity()
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{0}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
hitchdir.check_hitch_directory_integrity()
pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
@command()
@argument('package', required=True)
def install(package):
"""Install hitch package."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
package_list = [
p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
if p != "" and "==" in p
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def freeze():
"""List installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
if path.exists(python_bin):
packages = [
package.replace("hitch", "") for package in listdir(
path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
)
if package.startswith("hitch") and package != "hitch"
]
# Add commands that start with "hitch" to the list of commands available (e.g. hitchtest, hitchsmtp)
for package in packages:
cmd = copy.deepcopy(runpackage)
cmd.name = package
try:
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> FEATURE : Decoupled CLI for hitch test (and other hitch apps) from this app, meaning it should need less frequent updates
<DFF> @@ -1,11 +1,13 @@
-"""Command line interface to hitch."""
+"""High level command line interface to hitch."""
+from subprocess import call, check_output, PIPE, CalledProcessError
from click import command, group, argument, option
-from subprocess import call, check_output, PIPE
-from os import path, makedirs
-from sys import stderr, exit
+from sys import stderr, exit, modules, argv
+from os import path, makedirs, listdir
import hitchdir
import shutil
import signal
+import copy
+
@group()
def cli():
@@ -34,40 +36,34 @@ def init():
if path.exists("hitchreqs.txt"):
call([pip, "install", "-r", "hitchreqs.txt"])
else:
- call([pip, "install", "hitch"])
- call([pip, "install", "hitchserve"])
+ call([pip, "install", "hitchtest"])
pip_freeze = check_output([pip, "freeze"])
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
+def update_requirements():
+ """Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
+ pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
+ hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
+ pip_freeze = check_output([pip, "freeze"])
+ hitchreqs_handle = ""
+ with open(hitchreqs_filename, "r") as hitchreqs_handle:
+ hitchreqs = hitchreqs_handle.read()
+ if not pip_freeze == hitchreqs:
+ call([pip, "install", "-r", "hitchreqs.txt"])
-# TODO: Repeat and get % pass/fail.
-
-@command()
-@argument('filename')
-@option('-y', '--yaml', is_flag=True, help='Output the YAML test (for debugging).')
-@option('-s', '--settings', default=None, help="Load settings from file.")
-@option('-e', '--extra', default=None, help="""Load extra vars on command line as JSON (e.g. --extra '{"postgres_version": "3.5.5"}'""")
-def test(filename, yaml, settings, extra):
- """Run test"""
- if filename.endswith(".yml") and path.exists(filename):
- python = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "test")
- command = [python, path.abspath(filename), ]
- if yaml:
- command = command + ['--yaml', ]
- if settings is not None:
- command = command + ['--settings', settings, ]
- if extra is not None:
- command = command + ['--extra', extra, ]
- return_code = call(command)
- exit(return_code)
- else:
- stderr.write("I didn't understand {}\n".format(filename))
- stderr.flush()
- exit(1)
+@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
+@argument('arguments', nargs=-1)
+def runpackage(arguments):
+ # Generic method to run any installed app in the virtualenv whose name starts with hitch*
+ update_requirements()
+ binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{}".format(argv[1]))
+ command = [binfile, ] + argv[2:]
+ return_code = call(command)
+ exit(return_code)
@command()
@argument('package', required=True)
@@ -111,12 +107,40 @@ def run():
signal.signal(signal.SIGINT, signal.SIG_IGN)
signal.signal(signal.SIGTERM, signal.SIG_IGN)
+ if hitchdir.hitch_exists():
+ # Get packages from bin folder that are hitch related
+ python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
+ packages = [
+ package.replace("hitch", "") for package in listdir(
+ path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
+ )
+ if package.startswith("hitch") and package != "hitch"
+ ]
+
+ # Add packages that start with hitch* to the list of commands available
+ for package in packages:
+ cmd = copy.deepcopy(runpackage)
+ cmd.name = package
+ try:
+ description = check_output([
+ python_bin, '-c',
+ 'import sys;sys.stdout.write(__import__("hitch{}").commandline.cli.help)'.format(
+ package
+ )
+ ], stderr=PIPE)
+ except CalledProcessError:
+ description = ""
+ cmd.help = description
+ cmd.short_help = description
+ cli.add_command(cmd)
+
+
+ cli.add_command(install)
+ cli.add_command(uninstall)
+ cli.add_command(clean)
+ cli.add_command(freeze)
+
cli.add_command(init)
- cli.add_command(install)
- cli.add_command(uninstall)
- cli.add_command(test)
- cli.add_command(clean)
- cli.add_command(freeze)
cli()
if __name__ == '__main__':
| 59 | FEATURE : Decoupled CLI for hitch test (and other hitch apps) from this app, meaning it should need less frequent updates | 35 | .py | py | agpl-3.0 | hitchtest/hitch |
1960 | <NME> contributors.rst
<BEF> Contributors
============
* Jon Hadfield @jonhadfield
* Omer Katz @omerkatz
Additional thanks to
====================
* Phoebe Bright @phoebebright
* Rui Pacheco @ruipacheco
* Andy Baker @andybak
* Audrey Roy Greenfeld @audreyr
* Daniel Greenfeld @pydanny
<MSG> Added new contributor
<DFF> @@ -3,6 +3,7 @@ Contributors
* Jon Hadfield @jonhadfield
* Omer Katz @omerkatz
+* Flavio Curella @fcurella
Additional thanks to
| 1 | Added new contributor | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1961 | <NME> unit_overtesting.rst
<BEF> ADDFILE
<MSG> DOCS : More glossary
<DFF> @@ -0,0 +1,9 @@
+Unit Overtesting
+================
+
+Unit overtesting is a :doc:`testing_antipattern` where unit tests are
+overused to test :doc:`integration_code`.
+
+Unit tests used to test integration code do provide protection from
+:doc:`logical_bugs` but they usually do not catch :doc:`integration_bugs`.
+Integration tests will catch both logical bugs and integration bugs.
| 9 | DOCS : More glossary | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1962 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.4.6",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
if version_info[0] == 3:
if version_info[1] < 3:
stderr.write("The hitch bootstrapper will not run on python 3.0.x, 3.1.x or 3.2.x.\n")
exit(1)
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> RELEASE : Bumped version.
<DFF> @@ -13,7 +13,7 @@ def read(*parts):
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
- version="0.4.6",
+ version="0.4.7",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
| 1 | RELEASE : Bumped version. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1963 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
import subprocess
import codecs
import sys
import os
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
if version_info[0] == 3:
setup(name="hitch",
version="0.4.9",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> BUG : Fixed bootstrap so that it works under python 2.6 and added switch to let you pick a virtualenv.
<DFF> @@ -2,11 +2,20 @@
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
-import subprocess
+from sys import version_info, stderr, exit
import codecs
import sys
import os
+if version_info[0] == 2:
+ if version_info[1] < 6:
+ stderr.write("Hitch will not run on python 2 versions below 2.6 or python 3 versions below 3.3.\n")
+ exit(1)
+if version_info[0] == 3:
+ if version_info[1] < 3:
+ stderr.write("Hitch will not run on python 2 versions below 2.6 or python 3 versions below 3.3.\n")
+ exit(1)
+
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
@@ -14,7 +23,7 @@ def read(*parts):
setup(name="hitch",
version="0.4.9",
- description="Loosely coupled testing framework",
+ description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
| 11 | BUG : Fixed bootstrap so that it works under python 2.6 and added switch to let you pick a virtualenv. | 2 | .py | py | agpl-3.0 | hitchtest/hitch |
1964 | <NME> AppTest.java
<BEF> ADDFILE
<MSG> initailcommit
<DFF> @@ -0,0 +1,38 @@
+package org.yqr.shiro_redis;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Unit test for simple App.
+ */
+public class AppTest
+ extends TestCase
+{
+ /**
+ * Create the test case
+ *
+ * @param testName name of the test case
+ */
+ public AppTest( String testName )
+ {
+ super( testName );
+ }
+
+ /**
+ * @return the suite of tests being tested
+ */
+ public static Test suite()
+ {
+ return new TestSuite( AppTest.class );
+ }
+
+ /**
+ * Rigourous Test :-)
+ */
+ public void testApp()
+ {
+ assertTrue( true );
+ }
+}
| 38 | initailcommit | 0 | .java | java | mit | alexxiyang/shiro-redis |
1965 | <NME> hitchselenium.rst
<BEF> HitchSelenium
=============
.. note::
This documentation applies to the latest version of hitchselenium.
HitchSelenium is a :doc:`/glossary/hitch_plugin` specifically to make testing web applications easier.
It contains:
* A :doc:`/glossary/service` to run firefox and provide access to its webdriver.
* A :doc:`/glossary/step_library` to perform common actions with a selenium webdriver.
Installation
------------
Install the hitch selenium plugin like so::
$ hitch install hitchselenium
.. note::
If you are running Mac OS X you must download and install firefox *manually* before running a test that uses hitchselenium.
Set up the Firefox service
--------------------------
To use, define the service after initializing the :doc:`/api/service_bundle`:
.. code-block:: python
To use, define the service after initializing the :doc:`/glossary/service_bundle`:
.. code-block:: python
import hitchselenium
# Service definition in engine's setUp:
self.services['Firefox'] = hitchselenium.SeleniumService(
xvfb=False # Optional (default: False). If true, this will run firefox hidden (only available on Linux).
shunt_window=True # Optional (default: True). This will move the window out of the way of the mouse, to coordinates (0, 0).
implicitly_wait=5.0 # Optional (default: 5.0). Set implicitly_wait value of the selenium driver.
)
After the service bundle has been started, you can access the selenium webdriver like so:
.. code-block:: python
self.driver = self.services['Firefox'].driver
Interacting with Firefox
------------------------
You can then interact with firefox via the selenium webdriver in your steps or with :doc:`/glossary/ipython`::
In [2]: self.driver.get("http://localhost:8080/")
[ Opens http://localhost:8080 in firefox ]
In [3]: self.driver.find_element_by_id("id_description").send_keys("type something...")
[ Find element with ID description and types "type something" ]
The full selenium driver docs are available here: https://selenium-python.readthedocs.org/en/latest/navigating.html
Using the selenium step library
-------------------------------
Using the selenium web driver can be cumbersome, so there is also a step library provided with steps that,
when used correctly, should aid with :doc:`/glossary/test_readability`.
To set the selenium step library up in your test setup after the service bundle has been started:
.. code-block:: python
self.webapp = hitchselenium.SeleniumStepLibrary(
selenium_webdriver=self.services['Firefox'].driver,
wait_for_timeout=5,
)
self.click = self.webapp.click
self.wait_to_appear = self.webapp.wait_to_appear
self.wait_to_contain = self.webapp.wait_to_contain
self.wait_for_any_to_contain = self.webapp.wait_for_any_to_contain
self.click_and_dont_wait_for_page_load = self.webapp.click_and_dont_wait_for_page_load
For instructions on how to use the step library in your steps see :doc:`/howto/web_applications`.
<MSG> DOCS : Added more explanations to the documentation.
<DFF> @@ -30,7 +30,7 @@ Install the hitch selenium plugin like so::
Set up the Firefox service
--------------------------
-To use, define the service after initializing the :doc:`/api/service_bundle`:
+To use, define the service after initializing the :doc:`/glossary/service_bundle`:
.. code-block:: python
| 1 | DOCS : Added more explanations to the documentation. | 1 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1966 | <NME> gen_lmdb.py
<BEF> # --------------------------------------------------------
# Cifar-10 for Dragon
# Copyright(c) 2017 SeetaTech
# Written by Ting Pan
# --------------------------------------------------------
""" Generate database """
import os
import sys
import time
import shutil
import tarfile
from six.moves import range as xrange
import cv2
ZFILL = 8
def untar(tar_file):
t = tarfile.open(tar_file)
t.extractall(path='data')
def wrapper_str(raw_str):
if sys.version_info >= (3, 0):
return raw_str.encode()
return raw_str
def extract_images():
prefix = 'data/cifar-10-batches-py'
batches = [os.path.join(prefix, 'data_batch_{}'.format(i)) for i in xrange(1, 6)]
batches += [os.path.join(prefix, 'test_batch')]
total_idx = 0
images_list = []
# process batches
for batch in batches:
with open(batch, 'rb') as f:
if sys.version_info >= (3, 0):
import pickle
with open(batch, 'rb') as f:
dict = pickle.load(f, encoding='bytes')
else:
import cPickle
with open(batch, 'rb') as f:
dict = cPickle.load(f)
for item_idx in xrange(len(dict[wrapper_str('labels')])):
im = dict[wrapper_str('data')][item_idx].reshape((3, 32, 32))
label = dict[wrapper_str('labels')][item_idx]
im = im.transpose((1, 2, 0))
im = im[:, :, ::-1]
images_list.append((im, str(label)))
total_idx += 1
return images_list
def make_db(images_list, database_path):
if os.path.isdir(database_path) is True:
raise ValueError('the database path is already exist.')
print('start time: ', time.strftime("%a, %d %b %Y %H:%M:%S", time.gmtime()))
db = LMDB(max_commit=10000)
db.open(database_path, mode='w')
total_line = len(images_list)
count = 0
zfill_flag = '{0:0%d}' % (ZFILL)
start_time = time.time()
for record in images_list:
count += 1
if count % 10000 == 0:
now_time = time.time()
f.write(item)
def make_db(image_path, label_path, database_path):
if os.path.isfile(label_path) is False:
raise ValueError('input path is empty or wrong.')
if os.path.isdir(database_path) is True:
datum = caffe_pb2.Datum()
datum.height, datum.width, datum.channels = img.shape
datum.label = int(label)
datum.encoded = False
datum.data = img.tostring()
db.put(zfill_flag.format(count - 1), datum.SerializeToString())
now_time = time.time()
print('{0} / {1} in {2:.2f} sec'.format(count, total_line, now_time - start_time))
db.put('size', wrapper_str(str(count)))
db.put('zfill', wrapper_str(str(ZFILL)))
db.commit()
db.close()
end_time = time.time()
print('{0} images have been stored in the database.'.format(total_line))
print('This task finishes within {0:.2f} seconds.'.format(
end_time - start_time))
print('The size of database is {0} MB.'.format(
float(os.path.getsize(database_path + '/data.mdb') / 1000 / 1000)))
if __name__ == '__main__':
untar('data/cifar-10-python.tar.gz')
label = record[1]
img = cv2.imread(os.path.join(image_path ,path))
result, imgencode = cv2.imencode('.jpg', img, encode_param)
datum = caffe_pb2.Datum()
<MSG> io refactoring
<DFF> @@ -11,6 +11,7 @@ import sys
import time
import shutil
import tarfile
+import numpy as np
from six.moves import range as xrange
import cv2
@@ -78,7 +79,7 @@ def extract_images():
f.write(item)
-def make_db(image_path, label_path, database_path):
+def make_db(image_path, label_path, database_path, pad=0):
if os.path.isfile(label_path) is False:
raise ValueError('input path is empty or wrong.')
if os.path.isdir(database_path) is True:
@@ -111,6 +112,12 @@ def make_db(image_path, label_path, database_path):
label = record[1]
img = cv2.imread(os.path.join(image_path ,path))
+ if pad > 0:
+ pad_img = np.zeros((img.shape[0] + 2 * pad,
+ img.shape[1] + 2 * pad, 3), dtype=np.uint8)
+ pad_img[pad : pad + img.shape[0],
+ pad : pad + img.shape[1], :] = img
+ img = pad_img
result, imgencode = cv2.imencode('.jpg', img, encode_param)
datum = caffe_pb2.Datum()
| 8 | io refactoring | 1 | .py | py | bsd-2-clause | neopenx/Dragon |
1967 | <NME> gen_lmdb.py
<BEF> # --------------------------------------------------------
# Cifar-10 for Dragon
# Copyright(c) 2017 SeetaTech
# Written by Ting Pan
# --------------------------------------------------------
""" Generate database """
import os
import sys
import time
import shutil
import tarfile
from six.moves import range as xrange
import cv2
ZFILL = 8
def untar(tar_file):
t = tarfile.open(tar_file)
t.extractall(path='data')
def wrapper_str(raw_str):
if sys.version_info >= (3, 0):
return raw_str.encode()
return raw_str
def extract_images():
prefix = 'data/cifar-10-batches-py'
batches = [os.path.join(prefix, 'data_batch_{}'.format(i)) for i in xrange(1, 6)]
batches += [os.path.join(prefix, 'test_batch')]
total_idx = 0
images_list = []
# process batches
for batch in batches:
with open(batch, 'rb') as f:
if sys.version_info >= (3, 0):
import pickle
with open(batch, 'rb') as f:
dict = pickle.load(f, encoding='bytes')
else:
import cPickle
with open(batch, 'rb') as f:
dict = cPickle.load(f)
for item_idx in xrange(len(dict[wrapper_str('labels')])):
im = dict[wrapper_str('data')][item_idx].reshape((3, 32, 32))
label = dict[wrapper_str('labels')][item_idx]
im = im.transpose((1, 2, 0))
im = im[:, :, ::-1]
images_list.append((im, str(label)))
total_idx += 1
return images_list
def make_db(images_list, database_path):
if os.path.isdir(database_path) is True:
raise ValueError('the database path is already exist.')
print('start time: ', time.strftime("%a, %d %b %Y %H:%M:%S", time.gmtime()))
db = LMDB(max_commit=10000)
db.open(database_path, mode='w')
total_line = len(images_list)
count = 0
zfill_flag = '{0:0%d}' % (ZFILL)
start_time = time.time()
for record in images_list:
count += 1
if count % 10000 == 0:
now_time = time.time()
f.write(item)
def make_db(image_path, label_path, database_path):
if os.path.isfile(label_path) is False:
raise ValueError('input path is empty or wrong.')
if os.path.isdir(database_path) is True:
datum = caffe_pb2.Datum()
datum.height, datum.width, datum.channels = img.shape
datum.label = int(label)
datum.encoded = False
datum.data = img.tostring()
db.put(zfill_flag.format(count - 1), datum.SerializeToString())
now_time = time.time()
print('{0} / {1} in {2:.2f} sec'.format(count, total_line, now_time - start_time))
db.put('size', wrapper_str(str(count)))
db.put('zfill', wrapper_str(str(ZFILL)))
db.commit()
db.close()
end_time = time.time()
print('{0} images have been stored in the database.'.format(total_line))
print('This task finishes within {0:.2f} seconds.'.format(
end_time - start_time))
print('The size of database is {0} MB.'.format(
float(os.path.getsize(database_path + '/data.mdb') / 1000 / 1000)))
if __name__ == '__main__':
untar('data/cifar-10-python.tar.gz')
label = record[1]
img = cv2.imread(os.path.join(image_path ,path))
result, imgencode = cv2.imencode('.jpg', img, encode_param)
datum = caffe_pb2.Datum()
<MSG> io refactoring
<DFF> @@ -11,6 +11,7 @@ import sys
import time
import shutil
import tarfile
+import numpy as np
from six.moves import range as xrange
import cv2
@@ -78,7 +79,7 @@ def extract_images():
f.write(item)
-def make_db(image_path, label_path, database_path):
+def make_db(image_path, label_path, database_path, pad=0):
if os.path.isfile(label_path) is False:
raise ValueError('input path is empty or wrong.')
if os.path.isdir(database_path) is True:
@@ -111,6 +112,12 @@ def make_db(image_path, label_path, database_path):
label = record[1]
img = cv2.imread(os.path.join(image_path ,path))
+ if pad > 0:
+ pad_img = np.zeros((img.shape[0] + 2 * pad,
+ img.shape[1] + 2 * pad, 3), dtype=np.uint8)
+ pad_img[pad : pad + img.shape[0],
+ pad : pad + img.shape[1], :] = img
+ img = pad_img
result, imgencode = cv2.imencode('.jpg', img, encode_param)
datum = caffe_pb2.Datum()
| 8 | io refactoring | 1 | .py | py | bsd-2-clause | neopenx/Dragon |
1968 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
exit(1)
if version_info[0] == 2:
if version_info[1] < 6:
stderr.write("The hitch bootstrapper will not run on versions of python below v2.6.\n")
exit(1)
if version_info[0] == 3:
if version_info[1] < 3:
stderr.write("The hitch bootstrapper will not run on python 3.0.x, 3.1.x or 3.2.x.\n")
exit(1)
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.6",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> RELEASE : Bumped version
<DFF> @@ -31,7 +31,7 @@ def read(*parts):
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
- version="0.5.6",
+ version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
| 1 | RELEASE : Bumped version | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1969 | <NME> index.rst
<BEF> ADDFILE
<MSG> DOCS : Updated documentation.
<DFF> @@ -0,0 +1,12 @@
+Hitch Services Documentation
+============================
+
+Documentation for Hitch service plugins.
+
+Contents:
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+
+ *
| 12 | DOCS : Updated documentation. | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1970 | <NME> README.md
<BEF> shiro-redis
[](https://travis-ci.org/alexxiyang/shiro-redis)
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
<MSG> Update README.md
change style
<DFF> @@ -1,4 +1,4 @@
-shiro-redis
+# shiro-redis
[](https://travis-ci.org/alexxiyang/shiro-redis)
| 1 | Update README.md | 1 | .md | md | mit | alexxiyang/shiro-redis |
1971 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
| Title | Default | Description |
| :--------------------------------------------------| :------------------- | :---------------------------|
| shiro-redis.enabled | `true` | Enables shiro-redis’s Spring module |
| shiro-redis.redis-anager.deploy-mode | `standalone` | Redis deploy mode. Options: `standalone`, `sentinel`, 'cluster' |
| shiro-redis.redis-anager.host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is `127.0.0.1:6379`. If you run redis in sentinel mode or cluster mode, separate host names with comma, like `127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381` |
| shiro-redis.redis-anager.master-name | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
| shiro-redis.redis-anager.timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
| shiro-redis.redis-anager.so-timeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
| shiro-redis.redis-anager.max-attempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
| shiro-redis.redis-anager.password | | Redis password |
| shiro-redis.redis-anager.database | `0` | Redis database. Default value is 0 |
| shiro-redis.redis-anager.count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
| shiro-redis.session-dao.expire | `-2` | Redis cache key/value expire time. The expire time is in second.<br>Special values:<br>`-1`: no expire<br>`-2`: the same timeout with session<br>Default value: `-2`<br>**Note**: Make sure expire time is longer than session timeout. |
| shiro-redis.session-dao.key-prefix | `shiro:session:` | Custom your redis key prefix for session management<br>**Note**: Remember to add colon at the end of prefix. |
| shiro-redis.session-dao.session-in-memory-timeout | `1000` | When we do signin, `doReadSession(sessionId)` will be called by shiro about 10 times. So shiro-redis save Session in ThreadLocal to remit this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal. <br>Most of time, you don't need to change it. |
<MSG> README.md 中 redis-manager 写成 redis-anager
<DFF> @@ -409,15 +409,15 @@ For full example, see [shiro-redis-spring-boot-tutorial](https://github.com/alex
| Title | Default | Description |
| :--------------------------------------------------| :------------------- | :---------------------------|
| shiro-redis.enabled | `true` | Enables shiro-redis’s Spring module |
-| shiro-redis.redis-anager.deploy-mode | `standalone` | Redis deploy mode. Options: `standalone`, `sentinel`, 'cluster' |
-| shiro-redis.redis-anager.host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is `127.0.0.1:6379`. If you run redis in sentinel mode or cluster mode, separate host names with comma, like `127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381` |
-| shiro-redis.redis-anager.master-name | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
-| shiro-redis.redis-anager.timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
-| shiro-redis.redis-anager.so-timeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
-| shiro-redis.redis-anager.max-attempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
-| shiro-redis.redis-anager.password | | Redis password |
-| shiro-redis.redis-anager.database | `0` | Redis database. Default value is 0 |
-| shiro-redis.redis-anager.count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
+| shiro-redis.redis-manager.deploy-mode | `standalone` | Redis deploy mode. Options: `standalone`, `sentinel`, 'cluster' |
+| shiro-redis.redis-manager.host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is `127.0.0.1:6379`. If you run redis in sentinel mode or cluster mode, separate host names with comma, like `127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381` |
+| shiro-redis.redis-manager.master-name | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
+| shiro-redis.redis-manager.timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
+| shiro-redis.redis-manager.so-timeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
+| shiro-redis.redis-manager.max-attempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
+| shiro-redis.redis-manager.password | | Redis password |
+| shiro-redis.redis-manager.database | `0` | Redis database. Default value is 0 |
+| shiro-redis.redis-manager.count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
| shiro-redis.session-dao.expire | `-2` | Redis cache key/value expire time. The expire time is in second.<br>Special values:<br>`-1`: no expire<br>`-2`: the same timeout with session<br>Default value: `-2`<br>**Note**: Make sure expire time is longer than session timeout. |
| shiro-redis.session-dao.key-prefix | `shiro:session:` | Custom your redis key prefix for session management<br>**Note**: Remember to add colon at the end of prefix. |
| shiro-redis.session-dao.session-in-memory-timeout | `1000` | When we do signin, `doReadSession(sessionId)` will be called by shiro about 10 times. So shiro-redis save Session in ThreadLocal to remit this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal. <br>Most of time, you don't need to change it. |
| 9 | README.md 中 redis-manager 写成 redis-anager | 9 | .md | md | mit | alexxiyang/shiro-redis |
1972 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
#====================================
#===================================
# Redis Manager
#===================================
# Create redisManager
# Redis host. If you don't specify host the default value is 127.0.0.1:6379
redisManager.host = 127.0.0.1:6379
# Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds).(Optional)
#
# redisManager.timeout = <timeout>
# Redis password.(Optional)
#
# redisManager.password = <password>
# Redis database. Default value is 0(Optional)
#
# redisManager.database = <database>
# JedisPoolConfig (Optional)
# Most of time, you don't need to set jedisPoolConfig
#
# jedisPoolConfig = redis.clients.jedis.JedisPoolConfig
# jedisPoolConfig.<attribute> = <value>
# redisManager.jedisPoolConfig = jedisPoolConfig
# Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. (Optional)
#
# redisManager.count = <count>
#====================================
# Redis-based session configuration
#====================================
# Create redisSessionDAO
redisSessionDAO = org.crazycake.shiro.RedisSessionDAO
# Redis cache key/value expire time. The expire time is in second.
# Special values:
# -1: no expire
# -2: the same timeout with session
# Default value: -2
# Note: Make sure expire time is longer than session timeout. (Optional)
#
# redisSessionDAO.expire = <expire>
# Custom your redis key prefix for session management
# Default value is "shiro:session:"
# Note: Remember to add colon at the end of prefix.
#
# redisSessionDAO.keyPrefix = <session key prefix>
# Use redisManager as cache manager
redisSessionDAO.redisManager = $redisManager
# doReadSession be called about 10 times when login. Save Session in ThreadLocal to resolve this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal.
# The default value is 1000 milliseconds (1s)
# Most of time, you don't need to change it.
#
# redisSessionDAO.sessionInMemoryTimeout = <timeout>
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
sessionManager.sessionDAO = $redisSessionDAO
securityManager.sessionManager = $sessionManager
#=====================================
# Redis-based cache configuration
#=====================================
# Create cacheManager
cacheManager = org.crazycake.shiro.RedisCacheManager
# Principal id field name. The field which you can get unique id to identify this principal.
# For example, if you use UserInfo as Principal class, the id field maybe userId, userName, email, etc.
# Remember to add getter to this id field. For example, getUserId(), getUserName(), getEmail(), etc.
# Default value is authCacheKey or id, that means your principal object has a method called "getAuthCacheKey()" or "getId()"
#
# cacheManager.principalIdFieldName = id
# Redis cache key/value expire time. Default value: 1800 .The expire time is in second. (Optional)
#
# cacheManager.expire = <expire>
# If you want change charset of keySerializer or use your own custom serializer, you need to define serializer first
#
# cacheManagerKeySerializer = org.crazycake.shiro.serializer.StringSerializer
# Refer to https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html
# UTF-8, UTF-16, UTF-32, ISO-8859-1, GBK, Big5, etc
#
# cacheManagerKeySerializer.charset = UTF-8
# cacheManager.keySerializer = $cacheManagerKeySerializer
# Custom your redis key prefix for cache management
# Default value is "shiro:cache:"
# Note: Remember to add colon at the end of prefix.
#
# cacheManager.keyPrefix = <cache key prefix>
# Use redisManager as cache manager
cacheManager.redisManager = $redisManager
securityManager.cacheManager = $cacheManager
#=================================
# shiro-redis configuration [end]
#=================================
```
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-tutorial) for you to understand how to configure `shiro-redis` in `shiro.ini`.
### Redis Sentinel
if you're using Redis Sentinel, please change the redisManager configuration into the following:
```properties
#===================================
# Redis Manager
#===================================
# Create redisManager
# Sentinel master name
redisManager.masterName = mymaster
# Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds).(Optional)
#
# redisManager.timeout = <timeout>
# Timeout for jedis try to read data from redis server (Optional)
#
# redisManager.soTimeout = <soTimeout>
# Redis password.(Optional)
#
# redisManager.password = <password>
# Redis database. Default value is 0 (Optional)
#
# redisManager.database = <database>
# JedisPoolConfig (Optional)
# Most of time, you don't need to set jedisPoolConfig
#
# jedisPoolConfig = redis.clients.jedis.JedisPoolConfig
# jedisPoolConfig.<attribute> = <value>
# redisManager.jedisPoolConfig = jedisPoolConfig
# Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. (Optional)
#
# redisManager.count = <count>
```
### Redis Cluster
If you're using redis cluster, here is an example of configuration :
```properties
# Create redisManager
redisManager = org.crazycake.shiro.RedisClusterManager
# Redis host and port list
redisManager.host = 192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005
# Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds).(Optional)
#
# redisManager.timeout = 2000
# timeout for jedis try to read data from redis server (Optional)
#
# redisManager.soTimeout = 2000
# max attempts to connect to server (Optional)
#
# redisManager.maxAttempts = 3
# Redis password.(Optional)
#
# redisManager.password = <password>
```
## Spring
### Redis Standalone
spring.xml:
```xml
<!-- shiro-redis configuration [start] -->
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisManager">
<property name="host" value="127.0.0.1:6379"/>
<!-- optional properties
<property name="timeout" value="10000"/>
<property name="password" value="123456"/>
<property name="database" value="1"/>
<property name="jedisPoolConfig" ref="jedisPoolConfig"/>
<property name="count" value="100"/>
-->
</bean>
<!-- Redis-based session configuration -->
<bean id="redisSessionDAO" class="org.crazycake.shiro.RedisSessionDAO">
<property name="redisManager" ref="redisManager" />
<!-- optional properties
<property name="expire" value="-2"/>
<property name="keyPrefix" value="shiro:session:" />
-->
</bean>
<bean id="sessionManager" class="org.apache.shiro.web.session.mgt.DefaultWebSessionManager">
<property name="sessionDAO" ref="redisSessionDAO" />
</bean>
<!-- Redis-based cache configuration -->
<bean id="cacheManager" class="org.crazycake.shiro.RedisCacheManager">
<property name="redisManager" ref="redisManager" />
<!-- optional properties
<property name="expire" value="1800"/>
<property name="keyPrefix" value="shiro:cache:" />
<property name="principalIdFieldName" value="id" />
-->
</bean>
<!-- securityManager -->
<bean id="securityManager" class="org.apache.shiro.web.mgt.DefaultWebSecurityManager">
<property name="sessionManager" ref="sessionManager" />
<property name="cacheManager" ref="cacheManager" />
<property name="realm" ref="exampleRealm"/>
<property name="rememberMeManager.cipherKey" value="kPH+bIxk5D2deZiIxcaaaA==" />
</bean>
<!-- shiro-redis configuration [end] -->
```
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-spring-tutorial) for you to understand how to configure `shiro-redis` in spring configuration file.
### Redis Sentinel
<bean id="redisManager" class="org.crazycake.shiro.RedisSentinelManager">
<property name="host" value="127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381"/>
<property name="masterName" value="mymaster"/>
<!-- optional properties
<property name="timeout" value="2000"/>
<property name="soTimeout" value="2000"/>
<property name="password" value=""/>
<property name="database" value="0"/>
<property name="count" value="100"/>
-->
</bean>
```
### Redis Cluster
If you use redis cluster, here is an example of configuration :
```xml
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisClusterManager">
<property name="host" value="192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005"/>
<!-- optional properties
<property name="timeout" value="10000"/>
<property name="soTimeout" value="10000"/>
<property name="maxAttempts" value="2"/>
<property name="password" value="123456"/>
-->
</bean>
```
## Serializer
Since redis only accept `byte[]`, there comes to a serializer problem.
Shiro-redis is using StringSerializer as key serializer and ObjectSerializer as value serializer.
You can use your own custom serializer, as long as this custom serializer implemens `org.crazycake.shiro.serializer.RedisSerializer`
For example, you need to change the charset of keySerializer.
```properties
# If you want change charset of keySerializer or use your own custom serializer, you need to define serializer first
#
# cacheManagerKeySerializer = org.crazycake.shiro.serializer.StringSerializer
# Refer to https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html
# UTF-8, UTF-16, UTF-32, ISO-8859-1, GBK, Big5, etc
#
# cacheManagerKeySerializer.charset = UTF-8
- redisSessionDAO.keySerializer
- redisSessionDAO.valueSerializer
# If you found any bugs
<MSG> Update README.md. Add configurable options section. Move most of attributes in example into configurable options.
<DFF> @@ -91,7 +91,7 @@ Here is the configuration for shiro.ini.
#====================================
#===================================
-# Redis Manager
+# Redis Manager [start]
#===================================
# Create redisManager
@@ -100,118 +100,67 @@ redisManager = org.crazycake.shiro.RedisManager
# Redis host. If you don't specify host the default value is 127.0.0.1:6379
redisManager.host = 127.0.0.1:6379
-# Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds).(Optional)
-#
-# redisManager.timeout = <timeout>
-
-# Redis password.(Optional)
-#
-# redisManager.password = <password>
-
-# Redis database. Default value is 0(Optional)
-#
-# redisManager.database = <database>
-
-# JedisPoolConfig (Optional)
-# Most of time, you don't need to set jedisPoolConfig
-#
-# jedisPoolConfig = redis.clients.jedis.JedisPoolConfig
-# jedisPoolConfig.<attribute> = <value>
-# redisManager.jedisPoolConfig = jedisPoolConfig
-
-# Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. (Optional)
-#
-# redisManager.count = <count>
+#===================================
+# Redis Manager [end]
+#===================================
-#====================================
-# Redis-based session configuration
-#====================================
+#=========================================
+# Redis session DAO [start]
+#=========================================
# Create redisSessionDAO
redisSessionDAO = org.crazycake.shiro.RedisSessionDAO
-# Redis cache key/value expire time. The expire time is in second.
-# Special values:
-# -1: no expire
-# -2: the same timeout with session
-# Default value: -2
-# Note: Make sure expire time is longer than session timeout. (Optional)
-#
-# redisSessionDAO.expire = <expire>
-
-# Custom your redis key prefix for session management
-# Default value is "shiro:session:"
-# Note: Remember to add colon at the end of prefix.
-#
-# redisSessionDAO.keyPrefix = <session key prefix>
-
# Use redisManager as cache manager
redisSessionDAO.redisManager = $redisManager
-# doReadSession be called about 10 times when login. Save Session in ThreadLocal to resolve this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal.
-# The default value is 1000 milliseconds (1s)
-# Most of time, you don't need to change it.
-#
-# redisSessionDAO.sessionInMemoryTimeout = <timeout>
-
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
sessionManager.sessionDAO = $redisSessionDAO
securityManager.sessionManager = $sessionManager
-#=====================================
-# Redis-based cache configuration
-#=====================================
+#=========================================
+# Redis session DAO [end]
+#=========================================
+
+#==========================================
+# Redis cache manager [start]
+#==========================================
# Create cacheManager
cacheManager = org.crazycake.shiro.RedisCacheManager
# Principal id field name. The field which you can get unique id to identify this principal.
-# For example, if you use UserInfo as Principal class, the id field maybe userId, userName, email, etc.
-# Remember to add getter to this id field. For example, getUserId(), getUserName(), getEmail(), etc.
-# Default value is authCacheKey or id, that means your principal object has a method called "getAuthCacheKey()" or "getId()"
-#
-# cacheManager.principalIdFieldName = id
-
-# Redis cache key/value expire time. Default value: 1800 .The expire time is in second. (Optional)
-#
-# cacheManager.expire = <expire>
-
-# If you want change charset of keySerializer or use your own custom serializer, you need to define serializer first
-#
-# cacheManagerKeySerializer = org.crazycake.shiro.serializer.StringSerializer
-
-# Refer to https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html
-# UTF-8, UTF-16, UTF-32, ISO-8859-1, GBK, Big5, etc
+# For example, if you use UserInfo as Principal class, the id field maybe `id`, `userId`, `email`, etc.
+# Remember to add getter to this id field. For example, `getId()`, `getUserId()`, `getEmail()`, etc.
+# Default value is id, that means your principal object must has a method called `getId()`
#
-# cacheManagerKeySerializer.charset = UTF-8
-
-# cacheManager.keySerializer = $cacheManagerKeySerializer
-
-# Custom your redis key prefix for cache management
-# Default value is "shiro:cache:"
-# Note: Remember to add colon at the end of prefix.
-#
-# cacheManager.keyPrefix = <cache key prefix>
+cacheManager.principalIdFieldName = id
# Use redisManager as cache manager
cacheManager.redisManager = $redisManager
securityManager.cacheManager = $cacheManager
+#==========================================
+# Redis cache manager [end]
+#==========================================
+
#=================================
# shiro-redis configuration [end]
#=================================
```
+For complete configurable options list, check [Configurable Options](#configurable-options).
+
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-tutorial) for you to understand how to configure `shiro-redis` in `shiro.ini`.
### Redis Sentinel
if you're using Redis Sentinel, please change the redisManager configuration into the following:
```properties
#===================================
-# Redis Manager
+# Redis Manager [start]
#===================================
# Create redisManager
@@ -223,103 +172,62 @@ redisManager.host = 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
# Sentinel master name
redisManager.masterName = mymaster
-# Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds).(Optional)
-#
-# redisManager.timeout = <timeout>
-
-# Timeout for jedis try to read data from redis server (Optional)
-#
-# redisManager.soTimeout = <soTimeout>
-
-# Redis password.(Optional)
-#
-# redisManager.password = <password>
-
-# Redis database. Default value is 0 (Optional)
-#
-# redisManager.database = <database>
-
-# JedisPoolConfig (Optional)
-# Most of time, you don't need to set jedisPoolConfig
-#
-# jedisPoolConfig = redis.clients.jedis.JedisPoolConfig
-# jedisPoolConfig.<attribute> = <value>
-# redisManager.jedisPoolConfig = jedisPoolConfig
-
-# Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. (Optional)
-#
-# redisManager.count = <count>
+#===================================
+# Redis Manager [end]
+#===================================
```
+For complete configurable options list, check [Configurable Options](#configurable-options).
+
### Redis Cluster
If you're using redis cluster, here is an example of configuration :
```properties
+#===================================
+# Redis Manager [start]
+#===================================
+
# Create redisManager
redisManager = org.crazycake.shiro.RedisClusterManager
# Redis host and port list
redisManager.host = 192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005
-# Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds).(Optional)
-#
-# redisManager.timeout = 2000
-
-# timeout for jedis try to read data from redis server (Optional)
-#
-# redisManager.soTimeout = 2000
-
-# max attempts to connect to server (Optional)
-#
-# redisManager.maxAttempts = 3
-
-# Redis password.(Optional)
-#
-# redisManager.password = <password>
-
+#===================================
+# Redis Manager [end]
+#===================================
```
+For complete configurable options list, check [Configurable Options](#configurable-options).
+
## Spring
### Redis Standalone
spring.xml:
```xml
<!-- shiro-redis configuration [start] -->
-<!-- shiro redisManager -->
+
+<!-- Redis Manager [start] -->
<bean id="redisManager" class="org.crazycake.shiro.RedisManager">
<property name="host" value="127.0.0.1:6379"/>
- <!-- optional properties
- <property name="timeout" value="10000"/>
- <property name="password" value="123456"/>
- <property name="database" value="1"/>
- <property name="jedisPoolConfig" ref="jedisPoolConfig"/>
- <property name="count" value="100"/>
- -->
</bean>
+<!-- Redis Manager [end] -->
-<!-- Redis-based session configuration -->
+<!-- Redis session DAO [start] -->
<bean id="redisSessionDAO" class="org.crazycake.shiro.RedisSessionDAO">
<property name="redisManager" ref="redisManager" />
- <!-- optional properties
- <property name="expire" value="-2"/>
- <property name="keyPrefix" value="shiro:session:" />
- -->
</bean>
<bean id="sessionManager" class="org.apache.shiro.web.session.mgt.DefaultWebSessionManager">
<property name="sessionDAO" ref="redisSessionDAO" />
</bean>
+<!-- Redis session DAO [end] -->
-<!-- Redis-based cache configuration -->
+<!-- Redis cache manager [start] -->
<bean id="cacheManager" class="org.crazycake.shiro.RedisCacheManager">
<property name="redisManager" ref="redisManager" />
- <!-- optional properties
- <property name="expire" value="1800"/>
- <property name="keyPrefix" value="shiro:cache:" />
- <property name="principalIdFieldName" value="id" />
- -->
</bean>
+<!-- Redis cache manager [end] -->
-<!-- securityManager -->
<bean id="securityManager" class="org.apache.shiro.web.mgt.DefaultWebSecurityManager">
<property name="sessionManager" ref="sessionManager" />
<property name="cacheManager" ref="cacheManager" />
@@ -328,9 +236,12 @@ spring.xml:
<property name="realm" ref="exampleRealm"/>
<property name="rememberMeManager.cipherKey" value="kPH+bIxk5D2deZiIxcaaaA==" />
</bean>
+
<!-- shiro-redis configuration [end] -->
```
+For complete configurable options list, check [Configurable Options](#configurable-options).
+
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-spring-tutorial) for you to understand how to configure `shiro-redis` in spring configuration file.
### Redis Sentinel
@@ -341,16 +252,11 @@ If you use redis sentinel, here is an example of configuration :
<bean id="redisManager" class="org.crazycake.shiro.RedisSentinelManager">
<property name="host" value="127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381"/>
<property name="masterName" value="mymaster"/>
- <!-- optional properties
- <property name="timeout" value="2000"/>
- <property name="soTimeout" value="2000"/>
- <property name="password" value=""/>
- <property name="database" value="0"/>
- <property name="count" value="100"/>
- -->
</bean>
```
+For complete configurable options list, check [Configurable Options](#configurable-options).
+
### Redis Cluster
If you use redis cluster, here is an example of configuration :
```xml
@@ -358,27 +264,23 @@ If you use redis cluster, here is an example of configuration :
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisClusterManager">
<property name="host" value="192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005"/>
- <!-- optional properties
- <property name="timeout" value="10000"/>
- <property name="soTimeout" value="10000"/>
- <property name="maxAttempts" value="2"/>
- <property name="password" value="123456"/>
- -->
</bean>
```
+For complete configurable options list, check [Configurable Options](#configurable-options).
+
## Serializer
Since redis only accept `byte[]`, there comes to a serializer problem.
Shiro-redis is using StringSerializer as key serializer and ObjectSerializer as value serializer.
You can use your own custom serializer, as long as this custom serializer implemens `org.crazycake.shiro.serializer.RedisSerializer`
-For example, you need to change the charset of keySerializer.
+For example, let's change the charset of keySerializer.
```properties
# If you want change charset of keySerializer or use your own custom serializer, you need to define serializer first
#
# cacheManagerKeySerializer = org.crazycake.shiro.serializer.StringSerializer
-# Refer to https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html
+# Supported encodings refer to https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html
# UTF-8, UTF-16, UTF-32, ISO-8859-1, GBK, Big5, etc
#
# cacheManagerKeySerializer.charset = UTF-8
@@ -392,6 +294,44 @@ These 4 Serializers are replaceable:
- redisSessionDAO.keySerializer
- redisSessionDAO.valueSerializer
+## Configurable Options
+
+### RedisManager
+
+| Title | Default | Description |
+| :------------------| :------------------- | :---------------------------|
+| host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is 127.0.0.1:6379. If you run redis in sentinel mode or cluster mode, separate host names with comma, like 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381 |
+| masterName | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
+| timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
+| soTimeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
+| maxAttempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
+| password | | Redis password |
+| database | `0` | Redis database. Default value is 0 |
+| jedisPoolConfig | `new redis.clients.jedis.JedisPoolConfig()` | JedisPoolConfig. You can create your own JedisPoolConfig and set attributes as you wish<br>Most of time, you don't need to set jedisPoolConfig<br>Here is an example.<br><pre>jedisPoolConfig = redis.clients.jedis.JedisPoolConfig<br>jedisPoolConfig.testWhileIdle = false<br>redisManager.jedisPoolConfig = jedisPoolConfig</pre> |
+| count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
+
+### RedisSessionDAO
+
+| Title | Default | Description |
+| :------------------| :------------------- | :---------------------------|
+| redisManager | | RedisManager which you just configured above (Required) |
+| expire | `-2` | Redis cache key/value expire time. The expire time is in second.<br>Special values:<br>`-1`: no expire<br>`-2`: the same timeout with session<br>Default value: `-2`<br>**Note**: Make sure expire time is longer than session timeout. |
+| keyPrefix | `shiro:session:` | Custom your redis key prefix for session management<br>**Note**: Remember to add colon at the end of prefix. |
+| sessionInMemoryTimeout | `1000` | When we do signin, `doReadSession(sessionId)` will be called by shiro about 10 times. So shiro-redis save Session in ThreadLocal to remit this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal. <br>Most of time, you don't need to change it. |
+| keySerializer | `new org.crazycake.shiro.serializer.StringSerializer()` | The key serializer of cache manager<br>You can change the implement of key serializer or the encoding of StringSerializer.<br>Supported encodings refer to [Supported Encodings](https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html). Such as `UTF-8`, `UTF-16`, `UTF-32`, `ISO-8859-1`, `GBK`, `Big5`, etc<br>For more detail, check [Serializer](#serializer) |
+| valueSerializer | `new org.crazycake.shiro.serializer.ObjectSerializer()` | The value serializer of cache manager<br>You can change the implement of value serializer<br>For more detail, check [Serializer](#serializer) |
+
+### CacheManager
+
+| Title | Default | Description |
+| :--------------------| :------------------- | :---------------------------|
+| redisManager | | RedisManager which you just configured above (Required) |
+| principalIdFieldName | `id` | Principal id field name. The field which you can get unique id to identify this principal.<br>For example, if you use UserInfo as Principal class, the id field maybe `id`, `userId`, `email`, etc.<br>Remember to add getter to this id field. For example, `getId()`, `getUserId(`), `getEmail()`, etc.<br>Default value is `id`, that means your principal object must has a method called `getId()` |
+| expire | `1800` | Redis cache key/value expire time. <br>The expire time is in second. |
+| keyPrefix | `shiro:cache:` | Custom your redis key prefix for cache management<br>**Note**: Remember to add colon at the end of prefix. |
+| keySerializer | `new org.crazycake.shiro.serializer.StringSerializer()` | The key serializer of cache manager<br>You can change the implement of key serializer or the encoding of StringSerializer.<br>Supported encodings refer to [Supported Encodings](https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html). Such as `UTF-8`, `UTF-16`, `UTF-32`, `ISO-8859-1`, `GBK`, `Big5`, etc<br>For more detail, check [Serializer](#serializer) |
+| valueSerializer | `new org.crazycake.shiro.serializer.ObjectSerializer()` | The value serializer of cache manager<br>You can change the implement of value serializer<br>For more detail, check [Serializer](#serializer) |
+
# If you found any bugs
| 93 | Update README.md. Add configurable options section. Move most of attributes in example into configurable options. | 153 | .md | md | mit | alexxiyang/shiro-redis |
1973 | <NME> step_library.rst
<BEF> ADDFILE
<MSG> DOCS : Added more definitions to the glossary.
<DFF> @@ -0,0 +1,5 @@
+Step Library
+============
+
+A step library is a class that contains a group of predefined steps that
+can be used in an engine.
| 5 | DOCS : Added more definitions to the glossary. | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1974 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
# redisManager.password = chenxing
```
If you use redis cluster, config like this :
```properties
# Create redisManager
redisManager = org.crazycake.shiro.RedisClusterManager
# Redis host and port list
redisManager.host = 192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005
# Redis cache key/value expire time. Default value:0 .The expire time is in second (Optional)
redisManager.expire = 600
# Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds).(Optional)
redisManager.timeout = 2000
# timeout for jedis try to read data from redis server (Optional)
redisManager.soTimeout = 2000
# max attempts to connect to server (Optional)
redisManager.maxAttempts = 2
# Redis password.(Optional)
#redisManager.password = xxxx
```
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-tutorial) for you to understand how to configure `shiro-redis` in `shiro.ini`.
## Spring
</bean>
<!-- shiro-redis configuration [end] -->
```
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-spring-tutorial) for you to understand how to configure `shiro-redis` in spring configuration file.
## Serializer
<MSG> Merge pull request #2 from alexxiyang/master
update code
<DFF> @@ -105,26 +105,6 @@ redisManager.soTimeout = 2000
# redisManager.password = chenxing
```
-If you use redis cluster, config like this :
-
-```properties
-# Create redisManager
-redisManager = org.crazycake.shiro.RedisClusterManager
-# Redis host and port list
-redisManager.host = 192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005
-# Redis cache key/value expire time. Default value:0 .The expire time is in second (Optional)
-redisManager.expire = 600
-# Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds).(Optional)
-redisManager.timeout = 2000
-# timeout for jedis try to read data from redis server (Optional)
-redisManager.soTimeout = 2000
-# max attempts to connect to server (Optional)
-redisManager.maxAttempts = 2
-# Redis password.(Optional)
-#redisManager.password = xxxx
-
-```
-
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-tutorial) for you to understand how to configure `shiro-redis` in `shiro.ini`.
## Spring
@@ -170,6 +150,25 @@ spring.xml:
</bean>
<!-- shiro-redis configuration [end] -->
```
+
+If you use redis sentinel, config like this :
+```xml
+<!-- shiro-redis configuration [start] -->
+<!-- shiro redisManager -->
+<bean id="redisManager" class="org.crazycake.shiro.RedisSentinelManager">
+ <property name="host" value="192.168.0.192:26379,192.168.0.192:26380,192.168.0.192:26381"/>
+ <property name="expire" value="1800"/>
+ <!-- optional properties:
+ <property name="timeout" value="10000"/>
+ <property name="soTimeout" value="10000"/>
+ <property name="masterName" value="mymaster"/>
+ <property name="password" value="123456"/>
+ <property name="database" value="1"/>
+ -->
+</bean>
+```
+
+
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-spring-tutorial) for you to understand how to configure `shiro-redis` in spring configuration file.
## Serializer
| 19 | Merge pull request #2 from alexxiyang/master | 20 | .md | md | mit | alexxiyang/shiro-redis |
1975 | <NME> why_is_hitch_so_fast.rst
<BEF> Why is Hitch so fast?
=====================
<MSG> DOCS : Updated docs.
<DFF> @@ -1,3 +1,59 @@
Why is Hitch so fast?
=====================
+There are two main reasons why Hitch is generally faster than other testing
+frameworks: parallelization and built in epoll/kqueue triggers.
+
+Automatic Parallelization
+-------------------------
+
+When hitch services are started they are started in parallel by default. If
+you have seven services (like the example project), hitch will try to start
+all of the services that do not have a "needs" property set. As soon as
+services are ready that are prerequisites of other services, those will be
+started.
+
+This means two things: even very complex service architectures can be
+started extremely quickly and also that your test speed will increase
+substantially the more CPU power, RAM and CPU cores you have.
+
+As an example, the django-remindme-tests project runs the following
+services:
+
+* Postgresql (including running initdb to create all necessary database
+files and creating a user and database after service start-up)
+* Django (including installing fixtures and running migrations)
+* Mock Cron server
+* Mock SMTP server
+* Celery
+* Redis
+* Selenium (running and connecting to firefox).
+
+When tested on a laptop with an SSD, and an i5 processor with 4 CPU cores,
+just starting firefox takes 4.5 seconds. *All* of the above, when
+parallelized, takes between 5.1 and 5.8 seconds.
+
+
+Epoll/Kqueue Triggers
+---------------------
+
+A common feature of other testing frameworks is the use of 'sleeps' and
+polling to determine if an event has occurred. This can not only contribute
+to test indeterminacy, it slows down your integration tests.
+
+A feature of hitch that contributes to its speed is the in-built use of
+epoll/kqueue triggers. These are kernel features in Linux, FreeBSD and Mac
+OS X that allow 'watches' to be put on files. When a file is changed, the
+test is automatically notified without the need for polling.
+
+This is used in the following situations:
+
+* To ascertain service readiness - the instant that Postgresql logs the
+line "database system is ready to accept connections", for example, Hitch
+will move straight on to creating users and databases.
+
+* Mock service interactions - the instant that the mock SMTP server
+receives an email, it logs out a snippet of JSON. The watcher on the mock
+SMTP logs receives the epoll trigger during that split second and the test
+can continue.
+
| 56 | DOCS : Updated docs. | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1976 | <NME> commandline.py
<BEF> """High level command line interface to hitch."""
from subprocess import call, PIPE, STDOUT, Popen
from hitch.click import command, group, argument, option
from os import path, makedirs, listdir, kill, remove
from sys import stderr, stdout, exit, modules, argv
from functools import partial, reduce
from hitch import hitchdir, languagestrings
import shutil
import signal
import copy
class CalledProcessError(Exception):
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
def check_output(command, stdout=PIPE, stderr=PIPE):
"""Re-implemented subprocess.check_output since it is not available < python 2.7."""
return Popen(command, stdout=stdout, stderr=stderr).communicate()[0]
def check_call(command, shell=False):
"""Re-implemented subprocess.check_call since it is not available < python 2.7."""
process = Popen(command, shell=shell)
process.communicate()
if process.returncode != 0:
raise CalledProcessError
return
def stop_everything(sig, frame):
"""Exit hitch."""
exit(1)
def installpackages():
"""Install packages with hitchsystem."""
hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
def update_requirements():
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
@command()
@argument('filename')
@option('-s', '--settings', default=None, help="Load settings from file.")
@option('-e', '--extra', default=None, help="""Load extra vars on command line as JSON (e.g. --extra '{"postgres_version": "3.5.5"}'""")
def test(filename, settings, extra):
"""Run test"""
if filename.endswith(".yml") and path.exists(filename):
python = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "test")
command = [python, path.abspath(filename), ]
if settings is not None:
command = command + ['--settings', settings, ]
if extra is not None:
@group()
def cli():
pass
@command()
@option(
'-p', '--python', default=None,
help=languagestrings.SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH
)
@option(
'-v', '--virtualenv', default=None,
help=languagestrings.SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH
)
def init(python, virtualenv):
"""Initialize hitch in this directory."""
if virtualenv is None:
if call(["which", "virtualenv"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_VIRTUALENV_INSTALLED)
stderr.flush()
exit(1)
virtualenv = check_output(["which", "virtualenv"]).decode('utf8').replace("\n", "")
else:
if path.exists(virtualenv):
if python is None:
python = path.join(path.dirname(virtualenv), "python")
else:
stderr.write("{0} not found.\n".format(virtualenv))
if python is None:
if call(["which", "python3"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_PYTHON3_INSTALLED)
stderr.flush()
exit(1)
python3 = check_output(["which", "python3"]).decode('utf8').replace("\n", "")
else:
if path.exists(python):
python3 = python
else:
stderr.write("{0} not found.\n".format(python))
exit(1)
python_version = check_output([python3, "-V"], stderr=STDOUT).decode('utf8')
replacements = ('Python ', ''), ('\n', '')
str_version = reduce(lambda a, kv: a.replace(*kv), replacements, python_version)
tuple_version = tuple([int(x) for x in str_version.split('.')[:2]])
if tuple_version < (3, 3):
stderr.write(languagestrings.YOU_MUST_HAVE_VERSION_ABOVE_PYTHON33)
exit(1)
if hitchdir.hitch_exists():
hitchdir.check_hitch_directory_integrity()
update_requirements()
exit(0)
makedirs(".hitch")
# Store absolute directory in .hitch directory to guard against the directory being moved
hitch_dir = path.abspath(".hitch")
with open(path.join(hitch_dir, "absdir"), "w") as absdir_handle:
absdir_handle.write(hitch_dir)
pip = path.abspath(path.join(".hitch", "virtualenv", "bin", "pip"))
try:
check_call([
virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3
])
check_call([pip, "install", "--upgrade", "pip"])
check_call([pip, "install", "--upgrade", "setuptools"])
check_call([pip, "install", "unixpackage", "hitchsystem"])
installpackages()
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "hitchtest"])
check_call([pip, "install", "hitchquickstart"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
signal.signal(signal.SIGINT, stop_everything)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
# Generic method to run any installed app in the virtualenv whose name starts with hitch*
hitchdir.check_hitch_directory_integrity()
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{0}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
hitchdir.check_hitch_directory_integrity()
pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
@command()
@argument('package', required=True)
def install(package):
"""Install hitch package."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
package_list = [
p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
if p != "" and "==" in p
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def freeze():
"""List installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
if path.exists(python_bin):
packages = [
package.replace("hitch", "") for package in listdir(
path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
)
if package.startswith("hitch") and package != "hitch"
]
# Add commands that start with "hitch" to the list of commands available (e.g. hitchtest, hitchsmtp)
for package in packages:
cmd = copy.deepcopy(runpackage)
cmd.name = package
try:
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> FEATURE : Print the jinja2 generated YAML using the hitch test command.
<DFF> @@ -47,13 +47,16 @@ def init():
@command()
@argument('filename')
+@option('-y', '--yaml', is_flag=True, help='Output the YAML test (for debugging).')
@option('-s', '--settings', default=None, help="Load settings from file.")
@option('-e', '--extra', default=None, help="""Load extra vars on command line as JSON (e.g. --extra '{"postgres_version": "3.5.5"}'""")
-def test(filename, settings, extra):
+def test(filename, yaml, settings, extra):
"""Run test"""
if filename.endswith(".yml") and path.exists(filename):
python = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "test")
command = [python, path.abspath(filename), ]
+ if yaml:
+ command = command + ['--yaml', ]
if settings is not None:
command = command + ['--settings', settings, ]
if extra is not None:
| 4 | FEATURE : Print the jinja2 generated YAML using the hitch test command. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1977 | <NME> automated_overtesting.rst
<BEF> Automated Overtesting
=====================
Automated overtesting is a :doc:`testing_antipattern` where
new automated test cases are written and committed to source control,
for scenarios where:
* No bug was found, either by users or by :doc:`exploratory_testing`.
* Where there were no :doc:`surprise_requirements`.
Automated overtesting can lead to bloated regression test suites:
* Are more expensive to run.
* Take a longer time to complete.
* Are more expensive to maintain.
<MSG> DOCS : Updated glossary
<DFF> @@ -1,14 +1,14 @@
Automated Overtesting
=====================
-Automated overtesting is a :doc:`testing_antipattern` where
-new automated test cases are written and committed to source control,
+Automated overtesting is a :doc:`testing_antipattern` for which
+fresh automated test cases are written and committed to source control,
for scenarios where:
* No bug was found, either by users or by :doc:`exploratory_testing`.
* Where there were no :doc:`surprise_requirements`.
-Automated overtesting can lead to bloated regression test suites:
+Automated overtesting can lead to bloated regression test suites that are:
* Are more expensive to run.
* Take a longer time to complete.
| 3 | DOCS : Updated glossary | 3 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1978 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if version_info[0] == 2:
if version_info[1] < 6:
stderr.write("Hitch will not run on python 2 versions below 2.6 or python 3 versions below 3.3.\n")
exit(1)
if version_info[0] == 3:
if version_info[1] < 3:
stderr.write("Hitch will not run on python 2 versions below 2.6 or python 3 versions below 3.3.\n")
exit(1)
def read(*parts):
if version_info[0] == 3:
if version_info[1] < 3:
stderr.write("The hitch bootstrapper will not run on python 3.0.x, 3.1.x or 3.2.x.\n")
exit(1)
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> COPY : Clarified wording for errors when being installed in the wrong versions of python.
<DFF> @@ -9,11 +9,11 @@ import os
if version_info[0] == 2:
if version_info[1] < 6:
- stderr.write("Hitch will not run on python 2 versions below 2.6 or python 3 versions below 3.3.\n")
+ stderr.write("The hitch bootstrapper will not run on versions of python below v2.6.\n")
exit(1)
if version_info[0] == 3:
if version_info[1] < 3:
- stderr.write("Hitch will not run on python 2 versions below 2.6 or python 3 versions below 3.3.\n")
+ stderr.write("The hitch bootstrapper will not run on python 3.0.x, 3.1.x or 3.2.x.\n")
exit(1)
def read(*parts):
| 2 | COPY : Clarified wording for errors when being installed in the wrong versions of python. | 2 | .py | py | agpl-3.0 | hitchtest/hitch |
1979 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.4.8",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
if version_info[0] == 3:
if version_info[1] < 3:
stderr.write("The hitch bootstrapper will not run on python 3.0.x, 3.1.x or 3.2.x.\n")
exit(1)
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> RELEASE : Bumped version.
<DFF> @@ -13,7 +13,7 @@ def read(*parts):
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
- version="0.4.8",
+ version="0.4.9",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
| 1 | RELEASE : Bumped version. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1980 | <NME> README.md
<BEF> shiro-redis
=============
[](https://travis-ci.org/alexxiyang/shiro-redis)
[](https://maven-badges.herokuapp.com/maven-central/org.crazycake/shiro-redis)
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
# Download
You use either of the following 2 ways to include `shiro-redis` into your project
* use `git clone https://github.com/alexxiyang/shiro-redis.git` to clone project to your local workspace and build jar file by your self
* add maven dependency
```xml
<dependency>
<groupId>org.crazycake</groupId>
<artifactId>shiro-redis</artifactId>
<version>3.3.1</version>
</dependency>
```
> **Note:**\
> 3.3.0 is compiled in java11 by mistake.
> Please use 3.3.1 which is compiled in java8
## shiro-core/jedis Version Comparison Charts
| shiro-redis | shiro | jedis |
| :----------------:| :-------: | :-------: |
| 3.2.3 | 1.3.2 | 2.9.0 |
| 3.3.0 (java11) | 1.6.0 | 3.3.0 |
| 3.3.1 (java8) | 1.6.0 | 3.3.0 |
# Before use
Here is the first thing you need to know. Shiro-redis needs an id field to identify your authorization object in Redis. So please make sure your principal class has a field which you can get unique id of this object. Please setting this id field name by `cacheManager.principalIdFieldName = <your id field name of principal object>`
For example:
If you create `SimpleAuthenticationInfo` like this:
```java
@Override
protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token) throws AuthenticationException {
UsernamePasswordToken usernamePasswordToken = (UsernamePasswordToken)token;
UserInfo userInfo = new UserInfo();
userInfo.setUsername(usernamePasswordToken.getUsername());
return new SimpleAuthenticationInfo(userInfo, "123456", getName());
}
```
Then the `userInfo` object is your principal object. You need to make sure `UserInfo` has an unique field for Redis to identify it. Take `userId` as an example:
```java
public class UserInfo implements Serializable{
private Integer userId
private String username;
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public Integer getUserId() {
return this.userId;
}
}
```
Put userId as the value of `cacheManager.principalIdFieldName`, like this:
```properties
cacheManager.principalIdFieldName = userId
```
If you're using Spring, the configuration should be
```xml
<property name="principalIdFieldName" value="userId" />
```
Then `shiro-redis` will call `userInfo.getUserId()` to get the id for saving Redis object.
# How to configure ?
You can configure `shiro-redis` either in `shiro.ini` or in `spring-*.xml`
## shiro.ini
Here is the configuration example for shiro.ini.
### Redis Standalone
If you are running Redis in Standalone mode
```properties
[main]
#====================================
# shiro-redis configuration [start]
#====================================
#===================================
# Redis Manager [start]
#===================================
# Create redisManager
redisManager = org.crazycake.shiro.RedisManager
# Redis host. If you don't specify host the default value is 127.0.0.1:6379
redisManager.host = 127.0.0.1:6379
#===================================
# Redis Manager [end]
#===================================
#=========================================
# Redis session DAO [start]
#=========================================
# Create redisSessionDAO
redisSessionDAO = org.crazycake.shiro.RedisSessionDAO
# Use redisManager as cache manager
redisSessionDAO.redisManager = $redisManager
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
sessionManager.sessionDAO = $redisSessionDAO
securityManager.sessionManager = $sessionManager
#=========================================
# Redis session DAO [end]
#=========================================
#==========================================
# Redis cache manager [start]
#==========================================
# Create cacheManager
cacheManager = org.crazycake.shiro.RedisCacheManager
# Principal id field name. The field which you can get unique id to identify this principal.
# For example, if you use UserInfo as Principal class, the id field maybe `id`, `userId`, `email`, etc.
# Remember to add getter to this id field. For example, `getId()`, `getUserId()`, `getEmail()`, etc.
# Default value is id, that means your principal object must has a method called `getId()`
cacheManager.principalIdFieldName = id
# Use redisManager as cache manager
cacheManager.redisManager = $redisManager
securityManager.cacheManager = $cacheManager
#==========================================
# Redis cache manager [end]
#==========================================
#=================================
# shiro-redis configuration [end]
#=================================
```
For complete configurable options list, check [Configurable Options](#configurable-options).
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-tutorial) for you to understand how to configure `shiro-redis` in `shiro.ini`.
### Redis Sentinel
if you're using Redis Sentinel, please replace the `redisManager` configuration of the standalone version into the following:
```properties
#===================================
# Redis Manager [start]
#===================================
# Create redisManager
redisManager = org.crazycake.shiro.RedisSentinelManager
# Sentinel host. If you don't specify host the default value is 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
redisManager.host = 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
# Sentinel master name
redisManager.masterName = mymaster
#===================================
# Redis Manager [end]
#===================================
```
For complete configurable options list, check [Configurable Options](#configurable-options).
### Redis Cluster
If you're using redis cluster, please replace the `redisManager` configuration of the standalone version into the following:
```properties
#===================================
# Redis Manager [start]
#===================================
# Create redisManager
redisManager = org.crazycake.shiro.RedisClusterManager
# Redis host and port list
redisManager.host = 192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005
#===================================
# Redis Manager [end]
#===================================
```
For complete configurable options list, check [Configurable Options](#configurable-options).
## Spring
If you are using Spring
### Redis Standalone
If you are running Redis in Standalone mode
```xml
<!-- shiro-redis configuration [start] -->
<!-- Redis Manager [start] -->
<bean id="redisManager" class="org.crazycake.shiro.RedisManager">
<property name="host" value="127.0.0.1:6379"/>
</bean>
<!-- Redis Manager [end] -->
<!-- Redis session DAO [start] -->
<bean id="redisSessionDAO" class="org.crazycake.shiro.RedisSessionDAO">
<property name="redisManager" ref="redisManager" />
</bean>
<bean id="sessionManager" class="org.apache.shiro.web.session.mgt.DefaultWebSessionManager">
<property name="sessionDAO" ref="redisSessionDAO" />
</bean>
<!-- Redis session DAO [end] -->
<!-- Redis cache manager [start] -->
<bean id="cacheManager" class="org.crazycake.shiro.RedisCacheManager">
<property name="redisManager" ref="redisManager" />
</bean>
<!-- Redis cache manager [end] -->
<bean id="securityManager" class="org.apache.shiro.web.mgt.DefaultWebSecurityManager">
<property name="sessionManager" ref="sessionManager" />
<property name="cacheManager" ref="cacheManager" />
<!-- other configurations -->
<property name="realm" ref="exampleRealm"/>
<property name="rememberMeManager.cipherKey" value="kPH+bIxk5D2deZiIxcaaaA==" />
</bean>
<!-- shiro-redis configuration [end] -->
```
For complete configurable options list, check [Configurable Options](#configurable-options).
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-spring-tutorial) for you to understand how to configure `shiro-redis` in spring configuration file.
### Redis Sentinel
If you use redis sentinel, please replace the `redisManager` configuration of the standalone version into the following:
```xml
<!-- shiro-redis configuration [start] -->
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisSentinelManager">
<property name="host" value="127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381"/>
<property name="masterName" value="mymaster"/>
</bean>
```
For complete configurable options list, check [Configurable Options](#configurable-options).
### Redis Cluster
If you use redis cluster, please replace the `redisManager` configuration of the standalone version into the following:
```xml
<!-- shiro-redis configuration [start] -->
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisClusterManager">
<property name="host" value="192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005"/>
</bean>
```
For complete configurable options list, check [Configurable Options](#configurable-options).
## Serializer
Since redis only accept `byte[]`, there comes a serializer problem.
Shiro-redis is using `StringSerializer` as key serializer and `ObjectSerializer` as value serializer.
You can use your own custom serializer, as long as this custom serializer implements `org.crazycake.shiro.serializer.RedisSerializer`
For example, we can change the charset of keySerializer like this
```properties
# If you want change charset of keySerializer or use your own custom serializer, you need to define serializer first
#
# cacheManagerKeySerializer = org.crazycake.shiro.serializer.StringSerializer
# Supported encodings refer to https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html
# UTF-8, UTF-16, UTF-32, ISO-8859-1, GBK, Big5, etc
#
# cacheManagerKeySerializer.charset = UTF-8
# cacheManager.keySerializer = $cacheManagerKeySerializer
```
These 4 options that you can replace them with your cutom serializers:
- cacheManager.keySerializer
- cacheManager.valueSerializer
- redisSessionDAO.keySerializer
- redisSessionDAO.valueSerializer
## Configurable Options
Here are all the available options you can use in `shiro-redis` configuration file.
### RedisManager
| Title | Default | Description |
| :------------------| :------------------- | :---------------------------|
| host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is `127.0.0.1:6379`. If you run redis in sentinel mode or cluster mode, separate host names with comma, like `127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381` |
| masterName | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
| timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
| soTimeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
| maxAttempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
| password | | Redis password |
| database | `0` | Redis database. Default value is 0 |
| jedisPoolConfig | `new redis.clients.jedis.JedisPoolConfig()` | JedisPoolConfig. You can create your own JedisPoolConfig instance and set attributes as you wish<br>Most of time, you don't need to set jedisPoolConfig<br>Here is an example.<br>`jedisPoolConfig = redis.clients.jedis.JedisPoolConfig`<br>`jedisPoolConfig.testWhileIdle = false`<br>`redisManager.jedisPoolConfig = jedisPoolConfig` |
| count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
| jedisPool | `null` | **Only used for sentinel mode or single mode**<br>You can create your own JedisPool instance and set attributes as you wish |
### RedisSessionDAO
| Title | Default | Description |
| :------------------| :------------------- | :---------------------------|
| redisManager | | RedisManager which you just configured above (Required) |
| expire | `-2` | Redis cache key/value expire time. The expire time is in second.<br>Special values:<br>`-1`: no expire<br>`-2`: the same timeout with session<br>Default value: `-2`<br>**Note**: Make sure expire time is longer than session timeout. |
| keyPrefix | `shiro:session:` | Custom your redis key prefix for session management<br>**Note**: Remember to add colon at the end of prefix. |
| sessionInMemoryTimeout | `1000` | When we do signin, `doReadSession(sessionId)` will be called by shiro about 10 times. So shiro-redis save Session in ThreadLocal to remit this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal. <br>Most of time, you don't need to change it. |
| sessionInMemoryEnabled | `true` | Whether or not enable temporary save session in ThreadLocal |
| keySerializer | `org.crazycake.shiro.serializer.StringSerializer` | The key serializer of cache manager<br>You can change the implement of key serializer or the encoding of StringSerializer.<br>Supported encodings refer to [Supported Encodings](https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html). Such as `UTF-8`, `UTF-16`, `UTF-32`, `ISO-8859-1`, `GBK`, `Big5`, etc<br>For more detail, check [Serializer](#serializer) |
| valueSerializer | `org.crazycake.shiro.serializer.ObjectSerializer` | The value serializer of cache manager<br>You can change the implement of value serializer<br>For more detail, check [Serializer](#serializer) |
### CacheManager
| Title | Default | Description |
| :--------------------| :------------------- | :---------------------------|
| redisManager | | RedisManager which you just configured above (Required) |
| principalIdFieldName | `id` | Principal id field name. The field which you can get unique id to identify this principal.<br>For example, if you use UserInfo as Principal class, the id field maybe `id`, `userId`, `email`, etc.<br>Remember to add getter to this id field. For example, `getId()`, `getUserId(`), `getEmail()`, etc.<br>Default value is `id`, that means your principal object must has a method called `getId()` |
| expire | `1800` | Redis cache key/value expire time. <br>The expire time is in second. |
| keyPrefix | `shiro:cache:` | Custom your redis key prefix for cache management<br>**Note**: Remember to add colon at the end of prefix. |
| keySerializer | `org.crazycake.shiro.serializer.StringSerializer` | The key serializer of cache manager<br>You can change the implement of key serializer or the encoding of StringSerializer.<br>Supported encodings refer to [Supported Encodings](https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html). Such as `UTF-8`, `UTF-16`, `UTF-32`, `ISO-8859-1`, `GBK`, `Big5`, etc<br>For more detail, check [Serializer](#serializer) |
| valueSerializer | `org.crazycake.shiro.serializer.ObjectSerializer` | The value serializer of cache manager<br>You can change the implement of value serializer<br>For more detail, check [Serializer](#serializer) |
# Spring boot starter
Using `Spring-Boot` integration is the easiest way to integrate `shiro-redis` into a Spring-base application.
> Note: `shiro-redis-spring-boot-starter` version `3.2.1` is based on `shiro-spring-boot-web-starter` version `1.4.0-RC2`
First include the `shiro-redis` Spring boot starter dependency in you application classpath
```xml
<dependency>
<groupId>org.crazycake</groupId>
<artifactId>shiro-redis-spring-boot-starter</artifactId>
<version>3.3.1</version>
</dependency>
```
The next step depends on whether you've created your own `SessionManager` or `SessionsSecurityManager`.
## If you haven't created your own `SessionManager` or `SessionsSecurityManager`
If you don't have your own `SessionManager` or `SessionsSecurityManager` in your configuration, `shiro-redis-spring-boot-starter` will create `RedisSessionDAO` and `RedisCacheManager` for you. Then inject them into `SessionManager` and `SessionsSecurityManager` automatically.
So, You are all set. Enjoy it!
## If you have created your own `SessionManager` or `SessionsSecurityManager`
If you have created your own `SessionManager` or `SessionsSecurityManager` like this:
```java
@Bean
public SessionsSecurityManager securityManager(List<Realm> realms) {
DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager(realms);
// other stuff...
return securityManager;
}
```
Then inject `redisSessionDAO` and `redisCacheManager` which created by `shiro-redis-spring-boot-starter` already
```java
@Autowired
RedisSessionDAO redisSessionDAO;
@Autowired
RedisCacheManager redisCacheManager;
```
Inject them into your own `SessionManager` and `SessionsSecurityManager`
```java
@Bean
public SessionManager sessionManager() {
DefaultWebSessionManager sessionManager = new DefaultWebSessionManager();
// inject redisSessionDAO
sessionManager.setSessionDAO(redisSessionDAO);
// other stuff...
return sessionManager;
}
@Bean
public SessionsSecurityManager securityManager(List<Realm> realms, SessionManager sessionManager) {
DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager(realms);
//inject sessionManager
securityManager.setSessionManager(sessionManager);
// inject redisCacheManager
securityManager.setCacheManager(redisCacheManager);
// other stuff...
return securityManager;
}
```
For full example, see [shiro-redis-spring-boot-tutorial](https://github.com/alexxiyang/shiro-redis-spring-boot-tutorial)
### Configuration Properties
Here are all available options you can use in Spring-boot starter configuration
| Title | Default | Description |
| :--------------------------------------------------| :------------------- | :---------------------------|
| shiro-redis.enabled | `true` | Enables shiro-redis’s Spring module |
| shiro-redis.redis-manager.deploy-mode | `standalone` | Redis deploy mode. Options: `standalone`, `sentinel`, 'cluster' |
# If you found any bugs
Please send email to [email protected]
可以用中文
| shiro-redis.redis-manager.database | `0` | Redis database. Default value is 0 |
| shiro-redis.redis-manager.count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
| shiro-redis.session-dao.expire | `-2` | Redis cache key/value expire time. The expire time is in second.<br>Special values:<br>`-1`: no expire<br>`-2`: the same timeout with session<br>Default value: `-2`<br>**Note**: Make sure expire time is longer than session timeout. |
| shiro-redis.session-dao.key-prefix | `shiro:session:` | Custom your redis key prefix for session management<br>**Note**: Remember to add colon at the end of prefix. |
| shiro-redis.session-dao.session-in-memory-timeout | `1000` | When we do signin, `doReadSession(sessionId)` will be called by shiro about 10 times. So shiro-redis save Session in ThreadLocal to remit this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal. <br>Most of time, you don't need to change it. |
| shiro-redis.session-dao.session-in-memory-enabled | `true` | Whether or not enable temporary save session in ThreadLocal |
| shiro-redis.cache-manager.principal-id-field-name | `id` | Principal id field name. The field which you can get unique id to identify this principal.<br>For example, if you use UserInfo as Principal class, the id field maybe `id`, `userId`, `email`, etc.<br>Remember to add getter to this id field. For example, `getId()`, `getUserId(`), `getEmail()`, etc.<br>Default value is `id`, that means your principal object must has a method called `getId()` |
| shiro-redis.cache-manager.expire | `1800` | Redis cache key/value expire time. <br>The expire time is in second. |
| shiro-redis.cache-manager.key-prefix | `shiro:cache:` | Custom your redis key prefix for cache management<br>**Note**: Remember to add colon at the end of prefix. |
## Working with `spring-boot-devtools`
If you are using `shiro-redis` with `spring-boot-devtools`. Please add this line to `resources/META-INF/spring-devtools.properties` (Create it if there is no this file):
```ini
restart.include.shiro-redis=/shiro-[\\w-\\.]+jar
```
# If you found any bugs
Please create the issue
可以用中文
<MSG> Merge remote-tracking branch 'origin/master'
<DFF> @@ -430,6 +430,6 @@ For full example, see [shiro-redis-spring-boot-tutorial](https://github.com/alex
# If you found any bugs
-Please send email to [email protected]
+Please create the issue
可以用中文
| 1 | Merge remote-tracking branch 'origin/master' | 1 | .md | md | mit | alexxiyang/shiro-redis |
1981 | <NME> RedisCacheManagerTest.java
<BEF> package org.crazycake.shiro;
import org.apache.shiro.cache.Cache;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.CoreMatchers.is;
import static org.mockito.Mockito.*;
public class RedisCacheManagerTest {
private IRedisManager redisManager;
private RedisCacheManager redisCacheManager;
@BeforeEach
public void setUp() {
redisManager = mock(IRedisManager.class);
}
@Test
public void testInitWithoutSettingRedisManager() {
redisCacheManager = new RedisCacheManager();
Assertions.assertThrows(IllegalArgumentException.class, () -> {
redisCacheManager.getCache("testCache");
Cache cache1 = redisCacheManager.getCache("testCache1");
assertThat(cache,is(cache1));
redisCacheManager.setKeyPrefix("testRedisManager1");
Cache cache2 = redisCacheManager.getCache("testCache2");
assertThat(cache2.getClass().getName(), is("org.crazycake.shiro.RedisCache"));
RedisCache redisCache2 = (RedisCache) cache2;
assertThat(redisCache2.getKeyPrefix(), is("testRedisManager1"));
}
}
assertThat(redisTestCache.getKeyPrefix(), is("testRedisManager1:testCache:"));
assertThat(redisTestCache.getPrincipalIdFieldName(), is("id"));
}
}
<MSG> Merge branch 'master' of https://github.com/alexxiyang/shiro-redis
<DFF> @@ -26,12 +26,12 @@ public class RedisCacheManagerTest {
Cache cache1 = redisCacheManager.getCache("testCache1");
assertThat(cache,is(cache1));
- redisCacheManager.setKeyPrefix("testRedisManager1");
+ redisCacheManager.setKeyPrefix("testRedisManager1:");
Cache cache2 = redisCacheManager.getCache("testCache2");
assertThat(cache2.getClass().getName(), is("org.crazycake.shiro.RedisCache"));
RedisCache redisCache2 = (RedisCache) cache2;
- assertThat(redisCache2.getKeyPrefix(), is("testRedisManager1"));
+ assertThat(redisCache2.getKeyPrefix(), is("testRedisManager1:testCache2:"));
}
}
| 2 | Merge branch 'master' of https://github.com/alexxiyang/shiro-redis | 2 | .java | java | mit | alexxiyang/shiro-redis |
1982 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
# redisManager.count = <count>
```
## Spring
### Redis Standalone
<!-- shiro-redis configuration [end] -->
```
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-spring-tutorial) for you to understand how to configure `shiro-redis` in spring configuration file.
### Redis Sentinel
<MSG> add Redis cluster manager readme
<DFF> @@ -182,6 +182,26 @@ redisManager.masterName = mymaster
# redisManager.count = <count>
```
+If you use redis cluster, config like this :
+
+```properties
+# Create redisManager
+redisManager = org.crazycake.shiro.RedisClusterManager
+# Redis host and port list
+redisManager.host = 192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005
+# Redis cache key/value expire time. Default value:0 .The expire time is in second (Optional)
+redisManager.expire = 600
+# Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds).(Optional)
+redisManager.timeout = 2000
+# timeout for jedis try to read data from redis server (Optional)
+redisManager.soTimeout = 2000
+# max attempts to connect to server (Optional)
+redisManager.maxAttempts = 2
+# Redis password.(Optional)
+#redisManager.password = xxxx
+
+```
+
## Spring
### Redis Standalone
@@ -229,6 +249,22 @@ spring.xml:
<!-- shiro-redis configuration [end] -->
```
+If you use redis cluster, config like this :
+```xml
+<!-- shiro-redis configuration [start] -->
+<!-- shiro redisManager -->
+<bean id="redisManager" class="org.crazycake.shiro.RedisClusterManager">
+ <property name="host" value="192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005"/>
+ <property name="expire" value="1800"/>
+ <!-- optional properties:
+ <property name="timeout" value="10000"/>
+ <property name="soTimeout" value="10000"/>
+ <property name="maxAttempts" value="2"/>
+ <property name="password" value="123456"/>
+ -->
+</bean>
+```
+
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-spring-tutorial) for you to understand how to configure `shiro-redis` in spring configuration file.
### Redis Sentinel
| 36 | add Redis cluster manager readme | 0 | .md | md | mit | alexxiyang/shiro-redis |
1983 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
| maxAttempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
| password | | Redis password |
| database | `0` | Redis database. Default value is 0 |
| jedisPoolConfig | `new redis.clients.jedis.JedisPoolConfig()` | JedisPoolConfig. You can create your own JedisPoolConfig and set attributes as you wish<br>Most of time, you don't need to set jedisPoolConfig<br>Here is an example.<br><pre>jedisPoolConfig = redis.clients.jedis.JedisPoolConfig<br>jedisPoolConfig.testWhileIdle = false<br>redisManager.jedisPoolConfig = jedisPoolConfig</pre> |
| count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
### RedisSessionDAO
<MSG> Update README.md. Update jedisPoolConfig section
<DFF> @@ -307,7 +307,7 @@ These 4 Serializers are replaceable:
| maxAttempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
| password | | Redis password |
| database | `0` | Redis database. Default value is 0 |
-| jedisPoolConfig | `new redis.clients.jedis.JedisPoolConfig()` | JedisPoolConfig. You can create your own JedisPoolConfig and set attributes as you wish<br>Most of time, you don't need to set jedisPoolConfig<br>Here is an example.<br><pre>jedisPoolConfig = redis.clients.jedis.JedisPoolConfig<br>jedisPoolConfig.testWhileIdle = false<br>redisManager.jedisPoolConfig = jedisPoolConfig</pre> |
+| jedisPoolConfig | `new redis.clients.jedis.JedisPoolConfig()` | JedisPoolConfig. You can create your own JedisPoolConfig and set attributes as you wish<br>Most of time, you don't need to set jedisPoolConfig<br>Here is an example.<br>`jedisPoolConfig = redis.clients.jedis.JedisPoolConfig`<br>`jedisPoolConfig.testWhileIdle = false`<br>`redisManager.jedisPoolConfig = jedisPoolConfig` |
| count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
### RedisSessionDAO
| 1 | Update README.md. Update jedisPoolConfig section | 1 | .md | md | mit | alexxiyang/shiro-redis |
1984 | <NME> .travis.yml
<BEF> ADDFILE
<MSG> add travis
<DFF> @@ -0,0 +1,1 @@
+language: java
\ No newline at end of file
| 1 | add travis | 0 | .yml | travis | mit | alexxiyang/shiro-redis |
1985 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
exit(1)
if version_info[0] == 2:
if version_info[1] < 6:
stderr.write("The hitch bootstrapper will not run on versions of python below v2.6.\n")
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
# 'Programming Language :: Python :: 3.2',
# 'Programming Language :: Python :: 3.3',
],
keywords='testing framework bdd tdd',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitch.readthedocs.org/',
license='GPLv3',
install_requires=['click', ],
packages=find_packages(exclude=["docs", ]),
package_data={},
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> DOCS : Added explanation in the README, setup.py and made some tweaks to the django tutorial.
<DFF> @@ -19,10 +19,12 @@ setup(name="hitch",
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
- 'License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)',
+ 'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
+ 'Operating System :: Unix',
+ 'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
@@ -31,11 +33,11 @@ setup(name="hitch",
# 'Programming Language :: Python :: 3.2',
# 'Programming Language :: Python :: 3.3',
],
- keywords='testing framework bdd tdd',
+ keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitch.readthedocs.org/',
- license='GPLv3',
+ license='AGPL',
install_requires=['click', ],
packages=find_packages(exclude=["docs", ]),
package_data={},
| 5 | DOCS : Added explanation in the README, setup.py and made some tweaks to the django tutorial. | 3 | .py | py | agpl-3.0 | hitchtest/hitch |
1986 | <NME> index.rst
<BEF> Hitch
=====
Hitch is a framework for :doc:`/glossary/integration_testing`.
Features
--------
* Runs reliably without modification on Mac OS X, Ubuntu/Debian, Fedora, CentOS and Arch and in Docker.
* Automates its own deployment and does not interfere with your system other than to install packages.
* Provides boilerplate and tools to substantially minimize the problem of :doc:`/glossary/brittle_tests`.
* Readable :doc:`/glossary/hitch_test_description_language` that doesn't require you to write regular expressions.
* Built-in :doc:`/glossary/service_orchestration` library for running groups of services (databases, webservers, microservices) together.
* Built-in :doc:`/glossary/step_library` for common tasks (interacting with browsers, command line & emails).
* Provides a :doc:`/glossary/test_driven_development_environment` with first class debugging tools.
Plugins
-------
.. toctree::
:maxdepth: 2
plugins/index
Documentation
-------------
.. toctree::
:maxdepth: 2
quickstart/index
howto/index
faq/index
api/index
misc/index
See the full :doc:`/glossary/index` here.
<MSG> DOCS : Added more on BDD and ATDD.
<DFF> @@ -12,7 +12,7 @@ Features
* Readable :doc:`/glossary/hitch_test_description_language` that doesn't require you to write regular expressions.
* Built-in :doc:`/glossary/service_orchestration` library for running groups of services (databases, webservers, microservices) together.
* Built-in :doc:`/glossary/step_library` for common tasks (interacting with browsers, command line & emails).
-* Provides a :doc:`/glossary/test_driven_development_environment` with first class debugging tools.
+* Provides a suitable environment for :doc:`/glossary/acceptance_test_driven_development` complete with debugging tools.
Plugins
-------
| 1 | DOCS : Added more on BDD and ATDD. | 1 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1987 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
# Sentinel master name
redisManager.masterName = mymaster
# Redis cache key/value expire time. Default value:0 .The expire time is in second (Optional)
redisManager.expire = 1200
# Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds).(Optional)
#
# redisManager.timeout = <timeout>
<bean id="redisManager" class="org.crazycake.shiro.RedisManager">
<property name="host" value="127.0.0.1:6379"/>
<!-- optional properties:
<property name="expire" value="1800"/>
<property name="timeout" value="10000"/>
<property name="password" value="123456"/>
<property name="database" value="1"/>
<!-- Redis-based session configuration -->
<bean id="redisSessionDAO" class="org.crazycake.shiro.RedisSessionDAO">
<property name="redisManager" ref="redisManager" />
<property name="keyPrefix" value="shiro:session:" />
</bean>
<bean id="sessionManager" class="org.apache.shiro.web.session.mgt.DefaultWebSessionManager">
<!-- Redis-based cache configuration -->
<bean id="cacheManager" class="org.crazycake.shiro.RedisCacheManager">
<property name="redisManager" ref="redisManager" />
<property name="keyPrefix" value="shiro:cache:" />
</bean>
<bean id="redisManager" class="org.crazycake.shiro.RedisSentinelManager">
<property name="host" value="127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381"/>
<property name="masterName" value="mymaster"/>
<!-- optional properties:
<property name="expire" value="1800"/>
<property name="timeout" value="2000"/>
<property name="soTimeout" value="2000"/>
<property name="password" value=""/>
<property name="database" value="0"/>
<MSG> Update README.md
<DFF> @@ -154,9 +154,6 @@ redisManager.host = 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
# Sentinel master name
redisManager.masterName = mymaster
-# Redis cache key/value expire time. Default value:0 .The expire time is in second (Optional)
-redisManager.expire = 1200
-
# Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds).(Optional)
#
# redisManager.timeout = <timeout>
@@ -195,7 +192,6 @@ spring.xml:
<bean id="redisManager" class="org.crazycake.shiro.RedisManager">
<property name="host" value="127.0.0.1:6379"/>
<!-- optional properties:
- <property name="expire" value="1800"/>
<property name="timeout" value="10000"/>
<property name="password" value="123456"/>
<property name="database" value="1"/>
@@ -207,6 +203,7 @@ spring.xml:
<!-- Redis-based session configuration -->
<bean id="redisSessionDAO" class="org.crazycake.shiro.RedisSessionDAO">
<property name="redisManager" ref="redisManager" />
+ <property name="expire" value="1800"/>
<property name="keyPrefix" value="shiro:session:" />
</bean>
<bean id="sessionManager" class="org.apache.shiro.web.session.mgt.DefaultWebSessionManager">
@@ -216,6 +213,7 @@ spring.xml:
<!-- Redis-based cache configuration -->
<bean id="cacheManager" class="org.crazycake.shiro.RedisCacheManager">
<property name="redisManager" ref="redisManager" />
+ <property name="expire" value="1800"/>
<property name="keyPrefix" value="shiro:cache:" />
</bean>
@@ -241,9 +239,8 @@ If you use redis sentinel, config like this :
<bean id="redisManager" class="org.crazycake.shiro.RedisSentinelManager">
<property name="host" value="127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381"/>
<property name="masterName" value="mymaster"/>
- <!-- optional properties:
- <property name="expire" value="1800"/>
- <property name="timeout" value="2000"/>
+ <!-- optional properties:、
+ <property name="timeout" value="2000"/>
<property name="soTimeout" value="2000"/>
<property name="password" value=""/>
<property name="database" value="0"/>
| 4 | Update README.md | 7 | .md | md | mit | alexxiyang/shiro-redis |
1988 | <NME> RedisManager.java
<BEF> package org.crazycake.shiro;
import org.crazycake.shiro.common.WorkAloneRedisManager;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Protocol;
public class RedisManager extends BaseRedisManager implements IRedisManager{
private String host = "127.0.0.1";
private int port = Protocol.DEFAULT_PORT ;
// timeout for jedis try to connect to redis server, not expire time! In milliseconds
private String password;
private int database = Protocol.DEFAULT_DATABASE;
private JedisPool jedisPool;
private void init() {
if (jedisPool == null) {
synchronized (RedisManager.class) {
if (jedisPool == null) {
String[] hostAndPort = host.split(":");
jedisPool = new JedisPool(getJedisPoolConfig(), hostAndPort[0], Integer.parseInt(hostAndPort[1]), timeout, password, database);
}
}
}
}
@Override
protected Jedis getJedis() {
if (jedisPool == null) {
init();
}
return jedisPool.getResource();
}
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
public int getTimeout() {
return timeout;
}
public void setTimeout(int timeout) {
this.timeout = timeout;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public int getDatabase() {
return database;
}
public void setDatabase(int database) {
this.database = database;
}
public JedisPool getJedisPool() {
return jedisPool;
}
public void setJedisPool(JedisPool jedisPool) {
this.jedisPool = jedisPool;
}
}
<MSG> Update README.md
<DFF> @@ -5,9 +5,11 @@ import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.Protocol;
public class RedisManager extends BaseRedisManager implements IRedisManager{
-
- private String host = "127.0.0.1";
+ private static final String DEFAULT_HOST = "127.0.0.1:6379";
+ private String host = DEFAULT_HOST;
+
+ @Deprecated
private int port = Protocol.DEFAULT_PORT ;
// timeout for jedis try to connect to redis server, not expire time! In milliseconds
| 4 | Update README.md | 2 | .java | java | mit | alexxiyang/shiro-redis |
1989 | <NME> README.md
<BEF> shiro-redis
=============
[](https://travis-ci.org/alexxiyang/shiro-redis)
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
# Download
You can choose these 2 ways to include shiro-redis into your project
* use "git clone https://github.com/alexxiyang/shiro-redis.git" to clone project to your local workspace and build jar file by your self
* add maven dependency
```xml
<dependency>
<groupId>org.crazycake</groupId>
<artifactId>shiro-redis</artifactId>
<version>3.2.2</version>
</dependency>
```
> **Note:**\
> Do not use version < 3.1.0\
> **注意**:\
> 请不要使用3.1.0以下版本
# Before use
Here is the first thing you need to know. Shiro-redis needs an id field to identify your authorization object in Redis. So please make sure your principal class has a field which you can get unique id of this object. Please setting this id field name by `cacheManager.principalIdFieldName = <your id field name of principal object>`
For example:
If you create SimpleAuthenticationInfo like the following:
```java
@Override
protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token) throws AuthenticationException {
UsernamePasswordToken usernamePasswordToken = (UsernamePasswordToken)token;
UserInfo userInfo = new UserInfo();
userInfo.setUsername(usernamePasswordToken.getUsername());
return new SimpleAuthenticationInfo(userInfo, "123456", getName());
}
```
Then the userInfo object is your principal object. You need to make sure `UserInfo` has an unique field to identify it in Redis. Take userId as an example:
```java
public class UserInfo implements Serializable{
private Integer userId
private String username;
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public Integer getUserId() {
return this.userId;
}
}
```
Put userId as the value of `cacheManager.principalIdFieldName`, like this:
```properties
cacheManager.principalIdFieldName = userId
```
If you're using Spring, the configuration should be
```xml
<property name="principalIdFieldName" value="userId" />
```
Then shiro-redis will call `userInfo.getUserId()` to get the id for storing Redis object.
# How to configure ?
You can configure shiro-redis either in `shiro.ini` or in `spring-*.xml`
## shiro.ini
Here is the configuration for shiro.ini.
### Redis Standalone
```properties
[main]
#====================================
# shiro-redis configuration [start]
#====================================
#===================================
# Redis Manager [start]
#===================================
# Create redisManager
redisManager = org.crazycake.shiro.RedisManager
# Redis host. If you don't specify host the default value is 127.0.0.1:6379
redisManager.host = 127.0.0.1:6379
#===================================
# Redis Manager [end]
#===================================
#=========================================
# Redis session DAO [start]
#=========================================
# Create redisSessionDAO
redisSessionDAO = org.crazycake.shiro.RedisSessionDAO
# Use redisManager as cache manager
redisSessionDAO.redisManager = $redisManager
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
sessionManager.sessionDAO = $redisSessionDAO
securityManager.sessionManager = $sessionManager
#=========================================
# Redis session DAO [end]
#=========================================
#==========================================
# Redis cache manager [start]
#==========================================
# Create cacheManager
cacheManager = org.crazycake.shiro.RedisCacheManager
# Principal id field name. The field which you can get unique id to identify this principal.
# For example, if you use UserInfo as Principal class, the id field maybe `id`, `userId`, `email`, etc.
# Remember to add getter to this id field. For example, `getId()`, `getUserId()`, `getEmail()`, etc.
# Default value is id, that means your principal object must has a method called `getId()`
#
cacheManager.principalIdFieldName = id
# Use redisManager as cache manager
cacheManager.redisManager = $redisManager
securityManager.cacheManager = $cacheManager
#==========================================
# Redis cache manager [end]
#==========================================
#=================================
# shiro-redis configuration [end]
#=================================
```
For complete configurable options list, check [Configurable Options](#configurable-options).
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-tutorial) for you to understand how to configure `shiro-redis` in `shiro.ini`.
### Redis Sentinel
if you're using Redis Sentinel, please change the redisManager configuration into the following:
```properties
#===================================
# Redis Manager [start]
#===================================
# Create redisManager
redisManager = org.crazycake.shiro.RedisSentinelManager
# Sentinel host. If you don't specify host the default value is 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
redisManager.host = 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
# Sentinel master name
redisManager.masterName = mymaster
#===================================
# Redis Manager [end]
#===================================
```
For complete configurable options list, check [Configurable Options](#configurable-options).
### Redis Cluster
If you're using redis cluster, here is an example of configuration :
```properties
#===================================
# Redis Manager [start]
#===================================
# Create redisManager
redisManager = org.crazycake.shiro.RedisClusterManager
# Redis host and port list
redisManager.host = 192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005
#===================================
# Redis Manager [end]
#===================================
```
For complete configurable options list, check [Configurable Options](#configurable-options).
## Spring
### Redis Standalone
spring.xml:
```xml
<!-- shiro-redis configuration [start] -->
<!-- Redis Manager [start] -->
<bean id="redisManager" class="org.crazycake.shiro.RedisManager">
<property name="host" value="127.0.0.1:6379"/>
</bean>
<!-- Redis Manager [end] -->
<!-- Redis session DAO [start] -->
<bean id="redisSessionDAO" class="org.crazycake.shiro.RedisSessionDAO">
<property name="redisManager" ref="redisManager" />
</bean>
<bean id="sessionManager" class="org.apache.shiro.web.session.mgt.DefaultWebSessionManager">
<property name="sessionDAO" ref="redisSessionDAO" />
</bean>
<!-- Redis session DAO [end] -->
<!-- Redis cache manager [start] -->
<bean id="cacheManager" class="org.crazycake.shiro.RedisCacheManager">
<property name="redisManager" ref="redisManager" />
</bean>
<!-- Redis cache manager [end] -->
<bean id="securityManager" class="org.apache.shiro.web.mgt.DefaultWebSecurityManager">
<property name="sessionManager" ref="sessionManager" />
<property name="cacheManager" ref="cacheManager" />
<!-- other configurations -->
<property name="realm" ref="exampleRealm"/>
<property name="rememberMeManager.cipherKey" value="kPH+bIxk5D2deZiIxcaaaA==" />
</bean>
<!-- shiro-redis configuration [end] -->
```
For complete configurable options list, check [Configurable Options](#configurable-options).
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-spring-tutorial) for you to understand how to configure `shiro-redis` in spring configuration file.
### Redis Sentinel
If you use redis sentinel, here is an example of configuration :
```xml
<!-- shiro-redis configuration [start] -->
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisSentinelManager">
<property name="host" value="127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381"/>
<property name="masterName" value="mymaster"/>
</bean>
```
For complete configurable options list, check [Configurable Options](#configurable-options).
### Redis Cluster
If you use redis cluster, here is an example of configuration :
```xml
<!-- shiro-redis configuration [start] -->
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisClusterManager">
<property name="host" value="192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005"/>
</bean>
```
For complete configurable options list, check [Configurable Options](#configurable-options).
## Serializer
Since redis only accept `byte[]`, there comes to a serializer problem.
Shiro-redis is using StringSerializer as key serializer and ObjectSerializer as value serializer.
You can use your own custom serializer, as long as this custom serializer implemens `org.crazycake.shiro.serializer.RedisSerializer`
For example, let's change the charset of keySerializer.
```properties
# If you want change charset of keySerializer or use your own custom serializer, you need to define serializer first
#
# cacheManagerKeySerializer = org.crazycake.shiro.serializer.StringSerializer
# Supported encodings refer to https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html
# UTF-8, UTF-16, UTF-32, ISO-8859-1, GBK, Big5, etc
#
# cacheManagerKeySerializer.charset = UTF-8
# cacheManager.keySerializer = $cacheManagerKeySerializer
```
These 4 Serializers are replaceable:
- cacheManager.keySerializer
- cacheManager.valueSerializer
- redisSessionDAO.keySerializer
- redisSessionDAO.valueSerializer
## Configurable Options
### RedisManager
| Title | Default | Description |
| :------------------| :------------------- | :---------------------------|
| host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is `127.0.0.1:6379`. If you run redis in sentinel mode or cluster mode, separate host names with comma, like `127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381` |
| masterName | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
| timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
| soTimeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
| maxAttempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
| password | | Redis password |
| database | `0` | Redis database. Default value is 0 |
| jedisPoolConfig | `new redis.clients.jedis.JedisPoolConfig()` | JedisPoolConfig. You can create your own JedisPoolConfig and set attributes as you wish<br>Most of time, you don't need to set jedisPoolConfig<br>Here is an example.<br>`jedisPoolConfig = redis.clients.jedis.JedisPoolConfig`<br>`jedisPoolConfig.testWhileIdle = false`<br>`redisManager.jedisPoolConfig = jedisPoolConfig` |
| count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
### RedisSessionDAO
| Title | Default | Description |
| :------------------| :------------------- | :---------------------------|
| redisManager | | RedisManager which you just configured above (Required) |
| expire | `-2` | Redis cache key/value expire time. The expire time is in second.<br>Special values:<br>`-1`: no expire<br>`-2`: the same timeout with session<br>Default value: `-2`<br>**Note**: Make sure expire time is longer than session timeout. |
| keyPrefix | `shiro:session:` | Custom your redis key prefix for session management<br>**Note**: Remember to add colon at the end of prefix. |
| sessionInMemoryTimeout | `1000` | When we do signin, `doReadSession(sessionId)` will be called by shiro about 10 times. So shiro-redis save Session in ThreadLocal to remit this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal. <br>Most of time, you don't need to change it. |
| sessionInMemoryEnabled | `true` | Whether or not enable temporary save session in ThreadLocal |
| keySerializer | `org.crazycake.shiro.serializer.StringSerializer` | The key serializer of cache manager<br>You can change the implement of key serializer or the encoding of StringSerializer.<br>Supported encodings refer to [Supported Encodings](https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html). Such as `UTF-8`, `UTF-16`, `UTF-32`, `ISO-8859-1`, `GBK`, `Big5`, etc<br>For more detail, check [Serializer](#serializer) |
| valueSerializer | `org.crazycake.shiro.serializer.ObjectSerializer` | The value serializer of cache manager<br>You can change the implement of value serializer<br>For more detail, check [Serializer](#serializer) |
### CacheManager
| Title | Default | Description |
| :--------------------| :------------------- | :---------------------------|
| redisManager | | RedisManager which you just configured above (Required) |
| principalIdFieldName | `id` | Principal id field name. The field which you can get unique id to identify this principal.<br>For example, if you use UserInfo as Principal class, the id field maybe `id`, `userId`, `email`, etc.<br>Remember to add getter to this id field. For example, `getId()`, `getUserId(`), `getEmail()`, etc.<br>Default value is `id`, that means your principal object must has a method called `getId()` |
| expire | `1800` | Redis cache key/value expire time. <br>The expire time is in second. |
| keyPrefix | `shiro:cache:` | Custom your redis key prefix for cache management<br>**Note**: Remember to add colon at the end of prefix. |
| keySerializer | `org.crazycake.shiro.serializer.StringSerializer` | The key serializer of cache manager<br>You can change the implement of key serializer or the encoding of StringSerializer.<br>Supported encodings refer to [Supported Encodings](https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html). Such as `UTF-8`, `UTF-16`, `UTF-32`, `ISO-8859-1`, `GBK`, `Big5`, etc<br>For more detail, check [Serializer](#serializer) |
| valueSerializer | `org.crazycake.shiro.serializer.ObjectSerializer` | The value serializer of cache manager<br>You can change the implement of value serializer<br>For more detail, check [Serializer](#serializer) |
# Spring boot starter
Shiro-redis’s Spring-Boot integration is the easiest way to integrate Shiro-redis into a Spring-base application.
> Note: `shiro-redis-spring-boot-starter` version `3.2.1` is based on `shiro-spring-boot-web-starter` version `1.4.0-RC2`
First include the Shiro-redis Spring boot starter dependency in you application classpath
```xml
<dependency>
<groupId>org.crazycake</groupId>
<artifactId>shiro-redis-spring-boot-starter</artifactId>
<version>3.2.1</version>
</dependency>
```
The next step depends on whether you've created your own `SessionManager` or `SessionsSecurityManager`.
Because `shiro-redis-spring-boot-starter` will create `RedisSessionDAO` and `RedisCacheManager` for you. Then inject them into `SessionManager` and `SessionsSecurityManager` automatically.
But if you've created your own `SessionManager` or `SessionsSecurityManager` as below:
```java
@Bean
public SessionsSecurityManager securityManager(List<Realm> realms) {
DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager(realms);
// other stuff
return securityManager;
}
```
You will have to inject them by yourself. for more deail, see below
## If you haven't created your own `SessionManager` or `SessionsSecurityManager`
You are all set. Enjoy it!
## If you have created your own `SessionManager` or `SessionsSecurityManager`
Inject `redisSessionDAO` and `redisCacheManager` which created by `shiro-redis-spring-boot-starter` already
```java
@Autowired
RedisSessionDAO redisSessionDAO;
@Autowired
RedisCacheManager redisCacheManager;
```
Inject them into `SessionManager` and `SessionsSecurityManager`
```java
@Bean
public SessionManager sessionManager() {
DefaultWebSessionManager sessionManager = new DefaultWebSessionManager();
// inject redisSessionDAO
sessionManager.setSessionDAO(redisSessionDAO);
return sessionManager;
}
@Bean
public SessionsSecurityManager securityManager(List<Realm> realms, SessionManager sessionManager) {
DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager(realms);
//inject sessionManager
securityManager.setSessionManager(sessionManager);
// inject redisCacheManager
securityManager.setCacheManager(redisCacheManager);
return securityManager;
}
```
For full example, see [shiro-redis-spring-boot-tutorial](https://github.com/alexxiyang/shiro-redis-spring-boot-tutorial)
### Configuration Properties
| Title | Default | Description |
| :--------------------------------------------------| :------------------- | :---------------------------|
| shiro-redis.enabled | `true` | Enables shiro-redis’s Spring module |
| shiro-redis.redis-manager.deploy-mode | `standalone` | Redis deploy mode. Options: `standalone`, `sentinel`, 'cluster' |
| shiro-redis.redis-manager.host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is `127.0.0.1:6379`. If you run redis in sentinel mode or cluster mode, separate host names with comma, like `127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381` |
| shiro-redis.redis-manager.master-name | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
| shiro-redis.redis-manager.timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
| shiro-redis.redis-manager.so-timeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
| shiro-redis.redis-manager.max-attempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
| shiro-redis.redis-manager.password | | Redis password |
| shiro-redis.redis-manager.database | `0` | Redis database. Default value is 0 |
| shiro-redis.redis-manager.count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
| shiro-redis.session-dao.expire | `-2` | Redis cache key/value expire time. The expire time is in second.<br>Special values:<br>`-1`: no expire<br>`-2`: the same timeout with session<br>Default value: `-2`<br>**Note**: Make sure expire time is longer than session timeout. |
| shiro-redis.session-dao.key-prefix | `shiro:session:` | Custom your redis key prefix for session management<br>**Note**: Remember to add colon at the end of prefix. |
| shiro-redis.session-dao.session-in-memory-timeout | `1000` | When we do signin, `doReadSession(sessionId)` will be called by shiro about 10 times. So shiro-redis save Session in ThreadLocal to remit this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal. <br>Most of time, you don't need to change it. |
| shiro-redis.session-dao.session-in-memory-enabled | `true` | Whether or not enable temporary save session in ThreadLocal |
| shiro-redis.cache-manager.principal-id-field-name | `id` | Principal id field name. The field which you can get unique id to identify this principal.<br>For example, if you use UserInfo as Principal class, the id field maybe `id`, `userId`, `email`, etc.<br>Remember to add getter to this id field. For example, `getId()`, `getUserId(`), `getEmail()`, etc.<br>Default value is `id`, that means your principal object must has a method called `getId()` |
| shiro-redis.cache-manager.expire | `1800` | Redis cache key/value expire time. <br>The expire time is in second. |
| shiro-redis.cache-manager.key-prefix | `shiro:cache:` | Custom your redis key prefix for cache management<br>**Note**: Remember to add colon at the end of prefix. |
# If you found any bugs
Please send email to [email protected]
可以用中文
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
<MSG> Use github Page
<DFF> @@ -1,435 +1,4 @@
shiro-redis
=============
-[](https://travis-ci.org/alexxiyang/shiro-redis)
-
-
-shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
-
-# Download
-
-You can choose these 2 ways to include shiro-redis into your project
-* use "git clone https://github.com/alexxiyang/shiro-redis.git" to clone project to your local workspace and build jar file by your self
-* add maven dependency
-
-```xml
-<dependency>
- <groupId>org.crazycake</groupId>
- <artifactId>shiro-redis</artifactId>
- <version>3.2.2</version>
-</dependency>
-```
-
-> **Note:**\
-> Do not use version < 3.1.0\
-> **注意**:\
-> 请不要使用3.1.0以下版本
-
-# Before use
-Here is the first thing you need to know. Shiro-redis needs an id field to identify your authorization object in Redis. So please make sure your principal class has a field which you can get unique id of this object. Please setting this id field name by `cacheManager.principalIdFieldName = <your id field name of principal object>`
-
-For example:
-
-If you create SimpleAuthenticationInfo like the following:
-```java
-@Override
-protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token) throws AuthenticationException {
- UsernamePasswordToken usernamePasswordToken = (UsernamePasswordToken)token;
- UserInfo userInfo = new UserInfo();
- userInfo.setUsername(usernamePasswordToken.getUsername());
- return new SimpleAuthenticationInfo(userInfo, "123456", getName());
-}
-```
-
-Then the userInfo object is your principal object. You need to make sure `UserInfo` has an unique field to identify it in Redis. Take userId as an example:
-```java
-public class UserInfo implements Serializable{
-
- private Integer userId
-
- private String username;
-
- public String getUsername() {
- return username;
- }
-
- public void setUsername(String username) {
- this.username = username;
- }
-
- public Integer getUserId() {
- return this.userId;
- }
-}
-```
-
-Put userId as the value of `cacheManager.principalIdFieldName`, like this:
-```properties
-cacheManager.principalIdFieldName = userId
-```
-
-If you're using Spring, the configuration should be
-```xml
-<property name="principalIdFieldName" value="userId" />
-```
-
-Then shiro-redis will call `userInfo.getUserId()` to get the id for storing Redis object.
-
-# How to configure ?
-
-You can configure shiro-redis either in `shiro.ini` or in `spring-*.xml`
-
-## shiro.ini
-Here is the configuration for shiro.ini.
-
-### Redis Standalone
-
-```properties
-[main]
-#====================================
-# shiro-redis configuration [start]
-#====================================
-
-#===================================
-# Redis Manager [start]
-#===================================
-
-# Create redisManager
-redisManager = org.crazycake.shiro.RedisManager
-
-# Redis host. If you don't specify host the default value is 127.0.0.1:6379
-redisManager.host = 127.0.0.1:6379
-
-#===================================
-# Redis Manager [end]
-#===================================
-
-#=========================================
-# Redis session DAO [start]
-#=========================================
-
-# Create redisSessionDAO
-redisSessionDAO = org.crazycake.shiro.RedisSessionDAO
-
-# Use redisManager as cache manager
-redisSessionDAO.redisManager = $redisManager
-
-sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
-
-sessionManager.sessionDAO = $redisSessionDAO
-
-securityManager.sessionManager = $sessionManager
-
-#=========================================
-# Redis session DAO [end]
-#=========================================
-
-#==========================================
-# Redis cache manager [start]
-#==========================================
-
-# Create cacheManager
-cacheManager = org.crazycake.shiro.RedisCacheManager
-
-# Principal id field name. The field which you can get unique id to identify this principal.
-# For example, if you use UserInfo as Principal class, the id field maybe `id`, `userId`, `email`, etc.
-# Remember to add getter to this id field. For example, `getId()`, `getUserId()`, `getEmail()`, etc.
-# Default value is id, that means your principal object must has a method called `getId()`
-#
-cacheManager.principalIdFieldName = id
-
-# Use redisManager as cache manager
-cacheManager.redisManager = $redisManager
-
-securityManager.cacheManager = $cacheManager
-
-#==========================================
-# Redis cache manager [end]
-#==========================================
-
-#=================================
-# shiro-redis configuration [end]
-#=================================
-```
-
-For complete configurable options list, check [Configurable Options](#configurable-options).
-
-Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-tutorial) for you to understand how to configure `shiro-redis` in `shiro.ini`.
-
-### Redis Sentinel
-if you're using Redis Sentinel, please change the redisManager configuration into the following:
-```properties
-#===================================
-# Redis Manager [start]
-#===================================
-
-# Create redisManager
-redisManager = org.crazycake.shiro.RedisSentinelManager
-
-# Sentinel host. If you don't specify host the default value is 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
-redisManager.host = 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
-
-# Sentinel master name
-redisManager.masterName = mymaster
-
-#===================================
-# Redis Manager [end]
-#===================================
-```
-
-For complete configurable options list, check [Configurable Options](#configurable-options).
-
-### Redis Cluster
-If you're using redis cluster, here is an example of configuration :
-
-```properties
-#===================================
-# Redis Manager [start]
-#===================================
-
-# Create redisManager
-redisManager = org.crazycake.shiro.RedisClusterManager
-
-# Redis host and port list
-redisManager.host = 192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005
-
-#===================================
-# Redis Manager [end]
-#===================================
-```
-
-For complete configurable options list, check [Configurable Options](#configurable-options).
-
-## Spring
-
-### Redis Standalone
-spring.xml:
-```xml
-<!-- shiro-redis configuration [start] -->
-
-<!-- Redis Manager [start] -->
-<bean id="redisManager" class="org.crazycake.shiro.RedisManager">
- <property name="host" value="127.0.0.1:6379"/>
-</bean>
-<!-- Redis Manager [end] -->
-
-<!-- Redis session DAO [start] -->
-<bean id="redisSessionDAO" class="org.crazycake.shiro.RedisSessionDAO">
- <property name="redisManager" ref="redisManager" />
-</bean>
-<bean id="sessionManager" class="org.apache.shiro.web.session.mgt.DefaultWebSessionManager">
- <property name="sessionDAO" ref="redisSessionDAO" />
-</bean>
-<!-- Redis session DAO [end] -->
-
-<!-- Redis cache manager [start] -->
-<bean id="cacheManager" class="org.crazycake.shiro.RedisCacheManager">
- <property name="redisManager" ref="redisManager" />
-</bean>
-<!-- Redis cache manager [end] -->
-
-<bean id="securityManager" class="org.apache.shiro.web.mgt.DefaultWebSecurityManager">
- <property name="sessionManager" ref="sessionManager" />
- <property name="cacheManager" ref="cacheManager" />
-
- <!-- other configurations -->
- <property name="realm" ref="exampleRealm"/>
- <property name="rememberMeManager.cipherKey" value="kPH+bIxk5D2deZiIxcaaaA==" />
-</bean>
-
-<!-- shiro-redis configuration [end] -->
-```
-
-For complete configurable options list, check [Configurable Options](#configurable-options).
-
-Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-spring-tutorial) for you to understand how to configure `shiro-redis` in spring configuration file.
-
-### Redis Sentinel
-If you use redis sentinel, here is an example of configuration :
-```xml
-<!-- shiro-redis configuration [start] -->
-<!-- shiro redisManager -->
-<bean id="redisManager" class="org.crazycake.shiro.RedisSentinelManager">
- <property name="host" value="127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381"/>
- <property name="masterName" value="mymaster"/>
-</bean>
-```
-
-For complete configurable options list, check [Configurable Options](#configurable-options).
-
-### Redis Cluster
-If you use redis cluster, here is an example of configuration :
-```xml
-<!-- shiro-redis configuration [start] -->
-<!-- shiro redisManager -->
-<bean id="redisManager" class="org.crazycake.shiro.RedisClusterManager">
- <property name="host" value="192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005"/>
-</bean>
-```
-
-For complete configurable options list, check [Configurable Options](#configurable-options).
-
-## Serializer
-Since redis only accept `byte[]`, there comes to a serializer problem.
-Shiro-redis is using StringSerializer as key serializer and ObjectSerializer as value serializer.
-You can use your own custom serializer, as long as this custom serializer implemens `org.crazycake.shiro.serializer.RedisSerializer`
-
-For example, let's change the charset of keySerializer.
-```properties
-# If you want change charset of keySerializer or use your own custom serializer, you need to define serializer first
-#
-# cacheManagerKeySerializer = org.crazycake.shiro.serializer.StringSerializer
-
-# Supported encodings refer to https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html
-# UTF-8, UTF-16, UTF-32, ISO-8859-1, GBK, Big5, etc
-#
-# cacheManagerKeySerializer.charset = UTF-8
-
-# cacheManager.keySerializer = $cacheManagerKeySerializer
-```
-
-These 4 Serializers are replaceable:
-- cacheManager.keySerializer
-- cacheManager.valueSerializer
-- redisSessionDAO.keySerializer
-- redisSessionDAO.valueSerializer
-
-## Configurable Options
-
-### RedisManager
-
-| Title | Default | Description |
-| :------------------| :------------------- | :---------------------------|
-| host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is `127.0.0.1:6379`. If you run redis in sentinel mode or cluster mode, separate host names with comma, like `127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381` |
-| masterName | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
-| timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
-| soTimeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
-| maxAttempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
-| password | | Redis password |
-| database | `0` | Redis database. Default value is 0 |
-| jedisPoolConfig | `new redis.clients.jedis.JedisPoolConfig()` | JedisPoolConfig. You can create your own JedisPoolConfig and set attributes as you wish<br>Most of time, you don't need to set jedisPoolConfig<br>Here is an example.<br>`jedisPoolConfig = redis.clients.jedis.JedisPoolConfig`<br>`jedisPoolConfig.testWhileIdle = false`<br>`redisManager.jedisPoolConfig = jedisPoolConfig` |
-| count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
-
-### RedisSessionDAO
-
-| Title | Default | Description |
-| :------------------| :------------------- | :---------------------------|
-| redisManager | | RedisManager which you just configured above (Required) |
-| expire | `-2` | Redis cache key/value expire time. The expire time is in second.<br>Special values:<br>`-1`: no expire<br>`-2`: the same timeout with session<br>Default value: `-2`<br>**Note**: Make sure expire time is longer than session timeout. |
-| keyPrefix | `shiro:session:` | Custom your redis key prefix for session management<br>**Note**: Remember to add colon at the end of prefix. |
-| sessionInMemoryTimeout | `1000` | When we do signin, `doReadSession(sessionId)` will be called by shiro about 10 times. So shiro-redis save Session in ThreadLocal to remit this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal. <br>Most of time, you don't need to change it. |
-| sessionInMemoryEnabled | `true` | Whether or not enable temporary save session in ThreadLocal |
-| keySerializer | `org.crazycake.shiro.serializer.StringSerializer` | The key serializer of cache manager<br>You can change the implement of key serializer or the encoding of StringSerializer.<br>Supported encodings refer to [Supported Encodings](https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html). Such as `UTF-8`, `UTF-16`, `UTF-32`, `ISO-8859-1`, `GBK`, `Big5`, etc<br>For more detail, check [Serializer](#serializer) |
-| valueSerializer | `org.crazycake.shiro.serializer.ObjectSerializer` | The value serializer of cache manager<br>You can change the implement of value serializer<br>For more detail, check [Serializer](#serializer) |
-
-### CacheManager
-
-| Title | Default | Description |
-| :--------------------| :------------------- | :---------------------------|
-| redisManager | | RedisManager which you just configured above (Required) |
-| principalIdFieldName | `id` | Principal id field name. The field which you can get unique id to identify this principal.<br>For example, if you use UserInfo as Principal class, the id field maybe `id`, `userId`, `email`, etc.<br>Remember to add getter to this id field. For example, `getId()`, `getUserId(`), `getEmail()`, etc.<br>Default value is `id`, that means your principal object must has a method called `getId()` |
-| expire | `1800` | Redis cache key/value expire time. <br>The expire time is in second. |
-| keyPrefix | `shiro:cache:` | Custom your redis key prefix for cache management<br>**Note**: Remember to add colon at the end of prefix. |
-| keySerializer | `org.crazycake.shiro.serializer.StringSerializer` | The key serializer of cache manager<br>You can change the implement of key serializer or the encoding of StringSerializer.<br>Supported encodings refer to [Supported Encodings](https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html). Such as `UTF-8`, `UTF-16`, `UTF-32`, `ISO-8859-1`, `GBK`, `Big5`, etc<br>For more detail, check [Serializer](#serializer) |
-| valueSerializer | `org.crazycake.shiro.serializer.ObjectSerializer` | The value serializer of cache manager<br>You can change the implement of value serializer<br>For more detail, check [Serializer](#serializer) |
-
-# Spring boot starter
-
-Shiro-redis’s Spring-Boot integration is the easiest way to integrate Shiro-redis into a Spring-base application.
-
-> Note: `shiro-redis-spring-boot-starter` version `3.2.1` is based on `shiro-spring-boot-web-starter` version `1.4.0-RC2`
-
-First include the Shiro-redis Spring boot starter dependency in you application classpath
-
-```xml
-<dependency>
- <groupId>org.crazycake</groupId>
- <artifactId>shiro-redis-spring-boot-starter</artifactId>
- <version>3.2.1</version>
-</dependency>
-```
-
-The next step depends on whether you've created your own `SessionManager` or `SessionsSecurityManager`.
-Because `shiro-redis-spring-boot-starter` will create `RedisSessionDAO` and `RedisCacheManager` for you. Then inject them into `SessionManager` and `SessionsSecurityManager` automatically.
-
-But if you've created your own `SessionManager` or `SessionsSecurityManager` as below:
-```java
-@Bean
-public SessionsSecurityManager securityManager(List<Realm> realms) {
- DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager(realms);
- // other stuff
- return securityManager;
-}
-```
-You will have to inject them by yourself. for more deail, see below
-
-## If you haven't created your own `SessionManager` or `SessionsSecurityManager`
-
-You are all set. Enjoy it!
-
-## If you have created your own `SessionManager` or `SessionsSecurityManager`
-
-Inject `redisSessionDAO` and `redisCacheManager` which created by `shiro-redis-spring-boot-starter` already
-```java
-@Autowired
-RedisSessionDAO redisSessionDAO;
-
-@Autowired
-RedisCacheManager redisCacheManager;
-```
-
-Inject them into `SessionManager` and `SessionsSecurityManager`
-
-```java
-@Bean
-public SessionManager sessionManager() {
- DefaultWebSessionManager sessionManager = new DefaultWebSessionManager();
-
- // inject redisSessionDAO
- sessionManager.setSessionDAO(redisSessionDAO);
- return sessionManager;
-}
-
-@Bean
-public SessionsSecurityManager securityManager(List<Realm> realms, SessionManager sessionManager) {
- DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager(realms);
-
- //inject sessionManager
- securityManager.setSessionManager(sessionManager);
-
- // inject redisCacheManager
- securityManager.setCacheManager(redisCacheManager);
- return securityManager;
-}
-```
-
-For full example, see [shiro-redis-spring-boot-tutorial](https://github.com/alexxiyang/shiro-redis-spring-boot-tutorial)
-
-### Configuration Properties
-
-| Title | Default | Description |
-| :--------------------------------------------------| :------------------- | :---------------------------|
-| shiro-redis.enabled | `true` | Enables shiro-redis’s Spring module |
-| shiro-redis.redis-manager.deploy-mode | `standalone` | Redis deploy mode. Options: `standalone`, `sentinel`, 'cluster' |
-| shiro-redis.redis-manager.host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is `127.0.0.1:6379`. If you run redis in sentinel mode or cluster mode, separate host names with comma, like `127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381` |
-| shiro-redis.redis-manager.master-name | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
-| shiro-redis.redis-manager.timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
-| shiro-redis.redis-manager.so-timeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
-| shiro-redis.redis-manager.max-attempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
-| shiro-redis.redis-manager.password | | Redis password |
-| shiro-redis.redis-manager.database | `0` | Redis database. Default value is 0 |
-| shiro-redis.redis-manager.count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
-| shiro-redis.session-dao.expire | `-2` | Redis cache key/value expire time. The expire time is in second.<br>Special values:<br>`-1`: no expire<br>`-2`: the same timeout with session<br>Default value: `-2`<br>**Note**: Make sure expire time is longer than session timeout. |
-| shiro-redis.session-dao.key-prefix | `shiro:session:` | Custom your redis key prefix for session management<br>**Note**: Remember to add colon at the end of prefix. |
-| shiro-redis.session-dao.session-in-memory-timeout | `1000` | When we do signin, `doReadSession(sessionId)` will be called by shiro about 10 times. So shiro-redis save Session in ThreadLocal to remit this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal. <br>Most of time, you don't need to change it. |
-| shiro-redis.session-dao.session-in-memory-enabled | `true` | Whether or not enable temporary save session in ThreadLocal |
-| shiro-redis.cache-manager.principal-id-field-name | `id` | Principal id field name. The field which you can get unique id to identify this principal.<br>For example, if you use UserInfo as Principal class, the id field maybe `id`, `userId`, `email`, etc.<br>Remember to add getter to this id field. For example, `getId()`, `getUserId(`), `getEmail()`, etc.<br>Default value is `id`, that means your principal object must has a method called `getId()` |
-| shiro-redis.cache-manager.expire | `1800` | Redis cache key/value expire time. <br>The expire time is in second. |
-| shiro-redis.cache-manager.key-prefix | `shiro:cache:` | Custom your redis key prefix for cache management<br>**Note**: Remember to add colon at the end of prefix. |
-
-
-# If you found any bugs
-
-Please send email to [email protected]
-
-可以用中文
+view (Documentation)[http://alexxiyang.github.io/shiro-redis/]
| 1 | Use github Page | 432 | .md | md | mit | alexxiyang/shiro-redis |
1990 | <NME> RedisSessionDAO.java
<BEF> package org.crazycake.shiro;
import org.apache.shiro.session.Session;
import org.apache.shiro.session.UnknownSessionException;
import org.apache.shiro.session.mgt.eis.AbstractSessionDAO;
import org.crazycake.shiro.common.SessionInMemory;
import org.crazycake.shiro.exception.SerializationException;
import org.crazycake.shiro.serializer.ObjectSerializer;
import org.crazycake.shiro.serializer.RedisSerializer;
import org.crazycake.shiro.serializer.StringSerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.Serializable;
import java.util.*;
private static Logger logger = LoggerFactory.getLogger(RedisSessionDAO.class);
private static final String DEFAULT_SESSION_KEY_PREFIX = "shiro:session:";
private RedisManager redisManager;
private String keyPrefix = DEFAULT_SESSION_KEY_PREFIX;
private RedisSerializer keySerializer = new StringSerializer();
private RedisSerializer valueSerializer = new ObjectSerializer();
private static final String DEFAULT_SESSION_KEY_PREFIX = "shiro:session:";
private String keyPrefix = DEFAULT_SESSION_KEY_PREFIX;
/**
* doReadSession be called about 10 times when login.
* Save Session in ThreadLocal to resolve this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal.
* The default value is 1000 milliseconds (1s).
* Most of time, you don't need to change it.
*
* You can turn it off by setting sessionInMemoryEnabled to false
*/
private static final long DEFAULT_SESSION_IN_MEMORY_TIMEOUT = 1000L;
private long sessionInMemoryTimeout = DEFAULT_SESSION_IN_MEMORY_TIMEOUT;
private static final boolean DEFAULT_SESSION_IN_MEMORY_ENABLED = true;
private boolean sessionInMemoryEnabled = DEFAULT_SESSION_IN_MEMORY_ENABLED;
private static ThreadLocal sessionsInThread = new ThreadLocal();
/**
* expire time in seconds.
* NOTE: Please make sure expire is longer than session.getTimeout(),
* otherwise you might need the issue that session in Redis got erased when the Session is still available
*
* DEFAULT_EXPIRE: use the timeout of session instead of setting it by yourself
* NO_EXPIRE: never expire
*/
private static final int DEFAULT_EXPIRE = -2;
private static final int NO_EXPIRE = -1;
private int expire = DEFAULT_EXPIRE;
private static final int MILLISECONDS_IN_A_SECOND = 1000;
/**
* redisManager used for communicate with Redis
*/
private IRedisManager redisManager;
/**
* Serializer of key
*/
private RedisSerializer keySerializer = new StringSerializer();
/**
* Serializer of value
*/
private RedisSerializer valueSerializer = new ObjectSerializer();
/**
* save/update session
* @param session
* @throws UnknownSessionException
*/
@Override
public void update(Session session) throws UnknownSessionException {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
this.saveSession(session);
if (this.sessionInMemoryEnabled) {
this.setSessionToThreadLocal(session.getId(), session);
}
}
private void saveSession(Session session) throws UnknownSessionException {
if (session == null || session.getId() == null) {
logger.error("session or session id is null");
throw new UnknownSessionException("session or session id is null");
}
byte[] key;
byte[] value;
try {
key = keySerializer.serialize(getRedisSessionKey(session.getId()));
value = valueSerializer.serialize(session);
} catch (SerializationException e) {
logger.error("serialize session error. session id=" + session.getId());
throw new UnknownSessionException(e);
}
if (expire == DEFAULT_EXPIRE) {
redisManager.set(key, value, (int) (session.getTimeout() / MILLISECONDS_IN_A_SECOND));
return;
}
if (expire != NO_EXPIRE && expire * MILLISECONDS_IN_A_SECOND < session.getTimeout()) {
logger.warn("Redis session expire time: "
logger.debug("read session from redis");
try {
s = (Session)valueSerializer.deserialize(redisManager.get(keySerializer.serialize(getRedisSessionKey(sessionId))));
// threadLocalSession.set(s);
} catch (SerializationException e) {
logger.error("read session error. settionId=" + sessionId);
}
/**
* delete session
* @param session
return this.keyPrefix + sessionId;
}
public RedisManager getRedisManager() {
return redisManager;
}
public void setRedisManager(RedisManager redisManager) {
this.redisManager = redisManager;
}
this.delSessionFromThreadLocal(session.getId());
}
try {
redisManager.del(keySerializer.serialize(getRedisSessionKey(session.getId())));
} catch (SerializationException e) {
logger.error("delete session error. session id=" + session.getId());
}
}
/**
* get all active sessions
* @return
*/
@Override
public Collection<Session> getActiveSessions() {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
Set<Session> sessions = new HashSet<Session>();
try {
Set<byte[]> keys = redisManager.keys(keySerializer.serialize(this.keyPrefix + "*"));
if (keys != null && keys.size() > 0) {
for (byte[] key:keys) {
Session s = (Session) valueSerializer.deserialize(redisManager.get(key));
sessions.add(s);
}
}
} catch (SerializationException e) {
logger.error("get active sessions error.");
}
return sessions;
}
@Override
protected Serializable doCreate(Session session) {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
if (session == null) {
logger.error("session is null");
throw new UnknownSessionException("session is null");
}
Serializable sessionId = this.generateSessionId(session);
this.assignSessionId(session, sessionId);
this.saveSession(session);
return sessionId;
}
/**
* I change
* @param sessionId
* @return
*/
@Override
protected Session doReadSession(Serializable sessionId) {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
if (sessionId == null) {
logger.warn("session id is null");
return null;
}
if (this.sessionInMemoryEnabled) {
Session session = getSessionFromThreadLocal(sessionId);
if (session != null) {
return session;
}
}
Session session = null;
try {
String sessionRedisKey = getRedisSessionKey(sessionId);
logger.debug("read session: " + sessionRedisKey + " from Redis");
session = (Session) valueSerializer.deserialize(redisManager.get(keySerializer.serialize(sessionRedisKey)));
if (this.sessionInMemoryEnabled) {
setSessionToThreadLocal(sessionId, session);
}
} catch (SerializationException e) {
logger.error("read session error. sessionId: " + sessionId);
}
return session;
}
private void setSessionToThreadLocal(Serializable sessionId, Session session) {
this.initSessionsInThread();
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
sessionMap.put(sessionId, this.createSessionInMemory(session));
}
private void delSessionFromThreadLocal(Serializable sessionId) {
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
if (sessionMap == null) {
return;
}
sessionMap.remove(sessionId);
}
private SessionInMemory createSessionInMemory(Session session) {
SessionInMemory sessionInMemory = new SessionInMemory();
sessionInMemory.setCreateTime(new Date());
sessionInMemory.setSession(session);
return sessionInMemory;
}
private void initSessionsInThread() {
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
if (sessionMap == null) {
sessionMap = new HashMap<Serializable, SessionInMemory>();
sessionsInThread.set(sessionMap);
}
}
private void removeExpiredSessionInMemory() {
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
if (sessionMap == null) {
return;
}
Iterator<Serializable> it = sessionMap.keySet().iterator();
while (it.hasNext()) {
Serializable sessionId = it.next();
SessionInMemory sessionInMemory = sessionMap.get(sessionId);
if (sessionInMemory == null) {
it.remove();
continue;
}
long liveTime = getSessionInMemoryLiveTime(sessionInMemory);
if (liveTime > sessionInMemoryTimeout) {
it.remove();
}
}
if (sessionMap.size() == 0) {
sessionsInThread.remove();
}
}
private Session getSessionFromThreadLocal(Serializable sessionId) {
if (sessionsInThread.get() == null) {
return null;
}
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
SessionInMemory sessionInMemory = sessionMap.get(sessionId);
if (sessionInMemory == null) {
return null;
}
logger.debug("read session from memory");
return sessionInMemory.getSession();
}
private long getSessionInMemoryLiveTime(SessionInMemory sessionInMemory) {
Date now = new Date();
return now.getTime() - sessionInMemory.getCreateTime().getTime();
}
private String getRedisSessionKey(Serializable sessionId) {
return this.keyPrefix + sessionId;
}
public IRedisManager getRedisManager() {
return redisManager;
}
public void setRedisManager(IRedisManager redisManager) {
this.redisManager = redisManager;
}
public String getKeyPrefix() {
return keyPrefix;
}
public void setKeyPrefix(String keyPrefix) {
this.keyPrefix = keyPrefix;
}
public RedisSerializer getKeySerializer() {
return keySerializer;
}
public void setKeySerializer(RedisSerializer keySerializer) {
this.keySerializer = keySerializer;
}
public RedisSerializer getValueSerializer() {
return valueSerializer;
}
public void setValueSerializer(RedisSerializer valueSerializer) {
this.valueSerializer = valueSerializer;
}
public long getSessionInMemoryTimeout() {
return sessionInMemoryTimeout;
}
public void setSessionInMemoryTimeout(long sessionInMemoryTimeout) {
this.sessionInMemoryTimeout = sessionInMemoryTimeout;
}
public int getExpire() {
return expire;
}
public void setExpire(int expire) {
this.expire = expire;
}
public boolean getSessionInMemoryEnabled() {
return sessionInMemoryEnabled;
}
public void setSessionInMemoryEnabled(boolean sessionInMemoryEnabled) {
this.sessionInMemoryEnabled = sessionInMemoryEnabled;
}
public static ThreadLocal getSessionsInThread() {
return sessionsInThread;
}
}
<MSG> Add sentinel support
<DFF> @@ -16,7 +16,7 @@ public class RedisSessionDAO extends AbstractSessionDAO {
private static Logger logger = LoggerFactory.getLogger(RedisSessionDAO.class);
private static final String DEFAULT_SESSION_KEY_PREFIX = "shiro:session:";
- private RedisManager redisManager;
+ private IRedisManager redisManager;
private String keyPrefix = DEFAULT_SESSION_KEY_PREFIX;
private RedisSerializer keySerializer = new StringSerializer();
private RedisSerializer valueSerializer = new ObjectSerializer();
@@ -108,7 +108,7 @@ public class RedisSessionDAO extends AbstractSessionDAO {
logger.debug("read session from redis");
try {
s = (Session)valueSerializer.deserialize(redisManager.get(keySerializer.serialize(getRedisSessionKey(sessionId))));
- // threadLocalSession.set(s);
+ threadLocalSession.set(s);
} catch (SerializationException e) {
logger.error("read session error. settionId=" + sessionId);
}
@@ -119,11 +119,11 @@ public class RedisSessionDAO extends AbstractSessionDAO {
return this.keyPrefix + sessionId;
}
- public RedisManager getRedisManager() {
+ public IRedisManager getRedisManager() {
return redisManager;
}
- public void setRedisManager(RedisManager redisManager) {
+ public void setRedisManager(IRedisManager redisManager) {
this.redisManager = redisManager;
}
| 4 | Add sentinel support | 4 | .java | java | mit | alexxiyang/shiro-redis |
1991 | <NME> RedisSessionDAOTest.java
<BEF> package org.crazycake.shiro;
import org.apache.shiro.session.InvalidSessionException;
import org.apache.shiro.session.Session;
import org.crazycake.shiro.exception.SerializationException;
import org.crazycake.shiro.model.FakeSession;
import org.crazycake.shiro.serializer.StringSerializer;
import org.junit.Before;
import org.junit.Test;
import java.util.Collection;
import java.util.Date;
import java.util.HashSet;
import java.util.Set;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.CoreMatchers.*;
public class RedisSessionDAOTest {
private IRedisManager redisManager;
private StringSerializer keySerializer = new StringSerializer();
private ObjectSerializer valueSerializer = new ObjectSerializer();
@BeforeEach
public void setUp() {
redisManager = mock(IRedisManager.class);
}
private RedisSessionDAO mountRedisSessionDAO(Integer expire) {
RedisSessionDAO redisSessionDAO = new RedisSessionDAO();
if (expire != null) {
redisSessionDAO.setExpire(expire);
}
redisSessionDAO.setKeyPrefix("student:");
redisSessionDAO.setRedisManager(redisManager);
return redisSessionDAO;
}
@Test
scaffold();
}
@Test
public void testDoCreateNull() {
try {
sessionDAO.update(session);
verify(redisManager).set(keySerializer.serialize("student:98"), valueSerializer.serialize(session), 3);
}
@Test
public void testUpdateByNoExpire() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(-1);
StudentSession session = new StudentSession(97, 2000);
sessionDAO.update(session);
verify(redisManager).set(keySerializer.serialize("student:97"), valueSerializer.serialize(session), -1);
}
@Test
public void testDelete() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(null);
StudentSession session = new StudentSession(96, 1000);
sessionDAO.delete(session);
verify(redisManager).del(keySerializer.serialize("student:96"));
}
@Test
public void testGetActiveSessions() throws SerializationException {
Set<byte[]> mockKeys = new HashSet<byte[]>();
mockKeys.add(keySerializer.serialize("student:1"));
mockKeys.add(keySerializer.serialize("student:2"));
when(redisManager.keys(keySerializer.serialize("student:*"))).thenReturn(mockKeys);
StudentSession mockSession1 = new StudentSession(1, 2000);
StudentSession mockSession2 = new StudentSession(2, 2000);
when(redisManager.get(keySerializer.serialize("student:1"))).thenReturn(valueSerializer.serialize(mockSession1));
when(redisManager.get(keySerializer.serialize("student:2"))).thenReturn(valueSerializer.serialize(mockSession2));
RedisSessionDAO sessionDAO = mountRedisSessionDAO(null);
assertThat(sessionDAO.getActiveSessions().size(), is(2));
}
}
class StudentSession implements Session, Serializable {
private Integer id;
private long timeout;
public StudentSession(Integer id, long timeout) {
this.id = id;
this.timeout = timeout;
}
@Override
public Serializable getId() {
return id;
}
@Override
public Date getStartTimestamp() {
return null;
}
@Override
public Date getLastAccessTime() {
return null;
}
@Override
public long getTimeout() throws InvalidSessionException {
return timeout;
}
@Override
public void setTimeout(long l) throws InvalidSessionException {
}
@Override
public String getHost() {
return null;
}
@Override
public void touch() throws InvalidSessionException {
}
@Override
public void stop() throws InvalidSessionException {
}
@Override
public Collection<Object> getAttributeKeys() throws InvalidSessionException {
return null;
}
@Override
public Object getAttribute(Object o) throws InvalidSessionException {
return null;
}
@Override
public void setAttribute(Object o, Object o1) throws InvalidSessionException {
}
@Override
public Object removeAttribute(Object o) throws InvalidSessionException {
return null;
}
}
<MSG> Add tearDown to testCase
<DFF> @@ -5,6 +5,7 @@ import org.apache.shiro.session.UnknownSessionException;
import org.crazycake.shiro.exception.SerializationException;
import org.crazycake.shiro.model.FakeSession;
import org.crazycake.shiro.serializer.StringSerializer;
+import org.junit.After;
import org.junit.Before;
import org.junit.Test;
@@ -43,6 +44,11 @@ public class RedisSessionDAOTest {
scaffold();
}
+ @After
+ public void tearDown() {
+ blast();
+ }
+
@Test
public void testDoCreateNull() {
try {
| 6 | Add tearDown to testCase | 0 | .java | java | mit | alexxiyang/shiro-redis |
1992 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework

-----
## Deprecated. See [seetaresearch/Dragon](http://github.com/seetaresearch/Dragon).
7. Setup MPI [Optional]
#### Linux:
- We use OpenMPI which support "cuda-aware-mpi"
- See more:
- https://devblogs.nvidia.com/parallelforall/introduction-cuda-aware-mpi/
- https://www.open-mpi.org/faq/?category=buildcuda
<MSG> soft-target support for softmax crossentropy
<DFF> @@ -66,7 +66,7 @@
7. Setup MPI [Optional]
#### Linux:
- - We use OpenMPI which support "cuda-aware-mpi"
+ - We use OpenMPI which supports "cuda-aware-mpi"
- See more:
- https://devblogs.nvidia.com/parallelforall/introduction-cuda-aware-mpi/
- https://www.open-mpi.org/faq/?category=buildcuda
| 1 | soft-target support for softmax crossentropy | 1 | .md | md | bsd-2-clause | neopenx/Dragon |
1993 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework

-----
## Deprecated. See [seetaresearch/Dragon](http://github.com/seetaresearch/Dragon).
7. Setup MPI [Optional]
#### Linux:
- We use OpenMPI which support "cuda-aware-mpi"
- See more:
- https://devblogs.nvidia.com/parallelforall/introduction-cuda-aware-mpi/
- https://www.open-mpi.org/faq/?category=buildcuda
<MSG> soft-target support for softmax crossentropy
<DFF> @@ -66,7 +66,7 @@
7. Setup MPI [Optional]
#### Linux:
- - We use OpenMPI which support "cuda-aware-mpi"
+ - We use OpenMPI which supports "cuda-aware-mpi"
- See more:
- https://devblogs.nvidia.com/parallelforall/introduction-cuda-aware-mpi/
- https://www.open-mpi.org/faq/?category=buildcuda
| 1 | soft-target support for softmax crossentropy | 1 | .md | md | bsd-2-clause | neopenx/Dragon |
1994 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
<version>2.4.2-RELEASE</version>
</dependency>
```xml
Edit shiro.ini
```properties
#redisManager
redisManager = org.crazycake.shiro.RedisManager
#optional if you don't specify host the default value is 127.0.0.1
redisManager.host = 127.0.0.1
#optional , default value: 6379
redisManager.port = 6379
#optional, default value:0 .The expire time is in second
redisManager.expire = 30
#============redisSessionDAO=============
redisSessionDAO = org.crazycake.shiro.RedisSessionDAO
redisSessionDAO.redisManager = $redisManager
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
sessionManager.sessionDAO = $redisSessionDAO
securityManager.sessionManager = $sessionManager
#============redisCacheManager===========
cacheManager = org.crazycake.shiro.RedisCacheManager
cacheManager.redisManager = $redisManager
#custom your redis key prefix, if you doesn't define this parameter shiro-redis will use 'shiro_redis_session:' as default prefix
shiroCacheManager.keyPrefix = users:security:authz:
securityManager.cacheManager = $cacheManager
```
If you found any bugs
===========
Please send email to [email protected]
<MSG> Update README.md
update readme
<DFF> @@ -18,36 +18,3 @@ Download shiro-redis.jar in bin folder and add it into your classpath.
<version>2.4.2-RELEASE</version>
</dependency>
```xml
-
-Edit shiro.ini
-
-```properties
-#redisManager
-redisManager = org.crazycake.shiro.RedisManager
-#optional if you don't specify host the default value is 127.0.0.1
-redisManager.host = 127.0.0.1
-#optional , default value: 6379
-redisManager.port = 6379
-#optional, default value:0 .The expire time is in second
-redisManager.expire = 30
-
-#============redisSessionDAO=============
-redisSessionDAO = org.crazycake.shiro.RedisSessionDAO
-redisSessionDAO.redisManager = $redisManager
-sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
-sessionManager.sessionDAO = $redisSessionDAO
-securityManager.sessionManager = $sessionManager
-
-#============redisCacheManager===========
-cacheManager = org.crazycake.shiro.RedisCacheManager
-cacheManager.redisManager = $redisManager
-#custom your redis key prefix, if you doesn't define this parameter shiro-redis will use 'shiro_redis_session:' as default prefix
-shiroCacheManager.keyPrefix = users:security:authz:
-securityManager.cacheManager = $cacheManager
-```
-
-
-If you found any bugs
-===========
-
-Please send email to [email protected]
| 0 | Update README.md | 33 | .md | md | mit | alexxiyang/shiro-redis |
1995 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
<dependency>
<groupId>org.crazycake</groupId>
<artifactId>shiro-redis</artifactId>
<version>3.1.0</version>
</dependency>
```
<MSG> Merge remote-tracking branch 'origin/master'
<DFF> @@ -16,7 +16,7 @@ You can choose these 2 ways to include shiro-redis into your project
<dependency>
<groupId>org.crazycake</groupId>
<artifactId>shiro-redis</artifactId>
- <version>3.1.0</version>
+ <version>3.2.0</version>
</dependency>
```
| 1 | Merge remote-tracking branch 'origin/master' | 1 | .md | md | mit | alexxiyang/shiro-redis |
1996 | <NME> ObjectSerializer.java
<BEF> package org.crazycake.shiro.serializer;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import org.crazycake.shiro.exception.SerializationException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class ObjectSerializer implements RedisSerializer<Object> {
private static Logger log = LoggerFactory.getLogger(ObjectSerializer.class);
public static final int BYTE_ARRAY_OUTPUT_STREAM_SIZE = 128;
@Override
public byte[] serialize(Object object) throws SerializationException {
byte[] result = new byte[0];
if (object == null) {
return result;
}
ByteArrayOutputStream byteStream = new ByteArrayOutputStream(BYTE_ARRAY_OUTPUT_STREAM_SIZE);
if (!(object instanceof Serializable)) {
throw new SerializationException("requires a Serializable payload "
+ "but received an object of type [" + object.getClass().getName() + "]");
}
try {
ObjectOutputStream objectOutputStream = new ObjectOutputStream(byteStream);
objectOutputStream.writeObject(object);
objectOutputStream.flush();
result = byteStream.toByteArray();
} catch (IOException e) {
throw new SerializationException("serialize error, object=" + object, e);
}
return result;
}
@Override
public Object deserialize(byte[] bytes) throws SerializationException {
Object result = null;
if (bytes == null || bytes.length == 0) {
return result;
}
try {
ByteArrayInputStream byteStream = new ByteArrayInputStream(bytes);
ObjectInputStream objectInputStream = new MultiClassLoaderObjectInputStream(byteStream);
result = objectInputStream.readObject();
} catch (IOException e) {
throw new SerializationException("deserialize error", e);
} catch (ClassNotFoundException e) {
throw new SerializationException("deserialize error", e);
}
return result;
}
}
<MSG> Fix https://github.com/alexxiyang/shiro-redis/issues/84. Load class from multiple levels
<DFF> @@ -8,11 +8,8 @@ import java.io.ObjectOutputStream;
import java.io.Serializable;
import org.crazycake.shiro.exception.SerializationException;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
public class ObjectSerializer implements RedisSerializer<Object> {
- private static Logger log = LoggerFactory.getLogger(ObjectSerializer.class);
public static final int BYTE_ARRAY_OUTPUT_STREAM_SIZE = 128;
@@ -50,7 +47,7 @@ public class ObjectSerializer implements RedisSerializer<Object> {
try {
ByteArrayInputStream byteStream = new ByteArrayInputStream(bytes);
- ObjectInputStream objectInputStream = new MultiClassLoaderObjectInputStream(byteStream);
+ ObjectInputStream objectInputStream = new MultiClassLoaderObjectInputStream(byteStream);
result = objectInputStream.readObject();
} catch (IOException e) {
throw new SerializationException("deserialize error", e);
| 1 | Fix https://github.com/alexxiyang/shiro-redis/issues/84. Load class from multiple levels | 4 | .java | java | mit | alexxiyang/shiro-redis |
1997 | <NME> MultiClassLoaderObjectInputStream.java
<BEF> package org.crazycake.shiro.serializer;
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* For fixing https://github.com/alexxiyang/shiro-redis/issues/84
*/
public class MultiClassLoaderObjectInputStream extends ObjectInputStream {
private static Logger logger = LoggerFactory.getLogger(MultiClassLoaderObjectInputStream.class);
MultiClassLoaderObjectInputStream(InputStream str) throws IOException {
super(str);
}
/**
* Try :
try {
ClassLoader cl = Thread.currentThread().getContextClassLoader();
return Class.forName(name, false, cl);
}
catch (Throwable ex) {
log.debug(ex.getMessage());
// Cannot access thread context ClassLoader - falling back...
}
*/
@Override
// No thread context class loader -> use class loader of this class.
ClassLoader cl = MultiClassLoaderObjectInputStream.class.getClassLoader();
return Class.forName(name, false, cl);
}
catch (Throwable ex) {
log.debug(ex.getMessage());
// Cannot access thread context ClassLoader - falling back...
}
logger.debug("Cannot access thread context ClassLoader!", ex);
}
try {
ClassLoader cl = ClassLoader.getSystemClassLoader();
return Class.forName(name, false, cl);
}
catch (Throwable ex) {
log.debug(ex.getMessage());
// Cannot access system ClassLoader - oh well, maybe the caller can live with null...
}
try {
ClassLoader cl = ClassLoader.getSystemClassLoader();
return Class.forName(name, false, cl);
} catch (Throwable ex) {
logger.debug("Cannot access system ClassLoader", ex);
}
return super.resolveClass(desc);
}
}
<MSG> format code
<DFF> @@ -23,8 +23,7 @@ public class MultiClassLoaderObjectInputStream extends ObjectInputStream {
try {
ClassLoader cl = Thread.currentThread().getContextClassLoader();
return Class.forName(name, false, cl);
- }
- catch (Throwable ex) {
+ } catch (Throwable ex) {
log.debug(ex.getMessage());
// Cannot access thread context ClassLoader - falling back...
}
@@ -33,8 +32,7 @@ public class MultiClassLoaderObjectInputStream extends ObjectInputStream {
// No thread context class loader -> use class loader of this class.
ClassLoader cl = MultiClassLoaderObjectInputStream.class.getClassLoader();
return Class.forName(name, false, cl);
- }
- catch (Throwable ex) {
+ } catch (Throwable ex) {
log.debug(ex.getMessage());
// Cannot access thread context ClassLoader - falling back...
}
@@ -43,8 +41,7 @@ public class MultiClassLoaderObjectInputStream extends ObjectInputStream {
try {
ClassLoader cl = ClassLoader.getSystemClassLoader();
return Class.forName(name, false, cl);
- }
- catch (Throwable ex) {
+ } catch (Throwable ex) {
log.debug(ex.getMessage());
// Cannot access system ClassLoader - oh well, maybe the caller can live with null...
}
| 3 | format code | 6 | .java | java | mit | alexxiyang/shiro-redis |
1998 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.1",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
if version_info[0] == 3:
if version_info[1] < 3:
stderr.write("The hitch bootstrapper will not run on python 3.0.x, 3.1.x or 3.2.x.\n")
exit(1)
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> BUG : hitch clean command was broken and -r switch should not have been present (yet).
<DFF> @@ -13,7 +13,7 @@ def read(*parts):
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
- version="0.1",
+ version="0.2",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
| 1 | BUG : hitch clean command was broken and -r switch should not have been present (yet). | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1999 | <NME> RedisSentinelManager.java
<BEF> package org.crazycake.shiro;
import org.crazycake.shiro.common.WorkAloneRedisManager;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;
import redis.clients.jedis.Protocol;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
public class RedisSentinelManager extends WorkAloneRedisManager implements IRedisManager {
private static final String DEFAULT_HOST = "127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381";
private String host = DEFAULT_HOST;
private static final String DEFAULT_MASTER_NAME = "mymaster";
private String masterName = DEFAULT_MASTER_NAME;
// timeout for jedis try to connect to redis server, not expire time! In milliseconds
private int timeout = Protocol.DEFAULT_TIMEOUT;
// timeout for jedis try to read data from redis server
private int soTimeout = Protocol.DEFAULT_TIMEOUT;
private String password;
private int database = Protocol.DEFAULT_DATABASE;
private volatile JedisSentinelPool jedisPool;
@Override
protected Jedis getJedis() {
if (jedisPool == null) {
init();
}
return jedisPool.getResource();
}
private void init() {
if (jedisPool == null) {
synchronized (RedisSentinelManager.class) {
String[] sentinelHosts = host.split(",\\s*");
Set<String> sentinels = new HashSet<String>();
Collections.addAll(sentinels, sentinelHosts);
jedisPool = new JedisSentinelPool(masterName, sentinels, new JedisPoolConfig(), timeout, soTimeout, password, database);
}
}
}
}
}
public String getHost() {
return host;
}
this.host = host;
}
public int getTimeout() {
return timeout;
}
return timeout;
}
public void setTimeout(int timeout) {
this.timeout = timeout;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public int getDatabase() {
return database;
}
public void setDatabase(int database) {
this.database = database;
}
public String getMasterName() {
return masterName;
}
public void setMasterName(String masterName) {
this.masterName = masterName;
}
public int getSoTimeout() {
return soTimeout;
}
public void setSoTimeout(int soTimeout) {
this.soTimeout = soTimeout;
}
public JedisSentinelPool getJedisPool() {
return jedisPool;
}
public void setJedisPool(JedisSentinelPool jedisPool) {
this.jedisPool = jedisPool;
}
}
<MSG> refactor
<DFF> @@ -43,7 +43,7 @@ public class RedisSentinelManager extends BaseRedisManager implements IRedisMana
String[] sentinelHosts = host.split(",\\s*");
Set<String> sentinels = new HashSet<String>();
Collections.addAll(sentinels, sentinelHosts);
- jedisPool = new JedisSentinelPool(masterName, sentinels, new JedisPoolConfig(), timeout, soTimeout, password, database);
+ jedisPool = new JedisSentinelPool(masterName, sentinels, jedisPoolConfig, timeout, soTimeout, password, database);
}
}
}
@@ -56,8 +56,6 @@ public class RedisSentinelManager extends BaseRedisManager implements IRedisMana
this.host = host;
}
-
-
public int getTimeout() {
return timeout;
}
| 1 | refactor | 3 | .java | java | mit | alexxiyang/shiro-redis |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.