Dataset Viewer
Auto-converted to Parquet

| Column | Type | Value range |
| --- | --- | --- |
| repo | string | 9 to 44 chars |
| pull_number | int64 | 1 to 7.98k |
| instance_id | string | 14 to 49 chars |
| issue_numbers | string | 5 to 24 chars |
| base_commit | string | 40 chars |
| patch | string | 187 to 846k chars |
| test_patch | string | 215 to 744k chars |
| problem_statement | string | 11 to 58.9k chars |
| hints_text | string | 0 to 240k chars |
| created_at | timestamp[us] | 2013-01-13 16:37:56 to 2025-03-31 06:00:03 |
| version | string | 1 distinct value |

repo: neuralmagic/guidellm
pull_number: 39
instance_id: neuralmagic__guidellm-39
issue_numbers: ['36']
base_commit: 912ec77d10c366c3548a386145a233b129ecc67c
patch:
diff --git a/README.md b/README.md index 7893333..8b06f1c 100644 --- a/README.md +++ b/README.md @@ -9,7 +9,7 @@ Scale Efficiently: Evaluate and Optimize Your LLM Deployments for Real-World Inference Needs </h3> -[![GitHub Release](https://img.shields.io/github/release/neuralmagic/guidellm.svg?label=Version)](https://github.com/neuralmagic/guidellm/releases) [![Documentation](https://img.shields.io/badge/Documentation-8A2BE2?logo=read-the-docs&logoColor=%23ffffff&color=%231BC070)](https://github.com/neuralmagic/guidellm/tree/main/docs) [![License](https://img.shields.io/github/license/neuralmagic/guidellm.svg)](https://github.com/neuralmagic/guidellm/blob/main/LICENSE) [![PyPi Release](https://img.shields.io/pypi/v/guidellm.svg?label=PyPi%20Release)](https://pypi.python.org/pypi/guidellm) [![Pypi Release](https://img.shields.io/pypi/v/guidellm-nightly.svg?label=PyPi%20Nightly)](https://pypi.python.org/pypi/guidellm-nightly) [![Python Versions](https://img.shields.io/pypi/pyversions/guidellm.svg?label=Python)](https://pypi.python.org/pypi/guidellm) [![Nightly Build](https://img.shields.io/github/actions/workflow/status/neuralmagic/guidellm/nightly.yml?branch=main&label=Nightly%20Build)](https://github.com/neuralmagic/guidellm/actions/workflows/nightly.yml) +[![GitHub Release](https://img.shields.io/github/release/neuralmagic/guidellm.svg?label=Version)](https://github.com/neuralmagic/guidellm/releases) [![Documentation](https://img.shields.io/badge/Documentation-8A2BE2?logo=read-the-docs&logoColor=%23ffffff&color=%231BC070)](https://github.com/neuralmagic/guidellm/tree/main/docs) [![License](https://img.shields.io/github/license/neuralmagic/guidellm.svg)](https://github.com/neuralmagic/guidellm/blob/main/LICENSE) [![PyPI Release](https://img.shields.io/pypi/v/guidellm.svg?label=PyPI%20Release)](https://pypi.python.org/pypi/guidellm) [![Pypi Release](https://img.shields.io/pypi/v/guidellm-nightly.svg?label=PyPI%20Nightly)](https://pypi.python.org/pypi/guidellm-nightly) [![Python Versions](https://img.shields.io/pypi/pyversions/guidellm.svg?label=Python)](https://pypi.python.org/pypi/guidellm) [![Nightly Build](https://img.shields.io/github/actions/workflow/status/neuralmagic/guidellm/nightly.yml?branch=main&label=Nightly%20Build)](https://github.com/neuralmagic/guidellm/actions/workflows/nightly.yml) ## Overview @@ -65,10 +65,12 @@ To run a GuideLLM evaluation, use the `guidellm` command with the appropriate mo ```bash guidellm \ --target "http://localhost:8000/v1" \ - --model "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16" + --model "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16" \ + --data-type emulated \ + --data "prompt_tokens=512,generated_tokens=128" ``` -The above command will begin the evaluation and output progress updates similar to the following: <img src="https://github.com/neuralmagic/guidellm/blob/main/docs/assets/sample-benchmark.gif" /> +The above command will begin the evaluation and output progress updates similar to the following (if running on a different server, be sure to update the target!): <img src="https://github.com/neuralmagic/guidellm/blob/main/docs/assets/sample-benchmark.gif" /> Notes: @@ -88,17 +90,39 @@ The end of the output will include important performance summary metrics such as <img alt="Sample GuideLLM benchmark end output" src="https://github.com/neuralmagic/guidellm/blob/main/docs/assets/sample-output-end.png" /> -### Advanced Settings +### Configurations -GuideLLM provides various options to customize evaluations, including setting 
the duration of each benchmark run, the number of concurrent requests, and the request rate. For a complete list of options and advanced settings, see the [GuideLLM CLI Documentation](https://github.com/neuralmagic/guidellm/blob/main/docs/guides/cli.md). +GuideLLM provides various CLI and environment options to customize evaluations, including setting the duration of each benchmark run, the number of concurrent requests, and the request rate. -Some common advanced settings include: +Some common configurations for the CLI include: -- `--rate-type`: The rate to use for benchmarking. Options include `sweep` (shown above), `synchronous` (one request at a time), `throughput` (all requests at once), `constant` (a constant rate defined by `--rate`), and `poisson` (a poisson distribution rate defined by `--rate`). -- `--data-type`: The data to use for the benchmark. Options include `emulated` (default shown above, emulated to match a given prompt and output length), `transformers` (a transformers dataset), and `file` (a {text, json, jsonl, csv} file with a list of prompts). +- `--rate-type`: The rate to use for benchmarking. Options include `sweep`, `synchronous`, `throughput`, `constant`, and `poisson`. + - `--rate-type sweep`: (default) Sweep runs through the full range of performance for the server. Starting with a `synchronous` rate first, then `throughput`, and finally 10 `constant` rates between the min and max request rate found. + - `--rate-type synchronous`: Synchronous runs requests in a synchronous manner, one after the other. + - `--rate-type throughput`: Throughput runs requests in a throughput manner, sending requests as fast as possible. + - `--rate-type constant`: Constant runs requests at a constant rate. Specify the rate in requests per second with the `--rate` argument. For example, `--rate 10` or multiple rates with `--rate 10 --rate 20 --rate 30`. + - `--rate-type poisson`: Poisson draws from a poisson distribution with the mean at the specified rate, adding some real-world variance to the runs. Specify the rate in requests per second with the `--rate` argument. For example, `--rate 10` or multiple rates with `--rate 10 --rate 20 --rate 30`. +- `--data-type`: The data to use for the benchmark. Options include `emulated`, `transformers`, and `file`. + - `--data-type emulated`: Emulated supports an EmulationConfig in string or file format for the `--data` argument to generate fake data. Specify the number of prompt tokens at a minimum and optionally the number of output tokens and other params for variance in the length. For example, `--data "prompt_tokens=128"`, `--data "prompt_tokens=128,generated_tokens=128"`, or `--data "prompt_tokens=128,prompt_tokens_variance=10"`. + - `--data-type file`: File supports a file path or URL to a file for the `--data` argument. The file should contain data encoded as a CSV, JSONL, TXT, or JSON/YAML file with a single prompt per line for CSV, JSONL, and TXT or a list of prompts for JSON/YAML. For example, `--data "data.txt"` where data.txt contents are `"prompt1\nprompt2\nprompt3"`. + - `--data-type transformers`: Transformers supports a dataset name or dataset file path for the `--data` argument. For example, `--data "neuralmagic/LLM_compression_calibration"`. - `--max-seconds`: The maximum number of seconds to run each benchmark. The default is 120 seconds. - `--max-requests`: The maximum number of requests to run in each benchmark. 
+For a full list of supported CLI arguments, run the following command: + +```bash +guidellm --help +``` + +For a full list of configuration options, run the following command: + +```bash +guidellm-config +``` + +For further information, see the [GuideLLM Documentation](#Documentation). + ## Resources ### Documentation @@ -109,7 +133,7 @@ Our comprehensive documentation provides detailed guides and resources to help y - [**Installation Guide**](https://github.com/neuralmagic/guidellm/tree/main/docs/install.md) - Step-by-step instructions to install GuideLLM, including prerequisites and setup tips. - [**Architecture Overview**](https://github.com/neuralmagic/guidellm/tree/main/docs/architecture.md) - A detailed look at GuideLLM's design, components, and how they interact. -- [**CLI Guide**](https://github.com/neuralmagic/guidellm/tree/main/docs/guides/cli_usage.md) - Comprehensive usage information for running GuideLLM via the command line, including available commands and options. +- [**CLI Guide**](https://github.com/neuralmagic/guidellm/tree/main/docs/guides/cli.md) - Comprehensive usage information for running GuideLLM via the command line, including available commands and options. - [**Configuration Guide**](https://github.com/neuralmagic/guidellm/tree/main/docs/guides/configuration.md) - Instructions on configuring GuideLLM to suit various deployment needs and performance goals. ### Supporting External Documentation diff --git a/docs/assets/sample-benchmark.gif b/docs/assets/sample-benchmark.gif deleted file mode 100644 index f0c8614..0000000 Binary files a/docs/assets/sample-benchmark.gif and /dev/null differ diff --git a/docs/assets/sample-benchmarks.gif b/docs/assets/sample-benchmarks.gif new file mode 100644 index 0000000..160763b Binary files /dev/null and b/docs/assets/sample-benchmarks.gif differ diff --git a/pyproject.toml b/pyproject.toml index dcd1496..db2e65a 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -75,6 +75,7 @@ dev = [ [project.entry-points.console_scripts] guidellm = "guidellm.main:generate_benchmark_report_cli" +guidellm-config = "guidellm.config:print_config" # ************************************************ diff --git a/src/guidellm/backend/base.py b/src/guidellm/backend/base.py index c2432b2..d71c5f6 100644 --- a/src/guidellm/backend/base.py +++ b/src/guidellm/backend/base.py @@ -1,9 +1,14 @@ +import asyncio import functools from abc import ABC, abstractmethod from typing import AsyncGenerator, Dict, List, Literal, Optional, Type, Union from loguru import logger from pydantic import BaseModel +from transformers import ( # type: ignore # noqa: PGH003 + AutoTokenizer, + PreTrainedTokenizer, +) from guidellm.core import TextGenerationRequest, TextGenerationResult @@ -103,10 +108,21 @@ def create(cls, backend_type: BackendEngine, **kwargs) -> "Backend": return Backend._registry[backend_type](**kwargs) def __init__(self, type_: BackendEngine, target: str, model: str): + """ + Base constructor for the Backend class. + Calls into test_connection to ensure the backend is reachable. + Ensure all setup is done in the subclass constructor before calling super. + + :param type_: The type of the backend. + :param target: The target URL for the backend. + :param model: The model used by the backend. 
+ """ self._type = type_ self._target = target self._model = model + self.test_connection() + @property def default_model(self) -> str: """ @@ -148,6 +164,48 @@ def model(self) -> str: """ return self._model + def model_tokenizer(self) -> PreTrainedTokenizer: + """ + Get the tokenizer for the backend model. + + :return: The tokenizer instance. + """ + return AutoTokenizer.from_pretrained(self.model) + + def test_connection(self) -> bool: + """ + Test the connection to the backend by running a short text generation request. + If successful, returns True, otherwise raises an exception. + + :return: True if the connection is successful. + :rtype: bool + :raises ValueError: If the connection test fails. + """ + try: + asyncio.get_running_loop() + is_async = True + except RuntimeError: + is_async = False + + if is_async: + logger.warning("Running in async mode, cannot test connection") + return True + + try: + request = TextGenerationRequest( + prompt="Test connection", output_token_count=5 + ) + + asyncio.run(self.submit(request)) + return True + except Exception as err: + raise_err = RuntimeError( + f"Backend connection test failed for backend type={self.type_} " + f"with target={self.target} and model={self.model} with error: {err}" + ) + logger.error(raise_err) + raise raise_err from err + async def submit(self, request: TextGenerationRequest) -> TextGenerationResult: """ Submit a text generation request and return the result. diff --git a/src/guidellm/config.py b/src/guidellm/config.py index 8d16cd6..c3d950e 100644 --- a/src/guidellm/config.py +++ b/src/guidellm/config.py @@ -1,5 +1,6 @@ +import json from enum import Enum -from typing import Dict, List, Optional +from typing import Dict, List, Optional, Sequence from pydantic import BaseModel, Field, model_validator from pydantic_settings import BaseSettings, SettingsConfigDict @@ -10,6 +11,7 @@ "Environment", "LoggingSettings", "OpenAISettings", + "print_config", "ReportGenerationSettings", "Settings", "reload_settings", @@ -70,7 +72,6 @@ class DatasetSettings(BaseModel): preferred_data_splits: List[str] = Field( default_factory=lambda: ["test", "tst", "validation", "val", "train"] ) - default_tokenizer: str = "neuralmagic/Meta-Llama-3.1-8B-FP8" class EmulatedDataSettings(BaseModel): @@ -163,6 +164,53 @@ def set_default_source(cls, values): return values + def generate_env_file(self) -> str: + """ + Generate the .env file from the current settings + """ + return Settings._recursive_generate_env( + self, + self.model_config["env_prefix"], # type: ignore # noqa: PGH003 + self.model_config["env_nested_delimiter"], # type: ignore # noqa: PGH003 + ) + + @staticmethod + def _recursive_generate_env(model: BaseModel, prefix: str, delimiter: str) -> str: + env_file = "" + add_models = [] + for key, value in model.model_dump().items(): + if isinstance(value, BaseModel): + # add nested properties to be processed after the current level + add_models.append((key, value)) + continue + + dict_values = ( + { + f"{prefix}{key.upper()}{delimiter}{sub_key.upper()}": sub_value + for sub_key, sub_value in value.items() + } + if isinstance(value, dict) + else {f"{prefix}{key.upper()}": value} + ) + + for tag, sub_value in dict_values.items(): + if isinstance(sub_value, Sequence) and not isinstance(sub_value, str): + value_str = ",".join(f'"{item}"' for item in sub_value) + env_file += f"{tag}=[{value_str}]\n" + elif isinstance(sub_value, Dict): + value_str = json.dumps(sub_value) + env_file += f"{tag}={value_str}\n" + elif not sub_value: + env_file += 
f"{tag}=\n" + else: + env_file += f'{tag}="{sub_value}"\n' + + for key, value in add_models: + env_file += Settings._recursive_generate_env( + value, f"{prefix}{key.upper()}{delimiter}", delimiter + ) + return env_file + settings = Settings() @@ -173,3 +221,14 @@ def reload_settings(): """ new_settings = Settings() settings.__dict__.update(new_settings.__dict__) + + +def print_config(): + """ + Print the current configuration settings + """ + print(f"Settings: \n{settings.generate_env_file()}") # noqa: T201 + + +if __name__ == "__main__": + print_config() diff --git a/src/guidellm/core/request.py b/src/guidellm/core/request.py index 133d12e..4f7315c 100644 --- a/src/guidellm/core/request.py +++ b/src/guidellm/core/request.py @@ -28,3 +28,17 @@ class TextGenerationRequest(Serializable): default_factory=dict, description="The parameters for the text generation request.", ) + + def __str__(self) -> str: + prompt_short = ( + self.prompt[:32] + "..." + if self.prompt and len(self.prompt) > 32 # noqa: PLR2004 + else self.prompt + ) + + return ( + f"TextGenerationRequest(id={self.id}, " + f"prompt={prompt_short}, prompt_token_count={self.prompt_token_count}, " + f"output_token_count={self.output_token_count}, " + f"params={self.params})" + ) diff --git a/src/guidellm/main.py b/src/guidellm/main.py index 2f7f9c4..c0bea83 100644 --- a/src/guidellm/main.py +++ b/src/guidellm/main.py @@ -22,6 +22,7 @@ @click.option( "--target", type=str, + required=True, help=( "The target path or url for the backend to evaluate. " "Ex: 'http://localhost:8000/v1'" @@ -33,8 +34,8 @@ default="openai_server", help=( "The backend to use for benchmarking. " - "The default is OpenAI Server enabling compatiability with any server that " - "follows the OpenAI spec including vLLM" + "The default is OpenAI Server enabling compatability with any server that " + "follows the OpenAI spec including vLLM." ), ) @click.option( @@ -49,18 +50,25 @@ @click.option( "--data", type=str, - default=None, + required=True, help=( - "The data source to use for benchmarking. Depending on the data-type, " - "it should be a path to a file, a dataset name, " - "or a configuration for emulated data." + "The data source to use for benchmarking. " + "Depending on the data-type, it should be a " + "path to a data file containing prompts to run (ex: data.txt), " + "a HuggingFace dataset name (ex: 'neuralmagic/LLM_compression_calibration'), " + "or a configuration for emulated data " + "(ex: 'prompt_tokens=128,generated_tokens=128')." ), ) @click.option( "--data-type", type=click.Choice(["emulated", "file", "transformers"]), - default="emulated", - help="The type of data given for benchmarking", + required=True, + help=( + "The type of data to use for benchmarking. " + "Use 'emulated' for synthetic data, 'file' for a file, or 'transformers' " + "for a HuggingFace dataset. Specify the data source with the --data flag." + ), ) @click.option( "--tokenizer", @@ -68,7 +76,10 @@ default=None, help=( "The tokenizer to use for calculating the number of prompt tokens. " - "If not provided, will use a Llama 3.1 tokenizer." + "This should match the tokenizer used by the model." + "By default, it will use the --model flag to determine the tokenizer. " + "If not provided and the model is not available, will raise an error. " + "Ex: 'neuralmagic/Meta-Llama-3.1-8B-quantized.w8a8'" ), ) @click.option( @@ -77,15 +88,21 @@ default="sweep", help=( "The type of request rate to use for benchmarking. 
" - "The default is sweep, which will run a synchronous and throughput " - "rate-type and then interfill with constant rates between the two values" + "Use sweep to run a full range from synchronous to throughput (default), " + "synchronous for sending requests one after the other, " + "throughput to send requests as fast as possible, " + "constant for a fixed request rate, " + "or poisson for a real-world variable request rate." ), ) @click.option( "--rate", type=float, default=None, - help="The request rate to use for constant and poisson rate types", + help=( + "The request rate to use for constant and poisson rate types. " + "To run multiple, provide the flag multiple times. " + ), multiple=True, ) @click.option( @@ -117,18 +134,20 @@ @click.option( "--output-path", type=str, - default="guidance_report.json", + default=None, help=( - "The output path to save the output report to. " - "The default is guidance_report.json." + "The output path to save the output report to for loading later. " + "Ex: guidance_report.json. " + "The default is None, meaning no output is saved and results are only " + "printed to the console." ), ) @click.option( - "--disable-continuous-refresh", + "--enable-continuous-refresh", is_flag=True, default=False, help=( - "Disable continual refreshing of the output table in the CLI " + "Enable continual refreshing of the output table in the CLI " "until the user exits. " ), ) @@ -144,7 +163,7 @@ def generate_benchmark_report_cli( max_seconds: Optional[int], max_requests: Optional[int], output_path: str, - disable_continuous_refresh: bool, + enable_continuous_refresh: bool, ): """ Generate a benchmark report for a specified backend and dataset. @@ -161,7 +180,7 @@ def generate_benchmark_report_cli( max_seconds=max_seconds, max_requests=max_requests, output_path=output_path, - cont_refresh_table=not disable_continuous_refresh, + cont_refresh_table=enable_continuous_refresh, ) @@ -177,7 +196,7 @@ def generate_benchmark_report( max_seconds: Optional[int], max_requests: Optional[int], output_path: str, - cont_refresh_table: bool = True, + cont_refresh_table: bool, ) -> GuidanceReport: """ Generate a benchmark report for a specified backend and dataset. 
@@ -213,14 +232,26 @@ def generate_benchmark_report( request_generator: RequestGenerator - # Create request generator + # Create tokenizer and request generator + tokenizer_inst = tokenizer + if not tokenizer_inst: + try: + tokenizer_inst = backend_inst.model_tokenizer() + except Exception as err: + raise ValueError( + "Could not load model's tokenizer, " + "--tokenizer must be provided for request generation" + ) from err + if data_type == "emulated": - request_generator = EmulatedRequestGenerator(config=data, tokenizer=tokenizer) + request_generator = EmulatedRequestGenerator( + config=data, tokenizer=tokenizer_inst + ) elif data_type == "file": - request_generator = FileRequestGenerator(path=data, tokenizer=tokenizer) + request_generator = FileRequestGenerator(path=data, tokenizer=tokenizer_inst) elif data_type == "transformers": request_generator = TransformersDatasetRequestGenerator( - dataset=data, tokenizer=tokenizer + dataset=data, tokenizer=tokenizer_inst ) else: raise ValueError(f"Unknown data type: {data_type}") @@ -252,8 +283,14 @@ def generate_benchmark_report( # Save and print report guidance_report = GuidanceReport() guidance_report.benchmarks.append(report) - guidance_report.save_file(output_path) - guidance_report.print(output_path, continual_refresh=cont_refresh_table) + + if output_path: + guidance_report.save_file(output_path) + + guidance_report.print( + save_path=output_path if output_path is not None else "stdout", + continual_refresh=cont_refresh_table, + ) return guidance_report diff --git a/src/guidellm/request/base.py b/src/guidellm/request/base.py index 52935b7..242bf89 100644 --- a/src/guidellm/request/base.py +++ b/src/guidellm/request/base.py @@ -3,7 +3,7 @@ import time from abc import ABC, abstractmethod from queue import Empty, Full, Queue -from typing import Iterator, Literal, Optional, Union +from typing import Iterator, Literal, Union from loguru import logger from transformers import ( # type: ignore # noqa: PGH003 @@ -11,7 +11,6 @@ PreTrainedTokenizer, ) -from guidellm.config import settings from guidellm.core.request import TextGenerationRequest __all__ = ["GenerationMode", "RequestGenerator"] @@ -41,7 +40,7 @@ def __init__( self, type_: str, source: str, - tokenizer: Optional[Union[str, PreTrainedTokenizer]] = None, + tokenizer: Union[str, PreTrainedTokenizer], mode: GenerationMode = "async", async_queue_size: int = 50, ): @@ -53,19 +52,16 @@ def __init__( self._stop_event: threading.Event = threading.Event() if not tokenizer: - self._tokenizer = AutoTokenizer.from_pretrained( - settings.dataset.default_tokenizer - ) - logger.info("Initialized fake tokenizer for request generation") - else: - self._tokenizer = ( - AutoTokenizer.from_pretrained(tokenizer) - if isinstance(tokenizer, str) - else tokenizer - ) - logger.info( - "Tokenizer initialized for request generation: {}", self._tokenizer - ) + err = "Tokenizer must be provided for request generation" + logger.error(err) + raise ValueError(err) + + self._tokenizer = ( + AutoTokenizer.from_pretrained(tokenizer) + if isinstance(tokenizer, str) + else tokenizer + ) + logger.info("Tokenizer initialized for request generation: {}", self._tokenizer) if self._mode == "async": self._thread = threading.Thread(target=self._populate_queue, daemon=True)
diff --git a/tests/unit/backend/test_base.py b/tests/unit/backend/test_base.py index 9247eb1..edd61a9 100644 --- a/tests/unit/backend/test_base.py +++ b/tests/unit/backend/test_base.py @@ -10,6 +10,9 @@ class MockBackend(Backend): def __init__(self): super().__init__("test", "http://localhost:8000", "mock-model") + def test_connection(self) -> bool: + return True + async def make_request(self, request): yield GenerativeResponse(type_="final", output="Test") @@ -48,6 +51,9 @@ class MockBackend(Backend): def __init__(self): super().__init__("test", "http://localhost:8000", "mock-model") + def test_connection(self) -> bool: + return True + async def make_request(self, request): yield GenerativeResponse( type_="token_iter", @@ -91,6 +97,9 @@ class MockBackend(Backend): def __init__(self): super().__init__("test", "http://localhost:8000", "mock-model") + def test_connection(self) -> bool: + return True + async def make_request(self, request): yield GenerativeResponse(type_="final", output="Test") @@ -110,6 +119,9 @@ class MockBackend(Backend): def __init__(self): super().__init__("test", "http://localhost:8000", "mock-model") + def test_connection(self) -> bool: + return True + async def make_request(self, request): yield GenerativeResponse(type_="token_iter", add_token="Token") yield GenerativeResponse(type_="token_iter", add_token=" ") @@ -132,6 +144,9 @@ class MockBackend(Backend): def __init__(self): super().__init__("test", "http://localhost:8000", "mock-model") + def test_connection(self) -> bool: + return True + async def make_request(self, request): if False: # simulate no yield yield @@ -152,6 +167,9 @@ class MockBackend(Backend): def __init__(self): super().__init__("test", "http://localhost:8000", "mock-model") + def test_connection(self) -> bool: + return True + async def make_request(self, request): yield GenerativeResponse(type_="token_iter", add_token="Token") yield GenerativeResponse(type_="token_iter", add_token=" ") @@ -174,6 +192,9 @@ class MockBackend(Backend): def __init__(self): super().__init__("test", "http://localhost:8000", "mock-model") + def test_connection(self) -> bool: + return True + def available_models(self): return ["mock-model", "mock-model-2"] @@ -185,6 +206,39 @@ async def make_request(self, request): assert backend.default_model == "mock-model" [email protected]() +def test_backend_test_connection(): + class MockBackend(Backend): + def __init__(self): + super().__init__("test", "http://localhost:8000", "mock-model") + + def available_models(self): + return ["mock-model", "mock-model-2"] + + async def make_request(self, request): + yield GenerativeResponse(type_="final", output="") + + assert MockBackend().test_connection() + + [email protected]() +def test_backend_tokenizer(mock_auto_tokenizer): + class MockBackend(Backend): + def __init__(self): + super().__init__("test", "http://localhost:8000", "mock-model") + + def available_models(self): + return ["mock-model", "mock-model-2"] + + async def make_request(self, request): + yield GenerativeResponse(type_="final", output="") + + backend = MockBackend() + tokenizer = backend.model_tokenizer() + assert tokenizer is not None + assert tokenizer.tokenize("text") is not None + + @pytest.mark.regression() def test_backend_abstract_methods(): with pytest.raises(TypeError): @@ -194,6 +248,9 @@ class IncompleteBackend(Backend): def __init__(self): super().__init__("test", "http://localhost:8000", "mock-model") + def test_connection(self) -> bool: + return True + async def make_request(self, request): yield 
GenerativeResponse(type_="final", output="Test") diff --git a/tests/unit/request/test_base.py b/tests/unit/request/test_base.py index 8b75be1..73cf1b1 100644 --- a/tests/unit/request/test_base.py +++ b/tests/unit/request/test_base.py @@ -11,14 +11,16 @@ @pytest.mark.smoke() def test_request_generator_sync_constructor(mock_auto_tokenizer): - generator = TestRequestGenerator(mode="sync") + generator = TestRequestGenerator(mode="sync", tokenizer="mock-tokenizer") assert generator.mode == "sync" assert generator.async_queue_size == 50 # Default value @pytest.mark.smoke() def test_request_generator_async_constructor(mock_auto_tokenizer): - generator = TestRequestGenerator(mode="async", async_queue_size=10) + generator = TestRequestGenerator( + mode="async", tokenizer="mock-tokenizer", async_queue_size=10 + ) assert generator.mode == "async" assert generator.async_queue_size == 10 generator.stop() @@ -26,7 +28,7 @@ def test_request_generator_async_constructor(mock_auto_tokenizer): @pytest.mark.smoke() def test_request_generator_sync_iter(mock_auto_tokenizer): - generator = TestRequestGenerator(mode="sync") + generator = TestRequestGenerator(mode="sync", tokenizer="mock-tokenizer") items = [] for item in generator: items.append(item) @@ -39,7 +41,7 @@ def test_request_generator_sync_iter(mock_auto_tokenizer): @pytest.mark.smoke() def test_request_generator_async_iter(mock_auto_tokenizer): - generator = TestRequestGenerator(mode="async") + generator = TestRequestGenerator(mode="async", tokenizer="mock-tokenizer") items = [] for item in generator: items.append(item) @@ -53,7 +55,7 @@ def test_request_generator_async_iter(mock_auto_tokenizer): @pytest.mark.smoke() def test_request_generator_iter_calls_create_item(mock_auto_tokenizer): - generator = TestRequestGenerator(mode="sync") + generator = TestRequestGenerator(mode="sync", tokenizer="mock-tokenizer") generator.create_item = Mock( # type: ignore return_value=TextGenerationRequest(prompt="Mock prompt"), ) @@ -70,7 +72,7 @@ def test_request_generator_iter_calls_create_item(mock_auto_tokenizer): @pytest.mark.smoke() def test_request_generator_async_iter_calls_create_item(mock_auto_tokenizer): - generator = TestRequestGenerator(mode="sync") + generator = TestRequestGenerator(mode="sync", tokenizer="mock-tokenizer") generator.create_item = Mock( # type: ignore return_value=TextGenerationRequest(prompt="Mock prompt"), ) @@ -88,7 +90,9 @@ def test_request_generator_async_iter_calls_create_item(mock_auto_tokenizer): @pytest.mark.sanity() def test_request_generator_repr(mock_auto_tokenizer): - generator = TestRequestGenerator(mode="sync", async_queue_size=100) + generator = TestRequestGenerator( + mode="sync", tokenizer="mock-tokenizer", async_queue_size=100 + ) repr_str = repr(generator) assert repr_str.startswith("RequestGenerator(") assert "mode=sync" in repr_str @@ -98,7 +102,7 @@ def test_request_generator_repr(mock_auto_tokenizer): @pytest.mark.sanity() def test_request_generator_stop(mock_auto_tokenizer): - generator = TestRequestGenerator(mode="async") + generator = TestRequestGenerator(mode="async", tokenizer="mock-tokenizer") generator.stop() assert generator._stop_event.is_set() assert not generator._thread.is_alive() @@ -127,7 +131,9 @@ def _fake_tokenize(text: str) -> List[int]: @pytest.mark.regression() def test_request_generator_populate_queue(mock_auto_tokenizer): - generator = TestRequestGenerator(mode="async", async_queue_size=2) + generator = TestRequestGenerator( + mode="async", tokenizer="mock-tokenizer", async_queue_size=2 + ) 
generator.create_item = Mock( # type: ignore return_value=TextGenerationRequest(prompt="Mock prompt") ) @@ -139,7 +145,9 @@ def test_request_generator_populate_queue(mock_auto_tokenizer): @pytest.mark.regression() def test_request_generator_async_stop_during_population(mock_auto_tokenizer): - generator = TestRequestGenerator(mode="async", async_queue_size=2) + generator = TestRequestGenerator( + mode="async", tokenizer="mock-tokenizer", async_queue_size=2 + ) generator.create_item = Mock( # type: ignore return_value=TextGenerationRequest(prompt="Mock prompt") ) diff --git a/tests/unit/request/test_emulated.py b/tests/unit/request/test_emulated.py index 699b1d6..f6af130 100644 --- a/tests/unit/request/test_emulated.py +++ b/tests/unit/request/test_emulated.py @@ -237,7 +237,7 @@ def test_endless_data_words_create_text(data, start, length, expected_text): @pytest.mark.smoke() -def test_emulated_request_generator_construction(mocker): +def test_emulated_request_generator_construction(mocker, mock_auto_tokenizer): mocker.patch( "guidellm.request.emulated.EmulatedConfig.create_config", return_value=EmulatedConfig(prompt_tokens=10), @@ -246,7 +246,9 @@ def test_emulated_request_generator_construction(mocker): "guidellm.request.emulated.EndlessTokens", return_value=EndlessTokens("word1 word2"), ) - generator = EmulatedRequestGenerator(config="mock_config", mode="sync") + generator = EmulatedRequestGenerator( + config="mock_config", tokenizer="mock-tokenizer", mode="sync" + ) assert isinstance(generator._config, EmulatedConfig) assert isinstance(generator._tokens, EndlessTokens) @@ -276,7 +278,9 @@ def test_emulated_request_generator_sample_prompt(mocker, mock_auto_tokenizer): "guidellm.request.emulated.EndlessTokens", return_value=EndlessTokens("word1 word2"), ) - generator = EmulatedRequestGenerator(config={"prompt_tokens": 3}, mode="sync") + generator = EmulatedRequestGenerator( + config={"prompt_tokens": 3}, tokenizer="mock-tokenizer", mode="sync" + ) prompt = generator.sample_prompt(3) assert prompt == "word1 word2 word1" @@ -285,7 +289,7 @@ def test_emulated_request_generator_sample_prompt(mocker, mock_auto_tokenizer): @pytest.mark.smoke() -def test_emulated_request_generator_random_seed(mocker): +def test_emulated_request_generator_random_seed(mocker, mock_auto_tokenizer): mocker.patch( "guidellm.request.emulated.EndlessTokens", return_value=EndlessTokens("word1 word2"), @@ -293,16 +297,19 @@ def test_emulated_request_generator_random_seed(mocker): rand_gen = EmulatedRequestGenerator( config={"prompt_tokens": 20, "prompt_tokens_variance": 10}, + tokenizer="mock-tokenizer", random_seed=42, mode="sync", ) rand_gen_comp_pos = EmulatedRequestGenerator( config={"prompt_tokens": 20, "prompt_tokens_variance": 10}, + tokenizer="mock-tokenizer", random_seed=42, mode="sync", ) rand_gen_comp_neg = EmulatedRequestGenerator( config={"prompt_tokens": 20, "prompt_tokens_variance": 10}, + tokenizer="mock-tokenizer", random_seed=43, mode="sync", ) @@ -339,13 +346,14 @@ def test_emulated_request_generator_lifecycle( config: Union[str, dict, Path], ): if config_type in ["dict", "json_str", "key_value_str"]: - generator = EmulatedRequestGenerator(config) + generator = EmulatedRequestGenerator(config, tokenizer="mock-tokenizer") elif config_type in ["file_str", "file_path"]: with tempfile.TemporaryDirectory() as temp_dir: file_path = Path(temp_dir) / "test.json" file_path.write_text(config) # type: ignore generator = EmulatedRequestGenerator( - str(file_path) if config_type == "file_str" else file_path + 
str(file_path) if config_type == "file_str" else file_path, + tokenizer="mock-tokenizer", ) for _ in range(5): diff --git a/tests/unit/request/test_file.py b/tests/unit/request/test_file.py index b214e41..429de52 100644 --- a/tests/unit/request/test_file.py +++ b/tests/unit/request/test_file.py @@ -12,7 +12,7 @@ def test_file_request_generator_constructor(mock_auto_tokenizer): with tempfile.TemporaryDirectory() as temp_dir: file_path = Path(temp_dir) / "example.txt" file_path.write_text("This is a test.\nThis is another test.") - generator = FileRequestGenerator(file_path) + generator = FileRequestGenerator(file_path, tokenizer="mock-tokenizer") assert generator._path == file_path assert generator._data == ["This is a test.", "This is another test."] assert generator._iterator is not None @@ -23,7 +23,9 @@ def test_file_request_generator_create_item(mock_auto_tokenizer): with tempfile.TemporaryDirectory() as temp_dir: file_path = Path(temp_dir) / "example.txt" file_path.write_text("This is a test.\nThis is another test.") - generator = FileRequestGenerator(file_path, mode="sync") + generator = FileRequestGenerator( + file_path, tokenizer="mock-tokenizer", mode="sync" + ) request = generator.create_item() assert isinstance(request, TextGenerationRequest) assert request.prompt == "This is a test." @@ -87,7 +89,7 @@ def test_file_request_generator_file_types_lifecycle( with tempfile.TemporaryDirectory() as temp_dir: file_path = Path(temp_dir) / f"example.{file_extension}" file_path.write_text(file_content) - generator = FileRequestGenerator(file_path) + generator = FileRequestGenerator(file_path, tokenizer="mock-tokenizer") for index, request in enumerate(generator): assert isinstance(request, TextGenerationRequest) diff --git a/tests/unit/request/test_transformers.py b/tests/unit/request/test_transformers.py index fcf933b..415fed6 100644 --- a/tests/unit/request/test_transformers.py +++ b/tests/unit/request/test_transformers.py @@ -28,6 +28,7 @@ def test_transformers_dataset_request_generator_constructor( dataset="dummy_dataset", split="train", column="text", + tokenizer="mock-tokenizer", ) assert generator._dataset == "dummy_dataset" assert generator._split == "train" @@ -45,6 +46,7 @@ def test_transformers_dataset_request_generator_create_item( dataset=create_sample_dataset_dict(), split="train", column="text", + tokenizer="mock-tokenizer", mode="sync", ) request = generator.create_item() @@ -83,7 +85,7 @@ def test_transformers_dataset_request_generator_lifecycle( return_value=dataset, ): generator = TransformersDatasetRequestGenerator( - dataset=dataset_arg, mode="sync" + dataset=dataset_arg, tokenizer="mock-tokenizer", mode="sync" ) for index, request in enumerate(generator): diff --git a/tests/unit/test_main.py b/tests/unit/test_main.py index 0ecc978..6923092 100644 --- a/tests/unit/test_main.py +++ b/tests/unit/test_main.py @@ -232,6 +232,8 @@ def test_generate_benchmark_report_cli_smoke( "openai_server", "--data-type", "emulated", + "--data", + "prompt_tokens=512", "--rate-type", "sweep", "--max-seconds", @@ -240,10 +242,12 @@ def test_generate_benchmark_report_cli_smoke( "10", "--output-path", "benchmark_report.json", - "--disable-continuous-refresh", ], ) + if result.stdout: + print(result.stdout) # noqa: T201 + assert result.exit_code == 0 assert "Benchmarks" in result.output assert "Generating report..." in result.output
problem_statement:

How to specify input/output token lengths for requests?

Looking at the help command for guidellm, there is no information on how to set input or output length:

```
guidellm --help
Usage: guidellm [OPTIONS]

  Generate a benchmark report for a specified backend and dataset.

Options:
  --target TEXT                   The target path or url for the backend to
                                  evaluate. Ex: 'http://localhost:8000/v1'
  --backend [openai_server]       The backend to use for benchmarking. The
                                  default is OpenAI Server enabling
                                  compatiability with any server that follows
                                  the OpenAI spec including vLLM
  --model TEXT                    The Model to use for benchmarking. If not
                                  provided, it will use the first available
                                  model provided the backend supports listing
                                  models.
  --data TEXT                     The data source to use for benchmarking.
                                  Depending on the data-type, it should be a
                                  path to a file, a dataset name, or a
                                  configuration for emulated data.
  --data-type [emulated|file|transformers]
                                  The type of data given for benchmarking
  --tokenizer TEXT                The tokenizer to use for calculating the
                                  number of prompt tokens. If not provided,
                                  will use a Llama 3.1 tokenizer.
  --rate-type [sweep|synchronous|throughput|constant|poisson]
                                  The type of request rate to use for
                                  benchmarking. The default is sweep, which
                                  will run a synchronous and throughput
                                  rate-type and then interfill with constant
                                  rates between the two values
  --rate FLOAT                    The request rate to use for constant and
                                  poisson rate types
  --max-seconds INTEGER           The maximum number of seconds for each
                                  benchmark run. Either max-seconds,
                                  max-requests, or both must be set. The
                                  default is 120 seconds. Note, this is the
                                  maximum time for each rate supplied, not
                                  the total time. This value should be large
                                  enough to allow for the server's
                                  performance to stabilize.
  --max-requests INTEGER          The maximum number of requests for each
                                  benchmark run. Either max-seconds,
                                  max-requests, or both must be set. Note,
                                  this is the maximum number of requests for
                                  each rate supplied, not the total number of
                                  requests. This value should be large enough
                                  to allow for the server's performance to
                                  stabilize.
  --output-path TEXT              The output path to save the output report
                                  to. The default is guidance_report.json.
  --disable-continuous-refresh    Disable continual refreshing of the output
                                  table in the CLI until the user exits.
  --help                          Show this message and exit.
```

In other benchmarking frameworks there are usually top-level parameters for this, especially when using random data:
https://github.com/vllm-project/vllm/blob/9db642138b54ef3df81873eac9fe7e15fc2da584/benchmarks/benchmark_serving.py#L692-L705

```python
parser.add_argument(
    "--random-input-len",
    type=int,
    default=1024,
    help=
    "Number of input tokens per request, used only for random sampling.",
)
parser.add_argument(
    "--random-output-len",
    type=int,
    default=128,
    help=
    "Number of output tokens per request, used only for random sampling.",
)
```
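The patch in this row answers this by making `--data-type` and `--data` required and documenting an emulated-data config string in the README. A minimal sketch of such an invocation, assuming a vLLM-compatible server at localhost:8000 and the model named in the README diff, would be:

```bash
# Hedged example based on the README changes in the patch above:
# emulated data with 512 prompt tokens and 128 generated tokens per request.
guidellm \
  --target "http://localhost:8000/v1" \
  --model "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16" \
  --data-type emulated \
  --data "prompt_tokens=512,generated_tokens=128"
```

Per the configuration section added to the README in the patch, variance parameters such as `prompt_tokens_variance` can be appended to the same `--data` string.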
created_at: 2024-08-28T21:50:42
version: -1.0

repo: lxdock/lxdock
pull_number: 117
instance_id: lxdock__lxdock-117
issue_numbers: ['114']
base_commit: 86e6c7dcbbea3674569af066fd47434336e1f1bd
patch:
diff --git a/lxdock/conf/schema.py b/lxdock/conf/schema.py index 01ed760..fade192 100644 --- a/lxdock/conf/schema.py +++ b/lxdock/conf/schema.py @@ -21,11 +21,11 @@ def get_schema(): # The existence of the source directory will be checked! 'source': IsDir(), 'dest': str, - 'set_host_acl': bool, + 'set_host_acl': bool, # TODO: need a way to deprecate this }], 'shell': { 'user': str, - 'home': str, + 'home': str, # TODO: deprecated }, 'users': [{ # Usernames max length is set 32 characters according to useradd's man page. diff --git a/lxdock/container.py b/lxdock/container.py index 25a0e7b..afb97f1 100644 --- a/lxdock/container.py +++ b/lxdock/container.py @@ -122,12 +122,9 @@ def shell(self, username=None, cmd_args=[]): shellcfg = self.options.get('shell', {}) shelluser = username or shellcfg.get('user') if shelluser: - # This part is the result of quite a bit of `su` args trial-and-error. - shellhome = shellcfg.get('home') if not username else None - homearg = '--env HOME={}'.format(shellhome) if shellhome else '' - cmd = 'lxc exec {} {} -- su -m {}'.format(self.lxd_name, homearg, shelluser) + cmd = 'lxc exec {} -- su -l {}'.format(self.lxd_name, shelluser) else: - cmd = 'lxc exec {} -- su -m root'.format(self.lxd_name) + cmd = 'lxc exec {} -- su -l root'.format(self.lxd_name) if cmd_args: # Again, a bit of trial-and-error. @@ -361,12 +358,15 @@ def _setup_shares(self): logger.info('Setting up shares...') + if not self._host.has_subuidgid_been_set(): + raise ContainerOperationFailed() + container = self._container # First, let's make an inventory of shared sources that were already there. existing_shares = { - k: d for k, d in container.devices.items() if k.startswith('lxdockshare')} - existing_sources = {d['source'] for d in existing_shares.values()} + k: d for k, d in container.devices.items() if k.startswith('lxdockshare') + } # Let's get rid of previously set up lxdock shares. for k in existing_shares: @@ -374,26 +374,27 @@ def _setup_shares(self): for i, share in enumerate(self.options.get('shares', []), start=1): source = os.path.join(self.homedir, share['source']) - # It is possible to disable setting host side ACL but by default it is always enabled. - set_host_acl = share.get('set_host_acl', True) - if set_host_acl and source not in existing_sources: - logger.info('Setting host-side ACL for {}'.format(source)) - self._host.give_current_user_access_to_share(source) - if not self.is_privileged: - # We are considering a safe container. So give the mapped root user permissions - # to read/write contents in the shared folders too. - self._host.give_mapped_user_access_to_share(self._container, source) - # We also give these permissions to any user that was created with LXDock. 
- for uconfig in self.options.get('users', []): - username = uconfig.get('name') - self._host.give_mapped_user_access_to_share( - self._container, source, - userpath=uconfig.get('home', '/home/' + username)) - shareconf = {'type': 'disk', 'source': source, 'path': share['dest'], } container.devices['lxdockshare%s' % i] = shareconf + + guest_username = self.options.get("users", [{"name": "root"}])[0]["name"] + host_uid, host_gid = self._host.uidgid() + guest_uid, guest_gid = self._guest.uidgid(guest_username) + raw_idmap = "uid {} {}\ngid {} {}".format(host_uid, guest_uid, host_gid, guest_gid) + raw_idmap_updated = container.config.get("raw.idmap") != raw_idmap + if raw_idmap_updated: + container.config["raw.idmap"] = raw_idmap + container.save(wait=True) + if raw_idmap_updated: + # the container must be restarted for this to take effect + logger.info( + "share uid map (raw.idmap) updated, container must be restarted to take effect" + ) + container.restart(wait=True) + self._setup_ip() + def _setup_users(self): """ Creates users defined in the container's options if applicable. """ users = self.options.get('users', []) diff --git a/lxdock/guests/base.py b/lxdock/guests/base.py index 4b98306..bc25cfb 100644 --- a/lxdock/guests/base.py +++ b/lxdock/guests/base.py @@ -13,6 +13,7 @@ from pylxd.exceptions import NotFound +from ..exceptions import ContainerOperationFailed from ..utils.metaclass import with_metaclass @@ -126,6 +127,18 @@ def create_user(self, username, home=None, password=None): options += ['-p', password, ] self.run(['useradd', ] + options + [username, ]) + def uidgid(self, username): + """Obtain the uid and gid """ + exit_code, uid, _ = self.run(['id', '-u', username]) + if exit_code != 0: + raise ContainerOperationFailed("cannot get uid from container") + + exit_code, gid, _ = self.run(['id', '-g', username]) + if exit_code != 0: + raise ContainerOperationFailed("cannot get gid from container") + + return int(uid), int(gid) + ######################################################## # METHODS THAT SHOULD BE OVERRIDEN IN GUEST SUBCLASSES # ######################################################## @@ -149,7 +162,7 @@ def run(self, cmd_args): exit_code, stdout, stderr = self.lxd_container.execute(cmd_args) logger.debug(stdout) logger.debug(stderr) - return exit_code + return exit_code, stdout, stderr def copy_file(self, host_path, guest_path): """ diff --git a/lxdock/guests/gentoo.py b/lxdock/guests/gentoo.py index 212f850..bd2b9e8 100644 --- a/lxdock/guests/gentoo.py +++ b/lxdock/guests/gentoo.py @@ -11,6 +11,6 @@ def install_packages(self, packages): # It contains "equery" that can check which package has been installed. self.run(['emerge', 'app-portage/gentoolkit']) for p in packages: - retcode = self.run(['equery', 'list', p]) + retcode, _, _ = self.run(['equery', 'list', p]) if retcode != 0: # Not installed yet self.run(['emerge', p]) diff --git a/lxdock/hosts/base.py b/lxdock/hosts/base.py index dcbab3b..e09b4f2 100644 --- a/lxdock/hosts/base.py +++ b/lxdock/hosts/base.py @@ -12,7 +12,6 @@ import subprocess from pathlib import Path -from ..utils.lxd import get_lxd_dir from ..utils.metaclass import with_metaclass @@ -95,33 +94,42 @@ def get_ssh_pubkey(self): except FileNotFoundError: # pragma: no cover pass - def give_current_user_access_to_share(self, source): - """ Give read/write access to `source` for the current user. 
""" - self.run(['setfacl', '-Rdm', 'u:{}:rwX'.format(os.getuid()), source]) - - def give_mapped_user_access_to_share(self, lxd_container, source, userpath=None): - """ Give read/write access to `source` for the mapped user owning `userpath`. - - `userpath` is a path that is relative to the LXD base directory (where LXD store contaners). - """ - # LXD uses user namespaces when running safe containers. This means that it maps a set of - # uids and gids on the host to a set of uids and gids in the container. - # When considering unprivileged containers we want to ensure that the "root user" (or any - # other user) of such containers have the proper rights to write in shared folders. To do so - # we have to retrieve the UserID on the host-side that is mapped to the "root"'s UserID (or - # any other user's UserID) on the guest-side. This will allow to set ACL on the host-side - # for this UID. By doing this we will also allow "root" user on the guest-side to read/write - # in shared folders. - container_path_parts = [get_lxd_dir(), 'containers', lxd_container.name, 'rootfs'] - container_path_parts += userpath.split('/') if userpath else [] - container_path = os.path.join(*container_path_parts) - container_path_stats = os.stat(container_path) - host_userpath_uid = container_path_stats.st_uid - self.run([ - 'setfacl', '-Rm', - 'user:lxd:rwx,default:user:lxd:rwx,' - 'user:{0}:rwx,default:user:{0}:rwx'.format(host_userpath_uid), source, - ]) + def uidgid(self): + return os.getuid(), os.getgid() + + def has_subuidgid_been_set(self): + # Setup host subuid and subgid mapping + # For more information, see + # https://stgraber.org/2017/06/15/custom-user-mappings-in-lxd-containers/ + + host_uid, host_gid = self.uidgid() + subuid_lines = ["lxd:{}:1".format(host_uid), "root:{}:1".format(host_uid)] + subgid_lines = ["lxd:{}:1".format(host_gid), "root:{}:1".format(host_gid)] + + configured_correctly = True + + with open("/etc/subuid") as f: + subuid_content = f.read() + + for line in subuid_lines: + if line not in subuid_content: + logger.error("/etc/subuid missing the line: {}".format(line)) + configured_correctly = False + + with open("/etc/subgid") as f: + subgid_content = f.read() + + for line in subgid_lines: + if line not in subgid_content: + logger.error("/etc/subgid missing the line: {}".format(line)) + configured_correctly = False + + if not configured_correctly: + logger.error( + "you must set these lines and then restart the lxd daemon before continuing" + ) + + return configured_correctly ################## # HELPER METHODS #
diff --git a/tests/integration/test_container.py b/tests/integration/test_container.py index e3666e1..7a09941 100644 --- a/tests/integration/test_container.py +++ b/tests/integration/test_container.py @@ -121,7 +121,7 @@ def test_can_open_a_shell_for_the_root_user(self, mocked_call, persistent_contai persistent_container.shell() assert mocked_call.call_count == 1 assert mocked_call.call_args[0][0] == \ - 'lxc exec {} -- su -m root'.format(persistent_container.lxd_name) + 'lxc exec {} -- su -l root'.format(persistent_container.lxd_name) @unittest.mock.patch('subprocess.call') def test_can_open_a_shell_for_a_specific_shelluser(self, mocked_call): @@ -134,7 +134,7 @@ def test_can_open_a_shell_for_a_specific_shelluser(self, mocked_call): container.shell() assert mocked_call.call_count == 1 assert mocked_call.call_args[0][0] == \ - 'lxc exec {} --env HOME=/opt -- su -m test'.format(container.lxd_name) + 'lxc exec {} -- su -l test'.format(container.lxd_name) @unittest.mock.patch('subprocess.call') def test_can_run_quoted_shell_command_for_the_root_user( @@ -142,7 +142,7 @@ def test_can_run_quoted_shell_command_for_the_root_user( persistent_container.shell(cmd_args=['echo', 'he re"s', '-u', '$PATH']) assert mocked_call.call_count == 1 assert mocked_call.call_args[0][0] == \ - 'lxc exec {} -- su -m root -s {}'.format( + 'lxc exec {} -- su -l root -s {}'.format( persistent_container.lxd_name, persistent_container._guest_shell_script_file) script = persistent_container._container.files.get( persistent_container._guest_shell_script_file) @@ -159,7 +159,7 @@ def test_can_run_quoted_shell_command_for_a_specific_shelluser(self, mocked_call container.shell(cmd_args=['echo', 'he re"s', '-u', '$PATH']) assert mocked_call.call_count == 1 assert mocked_call.call_args[0][0] == \ - 'lxc exec {} --env HOME=/opt -- su -m test -s {}'.format( + 'lxc exec {} -- su -l test -s {}'.format( container.lxd_name, container._guest_shell_script_file) script = container._container.files.get(container._guest_shell_script_file) assert script == b"""#!/bin/sh\necho 'he re"s' -u '$PATH'\n""" diff --git a/tests/integration/test_project.py b/tests/integration/test_project.py index 55bb90b..1248ced 100644 --- a/tests/integration/test_project.py +++ b/tests/integration/test_project.py @@ -155,7 +155,7 @@ def test_can_open_a_shell_for_a_specific_container(self, mocked_call, persistent project.shell(container_name='testcase-persistent') assert mocked_call.call_count == 1 assert mocked_call.call_args[0][0] == \ - 'lxc exec {} -- su -m root'.format(persistent_container.lxd_name) + 'lxc exec {} -- su -l root'.format(persistent_container.lxd_name) @unittest.mock.patch('subprocess.call') def test_can_run_shell_command_for_a_specific_container( @@ -165,7 +165,7 @@ def test_can_run_shell_command_for_a_specific_container( project.shell(container_name='testcase-persistent', cmd_args=['echo', 'HELLO']) assert mocked_call.call_count == 1 assert mocked_call.call_args[0][0] == \ - "lxc exec {} -- su -m root -s {}".format( + "lxc exec {} -- su -l root -s {}".format( persistent_container.lxd_name, persistent_container._guest_shell_script_file) @unittest.mock.patch.object(project_logger, 'info') diff --git a/tests/unit/guests/test_base.py b/tests/unit/guests/test_base.py index 6bccb76..4cba9c7 100644 --- a/tests/unit/guests/test_base.py +++ b/tests/unit/guests/test_base.py @@ -119,3 +119,13 @@ class DummyGuest(Guest): assert guest.lxd_container.files.put.call_count == 1 assert guest.lxd_container.files.put.call_args[0][0] == 
guest._guest_temporary_tar_path + + def test_uidgid(self): + class DummyGuest(Guest): + name = 'dummy' + guest = DummyGuest(FakeContainer()) + guest.lxd_container.execute.return_value = (0, "10000", "") + + uid, gid = guest.uidgid("user") + assert uid == 10000 + assert gid == 10000 diff --git a/tests/unit/hosts/test_base.py b/tests/unit/hosts/test_base.py index 3157e9c..6610d06 100644 --- a/tests/unit/hosts/test_base.py +++ b/tests/unit/hosts/test_base.py @@ -1,4 +1,3 @@ -import os import platform import unittest.mock from pathlib import Path @@ -7,7 +6,6 @@ from lxdock.hosts import Host from lxdock.hosts.base import InvalidHost -from lxdock.test import FakeContainer class TestGuest: @@ -32,22 +30,12 @@ def test_can_return_ssh_pubkey(self, mock_open): assert host.get_ssh_pubkey() assert mock_open.call_count == 1 - @unittest.mock.patch('subprocess.Popen') - def test_can_give_current_user_access_to_share(self, mocked_call): - host = Host() - host.give_current_user_access_to_share('.') - assert mocked_call.call_count == 1 - assert mocked_call.call_args[0][0] == 'setfacl -Rdm u:{}:rwX .'.format(os.getuid()) - - @unittest.mock.patch('subprocess.Popen') - @unittest.mock.patch('os.stat') - def test_can_give_mapped_user_access_to_share(self, mocked_stat, mocked_call): - class MockedContainer(object): - name = 'test' - mocked_stat.return_value = unittest.mock.MagicMock(st_uid='19958953') + @unittest.mock.patch("os.getuid") + @unittest.mock.patch("os.getgid") + def test_uidgid(self, mock_getuid, mock_getgid): + mock_getuid.return_value = 10000 + mock_getgid.return_value = 10001 + host = Host() - host.give_mapped_user_access_to_share(FakeContainer(), '.', '.') - assert mocked_call.call_count == 1 - assert mocked_call.call_args[0] == ( - 'setfacl -Rm user:lxd:rwx,default:user:lxd:rwx,user:19958953:rwx,default:user:19958953' - ':rwx .',) + + assert host.uidgid(), (10000, 10001)
problem_statement:

lxdock share guest created files not mapped to the host user?

I have a guest-created file via a normal guest user account (non-root). Back on the host, the file has the attributes:

```
-rw-r--r--+ 1 166537 166537 497 Mar 2 11:18 handler.py
```

This means I can't edit this file easily as vim will complain about readonly. How does vagrant-lxc get around this issue?
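The patch in this row addresses the question by writing a `raw.idmap` entry into the container config (mapping the host uid/gid onto the first configured user) and restarting the container. Expressed by hand as LXD CLI calls, and assuming a container named `mycontainer` with host and guest ids of 1000 (placeholders for illustration only), that looks roughly like:

```bash
# Hedged sketch of what the patched _setup_shares() does, as manual LXD commands.
# "mycontainer" and the 1000/1000 id values are assumptions, not from the dataset.
lxc config set mycontainer raw.idmap $'uid 1000 1000\ngid 1000 1000'
lxc restart mycontainer   # the patch notes the container must restart for raw.idmap to take effect
```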
This is a good question because I have had to work around this issue myself too. And this is the number one issue on my mind since getting access rights to the project late last year. I don't like the way it works right now and this is one of the big things I would like to solve. Currently what lxdock does is set advanced permissions on each file in your source tree using the setfacl command, which is the bit I really dislike as it dirties every file. What happened to me is the additional permissions on the files interfered with our debian package building process which we do in an lxdock container, so I had to find a solution for this problem last year. Last year before having full access to the lxdock project, I managed to convince to at least add a setting to disable this behaviour. The setting is putting this on each share you have in lxdock.yml shares: - source: . dest: /src/myapp set_host_acl: false Because every file has extra permissions set by setfacl, is why you are seeing the + by the perms. -rw-r--r--+ To remove these perms, you need to do a clean git checkout, or remove the advanced setfacl permissions from every file recursively first. There's a command for that. Then you need to edit your lxdock.yml and for each share use set_host_facl false. Additionally, the container must be privileged: privileged: yes The last bit I do with ansible, I have an ansible playbook to read the uid and gid of the guest system. I then set the uid and gid of the user in the container to these values. In addition, in case the ubuntu user already in the container has a uid and gid clash, I delete the ubuntu user. This seems really hacky and roundabout, which is why this is the number one thing on my mind to fix in lxdock, but it might mean making some decisions and doing this user mapping behaviour by default. I was thinking of always creating a default user "lxdock" with password "lxdock" like vagrant, but with the ability to configure that still in the lxdock.yml if you want another username and password. I was then going to automatically map the first user to the guest system user. As I see lxdock a tool for doing development containers, and you really want user mapping to work for shares out of the box or it is just frustrating, even if that means the container needs to be privileged to do so because the core purpose of lxdock the way I see it, is development containers which are generally a controlled environment anyway. But I need to do some research as well, I think there might be better ways to do these things with lxd now as I am currently doing them, because it all seems like a big hack the way I am achieving this now. One usecase I'm looking into is the ability to use lxdock as a base for a build container, like building kernels/debian packages/whatever else that requires a lot of dependencies to be installed. I used to use lxd for this purpose but with lxdock the configuration phase may be simpler. That's for another day, tho. It will be once I fix these quirks, the setfacl thing dirty every file is really bad I think. We use ansible to setup a build container with lxdock, but there is another issue that the ansible user is root by default when lxdock invokes ansible which is also very backwards compared to how vagrant works where it would go in as the vagrant user. This has disallowed me from using the same ansible playbooks with both a vagrantfile and lxdock file in the repo which is a bit annoying. 
So only once I really fix lxdock to have a "default user" (say, lxdock by default) and fix the user mapping with the guest OS user can I really fix that Ansible root-user issue. So there is a fair amount I have to fix really :)

The previous devs worked around this with the setfacl thing, basically giving the guest OS user "access" to the files, but the uid and gid are not mapped, which was the problem for me when doing Debian package builds. I worked around this before by doing the build in /tmp rather than on a share.

> ansible user is root by default when lxdock invokes ansible

Ha! I was looking at this issue earlier, but haven't had a chance to look at a fix. Having a default user sounds like a good idea, as this will bring some much needed stability to the wide variety of images out there. That was also why I added #111. Thanks, I'll have a look a bit later.

Shared folders is a mess, I agree. I could never manage to implement a proper solution. Good luck and godspeed :)

I think if we create a default user, then in order to avoid uid/gid clashes for two users on a container, the ubuntu user does need to be deleted and its home directory removed. I was thinking of creating an lxdock/lxdock user by default and mapping the uid and gid to the guest OS user later for shares. If one of the users in lxdock.yml has `default = yes`, then that becomes the default user instead and it won't create an lxdock user. If there are multiple users with `default = yes`, then produce an error, as there can only be one default user per container.

For future reference, the command to unset the setfacl permissions is this:

```
setfacl -Rbn .
```

You may need to chown * to your user as well (and any .dot directories/files along with that).
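Putting the two settings named in this discussion side by side, a share that opts out of the setfacl behaviour in a privileged container might look like the fragment below. This is only a sketch of the keys mentioned above (`privileged`, `shares`, `source`, `dest`, `set_host_acl`); the rest of the lxdock.yml (container name, image, users) is omitted, and key names should be checked against the lxdock version in use.

```yaml
# Hypothetical lxdock.yml fragment combining the settings from the discussion.
# Only the keys mentioned above are shown; verify against your lxdock version.
privileged: yes            # required for the uid/gid mapping approach
shares:
  - source: .
    dest: /src/myapp
    set_host_acl: false    # disable the per-file setfacl behaviour
```

On a checkout that has already been dirtied, the ACLs still need to be stripped once with `setfacl -Rbn .` as noted above.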
2018-03-04T05:47:50
-1.0
vittoriozamboni/django-groups-manager
68
vittoriozamboni__django-groups-manager-68
['63']
a1bafd8c79970624aac392be6cc91cb2f3591326
diff --git a/.coverage b/.coverage deleted file mode 100644 index 778d964..0000000 Binary files a/.coverage and /dev/null differ diff --git a/.gitignore b/.gitignore index 72ce493..1bd2166 100644 --- a/.gitignore +++ b/.gitignore @@ -34,10 +34,11 @@ pip-delete-this-directory.txt # Unit test / coverage reports htmlcov/ -.tox/ +.tox +.coverage .coverage_html .cache -nosetests.xml +.pytest_cache coverage.xml # Translations @@ -70,4 +71,4 @@ target/ # testproject local files testproject/static* testproject/db.sqlite3 -/.venv/ +.venv diff --git a/AUTHORS b/AUTHORS index 7b91e6d..f6529e7 100644 --- a/AUTHORS +++ b/AUTHORS @@ -1,2 +1,3 @@ Vittorio Zamboni -Oskar Persson \ No newline at end of file +Oskar Persson +David Linke diff --git a/README.md b/README.md index 7313228..90611f2 100644 --- a/README.md +++ b/README.md @@ -26,12 +26,12 @@ GROUPS_MANAGER = { ## Requirements - - Python >= 3.5 - - Django >= 2 + - Python >= 3.8 + - Django >= 3.2 - django-guardian for user permissions - jsonfield == 3.1.0 -For older versions of Python or Django, please look at 0.6.2 version. +For older versions of Python or Django, please look at 1.2.0 (Django <3.2, Python < 3.8>) or 0.6.2 version (Django 1.x, Python < 3.5). ## Installation diff --git a/docs/requirements_docs.txt b/docs/requirements_docs.txt new file mode 100644 index 0000000..ebc02d6 --- /dev/null +++ b/docs/requirements_docs.txt @@ -0,0 +1,3 @@ +sphinx +sphinxcontrib-django2 +sphinx_rtd_theme diff --git a/docs/source/auth_integration.rst b/docs/source/auth_integration.rst index 5cdcbca..b218609 100644 --- a/docs/source/auth_integration.rst +++ b/docs/source/auth_integration.rst @@ -1,7 +1,7 @@ Django auth models integration ============================== -It is possibile to auto-map ``Group`` and ``Member`` instances with ``django.contrib.auth.models`` ``Group`` and ``User``. +It is possible to auto-map ``Group`` and ``Member`` instances with ``django.contrib.auth.models`` ``Group`` and ``User``. To enable mapping, ``"AUTH_MODELS_SYNC"`` setting must be set to ``True`` (default: ``False``), and also ``Group`` and ``Member`` instances attribute ``django_auth_sync`` (that is ``True`` by default). Add to your ``settings`` file:: diff --git a/docs/source/changelog.rst b/docs/source/changelog.rst index 5bb1667..e464d62 100644 --- a/docs/source/changelog.rst +++ b/docs/source/changelog.rst @@ -1,59 +1,66 @@ Changelog ========= -- 22-01-29 (1.2.0): + +- 2023-mm-dd (1.3.0): + - Verified compatibility with Django up to 4.2 and Python up to 3.11. + - Dropped compatibility guarantees with anything older than Django 3.2 and Python 3.8. + - Use Django's build in models.JSONField instead of the one from the jsonfield package. + - Minor documentation changes + +- 2022-01-29 (1.2.0): - Support Django 4: uses correct url parser function `re_path` (see issue #60, thank you Lukas Hennies!) - Updated intro.rst, fixed wrong example (see issue #57, thank you Areski Belaid!) -- 21-04-11 (1.1.0): +- 2021-04-11 (1.1.0): - Removed `awesome-slugify` from requirements. It needs to be installed separately due to his licence (see issue #54, thank you BoPeng!); - Added a new settings to customize the slugify functions for username and other cases. 
-- 20-06-17 (1.0.2): +- 2020-06-17 (1.0.2): - Changed jsonfield2 to jsonfield in requirements and tests (see issue #49, thank you ioio!); -- 20-03-07 (1.0.1): +- 2020-03-07 (1.0.1): - Amended Django 3 deprecations - Documentation changes -- 19-12-10 (1.0.0): +- 2019-12-10 (1.0.0): - Dropped support for Django < 2 and Python 2.* -- 19-01-11 (0.6.2): +- 2019-01-11 (0.6.2): - Added migrations for expiration_date and verbose names -- 18-01-18 (0.6.1): +- 2018-01-18 (0.6.1): - Added support for Django 2 -- 17-12-09 (0.6.0) (thank you Oskar Persson!): +- 2017-12-09 (0.6.0) (thank you Oskar Persson!): - Added group type permission handling - Added ``expiration_date`` attribute - Added support to django-jsonfield -- 16-11-08 (0.5.0): +- 2016-11-08 (0.5.0): - Added models mixins - Removed compatibility for Django < 1.7 -- 16-10-10 (0.4.2): +- 2016-10-10 (0.4.2): - Added initial migration - Removed null attributes from m2m relations -- 16-04-19 (0.4.1): +- 2016-04-19 (0.4.1): - Removed unique to group name (this cause issues when subclassing, since it does not allows to have same names for different models) - Fixed issue with python 3 compatibility in templatetags (thank you Josh Manning!) -- 16-03-01 (0.4.0): +- 2016-03-01 (0.4.0): - Added kwargs to signals for override settings parameters - Added remove_member to group as a method (previously must be done manually) -- 16-02-25 (0.3.0): +- 2016-02-25 (0.3.0): - Added permissions assignment to groups - Added support for Django 1.8 and 1.9 -- 15-05-05 (0.2.1): +- 2015-05-05 (0.2.1): - Added 'add' to default permissions -- 15-05-05 (0.2.0): +- 2015-05-05 (0.2.0): - Changed retrieval of permission's name: 'view', 'change' and 'delete' will be translated to '<name>_<model_name>', the others are left untouched (see :ref:`permission name policy <permission-name-policy>`) - - Added GroupsManagerMeta class to Group that allows to specify the member model to use for members list (see :ref:`custom Member model <custom-member-model>`) + - Added GroupsManagerMeta class to Group that allows to specify the member model to use for members list (see `custom Member model <custom_member>`) -- 14-10-29 (0.1.0): Initial version +- 2014-10-29 (0.1.0): Initial version diff --git a/docs/source/conf.py b/docs/source/conf.py index 03c8187..24d191a 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -16,10 +16,11 @@ import os from datetime import datetime + base_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')) sys.path.insert(1, base_path) sys.path.insert(1, os.path.join(base_path, 'testproject')) -os.environ['DJANGO_SETTINGS_MODULE'] = 'testproject.settings' + # If extensions (or modules to document with autodoc) are in another directory, @@ -37,9 +38,11 @@ # ones. extensions = [ 'sphinx.ext.autodoc', + "sphinx.ext.autosectionlabel", 'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.viewcode', + 'sphinxcontrib_django2', ] todo_include_todos = True @@ -63,15 +66,19 @@ # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. -# -# The short X.Y version. -version = '1.0' + # The full version, including alpha/beta/rc tags. -release = '1.0.0' +try: + from groups_manager import VERSION + release = VERSION +except ImportError: + release = "0.0.0" +# The short X.Y.Z version. (will be shown in docs header) +version = '.'.join(release.split('.')[:3]) # The language for content autogenerated by Sphinx. 
Refer to documentation # for a list of supported languages. -#language = None +language = "en" # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: @@ -277,3 +284,16 @@ # If true, do not generate a @detailmenu in the "Top" node's menu. #texinfo_no_detailmenu = False + +# -- Options for sphinx.ext.autosectionlabel ------------------------------ + +# Make sure the ref targets created by are unique +autosectionlabel_prefix_document = True + +# -- Options for sphinxcontrib-django2 ------------------------------------ + +# Configure the path to the Django settings module +django_settings = 'testproject.settings' +autodoc_default_options = { + "exclude-members": "__weakref__,DoesNotExist,MultipleObjectsReturned" +} diff --git a/docs/source/custom_member.rst b/docs/source/custom_member.rst index f300ce2..82d298f 100644 --- a/docs/source/custom_member.rst +++ b/docs/source/custom_member.rst @@ -1,5 +1,3 @@ -.. _custom-member-model: - Custom member model ------------------- diff --git a/docs/source/custom_signals.rst b/docs/source/custom_signals.rst index b6d3bd7..6839328 100644 --- a/docs/source/custom_signals.rst +++ b/docs/source/custom_signals.rst @@ -1,5 +1,3 @@ -.. _custom-signals: - Custom signals -------------- diff --git a/docs/source/index.rst b/docs/source/index.rst index 09c703d..d383c8c 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -6,11 +6,11 @@ Django Groups Manager ===================== -Django Groups Manager allows to manage groups based on `django-mptt <https://github.com/django-mptt/django-mptt>`_. - -The application offers three main classes: `Group`, `Member` and `GroupMember`. -It's possible to *map* groups and members with Django's auth models, in order to use external applcations such `django-guardian <https://github.com/lukaszb/django-guardian>`_ to handle permissions. +Django Groups Manager allows to manage groups, group membership and members. +It's possible to *map* groups and members with Django's auth models, in order to use external applications such `django-guardian <https://github.com/lukaszb/django-guardian>`_ to handle permissions. +Groups can have a hierarchical structure based on `django-mptt <https://github.com/django-mptt/django-mptt>`_. +The application offers three main classes: `Group`, `Member` and `GroupMember`. The basic idea of Groups is that each `Group` instance could have a `Group` instance as parent (this relation is managed via django-mptt). The code is hosted on `github <https://github.com/vittoriozamboni/django-groups-manager>`_. diff --git a/docs/source/intro.rst b/docs/source/intro.rst index 1d934a1..c85a602 100644 --- a/docs/source/intro.rst +++ b/docs/source/intro.rst @@ -2,8 +2,8 @@ Installation ============ Requirements - - Python >= 3.5 - - Django >= 2 + - Python >= 3.8 + - Django >= 3.2 First of all, install the latest build with ``pip``:: diff --git a/docs/source/model_mixins.rst b/docs/source/model_mixins.rst index 41d3f5a..a34e9d3 100644 --- a/docs/source/model_mixins.rst +++ b/docs/source/model_mixins.rst @@ -13,8 +13,8 @@ Cons: - all external foreign keys must be declared in the concrete model; - all signals must be declared with concrete models. -Model mixins example --------------------- +Mixins Example +^^^^^^^^^^^^^^ The following models allow to manage a set of Organizations with related members (from ``organization`` app). In this example, a ``last_edit_date`` is added to models, and member display name has the user email (if defined). 
diff --git a/docs/source/permissions_by_group_type.rst b/docs/source/permissions_by_group_type.rst index ab4336d..e332cd4 100644 --- a/docs/source/permissions_by_group_type.rst +++ b/docs/source/permissions_by_group_type.rst @@ -1,7 +1,5 @@ -.. _custom-permissions-by-group-type: - Resource assignment via group type permissions ----------------------------------------- +---------------------------------------------- Permissions can also be applied to related groups filtered by group types. Instead of simply using a list to specify permissions one can use a ``dict`` to @@ -9,8 +7,7 @@ specify which group types get which permissions. Example -####### - +^^^^^^^ John Money is the commercial referent of the company; Patrick Html is the web developer. John and Patrick can view the site, but only Patrick can change and diff --git a/docs/source/permissions_by_role.rst b/docs/source/permissions_by_role.rst index 9374389..8147235 100644 --- a/docs/source/permissions_by_role.rst +++ b/docs/source/permissions_by_role.rst @@ -1,5 +1,3 @@ -.. _custom-permissions-by-role: - Resource assignment via role permissions ---------------------------------------- diff --git a/docs/source/proxy_models.rst b/docs/source/proxy_models.rst index a405546..fc1cf59 100644 --- a/docs/source/proxy_models.rst +++ b/docs/source/proxy_models.rst @@ -4,7 +4,7 @@ Projects management with Proxy Models John Boss is the project leader. Marcus Worker and Julius Backend are the django backend guys; Teresa Html is the front-end developer and Jack College is the student that has to learn to write good backends. -The Celery pipeline is owned by Marcus, and Jack must see it without intercations. +The Celery pipeline is owned by Marcus, and Jack must see it without interactions. Teresa can't see the pipeline, but John has full permissions as project leader. As part of the backend group, Julius has the right of viewing and editing, but not to stop (delete) the pipeline. diff --git a/docs/source/settings.rst b/docs/source/settings.rst index a43d718..270b636 100644 --- a/docs/source/settings.rst +++ b/docs/source/settings.rst @@ -32,7 +32,7 @@ Valid keys are: .. note:: The four special permission names ``"add"``, ``"view"``, ``"change"``, and ``"delete"`` are translated to ``<permission>_<model_name>`` string during permission's name lookup. - This allows to use a standard permission policy (*view*, *change*, *delete*) but also allows to use :ref:`custom permissions <custom-permissions-by-role>`. + This allows to use a standard permission policy (*view*, *change*, *delete*) but also allows to use `custom permissions <custom-permissions-by-role>`. An example of permissions assigned by role can be found on use cases. 
diff --git a/groups_manager/__init__.py b/groups_manager/__init__.py index ee65984..9e23359 100644 --- a/groups_manager/__init__.py +++ b/groups_manager/__init__.py @@ -1,1 +1,1 @@ -VERSION = '1.2.0' +VERSION = '1.3.0' diff --git a/groups_manager/migrations/0007_1_2_0_alter_group_group_entities_alter_group_group_members_and_more.py b/groups_manager/migrations/0007_1_2_0_alter_group_group_entities_alter_group_group_members_and_more.py new file mode 100644 index 0000000..2ad632b --- /dev/null +++ b/groups_manager/migrations/0007_1_2_0_alter_group_group_entities_alter_group_group_members_and_more.py @@ -0,0 +1,73 @@ +# Generated by Django 4.2.1 on 2023-06-12 19:04 +# using django-mptt 0.14.0 + +from django.conf import settings +from django.db import migrations, models +import django.db.models.deletion +import mptt.fields + + +class Migration(migrations.Migration): + dependencies = [ + migrations.swappable_dependency(settings.AUTH_USER_MODEL), + ("groups_manager", "0006_1_0_0_default"), + ] + + operations = [ + migrations.AlterField( + model_name="group", + name="group_entities", + field=models.ManyToManyField( + blank=True, + related_name="%(app_label)s_%(class)s_set", + to="groups_manager.groupentity", + verbose_name="group entities", + ), + ), + migrations.AlterField( + model_name="group", + name="group_members", + field=models.ManyToManyField( + related_name="%(app_label)s_%(class)s_set", + through="groups_manager.GroupMember", + to="groups_manager.member", + verbose_name="group members", + ), + ), + migrations.AlterField( + model_name="group", + name="group_type", + field=models.ForeignKey( + blank=True, + null=True, + on_delete=django.db.models.deletion.SET_NULL, + related_name="%(app_label)s_%(class)s_set", + to="groups_manager.grouptype", + verbose_name="group type", + ), + ), + migrations.AlterField( + model_name="group", + name="parent", + field=mptt.fields.TreeForeignKey( + blank=True, + null=True, + on_delete=django.db.models.deletion.CASCADE, + related_name="sub_%(app_label)s_%(class)s_set", + to="groups_manager.group", + verbose_name="parent", + ), + ), + migrations.AlterField( + model_name="member", + name="django_user", + field=models.ForeignKey( + blank=True, + null=True, + on_delete=django.db.models.deletion.SET_NULL, + related_name="%(app_label)s_%(class)s_set", + to=settings.AUTH_USER_MODEL, + verbose_name="django user", + ), + ), + ] diff --git a/groups_manager/migrations/0008_1_3_0_jsonfield_from_django.py b/groups_manager/migrations/0008_1_3_0_jsonfield_from_django.py new file mode 100644 index 0000000..482a076 --- /dev/null +++ b/groups_manager/migrations/0008_1_3_0_jsonfield_from_django.py @@ -0,0 +1,20 @@ +# Generated by Django 4.2.1 on 2023-06-13 13:44 + +from django.db import migrations, models + + +class Migration(migrations.Migration): + dependencies = [ + ( + "groups_manager", + "0007_1_2_0_alter_group_group_entities_alter_group_group_members_and_more", + ), + ] + + operations = [ + migrations.AlterField( + model_name="group", + name="properties", + field=models.JSONField(blank=True, default=dict, verbose_name="properties"), + ), + ] diff --git a/groups_manager/models.py b/groups_manager/models.py index 7566638..58a8abe 100644 --- a/groups_manager/models.py +++ b/groups_manager/models.py @@ -1,5 +1,4 @@ from collections import OrderedDict -from importlib import import_module from uuid import uuid4 import warnings @@ -15,7 +14,6 @@ from django.contrib.auth.models import User as DefaultUser DjangoUser = getattr(django_settings, 'AUTH_USER_MODEL', DefaultUser) -from 
jsonfield import JSONField from mptt.models import MPTTModel, TreeForeignKey from groups_manager import exceptions_gm @@ -71,9 +69,6 @@ class Meta: abstract = True ordering = ('last_name', 'first_name') - def __unicode__(self): - return self.full_name - def __str__(self): return self.full_name @@ -131,6 +126,7 @@ class Member(MemberMixin): django_user = models.ForeignKey(DjangoUser, null=True, blank=True, on_delete=models.SET_NULL, related_name='%(app_label)s_%(class)s_set', verbose_name=_('django user')) + # This class Meta is not necessary acc. to https://docs.djangoproject.com/en/4.2/topics/db/models/#meta-inheritance class Meta(MemberMixin.Meta): abstract = False @@ -182,7 +178,7 @@ def member_save(sender, instance, created, *args, **kwargs): def member_delete(sender, instance, *args, **kwargs): """ - Remove the related Django Group + Remove the related Django User """ get_auth_models_sync_func = kwargs.get('get_auth_models_sync_func', get_auth_models_sync_func_default) @@ -313,7 +309,7 @@ class GroupMixin(GroupRelationsMixin, MPTTModel): - `description`: text field - `comment`: text field - `full_name`: auto generated full name starting from tree root - - `properties`: jsonfield properties + - `properties`: JSONField properties - `group_members`: m2m to Member, through GroupMember model (related name: `groups`) - `group_type`: foreign key to GroupType (related name: `groups`) - `group_entities`: m2m to GroupEntity (related name: `groups`) @@ -336,10 +332,10 @@ class GroupMixin(GroupRelationsMixin, MPTTModel): related_name='sub_%(app_label)s_%(class)s_set', verbose_name=_('parent')) full_name = models.CharField(_('full name'), max_length=255, default='', blank=True) try: - properties = JSONField(_('properties'), default={}, blank=True, + properties = models.JSONField(_('properties'), default=dict, blank=True, load_kwargs={'object_pairs_hook': OrderedDict}) except TypeError: - properties = JSONField(_('properties'), default={}, blank=True) + properties = models.JSONField(_('properties'), default=dict, blank=True) django_auth_sync = models.BooleanField(default=True, blank=True) @@ -403,7 +399,7 @@ def get_members(self, subgroups=False): else: members = [gm.member for gm in group_member_model.objects.filter(group=self)] if subgroups: - for subgroup in self.subgroups.all(): + for subgroup in self.sub_groups_manager_group_set.all(): members += subgroup.members members = list(set(members)) return members @@ -430,7 +426,7 @@ def get_entities(self, subgroups=False): """ entities = list(self.group_entities.all()) if subgroups: - for subgroup in self.subgroups.all(): + for subgroup in self.sub_groups_manager_group_set.all(): entities += subgroup.entities entities = list(set(entities)) return entities @@ -620,7 +616,8 @@ class GroupMemberMixin(models.Model): class Meta: abstract = True ordering = ('group', 'member') - # BUG in Django: https://code.djangoproject.com/ticket/16732 + # BUG in Django: "Unable to have abstract model with unique_together" + # https://code.djangoproject.com/ticket/16732 # unique_together = (('group', 'member'), ) def __unicode__(self): diff --git a/groups_manager/views.py b/groups_manager/views.py index 4e43db0..7b7ac04 100644 --- a/groups_manager/views.py +++ b/groups_manager/views.py @@ -1,5 +1,3 @@ -import json - from django.contrib.auth.mixins import LoginRequiredMixin from django.http import HttpResponseRedirect from django.urls import reverse diff --git a/setup.py b/setup.py index e11dc00..9ff75b4 100644 --- a/setup.py +++ b/setup.py @@ -16,7 +16,7 @@ from 
groups_manager import VERSION install_requires=[ - 'django>=2', + 'django>=3.2', 'django-mptt', ] @@ -49,10 +49,10 @@ 'Operating System :: OS Independent', 'Programming Language :: Python', 'Programming Language :: Python :: 3', - 'Programming Language :: Python :: 3.5', - 'Programming Language :: Python :: 3.6', - 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', + 'Programming Language :: Python :: 3.9', + 'Programming Language :: Python :: 3.10', + 'Programming Language :: Python :: 3.11', 'Topic :: Security', ], ) diff --git a/tox.ini b/tox.ini index 64f4788..a3c9623 100644 --- a/tox.ini +++ b/tox.ini @@ -1,30 +1,41 @@ [tox] +requires = + tox>=4.2 envlist = - {py35,py36,py37}-django20-mptt0100-jsonfield, - {py35,py36,py37}-django20-mptt0100-guardian14-jsonfield, - {py35,py36,py37}-django21-mptt0100-guardian21-jsonfield, - {py35,py36,py37}-django22-mptt0100-guardian21-jsonfield, - {py36,py37,py38}-django30-mptt0100-jsonfield, - {py36,py37,py38}-django30-mptt0100-guardian21-jsonfield, + py311-django{42} + py311-django{42, 41}-guardian + py310-django{42, 41, 40, 32}-guardian + py39-django{42, 41, 40, 32}-guardian + py38-django{42, 41, 40, 32}-guardian-{lin,mac} [testenv] +set_env = + PYTHONDEVMODE = 1 commands = pip freeze - coverage run testproject/manage.py test testproject groups_manager - + python \ + -W error::ResourceWarning \ + -W error::DeprecationWarning \ + -W error::PendingDeprecationWarning \ + -I \ # isolate python interpreter; don't add cwd to path + -m coverage run \ + testproject/manage.py test testproject groups_manager deps = - django20: django==2.0.* - django21: django==2.1.* - django22: django==2.2.* - django30: django>=3.0a1,<3.1 - - mptt0100: django-mptt==0.10.0 - - guardian14: django-guardian==1.4.* - guardian21: django-guardian==2.1.* - - jsonfield: jsonfield - coverage - coveralls + django32: django==3.2.* + django40: django==4.0.* + django41: django==4.1.* + django42: django==4.2.* + guardian: django-guardian==2.4.* + jsonfield + # for testproject django-extensions + django-bootstrap3 + +[testenv:py38-django{42, 41, 40, 32}-guardian-{lin,mac}] +# Python 3.8 is special because tests cannot run on Windows. +# On Windows sqlite comes without the required json1-extension. + +# skip if regex does not match against sys.platform string +platform = lin: linux + mac: darwin
diff --git a/groups_manager/tests/test_models.py b/groups_manager/tests/test_models.py index e6c2559..e59e1d8 100644 --- a/groups_manager/tests/test_models.py +++ b/groups_manager/tests/test_models.py @@ -1,6 +1,5 @@ from copy import deepcopy import re -import sys from django.contrib.auth import get_user_model from django.contrib.auth.models import Group as DjangoGroup @@ -40,10 +39,6 @@ def setUp(self): def test_str(self): self.assertEqual(str(self.member), 'Lucio Silla') - def test_unicode(self): - if sys.version_info < (3, ): - self.assertEqual(unicode(self.member), 'Lucio Silla') - def test_full_name(self): self.assertEqual(self.member.full_name, 'Lucio Silla') @@ -103,10 +98,6 @@ def setUp(self): def test_str(self): self.assertEqual(str(self.group_type), 'Organization') - def test_unicode(self): - if sys.version_info < (3, ): - self.assertEqual(unicode(self.group_type), 'Organization') - def test_save(self): self.group_type.codename = '' self.group_type.save() @@ -122,7 +113,8 @@ def test_group_type_groups_reverse(self): g1 = models.Group.objects.create(name='Group 1', group_type=self.group_type) self.assertEqual(list(self.group_type.groups_manager_group_set.all()), [g1]) # Deprecated - self.assertEqual(list(self.group_type.groups.all()), [g1]) + with self.assertWarns(DeprecationWarning): + self.assertEqual(list(self.group_type.groups.all()), [g1]) class TestGroupEntity(TestCase): @@ -134,10 +126,6 @@ def setUp(self): def test_str(self): self.assertEqual(str(self.group_entity), 'Organization Partner') - def test_unicode(self): - if sys.version_info < (3, ): - self.assertEqual(unicode(self.group_entity), 'Organization Partner') - def test_save(self): self.group_entity.save() self.assertEqual(self.group_entity.codename, 'organization-partner') @@ -153,7 +141,8 @@ def test_group_entity_groups_reverse(self): g1.group_entities.add(self.group_entity) self.assertEqual(list(self.group_entity.groups_manager_group_set.all()), [g1]) # Deprecated - self.assertEqual(list(self.group_entity.groups.all()), [g1]) + with self.assertWarns(DeprecationWarning): + self.assertEqual(list(self.group_entity.groups.all()), [g1]) class TestGroup(TestCase): @@ -165,10 +154,6 @@ def setUp(self): def test_str(self): self.assertEqual(str(self.group), 'Istituto di Genomica Applicata') - def test_unicode(self): - if sys.version_info < (3, ): - self.assertEqual(unicode(self.group), 'Istituto di Genomica Applicata') - def test_save(self): group = models.Group.objects.create(name='Istituto di Genomica Applicata') self.assertEqual(group.codename, 'istituto-di-genomica-applicata') @@ -348,10 +333,6 @@ def setUp(self): def test_str(self): self.assertEqual(str(self.group_member_role), 'Administrator') - def test_unicode(self): - if sys.version_info < (3, ): - self.assertEqual(unicode(self.group_member_role), 'Administrator') - def test_save(self): self.group_member_role.save() self.assertEqual(self.group_member_role.codename, 'administrator') @@ -370,13 +351,6 @@ def test_str(self): gm = models.GroupMember.objects.create(group=main, member=m1) self.assertEqual(str(gm), 'Main - Caio Mario') - def test_unicode(self): - if sys.version_info < (3, ): - m1 = models.Member.objects.create(first_name='Caio', last_name='Mario') - main = models.Group.objects.create(name='Main') - gm = models.GroupMember.objects.create(group=main, member=m1) - self.assertEqual(unicode(gm), 'Main - Caio Mario') - def test_groups_membership_django_integration(self): from groups_manager import settings settings.GROUPS_MANAGER = 
deepcopy(GROUPS_MANAGER_MOCK) @@ -415,4 +389,5 @@ def test_member_groups_reverse(self): models.GroupMember.objects.create(group=g2, member=m1) self.assertEqual(list(m1.groups_manager_group_set.all()), [g1, g2]) # Deprecated - self.assertEqual(list(m1.groups.all()), [g1, g2]) + with self.assertWarns(DeprecationWarning): + self.assertEqual(list(m1.groups.all()), [g1, g2]) diff --git a/testproject/requirements.txt b/testproject/requirements.txt index 4a42a3c..e9e6f8b 100644 --- a/testproject/requirements.txt +++ b/testproject/requirements.txt @@ -1,6 +1,11 @@ django>=2 django-guardian django-mptt + +# still used in early migrations jsonfield django-extensions + +# for testing templates and admin UI +django-bootstrap3 diff --git a/testproject/testproject/admin.py b/testproject/testproject/admin.py new file mode 100644 index 0000000..e419059 --- /dev/null +++ b/testproject/testproject/admin.py @@ -0,0 +1,8 @@ +from django.contrib import admin + +from groups_manager import models + +from mptt.admin import MPTTModelAdmin + +admin.site.unregister(models.Group) +admin.site.register(models.Group, MPTTModelAdmin) diff --git a/testproject/testproject/migrations/0003_alter_organizationentitywithmixin_codename_and_more.py b/testproject/testproject/migrations/0003_alter_organizationentitywithmixin_codename_and_more.py new file mode 100644 index 0000000..09b6722 --- /dev/null +++ b/testproject/testproject/migrations/0003_alter_organizationentitywithmixin_codename_and_more.py @@ -0,0 +1,205 @@ +# Generated by Django 4.2.1 on 2023-06-14 12:26 + +from django.conf import settings +from django.db import migrations, models +import django.db.models.deletion +import mptt.fields + + +class Migration(migrations.Migration): + dependencies = [ + ("groups_manager", "0008_1_3_0_jsonfield_from_django"), + migrations.swappable_dependency(settings.AUTH_USER_MODEL), + ("testproject", "0002_organizationgroupmemberwithmixin_expiration_date"), + ] + + operations = [ + migrations.AlterField( + model_name="organizationentitywithmixin", + name="codename", + field=models.SlugField( + blank=True, max_length=255, unique=True, verbose_name="codename" + ), + ), + migrations.AlterField( + model_name="organizationentitywithmixin", + name="label", + field=models.CharField(max_length=255, verbose_name="label"), + ), + migrations.AlterField( + model_name="organizationgroupmemberwithmixin", + name="roles", + field=models.ManyToManyField( + blank=True, to="testproject.organizationmemberrolewithmixin" + ), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="city", + field=models.CharField(blank=True, default="", max_length=200), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="codename", + field=models.SlugField(blank=True, max_length=255, verbose_name="codename"), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="comment", + field=models.TextField(blank=True, default="", verbose_name="comment"), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="description", + field=models.TextField(blank=True, default="", verbose_name="description"), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="django_auth_sync", + field=models.BooleanField(blank=True, default=True), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="full_name", + field=models.CharField( + blank=True, default="", max_length=255, verbose_name="full name" + ), + ), + migrations.AlterField( + 
model_name="organizationgroupwithmixin", + name="group_entities", + field=models.ManyToManyField( + blank=True, + related_name="%(app_label)s_%(class)s_set", + to="testproject.organizationentitywithmixin", + ), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="group_members", + field=models.ManyToManyField( + related_name="%(app_label)s_%(class)s_set", + through="testproject.OrganizationGroupMemberWithMixin", + to="testproject.organizationmemberwithmixin", + ), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="group_type", + field=models.ForeignKey( + blank=True, + null=True, + on_delete=django.db.models.deletion.SET_NULL, + related_name="%(app_label)s_%(class)s_set", + to="groups_manager.grouptype", + ), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="level", + field=models.PositiveIntegerField(editable=False), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="lft", + field=models.PositiveIntegerField(editable=False), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="name", + field=models.CharField(max_length=255, verbose_name="name"), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="parent", + field=mptt.fields.TreeForeignKey( + blank=True, + null=True, + on_delete=django.db.models.deletion.CASCADE, + related_name="sub_%(app_label)s_%(class)s_set", + to="testproject.organizationgroupwithmixin", + verbose_name="parent", + ), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="properties", + field=models.JSONField(blank=True, default=dict, verbose_name="properties"), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="rght", + field=models.PositiveIntegerField(editable=False), + ), + migrations.AlterField( + model_name="organizationgroupwithmixin", + name="short_name", + field=models.CharField(blank=True, default="", max_length=50), + ), + migrations.AlterField( + model_name="organizationmemberrolewithmixin", + name="codename", + field=models.SlugField( + blank=True, max_length=255, unique=True, verbose_name="codename" + ), + ), + migrations.AlterField( + model_name="organizationmemberrolewithmixin", + name="label", + field=models.CharField(max_length=255, verbose_name="label"), + ), + migrations.AlterField( + model_name="organizationmemberwithmixin", + name="django_auth_sync", + field=models.BooleanField( + blank=True, default=True, verbose_name="django auth sync" + ), + ), + migrations.AlterField( + model_name="organizationmemberwithmixin", + name="django_user", + field=models.ForeignKey( + blank=True, + null=True, + on_delete=django.db.models.deletion.SET_NULL, + related_name="%(app_label)s_%(class)s_set", + to=settings.AUTH_USER_MODEL, + ), + ), + migrations.AlterField( + model_name="organizationmemberwithmixin", + name="email", + field=models.EmailField( + blank=True, default="", max_length=255, verbose_name="email" + ), + ), + migrations.AlterField( + model_name="organizationmemberwithmixin", + name="first_name", + field=models.CharField(max_length=255, verbose_name="first name"), + ), + migrations.AlterField( + model_name="organizationmemberwithmixin", + name="last_name", + field=models.CharField(max_length=255, verbose_name="last name"), + ), + migrations.AlterField( + model_name="organizationmemberwithmixin", + name="username", + field=models.CharField( + blank=True, default="", max_length=255, verbose_name="username" + ), + ), + 
migrations.AlterField( + model_name="organizationtypewithmixin", + name="codename", + field=models.SlugField( + blank=True, max_length=255, unique=True, verbose_name="codename" + ), + ), + migrations.AlterField( + model_name="organizationtypewithmixin", + name="label", + field=models.CharField(max_length=255, verbose_name="label"), + ), + ] diff --git a/testproject/testproject/settings.py b/testproject/testproject/settings.py index 7aeed53..b2706cf 100644 --- a/testproject/testproject/settings.py +++ b/testproject/testproject/settings.py @@ -9,8 +9,6 @@ """ import os -import django - try: import guardian has_guardian = True @@ -48,11 +46,14 @@ 'django_extensions', # Uncomment for testing templates, and after a `pip install django-bootstrap3` - # 'bootstrap3', + 'bootstrap3', # App test 'groups_manager', 'testproject', + + # mptt provides the templates for rendering trees in admin + 'mptt', ) if has_guardian: @@ -100,7 +101,6 @@ USE_I18N = True -USE_L10N = True USE_TZ = True @@ -124,7 +124,7 @@ 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', 'django.template.context_processors.request', - ] + ], }, }, ] @@ -160,3 +160,6 @@ 'SLUGIFY_USERNAME_FUNCTION': lambda s: slugify(s, to_lower=True, separator="_") } """ + +# Setting for mptt-tree in admin UI (default: 20) +MPTT_ADMIN_LEVEL_INDENT = 25 diff --git a/testproject/testproject/urls.py b/testproject/testproject/urls.py index 211e6c4..3dc0840 100644 --- a/testproject/testproject/urls.py +++ b/testproject/testproject/urls.py @@ -1,12 +1,12 @@ from django.conf import settings -from django.urls import include, re_path +from django.urls import include, path from django.conf.urls.static import static from django.contrib import admin from testproject import views urlpatterns = [ - re_path(r'^admin/', admin.site.urls), - re_path(r'^$', views.TestView.as_view(), name='home'), - re_path(r'^groups-manager/', include('groups_manager.urls', namespace='groups_manager')), + path('admin/', admin.site.urls), + path('', views.TestView.as_view(), name='home'), + path('groups-manager/', include('groups_manager.urls', namespace='groups_manager')), ] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
Migration being created inside groups_manager/migrations

Hi, first of all, thank you, this library is amazing. I've noticed that my migrations have a dependency on `0007_alter_group_group_entities_alter_group_group_members_and_more.py`. This is being created, but it's difficult to version in git. Is this missing from your repo? Or am I doing something wrong?

```
Migrations for 'groups_manager':
  venv/lib/python3.8/site-packages/groups_manager/migrations/0007_alter_group_group_entities_alter_group_group_members_and_more.py
    - Alter field group_entities on group
    - Alter field group_members on group
    - Alter field group_type on group
    - Alter field id on group
    - Alter field parent on group
    - Alter field id on groupentity
    - Alter field id on groupmember
    - Alter field id on groupmemberrole
    - Alter field id on grouptype
    - Alter field django_user on member
    - Alter field id on member
```

thank you!
Hi @raulsperoni, thank you for opening this issue and raising the missing migration. This is very strange; it must be a change in a recent version of Django causing the differences to be picked up. I will try to investigate soon and will patch accordingly.

I have encountered the same problem with Django 4.2.x, so it appears that a migration is missing. The content of `0007_alter_group_group_entities_alter_group_group_members_and_more.py` is

```python
# Generated by Django 4.1.3 on 2023-01-01 01:45

from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import mptt.fields


class Migration(migrations.Migration):
    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ("groups_manager", "0006_1_0_0_default"),
    ]

    operations = [
        migrations.AlterField(
            model_name="group",
            name="group_entities",
            field=models.ManyToManyField(
                blank=True,
                related_name="%(app_label)s_%(class)s_set",
                to="groups_manager.groupentity",
                verbose_name="group entities",
            ),
        ),
        migrations.AlterField(
            model_name="group",
            name="group_members",
            field=models.ManyToManyField(
                related_name="%(app_label)s_%(class)s_set",
                through="groups_manager.GroupMember",
                to="groups_manager.member",
                verbose_name="group members",
            ),
        ),
        migrations.AlterField(
            model_name="group",
            name="group_type",
            field=models.ForeignKey(
                blank=True,
                null=True,
                on_delete=django.db.models.deletion.SET_NULL,
                related_name="%(app_label)s_%(class)s_set",
                to="groups_manager.grouptype",
                verbose_name="group type",
            ),
        ),
        migrations.AlterField(
            model_name="group",
            name="parent",
            field=mptt.fields.TreeForeignKey(
                blank=True,
                null=True,
                on_delete=django.db.models.deletion.CASCADE,
                related_name="sub_%(app_label)s_%(class)s_set",
                to="groups_manager.group",
                verbose_name="parent",
            ),
        ),
        migrations.AlterField(
            model_name="member",
            name="django_user",
            field=models.ForeignKey(
                blank=True,
                null=True,
                on_delete=django.db.models.deletion.SET_NULL,
                related_name="%(app_label)s_%(class)s_set",
                to=settings.AUTH_USER_MODEL,
                verbose_name="django user",
            ),
        ),
    ]
```

and currently the only way to fix this issue is to copy the migration files somewhere and add

```python
MIGRATION_MODULES = {
    "groups_manager": "path_to.groups_manager.migrations",
}
```
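To make the workaround in that last comment concrete, one way to vendor the migrations locally is sketched below. The dotted path `myproject.vendored.groups_manager_migrations` is purely illustrative (any importable package in your own code base works); it must contain an `__init__.py` plus copies of the migrations shipped with groups_manager and the locally generated `0007_*.py`.

```python
# settings.py -- illustrative sketch of the MIGRATION_MODULES workaround.
# "myproject.vendored.groups_manager_migrations" is a hypothetical package in
# your own repository holding copies of the app's migration files.
MIGRATION_MODULES = {
    "groups_manager": "myproject.vendored.groups_manager_migrations",
}
```

With this setting in place, Django looks up (and writes) migrations for `groups_manager` in the vendored package, so `python manage.py makemigrations groups_manager` produces files inside your own tree rather than inside site-packages, and they can be committed to git.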
2023-06-14T14:41:22
-1.0
pysal/mapclassify
194
pysal__mapclassify-194
['191']
2788475a53726a6e4fb2299f3be103aa48fed7c3
"diff --git a/mapclassify/greedy.py b/mapclassify/greedy.py\nindex 8e8b22e..4854b6e 100644\n--- a/ma(...TRUNCATED)
"diff --git a/mapclassify/tests/test_greedy.py b/mapclassify/tests/test_greedy.py\nindex 7accea5..c2(...TRUNCATED)
"testing with `geopandas.dataset` module – deprecation\n[Some of our tests here](https://github.co(...TRUNCATED)
2023-09-06T10:11:55
-1.0
pysal/mapclassify
135
pysal__mapclassify-135
['141']
bd11fd02a5b79fca3205c298fb730edb16abd3b1
"diff --git a/docs/Makefile b/docs/Makefile\nindex 1e06dbd..979d0c6 100644\n--- a/docs/Makefile\n+++(...TRUNCATED)
"diff --git a/.github/workflows/testing.yml b/.github/workflows/testing.yml\nindex 2e88284..84624e3 (...TRUNCATED)
"love and care for docstrings, etc.\nIn working through #135 and making some doc edits, I'm seeing t(...TRUNCATED)
2022-11-03T01:02:51
-1.0
pangeo-data/climpred
702
pangeo-data__climpred-702
['701']
f0a9e32fa57acbc6067e3f83d1884731307cab8f
"diff --git a/CHANGELOG.rst b/CHANGELOG.rst\nindex a0e94e942..aeb41850c 100644\n--- a/CHANGELOG.rst\(...TRUNCATED)
"diff --git a/ci/requirements/maximum-tests.yml b/ci/requirements/maximum-tests.yml\nindex 609f17beb(...TRUNCATED)
"HindcastEnsemble plot verification times based on alignment \n**Is your feature request related to (...TRUNCATED)
2021-12-04T16:28:07
-1.0
pangeo-data/climpred
675
pangeo-data__climpred-675
['200']
51f9fec691d7a21b7a3332bee0da4cfda96330f2
"diff --git a/CHANGELOG.rst b/CHANGELOG.rst\nindex 78b47c4cc..429b62bbd 100644\n--- a/CHANGELOG.rst\(...TRUNCATED)
"diff --git a/climpred/conftest.py b/climpred/conftest.py\nindex 334e945bb..5eaad9468 100644\n--- a/(...TRUNCATED)
"Check init dimension types\nWe need to check that the `init` dimension comes in as an int/float for(...TRUNCATED)
"This check will be more advanced when going subannual\nReminder @aaronspring that we should do this(...TRUNCATED)
2021-09-24T11:10:06
-1.0
rackerlabs/lambda-uploader
35
rackerlabs__lambda-uploader-35
['31']
c40923a6982a0a3d4fd41b135a4f9b7e97b74f90
"diff --git a/README.md b/README.md\nindex e2feefd..bf0a3bb 100644\n--- a/README.md\n+++ b/README.md(...TRUNCATED)
"diff --git a/test/test_package.py b/test/test_package.py\nindex 1a64572..a293d4f 100644\n--- a/test(...TRUNCATED)
"Option zip and upload only the folder contents\nRight now, uploading grabs a bunch of stuff I don't(...TRUNCATED)
"Hi @dpurrington -- I recently added the ability to [use an existing virtualenv](https://github.com/(...TRUNCATED)
2015-11-20T15:06:36
-1.0
rspeer/langcodes
38
rspeer__langcodes-38
['28']
522592c0442b7ffc928c24b8689f0be9149aff77
"diff --git a/.gitignore b/.gitignore\nindex a9a036af..6713a4fd 100644\n--- a/.gitignore\n+++ b/.git(...TRUNCATED)
"diff --git a/langcodes/tests/test_wikt_languages.py b/langcodes/tests/test_wikt_languages.py\nindex(...TRUNCATED)
"[Question] Is there a way to check that a language code is valid?\nUsing Language.make(x) or Langua(...TRUNCATED)
"I'm also looking for an easy way to validate that a string is valid BCP47.\r\n\r\nWhen input is `\"(...TRUNCATED)
2021-02-09T16:18:11
-1.0
trevorstephens/gplearn
270
trevorstephens__gplearn-270
['257']
11605c4d0954557a26bfd7c1e81e93b2a1975d5f
"diff --git a/.coveragerc b/.coveragerc\nindex 86763ac3..6feb3fba 100644\n--- a/.coveragerc\n+++ b/.(...TRUNCATED)
"diff --git a/gplearn/tests/test_examples.py b/gplearn/tests/test_examples.py\nindex d799b123..f4f3f(...TRUNCATED)
boston dataset being deprecated by scikit-learn. Some tests and examples will need re-work.
"Reference issue on background: https://github.com/scikit-learn/scikit-learn/issues/16155\r\n\r\nNow(...TRUNCATED)
2022-05-02T10:32:12
-1.0