Pytest: Nested use of request fixture

I’d like to get some help on how to run pytest tests with multiple layers of parameterized fixtures. I have a global request-based fixture for selecting a base implementation of the system I’d like …
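The question above cuts off before any code, but layering parametrized fixtures via `request.param` can be sketched roughly like this (the backend classes and the `fast`/`safe` modes are illustrative, not from the original question):

```python
import pytest

# Hypothetical base implementations the whole suite should run against.
class InMemoryBackend:
    name = "memory"

class FileBackend:
    name = "file"

# Outer layer: parametrize over the base implementation.
@pytest.fixture(params=[InMemoryBackend, FileBackend])
def backend(request):
    return request.param()

# Inner layer: parametrize again. Every test that requests
# `configured_backend` runs once per (backend, mode) combination.
@pytest.fixture(params=["fast", "safe"])
def configured_backend(request, backend):
    backend.mode = request.param
    return backend

def test_backend_has_mode(configured_backend):
    assert configured_backend.mode in ("fast", "safe")
```

Because the inner fixture depends on the outer one, pytest builds the cross product of both parameter lists automatically.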

pytest breaks with pip install

I’m working in a repo that has tests and some test helper libraries in a directory named test. Mysteriously, pip install …

Pytest: Mock multiple calls of same method with different side_effect

I have a unit test like so below: # def get_side_effects(): def side_effect_func3(self): # Need the "self" to do some stuff at run time. return {"…

__init__.py needed for imports to work in Pytest, but using Python 3.8

I have read that after Python 3.3, __init__.py is not required anymore, so I am not understanding why I need to add it for my pytests to work. Directory structure: |- root/ |- foo/ …
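The earlier question about mocking multiple calls of the same method can be sketched as follows; the `Client`/`fetch` names are placeholders, but the `side_effect` mechanics are standard `unittest.mock`:

```python
from unittest.mock import MagicMock, patch

class Client:
    def fetch(self, key):
        raise RuntimeError("real network call")

# side_effect as a list: each successive call consumes the next item.
mock_client = MagicMock(spec=Client)
mock_client.fetch.side_effect = [{"a": 1}, {"b": 2}, KeyError("gone")]

assert mock_client.fetch("x") == {"a": 1}
assert mock_client.fetch("y") == {"b": 2}
# An exception instance in the list is raised instead of returned.
try:
    mock_client.fetch("z")
except KeyError:
    pass

# side_effect as a function: compute the return value at call time.
# With autospec=True the mock is called with `self`, as in the question.
with patch.object(Client, "fetch", autospec=True) as mocked:
    mocked.side_effect = lambda self, key: {key: "computed"}
    assert Client().fetch("z") == {"z": "computed"}
```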

Python add pytest --black to test suite

I use pytest for testing my Python project. What I want to do is to add to my test suite a check for whether my code is formatted with Black or not. When I run the command pytest --black, my whole project is tested as I want. How can I add this check to my test suite so the same thing happens when I run the plain pytest command? Answer pytest.ini has an addopts key which does literally what it says on the tin: it adds options to the pytest command line you used. So at the
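Assuming the pytest-black plugin is already installed (which is what makes the `--black` flag available), the answer's addopts approach amounts to a pytest.ini along these lines:

```ini
[pytest]
addopts = --black
```

With this in place, a plain `pytest` invocation behaves as if `--black` had been passed on the command line.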

python3.7 subprocess failed to delete files for me

I have a Python script using subprocess to run Linux commands to confirm my task is doing the right thing, and it worked well. But I found that my task also generates some log files while it runs, so I added a cleanup function to rm the log files at the beginning. My script is: but it turns out it did not remove the log files after I added the first test; there are still more and more log files in my build dir. I run it using: My question is: 1. Why did not
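The excerpt cuts off before the script, but a common cause of exactly this symptom is passing a glob like `*.log` to rm in an argument list, where no shell ever expands it. A sketch of the pitfall and a shell-free fix (directory and file names here are made up):

```python
import pathlib
import subprocess
import tempfile

build_dir = pathlib.Path(tempfile.mkdtemp())
(build_dir / "task1.log").write_text("old log")

# Pitfall: with an argument list there is no shell, so no glob expansion.
# rm is asked to delete a file literally named "*.log", fails, and the
# real logs survive.
result = subprocess.run(["rm", str(build_dir / "*.log")],
                        capture_output=True, text=True)
assert result.returncode != 0
assert (build_dir / "task1.log").exists()

# Portable fix: let Python do the globbing instead of relying on a shell.
for log in build_dir.glob("*.log"):
    log.unlink()
assert not list(build_dir.glob("*.log"))
```

Alternatives are `subprocess.run("rm *.log", shell=True, cwd=build_dir)`, which does invoke a shell, but doing the glob in Python avoids shell quoting issues entirely.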

Pytest + mock: patch does not work without with clause

I’m testing complex logic that requires joining a central fact table with 10-20 smaller dimensional tables. I want to mock those 10-20 smaller tables. How do I patch methods’ return values in a for loop? See code below. P.S. Alternatively I can try to mock BaseClass.load, but then I don’t know how to return a different data set for a different table (class). Answer Under the assumption that do_join shall be called outside the loop, with all tables patched, you could write a fixture that uses contextlib.ExitStack to set up all mocks: This means that all mocks are still active
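The answer's code is not shown in the excerpt, but the ExitStack idea can be sketched like this; the module paths and the `load` method are placeholders standing in for the question's dimensional tables:

```python
import contextlib
from unittest.mock import patch

import pytest

# Hypothetical table classes mapped to the canned rows each should return.
TABLES = {
    "myproject.tables.Customers": [{"id": 1, "name": "Ada"}],
    "myproject.tables.Products": [{"id": 7, "name": "Widget"}],
}

@pytest.fixture
def patched_tables():
    # ExitStack enters every patch and keeps them all active at once;
    # they are unwound together when the fixture finalizes.
    with contextlib.ExitStack() as stack:
        mocks = {
            path: stack.enter_context(
                patch(path + ".load", return_value=rows))
            for path, rows in TABLES.items()
        }
        yield mocks

# A test using the fixture sees every table's load() patched for its
# whole duration, so do_join can be called once, outside any loop.
```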

Coverage for one-liner if statement

Static code analyzers for Python do not generate branches in the following case (a one-liner if). Is this by design? Even coverage gives 100% coverage when only one case is tested. Can anyone please shed some light on this? Answer The first comment says "does not create a control structure." I don’t know what that means. Whether it’s one line or many lines, the compiled byte code is nearly the same: The reason coverage can’t tell you about the one-line case is the trace function that Python uses to tell what is going on. The trace function
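The trace-function point can be demonstrated directly with `sys.settrace`, which reports the same per-line events that line-based coverage relies on (the two functions below are illustrative):

```python
import sys

def one_liner(x):
    if x: return "yes"
    return "no"

def two_lines(x):
    if x:
        return "yes"
    return "no"

def trace_lines(func, arg):
    """Collect the line numbers (relative to the def) reported while
    calling func(arg) -- the signal a line-based coverage tool sees."""
    seen = set()
    def tracer(frame, event, _arg):
        if event == "line":
            seen.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(arg)
    finally:
        sys.settrace(None)
    return seen

# For the one-liner, the condition and the body share one line, so a
# single call already reports that line as executed and the branch taken
# is invisible. The multi-line version reports the `if` line and the
# body line separately, so an untested branch shows up as a missed line.
assert len(trace_lines(one_liner, True)) < len(trace_lines(two_lines, True))
```

Branch coverage (`coverage run --branch`) works around this by tracking line-to-line transitions rather than single line events.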

pytest requires Python '>=3.5' but the running Python is 2.7.10

I’m trying to install pytest using pip but running into this error: Pretty sure pytest is compatible with Python 2. Any reason why I am not able to install it on my machine? As you can see in the error, I am running Python 2.7.10 and have no issue installing other packages. Answer Quoting the changelog: The 4.6.X series will be the last series to support Python 2 and Python 3.4. Therefore, use pip install "pytest<5" to install the latest pytest version that supports Python 2.7.

How to run script as pytest test

Suppose I have a test expressed as a simple script with assert statements (see Background for why), e.g.: How would I include this script in my pytest test suite — in a nice way? I have tried two working but less-than-nice approaches: One approach is to name the script like a test, but this makes the whole pytest discovery fail when the test fails. My current approach is to import the script from within a test function: This works, but the scripts are not reported individually and test failures have a long and winding stack trace: Background The reason I have
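One way to get per-script reporting, given the constraints above, is to generate one parametrized test per script with `runpy` (the `scripts/` directory name is hypothetical):

```python
import pathlib
import runpy

import pytest

# Hypothetical location of the assert-style scripts.
SCRIPTS = sorted(pathlib.Path("scripts").glob("*.py"))

@pytest.mark.parametrize("script", SCRIPTS, ids=lambda p: p.name)
def test_script(script):
    # run_path executes the file as __main__, so a failing assert inside
    # the script fails this one test, and each script shows up in the
    # pytest report under its own file name.
    runpy.run_path(str(script), run_name="__main__")
```

Unlike importing the scripts, this keeps discovery intact (only `test_script` is collected) and the `ids` argument makes each script appear individually in the test output.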