
integrate kernel tests in testthat #371

Closed
flying-sheep wants to merge 1 commit into master from testthat

Conversation

@flying-sheep
Member

and updated travis with it

@flying-sheep flying-sheep force-pushed the testthat branch 5 times, most recently from 1cf4190 to e399eee Compare July 1, 2016 15:53
@takluyver
Member

My take is that this is more complexity than it's worth - we can already run the two sets of tests on Travis without integrating them, and for manual testing I'd rather run two separate straightforward test commands than debug odd cross-language issues. If you want to maintain it, however, that's your lookout. ;-)

@flying-sheep
Member Author

what cross-language issues do you mean?

i only communicate using strings and dicts of strings, which rPython can handle well.

@flying-sheep flying-sheep force-pushed the testthat branch 8 times, most recently from a6fb93b to ed332e5 Compare July 2, 2016 08:19
@jankatins
Contributor

I think a hint is the commit text: "Work around rPython’s insanity" :-)

@flying-sheep
Member Author

the insanity being that it’s compiled against a single python version, and there isn’t a rPython3 or so. it’s really stupid to have to globally decide which python version you want to use.

also i’m currently trying to keep the ancient broken readline library from interfering, which isn’t rPython’s fault, but conda’s 😭

@flying-sheep flying-sheep force-pushed the testthat branch 2 times, most recently from 8367c4a to d28265d Compare July 2, 2016 08:51
@takluyver
Member

Yeah, that's the sort of cross language issues I mean. I'd rather just run the Python tests directly than deal with rPython.

If it's really necessary to implement them, why not just use R's subprocess machinery to run python? That seems less faff than doing it in process.

@takluyver
Member

(I'm not saying it's rPython's 'fault' that this is tricky. I'm not interested in assigning blame. It just evidently is tricky, and I don't see any real advantage in dealing with it)

@flying-sheep
Member Author

yeah, that would also work, but it would have to start up the python interpreter for each test…

and you can blame rPython. it’s obviously a quick hack that was never extended to be more. https://github.com/asieira/SnakeCharmR could be better but it’s not on CRAN

@takluyver
Member

yeah, that would also work, but it would have to start up the python interpreter for each test…

We can definitely run the whole Python test suite in one interpreter. If this is something to do with the number of tests it shows it's run... who cares? ;-)
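[Editorial note: running the whole suite in one interpreter is the standard unittest pattern. A minimal sketch of the mechanics — the `DummyKernelTest` class is a hypothetical stand-in for the real kernel tests in `test_ir.py`:]

```python
import io
import unittest

# Hypothetical stand-in for the real test module; only the mechanics matter.
class DummyKernelTest(unittest.TestCase):
    def test_execute(self):
        self.assertEqual(1 + 1, 2)

# Load every test from the case/module and run it all in this one process.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DummyKernelTest)
runner = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0)
result = runner.run(suite)

# One boolean summarises the whole suite - enough to fail a wrapping harness.
print(result.wasSuccessful())
```

The single `wasSuccessful()` flag is the point: a wrapper never needs per-test detail to know whether the suite passed.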

@flying-sheep
Member Author

what it currently does is converting the python unittest suite into a testthat suite, completely integrated. of course it’s overkill, but it’s also convenient.

@takluyver
Member

I still think complete integration is not worth the complexity. Remember the KISS principle!

@flying-sheep
Member Author

what kind of integration do you want to see?

i definitely want the kernel tests to be run via devtools::test()

@flying-sheep flying-sheep force-pushed the testthat branch 2 times, most recently from ab32fc6 to dba2970 Compare July 2, 2016 09:48
@takluyver
Member

takluyver commented Jul 2, 2016 via email

Comment thread: tests/testthat/get-tests.py (Outdated)
module_suite = unittest.defaultTestLoader.loadTestsFromModule(test_ir)
test_ids = list(get_test_ids(module_suite))

# this has to be this compex since an individual TestCase does not call setUpClass
Contributor


s/compex/complex/

@jankatins
Contributor

jankatins commented Jul 4, 2016

Another way to do the tests in R would be to create a kernel object and afterwards set the base display functions to something we control (like in IRdisplay) and then implement the (or some) tests in R.

This wouldn't test the zmq stuff, but basically everything from "execute this code" to "do we get the right stuff back".

@flying-sheep
Member Author

very good idea!

@flying-sheep flying-sheep force-pushed the testthat branch 2 times, most recently from 9b8d5e4 to 69fecab Compare July 4, 2016 19:26
@flying-sheep
Member Author

OK, i switched to a simpler approach: make unittest output text and parse it from R
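[Editorial note: the PR's actual parser is written in R and isn't shown in this thread. As an illustration of what unittest's text report looks like and how it can be parsed, a Python sketch — the regex is an assumption about the verbosity-2 format, where each test prints a `name (...) ... ok` line:]

```python
import io
import re
import unittest

# Hypothetical tests just to generate a realistic text report.
class DummyTest(unittest.TestCase):
    def test_pass(self):
        self.assertTrue(True)
    def test_also_pass(self):
        self.assertEqual('a' * 2, 'aa')

# verbosity=2 prints one status line per test into the stream.
stream = io.StringIO()
unittest.TextTestRunner(stream=stream, verbosity=2).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DummyTest)
)

# Pull "test name -> status" pairs back out of the plain-text report.
pattern = re.compile(r'^(test_\w+) \(.*\) \.\.\. (ok|FAIL|ERROR)', re.M)
results = dict(pattern.findall(stream.getvalue()))
print(results)
```

An R-side parser would do the equivalent over the subprocess's captured output, turning each status line into one testthat expectation.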

@takluyver
Member

make unittest output text and parse it from R

Why? What's wrong with just running it as a subprocess using system2 and asserting that the exit code is 0?
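[Editorial note: the suggestion rests on unittest's exit-code contract: `python -m unittest` (or `unittest.main()`) exits 0 iff the suite passed, so the R side would only need `system2(...) == 0`. A sketch of that contract, using Python's `subprocess` in place of R's `system2`; the inline one-test scripts are made up for illustration:]

```python
import subprocess
import sys

# A suite where every test passes: the interpreter exits with code 0.
passing = subprocess.run(
    [sys.executable, '-c',
     'import unittest\n'
     'class T(unittest.TestCase):\n'
     '    def test_ok(self): pass\n'
     'unittest.main()'],
    capture_output=True,
)

# A suite with a failure: unittest.main() exits with a non-zero code.
failing = subprocess.run(
    [sys.executable, '-c',
     'import unittest\n'
     'class T(unittest.TestCase):\n'
     '    def test_bad(self): self.fail()\n'
     'unittest.main()'],
    capture_output=True,
)

print(passing.returncode, failing.returncode)
```

The R equivalent would be a single expectation along the lines of `expect_equal(system2("python", c("-m", "unittest", "test_ir")), 0L)`.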

@flying-sheep
Member Author

because i wanted it to be neat.

and since it’ll still fail the test if something happens that i didn’t think about, it’s also not a risk

@takluyver
Member

But why add the extra complexity to write the test results in a different format and then parse them out again? All we need to do is get the output to the console, and check whether the test suite as a whole passed or failed. This should be simple, there's no need to come up with Heath Robinson solutions.

@flying-sheep
Member Author

flying-sheep commented Jul 5, 2016

Integration. E.g. you can use runthat or other test runners. But also because I felt like it and there’s no downsides.

@takluyver
Member

But if the test is just 'run the Python tests, check that they passed', that should work under any test runner, right? It will only show up as one test rather than several, but that doesn't really matter.

there’s no downsides.

Extra complexity is a downside by itself!

@flying-sheep
Member Author

then sending everything but basic ZMQ functionality tests through the jupyter kernel tests interface is overkill as well.

@takluyver
Member

There is a need for some integration tests that start a kernel and communicate with it, because that's the contract the kernel has with the Jupyter system as a whole. Once the machinery to do that is in place, there's little extra complexity involved in writing more tests using it. But sure, I'd be happy to have the bulk of the tests in R and relatively few in Python, so long as we're exercising the ZMQ interface properly.

If there's a good reason for adding complexity, we can. But we're not building random extra complexity on a whim.

This is getting rancorous, and it's dragged on long enough. I'm going to push to this branch to do what I mean.

@flying-sheep
Member Author

could you maybe use another one, i’d like to try one more thing here 😜

@takluyver
Member

Merged in #375

@takluyver takluyver closed this Jul 5, 2016