integrate kernel tests in testthat #371
Conversation
(force-pushed 1cf4190 to e399eee)
|
My take is that this is more complexity than it's worth - we can already run the two sets of tests on Travis without integrating them, and for manual testing I'd rather run two separate straightforward test commands than debug odd cross-language issues. If you want to maintain it, however, that's your lookout. ;-) |
|
what cross-language issues do you mean? i only communicate using strings and dicts of strings, which rPython can handle well. |
(force-pushed a6fb93b to ed332e5)
|
I think a hint is the commit text: "Work around rPython’s insanity" :-) |
|
the insanity being that it’s compiled against a single python version, and there isn’t an rPython3 or the like. it’s really stupid to have to globally decide which python version you want to use. also i’m currently trying to keep the ancient broken readline library from interfering, which isn’t rPython’s fault, but conda’s 😭
(force-pushed 8367c4a to d28265d)
|
Yeah, that's the sort of cross language issues I mean. I'd rather just run the Python tests directly than deal with rPython. If it's really necessary to implement them, why not just use R's subprocess machinery to run python? That seems less faff than doing it in process. |
|
(I'm not saying it's rPython's 'fault' that this is tricky. I'm not interested in assigning blame. It just evidently is tricky, and I don't see any real advantage in dealing with it) |
|
yeah, that would also work, but it would have to start up the python interpreter for each test… and you can blame rPython. it’s obviously a quick hack that was never extended to be more. https://github.com/asieira/SnakeCharmR could be better but it’s not on CRAN |
We can definitely run the whole Python test suite in one interpreter. If the concern is the number of tests it reports having run... who cares? ;-) |
|
what it currently does is convert the python unittest suite into a testthat suite, completely integrated. of course it’s overkill, but it’s also convenient. |
|
I still think complete integration is not worth the complexity. Remember the KISS principle! |
|
what kind of integration do you want to see? i definitely want the kernel tests to be run via |
(force-pushed ab32fc6 to dba2970)
|
If the priority is that the R tests run the Python tests, just run the
whole python test suite in a subprocess and check that it exits with 0.
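The subprocess-and-exit-code approach suggested here can be sketched in Python (in the package itself this would be R's system2 call doing the same thing; the throwaway test file below is purely illustrative, not the kernel's actual suite):

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# A tiny stand-in test file, written to a temp dir purely for illustration;
# in the real package this would be the kernel's existing Python test suite.
test_src = textwrap.dedent("""
    import unittest

    class Smoke(unittest.TestCase):
        def test_passes(self):
            self.assertEqual(1 + 1, 2)

    if __name__ == "__main__":
        unittest.main()
""")

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "test_smoke.py")
    with open(path, "w") as f:
        f.write(test_src)
    # Run the whole suite in one subprocess; the test output goes straight
    # to the console, and all we assert on afterwards is the exit code.
    result = subprocess.run([sys.executable, path])

print("exit code:", result.returncode)
```

On the R side this collapses to a single testthat expectation on the system2 return value, which is the "one test rather than several" trade-off discussed below.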
|
```python
module_suite = unittest.defaultTestLoader.loadTestsFromModule(test_ir)
test_ids = list(get_test_ids(module_suite))

# this has to be this complex since an individual TestCase does not call setUpClass
```
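For context on the snippet above: loadTestsFromModule returns nested TestSuite objects, so a helper like get_test_ids has to flatten them recursively. The helper's body isn't shown in the diff, so this is only a self-contained sketch of what it plausibly does (the test_demo module and Demo class are made up for illustration):

```python
import types
import unittest

# Build a tiny throwaway module so loadTestsFromModule has something to scan;
# in the PR this role is played by the kernel's real test_ir module.
test_mod = types.ModuleType("test_demo")

class Demo(unittest.TestCase):
    def test_one(self):
        pass

    def test_two(self):
        pass

test_mod.Demo = Demo

def get_test_ids(suite):
    """Recursively flatten nested TestSuite objects into test id strings."""
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from get_test_ids(item)
        else:
            yield item.id()

module_suite = unittest.defaultTestLoader.loadTestsFromModule(test_mod)
test_ids = list(get_test_ids(module_suite))
print(test_ids)
```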
|
Another way to do the tests in R would be to create a kernel object and afterwards set the base display functions to something we control (like in IRdisplay) and then implement the (or some) tests in R. This wouldn't test the zmq stuff but basically everything from "execute this code" to "do we get the right stuff back". |
|
very good idea! |
(force-pushed 9b8d5e4 to 69fecab)
|
OK, i switched to a simpler approach: make unittest output text and parse it from R |
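That "output text and parse it" approach corresponds roughly to capturing unittest's TextTestRunner stream and reading the final verdict line; here is a minimal Python-side sketch (the Demo case is illustrative, and the actual R-side parsing code isn't shown in this thread):

```python
import io
import unittest

class Demo(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)

# Run the suite with its text output redirected into a string buffer
# instead of the console, so the result can be inspected afterwards.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(Demo)
stream = io.StringIO()
unittest.TextTestRunner(stream=stream, verbosity=2).run(suite)
output = stream.getvalue()

# The last non-empty line is the overall verdict,
# e.g. "OK" or "FAILED (failures=1)".
verdict = [line for line in output.splitlines() if line.strip()][-1]
print(verdict)
```

An R-side parser would then split this text per test to report each one individually, which is exactly the extra machinery being debated below.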
Why? What's wrong with just running it as a subprocess using system2 and asserting that the exit code is 0? |
|
because i wanted it to be neat. and since it’ll still fail the test if something happens that i didn’t think about, it’s also not a risk |
|
But why add the extra complexity to write the test results in a different format and then parse them out again? All we need to do is get the output to the console, and check whether the test suite as a whole passed or failed. This should be simple, there's no need to come up with Heath Robinson solutions. |
|
Integration. E.g. you can use runthat or other test runners. But also because I felt like it and there’s no downsides. |
|
But if the test is just 'run the Python tests, check that they passed', that should work under any test runner, right? It will only show up as one test rather than several, but that doesn't really matter.
Extra complexity is a downside by itself! |
|
then sending everything but basic ZMQ functionality tests through the jupyter kernel tests interface is overkill as well. |
|
There is a need for some integration tests that start a kernel and communicate with it, because that's the contract the kernel has with the Jupyter system as a whole. Once the machinery to do that is in place, there's little extra complexity involved in writing more tests using it. But sure, I'd be happy to have the bulk of the tests in R and relatively few in Python, so long as we're exercising the ZMQ interface properly. If there's a good reason for adding complexity, we can. But we're not building random extra complexity on a whim. This is getting rancorous, and it's dragged on long enough. I'm going to push to this branch to do what I mean. |
|
could you maybe use another one, i’d like to try one more thing here 😜 |
|
Merged in #375 |
and updated Travis with it