Remove pystencils.GPU_DEVICE
- `SerialDataHandling` now performs the device selection upon construction. It can also be constructed with an explicit device number to deviate from the default selection.
- For `ParallelDataHandling`, the assignment of devices to MPI ranks should be handled by Walberla by calling `cudaSetDevice()`. It has `selectDeviceBasedOnMpiRank` for this purpose. I am not sure it actually calls it; I think it should be called from `MPIManager::initializeMPI`. Right now everything probably just ends up on the first GPU.
- The kernel wrapper now determines the correct device by inspecting the fields.
- `gpu_indexing_params` needs an explicit device number; I don't think any kind of default is reasonable.
- Some tests now iterate over all devices instead of using a default device. This is actually the right thing to do because it tests whether the device selection works correctly.
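The construction-time selection described in the first bullet can be sketched as follows. `DeviceSelector` and its parameters are illustrative stand-ins, not the actual pystencils API:

```python
from typing import Optional


class DeviceSelector:
    """Illustrative stand-in for SerialDataHandling's construction-time
    device selection; the class and parameter names are hypothetical."""

    def __init__(self, device_count: int, device_number: Optional[int] = None):
        if device_count <= 0:
            raise ValueError("no GPU devices available")
        if device_number is None:
            # Default selection: fall back to the first available device.
            device_number = 0
        if not 0 <= device_number < device_count:
            raise ValueError(f"device {device_number} out of range")
        self.device_number = device_number
```

Passing an explicit `device_number` overrides the default, mirroring the "deviate from the default selection" behaviour described above.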
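The rank-to-device assignment that `selectDeviceBasedOnMpiRank` is expected to perform amounts to a round-robin mapping. This is a sketch of that logic, not Walberla code:

```python
def select_device_based_on_mpi_rank(rank: int, device_count: int) -> int:
    """Round-robin mapping of MPI ranks to GPU device numbers, i.e. the
    behaviour selectDeviceBasedOnMpiRank is presumed to provide.
    Without such a call, every rank ends up using device 0."""
    if device_count <= 0:
        raise ValueError("no GPU devices available")
    return rank % device_count
```

In the real code the result would then be passed to `cudaSetDevice()` on each rank.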
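Determining the correct device by inspecting the fields can be illustrated with a small stand-in; real GPU arrays expose the device differently (e.g. cupy's `array.device`), so the `device_id` attribute here is hypothetical:

```python
from dataclasses import dataclass
from typing import Iterable


@dataclass
class FakeGpuField:
    """Stand-in for a GPU-resident field; the device_id attribute is
    hypothetical, used only to illustrate the inspection logic."""
    device_id: int


def infer_kernel_device(fields: Iterable[FakeGpuField]) -> int:
    """Pick the device all kernel arguments live on; mixing devices in
    a single kernel call is an error."""
    devices = {f.device_id for f in fields}
    if len(devices) != 1:
        raise ValueError(f"fields live on multiple devices: {sorted(devices)}")
    return next(iter(devices))
```

This is the kind of check a kernel wrapper can do to select the launch device without any global default.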
lbmpy's `test_gpu_block_size_limiting.py::test_gpu_block_size_limiting` fails since !335 (merged), but that is due to an error in the test, which lbmpy!146 (merged) fixes.
Edited by Michael Kuron