
Remove pystencils.GPU_DEVICE

Merged: Michael Kuron requested to merge device_selection into master
  • SerialDataHandling now performs the device selection upon construction. It can also be constructed with an explicit device number to deviate from the default selection (see the first sketch after this list).
  • For ParallelDataHandling, the assignment of devices to MPI ranks should be handled by Walberla via cudaSetDevice(); it provides selectDeviceBasedOnMpiRank for this purpose. I am not sure that is actually called anywhere -- I think it should be called from MPIManager::initializeMPI. Right now everything probably just ends up on the first GPU. (The second sketch after this list illustrates the rank-based selection.)
  • The kernel wrapper now determines the correct device by inspecting the fields.
  • gpu_indexing_params needs an explicit device number; I don't think any kind of default is reasonable.
  • Some tests now iterate over all devices instead of using a default device. This is actually the right thing to do because it verifies that the device selection works correctly.
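
A minimal sketch of how the explicit selection could look from user code; the device_number keyword and the import path are assumptions based on the description above, not necessarily the exact final API:

```python
import cupy as cp

from pystencils import Target
from pystencils.datahandling.serial_datahandling import SerialDataHandling

# Device selection happens automatically when the data handling is constructed.
dh_default = SerialDataHandling(domain_size=(64, 64), default_target=Target.GPU)

# An explicit device number deviates from the default selection
# (the keyword name is an assumption).
dh_gpu1 = SerialDataHandling(domain_size=(64, 64), default_target=Target.GPU,
                             device_number=1)

# Tests can iterate over all available devices instead of relying on a default:
for device in range(cp.cuda.runtime.getDeviceCount()):
    dh = SerialDataHandling(domain_size=(64, 64), default_target=Target.GPU,
                            device_number=device)
    # ... allocate fields and run the kernel under test on this device ...
```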

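For reference, the rank-to-device assignment that selectDeviceBasedOnMpiRank / cudaSetDevice perform amounts to the following; this is only an illustration in Python, not Walberla's implementation:

```python
from mpi4py import MPI
import cupy as cp

rank = MPI.COMM_WORLD.Get_rank()
num_devices = cp.cuda.runtime.getDeviceCount()

# Round-robin assignment of GPUs to MPI ranks, i.e. the equivalent of calling
# cudaSetDevice(rank % num_devices) on each rank during MPI initialization.
cp.cuda.Device(rank % num_devices).use()
```
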
lbmpy's test_gpu_block_size_limiting.py::test_gpu_block_size_limiting has been failing since !335 (merged), but that is due to an error in the test, which lbmpy!146 (merged) fixes.

Edited by Michael Kuron

Merge request reports

Pipeline #54320 failed

Pipeline failed for bd47d369 on device_selection

Test coverage 86.80% (0.03%) from 1 job
Approval is optional

Merged by Markus Holzer 1 year ago (Jul 13, 2023 7:58am UTC)

Merge details

  • Changes merged into master with 32de591c (commits were squashed).
  • Deleted the source branch.

Pipeline #54331 passed

Pipeline passed for 32de591c on master

Test coverage 86.83% (0.03%) from 1 job
