Dockerfile-CUDA12.1

FROM nvidia/cuda:12.1.0-devel-ubuntu20.04
LABEL github="https://github.com/mlcommons/GaNDLF"
LABEL docs="https://mlcommons.github.io/GaNDLF/"
LABEL version=1.0
# Install instructions for the NVIDIA Container Toolkit, allowing you to use the host's GPU: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
# Note that to do this on a Windows host you need the experimental "CUDA on WSL" feature -- not yet stable.
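# A minimal host-side sketch, not executed during the build: once the toolkit is installed,
# the host GPU is typically exposed to the container via the --gpus flag. The tag
# "gandlf:cuda12.1" is only an illustrative name, not something this file defines:
#   docker run --rm --gpus all gandlf:cuda12.1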
ENV DEBIAN_FRONTEND=noninteractive
# Explicitly install python3.9 (the PyTorch 2.1.2 wheels installed below are built against CUDA 12.1)
RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get update && apt-get install -y python3.9 python3-pip libjpeg8-dev zlib1g-dev python3-dev libpython3.9-dev libffi-dev libgl1
RUN python3.9 -m pip install --upgrade pip
RUN python3.9 -m pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu121
RUN python3.9 -m pip install openvino-dev==2023.0.1 opencv-python-headless mlcube_docker
# Do some dependency installation separately here to make layer caching more efficient
COPY ./setup.py ./setup.py
RUN python3.9 -c "from setup import requirements; file = open('requirements.txt', 'w'); file.writelines([req + '\n' for req in requirements]); file.close()" \
    && python3.9 -m pip install -r ./requirements.txt
COPY . /GaNDLF
WORKDIR /GaNDLF
RUN python3.9 -m pip install -e .
# ENTRYPOINT forces all commands given via "docker run" to go through python; CMD supplies the default entrypoint argument, gandlf_run.
# If a user calls "docker run gandlf:[tag] gandlf_anonymize", it will resolve to running "python3.9 gandlf_anonymize" instead.
# CMD is inherently overridden by args to "docker run"; the ENTRYPOINT stays constant.
# (Both use exec form so that run-time arguments are actually appended to the entrypoint.)
ENTRYPOINT ["python3.9"]
CMD ["gandlf_run"]
# The lines below force the container commands to run as a nonroot user with UID > 10000.
# This greatly reduces the probability of a UID collision between container and host, helping prevent privilege escalation attacks.
# As a side benefit, it also makes it less likely that users on a cluster lose access to their files.
# See https://github.com/hexops/dockerfile as a best practices guide.
#RUN addgroup --gid 10001 --system nonroot \
# && adduser --uid 10000 --system --ingroup nonroot --home /home/nonroot nonroot
#USER nonroot
# Prepare the container for possible model embedding later.
RUN mkdir /embedded_model
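
A minimal usage sketch, not part of the file above: the image tag "gandlf:cuda12.1" is only an illustrative name, the build is assumed to run from the root of a GaNDLF checkout (the Dockerfile copies "." into /GaNDLF), and the exact gandlf_* arguments are those documented at https://mlcommons.github.io/GaNDLF/.

    # Build the CUDA 12.1 image from the repository root
    docker build -t gandlf:cuda12.1 -f Dockerfile-CUDA12.1 .

    # Run with the default CMD (gandlf_run); the NVIDIA Container Toolkit exposes the host GPU via --gpus
    docker run --rm --gpus all gandlf:cuda12.1 gandlf_run --help

    # Any argument after the image name overrides CMD and is passed to the python entrypoint instead
    docker run --rm --gpus all gandlf:cuda12.1 gandlf_anonymize --help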