Intel® DevCloud
Help for those needing help starting or connecting to the Intel® DevCloud

Can you enable the NVIDIA GPU on my system?

HEAD-SHIP-CIC
Beginner
Cell In[11], line 96, in main(kwargs)
     94 args.nClasses = len(args.alphabet)
     95 model = CRNN(args)
---> 96 model = model.cuda()
     97 resume_file = os.path.join(args.save_dir, args.name, 'crnn.ckpt')
     98 if os.path.isfile(resume_file):

File ~/.local/lib/python3.9/site-packages/torch/nn/modules/module.py:749, in Module.cuda(self, device)
    732 def cuda(self: T, device: Optional[Union[int, device]] = None) -> T:
    733     r"""Moves all model parameters and buffers to the GPU.
    734 
    735     This also makes associated parameters and buffers different objects. So
   (...)
    747         Module: self
    748     """
--> 749     return self._apply(lambda t: t.cuda(device))

File ~/.local/lib/python3.9/site-packages/torch/nn/modules/module.py:641, in Module._apply(self, fn)
    639 def _apply(self, fn):
    640     for module in self.children():
--> 641         module._apply(fn)
    643     def compute_should_use_set_data(tensor, tensor_applied):
    644         if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):
    645             # If the new tensor has compatible tensor type as the existing tensor,
    646             # the current behavior is to change the tensor in-place using `.data =`,
   (...)
    651             # global flag to let the user control whether they want the future
    652             # behavior of overwriting the existing tensor or not.

File ~/.local/lib/python3.9/site-packages/torch/nn/modules/module.py:641, in Module._apply(self, fn)
    639 def _apply(self, fn):
    640     for module in self.children():
--> 641         module._apply(fn)
    643     def compute_should_use_set_data(tensor, tensor_applied):
    644         if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):
    645             # If the new tensor has compatible tensor type as the existing tensor,
    646             # the current behavior is to change the tensor in-place using `.data =`,
   (...)
    651             # global flag to let the user control whether they want the future
    652             # behavior of overwriting the existing tensor or not.

File ~/.local/lib/python3.9/site-packages/torch/nn/modules/module.py:664, in Module._apply(self, fn)
    660 # Tensors stored in modules are graph leaves, and we don't want to
    661 # track autograd history of `param_applied`, so we have to use
    662 # `with torch.no_grad():`
    663 with torch.no_grad():
--> 664     param_applied = fn(param)
    665 should_use_set_data = compute_should_use_set_data(param, param_applied)
    666 if should_use_set_data:

File ~/.local/lib/python3.9/site-packages/torch/nn/modules/module.py:749, in Module.cuda.<locals>.<lambda>(t)
    732 def cuda(self: T, device: Optional[Union[int, device]] = None) -> T:
    733     r"""Moves all model parameters and buffers to the GPU.
    734 
    735     This also makes associated parameters and buffers different objects. So
   (...)
    747         Module: self
    748     """
--> 749     return self._apply(lambda t: t.cuda(device))

File ~/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:229, in _lazy_init()
    227 if 'CUDA_MODULE_LOADING' not in os.environ:
    228     os.environ['CUDA_MODULE_LOADING'] = 'LAZY'
--> 229 torch._C._cuda_init()
    230 # Some of the queued calls may reentrantly call _lazy_init();
    231 # we need to just return without initializing in that case.
    232 # However, we must not let any *other* threads in!
    233 _tls.is_initializing = True

RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

I don't have an NVIDIA GPU on my system. I've switched to a new computer because of a technical difficulty. I'm a participant in the Intel oneAPI hackathon.

Please enable this so I can resume my work.
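For reference, the hard `model.cuda()` call in the traceback fails whenever no NVIDIA driver is present. A minimal device-agnostic sketch (using `nn.Linear` as a stand-in for the actual CRNN model, which is not shown here) falls back to CPU instead of crashing:

```python
import torch
import torch.nn as nn

# Minimal sketch: pick CUDA only if a driver and GPU are actually available;
# otherwise stay on CPU. nn.Linear stands in for the CRNN model.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(4, 2).to(device)  # .to() is safe on both CPU and GPU

x = torch.randn(1, 4, device=device)
out = model(x)
print(out.shape)  # torch.Size([1, 2])
```

The same `device` object can then be used for input tensors and checkpoint loading (`torch.load(..., map_location=device)`), so the script runs unchanged on machines with or without an NVIDIA GPU.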

- Aswin

1 Reply
JaideepK_Intel
Moderator

Hi,


Thank you for posting in the Intel Communities.


All hackathon questions are answered on Discord, and one of the organizers has contacted you about this issue. You can continue this discussion in that Discord thread.


Discord link:

https://discord.gg/ycwqTP6


If you need any additional information, please post a new question, as this thread will no longer be monitored by Intel.


Thanks,

Jaideep

