
rhubrak

A member registered Aug 13, 2023

Recent community posts

Thanks for making this game, it's an incredible piece of technology. I had looked at GPT and thought it would be cool if Commodore 64-era text adventures could be hooked up to it, so I was quite excited to come across your game.

I have been having trouble running the stable 1.5.1 release of AIdventure with CUDA (in both the paid and demo versions). I can play the stable release on CPU. The experimental release does make CUDA available, but I run into a separate issue there: the game crashes.

I have an NVIDIA GeForce RTX 3070, and outside of the game I have successfully installed a PyTorch environment where CUDA works. I'm testing with the distilgpt2 model, as per your troubleshooting suggestions elsewhere.

Here are the troubleshooting steps I've tried:

==Start afresh==
For the logs below, I deleted all installations of AIdventure, deleted the .local/share/aidventure config files, and rebooted.

==Initial issue on the stable version of the game==
Installed AIdventure and the cowwarts scenario, then installed the distilgpt2 model.

From the installer log: [2023.08.13 20:30:42] | DEBUG | [installer.gd] [start_installation] >> Is cuda enabled: True

OpenGL ES 3.0 Renderer: NVIDIA GeForce RTX 3070/PCIe/SSE2

Game version: 1.5.1

OS: Linux

Press F1 to close the console.

remote_server : false

Server started ws://0.0.0.0:9999

Client 127.0.0.1 connected

Model successfully loaded from local file

Is CUDA available: False

Is GPU enabled for the generator: False


The game works fine with CPU, however.

Opening ./mamba/envs/aidventure/bin/python3 and running "import torch; torch.cuda.is_available()" returns False, and "torch.zeros(1).cuda()" raises "AssertionError: Torch not compiled with CUDA enabled".
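
Put together, the check I'm running in the game's bundled interpreter is just this (a minimal sketch; the interpreter path is the one from my stable install, and the comments describe what I see on my machine):

# Run with ./mamba/envs/aidventure/bin/python3 from the stable 1.5.1 install.
import torch

print(torch.__version__)          # handy for telling a CPU-only torch build apart from a CUDA one
print(torch.cuda.is_available())  # prints False here
torch.zeros(1).cuda()             # raises AssertionError: Torch not compiled with CUDA enabled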

==Is it my computer?==
Outside of the game, I installed PyTorch and CUDA in my local Python environment. I did have some driver issues, which I resolved. In that local environment I was able to get torch.cuda.is_available() to return True and could compute torch.zeros(1).cuda().
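
The equivalent check in that local environment, after sorting out the drivers, looks like this (a quick sketch; device index 0 is assumed because the 3070 is my only GPU):

import torch

print(torch.cuda.is_available())      # True in my local environment
print(torch.cuda.get_device_name(0))  # reports the RTX 3070
x = torch.zeros(1).cuda()             # succeeds and places the tensor on the GPU
print(x.device)                       # cuda:0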

==Do later updates in experimental fix it?==

Installed experimental to a clean folder and rebooted.

From the installer log: [2023.08.13 21:08:38] | DEBUG | [installer.gd] [start_installation] >> Is cuda enabled: True

I understand that experimental releases don't come with promises of stability. It does look like the new conda config file with updated dependencies resolves my CUDA issue: opening ./mamba/envs/aidventure/bin/python3 and running "import torch; torch.cuda.is_available()" returns True, and the game recognises CUDA as available.
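
Concretely, the same style of check against the experimental build's bundled interpreter (torch.version.cuda simply reports which CUDA toolkit that torch build was compiled against):

# Run with ./mamba/envs/aidventure/bin/python3 from the experimental install.
import torch

print(torch.cuda.is_available())  # True with the experimental dependencies
print(torch.version.cuda)         # the CUDA version this torch build targets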

However, I run into a separate error.

I first opened the game and started a new cowwarts file. This ran into a server error:

Godot Engine v3.5.1.stable.official.6fed1ffa3 - https://godotengine.org

OpenGL ES 3.0 Renderer: NVIDIA GeForce RTX 3070/PCIe/SSE2

Async. shader compilation: OFF

AIdventure (Godot 3.5.1 stable)

Type help to get more information about usage

2023/08/13 21:22:55

Game version: 1.5.2-BETA.1

OS: Linux

Press F1 to close the console.

remote_server : false

ERROR: Connection lost with the server. clean: False

   at: call (modules/gdscript/gdscript_functions.cpp:775)

connection closed

ERROR: File must be opened before use.

   at: get_as_text (core/bind/core_bind.cpp:2092)

ERROR: Couldn't read res://server_logs.text Error: 7

   at: call (modules/gdscript/gdscript_functions.cpp:775)
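
To rule out a plain connectivity problem, I can poke the Python server directly with the websockets package it ships with (a rough probe, assuming the experimental build listens on the same ws://0.0.0.0:9999 address the stable log shows; it only tells me whether the socket accepts a connection, not whether a model is loaded):

import asyncio
import websockets  # the same package the game's server uses

async def probe():
    try:
        # Try to open a connection to the local AIdventure server.
        async with websockets.connect("ws://127.0.0.1:9999") as socket:
            print("connected to", socket.remote_address)
    except Exception as error:
        print("could not connect:", error)

asyncio.run(probe())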

   

I then exit and try to load the game I just created, and run into further errors.

Client 127.0.0.1 connected

This brings up the model selection. I try to install distilgpt2.

This downloads the files to the cache in .local/share/aidventure/cache. However, the game has trouble moving them into the models folder (I describe a manual workaround attempt at the end of this post):

Error renaming user://cache/config.json into /media/user/HDD/aidventure-experimental/models/LyaaaaaGames/distilgpt2/config.json. Error : 1

Couldn't move user://cache/config.json to /media/user/HDD/aidventure-experimental/models/LyaaaaaGames/distilgpt2/config.json

Error renaming user://cache/merges.txt into /media/user/HDD/aidventure-experimental/models/LyaaaaaGames/distilgpt2/merges.txt. Error : 1

Couldn't move user://cache/merges.txt to /media/user/HDD/aidventure-experimental/models/LyaaaaaGames/distilgpt2/merges.txt

Error renaming user://cache/special_tokens_map.json into /media/user/HDD/aidventure-experimental/models/LyaaaaaGames/distilgpt2/special_tokens_map.json. Error : 1

...

Couldn't move user://cache/pytorch_model-00003-of-00003.bin to /media/user/HDD/aidventure-experimental/models/LyaaaaaGames/distilgpt2/pytorch_model-00003-of-00003.bin

Error renaming user://cache/pytorch_model.bin.index.json into /media/user/HDD/aidventure-experimental/models/LyaaaaaGames/distilgpt2/pytorch_model.bin.index.json. Error : 1

Couldn't move user://cache/pytorch_model.bin.index.json to /media/user/HDD/aidventure-experimental/models/LyaaaaaGames/distilgpt2/pytorch_model.bin.index.json

loading generator

Setting up the model.

--------

Server logs for the experimental release are:

21:28:48,484 AIdventure_Server DEBUG loading generator

21:28:48,485 AIdventure_Server INFO Setting up the model.

21:28:48,485 AIdventure_Server DEBUG {'low_memory_mode': True, 'allow_offload': False, 'limit_memory': False, 'max_memory': {'0': '0 MB', 'cpu': '0 MB'}, 'allow_download': False, 'device_map': 'auto', 'torch_dtype': 4, 'offload_dict': True}

21:28:48,485 AIdventure_Server DEBUG Clearing GPU cache

21:28:48,527 AIdventure_Server DEBUG ---------------Memory allocated---------------

21:28:48,527 AIdventure_Server DEBUG 0.0000 B

21:28:48,527 AIdventure_Server DEBUG ---------------Max memory allocated---------------

21:28:48,527 AIdventure_Server DEBUG 0.0000 B

21:28:48,527 AIdventure_Server DEBUG ---------------Memory reserved---------------

21:28:48,527 AIdventure_Server DEBUG 0.0000 B

21:28:48,527 AIdventure_Server DEBUG ---------------Max memory reserved---------------

21:28:48,527 AIdventure_Server DEBUG 0.0000 B

21:28:48,527 AIdventure_Server INFO Token file in 'models/LyaaaaaGames/distilgpt2' not found.

21:28:48,527 AIdventure_Server INFO Couldn't load the model files.

21:28:48,527 AIdventure_Server INFO Downloading the model with the server is disabled.

21:28:48,527 AIdventure_Server INFO Is CUDA available: True

---

The game otherwise appears to load normally. Typing anything and hitting Generate crashes the game:

Unexpected error shutting down the server

ERROR: Connection lost with the server. clean: False

   at: call (modules/gdscript/gdscript_functions.cpp:775)

connection closed

The experimental server logs now read:

21:31:55,674 asyncio ERROR Task exception was never retrieved

future: <Task finished name='Task-4' coro=<WebSocketServerProtocol.handler() done, defined at /media/user/HDD/aidventure-experimental/mamba/envs/aidventure/lib/python3.9/site-packages/websockets/legacy/server.py:153> exception=SystemExit(None)>

Traceback (most recent call last):

  File "/media/user/HDD/aidventure-experimental/server/server.py", line 135, in handler

    data_to_send = handle_request(p_websocket, json_message)

  File "/media/user/HDD/aidventure-experimental/server/server.py", line 169, in handle_request

    generated_text = generator.generate_text(prompt,

  File "/media/user/HDD/aidventure-experimental/server/generator.py", line 52, in generate_text

    model_input    = self._Tokenizer(model_input, return_tensors = "pt")

AttributeError: 'Generator' object has no attribute '_Tokenizer'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):

  File "/media/user/HDD/aidventure-experimental/server/server.py", line 267, in <module>

    asyncio.run(main())

  File "/media/user/HDD/aidventure-experimental/mamba/envs/aidventure/lib/python3.9/asyncio/runners.py", line 44, in run

    return loop.run_until_complete(main)

  File "/media/user/HDD/aidventure-experimental/mamba/envs/aidventure/lib/python3.9/asyncio/base_events.py", line 629, in run_until_complete

    self.run_forever()

  File "/media/user/HDD/aidventure-experimental/mamba/envs/aidventure/lib/python3.9/asyncio/base_events.py", line 596, in run_forever

    self._run_once()

  File "/media/user/HDD/aidventure-experimental/mamba/envs/aidventure/lib/python3.9/asyncio/base_events.py", line 1890, in _run_once

    handle._run()

  File "/media/user/HDD/aidventure-experimental/mamba/envs/aidventure/lib/python3.9/asyncio/events.py", line 80, in _run

    self._context.run(self._callback, *self._args)

  File "/media/user/HDD/aidventure-experimental/mamba/envs/aidventure/lib/python3.9/site-packages/websockets/legacy/server.py", line 236, in handler

    await self.ws_handler(self)

  File "/media/user/HDD/aidventure-experimental/mamba/envs/aidventure/lib/python3.9/site-packages/websockets/legacy/server.py", line 1175, in _ws_handler

    return await cast(

  File "/media/user/HDD/aidventure-experimental/server/server.py", line 150, in handler

    shutdown_server()

  File "/media/user/HDD/aidventure-experimental/server/server.py", line 253, in shutdown_server

    exit()

  File "/media/user/HDD/aidventure-experimental/mamba/envs/aidventure/lib/python3.9/_sitebuiltins.py", line 26, in __call__

    raise SystemExit(code)

SystemExit: None
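
My reading of the crash: because the downloaded files never get moved out of user://cache into the models folder, the server finds no token file, never builds its tokenizer, and generate_text() then fails on the missing _Tokenizer attribute, which shuts the whole server down. My guess (unverified) is that the rename fails because the cache lives on my home partition while the game is installed on a separate drive, so it is a cross-filesystem move. As a workaround I'm going to try copying the cached files into the models folder by hand, roughly like this (a hypothetical sketch with paths from my machine; user:// maps to ~/.local/share/aidventure here):

# Manual workaround attempt: copy the cached distilgpt2 files to where the game expects them.
import shutil
from pathlib import Path

cache = Path.home() / ".local/share/aidventure/cache"
models = Path("/media/user/HDD/aidventure-experimental/models/LyaaaaaGames/distilgpt2")

models.mkdir(parents=True, exist_ok=True)
for source in cache.iterdir():
    if source.is_file():
        # Unlike a plain rename, shutil.move falls back to copy-and-delete across filesystems.
        shutil.move(str(source), str(models / source.name))
        print("moved", source.name)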