What experience have you had running "df.sketch.ask" on Nvidia graphics cards?
At DataDM I read that local execution needs a 24 GB graphics card. Have you tested with an RTX 3090 or even an RTX 4090 with 24 GB? Or do you use a similar setup to mine: a system with more than one CUDA-capable graphics card, but with at least 24 GB of VRAM in total?
Do you have a recommendation for upgrading my server so that it can run the appropriate models (e.g. StarCoder)?
First of all: thank you very much for the great work on "Sketch".
I would like to explore the topic further, but without sending my data to OpenAI, so I would like to use local execution:
os.environ['SKETCH_USE_REMOTE_LAMBDAPROMPT'] = 'False'
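For completeness, here is a minimal setup sketch. It assumes the environment variable must be set before sketch is imported (typical for import-time configuration); the commented-out usage lines are illustrative, not verified:

```python
import os

# Must be set BEFORE importing sketch so the local lambdaprompt backend is
# used instead of the remote endpoint (assumption: the variable is read at
# import time).
os.environ['SKETCH_USE_REMOTE_LAMBDAPROMPT'] = 'False'

# Illustrative usage only -- sketch and pandas are heavy optional deps:
# import sketch
# import pandas as pd
# df = pd.read_csv("data.csv")
# df.sketch.ask("Is there any PII information in this dataset?")

print(os.environ['SKETCH_USE_REMOTE_LAMBDAPROMPT'])
```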
My regular desktop PC does not have a CUDA-capable graphics card, so everything runs on the CPU in a single thread. A question such as "Is there any PII information in this dataset?" takes about 30 minutes on a fairly recent AMD Ryzen CPU.
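One thing worth checking before upgrading hardware: local inference goes through PyTorch, and its intra-op thread count can be raised explicitly. This is a hypothetical tuning step, not something the Sketch documentation prescribes, and sketch itself may override the setting:

```python
import os

# Number of logical CPU cores available to this process.
n_cores = os.cpu_count() or 1
print(f"CPU cores available: {n_cores}")

# If PyTorch is installed, let CPU inference use all cores
# (hypothetical tuning step; effect on Sketch is unverified).
try:
    import torch
    torch.set_num_threads(n_cores)
    print(f"torch intra-op threads: {torch.get_num_threads()}")
except ImportError:
    pass  # torch not installed; nothing to tune
```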
My server has several (smaller) CUDA-capable graphics cards. However, when running with LAMBDAPROMPT_BACKEND = StarCoder, I reach the memory limit of the graphics cards: RuntimeError: CUDA out of memory.
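A back-of-the-envelope calculation shows why the cards run out of memory. StarCoder has roughly 15.5B parameters; using standard dtype widths (and ignoring activations, KV cache, and framework overhead), the weights alone need:

```python
# Approximate weight memory for a ~15.5B-parameter model such as StarCoder.
# Activations, KV cache, and framework overhead come on top of this.
PARAMS = 15.5e9

def weights_gib(bytes_per_param: float) -> float:
    """Weight memory in GiB for a given parameter width."""
    return PARAMS * bytes_per_param / 2**30

for dtype, width in [("float32", 4), ("float16", 2), ("int8", 1)]:
    print(f"{dtype:8s}: ~{weights_gib(width):.1f} GiB")
```

Even in float16 the weights alone are about 29 GiB, so a single 24 GB card (or several smaller cards without sharding across them) cannot hold the model without quantization.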