Running SQL using SparkConnect should not print full stack trace #1011
Comments
jupysql/src/sql/connection/connection.py (Line 1114 in 0433444):

```python
import pyspark

try:
    return handle_spark_dataframe(self._connection.sql(query))
except pyspark.sql.utils.AnalysisException as e:
    # Print only Spark's analysis error, without the full traceback
    print(e)
except Exception as e:
    print(e)
    raise e
```

Would this be a solution?
Did you try the `short_errors` option? I remember the Spark compatibility came from an external contributor, so I'm unsure if the
@edublancas I tested it; it doesn't work on SparkConnect. I'm going to open a PR.
Can anyone review the PR?
What happens?
There are a lot of complaints that the stack trace is really long and doesn't help identify the error.
The solution would be to print only the error that Spark SQL provides.
E.g., running this query:

The stack trace should be:
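A minimal sketch of the behavior being requested, independent of Spark itself. The names `AnalysisException` (a stand-in for `pyspark.sql.utils.AnalysisException`), `run_sql`, and `failing_execute` are hypothetical and only illustrate the pattern: catch the specific analysis error and print its one-line message instead of letting the full traceback propagate.

```python
class AnalysisException(Exception):
    """Hypothetical stand-in for pyspark.sql.utils.AnalysisException."""


def run_sql(execute, query):
    """Run a SQL query, printing only the analysis error message
    instead of surfacing the full stack trace."""
    try:
        return execute(query)
    except AnalysisException as e:
        # Show only the error text, not the traceback
        print(e)
        return None


def failing_execute(query):
    # Simulates Spark rejecting a query against a missing table
    raise AnalysisException(f"Table or view not found: {query}")


run_sql(failing_execute, "SELECT * FROM missing_table")
```

With this structure, a bad query produces a single readable error line, while successful queries return their result unchanged.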
To Reproduce
After connecting to a spark cluster, run the code below:
This should output something like this:
OS: Linux
JupySQL Version: 0.10.10
Full Name: Athanasios Keramas
Affiliation: