index.json
[{"authors":["r-richmond"],"categories":null,"content":"r-richmond is an analytically minded individual who is currently pursuing data analytics at a large tech company. His expertise lies in SQL, data engineering, data visualizations \u0026amp; misc programming. His experience ranges from submitting pull requests for open-source tools to recommending new business strategies.\n","date":-62135596800,"expirydate":-62135596800,"kind":"section","lang":"en","lastmod":-62135596800,"objectID":"7a9fe28e6493804000254b354c1e56e7","permalink":"http://justnumbersandthings.com/authors/r-richmond/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/authors/r-richmond/","section":"authors","summary":"r-richmond is an analytically minded individual who is currently pursuing data analytics at a large tech company. His expertise lies in SQL, data engineering, data visualizations \u0026amp; misc programming. His experience ranges from submitting pull requests for open-source tools to recommending new business strategies.","tags":null,"title":"r-richmond","type":"authors"},{"authors":["r-richmond"],"categories":["Tinkerer"],"content":" 0) Install ISQL If you\u0026rsquo;re on macOS the easiest way to install isql is using Homebrew: brew install unixodbc.\n1) Download drivers Download ODBC Package \u0026amp; Basic Package from here Create an Oracle account if necessary 2) Prepare driver files cd Downloads unzip instantclient-basic-macos.x64-19.3.0.0.0dbru.zip unzip instantclient-odbc-macos.x64-19.3.0.0.0dbru.zip mkdir ~/lib mkdir -p /opt/oracle/ mv $(pwd)/instantclient_19_3 /opt/oracle/ ln -s /opt/oracle/instantclient_19_3/libclntsh.dylib.19.1 /opt/oracle/instantclient_19_3/libclntshcore.dylib.19.1 ~/lib 3) Update ODBC ini files Update odbcinst.ini to add\n[ODBC Drivers] Oracle ODBC Driver = Installed ... ... 
[Oracle ODBC Driver] Description = Oracle 19 ODBC driver Driver = /opt/oracle/instantclient_19_3/libsqora.dylib.19.1 Setup = FileUsage = CPTimeout = CPReuse = Update odbc.ini to add\n[ODBC Data Sources] {dsn_name} = [Oracle ODBC Driver] ... ... [{dsn_name}] AggregateSQLType=FLOAT Application Attributes=T Attributes=W BatchAutocommitMode=IfAllSuccessful BindAsFLOAT=F CacheBufferSize=20 CloseCursor=F DisableDPM=F DisableMTS=T DisableRULEHint=T Driver=/opt/oracle/instantclient_19_3/libsqora.dylib.19.1 DSN={dsn_name} EXECSchemaOpt= EXECSyntax=T Failover=T FailoverDelay=10 FailoverRetryCount=10 FetchBufferSize=64000 ForceWCHAR=F LobPrefetchSize=8192 Lobs=T Longs=T MaxLargeData=0 MaxTokenSize=8192 MetadataIdDefault=F QueryTimeout=T ResultSets=T ServerName={host}:{port}/{server} SQLGetData extensions=F SQLTranslateErrors=F StatementCache=F Translation DLL= Translation Option=0 UseOCIDescribeAny=F UserID={username} Password={password} 4) Verify installation Verify by connecting isql -v {dsn_name} Note: on macOS Catalina or later you may have to work around the lack of notarization by navigating to \u0026ldquo;System Preferences \u0026gt; Security \u0026gt; Allow\u0026rdquo; and approving multiple files after multiple connection attempts using step 4.1 ","date":1574553600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1574553600,"objectID":"c408444489c9cc7bbb7de786eedade42","permalink":"http://justnumbersandthings.com/post/2019-11-24-setup-oracle-odbc-with-isql/","publishdate":"2019-11-24T00:00:00Z","relpermalink":"/post/2019-11-24-setup-oracle-odbc-with-isql/","section":"post","summary":"0) Install ISQL If you\u0026rsquo;re on macOS the easiest way to install isql is using Homebrew: brew install unixodbc.\n1) Download drivers Download ODBC Package \u0026amp; Basic Package from here Create an Oracle account if necessary 2) Prepare driver files cd Downloads unzip instantclient-basic-macos.x64-19.3.0.0.0dbru.zip unzip instantclient-odbc-macos.x64-19.3.0.0.0dbru.zip 
mkdir ~/lib mkdir -p /opt/oracle/ mv $(pwd)/instantclient_19_3 /opt/oracle/ ln -s /opt/oracle/instantclient_19_3/libclntsh.dylib.19.1 /opt/oracle/instantclient_19_3/libclntshcore.dylib.19.1 ~/lib 3) Update ODBC ini files update odbcinst.","tags":["SQL","Oracle","isql","ODBC"],"title":" Setting up Oracle ODBC on macOS with ISQL","type":"post"},{"authors":null,"categories":["Analyst"],"content":" 0) Install DBeaver You can find installation instructions here. Make sure to install version 5.2.2 or later. If you haven\u0026rsquo;t updated to 5.2.2 or later you may use this post as a guide for connecting to BigQuery.\n1) Create a new service account Instructions \u0026amp; details for creating a new service account can be found on Google\u0026rsquo;s website Grant your desired BigQuery permissions to your new service account Download the service account key 2) Create a new connection In the menu bar navigate to Database \u0026gt; New Connection Select BigQuery \u0026amp; press next Fill in project with the name of your BigQuery Project Optional, add additional projects in the subsequent field Select service-based Fill in the name of the service account ex: bigquery-demo@project_name.iam.gserviceaccount.com Fill in the path to the service key downloaded earlier Press finish Congrats you\u0026rsquo;ve successfully connected to BigQuery using Dbeaver!\n7) Troubleshooting If you receive [Simba][BigQueryJDBCDriver](100004) HttpTransport IO error : 403 Forbidden make sure you\u0026rsquo;ve created a service account with the necessary permissions. ","date":1538870400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1538870400,"objectID":"16ff8011f41886a0c7e22f32ab8d427f","permalink":"http://justnumbersandthings.com/post/2018-10-07-dbeaver-bigquery-part2/","publishdate":"2018-10-07T00:00:00Z","relpermalink":"/post/2018-10-07-dbeaver-bigquery-part2/","section":"post","summary":"0) Install DBeaver You can find installation instructions here. 
Make sure to install version 5.2.2 or later. If you haven\u0026rsquo;t updated to 5.2.2 or later you may use this post as a guide for connecting to BigQuery.\n1) Create a new service account Instructions \u0026amp; details for creating a new service account can be found on Google\u0026rsquo;s website Grant your desired BigQuery permissions to your new service account Download the service account key 2) Create a new connection In the menu bar navigate to Database \u0026gt; New Connection Select BigQuery \u0026amp; press next Fill in project with the name of your BigQuery Project Optional, add additional projects in the subsequent field Select service-based Fill in the name of the service account ex: bigquery-demo@project_name.","tags":["SQL","DBeaver","BigQuery"],"title":" Connecting to Google BigQuery with DBeaver","type":"post"},{"authors":null,"categories":["Analyst"],"content":" Introduction Google\u0026rsquo;s BigQuery has support for complex types (arrays \u0026amp; structs) which are relatively new in analytical databases. While the ideas of arrays and structs aren\u0026rsquo;t unique to BigQuery, some of the syntax and capabilities are unique. In this post I\u0026rsquo;ll be going over what I\u0026rsquo;ve found to be the most useful patterns and tricks.\nArrays Put plainly, an array is a series of values of the same type stored within a single value. You can create array literals via brackets [] as demonstrated by the following snippet.\nselect [1, 2, 3] as array_of_ints You can also explicitly declare the type of an array as follows.\nselect ['2018-01-01', '2018-02-01', '2018-03-01'] as array_of_string, array\u0026lt;date\u0026gt;['2018-01-01', '2018-02-01', '2018-03-01'] as array_of_date Structs A struct is a grouping of values that need not be of the same type and is very similar to the concept of tuples. They are commonly used to group related values together. 
You can create struct literals using the function struct\nselect struct(1 as id, 2 as value) as user_info Purpose Arrays and structs allow for a more compact organization of related data which makes writing and reading many queries easier. So while BigQuery is the engine that I\u0026rsquo;m covering today I expect these concepts to spread to other databases as knowledge of their utility spreads.\nBasic Usage Example DataSet I\u0026rsquo;ll be using the following CTE to demonstrate various functions for arrays \u0026amp; structs.\nwith data_sample as ( select 1 as id_race, date'2018-08-01' as date_race, [3, 4] as id_participants, [ struct(7.0 as distance, 1 as lap_number, [struct(3 as id_participant, 1 as position), struct(4 as id_participant, 2 as position)] as finish_order), struct(6.5 as distance, 2 as lap_number, [struct(3 as id_participant, 1 as position), struct(4 as id_participant, 2 as position)] as finish_order), struct(7.2 as distance, 3 as lap_number, [struct(4 as id_participant, 1 as position), struct(3 as id_participant, 2 as position)] as finish_order) ] as race_laps union all select 2 as id_race, date'2018-08-08' as date_race, [3, 5] as id_participants, [ struct(7.5 as distance, 1 as lap_number, [struct(5 as id_participant, 1 as position), struct(3 as id_participant, 2 as position)] as finish_order), struct(7.4 as distance, 2 as lap_number, [struct(5 as id_participant, 1 as position), struct(3 as id_participant, 2 as position)] as finish_order), struct(7.3 as distance, 3 as lap_number, [struct(5 as id_participant, 1 as position), struct(3 as id_participant, 2 as position)] as finish_order) ] as race_laps ) Access an individual element from an array with data_sample as ( --See above --Note: these lines will be omitted from subsequent examples ) select ds.id_participants[offset(0)] as first_participant, --zero based ds.id_participants[ordinal(1)] as first_participant_also --one based from data_sample as ds Determine the length of an array select 
array_length(ds.id_participants) as number_of_participants from data_sample as ds Access values \u0026amp; structs within an array There are a couple of different ways to interact with arrays in BigQuery. The following three examples show different ways to access the example data structure and calculate the total distance for each race.\nby joining the lap array select ds.id_race, sum(rl.distance) as race_distance from data_sample as ds join ds.race_laps as rl group by 1 by unnesting the lap array select ds.id_race, sum(rl.distance) as race_distance from data_sample as ds, unnest(ds.race_laps) as rl group by 1 by using an inline query select ds.id_race, (select sum(rl.distance) from unnest(ds.race_laps) as rl) as race_distance from data_sample as ds by joining multiple arrays The following query returns a list of race ids \u0026amp; participant ids and a comma-separated string showing each participant\u0026rsquo;s place in each lap of each race.\nselect ds.id_race, fo.id_participant, string_agg(cast(fo.position as string), ', ' order by rl.lap_number) lap_positions from data_sample as ds join ds.race_laps as rl join rl.finish_order as fo group by 1, 2 Filtering by contents of an array select ds.id_race from data_sample as ds where 3 in unnest(ds.id_participants) ","date":1537660800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1537660800,"objectID":"63e5adb7a8c191e3142c22bcf25aa54e","permalink":"http://justnumbersandthings.com/post/2018-09-23-exploring-complex-types-bigquery/","publishdate":"2018-09-23T00:00:00Z","relpermalink":"/post/2018-09-23-exploring-complex-types-bigquery/","section":"post","summary":"Introduction Google\u0026rsquo;s BigQuery has support for complex types (arrays \u0026amp; structs) which are relatively new in analytical databases. While the ideas of arrays and structs aren\u0026rsquo;t unique to BigQuery, some of the syntax and capabilities are unique. 
In this post I\u0026rsquo;ll be going over what I\u0026rsquo;ve found to be the most useful patterns and tricks.\nArrays Put plainly, an array is a series of values of the same type stored within a single value.","tags":["SQL","BigQuery"],"title":"Exploring Complex Types in BigQuery","type":"post"},{"authors":null,"categories":["Analyst"],"content":" Update (2018-10-07) Shortly after this post DBeaver was updated with a native connector. Please see this post for more up-to-date connection instructions if you have updated DBeaver to 5.2.2 or later.\n0) Install DBeaver You can find installation instructions here\n1) Download the latest drivers You can find the latest drivers on Google\u0026rsquo;s website\n2) Create a folder to store the drivers mkdir ~/.dbeaver-drivers/bigquery/\n3) Extract driver jars and move to the folder we made earlier 4) Create a New Driver in DBeaver Navigate to Database \u0026gt; Driver Manager \u0026gt; New Add all the files from ~/.dbeaver-drivers/bigquery/ Driver name: BigQuery (for labeling only) Class name: com.simba.googlebigquery.jdbc42.Driver (at the time of this writing) Default port: 443 URL template: jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId={server};OAuthType=0;OAuthServiceAcctEmail={user};OAuthPvtKeyPath={host}; Note there are 4 different ways to connect to BigQuery using the JDBC driver. This tutorial illustrates connecting using the service account authorization method. Additionally, at the time of this writing, Dbeaver only supports a couple of URL template variables (e.g. 
server, user, host) which is why I\u0026rsquo;ve used the host variable for the path to the key file rather than something like key_path 5) Create a new service account Instructions \u0026amp; details for creating a new service account can be found on Google\u0026rsquo;s website Grant your desired BigQuery permissions to your new service account Download the service account key 6) Create a New Connection In the menu bar navigate to Database \u0026gt; New Connection Select BigQuery Fill in the appropriate values for host, server, user Set host to the path to the service account key e.g., /Users/admin/.dbeaver_drivers/bigquery/project_name-####.json Set server to the project id for your BigQuery project Set user to the email address for the generated service account Press finish Congrats you\u0026rsquo;ve successfully connected to BigQuery using Dbeaver!\n7) Troubleshooting If you receive [Simba][BigQueryJDBCDriver](100004) HttpTransport IO error : 403 Forbidden make sure you\u0026rsquo;ve created a service account with the necessary permissions. -As of Dbeaver 5.2.0 Dbeaver is unable to return arrays of ints and other numerics- e.g., select [1, 2] as ids results in org.jkiss.dbeaver.model.exec.DBCException: SQL Error [10140] [22003]: [Simba][JDBC](10140) Error converting value to long. ","date":1537574400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1537574400,"objectID":"2254a6627f08645f30f9016cd06d99a3","permalink":"http://justnumbersandthings.com/post/2018-09-22-dbeaver-bigquery/","publishdate":"2018-09-22T00:00:00Z","relpermalink":"/post/2018-09-22-dbeaver-bigquery/","section":"post","summary":"Update (2018-10-07) Shortly after this post DBeaver was updated with a native connector. 
Please see this post for more up-to-date connection instructions if you have updated DBeaver to 5.2.2 or later.\n0) Install DBeaver You can find installation instructions here\n1) Download the latest drivers You can find the latest drivers on Google\u0026rsquo;s website\n2) Create a folder to store the drivers mkdir ~/.dbeaver-drivers/bigquery/\n3) Extract driver jars and move to the folder we made earlier 4) Create a New Driver in DBeaver Navigate to Database \u0026gt; Driver Manager \u0026gt; New Add all the files from ~/.","tags":["SQL","DBeaver","BigQuery"],"title":" Connecting to Google BigQuery with DBeaver with JDBC Drivers","type":"post"},{"authors":null,"categories":["Analyst"],"content":" Foreword This is a follow-up to my previous post. My previous post demonstrated how to import a CSV using Dbeaver\u0026rsquo;s database-to-database export \u0026amp; import feature. As of version 5.1.5 Dbeaver introduced a direct CSV option for importing CSVs.\n0) Install DBeaver You can find installation instructions here\n1) Connect to your target database 1.1) Navigate through your target database \u0026amp; schema and right click on your target table and select import table data\n1.2) Next select CSV from the list\n1.3) Select your CSV file for upload\n2) Ensure that the mapping of each of your columns is correct For column names that are an exact match DBeaver will automatically map them for you For the remaining columns make sure to map the source columns to your desired target columns 3) Complete the wizard and watch DBeaver import your data Note: For large files it may be necessary to go get lunch but in my case 4 records doesn\u0026rsquo;t take long to import :)\n4) Check to make sure that the data has loaded correctly As a last optional step it is good practice to make sure that everything loaded correctly which can easily be done by running a query against your target 
DB\n","date":1534032000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1534032000,"objectID":"c2700f81d9a8341a5233e7f1a0814882","permalink":"http://justnumbersandthings.com/post/2018-08-12-dbeaver-import-csv-2/","publishdate":"2018-08-12T00:00:00Z","relpermalink":"/post/2018-08-12-dbeaver-import-csv-2/","section":"post","summary":"Foreword This is a follow-up to my previous post. My previous post demonstrated how to import a CSV using Dbeaver\u0026rsquo;s database-to-database export \u0026amp; import feature. As of version 5.1.5 Dbeaver introduced a direct CSV option for importing CSVs.\n0) Install DBeaver You can find installation instructions here\n1) Connect to your target database 1.1) Navigate through your target database \u0026amp; schema and right click on your target table and select import table data","tags":["SQL","DBeaver","CSV"],"title":"Importing a CSV into a database using DBeaver Part 2","type":"post"},{"authors":null,"categories":["Analyst"],"content":" Update: August 12, 2018 The following post demonstrates how to import CSVs using Dbeaver\u0026rsquo;s database-to-database export \u0026amp; import feature. 
If you are certain of the quality of your CSV \u0026amp; just want to import it quickly, my subsequent post may be more useful.\n0) Install DBeaver You can find installation instructions here\n1) Create a folder to be used as your CSV Database mkdir ~/desktop/csvs\nPlace the CSV you want to load into this folder\n2) Create a CSV database connection In the menu bar select Database \u0026gt; Create a New Connection \u0026amp; from the list of drivers select Flat files(CSV) \u0026gt; CSV/DBF\nSet the path of the connection to the folder you created earlier (the JDBC URL will auto-populate)\nNote: If you run into trouble downloading the driver navigate to the source website and download the driver manually\n3) Connect to your target database 3.1) Navigate through your target database \u0026amp; schema and right click on your target table and select import table data\n3.2) Next select your source CSV from your CSV connection as the source container\nNote: In this example case I\u0026rsquo;m loading a test CSV into a Postgres database but this functionality works with any connection that DBeaver supports (which is basically everything)\n4) Ensure that the mapping of each of your columns is correct For column names that are an exact match DBeaver will automatically map them for you For the remaining columns make sure to map the source columns to your desired target columns 5) Complete the wizard and watch DBeaver import your data Note: For large files it may be necessary to go get lunch but in my case 4 records doesn\u0026rsquo;t take long to import :)\n6) Check to make sure that the data has loaded correctly As a last optional step it is good practice to make sure that everything loaded correctly which can easily be done by running a query against your target DB\n7) Final Notes \u0026amp; Thoughts While this process takes a little bit more time to get set up than other tools setting up the CSV connection only needs to be done once One side benefit of this as well is that 
you are now able to run SQL queries against CSVs very easily The only real pain point that I have run across is that if you add a new CSV file or add/delete columns in an active CSV connection you have to cancel the import wizard \u0026amp; refresh the CSV connection for the changes to be picked up; this feedback was provided in issue 926 and hopefully it will be resolved in a future update ","date":1528761600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1528761600,"objectID":"676dadd9014a7fb7cc5cb6ddbb6a495b","permalink":"http://justnumbersandthings.com/post/2018-06-12-dbeaver-import-csv/","publishdate":"2018-06-12T00:00:00Z","relpermalink":"/post/2018-06-12-dbeaver-import-csv/","section":"post","summary":"Update: August 12, 2018 The following post demonstrates how to import CSVs using Dbeaver\u0026rsquo;s database-to-database export \u0026amp; import feature. If you are certain of the quality of your CSV \u0026amp; just want to import it quickly, my subsequent post may be more useful.\n0) Install DBeaver You can find installation instructions here\n1) Create a folder to be used as your CSV Database mkdir ~/desktop/csvs\nPlace the CSV you want to load into this folder","tags":["SQL","DBeaver","CSV"],"title":"Importing a CSV into a database using DBeaver","type":"post"},{"authors":null,"categories":["Tinkerer"],"content":" What was I using before Previously I was using Pelican, a static site generator written in Python. Personally I\u0026rsquo;m a huge Python fan which led me to search out Pelican rather than go with a more popular solution such as Jekyll.\nWhy Change As it turns out if you aren\u0026rsquo;t going to be modifying the static site generator in any way you don\u0026rsquo;t need to concern yourself with the language that it is written in. This fact escaped me while I was seeking out Pelican. 
Coming to this realization opened up a world of possibilities and when I saw an article discussing Hugo and its killer feature, Live Reload, I made the jump!\nUnique Migration tasks The most complex task of this migration for me was ensuring that the old URLs would point to the correct articles after I moved. Fortunately Hugo made this dead simple via URL aliases.\nSimply add the following to the front matter of the article you\u0026rsquo;d like to migrate.\n+++ aliases = [ \u0026quot;/posts/my-original-url/\u0026quot;, \u0026quot;/2010/01/01/even-earlier-url.html\u0026quot; ] +++ ","date":1521763200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1521763200,"objectID":"478b9d098f63fc8c2fc8df7b2e7639b3","permalink":"http://justnumbersandthings.com/post/2018-03-23-hugo-migration/","publishdate":"2018-03-23T00:00:00Z","relpermalink":"/post/2018-03-23-hugo-migration/","section":"post","summary":"What was I using before Previously I was using Pelican, a static site generator written in Python. Personally I\u0026rsquo;m a huge Python fan which led me to search out Pelican rather than go with a more popular solution such as Jekyll.\nWhy Change As it turns out if you aren\u0026rsquo;t going to be modifying the static site generator in any way you don\u0026rsquo;t need to concern yourself with the language that it is written in.","tags":["Blogging"],"title":"Moved to Hugo","type":"post"},{"authors":null,"categories":["Tinkerer"],"content":" 1) What is an EGPU \u0026amp; when did Apple start supporting them? The acronym EGPU stands for external GPU. Until recently they were relatively niche; however, with the release of Thunderbolt 3 they have begun to gain traction. At WWDC17 Apple announced that they would be fully supported in High Sierra sometime during the spring of 2018. This news was welcomed at egpu.io where a community of capable individuals have been trailblazing EGPUs on macOS prior to 10.13 \u0026amp; Windows. 
They quickly updated some of their guides and recommendations to help beginners, such as myself, start their EGPU journey on High Sierra.\n2) My setup So after reading about the success and watching several YouTube reviews I thought it was time to give this EGPU thing a go. For my enclosure I decided to go with the Mantiz Venus ($399). For my graphics card I decided to go with the Vega64. As a side note it looks like some users have had success with Nvidia cards on macOS after some elbow grease but AMD cards were reported to be plug and play; hence my decision to go with AMD.\n3) Initial experiences After plugging it in and turning it on for the first time I was excited to see that everything was working out of the box.\nThe only oddity that I noticed at first is that system information lists the Vega64 as \u0026ldquo;AMD RX xxx\u0026rdquo;. However, this appears to only be a cosmetic issue so I ignored it.\nSo after my initial excitement I tried playing a couple games with my new setup and quickly stumbled across something I should have uncovered sooner in my research. The Vega64 is quite power hungry and the enclosure I purchased doesn\u0026rsquo;t actually support it; it only supports the Vega56.\nFor reference the problem that you\u0026rsquo;ll encounter is that the entire enclosure will shut down which results in a soft or sometimes a hard system crash. So for those who are thinking about purchasing the Mantiz \u0026amp; Vega make sure to go with the Vega56 rather than the Vega64. However, if you are like me and already have a Vega64 and/or you want to try an unsupported setup keep reading.\n4) Resolving the power draw issue. After some more research I found a new power supply ($190) that would provide the needed power \u0026amp; had the same dimensions as the power supply that ships with the Mantiz. I\u0026rsquo;m happy to report that installing this part turned out to be essentially painless. 
The existing power supply comes out without much fuss and the new power supply fits perfectly in place of the old one. The only exception here is that the new power supply is a little longer than the existing one but there is ample room in the enclosure to accommodate this.\n5) Subsequent experiences After upgrading the power supply I\u0026rsquo;ve yet to experience another power-related crash and have had a far more enjoyable experience.\nThe only remaining issue that I\u0026rsquo;ve had is a rare crash. Twice I\u0026rsquo;ve had macOS freeze for several seconds followed by a flash which results in all of my applications being closed (according to the dock, force-quit dialogs, \u0026amp; command-tab) but leaving the underlying processes untouched. This results in a bizarre situation where I can\u0026rsquo;t view my application; however, I can still hear the audio and see the process running in activity monitor. I expect/hope that this is one of those things that will be squashed once Apple rolls out official EGPU support in the spring with 10.13.x.\n6) Benchmarks What review of an EGPU setup would be complete without some benchmarks? Given the wide variety of applications \u0026amp; uses for an EGPU I decided to stay simple with some Geekbench benchmarks.\nUsing the AMD Pro 555 in my MacBook Pro I received a score of 37,697\nUsing my EGPU setup I received a score of 175,303, a 4.65x improvement over the internal GPU offered by Apple. 
Quite the improvement!\n7) Final Thoughts Pros Performance The Geekbench results show a 4.65x increase and my experiences with this increase have been tremendously positive Flexibility If I need my laptop to be portable it is as simple as unplugging a cord If I need my laptop to be ultra-fast it is as simple as plugging in a cord Upgradability With this setup my Apple laptop now has an upgradable GPU which means I can keep my machine going for longer Ports One of the nice things about the Mantiz enclosure is that it provides a lot of ports (5 USB \u0026amp; 1 gigabit ethernet port) and even includes a spot for a 2.5\u0026rdquo; SSD Drive So rather than buying more dongles you can utilize the Mantiz as a hub as well Cons Weight One thing that caught me off guard was just how heavy the enclosure \u0026amp; GPU were It is far lighter than a desktop tower but still heavy enough to dissuade any thoughts of lugging it around Drivers I thought that the Vega series would have first-class support as an EGPU because of the Vega\u0026rsquo;s inclusion in the iMac Pro; however, since system information doesn\u0026rsquo;t accurately identify the card it is clear that some support is still yet to come, likely in the spring Update as of 10.13.4 This has been resolved with Apple now officially supporting EGPU setups Support (Apple now officially supports EGPU setups for AMD cards) If you go with the Vega or really any other setup other than the Apple EGPU kit you are in an area that isn\u0026rsquo;t officially supported That being said you won\u0026rsquo;t be alone or the first so don\u0026rsquo;t let that hold you back too much In summary I\u0026rsquo;d recommend an EGPU to anyone who is looking for a more robust GPU. The pros are very compelling, and the cons, while worth considering, don\u0026rsquo;t outweigh the benefits for anyone looking for improved performance. 
The only other thing I\u0026rsquo;ll call out is that I\u0026rsquo;d recommend the Vega56 + Mantiz rather than the Vega64 + Mantiz given the Vega64 requires an unsupported/warranty-breaking modification.\n","date":1514419200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1514419200,"objectID":"44ee942b261872cc4bed6fe0edef175d","permalink":"http://justnumbersandthings.com/post/2017-12-28-egpu-adventures/","publishdate":"2017-12-28T00:00:00Z","relpermalink":"/post/2017-12-28-egpu-adventures/","section":"post","summary":"1) What is an EGPU \u0026amp; when did Apple start supporting them? The acronym EGPU stands for external GPU. Until recently they were relatively niche however, with the release of thunderbolt3 they have begun to gain traction. At WWDC17 Apple announced that they would be fully supported in High Sierra sometime during the spring of 2018. This news was welcomed at egpu.io where a community of capable individuals have been trail blazing EGPUs on macOS prior to 10.","tags":["EGPU"],"title":"macOS 10.13.2 EGPU Adventures","type":"post"},{"authors":null,"categories":["Tinkerer"],"content":" 1) What is Tableau\u0026rsquo;s Document API? With the release of Tableau 10, Tableau released a python utility called the Tableau Document API (or TDA for short). TDA allows users to easily programmatically modify tableau workbooks. Modifying tableau workbooks without using Tableau Desktop was possible before as tableau files .twb are actually just xml files. However, manually editing the xml of .twb files could easily result in a corrupted workbook. Fortunately with the release of this tool it is now much less risky to modify workbooks without using Tableau Desktop.\n2) Using Tableau\u0026rsquo;s Document API TDA is written in Python so using it is as simple as pip install TableauDocumentApi and import TableauDocumentApi. 
For example if you needed to update the connection strings in a dozen local tableau files you could use the following script to update them all.\nimport TableauDocumentApi as tda import glob, os os.chdir(\u0026quot;my_folder\u0026quot;) for file in glob.glob(\u0026quot;*.twb\u0026quot;): tableau_workbook = tda.Workbook(file) for datasource in tableau_workbook.datasources: for connection in datasource.connections: connection.server = 'my-new-host' tableau_workbook.save() 3) Tableau\u0026rsquo;s Query Bands \u0026amp; Initial SQL Taking a step away from TDA for a moment Tableau has two less visible but still immensely useful features: Initial SQL \u0026amp; querybands. I\u0026rsquo;ve personally used these features to stage temporary tables \u0026amp; tag each query (there are so many\u0026hellip;) that tableau runs.\n4) How does this relate to the TableauDocumentAPI The first version I used of Tableau\u0026rsquo;s Document API did not support modifying initial sql \u0026amp; query bands. After being faced with modifying thousands of workbooks to use query bands I decided to review the library and see if support could be easily added. Glancing at the source code I could see that each of the properties of a connection was cleanly defined. Using Port as an example I was able to submit a pull request implementing the functionality. 
Fortunately, the Tableau team was very responsive and quickly merged the pull request.\n5) Tying it all together So now, if you find yourself in a situation where you need to modify the query bands or the Initial SQL in your workbooks, you can use TDA to save yourself some time.\nUpdate: Looks like Tableau was even kind enough to mention this new feature in the release notes of 10.2. Search for \u0026ldquo;and with our Document API\u0026rdquo;\nCheers!\n","date":1496534400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1496534400,"objectID":"b3f2a9e9a05b2036ff2e20f70e58c311","permalink":"http://justnumbersandthings.com/post/2017-06-04-improving-tableaus-document-api/","publishdate":"2017-06-04T00:00:00Z","relpermalink":"/post/2017-06-04-improving-tableaus-document-api/","section":"post","summary":"1) What is Tableau\u0026rsquo;s Document API? With the release of Tableau 10, Tableau released a Python utility called the Tableau Document API (or TDA for short). TDA allows users to easily programmatically modify Tableau workbooks. Modifying Tableau workbooks without using Tableau Desktop was possible before as Tableau files (.twb) are actually just XML files. However, manually editing the XML of .twb files could easily result in a corrupted workbook.
Fortunately, with the release of this tool, it is now much less risky to modify workbooks without using Tableau Desktop.","tags":["Tableau","Python"],"title":"Improving Tableau's Document API","type":"post"},{"authors":null,"categories":["Analyst"],"content":" 0) Install DBeaver You can find installation instructions here\n1) Download the latest drivers You can find the latest drivers on the Cloudera website\n2) Create a folder to store the drivers mkdir ~/.dbeaver-drivers/cloudera-hive/\n3) Extract the driver JARs and move them to the folder we made earlier 4) Create a New Driver in DBeaver Navigate to Database \u0026gt; Driver Manager \u0026gt; New Add all the files from ~/.dbeaver-drivers/cloudera-hive/ Driver name: Hive-Cloudera (for labeling only) Class name: com.cloudera.hive.jdbc41.HS2Driver (at the time of this writing) Default port: 10000 URL template: jdbc:hive2://{host}:{port}/{database};AuthMech=1;KrbRealm=FOO.BAR;KrbHostFQDN={server};KrbServiceName=hive;KrbAuthType=2 Note: you need to change FOO.BAR to match your krb5.conf settings 5) Create a New Connection In the menu bar Navigate to Database \u0026gt; New Connection Select Hive-Cloudera Fill in the appropriate values for host \u0026amp; database (I set database to default) Set server to be your KrbHostFQDN Leave your user name \u0026amp; password blank Test connection Press next, next, \u0026amp; change the name of this connection as you see fit Press finish Congrats, you\u0026rsquo;ve successfully connected to Hive using Kerberos authentication!\n6) Troubleshooting If you are receiving [Cloudera][HiveJDBCDriver](500168) Error creating login context using ticket cache: Unable to obtain Principal Name for authentication make sure to check the following\n Ensure that you have the latest cryptography libraries installed Java 9 includes these libraries by default That you\u0026rsquo;ve configured your /etc/krb5.conf successfully If you\u0026rsquo;ve done this correctly you should be able to run kinit in terminal
and create a ticket without issue. For Windows, adding the following lines to your dbeaver.ini may be necessary as well\n -Djava.security.krb5.conf=c:\\kerberos\\krb5.ini Note: this is the Windows equivalent of /etc/krb5.conf -Djava.security.auth.login.config=c:\\kerberos\\jaas.conf\n Success has also been reported with the following jaas.conf file \u0026amp; keytab usage\nClient { com.sun.security.auth.module.Krb5LoginModule required debug=true doNotPrompt=true useKeyTab=true keyTab=\u0026quot;C:\\Users\\{user}\\krb5cc_{user}\u0026quot; useTicketCache=true renewTGT=true principal=\u0026quot;{user}@FOO.BAR\u0026quot; ; }; ","date":1494028800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1494028800,"objectID":"0f8325acf5f429a62875c3a0e8a8d875","permalink":"http://justnumbersandthings.com/post/2017-05-06-dbeaver-hive/","publishdate":"2017-05-06T00:00:00Z","relpermalink":"/post/2017-05-06-dbeaver-hive/","section":"post","summary":"0) Install DBeaver You can find installation instructions here\n1) Download the latest drivers You can find the latest drivers on the Cloudera website\n2) Create a folder to store the drivers mkdir ~/.dbeaver-drivers/cloudera-hive/\n3) Extract the driver JARs and move them to the folder we made earlier 4) Create a New Driver in DBeaver Navigate to Database \u0026gt; Driver Manager \u0026gt; New Add all the files from ~/.dbeaver-drivers/cloudera-hive/ Driver name: Hive-Cloudera (for labeling only) Class name: com.","tags":["SQL","DBeaver","Hive","Kerberos"],"title":" Connecting to Hive with DBeaver using Kerberos Authentication","type":"post"},{"authors":null,"categories":["Analyst"],"content":" 0) What is DBeaver? Quite simply, DBeaver is the best multi-database SQL IDE that I\u0026rsquo;ve used. It supports every JDBC connection that I\u0026rsquo;ve thrown at it and has advanced features for some of the more popular databases such as MySQL \u0026amp; Postgres.
Many thanks to serge-rider for creating such an awesome tool.\n1) Install Java First we need to install Java, which can be done easily by running the following command in terminal (Homebrew required):\nbrew cask install java Note: Previous versions of this article instructed an installation of the cask jce-unlimited-strength-policy but that has been removed as its contents have been incorporated into the cask java with the release of 9.0\nNotes I recommend the newest version of Java as DBeaver regularly deprecates old versions of Java Java SE6 is not supported 2) Install DBeaver Download the latest version | Github Alternate After downloading, drag and drop it into your Applications folder The first time you run the application you may need to right-click on the application and then press open Congratulations, you\u0026rsquo;ve installed DBeaver!\nYour next step is configuring connection(s) to your database(s).\n","date":1490592950,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1490592950,"objectID":"056327fd974178adcfe2a25a92e44d0a","permalink":"http://justnumbersandthings.com/post/2017-03-26-dbeaver-mac/","publishdate":"2017-03-26T21:35:50-08:00","relpermalink":"/post/2017-03-26-dbeaver-mac/","section":"post","summary":"0) What is DBeaver? Quite simply, DBeaver is the best multi-database SQL IDE that I\u0026rsquo;ve used. It supports every JDBC connection that I\u0026rsquo;ve thrown at it and has advanced features for some of the more popular databases such as MySQL \u0026amp; Postgres. Many thanks to serge-rider for creating such an awesome tool.\n1) Install Java First we need to install Java, which can be done easily by running the following command in terminal (Homebrew required):","tags":["SQL","DBeaver"],"title":"Installing DBeaver on a Mac","type":"post"},{"authors":null,"categories":["Tinkerer"],"content":" Update 2018-03-18: I\u0026rsquo;ve since moved to Hugo. This is my first blog post using Pelican and Markdown.
A lot of the content below is intended as a reference, mostly for myself and any others who are exploring using Python 3, Pelican, \u0026amp; Markdown to create a blog.\nHow to get up and running\nmkvirtualenv personal_blog pip install pelican pip install markdown pip install fabric3 pip install ghp-import pip install webassets npm install less -g cd Dropbox/projects/python/personal_blog/ git clone https://github.com/r-richmond/rirchmond.github.io.git git submodule add https://github.com/textbook/bulrush.git git submodule add https://github.com/getpelican/pelican-plugins.git Commands using Fab3 that help prepare posts\nfab rebuild fab preview fab serve fab clean fab gh_pages fab reserve This post was very helpful as well\nI followed this post to set up my custom domain\n","date":1488499200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1488499200,"objectID":"641f3d5cc0dbcacb908f6460ea675bc3","permalink":"http://justnumbersandthings.com/post/2017-03-03-pelican-intro/","publishdate":"2017-03-03T00:00:00Z","relpermalink":"/post/2017-03-03-pelican-intro/","section":"post","summary":"Update 2018-03-18: I\u0026rsquo;ve since moved to Hugo. This is my first blog post using Pelican and Markdown. A lot of the content below is intended as a reference, mostly for myself and any others who are exploring using Python 3, Pelican, \u0026amp; Markdown to create a blog.\nHow to get up and running\nmkvirtualenv personal_blog pip install pelican pip install markdown pip install fabric3 pip install ghp-import pip install webassets npm install less -g cd Dropbox/projects/python/personal_blog/ git clone https://github.","tags":["Blogging","Python"],"title":"First Blog With Pelican","type":"post"}]