- final keys
- final values
- intermediate keys
- intermediate values
- UNION DISTINCT, RANK
- OVER, RANK
- OVER, EXCEPT
- UNION DISTINCT, RANK
Q3. Rather than adding a Secondary Sort to a slow Reduce job, it is Hadoop best practice to perform which optimization?
- Add a partitioned shuffle to the Map job.
- Add a partitioned shuffle to the Reduce job.
- Break the Reduce job into multiple, chained Reduce jobs.
- Break the Reduce job into multiple, chained Map jobs.
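Whichever option the quiz marks correct, the "chained Map jobs" idea corresponds to Hadoop's `ChainMapper` API, which composes several Map stages inside a single job so no extra shuffle is paid between them. A minimal sketch (both mapper classes are made up for illustration):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;

public class ChainedMapDriver {

    // Stage 1: emit each line lower-cased (illustrative).
    public static class LowerCaseMapper
            extends Mapper<LongWritable, Text, LongWritable, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(key, new Text(value.toString().toLowerCase()));
        }
    }

    // Stage 2: drop empty lines (illustrative).
    public static class NonEmptyMapper
            extends Mapper<LongWritable, Text, LongWritable, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            if (value.getLength() > 0) {
                context.write(key, value);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "chained maps");
        job.setJarByClass(ChainedMapDriver.class);
        // Each addMapper call appends one Map stage; the output types of
        // one stage must match the input types of the next.
        ChainMapper.addMapper(job, LowerCaseMapper.class,
                LongWritable.class, Text.class, LongWritable.class, Text.class,
                new Configuration(false));
        ChainMapper.addMapper(job, NonEmptyMapper.class,
                LongWritable.class, Text.class, LongWritable.class, Text.class,
                new Configuration(false));
        // Input/output paths and formats omitted for brevity.
    }
}
```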
Q4. Hadoop Auth enforces authentication on protected resources. Once authentication has been established, it sets what type of authenticating cookie?
- encrypted HTTP
- unsigned HTTP
- compressed HTTP
- signed HTTP
- Java or Python
- SQL only
- SQL or Java
- Python or SQL
Q6. To perform local aggregation of the intermediate outputs, MapReduce users can optionally specify which object?
- Reducer
- Combiner
- Mapper
- Counter
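For context, the canonical WordCount job wires a Combiner in with one extra driver line; because summing is associative and commutative, the same class can serve as both Combiner (local aggregation on each mapper's output) and Reducer (final aggregation):

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountWithCombiner {

    // Emits (word, 1) for every token in the input line.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Sums the counts for a key; reused as both Combiner and Reducer.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountWithCombiner.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // optional local aggregation
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```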
- SUCCEEDED; syslog
- SUCCEEDED; stdout
- DONE; syslog
- DONE; stdout
- public void reduce(Text key, Iterator<IntWritable> values, Context context){…}
- public static void reduce(Text key, IntWritable[] values, Context context){…}
- public static void reduce(Text key, Iterator<IntWritable> values, Context context){…}
- public void reduce(Text key, IntWritable[] values, Context context){…}
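These signatures refer to the MapReduce 2.0 (`org.apache.hadoop.mapreduce`) API, where `reduce()` is a non-static instance method. Note that the shipped `Reducer` actually passes the values as an `Iterable`; a minimal concrete example:

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Sums all values grouped under one key.
public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
```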
Q9. To get the total number of mapped input records in a map job task, you should review the value of which counter?
- FileInputFormatCounter
- FileSystemCounter
- JobCounter
- TaskCounter (NOT SURE)
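Assuming `TaskCounter` is the intended answer, the count is exposed as `TaskCounter.MAP_INPUT_RECORDS` and can be read from the driver once the job completes:

```java
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.TaskCounter;

public class CounterCheck {
    // Prints the total number of input records consumed by all map tasks.
    static void printMapInputRecords(Job job) throws Exception {
        Counter c = job.getCounters().findCounter(TaskCounter.MAP_INPUT_RECORDS);
        System.out.println(c.getDisplayName() + " = " + c.getValue());
    }
}
```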
- A, P
- C, A
- C, P
- C, A, P
- combine, map, and reduce
- shuffle, sort, and reduce
- reduce, sort, and combine
- map, sort, and combine
Q12. To set up Hadoop workflow with synchronization of data between jobs that process tasks both on disk and in memory, use the ___ service, which is ___.
- Oozie; open source
- Oozie; commercial software
- Zookeeper; commercial software
- Zookeeper; open source
- data
- name
- memory
- worker
- hot swappable
- cold swappable
- warm swappable
- non-swappable
- on disk of all workers
- on disk of the master node
- in memory of the master node
- in memory of all workers
- on the reducer nodes of the cluster
- on the data nodes of the cluster (NOT SURE)
- on the master node of the cluster
- on every node of the cluster
- distributed cache
- local cache
- partitioned cache
- cluster cache
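These options point at the distributed cache, which ships a read-only master file to every task node for in-memory lookups. A sketch of registering a lookup file on the driver and reading it back in a Mapper (the path and class names are made up):

```java
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

public class LookupJoin {

    public static class JoinMapper
            extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void setup(Context context)
                throws IOException, InterruptedException {
            // Files registered on the driver are available to every task.
            URI[] cached = context.getCacheFiles();
            // ... load the lookup table from cached[0] into memory ...
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "lookup join");
        job.setJarByClass(LookupJoin.class);
        // Hypothetical HDFS path to the master file used for lookups.
        job.addCacheFile(new URI("/user/data/lookup.txt"));
        job.setMapperClass(JoinMapper.class);
        // Input/output configuration omitted for brevity.
    }
}
```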
Q18. Skip bad records provides an option where a certain set of bad input records can be skipped when processing what type of data?
- cache inputs
- reducer inputs
- intermediate values
- map inputs
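Skipping mode is configured through static helpers on `SkipBadRecords`: after repeated task failures, the framework narrows down and skips the offending record range. A brief sketch (the thresholds shown are arbitrary):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.SkipBadRecords;

public class SkippingConfig {
    static Configuration withSkipping() {
        Configuration conf = new Configuration();
        // Enter skipping mode after two failed task attempts.
        SkipBadRecords.setAttemptsToStartSkipping(conf, 2);
        // Allow up to 100 map input records around a bad record to be skipped.
        SkipBadRecords.setMapperMaxSkipRecords(conf, 100);
        return conf;
    }
}
```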
- spark import --connect jdbc:mysql://mysql.example.com/spark --username spark --warehouse-dir user/hue/oozie/deployments/spark
- sqoop import --connect jdbc:mysql://mysql.example.com/sqoop --username sqoop --warehouse-dir user/hue/oozie/deployments/sqoop
- sqoop import --connect jdbc:mysql://mysql.example.com/sqoop --username sqoop --password sqoop --warehouse-dir user/hue/oozie/deployments/sqoop
- spark import --connect jdbc:mysql://mysql.example.com/spark --username spark --password spark --warehouse-dir user/hue/oozie/deployments/spark
- compressed (NOT SURE)
- sorted
- not sorted
- encrypted
- JUnit
- XUnit
- MRUnit
- HadoopUnit
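These options concern unit testing MapReduce code: MRUnit drives a mapper or reducer in isolation, with no cluster required. A sketch using MRUnit's `ReduceDriver` under JUnit 4, assuming a summing reducer like the `SumReducer` sketched earlier:

```java
import java.util.Arrays;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;
import org.junit.Test;

public class SumReducerTest {
    @Test
    public void sumsValuesForAKey() throws Exception {
        // SumReducer is the Text/IntWritable summing reducer shown earlier.
        ReduceDriver.newReduceDriver(new SumReducer())
                .withInput(new Text("be"),
                        Arrays.asList(new IntWritable(1), new IntWritable(1)))
                .withOutput(new Text("be"), new IntWritable(2))
                .runTest();
    }
}
```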
- hadoop-user
- super-user
- node-user
- admin-user
- can be configured to be shared
- is partially shared
- is shared
- is not shared (https://www.lynda.com/Hadoop-tutorials/Understanding-Java-virtual-machines-JVMs/191942/369545-4.html)
- a static job() method
- a Job class and instance (NOT SURE)
- a job() method
- a static Job class
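These options describe the MapReduce 2.0 driver pattern: configuration goes through the `Job` class, obtained as an instance from the static `Job.getInstance()` factory and then configured through instance setters. A bare-bones sketch:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class DriverSkeleton {
    public static void main(String[] args) throws Exception {
        // Static factory returns a Job instance; all setup happens on it.
        Job job = Job.getInstance(new Configuration(), "example");
        job.setJarByClass(DriverSkeleton.class);
        // job.setMapperClass(...); job.setReducerClass(...); etc.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```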
- S3A
- S3N
- S3
- the EMR S3
- schema on write
- no schema
- external schema
- schema on read
- read-write
- read-only
- write-only
- append-only
- hdfs or top
- http
- hdfs or http
- hdfs
- Hive
- Pig
- Impala
- Mahout
- a relational table
- an update to the input file
- a single, combined list
- a set of <key, value> pairs
The Map function processes a single key-value pair and emits some number of key-value pairs; the Reduce function then processes the values grouped under each key and emits another set of key-value pairs as output.
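A short word-count trace makes this flow concrete (illustrative data):

```
input line: "to be or not to be"
map emits:  (to,1) (be,1) (or,1) (not,1) (to,1) (be,1)
shuffle/sort groups by key: (be,[1,1]) (not,[1]) (or,[1]) (to,[1,1])
reduce emits: (be,2) (not,1) (or,1) (to,2)
```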
- Override the default Partitioner.
- Skip bad records.
- Break up Mappers that do more than one task into multiple Mappers.
- Combine Mappers that do one task into large Mappers.
- files in object storage
- graph data in graph databases
- relational data in managed RDBMS systems
- JSON data in NoSQL databases
- data mode
- safe mode
- single-user mode
- pseudo-distributed mode
- <key, value> pairs
- keys
- values
- <value, key> pairs
- an average of keys for values
- a sum of keys for values
- a set of intermediate key/value pairs
- a set of final key/value pairs
- SELECT…WHERE value = 1000
- SELECT … LIMIT 1000
- SELECT TOP 1000 …
- SELECT MAX 1000…
- one
- zero
- shared
- two or more (https://data-flair.training/blogs/hadoop-high-availability-tutorial)
- kubernetes
- JobManager
- JobTracker
- YARN
- tasks; jobs
- jobs; activities
- jobs; tasks
- activities; tasks
- database
- distributed computing framework
- operating system
- productivity tool
- combiner
- reduce
- mapper
- intermediate
- mapper
- reducer
- combiner
- counter
- HDFS; HQL
- HQL; HBase
- HDFS; SQL
- SQL; HBase
- does not include
- is the same thing as
- includes
- replaces
Q45. Which type of Hadoop node executes file system namespace operations like opening, closing, and renaming files and directories?
- ControllerNode
- DataNode
- MetadataNode
- NameNode
- Impala
- MapReduce
- Spark
- Pig
Q47. Suppose you are trying to finish a Pig script that converts text in the input string to uppercase. What code is needed on line 2 below?
1 data = LOAD '/user/hue/pig/examples/data/midsummer.txt'...
2 ___
- as (text:CHAR[]); upper_case = FOREACH data GENERATE org.apache.pig.piggybank.evaluation.string.UPPER(TEXT);
- as (text:CHARARRAY); upper_case = FOREACH data GENERATE org.apache.pig.piggybank.evaluation.string.UPPER(TEXT);
- as (text:CHAR[]); upper_case = FOREACH data org.apache.pig.piggybank.evaluation.string.UPPER(TEXT);
- as (text:CHARARRAY); upper_case = FOREACH data org.apache.pig.piggybank.evaluation.string.UPPER(TEXT);
- Combiner
- Reducer
- Map2
- Shuffle and Sort
- dfs.block.size in hdfs-site.xml
- orc.write.variable.length.blocks in hive-default.xml
- mapreduce.job.ubertask.maxbytes in mapred-site.xml
- hdfs.block.size in hdfs-site.xml
- replacements for
- not used with
- substitutes for
- additions for
- distributed cache
- library manager
- lookup store
- registry
- explain
- query action
- detail
- query plan
Q53. Which feature is used to roll back a corrupted HDFS instance to a previously known good point in time?
- partitioning
- snapshot
- replication
- high availability
- C++
- C
- Haskell
- Java
- NAS
- FAT
- HDFS
- NFS
- encrypted
- verified
- distributed
- remote
- Spark and YARN
- HDFS and MapReduce
- HDFS and S3
- Spark and MapReduce
- Cloudera
- Microsoft
- Amazon
- Reporter
- IntReadable
- IntWritable
- Writer
Q60. After changing the default block size and restarting the cluster, to which data does the new size apply?
- all data
- no data
- existing data
- new data
SELECT
c.id,
c.name,
c.email_preferences.categories.surveys
FROM customers c;
- GROUP BY
- FILTER
- SUB-SELECT
- SORT
- Comparator
- Mapper
- Combiner
- Reducer
- secondary indices
- summary statistics
- column-based statistics
- a primary key index
- partition-only
- map-only
- reduce-only
- combine-only
- Add more master nodes.
- Implement optimized InputSplits.
- Add more DataNodes.
- Implement a custom Mapper.
- a sort policy
- a combiner policy
- a compression policy
- a filter policy
- hadoop fs -copy <source> <destination>
- hadoop fs -copy <destination> <source>
- hadoop fs -copyFromLocal <source> <destination>
- hadoop fs -copyFromLocal <destination> <source>
- managed; metadata
- external; data and metadata
- external; metadata
- managed; data
- EXPLAIN; JOIN Operator
- QUERY; MAP JOIN Operator
- EXPLAIN; MAP JOIN Operator
- QUERY; JOIN Operator
- Two
- Three
- Four
- Five
- invalidate metadata; Impala
- validate metadata; Impala
- invalidate metadata; Hive
- validate metadata; Hive