===============================
Fork Changes
===============================
This is a quick-and-dirty fork to make cascading.avro do what I needed for use
within Cascalog; it was done to fit my immediate needs rather than to be a
polished release.
1. Added a new constructor to AvroScheme (cascadingFields, schemaTypes, avroFieldNames),
where cascadingFields are the Cascading tuple field names and avroFieldNames are the
Avro field names. Cascalog uses prefixes such as "?" on tuple names, which are
illegal in Avro; the alias approach was a quick way to work around this. (The three
list/array inputs would probably be better expressed as a list of maps or objects
providing the field settings: tuple name, type, Avro field name, etc.) See the
sketch after this list for an example.
2. Updated to Avro 1.6.3. This had some interesting effects, though. The newer
Avro no longer supports nested enums, so I needed to split out that test. I also
ran into trouble combining the null schema with Map and Enum fields, so those
field types are no longer "optional" when used. This didn't impact me, but it may
impact others, so just be aware of it. Some of the tests were adjusted because of
this.
3. Added support for codecs. avro.codec is now always written to the Avro file,
even for a null codec (while the property is optional, some utilities, such as
avro-c's avrocat, failed to read files without an avro.codec metadata property in
the container). Hadoop's FileOutputFormat.setCompressOutput is also set to true,
since otherwise avro.codec won't be written. With this change I've been able to
write files with the deflate codec as well.
4. Added automatic conversion to/from Date fields. If the scheme type is
Date.class, the value is written to the Avro file as a long (getTime) and converted
back to a Date when read in. This was especially handy when working with the
cascading db-migrate source and writing to Avro sinks (the sketch after this list
includes a Date field). Given enough time, though, I'd revisit how field
conversion to/from Avro files is handled and find an approach more extensible than
arrays of types and fixed conversion routines.
5. I wasn't sure what etiquette dictated when forking a Maven-packaged project, so
I modified the POM slightly: incremented the version to 1.1 and changed the group
from com.bixolabs to org.mikestanley. I did not change the Java package structure
or anything like that. I haven't pushed this to any central repository, so you
will need to install it locally (ant install) to use it in your projects. I don't
plan on maintaining this as a fork, though, and will be sending a pull request.
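As a rough illustration of changes 1 and 4, here is a hedged sketch of how the
added constructor might be used. The exact argument types (Class[] for the scheme
types, String[] for the Avro field names) and the field names below are
assumptions based on the description above, not verified against the code.

// Hypothetical usage of the aliasing constructor from change 1. Cascalog-style
// tuple names ("?id", "?when") are aliased to Avro-legal field names ("id",
// "when"), and the Date.class value is stored as a long (change 4).
final Fields cascadingFields = new Fields("?id", "?when");
final Class<?>[] schemeTypes = {Integer.class, Date.class};
final String[] avroFieldNames = {"id", "when"};
Tap avroTap = new Hfs(new AvroScheme(cascadingFields, schemeTypes, avroFieldNames), "outputdir");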
If you have any questions, contact me through GitHub.
Cheers,
...Mike
===============================
Introduction
===============================
cascading.avro is a Cascading Scheme for the Apache Avro serialization format;
it has been publicly released by Bixo Labs under the Apache license.
This means you can use Avro as the source of tuples for a Cascading
flow, and as a sink for saving results. This is particularly useful when
you need to exchange data with other programming languages, as Avro is both
efficient and cross-language.
Information about Avro is available from http://hadoop.apache.org/avro/
and also http://wiki.apache.org/hadoop/Avro
===============================
Design
===============================
When you create an AvroScheme, you specify the fields and the type (class) of
each field. This lets the Scheme auto-generate a corresponding Avro schema,
used for both reading and writing data.
The set of supported types, with the corresponding Avro type, is:
Integer.class, Schema.Type.INT
Long.class, Schema.Type.LONG
Boolean.class, Schema.Type.BOOLEAN
Double.class, Schema.Type.DOUBLE
Float.class, Schema.Type.FLOAT
String.class, Schema.Type.STRING
BytesWritable.class, Schema.Type.BYTES
List.class, Schema.Type.ARRAY
Map.class, Schema.Type.MAP
Enum.class, Schema.Type.ENUM
See the Limitations section below for how List, Map and Enum values are actually
represented inside a Tuple as nested Tuples.
===============================
Example
===============================
// The scheme types must line up positionally with the Cascading field names.
final Fields cascadingFields = new Fields("id", "timestamp", "statusMsg", "content");
final Class<?>[] schemeTypes = {Integer.class, Long.class, String.class, BytesWritable.class};
Tap avroSource = new Hfs(new AvroScheme(cascadingFields, schemeTypes), "outputdir");
...hook up your Cascading workflow here...
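A minimal follow-up sketch, assuming a plain TextLine sink and placeholder paths
and pipe names, showing the Tap above used as the source of a flow:

// Read Avro records through the Tap above and write them back out as text lines.
// "textdir" and the pipe name are placeholders for your own workflow.
Pipe pipe = new Pipe("avro-to-text");
Tap textSink = new Hfs(new TextLine(), "textdir");
Flow flow = new FlowConnector().connect(avroSource, textSink, pipe);
flow.complete();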
===============================
Limitations
===============================
There's currently no way to set the metadata for the generated Avro file, nor is
there any way to read that metadata back in. Avro itself now supports this; we
just need to hook it up in the Scheme.
The support for lists and maps is rudimentary and kludgy. Since these aren't natively
supported by Cascading yet (as of 1.1), we imitate them by nesting Tuples inside the
Tuple. So a list is a nested Tuple of 0..n primitive values, and a map is a nested
Tuple that contains alternating key/value pairs. Note that Avro maps always use a
String as the key, so that same constraint exists on the Cascading side.
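For example, under this convention a map field with two entries and a list field
with three elements would be laid out along these lines (the field contents are
purely illustrative):

// A map field becomes a nested Tuple of alternating String keys and values.
Tuple counts = new Tuple("apples", 3, "oranges", 5);
// A list field becomes a nested Tuple of 0..n primitive values.
Tuple tags = new Tuple("red", "green", "blue");
// Both nested Tuples sit inside the outer Tuple alongside ordinary fields.
Tuple record = new Tuple("user-42", counts, tags);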
The support for enums assumes that the Tuple has stored the enum value as the string
returned by its toString() method, such that calling valueOf(string) on the enum class
will return the original constant.
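For a hypothetical enum, that round trip looks like this:

// Hypothetical enum used as a field value.
enum Color { RED, GREEN, BLUE }

// Write side: the Tuple stores the string form of the constant.
String stored = Color.GREEN.toString();   // "GREEN"
// Read side: the original constant is recovered from that string.
Color restored = Color.valueOf(stored);   // Color.GREEN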
There's no support for using anything other than primitive types as values, so you can't
leverage Avro support for nesting records/maps/arrays as fields in the Avro record. While
Cascading does support using arbitrary types for fields, as long as they can be serialized,
we do not and probably will not attempt to synthesize mappings of arbitrarily complex data
types between Cascading and Avro.
===============================
Known Issues
===============================
Avro doesn't work properly with standard Hadoop installations prior to 0.20, since
Hadoop bundles an older version (1.0.1) of the Jackson jar that Avro requires, and
Avro uses some newer Jackson APIs. So you'll have to adjust the Hadoop setup to
exclude that older Jackson jar, or play games with the classloader to ensure the
newer jar gets used with your job.
===============================
Building
===============================
You need Apache Ant 1.7 or higher, and a git client.
1. Download source from GitHub
% git clone git://github.com/bixolabs/cascading.avro.git
% cd cascading.avro
2. Build the jar
% ant clean jar
or to build and install the jar in your local Maven repo:
% ant clean install
3. Create Eclipse project files
% ant eclipse
Then, from Eclipse follow the standard procedure to import an existing Java project into your Workspace.