Unison Systems is looking for a Mid-Level Java Engineer (Back
End Developer) consultant who is interested in expanding their
experience and knowledge within the fast-growing world of Big Data.
This is an exciting opportunity to learn and work hands-on in the
pioneering field of Big Data, developing solutions alongside
highly seasoned software experts who are experienced in this domain.
This consultant will join a small team in the research,
development, support, and deployment of solutions within the Hadoop
ecosystem and real-time distributed computing architectures.
- Help develop solutions to Big Data problems utilizing the Hadoop
ecosystem.
- Help develop solutions for real-time and offline event/log
collection from various data sources.
- Help develop, maintain, and perform analysis within a real-time
architecture supporting large amounts of data from various data
sources.
- Analyze massive amounts of data and help drive prototype ideas
for new tools and products.
- Design, build, and support APIs and services that are exposed to
other internal teams.
- Bachelor's or Master's degree in Engineering Sciences, Computer
Science, Physics, or Mathematics, or equivalent.
- Proven track record of delivering backend systems that
participate in a complex ecosystem.
- 1+ years of experience in Java back-end development.
- A solid foundation in computer science, with strong competencies
in data structures, algorithms, non-blocking I/O, and software
design.
- Extensive experience programming in Java, good current knowledge
of Unix/Linux environments (including scripting), and solid
experience in code optimization and high-performance computing.
- 1+ years of MapReduce experience; Hadoop experience utilizing Pig,
Hive, or Oozie preferred.
- Experience with Java servlet containers or application servers
such as JBoss, Tomcat, GlassFish, WebLogic, or Jetty.
- Good communicator, able to analyze complex issues and
technologies and articulate them clearly and engagingly.
- Great design and problem-solving skills, with a strong bias for
architecting at scale.
- Adaptable, proactive, and willing to take ownership.
- Good understanding of any of: advanced mathematics and statistics.
- Excellent verbal and written communication skills.
- Keen attention to detail and high level of commitment.
- Nice to haves: experience with MongoDB, Python, and log-collection
frameworks such as Flume, Scribe, or Splunk.
- Good understanding of and/or experience with serialization
frameworks such as Thrift, Avro, Google Protocol Buffers, and Kryo.
- 1+ years of distributed database experience (HBase or Cassandra).
- Knowledge of Big Data related technologies and open source
software.
- Experience in software development of large-scale distributed
systems.
LOCATION: Downtown Denver, CO
DURATION: Minimum 6 months
HOURLY PAY RANGE: Pending Experience