Work with technologies such as Python, Flask, and JupyterHub
Analyze user requirements and design appropriate big data solutions that best fit those requirements
Work with a cross-functional Agile Scrum team to design, develop, and test. Write clean, robust code, backed by automated tests.
Ensure that solutions are in line with department and domain architecture strategies and contribute to defining and improving those strategies.
Help build, maintain, and extend an automated testing framework that drives front-to-back integration tests across components
Work collaboratively – share knowledge and help teammates in your area of expertise.
Skills and Experience:
Experience with Flask, Pandas, PyArrow, and Dask; working knowledge of Jupyter Notebooks in a multi-user environment; experience implementing automated test suites for Python kernels; working knowledge of the PySpark API
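As an illustration of the kind of stack described above, here is a minimal sketch of a Flask endpoint serving a Pandas aggregation; the route name and sample data are hypothetical stand-ins for a real data source:

```python
# Minimal sketch: a Flask endpoint returning a Pandas group-by result.
# The route and the in-memory dataset below are illustrative only.
import pandas as pd
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical dataset standing in for a real data source
df = pd.DataFrame({"region": ["east", "west", "east"],
                   "sales": [10, 20, 30]})

@app.route("/sales-by-region")
def sales_by_region():
    # Aggregate sales per region and return the totals as JSON
    totals = df.groupby("region")["sales"].sum()
    return jsonify(totals.to_dict())

if __name__ == "__main__":
    app.run()
```

A test client request to `/sales-by-region` would return the per-region totals as a JSON object.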
Experience with enterprise development, primarily using Python and related technologies; predictive analytics in a Big Data environment
Knowledge of AWS Lambda functions
Familiarity with Big Data technologies (Hadoop, HDFS, Spark, Hive, Impala, YARN)
MS SQL Server, with in-depth knowledge of stored procedures (development and debugging)
Understanding of, and experience designing and building, REST microservices; hands-on experience with all layers of the development stack, including unit and automated tests, in order to collaboratively deliver front-to-back solutions
Understanding of concurrency (multi-threading), distributed systems development, and performance optimization; experience with architectural design patterns and with highly optimized, low-latency, massively scalable platforms
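The multi-threading requirement above can be sketched with Python's standard thread pool; the workload here (summing chunks of numbers) is a hypothetical stand-in for the I/O- or compute-bound tasks a real service would parallelize:

```python
# Sketch: fan work out across a thread pool and combine the results.
# The chunked-sum workload is illustrative only.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    # Per-task unit of work executed on a worker thread
    return sum(chunk)

data = list(range(100))
# Split the input into four chunks of 25 items each
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]

with ThreadPoolExecutor(max_workers=4) as pool:
    # map() preserves input order, so partial results line up with chunks
    partials = list(pool.map(chunk_sum, chunks))

total = sum(partials)
```

Because `map()` returns results in submission order, the combine step is deterministic regardless of which thread finishes first.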
Exposure to deployment environments (OpenShift, Kubernetes, Jenkins, Helm, etc.)
Java development skills are a plus.