|This posting is managed by:||Morgan McKinley K.K.|
|Company Name||Company is not publicly visible|
IT (Mainframe) - Database SE
Designing and developing a high-capacity data pipeline capable of processing large volumes of data on an hourly basis, ensuring timely accessibility for end-users.
Ensuring the smooth operation and maintenance of the data pipeline, which encompasses tasks such as deploying new features/fixes, troubleshooting outages, investigating data discrepancies, and similar activities.
Collaborating with the customer service and product teams to incorporate additional features and address customer use cases related to data access.
Collaborating with other engineering teams to enhance the performance and reliability of the services that consume the data.
|Company Info||Join our dynamic and innovative team and take your Big Data Engineer career to new heights. Apply now to be part of our exciting journey!|
|Working Hours||9:00 to 17:30|
Experience building applications with SQL and Spark/Scala
Experience taking Spark applications from scratch to production grade
Good understanding of modifying existing Spark applications to add new features
Working experience with DevOps tooling such as Jenkins, Ansible, and Chef
|English Level||Business Conversation Level (TOEIC 735-860)|
|Hour Salary||Depends on experience|
|Estimated Annual Salary||JPY 4,500K - JPY 7,000K|
Commuting/ Transportation Allowance