The Core Technology Group (CTG) is the innovation hub within Informatica, responsible for delivering an extensible, reliable, and scalable platform that provides the solid foundation for all other Informatica products. Specifically, our team focuses on building a high-performance, scalable Data Engine that can scale on multi-processor environments as well as on a cluster of nodes. The engine is capable of handling batch, request/response, and real-time workloads to deliver data for applications. Here are some of the areas you will be testing as part of our team:
- Optimizing query plans based on heuristics and rules
- Cost based optimizations such as join re-ordering and usage of execution statistics
- A cross-compiler that translates the representation of a data-flow plan from one engine to another, such as Hadoop
- Reducing latency and processing time of query operations to return instantaneous results
- Delivering large data sets to client applications with extremely high throughput
- Enabling the Data Engine to be available across on-premises, Cloud, Hadoop, and embedded applications
Advanced Customer Engineering (ACE) for Big Data is the customer-facing group of the Big Data R&D team. ACE engineers support Big Data products including Big Data Management (BDM), Intelligent Data Lake (IDL), and Informatica Intelligent Streaming (IIS). ACE engineers get to understand complex real-world architectures and implementations, debug and narrow down challenging issues in customer and POC environments, and interact with world-class Professional Services and Sales/Pre-Sales Consultants and Architects.
You will be a key contributor to the adoption of Informatica's next-generation Big Data Management solution using the Spark and Blaze engines. You will have the rewarding experience of seeing your solutions and recommendations make an immediate impact and generate direct feedback. You will gain first-hand experience of the rapidly evolving Big Data and Cloud Computing landscape, such as support for ephemeral Hadoop clusters in the Cloud or HDFS support for multihomed networks. You will have the best shot at productizing your solutions in an upcoming release, or the opportunity to articulate product shortcomings and limitations and thus shape the future roadmap of Big Data products.
Our Ideal Candidate
You are an experienced software engineer with strong Java programming skills who has developed or tested large-scale data management or database products. You relish the opportunity to work on implementations involving Data Integration, Cloud Computing, Big Data, and Hadoop, including:
- Distributed file systems (HDFS) and Hive tables
- Distributed execution using Spark/Hive
- Exposure to or a good understanding of cloud deployment on AWS/Azure
- Ability to learn new technologies, experiment, and fail fast
The successful candidate will be based in Bangalore and will:
- Understand complex Big Data use cases
- Analyze and debug reported issues
- Narrow down the root cause of an issue and explore suitable workarounds
- Code, build, and publish critical Emergency Bug Fixes (EBFs)
- Develop and address the most commonly requested supportability fixes and features
- Collaborate with geographically dispersed, cross-functional teams and customers
- Work closely with development, support, and product management, responding to customer queries in the shortest time possible
- Collaborate effectively with peer engineers and architects to solve complex problems during customer POCs
- Influence the roadmap of Big Data products
- Articulate a given situation to the customer and clearly communicate next steps
Qualifications:
- BS/MS in Computer Science, Computer Engineering, or an equivalent technical degree
- 3 or more years of professional experience in development
- Product development experience with Java as primary programming language
- Networking fundamentals: firewalls, ports, virtual private networks, etc.
- Debugging skills: Java and C++ processes, analyzing process stacks, heap dumps, core dumps, etc.
- Operating system fundamentals: CPU, I/O, and other resources (file descriptors, open ports)
- Ability to use Maven, Eclipse, and Visual Studio is a plus
- Excellent problem solving, analytical skills and technical troubleshooting abilities
- Hands-on exposure to Hadoop: distributed file systems (HDFS), Hive tables, and distributed execution using Spark/Hive
- Good understanding of cloud deployment on AWS/Azure
- Experienced in Agile software development methodology
- Ability to prioritize tasks, track and deliver on-time
- Good knowledge of RDBMSs, with expertise in one of Oracle, DB2, SQL Server, Teradata, or Sybase
- Experience writing SQL scripts, shell scripts, and/or scripts in other scripting languages
- Strong working knowledge of Linux, Unix, and Windows operating systems
- Knowledge of Data Warehousing concepts
- In-depth understanding of large complex software systems to isolate defects, reproduce defects, assess risk and understand varied customer deployments
- Clarity and precision in verbal and written communications
- Strong interpersonal and relationship building skills within an organization
- Proven ability to work well with others in a fast-paced, iterative product definition and development environment
- Ability to work with customers, developers, QA, documentation, product management and support staff
- Ability to learn new skills quickly as needed
Lead Software Engineer, ACE