In this post, we will look at the questions asked of a candidate with 4+ years of experience in a Tiger Analytics interview for the AWS Data Engineer profile.
Let's see the questions:
1. Share your introduction and discuss any recent projects you have worked on.
2. Questions focused on your project experience and technical implementation.
3. How would you connect multiple tables from different AWS databases (e.g., RDS, Redshift) using a single connection in AWS Glue?
4. Explain the types of triggers available in AWS Glue or AWS Step Functions.
5. How do you handle code deployment from DEV to QA and PROD environments using AWS services?
6. Describe how to create a CI/CD pipeline in AWS using CodePipeline, CodeCommit, and CodeBuild.
7. What types of data transformations have you implemented in your projects using AWS Glue or other tools?
8. How can you replace spaces in column names with underscores in source files using AWS Glue and S3?
9. What is Slowly Changing Dimension (SCD) Type 2, and how can it be implemented using AWS Glue or Redshift?
10. Discuss the differences between AWS S3 and AWS Redshift in terms of storage and usage.
11. How do you read data from S3 using Amazon Redshift Spectrum or Athena?
12. Write a Python function to merge two sorted lists into a single sorted list.
13. Write an SQL query to fetch the second-highest salary department-wise and discuss different approaches to achieve it.
14. How do you create a view in AWS Glue or Amazon Redshift?
15. Write a DDL command to create a table in Amazon Redshift.
16. Which AWS Glue activities have you used in your projects?
17. Explain your familiarity with AWS S3 and IAM security. How do you secure access to data in S3?
18. Discuss the authentication methods available in AWS Glue for accessing S3 or RDS.
19. Provide details about your team structure and your role within the team.
20. What are your skill sets, roles, and responsibilities in your current data engineering project, particularly with Spark and AWS?
21. Design a pipeline to ingest, transform, and load large datasets from S3 into Amazon Redshift using Spark.
22. How would you implement data versioning in a Spark-based pipeline to enable tracking across different versions?
23. Questions about Spark optimizations—what they are and when they should be applied.
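For question 8, the core of the fix is a simple rename pass over the column names. In a Glue job this is typically applied to the DynamicFrame or DataFrame after reading from S3; the string logic itself, stripped of any AWS dependency, is a minimal sketch like this (the sample column names are made up):

```python
def underscore_columns(columns):
    """Replace spaces in column names with underscores."""
    return [c.replace(" ", "_") for c in columns]

cols = ["order id", "customer name", "total amount"]
print(underscore_columns(cols))  # → ['order_id', 'customer_name', 'total_amount']
```

In a Glue PySpark script, the same mapping can be applied to a Spark DataFrame with `df.toDF(*underscore_columns(df.columns))` before writing the cleaned data back out.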
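For question 9, SCD Type 2 preserves history by expiring the current row (setting an end date and clearing a current flag) and inserting a new row whenever a tracked attribute changes. A simplified, AWS-free Python sketch of that merge logic (the record shape and field names here are assumptions for illustration, not a Glue or Redshift API):

```python
def apply_scd2(dimension, updates, load_date):
    """Apply SCD Type 2: expire changed rows and append new versions.

    dimension: list of dicts with keys id, value, start_date, end_date, is_current
    updates:   list of dicts with keys id, value (the latest snapshot from source)
    """
    current = {row["id"]: row for row in dimension if row["is_current"]}
    for upd in updates:
        row = current.get(upd["id"])
        if row is not None and row["value"] == upd["value"]:
            continue  # no change, keep the current version as-is
        if row is not None:
            row["end_date"] = load_date   # expire the old version
            row["is_current"] = False
        dimension.append({                # insert the new current version
            "id": upd["id"], "value": upd["value"],
            "start_date": load_date, "end_date": None, "is_current": True,
        })
    return dimension

dim = [{"id": 1, "value": "Pune", "start_date": "2023-01-01",
        "end_date": None, "is_current": True}]
apply_scd2(dim, [{"id": 1, "value": "Chennai"}], "2024-06-01")
for row in dim:
    print(row)
```

In practice the same pattern is expressed as an UPDATE-then-INSERT (or MERGE) in Redshift, or as a join between the incoming and existing frames in a Glue PySpark job.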
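For question 12, a minimal sketch of the classic two-pointer merge (the function name is my own choice):

```python
def merge_sorted_lists(a, b):
    """Merge two already-sorted lists into one sorted list in O(len(a) + len(b))."""
    merged = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i])
            i += 1
        else:
            merged.append(b[j])
            j += 1
    # One list is exhausted; append the remainder of the other.
    merged.extend(a[i:])
    merged.extend(b[j:])
    return merged

print(merge_sorted_lists([1, 3, 5], [2, 4, 6]))  # → [1, 2, 3, 4, 5, 6]
```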
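For question 13, one common approach uses the DENSE_RANK window function to rank salaries within each department and keep rank 2. Sketched here against an in-memory SQLite database with made-up sample data (the table and column names are assumptions; the same query shape works in Redshift):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (name TEXT, department TEXT, salary INTEGER);
INSERT INTO employees VALUES
  ('Asha', 'Sales', 90000), ('Ravi', 'Sales', 80000), ('Meena', 'Sales', 70000),
  ('John', 'IT', 120000), ('Priya', 'IT', 110000);
""")

# Rank salaries within each department (highest = 1), then keep rank 2.
query = """
SELECT department, name, salary
FROM (
    SELECT department, name, salary,
           DENSE_RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS rnk
    FROM employees
)
WHERE rnk = 2;
"""
rows = conn.execute(query).fetchall()
print(rows)
```

Alternative approaches worth mentioning in an interview: a correlated subquery (`salary = MAX(salary)` below the department max), or `LIMIT 1 OFFSET 1` per department. DENSE_RANK is usually preferred because it handles salary ties cleanly.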
I hope these questions assist anyone preparing for their interviews.
To learn more about the company, check out: https://www.tigeranalytics.com/
To see the company's rating on Glassdoor, check out: https://www.glassdoor.co.in/Overview/Working-at-Tiger-Analytics-EI_IE717029.11,26.htm
To see the company's profile on LinkedIn, check out: https://www.linkedin.com/company/tiger-analytics/posts/?feedView=all
Thank you for reading this post.