#hiringalert #c2crequirements #c2cjobs #highpriority #DatabaseAdministratorSystem #AdministratorSystems #NeedLocals #sacramento #California #requirements #Knacksoft

Please share profiles to pratapkammili@knacksoft.com

Role: Database Administrator (System)
Client: Oncore
Work Location: Sacramento, California (Hybrid)

Responsibilities include, but are not limited to:
· Actively work with database software support for raising support tickets and triaging outages.
· Implement/support automation for production deployments on the database platform.
· Develop, build, and maintain a mission-critical centralized database platform.
· Diagnose and troubleshoot database errors.
· Implement user/query log analysis and alerts.
· Collaborate with heterogeneous user groups, from data engineers to data scientists.
· Implement data encryption/decryption/masking policies to protect PII/PCI data (a masking sketch follows this posting).
· Demonstrated proficiency in index design, query plan optimization, and analysis required.
· Perform user security auditing.
· Create utilization and capacity plans.
· Tune the database to improve query performance.
· Provide technical leadership, foster a team environment, and provide mentorship and feedback to technical resources.
· Review project requests describing database user needs to estimate the time and cost required to accomplish the project.
· Handle user management, security management, performance management, high-availability management, space management, and backup management of databases.
· Build and maintain secure, compliant data processing pipelines, tables, views, procedures, and datasets.
· Integrate, transform, and consolidate data from various structured and unstructured data systems into structures suitable for building analytics solutions and/or performing analysis.
· Use database management tools and other Azure services for availability and performance monitoring.
· Automate database administration tasks using SQL and shell scripts.
· Work with engineers in support of SSO authentication requirements.
· Manage sets of XML, JSON, and CSV data from disparate sources.
· Interpret, identify, and solve unique and meaningful problems.
· Lead technical contributions to requirements/story development conversations.
· Communicate complex findings in a structured and clear manner to non-technical audiences (written and verbal), including leadership.
· Follow best practices and standards, identify gaps, and suggest changes.
· Understand environmental constraints and plan to mitigate them.
· Work respectfully with all team members, both OneAmerica colleagues and vendor partner team members.

General Qualifications:
· Actively work with Snowflake support for raising support tickets and triaging outages.
· Implement/support automation for production deployments in Snowflake.
· Develop, build, and maintain a mission-critical centralized data platform using Snowflake.
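For illustration of the PII masking duty above, here is a minimal sketch using a Snowflake column masking policy via the snowflake-connector-python package. The account, role, table, and column names are hypothetical placeholders, and masking policies assume a Snowflake edition that supports them; this is a sketch, not the client's actual setup.

```python
# Sketch: Snowflake column masking policy for a PII column.
# Account, credentials, roles, and table/column names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical account identifier
    user="dba_user",
    password="...",         # use a secrets manager in practice
    warehouse="ADMIN_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()

# Mask email addresses for everyone except privileged roles.
cur.execute("""
    CREATE MASKING POLICY IF NOT EXISTS email_mask AS (val STRING)
    RETURNS STRING ->
    CASE
        WHEN CURRENT_ROLE() IN ('PII_ADMIN', 'SECURITY_AUDITOR') THEN val
        ELSE '***MASKED***'
    END
""")

# Attach the policy to the PII column; reads through any role outside the
# allow-list now return the masked value.
cur.execute("""
    ALTER TABLE customers MODIFY COLUMN email
    SET MASKING POLICY email_mask
""")

cur.close()
conn.close()
```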
#linkedinconnection #linkedinfamily #hiringimmediately

Data Engineer with AWS Redshift (Mandatory)
Location: Harrisburg, PA (Onsite)
Visa: H4-EAD/GC-EAD/GC/USC
Duration: 12 Months, C2C

Send resumes to mark@vitconsystems.com for an immediate response.

As an AWS Redshift Data Engineer, the primary responsibility is to design, implement, and maintain data solutions using Amazon Redshift. The ideal candidate should possess the following skills:

Data Modeling and Design: Develop and maintain data models for Redshift databases, including schema design, table structures, and optimization techniques. Collaborate with data architects and stakeholders to understand requirements and translate them into efficient data structures.

ETL Development: Design and implement Extract, Transform, Load (ETL) processes to extract data from various sources, transform it per business requirements, and load it into Redshift. Develop efficient and scalable ETL workflows, considering data quality, performance, and data governance.

Performance Optimization: Optimize query performance by creating appropriate data distribution keys, sort keys, and compression techniques (a table-design sketch follows this posting). Identify and resolve performance bottlenecks, fine-tune queries, and optimize data processing to enhance Redshift's performance.

Data Integration: Integrate Redshift with other AWS services, such as AWS Glue, AWS Lambda, and Amazon S3, to build end-to-end data pipelines. Ensure seamless data flow between different systems and platforms, maintaining data integrity and consistency.

Monitoring and Troubleshooting: Implement monitoring and alerting systems to proactively identify issues and ensure the stability and availability of the Redshift cluster. Troubleshoot, diagnose, and resolve data-related issues, and work closely with support teams to resolve any performance or operational problems.

Security and Compliance: Implement security best practices to protect data stored in Redshift. Ensure compliance with data privacy regulations and industry standards, such as GDPR and HIPAA. Implement encryption, access controls, and data masking techniques to secure sensitive data.

Documentation and Collaboration: Maintain documentation of data models, ETL processes, and system configurations. Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and provide data solutions that meet their needs.

Scalability and Capacity Planning: Plan and execute strategies for scaling Redshift clusters to handle increasing data volumes and user demands. Monitor resource utilization, track data growth, and make recommendations for capacity planning and infrastructure scaling.

Knowledge of or previous experience in Oracle PL/SQL will be an added advantage.

#dataengineer #dataengineerjobs #usitrecruitment #c2crequirements #usitstaffing #immediatehiring #itandsoftware #usaitjobs #corp2corp
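To make the distribution-key/sort-key/compression point concrete, here is a minimal sketch of a Redshift table designed around a join and a date filter. The cluster endpoint, credentials, and the sales table are hypothetical; Redshift speaks the PostgreSQL wire protocol, so the psycopg2 driver is assumed.

```python
# Sketch: Redshift table design with DISTKEY, SORTKEY, and column encoding.
# Endpoint, credentials, and schema are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",
)

with conn, conn.cursor() as cur:
    # DISTKEY co-locates rows joining on customer_id on the same slice;
    # SORTKEY lets range-restricted scans on sale_date skip blocks;
    # AZ64 encoding compresses numeric/date columns cheaply.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS sales (
            sale_id       BIGINT        ENCODE az64,
            customer_id   BIGINT        ENCODE az64,
            sale_date     DATE          ENCODE az64,
            amount        DECIMAL(12,2) ENCODE az64
        )
        DISTKEY (customer_id)
        SORTKEY (sale_date)
    """)
conn.close()
```

The key design choice: pick the DISTKEY from the most frequent large join, and the SORTKEY from the most frequent range predicate, since both are fixed at table-creation time in classic Redshift DDL.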
#hiringimmediately

Data Engineer with AWS Redshift (Mandatory)
Location: Harrisburg, PA (Onsite)
Visa: H4-EAD/GC-EAD/GC/USC
Duration: 12 Months, C2C/W2

As an AWS Redshift Data Engineer, the primary responsibility is to design, implement, and maintain data solutions using Amazon Redshift. The ideal candidate should possess the following skills:

Data Modeling and Design: Develop and maintain data models for Redshift databases, including schema design, table structures, and optimization techniques. Collaborate with data architects and stakeholders to understand requirements and translate them into efficient data structures.

ETL Development: Design and implement Extract, Transform, Load (ETL) processes to extract data from various sources, transform it per business requirements, and load it into Redshift. Develop efficient and scalable ETL workflows, considering data quality, performance, and data governance.

Performance Optimization: Optimize query performance by creating appropriate data distribution keys, sort keys, and compression techniques. Identify and resolve performance bottlenecks, fine-tune queries, and optimize data processing to enhance Redshift's performance.

Data Integration: Integrate Redshift with other AWS services, such as AWS Glue, AWS Lambda, and Amazon S3, to build end-to-end data pipelines (a load sketch follows this posting). Ensure seamless data flow between different systems and platforms, maintaining data integrity and consistency.

Monitoring and Troubleshooting: Implement monitoring and alerting systems to proactively identify issues and ensure the stability and availability of the Redshift cluster. Troubleshoot, diagnose, and resolve data-related issues, and work closely with support teams to resolve any performance or operational problems.

Security and Compliance: Implement security best practices to protect data stored in Redshift. Ensure compliance with data privacy regulations and industry standards, such as GDPR and HIPAA. Implement encryption, access controls, and data masking techniques to secure sensitive data.

Documentation and Collaboration: Maintain documentation of data models, ETL processes, and system configurations. Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and provide data solutions that meet their needs.

Scalability and Capacity Planning: Plan and execute strategies for scaling Redshift clusters to handle increasing data volumes and user demands. Monitor resource utilization, track data growth, and make recommendations for capacity planning and infrastructure scaling.

Knowledge of or previous experience in Oracle PL/SQL will be an added advantage.

Send resumes to sam@ojusllc.com for an immediate response.

#dataengineer #dataengineerjobs #usitrecruitment #c2crequirements #usitstaffing #immediatehiring #itandsoftware #usaitjobs #corp2corp
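For the S3-to-Redshift integration path this posting describes, the standard load primitive is COPY. A minimal sketch, again with a hypothetical cluster, bucket, table, and IAM role ARN:

```python
# Sketch: bulk load from S3 into Redshift with COPY.
# Endpoint, bucket, table, and IAM role are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical
    port=5439, dbname="analytics", user="etl_user", password="...",
)

with conn, conn.cursor() as cur:
    # COPY parallelizes the load across cluster slices; gzipped CSV keeps
    # the S3 footprint small. The role must allow s3:GetObject on the bucket.
    cur.execute("""
        COPY sales
        FROM 's3://my-data-bucket/incoming/sales/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS CSV
        GZIP
        TIMEFORMAT 'auto'
    """)
conn.close()
```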
CoreTek Labs is #hiring #hiringnow #dataarchitect #seniordataanalyst #c2c #w2

Job Title: Sr. Data Architect
Location: Remote
Duration: 6 Months, Contract to Hire

Overview: This role is vital in implementing Enterprise Data Architecture, focusing on modern digital products and high-availability, sub-second response implementations (a connection sketch follows this posting). The ideal candidate will possess a deep understanding of both relational and NoSQL database systems, with a strong background in modernizing and migrating legacy systems. We need a Data Architect experienced not just in relational DB implementations but also in modern digital products with NoSQL, high-availability, sub-second response implementations. In addition, we need experience in modernizing and migrating legacy systems, as well as in shaping the data architecture and simplifying integration needs of the legacy application's upstream and downstream systems while the solution is being rolled out.

Qualifications:
• Minimum of 8-10 years of experience in data architecture, database design, and data modeling.
• Proven experience as a Data Architect, particularly in relational and NoSQL database implementations.
• Experience in the modernization and migration of legacy systems.
• Proven track record with relational and NoSQL database implementations, including experience with high-availability, sub-second response systems.
• Demonstrated experience in modernizing and migrating legacy systems, with a focus on integrating and simplifying data from these systems.
• E-commerce Expertise: Experience with the architecture and data integration of e-commerce platforms, with an emphasis on enhancing user experiences and supporting online transaction processing.
• Master Data Management: Knowledge and experience in implementing Master Data Management (MDM) strategies to improve data consistency, quality, and governance.

Technical Skills:
• Strong proficiency in SQL and familiarity with a variety of database technologies (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
• Experience with big data technologies and frameworks (e.g., Hadoop, Spark).
• Familiarity with data integration tools and ETL processes.
• Project Management: Experience with project management and agile methodologies. Ability to lead projects and coordinate with various teams to meet deadlines and objectives.
• Continuous Learning: Commitment to staying current with the latest developments in data architecture, database technology, and related fields.
• A strong understanding of big data solutions and lifecycle management.

Share profiles to susil@coretek.io, (816) 330-2321

#dataarchitect #dataarchitecture #dataarchitects #etl #legacysystems #mysql #postgresql #mongodb #cassandra #hadoop #spark #bigdatatechnologies #agilemethodologies #mdm #ecommerce #nosqldatabases #databasedesign #datamodelling #modernization #c2c #c2crequirements #c2cjobs #c2chotlist #c2cvendors #c2cconsultants #c2croles #w2requirements #w2 #w2jobs
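As one concrete reading of "high-availability, sub-second response" on the NoSQL side, here is a minimal pymongo sketch against a MongoDB replica set (the posting lists MongoDB among the expected technologies). Host names, database, collection, and field names are hypothetical.

```python
# Sketch: high-availability, low-latency reads from a MongoDB replica set.
# Hosts, replica set name, and collection/field names are hypothetical.
from pymongo import MongoClient

# A replica-set connection lets the driver fail over automatically if the
# primary goes down; secondaryPreferred spreads read load across replicas.
client = MongoClient(
    "mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0",
    readPreference="secondaryPreferred",
)

products = client["catalog"]["products"]

# An indexed point lookup typically returns in single-digit milliseconds.
products.create_index("sku")
doc = products.find_one({"sku": "ABC-123"}, {"_id": 0, "sku": 1, "price": 1})
print(doc)
```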
Redshift Advisory/SME Role
Duration: 12 months, with extensions
Location: Dallas, TX (must be local); 100% remote work
Visa: USC, GC, GC-EAD, TN

We have a customer that is migrating from Teradata to Redshift. This opportunity is initially a part-time role, approximately 20 hours a week, and will likely increase to more hours once established. We also need someone willing to consider going FTE after 12 months. No, we do not want them to; we want them to continue contracting, of course! But they must be willing to play the game and then, at that time, when asked, say that they would prefer to continue contracting.

The basics:
· Deep data warehousing experience doing large-scale implementations and migrations
· Knowledge of Teradata (modeling, indexing, BTEQ, etc.)
· Experience with Ab Initio or similar ETL tools
· Redshift DW expertise at the SME level

This is not a "doer" role; it is more of an advisory/architect/thought-leadership role to help steer, guide, and champion the customer around corners.

Description: As a technical advisor on the Teradata-to-Redshift cloud data migration project, the consultant is to provide expertise in:
1. Architecture and Design: Collaborate with the project team to design the optimal cloud architecture for data migration, ensuring alignment with best practices, security standards, and scalability requirements.
2. Technology Evaluation: Assess and recommend appropriate tools, frameworks, and methodologies for data extraction, transformation, and loading (ETL) processes.
3. Data Migration Strategy: Advise on the migration strategy that outlines the approach for transferring data from Teradata to Redshift, including migration timelines and dependencies.
4. ETL Pipeline Implementation: Collaborate with data engineers to design and implement efficient ETL pipelines for data transformation and transfer.
5. Performance Optimization: Advise on the monitoring and fine-tuning of Redshift performance to ensure optimal query execution, efficient resource utilization, and minimal latency.
6. Security and Compliance: Work with the security team to establish robust data encryption, access controls, and auditing mechanisms within the Redshift environment.
7. Testing and Validation: Advise and consult the testing team on the planning and execution of thorough testing procedures to validate data accuracy, completeness, and consistency after migration (a reconciliation sketch follows this posting).
8. Knowledge Transfer and Training: Provide training and knowledge transfer to the internal team to help ensure they are equipped to manage and maintain the data platform post-migration.

#RedshiftMigrationExpert #DataWarehousingAdvisor #ETLStrategyConsultant #CloudDataMigration #RedshiftSME #TeradataToRedshift #DataArchitecture #DataMigrationStrategy #PerformanceOptimization #DataSecurity #DataValidation #KnowledgeTransfer #c2c #w2 #remote #opening
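One common validation step after a Teradata-to-Redshift cutover (item 7 above) is per-table row-count reconciliation. A minimal sketch, assuming the teradatasql and psycopg2 drivers and hypothetical hosts, credentials, and table names; real validation would also compare checksums or sampled column aggregates:

```python
# Sketch: row-count reconciliation between Teradata (source) and
# Redshift (target) after migration. All connection details and the
# table list are hypothetical.
import teradatasql
import psycopg2

TABLES = ["sales", "customers", "orders"]  # hypothetical migrated tables

td = teradatasql.connect(host="td.example.com", user="dbc", password="...")
rs = psycopg2.connect(host="my-cluster.example.com", port=5439,
                      dbname="analytics", user="migrator", password="...")

def count_rows(conn, table):
    # The same query shape works on both engines (PEP 249 cursors).
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    n = cur.fetchone()[0]
    cur.close()
    return n

for t in TABLES:
    src, dst = count_rows(td, t), count_rows(rs, t)
    status = "OK" if src == dst else "MISMATCH"
    print(f"{t}: teradata={src} redshift={dst} {status}")

td.close()
rs.close()
```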
Hello everyone, we are #Hiring for the role #Data_Warehouse_Architect with one of our clients at #Lansing, #MI. #Duration: 1+ year. Anyone #Interested, please share your updated resume to Rushitha@arohatechnologies.com

Mandatory:
· 10+ years of overall experience
· T-SQL: 10+ years
· PowerShell: 6+ years
· Azure DevOps: 6+ years
· Scrum: 6+ years

Job Description:
· Designs, implements, and supports data warehousing.
· Implements business rules via stored procedures, middleware, or other technologies (a stored-procedure sketch follows this posting).
· Defines user interfaces and functional specifications.
· Responsible for verifying the accuracy of data, and for the maintenance and support of the data warehouse.
· Knowledge of data warehouse end-to-end implementation processes: business-requirement logical modeling, physical database design, ETL, end-user tools, database, SQL, and performance tuning.
· Demonstrated problem-resolution skills within a team, and strong leadership of the implementation team.
· Development experience implementing data warehousing utilizing an RDBMS.
· Understanding of data warehouse metadata concepts, tools, and different data warehouse methodologies.
· Expertise in SQL and proficiency in database tuning techniques.
· Responsible for the ongoing architecture and design of the data warehouse, data mart, and reporting environments.
· Relies on experience and judgment to plan and accomplish goals; independently performs a variety of complicated tasks; a wide degree of creativity and latitude is expected.
· Defines and documents the technical architecture of the data warehouse, including the physical components and their functionality.
· Sets and enforces standards and overall architecture for data warehouse systems.
· Ensures compatibility of the different components of the DW architecture and alignment with broader IT strategies and goals.
· Ability to educate project teams on the standards and architecture of each component of the data warehouse architecture.

Roles and Responsibilities:
1. Participates in designing, developing, and refining the client's Azure database services to ensure they are secure, reliable, and robust. Implements changes to the Azure environment to increase performance and efficiency.
2. Develops and implements detection and disaster-recovery activities to test Azure services; participates in detecting, investigating, documenting, and reporting actual or potential Azure environment security violations, intrusions, failures, performance issues, or other issues.
3. Designs Azure data migration backbone infrastructure to provide reliable, optimized, high-performance cloud services.
4. Designs CEDS-compliant databases and data structures in the Azure environment.
5. Evaluates security products and tests security-system performance; assists in planning, implementing, and testing disaster-recovery procedures; participates in making formal risk assessments related to the State of Michigan Azure environment.
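To illustrate "business rules via stored procedures" in the T-SQL stack this posting requires, here is a minimal sketch deployed through pyodbc. The server, database, tables, and the rule itself (flagging orders over a customer's credit limit) are hypothetical.

```python
# Sketch: deploy and run a T-SQL stored procedure encoding a business rule.
# Server, database, and table/column names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dw-sql01;DATABASE=EDW;Trusted_Connection=yes;"  # hypothetical
)
conn.autocommit = True
cursor = conn.cursor()

# Drop-and-recreate keeps the sketch idempotent.
cursor.execute("""
IF OBJECT_ID('dbo.usp_FlagOverLimitOrders', 'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_FlagOverLimitOrders;
""")

# Business rule: flag orders whose amount exceeds the customer credit limit.
cursor.execute("""
CREATE PROCEDURE dbo.usp_FlagOverLimitOrders
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE o
    SET o.review_flag = 1
    FROM dbo.Orders AS o
    JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
    WHERE o.Amount > c.CreditLimit;
END
""")

cursor.execute("EXEC dbo.usp_FlagOverLimitOrders;")
conn.close()
```

Keeping the rule in a procedure, as the posting suggests, centralizes it in the database so every ETL job and report applies the same logic.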
THIS IS FOR #W2, NOT FOR #C2C

Job Title: Senior Data Engineer
Location: [Specify Location]
Experience: 8+ years

Job Description: We are looking for a seasoned Senior Data Engineer with over 8 years of experience to lead and drive our data engineering initiatives. As a Senior Data Engineer, you will be responsible for designing, implementing, and maintaining robust data infrastructure while providing technical leadership to the data engineering team. If you have a proven track record of successfully delivering complex data solutions and are passionate about driving innovation in the data space, we encourage you to apply.

Responsibilities:

Data Architecture and Design: Lead the design and implementation of scalable and efficient data models. Provide expertise in designing and evolving data architecture to meet current and future business needs.

Data Integration and ETL: Architect and implement data integration solutions to unify data from diverse sources. Oversee the development of ETL processes to ensure seamless and accurate data flow (a PySpark sketch follows this posting).

Leadership and Mentoring: Provide technical leadership and mentorship to junior data engineers. Collaborate with cross-functional teams and guide them on best practices in data engineering.

Optimization, Performance, and Big Data Technologies: Leverage expertise in big data technologies, such as Hadoop and Spark, to solve complex data challenges. Stay current with industry trends and incorporate new technologies to improve data solutions.

Cloud Platform Expertise: Demonstrate proficiency in cloud platforms (e.g., AWS, Azure, GCP) and utilize cloud services for scalable and flexible data solutions.

Data Governance and Compliance: Implement and enforce data governance policies and ensure compliance with data regulations. Work closely with stakeholders to address data quality and security concerns.

Collaboration and Communication: Collaborate with cross-functional teams, data scientists, and analysts to understand data requirements. Communicate effectively with both technical and non-technical stakeholders.

Qualifications:
· Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
· 8+ years of hands-on experience in data engineering roles.
· Strong expertise in SQL, database management systems, and data modeling.
· Proficient in programming languages such as Python, Java, or Scala.
· Experience with big data technologies (Hadoop, Spark) and related ecosystems.
· In-depth knowledge of ETL tools and processes.
· Demonstrated leadership and mentoring skills.
· Familiarity with cloud-based data solutions and services.
· Excellent problem-solving and analytical skills.
· Strong communication and interpersonal skills.

#DataEngineer #BigData #DataInfrastructure #DataArchitecture #ETL #DataModeling #SQL #DataIntegration #DataPipeline #DataWarehouse #CloudComputing #AWS #Azure #GCP #Hadoop #Spark #Python #Java #Scala
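Since the posting names Spark explicitly, here is a minimal PySpark sketch of the kind of ETL step described under "Data Integration and ETL": read raw CSV, clean it, and write partitioned Parquet. The S3 paths and column names are hypothetical placeholders.

```python
# Sketch: a small PySpark ETL step (extract CSV, transform, load Parquet).
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("s3a://raw-bucket/orders/"))            # hypothetical source

cleaned = (raw
           .dropDuplicates(["order_id"])            # basic data-quality step
           .withColumn("order_date", F.to_date("order_date"))
           .filter(F.col("amount") > 0))            # drop invalid rows

# Partitioning by date keeps downstream scans selective.
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3a://curated-bucket/orders/"))   # hypothetical target

spark.stop()
```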
Hello everyone, we are #Hiring for the role #Data_Warehouse_Architect with one of our clients at #Lansing, #MI. #Duration: 1+ year. Anyone #Interested, please share your updated resume to darren@arohatechnologies.com

Mandatory:
· 10+ years of overall experience
· T-SQL: 10+ years
· PowerShell: 6+ years
· Azure DevOps: 6+ years
· Scrum: 6+ years

Job Description:
· Designs, implements, and supports data warehousing.
· Implements business rules via stored procedures, middleware, or other technologies.
· Defines user interfaces and functional specifications.
· Responsible for verifying the accuracy of data, and for the maintenance and support of the data warehouse.
· Knowledge of data warehouse end-to-end implementation processes: business-requirement logical modeling, physical database design, ETL, end-user tools, database, SQL, and performance tuning.
· Demonstrated problem-resolution skills within a team, and strong leadership of the implementation team.
· Development experience implementing data warehousing utilizing an RDBMS.
· Understanding of data warehouse metadata concepts, tools, and different data warehouse methodologies.
· Expertise in SQL and proficiency in database tuning techniques.
· Responsible for the ongoing architecture and design of the data warehouse, data mart, and reporting environments.
· Relies on experience and judgment to plan and accomplish goals; independently performs a variety of complicated tasks; a wide degree of creativity and latitude is expected.
· Defines and documents the technical architecture of the data warehouse, including the physical components and their functionality.
· Sets and enforces standards and overall architecture for data warehouse systems.
· Ensures compatibility of the different components of the DW architecture and alignment with broader IT strategies and goals.
· Ability to educate project teams on the standards and architecture of each component of the data warehouse architecture.

Roles and Responsibilities:
1. Participates in designing, developing, and refining the client's Azure database services to ensure they are secure, reliable, and robust. Implements changes to the Azure environment to increase performance and efficiency.
2. Develops and implements detection and disaster-recovery activities to test Azure services (a backup-verification sketch follows this posting); participates in detecting, investigating, documenting, and reporting actual or potential Azure environment security violations, intrusions, failures, performance issues, or other issues.
3. Designs Azure data migration backbone infrastructure to provide reliable, optimized, high-performance cloud services.
4. Designs CEDS-compliant databases and data structures in the Azure environment.
5. Evaluates security products and tests security-system performance; assists in planning, implementing, and testing disaster-recovery procedures; participates in making formal risk assessments related to the State of Michigan Azure environment.
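One small, routine piece of the disaster-recovery testing named in item 2 is verifying that a SQL Server backup is actually restorable. A minimal sketch via pyodbc, with a hypothetical server and backup path; full DR testing would go further and restore to a standby instance.

```python
# Sketch: verify the latest full backup is readable without restoring it.
# Server and backup path are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dw-sql01;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # RESTORE cannot run inside a user transaction
)
cursor = conn.cursor()

try:
    # Checks that the backup set is complete and readable.
    cursor.execute(
        "RESTORE VERIFYONLY FROM DISK = N'\\\\backups\\EDW\\EDW_full.bak'"
    )
    print("Backup verified OK")
except pyodbc.Error as exc:
    print(f"Backup verification FAILED: {exc}")
finally:
    conn.close()
```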
#jobalert #openrequirements #applynow #W2

Title: Software Solutions Architect
Location: Lansing, MI

👇 Send your profile to pravardhan@data-solutions.org with the subject "Resume under review" 👆

JOB DESCRIPTION: This position works across Application Development, Service Delivery, and Infrastructure to identify, research, discuss, design, and implement key enterprise architecture standards. This individual will work with multiple project teams and various business areas to provide guidance that maximizes the value of technical solutions. The team is focused on modernization of legacy applications, performance tuning of environments, guidance in procuring the best solution for agency needs, and supporting the Business Relationship Manager with division goals to best serve the agencies. This individual needs to be experienced with independently learning new applications, working with business users to understand business rules related to data, and completing analysis of system data related to transformation and cleansing in support of data migration efforts.

Enterprise Architect General Description: Research, design, document, build, and pilot prioritized topics for value-focused delivery. Work closely with DTMB and vendor Development, Infrastructure, and Service Delivery teams to understand their needs and ensure the best cross-agency standard is implemented. Work closely with DTMB and vendor development teams to pilot and prove out new technical recommendations. Drive the identification, development, and implementation of key new standards in areas such as performance testing, security, event management, web UI frameworks, .NET design standards, application-to-application communication, caching, etc. Propose new standards/guidance based on business need, IT need, and technology advances. Familiar with relational database concepts and client-server concepts. Relies on limited experience and judgment to plan and accomplish goals. Performs a variety of tasks. Works under general supervision; typically reports to a project manager. A certain degree of creativity and latitude is required.

Additional preferred skills: Designs and builds relational databases. Develops strategies for data acquisition, archive recovery, and implementation of a database. Must be able to design, develop, and manipulate database management systems, data warehouses, and multidimensional databases. Requires a depth and breadth of database knowledge that helps with the formal design of relational databases and provides insight into strategic data manipulation. Responsible for making sure an organization's strategic goals are optimized through the use of enterprise data standards; this frequently involves creating and maintaining a centralized registry of metadata.

👇 Send your profile to pravardhan@data-solutions.org with the subject "Resume under review" 👆

#commentforbetterreach #michigan #Nowhiring #datasolutionscareer
NOTE: *ONLY W2, no C2C or C2H*

Hello everyone, people who are located in the USA can apply (Only H1B). We are currently hiring for a Data Engineer with API experience.

Role:
· Design, develop, and maintain ETL processes and data pipelines to collect and transform data from various sources (e.g., databases, APIs, logs) into a structured and usable format.
· Create and maintain data storage solutions, such as data warehouses, data lakes, and databases. Optimize data storage structures for performance and cost-effectiveness.
· Integrate and merge data from different sources while ensuring data quality, consistency, and accuracy.
· Manage and optimize data warehouses to store and organize data for efficient retrieval and analysis.
· Cleanse, preprocess, and transform data to meet business requirements and maintain data quality.
· Monitor data pipelines and database performance to ensure data processing efficiency.
· Align with the Product Owner and Scrum Master in assessing business needs and transforming them into scalable applications.
· Build and maintain code to manage data received from heterogeneous sources, including web-based sources, internal/external databases, flat files, and heterogeneous data formats (binary, ASCII).
· Build a new enterprise data warehouse and maintain the existing one.
· Design and support effective storage and retrieval of very large internal and external data sets, and think ahead about the convergence strategy with our AWS cloud migration.

Your Required Skills:
· 5+ years of experience building out data pipelines in Python.
· Strong knowledge of data warehousing, ETL processes, and database management.
· Proficiency in data modeling, database design, and SQL.
· 3+ years of experience developing and deploying PySpark/Python scripts in a cloud environment.
· 3+ years of experience working in AWS Cloud, especially services like S3, Glue, Lambda, Step Functions, DynamoDB, ECS, etc.
· 1+ years of hands-on experience in the design and development of data ingress/egress patterns on Snowflake.
· Proficiency with Aurora PostgreSQL database clusters on AWS.
· Familiarity with orchestration tools like Airflow, AutoSys, etc. (a minimal Airflow sketch follows this posting).
· Experience with data lakes/data marts/data warehouses.
· Proficiency in SQL, data querying, and performance-optimization techniques.
· Ability to communicate status, challenges, and proposed solutions with the team.
· Demonstrated ability to learn new skills and work as a team.
· Knowledge of data security and privacy best practices.
· Working knowledge of data governance and the ability to ensure high data quality throughout the data lifecycle of a project.
· Knowledge of data visualization and business intelligence tools (e.g., Tableau, Power BI).

Your Desired Skills:
· Good exposure to containers like ECS or Docker.
· Python API development / Snowflake Snowpark coding experience.
· Streaming or messaging knowledge with Kafka or Kinesis is desirable.
· Understanding of capital markets within Fixed Income.

Reach out to me at basanthi@itechstack.com, WhatsApp: +1 (732) 979-2992
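As a shape for the orchestration requirement, here is a minimal Airflow 2 DAG that sequences an API extract into an S3 landing zone and then a Snowflake load. The DAG id, task bodies, and targets are hypothetical placeholders; real tasks would use provider operators or hooks.

```python
# Sketch: a daily Airflow DAG chaining extract -> load.
# DAG id, callables, and any bucket/connection names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_from_api(**context):
    ...  # call the source API, land raw JSON in S3 (hypothetical step)

def load_to_snowflake(**context):
    ...  # load the landed files into a Snowflake table (hypothetical step)

with DAG(
    dag_id="api_to_snowflake",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_api)
    load = PythonOperator(task_id="load", python_callable=load_to_snowflake)

    extract >> load  # load runs only after a successful extract
```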
Hi, hope you are doing well! This is Aashish from Lumen Solution Group Inc. We have the below position with our direct client. Kindly review the job description and let me know your interest in this role so we can proceed.

Role: Senior Hadoop Administrator
Location: Reston, VA (remote for now)
Job Type: Long-term contract

Job Description:
20% - Represents the team in all architectural and design discussions. Knowledgeable in the end-to-end process and able to act as an SME, providing credible feedback and input in all impacted areas. Requires project tracking and task monitoring. The lead position ensures an overall successful implementation, especially where team members are all working on multiple efforts at the same time. Leads the team to design, configure, implement, monitor, and manage all aspects of the Data Integration Framework. Defines and develops Data Integration best practices for a data management environment of optimal performance and reliability. Plans, develops, and leads administrators on projects and efforts to achieve milestones and objectives. Oversees the delivery of engineering data initiatives and projects, including hands-on install, configuration, automation scripting, and deployment.

20% - Develops and maintains infrastructure systems (e.g., data warehouses, data lakes), including data access APIs. Prepares and manipulates data using Hadoop or an equivalent MapReduce platform.

15% - Develops and implements techniques to prevent system problems, troubleshoots incidents to recover services, and supports root cause analysis. Develops and follows standard operating procedures (SOPs) for common tasks to ensure quality of service (a health-check sketch follows this posting).

15% - Manages customer and stakeholder needs, generates and develops requirements, and performs functional analysis. Fulfills business objectives by collaborating with network staff to ensure reliable software and systems. Enforces the implementation of best practices for data auditing, scalability, reliability, high availability, and application performance. Develops and applies data extraction, transformation, and loading techniques to connect large data sets from a variety of sources.

15% - Acts as a mentor for junior and senior team members.

10% - Installs, tunes, upgrades, troubleshoots, and maintains all computer systems relevant to the supported applications, including all tasks necessary for operating system administration, user account management, disaster recovery strategy, and networking configuration.

5% - Expands engineering job knowledge of leading technologies by reviewing professional publications, establishing personal networks, benchmarking state-of-the-art practices, pursuing educational opportunities, and participating in professional societies.

Please share your resume with aranjan@lumensolutions.com
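As one example of the SOP-style routine work this role describes, here is a minimal sketch of a cluster health check wrapping the stock HDFS admin CLI. It assumes the hdfs client is on PATH and appropriate Kerberos/credentials are already in place; the checked path is hypothetical.

```python
# Sketch: SOP-style HDFS health check wrapping standard admin commands.
# Assumes the hdfs CLI is installed and authenticated; the path is hypothetical.
import subprocess

def run(cmd):
    """Run a command and return its stdout, raising on non-zero exit."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# Cluster capacity, live/dead datanodes, under-replicated block counts.
report = run(["hdfs", "dfsadmin", "-report"])
print("\n".join(report.splitlines()[:10]))  # print the summary section

# Filesystem integrity check on a critical ingest path.
fsck = run(["hdfs", "fsck", "/data/ingest", "-files", "-blocks"])
if "HEALTHY" not in fsck:
    print("ALERT: HDFS path /data/ingest is not healthy")
```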