Sign in to view Tina’s full profile
San Francisco Bay Area
4K followers
500+ connections
About
Experience & Education
-
Google
****** ******** ********
-
**** ***
****** ******** ********
-
****** **********
********'* ****** ******** ******* *** **********
People also viewed
- Perman Rejepow
  Ashgabat
- Melissa Filer
  United States
- Chris B.
  Senior Software Engineer at SAS
  Andover, MA
- Thomas Thai
  Sr Software Developer at General Atomics
  San Diego, CA
- Nancy Blackman
  Director at Dunn Area Committee of 100, Inc.
  Dunn, NC
- Tatiana Cerro
  Real Estate Agent at DeWolf Realty Co., Inc.
  San Francisco, CA
- Jeffrey Myint
  Melissa, TX
- John Cobb
  Senior Software Engineer - retired
  Hutto, TX
- Alex Plaks
  Software Engineer at Google
  Los Angeles, CA
- Declan Outhit
  UX, UI, Creative Director - Design at Medly Labs
  Kitchener, ON
- Lisa Salem-Wiseman
  Special Advisor, Humber Institutional Learning Outcomes, Humber College
  Canada
- Lora Tam
  Mountain View, CA
- Nancy Blackman
  Clinical Supervisor
  Tacoma, WA
- Audumbar Pujari
  Santa Clara, CA
- Thanh Dinh
  United States
- Xiao Li
  San Francisco Bay Area
- Yanyan Lu
  Mountain View, CA
- Qianwen Xie
  San Francisco Bay Area
- Sumedh Shende
  Mumbai
- Scott Shi
  San Francisco, CA
Explore more posts
-
Abhishek Sharma
For years, I've led and mentored diverse teams of software engineers, and one gripe consistently surfaces, irrespective of team makeup or project nuances: "Why do I have to resolve conflicts when cherry-picking commits from the master branch?" 🤔 This complaint hits home. Engineers crave independence to innovate and deliver swiftly, yet software development thrives on collaboration, demanding robust code management and teamwork. 🚀💻 Cherry-picking commits can be an efficient way to incorporate specific changes or bug fixes into a hotfix branch or release candidate, but it often triggers conflicts as changes collide with ongoing developments. This conflict resolution process can be time-consuming, error-prone, and frustrating for engineers, detracting from their productivity and morale. 😫⚔️ The absence of a structured cherry-picking process exacerbates the problem. Here's a simple fix to avoid conflicts:
1️⃣ Track the last merged PR number into the base branch when cutting release/hotfix branches.
2️⃣ When receiving a cherry-pick PR, confirm its base branch PR number.
3️⃣ Ensure all PRs between the last merged one and the new one are cherry-picked before merging, preventing conflicts.
4️⃣ Once merged, update the tracked base PR number to that of the last merged cherry-pick, so the next cherry-pick PR review can be verified.
Yes, adopting this process entails a bit of change, but the dividends in smoother workflows and happier engineers are well worth it. Let's make life easier for our fellow coders! 💪🛠️ #SoftwareEngineering #CodeReview #Collaboration #ConflictResolution #Teamwork #DeveloperLife #CherryPicking #CodeManagement #ProductivityTips #EngineeringCulture #TechTips #Mentorship #WorkflowOptimization #CodingCommunity
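The four tracking steps described in the post can be sketched in a few lines. This is only an illustration, not tooling from the post: the function name and the assumption that base-branch PRs are numbered sequentially are mine.

```python
# Minimal sketch of the verification step described above. Assumes PRs
# merged to the base branch are numbered sequentially; the function name
# and data shapes are invented for illustration.

def missing_cherry_picks(last_merged_pr, incoming_pr, already_picked):
    """Return the base-branch PR numbers that must be cherry-picked
    before `incoming_pr` can merge without risking conflicts."""
    required = range(last_merged_pr + 1, incoming_pr)
    return [pr for pr in required if pr not in already_picked]

# The release branch was cut after base PR #100; a cherry-pick of
# PR #104 arrives, but only #102 has been picked so far.
print(missing_cherry_picks(100, 104, {102}))  # [101, 103]
```

Once the reviewer confirms the returned list is empty and merges, the tracked "last merged" number advances to the new PR, as in step 4️⃣.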
-
Pratik Guharay
This is an unpleasant, but important one. Layoff. I have faced this question several times last year in 1:1s: "How can you keep your head steady at work when so many roles are impacted around you? Does that not concern you?" My shorter response had been: "If I tell you I am not concerned, I will be lying. But if you can convince yourself that a layoff is not the worst thing that can happen, that will take the pressure off." Then I shared 3 stories from my professional journey: Fresh out of college, I joined a software company back in 2005. I had a plan and a dream to be successful, but someone else had a different one. That company laid off 40% of its workforce in 3 months, including me. I felt broken, humiliated and scared. I did not tell my parents about this for a week and searched for jobs walking door to door at each tech firm. But no company in town offered a job to a fresher. Ultimately I revealed the situation to my parents and they offered me what I needed most: a hug and assurance. Eventually I got a job, but I had to leave the safety net of my hometown. This is where my career acceleration started. Most importantly, I learned how to survive by myself in a different city. That layoff, which seemed catastrophic at the moment, actually set the foundation for my career. 4 years later, I applied for a job in the US and relocated. Again, I had a plan and a dream. But I ended up on a project that had no material connection to the company's mission. Team members started leaving, and that included my hiring manager. There was no one to support me. My employment visa did not even allow me to switch jobs or cities. So I was stuck, and stuck bad. I wished every day that this team would just let me go. My inability to take action and make decisions pushed my frustration to its limit. This is when I decided to join academia and completed my masters in CS. It was hard to juggle job and study, but it was beautiful.
I could not change my professional situation, but I enhanced my education. 3 years later, I joined Amazon. Walking into Amazon was like a dream come true. Very soon, I realized this was a far more advanced game than what I was accustomed to. The speed of the company, the intense focus on solving complex problems and the culture were something I thought I was not made for. I declared at home: I won't survive here. My family again shared what I needed the most: a hug. The rest is history. I started believing in the company's mission, the culture and myself. I worked very hard to elevate my skills. I kept delivering the most fascinating customer experiences for Amazon and rose in rank to Principal Engineer. Moral of the story:
1. Control the input metric and don't wrestle with the output metric.
2. Your family is your most valuable asset. Protect it more than anything else.
3. Layoff is not the worst. So why be scared?
4. A job can be eliminated, not your career. Focus on your career and the job will follow.
-
Coltin Caverhill
Here is how I use ChatGPT and LLMs to be a better Engineering Manager.
1. Whenever there is a birthday or work anniversary (we call them Yelpiversaries) I will hop over to ChatGPT or Stable Diffusion and generate a card catered to that person. It's a small investment from me, and usually sparks some joy! I might generate a few to get something interesting. Almost everyone else will search for gifs or memes, which are a great alternative. In either case, I will also try to write something meaningful and personal (I don't use ChatGPT for this, but you could ask for ideas if you feel stuck. Tell the LLM a bit about the person and ask it to construct a concise message. Use that as inspiration; don't copy-paste, it will come across as cheap).
2. Understanding new concepts quickly. I had never heard of RAPID before, and when I did a Google search I had to deal with a bunch of really weird websites and bad explanations. I asked ChatGPT to explain RAPID to me, and then I asked for it to be explained for a software engineer. It was fantastic. For deep research and understanding I wouldn't just use an LLM, and I'd be careful to double-check, but for well-known concepts it does really well. I didn't need to be a grand master at RAPID, I just wanted to know what the heck these Product Managers were talking about.
3. Helping format data. Sometimes I want to invert a table. Passing ChatGPT a markdown table and asking it to invert it has worked out really well. I usually carefully scan to make sure it didn't change values, but so far it hasn't failed, and it saves me a bunch of work.
4. Helping me quickly write bash or python scripts to do different tasks. As a manager, I don't write production code. But sometimes I pull down data and want to process it. For example, looking over all of the incoming support requests we have on Slack (we support a lot of internal teams). I can use an internal API to take someone's Slack handle and determine their team or department. In that way I can see how many support requests we had this month from a particular area, which is useful data for seeing how we can better support the organization.
5. Rubber ducking. Talking through problems. In general I prefer to get feedback from a human, but mentors are busy creatures, and our schedules don't always mesh. If I'm working through a difficult problem, chatting it through with an LLM has been very helpful. I keep out personal details because paranoia, but I describe the situation and how I'm thinking about it. That's the important part that helps: forming my thoughts. Rarely is the actual output from the LLM helpful, although I can't say it never has been.
People get really hung up on LLMs/ChatGPT not just solving things for them 100%, and I think these people are going to be left behind. It doesn't need to be perfect; it can even give you the wrong answer and you still benefit. But it can help get your brain looking at lots of answers.
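The support-request tally described in point 4 is exactly the kind of throwaway script an LLM can draft in seconds. A hedged sketch of what such a script might look like; `lookup_department` stands in for the internal API the post mentions, and every name and handle here is invented.

```python
# Hypothetical version of the script described in point 4: tally this
# month's Slack support requests by department. `lookup_department` is a
# stand-in for the internal API mentioned in the post.
from collections import Counter

def lookup_department(slack_handle):
    # Placeholder data; the real script would call the internal API.
    return {"@ana": "Payments", "@raj": "Search", "@li": "Payments"}[slack_handle]

def requests_by_department(requester_handles):
    # One entry per support request; repeated handles count repeatedly.
    return Counter(lookup_department(h) for h in requester_handles)

print(requests_by_department(["@ana", "@raj", "@li", "@ana"]))
# Counter({'Payments': 3, 'Search': 1})
```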
-
Colt McNealy
<rant> Yesterday morning I wrote a post about the phrase "scaling independently" while waiting for my KIND cluster to spin up, and I promised that I would give an example of where you really DO want things to "scale independently" in a separate post. Well, that post is here. Let's talk about #apachekafka and its new Raft-based metadata solution, known as KRaft (yes, it evokes Mac-and-Cheese, which is partially why I like it so much). KRaft is a replacement for ZooKeeper's former role in Kafka. In a KRaft-based cluster, the Metadata Quorum is a group of Kafka servers that have the "controller" role. The Metadata Quorum stores information such as "what topics exist in my cluster?", "which Broker is the leader for which Partition?" and "which follower replicas live on which Broker?". This information is CRITICAL for the availability and consistency of a Kafka cluster. To ensure consistency of the Metadata Quorum, one specific Controller is chosen as the leader, and all write requests go through that leader. To ensure availability of the Metadata Quorum, there can be "follower" Controllers which also store the metadata updates (synchronous replication); in case of failure of the leader, one of the followers can become the leader. In this specific case, it DOES make sense to scale the Brokers and Controllers independently. Why? First, the Controllers don't actually scale: only one can be leader at a time, so the others are just providing backup (note: for reasons beyond the scope of this post, it's best to have an odd number). Additionally, before KIP-853 is implemented, it's actually really hard to add or remove Controllers to/from the Metadata Quorum without downtime. Second, it's possible for a misbehaving client to take down a Broker, for example by sending way too much data to a specific partition. If it's just a single Broker that's lost, most of the cluster will continue on and live to fight another day.
However, losing a Controller is a Very Bad Thing. Thus, by separating the Controllers from the Brokers, we can improve the availability of the cluster. It is indeed possible to run Kafka with some servers that share the responsibility of Controller and Broker. This is great for development (especially local dev) and also in *SOME* highly resource-constrained production environments. However, I would suggest as a rule of thumb that you should probably:
- Separate out your Controllers onto their own isolated machines
- Start with a Metadata Quorum of size 3, which allows losing one Controller and continuing on alive
- Put your Brokers on another set of nodes.
Now, there's another good example of how Kafka scales X and Y independently: Compute and Storage. Watch out for the next "Colt Rant" about this, coming later this week! </rant> PS: here's the link to the previous post: https://lnkd.in/gXU77rKW
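For readers who want to try the separation the post recommends, here is a minimal sketch of the relevant KRaft settings. The host names and node IDs are illustrative, and this omits everything else a real server.properties needs.

```properties
# Dedicated controller node (one of a quorum of three)
process.roles=controller
node.id=1
controller.quorum.voters=1@ctrl1:9093,2@ctrl2:9093,3@ctrl3:9093
controller.listener.names=CONTROLLER
listeners=CONTROLLER://ctrl1:9093

# A broker-only node points at the same quorum:
# process.roles=broker
# node.id=4
# controller.quorum.voters=1@ctrl1:9093,2@ctrl2:9093,3@ctrl3:9093
```

A combined node would instead set process.roles=broker,controller, which is the shared-responsibility mode the post suggests reserving for development and tightly constrained environments.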
-
Jae Taylor
“Don’t wait for your next job to do your best work.” Satya from Microsoft said this in an interview, and it was the most profound thing I’ve heard in recent years, for me. What does it mean for me to do my best? What’s expected in my role, compared with my strengths? What’s my unique value add? There are a few things that seem undeniable as priorities: 🔑 (1) building trusted relationships based on proven execution, and 🔑 (2) your ability to influence in person AND in writing. Your best should be defined by more than just your domain knowledge. How do you deal with change, how do you build trust in a constantly changing organization, and how can you verify that? What do your peers say about you when you’re not in the room? What do you want them to say? For me, I’d like to get better at long-form writing. So you’ll probably see more of that from me. What does your best look like?
-
Zack Anselm
At Instrumental Inc. we are typically very secretive about the clients we work with. During onboarding 3 years ago, I knew I made the right decision to join, especially after seeing the list of major logos we support. Today we are doing something a bit out of character and publicizing our partnership with @Meta to assist in the quality control of their new hardware. I personally believe this is the sort of edge hardware companies need to outperform competitors in a rapidly evolving market. Don't just take my word for it though, check out the video! Really exciting chapter in tech and at both companies. #AR #Engineering #Innovation
-
Peter Gillard-Moss
From IC engineer to C-level exec, the most effective people I've worked with have one thing in common: they break down and decouple decisions. Optionality, trade-offs, the last responsible moment, one-way and two-way door decisions, slicing, MVPs, proofs of concept, etc. are all techniques they use for breaking decisions down and getting them made. The best engineers did this when working on code. Rather than trying to decide on all the functionality at once, they'd be able to say "I can break this story down into smaller pieces of value that we can get released to users fast" or "we don't need to deliver this piece of functionality now, we can wait until we see how users respond". The best architects, product managers and business leaders do the same thing. It's a real skill to decouple decisions. It's a real skill to be able to defer parts of a decision to next or later (or even never) to enable faster movement and lower risk. I have always been impressed when I witness it. Often it's obvious with hindsight, and sometimes it's completely out-of-the-box and you would never have realised.
-
Lionel Touati
Enabling use cases like factory floor optimization, assisted workforce, and asset tracking can be expensive and time consuming. Building, deploying, and maintaining configuration can add to the complexity. In this webinar, you will learn from the experts at Google Cloud and ClearObject how to leverage the very latest in AI, modern infrastructure, and edge computing to enable modern manufacturing outcomes in your center today.
-
Adam Cataldo
Here's something counterintuitive: taking on more investment risk reduces long-term returns. https://lnkd.in/gHwkjHmP This is only my second investment video, so I'm still learning. If you have any feedback, I would love to hear it. Based on the feedback from the first video, I tried to make the audio more balanced this time. I also shortened the length to make the content more digestible.
-
Jeremy Manson
Everything else being equal, finding bugs statically is better than finding bugs at runtime. Everything else is rarely equal. In the comments to my last post, someone asked about the benefits / adoption of formal verification compared to testing. I have some-but-not-a-ton of experience with formal verification, but I have a lot of experience with static analysis tools. For those of you who don't know, formal verification involves making a mathematical specification for what your code is supposed to do, and then running a tool that can prove it meets that specification. Most people reading this post will now know why most developers don't do it. (It's worth a reminder - tests *are* a specification for your code. They're just one that's written in the language in which you wrote the test instead of being written in math.) It's a truism that it is better to find a bug at compile time than at test time, at test time than at code review time, and at code review time than at deployment time. Each additional step bakes in problematic decisions, giving them the chance to spread to other code, and, eventually, to users. Also, every second that passes between when you write the code and when you notice the problem is a second your brain might use to forget what the code does. I wrote a post a couple of months ago about research done at Google to identify what the ideal static analysis tool looks like. In short: you don't have to do any work to make it run, it never gives you false positives, it points out real problems, and the tool tells you exactly what you need to do to fix the problem. Tools like this are great, and are why Google leans heavily on govet, clang-tidy, and Error Prone (errorprone.info). The more typing that someone has to do to make a static analysis tool work, though, the more that they are likely to write a test, instead (possibly adding a dynamic analysis tool). 
Formal verification tools are at the far end of this - they tend to make you write specifications for your code, which they then prove the code obeys. Everyone who has worked on a codebase in the broader tech industry knows that most specifications last (at most) as long as it takes to write the code, at which point you realize you got them wrong and change them. This makes formal verification a lot of work for very little reward. The only people who write formal specifications for their code are the ones who are writing code that will kill you if they get it wrong - code to keep airplanes flying or to keep nuclear reactors from meltdown. Most Java development is way back at the other end, where we still beg and plead with people to add nullness annotations (it will help when the jspecify.dev project finishes). This is why sneaking static analysis features into programming languages is so important (I hear you, Kotlin fans). At some point, people's interest in typing annotations drops off, and for everything beyond that, we have tests.
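The post's parenthetical that tests *are* a specification can be made concrete. A small sketch (the names and data here are invented): the same nullability contract expressed once as a static annotation, checkable by a tool, and once as a test.

```python
# The same contract stated two ways. The Optional annotation is a
# machine-checkable specification: a static checker such as mypy rejects
# `find_user(2).upper()` at analysis time because the result may be None.
from typing import Optional

_USERS = {1: "tina"}  # toy data for illustration

def find_user(user_id: int) -> Optional[str]:
    """May return None; the annotation states that contract statically."""
    return _USERS.get(user_id)

# The same contract as a test: a specification written in the host
# language, checked at runtime instead of at analysis time.
assert find_user(1) == "tina"
assert find_user(2) is None
```

The annotation catches every misuse before the program runs; the test catches only the cases it enumerates, but costs far less to write, which is the trade-off the post describes.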
-
Tian Zhu
I heard a few people suspect that this live demo showing a call to a virtual agent is not real: https://lnkd.in/gnBiGDfd I can tell you that the call was real, and we are able to let users talk to the virtual agent through a text UI while still on the call. To learn more, search for the newly launched feature called 'Call Companion' under Dialogflow.
-
Matt Feigal
Crystalloids (a data-focused partner that I have known for many years) took our Application Integration partner training. They immediately started building integrations based on their own needs - for instance, processing job applications and progressing through hiring a candidate, which touches many systems. This was a perfect way to build expertise and also create new customer-facing demos. Four weeks later (!!!) they came to the Cloud Summit with a detailed demo connecting Jira, Hubspot, BigQuery, DocAI and more. They even built a Custom Connector for Exact Software, a popular Benelux accounting package - to me, this highlights how partners can help adapt our product to meet local market needs. They were kind enough to write an article that shows the speed and ease of Application Integration, describes the real use case they built, and gives great technical detail on how to build a Custom Connector: https://lnkd.in/evWGSHmb #applicationintegration #apigee #googlecloud cc Scott Haaland MadhuVandhini B Jean Dulau Madelon Privé Tessa Reef
-
Steve Ash
When I first got to Amazon, one of the things that stood out as different from my past job experience is that Amazon has a strong written-document culture. Meetings typically start with documents of six pages or fewer, and everyone reads the document for the first part of the meeting before starting discussion -- because who has time to read documents before the meeting? Starting the meeting with a read means that everyone gets the time baked into their schedule to actually read the doc, and then you can have a productive discussion with everyone starting at the same place. This simple tweak has a few powerful impacts, but the one that has had the most impact on me personally is the page-length constraint. When you require everyone to read the doc at the start of the meeting, that imposes a time constraint on how much content you can expect them to read before discussion. This constraint puts the burden on *you*, the author of the doc, to make sure that you are:
1. Concise, highlighting only the important info. The details that don't make the cut can go in an appendix for reference.
2. Prioritizing the actual decisions that you want to get out of the meeting. You asked these people to meet for a reason! To help them help you, you have to organize the information in a way that makes it as simple as possible for them to grok and contribute/decide.
3. Actually understanding your own problem! Size constraints force us to distill complexity and find the right conceptual abstractions to communicate at the right level.
I started using the size-constraint trick in my design documents as well, for these same reasons: I found it made my docs simpler for others to understand, whether we're speed-reading in a meeting or they're reading on their own. Happy Writing! ✒
-
Bernard Traversat - We're hiring.
We released a new update of our Visual Studio Code Java extension. Go try it! It brings a bunch of new improvements and already supports JDK 23 Early Access builds, since the extension directly leverages javac. https://lnkd.in/g8BKsNjS #java #IDE #OpenJDK #VSCode
-
Harminter Atwal
🚀 Exciting News Db2 Users! 🚀 ⌚ Amazon Web Services (AWS) has introduced on-demand Db2 licenses through AWS Marketplace! 🕺 This innovative solution enables Db2 users to procure database licenses for RDS Db2 workloads with ease. #AWS #AmazonRDS #DB2 #DatabaseManagement #Innovation #CloudComputing #TechNews #AWSUpdates
Others named Tina Chu in United States
- Tina Chu
  Los Angeles, CA
- Tina C.
  HR, Operations & Systems
  Denver, CO
- Tina Chu
  United States
- Tina Chu
  San Diego, CA
- Tina Chu
  Chief Operations Officer at ScaleLA
  Los Angeles, CA
86 others named Tina Chu in United States are on LinkedIn