Thursday, April 30, 2026

Amazon Quick Desktop: A Practical Guide to Getting Started

The views expressed in this post are my own and do not represent the opinions, positions, or endorsements of my current or any former employer.


Amazon Quick Desktop is a native AI assistant for macOS and Windows, launched in April 2026. It connects to your work tools — Google Workspace, Microsoft 365, Slack, Salesforce, Zoom, Jira, and more — and runs persistently in the background, learning the context of your work over time. This guide covers what it does, how to set it up, and how different teams are using it.


What Is Amazon Quick?

Amazon Quick is an AI assistant designed to work across all the tools you already use. Rather than operating within a single app, it connects your local files, calendar, email, and cloud applications into one place and builds a personal knowledge graph — a living model of your role, priorities, relationships, and projects that gets more useful the longer you use it.

The core idea is to reduce the time spent hunting for information across disconnected systems and replace it with a single interface that can answer questions, take actions, and automate workflows on your behalf.


Getting Started

No AWS account or credit card is required to try it.

  1. Sign up at aws.amazon.com/quick using your email, Google, Apple, Amazon, or GitHub account
  2. Download the desktop app for macOS or Windows
  3. Connect your data sources through the onboarding wizard — Google Workspace, Microsoft 365, Slack, Salesforce, Zoom, and others
  4. Quick begins indexing your files and emails in the background and starts building context

Every new signup includes a free 30-day trial of the Plus plan, with full desktop access and expanded agent hours. A free tier is available after the trial.


Core Capabilities

Personal Knowledge Graph

Quick builds a private model of your work — the people you collaborate with, the projects you're involved in, the documents and data you use regularly. This context shapes every answer and action. It's stored locally on your device and is not used to train Amazon's models.

Proactive Intelligence

Rather than waiting for you to ask a question, Quick monitors your connected apps in the background and surfaces what needs attention — an unanswered priority email, a Salesforce deal that hasn't been updated, a document waiting for your feedback. You can ask "What am I missing today?" or "What should I prioritize?" and get answers grounded in your actual work context.

Deep Research

Quick Research is a built-in agent that investigates questions by pulling from your internal documents, the public internet, and third-party datasets simultaneously. It creates a research plan, gathers evidence, and produces a fully cited, exportable report. Reports include clickable citations, version history, and export options for PDF, Word, and custom summary formats.

Document and Dashboard Creation

You can generate deliverables directly within the chat — presentations, spreadsheets, Word documents, PDFs, images, and live dashboards. Dashboards connect to your data sources and update automatically. No need to switch to a separate tool.

Workflow Automation

There are two levels of automation:

  • Quick Flows — natural language-based workflows for repeatable tasks like generating weekly reports, routing approvals, or sending automated briefings. No coding required.
  • Quick Automate (Professional/Enterprise) — complex, multi-step automations across systems, such as syncing data between Salesforce and a data warehouse or orchestrating cross-team processes.

Team Spaces and Custom Agents

Spaces are shared knowledge environments where teams pool documents, data, and AI agents around a project. You can also build custom chat agents configured with specific knowledge sources, personas, and guardrails — for example, an HR policy assistant, a sales pipeline tracker, or a project health monitor — and share them across your team.

Extensions

Quick extends beyond the desktop app into browsers (Chrome, Edge, Firefox) and directly into Microsoft Office apps (Word, Excel, PowerPoint, Teams, Outlook) and Slack, so you can access its capabilities without switching windows.


How Teams Are Using It

HR — Onboarding Automation

At the Austin Amazon Quick User Group meetup in January 2026, attendees built a working HR onboarding workflow in under an hour. The setup: upload an employee handbook, leave policy, performance review guidelines, and onboarding checklist into a Space, then create a Quick Flow that accepts employee questions as input, searches the HR documentation, and returns sourced answers automatically. The flow can be shared with the whole HR team.

Operations — Mystery Shopping Review (Ironside Group / HS Brands)

Ironside Group built a solution combining Amazon Quick and Amazon Bedrock to automate survey review for HS Brands Global, a mystery shop provider. The automated system detected inconsistencies, errors, and potential fraud across large volumes of unstructured survey data. Results: review time per batch dropped from days to seconds, annual review capacity scaled from roughly 50,000 shops to millions, and costs were reduced by approximately 85%.

Insurance — Nightly Reconciliation and Compliance (New York Life)

New York Life's Institutional Life division used Quick to replace a manual reporting process that required pulling multiple reports and waiting on analysts. A single conversational agent now handles structured operational data and unstructured documentation together. Their compliance dashboards moved from static reporting to live, self-service analytics, and nightly reconciliation workflows that previously required manual intervention are now automated with Quick Flows.

Manufacturing / Sales — Pipeline Insights (3M)

3M's sales teams used Quick Flows to automate administrative tasks like generating report summaries and updating records, and Quick's agentic capabilities to synthesize information across sales effectiveness, risks, and pricing from multiple platforms.

Pharma Research — Clinical Trial Site Database (Kitsa)

Kitsa built KScout on Amazon Quick — a database of over 300,000 clinical trial sites across 160+ countries — with a team of fewer than five people. Quick powers autonomous site research, medical literature review, and intelligence report generation.


Privacy and Security

  • Data stays on your device — conversation history, memory, knowledge graph, and file indexes are stored locally and not uploaded to the cloud
  • No model training on your data — AWS does not use your data to train models on any plan, free or paid
  • Write operations require approval — Quick will not send an email, update a record, or take an action without your explicit confirmation
  • Certifications: HIPAA eligible, FedRAMP authorized, SOC 2 audited
  • Open standards: supports Model Context Protocol (MCP), allowing integration with third-party agents and tools

Plans

Plan           Includes                                                              Desktop app
Free           Chat, Spaces, custom agents, knowledge bases, extensions              No
Plus           Full desktop app, expanded agent hours                                Yes
Professional   Quick Sight (BI), Quick Automate, AWS data connectivity               Yes
Enterprise     SSO, advanced governance, region selection, full AWS infrastructure   Yes

Every signup includes a free 30-day Plus trial with up to 10 team members. No AWS account or credit card required.


Tips for Getting Started

Based on community feedback from the Amazon Quick User Group:

  • Start with one specific use case rather than trying to connect everything at once. Pick the workflow that costs your team the most time and build that first.
  • Use Spaces to organize knowledge — upload the documents your team references most often and build agents on top of them.
  • Quick Flows are more accessible than they look — the natural language builder means non-technical team members can create and maintain automations without developer support.
  • The desktop app gets more useful over time — the knowledge graph improves as Quick learns your patterns, so the value compounds the longer you use it.

Get started: aws.amazon.com/quick/desktop


Sources: Amazon Quick features · Amazon Quick FAQs · Amazon Quick customers · Austin Quick User Group meetup · About Amazon: Quick desktop launch. Content was rephrased for compliance with licensing restrictions.

Thursday, April 16, 2026

The Long-Term Survival Plan for Humanity: What Happens After Earth?

At some point in the distant future, staying on Earth won't be an option. The Sun is slowly getting brighter, and within about a billion years, our home planet will become too hot to support life as we know it.

So what happens next? If humanity wants to survive — not just for thousands, but for millions or even billions of years — we'll need a plan.

Let's walk through the most realistic paths forward.


🌍 The Problem: Earth Has an Expiration Date

Right now, Earth is perfectly suited for life. But that won't last forever.

As the Sun ages:

  • Temperatures on Earth will rise
  • Oceans will evaporate
  • The atmosphere will change dramatically

Long before the Sun becomes a red giant, Earth will already be uninhabitable.

That means survival requires leaving Earth.


🚀 Option 1: Colonizing Other Planets

The first step outward is the most obvious — move to another world.

🔴 Mars

Mars is the leading candidate:

  • Similar day length to Earth
  • Evidence of water (in ice form)
  • Relatively close in cosmic terms

But it's far from ideal:

  • Thin atmosphere
  • Freezing temperatures
  • High radiation exposure

Mars won't become a second Earth anytime soon. Instead, future humans would likely live in domes or underground habitats.

🪐 Distant Moons

Other intriguing options include:

  • Europa — possibly hiding a vast ocean beneath its ice
  • Titan — with a thick, hazy atmosphere

These worlds are fascinating, but extremely hostile. For now, they're better suited for research stations than large-scale human settlement.


🏙️ Option 2: Building Homes in Space

Instead of adapting to planets, we could build our own environments.

🌀 O'Neill Cylinders

Imagine giant rotating structures in space:

  • Artificial gravity created by rotation
  • Controlled weather and ecosystems
  • Designed specifically for human life

These habitats could exist anywhere — from Earth orbit to the asteroid belt.

While technically challenging, many scientists believe this approach may be more practical than terraforming entire planets.


🌌 Option 3: Reaching Other Stars

Eventually, even the solar system won't be enough.

The closest star, Proxima Centauri, is over four light-years away. With today's technology, reaching it would take tens of thousands of years.

Possible solutions include:

  • Generation ships — where multiple generations live and die during the journey
  • Advanced propulsion systems far beyond what we have today
  • Autonomous or AI-led missions sent ahead of humans

Interstellar travel is not impossible — but it's one of the greatest engineering challenges imaginable.


🌞 Option 4: Moving Outward as the Sun Changes

As the Sun evolves, the "habitable zone" shifts outward.

In the far future:

  • Regions near Jupiter and Saturn may become warmer
  • Moons like Europa could become more hospitable

Human civilization could gradually migrate outward, staying within a livable zone for billions of years.


🤖 Option 5: Redefining What It Means to Be Human

There's also a more radical possibility: humans may not remain purely biological.

Future evolution could include:

  • Integration with artificial intelligence
  • Digital consciousness (if it becomes possible)
  • Machine-based life forms better suited for space

Unlike biological humans, machines could survive extreme radiation, cold, and long-duration space travel.


🧠 The Most Likely Path Forward

Rather than choosing just one option, humanity will likely follow a progression:

  1. Expand beyond Earth
  2. Establish colonies on nearby planets like Mars
  3. Build large-scale space habitats
  4. Spread throughout the solar system
  5. Eventually attempt interstellar travel

Each step builds on the last.


⚖️ The Reality Check

None of this is easy.

The challenges aren't just scientific — they're social, political, and economic. But there's good news: we have time. A lot of it.

The real question isn't whether it's possible. It's whether we choose to pursue it.


🌌 Final Thought

Humanity's story doesn't have to end with Earth.

If anything, Earth might just be the beginning.

Wednesday, March 25, 2026

How I Fixed My Sluggish Mac in Minutes Using Kiro CLI

My MacBook had been crawling for days. Apps took forever to open, switching between windows felt like wading through mud, and I had no idea why. I only had 4 browser tabs open — nothing unusual. Then I tried Kiro CLI, and within minutes I had my answer and my fix.

Here's exactly how it went down.


The Problem

Everything was slow. Not "a little laggy" slow — genuinely unusable slow. Spinning beach balls, delayed keystrokes, the works. I'd already tried the usual suspects: restarting apps, clearing cache, the classic "turn it off and on again." Nothing helped.


Installing Kiro CLI

Getting started took less than a minute. Kiro CLI is a terminal-based AI assistant that can interact directly with your system.

brew install kiro-cli

Then just launch it:

kiro-cli chat

That's it. No complex setup, no config files to edit.


Identifying the Issue

I typed one line to Kiro:

"my system is very slow"

Kiro immediately ran a system diagnostic and surfaced this:

Load Avg: 19.14, 69.88, 67.35
PhysMem: 7489M used, 142M unused
VM: 384609609 swapins, 394054546 swapouts

Kiro's analysis was direct: load average of 19 is dangerously high (healthy is 1–4), RAM was nearly exhausted, and the system was thrashing swap — reading and writing to disk constantly, which is orders of magnitude slower than RAM.
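For context when debugging on your own: load average is only meaningful relative to the number of CPU cores, and both numbers come from standard tools, no assistant required:

```shell
# 1-, 5-, and 15-minute load averages; sustained load far above the
# core count below means work is queuing up faster than it can run
uptime

# Number of CPU cores (sysctl on macOS, nproc on Linux)
sysctl -n hw.ncpu 2>/dev/null || nproc
```

A load of 19 on a typical 8- or 10-core laptop is roughly double what the machine can service, which matches the "wading through mud" feel.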

It also spotted the likely culprit immediately: 136 Chrome Helper processes running simultaneously.

I pushed back — I only had 4 tabs open. Kiro dug deeper and found something I hadn't considered:

"There are two user accounts running Chrome — your active session and a background session via Fast User Switching. Chrome is running in both."

That was the "aha" moment. I'd switched users earlier and never logged out. The other account had Chrome fully running in the background, invisible to me.
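You can verify a situation like this yourself with two standard commands (the bracketed letter in the grep pattern is a common trick that stops grep from matching its own process):

```shell
# A second user listed on "console" or a tty means another login
# session is still alive via Fast User Switching
who

# Tally Chrome processes per user; a large count under an account
# you're not actively using is the giveaway
ps aux | grep -i "[g]oogle chrome" | awk '{print $1}' | sort | uniq -c
```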


Fixing the Root Cause

Kiro walked me through the fix step by step.

First attempt — a clean kill signal:

sudo pkill -u [other-user] "Google Chrome"

That reduced Chrome processes from 136 to 55, but didn't finish the job. Kiro checked again and found 39 processes still running under the background account. So it gave me a harder fix:

sudo kill -9 $(ps aux | grep -i "Google Chrome" | grep [other-user] | awk '{print $2}' | tr '\n' ' ')

After that, Kiro ran another check:

Load Avg: 2.49
CPU idle: 83%
Chrome processes: 19 (normal — just my active session)

Done. System restored.


How Kiro CLI Saved My Day

What would have taken me an hour of Googling, Stack Overflow rabbit holes, and trial-and-error took about 10 minutes of conversation. Kiro didn't just tell me "Chrome uses a lot of memory" — it:

  • Ran live diagnostics on my system
  • Identified the non-obvious root cause (Fast User Switching + dual Chrome sessions)
  • Gave me the exact commands to fix it
  • Verified the fix actually worked after each step

It felt less like using a tool and more like having a sysadmin sitting next to me.


Try Kiro CLI for Free

Kiro offers a 500-credit free trial when you sign up — more than enough to explore what it can do. After the trial, there's a free tier to keep using it, with paid plans starting at $20/month if you need more capacity.

👉 kiro.dev

If your Mac (or any system) ever feels inexplicably slow, just open a terminal and ask. You might be surprised how fast you get an answer.


Content was rephrased for compliance with licensing restrictions.


Thursday, March 5, 2026

Spark Connect vs RDD: Understanding Modern Spark Architecture

TL;DR: Spark Connect represents a shift toward remote, DataFrame-centric development, moving away from the low-level RDD API for client-side code. Here's what that means for your data pipelines.


The Evolution of Spark APIs

Apache Spark has always offered multiple levels of abstraction, but Spark Connect marks a deliberate move up the stack. Understanding these layers is crucial for modern data engineering.

Three Layers, One Engine

At the foundation sits Spark Core — the execution engine handling task scheduling, memory management, and fault tolerance. Everything else is built on top.

The RDD (Resilient Distributed Dataset) API gave developers fine-grained control with operations like map(), filter(), and reduceByKey(). It's powerful but requires manual optimization and deep Spark knowledge.

The DataFrame/SQL API provides a declarative, schema-aware interface. Think df.groupBy().count() or pure SQL queries. The Catalyst optimizer handles query planning automatically, often outperforming hand-tuned RDD code.


What Makes Spark Connect Different?

Spark Connect introduces a client-server architecture that fundamentally changes how you interact with Spark:

Traditional Spark: Your laptop runs the full Spark runtime. You have access to everything — DataFrames, RDDs, SparkContext — but need the entire Spark distribution installed locally.

Spark Connect: Your laptop runs a thin client that sends DataFrame operations to a remote Spark cluster via gRPC. Only DataFrame/SQL APIs are supported on the client side — RDDs and SparkContext are not exposed over the remote connection.

[Figure: Spark Connect vs. traditional Spark architecture]


A Real-World Example

Let's analyze web server logs to find 404 errors and top pages.

With RDDs

With Spark Connect (DataFrames)

Or even simpler with SQL

The DataFrame approach is more readable, automatically optimized, and runs remotely without a full Spark installation.


Why the Restrictions?

Spark Connect's limitations are intentional design choices:

  • Simpler API surface — easier to maintain and evolve
  • Remote-friendly — DataFrames serialize well over the network; RDD closures don't
  • Better practices — encourages modern, optimized patterns
  • Stability — client crashes don't affect server-side jobs

When You Still Need RDDs

RDDs aren't obsolete — they're just specialized. You need them for:

  • Custom partitioning logic (rdd.partitionBy())
  • Complex stateful transformations outside DataFrame capabilities
  • Working with truly unstructured data that doesn't fit tabular models
  • Fine-grained control over shuffle and execution

The key constraint: if you need RDDs on the client side, you can't use Spark Connect. You'll need a traditional Spark setup with the full runtime installed locally.


The Bottom Line

For most data engineering workloads — ETL, analytics, aggregations — Spark Connect with DataFrames is simpler, faster, and more maintainable. The Catalyst optimizer often outperforms hand-tuned RDD code, and remote execution from notebooks or IDEs is a significant convenience gain.

RDDs remain available for edge cases requiring low-level control, but for most workloads the DataFrame API is the cleaner, more future-proof choice.


Decision Framework

Choose Spark Connect when:

  • You want remote development from Jupyter, VS Code, or other IDEs
  • Your workload fits DataFrame/SQL patterns
  • You value automatic query optimization
  • You want simplified dependency management (no full Spark install locally)

Stick with traditional Spark when:

  • You need RDD-level control on the client
  • Working with DynamicFrames (AWS Glue)
  • Custom partitioning or complex stateful operations
  • Legacy codebases that can't be refactored

Most new Spark applications can — and should — be built using DataFrames, making Spark Connect a natural fit for modern data platforms.

Thursday, December 14, 2023

xargs

  1.  Empty out every subdirectory. Navigate to the parent dir and run the command below. Each sh -c runs in its own subshell, so the original's trailing cd .. was unnecessary, and rm -rf needs an explicit target (the original had none):
    1. ls | xargs -I % sh -c 'cd "%" && pwd && rm -rf ./*'
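A more robust variant avoids parsing ls output (which breaks on filenames containing spaces) and touches only directories; the scratch-directory setup here is just for safe demonstration:

```shell
# Demo in a throwaway directory so nothing real is deleted
cd "$(mktemp -d)"
mkdir -p "dir one" dir2 && touch "dir one/f1" dir2/f2

# Empty every immediate subdirectory: -print0 / -0 pass names
# safely even when they contain spaces or newlines
find . -mindepth 1 -maxdepth 1 -type d -print0 |
  xargs -0 -I % sh -c 'cd "%" && pwd && rm -rf ./*'
```

If a subdirectory is already empty, rm will complain about the unexpanded ./* glob; that's harmless, but append 2>/dev/null to silence it.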