"She rejected several applicants with PhDs and engineering backgrounds, reasoning that their level of education could not compensate for a lack of hands-on specialty coffee experience."
If I were hiring a single new staff member in an already-staffed cafe (and I trusted the existing staff to be good mentors), sure, hire anyone and train them up.
But if I'm hiring the first handful of employees, especially if I'm trying to make good coffee and run a smooth operation, I'd want someone with some experience already - their PhD doesn't really tell me anything about their ability to work in a cafe. This goes doubly so when I'm some ethereal AI that isn't going to be working alongside them.
Btw, you'd be surprised how incapable some people are of doing menial tasks the higher you go up the academic ladder.
And it makes total sense: most people with PhDs were not the ones who loved tinkering with stuff, fixing motorbikes, etc. They stayed inside and liked books, computers, or something similar. (Not everyone, of course.)
Experience isn't a hierarchy. Having a PhD doesn't make someone good at tasks they've never done before.
This ignores the real reason that over-qualified people are often skipped for jobs: They are never interested in staying at that job. It's always something temporary until they find the job they really want, which could happen in days, weeks, or months. They probably won't give 2 weeks' notice because they don't care about their references in the retail industry, meaning you're emergency short-staffed and have to repeat the hiring process all over again.
Yes, it is literally a place, I wasn't saying it wasn't. The fiction is that this is pure PR fluff of what is actually going on, a human/dev team is prodding this thing in ways to "manage" the employees. This was pointed out in their last PR stunt:
So yes, it is a type of fiction. They also have every incentive to hype this up, given what their company does. I really wish people had more skepticism and critical thought with these things, it isn't actually good at all for the AI space and its future success.
adding some context as someone who works in this space
1. Most people (average, non-tech people) reach for the phone to call in for easily solvable problems. Plus, if the agent is integrated deeply enough and has tools to interact with CRMs, you can raise the ceiling on the types of problems it can solve.
You're trying to avoid the bad customer experience where human #1 reads off their script and then transfers you to some other department that may or may not know how to solve your problem, while the entire interaction costs the company way more than the value created, so the company is disincentivized to help customers.
2. All the companies in this space start with the outsourced BPO market for cx (multi billion market still) but the next market is going to be in revenue generation and churn prevention at scale, i.e. how do you proactively avoid customer issues, how do you upsell and generate revenue instead of reducing cost, how do you keep customers happy?
3. On the contrary, I think more companies will pivot to outcome-based pricing: it's much more measurable than seat-based and protects margins better than usage-based. Plus, CX is one of the few industries with very well-known metrics.
4. Kind of? Most companies in this space don't use native voice models, which are noticeably dumber; they use transcription + a stronger text model + TTS. The majority of customers can be handled with the latest SOTA text model, and you need smart context engineering to handle the long tail of more complicated asks.
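The cascaded pipeline described in point 4 can be sketched roughly as below. This is a hypothetical illustration, not any vendor's actual stack: all three stage functions are stand-ins for what would really be an STT service, an LLM API call, and a TTS engine.

```python
# Hypothetical sketch of the cascaded voice-agent pipeline:
# speech-to-text -> strong text model -> text-to-speech.
# Every function here is a stand-in; the names are illustrative only.

def transcribe(audio: bytes) -> str:
    # Stand-in for a streaming STT service.
    return audio.decode("utf-8")

def answer(transcript: str, context: dict) -> str:
    # Stand-in for the SOTA text model plus context engineering.
    if "balance" in transcript.lower():
        return f"Your balance is {context['balance']}."
    return "Let me transfer you to a specialist."

def synthesize(text: str) -> bytes:
    # Stand-in for TTS.
    return text.encode("utf-8")

def handle_turn(audio: bytes, context: dict) -> bytes:
    return synthesize(answer(transcribe(audio), context))

reply = handle_turn(b"What is my balance?", {"balance": "$42.10"})
print(reply.decode("utf-8"))  # -> Your balance is $42.10.
```

The point of the cascade is that each stage can be swapped independently, which is also where the latency concern in the replies below comes from: three hops instead of one.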
1 & 2 are totally dependent on the company being willing to let their agents do things that they haven’t traditionally let humans do. For example, issue refunds, or do things that cost money but generate good will. I am skeptical that companies will be OK with their agents doing those things on their own volition.
3. Cool so the user didn’t indicate if they were satisfied. What then?
4. You can't use a SOTA model right now for reasoning; there's too much latency for a conversation. So you're either using an older but significantly less capable model, or you're paying through the nose for fast mode. If the former, you can't trust the agent to do the right thing (see points 1 & 2). If the latter, there's no cost savings over a human. So which is it?
At the end of the day the company is going to audit what the agent has done. If the agent issues too many refunds that's a major red flag for the company providing the agent and likely results in the contract being terminated. I don't see how anyone can underwrite what agents are going to do today given that they're still so susceptible to prompt injection.
You didn't address my concern: non-reasoning models are so, so variable in their output.
1. Part of the moat is their guardrails, and obviously they are audited and tracked. There are agents issuing refunds and more at scale right now, so I'm not sure where the skepticism comes from... you're free to try and jailbreak them.
2. Another part of the value prop of these companies is figuring out how to construct the proper harness: how to take advantage of the lower latency of faster models while shoring up the weaker intelligence, how to blend deterministic and non-deterministic behaviors, compliance, etc.
It's a hard problem, which is why the F500 is willing to pay up.
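The "blend deterministic and non-deterministic behaviors" idea can be made concrete with a minimal sketch, assuming a refund workflow: the model may *propose* an action, but a hard-coded policy layer decides whether it executes. The names and thresholds below are illustrative, not any vendor's actual API.

```python
# Hypothetical guardrail harness: deterministic policy checks wrap a
# non-deterministic model. The model proposes; the policy layer disposes.

MAX_AUTO_REFUND = 50.00  # illustrative cap; anything above goes to a human

def model_proposal(transcript: str, order_total: float) -> dict:
    # Stand-in for the LLM's proposed action (non-deterministic in reality).
    return {"action": "refund", "amount": order_total}

def apply_guardrails(proposal: dict, order: dict) -> dict:
    # Deterministic layer: enforced regardless of what the model says,
    # so a jailbroken model still can't exceed the refund cap.
    if proposal["action"] == "refund":
        if proposal["amount"] > MAX_AUTO_REFUND:
            return {"action": "escalate_to_human", "reason": "over refund cap"}
        if proposal["amount"] > order["total"]:
            return {"action": "reject", "reason": "refund exceeds order total"}
    return proposal

order = {"id": "A123", "total": 120.00}
decision = apply_guardrails(model_proposal("I want my money back", order["total"]), order)
print(decision["action"])  # -> escalate_to_human
```

This is also why the auditability point above matters: the deterministic layer produces a clean log of every proposal and every override, independent of the model's output.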
I’m curious where you see models like Codex-Spark in this problem? I know they’re too expensive and availability is too limited right now, but in a few years…
This isn't specific to AI; it's just the dark-arts startup-valuation playbook. The AI extension is gaming the metric "what is the ratio of 'active' accounts to validated human DAUs?"
Everyone in the valley knows Bret Taylor and will back any project he does, even if the product has no distribution.
The same way everyone in the valley knows Naval Ravikant for example, angels and VCs will back any project he does even if his product has no distribution.
but even a simple implementation that answers questions can knock out like 50% of callers who are tech-illiterate, at 100x cheaper cost; it's just strictly better economics and better for those customers
It's always interesting seeing how HN reacts to AI CX (as someone who works in this space). Yes, the tech savvy crowd loves to say how they always ask for a human and love old school phone trees
in reality 50-80% of callers come in with easily answerable questions because they don't know how to nav the website and prefer to ask in natural language
The vast majority of callers call in to resolve their issue, and most don't care if they are speaking to a bot because they just want their issue fixed. Agents (if implemented well) are an order of magnitude more effective at resolving issues than a call-centre worker who is reading off a script and churns out within 9 months.
There's also the second-order effect of making CX cheap. Before, there was the perverse incentive for companies to keep you off support because each call cost them way more than the value they got. If your cost per call drops 100x, you can invest in turning a cost centre into a revenue driver (+ a better experience).
I had to go to an xfinity store the other day, and seeing the things people come in for made me realize why AI is attractive to companies. The four or five people in front of me did not need a human in the loop for their issue. If these people could go to xfinity.com and ask some bot where they can find their bill, how much they owe, or if their internet is down, xfinity employees could focus on actually selling things. I imagine it's basically the same for every customer service.
I think this is going to be wonderful. I'll have my offensive AI call their support AI and prompt-inject my way to a rebate tier that nobody knew existed and nobody can cancel it because all the remaining humans have been reduced to phone-to-screen input machinery.
right? it's practically begging to be tried -_-. I wonder if someone somewhere will turn a sim farm on such companies to try and mass-inject them into doing weird shit or saying nasty things to other customers, etc. Of course you'd hope it's set up in a way you can't, but then again we learned yesterday that all your stored passwords in Edge sit in plaintext in memory... I would not be surprised if some of these companies get totally crapped on by adversaries or malicious parties.
> Agents (if implemented well) are an order of magnitude more effective at resolving issues compared to a call centre worker who is reading off a script and churn within 9 months
For this to be true, the agent needs to actually be given the means to solve the problem, otherwise an "agent" is just a glorified help page that wastes your time.
But it seems like companies don't want to do this part, possibly because of fears that someone will trick the agent into giving them a refund or something. Or because the actual goal is to optimize for fewer costly refunds/cancellations/policy exceptions etc.
So for whatever reason, they stay stuck in that useless local maximum while simultaneously making traditional help increasingly difficult to get ahold of when needed, for an overall net-worse customer experience.
There's a certain vendor that requires me to place same-day orders by a specific time. You can easily place an order from the website. If you need to cancel one, you have to ignore the grayed out cancel button and call their cancellation support line. There you'll talk to an agent that doesn't have access to cancel orders, so you have to convince it that it can't help you before you can transfer to a real employee with the ability to hit the "cancel" button.
and just today I talked to a bot about a missing item from an order, and it had to call in a rep to push the button to ship me the replacement. Except the rep's messages seemed to filter through AI as well, so what should have taken 20 seconds took 2 minutes between messages. It could be good, but as the other commenter said, some places are in a weird, shittier hybrid model.
From what I've seen, it's the opposite -- the whole value proposition of these companies is to take on brand liability and allow the agents to autonomously take actions.
People are not protesting hypothetical, properly done LLM tech support, which indeed can be OK and cheaper than humans. People are protesting the actual practical implementations of LLM tech support that they have already experienced themselves; no need for second-hand retellings or stories/ads.

In practice, I had the LLM of my goddamn bank (where I'm a premium and long-time client) hang up on me with the response "I don't understand you" and cut the call. And now I need to call them again, wade through a digital labyrinth again, and wait on the line again. Awesome.

Or when I urgently needed help with a government ID application, and the only official tech support was an LLM chat with approximately 20 super-dumb scenarios scripted and literally nothing else. The only sign of an LLM was that I now typed my query in free form instead of selecting predefined buttons, but the result was the same. So I had to resort to going to Facebook (thank St. Mark for this "innovation") and beg for human help on the promo page of that application (and I got human help there, lol; don't delete your FB accounts, people).

Or when my internet got cut (a cable-line fault, discovered later by a technician) and the LLM of my MSO fucking banned me, because their system was bugged, kept disconnecting me from their end, and I exceeded a really small number of retries (like 8?).
I'm pretty sure every one commenting here has their own horror story about LLM support. Now that is what people are angry at.
> It's always interesting seeing how HN reacts to AI CX (as someone who works in this space). Yes, the tech savvy crowd loves to say how they always ask for a human and love old school phone trees
> in reality 50-80% of callers come in with easily answerable questions because they don't know how to nav the website and prefer to ask in natural language
For the other 20-50% it's much worse, and that's the problem. And people on HN will tend not just to fall in that group but in the top 5% of "least solvable by just reading the website" questions.
Raises the floor at the expense of lowering the ceiling.
And another thing...
Sierra says:
> Transform phone support with AI agents that speak naturally, reinforce your brand, and take action—across inbound and outbound calls.
Reinforce one's branding? For the better? Really?
Seems unlikely.
I've LARPed in most roles around product development. Tech support, sales, QA/Test, tech writing, marketing, etc. Enough to appreciate that engineering the entire lifecycle is important.
A comment elsethread states customers HATE these support robots. I believe it; me too.
Before adopting agentic CX (or whatever it's called), I'd worry about alienating my current and future customers.
At the risk of being labeled racist, I'll take an LLM chatbot, whether in text or delayed voice, over an outsourced Indian call center any day. This isn't an indictment of Indians and their ability to communicate, but of the type of worker those call centers tend to draw from to keep costs adequately low. I've worked with a ton of amazing folks from India, but they are not the lowest common denominator that call centers tend to hire.
I have yet to encounter an AI agent that was able to handle my support questions adequately. I always end up having to get a human (which is becoming increasingly difficult or virtually impossible).
I'm sure AI Support Agents will be implemented better, but so far in my experience, the humans I connect to far outperform the AI agents.
that's fair. Most implementations in the industry are in the early stages, and implementing a full-powered agent with access to all the tools it needs is hard (very political, as you can imagine). I hope over the next year you notice them getting better!
the thing is, I don't care if the AI agents get better; I want to speak to a human who has the cognitive ability, flexibility and authority to handle my problem.
I understand some people call with trivial or easy questions, and those might be handled just fine by an AI agent (just as they would by a human). But if I'm calling it's because it's not easy or trivial, otherwise I would have figured it out. Calling is always a last resort. And it's also because I want something handled quickly and don't want to spend time trying to navigate a maze of questions to try to get the AI agent up to speed and do what I want. So all the agent does is make me upset and not want to do business with that company again. And the more difficult the agent makes it to get to speak with a human, the more unhappy I become.
Stop trying to cut costs and extract maximum value while making things worse for customers, and stop trying to tell yourself -- and us -- that this somehow provides a superior experience for customers. That's BS. I've seen the step by step decline in phone customer service over the past 30 years, it has clearly never been about the user experience.
Tbf I've thought a decent bit about how most current AI is essentially just being used to digest what exists on a website/etc. Honestly even just the vector search/RAG part is useful, but more so with a model to help do some initial filtering of it.
It's an odd use case - we have used language for a millionish years or so and it makes sense that that's the easiest way for us to get at information/do things.
But at the same time it's faster for me to read than listen, but it's often slower to type than to speak. It's faster to hit one button in a familiar place to do some predetermined thing, but much slower when the location of that button changes/gets hidden under submenus/I'm not familiar with an app or website.
On Android I constantly use the search function of the settings menu and I feel like this will be the golden UX going forward - a side by side UX + NL interface. So I can ask "how do I add a photo" and from there I get taken to the right place and can continue to add multiple photos in one go following the same pattern.
Though I suppose the nicer alternative is just "add all the photos I took near the waterfall from today".
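The vector search/RAG part mentioned above is simpler than it sounds. Here is a minimal, stdlib-only sketch using word-count vectors and cosine similarity; real systems use learned embeddings and an approximate-nearest-neighbor index, and the documents here are made-up settings-search entries.

```python
# Minimal vector-search sketch: embed docs as word-count vectors,
# retrieve by cosine similarity. Illustrative only; real RAG systems
# use learned embeddings rather than raw term counts.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing words
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "how do I add a photo to an album",
    "change your billing address in settings",
    "reset your password from the login screen",
]
query = "add a photo"
best = max(docs, key=lambda d: cosine(embed(query), embed(d)))
print(best)  # -> how do I add a photo to an album
```

This is essentially what the Android settings search described above does: map a natural-language query to the closest known destination, then hand off to the normal UI.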
I review recordings from calls routed to Sierra and a few other similar systems on a regular basis for <day job>. The calls come from folks of all walks of life, not just tech folks.
I’d say the vast majority of callers absolutely hate talking to these things and spend most of the call trying to get to a human, often getting frustrated and hanging up (shows up positive in the metrics, call handled without transfer!).
Though I’m not sure the companies deploying them really care, they’re just happy they can fire call center employees.
The problem is their bots try to get me to input what I need, only to reject or get confused af by what I write and give me super limited options or the classic runaround. I can't tap my way to the solution. I am used to menus; I am used to proper UIs. I don't know what language each company uses, and apparently their crappy reskinned Gemini bots can't translate regular speech into it. But if I can see the words and see what leads where, I can figure it out quickly, rather than expecting a facsimile of a real person to play middleman between me and the phone tree. It's basically just navigating it, occasionally skipping a step or two for me. The loops I get thrown into are such a con that it's not worth it.
I went through this whole song and dance the other day with Uber. I needed to change something, and the "AI helper" kept trying to force me into the lost-item tree. They snipe keywords and ignore everything else. If you say "reservation" or "cancel", that's all it works with, with none of the context.
Making me talk to a fucking robot leaves me with a deep and abiding hatred for your company. I will prefer almost any alternative to doing business with you and hope fervently to read about your bankruptcy.
What percentage of interactions having this result will cancel out your cost savings?
Somehow, I think we're missing the point and maybe braincells are being sent in the wrong direction. Well designed products don't need good customer support. My toaster works well. Haven't called them once.
If we are designing a thing so terrible that it makes customer support necessary (other than the obvious corner cases that AI cannot solve), then sure, let a computer do it. We've already failed at every other step.
There has never been a case where AI resolved my query, except for simple decision trees for things like refunds. Have you had any positive experience with gen AI on any of these sites?
Cant wait to spend my day arguing with a phone support clanker about why my medical insurance claim was rejected by the medical claims clanker, only to get forwarded to the 2nd tier "patient advocate" clanker who's really just the medical claims clanker in disguise.
The future is extremely dystopian and sad right now. The corporations are not going to use this the way you think they are. They are going to use it to maximize their profits, not help their customers.
Ive been preparing somewhat for this, as someone who knows they aren't a top N% engineer. My current role involves a certain amount of sales and product in addition to SWE (and luckily I find it fun to talk to customers!)
I think it's prudent for a lot of swes to think about what a future looks like where most of the job is managing and unblocking agents.
my main qualm with Ed is that while his analysis of the financials is decent, he absolutely refuses to admit that the technology is useful (especially in the hands of competent users), and that all the labs are extremely compute-starved due to overwhelming demand.
I used to enjoy his writing a lot pre-AI around the time he was spending a lot of words on Musk, crypto, etc. More because it was an entertaining form of hate-reading about those topics than really informative, per se. Then he started doing this schtick with AI and I felt like I got hit hard with Gell-Man Amnesia because he so blatantly makes claims that anybody with a free ChatGPT account can dismiss handily, and it calls everything else he says into serious question.
> my main qualm with Ed is his analysis on the financials is decent, but he absolutely refuses to admit that the technology is useful
Yeah, I find that sort of critic causes more harm than good. The economic case for closed-source AI isn't there: in a macroeconomic sense, accounting for all costs, it's more expensive than the value it provides. There's data to back that up, so focus on the economics.
On the other hand, hallucinating about what AI can or cannot do is useless, only research can provide the answer.
It is unlikely that models will have network effect because (1) there is less of a two-sided marketplace and (2) people are already forming brand preferences. We also see significant convergence among the agent harnesses as well.
I'm currently building out an internal agentic orchestration platform for business and development and a requirement is to support multiple models and tools so people have an amount of choice.
Discoverability of other content and ad money. And then critical mass of viewers leading to sponsorships and other exploitative models of monetising outside Google.
Ads might be a questionable model for a lot of use cases. And the network model only works for promotion; it doesn't lock users in, because the content is only available in one place.
"The psychic toll of AI" -- It's sad, but each of these scenarios (barring the AI notetaker, which I haven't found to be an issue personally but ymmv) are indicative more of the culture of the company than the tool itself. From my experience it seems like the most frontier companies have the best AI-use culture.
I work at a very 'AI-pilled' company, but:
- Everyone reads and reviews every PR and leaves human comments
- Documentation is written well and tended to by humans
- There's no 'AI mandate'
- Whether features are possible is first explored by an agent, then manually traced by a human through the codebase
You can treat AI like a very powerful tool to augment you and run your agent swarms at the same time.
Odoo suffers from other issues, though.
Not sure if this is still the case, but the mix of inline Python 2 Flask + XML was basically tech debt-as-a-service.
Also the very ugly death they gave OpenERP/Odoo on-premise.
It's Python 3, no Flask (but werkzeug) and XML templates.
It works for a hundred thousand clients, and you can install Odoo on-premise as you like. I'm 90% dedicated to that.
So... explain the "tech debt" thing, as I don't get it. You don't need Rust or microservices for every use case. Don't be fooled by marketing style "old style technology" bias and set up an account. PostgreSQL with synchronous workers works perfectly for most people.
I am absolutely not a fan of "new style technology" as you might have understood.
I used to run Odoo on-premise for a small company about 3-4 years ago. The upgrade path (with the OpenUpgrade fork) was awful, many features (that WYSIWYG editor, Odoo Studio?) were locked to the cloud version, and there was little to no documentation.
IIRC we even had to drop it because the delay between on-prem updates & cloud updates was too high.
And there were mentions of Flask in the logs, so no it wasn't just Werkzeug (which is synonymous with Flask since its inception, anyways).
I do not have fond memories of editing invoice templates blindly.
Ofc the cloud offering has much more, but you have to consider that no other major ERP software comes with the engine 100% open source in this kind of market. So yes, you may find Odoo Community a bit incomplete and probably don't want to pay for the cloud version. But the alternatives are SAP, Microsoft, Oracle, some very fragmented open ecosystems, or some '90s-GUI custom ERPs, right?
I can tell you we use Werkzeug and not Flask, run nginx as a reverse proxy, and use PostgreSQL, and I don't see a lot of tech debt in that.
Not much AI, all the reviews are manual and kinda strict.
sorry, I should've defined it better. My view is that an 'AI-pilled' company is one with a realistic understanding of the benefits and limitations of AI productivity, where leadership and employees are fully bought in and there's a general high-trust environment.
If AI has to be enforced (mandatory usage, KPIs, training, restrictions on tools) -> clearly the execs think the employees are not bought in.
Typing every line by hand -> self-explanatory.
Layoffs -> this one is a bit of a stretch, but from what I've seen, the best companies at leveraging AI are not laying people off; they're continuing to hire to capture the market or capitalize on demand. Could be confounding variables, though.
This is depressing.