Prepared by Vysta
A focused review of brand distortion, Performance Max concentration, video-led scaling opportunity, landing-page fit, and measurement confidence across Google and YouTube.
Section 1 · The gap
CuroSleep is not dealing with a channel-viability problem. The account is already spending enough to matter. The issue is that too much of the performance narrative is being carried by one blended Performance Max engine, while cleaner acquisition lanes remain underbuilt.
Vysta would frame the gap more simply: the account has found signal, especially in video, but it has not yet built the operating structure required to turn that signal into a governable growth system.
90-day Google spend
€84.0K
Google Ads reports €260.8K in conversion value over the same period, but the read should stay directional rather than definitive.
PMax share of spend
89.5%
Most budget still sits inside blended automation rather than in clearly governed acquisition lanes.
Branded search ROAS
9.1x
Captured demand is materially flattering efficiency, which makes the account look cleaner than it is.
Main upside
Video-led DG scale
The account appears to have found winning video signal, but it is sitting in the wrong container for confident scaling.
Section 2 · Brand vs non-brand
The biggest distortion in this account is not that brand exists. It is that brand capture, near-brand demand, and opaque automation are all helping the account look more efficient than a pure acquisition read would suggest.
Branded search spend
€7.46K
The brand-defense layer is efficient, but it is also carrying too much of the account narrative.
Brand-like search spend
€12.1K
The supplied search-term file still shows meaningful cost against direct brand and near-brand variants.
Brand-like conversions
1,122
These queries account for 11.9% of conversions in the supplied search-term export.
Non-brand truth
Partially obscured
Misspelled brand capture and blended PMax delivery make true acquisition quality harder to trust.
Branded search is efficient, brand-like queries remain visible in the supplied search-term file, and the largest PMax line is big enough to hide additional brand-adjacent capture behind one blended result.
Branded Search campaign
€7,464
Extremely efficient brand defense is helping the whole account look more acquisition-efficient than it really is.
Brand-like search terms in export
€12,111
The supplied query file still contains direct brand and near-brand demand that should be separated from acquisition truth.
Top PMax mobile campaign spend
€54,939
This blended campaign is large enough to hide brand-adjacent demand, retargeting, and non-brand acquisition inside one headline result.
These are not bad queries to have. They are bad queries to let dominate the acquisition story when the account still needs a stronger net-new demand engine.
| Query | Source | Spend | Conversions |
|---|---|---|---|
| curosleep | Brand Search | €4,064 | 691.36 |
| curosleep kissen | Brand Search | €1,888 | 213.45 |
| curo sleep | Brand Search | €288 | 21.63 |
| curosleep erfahrung | PMax mobile | €540 | 11.01 |
| curosleep bewertung | Brand Search | €220 | 10.99 |
| cura sleep | PMax mobile | €122 | 3.00 |
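The distortion described above can be made concrete with figures already in this report. A minimal arithmetic sketch: the branded conversion value is inferred from the 9.1x ROAS rather than reported directly, so the output is directional, not audited.

```python
# Figures taken from this report; brand_value is inferred, not reported.
total_spend = 84_000      # 90-day Google spend (EUR)
total_value = 260_800     # Google-reported conversion value (EUR)
brand_spend = 7_464       # Branded Search campaign spend (EUR)
brand_roas = 9.1          # Branded Search ROAS

brand_value = brand_spend * brand_roas                # ~EUR 67.9K, inferred
blended_roas = total_value / total_spend              # what the account looks like
ex_brand_roas = (total_value - brand_value) / (total_spend - brand_spend)

print(f"Blended ROAS:  {blended_roas:.2f}x")   # ~3.10x
print(f"Ex-brand ROAS: {ex_brand_roas:.2f}x")  # ~2.52x
```

Note that this still leaves the roughly €12.1K of brand-like query spend inside the "ex-brand" line, so true acquisition ROAS is likely lower still.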
Vysta operating note
Brand protection should stay deliberate, efficient, and clearly separated. At Vysta, the point is not to remove branded demand. The point is to stop letting it define how hard the business thinks Google is acquiring new customers.
Section 3 · Funnel architecture
Roughly ninety percent of visible spend sits inside one blended Performance Max cluster. That is not inherently fatal, but it does make budget movement, creative diagnosis, and acquisition-quality interpretation far harder than they should be.
TOF
~1% of visible spend · 2.94x · Dedicated DG video
The cleanest explicit video-scaling lane exists, but it is barely funded relative to the rest of the account.
MOF / BOF
~90% of visible spend · 2.54x · Performance Max cluster
Automation is doing most of the work, but the middle and bottom of the funnel are blurred together inside the same system.
BOF
~9% of visible spend · 9.10x · Branded Search
Bottom-funnel capture is healthy, but it is too dominant in the story the account tells about itself.
The issue is not that these campaign types exist. The issue is that they are not yet doing clearly separated jobs, which leaves Vysta with too little control over what is genuinely scaling and what is merely being well credited.
| Campaign layer | Role | 90-day spend | ROAS | Reading |
|---|---|---|---|---|
| Brand Search | BOF | €7,464 | 9.10x | Captures existing demand very efficiently |
| PMax Assets + Mobile | MOF / BOF | €54,939 | 2.82x | Largest budget line, but too opaque to trust as pure acquisition |
| PMax Assets + Computer | MOF / BOF | €5,984 | 2.04x | Supportive layer, still blended and hard to govern |
| Demand Gen Video | TOF / MOF | €804 | 2.94x | Positive signal, but severely underfunded |
| Legacy PMax Assets | MOF / BOF | €5,814 | 1.84x | Older automation line with weaker efficiency |
| PMax Shopping variant | MOF / BOF | €3,043 | 1.47x | Product infrastructure is present, but not clearly leading scale |
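The headline funnel shares quoted in this section can be re-derived from the campaign table above. The percentages are shares of the six visible campaign lines, not of total account spend, which is why they read as approximate:

```python
# Campaign spend and funnel role, copied from the table above (EUR).
campaigns = [
    ("Brand Search", "BOF", 7_464),
    ("PMax Assets + Mobile", "MOF/BOF", 54_939),
    ("PMax Assets + Computer", "MOF/BOF", 5_984),
    ("Demand Gen Video", "TOF/MOF", 804),
    ("Legacy PMax Assets", "MOF/BOF", 5_814),
    ("PMax Shopping variant", "MOF/BOF", 3_043),
]

total = sum(spend for _, _, spend in campaigns)
by_layer: dict[str, int] = {}
for _, layer, spend in campaigns:
    by_layer[layer] = by_layer.get(layer, 0) + spend

for layer, spend in by_layer.items():
    print(f"{layer}: {spend / total:.1%} of visible spend")
# BOF: 9.6% · MOF/BOF: 89.4% · TOF/MOF: 1.0%
```

This reproduces the ~9% / ~90% / ~1% split used in the funnel cards above.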
Commercial interpretation
When a single automated campaign type carries most of the budget, the account can look simpler than it really is. In practice, it becomes harder to know whether scale is coming from real prospecting, warm recapture, or brand-adjacent query capture.
Section 4 · Shopping & feed
This section is less about dramatic conclusions and more about keeping the diagnostic honest. The product layer still needs a direct Merchant Center and feed review before it should carry bigger strategic confidence.
Shopping visibility
Partial
The supplied evidence is enough to see that Shopping is not the main growth engine, but not enough to fully diagnose Merchant Center health.
Product architecture
Obscured
Asset-led PMax delivery is making it harder to tell whether SKU structure or feed quality is helping or suppressing scale.
Feed confidence
Medium
This section should stay measured until title structure, segmentation, and Merchant Center diagnostics are reviewed directly.
The safest read is that asset-led automation is currently telling more of the story than clean product infrastructure. That does not prove the feed is broken. It proves the feed still needs its own diagnostic before Vysta should claim more certainty than the evidence warrants.
| Issue | What it means | Commercial impact | Vysta fix |
|---|---|---|---|
| No Merchant Center diagnostic supplied | The current evidence does not show product disapprovals, title quality, or attribute coverage directly. | Shopping conclusions can only be directional, which limits confidence in product-led scale decisions. | Run a dedicated Merchant Center and feed review before expanding Shopping harder. |
| Shopping is not the visible scale engine | The account story is being carried by asset-heavy PMax rather than a clearly evidenced Shopping structure. | Product truth is partially hidden beneath automated campaign mixing. | Separate product-led scaling logic from blended automation and inspect hero-SKU architecture. |
| PMax is blending product, video, and query intent | Feed quality, product-term demand, and video influence are all difficult to disentangle. | This makes it harder to decide whether feed work or media work should be prioritized first. | Create cleaner reporting splits and a dedicated product-structure audit. |
| Colder traffic is not yet matched to product storytelling layers | The product page is doing too much of the job for multiple traffic temperatures. | Even strong products can under-convert if the traffic arrives before the buyer is properly educated. | Pair feed and product architecture with better pre-sell and comparison-page logic. |
Vysta response
The next step is not to guess harder. It is to inspect Merchant Center health, hero-SKU structure, title logic, and segmentation directly so Shopping can become a governed growth lever rather than a hidden dependency beneath Performance Max.
Section 5 · YouTube & Demand Gen
The central YouTube read is simple: CuroSleep appears to have found meaningful video signal already. The problem is that too much of that signal is still being interpreted through Performance Max rather than through a cleaner Demand Gen operating model.
Dedicated DG video spend
€804
The account already has a clean video lane, but it is far too small relative to the broader PMax dependency.
Largest video asset spend
€32.6K
One vertical YouTube asset is doing enough work to justify its own governed scaling environment.
Second video asset spend
€13.4K
A second asset is also producing meaningful reported value, reinforcing that video is a real lever rather than a side experiment.
Policy pressure
Claims-sensitive
User notes point to unreliable-claims disapprovals, which means compliance discipline needs to sit alongside scale planning.
The opportunity is not theoretical. The supplied asset and ad data already show enough video concentration to justify a more deliberate scaling system. The question is not whether video matters. The question is whether Google should continue to own that interpretation inside PMax.
| Layer | Spend | Reported value | ROAS | Verdict |
|---|---|---|---|---|
| PMax video asset · 9KXuLyYx5bQ | €32,559 | €94,814 | 2.91x | Best signal, but buried inside PMax |
| PMax video asset · xtmUxv-ZLdM | €13,375 | €44,954 | 3.36x | Strong supporting signal with limited policy-status pressure |
| Demand Gen video campaign | €804 | €1,689 | 2.94x | Positive result, but too underfunded to define the strategy yet |
| Demand Gen image ad · Product Page | €259 | ~€605 | 2.34x | Useful support signal, but not the main opportunity |
| Demand Gen image ad · BOGO | €2 | €0 | 0.00x | Early weak signal and not the core scale path |
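The two PMax video-asset ROAS figures can be reproduced directly from the spend and reported-value columns of the table above; since the value is platform-reported, treat the ratios as directional:

```python
# Spend and Google-reported value per PMax video asset (EUR), from the table above.
video_assets = {
    "9KXuLyYx5bQ": (32_559, 94_814),
    "xtmUxv-ZLdM": (13_375, 44_954),
}

for asset_id, (spend, value) in video_assets.items():
    print(f"{asset_id}: {value / spend:.2f}x reported ROAS")
# 9KXuLyYx5bQ: 2.91x · xtmUxv-ZLdM: 3.36x
```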
The top-performing video should move into its own Demand Gen campaign so budget, audience quality, and scaling rules can be judged cleanly.
Performance Max can support scale, but it should not remain the main home for video-led acquisition if the goal is governable growth.
Cold prospecting, competitor targeting, warmer return traffic, and support creatives should not all sit inside the same blurred result line.
Claims and disclaimers need a formal rewrite layer so winning assets do not keep colliding with unreliable-claims limits as budget rises.
Vysta operating note
A winning video should not stay trapped inside a campaign type that makes audience quality and incrementality harder to trust. Vysta would promote the winner into a dedicated Demand Gen lane, then scale it with clearer segmentation, policy discipline, and measurement guardrails.
Section 6 · Landing page strategy
Landing-page fit becomes a real constraint if Vysta expands competitor, category, or colder video-led acquisition. One destination can close warm demand well while still being the wrong answer for colder traffic.
Current destination pattern
The product page is strong for warmer traffic, but colder video and competitor traffic likely need more qualification before the close.
Biggest strength
The product detail page (PDP) already does several important jobs well: strong offer framing, social proof, FAQs, and a 100-night trial story.
Biggest missed lever
The account appears under-equipped with pre-sell, comparison, or competitor-response environments for colder acquisition paths.
A short educational bridge can help video traffic understand the product problem before the PDP is asked to close the sale.
Competitor or category-comparison pages can turn conquesting interest into a more controlled conversion path.
Longer-form education is especially useful when the account leans on pain-relief framing that needs context and trust-building.
Promotional or bundle traffic should land on the offer environment that matches the ad, rather than being forced into one generic destination.
Commercial interpretation
If the account keeps sending colder awareness or conquesting traffic into one generic product environment, it risks making the media look worse than it should. Better page matching is not cosmetic. It is part of whether upper-funnel spend becomes investable.
Section 7 · Measurement reality
Measurement is where the remaining risk concentrates, which is why this account cannot be judged on platform comfort alone. The concentration of Performance Max and the video-heavy read both increase the cost of weak measurement discipline.
Primary reporting lens
Google platform value
Useful for directional optimization, but not sufficient to answer the incrementality question that matters most here.
New-visitor quality
Missing
Without a new-visitor percentage (NVP) or an equivalent signal, YouTube-led PMax performance is too easy to over-trust.
Placement clarity
Partial
The account needs cleaner proof of where PMax and Demand Gen are actually serving and which placements are truly additive.
Incrementality proof
Not closed
There is still no external truth layer strong enough to separate genuine demand creation from well-credited demand capture.
Tier 1
Separate brand, near-brand, and non-brand reporting before making larger scale calls.
Judge PMax less by blended comfort and more by where its value appears to be coming from.
Anchor decisions around business reality rather than platform enthusiasm.
Tier 2
Introduce new-visitor percentage or equivalent acquisition-quality signals.
Review order overlap and assisted-demand patterns across paid channels.
Use placement-level analysis to understand whether YouTube exposure is truly prospecting-led.
Tier 3
Layer in external attribution or stronger source-of-truth diagnostics where available.
Run budget or geography-based validation once the structure is cleaner.
Scale only the lanes that remain credible under a stricter truth system.
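The Tier 1 brand / near-brand / non-brand split can be automated rather than hand-maintained. A minimal sketch, assuming a single brand token and an illustrative 0.8 similarity threshold; both are assumptions that would need tuning against a labelled sample of real queries, not a Vysta standard:

```python
import difflib

BRAND = "curosleep"  # assumed single brand token for illustration

def classify(query: str, near_threshold: float = 0.8) -> str:
    """Tag a search term as brand, near-brand, or non-brand.

    The 0.8 similarity threshold is illustrative, not calibrated.
    """
    compact = query.lower().replace(" ", "")
    if BRAND in compact:
        return "brand"
    # Compare only the first len(BRAND) characters so long-tail
    # suffixes ("... erfahrung") do not dilute the similarity score.
    head = compact[: len(BRAND)]
    if difflib.SequenceMatcher(None, head, BRAND).ratio() >= near_threshold:
        return "near-brand"
    return "non-brand"

for q in ["curosleep kissen", "curo sleep", "cura sleep", "nackenkissen test"]:
    print(q, "->", classify(q))
```

Run against the supplied search-term export, this kind of tagger is what lets brand capture stop hiding inside the acquisition result line.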
Vysta measurement stance
Until the account has a stronger truth layer, scaling should stay disciplined. The goal is not to distrust Google by default. The goal is to stop letting opaque automation and incomplete attribution make bigger budget decisions feel more proven than they actually are.
Section 8 · 90-day roadmap
Vysta would rebuild the account in a sequence that protects truth before it chases volume. That is what makes the growth plan more investable over time.
Phase 1
Rebuild the account around explicit jobs: branded search defense, clean video acquisition, non-brand discovery, and controlled automation.
Tighten the reporting lens so brand capture and non-brand acquisition can no longer hide inside the same result line.
Audit Performance Max for brand-adjacent demand, misspellings, and over-credited video behaviour before pushing more budget through it.
Define the minimum measurement questions that must be answered before aggressive scale is treated as earned rather than hoped for.
Phase 2
Move the hero winning video into its own Demand Gen campaign with dedicated budget, cleaner audience segmentation, and explicit scaling rules.
Introduce a structured creative and policy-governance layer so assets can scale without repeated unreliable-claims friction.
Open competitor and category acquisition lanes that are currently underused, but do so through clearer landing-path logic rather than a generic PDP route.
Create at least one colder-traffic pre-sell or comparison environment to improve the match between video traffic and destination quality.
Phase 3
Scale only the channels and creatives that remain efficient once the truth layer is cleaner and the funnel is more readable.
Judge YouTube and Demand Gen not just by platform value, but by whether they improve new-customer quality and business confidence.
Reassess Shopping and feed infrastructure with a direct Merchant Center review so product-led scale can be expanded with more confidence.
Turn the first ninety days of cleaner learning into an operating model that tells CuroSleep what deserves more capital and what should stay constrained.
By the end of the first ninety days, the account should be easier to read, with brand defense, video-led acquisition, and automation support no longer blended into one comfort metric.
The winning creative should live in a campaign structure where Vysta can control budget, segmentation, compliance, and scaling rules rather than leaving those calls to a black-box container.
CuroSleep should move from one-size-fits-all destination logic toward a system that gives colder traffic more context before asking for the sale.
Leadership should leave the rebuild with a more reliable view of how much Google is really creating, what remains captured demand, and which lanes deserve more investment.
Instead of scaling because blended numbers look comfortable, the next quarter can be capitalized around the lanes that stay credible under stricter measurement and cleaner structure.
Too much commercial truth is still trapped inside Performance Max, which makes the account look simpler and more proven than it really is.
Promote the strongest video into a governed Demand Gen environment so Vysta can scale what is actually working with more control and less reporting ambiguity.
The diagnosis is strong enough to act on, but the final scale call should stay measured until incrementality, placement quality, and feed infrastructure are verified more directly.