
    Why Are Marketers Turning To Quasi Geo-Lift Experiments? (And How to Plan Them)

    By ProfitlyAI | September 23, 2025 | 24 min read


    💡 Note: The GeoLift R code is at the end of the article.

    Throughout my career, I've used quasi-experimental designs with synthetic control groups to measure the impact of business changes. At Trustpilot, we switched the position of a banner on the homepage and retrospectively observed signals pointing to a decrease in our main demand metric. By modeling how performance would have looked without the change and comparing it to what actually happened, we got a clear read on the incremental decline. At Zensai, we decided to treat some of our web forms and showed in a similar way whether the change had a positive impact on user-reported ease of completion.

    This approach works wherever you can split something into treatment and control groups: individuals, smartphones, operating systems, forms, or landing pages. You treat some. The others help you model what would have happened without treatment. Then you compare.

    If you cannot randomize at the individual level, geography makes a great unit of analysis. Cities and regions are easy to target via paid social, and their borders help contain spillovers.

    There are three practical designs for isolating incremental impact:

    1. Randomized controlled trials (RCTs), like conversion or brand lift tests, are appealing when you can use them, as they run inside the big ad platforms. They automate random user assignment and can deliver statistically robust answers within the platform's environment. However, unless you leverage the Conversions API, their scope is constrained to the platform's own metrics and mechanics. More recently, evidence (1, 2, 3) has emerged that algorithmic features may compromise the randomness of assignment. This creates "divergent delivery": ad A gets shown to one type of audience (e.g., more men, or people with certain interests), while ad B gets shown to another type. Any difference in CTRs or conversions cannot be attributed purely to the ad creative; it is confounded with how the algorithm delivered the ads. On Google Ads, Conversion Lift typically requires enablement by a Google account representative. On Meta, eligible advertisers can run self-serve Conversion Lift tests in Ads Manager (subject to spend and conversion prerequisites), and Meta's 2025 Incremental Attribution feature further lowers friction for lift-style measurement.
    2. Geo-RCTs (Randomized Geo Experiments) use geographies as the unit of analysis with random assignment to treatment and control. There is no need for individual-level tracking, but you need a sufficiently large number of geos to achieve statistical power and confident results. Because assignment is randomized, you don't build a synthetic control as you do in the next type of experiment.
    3. The Quasi Geo-Lift Experiment approaches the same question using geos as the units of analysis, with no need for individual-level tracking. Unlike Geo-RCTs, which require randomization and more geos, this quasi-experimental approach offers three key advantages: it a) works well with fewer geographies, b) enables strategic market selection (direct control over where and when treatment is applied based on business priorities), and c) accommodates retrospective analysis and staggered rollouts (if your campaign already launched or must roll out in waves for operational reasons, you can still measure incrementality after the fact, since randomization isn't required). The synthetic control is built to match the pre-intervention trends and characteristics of the treatment unit, so any differences after treatment starts can be attributed to the campaign's incrementality. But: successful execution requires strong alignment between analytics and performance marketing teams to ensure correct implementation and interpretation.

    The benefits of the *Quasi Geo-Lift Experiment* are substantial. To help you understand when you might use a quasi-experiment in your marketing science function, let's consider the example below.


    Quasi Geo-Lift Experiment Example

    Fig. 2: Quasi-experiment example summary (source: own production)

    Consider a ride-hailing company such as Bolt evaluating a new channel in Poland. The business has had a new channel on its radar for some time and now wants to try it out. There is a need to understand how a potential new campaign on this channel, called TweetX, affects rides, and what minimum budget is required to detect the impact. The available dataset is a daily ride panel covering 13 Polish cities starting in 2022. The proposed start date is 2023-09-18, and activation can be nationwide or city-based. Practical constraints are present: users cannot be enrolled into individual treatment and control groups, since the new channel campaign targets only new riders. This rules out user-level A/B testing. Moreover, privacy conditions prevent reliance on tracking-based attribution. Historical priors from other channels in Poland suggest a cost per ride between €6 and €12, CPMs between €2 and €4, and an average profit per ride of €6. A recent benchmark monthly budget for a comparable channel, Snapcap, was a maximum of €5,000/month.

    These conditions favor a quasi-experimental geo design that measures business outcomes directly, without relying on user-level tracking or dependencies on platforms like Meta or Google, but instead using commonly available historical data such as rides, sales, conversions, or leads.

    To perform the geo-lift test with a synthetic control group design on our example, we'll use the GeoLift package in R, developed by Meta. The dataset I will be using is structured as daily ride counts across 13 cities in Poland.

    Fig. 3: Dataset for the Quasi Geo-Lift Experiment (source: own production)

    Best practices for your data when using a Quasi Geo-Lift Experiment:

    • Use daily data instead of weekly when possible.
    • Use the most detailed location data available (e.g., zip codes, cities).
    • Have at least 4–5 times the test duration in stable, pre-campaign historical data (no major changes or disruptions; more on this later, in the operationalization chapter below!)
    • Have at least 25 pre-treatment periods with a minimum of 10, but ideally 20+, geo-units.
    • Ideally, collect 52 weeks of history to capture seasonal patterns and other factors.
    • The test should last at least one purchase cycle for the product.
    • Run the study for at least 15 days (daily data) or 4–6 weeks (weekly data).
    • Panel data (covariates) is helpful but not required.
    • For every time/location pair, include date, location, and KPIs (no missing values). Additional covariates can be added if they also meet this rule.
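    As a quick sanity check before loading anything into GeoLift, the completeness rules above can be verified in a few lines of base R. The toy panel below is simulated, and the column names `date`, `location`, `Y` follow the convention used later in this article:

```r
# Build a toy daily panel for three hypothetical cities, then verify the
# completeness rules: no missing KPIs and a full date x location grid.
set.seed(1)
panel <- expand.grid(
  date = seq(as.Date("2023-01-01"), as.Date("2023-03-01"), by = "day"),
  location = c("warsaw", "krakow", "gdansk"),
  stringsAsFactors = FALSE
)
panel$Y <- rpois(nrow(panel), lambda = 500)  # daily rides (simulated)

stopifnot(!anyNA(panel))                          # no missing values anywhere
n_dates <- length(unique(panel$date))
stopifnot(all(table(panel$location) == n_dates))  # every city covers every date
stopifnot(n_dates >= 25)                          # at least 25 pre-treatment periods
```

    Running these checks on your real panel before `GeoDataRead` saves a round of debugging later.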

    Planning Your Quasi Geo-Lift Experiment

    Fig. 4: Quasi Geo-Lift Experiment planning (source: own production)

    Designing a quasi geo-lift experiment is not just about running a test; it's about creating a system that can credibly link marketing actions to business outcomes.

    To do this in a structured way, the launch of any new channel or campaign should be approached in phases. Here is my ABCDE framework to help you with it:

    (A) ASSESS

    • Establish how incrementality will be measured and which methodology will be used.

    (B) BUDGET

    • Determine the minimum spend required to detect an effect that is both statistically credible and commercially meaningful.

    (C) CONSTRUCT

    • Specify which cities will be treated, how controls will be formed, how long the campaign will run, and what operational guardrails are needed.

    (D) DELIVER

    • Convert statistical outputs into metrics and report the results as a readout.

    (E) EVALUATE

    • Use the results to inform broader decisions by updating MMM and MTA. Focus on calibrating, stress-testing, replicating, and localizing for rollout.

    (A) ASSESS the marketing triangulation variable in question by drilling down into it.

    Fig. 5: Marketing triangulation framework and incrementality breakdown (source: own production)

    1. Marketing triangulation as a starting point.

    Start by evaluating which part of the marketing triangulation you want to break down. In our case, that would be the incrementality piece. For other projects, drill down into MTA and MMM similarly. For instance, MTA covers heuristic out-of-the-box methods like last click, first click, first- or last-touch with decay, (inverted) U-shape, or W-shape, but also data-driven approaches like Markov chains. MMM can be custom, third party, Robyn, or Meridian, and involves additional steps like saturation, adstock, and budget reallocation simulations. Don't forget to set up your measurement metrics.

    2. Incrementality is operationalized through a geo-lift test.

    The geo-lift test is the practical expression of the framework because it reads the outcome in the same units the business manages, such as rides (or sales, conversions, leads) per city per day. It creates treatment and control just as a traditional randomized study would, but it does so at the geographic level.

    Fig. 6: Geo-lift test vs. user-centric RCT Conversion Lift test (source: own production)

    This makes the design executable across platforms and independent of user-level tracking, while letting you study your preferred business metrics.

    3. Recognition of the two experimental families of the geo-lift test: RCTs and quasi-experiments.

    Where in-platform RCTs such as conversion lift or brand lift tests exist (Meta, Google), they remain the standard and can be leveraged. When individual randomization is infeasible, the geo-lift test proceeds as a quasi-experiment.

    4. Identification relies on a synthetic control method.

    For each treated city, a weighted combination of control cities is found that reproduces its pre-period trajectory. The divergence between the observed series and its synthetic counterpart during the test window is interpreted as the incremental effect. This estimator preserves scientific rigor while keeping execution feasible and auditable.
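    To make the idea concrete, here is a toy sketch of fitting weights on control cities so that their combination reproduces a treated city's pre-period series. This is not the GeoLift internals (GeoLift uses an augmented synthetic control estimator); all series and city names below are simulated assumptions:

```r
# Simulate pre-period series for three control cities (random walks around a level).
set.seed(42)
n_pre <- 100
controls <- cbind(
  gdansk = 100 + cumsum(rnorm(n_pre)),
  krakow = 120 + cumsum(rnorm(n_pre)),
  lublin =  90 + cumsum(rnorm(n_pre))
)
# The "treated" city is, by construction, a weighted mix of the controls plus noise.
true_w <- c(0.6, 0.3, 0.1)
treated_pre <- as.vector(controls %*% true_w) + rnorm(n_pre, sd = 0.5)

# Fit weights by least squares, normalized to sum to one -- a simplified stand-in
# for the constrained optimization a real synthetic control method performs.
loss <- function(w) sum((treated_pre - controls %*% (w / sum(w)))^2)
fit <- optim(rep(1, 3) / 3, loss, method = "L-BFGS-B", lower = 1e-6)
w_hat <- fit$par / sum(fit$par)

synthetic_pre <- as.vector(controls %*% w_hat)
round(w_hat, 2)  # recovers weights close to c(0.6, 0.3, 0.1)
```

    After treatment starts, the same weights project the counterfactual forward; the gap between the observed and synthetic series is the estimated incremental effect.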

    5. Calibration and validation are explicit steps, not afterthoughts.

    The experimental estimate of incrementality is used to validate that attribution signals point in the right direction and to calibrate MMM elasticities and adstock (via the calibration multiplier), so cross-channel budget reallocations are grounded in causal truth.

    Fig. 7: How to calibrate your MTA and MMM. Leverage Quasi Geo-Lift Experiment results to calibrate your MTA/MMM using a calibration multiplier (source: own production)

    6. Measure the impact in business terms.

    In the planning phase, the core statistic is the Average Treatment Effect on the Treated (ATT), expressed in outcome units per day (e.g., rides per city per day). That estimate is translated into Total Incremental Rides over the test window and then into Cost per Incremental Conversion (CPIC) by dividing spend by the total number of incremental rides. The Minimum Detectable Effect (MDE) is reported to make the design's sensitivity explicit and to separate actionable results from inconclusive ones. Finally, Net Profit is calculated by combining historical rider profit with the incremental results and CPIC.

    The total incremental leads can be multiplied by a blended historical conversion rate from lead to customer to estimate how many new customers the campaign is expected to generate. That figure is then multiplied by the average profit per customer. This way, even when revenue is realized downstream of the lead stage, the experiment still delivers a clean estimate of incremental financial impact and a clear decision rule for whether to scale the channel. All other metrics, like ATE, total incremental leads, cost per incremental lead, and MDE, are calculated in a similar fashion.
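    As a minimal numeric sketch of that chain, from ATT to Net Profit: the ATT below is an assumed placeholder, while spend and profit per ride are the priors from the Bolt example.

```r
att_rides_city_day <- 45      # assumed ATT: incremental rides per treated city per day
n_treated   <- 3
test_days   <- 21
spend       <- 3038.16        # EUR, planned spend for the 3-week window
profit_ride <- 6              # EUR, historical profit per ride (prior)

incremental_rides <- att_rides_city_day * n_treated * test_days   # 2835
cpic       <- spend / incremental_rides                           # ~1.07 EUR per incremental ride
net_profit <- incremental_rides * profit_ride - spend             # ~13,972 EUR
round(c(rides = incremental_rides, cpic = cpic, net_profit = net_profit), 2)
```

    Swap `incremental_rides` for incremental leads and `profit_ride` for blended profit per lead and the same arithmetic covers lead-based businesses.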


    (B) BUDGET for your quasi-experiment.

    Fig. 8: Estimating the budget for a Quasi Geo-Lift Experiment (source: own production)

    Budget estimation is not guesswork. It's a design choice that determines whether the experiment yields actionable results or inconclusive noise. The key concept is the Minimum Detectable Effect (MDE): the smallest lift the test can reliably detect given the variance in the historical data, the number of treated and control cities, and the length of the test window.

    In practice, variance is estimated from historical rides (or sales, conversions, or leads in other industries). The number of treated cities and the test length then define sensitivity. For example, as you will see later, treating 3 cities for 21 days while holding 10 as controls provides enough power to detect lifts of about 4–5%. Detecting smaller but statistically significant effects would require more time, more markets, or more spend.
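    The intuition behind these power numbers can be shown with a crude simulation: a plain two-sample t-test on simulated city-day volumes. GeoLift's own power simulation is more sophisticated, and the volumes and noise level below are assumptions, not the article's data:

```r
# Share of simulated tests that detect a given lift at alpha = 0.05 (one-sided).
set.seed(7)
power_for_lift <- function(lift, n_sims = 500) {
  mean(replicate(n_sims, {
    control <- rnorm(63, mean = 1000, sd = 60)            # 3 cities x 21 days, assumed volumes
    treated <- rnorm(63, mean = 1000 * (1 + lift), sd = 60)
    t.test(treated, control, alternative = "greater")$p.value < 0.05
  }))
}
sapply(c(0.01, 0.03, 0.05), power_for_lift)  # power climbs steeply with effect size
```

    The shape is the point: below the MDE, detection rates collapse toward the false-positive rate, which is why spending less than the powered minimum mostly buys noise.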

    The GeoLift package runs these power analyses and then prints the budget–effect–power simulation chart for any experiment ID.

    The budget is aligned with unit economics. In the Bolt case, business priors suggest a cost per ride of €6–€12 and a profit per ride of €6. Under these assumptions, the minimum spend to achieve an MDE of roughly 5% comes to €3,038 for 3 weeks, or €48.23 per treated city per day. This sits within the €5,000 benchmark budget but, more importantly, makes explicit what effect size the test can and cannot detect.
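    The per-city figure is just the total spend spread over treated city-days, which is worth double-checking when you plan pacing:

```r
spend_total <- 3038.16                      # EUR, minimum spend for a ~5% MDE
n_treated   <- 3
test_days   <- 21

spend_city_day <- spend_total / (n_treated * test_days)
round(spend_city_day, 2)   # ~48.22 EUR, matching the article's figure up to rounding

# Sanity check against the 5,000 EUR/month benchmark, pro-rated to 3 weeks:
spend_total <= 5000 / 30 * test_days
```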

    Framing the budget this way has two advantages. First, it ensures the experiment is designed to detect only effects that matter for the business. Second, it gives stakeholders clarity: if the result is null, it means the true effect was most likely smaller than the threshold, not that the test was poorly executed. Either way, the spend is not wasted. It buys causal knowledge that sharpens future allocation decisions.


    (C) CONSTRUCT your quasi-experiment design.

    Fig. 9: Treatment and control geography selection in a Quasi Geo-Lift Experiment (source: own production)

    Designing the experiment is about more than picking cities at random. It's about building a layout that preserves validity while staying practical for operations. The unit of analysis is the city-day, and the outcome is the business metric of interest, such as rides, sales, or leads. Treatment is applied to chosen cities for a fixed test window, while the remaining cities serve as controls.

    GeoLift will model and rank the best city candidates for your treatment.

    Control groups are not just left as-is.

    They can be refined using a synthetic control method. Each treated city is paired with a weighted combination of control cities that reproduces its pre-test trajectory. When the pre-period fit is accurate, the post-launch divergence between observed and synthetic outcomes provides a credible estimate of the incremental effect.

    Operational guardrails are important to protect signal quality.

    Fig. 10: Quasi Geo-Lift Experiment planning and design (source: own production)

    City boundaries should be fenced tightly in the campaign settings to reduce spillovers from commuters. Also consider strictly excluding control cities from treatment-city targeting, and vice versa. No special targeting or lookalike audiences should be applied to the campaign. Local promotions that could confound the test are either frozen in treated geographies or mirrored in controls.

    Creatives, bids, and pacing are held constant during the window, and results are only read after a short cooldown period because of the marketing adstock effect. Other business-relevant factors should be considered as well; in our case, for example, we should check driver supply capacity upfront to ensure the additional demand can be served without distorting prices or wait times.

    Constructing the test means choosing the right balance.

    Fig. 11: Simulated and ranked Quasi Geo-Lift Experiment groups (source: own production)

    The GeoLift package will do the heavy lifting for you by modeling and ranking all of the optimal experiment setups.

    Based on your power analysis and market selection code, choosing the right experiment setup is a balance between:

    • number of cities
    • duration
    • spend
    • desired incremental uplift
    • profit
    • statistical significance
    • test vs. control alignment
    • smallest detectable lift
    • other business context

    A configuration of three treated cities over a 21-day period, as in the Bolt example, provides sufficient power to detect lifts of ~4–5%, while keeping the test window short enough to minimize contamination. Based on earlier experiments, we know this level of power is adequate. In addition, we need to remain within our budget of €5,000 per month, and the expected investment of €3,038.16 for 3 weeks fits well within that constraint.


    (D) DELIVER the post-experiment readout.

    Fig. 12: Results of the Quasi Geo-Lift Experiment (source: own production)

    The final step is to deliver results that translate statistical lift into clear business impact for stakeholders. The experiment's outputs should be framed in terms that both analysts and decision-makers can use.

    At the core is the Average Treatment Effect on the Treated (ATT), expressed in outcome units per day such as rides, sales, or leads. From this, the analysis calculates total incremental outcomes over the test window and derives Cost per Incremental Conversion (CPIC) by dividing spend by those outcomes. The Minimum Detectable Effect (MDE) is reported alongside the results to make the test's sensitivity clear, separating actionable lifts from inconclusive noise. Finally, the analysis converts the results into Net Profit by combining incremental conversions with unit economics.

    For lead-based businesses, the same logic applies except for net profit: there, the total incremental leads can be multiplied by a blended conversion rate from lead to customer, then by the average profit per customer, to approximate the net financial impact.

    Be careful when interpreting results for stakeholders. Statistical analysis with p-values provides evidence, not absolute proof, so phrasing matters. GeoLift uses Synthetic Control/Augmented Synthetic Control with frequentist inference.

    A widespread but misleading interpretation of statistical significance might sound like this:

    “The result of the quasi geo-lift experiment proves that rides increased by 11.1% because of the campaign, with 95% probability.”

    This interpretation is problematic for several reasons:

    • It treats statistical significance as proof.
    • It assumes the effect size is exact (11.1%), ignoring the uncertainty range around that estimate.
    • It misinterprets confidence intervals.
    • It leaves no room for alternative explanations that might create value for the business.
    • It can mislead decision-makers, creating overconfidence and potentially leading to bad business decisions.

    When performing a quasi geo-lift test, what does a statistical test actually test?

    Every statistical test depends on a statistical model, which is a complex web of assumptions. This model includes not only the main hypothesis being tested (e.g., the new TweetX campaign has no effect) but also a long list of other assumptions about how the data were generated. These include assumptions about:

    • Random sampling or randomization.
    • The type of probability distribution the data follow.
    • Independence.
    • Selection bias.
    • The absence of major measurement errors.
    • How the analysis was conducted.

    A statistical test does not just evaluate the test hypothesis (such as the null hypothesis). It evaluates the entire statistical model: the whole set of assumptions. That is why we always try to make sure that all other assumptions are fully met and the experiment design is not flawed: if we then observe a small p-value, we can reasonably read it as evidence against the null, not as proof or 'acceptance' of H1.

    The p-value is a measure of compatibility, not truth.

    The most common definition of a p-value is flawed. A more accurate and useful one is:

    The p-value is a continuous measure of the compatibility between the observed data and the entire statistical model used to compute it. It is the probability that the chosen test statistic would be at least as large as its observed value if every single assumption in the model (including the test hypothesis) were correct.

    Think of it as a “surprise index.”

    • A small p-value (e.g., P=0.01) means that the data are surprising if the entire model were true. It's a red flag telling us that one or more of our assumptions might be wrong. However, it doesn't tell us which assumption is wrong. The issue could be the null hypothesis, but it could also be a violated study protocol, selection bias, or another unmet assumption.
    • A large p-value (e.g., P=0.40) means that the data are not unusual or surprising under the model. It suggests the data are compatible with the model, but it doesn't prove the model or the test hypothesis is true. The data could be equally compatible with many other models and hypotheses.

    The common practice of degrading the p-value into a simple binary, “statistically significant” (P≤0.05) or “not significant” (P>0.05), is damaging. It creates a false sense of certainty and ignores the actual amount of evidence.

    Confidence intervals (CIs) and their importance in the quasi geo-lift test.

    A confidence interval (CI) is more informative than a simple p-value from a null hypothesis test. It can be understood as the range of effect sizes that are relatively compatible with the data, given the statistical model.

    A 95% confidence interval has a specific frequentist property: if you were to repeat a quasi geo-lift study numerous times with valid statistical models, 95% of the calculated confidence intervals would, on average, contain the true effect size.

    Crucially, this does not mean there is a 95% probability that your specific interval contains the true effect. Once calculated, your interval either contains the true value or it doesn't (0% or 100%). The “95%” tells you how often this method would capture the true effect over many repeated studies, not how certain we are about this single interval. If you want to move from frequentist confidence intervals to direct probabilities about lift, Bayesian methods are the way.
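    The coverage property is easy to demonstrate by simulation: a plain t-interval on simulated normal data, nothing GeoLift-specific.

```r
# Repeat a study 2,000 times and record whether each 95% CI captures the true mean.
set.seed(1)
true_mean <- 10
covered <- replicate(2000, {
  x  <- rnorm(30, mean = true_mean, sd = 5)
  ci <- t.test(x)$conf.int
  ci[1] <= true_mean && true_mean <= ci[2]
})
mean(covered)  # ~0.95: the method's long-run capture rate, not a claim about one interval
```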


    If you'd like to dive deeper into p-values, confidence intervals, and hypothesis testing, I recommend these two well-known papers:

    • A Dirty Dozen: Twelve P-Value Misconceptions – link here.
    • Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations – link here.

    (E) EVALUATE your Quasi Geo-Lift Experiment with a broader lens.

    Fig. 13: Post-experiment evaluation and next steps (source: own production)

    Remember to zoom out and see the forest, not just the trees.

    The results must feed back into marketing triangulation by validating attribution ROAS and calibrating marketing mix models with a causal multiplier.

    They should also guide the next steps: replicate positive results in new geographies, assess the percentage lift against the minimum detectable threshold, and avoid generalizing from a single market before further testing.

    Stress-testing with placebo tests (in-space or in-time) will further strengthen confidence in your findings.

    Fig. 14: Five random in-time placebo tests (source: own production)

    Below are the results from one in-time placebo: a +1.3% lift from a placebo is not statistically strong enough to reject H0 (no difference between treatment and control group), as expected for a placebo:

    Fig. 15: Results of the in-time placebo test (source: own production)
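    The logic of an in-time placebo can be mimicked without GeoLift: pretend a treatment started during a period when nothing actually changed, and check how often a "lift" is declared. Over many simulated placebos on untreated data, detections should occur only at the false-positive rate. The data below are simulated and the t-test is a stand-in for GeoLift's estimator:

```r
# 1,000 placebo "tests" on untreated simulated data: a fake 21-day window is
# compared against the preceding days with a plain t-test.
set.seed(11)
fp_rate <- mean(replicate(1000, {
  rides <- rnorm(90, mean = 1000, sd = 60)   # 90 untreated days (simulated)
  t.test(rides[61:81], rides[1:60])$p.value < 0.05
}))
fp_rate  # ~0.05, as expected when no real effect exists
```

    If your placebos "detect" lift far more often than alpha, something in the model or data (trend breaks, spillovers, seasonality) is inflating the signal.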

    Incorporating the channel's impressions and spend into the MMM helps capture cross-channel interactions with the other channels in your media mix.

    If the campaign fails to deliver the expected lift despite planning suggesting it should, it is important to evaluate factors not captured in the quantitative results, such as the messaging or creative execution. Often, the shortfall can be attributed to how the campaign does (or doesn't) resonate with the target audience rather than to flaws in the experimental design.

    What's in it for you?

    Quasi geo-lift lets you prove whether a campaign truly moves the needle without user-level tracking or massive randomized tests. You pick a few markets, build a synthetic control from the rest, and read the incremental impact directly in business units (rides, sales, leads). The ABCDE plan makes it practical:

    • Assess marketing triangulation and how you'll measure,
    • Budget to a clear MDE,
    • Construct treatment/control and guardrails,
    • Deliver ATT → CPIC → profit, then
    • Evaluate by calibrating MMM/MTA, stress-testing with placebos, and looking at your business context with a broader lens.

    Net result? Faster, cheaper, defensible answers you can act on.


    Thanks for reading. If you enjoyed this article or learned something new, feel free to connect and reach out to me on LinkedIn.


    Full code:
    library(tidyr)
    library(dplyr)
    library(GeoLift)
    
    # Assuming long_data is your pre-formatted dataset with columns: date, location, Y
    # The info must be loaded into your R surroundings earlier than working this code.
    
    long_data <- learn.csv("/Customers/tomasjancovic/Downloads/long_data.csv")
    
    # Market choice (energy evaluation)
    GeoLift_PreTest <- long_data
    GeoLift_PreTest$date <- as.Date(GeoLift_PreTest$date)
    
    # utilizing knowledge as much as 2023-09-18 (day earlier than launch)
    GeoTestData_PreTest <- GeoDataRead(
      knowledge = GeoLift_PreTest[GeoLift_PreTest$date < '2023-09-18', ],
      date_id = "date",
      location_id = "location",
      Y_id = "Y",
      format = "yyyy-mm-dd",
      abstract = TRUE
    )
    
    # overview plot
    GeoPlot(GeoTestData_PreTest, Y_id = "Y", time_id = "time", location_id = "location")
    
    # energy evaluation & market choice
    MarketSelections <- GeoLiftMarketSelection(
      knowledge = GeoTestData_PreTest,
      treatment_periods = c(14, 21, 28, 35, 42),
      N = c(1, 2, 3, 4, 5),
      Y_id = "Y",
      location_id = "location",
      time_id = "time",
      effect_size = seq(0, 0.26, 0.02),
      cpic = 6,
      price range = 5000,
      alpha = 0.05,
      fixed_effects = TRUE,
      side_of_test = "one_sided"
    )
    
    print(MarketSelections)
    plot(MarketSelections, market_ID = 4, print_summary = TRUE)
    
    # ------------- simulation begins, you'd use your noticed remedy/management teams knowledge as an alternative
    
    # parameters
    treatment_cities <- c("Zabrze", "Szczecin", "Czestochowa")
    lift_magnitude <- 0.11
    treatment_start_date <- as.Date('2023-09-18')
    treatment_duration <- 21
    treatment_end_date <- treatment_start_date + (treatment_duration - 1)
    
    # extending the time series
    extend_time_series <- function(data, extend_days) {
      extended_data <- data.frame()
      
      for (city in unique(data$location)) {
        city_data <- data %>% filter(location == city) %>% arrange(date)
        
        baseline_value <- mean(tail(city_data$Y, 30))
        
        recent_data <- tail(city_data, 60) %>%
          mutate(dow = as.numeric(format(date, "%u")))
        
        dow_effects <- recent_data %>%
          group_by(dow) %>%
          summarise(dow_multiplier = mean(Y) / mean(recent_data$Y), .groups = 'drop')
        
        last_date <- max(city_data$date)
        extended_dates <- seq(from = last_date + 1, by = "day", length.out = extend_days)
        
        extended_values <- sapply(extended_dates, function(date) {
          dow <- as.numeric(format(date, "%u"))
          multiplier <- dow_effects$dow_multiplier[dow_effects$dow == dow]
          if (length(multiplier) == 0) multiplier <- 1
          
          value <- baseline_value * multiplier + rnorm(1, 0, sd(city_data$Y) * 0.1)
          max(0, round(value))
        })
        
        extended_data <- rbind(extended_data, data.frame(
          date = extended_dates,
          location = city,
          Y = extended_values
        ))
      }
      
      return(extended_data)
    }
    
    # extending to treatment_end_date
    original_end_date <- max(long_data$date)
    days_to_extend <- as.numeric(treatment_end_date - original_end_date)
    
    set.seed(123)
    extended_data <- extend_time_series(long_data, days_to_extend)
    
    # Combining original + extended
    full_data <- rbind(
      long_data %>% select(date, location, Y),
      extended_data
    ) %>% arrange(date, location)
    
    # applying the treatment effect
    simulated_data <- full_data %>%
      mutate(
        Y_original = Y,
        Y = if_else(
          location %in% treatment_cities &
            date >= treatment_start_date &
            date <= treatment_end_date,
          Y * (1 + lift_magnitude),
          Y
        )
      )
    
    # Verifying treatment (prints just the table)
    verification <- simulated_data %>%
      filter(location %in% treatment_cities,
             date >= treatment_start_date,
             date <= treatment_end_date) %>%
      group_by(location) %>%
      summarize(actual_lift = (mean(Y) / mean(Y_original)) - 1, .groups = 'drop')
    
    print(verification)
    
    # building the GeoLift input (simulated)
    GeoTestData_Full <- GeoDataRead(
      data = simulated_data %>% select(date, location, Y),
      date_id = "date",
      location_id = "location",
      Y_id = "Y",
      format = "yyyy-mm-dd",
      summary = TRUE
    )
    
    # Computing time indices
    date_sequence <- seq(from = min(full_data$date), to = max(full_data$date), by = "day")
    treatment_start_time <- which(date_sequence == treatment_start_date)
    treatment_end_time <- which(date_sequence == treatment_end_date)
    
    # Running GeoLift
    GeoLift_Results <- GeoLift(
      Y_id = "Y",
      data = GeoTestData_Full,
      locations = treatment_cities,
      treatment_start_time = treatment_start_time,
      treatment_end_time = treatment_end_time,
      model = "None",
      fixed_effects = TRUE
    )
    
    # ---------------- simulation ends!
    
    # plots
    summary(GeoLift_Results)
    plot(GeoLift_Results)
    plot(GeoLift_Results, type = "ATT")
    
    # placebos
    set.seed(42)
    
    # window length (days) of the true treatment
    window_len <- treatment_end_time - treatment_start_time + 1
    
    # the furthest you can shift back while keeping the full window inside the pre-period
    max_shift <- treatment_start_time - window_len
    n_placebos <- 5
    
    # shifts >= window_len keep each placebo window fully pre-treatment
    random_shifts <- sample(window_len:max_shift, size = min(n_placebos, max_shift - window_len + 1), replace = FALSE)
    
    placebo_random_shift <- vector("list", length(random_shifts))
    names(placebo_random_shift) <- paste0("Shift_", random_shifts)
    
    for (i in seq_along(random_shifts)) {
      s <- random_shifts[i]
      placebo_random_shift[[i]] <- GeoLift(
        Y_id = "Y",
        data = GeoTestData_Full,
        locations = treatment_cities,
        treatment_start_time = treatment_start_time - s,
        treatment_end_time   = treatment_end_time   - s,
        model = "None",
        fixed_effects = TRUE
      )
    }
    
    # --- Print summaries for each random-shift placebo ---
    for (i in seq_along(placebo_random_shift)) {
      s <- random_shifts[i]
      cat("\n=== Summary for Random Shift", s, "days ===\n")
      print(summary(placebo_random_shift[[i]]))
    }
    
    # Plot ATT for each random-shift placebo
    for (i in seq_along(placebo_random_shift)) {
      s <- random_shifts[i]
      placebo_end_date <- treatment_end_date - s
      cat("\n=== ATT Plot for Random Shift", s, "days ===\n")
      print(plot(placebo_random_shift[[i]], type = "ATT", treatment_end_date = placebo_end_date))
    }
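
A quick sanity check on the placebo design: a backward shift of at least `window_len` days is what keeps a placebo window entirely inside the pre-period (a smaller shift would overlap the simulated treatment window and contaminate the placebo). A minimal standalone illustration, using hypothetical time indices rather than the values from the run above:

```r
# hypothetical indices: the treatment occupies days 101..121 (21 days)
treatment_start_time <- 101
treatment_end_time   <- 121
window_len <- treatment_end_time - treatment_start_time + 1  # 21

# shifting back by s puts the placebo window at [start - s, end - s];
# s >= window_len guarantees the window ends before the real treatment starts
valid_shifts <- window_len:(treatment_start_time - 1)

stopifnot(all(treatment_end_time - valid_shifts < treatment_start_time))  # no overlap
stopifnot(all(treatment_start_time - valid_shifts >= 1))                  # stays in-sample
cat("valid shifts:", min(valid_shifts), "to", max(valid_shifts), "\n")
```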


