Using long format for bal.tab #86

Open

maellecoursonnais opened this issue Sep 30, 2024 · 1 comment

@maellecoursonnais

Hello, and thank you for the great work on this package. I would like to assess covariate balance for inverse probability weights constructed with the ipw package, in order to fit a marginal structural model. ipw accepts long data and returns a long data set too, with one weight for each observation-time pair.

Consider this example (taken from https://www.andrewheiss.com/blog/2020/12/03/ipw-tscs-msm/):

# Data
library(dplyr)
library(ipw)

happiness_data <- read.csv("https://www.andrewheiss.com/blog/2020/12/03/ipw-tscs-msm/happiness_data.csv")

happiness_binary <- happiness_data |>
  mutate(never_policy = all(policy == 0), .by = country) |>
  filter(!never_policy)

# IPW 
weights_binary_ipw <- ipwtm(
  exposure = policy,
  family = "binomial",
  link = "logit",
  # Time invariant stuff
  numerator = ~ lag_policy + country,
  # All confounders
  denominator = ~ log_gdp_cap + democracy + corruption + 
    lag_happiness_policy + lag_policy + country,
  id = country,
  timevar = year,
  type = "all",
  data = as.data.frame(happiness_binary)
)
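
The weights come back in long format too; as far as I can tell from the ipw documentation, ipwtm() stores the stabilized weights in the ipw.weights element of the fitted object, aligned row-for-row with the data passed in, so they can be attached back to the long data (please correct me if I have the element name wrong):

# one stabilized weight per observation-time row
happiness_binary$ipw <- weights_binary_ipw$ipw.weights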

The way I have been using IPW in WeightIt and cobalt so far has always been through wide data sets, with one weight for each observation, computed as a product of the observation-time weights.

I feel like using a long data set could help improve the weights, especially when many periods are observed. I've also seen elsewhere (mostly here) that one could use multilevel or GEE models to leverage long data sets.

My question is whether bal.tab() can accommodate this type of data set, i.e., a long data set where each row is an observation-time pair and the formula is repeated across all preceding time points. And if not, why was the choice made to only support weights aggregated at the individual level, irrespective of time?

# Something like this?
bal.tab(
  list(policy ~ log_gdp_cap + democracy + corruption + 
    lag_happiness_policy),
  timevar = "year",
  idvar = "country",
  data = happiness_binary
)

Maybe adding an interaction with the time variable could help:

bal.tab(
  list(policy ~ (log_gdp_cap + democracy + corruption + 
    lag_happiness_policy) * factor(year)),
  #timevar = "year",
  #idvar = "country",
  data = happiness_binary
)
@ngreifer
Owner

Hi Maël,

Thanks for reaching out. I'm glad my packages have been helpful.

Unfortunately I don't have time to write too long of an answer for you, but there is a long version of it. There are a few reasons why I made this choice.

cobalt is designed for the situation of estimating the effects of a fixed treatment history on a single outcome measured at a distant time point. This is by far the most common use of IPW for MSMs and the one most described in the literature. There are only a few papers describing the method for estimating treatment effects on outcomes at multiple time points. This is called a "repeated measures" MSM and is described in Blackwell & Glynn (2018) and Hernán et al. (2002). This approach requires fitting a model for each time point, but in doing so it makes extremely strong assumptions about the relationships between covariates and treatment. It also makes it hard to include lags (you have to include them manually as covariates and make strong assumptions about the lag length required; in your example, you only include a single lag), and it assumes the relationship between covariates and treatment is the same at each time point. A benefit of the approach is that you can share information across time points, which, as you mentioned, is valuable when you have many time points. However, that isn't the scenario cobalt was designed for.

Balance needs to be assessed at each time point on all previous time points; that means you would need the complete output bal.tab() normally produces for an MSM once for each additional period, which would dramatically expand the number of tables produced. You need to assess balance once for each set of weights; since there is a different set of weights for each time period, you would need to assess a lot of balance!

This procedure requires a weight for each individual at each time point, and it is equivalent to performing the "wide" analysis once for each time point. You can actually do this manually by creating a wide dataset at each time point and assessing balance as usual. To do this, for treatment time $t$, subset the data so that only measurements up to time $t$ are included, then reshape the data to wide. Supply the time $t$ weights along with the usual formulas: treatment at time $t$ with all previous covariates, treatment at time $t-1$ with all previous covariates, etc. If you do this for every treatment time period, you will get the balance tables for each set of weights at each relevant time period.
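
As a rough sketch of what this might look like with your happiness example (assuming the panel contains years 2015 and 2016 and using only a subset of your covariates; the exact reshaping and variable names will depend on your data):

library(dplyr)
library(tidyr)
library(cobalt)

t_assess <- 2016  # a hypothetical treatment time t

# Keep measurements up to time t, then reshape to wide:
# one row per country, one column per variable-year pair
wide_t <- happiness_binary |>
  filter(year <= t_assess) |>
  pivot_wider(
    id_cols = country,
    names_from = year,
    values_from = c(policy, log_gdp_cap, democracy, corruption)
  )

# The time-t weights from ipwtm(), one per unit
# (assumes the long data is sorted so country order matches wide_t)
w_t <- weights_binary_ipw$ipw.weights[happiness_binary$year == t_assess]

# Treatment at each time point up to t, each with all previous
# covariates, assessed together using the time-t weights
bal.tab(
  list(
    policy_2015 ~ log_gdp_cap_2015 + democracy_2015 + corruption_2015,
    policy_2016 ~ log_gdp_cap_2016 + democracy_2016 + corruption_2016 +
      policy_2015 + log_gdp_cap_2015 + democracy_2015 + corruption_2015
  ),
  data = wide_t,
  weights = w_t
)

Repeating that for every treatment time, each with its own set of weights, gives you all the relevant balance tables, and also shows why the amount of output grows so quickly.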

I don't plan to put this in cobalt because, in my view, it is a somewhat obscure and complicated method, even if it does have utility. It is possible to make this work manually as I described above, but it would be a lot of work to support in cobalt. It would still require covariates to be specified manually at each time point, since different variables may be measured (or considered confounders) at different time points; assuming the same variables are measured at all time points and have the same status as confounders goes against the ethos of cobalt, which is to get users to think clearly about balance at each time point without strong assumptions across time points. I don't like the ipw approach and I don't think most users understand what it is doing, which is why I don't include its methods in WeightIt, either.

I am open to changing this in the future, especially if I start to see demand and use cases for these methods. I have a bias toward public health and medical applications, where TSCS data is used less often (and typically goes by a different name). It may be that these methods have more potential to catch on in political science or economics. Right now, most citations of Blackwell & Glynn (2018) that I see are methodological papers proposing different methods.

Noah
