Ultimately, we will see various "flavors" of the same large models, each reflecting distinct "worldviews"—much like Google Maps displays different country names and borders based on a user’s geolocation. This approach will serve as a legally compliant solution to regional regulatory requirements. Currently, such adaptations are implemented crudely—often through rigid guardrails that suppress sensitive topics or force outputs toward regionally "approved" responses. In the future, however, MLOps systems could dynamically select the appropriate model variant for each user, mirroring Google Maps’ long-standing practice of geo-localized content delivery.
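A minimal sketch of what such variant routing could look like in a serving layer, assuming a registry that maps regions to model variants; all model names and region keys here are hypothetical illustrations, not real identifiers:

```python
# Hypothetical registry of regional model variants.
# Keys and model names are illustrative only.
MODEL_VARIANTS = {
    "EU": "assistant-7b-eu",
    "US": "assistant-7b-us",
    "default": "assistant-7b-base",
}

def select_variant(user_region: str) -> str:
    """Pick the model variant matching the user's geolocated region,
    falling back to the base model when no regional variant exists."""
    return MODEL_VARIANTS.get(user_region, MODEL_VARIANTS["default"])

if __name__ == "__main__":
    print(select_variant("EU"))  # assistant-7b-eu
    print(select_variant("BR"))  # assistant-7b-base (no regional variant)
```

The point of keeping the routing at the MLOps layer, rather than inside the model, is that the variant choice stays a deployment decision: swapping or adding a regional variant requires no retraining, only a registry update.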
But this is not only about factual content. Suppose a model is trained to generate predictions that are more "rational": that same training makes it a suboptimal fantasy LARP generator, even with heavy prompting.
And if an MLOps team is locked into its model choice and must wrestle the model into behavior it resists, that is a bad situation.