AI technologies have the potential to cause harm in society, and several approaches have been proposed to limit their use. The EU's AI Act, for example, restricts systems according to their level of risk. Another solution, however, could be adapting software licensing models to AI technologies. Open Responsible AI Licenses (OpenRAILs) are similar to open-source software licenses, allowing developers to release their AI systems publicly under certain conditions. These conditions include not breaking the law, not impersonating people without consent, and not discriminating against people. OpenRAILs can also be tailored to include further conditions specific to a particular technology.

The benefit of this model is that it promotes open innovation while reducing the risk of irresponsible use. Although many big firms currently operate closed AI systems, some companies have taken an open-source approach. Open but responsible sharing of AI can drive progress while still accounting for the risks associated with AI, such as discrimination and job replacement.

OpenRAIL licenses may not be a cure-all, but they are a promising piece of the puzzle. While enforcing licenses may be challenging, they have practical benefits and are well understood by the tech community. Overall, the idea of freely sharing AI technology while also upholding ethical and legal limits is gaining traction.