Can you buy a Juul e-cigarette right now? It depends on what day of the week it is.
Earlier this week, the FDA denied marketing authorization to Juul, which first began selling its e-cigarettes in 2015 (although it has operated under various company names since 2007). The FDA said the reason for the refusal was that Juul “failed to provide sufficient toxicological data to demonstrate that the products were safe,” Ars Technica reports, and as such, the agency could not complete its toxicological assessment. The FDA specifically pointed to “potentially harmful chemicals leached from the company’s proprietary e-liquid pods” as a concern.
Juul, however, pushed back, and secured a temporary victory. In a lawsuit filed in the U.S. Court of Appeals for the DC Circuit, Juul called the FDA ban “arbitrary and capricious” and suggested that the agency had succumbed to pressure from Congress. The federal appeals court then decided to block the FDA order until it can hear more arguments on the issue.
The FDA’s rejection and the subsequent stay are just the latest developments in a years-long battle between regulators and Juul. Back in 2018, the FDA launched an investigation into the sale of Juul products to underage consumers, requested marketing materials from the company, and required it to present a plan to prevent sales to teens. The following year, the FDA sent Juul a warning letter over its claims that its vapor was less harmful than traditional cigarette smoke. At one point, fruity-flavored e-cigarette pods were banned in the United States.
The latest ban, if it ever takes effect, will apply to the Juul device itself (a slim vaping pen) and four specific liquid cartridges, all of them tobacco- or menthol-flavored, the flavors that mimic the taste of traditional cigarettes. The FDA rejection came just a few days after the agency said it would also limit the amount of nicotine allowed in real cigarettes sold in the United States.
Here is some more news.
Instagram’s age breakdown
On Thursday, Instagram announced that it will introduce new tools to verify the age of users on the platform. When users change their date of birth in a way that moves them over or under 18, Instagram will now require them to confirm the change. That means either uploading an ID, getting mutual friends to vouch for you, or uploading a video selfie. The latter option is offered through a partnership with the digital identity company Yoti, which scans the video selfie with its facial analysis technology to estimate the person’s age.
Instagram says its goal is to tailor the app differently for teens and adults and to ensure that those experiences stay distinct. Despite these stated intentions, the move still makes privacy and AI experts nervous. After all, Instagram’s parent company, Meta, has a long history of data sharing and privacy lapses.
So far, Instagram is only testing the age verification requirement with users in the United States.
Microsoft drops controversial emotion-recognition AI
On Tuesday, the New York Times reported that Microsoft will remove features from its Azure cloud computing platform that use face recognition software to infer the physical characteristics and even emotions of people in images. It has been a controversial capability, criticized for its potential to be both biased and inaccurate.
Microsoft is no stranger to questionable ethical situations. In 2018, it came under fire for using the Azure platform to work with US Immigration and Customs Enforcement (ICE). But now Microsoft seems eager to get out in front of its critics. The move to rein in Azure came as part of Microsoft’s recently released Responsible AI Standard, a document that it says will guide how the company uses AI in its products.