Whether or not these apocalyptic predictions come true, it is in no one's best interest to let tech executives alone control the future of AGI development. If achieved, smarter-than-human intelligence will bring severe consequences for billions of people who had no say in its creation. The world, not just tech companies, should have a say in how these technological endeavors proceed.
Those sounding the alarm over AGI are important voices in this debate, but their doom-and-gloom predictions should not overwhelm public discourse. AGI has the potential to both harm and help the world, which is why these technologies should be developed quickly by well-intentioned developers before bad actors have a chance to do so. The sooner these technologies are developed, the sooner they can be studied and refined to achieve better outcomes.
The problem with listening to high-level AI executives is that they have a history of flip-flopping on this issue. There are many potential reasons for this, including hyping their products with hyperbolic language or staging PR stunts aimed at staving off government regulation. The most important thing the public can do is listen to informed, independent voices and judge them on the merit of their claims.