Artificial Intelligence: A Looming Threat or Glimpse of Hope?
On one hand, it offers remarkable opportunities — automating tedious tasks, accelerating medical research, and transforming industries. It presents a glimpse of hope, where human limitations can be augmented, and creativity enhanced.
But on the flip side, the looming threat of AI can't be ignored. Ethical concerns, job displacement, and the potential misuse of autonomous systems, such as autonomous weapons and robots, raise critical questions about control, trust, and responsibility.
Like any other powerful tool, the impact of AI depends on how we use it. Will it shape a better future or introduce challenges we’ve yet to fully grasp? The choice, it seems, lies in our hands.
These risks grow especially acute when developers themselves do not understand why a model works.
The argument draws on physics, where natural processes are described by simple and elegant functions; other processes, especially those involving human behaviour, must instead be represented by complex models with numerous limitations. It avers that, despite the complexity of the models created, large amounts of data, such as web data, can be deployed to overcome those limitations. In natural language processing and machine translation, simple models with large data outperform complex models with less data.
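A toy illustration of this claim (my own sketch, not drawn from the source): a simple linear model fitted to many noisy samples of an underlying linear trend recovers that trend well, while a high-degree polynomial interpolating only a handful of samples chases the noise and generalises poorly.

```python
# Toy demonstration: simple model + lots of data vs complex model + little data.
# The function, noise level, and sample sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    # The "natural process" we are trying to learn.
    return 2.0 * x + 1.0

def noisy_sample(n):
    x = np.linspace(0.0, 1.0, n)
    return x, true_f(x) + rng.normal(0.0, 0.3, size=n)

# Complex model, little data: a degree-9 polynomial through 10 noisy points.
x_small, y_small = noisy_sample(10)
complex_coefs = np.polyfit(x_small, y_small, deg=9)

# Simple model, lots of data: a straight line through 1000 noisy points.
x_big, y_big = noisy_sample(1000)
simple_coefs = np.polyfit(x_big, y_big, deg=1)

# Compare both against the noiseless ground truth on a dense grid.
grid = np.linspace(0.0, 1.0, 500)
mse_complex = np.mean((np.polyval(complex_coefs, grid) - true_f(grid)) ** 2)
mse_simple = np.mean((np.polyval(simple_coefs, grid) - true_f(grid)) ** 2)
print(f"simple model + big data MSE:    {mse_simple:.4f}")
print(f"complex model + small data MSE: {mse_complex:.4f}")
```

The complex model has far more capacity, but with only ten points it memorises the noise; the simple model, fed a hundred times more data, averages the noise away.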
The surveycto package uses an Application Programming Interface (API) to achieve data interoperability between Open Data Kit (ODK)-based data management applications such as KoboToolbox, SurveyCTO, and REDCap. It is designed to make the development of live or near-real-time dashboards, such as Shiny dashboards, efficient. This means the data utilities in the package are lightweight and efficient.
The package can also be used to import data into REDCap from R or Shiny, even with multiple events. It has the advantage that it pulls only the columns you indicate, unlike other packages that download all the data and filter locally.
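I have not inspected surveycto's internals, but the server-side column filtering it describes maps onto the REDCap API's `fields` parameter, which asks the server to export only the named columns rather than everything. The sketch below (in Python, for illustration) assembles such a request body; the token, field names, and event name are placeholders, not real values.

```python
# Hypothetical sketch of a REDCap "Export Records" request with server-side
# column filtering. Token, fields, and event below are placeholder values.
from urllib.parse import urlencode

def build_export_payload(token, fields, events=None, fmt="json"):
    """Assemble a REDCap Export Records request body as a flat dict."""
    payload = {"token": token, "content": "record", "format": fmt}
    # REDCap expects array-style keys: fields[0], fields[1], ...
    for i, field in enumerate(fields):
        payload[f"fields[{i}]"] = field
    # Longitudinal projects can also filter by event, the same way.
    for i, event in enumerate(events or []):
        payload[f"events[{i}]"] = event
    return payload

payload = build_export_payload(
    token="ABC123",                      # placeholder API token
    fields=["record_id", "age", "bmi"],  # only these columns are returned
    events=["baseline_arm_1"],           # placeholder event name
)
body = urlencode(payload)  # POST this body to the project's /api/ endpoint
print(body)
```

Because the filtering happens in the request itself, the server never sends the unwanted columns, which is what keeps such a client lightweight compared with downloading the full dataset and filtering locally.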
In recent years we have seen an upsurge of natural language processing models, such as ChatGPT, Liner, and Claude, that are trained on vast amounts of web data. This carries hidden implications: high carbon footprints, infringement of data privacy, and biased models (one model, for example, was shown to be biased against Black persons). These AI models, now present in every aspect of our lives, from our phones to our cars and fridges, make an unprecedented contribution to climate change because they require a great deal of energy to train and run; this calls for the parallel development of systems to quantify their contribution to the changing climate. The developers of AI models ought to understand how their models work, and should also help build the systems and regulatory frameworks needed to mitigate their effects on data privacy, racial discrimination, and climate change.
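Quantifying that climate contribution can start with simple arithmetic: hardware energy draw, scaled by data-centre overhead and the carbon intensity of the local grid. Every number in the sketch below is an illustrative assumption, not a measurement of any real model.

```python
# Back-of-envelope estimate of training emissions.
# All inputs are illustrative assumptions, not measurements of a real model.
def training_co2_kg(num_gpus, gpu_power_kw, hours, pue, grid_kg_per_kwh):
    """Energy (kWh) scaled by data-centre overhead (PUE) and grid intensity."""
    energy_kwh = num_gpus * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_per_kwh

co2 = training_co2_kg(
    num_gpus=512,          # assumed accelerator count
    gpu_power_kw=0.4,      # assumed 400 W average draw per accelerator
    hours=24 * 14,         # assumed two-week training run
    pue=1.2,               # assumed power usage effectiveness overhead
    grid_kg_per_kwh=0.4,   # assumed grid carbon intensity (kg CO2 per kWh)
)
print(f"~{co2 / 1000:.0f} tonnes CO2")  # prints "~33 tonnes CO2"
```

Real accounting systems refine each factor (measured power draw, hourly grid intensity, embodied hardware emissions), but even this crude product makes the scale of a training run concrete.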