Google’s recent record €4.3 billion (£3.9 billion) fine is the latest action in a growing movement to tackle the dominance of big tech firms. Until now, most attention has been on the impact of this dominance on privacy, for example the recent Cambridge Analytica scandal that saw Facebook criticised for failing to tackle the unauthorised use of user data by a political campaigning firm.

As a result, some analysts and commentators have called for users to be given more control over their information. But this is a serious mistake.

Google and Facebook make money from their monopoly of our attention, not their access to our personal data. Even if, starting tomorrow, they had no access to our personal data for the purposes of targeting ads, they would still be dominant and hugely profitable because they can advertise to so many people, just as TV networks once were.

Limiting Facebook and Google’s access to users’ personal data would make no difference to their monopoly power, nor would it reduce the harmful effects of that power on innovation and freedom. In fact, any further controls on privacy are likely to play into the hands of the dominant firms: they would reinforce their monopoly position by increasing the cost of complying with privacy regulation, making it harder for potential competitors to enter and disrupt the market.

The true source of monopoly power

The tech giants have monopolies because of the convergence of three different phenomena. First, Google and Facebook operate as “platforms”, places where different participants connect. This is an ancient phenomenon. The market in the town square is a platform, where sellers and buyers congregate. Facebook is a platform, originally designed to connect one user with another to exchange content, though it quickly began attracting advertisers who want to connect with the users too. Google is another platform, connecting users with content providers and advertisers.

Research shows that all platform businesses have a strong tendency to centralise a market, because the more customers they have, the more suppliers are attracted, and vice versa. As the first platform businesses in a sector grow, it becomes harder for new rivals to compete on equal terms. The initial advantages lead to entrenched monopolies and the market converges on a single or small number of platforms.

Owners of marketplaces and stock exchanges make good livings, but they are limited in their scope owing to the physical nature of their platforms. The owners of the vast online platforms, however, are in an entirely different league, because of a second phenomenon – one of the fundamental characteristics of the digital age: infinite, costless copying.

Once someone has a single copy of a piece of digital information they can make as many copies as they wish at the touch of a button, at practically no cost. Different versions of eBay, for instance, can be created for every country in the world at practically no extra cost, giving it a reach that goes far beyond a physical auction house.

Expansion is virtually free, with infinite economies of scale. So Google, Facebook and other dominant tech firms have been able to scale up their services at an unprecedented rate, and with unprecedented profitability.

But costless copying would not be so profitable if it were truly unlimited. The final component of these extraordinary businesses is their exclusive right to make the copies. Thanks to intellectual property in the form of patents and copyrights, they control the digital information at the heart of their platforms, such as the algorithms that run Google’s search engine or the software that powers Facebook. Their products and platforms, and the software and algorithms that run them are all protected by laws we have made.

Open up

This contrasts with the most famous platform of the digital age: the internet itself. The internet is a platform just like Google and Facebook except that it is open. It is open in a technical sense because its protocols and software are free for anyone to use, but it is also open socially because anyone can connect to it whatever their background or circumstances.

The internet is living proof that we can have the benefits of a single platform without it becoming a monopoly, and it stands as a testament to the creativity and innovation that this fosters.


It also holds the solution to the present monopoly problem: openness. The fact that anyone can use, implement and build on the internet’s platform is what guaranteed the free and competitive opportunities it created.

So how would this work with platforms like Google or Facebook? At the moment Facebook and other social networks give us platforms on which to communicate and share content with others. Facebook determines who can use its platform and how they can do so. Anyone wanting to build or adapt the platform, for example to block ads or to create a new social network, must do so with Facebook’s permission.

Such permission is rarely granted. If you’re unhappy with the platform you have little option but to accept it reluctantly or lose access. If it were open, this need not be the case. Just as with the internet, you could have one open platform that anyone could connect to and build on. Dislike the ads? You could create a version that does away with them. Only want to message friends and see their photos? That could be possible, too. Openness means you are not restricted by the whims and desires of just one company.

The solution to these platforms’ monopolies is to make the software, algorithms and protocols on which they run open and free for anyone to use, build on and share. In addition, all users, competitors and innovators should have universal, equitable access to the platforms. Doing this is the only way to give everyone a stake in our digital future.

 

Rufus Pollock, Associate Fellow, University of Cambridge

This article was originally published on The Conversation. Read the original article.

It would take Facebook just 18 minutes to pay off the £500,000 (almost R9 million) fine proposed as a punishment by the UK's data watchdog for the Cambridge Analytica scandal.
 
The Information Commissioner's Office (ICO) has suggested fining Facebook the maximum penalty for the way it mishandled user data and failing to safeguard people's information.
 
But the company makes so much money per minute from advertising that the penalty is barely a drop in the ocean.
 
Facebook made $4.8 billion (R64 billion) in net profit in the first three months of 2018, according to its own figures.
 
According to Business Insider's calculations, that means Facebook makes around $37,037 (just shy of R500,000) a minute, which means it would take just less than 18 minutes to pay the fine.
 
Here is the calculation:
Q1 2018 (January, February and March) had 90 days.
Each day has 1,440 minutes (24 x 60).
Therefore in Q1 2018 there were 1,440 x 90 minutes, which equals 129,600.
$4.8 billion divided by 129,600 equals $37,037.04 profit per minute.
As of this morning, £500,000 is worth $663,575.
$663,575 divided by $37,037.04 per minute equals 17.91 minutes, or 17 minutes, 55 seconds.
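
For anyone who wants to verify the arithmetic, here is a minimal Python sketch of the same calculation. All figures are taken from the article itself; the dollar value of the fine is the snapshot exchange rate quoted above, not a live rate:

```python
# Sanity check of the "18 minutes" figure, using the numbers reported above.
Q1_PROFIT_USD = 4.8e9                  # Facebook's reported Q1 2018 net profit
DAYS_IN_Q1 = 31 + 28 + 31              # January + February + March = 90 days
MINUTES_IN_Q1 = DAYS_IN_Q1 * 24 * 60   # 129,600 minutes

FINE_USD = 663_575                     # GBP 500,000 at the article's snapshot rate

profit_per_minute = Q1_PROFIT_USD / MINUTES_IN_Q1     # ~$37,037 per minute
minutes_to_pay = FINE_USD / profit_per_minute

print(f"Profit per minute: ${profit_per_minute:,.2f}")   # $37,037.04
print(f"Minutes to pay the fine: {minutes_to_pay:.2f}")  # ~17.92, i.e. 17 min 55 s
```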
 
Historically, the ICO hasn't had much power to issue a robust fine to companies which mishandle people's information. The £500,000 figure was the maximum penalty under the UK's data protection laws.
 
But in May, Europe brought in much stricter privacy laws: the GDPR. Importantly, these laws give regulators like the ICO much sharper teeth when it comes to issuing fines, with a maximum fine of €20 million (R316 million) or 4% of a company's global turnover, whichever is greater.
 
Facebook made around $40 billion in revenue in 2017, meaning its maximum fine under the new laws would be $1.6 billion.
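
As a rough sketch of how that cap works (the fine is the greater of the two prongs; the dollar conversion of €20 million below is an illustrative assumption, not a figure from the article):

```python
# GDPR cap: the greater of EUR 20 million or 4% of global annual turnover.
EUR_20M_AS_USD = 23_000_000     # rough conversion, assumed for illustration only
REVENUE_2017_USD = 40e9         # Facebook's approximate 2017 revenue

max_fine = max(EUR_20M_AS_USD, 0.04 * REVENUE_2017_USD)
print(f"Maximum fine: ${max_fine / 1e9:.1f} billion")  # $1.6 billion
```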
 
Facebook still has a chance to respond to the ICO before the watchdog makes its final decision. The company is expected to make its case later this month.
 
Source: Business Insider
Facebook CEO Mark Zuckerberg is now the third richest person in the world, surpassing famed investor Warren Buffett.
 
Zuckerberg's net worth increased to $81.6 billion on Friday, Bloomberg reported, after Facebook shares went up 2.4%. That is the equivalent of around R1.11 trillion.
 
The social network is now valued at about $571 billion, or R7.8 trillion, going into the weekend.
 
Warren Buffett, chairman and CEO of Berkshire Hathaway, meanwhile, has a net worth of about $81.1 billion. 
 
Amazon CEO Jeff Bezos and Microsoft cofounder Bill Gates are still ahead of Zuckerberg, with net worths of $139.6 billion and $92.3 billion, respectively. This is the first time that the top three wealthiest people in the world all made their fortunes in technology. 
 
Despite Facebook's string of scandals in the wake of the 2016 US presidential election — including accusations that the social network helped undermine democracy — Zuckerberg's fortune doesn't appear to be suffering for any of it.
 
Credit: Business Insider

Facebook announced it will notify 800,000 people about a bug that unblocked accounts those users had previously blocked.

The bug was active between May 29 and June 5.

In a blog post, Facebook's chief privacy officer, Erin Egan, said the bug did not give unblocked users full access: they still could not see posts the person who blocked them had shared only with friends, but they could have seen content that person shared more widely.

"We know that the ability to block someone is important — and we'd like to apologize and explain what happened," Egan said in the post.

Typically, when someone is blocked on Facebook, that person cannot view posts on your profile, chat with you on Messenger or add you as a friend; the blocked user is also automatically unfriended. A person may want to block another user for various reasons, for example after a romantic breakup or because of harassment.

Facebook said 83% of users impacted by the bug had one person temporarily unblocked. A user who was unblocked during that time may have been able to talk to the person who blocked them on Messenger.

The company says the issue has been resolved and all previous settings have been reinstated.

Credit: CNN 

British lawmakers want their European counterparts to quiz Facebook (FB.O) CEO Mark Zuckerberg about a scandal over improper use of millions of Facebook users’ data, as he will not give evidence in London himself.

Zuckerberg will be in Europe to defend the company after alleged misuse of its data by Cambridge Analytica, a British political consultancy that worked on U.S. President Donald Trump’s election campaign.

But while he will answer questions from lawmakers in Brussels on Tuesday, and is meeting French President Emmanuel Macron on Wednesday, he has so far declined to answer questions from British lawmakers, either in person or via video link.

Damian Collins, chair of the British parliament’s media committee, said on Tuesday that he believed Zuckerberg should still appear before British lawmakers.

“But if Mark Zuckerberg chooses not to address our questions directly, we are asking colleagues at the European Parliament to help us get answers - particularly on who knew what at the company, and when, about the data breach and the non-transparent use of political adverts which continue to undermine our democracy,” he said in a statement.

Last month, Facebook Chief Technical Officer Mike Schroepfer appeared before Collins’s Digital, Culture, Media and Sport Committee, which is investigating fake news.

But the lawmakers have said his testimony and subsequent written answers from the firm to follow-up questions have been inadequate.

Collins outlined deficiencies in Facebook’s answers so far in a letter to Rebecca Stimson, head of public policy at Facebook UK, which has been shared with the EU lawmakers who will quiz Zuckerberg. Collins requested a response from Facebook to his questions by June 4.

 

- REUTERS

The bitter truth buried in recent headlines about how the political consulting company Cambridge Analytica used social media and messaging (primarily Facebook and WhatsApp) to try to sway voters in presidential elections in the US and Kenya is simply this: Facebook is the reason why fake news is here to stay.

Various news outlets, and former Cambridge Analytica executives themselves, confirmed that the company used campaign speeches, surveys, and, of course, social media and social messaging to influence Kenyans in both 2013 and 2017.

The media reports also revealed that, working on behalf of US President Donald Trump’s campaign, Cambridge Analytica had got hold of data from 50 million Facebook users, which they sliced and diced to come up with “psychometric” profiles of American voters.

The political data company’s tactics have drawn scrutiny in the past, so the surprise of these revelations came more from the “how” than the “what.” The real stunner was learning how complicit Facebook and WhatsApp, which is owned by the social media behemoth, had been in aiding Cambridge Analytica in its work.

The Cambridge Analytica scandal appears to be symptomatic of much deeper challenges that Facebook must confront if it’s to become a force for good in the global fight against false narratives.

These hard truths include the fact that Facebook’s business model is built upon an inherent conflict of interest. The others are the company’s refusal to take responsibility for the power it wields and its inability to come up with a coherent strategy to tackle fake news.

Facebook’s biggest challenges

Facebook’s first issue is its business model. It has mushroomed into a multibillion-dollar corporation because its revenue comes from gathering and using the data shared by its audience of 2.2 billion monthly users.

Data shapes the ads that dominate our news feeds. Facebook retrieves information from what we like, comment on and share; the posts we hide and delete; the videos we watch; the ads we click on; the quizzes we take. It was, in fact, data sifted from one of these quizzes that Cambridge Analytica bought in 2014. Facebook executives knew of this massive data breach back then but chose to handle the mess internally. They shared nothing with the public.

This makes sense if the data from that public is what fuels your company’s revenues. It doesn’t make sense, however, if your mission is to make the world a more open and connected place, one built on transparency and trust. A corporation that says it protects privacy while also making billions of dollars from data sets itself up for scandal.

This brings us to Facebook’s second challenge: its myopic vision of its own power. As repeated scandals and controversies have washed over the social network in the last couple of years, CEO Mark Zuckerberg’s response generally has been one of studied naivete. He seems to be in denial about his corporation’s singular influence and position.

Case in point: When it became clear in 2016 that fake news had affected American elections, Zuckerberg first dismissed that reality as “a pretty crazy idea.” In this latest scandal, he simply said nothing for days.

Throughout the world, news publishers report that 50% to 80% of their digital traffic comes from Facebook. No wonder Google and Facebook control 53% of the world’s digital and mobile advertising revenue. Yet Zuckerberg still struggles to accept that Facebook’s vast audience and its role as a purveyor of news and information combine to give it extraordinary power over what people consume, and by extension, how they behave.

All of this leads us to Facebook’s other challenge: its inability to articulate, and act on, a cogent strategy to attack fake news.

The fake news phenomenon

When Zuckerberg finally surfaced last month, he said out loud what a lot of people were already thinking: there may be other Cambridge Analyticas out there.

This is very bad news for anyone worried about truth and democracy. For in America, fake news helped to propel into power a man whose presidential campaign may have been a branding exercise gone awry. But in countries like Kenya, fake news can kill.

Zuckerberg and his Facebook colleagues must face this truth. Fake news may not create tribal or regional mistrust, but inflammatory videos and posts shared on social media certainly feed those tensions.

And false narratives spread deep and wide: In 2016, BuzzFeed News found that in some cases, a fake news story was liked, commented on and shared almost 500,000 times. A legitimate political news story might attract 75,000 likes, comments and shares.

After Zuckerberg was flogged for his initial statements about fake news, Facebook reached out to the Poynter Institute’s International Fact-Checking Network in an effort to attack this scourge. Then in January 2018, the social network said that it was going to be more discriminating about how much news it would allow to find its way into the feeds of its users. In other words, more videos on cats and cooking, less news of any kind.

The policy sowed a lot of confusion and showed that Facebook is still groping for how to respond to fake news. It was also evidence that the social network does not understand that fake news endangers its own existence as well as the safety and security of citizens worldwide – especially in young democracies such as Kenya.

Angry lawmakers in the US and Europe, along with a burgeoning rebellion among its vast audience, may finally grab Facebook’s attention. But we will only hear platitudes and see superficial change unless Facebook faces hard truths about its reliance on data, accepts its preeminent place in today’s media ecosystem and embraces its role in fighting fake news.

Until then, we should brace ourselves for more Cambridge Analyticas.

 

Stephen Buckley, Lecturer, The Aga Khan University Graduate School of Media and Communications (GSMC)

This article was originally published on The Conversation. Read the original article.

I began my research career in the last century with an analysis of how news organisations were adapting to this strange new thing called “the Internet”. Five years later I signed up for Twitter and, a year after that, for Facebook.

Now, as it celebrates its 14th birthday, Facebook is becoming ubiquitous, and its usage and impact are central to my (and many others’) research.

In 2017 the social network had 2 billion members, by its own count. Facebook’s relationship with news content is an important part of this ubiquity. Since 2008 the company has courted news organisations with features like “Connect”, “Share” and “Instant Articles”. As of 2017, 48% of Americans rely primarily on Facebook for news and current affairs information.

Social networks present news content in a way that’s integrated into the flow of personal and other communication. Media scholar Alfred Hermida calls this “ambient news”. It’s a trend that has been considered promising for the development of civil society. Social media – like the Internet before it – has been hailed as the new “public sphere”: a place for civic discourse and political engagement among the citizenry.

But, unlike the Internet, Facebook is not a public space in which all content is equal. It is a private company. It controls what content you see, according to algorithms and commercial interests. The new public sphere is, in fact, privately owned, and this has far-reaching implications for civic society worldwide.

When a single company is acting as the broker for news and current affairs content for a majority of the population, the possibility for abuse is rife. Facebook is not seen as a “news organisation”, so it falls outside of whatever regulations countries apply to “the news”. And its content is provided by myriad third parties, often with little oversight and tracking by countries’ authorities. So civic society’s ability to address concerns about Facebook’s content becomes even more constrained.

Getting to know all about you

Facebook’s primary goal is to sell advertising. It does so by knowing as much as possible about its users, then selling that information to advertisers. The provision of content to entice consumers to look at advertising is not new: it’s the entire basis of the commercial media.

But where newspapers can only target broad demographic groups based on language, location and, to an extent, education level and income, Facebook can narrow its target market down to individual level. How? Based on demographics – and everything your “likes”, posts and comments have told it.

This ability to fine-tune content to subsets of the audience is not limited to advertising. Everything on your Facebook feed is curated and presented to you by an algorithm seeking to maximise your engagement by only showing you things that it thinks you will like and respond to. The more you engage and respond, the better the algorithm gets at predicting what you will like.

When it comes to news content and discussion of the news, this means you will increasingly only see material that’s in line with your stated interests. More and more, too, news items, advertisements and posts by friends are blurred in the interface. This all merges into a single stream of information.

And because of the way your network is structured, the nature of that information becomes ever more narrow. It is inherent in the ideals of democracy that people be exposed to a plurality of ideas; that the public sphere should be open to all. The loss of this plurality creates a society made up of extremes, with little hope for consensus or bridging of ideas.

An echo chamber

Most people’s “friends” on Facebook tend to be people with whom they have some real-life connection – actual friends, classmates, neighbours and family members. Functionally, this means that most of your network will consist largely of people who share your broad demographic profile: education level, income, location, ethnic and cultural background and age.

The algorithm knows who in this network you are most likely to engage with, which further narrows the field to people whose worldview aligns with your own. You may be Facebook friends with your Uncle Fred, whose political outbursts threaten the tranquillity of every family get-together. But if you ignore his conspiracy-themed posts and don’t engage, they will start to disappear from your feed.

Over time this means that your feed gets narrower and narrower. It shows less and less content that you might disagree with or find distasteful.

These two responses, engaging and ignoring, are both driven by the invisible hand of the algorithm, and together they have created an echo chamber. This isn’t dissimilar to what news organisations have been trying to do for some time: gatekeeping is the expression of journalists’ ideas of what the audience wants to read.

Traditional journalists had to rely on their instinct for what people would be interested in. Technology now makes it possible to know exactly what people read, responded to, or shared.

For Facebook, this process is now run by a computer: an algorithm that reacts instantly to provide the content it thinks you want. But this fine-tuned and carefully managed algorithm is open to manipulation, especially by political and social interests.
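
To make that feedback loop concrete, here is a toy simulation in Python. It is entirely hypothetical (the author and other researchers note that Facebook's actual ranking code is secret), but it shows how never engaging with one author, like Uncle Fred above, steadily removes him from the feed:

```python
# Toy model of an engagement-driven feed (purely illustrative: this is
# not Facebook's algorithm, just the feedback loop described above).
# Each author starts with equal weight; engaging with a post raises
# the author's weight, while ignoring it lowers the weight.
weights = {"close_friend": 1.0, "classmate": 1.0, "uncle_fred": 1.0}

def feed_share(author: str) -> float:
    """Fraction of the feed the ranking gives this author."""
    return weights[author] / sum(weights.values())

for day in range(30):
    for author in weights:
        engaged = author != "uncle_fred"   # the user never engages with Fred
        weights[author] *= 1.05 if engaged else 0.85

for author in weights:
    print(f"{author}: {feed_share(author):.1%} of the feed")
# After a month, uncle_fred's share has dwindled to a fraction of a percent:
# his posts have effectively disappeared from the feed.
```

Run daily over a network of billions of users, this same compounding dynamic produces the ever-narrower feeds described above.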

Extreme views confirmed

In the last few years Facebook users have unwittingly become part of a massive social experiment – one which may have contributed to the surprising election of Donald Trump as president of the US and the UK’s equally surprising vote to leave the European Union. We can’t be sure of this, since Facebook’s content algorithm is secret and most of the content is shown only to specific users.

It’s physically impossible for a researcher to see all of the content distributed on Facebook; the company explicitly prevents that kind of access. Researchers and journalists need to construct model accounts (fake ones, violating Facebook’s terms of use) and attempt to trick the algorithm into showing what the social network’s most extreme political users see.

What they’ve found is that the more extreme the views a user has already agreed with, the more extreme the content they were shown. People who liked or expressed support for leaving the EU were shown content that reflected this desire, but in a more extreme form.

If they liked that, they’d be shown even more such content, and so on, the group getting smaller and more insular at each step. This is similar to how extremist groups have long identified and courted potential members, enticing them with ever more radical ideas and watching their reaction. But that sort of personal interaction was a slow process; Facebook’s algorithm works at lightning speed, and the pace of radicalisation is exponentially increased.

 

Megan Knight, Associate Dean, University of Hertfordshire

This article was originally published on The Conversation. Read the original article.

Facebook has a world of problems. Beyond charges of Russian manipulation and promoting fake news, the company’s signature social media platform is under fire for being addictive, causing anxiety and depression, and even instigating human rights abuses.

Company founder and CEO Mark Zuckerberg says he wants to win back users’ trust. But his company’s efforts so far have ignored the root causes of the problems they intend to fix, and even risk making matters worse. Specifically, they ignore the fact that personal interaction isn’t always meaningful or benign, leave out the needs of users in the developing world, and seem to compete with the company’s own business model.

Based on The Digital Planet, a multi-year global study of how digital technologies spread and how much people trust them, which I lead at Tufts University’s Fletcher School, I have some ideas about how to fix Facebook’s efforts to fix itself.

Face-saving changes?

Like many technology companies, Facebook must balance the convergence of digital dependence, digital dominance and digital distrust. Over 2 billion people worldwide check Facebook each month; 45 percent of American adults get their news from Facebook. Together with Google, it captures half of all digital advertising revenues worldwide. Yet more people say they greatly distrust Facebook than any other member of the big five – Amazon, Apple, Google or Microsoft.

In March 2017 Facebook started taking responsibility for quality control as a way to restore users’ trust. The company hired fact-checkers to verify information in posts. Two months later the company changed its algorithms to help users find diverse viewpoints on current issues and events. And in October 2017, it imposed new transparency requirements to force advertisers to identify themselves clearly.

But Zuckerberg led off 2018 in a different direction, committing to “working to fix our issues together.” That last word, “together,” suggests an inclusive approach, but in my view, it really says the company is shifting the burden back onto its users.

The company began by overhauling its crucial News Feed feature, giving less priority to third-party publishers, whether more traditional media outlets like The New York Times and The Washington Post or newer online publications such as BuzzFeed and Vox. That will leave more room for posts from family and friends, which Zuckerberg has called “meaningful social interactions.”

However, Facebook will rely on users to rate how trustworthy groups, organizations and media outlets are. Those ratings will determine which third-party publishers do make it to users’ screens, if at all. Leaving trustworthiness ratings to users without addressing online political polarization risks making civic discourse even more divided and extreme.

Personal isn’t always ‘meaningful’

Unlike real-life interactions, online exchanges can exacerbate both passive and narcissistic tendencies. It’s easier to be invisible online, so people who want to avoid attention can do so without facing peer pressure to participate. By contrast, though, people who are active online can see their friends like, share and comment on their posts, motivating them to seek even more attention.

This creates two groups of online users, broadly speaking: disengaged observers and those who are competing for attention with ever more extreme efforts to catch users’ eyes. This environment has helped outrageous, untrue claims with clickbait headlines attract enormous amounts of attention.

This phenomenon is further complicated by two other elements of social interaction online. First, news of any kind – including fake news – gains credibility when it is forwarded by a personal connection.

And social media tends to group like-minded people together, creating an echo chamber effect that reinforces messages the group agrees with and resists outside views – including more accurate information and independent perspectives. It’s no coincidence that conservatives and liberals trust very different news sources.

Users of Facebook’s instant-messaging subsidiary WhatsApp have shown that even a technology focusing on individual connection isn’t always healthy or productive. WhatsApp has been identified as a primary carrier of fake news and divisive rumors in India, where its users’ messages have been described as a “mix of off-color jokes, doctored TV [clips], wild rumors and other people’s opinions, mostly vile.” Kenya has identified 21 hate-mongering WhatsApp groups. WhatsApp users in the U.K. have had to stay alert for scams in their personal messages.

Addressing the developing world

Facebook’s actions appear to be responding to public pressure from the U.S. and Europe. But Facebook is experiencing its fastest growth in Asia and Africa.

Research I have conducted with colleagues has found that users in the developing world are more trusting of online material, and therefore more vulnerable to manipulation by false information. In Myanmar, for instance, Facebook is the dominant internet site because of its Free Basics program, which lets mobile-phone users connect to a few selected internet sites, including Facebook, without paying extra or using up allotted data in their mobile plans. In 2014, Facebook had 2 million users in Myanmar; after Free Basics arrived in 2016, that number climbed to 30 million.

One of the effects has been devastating. Rumor campaigns against the Rohingya ethnic group in Myanmar were, in part, spread on Facebook, sparking violence. At least 6,700 Rohingya Muslims were killed by Myanmar’s security forces between August and September 2017; 630,000 more have fled the country. Facebook did not stop the rumors, and at one point actually shut down responding posts from a Rohingya activist group.

Facebook’s Free Basics program is in 63 developing countries and municipalities, each filled with people new to the digital economy and potentially vulnerable to manipulation.

Fighting against the business model

Facebook’s efforts to promote what might be called “corporate digital responsibility” run counter to the company’s business model. Zuckerberg himself declared that the upcoming changes would cause people to spend less time on Facebook.

But the company makes 98 percent of its revenues from advertising. That is only possible if users keep their attention focused on the platform, so the company can analyze their usage data to generate more targeted advertising.

Our research finds that companies working toward corporate social responsibility will only succeed if their efforts align with their core business models. Otherwise, the responsibility project will become unsustainable in the face of pressure from the stock market, competitors or government regulators, as happened to Facebook with European privacy rules.

Real solutions

What can Facebook do instead? I recommend the following to fix Facebook’s fix:

  1. Own the reality of Facebook’s enormous role in society. It’s a primary source of news and communication that influences the beliefs and assumptions driving citizen behavior around the world. The company cannot rely on users to police the system. As a media company, Facebook needs to take responsibility for the content it publishes and republishes. It can combine both human and artificial intelligence to sort through the content, labeling news, opinions, hearsay, research and other types of information in ways ordinary users can understand.

  2. Establish on-the-ground operations in every location where it has large numbers of users, to ensure the company understands local contexts. Rather than a virtual global entity operating from Silicon Valley, Facebook should engage with the nuances and complexities of cities, regions and countries, using local languages to customize content for users. Right now, Facebook passively publishes educational materials on digital safety and community standards, which are easily ignored. As Facebook adds users in developing nations, the company must pay close attention to the unintended consequences of explosive growth in connectivity.

  3. Reduce the company’s dependence on advertising revenue. As long as Facebook is almost entirely dependent on ad sales, it will be forced to hold users’ attention as long as possible and gather their data to analyze for future ad opportunities. Its strategy for expansion should go beyond building and buying other apps, like WhatsApp, Instagram and Messenger, all of which still feed the core business model of monopolizing and data-mining users’ attention. Taking inspiration from Amazon and Netflix – and even Google parent company Alphabet – Facebook could use its huge trove of user data responsibly to identify, design and deliver new services that people would pay for.

Ultimately, Zuckerberg and Facebook’s leaders have created an enormously powerful, compelling and potentially addictive service. This unprecedented opportunity has developed at an unprecedented pace. Growth may be the easy part; being the responsible grown-up is much harder.

 

Bhaskar Chakravorti, Senior Associate Dean, International Business & Finance, Tufts University

This article was originally published on The Conversation. Read the original article.

Facebook on Monday unveiled a version of its Messenger application for children, aimed at enabling kids under 12 to connect with others under parental supervision.

Messenger Kids is being rolled out for Apple iOS mobile devices in the United States on a test basis as a standalone video chat and messaging app. Product manager Loren Cheng said the social network leader is offering Messenger Kids because “there’s a need for a messaging app that lets kids connect with people they love but also has the level of control parents want.”

Facebook said that the new app, with no ads or in-app purchases, is aimed at 6- to 12-year-olds. It enables parents to control the contact list and does not allow children to connect with anyone their parent does not approve. The social media giant added it designed the app because many children are going online without safeguards.

“Many of us at Facebook are parents ourselves, and it seems we weren’t alone when we realized that our kids were getting online earlier and earlier,” a Facebook statement said.

It cited a study showing that 93 percent of 6- to 12-year-olds in the US have access to tablets or smartphones, and two-thirds have a smartphone or tablet of their own.

“We want to help ensure the experiences our kids have when using technology are positive, safer, and age-appropriate, and we believe teaching kids how to use technology in positive ways will bring better experiences later as they grow,” the company said.

Facebook’s rules require that children be at least 13 to create an account, but many are believed to get around the restrictions. Cheng said Facebook conducted its own research and worked with “over a dozen expert advisors” in building the app. He added that data from children would not be used for ad profiles and that the application would be compliant with the Children’s Online Privacy Protection Act (COPPA).

“We’ve worked extensively with parents and families to shape Messenger Kids and we’re looking forward to learning and listening as more children and families start to use the iOS preview,” Cheng said.

 

(AFP)

Facebook on Wednesday reported profits leapt 79 percent on booming revenue from online ads in the third quarter, topping investor forecasts and buoying shares already at record highs.

The leading social network said it made a profit of $4.7 billion in the quarter that ended on September 30, compared with $2.6 billion in the same period a year earlier. Chief executive Mark Zuckerberg used the update to reaffirm efforts by Facebook to curb manipulation: “We’re serious about preventing abuse on our platforms,” he said.
