The bitter truth buried in recent headlines about how the political consulting firm Cambridge Analytica used social media and messaging services, primarily Facebook and WhatsApp, to try to sway voters in presidential elections in the US and Kenya is simply this: Facebook is the reason fake news is here to stay.

Various news outlets, and former Cambridge Analytica executives themselves, confirmed that the company used campaign speeches, surveys, and, of course, social media and social messaging to influence Kenyans in both 2013 and 2017.

The media reports also revealed that, working on behalf of US President Donald Trump’s campaign, Cambridge Analytica had got hold of data from 50 million Facebook users, which they sliced and diced to come up with “psychometric” profiles of American voters.

The political data company’s tactics have drawn scrutiny in the past, so the surprise of these revelations came more from the “how” than the “what.” The real stunner was learning how complicit Facebook and WhatsApp, which is owned by the social media behemoth, had been in aiding Cambridge Analytica in its work.

The Cambridge Analytica scandal appears to be symptomatic of much deeper challenges that Facebook must confront if it’s to become a force for good in the global fight against false narratives.

The first of these hard truths is that Facebook’s business model is built on an inherent conflict of interest. The others are the company’s refusal to take responsibility for the power it wields and its inability to come up with a coherent strategy to tackle fake news.

Facebook’s biggest challenges

Facebook’s first issue is its business model. It has mushroomed into a multibillion-dollar corporation because its revenue comes from gathering and using the data shared by its audience of 2.2 billion monthly users.

Data shapes the ads that dominate our news feeds. Facebook retrieves information from what we like, comment on and share; the posts we hide and delete; the videos we watch; the ads we click on; the quizzes we take. It was, in fact, data sifted from one of these quizzes that Cambridge Analytica bought in 2014. Facebook executives knew of this massive data breach back then but chose to handle the mess internally. They shared nothing with the public.

This makes sense if the data from that public is what fuels your company’s revenues. It doesn’t make sense, however, if your mission is to make the world a more open and connected place, one built on transparency and trust. A corporation that says it protects privacy while also making billions of dollars from data sets itself up for scandal.

This brings us to Facebook’s second challenge: its myopic vision of its own power. As repeated scandals and controversies have washed over the social network in the last couple of years, CEO Mark Zuckerberg’s response generally has been one of studied naivete. He seems to be in denial about his corporation’s singular influence and position.

Case in point: When it became clear in 2016 that fake news had affected American elections, Zuckerberg first dismissed that reality as “a pretty crazy idea.” In this latest scandal, he simply said nothing for days.

Throughout the world, news publishers report that 50% to 80% of their digital traffic comes from Facebook. No wonder Google and Facebook control 53% of the world’s digital and mobile advertising revenue. Yet Zuckerberg still struggles to accept that Facebook’s vast audience and its role as a purveyor of news and information combine to give it extraordinary power over what people consume, and by extension, how they behave.

All of this leads us to Facebook’s other challenge: its inability to articulate, and act on, a cogent strategy to attack fake news.

The fake news phenomenon

When Zuckerberg finally surfaced last month, he said out loud what a lot of people were already thinking: there may be other Cambridge Analyticas out there.

This is very bad news for anyone worried about truth and democracy. For in America, fake news helped to propel into power a man whose presidential campaign may have been a branding exercise gone awry. But in countries like Kenya, fake news can kill.

Zuckerberg and his Facebook colleagues must face this truth. Fake news may not create tribal or regional mistrust, but inflammatory videos and posts shared on social media certainly feed those tensions.

And false narratives spread deep and wide: In 2016, BuzzFeed News found that in some cases, a fake news story was liked, commented on and shared almost 500,000 times. A legitimate political news story might attract 75,000 likes, comments and shares.

After Zuckerberg was flogged for his initial statements about fake news, Facebook reached out to the Poynter Institute’s International Fact-checking Network in an effort to attack this scourge. Then in January 2018, the social network said that it was going to be more discriminating about how much news it would allow to find its way into the feeds of its users. In other words, more videos on cats and cooking, less news of any kind.

The policy sowed a lot of confusion and showed that Facebook is still groping for how to respond to fake news. It was also evidence that the social network does not understand that fake news endangers its own existence as well as the safety and security of citizens worldwide – especially in young democracies such as Kenya.

Angry lawmakers in the US and Europe, along with a burgeoning rebellion among its vast audience, may finally grab Facebook’s attention. But we will only hear platitudes and see superficial change unless Facebook faces hard truths about its reliance on data, accepts its preeminent place in today’s media ecosystem and embraces its role in fighting fake news.

Until then, we should brace ourselves for more Cambridge Analyticas.

 

Stephen Buckley, Lecturer, The Aga Khan University Graduate School of Media and Communications (GSMC)

This article was originally published on The Conversation. Read the original article.

I began my research career in the last century with an analysis of how news organisations were adapting to this strange new thing called “the Internet”. Five years later I signed up for Twitter and, a year after that, for Facebook.

Now, as it celebrates its 14th birthday, Facebook is becoming ubiquitous, and its usage and impact are central to my (and many others’) research.

In 2017 the social network had 2 billion members, by its own count. Facebook’s relationship with news content is an important part of this ubiquity. Since 2008 the company has courted news organisations with features like “Connect”, “Share” and “Instant Articles”. As of 2017, 48% of Americans rely primarily on Facebook for news and current affairs information.

Social networks present news content in a way that’s integrated into the flow of personal and other communication. Media scholar Alfred Hermida calls this “ambient news”. It’s a trend that has been considered promising for the development of civil society. Social media – like the Internet before it – has been hailed as the new “public sphere”: a place for civic discourse and political engagement among the citizenry.

But, unlike the Internet, Facebook is not a public space in which all content is equal. It is a private company. It controls what content you see, according to algorithms and commercial interests. The new public sphere is, in fact, privately owned, and this has far-reaching implications for civic society worldwide.

When a single company is acting as the broker for news and current affairs content for a majority of the population, the possibility for abuse is rife. Facebook is not seen as a “news organisation”, so it falls outside of whatever regulations countries apply to “the news”. And its content is provided by myriad third parties, often with little oversight and tracking by countries’ authorities. So civic society’s ability to address concerns about Facebook’s content becomes even more constrained.

Getting to know all about you

Facebook’s primary goal is to sell advertising. It does so by knowing as much as possible about its users, then selling that information to advertisers. The provision of content to entice consumers to look at advertising is not new: it’s the entire basis of the commercial media.

But where newspapers can only target broad demographic groups based on language, location and, to an extent, education level and income, Facebook can narrow its target market down to the individual level. How? Based on demographics – and everything your “likes”, posts and comments have told it.

This ability to fine-tune content to subsets of the audience is not limited to advertising. Everything on your Facebook feed is curated and presented to you by an algorithm seeking to maximise your engagement by only showing you things that it thinks you will like and respond to. The more you engage and respond, the better the algorithm gets at predicting what you will like.

When it comes to news content and discussion of the news, this means you will increasingly only see material that’s in line with your stated interests. More and more, too, news items, advertisements and posts by friends are blurred in the interface. This all merges into a single stream of information.

And because of the way your network is structured, the nature of that information becomes ever more narrow. It is inherent in the ideals of democracy that people be exposed to a plurality of ideas; that the public sphere should be open to all. The loss of this plurality creates a society made up of extremes, with little hope for consensus or bridging of ideas.

An echo chamber

Most people’s “friends” on Facebook tend to be people with whom they have some real-life connection – actual friends, classmates, neighbours and family members. Functionally, this means that most of your network will consist largely of people who share your broad demographic profile: education level, income, location, ethnic and cultural background and age.

The algorithm knows who in this network you are most likely to engage with, which further narrows the field to people whose worldview aligns with your own. You may be Facebook friends with your Uncle Fred, whose political outbursts threaten the tranquillity of every family get-together. But if you ignore his conspiracy-themed posts and don’t engage, they will start to disappear from your feed.

Over time this means that your feed gets narrower and narrower. It shows less and less content that you might disagree with or find distasteful.

These two responses, engaging and ignoring, are both driven by the invisible hand of the algorithm. And they have created an echo chamber. This isn’t dissimilar to what news organisations have been trying to do for some time: gatekeeping is the expression of the journalists’ idea of what the audience wants to read.

Traditional journalists had to rely on their instinct for what people would be interested in. Technology now makes it possible to know exactly what people read, responded to, or shared.

For Facebook, this process is now run by a computer: an algorithm that reacts instantly to provide the content it thinks you want. But this fine-tuned and carefully managed algorithm is open to manipulation, especially by political and social interests.
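
For readers who want the mechanics spelled out, the feedback loop described above can be sketched in a few lines of code. The snippet below is a deliberately crude illustration, not Facebook’s actual (and secret) ranking system: the affinity scores, the 1.2 and 0.9 multipliers, and the example friends are all invented for the purpose of the demonstration.

    # Illustrative sketch only: a toy engagement-driven feed ranker.
    # The weights, decay factor and data structures are invented for this
    # example and bear no relation to Facebook's real (secret) algorithm.
    from collections import defaultdict

    affinity = defaultdict(lambda: 1.0)  # how strongly each friend is favoured

    def rank_feed(posts):
        """Order candidate posts by the poster's current affinity score."""
        return sorted(posts, key=lambda post: affinity[post["author"]], reverse=True)

    def register_reaction(post, engaged):
        """Engaging boosts a friend's future visibility; ignoring erodes it."""
        if engaged:                        # like, comment or share
            affinity[post["author"]] *= 1.2
        else:                              # scrolled past without engaging
            affinity[post["author"]] *= 0.9

    # Over many sessions, the friend you ignore (Uncle Fred's conspiracy posts)
    # sinks out of the feed, while the voice you already agree with rises.
    posts = [{"author": "uncle_fred", "text": "conspiracy theory"},
             {"author": "like_minded_pal", "text": "agreeable take"}]
    for _ in range(20):
        for post in rank_feed(posts):
            register_reaction(post, engaged=(post["author"] == "like_minded_pal"))

    print(sorted(affinity.items(), key=lambda kv: kv[1], reverse=True))

Run repeatedly, a loop like this pushes the ignored voice towards invisibility while amplifying the one you already engage with – the narrowing effect described above.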

Extreme views confirmed

In the last few years Facebook users have unwittingly become part of a massive social experiment – one which may have contributed to two surprising outcomes: the election of Donald Trump as president of the US and the UK’s vote to leave the European Union. We can’t be sure of this, since Facebook’s content algorithm is secret and most of the content is shown only to specific users.

It’s physically impossible for a researcher to see all of the content distributed on Facebook; the company explicitly prevents that kind of access. Researchers and journalists need to construct model accounts (fake ones, violating Facebook’s terms of use) and attempt to trick the algorithm into showing what the social network’s most extreme political users see.

What they’ve found is that the more extreme the views a user had already agreed with, the more extreme the content they were shown. People who liked or expressed support for leaving the EU were shown content that reflected this desire, but in a more extreme way.

If they liked that, they’d be shown even more such content, and so on, with the group becoming smaller and smaller, and more and more insular. This is similar to how extremist groups have traditionally identified and courted potential members, enticing them with ever more radical ideas and watching their reaction. That sort of personal interaction was a slow process; Facebook’s algorithm now works at lightning speed, and the pace of radicalisation is dramatically increased.

 

Megan Knight, Associate Dean, University of Hertfordshire

This article was originally published on The Conversation. Read the original article.

Facebook has a world of problems. Beyond charges of Russian manipulation and promoting fake news, the company’s signature social media platform is under fire for being addictive, causing anxiety and depression, and even instigating human rights abuses.

Company founder and CEO Mark Zuckerberg says he wants to win back users’ trust. But his company’s efforts so far have ignored the root causes of the problems they intend to fix, and even risk making matters worse. Specifically, they ignore the fact that personal interaction isn’t always meaningful or benign, leave out the needs of users in the developing world, and seem to compete with the company’s own business model.

Based on The Digital Planet, a multi-year global study of how digital technologies spread and how much people trust them, which I lead at Tufts University’s Fletcher School, I have some ideas about how to fix Facebook’s efforts to fix itself.

Face-saving changes?

Like many technology companies, Facebook must balance the convergence of digital dependence, digital dominance and digital distrust. Over 2 billion people worldwide check Facebook each month; 45 percent of American adults get their news from Facebook. Together with Google, it captures half of all digital advertising revenues worldwide. Yet more people say they greatly distrust Facebook than any other member of the big five – Amazon, Apple, Google or Microsoft.

In March 2017 Facebook started taking responsibility for quality control as a way to restore users’ trust. The company hired fact-checkers to verify information in posts. Two months later the company changed its algorithms to help users find diverse viewpoints on current issues and events. And in October 2017, it imposed new transparency requirements to force advertisers to identify themselves clearly.

But Zuckerberg led off 2018 in a different direction, committing to “working to fix our issues together.” That last word, “together,” suggests an inclusive approach, but in my view, it really says the company is shifting the burden back onto its users.

The company began by overhauling its crucial News Feed feature, giving less priority to third-party publishers, whether traditional media outlets like The New York Times and The Washington Post or newer online publications such as BuzzFeed and Vox. That will leave more room for posts from family and friends, which Zuckerberg has called “meaningful social interactions.”

However, Facebook will rely on users to rate how trustworthy groups, organizations and media outlets are. Those ratings will determine which third-party publishers do make it to users’ screens, if at all. Leaving trustworthiness ratings to users without addressing online political polarization risks making civic discourse even more divided and extreme.

Personal isn’t always ‘meaningful’

Unlike real-life interactions, online exchanges can exacerbate both passive and narcissistic tendencies. It’s easier to be invisible online, so people who want to avoid attention can do so without facing peer pressure to participate. By contrast, though, people who are active online can see their friends like, share and comment on their posts, motivating them to seek even more attention.

This creates two groups of online users, broadly speaking: disengaged observers and those who are competing for attention with ever more extreme efforts to catch users’ eyes. This environment has helped outrageous, untrue claims with clickbait headlines attract enormous amounts of attention.

This phenomenon is further complicated by two other elements of social interaction online. First, news of any kind – including fake news – gains credibility when it is forwarded by a personal connection.

And social media tends to group like-minded people together, creating an echo chamber effect that reinforces messages the group agrees with and resists outside views – including more accurate information and independent perspectives. It’s no coincidence that conservatives and liberals trust very different news sources.

Users of Facebook’s instant-messaging subsidiary WhatsApp have shown that even a technology focusing on individual connection isn’t always healthy or productive. WhatsApp has been identified as a primary carrier of fake news and divisive rumors in India, where its users’ messages have been described as a “mix of off-color jokes, doctored TV [clips], wild rumors and other people’s opinions, mostly vile.” Kenya has identified 21 hate-mongering WhatsApp groups. WhatsApp users in the U.K. have had to stay alert for scams in their personal messages.

Addressing the developing world

Facebook’s actions appear to be responding to public pressure from the U.S. and Europe. But Facebook is experiencing its fastest growth in Asia and Africa.

Research I have conducted with colleagues has found that users in the developing world are more trusting of online material, and therefore more vulnerable to manipulation by false information. In Myanmar, for instance, Facebook is the dominant internet site because of its Free Basics program, which lets mobile-phone users connect to a few selected internet sites, including Facebook, without paying extra or using up allotted data in their mobile plans. In 2014, Facebook had 2 million users in Myanmar; after Free Basics arrived in 2016, that number climbed to 30 million.

One of the effects has been devastating. Rumor campaigns against the Rohingya ethnic group in Myanmar were, in part, spread on Facebook, sparking violence. At least 6,700 Rohingya Muslims were killed by Myanmar’s security forces between August and September 2017; 630,000 more have fled the country. Facebook did not stop the rumors, and at one point actually shut down responding posts from a Rohingya activist group.

Facebook’s Free Basics program is in 63 developing countries and municipalities, each filled with people new to the digital economy and potentially vulnerable to manipulation.

Fighting against the business model

Facebook’s efforts to promote what might be called “corporate digital responsibility” run counter to the company’s business model. Zuckerberg himself declared that the upcoming changes would cause people to spend less time on Facebook.

But the company makes 98 percent of its revenues from advertising. That is only possible if users keep their attention focused on the platform, so the company can analyze their usage data to generate more targeted advertising.

Our research finds that companies working toward corporate social responsibility will only succeed if their efforts align with their core business models. Otherwise, the responsibility project will become unsustainable in the face of pressure from the stock market, competitors or government regulators, as happened to Facebook with European privacy rules.

Real solutions

What can Facebook do instead? I recommend the following to fix Facebook’s fix:

  1. Own the reality of Facebook’s enormous role in society. It’s a primary source of news and communication that influences the beliefs and assumptions driving citizen behavior around the world. The company cannot rely on users to police the system. As a media company, Facebook needs to take responsibility for the content it publishes and republishes. It can combine both human and artificial intelligence to sort through the content, labeling news, opinions, hearsay, research and other types of information in ways ordinary users can understand.

  2. Establish on-the-ground operations in every location where it has large numbers of users, to ensure the company understands local contexts. Rather than a virtual global entity operating from Silicon Valley, Facebook should engage with the nuances and complexities of cities, regions and countries, using local languages to customize content for users. Right now, Facebook passively publishes educational materials on digital safety and community standards, which are easily ignored. As Facebook adds users in developing nations, the company must pay close attention to the unintended consequences of explosive growth in connectivity.

  3. Reduce the company’s dependence on advertising revenue. As long as Facebook is almost entirely dependent on ad sales, it will be forced to hold users’ attention as long as possible and gather their data to analyze for future ad opportunities. Its strategy for expansion should go beyond building and buying other apps, like WhatsApp, Instagram and Messenger, all of which still feed the core business model of monopolizing and data-mining users’ attention. Taking inspiration from Amazon and Netflix – and even Google parent company Alphabet – Facebook could use its huge trove of user data responsibly to identify, design and deliver new services that people would pay for.

Ultimately, Zuckerberg and Facebook’s leaders have created an enormously powerful, compelling and potentially addictive service. This unprecedented opportunity has developed at an unprecedented pace. Growth may be the easy part; being the responsible grown-up is much harder.

 

Bhaskar Chakravorti, Senior Associate Dean, International Business & Finance, Tufts University

This article was originally published on The Conversation. Read the original article.

Facebook on Monday unveiled a version of its Messenger application for children, aimed at enabling kids under 12 to connect with others under parental supervision.

Messenger Kids is being rolled out for Apple iOS mobile devices in the United States on a test basis as a standalone video chat and messaging app. Product manager Loren Cheng said the social network leader is offering Messenger Kids because “there’s a need for a messaging app that lets kids connect with people they love but also has the level of control parents want.”

Facebook said that the new app, with no ads or in-app purchases, is aimed at 6- to 12-year-olds. It enables parents to control the contact list and does not allow children to connect with anyone their parent does not approve. The social media giant added it designed the app because many children are going online without safeguards.

“Many of us at Facebook are parents ourselves, and it seems we weren’t alone when we realized that our kids were getting online earlier and earlier,” a Facebook statement said.

It cited a study showing that 93 percent of 6- to 12-year-olds in the US have access to tablets or smartphones, and two-thirds have a smartphone or tablet of their own.

“We want to help ensure the experiences our kids have when using technology are positive, safer, and age-appropriate, and we believe teaching kids how to use technology in positive ways will bring better experiences later as they grow,” the company said.

Facebook’s rules require that children be at least 13 to create an account, but many are believed to get around the restrictions. Cheng said Facebook conducted its own research and worked with “over a dozen expert advisors” in building the app. He added that data from children would not be used for ad profiles and that the application would be compliant with the Children’s Online Privacy Protection Act (COPPA).

“We’ve worked extensively with parents and families to shape Messenger Kids and we’re looking forward to learning and listening as more children and families start to use the iOS preview,” Cheng said.

 

(AFP)

Facebook on Wednesday reported profits leapt 79 percent on booming revenue from online ads in the third quarter, topping investor forecasts and buoying shares already at record highs.

The leading social network said it made a profit of $4.7 billion in the quarter that ended on September 30, compared with $2.6 billion in the same period a year earlier. Chief executive Mark Zuckerberg used the update to reaffirm efforts by Facebook to curb manipulation: “We’re serious about preventing abuse on our platforms,” he said.

Facebook is to send more potential hoax articles to third-party fact checkers and show their findings below the original post, the world’s largest online social network said on Thursday as it tries to fight so-called fake news.

The company said in a statement on its website it will start using updated machine learning to detect possible hoaxes and send them to fact checkers, potentially showing fact-checking results under the original article.

Facebook has been criticized as being one of the main distribution points for so-called fake news, which many think influenced the 2016 U.S. presidential election.

The issue has also become a big political topic in Europe, with French voters deluged with false stories ahead of the presidential election in May and Germany backing a plan to fine social media networks if they fail to remove hateful postings promptly, ahead of elections there in September.

On Thursday Facebook said in a separate statement in German that a test of the new fact-checking feature was being launched in the United States, France, the Netherlands and Germany.

“In addition to seeing which stories are disputed by third-party fact checkers, people want more context to make informed decisions about what they read and share,” said Sara Su, Facebook news feed product manager, in a blog post.

She added that Facebook would keep testing its “related article” feature and work on other changes to its news feed to cut down on false news.

 

Credit: Voice of America (VOA)

Facebook recently announced that it now has over 2 billion monthly users. This makes its “population” larger than that of China, the US, Mexico and Japan combined. Its popularity, and with it the influence it has in society, is beyond dispute.

But for many the experience of actually using the site fluctuates somewhere between the addictive and the annoying. Our new research shows that the reason for this is very simple. It’s all to do with other people, and how we feel about them.

For Facebook CEO Mark Zuckerberg and colleagues, the ethos behind the site is straightforward. It aims to “give people the power to build community and bring the world closer together”. By offering individuals the chance to connect with friends and share meaningful content, it aims to strengthen relationships and community ties.

The fact that this is a rather idealistic picture of society hasn’t prevented the site from flourishing. Yet, examining what people actually do on the site, how they interact with each other, and what they feel about the behaviour of friends and acquaintances, shows that the truth is rather more complex.

Silent watchers

We surveyed and selectively interviewed a network of over 100 Facebook users. Our findings show how we continue to use the site and remain connected to people through it even though they often annoy or offend us. But instead of challenging them or severing ties, we continue to use Facebook to silently watch them – and perhaps even take pleasure from judging them.

In other words, Facebook reflects the dynamics at the heart of all real human relationships. Just as in their offline life, people try to open up and bond with each other while simultaneously having to cope with the everyday frictions of friendship.

One of the most notable things we found in our research was the high number of people who said that they were frequently offended by what their friends posted. The sorts of things that caused offence ran the gamut from extremist or strongly-held political opinions (racism, homophobia, partisan political views) to oversharing of daily routines and acts of inadvertent self-promotion.

For example, one interviewee wrote of how she had “a particularly hard time with pro-gun posts”:

I really, really wish guns were significantly less accessible and less glorified in American culture. Still, I don’t think Facebook is really the place that people chose to listen to opposing views, so I usually ignore posts of that nature.

At the other end of the spectrum was this interviewee:

I wrote to a friend about how my two-year-old was counting to 40 and was saying the alphabet in three languages. This made a Facebook contact write passive aggressively on her wall about overachieving parents who spend all their time bragging about their children. I felt the need to de-friend her after that incident.

Why do we put up with this?

These reactions happened so often because of various factors native to the sort of communications technology that Facebook represents. First, there’s the specific type of diversity that exists among people’s online networks. That is, the diversity created by people from different parts of your life being brought together in one space.

On Facebook, you write your message without knowing who precisely will read it, but in the knowledge that the likely audience will include people from various parts of your life who have a range of different values and beliefs. In face-to-face conversations you’re likely to talk to your father-in-law, work colleagues or friends from primary school in separate contexts, using different styles of communication. On Facebook, by contrast, they’ll all see the same side of you, as well as getting to see the opinions of those you associate with.


This means that people are engaging in personal conversations in a much more public space than they did before, and that the different value systems these diverse friends have can very easily come into conflict. But the nature of the ties people have on Facebook means that often they can’t just break loose from people they find annoying or offensive in this way.

For example, if a work colleague or relative offends you, there are likely to be reasons of duty or familial responsibility which mean you won’t want to de-friend them. Instead, people make discreet changes in their settings on the site to limit the views they find offensive from showing up in their feed, without provoking outward shows of conflict with people.

As one interviewee explained:

I remember de-friending one person (friend of a friend) as she kept posting her political opinions that were the complete opposite of mine. It frustrated me as I didn’t know her well enough to “bite” and reply to her posts, equally, I didn’t want to voice it on a public forum.

None of the people in the study, however, said that they’d reduced their use of Facebook because of the frequent offence they experienced from using it. Instead, we can speculate, it’s this opportunity to be slightly judgemental about the behaviour of your acquaintances that proves one of the compelling draws of the site.

Similar to the “hate-watching” experience of viewing television programmes you don’t like because you enjoy mocking them, this can be seen as a mild form of “hate-reading”. Logging onto Facebook gives you the chance to be indignantly offended (or maybe just mildly piqued) by other people’s ill-informed views and idiosyncratic behaviour. And there’s a surprising amount of pleasure in that.

Philip Seargeant, Senior Lecturer in Applied Linguistics, The Open University and Caroline Tagg, Lecturer in Applied Linguistics and English Language, The Open University

This article was originally published on The Conversation. Read the original article.

Telecommunication companies in Nigeria may block calls made on instant messaging applications like WhatsApp and Skype, in a bid to increase their revenue.

According to The Punch, the telcos are seeking to address their losses on international calls and are looking to raise revenue of N20 trillion. This may leave subscribers unable to make voice and video calls on WhatsApp, Facebook and some other Over-The-Top (OTT) services.

“It is an aggressive approach to stop further revenue loss to OTT players on international calls, having already lost about N100tn between 2012 and 2017,” a manager at one of the major telecoms companies in the country said. Speaking on the condition of anonymity, he added: “If we fail to be pro-active by taking cogent steps now, then there are indications that we may lose between N20tn and N30tn, or so, by the end of 2018.”

The source also revealed that the proliferation of apps like WhatsApp, Skype, Facebook, BlackBerry Messenger and Viber, was taking a big chunk of the voice revenue of telcos in the country. In reaction to the news, the Director, Public Affairs, NCC, Mr. Tony Ojobo, said: “We don’t have any evidence of that. We do not regulate the Internet.”

“I am not aware of this development but globally, operators and network equipment makers don’t really embrace Skype,” the Managing Director, TechTrends Nigeria, Mr. Kenneth Omeruo, said. “They liken Skype to an individual who takes undue advantage of other people’s generosity without giving anything in return.

“Globally, there is this apprehension among telecoms operators that Skype only steals their customers, while they invest billions of dollars to build, expand and upgrade networks”, he added.

You already get your news, gossip and cat videos from Facebook.

Could you find your next job there too?

Starting this week, Facebook users in the United States and Canada can search and apply for jobs directly from the social-media platform. It's one more way Facebook is trying to expand its reach, particularly among low-wage, hourly workers who may not have profiles on job-search sites such as LinkedIn or Monster.com.

Analysts say the new jobs feature is yet another way the social media site is testing how much privacy its 1.86 billion users are willing to sacrifice for the sake of convenience.

"Facebook is pushing the limits to see what people are willing to do on the site, and jobs is a natural step," said R "Ray" Wang, founder of Constellation Research, a Silicon Valley technology research and advisory firm. "It's an area where people will say, 'Oh, this makes a lot of sense.' Facebook is covering a very important gap."

Social media is increasingly playing a role in job searches. Roughly 14.4 million Americans say they have used social media to find employment, according to a recent survey by ADP. In addition, the survey found, 73 per cent of companies said they had successfully hired employees using social media. Facebook executives said they are also hoping to target users who may not be actively looking for a new job by flagging nearby opportunities in businesses they may frequent or support.

"Two-thirds of job seekers are already employed," Andrew Bosworth, Facebook's vice president of ads and business platform, told Tech Crunch. "They're not spending their days and nights out there canvassing for jobs. They're open to a job if a job comes."

Businesses can post jobs free through their profile pages. Users, meanwhile, can search for nearby listings and quickly apply for jobs by clicking an "Apply now" button. Facebook automatically fills in basic information, such as a user's name, location and photo, into the application, which is sent to the business via Facebook Messenger.

A recent search for Washington-area jobs turned up a doughnut-making position at Duck Donuts in Fairfax, Va., an engineering job at Tenable Network Securities in Columbia, Md., and a part-time bartending gig at Killarney House Irish Restaurant and Pub in Davidson, Md. Blue Feather Music in Arlington, Virginia, meanwhile, was looking for piano, guitar and voice instructors. Pay: $50 per hour.

"I thought this would be a great way to find a big audience," owner Laura Peacock said of the job posting, which went live Thursday morning. "I'm hiring, I need people and they're already all on Facebook."

But not everyone is convinced the plan will work in the long run. Jan Dawson, chief analyst at Jackdaw Research in Provo, Utah, says users are likely to be wary of combining their personal profiles with professional pursuits. Although most applicants know potential employers may look through their social media accounts, he said that's different from linking a user's Facebook profile to their job application.

"This is something many people are going to be very uncomfortable with," Dawson said. "Ultimately people are on Facebook to connect with their friends and to watch funny videos. They're not there to apply for jobs."

 

Source: Washington Post

Facebook is challenging developers across the Middle East and Africa to create innovative bots in the Bots for Messenger Developer Challenge. This aligns with Facebook’s commitment to promote innovation in the Middle East and Africa by providing developers and start-ups with the tools they need to build, grow, monetize, and measure products and services.

Facebook grew out of a hacker culture and thrives by promoting innovation on new platforms. That's why Facebook is launching the Bots for Messenger Challenge, a contest to recognize and reward developers who are able to create the most innovative new bots on Messenger.

Developers, in teams of up to three people, are invited to create bots in three categories: gaming and entertainment; productivity and utility; and social good.

Prizes
Finalist Teams
The 60 finalist teams (10 per category in each region) will win a Gear VR and mobile phone, one hour of Facebook mentorship and tools and services from FbStart, a Facebook program designed to help early stage mobile start-ups build and grow their bots.
All student teams who make it to the finals will win an additional $2,000 (students will be verified against their registration via their government accredited school email accounts).

Runner-Up Teams
For each region, three runner-up teams (one from each category) will win $10,000 and three months of Facebook mentorship.

Winning Teams
For each region, three winning teams (one from each category) will win $20,000 and three months of Facebook mentorship.

Dates

  • Submissions open: 15 February at 09:00 GMT
  • Deadline for entries: 28 April at 11:59:59 GMT
  • Finalists announced (30 teams each in Sub-Saharan Africa and the Middle East/North Africa): 19 May at 09:00 GMT
  • Deadline for entries from finalists: 2 June at 11:59:59 GMT
  • Winners announced: 19 June at 09:00 GMT (three winning teams and three runner-up teams in the Middle East and North Africa; three winning teams and three runner-up teams in Sub-Saharan Africa)
