SocialTube: P2P-assisted Video Sharing in
Online Social Networks
ABSTRACT:
Video sharing has become an increasingly popular application in
online social networks (OSNs). However, its sustainable development is severely
hindered by the intrinsic limits of the client/server architecture deployed in
current OSN video systems, which is not only costly in terms of server
bandwidth and storage but also not scalable with the soaring number of users
and video content. The peer-assisted Video-on-Demand (VoD) technique, in which
participating peers assist the server in delivering video content, has been
proposed recently. Unfortunately, videos in OSNs can only be disseminated
through friends. Therefore, current VoD systems that cluster nodes with
similar interests or close locations for high performance are suboptimal,
if not entirely inapplicable, in OSNs. Based on our long-term real-world
measurement of over 1,000,000 users and 2,500 videos on Facebook, we propose
SocialTube, a novel peer-assisted video sharing system that exploits the social
relationships, interest similarity, and physical proximity of peers in OSNs.
Specifically, SocialTube incorporates four algorithms: a social network
(SN)-based P2P overlay construction algorithm, an SN-based chunk prefetching
algorithm, a chunk delivery and scheduling algorithm, and a buffer management
algorithm. Experimental results from a prototype on PlanetLab and an
event-driven simulator show that SocialTube can improve the quality of user
experience and system scalability over current P2P VoD techniques.
Existing System:
The recent rapid development of OSN video sharing applications illustrates the
evolution of OSNs from simple communication-focused tools into media portals. OSNs
are transforming from a platform for catching up with friends into a venue for
personal expression and for sharing a full variety of content and information.
However, OSNs' further advancement is severely hindered by the intrinsic limits
of the conventional client/server architecture of their video sharing systems,
which is not only costly in terms of server storage and bandwidth but also not
scalable with the soaring number of users and video content in OSNs. For
example, the world's largest video sharing website, YouTube, spends roughly $1,000,000
per day on server bandwidth. This high and ever-rising expense was one of the
major reasons YouTube was sold to Google. OSNs now face the same
formidable challenge as YouTube, as more and more users rely on Facebook for
video sharing. Though OSNs can depend on content delivery networks (CDNs) for
video content delivery (e.g., Facebook depends on Akamai for video delivery),
the CDN service is costly.
Problems with the existing system:
1. It is costly in terms of server storage and bandwidth.
2. It is not scalable with the soaring number of users and video content in Online Social Networks.
Proposed System:
We extend the existing design, remove the drawbacks of the existing systems, and improve the performance of the network. Our proposed system reduces memory wastage as well as the bandwidth required for overall video transmission.
Advantages:
1. It reduces data storage cost and bandwidth requirements.
2. With each peer contributing its bandwidth to serving others, the P2P architecture provides high scalability for large user bases.
Algorithm Used:
Social Network based P2P Overlay Construction Algorithm
Problem Statement:-
Online Social Networks are transforming from a platform for catching up with friends
into a venue for personal expression and for sharing a full variety of content
and information. However, Online Social Networks' further advancement is
severely hindered by the intrinsic limits of the conventional client/server architecture
of their video sharing systems, which is not only costly in terms of server
storage and bandwidth but also not scalable with the soaring number of users and
video content in Online Social Networks.
Scope:-
The scope of the project is to reduce data storage cost and bandwidth requirements. In addition, with each peer contributing its bandwidth to serving others, the P2P architecture provides high scalability for large user bases.
Algorithm:-
Social Network based P2P Overlay Construction Algorithm:
To identify followers and non-followers of a source node for structure construction, SocialTube pre-defines two thresholds, Th and Tl, on the percentage of the source node's videos that a viewer has watched during a time unit, say one week. If a viewer's percentage x is ≥ Th, the viewer is a follower; if Tl < x < Th, the viewer is a non-follower.

Video sharing in Facebook distinguishes itself from other video sharing websites (e.g., YouTube) in two aspects: video sharing scope and video watching incentives. First, other websites provide system-wide video sharing where a user can watch any video, while in Facebook videos are usually shared within a small 2-hop circle of friends (I1). Second, users of other video sharing websites are driven to watch videos by interest, while in Facebook the followers of a source node (i.e., video owner) are driven to watch almost all of the source's videos primarily by social relationship, and non-followers watch a certain number of videos driven mainly by interest (I2).

According to these differentiating aspects, we design the P2P overlay structure. Based on I1, SocialTube establishes a per-node (in contrast to per-video in YouTube) P2P overlay for each source node. It consists of peers within 2 hops of the source that watch at least a certain percentage (> Tl) of the source's videos. Other peers can still fetch videos from the server. As shown in the figure, such peers of a source node S in the social network constitute a P2P overlay for the source node. We aim to achieve an optimal tradeoff between P2P overlay maintenance cost and video sharing efficiency. Some nodes within 2 hops may watch only a few of a source's videos; including these nodes, or users beyond 2 hops, in the overlay generates a greater structure maintenance cost than video sharing benefit.
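As a minimal sketch of the follower/non-follower classification above (the paper fixes only the thresholds Th and Tl, not an implementation; all class and method names here are illustrative assumptions):

// Sketch: classify a viewer of a source node by the fraction of the
// source's videos watched during the last time unit (e.g., one week).
// Th and Tl are SocialTube's pre-defined thresholds; names are assumed.
enum Role { FOLLOWER, NON_FOLLOWER, OUTSIDE_OVERLAY }

final class ViewerClassifier {
    private final double tHigh; // Th
    private final double tLow;  // Tl

    ViewerClassifier(double tHigh, double tLow) {
        this.tHigh = tHigh;
        this.tLow = tLow;
    }

    Role classify(int videosWatched, int totalVideos, int hopsToSource) {
        if (hopsToSource > 2 || totalVideos == 0)
            return Role.OUTSIDE_OVERLAY;      // only peers within 2 hops join
        double x = (double) videosWatched / totalVideos;
        if (x >= tHigh) return Role.FOLLOWER; // watches almost all videos
        if (x > tLow)   return Role.NON_FOLLOWER;
        return Role.OUTSIDE_OVERLAY;          // such peers fetch from the server
    }
}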
Based on I2, we build a hierarchical structure that connects a source node with its socially close followers, and connects the followers with other non-followers. Thus, the followers can quickly receive chunks from the source node and also function as pseudo-sources that distribute chunks to other friends. The source pushes the first chunk of each new video to its followers. The chunk is cached at each follower and has a high probability of being used, since followers watch almost all videos of the source. Further, non-followers sharing the same interest are grouped into an interest cluster for video sharing. We call the peers in an interest cluster interest-cluster-peers. A node with multiple interests belongs to multiple interest clusters of the source node. Because the source node and the followers are involved in every interest cluster to provide video content, we call the group formed by the source, the followers, and the interest-cluster-peers of an interest cluster a swarm, and call all nodes in a swarm swarm-peers.
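The per-source overlay state described above might be kept roughly as follows (a sketch with assumed names; one swarm per interest cluster, each containing the source, all followers, and that cluster's interest-cluster-peers):

import java.util.*;

// Sketch of the per-node (per-source) overlay: a follower set plus one
// interest cluster per interest category. Names are illustrative.
final class SourceOverlay {
    final String sourceId;
    final Set<String> followers = new HashSet<>();
    final Map<String, Set<String>> interestClusters = new HashMap<>();

    SourceOverlay(String sourceId) { this.sourceId = sourceId; }

    void addFollower(String peerId) { followers.add(peerId); }

    // A node with multiple interests joins multiple interest clusters.
    void addNonFollower(String peerId, Collection<String> interests) {
        for (String interest : interests)
            interestClusters.computeIfAbsent(interest, k -> new HashSet<>())
                            .add(peerId);
    }

    // Swarm-peers of one interest cluster: source + followers + cluster.
    Set<String> swarm(String interest) {
        Set<String> swarm = new HashSet<>(followers);
        swarm.add(sourceId);
        swarm.addAll(interestClusters.getOrDefault(interest, Set.of()));
        return swarm;
    }
}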
As I1 indicates, the size of each interest cluster should be small. O9 indicates that many viewers of a video are physically close peers. Therefore, to reduce delay, physically close interest-cluster-peers are randomly connected with each other. Peers find their physically close peers based on their ISP and subnet information. To preserve privacy in the OSN, we can add a constraint that peer A may connect to peer B only when peer A is peer B's friend or can access peer B's shared videos. The viewers of S thus form two swarms. Because the nodes in each swarm have a high probability of owning chunks of the same video, they can retrieve chunks from their swarm-peers without querying the server or resorting to large-scale query flooding.
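The paper does not specify the exact proximity test; one plausible sketch of "physically close," based on the ISP and subnet information mentioned above (all names assumed), is:

// Sketch: treat two interest-cluster-peers as physically close when they
// share an ISP or an IPv4 /24 prefix. This is one plausible reading of
// the ISP/subnet hint above, not the paper's definition.
final class Proximity {
    static boolean physicallyClose(String ispA, String ipA,
                                   String ispB, String ipB) {
        if (ispA.equals(ispB)) return true;
        return subnet24(ipA).equals(subnet24(ipB));
    }

    // Assumes well-formed dotted IPv4 addresses, e.g., "192.168.1.7".
    private static String subnet24(String ip) {
        return ip.substring(0, ip.lastIndexOf('.'));
    }
}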
In current video sharing in Facebook, a node always requests videos uploaded by source nodes from the server. We let the server keep track of the video watching activities of a specific source node's viewers in order to identify and update its followers and non-followers based on SocialTube's pre-defined thresholds Tl and Th. This duty can be assigned to the source node itself if it has sufficient capacity. Nodes in the system periodically report their video providing activities to the server. When the server determines that a peer is a follower of the source node, it notifies the source node, which notifies all nodes in its swarms about the follower. Consequently, the follower becomes a member of each swarm, and all swarm-peers in each swarm connect to it. When the server determines that a peer is a non-follower of the source node, it notifies the source node about the non-follower along with its interests. The source node then notifies the peers in the clusters matching the non-follower's interests, and notifies the non-follower about those clusters. The non-follower connects to all followers, to the source, and to a few physically close nodes in each cluster. Consequently, the non-follower becomes a member of the swarm of each of its interest clusters. The server also periodically updates the roles of followers and non-followers. If a node becomes neither a follower nor a non-follower, the server removes it by notifying others to disconnect from it. If a follower becomes a non-follower, its connections are updated accordingly.
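In outline, the server's periodic role update could be sketched as below, reusing the Role and ViewerClassifier types from the classification sketch above; ViewerStats and the Notifier callbacks are placeholders for state and messaging that the paper leaves unspecified:

import java.util.*;

// Placeholder for the watching statistics the server tracks per viewer.
record ViewerStats(int videosWatched, int totalVideos,
                   int hopsToSource, Set<String> interests) {}

// Placeholder for the notifications relayed through the source node.
interface Notifier {
    void followerJoined(String sourceId, String peer);
    void nonFollowerJoined(String sourceId, String peer, Set<String> interests);
    void disconnect(String sourceId, String peer);
}

final class RoleUpdater {
    void updateRoles(String sourceId, ViewerClassifier classifier,
                     Map<String, ViewerStats> stats, Notifier notifier) {
        for (Map.Entry<String, ViewerStats> e : stats.entrySet()) {
            String peer = e.getKey();
            ViewerStats s = e.getValue();
            switch (classifier.classify(s.videosWatched(), s.totalVideos(),
                                        s.hopsToSource())) {
                // All swarm-peers are told to connect to the new follower.
                case FOLLOWER -> notifier.followerJoined(sourceId, peer);
                // Only the matching interest clusters are told.
                case NON_FOLLOWER ->
                        notifier.nonFollowerJoined(sourceId, peer, s.interests());
                // Neither role any more: disconnect the peer from the overlay.
                default -> notifier.disconnect(sourceId, peer);
            }
        }
    }
}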
In an OSN, after a node logs out of (i.e., leaves) the system, it will eventually log back in (i.e., rejoin), so the SocialTube overlay does not update on node departures. Only when the server notices that a node has not rejoined after a very long absence is the node removed from the overlay. Neighbors in the overlay periodically exchange messages. When a node notices that a neighbor is offline, it marks the connection, and unmarks it when the neighbor comes back online. A node does not request videos from neighbors with marked connections (a small bookkeeping sketch follows at the end of this subsection). The nodes in a P2P structure, including the source, followers, and non-followers, remember their roles and connections. The next time a node comes online, it automatically connects to its previous neighbors and functions based on its role.

In the figure's example, the source node has two followers, and its videos can be divided into two interest categories based on video content. The 1-hop and 2-hop friends of the source node with interest 1 and interest 2 form two clusters, respectively. The source node and the followers belong to each interest cluster, all of which together form a swarm.
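To make the marking scheme above concrete, a minimal sketch of the connection bookkeeping (names assumed), which lets a node skip offline neighbors without dropping them from the overlay:

import java.util.*;

// Sketch: neighbors persist across sessions; offline ones are marked and
// skipped when requesting chunks, then unmarked when they return.
final class NeighborTable {
    private final Map<String, Boolean> online = new HashMap<>();

    void addNeighbor(String peerId) { online.put(peerId, true); }
    void markOffline(String peerId) { online.replace(peerId, false); }
    void markOnline(String peerId)  { online.replace(peerId, true); }

    // Chunk requests go only to neighbors with unmarked connections.
    List<String> availableProviders() {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, Boolean> e : online.entrySet())
            if (e.getValue()) result.add(e.getKey());
        return result;
    }
}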
Social Network based Prefetching Algorithm:-
To reduce video startup latency, we propose a push-based video prefetching mechanism in SocialTube. When a source node uploads a new video to the server, it also pushes the prefix (i.e., the first chunk) of the video to its followers and to the interest-cluster-peers in the interest clusters matching the content of the video. The prefix receivers store the prefix in their caches. Interest-cluster-peers and followers who are not online when the source node pushes the prefix automatically receive it from the source node or the server once they come online. After the source node leaves, the responsibility for pushing the prefix falls to the server. Since these followers and interest-cluster-peers are very likely to watch the video, the cached prefixes have a high probability of being used. Once a node requests the video, the locally stored prefix can be played immediately, without delay. Meanwhile, the node tries to retrieve the remaining video chunks from its swarm-peers.

Similar to BitTorrent, SocialTube allows a requester to ask up to 4 online nodes at the same time to provide the video content, in order to guarantee provider availability and achieve low delay by retrieving chunks in parallel. It first contacts interest-cluster-peers, then followers, then the source node. If the requester still cannot find 4 providers after the source node is contacted, it resorts to the server as the only provider. Considering the server's high capacity, the requester does not need 4 providers if it has the server as a provider. This querying order distributes the load of chunk delivery among the swarm-peers while providing high chunk availability. The algorithm takes advantage of all resources for efficient video sharing without overloading specific nodes, and the server can guarantee the availability of the video even if the number of online users in a swarm is small.
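The querying order just described could be sketched as follows (assuming the requester already knows which swarm-peers are online; all names are illustrative):

import java.util.*;

// Sketch of SocialTube's provider selection: interest-cluster-peers first,
// then followers, then the source; if still fewer than 4 providers after
// contacting the source, fall back to the server as the only provider.
final class ProviderSelector {
    static final int MAX_PROVIDERS = 4;

    static List<String> selectProviders(List<String> onlineClusterPeers,
                                        List<String> onlineFollowers,
                                        String sourceId, boolean sourceOnline,
                                        String serverId) {
        List<String> providers = new ArrayList<>();
        addUpToMax(providers, onlineClusterPeers);
        addUpToMax(providers, onlineFollowers);
        if (providers.size() < MAX_PROVIDERS && sourceOnline)
            providers.add(sourceId);
        // The high-capacity server alone suffices when peers fall short.
        if (providers.size() < MAX_PROVIDERS)
            return List.of(serverId);
        return providers;
    }

    private static void addUpToMax(List<String> providers, List<String> peers) {
        for (String p : peers) {
            if (providers.size() == MAX_PROVIDERS) return;
            providers.add(p);
        }
    }
}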
Implementation:
Implementation is the stage of the project when the theoretical design is turned into a working system. It can thus be considered the most critical stage in achieving a successful new system and in giving the user confidence that the new system will work effectively.
The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, design of methods to achieve the changeover, and evaluation of those changeover methods.
Main Modules:-
1. User Module:
In this module, users have authentication and security when accessing the details presented in the system. Before accessing or searching the details, a user must have an account; otherwise, they must register first.
2. Sharing Videos:
On the home page, a user can view videos shared by friends, whether shared publicly or privately. The user can also share new photos/videos, or re-share existing ones, with friends, either publicly or privately, i.e., with a particular person or group.
3. Friend Request:
A user can send a friend request to a person by searching for their name. While searching, the user sees names and photos matching the input and can pick the person they were actually looking for. The user can also receive requests sent by others who want to become friends, and may accept or reject each request at their discretion.
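Given the JSP/MySQL/JDBC stack listed in the system configuration below, accepting or rejecting a pending request might be handled as in this sketch (the friend_requests table and its columns are assumptions for illustration, not part of the original design):

import java.sql.*;

// Sketch: mark a pending friend request accepted or rejected via JDBC.
// The friend_requests table and its columns are assumed, not specified.
public class FriendRequestDao {
    private final Connection conn;

    public FriendRequestDao(Connection conn) { this.conn = conn; }

    public void respond(int requestId, boolean accept) throws SQLException {
        String sql = "UPDATE friend_requests SET status = ? WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, accept ? "ACCEPTED" : "REJECTED");
            ps.setInt(2, requestId);
            ps.executeUpdate();
        }
    }
}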
System Configuration:

H/W System Configuration:
Processor - Pentium III
Speed - 1.1 GHz
RAM - 256 MB (min)
Hard Disk - 20 GB
Floppy Drive - 1.44 MB
Keyboard - Standard Windows Keyboard
Mouse - Two- or Three-Button Mouse
Monitor - SVGA

S/W System Configuration:
Operating System : Windows 95/98/2000/XP
Application Server : Tomcat 5.0/6.x
Front End : HTML, Java, JSP
Client-side Script : JavaScript
Server-side Script : Java Server Pages
Database : MySQL 5.0
Database Connectivity : JDBC