[minor] Fix grammar + typo issues

Closes #557, closes #678, closes #748, closes #806, closes #818, closes #842, closes #866, closes #948, closes #1024, closes #1313, closes #1458, closes #1461, closes #1465, closes #1491, closes #1503, closes #1539, closes #1611
twitter-team 2023-04-04 16:13:24 -05:00
parent 36588c650e
commit bb095608b7
20 changed files with 138 additions and 158 deletions


@@ -1,6 +1,6 @@
-# Twitter Recommendation Algorithm
+# Twitter's Recommendation Algorithm

-The Twitter Recommendation Algorithm is a set of services and jobs that are responsible for constructing and serving the
+Twitter's Recommendation Algorithm is a set of services and jobs that are responsible for constructing and serving the
 Home Timeline. For an introduction to how the algorithm works, please refer to our [engineering blog](https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm). The
 diagram below illustrates how major services and jobs interconnect.
@@ -13,24 +13,24 @@ These are the main components of the Recommendation Algorithm included in this r
 | Feature | [SimClusters](src/scala/com/twitter/simclusters_v2/README.md) | Community detection and sparse embeddings into those communities. |
 | | [TwHIN](https://github.com/twitter/the-algorithm-ml/blob/main/projects/twhin/README.md) | Dense knowledge graph embeddings for Users and Tweets. |
 | | [trust-and-safety-models](trust_and_safety_models/README.md) | Models for detecting NSFW or abusive content. |
-| | [real-graph](src/scala/com/twitter/interaction_graph/README.md) | Model to predict likelihood of a Twitter User interacting with another User. |
+| | [real-graph](src/scala/com/twitter/interaction_graph/README.md) | Model to predict the likelihood of a Twitter User interacting with another User. |
 | | [tweepcred](src/scala/com/twitter/graph/batch/job/tweepcred/README) | Page-Rank algorithm for calculating Twitter User reputation. |
 | | [recos-injector](recos-injector/README.md) | Streaming event processor for building input streams for [GraphJet](https://github.com/twitter/GraphJet) based services. |
 | | [graph-feature-service](graph-feature-service/README.md) | Serves graph features for a directed pair of Users (e.g. how many of User A's following liked Tweets from User B). |
 | Candidate Source | [search-index](src/java/com/twitter/search/README.md) | Find and rank In-Network Tweets. ~50% of Tweets come from this candidate source. |
 | | [cr-mixer](cr-mixer/README.md) | Coordination layer for fetching Out-of-Network tweet candidates from underlying compute services. |
-| | [user-tweet-entity-graph](src/scala/com/twitter/recos/user_tweet_entity_graph/README.md) (UTEG)| Maintains an in memory User to Tweet interaction graph, and finds candidates based on traversals of this graph. This is built on the [GraphJet](https://github.com/twitter/GraphJet) framework. Several other GraphJet based features and candidate sources are located [here](src/scala/com/twitter/recos) |
+| | [user-tweet-entity-graph](src/scala/com/twitter/recos/user_tweet_entity_graph/README.md) (UTEG)| Maintains an in memory User to Tweet interaction graph, and finds candidates based on traversals of this graph. This is built on the [GraphJet](https://github.com/twitter/GraphJet) framework. Several other GraphJet based features and candidate sources are located [here](src/scala/com/twitter/recos). |
 | | [follow-recommendation-service](follow-recommendations-service/README.md) (FRS)| Provides Users with recommendations for accounts to follow, and Tweets from those accounts. |
-| Ranking | [light-ranker](src/python/twitter/deepbird/projects/timelines/scripts/models/earlybird/README.md) | Light ranker model used by search index (Earlybird) to rank Tweets. |
+| Ranking | [light-ranker](src/python/twitter/deepbird/projects/timelines/scripts/models/earlybird/README.md) | Light Ranker model used by search index (Earlybird) to rank Tweets. |
 | | [heavy-ranker](https://github.com/twitter/the-algorithm-ml/blob/main/projects/home/recap/README.md) | Neural network for ranking candidate tweets. One of the main signals used to select timeline Tweets post candidate sourcing. |
-| Tweet mixing & filtering | [home-mixer](home-mixer/README.md) | Main service used to construct and serve the Home Timeline. Built on [product-mixer](product-mixer/README.md) |
+| Tweet mixing & filtering | [home-mixer](home-mixer/README.md) | Main service used to construct and serve the Home Timeline. Built on [product-mixer](product-mixer/README.md). |
 | | [visibility-filters](visibilitylib/README.md) | Responsible for filtering Twitter content to support legal compliance, improve product quality, increase user trust, protect revenue through the use of hard-filtering, visible product treatments, and coarse-grained downranking. |
 | | [timelineranker](timelineranker/README.md) | Legacy service which provides relevance-scored tweets from the Earlybird Search Index and UTEG service. |
 | Software framework | [navi](navi/navi/README.md) | High performance, machine learning model serving written in Rust. |
 | | [product-mixer](product-mixer/README.md) | Software framework for building feeds of content. |
 | | [twml](twml/README.md) | Legacy machine learning framework built on TensorFlow v1. |

-We include Bazel BUILD files for most components, but not a top level BUILD or WORKSPACE file.
+We include Bazel BUILD files for most components, but not a top-level BUILD or WORKSPACE file.

 ## Contributing


@@ -91,7 +91,7 @@ def parse_metric(config):
   elif metric_str == "linf":
     return faiss.METRIC_Linf
   else:
-    raise Exception(f"Uknown metric: {metric_str}")
+    raise Exception(f"Unknown metric: {metric_str}")

 def run_pipeline(argv=[]):
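For readers who want the surrounding context, the helper this hunk touches presumably maps a metric name from the index-building config onto a faiss metric constant. A minimal sketch under that assumption (the `config.metric` attribute and the accepted names are guesses, not taken from this repo):

```python
import faiss

def parse_metric(config):
    # Assumed shape: config carries a metric name such as "ip", "l2", or "linf".
    metric_str = config.metric.lower()
    if metric_str == "ip":
        return faiss.METRIC_INNER_PRODUCT
    elif metric_str == "l2":
        return faiss.METRIC_L2
    elif metric_str == "linf":
        return faiss.METRIC_Linf
    else:
        raise Exception(f"Unknown metric: {metric_str}")
```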


@@ -2,6 +2,6 @@
 CR-Mixer is a candidate generation service proposed as part of the Personalization Strategy vision for Twitter. Its aim is to speed up the iteration and development of candidate generation and light ranking. The service acts as a lightweight coordinating layer that delegates candidate generation tasks to underlying compute services. It focuses on Twitter's candidate generation use cases and offers a centralized platform for fetching, mixing, and managing candidate sources and light rankers. The overarching goal is to increase the speed and ease of testing and developing candidate generation pipelines, ultimately delivering more value to Twitter users.

-CR-Mixer act as a configurator and delegator, providing abstractions for the challenging parts of candidate generation and handling performance issues. It will offer a 1-stop-shop for fetching and mixing candidate sources, a managed and shared performant platform, a light ranking layer, a common filtering layer, a version control system, a co-owned feature switch set, and peripheral tooling.
+CR-Mixer acts as a configurator and delegator, providing abstractions for the challenging parts of candidate generation and handling performance issues. It will offer a 1-stop-shop for fetching and mixing candidate sources, a managed and shared performant platform, a light ranking layer, a common filtering layer, a version control system, a co-owned feature switch set, and peripheral tooling.

 CR-Mixer's pipeline consists of 4 steps: source signal extraction, candidate generation, filtering, and ranking. It also provides peripheral tooling like scribing, debugging, and monitoring. The service fetches source signals externally from stores like UserProfileService and RealGraph, calls external candidate generation services, and caches results. Filters are applied for deduping and pre-ranking, and a light ranking step follows.
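To make the four-step pipeline above concrete, here is a deliberately simplified sketch of a coordinating layer in that spirit. Every name and signature below is illustrative; it is not CR-Mixer's actual API.

```python
from typing import Callable, List

def run_candidate_pipeline(
    user_id: int,
    signal_stores: List[Callable[[int], List[int]]],        # e.g. wrappers around UserProfileService / RealGraph lookups
    candidate_sources: List[Callable[[List[int]], List[dict]]],
    filters: List[Callable[[List[dict]], List[dict]]],
    light_ranker: Callable[[List[dict]], List[dict]],
) -> List[dict]:
    # 1. Source signal extraction: gather source signals for this user.
    signals = [s for store in signal_stores for s in store(user_id)]
    # 2. Candidate generation: delegate to the underlying compute services.
    candidates = [c for source in candidate_sources for c in source(signals)]
    # 3. Filtering: dedupe and pre-rank filters applied in sequence.
    for f in filters:
        candidates = f(candidates)
    # 4. Light ranking: score and order the surviving candidates.
    return light_ranker(candidates)
```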


@@ -1,13 +1,10 @@
-# recos-injector
+# Recos-Injector

-Recos-Injector is a streaming event processor for building input streams for GraphJet based services.
-It is general purpose in that it consumes arbitrary incoming event stream (e.x. Fav, RT, Follow, client_events, etc), applies
-filtering, combines and publishes cleaned up events to corresponding GraphJet services.
-Each GraphJet based service subscribes to a dedicated Kafka topic. Recos-Injector enables a GraphJet based service to consume any
-event it wants
+Recos-Injector is a streaming event processor used to build input streams for GraphJet-based services. It is a general-purpose tool that consumes arbitrary incoming event streams (e.g., Fav, RT, Follow, client_events, etc.), applies filtering, and combines and publishes cleaned up events to corresponding GraphJet services. Each GraphJet-based service subscribes to a dedicated Kafka topic, and Recos-Injector enables GraphJet-based services to consume any event they want.

-## How to run recos-injector-server tests
+## How to run Recos-Injector server tests

-Tests can be run by using this command from your project's root directory:
+You can run tests by using the following command from your project's root directory:

 $ bazel build recos-injector/...
 $ bazel test recos-injector/...
@@ -28,17 +25,16 @@ terminal:
 $ curl -s localhost:9990/admin/ping
 pong

-Run `curl -s localhost:9990/admin` to see a list of all of the available admin
-endpoints.
+Run `curl -s localhost:9990/admin` to see a list of all available admin endpoints.

-## Querying recos-injector-server from a Scala console
+## Querying Recos-Injector server from a Scala console

-Recos Injector does not have a thrift endpoint. It reads Event Bus and Kafka queues and writes to recos_injector kafka.
+Recos-Injector does not have a Thrift endpoint. Instead, it reads Event Bus and Kafka queues and writes to the Recos-Injector Kafka.

 ## Generating a package for deployment

-To package your service into a zip for deployment:
+To package your service into a zip file for deployment, run:

 $ bazel bundle recos-injector/server:bin --bundle-jvm-archive=zip

-If successful, a file `dist/recos-injector-server.zip` will be created.
+If the command is successful, a file named `dist/recos-injector-server.zip` will be created.


@@ -15,7 +15,7 @@ SimClusters from the Linear Algebra Perspective discussed the difference between
 However, calculating the cosine similarity between two Tweets is pretty expensive in Tweet candidate generation. In TWISTLY, we scan at most 15,000 (6 source tweets * 25 clusters * 100 tweets per clusters) tweet candidates for every Home Timeline request. The traditional algorithm needs to make API calls to fetch 15,000 tweet SimCluster embeddings. Consider that we need to process over 6,000 RPS, its hard to support by the existing infrastructure.

-## SimClusters Approximate Cosine Similariy Core Algorithm
+## SimClusters Approximate Cosine Similarity Core Algorithm

 1. Provide a source SimCluster Embedding *SV*, *SV = [(SC1, Score), (SC2, Score), (SC3, Score) …]*
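For orientation, a SimClusters embedding is a sparse list of (cluster, score) pairs, so the exact cosine similarity the passage calls expensive only involves the clusters two Tweets share; the cost comes from fetching and scoring up to 15,000 (6 × 25 × 100) candidate embeddings per request at 6,000+ RPS. A small illustration of the exact computation that the approximate algorithm sidesteps, with a dict-of-scores representation assumed purely for the example:

```python
import math

def cosine_similarity(sv: dict, cv: dict) -> float:
    """Exact cosine similarity between two sparse SimClusters embeddings,
    represented here as {cluster_id: score} maps."""
    shared = set(sv) & set(cv)
    dot = sum(sv[c] * cv[c] for c in shared)
    norm_s = math.sqrt(sum(s * s for s in sv.values()))
    norm_c = math.sqrt(sum(s * s for s in cv.values()))
    if norm_s == 0.0 or norm_c == 0.0:
        return 0.0
    return dot / (norm_s * norm_c)

# Example: a source embedding SV and one candidate embedding CV.
sv = {101: 0.8, 205: 0.5, 309: 0.2}
cv = {101: 0.6, 309: 0.7, 412: 0.4}
print(cosine_similarity(sv, cv))
```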


@@ -513,12 +513,12 @@ public class BasicIndexingConverter {
     Optional<Long> inReplyToUserId = Optional.of(inReplyToUserIdVal).filter(x -> x > 0);
     Optional<Long> inReplyToStatusId = Optional.of(inReplyToStatusIdVal).filter(x -> x > 0);

-    // We have six combinations here. A tweet can be
+    // We have six combinations here. A Tweet can be
     // 1) a reply to another tweet (then it has both in-reply-to-user-id and
     //    in-reply-to-status-id set),
     // 2) directed-at a user (then it only has in-reply-to-user-id set),
     // 3) not a reply at all.
-    // Additionally, it may or may not be a retweet (if it is, then it has retweet-user-id and
+    // Additionally, it may or may not be a Retweet (if it is, then it has retweet-user-id and
     //    retweet-status-id set).
     //
     // We want to set some fields unconditionally, and some fields (reference-author-id and


@@ -22,13 +22,13 @@ import static com.twitter.search.modeling.tweet_ranking.TweetScoringFeatures.Fea
 /**
  * Loads the scoring models for tweets and provides access to them.
  *
- * This class relies on a list ModelLoader objects to retrieve the objects from them. It will
+ * This class relies on a list of ModelLoader objects to retrieve the objects from them. It will
  * return the first model found according to the order in the list.
  *
  * For production, we load models from 2 sources: classpath and HDFS. If a model is available
  * from HDFS, we return it, otherwise we use the model from the classpath.
  *
- * The models used in for default requests (i.e. not experiments) MUST be present in the
+ * The models used for default requests (i.e. not experiments) MUST be present in the
  * classpath, this allows us to avoid errors if they can't be loaded from HDFS.
  * Models for experiments can live only in HDFS, so we don't need to redeploy Earlybird if we
  * want to test them.
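The "first model found wins" behaviour described in this comment is an ordered-fallback lookup. A language-agnostic sketch of the idea, with illustrative names rather than the Java class's real API:

```python
from typing import Callable, Optional, Sequence

def load_model(model_name: str,
               loaders: Sequence[Callable[[str], Optional[object]]]) -> Optional[object]:
    """Return the first model any loader can provide, honouring list order.
    For example, loaders = [hdfs_loader, classpath_loader] prefers the HDFS copy
    and falls back to the classpath copy that must always be present."""
    for loader in loaders:
        model = loader(model_name)
        if model is not None:
            return model
    return None
```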


@@ -3,76 +3,81 @@ from twml.feature_config import FeatureConfigBuilder
 def get_feature_config(data_spec_path, label):
-  return FeatureConfigBuilder(data_spec_path=data_spec_path, debug=True) \
+  return (
+    FeatureConfigBuilder(data_spec_path=data_spec_path, debug=True)
     .batch_add_features(
       [
         ("ebd.author_specific_score", "A"),
         ("ebd.has_diff_lang", "A"),
         ("ebd.has_english_tweet_diff_ui_lang", "A"),
         ("ebd.has_english_ui_diff_tweet_lang", "A"),
         ("ebd.is_self_tweet", "A"),
         ("ebd.tweet_age_in_secs", "A"),
         ("encoded_tweet_features.favorite_count", "A"),
         ("encoded_tweet_features.from_verified_account_flag", "A"),
         ("encoded_tweet_features.has_card_flag", "A"),
         # ("encoded_tweet_features.has_consumer_video_flag", "A"),
         ("encoded_tweet_features.has_image_url_flag", "A"),
         ("encoded_tweet_features.has_link_flag", "A"),
         ("encoded_tweet_features.has_multiple_hashtags_or_trends_flag", "A"),
         # ("encoded_tweet_features.has_multiple_media_flag", "A"),
         ("encoded_tweet_features.has_native_image_flag", "A"),
         ("encoded_tweet_features.has_news_url_flag", "A"),
         ("encoded_tweet_features.has_periscope_flag", "A"),
         ("encoded_tweet_features.has_pro_video_flag", "A"),
         ("encoded_tweet_features.has_quote_flag", "A"),
         ("encoded_tweet_features.has_trend_flag", "A"),
         ("encoded_tweet_features.has_video_url_flag", "A"),
         ("encoded_tweet_features.has_vine_flag", "A"),
         ("encoded_tweet_features.has_visible_link_flag", "A"),
         ("encoded_tweet_features.is_offensive_flag", "A"),
         ("encoded_tweet_features.is_reply_flag", "A"),
         ("encoded_tweet_features.is_retweet_flag", "A"),
         ("encoded_tweet_features.is_sensitive_content", "A"),
         # ("encoded_tweet_features.is_user_new_flag", "A"),
         ("encoded_tweet_features.language", "A"),
         ("encoded_tweet_features.link_language", "A"),
         ("encoded_tweet_features.num_hashtags", "A"),
         ("encoded_tweet_features.num_mentions", "A"),
         # ("encoded_tweet_features.profile_is_egg_flag", "A"),
         ("encoded_tweet_features.reply_count", "A"),
         ("encoded_tweet_features.retweet_count", "A"),
         ("encoded_tweet_features.text_score", "A"),
         ("encoded_tweet_features.user_reputation", "A"),
         ("extended_encoded_tweet_features.embeds_impression_count", "A"),
         ("extended_encoded_tweet_features.embeds_impression_count_v2", "A"),
         ("extended_encoded_tweet_features.embeds_url_count", "A"),
         ("extended_encoded_tweet_features.embeds_url_count_v2", "A"),
         ("extended_encoded_tweet_features.favorite_count_v2", "A"),
         ("extended_encoded_tweet_features.label_abusive_hi_rcl_flag", "A"),
         ("extended_encoded_tweet_features.label_dup_content_flag", "A"),
         ("extended_encoded_tweet_features.label_nsfw_hi_prc_flag", "A"),
         ("extended_encoded_tweet_features.label_nsfw_hi_rcl_flag", "A"),
         ("extended_encoded_tweet_features.label_spam_flag", "A"),
         ("extended_encoded_tweet_features.label_spam_hi_rcl_flag", "A"),
         ("extended_encoded_tweet_features.quote_count", "A"),
         ("extended_encoded_tweet_features.reply_count_v2", "A"),
         ("extended_encoded_tweet_features.retweet_count_v2", "A"),
         ("extended_encoded_tweet_features.weighted_favorite_count", "A"),
         ("extended_encoded_tweet_features.weighted_quote_count", "A"),
         ("extended_encoded_tweet_features.weighted_reply_count", "A"),
         ("extended_encoded_tweet_features.weighted_retweet_count", "A"),
       ]
-    ).add_labels([
-      label,  # Tensor index: 0
-      "recap.engagement.is_clicked",  # Tensor index: 1
-      "recap.engagement.is_favorited",  # Tensor index: 2
-      "recap.engagement.is_open_linked",  # Tensor index: 3
-      "recap.engagement.is_photo_expanded",  # Tensor index: 4
-      "recap.engagement.is_profile_clicked",  # Tensor index: 5
-      "recap.engagement.is_replied",  # Tensor index: 6
-      "recap.engagement.is_retweeted",  # Tensor index: 7
-      "recap.engagement.is_video_playback_50",  # Tensor index: 8
-      "timelines.earlybird_score",  # Tensor index: 9
-    ]) \
-    .define_weight("meta.record_weight/type=earlybird") \
+    )
+    .add_labels(
+      [
+        label,  # Tensor index: 0
+        "recap.engagement.is_clicked",  # Tensor index: 1
+        "recap.engagement.is_favorited",  # Tensor index: 2
+        "recap.engagement.is_open_linked",  # Tensor index: 3
+        "recap.engagement.is_photo_expanded",  # Tensor index: 4
+        "recap.engagement.is_profile_clicked",  # Tensor index: 5
+        "recap.engagement.is_replied",  # Tensor index: 6
+        "recap.engagement.is_retweeted",  # Tensor index: 7
+        "recap.engagement.is_video_playback_50",  # Tensor index: 8
+        "timelines.earlybird_score",  # Tensor index: 9
+      ]
+    )
+    .define_weight("meta.record_weight/type=earlybird")
     .build()
+  )
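For orientation, the builder above is consumed by the light-ranker training scripts; a hedged usage sketch, where the path and label below are placeholders rather than values taken from the repo:

```python
# Hypothetical call site: the data spec path and label name are placeholders.
feature_config = get_feature_config(
    data_spec_path="path/to/data_spec.json",
    label="recap.engagement.is_favorited",
)
```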


@@ -1,3 +1,5 @@
 Tweepcred
 Tweepcred is a social network analysis tool that calculates the influence of Twitter users based on their interactions with other users. The tool uses the PageRank algorithm to rank users based on their influence.
 PageRank Algorithm
@@ -70,4 +72,4 @@ The algorithm tests for convergence by calculating the total difference between
 This is a helper class called Reputation that contains methods for calculating a user's reputation score. The first method called scaledReputation takes a Double parameter raw which represents the user's page rank, and returns a Byte value that represents the user's reputation on a scale of 0 to 100. This method uses a formula that involves converting the logarithm of the page rank to a number between 0 and 100.

 The second method called adjustReputationsPostCalculation takes three parameters: mass (a Double value representing the user's page rank), numFollowers (an Int value representing the number of followers a user has), and numFollowings (an Int value representing the number of users a user is following). This method reduces the page rank of users who have a low number of followers but a high number of followings. It calculates a division factor based on the ratio of followings to followers, and reduces the user's page rank by dividing it by this factor. The method returns the adjusted page rank.
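The two helpers described above can be summarized in a small sketch. The real Reputation class's constants, log base, and exact division factor are not shown in this excerpt, so the numbers below are explicitly made-up placeholders:

```python
import math

def scaled_reputation(raw: float) -> int:
    """Map a raw PageRank value to a 0-100 reputation score via a log transform.
    The scaling constants here are placeholders, not the production values."""
    if raw <= 0.0:
        return 0
    score = 10.0 * (math.log10(raw) + 10.0)  # placeholder affine transform of log(page rank)
    return int(max(0, min(100, round(score))))

def adjust_reputations_post_calculation(mass: float, num_followers: int, num_followings: int) -> float:
    """Reduce the page rank of accounts that follow many users but have few followers,
    dividing by a factor based on the followings-to-followers ratio (assumed form)."""
    followers = max(num_followers, 1)
    ratio = num_followings / followers
    if ratio <= 1.0:
        return mass
    return mass / ratio
```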


@@ -1,17 +1,17 @@
 # UserTweetEntityGraph (UTEG)

 ## What is it
-User Tweet Entity Graph (UTEG) is a Finalge thrift service built on the GraphJet framework. In maintains a graph of user-tweet relationships and serves user recommendations based on traversals in this graph.
+User Tweet Entity Graph (UTEG) is a Finalge thrift service built on the GraphJet framework. It maintains a graph of user-tweet relationships and serves user recommendations based on traversals in this graph.

 ## How is it used on Twitter
 UTEG generates the "XXX Liked" out-of-network tweets seen on Twitter's Home Timeline.

 The core idea behind UTEG is collaborative filtering. UTEG takes a user's weighted follow graph (i.e a list of weighted userIds) as input,
-performs efficient traversal & aggregation, and returns the top weighted tweets engaged basd on # of users that engaged the tweet, as well as
+performs efficient traversal & aggregation, and returns the top-weighted tweets engaged based on # of users that engaged the tweet, as well as
 the engaged users' weights.

-UTEG is a stateful service and relies on a Kafka stream to ingest & persist states. It maintains an in-memory user engagements over the past
+UTEG is a stateful service and relies on a Kafka stream to ingest & persist states. It maintains in-memory user engagements over the past
 24-48 hours. Older events are dropped and GC'ed.

-For full details on storage & processing, please check out our open-sourced project GraphJet, a general-purpose high performance in-memory storage engine.
+For full details on storage & processing, please check out our open-sourced project GraphJet, a general-purpose high-performance in-memory storage engine.
 - https://github.com/twitter/GraphJet
 - http://www.vldb.org/pvldb/vol9/p1281-sharma.pdf
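As a rough illustration of the traversal-and-aggregation step described above (not UTEG's or GraphJet's actual implementation), collaborative filtering over a weighted follow graph amounts to summing edge weights per engaged Tweet:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def top_liked_tweets(
    weighted_follow_graph: Dict[int, float],   # followed user id -> edge weight
    likes_by_user: Dict[int, List[int]],        # user id -> tweet ids that user engaged with
    k: int = 10,
) -> List[Tuple[int, float]]:
    """Score each candidate tweet as the sum of the weights of the followed
    users who engaged with it, then return the top-k (tweet_id, score) pairs."""
    tweet_scores: Dict[int, float] = defaultdict(float)
    for user_id, weight in weighted_follow_graph.items():
        for tweet_id in likes_by_user.get(user_id, []):
            tweet_scores[tweet_id] += weight
    return sorted(tweet_scores.items(), key=lambda kv: kv[1], reverse=True)[:k]
```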


@@ -78,7 +78,7 @@ sealed trait SimClustersEmbedding extends Equals {
     CosineSimilarityUtil.applyNormArray(sortedScores, expScaledNorm)

   /**
-   * The Standard Deviation of a Embedding.
+   * The Standard Deviation of an Embedding.
   */
   lazy val std: Double = {
     if (scores.isEmpty) {


@@ -306,7 +306,7 @@ struct ThriftFacetRankingOptions {
   // penalty for keyword stuffing
   60: optional i32 multipleHashtagsOrTrendsPenalty

-  // Langauge related boosts, similar to those in relevance ranking options. By default they are
+  // Language related boosts, similar to those in relevance ranking options. By default they are
   // all 1.0 (no-boost).
   // When the user language is english, facet language is not
   11: optional double langEnglishUIBoost = 1.0


@@ -728,7 +728,7 @@ struct ThriftSearchResultMetadata {
   29: optional double parusScore

   // Extra feature data, all new feature fields you want to return from Earlybird should go into
-  // this one, the outer one is always reaching its limit of the nubmer of fields JVM can
+  // this one, the outer one is always reaching its limit of the number of fields JVM can
   // comfortably support!!
   86: optional ThriftSearchResultExtraMetadata extraMetadata
@@ -831,7 +831,7 @@ struct ThriftSearchResult {
   12: optional list<hits.ThriftHits> cardTitleHitHighlights
   13: optional list<hits.ThriftHits> cardDescriptionHitHighlights

-  // Expansion types, if expandResult == False, the expasions set should be ignored.
+  // Expansion types, if expandResult == False, the expansions set should be ignored.
   8: optional bool expandResult = 0
   9: optional set<expansions.ThriftTweetExpansionType> expansions
@@ -971,7 +971,7 @@ struct ThriftTermStatisticsResults {
   // The binIds will correspond to the times of the hits matching the driving search query for this
   // term statistics request.
   // If there were no hits matching the search query, numBins binIds will be returned, but the
-  // values of the binIds will not meaninfully correspond to anything related to the query, and
+  // values of the binIds will not meaningfully correspond to anything related to the query, and
   // should not be used. Such cases can be identified by ThriftSearchResults.numHitsProcessed being
   // set to 0 in the response, and the response not being early terminated.
   3: optional list<i32> binIds
@@ -1097,8 +1097,8 @@ struct ThriftSearchResults {
   // Superroots' schema merge/choose logic when returning results to clients:
   // . pick the schema based on the order of: realtime > protected > archive
   // . because of the above ordering, it is possible that archive earlybird schema with a new flush
-  // verion (with new bit features) might be lost to older realtime earlybird schema; this is
-  // considered to to be rare and accetable because one realtime earlybird deploy would fix it
+  // version (with new bit features) might be lost to older realtime earlybird schema; this is
+  // considered to to be rare and acceptable because one realtime earlybird deploy would fix it
   21: optional features.ThriftSearchFeatureSchema featureSchema

   // How long it took to score the results in earlybird (in nanoseconds). The number of results
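The "realtime > protected > archive" rule in that comment is a simple priority pick; an illustrative sketch of the merge choice (not superroot's actual code):

```python
from typing import Dict, Optional

def choose_feature_schema(schemas_by_tier: Dict[str, object]) -> Optional[object]:
    """Return the schema from the highest-priority tier that produced one."""
    for tier in ("realtime", "protected", "archive"):  # priority order from the comment
        schema = schemas_by_tier.get(tier)
        if schema is not None:
            return schema
    return None
```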


@@ -29,8 +29,8 @@ struct AdhocSingleSideClusterScores {
 * we implement will use search abuse reports and impressions. We can build stores for new values
 * in the future.
 *
-* The consumer creates the interactions which the author recieves. For instance, the consumer
-* creates an abuse report for an author. The consumer scores are related to the interation creation
+* The consumer creates the interactions which the author receives. For instance, the consumer
+* creates an abuse report for an author. The consumer scores are related to the interaction creation
 * behavior of the consumer. The author scores are related to the whether the author receives these
 * interactions.
 *


@@ -70,7 +70,7 @@ struct TweetTopKTweetsWithScore {
 /**
  * The generic SimClustersEmbedding for online long-term storage and real-time calculation.
  * Use SimClustersEmbeddingId as the only identifier.
- * Warning: Doesn't include modelversion and embedding type in the value struct.
+ * Warning: Doesn't include model version and embedding type in the value struct.
 **/
 struct SimClustersEmbedding {
   1: required list<SimClusterWithScore> embedding


@@ -50,7 +50,7 @@ struct CandidateTweets {
 }(hasPersonalData = 'true')

 /**
- * An encapuslated collection of reference tweets
+ * An encapsulated collection of reference tweets
 **/
 struct ReferenceTweets {
   1: required i64 targetUserId(personalDataType = 'UserId')


@@ -33,12 +33,12 @@ enum EmbeddingType {
   Pop10000RankDecay11Tweet = 31,
   OonPop1000RankDecayTweet = 32,

-  // [Experimental] Offline generated produciton-like LogFavScore-based Tweet Embedding
+  // [Experimental] Offline generated production-like LogFavScore-based Tweet Embedding
   OfflineGeneratedLogFavBasedTweet = 40,

   // Reserve 51-59 for Ads Embedding
-  LogFavBasedAdsTweet = 51, // Experimenal embedding for ads tweet candidate
-  LogFavClickBasedAdsTweet = 52, // Experimenal embedding for ads tweet candidate
+  LogFavBasedAdsTweet = 51, // Experimental embedding for ads tweet candidate
+  LogFavClickBasedAdsTweet = 52, // Experimental embedding for ads tweet candidate

   // Reserve 60-69 for Evergreen content
   LogFavBasedEvergreenTweet = 60,
@@ -104,7 +104,7 @@ enum EmbeddingType {
   //Reserved 401 - 500 for Space embedding
   FavBasedApeSpace = 401 // DEPRECATED
   LogFavBasedListenerSpace = 402 // DEPRECATED
-  LogFavBasedAPESpeakerSpace = 403 // DEPRCATED
+  LogFavBasedAPESpeakerSpace = 403 // DEPRECATED
   LogFavBasedUserInterestedInListenerSpace = 404 // DEPRECATED

   // Experimental, internal-only IDs


@@ -1,36 +1,13 @@
-Overview
-========
+# TimelineRanker

-**TimelineRanker** (TLR) is a legacy service which provides relevance-scored tweets from the Earlybird Search Index and User Tweet Entity Graph (UTEG) service. Despite its name, it no longer does any kind of heavy ranking/model based ranking itself - just uses relevance scores from the Search Index for ranked tweet endpoints.
+**TimelineRanker** (TLR) is a legacy service that provides relevance-scored tweets from the Earlybird Search Index and User Tweet Entity Graph (UTEG) service. Despite its name, it no longer performs heavy ranking or model-based ranking itself; it only uses relevance scores from the Search Index for ranked tweet endpoints.

 The following is a list of major services that Timeline Ranker interacts with:

-**Earlybird-root-superroot (a.k.a Search)**
-
-Timeline Ranker calls the Search Index's super root to fetch a list of Tweets.
-
-**User Tweet Entity Graph (UTEG)**
-
-Timeline Ranker calls UTEG to fetch a list of tweets liked by the users you follow.
-
-**Socialgraph**
-
-Timeline Ranker calls Social Graph Service to obtain follow graph and user states such as blocked, muted, retweets muted, etc.
-
-**TweetyPie**
-
-Timeline Ranker hydrates tweets by calling TweetyPie so that it can post-filter tweets based on certain hydrated fields.
-
-**Manhattan**
-
-Timeline Ranker hydrates some tweet features (eg, user languages) from Manhattan.
-
-**Home Mixer**
-
-Home Mixer calls Timeline Ranker to fetch tweets from the Earlybird Search Index and User Tweet Entity Graph (UTEG) service to power both the For You and Following Home Timelines.
-
-Timeline Ranker does light ranking based on Earlybird tweet candidate scores and truncates to the number of candidates requested by Home Mixer based on these scores
+- **Earlybird-root-superroot (a.k.a Search):** Timeline Ranker calls the Search Index's super root to fetch a list of Tweets.
+- **User Tweet Entity Graph (UTEG):** Timeline Ranker calls UTEG to fetch a list of tweets liked by the users you follow.
+- **Socialgraph:** Timeline Ranker calls Social Graph Service to obtain the follow graph and user states such as blocked, muted, retweets muted, etc.
+- **TweetyPie:** Timeline Ranker hydrates tweets by calling TweetyPie to post-filter tweets based on certain hydrated fields.
+- **Manhattan:** Timeline Ranker hydrates some tweet features (e.g., user languages) from Manhattan.
+
+**Home Mixer** calls Timeline Ranker to fetch tweets from the Earlybird Search Index and User Tweet Entity Graph (UTEG) service to power both the For You and Following Home Timelines. Timeline Ranker performs light ranking based on Earlybird tweet candidate scores and truncates to the number of candidates requested by Home Mixer based on these scores.
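The light-ranking step described in the last paragraph boils down to sorting by the Earlybird relevance score and truncating to the requested count; an illustrative sketch (not TLR's actual code):

```python
from typing import List, Tuple

def light_rank(candidates: List[Tuple[int, float]], max_results: int) -> List[int]:
    """candidates: (tweet_id, earlybird_score) pairs.
    Sort by the relevance score already attached by the Search Index and
    truncate to the number of candidates the caller (e.g. Home Mixer) requested."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    return [tweet_id for tweet_id, _score in ranked[:max_results]]
```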


@@ -3,8 +3,8 @@ Trust and Safety Models
 We decided to open source the training code of the following models:
 - pNSFWMedia: Model to detect tweets with NSFW images. This includes adult and porn content.
-- pNSFWText: Model to detect tweets with NSFW text, adult/sexual topics
-- pToxicity: Model to detect toxic tweets. Toxicity includes marginal content like insults and certain types of harassment. Toxic content does not violate Twitter terms of service
-- pAbuse: Model to detect abusive content. This includes violations of Twitter terms of service, including hate speech, targeted harassment and abusive behavior.
+- pNSFWText: Model to detect tweets with NSFW text, adult/sexual topics.
+- pToxicity: Model to detect toxic tweets. Toxicity includes marginal content like insults and certain types of harassment. Toxic content does not violate Twitter's terms of service.
+- pAbuse: Model to detect abusive content. This includes violations of Twitter's terms of service, including hate speech, targeted harassment and abusive behavior.

 We have several more models and rules that we are not going to open source at this time because of the adversarial nature of this area. The team is considering open sourcing more models going forward and will keep the community posted accordingly.


@@ -1,7 +1,7 @@
 # TWML

 ---
-Note: `twml` is no longer under development. Much of the code here is not out of date and unused.
+Note: `twml` is no longer under development. Much of the code here is out of date and unused.
 It is included here for completeness, because `twml` is still used to train the light ranker models
 (see `src/python/twitter/deepbird/projects/timelines/scripts/models/earlybird/README.md`)
 ---
@@ -10,4 +10,4 @@ TWML is one of Twitter's machine learning frameworks, which uses Tensorflow unde
 deprecated,
 it is still currently used to train the Earlybird light ranking models (
 see `src/python/twitter/deepbird/projects/timelines/scripts/models/earlybird/train.py`).
 The most relevant part of this is the `DataRecordTrainer` class, which is where the core training logic resides.