Tuesday, January 30, 2024

OpenAI Embeddings

Introduction

OpenAI Embeddings are vector representations of text created by OpenAI's language models. These embeddings capture the semantic meaning of the text in a high-dimensional space. This means that texts with similar meanings are close to each other in this space, and texts with different meanings are far apart.

The primary use of these embeddings is in natural language processing (NLP) tasks where understanding the context and meaning of text is crucial. Some common applications include:

  1. Semantic Text Similarity: Determining how similar two pieces of text are, which can be used in recommendation systems, search engines, or duplicate detection.
  2. Text Classification: Categorizing text into predefined classes. The embeddings can be used as input features for a classifier.
  3. Clustering: Grouping similar texts together. Since embeddings represent semantic meanings, texts on similar topics tend to cluster together.
  4. Information Retrieval: Enhancing search engines by finding documents that are semantically related to the query, not just textually similar.

To generate embeddings, you typically pass your text through the model, which then outputs a high-dimensional vector. You can then use this vector in various machine learning models or for any of the applications mentioned above. OpenAI has provided APIs for generating embeddings, making it easy for developers to integrate this technology into their applications.
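To make "closeness" concrete, here is a minimal sketch using made-up 3-dimensional vectors (real OpenAI embeddings have far more dimensions) and cosine similarity, one common way to compare two vectors:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|); values near 1.0 mean
    # the vectors point in nearly the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional "embeddings" for illustration only.
books_1 = [0.8, 0.1, 0.1]   # "I love reading books"
books_2 = [0.7, 0.2, 0.1]   # "Books are my passion"
cooking = [0.1, 0.8, 0.1]   # "Cooking is a great hobby"

print(cosine_similarity(books_1, books_2))  # high: similar meaning
print(cosine_similarity(books_1, cooking))  # low: different meaning
```

The two book-related vectors score much higher against each other than against the cooking vector, which is exactly the property the applications above rely on.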

Examples:

In a vector database, each text is converted into a high-dimensional vector using the embeddings, and these vectors are stored in the database. Here are some simplified examples:
  1. Text Entries:

    • Text 1: "I love reading books"
    • Text 2: "Books are my passion"
    • Text 3: "Cooking is a great hobby"
    • Text 4: "I enjoy hiking in the mountains"
    • Text 5: "Mountains are breathtaking"
  2. Corresponding OpenAI Embeddings (hypothetical and highly simplified for illustration):

    • Embedding 1 (Text 1): [0.8, 0.1, 0.1]
    • Embedding 2 (Text 2): [0.7, 0.2, 0.1]
    • Embedding 3 (Text 3): [0.1, 0.8, 0.1]
    • Embedding 4 (Text 4): [0.2, 0.1, 0.7]
    • Embedding 5 (Text 5): [0.3, 0.1, 0.6]

In a vector database, these embeddings can be indexed for various purposes such as semantic search, clustering, or finding similar texts. For instance:

  • Semantic Search: If you query the database with a vector close to [0.7, 0.2, 0.1] (representing interest in books), the database will return Text 1 and Text 2 as they have the closest vectors.
  • Clustering: The database can cluster the vectors into groups, potentially grouping Text 1 and Text 2 in one cluster (related to books), Text 3 in another (related to cooking), and Text 4 and Text 5 in a third cluster (related to outdoor activities).
  • Finding Similar Texts: If you have a new text, say "I love the mountains", converted to a vector [0.3, 0.1, 0.6], the database can quickly find Text 4 and Text 5 as the most similar texts based on the vector proximity.
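The three lookups above can be sketched in plain Python over the toy vectors, ranking every stored text by cosine similarity to a query vector. (A real vector database would use an approximate index instead of this brute-force scan.)

```python
import math

# Toy "database": text -> hypothetical 3-dimensional embedding.
database = {
    "I love reading books":            [0.8, 0.1, 0.1],
    "Books are my passion":            [0.7, 0.2, 0.1],
    "Cooking is a great hobby":        [0.1, 0.8, 0.1],
    "I enjoy hiking in the mountains": [0.2, 0.1, 0.7],
    "Mountains are breathtaking":      [0.3, 0.1, 0.6],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def nearest(query_vector, k=2):
    # Brute-force scan: rank every stored text by similarity to the query.
    ranked = sorted(database,
                    key=lambda t: cosine_similarity(query_vector, database[t]),
                    reverse=True)
    return ranked[:k]

# "I love the mountains" -> [0.3, 0.1, 0.6] retrieves the two mountain texts.
print(nearest([0.3, 0.1, 0.6]))
```

Querying with the book-like vector [0.7, 0.2, 0.1] would instead return Text 1 and Text 2, matching the semantic search example above.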

In real scenarios, the vectors are high-dimensional (often well over a thousand dimensions; the text-similarity-babbage-001 model used in the coding sample below produces 2,048-dimensional vectors) and capture much more nuanced semantic meanings. The database operations (search, cluster, find similar) use sophisticated algorithms to handle these high-dimensional spaces efficiently.

Interpretation of the vectors:

Real embeddings are high-dimensional representations, but let's break down the interpretation based on the simplified example above:

  1. Dimensions Reflect Semantic Features:

    • The dimensions (each element in the vector) can be thought of as representing some abstract features of the text. In real embeddings, these features are complex and not directly interpretable by humans. However, in this simplified example, you might imagine that each dimension could loosely correspond to different topics or concepts (e.g., the first dimension might be related to literature, the second to cooking, and the third to outdoor activities).
  2. Magnitude in Each Dimension:

    • Embedding 1 (Text 1): [0.8, 0.1, 0.1]
      • This text has a high value in the first dimension and low in the others, suggesting a strong relation to the concept represented by the first dimension (e.g., literature) and weak relation to the other concepts.
    • Embedding 2 (Text 2): [0.7, 0.2, 0.1]
      • Similar to Text 1, this text is also strongly related to the first dimension but has a slightly higher relation to the second dimension compared to Text 1.
    • Embedding 3 (Text 3): [0.1, 0.8, 0.1]
      • This text is strongly related to the second dimension, suggesting a strong relation to the concept represented by that dimension (e.g., cooking).
    • Embedding 4 (Text 4): [0.2, 0.1, 0.7]
    • Embedding 5 (Text 5): [0.3, 0.1, 0.6]
      • Both texts have their highest values in the third dimension, indicating a strong relationship with the concept related to that dimension (e.g., outdoor activities), with Text 5 having a slightly stronger relation to the first dimension compared to Text 4.
  3. Distance Between Vectors:

    • The Euclidean distance or cosine similarity between vectors indicates how similar the texts are in terms of their semantic content. Texts with similar vectors are semantically similar. For instance:
      • Text 1 and Text 2 are quite close to each other, indicating that they are semantically similar.
      • Text 4 and Text 5 are also close, suggesting a similarity in their content.
  4. Application in Vector Database:

    • When you use these embeddings with a vector index library (like Annoy), you typically perform operations like finding the nearest neighbors. In this context, the nearest neighbors are the texts with the most similar embeddings, implying the most similar semantic content.
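Both metrics mentioned in point 3 can be computed directly on the toy vectors. This sketch (still using the hypothetical 3-dimensional embeddings) shows that the semantically close pair has a small Euclidean distance and a cosine similarity near 1, while the dissimilar pair does not:

```python
import math

def euclidean_distance(a, b):
    # Straight-line distance between two points in the vector space.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    # Angle-based similarity; insensitive to vector length.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

text_1 = [0.8, 0.1, 0.1]  # "I love reading books"
text_2 = [0.7, 0.2, 0.1]  # "Books are my passion"
text_3 = [0.1, 0.8, 0.1]  # "Cooking is a great hobby"

# Similar texts: small distance, similarity near 1.
print(euclidean_distance(text_1, text_2), cosine_similarity(text_1, text_2))
# Dissimilar texts: large distance, low similarity.
print(euclidean_distance(text_1, text_3), cosine_similarity(text_1, text_3))
```

Note that Annoy's 'angular' metric, used in the coding sample below, is based on the angle between vectors, so it behaves like the cosine measure here.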

Coding Sample

To illustrate how you might use OpenAI embeddings with a vector database in Python, I'll provide a sample code snippet. This example uses the Annoy library for creating and using a vector database. Annoy is a C++ library with Python bindings to search for points in space that are close to a given query point. It's particularly useful for nearest neighbor searches in high-dimensional spaces.

First, install the necessary libraries by running:

pip install openai annoy

Code:

import openai
import annoy

# Initialize OpenAI with your API key
openai.api_key = 'your-api-key'

# Sample texts
texts = [
    "I love reading books",
    "Books are my passion",
    "Cooking is a great hobby",
    "I enjoy hiking in the mountains",
    "Mountains are breathtaking"
]

# Get embeddings from OpenAI
def get_embeddings(texts):
    return openai.Embedding.create(input=texts, engine="text-similarity-babbage-001")['data']

embeddings = get_embeddings(texts)

# Create an Annoy index for the embeddings
f = 2048  # Length of item vector that will be indexed
t = annoy.AnnoyIndex(f, 'angular')
for i, embedding in enumerate(embeddings):
    t.add_item(i, embedding['embedding'])
t.build(10)  # 10 trees

# Save the index to disk for later use
t.save('test.ann')

# Load the index (can be used in another process)
u = annoy.AnnoyIndex(f, 'angular')
u.load('test.ann')

# Find the 3 nearest neighbors to the first item
nearest_neighbors = u.get_nns_by_item(0, 3)
for neighbor in nearest_neighbors:
    print(texts[neighbor])

# If you have another text and want to find similar texts in the database
new_text = "I enjoy reading about mountains"
new_embedding = get_embeddings([new_text])[0]['embedding']

# Find the 3 nearest neighbors to the new embedding
nearest_neighbors = u.get_nns_by_vector(new_embedding, 3)
for neighbor in nearest_neighbors:
    print(texts[neighbor])

In the code:

  1. OpenAI Embeddings: We fetch the embeddings for our sample texts from OpenAI.
  2. Annoy Index Creation: We create an Annoy index and add our embeddings to it.
  3. Querying: We demonstrate how to query the index to find the nearest neighbors to a given point (in our case, the embeddings of a text).



Wednesday, January 17, 2024

ChatGPT Limitation or Flaw?

I found an interesting limitation/flaw in ChatGPT when I present it with multiple questions in a single prompt.

First I present ChatGPT with two questions in one prompt, one after another. The second question in the same prompt uses the answer from the first question. 

Diagram 1: ChatGPT returns a wrong value.

Diagram 1 shows the result ChatGPT generated for a prompt containing two questions. If I ask ChatGPT the questions separately, the result is the one shown in Diagram 2, which is not the same as the result in Diagram 1.

Diagram 2: Different result from ChatGPT if the questions are asked separately.

If I ask ChatGPT a non-mathematical question, as shown in Diagram 3, the result tends to be consistent whether the questions are asked together or separately.

Diagram 3: Non-mathematical question

As a matter of fact, Blue + Yellow = Green, and Green + Red = Brown.

Another Example:
Diagram 4: ChatGPT-4 seems to interpret the two similar questions in different ways.


I have not had the time to investigate further to what extent ChatGPT becomes less reliable depending on how I structure the prompts. Just be mindful that ChatGPT is not bulletproof, and it's best that we verify any work we rely on ChatGPT for.