Getting started with OpenAI GPT-2

GPT-2 was released by OpenAI last year with the blog post “Better Language Models and Their Implications”, and the related code was released on GitHub: code for the paper “Language Models are Unsupervised Multitask Learners”.

First, install OpenAI GPT-2 from GitHub. My PC runs Ubuntu 16.04 with CUDA 10 and two GPUs (one Titan XP and one GTX 1080 Ti):

git clone https://github.com/openai/gpt-2.git
cd gpt-2
virtualenv -p python3 venv
source venv/bin/activate
pip install -r requirements.txt
pip install tensorflow-gpu==1.15
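
Before moving on, you can optionally check that this TensorFlow build can see the GPUs. This is just a quick sanity check (assuming the tensorflow-gpu 1.15 install above), not part of the official setup:

# sanity check: confirm TensorFlow 1.15 is installed and detects the GPUs
import tensorflow as tf

print(tf.__version__)              # expect something like 1.15.x
print(tf.test.is_gpu_available())  # True if the CUDA/GPU setup works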

Then download the default GPT-2 model released by OpenAI by specifying the model size:

python download_model.py 124M
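
The files are saved under models/124M/. If you are curious about the model configuration, you can inspect the downloaded hparams.json and encoder.json directly (a small sketch, assuming the default models directory):

# peek at the downloaded 124M model: hyperparameters and vocabulary size
import json

with open('models/124M/hparams.json') as f:
    print(json.load(f))    # context length, embedding size, number of heads and layers

with open('models/124M/encoder.json') as f:
    vocab = json.load(f)
print(len(vocab))          # 50257 BPE tokens, including <|endoftext|>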

Now you can generate random samples with the src/generate_unconditional_samples.py script:

python src/generate_unconditional_samples.py

This will generate many random samples; the following is one paragraph generated by the GPT-2 124M model:

It seems very likely that a lot of people are collecting computers by the time the holidays arrive. Be aware, you need a laptop with enough drive space to install software on, interact with relationships, read a bunch of books, pay bills, organize your property and then hold hands to make a squiggly mess of your assorted miscreants. And increasingly, the entertainment world is gone from so-called "Kids' TVs" to "modern kids' sitcoms." What sort of Play Culture does that have to do with games and films? Despite "Wild Bunch" and Pixar, children's networks like Parenting Week travel comedy shows! Eh? Some pride! Big beards?! Yes! (It's bound to surprise you that we already know each day the… read more<|endoftext|>Near the end of a three-month deconstruction, Dolores is laser-prepared the courtroom that surrounds her as it talls the chlorine walls of her plinth filled with three-story cargo bunkers. Though Mazuri believes that their conversation may have made her impervious to attack. 
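
By default the script keeps printing samples until you interrupt it. You can also override its parameters on the command line with flags that mirror the function arguments (documented later in this post); the values below are just examples:

python src/generate_unconditional_samples.py --nsamples=2 --top_k=40 --temperature=0.9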

Another script is src/interactive_conditional_samples.py, which lets you input a prefix in interactive mode:

python src/interactive_conditional_samples.py

And here is one of the samples generated from the prompt “Natural language processing”:

Model prompt >>> Natural language processing
======================================== SAMPLE 1 ========================================


Modes in which the neural network learns a set of questions is a key step in neural networks optimization, especially during reinforcement learning of natural language processing models: an initial set of questions must be present in a state of non-linearity for the neural network to be profitable. For the non-linear state (CODM), it is desirable to be able to easily select the minimum number of questions in the CODM set given the constraints. This can be done by using "fisher lines" which can be generated by training training images on the dataset. Fisher line models are similar and, in particular, can be easily tested on different images of the same set of questions. In addition to generating training inputs, the CODM is able to run a training program in parallel using preprocessing that generates training data as the result of this training. In order to avoid the use of expensive models, the CODM is able to train the input parameters (the task parameter) for a given task (i.e., training the input data) in parallel using a single process.

Constraints on computation of natural language and recognition

A constraint to computation of natural language is known as the problem of constraint (Jansson 1987 [3]).

What's more, when we look at the state of our state, there are several states of condition on the order of 1 from right to left. Suppose that we have three questions in our model. How can we know the answer to one of them? Would our task lie in a box (the other two are in the same box) or at a different point? Do we need to decide which two other boxes to get in? In this state we have the state of condition 'a'. How can we calculate which two boxes to get in from the first?' In other words, what should we do if we know that neither of the boxes is there?'

There is a great deal of debate on the best condition on this question. The literature reports a large number of different rules for the classification and classification of questions. In a number of situations the best state on this question appears in the form of the following:

The most common is given in this passage as: 'if we want a question, we should do it.' One way to interpret this as representing a constraint for computation of natural language processing is to assume that the natural language processing theory we learn about in the course of learning is consistent with that model and thus the best state on this question may
================================================================================
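
The sampling parameters of this script can be set from the command line as well. For example, fixing the seed makes the output reproducible (example values; see the parameter documentation below):

python src/interactive_conditional_samples.py --seed=42 --length=200 --nsamples=2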

You can even use the full-size GPT-2 model; first download it:

python download_model.py 1558M

Then use the interactive mode:

python src/interactive_conditional_samples.py --model_name=1558M

This is one output sample for the prefix “Natural language processing”:

Model prompt >>> Natural language processing
======================================== SAMPLE 1 ========================================
 is widely used by AI systems, both human and artificial. It is likely to have the greatest impact in the near future. An understanding of natural language processing gives us a better understanding of human cognition, and how artificial systems can learn language based on natural language. However, the state-of-the-art is still far from reaching human performance. Even the Turing Test does not provide complete answers: a computer program that passes an AI task is a human program that cannot deceive humans. A new approach to human language is needed to reach this milestone.

A computer program that passes an AI task is a human program that cannot deceive humans.

How to get there?

There are many ways to apply natural language processing in AI, ranging from direct translation to inferring grammatically reasonable sentences. Recently, computer scientists have focused on machine translation that uses machine learning. However, machine translation is limited because there is a certain amount of human intervention in the process.

Now, it turns out that researchers at the University of Washington and Microsoft Research and their colleagues have proposed an artificial intelligence agent that accomplishes the same tasks as a human translator with much less intervention. Their paper has been published in the peer-reviewed scientific journal Neural Information Processing Systems. Their AI system is based on Deep Learning, an algorithm that learns to interpret and process the information in a variety of tasks.

The key to the system is the principle of local optimization. The local minimum is a method that is used to achieve the goal of minimizing the cost function of an algorithm. Local optimization is useful because it is often faster, or at least as efficient, than traditional optimization techniques. The local minimum concept was introduced in the 1960s by the psychologist Daniel Kahneman. It has been successfully applied in several machine learning methods such as support vector machines and support vector machines with back propagation, or in the classification of natural images, such as face detection. It is also widely used in human language processing.

The problem the researchers solved was to perform the task called human translation and compare it with a system with local minimum optimization. Although the researchers were aiming to create a system that could handle complex natural language processing tasks, their approach could also be beneficial for tasks that require high precision in language processing. To this end, the research team implemented their approach on the Microsoft Cognitive Toolkit.

The research team analyzed the performances of human translators, in English and Korean, in a variety of tasks with both a small and large dataset. In the small dataset set,
================================================================================
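
Besides the 124M and 1558M models used above, OpenAI also released intermediate sizes, which the same script can download:

python download_model.py 355M
python download_model.py 774M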

You can change the parameters by specifying their values on the command line; this is supported by Google Python Fire: https://github.com/google/python-fire. Just enjoy it:

def interact_model(
    model_name='124M',
    seed=None,
    nsamples=1,
    batch_size=1,
    length=None,
    temperature=1,
    top_k=40,
    top_p=1,
    models_dir='models',
):
    """
    Interactively run the model
    :model_name=124M : String, which model to use
    :seed=None : Integer seed for random number generators, fix seed to reproduce
     results
    :nsamples=1 : Number of samples to return total
    :batch_size=1 : Number of batches (only affects speed/memory).  Must divide nsamples.
    :length=None : Number of tokens in generated text, if None (default), is
     determined by model hyperparameters
    :temperature=1 : Float value controlling randomness in boltzmann
     distribution. Lower temperature results in less random completions. As the
     temperature approaches zero, the model will become deterministic and
     repetitive. Higher temperature results in more random completions.
    :top_k=40 : Integer value controlling diversity. 1 means only 1 word is
     considered for each step (token), resulting in deterministic completions,
     while 40 means 40 words are considered at each step. 0 is a special
     setting meaning no restrictions. 40 generally is a good value.
    :top_p=1 : Float value controlling diversity via nucleus sampling; only the
     smallest set of tokens whose cumulative probability reaches top_p is
     considered. 1 (default) disables the restriction.
    :models_dir : path to parent folder containing model subfolders
     (i.e. contains the <model_name> folder)
    """

Reference:
Better Language Models and Their Implications
openai/gpt-2
How to Run OpenAI’s GPT-2 Text Generator on Your Computer
The Illustrated GPT-2 (Visualizing Transformer Language Models)
Beginner’s Guide to Retrain GPT-2 (117M) to Generate Custom Text Content
nshepperd/gpt-2
minimaxir/gpt-2-simple
minimaxir/gpt-2-cloud-run
minimaxir/textgenrnn
graykode/gpt-2-Pytorch
How To Make Custom AI-Generated Text With GPT-2
Transformers from scratch
huggingface/transformers
Source code for transformers.tokenization_gpt2
rusiaaman/XLnet-gen
Deconstructing BERT: Distilling 6 Patterns from 100 Million Parameters
Deploy A Text Generating API With Hugging Face’s DistilGPT-2
Morizeyao/GPT2-Chinese
yangjianxin1/GPT2-chitchat
imcaspar/gpt2-ml
GaoPeng97/transformer-xl-chinese

Posted by TextMiner
