Self-Education Book List

Self-education is lifelong: it is neither standardized nor cast in stone, and it requires no evaluation or certification. You are free to pursue it at any time and in any place. Ultimately, the goal is a “new and refined you” – sane, well read, critical yet reasonable, cultured, rational, knowledgeable and loving. In tandem with my earlier post, here is the list of books (as suggested by Susan Wise Bauer in The Well-Educated Mind), selected by me for your reading pleasure and wisdom. The author (see my earlier post) also recommends the best translations and abridged versions; please get her book for the full list.

Novels

| Author | Book | Year |
| --- | --- | --- |
| Miguel de Cervantes | Don Quixote | 1605 |
| Jonathan Swift | Gulliver’s Travels | 1726 |
| Jane Austen | Pride and Prejudice | 1813 |
| Charles Dickens | Oliver Twist | 1838 |
| Herman Melville | Moby-Dick | 1851 |
| Leo Tolstoy | Anna Karenina | 1877 |
| Mark Twain | Adventures of Huckleberry Finn | 1884 |
| F. Scott Fitzgerald | The Great Gatsby | 1925 |

Memoir

| Author | Book | Year |
| --- | --- | --- |
| Augustine | The Confessions | c. AD 400 |
| Michel de Montaigne | Essays | 1580 |
| René Descartes | Meditations | 1641 |
| Benjamin Franklin | The Autobiography of Benjamin Franklin | 1791 |
| Henry David Thoreau | Walden | 1854 |
| Harriet Jacobs | Incidents in the Life of a Slave Girl, Written by Herself | 1861 |
| Friedrich Nietzsche | Ecce Homo: How One Becomes What One Is | 1908 |
| Mohandas Gandhi | An Autobiography: The Story of My Experiments with Truth | 1929 |

History

| Author | Book | Year |
| --- | --- | --- |
| Herodotus | The Histories | c. 441 BC |
| Thucydides | The Peloponnesian War | c. 400 BC |
| Plato | The Republic | c. 375 BC |
| Plutarch | Lives | AD 100–125 |
| Niccolò Machiavelli | The Prince | 1513 |
| John Locke | The True End of Civil Government | 1690 |
| David Hume | The History of England, Volume V | 1754 |
| Thomas Paine | Common Sense | 1776 |
| Edward Gibbon | The History of the Decline and Fall of the Roman Empire | 1776–88 |
| Mary Wollstonecraft | A Vindication of the Rights of Woman | 1792 |
| Alexis de Tocqueville | Democracy in America | 1835 |
| Max Weber | The Protestant Ethic and the Spirit of Capitalism | 1904 |
| Lytton Strachey | Queen Victoria | 1921 |

Drama

| Author | Book | Year |
| --- | --- | --- |
| Aeschylus | Agamemnon | 458 BC |
| Sophocles | Oedipus the King | c. 450 BC |
| Euripides | Medea | 431 BC |
| Aristophanes | The Birds | 414 BC |
| Aristotle | Poetics | c. 330 BC |
| William Shakespeare | Richard III, A Midsummer Night’s Dream, Hamlet | 1592–1600 |
| Molière | Tartuffe | 1669 |
| George Bernard Shaw | Saint Joan | 1924 |

Poetry

| Author | Book | Year (or poet’s dates) |
| --- | --- | --- |
| — | The Epic of Gilgamesh | c. 2000 BC |
| Homer | The Iliad and The Odyssey | c. 800 BC |
| Horace | Odes | 65–8 BC |
| — | Beowulf | c. 1000 |
| Dante Alighieri | Inferno | 1265–1321 |
| Geoffrey Chaucer | The Canterbury Tales | 1343–1400 |
| William Shakespeare | Sonnets | 1564–1616 |
| John Milton | Paradise Lost | 1608–1674 |
| William Blake | Songs of Innocence and of Experience | 1757–1827 |
| William Wordsworth | (selected poems) | 1770–1850 |
| Samuel Taylor Coleridge | (selected poems) | 1772–1834 |
| John Keats | (selected poems) | 1795–1821 |
| Henry Wadsworth Longfellow | (selected poems) | 1807–1882 |
| Alfred, Lord Tennyson | (selected poems) | 1809–1892 |

The Well-Educated Mind

The modus operandi of today’s education is standardized evaluation at every level, which pushes all students toward rote learning. This is more relevant to their nascent young-adult years than to their mature later life, by which point experience teaches and takes over. Formal education in schools and universities may sound effective, but it rarely creates a well-educated mind: one that is rational, creative, adaptable, prodigious, self-regulating and scientific. Even where current education aims at some of these, it never comes close to self-education, which proceeds at our own pace, driven by a hunger to close the gap and master the subject. Self-education through books provides a clear, succinct, time-tested path to knowledge that can be applied in real life.

A well-educated mind is our own responsibility: aspire to it with a repertoire of books, then maintain and sustain it by rereading them and adding new, worthy ones as time unfolds relentlessly. Thomas Jefferson, the third President of the U.S., held that university lectures are unnecessary for the serious pursuit of historical reading. He advised his nephew Thomas Mann Randolph Jr. in a letter that reflected the common understanding of the times: any literary man can rely on self-education to train and fill the mind, and all you need is a shelf of books, a congenial friend or two who can talk to you about your reading, and a few “chasms of time not otherwise appropriated”.

Isaac Watts, in his self-education treatise Improvement of the Mind (originally published in 1741), observes that a well-trained mind is the result of application, not inborn genius. Deep thinkers are not born with “bright genius, a ready wit and good parts”. No matter how ignorant and low a mind might be, “studious thought…the exercise of your own reason and judgment upon all you read…gives good sense…and affords your understanding the truest improvement”. Sustained, serious reading is at the center of this self-improvement project.

Observation, reading, conversation and attendance at lectures are all ways of self-teaching, as Isaac Watts goes on to tell us. But he concludes that reading is the most important method of self-improvement. Observation limits our learning to our immediate surroundings; conversation and lectures are valuable, but expose us only to the views of a few nearby persons. Reading alone allows us to reach out beyond the restrictions of time and space, to take part in what Mortimer Adler has called the “Great Conversation” of ideas that began in ancient times and has continued unbroken to the present. Reading makes us part of this conversation, no matter where and when we pursue it.

We read newspapers and slapstick fiction easily, yet find great books tough: reading them requires a different skill than reading for pleasure. This difficulty demonstrates not mental inadequacy but lack of preparation. The first task in self-education is therefore not to dive straight in, but to make time to read and to follow these steps:

  1. Schedule regular reading and self-study time – in my case I leverage my commute of 1 hour 30 minutes to and from work on the Mass Rapid Transit (MRT)
  2. Practice the mechanics of reading – assess your speed, understanding and familiarity with words (Wordly Wise 3000: Systematic Academic Vocabulary Development and Vocabulary from Classical Roots can help)
  3. Keep a journal, a written record of new ideas – practice taking notes as you read and then summarizing. Classical self-education demands that you understand, evaluate and react to ideas. In your journal you will keep summaries of your reading: this is your tool for understanding the ideas you read – the mastery of facts. In my case this means blogging about what I read, effectively summarizing it for posterity
  4. Practice extensive reading and journal keeping – akin to what Benjamin Franklin used to master his language through reading – governed by three stages of reading/enquiry:
    1. Grammar-Stage Reading
    2. Logic-Stage Reading
    3. Rhetoric-Stage Reading

Grammar-Stage reading is common to all genres, whereas the other two stages differ somewhat between genres. These questions and observations are useful not only for reading each of the genres but also when you attempt to write your own book. Susan Wise Bauer has condensed all of this into clean explanations under each of the points. I suggest you read her book (The Well-Educated Mind: A Guide to the Classical Education You Never Had) for the full details; here are the excerpts that summarize the three stages of reading for each genre. The important books she suggests under each genre are listed above.

Grammar-Stage Reading on all Genres

  1. Plan to return to each book more than once to reread sections and chapters
  2. Underline or mark passages that you find interesting or confusing
  3. Before you begin, read the title page, the copy on the back, and the table of contents
  4. At the end of each chapter or section, write down a sentence or two summarizing the content. Remember not to include details
  5. As you read, use your journal to jot down questions that come to mind
  6. Assemble your summary sentences into an informal outline, and then give the book a brief title and an extensive subtitle

The Story of People: Reading through History with the Novel

Logic-Stage Reading

  1. Is this novel a “fable” or a “chronicle”?
  2. What does the central character (or characters) want? What is standing in his (or her) way? And what does he (or she) pursue in order to overcome this block?
  3. Who is telling the story? Is it a first-person point of view, second person, third-person limited, third-person objective, or the omniscient point of view?
  4. Where is the story set?
  5. What style does the writer employ?
  6. Examine the images and metaphors, and the beginnings and endings

Rhetoric-Stage Reading

  1. Do you sympathize with the characters? Which ones, and why?
  2. Does the writer’s technique give you a clue as to her “argument” – her take on the human condition?
  3. Did the writer’s times affect him?
  4. Is there an argument in this book?
  5. Do you agree?

The Story of Me: Autobiography or Memoir

Logic-Stage Reading

  1. What is the theme that ties the narrative together?
  2. Where is the life’s turning point? Is there a conversion?
  3. For what does the writer apologize? How does the writer justify himself?
  4. What is the model – the ideal – for this person’s life?
  5. What is the end of the life: the place where the writer has arrived, found closure, discovered rest?
  6. Revisit the theme of the writer’s life

Rhetoric-Stage Reading

  1. Is the writer writing for himself, or for a group?
  2. What are the three moments, or time frames, of the autobiography?
  3. Where does the writer’s judgment lie?
  4. Do you reach a different conclusion from the writer about the pattern of his life?
  5. What have you brought away from this story?

The Story of the Past: The Tales of Historians (and Politicians)

Logic-Stage Reading

  1. What are the major events and challenges of the historical narrative, and what causes them? Where does it take place?
  2. What are the major assertions of the historian?
  3. What questions is the historian asking?
  4. What sources does the historian use to answer them?
  5. Does the evidence support the connection between questions and answers?
  6. Can you identify the history’s genre (diplomatic, military, international, etc.)? History writing also spans schools across the ages: Ancient, Medieval, Renaissance (Positivism, Progressivism & Multiculturalism / Romanticism, Relativism, Skepticism) and Postmodernism
  7. Does the historian list his or her qualifications?

Rhetoric-Stage Reading

  1. What is the purpose of history?
  2. Does the story have forward motion?
  3. What does it mean to be human?
  4. Why do things go wrong?
  5. What place does free will have?
  6. What relationship does this history have to social problems?
  7. What is the end of history?
  8. How is this history the same as – or different from – the stories of other historians who have come before?
  9. Given the same facts, would you come to a similar conclusion?

The World Stage: Reading through History with Drama

Logic-Stage Reading

  1. Is the play given unity by plot, by character, or by an idea?
  2. Do any characters stand in opposition to each other?
  3. How do the characters speak?
  4. Is there any confusion of identity?
  5. Is there a climax, or is the play open-ended?
  6. What is the play’s theme?

Rhetoric-Stage Reading

  1. How would you direct and stage this play?

History Refracted: The Poets and Their Poems

Logic-Stage Reading

  1. Look back at the poem and identify its basic narrative strategy
  2. Identify the poem’s basic form: Ballad, Epic, Elegy, Haiku, Ode, Sonnet (Petrarchan, Shakespearean, Spenserian), Villanelle
  3. Examine the poem’s syntax
  4. Try to identify the poem’s meter
  5. Examine the lines and stanzas
  6. Examine the rhyme patterns
  7. Examine the diction and vocabulary
  8. Look for monologue and dialogue

Rhetoric-Stage Reading

  1. Is there a moment of choice or of change in the poem?
  2. Is there cause and effect?
  3. What is the tension between the physical and the psychological, the earthly and the spiritual, the mind and the body?
  4. What is the poem’s subject?
  5. Where is the self?
  6. Do you feel sympathy?
  7. How does the poet relate to those who came before?

The Cosmic Story: Understanding the Earth, the Skies and Ourselves

Logic-Stage Reading

  1. Define the field of enquiry
  2. What sort of evidence does the writer cite?
  3. Identify the places where the work is inductive, and the areas where it is deductive
  4. Flag anything that sounds like a statement of conclusion

Rhetoric-Stage Reading

  1. What metaphors, analogies, stories, and other literary techniques appear, and why are they there?
  2. Are there broad conclusions?

Every image is searchable with Inception & a crawler in Google Cloud for $0

While attempting the Bosch Kaggle contest, my curiosity about reverse image search was suddenly piqued; having built a prototype face-detection web app a year ago, deep learning was beckoning again. It has been making huge strides, with FPGAs, elastic GPU hardware and neural processors on the scene; it’s getting hotter by the day, so it was time to get my hands dirty. Deep-learning visual frameworks have mushroomed, eclipsing well-established ones like OpenCV (which still powers niche use cases). Google’s TensorFlow is getting its own limelight, and I was curious how a reverse image search engine might work using TensorFlow. While googling, I stumbled on the ViSenze and TinEye web services, which fill diverse needs: the former is an e-commerce reverse search engine, while the latter tells you where a given image is sourced or identified across the entire internet. They can run a search and display results in 1 or 2 seconds, excluding the time to upload an image or extract one from a URL. That is pretty impressive, given TinEye has indexed more than a billion images.

How do we make our own ViSenze or TinEye or IQnect or GoFind? Githubbing, I found a TensorFlow-based reverse image search project (credits to this GitHub project & Akshay Bhat) and realized it was a great way to start, though a real use case would make it even more compelling. I thought a commercial website selling apparel could be a good candidate for real-world images to index and test the capability of this reverse search. The experiment had a unique twist: having been a Windows aficionado using MS software development tools all along, TensorFlow forced me to switch to a Linux environment, as it was only available on Linux or Mac. (As of 29 Nov 16, TensorFlow finally added Windows support – too late for me.) Being a Windows developer, I naturally gravitated to an Ubuntu desktop on Oracle VirtualBox. I had previously played around with the Ubuntu desktop, albeit just to get the hang of the GUI and to use some of its Linux tools, but I had never done serious development there.

Now, let’s get practical: set up the dev environment (I’m new to Linux and want to learn), spruce up the code from the fork, add a crawler, plus a commercially available API to detect whether an uploaded image (for reverse visual search) is safe and appropriate and to detect its content, while returning the nearest 12 items when a visual item is searched. My claim of $0 rests on leveraging the Google Cloud trial. Before you jump in to test drive, you may want to look at such an implementation running on Google Compute Engine at http://visualsearch.avantprise.com/. The exact search takes 3 to 4 seconds on 70K images, whereas the approximate search is a bit faster.

Set up and run the Visual Search Server

Get the latest Oracle VirtualBox here and install it on your Windows machine (mine is Windows 10 build 1439). Now download the Ubuntu 14.04.5 (Trusty Tahr) desktop image from osboxes.org to get the OS up and running: log in with the default osboxes.org userid and password, or just install Ubuntu from scratch in VirtualBox, which is what I did (and suggest), providing an 80 GB disk so the VM has ample space to grow dynamically up to that limit. Make sure you install Guest Additions in the Ubuntu VM instance: it is useful for transferring files between Ubuntu and the host OS, and it also lets the display adjust flexibly. Do note that as I move between office ethernet and home wireless, I have to switch Adapter 1 to the wireless network adapter to get going at home.

[Screenshot: the Ubuntu VM running in Oracle VirtualBox]

The Ubuntu desktop comes with Python 2.7.6, so you don’t need Anaconda or other Python environments, and I’m not looking at exclusive Python environments that would make this experiment long-winded. As for the development environment: I’m used to Visual Studio for C# & Python and WebStorm for NodeJs, so I wanted to stick to similar tools, with a slight difference. This time I went with Visual Studio Code, a great open-source tool with fantastic extensions that works like a charm. Log into your Ubuntu desktop, launch a terminal and type python --version to check that the version is 2.7.6. Don’t forget to set the shared clipboard to bi-directional for this VM instance in VirtualBox. Get git, pip and fabric installed as follows:

sudo apt-get install git
sudo apt-get install python-pip
sudo pip install fabric
sudo apt-get install openssh-server

Ensure you have an RSA key created to connect to GCE and, if required, the local dev environment (also do an ssh localhost), using the following commands:

ssh-keygen -t rsa (Press enter for each line)
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod og-wx ~/.ssh/authorized_keys
ssh localhost 

Type exit and check that the logout message is displayed. Now you’re all set; go ahead and clone the repo. In the terminal (under your home directory) type:

sudo git clone https://github.com/dataspring/TensorFlowSearch.git
cd ~/TensorFlowSearch
vi settings.py (to change username, etc. & save)
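
For orientation, here is a minimal sketch of the settings.py variables this walkthrough touches; only the names come from this post, and every value below is a placeholder to adapt:

# settings.py -- sketch; variable names are the ones referenced in this post,
# all values below are placeholders/assumptions
LOCALUSER = 'osboxes'       # the user you created for the Ubuntu desktop VM
LOCALHOST = '127.0.0.1'     # change to the VM's assigned IP if loopback fails

HOST = 'x.x.x.x'            # GCE static external IP (used later for remote setup)
USER = 'gceuser'            # the user whose public key you paste into GCE SSH keys

# crawl/index knobs raised later during the stress test
RESULT_STEPS = 40           # results requested per API page
MAX_ITER = 100              # pages to walk per product collection
MAX_COLLECTION = 10         # number of product collections to crawl
BATCH_SIZE = 1000           # images per index chunk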

Open up settings.py and change LOCALUSER to the user you set while creating the Ubuntu desktop VM, and optionally LOCALHOST to the specific IP address assigned if 127.0.0.1 doesn’t work. With the code base ready on the desktop, we need to set up the development environment on the local Ubuntu desktop so that we can run, debug and change code. Fabric lets you run sudo commands either locally or remotely, plus tons of other features. With Python fabric in place, run fabric calling the setup function from the terminal:

sudo fab --list  (lists all methods)
sudo fab localdevsetup

A couple of ENTER and Y key-presses later, this will install all prerequisites for the Python development environment: TensorFlow, Fabric, Sqlite3, Visual Studio Code and SQLite Browser (sqlitebrowser). If all goes well, run the crawler to get a few images from an e-commerce site (carousell.com), then start the visual search server as follows:

sudo fab ShopSiteImages
sudo fab index
sudo fab server

Open a browser on your Ubuntu desktop and go to http://localhost/, which should bring up the screen shown below; start searching.

[Screenshot: the visual search page at http://localhost/]

Launch Visual Studio Code from the terminal with the command below; this launches VS Code with admin rights so that debugging works properly:

sudo code --user-data-dir="~/.vscode-root"  

Install the Python extensions and you are all set to change and play around with the code, with nice debugging support! Once launched, point VS Code to the git directory at ~/TensorFlowSearch to open and modify the code.

Detect content appropriateness and type – Clarifai to our rescue

I thought of including a safe-content check, which is vital for visual search since it involves user-uploaded or user-snapped images. Among the myriad video & image recognition services that offer detection of unsafe content, Clarifai is simple, and there’s a free plan to test and play with its REST API. Navigate to their developer site and obtain an API key and you’re all set. In this search form, angular sends the uploaded image to the Clarifai API to check whether it is safe and appropriate, and the returned probability score is displayed on the search screen. Another API call is made to detect the content type. The call is made from controller.js (hosted in the Python Flask web app); you may want to get your own API key, as the current key is on the free tier and may be exhausted.

(See controller.js under the angular folder in the repo for the full code.)

[Screenshot: Clarifai safe-content scores shown on the search screen]
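
For those who want to test the call outside the browser, here is a rough Python equivalent of what the angular controller does, sketched against the Clarifai v2 REST API as it existed at the time of writing; the endpoint, model ID and response shape are my assumptions, so check their current docs before relying on it:

import base64
import json
import requests

API_KEY = 'YOUR_CLARIFAI_API_KEY'   # obtain your own key from the developer site
# model ID is an assumption; look up the current NSFW model in Clarifai's docs
URL = 'https://api.clarifai.com/v2/models/nsfw-v1.0/outputs'

def moderate_image(path):
    # send the image as base64 and return the concept scores, e.g. sfw/nsfw
    with open(path, 'rb') as f:
        img_b64 = base64.b64encode(f.read())
    payload = {'inputs': [{'data': {'image': {'base64': img_b64}}}]}
    resp = requests.post(URL,
                         headers={'Authorization': 'Key ' + API_KEY,
                                  'Content-Type': 'application/json'},
                         data=json.dumps(payload))
    resp.raise_for_status()
    concepts = resp.json()['outputs'][0]['data']['concepts']
    return {c['name']: c['value'] for c in concepts}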

Design and Implement a Simple Crawler

Getting images for this simple crawler is what makes it fun and useful. For the experiment I selected carousell.com, which sells anything that can be snapped with your cellphone camera. It’s a great up-and-coming service that lets anyone sell; their tagline is ‘Snap to Sell, Chat to Buy for FREE on the carousell marketplace!’. It would be good if we could get images off their site, which is already meant for the public to browse and buy from; but how do we know what is offered, and how do we scrape the metadata and images? Well, I just downloaded their mobile app on Android and started looking at the underlying web traffic that feeds the app, to decipher the contract, i.e. the API pattern that powers it. There are nifty ways to configure an Android phone so its internet traffic is proxied over WiFi through Fiddler on a PC; monitoring the ongoing traffic in Fiddler while using the app provides enough information to understand the API story behind it: how the wares are categorized, how the metadata is designed and how the images are served. With this info, you can quickly write a routine in Python to fetch images for our experiment and also define our own metadata to make the search worthwhile: upon performing a visual search, we not only present the nearest 12 items resembling the given image but also display additional metadata (how much an item costs, where it is available) and refer users to the actual e-commerce site for purchase if they intend to buy, facilitating the buying process.

The crawler hinges on the product categorization and page iteration technique implemented in the API to get images and metadata, which are then persisted in a local sqlite3 database for searching. The idea is to retrieve each image once, extract the TensorFlow model features and discard the image, but keep the metadata. This keeps the service from serving its own copies of the images; instead it points to the image URL at the commerce site, avoiding egress cost from the cloud provider. Sqlite3 fits the bill as a simple data store, and it can be scaled out later if future requirements (which we can’t anticipate now) demand it. The crawler is designed to restart wherever it stopped, with a manual intervention to reset a couple of state variables so that it resumes crawling where it left off. A minimal sketch follows the design steps below.

Crawler Design

  1. Decide on the product collection number and pagination parameters that are part of the API (figured out from the API pattern)
  2. Iterate over each collection, setting the returned result count and increasing the page count until the max iteration count
  3. Issue a python requests.get and parse the returned JSON to extract metadata and fill the ‘sellimages’ table of the sqlite3 db
  4. Retrieve the image from the URL
  5. If and when this whole process is rerun, ensure any existing metadata and image are overwritten – crawler re-runs are idempotent as long as the API signature doesn’t change, in which case the crawler may also fail
  6. We assume the API’s metadata URL serves only JPEG images (which holds in practice)
  7. Use simple Python modules: requests, json, sqlite3, urllib
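
A minimal sketch of that loop is below; the API endpoint, query parameters, JSON field names and sellimages columns are placeholders, since the real contract was reverse-engineered from the app’s traffic and isn’t reproduced here:

import sqlite3
import urllib
import requests

DB = '/home/deep/shopsite/sqllite3/shopsite.db'   # db file name is an assumption
IMG_DIR = '/home/deep/shopsite/images/'
API = 'https://example.com/api/products'          # placeholder for the real endpoint

def crawl(collection_id, max_iter=10, per_page=40):
    conn = sqlite3.connect(DB)
    for page in range(max_iter):
        # step 2: walk the pages of one product collection
        resp = requests.get(API, params={'collection': collection_id,
                                         'page': page, 'count': per_page})
        resp.raise_for_status()
        for item in resp.json().get('results', []):   # field names are placeholders
            # step 5: INSERT OR REPLACE keeps re-runs idempotent
            conn.execute('INSERT OR REPLACE INTO sellimages '
                         '(id, title, price, img_url) VALUES (?, ?, ?, ?)',
                         (item['id'], item['title'], item['price'], item['image']))
            # steps 4 & 6: fetch the image, assuming the URL always serves a JPEG
            urllib.urlretrieve(item['image'], IMG_DIR + str(item['id']) + '.jpg')
        conn.commit()
    conn.close()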

[Screenshot: a crawler run in progress]

Indexer & Searcher

The gist of indexing images is simply to use TensorFlow by loading a pre-trained model – trained on ImageNet, aka InceptionV3 – that is already available as a protobuffer file, in our case network.pb. We parse it to import the graph definitions and use them to extract ‘incept/pool_3:0’ features from each image. The indexer then writes out chunks of these features and concatenates them based on the configured batch size, storing them as index files. KNN search is performed using the scipy spatial functions.
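
In outline, the feature extraction and search pieces look roughly like this; the tensor names follow the classic Inception example graph with the ‘incept’ import prefix mentioned above, and the paths and chunking details are simplified assumptions rather than the repo’s exact code:

import numpy as np
import tensorflow as tf
from scipy import spatial

def load_graph(pb_path='network.pb'):
    # parse the pre-trained InceptionV3 protobuffer and import its graph definition
    with tf.gfile.FastGFile(pb_path, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='incept')

def extract_features(image_paths):
    # run each JPEG through the network and keep the pool_3 activations
    feats = []
    with tf.Session() as sess:
        pool3 = sess.graph.get_tensor_by_name('incept/pool_3:0')
        for path in image_paths:
            data = tf.gfile.FastGFile(path, 'rb').read()
            feat = sess.run(pool3, {'incept/DecodeJpeg/contents:0': data})
            feats.append(np.squeeze(feat))
    return np.vstack(feats)

def knn_search(query_feat, index_feats, k=12):
    # brute-force nearest neighbours over the stored feature index
    dists = spatial.distance.cdist([query_feat], index_feats, 'euclidean')[0]
    return np.argsort(dists)[:k]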

In the next iteration of this article, I want to see which spatial distance metric performs best (many are available: ‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘cityblock’, ‘correlation’, ‘cosine’, ‘dice’, ‘euclidean’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘matching’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘wminkowski’, ‘yule’). I’m also looking at how to incorporate newly available pre-trained models (.pb files) to see which fares better for this use case with KNN search. One site that lists new pre-trained models is GradientZoo; you need to figure out how to generate a protobuf file from these model constant files, and a starting point is here.
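
A quick, crude way to compare the metrics, reusing the feature arrays from the sketch above (wall-clock only, not a rigorous benchmark):

import time
from scipy import spatial

def time_metrics(queries, index_feats,
                 metrics=('euclidean', 'cosine', 'cityblock', 'chebyshev')):
    # queries and index_feats are 2-D arrays of pool_3 feature vectors
    for metric in metrics:
        t0 = time.time()
        spatial.distance.cdist(queries, index_feats, metric=metric)
        print('%-10s %.3fs' % (metric, time.time() - t0))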

Sqlite3 db

Essentially there are two tables that keep track of the ingested data: ‘indeximages’ logs crawler runs and ‘sellimages’ holds the metadata for each image crawled. You can view the database on the Ubuntu desktop: just launch sqlitebrowser and point it to the db file under /home/deep/shopsite/sqllite3/
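
If you’d rather poke at the database from the terminal, here is a small sketch using Python’s built-in sqlite3 module (the db file name is an assumption, and the queries use SELECT * and rowid to avoid guessing the schema):

import sqlite3

conn = sqlite3.connect('/home/deep/shopsite/sqllite3/shopsite.db')  # name assumed
cur = conn.cursor()

# last few crawler runs logged in indeximages
for row in cur.execute('SELECT * FROM indeximages ORDER BY rowid DESC LIMIT 5'):
    print(row)

# how many items have been ingested so far
print(cur.execute('SELECT COUNT(*) FROM sellimages').fetchone()[0])
conn.close()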

[Screenshot: the sqlite3 database opened in sqlitebrowser]

Take it to Cloud Heights – Setup in GCE

I claim that indexing, crawling and search cost $0, thanks to the generous $300 credit for trying out Google Cloud. The only catch is that GPU hardware is not yet mainstream there, unlike AWS, though Google announced cloud GPUs a few weeks ago. Fire up Google Cloud and set up an account, keying in your credit card; they explicitly say the card will never be charged when the trial period ends, so it’s worth a try. Use the Quick Start Guide to Linux VMs, and follow the screenshots to create an Ubuntu 14.04 server with 4 vCPUs, 15 GB RAM and an 80 GB SSD.

  1. Create your project
    [Screenshot: creating the project in the GCE console]
  2. Create a VM instance
  3. Select a zone (near your place), machine type (4 vCPUs, 15 GB) and boot disk (80 GB SSD)
  4. Allow HTTP and HTTPS traffic
  5. Click on Networking and choose a static external IP (so that the VM retains the same IP across restarts)
  6. Click on SSH keys, navigate to ~/.ssh/, open the id_rsa.pub file we created earlier, copy its contents and paste them into SSH keys
  7. You’ll end up with a VM created as follows

From your local Ubuntu desktop, launch the terminal and do:

ssh username@externalip

Here username is the user associated with the SSH key you pasted into the console, and externalip is the static IP you reserved. It should connect to the remote host; now log out.

Time to set up the GCE VM instance. Open settings.py on the local Ubuntu desktop, change HOST (to the static IP) and USER (the user assigned in SSH keys) accordingly, and save. Now fire up fabric to do the setup on the remote host machine:

sudo fab live hostsetup

Once the setup is complete, ssh in and do a test run to crawl images, index them and start the web server. You can access the server by pointing your browser to http://<external ip>/. This ensures that everything works. Next is a stress test: open settings.py on the remote machine again and raise the following to larger values: RESULT_STEPS, MAX_ITER, MAX_COLLECTION, BATCH_SIZE.

Now that the process will be long-running, launch an ssh window and use the screen command, which lets processes run uninterrupted even when ssh disconnects. For those from the Windows world used to the command window, there’s a very nice tutorial explaining screen.

sudo apt-get install screen
screen -R crawlrun
cd ~/TensorFlowSearch
sudo fab ShopSiteImages

Press Ctrl+A followed by Ctrl+D to detach from the screen session and log out. Once the crawling process is over, do the same for indexing, and then run another screen session for the web app to keep the search server available on the internet for all.

If the image file count is very large (in the millions), the best way to count the files in /home/deep/shopsite/images/ is not ls but rsync. Also, once an index run completes, all images move to the /done folder.

rsync --stats --dry-run -ax /home/deep/shopsite/images/ /xxx

Another handy utility, similar to Task Manager, for monitoring resource utilization in Linux:

ps -eo pcpu,pid,user,args | sort -r -k1 | less 
<or simply use> 
top

Get FileZilla and install it; it comes in handy for copying code to the Google VM later. Alternatively, you can use your own private GitLab project, which is free.

Future of Visual Search

What the community has to do next is take visual search to the next level. Some thoughts: just as we have mature Apache products like Solr, a similar open-source product is the need of the hour – one that is robust enough to:

  1. ingest images of any type and resolution, in batch and real time
  2. capture frames at preset intervals from continuous video streams
  3. crawl any site with a pluggable API engine
  4. store images & metadata in different cloud storage services using connectors/plugins
  5. use configurable pre-trained deep-learning models for feature extraction from images
  6. store metadata in a Lucene store
  7. search visual images using KNN and other advanced ML methods
  8. offer faceted search on metadata
  9. etc.

Perhaps a combination of the likes of Apache Spark + Apache Solr + the above features + stream processing + ML/DL methods = “Apache Visor”, the best open-source image search out there!

P.S.: If you’re interested in a big-test-data generation framework for SQL, check out my GitHub page.