StarlangSoftware/WordToVec-Swift

Word Embeddings

Distributed representations (DR) of words (i.e., word embeddings) are used to capture semantic and syntactic regularities of the language by analyzing distributions of word relations within the textual data. Modeling methods generating DRs rely on the assumption that 'words that occur in similar contexts tend to have similar meanings' (the distributional hypothesis), which stems from the nature of language itself. Due to their unsupervised nature, these modeling methods do not require any human judgment as input to train, which allows researchers to train on very large datasets at relatively low cost.

Traditional representations of words (i.e., one-hot vectors) are based on word-word (W x W) co-occurrence sparse matrices, where W is the number of distinct words in the corpus. On the other hand, distributed word representations (DRs) (i.e., word embeddings) are word-context (W x C) dense matrices, where C < W and C is the number of context dimensions, which are determined by the underlying model assumptions. Dense representations are arguably better at capturing generalized information and more resistant to overfitting, since context vectors represent shared properties of words. DRs are real-valued vectors in which each context can be considered a continuous feature of a word. Due to their ability to represent abstract features of a word, DRs are easily reusable across higher-level tasks, even if they are trained on totally different datasets.
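As a toy illustration of the sparse-versus-dense contrast above (all names and values here are invented for illustration and are not part of this library): a one-hot vector has W dimensions, while a dense embedding has C << W dimensions, and similarity of meaning can be measured with cosine similarity in the dense space.

```swift
import Foundation

// A W = 3 vocabulary: a one-hot vector is sparse and W-dimensional.
let vocabulary = ["king", "queen", "apple"]
let oneHotKing: [Double] = [1, 0, 0]

// Dense embeddings with C = 2 dimensions; values are made up for illustration.
let dense: [String: [Double]] = [
    "king":  [0.9, 0.1],
    "queen": [0.85, 0.15],
    "apple": [0.1, 0.9]
]

// Cosine similarity captures closeness of meaning in the dense space.
func cosine(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).map(*).reduce(0, +)
    let normA = sqrt(a.map { $0 * $0 }.reduce(0, +))
    let normB = sqrt(b.map { $0 * $0 }.reduce(0, +))
    return dot / (normA * normB)
}

print(oneHotKing.count == vocabulary.count)  // true: one-hot dimension equals W
// "king" is closer to "queen" than to "apple" in the dense space:
print(cosine(dense["king"]!, dense["queen"]!) > cosine(dense["king"]!, dense["apple"]!))  // true
```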

Prediction-based DR models gained much attention after Mikolov et al.'s neural-network-based SkipGram model in 2013. The secret behind prediction-based models is simple: never build a sparse matrix at all. Prediction-based models construct dense matrix representations directly instead of reducing sparse ones to dense ones. These models are trained like any other supervised learning task, by feeding in many positive and negative samples without incurring any human supervision costs. The aim of these models is to maximize the probability of each context c under the same distributional assumptions on word-context co-occurrences, similar to count-based models.

SkipGram is a prediction-based distributional semantic model (DSM) consisting of a shallow neural network architecture inspired by neural language modeling (LM) intuitions. It is commonly known through its open-source implementation library, word2vec. SkipGram acts like a log-linear classifier that maximizes the prediction of the surrounding words of a word within a context (center window). Probabilistic word and sentence prediction from the local neighbors of a word has been successfully applied to LM tasks under the Markov assumption. SkipGram leverages the same idea by treating the words within the window as positive and negative instances and learning weights (for k contexts) that maximize word predictions. In the training process, each word vector starts as a random vector and is then iteratively shifted toward the vectors of its neighboring words.
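The training process described above can be sketched as a negative-sampling gradient update. This is a simplified, self-contained illustration under toy assumptions (tiny corpus, fixed learning rate, hand-picked negatives), not the library's actual implementation; names such as `wordVectors`, `contextVectors`, and `update` are invented for the sketch.

```swift
import Foundation

// Two vector tables: one for center words, one for contexts, as in SkipGram.
let dim = 8
let vocab = ["the", "cat", "sat", "on", "mat"]
var wordVectors = vocab.map { _ in (0..<dim).map { _ in Double.random(in: -0.5...0.5) / Double(dim) } }
var contextVectors = vocab.map { _ in [Double](repeating: 0, count: dim) }

func sigmoid(_ x: Double) -> Double { 1.0 / (1.0 + exp(-x)) }
func dot(_ a: [Double], _ b: [Double]) -> Double { zip(a, b).map(*).reduce(0, +) }

// One gradient step for a (center, context) pair plus k negative samples:
// the observed neighbor is a positive instance (label 1), sampled words
// are negative instances (label 0), exactly as the text above describes.
func update(center: Int, context: Int, negatives: [Int], lr: Double) {
    var grad = [Double](repeating: 0, count: dim)
    for (target, label) in [(context, 1.0)] + negatives.map { ($0, 0.0) } {
        let g = (sigmoid(dot(wordVectors[center], contextVectors[target])) - label) * lr
        for i in 0..<dim {
            grad[i] += g * contextVectors[target][i]
            contextVectors[target][i] -= g * wordVectors[center][i]
        }
    }
    // Shift the center word's vector toward its positive context, away from negatives.
    for i in 0..<dim { wordVectors[center][i] -= grad[i] }
}

// "cat" (index 1) predicts its neighbor "sat" (index 2), with 2 negatives per step.
for _ in 0..<100 {
    update(center: 1, context: 2, negatives: [0, 4], lr: 0.05)
}
```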

Video Lectures

For Developers

You can also see the Java, Python, Cython, C#, Js, C, or C++ repositories.

Requirements

  • Xcode Editor
  • Git

Git

Install the latest version of Git.

Download Code

In order to work on the code, create a fork from the GitHub page. Use Git to clone the code to your local machine, or run the following command (e.g., on Ubuntu):

git clone <your-fork-git-link>

A directory called WordToVec-Swift will be created. Alternatively, you can clone the repository directly to explore the code:

git clone https://github.com/starlangsoftware/WordToVec-Swift.git

Open project with Xcode

To import projects from Git with version control:

  • In the Xcode IDE, select Clone an Existing Project.

  • In the Import window, paste the GitHub repository URL.

  • Click Clone.

Result: The imported project is listed in the Project navigator and its files are loaded.

Compile

From IDE

After downloading and opening the project, select the Build option from the Product menu. Once compilation finishes, you can run WordToVec-Swift.

Detailed Description

To initialize the artificial neural network:

init(corpus: Corpus, parameter: WordToVecParameter)

To train the neural network:

func train() -> VectorizedDictionary
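A hedged usage sketch combining the two calls above. The class name `NeuralNetwork`, the `Corpus` file-name initializer, and the parameter setter methods shown here are assumptions for illustration and may differ from the actual API; only `init(corpus:parameter:)` and `train() -> VectorizedDictionary` are documented above.

```swift
// Assumed names below (NeuralNetwork, Corpus(fileName:), setter methods) may differ.
let corpus = Corpus(fileName: "corpus.txt")      // plain-text training corpus

let parameter = WordToVecParameter()
parameter.setLayerSize(100)                      // embedding dimension C
parameter.setCbow(false)                         // false -> SkipGram architecture
parameter.setWindow(5)                           // context window size

let network = NeuralNetwork(corpus: corpus, parameter: parameter)
let dictionary: VectorizedDictionary = network.train()
// `dictionary` maps each word in the corpus to its learned embedding vector.
```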

Cite

@inproceedings{ercan-yildiz-2018-anlamver,
	title = "{A}nlam{V}er: Semantic Model Evaluation Dataset for {T}urkish - Word Similarity and Relatedness",
	author = {Ercan, G{\"o}khan  and
  	Y{\i}ld{\i}z, Olcay Taner},
	booktitle = "Proceedings of the 27th International Conference on Computational Linguistics",
	month = aug,
	year = "2018",
	address = "Santa Fe, New Mexico, USA",
	publisher = "Association for Computational Linguistics",
	url = "https://www.aclweb.org/anthology/C18-1323",
	pages = "3819--3836",
}

For Contributors

Package.swift file

  1. Dependencies should be given w.r.t. GitHub.
    dependencies: [
        .package(name: "MorphologicalAnalysis", url: "https://github.com/StarlangSoftware/TurkishMorphologicalAnalysis-Swift.git", .exact("1.0.6"))],
  2. Targets should include direct dependencies, files to be excluded, and all resources.
    targets: [
        .target(
	dependencies: ["MorphologicalAnalysis"],
	exclude: ["turkish1944_dictionary.txt", "turkish1944_wordnet.xml",
	"turkish1955_dictionary.txt", "turkish1955_wordnet.xml",
	"turkish1959_dictionary.txt", "turkish1959_wordnet.xml",
	"turkish1966_dictionary.txt", "turkish1966_wordnet.xml",
	"turkish1969_dictionary.txt", "turkish1969_wordnet.xml",
	"turkish1974_dictionary.txt", "turkish1974_wordnet.xml",
	"turkish1983_dictionary.txt", "turkish1983_wordnet.xml",
	"turkish1988_dictionary.txt", "turkish1988_wordnet.xml",
	"turkish1998_dictionary.txt", "turkish1998_wordnet.xml"],
	resources:
[.process("turkish_wordnet.xml"),.process("english_wordnet_version_31.xml"),.process("english_exception.xml")]),
  3. Test targets should include the test directory.
	.testTarget(
		name: "WordNetTests",
		dependencies: ["WordNet"]),

Data files

  1. Add data files to the project folder.

Swift files

  1. Do not forget to comment each function.
   /**
     * Returns the value to which the specified key is mapped.
     - Parameters:
        - id: String id of a key
     - Returns: value of the specified key
     */
    public func singleMap(id: String) -> String{
        return map[id]!
    }
  2. Do not forget to define classes as open in order to be able to extend them in other packages.
	open class Word : Comparable, Equatable, Hashable
  3. Function names should follow camel case.
	public func map(id: String)->String?
  4. Write getter and setter methods.
	public func getSynSetId() -> String{
	public func setOrigin(origin: String){
  5. Use a separate test class extending XCTestCase for testing purposes.
final class WordNetTest: XCTestCase {
    var turkish : WordNet = WordNet()
    
    func testSize() {
        XCTAssertEqual(78326, turkish.size())
    }
  6. Enumerated types should be declared as enum.
public enum CategoryType : String{
    case MATHEMATICS
    case SPORT
    case MUSIC
  7. Implement the == operator and the hash(into:) method for hashing purposes.
    public func hash(into hasher: inout Hasher) {
        hasher.combine(name)
    }
    public static func == (lhs: Relation, rhs: Relation) -> Bool {
        return lhs.name == rhs.name
    }
  8. Make classes Comparable for comparison, Equatable for equality, and Hashable for hashing checks.
	open class Word : Comparable, Equatable, Hashable
  9. Implement the < operator for comparison purposes.
    public static func < (lhs: Word, rhs: Word) -> Bool {
        return lhs.name < rhs.name
    }
  10. Implement description as the equivalent of a toString method.
	open func description() -> String{
  11. Use Bundle and XMLParserDelegate for parsing XML files.
	let url = Bundle.module.url(forResource: fileName, withExtension: "xml")
	var parser : XMLParser = XMLParser(contentsOf: url!)!
	parser.delegate = self
	parser.parse()

Also implement the parser delegate method:

public func parser(_ parser: XMLParser, didStartElement elementName: String, namespaceURI: String?, qualifiedName qName: String?, attributes attributeDict: [String : String])
