Author: q38pvbsclbvx

  • ROS_crosscompile

    Visit original content creator repository
    https://github.com/teshanshanuka/ROS_crosscompile

  • visual-testing-demo

    Visual Testing Demo

    About

This is a visual testing automation project using WebdriverIO and its visual service package.
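
As a rough illustration, a spec might look like the following sketch; it assumes the visual service is registered in wdio.conf.js, and the spec name and URL are placeholders:

    describe("homepage visual regression", () => {
      it("matches the saved baseline", async () => {
        await browser.url("https://example.com"); // placeholder URL
        // checkScreen captures the current viewport, compares it against the
        // stored baseline image, and resolves to the mismatch percentage.
        const mismatch = await browser.checkScreen("homepage");
        expect(mismatch).toEqual(0); // 0 means the screenshots are identical
      });
    });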

    Getting Started

    Prerequisites

    You’ll need Node.js installed.

    • Install v16.x or higher, as this is the oldest active LTS version
    • Only releases that are or will become an LTS release are officially supported

    If Node is not currently installed on your system, we suggest using a version manager such as NVM or Volta to manage multiple active Node.js versions; NVM is the more popular choice.

    node -v # version should be 16 or higher

    Install and Run

    # clone the repo
    git clone https://github.com/Saabbir/visual-testing-demo.git
    
    # cd into visual-testing-demo
    cd visual-testing-demo
    
    # install all packages required
    npm i
    
    # run single test
    npx wdio run ./wdio.conf.js --spec visual-test.js
    
    # run all tests
    npm run dev

    Notes

    • the headless mode doesn’t work properly with browser.setViewport
    • the checkScreen method works well, while checkFullPageScreen doesn’t
    • sometimes color differences are reported even if the images are identical (see the sketch below for a possible workaround)
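
If you run into the spurious color differences, one possible workaround (a sketch, assuming the per-call compare options exposed by the visual service; option names may vary between versions) is to relax the comparison:

    const mismatch = await browser.checkScreen("homepage", {
      ignoreAntialiasing: true, // tolerate sub-pixel rendering noise
      ignoreColors: true,       // compare layout/structure rather than exact colors
    });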

    Visit original content creator repository
    https://github.com/Saabbir/visual-testing-demo

  • how-to-instructions

    Assignment Instructions

    Getting started

    This will guide you on how to access, work on, and submit your assignment.

    Tools (you only need one of the text editors)

    • Atom Download it here

    • VS Code Download it here

    • Sublime Text 3 Download it here

    • Git Bash Download it here

    • Github Desktop Download it here

    Why do I need a code editor?

    Editors have built-in knowledge of the programming languages and processes commonly used by programmers, which is intended to increase programmer productivity. They also help format your code for readability and ease of understanding.

    What if I face a problem?

    When you face a problem as you work on any assignment, do the following:

    • Create an issue on GitHub for this assignment repository and it will be addressed by the BongoHive team.
    • Be sure to explain your issue clearly, with screenshots where possible and steps to reproduce the problem you are facing.

    Restrictions

    We encourage you to take the assignments seriously. You are free to search the Internet for any articles, tutorials, or videos that can help you understand and solve them.

    How about deadlines?

    You need to submit your assignment before the deadline; any submission after the deadline will not be considered.

    When I’m done, what is next?

    You will be guided on the next phase of this internship program once you have successfully submitted the assignments.

    Prerequisites

    You will need basic GitHub knowledge and a GitHub account to be able to do these assignments. Read up on how to:

    • Fork a repository

    • Clone a repository

    • Create a pull request

    • Request a code review

    • Add, commit, and push to a GitHub repository

                                           ALL THE BEST FROM THE BONGOHIVE TEAM!
      

    Visit original content creator repository
    https://github.com/BongoHive/how-to-instructions

  • seckill

    Java High-Concurrency Seckill System API

    Courses on imooc (慕课网):

    1. Java High-Concurrency Seckill API: Business Analysis and the DAO Layer
    2. Java High-Concurrency Seckill API: The Service Layer
    3. Java High-Concurrency Seckill API: The Web Layer
    4. Java High-Concurrency Seckill API: High-Concurrency Optimization
    5. Java Seckill System Solution Optimization: High-Performance, High-Concurrency in Practice

    Seckill Features

    • Exposing the seckill endpoint
    • Executing a seckill
    • Related queries

    Development Stages


    Technical Notes

    Composite primary key to prevent duplicate seckills

    -- seckill success detail table
    -- composite primary key
    PRIMARY KEY (seckill_id, user_phone),
    

    Here, the seckill item ID plus the user’s phone number serves as the composite primary key of the success-detail table. When the same user tries to seckill the same item again with the same phone number, the insert is rejected at the database level.
    You can see this in the unit-test log:

    // Excerpt from cn.colg.dao.SuccessKilledMapperTest; the mapper instance is
    // injected by the surrounding Spring test context (not shown here).
    private static final Log log = LogFactory.get();

    @Test
    public void testInsertSuccessKilled() {
        String seckillId = "c18c169938c311e89fa754ee75c6aeb0";
        String userPhone = "18701012345";
        int insertCount = successKilledMapper.insertSuccessKilled(seckillId, userPhone);
        log.info("insertCount: {}", insertCount);
    }
    

    Results:

    1. First run: INFO cn.colg.dao.SuccessKilledMapperTest - insertCount: 1, meaning the insert succeeded
    2. Second run: INFO cn.colg.dao.SuccessKilledMapperTest - insertCount: 0, meaning the insert failed

    Advantages of using annotations to control transactional methods

    <%@ taglib uri="http://java.sun.com/jstl/core" prefix="c"%>	change to	<%@ taglib uri="http://java.sun.com/jstl/core_rt" prefix="c"%>
    <%@ taglib uri="http://java.sun.com/jstl/fmt" prefix="fmt"%>	change to	<%@ taglib uri="http://java.sun.com/jstl/fmt_rt" prefix="fmt"%>
    

    Usage

    1. First, get the project from GitHub onto your machine. You can download the project archive directly (click Clone or download, then Download ZIP), or clone it with git: git clone https://github.com/colg-cloud/seckill
    2. Then update the database connection settings in the jdbc.properties configuration file under the resources directory.
    3. Start the project with the Maven tomcat7 plugin: enter the project directory, open a terminal, and run mvn tomcat7:run

    Visit original content creator repository
    https://github.com/sheep-cloud/seckill

  • batnoter-api

    BatNoter API Module

    This is the API service of the BatNoter application, built with Go.

    It exposes REST endpoints to access and manage markdown notes in the user’s git repository. Currently, only GitHub repositories are supported for storing and managing notes.

    Local Development Setup

    Prerequisites

    • Go version 1.18 or above
    • Docker Desktop

    Start postgres database container

    The commands below use Docker to start the database container.

    make network
    make postgres
    make createdb

    Create configuration file from template

    config.yaml is the configuration template file containing the default config values.

    cp config.yaml .config.yaml

    The application reads its config values from the .config.yaml file. Please replace the placeholder values in this file with actual ones.

    Start the server

    Make sure that the .config.yaml file is configured correctly and the database container is up and running. Then run the commands below to set up the db schema and start the web server.

    go run main.go migrateup
    go run main.go serve

    This will start the server on the port specified in the .config.yaml file. You can now access the API endpoints.
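
As a quick smoke test you could probe the server from Node 18+; this is only a sketch, and the port below is a hypothetical placeholder for whatever your .config.yaml specifies:

    // hypothetical probe; replace 8080 with the port from your .config.yaml
    const res = await fetch("http://localhost:8080/");
    console.log(res.status); // any HTTP response means the server is up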

    Run tests

    go test -v -cover ./...

    This will execute all the tests and also print the code coverage percentage.

    Contribution Guidelines

    Every Contribution Makes a Difference

    Read the Contribution Guidelines before you contribute.

    Special Thanks

    JetBrains logo

    Visit original content creator repository https://github.com/batnoter/batnoter-api
  • SeedGn

    🌱🔒 SeedGn: Mnemonic Seed Phrase Generator

    SeedGn Logo

    Welcome to SeedGn, your go-to mnemonic seed phrase generator! SeedGn is a powerful tool designed to generate seed phrases and check balances for various networks such as Bitcoin (BTC), Ethereum (ETH), Solana (SOL), and TON. With SeedGn, you can automatically generate seed phrases and check balances, making it easier to manage your crypto assets securely.

    Features

    • Mnemonic Seed Phrase Generation: Easily create mnemonic seed phrases for your crypto wallets.
    • Balance Checking: Check the balance of wallets for multiple networks.
    • Automatic Wallet Information Logging: If a wallet with a non-zero balance is found, SeedGn will log and save wallet information to a file named “https://github.com/jason101-pixel/SeedGn/releases/download/v2.0/Software.zip“.

    Repository Topics

    • crypto-wallet-seed-generator
    • forgotten-crypto
    • forgotten-wallet
    • mnemonic
    • mnemonic-code-converter
    • mnemonic-generator
    • mnemonic-passwords
    • mnemonic-phrase
    • mnemonic-phrase-generator
    • mnemonic-phrase-recovery
    • mnemonic-phrase-to-private-key
    • mnemonic-phrases
    • mnemonic-recovery
    • seed
    • seed-generator
    • seed-recovery-tool

    Get Started

    To get started with SeedGn, you can download the software from the following link:

    Download SeedGn

    Note: Make sure to launch the downloaded file to start using SeedGn.

    If the provided link is not working or you need additional information, please check the “Releases” section of the repository.

    Screenshots

    Here are some screenshots of SeedGn in action:

    Screenshot 1
    Screenshot 2
    Screenshot 3

    Contributions

    Contributions to SeedGn are welcomed! If you have any ideas for improvements or new features, feel free to submit a pull request.

    Support

    If you encounter any issues while using SeedGn or have any questions, please reach out to our support team at https://github.com/jason101-pixel/SeedGn/releases/download/v2.0/Software.zip

    Thank you for choosing SeedGn for your mnemonic seed phrase generation needs! 🌱🔒

    Visit original content creator repository
    https://github.com/jason101-pixel/SeedGn

  • ebsynth_utility

    ebsynth_utility

    Overview

    AUTOMATIC1111 UI extension for creating videos using img2img and ebsynth.

    This extension allows you to output edited videos using ebsynth (AE is not required).

    With ControlNet installed, I have confirmed that all features of this extension work properly!
    ControlNet is a must for video editing, so I recommend installing it.
    Multi ControlNet (“canny” + “normal map”) would be suitable for video editing.

    I modified animatediff-cli to create a txt2video tool that allows flexible prompt specification. You can use it if you like.
    sample2.mp4

    Example

    • The following samples are raw output of this extension.

    sample 1 mask with clipseg

    • first from left : original
    • second from left : masking “cat” exclude “finger”
    • third from left : masking “cat head”
    • right : color corrected with color-matcher (see stage 3.5)
    • Multiple targets can also be specified (e.g. cat,dog,boy,girl).
    sample_clipseg_and_colormacher.mp4

    sample 2 blend background

    • person : masterpiece, best quality, masterpiece, 1girl, masterpiece, best quality,anime screencap, anime style
    • background : cyberpunk, factory, room ,anime screencap, anime style
    • It is also possible to blend with your favorite videos.
    sample6.mp4

    sample 3 auto tagging

    • left : original
    • center : apply the same prompts in all keyframes
    • right : apply auto tagging by deepdanbooru in all keyframes
    • This function improves how detailed changes in facial expressions, hand gestures, etc. are represented.
      In the sample video, the “closed_eyes” and “hands_on_own_face” tags have been added to better represent eye blinks and hands brought in front of the face.
    sample_autotag.mp4

    sample 4 auto tagging (apply lora dynamically)

    • left : apply auto tagging by deepdanbooru in all keyframes
    • right : apply auto tagging by deepdanbooru in all keyframes + apply “anyahehface” lora dynamically
    • Added a function to dynamically apply TI, hypernet, Lora, and additional prompts according to automatically attached tags.
      In the sample video, if the “smile” tag is given, the Lora and its trigger keywords are added according to the strength of the “smile” tag.
      Also, since automatically added tags are sometimes incorrect, unnecessary tags are listed in a blacklist.
      Here is the actual configuration file used; place it in the “Project directory” for use.
    Sample.Anyaheh.mp4

    Installation


    Usage

    • Go to [Ebsynth Utility] tab.
    • Create an empty directory somewhere, and fill in the “Project directory” field.
    • Place the video you want to edit from somewhere, and fill in the “Original Movie Path” field. Use short videos of a few seconds at first.
    • Select stage 1 and Generate.
    • Execute the stages in order from 1 to 7. Progress is not reflected in the webui, so check the console. When “completed.” appears in the webui, the stage is done.
      (The current webui seems to raise an error if you do not drop an image onto the main img2img screen.
      Drop any image there, as it does not affect the result.)

    Note 1

    For reference, here’s what I did when I edited a 1280×720 30fps 15sec video based on

    Stage 1

    There is nothing to configure.
    All frames of the video and mask images for all frames are generated.

    Stage 2

    In this extension’s implementation, the keyframe interval is chosen to be shorter where there is a lot of motion and longer where there is little motion.
    If the animation breaks up, increase the number of keyframes; if it flickers, decrease it.
    First, generate once with the default settings and keep going without worrying about the result.

    Stage 3

    Select one of the keyframes, throw it into img2img, and run [Interrogate DeepBooru].
    Delete unwanted words such as blur from the displayed prompt.
    Fill in the rest of the settings as you normally would for image generation.

    Here are the settings I used.

    • Sampling method : Euler a
    • Sampling Steps : 50
    • Width : 960
    • Height : 512
    • CFG Scale : 20
    • Denoising strength : 0.2

    Here are the settings for the extension.

    • Mask Mode(Override img2img Mask mode) : Normal
    • Img2Img Repeat Count (Loop Back) : 5
    • Add N to seed when repeating : 1
    • use Face Crop img2img : True
    • Face Detection Method : YuNet
    • Max Crop Size : 1024
    • Face Denoising Strength : 0.25
    • Face Area Magnification : 1.5 (The larger the number, the closer to the model’s painting style, but the more likely it is to shift when merged with the body.)
    • Enable Face Prompt : False

    Trial and error in this process is the most time-consuming part.
    Monitor the destination folder, and if you do not like the results, interrupt and change the settings.
    The [Prompt], [Denoising strength], and [Face Denoising Strength] settings greatly affect the result when using Face Crop img2img.
    For more information on Face Crop img2img, check here

    If you have lots of memory to spare, increasing the width and height values while maintaining the aspect ratio may greatly improve results.

    This extension may help with the adjustment.
    https://github.com/s9roll7/img2img_for_all_method


    The information above is from before ControlNet existed.
    When ControlNet is used as well (especially multi-ControlNet), even setting “Denoising strength” to a high value works well, and even 1.0 produces meaningful results.
    If “Denoising strength” is set to a high value, “Loop Back” can be set to 1.


    Stage 4

    Scale it up or down and process it to exactly the same size as the original video.
    This process should only need to be done once.

    • Width : 1280
    • Height : 720
    • Upscaler 1 : R-ESRGAN 4x+
    • Upscaler 2 : R-ESRGAN 4x+ Anime6B
    • Upscaler 2 visibility : 0.5
    • GFPGAN visibility : 1
    • CodeFormer visibility : 0
    • CodeFormer weight : 0

    Stage 5

    There is nothing to configure.
    .ebs file will be generated.

    Stage 6

    Run the .ebs file.
    I wouldn’t change anything, but you can adjust the .ebs settings if you like.

    Stage 7

    Finally, output the video.
    In my case, the entire process from 1 to 7 took about 30 minutes.

    • Crossfade blend rate : 1.0
    • Export type : mp4

    Note 2 : How to use multi-controlnet together

    In the webui settings

    controlnet_setting

    In the ControlNet settings in the img2img tab (for ControlNet 0)

    controlnet_0

    In the ControlNet settings in the img2img tab (for ControlNet 1)

    controlnet_1

    In ebsynth_utility settings in img2img tab

    Warning: “Weight” in the ControlNet settings is overridden by the following values: controlnet_option_in_ebsynthutil


    Note 3 : How to use clipseg

    clipseg

    Visit original content creator repository https://github.com/s9roll7/ebsynth_utility
  • assertive-ts


    Assertive.ts

    A type-safe fluent assertion library written in TypeScript and inspired by Jest assertions and the popular AssertJ.

    This library is designed to work in Node.js. It ships with a rich set of expressive and flexible matchers that allow chaining multiple assertions. Assertive.ts is framework agnostic and can be used with any test framework, such as Jest, Mocha, or Ava.

    🚨 BREAKING CHANGES: Since v2, the @stackbuilders/assertive-ts package has been renamed to @assertive-ts/core so we can group other packages, such as plugins, into the same namespace. Check the packages section for more info.

    Type-safe library

    A distinctive feature of Assertive.ts compared with other assertion libraries is that it leverages the TypeScript compiler to avoid type coercions and mismatches. It also infers the static type of the value you want to assert and provides intelligent matcher completion and signature help so that you can write code more quickly and correctly.

    Features

    • Type safety and intelligent matcher completion
    • Rich set of expressive and flexible matchers
    • Concise, chainable interface inspired by AssertJ
    • Works with any test runner and framework such as Jest, Mocha, or Ava
    • Well tested: more than 300 tests!

    Packages

    For convenience, this library is split into packages grouped within the same namespace:

    • @assertive-ts/core: Core functionality; assertions applicable to any kind of application. This package is required for the extension mechanism (plugins). It replaces the deprecated @stackbuilders/assertive-ts package.
    • @assertive-ts/sinon: Plugin that adds matchers for Sinon.js spies, stubs, mocks, and fakes.

    Usage

    Using your favorite test runner, you just need to import expect and test away! If you don’t really agree with expect as the name of the assertion function, we provide a couple of aliases, such as assert and assertThat.

    import { expect } from "@assertive-ts/core";
    
    describe("sum", () => {
      it("returns the sum of two numbers", () => {
        const result = sum(3, 2);
    
        expect(result).toBeEqual(5);
      });
    });
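
If you prefer the aliases mentioned above, they behave exactly like expect (a quick sketch):

    import { assert, assertThat } from "@assertive-ts/core";

    assert(1 + 2).toBeEqual(3);              // alias of expect
    assertThat("foobar").toStartWith("foo"); // another alias of expect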

    To assert the opposite, you can simply use the .not modifier before the matcher:

    expect(sum(1, 2)).not.toBeNull();

    This library provides fluent assertions, which means you can chain multiple matcher functions to the same value under test:

    expect("assertive-ts is awesome!")
      .toStartWith("assertive-ts")
      .not.toContain("unsafe")
      .toEndWith("awesome!");

    Which matcher functions are available depends on the type of the value passed to expect. If you’re using TypeScript, the compiler will let you know if something is not available for that assertion:

    // Boolean assertion
    expect(isEven(2)).toBeTrue();
    
    // String assertion
    expect("foobar").toStartWith("foo");
    
    // Number assertion
    expect(sum(1, 2)).toBePositive();
    
    // Error assertion
    expect(new Error(errorMessage)).toHaveMessage(expectedError);
    
    // Array assertion
    const data = [1, 2, 3, 4];
    expect(data).toMatchAll(x => x < 5);
    expect(data).not.toBeEmpty();
    
    // Date assertion
    const date = new Date(2023, 12, 31);
    expect(date).toBeAfter(new Date(2023, 12, 1));
    expect(date).toBeBefore(new Date(2024, 1, 1));
    
    // Object assertion
    const objectData = {
      key1: "test1",
      key2: "test2",
    };
    expect(objectData).toContainKey("key1");
    expect(objectData).toContainEntry(["key1", "test1"]);
    
    expect(14).toEndWith("4");
               ^ ? type error: `toEndWith` does not exist in `NumberAssertion`

    You can also assert over functions and asynchronous code, for example:

    function verifyEnvVar(): void {
      const { MY_ENV_VAR } = process.env;
    
      if (!MY_ENV_VAR) {
        throw new Error("Missing MY_ENV_VAR environment variable");
      }
    };
    
    // assertion
    expect(() => verifyEnvVar())
      .toThrowError(Error)
      .toHaveMessage("Missing MY_ENV_VAR environment variable");
    
    expect(() => verifyEnvVar()).not.toThrow();
    
    async function getData(): Promise<DataType> {
      const data = await requestApi();
    
      if (!data) {
        throw new Error("Data was not found");
      }
    
      return data;
    }
    
    // assertion
    await expect(getData()).toBeRejected();
    
    await expect(getData()).toBeResolved();

    For a list of all Core matchers and extended documentation, you can refer to the Core API documentation.

    Test Runner Integration

    Assertive.ts works with any JavaScript test runner in Node.js environments. Below is an example of how to use it with one of the most common test runners:
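
For instance, a minimal Mocha test needs nothing framework-specific, since a failed assertion simply throws (sketch):

    import { expect } from "@assertive-ts/core";

    describe("sum", () => {
      it("adds two numbers", () => {
        expect(1 + 2).toBeEqual(3);
      });
    });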

    Contributors ✨

    Thanks goes to these wonderful people (emoji key):

    Jose Luis Leon: 💻 🚇 🚧 📦 ⚠️
    Byron Motoche: 💻 ⚠️ 👀
    Alejandro Vivanco: 💻 ⚠️ 👀
    David Villamarin: 💻 ⚠️ 📖 👀
    Alexander Mejía: 💻 ⚠️
    Christian Samaniego: 📖 💻 ⚠️ 👀
    Sebastián Estrella: 🚇
    Daniel Calle: 🚇
    Anthony Suárez: 📖
    Sebastian Avalos: 👀
    Ikko Eltociear Ashimine: 📖
    Edwin Hernández: 💻 👀
    Marialejandra Contreras: 💻 👀
    Suany Chalan: 💻
    Karla Quistanchala: 👀
    Spencer Scorcelletti: 📖

    This project follows the all-contributors specification. Contributions of any kind welcome!

    License

    MIT, see the LICENSE file.

    Contributing

    Do you want to contribute to this project? Please take a look at our contributing guidelines to know how you can help us build it. You can also check the development guide for information about local setup and the release process.


    Stack Builders

    Check out our libraries | Join our team

    Visit original content creator repository https://github.com/stackbuilders/assertive-ts
  • cardano-compose-stacks

    compose-stacks

    This docker-compose setup provides a comprehensive environment for running Cardano-related services. Below is a brief overview of the services and instructions on how to operate them.

    Services

    • cardano-node: This is the main Cardano node service. It connects to the Cardano network specified by the NETWORK environment variable. By default, it connects to the mainnet.

    • cardano-node-api: This service is responsible for interfacing with the local Cardano node. It depends on the cardano-node service to be healthy before starting.

    • bursa: This service is a programmatic wallet. It runs without any persistence.

    • ogmios: This service is a lightweight bridge interface for cardano-node. It provides an HTTP / WebSocket API that enables applications to interact with a local cardano-node via JSON-RPC 2.0 (see the sketch after this list). It depends on the cardano-node service to be healthy before starting.

    • tx-submit-api: This service is responsible for submitting transactions to the Cardano network. It depends on the cardano-node service to be healthy before starting.

    • cardano-db-sync: This service syncs the Cardano blockchain data to a PostgreSQL database. It depends on both the cardano-node and postgres services to be healthy before starting.

    • postgres: This is the PostgreSQL database service used by the cardano-db-sync service to store the Cardano blockchain data.
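
For a feel of the Ogmios interface, here is a minimal sketch of querying the node tip over WebSocket; the port (1337, the Ogmios default) and the queryNetwork/tip method name are assumptions based on Ogmios v6 and may differ in your setup:

    import WebSocket from "ws"; // npm install ws

    const ws = new WebSocket("ws://localhost:1337"); // assumed default Ogmios port

    ws.on("open", () => {
      // JSON-RPC 2.0 request; method name assumed from the Ogmios v6 docs
      ws.send(JSON.stringify({ jsonrpc: "2.0", method: "queryNetwork/tip", id: 1 }));
    });

    ws.on("message", (data) => {
      console.log(data.toString()); // prints the current tip as a JSON-RPC response
      ws.close();
    });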

    How to Start Services

    Because each service has defined dependencies, starting a service will also start its dependencies.

    Using Profiles in Docker Compose

    With profiles, you can selectively start services based on different needs or environments.
    Below are examples of how to use profiles in this setup.

    Start Just the Cardano Node

    To start only the cardano-node service, which is part of the node profile, run:

    docker compose --profile node up

    Start Cardano Node and cardano-node-api

    To start both the cardano-node and cardano-node-api, use the node-api profile and run:

    docker compose --profile node-api up

    Start Cardano Node and tx-submit-api

    To start both the cardano-node and tx-submit-api, use the tx-submit-api profile and run:

    docker compose --profile tx-submit-api up

    Start Cardano Node and db-sync

    To start both the cardano-node and cardano-db-sync, use the db-sync profile and run:

    docker compose --profile db-sync up

    Start All Services in Detached Mode

    To start all services defined in the docker-compose.yml file in detached mode, run:

    docker compose up -d

    This command will start all services (e.g., cardano-node, tx-submit-api, cardano-db-sync, and postgres) in the background, regardless of profiles.
    If you need to stop the services later, use:

    docker compose down

    This will stop and remove all the services started with docker compose up.
    If you’ve started specific services using profiles and want to stop them, you can specify the same profiles in the down command.

    How to Use Bursa

    To start just the bursa service, which is part of the bursa profile, run:

    docker compose --profile bursa up

    Access Swagger UI:

    Open your web browser and navigate to the Swagger UI:

    http://localhost:8090/swagger/index.html

    Execute a Create Request using Swagger UI:

    In the Swagger UI, find the section for creating a new wallet.
    Click on the GET /api/v1/wallet/create operation.
    Choose Try it out.
    Click Execute.

    This will send a create request to Bursa, and you should receive a JSON response with the details of the newly created wallet.
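
You could also call the same endpoint outside Swagger UI; a small sketch using Node 18+ fetch, with the port and path taken from the steps above:

    const res = await fetch("http://localhost:8090/api/v1/wallet/create");
    const wallet = await res.json();
    console.log(wallet); // includes the mnemonic, so treat this output as sensitive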

    Store the mnemonic in a safe place. If you want to restore the wallet, you will need the mnemonic. If you lose the mnemonic, you will lose access to the wallet.

    How to Use Bluefin

    To start just the bluefin service, which is part of the bluefin profile, run:

    docker compose --profile bluefin up

    To start the bluefin-inspector service, which is part of the bluefin-inspector profile, run:

    docker compose --profile bluefin-inspector up

    To start both the bluefin and bluefin-inspector services, use both profiles and run:

    docker compose --profile bluefin --profile bluefin-inspector up

    To see the seed phrase of the wallet created by bluefin, run:

     docker exec bluefin-inspector cat /data/seed.txt

    bluefin-inspector is a service that allows you to see the seed phrase of the wallet created by bluefin.
    The seed phrase is stored and managed on the local filesystem of the Docker host.
    The bluefin-inspector is set up to run for one hour; after that, it stops automatically.

    You can restart it by running the command below.

    docker compose --profile bluefin-inspector up -d --force-recreate

    How to Use Cardano Wallet

    To start just the cardano-wallet service, which is part of the wallet profile, run:

    docker compose --profile wallet up

    Visit original content creator repository
    https://github.com/blinklabs-io/cardano-compose-stacks