
llama stack fails to build on WSL. #65

Open

teamblubee opened this issue Sep 12, 2024 · 3 comments

Comments

@teamblubee

Running inside WSL:

PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Cloned the repo and made it as far as trying to build a model. First issue:

llama stack build --name test
usage: llama [-h] {download,model,stack} ...
llama: error: unrecognized arguments: --name test

llama stack build with the documented arguments doesn't work.

After that we just manually go through the configuration, which fails with a missing file error:

(llama-stack) __username__@LAPTOP-I128A4F6:/mnt/c/Users/__username__/Documents/llama-stack$ llama stack build
Enter value for name (required): test
Enter value for distribution (default: local) (required):
Enter value for api_providers (optional):
Enter value for image_type (default: conda) (required):
Traceback (most recent call last):
  File "/mnt/c/Users/__username__/Documents/llama-stack/llama_toolchain/common/exec.py", line 43, in run_with_pty
    process = subprocess.Popen(
              ^^^^^^^^^^^^^^^^^
  File "/home/__username__/anaconda3/envs/llama-stack/lib/python3.12/subprocess.py", line 1026, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/home/__username__/anaconda3/envs/llama-stack/lib/python3.12/subprocess.py", line 1955, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/mnt/c/Users/__username__/Documents/llama-stack/llama_toolchain/core/build_conda_env.sh'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/__username__/anaconda3/envs/llama-stack/bin/llama", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/mnt/c/Users/__username__/Documents/llama-stack/llama_toolchain/cli/llama.py", line 54, in main
    parser.run(args)
  File "/mnt/c/Users/__username__/Documents/llama-stack/llama_toolchain/cli/llama.py", line 48, in run
    args.func(args)
  File "/mnt/c/Users/__username__/Documents/llama-stack/llama_toolchain/cli/stack/build.py", line 161, in _run_stack_build_command
    self._run_stack_build_command_from_build_config(build_config)
  File "/mnt/c/Users/__username__/Documents/llama-stack/llama_toolchain/cli/stack/build.py", line 122, in _run_stack_build_command_from_build_config
    build_package(
  File "/mnt/c/Users/__username__/Documents/llama-stack/llama_toolchain/core/package.py", line 137, in build_package
    return_code = run_with_pty(args)
                  ^^^^^^^^^^^^^^^^^^
  File "/mnt/c/Users/__username__/Documents/llama-stack/llama_toolchain/common/exec.py", line 92, in run_with_pty
    if process.poll() is None:
       ^^^^^^^
UnboundLocalError: cannot access local variable 'process' where it is not associated with a value
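
The UnboundLocalError is only a symptom of the first failure: subprocess.Popen raises before process is ever assigned, and the cleanup path in run_with_pty then references a name that was never bound, masking the real FileNotFoundError. A minimal sketch of the pattern (hypothetical names, not the actual exec.py code):

import subprocess

def run_with_pty_sketch(args):
    try:
        # raises FileNotFoundError before 'process' is bound when the
        # executable (or a script's shebang interpreter) cannot be found
        process = subprocess.Popen(args)
        process.wait()
    finally:
        # if Popen raised, 'process' was never assigned, so this check
        # itself raises UnboundLocalError and hides the original error
        if process.poll() is None:
            process.terminate()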

The referenced file that fails to open is in fact there:

#!/bin/bash

# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

LLAMA_MODELS_DIR=${LLAMA_MODELS_DIR:-}
LLAMA_TOOLCHAIN_DIR=${LLAMA_TOOLCHAIN_DIR:-}
TEST_PYPI_VERSION=${TEST_PYPI_VERSION:-}

if [ -n "$LLAMA_TOOLCHAIN_DIR" ]; then
  echo "Using llama-toolchain-dir=$LLAMA_TOOLCHAIN_DIR"
fi
if [ -n "$LLAMA_MODELS_DIR" ]; then
  echo "Using llama-models-dir=$LLAMA_MODELS_DIR"
fi

set -euo pipefail

if [ "$#" -ne 4 ]; then
  echo "Usage: $0 <distribution_type> <build_name> <pip_dependencies>" >&2
  echo "Example: $0 <distribution_type> mybuild 'numpy pandas scipy'" >&2
  exit 1
fi

distribution_type="$1"
build_name="$2"
env_name="llamastack-$build_name"
config_file="$3"
pip_dependencies="$4"

# Define color codes
RED='\033[0;31m'
GREEN='\033[0;32m'
NC='\033[0m' # No Color

# this is set if we actually create a new conda in which case we need to clean up
ENVNAME=""

SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
source "$SCRIPT_DIR/common.sh"

ensure_conda_env_python310() {
  local env_name="$1"
  local pip_dependencies="$2"
  local python_version="3.10"

  # Check if conda command is available
  if ! command -v conda &>/dev/null; then
    printf "${RED}Error: conda command not found. Is Conda installed and in your PATH?${NC}" >&2
    exit 1
  fi

  # Check if the environment exists
  if conda env list | grep -q "^${env_name} "; then
    printf "Conda environment '${env_name}' exists. Checking Python version...\n"

    # Check Python version in the environment
    current_version=$(conda run -n "${env_name}" python --version 2>&1 | cut -d' ' -f2 | cut -d'.' -f1,2)

    if [ "$current_version" = "$python_version" ]; then
      printf "Environment '${env_name}' already has Python ${python_version}. No action needed.\n"
    else
      printf "Updating environment '${env_name}' to Python ${python_version}...\n"
      conda install -n "${env_name}" python="${python_version}" -y
    fi
  else
    printf "Conda environment '${env_name}' does not exist. Creating with Python ${python_version}...\n"
    conda create -n "${env_name}" python="${python_version}" -y

    ENVNAME="${env_name}"
    # setup_cleanup_handlers
  fi

  eval "$(conda shell.bash hook)"
  conda deactivate && conda activate "${env_name}"

  if [ -n "$TEST_PYPI_VERSION" ]; then
    # these packages are damaged in test-pypi, so install them first
    pip install fastapi libcst
    pip install --extra-index-url https://test.pypi.org/simple/ llama-models==$TEST_PYPI_VERSION llama-toolchain==$TEST_PYPI_VERSION $pip_dependencies
  else
    # Re-installing llama-toolchain in the new conda environment
    if [ -n "$LLAMA_TOOLCHAIN_DIR" ]; then
      if [ ! -d "$LLAMA_TOOLCHAIN_DIR" ]; then
        printf "${RED}Warning: LLAMA_TOOLCHAIN_DIR is set but directory does not exist: $LLAMA_TOOLCHAIN_DIR${NC}\n" >&2
        exit 1
      fi

      printf "Installing from LLAMA_TOOLCHAIN_DIR: $LLAMA_TOOLCHAIN_DIR\n"
      pip install --no-cache-dir -e "$LLAMA_TOOLCHAIN_DIR"
    else
      pip install --no-cache-dir llama-toolchain
    fi

    if [ -n "$LLAMA_MODELS_DIR" ]; then
      if [ ! -d "$LLAMA_MODELS_DIR" ]; then
        printf "${RED}Warning: LLAMA_MODELS_DIR is set but directory does not exist: $LLAMA_MODELS_DIR${NC}\n" >&2
        exit 1
      fi

      printf "Installing from LLAMA_MODELS_DIR: $LLAMA_MODELS_DIR\n"
      pip uninstall -y llama-models
      pip install --no-cache-dir -e "$LLAMA_MODELS_DIR"
    fi

    # Install pip dependencies
    if [ -n "$pip_dependencies" ]; then
      printf "Installing pip dependencies: $pip_dependencies\n"
      pip install $pip_dependencies
    fi
  fi
}

ensure_conda_env_python310 "$env_name" "$pip_dependencies"

printf "${GREEN}Successfully setup conda environment. Configuring build...${NC}\n"

$CONDA_PREFIX/bin/python3 -m llama_toolchain.cli.llama stack configure $config_file

Those errors were a bit misleading as I kept chasing them down. The main issue is that this doesn't build on Windows, and in WSL all the files somehow ended up with Windows (CRLF) line endings. That presumably also explains the FileNotFoundError on a file that exists: a carriage return after the shebang makes the kernel look for an interpreter literally named /bin/bash\r, which does not exist. I had to dos2unix the entire cloned git repo and then those errors went away.
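
A quick way to confirm the line-ending problem from inside WSL (a minimal sketch; scanning only *.sh files is an assumption, other text files can be affected too):

from pathlib import Path

# flag any shell script that git materialized with Windows (CRLF) endings
for path in Path(".").rglob("*.sh"):
    if b"\r\n" in path.read_bytes():
        print(f"{path}: CRLF line endings, run dos2unix on it")

Setting git config core.autocrlf input on the Windows side should also stop git from converting line endings on future checkouts.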

@yanxi0830
Contributor

Re the first issue, we have simplified the CLI to only require llama stack build. Wondering if you could link the documentation where you saw the llama stack build --name test command. Wanted to make it consistent with the actual implementation.

llama stack build -h
usage: llama stack build [-h] [--config CONFIG]

Build a Llama stack container

options:
  -h, --help       show this help message and exit
  --config CONFIG  Path to a config file to use for the build

Re the second issue, the error message suggests it may be caused by

process = subprocess.Popen(

failing when running under WSL.
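
Independent of the WSL root cause, run_with_pty could bind process before the try block so that cleanup never masks the original exception. A minimal sketch of that guard (hypothetical names, not the actual exec.py):

import subprocess

def run_with_pty_guarded(args):
    # bind the name up front so the cleanup below can never hit an
    # unbound local; the original Popen exception then propagates
    process = None
    try:
        process = subprocess.Popen(args)
        return process.wait()
    finally:
        if process is not None and process.poll() is None:
            process.terminate()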

@teamblubee
Author

The instructions still show the old command line arguments here:
https://github.com/meta-llama/llama-stack/blob/main/docs/cli_reference.md#step-32-build-a-distribution

specifically here:

Step 3.2: Build a distribution
Let's imagine you are working with a 8B-Instruct model. The following command will build a package (in the form of a Conda environment) and configure it. As part of the configuration, you will be asked for some inputs (model_id, max_seq_len, etc.) Since we are working with a 8B model, we will name our build 8b-instruct to help us remember the config.

llama stack build local --name 8b-instruct

The WSL issue was fixed with dos2unix. Somehow, between git pull on Windows and going into WSL, the files picked up the wrong line endings.

If you can update the above commands, we can call this issue closed.

@yanxi0830
Contributor

The instructions that still shows using command line arguments here: https://github.com/meta-llama/llama-stack/blob/main/docs/cli_reference.md#step-32-build-a-distribution

Thanks! Could you run git pull and check again? It has been updated!
